# Proofs of Equality

The source code for this module: PartI/EqualityProofs.ard

The source code for the exercises: PartI/EqualityProofsEx.ard

In this module we give a number of examples, demonstrating techniques used in more advanced proofs of equalities.
We introduce a convention for writing more readable equality proofs called *equational reasoning*.
We argue that transport is insufficient in some cases and introduce its generalization
called *eliminator J*.

We supplement this discussion of equality with remarks on definitions of predicates.

# Commutativity of +

Let’s apply the notions from the previous module and prove commutativity of + : Nat -> Nat -> Nat. Note that transport satisfies the property transport B idp b ==> b:

```
-- recall the definition of transport:
\func transport {A : \Type} (B : A -> \Type) {a a' : A} (p : a = a') (b : B a)
  => coe (\lam i => B (p @ i)) b right

-- indeed, coe (\lam i => B (idp @ i)) b right ==>
-- ==> coe (\lam i => B a) b right ==> b
```

It will be more convenient to have transitivity of =, which we proved in the previous module, in infix form:

```
\func \infixr 5 *> {A : \Type} {a a' a'' : A} (p : a = a') (q : a' = a'') : a = a''
  => transport (\lam x => a = x) q p
```

Function *> thus satisfies the property p *> idp ==> p. We can also define transitivity <* such that idp <* p ==> p, but we do not need it yet.
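Although we do not need <* yet, here is one possible definition (our own sketch, not taken from the module’s source), which mirrors the definition of *> but transports along inv p; the computation rule idp <* p ==> p then holds because inv idp evaluates to idp:

```
-- a hypothetical definition of <*, assuming inv from the previous module
\func \infixl 5 <* {A : \Type} {a a' a'' : A} (p : a = a') (q : a' = a'') : a = a''
  => transport (\lam x => x = a'') (inv p) q
```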

The commutativity can now be proved as follows:

```
\func +-comm (n m : Nat) : n + m = m + n
  | 0, 0 => idp
  | suc n, 0 => pmap suc (+-comm n 0)
  | 0, suc m => pmap suc (+-comm 0 m)
  | suc n, suc m => pmap suc (+-comm (suc n) m *> pmap suc (inv (+-comm n m)) *> +-comm n (suc m))
```

# Equational reasoning, proof of +-comm rewritten

There is one clever trick that allows writing sequences of equality proofs, joined by transitivity, in a more readable form. Namely, one can define operators `==<`, `>==` and `qed` such that instead of a chain of equality proofs `p1 *> … *> pn` it is possible to write an extended chain `` a1 ==< p1 >== a2 ==< p2 >== a3 … an `qed ``, where we also specify the objects whose equality is proven at each step (in other words, `pi : ai = a(i+1)`).
The proof +-comm rewritten in this way looks as follows:

```
\func +-comm' (n m : Nat) : n + m = m + n
  | 0, 0 => idp
  | suc n, 0 => pmap suc (+-comm' n 0)
  | 0, suc m => pmap suc (+-comm' 0 m)
  | suc n, suc m => pmap suc (
      suc n + m   ==< +-comm' (suc n) m >==
      suc (m + n) ==< pmap suc (inv (+-comm' n m)) >==
      suc (n + m) ==< +-comm' n (suc m) >==
      suc m + n   `qed
    )

-- recall that:
-- x `f == f x -- postfix notation
-- x `f` y == f x y -- infix notation
```

These operators can be defined as follows:

```
\func \infix 2 qed {A : \Type} (a : A) : a = a => idp
\func \infixr 1 >== {A : \Type} {a a' a'' : A} (p : a = a') (q : a' = a'') => p *> q
\func \infix 2 ==< {A : \Type} (a : A) {a' : A} (p : a = a') => p
```
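
As a minimal illustration of how these operators fit together, the following function (the name chain-example is ours, introduced for illustration) re-derives transitivity as an equational-reasoning chain:

```
\func chain-example {A : \Type} {a b c : A} (p : a = b) (q : b = c) : a = c
  => a ==< p >==
     b ==< q >==
     c `qed
```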

# J operator

Recall from Eliminators that elimination principles for a data type D say that functions from D are determined by their values on the constructors of D, that is that D is “generated” by its constructors.

Similarly, we can say that the identity type = is “generated” by reflexivity idp: the non-dependent version of the eliminator says that if we specify the value of a function a = x -> B x on idp, that is, some value b : B a, then the function is uniquely determined:

```
\func transport'
  {A : \Type}
  (B : A -> \Type)
  {a a' : A} (p : a = a')
  (b : B a)
  : B a'
  => coe (\lam i => B (p @ i)) b right
```

The J operator is a dependent version of this, stating that specifying a value on idp is enough to determine a function \Pi (x : A) (p : a = x) -> B x p:

```
\func J
  {A : \Type} {a : A}
  (B : \Pi (a' : A) -> a = a' -> \Type)
  (b : B a idp)
  {a' : A} (p : a = a')
  : B a' p
  -- the details of the definition are not important for now
  => coe (\lam i => B (p @ i) (psqueeze p i)) b right
  \where
    \func psqueeze {A : \Type} {a a' : A} (p : a = a') (i : I) : a = p @ i
      => path (\lam j => p @ I.squeeze i j)
```

Note that B a' p above depends on both a' and p. If we fix a' to be equal to a in the definition above, then we obtain the *K eliminator*:

```
\func K {A : \Type} {a : A} (B : a = a -> \Type)
  (b : B idp)
  (p : a = a) : B p => {?}
```

This eliminator is equivalent to the statement that every element of a = a is idp.
It may seem natural at first sight to add it as an axiom in order to simplify things by making proofs of equalities
unique (as it implies that p = p' for any p, p' : a = a'), but it is actually
important that these proofs are *not unique*. This issue will be discussed later.

It is not convenient to use the J operator directly. For this reason, Arend provides a pattern matching principle corresponding to J: a parameter of type a = a' can be matched with idp, making a and a' equivalent. For example, transport can be defined as follows:

```
\func transport {A : \Type} (B : A -> \Type) {a a' : A} (p : a = a') (b : B a) : B a' \elim p
  | idp => b
```

Similarly, J can be defined in this way. See Prelude for more information about this pattern matching principle.
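
For instance, assuming this pattern matching principle, J itself can be sketched as follows (the name J' is ours, chosen to avoid clashing with the definition above):

```
\func J'
  {A : \Type} {a : A}
  (B : \Pi (a' : A) -> a = a' -> \Type)
  (b : B a idp)
  {a' : A} (p : a = a')
  : B a' p \elim p
  | idp => b
```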

The pattern matching on idp may look confusing at first sight. On the one hand, we claim that a = a'
can have more than one element, but on the other hand, we allow pattern matching on idp, which suggests that
a = a' has just one element. This seeming contradiction is resolved by observing that in reality we pattern
match simultaneously on *two* variables, p : a = a' and a', instead of just one
p : a = a'. In other words, we *do not* match on the type a = a', but match on the
type \Sigma (a' : A) (a = a') of pairs instead, which is a one-element type.
Precisely this fact is the source of all the restrictions discussed in Prelude.

The J operator has an alternative form, which we denote Jalt:

```
\func Jalt {A : \Type} (B : \Pi (a a' : A) -> a = a' -> \Type)
  (b : \Pi (a : A) -> B a a idp)
  {a a' : A} (p : a = a') : B a a' p => {?}
```

**Exercise 1:** Prove that J and Jalt are equivalent, i.e. define J in terms of Jalt and vice versa.

# Associativity of append for vectors

Let us prove a statement that essentially requires J. A good example of such a statement is associativity of the append operation v++ for vectors. Let’s recall the definitions of the data type Vec and the function v++:

```
\data Vec (A : \Type) (n : Nat) \elim n
  | zero => vnil
  | suc n => vcons A (Vec A n)

\func \infixl 4 v++ {A : \Type} {n m : Nat} (xs : Vec A n) (ys : Vec A m) : Vec A (m + n) \elim n, xs
  | 0, vnil => ys
  | suc n, vcons x xs => vcons x (xs v++ ys)
```

Already the statement of associativity of v++ requires some work, since the types of (xs v++ ys) v++ zs and xs v++ (ys v++ zs) do not coincide: the types are Vec A (k + (m + n)) and Vec A ((k + m) + n) respectively. We can overcome this by using transport, since it allows us to translate an element of Vec A x to an element of Vec A y whenever x = y:

```
\func v++-assoc {A : \Type} {n m k : Nat} (xs : Vec A n) (ys : Vec A m) (zs : Vec A k)
  : (xs v++ ys) v++ zs = transport (Vec A) (+-assoc k m n) (xs v++ (ys v++ zs)) \elim n, xs
  | 0, vnil => idp
  | suc n, vcons x xs =>
    pmap (vcons x) (v++-assoc xs ys zs) *>
    inv (transport-vcons-comm (+-assoc k m n) x (xs v++ (ys v++ zs)))
  \where
    -- transport commutes with all constructors
    -- here is the proof that it commutes with vcons
    \func transport-vcons-comm {A : \Type} {n m : Nat} (p : n = m) (x : A) (xs : Vec A n)
      : transport (Vec A) (pmap suc p) (vcons x xs) = vcons x (transport (Vec A) p xs)
      | idp, _, _ => idp
    {- This function can be defined with J as follows:
       => J (\lam m' p' => transport (Vec A) (pmap suc p') (vcons x xs) = vcons x (transport (Vec A) p' xs))
            idp
            p
    -}
```

Let us take a closer look at what is going on in this proof. First, we apply the congruence pmap (vcons x) to the induction hypothesis v++-assoc xs ys zs so that we obtain the equality:

`vcons x (xs v++ ys) v++ zs = vcons x (transport (Vec A) (+-assoc k m n) (xs v++ (ys v++ zs))).`

This corresponds to the following three lines of the proof:

```
| 0, vnil => idp
| suc n, vcons x xs =>
  pmap (vcons x) (v++-assoc xs ys zs)
```

The left side of the above equality is precisely what we need since ((vcons x xs) v++ ys) v++ zs evaluates to vcons x ((xs v++ ys) v++ zs). So, we should compose this proof with the proof of

`vcons x (transport (Vec A) (+-assoc k m n) (xs v++ (ys v++ zs))) = transport (Vec A) (+-assoc k m n) (vcons x (xs v++ (ys v++ zs)))`

And this is precisely the commutativity of transport with vcons, which we prove in the transport-vcons-comm lemma. Note that it is important that we generalize the statement and prove the commutativity not only for +-assoc k m n but for all e : Nat satisfying p : k + m + n = e (otherwise, we would not be able to use pattern matching or the J operator to prove this statement).

**Exercise 2:** Prove that vnil is identity for the operation v++.

# Predicates

A predicate on a type A is by definition a function from A to a type of propositions. In particular, in propositions-as-types logic, predicates are functions A -> \Type.

There are several ways to define a predicate over a type A:

- By combining existing predicates (for example, equality) by means of logical connectives. For example, isEven can be defined as \lam n => \Sigma (k : Nat) (n = 2 * k).
- By recursion, but only if A is a data type.
- By induction.
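
For example, the first approach can be spelled out as follows (the names isEven and four-isEven are ours, introduced for illustration); the witness for 4 is the pair of k = 2 and a proof that 4 = 2 * 2, which holds by computation:

```
-- isEven as a propositions-as-types predicate (illustrative names)
\func isEven (n : Nat) => \Sigma (k : Nat) (n = 2 * k)

-- 4 is even: the witness is k = 2, and 4 = 2 * 2 holds by computation
\func four-isEven : isEven 4 => (2, idp)
```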

We now illustrate all these ways in the case of the predicate <= for natural numbers.

```
-- Definition of <= via equality.
\func LessOrEq''' (n m : Nat) => \Sigma (k : Nat) (k + n = m)

-- Recursive definition of <=.
\func lessOrEq (n m : Nat) : \Type
  | 0, _ => Unit
  | suc _, 0 => Empty
  | suc n, suc m => lessOrEq n m

-- First inductive definition of <=.
\data LessOrEq (n m : Nat) \with
  | 0, m => z<=n
  | suc n, suc m => s<=s (LessOrEq n m)

\func test11 : LessOrEq 0 100 => z<=n

\func test12 : LessOrEq 3 67 => s<=s (s<=s (s<=s z<=n))

-- Of course, there is no proof of 1 <= 0.
-- \func test10 : LessOrEq 1 0 => ...

-- Second inductive definition of <=.
-- This is a modification of the first inductive definition,
-- where we avoid constructor patterns.
\data LessOrEq' (n m : Nat)
  | z<=n' (n = 0)
  | s<=s' {n' m' : Nat} (n = suc n') (m = suc m') (LessOrEq' n' m')
```

There are usually many ways to choose constructors for inductive definitions. To define a predicate inductively, we need to come up with a set of axioms that characterise the predicate.

For example, for LessOrEq we chose two axioms: 1) 0 <= m for all m, 2) if n <= m, then suc n <= suc m for all n, m. Every inequality can be derived from these axioms. But this is not the only way to characterise LessOrEq. For example, we could use the following two axioms: 1) n <= n for all n, 2) if n <= m, then n <= suc m for all n, m:

```
-- Third inductive definition of <=.
\data LessOrEq'' (n m : Nat) \elim m
  | suc m => <=-step (LessOrEq'' n m)
  | m => <=-refl (n = m)
```
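
For instance, a derivation of 1 <= 3 in this definition (the name test13 is ours, introduced for illustration) applies <=-step twice on top of <=-refl:

```
-- 1 <= 3: step down from 3 to 1, then close with reflexivity
\func test13 : LessOrEq'' 1 3 => <=-step (<=-step (<=-refl idp))
```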

**Exercise 3:** Prove that all definitions of <= given above are equivalent.

**Exercise 4:** Define the membership predicate In for lists.

**Exercise 5:** Define the reflexive and transitive closure of a relation.
That is, ReflTransClosure R is the minimal reflexive and transitive relation containing R.

**Exercise 6:** Prove that if R is already reflexive and transitive, then ReflTransClosure R
is equivalent to R.

**Exercise 7:** Define the predicate xs <= ys for lists, which says “the list xs is a
sublist of ys”.

**Exercise 8:** Prove that filter xs <= xs for any list xs.