# Universes, Induction, Specifications

The source code for this module: universes.zip.

In this module we take a closer look at the structure of the hierarchy of universes and explain how polymorphism works.

We discuss more advanced forms of induction and recursion: induction-recursion and recursion-recursion.

We conclude with remarks on writing specifications for functions.

# Hierarchies of universes, polymorphism

As we mentioned earlier, there is no actual type of all types in Arend. The type \Type behaves pretty much like such a type, but not quite, and the difference is precisely that \Type cannot be used for contradictory circular definitions. This is so because \Type actually hides the hierarchy of universes \Type0, \Type1, …, as we explain below.

The type \Type n, where the natural number n is called *the predicative level*
of the universe, contains all types that either do not refer to universes at all
or refer only to universes \Type i of lower predicative level i < n.

`\func tt : \Type2 => \Type0 -> \Type1`

In order to see how \Type works let us consider a polymorphic definition:

`\func id (A : \Type) (a : A) => a`

This definition is implicitly polymorphic in the level of the universe of A, that is, it has an implicit parameter \lp for the level. The function id above can equivalently be defined as follows:

`\func id (A : \Type \lp) (a : A) => a`

Whenever we use \Type without explicit level, the typechecker infers the minimal appropriate level. Consider the following example:

`\func type : \Type => \Type`

The typechecker will infer the level \lp for \Type in the body of the function type. Consequently, the typechecker infers \suc \lp for the level of the universe in the result type of type since n = \suc \lp is minimal such that \Type \lp : \Type n. Thus, the function type can be equivalently rewritten as follows:

`\func type : \Type (\suc \lp) => \Type \lp`

There are two operations that allow us to form level expressions out of atomic ones: \suc l and \max l1 l2. Atomic level expressions, as we have seen, are just nonnegative numerals and the polymorphic parameter \lp. Therefore any level expression is equivalent either to a constant or to an expression of the form \max (\lp + c) c', where c and c' are constants and \lp + c is the c-fold application of \suc to \lp.

The level of a Pi-type or other type forming construction is the maximal level among all types contained in this construction:

`\func test0 : \Type (\max (\suc (\suc \lp)) 4) => \Type (\max \lp 3) -> \Type (\suc \lp)`

We now illustrate the behaviour of universes and polymorphic definitions with a number of examples:

```
\func test1 => id Nat 0
\func test2 => id \Type0 Nat
\func test3 => id (\Type0 -> \Type1) (\lam X => X)
\func test4 => id _ id
\func test4' => id (\Pi (A : \Type) -> A -> A) id
```

When invoking a definition, it is possible to specify the value of its polymorphic level parameter \lp. If the value is not a numeral, it can be passed as an ordinary first parameter:

`\func test5 => id (\suc \lp) (\Type \lp) Nat`

Alternatively, it can be done using the construct \level p h, where p is the level (we ignore h for now). This is useful when the value is a numeral.

```
\func test5' => id (\level (\suc \lp) _) (\Type \lp) Nat
\func test6 => id (\level 2 _) \Type1 \Type0
```

If a definition is invoked without an explicit value for its level, the level is inferred by the typechecker. In most cases there is no need to specify the level explicitly.

The level of the universe of a data definition is the maximum among the levels of parameters of its constructors. Levels of parameters of the definition do not matter.

```
\data Magma (A : \Type)
  | con (A -> A -> A)

\data MagmaEx (A : \Type) (B : \Type5)
  | con (A -> A -> A)

\func test7 : \Type \lp => MagmaEx \lp Nat \Type4
```

The level of a class or a record is determined by its *non-implemented* fields and parameters (recall that parameters are
just fields). For example, consider the definition of Magma as a class:

```
\class Magma (A : \Type)
  | \infixl 6 ** : A -> A -> A
```

The level of Magma \lp is \Type (\suc \lp) since the definition of Magma \lp contains \Type \lp. But the level of Magma \lp Nat is just \Type0 since non-implemented fields of Magma \lp Nat do not contain universes.

```
\func test8 : \Type (\suc \lp) => Magma \lp
\func test9 : \Type \lp => Magma \lp Nat
```

Consider one more example, the class Functor of functors:

```
\class Functor (F : \Type -> \Type)
  | fmap {A B : \Type} : (A -> B) -> F A -> F B
```

The level of Functor will be \Type (\suc \lp) even if the field F is implemented since fmap also refers to \Type \lp.

```
\data Maybe (A : \Type) | nothing | just A
\func test10 : \Type (\suc \lp) => Functor \lp Maybe
```

**Exercise 1:** Calculate the levels in each of the invocations of id'' below.
Specify explicitly the result types of all the functions idTest*.

```
\func id'' {A : \Type} (a : A) => a
\func idTest1 => id'' (id'' id)
\func idTest2 => id'' Maybe
\func idTest3 => id'' Functor
\func idTest4 => id'' (Functor Maybe)
\func idTest5 (f : \Pi {A B : \Type} -> (A -> B) -> Maybe A -> Maybe B) => id'' (Functor Maybe f)
```

# Induction principles

We have already seen that data types have canonical eliminators associated with them and that
non-dependent and dependent eliminators correspond to recursion and induction principles
respectively. It is also possible to define custom eliminators and, thus, custom induction
principles that in some cases are more convenient to use. For example, we can define an
induction principle for natural numbers that allows us to use the induction hypothesis for *any*
number less than the current one, not just for its immediate predecessor:

```
\func Nat-ind (E : Nat -> \Type)
  (r : \Pi (n : Nat) -> (\Pi (k : Nat) -> T (k < n) -> E k) -> E n)
  (n : Nat) : E n => {?} -- prove this as an exercise
```

**Exercise 2:** Define div via Nat-ind.

`\func div (n k : Nat) (p : T (0 < k)) : Nat => {?}`

**Exercise 3:** Prove the following induction principle for lists:

```
\func List-ind
  {A : \Type}
  (E : List A -> \Type)
  (r : \Pi (xs : List A) -> (\Pi (ys : List A) -> T (length ys < length xs) -> E ys) -> E xs)
  (xs : List A) : E xs => {?}
```

# Induction-recursion

Recursion-recursion is a principle that allows us to define mutually recursive functions. For example, consider the functions isEven and isOdd:

```
\func isEven (n : Nat) : Bool
  | 0 => true
  | suc n => isOdd n

\func isOdd (n : Nat) : Bool
  | 0 => false
  | suc n => isEven n
```

Induction-induction is a principle that allows us to define mutually recursive data types. If a data type is part of a mutually inductive definition, its type must be specified explicitly. Consider the types IsEven and IsOdd:

```
\data IsEven (n : Nat) : \Type \with
  | 0 => zero-isEven
  | suc n => suc-isEven (IsOdd n)

\data IsOdd (n : Nat) : \Type \with
  | suc n => suc-isOdd (IsEven n)
```

Induction-recursion is a principle that allows us to define a data type and a function that are mutually recursive. For example, this construct makes it possible to define universes of types as data types. We explain how this can be done below.

# Universes via induction-recursion

Let us define a custom universe containing some custom set of types:

```
\data Type
  | nat
  | list Type
  | arr Type Type
```

The type Type can be thought of as a type containing codes of types. We should also define a function that realizes these codes as actual types:

```
\func El (t : Type) : \Type0 \elim t
  | nat => Nat
  | list t => List (El t)
  | arr t1 t2 => El t1 -> El t2

\func idc (t : Type) (x : El t) : El t => x
```
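Since El computes a code to an actual type, we can apply idc at concrete codes and the decoded types appear on the term level. A small sketch (assuming the usual List constructors nil and cons):

```
-- 'El (arr nat nat)' computes to 'Nat -> Nat', so a lambda on Nat is accepted.
\func elTest1 : El (arr nat nat) => \lam n => n

-- 'idc' applied at the code 'list nat' is the identity function on 'List Nat'.
\func elTest2 : El (list nat) => idc (list nat) (cons 0 nil)
```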

The universe Type contains only non-dependent types. If we also want to include dependent types in the universe, we should use induction-recursion:

```
\data Type' : \Set0 -- \Set0 is almost the same as \Type0, we will discuss the difference later
  | nat'
  | list' Type'
  | pi' (a : Type') (El' a -> Type')

\func El' (t : Type') : \Type0 \elim t
  | nat' => Nat
  | list' t => List (El' t)
  | pi' t1 t2 => \Pi (a : El' t1) -> El' (t2 a)
```
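Note how the second argument of pi' may use El' on the first one: the codomain code can depend on an element of the decoded domain. A small sketch of a dependent code (assuming nil from List):

```
-- A code for the type \Pi (n : Nat) -> List Nat; here the codomain code
-- is constant, but it is allowed to inspect the Nat argument.
\func depCode : Type' => pi' nat' (\lam _ => list' nat')

-- 'El' depCode' computes to '\Pi (a : Nat) -> List Nat'.
\func el'Test : El' depCode => \lam _ => nil
```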

# Completeness of specifications

A specification for an element of type A is simply a predicate P : A -> \Type describing the property of the element that we want to prove.

- A specification P is *correct* for a : A if P a is provable.
- A specification P is *complete* for a : A if P x implies x = a for all x : A.

For example, assume we want to write a specification for a function fac : Nat -> Nat that computes the factorial of a number:

```
-- P1 is a correct specification for 'fac', but it is incomplete.
\func P1 (f : Nat -> Nat) => f 3 = 6

-- P2 is a complete specification for 'fac', but it is not correct.
\func P2 (f : Nat -> Nat) => Empty

-- P3 is a correct and complete specification for 'fac'.
\func P3 (f : Nat -> Nat) => \Sigma (f 0 = 1) (\Pi (n : Nat) -> f (suc n) = suc n * f n)
```

Another example of a correct and complete specification for a sort function:

```
\func P {A : TotalPreorder} (f : List A -> List A) =>
  \Pi (xs : List A) -> \Sigma (isSorted (f xs)) (isPerm (f xs) xs)
-- where 'isSorted xs' is true iff 'xs' is sorted and
-- 'isPerm xs ys' is true iff 'xs' is a permutation of 'ys'.
```

Of course, specifications must always be correct, but one may opt to work with incomplete specifications since sometimes it is too hard to write and prove a complete one. Nevertheless, it is useful to understand when a specification is complete. A necessary and sufficient condition of completeness for correct specifications can be formulated as follows:

`\Pi (x y : A) -> P x -> P y -> x = y`
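Indeed, if P is correct for a, that is, we have a witness c : P a, then this condition immediately yields completeness. A one-line sketch (the name criterion-complete is ours):

```
-- If P is correct for 'a' (witnessed by 'c') and satisfies the condition above
-- (witnessed by 'u'), then P is complete for 'a'.
\func criterion-complete {A : \Type} (P : A -> \Type) (a : A) (c : P a)
  (u : \Pi (x y : A) -> P x -> P y -> x = y)
  (x : A) (px : P x) : x = a => u x a px c
```

Conversely, if P is correct and complete for a, then any x and y satisfying P are both equal to a and, hence, to each other.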

**Exercise 4:** Implement the function filter and prove that it is correct, that is, that the following holds:
a) filter p xs is a sublist of xs;
b) all elements of filter p xs satisfy the predicate p;
c) any sublist of xs with property (b) is a sublist of filter p xs.

# Correctness of Insertion Sort

We now finally prove the correctness of the insertion sort algorithm, defined in Simple Examples:

```
\func sort {A : TotalPreorder} (xs : List A) : List A
  | nil => nil
  | cons a xs => insert a (sort xs)
  \where
    \func insert {A : TotalPreorder} (a : A) (xs : List A) : List A \elim xs
      | nil => cons a nil
      | cons x xs => \case totality x a \with {
        | inl _ => cons x (insert a xs)
        | inr _ => cons a (cons x xs)
      }
```

The full specification of the sort function consists of two properties:

- sort xs is a sorted list.
- sort xs is a permutation of xs.

We begin with the latter property. It can be defined in several different (but equivalent) ways. We will use an interesting option: instead of giving its definition from the start, we begin with an empty data type and add more constructors as needed:

`\data Perm {A : \Type} (xs ys : List A)`

Of course, the proof proceeds by induction on the list:

```
\func sort-perm {A : TotalPreorder} (xs : List A) : Perm xs (sort xs) \elim xs
  | nil => {?}
  | cons a l => {?}
```

In the first goal, we need to show that the empty list is a permutation of itself. To do this, we need to add the first constructor to Perm:

```
\data Perm {A : \Type} (xs ys : List A) \elim xs, ys
  | nil, nil => perm-nil
```

In the second goal, we need to show that cons a l is a permutation of insert a (sort l). Clearly, we need to use the induction hypothesis, and we also need to prove some lemma about the insert function. There is an obvious property of this function related to permutations:

`\func insert-perm {A : TotalPreorder} (a : A) (xs : List A) : Perm (cons a xs) (sort.insert a xs) => {?}`

This lemma implies that insert a (sort l) is a permutation of cons a (sort l). We can combine this property with the induction hypothesis to obtain the required result. To do this, we need to know the following facts:

- Permutations are closed under cons a so that we can conclude that cons a (sort l) is a permutation of cons a l,
- Perm is a transitive relation so that we can combine two proofs.

We add two new constructors to Perm which reflect these properties. At this point, our proof looks like this:

```
\data Perm {A : \Type} (xs ys : List A) \elim xs, ys
  | nil, nil => perm-nil
  | cons x xs, cons y ys => perm-cons (x = y) (Perm xs ys)
  | xs, ys => perm-trans {zs : List A} (Perm xs zs) (Perm zs ys)

\func sort-perm {A : TotalPreorder} (xs : List A) : Perm xs (sort xs) \elim xs
  | nil => perm-nil
  | cons a l => perm-trans (perm-cons idp (sort-perm l)) (insert-perm a (sort l))
  \where
    \func insert-perm {A : TotalPreorder} (a : A) (xs : List A) : Perm (cons a xs) (sort.insert a xs) => {?}
```

The proof of insert-perm also proceeds by induction on the list. The nil case is easy: we just need to show that cons a nil is a permutation of itself. In the cons case, we need to decide whether b <= a or a <= b. We can do this by using \case:

```
\func insert-perm {A : TotalPreorder} (a : A) (xs : List A) : Perm (cons a xs) (sort.insert a xs) \elim xs
  | nil => perm-cons idp perm-nil
  | cons b l => \case totality b a \as r \return
      Perm (cons a (cons b l)) (\case r \with {
        | inl _ => cons b (sort.insert a l)
        | inr _ => cons a (cons b l)
      }) \with {
    | inl b<=a => {?}
    | inr a<=b => {?}
  }
```

The second subgoal is easy: we need to prove that cons a (cons b l) is a permutation of itself. We can show that Perm is reflexive using constructors that we already added:

```
\func perm-refl {A : \Type} {xs : List A} : Perm xs xs \elim xs
  | nil => perm-nil
  | cons a l => perm-cons idp perm-refl
```

In the first subgoal, we need to prove that cons a (cons b l) is a permutation of cons b (insert a l). By the induction hypothesis and transitivity of Perm, we can reduce this problem to the problem of showing that cons a (cons b l) is a permutation of cons b (cons a l). To prove this, we need to add yet another constructor to Perm:

```
\data Perm {A : \Type} (xs ys : List A) \elim xs, ys
  | nil, nil => perm-nil
  | cons x xs, cons y ys => perm-cons (x = y) (Perm xs ys)
  | xs, ys => perm-trans {zs : List A} (Perm xs zs) (Perm zs ys)
  | cons x (cons x' xs), cons y (cons y' ys) => perm-perm (x = y') (x' = y) (xs = ys)
```

Now, we can finish the proof of insert-perm:

```
\func insert-perm {A : TotalPreorder} (a : A) (xs : List A) : Perm (cons a xs) (sort.insert a xs) \elim xs
  | nil => perm-cons idp perm-nil
  | cons b l => \case totality b a \as r \return
      Perm (cons a (cons b l)) (\case r \with {
        | inl _ => cons b (sort.insert a l)
        | inr _ => cons a (cons b l)
      }) \with {
    | inl b<=a => perm-trans (perm-perm idp idp idp) (perm-cons idp (insert-perm a l))
    | inr a<=b => perm-refl
  }
```

Our definition of Perm might look like cheating, but it is a perfectly valid definition of this predicate. It might happen that we already have some fixed definition Perm' of this predicate and we want to prove that our sort function satisfies the required property with respect to Perm'. In this case, we just need to prove that Perm xs ys implies Perm' xs ys. This is easy to do: we just need to show that Perm' satisfies the four properties corresponding to the four constructors of Perm. We can even omit the intermediate step and replace Perm with Perm' and the constructors of Perm with the proofs of the corresponding properties.
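A sketch of such a conversion, where Perm' stands for the hypothetical fixed definition and each hole is to be filled with the proof of the corresponding property of Perm':

```
\func perm-conv {A : \Type} {xs ys : List A} (p : Perm xs ys) : Perm' xs ys \elim xs, ys, p
  | nil, nil, perm-nil => {?} -- proof of Perm' nil nil
  | cons x xs, cons y ys, perm-cons q p => {?} -- closure of Perm' under cons
  | xs, ys, perm-trans p1 p2 => {?} -- transitivity of Perm'
  | cons x (cons x' xs), cons y (cons y' ys), perm-perm q1 q2 q3 => {?} -- swapping the two head elements
```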

Now, let us prove the remaining property of the sort function. We will define the predicate IsSorted inductively. The empty list is always sorted, and cons x xs is sorted if xs is sorted and x is less than or equal to the head of xs. The problem is that xs might be empty, in which case the last condition does not make sense. We can consider three cases instead of two and define IsSorted as follows:

```
\data IsSorted {A : Preorder} (xs : List A) \with
  | nil => nil-sorted
  | cons _ nil => single-sorted
  | cons x (cons y _ \as xs) => cons-sorted (x <= y) (IsSorted xs)
```

It turns out that this definition is not very convenient because we need to consider more cases when proving things about sorted lists. There is another option: we can define a head function that returns some default value when the list is empty:

```
\func head {A : \Type} (def : A) (xs : List A) : A \elim xs
  | nil => def
  | cons a _ => a
```

Now, we can define the predicate IsSorted as follows:

```
\data IsSorted {A : Preorder} (xs : List A) \elim xs
  | nil => nil-sorted
  | cons x xs => cons-sorted (x <= head x xs) (IsSorted xs)
```

If xs is not empty, then the condition x <= head x xs asserts that x is less than or equal to the head of xs. If xs is empty, then the condition x <= head x xs is always true by reflexivity of <=.
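For instance, a singleton list is sorted precisely by this reflexivity argument. A small sketch, assuming reflexivity of <= is called <=-refl (as in the proofs below):

```
-- For 'cons x nil' the condition 'x <= head x nil' unfolds to 'x <= x'.
\func singleton-sorted {A : Preorder} (x : A) : IsSorted (cons x nil) =>
  cons-sorted <=-refl nil-sorted
```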

The rest of the proof is straightforward. We formulate an obvious lemma about the insert function and prove the required property by induction:

```
\func sort-sorted {A : TotalPreorder} (xs : List A) : IsSorted (sort xs) \elim xs
  | nil => nil-sorted
  | cons a l => insert-sorted a (sort-sorted l)
  \where {
    \func insert-sorted {A : TotalPreorder} (x : A) {xs : List A} (xs-sorted : IsSorted xs) : IsSorted (sort.insert x xs) => {?}
  }
```

The proof of the lemma is also a straightforward induction:

```
\func insert-sorted {A : TotalPreorder} (x : A) {xs : List A} (xs-sorted : IsSorted xs) : IsSorted (sort.insert x xs) \elim xs
  | nil => cons-sorted <=-refl nil-sorted
  | cons a l => \case totality a x \as r \return
      IsSorted (\case r \with {
        | inl _ => cons a (sort.insert x l)
        | inr _ => cons x (cons a l)
      }) \with {
    | inl a<=x => \case xs-sorted \with {
      | cons-sorted a<=l l-sorted => cons-sorted (insert-lem a x l a<=x a<=l) (insert-sorted x l-sorted)
    }
    | inr x<=a => cons-sorted x<=a xs-sorted
  }
```

In the case inl a<=x, we need another auxiliary lemma with a simple proof by case analysis:

```
\func insert-lem {A : TotalPreorder} (a x : A) (l : List A) (a<=x : a <= x) (a<=l : a <= head a l) : a <= head a (sort.insert x l) \elim l
  | nil => a<=x
  | cons b l => \case totality b x \as r \return
      a <= head a (\case r \with {
        | inl _ => cons b (sort.insert x l)
        | inr _ => cons x (cons b l)
      }) \with {
    | inl _ => a<=l
    | inr _ => a<=x
  }
```

This completes the proof of correctness of sort. The full proof can be found here.

**Exercise 5:** Implement another sorting algorithm and prove its correctness.