This week, we will continue our type series by revisiting dependent types, this time in the context of type theory and theorem proving. In the previous post, Type systems and dependent types, we mentioned Agda as an example of a dependently typed language. One of the uses of Agda, if not the primary use, is theorem proving. How can a language which, after a quick syntax examination, somewhat resembles Haskell be used to write proofs? In this post, we will build a correspondence between mathematical proofs and computer programs using the Curry-Howard isomorphism. The isomorphism consists of two parts, which will be the start of our mapping: the interpretation of propositions as types, and of proofs as programs (written in lambda calculus syntax). Under this interpretation, to write a proof of a proposition (a mathematical statement) is to write a program of the corresponding type, leading to proofs that can be run.
To start mapping logic into types, we will first consider logical connectives. We shall start with implication. Here, a good first step is to look at implication, A ⟹ B, as an object that can be used in the rule known as modus ponens, i.e. an object which produces B once A is provided. This viewpoint naturally leads to a type based on the suggestive → symbol: the function type A → B. The mentioned modus ponens rule becomes simple function application: given f : A → B and a : A, the application f a is a term of type B. In other words, to prove A ⟹ B we need to supply a function f : A → B, and to prove B from a : A and f, we simply write f a.
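As a minimal sketch in Agda (the snippets in this post build on one another; the name modusPonens is ours, not standard):

```agda
-- Modus ponens: from a proof of A ⟹ B and a proof of A, conclude B.
-- Under Curry-Howard this is nothing more than function application.
modusPonens : {A B : Set} → (A → B) → A → B
modusPonens f a = f a
```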
Other common connectives are logical conjunction and disjunction. By the conjunction A ∧ B we mean that both A and B hold. To prove said statement, one simply proves A, then proves B. If we transform this process into types, one needs to provide a term a : A and a term b : B, then package these terms into a box capable of holding all the necessary data. This naturally leads to the type of a container capable of holding two terms of two types, e.g. a pair type A × B.

To create a proof of A ∧ B one simply constructs a pair (a, b) from a : A and b : B. The standard implications A ∧ B ⟹ A and A ∧ B ⟹ B correspond to the projections proj₁ : A × B → A and proj₂ : A × B → B.
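Continuing the sketch, the pair type and its projections could be defined as follows:

```agda
-- Conjunction as a pair: a proof of A ∧ B is a proof of A together
-- with a proof of B.
data _×_ (A B : Set) : Set where
  _,_ : A → B → A × B

-- The implications A ∧ B ⟹ A and A ∧ B ⟹ B are the two projections.
proj₁ : {A B : Set} → A × B → A
proj₁ (a , b) = a

proj₂ : {A B : Set} → A × B → B
proj₂ (a , b) = b
```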
By the disjunction A ∨ B we mean that at least one of A and B holds. Here, it is enough to prove either A or B. This can be mapped to the type of a coproduct A + B (e.g. Either in Haskell), sometimes called a tagged or discriminated union. In Haskell, the tag is implemented through the value constructors Left and Right; common names for the injective constructors used in type theory are inl and inr. These correspond to the tautologies A ⟹ A ∨ B and B ⟹ A ∨ B.
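A matching sketch of the coproduct (we use inl and inr as in the text; the Agda standard library spells them inj₁ and inj₂):

```agda
-- Disjunction as a tagged union: inl and inr are the injections,
-- i.e. the tautologies A ⟹ A ∨ B and B ⟹ A ∨ B.
data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B
```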
Another aspect of logic we need to encode into types is the constants, namely true (⊤) and false (⊥). Let us start by finding the true type. The true type should be trivially provable, meaning producing a term of said type ‘should not require any work’, and the type should not have any further structure; the type should have a single accessible element. We map ⊤ (true) to the 𝟙 (unit) type (e.g. () in Haskell) with the single nullary constructor ⋆.
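In the sketch:

```agda
-- Truth as the unit type 𝟙: the single constructor ⋆ is the trivial proof,
-- obtainable with no work at all.
data 𝟙 : Set where
  ⋆ : 𝟙
```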
To map false, we need a type that has no elements, as it should not be provable. This can be achieved by an empty type, which we will denote ø (sometimes, the name void is used, but it does not correspond to the notion of void used in C and C++; to avoid confusion, we will use ø). Here, an interesting observation is that ex falso quodlibet (from falsehood, anything follows) corresponds to the absurd functions absurd : ø → A, which exist for every type A. Once you have a term x : ø, you can, through the corresponding absurd function, get a term of any type.
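In Agda, an empty type is a datatype with no constructors, and the absurd functions fall out of the absurd pattern ():

```agda
-- Falsehood as the empty type ø: no constructors, hence no proofs.
data ø : Set where

-- Ex falso quodlibet: from a term of ø we can produce a term of any type.
-- The absurd pattern () tells Agda there is no case to handle.
absurd : {A : Set} → ø → A
absurd ()
```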
We would like to be able to state that something is not true. In logic, this of course corresponds to negation, as stating ¬A is the same as stating that A is not true.

So far, we have been avoiding a particular logic, but this is a good point to make a choice: from now on, we will use intuitionistic (constructive) logic, which uses ¬A := A ⟹ ⊥ as its definition of negation (i.e. it is a rewrite/derivation rule – you might have noticed that we have scrupulously avoided truth tables in our operator definitions). In other words: to prove a negation ¬A, one needs to produce a proof which, if we also assume A, leads to a contradiction (this kind of proof is in constructive mathematics unsurprisingly called a proof of negation).
By mapping ⟹ to our function type → and ⊥ (false) to ø, we get the type ¬A := A → ø. How do we write down a function that returns an element of ø, which is a type with no elements? One way to prove a negation is to produce an element of a type we know leads to a contradiction. Suppose you want to prove ¬A, assuming A ⟹ B and ¬B. The proof can be easily written in terms of the following lemma C:

C : (A → B) → (B → ø) → (A → ø)

which is inhabited by the following term:

C := λf. λg. λa. g (f a)

which is nothing else but (specialised) function composition: C f g = g ∘ f. Then clearly, given f : A → B and g : ¬B, the term C f g proves ¬A, as desired.
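In the running sketch, negation and the lemma C become:

```agda
-- Negation: ¬ A is a function from A into the empty type.
¬_ : Set → Set
¬ A = A → ø

-- The lemma C: given A ⟹ B and ¬B, we obtain ¬A.
-- It is just (flipped, specialised) function composition.
C : {A B : Set} → (A → B) → ¬ B → ¬ A
C f g = λ a → g (f a)
```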
Our current state of affairs is hence this:

| logic | type theory |
| --- | --- |
| ⊤ (true) | 𝟙 (unit) |
| ⊥ (false) | ø (void) |
| A ∧ B | A × B |
| A ∨ B | A + B |
| A ⟹ B | A → B |
| ¬A | A → ø |
Let us quickly check that the above mapping is consistent with the standard behaviour of logic. We know that ⊥ ∧ A is ⊥. In the type realm, this corresponds to the claim that ø × A ‘is’ ø. But what does ‘is’ mean? For our purposes, it can be defined as an isomorphism¹. To state A ≅ B then requires a pair of functions f : A → B and g : B → A which are inverses of each other. To show ø × A ≅ ø, we construct the following pair of inverse functions:

- f : ø × A → ø is the first projection from the product,
- g : ø → ø × A is (again) the absurd function, this time for ø × A.
Intuitively, a term of ø × A must be of the form (x, a) with x : ø, which means we need to be able to write down some x : ø; that is of course impossible, since ø has no values. Thus ø × A has no values either, meaning it is isomorphic to the empty type.
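The two directions of this isomorphism, in the sketch (to and from are our names):

```agda
-- ⊥ ∧ A ‘is’ ⊥: the pair of inverse functions witnessing ø × A ≅ ø.
to : {A : Set} → ø × A → ø
to = proj₁      -- the first projection

from : {A : Set} → ø → ø × A
from = absurd   -- the absurd function for ø × A
```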
Another example is ⊥ ⟹ A, i.e. the type ø → A. Here, the argument is somewhat less straightforward. Consider the absurd function absurd : ø → A again. To be used, such a function would need to be given an element of type ø. Since there is no such element, it will never be applied, meaning the right-hand side does not matter – extensionally (point-wise), all possible definitions are the same function.

Thus the types of these (absurd) functions are contractible (roughly speaking, a type T is contractible if there is an element c : T and all elements of T are ‘equal’ to c). Perhaps more importantly, they are isomorphic to the unit type (since all their inhabiting functions are the same). In a stroke of luck, this is exactly the type which corresponds to ⊤ (true) in our established mapping.
Let's look at another property of implication: A ⟹ ⊤. Here, we are producing a function A → 𝟙, but all functions of this type are just constant functions which return ⋆, meaning that, once again, the type is contractible and thus isomorphic to 𝟙 (unit) – the type-theoretic version of true.
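Both observations amount to exhibiting the essentially unique inhabitants of the two function types (fromFalse and toTrue are our names):

```agda
-- The unique (up to pointwise equality) inhabitant of ø → A…
fromFalse : {A : Set} → ø → A
fromFalse = absurd

-- …and the unique inhabitant of A → 𝟙: the constant function returning ⋆.
toTrue : {A : Set} → A → 𝟙
toTrue _ = ⋆
```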
As a final example, we shall consider the proof of a more complicated statement, namely that A ∨ B ⟹ C is equivalent to (A ⟹ C) ∧ (B ⟹ C). Under our mapping, this becomes the isomorphism (A + B → C) ≅ (A → C) × (B → C): one direction pre-composes a function A + B → C with the injections inl and inr, while the other performs a case split².
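Here is the case-split half of this isomorphism in the sketch:

```agda
-- Given recipes A → C and B → C, build a recipe A + B → C by case analysis.
case : {A B C : Set} → (A → C) → (B → C) → A + B → C
case f g (inl a) = f a
case f g (inr b) = g b
```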
¹ Equality of types is a rather complicated topic, heavily discussed in type theories such as HoTT. We think that, for illustrative purposes, isomorphism will do.
² A case split is a term of type C constructed from three sub-terms: x : A + B, f : A → C and g : B → C. The meaning is almost obvious given the types: ‘take x apart and apply either f or g to get the result’, or in other words, ‘give me a recipe to build a C from A and a recipe to build a C from B, and I will give you a recipe for building a C from A + B’.
So far so good, but more importantly, so far so non-dependent. Where do dependent types fit into this? One feature of first-order logic that we have not dealt with yet is quantifiers: the universal (∀) and the existential (∃).
Let us start by considering the former. We would like the quantifier to
express the notion ‘a proposition about a value holds no matter the value we
choose’. Since propositions correspond to types, to make a statement about
a value, one needs to have a type which can incorporate values: a dependent
type.
Let us recall that a type family is a family of types B(x) where x : A; we say that B is indexed by A. An example of a type family is the family of integer vector types Vec ℤ n indexed by n : ℕ, where Vec ℤ n is the type of vectors of length n, e.g. (1, 42, 7) : Vec ℤ 3.
To find the mapping of the universal quantifier, we use the idea that to prove a statement ∀x. P(x), where P is some predicate on A, we need to be able to prove the statement about x no matter the x you ‘get’. This seems an awful lot like a function, and more precisely a dependent function: we need to produce a proof of P(x) for every argument x : A.
There are two commonly used notations for dependent functions:

- Π(x : A). B(x) – called a pi type in a type-theoretic setting,
- (x : A) → B x – as used in Agda.
We will use the former, as it resembles the quantifier notation. An example of a dependent function type would then be

E := Π(n : ℕ). Even(n) + Odd(n)

where

- Even and Odd are type families indexed by natural numbers,
- Even(n) is inhabited iff n is even:
  - we can take Even(n) := 𝟙 if n = 2k for some k : ℕ,
  - Even(n) := ø otherwise,
- Odd(n) is inhabited iff n is odd (same idea as above).

The type E, interpreted as a proposition, claims that every number is either odd or even (possibly both!). Written as a formula, ∀n. Even(n) ∨ Odd(n).
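A concrete Agda version of this example might look as follows; we define Even and Odd by recursion, sending even numbers to 𝟙 and odd ones to ø, as in the text:

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

-- Even n is 𝟙 (trivially provable) when n is even, and ø (unprovable) otherwise.
Even : ℕ → Set
Even zero          = 𝟙
Even (suc zero)    = ø
Even (suc (suc n)) = Even n

-- Odd is the mirror image: n is odd iff n + 1 is even.
Odd : ℕ → Set
Odd n = Even (suc n)

-- The dependent function E: every natural number is even or odd.
evenOrOdd : (n : ℕ) → Even n + Odd n
evenOrOdd zero          = inl ⋆
evenOrOdd (suc zero)    = inr ⋆
evenOrOdd (suc (suc n)) = evenOrOdd n
```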
Another example can be zeros : Π(n : ℕ). Vec ℤ n, which is a function that, for each natural number n, returns a vector of length n such that all its elements are zeros. If such a function exists, it can be viewed as a proof that for every n, some vector of length n exists.
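A sketch of this example (to stay self-contained, we build vectors of naturals rather than integers):

```agda
-- Vec A n: vectors of elements of A, indexed by their length.
data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : {n : ℕ} → A → Vec A n → Vec A (suc n)

-- For every n there exists a length-n vector: the all-zeros one.
zeros : (n : ℕ) → Vec ℕ n
zeros zero    = []
zeros (suc n) = zero ∷ zeros n
```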
Note that in Π(n : ℕ). Even(n) + Odd(n), the type family Even(n) + Odd(n) indexed by n plays the role of a predicate. Under our interpretation, dependent types or type families correspond to predicates.
Under the Curry-Howard isomorphism, the existential quantifier can be introduced through sigma types, which we discussed under the name dependent products in the dependent types blog post. Here, we use Σ(x : A). B(x) to mean the type of dependent pairs in which the type of the second element depends on the value of the first element.
An example of such a type can be Σ(n : ℕ). Vec ℤ n, a type containing pairs (n, v), where v is a vector of length n. The reason we employ the sigma type notation is the same as with pi types, and that is the similarity to the quantifier notation ∃x. P(x). The intuition behind sigma types corresponding to existential quantification is that to produce a term of this type, one needs to create a single pair. One can read Σ(n : ℕ). Even(n) as ‘an n of type ℕ such that it is even’, and thus a proof can be a single pair, e.g. (2, p), where p : Even(2) is a proof that 2 is indeed even.
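In the sketch, with Even as defined above (the proof component for 2 reduces to ⋆):

```agda
-- Dependent pair: the type of the second component depends on the first.
data Σ (A : Set) (B : A → Set) : Set where
  _,_ : (a : A) → B a → Σ A B

-- ‘There exists an even natural number’, witnessed by 2 and its proof.
∃even : Σ ℕ Even
∃even = suc (suc zero) , ⋆
```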
We have talked about the intuitive correspondence between propositions and types, which we sum up one last time in the following table:

| logic | type theory |
| --- | --- |
| propositions | types |
| proofs | elements |
| predicates | dependent types |
| ⊤ (true) | 𝟙 (unit) |
| ⊥ (false) | ø (void) |
| A ∧ B | A × B |
| A ∨ B | A + B |
| A ⟹ B | A → B |
| ¬A | A → ø |
| ∀x. P(x) | Π(x : A). P(x) |
| ∃x. P(x) | Σ(x : A). P(x) |
As you can probably imagine, this is just the tip of a never-ending iceberg.
For example, we mentioned that our mapping to types was based on
intuitionistic logic and hence the result is an intuitionistic type theory.
In a future post, we will look at linear types, which are instead related to
linear logic.
We end this post with some food for thought:
Formulate ¬¬A as a type. Is it true that A ⟹ ¬¬A?
Is our negation on a type A an involution, i.e. is ¬(¬A) ≅ A?
Can you formulate surjectivity of a function f : A → B as a type under the discussed correspondence? Is it really the correct type?