propositions as types
This week, we will continue our type series by revisiting dependent types, this time in the context of type theory and theorem proving. In a previous post, type systems and dependent types, we mentioned agda as an example of a dependently typed language. One of the uses of agda, if not the primary use, is theorem proving. How can a language which, after a quick syntax examination, somewhat resembles haskell be used to write proofs? In this post, we will build a correspondence between mathematical proofs and computer programs using the Curry-Howard isomorphism. The isomorphism consists of two parts, which will be the start of our mapping: the interpretation of propositions as types and of proofs as programs (written in lambda calculus syntax). Under this interpretation, to write a proof of a proposition (a mathematical statement) is to write a program of the corresponding type, leading to proofs that can be run.
| logic | type theory |
| --- | --- |
| propositions | types |
| proofs | elements |
logical connectives
To start mapping logic into types, we will first consider logical connectives. We shall start with implication. Here, a good first step is to look at implication, $A \Rightarrow B$, as an object that can be used in the rule known as modus ponens, i.e. an object which produces $B$ once $A$ is provided. This optic naturally leads to a type based on the suggestive $\Rightarrow$ symbol: the function type $A \to B$. The mentioned modus ponens rule becomes a simple function application:

To prove $B$ we need to supply $f : A \to B$ and, to prove $B$ from $a : A$, we simply write $f\,a$.
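As a small illustration, here is the rule in agda (a minimal sketch: the propositions $A$, $B$ and the proofs $f$, $a$ are postulated assumptions):

```agda
-- implication is the function type; modus ponens is application
postulate
  A B : Set   -- two propositions, read as types
  f : A → B   -- an assumed proof of A ⇒ B
  a : A       -- an assumed proof of A

b : B         -- modus ponens: a proof of B
b = f a
```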
Other common connectives are logical conjunction $\land$ and disjunction $\lor$. By conjunction $A \land B$ we mean that both $A$ and $B$ hold. To prove said statement, one simply proves $A$, then proves $B$. If we transform this process into types, one needs to provide a term $a : A$, a term $b : B$, then package these terms into a box capable of holding all the necessary data. This naturally leads to a type of a container capable of holding two terms of two types, e.g. a pair type $A \times B$.

To create a proof of $A \land B$ one simply constructs a pair from $a : A$ and $b : B$. The standard implications $A \land B \Rightarrow A$ and $A \land B \Rightarrow B$ correspond to the projections $\mathrm{fst}$ and $\mathrm{snd}$.
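A sketch in agda (we roll our own pair type; the standard library provides an equivalent one):

```agda
-- conjunction as a pair type: a proof of A ∧ B is a pair of proofs
data _×_ (A B : Set) : Set where
  _,_ : A → B → A × B

-- the implications A ∧ B ⇒ A and A ∧ B ⇒ B are the projections
fst : {A B : Set} → A × B → A
fst (a , b) = a

snd : {A B : Set} → A × B → B
snd (a , b) = b
```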
By disjunction $A \lor B$ we mean that at least one of $A$ and $B$ holds. Here, it is enough to prove either $A$ or $B$. This can be mapped to the type of a coproduct $A + B$ (e.g. Either in haskell), sometimes called a tagged or discriminated union. In haskell, the tag is implemented through the value constructors Left and Right; common names for the injections used in type theory are $\mathrm{inl}$ and $\mathrm{inr}$. These correspond to the tautologies $A \Rightarrow A \lor B$ and $B \Rightarrow A \lor B$.
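Again as an agda sketch:

```agda
-- disjunction as a coproduct: a proof of A ∨ B carries a tag
data _+_ (A B : Set) : Set where
  inl : A → A + B   -- the tautology A ⇒ A ∨ B
  inr : B → A + B   -- the tautology B ⇒ A ∨ B
```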
truth values
Another aspect of logic we need to encode into types is its constants, namely true ($\top$) and false ($\bot$). Let us start by finding the true type. The true type should be trivially provable, meaning producing a term of said type 'should not require any work', and the type should not have any further structure; the type should have a single accessible element:

We map true to the $\mathbf{1}$ (unit) type (e.g. () in haskell) with the single nullary constructor $\star$.
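In agda, such a type can be sketched as:

```agda
-- true as the unit type: one nullary constructor, nothing else,
-- so a proof requires no work
data 𝟙 : Set where
  ⋆ : 𝟙
```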
To map false, we need a type that has no elements, as it should not be provable. This can be achieved by an empty type, which we will denote $\mathbf{0}$ (sometimes, the name void is used, but it does not correspond to the notion of void used in C and C++; to avoid confusion, we will use $\mathbf{0}$). Here, an interesting observation is that ex falso quodlibet (from falsehood, anything follows) corresponds to the absurd functions $\mathbf{0} \to A$, which exist for all types $A$. Once you have a term of $\mathbf{0}$, you can, through the corresponding absurd function, get a term of any type.
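In agda, the empty type and its absurd functions look as follows (the absurd pattern () tells the type checker there are no cases to cover):

```agda
-- false as the empty type: no constructors, hence no proofs
data 𝟘 : Set where

-- ex falso quodlibet: an absurd function exists for every type A
absurd : {A : Set} → 𝟘 → A
absurd ()
```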
negation
We would like to be able to state that something is not true. In logic, this of course corresponds to negation, as stating $\neg A$ is the same as stating that $A$ is not true.

So far, we have been avoiding a particular logic, but this is a good point to make a choice: from now on, we will use intuitionistic (constructive) logic, which uses $\neg A := A \Rightarrow \bot$ as its definition of negation (i.e. it is a rewrite/derivation rule; you might have noticed that we have scrupulously avoided truth tables in our operator definitions). In other words:

To prove a negation $\neg A$, one needs to produce a proof which, if we also assume $A$, leads to a contradiction (this kind of proof is, in constructive mathematics, unsurprisingly called a proof of negation).

By mapping $\Rightarrow$ to our function type and $\bot$ (false) to $\mathbf{0}$, we get the type $A \to \mathbf{0}$. How do we write down a function that returns an element of $\mathbf{0}$, which is a type with no elements? One way to prove the negation $\neg A$ is to produce an element of a type we know leads to a contradiction. Suppose you want to prove $\neg A$, assuming $f : A \to B$ and $g : \neg B$. The proof can be easily written in terms of the following lemma $c$:

$$c : (A \to B) \to (B \to \mathbf{0}) \to (A \to \mathbf{0})$$

which is inhabited by the following term:

$$c = \lambda f.\, \lambda g.\, \lambda a.\, g\,(f\,a)$$

which is nothing else but (specialised) function composition; then clearly $c\,f\,g : \neg A$
as desired. Our current state of affairs is hence this:
| logic | type theory |
| --- | --- |
| propositions | types |
| proofs | elements (terms) |
| $\top$ (true) | $\mathbf{1}$ (unit) |
| $\bot$ (false) | $\mathbf{0}$ (void) |
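For illustration, the lemma $c$ from the negation section can be rendered in agda (a sketch; 𝟘 is the empty type from above):

```agda
data 𝟘 : Set where   -- the empty type again

-- from A ⇒ B and ¬B we obtain ¬A: specialised function composition
c : {A B : Set} → (A → B) → (B → 𝟘) → (A → 𝟘)
c f g a = g (f a)
```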
examples
Let us quickly check that the above mapping is consistent with the standard behaviour of logic. We know that $\bot \land A$ is $\bot$. In the type realm, this corresponds to a claim that $\mathbf{0} \times A$ 'is' $\mathbf{0}$. But what does 'is' mean? For our purposes, it can be defined as an isomorphism¹. To state $X \cong Y$ then requires a pair of functions $f : X \to Y$ and $g : Y \to X$ which are inverses of each other. To show $\mathbf{0} \times A \cong \mathbf{0}$, for instance, we construct the following pair of inverse functions:

- $f : \mathbf{0} \times A \to \mathbf{0}$ is the first projection from the product,
- $g : \mathbf{0} \to \mathbf{0} \times A$ is (again) the absurd function, this time for $\mathbf{0} \times A$.
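As an agda sketch (reusing the shapes of the earlier definitions):

```agda
data 𝟘 : Set where

data _×_ (A B : Set) : Set where
  _,_ : A → B → A × B

-- one direction: the first projection
f : {A : Set} → 𝟘 × A → 𝟘
f (z , a) = z

-- the other direction: the absurd function for 𝟘 × A
g : {A : Set} → 𝟘 → 𝟘 × A
g ()
```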
Another example is $\bot \Rightarrow A$. Here, the argument is somewhat less straightforward. Consider the absurd function $\mathbf{0} \to A$ again. To be used, such a function would need to be given an element of type $\mathbf{0}$. Since there is no such element, it will never be applied, meaning the right-hand side does not matter: extensionally (point-wise), all possible definitions are the same function.

Thus the types of these (absurd) functions are contractible (roughly speaking, a type $T$ is contractible if there is an element $t_0 : T$ and all elements $t : T$ are 'equal' to $t_0$). Perhaps more importantly, they are isomorphic to the unit type (since all their inhabiting functions are the same). In a stroke of luck, this is exactly the type which corresponds to $\top$ (true) in our established mapping.
Let's look at another property of implication: $A \Rightarrow \top$. Here, we are producing a function $A \to \mathbf{1}$, but all functions of this type are just constant functions which return $\star$, meaning once again, the type is contractible and thus isomorphic to $\mathbf{1}$ (unit), the type-theoretic version of true.
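A one-line agda sketch of the claim that there is really only one such function:

```agda
data 𝟙 : Set where
  ⋆ : 𝟙

-- every function into 𝟙 is (extensionally) this constant function
const⋆ : {A : Set} → A → 𝟙
const⋆ _ = ⋆
```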
As a final example, we shall consider the proof of a more complicated statement:

$$(A \lor B) \land \neg B \Rightarrow A$$

which, rewritten as a type, becomes:

$$(A + B) \times (B \to \mathbf{0}) \to A$$

Then (an agda rendering follows the list):

- assume $p : (A + B) \times (B \to \mathbf{0})$,
- define $u = \mathrm{fst}\,p : A + B$
- and $g = \mathrm{snd}\,p : B \to \mathbf{0}$,
- we need some $a : A$,
- which we can define using a case split² as:
  - if $u = \mathrm{inl}\,a'$ for some $a' : A$ then $a = a'$,
  - if $u = \mathrm{inr}\,b$ for some $b : B$ then $a = \mathrm{absurd}\,(g\,b)$.
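The same proof as an agda program (a sketch; the case split happens through pattern matching):

```agda
data 𝟘 : Set where

data _×_ (A B : Set) : Set where
  _,_ : A → B → A × B

data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B

absurd : {A : Set} → 𝟘 → A
absurd ()

-- (A ∨ B) ∧ ¬B ⇒ A
proof : {A B : Set} → (A + B) × (B → 𝟘) → A
proof (inl a , g) = a              -- the A case: done
proof (inr b , g) = absurd (g b)   -- the B case: contradiction via g
```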
¹ Equality of types is a rather complicated topic, heavily discussed in type theories such as HoTT. We think that, for our illustrative purposes, isomorphism will do.
² A case split is a term of type $C$ constructed from three sub-terms: $s : A + B$, $f : A \to C$ and $g : B \to C$. The meaning is almost obvious given the types: 'take apart $s$ and apply either $f$ or $g$ to get the result', or in other words 'give me a recipe to build a $C$ from $A$, a recipe to build a $C$ from $B$, and I will give you a recipe for building a $C$ from $A + B$'.
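Such a term can be sketched in agda as:

```agda
data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B

-- take s apart and apply f or g to get the result
case : {A B C : Set} → (A → C) → (B → C) → A + B → C
case f g (inl a) = f a
case f g (inr b) = g b
```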
universal quantification
So far so good, but more importantly, so far so non-dependent. Where do dependent types fit into this? One feature of first-order logic that we have not dealt with yet is quantifiers: the universal $\forall$ and the existential $\exists$.

Let us start by considering the former. We would like the quantifier to express the notion 'a proposition about a value holds no matter the value we choose'. Since propositions correspond to types, to make a statement about a value, one needs to have a type which can incorporate values: a dependent type.
Let us recall that a type family $B$ is a family of types $B(x)$ where $x : A$. We say that $B$ is indexed by $A$. An example of a type family is the family of integer vector types $\mathrm{Vec}(n)$ indexed by $n : \mathbb{N}$, where $\mathrm{Vec}(n)$ is the type of vectors of length $n$, e.g. $(1, 5, 7) : \mathrm{Vec}(3)$.
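In agda, such a family can be sketched as an indexed data type (generalised over the element type):

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

-- Vec A n is the type of vectors of length n
data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : {n : ℕ} → A → Vec A n → Vec A (suc n)
```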
To find the mapping of the universal quantifier, we use the idea that to prove a statement $\forall x.\, P(x)$, where $P$ is some predicate on $A$, we need to be able to prove the statement about $x$ no matter the $x$ you 'get'. This seems an awful lot like a function, and more precisely a dependent function: we need to produce a term of $P(x)$ for every argument $x : A$.

There are two commonly used notations for dependent functions:

- $\prod_{x : A} P(x)$, called a pi type in a type-theoretic setting,
- $(x : A) \to P\,x$, as used in agda.
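As a concrete sketch of the agda notation, here is a dependent function whose return type mentions the value of its argument (replicate is our illustrative name; ℕ and Vec are as above):

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : {n : ℕ} → A → Vec A n → Vec A (suc n)

-- the return type Vec A n depends on the argument n
replicate : {A : Set} → A → (n : ℕ) → Vec A n
replicate x zero    = []
replicate x (suc n) = x ∷ replicate x n
```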
An example is the statement that every natural number is even or odd (see the sketch after this list):

- $\prod_{n : \mathbb{N}} \big(\mathrm{Even}(n) + \mathrm{Odd}(n)\big)$, where
- $\mathrm{Even}$ and $\mathrm{Odd}$ are type families indexed by natural numbers,
- $\mathrm{Even}(n)$ is inhabited iff $n$ is even:
  - we can take $\mathrm{Even}(n) = \mathbf{1}$ if $n = 2k$ for some $k$,
  - $\mathrm{Even}(n) = \mathbf{0}$ otherwise.
- $\mathrm{Odd}(n)$ is inhabited iff $n$ is odd (same idea as above).
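A possible agda rendering (a sketch; we define Even and Odd by recursion on $n$ rather than by the $2k$ condition, which is equivalent):

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

data 𝟘 : Set where
data 𝟙 : Set where
  ⋆ : 𝟙
data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B

Even : ℕ → Set
Odd  : ℕ → Set
Even zero    = 𝟙
Even (suc n) = Odd n
Odd  zero    = 𝟘
Odd  (suc n) = Even n

-- the dependent function proving ∀ n. Even n ∨ Odd n
evenOrOdd : (n : ℕ) → Even n + Odd n
evenOrOdd zero    = inl ⋆
evenOrOdd (suc n) with evenOrOdd n
... | inl e = inr e   -- n even, hence suc n odd
... | inr o = inl o   -- n odd, hence suc n even
```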
Another example can be