Intuitionistic Type Theory
1. Logic
1.1. Propositional Logic
Also called zeroth-order logic, propositional logic as it is often encountered has:
- propositional variables: e.g. \(p,q\)
- logical connectives: \(\neg, \rightarrow, \wedge, \vee\)
- some axioms
- some inference rules: usually just modus ponens, i.e., \(p \rightarrow q, p \vdash q\)
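Under the Curry-Howard reading developed later in these notes, modus ponens is just function application. A minimal sketch in Lean 4 (syntax assumed; the hypothesis names are illustrative):

```lean
-- modus ponens: from a proof of p → q and a proof of p, obtain a proof of q.
-- In Lean this is literally applying the function hpq to the argument hp.
example (p q : Prop) (hpq : p → q) (hp : p) : q := hpq hp
```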
1.2. First order logic
Same as above, but variables can be quantified over domains and we can talk about predicates, which are functions that map individuals to True or False.
In Model Theory, first order logic is used to specify a theory.
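To illustrate, a predicate in Lean 4 can be modeled as a function from a domain to `Prop` (the predicate name `Even` here is a hypothetical placeholder, not Lean's library definition):

```lean
-- Even is a predicate: a function from individuals (Nat) to propositions.
-- Given that every double is even, instantiate the quantifier at n = 3.
example (Even : Nat → Prop) (h : ∀ n, Even (2 * n)) : Even 6 := h 3
```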
1.3. Second and higher order logic
Second- and higher-order logics can quantify over domains of individuals, but can also quantify over predicates. In other words, if we think of predicates as specifying sets, then quantifying over predicates is like quantifying over sets.
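The contrast can be sketched in Lean 4 (a sketch, not a formal account of the first-/second-order distinction):

```lean
-- first-order: quantify over individuals (here, natural numbers)
example (P : Nat → Prop) (h : ∀ n, P n) : P 3 := h 3

-- second-order: the quantifier now ranges over the predicates themselves
example : ∀ P : Nat → Prop, (∀ n, P n) → P 3 := fun _ h => h 3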
1.4. Combinatory Logic
Like first-order logic, but with no quantification. Combinatory logic was intended as a foundation on which first-order logic could be built. Using combinatory logic, we can write out all computations possible in a lambda calculus. This can be done starting with just the combinators \(\mathbf{S}\) and \(\mathbf{K}\).
Important name: Moses Schönfinkel
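The two combinators are easy to define, and the classic identity \(\mathbf{S}\mathbf{K}\mathbf{K} = \mathbf{I}\) can be checked by computation. A sketch in Lean 4 (the type ascription on the second `K` is needed only to pin down an otherwise-unconstrained type variable):

```lean
-- K discards its second argument
def K {α β : Type} (x : α) (_ : β) : α := x

-- S distributes an argument to two functions: S f g x = f x (g x)
def S {α β γ : Type} (f : α → β → γ) (g : α → β) (x : α) : γ := f x (g x)

-- S K K behaves as the identity: S K K a = K a (K a) = a
example (a : Nat) : S K (K : Nat → Nat → Nat) a = a := rfl
```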
2. Intuitionistic Logic
Brouwer: what if we think of math not as the discovery of truths, but instead as a human-constructed system of abstractions? Then, the only proofs that we should accept are constructive proofs. We should not use proof by contradiction, because proving \(\neg\neg p\) does not necessarily tell us that \(p\). So, intuitionistic logic does not include:
- the law of the excluded middle, i.e. either \(p\) or \(\neg p\) is true
- double negation elimination, i.e. \(\neg\neg p \rightarrow p\)
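The asymmetry shows up directly in Lean 4: one direction is constructively provable, while the converse requires an explicit appeal to the classical axioms (a sketch):

```lean
-- p → ¬¬p holds constructively: a proof of p refutes any refutation of p
example (p : Prop) : p → ¬¬p := fun hp hnp => hnp hp

-- the converse, double negation elimination, needs classical reasoning
example (p : Prop) : ¬¬p → p := fun hnnp => Classical.byContradiction hnnp
```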
3. Dependent Types
The type of a value can depend on the value. For instance, an array may have a type \(\mathsf{array}\ n\), where \(n\) is the length of the array. Note that this is not the same as polymorphism.
3.1. \(\Pi\) type
Let \(\mathcal{U}\) be the universe of types. Let \(A\) be some type. Let \(B\) be a family of types indexed by \(A\). That is, \(B:A\rightarrow \mathcal{U}\): given \(a:A\), \(B(a)\) picks out a type in \(\mathcal{U}\).
Then, we can specify the type of a very specific function. This function does not have a fixed codomain. Instead, its codomain depends on its input: for input \(a:A\), the codomain of the function is \(B(a)\).
We can write the type of such a function as: \[ \prod_{x:A}B(x) \]
So, this is the function that takes inputs of type \(A\), and, depending on the value of the input \(x\), has a return type \(B(x)\).
In the Curry-Howard correspondence (see below), this type is associated with universal quantification.
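A concrete sketch in Lean 4, where the \(\Pi\) type is written `(b : Bool) → B b` (the family `B` below is a made-up example, not a standard definition):

```lean
-- a type family B : Bool → 𝒰, i.e. B : A → 𝒰 with A = Bool
abbrev B : Bool → Type
  | true  => Nat
  | false => String

-- an inhabitant of the Π-type (b : Bool) → B b:
-- the return type depends on the value of the input
def f : (b : Bool) → B b
  | true  => (0 : Nat)
  | false => "zero"
```

Note that `f true : Nat` while `f false : String`, something an ordinary (non-dependent) function type cannot express.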
3.2. \(\Sigma\) type
As above, let \(\mathcal{U}\) be the universe of types. Let \(A\) be a type, and let \(B\) be a family of types \(B:A\rightarrow\mathcal{U}\).
Then, let's write the type of an ordered pair, where the type of the second element depends on the value of the first.
This is written as: \[ \sum_{x:A}B(x) \]
This captures the type of \((a,b)\), where the type of \(b\) depends on the value of \(a\).
In the Curry-Howard correspondence, this is associated with the existential quantifier. See below for example.
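A concrete sketch in Lean 4, where the \(\Sigma\) type is written `(x : A) × B x`. Here \(A = \mathbb{N}\) and \(B(n) = \mathsf{Fin}\ n\), the type of naturals below \(n\):

```lean
-- a dependent pair: the type Fin n of the second component
-- depends on the value n of the first component
def p : (n : Nat) × Fin n := ⟨3, 2⟩
```

The pair `⟨3, 2⟩` typechecks because `2 : Fin 3`; the pair `⟨3, 5⟩` would not.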
4. Curry-Howard correspondence
Recall that theorems have hypotheses and a conclusion. Ordinarily, a proof of a theorem is a tree where the leaves are either axioms or hypotheses, the higher nodes are combinations of the lower nodes using inference rules, and the root is the conclusion.
The Curry-Howard correspondence associates theorems with types and proof objects with programs. In keeping with an intuitionistic approach, a theorem is only proved if it is witnessed by a proof object.
Before going forward, let's ground ourselves in some quick parallels with an ordinary programming language. Take Java, for example. In Java, a function has typed arguments and a return type. In the function body, the arguments and their types (hypotheses) are fed to other functions (other theorems) that produce values of new types (intermediate conclusions). Eventually, these values are used to produce a value of the return type (the conclusion). If we can actually write a function that takes these arguments and gives the correct return type, then the theorem specified by the function's type has been proved.
The following correspondence between logic and types can be drawn:
- hypotheses and propositions are types
- implication is captured by the function type \(\rightarrow\)
- theorems \(\Gamma \vdash q\) are represented by functions with input types that correspond to \(\Gamma\) and return types that correspond with \(q\)
- universal quantification is captured by the \(\Pi\) type. Recall that this is the type of functions from input type to dependent output type (i.e. some proposition about the input)
- existential quantification is captured by the \(\Sigma\) type. Recall that this is the type of a pair of a witness and an output type (i.e. a proposition about that witness). This is different from the \(\Pi\) type above, because it's not the type of a function, it's just the type of a single ordered pair.
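The correspondences above can each be witnessed directly in Lean 4 (a sketch; `rfl` proves equalities that hold by computation):

```lean
-- implication is the function type:
-- a proof of p → q transforms proofs of p into proofs of q
example (p q : Prop) : p → q → p := fun hp _ => hp

-- universal quantification is a Π-type:
-- a function sending each n to a proof of a proposition about n
example : ∀ n : Nat, n + 0 = n := fun _ => rfl

-- existential quantification is a Σ-style pair:
-- a witness together with a proof about that witness
example : ∃ n : Nat, n + n = 4 := ⟨2, rfl⟩
```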
4.1. Example
Let's think about the proposition \(m>n\). This is true when we can find a natural number \(k\) s.t. \(m=k+n\). What does \(m>n\) look like as a type? It's a type that talks about the existence of such a \(k\). We use the \(\Sigma\) dependent type from above: \[ \sum_{k:\mathbb{N}} m = k+n \] A couple things to notice:
- Here, \(m=k+n\) is also a proposition. Recall that propositions correspond with types.
- See that \(m=k+n\) is a type that depends on the value of \(k\).
What does a proof look like? It's an instance of the above type.
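For instance, with \(m = 5\) and \(n = 3\), a proof object can be written in Lean 4 (using `∃`, the proposition-level form of the \(\Sigma\) type):

```lean
-- 5 > 3 as the Σ-type  Σ k : ℕ, 5 = k + 3.
-- A proof is a pair: the witness k = 2 and a proof that 5 = 2 + 3.
example : ∃ k : Nat, 5 = k + 3 := ⟨2, rfl⟩
```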