The exam will take place on April 24, at 9am, in the gym (room 408).
- 1 Sources
- 2 Natural deduction
- 3 Proof terms
- 4 First-order logic
- 5 Induction
- 6 Sequent calculus
1Sources¶
The content on this page has been extracted (not using inference rules) from both Brigitte Pientka's notes (available through WebCT) and Frank Pfenning's notes. Any errors or omissions should be assumed to be the fault of @dellsystem and not of either original author.
2Natural deduction¶
Inference rule: premises on top, conclusion on the bottom.
An inference rule with no premises is an axiom.
Hypothetical judgments: when we start from unproven assumptions. However, if we have a proof for an assumption, we can replace the assumption with its derivation tree, eliminating the assumption in the process (the substitution principle).
Hypothetical derivation: proof tree for a hypothetical judgment. Strategy: apply intro rules starting from the conclusion and working towards the assumptions; then, apply elimination rules starting from the assumptions.
2.1Introduction and elimination rules¶
2.1.1Conjunction¶
Introduction rule:
$$\frac{A \,\text{true} \quad B \,\text{true}}{A \land B \,\text{true}} \, \land I$$
Elimination rule:
$$\frac{A \land B \,\text{true}}{A \,\text{true}} \, \land E_l \qquad \frac{A \land B \,\text{true}}{B \,\text{true}} \, \land E_r$$
2.1.2Truth¶
Introduction rule:
$$\frac{}{\top \,\text{true}} \, \top I$$
Elimination rule:
No elimination rule since $\top$ holds unconditionally (and thus there is no information that we can extract from it).
2.1.3Implication¶
Introduction rule:
$$ \frac{}{A\,\text{true}} \, u \quad \quad \\ \vdots \quad \quad \\ \frac{B\,\text{true}}{A \subset B \,\text{true}} \,\subset I^u $$
Elimination rule:
$$\frac{A \subset B \,\text{true} \quad A\,\text{true}}{B\,\text{true}}\, \subset E$$
Note that implications are right-associative. So with $A \subset B \subset C$, start from the end and make that a thing first ($B \subset C$), etc.
Also note that the $u$ in the $I^u$ means that $u$ is no longer available for use as a general assumption, as it was consumed by the introduction rule. In other words, it indicates the end of the scope in which $u$ is valid.
If the same formula is assumed in more than one place and discharged at different points, each occurrence should be given its own label.
It's fine if an assumption is not discharged by the end of the proof.
(I'm confused about an example in Pientka's notes in which she shows 3 proofs for $A \subset A \subset (A \land A)$, side by side, but the last two have equivalent assumptions with the same label, even though she mentions earlier that it is critical that all labels denoting assumptions are distinct. Also, where does $v$ come from in the middle one, and where does $u$ come from in the last one? They're not part of the scope ...)
2.1.4Disjunction¶
Introduction rule:
$$\frac{A\,\text{true}}{A \lor B \,\text{true}} \, \lor I_l \qquad \frac{B\,\text{true}}{A \lor B\,\text{true}} \, \lor I_r$$
Elimination rule:
$$ \frac{}{A\,\text{true}} \,u \qquad \frac{}{B\,\text{true}}\,v \\ \vdots \qquad\qquad \vdots \\ \frac{A \lor B\,\text{true} \qquad C\,\text{true} \qquad C\,\text{true}}{C \,\text{true}} \, \lor E^{u, v} $$
2.1.5Falsehood¶
Introduction rule:
None, since if there were an introduction rule for deriving $\bot$, then our inference system would immediately be unsound.
Elimination rule:
$$\frac{\bot \,\text{true}}{C \,\text{true}} \, \bot E$$
i.e., we've arrived at a contradiction, and so we can conclude anything.
2.1.6Example proofs¶
$$\top \land (\top \land \top)$$
Working from the bottom: We can use $\land I$ to get the conclusion, from the hypotheses $\top$ and $(\top \land \top)$. For the former, we can get that immediately using $\top I$; for the latter, we have to do another $\land I$, for which we need $\top$ true twice. But we can get those using, once again, $\top I$. $\blacksquare$
Source: Pientka's notes.
$$A \subset B \subset (A \land B)$$
First, note that since $\subset$ is right-associative, this is equivalent to $A \subset (B \subset (A \land B))$. So first we need to introduce the left-most $\subset$. Let $u$ be $A$ true, and suppose that we have a proof of $B \subset (A \land B)$ given $u$. Well, to get $B \subset (A \land B)$, we need another $\subset$ intro: let $v$ be $B$ true, and suppose that we have a proof of $(A \land B)$ given $v$. But we can get $A \land B$ by using $\land I$ on $u$ and $v$. So that completes the proof.
Source: Pientka's notes.
$$A \subset A \subset (A \land A)$$
Starting from the bottom: first, get $A \subset (A \land A)$ (assuming we have a proof of $u=A$) using $\subset I^u$. Then, get $A \land A$ (assuming we have a proof of $v = A$) using $\subset I^v$. Then, we just use $\land I$ on $u$ and $v$ and that completes the proof.
Source: Pientka's notes.
$$(A \land B) \subset (B \land A)$$
Starting from the bottom: first, get $(B \land A)$ (assuming we have a proof of $u = A \land B$) using $\subset I^u$. To get $B \land A$, we use $\land I$, assuming we have $B$ and $A$ somewhere. But we can get $B$ using $\land E_r$ on $u$, and we can get $A$ using $\land E_l$ on $u$. So we're good.
Source: Pientka's notes.
$$(A \subset (B \land C)) \subset ((A \subset B) \land (A \subset C))$$
i.e., distributivity.
Starting from the bottom: get $(A \subset B) \land (A \subset C)$ (assuming we have $u=A \subset (B \land C)$) using $\subset I^u$. To get $(A \subset B) \land (A \subset C)$, we use $\land I$ on $A \subset B$ and $A \subset C$.
Here we branch into the two cases. To get $A \subset B$, we use $\subset I^v$ where $v= A$. But then we need to get $B$ from somewhere. Well, if we use $\subset E$ on $u$ and $v$, we can get $B \land C$, and then use $\land E_l$ to get $B$. Similarly, for $A \subset C$, we use a fresh assumption $w = A$ (same formula, different label) and use $\land E_r$ to get $C$ from $B \land C$. That completes it. $\blacksquare$
Source: Pientka's notes.
$$(A \lor B) \subset (B \lor A)$$
i.e., commutativity of $\lor$.
Starting from the bottom, we get $B \lor A$ (assuming we have $u = A \lor B$) using $\subset I^u$. To get $B \lor A$, we use the elimination rule for $\lor$: let $v = A$, then use $\lor I_r$ to get $B \lor A$; and let $w = B$, then use $\lor I_l$ to get $B \lor A$. Then we can use $\lor E^{v, w}$ to conclude $B \lor A$ (with $u$ being the term on the left).
Source: Pientka's notes.
$$(A \land (B \land C)) \subset (A \land B)$$
This is ridiculously straightforward. Let $u = A \land (B \land C)$, and elim twice to get $A$ (with $\land E_l$) and also $B \land C$ (with $\land E_r$). Then use $\land E_l$ on $B \land C$ to get $B$. Then use $\land I$ on $A$ and $B$ to get $A \land B$. Then just use $\subset I^u$.
Source: assignment 1, exercise 1.
$$(A \subset B) \subset ((B \subset C) \subset (A \subset C))$$
Starting from the bottom. We need $(B \subset C) \subset (A \subset C)$ (with $u = A \subset B$) using $\subset I^u$. Then, we need $A \subset C$ (with $v = B \subset C$) using $\subset I^v$. Finally, we need $C$ (with $w=A$) using $\subset I^w$. If we use $\subset E$ on $w$ and $u$, we get $B$; if we use $\subset E$ on $B$ and $v$, we get $C$. So we're done.
Source: assignment 1, exercise 1.
$$((A \lor B) \subset C) \subset (A \subset C) \land (B \subset C)$$
Starting from the bottom, we use $\subset I^u$ to get $(A \subset C) \land (B \subset C)$ where $u = ((A \lor B)\subset C)$. Then, obviously we need $\land I$; now we just need to get $A \subset C$ and $B \subset C$. For the former, we let $v = A$ and use $\subset I^v$, so we need to get $C$ somehow. How do we get $C$? Well, if we use $\lor I_l$ on $v$, we get $A \lor B$; then, if we use $\subset E$ on $A \lor B$ and $u$, we get $C$. Similarly, to get $C$ on the other side, we use $\lor I_r$ on $w=B$, and conclude that $B \subset C$ using $\subset I^w$. $\blacksquare$
Source: assignment 1, exercise 1.
$$(A \subset C) \land (B \subset C) \subset ((A \lor B) \subset C)$$
Let $u = (A \subset C) \land (B \subset C)$. We can use $\subset I^u$ on $(A \lor B) \subset C$. To get that, let $v = A \lor B$, and use $\subset I^v$ on $C$. To get $C$, we need to make use of the elimination rule for $\lor$. So let $w_1 = A$, and $w_2 = B$. Next, apply $\land E_l$ to get $A \subset C$ and $\land E_r$ to get $B \subset C$. Then, applying $\subset E$ on $A$ and $A \subset C$ gives us $C$, for the middle branch; similarly, applying $\subset E$ on $B$ and $B \subset C$ gives us $C$. Then we just use $\lor E^{w_1, w_2}$ to close the gap. $\blacksquare$
Source: assignment 1, exercise 1.
$$((A \subset B) \land (A \lor C)) \subset (B \lor C)$$
Let $u = (A \subset B) \land (A \lor C)$. We can use $\subset I^u$ on $B \lor C$. To get that, we let $v = A$ and $w = C$, then use $\lor E^{v, w}$ on $A \lor C$ (the first term) and $B \lor C$ (the middle and last terms). To get $B \lor C$ from $v$ and $w$, we use $\land E_l$ on $u$ to get $A \subset B$. Then, we use $\subset E$ on $A$ and $A \subset B$ to get $B$, after which we just apply $\lor I_l$ to get $B \lor C$. The other branch is even simpler: from $C$, just use $\lor I_r$ to get $B \lor C$. Then, to get $A \lor C$ (the first term), we just use $\land E_r$ on $u$. $\blacksquare$.
Source: assignment 1, exercise 1.
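Not from the notes, but a useful sanity check: under Curry-Howard (anticipating the proof terms section below), each of these propositions is a polymorphic Haskell type ($\land$ = pairs, $\lor$ = Either, $\subset$ = functions), and a total program of that type is exactly a proof term. A sketch of a few of the proofs above (all names are mine):

```haskell
-- If these typecheck for all a, b, c, the propositions are intuitionistically provable.

pairIntro :: a -> b -> (a, b)                -- A ⊂ B ⊂ (A ∧ B)
pairIntro u v = (u, v)

swapConj :: (a, b) -> (b, a)                 -- (A ∧ B) ⊂ (B ∧ A)
swapConj u = (snd u, fst u)

distrib :: (a -> (b, c)) -> (a -> b, a -> c) -- (A ⊂ (B ∧ C)) ⊂ ((A ⊂ B) ∧ (A ⊂ C))
distrib u = (fst . u, snd . u)

swapDisj :: Either a b -> Either b a         -- (A ∨ B) ⊂ (B ∨ A)
swapDisj (Left v)  = Right v                 -- the v = A branch, via ∨I_r
swapDisj (Right w) = Left w                  -- the w = B branch, via ∨I_l

compose :: (a -> b) -> (b -> c) -> (a -> c)  -- (A ⊂ B) ⊂ ((B ⊂ C) ⊂ (A ⊂ C))
compose u v w = v (u w)
```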
2.2Local soundness and completeness¶
For each intro/elim rule, we should check that it's both sound (i.e., does not allow new truths to be deduced) and complete (i.e., strong enough to obtain all the information contained in the connective).
Local soundness: introducing then eliminating a connective should not introduce any new information, so if we do this in a proof, we should be able to get rid of the intro + elim and find a shorter path to the conclusion. We can prove this using a local reduction.
Local completeness: if we eliminate a connective, we get enough information out of it to be able to introduce it again. We can prove this using a local expansion.
2.2.1Conjunction¶
Local soundness
Start with $A$ true and $B$ true. The only things we should be able to extract from $A \land B$ true are $A$ true and $B$ true. So let $\mathcal D_1$ be $A$ true and $\mathcal D_2$ be $B$ true; get $A \land B$ true using $\land I$; then get $A$ true using $\land E_l$. From this we extract $\mathcal D_1$. Similarly, using $\land E_r$, we extract $\mathcal D_2$.
Pientka's notation for this is a pain to type out, so I'm not reproducing the derivation trees here; the annotated (proof-term) version of the same reduction appears in section 3 as $\text{fst}\,\langle M, N \rangle \implies M$ (and symmetrically for $\land E_r$).
Local completeness
Let $\mathcal D$ be $A \land B$. Starting from this, we need to use the elimination rules and then the introduction rule to rebuild $A \land B$. This is pretty straightforward: use $\land E_l$ for $\mathcal D$ on the left to get $A$ true, and $\land E_r$ for $\mathcal D$ on the right to get $B$ true, and use $\land I$ to get $A \land B$ true.
(The full derivation tree is in her notes; the proof-term version is $M : A \land B \implies \langle \text{fst}\, M, \text{snd}\, M \rangle$, which also appears in section 3.)
Source: Pientka's notes.
2.2.2Implication¶
Local soundness
To introduce $A \subset B$, we need to have a proof of $A$ true first (which we'll call $u$). Let $\mathcal D$ be $B$ true. Then, given $u$ and $\mathcal D$, we can use $\subset I^u$ to conclude $A \subset B$ true. Then, using $A \subset B$ true and $A$ true (which we'll call $\mathcal E$), we can use $\subset E$ to get $B$ true. Thus, if we start with $A$ true as assumption (from which we can get a proof of $\mathcal D$), then use $\subset I^u$ and $\subset E$, all we can conclude is that if we have $\mathcal E$ as an assumption, we can get a proof of $\mathcal D$, which is what we wanted.
Note that we can instead write $[\mathcal E/u]\mathcal D$ on the right side, to indicate that we substitute $\mathcal E$ in place of the assumption $u$.
Local completeness
Given $\mathcal D = A\subset B$ true and $u = A$ true, we use $\subset E$ to conclude $B$ true, and then use $\subset I^u$ to conclude $A \subset B$ true, as desired.
Source: Pientka's notes.
2.2.3Disjunction¶
Local soundness
There are two cases: one where $A \lor B$ true was introduced from $A$ true, and one where it was introduced from $B$ true; in each, introducing and then eliminating should give us back a derivation of $C$ true that we already had.
For the former, let $\mathcal D = A$ true. Use $\lor I_l$ to get $A \lor B$ true. Then, using the assumptions $u = A$ true (from which we can get a proof of $\mathcal E = C$ true) and $v = B$ true (from which we can get a proof of $\mathcal F = C$ true), we use the rule $\lor E^{u, v}$ to conclude that $C$ true. So if $\mathcal D$ holds, then we get $\mathcal E$, as desired.
For the latter, we let $\mathcal D = B$ true, and use $\lor I_r$ to get $A \lor B$ true; the rest is the same. Thus if $\mathcal D$ holds, then we get $\mathcal F$, as desired.
There's a typo in the proof of this in Pientka's notes; $B$ true is labelled $u$ when it should be labelled $v$.
Local completeness
This one is a bit weird - although you still follow the general principle of elim-then-intro, the actual proof looks very similar to the reduction (i.e., intro-then-elim), which is very confusing. In Pientka's notes, she just puts in the proof without any explanation, but Frank Pfenning's notes on harmony have a pretty good explanation.
The basic idea: start with $\mathcal D = A \lor B$. Let $u = A$ true and $v = B$ true be the assumptions. From $u$, use $\lor I_l$ to get $A \lor B$ true; from $v$, use $\lor I_r$ to get $A\lor B$ true. Then, you have 3 instances of $A \lor B$ true, and you can use $\lor E^{u, v}$ to conclude $A \lor B$ true.
If you think about it, there isn't really any other way to do it, since the elim rule for $\lor$ only allows you to conclude $C$ provided that $A$ and $B$ both allow you to conclude $C$. So $C$ must be $A \lor B$. Plus, $A \lor B$ must be on the left. And to get the other $A \lor B$s, we need to use intro rules. $\blacksquare \checkmark \bigstar \heartsuit \wr \text{pls let me pass}$
Source: Pientka's notes.
2.2.4Alternative conjunction¶
Suppose we had the following elimination rule for conjunction instead of the standard one:
$$ \frac{}{A\,\text{true}} \,u \quad \; \frac{}{B\,\text{true}}\,v \\ \vdots \; \\ \frac{A \land B\,\text{true} \qquad C\,\text{true}}{C \,\text{true}} \, \land E^{u, v} $$
The introduction rule remains the same:
$$\frac{A \,\text{true} \quad B \,\text{true}}{A \land B \,\text{true}} \, \land I$$
These rules are locally sound and complete. Proofs:
Local soundness
There are two proofs: one recovering $A$, and one recovering $B$. For the first, let $\mathcal D_1$ be $A$ true, and let $\mathcal D_2$ be $B$ true. Apply $\land I$ to get $A \land B$ true. Now let $u = A$ and $v = B$, and apply $\land E^{u, v}$ to $A \land B$ true and to $C = A$ true (which we've derived from $u$, ignoring $v$). The whole detour reduces to $\mathcal D_1$.
To recover $\mathcal D_2 = B$, just ignore $u$ instead of $v$.
Local completeness
Just as with disjunction, this one looks a bit weird at first. Basically, we need $A \land B$ to be our conclusion (so the last line, or bottom, of the proof tree). We can get this using $\land E^{u, v}$ where $u = A$ and $v = B$ - just use $\land I$ on $u$ and $v$ to get $A \land B$ on the right.
Actually, it looks like there's an alternative approach: with $A \land B$ on the bottom, we use $\land I$ to get it from $A$ and $B$. To get $A$, we use $\land E^{u, v}$ on $A \land B$ with $u = A$ and $v = B$. Similarly, to get $B$, we have another proof subtree where we do the same thing. This should work? But it's not in the official solutions for the question. So I guess it doesn't, though I'm not sure why.
Source: assignment 1, exercise 3.
2.2.5Nand¶
Let's define our own introduction/elimination rules for $A \bar \land B$.
Introduction rule
$$ \frac{}{A} \,u \quad \; \frac{}{B}\,v \\ \vdots \\ \frac{p}{A \bar \land B} \, \bar\land I^{u, v, p} \tag{where $p$ is any arbitrary proposition} $$
Elimination rule
$$\frac{A \bar \land B \qquad A \qquad B}{C} \, \bar \land E$$
These rules are locally sound and complete. Proofs:
Local soundness
As usual, there are two proofs. For the first, we start with $u = A$ and $v=B$. Suppose we have a proof tree $\mathcal D$ whereby we can derive $p$ starting from $u$ and $v$. Then we can use $\bar \land I^{u, v, p}$ to conclude that $A \bar \land B$. Then, if $\mathcal E_1 = A$ and $\mathcal E_2 = B$, we use $\bar \land E$ to conclude $A$.
For the other proof, we just conclude $B$ instead of $A$ in the final step (the rest is the same).
Local completeness
Let $u = A$ and $v = B$. Start with $A \bar \land B$. Combining this with $A$ and $B$, we use $\bar \land E$ to conclude $p$. Then we immediately use $\bar \land I^{u, v, p}$ to reconstitute $A \bar \land B$. Easy.
Source: assignment 1, exercise 4 (optional).
2.3Classical logic¶
Pientka doesn't really mention this in her notes, but exercise 2 of assignment 1 briefly covers this. Classical logic is basically constructive logic (what we had above) with an additional axiom: the law of the excluded middle (so $p \lor \neg p$ holds for all $p$).
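The notes stop there, but one classic fact makes the difference concrete: $p \lor \neg p$ itself is not constructively provable, yet its double negation is. Encoding $\neg A$ as a function into the empty type, a Haskell sketch (mine, not from the assignment):

```haskell
import Data.Void (Void)

-- ¬¬(A ∨ ¬A): given a refutation k of (A ∨ ¬A), derive a contradiction.
notNotLEM :: (Either a (a -> Void) -> Void) -> Void
notNotLEM k = k (Right (\a -> k (Left a)))
```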
3Proof terms¶
$M: A$ means that $A$ is a proposition, and $M$ is a proof term for $A$ (i.e., the term that represents a proof of $A\,\text{true}$).
3.1List of proof terms¶
These notes mostly follow Pientka's, with some things confirmed by Frank Pfenning's, because Pientka's notes had some typos that made me lose my faith (and my sanity). I'm also mostly using Pfenning's notation (with the $\Gamma$s and all that) because Pientka's notation takes longer to type and in any case she ends up using that notation eventually.
3.1.1Conjunction¶
Introduction rule
$$\frac{\Gamma \vdash M:A \qquad \Gamma \vdash N:B}{\Gamma \vdash \langle M, N \rangle: A \land B} \, \land I$$
Elimination rules
$$\frac{\Gamma \vdash M: A \land B}{\Gamma \vdash \text{fst} \, M:A} \,\land E_l \qquad \qquad \frac{\Gamma \vdash M: A \land B}{\Gamma \vdash \text{snd}\,M:B} \, \land E_r$$
Local reduction
It's really the same as the original local soundness proof; we just annotate it. Not including it here.
From the two soundness proofs, we get two reduction rules (just using the last line of each side of the arrow):
- $\text{fst}\,\langle M, N \rangle \implies M$
- $\text{snd}\,\langle M, N \rangle \implies N$
Note that reduction is type-preserving: if $M:A$ and $M \implies M'$, then $M':A$. So if $M$, of type $A$, reduces to $M'$, then $M'$ is also of type $A$ (where $A$ is, again, a proposition). This statement is sometimes referred to as subject reduction.
Local expansion
Just annotate the local completeness rule, starting from the $A \land B$ on the right being $M$. We should get $\langle \text{fst}\, M, \text{snd}\, M \rangle$ on the right. So the local expansion rule is
$$M: A \land B \implies \langle \text{fst}\, M, \text{snd}\, M \rangle$$
3.1.2Truth¶
Introduction rule
$$\frac{}{(): \top} \, \top I$$
(There's a typo in Pientka's notes - the annotated term should be on the bottom, not on the top. There's a pun in there somewhere but fuck it.)
Elimination rule
There isn't one. And since there's no elimination rule, we don't have local reduction/expansion rules, either.
3.1.3Implication¶
Introduction rule
$$\frac{\Gamma, u:A \vdash M :B}{\Gamma \vdash (\lambda u:A .M): A \subset B} \, \subset I^u$$
Elimination rule
$$\frac{\Gamma \vdash M: A \subset B \qquad \Gamma \vdash N:A}{\Gamma \vdash M N : B}\, \subset E$$
Local reduction
As before, we just annotate the local soundness proof to get
$$(\lambda x: A.M) N \implies [N/x]M$$
Local expansion
Once again, here we annotate the local completeness proof:
$$M : A \subset B \implies \lambda x: A.M x$$
3.1.4Disjunction¶
Introduction rules
$$\frac{\Gamma \vdash M: A}{\Gamma \vdash \text{inl}^A M: A \lor B} \, \lor I^l \qquad \frac{\Gamma \vdash N:B}{\Gamma \vdash \text{inr}^B N: A\lor B} \, \lor I^r$$
Elimination rule
$$\frac{\Gamma \vdash M: A \lor B\qquad \Gamma, x:A\vdash N_l: C \qquad \Gamma, y: B \vdash N_r:C}{\Gamma \vdash \text{case}\,M\,\text{of inl}^A x \to N_l \mid \, \text{inr}^B y \to N_r : C}\,\lor E^{x, y}$$
Local reduction
Again, we simply annotate the two local soundness proofs:
$$\text{case}(\text{inl}^A M) \text{of inl}^A x \to N_l \mid \text{inr}^B y \to N_r \implies [M/x]N_l$$
For the other rule, we have:
$$\text{case}(\text{inr}^B M) \text{of inl}^A x \to N_l \mid \text{inr}^B y \to N_r \implies [M/y]N_r$$
Local expansion
Not mentioned in either Pientka's or Pfenning's notes. By analogy with the other connectives, the standard expansion would be
$$M : A \lor B \implies \text{case}\,M\,\text{of inl}^A\, x \to \text{inl}^A\, x \mid \text{inr}^B\, y \to \text{inr}^B\, y$$
3.1.5Falsehood¶
Introduction rule
There isn't one.
Elimination rule
$$\frac{\Gamma \vdash M: \bot}{\Gamma \vdash \text{abort}^C\, M: C} \, \bot E$$
3.1.6Iff¶
Introduction rule
$$\frac{\Gamma, u:B \vdash M: A \qquad \Gamma, v:A \vdash N:B}{\Gamma \vdash [ u.M, v.N ] : A \equiv B}\,\equiv I^{u, v}$$
Note that you need to keep track of the assumptions $u$ and $v$ within the proof term for the introduction rule.
Elimination rules
$$\frac{\Gamma \vdash M: A \equiv B \qquad \Gamma \vdash N:A}{\Gamma \vdash \text{iff}_L(M, N): B} \, \equiv E_l \qquad\qquad \frac{\Gamma \vdash M: A \equiv B \qquad \Gamma \vdash N: B}{\Gamma \vdash \text{iff}_R (M, N): A}\,\equiv E_r$$
Local reduction
$$\text{iff}_L([u.M, v.N], S) \implies [S/v]N \qquad \text{iff}_R([u.M, v.N], S) \implies [S/u]M$$
(Note which premise gets used: $\text{iff}_L$ takes an $A$ and must produce a $B$, which is the $v.N$ direction of the introduction rule; $\text{iff}_R$ uses the $u.M$ direction.)
Local expansion
$$M : A \equiv B \implies [u.\text{iff}_R(M, u), v.\text{iff}_L(M, v)]$$
Source: assignment 2, exercise 3.
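Not part of the assignment, but rendering $A \equiv B$ as a pair of functions in Haskell (a sketch; all names are mine) makes the reduction rules above read as plain projection-then-application:

```haskell
-- [u.M, v.N] : A ≡ B, with u.M the B -> A direction and v.N the A -> B direction
type Iff a b = (b -> a, a -> b)

iffIntro :: (b -> a) -> (a -> b) -> Iff a b   -- ≡I^{u,v}
iffIntro m n = (m, n)

iffL :: Iff a b -> a -> b                     -- ≡E_l: from A ≡ B and A, get B
iffL (_, n) = n                               -- iff_L([u.M, v.N], S) ⟹ [S/v]N

iffR :: Iff a b -> b -> a                     -- ≡E_r: from A ≡ B and B, get A
iffR (m, _) = m                               -- iff_R([u.M, v.N], S) ⟹ [S/u]M
```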
3.2Example annotations¶
They're basically the same as regular proofs, only you need to use the annotated rules. Nothing too complicated.
$$(A \land A) \subset A$$
Given $u = A \land A$ we need to be able to get $A$. Well, using $\land E_l$, we can. So from $u = x:A \land A$ we get $\text{fst}\, x: A$. Then, for the implication, we need a lambda, so we have $\lambda x: A \land A.\text{fst}\, x:(A \land A) \subset A$.
Alternatively, we could extract the right $A$ from $A \land A$, in which case we'd just have to replace the $\text{fst}\,$ by $\text{snd}\,$.
I think there's a typo in the notes - it should be $u = x: A \land A$ not $u = x:A$.
Source: Pientka's notes.
$$(A \land B) \subset (B \land A)$$
Let $u = x:A \land B$. Use $\land E_r$ to get $\text{snd}\, x: B$. On the other branch, use $\land E_l$ to get $\text{fst}\, x: A$. Then use $\land I$ to get $\langle \text{snd}\, x, \text{fst}\, x \rangle: B \land A$. Finally, for the implication, we need a lambda, so we have $\lambda x: A \land B.\langle \text{snd}\, x, \text{fst}\, x \rangle: (A \land B) \subset (B \land A)$.
Source: Pientka's notes.
$$(A \land (B \land C)) \subset (A \land B)$$
For implication, the pattern is pretty easy to pick up on: the $x$ in the lambda needs to be of type $A \land (B \land C)$, and the body of the lambda needs to be of type $A \land B$ somehow. How do we derive $A \land B$? Well, if $u = x: A\land (B\land C)$, then $\text{fst}\, x$ gives us $A$, $\text{snd}\, x$ gives us $B \land C$ and so $\text{fst}\,(\text{snd}\, x)$ gives us $B$, and then using $\land I$ gives us $A \land B$. So we have
$$\lambda x: A \land (B \land C). \langle \text{fst}\, x, \text{fst}\,(\text{snd}\, x) \rangle: (A \land (B \land C)) \subset (A \land B)$$
Source: assignment 1, question 1.
$$(A \subset B) \subset ((B \subset C) \subset (A \subset C))$$
Our assumption is $u = x: A \subset B$. From this, we need to be able to extract something of type $(B \subset C) \subset (A \subset C)$. This itself requires the assumption $v = y:B \subset C$, from which we need to be able to extract something of type $A \subset C$. To get this, we need the assumption $w = z: A$. Then, by applying $x$ to $z$, we get $x z: B$ (using $\subset E$). Furthermore, applying $y$ to $x z$, we get $y (x z): C$. Thus to extract something of type $A \subset C$, we need $\lambda z: A. y (x z): A \subset C$.
Next, we define the scope of $y$: $\lambda y: B. \lambda z: A. y (x z): (B \subset C) \subset (A \subset C)$. Lastly, we define the scope of $x$:
$$\lambda x: A \subset B. \lambda y: B. \lambda z: A. y (x z): (A \subset B) \subset ((B \subset C) \subset (A \subset C))$$
Source: assignment 1, question 1.
$$((A \lor B) \subset C) \subset (A \subset C) \land (B \subset C)$$
Our assumption is $x:(A \lor B) \subset C$. From this, we need to extract $(A \subset C) \land (B \subset C)$. Clearly we need to extract the left and right terms individually, then use $\land I$. Well, to get $A \subset C$ is easy: if we have the assumption $y:A$, then we can use $\lor I^l$ to get $\text{inl}^A y: A \lor B$. So we have
$$\lambda y: A.\text{inl}^A y: A \subset (A \lor B)$$
And if we apply $x$ to the proof term for $A \lor B$, we get
$$\lambda y: A. x (\text{inl}^A y): A \subset C$$
Similarly, for the other branch, we have:
$$\lambda z: B. x (\text{inr}^B z): B \subset C$$
Then we just use $\land I$ and close off the scope with $\subset I$:
$$\lambda x: (A \lor B) \subset C. \langle \lambda y: A.\, x (\text{inl}^A y), \lambda z: B.\, x (\text{inr}^B z) \rangle: ((A \lor B) \subset C) \subset (A \subset C) \land (B \subset C)$$
Source: assignment 1, question 1.
$$(A \subset C) \land (B \subset C) \subset (A \lor B) \subset C$$
Let $x = (A \subset C) \land (B \subset C)$. Need to prove $(A \lor B) \subset C$. To do this, let $y = A \lor B$, from which we need to conclude $C$. It looks like we'll need to use $\lor E$ somehow.
First, suppose that $A$ is true. How can we use this to conclude that $C$ is true? Well, if $z_1:A$, then applying $\text{fst}\, x: A \subset C$ to $z_1$ will give us $C$:
$$(\text{fst}\, x) z_1: C$$
Similarly, for the other branch, suppose that $z_2: B$ is true. Then, applying $\text{snd}\, x: B \subset C$ to $z_2$ will give us $C$:
$$(\text{snd}\, x) z_2: C$$
Now we can use $\lor E$ to conclude $C$:
$$\text{case}\,y\,\text{of inl}^A z_1 \to (\text{fst}\, x) z_1 \mid \text{inr}^B z_2 \to (\text{snd}\, x) z_2: C$$
Next, we limit the scope of $y$ using $\subset I$:
$$\lambda y: A \lor B. \text{case}\,y\,\text{of inl}^A z_1 \to (\text{fst}\, x) z_1 \mid \text{inr}^B z_2 \to (\text{snd}\, x) z_2: (A \lor B) \subset C$$
Finally, we limit the scope of $x$, again using $\subset I$:
$$\lambda x: (A \subset C) \land (B \subset C). \lambda y: A \lor B. \text{case}\,y\,\text{of inl}^A z_1 \to (\text{fst}\, x) z_1 \mid \text{inr}^B z_2 \to (\text{snd}\, x) z_2: (A \subset C) \land (B \subset C) \subset (A \lor B) \subset C$$
Source: assignment 1, question 1.
$$((A \subset B) \land (A \lor C)) \subset (B \lor C)$$
Let our assumption be $x: (A \subset B) \land (A \lor C)$. We need to conclude that $B \lor C$. Looks like we'll need to use $\lor E$ on $A \lor C$. Well, if we assume $y: A$, then we can apply $\text{fst}\, x: A \subset B$ to $y$ using $\subset E$ to get $(\text{fst}\, x)\, y: B$, from which we can use $\lor I^l$ to get $\text{inl}^B ((\text{fst}\, x)\, y): B \lor C$. For the other branch, it's even easier - if we assume $z:C$, then immediately we can do $\lor I^r$ to get $\text{inr}^C z: B \lor C$. Thus we have
$$\text{case} \, \text{snd}\, x \, \text{of inl}^A y \to \text{inl}^B ((\text{fst}\, x)\, y) \mid \text{inr}^C z \to \text{inr}^C z: B \lor C$$
Finally, we just need to localise $x$:
$$\lambda x: (A \subset B)\land (A \lor C). \text{case} \, \text{snd}\, x \, \text{of inl}^A y \to \text{inl}^B ((\text{fst}\, x)\, y) \mid \text{inr}^C z \to \text{inr}^C z: ((A\subset B)\land (A\lor C)) \subset (B \lor C)$$
Source: assignment 1, question 1.
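As a sanity check (mine, not the assignment's), this last proof term transcribes directly into Haskell:

```haskell
-- ((A ⊂ B) ∧ (A ∨ C)) ⊂ (B ∨ C)
example :: (a -> b, Either a c) -> Either b c
example x = case snd x of
  Left  y -> Left (fst x y)   -- inl^B ((fst x) y)
  Right z -> Right z          -- inr^C z
```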
4First-order logic¶
4.1Universal quantification¶
$$\text{Universal: } \forall x: \tau. A(x)$$
where $x$ is a term, $\tau$ is a type, and $A$ is a proposition.
Parametric assumptions: a new kind of assumption, based on type. Ex: $a: \tau$.
4.1.1Introduction and elimination rules¶
Introduction rule:
$$\frac{\Gamma, a: \tau \vdash A(a)}{\Gamma \vdash \forall x:\tau .A(x)} \, \forall I^a$$
(I'm using the Pfenning-style $\Gamma$ notation here; it would have taken too long to format it the way Pientka does.)
Elimination rule:
$$\frac{\Gamma \vdash \forall x:\tau.A(x) \qquad \Gamma \vdash t:\tau}{\Gamma \vdash A(t)} \, \forall E$$
4.1.2Local soundness and completeness¶
Let's check local soundness and completeness for the rules we defined above.
Local soundness
Kind of obvious actually. Suppose we have $a: \tau$, and through derivation tree $\mathcal D$ we can conclude $A(a)$. Use $\forall I^a$ to get $\forall x:\tau.A(x)$. Then, use the fact that $t: \tau$ (which we obtained through derivation tree $\mathcal E$) in conjunction with $\forall E$ to get $A(t)$. This is equivalent to $[t/a]\mathcal D$ on the right side. $\blacksquare$
Local completeness
Starting from $\forall x:\tau.A(x)$ (which we derive using $\mathcal D$), we use $\forall E$ (along with the assumption $a: \tau$) to conclude $A(a)$. But then we can use $\forall I^a$ directly to conclude that $\forall x:\tau. A(x)$.
4.2Existential quantification¶
$$\text{Existential: } \exists x: \tau. A(x)$$
4.2.1Introduction and elimination rules¶
Introduction rule:
$$\frac{\Gamma \vdash A(t) \qquad \Gamma \vdash t:\tau}{\Gamma \vdash \exists x:\tau.A(x)} \, \exists I$$
Elimination rule:
$$\frac{\Gamma \vdash \exists x:\tau. A(x) \qquad \Gamma, a:\tau, u:A(a) \vdash C}{\Gamma \vdash C} \, \exists E^{a, u}$$
4.3Universal and existential proofs¶
$$(\forall x:\tau .P(x) \land Q(x)) \subset (\forall x:\tau.P(x)) \land (\forall x:\tau.Q(x))$$
As usual, we work from the bottom. First, we use $\subset I^u$ with $u=\forall x:\tau.P(x) \land Q(x)$; we need to get $(\forall x:\tau.P(x)) \land (\forall x:\tau.Q(x))$. We can get this using $\land I$ on $\forall x:\tau.P(x)$ and $\forall x:\tau.Q(x)$. To get $\forall x:\tau.P(x)$, we use $\forall I^a$ with $a:\tau$; similarly, to get $\forall x: \tau.Q(x)$, we use $\forall I^b$ with $b:\tau$. Then, working down from the top, we have two branches: for the first, use $\forall E$ with $a:\tau$ and $u$ to get $P(a) \land Q(a)$, then use $\land E_l$ to get $P(a)$ true, and use $\forall I^a$ to get $\forall x:\tau.P(x)$. For the second branch, it's the same except with $b: \tau$, and we use $\land E_r$ instead to get $Q(b)$, then $\forall I^b$ to get $\forall x:\tau.Q(x)$.
Source: Pientka's notes.
$$(\exists x:\tau.\neg A(x)) \subset \neg \forall x:\tau .A(x)$$
Proof is in the notes, and it's pretty straightforward, except for the fact that she seems to use a $\neg I$ rule that is never explained anywhere. Also, there are some pretty confusing typos - the second $u$ in the proof tree (from the left) is actually supposed to be a $w$, and the $\subset I^w$ looks like it's supposed to be a $\neg I^w$, even though $\neg I$ isn't defined anywhere. I don't know. My guess is $\neg I$ would be defined as follows:
$$\frac{\Gamma, P \vdash \bot}{\Gamma \vdash \neg P}\, \neg I$$
Source: Pientka's notes.
There are lots of other examples in assignment 2 (exercises 1 and 2) but they're all pretty much the same so I'm not going to go through them.
4.4Explicit contexts¶
That's when we put in the $\Gamma$s. Whatever.
4.5Structural properties¶
- Weakening: extra assumptions don't matter.
- Exchange: the order of the assumptions does not matter.
- Strengthening: if an assumption was not used in the proof, we can remove it.
- Substitution:
- If $\Gamma, u:A \vdash J$ and $\Gamma \vdash A$ then $\Gamma \vdash J$.
- If $\Gamma, x:\tau \vdash J$ and $\Gamma \vdash t:\tau$ then $\Gamma \vdash [t/x]J$.
4.6Proof by structural induction¶
Given some theorem, we prove it by induction on the height of the proof tree (i.e., derivation) for some term. For the base cases (and there may be multiple), we consider how that term may have been introduced. Either it was introduced via $\top I$, or it was an assumption, or it's in one of the contexts. Usually the base cases are trivial.
For the step case, we use the IH to assume that the property holds for smaller derivations. We have to prove the step case for every possible introduction/elimination rule, using any of the intro/elim rules in our arsenal, in addition to the assumptions and the IH. Pretty straightforward.
4.6.1The substitution lemma¶
$$\Gamma, x:A, \Gamma' \vdash M:B \text{ and } \Gamma \vdash N:A \implies \Gamma, \Gamma'\vdash [N/x]M:B$$
We can prove this by structural induction on the first derivation, $\mathcal D$. We can ignore the proof terms here, and thus rewrite it as follows (for simplicity, and to avoid a clash of variables)
$$\overbrace{\underbrace{\Gamma, x:A, \Gamma' \vdash C}_{\text{induct on this}}}^{\mathcal D} \text{ and } \Gamma \vdash A \implies \Gamma, \Gamma'\vdash C$$
For example, here's how to prove the induction step for the intro/elim rules for the $\equiv$ connective (from assignment 2, exercise 3, task 3):
Introduction rule
Suppose the last line of $\mathcal D$ was derived using $\equiv I$, i.e., $C = B_1 \equiv B_2$. Thus we have:
$$\mathcal D= \frac{\Gamma, x:A, \Gamma', u:B_2 \vdash B_1 \qquad \Gamma, x:A, \Gamma', v:B_1 \vdash B_2}{\Gamma, x:A, \Gamma' \vdash B_1 \equiv B_2} \, \equiv I^{u, v}$$
Now we need to prove that $\Gamma, \Gamma' \vdash B_1 \equiv B_2$.
$$\begin{align} & \Gamma \vdash A \tag{by assumption} \\ & \Gamma, \Gamma', u:B_2 \vdash B_1 \tag{by i.h. on left subtree} \\ & \Gamma, \Gamma', v:B_1 \vdash B_2 \tag{by i.h. on right subtree} \\ & \Gamma, \Gamma' \vdash B_1 \equiv B_2 \; \blacksquare \tag{by $\equiv I^{u, v}$} \end{align}$$
Left elimination rule
Suppose the last line of $\mathcal D$ was derived using $\equiv E_l$, i.e., $C = B_2$, where we have $B_1 \equiv B_2$ being eliminated. Thus we have:
$$\mathcal D= \frac{\Gamma, x:A, \Gamma' \vdash B_1 \equiv B_2 \qquad \Gamma, x:A, \Gamma' \vdash B_1}{\Gamma, x:A, \Gamma' \vdash B_2} \, \equiv E_l$$
Now we need to prove that $\Gamma, \Gamma' \vdash B_2$.
$$\begin{align} & \Gamma \vdash A \tag{by assumption} \\ & \Gamma, \Gamma' \vdash B_1 \equiv B_2 \tag{by i.h. on left subtree} \\ & \Gamma, \Gamma' \vdash B_1 \tag{by i.h. on right subtree} \\ & \Gamma, \Gamma' \vdash B_2 \tag{by $\equiv E_l$} \end{align}$$
The right elimination rule is similar, just using $\equiv E_r$ instead of $\equiv E_l$.
4.6.2Type uniqueness¶
Suppose we want to prove that if $\Gamma \vdash M:A$ and $\Gamma \vdash M:B$ then $A = B$. We can do this by structural induction on the first derivation (i.e., $\Gamma \vdash M:A$). For the base case, I think we only need to consider $\top I$, since that's the only rule with an empty derivation tree above it. So
$$\mathcal D = \frac{}{\Gamma \vdash (): \top}\,\top I$$
So $M = ()$ and $A = \top$. But the only way to get $()$ as a proof term is using $\top I$. Thus $B = \top$, and so $A = B$. $\checkmark$
For the step case, we would need to use the i.h. to prove that the theorem holds for every introduction/elimination rule we have. There are a bunch, so proving this would take a while, but it's pretty straightforward; they all follow the same idea. I'm only going to show the introduction rule for $\subset$.
Introduction rule
$$\mathcal D = \frac{\overbrace{\Gamma, u: C_1\vdash N: C_2}^{\mathcal D'}}{\Gamma \vdash \lambda u:C_1.N:C_1 \subset C_2}\, \subset I^u$$
So $M$ from the theorem is $\lambda u:C_1.N$, where $N$ is of type $C_2$ and $A = C_1 \subset C_2$. Consider what other rules produce terms of the form $\lambda u:C_1.N$ for some $C_1$ and some $N$. Well, this can only be done by the rule $\subset I$. So suppose we have another way of deriving something of the form $\lambda u:C_1.N:C_1\subset C_2$, with potentially different values for $C_1$ and $C_2$. We'll call this $\mathcal E = \lambda u: C_1'.N:C_1' \subset C_2'$. By inverting $\subset I$ on $\mathcal E$, we have $\mathcal E' = \Gamma, u:C_1' \vdash N:C_2'$ (i.e., the previous term in the derivation of $\mathcal E$).
Now, by the induction hypothesis, it must be that $\mathcal E'$ is equal to $\mathcal D' = \Gamma, u:C_1 \vdash N:C_2$. Then, if we apply $\subset I^u$ directly, we get $\Gamma \vdash \lambda u:C_1.N:C_1 \subset C_2$. So $\mathcal E = \mathcal D$, and thus $A = B$, as desired. $\blacksquare$
The model solutions provided use the same variable names for very different things, which is probably valid but which can be very confusing. The proofs above make use of fresh variables where necessary, for clarity.
Source: assignment 2, exercise 4.
4.7Proof terms for quantifiers¶
For $\exists$, the introduction rule overloads the notation we used for $\land$: $\langle M, t \rangle$ where $t: \tau$ is the "witness" (i.e., the object of type $\tau$ for which $M:A$ holds). The elimination rule is $\text{let} \langle u, a \rangle = M \text{ in } N$ where $u: A(a)$, $a : \tau$, $M$ is the proof for $\exists x:\tau . A(x)$ and $N$ is the thing we want to prove.
4.7.1Existential proof terms¶
Introduction rule
$$\frac{\Gamma \vdash M:A(t) \qquad \Gamma \vdash t:\tau} {\Gamma \vdash \langle M, t \rangle : \exists x:\tau.A(x)} \, \exists I$$
Elimination rule
$$\frac{\Gamma \vdash M:\exists x:\tau.A(x) \qquad \Gamma, a:\tau, u:A(a)\vdash N:C} {\Gamma \vdash \text{let } \langle u, a \rangle = M \text{ in } N: C}\, \exists E^{a, u}$$
Annotated reduction rule
$$\text{let } \langle u, a \rangle = \langle M, t \rangle \text{ in } N \implies [M/u][t/a]N$$
I think there's a typo in Pientka's notes - she uses $M$ as the last term above, but I'm pretty sure it should be an $N$.
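Plain Haskell can't express the dependent pairing $\langle M, t \rangle$ directly, but the shape of the reduction is just uncurrying; a rough sketch (names mine, with the witness as an ordinary value):

```haskell
-- let ⟨u, a⟩ = ⟨M, t⟩ in N  ⟹  [M/u][t/a]N
letPair :: (m, t) -> (m -> t -> n) -> n
letPair (m, t) n = n m t
```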
4.7.2Universal proof terms¶
Introduction rule
$$\frac{\Gamma, a:\tau \vdash M:A(a)}{\Gamma \vdash \lambda a:\tau .M:\forall x:\tau.A(x)} \, \forall I^a$$
Elimination rule
$$\frac{\Gamma \vdash M: \forall x:\tau.A(x) \qquad \Gamma \vdash t:\tau}{\Gamma \vdash M t : A(t)} \, \forall E$$
Annotated reduction rule:
$$(\lambda a:\tau.M)t \implies [t/a]M$$
4.8Meta-theoretic properties¶
Reduction doesn't always take place at the top level - sometimes we want to apply a reduction rule to a redex (i.e., something that can be reduced by a rule) buried within a term.
To allow this, we create congruence rules. These rules are non-deterministic, and it's pretty obvious how they work once you see a few. For instance, for $\langle M, N \rangle$, we have the following two congruence rules:
$$\frac{M \implies M'}{\langle M, N \rangle \implies \langle M', N \rangle} \qquad \frac{N \implies N'}{\langle M, N \rangle \implies \langle M, N' \rangle} $$
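To make this concrete, here's a small Haskell sketch (mine) of one-step reduction for the pair fragment; the first two equations are the reduction rules proper, the rest are the congruence rules searching for a redex inside subterms:

```haskell
data Tm = Var String | Pair Tm Tm | Fst Tm | Snd Tm

step :: Tm -> Maybe Tm
step (Fst (Pair m _)) = Just m            -- fst ⟨M, N⟩ ⟹ M
step (Snd (Pair _ n)) = Just n            -- snd ⟨M, N⟩ ⟹ N
step (Pair m n) = case step m of          -- congruence: reduce inside the pair
  Just m' -> Just (Pair m' n)
  Nothing -> Pair m <$> step n
step (Fst m) = Fst <$> step m             -- congruence under fst
step (Snd m) = Snd <$> step m             -- congruence under snd
step (Var _) = Nothing                    -- no redex

-- Note the non-determinism the notes mention: trying the left component of a
-- pair before the right one is a choice this sketch makes, not something the
-- rules force.
```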
4.8.1Subject reduction¶
$$\text{If } \overbrace{M \implies_R M'}^{\mathcal D} \text{ and } \Gamma \vdash M:C \text{ then } \Gamma\vdash M':C$$
Can prove this by structural induction on $\mathcal D = M \implies_R M'$. The base cases are the reduction rules for each redex. Here's an example using the $\equiv$ connective we defined earlier. Recall that our reduction rules are
$$\text{iff}_L([u.M, v.N], S) \implies [S/v]N \qquad \text{and} \qquad \text{iff}_R([u.M, v.N], S) \implies [S/u]M$$
Note that when we use elimination rules below, we're reasoning backwards (upwards), i.e., by inversion: if we have some term, we must have had the terms that led to that term in the derivation tree. So we can apply the rules backwards (upwards).
Also keep in mind the substitution lemma: if $\Gamma, u:A \vdash M: B$, and $\Gamma \vdash N: A$, then $\Gamma \vdash [N/u] M: B$. (I think that's right.)
$$\begin{align} & \Gamma \vdash \text{iff}_L([u.M, v.N], S):C \tag{by assumption} \\ & \Gamma \vdash [u.M, v.N]: B \equiv C \\ & \Gamma \vdash S:B \tag{both by $\equiv E_l$, reasoning backwards} \\ & \Gamma, v:B \vdash N: C \tag{by $\equiv I^{u, v}$, reasoning backwards} \\ & \Gamma \vdash [S/v]N: C \tag{by substitution lemma} \end{align}$$
For the other reduction rule, it's pretty similar:
$$\begin{align} & \Gamma \vdash \text{iff}_R([u.M, v.N], S):C \tag{by assumption} \\ & \Gamma \vdash [u.M, v.N]: C \equiv B \\ & \Gamma \vdash S:B \tag{both by $\equiv E_r$, reasoning backwards} \\ & \Gamma, u:B \vdash M: C \tag{by $\equiv I^{u, v}$, reasoning backwards} \\ & \Gamma \vdash [S/u]M: C \tag{by substitution lemma} \end{align}$$
4.8.2Subject expansion¶
Same deal, except with the expansion rules.
$$\text{If } \overbrace{M \implies_E M'}^{\mathcal D} \text{ and } \Gamma \vdash M:C \text{ then } \Gamma\vdash M':C$$
Again, we can prove this by structural induction on $\mathcal D = M \implies_E M'$. The base cases are the expansion rules. Here's an example using the $\equiv$ connective. Recall that our expansion rule is
$$\mathcal D = M \implies_E [u.\text{iff}_R(M, u), v.\text{iff}_L(M, v)] $$
which gives something of type $A \equiv B$. Then, we just apply intro/elim rules in the standard order (i.e., downwards/forwards) until we get $\Gamma \vdash M':C$, where $C = A \equiv B$:
$$\begin{align} & \Gamma \vdash M: A \equiv B \tag{by assumption} \\ & \Gamma, u:B \vdash \text{iff}_R(M, u): A \tag{by $\equiv E_r$} \\ & \Gamma, v:A \vdash \text{iff}_L(M, v): B \tag{by $\equiv E_l$} \\ & \Gamma \vdash [u.\text{iff}_R(M, u), v.\text{iff}_L(M, v)]: A \equiv B \tag{by $\equiv I^{u, v}$} \end{align}$$
(I'm pretty sure there's a typo in the provided assignment solutions ... looks like $\equiv E_l$ and $\equiv E_r$ were swapped. Not a huge deal though, the rest of the reasoning seems right.)
5Induction¶
5.1The natural numbers¶
$$\frac{}{z: \text{nat}} \, \text{nat} \, I_z \qquad \frac{t: \text{nat}}{\text{suc}\,t: \text{nat}} \, \text{nat} I_{\text{suc}}$$
5.2Proof by induction¶
We can use induction to prove $A(t)$ for all $t:\text{nat}$.
The base case is that $A(z)$ holds. For the step case, we take an arbitrary $n:\text{nat}$ and assume that $A(n)$ holds (the i.h.); then we need to prove that $A(\text{suc} \,n)$ holds.
Formally, we have:
$$\frac{\Gamma \vdash t: \text{nat} \qquad \Gamma \vdash A(z) \qquad \Gamma, n:\text{nat}, \text{ih}: A(n) \vdash A(\text{suc}\,n)}{\Gamma \vdash A(t)}\, \text{nat}\, E^{n, \text{ih}}$$
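The program this rule corresponds to (anticipating the proof terms below) is just primitive recursion; a Haskell sketch (names mine):

```haskell
data Nat = Z | Suc Nat

-- recNat mz msuc t ≈ rec t with f z -> M_z | f (suc n) -> M_suc
recNat :: a -> (Nat -> a -> a) -> Nat -> a
recNat mz _    Z       = mz                          -- base case: A(z)
recNat mz msuc (Suc n) = msuc n (recNat mz msuc n)   -- step case, fed the i.h. / recursive call
```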
5.2.1Example¶
Here's an example. Let $A(x)$ be defined as follows:
$$A(x) = x \leq x$$
where $\leq$ is defined inductively, by the following rules:
- $\text{le_z}$: $\forall x:\text{nat}. z \leq x$
- $\text{le_suc}$: $\forall x:\text{nat}.\forall y:\text{nat}. x \leq y \subset \text{suc }x \leq \text{suc }y$
We want to prove that $\Gamma \vdash \forall t:\text{nat}. A(t)$. To do so, we first need to prove the base case. Well, by using $\text{le_z}$ and $\text{nat}\, I_z$, then applying $\forall E$ on the results, we have that $z \leq z$. $\checkmark$
Now for the induction step. Using $\text{le_suc}$ and an arbitrary witness $n:\text{nat}$, we can apply $\forall E$ on $x$ to get
$$\forall y:\text{nat}.n \leq y \subset \text{suc }n \leq \text{suc }y$$
Using the same witness $n$, we can apply $\forall E$ again, but this time on $y$, to get
$$n \leq n \subset \text{suc }n \leq \text{suc }n$$
But by the induction hypothesis, $n \leq n$. Thus we can use $\subset E$ on the i.h. to get
$$\text{suc }n \leq \text{suc }n$$
as desired. $\blacksquare$
5.2.2Proof terms¶
Here's the induction rule again, this time annotated with proof terms (representing a recursive program):
$$\frac{\Gamma \vdash t: \text{nat} \qquad \Gamma \vdash M_z : A(z) \qquad \Gamma, n:\text{nat}, f\,n: A(n) \vdash M_{\text{suc}} : A(\text{suc}\,n)}{\Gamma \vdash \text{rec}^{\forall x:\text{nat}.A(x)} t \text{ with } f z \to M_z \mid f (\text{suc }n) \to M_{\text{suc}} : A(t)}\, \text{nat}\, E^{n, f\,n}$$
5.2.2.1Reduction rules¶
For the $z$ rule:
$$\text{rec}^A z \text{ with } f z \to M_z \mid f(\text{suc }n) \to M_{\text{suc}} \implies_R M_z$$
For the successor rule:
$$\text{rec}^A (\text{suc }t) \text{ with } f z \to M_z \mid f(\text{suc }n) \to M_{\text{suc}} \implies_R [t/n][r / f\,n]M_{\text{suc}}$$
where $r = \text{rec}^A t \text{ with } f z \to M_z \mid f(\text{suc }n) \to M_{\text{suc}}$ (i.e., the recursive call).
5.2.2.2Congruence rules¶
For $N_z$:
$$\frac{N_z \implies N_z'}{\text{rec}^A t \text{ with } f z \to N_z \mid f(\text{suc }n) \to N_{\text{suc}} \implies \text{rec}^A t \text{ with } f z \to N_z' \mid f(\text{suc }n) \to N_{\text{suc}}}$$
For $N_{\text{suc}}$:
$$\frac{N_{\text{suc}} \implies N_{\text{suc}}'}{\text{rec}^A t \text{ with } f z \to N_z \mid f(\text{suc }n) \to N_{\text{suc}} \implies \text{rec}^A t \text{ with } f z \to N_z \mid f(\text{suc }n) \to N_{\text{suc}}'}$$
5.2.2.3Subject reduction¶
To prove that subject reduction holds, we just do the same thing we did before. For the base case, get the assumption, use inversion on the assumption (using the induction rule) to get the relevant 3 terms. Then invert $\text{nat} I_{\text{suc}}$ on $\text{suc }t$ to get $t:\text{nat}$. Then use the induction rule to get something we can use the substitution lemma on (twice).
For the step case, get the assumption, use inversion on the induction rule, then the induction hypothesis, then the induction rule again.
5.2.3Successor existence proof¶
Here's another example proof:
$$\forall x:\text{nat}.\neg(x = z) \subset \exists y:\text{nat}.\text{suc }y = x$$
We define equality as follows (using reflexivity):
$$\text{ref }: \forall x:\text{nat}.x = x$$
As usual, we prove this using induction (on $x$). For the base case, we have both $u=\neg (z = z)$ and $z = z$, by ref; obviously this is a contradiction, so we get $\bot$, from which we can conclude anything; in this case we conclude $\exists y:\text{nat}.\text{suc }y = z$. Then we just use $\subset I^u$.
For the step case, we have $\text{suc }n = \text{suc }n$ (by ref) and also that $n: \text{nat}$. Using $n$ as a witness, we use $\exists I$ to get $\exists y:\text{nat}.\text{suc }y = \text{suc }n$. Then we just use $\subset I^u$ where $u = \neg (\text{suc }n=z)$ is an assumption we don't actually need to use. So we don't even need to use the induction hypothesis for this proof.
Incidentally, the proof terms for this proof give us a program: the predecessor function.
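Concretely, reusing the Nat type from the sketch above, the extracted program would look something like this (my transcription):

```haskell
-- predecessor, read off the proof: the z branch is the contradiction case,
-- so any answer will do; the suc branch returns the witness n.
predecessor :: Nat -> Nat
predecessor Z       = Z
predecessor (Suc n) = n
```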
5.3Defining terms¶
idk, is this important?
5.4Lists¶
$$\frac{}{\text{nil}: \text{list}} \qquad \qquad \frac{h:\text{nat} \qquad l:\text{list}}{\text{cons}\, h\, l:\text{list}}$$
Here's the induction rule for lists:
$$\frac{\Gamma \vdash s: \text{list} \qquad \Gamma \vdash A(\text{nil}) \qquad \Gamma, n:\text{nat}, t:\text{list}, \text{ih}:A(t) \vdash A(\text{cons } n t)} {\Gamma \vdash A(s)} \, \text{list} E^{n, t, \text{ih}}$$
Not going to write the annotated version as it'll take forever, but you can figure it out pretty easily.
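For what it's worth, here is my Haskell sketch of the recursor the rule corresponds to (reusing Nat from earlier):

```haskell
data List = Nil | Cons Nat List

recList :: a -> (Nat -> List -> a -> a) -> List -> a
recList mnil _     Nil        = mnil                              -- A(nil)
recList mnil mcons (Cons h t) = mcons h t (recList mnil mcons t)  -- A(cons h t), given the i.h.
```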
5.4.1Keeping track of length¶
$$\frac{}{\text{nil}:\text{list}\, z} \qquad \frac{h:\text{nat} \qquad l:\text{list}\,n}{\text{cons } \{n\}\, h\, l: \text{list}\,(\text{suc }n)}$$
Induction rule (only going to write it out informally because the formal definition as an elimination rule is too long):
- $n$ is of type nat, $l$ is of type list and has length $n$.
- Establish the base case (that $A(z, \text{nil})$ holds).
- Establish the step case - for any $m:\text{nat}$, $h:\text{nat}$ and $t:\text{list }m$, assume $A(m, t)$ holds (i.e., the ih) and prove that $A(\text{suc }m, \text{cons} \{m \}\, h\, t)$ holds as well.
5.4.2Example predicates¶
Sorted, which takes a list and returns true if the elements are in increasing order:
$$\frac{\,}{\text{sorted nil}}\,\text{sorted_nil}$$
$$\frac{\text{sorted cons(N, L)} \qquad M \leq N} {\text{sorted cons(M, cons(N, L))}} \,\text{sorted_cons} $$
Source: assignment 3, exercise 1.
Member, which takes in a list and an element and returns true if that element is in the list:
$$\frac{\,}{\text{member N cons(N, L)}}\,\text{member_cons}$$
$$\frac{\text{member N L}}{\text{member N cons(M, L)}}\,\text{member_trans}$$
Source: assignment 3, exercise 1
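These are inductive relations, not functions, but executable Bool versions (my sketch, over ordinary Haskell lists of Ints) show the same case structure:

```haskell
-- sorted_nil / sorted_cons
sorted :: [Int] -> Bool
sorted (m : n : l) = m <= n && sorted (n : l)
sorted _           = True    -- nil and singleton lists are vacuously sorted

-- member_cons / member_trans
member :: Int -> [Int] -> Bool
member _ []      = False
member n (m : l) = n == m || member n l
```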
Insert: omitted cus the specification is weird and there's no point
Source: assignment 3, exercise 1
5.4.3Proving predicates¶
The length-tracking list type proposed above is really just a list predicate. Let's generalise this to reason about predicates in general, using our $\leq$ example from earlier (and turning it into a predicate with the following rules):
$$\frac{}{z \leq X}\, \text{le_z} \qquad \frac{X \leq Y}{\text{suc }X \leq \text{suc }Y} \, \text{le_suc}$$
We can prove the transitivity of $\leq$ using structural induction on $\mathcal D: M \leq N$:
$$\text{If } M \leq N \text{ and } N \leq K \text{ then } M \leq K$$
For the base case, we have
$$\mathcal D = \frac{}{z \leq X} \,\text{le_z}$$
We need to show that if $z \leq X$ and $X \leq K$, then $z \leq K$ for any $K$. But this follows immediately from $\text{le_z}$ if we just have $K$ instead of $X$ in the conclusion. $\checkmark$
For the step case, we have:
$$\mathcal D = \frac{\overbrace{X \leq Y}^{\mathcal D_1}}{\text{suc }X \leq \text{suc }Y}\,\text{le_suc}$$
We assume the induction hypothesis: if $X \leq Y$ and $Y \leq K$ then $X \leq K$. We need to prove that if $\text{suc }X \leq \text{suc }Y$ and $\text{suc }Y \leq \text{suc }K$, then $\text{suc }X \leq \text{suc }K$. (Writing the second derivation with $\text{suc }K$ rather than a bare $K$ is justified: by inversion, $\text{suc }Y \leq K'$ can only have been derived by $\text{le_suc}$, so $K'$ must itself be a successor.)
Well, by $\mathcal D_1$, we have that $X \leq Y$. Then by the induction hypothesis, $X \leq K$. Finally, if we apply $\text{le_suc}$ to that, we get $\text{suc }X \leq \text{suc }K$, as desired.
We could prove this more formally, using the actual induction rule, but it's pretty tedious so fuckit.bel.
6Sequent calculus¶
Normal derivations: derivations which do not admit detours.
Annotating natural deduction rules means to put in the arrows and shit.
6.1Normal and neutral terms¶
For a proof term to be in normal form, it must not contain a redex (a subterm that can be reduced).
A neutral proof term CANNOT BE REDUCED
Introduction rules give us normal derivations
List of normal terms
- $\lambda x:A.M$ where $M$ is normal
- $\langle M, N \rangle$ where $M$ and $N$ are normal
- $()$
- any neutral term (defined below)
We write $M:A \uparrow$ to indicate that $A$ has a normal deduction, described by the normal term $M$.
List of neutral terms
- $x$
- $\text{fst } R$ where $R$ is neutral
- $\text{snd } R$ where $R$ is neutral
- $R\,N$ where $R$ is neutral and $N$ is normal
We write $R:A \downarrow$ to indicate that $A$ is extracted from a hypothesis, described by the neutral term $R$. All assumptions will be of this form.
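The two classes are mutually recursive; as a Haskell sketch (mine), for the $\land$/$\subset$/$\top$ fragment:

```haskell
data Normal  = Lam String Normal     -- λx:A.M, with M normal
             | NPair Normal Normal   -- ⟨M, N⟩, both normal
             | Unit                  -- ()
             | Neu Neutral           -- coercion ↓↑: every neutral term is normal

data Neutral = V String              -- a variable x
             | NFst Neutral          -- fst R
             | NSnd Neutral          -- snd R
             | App Neutral Normal    -- R N, R neutral, N normal
```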
6.2Rules¶
6.2.1Hypothesis¶
$$ \frac{}{\Gamma^{\downarrow}_1, u:A \downarrow, \Gamma^{\downarrow}_2 \vdash u:A \downarrow}\,u $$
This just says that from a list of assumptions, we can extract one.
6.2.2Coercion¶
$$ \frac{\Gamma^{\downarrow} \vdash R:A \downarrow}{\Gamma^{\downarrow} \vdash R:A \uparrow}\, \downarrow\uparrow $$
In other words, every neutral term is a normal term.
6.2.3Conjunction¶
Introduction rule
$$\frac{\Gamma^{\downarrow} \vdash M:A\uparrow \qquad \Gamma^{\downarrow} \vdash N:B \uparrow}{\Gamma^{\downarrow} \vdash \langle M, N \rangle: A \land B \uparrow} \, \land I$$
Elimination rules
$$\frac{\Gamma^{\downarrow} \vdash R:A \land B\downarrow}{\Gamma^{\downarrow} \vdash \text{fst } R: A \downarrow}\, \land E_l \qquad \frac{\Gamma^{\downarrow} \vdash R:A \land B \downarrow}{\Gamma^{\downarrow} \vdash \text{snd } R:B \downarrow}\, \land E_r$$
Alternatively, if the rule were
$$\frac{\Gamma \vdash A \land B \qquad \Gamma, u:A, v:B \vdash C}{\Gamma \vdash C}\, E^{u, v}$$
then the rule with arrows would be
$$\frac{\Gamma^\downarrow \vdash A \land B \downarrow \qquad \Gamma^\downarrow, u:A \downarrow, v: B \downarrow \vdash C \uparrow}{\Gamma^\downarrow \vdash C \uparrow} \, \land E^{u, v}$$
Source: assignment 4, exercise 1
6.2.4Truth¶
$$\frac{}{\Gamma^{\downarrow} \vdash ():\top \uparrow}\, \top I$$
6.2.5Implication¶
Introduction rule
$$\frac{\Gamma^{\downarrow}, x:A \downarrow \vdash M:B \uparrow}{\Gamma^{\downarrow} \vdash \lambda x:A . M:A \subset B\uparrow} \, \subset I^x$$
Elimination rule
$$\frac{\Gamma^{\downarrow} \vdash R: A \subset B \downarrow \qquad \Gamma^{\downarrow} \vdash M:A \uparrow}{\Gamma^{\downarrow} \vdash R \, M: B\downarrow}\,\subset E$$
6.2.6Disjunction¶
Introduction rules
$$\frac{\Gamma^{\downarrow} \vdash M: A\uparrow}{\Gamma^{\downarrow} \vdash \text{inl}^A\, M: A \lor B \uparrow}\, \lor I_l \qquad \frac{\Gamma^{\downarrow} \vdash N: B\uparrow}{\Gamma^{\downarrow} \vdash \text{inr}^B \, N:A \lor B \uparrow}\, \lor I_r$$
Elimination rule
$$\frac{\Gamma^{\downarrow}\vdash R:A \lor B \downarrow \qquad \Gamma^{\downarrow}, x:A \downarrow \vdash M_l:C \uparrow \qquad \Gamma^{\downarrow}, y:B \downarrow \vdash M_r : C\uparrow}{\Gamma^{\downarrow} \vdash \text{case } R \text{ of inl}^A\, x \to M_l \mid \text{inr}^B \, y \to M_r:C \uparrow} \, \lor E^{x, y}$$
Note that in the elim rule, we could also allow $M_l$ and $M_r$ to be extractions (with $\downarrow$ instead of $\uparrow$), but apparently that complicates things; presumably because a case term would then count as an extraction, which breaks the simple characterisation of neutral terms as a variable under a stack of eliminations.
6.2.7Falsehood¶
Introduction rule
None.
Elimination rule
$$\frac{\Gamma^{\downarrow} \vdash R: \bot \downarrow}{\Gamma^{\downarrow} \vdash \text{abort}^C \, R: C\uparrow}\, \bot E$$
6.2.8Universal quantifier¶
Introduction rule
$$\frac{\Gamma^{\downarrow} \vdash [a/x]A \uparrow}{\Gamma^{\downarrow} \vdash \forall x.A \uparrow}\, \forall I^a$$
Elimination rule
$$\frac{\Gamma^{\downarrow} \vdash \forall x.A \downarrow}{\Gamma^{\downarrow} \vdash [t/x]A \downarrow} \, \forall E$$
Source: Pfenning's notes (they're left as an exercise in Pientka's notes)
6.2.9Existential quantifier¶
Introduction rule
$$\frac{\Gamma^{\downarrow} \vdash [t/x]A \uparrow}{\Gamma^{\downarrow} \vdash \exists x.A \uparrow}\,\exists I$$
Elimination rule
$$\frac{\Gamma^{\downarrow} \vdash \exists x.A \downarrow \qquad \Gamma^{\downarrow}, u:[a/x]A \downarrow \vdash C \uparrow}{\Gamma^{\downarrow} \vdash C \uparrow}\,\exists E^{a,u}$$
Source: ibid
6.2.10Negation¶
Introduction rule
$$\frac{\Gamma^{\downarrow}, u:A \downarrow \vdash p \uparrow}{\Gamma^{\downarrow} \vdash \neg A \uparrow}\, \neg I^{p, u}$$
Elimination rule
$$\frac{\Gamma^{\downarrow} \vdash \neg A \downarrow \qquad \Gamma^{\downarrow} \vdash A \uparrow}{\Gamma^{\downarrow} \vdash C \uparrow} \, \neg E$$
Source: ibid
6.3Soundness¶
This just says that if $\Gamma^{\downarrow} \vdash M:A \uparrow$ then $\Gamma \vdash M:A$ and if $\Gamma^{\downarrow} \vdash R:A \downarrow$ then $\Gamma \vdash R:A$. To drop the arrow from the $\Gamma$ just means to remove the arrows from the individual assumptions.
We can prove this by structural induction. In the base case, we have
$$\mathcal D = \frac{}{\Gamma_1^{\downarrow}, x:A \downarrow, \Gamma_2^{\downarrow} \vdash x: A \downarrow}\, x$$
Then we just drop the arrows lol.
For the step case, we have to consider all the different rules that we listed above. There are a ton, so I'm only going to do a couple:
Coercion
$$\mathcal D = \frac{\overbrace{\Gamma^{\downarrow} \vdash R:A \downarrow}^{\mathcal D_1}}{\Gamma^{\downarrow} \vdash R:A \uparrow}\, \downarrow \uparrow$$
We just apply the i.h. on $\mathcal D_1$ to conclude that $\Gamma \vdash R:A$.
Implication introduction
$$\mathcal D =\frac{\overbrace{\Gamma^\downarrow, x:A \downarrow \vdash M : B \uparrow}^{\mathcal D'}}{\Gamma^\downarrow \vdash \lambda x:A.M:A \subset B \uparrow}\,\subset I^x$$
First, by i.h. on $\mathcal D'$, we get that $\Gamma, x:A \vdash M:B$. Then, using $\subset I^x$ we get $\Gamma \vdash \lambda x:A.M:A \subset B$, as desired.
(I think there's a typo in Pientka's notes - it says $A.B$ instead of $A.M$.)
6.4Completeness¶
However, if we just use the rules we have above, then we don't have a complete system of natural deduction. Specifically, there are things we cannot construct, and I would give an example but Pientka's notes on this don't make any sense and I can't figure out where the typo is so fuck it.
If we want to get completeness, we need to temporarily add some sort of $\uparrow \downarrow$ rule (the reverse of the coercion rule $\downarrow \uparrow$), which lets a normal derivation be used as an extraction. But then this allows non-normal derivations.
$\vdash^+$ is like $\vdash$ except you allow non-normal derivations as well, using that $\uparrow \downarrow$ rule. (It is not the same as the coercion rule $\downarrow \uparrow$; the arrows go the other way.)
6.5Actual sequent calculus¶
This is closely related to the bi-directional natural deduction system. We reason from the bottom-up. Elimination rules become left rules.
6.5.1Initial sequent¶
$$\frac{}{\Gamma, u:A \implies A}\,\text{init}$$
6.5.2Conjunction¶
Right rule
$$\frac{\Gamma \implies A \qquad \Gamma \implies B}{\Gamma \implies A\land B} \, \land R$$
Left rules
$$\frac{\Gamma, u:A \land B, v: A \implies C}{\Gamma, u:A\land B\implies C}\, \land L_1 \qquad \frac{\Gamma, u:A \land B, v:B \implies C}{\Gamma, u:A \land B \implies C}\, \land L_2$$
6.5.3Implication¶
Right rule
$$\frac{\Gamma, u:A \implies B}{\Gamma \implies A \subset B}\,\subset R^u$$
Left rule
$$\frac{\Gamma, u:A \subset B \implies A \qquad \Gamma, u:A \subset B, v:B \implies C}{\Gamma, u:A \subset B \implies C} \, \subset L$$
6.5.4Disjunction¶
Right rules
$$\frac{\Gamma \implies A}{\Gamma \implies A\lor B}\, \lor R_1 \qquad \frac{\Gamma \implies B}{\Gamma \implies A \lor B}\,\lor R_2$$
Left rule
$$\frac{\Gamma, u:A \lor B, v:A \implies C\qquad \Gamma, u:A \lor B, w:B \implies C}{\Gamma, u:A\lor B \implies C}\, \lor L^{v, w}$$
6.5.5Truth¶
Right rule
$$\frac{}{\Gamma \implies \top} \, \top R$$
Left rule
None, since there is no elimination rule for $\top$.
6.5.6Falsehood¶
Right rule
None, since there is no introduction rule.
Left rule
$$\frac{}{\Gamma, u:\bot \implies C} \, \bot L$$
6.5.7Universal quantifier¶
Right rule
$$\frac{\Gamma \implies [a/x]A}{\Gamma \implies \forall x.A}\, \forall R^a$$
Left rule
$$\frac{\Gamma, \forall x.A, [t/x]A \implies C}{\Gamma, \forall x.A \implies C}\, \forall L$$
Source: Pfenning's notes.
6.5.8Existential quantifier¶
Right rule
$$\frac{\Gamma \implies [t/x]A}{\Gamma \implies \exists x.A} \, \exists R$$
Left rule
$$\frac{\Gamma, \exists x.A, [a/x]A \implies C}{\Gamma, \exists x.A \implies C}\, \exists L^a$$
Source: ibid.
6.5.9Negation¶
Right rule
$$\frac{\Gamma, A \implies p}{\Gamma \implies \neg A}\, \neg R^p$$
Left rule
$$\frac{\Gamma, \neg A \implies A}{\Gamma, \neg A \implies C} \, \neg L$$
6.5.10Theoretical properties¶
- Weakening
- If $\Gamma \implies C$ then $\Gamma, u:A \implies C$ (so we can add arbitrary assumptions).
- Contraction
- If $\Gamma, u:A, v:A \implies C$ then $\Gamma, u:A \implies C$.
- Soundness
- If $\Gamma \implies C$ then $\Gamma^\downarrow \vdash C\uparrow$.
- Completeness
- If $\Gamma^\downarrow \vdash C\uparrow$ then $\Gamma \implies C$. (typo in her notes - it just says der instead of $\vdash$. literally der.)
- Also, if $\Gamma^\downarrow \vdash A \downarrow$ and $\Gamma, u:A \implies C$ then $\Gamma \implies C$.
We can prove these properties via structural induction on the first derivation, as usual. Proofs omitted for the first two because they're easy and long. For soundness, we have to cover all the possible left/right rules that can be used. (Not going to show them all, just a few.)
Start with $\mathcal D = \Gamma, u:C\implies C$ being derived using init. We can get $\Gamma^\downarrow, u:C \downarrow \vdash C \downarrow$ directly from the hypothesis $u$. Then, $\Gamma^\downarrow, u:C \downarrow \vdash C\uparrow$ via the coercion rule $\downarrow \uparrow$, which is what we want to prove. $\checkmark$
Now suppose $\mathcal D$ was derived using $\subset R^u$. We can use the i.h. on the previous derivation ($\mathcal D'$) then use $\subset I^u$.
It's pretty similar for other rules too.
6.5.11The cut rule¶
$$\frac{\Gamma \implies A \qquad \Gamma, u:A \implies C}{\Gamma \implies C}\, \text{cut}$$
Pretend the $\implies$ here has a + on top.
Let's add this rule to our sequent calculus. If we can show that there is a link between the extended bi-directional natural deduction system and this, then we'll be able to show that any proposition provable in natural deduction has a normal proof.
The result is sound: if $\Gamma \implies C$ then $\Gamma^\downarrow \vdash^+ C \uparrow$. We can prove this via induction on the structure, as with the soundness proof before, but with one more case for handling the cut rule.
It is also complete. Proof by structural induction, as usual.
6.5.11.1Admissibility¶
In fact, cut is actually admissible! This means that we don't need to add it to the sequent calculus as a rule: whenever its premises are derivable, its conclusion is already derivable without it.
To prove this, we need to prove that if $\mathcal D: \Gamma \implies A$ and $\mathcal E:\Gamma, A \implies C$, then $\Gamma \implies C$.
$A$ is the cut formula. The principal formula is the thing that is the focus of each left or right rule.
We use nested induction on the structure of $A$, the derivation $\mathcal D$ of $\Gamma \implies A$ and $\mathcal E$ of $\Gamma, A \implies C$.
First case: $\mathcal D$ is an initial sequent. Easy to prove using assumptions and contraction.
Next case: $\mathcal E$ is an initial sequent and uses the cut formula. Can prove using $\mathcal D$.
There are a bunch of other cases as well but I don't really have time.
To find a proof for $A$ in natural deduction calculus, we just need to show that there exists a proof in sequent calculus (without cut).
A corollary of all this shit is that there is no derivation of $\vdash \bot$ (i.e., the system is consistent).
Also, we can't derive the law of the excluded middle.
6.5.12Inversion¶
A rule is invertible when its premises are derivable whenever its conclusion is. Look for an $\iff$ situation between the premises and the conclusion.
Some rules are, some aren't. For example, the $\land R$ rule is invertible, as is implication. The left rule for disjunction is invertible too, but the right ones aren't.
The $\top$ and $\bot$ rules are not invertible since they have no premises.
6.5.13Phases¶
Something about synchronous/asynchronous phases. idgi at all.
$$ \begin{align} & \Delta; \Gamma \implies [L] \tag{asynchronous phase (right)} \\ & \Delta; [\Gamma] \implies R \tag{asynchronous phase (left)} \\ & \Delta > R \tag{synchronous phase (right)} \\ & \Delta > L \implies R \tag{synchronous phase (left)} \end{align} $$
Used in theorem proving. Asynchronous phase: when you apply the invertible rules (no real choices to make). Synchronous phase: when you have to commit to a choice. Right = committing to work on the right hand side of the sequent; left = committing to work on the left side, choosing an assumption from $\Delta$. I still don't know what this means.