This chapter presents detailed examples of certain tactics, to illustrate their behavior.
This tactic applies to any goal. It behaves like exact, with one important difference: the user can leave some holes (denoted by _ or (_:type)) in the term. refine will generate as many subgoals as there are holes in the term. The type of each hole must be either synthesized by the system or declared by an explicit cast such as (_:nat->Prop). This low-level tactic can be useful to advanced users.
Example:
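A minimal illustrative session (the inductive type Option and the definition get are only an example) shows how a hole is left for the absurd case and filled afterwards:

    Inductive Option : Set :=
      | Fail : Option
      | Ok : bool -> Option.

    Definition get : forall x:Option, x <> Fail -> bool.
    refine
      (fun x:Option =>
         match x return x <> Fail -> bool with
         | Fail => _              (* hole: a term of type Fail <> Fail -> bool *)
         | Ok b => fun _ => b
         end).
    (* one subgoal is generated for the hole *)
    intros; absurd (Fail = Fail); trivial.
    Defined.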
Example: Assume we have a relation on nat which is transitive:
Consider the goal (R n p) provable using the transitivity of R:
The direct application of Rtrans with apply fails because no value for y in Rtrans is found by apply:
A solution is rather to apply (Rtrans n m p).
More elegantly, apply Rtrans with (y:=m) allows mentioning only the unknown m:
Another solution is to mention the proof of (R x y) in Rtrans...
... or the proof of (R y z):
Alternatively, one can use eapply, which postpones the problem of finding m. One can then apply the hypotheses Rnm and Rmp; this instantiates the existential variable and completes the proof.
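A consolidated sketch of the whole session (the section, hypothesis, and lemma names are illustrative):

    Section Trans.
    Variable R : nat -> nat -> Prop.
    Hypothesis Rtrans : forall x y z:nat, R x y -> R y z -> R x z.
    Variables n m p : nat.
    Hypothesis Rnm : R n m.
    Hypothesis Rmp : R m p.

    Lemma Rnp : R n p.
    Proof.
      (* apply Rtrans.                fails: no value is found for y           *)
      (* apply (Rtrans n m p).        works: all the arguments are given       *)
      (* apply Rtrans with (y:=m).    works: only the unknown m is mentioned   *)
      (* apply Rtrans with (1:=Rnm).  works: m is deduced from the proof Rnm   *)
      (* apply Rtrans with (2:=Rmp).  works: m is deduced from the proof Rmp   *)
      eapply Rtrans.   (* postpones the choice of y as an existential variable *)
      apply Rnm.       (* instantiates the existential variable with m *)
      apply Rmp.
    Qed.
    End Trans.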
Example 1: Induction scheme for tree and forest
The principle of mutual induction for tree and forest over the sort Set is generated by the command:
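For instance, assuming tree and forest are declared as the usual mutual inductive type (recalled here so the sketch is self-contained):

    Parameters A B : Set.

    Inductive tree : Set :=
      node : A -> forest -> tree
    with forest : Set :=
      | leaf : B -> forest
      | cons : tree -> forest -> forest.

    Scheme tree_forest_rec := Induction for tree Sort Set
      with forest_tree_rec := Induction for forest Sort Set.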
You may now look at the type of tree_forest_rec:
This principle involves two different predicates, one for trees and one for forests; it also has three premises, each corresponding to a constructor of one of the inductive definitions.
The principle forest_tree_rec shares exactly the same premises; only the conclusion now refers to the property of forests.
Example 2: Predicates odd and even on naturals
Let odd and even be inductively defined as:
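An illustrative definition:

    Inductive odd : nat -> Prop :=
      oddS : forall n:nat, even n -> odd (S n)
    with even : nat -> Prop :=
      | evenO : even 0
      | evenS : forall n:nat, odd n -> even (S n).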
The following command generates a powerful elimination principle:
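For instance:

    Scheme odd_even := Minimality for odd Sort Prop
      with even_odd := Minimality for even Sort Prop.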
The type of odd_even for instance will be:
The type of even_odd shares the same premises, but the conclusion is forall n:nat, even n -> Q n.
We can define the induction principles for trees and forests using:
Then we can build the combined induction principle:
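For instance (a sketch; the individual scheme names are illustrative, and the schemes are taken in sort Prop so that their conclusions can be combined into a conjunction):

    Scheme tree_forest_ind := Induction for tree Sort Prop
      with forest_tree_ind := Induction for forest Sort Prop.

    Combined Scheme tree_forest_mutrec from tree_forest_ind, forest_tree_ind.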
The type of tree_forest_mutrec will be:
Example 1: Induction scheme for div2
We define the function div2 as follows:
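A sketch of the definition:

    Fixpoint div2 (n:nat) : nat :=
      match n with
      | O => 0
      | S O => 0
      | S (S n') => S (div2 n')
      end.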
A principle of induction corresponding to the recursive structure of div2 is generated by the command:
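For instance (in recent versions of Coq this requires Require Import FunInd beforehand):

    Functional Scheme div2_ind := Induction for div2 Sort Prop.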
You may now look at the type of div2_ind:
We can now prove the following lemma using this principle:
We can use the functional induction tactic (8.7.6) directly, instead of the pattern/apply trick:
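One possible proof, sketched here with functional induction (the lemma name div2_le is illustrative):

    Lemma div2_le : forall n:nat, div2 n <= n.
    Proof.
      intro n.
      (* instead of: pattern n, (div2 n); apply div2_ind; ... *)
      functional induction (div2 n); simpl; auto with arith.
    Qed.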
Remark: There is a difference between obtaining an induction scheme for a function by using Function (section 2.3) and by using Functional Scheme after a normal definition using Fixpoint or Definition. See 2.3 for details.
Example 2: Induction scheme for tree_size
We define trees by the following mutual inductive type:
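For instance (A and B are assumed parameters):

    Parameters A B : Set.

    Inductive tree : Set :=
      node : A -> forest -> tree
    with forest : Set :=
      | leaf : B -> forest
      | cons : tree -> forest -> forest.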
We define the function tree_size that computes the size of a tree or a forest. Note that we use Function, which generally produces better principles.
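A sketch of the definition (again, recent versions of Coq require Require Import FunInd for Function):

    Function tree_size (t:tree) : nat :=
      match t with
      | node a f => S (forest_size f)
      end
    with forest_size (f:forest) : nat :=
      match f with
      | leaf b => 1
      | cons t f' => tree_size t + forest_size f'
      end.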
Remark: Function itself generates the non-mutual induction principles tree_size_ind and forest_size_ind:
Mutual induction principles following the recursive structure of tree_size and forest_size are generated by the command:
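For instance:

    Functional Scheme tree_size_ind2 := Induction for tree_size Sort Prop
      with forest_size_ind2 := Induction for forest_size Sort Prop.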
You may now look at the type of tree_size_ind2:
When working with (co)inductive predicates, we are very often faced with some of these situations:
The inversion tactics are very useful to simplify the work in these cases. Inversion tools can be classified in three groups:
As inversion proofs may be large, we recommend that the user store the inversion lemmas whenever the same instance needs to be inverted several times.
Example 1: Non-dependent inversion
Let's consider the relation Le over natural numbers and the following variables:
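A sketch of the setup (P and Q are the predicates used in the discussion below; they are declared here as parameters):

    Inductive Le : nat -> nat -> Set :=
      | LeO : forall n:nat, Le 0 n
      | LeS : forall n m:nat, Le n m -> Le (S n) (S m).

    Parameter P : nat -> nat -> Prop.
    Parameter Q : forall n m:nat, Le n m -> Prop.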
For example, consider the goal:
To prove the goal, we may need to reason by cases on H and to derive that m is necessarily of the form (S m0) for some m0 and that (Le n m0) holds. Deriving these conditions corresponds to proving that the only possible constructor of (Le (S n) m) is LeS and that we can invert the -> in the type of LeS. This inversion is possible because Le is the smallest set closed under the constructors LeO and LeS.
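An illustrative fragment of such a session (the goal statement and the hypothesis name H are only for illustration):

    Goal forall n m:nat, Le (S n) m -> P n m.
    intros n m H.
    inversion_clear H.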
Note that m has been substituted in the goal for (S m0) and that the hypothesis (Le n m0) has been added to the context.
Sometimes it is useful to keep the equality m = (S m0) in the context for later use. In that case we can use inversion, which does not clear the equalities:
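For instance, on the same goal as above, before inverting:

    inversion H.   (* the equality between m and (S m0) is kept in the context *)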
Example 2: Dependent Inversion
Let us consider the following goal:
As H occurs in the goal, we may want to reason by cases on its structure, and so we would like the inversion tactics to substitute H by the corresponding term in constructor form. Neither Inversion nor Inversion_clear makes such a substitution. To obtain such behavior we use the dependent inversion tactics:
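An illustrative fragment:

    Goal forall (n m:nat) (H:Le (S n) m), Q (S n) m H.
    intros n m H.
    dependent inversion_clear H.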
Note that H has been substituted by (LeS n m0 l) and m by (S m0).
Example 3: using already defined inversion lemmas
For example, to generate the inversion lemma for the instance (Le (S n) m) and the sort Prop we do:
Then we can use the proven inversion lemma:
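A sketch covering both steps (the lemma name leminv is illustrative):

    Derive Inversion_clear leminv with (forall n m:nat, Le (S n) m) Sort Prop.

    Goal forall n m:nat, Le (S n) m -> P n m.
    intros n m H.
    inversion H using leminv.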
Here are two examples of the use of autorewrite. The first one (the Ackermann function) shows a quite basic use, with no conditional rewriting. The second one (the McCarthy function) involves conditional rewriting and shows how to deal with it using the optional tactic of the Hint Rewrite command.
Example 1: Ackermann function
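A sketch in the spirit of this example, with the Ackermann function axiomatized by its defining equations (all names are illustrative):

    Parameter Ack : nat -> nat -> nat.

    Axiom Ack0 : forall m:nat, Ack 0 m = S m.
    Axiom Ack1 : forall n:nat, Ack (S n) 0 = Ack n 1.
    Axiom Ack2 : forall n m:nat, Ack (S n) (S m) = Ack n (Ack (S n) m).

    Hint Rewrite Ack0 Ack1 Ack2 : base0.

    Lemma ResAck0 : Ack 3 2 = 29.
    Proof.
      autorewrite with base0 using try reflexivity.
    Qed.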
Example 2: McCarthy function
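A sketch in the same spirit; the conditional equations g1 and g2 are added to the rewriting base together with the optional tactic omega, which discharges their side conditions (names and statements are illustrative):

    Require Import Omega.

    Parameter g : nat -> nat -> nat.

    Axiom g0 : forall m:nat, g 0 m = m.
    Axiom g1 : forall n m:nat, n > 0 -> m > 100 -> g n m = g (pred n) (m - 10).
    Axiom g2 : forall n m:nat, n > 0 -> m <= 100 -> g n m = g (S n) (m + 11).

    Hint Rewrite g0 : base1.
    Hint Rewrite g1 g2 using omega : base1.

    Lemma Resg0 : g 1 110 = 100.
    Proof.
      autorewrite with base1 using reflexivity || simpl.
    Qed.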
The tactic quote allows using Barendregt's so-called 2-level approach without writing any ML code. Suppose you have a language L of 'abstract terms', a type A of 'concrete terms', and a function f : L -> A. If L is a simple inductive datatype and f a simple fixpoint, quote f will replace the head of the current goal by a convertible term of the form (f t). L must have a constructor of type A -> L.
Here is an example:
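A sketch of such an example (the names formula, interp_f and f_const match the discussion below; A and B are assumed propositional parameters):

    Require Import Quote.

    Parameters A B : Prop.

    Inductive formula : Type :=
      | f_and : formula -> formula -> formula
      | f_or : formula -> formula -> formula
      | f_not : formula -> formula
      | f_true : formula
      | f_const : Prop -> formula.       (* the constructor for constants *)

    Fixpoint interp_f (f:formula) : Prop :=
      match f with
      | f_and f1 f2 => interp_f f1 /\ interp_f f2
      | f_or f1 f2 => interp_f f1 \/ interp_f f2
      | f_not f1 => ~ interp_f f1
      | f_true => True
      | f_const c => c
      end.

    Goal A /\ (A \/ True) /\ ~ B /\ (A <-> A).
    quote interp_f.
    (* the conclusion is now of the form (interp_f t) for some term t *)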
The algorithm used to perform this inversion is: try to match the term against the right-hand sides of f. If there is a match, apply the corresponding left-hand side and call yourself recursively on the sub-terms. If there is no match, we are at a leaf: return the corresponding constructor (here f_const) applied to the term.
Error messages:
The normal use of quote is to do proofs by reflection: one defines a function simplify : formula -> formula and proves a theorem simplify_ok : forall f:formula, interp_f (simplify f) -> interp_f f. Then one can simplify formulas by doing:
quote interp_f. apply simplify_ok. compute.
But there is a problem with leaves: in the example above one cannot write a function that implements, for example, the logical simplifications A ∧ A → A or A ∧ ¬ A → False. This is because Prop is impredicative.
It is better to use the following type of formulas:
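For instance (redefining formula so that its leaves carry an index instead of an arbitrary proposition):

    Require Import Quote.

    Inductive formula : Set :=
      | f_and : formula -> formula -> formula
      | f_or : formula -> formula -> formula
      | f_not : formula -> formula
      | f_true : formula
      | f_atom : index -> formula.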
The type index is defined in the module Quote. Equality on that type is decidable, so we are able to simplify A ∧ A into A at the abstract level.
When there are variables, there are bindings, and quote also provides a type (varmap A) of bindings from index to any set A, and a function varmap_find to search in such maps. The interpretation function now has an extra argument, a variable map:
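The interpretation function then looks as follows (varmap and varmap_find come from the Quote module; the default value True is returned when an index is unbound):

    Fixpoint interp_f (vm:varmap Prop) (f:formula) {struct f} : Prop :=
      match f with
      | f_and f1 f2 => interp_f vm f1 /\ interp_f vm f2
      | f_or f1 f2 => interp_f vm f1 \/ interp_f vm f2
      | f_not f1 => ~ interp_f vm f1
      | f_true => True
      | f_atom i => varmap_find True i vm
      end.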
quote handles this second case properly:
It builds vm and t such that (f vm t) is convertible with the conclusion of current goal.
One can have both variables and constants in abstract terms; this is the case, for example, for the ring tactic (chapter 20). Then one must provide quote with a list of the constructors of constants. For example, if the list is [O S], then closed natural numbers will be considered as constants and other terms as variables.
Example:
Warning: Since function inversion is undecidable in the general case, don't expect miracles from it!
See also: the comments in the source file tactics/contrib/polynom/quote.ml
See also: the ring tactic (Chapter 20)
A first example, which shows how to use pattern matching over proof contexts, is the proof that natural numbers have more than two elements. The proof of such a lemma can be done as follows:
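A reconstruction sketch of such a proof (the lemma name card_nat is illustrative):

    Lemma card_nat :
      ~ (exists x : nat, exists y : nat, forall z:nat, x = z \/ y = z).
    Proof.
      red; intros (x, (y, Hy)).
      elim (Hy 0); elim (Hy 1); elim (Hy 2); intros;
        match goal with
        | _:(?a = ?b),_:(?a = ?c) |- _ =>
            cut (b = c); [ discriminate | transitivity a; auto ]
        end.
    Qed.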
We can notice that all the (very similar) cases coming from the three eliminations (with three distinct natural numbers) are successfully solved by a match goal structure and, in particular, with only one pattern (use of non-linear matching).
Another more complex example is the problem of permutation on closed lists. The aim is to show that a closed list is a permutation of another one.
First, we define the permutation predicate as shown in Figure 10.1.
Coq < Section Sort.
Coq < Variable A : Set.
Coq < Inductive permut : list A -> list A -> Prop :=
Coq < | permut_refl : forall l, permut l l
Coq < | permut_cons :
Coq < forall a l0 l1, permut l0 l1 -> permut (a :: l0) (a :: l1)
Coq < | permut_append : forall a l, permut (a :: l) (l ++ a :: nil)
Coq < | permut_trans :
Coq < forall l0 l1 l2, permut l0 l1 -> permut l1 l2 -> permut l0 l2.
Coq < End Sort.
Figure 10.1: Definition of the permutation predicate
Coq < Ltac Permut n :=
Coq < match goal with
Coq < | |- (permut _ ?l ?l) => apply permut_refl
Coq < | |- (permut _ (?a :: ?l1) (?a :: ?l2)) =>
Coq < let newn := eval compute in (length l1) in
Coq < (apply permut_cons; Permut newn)
Coq < | |- (permut ?A (?a :: ?l1) ?l2) =>
Coq < match eval compute in n with
Coq < | 1 => fail
Coq < | _ =>
Coq < let l1' := constr:(l1 ++ a :: nil) in
Coq < (apply (permut_trans A (a :: l1) l1' l2);
Coq < [ apply permut_append | compute; Permut (pred n) ])
Coq < end
Coq < end.
Permut is defined
Coq < Ltac PermutProve :=
Coq < match goal with
Coq < | |- (permut _ ?l1 ?l2) =>
Coq < match eval compute in (length l1 = length l2) with
Coq < | (?n = ?n) => Permut n
Coq < end
Coq < end.
PermutProve is defined
Figure 10.2: Permutation tactic
Next, we can naturally write the tactic; the result can be seen in Figure 10.2. Notice that we use two toplevel definitions, PermutProve and Permut. The function to be called is PermutProve, which computes the lengths of the two lists and, if they are equal, calls Permut with that length. Permut works as expected. If the two lists are equal, it concludes. Otherwise, if the lists have identical first elements, it applies Permut to the tails of the lists. Finally, if the lists have different first elements, it puts the first element of one of the lists (here the second one which appears in the permut predicate) at the end, if that is possible, i.e., if the full cycle of rotations has not yet been performed. To verify that all rotations have been tried for a list, we use the length of the list as an argument for Permut; this length is decremented at each rotation down to, but not including, 1, because for a list of length n we can make exactly n−1 rotations and so generate at most n distinct lists. Note that we use the natural numbers of Coq for the rotation counter. In Figure 9.1, we can see that it is possible to use usual natural numbers, but they are only used as arguments for primitive tactics and cannot be handled otherwise; in particular, we cannot make computations with them. So a natural choice is to use Coq data structures, so that Coq performs the computations (reductions) through eval compute in and we can get the terms back by match.
With PermutProve, we can now prove lemmas as follows:
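For instance (a sketch; it assumes the List library is loaded so that the list notations are available):

    Lemma permut_ex1 :
      permut nat (1 :: 2 :: 3 :: nil) (3 :: 2 :: 1 :: nil).
    Proof. PermutProve. Qed.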
Coq < Ltac Axioms :=
Coq < match goal with
Coq < | |- True => trivial
Coq < | _:False |- _ => elimtype False; assumption
Coq < | _:?A |- ?A => auto
Coq < end.
Axioms is defined
Figure 10.3: Deciding intuitionistic propositions (1)
Coq < Ltac DSimplif :=
Coq < repeat
Coq < (intros;
Coq < match goal with
Coq < | id:(~ _) |- _ => red in id
Coq < | id:(_ /\ _) |- _ =>
Coq < elim id; do 2 intro; clear id
Coq < | id:(_ \/ _) |- _ =>
Coq < elim id; intro; clear id
Coq < | id:(?A /\ ?B -> ?C) |- _ =>
Coq < cut (A -> B -> C);
Coq < [ intro | intros; apply id; split; assumption ]
Coq < | id:(?A \/ ?B -> ?C) |- _ =>
Coq < cut (B -> C);
Coq < [ cut (A -> C);
Coq < [ intros; clear id
Coq < | intro; apply id; left; assumption ]
Coq < | intro; apply id; right; assumption ]
Coq < | id0:(?A -> ?B),id1:?A |- _ =>
Coq < cut B; [ intro; clear id0 | apply id0; assumption ]
Coq < | |- (_ /\ _) => split
Coq < | |- (~ _) => red
Coq < end).
DSimplif is defined
Coq < Ltac TautoProp :=
Coq < DSimplif;
Coq < Axioms ||
Coq < match goal with
Coq < | id:((?A -> ?B) -> ?C) |- _ =>
Coq < cut (B -> C);
Coq < [ intro; cut (A -> B);
Coq < [ intro; cut C;
Coq < [ intro; clear id | apply id; assumption ]
Coq < | clear id ]
Coq < | intro; apply id; intro; assumption ]; TautoProp
Coq < | id:(~ ?A -> ?B) |- _ =>
Coq < cut (False -> B);
Coq < [ intro; cut (A -> False);
Coq < [ intro; cut B;
Coq < [ intro; clear id | apply id; assumption ]
Coq < | clear id ]
Coq < | intro; apply id; red; intro; assumption ]; TautoProp
Coq < | |- (_ \/ _) => (left; TautoProp) || (right; TautoProp)
Coq < end.
TautoProp is defined
Figure 10.4: Deciding intuitionistic propositions (2)
Pattern matching on goals allows complete, and thus powerful, backtracking when returning tactic values. An interesting application is the problem of deciding intuitionistic propositional logic. Considering the contraction-free sequent calculus LJT* of Roy Dyckhoff ([50]), it is quite natural to code such a tactic using the tactic language, as shown in Figures 10.3 and 10.4. The tactic Axioms tries to conclude using the usual axioms. The tactic DSimplif applies all the reversible rules of Dyckhoff's system. Finally, the tactic TautoProp (the main tactic to be called) simplifies with DSimplif, tries to conclude with Axioms, and explores several paths using the backtracking rules (one of Dyckhoff's four rules for the left implication, to get rid of the contraction, and the right or).
For example, with TautoProp, we can prove tautologies like these:
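For instance (a sketch; the lemma name is illustrative):

    Lemma tauto_ex1 : forall A B:Prop, A /\ B -> A \/ B.
    Proof. TautoProp. Qed.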
A trickier problem is to decide equalities between types modulo isomorphisms. Here, we choose to use the isomorphisms of the simply typed λ-calculus with Cartesian product and unit type (see, for example, [39]). The axioms of this λ-calculus are given in Figure 10.5.
Coq < Open Scope type_scope.
Coq < Section Iso_axioms.
Coq < Variables A B C : Set.
Coq < Axiom Com : A * B = B * A.
Coq < Axiom Ass : A * (B * C) = A * B * C.
Coq < Axiom Cur : (A * B -> C) = (A -> B -> C).
Coq < Axiom Dis : (A -> B * C) = (A -> B) * (A -> C).
Coq < Axiom P_unit : A * unit = A.
Coq < Axiom AR_unit : (A -> unit) = unit.
Coq < Axiom AL_unit : (unit -> A) = A.
Coq < Lemma Cons : B = C -> A * B = A * C.
Coq < Proof.
Coq < intro Heq; rewrite Heq; apply refl_equal.
Coq < Qed.
Coq < End Iso_axioms.
Figure 10.5: Type isomorphism axioms
Coq < Ltac DSimplif trm :=
Coq < match trm with
Coq < | (?A * ?B * ?C) =>
Coq < rewrite <- (Ass A B C); try MainSimplif
Coq < | (?A * ?B -> ?C) =>
Coq < rewrite (Cur A B C); try MainSimplif
Coq < | (?A -> ?B * ?C) =>
Coq < rewrite (Dis A B C); try MainSimplif
Coq < | (?A * unit) =>
Coq < rewrite (P_unit A); try MainSimplif
Coq < | (unit * ?B) =>
Coq < rewrite (Com unit B); try MainSimplif
Coq < | (?A -> unit) =>
Coq < rewrite (AR_unit A); try MainSimplif
Coq < | (unit -> ?B) =>
Coq < rewrite (AL_unit B); try MainSimplif
Coq < | (?A * ?B) =>
Coq < (DSimplif A; try MainSimplif) || (DSimplif B; try MainSimplif)
Coq < | (?A -> ?B) =>
Coq < (DSimplif A; try MainSimplif) || (DSimplif B; try MainSimplif)
Coq < end
Coq < with MainSimplif :=
Coq < match goal with
Coq < | |- (?A = ?B) => try DSimplif A; try DSimplif B
Coq < end.
DSimplif is defined
MainSimplif is defined
Coq < Ltac Length trm :=
Coq < match trm with
Coq < | (_ * ?B) => let succ := Length B in constr:(S succ)
Coq < | _ => constr:1
Coq < end.
Length is defined
Coq < Ltac assoc := repeat rewrite <- Ass.
assoc is defined
Figure 10.6: Type isomorphism tactic (1)
Coq < Ltac DoCompare n :=
Coq < match goal with
Coq < | [ |- (?A = ?A) ] => apply refl_equal
Coq < | [ |- (?A * ?B = ?A * ?C) ] =>
Coq < apply Cons; let newn := Length B in
Coq < DoCompare newn
Coq < | [ |- (?A * ?B = ?C) ] =>
Coq < match eval compute in n with
Coq < | 1 => fail
Coq < | _ =>
Coq < pattern (A * B) at 1; rewrite Com; assoc; DoCompare (pred n)
Coq < end
Coq < end.
DoCompare is defined
Coq < Ltac CompareStruct :=
Coq < match goal with
Coq < | [ |- (?A = ?B) ] =>
Coq < let l1 := Length A
Coq < with l2 := Length B in
Coq < match eval compute in (l1 = l2) with
Coq < | (?n = ?n) => DoCompare n
Coq < end
Coq < end.
CompareStruct is defined
Coq < Ltac IsoProve := MainSimplif; CompareStruct.
IsoProve is defined
Figure 10.7: Type isomorphism tactic (2)
The tactic to decide equalities modulo this axiomatization can be written as shown in Figures 10.6 and 10.7. The algorithm is quite simple. Types are reduced using the axioms that can be oriented (this is done by MainSimplif). The normal forms are sequences of Cartesian products without a Cartesian product in the left component. These normal forms are then compared modulo permutation of the components (this is done by CompareStruct). The main tactic to be called, which realizes this algorithm, is IsoProve.
Here are examples of what can be solved by IsoProve.
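For instance (a sketch; the lemma name is illustrative):

    Lemma isos_ex1 : forall A B:Set, A * unit * B = B * (unit * A).
    Proof. intros; IsoProve. Qed.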