COS 441 - Polymorphic Typing; Modules - April 16, 1996

Polymorphic Typing

Recall the substitution typing rule for let:

A |- e[1] : t[1]    A |- e[2][x |-> e[1]] : t[2]
------------------------------------------------
A |- (let ((x e[1])) e[2]) : t[2]
We can reformulate this rule to compute a polymorphic type for x by introducing type schemes that bind type variables.
Type Scheme:   s ::= (forall (a*) t)
       Type:   t ::= bool | num | t -> t | a
(forall (a*) t) binds the type variables a* in t. Note that forall cannot appear inside a type. We give x a type scheme by generalizing the type variables in its type. The new typing rule looks like:
A |- e[1] : t[1]     A[x |-> Close(t[1], A)] |- e[2] : t[2]
-------------------------------------------------------------
A |- (let ((x e[1])) e[2]) : t[2]
In each copy of e[1] in e[2][x |-> e[1]], the free variables of e[1] must have the same types in every copy, since they all get their types from A. These free variables may have type variables in their types, and those type variables must not be generalized. This leads to the following definition of Close.
Close (t, A) = (forall (a*) t)   where a* = FTV(t) - FTV(A) 
      FTV(x) = free type variables of x
This typing rule has exactly the same effect as the substitution-based one (for the simple language that we started with).

Of course, since type environments now map variables to type schemes, we have to modify the variable typing rule:

t < A(x)
----------
A |- x : t
where t < (forall (a1 ... an) t') if and only if there exist t1 ... tn such that t'[a1 -> t1, ..., an -> tn] = t.
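
For comparison, here is the same discipline in OCaml, which implements exactly this generalize-at-let, instantiate-at-use scheme (a sketch):

(* id is generalized at the let to the scheme (forall (a) (a -> a)),
   then instantiated at int and at bool at its two uses. *)
let () =
  let id = fun x -> x in
  assert (id 1 = 1);
  assert (id true = true)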

Now we can build a rule for polymorphic letrec.

A[x |-> t[1]] |- f : t[1]    A[x |-> Close(t[1], A)] |- e[2] : t[2]
-------------------------------------------------------------------
A |- (letrec ((x f)) e[2]) : t[2]
Notice that a recursive procedure is not polymorphic within its own body. You might ask why we do not use A[x |-> Close(t[1], A)] |- f : t[1] as the left antecedent. The answer is that type inference then becomes undecidable.
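
OCaml behaves the same way (a sketch): a recursive function is monomorphic while its own body is checked, and only generalized afterwards.

(* length is monomorphic inside its own body (every recursive call is at
   the same type), but polymorphic after the let rec: *)
let rec length = function [] -> 0 | _ :: tl -> 1 + length tl
let _ = (length [1; 2; 3], length [true; false])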

Exercise: Find an example where you would like x to be polymorphic within f.

Now consider assignment. The simplest sound solution is to restrict the right-hand side of a polymorphic let to be a syntactic value (a lambda, constant, or variable) [Wright '95]. Why is this restriction necessary? Consider:

make-box: (forall (a) (a -> (a box)))
   unbox: (forall (a) ((a box) -> a))
set-box!: (forall (a) ((a box) a -> unit))

(let ((f (make-box (lambda (x) x))))
  (set-box! f (lambda (x) (+ 1 x)))
  ((unbox f) #t))
Without the restriction, f gets the type scheme (forall (a) ((a -> a) box)), the expression type checks, and evaluating it applies (lambda (x) (+ 1 x)) to #t, a runtime type error. So when the language includes assignment, we can only allow let expressions whose right-hand sides are syntactic values to be polymorphic:
A |- e[1] : t[1]     A[x |-> Close(t[1], A)] |- e[2] : t[2]
------------------------------------------------------------- (e[1] a value)
A |- (let ((x e[1])) e[2]) : t[2]


A |- e[1] : t[1]     A[x |-> t[1]] |- e[2] : t[2]
------------------------------------------------- (e[1] not a value)
A |- (let ((x e[1])) e[2]) : t[2]
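
OCaml adopts essentially this value restriction. A sketch of its effect:

(* The right-hand side is an application, not a syntactic value, so r is
   NOT generalized: it gets a weak (monomorphic) type, not a scheme. *)
let r = ref (fun x -> x)

(* This first use fixes r's type at (int -> int) ref ... *)
let () = r := fun x -> x + 1

(* ... so a use at bool, like  (!r) true,  is now rejected by the type
   checker instead of crashing at run time. *)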

Type Soundness

It is critically important that when a type system says a value has a type, it really does (semantically).

Theorem: If 0 |- e : t then either (1) e diverges, or (2) e |->* v and 0 |- v : t.

Modules and Abstract Data Types

When we write programs about trees, stacks, lists, menus, etc., we would like to describe the "functional type" of a value in its type. In other words, we want types to record useful information about values, not simply their representation. A collection of lists of symbols may have the same internal shape, built from conses, vectors, and symbols, as a collection of stacks of symbols; even though the two can hold the same set of values, we want types to distinguish them. So types take on more meaning than just "sets of values": types will control which operations are applicable to which values. For example, we want to distinguish the following:

symlist ::= nil | (cons sym symlist)

symstack ::= empty | (push sym symstack)
even though they may have the same internal representation. Why? To make programs easier to change: if only stack operations can be applied to stacks, we can change the representation of stacks by changing only those operations. Lists won't be affected, nor will any code that uses them. ML's datatype declaration lets us declare a new type with associated constructors, predicates, and selectors for this purpose.
(datatype (symlist) (Nil) (Cons symbol symlist))

 make-Nil : ( -> symlist)
make-Cons : (symbol symlist -> symlist)
     Nil? : (symlist -> bool)
    Cons? : (symlist -> bool)
   Cons-a : (symlist -> symbol)
   Cons-b : (symlist -> symlist)
Semantically, this datatype declaration behaves just like:
(define-record (Nil))
(define-record (Cons (a b)))
What about polymorphic datatypes like (a list)?
(datatype (a Stack) (Empty) (Push a (a Stack)))

make-Empty: (forall (a) (-> (a Stack)))
 make-Push: (forall (a) (a (a Stack) -> (a Stack)))
The first occurrences of a and Stack in the datatype definition above are binding occurrences.

Now we can build operations on stacks and lists and the type system will keep them separate.
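
The same declarations in OCaml, as a rough sketch:

(* Monomorphic and polymorphic datatypes; stacks and lists are distinct
   types even if their run-time representations coincide. *)
type symlist = Nil | Cons of string * symlist

type 'a stack = Empty | Push of 'a * 'a stack

let l : symlist = Cons ("a", Nil)
let s : string stack = Push ("a", Empty)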

An ML-Style Module System

It's often a good idea to collect the operations on a datatype together in one place. A module is a construct that provides a private namespace for the datatype and its operations.

(define list-module
  (module
    (datatype (a List) ...)
    (define member ...)
    (define fold ...)
    ...))
We can access members of a module with
    (mod-ref e x)
where e is an expression that evaluates to a module and x is a name defined by the module. Or by
(open e)
which imports the definitions of e into the top level environment.
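
In OCaml, roughly (a sketch; the module and member names are invented for illustration):

module ListModule = struct
  type 'a t = Nil | Cons of 'a * 'a t
  let rec member x = function
    | Nil -> false
    | Cons (y, ys) -> x = y || member x ys
end

(* The analogue of (mod-ref e x) is dot notation: *)
let b = ListModule.member 1 (ListModule.Cons (1, ListModule.Nil))

(* The analogue of (open e): *)
open ListModule
let b' = member 2 Nil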

Since we are working in a typed language, we need to assign a "type" to module expressions. We call such a "module type" a signature. The signature for a module lists the new types the module introduces, along with the names it defines and their types:

signature ::= (sig (a* k)* (x t*)*)

(a* k)* -> new type names    (a in type variables, k in type constructor names)
(x t*)* -> new definitions   (x in variable names)
For example:
(sig (a List)
  (make-Cons (forall (a) (a (a List) -> (a List))))
  (make-Nil  (forall (a) (-> (a List))))
  (member ...)
...
)
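
OCaml calls signatures module types; a rough analogue of the example above (a sketch):

module type LIST = sig
  type 'a t                            (* the new type the module introduces *)
  val make_nil  : unit -> 'a t
  val make_cons : 'a -> 'a t -> 'a t
  val member    : 'a -> 'a t -> bool
end
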
The signature of a module is also called its interface or specification. To type check a use of a module, e.g. a mod-ref or open expression, we only need to know the module's signature, not its implementation. Hence we have achieved separation of implementation from specification.

Theorem (Separation): If modules M1 and M2 both have signature S, program P uses M1, and P has type T, then the program P' that results from replacing M1 with M2 in P also has type T.

Signature Matching

Suppose we have two signatures S1 and S2, where S2 has more names than S1, but every name they share has the same type in both. Then we should be able to use a module with signature S2 where we only need signature S1. This leads to one notion of signature matching:

{(x1 t1*) ... (xn tn*)} subset {(y1 t'1*) ... (ym t'm*)}
--------------------------------------------------------------------------
(sig (a* k) (x1 t1*) ... (xn tn*)) <= (sig (a* k) (y1 t'1*) ... (ym t'm*))
(This rule is not as general as it could be; specifically, the second signature might define more types, or give them different names.)
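
OCaml's structure-to-signature matching includes this first notion; a sketch (names invented):

module M : sig val x : int end = struct
  let y = 2         (* extra name, simply hidden by the signature *)
  let x = y - 1
end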

Now suppose two signatures S1 and S2 with the same names, but S2 defines polymorphic operations where S1 defines operations at a specific type. Again we should be able to use a module of signature S2 when we only need a module of signature S1. This leads to a second notion of signature matching:

t1 <= t'1     ...    tn <= t'n
--------------------------------------------------------------------------
(sig (a* k) (x1 t1*) ... (xn tn*)) <= (sig (a* k) (x1 t'1*) ... (xn t'n*))
This notion of matching requires a notion of matching on type schemes. Rather than define it precisely, here are a few examples:
num <= num
(t1 -> t2) <= (t3 -> t4)   if t1 <= t3 and t2 <= t4
t' <= (forall (a) t)       if exists t'' such that t' = t[a -> t'']
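
OCaml's matching includes this second notion too; a sketch:

module N : sig val id : int -> int end = struct
  let id x = x    (* inferred scheme (forall (a) (a -> a)) matches int -> int *)
end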

Now we can introduce an operation to constrain the signature of a module:

A |- e : sig[2]    sig <= sig[2]
--------------------------------
A |- (constrain e sig) : sig
We use this to hide the operations of a module. For example:
(define M1 (module (datatype (a List) ...) (define member ...)))

(define M2 (constrain M1 (sig (a List)
                              (car ((a List) -> a))
                              (cdr ((a List) -> (a List)))
                              (member ...))))
The signature of M1 includes make-Nil and make-Cons, but the signature of M2 does not. If we only make M2 available to some component of the program, that component can only manipulate lists, not build new ones.
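
In OCaml, constraining a module to a signature (sealing) has the same hiding effect; a sketch:

module M1 = struct
  type 'a t = Nil | Cons of 'a * 'a t
  let car = function Cons (x, _) -> x | Nil -> failwith "car"
  let cdr = function Cons (_, xs) -> xs | Nil -> failwith "cdr"
end

module M2 : sig
  type 'a t
  val car : 'a t -> 'a
  val cdr : 'a t -> 'a t
end = M1
(* M2's clients can take lists apart but cannot build new ones:
   the constructors Nil and Cons are not exported. *)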

Other languages use different (less flexible) mechanisms to control the visibility of definitions from modules. For instance, in CLU and Java, each definition within a module (a class in Java) carries a keyword (public, private) that indicates whether the definition is visible in the module's signature. A private definition is visible to other definitions within the module, but is not exported in the module's interface.

Parameterized Modules

Suppose you are building a compiler. The code generator of a compiler can be largely independent of the target machine, but it must obviously depend on certain characteristics of the target architecture. To support multiple architectures, build an abstract machine language and package it in a module. Build one module for each architecture:

(define mips-assembly
  (module (define make-load ...)
          (define make-store ...)
          ...))
All X-assembly modules have the same interface. The code generator makes calls to the assembly module to build generated code. Now to build a compiler for architecture X, use the X-assembly module in the code generator. But how do we do this without duplicating the code generator for each architecture? Parameterize the code generator!
(define code-generator
  (lambda (assembler)
    (module ...)))
Right? Wrong. "assembler" is a module, which has a signature, not a value, which has a type. We could try to make modules be values and signatures be types, but this leads to big difficulties (type inference becomes undecidable).

To handle this, ML has parameterized modules called functors:

(define code-generator
  (functor (asm)
    (module ...)))
Functors are like functions that map modules to modules. Ada uses a similar solution and calls them generic packages. C++ has templates.
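
In OCaml, functors look like this (a sketch; the assembler interface below is invented for illustration):

module type ASSEMBLY = sig
  val make_load  : int -> string
  val make_store : int -> string
end

module MipsAssembly : ASSEMBLY = struct
  let make_load  r = Printf.sprintf "lw $%d, 0($sp)" r
  let make_store r = Printf.sprintf "sw $%d, 0($sp)" r
end

(* A code generator parameterized over any assembler: *)
module CodeGenerator (Asm : ASSEMBLY) = struct
  let gen () = [Asm.make_load 1; Asm.make_store 2]
end

(* Instantiate once per architecture: *)
module MipsCodeGen = CodeGenerator (MipsAssembly)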