Negative results about the power of static typing
I was thinking about Jon Riecke's remark
> One can certainly imagine fancier type systems preventing more of
> these checks, but the uncomputability argument means that static
> type systems cannot rule out all such errors.
and realized something that had never been clear to me:
If one wants to prove negative results concerning the power of
static typing, the programs being considered must communicate in
some way with their environment (e.g., by taking in inputs).
Let me explain... Suppose we formalize a programming language as
an (effectively presented) triple
    (P, ->, A)
where
    P  is the set of programs,
    -> is the finitely branching transition relation, and
    A  is the set of answers.
We assume that the elements of A are irreducible, but there may be
other elements of P that are irreducible; we consider these stuck
programs to be erroneous.
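As a concrete sketch of such a triple, here is a toy effectively
presented transition system in Python. The little language (answers, a
binary "split", and a looping program) is my own invention, chosen only
to make the definitions tangible:

```python
# A toy effectively presented transition system (P, ->, A).
# Programs are tuples: ('ans', n) is an answer in A; ('split', p, q)
# can step to either p or q; ('loop',) steps to itself forever.
# Anything else is irreducible but not an answer, i.e. stuck.

def successors(p):
    """The finitely branching transition relation: all one-step reducts of p."""
    if p[0] == 'split':
        return [p[1], p[2]]
    if p[0] == 'loop':
        return [p]
    return []  # answers and stuck programs are irreducible

def is_answer(p):
    """Membership in the (recursive) set of answers A."""
    return p[0] == 'ans'
```

Here `successors` is computable and membership in A is decidable, which
is all the "effectively presented" hypothesis asks for.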
Let's assume that our language is interesting, so that it won't be
decidable whether every execution sequence of a program terminates
with an answer.
On the other hand, we can easily augment our programming language with
a type system so that every program with a type will have this
As types we choose all the execution trees of (P, ->) that are finite
and whose leaves are all in A. The set of all types will then be
recursive. Say that a program p has type t iff the root of t is p.
The set of all pairs (p, t) such that p has type t will be recursive,
even though it won't be decidable whether a program has a type. And
every execution sequence of a program that has a type obviously
terminates with an answer.
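The decidable check on a pair (p, t) can be sketched as follows. The
representation of an execution tree as a pair of a program and the list
of subtrees for its successors, and the toy transition relation, are
assumptions of mine for illustration only:

```python
def successors(p):
    # Toy finitely branching transition relation: 'split' steps to
    # either branch, 'loop' steps to itself, everything else is irreducible.
    if p[0] == 'split':
        return [p[1], p[2]]
    if p[0] == 'loop':
        return [p]
    return []

def is_answer(p):
    return p[0] == 'ans'

def is_type(t):
    """t = (program, children), with one subtree per successor of program.
    t is a type iff it is a finite execution tree whose leaves are answers."""
    prog, children = t
    succs = successors(prog)
    if not succs:
        return is_answer(prog) and not children  # a leaf must be an answer
    if [c[0] for c in children] != succs:        # must cover every successor
        return False
    return all(is_type(c) for c in children)

def has_type(p, t):
    """'p has type t' iff t is a type whose root is p -- checking a given
    pair (p, t) terminates, even though typability itself is undecidable."""
    return t[0] == p and is_type(t)
```

Because the tree must cover every successor at every node, a typed
program's executions all end in answers; and no finite tree can cover
the self-looping program, so it has no type.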
Of course, I'm not claiming that this approach is in any way
practical---the types will be gigantic. I'm just arguing that one
must be careful when stating negative results concerning the
power of static typing.
On the other hand, if our programs take inputs from some infinite
set, then we can use diagonalization to show that no effective type
system for our language can both guarantee termination with an answer
and accept every program that in fact terminates with an answer on
every input.
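The diagonalization can be sketched as follows, with a finite toy list
of total functions (my stand-ins, not from the post) playing the role
of the infinite, effectively generated enumeration of typed programs:

```python
# Toy stand-ins for an enumeration of the typed programs; in the real
# argument the enumeration is infinite and generated effectively from
# the (recursive) set of pairs (program, type).
typed_programs = [lambda n: 0, lambda n: n, lambda n: n * n]

def diag(n):
    """Total and computable, yet it differs from the nth typed program
    at input n -- so no typed program computes it."""
    return typed_programs[n](n) + 1

# diag disagrees with every enumerated program at its own index:
assert all(diag(i) != typed_programs[i](i) for i in range(len(typed_programs)))
```

So diag is a terminating program that every such type system must
reject.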