
the number of types in PER



Date: Tue, 25 Oct 88 22:48:52 EDT

To: kim@williams.edu, types@theory.LCS.MIT.EDU

Kim Bruce writes:

    A note on Tennant and Robinson's remarks on the Bruce-Longo
    model.  Mitchell raised similar questions last spring, and in
    fact we worked out the Lemma in Tennant and Robinson's remarks at
    the Semantics and Category theory conference at CMU in the spring
    (unfortunately too late to get into the LICS proceedings).  As
    Jategaonkar and Mitchell point out in the last paragraph of their
    paper in this summer's LISP conference, the Bruce-Longo model has
    "too many" types.  That is, since every subset of a type is a
    type, all single element subsets are subtypes, leading to the
    proof of the lemma.  Thus the idea is to provide a model with the
    subtypes generated by records, but not by all possible subsets.
    (The more I have thought about this, the more convinced I have
    become that subsets or subranges of types ought not to be
    considered subtypes, both because of problems like the above and,
    perhaps more compellingly, because of the difference between what
    is statically typeable and what must be checked dynamically -
    decidable vs. undecidable, etc.).
------------------------------------------

In the note that started this correspondence, Bob and I used the
Bruce-Longo model to prove some syntactic properties of bounded
fun. We certainly did not wish to lay claim to being the first to
see the lemma there, and only included its proof because it was so
short.
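Since the lemma's proof keeps being alluded to, it may be worth spelling
out the shape of the argument.  What follows is only a sketch, under my
reading of the Bruce-Longo model: types are PERs over a partial
combinatory algebra, A <= B is read as inclusion of PERs (this is how I
understand the remark that every subset of a type is a subtype), and the
bounded quantifier is interpreted as an intersection over the sub-PERs
of the bound.  If the definitions in the paper differ in detail, the
sketch should be adjusted accordingly.

    [[ ALL (a <= C). a -> a ]]  =  \bigcap \{ A \to A : A \subseteq [[C]] \}

    Suppose (f, f) lies in this intersection, and take any c with
    (c, c) \in [[C]].  The singleton S_c = \{ (c, c) \} is a sub-PER of
    [[C]], hence an admissible instance of the bound, so
    (f, f) \in S_c \to S_c.  By the definition

        A \to B  =  \{ (n, m) : (p, q) \in A \Rightarrow (n \cdot p, m \cdot q) \in B \},

    this forces (f \cdot c, f \cdot c) \in S_c, i.e. f \cdot c = c.  So
    every element of the type realizes the identity on the domain of
    [[C]], which is the triviality in question.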

However, I would like to differ with Bruce on the subject of the
model having "too many" types, and with the diagnosis that this is
the cause of the model's ills.

Bruce seems to imply that the triviality of the type
"ALL (a <= C).  a -> a"
presents a problem for the Bruce-Longo model.
He blames this problem on the presence of a host of uncontrollable types
not denoted by any syntactic expression.  Leaving aside the fact that
the types that cause the "problem" are singletons, and seem particularly
simple, it still seems to me that reducing the number of types in the model
is prima facie unlikely to make the types above any larger.  What it
will do is make their triviality much harder to prove.  I believe that
the problem has much more to do with the semantic definition of the <=
relation on types.  If we leave this definition unchanged then we are still
likely to get instances of a type A coercing to a type B where this is
completely unjustified by syntax.  The result will be that, if we are
lucky, we will be faced with a much harder combinatorial argument to
prove the triviality of the types above, and, if we are unlucky, we
will have to descend into the grungy details of the arithmetic
functions that interpret pairing, lambda abstraction, and the like to
find out for which C's the types are trivial and for which they are not.
Something of this seems to be acknowledged in the later stages of the
paragraph quoted above.
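
To make the point about <= concrete (again assuming that the model reads
A <= B as inclusion of the corresponding PERs):

    A <= B  holds in the model  iff  (n, m) \in [[A]] \Rightarrow (n, m) \in [[B]].

This condition depends only on the untyped interpretations of A and B.
Pruning the collection of types changes which PERs count as types, but
not which inclusions hold among those that remain, so any inclusion
[[A]] \subseteq [[B]] that arises accidentally from the arithmetic
coding of pairs, records and the like still gives a coercion from A to
B that no typing rule justifies.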

Edmund Robinson