
Your counterexample



I have now figured out your counterexample.  The terms were:

M1 =  \abc.(\x.(b a) x)(c a)
M2 =  \abc.(b a)(c a)
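
For concreteness, here is one way to render the two terms as curried Python
functions (my own encoding, just to fix notation); note that the inner
\x.(b a) x is an eta-expansion of (b a), so the two terms are beta-eta-equal.

M1 = lambda a: lambda b: lambda c: (lambda x: b(a)(x))(c(a))
M2 = lambda a: lambda b: lambda c: b(a)(c(a))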

You gave a context which distinguished the cps-transforms of the two terms.
As Albert points out in his recent note to the types list, the interesting
question is "Where did the context come from?"  In particular, is there a way
of understanding the context as the CPSing of some familiar terms?  In your
example, the context appears to be doing the following:

evaluate the term Mi in a context where 

a is bound to a constant 

b is bound to \xk.1  -- this is a procedure which, when called, aborts with 1.

c is bound to \xk.Omega  -- this is a procedure which, when called, fails to terminate.

If we do the cps so that operators are evaluated before operands, then M1
diverges, because its operand (c a) must be evaluated before the inner
application can proceed, and (c a) diverges; but M2 halts with 1, because
(b a) aborts before (c a) is ever evaluated.
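
To see this concretely, here is a small Python sketch of the
operator-before-operand CPS of the two bodies, with the three bindings above
written as CPS-style procedures (a procedure takes its argument and then its
continuation).  The encoding and names are mine, and Python's recursion limit
stands in for genuine divergence.

def omega():
    # Never returns; Python's recursion limit turns the loop into a
    # catchable RecursionError, which stands in for divergence here.
    return omega()

a = "some constant"
b = lambda x: lambda k: 1        # ignores its continuation: aborts with 1
c = lambda x: lambda k: omega()  # never calls its continuation: diverges

halt = lambda v: v               # the initial (top-level) continuation

def cps_M1_body(k):
    # CPS of (\x.(b a) x)(c a): the operand (c a) is evaluated before the
    # application can take place, so c runs first.
    return c(a)(lambda n: b(a)(lambda m: m(n)(k)))

def cps_M2_body(k):
    # CPS of (b a)(c a), operator first: b runs before c and aborts with 1.
    return b(a)(lambda m: c(a)(lambda n: m(n)(k)))

print(cps_M2_body(halt))         # prints 1
try:
    cps_M1_body(halt)
except RecursionError:
    print("M1 diverges")         # (c a) loops before (b a) is ever reached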

If we do the cps the opposite way, then both terms diverge.
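
Continuing the sketch above: with the operand-before-operator CPS, the body
of M2 translates (after the administrative reductions) into exactly the same
term as cps_M1_body, so it too hangs on (c a), and M1's operand is of course
still evaluated first.

def cps_M2_body_operand_first(k):
    # Operand-first CPS of (b a)(c a): (c a) now runs before (b a), so this
    # is literally the same term as cps_M1_body above -- both diverge.
    return c(a)(lambda n: b(a)(lambda m: m(n)(k)))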

This is a well-known pitfall (I think it is similar to what Wegman
said at Dan's talk at the last POPL):  namely, in a world with continuations,
ANY procedure call is potentially a black hole, and therefore there are almost
no safe optimizations.

(* begin flame *)
Alas, this appears to be a problem of the world, not a problem in the model:
so long as any addition operator can potentially generate an overflow (and
abort the computation), we have basically two choices:  (1)  We can allow
this to be reflected in the model, or (2) we can choose to ignore these
possibilities in the model.  So the interesting question is whether there are
any useful intermediate strategies.
(* end flame *)
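
To make the flame concrete, here is a tiny Python sketch (checked_add is a
hypothetical checked addition, invented only to make "+" abortable).  Once
addition can abort, even reassociating a sum is observable, so a model that
reflects aborts cannot validate that law unconditionally.

def checked_add(x, y, limit=2**31 - 1):
    # Hypothetical checked addition: aborts (raises) on overflow.
    s = x + y
    if s > limit:
        raise OverflowError(s)
    return s

big = 2**31 - 1
try:
    checked_add(checked_add(big, big), -big)     # (big + big) overflows first
except OverflowError:
    print("left-associated sum aborts")
print(checked_add(big, checked_add(big, -big)))  # fine: prints 2147483647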

(* begin digression *)
There is a long tradition of examples that are used to detect obscure
facts about the context in which a program is computing.  In our book, we had
(at one point) a long discussion of how to use call/cc to figure out what
evaluation order the compiler uses.  I remember seeing long ago (mid-60's?) a
suite of Algol programs you could use to figure out what kind of
floating-point arithmetic your machine used!  
(* end digression *)

Anyhow, is this analysis right?  If so, does it shed some light on the
significance of the results?

--Mitch