[hudak-paul@YALE.ARPA: Operational vs. Denotational]

From: Paul Hudak <hudak-paul@YALE.ARPA>
Date: Tue, 31 May 88 15:12:07 EDT
Subject: Operational vs. Denotational
To: Kurt Sieber <sieber%fb10vax.informatik.uni-saarland.dbp.de@RELAY.CS.NET>,
        meyer@THEORY.LCS.MIT.EDU, matthias@rice.edu
Cc: hudak@YALE.ARPA

With regard to the debate about operational vs. denotational semantics,
I think it's worth asking what one "has" that the other doesn't, and
my chief answer has always been:  non-termination as a first-class value.  
Aside from the theoretical advantage of this, in practice it is very useful
to me, wearing my semanticist hat, in reasoning about programs.  It's
also become a useful tool for programmers in reasoning equationally 
about their programs, especially in the functional programming community.
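To make that concrete, here is a minimal Haskell sketch (my illustration, not part of the argument above; the names `bottom` and `constOne` are invented for the example). Because non-termination is just another value, the equation `constOne bottom = 1` holds and can be used in equational reasoning:

```haskell
-- Non-termination as a value: `bottom` denotes the undefined element
-- of every domain (an infinite loop at run time).
bottom :: a
bottom = bottom

-- A non-strict function ignores its argument, so applying it to
-- bottom still yields a defined result.
constOne :: a -> Int
constOne _ = 1

main :: IO ()
main = print (constOne bottom)  -- prints 1: constOne is non-strict
```

In a purely operational account one must instead argue about which reduction sequences terminate, rather than calculating with `bottom` directly.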

I'd also like to mention an unorthodox, but increasingly popular, use
of denotational semantics: reasoning about compiler optimizations.
The standard example of this is a justification of call-by-name to
call-by-value transformations of functional programs based on
information gleaned from strictness analysis.  Specifically, it is
quite easy to prove the equivalence of:

       E[[lam x. exp]] env = lam y. E[[exp]] env[y/x]

and:   E[[lam x. exp]] env = strict( lam y. E[[exp]] env[y/x] )

if I can prove that the function is strict in its argument.  I think
doing this kind of proof in an operational semantics would be much harder.
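The transformation can be sketched in Haskell (again my illustration; `f`, `applyByName`, and `applyByValue` are invented names, and `seq` stands in for the `strict` combinator above). If `f` is strict, i.e. `f bottom = bottom`, then forcing the argument first cannot change the result:

```haskell
-- A function that is strict in its argument: f bottom = bottom.
f :: Int -> Int
f x = x + 1

-- Call-by-name style application: pass the argument unevaluated.
applyByName :: (Int -> Int) -> Int -> Int
applyByName g x = g x

-- Call-by-value style application: force the argument first.
-- For a strict g this is semantically equivalent to applyByName.
applyByValue :: (Int -> Int) -> Int -> Int
applyByValue g x = x `seq` g x

main :: IO ()
main = do
  print (applyByName f 41)   -- prints 42
  print (applyByValue f 41)  -- prints 42: same result, since f is strict
```

The denotational proof obligation is exactly the two-line equation above; strictness analysis supplies the side condition that licenses it.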

Now, you could argue that I then have to prove the correctness of my
actual implementation with respect to this optimization, but I claim
that real implementations are as far from abstract operational
semantics as they are from denotational semantics, and thus I don't
think it's any harder.  One could even argue that it's easier, given
the trend toward "semantics-directed compilation."

Finally, let me point out that doing these things for a NON-STANDARD
semantics is equally valuable.  For example, I can give a store
semantics to a functional program, prove its consistency with respect
to the standard direct semantics, and then prove the correctness of
optimizations to the store semantics using an abstract interpretation 
of some other non-standard behavior such as reference counts (for 
example as outlined in my Lisp and FP 86 paper).  I find this approach
much nicer in a denotational model than in an operational model, even 
though I'm reasoning about non-standard behavior.
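As a toy illustration of the abstract-interpretation side (my sketch only; this is far simpler than the analysis in the LFP 86 paper, and the datatypes `Exp` and `Count` are invented for the example), one can approximate reference counts by counting syntactic occurrences of a variable in an abstract domain {Zero, One, Many}:

```haskell
-- A tiny lambda-calculus syntax.
data Exp = Var String | Lam String Exp | App Exp Exp

-- Abstract reference counts: an upper bound on uses of a variable.
data Count = Zero | One | Many deriving (Show, Eq)

-- Abstract addition of counts: anything beyond one use collapses to Many.
plus :: Count -> Count -> Count
plus Zero c = c
plus c Zero = c
plus _ _    = Many

-- Abstractly count occurrences of x in e, stopping at a rebinding lambda.
uses :: String -> Exp -> Count
uses x (Var y)     = if x == y then One else Zero
uses x (Lam y e)   = if x == y then Zero else uses x e
uses x (App e1 e2) = uses x e1 `plus` uses x e2

main :: IO ()
main = print (uses "x" (App (Var "x") (Var "x")))  -- prints Many
```

A result of `One` would license, for instance, destructive update of the store cell bound to `x` in the store semantics.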

So I guess I'm arguing that denotational semantics is a nice TOOL for
reasoning about programs in general, and its faithfulness to a given
operational semantics is not an important issue to me, other than to
gain a sanity check now and then...


From: Paul Hudak <hudak-paul@YALE.ARPA>
Date: Tue, 31 May 88 15:14:10 EDT
Subject: protocol

In sending that last message I was observing what seems to be the standard
protocol: sending it to you and letting you judge whether or not you want to
send it to one of your mailing lists.