Re: Type theory vs floating-point arithmetic

To: Sergei SOLOVIEV <soloviev@irit.fr>

Subject: Re: Type theory vs floating-point arithmetic

From: Matthias Blume <blume@ttic.org>

Date: Wed, 3 Sep 2003 11:29:42 -0500

Cc: types@cis.upenn.edu

In-Reply-To: <200309021305.h82D5mZF005988@saul.cis.upenn.edu>

Dear Sergei, dear readers of the Types forum,
I am familiar with the explanation of subtyping that uses coercions.
However, I am not sure about the "it is understood..." part, which
sounds quite absolute. Fortunately, it all seems to come down to a mere
difference in terminology. The coercion-centric view of subtyping is
quite consistent with what I originally meant to say (see below). But
first, to clear up the terminology problem, let me briefly describe how
*I* understand subtyping. More knowledgeable people can then chime in
and correct me if I am totally off base...
1. If A is a subtype of B (written A <: B), then from a typing judgment

       |- x : A

   one can conclude that the judgment

       |- x : B

   must also hold. (I am simplifying by eliding the typing environment.)

   Now, if we are dealing with a semantic approach, e.g., by
   interpreting types as sets of values, then

       x : A

   means

       [[x]] \in [[A]]

   where [[x]] is the value of x and [[A]] is the set of values that
   interprets A. Since [[x]] \in [[A]] must imply [[x]] \in [[B]], we
   can conclude that [[A]] \subseteq [[B]] (unless [[.]] is not
   sufficiently abstract, so that [[A]] and/or [[B]] contain "useless"
   elements that can never be values of any actual program phrases).
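As a toy illustration of this semantic reading (the particular "types" here are made-up finite sets of my own, not part of the original argument), one can model types as sets of values and check the subset condition directly:

```python
# Toy semantic model: interpret types as finite sets of values.
# BOOL and SMALL_INT are illustrative assumptions, not real types.
BOOL = {0, 1}                  # [[Bool]]
SMALL_INT = set(range(-8, 8))  # [[SmallInt]]

def has_type(value, ty):
    """x : A  means  [[x]] \\in [[A]]."""
    return value in ty

# If every value of BOOL is a value of SMALL_INT, then any judgment
# x : Bool entails x : SmallInt, i.e. [[Bool]] is a subset of [[SmallInt]].
assert BOOL <= SMALL_INT
assert all(has_type(v, SMALL_INT) for v in BOOL)
```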
   In a syntactic theory of subtyping one usually finds (directly or as
   part of other rules) a subsumption rule of the form

       |- x : A    A <: B
       ------------------
            |- x : B

   This, without having to resort to semantics, directly captures the
   above-mentioned "essence" of what it means for A to be a subtype of B.
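A minimal sketch of how such a subsumption rule shows up in a checker (the type names and the subtype relation below are made-up assumptions of mine): a judgment at type A is accepted at an expected type B whenever A <: B holds:

```python
# Tiny type-checker fragment with a subsumption rule (illustrative
# sketch; the type names and subtyping axioms are invented).
SUBTYPE = {("Int", "Real"), ("Nat", "Int")}

def subtype(a, b):
    """Reflexive-transitive closure of the declared subtyping axioms."""
    if a == b:
        return True
    return any(x == a and subtype(y, b) for (x, y) in SUBTYPE)

def check(derived, expected):
    """Subsumption: from x : A and A <: B, conclude x : B."""
    return subtype(derived, expected)

assert check("Nat", "Real")      # Nat <: Int <: Real
assert not check("Real", "Int")  # no rule allows going down
```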
2. In the *absence* of a subsumption rule in the type system, one can
   "fake" it (or, to phrase it differently, "explain" such a rule) by
   postulating the existence of explicit coercions. For each pair of
   what would be subtype A and supertype B we say that c_{A,B} of type
   A -> B takes a value of type A and "views it as a value of type B".
   A program that would typecheck with the subsumption rule but doesn't
   without it can be decorated with just the right number of explicit
   coercions for typechecking to go through. In other words, coercions
   take the place of the subsumption rule.
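To make the "decoration" concrete, here is a hedged sketch (the term syntax and type names are my own toy representation) of an elaboration step that inserts an explicit coercion c_{A,B} wherever subsumption would have been used:

```python
# Toy elaboration: instead of a subsumption rule, insert an explicit
# coercion term c_{A,B} wherever a subtype is used at a supertype.
# The type names and string-based term syntax are invented for this sketch.
SUBTYPE = {("Int", "Real")}

def coerce(term, a, b):
    """Decorate `term` of type `a` so that it typechecks at type `b`."""
    if a == b:
        return term                      # no coercion needed
    if (a, b) in SUBTYPE:
        return f"c_{{{a},{b}}}({term})"  # coercion in place of subsumption
    raise TypeError(f"{a} is not a subtype of {b}")

# `x : Int` used where a Real is expected becomes c_{Int,Real}(x):
assert coerce("x", "Int", "Real") == "c_{Int,Real}(x)"
assert coerce("x", "Int", "Int") == "x"
```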
   There are a number of coherence requirements that must hold for
   coercions to be reasonably considered coercions. You have explained
   those quite well. The driving motivation behind those coherence
   requirements is that coercions should essentially behave like
   identities; they should not "alter" the value (although, as you
   pointed out, they might alter its representation). In particular,
   coercions should not add information that was not there before.
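For contrast, an example of a coercion that does satisfy such a coherence condition (an illustration of mine, not from the original message): embedding arbitrary-precision integers into the rationals loses nothing, so it commutes with addition, c (f (x, y)) = f' (c x, c y):

```python
from fractions import Fraction

# A coherent coercion: int -> Fraction preserves all information, so it
# commutes with addition (f = integer +, f' = rational +).
def c(n):
    return Fraction(n)  # the coercion: view an int as a rational

x, y = 7, -3
assert c(x + y) == c(x) + c(y)  # c (f (x, y)) == f' (c x, c y)
```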
Now, what I said about the disjointness of values in IEEE 32-bit and
64-bit representations can be rephrased in terms of coercions by
pointing out that a coercion from 32-bit to 64-bit necessarily adds
information, namely 29 extra bits of precision.(*) We observe this by
finding the coercion to be incoherent: c (f (x, y)) is not always equal
to f' (c x, c y).
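This incoherence can be observed directly. The following sketch (the helper names are mine; Python's `struct` module does the rounding to IEEE single precision) takes f to be 32-bit addition, f' to be 64-bit addition, and c the widening coercion:

```python
import struct

def to_f32(x):
    """Round a Python float (IEEE double) to the nearest IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def f(x, y):        # 32-bit addition: add, then round to single
    return to_f32(to_f32(x) + to_f32(y))

def f_prime(x, y):  # 64-bit addition
    return x + y

def c(x):           # widening "coercion" 32-bit -> 64-bit
    return to_f32(x)  # Python already stores the result as a double

x, y = 1.0, 2.0 ** -24  # both exactly representable in single precision
assert c(f(x, y)) == 1.0                   # single-precision sum rounds to 1.0
assert f_prime(c(x), c(y)) != 1.0          # double-precision sum keeps the bit
assert c(f(x, y)) != f_prime(c(x), c(y))   # incoherent
```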
Of course, one *could* make things coherent, by defining each 32-bit
operation f in terms of its 64-bit counterpart f', coercion, and
truncation:

    f (x, y) = trunc (f' (extend x, extend y))

However, this is correct only if it is guaranteed that

    f' (extend x, extend y) = extend (trunc (f' (extend x, extend y)))

This would mean that high-precision operations would have to limit
themselves to low-precision results whenever both operands have the
form of a low-precision number (i.e., were created by extending a
low-precision number). Clearly this is not how we want high-precision
arithmetic to work (and, consequently, IEEE 754 doesn't).
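The failure of that guarantee can be checked with the same kind of sketch (helper names are mine; `struct` performs the rounding to single precision):

```python
import struct

def trunc(x):
    """Round an IEEE double to the nearest IEEE single (the 'trunc' above)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def extend(x):
    """Widen single to double; in Python the value is already a double."""
    return x

def f_prime(x, y):  # 64-bit addition
    return x + y

x, y = 1.0, 2.0 ** -24  # both exactly representable in single precision
lhs = f_prime(extend(x), extend(y))                 # 1 + 2^-24, exact in double
rhs = extend(trunc(f_prime(extend(x), extend(y))))  # rounds the extra bit away
assert lhs != rhs  # the required equation fails
```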
My point was that the bits that are not represented are to be
interpreted as containing neither 0 nor 1 but "?". Any "coercion" that
replaces some of these ? with either 0 or 1 is not an
information-preserving operation. Since *all* ways of extending 32-bit
to 64-bit involve adding either 0s or 1s, none of these can be
information-preserving. Under this view, 32-bit is not a subtype of
64-bit (even under a coercion-based explanation of subtyping).
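One way to see those "?" bits concretely (an illustration of mine, again using `struct`): widening the single-precision rounding of 0.1 fills the 29 missing fraction bits with zeros, which are not the bits the double-precision value of 0.1 actually has in those positions:

```python
import struct

def f32_widened_bits(x):
    """Round x to single, widen back to double, return the 64-bit pattern."""
    single = struct.unpack('<f', struct.pack('<f', x))[0]
    return struct.unpack('<Q', struct.pack('<d', single))[0]

def f64_bits(x):
    """The raw 64-bit pattern of a double."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

widened = f32_widened_bits(0.1)
native = f64_bits(0.1)

# The widening "coercion" committed the 29 unrepresented fraction bits to 0...
assert widened & ((1 << 29) - 1) == 0
# ...but the genuine double-precision 0.1 has nonzero bits there:
assert native & ((1 << 29) - 1) != 0
assert widened != native
```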
Matthias
(*) There are also 3 extra bits in the exponent. The exponent is
slightly better behaved since, indeed, non-represented bits are to be
interpreted as 0. However, even here things break down if you look
closely enough, because of underflow and the associated effect of
denormalization on the one hand and infinities on the other.