
additives





	XMAS ANNOUNCEMENT : ESSENTIAL PROGRESS ON ADDITIVES

Additives were a bit mistreated in the first paper on LL, and they
were (up to now) the poor relation of multiplicatives and even of
exponentials (which are supposed to be less basic) : certain
essential gadgets like proof-nets or geometry of interaction were not
available in their case. Two reasons for this :
i) the absence of tradition (natural deduction for disjunction is a horror)
ii) the fact that they are much less central than multiplicatives,
and that it was possible to forget them.

Now follow the results obtained this year; the result on proof-nets  
is not yet written (although it dates back to February), but the
paper on geometry of interaction for additives is mostly finished. 


I. PROOF-NETS
The problem was to extend proof-nets in a nice natural way ; of  
course some implicit criteria had to be met :
i) a reasonable unicity of the proof-net ; for instance in the
multiplicative case, we cannot identify two distinct cut-free
proof-nets with atomic axiom links without causing a collapse. This
is also the case for our extension.
ii) a non-trivial correctness criterion (style : acyclicity) with a  
sequentialisation theorem
iii) a cut-elimination procedure preserving the criterion and not  
transiting through sequentialisation (I say this, since my first  
version of the quantifier case was not immediately closed under  
normalisation).
The solution is to adopt binary links for & and Plus, and to draw a  
proof-net in the usual style, the only novelty being that several  
links may share a conclusion. Additionally, boolean weights are given  
to links so as to avoid mutual jamming ; multiplicative proof-nets  
correspond to weights 0 and 1. These weights are polynomials in  
boolean parameters L/R which are associated to the rules for &.
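
To give an idea of the bookkeeping (a guess at the shape of these
weights, since the precise definition is not written out here) : each
&-link introduces its own boolean parameter p, taking the values L/R,
and if w is the weight of the &-link itself, its two premises receive
the weights

	w.p   (left premise)        w.\bar{p}   (right premise)

so that each choice of values for the parameters should cut out an
ordinary multiplicative proof-net : what has weight 1 is kept, what
has weight 0 is erased.
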
The criterion is mostly a variation on my second criterion for  
quantifiers ; it is not difficult to formulate. More delicate is
to prove sequentialisation ; the only novelty this year was to
find a miraculous lemma enabling one to neglect the main technical  
difficulty.
The most spectacular use of the criterion is to write down the  
proof-structure for (an LL approximation of) the Gustave function, and
to discover a beautiful cycle.

I never wrote this down, because of a shortage of time, inexperience
in TeX... and more deeply because these nets were completely exotic
in terms of geometry of interaction. During the year I visited
several fancy semantical worlds, trying to understand this, and
eventually I arrived at the conclusion that the old formula of
execution does not have to be modified. But the algebra is modified,
and here comes the
surprise.

II. THE ALGEBRA OF RESOLUTION
Let L be a language (only predicates and functions) ; in practice L  
uses two constants, g and d, one binary function, and n binary  
predicates (if the sequent has n formulas).
A wire is a clause a |- b where a and b are atomic formulas with the  
same variables. Wires are defined up to renaming of their variables.

Composition of two wires a |- b and a' |- b' is defined by applying  
unification to b and a' (giving a mgu \theta) so that the result is  
a\theta |- b'\theta. If unification fails, composition is undefined.
We call this operation resolution.
If we take formal linear combinations of wires (with complex  
coefficients), then resolution yields a multiplication (when  
resolution fails the product of wires is defined as 0). There is also  
an adjunction : basically
(a|-b)* = b|-a.
Now this defines a C-star algebra \Lambda{L} : simply because the
formal combinations operate on the Hilbert space \ell^2(G) where G is  
the set of ground formulas of L.
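
To make this concrete, here is a naive sketch in code (a toy
rendering under assumed conventions : variables are strings beginning
with an uppercase letter, terms and atomic formulas are tuples
(symbol, arg1, ..., argn), and the occurs check is omitted). It only
makes the composition and the adjunction explicit ; it does not
pretend to implement the C-star algebra itself.

def is_var(t):
    # Variables are strings beginning with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def subst(theta, t):
    # Apply a substitution (a dict mapping variables to terms) to a
    # term or atomic formula.
    if is_var(t):
        return subst(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(theta, u) for u in t[1:])
    return t                        # a constant such as "g" or "d"

def unify(s, t, theta=None):
    # Most general unifier of s and t (as a dict), or None on failure.
    theta = dict(theta or {})
    s, t = subst(theta, s), subst(theta, t)
    if s == t:
        return theta
    if is_var(s):
        theta[s] = t
        return theta
    if is_var(t):
        theta[t] = s
        return theta
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for u, v in zip(s[1:], t[1:]):
            theta = unify(u, v, theta)
            if theta is None:
                return None
        return theta
    return None

class Wire:
    # A wire a |- b : two atomic formulas with the same variables,
    # considered up to renaming of those variables.
    def __init__(self, a, b):
        self.a, self.b = a, b

    def rename(self, suffix):
        # Rename the variables apart before composing two wires.
        def ren(t):
            if is_var(t):
                return t + suffix
            if isinstance(t, tuple):
                return (t[0],) + tuple(ren(u) for u in t[1:])
            return t
        return Wire(ren(self.a), ren(self.b))

    def compose(self, other):
        # Resolution : unify self.b with other.a ; on success the
        # result is a\theta |- b'\theta, on failure None (the 0 of
        # the algebra).
        w1, w2 = self.rename("1"), other.rename("2")
        theta = unify(w1.b, w2.a)
        if theta is None:
            return None
        return Wire(subst(theta, w1.a), subst(theta, w2.b))

    def adjoint(self):
        # (a |- b)* = b |- a
        return Wire(self.b, self.a)

Formal complex linear combinations of such wires, with the product
set to 0 when resolution fails, then give the multiplication and the
adjunction of \Lambda{L}.
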
The execution formula can be restated in logic programming style :  
let us have for instance 3 unary predicates : e, c, s, and we start  
with a set of clauses
e(t) |- s(u)
e(t) |- c(u)
c(t) |- c(u)
c(t) |- s(u)
then the execution consists in finding all resolution consequences of  
these clauses which are of the form e(t) |- s(u).
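
Continuing the toy sketch above, one can compute such consequences by
brute force. The particular terms g(...), d(...) below are invented
so that the closure is finite (mirroring nilpotency) ; they are not
taken from an actual proof.

def canonical(w):
    # Rename variables in order of first occurrence, so that wires
    # equal up to renaming get the same key (used for deduplication).
    names = {}
    def ren(t):
        if is_var(t):
            return names.setdefault(t, "V%d" % len(names))
        if isinstance(t, tuple):
            return (t[0],) + tuple(ren(u) for u in t[1:])
        return t
    return (ren(w.a), ren(w.b))

def consequences(wires):
    # All resolution consequences of the given wires.
    seen, frontier = {canonical(w): w for w in wires}, list(wires)
    while frontier:
        new = []
        for w1 in list(seen.values()):
            for w2 in frontier:
                for w in (w1.compose(w2), w2.compose(w1)):
                    if w is not None and canonical(w) not in seen:
                        seen[canonical(w)] = w
                        new.append(w)
        frontier = new
    return seen.values()

wires = [Wire(("e", "X"), ("c", ("g", "X"))),          # e(t) |- c(u)
         Wire(("c", ("g", "X")), ("c", ("d", "X"))),   # c(t) |- c(u)
         Wire(("c", ("d", "X")), ("s", "X"))]          # c(t) |- s(u)

for w in consequences(wires):
    if w.a[0] == "e" and w.b[0] == "s":
        # the only consequence from e to s here is e(t) |- s(t)
        print(w.a, "|-", w.b)
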

In other terms, GoI can be seen as a very elementary form of logic  
programming.

III. GoI OF ADDITIVES
GoI is more user-friendly with binary predicates. The first argument
is handled in the usual way ; the novelty comes from the second : it
is "constant", meaning that we use only wires p(t,u) |- q(t',u).
The second component is simply the additive weight of the proof-nets.  
These second components are very peculiar (I call them dialects,  
since they are not shared, whereas the first components behave as  
public channels). Communication without understanding...
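
In terms of the toy sketch of section II (an invented instance, only
to show the shape of such wires) :

# A hypothetical wire p(t,u) |- q(t',u) : the first argument does the
# usual work, the dialect U is merely carried along, unchanged.
w = Wire(("p", ("g", "X"), "U"), ("q", ("d", "X"), "U"))

def is_dialectal(w):
    # The second argument is literally the same on both sides.
    return w.a[2] == w.b[2]
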
The nilpotency is proved as ever, but the problem comes with the  
comparison to syntax : not enough erasures are performed and it is
hard to state a nice result. Here comes the last novelty :

IV. THE INFLATED CALCULUS
This is a sequent calculus in which we can prove, besides usual sequents
|- \Gamma, "flat sequents" |- \Gamma^\flat. A proof of a flat sequent  
is basically a proof which is in the process of being totally erased  
(for instance an empty proof, but this is far from being the only  
case). A flat proof can be superimposed on a real one to yield a
real one. The additive cut-elimination can be stated by introducing  
an auxiliary flat proof. Two results :
i) execution corresponds to cut-elimination in the inflated calculus
ii) for very simple formulas (e.g. booleans) flat proofs can be  
eliminated, in other terms the result coincides with usual  
cut-elimination.

V. NEW INSIGHTS, NEW PROBLEMS
For the first time the notion of connexity plays a central role ;  
this is enough to refute the MIX rule from the viewpoint of GoI. But  
weakening becomes problematic too, unless we adopt a version in which  
the weakening is physically connected to a preexisting formula. Also  
remember that (due to work of Lincoln and Winkler) there is no
correctness criterion for multiplicative proof-nets with units if we  
don't connect the weakenings (unless NP = coNP). 

This means that the construction of the ! has to be split into two  
steps (as in a recent paper of Jacobs). The first step is the
construction of 1&A, and the difficulty is the treatment of 1 (or
rather of its negation).

It might also be interesting to try to see GoI from the viewpoint of  
logic programming : GoI corresponds to very limited logic programs :
- one formula in the body of the clause
- the same variables in the head and body
- the heads (resp. tails) pairwise non-unifiable
LL appears as a typing discipline for such programs (a small sketch
of these restrictions is given in code below)... could this be the
tail of this old sea-snake : the common foundation of logic and  
functional programming?
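
Reusing the Wire and unify helpers from the toy sketch of section II
(again nothing official, just an illustration), these restrictions
can be stated as a check on a list of wires ; the "one formula in the
body" restriction is built into the Wire representation itself.

def variables(t, acc=None):
    # Collect the variables occurring in a term or atomic formula.
    acc = set() if acc is None else acc
    if is_var(t):
        acc.add(t)
    elif isinstance(t, tuple):
        for u in t[1:]:
            variables(u, acc)
    return acc

def check_goi_program(wires):
    # Rename the wires apart, then check the last two restrictions.
    renamed = [w.rename("_%d" % i) for i, w in enumerate(wires)]
    for w in renamed:
        assert variables(w.a) == variables(w.b), "head/body variables differ"
    for side in (lambda w: w.a, lambda w: w.b):
        atoms = [side(w) for w in renamed]
        for i, s in enumerate(atoms):
            for t in atoms[i + 1:]:
                assert unify(s, t) is None, "two atoms on one side unify"

check_goi_program(wires)   # the three example wires of section II pass
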

Best wishes for Xmas and the New Year

---

Jean-Yves GIRARD