Subject: Re: Lambda calculus w/theory of computation
From: Kim Bruce <email@example.com>
Date: Wed, 01 May 2002 20:27:24 -0400
Organization: Williams College
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US; rv:0.9.4.1) Gecko/20020315 Netscape6/6.2.2
Thanks to the many (over 40) people who responded to my question about
materials for teaching theory of computation using lambda calculus,
RAM's, or imperative language models (WHILE or LOOP languages) rather
than Turing machines. My hope was to find a book that used one or more
of these models and also covered the standard language and machine topics.
Unfortunately, it does not appear that there are many options for this.
The one good fit seems to be
Computability, Complexity, and Languages: Fundamentals of Theoretical
Computer Science (Computer Science and Applied Mathematics), 2nd
edition, by Martin D. Davis, Ron Sigal, Elaine J. Weyuker, 1994
My only question on this book is whether or not it would be suitable for
undergraduates (in my case, Junior CS majors).
The following book by Neil Jones also got a number of recommendations.
Unfortunately for me, it covers complexity rather than machines and
languages after the computability chapters. Computability is done from
the point of view of WHILE programs and a functional language, and
looks quite attractive:
N.D. Jones, Computability and Complexity: From a Programming Perspective.
MIT Press, 1997. ISBN 0262100649
Other books mentioned include two out-of-print texts:
Brainerd, Landweber: Theory of Computation, Wiley, 1974.
Kfoury, Moll, Arbib: A Programming approach to computability, 1982.
It does seem that there are a lot of people who might be interested in a
non-Turing machine based theory of computation course. Minimally, such
an approach might include Random-Access Machines (RAMs) and While
programs, but might also include equivalent functional models. Both of the
first two models should also provide support for complexity measures.
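As one concrete illustration of that last point, a RAM in the increment/decrement/jump-if-zero style supports step counting as a natural complexity measure. The instruction set and encoding below are my own illustrative choices, not a canonical definition:

```python
# A minimal random-access machine sketch.  Registers hold natural numbers;
# the instruction set (inc, dec, jz, halt) is one common textbook choice.
def run_ram(program, regs):
    """Run until 'halt'; returns the registers and the number of steps
    taken (a natural complexity measure for this model)."""
    pc = 0
    steps = 0
    while True:
        op = program[pc]
        steps += 1
        if op[0] == "halt":
            return regs, steps
        if op[0] == "inc":                       # increment a register
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        elif op[0] == "dec":                     # decrement, bottoming out at 0
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
            pc += 1
        elif op[0] == "jz":                      # jump if register is zero
            pc = op[2] if regs.get(op[1], 0) == 0 else pc + 1

# Example: add r1 into r0 (jumping on the never-used register rz acts as
# an unconditional jump back to the top of the loop).
add_prog = [("jz", "r1", 4),
            ("dec", "r1"),
            ("inc", "r0"),
            ("jz", "rz", 0),
            ("halt",)]
```

Running `add_prog` on registers r0=2, r1=3 leaves r0=5, and the step count comes back alongside the registers.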
A few people volunteered lecture notes or had other comments on the
question that might be of interest to readers. Some of the linked
papers seem quite interesting and useful for the desired approach. The
comments follow a copy of my original query below.
Thanks again to all who wrote.
Kim Bruce wrote:
> [----- The Types Forum, http://www.cis.upenn.edu/~bcpierce/types -----]
> This question is a bit off topic, but I hope it is of sufficient
> interest [because of lambda calculus] to this group anyway.
> I will be teaching our Theory of Computation course this coming fall,
> again. The topics are the standard: regular sets and finite automata,
> context-free languages and push-down automata, and Turing machines and
> undecidability.
> I personally find Turing machines to be a very unsatisfying model of
> computation, and would prefer to rely on other models. My favorites
> would be lambda calculus, RAM's, and imperative language models (e.g.,
> the LOOP language). My belief is that students would find the three of
> these easier to relate to than Turing machines. However, I have been
> unable to find a book for our undergraduate (junior-level) course that
> covers these models. Occasionally, a book will cover the RAM model as a
> side-light, but I have been unable to find a book, for example, that
> covers lambda calculus.
> Does anyone have any suggestions for books (or even detailed lecture
> notes) that cover the basic topics relating to languages (regular and
> context-free grammars and machines) and undecidability as well as a
> less "traditional" approach to computability? [Though of course the
> lambda calculus was contemporaneous - or even earlier than - Turing
> machines.] Reply directly to me and I will summarize later if there is
> sufficient interest.
> Kim Bruce
> Williams College
On my shelf, I have a book "Computability" by Nigel Cutland, Cambridge
U. Press, that starts with a register machine model. Eventually,
there are a few pages on Turing machines, some on Post systems, and quite
a bit about complexity and decidability. But there's nothing about
lambda, and the Chomsky hierarchy is barely mentioned.
-- Paul Steckler
I learned computability from Kfoury et al's book on a programming
approach to computability. Almost 20 years later, I still think it is a
great book. It uses while-programs instead of Turing machines.
Unfortunately there does not appear to be such a text, especially in
regard to the lambda calculus. However, there are aspects of the
R.W. Floyd and R. Beigel
The Language of Machines: An Introduction to Computability
and Formal Languages
Computer Science Press: W.H. Freeman and Co., 1994,
especially the notion of simulation, which may be of some interest.
David B. Benson
Hi, I feel much the same way. Here's a bunch of my thoughts and
observations:
+ Students have trouble with inductive reasoning.
+ Students probably find a programming language more intuitive than
Turing machines.
+ The r.e./rec. unit is harder than the CFL unit, which is harder
than the regular set unit. Therefore, in the normal progression,
students see it last, when they are most likely to be burnt out
and stressed about final projects in other courses, and have
little time in which to catch up if they fall behind.
+ The formalism of TMs builds on DFAs and PDAs, but other than that,
the r.e./rec. unit does not depend heavily on the regular and CFL
units. Plus, reductions are often given with lots of hand-waving
(who wants to develop or grade complicated TMs?), which sometimes
students find confusing, e.g. somehow a few students always seem
to have trouble with the idea of dovetailing.
Therefore, my approach has been:
+ Use a programming language instead of TMs. I ended up making up
my own string-based imperative language. It has the following features:
+ characters and strings are the only datatypes
+ nested functions (but not higher-order functions)
+ strings are taken apart by the equivalent of car and cdr
+ Assert the existence of an interpreter, and then use it to
rather concretely define most reductions.
+ Do the r.e./rec. unit first, and at the start assign a bunch of
programming exercises to try to get students to think inductively.
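A sketch of the string-manipulation core such a language might have, with strings taken apart by car/cdr analogues as described above (the helper beyond car/cdr is my own illustration, not Yan's material):

```python
def car(s):
    return s[:1]        # first character, or "" on the empty string

def cdr(s):
    return s[1:]        # everything after the first character

def rev(s):
    # a typical inductive exercise: recursion driven by car/cdr
    return "" if s == "" else rev(cdr(s)) + car(s)
```

Exercises in this style force students to reason inductively on the structure of the string, which is the point of assigning them early.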
If you're interested, I can package up my handouts, homework
assignments, and solutions for you to look at and, if you so desire, use
with any modifications you deem necessary. They're all in LaTeX.
- Thomas Yan (firstname.lastname@example.org)
I'm teaching the course now, using Neil Jones' book. It's a bit dense
for our fourth-semester students, and the book has an enormous number of
minor errors, 3/4 of which are documented. But it's great. A simple
imperative language is the main model, and is used as the basis to
describe RAM, TM, etc. Some, but not lots, of lambda calculus. There's
a robust but no-frills
interpreter implemented by a student of Adriana's [see next note], and
Neil has a student working on one too.
I used Neil Jones' book, the first half, to teach such a course. He has
what you are looking for and more.
He starts with a simple (but Turing complete) imperative language
called WHILE as the computational model and uses the notions of
interpreter and compiler to relate to other models such as TM,
RAM's, etc. Many of the exercises consist of writing such interpreters.
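A toy interpreter in this spirit might look as follows. In Jones's book the only data are binary trees over a single atom (here `None`); the concrete AST encoding below is my own illustrative choice, not the book's WHILE syntax:

```python
# Data: binary trees built from the atom nil (None) and pairs (tuples).
# Expressions and commands are tagged tuples.

def eval_expr(e, env):
    op = e[0]
    if op == "var":
        return env.get(e[1])                 # unbound variables are nil
    if op == "quote":
        return e[1]
    if op == "cons":
        return (eval_expr(e[1], env), eval_expr(e[2], env))
    if op == "hd":                           # left subtree; nil on atoms
        v = eval_expr(e[1], env)
        return v[0] if v else None
    if op == "tl":                           # right subtree; nil on atoms
        v = eval_expr(e[1], env)
        return v[1] if v else None
    raise ValueError(op)

def run(cmd, env):
    op = cmd[0]
    if op == "assign":
        env[cmd[1]] = eval_expr(cmd[2], env)
    elif op == "seq":
        for c in cmd[1:]:
            run(c, env)
    elif op == "while":                      # loop while expression is non-nil
        while eval_expr(cmd[1], env) is not None:
            run(cmd[2], env)
    return env

# A classic exercise: reverse the cons-list in x into y.
reverse_list = ("while", ("var", "x"),
                ("seq",
                 ("assign", "y", ("cons", ("hd", ("var", "x")), ("var", "y"))),
                 ("assign", "x", ("tl", ("var", "x")))))
```

Exactly this kind of interpreter, written in WHILE itself or in another model, is what the book's interpreter-and-compiler exercises are about.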
Here is the course page. Feel free to use any of the material available.
Matthieu Lucotte (email@example.com), one of the TA's, made an
implementation of WHILE in OCaml. Contact him if you are interested in it.
Hope this helps,
I am giving a similar course, and I agree with your opinion about
Turing machines; I find them too concrete.
I also dislike lambda-calculus as a basic model of computation. The way
computation is expressed (reducing any subterm until no redexes remain)
makes it very different from what we are used to, and makes it necessary
to prove Church-Rosser. Another thing I don't like is the amount of
coding which is necessary to represent programs as data.
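Church numerals illustrate the coding overhead being objected to; here is the standard encoding transcribed into Python closures (a textbook construction, not tied to any particular course):

```python
# Church numerals: the number n is coded as the higher-order function
# that applies its argument n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # decode by applying "add one" n times to 0
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
```

Even representing the number 2 takes a tower of abstractions, and coding full programs as lambda terms is heavier still.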
What I prefer is a simple functional language with a possibility to
define inductively defined data types. The language has a simple
deterministic semantics (no need to prove Church-Rosser) and it is trivial to code
programs as data.
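A sketch of that last point: with an inductive datatype for syntax, a program simply is a data value, ready to be inspected or interpreted, with no numeric coding step. The toy expression language below is my own example, not from the notes mentioned:

```python
# Expressions are tagged tuples -- the "inductive datatype" of this sketch.
def eval_exp(e):
    op = e[0]
    if op == "num":
        return e[1]
    if op == "add":
        return eval_exp(e[1]) + eval_exp(e[2])
    if op == "mul":
        return eval_exp(e[1]) * eval_exp(e[2])
    raise ValueError(f"unknown operator: {op}")

# The program (1 + 2) * 3, as a plain data value.
prog = ("mul", ("add", ("num", 1), ("num", 2)), ("num", 3))
```

The evaluator is also deterministic by construction, so no confluence argument is needed before computation is well-defined.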
I have written some notes about it; you can find them at the address
I have used them in part in my computability course for some years now.
I agree with you that there is a strong need for a more modern approach
to computability and have toyed with the idea of writing one myself. But
I haven't found the time to do that.
I like the book by N.J. Cutland "Computability: An introduction to
recursive function theory", which, as the title suggests, focuses mainly
on recursive functions. Of course, the Rogers book is authoritative, but
too heavy for Computer Scientists (like me).
I believe that Neil Jones' book takes a more programming language
approach to computability - I heard him give a talk about some of this
material, but I confess I haven't looked at the book - you might want to
give it a try.
For regular languages material, we recommend Kozen's "Automata and
Computability", and Sudkamp's "Languages and Machines". I don't know
these books that well I'm afraid.
Hope this helps,
I am teaching students Turing machines and the students love it! But
this is how it is done:
- Finite automata and regular expressions
- Pushdown automata and grammars
- Undecidability of the halting problem using JAVA programs (and the
existence of a Java compiler)
- Reducibility - again with JAVA
After this, Turing machines are introduced, and for two purposes: To
have one standard model of universal computation, with which other
formalisms can be compared. The simpler such a model is, the better. The
other purpose is complexity theory: Turing machines have the best notion
of atomic computational step and unit of memory. So the course goes on:
- Turing machines (basic model)
- Possible improvements to the Turing machine model and why they don't
add any computational power
- Recap of complexity
- NP completeness
- Lambda calculus
- Typed lambda calculus
Obviously, I would like to explain these topics in more depth than
the British system allows me to, but on the other hand I am extremely
pleased that the vast majority of students actually like the course
(which is compulsory for most degree courses here at Birmingham), and
that virtually everyone understands at least undecidability and Church's
Thesis. To come back to the statement at the beginning, students do
actually like to program Turing machines and they do like the fact that
such a simple concept can be shown to be as powerful as a von Neumann
computer. I believe the main difference to other texts is that I do not
(need to) introduce Goedel numbering but instead rely on the existence
of a Java compiler for the undecidability proof.
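The diagonal argument being relied on here can be sketched directly in code, with source text standing in for Gödel numbers. The "decider" below is a hypothetical stand-in (no real one can exist); the point is that the refutation goes through for any total predicate plugged in:

```python
# Diagonalization with source text in place of Goedel numbers.
def claimed_halts(prog_src, input_src):
    return True                      # an arbitrary total stand-in decider

def d_halts_on_own_source(src):
    # d(p): loop forever if claimed_halts(p, p), otherwise halt at once
    return not claimed_halts(src, src)

def is_refuted(src):
    # the decider's prediction about d-on-its-own-source always disagrees
    # with d's actual behaviour
    return claimed_halts(src, src) != d_halts_on_own_source(src)
```

Because `d` is built by compiling ordinary source code, no numbering of programs is needed, which is exactly the simplification described above.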
My course notes are at
If you do have a look at them then please let me know of possible
improvements. (In fact, the students like my lectures but find the
course hard.)
Hope this helps. Best wishes,
Dexter Kozen's "Automata and Computability," Springer Verlag, is a
junior level book for such a course that includes a discussion of the
lambda calculus and while-programs. The main discussion of computability
is in terms of Turing machines, though. I taught a Theory of Computation
class with it and found that it covers regular languages extremely well
but does not do such a great job on context-free languages.
You might consider Dexter Kozen's Automata and Computability
(Springer, 1997). In addition to the standard topics, it includes
sections on the lambda calculus and on "while" programs. The discussion
of lambda calculus is fairly brief, but detailed enough to be useful.
Also, the book ends with a proof of Godel's incompleteness theorem.
I think that it's quite a nice book, with clear, detailed proofs.
-- Geoff Smith
The book "Computability: Computable Functions, Logic, and
the Foundations of Mathematics" by Epstein and Carnielli,
intended for mid-upper level undergrads, seems to be in the
ballpark of what you're looking for. The lambda calculus is
*not* defined (though it is *briefly* discussed); instead, an
intuitive formalism for defining r.e. functions is used as the
context for discussion of some classic computability results.
This formalism is proven equivalent to Turing machines.
I look forward to seeing the results of your search; I have also
looked for a book describing the lambda-calculus and related
results that would be accessible to undergrads, without success.
If it doesn't exist, I think it's high time that someone in the modern
PL community wrote this book; it would be an interesting and
worthwhile project.
You can look at
which is the webpage for my course
6.044J/18.423J: Computability Theory Of and With Scheme
Handouts 17 & 18 are my attempt to develop computability theory using
Scheme with an operational semantics by rewriting, instead of TM's.
Although this was a useful exercise for me, and is a decent way to teach
about Scheme, I'm coming to doubt that it is a good way to teach
Computability Theory, where I think the emphasis should be on the bigger
ideas, not the programming details.
- Albert Meyer
You probably know already about the textbooks by Kfoury et al.,
which use loop programs (I believe).
I personally had the same feeling when restarting to teach
computation theory, a few years ago, and decided to stick with
Turing machines. I am glad I did. I think it is important for students
to see a mathematical model of hardware which is particularly
simple and distilled to the essential minimum. First, it is
a conceptually important exercise, when combined with Turing's analysis
of what computation is. Second, it allows you to give a particularly
simple definition of configurations, transitions, and computation traces.
Third, it is central to computational complexity theory.
Finally, the simplicity of the model is useful in applications, for
example in proving Church's theorem on the undecidability of
first order validity.
What I do find distasteful in many texts is the obsessiveness with which
they take the model seriously and develop "programming skills" in it.
What I have done just last semester in our sophomore course is to
pair the definition of Turing machines with a simple symbolic
programming language, particularly crafted to strike a balance between
(a) being interpretable by Turing machines, and
(b) being more user-friendly and convincingly computationally complete than
Turing machines.
My draft notes are downloadable at
under "turing machines" and "assembly programs"
best wishes - daniel leivant
I once taught pure lambda calculus with a view to doing recursion theory in it.
You may find the corresponding course notes in
Hope this can be of use to you.
These notes used to be executable in Caml, and can be easily adapted to
OCaml with camlp4 grammars.
What I tried to do is, I think, exactly in the direction you were
looking for: basing recursion theory on lambda-calculus as opposed to Turing
machines or Kleene equations. It has the advantage of having more
structure, and higher-order recursion is very natural, in layers which
correspond more or less to type disciplines. I however did not go very
far, because originally I intended to cover typed lambda-calculus and
type theory as Part II. The problem is that I have sets of notes in
various meta-languages: some are in more or less informal maths, some
are in ML, and some are in Type theory (formalised in various versions
of Coq). I managed to do a part of pure lambda calculus in ML, these are
the notes I pointed to, with the beginning of some recursion theory in
2.3. Chapter 3 I was able to complete in Type theory; this is my
article in J. Functional Programming 4,3 (July 94). Chapter 5 was
published in TCS 121 (1993) 145-167 for Böhm's Festschrift, since it
gives an implementation of Böhm's theorem.
One important part missing is the characterization of SN terms as terms
typable in the conjunctive type discipline, this is within Part II, in
French Macwrite (I follow more or less Krivine).
You might find Neil Jones' book Computability and Complexity: From a
Programming Perspective (MIT Press, 1997) interesting. I can't remember
whether it covers lambda calculus, but he considers a while language.
The old book by Brainerd and Landweber (Theory of Computation) also
deals with LOOP and WHILE languages. I usually use RAM and the WHILE
language in my course, but the notes are unfortunately in German, thus
not of much help.
Dieter Spreen (firstname.lastname@example.org)
This may not be exactly what you're looking for, but John Savage's book
"Models of Computation: Exploring the Power of Computing" seems to favor
circuit models, with less emphasis on the traditional Turing machine
model. (I haven't read the book myself, though; I just know of it.)
It seems you have very similar aims to those I had in a 2nd-year
course I have taught in "Foundations of Computation" here at Chalmers.
This course covers the following topics: introduction to syntax,
automata, operational semantics, lambda calculus and computability. The
course does less than a traditional course on syntax, automata, and
computability, and instead introduces some basic ideas underlying
functional languages and operational semantics (I use PCF). I'm using a
course book in Swedish by my colleague Kent Petersson, who was the first
to create a course with this breadth. It's been taught since the
mid-80s. In addition to this course book I use some notes on PCF,
operational semantics, and how to do computability using PCF. It's in
English. Here's the URL:
At Chalmers we teach Haskell early on in the curriculum, and I use
Haskell for programming exercises in connection with the above course. I
would very much like to have a course book based on the above topics
that uses Haskell (ML would be fine too ...). My course notes only
cover parts of the field and need a lot more work anyway. Simon
Thompson at Kent has written some papers on using Haskell to introduce
automata, which are also relevant.