[r6rs-discuss] R6 counterproposal
lord at emf.net
Sat May 26 17:37:32 EDT 2007
Joe Marshall wrote:
> Both FEXPRs and first-class environments make it extremely difficult
> to reason about the behavior of a program. With FEXPRs, you simply
> cannot tell whether any particular instance of a function call is
> going to be a special form or not. It might be applicative order this
> time, normal order next time. Every year or two someone tries to
> re-introduce fexprs. People point out the problems and the person
> ignores them and tries anyway. He comes back a month or so later and
> says ``Well, I got them working, but I can't figure out how to
I don't understand how any of that is a problem if FEXPRs are
an optional feature. We know for certain that they *can* be usefully
implemented in interpreted environments, and that they can (in the
form proposed) be used to define things like PLT's module system.
Compiler writers are free to concentrate on FEXPRless environments,
exactly as they do now, but they also have the new option of finding
ways to compile certain special-case uses of FEXPRs (such as when
they can be eliminated from code by partial evaluation).
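To make the distinction concrete, here is a minimal sketch (my own toy evaluator, not anyone's actual proposal) of what a FEXPR is: an operator flagged as a fexpr receives its operand expressions *unevaluated*, together with the dynamic environment, and decides for itself what, and whether, to evaluate. A `my-if` special form can then be defined in user space rather than wired into the evaluator.

```scheme
;; Toy evaluator over a tiny expression language. Operators tagged
;; with 'fexpr get raw operands plus the environment; everything
;; else is ordinary applicative-order application.

(define (lookup sym env)
  (cond ((assq sym env) => cdr)
        (else (error "unbound variable" sym))))

(define (toy-eval exp env)
  (cond ((symbol? exp) (lookup exp env))
        ((not (pair? exp)) exp)            ; literals self-evaluate
        (else
         (let ((op (toy-eval (car exp) env)))
           (if (and (pair? op) (eq? (car op) 'fexpr))
               ;; fexpr: operands passed unevaluated, env reified
               ((cdr op) (cdr exp) env)
               ;; applicative: evaluate operands first
               (apply op (map (lambda (e) (toy-eval e env))
                              (cdr exp))))))))

;; `my-if` as a user-level fexpr: evaluates only the branch it needs.
(define toy-env
  (list (cons '+ +)
        (cons 'my-if
              (cons 'fexpr
                    (lambda (operands env)
                      (if (toy-eval (car operands) env)
                          (toy-eval (cadr operands) env)
                          (toy-eval (caddr operands) env)))))))

(display (toy-eval '(my-if #f (+ 1 1) (+ 2 3)) toy-env))  ; → 5
(newline)
```

This also shows exactly where the compilation difficulty lives: whether `my-if` is applicative or not is a run-time property of the value bound to it. When that binding is statically known, though, a partial evaluator can inline the fexpr's body and the residual code is ordinary compilable Scheme.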
I've long thought that if a language design doesn't imply that
sometimes a compiler has to say "Sorry, can't do much with this
code, it has to run interpreted," that probably the design contains
> I was much more gung-ho on first-class environments, but Chris Hanson
> changed my mind. If an environment is first-class, then it becomes
> impossible to tell where a symbol is bound. It also becomes
> impossible to compile anything because you are exposing the
> implementation of closures. The things you can do with first-class
> environments can be done in other ways more efficiently and more
So long as it can be statically decided when environments might be
captured and might escape, Hanson's objections should carry no
weight: code pays the "price" of first-class environments only when
the programmer really, really wants to capture them in some way
that thwarts the compiler.
Meanwhile, environments can serve two useful roles (at least), as
an optional feature: (1) in combination with the other proposals, they
are ample basis for writing portable reference implementations
of SRFIs defining module systems, macro systems, etc. (2) they
encourage exploration in Scheme of the kinds of late-binding object
systems that are proving very successful in, well, more successful
high-level languages.
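Guile, as it happens, already ships something very close to this as an optional module: `(ice-9 local-eval)` provides `the-environment`, which reifies the lexical environment at a program point, and `local-eval`, which evaluates an expression in a captured environment later. A sketch of the basic idea:

```scheme
;; Runs under Guile. `the-environment` hands back a first-class
;; handle on the enclosing lexical scope; `local-eval` evaluates
;; an expression in that scope after the fact. Only frames that
;; `the-environment` can actually reach need to stay "interpretable".
(use-modules (ice-9 local-eval))

(define captured
  (let ((x 10))
    (the-environment)))               ; capture this scope

(display (local-eval '(* x x) captured))   ; → 100
(newline)
```

Note that the capture point is syntactically explicit, which is precisely what makes the static decision above tractable: a compiler can treat every scope that contains no `the-environment` exactly as it does today.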
> Finally, prototype-based object systems make good toys, but not good
> systems. I was a fan of Object-Lisp (Drescher) for a while, but it
> had some horrendous problems: The primary one is severe brittleness
> due to the `fragile base class' problem. This isn't an issue when you
> have a little object system that you have full control over. It is a
> huge issue when you have many developers manipulating objects and you
> want to do something like persist the objects. I tried porting Object
> Lisp to MIT Scheme and using first-class environments to represent the
> object bindings. It was a disaster (I managed to get the microcode
> confused enough to crash). After much discussion with Hanson and
> Rozas I came to the conclusion that this path was seductive, but
I'm not sure I get much from the anecdote. Let me say all of that
a little differently -- put a different spin on it:
Prototype-based object systems make good toys, but they are pretty
bad at scaling up when you have lots of code and lots of programmers
working on a system. They're great for fast, improvisational exploration
and development -- kind of liberating in their ad hoc flexibility -- but
that same laxness easily leads to chaos if a project becomes too large.
Sure, why not? I'm not convinced about the scalability problems -- maybe
that's just a question of how they're used -- but let's stipulate: they're
great toys for small systems. That's probably why they're popular.
(As an aside, I gather from another comment I got that I use the term
"prototype-based object system" a bit laxly. It isn't the presence or
absence of a Class/Instance distinction that matters, so much, in my view
as it is a very simple-minded MOP based on late binding via a hierarchy
of mutable, generic, associative data structures.)
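A minimal sketch of what I mean by that (purely illustrative; this is not Object-Lisp): an "object" is nothing but a mutable association list plus a parent to delegate to, and a message send is a late-bound lookup up that chain.

```scheme
;; An object = (slot-alist . parent). Slots are mutable; lookup
;; delegates up the prototype chain; dispatch is late-bound.

(define (make-obj parent) (cons '() parent))

(define (obj-set! obj key val)
  (set-car! obj (cons (cons key val) (car obj))))

(define (obj-get obj key)
  (cond ((not obj) (error "message not understood" key))
        ((assq key (car obj)) => cdr)
        (else (obj-get (cdr obj) key))))     ; delegate to parent

(define (send obj msg . args)
  (apply (obj-get obj msg) obj args))

(define point (make-obj #f))
(obj-set! point 'x 1)
(obj-set! point 'y 2)
(obj-set! point 'magnitude^2
          (lambda (self)
            (+ (* (obj-get self 'x) (obj-get self 'x))
               (* (obj-get self 'y) (obj-get self 'y)))))

;; A prototype child: overrides x, inherits y and the method.
(define point2 (make-obj point))
(obj-set! point2 'x 3)

(display (send point2 'magnitude^2))   ; → 13
(newline)
```

Both the liberating flexibility and the fragile-base-class hazard are visible in those few lines: any slot of `point` can be mutated at any time, and every object delegating to it silently changes behavior.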
I'm not sure what to say about your crashing the MIT Scheme
microcode engine: it sounds like you found a bug :-)
> There may not be much in this note that merits reply either.
It's very different. You presented experience and reason. That other
fella' presented name-calling and advocated cynical resignation.
I'm happy to reply.
> I'm just
> pointing out that it's been tried before (several times), found
> wanting (several times), and discarded (several times), for at least
> the past 20 years and that a lot of *very* smart people have been
> unable to make much headway.
Yet, SCM and Guile are organized around features pretty close
to what I propose and, while they aren't the Last Word in Scheme
implementations by any stretch, they seem to do ok.
> It's possible that these people are
> wrong, and that you have a breakthrough method, but based on past
> evidence I wouldn't put the entire Scheme community down this path.
You're confusing a very narrow failure -- something like trying
to efficiently compile when *every* environment can be captured
and escape at run-time -- with a general argument against the features.
It should be pretty obvious, though, that the proposed features are
a win if your commitment is to generality and flexibility over
compilability and raw performance. They are perfectly viable
features in an interpreter. They are semantically potent enough
to define macro systems, module systems, etc. When they are used
with care to define the current set of module systems, macro systems,
etc., static analysis and transformations can eliminate the use of such
features from the code.
They seem, to me, to be just about all that's still missing from the
definition of Scheme. They seem, to me, just to complete the thought.