vmanis at telus.net
Tue Mar 24 21:46:06 EDT 2009
On 2009-03-24, at 18:11, John Cowan wrote:
>> in a Philip K Dick story who discovers that not only is the universe
>> unreal, but so too is the character him/herself.
> If he's not real, how can he know if the universe is real or not?
That is one of the major difficulties with reading Dick, I think: if one
can answer such questions, then the question ceases to exist. Anyway,
my head explodes long before I get to that point :-)
>> the internal representations of C integers are not specified, somebody
>> might build a replica of an IBM 7094, and we might have sign-magnitude
>> integers again!).
> C integers can be represented however you like (even as sign-magnitude
> bignums) but in certain ways must act *as if* they were two's-complement
> binary representations: things like casting from signed to unsigned
> and shifting, e.g.
Actually, not. (To be truthful, I have read the C89 standard
carefully, but the C99 standard much less so; for all I know, this
stuff might have changed, though I doubt it.) For example, the
standard defines shifting of unsigned integers, but does not define
what happens if a negative (signed) integer is shifted. I have
written code that depends upon that, and I very much doubt that anyone
will ever resurrect a Univac 1108, with ones-complement arithmetic, to
prove me wrong. But technically, the C standard doesn't define it. An
even murkier case is signed division, where the choice of remainder
versus modulus is basically defined by a hardware architect who may or
may not have ever thought about which of the two is better. Again, C
doesn't define signed division.
(On this point, the British mathematician/computer scientist Brian
published an article in Sigplan Notices back in the 70s, where he sent
an arithmetic test to his friends, asking them to evaluate things like
-17/2. He got something of a variation in quotient answers, and a huge
variation in remainder answers, many of them using modulus, with
perhaps somewhat odd rules about the sign of the result. Given that
these answers get baked into machine architectures, it's not a bad
plan for a standard for a language that is supposed to be `efficient'
to stay silent on which choice is to be made.)
This raises an interesting point. Perhaps there are language standards
with no implementation-defined behavior; I have certainly never seen
one. In practice, the language a programmer uses is a combination of
the standard and common practice. A C programmer might be using some
implementations on which ints are 16 bits and others on which ints are
32 or 64; but every modern C implementation uses 8, 16, 32, and 64 as
the set of available int sizes. It is reasonable for a programmer not
to worry about the case where the implementation provides 9, 18, 36,
and 72 as the set of sizes.
The obsession with defining exactly what R6RS `means', in the absence
of what implementations actually do, I think detracts from the
principal job of a standard, which is to establish some sort of
consensus that programmers can use to give meaning to programs.
Regarding string->number, my vote is therefore for an erratum that
clarifies that if the string argument is not a string, the result is
#f. This doesn't break the `doesn't raise an exception' statement, and
is therefore the minimal change one can make. But I can live with
other definitions.