Customary semantics

What is the real, definitive semantics of a language? There are three standard answers:

  1. The natural-language specification, because it's the one the designers understand.
  2. The reference implementation, because it's unambiguous and well-tested.
  3. The formal semantics (of whichever flavor), because it avoids implementation concerns, so it's simpler than a real implementation. (Or because it's difficult and therefore “rigorous”.)

There's a controversial fourth option: the definitive semantics of a language is the behavior that is consistent across all conventional implementations.

This approach has some virtues:

  • It identifies the behaviour you can rely on. Implementations have bugs and deliberate deviations from the spec, cases where you can't rely on the specified behaviour. They also have widely supported extensions which you can rely on, even though they're not in the spec.
  • Unlike any other means of defining semantics, implementations are heavily tested. Formal semantics can be tested by turning them into implementations, but seldom are; natural-language specifications aren't mechanically tested at all.
  • It's reconstructable. Users can always find out what their implementations do, even when the spec is not publicly available, or is difficult to read. (Most specs are.) Sometimes this shows them implementation-dependent behaviour, but by comparing implementations they can discover the customary semantics.

Deferring to custom is unpopular among language designers and theorists. We see it as an ill-defined, unstable foundation about which nothing can be known with confidence, and on which nothing can be built reliably. We remember the chaos that engulfed HTML and CSS and JavaScript when their users treated buggy implementations as specs, and we don't want it to happen again. We want our semantic questions to have authoritative answers, and mere custom does not provide that.

But it's the de facto standard among users of languages. Most programmers are not language lawyers, and can't readily figure out whether the spec says their code will work. But they can easily try it and see what happens.

We can tell users not to do this. We can tell them to avoid empiricism, to seek authority rather than evidence, to shut their lying eyes and trust in doctrine. This is not good advice in most areas, not even in other areas of programming, nor for the semantics of other languages, natural or artificial. Is it really good advice for programming languages?

Whether it's good advice or bad, users don't listen. Their models are based on the behaviour they observe. As a result, many popular “myths” about languages — that is, widely held beliefs that are officially supposed to be false — are true in the customary semantics. For example, here are some parts of C's customary semantics that are not part of the formal specification. Some of them are violated on unusual architectures, but most C users have never written for such an architecture, so custom doesn't care.

  • Signed integers are represented in two's complement. (Rumor has it this is not quite always true.)
  • Signed integer overflow is modulo word size, like unsigned.
  • All pointer types have the same representation: an integer.
  • NULL is represented as 0.
  • Memory is flat: it's all accessible by pointer arithmetic from any pointer.
  • Pointer arithmetic is always defined, even outside array bounds. Overflow is modulo word size, just like integers.
  • Dereferencing an invalid pointer, such as NULL or an out-of-bounds pointer, blindly tries to use the address.
  • Compilers generate native code. The built-in operators compile to machine instructions.
  • char is exactly eight bits wide.
  • Characters are represented in a superset of ASCII.

(I thought sizeof(char) == 1 was only in the customary semantics, but it's actually in the spec.)
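
To make this concrete, here is a minimal sketch (assuming a conventional desktop compiler and a 32-bit int) of code that is fine in the customary semantics but that the spec does not sanction. The commented results are what custom predicts; the standard guarantees none of them.

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Two's complement: converting an out-of-range unsigned value to a
           signed type is implementation-defined per the spec; custom says it
           just reinterprets the bits. */
        printf("%d\n", (int)0xFFFFFFFFu);      /* customarily -1 */

        /* Signed overflow wraps modulo the word size; the spec calls it
           undefined behaviour. */
        int n = INT_MAX;
        printf("%d\n", n + 1);                 /* customarily INT_MIN */

        /* NULL is all-zero bits, so zeroing a pointer's bytes yields a null
           pointer. The spec makes no such promise. */
        int *p;
        memset(&p, 0, sizeof p);
        printf("%d\n", p == NULL);             /* customarily 1 */

        /* Flat memory: forming and using a pointer outside array bounds is
           undefined per the spec, but custom treats it as plain integer
           arithmetic. */
        int a[4];
        int *q = a + 10;
        printf("%d\n", (int)(q - a));          /* customarily 10 */
        return 0;
    }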

Much of the furor over optimizations that exploit undefined behaviour is because they're invalid in the customary semantics. Some C compiler maintainers have come to believe that the spec is the whole of the contract between compilers and users, and thus that users don't care about semantics not defined therein. It's a convenient belief, since it permits optimizations that would otherwise be impossible, but it's wildly at odds with what their users want. This isn't the only problem with these optimizations — they make for perverse error behaviour under any semantics — but this is why users tend to see them as not merely bad but incorrect.
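
Here's a sketch of the kind of optimization at issue. The function names are just for illustration, but compilers such as gcc and clang at -O2 have been known to perform both transformations.

    /* Intent, clear in the customary semantics: detect wraparound before it
       happens. Because signed overflow is undefined, the compiler may assume
       it cannot occur and fold the test to 0, so the check silently vanishes
       even when x == INT_MAX. */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    /* The dereference precedes the check. Since dereferencing NULL is
       undefined, the compiler may infer that p cannot be NULL here and
       delete the check, turning a would-be graceful failure into a blind
       use of the address. */
    int read_or_fail(int *p) {
        int value = *p;
        if (p == NULL)
            return -1;
        return value;
    }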

Language lawyers, especially those who write specs, should take customary semantics more seriously, so they don't contradict the semantics in actual use.

2 comments:

  1. I did what I could to find out the customary semantics of Scheme, which is one of the hardest things to find out, because there are so many implementations and there is no consensus on which ones are customary and which are not. Originally I just posted how many Schemes did what, but it was impressed on me that witnesses need to be weighed, not merely counted, and I decided to confine myself to reporting the raw facts. There is an index to 70 or so results on the WG wiki.

    1. Those comparisons are a valuable resource (I have several times caught myself pondering questions they could answer easily), but do they illuminate any customary semantics not specified in the standard? They always report differences, but maybe that's because you don't bother to write up the cases where all implementations agree?

      ISTM Scheme relies less on customary semantics than C, because its users accept such a wide range of implementations. Even tiny toy implementations are considered “Scheme”, so custom doesn't specify much more than RNRS. This probably contributes to the perception of Scheme as a small language: not only is the spec small, the customary semantics is small too.

      There are some customary semantics you can rely on even in toy implementations: there is garbage collection (not e.g. regions or refcounts or manual freeing or something else). eq? is pointer comparison and “works” (a suspiciously custom-dependent word) on everything. Most errors are reliably detected: type, arity, array bounds, unbound names. (Unsafe implementations make a fuss about being unsafe, because it's contrary to custom.) On the other hand, custom doesn't require full call/cc or tail-call optimization.

      One case where Scheme's customary semantics can conflict with its official semantics is the behaviour of eq? on booleans. JVM Schemes must deal with multiple Booleans, so some of them special-case eq? to compare Booleans by value. This is wrong in the customary semantics, but having multiple copies of each boolean is also wrong, and furthermore contradicts RNRS, and it might break real programs, so special-casing eq? is the lesser evil.

