Frink is fun

Alan Eliasen's Frink is a language designed for use as a calculator. Until today I thought it was written to predict sun and moon alignments with MIT's Infinite Corridor. Turns out it was written because the author was having trouble with some computations relating to fart jokes. Either way, what other language has such an interesting origin story?

These problems involve physical calculations. Physical calculations involve units. And (for Alan Eliasen, and me, and other mortals) units involve lots of mistakes. So Frink has dimension checking and automatic unit conversions. It also has terse syntax, including implicit multiplication, because nothing gets in the way of quick calculation like unnecessary typing. It's fun to read the documentation (no, really) but it's more fun to use the language to solve problems you thought were too much trouble.

Big library

You know those moments when you're using a language you don't know very well, and you know what feature you want, and wish it was in the standard library, but know that's just not realistic? Frink is surprisingly good at granting those wishes. Every time I've wanted a unit, and nearly every time I've wanted a constant, Frink has turned out to already know it. Sometimes it's in a different form or has a different name, but usually I can just guess and it works.

Helpful output

Frink is a language for use at the REPL, and its printer is very helpful. When it prints a ratnum, it gives both ratio and decimal forms: 4191/625 (exactly 6.7056) m s^-1 (velocity). Notice the exactly - if the decimal representation were rounded, it would say approx. And of course it prints the unit, complete with a human-readable dimension name. That name makes dimension checking much more useful - you might overlook the difference between kg m s^-2 and kg m^2 s^-2, but you notice force when you're expecting energy.
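The exactly/approx distinction is simple for rationals: a fraction has a terminating decimal exactly when its denominator has no prime factors other than 2 and 5. Here is a rough Python sketch of the idea - my illustration, not Frink's actual printer, and ignoring units entirely:

    from fractions import Fraction

    def show(q, digits=10):
        # Print ratio and decimal forms, flagging the decimal as exact
        # only when it terminates (denominator is a product of 2s and 5s).
        d = q.denominator
        while d % 2 == 0:
            d //= 2
        while d % 5 == 0:
            d //= 5
        tag = "exactly" if d == 1 else "approx."
        print(f"{q.numerator}/{q.denominator} ({tag} {float(q):.{digits}g})")

    show(Fraction(4191, 625))   # 4191/625 (exactly 6.7056)
    show(Fraction(1, 3))        # 1/3 (approx. 0.3333333333)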

Precedence of implicit multiplication

The most common mistake Frink users make - so common that Frink issues warnings about it in normal operation - is to not parenthesize the denominator of a division.

Obviously a user who writes 1 mile / 4 minutes means (1 mile) / (4 minutes). But Frink gives implicit multiplication the same precedence as explicit *, so it parses this as (1 mile / 4) * minutes and helpfully reports the result as 603504/25 (exactly 24140.16) m s, together with a suggestion to use parentheses, and a link to the FAQ. (The FAQ neglects to mention that you can also use the low-precedence -> operator instead of /.)

Correction 20 October 2007: The FAQ doesn't mention it because it's not true. -> requires both arguments to have the same dimensions, so it does not in general work as a low-precedence division operator.
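The two parses are easy to check by hand. Here's the arithmetic in Python's fractions module (using the exact definitions 1 mile = 1609.344 m and 1 minute = 60 s); it reproduces both of the exact values quoted above:

    from fractions import Fraction

    mile   = Fraction("1609.344")     # metres per mile, exact by definition
    minute = Fraction(60)             # seconds per minute

    # Frink's parse: (1 mile / 4) * minutes, dimensioned as m s
    print(1 * mile / 4 * minute)      # 603504/25, i.e. 24140.16
    # The intended parse: (1 mile) / (4 minutes), dimensioned as m s^-1
    print((1 * mile) / (4 * minute))  # 4191/625, i.e. exactly 6.7056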

This is all unnecessary. The confusing precedence results from an effort to be consistent with standard mathematical notation. But in standard notation, implicit multiplication has higher precedence than *. Consider h/2π - the π is definitely in the denominator. Or compare 1/n(n+1) to 1/n*(n+1). Are there any examples where implicit multiplication doesn't have higher precedence?

There are cases like 1/2 km. But this isn't a division, it's a ratio. Compare it to a/2 km, which is ambiguous. The difference shows that 1/2 and a/2 don't necessarily have the same structure. So recognizing ratios would keep the parser more consistent with common mathematical notation. And it does this without whitespace-dependence, which would not fit Frink's philosophy.

Once you've distinguished implicit multiplication, you can go further. In standard notation, unary functions' implicit operation is application, and Frink could easily imitate that - which would do away with a lot of annoying square brackets. You could do this without distinguishing implicit multiplication, but then sin * 2 would have a surprising meaning, since the shared operator would have to apply sin to 2. There's a pattern here: the implicit operator is different from the multiplication operator, even in standard notation.
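Here's a toy Pratt-style parser in Python to make the proposal concrete - my own sketch, not Frink's grammar. Juxtaposition is a distinct operator (written · in the output) that binds tighter than explicit * and /, and it becomes application when its left operand names a function; the FUNCTIONS set is just a stand-in for however the real language would know that:

    import re

    FUNCTIONS = {"sin", "cos", "ln"}   # stand-in for "names known to be functions"
    TOKEN = re.compile(r"\d+(?:\.\d+)?|[A-Za-z_]\w*|[*/()]")
    EXPLICIT, IMPLICIT = 10, 20        # juxtaposition binds tighter than * and /

    def tokenize(src):
        tokens, pos = [], 0
        while pos < len(src):
            if src[pos].isspace():
                pos += 1
                continue
            m = TOKEN.match(src, pos)
            if not m:
                raise SyntaxError(f"bad character {src[pos]!r}")
            tokens.append(m.group())
            pos = m.end()
        return tokens

    def parse(src):
        tree, rest = expr(tokenize(src), 0)
        if rest:
            raise SyntaxError(f"unexpected {rest[0]!r}")
        return tree

    def expr(tokens, min_prec):
        left, tokens = atom(tokens)
        while tokens and tokens[0] != ")":
            if tokens[0] in ("*", "/"):
                if EXPLICIT < min_prec:
                    break
                op = tokens[0]
                right, tokens = expr(tokens[1:], EXPLICIT + 1)
            else:                          # juxtaposition
                if IMPLICIT < min_prec:
                    break
                op = "apply" if left in FUNCTIONS else "·"
                right, tokens = expr(tokens, IMPLICIT)   # right-associative, so sin 2 x reads as sin[2 x]
            left = (op, left, right)
        return left, tokens

    def atom(tokens):
        if not tokens:
            raise SyntaxError("unexpected end of input")
        tok, rest = tokens[0], tokens[1:]
        if tok == "(":
            tree, rest = expr(rest, 0)
            if not rest or rest[0] != ")":
                raise SyntaxError("missing )")
            return tree, rest[1:]
        return tok, rest

    print(parse("1 mile / 4 minutes"))   # ('/', ('·', '1', 'mile'), ('·', '4', 'minutes'))
    print(parse("h / 2 pi"))             # ('/', 'h', ('·', '2', 'pi')) - pi in the denominator
    print(parse("sin 2 x"))              # ('apply', 'sin', ('·', '2', 'x'))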

In the FAQ, Frink's author agonizes about the precedence problem and various solutions that don't work, but he doesn't mention this approach. However, he also says he doesn't want to spend any time on the syntax, a view to which I'm quite sympathetic. Syntax problems are small constant factors; language designers have more important things to worry about. For Frink, the important thing is removing units and dimensions from the set of things the user has to think about.

Dynamic is easier

Dimensions seem like something that should be statically checked. They don't change dynamically, do they? But Frink does all its dimension analysis dynamically, because it's easier. Not necessarily easier to implement - a checker can be pretty simple - but easier to design. It's pretty obvious how to tag numbers with dimension information, but it's less obvious how to write an inferencer. It's also not obvious that the static checker would always work - might there be spurious errors, as there are in static typechecking? When doing an unfamiliar analysis, the dynamic approach is safer because it's easier to get right.
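Tagging is simple enough to sketch in a few lines of Python - my illustration of the dynamic approach, not Frink's implementation. Each quantity carries a dict of base-dimension exponents; addition checks that the tags match, and multiplication and division combine them:

    class Q:
        # A number tagged with base-dimension exponents, checked at runtime.
        def __init__(self, value, dims=None):
            self.value = value
            self.dims = dims or {}              # e.g. {"m": 1, "s": -1} for a velocity

        def __add__(self, other):
            if self.dims != other.dims:
                raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
            return Q(self.value + other.value, dict(self.dims))

        def __mul__(self, other):
            dims = dict(self.dims)
            for base, exp in other.dims.items():
                dims[base] = dims.get(base, 0) + exp
                if dims[base] == 0:
                    del dims[base]
            return Q(self.value * other.value, dims)

        def __truediv__(self, other):
            return self * Q(1.0 / other.value, {b: -e for b, e in other.dims.items()})

        def __repr__(self):
            unit = " ".join(f"{b}^{e}" if e != 1 else b for b, e in sorted(self.dims.items()))
            return f"{self.value} {unit}".strip()

    meter, second = Q(1.0, {"m": 1}), Q(1.0, {"s": 1})

    print(Q(6.7056) * meter / second)        # 6.7056 m s^-1
    print(Q(2.0) * meter + Q(3.0) * meter)   # 5.0 m
    # Q(1.0) * meter + Q(1.0) * second       # raises TypeError: dimension mismatch

A static checker would have to infer those exponent tags for every expression before running anything - which is exactly the part that's harder to design.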

Static checking has the great virtue of not costing anything at runtime. So if I were to add units and dimensions to another language, I would try to do it statically. But I wouldn't trust my ability to write a static dimension checker, so I would rely on Frink's dynamic checking to show what the right answer is. That's ironic, considering that the most common motivation for static analysis is to gain confidence.

Extralinguistic problems

The biggest problem with Frink is that I can't use it. Most of the times when I want to use a calculator, there's not a computer handy (which is a bit of a surprise, considering how much time I spend sitting in front of one). So I often wish my calculator spoke Frink. But my calculator's keyboard isn't really adequate. So I now have one more item on my wishlist for an ideal computer.

Frink isn't open-source, which is only a little bit annoying. I've felt no particular desire to read the source, since it doesn't do anything mysterious. There is no magic part here, only a lot of mundane bits well put together and polished. That, to a language designer, is the most encouraging thing about Frink: there's nothing there I can't do myself.

Lessons from Goo

Anyone following recent Lisp dialects is probably aware of Jonathan Bachrach's Goo, a descendant of Dylan that has returned to its Lispy roots. It's not being developed any more, but for a language designer it's worth studying, because it is the best-designed recent Lisp. I've learned a number of things from it that I did not expect.

Easy variables

Goo's internal def is more convenient than let. Because it doesn't affect the indentation of the code that follows it, it makes the common operation of adding or removing a lexical variable much easier. It's annoying that def is distinct from dv, and that the other defining forms have no internal version (especially df). And of course it only works in a seq, so it's not always an option. A def that worked in more contexts would be a big step toward eliminating the common hassle of creating a variable when a value is used twice.

Short names

Short names are a win, of course, but you can go too far. Many of Goo's names - df, <lst> - omit a vowel to save one character, at the cost of making the name unpronounceable. So one-syllable "def" and "list" become "D.F." and "L.S.T." in speech. Names are read aloud often enough that this matters more than the decrease in characters. The authors say "goo opts for pronounceable special forms as much as possible", but some of the most frequently used names aren't.

Short names also have more collisions. Goo used to abbreviate define-slot as ds, but there weren't enough two-letter names to go around. It was later renamed to dp (for define-property) so ds could become define-syntax. I don't think defining syntax is common enough to deserve a two-character name, but Goo's names are either full-length or tiny. There's no convention of intermediate length, so defsyntax (or, for that matter, the traditional defmacro, which ds closely resembles) wasn't an option. Flexibility in naming conventions is important.

Speaking of naming conventions, the <lst> convention for classes is annoyingly long. It looks short because you don't pronounce the brackets, but those two extra characters are repeated quite often in type declarations. The problem that motivates this convention is that class names want to be overloaded as constructors. In a language where anything can be callable, this would be easy to avoid: a class is its own constructor, so there's no need for another name.
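Python happens to work this way: the class object is itself callable and serves as its own constructor, so there's no second name to invent. A trivial example:

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    p = Point(1, 2)          # the class is the constructor
    print(type(p) is Point)  # True - one name serves as both type and constructor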

The shorter the names get, the more important conciseness of other syntax becomes. Parentheses and spaces start to dominate the character count. Getting rid of those takes syntax, which is hazardous stuff. Goo uses a little of it to shorten lambda and arglists.

Research vs. development

Goo suffers from being a compilation research project. The implementors have spent much of their effort on its on-the-fly translation to C. The result is reasonably fast, and the ability to write inline C is interesting, but other aspects of the language are rather unfinished. There are other experimental features, like dp, that are awkward in practice. I don't blame the authors for not trying very hard to make the language useful, but if they were trying, this would not be a good strategy.

op wins

Goo's op macro makes partial application easy, and it's more flexible than curried functions. Every functional language should have it.

SRFI 26 has two kinds of op (called cut and cute), differing in whether the argument subforms are evaluated once or on each call. (Despite the cute name, I had to look up the SRFI number. Unmemorable numbers are SRFIs' biggest problem.) This distinction rarely matters, because the arguments are almost always either variable references or literals. In all the code I've written using op, I have never had an argument that was expensive or had side effects. As it happens, my op implementation, like Goo's, reevaluates the arguments each time - but I didn't know that until I checked.
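The distinction is easy to show in Python rather than Goo or Scheme (my analogy, not either language's code): a closure re-evaluates a captured expression on every call, like cut, while functools.partial evaluates it once, like cute.

    from functools import partial

    base = [10]

    # cut-style: the argument expression base[0] is re-evaluated on every call
    f = lambda x: x + base[0]

    # cute-style: base[0] is evaluated once, when the partial is built
    g = partial(lambda captured, x: x + captured, base[0])

    base[0] = 100
    print(f(1))   # 101 - sees the new value of base[0]
    print(g(1))   # 11  - still uses the value captured earlier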