I saw a link to So You Think You Know C? by Oleksandr Kaleniuk on Hacker News and was pleasantly surprised. I expected a few comments about tricky parts of C, and found them, but there’s much more. The subtitle of the free book is And Ten More Short Essays on Programming Languages. Good reads.
This post gives a few of my reactions to the essays, my even shorter essays on Kaleniuk’s short essays.
My C
The first essay is about undefined parts of C. That essay, along with this primer on C obfuscation that I also found on Hacker News today, is enough to make anyone run screaming away from the language. And yet, in practice I don’t run into any of these pitfalls and find writing C kinda pleasant.
I have an atypical amount of freedom, and that colors my experience. I don’t maintain code that someone else has written—I paid my dues doing that years ago—and so I simply avoid using any features I don’t fully understand. And I usually have my choice of languages, so I use C only when there’s a good reason to use C.
I would expect that all these dark corners of C would be accidents waiting to happen. Even if I don’t intentionally use undefined or misleading features of the language, I could use them accidentally. And yet in practice that doesn’t seem to happen. C, or at least my personal subset of C, is safer in practice than in theory.
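To make “dark corners” concrete, here is a small illustration of the kind of pitfall those essays have in mind. The example is mine, not taken from the book: signed integer overflow is undefined, so a compiler is free to assume it never happens and quietly delete code that depends on it.

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;

        /* Signed overflow is undefined behavior, so a compiler may
           assume x + 1 > x always holds and remove this test entirely. */
        if (x + 1 < x)
            printf("overflow detected\n");
        else
            printf("no overflow\n");

        /* Another classic: i is modified twice with no intervening
           sequence point, so the program's behavior is undefined. */
        int i = 0;
        i = i++ + 1;
        printf("%d\n", i);

        return 0;
    }

Both statements compile without complaint on many settings, which is exactly why they make people nervous.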
APL
The second essay is on APL. It seems that everyone who programs long enough eventually explores APL. I downloaded Iverson’s ACM lecture Notation as a Tool of Thought years ago and keep intending to read it. Maybe if things slow down I’ll finally get around to it. Kaleniuk said something about APL I hadn’t heard before:
[APL] didn’t originate as a computer language at all. It was proposed as a better notation for tensor algebra by Harvard mathematician Kenneth E. Iverson. It was meant to be written by hand on a blackboard to transfer mathematical ideas from one person to another.
There’s one bit of notation that Iverson introduced that I use fairly often, his indicator function notation described here. I used it in a report for a client just recently, where it greatly simplified the write-up. Maybe there’s something else I should borrow from Iverson.
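For readers who haven’t run into it, the notation is what is often called the Iverson bracket. This summary is mine, not a quotation from the linked post: a bracketed statement equals 1 when the statement is true and 0 when it is false, which turns case-by-case definitions into single expressions.

    [P] =
    \begin{cases}
      1 & \text{if } P \text{ is true,} \\
      0 & \text{otherwise.}
    \end{cases}

    % For example, the divisor-counting function without special cases:
    d(n) = \sum_{k=1}^{n} [\, k \mid n \,]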
Fortran
I last wrote Fortran during the Clinton administration and never thought I’d see it again, and yet I expect to need to use it on a project later this year. The language has modernized quite a bit since I last saw it, and I expect it won’t be that bad to use.
Apparently Fortran programmers are part of the dark matter of programmers, far more numerous than you’d expect based on visibility. Kaleniuk tells the story of a NASA programming competition in which submissions had to be written in Fortran. NASA cancelled the competition because they were overwhelmed by submissions.
Syntax
In his last essay, Kaleniuk gives some ideas for what he would do if he were to design a new language. His first point is that our syntax is arbitrarily constrained. We still use the small collection of symbols that were easy to input 50 years ago. As a result, symbols are highly overloaded. Regular expressions are a prime example of this, where the same character has to play multiple roles in various contexts.
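As an illustration of that overloading (my example, not one from the essay): in POSIX extended regular expressions, ^ anchors a match outside brackets but negates a character class inside them, and - denotes a range in one position and a literal hyphen in another. The helper function below is hypothetical, just a thin wrapper around regcomp and regexec.

    #include <regex.h>
    #include <stdio.h>

    /* Returns 1 if text matches pattern, 0 otherwise. */
    static int matches(const char *pattern, const char *text) {
        regex_t re;
        if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
            return 0;
        int rc = regexec(&re, text, 0, NULL, 0);
        regfree(&re);
        return rc == 0;
    }

    int main(void) {
        /* '^' anchors the match at the start of the string ... */
        printf("%d\n", matches("^abc", "abcdef"));   /* 1 */
        /* ... but inside brackets it negates the character class. */
        printf("%d\n", matches("[^abc]", "xyz"));    /* 1 */
        /* '-' denotes a range inside brackets ... */
        printf("%d\n", matches("[a-c]+", "cab"));    /* 1 */
        /* ... unless it comes first, where it is a literal hyphen. */
        printf("%d\n", matches("[-c]", "-"));        /* 1 */
        return 0;
    }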
I agree with Kaleniuk in principle that we should be able to expand our vocabulary of symbols, and yet in practice this hasn’t worked out well. It’s possible now, for example, to use λ rather than lambda in source code, but I never do that.
I suspect the reason we stick to the old symbols is that we’re stuck at a local maximum: small changes are not improvements. A former client had a Haskell codebase that used one non-ASCII character, a Greek or Russian letter if I remember correctly. The character was used fairly often and it did make the code slightly easier to read. But it wreaked havoc with the tool chain and eventually they removed it.
Maybe a wholehearted commitment to using more symbols would be worth it; it would take no more effort to allow 100 non-ASCII characters than to allow one. For that matter, source code doesn’t even need to be limited to text files, ASCII or Unicode. But efforts along those lines have failed too. It may be another local maximum problem. A radical departure from the status quo might be worthwhile, but there’s no way to get there incrementally. And radical departures nearly always fail because they violate Gall’s law: a complex system that works is invariably found to have evolved from a simple system that worked.
APL is a “tool of thought.” And if you learn it as your first programming language (as I did 40 years ago), it significantly colors (in a good way) how you program in every language learned thereafter.
That’s a very different experience than most of us. :)
Regarding character overloading, Swift will allow use of Unicode in source.
One can set a smile emoji to 25, define a banana emoji as a prefix square root operator, and foo = banana smiley will be evaluated as 5.
It’s frowned upon, and some style guides apparently prohibit the practice. I’m not sure if it’s the issues with tool chains, reactionary thinking, or what.
Swift is a nice language, from what I’ve seen (working my way through Newburg’s book), but writing in a terse style results in almost “write-only” code.
You can go nuts with Greek and script characters in Mathematica. I use it when teaching stochastic processes, and it makes for a painless transition from textbooks and lectures to code that actually runs.
I think this has been in Mathematica for a long time, maybe from the beginning, but I hardly ever use it. I suppose it’s because I primarily use Mathematica as a scratch pad, not a publishing platform, so ease of input trumps ease of reading.
I used to wish I could use Greek letters and other symbols in a system programming language. Now a lot of languages allow that, and I don’t do it. Maybe back to time spent writing vs time spent reading.
Code snippets published here will be read by a wide audience, so it might be worth the extra effort to insert a few novel characters. I may experiment with that.
Speaking of APL, I presume you have seen J? https://jsoftware.com