The forgotten power within the research of John Backus
I answer the question posed by John Backus in his 1977 Turing Award lecture, in an unexpected way.
Way back in the recesses of the 1970s, computer science resembled nothing like what it is today. The work of many of the most notable computer scientists of that era has become the stuff of legend, or, in the case of John Backus, been relegated to a collection of confusing footnotes in the field’s history. We’ll soon see why, but first, let’s look at what exactly he was working on.
In his heyday, Backus led the team that created FORTRAN. Out of my simultaneous respect for the endeavour and my ceaseless sense of humour, I must inform you that it is a syntax error to write the word FORTRAN while not wearing a blue tie. (This is one reason among many I actually do wear blue ties.)
I’m not here to talk about FORTRAN, as odd a language as it is these days. Instead, I’m going to focus on Backus’ 1977 Turing Award lecture, Can Programming Be Liberated from the von Neumann Style?. The ACM gave Backus the award for his work on FORTRAN, but he rightly saw the award lecture as his best chance to speak about the programming language work he had been doing since then, and he seized the moment as best he could. It was a stark departure from what he was, and still is, best known for in the field. In fact, I would contend that this later chapter of his life’s research was more profound and important than the work he is popularly lauded for.
Let’s quickly summarise the concepts prerequisite to understanding what he was talking about. From Wikipedia:
Using a metaphor from John Backus, assignment statements in von Neumann languages split programming into two worlds. The first world consists of expressions, an orderly mathematical space with potentially useful algebraic properties: most computation takes place here. The second world consists of statements, a disorderly mathematical space with few useful mathematical properties.
This is a familiar idea! I had not realised it when I originally coined the terms, but this is the same concept I identified as the “two broad branches” of computer science, functionalism and mechanicalism. Of course, Backus’ description is confined to the semantics of programming languages, but these ultimately derive from abstract models of machines anyway. It’s the same theory. My terms are more general and positively name both of the “two worlds” Backus refers to in his lecture.
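To see the split in miniature, here is a toy illustration of my own (plain C, not Backus’ FP notation): the expression below obeys algebraic laws however it is rearranged, while the statements that follow mean different things depending purely on their order.

    /* A toy contrast of Backus' "two worlds", sketched in C.
     * Expressions obey algebraic laws; statements only obey the clock. */
    #include <stdio.h>

    int main(void) {
        int a = 1, b = 2, c = 3;

        /* Expression world: (a + b) + c == a + (b + c).
         * Reassociate, reorder, substitute: the meaning is unchanged. */
        int sum = (a + b) + c;

        /* Statement world: the same two assignments, run in different
         * orders, produce different results. Order *is* the semantics. */
        int x = 0;
        x = x + a;   /* x == 1 */
        x = x * c;   /* x == 3 */

        int y = 0;
        y = y * c;   /* y == 0 */
        y = y + a;   /* y == 1 */

        printf("sum=%d, x=%d, y=%d\n", sum, x, y);   /* sum=6, x=3, y=1 */
        return 0;
    }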
What is known as a “von Neumann architecture” is what I have come to call functionalism, due to its integration of code and data in a mathematical style. By contrast, what I call mechanicalism is known in old-school theory as the Harvard architecture, due to its strong separation of code and data, which necessitates an engineering-style approach. When code and data are interchangeable, the von Neumann architecture is in use, and mathematics rejoices. When code and data are strongly separated, the Harvard architecture is in use, and engineering rejoices.
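Here is a minimal sketch of what “interchangeable” means in practice, assuming x86-64 and a POSIX system willing to grant a page that is both writable and executable (hardened systems may refuse, because W^X mitigations partially re-impose the Harvard split): we write machine code as ordinary data, then jump to it. A Harvard machine forbids this by construction.

    /* Code as data on a von Neumann machine: write bytes, then run them.
     * Assumes x86-64 Linux or similar; hardened systems may deny the
     * writable-and-executable mapping outright. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;

        memcpy(page, payload, sizeof payload);    /* it is data...     */
        int (*fn)(void) = (int (*)(void))page;    /* ...and it is code */
        printf("%d\n", fn());                     /* prints 42         */

        munmap(page, 4096);
        return 0;
    }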
That these concepts are obscure is not the only reason Backus’ work in language theory is often overlooked or forgotten. Many of the motivators and environmental realities he identified as giving his work its importance were, to put it gently, misguided. He speaks of the financial “cost of producing and learning to use [new programming languages]”, a diagnosis that had been soundly disproven as early as the 1990s. His motivations regarding mathematics and “function-level programming” are also murky and strange, and to be completely frank I haven’t much to say about those.
What I do know is that Backus was onto something very important in programming language theory. So important, in fact, that I have partially and independently rediscovered his work in the form of my own theoretics about functionalism and mechanicalism, the creation of C*, the Ethos for Sustainable Computing, and my implementation of these computing concepts in the likes of Project Tristan.
These things may coalesce to provide an answer to his question about the von Neumann style. More exactly, they answer it as posed in a slightly different phrasing: “what exactly is the place of the von Neumann style in computer programming?”
In the interest of pragmatism, Project Tristan’s main CPU is deliberately designed as a single-core processor, with no multi-processing, no atomics, and no out-of-order execution or similar microarchitectural complexity that popular ISAs like ARM and x86 gloss over with their 1980s-style instruction sets. As it turns out, this makes Tristan’s CPU, codenamed Tristram, a proper Harvard computer, if you ignore the historical artefacts of bus sharing anyway.
Not coincidentally, C* is also a proper Harvard-style programming language, refusing to feature things like the threading found in operating system APIs and later standard revisions like C11 and C++11. C*’s answer to those concerns is to model them at a higher level, beyond the programming language.
Being a game console, Tristan provides not only a CPU but also a classically parallel GPU called Simba. Simba’s computing model is a significant departure from typical GPU designs in the desktop and mobile computing industries, instead being based on an unusual design from Adapteva, of Parallella fame, called the Epiphany-V. It’s still a highly parallel multiprocessor, but each core or “compute unit” is modelled in a quasi-Harvard fashion, with its own stack, code store, and internal memory.
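To illustrate that per-core layout, here is a hypothetical, single-process C simulation of one compute unit; the names, sizes, and DMA helpers are my own inventions for the sketch, not Simba’s real interface.

    /* A toy model of one quasi-Harvard compute unit: private local memory,
     * and shared VRAM reachable only through explicit DMA-style copies.
     * All names and sizes are illustrative, not Simba's actual design. */
    #include <stdio.h>
    #include <string.h>

    enum { LOCAL_SIZE = 1024, VRAM_SIZE = 4096 };

    typedef struct {
        unsigned char local[LOCAL_SIZE];   /* private data, stack, scratch */
    } compute_unit;   /* its code store is the C it runs: not data here */

    static unsigned char vram[VRAM_SIZE];  /* shared between all cores */

    /* A core never dereferences VRAM directly; it stages explicit copies. */
    static void dma_in(compute_unit *cu, size_t dst, size_t src, size_t n) {
        memcpy(cu->local + dst, vram + src, n);
    }
    static void dma_out(compute_unit *cu, size_t src, size_t dst, size_t n) {
        memcpy(vram + dst, cu->local + src, n);
    }

    int main(void) {
        compute_unit core = {0};
        for (int i = 0; i < 16; i++) vram[i] = (unsigned char)i;

        dma_in(&core, 0, 0, 16);                   /* stage data in     */
        for (int i = 0; i < 16; i++)               /* compute locally   */
            core.local[i] = (unsigned char)(core.local[i] * 2);
        dma_out(&core, 0, 16, 16);                 /* stage results out */

        printf("vram[17] = %d\n", vram[17]);       /* prints 2          */
        return 0;
    }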
Only on the macro scale, where many cores work in parallel, trading data in shared VRAM using DMA, does Simba begin to resemble a von Neumann (functionalist) computer. Only at this high level of abstraction can Simba be modelled with the non-Harvard style of pure expressions (functionalism) that Backus was so concerned with. Indeed, most real-world programming of Simba will be done this way, out of pragmatism of course. Ideally, this “abstract computing model” will have an exhaustive and concrete mapping to micro-scale, Harvard-style (mechanicalist) C* code.
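As a hedged sketch of what that mapping might look like (written in plain C rather than C*, with core counts and tile sizes invented for illustration): the macro-scale program is the single pure expression out[i] = f(in[i]) over VRAM, and it lowers mechanically to per-core stage/compute/store loops that never touch each other’s tiles.

    /* Macro scale: one pure expression over VRAM, out[i] = f(in[i]).
     * Micro scale: a Harvard-style stage/compute/store loop per core.
     * Core count and tile size are illustrative only. */
    #include <stdio.h>

    enum { N = 64, CORES = 4, TILE = N / CORES };

    static int vram_in[N], vram_out[N];

    static int f(int x) { return x * x; }  /* the macro-level pure function */

    /* Each core copies its tile into private memory, applies f locally,
     * and copies the results back; tiles are disjoint, so no core ever
     * observes another core's intermediate state. */
    static void core_kernel(int id) {
        int local[TILE];                              /* internal memory */
        for (int i = 0; i < TILE; i++)
            local[i] = vram_in[id * TILE + i];        /* "DMA in"        */
        for (int i = 0; i < TILE; i++)
            local[i] = f(local[i]);                   /* pure, local     */
        for (int i = 0; i < TILE; i++)
            vram_out[id * TILE + i] = local[i];       /* "DMA out"       */
    }

    int main(void) {
        for (int i = 0; i < N; i++) vram_in[i] = i;
        for (int id = 0; id < CORES; id++)  /* sequential here; parallel on Simba */
            core_kernel(id);
        printf("vram_out[5] = %d\n", vram_out[5]);    /* prints 25 */
        return 0;
    }

The point of the exercise is that the lowering is mechanical: nothing about f changes between the two scales, only where its inputs and outputs live.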
This amounts to nothing short of a unification of the two broad branches of computer science, and the implications for the field are immense. I have come to identify both sides of the coin in hand here, so to speak, and in doing so I can selectively apply the strengths of each and cover the weaknesses of each where it makes sense to do so. Since most computer scientists are blissfully unaware of this reality (seriously, join a programming language theory community sometime; most of them probably couldn’t tell you the difference between the Harvard and von Neumann styles without looking it up), this is a huge leg up to have.
This framing allows me to break out of existing economic paradigms that normally define and limit programming as we have come to know it in business. I can avoid wasting time (and therefore money) in ways that no one else can. While this doesn’t mean I am omnipotent, it does mean that I am significantly more powerful than everyone else is, for now at least.
John Backus may not have had the best hand dealt to him when it came to presenting what he had learned about programming languages. Perhaps he was just too late, or too early, or some combination thereof. And while his economic grounding was even less reliable, he was nonetheless an immensely talented computer scientist working on something huge, which I have had the honour of picking back up. In my opinion, he should be more exalted for this than for FORTRAN.