Software Complexity, Permacomputing, and Security

Recently I've come to understand the permacomputing movement. It takes the philosophy behind permaculture and adapts it to our technological world, so that computing itself can be sustainable indefinitely. That's a strange concept to take in at first: the very idea that computing isn't eternal feels odd, because it is now so pervasive that we can't imagine life without it. But the rapid expansion of technologies like proof-of-work blockchains and AI synthesis should give us pause as to whether such extreme levels of energy use are sustainable.

This is being considered from a couple of different angles. One popular angle is purely extending the longevity of existing equipment. The proposition is simple - if a device can be maintained indefinitely through repair and renewal, we can avoid the massive waste of throwing away old devices. Right to repair is a key first step, but it requires a lot more than that - device designs will have to be fundamentally rethought to allow this kind of eternal repair in the first place. In the meantime, advocates are also working on the software side, to make older devices more viable for casual use.

There is also the matter of decay of the software and services themselves. As platforms move on, support for older systems and software is usually dropped after a time, which means running old software becomes difficult or impossible. The emulation community has become the biggest counter to this problem, devising systems that allow long out-of-print software to run on modern machines. From a more esoteric angle, people like Hundredrabbits have built bespoke virtual machines with intentionally simple specifications, so that their software can be forward-ported to new machines for as long as it remains relevant (a sketch of the idea follows below). Then there is the Mega65, which recreates a dead Commodore project in a brand-new implementation.
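To make the Hundredrabbits idea concrete, here is a minimal sketch of the principle: a complete virtual machine small enough that porting it to a new host is an afternoon's work, so the programs written for it outlive any particular hardware. The opcodes are invented for illustration and bear no relation to uxn's actual instruction set.

    /* A toy stack machine in the spirit of (but far simpler than) uxn.
     * The whole "platform" is this one function; any future host that
     * reimplements these four opcodes can run the same program bytes. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT, OP_PUSH, OP_ADD, OP_PRINT };

    static void run(const uint8_t *prog) {
        uint8_t stack[256];
        int sp = 0;                       /* stack pointer */
        for (size_t pc = 0; ; pc++) {
            switch (prog[pc]) {
            case OP_HALT:
                return;
            case OP_PUSH:                 /* next byte is an immediate */
                stack[sp++] = prog[++pc];
                break;
            case OP_ADD:                  /* pop two values, push their sum */
                sp--;
                stack[sp - 1] = (uint8_t)(stack[sp - 1] + stack[sp]);
                break;
            case OP_PRINT:                /* pop and print the top value */
                printf("%d\n", stack[--sp]);
                break;
            }
        }
    }

    int main(void) {
        /* "Software" for the machine: compute 2 + 3 and print it. */
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }

The point is not the machine itself but its size: a specification this small can be audited in full and rewritten from scratch on whatever hardware the future provides.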

But hardware and software that lasts forever is also not oriented towards change. The software can be updated, but the hardware usually cannot. This poses a problem for platform security - a critical component of user safety on modern systems, and one that is usually implemented at a very low level, in hardware, so that it can be trusted: the TPM on PCs, the SEP on iPhones. Keeping old systems running thus puts users at a security disadvantage. Many of the sustainable-software solutions don't consider platform security at all, because they emulate systems that never did.

And worse, the open source community often treats platform security as a mechanism designed to keep you from controlling your own hardware, flattening all useful distinctions between fully locked-down systems (like iPhones) and those that do in fact allow user control (like TPMs). The prevailing philosophy is that many eyes make all bugs shallow, but the volume of bugs still being found in critical security packages suggests that the many eyes are looking elsewhere. So, sadly, very little effort has been made outside of corporations to solve this problem. I, too, am distrustful of corporations, but when no one else even considers platform security to be a problem, where can you turn?

Alternatively, Paul Gardner-Stephen makes a compelling case for simplicity. In his view, the problem with software security isn't a matter of technology; it's a matter of complexity. The transparency of large, modern systems, even open source ones, is insufficient because the manpower required to properly audit them simply isn't available. Every modern device runs on enormous amounts of "just trust me". His solution is to reduce the complexity of the platform to a level where everything can be audited, and the platform he has chosen for that is the Commodore 65. It's a brilliant lateral move - the C64 is an extremely well understood system, and by selectively extending it you can build a system that is nominally (if not extremely) useful, yet can be audited by mortals.
