October 14, 2016

Software Crisis: The Next Generation

tl;dr: If you consider the current state of the art in software alongside current trends in the tech business, it’s hard not to conclude: Oh My God We’re All Gonna Die. I think we can fix things so we don’t die.

Marc Andreessen famously says software is eating the world.

His analysis is basically just a confirmation of a bunch of my long-standing biases, so of course I think he is completely right about this. Also, I’m a software guy, so of course this strikes me as the natural way for the world to be.

And it’s observably true: an increasing fraction of everything that’s physically or socially or economically important in our world is turning into software.

The problem with this is the software itself. Most of this software is crap.

And I don’t mean mere Sturgeon’s Law (“90% of everything is crap”) levels of crap, either. I’d put the threshold much higher. How much? I don’t know, maybe 99.9%? But then, I’m an optimist.

This is one of the dirty little secrets of our industry, spoken about among developers with a kind of masochistic glee whenever they gather to talk shop, but little understood or appreciated by outsiders.

Anybody who’s seen the systems inside a major tech company knows this is true. Or a minor tech company. Or the insides of any product with a software component. It’s especially bad in the products of non-tech companies; they’re run by people who are even more removed from engineering reality than tech executives (who themselves tend to be pretty far removed, even if they came up through the technical ranks originally, or indeed even if what they do is oversee technical things on a daily basis). But I’m not here to talk about dysfunctional corporate cultures, as entertaining as that always is.

The reason this stuff is crap is far more basic. It’s because better-than-crap costs a lot more, and crap is usually sufficient. And I’m not even prepared to argue, on a purely Darwinian, return-on-investment basis, that we’ve actually made this tradeoff wrong, whether we’re talking about the ROI of a specific company or about human civilization as a whole. Every dollar put into making software less crappy can’t be spent on other things we might also want, the list of which is basically endless. From the perspective of evolutionary biology, good enough is good enough.

But… (and you knew there was a “but”, right?)

Our economy’s ferocious appetite for software has produced teeming masses of developers who only know how to produce crap. And tooling optimized for producing more crap faster. And methodologies and processes organized around these crap-producing developers with their crap-producing tools. Because we want all the cool new stuff, and the cool new stuff needs software, and crappy software is good enough. And like I said, that’s OK, at least for a lot of it. If Facebook loses your post every now and then, or your Netflix feed dies and your movie gets interrupted, or if your web browser gets jammed up by some clickbait website you got fooled into visiting, well, all of these things are irritating, but rarely of lasting consequence. Besides, it’s not like you paid very much (if you paid anything at all) for that thing that didn’t work quite as well as you might wish, so what grounds do you have for complaint?

But now, as Andreessen says, software is eating the world. And the software is crap. So the world is being eaten by crap.

And still, this wouldn’t be so bad if the crap weren’t starting to seep into things that actually matter.

A leading indicator of what’s to come is the state of computer security. We’re seeing an alarming rise in big security breaches, each more epic than the last, to the point where they’re hardly news anymore. Target has 70 million customers’ credit card and identity information stolen. Oh no! Security clearance and personal data for 21.5 million federal employees are taken from the Office of Personnel Management. How unfortunate. Somebody breaks into Yahoo! and makes off with a billion or so account records with password hashes and answers to security questions. Ho hum. And we regularly see survey articles like “Top 10 Security Breaches of 2015”. I Googled “major security breaches” and the autocompletes it offered were “2014”, “2015”, and “2016”.

And then this past month we had the website of security pundit Brian Krebs taken down by a distributed denial-of-service attack originating in a botnet made of a million or so compromised IoT devices (many of them, ironically, security cameras), an attack so extreme that it got him dropped by Akamai, the provider that had been shielding his site and whose business is protecting its customers against exactly this kind of DDoS attack.

Here we’re starting to get bleedover between the world where crap is good enough, and the world where crap kills. Obviously, something serious, like an implanted medical device — a pacemaker or an insulin pump, say — has to have software that’s not crap. If your pacemaker glitches up, you can die. If somebody hacks into your insulin pump, they can fiddle with the settings and kill you. For these things, crap just won’t do. Except of course, the software in those devices is still mostly crap anyway, and we’ll come back to that in a moment. But you can at least make the argument that being crap-free is an actual requirement here, and people will (or anyway should) take this argument seriously. A $60 web-enabled security camera, on the other hand, doesn’t seem to have these kinds of life-or-death entanglements. Almost certainly this was not something its developers gave much thought to. But consider the Krebs DDoS — that was possible because the devices used to mount it had software flaws that let them be taken over and repurposed as attack bots. In this case, they were mainly used to get some attention: noisy and expensive, but mostly just headlines. But the same machinery could just as easily have been used to clobber the IT systems of a hospital emergency room, or some other piece of critical infrastructure, and now we’re talking consequences that matter.

The potential for those kinds of much more serious impacts has not been lost on the people who think and talk about computer security. But while they’ve done a lot of hand-wringing, very few of them are asking a much more fundamental question: Why is this kind of foolishness even possible in the first place? It’s just assumed that these kinds of security incidents will inevitably happen from time to time, part of the price of having a technological civilization. Certainly, they say, we need to try harder to secure our systems, but it’s largely accepted without question that this is just how things are. Psychologists have a term for this kind of thinking. They call it learned helplessness.

Another example: every few months, it seems, we’re greeted with yet another study by researchers shocked (shocked!) to discover how readily people will plug random USB sticks they find into their computers. Depending on the spin of the coverage, this is variously represented as “look at those stupid people, har, har, har” or “how can we train people not to do that?” There seems to be a pervasive idea in the computer security world that maybe we can fix our problems by getting better users. My question is: why blame the users? Why the hell shouldn’t it be perfectly OK to plug in a random USB stick you found? For that matter, why is the overwhelming majority of malware even possible at all? Why shouldn’t you be able to visit a random website, click on a link in a strange email, or for that matter run any damn executable you happen to stumble across? Why should anything bad happen if you do those things? The solutions have been known since at least the mid ’70s, but it’s a struggle to get security and software professionals to pay attention. I feel like we’re trapped in the world of medicine prior to the germ theory of disease. It’s like it’s 1870 and a few lone voices are crying, “hey, doctors, wash your hands” and the doctors are all like, “wut?”. The very people who’ve made it their job to protect us from this evil have internalized the crap as normal and can’t even imagine things being any other way. Another telling, albeit horrifying, fact: a lot of malware isn’t even bothering to exploit bugs like buffer overflows and whatnot; a lot of it is just using the normal APIs in the normal ways they were intended to be used. It’s not just the implementations that are flawed; the very designs are crap.
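
Those mid-’70s solutions go by the names capability security and the principle of least authority: code should only be able to affect the things it has explicitly been handed. Here is a minimal sketch of the contrast, in TypeScript, using hypothetical interfaces that are not from any real library:

    // Ambient authority: in most environments, anything you run can reach
    // everything you can reach, whether or not its job requires it.
    //
    //   import * as fs from "fs";
    //   fs.readFileSync("/home/you/.ssh/id_rsa");   // nothing says no
    //
    // Capability discipline: a component's reach is exactly what it was handed.

    interface LogSink {
      append(line: string): void;  // the one effect this component is allowed
    }

    const dictionary = new Set(["software", "crisis", "capability"]);

    // The spell checker receives the text and a log sink. It cannot even name
    // the filesystem, the network, or the rest of the machine, so a buggy or
    // malicious version of it has nothing to exfiltrate or destroy.
    function spellCheck(text: string, log: LogSink): string[] {
      const misses = text.split(/\s+/).filter((w) => w.length > 0 && !dictionary.has(w));
      log.append(`checked ${text.length} chars, ${misses.length} unknown words`);
      return misses;
    }

    // The caller decides how much authority to delegate; here, an in-memory log.
    const logLines: string[] = [];
    spellCheck("software crisis redux", { append: (line) => { logLines.push(line); } });

Under that discipline, running a random executable or mounting a random USB stick is only as dangerous as the authority you chose to hand it, which is exactly the point.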

But let’s turn our attention back to those medical devices. Here you’d expect people to be really careful, and indeed in many cases they have tried, but even so you still see headlines about terrible vulnerabilities in insulin pumps and pacemakers. And cars. And even mundane but important things like hotel door locks. Basically, think of any random piece of technology that you’d imagine needs to be secure, Google “<fill in the blank> security vulnerability”, and you’ll generally find something horrible.

In the current tech ecosystem, non-crap is treated as an edge case, dealt with by hand on a by-exception basis. Basically, when it’s really needed, quality is put in by brute force. This makes non-crap crazy expensive, so the cost is only justified in extreme use cases (say, avionics). Companies that produce a lot of software have software QA organizations that do make an effort, but current software QA practices are dysfunctional in much the same way contemporary security practices are. There’s a big emphasis on testing, often with the idea that you can use statistical metrics for quality, which works for assembling Toyotas but not so much for software, because software is pathologically non-linear. The QA folks at companies where I’ve worked have been some of the most dedicated and capable people there, but generally speaking they’re ultimately unsuccessful. The issue is not a lack of diligence or competence; it’s that the underlying problem is just too big and complicated. And there’s no appetite for the kinds of heavyweight processes that can sometimes help, either from the people paying the bills or from developers themselves.
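
To make the non-linearity point concrete, here is a minimal sketch (hypothetical code, not from any real product) of why sampled defect rates transfer so poorly from assembly lines to software: one wrong character, triggered by exactly one input size, and the failure is not “slightly out of tolerance”, it is total.

    // Intended behavior: frame payloads of up to (but not including) MAX_FRAME bytes.
    // The single flipped character `<=` admits a payload of exactly MAX_FRAME bytes,
    // whose length then silently wraps to zero in the two-byte header below, and the
    // receiver misparses everything that follows on the wire.
    const MAX_FRAME = 65536;

    function frame(payload: Uint8Array): Uint8Array | null {
      if (payload.length <= MAX_FRAME) {         // should be: payload.length < MAX_FRAME
        const out = new Uint8Array(payload.length + 2);
        out[0] = (payload.length >> 8) & 0xff;   // 65536 encodes as 0x00...
        out[1] = payload.length & 0xff;          // ...0x00, i.e. an "empty" frame
        out.set(payload, 2);
        return out;
      }
      return null;
    }

    // Sample a million "typical" payload sizes and you will almost never draw
    // exactly 65536, so the measured defect rate comes out zero, while the cost
    // of hitting it in production is unbounded.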

One of the reasons things are so bad is that the core infrastructure that we rely on — programming languages, operating systems, network protocols — predates our current need for software to actually not be crap.

In historical terms, today’s open ecosystem was an unanticipated development. Well, lots of people anticipated it, but few people in positions of responsibility took them very seriously. Arguably we took a wrong fork in the road sometime vaguely in the 1970s, but hindsight is not very helpful. Anyway, we now have a giant installed base and replacing it is a boil-the-ocean problem.

Back in the late ’60s there started to be a lot of talk about what came to be called “the software crisis”, which was basically the same problem but without malware or the Internet.

Back then, the big concern was that hardware had advanced faster than our ability to produce software for it. Bigger, faster computers and all that. People were worried about the problem of complexity and correctness, but also the problem of “who’s going to write all the software we’ll need?”. We sort of solved the first problem by settling for crap, and we sort of solved the second problem by making it easier to produce that crap, which meant more people could do it. But we never really figured out how to make it easy to produce non-crap, or to protect ourselves from the crap that was already there, and so the crisis didn’t so much go away as got swept under the rug.

Now we see the real problem is that the scope and complexity of what we’ve asked the machines to do has exceeded the bounds of our coping ability, while at the same time our dependence on these systems has grown to the point where we really, really, really can’t live without them. This is the software crisis coming back in a newer, more virulent form.

Basically, if we stop using all this software-entangled technology (as if we could do that — hey, there’s nobody in the driver’s seat here), civilization collapses and we all die. If we keep using it, we are increasingly at risk of a catastrophic failure that kills us by accident, or a catastrophic vulnerability where some asshole kills us on purpose.

I don’t want to die. I assume you don’t either. So what do we do?

We have to accept that we can’t really be 100% crap-free, because we are fallible. But we can certainly arrange to radically limit the scope of damage available to any particular piece of crap, which should vastly reduce systemic crappiness.

I see a three-pronged strategy:

1. Embrace a much more aggressive and fine-grained level of software compartmentalization.

2. Actively deprecate coding and architectural patterns that we know lead to crap, while developing tooling — frameworks, libraries, etc. — that makes better practices the path of least resistance for developers.

3. Work to move formal verification techniques out of ivory-tower academia and into the center of the practical developer’s workflow. (A small taste of what this means in practice follows this list.)
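
Since that third prong is the least familiar to most working developers, here is the tiniest possible taste of what it means (an illustrative sketch only, assuming a current Lean 4 toolchain; the function and theorem are made up for this post): instead of sampling inputs the way a test suite does, you state a property and the machine checks it for every possible input.

    -- A made-up one-liner and a machine-checked property about it.
    def absVal (x : Int) : Int := if x < 0 then -x else x

    -- Not "we tried a bunch of inputs and it looked fine": this holds for all x,
    -- or the file does not compile.
    theorem absVal_nonneg (x : Int) : 0 ≤ absVal x := by
      unfold absVal
      split <;> omega

The hard part, of course, is making this kind of checking cheap enough to live in an ordinary developer’s edit-build-test loop instead of in a research paper, which is exactly what prong 3 is asking for.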

Each of these corresponds to a bigger bundle of specific technical proposals that I won’t unpack here, as I suspect I’m already taxing a lot of readers’ attention spans. I do hope to go into these more deeply in future postings. I will say a few things now about overall strategy, though.

There have been, and continue to be, a number of interesting initiatives to try to build a reliable, secure computing infrastructure from the bottom up. A couple of my favorites are the Midori project at Microsoft, which has, alas, gone to the great source code repo in the sky (full disclosure: I did a little bit of work for the Midori group a few years back) and the CTSRD project at the University of Cambridge, still among the living. But while these have spun off useful, relevant technology, they haven’t really tried to take a run at the installed base problem. And that problem is significant and real.

A more plausible (to me) approach has been laid out by Mark Miller at Google with his efforts to secure the JavaScript environment, notably Caja and Secure ECMAScript (SES) (also full disclosure: I’m one of the co-champions, along with Mark and Caridy Patiño of Salesforce, of a SES-related proposal, Frozen Realms, currently working its way through the JavaScript standardization process). Mark advocates an incremental, top-down approach: deliver an environment that supports creating and running defensively consistent application code, one that we can ensure will be secure and reliable as long as the underlying computational substrate (initially, a web browser) hasn’t itself been compromised in some way. This gives application developers a sane place to stand, and begins delivering immediate benefit. Then use this to drive demand for securing the next layer down, and then the next, and so on. This approach doesn’t require us to replace everything at once, which I think means it has much higher odds of success.
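
For a feel of what that sane place to stand looks like, here is a sketch in the style of the SES shim that grew out of this line of work (the “ses” package; its exact API postdates this post, so treat the details as illustrative rather than definitive):

    import "ses";

    // Freeze the shared primordials (Object, Array.prototype, and friends) so
    // guest code cannot tamper with the foundations the host code relies on.
    lockdown();

    // A compartment's global scope contains only what we put there. This guest
    // gets a logger and nothing else: no filesystem, no network, no timers.
    const guest = new Compartment({
      log: harden((msg: string) => console.log(`[guest] ${msg}`)),
    });

    guest.evaluate(`log("hello from confined code")`);

    // Anything we did not grant simply does not exist in there:
    // guest.evaluate(`fetch("https://example.com")`);  // ReferenceError: fetch is not defined

The point is not the particular API; it is that application code gets a place to stand where the only authority in scope is authority somebody deliberately handed it.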

You may have noticed this essay has tended to weave back and forth between software quality issues and computer security issues. This is not a coincidence, as these two things are joined at the hip. Crappy software is software that misbehaves in some way. The problem is the misbehavior itself and not so much whether this misbehavior is accidental or deliberate. Consequently, things that constrain misbehavior help with both quality and security. What we need to do is get to work adding such constraints to our world.

2 Comments

While I agree with most of your comments, you do need to train users not to plug in strange devices. It could and should be safe to plug in an actual data storage device, but there’s only so much you can do to limit the damage caused by what looks like a USB stick with a super-capacitor hooked to the data pins. (Frying the port to save the rest of the machine is probably possible, but that’s failure-mitigation, not non-failure)

Indeed. There’s nothing software can do to prevent a direct hardware assault on the machine. On the other hand, the damage caused in such a case is limited to the destruction of the machine. That can be bad, but at least it’s bounded.
