October 14, 2016

Software Crisis: The Next Generation

tl;dr: If you consider the current state of the art in software alongside current trends in the tech business, it’s hard not to conclude: Oh My God We’re All Gonna Die. I think we can fix things so we don’t die.

Marc Andreessen famously says software is eating the world.

His analysis is basically just a confirmation of a bunch of my long-standing biases, so of course I think he is completely right about this. Also, I’m a software guy, so naturally I’d think this is the way the world ought to be.

And it’s observably true: an increasing fraction of everything that’s physically or socially or economically important in our world is turning into software.

The problem with this is the software itself. Most of this software is crap.

And I don’t mean mere Sturgeon’s Law (“90% of everything is crap”) levels of crap, either. I’d put the threshold much higher. How much? I don’t know, maybe 99.9%? But then, I’m an optimist.

This is one of the dirty little secrets of our industry, spoken about among developers with a kind of masochistic glee whenever they gather to talk shop, but little understood or appreciated by outsiders.

Anybody who’s seen the systems inside a major tech company knows this is true. Or a minor tech company. Or the insides of any product with a software component. It’s especially bad in the products of non-tech companies; they’re run by people who are even more removed from engineering reality than tech executives (who themselves tend to be pretty far removed, even if they came up through the technical ranks originally, or indeed even if what they do is oversee technical things on a daily basis). But I’m not here to talk about dysfunctional corporate cultures, as entertaining as that always is.

The reason this stuff is crap is far more basic. It’s because better-than-crap costs a lot more, and crap is usually sufficient. And I’m not even prepared to argue, on a purely Darwinian, return-on-investment basis, that we’ve actually made this tradeoff wrong, whether we’re talking about the ROI of a specific company or about human civilization as a whole. Every dollar put into making software less crappy can’t be spent on other things we might also want, the list of which is basically endless. From the perspective of evolutionary biology, good enough is good enough.

But… (and you knew there was a “but”, right?)

Our economy’s ferocious appetite for software has produced teeming masses of developers who only know how to produce crap. And tooling optimized for producing more crap faster. And methodologies and processes organized around these crap-producing developers with their crap-producing tools. Because we want all the cool new stuff, and the cool new stuff needs software, and crappy software is good enough. And like I said, that’s OK, at least for a lot of it. If Facebook loses your post every now and then, or your Netflix feed dies and your movie gets interrupted, or if your web browser gets jammed up by some clickbait website you got fooled into visiting, well, all of these things are irritating, but rarely of lasting consequence. Besides, it’s not like you paid very much (if you paid anything at all) for that thing that didn’t work quite as well as you might wish, so what are your grounds for complaint?

But now, like Andreessen says, software is eating the world. And the software is crap. So the world is being eaten by crap.

And still, this wouldn’t be so bad, if the crap wasn’t starting to seep into things that actually matter.

A leading indicator of what’s to come is the state of computer security. We’re seeing an alarming rise in big security breaches, each more epic than the last, to the point where they’re hardly news any more. Target has 70 million customers’ credit card and identity information stolen. Oh no! Security clearance and personal data for 21.5 million federal employees is taken from the Office of Personnel Management. How unfortunate. Somebody breaks into Yahoo! and makes off with a billion or so account records with password hashes and answers to security questions. Ho hum. And we regularly see survey articles like “Top 10 Security Breaches of 2015”. I Googled “major security breaches” and the autocompletes it offered were “2014”, “2015”, and “2016”.

And then this past month we had the website of security pundit Brian Krebs taken down by a distributed denial of service attack originating in a botnet made of a million or so compromised IoT devices (many of them, ironically, security cameras), an attack so extreme it got him evicted by his hosting provider, Akamai, whose business is protecting its customers against DDOS attacks.

Here we’re starting to get bleedover between the world where crap is good enough, and the world where crap kills. Obviously, something serious, like an implanted medical device — a pacemaker or an insulin pump, say — has to have software that’s not crap. If your pacemaker glitches up, you can die. If somebody hacks into your insulin pump, they can fiddle with the settings and kill you. For these things, crap just won’t do. Except of course, the software in those devices is still mostly crap anyway, and we’ll come back to that in a moment. But you can at least make the argument that being crap-free is an actual requirement here and people will (or anyway should) take this argument seriously. A $60 web-enabled security camera, on the other hand, doesn’t seem to have these kinds of life-or-death entanglements. Almost certainly this was not something its developers gave much thought to. But consider Krebs’ DDOS — that was possible because the devices used to do it had software flaws that let them be taken over and repurposed as attack bots. In this case, they were mainly used to get some attention. It was noisy and expensive, but mostly just grabbed some headlines. But the same machinery could have as easily been used to clobber the IT systems of a hospital emergency room, or some other kind of critical infrastructure, and now we’re talking consequences that matter.

The potential for those kinds of much more serious impacts has not been lost on the people who think and talk about computer security. But while they’ve done a lot of hand wringing, very few of them are asking a much more fundamental question: Why is this kind of foolishness even possible in the first place? It’s just assumed that these kinds of security incidents will inevitably happen from time to time, part of the price of having a technological civilization. Certainly they say we need to try harder to secure our systems, but it’s largely accepted without question that this is just how things are. Psychologists have a term for this kind of thinking. They call it learned helplessness.

Another example: every few months, it seems, we’re greeted with yet another study by researchers shocked (shocked!) to discover how readily people will plug random USB sticks they find into their computers. Depending on the spin of the coverage, this is variously represented as “look at those stupid people, har, har, har” or “how can we train people not to do that?” There seems to be a pervasive idea in the computer security world that maybe we can fix our problems by getting better users. My question is: why blame the users? Why the hell shouldn’t it be perfectly OK to plug in a random USB stick you found? For that matter, why is the overwhelming majority of malware even possible at all? Why shouldn’t you be able to visit a random web site, click on a link in a strange email, or for that matter run any damn executable you happen to stumble across? Why should anything bad happen if you do those things? The solutions have been known since at least the mid ’70s, but it’s a struggle to get security and software professionals to pay attention. I feel like we’re trapped in the world of medicine prior to the germ theory of disease. It’s like it’s 1870 and a few lone voices are crying, “hey, doctors, wash your hands” and the doctors are all like, “wut?”. The very people who’ve made it their job to protect us from this evil have internalized the crap as normal and can’t even imagine things being any other way. Another telling, albeit horrifying, fact: a lot of malware isn’t even bothering to exploit bugs like buffer overflows and whatnot, a lot of it is just using the normal APIs in the normal ways they were intended to be used. It’s not just the implementations that are flawed, the very designs are crap.

But let’s turn our attention back to those medical devices. Here you’d expect people to be really careful, and indeed in many cases they have tried, but even so you still have headlines about terrible vulnerabilities in insulin pumps and pacemakers. And cars. And even mundane but important things like hotel door locks. Basically, think of any random item of technology that you’d imagine needs to be secure, Google “<fill in the blank> security vulnerability”, and you’ll generally find something horrible.

In the current tech ecosystem, non-crap is treated as an edge case, dealt with by hand on a by-exception basis. Basically, when it’s really needed, quality is put in by brute force. This makes non-crap crazy expensive, so the cost is only justified in extreme use cases (say, avionics). Companies that produce a lot of software have software QA organizations that do make an effort, but current software QA practices are dysfunctional in much the same way contemporary security practices are. There’s a big emphasis on testing, often with the idea that you can use statistical metrics for quality, which works for assembling Toyotas but not so much for software, because software is pathologically non-linear. The QA folks at companies where I’ve worked have been some of the most dedicated and capable people there, but generally speaking they’re ultimately unsuccessful. The issue is not a lack of diligence or competence; it’s that the underlying problem is just too big and complicated. And there’s no appetite for the kinds of heavyweight processes that can sometimes help, either from the people paying the bills or from developers themselves.

One of the reasons things are so bad is that the core infrastructure that we rely on — programming languages, operating systems, network protocols — predates our current need for software to actually not be crap.

In historical terms, today’s open ecosystem was an unanticipated development. Well, lots of people anticipated it, but few people in positions of responsibility took them very seriously. Arguably we took a wrong fork in the road sometime vaguely in the 1970s, but hindsight is not very helpful. Anyway, we now have a giant installed base and replacing it is a boil the ocean problem.

Back in the late ’60s there started to be a lot of talk about what came to be called “the software crisis”, which was basically the same problem but without malware or the Internet.

Back then, the big concern was that hardware had advanced faster than our ability to produce software for it. Bigger, faster computers and all that. People were worried about the problem of complexity and correctness, but also the problem of “who’s going to write all the software we’ll need?”. We sort of solved the first problem by settling for crap, and we sort of solved the second problem by making it easier to produce that crap, which meant more people could do it. But we never really figured out how to make it easy to produce non-crap, or to protect ourselves from the crap that was already there, and so the crisis didn’t so much go away as got swept under the rug.

Now we see the real problem is that the scope and complexity of what we’ve asked the machines to do has exceeded the bounds of our coping ability, while at the same time our dependence on these systems has grown to the point where we really, really, really can’t live without them. This is the software crisis coming back in a newer, more virulent form.

Basically, if we stop using all this software-entangled technology (as if we could do that — hey, there’s nobody in the driver’s seat here), civilization collapses and we all die. If we keep using it, we are increasingly at risk of a catastrophic failure that kills us by accident, or a catastrophic vulnerability where some asshole kills us on purpose.

I don’t want to die. I assume you don’t either. So what do we do?

We have to accept that we can’t really be 100% crap free, because we are fallible. But we can certainly arrange to radically limit the scope of damage available to any particular piece of crap, which should vastly reduce systemic crappiness.

I see a three pronged strategy:

1. Embrace a much more aggressive and fine-grained level of software compartmentalization.

2. Actively deprecate coding and architectural patterns that we know lead to crap, while developing tooling — frameworks, libraries, etc — that makes better practices the path of least resistance for developers.

3. Work to move formal verification techniques out of ivory tower academia and into the center of the practical developer’s workflow.

Each of these corresponds to a bigger bundle of specific technical proposals that I won’t unpack here, as I suspect I’m already taxing a lot of readers’ attention spans. I do hope to go into these more deeply in future postings. I will say a few things now about overall strategy, though.

There have been, and continue to be, a number of interesting initiatives to try to build a reliable, secure computing infrastructure from the bottom up. A couple of my favorites are the Midori project at Microsoft, which has, alas, gone to the great source code repo in the sky (full disclosure: I did a little bit of work for the Midori group a few years back) and the CTSRD project at the University of Cambridge, still among the living. But while these have spun off useful, relevant technology, they haven’t really tried to take a run at the installed base problem. And that problem is significant and real.

A more plausible (to me) approach has been laid out by Mark Miller at Google with his efforts to secure the JavaScript environment, notably Caja and Secure EcmaScript (SES) (also full disclosure: I’m one of the co-champions, along with Mark, and Caridy Patiño of Salesforce, of a SES-related proposal, Frozen Realms, currently working its way through the JavaScript standardization process). Mark advocates an incremental, top down approach: deliver an environment that supports creating and running defensively consistent application code, one that we can ensure will be secure and reliable as long as the underlying computational substrate (initially, a web browser) hasn’t itself been compromised in some way. This gives application developers a sane place to stand, and begins delivering immediate benefit. Then use this to drive demand for securing the next layer down, and then the next, and so on. This approach doesn’t require us to replace everything at once, which I think means it has much higher odds of success.
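
To make this a bit more concrete, here is a small sketch, in plain JavaScript rather than the actual Caja/SES/Frozen Realms APIs, of the object-capability style of programming this approach is meant to support: untrusted code is handed only the specific, frozen capabilities it needs and nothing else. The helper names (makeLog, attenuate, runPlugin) are illustrative inventions, not part of any standard.

```js
// Illustrative sketch only: plain JavaScript, not the SES/Caja/Frozen Realms API.

// A powerful capability: full read/write access to a log.
function makeLog() {
  const entries = [];
  return {
    write(msg) { entries.push(String(msg)); },
    read() { return entries.slice(); },
  };
}

// Attenuation: derive a weaker, write-only capability, and freeze it so the
// recipient cannot replace or monkey-patch its methods.
function attenuate(log) {
  return Object.freeze({
    write(msg) { log.write(msg); },
  });
}

// The "plugin" receives exactly the capabilities we choose to hand it; nothing
// we pass gives it a way to read or tamper with the full log.
function runPlugin(plugin, capabilities) {
  plugin(Object.freeze(capabilities));
}

const log = makeLog();
runPlugin(
  (caps) => { caps.log.write("hello from confined code"); },
  { log: attenuate(log) }
);
console.log(log.read()); // [ 'hello from confined code' ]
```

A real SES environment goes much further than this sketch: it freezes the shared primordials and evaluates untrusted code in its own compartment, so there is no ambient authority (no reachable network, file system, or mutable globals) left to abuse. The sketch only shows the attenuation pattern at the application level.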

You may have noticed this essay has tended to weave back and forth between software quality issues and computer security issues. This is not a coincidence, as these two things are joined at the hip. Crappy software is software that misbehaves in some way. The problem is the misbehavior itself and not so much whether this misbehavior is accidental or deliberate. Consequently, things that constrain misbehavior help with both quality and security. What we need to do is get to work adding such constraints to our world.

October 27, 2014

The Bureaucratic Failure Mode Pattern

When we try to take purposeful action within an organization (or even in our lives more generally), we often find ourselves blocked or slowed by various bits of seemingly unrelated process that must first be satisfied before we are allowed to move forward. Some of these were put in place very deliberately, while others just grew more or less organically, but what they often have in common, aside from increasing the friction of activity, is that they seem disconnected from our ultimate purpose. If I want to drive my car to work, having to register my car with the DMV seems like a mechanically unnecessary step (regardless of what the real underlying reason for it may be).

Note that I’m not talking about the intrinsic difficulty or inconvenience of the process itself (car registration might entail waiting around for several hours in the DMV office or it might be 30 seconds online with a web page, for example), but the cost imposed by the mere existence of the need to report information or get permission or put things in some particular way just so or align or coordinate with some other thing (and the concomitant need to know that you are supposed to do whatever it is, and the need to know or find out how). Each of these is a friction factor; the competence or user-friendliness of whatever necessary procedure is involved may influence the magnitude of the inconvenience, but not the fact of it. (Other recursive friction factors embedded in the organizations or processes behind these things may well figure into why many of them are in fact incompetently executed or needlessly complex or time consuming, but that is a separate matter.)

Over time, organizations tend to acquire these bits of process, the way ships accumulate barnacles, with the accompanying increase in drag that makes forward progress increasingly difficult and expensive. However, barnacles are purely parasitic. They attach themselves to the hull for their own benefit, while the ship gains nothing of value in return. But even though organizational cynics enjoy characterizing these bits of process as also being purely parasitic, each of those bits of operational friction was usually put there for some purpose, presumably a purpose of some value. It may be that the cost-benefit analysis involved was flawed, but the intent was generally positive. (I’m ignoring here for a moment those things that were put in place for malicious reasons or to deliberately impede one person’s actions for the benefit of someone else. These kinds of counter-productive interventions do happen from time to time, and while they tend to loom large in people’s institutional mythologies, I believe such evil behavior is actually comparatively rare – perhaps not that uncommon in absolute terms, but still dwarfed by the truly vast number of ordinary, well-intentioned process elements that slow us down every day.)

Because I’m analyzing this from a premise of benign intent, I’m going to avoid characterizing these things with a loaded word like “barnacles”, even though they often have a similar effect. Instead, let’s refer to them as “checkpoints” – gates or control points or tests that you have to pass in order to move forward. They are annoying and progress-impeding but not necessarily valueless.

We are forced to pass through checkpoints all the time – having to swipe your badge past a reader to get into the office (or having to unlock the door to your own home, for that matter), entering a user name and password dozens of times per day to access various network services, getting approval from your boss to take a vacation day, having to fill out an expense report form (with receipts!) to get reimbursed for expenses you have incurred, all of the various layers of review and approval to push a software change into production, having to get approval from someone in the legal department before you can adopt a new piece of open source software; the list is potentially endless.

Note that while these vary wildly in terms of how much drag they introduce, for many of them the actual amount is very little, and this is a key point. The vast majority of these were motivated by some real world problem that called for some tiny addition to the process flow to prevent (or at least inhibit) whatever the problem was from happening again. No doubt some were the result of bad dealing or of an underemployed lawyer or administrator trying to preempt something purely hypothetical, but I think these latter kinds of checkpoint are the exception, and we weaken our campaign to reduce friction by paying too much attention to them – that is, by focusing too much on the unjustified bureaucracy, we distract attention from the far larger (and therefore far more problematic) volume of justified bureaucracy.

Let’s just presume, for the purpose of argument, that each of the checkpoints that we encounter is actually well motivated: that it exists for a reason, that the reason can be clearly articulated, that the reason is real, that it is more or less objective, that people, when presented with the argument for the checkpoint, will find it basically convincing. Let’s further presume that the friction imposed by the checkpoint is relatively modest – that the friction that results is not because the checkpoint is badly implemented but simply because it is there. And yes, I am trying, for purposes of argument, to cast things in a light that is as favorable to the checkpoints as possible. The reason I’m being so kind hearted towards them is because I think that, even given the most generous concessions to process, we still have a problem: the “death of a thousand cuts” phenomenon.

Checkpoints tend to accumulate over time. Organizations usually start out simple and only introduce new checkpoints as problems are encountered – most checkpoints are the product of actual experience. Checkpoints tend to accumulate with scale. As an organization grows, it finds itself doing each particular operation it does more often, which means that the frequency of actually encountering any particular low probability problem goes up. As an organization grows, it finds itself doing a greater variety of things, and this variety in turn implies greater variety of opportunities to encounter whole new species of problems. Both of these kinds of scale-driven problem sources motivate the introduction of additional checkpoints. What’s more, the greater variety of activities also means a greater number of permutations and combinations of activities that can be problematic when they interact with each other.

Checkpoints, once in place, tend to be sticky – they tend not to go away. Partly this is because if the checkpoint is successful at addressing its motivating problem, it’s hard to tell if the problem later ceases to exist – either way you don’t see it. In general, it is much easier for organizations to start doing things than it is for them to stop doing things.

The problem with checkpoints is their cumulative cost. In part, this is because the small cost of each makes them seductive. If the cost of checkpoint A is close to zero, it is not too painful, and there is little motivation or, really, little actual reason to do anything about it. Unfortunately, this same logic applies to checkpoint B, and to checkpoint C, and indeed to all of them. But the sum of a large number of values near zero is not necessarily itself a value near zero. It can, instead, be very large indeed. However, as we stipulated in our premises above, each one of them is individually justified and defensible. It is merely their aggregate that is indefensible – there is nothing to tell you, “here, this one, this is the problem” because there isn’t any one which is the problem. The problem is an emergent phenomenon.

Any specific checkpoint may be one that you encounter only rarely, or perhaps only once. Consider, for example, all the various procedures we make new hires go through. When you hit such a checkpoint, it may be tedious and annoying, but once you’ve passed it it’s done with. Thereafter you really have no incentive at all to do anything about it, because you’ll never encounter it again. But if we make a large number of people each go through it once, there’s still a large multiplier, and we’ve still burdened our organization with the cumulative cost.

A problem of particular note is that, because checkpoints tend to be specialized, they are often individually not well known. Plus, a larger total number of checkpoints increases the odds in general that you will encounter checkpoints that are unknown or mysterious to you, even if they are well known to others. Thus it becomes easy for somebody without the relevant specialized knowledge to get into trouble by violating a rule that they didn’t even know existed.

Unknown or poorly understood checkpoints increase friction disproportionately. They trigger various kinds of remedial responses from the organization, in the form of compliance monitoring, mandatory training sessions, emailed warning messages and other notices that everyone has to read, and so on. Each such checkpoint thus generates a whole new set of additional checkpoints, meaning that the cumulative frictions multiply instead of just adding.

Violation of a checkpoint may visit sanctions or punishment on the transgressor, even if the transgression was inadvertent. The threat of this makes the environment more hostile. It trains people to become more timid and risk averse. It encourages them to limit their actions to those areas where they are confident they know all the rules, lest they step on some unfamiliar procedural landmine, thus making the organization more insular and inflexible. It gives people incentives to spend their time and effort on defensive measures at the expense of forward progress.

When I worked at Electric Communities, we had (as most companies do) a bulletin board in our break room where we displayed all the various mandatory notices required by a profusion of different government agencies, including arms of the federal government, three states (though we were a California company, we had employees who commuted from Arizona and Oregon and so we were subject to some of those states’ rules too), a couple of different regional agencies, and the City of Cupertino. I called it The Wall Of Bureaucracy. At one point I counted 34 different such notices (and employees, of course, were expected to read them, hence the requirement that they be posted in a prominent, common location, though of course I suspect few people actually bothered). If you are required to post one notice, it’s pretty easy to know that you are in compliance: either you posted it or you didn’t. But if you are required to post 34 different notices, it’s nearly impossible to know that the number shouldn’t be 35 or 36 or that some of the ones you have are out of date or otherwise mistaken. Until, of course, some government inspector from some agency you never heard of before happens to wander in and issue you a citation and a fine (and often accuse you of being a bad person while they’re at it). As Alan Perlis once said, talking about programming, “If you have a procedure with ten parameters, you probably missed some.”

In the extreme case, the cumulative costs of all the checkpoints within an organization can exceed the working resources the organization has available, and forward progress becomes impossible. When this happens, the organization generally dies. From an external perspective – from the outside, or even from one part of the organization looking at another – this appears insane and self-destructive, but from the local perspective governing any particular piece of it, it all makes sense and so nothing is done to fix it until the inexorable laws of arithmetic put a stop to the whole thing. A famous example of this was Atari, where by 1984 the combined scleroses affecting the product development process became so extreme that no significant new products were able to make it out the door because the decision making and approval process managed to kill them all before they could ship, even though a vast quantity of time and money and effort was spent on developing products, many of them with great potential. Few organizations manage to achieve this kind of epic self-absorption, though some do seem to approach it as an asymptote (e.g., General Motors). In practice, however, what seems to keep the problem under control, here in Silicon Valley anyway, is that the organization reaches a level of dysfunction where it is no longer able to compete effectively and it is supplanted in the marketplace by nimbler and generally younger rivals whose sclerosis is not as advanced.

The challenge, of course, is how to deal with this problem. The most common pathway, as alluded to above, is for a newer organization to supplant the older one. This works, not because the one organization is intrinsically more immune to the phenomenon than the other but simply due to the fact that because it is younger and smaller it has not yet developed as many internal checkpoints. From the perspective of society, this is a fine way of handling things; this is Schumpeter’s “creative destruction” at work. It is less fine from the perspective of the people whose money or lives are invested in the organization being creatively destroyed.

Another path out of the dilemma is strong leadership that is prepared to ride roughshod over the sound justifications supporting all these checkpoints and simply do away with them by fiat. Leaders like this will disregard the relevant constituencies and just cut, even if crudely. Such leaders also tend to be authoritarian, megalomaniacal, visionary, insensitive, and arguably insane – and, disturbingly often, right – i.e., they are Steve Jobs. They also tend to be a bit rough on their subordinates. This kind of willingness to disrespect procedure can also sometimes be engendered by dire necessity, enabling even the most hidebound bureaucracies to manifest surprising bursts of speed and effectiveness. A well known and much studied example of this phenomenon is the military, ordinarily among the stuffiest and most procedure bound of institutions, which can become radically more effective in times of actual war. In the first three weeks of American involvement in World War II, when we weren’t yet really doing anything serious, Army Chief of Staff George Marshall merely started carefully asking people questions and half the generals in the US Army found themselves retired or otherwise displaced.

A more user-friendly way to approach the problem is to foster an institutional culture that sees the avoidance of checkpoints as a value unto itself. This is very hard to do, and I am hard pressed to think of any examples of organizations that have managed to do this consistently over the long term. Even in the short term, examples are few, and tend to be smaller organizations embedded within much larger, more traditional ones. Examples might include Bell Labs during AT&T’s pre-breakup years, Xerox PARC during its heyday, the Lucasfilm Computer Division during the early 1980s, or the early years of the Apollo program. Each of these examples, by the way, benefited from a generous surplus of externally provided resources, which allowed them to trade a substantial amount of resource inefficiency for effective productivity. Surplus resources, however, tend also to engender actual parasitism, which ultimately ends the golden age, as all these examples attest.

The foregoing was expressed in terms of people and organizations, but essentially the same analysis applies almost without modification to software systems. Each of the myriad little inefficiencies, rough edges, performance draining extra steps, needless added layers of indirection, and bits of accumulated cruft that plague mature software is like an organizational checkpoint.

October 19, 2014

Map of The Habitat World

By now a lot of you may have heard about the initiative at Oakland’s Museum of Art and Digital Entertainment to resurrect Habitat on the web using C64 emulators and vintage server hardware. If not, you can read more about it here (there’s also been a modest bit of coverage in the game press, for example at Wired, Joystiq, and Gamasutra).

Part of this effort has had me digging through my archives, looking at old source files to answer questions that people had and to refresh my own memory of how things worked. It’s been pretty nostalgic, actually. One of the cooler things I stumbled across was the Habitat world map, something which relatively few people have ever seen because when Habitat was finally released to the public it got rebranded (as “Club Caribe”) with an entirely different set of publicity materials. I had a big printout of this decorating my office at Skywalker Ranch and later at American Information Exchange, but not very many people will have been in either of those places. Now, however, thanks to the web, I can share it publicly for the first time.

We wanted to have a map because we thought we would need a plan for enlarging the world as the user population grew. The idea was to have a framework into which we could plug new population centers and new places for stories and adventures.

The specific map we ended up with came about because I was playing around writing code to generate plausible topographic surfaces using fractal techniques (and, of course, lots and lots and LOTS of random numbers). The little program I wrote to do this was quite a CPU hog, but I could run it on a bunch of different computers in parallel and combine the results (sort of like modern MapReduce techniques, only by hand!). One night I grabbed every Unix machine on the Lucasfilm network that I could lay my hands on (two or three Vax minicomputers and six or eight Sun workstations) and let the thing cook for an epic all-nighter of virtual die rolling. In the morning I was left with this awesome height field, in the form of a file containing a big matrix of altitude numbers. Then, of course, the question was what to do with it, and in particular, how to look at it. Remember that in those days, computers didn’t have much in the way of image display capability; everything was either low resolution or low color fidelity or both (the Pixar graphics guys had some high end display hardware, but I didn’t have access to it and anyway I’d have to write more code to do something with the file I had, which wasn’t in any kind of standard image format). Then I realized that we had these new Apple LaserWriter printers. Although they were 1-bit per pixel monochrome devices, they printed at 300 DPI, which meant you could get away with dithering for grayscale. And you fed stuff to them using PostScript, a newfangled Forth-like programming language. So I ordered Adobe’s book on PostScript and went to work.
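
For the technically curious, here is a rough sketch of the kind of fractal height-field generation described above, written in modern JavaScript rather than the original C, and using random midpoint displacement (the diamond-square algorithm) as a stand-in for whatever the actual program did. The details are assumptions for illustration, not a reconstruction of the real Habitat code.

```js
// Illustrative sketch: fractal terrain by random midpoint displacement
// (diamond-square). Not the original Lucasfilm-era C program.
function diamondSquare(n, roughness = 0.6, rand = Math.random) {
  const size = (1 << n) + 1;                       // grid is (2^n + 1) on a side
  const h = Array.from({ length: size }, () => new Float64Array(size));
  // Seed the four corners with random altitudes.
  h[0][0] = rand(); h[0][size - 1] = rand();
  h[size - 1][0] = rand(); h[size - 1][size - 1] = rand();

  let scale = 1.0;
  for (let step = size - 1; step > 1; step >>= 1) {
    const half = step >> 1;
    // Diamond step: the center of each square gets the average of its four
    // corners, plus some noise.
    for (let y = half; y < size; y += step) {
      for (let x = half; x < size; x += step) {
        const avg = (h[y - half][x - half] + h[y - half][x + half] +
                     h[y + half][x - half] + h[y + half][x + half]) / 4;
        h[y][x] = avg + (rand() - 0.5) * scale;
      }
    }
    // Square step: the midpoint of each edge gets the average of its
    // available neighbors, plus some noise.
    for (let y = 0; y < size; y += half) {
      for (let x = (y + half) % step; x < size; x += step) {
        let sum = 0, count = 0;
        if (y >= half)       { sum += h[y - half][x]; count++; }
        if (y + half < size) { sum += h[y + half][x]; count++; }
        if (x >= half)       { sum += h[y][x - half]; count++; }
        if (x + half < size) { sum += h[y][x + half]; count++; }
        h[y][x] = sum / count + (rand() - 0.5) * scale;
      }
    }
    scale *= roughness;                            // smaller bumps at finer scales
  }
  return h;
}

// Example: a 257x257 height field.
const field = diamondSquare(8);
```

On today’s hardware this runs in well under a second even for fairly large grids, which gives some sense of why the original run was worth commandeering every Unix machine on the network for a night.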

I wrote a little C program that took my big height field and reduced it to a 500×100 image at 4 bits per pixel, and converted this to a file full of ASCII hexadecimal values. I then wrapped this in a little bit of custom PostScript that would interpret the hex dump as an image and print it, and voilà, out of the printer comes a lovely grayscale topographic map. Another little quick filter and I clipped all the topography below a reasonable altitude to establish “sea level”, and I had some pretty sweet looking landscape. At this point, you could make out a bunch of obvious geographic features, so we picked locations for cities, and drew some lines for roads between them, and suddenly it was a world. A little bit more PostScript hacking and I was able to actually draw nicely rendered roads and city labels directly on the map. Then I blew it up to a much larger size and printed it over several pages which I trimmed and taped together to yield a six and a half foot wide map suitable for posting on the wall.
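
And here is a similarly hedged sketch of the printing step just described: reduce a height field to a 500×100 image at 4 bits per pixel, dump it as ASCII hex, and wrap it in just enough PostScript for the Level 1 image operator to render it. It is written in JavaScript for illustration (the original was a small C program plus hand-written PostScript), and the page-placement numbers are arbitrary assumptions.

```js
// Illustrative sketch only; the original pipeline was a small C program
// emitting hand-written PostScript.

// Downsample a height field (values roughly 0..1, indexed heights[y][x]) to
// outW x outH at 4 bits per pixel, pack two pixels per byte, emit ASCII hex.
function toHexImage(heights, srcW, srcH, outW = 500, outH = 100) {
  const sample = (x, y) => {
    const sx = Math.min(srcW - 1, Math.floor((x * srcW) / outW));
    const sy = Math.min(srcH - 1, Math.floor((y * srcH) / outH));
    return Math.max(0, Math.min(15, Math.floor(heights[sy][sx] * 16)));
  };
  let hex = "";
  for (let y = 0; y < outH; y++) {
    for (let x = 0; x < outW; x += 2) {
      hex += ((sample(x, y) << 4) | sample(x + 1, y)).toString(16).padStart(2, "0");
    }
    hex += "\n";
  }
  return hex;
}

// Wrap the hex dump in minimal Level 1 PostScript: the image operator pulls
// its sample data from currentfile via readhexstring, one row per call.
function toPostScript(hex, outW = 500, outH = 100) {
  return [
    "%!PS-Adobe-2.0",
    `/picstr ${outW / 2} string def`,
    "72 400 translate 468 93.6 scale",   // arbitrary placement on the page
    `${outW} ${outH} 4 [${outW} 0 0 -${outH} 0 ${outH}]`,
    "{ currentfile picstr readhexstring pop } image",
    hex,
    "showpage",
  ].join("\n");
}

// Usage, with the field from the sketch above:
// console.log(toPostScript(toHexImage(field, 257, 257)));
```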

As I was going through my archives in conjunction with the project to reboot Habitat, I encountered the original PostScript source for the map. I ran it through GhostScript and rendered it into a 22,800×4,560 pixel TIFF image which I could open in Photoshop and wallow around in. This immediately tempted me to do a bit more embellishment with Photoshop, so a little bit more hacking on the PostScript and I could split the various components of the image (the topographic relief, the roads, the city labels, etc.) into separate images which could then be individually manipulated as layers. I colorized the topography, put it through a Gaussian blur to reduce the chunkiness, and did a few other little bits of cosmetic tweaking, and the result is the image you see here (clicking on the picture will take you to a much larger version):

Habitat map

(Also, if you care to fiddle with this in other formats, the PostScript for the raw map can be gotten here. Beware that depending on how your browser is configured, it may just attempt to render the PostScript, which might not have exactly the results you want or expect. Have fun.)

There are a number of interesting details here worth mentioning. Note that the Habitat world is cylindrical. This lets us encompass several different interesting storytelling possibilities: Going around the cylinder lets you circumnavigate the world; obviously, the first avatar to do this would be famous. The top edge is bounded by a wall, the bottom edge by a cliff. This means that you can fall off the edge of the world, or explore the wall for mysterious openings. By the way, the top edge is West. Habitat compasses point towards the West Pole, which was endlessly confusing for nearly everyone.

We had all kinds of plans for what to do with this, which obviously we never had a chance to follow through on. One of my favorites was the notion that if you walked along the top (west) wall far enough, eventually you’d find a door, and if you went through this door you’d find yourself in a control room of some kind, with all kinds of control panels and switches and whatnot. What these switches would do would not be obvious, but in fact they’d control things like the lights and the day/night cycle in different parts of the world, the color palette in various places, the prices of things, etc. Also, each of the cities had a little backstory that explained its name and what kinds of things you might expect to find there. If I run across that document I’ll post it here too.

April 29, 2014

Troll Indulgences: Virtual Goods Patent Gutted [7,076,445]

Indulgence

Another terrible virtual currency/goods patent has been rightfully destroyed – this time in an unusual (but worthy) way. From Law360: “EA, Zynga Beat Gametek Video Game Purchases Patent Suit,” by Michael Lipkin:

Law360, Los Angeles (April 25, 2014, 7:20 PM ET) — A California federal judge on Friday sided with Electronic Arts Inc., Zynga Inc. and two other video game companies, agreeing to toss a series of Gametek LLC suits accusing them of infringing its patent on in-game purchases because the patent covers an abstract idea. … “Despite the presumption that every issued patent is valid, this appears to be the rare case in which the defendants have met their burden at the pleadings stage to show by clear and convincing evidence that the ’445 patent claims an unpatentable abstract idea,” the opinion said.

The very first thing I thought when I saw this patent was: “Indulgences! They’re suing for Indulgences? The prior art goes back centuries!” It wasn’t much of a stretch, given the text of the patent contains this little fragment (which refers to the image at the head of this post):

Alternatively, in an illustrative non-computing application of the present invention, organizations or institutions may elect to offer and monetize non-computing environment features and/or elements (e.g. pay for the right to drive above the speed limit) by charging participating users fees for these environment features and/or elements.

WTF? Looks like reasoning along those lines was used to nuke this stinker out of existence. It is quite unusual for a patent to be tossed out in court. Usually the invalidation process has to take a separate track, as it has with other cases I’ve helped with, such as The Word Balloon Patent. I’m very glad to see this happen – not just for the defendant, but for the industry as a whole. Just adding “on a computer [network]” to existing abstract processes doesn’t make them intellectual property! Hopefully this precedent will help kill other bad cases already in the pipeline…

March 5, 2014

Two Recipes for Stone Soup [A Fable of Pre-Funding Startups]

There once was a young Zen master, who had earned a decent name for himself throughout the land. He was not famous, but many of his peers knew of his reputation for being wise and fair. During his career, he was renowned for his loyalty to whatever dojo he was attached to, usually for many years at a time. One year his dojo’s patrons decided to merge with another, larger dojo, and the young master found himself unexpectedly looking for a new livelihood. But he was not desperate, as he’d heeded the words of his mentor, had kept close contact with many other Zen masters over the years, and had many options to consider.

As word spread about the young master’s availability, he began to receive more interest than he could possibly ever fulfill. It took all of his Zen training and long nights just to keep up with the correspondence and meetings. He was getting queries from well-established cooperatives, various governments, charitable groups, many recently formed houses, and even more people who had a grand idea around which to form a whole new kind of dojo. This latter category was intriguing, but the most fraught with peril. There were too many people with too many ideas for the young master to sort through. So he decided to consult with his mentor. At least one more time he would be the apprentice, and so he ventured forth to the dojo of his youth, a half-day’s journey away.

“Master, the road ahead is filled with many choices, some are well traveled roads and others are merely slight indentations in the grass that may some day become paths. How can I choose?” asked the apprentice.

The mentor replied, “Have you considered the wide roads and the state-maintained roads?”

“Yes, I know them well and have many reasons to continue on one of them, but these untrodden paths still call to me. It is as if there is a man with his hands at his mouth standing at each one shouting to follow his new path to riches and glory. How do I sort out the truth of their words?” The young master was genuinely perplexed.

“You are wise, my son, to seek counsel on this matter — as sweet-smelling words are enticing indeed and could lead you down a path of ruin or great fortune. Recount to me now two of the recruiting stories that you have heard and I will advise you.” The mentor’s face relaxed and his eyes closed as he dropped into thought, which was exactly what the young master needed to calm himself sufficiently to relate the stories.

After the mentor had heard the stories, he continued meditating for several minutes before speaking again: “Former apprentice, do you recall the story and lesson of Stone Soup?”

“Yes, master. We learned it as young adepts. It is the story of a man who pretended that he had a magic stone for making the world’s best soup, which he then used to convince others to contribute ingredients to the broth until a delicious brew was made. This story was about how leadership and an idea can ease people into cooperating to create great things for the good of them all,” recounted the student. “I can see the similarity between the callers standing on the new paths and the man with the magic stone. Also it is clear that the ingredients are symbolic of the skills of the potential recruits. But, I don’t see how that helps me.” The apprentice had many years of experience with the mentor, and knew that this challenge would get the answer he was looking for.

“The stories you told me are two different recipes for Stone Soup,” the master started.

“The first caller was a man with a certain and impressive voice that said to you ‘You should join my dojo! It is like none other and it is a good and easy path that will lead to great riches. Many people that you know, such as Haruko and Jin, have tested this path and others who have great reputations including Master Po and Teacher Win are going to walk upon it as well. Your reputation would be invaluable to our venture. Join us now!'”

“The second caller was a humble and uncertain man who spoke softly as he said ‘You should join my dojo. It is like none other and the path, though potentially fraught with peril, could lead to riches if the right combination of people were to take to it. Your reputation is well known, and if you were to join the party, the chance of success would increase greatly. Would you consider meeting here in two days’ time to talk to others to discuss our goals and to see if a suitable party could be formed? Even if you don’t join us, any advice you have would be invaluable.’” The mentor paused to see if his former student understood.

The young master said “I don’t see much difference, other than the second man seems the weaker.”

The mentor suppressed a sigh. Clearly this visit would not have been necessary if the young master were able to see this himself. Besides, it was good to see his student again and to be discussing such a wealth of opportunities.

He resumed, “Remember the parable of Stone Soup. The first man did not. He recited many names as if those names carried the weight of the reputations of their owners. He has forgotten the objective of the parable: The Soup. It is not the names or reputations of the people who placed the ingredients into the soup that mattered. It was that the soup needed the ingredients and the people added them anonymously, in exchange for a bowl of the broth. The first man merely suggested that important people were committed to the journey. I am quite certain that, were you to ask Haruko and Jin what names they have heard as being associated with the proposed dojo, you would find that your name was provided as a reference without your knowledge or consent.”

The student clearly became agitated as the truth of his mentor’s words sank in. There was work to do before the day was done in order to repair any damage to his reputation that speaking with the first man might have caused.

The mentor continued, “The first recipe for Stone Soup is The Braggart’s Brew. It tastes just like hot water because when everyone finds out that the founder is a liar, they all recover whatever ingredients they can to take them home and try to dry them out.”

The mentor took a quick drink, but gave a quelling glance that told the apprentice to remain silent until the lesson was over.

“You called the second man weaker, but his weakness is like that of the man with the Stone from the parable. He keeps his eyes on the goal — creating the Soup or staffing his dojo. Without excellent ingredients, there will be no success; and the best way to get them is to appeal to the better nature of those who possess them. He, by listening to them, transforms the dojo into a community project — which many contribute to, even if only a little bit.”

“Your skills, young master, are impressive on their own. You need not compare yourself with others, nor should you be impressed with one who would so trivially invoke the reputation of others, as if they were magic words in some charm.”

“The second recipe for Stone Soup is Humble Chowder, seasoned with a healthy dash of realism. This is the tempting broth.” And the mentor was finished.

The apprentice jumped up — “Master! I am so thankful! I knew that coming to you would help me see the truth. And now, I see a greater truth — you are also the man with a Stone. Please tell me what I can contribute to your Soup.”

“Choose your next course wisely, and return to me with the story so that I may share it with the next class of students.”

“I will!”

And with that, the young master ran as quickly as he could to catch up with the group meeting about the second man’s dojo. He wasn’t certain if he’d join them, but the honor of being able to contribute to its foundation would be enough payment for now. When he approached the seated group, he was delighted to see several people whose reputation he respected around the fire, discussing amazing possibilities. One of them was Jin, who was shocked to learn that the first man had given his name to the young master…

[This is a long-lost post, originally posted on our old site six years ago. Once again, the Internet Archive to the rescue!]

February 21, 2014

White Paper: 5 Questions for Selecting an Online Community Platform

From Cultivating Community (a Ning blog)

Today, we’re proud to announce a project that’s been in the works for a while: A collaboration with Community Pioneer F. Randall Farmer to produce this exclusive white paper – “Five Questions for Selecting an Online Community Platform.” 

Randy is co-host of the Social Media Clarity podcast, a prolific social media innovator, and literally co-wrote the book on Building Web Reputation Systems. We were very excited to bring him on board for this much needed project. While there are numerous books, blogs, and white papers out there to help Community Managers grow and manage their communities, there’s no true guide to how to pick the right kind of platform for your community.

In this white paper, Randy has developed five key questions that can help determine what platform suits your community best. This platform agnostic guide covers top level content permissions, contributor identity, community size, costs, and infrastructure. It truly is the first guide of its kind and we’re delighted to share it with you.

Go to the Cultivating Community post to get the paper.

December 19, 2013

Audio version of classic “BlockChat” post is up!

On the Social Media Clarity Podcast, we’re trying a new rotational format for episodes: “Stories from the Vault” – and the inaugural tale is a reading of the May 2007 post The Untold History of Toontown’s SpeedChat (or BlockChat™ from Disney finally arrives)

Audio: http://traffic.libsyn.com/socialmediaclarity/138068-disney-s-hercworld-toontown-and-blockchat-tm-s01e08.mp3

Link to podcast episode page

October 30, 2013

Origin of Avatars, MMOs, and Freemium

Origin of Avatars, MMOs, and Freemium – S01E06 Social Media Clarity Podcast

The latest episode of the Social Media Clarity Podcast contains an interview with Chip Morningstar (and podcast hosts Randy Farmer and Scott Moore). This segment focuses on the emergent social phenomena people encountered the first time avatars were combined with virtual currency and artificial scarcity.

Links and transcription at http://socialmediaclarity.net

August 26, 2013

Randy’s Got a Podcast: Social Media Clarity


I’ve teamed up with Bryce Glass and Marc Smith to create a podcast – here’s the link and the blurb:

http://socialmediaclarity.net

Social Media Clarity – 15 minutes of concentrated analysis and advice about social media in platform and product design.

First episode contents:

News: Rumor – Facebook is about to limit 3rd party app access to user data!

Topic: What is a social network, why should a product designer care, and where do you get one?

Tip: NodeXL – Instant Social Network Analysis

August 23, 2013

Patents and Software and Trials, Oh My! An Inventor’s View

What do almost 20 years of software patents yield? You’d be surprised!

I gave an Ignite talk (5 minutes: 20 slides advancing every 15 seconds) entitled

“Patents and Software and Trials, Oh My! An Inventor’s View”

Here are some improved links…

I gave the talk twice, and the second version is also available (shows me giving the talk and static versions of my slides…) – watch that here: