Posts filed under "Lessons Learned"

July 4, 2004

Beware the Platform II

A long time ago we said “The implementation platform is relatively unimportant.” This was a statement made at a time when a lot of people were insisting that to do “real” cyberspace (whatever that is), you needed an $80,000 Silicon Graphics system (at least), whereas we came along and somewhat arrogantly claimed that all the stuff that really interested us could be done with a $150 Commodore 64. And we still believe that, notwithstanding the fact that today’s analog of the C64, a $150 PlayStation 2 or Xbox, has performance specs that exceed 1989’s $80,000 SGI machine. (Hey, we never said that we didn’t want cool 3D graphics, just that they weren’t the main event.)

So it should come as no great surprise, at least to those of you who have come to recognize us for the crusty contrarians that we are, when I tell you that one of the lessons that we’ve had our noses rubbed in over the past decade or so is that the platform is actually pretty darned important.

Our point about the platform being unimportant was really about performance specs: pixels and polygons, MIPS and megabytes. It was about what our long-time collaborator Doug Crockford calls the “threshold of goodenoughness”. We were in an era when the most salient characteristic of a computational platform was its performance envelope, which seemed to define the fundamental limits on what you could do. Our thesis was simply that much of what we wanted to do was already inside that envelope. Of course we always hunger for more performance, but the point remains. What we didn’t pay enough attention to in our original story, however, was that a platform is characterized by more than just its horsepower.

No matter how much our technical capabilities advance, there will always be something which acts as a limiting constraint. But though there are always limits, our experience had always been that these limits kept moving outward with the march of progress. While we were always champing at the bit for the next innovation, we were also fundamentally optimistic that the inexorable workings of Moore’s Law would eventually knock down whatever barrier was currently vexing us.

In the past 5-10 years, however, we have begun to encounter very different kinds of limits in the platforms that are available in the marketplace. These limits have little to do with the sorts of quantitative issues we worry about in the performance domain, and none of them are addressed (at least not directly) by Moore’s Law. They include such things as:

  • Operating system misfeatures
  • Dysfunctional standards
  • The ascendancy of the web application model
  • The progressive gumming up of the workings of the Internet by the IT managers and ISPs of the world
  • Distribution channel bottlenecks, notably customer reluctance or inability to download and/or install software
  • A grotesquely out of balance intellectual property system
  • The ascendancy of game consoles and attendant closed-system issues
  • Clueless regulators, corrupt legislators, and evil governments

As with the performance limitations that the march of progress has overcome for us, none of these are fundamental showstoppers, but they are all “friction factors” impeding development of the kinds of systems that we are interested in. In particular, several of these problems interact with each other in a kind of negative synergy, where one problem impedes solutions to another.

For example, the technical deficiencies of popular operating systems (Microsoft Windows being the most egregious offender in this regard, though certainly not the only one) have encouraged the proliferation of firewalls, proxies, and other function-impeding measures deployed by ISPs and corporate network administrators. These in turn have shrunk many users’ connectivity options, reducing them from the universe of IP to HTTP plus whatever idiosyncratic collection of protocols their local administrators have deigned to allow. (Folks should remind me, once I get the current batch of posts I’m working on through the pipeline, to write something about the grotty reality of HTTP tunneling.) Furthermore, the security holes in Windows have made people rationally hesitant to install new software off the net (setting aside for a moment the additional inhibiting issues of download bandwidth and the quantum leap in user confusion caused by any kind of “OK to install?” dialog). Yet such downloaded software is the major pathway by which one could hope to distribute workarounds to these various connectivity barriers. And working around these barriers in turn often comes down to overcoming impediments deliberately placed by self-interested vendors who attempt to use various kinds of closed systems to achieve by technical means what they could not achieve by honest competition. And these workarounds must be developed and deployed in the face of government actions, such as the DMCA, which attempt to impose legal obstacles to their creation and distribution. Although we enjoyed a brief flowering of the open systems philosophy during the 1990s, I think this era is passing.
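To give a flavor of the tunneling trick before I get to that post, here is a minimal sketch in Python. The /tunnel endpoint is invented for illustration, and none of this is code from any of our systems; the real grot lies in everything this ignores, such as proxies that buffer, cache, time out, or mangle your requests.

    import urllib.request

    # If HTTP is the only protocol your local network lets out, you smuggle
    # your real protocol through it: each message rides out as the body of a
    # POST request, and the reply rides back in the HTTP response. The
    # /tunnel endpoint here is hypothetical; it stands for any server you
    # control that unwraps the payload and hands it to the real protocol.
    def send_via_http(host, payload):
        req = urllib.request.Request(
            f"http://{host}/tunnel",       # hypothetical unwrapping endpoint
            data=payload,                  # the real protocol message
            headers={"Content-Type": "application/octet-stream"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()             # the reply, similarly wrapped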

Note that, barring the imposition of a DRM regime that is both comprehensive and effective (which strikes me as unlikely in the extreme), the inexorable logic of technological evolution suggests that these barriers will be both permeable and temporary. That is a hopeful notion if you are, for example, a human rights activist working to tunnel through “The Great Firewall of China”. On the other hand, these things are, as I said, friction factors. In business terms that means they increase the cost of doing business: longer development times due to more complex systems that need to be coded and debugged, the time and expense of patent and intellectual property licensing requirements, more complicated distribution and marketing relationships that need to be negotiated, greater legal expenses and liability exposure, and the general hassle of other people getting into your business. This in turn means a bumpier road ahead for people like Randy and me if we try to raise investment capital for The Next Big Thing.

May 10, 2004

Beware the Platform I

It’s easy to get sucked into obsessing over the platform. In fact, in coming weeks I’ll probably spend a fair portion of the writing I do here obsessing over the platform. There are a couple of different modes of obsession that bear discussion here. One is obsessing over building a platform. The other is obsessing over requirements for (or, alternatively, coping with) a particular platform. I’ll talk about the first of these here and save the second for a later post.

There’s this seductive lure to the idea of building the ultimate toolkit. You think: if you can build the universal tool, then you can sell it to everyone — wow, that’d be a great business. Plus, it’s so cool to work on the problems involved; platform development is one of the prime arenas for demonstrating technical machismo. The danger is that you can easily work yourself into a big, huge, complicated mess with far too many knobs and dials to ever be usable, and yet too compromised to really be useful. Our experience over the past few years has forced us to accept that different kinds of applications have differently shaped envelopes they want to go in; there is no universal platform.

Nowadays, web application platforms are a huge business, dominated by giants like Microsoft, IBM, BEA, and Oracle. A large portion of the attention (and money) currently directed at platform issues flows their way.

However, an MMOG or virtual world system does not fit happily into the canonical web application envelope. It’s too interactive, too real-time, and too multi-user.

The converse, however, is not necessarily true. One of the things we discovered at State Software (our most recently deceased venture) is that an architecture originally shaped by the needs of virtual-world-like applications can really kick ass when it comes to a lot of web-centric, traditional business applications. Starting from server concepts originally evolved for the community social space, we developed a web application platform that was able to offer a level of interactive responsiveness and user interface flexibility that was consistently superior to what could be done with a traditional J2EE or CGI style app server, as well as being able to easily support a variety of real-time and multi-user applications that traditional app servers couldn’t touch. The sheer size of the market made it an attractive target. Even though the big guys can squash you like a bug if they have a mind to, there are so many niches available that you can make a very nice living filling in the cracks that are too small for the big guys to care about.
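To make the architectural contrast concrete, here is a minimal sketch in modern Python, with invented names; our actual server looked nothing like this code, but the shape is the same. A traditional app server answers a request and forgets you exist, while a virtual-world-style server holds every client’s connection open and can push events to all of them the instant something happens, which is exactly what chat, presence, and other real-time features want.

    import asyncio

    class Hub:
        """Holds one persistent connection per client and fans out events."""
        def __init__(self):
            self.clients = set()                # the open connections

        async def handle_client(self, reader, writer):
            self.clients.add(writer)
            try:
                while True:
                    line = await reader.readline()
                    if not line:
                        break                   # client hung up
                    await self.broadcast(line)  # push to everyone, right now
            finally:
                self.clients.discard(writer)
                writer.close()

        async def broadcast(self, message):
            for w in list(self.clients):
                w.write(message)
                await w.drain()

    async def main():
        hub = Hub()
        server = await asyncio.start_server(hub.handle_client, "127.0.0.1", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())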

Unfortunately, the unkind companion lesson was that this didn’t matter. A better paradigm is a different paradigm, and a different paradigm is nearly unsellable. The problem was that in order to enable customers to do things they couldn’t do in app servers (and which they wanted very much to do), they had to do things that you just don’t do in app servers (which was simply unacceptable). We were selling a technically superior solution, but one which asked too much of its customers. It wasn’t that what we were asking them to do was hard — you could get proficient in our system in a few days of playing around with it — but we were asking them to adopt a completely different view of application architecture, and that’s just not the kind of change people make without an overwhelmingly compelling reason. We were so small that the only way we could try to give them such an overwhelmingly compelling reason was to try to tell them about our system, and this ran smack up against the “you can’t tell people anything” problem.

The paradox of the platform business is this:

  • Innovation is required
  • Innovation is regarded with great suspicion

Our short summary of this was, “the IT guys have it out for you,” but that’s an oversimplification. An application platform is but one component of a much more extended ecosystem that encompasses not only the developer community but the entire supporting cast of consultants, book publishers, trade show organizers, training seminar gurus, industry pundits, standards committees, corporate and institutional IT organizations, and vendors of affiliated technologies — not to mention the customers themselves. Among these diverse interests there develops a self-contained, almost hermetically sealed worldview that is very difficult to breach. Our solution was outside that worldview, which made it a very hard sell indeed, notwithstanding the fact that breaking out of that worldview was a prerequisite to solving the problem they wanted solved. Perhaps if we’d had another $5-10 million for marketing we might have been able to do it. On the other hand, perhaps if we’d had another $5-10 million for marketing we might have just pissed away another $5-10 million on marketing and wound up in the same place in the end. In any event, the risk was just too big for anybody (investors or potential customers) to sign up for, and so this technology sits on the shelf.

One might think that taking this technology back to our roots in the games world might make sense. While I’d certainly use this technology as a base if I were starting out to develop an MMOG-type product today, I’m dubious about the broader prospects for a platform business per se. At entirely the other end of the spectrum from business software developers, game developers have historically been much more inclined to roll their own solutions rather than build with what they can get off the shelf. And, while some standards, such as the IP protocol suite or OpenGL, have proven a boon to the game development community, game developers have not been driven by the kind of slavish (one might even say cargo-cultish) obsession with standards that characterizes much business application development. To the extent that game developers remain compelled by competitive pressures always to be pushing the outside of the technical envelope, I suspect this bias towards home-grown solutions will stick with us. Even with all the stuff that’s now available for network application developers, I don’t think this has changed much recently. Couple this with the generally marginal economics of the games business, and I’m inclined to think that the platform business is unlikely to be a winning proposition. I fear that this does not bode well for MMOG platform companies like Zona or Butterfly.net that have positioned themselves in this space, irrespective of whatever merits their products may possess in purely technical terms.

April 22, 2004

You can't tell people anything

This is sort of Morningstar’s version of Murphy’s Law.

When we were assembling our catalog of the things we had learned over the past decade and a half in this business, we almost didn’t include this one because it seems so banal. But I keep finding that it’s often the first thing I say when people ask me about my experiences (and another thing I’ve learned is to pay attention to things I find myself saying; that way I’ll know what I really think). And, upon reflection, I think it’s actually one of the more important lessons that we’ve learned.

We all spend a lot of our time talking to bosses or investors or marketing people or press or friends or other developers. I’m totally convinced that a new idea or a new plan or a new technique is never really understood when you just explain it. People will often think they understand, and they’ll say they understand, but then their actions show that it just ain’t so.

Years ago, before Lucasfilm, I worked for Project Xanadu (the original hypertext project, way before this newfangled World Wide Web thing). One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?” or “why would anyone want to put documents online?” Alas, many things really must be experienced to be understood. We didn’t have much of an experience to deliver to them though — after all, the whole point of all this evangelizing was to get people to give us money to pay for developing the software in the first place! But someone who’s spent even 10 minutes using the Web would never think to ask some of the questions we got asked.

In 1988 we began consulting to Fujitsu, when they licensed Habitat from Lucasfilm to create Fujitsu Habitat in Japan. We started out with a week-long seminar at Skywalker Ranch for their team, explaining everything we knew about Habitat. We gave them copious documentation and complete source code listings. Following that, for the next couple of years they had unlimited access to us via fax, phone and email to answer any questions they might have. We made several visits to Japan to advise them. On our visits they often asked questions that seemed a little, well, odd. We chalked it up to the language barrier, but still, there were clearly things they weren’t getting. For example, their server ran on five (not four, not six, five) Fujitsu A60 minicomputers, and became hopelessly bogged down after about 80 concurrent users. We were never able to get a clear picture of why. We asked lots of questions and they’d try to answer them, but none of the explanations made any sense that we could puzzle out. They were trying to tell us, you see, but you can’t tell people anything.

The mystery was solved a few years later when we began the WorldsAway project, still consulting to Fujitsu but in a role that was much more hands-on. Our initial plan had been to work from the Fujitsu Habitat code, back-porting the client to Macs and Windows, and cleaning up their server (80 users, yeesh). When we took apart their code, we finally figured out what had been puzzling us all that time: they had lost the architecture. In spite of all the information we gave them, we had completely failed to communicate how things worked. Their guys hadn’t understood the whole client-server concept, which for that day and place was somewhat exotic, so they just implemented what they knew, which was a terminal-mainframe architecture. Their “client” was basically a fancy, highly specialized graphics terminal; all the real work was done on the server. For example, when you issued a command to an object, instead of sending a command message to the object on the server, the client would send the X-Y coordinates of your mouse click. The server would then render its own copy of the scene into an internal buffer to figure out what object you had clicked on. Not only was this extremely inefficient, but the race conditions inherent in a multi-user environment meant that it also sometimes just got the wrong answer. It was amazing…
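To illustrate the difference, here is a toy sketch in Python, with invented message formats and names; the real protocols were considerably more involved.

    from dataclasses import dataclass

    @dataclass
    class Obj:
        id: int
        x: int
        y: int
        w: int
        h: int

    class Scene:
        """The client's local copy of the world, sufficient for hit testing."""
        def __init__(self, objects):
            self.objects = objects

        def object_at(self, x, y):
            for o in self.objects:
                if o.x <= x < o.x + o.w and o.y <= y < o.y + o.h:
                    return o
            return None

    # Client-server, as designed: the client resolves the click itself and
    # sends the server a semantic command naming the object and the verb.
    def on_click_habitat_style(scene, x, y, send):
        target = scene.object_at(x, y)        # hit test happens on the client
        if target is not None:
            send({"op": "DO", "object": target.id, "verb": "activate"})

    # Terminal-mainframe, as built: the "client" is a dumb display that ships
    # raw coordinates; the server must re-render its own copy of the scene
    # just to repeat the hit test. Expensive, and racy, since the server's
    # scene may have changed between the click and its arrival.
    def on_click_terminal_style(x, y, send):
        send({"op": "CLICK", "x": x, "y": y})

    # Usage:
    scene = Scene([Obj(id=42, x=10, y=10, w=32, h=32)])
    on_click_habitat_style(scene, 20, 20, print)  # {'op': 'DO', 'object': 42, ...}
    on_click_terminal_style(20, 20, print)        # {'op': 'CLICK', 'x': 20, 'y': 20}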

What’s going on is that without some kind of direct experience to use as a touchstone, people don’t have the context that gives them a place in their minds to put the things you are telling them. The things you say often don’t stick, and the few things that do stick are often distorted. Also, most people aren’t very good at visualizing hypotheticals, at imagining what something they haven’t experienced might be like, or even what something they have experienced might be like if it were somewhat different. One of the things I really miss from my days at Lucasfilm is having artists on staff, being able to run down the hall and say, “hey Gary, draw me this picture.”

Eventually people can be educated, but what you have to do is find a way to give them the experience, to put them in the situation. Sometimes this can only happen by making real the thing you are describing, but sometimes by dint of clever artifice you can simulate it.

With luck, eventually there will be an “Aha!”. If you’re really good, the “Aha!” will be followed by “Oh, so that’s what you meant”. But don’t be too surprised or upset if the “Aha!” is instead followed by “Why didn’t you tell me that?”. At Communities.com we developed a system called Passport (I’ll save the astonishing trademark story for a later posting) that let us do some pretty amazing things with web browsers. For example, with just a few magic HTML tags we could stick avatars on a web page — pretty much any web page. For months Randy kept getting up at management meetings and saying, “We’ll be able to put avatars on web pages. Start thinking about what you might do with that.” Mostly, nobody reacted much. After a couple of months of this we had things working, and so he got up and presented a demo of avatars walking around on top of our company home page. People were amazed, joyful, and enthusiastic. But they also pretty much all said the same thing: “why didn’t you tell us that we could put avatars on web pages?” You can’t tell people anything.

When people ask me about my life’s ambitions, I often joke that my goal is to become independently wealthy so that I can afford to get some work done. Mainly that’s about being able to do things without having to explain them first, so that the finished product can be the explanation. I think this will be a major labor-saving improvement.

One final point: I expect none of you to really get what I’m talking about here, because this principle also applies to itself. But I fully expect I’ll get the occasional email saying “Oh! so that’s what you meant.” or “Why didn’t you tell me that?” I did, but you can’t tell people anything.