Posts from July, 2004

July 31, 2004

The Birth of Habitat (with Many Digressions on the Early History of Lucasfilm Games and All That)

A posting by Andrew Kirmse in Orkut’s Lucas Valley High community (a group for current and former LucasArts and Lucasfilm Games people) expressed interest in the history of how Habitat came to be. I started to compose a response and then realized that this would probably be interesting to a wider audience than our little ghetto on Orkut, so I decided to publish it here instead. It also ended up being rather longer than I had counted on. Some of this may be a little rambling, but history is like that.

In 1985, when Habitat got started, Lucasfilm Games was a very different organization from the LucasArts Entertainment Company that people see today.

In the early 1980s, Lucasfilm had invested heavily in a number of different high-tech R&D projects intended to push the envelope of motion picture production technology. All this stuff was organized into a unit called the Lucasfilm Computer Division. There were groups working on computer graphics, digital audio processing, digital film editing, and a number of other things. Among the other things was a small computer games group that had been started as a joint venture with Atari, who had given Lucasfilm some money in hopes of capturing some kind of benefit by basking in the reflected glory of Star Wars (unfortunately, the proxy glow of celebrity did nothing to help Atari with its most fundamental problem, which was that it was quite possibly the worst managed company in the history of the universe). In order to understand where the games group went, I guess I first have to explain its somewhat odd position in the Lucasfilm organization.

Of the various elements of the Computer Division, by far the most prestigious and well-known was the Graphics Group, which had assembled a bunch of the smartest, most talented, most famous names in the computer graphics field into a world-class research team, working under the direction of Ed Catmull. Every year they would cut a wide swath through SIGGRAPH, stunning everyone with amazing images and brilliant technical papers. (This figures into how I actually came to work for Lucasfilm in the first place, but that’s a long and complicated story of its own for another time).

In 1984 they set out to make a really big splash. Various major bits of magic they had invented were written up for publication (I guess in those days you still got points for showing off how you did things rather than keeping them secret). Along with this they produced a short animated film, “The Adventures of Andre and Wally B”, for the SIGGRAPH film show. “Andre & Wally” made full use of the anti-aliased, motion-blurred, filtered and texture mapped rendering techniques they had developed. It was quite a tour de force relative to the kinds of things that could be put on film by computers in those days. They brought in an up-and-coming animator from Disney named John Lasseter to give the thing some life and filmic sensibility, to distinguish it from the run-of-the-mill SIGGRAPH film show fare, which generally consisted of (a) rendering demos, produced by and for engineers (and composed with exactly the aesthetic qualities that the obvious stereotypes would lead you to expect), (b) TV commercial promo reels, and (c) excruciating, unwatchable, hideous art crap (“Please God, not another Jane Veeder clip! Just kill me now.”).

“Andre & Wally” was, truth be told, perhaps a wee bit more ambitious than their production capabilities at the time were really ready for. It may well be that they thought it would do them good to try to stretch, but in any event it ended up absorbing a lot more time, effort, and money than anybody had really counted on, as they pushed to get all the pixels rendered by SIGGRAPH. In the end it had ballooned into a very big deal indeed. Rumors inside Lucasfilm put the production cost at something in the neighborhood of $500,000, which is a pretty tidy sum for a two minute demo film aimed at the computer graphics in-crowd. They invited George Lucas to come to the premiere at the SIGGRAPH 84 film show in Minneapolis — and he came. At the party afterwards, he was reserved and polite, but apparently he was not very happy with how things had developed.

At a time when Lucasfilm had no major film productions in the pipeline, the Computer Division was a major cash drain, even by the standards of George’s notoriously money sucking organizational empire. He’d invested in all this stuff on the promise that the technology could slash the wildly escalating costs of making the kinds of movies he wanted to make, and here he’d spent a fortune with no usable tools ready for prime time and no end in sight. He’d hired these guys to do technology development for him and they’d gone and spent half a million dollars of his money and made a movie, and not that great a movie either (though it was, admittedly, technically very advanced). He wasn’t paying these guys so they could make movies, he was paying them so he could make movies.

The fallout of this was that George decided he didn’t want to continue funding the world’s most glamorous technology research if there wasn’t going to be some fairly immediate, fairly concrete payoff in it for him. On the other hand, Lucasfilm had invested a lot of money in this stuff and didn’t want to just write it off either. So they made a decision to spin off these various projects into separate companies, in hopes that outside investors could be attracted and the technology might be better commercialized. In the end, the Computer Division was divided, like Gaul, into three pieces.

The Graphics Group became a company that was eventually called Pixar (“Pixar” was originally the name of a piece of hardware they had been developing, but the name ended up getting tacked onto the company because nobody could agree on another name they liked better). Pixar got sold off to Steve Jobs, who had the technology savvy, steely nerves, deep pockets and general megalomania needed to see it through to maturity. They eventually figured out that their biggest assets included not just their technology but John Lasseter, and the rest of that story you probably know. George (or perhaps his accountant) no doubt wishes now that he’d hung onto a bigger piece.

The digital audio and digital film editing projects got spun off into a company called DroidWorks. They attempted to market the film editing system as the EditDroid, and the Lucasfilm Audio Signal Processor (a huge DSP machine) as the SoundDroid. The film and television industry, contrary to what they’d like you to believe, is very conservative technologically. Selling big, expensive pieces of edge-of-the-art technology into that market was just too hard. DroidWorks sold a few EditDroids, but not enough to get any real traction. The company folded after just a few years (George’s reputed comment on this was, “Helluva waste of a good name.”). Ironically, 20 years later, Moore’s Law has brought the cost of this technology down to the point where it is now a big business. The kinds of video editing systems now sold by companies like Avid are pretty close to the EditDroid in both form and function. The vision that George originally bought into has largely been realized by the market; he was right, just too early.

The third piece was the Games Group. It was different from the rest of the various Computer Division elements in a couple of important ways. First of all, it was small — at this point, just eight people, so it was relatively inexpensive and didn’t demand a lot of management’s attention. And because Lucasfilm had gotten some money from Atari, it didn’t have a history as a major cost center. Second, whereas the other groups were principally concerned with creating technology that would be used to create entertainment products, the Games Group’s mission was to create entertainment products directly. This made it seem a lot closer to Lucasfilm’s core business. Also, George had an intuition that this interactive stuff was probably going to be an important part of the business somewhere off in the future, so it was probably a good idea to cultivate a native understanding of the medium in-house, in preparation for the distant day when the technology was mature. At one point George labeled us “The Lost Patrol” — Nobody knows for sure where they are or what they’re doing, but they’re somewhere out there; every now and again somebody sights their flag on the horizon. In some far off day they’ll return to us, bringing news of distant lands and wonders beyond imagining. So the Games Group was not spun off along with the rest, but instead became its own business unit within the company: the Lucasfilm Games Division.

Even though Games managed to stay within the fold, the institutional trauma of The Great Andre & Wally B Budget Blowout definitely had an impact on the group’s mandate. We had already been in the midst of shifting from a researchy mode into a more product oriented one, as the first games developed on Atari’s nickel came ready for market. This transformation was now to be accelerated and we were expected to become truly self-sufficient. We still had a bit of a research bent — we were expected not to be a drain, but we weren’t necessarily expected to make a big profit at first. Our mission statement was dictated directly by George: “Stay small, be the best, and don’t lose any money.”

Because of the phenomenal success of the Star Wars and Indiana Jones franchises, Lucasfilm existed in a weird kind of bubble that made it very different from other companies, especially companies in the computer games industry. Most of this weirdness had to do with money, or expectations about money. The basic attitude can be summed up as, “we are Lucasfilm, people will pay us.” The fundamental business concept of making an investment in expectation of a future return was not part of the general mindset. The expectation rather was that people would pay us to do things, and then we would take a share of the profits of whatever resulted. In other words, we wanted a cut of the proceeds but were not interested in sharing any of the risk. This attitude is a luxury most business people would love to have, but quite correctly recognize for the fantasy that it is. Except in Lucasfilm’s case it wasn’t a complete fantasy. Companies would, in fact, line up to make deals with us in which they took all the risk, either because they wanted what we had so badly that they were willing to pay an extraordinary premium to get it, or just because the cachet of being associated with us was so entrancing. (Lucasfilm insider joke: Q. How many Lucasfilm employees does it take to screw in a light bulb? A. Just one. He holds the light bulb and the world revolves around him.) Lucasfilm was, in fact, in the stone soup business, selling a very expensive, attractively branded stone. Thus, the predominant mindset in this extraordinarily successful and prosperous company was paradoxically one of extreme risk aversion.

This frame of mind colored everything. It led to a couple of fundamental constraints that rather tightly restricted what we could do. The first rule was that we were not to do anything that required spending the company’s own money. We could do pretty much whatever we wanted, but we had to get somebody else to pay for it, arguments about ROI notwithstanding. The second constraint was that although we had a fairly high level of creative freedom, we were absolutely forbidden from doing anything that made use of the company’s film properties, especially Star Wars. That was viewed as just like spending money, since these properties were, in effect, money in the bank. If somebody else wanted to make a Star Wars game, they had to pay a hefty license fee, and so we made money no matter how well or how poorly their game did, whereas if we made such a game ourselves we would be taking all the risk if it bombed (and never mind that we’d also get 100% of the upside if the game was a hit).

The practice the Games Division evolved in this environment was this: anyone in the group who came up with a serious design concept or project idea would write up a two or three page design proposal document, which we would kick around amongst ourselves for critical discussion but which would ultimately get placed in Steve Arnold’s ideas file (Steve was the head of the Games Division at the time). From time to time (a few times a month, I’d estimate), companies would come shopping, looking to do business with us. Sometimes they had specific projects in mind, but more often they just had a vague idea or two and were perhaps drawn here mostly because they thought it would be cool to visit Lucasfilm — tour the ranch, see some movie production facilities, maybe get a glimpse of George or some other famous person over lunch (the role of glamour in this process can’t be overstated). In the course of schmoozing with these folks, Steve would make a judgment about whether they were serious prospects, and if so he’d grab a couple of ideas from the idea file that seemed related to what they were interested in, grab the authors of those ideas, and have a meeting, where we’d make a pitch (often this process would extend over several visits and take weeks or months). Usually this ended up being just talk, but every now and then somebody would bite and we’d land a deal. During the time I was there we did projects not only with traditional games companies like Atari, Epyx, Activision, Electronic Arts, and Nintendo, but also RCA, Apple Computer, Philips, IBM, the National Geographic Society, Fujitsu, Commodore, and others.

So one day, I think it was around October, 1984, I was chatting with my office mate, Noah Falstein, over lunch. We got to musing about the new generation of more powerful personal computers then appearing that were based on the Motorola 68000 16-bit processor, such as the Apple Macintosh and the soon-to-ship Amiga (not to mention such never-to-be-heard-from-again platforms as the Philips CD-I box and the Mindset), as well as the increasing number of computer owners with modems. We had recently finished a many-weeks-long in-house experimental playthrough of Peter Langston’s ground-breaking multi-player Empire game (Peter was actually the founder of Lucasfilm Games, though he had left the company earlier that summer). One of the problems with Empire was that every game seemed to follow the same evolution: exploration, resource buildup, consolidation, nuclear annihilation. Even though you had the potential for rich interaction that came from having the players be real human beings (much more interesting than any game AI thus far produced), the same pattern always unfolded because there really wasn’t much else people could do. The game was closed-ended in this sense. We thought it might be more interesting if the world was much bigger and the player goals more open-ended. It seemed like the platform technology had matured to the point where it might be feasible to attempt such a thing. What resulted was a pair of proposals, one for something we called Lucasnet, which would correspond to what nowadays we’d call a games portal, and one for something we called the Lucasfilm Games Alliance, which would correspond to what nowadays we’d call an MMORPG (and indeed, which looked in concept a lot like what Star Wars Galaxies turned out to be in practice, albeit 20 years later). The latter proposal asserted the following goals:

  • open-ended
  • large-scale
  • low cost to play
  • permits people with varying time commitments to play
  • ability to rejoin if you get wiped out
  • science fiction/interstellar theme
  • distributed processing on home machines
  • permits different levels of interest and ability

Other than “science fiction/interstellar theme”, this actually seems like a reasonable set of desiderata for any large scale online game, even today. (The thematic goal was included partly because it seemed to provide a hook for the kind of open-endedness that we were seeking, but mainly because it appealed to us personally and we thought it would be cool. It’s really the only element of the proposal that was completely arbitrary.)

This proposal went through the usual process of discussion and revision, got expanded rather considerably (and retitled Lucasfilm’s Universe), and then found its way into Steve’s file along with the rest. And that was the last of it, aside from periodic bouts of wistful speculation about just what a fun project it would be if only we could find somebody to fund it. Then Clive Smith came along.

In 1985, the Commodore 64 was the king of the consumer-level machines, and Commodore International (née Commodore Business Machines) was riding high. Clive Smith was their Vice President for Strategic Planning, and sometime during the spring or early summer of 1985 (I forget exactly when, and my notes aren’t clear on this), he came shopping. Every year, Commodore came up with some accessory to try to sell to Commodore 64 owners, in hopes of extracting another couple hundred bucks from each of them. One year it had been cheap printers, another year, floppy disk drives, and this year it was going to be a cheap 300 baud modem. In support of this, Commodore had made (at Clive’s instigation) a large investment in an up-and-coming, consumer-oriented online services company called Quantum Computer Services, who ran a service called QuantumLink (Q-Link for short), which at the time was targeting the Commodore 64 exclusively.

Clive had also been the one to push Commodore into purchasing Amiga, an event that resulted in much bad blood between Commodore and Atari. Prior to Atari’s meltdown in 1984, everybody’s expectation had been that Amiga was destined to become part of Atari’s empire. Rumor had it that there had been a gentlemen’s agreement to this effect between the founders of Amiga and certain high Atari executives. The talent behind Amiga were the same folks who had developed the Atari 400 and 800, and at Amiga they were fixing to take the next step along the same evolutionary pathway. The story went that Amiga had been set up as a separate company mainly because it was impossible to get anything done within Atari’s dysfunctional confines, but once the machine was ready Atari would buy back in (I don’t know if this is actually true, but it’s certainly the stuff of Silicon Valley legend). Jack Tramiel, Commodore’s founder, had been forced out of Commodore in a palace coup the previous year. In a weird sort of role reversal, he had then swallowed the bankrupt Atari’s remains and gone into competition with Commodore. There is evidence to suggest that Amiga had been a big part of what he (incorrectly) thought he was buying as part of the Atari deal. But Clive and Commodore had come along and snatched it up first, winning them Tramiel’s undying enmity (and also lawsuits), and forcing the rushed, impromptu, second-rate engineering behind the 520ST and 1040ST models that Atari introduced the following year.

Consequently, when Clive Smith came to Lucasfilm, he was shopping for two kinds of projects: things that would leverage modems and an online service, and things that would leverage the Amiga. Out of the file came the Lucasfilm’s Universe proposal, and another proposal for an Amiga-based space game that David Fox had written. Steve grabbed David and me, and off to the conference room we went to meet with Clive. We made our pitch, and Clive loved both proposals. However, they experienced very different fates.

Commodore, it turns out, was even more of a cheapskate company than Lucasfilm, and didn’t care to spend money on anything. They would be thrilled to have us develop David’s Amiga space game for them, but their concept of the deal seemed to consist of us developing the game and them being thrilled. They didn’t want to put up any actual cash as part of the transaction. Over the next few years we took a wait-and-see attitude towards how the market for Amiga games developed, as did all the other game developers, and so the whole thing was pretty much a flash in the pan: cool hardware, no sales. Mind you, Amiga enthusiasts are among the most dedicated in the industry, more so even than Mac zealots, so you would always hear a lot about the Amiga, but in round numbers (say, to the nearest million units) there weren’t any Amigas out there to develop for and so few of us did. David’s Amiga game never happened.

Quantum Computer Services, on the other hand, was a completely different matter. Although Commodore was one of their major investors, they were very much their own company, and they had considerably more of an entrepreneurial attitude than Commodore did (as one can readily see today from their respective fates — Commodore is gone, and Quantum Computer Services changed its name to America Online). From a business point of view, they were a good match for us because they already knew they were selling a consumer oriented product. We didn’t have to deal with any sales resistance on that point, as we might have with some of their competitors. All Clive had to do was broker some introductions, and we were off and running. Well, sort of. When two companies set out to do business with each other, there is this sort of dance that takes place involving the business people and the lawyers on each side. It became clear pretty early on in this process that we were going to do a deal, but the dance itself took a very long time.

The initial introduction took place sometime in the summer of 1985. It was promising enough that Steve put me to work preparing a design to pitch to them. I started thinking about open ended virtual worlds (though we didn’t have the vocabulary to talk about it then, which made the whole process much more difficult), and the design immediately started moving away from the kind of outer space, conquer the galaxy kind of fantasy that Noah and I had originally discussed. It was clear that a lot of the experience that QuantumLink was delivering to its customers lay purely in the social dimension — people interacting with other people. We wanted to appeal to people outside the hardcore gamer demographic, which tends to be adolescent males, and try instead to appeal to the more mainstream, non-gamer population who used Q-Link. So the design became much more generic, and we ended up with a world that looked kind of funky and suburban, rather than something based on science fiction or fantasy (though it certainly had many SF and fantasy elements).

This was my second big encounter with You Can’t Tell People Anything (the first having been Project Xanadu). The Commodore and QuantumLink people were visibly enthusiastic about the prospect of working with us, but nobody really had a clue what I was talking about when I tried to explain what the thing was supposed to be. Actually, most of the people at Lucasfilm pretty much didn’t have a clue either. At Steve’s suggestion, I wrote a number of “day in the life” scenarios, to help people who had never seen anything like this imagine what the experience might be like. I got our lead artist, Gary Winnick, to draw up lots of story boards to visualize these. Gary’s illustrations not only helped clarify what I was talking about to everyone else, they also helped me get my own head straight on how this would all end up working. Having all this visual material around gave the project a big creative boost. (It didn’t hurt that I really resonate with Gary’s artistic sensibilities and so found his visual wit and imagination very energizing.)

QuantumLink proved to be an interesting business. It was the first major online service that was really tailored for the consumer market. Although the existing services actually got a lot of their paying user hours from consumers, I think they tended to view the whole consumer aspect of things as a sort of disreputable sideline. The customers they really cared about were businesses, which they viewed as both more serious and more profitable. The 800 pound gorilla in this industry was CompuServe, which at the time charged $20.00/hour during “prime time” (basically, during normal business hours) and $12.00/hour “off peak” (evenings and weekends). This is pretty pricey for your average consumer, even if you think of it in inflated 2004 dollars. In contrast, QuantumLink charged $3.60/hour on evenings and weekends and wasn’t available at all during the work day. They did a number of clever things that let them get away with charging this radically lower price. First of all, they had a sweetheart deal with Telenet, in those days one of the major nationwide X.25 packet network vendors, to buy up unused off peak network capacity at deep discount rates. This was pure gravy for Telenet, who wouldn’t otherwise be selling those lines then anyway, while it gave Q-Link a huge reduction in operating costs. This is what accounted for Q-Link’s unusual operating hours. (It may have helped that some of Quantum’s founders were former Telenet people who had an inside track on who to talk to to make such an arrangement in the first place.) Second, they used a client-server architecture, then quite unheard of in an industry that was historically oriented toward alphanumeric terminals. This enabled them to offload a whole bunch of user interface computation onto the customer’s computer while at the same time dramatically reducing the telecommunications load they placed on the network and on their servers, further reducing their operating costs. 
The smart client also allowed them to make their interface vastly easier to use, since they could do all kinds of modern UI things that aren’t feasible in the keyboard/terminal world. This reduced their customer support costs. Finally, by standardizing on the Commodore 64, they made a virtue of the necessity to have platform-specific client software, because it meant that all client endpoints were essentially the same. This standardization cut customer support overhead even more.
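To make the telecom savings of the smart-client approach concrete, here's a back-of-the-envelope sketch. The byte counts are purely hypothetical illustrations of the idea, not Q-Link's actual protocol:

```python
# Why Q-Link's smart client cut telecom costs: a dumb terminal needs the
# server to ship every screenful of text down the wire, while a smart
# client redraws its own UI locally and exchanges only compact event
# messages with the server. (Byte counts below are illustrative guesses.)

BAUD = 300           # bits/second on a consumer modem of the era
BITS_PER_BYTE = 10   # 8 data bits plus start/stop framing on async serial

def seconds_on_the_wire(payload_bytes: int, baud: int = BAUD) -> float:
    """Transmission time for a payload over an async serial link."""
    return payload_bytes * BITS_PER_BYTE / baud

# Dumb terminal: redraw a full 40x25 character screen from the server.
terminal_redraw = 40 * 25        # 1000 bytes per screen update

# Smart client: a short message like "object 12 moved to (x, y)".
smart_client_event = 8           # a few bytes of opcode plus arguments

print(round(seconds_on_the_wire(terminal_redraw), 1))    # roughly 33 s
print(round(seconds_on_the_wire(smart_client_event), 2)) # well under 1 s
```

At 300 baud, the difference between shipping whole screens and shipping terse events is the difference between an unusable service and a responsive one, and every byte not sent was a byte Quantum didn't pay Telenet to carry.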

From a technical point of view, QuantumLink was a good match because I didn’t have to sell them on the idea of a client-server game architecture. Heck, I didn’t even have to explain to them what a client-server architecture was, as I had had to do with many other folks (today it’s hard to imagine this was once considered an exotic way to do things). Plus, they had already worked out the technical details of doing data communications between Commodore 64 machines and their servers, and could even supply us with source code (well, mostly; as it turned out there were some issues wherein their implementation, uh, deviated from perfection, but that came up much later). Being based on the Commodore 64 was also advantageous since it meant that all of Lucasfilm Games’ heavy duty development tools and our big bag of C64 and 6502 tricks could be brought to bear. It wasn’t the slick, oomphy Amiga that we had originally been thinking in terms of, but it was a platform that we knew how to squeeze for every drop of performance it had.

As the summer of 1985 rolled on, it seemed increasingly likely that we were going to do this project (which had by this time been renamed again, and was now called MicroCosm — remember, this was an age when these machines we were developing for were still called “microcomputers”, so this name seemed like kind of a cool play on words). We had regular communications with the QuantumLink folks and the chemistry was pretty good. I started writing up a detailed technical design and putting together a project schedule. This was getting pretty serious. We already had in hand an amazing 6502-based cel animation engine that Charlie Kellner had developed for another Lucasfilm game, The Eidolon, which looked like it could provide a big chunk of the C64 client. It was all looking like it would come together.

Finally, in early October, Steve Arnold and I flew out to Virginia for a big nail-it-all-down meeting with Quantum. I spent a day with their technical team (Marc Seriff, their VP of Engineering, and Janet Hunter, who was going to be the technical lead on their end), going over the design and working through how it would mesh with their system. They liked what I had to show them and they in turn gave me a good long look at how their stuff worked. The next day we were joined by Steve Arnold from our side, Clive Smith from Commodore, and some of Quantum’s business and legal folks (notably their CEO, Jim Kimsey, and their head of marketing, Steve Case). Quantum management was really pumped and eager to do the deal.

So there we all are in this big conference room. All that remains at this point is for the business people to agree to the terms and for the lawyers to work out the details of the contract. Steve and the Quantum guys kick around the structure of the deal and block out the general shape of things. Actually, most of this has pretty much already been worked out ahead of time; the meeting is supposed to be a formality where management on both sides blesses the deal, they shake hands, and off we go. This is where the project has its first serious brush with death. Quantum’s lawyer jumps in and starts trying to renegotiate the deal: doing business with Lucasfilm is weird, it doesn’t work like anything he’s used to, so he starts trying to mold the relationship into a model he understands (basically, they pay us money, we give them software, they own everything). Steve attempts to enlighten him as to the nature of creative work as opposed to engineering, but the lawyer’s having none of it. But Lucasfilm just doesn’t do deals where the other guy ends up owning everything. Discussion with the business folks continues, ranging from engineering risks and scheduling, to creative and philosophical issues, but there’s this cloud of legal nastiness hanging over us the whole rest of the day. The lawyer doesn’t help things, every so often interjecting with further attempts to rip us off. Even Kimsey starts getting irritated with the guy.

We break at the end of the day with most of the outlines of the deal settled, but an air of extreme uneasiness about ownership and intellectual property issues. Steve Arnold and I go out to dinner with Clive Smith, Jim Kimsey, and Steve Case. Thankfully, over dinner we come to a handshake agreement about the ownership questions. The next day, Steve and I fly back to California. The project is on! (Except for actually finishing the legal paperwork.)

Once we returned to Marin, Lucasfilm’s lawyers got into the act. One thing I could always say with supreme confidence while working at Lucasfilm was: my lawyer can beat up your lawyer. Lucasfilm doesn’t have lawyers, it has legal ninjas — demons and wizards of the bar, each one utterly lacking in mercy. Our lawyers contacted Quantum’s lawyers, and a contract started taking shape. They worked on the contract through October. And through November. And through December…

While we waited, I worked on the technical specification. I worked on the spec through October. And through November. And through December. It proved to be a helluva design (and the vast amounts of unexpected preplanning would prove to really pay off, later). In December we concluded that this contract really was going to happen sooner or later (later, as it turned out, like February), and that we had better get started on the actual work. Steve gave me the go ahead to start recruiting development team members. I grabbed Aric Wilmunder, who had been a contractor working on Koronis Rift for Noah, and tapped Gary to do the artwork. The game was on! Really! This time, for sure!

And Steve pointed me to this other guy who we’d contracted with to port Koronis Rift to the Apple II. Maybe I should talk to him. I said, “OK, Steve, if you say so.” So I went and talked to the guy. I didn’t have a lot of time to interview people, but this guy had done a really nice job on the port, and done it pretty fast. And he was totally excited about the project; he’d been thinking about this sort of thing for years — it was just the kind of thing he’d always wanted to work on. OK, snap judgment: you’re hired.

His name was Randy Farmer.

Best. Management decision. Ever.

July 15, 2004

The Business of Social Avatar Virtual Worlds

Or, why I really like Second Life, even if their business is most likely doomed.

There, Inc. recently announced that they are winding down their consumer service to focus on external contracts (read: government simulation) and on ramping up their platform business. This sort of repositioning is all-too-familiar to Chip and me. Radical shifts like these usually signal the beginning of the end.

Here’s a riddle:
What do you get when you combine three failed dotcoms?

A really, really big crater.

Electric Communities (a.k.a. Communities.com) made similar business strategy shifts when we abandoned our too-big-too-slow-too-soon-solution-without-an-acknowledged-problem secure distributed world EC Habitats/Microcosm in favor of leveraging assets then recently acquired in a 3-way merger with OnLive! and The Palace, Inc. After our attempt to capitalize on delivering advertisements to Palace users failed, we repositioned once again as a multi-user interactive media production company. Those are pretty big shifts, and the people working at EC didn’t possess the skills needed to pull off complete transformations into two different businesses.

We’ll see if There can weather a shift of this magnitude, though history doesn’t bode well:

A history of the business of social avatar worlds

World                    Business Disposition
Habitat/Club Caribe      Succeeded (when services charged $.06/minute)
WorldsAway               Business failed (after services went flat-rate)
EC Habitats/Microcosm    Never shipped; abandoned
The Palace               Business failed. Twice.
There                    First business (social) failed; platform business TBD
Second Life              TBD

Many other miserable failures were omitted from this table for brevity.

These products all had avatars with animated gestures and virtual economies with scarcity and real estate; the last three had user-uploaded textures, and two even had user-programmable objects. For the first four projects, Chip and I had a significant hand in their development, repositioning, and/or deployment, so we’ve been down this road many times before. I’ve done some UI consulting work for Second Life.

Lessons applied

In the Lessons of Lucasfilm’s Habitat, we prescribed certain lessons about virtual worlds. During the 90’s, we followed our own advice on several projects. The architects of many other virtual worlds took our words to heart as well. The Palace founders (whose company was later acquired by Electric Communities) and Second Life architects made a point to tell us so.

In fact, Second Life embodies several of the original lessons almost to a fault, specifically:

  • Communications bandwidth is a scarce resource.
  • The implementation platform is relatively unimportant.
  • Detailed central planning is impossible; don’t even try.
  • And especially our Future Directions section which said to let the users create the content – both the world and objects.

Communications bandwidth is a scarce resource.

A future post will detail why this is true as much as ever, but I find it interesting how There and Second Life each applied this lesson: There ignored it (in the sense of client graphics bandwidth) and selected a rendering design that required a high-powered graphics card, a fast processor, and a fairly beefy Internet connection. Of the 5 machines I own, exactly 0 are officially compatible (I have to hack my way around the hardware check). This choked off their customer base from the start.

Second Life was designed to run on a broader range of graphics hardware, but used streaming technology (the Founder was a RealAudio engineer) to attempt to combat the bandwidth issues. On the graphics side, they decided to use the minimum set of features they could get away with from an open standard: OpenGL. This allowed them to apply another lesson like no one had before…

The implementation platform is relatively unimportant

In Habitat Redux, we basically recant this lesson, indicating that there now is a standard platform – MS-Windows + The Internet + The Web browser. It seemed to us that a universe of protocol-only compatible virtual world clients could be built, but the compromises required (lowest-common-denominator graphics and sound, no common UI conventions, etc.) and the extra development time would mean that no one would make the investment to implement that for a stand-alone application. Certainly not for a social virtual world. With EC Habitats we tried (using Java as our core for the client), but it only made the application too slow to use. But, lo and behold! 5 years later – here comes Second Life: a proof of concept for this principle.

Detailed central planning is impossible; don’t even try.

and the corollary

Future Directions: Let the users build it, all of it.

In Chip’s most recent post he writes about how we discovered significant hidden costs related to user generated content. Let’s call the sample instance of the context problem: "Oh! There’s a penis on your sweater!"

Fortunately, There saw this problem coming and decided to charge people a fee to moderate their uploaded content, a move that seemed to make complete sense. We still don’t know if this portion of the business was break-even, so the jury is out on the viability of this solution. It certainly wasn’t enough to make the company profitable.

Second Life instead decided to make their service adult-only to dodge the problem of kids being exposed to uncontrolled content. They also let users create arbitrary 3D objects out of primitives, add scripts, provide communications conduits to off-site services, provide virtual real estate with user landscaping, and run it all on top of a physics simulator.

OMG! Second Life is the system the original Lessons of Habitat described. The Graphical Mud Xanadu. How could it do anything but succeed?

Second Life has the smallest active population of any virtual world platform on my list, even though they continue to innovate and enable more and more sophisticated user-created content.


I loved it when I was unemployed. It was nothing but fun and intellectual challenge to produce an invisible teleporting 100-round-per-minute auto-cannon that wreaked havoc throughout the WWII online community that settled there. Creating a Blade Runner blimp that traveled the world and handed out teleport cards to the city of Little Tokyo while playing a custom Japanese audio track was the highlight of my citizenship.

But, as soon as I got a job, I stopped creating, and then I stopped playing.

This wasn’t a big surprise to me. I mentioned above that I did some UI consulting work for Second Life. Linden knew that the 1.0 platform was content-producer-centric and they needed to reorient their interface to accommodate content consumers: those who would come in and enjoy all the wonderful content generated by folks like their early adopters (and me). I helped them design a new UI, which has been implemented over the last 8 months or so. It does almost everything I recommended and a lot of other great stuff, like maps of the most popular locations, events, and objects and land for sale.

But where are the consumers? Where are the folks who will pay to participate in all this great (and not so great) user produced content? We built it, why aren’t they coming?

Because available online time is a limited resource and …

  • users who are gamers play MMORPGs and web games, and need that structure.
  • users who aren’t gamers are used to either…
    • visual entertainment being delivered to them, and/or
    • chat-like interaction being low-overhead, mostly IM.

Loading a large client, traveling a virtual geography with an awkward avatar, looking at a map to find and interact with people, and experimenting with a bunch of user-generated experiences of varying quality is just too heavyweight for people who are used to Television, Instant Messaging and Email.

It’s like comparing cell phones and video phones: we’ve had the capability for home video phones for over a decade, but we don’t use them because we don’t need them. They add very little incremental value and introduce a bunch of overhead and complication. On the other hand, cell phones took over in a shorter time because they significantly increased the utility of what a phone was already good at: connecting people for immediate conversation. Interestingly enough, camera/cell phones can now send movies using the same technology that the picture-phones had. The big difference is that the movies (and pictures) are recorded separately from the conversation, which is a model that has much greater utility.

Where does that leave us? Are social/avatar virtual worlds doomed to business extinction? Is there any way services like Second Life can make it?


Focusing on the problems at hand: consumers want to be fed content (they may even pay for it), and a good platform can enable many talented people to create content. It seems that the main missing components are a way to identify and promote the content the consumers want, and a way to deliver it to them with the least possible burden on the consumer’s part.

If Second Life can accomplish this, they will be the first. I wish them the best of luck!

See my earlier post about a different path that avatars have taken.

July 11, 2004

It's a business, stupid

Everybody says this but this time we really mean it

Like we said, we really don’t want to be in the platform business. It’s like trying to sell PhD research to kindergartners: it doesn’t matter how smart you are or how great your stuff is, because that’s not what they are about (your competition is SpongeBob; you’re doomed).

When we started Electric Communities, we did understand that we needed to be in business. We wanted to do some awesome stuff, and awesome stuff requires awesome bucks, and awesome bucks generally come from a revenue stream. But the reality is that venture capitalists don’t invest in stuff because it’s awesome. At least not any more. They invest in stuff because they think they’re going to make a lot of money. If your plan is about making awesome stuff rather than about making a lot of money, they don’t want to talk to you. This is business 101, but it’s easy to fall into the following trap: “Here’s some awesome stuff, surely somebody will want to pay us a lot of money for it, it’s so awesome!” This is sort of The Business Plan of The Damned. But this is another one of those You Can’t Tell People Anything things. People were telling us this the whole time, and we didn’t really get what they were talking about. I mean, we thought we did, we thought we were in business to make a product and sell it, but really we were more interested in making what we were making because it was what we wanted to make than because it was what people wanted to buy. Eventually we figured this out, but by that time the ride was over. I think I know what to do now, but unless one of you out there has 8 or 10 million dollars you want to bet on another spin of the wheel it may be a little while before we get to try again.

So what we’ve got to show for our troubles is a menagerie of amusing business fallacies whose fallaciousness we understand in an intimate and personal way:

User generated content isn’t free

Who’da thunk? You get a couple million people generating stuff, that’s way more stuff than you could ever afford if you had to actually pay people to create it. What’s not to love? Well, the cost of editing and filtering, for one thing.

Sturgeon’s Law famously says, roughly, that 90% of everything is crap. But that proportion is so low only because most of what we see is highly filtered. Over the centuries, vast industries have evolved to filter most of the crap out. It’s kind of like the mess we face today with spam. It’s not that any individual piece of crap is so hard to deal with, it’s that the producers are so damnably prolific.

Context equals Revenue

Advertising! We were going to make money selling ads. So was everybody else. In the early days of the Internet boom, a lot of the advertising business was basically just startups taking in each other’s laundry. Larry Samuels, one of our CEOs (the good one) at Electric Communities, described the dotcom advertising game this way: “Here is $5 million. Your job is to spend this money in a way that generates $3 million in ad revenues. Think you can do it?”

We actually managed to see this for the shell game that it was and avoided getting pulled into it for quite a while. We made the plunge into advertising when we took over The Palace, Inc. and unexpectedly found ourselves with a working product on our hands. Faced with the need to now make money from this situation, we talked ourselves into selling ads. In 1999, just as ad prices dropped through the floor.

One of the big things that popped the advertising bubble is that ads need context. Real advertisers, as opposed to dotcoms playing Let’s Pretend We’re A Business, are control freaks. They’re very sensitive to what their ads appear next to (just ask Janet Jackson and MTV). User generated content is unpredictable — not something that gives control freaks that warm fuzzy feeling that encourages them to spend money on you. Context is everything for them. This is one of the reasons why Google has been able to make substantial money selling really quite modest and unintrusive ads: they can deliver killer context.

Another thing about ads is, you’re serving two masters, the advertiser and the end user, and their wants and needs don’t always line up. Getting the value proposition right is tricky. Traditional media businesses generally resolve the conflict of interest by kissing up to the advertisers, but in the online world it’s harder to hold on to fickle users.

Extraordinary Popular Delusions & the Madness of Crowds

For those of you who don’t recognize the heading, it’s the title of a wonderful book by Charles MacKay all about the Internet. It was published in 1841. I wish I had read it in 1995.

A million free users does not a business make

Another business plan: “Hey! We can get millions of people to use our service. Millions of people, that’s a lot! Surely there must be a way to make money from such a large population!” This is another variation of The Business Plan of The Damned.

An important wrinkle on this is: “We’ll get millions of people to start using our service, then, once we have millions of users who know they like our service, we’ll start charging them money for stuff.” In the real world: turn on the money, then watch them all switch to some other free service that hasn’t used up all its venture funding yet. Or watch them prove just what a powerful engine for innovation the Internet is, as they figure out countless clever ways to keep using your service while not paying you.

Economists have what they call the Principle of Revealed Preference, which is the idea that the only way to find out what people really think is to watch what they really do, especially when it comes to spending money on something. If you instead simply ask them what they think, you’ll get a different answer, because giving you an opinion does not require them to commit to anything. The same principle explains why it is hard to convert a free beta tester into a paying customer. As long as things are free, the starving student, the middle-class yuppie, and the millionaire are pretty much interchangeable. Once you put a price on things, the student can’t afford it and the yuppie now needs to be persuaded to spend. In fact, it’s worse than that, because the one resource they were expending during the free period was their time, and their time budgets likely follow the completely reverse hierarchy.

Boundless Potential

So we had this myth of the boundless potential of the Internet. I love this quote from the economist Herbert Stein:

“Things which cannot go on forever eventually tend to stop”

First Mover Advantage

Another lie to be suspicious of is the so called “first mover advantage”. This notion was most notably promoted by McKinsey Associates, the business consulting firm whose big Silicon Valley success was Apple Computer (or at least, that’s the way it got spun). The idea is that a competitor who gets in first can establish such a dominant position that the barriers to entry to later competitors are huge and so the dominant position is locked in. This is just nonsense. Our experience is, first movers get slaughtered. And we always knew this; back in the days when IBM occupied the post of Great Satan now held by Microsoft, one of their widely quoted mottos was: “Never first. Always second.” The second mover has the advantage of being able to see which of the first movers didn’t get eaten by the sharks.

July 4, 2004

Beware the Platform II

A long time ago we said “The implementation platform is relatively unimportant.” This was a statement made at a time when a lot of people were insisting that to do “real” cyberspace (whatever that is), you needed an $80,000 Silicon Graphics system (at least), whereas we came along and somewhat arrogantly claimed that all the stuff that really interested us could be done with a $150 Commodore 64. And we still believe that, notwithstanding the fact that today’s analog of the C64, a $150 PlayStation II or Xbox, has performance specs that exceed 1989’s $80,000 SGI machine. (Hey, we never said that we didn’t want cool 3D graphics, just that they weren’t the main event.)

So it should come as no great surprise, at least to those of you who have come to recognize us for the crusty contrarians that we are, when I tell you that one of the lessons that we’ve had our noses rubbed in over the past decade or so is that the platform is actually pretty darned important.

Our point about the platform being unimportant was really about performance specs: pixels and polygons, MIPS and megabytes. It was about what our long-time collaborator Doug Crockford calls the “threshold of goodenoughness”. We were in an era when the most salient characteristic of a computational platform was its performance envelope, which seemed to define the fundamental limits on what you could do. Our thesis was simply that much of what we wanted to do was already inside that envelope. Of course we always hunger for more performance, but the point remains. What we didn’t pay enough attention to in our original story, however, was that a platform is characterized by more than just its horsepower.

No matter how much our technical capabilities advance, there will always be something which acts as a limiting constraint. But though there are always limits, our experience had always been that these limits kept moving outward with the march of progress. While we were always champing at the bit for the next innovation, we were also fundamentally optimistic that the inexorable workings of Moore’s Law would eventually knock down whatever barrier was currently vexing us.

In the past 5-10 years, however, we have begun to encounter very different kinds of limits in the platforms that are available in the marketplace. These limits have little to do with the sorts of quantitative issues we worry about in the performance domain, and none of them are addressed (at least not directly) by Moore’s Law. They include such things as:

  • Operating system misfeatures
  • Dysfunctional standards
  • The ascendancy of the web application model
  • The progressive gumming up of the workings of the Internet by the IT managers and ISPs of the world
  • Distribution channel bottlenecks, notably customer reluctance or inability to download and/or install software
  • A grotesquely out of balance intellectual property system
  • The ascendancy of game consoles and attendant closed-system issues
  • Clueless regulators, corrupt legislators, and evil governments

As with the performance limitations that the march of progress has overcome for us, none of these are fundamental showstoppers, but they are all “friction factors” impeding development of the kinds of systems that we are interested in. In particular, several of these problems interact with each other in a kind of negative synergy, where one problem impedes solutions to another.

For example, the technical deficiencies of popular operating systems (Microsoft Windows being the most egregious offender in this regard, though certainly not the only one) have encouraged the proliferation of firewalls, proxies, and other function-impeding features by ISPs and corporate network administrators. These in turn have shrunk many users’ connectivity options, reducing them from the universe of IP to HTTP plus whatever idiosyncratic collection of protocols their local administrators have deigned to allow. (Folks should remind me, once I get the current batch of posts I’m working on through the pipeline, to write something about the grotty reality of HTTP tunneling.) Furthermore, the security holes in Windows have made people rationally hesitant to install new software off the net (setting aside for a moment the additional inhibiting issues of download bandwidth and the quantum leap in user confusion caused by any kind of “OK to install?” dialog). Yet such downloaded software is the major pathway by which one could hope to distribute workarounds to these various connectivity barriers. And working around these barriers in turn often comes down to overcoming impediments deliberately placed by self-interested vendors who attempt to use various kinds of closed systems to achieve by technical means what they could not achieve by honest competition. And these workarounds must be developed and deployed in the face of government actions, such as the DMCA, which attempt to impose legal obstacles to their creation and distribution. Although we enjoyed a brief flowering of the open systems philosophy during the 1990s, I think this era is passing.

Note that, barring the imposition of a DRM regime that is both comprehensive and effective (which strikes me as unlikely in the extreme), the inexorable logic of technological evolution suggests that these barriers will be both permeable and temporary. That is a hopeful notion if you are, for example, a human rights activist working to tunnel through “The Great Firewall of China”. On the other hand, these things are, as I said, friction factors. In business terms that means they increase the cost of doing business: longer development times due to more complex systems that need to be coded and debugged, the time and expense of patent and intellectual property licensing requirements, more complicated distribution and marketing relationships that need to be negotiated, greater legal expenses and liability exposure, and the general hassle of other people getting into your business. This in turn means a bumpier road ahead for people like Randy and me if we try to raise investment capital for The Next Big Thing.