December 22, 2006
Smart people can rationalize anything
One of the things we were able to do at Electric Communities was to attract one of the highest-density collections of scary-smart people I’ve ever seen gathered in one place. There are a lot of nice things about working with smart people. For one thing, they’re not stupid. Working with stupid people just sucks. Smart people are good if you need to do a lot of really hard things, and we did a lot of really hard things. But it’s not all upside. For one thing, smart people tend to systematically overestimate the value of being smart. In fact, it is really valuable, but they still tend to weight it too heavily compared to other virtues you might also value, such as consistency, focus, attentiveness to the emotional needs of your customers, and so on. One of the problems with really smart people is that they can talk themselves into anything. And often they can talk you into it with them. And if you’re smart yourself, you can talk them into stuff. The tendency to drift and the lack of focus can be really extreme unless you have a few slower people in the group to act as a kind of intellectual ballast.
Why do less when you can do more?
Smart people can invent solutions to problems you don’t actually have yet. The problem is, it’s easy to think of problems you don’t have yet. Stopping to solve them all now is a recipe for paralysis. Furthermore, while it’s easy to think of all kinds of potential future problems, it’s much harder to foresee which of those you will actually have, much less all the ones that you are going to have that you didn’t anticipate. People who are less smart manage to avoid pouring resources into unnecessarily solving future problems because they aren’t able to figure out how to solve those problems anyway. So they just ignore them and hope they don’t actually come up, which in a lot of cases turns out to have been the right way to bet.
Programming sage Donald Knuth taught us that “premature optimization is the root of all evil.” It turns out that this doesn’t just apply to coding.
You can’t sell someone the solution before they’ve bought the problem
Smart people can invent solutions to problems folks actually do have but don’t know it yet. These solutions are usually doomed. This ties in with the whole You Can’t Tell People Anything principle. It is nearly impossible to solve a problem for someone if they don’t believe they have the problem, even if they really, really do.
For example, one of the deep flaws in many distributed object schemes, such as the CORBA standard, is that they make no effective provision for distributed garbage collection. This is a major pain, because if storage management is annoying to get right in a non-distributed system, it can be brutally so in a distributed system. Java’s Remote Method Invocation standard is somewhat better in that it does do DGC, but it still can’t cope with unreferenced distributed cycles. One of our whiz kids, Arturo Bejar, devised for us a truly elegant DGC algorithm, which is not only efficient but gracefully handles distributed cycles. (To my eternal shame we patented it.) Since to work well in Java it really wanted to be in bed with the Java Virtual Machine, we tried to sell it to JavaSoft, who were literally next door to us in Cupertino (actually, we tried to give it to JavaSoft), but they weren’t interested. They hadn’t bought the problem yet. So a small piece of great technology that could make the world a slightly better place sits on the shelf.
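The cycle problem is easy to see with plain reference counting, which is, in essence, what counting live remote references across machine boundaries (as RMI’s lease-based DGC does) amounts to. Here is a minimal single-process sketch of the failure mode; `RefCounted` and `CycleDemo` are hypothetical illustration classes, not any real DGC API and certainly not Arturo’s algorithm:

```java
// Minimal reference-counting sketch showing why cycles leak.
// RefCounted and CycleDemo are hypothetical, for illustration only.
import java.util.ArrayList;
import java.util.List;

class RefCounted {
    int refCount = 0;
    final List<RefCounted> refs = new ArrayList<>();

    // Record a reference from this object to target.
    void addRef(RefCounted target) {
        refs.add(target);
        target.refCount++;
    }

    // Called when an external ("client-side") reference is dropped.
    // A pure count-based collector frees only when the count hits zero.
    void release() {
        refCount--;
    }
}

public class CycleDemo {
    public static void main(String[] args) {
        RefCounted a = new RefCounted();
        RefCounted b = new RefCounted();
        a.refCount = 1; // root reference held by a remote client
        a.addRef(b);
        b.addRef(a);    // cycle: a -> b -> a
        a.release();    // the client drops its reference

        // Both objects are now unreachable from any root, but each still
        // holds a reference to the other, so neither count reaches zero
        // and a count-based collector never reclaims them.
        System.out.println("a.refCount = " + a.refCount); // prints 1
        System.out.println("b.refCount = " + b.refCount); // prints 1
    }
}
```

In a single address space a tracing collector cleans this up for free; spread `a` and `b` across two machines and tracing the whole object graph is no longer cheap, which is why an algorithm that handles distributed cycles gracefully was worth patenting.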
Generalitas gratia generalitatis
(For those of you who, unlike me, lack a co-worker who spent 7 years studying Latin, whom you can bug for stuff like this, that’s “Generality for Generality’s Sake”.)
Smart people love to think about the general case.
For example, at Electric Communities we ended up making a big investment in developing an orthogonal persistence mechanism for our object infrastructure. For those of you who are unfamiliar with it, orthogonal persistence is one of the ultimate examples of highly generalized technical coolness. Basically, the idea is that you abstract away the file system (or any other persistent storage, like a database) by keeping everything in memory and then playing tricks with the virtual memory system to make processes immortal. The example we were inspired by was KeyKOS, a highly reliable OS for IBM mainframes that was developed by some friends of ours in the 1980s, in which you could literally pull the power plug from the wall, and, after plugging it back in, be rebooted and running again — including transparently resuming all the running processes that were killed when you cut the power — in about 8 seconds (this was actually one of their trade show demos). You gotta admit, that’s pretty cool. Some commercial installations of KeyKOS have had processes with running times measured in years, surviving not only power failures and hardware malfunctions, but in some cases actual replacement of the underlying hardware with newer generations of equipment.
Orthogonal persistence was attractive to us not just because of the reliability enhancements that it promised, but because it would free programmers from having to worry about how to make their objects persistent, since it abstracted away all the serialization and deserialization of object state and associated design questions. Anything that made programming objects simpler seemed like a big win. And so we built such a system for our object framework, and it worked. It wasn’t quite as awesome as KeyKOS, but it was still pretty awesome.
One of my favorite catch phrases is, “The difference between theory and practice is that, in theory, there is no difference, but, in practice, there is.” Orthogonal persistence was a great idea — in theory. Of course it cost months and months of development time, and it introduced a number of subtle new problems, any one of which would have made a good PhD dissertation topic. If you are trying to produce a commercial product in a timely and cost-efficient way, it is not good to have somebody’s PhD research on your critical path. For example, it turns out that for most kinds of objects, the amount of state you actually need to save persistently is a small fraction of the full run-time state of the object. But in an orthogonal scheme you save it all, indiscriminately. The problem is not so much that it’s bulky, though it is, but that the volume of semantic meaning that you are now committed to maintain in near-perpetuity is vastly increased. That turns out to be very expensive. The problem of schema migration as new versions of objects were developed (due to bug fixes or feature enhancements) proved effectively intractable. Ultimately we fell back on a scheme that compelled programmers to be much more deliberate about what state was checkpointed, and when. That proved more practical, but in the meanwhile we lost literally years of development resources to the blind alley. Less sophisticated developers would not have gone down this blind alley because they wouldn’t have a clue that such a thing was even possible, let alone be able to figure out how to do it. They would have been saved by their own simplicity.
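For a feel of what “deliberate about what state was checkpointed” looks like, plain Java serialization already offers the basic machinery: fields marked `transient` are skipped, and `serialVersionUID` makes the schema version explicit. The sketch below is my own invented example (`Avatar` and `CheckpointDemo` are hypothetical classes, not Electric Communities code), showing a programmer choosing which fields survive a checkpoint and which are run-time-only:

```java
// Sketch of deliberate, selective persistence using standard Java
// serialization. Avatar and CheckpointDemo are hypothetical examples.
import java.io.*;

class Avatar implements Serializable {
    private static final long serialVersionUID = 1L; // explicit schema version

    String name;                  // persistent: part of the checkpointed state
    int gold;                     // persistent
    transient long lastFrameTime; // run-time only: rebuilt after restore
    transient Object renderCache; // run-time only: not worth persisting

    Avatar(String name, int gold) {
        this.name = name;
        this.gold = gold;
        this.lastFrameTime = System.nanoTime();
    }
}

public class CheckpointDemo {
    public static void main(String[] args) throws Exception {
        Avatar a = new Avatar("Habitat", 42);

        // Checkpoint: write only the deliberately chosen state.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(a);
        }

        // Restore: transient fields come back as defaults (0 / null),
        // to be reconstructed by run-time code, not dragged along forever.
        Avatar restored;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            restored = (Avatar) in.readObject();
        }
        System.out.println(restored.name + " " + restored.gold); // Habitat 42
        System.out.println(restored.lastFrameTime); // 0: not checkpointed
    }
}
```

The contrast with orthogonal persistence is the point: here the saved state is a small, named, versioned subset that the programmer has consciously committed to maintaining, rather than the entire run-time heap image.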
Keep in mind that all of the foregoing is not an argument against being smart. Rather, it’s a recognition that human rationality is bounded, and even really, really smart people must necessarily fall far short of mastering the complexities of a lot of the things we do, as engineers, as business people, and as ordinary human beings. What works the kinks out of a thing is often just the passage of time, as the shortcomings and subtleties gradually emerge from practice. Because stupid people work more slowly, they get the benefit of time for free, whereas smart people have to work at it.