Posts from March, 2006

March 26, 2006

A Contrarian View of Identity — Part 1: What are we talking about?

I was approached a few weeks ago to begin thinking about identity, the first time I’ve had the chance to get seriously into this subject in several years. In the time since my last big confrontation with these issues (circa 1999-2000, when we were worrying about registration models for The Palace) there has been a lot of ferment in this area, especially with problems such as phishing and identity theft being much in the news.

As I survey the current state of the field, it’s clear there are now enormous hordes of people working on identity-related problems. Indeed, it seems to have become an entire industry unto itself. Although there are the usual tidal waves of brain-damaged dross and fraudulent nonsense that inevitably turn up when a field becomes hot, there also seems to be some quite good work that’s been done by some pretty smart people. Nevertheless, I’m finding much of even the good work very unsatisfying, and now feel challenged to try to articulate why.

I think this can be summed up in a conversation I recently had with a coworker who asked for my thoughts on this, and my immediate, instinctive reply was, “Identity is a bad idea; don’t do it.” But unless you’re already part of the tiny community of folks who I’ve been kicking these ideas around with for a couple of decades, that’s probably far too glib and elliptical a quip to be helpful.

The problem with identity is not that it’s actually a bad idea per se, but that it’s been made to carry far more freight than it can handle. The problem is that the question, “Who are you?”, has come to be a proxy for a number of more difficult questions that are not really related to identity at all. When interacting with somebody else, the questions you are typically trying to answer are really these:

  • Should I give this person the information they are asking for?
  • Should I take the action this person is asking of me? If not, what should I do instead?

and the counterparts to these:

  • Can I rely on the information this person is giving me?
  • Will this person behave the way I want or expect in response to my request? If not, what will they do instead?

Confronted with these questions, one should wonder why the answer to “Who are you?” should be of any help whatsoever. The answer to that is fairly complicated, which is why we get into so much trouble when we start talking about identity.

All of these questions are really about behavior prediction: what will this person do? To interact successfully with someone, you need to be able to make such predictions fairly reliably. We have a number of strategies for dealing with this knowledge problem. Principal among these are modeling, incentives, and scope reduction. Part of the complexity of the problem arises because these strategies intertwine.

Modeling is the most basic predictive strategy. You use information about the entity in question to try to construct a simulacrum from which predictive conclusions may be drawn. This can be as simple as asking, “what would I do if I were them?”, or as complicated as the kinds of elaborate statistical analyses that banks and credit bureaus perform. Modeling is based on the theory that people’s behavior tends to be purposeful rather than random, and that similar people tend to behave in similar ways in similar circumstances.

Incentives are about channeling a person’s behavior along lines that enhance predictability and improve the odds of a desirable outcome. Incentives rely on the theory that people adapt their behavior in response to what they perceive the consequences of that behavior are likely to be. By altering the consequences, the behavior can also be altered. This too presumes behavior generally to be purposeful rather than random, but seeks to gain predictability by shaping the behavior rather than by simulating it.

Scope reduction involves structuring the situation to constrain the possible variations in behavior that are of concern. The basic idea here is that the more things you can arrange to not care about, the less complicated your analysis needs to be and thus the easier prediction becomes. For example, a merchant who requires cash payment in advance avoids having to consider whether or not someone is a reasonable credit risk. The merchant still has to worry about other aspects of the person’s behavior (Are they shoplifters? Is their cash counterfeit? Will they return the merchandise later and demand a refund?), but the overall burden of prediction is reduced.

In pursuing these strategies, the human species has evolved a variety of tools. Key among these are reputation, accountability, and relationships.

Reputation enters into consideration because, mutual fund legal disclaimers notwithstanding, past behavior is frequently a fairly good predictor of future behavior. There is a large literature in economics and sociology demonstrating that iterated interactions are fundamentally different from one-time interactions, and that expectation of future interaction (or lack thereof) profoundly affects people’s behavior. Identity is the key that allows you to connect the party you are interacting with now to their behavioral history. In particular, you may be able to connect them to the history of their behavior towards parties other than you. Reputation is thus both a modeling tool (grist for the analytical mill, after all) and an incentive mechanism.
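The claim that iterated interactions differ fundamentally from one-time interactions can be made concrete with a toy simulation (my sketch, not from the post; payoff values are the standard Prisoner’s Dilemma numbers): a reputation-sensitive strategy like tit-for-tat sustains cooperation over repeated play, while the one-shot logic of always defecting locks both parties into the worst steady state.

```python
# Payoffs are (my_score, their_score), indexed by (my_move, their_move);
# 'C' = cooperate, 'D' = defect. Standard Prisoner's Dilemma values assumed.
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other; return their total scores."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each player sees the other's past moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(their_history):
    # Cooperate first, then mirror the partner's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(their_history):
    return 'D'

# Two tit-for-tat players sustain mutual cooperation for all 100 rounds...
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
# ...while two defectors grind out the worst steady state.
print(play(always_defect, always_defect))  # (100, 100)
```

The interesting part is what makes tit-for-tat possible at all: it only works if you can recognize your partner from round to round and recall how they treated you, which is exactly the role the essay assigns to identity.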

Accountability enters into consideration because the prospect that you may be able to initiate future, possibly out-of-band, possibly involuntary interactions with someone also influences their behavior. If you can sue somebody, or call the cops, or recommend them for a bonus payment, you change the incentive landscape they operate in. Identity is the key that enables you to target someone for action after the fact. In addition to the incentive effects, means of accountability introduce the possibility of actually altering the outcome later. This is a form of scope reduction, in that certain types of undesired behavior no longer need to be considered because they can be mitigated or even undone. Note also that the incentives run in both directions: given possible recourse, someone may choose to interact with you when they might not otherwise have done so, even if you know that your own intentions are entirely honorable.

Note that reputation and accountability are connected. The difference between them relates to time: reputation is retrospective, whereas accountability is prospective. One mechanism of accountability is action or communication that affects someone’s reputation.

Relationships enter into consideration because they add structure to the interactions between the related parties. A relationship is basically a series of interactions over time, conducted according to some established set of mutual expectations. This is an aid to modeling (since the relationship provides a behavioral schema to work from), an incentive technique (since the relationship itself has value), and a scope limiting mechanism (since the relationship defines and thus constrains the domain of interaction). What makes a relationship possible is the ability to recognize someone and associate them with that relationship; identity is intimately bound up in this because it is the basis of such recognition.

All of this is fairly intuitive because this is how our brains are wired. Humans spent 100,000 or more years evolving social interaction in small tribal groups, where reputation, accountability and relationships are all intimately tied to knowing who someone is.

But the extended order of society in which we live is not a small tribal group, and the Internet is not the veldt.

End of Part 1

In Part 2 (up soon) I will talk about how our intuitions involving identity break down in the face of the scale of global civilization and the technological affordances of the Internet. I’ll also talk about what I think we should do about it.

March 19, 2006

Resilience is better than anticipation

“In preparing for battle I have always found that plans are useless, but planning is indispensable.”
— Dwight D. Eisenhower

Paul Saffo, favorite futurist of every journalist on the tech beat, famously said, “Never mistake a clear view for a short distance”. An important corollary is: being able to see the destination doesn’t necessarily mean you can see the road that gets you there. One lesson I take from this is that trying to plot a detailed map of that road can be a big waste of time.

This tome represents half a million (1993) dollars of Grade A, USDA Choice, prime Vision, happily paid for by our friends at Fujitsu. It lays it all out, in loving detail. This is the document that sold the venture capitalists on funding Electric Communities’ transition from a three-guys-who-do-cyberspace consulting partnership to a full-bore Silicon Valley startup company. We had a regular business plan too (albeit one that was a little vague in the “and this is where the money comes from” part; see It’s a business, stupid), but what the VCs were really buying into was this: the utopian dream. It was exhilarating, it was brilliant, it was some of the best work I’ve ever done.

It was doomed.

It was at once too detailed and too vague. This is the nature of big complicated plans: they have lots of details (that’s what makes them big and complicated) and they leave lots out (because, the world being the complex thing that it is, no matter how much detail you give, it’s never enough to completely describe everything relevant). Plus, the more details and complexities there are, the more opportunities you have to make mistakes. As the number of elements you are juggling grows large, the probability of significant errors approaches certainty. (One notable VC who declined to invest in Electric Communities told us, “This is one of those Save The World plans. We don’t do those; they never work.” I want him for an investor the next time I try to start a company.)
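The claim that errors approach certainty as elements multiply is just compound probability, and a bit of arithmetic makes it vivid (my numbers, purely illustrative): even if each individual element of a plan is independently 99% likely to be right, the odds that a plan containing many such elements is error-free collapse fast.

```python
# Assume each element of the plan is independently 99% likely to be correct.
p_each_correct = 0.99

# The probability that a plan with n elements contains no errors is 0.99^n.
for n_elements in (10, 100, 500):
    p_no_errors = p_each_correct ** n_elements
    print(f"{n_elements:4d} elements: P(no errors) = {p_no_errors:.4f}")
```

With 500 elements the chance of a flawless plan is well under one percent, and a half-million-dollar Vision document has far more than 500 load-bearing details.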

Big utopian plans are doomed by their very natures, because they are always wrong. Being the huge fan of F. A. Hayek that I am, I should have figured this one out a lot sooner. I no longer believe in big plans that try to comprehensively anticipate every requirement and every contingency. Instead, I believe in resilience. Resilience is better than anticipation (a formulation for which I need to credit my friend Virginia Postrel).

It is better to have a simple plan and be resilient in its execution.

By resilience, I mean the ability to quickly and inexpensively adapt to the inevitable changes in circumstance, unforeseen events, and surprising discoveries that any significant undertaking is bound to encounter. Unless what you are doing is completely trivial, the awful truth is that you must do it in an environment that is filled with uncertainty.

Most people will acknowledge the value of being prepared for unexpected external contingencies like earthquakes or downturns in the market. Fewer will take into account the broader, more diffuse, but ultimately more important phenomenon which plagues most long-term projects, namely that the nature of the world shifts between the time you start and the time you plan to be done, so that a plan that might have been ideal when you started might be disastrous by the time you finish (Freeman Dyson has some interesting things to say about this in Infinite in All Directions). Very few indeed appreciate what I think is the biggest source of uncertainty, which is that you don’t really understand exactly what to do and how to do it until you are well on your way to having already done it. It is in the doing of a thing that you end up discovering that you need to do other things you hadn’t originally counted on or that you now need to do some things differently from how you’d originally intended. Indeed, you may discover that you’ve already done some things wrong that now need to be done over or just thrown away.

You might reasonably ask how I can possibly reconcile this extreme skepticism about the value of (or, indeed, the possibility of) planning with what I mainly do for a living, which is to develop large, complex software systems. These undertakings would seem to demand exactly the kind of comprehensive, large-scale planning that I’m criticizing here. Indeed, this is how the world of software engineering has usually approached things, and they have the long history of schedule overruns, budget blowouts, and general mayhem and misery to prove it. Accepting the limitations of human rationality with respect to planning and forecasting is merely bowing to reality.

My new religion, in the realms of project planning in general and software development in particular, is what I guess I’d call “hyperaggressive incrementalism”:

Do really small steps, as small as you can manage, and do a lot of them really, really fast.

Don’t do anything you don’t have to do, and don’t do anything that you don’t have an immediate need for. In particular, don’t invest a lot of time and effort trying to preserve compatibility with things that don’t exist yet.

Don’t try too hard to anticipate your detailed long term needs because you’ll almost certainly anticipate wrong anyway. But be prepared to react quickly to customer demands and other changes in the environment.

And since one of the dangers of taking an incremental approach is that you can easily drift off course before you notice, be prepared to make sweeping course corrections quickly, to refactor everything on a dime. This means that you need to implement things in a way that facilitates changing things without breaking them.

Don’t fix warts in the next rev, fix them now, especially all the annoying little problems that you always keep meaning to get around to but which never seem to make it to the top of your todo list. Those annoying little warts are like barnacles on a ship: individually they are too small to matter, but in aggregate their drag makes it very hard to steer and costs you a fortune in fuel.

Simple is better than complicated. General is better than specialized. But simple and specialized now is better than general and complicated some day.

With respect to software in particular, adopt development tools and processes that help you be resilient, like memory safe, strongly typed, garbage collected programming languages and simple, straight-ahead application frameworks (i.e., Java very good, J2EE very bad). I also favor rigorous engineering standards, ferociously enforced by obsessive compulsive code nazis. I sometimes consider it a minor character flaw that I’m not temperamentally suited to being a whip-cracking hardass. Every team should have one.

Writing things down is good, but big complicated specifications are of a piece with big utopian plans: too ponderous to be useful, usually wrong, and rapidly obsolescent.

In general, it is better to have a clear, simple statement of the goal and a good internal compass, than to have a big, thick document that nobody ever looks at. That good internal compass is key; it’s what distinguishes a top tier executive or developer from the second and third stringers. Unfortunately, the only way I’ve found to tell whether someone is good in this respect is to work with them for several months; that’s pretty expensive.

My bias at this point is to favor productivity over predictability. It’s OK to have a big goal, possibly even a really big, visionary, utopian goal, as long as it’s just a marker on the horizon that you set your compass by. Regardless of what plans you may have (or not), a productive process will reach the goal sooner. Predictability is elusive; productivity, in contrast, is actually achievable.