Posts filed under "Theory"

August 26, 2013

Randy’s Got a Podcast: Social Media Clarity


I’ve teamed up with Bryce Glass and Marc Smith to create a podcast – here’s the link and the blurb:

http://socialmediaclarity.net

Social Media Clarity – 15 minutes of concentrated analysis and advice about social media in platform and product design.

First episode contents:

News: Rumor – Facebook is about to limit 3rd party app access to user data!

Topic: What is a social network, why should a product designer care, and where do you get one?

Tip: NodeXL – Instant Social Network Analysis

July 7, 2010

RealID and WoW Forums: Classic Identity Design Mistake

Update #3, July 14th 4pm PST: GamePro interviewed Howard Rheingold and me for a good analysis piece in which I add some new thoughts, including a likely-to-be-controversial comparison to a certain Arizona state law…

Update #2, July 9th 1pm PST: KillTenRats.com just posted an email interview on this topic that I did for them yesterday. There’s some potentially useful business analysis in there, and some more specific suggestions, even if it now feels a bit like residual heat from a flamethrower fest…

Hey Blizzard! I’m a freelance consultant! Just sayin’ :-)

Update #1, July 9th 10am PST: Blizzard has had a change of heart and will not require RealID for forum postings. This is a big win both for the community, and I believe, for Blizzard! The post below remains only as a historical footnote and perhaps a cautionary tale…


Talk about a crapstorm…

Here’s my latest tweet:

@frandallfarmer Quit World of Warcraft. New policy of RealID for forums - stupid beyond belief. #wow #fail #realid #reputation #identity #quit #copa #coppa

That’s too terse, given the magnitude of the error that Blizzard is making, so here’s a longer post…

Identity as Defense?

Blizzard has announced that the upcoming Starcraft II forums will require posts to be attributed to the user’s real-life name, taken from their billing information. As if this wasn’t bad enough, they’ve also said that the World of Warcraft boards will soon have this requirement as well.

They also announced a posting rating system, which suggests they haven’t read anything from Building Web Reputation Systems – or at least haven’t read about the massive disasters that come from combining real names and social ratings at places like Consumating.com – but that’s a post for a different blog. :-)

The idea Blizzard has is a common initial misconception – that people will “play nice” if they have to show their real names to each other. I’m sure they are using Facebook as an example – I often do this in my consulting practice. There is no doubt that Facebook users are better behaved in general than their YouTube counterparts, but the error Blizzard made is to assume that their player relationships are like those of Facebook.

This is a critical misconception, and the community is responding with the longest threads in WoW history, and blog posts everywhere.

The Misconceptions

There are a lot of valid (and invalid) complaints and fears about this change – I’m not going to list them all here. What I want to do is point out the fundamental flaws in this model, for WoW in particular.

Everything in my 35+ years of building online communities (with and without RealID-like systems) screams out that Blizzard is going to be very, very disappointed with the results of this change. Specifically:

1: Names != Quality

Though this change is nominally meant to improve the quality of the community by civilizing conversation through revealing true names, it won’t, because the interesting conversation will simply stop or move elsewhere. Many women (including a Blizzard employee) have already clearly stated that they won’t post anymore. This kind of thing has happened many times before, as communities moved from Yahoo Groups to Ning or wherever. As John Gilmore said:

“The Net interprets censorship as damage and routes around it.”

2: Brain Drain or “NetNews died for our sins”

Some say that getting rid of (bad) people is what Blizzard wants, so point #1 is a plus. But hold on there! Driving customers into silence, or away entirely, doesn’t help either.

Consider the case of Usenet/NetNews, which is where all the great internet community lived until 1994, when the environment became inhospitable to the types of discussions the natives wanted to have and they left en masse to form private mailing lists, and eventually weblogs. The assertion that a community of those who will reveal their names is somehow better does NOT hold up to any reasonable scrutiny (see the next point…)

A shocking number of the people who leave will be amongst the best users Blizzard has – and that could kill the quality of content on the forums, just as happened with NetNews. Sure, fewer trollish posts, but fewer great posters too. I’m betting there are fewer trolls to remove than there are good users who’ll leave or stop posting.

3: Facebook Status != Message Board Participation

I approve my Facebook Friends. None of them are trolls/spammy – or if they are, I block their events and no harm done. All of them can see my real name, status postings, comments, and other personal information. If it turns out I’m sharing too much, I can turn down the disclosure. It’s all optional.

Message boards are public. Readable by God, Google and Everyone. This model requires me to disclose sensitive information to everyone. Completely different.

Here’s the deal. We’re talking gaming here. People will get pissed at each other for stolen kills, breaking alliances, and the price of components – and they want to – no, they need to – have a safe place to express this, to play.

This is my spare time. It’s no other player’s business where I work, where I live, or who my family is. Just as it’s no business of my boss, who knows how to Google my name, what I dedicate my off-hours energy to. The Facebook analogy of Real Identity = Quality Contributions falls apart when applied to gaming. Google + Friends + Foes + Bosses + My Real Name + The fact that I have 6 80th-level characters = Too Much Information.

Facebook does NOT leak this much information, and even so the US Senate is looking into their privacy practices.

This has also happened many times before. Time and again, someone new to the net starts a LiveJournal and doesn’t learn about friends-locking until they get called into the boss’s office to discuss something the boss read on the journal while ego-surfing. This is how many LiveJournals end up owner-deleted!

It is completely unreasonable to expect that people will understand the risks of using their real names on a message board – and if they DO understand, I contend that most people won’t bother posting anything at all.

In short:

  • The trolls now get more information to harass
  • The best players will leave
  • The casual players will panic when they realize that their private-time activity is now public.

This is lose-lose. The worst kind of change. The only upside I see is the ability to lay off board moderation staff as traffic (good and bad) plummets.

An Alternative Everyone Can Live With

There was/is an alternative – described in the Tripartite Identity Model post from two years ago: Implement Nicknames!

Sure, have a top-level social identity, but present it as a user-controlled Nickname and allow users to share a variant of their real name – but don’t require it! If the Nickname is the same as their RealID, feel free to show an indicator, like Amazon.com does with their Real Name™ markers. Allow users to reveal what they wish – even provide incentives for them to do so – but don’t force full disclosure on them. Even Facebook doesn’t do that!
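To make the suggestion concrete, here’s a minimal sketch of that display rule. The field names (nickname, verified_real_name, share_real_name) are invented for illustration; nothing here is Blizzard’s actual data model.

```python
# Hypothetical sketch of the nickname-first display rule suggested above.
# Field names are illustrative only, not anyone's real schema.
from typing import Optional

def display_identity(nickname: str,
                     verified_real_name: Optional[str],
                     share_real_name: bool) -> str:
    """Return the string a forum would show next to a post."""
    if share_real_name and verified_real_name:
        if nickname == verified_real_name:
            # The user chose their real name as their nickname:
            # show an indicator, Amazon-style.
            return f"{nickname} [Real Name]"
        # The user opted in to revealing a variant of their real name.
        return f"{nickname} ({verified_real_name})"
    # Default: the user-controlled nickname and nothing else.
    return nickname

print(display_identity("Frand", None, False))                   # Frand
print(display_identity("Randy Farmer", "Randy Farmer", True))   # Randy Farmer [Real Name]
```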

It’s never too late.

P.S.: I can’t stop being amazed – Asking for help on a forum requires disclosing your real name to God, Google, and Everyone? Come on! You’ve got to be kidding!

February 24, 2010

Grizzled Advice from Business & Legal Primer for Game Development

[Two years ago, I wrote up  a few lessons for inclusion in Business & Legal Primer for Game Development. I’d always meant to cross-post it here and was surprised to see I hadn’t already when I went looking for it to share with the folks over at PlayNoEvil in reply to a recent post. – Randy]

Here are three top-line lessons for those considering designing their own MMORPG (or the latest Facebook game, for that matter)…

1.  Design Hubris Wastes Millions

Read all the papers/books/blogs written by your predecessors that you can – multi-user game designers are pretty chatty about their successes and failures. Pay close attention to their failures – try not to duplicate those. Believe it or not, several documented failures have been repeated over and over in multiple games, despite these freely available resources.

If you are going to ignore one of the lessons of those who went before, presumably because you think you know a better way, do it with your eyes wide open and be ready to change to plan B if your innovation doesn’t work out the way you expected. If you want to hash your idea out before committing it to code, consider consulting with the more experienced designers – they post on Terra Nova (http://blogs.terranova.com/) and talk to budding designers on the Mud-Dev (http://www.kanga.nu/) mailing list, amongst other places. Many of them respond pretty positively to direct contact via email – just be polite and ask your question clearly – after all, they are busy building their own worlds.

2.  Beta Testers != Paying Customers

One recurring error in multi-user game testing is assuming that Beta users of a product will behave the way real customers would. They don’t, for several reasons:

A.  Beta testing is a status symbol amongst their peers

“I’m in the ZYXWorld Limited Beta!” is a bragging right. Since it has street-cred value, this leads the user to be on their best behavior. They will grief much less. They will share EULA breaking hacks with each other much less. They will harass much less. They won’t report duping bugs. The eBay aftermarket for goods won’t exist. In short, anything that would get them kicked out of the beta won’t happen anywhere near as often as when the product is released.

B.  Beta testers aren’t paying.

Paying changes everything. During the Beta, the users work for you. When you release the game, you are working for them. Now some users will expect to be allowed to do all sorts of nasty things that they would never have done during the Beta. Those who were Beta users (and behaved then) will start to exploit bugs they found during the test period but never reported. Bad beta users save up bugs so they can use them after your product’s release to gain an edge over the new users, to dupe gold, or to just crash your server to show off to a friend.

So, you’re probably wondering: how do I get my Beta testers to show me what life on my service will really be like, and to help me find the important bugs/exploits/crashes before I ship? Here are some strategies that worked for projects I worked on:

Crash Our World: Own up to the fact that Beta testers work for you and they do it for the status – incentivize the finding of crash/dup/exploit bugs that you want them to find. Give them a t-shirt for finding one. Put their portrait on the Beta Hall Of Fame page. Give them a rare in-world item that they can carry on into general release. Drop a monument in the world, listing the names of the testers that submitted the most heinous bugs. Turn it into a contest. Make it more valuable to report a bug than to keep it secret.

Pay-ta: Run a Paid Beta phase (after Crash Our World) to find out how users will interact with each other socially (or with your in-game social/communications features). During this phase of testing you will get better information about which social features to prioritize or fix for release. Encourage and/or track the creation of fan communities, content databases, and add-ons – it will help you understand what to prepare for, as well as build word-of-mouth marketing. But keep in mind that there is one thing you can never really test in advance: how your user community will socially scale. As the number of users grows, the type of user will diversify. For most games, the hard-core gamers come first and the casual players come later. Be sure to have a community manager whose job it is to track customer sentiment and understand the main player groups. How your community scales will challenge your development priorities, and the choices you make will have you trading off new-customer acquisition vs. veteran player retention.

3.  There Are No Game Secrets, Period

Thanks to the internet, in-game puzzles are solved for everyone at the speed of the fastest solver. Read about the “D’nalsi Island” adventure in Lucasfilm’s Habitat, where the players consumed hundreds of development hours in only tens of minutes.

The Lesson? Don’t count on secrets to hold up for long. Instead, treat game walk-thru websites as a feature to be embraced instead of the bane of your existence. “But,” you’ll say, “I could create a version of my puzzle that is customized (randomized) for every user! That will slow them down!”  Don’t bother; it will only upset your users.

The Tragedy of the Tapers

Consider the example of the per-player customized spell system in the original Asheron’s Call (by Turbine, Inc.). Each magic spell was designed to consume several types of resources: scarabs, herbs, powders, potions, and colored tapers. The designers thought it would be great to have the users actually learn the spells by having to discover them through experimentation. The formula was different for every spell, and the tapers were different for every user.

One can just hear the designer saying “That’ll fix those Internet spoilers! With this system, they each have to learn their own spells!” But instead of having fun, the players became frustrated by what seemed like nothing more than a waste of their time and resources, burning spell components as they were compelled to slog through a combinatorial explosion of taper combinations for no good reason.

What was interesting is that the users got frustrated enough to reverse-engineer the exact method of generating the random seed that determined the tapers for each user, as follows:

Second Taper = (SEED * [ Talisman + (Herb + 3) + ((Powder + Potion) * 2) + (Scarab - 2) ] ) mod 12

[Modified from Jon Krueger’s web page on the subject.]
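To make the reverse-engineered formula concrete, here is a minimal sketch of the calculation as a client plug-in might perform it; the component codes and seed below are made-up placeholders, and only the shape of the formula comes from the quote above.

```python
# Sketch of the reverse-engineered taper calculation quoted above.
# The numeric component codes and the seed are placeholders.

def second_taper(seed: int, talisman: int, herb: int,
                 powder: int, potion: int, scarab: int) -> int:
    """(SEED * [Talisman + (Herb + 3) + ((Powder + Potion) * 2) + (Scarab - 2)]) mod 12"""
    return (seed * (talisman + (herb + 3) + ((powder + potion) * 2) + (scarab - 2))) % 12

# With the formula in hand, a plug-in can name the right taper on the first
# try instead of burning components on trial and error.
print(second_taper(seed=7, talisman=4, herb=2, powder=1, potion=3, scarab=5))
```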

The players put this all into a client plug-in to remove the calculation overhead, and were now able to correctly formulate the spells the very first time they tried. Unfortunately, this meant that new users (who didn’t know about the plug-in) were likely to have a significantly poorer experience than veterans.

To Turbine’s credit, they revised the game in its second year to remove the need for most of the spell components and created rainbow tapers, which worked for all users in all spells, completely canceling the original per-player design.

Hundreds of thousands of dollars went into that spell system. The users made a large chunk of that effort obsolete very quickly, and Turbine then had to pay for more development and testing to undo their design.

Learn from Turbine’s mistake: focus on making your game fun even if the player can look up all the answers in a database or a plug-in.

Don’t start a secrecy arms race with your users. You’ll lose. Remember: there are more of them than you, and collectively they have more time to work on your product than you do.

December 5, 2009

The Cake is a Lie: Reputation, Facebook Apps, and “Consent” User Interfaces

This is a cross-post from Randy’s other blog Building Web Reputation Systems and all comments should be directed there.


In early November, I attended the 9th meeting of the Internet Identity Workshop. One of the working sessions I attended was on Social Consent user interface design. After the session, I had an insight that reputation might play a pivotal role in solving one of the key challenges presented. I shared my detailed, yet simple, idea with Kevin Marks and he encouraged me to share my thoughts through a blog post—so here goes…

The Problem: Consent Dialogs

The technical requirements for the dialog are pretty simple: applications have to ask users for permission to access their sensitive personal data in order to produce the desired output—whether that’s to create an invitation list, or to draw a pretty graph, or to create a personalized high-score table including your friends, or to simply sign and attach an optional profile photo to a blog comment.

The problem, however, is this—users often don’t understand what they are being asked to provide, or the risks posed by granting access. It’s not uncommon for a trivial quiz application to request access to virtually the same amount of data as much more “heavyweight” applications (like, say, an app to migrate your data between social networks). Explaining this to users—in any reasonable level of detail—just before running the application causes them to (perhaps rightfully) get spooked and abandon the permission grant.

Conflicting Interests

The platform providers want to make sure that their users are making as informed a decision as possible, and that unscrupulous applications don’t take advantage of their users.

The application developers want to keep the barriers to entry as low as possible. This fact creates a lot of pressure to (over)simplify the consent flow. One designer quipped that it reduces the user decision to a dialog with only two buttons: “Go” and “Go Away” (and no other text.)

The working group made no real progress. Kevin proposed creating categories, but that didn’t get anywhere because it just moved the problem onto user education—”What permissions does QuizApp grant again?”

Reputation to the Rescue?

All consent dialogs of this stripe suffer from the same problem: Users are asked to make a trust decision about an application that, by definition, they know nothing about!

This is where identity meets trust, and that’s the kind of problem that reputation is perfect for. Applications should have reputations in the platform’s database. That reputation can be displayed as part of the information provided when granting consent.

Here’s one proposed model (others are possible, this is offered as an exemplar).

The Cake is a Lie: Your Friends as Canaries in the Coal Mine of New Apps

First a formalism: when an application wants to access a user’s private Information (I), it has a set of intended Purposes (P) that it wishes to use that information for. Therefore, the consent request could be phrased thusly:

“If you let me have your (I), I will give you (P). [Grant] [Deny]”

Example: “If you give me access to your friends list, I will give you cake.”

In this system, I propose that the applications be compelled to declare this formulation as part of the consent API call. (P) would be stored along with the app’s record in the platform database. So far, this is only slightly different from what we have now, and of course, the application could omit or distort the request.
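Here’s a minimal sketch of what such a declaration might look like; the function and field names are invented for illustration and are not any real platform’s consent API.

```python
# Minimal sketch of the proposed (I, P) declaration. Function and field names
# are invented for illustration; this is not any real platform's consent API.

APP_DB = {}  # platform-side record of each app's declared (I) and (P)

def request_consent(app_id, information, purpose):
    """An app declares what it wants (I) and what it promises in return (P)."""
    APP_DB[app_id] = {"information": information, "purpose": purpose}
    # The platform renders the consent dialog from the declaration itself.
    wants = ", ".join(information)
    return f"If you let {app_id} have your {wants}, it will give you {purpose}. [Grant] [Deny]"

print(request_consent("CakeApp", ["friends list"], "cake"))
```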

This is where the reputation comes in. Whenever a user uninstalls an application, the user is asked to provide a reason, including abusive use of data, and is specifically asked whether the promise of (P) was kept.

“Did this application give you the [cake] it promised?”

All negative feedback is kept, to be re-used later when new users install the app and encounter the consent dialog. If they have friends who have already uninstalled this application, complaining that the “If (I) then (P)” promise was false, then the moral equivalent of this would appear scrawled in the consent box:


“Randy says the [cake] was unsatisfactory.
Bryce says the [cake] was unsatisfactory.
Pamela says the application spammed her friends list.”
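And here is a sketch of the feedback half of the proposal: uninstall reasons are recorded per app, and a new user’s consent dialog surfaces complaints from that user’s friends. Again, the data structures and names are hypothetical.

```python
# Sketch of the uninstall-feedback loop. Structures and names are hypothetical;
# the point is that complaints are kept per app and shown to friends of the
# complainers when they later see the consent dialog.

COMPLAINTS = {}  # app_id -> list of (user, reason)

def record_uninstall(app_id, user, promise_kept, reason=""):
    if not promise_kept or reason:
        COMPLAINTS.setdefault(app_id, []).append(
            (user, reason or "the promised [cake] was unsatisfactory"))

def consent_warnings(app_id, friends):
    """Lines to scrawl into the consent box, limited to the user's friends."""
    return [f"{who} says: {why}"
            for who, why in COMPLAINTS.get(app_id, []) if who in friends]

record_uninstall("CakeApp", "Randy", promise_kept=False)
record_uninstall("CakeApp", "Pamela", promise_kept=False,
                 reason="the application spammed her friends list")
print(consent_warnings("CakeApp", friends={"Randy", "Pamela", "Bryce"}))
```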

Afterthoughts

Lots of improvements are possible (not limiting it to friends, and letting early-adopters know that they are canaries in the coal mine.) These are left for future discussion.

Sure, this doesn’t help early adopters.

But application reputation quickly shuts down apps that do obviously evil stuff.

Most importantly, it provides some insight to users by which they can make more informed consent decisions.

(And if you don’t get the cake reference, you obviously haven’t been playing Portal.)

September 6, 2009

Elko II: Against Statelessness (or, Everything Old Is New Again)

Preface: This is the second of three posts on Elko, a server platform for sessionful, stateful web applications that I’m releasing this week as open source software. Part I, posted yesterday, presented the business backstory for Elko. This post presents the technical backstory: it lays out the key ideas that led to the thing. Part III, which will be posted tomorrow, presents a more detailed technical explication of the system itself.

It seems to be an article of faith in the web hosting and web server development communities that one of the most expensive resources that gets used up on a web server is open TCP connections. Consequently, a modern web server goes to great lengths to try to close any open TCP connection it can as soon as possible. Symptoms of this syndrome include short timeouts on HTTP Keep-Alive sessions (typically on the order of 10 seconds) and connection pool size limits on reverse proxies, gateways, and the like (indeed, a number of strange limits of various kinds seem to appear nearly any time you see the word “pool” used in any server related jargon). These guys really, really, really want to close that connection.

In the world as I see it, the most expensive thing is not an open connection per se. The cost of an open but inactive TCP connection is trivial: state data structures measured in the tens or hundreds of bytes, and buffer space measured in perhaps tens of kilobytes. Keeping hundreds of thousands of simultaneous inactive connections open on a single server (i.e., vastly more connections than the server would be able to service if they were all active) is really not that big a deal.

The expense I care about is the client latency associated with opening a new TCP connection. Over IP networks, just about the most expensive operation there is is opening a new TCP connection. In my more cynical moments, I imagine web guys thinking that since it is expensive, it must be valuable, so if we strive to do it as frequently as possible, we must be giving the users a lot of value, hence HTTP. However, the notable thing about this cost is that it is borne by the user, who pays it by sitting there waiting, whereas the cost of ongoing open connections is paid by the server owner.

So why do we have this (IMHO) upside-down set of valuation memes driving the infrastructure of the net?

The answer, in part, lies in the architecture of a lot of server software, most notably Apache. Apache is not only the leading web server, it is arguably the template for many of its competitors and many of its symbionts. It is the 800 pound gorilla of web infrastructure.

Programming distributed systems is hard. Programming systems that do a lot of different things simultaneously is hard. Programming long-lived processes is hard. So a trick (and I should acknowledge up front that it’s a good trick) that Apache and its brethren use is the one-process-per-connection architecture (or, in some products, one-thread-per-connection). The idea is that you have a control process and a pool of worker processes. The control process designates one of the worker processes to listen for a new connection, while the others wait. When a new connection comes in, the worker process accepts the connection and notifies the control process, who hands off responsibility for listening to one of the other waiting processes from the pool (actually, often this handshake is handled by the OS itself rather than the control process per se, but the principle remains the same). The worker then goes about actually reading the HTTP request from the connection, processing it, sending the reply back to the client, and so on. When it’s done, it closes the connection and tells the control process to put it back into the pool of available worker processes, whence it gets recycled.

This is actually quite an elegant scheme. It kills several birds with one stone: the worker process doesn’t have to worry about coordinating with anything other than its sole client and the control process. The worker process can operate synchronously, which makes it much easier to program and to reason about (and thus to debug). If something goes horribly wrong and a particular HTTP request leads to something toxic, the worker process can crash without taking the rest of the world with it; the control process can easily spawn a new worker to replace it. And it need not even crash — it can simply exit prophylactically after processing a certain number of HTTP requests, thus mitigating problems due to slow storage leaks and cumulative data inconsistencies of various kinds. All this works because HTTP is a stateless RPC protocol: each HTTP request is a universe unto itself.
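For readers who haven’t looked inside one of these servers, here is a stripped-down sketch of the one-process-per-connection pattern just described. It’s a toy (Unix-only, one request per connection, no error handling), not Apache’s actual code.

```python
# Toy sketch of the prefork pattern described above: a control process forks a
# pool of workers that take turns accepting connections, each handling its
# request synchronously before returning to wait for the next one.

import os
import socket

def worker(server_sock):
    while True:
        conn, _addr = server_sock.accept()    # one waiting worker wins each accept
        with conn:
            conn.recv(4096)                   # read the (stateless) request
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
        # connection closed; this worker goes back to waiting for the next one

if __name__ == "__main__":
    server = socket.create_server(("127.0.0.1", 8080))
    for _ in range(4):                        # the control process spawns the pool
        if os.fork() == 0:
            worker(server)                    # child: never returns
    os.wait()                                 # parent: babysit the workers
```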

Given this model, it’s easy to see where the connections-are-expensive meme comes from: a TCP connection may be cheap, but a process certainly isn’t. If every live connection needs its own process to go with it, then a bunch of connections will eat up the server pretty quickly.

And, in the case of HTTP, the doctrine of statelessness is the key to scaling a web server farm. In such a world, it is frequently the case that successive HTTP requests have a high probability of being delivered to different servers anyway, and so the reasoning goes that although some TCP connects might be technically redundant, this will not make very much difference in the overall user experience. And some of the most obvious inefficiencies associated with loading a web page this way are addressed by persistent HTTP: when the browser knows in advance that it’s going to be fetching a bunch of resources all at once from a single host (such as all the images on a page), it can run all these requests through a single TCP session. This is a classic example of where optimization of a very common special case really pays off.

The problem with all this is that the user’s mental model of their relationship with a web site is often not stateless at all, and many web sites do a great deal of work in their presentation to encourage users to maintain a stateful view of things. So called “Web 2.0” applications only enhance this effect, first because they blur the distinction between a page load and an interaction with the web site, and second because their more responsive Ajax user interfaces make the interaction between the user and the site much more conversational, where each side has to actively participate to hold up their end of the dialog.

In order for a web server to act as a participant in a conversation, it needs to have some short-term memory to keep track of what it was just talking to the user about. So after having built up this enormous infrastructure predicated on a stateless world, we then have to go to great effort and inconvenience to put the state back in again.

Traditionally, web applications keep the state in one of four places: in a database on the backend, in browser cookies, in hidden form fields on the page, and in URLs. Each of these solutions has distinct limitations.

Cookies, hidden form fields, and URLs suffer from very limited storage capacity and from being in the hands of the user. Encryption can mitigate the latter problem but not eliminate it — you can ensure that the bits aren’t tampered with but you can’t ensure that they won’t be gratuitously lost. These three techniques all require a significant amount of defensive programming if they are to work safely and reliably in any but the most trivial applications.
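As a concrete (if simplified) illustration of that last point, here is a sketch of an HMAC-signed piece of client-side state: the server can detect tampering, but nothing stops the browser from simply discarding the cookie. Key handling and serialization are deliberately naive.

```python
# Minimal sketch of tamper-evident (not loss-proof) client-side state: the
# server signs the value it hands out and verifies the signature on the way
# back in. Key management is hand-waved.

import base64, hashlib, hmac
from typing import Optional

SECRET = b"server-side-secret"  # placeholder; real keys need real handling

def sign_state(value: str) -> str:
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(value.encode()).decode() + "." + mac

def read_state(cookie: str) -> Optional[str]:
    payload, _, mac = cookie.rpartition(".")
    value = base64.urlsafe_b64decode(payload).decode()
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None  # tampered: reject

cookie = sign_state("cart=3;step=checkout")
print(read_state(cookie))                                   # round-trips fine
tampered = cookie[:-1] + ("0" if cookie[-1] != "0" else "1")
print(read_state(tampered))                                 # altered bits are detected
# ...but if the user clears cookies, the state is simply gone.
```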

Databases can avoid the security, capacity and reliability problems of the other three methods, but at the cost of reintroducing one of the key problems that motivated statelessness in the first place: the need for a single point of contact for the data. Since the universe is born anew with each HTTP request, the web server that receives the request must query the database each time to reconstruct its model of the session, only to discard it again a moment later when request processing is finished. In essence, the web server is using its connection to the database — often a network connection to another server external to itself — as its memory bus. The breathtaking overhead of this has led to a vast repertoire of engineering tricks and a huge after-market for support products to optimize things, in the form of a bewildering profusion of caches, query accelerators, special low-latency networking technologies, database clusters, high-performance storage solutions, and a host of other specialty products that frequently are just bandaids for the fundamental inefficiencies of the architecture that is being patched. In particular, I’ve been struck by the cargo-cult-like regard that some developers seem to have for the products of companies like Oracle and Network Appliance, apparently believing these products to possess some magic scaling juju that somehow makes them immune to the fundamental underlying problems, rather than merely being intensely market-driven focal points for the relentless incremental optimization of special cases.

(Before people start jumping in here and angrily pointing out all the wonderful things that databases can do, please note that I’m not talking about the many ways that web sites use databases for the kinds of things databases are properly used for: query and long term storage of complexly structured large data sets. I’m talking about the use of a database to hold the session state of a relatively short-term user interaction.)

And all of these approaches still impose some strong limitations on the range of applications that are practical. In particular, applications that involve concurrent interaction among multiple users (a very simple example is multi-user chat) are quite awkward in a web framework, as are applications that involve autonomous processes running inside the backend (a very simple example of this might be an alarm clock). These things are by no means impossible, but they definitely require you to cut against the grain.

Since the range of things that the web does do well is still mind-bogglingly huge, these limitations have not been widely seen as pain points. There are a few major applications that fundamentally just don’t work well in the web paradigm and have simply ignored it, most notably massively multiplayer online games like World of Warcraft, but these are exceptions for the most part. However, there is some selection bias at work here: because the web encourages one form of application and not another, the web is dominated by the form that it favors. This is not really a surprise. What does bother me is that the limitations of the web have been so internalized by the current generation of developers that I’m not sure they are even aware of them, and thus applications that step outside the standard model are never even conceived of in the first place.

Just consider how long it has taken Ajax to get traction: “Web 2.0” was possible in the late 1990s, but few people then realized the potential that was latent in Javascript-enabled web browsers, and fewer still took the potential seriously (notably, among those who did is my long time collaborator and business associate, Doug Crockford, instigator of the JSON standard and now widely recognized, albeit somewhat retroactively, as a Primo Ajax Guru). That “Web 2.0” happened seven or eight years later than it might otherwise have is due almost entirely to widespread failure of imagination. Doug and I were founders of a company, State Software, that invented a form of Ajax in all but name in 2001, and then crashed and burned in 2002 due, in large part, to complete inability to get anybody interested (once again, You Can’t Tell People Anything).

Back in The Olden Days (i.e., to me, seems like yesterday, and, to many of my coworkers, before the dawn of time), the canonical networked server application was a single-threaded Unix program driven by an event loop sitting on top of a call to select(), listening for new connections on a server socket and listening for data I/O traffic on all the other open sockets. And that’s pretty much how it’s still done, even in the Apache architecture I described earlier, except that the population of developers has grown astronomically in the meantime, and most of those newer developers are working inside web frameworks that hide this from you. It’s not that developers are less sophisticated today — though many of them are, and that’s a Good Thing because it means you can do more with less — but it means that the fraction of developers who understand what they’re building on top of has gone way down. I hesitate to put percentages on it, lacking actual quantitative data, but my suspicion is that it’s gone from something like “most of them” to something like “very, very few of them”.

But it’s worth asking what would happen if you implemented the backend for a web application like an old-fashioned stateful server process, i.e., keep the client interacting over the same TCP connection for the duration of the session, and just go ahead and keep the short-term state of the session in memory. Well, from the application developer’s perspective, that would be just terribly, terribly convenient. And that’s the idea behind Elko, the server and application framework this series of posts is concerned with. (Which, as mentioned in Part I, I’m now unleashing on the world as open source software that you can get here).
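As a rough illustration of that “old-fashioned” shape (and of the select()-style event loop mentioned a couple of paragraphs back), here is a toy sessionful server that keeps each connection’s short-term state in an ordinary in-memory dict for as long as the connection stays open. It sketches the general idea only; it is not Elko’s actual architecture or protocol.

```python
# Toy single-threaded, stateful server in the classic select()-loop style: one
# process, many long-lived connections, and each session's short-term state
# kept in plain memory for the life of the connection.

import selectors
import socket

sel = selectors.DefaultSelector()
sessions = {}                                   # connection -> per-session state

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)
    sessions[conn] = {"messages_seen": 0}       # state lives as long as the connection
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if not data:                                # client went away: the session ends
        sel.unregister(conn)
        del sessions[conn]
        conn.close()
        return
    state = sessions[conn]
    state["messages_seen"] += 1                 # conversational memory, no database round-trip
    conn.sendall(f"message #{state['messages_seen']}\n".encode())

server = socket.create_server(("127.0.0.1", 9999))
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
while True:                                     # the event loop
    for key, _events in sel.select():
        key.data(key.fileobj)
```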

Now the only problem with the aforementioned approach, really, is that it blows the whole standard web scaling story completely to hell — that and the fact that the browser and the rest of the web infrastructure will try to thwart you at every turn as they attempt to optimize that which you are not doing. But let’s say you could overcome those issues, let’s say you had tricks to overcome the browser’s quirks, and had an awesome scaling story that worked in this paradigm. Obviously I wouldn’t have been going on at length about this if I didn’t have a punchline in mind, right? That will be the substance of Part III tomorrow.

October 17, 2008

The Tripartite Identity Pattern

One of the most misunderstood patterns in social media design is user identity management. Product designers often conflate the many different roles that various user identifiers must play. This confusion is compounded by using older online services, such as Yahoo!, eBay and America Online, as canonical references. These services established their identity models based on engineering-centric requirements long before we had a more subtle understanding of user requirements for social media. By conjoining the requirements of engineering (establishing sessions, retrieving database records, etc.) with the users’ requirements of recognizability and self-expression, many older identity models actually discourage user participation. For example: Yahoo! found that users consistently cited fear of spammers farming their e-mail address as the number one reason for abandoning the creation of user-created content, such as restaurant reviews and message board postings. This ultimately led to a very expensive and radical re-engineering of the Yahoo! identity model, which has been underway since 2006.

Consistently I’ve found that a tripartite identity model best fits most online services and should be forward compatible with current identity sharing methods and future proposals.

The three components of user identity are: the account identifier, the login identifier, and the public identifier.

[Diagram: the tripartite identity model (account identifier, login identifiers, and public identifiers)]

Account Identifier (DB Key)

From an engineering point of view, there is always one database key: one way to access a user’s record, one way to refer to them in cookies and potentially in URLs. In a real sense, the account identifier is the closest thing the company has to a user. It is required to be unique and permanent. Typically it is represented by a very large random number and is not under the user’s control in any way. In fact, from the user’s point of view this identifier should be invisible, or at the very least inert; there should be no inherent public capabilities associated with this identifier. For example, it should not be an e-mail address, accepted as a login name, displayed as a public name, or used as an instant messenger address.

Login Identifier(s) (Session Authentication)

Login identifiers are necessary to create valid sessions associated with an account identifier. They are the user’s method of granting access to his privileged information on the service. Historically, these are represented by unique and validated name/password pairs. Note that the service need not generate its own unique namespace for login identifiers but may adopt identifiers from other providers. For example, many services accept external e-mail addresses as login identifiers, usually after verifying that the user is in control of that address. Increasingly, more sophisticated capability-based identities are accepted from services such as OpenID, OAuth, and Facebook Connect; these provide login credentials without constantly asking a user for their name and password.

By separating the login identifier from the account identifier, it is much easier to allow the user to customize their login as the situation changes. Since the account identifier need never change, data migration issues are mitigated. Likewise, separating the login identifier from public identifiers protects the user from those who would crack their accounts. Lastly, a service could provide the opportunity to attach multiple different login identifiers to a single account — thus allowing the service to aggregate information gathered from multiple identity suppliers.

Public identifier(s) (Social Identity)

Unlike the service-required account and login identifiers, the public identifier represents how the user wishes to be perceived by other users on the service. Think of it like clothing, or the familiar name people know you by. By definition, it does not carry the technical requirement to be 100% unique. There are many John Smiths in the world, thousands of them on Amazon.com; hundreds of them write reviews, and everything seems to work out fine.

Online, a user’s public identifier is usually a compound object: a photo, a nickname, and perhaps age, gender, and location. It provides sufficient information for any viewer to quickly interpret personal context. Public identifiers are usually linked to a detailed user profile, where further identity differentiation is available: ‘Is this the same John Smith from New York that also wrote the review of The Great Gatsby that I like so much?’ ‘Is this the Mary Jones I went to college with?’

A sufficiently diverse service, such as Yahoo!, may wish to offer multiple public identifiers when a specific context requires it. For example, when playing wild-west poker a user may wish to present the public identity of a rough-and-tumble outlaw or a saloon girl, without having that imagery associated with their movie reviews.
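To make the three-part split concrete, here is a minimal sketch of what a user record might look like under this pattern. The field names are illustrative rather than a prescribed schema; the point is that the account identifier is opaque and permanent, login identifiers are plural and replaceable, and public identifiers are plural and need not be unique.

```python
# Minimal sketch of the tripartite pattern as a data structure.
# Field names are illustrative, not a prescribed schema.

import secrets
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LoginIdentifier:              # session authentication; replaceable, possibly external
    provider: str                   # e.g. "password", "openid", "facebook-connect"
    handle: str                     # e.g. a verified e-mail address

@dataclass
class PublicIdentifier:             # social identity; not required to be unique
    nickname: str
    photo_url: str = ""
    profile: Dict[str, str] = field(default_factory=dict)

@dataclass
class Account:
    # Account identifier: unique, permanent, opaque; never shown publicly or
    # accepted as a login name.
    account_id: str = field(default_factory=lambda: secrets.token_hex(16))
    logins: List[LoginIdentifier] = field(default_factory=list)        # one or many
    public_ids: List[PublicIdentifier] = field(default_factory=list)   # one per context

user = Account(
    logins=[LoginIdentifier("password", "someone@example.com")],
    public_ids=[PublicIdentifier("GatsbyFan42"), PublicIdentifier("Deadeye Kate")],
)
print(user.account_id)   # opaque DB key: not an e-mail address, not a screen name
```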

Update 11/12/2008: This model was presented yesterday at the Internet Identity Workshop as an answer to much of the confusion surrounding making the distributed identity experience easier for users. The key insight this model provides is that no publicly shared identifier is required (or even desirable) for session authentication; in fact, requiring the user to enter one on an RP website is an unnecessary security risk.

Three main critiques of the model were raised that should be addressed in a wider forum:

  1. There was some confusion about the scope of the model – are the Account IDs global?

    I hand-modified the diagram to add an encompassing circle to show that the context is local – a single context/site/RP. In a few days I’ll modify the image in this post to reflect the change.

  2. The term “Public Identity” is already in use by iCards to mean something incompatible with this model.

    I am more than open to an alternative term that captures this concept. Leave comments or contact me at randy dot farmer at pobox dot com.

  3. Publicly shareable capability-based identifiers are not included in this model. These include e-mail addresses, easy-to-read URLs, cell phone numbers, etc.

    There was much controversy on this point. To me, these capability-based identifiers are outside the scope of the model, and generating them and the policies for sharing them are within the scope of the context/site/RP. Perhaps an interested party might adopt the tripartite pattern as a sub-pattern of a bigger sea of identifiers. My goal was not to be all-encompassing, but to demonstrate that only three identifiers are required for sites that have user-generated content, and that no public capability-bound ID exchange is required. RPs should only see the Public ID and some unique key for the session that grants permission-bound access to the user’s Account.

November 6, 2006

A Contrarian View of Identity — Part 2: Why is this confusing?

This is Part 2 of a multi-part essay on identity. Part 1 can be found here. Part 1 ended with a promise that Part 2 would be up soon, but, as John Lennon once said, life is what happens to you while you’re busy making other plans. But at long last here we are; enjoy.

Part 1 talked, in broad strokes, about the kinds of things that identity gets used for and why, but ended with the assertion that identity is being made to carry a heavier load than it can really support given the character and scope of the Internet. Here I’m going to speculate about why discussion of this seems to generate so much confusion.

One area of confusion is illustrated by a long-standing split among philosophers over the fundamental nature of what an identifier is. They’ve been chewing on the whole question of identity and naming for a long time. In particular, Bertrand Russell and his circle proposed that a name should be regarded as a form of “compact description”, whereas a line of thought promoted by Saul Kripke asserts that names should instead be viewed as “rigid designators”.

The “compact description” point of view should be one that is familiar from the physical world. For example, if you are pulled over for speeding and the police check your driver’s license, they consider whether you resemble the person whose photograph and description are on it. The “rigid designator” perspective is more familiar in the world of computer science, where we use such designators all the time in the form of memory pointers, DNS names, email addresses, URLs, etc.

Without delving into the philosophy-of-language arcana surrounding this debate, you can at least note that these are profoundly different perspectives. While I personally lean towards the view that the “rigid designator” perspective is more fundamental, this is basically a pragmatic position arrived at from my work with object capability systems and the E programming language. In the present discussion you don’t need to have a position yourself on whether either of these positions is right or wrong in some deep, essential sense (or if that’s even a meaningful question). All you need to recognize is that people who come at the identity issue from these different directions may have very different notions about what to do.

Another wellspring of confusion is that different people mean different things when they speak of “identity”. Moreover, many of them seem unaware of or indifferent to the fact that they are talking about different things. While I generally think that the Parable of The Blind Men and The Elephant is way overused, in this case I think it’s a wildly appropriate metaphor. Identity is a complicated concept with a number of different facets. Depending on which facets you focus your attention on, you end up believing different things.

Let’s look at the relationship between two entities, call them Alice and Bob, interacting over the Net. I diagram it like this:

We call them Alice and Bob simply because anthropomorphizing these situations makes them easier to think and talk about. We don’t actually care whether Alice and Bob are people or computers or processes running on computers or websites or whatever. Nor do Alice and Bob both have to be the same kind of thing. All that we care about are that each is some kind of discrete entity with some sense of itself as distinct from other entities out there.

When Alice interacts with Bob, there are (at least) four different distinct acts that are involved, each involving something that somebody somewhere calls “identity”. (1) Bob presents some information about himself, in essence saying “this is me”. (2) Alice, using this information, recognizes Bob, that is, associates the entity she is interacting with with some other information she already knows (remember that we said in Part 1 that relationships are all about repeated interactions over time). (3) Alice, to take action, needs to make reference to Bob. She designates Bob with some information that says, in essence, “that is you”. (4) Bob, based on this information, plus other information he already knows, accepts that Alice is referring to him and provides access to information or services.

At various times, various people have referred to the bundle of information used in one or another of these acts as Bob’s “identity”. However, there are four, potentially different, bundles of bits involved. These bundles can be considered singly or in combination. Depending on which of these bundles your view of things takes into account or not, there are fifteen different combinations that one could plausibly label “identity”. Furthermore, you get different models depending on whether or not you think two or more of these bundles are actually the same bits — the number of possibilities explodes to something like 50 (assuming I’ve done my arithmetic correctly), before you even begin talking about what the rules are. Absent awareness of this multiplicity, it is not surprising that confusion should result.
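As a quick sanity check on that first count, here is a small sketch; the bundle names below are just labels for the four pieces of information listed above.

```python
# Four bundles of identity-related bits, considered singly or in any
# combination, give 2**4 - 1 = 15 non-empty subsets. (The "something like 50"
# figure depends on exactly how identifications between bundles are counted,
# so it isn't reconstructed here.)

from itertools import combinations

bundles = ["presentation", "recognition", "designation", "acceptance"]
subsets = [c for r in range(1, len(bundles) + 1) for c in combinations(bundles, r)]
print(len(subsets))  # 15
```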

Observe too that this picture is asymmetrical. Most real interactions between parties will also involve the mirror counterpart of this picture, where Alice does the presenting and accepting and Bob does the recognizing and designating. Note that though the two directions are logical duals, the mechanisms involved in each direction might be radically different. For example, when I interact with my bank’s website, I present a username and password, while the bank presents text, forms, and graphics on a web page.

Those of you of a more technical bent are cautioned to keep in mind that I’m describing an abstract model, not a protocol. In informal discussions of this diagram with techie friends, I’ve noticed a tendency for people to latch onto the details of the handshaking between the parties, when that’s not really the point here.

This model now gives us a way to get a handle on some of the muddles and talking at cross purposes that people have gotten into.

Consider the many people making claims of the flavor, “you own your own identity” (or should own, or should control, or some similar variant of this meme). If you are focused on presentation, this makes a degree of sense, as you are thinking about the question, “what information is Bob revealing to Alice?” If you are concerned with Bob’s privacy (as Bob probably is, let alone what privacy advocates are worried about), this question seems pretty important. In particular, if you adopt the “compact description” stance on names, it seems like this identity thing could be probing pretty deeply into Bob’s private business. On the other hand, if you are focused on recognition, the “you own your own identity” idea can seem both muddled and outrageous. Recognition involves combining the information that was presented with information that you already know; indeed, in the absence of that pre-existing knowledge, the information presented may well be just so much useless noise. From this perspective, a claim that Bob owns his own identity looks a lot like a claim that Bob owns part of the contents of Alice’s mind. It should not come as a big surprise if Alice takes issue with this. Note that this is distinct from a political position which positively asserts that there should be (or believes that there could be) some legal or moral restrictions on what Alice is allowed to know or remember about Bob; there’s an interesting debate there but also a distraction from what I’m talking about.

Note that although the model is explained above in terms of just Alice and Bob, the most interesting questions only emerge in a world where there are more than two actors — if your world only contains one entity besides you, the whole question of the other’s identity is rather moot.

Let’s introduce Carol, a third party, into the picture. Setting aside for a moment discussion of what the identity relationship between Alice and Carol is, consider just the issue of how Bob’s identity is handled by the various parties. Recall that there are four bundles of information in the identity relationship of Bob to Alice:

  • the information that Bob presents to Alice
  • the information that Alice holds, enabling her to recognize Bob from his presentation
  • the information that Alice uses to designate Bob
  • the information that Bob holds, enabling him to accept Alice’s designation

What is the scope of this information? Is the information that Bob presents to Alice meaningful only in the context of their particular two-way relationship, or is it meaningful in some broader context that might include other parties? In particular, is the information Bob presents to Alice unique to Alice, or might Bob present himself differently to Carol? If Carol already has a relationship to Bob, do Alice and Carol have the means to know (or to discover) that they are talking about the same Bob? More generally, which, if any, of the above listed four pieces of information does Carol see? Where does she get this information from? From Bob or from Alice or from some third (er, fourth) party?

Similarly, is the information Bob presents to Alice unique to Bob, or might some other entity besides Bob present the same information to her? In the latter case, is she really recognizing Bob or just some abstract Bob-like entity?

Each of these questions, and countless others which I didn’t explicitly raise or perhaps am not even overtly aware of, defines a dimension of the design space for an identity framework. The explosion of possibilities is very large and quite probably beyond the scope of exhaustive, systematic analysis. Instead, it seems more useful to pay attention to the purposes to which an identity system is being put. Any particular design can’t help but embed biases about the ways in which the designers intend it to be used (in and of itself, this is only a problem if the design has pretensions to universality).

I’m not prepared to go into all of these questions here. That’s probably the work of a lifetime in any event. However, there is one very important consideration that I’d like to highlight, which hinges on the distinction between the information that Bob presents to Alice and the information with which Alice designates Bob.

The presentation information seems naturally to fall into the “compact description” camp, whereas the designation information seems to just as naturally fall into the “rigid designator” camp. Indeed, the very language that I’ve adopted to label these pieces contains a bias towards these interpretations, and this is not an accident.

From Bob’s perspective, the information that designates him is far more dangerous than the information he presents. This is because a designator for Bob is the means by which an outside party acts upon Bob. Such action can range from pointing him out to other people in a crowd to sending him email to charging his credit card, depending on the context. Any of these actions might be beneficial or harmful to him, again depending on context, but Bob is fundamentally limited in his ability to control them. Presentation, by contrast, is more clearly under Bob’s control, and the risk posed by presentation information is closely related to the degree to which that information can be reverse engineered into designation information.

Much of the risk entailed by these interactions stems from the fact that in the real world it is rarely Bob himself who does the presenting and accepting; rather it tends to be various intermediaries to whom Bob has delegated these tasks in different contexts. These intermediaries might be technological (such as Bob’s web browser) or institutional (such as Bob’s bank) or an amalgam (such as Bob’s bank’s ATM). Such intermediaries tend to be severely limited in the degree to which they are able to exercise the same discretion Bob would in accepting a designator on Bob’s behalf, partially because they tend to be impersonal, “one size fits all” systems, but mainly because they cannot know everything Bob knows. Analysis is complicated by the fact that they may be able to compensate for some of these limitations by knowing things that Bob can’t. The ubiquitous presence of these intermediaries is a major difference between our modern, online world and the evolutionary environment in which our instincts for these things emerged.

Note that designation is generally associated with specific action. That is, there is usually some particular intent that Alice has in mind when designating Bob, and some specific behavior that will be elicited when Bob accepts the designator. This favors the “rigid designator” perspective: highly specific, with little tolerance nor use for ambiguity. In particular, different designators might be applied to different uses. In contrast, presentation may be open-ended. When Bob presents to Alice, he may have no idea of the use to which she will put this information. The information may, in some contexts, be quite general and possibly entirely ambiguous. This favors the “compact description” perspective.

All of the above leads to the following design prescription: these two bundles of information ought not to be conflated. In particular, Bob most likely will want to exercise much greater control over designation information than over presentation information. In any event, the contexts in which these will be used will be different, hence the two should be separate. Furthermore, designation should not be derivable from presentation (derivation in the other direction may or may not be problematic, depending on the use case).

In Part 3 (about whose timing I now know better than to make any prediction), I’ll take a look at some of the more popular identity schemes now being floated, and use this model to hold them up to some critical scrutiny.

March 26, 2006

A Contrarian View of Identity — Part 1: What are we talking about?

I was approached a few weeks ago to begin thinking about identity, the first time I’ve had the chance to get seriously into this subject for several years. In the time since my last big confrontation with these issues (at Communities.com circa 1999-2000, when we were worrying about registration models for The Palace) there has been a lot of ferment in this area, especially with problems such as phishing and identity theft being much in the news.

As I survey the current state of the field, it’s clear there are now enormous hordes of people working on identity related problems. Indeed, it seems to have become an entire industry unto itself. Although there are the usual tidal waves of brain damaged dross and fraudulent nonsense that inevitably turn up when a field becomes hot, there also seems to be some quite good work that’s been done by some pretty smart people. Nevertheless, I’m finding much of even the good work very unsatisfying, and now feel challenged to try to articulate why.

I think this can be summed up in a conversation I recently had with a coworker who asked for my thoughts on this, and my immediate, instinctive reply was, “Identity is a bad idea; don’t do it.” But unless you’re already part of the tiny community of folks who I’ve been kicking these ideas around with for a couple of decades, that’s probably far too glib and elliptical a quip to be helpful.

The problem with identity is not that it’s actually a bad idea per se, but that it’s been made to carry far more freight than it can handle. The problem is that the question, “Who are you?”, has come to be a proxy for a number of more difficult questions that are not really related to identity at all. When interacting with somebody else, the questions you are typically trying to answer are really these:

  • Should I give this person the information they are asking for?
  • Should I take the action this person is asking of me? If not, what should I do instead?

and the counterparts to these:

  • Can I rely on the information this person is giving me?
  • Will this person behave the way I want or expect in response to my request? If not, what will they do instead?

Confronted with these questions, one should wonder why the answer to “Who are you?” should be of any help whatsoever. The answer to that is fairly complicated, which is why we get into so much trouble when we start talking about identity.

All of these questions are really about behavior prediction: what will this person do? To interact successfully with someone, you need to be able to make such predictions fairly reliably. We have a number of strategies for dealing with this knowledge problem. Principal among these are modeling, incentives, and scope reduction. Part of the complexity of the problem arises because these strategies intertwine.

Modeling is the most basic predictive strategy. You use information about the entity in question to try to construct a simulacrum from which predictive conclusions may be drawn. This can be as simple as asking, “what would I do if I were them?”, or as complicated as the kinds of elaborate statistical analyses that banks and credit bureaus perform. Modeling is based on the theory that people’s behavior tends to be purposeful rather than random, and that similar people tend to behave in similar ways in similar circumstances.

Incentives are about channeling a person’s behavior along lines that enhance predictability and improve the odds of a desirable outcome. Incentives rely on the theory that people adapt their behavior in response to what they perceive the consequences of that behavior are likely to be. By altering the consequences, the behavior can also be altered. This too presumes behavior generally to be purposeful rather than random, but seeks to gain predictability by shaping the behavior rather than by simulating it.

Scope reduction involves structuring the situation to constrain the possible variations in behavior that are of concern. The basic idea here is that the more things you can arrange to not care about, the less complicated your analysis needs to be and thus the easier prediction becomes. For example, a merchant who requires cash payment in advance avoids having to consider whether or not someone is a reasonable credit risk. The merchant still has to worry about other aspects of the person’s behavior (Are they shoplifters? Is their cash counterfeit? Will they return the merchandise later and demand a refund?), but the overall burden of prediction is reduced.

In pursuing these strategies, the human species has evolved a variety of tools. Key among these are reputation, accountability, and relationships.

Reputation enters into consideration because, mutual fund legal disclaimers notwithstanding, past behavior is frequently a fairly good predictor of future behavior. There is a large literature in economics and sociology demonstrating that iterated interactions are fundamentally different from one-time interactions, and that the expectation of future interaction (or lack thereof) profoundly affects people’s behavior. Identity is the key that allows you to connect the party you are interacting with now to their behavioral history. In particular, you may be able to connect them to the history of their behavior towards parties other than you. Reputation is thus both a modeling tool (grist for the analytical mill, after all) and an incentive mechanism.

Accountability enters into consideration because the prospect that you may be able to initiate future, possibly out-of-band, possibly involuntary interactions with someone also influences their behavior. If you can sue somebody, or call the cops, or recommend them for a bonus payment, you change the incentive landscape they operate in. Identity is the key that enables you to target someone for action after the fact. In addition to the incentive effects, means of accountability introduce the possibility of actually altering the outcome later. This is a form of scope reduction, in that certain types of undesired behavior no longer need to be considered because they can be mitigated or even undone. Note also that the incentives run in both directions: given possible recourse, someone may choose to interact with you when they might not otherwise have done so, even when you know your own intentions to be entirely honorable.

Note that reputation and accountability are connected. The difference between them relates to time: reputation is retrospective, whereas accountability is prospective. One mechanism of accountability is action or communication that affects someone’s reputation.

Relationships enter into consideration because they add structure to the interactions between the related parties. A relationship is basically a series of interactions over time, conducted according to some established set of mutual expectations. This is an aid to modeling (since the relationship provides a behavioral schema to work from), an incentive technique (since the relationship itself has value), and a scope limiting mechanism (since the relationship defines and thus constrains the domain of interaction). What makes a relationship possible is the ability to recognize someone and associate them with that relationship; identity is intimately bound up in this because it is the basis of such recognition.
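To see how identity functions as the key in both the retrospective and prospective cases, here is a minimal sketch — my own illustration, not drawn from any particular system; the ReputationLedger name and its methods are invented for the example:

    from collections import defaultdict

    class ReputationLedger:
        """Retrospective record of behavior, keyed by a stable identity."""
        def __init__(self):
            self._history = defaultdict(list)  # identity -> list of (counterparty, outcome)

        def record(self, identity, counterparty, outcome):
            self._history[identity].append((counterparty, outcome))

        def history(self, identity):
            # Includes behavior toward parties other than you -- the point made above.
            return list(self._history[identity])

    ledger = ReputationLedger()
    ledger.record("carol", "dave", "paid on time")
    ledger.record("carol", "erin", "never delivered")

    # Modeling: use the history to predict what "carol" will do next.
    # Accountability: the same key is what lets you go back to "carol" afterward --
    # without a stable identity, neither the lookup nor the recourse is possible.
    print(ledger.history("carol"))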

All of this is fairly intuitive because this is how our brains are wired. Humans spent 100,000 or more years evolving social interaction in small tribal groups, where reputation, accountability and relationships are all intimately tied to knowing who someone is.

But the extended order of society in which we live is not a small tribal group, and the Internet is not the veldt.

End of Part 1

In Part 2 (up soon) I will talk about how our intuitions involving identity break down in the face of the scale of global civilization and the technological affordances of the Internet. I’ll also talk about what I think we should do about it.

May 10, 2005

KidTrade-inspired designs…

My original KidTrade posting caused quite a bit of controversy, as the design clearly demonstrated that an eBay-resistant trading economy was possible, but the big question remained: could such an economy be any fun when applied to current MMORPG designs? This call to action was heard by several would-be virtual economy designers.

Several people produced counter proposals at the time, including Jenni Merrifield, who posted some design suggestions on strawberryJAMM’s Thoughtful Spot and [link missing – Ted, where’s yours?]

The initial designs presented some interesting thoughts, but weren’t as deep as the developer community was looking for.

Last month, that changed when Barry Kearns published the first full-fledged eBay-resistant trade/market design proposal that would work with a ‘standard’ MMOG: Draft of “No-Cash”: a commodification-resistant MMO economy, and the follow-up Detailed explanation of “commodities market” under No-Cash system.

The design is pretty elegant and interesting. Anonymous markets for objects, and person-to-person interaction for experience/skill points. Pretty clever.
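For what it’s worth, here is a rough sketch of how I read that summary — my own toy illustration, not Kearns’ actual design; the AnonymousCommodityMarket class and its matching rule are invented for the example. The eBay-resistance comes from the market being anonymous: a seller can’t direct an item to a chosen buyer, so an out-of-game cash deal has no reliable way to deliver the goods. The experience/skill-point half of the idea — person-to-person interaction only — lives outside this sketch.

    import random
    from collections import defaultdict

    class AnonymousCommodityMarket:
        """Toy anonymous market: orders are matched blindly, by item type only."""
        def __init__(self):
            self._sellers = defaultdict(list)  # item type -> seller ids with an open order
            self._buyers = defaultdict(list)   # item type -> buyer ids with an open order

        def post_sell(self, seller_id, item_type):
            self._sellers[item_type].append(seller_id)
            self._match(item_type)

        def post_buy(self, buyer_id, item_type):
            self._buyers[item_type].append(buyer_id)
            self._match(item_type)

        def _match(self, item_type):
            # Pair a random seller with a random buyer; neither can pick (or even
            # learn) their counterparty, which is what breaks targeted cash sales.
            while self._sellers[item_type] and self._buyers[item_type]:
                seller = self._sellers[item_type].pop(random.randrange(len(self._sellers[item_type])))
                buyer = self._buyers[item_type].pop(random.randrange(len(self._buyers[item_type])))
                self._transfer(item_type, seller, buyer)

        def _transfer(self, item_type, seller, buyer):
            # Server-side only; clients would never see these ids.
            print(f"one {item_type} moved from {seller!r} to {buyer!r}")

    market = AnonymousCommodityMarket()
    market.post_sell("seller-123", "iron ore")
    market.post_buy("buyer-456", "iron ore")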

I’m looking forward to more variations and trying out someone’s initial implementation! :-)

April 23, 2005

Prescience?

An addendum to Randy’s observation below. This triggered a memory of something our buddy Crock once wrote. He said:

There are three positions you can take on inevitability.

  1. Passive ignorance.
  2. Futile resistance.
  3. Exploitation.

Sony is moving from Position 1 to Position 2. eBay is in Position 3.

He was talking about Sony’s announcement that they were going to ban the sale of characters from their online games. This was in April, 2000.

But, as Randy said, just because they’ve decided to embrace reality doesn’t mean they’ll necessarily embrace it successfully.