Posts filed under "Technology"
August 23, 2013
Patents and Software and Trials, Oh My! An Inventor’s View
What does almost 20 years of software patents yield? You’d be surprised!
I gave an Ignite talk (5 minutes: 20 slides advancing every 15 seconds) entitled
“Patents and Software and Trials, Oh My! An Inventor’s View”
Here are some improved links…
- I’ve created ip-reform.org to support the “I Won’t Sign Bogus Patents” pledge.
- Encourage your company to adopt Twitter’s Inventor’s Patent Agreement.
- Support the EFF on patent reform – DefendInnovation.org has a proposal.
- Sequestration has delayed a Bay Area PTO office; support this bill.
I gave the talk twice, and the second version is also available (it shows me giving the talk alongside static versions of my slides) – watch that here:
April 14, 2011
We’re at it again and we’re hiring…
Chip has created the Nth generation of his massive-scale real-time server architecture (the spiritual descendant of Habitat) and we think the time is right for mobile/social games to go multiplayer! So we’ve gotten the band back together, and you can join us!
FUDCorp Job Openings
Real-Time Game Server Programmer, SF Bay Area
About us: a still-stealth start-up with a groundbreaking mobile/gaming platform that will reshape social games/apps. Get in on the ground floor with world-class founders and established technology. If you know us, you know what we’ve built since the earliest days of online play.
Your role:
- Writing server-side Java code for an original massively multiplayer mobile online game
- Writing/maintaining testing frameworks (mostly in JavaScript for Node.js) for rapid development and massive scale performance evaluation
- This is a contract position, with potential to join our full-time team
Job Requirements:
- Immediate availability. Our recent successes (partners and funding) mean we need more help immediately!
- San Francisco Bay Area, with live meetings at least weekly, increasing over time.
- Minimum 3 years as a professional Java programmer working on client-server applications in a small, decentralized team.
- Strong Linux/Unix skills: shell scripting, command line tools, server administration, etc.
- Big plus: server-side JavaScript/ECMAScript skills, especially with Node.js
- Big plus: experience with Amazon EC2, and optimizing server features for automatic deployment
- Big plus: previous work implementing social games, including taxonomies, economies, abuse mitigation, and social issues
- Big plus: experience with iPhone or Android app development
Please send resume and contact info to jobs@fudcorp.com.
September 7, 2009
Elko III: Scale Differently
Preface: This is the third of three posts on Elko, a server platform for sessionful, stateful web applications that I’m releasing this week as open source software. Earlier, Part I presented the business backstory for Elko. Part II, yesterday’s post, presented the technical backstory, laying out the key ideas that led to it. Today’s post presents a more detailed technical explication of the system itself, with particular emphasis on the scaling model that enables it all to work effectively.
In Part II I ranted at length about some of the unfortunate consequences of the doctrine of statelessness, the predominant paradigm for scaling web applications. Keeping the short-term state of a client-server session in the server’s memory is easy and therefore tempting, but, the story goes, you shouldn’t do that because it means you can’t scale your application — you just can’t handle the traffic from thousands or millions of users on the single machine whose memory that state would live in.
But this isn’t so much a server capacity problem as it is a traffic routing problem. In a traditional web server farm, load is distributed across multiple servers by arranging for successive HTTP requests to a particular named host to be delivered to different servers. Typically this is accomplished through provision of multiple IP addresses in the DNS resolution of the host name or through special load balancing routers in the server datacenter that virtualize the nominal host IP address, directing successive TCP sessions to different machines on the datacenter’s internal network.
This technique has a number of virtues, not least of which is that it is relatively simple. It takes advantage of the expectation that the loads that successive HTTP requests are going to place on the servers are likely to be uncorrelated, and thus delivering requests to servers on a simple round-robin schedule, or even randomly, will, through the statistical magic of large numbers, result in more or less even load distribution across the datacenter. This lack of correlation is usually a reasonable assumption, since the various browsers hitting a given site around the same time are, for most sites, uncoordinated (indeed, the deliberate coordination of such activity is the basis for a major class of denial of service attacks).
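To make that concrete, the distribution step amounts to something as simple as the following sketch (purely illustrative, not any particular load balancer’s code): each incoming request just gets the next server in the rotation, with no memory of which client it came from or which server it saw last time.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative only: picks the next backend for each incoming request,
    // paying no attention to which client sent it.
    class RoundRobinPicker {
        private final List<String> backends;          // e.g., internal addresses of the web servers
        private final AtomicLong counter = new AtomicLong();

        RoundRobinPicker(List<String> backends) {
            this.backends = backends;
        }

        String pick() {
            // Successive requests cycle through the servers; over many
            // uncorrelated requests the load evens out statistically.
            long n = counter.getAndIncrement();
            return backends.get((int) (n % backends.size()));
        }
    }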
However, just as this scheme implies that a given browser has no control over (nor ability to predict) which server machine it’s actually going to be talking to when it sends an HTTP request, it similarly means that a given server has no say over which clients it will be servicing. Any service implementation that relies on local data coherence from one request to the next (other than of a statistical nature, as is exploited by caching) is thus doomed. Keeping session state in the server’s memory is right out.
Elko approaches the scaling problem in a different way. First of all, we embrace the concept of a session: a series of interactions between the client and the server that has a beginning, a middle, and an end. This is by no means an exotic abstraction; indeed, the TCP protocol that HTTP is layered on top of is sessionful in exactly this way. However, HTTP then takes the session abstraction away from us, leaving it to the web application framework (of which, in this sense, Elko is just one of many) to pile on a bunch of additional mechanism to put it back in again.
Whereas, from the client’s perspective, a TCP session represents a communications connection to a particular host on the network, an Elko session represents a communications connection to a particular context. Like a web page, a context has a distinct, addressable identity. Unlike a web page, a context has its own computational existence independent of who is communicating with it at any given moment. In particular, multiple clients can interact with a given context at the same time, and the context itself can act independently of any of its individual clients, including when there are no clients at all. For example, in a multi-user chat application, the contexts would most likely be chat rooms. In a real-time auction application, contexts might represent the various auctions that are going on.
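To give a flavor of the abstraction, a chat-room context might look something like the following sketch (illustrative only, with made-up names; this is not the actual Elko API):

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Illustrative sketch of the context abstraction, not the real Elko classes.
    // A context has its own identity and its own state, independent of any
    // particular client that happens to be connected to it at the moment.
    class ChatRoomContext {
        private final String contextId;   // addressable identity, much as a URL names a page
        private final Set<ClientSession> occupants = new CopyOnWriteArraySet<>();

        ChatRoomContext(String contextId) {
            this.contextId = contextId;
        }

        void enter(ClientSession client) {
            occupants.add(client);
        }

        void exit(ClientSession client) {
            occupants.remove(client);
        }

        // The context acts on its own behalf: an utterance from one occupant
        // is fanned out to everyone currently in the room.
        void say(ClientSession speaker, String utterance) {
            for (ClientSession c : occupants) {
                c.send(contextId + ": " + speaker.userId() + ": " + utterance);
            }
        }

        boolean isEmpty() {
            return occupants.isEmpty();
        }
    }

    interface ClientSession {
        String userId();
        void send(String message);
    }

The point is that the occupant list and any other room state live with the context object itself, not with any one client’s request.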
The Elko platform provides several different types of servers, all based on a common set of building blocks. However, for purposes of the present discussion, there are two that matter: the Context Server and the Director.
A Context Server provides an environment in which contexts run. Context Servers are generic and fungible in the same kinds of ways that web servers are: need more capacity? Just add more servers. The difference in the scaling story is that rather than handling load by farming out HTTP requests amongst multiple web servers, the Elko approach is to farm out contexts amongst multiple Context Servers.
In Elko, a context can be said to be active or inactive. An inactive context is saved in persistent storage, such as a file or a database. An active context exists in the process and memory space of some Context Server. The job of the Director is to keep track of which contexts are active and, when active, which Context Server each one is running on. When a client wishes to enter a particular context (that is, initiate a communications connection to it), the client sends a request to a Director asking where to go (these requests are routed to Directors using the kinds of standard web scaling techniques described above). If the context is active, the Director replies to the client with the address of the Context Server upon which the context is running (and notifies the Context Server to expect the client’s arrival), rather like this:
If the context is not active, the Director picks a Context Server to run the context, replies to the client with the address of this Context Server, and sends the chosen Context Server a message commanding it to activate the context, like this:
(Note that there is a race between the client arriving at the Context Server and the Context Server loading the context, but the implementation ensures that this is taken care of.)
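Putting the two cases together, the Director’s routing decision boils down to something like this sketch (hypothetical code and names, not the actual Elko implementation, and glossing over the race handling just mentioned):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the Director's routing decision, not Elko's real code.
    class Director {
        // contextId -> address of the Context Server currently running that context
        private final Map<String, String> activeContexts = new ConcurrentHashMap<>();
        private final ContextServerPool pool;   // knows all Context Servers and their loads

        Director(ContextServerPool pool) {
            this.pool = pool;
        }

        // A client asks: "where do I go to enter context X?"
        String whereIs(String contextId) {
            // Case 1: the context is already active somewhere; send the client there
            // (the real system also warns that server to expect the client's arrival).
            String server = activeContexts.get(contextId);
            if (server != null) {
                return server;
            }
            // Case 2: the context is inactive; pick the least loaded Context Server,
            // tell it to activate the context, and point the client at it.
            String chosen = pool.leastLoadedServer();
            activeContexts.put(contextId, chosen);
            pool.activateContextOn(chosen, contextId);
            return chosen;
        }
    }

    interface ContextServerPool {
        String leastLoadedServer();
        void activateContextOn(String serverAddress, String contextId);
    }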
Unlike the members of a cluster of traditional web servers, each Context Server has a fixed address. Thus, once the client’s connection to a particular Context Server is made, the client communicates with that same Context Server for all of its interaction needs in that context for as long as the session lasts. This means the Context Server can keep the context state in memory, only going to persistent storage as needed for checkpointing long-term application state. Once the last client exits a context, that context can be unloaded and the server capacity made available for other contexts.
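On the Context Server side, building on the chat-room sketch above, the unload-when-empty behavior might be handled along these lines (again illustrative names, not the real implementation):

    // Illustrative lifecycle handling on a Context Server; names are hypothetical.
    class ContextRunner {
        private final ChatRoomContext context;   // lives entirely in this server's memory
        private final PersistentStore store;     // database or file used only for checkpoints

        ContextRunner(ChatRoomContext context, PersistentStore store) {
            this.context = context;
            this.store = store;
        }

        void onClientExit(ClientSession client) {
            context.exit(client);
            if (context.isEmpty()) {
                // Last client is gone: checkpoint long-term state and free the
                // memory so this server can host other contexts.
                store.checkpoint(context);
                unload();
            }
        }

        private void unload() {
            // Drop the in-memory state and tell the Director the context is inactive.
        }
    }

    interface PersistentStore {
        void checkpoint(ChatRoomContext context);
    }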
The Context Servers keep the Directors apprised of the contexts they are handling, the clients that are in those contexts, and the server load they are currently experiencing. From this information, the Directors can route client traffic by context or by user (e.g., in a chat application, I may want to enter the chat room where my friends are, rather than a specific room whose identity I know a priori), and can identify the least heavily loaded servers for new context activation.
Directors can be replicated for scale and redundancy, but since they actually do very little work, one Director can handle the load for a large number of clients before capacity becomes an issue. Director scalability is also enhanced because servicing clients only makes reference to in-memory data structures, so everything the Director does is very fast and has quick turnaround.
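Continuing with the hypothetical names from the Director sketch above, the in-memory bookkeeping behind those routing choices might look roughly like this (an illustration, not Elko’s actual code):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical in-memory bookkeeping behind the Director's routing choices.
    class InMemoryContextServerPool implements ContextServerPool {
        // serverAddress -> load figure most recently reported by that Context Server
        private final Map<String, Double> reportedLoad = new ConcurrentHashMap<>();
        // userId -> contextId, so clients can be routed to "where my friends are"
        private final Map<String, String> userToContext = new ConcurrentHashMap<>();

        // Context Servers push these updates as their state changes.
        void reportLoad(String serverAddress, double load) {
            reportedLoad.put(serverAddress, load);
        }

        void reportUserEntered(String userId, String contextId) {
            userToContext.put(userId, contextId);
        }

        // Routing by user rather than by context identity.
        String contextOf(String userId) {
            return userToContext.get(userId);
        }

        @Override
        public String leastLoadedServer() {
            return reportedLoad.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElseThrow(() -> new IllegalStateException("no Context Servers registered"));
        }

        @Override
        public void activateContextOn(String serverAddress, String contextId) {
            // In a real system this would send an activation message to that server.
        }
    }

Because everything here is a lookup in an in-memory map, each client request costs the Director almost nothing, which is why one Director goes such a long way.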
This scheme scales very well. Because it has a very light footprint and services nearly everything from memory, even a single Context Server can manage a substantial load. We benchmarked the SAF Context Server, which had the identical architecture, in 2002 at Sun’s performance testing center in Menlo Park. On a Sun Enterprise 450 server (2-processor 400MHz SPARC, a mid- to low-range machine even then), we ran a simulated chat environment with 8000 concurrent connections spread over ~200 chat rooms, an average fanout per room of ~40 users, and each client producing an utterance approximately every 30 seconds (in a 40-user chat room, that level of activity is positively frantic). This resulted in about 20% CPU load with no user-detectable lag. Ironically, the biggest challenge in performing this test was generating enough load. We ended up having to use several of the biggest machines they had in the lab to run the client side of the test. Note also that this test was conducted three or four generations of server hardware ago. I expect that on modern machines these numbers would be even more substantial.
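For a rough sense of what those benchmark figures mean in message terms, here is the back-of-the-envelope arithmetic they imply:

    // Back-of-the-envelope arithmetic for the 2002 benchmark figures above.
    public class ChatLoadEstimate {
        public static void main(String[] args) {
            int clients = 8000;               // concurrent connections
            int fanout = 40;                  // average users per room
            double secondsPerUtterance = 30;  // each client speaks roughly every 30 seconds

            double inboundPerSecond = clients / secondsPerUtterance;   // ~267 utterances/sec arriving
            double outboundPerSecond = inboundPerSecond * fanout;      // ~10,700 messages/sec fanned out

            System.out.printf("inbound: ~%.0f msg/s, outbound: ~%.0f msg/s%n",
                    inboundPerSecond, outboundPerSecond);
        }
    }

In other words, that single mid-range machine was absorbing on the order of 267 utterances per second and fanning them out as roughly 10,700 delivered messages per second at only about 20% CPU.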
One potential criticism of this scaling strategy is that it is more complicated than the way web servers usually do things. On the surface, I have to concede that that is true. However, by the time you take into consideration the extra work you need to do in an actual large-scale web setup, configuring routers and load balancers and memcache servers and database clusters and endless other complications, plus all the extra application engineering work to make use of these, I think Elko ends up being a simpler configuration. I know from experience that it’s a vastly simpler environment for the application coder.
So that’s the theoretical side of the scaling story. I invite anyone who has an interest in delving deeper to check things out for themselves. The code is here.
April 26, 2004
Announcing! Yahoo! Avatars
I’m working as Community Strategic Analyst for Yahoo!, where I’m helping to bring out next-generation social software at a very large scale. I am proud to announce that the first new product from our group (this year) is Yahoo! Avatars support in Messenger 6.0 Beta, which was released today [Windows only]. Besides Avatar support, it now integrates LAUNCHcast Radio, Games, and Addressbook, and adds sound effects called Audibles.
I think it is interesting that the original avatars walked, ‘talked’, and traded virtual objects in a virtual world with a (dys)functional virtual economy, but some of the latest incarnations include avatars that are more like paper dolls and don’t interact with each other at all. Interesting market-driven optimization.