November 24, 2017
A Slightly Skeptical Perspective on REST
A few years ago, the set of design principles traveling under the banner of REST became the New Hotness in the arena of network services architecture. I followed the development of these ideas pretty casually, until as part of my work in the PayPal Central Architecture organization, I ended up having to do a deeper dive.
We were then experiencing an increase in the frequency with which various parts of the organization were tilting up new services behind REST interfaces, or were considering how or whether to put RESTful facades on existing services, or were simply seeking guidance as to how they should proceed with respect to this whole REST fad.
For me, a key piece of absorbing new technological ideas is getting my own head straight as to how I feel about it all. By way of background, I’m one of those people who isn’t entirely sure what he thinks until he hears what he says, so a big part of this was working out my own ideas through the act of trying to articulate them. What follows is a collection of thoughts I have on REST that I formulated in the course of my work on this. I don’t at this point have a particular overarching thesis that unifies these, aside from a vague hope that these musings may have some value to other folks’ cogitations on the topic.
I’ll warn you in advance that some of this may make some of you a little cranky and driven to try to ‘splain things to me. But I’m already a little cranky about this, so that’ll just make us even.
Is it REST?
One of the first things that strikes me is the level of dogma that seems to surround the concept of REST. Ironically, this seems to be rooted in the lack of a clear and reasonably objective declaration of what it actually is, leaving open a wide spectrum of interpretations for people to accuse each other of not hewing to. Roy Fielding, the originator of these ideas, has for his part succeeded in being a rather enigmatic oracle, issuing declarations that are at once assured and obscure. I once quipped to somebody after a meeting on the topic that a key requirement for whatever we came up with should be that it enable people to continue arguing about whether any particular service interface was or was not RESTful, since the pervasiveness of such debates seems to be the signature element of the REST movement. Although I am generally a very opinionated person with respect to all matters technical, I have zero or less interest in such debates. I don’t think we should be concerned with whether something passes a definitional litmus test when people can’t entirely agree on what the definition even is. I’m much more concerned with whether the thing works well for the purposes to which it is put. Consequently, I don’t think the “is it really REST?” question need loom very large in one’s deliberations, though it inevitably always seems to.
Pro and Con
There are several ideas from REST that I think are extremely helpful. In no particular order, I’d say these include:
- You should avoid adding layers of intermediation that don’t actually add value. In particular, if HTTP already does a particular job adequately well, just go ahead and let that job be done by HTTP itself. Implicit in this is the recognition that HTTP already does a lot of the jobs that we build application frameworks to do.
- It is better to embed the designators for relevant resources in the actual information that is communicated rather than in a protocol specification document (see the sketch following this list).
- More generally, it can be worthwhile to trade bandwidth for a reduction in interpretive latitude; that is, explicitly communicate more stuff rather than letting the other end figure things out implicitly, because if one end needs to figure something out it could figure it out wrong. Bandwidth is getting cheaper all the time, whereas bugs are eternal.
- URIs are good enough for designating all kinds of things.
- Simple, robust, extensible, general purpose data representations are better than fussy and precise protocol-specific ones.
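To make the “designators in the data” idea concrete, here is a minimal sketch of the kind of response payload it implies; the endpoint, field names, and link relations here are all hypothetical:

```python
import json

# A response that carries its own designators: the URIs a client needs
# arrive in the data itself rather than in a protocol spec document.
order_response = {
    "id": "order-1234",
    "status": "shipped",
    "links": {
        "self":     "https://api.example.com/orders/order-1234",
        "customer": "https://api.example.com/customers/cust-42",
        "tracking": "https://api.example.com/shipments/ship-77",
    },
}

# The client follows whatever URI the server handed it, verbatim,
# instead of constructing one from rules baked into its own code.
tracking_uri = order_response["links"]["tracking"]
print(json.dumps(order_response, indent=2))
```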
Other ideas from REST I think are less helpful. For example:
- The belief that the fundamental method verbs of HTTP are mostly all you need for just about every purpose. I’ve observed that this is sometimes accompanied by the odd and rather ahistorical assertion that the verbs of HTTP constitute a carefully conceived set with tight, well-defined semantics, as opposed to an ad hoc collection of operations that seemed more or less sufficient at the time. This historical conceit is not actually material to the substance or value of the principles involved, but it does seem to generate a lot of unproductive discussion, with people talking at cross purposes about irrelevant background details.
- The boundless faith in the efficacy of caching as the all purpose solvent for scalability and performance issues.
- The sometimes awkward shoehorning of application-level abstractions onto the available affordances of HTTP. For example, mapping the defined set of standard HTTP error codes onto the set of possible problems in a specific application often seems to do violence to both sets (see the sketch below). Another example is the occasional layering of magical and unintuitive secondary application semantics onto operations like PUT or DELETE.
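To illustrate the error-code shoehorning just mentioned, here is a hedged sketch of the kind of mapping table that tends to result; the application error names are invented for the example:

```python
# Hypothetical mapping of one application's failure modes onto the fixed
# HTTP status vocabulary. Distinct problems collapse into the same code,
# and some codes get stretched well past their intended meaning.
APP_ERROR_TO_HTTP = {
    "insufficient_funds":      402,  # "Payment Required" -- rarely what it meant
    "account_frozen":          403,  # indistinguishable from plain permission denial
    "caller_not_authorized":   403,
    "duplicate_transaction":   409,
    "record_version_stale":    409,  # also indistinguishable from the above
    "downstream_bank_timeout": 502,  # or 504? reasonable people disagree
}
```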
I got yer resource right here…
One of the fundamental ideas that underlies REST is the conceptualization of every important application abstraction as a resource that is manipulated by passing around representations of it. Although the term “resource” is sufficiently vague and general that it can be repurposed to suit almost any need, REST applications typically relate to their resources in a descriptive mode. The worlds that REST applications deal with tend to consist mainly of data objects whose primary characteristic is their current state; for this, REST is a good match. I think that a key reason why REST has gotten as much traction as it has is because many of the applications we want to develop fit this mold.
In contrast, more traditional architectures — the ones that REST self-consciously sets itself apart from — typically relate to their resources in an imperative mode. This aligns well with an application universe that consists mainly of functional objects whose primary characteristic is their behavior. Since many kinds of applications can be just as well conceived of either way (i.e., representational or behavioral), the advocates of REST would argue that you are better off adopting the representational, descriptive stance as it is a better fit to the natural affordances of the web ecosystem. However, in cases where the important entities you are dealing with really do exhibit a complex behavioral repertoire at a fundamental level, a RESTful interpretation is much more strained.
Note also that there is a profound asymmetry between these two ways of conceptualizing things. If you think of a distributed application protocol abstractly in terms of nouns and verbs, REST deals in a small, fixed set of verbs while the space of nouns remains wide open. In contrast, traditional, imperative frameworks also support a completely open space of nouns but then go on to allow the space of verbs to be similarly unconstrained. It is very easy to frame everything that RESTful systems do in imperative terms, but it can be quite difficult to go in the other direction. This may account for why so many systems supposedly designed on RESTful principles fall short when examined with a critical eye by those who take their REST seriously. The mere adoption of design elements taken from REST (HTTP! XML! JSON!) does not prevent the easy drift back into the imperative camp.
Representational? Differentiable!
The idea that resources are manipulated by passing around representations of them, rather than by operating on them directly, tends to result in confusion between visibility and authoritativeness. That is, the model does not innately distinguish representations of reality (i.e., the way things actually are) from representations of intent (i.e., the way somebody would like them to be). In particular, since the different parties who are concerned with a resource may be considered authoritative with respect to different aspects of that resource, we can easily end up with a situation where the lines of authority are blurry and confused. In some cases this can be addressed by breaking the resource into pieces, each of which embodies a different area of concern, but this only works when the authority boundaries correspond to identifiable, discrete subsets of the resource’s representation. If authority boundaries are based on considerations that don’t correspond to specific portions of the resource’s state, then this won’t work; instead, you’d have to invent specialized resources that represent distinct control channels of some kind, which starts looking a lot more like the imperative model that REST eschews.
That’s all a bit abstract, so let me try to illustrate with an example. Imagine a product catalog, where we treat each offered product as a resource. This resource includes descriptive information, like promotional text and pointers to images, that is used for displaying the catalog entry on a web page, as well as a price that is used both for display and billing. We’d like the marketing department to be able to update the descriptive information but not the price, whereas we’d like the accounting department to be able to update the price but not the description (well, maybe not the accounting department, but somebody other than the person who writes ad copy; bear with me, it’s just an illustration). One way to deal with this would be to have the product resource’s POST handler try to infer which portions of the resource a request is attempting to update and then accept or reject the request based on logic that checks the results of this inspection against the credentials of the requester. However, a more RESTful approach might be to represent the descriptive information and the pricing information as distinct resources. This latter model aligns the authority boundaries with the resource boundaries, resulting in a cleaner and simpler system. This kind of pattern generalizes in all sorts of interesting and useful ways.
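A minimal sketch of what that split might look like on the wire, assuming hypothetical endpoints and credentials; the point is that the authority boundary now coincides with the resource boundary:

```python
import requests

BASE = "https://catalog.example.com/products/widget-9"

# Marketing updates the descriptive copy; its credentials say nothing
# about pricing, so a write to /price would simply be rejected.
requests.put(f"{BASE}/description",
             json={"headline": "The last widget you'll ever need",
                   "image": "https://cdn.example.com/widget-9.png"},
             auth=("marketing-bot", "marketing-secret"))

# Accounting updates the price and nothing else.
requests.put(f"{BASE}/price",
             json={"amount": "19.99", "currency": "USD"},
             auth=("accounting-bot", "accounting-secret"))
```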
On the other hand, consider a case where such a boundary cannot be so readily drawn. Let’s say the descriptive text can be edited both by the marketing department and the legal department. Marketing tries to make sure that the text does the best job describing the product to potential buyers in a way that maximizes sales, whereas legal tries to make sure that the text doesn’t say anything that would expose the company to liability (say for misrepresentation or copyright infringement or libel). The kinds of things the two departments are concerned with are very different, but we can’t have one resource representing the marketing aspects of the text and another for the legal aspects. This kind of separation isn’t feasible because the text is a singular body of work and anyway the distinction between these two aspects is way too subjective to be automated. However, we could say that the legal department has final approval authority, and so have one resource that represents the text in draft form and another that represents the approval authorization (and probably a third that represents the text in its published form). I think this is a reasonable approach, but the approval resource looks a lot more like a control signal than it looks like a data representation.
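Sketched in the same hypothetical style, the draft/approval arrangement might look like the following; notice how the approval resource reads less like a representation of data and more like a control signal wearing a representation costume:

```python
import requests

BASE = "https://catalog.example.com/products/widget-9/description"

# Marketing and legal both edit the same draft text.
requests.put(f"{BASE}/draft",
             json={"text": "The last widget you'll ever need."})

# Legal signs off by writing to an approval resource; the server reacts
# by promoting the current draft to the published representation.
requests.put(f"{BASE}/approval",
             json={"approved": True, "approver": "legal-dept"})

published = requests.get(BASE).json()
```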
Client representation vs. Server representation
One particular concern I have with REST’s representational conceit is that it doesn’t necessarily take into account the separation of concerns between the client and the server. The typical division of labor between client and server places the client in charge of presentation and user interface, and considers the client authoritative with respect to the user’s intentions, while placing the server in charge of data storage and business logic, considering the server authoritative with respect to the “true” state of reality. This relationship is profoundly asymmetrical, as you would expect any useful division of labor to be, but it means that the kinds of things the client and the server have to say to each other will be very different, even when they are talking about the same thing.
I don’t believe there is anything fundamental in the REST concept that says that the representations that the client transfers to the server and the ones that the server transfers to the client need to be the same. Nevertheless, once you adopt the CRUD-ish verb set that REST is based on, where every operation is framed in terms of the representation of a specific resource with a specific URI, you easily begin to think of these things as platonic data objects whose existence transcends the client/server boundary. This begins a descent into solipsism and madness.
This mindset can lead to confusing and inappropriate comingling among information that is of purely client-side concern, information that is of purely server-side concern, and information that is truly pertinent to both the client and the server. In particular, it is often the case that one side or the other is authoritative with respect to facts that are visible to both, but the representation that they exchange is defined in a way that blurs or conceals this authority. Note that this kind of confusion is not inherent in REST per se (nor is it unknown to competing approaches), but it is a weakness that REST designs are prone to unless some care is taken.
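As a concrete (and hypothetical) illustration of the asymmetry: the representation a client sends and the one the server returns for the “same” resource can legitimately differ, because each side is authoritative about different things:

```python
# What the client is authoritative about: the user's intent.
client_sends = {
    "shipping_address": "123 Elm St",
    "gift_wrap": True,
}

# What the server is authoritative about: the actual state of affairs.
server_returns = {
    "shipping_address": "123 Elm St",    # echoed back, possibly canonicalized
    "gift_wrap": True,
    "status": "processing",              # server-side fact; not client-settable
    "estimated_delivery": "2017-12-02",  # computed by business logic
    "warehouse": "EWR-4",                # purely server-side concern that
}                                        # arguably shouldn't be visible here
```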
REST vs. The code monkeys
One of the side benefits, touted by some REST enthusiasts, of using HTTP as the application protocol is that developers can leverage the web browser. Since the browser speaks HTTP, it can speak to a RESTful service directly, with the browser UI playing the role of client (note that this is distinct from JavaScript code running in the browser being the client). This makes it easy to try things out, for testing, debugging, and experimentation, by typing in URLs and looking at what comes back. Unfortunately, with browsers the only verb you typically have ready access to in this mode is GET, and even then you don’t have the opportunity to control the various headers that the browser will tack onto the request, nor any graceful way to see everything that the server sends back. HTTP is a complex protocol with lots of bells and whistles, and the browser UI gives you direct access to only a portion of it. You could write a JavaScript application that runs in the browser and provides some or all of these missing affordances (indeed, a quick Google search suggests that there are quite a lot of these kinds of tools out there already), but sometimes it’s easier, when you are a lazy developer, to simply violate the rules and do everything in a way that leverages the affordances you have ready access to.

Thus we get service APIs that put everything you have to say to the server into the URL and do everything using GET. This kind of behavior is aided and abetted by backend tools like PHP, which allow the server-side application developer to transparently interchange GET and POST and to treat form submission parameters (sent in the body) and query string parameters (sent in the URL) equivalently. All this is decried as unforgivable sloppiness by REST partisans, horrified by the way it violates the rules for safety and idempotency and interferes with the web’s caching semantics.

To the extent that you care about these things, the REST partisans are surely right, and prudence suggests that you probably should care about these things if you know what’s good for you. On the other hand, I’m not sure we can glibly say that the sloppy developers are completely wrong either. RFC 2616 (a writ that is about as close as the REST movement gets to the Old Testament) speaks of the safety and idempotency of HTTP GET using the timeless IETF admonishment of “SHOULD” rather than the more sacrosanct “MUST”. There is nothing in many embodiments of the conventional web pipeline that actually enforces these rules (perhaps some Java web containers do, but the grotty old PHP environment that runs much of the world certainly doesn’t); rather, these rules are principally observed as conventions of the medium. I also suspect that many web developers’ attitudes towards caching would shock many REST proponents.
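For the avoidance of doubt, here is a sketch of the sloppy pattern being decried, next to what the conventions actually call for; the endpoints are hypothetical:

```python
import requests

# The lazy pattern: a state-changing operation smuggled into a GET, with
# everything in the URL -- easy to poke at from a browser's address bar,
# hostile to HTTP's safety rules and to the web's caching semantics.
requests.get("https://api.example.com/transfer"
             "?from=acct-1&to=acct-2&amount=100&action=execute")

# What the conventions prescribe: a POST with the arguments in the body.
requests.post("https://api.example.com/transfers",
              json={"from": "acct-1", "to": "acct-2", "amount": 100})
```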
[[Irrelevant historical tangent: Rather than benefitting from caching, on several occasions, I or people I was working with have had to go to some lengths to defeat caching, in the face of misbehaving intermediaries that inappropriately ignored or removed the various headers and tags saying “don’t cache this”, by resorting to the trick of embedding large random numbers in URLs to ensure that each URL that passed through the network was unique. This probably meant that some machine somewhere was filling its memory with millions of HTTP responses that would never be asked for again (and no doubt taking up space that could have been used for stuff whose caching might actually have been useful). Take that, RESTies!]]
HATEOAS is a lie (but there will be cake)
A big idea behind REST is expressed in the phrase “hypermedia as the engine of application state”, often rendered as the dreadful abbreviation HATEOAS. In this we find a really good idea and a really bad idea all tangled up in each other. The key notion is to draw an analogy to the way the web works with human beings: The user is presented with a web page, on which are found various options as to what they can do next: click on a button, submit a form, abandon the whole interaction, etc. Behind each of those elements is a link, used for a POST or GET as appropriate, which results in the fetching of a new page from the linked server, presenting the user with a new set of options. Lather, rinse, repeat. Each page represents the current state of the dialog between the user and the web. Implicit in this is the idea that the allowed state transitions are explicitly represented on the page itself, without reference to some additional context that is held someplace else. In REST, the HATEOAS concept embraces the same model with respect to interaction between machines: the reply the client receives to each HTTP request should contain within it the links to the possible next states in a coherent conversation between the client and the server, rather than having these be baked into the client implementation directly or computed by the client on the basis of an externally specified set of rules.
We’ll gloss over for a moment the fact that the model just presented of human-machine interaction in the web is a bit of a fiction (and more so every day as Ajax increasingly becomes the dominant paradigm). It’s still close enough to be very useful. And in the context of machine-to-machine interaction, it’s still really useful, but with some important caveats. The big benefit is that we build fewer dependencies into the client: because we deliver specific URIs with each service response, we have more flexibility to change these around even after the client implementation is locked down. This means we are free to rearrange our server configuration, for example, by relocating services to different hosts or even different vendors. We are free to refactor things, taking services that were previously all together and provide them separately from different places, or conversely to merge services that were previously independent and provide them jointly. All this is quite valuable and is a major benefit that REST brings to the enterprise.
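A minimal sketch of a link-driven client, with hypothetical endpoints and “rel” names, shows both the flexibility and the catch developed below:

```python
import requests

resp = requests.get("https://api.example.com/orders/order-1234").json()

# Suppose the response enumerates the legal next steps as links:
#   {"status": "pending",
#    "links": [{"rel": "cancel",  "href": "https://.../order-1234/cancellation"},
#              {"rel": "payment", "href": "https://.../order-1234/payment"}]}
links = {link["rel"]: link["href"] for link in resp["links"]}

# The server is free to move these URIs around between releases. But the
# client still had to know, at development time, what "cancel" means and
# that it takes a POST with this particular body -- which is the catch.
if "cancel" in links:
    requests.post(links["cancel"], json={"reason": "customer request"})
```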
However, the model presented in the human case doesn’t entirely work in the machine case. In particular, the idea that we present you with all the options, you pick one, and we then present you with a new set of options is broken. Machines can’t quite do this.
Humans are general purpose context interpreters, whereas machines are typically much more limited in what they can understand. A particular piece of software typically has a specific purpose and typically only contains enough interpretive ability to make sense of the information that it is, in some sense, already expecting to receive. Web interfaces only have to present humans with a set of alternatives that make enough sense, in terms of the humans’ likely task requirements, that people can figure out what to do, whereas interfaces for machines must present this information in a form that is more or less exactly what the recipient was expecting. Another way of looking at this is that the human comes prepackaged with a large, general purpose ontology, whereas client software does not. Much of what appears on a web page is documentary information that enables a person to figure things out; computers are not yet able to parse such information at a semantic level in the general case. This is papered over by the REST enthusiasts’ tendency to use passive verbs: “the application transitions to the state identified by URI X”. Nothing is captured here that expresses how the application made the determination as to which URI to pick next. We know how humans do it, but machines are blind.
This is especially the case when an operation entails not merely an endpoint (i.e., a URI) but a set of parameters that must be encoded into a resource that will be sent as the body of a POST. On the web, the server vends a <FORM> to the user, which contains within it a specification for what should be submitted, represented by a series of <INPUT> elements. The human can make sense of this, but absent some kind of AI, this is beyond the means of a programmed client — while an automated client could certainly handle some kind of form-like specification in a generic way that allows it to delegate the problem of understanding to some other entity, it’s not going to be in a position to interpret this information directly absent some pre-existing expectation as to what kinds of things might be there, and why, and what to do with them.
There is a whole layer of protocol and metadata that the human/web interaction incorporates inline but that a REST service interface has to handle out of band. The HATEOAS principle does not entirely work because while the available state transitions are referenced, they are not actually described. More importantly, to the extent that descriptive information is actually provided (e.g., via a URI contained in the “rel” attribute of an XML <link> element), the interpretation of this metadata is done by a client-side developer at application development time, whereas in the case of a human-oriented web application it is done by the end user at application use time. This is profoundly different; in particular, there is a huge difference in the respective time-of-analysis to time-of-use gaps of the two cases, not to mention a vast gulf between the respective goals of these two kinds of people. Moreover, in the web case the entity doing the analyzing and the entity doing the using are the same, whereas in the REST case they are not. In particular, in the REST case the client must be anticipatory, whereas in the web case the client (which is to say, the human user) can be reactive.
PUT? POST? I’m so confused
The mapping of the standard HTTP method verbs onto a complete set of operations for managing an application resource is tricky and fraught with complications. This is another artifact of the impedance mismatch between HTTP and specific application needs that I alluded to in my “less helpful” list above.
One particular source of continuing confusion for me is captured in the tension between PUT and POST. In simplified form, a PUT is a Write whereas a POST is an Update. Or maybe it is the other way around. Which is which is not really clear. The specification of HTTP says that PUT should store the resource at the given URI, whereas POST should store the resource subordinate to the given URI, but “The action performed by the POST method might not result in a resource that can be identified by a URI.” The one thing that is clearly specified is that PUT is supposed to be an idempotent operation, whereas POST is not so constrained. The result is that POST tends to end up as the universal catch-all verb.

One problem is that a resource creation operation wants to use PUT if the identity of the resource is being specified by the requestor, whereas it wants to use POST if the identity is being generated by the server. A write operation that is a complete write (i.e., a replacement operation) wants to use PUT, whereas if the writer wants to do some other kind of side-effectful operation it should be expressed as a POST. Except that we normally wouldn’t really want a server to allow a PUT of the raw resource without interposing some level of validity checking and canonicalization, but if this results in any kind of transformation that is in any way context sensitive (including sensitive to the previous state of the resource), then the operation might not be idempotent, so maybe you should really use POST. Except that we can use ETag headers to ensure that the thing we are writing to is really what we expected it to be, in which case maybe we can use PUT after all.

At root the problem is that most real world data update operations are not pure writes but read-modify-write cycles. If there’s anything in the resource’s state that is not something that the client should be messing with, it pretty much falls to the server to be responsible for the modify portion of the cycle, which suggests that we should only ever use POST. Except that if the purpose of locating the modify step in the server is just to ensure data sanity, then maybe the operation really will be idempotent and so it should be a PUT. And I think I can keep waffling back and forth between these two options indefinitely. It’s not that a very expert HTTP pedant wouldn’t be able to tell me precisely which operation is the correct one to use in a given case, but two different very expert HTTP pedants quite possibly might not give the same answer.
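To pin down at least the one relatively uncontroversial corner of this, here is a sketch of the create-verb split, with hypothetical endpoints:

```python
import requests

# Client chooses the identity: PUT to the URI where the resource will
# live. Replaying the request yields the same result, so it's idempotent.
requests.put("https://api.example.com/users/alice",
             json={"name": "Alice", "email": "alice@example.com"})

# Server chooses the identity: POST to the collection, and the new URI
# comes back in the Location header. Replay this and you get a second,
# different user -- hence not idempotent.
resp = requests.post("https://api.example.com/users",
                     json={"name": "Bob", "email": "bob@example.com"})
new_uri = resp.headers.get("Location")
```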
If we regard the PUT operation as delivering a total replacement for the current state of a resource, using PUT for update requires the client to have a complete understanding of the state representation, because it needs to provide it in the PUT body. This includes, one presumes, all the various linked URIs embedded within the representation that are presumed to be there to drive the state engine, thus inviting the client to rewrite the set of available state transitions. If one alternatively allows the server to reinterpret or augment the representation as part of processing the PUT, then the idempotency of PUT is again at risk (I don’t personally care much about this, but REST fans clearly do). Moreover, one of the benefits a server brings to the table is the potential to support a more elaborate state representation than the client needs to have a complete understanding of. Representations such as XML, JSON, and the like are particularly suited to this: in generating the representation it sends to the server, the client can simply elide the parts it doesn’t understand. However, if a read-modify-write cycle requires passing the complete state through the client, the client now potentially needs to model the entire state (at least to the extent of understanding which portions it is permitted to edit). In particular, the REST model accommodates races in state change by having the server reject incompatible PUTs with a 409 Conflict error. However, this means that if the server wants to concurrently fiddle with disjoint portions of state that the client might or might not care about, it risks introducing such 409 errors in cases where no conflict actually exists from a practical perspective.

More recently, some implementations have incorporated the PATCH verb to perform partial updates, though it is not universally supported. I think PATCH (as described by RFC 5789) is the only update semantics that makes sense at all, and thus we’d probably be better off having the semantics of PUT be what they call PATCH and getting rid of the extra verb. However, such a model means that PUT would no longer be idempotent. The alternative approach is to simply require that PATCH always be used and to deprecate PUT entirely.
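A sketch of that partial-update alternative, again with hypothetical endpoints: PATCH carries only the changed fragment, and a conditional header makes the race with concurrent writers explicit rather than silent:

```python
import requests

uri = "https://api.example.com/products/widget-9"

# Read the current state and remember its version tag.
current = requests.get(uri)
etag = current.headers.get("ETag")

# Send only the changed field, conditional on the version we read.
resp = requests.patch(uri,
                      json={"price": "24.99"},
                      headers={"If-Match": etag})

if resp.status_code == 412:  # Precondition Failed: someone got there first
    pass                     # re-read, re-apply the change, and retry
```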
PUT update failures also throw the client into a mode of dealing with “here’s an updated copy of the state you couldn’t change; you guess what we didn’t like about your request”, instead of enabling the server to transmit a response that indicates what the particular problem was. This requires the client to have a model of what all the possible failure cases are, whereas a more explicit protocol only requires the client to understand the failure conditions that it can do something about, and lump all the rest into a catch-all category of “problem I can’t deal with”. As things stand now, the poverty of available error feedback effectively turns everything that goes wrong into “problem I can’t deal with”.
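The more explicit protocol argued for here would look something like the following hypothetical error body, in which the server names the problem so that a client need only special-case the failures it can actually act on:

```python
# A structured failure response (shape invented for illustration, loosely
# in the spirit of the "problem details" convention for HTTP APIs).
error_body = {
    "status": 409,
    "type": "https://api.example.com/errors/price-below-floor",
    "title": "Price update rejected",
    "detail": "Requested price 0.99 is below the configured floor of 4.00.",
    "retryable": False,
}
```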
DELETE has similar trouble communicating details about failure reasons. For example, a 405 Method Not Allowed could result either from the resource being in a state from which deletion is not permitted for semantic reasons (e.g., the resource in question is defined to be in some kind of immutable state) or from the requester having insufficient authority to perform the operation. Actually, this sort of concern probably applies to most if not all of the HTTP methods.
What is this “state” of which you speak?
The word “state”, especially as captured in the phrase “hypermedia as the engine of application state,” has two related but different meanings. One sense (let’s call it “state1”) is the kind of thing we mean when we describe a classic state machine: a node in a graph, usually a reasonably small and finite graph, that has edges between some nodes and not others that enable us to reason rigorously about how the system can change in response to events. The other sense (“state2”) is the collection of all the values of all the mutable bits that collectively define the way the system is — the state — at some point in time; when used in this sense, the number of possible states of even a fairly simple system can be very very large, perhaps effectively infinite. In a strictly mathematical sense these two things are the same, but in practice they really aren’t since we rarely use state machines to formally describe entire systems (the analysis would be awkward and perhaps intractable) but we use them all the time to model things at higher levels of abstraction. I take the word “state” as used in HATEOAS to generally conform to the first definition, except when people find it rhetorically convenient to mean the second. We often gloss over the distinction, though I can’t say whether this is due to sloppiness or ignorance. However, if we use the term in one way in one sentence and then in the other way in the next, it is easy to be oblivious to major conceptual confusion.
There are a couple of important issues with protocols built on HTTP (equally applicable to conventional RPC APIs as to designs based on REST principles) that arise from the asymmetry of the client and server roles with respect to the communications relationship. While the server is in control of the state of the data, the client is in control of the state of the conversation. This is justified on the basis of the dogma of statelessness — since the server is “stateless”, outside the context of a given request-response cycle the server has no concept that there even is a client. From the server’s perspective, the universe is born anew with each HTTP request.
It’s not that the HTTP standard actually requires the server to have the attention span of a goldfish, but it does insist that the state of long-lived non-client processes (that is, objects containing state whose lifetime transcends that of a given HTTP request) be captured in resources that are (conceptually, at least) external to the server. The theory identifies this as the key to scalability, since decoupling the resources from the server lets us handle load with parallel collections of interchangeable servers. The problem with this is that in the real world we often do have a notion of session state, meaning that there will be server-side resources that are genuinely coupled to the state of the client even if the protocol standard pretends that this is not so. This is the place where the distinction between the two definitions of state becomes important. If we mean state1 when we talk about “the state of the application”, then the REST conceit that the client drives the state through its selective choice of link following behavior (i.e., HATEOAS) is a coherent way of looking at what is going on (modulo my earlier caveat about the client software needing a priori understanding of what all those links actually mean). However, if we mean state2, then the HATEOAS concept looks more dubious, since we recognize that there can be knowledge that the server maintains about its relationship with the client that can alter the server’s response to particular HTTP requests, and yet this knowledge is not explicitly represented in any information exchanged as part of the HTTP dialog itself. This would seem to be a violation of the fundamental web principles that REST is based on, but given that it is how nearly every non-trivial website in the world actually works (for example, through values in cookies being used as keys into databases to access information that is never actually revealed to the client), this would be like saying nothing on the web works like the web. You might actually have a pedantic point, but it would be a silly thing to say. It would be more accurate just to say that REST is patterned on a simplified and idealized model of the web. This does not invalidate the arguments for REST, but I think it should cause us to consider them a bit more critically.
The second issue with HTTP’s client/server asymmetry arises from the fact that the server has no means to communicate with the client aside from responding to a client-initiated request. However, given the practical reality of server-side session state, we not uncommonly find ourselves in a situation where there are processes running on the server side that want to initiate communication with the client, despite the fact that the dominant paradigm insists that this is not a meaningful concept.
The result is that for autonomous server-side processes, instead of allowing the server to transmit a notification to the client, the client must poll. Polling is horrendously inefficient. If you poll too frequently, you put excess load on the network and the servers. If you don’t poll frequently enough, you introduce latency and delay that degrades the user experience. REST proponents counter that the solution to the load problem caused by frequent polling is caching, which HTTP supports quite well. But this is nonsense on stilts. Caching can help as a kind of denial-of-service mitigation, but to the extent that the current state of an asynchronous process is cached, poll queries are not asking “got any news for me?”, they are asking “has the cache timed out?” If the answer to the first question is yes but the answer to the second is no, then the overall system is giving the wrong answer. If the answer to the first question is no but the answer to the second question is yes, then the client is hitting the server directly and the cache has added nothing but infrastructure cost. It makes no sense for the client to poll more frequently than the cache timeout interval, since polling faster will provide no gain, but if you poll slower then the cache is never getting hit and might as well not be there.
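The arithmetic of that last point can be made explicit with a toy sketch; the fetch function and TTL value are hypothetical:

```python
import time

CACHE_TTL = 30  # seconds during which the intermediary replays its cached answer

def poll_for_news(fetch, interval):
    """Poll a cached endpoint until fetch() reports news."""
    # Polling faster than the cache TTL is pure waste: those extra requests
    # answer "has the cache timed out?" rather than "is there any news?"
    effective = max(interval, CACHE_TTL)
    while True:
        news = fetch()  # served from cache, or from the origin once the TTL lapses
        if news:
            return news
        time.sleep(effective)  # worst-case staleness: effective interval + TTL
```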