Saving the Failwhale: The Art of Concurrency
One of the most challenging parts of server-side programming is the jump from the safe, comfortable environs of the single-user development stage to the wild, unpredictable realm of production, with its large volumes of users and its odd, arbitrary, and unforeseen data inputs. It can be a scary transition indeed.
The key to making this jump successful is a robust and well-designed system of concurrency. This rule applies to web applications, mobile applications with back-end components, and even to services that have no users at all (in the traditional sense) but must contend with a large amount of data being processed simultaneously; for example, when a brokerage firm or a bank clears checks in a batch at the end of the day. This is where the experience of senior engineers really comes into its own. Everyone knows the story of an application working perfectly during the development phase, only to melt under even infinitesimal traffic. Even if you haven't lived through this experience yourself, you've heard about it. The social network Twitter has even coined a special phrase for it: the failwhale.
Of course, Twitter was the victim of several issues, including simply not having enough capacity to handle its rising traffic. However, it's a good emblem of the kinds of problems that new projects face when they're forced to grow up fast. One side of this problem is a lack of capacity; that is, the traffic growth vastly exceeded what was expected. The other side, one that's often ignored or forgotten, is the poor utilization of existing resources. In other words, poor concurrency.
In this article I'll summarize a few techniques that I've found useful over the years in laying the groundwork for good utilization of resources. The key is always concurrency.
Handling Unavoidable Resource Contention
At its core, effective concurrency is about managing the speed discrepancies between the resources in a system. For example, the CPU is generally faster than RAM, and RAM is generally faster than a hard disk. The result of these discrepancies is that a resource in one tier cannot be supplied as quickly as the faster tiers demand it. This mismatch creates a bottleneck: the faster tiers of the application constantly outpace the slower tiers, and they end up contending for the same scarce resources.
In some cases, contention is unavoidable. Some resources are just slow, and we must wait for them. The secrets to good concurrency in this case are 1) ensuring that these slower resources are rare, and 2) during such waiting periods, giving the faster tiers other work to do so that they continue being utilized well.
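To make that second point concrete, here's a minimal sketch of the idea (the class name, the simulated delay, and the busy-work loop are purely illustrative): the slow work is handed off to a worker thread as a Future, and the calling thread does other useful work before it finally blocks for the result.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OverlapDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService io = Executors.newSingleThreadExecutor();

        // Hand the slow work (standing in for a disk or network read) to a worker thread.
        Future<String> slowResult = io.submit(new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(2000); // simulate a slow tier
                return "data from the slow tier";
            }
        });

        // Meanwhile, keep the fast tier (the CPU) busy with independent work.
        long checksum = 0;
        for (int i = 0; i < 1000000; i++) {
            checksum += i;
        }

        // Block only when the slow result is actually needed.
        System.out.println(checksum + " / " + slowResult.get());
        io.shutdown();
    }
}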
Synchronized Versus Concurrent
Even systems with low resource contention perform poorly under load if their architecture isn't well suited for concurrency. Symptoms include the overuse of synchronization constructs such as locks and mutexes. In Java, overuse of the synchronized keyword is a good example.
Consider the following very common data structure—the hash table. A hash table is a fantastic tool for storing pairs of keys and values for fast lookup. These tables perform incredibly well (in constant time, asymptotically speaking) for both inserting keys and reading back their associated values:
import java.util.HashMap;
import java.util.Map;

public class UserService {
    private Map<String, User> cache = new HashMap<String, User>();

    public void register(User user) {
        cache.put(user.getName(), user);
        // etc.
    }
}
In this example, we're using the hash table—called a HashMap in Java—as a cache. However, a HashMap by itself is not thread-safe. If register() is called concurrently by two or more threads, the underlying data structure can become corrupted—the worst possible outcome for any piece of critical code.
To fix this problem, people often use the synchronized version of the HashMap:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class UserService {
    private Map<String, User> cache =
        Collections.synchronizedMap(new HashMap<String, User>());

    public void register(User user) {
        cache.put(user.getName(), user);
        // etc.
    }
}
This approach solves the data-corruption issue, but it also introduces a major new issue—lock contention. Any cache access will now occur under an exclusive lock, meaning that only one thread at a time will be able to write to the cache or read from it. This is a horrible situation if you have hundreds, or even just dozens, of concurrent users on modern multicore CPUs: the hardware sits idle, and the user experience suffers.
The solution to this issue is to use a ConcurrentHashMap:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserService {
    private Map<String, User> cache = new ConcurrentHashMap<String, User>();

    public void register(User user) {
        cache.put(user.getName(), user);
        // etc.
    }
}
The code now frees up, allowing competing threads to access the cache concurrently. However, ConcurrentHashMap isn't some magical construct; it's important to know how and why it works, so that you know when to use it and what its benefits are.
The ConcurrentHashMap doesn't actually do away with locks. It still uses them, but rather than a single global lock it uses several, so that threads gain some measure of concurrency. (It also uses some tricks to avoid locking when only reading from the map, but let's leave that discussion for another time.) By dividing the entire key space into several partitions, the ConcurrentHashMap statistically ensures that multiple writing threads interfere with one another infrequently. It reserves a separate lock for each of these partitions, so that multiple threads writing to the map are likely to land on different partitions (taking separate locks) and therefore process their data simultaneously. This technique is known as lock striping.
For ConcurrentHashMaps, 8–16 stripes is plenty for handling most concurrency use cases. (As a rule, this number is proportional to the number of CPU cores in a system.)
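To see what lock striping looks like in code, here's a deliberately simplified sketch of a striped map. This is an illustration of the technique only, not ConcurrentHashMap's actual implementation; the class and method names are mine.

import java.util.HashMap;
import java.util.Map;

public class StripedMap<K, V> {
    private static final int STRIPES = 16;
    private final Object[] locks;
    private final Map<K, V>[] partitions;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        locks = new Object[STRIPES];
        partitions = new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            partitions[i] = new HashMap<K, V>();
        }
    }

    // Each key hashes to exactly one partition (stripe).
    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public void put(K key, V value) {
        int stripe = stripeFor(key);
        synchronized (locks[stripe]) {       // lock only this partition
            partitions[stripe].put(key, value);
        }
    }

    public V get(Object key) {
        int stripe = stripeFor(key);
        synchronized (locks[stripe]) {
            return partitions[stripe].get(key);
        }
    }
}

Two threads writing keys that hash to different stripes take different locks and never wait on each other; only writers that collide on the same stripe are serialized.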
As a design pattern, this kind of concurrency can be applied to a great number of use cases. One of my favorites is the producer/consumer queue.
Improving Utilization
A traditional producer/consumer queue is a common design pattern that's used to handle use cases where tasks are generated sporadically by a small number of agents, and consumed uniformly by a large number of workers that perform the produced tasks. A coffee shop is a good analogy: You usually have one order-taker at the till who sporadically generates tasks (orders for coffee and so forth) for multiple baristas, who in turn operate espresso machines and other equipment to perform the designated tasks (preparing the coffee to fill the orders).
The baristas don't care who ordered the coffee—if one barista is busy, another swoops in to pick up any waiting orders and continue brewing coffee. The coffee orders are held in a queue to be processed as they arrive. This is the quintessential producer/consumer queue. It utilizes the available resources (baristas) efficiently, with hardly any contention at all (the tiny amount of time it takes for a barista to pull order tickets from the queue).
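In Java, this structure can be sketched with a BlockingQueue (the class and method names here are illustrative, not from any particular codebase):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CoffeeShop {
    // The order queue: the till (producer) puts tickets in,
    // the baristas (consumers) take them out.
    private final BlockingQueue<String> orders = new ArrayBlockingQueue<String>(100);

    public void startBaristas(int count) {
        for (int i = 0; i < count; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String order = orders.take(); // blocks until an order arrives
                            brew(order);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // shut down quietly
                    }
                }
            }).start();
        }
    }

    // Called by the order-taker at the till.
    public void placeOrder(String order) throws InterruptedException {
        orders.put(order); // blocks only if the queue is full
    }

    private void brew(String order) {
        System.out.println("Brewing: " + order);
    }
}

Because put() and take() handle all the waiting, neither the producer nor the consumers contain any explicit locking; the queue is the only point of coordination.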
This pattern is extremely useful in a variety of computational tasks. Let's take email as an example. Checking for new mail generally works like this:
- Open your email application (on your iPhone, for instance). The application sends a request for mail to the server.
- You wait until the server is contacted, and continue waiting while it performs its own check against a database.
- Finally the mail is returned and displayed for you to read.
This whole activity features two forms of contention that scream for better efficiency:
- You must wait with the mail app open, spinning, while the server is being checked.
- The server is expending resources waiting on its own database to read the data off disk and return it.
Applying the design patterns we've learned, we can improve both of these issues easily. On the client, we institute a periodic check (say, once every 15 minutes), which sends a simple task to the server: "Notify me if new mail has arrived since the last time I checked." This is our task producer. The server then performs the busy work of checking for mail. It sends a notification back to the app if new mail is available; if no new mail exists, it does nothing. This is the task consumer. On smartphones, this structure is sometimes called push notification.
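On the client side, the periodic check might look something like the following sketch (the MailPoller class and the MailServerClient interface are hypothetical stand-ins for whatever call your app makes to the server):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MailPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Ask the server every 15 minutes whether anything new has arrived.
    public void start(final MailServerClient server) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                server.checkForNewMailSince(lastCheckTimestamp());
            }
        }, 0, 15, TimeUnit.MINUTES);
    }

    private long lastCheckTimestamp() {
        // In a real app this would be persisted between checks.
        return System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(15);
    }

    // Hypothetical interface representing the call to the server.
    public interface MailServerClient {
        void checkForNewMailSince(long timestamp);
    }
}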
In more practical terms, we can see this structure modeled on the server using a producer/consumer queue. The server receives a number of email check requests from various clients and places them in a queue:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmailChecker {
    private ExecutorService exec = Executors.newFixedThreadPool(8);

    // producer
    public void checkForEmail(final Client details) {
        exec.submit(new Runnable() {
            public void run() {
                perform(newCheckRequest(details));
            }
        });
    }

    // consumer
    public void perform(Request req) {
        if (req != null) {
            NewMail mail = checkDatabase(req);
            if (mail != null)
                notify(req, mail);
        }
    }
}
So here we have two methods—one that adds tasks to the queue, and another that pops off the tasks and checks against the database, sending a notification if necessary.
Two Methods for Addressing Resource Contention
Concurrency comes into play when we decide how to distribute our resources. By using an ExecutorService (essentially a thread pool accompanied by a task queue), we schedule eight threads to run continuously, processing email check requests. This approach ensures that we process a maximum of eight requests simultaneously against the database. In doing so, we convert the sporadic, bursty stream of check requests from apps into a smooth, uniform, and predictable load on the CPU and database. Without this setup, a sudden influx of check requests (everyone waking up in the morning and turning on their phones, for example) would put us in a failwhale scenario, where the load on the database exceeds its capacity.
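That "thread pool accompanied by a task queue" is quite literal. The sketch below spells out roughly what Executors.newFixedThreadPool(8) constructs in the standard JDK (the wrapper class and method name are mine):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfiguration {
    // Eight worker threads draining a shared task queue. Bursts of check
    // requests pile up in the queue instead of hitting the database all at
    // once, and the pool drains them at a steady rate of at most eight
    // concurrent database checks.
    public static ExecutorService emailCheckPool() {
        return new ThreadPoolExecutor(
                8, 8,                                 // core and maximum pool size
                0L, TimeUnit.MILLISECONDS,            // no idle timeout for core threads
                new LinkedBlockingQueue<Runnable>()); // the task queue that absorbs spikes
    }
}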
This is the same concurrency design pattern, but its purpose is inverted—rather than applying it to improve utilization of our resources, we're using it to reduce spiky load on the database. As you may have noticed, both applications are intended to reduce resource contention, but are applied from different angles and have different consequences. This design pattern is often referred to as an "asynchronous processing model."
Summary
Concurrency design patterns are vital to any robust system design. Understanding specific, low-level constructs like concurrent data structures is just as important as knowing broad, "bird's-eye" architectural decisions such as producer/consumer models. Mastering concurrency at both levels gives you the ultimate harpoon against the failwhale.