Rule 9—Design to Split Similar Things (Z Axis)
Often referred to as sharding and podding, Rule 9 is about taking one data set or service and partitioning it into several pieces. These pieces are often equal in size but may differ in size if there is value in having several unequally sized chunks or shards. One reason to have unequally sized shards is to enable application rollouts that limit risk: you affect a small customer segment first and then increasingly large segments of customers as you become confident that you have identified and resolved major problems. Unequal shards are also a great method for discovery: because you roll out to smaller segments first, if a feature is not getting the traction you expect (or if you want to expose an “early” release to learn about its usage), you can modify the feature before it is exposed to everybody.
Often sharding is accomplished by splitting along something we know about the requestor or customer. Let’s say that we are a time card and attendance-tracking SaaS provider. We are responsible for tracking the time and attendance for employees of each of our clients, who are in turn enterprise-class customers with more than 1,000 employees each. We might determine that we can easily partition or shard our solution by company, meaning that each company could have its own dedicated Web, application, and database servers. Because we also want to leverage the cost efficiencies enabled by multitenancy, we want multiple small companies to exist within a single shard. Really big companies with many employees might get dedicated hardware, whereas smaller companies with fewer employees could cohabit with other small companies within shared shards. We have leveraged the fact that there is a relationship between employees and companies to create scalable partitions of systems that allow us to employ smaller, cost-effective hardware and scale horizontally (we discuss horizontal scale further in Rule 10 in the next chapter).
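To make the company-based split concrete, here is a minimal sketch in Python of one way such a shard map might look. The shard names, company identifiers, and size of the multitenant pool are illustrative assumptions, not part of the example above.

```python
import hashlib

# Hypothetical shard map for a time-and-attendance SaaS provider; the shard
# names, company IDs, and pool size are illustrative assumptions.

# Large customers are pinned to dedicated shards...
DEDICATED_SHARDS = {
    "acme_corp": "shard-acme",
    "globex": "shard-globex",
}

# ...while smaller customers share a pool of multitenant shards.
MULTITENANT_SHARDS = ["shard-mt-0", "shard-mt-1", "shard-mt-2"]


def shard_for_company(company_id: str) -> str:
    """Return the shard that holds all data for the given company."""
    if company_id in DEDICATED_SHARDS:
        return DEDICATED_SHARDS[company_id]
    # A stable hash (not Python's per-process salted hash()) keeps placement
    # consistent across processes and restarts.
    digest = hashlib.sha256(company_id.encode("utf-8")).hexdigest()
    return MULTITENANT_SHARDS[int(digest, 16) % len(MULTITENANT_SHARDS)]


print(shard_for_company("acme_corp"))     # shard-acme (dedicated)
print(shard_for_company("tiny_startup"))  # one of the multitenant shards
```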
Maybe we are a provider of advertising services for mobile phones. In this case, we very likely know something about the end user’s device and carrier. Both of these create compelling characteristics by which we can partition our data. If we are an e-commerce player, we might split users by their geography to make more efficient use of our available inventory in distribution centers, and to give the fastest response time on the e-commerce Web site. Or maybe we create partitions of data that allow us to evenly distribute users based on the recency, frequency, and monetization of their purchases. Or, if all else fails, maybe we just use some modulus or hash of a user identification (userid) number that we’ve assigned the user at signup.
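The last option, a modulus or hash of a user ID assigned at signup, is easy to sketch. The following Python is an illustration only; the shard count and user IDs are assumptions.

```python
import hashlib

NUM_SHARDS = 8  # assumed shard count; changing it means rebalancing existing data


def shard_for_user(user_id: int) -> int:
    """Route a user to a shard by a simple modulus of the numeric user ID."""
    return user_id % NUM_SHARDS


def shard_for_user_hashed(user_id: int) -> int:
    """Route by a stable hash instead, which spreads clustered IDs more evenly."""
    digest = hashlib.sha256(str(user_id).encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS


print(shard_for_user(123456))         # 0
print(shard_for_user_hashed(123456))  # some shard in the range 0..7
```

A plain modulus keeps the routing logic trivial to reason about; the trade-off is that changing the shard count moves most users, so the count (or a directory that maps hash buckets to shards) has to be chosen with growth in mind.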
Why would we ever decide to partition similar things? For hyper-growth companies, the answer is easy. The speed with which we can answer any request is at least partially determined by the cache hit ratio of near and distant caches. This speed in turn indicates how many transactions we can process on any given system, which in turn determines how many systems we need to handle a given volume of requests. In the extreme case, without partitioning of data, our transactions might become agonizingly slow as we attempt to traverse huge amounts of monolithic data to come to a single answer for a single user. Where speed is paramount and the data needed to answer any request is large, designing to split different things (Rule 8) and similar things (Rule 9) becomes a necessity.
Splitting similar things obviously isn’t limited to customers, but splitting by customer is the most frequent and easiest implementation of Rule 9 that we see in our consulting practice. Sometimes we recommend splitting product catalogs, for instance. But when we split diverse catalogs into items such as lawn chairs and diapers, we often categorize these as splits of different things. We’ve also helped clients shard their systems by splitting along a modulus or hash of a transaction ID. In these cases, we really don’t know anything about the requestor, but we do have a monotonically increasing number upon which we can act. These types of splits can be performed on systems that log transactions for future reference, as in a system designed to retain errors for future evaluation.
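As a closing illustration, the sketch below routes error records to a shard purely by a modulus of a monotonically increasing transaction ID, since nothing is known about the requestor. The shard names and IDs are assumptions for illustration.

```python
# Hypothetical error-retention store split only by a modulus of the
# monotonically increasing transaction ID; shard names are assumptions.
ERROR_SHARDS = ["errors-db-0", "errors-db-1", "errors-db-2", "errors-db-3"]


def shard_for_transaction(transaction_id: int) -> str:
    """Pick a shard knowing nothing about the requestor, only the transaction ID."""
    return ERROR_SHARDS[transaction_id % len(ERROR_SHARDS)]


# Sequential transaction IDs are dealt round-robin across the four shards.
for txn_id in (1001, 1002, 1003, 1004):
    print(txn_id, "->", shard_for_transaction(txn_id))
```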