Scheduling Challenges with ARM's big.LITTLE Architecture
One of the latest buzzwords to come out of ARM's marketing department is big.LITTLE—no doubt someone spent many seconds of creativity to come up with the idea of capitalizing the word little. In spite of the name, which makes me cringe slightly whenever I type it, the idea is quite interesting.
The concept behind big.LITTLE is to include multiple cores with the same instruction set on a single die. This idea is interesting because it sits somewhere between traditional symmetric multiprocessing (SMP), albeit obviously not symmetric, and heterogeneous multicore. The instruction sets of all the cores are the same, but their performance characteristics are very different.
Current implementations use a mixture of the Cortex-A15 and Cortex-A7 cores. (Apparently ARM's creative genius also extends to naming cores.) The A7 is an impressive piece of engineering. Its performance is similar to that of the Cortex-A8 (though it's better, and therefore has a lower number), but in a smaller die area and at much lower power, and tweaked to understand a newer version of the instruction set. You may remember that the A8 was ARM's flagship mobile core a few years ago; it's now quite long in the tooth. The A15 is the successor to the A9. (It's better, and therefore has a higher number. No, it doesn't make sense to me either.) The A15 is very different internally from the A7. The A7 is an in-order architecture with limited superscalar features (dual-issue, if you hit the right combination of instructions), and the A15 is an out-of-order superscalar monster.
Of course, this extra performance from the A15 comes at a cost. It's ARM's fastest core by a fairly significant margin, but it's also ARM's most power-hungry core by a similar amount. Stick an A15 in a mobile phone, and you'll get strong biceps from lifting the battery to your ear. This is why ARM pairs the A15 with the A7: Both can run the same code, but the A15 will run much faster, at the expense of power consumption.
Unfortunately, the tradeoff isn't that simple. Often, the best way of minimizing power consumption is to let a job run to completion at the processor's highest power state and then shut the processor down: if, say, the A15 finishes the job in half the time the A7 would take while drawing less than twice the power, it uses less energy overall. For this kind of task, the A15 is more power-efficient. For other tasks, it might be the only option; the A7 just won't keep up.
Uniform Memory Architecture
The first generation of big.LITTLE chips from Samsung was buggy, and it lacked one of the most interesting features of later designs: The A7 and A15 clusters weren't cache-coherent with each other, so the only thing a scheduler could do was turn on the A15s and turn off the A7s, or vice versa.
More-recent designs are fully cache-coherent between all of the cores. A typical system on chip (SoC) has four of each kind of core. With adequate cooling, you can power all of them at once. More commonly, you might power one A15 for a CPU-limited foreground application, and one or two A7s for everything else.
The large number of possible combinations of cores you can have active dramatically complicates the task of power management for the operating system. With a more conventional SMP system, power management is a two-dimensional problem: How many cores do you want to have active, and at what speed do you want to run them? Core speed is often an independent problem: You schedule jobs as best you can, and then adjust the core speed upward if everything is CPU-bound or downward if it isn't. big.LITTLE adds another dimension: What kinds of core are active?
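To make the shape of the problem concrete, here's a minimal sketch in C of that three-dimensional decision space. None of this is any real kernel's API; the structure, the frequencies, and the policy are all invented purely for illustration.

    /* Hypothetical decision space for a big.LITTLE-aware governor:
     * which kind of core, how many of them, and at what clock speed. */
    enum core_type { CORE_LITTLE_A7, CORE_BIG_A15 };

    struct power_state {
        enum core_type type;   /* which kind of core to run on         */
        int            cores;  /* how many of that kind to keep online */
        int            mhz;    /* what frequency to clock them at      */
    };

    /* Toy policy: CPU-bound foreground work gets one fast core;
     * everything else shares one or two slow ones. */
    static struct power_state choose_state(int runnable_tasks, int cpu_bound)
    {
        struct power_state s;
        if (cpu_bound) {
            s.type  = CORE_BIG_A15;
            s.cores = 1;
            s.mhz   = 1600;
        } else {
            s.type  = CORE_LITTLE_A7;
            s.cores = runnable_tasks > 1 ? 2 : 1;
            s.mhz   = 800;
        }
        return s;
    }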
I imagine that future releases in this series will have more than just fast-and-power-hungry cores versus slow-and-efficient cores. The newly launched A12 is intended to fit between the A7 and A15 in terms of power and performance. (Therefore, in a dramatic break with tradition, this new core has a number that makes some kind of sense.) The A12 will be the big core in midrange big.LITTLE configurations, and it may well appear in "big.Medium.LITTLE" configurations in the future. (ARM marketing can have that name for free.)
The scheduling question is more complicated for another reason: Migrating a thread between cores is quite expensive, and if you make the wrong scheduling decision then you have to do exactly that. In a traditional SMP setup, a thread is only on the wrong core if you accidentally schedule too many CPU-bound processes on one core, or if you're trying to consolidate mostly-idle processes onto a few cores so that you can power down the others. With big.LITTLE, a thread can easily end up on the wrong core for power consumption if it suddenly switches between being CPU-bound and I/O-bound.
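A rough way to picture that risk (with invented numbers; real migration costs depend on cache sizes, cluster wake-up latency, and the interconnect) is a simple break-even test: moving a thread to the A15 only pays off if the CPU-bound phase lasts long enough to amortize the cost of getting it there.

    /* Assumed, made-up figures for illustration only. */
    #define MIGRATION_COST_US 200  /* cost of moving the thread and refilling caches */
    #define BIG_CORE_SPEEDUP    2  /* assumed A15 speedup over the A7                */

    /* Returns nonzero if a CPU-bound phase that would take expected_phase_us
     * on the A7 saves more time on the A15 than the migration costs. */
    static int migration_pays_off(unsigned expected_phase_us)
    {
        unsigned saved_us = expected_phase_us - expected_phase_us / BIG_CORE_SPEEDUP;
        return saved_us > MIGRATION_COST_US;
    }

Guess the phase length wrong, and you can end up paying the migration cost in both directions for nothing.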
Execution History
With systems like this, where making correct scheduling decisions can affect both responsiveness and battery life to a noticeable degree, the operating system needs to be able to predict the behavior of processes. With some processes, this is very easy. For example, a compiler will spend a little bit of time reading files, and then it will allocate a load of memory, sit being CPU-bound for a while, and then write out a file and exit. A few execution traces are all you need to work out that this is probably what will happen every time.
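One sketch of the sort of bookkeeping involved (purely illustrative; this isn't how Linux or any other kernel represents execution history) is an exponentially weighted moving average of how CPU-bound a task was over its recent scheduling periods:

    /* Illustrative per-task history: a smoothed estimate, from 0 to 100,
     * of how CPU-bound the task has recently been. */
    struct task_history {
        int cpu_busy_pct;
    };

    /* Fold one scheduling period's observation into the estimate.  The 1/8
     * weight is arbitrary: a smaller weight remembers more history, a
     * larger one reacts faster when a task changes behaviour. */
    static void history_update(struct task_history *h, int observed_pct)
    {
        h->cpu_busy_pct += (observed_pct - h->cpu_busy_pct) / 8;
    }

    /* A scheduler might reserve the A15s for tasks that consistently look
     * CPU-bound and leave everything else on the A7s. */
    static int looks_cpu_bound(const struct task_history *h)
    {
        return h->cpu_busy_pct > 75;
    }

For the compiler, a handful of runs is enough to push the estimate toward the top of the range.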
Even with this workload, the right scheduling decision isn't obvious. For example, should the compiler start on an A7 core while it's doing the I/O and then move to an A15 when it stops being I/O-bound? This approach sounds good, but if the data has been read into the A7's cache, it will have to be fetched into the A15's cache as well. That migration causes bus traffic, which consumes power and reduces performance. On the other hand, running on the A15 the whole time means that the power-hungry core sits idle while it's waiting for I/O.
Many applications are a lot more "bursty." For example, when you type something into a rich text field in a graphical application, the application will do some work to render the text and may do some extra calculations for autocompletion, spelling correction, and so on. While the user pauses to think, the application may just sit completely idle waiting for events.
Why Bother?
At this point, you're probably wondering whether it's worth the effort. Wouldn't ARM be better off with just one core that can do fine-grained clock scaling? Intel took this route, but it's not necessarily better. Currently, Intel's flagship Atom cores are in the same ballpark as the A15 in terms of power consumption. Intel has nothing that comes close to the A7, and for good reason: The A7 is designed for power consumption above all else.
An in-order chip like the A7 is significantly simpler, and this shows in the die area. A single A7 core is about 0.45 square millimeters (plus caches), whereas the A15 is roughly five times that area; you get all four A7 cores in the space of one A15. If they're all busy, those four A7s will also outperform the single A15. The catch, of course, is that keeping them all busy requires a multithreaded workload with perfect work distribution.
The power efficiency of the A7 core is very important for mobile devices, which spend much of their time idle or almost idle. The almost-idle part is the most important: Many things prevent a tablet or mobile phone from sleeping fully, yet they need only a very small amount of CPU power. Running these devices on an Atom or an A15 would be inefficient.
I said earlier that the A15 is an out-of-order chip, but that term hides a lot of complexity. A simple in-order chip decodes an instruction and then pushes it into the next stage of the pipeline to execute. A slightly more complex chip like the A7 might decode two instructions, determine whether they use different execution units and have no interdependencies, and push them into different pipelines if this is the case. A fully out-of-order superscalar chip like the A15 or newer Atoms (or almost any modern desktop or server CPU) decodes multiple instructions, determines their data dependencies, and executes them in the order of their dependencies (or a close approximation).
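The difference is easiest to see in terms of data dependencies. In the little C example below (what actually pairs up depends on the compiler's instruction scheduling, but the dependency structure is the point), the first function offers work that an in-order dual-issue core can exploit, and the second doesn't.

    /* The two additions don't depend on each other, so an in-order
     * dual-issue core like the A7 can start both in the same cycle. */
    int independent(int a, int b, int c, int d)
    {
        int x = a + b;
        int y = c + d;
        return x ^ y;
    }

    /* Each step needs the previous result, so dual issue gains nothing.
     * An out-of-order core like the A15 can instead pull later, unrelated
     * instructions forward to hide the latency of the chain. */
    int dependent(int a, int b, int c, int d)
    {
        int z = a + b;
        z = z * c;
        z = z - d;
        return z;
    }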
Any out-of-order superscalar chip will need complex logic for register renaming. Register operands in such a chip are just names used to identify dependencies. It probably needs a store buffer, so memory writes appear in the correct order. It needs a wide instruction decoder so that it can decode multiple instructions simultaneously to determine which to execute. It needs a very complex branch predictor, because a mis-predicted branch is very expensive when lots of speculatively executed instructions are in flight. With a branch on average every seven instructions, the CPU quickly runs out of work to do without speculative execution.
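To see why renaming matters, consider the sketch below (again, just an illustration of the principle). The variable t is rewritten on every loop iteration; without renaming, each new write would have to wait until the previous value was finished with, but with renaming, each write gets a fresh physical register and the work of several iterations can be in flight at once.

    /* `t` is just a name.  A renaming core gives each iteration's `t` its
     * own physical register, so the loads and multiplies of successive
     * iterations can overlap instead of serializing on one register. */
    int sum_scaled(const int *v, int n, int scale)
    {
        int total = 0;
        for (int i = 0; i < n; i++) {
            int t = v[i] * scale;
            total += t;
        }
        return total;
    }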
All of this extra complexity is very hard to disable selectively. Without too much effort, you could probably tweak the A15 to ignore most of the branch predictor state and issue only one or two instructions per clock. You could probably power-gate the unused pipelines, although doing so would mean quite a long delay when you want to bring them back online. But you'd still have all of the register-renaming logic and the extra wiring from the independent arithmetic logic units. You almost certainly wouldn't come close to the A7 in terms of power consumption.
One complaint leveled at ARM over the big.LITTLE model is that they're taking the easy way out in hardware and shoving the complexity into the operating system. Ignoring the fact that designing two power-efficient CPU cores doesn't meet my definition of "easy," this complaint is somewhat legitimate. Taking ARM's approach is easier than trying to design a single core that has the power/performance characteristics of either the A7 or A15, depending on its mode. I doubt that such an approach is actually possible.
Detractors are right, however, to claim that ARM is making the operating system a lot more complex. The burden of power optimization is placed squarely on the OS writers. On the other hand, the operating system is in the best position to track the usage profiles of applications over time, so the OS seems like the right place for it.