Multicore
As we saw in Chapter 1, Moore’s Law is still providing more transistors but no longer significant increases in clock frequency or performance per clock cycle. This shift in capabilities means that our single-threaded programs are no longer getting faster just by running them on newer hardware. Instead, we now have to turn to multithreading in order to take advantage of the added capabilities, which come in the form of additional cores. Getting multithreading right is a hard problem, not just due to the potential for race conditions and deadlocks, but also because the addition of thread management and synchronization actually adds significant overhead that can be difficult to break even on, despite the additional CPU resources that are unlocked with multithreading.
Due to the pretty amazing single-core performance of today’s CPUs, it turns out that the vast majority of CPU performance problems are not, in fact, due to limits of the CPU, but rather due to suboptimal program organization.5 I hope the often easily attainable improvements by factors of 3 to 4, 10 to 20, and 100 to 1,000 that I have presented so far will convince you to at least give the code-tuning option serious consideration before jumping into multithreading, which at best can achieve a speedup equal to the number of cores in the system, and even that only for perfectly parallelizable, so-called “embarrassingly parallel” problems.
Amdahl’s Law (Equation 3.1), relating the potential speedup S(N) from parallelization with N cores to the fraction P of the program that can be parallelized, shows that the benefit of additional cores peters out very quickly when even small parts of the program cannot be parallelized. So even with a very good 90% parallelizable program, going from 2 to 4 cores gives a 70% speedup, but going from 8 to 12 cores only another 21%, and the maximum speedup even with an infinite number of cores is a factor of 10. For a program that is only 50% parallelizable, the speedup is 33% with 2 cores, 60% with 4 cores, and about 85% with 12 cores, approaching the limit of 2.
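Spelling out the standard formulation behind these numbers:

S(N) = 1 / ((1 − P) + P / N)

For P = 0.9 this gives S(2) ≈ 1.8, S(4) ≈ 3.1, S(8) ≈ 4.7, and S(12) ≈ 5.7, which is where the 70% and 21% incremental improvements come from, and S(∞) = 1 / (1 − P) = 10.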
While I can’t possibly do this topic justice here, it being worthy of at least a whole book by itself, I can give some pointers on the specifics of the various multithreading mechanisms that have become available over the years, from pthreads via NSThread and NSOperationQueue all the way to the most recent addition, Grand Central Dispatch (GCD).
Threads
Threading on OS X is essentially built on a kernel-thread implementation of POSIX threads (pthreads). These kernel threads are relatively expensive entities to manage, somewhat similar to Objective-C objects, only much more so. Running a function my_computation( arg ) on a new POSIX thread using pthread_create, as in Example 3.22, takes around 7 μs of threading overhead on my machine in addition to the cost of running my_computation() by itself, so your computation needs to take at least those 7 μs to break even, and at least 70 μs to have a chance of getting to 90% parallelization (assuming we have a perfect distribution of tasks for all cores).
Creating a new thread using Cocoa’s NSThread class method +detachNewThreadSelector:… adds more than an order of magnitude of overhead to the tune of 120 μs to the task at hand, as does the NSObject convenience method -performSelectorInBackground:… (also Example 3.22).
Taking into account Amdahl’s Law, your task should probably take at least around 1 ms before you consider parallelizing, and you should probably consider other optimization options first.
Example 3.22 Creating new threads using pthreads, Cocoa NSThread, or convenience messages
pthread_create( &pthread, attrs, my_computation, arg );                                      // POSIX thread, ≈7 μs overhead
[NSThread detachNewThreadSelector:@selector(myComputation:) toTarget:self withObject:arg];   // Cocoa NSThread, ≈120 μs overhead
[self performSelectorInBackground:@selector(myComputation:) withObject:arg];                 // convenience message, comparable to NSThread
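If you want to check the thread-creation overhead on your own machine, a minimal timing harness along the following lines will do. This is a sketch rather than the code used for the figures above: the iteration count is arbitrary, the empty my_computation() isolates the overhead, and the loop times a create/join pair, which is slightly more than creation alone.

#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

static void *my_computation(void *arg) { return arg; }     // empty task, to isolate the overhead

int main(void) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);                          // converts mach ticks to nanoseconds
    const int iterations = 10000;

    uint64_t start = mach_absolute_time();
    for (int i = 0; i < iterations; i++) {
        pthread_t thread;
        pthread_create(&thread, NULL, my_computation, NULL);
        pthread_join(thread, NULL);                         // wait, so threads don't pile up
    }
    uint64_t elapsed = mach_absolute_time() - start;
    double nanoseconds = (double)elapsed * timebase.numer / timebase.denom;
    printf("%.1f us per create/join pair\n", nanoseconds / iterations / 1000.0);
    return 0;
}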
So, similar to the balancing of OOP vs. C, getting good thread performance means finding independent tasks that are sufficiently coarse-grained to be worth off-loading to a thread, but at the same time fine-grained or uniformly sized enough that there are sufficient tasks to keep all cores busy.
In addition to the overhead of thread creation, there is also the overhead of synchronizing access to shared mutable state, or of ensuring that state is not shared—at least, if you get it right. If you get it wrong, you will have crashes, silently inconsistent and corrupted data, or deadlocks. One of the cheapest ways to ensure thread-safe access is actually pthread thread-local variables; accessing such a variable via pthread_getspecific() is slightly cheaper than a message send. But this is obviously only an option if you actually want to have multiple separate values, instead of sharing a single value between threads.
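As a sketch of the thread-local approach (the key name and the per-thread counter are just illustrations), each thread lazily creates its own value and all subsequent accesses go through pthread_getspecific():

#include <pthread.h>
#include <stdlib.h>

static pthread_key_t counterKey;                       // one key, one value per thread

static void initCounterKey(void) {                     // call once, e.g. via pthread_once()
    pthread_key_create(&counterKey, free);             // free() runs per thread at thread exit
}

static long *myThreadCounter(void) {
    long *counter = pthread_getspecific(counterKey);   // cheap per-thread lookup
    if (!counter) {
        counter = calloc(1, sizeof *counter);          // first access on this thread
        pthread_setspecific(counterKey, counter);
    }
    return counter;
}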
In case data needs to be shared, access to that data generally needs to be protected with pthread_mutex_lock() (43 ns) or more conveniently and safely with an Objective-C @synchronized section, which also protects against dangling locks and thus deadlocks by handling exceptions thrown inside the @synchronized section. Atomic functions can be used to relatively cheaply (at 8 ns, around 10 times slower than a simple addition in the uncontended case) increment simple integer variables or build more complex lock-free or wait-free structures.
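The following sketch contrasts the three options on a set of counters; the counter names and the lockObject parameter are mine, and the timing comments reflect the uncontended figures given above.

#import <Foundation/Foundation.h>
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t countLock = PTHREAD_MUTEX_INITIALIZER;
static long mutexCount;                   // protected by countLock
static long synchronizedCount;            // protected by @synchronized(lockObject)
static _Atomic long atomicCount;          // updated lock-free

void incrementCounts(id lockObject) {
    pthread_mutex_lock(&countLock);       // roughly 43 ns uncontended
    mutexCount++;
    pthread_mutex_unlock(&countLock);

    @synchronized (lockObject) {          // releases the lock even if an exception is thrown
        synchronizedCount++;
    }

    atomic_fetch_add(&atomicCount, 1);    // roughly 8 ns; C11 atomics, with the OSAtomic
                                          // functions as the older OS X equivalent
}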
Work Queues
Just as the problem of thread-creation overhead is similar to the problem of object-allocation overhead, work queues are similar to object caches as a solution: they reuse the expensive threads to work on multiple work items, which are inserted into and later fetched from the queues.
Whereas Cocoa’s NSOperations actually take slightly longer to create and execute than a pthread (8 μs vs. 7 μs), dispatching a work item using GCD, introduced in Snow Leopard, really is 10 times faster than a pthread: 700 ns per item for a simple static block, and around 1.8 μs for a slightly more complex block with arguments like the one in Example 3.23.
Example 3.23 Enqueuing GCD work using straight blocks
dispatch_async( dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0 ),
                ^{ [self myComputation:arg]; } );
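For comparison, enqueuing the same work item via NSOperationQueue, the slower option mentioned above, might look like the following sketch (queue creation is shown for completeness; in practice you would reuse a queue):

NSOperationQueue *queue = [NSOperationQueue new];               // reuse in real code
[queue addOperationWithBlock:^{ [self myComputation:arg]; }];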
I personally prefer convenience messages such as the -async Higher Order Message (HOM),6 which simplifies the dispatch_async() call of Example 3.23 to the one shown in Example 3.24, at the cost of an extra microsecond.
Example 3.24 Enqueuing GCD work using HOM convenience messages
[[self async] myComputation:arg];
In the end, I’ve rarely had to use multithreading for speeding up a CPU-bound task in anger, and chances are good that I would have made my code slower rather than faster. The advice to never optimize without measuring as you go along goes double for multithreading. On the flip side, I frequently use concurrency for overlapping and hiding I/O latencies (Chapter 12) or keeping the main thread responsive when there is a long-running task, be it I/O or CPU bound (Chapter 16). I’ve also used libraries that use threading internally, for example, the vDSP routines mentioned earlier or various image-processing libraries.