- 13.1 Concurrentgate
- 13.2 A Brief History of Data Sharing
- 13.3 Look, Ma, No (Default) Sharing
- 13.4 Starting a Thread
- 13.5 Exchanging Messages between Threads
- 13.6 Pattern Matching with receive
- 13.7 File Copying with a Twist
- 13.8 Thread Termination
- 13.9 Out-of-Band Communication
- 13.10 Mailbox Crowding
- 13.11 The shared Type Qualifier
- 13.12 Operations with shared Data and Their Effects
- 13.13 Lock-Based Synchronization with synchronized classes
- 13.14 Field Typing in synchronized classes
- 13.15 Deadlocks and the synchronized Statement
- 13.16 Lock-Free Coding with shared classes
- 13.17 Summary
13.12 Operations with shared Data and Their Effects
Working with shared data is peculiar because multiple threads may read and write it at any moment. Therefore, the compiler makes sure that all operations preserve the integrity of the data and the causality of operations.
Reads and writes of shared values are allowed and guaranteed to be atomic for the following types: numeric types (save for real), pointers, arrays, function pointers, delegates, and class references. struct types containing exactly one field of the mentioned types are also readable and writable atomically. Notably absent is real, the only platform-dependent type, for which the implementation has discretion regarding atomic sharing. On Intel machines, real has 80 bits, which makes it difficult to assign atomically in 32-bit programs. In any case, real is meant mostly for high-precision temporary results and not for data interchange, so there is little reason to share it.
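By way of illustration, here is a minimal sketch of declarations that qualify for atomic access under these rules. The names are hypothetical, and note that newer D compilers increasingly steer such direct reads and writes of shared data toward the explicit primitives in core.atomic:

```d
import std.stdio;

shared int hits;      // word-sized integral: reads and writes are atomic
shared int* cursor;   // pointers (and class references) share the guarantee

struct Box { size_t payload; }  // exactly one atomically accessible field
shared Box box;                 // so Box values also read and write atomically

void main() {
    hits = 1;            // a single atomic write
    int local = hits;    // a single atomic read into a thread-local copy
    writeln(local);
}
```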
For all numeric types and function pointers, shared-qualified values are convertible implicitly to and from unqualified values. Pointer conversions between shared(T*) and shared(T)* are allowed in both directions. Primitives in std.concurrency allow you to do arithmetic on shared numeric types.
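The sketch below exercises both conversions and an atomic increment. The text above locates the arithmetic primitives in std.concurrency; in current D runtimes they live in core.atomic, which this example assumes:

```d
import core.atomic;

shared int n;

void main() {
    int local = n;          // shared int converts implicitly to int
    n = local + 1;          // ...and back, via a single atomic write

    shared(int*) sp;
    shared(int)* tp = sp;   // shared(T*) converts to shared(T)*
    sp = tp;                // ...and the reverse direction is also allowed

    atomicOp!"+="(n, 1);    // read-modify-write needs an atomic primitive
}
```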
13.12.1 Sequential Consistency of shared Data
With regard to the visibility of shared data operations across threads, D makes two guarantees:
- The order of reads and writes of shared data issued by one thread is the same as the order specified by the source code.
- The global order of reads and writes of shared data is some interleaving of reads and writes from multiple threads.
That seems to be a very reasonable set of assumptions—self-evident even. In fact, the two guarantees fit time-sliced threads implemented on a uniprocessor system quite well.
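To see the force of the two guarantees, consider a Dekker-style experiment, sketched here with core.thread rather than the chapter's std.concurrency primitives. Sequential consistency requires that some global interleaving order one of the writes before the opposite read, so both reads cannot miss:

```d
import core.thread;

shared int x, y;     // both implicitly start at 0
shared int r1, r2;

void main() {
    auto t1 = new Thread({ x = 1; r1 = y; });
    auto t2 = new Thread({ y = 1; r2 = x; });
    t1.start(); t2.start();
    t1.join();  t2.join();
    // Ruled out by the two guarantees: neither thread saw the
    // other's write. Weaker memory models do permit this outcome.
    assert(!(r1 == 0 && r2 == 0));
}
```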
On multiprocessors, however, these guarantees are very restrictive. The problem is that in order to ensure the guarantees, all writes must be instantly visible throughout all threads. To effect that, shared accesses must be surrounded by special machine code instructions called memory barriers, ensuring that the order of reads and writes of shared data is the same as seen by all running threads. Such serialization is considerably more expensive in the presence of elaborate cache hierarchies. Also, staunch adherence to sequential consistency prevents reordering of operations, an important source of compiler-level optimizations. Combined, the two restrictions lead to dramatic slowdown—as much as one order of magnitude.
The good news is that such a speed loss occurs only with shared data, which tends to be rare. In real programs, most data is not shared and therefore need not meet sequential consistency requirements. The compiler optimizes code using non-shared data to the maximum, in full confidence that no other thread can ever access it, and only tiptoes around shared data. A common and recommended programming style with shared data is to copy shared values into thread-local working copies, work on the copies, and then write the copies back into the shared values.
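As a sketch of that recommended style, the hypothetical accumulator below touches the shared location only once per chunk, keeping all intermediate work in a thread-local copy that the compiler can optimize freely:

```d
import core.atomic;

shared long total;   // hypothetical shared accumulator

void accumulate(const(int)[] chunk) {
    long local = 0;               // thread-local working copy
    foreach (x; chunk)
        local += x;               // full speed: no barriers, no sharing
    atomicOp!"+="(total, local);  // one synchronized write-back at the end
}
```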