3.3 Structuring and Refactoring Classes
The basic waiting and notification techniques described in 3.2 can be combined with other design strategies to improve reusability and/or performance, as well as to obtain finer-grained control over actions. This section surveys some common patterns, techniques, and problems seen when tracking logical state and execution state, wrapping control in overridable methods, and creating confinement-based designs.
3.3.1 Tracking State
The most conservative strategy for writing guarded methods is to call notifyAll every time you change the value of any instance variable. This strategy is highly extensible. If all changes to all instance variables generate notifyAll, then any method in the class and all of its possible subclasses can define a guard clause that waits for any particular state. On the other hand, this practice can be inefficient when it generates notifications that cannot possibly affect the guard conditions of any waiting thread. Often, some or all of these useless notifications can be eliminated via logical state analysis. Rather than issuing notifications upon all changes in instance variables, you can arrange to issue notifications only upon transitions out of the logical states in which threads can wait. The following examples illustrate techniques.
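For contrast with the examples that follow, here is a minimal sketch of the conservative strategy; the class and method names are illustrative rather than taken from the text:

class ConservativeCell {
  protected long value = 0;

  public synchronized long value() { return value; }

  public synchronized void setValue(long v) {
    value = v;
    notifyAll();   // notify on every change, whether or not anyone cares
  }
}

class CellWaiter extends ConservativeCell {
  // A guard a subclass happens to need. It works only because the
  // superclass notifies on every update, at the cost of many useless
  // wakeups for conditions such as this one.
  public synchronized long awaitAtLeast(long threshold)
      throws InterruptedException {
    while (value < threshold)
      wait();
    return value;
  }
}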
3.3.1.1 Channels and bounded buffers
Channel abstractions play central roles in several styles of concurrent software design (see 1.2.4 and 4.1). A Channel interface may be defined as:
interface Channel {
  void put(Object x) throws InterruptedException;
  Object take() throws InterruptedException;
}
Methods take and put may be viewed as data-carrying analogs of Sync acquire and release operations (see 2.5.1), non-IO-based versions of stream read and write operations, encapsulated forms of transfer operations (see 2.3.4), and when channel elements represent messages, message receive and send operations (see 4.1.1).
A bounded buffer can be used as a channel (see 3.4.1 for some alternatives). Bounded buffers have the same overall structure as bounded counters. In addition to a size (count), a buffer maintains a fixed array of elements. Instead of inc, it supports put, and instead of dec, it supports take. Also, MIN is simply zero and MAX is the capacity (which is declared as an int to simplify its use in array indexing).
interface BoundedBuffer extends Channel {
  int capacity(); // INV: 0 < capacity
  int size();     // INV: 0 <= size <= capacity
}
As described in just about any data structures textbook, implementations of BoundedBuffers can employ a fixed-sized array along with two indices that circularly traverse the array, keeping track of the next positions to put and take respectively. The logical states and transitions defined for a BoundedBuffer are similar to those for a BoundedCounter:
State   | Condition           | put allowed? | take allowed?
--------|---------------------|--------------|--------------
full    | size == capacity    | no           | yes
partial | 0 < size < capacity | yes          | yes
empty   | size == 0           | yes          | no
Notice that the only transitions that can possibly affect waiting threads are those that step away from states empty and full; that is, increment the size up from zero or decrement it down from the capacity.
These observations lead to the following implementation of BoundedBuffer in which notifications are issued only when transitions are made out of the empty and full states. (Part of the conciseness of the code is due to the convenience of post-increment and post-decrement coding idioms.)
class BoundedBufferWithStateTracking {
  protected final Object[] array;   // the elements
  protected int putPtr = 0;         // circular indices
  protected int takePtr = 0;
  protected int usedSlots = 0;      // the count

  public BoundedBufferWithStateTracking(int capacity)
      throws IllegalArgumentException {
    if (capacity <= 0)
      throw new IllegalArgumentException();
    array = new Object[capacity];
  }

  public synchronized int size() { return usedSlots; }

  public int capacity() { return array.length; }

  public synchronized void put(Object x) throws InterruptedException {
    while (usedSlots == array.length)     // wait until not full
      wait();

    array[putPtr] = x;
    putPtr = (putPtr + 1) % array.length; // cyclically inc

    if (usedSlots++ == 0)                 // signal if was empty
      notifyAll();
  }

  public synchronized Object take() throws InterruptedException {
    while (usedSlots == 0)                // wait until not empty
      wait();

    Object x = array[takePtr];
    array[takePtr] = null;
    takePtr = (takePtr + 1) % array.length;

    if (usedSlots-- == array.length)      // signal if was full
      notifyAll();
    return x;
  }
}

This version can generate far fewer notifications than a version in which every change to size results in a notification, thus uselessly waking up threads. In cases where evaluating guards is itself computationally expensive, minimizing rechecks in this fashion results in even greater efficiency improvements.
3.3.1.2 State variables
State tracking can sometimes be simplified and better encapsulated by using state variables that represent the entire logical state of an object in a single field (see 3.2.1.3). Usually, a state variable takes on values of an enumerated type. The variable is re-evaluated after any update to relevant fields. This re-evaluation may then be isolated in a single method, say updateState, called after each update method. After re-evaluating state, updateState issues notifications associated with the state change. For example, using a state variable in a BoundedCounter (a BoundedBuffer would work similarly) leads to:
class BoundedCounterWithStateVariable {
  static final int BOTTOM = 0, MIDDLE = 1, TOP = 2;
  protected int state = BOTTOM;  // the state variable
  protected long count = MIN;    // MIN, MAX as defined for BoundedCounter

  protected void updateState() { // PRE: synch lock held
    int oldState = state;
    if      (count == MIN) state = BOTTOM;
    else if (count == MAX) state = TOP;
    else                   state = MIDDLE;
    if (state != oldState && oldState != MIDDLE)
      notifyAll();               // notify on transition
  }

  public synchronized long count() { return count; }

  public synchronized void inc() throws InterruptedException {
    while (state == TOP)
      wait();
    ++count;
    updateState();
  }

  public synchronized void dec() throws InterruptedException {
    while (state == BOTTOM)
      wait();
    --count;
    updateState();
  }
}
Instead of using updateState to reassess state, you can arrange that each method that performs updates also determines the correct next state and sends it as an argument to updateState, which can then still perform notifications upon change. (However, this may compound the fragility problems discussed in 3.3.3.3.) As the number of states grows, you can employ heavier machinery, such as finite-state machines or decision tables (see Further Readings).
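For example, a counter written in this style might look like the following sketch, in which the class name and in-line bounds are illustrative rather than taken from the text:

class BoundedCounterWithExplicitTransitions {
  static final int BOTTOM = 0, MIDDLE = 1, TOP = 2;
  static final long MIN = 0, MAX = 10;   // assumed bounds, as for BoundedCounter
  protected int state = BOTTOM;
  protected long count = MIN;

  protected void updateState(int next) { // PRE: synch lock held
    if (next != state) {
      int old = state;
      state = next;
      if (old != MIDDLE)                 // transition out of BOTTOM or TOP
        notifyAll();
    }
  }

  public synchronized long count() { return count; }

  public synchronized void inc() throws InterruptedException {
    while (state == TOP)
      wait();
    ++count;
    updateState(count == MAX ? TOP : MIDDLE);    // caller supplies next state
  }

  public synchronized void dec() throws InterruptedException {
    while (state == BOTTOM)
      wait();
    --count;
    updateState(count == MIN ? BOTTOM : MIDDLE);
  }
}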
3.3.2 Conflict Sets
Classes that track the execution state of underlying operations can use this information to decide what to do about new incoming requests. One of the main applications is construction of custom-made exclusion policies that provide more fine-grained exclusion control than seen in Chapter 2.
To illustrate, consider an Inventory class with methods to store and retrieve objects, each of which has a unique description. Suppose that these operations are somewhat time-consuming, but are implemented in a way that does not necessarily require low-level synchronization. In this case, we can allow those operations that do not semantically conflict with each other to execute at the same time, thus permitting more concurrency than possible in a fully synchronized version of the class.
In the classic context of this form of policy control, basic functionality is arranged via database transactions, but we will illustrate using java.util.Hashtable. Even though the fully synchronized Hashtable class allows an Inventory class to be defined without worrying about some low-level synchronization details, we still want to place some semantic constraints on the store and retrieve operations. One policy choice is:
A retrieve operation should not run concurrently with a store operation since the store might be in the process of adding exactly the item requested, in which case you don't want to return a failure indication.
Two or more retrieve operations should not execute at the same time, since one may be in the process of removing the item requested by the others.
We could have made other decisions here, for example even allowing all operations to operate concurrently, thus allowing failures. Also, we could have based the policies on the internal implementation details of the operations. For example, the above choices would also hold here if the retrieve method were programmed in a way that required exclusion, but store did not.
Several formal and semiformal notations have been devised to help represent this kind of information. The most widely used method, which suffices for most concurrency control problems of this kind, is based on conflict sets — sets of pairs of actions that cannot co-occur. For example, here the conflict set is merely:
{ (store, retrieve), (retrieve, retrieve) }.
This information can serve both as documentation of class semantics and as a guide for implementing these semantics via execution-state tracking.
3.3.2.1 Implementation
Classes based on conflict sets can employ before/after designs (see 1.4) in which ground actions are surrounded by code that maintains the intended exclusion relations. The following mechanics can be implemented via any before/after pattern:
For each method, declare a counter field recording the number of invocations of that method currently in progress.
Isolate each ground action in a non-public method.
Write public versions of the methods that surround the ground action with before/after control:
Each synchronized before-action first waits until no conflicting methods are in progress, as indicated by the counters. It then increments the counter associated with its own method.
Each synchronized after-action decrements the method's counter and issues notifications to wake up threads waiting to enter other methods.
Applying these steps directly to the methods of an Inventory class leads to:
import java.util.Hashtable;

class Inventory {
  protected final Hashtable items = new Hashtable();
  protected final Hashtable suppliers = new Hashtable();

  // execution state tracking variables:
  protected int storing = 0;     // number of in-progress stores
  protected int retrieving = 0;  // number of in-progress retrieves

  // ground actions:

  protected void doStore(String description, Object item,
                         String supplier) {
    items.put(description, item);
    suppliers.put(supplier, description);
  }

  protected Object doRetrieve(String description) {
    Object x = items.get(description);
    if (x != null)
      items.remove(description);
    return x;
  }

  public void store(String description, Object item, String supplier)
      throws InterruptedException {
    synchronized (this) {                     // before-action
      while (retrieving != 0)                 // don't overlap with retrieves
        wait();
      ++storing;                              // record exec state
    }

    try {
      doStore(description, item, supplier);   // ground action
    }
    finally {                                 // after-action
      synchronized (this) {                   // signal retrieves
        if (--storing == 0)                   // only necessary when hit zero
          notifyAll();
      }
    }
  }

  public Object retrieve(String description) throws InterruptedException {
    synchronized (this) {                     // before-action
      // wait until no stores or retrieves
      while (storing != 0 || retrieving != 0)
        wait();
      ++retrieving;
    }

    try {
      return doRetrieve(description);         // ground action
    }
    finally {
      synchronized (this) {                   // after-action
        if (--retrieving == 0)
          notifyAll();
      }
    }
  }
}
(Because of the nature of the conflict set here, the notifyAll in the retrieve method happens always to be enabled. However, more generally the notifications should take the conditional form shown.)
3.3.2.2 Variants and extensions
The ideas seen in the above Inventory example also apply to optimistic methods, in which case conflicts are often termed invalidation relations. These are implemented by aborting conflicting operations before commitment rather than waiting until it is safe to perform them (see 3.6).
More extensive notation can be used to represent conflict at an arbitrarily fine level of detail, covering cases such as those in which, say, some methodA conflicts with methodB only if it occurs after methodC. Similarly, in the Inventory class, we might want to use a more precise notation in order to state that a store operation can commence if a retrieve is in progress, but not vice versa. A range of notations has been devised for such purposes (see the Further Readings in 1.2.5 and 3.3.5), enabling more detailed representation of conflicts while still allowing semi-automatic implementation via execution-state tracking variables. However, in the extreme, it may be that nothing short of a full history log suffices to implement a given policy.
The techniques described in 3.4 and 3.7 can be used to reduce the number of notifications and context switches in most classes relying on conflict sets.
Implementations based on execution-state tracking and conflict sets can suffer fragility and non-extensibility problems. Since conflict sets are based on the methods actually defined in a class rather than on logical representations of their semantics or underlying state invariants, they are difficult to extend when changing or adding methods in subclasses. For example, if a sort method is introduced to re-order the items in some fashion, or a search method to check if an item exists, they might conflict in different ways from those currently handled, requiring rework.
The Readers and Writers pattern and related constructions described in 3.3.3 alleviate some of these problems by classifying operations into extensible categories. The Readers and Writers pattern also addresses matters of precedence and scheduling that are not covered by conflict notations. For example, in Inventory, we might want to add a provision that if there are multiple waiting threads, threads waiting to perform retrieve operations are preferred over those waiting to perform store operations, or vice versa.
3.3.3 Subclassing
Subclassing can be used to layer different control policies over existing mechanism, or even vice versa. This practice extends the applications of subclassing seen in 2.3.3.2 that layer locking over bare functionality.
3.3.3.1 Readers and Writers
The Readers and Writers pattern is a family of concurrency control designs having a common basis but differing in matters of policy governing control of threads invoking accessors (“Readers”) versus those invoking mutative, state-changing operations (“Writers”).
In 2.5.2, we saw a version of this pattern encapsulated as a utility class. Here we show a subclassable before/after version using the template method pattern (see 1.4.3). Beyond its intrinsic utility, this design is a good model for any kind of policy that can be implemented by mixing together subclass-based before/after concurrency control and counters recording messages and activities. For example, very similar techniques apply to classes that require certain categories of messages to occur in ordered pairs (as in enforcing, say, read, write, read, write, and so on). They also apply to extended schemes supporting intention locks that reserve the option to later acquire (or upgrade to) read or write locks for any of a set of objects reachable from a given container (see 2.4.5).
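As a small illustration of the first of these, the following hypothetical class (not from the text) uses the same before/after style together with message counters to force reads and writes to alternate strictly, starting with a read:

abstract class AlternatingReadWrite {
  // message counters used for execution-state tracking
  protected long startedReads = 0, completedReads = 0;
  protected long startedWrites = 0, completedWrites = 0;

  protected abstract void doRead();    // define in subclasses
  protected abstract void doWrite();

  public void read() throws InterruptedException {
    beforeRead();
    try     { doRead(); }
    finally { afterRead(); }
  }

  public void write() throws InterruptedException {
    beforeWrite();
    try     { doWrite(); }
    finally { afterWrite(); }
  }

  protected synchronized void beforeRead() throws InterruptedException {
    while (startedReads != completedWrites)     // not yet a read's turn
      wait();
    ++startedReads;
  }

  protected synchronized void afterRead() {
    ++completedReads;
    notifyAll();
  }

  protected synchronized void beforeWrite() throws InterruptedException {
    while (completedReads != startedWrites + 1) // wait for the paired read
      wait();
    ++startedWrites;
  }

  protected synchronized void afterWrite() {
    ++completedWrites;
    notifyAll();
  }
}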
Before putting control mechanisms in place, you must first establish a set of policies governing their use. Readers and Writers policies are generalizations of the kinds of concurrency-control policies seen, for example, in the Inventory class in 3.3.2. But rather than dealing with particular methods, they deal with all methods having the semantics of reading versus writing. However, the details are still situation-dependent. Considerations include:
If there are already one or more active (executing) Readers, can a newly arriving Reader immediately join them even if there is also a waiting Writer? If so, a continuous stream of entering Readers will cause Writers to starve. If not, the throughput of Readers decreases.
If both some Readers and some Writers are waiting for an active Writer to finish, should you bias the policy toward allowing Readers? a Writer? Earliest first? Random? Alternate? Similar choices are available after termination of Readers.
Do you need a way to allow Writers to downgrade access to be Readers without having to give up locks?
Although there are no right answers to these policy matters, there are some standard solutions and corresponding implementations. We'll illustrate with a common set of choices: Readers are blocked if there are waiting Writers, waiting Writers are chosen arbitrarily (just relying on the order in which the underlying JVM scheduler happens to resume unblocked threads), and there are no downgrade mechanisms.
Implementing this concurrency control policy requires execution state tracking. Like most policies, it can be established by maintaining counts of threads that are actively engaged in the read and write operations, plus those that are waiting to do so. Tracking waiting threads is the main extension here of the techniques seen in typical implementations of conflict sets.
To structure the corresponding implementations, control code can be factored out into method pairs that surround the actual read and write code, which must be defined in subclasses. This before/after design (see 1.4.3) allows construction of any number of public read-style and write-style methods, where each public method invokes the corresponding non-public method between the before/after pair.
The following version is written in a generic fashion, so that minor variants are simple to implement in subclasses. In particular, the count of waiting readers is not really necessary in this version, since no policy depends on its value. However, its presence allows you to adjust policies by changing the predicates in the allowReader and allowWriter methods to rely on them in some way. For example, you might alter the conditionals to give preference to whichever count is greater.
abstract class ReadWrite {
  protected int activeReaders = 0;   // threads executing read
  protected int activeWriters = 0;   // always zero or one
  protected int waitingReaders = 0;  // threads not yet in read
  protected int waitingWriters = 0;  // same for write

  protected abstract void doRead();  // implement in subclasses
  protected abstract void doWrite();

  public void read() throws InterruptedException {
    beforeRead();
    try     { doRead(); }
    finally { afterRead(); }
  }

  public void write() throws InterruptedException {
    beforeWrite();
    try     { doWrite(); }
    finally { afterWrite(); }
  }

  protected boolean allowReader() {
    return waitingWriters == 0 && activeWriters == 0;
  }

  protected boolean allowWriter() {
    return activeReaders == 0 && activeWriters == 0;
  }

  protected synchronized void beforeRead() throws InterruptedException {
    ++waitingReaders;
    while (!allowReader()) {
      try { wait(); }
      catch (InterruptedException ie) {
        --waitingReaders;              // roll back state
        throw ie;
      }
    }
    --waitingReaders;
    ++activeReaders;
  }

  protected synchronized void afterRead() {
    --activeReaders;
    notifyAll();
  }

  protected synchronized void beforeWrite() throws InterruptedException {
    ++waitingWriters;
    while (!allowWriter()) {
      try { wait(); }
      catch (InterruptedException ie) {
        --waitingWriters;
        throw ie;
      }
    }
    --waitingWriters;
    ++activeWriters;
  }

  protected synchronized void afterWrite() {
    --activeWriters;
    notifyAll();
  }
}
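For instance, a hypothetical subclass (not from the text) that prefers whichever category currently has more waiting threads need only override the two predicates. Note that the calling thread has already been counted among the waiters by the time these predicates are evaluated, and that, like any such preference, this one can starve the minority category:

abstract class PreferMajorityReadWrite extends ReadWrite {
  protected boolean allowReader() {
    return activeWriters == 0 &&
           (waitingWriters == 0 || waitingReaders >= waitingWriters);
  }

  protected boolean allowWriter() {
    return activeReaders == 0 && activeWriters == 0 &&
           (waitingReaders == 0 || waitingWriters >= waitingReaders);
  }
}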
This class or its subclasses may also be repackaged to support the ReadWriteLock interface discussed in 2.5.2. This can be done using inner classes. (A similar strategy is used by the util.concurrent versions of ReadWriteLock, which also include some optimizations discussed in 3.7 to minimize unnecessary notifications.) For example:
class RWLock extends ReadWrite implements ReadWriteLock {

  class RLock implements Sync {
    public void acquire() throws InterruptedException {
      beforeRead();
    }
    public void release() {
      afterRead();
    }
    public boolean attempt(long msecs) throws InterruptedException {
      return beforeRead(msecs);
    }
  }

  class WLock implements Sync {
    public void acquire() throws InterruptedException {
      beforeWrite();
    }
    public void release() {
      afterWrite();
    }
    public boolean attempt(long msecs) throws InterruptedException {
      return beforeWrite(msecs);
    }
  }

  protected final RLock rlock = new RLock();
  protected final WLock wlock = new WLock();

  public Sync readLock()  { return rlock; }
  public Sync writeLock() { return wlock; }

  public boolean beforeRead(long msecs) throws InterruptedException {
    // ... time-out version of beforeRead ...
  }

  public boolean beforeWrite(long msecs) throws InterruptedException {
    // ... time-out version of beforeWrite ...
  }
}
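The time-out versions remain elided here, as in the sketch above. Purely for concreteness, one possible shape for beforeRead(msecs) is shown below, assuming the convention that a non-positive time-out means the attempt should not wait at all and that false is returned on time-out; beforeWrite(msecs) would be analogous. This is an illustrative guess, not the original implementation:

  public synchronized boolean beforeRead(long msecs)
      throws InterruptedException {
    if (allowReader()) {                 // fast path
      ++activeReaders;
      return true;
    }
    if (msecs <= 0)                      // balk rather than wait
      return false;
    long deadline = System.currentTimeMillis() + msecs;
    ++waitingReaders;
    try {
      for (;;) {
        wait(msecs);
        if (allowReader()) {
          --waitingReaders;
          ++activeReaders;
          return true;
        }
        msecs = deadline - System.currentTimeMillis();
        if (msecs <= 0) {                // timed out
          --waitingReaders;
          return false;
        }
      }
    }
    catch (InterruptedException ie) {
      --waitingReaders;                  // roll back, as in beforeRead()
      throw ie;
    }
  }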
3.3.3.2 Layering Guards
Guards may be added to basic data structure classes that were originally written in balking form. For example, consider a simple Stack:
class StackEmptyException extends Exception { }

class Stack {                       // Fragments
  public synchronized boolean isEmpty() { /* ... */ }

  public synchronized void push(Object x) { /* ... */ }

  public synchronized Object pop() throws StackEmptyException {
    if (isEmpty())
      throw new StackEmptyException();
    // else ...
  }
}
Balking on attempts to pop an element from an empty stack is attractive since it makes the class usable in sequential settings where it would be pointless to wait for a pop: if no other threads can add an element, the program will just stall forever. On the other hand, some clients of a Stack in concurrent contexts might want to hold up and wait for an element to appear. One inefficient approach is to try to perform pop and if a StackEmptyException is caught, to try again. This is a disguised form of busy-waiting.
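In client code, that retry-based approach might look like the following illustrative helper, shown only to make the cost concrete:

class SpinningStackClient {
  // Illustrative only: retrying pop on StackEmptyException is a disguised
  // busy-wait; the thread keeps consuming CPU and lock traffic while
  // learning nothing about when an element might actually arrive.
  static Object spinningPop(Stack stack) {
    for (;;) {
      try {
        return stack.pop();
      }
      catch (StackEmptyException e) {
        Thread.yield();  // still spinning; no progress or ordering guarantees
      }
    }
  }
}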
A version that directly supports guarded usage can be built using a straightforward subclass-based design introducing methods that provide further coordination. However, it is not a particularly good idea to override method pop here. Among other considerations, the different policies of balking versus waiting are reflected in the different signatures of the methods: the balking form of pop can throw StackEmptyException, but a waiting version never can; conversely, a waiting version can throw InterruptedException, but a balking version never can. While these could be merged under some blander interface, it is more manageable all around to define them as separate methods.
Even so, it is possible to add method waitingPop in the subclass without needing to rewrite all of the internals of pop. Notice, however, that this also requires overriding push to provide notifications for threads blocked in waitingPop. (The notifyAll here could be further optimized.)
class WaitingStack extends Stack {

  public synchronized void push(Object x) {
    super.push(x);
    notifyAll();
  }

  public synchronized Object waitingPop() throws InterruptedException {
    while (isEmpty()) {
      wait();
    }
    try {
      return super.pop();
    }
    catch (StackEmptyException cannothappen) {
      // only possible if pop contains a programming error
      throw new Error("Internal implementation error");
    }
  }
}
3.3.3.3 Inheritance anomalies
Some concurrent OO programming languages (see Further Readings) syntactically require separation between non-public methods defining functionality and public methods defining concurrency control policies; that is, they mandate the kind of separation seen in the template method version of class ReadWrite. Even when separation is not strictly required, it is an attractive option:
It enables either action code or concurrency control code to be varied independently in subclasses, avoiding constructions that would make it impossible for the subclass to obtain desired functionality without rewriting nearly every method.
It avoids the need to mix variables used solely for purposes of synchronization with logical state variables required for base functionality. Instead, these variables can be introduced in subclasses.
It avoids problems surrounding exclusive control, access to internal variables and methods, object identity, nested monitors (3.3.4), and interface adaptation encountered with other layering techniques. Subclassing extends objects rather than composing them. For example, no special considerations are needed to guarantee unique ownership of the superclass “part” of an object.
So long as all relevant variables and methods are declared as protected, a subclass can usually perform necessary modifications to base-level code in order to support a desired policy. Despite the best intentions of class authors, extensive surgery on method code in a subclass is sometimes the only way to salvage a class so that it obeys a given policy. Although protected access has some clear drawbacks as a design convention, in concurrent settings the resulting ability for subclasses to alter policy control can outweigh concerns about abuse of superclass representations.
For this to work, necessary invariants must be well documented. Superclasses relying on protected fields and methods and documented constraints are more likely to be correctly extended than those that publicly expose all fields (even via get/set methods), hoping that external clients can figure out how to preserve consistency, semantic invariants, and atomicity requirements for new actions or new policies.
But this form of subclassing does have its limitations. When people first started using experimental concurrent OO languages, several researchers noticed that it can be difficult or even impossible to define subclasses that add or extend commonly desired functionality or policy to superclasses. Similar concerns have been expressed in accounts of high-level OO analysis and design methods.
Some constructions in purely sequential classes are hard to extend as well, for example those declaring methods as final for no good reason. But enough additional snags are encountered in concurrent OO programming for this state of affairs to have been labeled the inheritance anomaly. The issues and problems covered by this term are only loosely related. Examples include:
If a subclass includes guarded waits on conditions about which superclass methods do not provide notifications, then these methods must be recoded. This is seen in class WaitingStack (3.3.3.2), where push is overridden solely in order to provide notifications for the new method waitingPop.
Similarly, if a superclass uses notify instead of notifyAll, and a subclass adds features that cause the conditions for using notify no longer to hold, then all methods performing notifications must be recoded.
If a superclass does not explicitly represent and track aspects of its logical or execution state on which subclass methods depend, then all methods that need to track and check that state must be recoded.
Using state variables (3.3.1.2) restricts subclasses to those in which synchronization depends only on the logical states or subdivisions of these states defined in the superclass. Thus, subclasses must conform to the same abstract specifications with respect to logical state. This practice is recommended in several accounts of high-level OO analysis and design, but can impede subclassing efforts. For example, suppose you want to extend class BoundedCounterWithStateVariable to add a disable method that causes inc and dec to block, and an enable method that allows them to continue. Support for these additional methods introduces a new dimension to logical state that alters both the guard and the notification conditions for the base methods. (A sketch of such an extension appears after this list.)
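To see what such an extension forces, consider the following hypothetical subclass (assumed to reside in the same package as BoundedCounterWithStateVariable so that BOTTOM and TOP are accessible). Because the superclass's state variable does not capture the new enabled/disabled dimension, both base methods must be recoded rather than merely extended, and enable must supply its own notification:

class SuspendableBoundedCounter extends BoundedCounterWithStateVariable {
  protected boolean enabled = true;     // a new dimension of logical state

  public synchronized void disable() { enabled = false; }

  public synchronized void enable() {
    enabled = true;
    notifyAll();                        // superclass notifications do not cover this
  }

  public synchronized void inc() throws InterruptedException {
    while (!enabled || state == TOP)    // guard rewritten, not just inherited
      wait();
    ++count;
    updateState();
  }

  public synchronized void dec() throws InterruptedException {
    while (!enabled || state == BOTTOM)
      wait();
    --count;
    updateState();
  }
}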
Taken together, these kinds of problems serve as a warning that, without more care than is usually necessary in sequential settings, you are likely to write concurrent classes that programmers (including you) will not be able to extend easily or usefully. Although they have no catchy name, similar obstacles may be encountered when trying to aggregate, compose, and delegate to objects.
An approach that avoids some of the most common extensibility problems is to encapsulate both guards and notifications in overridable methods and then structure public actions as:
public synchronized void anAction() {
  awaitGuardsForThisAction();
  doAction();
  notifyOtherGuardsAffectedByThisAction();
}
However, just as in sequential OO programming, there are no universally valid rules for making classes that can serve as useful superclasses for all possible extensions or can be used without modification in all possible contexts. Most guidelines for writing classes that avoid obstacles boil down to two well-known design rules:
Avoid premature optimization.
Encapsulate design decisions.
Both of these rules can be surprisingly hard to follow. More often than not, avoiding optimization requires more abstraction and scaffolding than optimizing for known situations. Similarly, you cannot encapsulate a design decision unless you are aware that a decision has been made. This requires contemplation of alternatives that may not occur to you upon first writing a class.
Rules such as these are perhaps most commonly applied retrospectively, during cleanup of existing code in efforts to make it more reusable. In an ideal world, you might be able to anticipate all the ways a purportedly reusable class must be opened up to make it more extensible. The world is almost never this ideal. Retrospective refactorings and iterative reworkings are honorable and routine aspects of software development.
3.3.4 Confinement and Nested Monitors
As discussed in 2.3.3 and 2.4.5, it is generally acceptable to confine synchronized Part objects within synchronized Host objects — at worst, you may encounter some superfluous locking overhead. However, this story becomes significantly more complicated for Parts that employ waiting and notification methods. The associated issues are usually described as the nested monitor problem. To illustrate the potential for lockout, consider the following minimal classes:
class PartWithGuard {
  protected boolean cond = false;

  synchronized void await() throws InterruptedException {
    while (!cond)
      wait();
    // any other code
  }

  synchronized void signal(boolean c) {
    cond = c;
    notifyAll();
  }
}

class Host {
  protected final PartWithGuard part = new PartWithGuard();

  synchronized void rely() throws InterruptedException {
    part.await();
  }

  synchronized void set(boolean c) {
    part.signal(c);
  }
}
Guarded suspension makes sense when you believe that other threads can eventually unblock a wait. But here, the Host class structurally precludes other threads from executing code that could do so. Problems here stem from the fact that any thread waiting in a wait set retains all of its locks except that of the object in whose wait set it was placed. For example, suppose that in thread T a call is made to host.rely causing it to block within part. The lock to host is retained while T is blocked: no other thread will ever get a chance to unblock it via host.set.
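To make the lockout concrete, a tiny driver of the following form (a hypothetical class, assumed to share the Host's package) will normally hang:

class NestedMonitorDemo {
  public static void main(String[] args) throws InterruptedException {
    final Host host = new Host();
    Thread t = new Thread(new Runnable() {
      public void run() {
        try { host.rely(); }
        catch (InterruptedException ignored) { }
      }
    });
    t.start();
    Thread.sleep(100); // make it likely that t is already blocked in rely()
    host.set(true);    // blocks forever: t still holds the Host lock
  }
}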
These nesting constraints can lead to unexpected lockouts when otherwise ordinary-looking synchronized methods invoke other equally ordinary-looking synchronized methods that employ wait. As with all policies for handling state-dependent behavior, you need to document and advertise the wait policies employed in a class so that people trying to use them have a chance to address potential problems. Simply adding InterruptedException to the signatures of guarded methods is a good start.
There are two general approaches to avoiding nested monitor lockouts. The first and simplest (in fact, just an application of our default rules in 1.1.1.1) is not to employ Host synchronization in the Host methods that relay to Part methods. This applies whenever the call is stateless with respect to the Host (see 2.4.1).
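For example, a version of the Host that follows this rule simply drops the synchronization on the relaying methods (the class name is illustrative):

class OpenHost {
  protected final PartWithGuard part = new PartWithGuard();

  void rely() throws InterruptedException {  // no Host synchronization
    part.await();
  }

  void set(boolean c) {                      // no Host synchronization
    part.signal(c);
  }
}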
In other cases, where Part methods must access locked Host state, you can redefine Part classes to use an extended form of hierarchical containment locking (see 2.4.5) employing the Host as the monitor. For example:
class OwnedPartWithGuard {           // Code sketch
  protected boolean cond = false;
  final Object lock;

  OwnedPartWithGuard(Object owner) { lock = owner; }

  void await() throws InterruptedException {
    synchronized (lock) {
      while (!cond)
        lock.wait();
      // ...
    }
  }

  void signal(boolean c) {
    synchronized (lock) {
      cond = c;
      lock.notifyAll();
    }
  }
}
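A Host can then employ such a part by passing itself as the owner, so that a thread waiting inside the part releases the Host lock and no longer prevents set from running. A minimal illustrative sketch:

class GuardedHost {
  protected final OwnedPartWithGuard part = new OwnedPartWithGuard(this);

  synchronized void rely() throws InterruptedException {
    part.await();    // lock.wait() releases the Host lock while waiting
  }

  synchronized void set(boolean c) {
    part.signal(c);
  }
}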
3.3.5 Further Readings
More thorough discussions and further examples of inheritance anomalies can be found in the collection edited by Agha, Wegner, and Yonezawa listed in 1.2.5, in papers presented at OO conferences such as ECOOP, in the thesis by David Holmes listed in 1.4.5, and in:
McHale, Ciaran. Synchronization in Concurrent Object-Oriented Languages, PhD Thesis, Trinity College, Ireland, 1994.
The Composition-Filters system is an example of an OO development framework that requires separation of functionality from synchronization control. It also includes a more extensive notation than conflict sets for representing concurrency control constraints. See for example, papers by Mehmet Aksit and others in the collection edited by Guerraoui, Nierstrasz, and Riveill listed in 1.2.5.
Techniques for representing states and transitions (for example using finite state machines) are described in most accounts of OO and concurrent software design listed in 1.3.5. Additional patterns are discussed in:
Dyson, Paul, and Bruce Anderson. “State Patterns”, in Robert Martin, Dirk Riehle, and Frank Buschmann (eds.), Pattern Languages of Program Design, Volume 3, Addison-Wesley, 1998.
The inner classes used in RWLock illustrate a simple, non-queryable version of the Extension Object pattern. See:
Gamma, Erich. “Extension Object”, in Robert Martin, Dirk Riehle, and Frank Buschmann (eds.), Pattern Languages of Program Design, Volume 3, Addison-Wesley, 1998.