7.7 Ordered Locking Pattern
The Ordered Locking Pattern is another way to ensure that deadlock cannot occur, this time by preventing condition 4 (circular waiting) from arising. It does this by ordering the resources and requiring that every client always access them in that specified order. If this policy is rigorously enforced, then no circular waiting condition can ever occur.
7.7.1 Abstract
The Ordered Locking Pattern eliminates deadlock by ordering resources and enforcing a policy in which resources must be allocated only in a specific order. Unlike "normal" resource access, but similar to the Simultaneous Locking Pattern, the client must explicitly lock and release the resources, rather than doing it implicitly by merely invoking a service on a resource. This means that the potential for neglecting to unlock the resource exists.
7.7.2 Problem
The Ordered Locking Pattern solely addresses the problem of deadlock elimination, as does the previous Simultaneous Locking Pattern.
7.7.3 Pattern Structure
Figure 7-17a shows the structural part of the Ordered Locking Pattern. Each Resource Client aggregates a Resource List, which contains an ordered list of Resource IDs currently locked by the Thread.
Figure 7-17b uses UML activity diagrams to show the algorithms for locking and unlocking a resource. The basic policy of resource locking is that each resource in the entire system has a unique integer-valued identifier, and a Thread may only lock a resource whose ID is greater than that of the highest-valued resource it currently owns. An attempt to lock a resource with a lower-valued ID than the highest-valued resource currently owned causes the Resource List to throw a PoorlyOrderedAccess exception, indicating a violation of this policy. A Resource Client may still block on a resource it needs because another client holds it, and that is perfectly fine. But a circular waiting condition (required for deadlock) can never occur, because that would require at least one Resource Client to block waiting for the release of a resource whose ID is lower than that of its highest-owned resource.5
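The locking and unlocking algorithms of Figure 7-17b can be sketched in code. The sketch below uses Python's threading.Lock to play the Mutex role; the class and exception names follow the pattern's collaboration roles, but the method signatures are illustrative assumptions, not a definitive implementation:

```python
import threading

class PoorlyOrderedAccess(Exception):
    """Raised when a client requests a resource out of order."""

class SharedResource:
    def __init__(self, res_id):
        self.id = res_id                # unique, globally ordered identifier
        self.mutex = threading.Lock()   # the Mutex associated with this resource

class ResourceList:
    """Tracks the IDs a Resource Client currently holds and enforces ordering."""
    def __init__(self):
        self.held_ids = []              # IDs of resources currently locked

    def lock(self, resource):
        # Policy: only lock a resource whose ID is strictly greater than
        # the highest-valued resource currently owned.
        if self.held_ids and resource.id <= max(self.held_ids):
            raise PoorlyOrderedAccess(
                f"cannot lock ID {resource.id} while holding ID {max(self.held_ids)}")
        resource.mutex.acquire()        # blocks if another client holds it
        self.held_ids.append(resource.id)

    def unlock(self, resource):
        self.held_ids.remove(resource.id)
        resource.mutex.release()
```

A client that always goes through its ResourceList can still block waiting for a busy resource, but it can never form the circular wait needed for deadlock.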
Figure 7-17: Ordered Locking Pattern
7.7.4 Collaboration Roles
Mutex
The Mutex is a mutual exclusion semaphore object associated with a Shared Resource. If a Shared Resource is currently locked when requested by a Resource Client (via its Resource List), then the Resource Client blocks on that resource.
Resource Client
A Resource Client is an object (which may be «active») that owns and locks resources. It aggregates a Resource List to manage the locking and unlocking of those resources.
Resource ID
The Resource ID is a simple part object aggregated by the Resource List. It merely contains the ID of a corresponding Shared Resource currently locked by the Thread. When the Shared Resource is unlocked by the Resource List, its ID is removed from the list.
Resource List
The Resource List manages the locking and unlocking of Shared Resources according to the algorithm shown in Figure 7-17b. When a Resource Client wants to lock a resource, it makes the request of the Resource List. If the ID of the requested resource is greater than that of every currently owned resource, the Resource List proceeds: if the resource is unlocked, the Resource List locks it and adds its ID to the list; if it is already locked, the Thread blocks on the resource.
Shared Resource
A resource is an object shared by one or more Resource Clients. In this pattern, each Shared Resource has a unique integer-valued identifier. This identifier is used to control the order in which Shared Resources may be locked. If a Shared Resource itself uses other Shared Resources, then it may only do so if the called Shared Resources' identifiers are of higher value than its own.
7.7.5 Consequences
This pattern effectively removes the possibility of resource-based deadlocks by removing the possibility of condition 4, circular waiting. For the algorithm to work, any ordering of Shared Resources will do, provided that the ordering is global. However, some orderings are better than others and will result in less blocking overall. This may take some analysis at design time to identify the best ordering of the Shared Resources. As mentioned above, if Shared Resources are themselves Resource Clients (a reasonable possibility), then they should only invoke services of Shared Resources that have higher-valued IDs than they do. If they invoke a lower-valued Shared Resource, then they are in effect violating the ordered locking protocol by the transitive property of locking (if A locks B and then B locks C, then A is in effect locking C).
While draconian, one solution to the potential problem of transitive violation of the ordering policy is to enforce the rule that a Shared Resource may never invoke services or lock other Shared Resources. If your system design does allow such transitive locking, then each transitive path must be examined to ensure that the ordering policy is not violated. The Ordered Locking Pattern does not address the issue of bounding priority inversion as do some other patterns here.
7.7.6 Implementation Strategies
One memory-efficient implementation for Resource List is to use an array of integers to hold the Resource IDs. The array only needs to be as large as the maximum number of resources held at any one time. For an even more memory-efficient implementation (but at the cost of some computational complexity), a bit set can be used. The bit set must have the same number of bits as the maximum Resource ID value. Setting and unsetting a bit is computationally lightweight, but checking to see whether a higher-valued bit is set is a little more computationally intensive.
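The bit-set variant can be sketched as follows. This is a minimal illustration, assuming small integer IDs; it tracks held IDs in a single bit mask and uses a right shift to test whether any equal- or higher-valued bit is already set (the class and method names are hypothetical):

```python
class BitSetResourceList:
    """Tracks held Resource IDs in a single integer bit mask."""
    def __init__(self):
        self.mask = 0   # bit i set => resource with ID i is held

    def mark_locked(self, res_id):
        # If any bit at position res_id or above is set, the client already
        # holds an equal- or higher-valued resource: ordering violation.
        if self.mask >> res_id:
            raise ValueError(f"ordering violation: cannot lock ID {res_id}")
        self.mask |= 1 << res_id

    def mark_unlocked(self, res_id):
        self.mask &= ~(1 << res_id)
```

Set and clear are single bitwise operations; the ordering check is the slightly costlier step the text mentions, here reduced to one shift-and-test.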
7.7.7 Related Patterns
There are two other patterns here that prevent deadlock. The Simultaneous Locking Pattern locks all the needed resources in a single critical section; other Resource Clients that need to run can do so as long as they don't request any of the same Shared Resources. If a small subset of the resources need to be locked at any given time or if the sets needed for different Resource Clients overlap only slightly, then the Simultaneous Locking Pattern works well.
The Priority Ceiling Pattern solves the deadlock problem as well, although the algorithm is significantly more complex. For that added sophistication, the Priority Ceiling Pattern also bounds priority inversion to a single level.
7.7.8 Sample Model
The example shown in Figure 7-18a provides a simple illustration of how the pattern works. Client 1 uses three Shared Resources: SR1 (ID=0), SR2 (ID=1), and SR3 (ID=2). Client 2 uses three Shared Resources: SR2 (ID=1), SR3 (ID=2), and SR4 (ID=3). They both, therefore, share SR2 and SR3.
Note that in the absence of some scheme to prevent deadlock (such as the use of the Ordered Locking Pattern), deadlock is easily possible in this configuration. Suppose Client 1 ran and locked SR2, and just as it was about to lock SR3, Client 2 (running in a higher-priority thread) preempted Client 1. Client 2 now locks SR3 and tries to lock SR2. It cannot, of course, because SR2 is already locked (by Client 1), and so it must block and allow Client 1 to run until it releases the resource. However, now Client 1 cannot successfully run because it needs SR3, and that is locked by Client 2. A classic deadlock situation. This particular scenario is not allowed under the Ordered Locking Pattern. Figure 7-18b shows what happens when this scenario is attempted.
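The moment the pattern intervenes can be sketched directly. Here `try_lock` is a hypothetical helper capturing just the ordering check from Figure 7-17b; it shows Client 2's attempt to lock SR2 (ID=1) after SR3 (ID=2) being rejected immediately, so Client 2 can never block on SR2 while holding SR3:

```python
class PoorlyOrderedAccess(Exception):
    pass

def try_lock(held_ids, res_id):
    """Apply the ordering rule; return the updated list of held IDs."""
    if held_ids and res_id <= max(held_ids):
        raise PoorlyOrderedAccess(
            f"ID {res_id} requested while holding ID {max(held_ids)}")
    return held_ids + [res_id]

# Client 2 locks SR3 (ID=2) first ...
client2 = try_lock([], 2)

# ... then attempts SR2 (ID=1): the Resource List throws the exception
# rather than letting Client 2 wait on a lower-valued resource.
try:
    client2 = try_lock(client2, 1)
except PoorlyOrderedAccess as e:
    print("rejected:", e)
```

Because the request is refused outright instead of blocking, the circular wait between Client 1 and Client 2 never forms.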
Figure 7-18: Ordered Locking Pattern