7.5 Priority Ceiling Pattern
The Priority Ceiling Pattern, or Priority Ceiling Protocol (PCP) as it is sometimes called, addresses two issues: bounding priority inversion (and hence bounding blocking time) and removing the possibility of deadlock. It is a relatively sophisticated approach, more complex than the previous methods. It is not as widely supported by commercial RTOSs, however, so implementing it often requires writing extensions to the RTOS.
7.5.1 Abstract
The Priority Ceiling Pattern is used to ensure bounded priority inversion and task blocking times and also to ensure that deadlocks due to resource contention cannot occur. It has somewhat more overhead than the Highest Locker Pattern. It is used in highly reliable multitasking systems.
7.5.2 Problem
The unbounded priority inversion problem is discussed in the chapter introduction in some detail. The Priority Ceiling Pattern exists to limit the maximum amount of priority inversion to a single level and to completely prevent resource-based deadlock.
7.5.3 Pattern Structure
Figure 7-12 shows the Priority Ceiling Pattern structure. The primary structural difference between the Priority Ceiling Pattern and the Highest Locker Pattern is the addition of a System Priority Ceiling attribute for the Scheduler. Behaviorally, there are some differences as well. The algorithm for starting and ending a critical section is shown in Figure 7-13.
Figure 7-12: Priority Ceiling Pattern
7.5.4 Collaboration Roles
Abstract Thread
The Abstract Thread class is an abstract (noninstantiable) superclass for Concrete Thread. Abstract Thread associates with the Scheduler. Since Concrete Thread is a subclass, it has the same interface to the Scheduler as the Abstract Thread. This enforces interface compliance. The Abstract Thread is an «active» object, meaning that when it is created, it creates an OS thread in which to run. It contains (that is, it has composition relations with) more primitive application objects that execute in the thread of the composite «active» object.
Concrete Thread
The Concrete Thread is an «active» object most typically constructed to contain passive "semantic" objects (via the composition relation) that do the real work of the system. The Concrete Thread object provides a straightforward means of attaching these semantic objects into the concurrency architecture. Concrete Thread is an instantiable subclass of Abstract Thread.
Mutex
The Mutex is a mutual exclusion semaphore object that permits only a single caller through at a time. The operations of the Shared Resource invoke it whenever a relevant service is called, locking it prior to starting the service and unlocking it once the service is complete. Threads that attempt to invoke a service while the Mutex is locked become blocked until the Mutex returns to its unlocked state. Blocking is done by the Mutex semaphore signaling the Scheduler that a call attempt was made by the currently active thread, passing the Mutex ID (necessary to unblock the correct Thread later when the Mutex is released) and the entry point (the place at which to continue execution of the Thread). See Figure 7-13 for the algorithms that control locking, blocking, and releasing the Mutex.
Figure 7-13: Priority Ceiling Pattern Resource Algorithm
Scheduler
This object orchestrates the execution of multiple threads based on their priority according to a simple rule: Always run the ready thread with the highest priority. When the «active» Thread object is created, it (or its creator) calls the createThread operation to create a thread for the «active» object. Whenever this thread is executed by the Scheduler, it calls the StartAddr address (except when the thread has been blocked or preempted, in which case it calls the EntryPoint address).
In this pattern, the Scheduler has some special duties when the Mutex signals an attempt to access a locked resource. Specifically, under some conditions, it must block the requesting task (done by stopping that task and placing a reference to it in the Blocked Queue; the queue is not shown here, but for details see the Static Priority Pattern in Chapter 5), and it must elevate the priority of the Thread holding the resource to that of the highest-priority blocked Thread. This is easy to determine, since the Blocked Queue is a priority FIFO: the highest-priority blocked task is the first one in that queue. Similarly, when the Thread releases the resource, the Scheduler must lower its priority back to its nominal priority. The Scheduler maintains the value of the highest priority ceiling of all currently locked resources in its attribute systemPriorityCeiling.
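The blocking and elevation behavior described above can be sketched in Python. This is a minimal model, not an implementation: the class and method names are illustrative, a larger number is assumed to mean a higher priority, and the sketch follows the classic PCP rule in which a task may lock a resource only if its priority is strictly higher than the ceiling of every resource locked by other tasks.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.nominal_priority = priority  # fixed design-time priority
        self.priority = priority          # current (possibly elevated) priority

class Resource:
    def __init__(self, name, priority_ceiling):
        self.name = name
        self.priority_ceiling = priority_ceiling
        self.locked_by = None             # task currently holding the resource

class Scheduler:
    """Tracks locked resources and applies the priority ceiling rule."""
    def __init__(self):
        self.locked = []  # resources currently locked

    def system_ceiling(self, requester):
        # Highest ceiling among resources held by *other* tasks.
        return max((r.priority_ceiling for r in self.locked
                    if r.locked_by is not requester), default=0)

    def try_lock(self, task, resource):
        # Grant the lock only if the resource is free and the task's
        # priority strictly exceeds the system ceiling set by others.
        if resource.locked_by is None and \
           task.priority > self.system_ceiling(task):
            resource.locked_by = task
            self.locked.append(resource)
            return True
        # Otherwise the task blocks, and the task holding the
        # ceiling-setting resource inherits the blocked task's priority.
        blockers = [r for r in self.locked if r.locked_by is not task]
        holder = max(blockers, key=lambda r: r.priority_ceiling).locked_by
        holder.priority = max(holder.priority, task.priority)
        return False

    def unlock(self, task, resource):
        self.locked.remove(resource)
        resource.locked_by = None
        # Simplification: revert straight to the nominal priority. A
        # full implementation reverts to the highest priority among
        # tasks still blocked on resources this task holds.
        task.priority = task.nominal_priority
```

Note how a low-priority task that locks a resource whose ceiling equals a higher task's priority immediately prevents that higher task from locking anything with that ceiling, which is exactly what bounds the inversion to a single critical section.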
Shared Resource
A resource is an object shared by one or more Threads. For the system to operate properly in all cases, all Shared Resources must either be reentrant (meaning that corruption from simultaneous access cannot occur), or they must be protected. In the case of a protected resource, when a Thread attempts to use the resource, the associated mutex semaphore is checked, and if locked, the calling task is placed into the Blocked Queue. The task is suspended, with its reentry point noted in the TCB.
The SharedResource has a constant attribute (note the «frozen» constraint in Figure 7-12) called priorityCeiling. This is set during design to just greater than the priority of the highest-priority task that can ever access the resource. In some RTOSs this means the ceiling will be one more than that priority (when a larger number indicates a higher priority), and in others it will be one less (when a lower number indicates a higher priority). This ensures that when the resource is locked, no other task that uses the resource can preempt the task holding it.
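As a sketch, the design-time ceiling computation under the two numbering conventions mentioned above might look like this (the function name and signature are illustrative):

```python
def priority_ceiling(accessor_priorities, larger_is_higher=True):
    """Design-time ceiling for a resource: one step above the highest
    priority of any task that can ever access it."""
    if larger_is_higher:
        return max(accessor_priorities) + 1
    # Here a smaller number indicates a higher priority.
    return min(accessor_priorities) - 1
```

For example, with priorities 1 (low) through 3 (high), a resource shared by tasks at priorities 1 and 3 gets a ceiling of 4 under the first convention.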
Task Control Block
The TCB contains the scheduling information for its corresponding Thread object. This includes the priority of the thread, the default start address and the current entry address if it was preempted or blocked prior to completion. The Scheduler maintains a TCB object for each existing Thread. Note that TCB typically also has a reference off to a call and parameter stack for its Thread, but that level of detail is not shown here. The TCB tracks both the current priority of the Thread (which may have been elevated due to resource access and blocking) and its nominal priority.
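A minimal Python sketch of such a TCB, tracking the nominal and current priorities as described (the field names are illustrative, and the call-stack reference is omitted):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class TCB:
    """Scheduling data for one Thread."""
    nominal_priority: int
    start_address: Callable[[], None]
    # Resume point if the thread was preempted or blocked before completion.
    entry_point: Optional[Callable[[], None]] = None
    current_priority: int = field(init=False, default=0)

    def __post_init__(self):
        self.current_priority = self.nominal_priority

    def elevate(self, priority: int) -> None:
        # Raise (never lower) the current priority, as happens when a
        # higher-priority task blocks on a resource this thread holds.
        self.current_priority = max(self.current_priority, priority)

    def deescalate(self) -> None:
        # Restore the nominal priority when the resource is released.
        self.current_priority = self.nominal_priority
```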
7.5.5 Consequences
This pattern effectively enforces the desirable property that a high-priority task can at most be blocked from execution by a single critical section of a lower-priority task owning a required resource.
It can happen in the Priority Ceiling Pattern that a running task may be unable to access a resource even though that resource is not currently locked. This occurs when the task's priority does not exceed the current system priority ceiling (the Scheduler's systemPriorityCeiling attribute), which was set when another task locked some resource.
Deadlock is prevented by this pattern because condition 4 (circular wait) is prevented. Any condition that could potentially lead to circular waiting is prohibited. This does mean that a task may be prevented from accessing a resource even though it is currently unlocked.
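A compact sketch of how the ceiling rule breaks circular waiting: two tasks lock two shared resources in opposite orders, which risks deadlock under plain mutexes, but the second task cannot even acquire its first resource while the first task holds one. All names and priorities are illustrative, larger numbers mean higher priority, and priorities double as task identifiers in this toy model.

```python
# Two tasks, two resources, opposite lock orders. t1 has priority 1,
# t2 has priority 2; both resources are used by both tasks, so both
# ceilings equal the higher priority, 2.
ceiling = {"R1": 2, "R2": 2}
locked = {}  # resource name -> priority of the task holding it

def try_lock(task_priority, resource):
    # PCP rule: grant only if the resource is free and the task's
    # priority exceeds the ceiling of every resource held by others.
    other_ceilings = [ceiling[r] for r, p in locked.items()
                      if p != task_priority]
    if resource not in locked and task_priority > max(other_ceilings,
                                                      default=0):
        locked[resource] = task_priority
        return True
    return False

assert try_lock(1, "R1")   # t1 acquires its first resource
# t2 preempts and requests R2, its own first resource. R2 is free,
# but t1 already holds a resource whose ceiling is 2, so the request
# is denied: t2 can never reach the state "holds R2, waits for R1",
# and the circular wait needed for deadlock cannot form.
assert not try_lock(2, "R2")
```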
There is also a consequence of computational overhead associated with the Priority Ceiling Pattern. This pattern is the most sophisticated of the resource management patterns presented in this chapter and has the highest computational overhead.
7.5.6 Implementation Strategies
Rather few RTOSs support the Priority Ceiling Pattern, but it can be added if the RTOS permits extensions, particularly hooks that run when a mutex is locked or released. If not, you can create your own Mutex and System Resource Ceiling classes that perform the priority management before handing off control to the internal RTOS scheduler. If you are writing your own scheduler, the implementation is a relatively straightforward extension of the Highest Locker Pattern.
7.5.7 Related Patterns
Because this pattern is the most sophisticated, it also has the most computational overhead. Under some circumstances, it may therefore be desirable to use a computationally cheaper, if less capable, approach such as the Highest Locker Pattern, the Priority Inheritance Pattern, or even the Critical Section Pattern.
7.5.8 Sample Model
A robotic control system is given as an example in Figure 7-14a. There are three tasks. The lowest-priority task, Command Processor, inserts commands into a shared resource, the Command Queue. The middle-priority task, Safety Monitor, performs periodic safety monitoring, accessing the shared resource Robot Arm. The highest-priority task, Robotic Planner, accepts commands (and hence must access the Command Queue) and also moves the arm (and therefore must access Robot Arm). Note that the resource ceiling of both resources must be the priority of the highest-priority task in this case, because that task accesses both of these resources.
Figure 7-14: Priority Ceiling Pattern
Figure 7-14b shows a scenario for the example. At point A, the Command Processor runs, putting a set of commands into the Command Queue. The call locks the resource successfully because at this point there are no locked resources. While this is happening, the Safety Monitor starts to run at point B. It preempts the Command Processor because it has a higher priority, so the Command Processor enters a waiting state: it is ready to run but cannot, because a higher-priority task is running. Now the Safety Monitor attempts to access the second resource, Robot Arm. Because a resource with the same priority ceiling is already locked (which the Scheduler discovers by examining its systemPriorityCeiling attribute), that call is blocked. Note that the Safety Monitor is blocked even though the resource it is trying to access is not currently locked; granting the access could begin a circular waiting condition, potentially leading to deadlock, so the access is prevented.
When the Safety Monitor's resource access is denied, the priority of the Command Processor is elevated to Medium, the same level as the highest-priority blocked task. At point C, the Robot Planner runs, preempting the Command Processor task. The Robot Planner invokes Command Queue.Get() to retrieve any waiting commands but finds that this resource is locked. Its access is therefore blocked, it is put on the blocked queue, and the Command Processor task resumes, but now at priority High.
When the call to Command Queue.put() finally completes, the priority of the Command Processor task is deescalated back to its nominal priority, Low (point D). At this point in time, there are two tasks of higher priority waiting to run. The higher-priority of the two, the Robot Planner, runs at its normal High priority. It accesses first the Command Queue resource and then the Robot Arm resource. When it completes, the next-highest-priority task ready to run is the Safety Monitor. It runs, accessing the Robot Arm resource. When it completes, the lowest-priority task, the Command Processor, is allowed to complete its work and return control to the OS.
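This trace can be replayed with a compact model of the ceiling rule. The task and resource names come from Figure 7-14, but the code itself is a sketch: it assumes larger numbers mean higher priority and that a blocked request elevates the priority of the task holding the ceiling-setting resource.

```python
# Priorities: Low = 1, Medium = 2, High = 3.
# Both resources are used by the High task, so both ceilings are 3.
CEIL = {"CommandQueue": 3, "RobotArm": 3}

prio = {"CommandProcessor": 1, "SafetyMonitor": 2, "RobotPlanner": 3}
nominal = dict(prio)
locked = {}  # resource -> owning task

def sys_ceiling(task):
    return max((CEIL[r] for r, t in locked.items() if t != task), default=0)

def request(task, res):
    if res not in locked and prio[task] > sys_ceiling(task):
        locked[res] = task
        return True
    # Blocked: the owner of the ceiling-setting resource inherits
    # the blocked task's priority.
    top = max((r for r in locked if locked[r] != task), key=CEIL.get)
    owner = locked[top]
    prio[owner] = max(prio[owner], prio[task])
    return False

def release(task, res):
    del locked[res]
    prio[task] = nominal[task]  # deescalate to nominal priority

# Point A: Command Processor locks the Command Queue.
assert request("CommandProcessor", "CommandQueue")
# Point B: Safety Monitor tries Robot Arm and is blocked even though
# Robot Arm is free; Command Processor inherits Medium.
assert not request("SafetyMonitor", "RobotArm")
assert prio["CommandProcessor"] == 2
# Point C: Robot Planner tries the Command Queue and is blocked;
# Command Processor now runs at High.
assert not request("RobotPlanner", "CommandQueue")
assert prio["CommandProcessor"] == 3
# Point D: put() completes; Command Processor drops back to Low.
release("CommandProcessor", "CommandQueue")
assert prio["CommandProcessor"] == 1
```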