Synchronization
Terms
For the purposes of this section, the term priority should be interpreted somewhat more loosely than in conventional usage. In particular, the term highest priority thread merely indicates the most eligible thread, that is, the thread that the dispatcher would choose from among all of the threads that are ready to run; it does not necessarily presume a strict priority-based dispatch mechanism.
Wait Queues
Threads waiting to acquire a resource must be released in execution eligibility order. This applies to the processor as well as to synchronized blocks. If the active scheduling policy allows threads with the same execution eligibility, such threads are awakened in FIFO order. For example (a brief illustrative sketch follows this list):
- Threads waiting to enter synchronized blocks are granted access to the synchronized block in execution eligibility order.
- A blocked thread that becomes ready to run is given access to the processor in execution eligibility order.
- A thread whose execution eligibility is explicitly set by itself or another thread is given access to the processor in execution eligibility order.
- A thread that performs a yield is given access to the processor after waiting threads of the same execution eligibility.
- Threads that are preempted in favor of a thread with higher execution eligibility may be given access to the processor at any time, as determined by the particular implementation. The implementation is required to document exactly the algorithm used for granting such access.
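The sketch below, written against the javax.realtime API, illustrates this ordering: two RealtimeThreads with different PriorityParameters contend for the same synchronized block while the main thread holds it, and when the lock is released the more eligible contender is expected to enter first. The thread names, priority offsets, and the 100 ms hold time are illustrative only, not part of the specification.

```java
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class WaitQueueOrderingSketch {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        PriorityScheduler sched = PriorityScheduler.instance();
        int base = sched.getMinPriority();

        // Two contenders with different execution eligibilities; the more
        // eligible thread should be granted the monitor first once it is free.
        RealtimeThread low  = makeContender(new PriorityParameters(base + 1), "low");
        RealtimeThread high = makeContender(new PriorityParameters(base + 2), "high");

        synchronized (lock) {
            low.start();
            high.start();
            // Hold the lock briefly so both contenders queue up on it.
            try { Thread.sleep(100); } catch (InterruptedException e) { /* ignore */ }
        } // On release, "high" is expected to enter before "low".
    }

    private static RealtimeThread makeContender(PriorityParameters prio, final String name) {
        return new RealtimeThread(prio) {
            public void run() {
                synchronized (lock) {
                    System.out.println(name + " entered the synchronized block");
                }
            }
        };
    }
}
```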
Priority Inversion Avoidance
Any conforming implementation must provide an implementation of the synchronized primitive whose default behavior ensures that there is no unbounded priority inversion. Furthermore, this default must apply to code running within the implementation as well as to real-time threads. The priority inheritance protocol must be implemented by default. Priority inheritance is a well-known algorithm in the real-time scheduling literature, and it has the following effect: if thread t1 attempts to acquire a lock that is held by a lower-priority thread t2, then t2's priority is raised to that of t1 for as long as t2 holds the lock (and this applies recursively if t2 is itself blocked waiting to acquire a lock held by an even lower-priority thread).
The specification also provides a mechanism by which the programmer can override the default system-wide policy, or control the policy to be used for a particular monitor, provided that policy is supported by the implementation. The monitor control policy specification is extensible so that new mechanisms can be added by future implementations.
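As a rough sketch of that mechanism, the javax.realtime.MonitorControl class exposes static methods for setting both the system-wide default policy and the policy of an individual monitor. The exact factory-method names and signatures vary between RTSJ versions, and the ceiling value below is purely illustrative.

```java
import javax.realtime.MonitorControl;
import javax.realtime.PriorityCeilingEmulation;
import javax.realtime.PriorityInheritance;

public class MonitorPolicySketch {
    private static final Object sharedLock = new Object();

    public static void main(String[] args) {
        // System-wide default: priority inheritance (already the RTSJ default).
        MonitorControl.setMonitorControl(PriorityInheritance.instance());

        // Override the policy for one particular monitor, provided the
        // implementation supports priority ceiling emulation.
        MonitorControl.setMonitorControl(sharedLock,
                                         PriorityCeilingEmulation.instance(25));
    }
}
```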
A second policy, the priority ceiling emulation protocol (or highest locker protocol), is also specified for systems that support it. The highest locker protocol is likewise a well-known algorithm in the literature, and it has the following effect (a brief usage sketch follows this list):
- With this policy, a monitor is given a priority ceiling when it is created, which is the highest priority of any thread that could attempt to enter the monitor.
- As soon as a thread enters synchronized code, its priority is raised to the monitor's ceiling priority, thus ensuring mutually exclusive access to the code, since it will not be preempted by any thread that could possibly attempt to enter the same monitor.
- If, through programming error, a thread has a higher priority than the ceiling of the monitor it is attempting to enter, then an exception is thrown.
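The following hedged sketch shows how a programmer might apply priority ceiling emulation to a single monitor. The ceiling of 20, the worker priority of 18, and the class and field names are all illustrative, and the specific exception raised on a ceiling violation differs between RTSJ versions.

```java
import javax.realtime.MonitorControl;
import javax.realtime.PriorityCeilingEmulation;
import javax.realtime.PriorityParameters;
import javax.realtime.RealtimeThread;

public class CeilingSketch {
    // Monitor shared only by threads whose priorities never exceed CEILING.
    private static final Object counterLock = new Object();
    private static final int CEILING = 20;   // highest priority of any potential locker
    private static int counter;

    public static void main(String[] args) {
        // While a thread holds counterLock it runs at the ceiling priority and so
        // cannot be preempted by any other thread that might enter the same monitor.
        MonitorControl.setMonitorControl(counterLock,
                                         PriorityCeilingEmulation.instance(CEILING));

        RealtimeThread worker = new RealtimeThread(new PriorityParameters(18)) {
            public void run() {
                synchronized (counterLock) {   // permitted: 18 <= ceiling of 20
                    counter++;
                }
            }
        };
        worker.start();

        // A thread with priority 25 (> ceiling) that tried to enter counterLock
        // would signal the programming error with a runtime exception.
    }
}
```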
Consider the design point given above, the two new thread types, RealtimeThread and NoHeapRealtimeThread, and regular Java threads, along with the issues that can arise when a NoHeapRealtimeThread and a regular Java thread attempt to synchronize on the same object. NoHeapRealtimeThreads have an implicit execution eligibility that must be higher than that of the garbage collector. This is fundamental to the RTSJ. However, given that regular Java threads may never have an execution eligibility higher than that of the garbage collector, no known priority inversion avoidance algorithm can be correctly implemented when an object is shared between a regular Java thread and a NoHeapRealtimeThread, because the algorithm may not raise the priority of the regular Java thread higher than that of the garbage collector. Some mechanism other than the synchronized keyword is therefore needed to provide non-blocking, protected access to objects shared between regular Java threads and NoHeapRealtimeThreads.
Note that, because the RTSJ requires that the execution of NoHeapRealtimeThreads must not be delayed by the execution of the garbage collector, it is impossible for a NoHeapRealtimeThread to synchronize, in the classic sense, on an object accessed by regular Java threads. The RTSJ therefore provides three wait-free queue classes that offer protected, non-blocking, shared access to objects accessed by both regular Java threads and NoHeapRealtimeThreads. These classes exist explicitly to enable communication between the real-time execution of NoHeapRealtimeThreads and regular Java threads.
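The sketch below shows one plausible use of WaitFreeWriteQueue for communication from a NoHeapRealtimeThread producer to a regular Java thread consumer. It assumes the capacity-only constructor found in later RTSJ revisions, and it only outlines the allocation-context constraints (the NoHeapRealtimeThread instance and the queued payloads must not live on the heap).

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.NoHeapRealtimeThread;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.WaitFreeWriteQueue;

public class WaitFreeQueueSketch {
    public static void main(String[] args) throws Exception {
        // Bounded queue shared by the two threads; a capacity-only constructor
        // is assumed here, and signatures differ between RTSJ versions.
        final WaitFreeWriteQueue queue = new WaitFreeWriteQueue(16);
        final int max = PriorityScheduler.instance().getMaxPriority();

        // Regular Java thread: read() may block until an element is available.
        new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("got: " + queue.read());
                } catch (Exception e) { /* ignored in this sketch */ }
            }
        }).start();

        // The NoHeapRealtimeThread object itself must live outside the heap, so
        // it is constructed while immortal memory is the allocation context.
        ImmortalMemory.instance().executeInArea(new Runnable() {
            public void run() {
                new NoHeapRealtimeThread(new PriorityParameters(max),
                                         ImmortalMemory.instance()) {
                    public void run() {
                        try {
                            // write() is the wait-free side: the producer never
                            // blocks on the consumer or on the garbage collector.
                            queue.write("ping");   // payload must not be heap-allocated
                        } catch (Exception e) { /* ignored in this sketch */ }
                    }
                }.start();
            }
        });
    }
}
```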
Determinism
Conforming implementations shall provide a fixed upper bound on the time required to enter a synchronized block for an unlocked monitor.
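Although the bound itself is an implementation obligation rather than something application code can set, the uncontended entry cost can at least be observed. The plain-Java sketch below samples it with System.nanoTime(); the iteration count is arbitrary, and the observed worst case on a conforming implementation should stay under the documented bound.

```java
public class LockEntryTimingSketch {
    private static final Object lock = new Object();
    private static volatile long sink;

    public static void main(String[] args) {
        long worst = 0;
        // Sample the cost of entering an uncontended synchronized block many times.
        for (int i = 0; i < 100000; i++) {
            long start = System.nanoTime();
            synchronized (lock) {
                sink = start;   // trivial body keeps the block from being elided
            }
            long elapsed = System.nanoTime() - start;
            if (elapsed > worst) {
                worst = elapsed;
            }
        }
        System.out.println("worst observed uncontended entry: " + worst + " ns");
    }
}
```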