- A Brief History of Real-Time Java
- Major Features of the Specification
- Implementation
- RTSJ Hello World
Major Features of the Specification
The Real-Time Specification for Java enhances the Java specification in six ways:
- It adds real-time threads. These threads have scheduling attributes that are more carefully defined than is the scheduling for ordinary Java threads.
- It adds tools and mechanisms that help programmers write Java code that does not need garbage collection.
- It adds an asynchronous event handler class and a mechanism that associates asynchronous events with happenings outside the JVM.
- It adds a mechanism called asynchronous transfer of control that lets a thread change control flow in another thread. It is, essentially, a carefully controlled way for one thread to throw an exception into another thread.
- It adds mechanisms that let the programmer control where objects will be allocated in memory.
- It adds a mechanism that lets the programmer access memory at particular addresses.
What the Real-Time Specification for Java does not change may be as important as what it does change. Ordinary Java programs will run on an implementation of the Real-Time Specification. They can even run while the JVM is executing real-time code. There is no magic that will make ordinary Java programs more timely when they run on a JVM that implements the Real-Time Specification, but they won't behave any worse than they did.
Furthermore, non-real-time code will not interfere with real-time code unless they share resources.
Threads and Scheduling
Whether it is by priority scheduling, periodic scheduling, or deadline scheduling, the way tasks are scheduled on the processor is central to real-time computing. Non-real-time environments (like a standard JVM) can be casual about the details of scheduling, but a real-time environment must be much more precise. The specification treads a line between being specific enough to let designers reason about the way things will run and flexible enough to permit innovative implementations of the RTSJ. For instance, there is only one method for which the RTSJ requires every implementation to meet a particular performance goal: allocation from an LTMemory area.
LTMemory Performance
The RTSJ specifies a high standard for the performance of LTMemory allocation because that allocation mechanism is intended for use in the tightest time-critical code. The specification is trying to assure designers that allocation of LTMemory is safe for critical real-time code.
Allocation from an LTMemory area must take time that is linear in the size of the allocated object. That is the best possible allocation performance. Memory allocation in the JVM has several stages: first the right amount of free memory is located, then the memory is initialized in various stages under the control of the JVM and the class constructors. Every field in the object must be initialized before that field is used. In some cases, some initialization can be deferred, but ultimately every byte in the object is initialized.
Since initialization alone takes time linear in the object's size, the implementor can use any allocation algorithm whose performance is at least as good as, that is, not asymptotically worse than, initialization of the allocated memory.
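Here is a minimal sketch of how that guarantee is used in practice, assuming an RTSJ implementation is on the classpath; the sizes and the placeholder work are arbitrary. Giving the LTMemory area equal initial and maximum sizes means its backing store exists before the time-critical code runs, so each allocation inside it costs time linear in the size of the object being created.

```java
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class LinearTimeAllocation {
    public static void main(String[] args) {
        RealtimeThread control = new RealtimeThread() {
            public void run() {
                // 32 KB preallocated: initial size == maximum size.
                LTMemory scratch = new LTMemory(32 * 1024, 32 * 1024);
                scratch.enter(new Runnable() {
                    public void run() {
                        // Each allocation takes predictable, linear time.
                        byte[] frame = new byte[512];
                        int[] checksums = new int[16];
                        frame[0] = 0x7e;            // stand-in for real work
                        checksums[0] = frame[0];
                    }
                });
            }
        };
        control.start();
    }
}
```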
The RTSJ includes priority scheduling because it is almost universally used in commercial real-time systems and because all legacy Java applications use priority scheduling. The RTSJ requires at least 28 real-time priorities in addition to the 10 priorities called for by the normal JVM specification, and it calls for strict fixed-priority preemptive scheduling of those real-time priorities: a lower-priority thread must never run when a higher-priority thread is ready. The RTSJ also requires the priority inheritance protocol as the default for locks shared between real-time threads, permits the priority ceiling emulation protocol as an alternative, and provides a hook for other protocols.
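As a rough sketch of how those priorities surface in the API (the thread body and the particular priority chosen here are placeholders, not part of the specification's text):

```java
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class PrioritySketch {
    public static void main(String[] args) {
        PriorityScheduler scheduler = PriorityScheduler.instance();
        int max = scheduler.getMaxPriority();   // at least 28 above the normal 10
        int min = scheduler.getMinPriority();
        System.out.println("real-time priorities: " + min + " .. " + max);

        RealtimeThread controlLoop =
                new RealtimeThread(new PriorityParameters(max - 1)) {
            public void run() {
                // Time-critical work: under strict fixed-priority preemptive
                // scheduling, no lower-priority thread runs while this one is ready.
            }
        };
        controlLoop.start();
    }
}
```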
The RTSJ provides room for implementors to support other schedulers. The specification does not define the way new schedulers will be integrated with the system; it only says that an implementor may provide alternate schedulers and defines scheduler APIs that are general enough to support a wide variety of scheduling algorithms.
Sock Scheduling
While the Expert Group was refining the scheduling interfaces, we invented a series of sock schedulers that would schedule according to various properties of socks, to keep ourselves from designing interfaces that would accommodate only the schedulers we already knew.
Garbage Collection
The standard Java specification does not require garbage collection. The language requires dynamic memory allocation and provides no mechanism for explicitly freeing memory, yet the Java Language Specification does not mandate any particular solution to this built-in memory leak. Almost every JVM has a garbage collector, but none is required.
GC-less JVM
David Hardin (on the Expert Group) had extensive experience with the Rockwell Collins JEM chip. It is a hardware implementation of Java and has no garbage collector. Programming with no garbage collector requires discipline, and many standard Java idioms become convoluted, but it works well enough that the JEM has become a modestly successful Java platform.
The RTSJ continues the policy of the original Java specification. The RTSJ discusses interactions with a garbage collector at length, but a Java runtime with no garbage collector could meet the specification.
The RTSJ, although it does not require a garbage collector, specifies at least one API that provides for a particular class of garbage collection algorithm. Incremental garbage collectors that pace their operation to the rate at which threads create garbage are promising for real-time systems. Garbage collection can be scheduled as an overhead charge on the threads that create garbage and execute in brief intervals that do not disrupt other activities. The RTSJ has a constructor for real-time threads; the constructor includes a memory-parameters argument that can specify the allocation rate the garbage collector and scheduler should expect from the thread.
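A minimal sketch of that constructor argument follows; the 64 KB-per-second figure and the logger thread are made-up examples of a thread declaring the allocation rate a pacing collector and the scheduler should budget for.

```java
import javax.realtime.MemoryParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class AllocationRateSketch {
    public static void main(String[] args) {
        MemoryParameters memory = new MemoryParameters(
                MemoryParameters.NO_MAX,   // no cap on allocation in its memory area
                MemoryParameters.NO_MAX,   // no cap on immortal allocation
                64 * 1024);                // expected heap allocation rate, bytes/second

        PriorityParameters priority = new PriorityParameters(
                PriorityScheduler.instance().getMinPriority() + 10);

        RealtimeThread logger = new RealtimeThread(
                priority, null, memory, null, null, new Runnable() {
            public void run() {
                // Work that allocates from the heap at (or under) the declared rate.
            }
        });
        logger.start();
    }
}
```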
The Expert Group did not feel comfortable requiring a magical garbage collector and relying on it to make all the real-time problems with garbage collection disappear. Instead, we took the attitude that even the best-behaved garbage collector may sometimes be more trouble than it is worth to the real-time programmer. An implementation can provide any (correct) garbage collection algorithm it likes, and users will certainly appreciate a good one, but for real-time programming, the RTSJ provides ways to write Java code that will never be delayed by garbage collection.
The first tool for avoiding garbage collection is no-heap, real-time threads. These threads are not allowed to access memory in the heap. Since there is no interaction between no-heap threads and garbage collection or compaction, no-heap threads can preempt the garbage collector without waiting for the garbage collector to reach a consistent state. Ordinary threads and heap-using, real-time threads can be delayed by garbage collection when they create objects in the heap, and they have to wait for the garbage collector to reach a consistent state if they are activated while the garbage collector is running. No-heap, real-time threads are protected from these timing problems.
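A minimal sketch of constructing a no-heap, real-time thread, under the assumption that both the thread object and everything it allocates must live outside the heap; the "watchdog" work shown is a placeholder.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.NoHeapRealtimeThread;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class NoHeapSketch {
    public static void main(String[] args) {
        RealtimeThread launcher = new RealtimeThread() {
            public void run() {
                // Construct the no-heap thread from inside immortal memory so
                // the thread object itself is not a heap object.
                ImmortalMemory.instance().enter(new Runnable() {
                    public void run() {
                        PriorityParameters top = new PriorityParameters(
                                PriorityScheduler.instance().getMaxPriority());
                        NoHeapRealtimeThread watchdog = new NoHeapRealtimeThread(
                                top, ImmortalMemory.instance()) {
                            public void run() {
                                // May never load a reference to a heap object, so it
                                // can preempt the garbage collector at any point
                                // without waiting for a consistent state.
                                StringBuffer status =
                                        new StringBuffer("watchdog armed");
                            }
                        };
                        watchdog.start();
                    }
                });
            }
        };
        launcher.start();
    }
}
```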
Asynchronous Event Handlers
Many real-time systems are event driven. Things happen and the system responds to them. Structuring an event-driven system so that each event is serviced by a thread created for that particular event is easy to code, and it makes the scheduling attributes of each event clear to the scheduler. The idea sounds obvious. Why isn't it common practice? Because the time between an event and its service is overhead on real-time responsiveness, and thread creation is slow: it is a resource-allocation operation, and real-time programmers avoid resource allocation when they are concerned about time.
Asynchronous event handlers are an attempt to capture the advantages of creating threads to service events without taking the performance penalty.
Event-driven programming needs events. The standard Java platform has extensive mechanisms for input from its GUI, but no general-purpose mechanism for associating things that happen outside the Java environment with method invocation inside the environment. The RTSJ introduces happenings as a pathway between events outside the Java platform and asynchronous event handlers.
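A minimal sketch of that machinery follows; the happening name "INTERRUPT_7" is a made-up, platform-dependent string, and a real platform's documentation lists the happenings it actually supports.

```java
import javax.realtime.AsyncEvent;
import javax.realtime.AsyncEventHandler;

public class EventSketch {
    public static void main(String[] args) throws Exception {
        AsyncEvent overTemp = new AsyncEvent();

        AsyncEventHandler alarm = new AsyncEventHandler() {
            public void handleAsyncEvent() {
                // Runs each time the event fires, without paying for a freshly
                // created thread on every occurrence.
                System.out.println("over-temperature event handled");
            }
        };
        overTemp.addHandler(alarm);

        // Associate the event with something that happens outside the JVM
        // (fails if the platform does not recognize the happening name)...
        overTemp.bindTo("INTERRUPT_7");
        // ...or fire it from Java code.
        overTemp.fire();
    }
}
```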
Asynchronous Transfer of Control
Asynchronous transfer of control was a late addition to the RTSJ, and it was much harder to invent than you might think.
Asynchronous transfer of control (ATC) is a mechanism that lets a thread throw an exception into another thread. Standard Java includes a similar mechanism, Thread.interrupt, but it is weak.
Why is ATC so important?
- It is a way to cancel a thread in a forcible but controlled way.
- It is a way to break a thread out of a loop without requiring the thread to poll a "terminate me" variable.
- It is a general-purpose timeout mechanism, as sketched below.
- It lets sophisticated runtimes take scheduler-like control of execution. People interested in distributed real time have powerful requirements for this control.
Why is ATC so hard?
- Code that is not written to be interrupted may break badly if the JVM suddenly jumps out of it.
- You cannot just jump from the current point of execution to the "right" catch block. The platform has to unwind execution through catch and finally clauses in uninterruptible methods until it has fully serviced the exception.
- Nested methods may be waiting for different asynchronous exceptions. The runtime has to make certain that the exceptions get to the right catch blocks.
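Here is a rough sketch of the timeout use of ATC mentioned in the first list: Timed is an asynchronously interrupted exception that fires itself after a given interval, and the throws clause on the Interruptible's run method marks that code as willing to have control yanked away. The 10-millisecond budget and the spinning loop are placeholders.

```java
import javax.realtime.AsynchronouslyInterruptedException;
import javax.realtime.Interruptible;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;
import javax.realtime.Timed;

public class TimeoutSketch {
    public static void main(String[] args) {
        RealtimeThread worker = new RealtimeThread() {
            public void run() {
                Timed deadline = new Timed(new RelativeTime(10, 0)); // 10 ms budget
                deadline.doInterruptible(new Interruptible() {
                    public void run(AsynchronouslyInterruptedException e)
                            throws AsynchronouslyInterruptedException {
                        // Asynchronously interruptible section.
                        while (true) {
                            // Stands in for a computation that might overrun its budget.
                            Thread.yield();
                        }
                    }
                    public void interruptAction(
                            AsynchronouslyInterruptedException e) {
                        // Runs after the asynchronous exception has been serviced.
                        System.out.println("timed out; cleaning up");
                    }
                });
            }
        };
        worker.start();
    }
}
```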
Memory Allocation
By itself, support for no-heap, real-time threads would be useless. The thread would be restricted to elementary data types. It would not even be able to access its own thread object. The RTSJ created two new memory allocation domains to give no-heap threads access to objects: immortal memory and scoped memory.
Immortal memory is never garbage collected and would make no-heap threads thoroughly usable even without scoped memory. Immortal memory fits the large class of real-time programs that allocate all their resources in an initialization phase and then run forever without allocating or freeing any resources. Even systems written in C and assembly language use this paradigm. Even without garbage collection, resource allocation often has tricky timing characteristics and nasty failure modes. It makes sense to move it out of the time-critical part of an application.
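A minimal sketch of that paradigm, with a made-up Config class and sample buffer: everything long-lived is allocated in immortal memory during an initialization phase, and the steady-state loop then runs without allocating at all.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.RealtimeThread;

public class InitOnceSketch {
    static class Config { int sensorCount = 8; }    // made-up configuration object

    static Config config;
    static double[] samples;

    public static void main(String[] args) {
        RealtimeThread app = new RealtimeThread() {
            public void run() {
                // Initialization phase: long-lived objects go into immortal
                // memory, which is never garbage collected.
                ImmortalMemory.instance().enter(new Runnable() {
                    public void run() {
                        config = new Config();
                        samples = new double[4096];
                    }
                });
                // Steady state: run forever without allocating or freeing.
                while (true) {
                    for (int i = 0; i < config.sensorCount; i++) {
                        samples[i] *= 0.9;          // stand-in for real work
                    }
                }
            }
        };
        app.start();
    }
}
```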
Immortal memory is simple to explain and implement, but it leads to unnatural use of the Java language:
The Java Platform does not encourage reuse of objects. In some cases, properties of objects can only be set by their constructor, and the Java language's strong typing makes it impossible to reuse an object as anything other than exactly its original type. (The Java language has no union.)
The Java class libraries freely create objects. A programmer who called innocuous methods in the collections classes or the math classes could quickly find immortal memory overflowing with throwaway objects created in the class libraries. Real-time code is not compelled to use standard class libraries, but those class libraries are a major attraction of Java and the effort involved in recoding them all to real-time standards would be staggering.
Scoped memory isn't as simple as immortal memory, but it goes a long way toward addressing the problems with immortal memory. In simple applications, scoped memory works like a stack for objects. When the thread enters a memory scope, it starts allocating objects from that scope. It continues allocating objects there until it enters a nested scope or exits from the scope. After the thread exits the scope, it can no longer access objects allocated there and the JVM is free to recover the memory used there.
If a thread enters a scope before calling a method in a standard class library and leaves the scope shortly after returning, all objects allocated by the method will be contained in the scope and freed when the thread leaves the scope. Programmers can safely use convenience objects by enclosing the object creation and use in a scope. The mechanism (called a closure) for using scopes is a little ungainly, but an RTSJ programmer uses closures so much that they soon feel natural.
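A minimal sketch of that idiom, assuming a 64 KB scope is large enough for the temporary objects the library call creates; the sampling code itself is a placeholder.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class ScopedLibraryCall {
    public static void main(String[] args) {
        RealtimeThread sampler = new RealtimeThread() {
            public void run() {
                LTMemory scratch = new LTMemory(64 * 1024, 64 * 1024);
                scratch.enter(new Runnable() {       // the closure
                    public void run() {
                        // Throwaway objects from the class library land in the
                        // scope instead of piling up in immortal memory.
                        List<Integer> samples = new ArrayList<Integer>();
                        for (int i = 100; i >= 0; i--) {
                            samples.add(Integer.valueOf(i));
                        }
                        Collections.sort(samples);
                        System.out.println("median = " + samples.get(50));
                    }
                });
                // The scope's memory is free for reuse here.
            }
        };
        sampler.start();
    }
}
```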
Performance is the most important cost of immortal and scoped memory. The RTSJ has access rules for no-heap, real-time threads and rules that govern the storage of references to objects in heap and scoped memory. These rules must be enforced by the class verifier or the execution engine. Unfortunately, it seems likely that the execution of the bytecodes that store references will have to do some part of that work. That necessity will hurt the JVM's performance.
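To make those rules concrete, here is a minimal sketch (the Holder class is a made-up example) of the kind of store the execution engine must check: an object in immortal memory may not hold a reference to an object in scoped memory, because the scoped object can vanish while the immortal one lives on, and the offending store throws IllegalAssignmentError.

```java
import javax.realtime.IllegalAssignmentError;
import javax.realtime.ImmortalMemory;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class AssignmentRuleSketch {
    static class Holder { Object ref; }             // made-up container object

    public static void main(String[] args) {
        RealtimeThread rt = new RealtimeThread() {
            public void run() {
                ImmortalMemory.instance().enter(new Runnable() {
                    public void run() {
                        final Holder immortalHolder = new Holder();
                        LTMemory scope = new LTMemory(8 * 1024, 8 * 1024);
                        scope.enter(new Runnable() {
                            public void run() {
                                Object scopedObject = new Object();
                                try {
                                    // Illegal: a longer-lived object referencing
                                    // a shorter-lived, scoped object.
                                    immortalHolder.ref = scopedObject;
                                } catch (IllegalAssignmentError expected) {
                                    System.out.println("store rejected: " + expected);
                                }
                            }
                        });
                    }
                });
            }
        };
        rt.start();
    }
}
```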
Memory Access
Special types of memory, I/O devices that can be accessed with load and store operations, and communication with other tasks through shared memory are important issues for embedded systems. It takes a bit of a stretch to call these real-time issues, but the RTSJ makes that stretch.
Special types of memory are closely related to performance (slow memory, cached memory, high-speed nonsnooped access to sharable memory, etc.). Performance by itself is not a real-time issue in the strictest sense, but predictable performance is, and memory attributes such as cacheable, sharable, and pageable have a large impact on the predictability of code that uses the memory.
The RTSJ "raw" memory access classes give something like "peek and poke" access to memory. They run through the JVM's security and protection mechanisms, so this introduction of pointer-like objects does not compromise the integrity of the Java platform, but it does give enough direct access to memory to support device drivers written in Java and Java programs that share memory with other tasks.
The raw memory classes do nothing to improve the real-time performance of the Java platform. They are there because some of the most enthusiastic early supporters of a real-time Java specification wanted to use Java to write device drivers. It was a painless addition to the specification and it greatly increases the usefulness of Java for the embedded real-time community.