Technical Information
Sun Java Real-Time System 2.2 Update 1
  

This document presents some technical information that can help you to use Sun Java™ Real-Time System (Java RTS) 2.2 Update 1.

Technical Documentation: Links to all the Java RTS technical documents

Contents

Introduction

Basic Technical Information
Granting Resource Access Privileges to Java RTS Users
How to Compile and Execute
Controlling Runtime Jitter
Class Initialization Jitter
Interpreter Jitter
Compilation Jitter
Garbage Collection Jitter
Solaris: Using Processor Sets for Optimal Determinism
Linux: Using Cpusets and Processor Affinity for Optimal Determinism

Clocks
Real-Time Clock API
Solaris: High-Resolution Clock
Solaris: Advanced Programmable Interrupt Controller

Advanced Technical Information
Asynchronous Events and Handlers
Release Mechanisms for AEH
Handlers Released by System Threads
Additional Notes for Timers
Order of Timer Firings
Timer Thread Priority
Minimum Interval for Timer Firing
Aperiodic Parameters
Maximum Size of Arrival Queue
Memory Management
Configuring Memory Size
Scoped Memory
Memory Allocation Limits
Memory Locking
Shared Objects in Immortal Memory
Reducing Unnecessary Garbage Production
Memory Checks and Standard JDK Libraries
Synchronization
Real-Time Priorities
Scheduling
Order of Priority Setting
Asynchronous Transfer of Control
Security
Minimum Period Enforced

Reference Information
Abbreviations Used in This Document

Introduction

This document presents the technical information in three major sections: basic, advanced, and reference.

Be sure to consult the Java RTS Compilation Guide and the Java RTS Garbage Collection Guide for necessary details in those areas.


Granting Resource Access Privileges to Java RTS Users

In order to ensure the predictable and deterministic behavior of a real-time application, the user must be granted unrestricted access to a number of system resources. The Java RTS Installation Guide describes this procedure for each operating system.


How to Compile and Execute

The only additional library required to run Java RTS is rt2.jar, located in the <Java RTS install dir>/jre/lib/ directory, where <Java RTS install dir>/ is the path to the directory where you installed Java RTS. For example, on Solaris OS, the installation directory might be /opt/SUNWrtjv/.

Note: (Solaris OS) In order to execute the 64-bit Java RTS VM, add the -d64 option to the command line, as described in Executing the 64-Bit VM in the Java RTS Installation Guide.

To compile your program, type the following commands:

$ cd <your program dir>
$ javac -classpath \
  <Java RTS install dir>/jre/lib/rt2.jar \
  <your program>.java

Alternatively, you can type the following commands:

$ cd <your program dir>
$ <Java RTS install dir>/javac \
  <your program>.java

Note: Java RTS 2.2 Update 1 is based on the Java Platform, Standard Edition (Java™ SE) version 5.0 Update 22. Therefore, if you are compiling a Java program using Java SE 6 or later for use with Java RTS 2.2 Update 1, include the argument -target 1.5 on the javac command line.

To run the compiled program, type the following command:

$ <Java RTS install dir>/bin/java \
  -classpath <your program dir> <your program>

See the Java RTS Options page for a list of Java RTS runtime parameters.

[Contents]


Controlling Runtime Jitter

This section provides information on how to reduce jitter during the execution of your application.

Class Initialization Jitter

Class initialization in Java programs occurs on demand when a program executes one of the following operations:

  • Call to a static method
  • Access to a static field
  • Class instantiation

Class initialization implies that some extra code will be executed the first time one of the above operations is carried out for a given class, thereby introducing some jitter at execution time. This problem cannot be avoided from an implementation point of view due to the semantics of the Java Programming Language. Instead, the programmer must anticipate this problem and solve it.

The Initialization Time Compilation (ITC) scheme also implies that code will be compiled on the execution of one of the above operations. This introduces even more non-determinism in class initialization.

See the Compilation Guide for a description of how to specify classes to be preinitialized with the ITC scheme.
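
One way for the programmer to anticipate the problem is to force class initialization during the startup phase, before the time-critical phase begins. The following plain-Java sketch (the MissionData class and its contents are hypothetical) uses Class.forName with initialize=true to trigger the static initializer eagerly:

```java
// Sketch: forcing class initialization during startup so that the
// one-time initialization cost is not paid during the critical phase.
public class Preload {
    // Hypothetical class whose static initializer does costly setup.
    static class MissionData {
        static final long[] TABLE = new long[1024];
        static volatile boolean initialized;
        static {
            for (int i = 0; i < TABLE.length; i++) {
                TABLE[i] = (long) i * i;        // one-time setup work
            }
            initialized = true;                 // runs exactly once
        }
    }

    public static void main(String[] args) throws Exception {
        // Trigger initialization eagerly; note that a class literal
        // alone would not initialize the class.
        Class.forName(Preload.MissionData.class.getName(), true,
                      Preload.class.getClassLoader());
        System.out.println("initialized = " + MissionData.initialized);
    }
}
```

After this call returns, the first "real" static access during the mission phase no longer pays the initialization cost.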

Interpreter Jitter

Note that the interpreter might require several executions of each bytecode to reach a steady state, producing jitter the first few times a particular branch is taken. The programmer can reduce the jitter by ensuring that the branches used during the mission-critical phase have been previously executed. A simpler solution, however, is to precompile the methods. This compiled code is already optimized and will not be dynamically rewritten, resulting in a steady state obtained at the end of the class initialization. This should be done at least for the NHRTs.

In addition, the RTGC can more efficiently parse the stack frames of compiled methods. Therefore, we recommend compiling most of the methods that are called by all time-critical threads, in order to reduce the pause time. For additional information on pause times, see the Garbage Collection Guide.

Compilation Jitter

Java RTS provides mechanisms to avoid the delays due to JIT compilation. These mechanisms include the new ITC compilation mode, as well as new policies for JIT compilation. For additional details, see the Compilation Guide.

Garbage Collection Jitter

Java RTS provides two garbage collection technologies:

With the Serial GC, only NHRTs are deterministic and provide latencies in the low tens of microseconds, provided that you follow the guidelines above concerning class initialization and ITC.

With the RTGC, RTTs can also be deterministic, down to the low hundreds of microseconds. See the Garbage Collection Guide.

[Contents]


Solaris: Using Processor Sets for Optimal Determinism

The information in this section applies only to applications running on the Solaris™ Operating System.

If more than one processor is installed in the target machine, you can dedicate a separate processor set to the exclusive use of the No-Heap Realtime Threads (NHRTs) and the RealtimeThreads (RTTs) of Java RTS. This partitioning of the available processors ensures the best temporal behavior for these threads by reducing cache thrashing effects. (Note that the actual benefits of using processor sets depend on the hardware that is used.)

To request that Java RTS assign the NHRTs to an existing dedicated processor set, set the option -XX:RTSJBindNHRTToProcessorSet=<processor_set_id>. The processors assigned to this processor set should all be set to no-intr to minimize latency and jitter. If this option is set, calling Runtime.availableProcessors from an NHRT returns the number of processors available in the processor set that has been assigned to NHRTs.

A similar option, RTSJBindRTTToProcessorSet, can be used to bind real-time threads to an existing dedicated processor set.

Note that, as a consequence of binding Java RTS to dedicated processor sets, high-priority real-time threads might compete for the processors assigned to a particular processor set, while other processors not assigned to that processor set might remain idle or might run other, lower-priority, non-real-time processes.

Creating and Assigning Processor Sets

The Solaris operating system allows a machine's processors to be partitioned into a number of smaller, non-overlapping processor sets. Processors that are assigned to a processor set are reserved for the processes explicitly bound to that processor set. Processes that are not bound to that processor set cannot use those processors.

Note: Shell command lines that are shown with a '#' are meant to be executed while you are logged in as the superuser (root).

A new processor set can be created with the following command:

# /usr/sbin/psrset -c
  created processor set <pset id>

Processors can then be assigned to this new processor set with the following command:

# /usr/sbin/psrset -a <pset id> <cpu id>

where <cpu id> refers to one of the on-line processors displayed by the following command:

# /usr/sbin/psrinfo
<cpu id>     on-line   since mm/dd/yyyy hh:mm:ss
<cpu id>     on-line   since mm/dd/yyyy hh:mm:ss
<cpu id>     on-line   since mm/dd/yyyy hh:mm:ss
<cpu id>     on-line   since mm/dd/yyyy hh:mm:ss

The Java RTS Virtual Machine can then be assigned to run in the new processor set as follows:

# /usr/sbin/psrset -e <pset id> <Java RTS command line>

A processor can also be set to no-intr, meaning that it is sheltered from unbounded interrupts. A no-intr processor still executes threads, but it is not interruptible by I/O devices. Note that you cannot set all processors in a machine to no-intr, because at least one processor must remain interruptible by the devices.

For optimal determinism, we recommend setting all the processors assigned to the NHRTs and RTTs to no-intr. To set a processor to no-intr, use the following command:

# /usr/sbin/psradm -i <cpu id>
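
Putting these commands together, a complete session might look like the following sketch (the processor set ID 1, the CPU IDs 2 and 3, the installation path, and the application class MyApp are all hypothetical):

```shell
# /usr/sbin/psrset -c
created processor set 1
# /usr/sbin/psrset -a 1 2
# /usr/sbin/psrset -a 1 3
# /usr/sbin/psradm -i 2 3
# /opt/SUNWrtjv/bin/java -XX:RTSJBindNHRTToProcessorSet=1 \
    -classpath . MyApp
```

Here the NHRTs run on the sheltered processors 2 and 3, while the rest of the VM, including its non-real-time threads, remains on the other processors.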

For further information regarding these commands, refer to the related man pages.

[Contents]


Linux: Using Cpusets and Processor Affinity for Optimal Determinism

The information in this section applies only to applications running on the Linux Operating System.

The Linux kernel provides a feature called cpusets, which restricts designated tasks to run only on a designated set of CPUs (processors). Dedicating a cpuset to the exclusive use of Java RTS can provide better temporal behavior on a multiprocessor machine. The cpuset must be created before you launch the VM. A cpuset is designated by a path in the file system.

Java RTS can use processor affinity to bind No-Heap Realtime Threads (NHRTs) or Realtime Threads (RTTs) to a comma-separated list of processors or to a cpuset path with the following options:

  • -XX:RTSJBindNHRTToProcessors
  • -XX:RTSJBindRTTToProcessors

This partition of the available processors ensures the best temporal behavior for these threads, by reducing cache thrashing effects.

The following example binds NHRTs to processors 0, 1, and 2:

-XX:RTSJBindNHRTToProcessors=0,1,2

The following example binds NHRTs to the cpuset designated by /dev/cpuset/rt:

-XX:RTSJBindNHRTToProcessors=/dev/cpuset/rt
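
As a rough sketch of how such a cpuset might be created (the exact mount procedure and file names vary by kernel version and distribution; on some kernels the control files are named cpuset.cpus and cpuset.mems):

```shell
# Mount the cpuset pseudo-filesystem (as root):
mkdir -p /dev/cpuset
mount -t cpuset none /dev/cpuset

# Create a cpuset named "rt" reserving CPUs 1-2 and memory node 0:
mkdir /dev/cpuset/rt
echo 1-2 > /dev/cpuset/rt/cpus
echo 0   > /dev/cpuset/rt/mems
```

The VM can then be launched with -XX:RTSJBindNHRTToProcessors=/dev/cpuset/rt, as shown above.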

Note that, as a consequence of binding Java RTS to dedicated processors or cpusets, high-priority real-time threads might compete for the processors in a particular cpuset, while other processors outside that cpuset might remain idle or might run other, lower-priority, non-real-time processes.

For detailed and up-to-date information about creating and using cpusets and CPU/processor affinity, refer to the documentation provided by the supplier of your Linux distribution.

[Contents]


Clocks

This section provides information related to clocks in Java RTS.

Real-Time Clock API

Use the RTSJ real-time Clock API to perform time measurements. It is best not to use the other clocks (for example, java.util.Date or java.lang.System.currentTimeMillis): some of them use a different time base that is synchronized with the wall-clock time, and adjustments to that wall clock appear as jitter in your measurements.
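
For example, an interval can be measured with the real-time clock as in the following sketch (requires the Java RTS runtime; the work being measured is elided):

```java
import javax.realtime.AbsoluteTime;
import javax.realtime.Clock;
import javax.realtime.RelativeTime;

public class MeasureWithRealtimeClock {
    public static void main(String[] args) {
        Clock rt = Clock.getRealtimeClock();

        AbsoluteTime start = rt.getTime();
        // ... code to be measured ...
        AbsoluteTime end = rt.getTime();

        // The difference is unaffected by wall-clock adjustments.
        RelativeTime elapsed = end.subtract(start);
        System.out.println("elapsed: " + elapsed);
    }
}
```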

Solaris: High-Resolution Clock

The information in this section applies only to applications running on the Solaris™ Operating System.

For the best temporal accuracy, Java RTS requires access to the system's high-resolution clock. On the Solaris Operating System, this clock is accessible only to the superuser and to processes or users that have been granted the proc_clock_highres privilege.

If access to the high-resolution clock is not granted, the virtual machine falls back to the regular, less accurate clock, allowing Java RTS to run without particular privileges. As a consequence, the precision of timer operations will degrade, and will be bounded by the precision of the regular system clock, typically 10 milliseconds.

You can request the virtual machine to use the high-resolution clock and to exit if this resource is not available by adding the -XX:+RTSJForceHighResTimer option to the command line.

Java RTS automatically ensures that a single, consistent time source is used for all the time-related operations. If access to the high-resolution clock is not granted, then the cyclic driver will not be used. Similarly, if the cyclic driver is not installed, then the use of the high-resolution clock is disabled.

The cyclic device driver must be installed on the machine for real-time applications; the installation example output in the Installation Guide shows the cyclic driver being installed.

Solaris: Advanced Programmable Interrupt Controller

The information in this section applies only to applications running on the Solaris™ Operating System.

On x86/x64 machines, Java RTS uses the high-resolution local APIC (Advanced Programmable Interrupt Controller) timer as the primary time source for timed events such as timer expirations, periodic releases, and deadline monitoring. This ensures that the highest time precision is achieved.

If your system does not feature a local APIC, or if the local APIC is not usable, then the Java RTS virtual machine issues the following warning at startup time:

The cyclic driver's backend does not rely 
on the local APIC timer; using legacy PIT timer

In this case, Java RTS falls back to the legacy PIT (Programmable Interrupt Timer), causing the precision of timed events to be limited to 10 milliseconds by default. This precision can be raised to 1 millisecond by editing the /etc/system file, setting the hires_tick variable to 1, and rebooting your system.

[Contents]


Asynchronous Events and Handlers

This section presents some useful information concerning asynchronous events and the Asynchronous Event Handler (AEH) mechanism in Java RTS.

The RTSJ makes a clear distinction between an asynchronous event and handlers that are to be executed when such an event occurs. The asynchronous event is represented by the AsyncEvent class and instances of this class. A number of handlers can be associated with the event. Each handler is encapsulated in an instance of the AsyncEventHandler class. The Java RTS implementation has been optimized to minimize the number of threads required to execute AsyncEventHandler instances. This optimization is especially efficient when no handlers are blocking; although blocking handlers are fully supported, they require more resources.

"Happenings" are described as "events that take place outside the Java runtime environment." No standard happenings are defined in the RTSJ specification, nor does Java RTS define any.


Release Mechanisms for AEH

When several handlers are associated with the same event, it is important to understand the release mechanism. When the event is fired, all of its associated handlers are conditionally released for execution. Releases are done in execution eligibility order, but this is not an atomic operation. If some handlers have a higher priority than the schedulable that fired the event, they preempt it, and the release of the following handlers is delayed until the high-priority handlers complete or block. This behavior might add some jitter to the release time of low-priority handlers associated with the event.


Handlers Released by System Threads

Handlers related to timers, POSIX signals, and deadline misses are released by system threads that are dedicated to each type of handler. The default value for the priorities of these threads is RTSJMaxPriority, a constant equal to one greater than the maximum real-time application priority value. It is possible to change these handler thread priorities by setting one of the command-line options RTSJHighResTimerThreadPriority, RTSJSignalThreadPriority, or RTSJDeadlineMonitoringThreadPriority. The valid values for the system threads are in the range between the maximum and the minimum real-time application priorities. See the Real-Time Priorities section for an explanation of how to determine these maximum and minimum real-time application priority levels.

Caution: If you decrease the priority level of one of the system threads, your real-time application will experience different behavior. For example, if you decrease the priority of the POSIX signal thread, then signal handling can be delayed. Ensure that you can justify changing these values.

[Contents]


Additional Notes for Timers

As mentioned above, Java RTS manages all RTSJ timers through a dedicated system thread. This section provides some information to help you manage the actions of these timers.

Order of Timer Firings

One of the side effects of using a dedicated thread to handle all timers is that timers fire in a serialized fashion. If two timers must be fired at the exact same time, the release of handlers associated with the first timer delays the release of handlers associated with the second timer, even if the handlers of the second timer have a higher priority. Note that the delay corresponds only to the handler release, and not to the execution of the handler code. Therefore, this effect might be noticeable when one timer has a huge number of handlers or when a huge number of timers fire at the same time.

[Contents]

Timer Thread Priority

Timer firings occur more often than deadline misses or POSIX signals. Therefore, it might be important to manage the predictability and the impact of timers by tuning the priority of the timer thread. If high accuracy is required for RTSJ timers, the timer thread's priority must be the highest of all real-time priorities. If the RTSJ timers are not required to be very accurate, but some other threads of the application must be strongly deterministic, the timer thread's priority can be set to a lower value in order to avoid impacting those critical threads. Add the -XX:RTSJHighResTimerThreadPriority=<priority> option to your command line to change the timer thread's priority.

See sections Handlers Released by System Threads and Scheduling for important information related to changing Java RTS system thread priorities.

Minimum Interval for Timer Firing

The RTSJ does not define a minimum interval for periodic timers, but a timer firing too often can easily overload a system and prevent the execution of other activities. The Java RTS VM protects itself from this situation by enforcing a minimum value for the interval of periodic timers. If an application tries to create a periodic timer with an interval lower than this limit, or if it tries to set a new interval lower than this limit, it receives an IllegalArgumentException. By default, the minimum interval is set to 500 nanoseconds, but it can be modified with the Java property TimerMinInterval, which specifies the minimum interval expressed in nanoseconds. For example, the minimum interval can be increased to 800 nanoseconds by including -DTimerMinInterval=800 on the VM command line.

[Contents]


Aperiodic Parameters

Release parameters were included in the RTSJ to let developers control the way new activities are added to the set of schedulables ready to be executed. By setting an arrival time queue size in an AperiodicParameters instance, it is possible to control the number of pending releases of a schedulable, and the choice of an ArrivalTimeQueueOverflow policy helps an application react when too many releases are pending. However, the default ArrivalTimeQueueOverflow policy is SAVE, meaning that every time a new release arrives, its characteristics are saved until the corresponding execution is done. Under this policy, if a low-priority schedulable is released many times but is prevented from running because a higher-priority schedulable is running, the memory required to store the characteristics of its releases grows continually and might completely fill the system memory, causing abnormal behavior. For this reason, it is advisable to change the arrival time queue policy of schedulables that might be released many times and that can be prevented from running for a long time.


Maximum Size of Arrival Queue

Aperiodic schedulable objects are associated with an arrival time queue controlled by an instance of AperiodicParameters. The size of the arrival queue is determined by the value returned by the getInitialArrivalTimeQueueLength() method of the parameter object when the aperiodic schedulable object is created. The value returned by this method can be modified using the setInitialArrivalTimeQueueLength() method.

To prevent the formation of huge arrival queues, which would imply huge allocations, the setInitialArrivalTimeQueueLength() method does not accept arguments greater than the value of the Java property ArrivalQueueMaxSize; if it refuses a new size for this reason, it throws an IllegalArgumentException. The default value of ArrivalQueueMaxSize is 16384, but it can be modified. For example, the value can be increased to 32768 by including -DArrivalQueueMaxSize=32768 on the VM command line.
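
The two settings above can be combined as in this sketch (requires the Java RTS runtime; the queue length of 64 and the choice of the EXCEPT policy are only illustrative):

```java
import javax.realtime.AperiodicParameters;

public class ArrivalQueueSketch {
    public static void main(String[] args) {
        // null cost/deadline/handlers: use the defaults.
        AperiodicParameters ap =
                new AperiodicParameters(null, null, null, null);

        // Size the queue for the expected burst of pending releases
        // (the value must not exceed ArrivalQueueMaxSize).
        ap.setInitialArrivalTimeQueueLength(64);

        // Replace the default SAVE policy so that pending releases
        // cannot grow without bound.
        ap.setArrivalTimeQueueOverflowBehavior(
                AperiodicParameters.arrivalTimeQueueOverflowExcept);
    }
}
```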

[Contents]


Memory Management

This section presents information to facilitate memory management in Java RTS.

Configuring Memory Size

The total memory used for immortal and scoped areas must be configured when starting the JVM. The default values are 32MB and 16MB respectively, and can be modified using the -XX:ImmortalSize=value and -XX:ScopedSize=value options. The VM performs some internal rounding of these total sizes and of the size of each scoped memory area.

To see the memory layout, launch the VM with the -XX:+RTSJShowHeapLayout option.
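
For example, the following command line doubles both defaults and prints the resulting layout (<Java RTS install dir>, <your program dir>, and <your program> as in the compilation examples above):

```shell
$ <Java RTS install dir>/bin/java \
  -XX:ImmortalSize=64m -XX:ScopedSize=32m \
  -XX:+RTSJShowHeapLayout \
  -classpath <your program dir> <your program>
```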

Scoped Memory

The RTSJ defines ScopedMemory objects to represent alternative memory allocation contexts that are not subject to garbage collection. These are created using concrete subclasses such as LTMemory and VTMemory. The memory that these objects represent is known as the backing-store and is distinct from the object itself. The backing-store is managed by the virtual machine and is not obtained from the allocation context that contains the ScopedMemory instance, or any other memory area directly accessible to Java code.

As defined by the RTSJ, "this backing-store behaves effectively as if it were allocated when the associated scoped memory object is constructed and freed at that scoped memory object's finalization".

The detailed management of the backing-store memory is implementation-dependent, and in Java RTS there can be a delay between the time an area of backing-store is released and the time at which it is available to be used with a new ScopedMemory object. As always with dynamic allocation requests, if a program needs to guarantee that allocation of the ScopedMemory instance and its backing-store cannot fail when it is needed, then that ScopedMemory instance should be created during the initial start-up phase of the application.

The VTMemory and LTMemory classes can be constructed with a requested initial size and maximum size for the backing-store. Java RTS always allocates the maximum size requested for the backing-store upon object construction, to achieve better predictability and determinism and to guarantee the availability of the maximum requested memory. This guarantee is not required by the RTSJ but is provided by Java RTS. In addition, the minimum size of backing-store that is allocated is controlled by the value of the ScopedMemoryAllocGrain parameter, which defaults to 16KB. If a smaller size is requested, it will be rounded up to this minimum size. This value can be modified using the -XX:ScopedMemoryAllocGrain=<n> option, where <n> should be a power of two, and if not, will be rounded down to a power of two.
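
As a sketch (requires the Java RTS runtime; the 64 KB size is only illustrative), a scoped area can be created during start-up and entered later:

```java
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class ScopeSketch {
    // Created during start-up: Java RTS allocates the full maximum
    // (here, 64 KB) of backing-store immediately.
    static final LTMemory SCOPE = new LTMemory(64 * 1024, 64 * 1024);

    public static void main(String[] args) {
        // Only schedulables (here, a RealtimeThread) may enter a scope.
        new RealtimeThread() {
            public void run() {
                SCOPE.enter(new Runnable() {
                    public void run() {
                        // Allocations here come from the scope's
                        // backing-store and are reclaimed when the
                        // last schedulable leaves the scope.
                        byte[] buffer = new byte[1024];
                    }
                });
            }
        }.start();
    }
}
```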

Memory Allocation Limits

Java RTS supports only the mandatory memory allocation limits in a MemoryParameters instance, as defined by the RTSJ. The optional parameters (allocation rates) are not enforced.

Memory Locking

Modern operating systems tend to use virtual memory, which allows for more programs to be resident in memory than would be possible using the physical memory resources of the machine. This is achieved by swapping memory pages out to disk, and loading them back in when needed. This means that a simple memory reference could have a worst-case execution latency that includes disk I/O activity. Such latencies would be extremely detrimental to deterministic real-time behavior. To avoid this, Java RTS utilizes memory locking, whereby the virtual addresses used by the VM process are mapped to physical pages which are locked-down and will not be swapped out to disk. These pages are always resident in physical memory and so the disk I/O latencies are avoided.

The down-side of memory locking is that the memory used by the VM process cannot exceed the physically installed memory on the machine. In fact, it is restricted to a fraction of the available memory, due to locking used by other processes, and the operating system itself, as well as limitations caused by fragmentation of the physical memory. For example, for the 32-bit VM, which is limited to a 4GB address space, the maximum amount of memory claimable by the VM may be little more than 2GB. This includes all of the memory regions used by the VM: heap, scope backing store, immortal memory, immortal physical memory, and the permanent-generation. As a result, the VM could fail to initialize due to a lack of physical memory. In addition, the VM could encounter an out-of-memory condition at runtime, which results in an abnormal termination of the VM.

If an application requires more memory than can be supported using memory locking, the developer may choose to disable memory locking by using the -XX:-RTSJLockPagesInMemory option. This should only be done when the latencies associated with virtual memory swapping are acceptable.

[Contents]


Shared Objects in Immortal Memory

The RTSJ requires that certain shared objects must be allocated in the ImmortalMemory area, including Class objects, string literals, and interned strings. Objects in immortal memory are not reclaimed by the garbage collector and so they, together with any objects they reference, are always considered reachable, even if the application no longer has a direct reference to them. Consequently, if an application's allocation of these objects continually grows over time, then the application will eventually exhaust the immortal memory area.

While we cannot give a detailed accounting of how all APIs consume immortal memory, we have become aware of one potentially problematic API, namely javax.management.ObjectName. An ObjectName is instantiated by passing a String that names it. To allow for quick comparison of different ObjectName instances for equality, these string names are interned, and are therefore allocated in the immortal memory area. If an application continually creates uniquely named ObjectName instances (for example, as the meta-data associated with each new request that a service receives), then that application will eventually exhaust the immortal memory area. There is no workaround other than to avoid using the problematic APIs.
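
The retention follows from how interning works: equal strings intern to the same shared instance, which the VM must then keep alive. The following plain-Java sketch illustrates the mechanism (under Java RTS the pool of interned strings resides in immortal memory):

```java
public class InternDemo {
    public static void main(String[] args) {
        // Interning an equal string yields the single pooled instance:
        String name = new String("service.request");
        System.out.println(name.intern() == "service.request");  // true

        // Each *unique* name interned adds a new permanently retained
        // string; a stream of uniquely named ObjectName-style values
        // therefore grows the pool (immortal memory, under Java RTS).
        for (int i = 0; i < 3; i++) {
            ("request-" + i).intern();
        }
    }
}
```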

[Contents]


Reducing Unnecessary Garbage Production

Throughput-oriented garbage collectors, in particular generational, copying-based collectors, are very efficient at dealing with large quantities of garbage: they never have to visit garbage objects, so the cost of a GC pass does not depend on the amount of garbage to be found. Because these kinds of collectors are the norm in mainstream Java SE implementations, developers have tended to become unconcerned with the amount of garbage they generate, expecting the collector to deal with it simply and efficiently.

However, this is not the case for deterministic, non-generational, garbage collectors, such as the Java RTS Real-Time Garbage Collector, where latency and pause-times are the main concerns. The cost of a GC cycle is dependent on both the number of live objects and the number of garbage objects. Consequently, algorithms that generate excessive amounts of garbage - which is inefficient even under generational collectors - can encounter much higher GC costs when run under Java RTS.

For example, consider this method that is used to build a string describing a set of attributes:

  String attributeString(Attribute[] attrs) {
    String str = "Attribute list: ";
    for (Attribute a : attrs) {
      str += a.toString();
      str += ", ";
    }
    return str;
  }

Because of the use of the string concatenation operator, this will be rewritten by the javac compiler to act the same as:

  String attributeString(Attribute[] attrs) {
    String str = "Attribute list: ";
    for (Attribute a : attrs) {
      StringBuilder sb = new StringBuilder();
      sb.append(str);
      sb.append(a.toString());
      sb.append(", ");
      str = sb.toString();
    }
    return str;
  }

Now consider how many objects are created each time through the loop. There are three obvious places where objects are created:

  • There is a new StringBuilder instance.
  • There is a new String representing each attribute.
  • There is a new String created from the StringBuilder (and assigned back to the str variable).

But less obviously, if the append operations exceed the size of the internal character buffer in the StringBuilder, then we will create a new, bigger buffer (and this could happen more than once per iteration). Note that as soon as str contains more characters than the default size of a StringBuilder, then this expansion will occur when we do the initial sb.append(str) in each iteration.

Out of all these objects, only the final String created from the final StringBuilder remains live - everything else is garbage. This is grossly inefficient in terms of memory use and computational overhead for object creation and GC effort.

There are a few basic rules for minimizing object creation in code such as the above:

  1. Do not append to String objects. Strings are immutable, so each append creates a new String and potentially throws the old one away.
  2. Convert StringBuilder objects (or StringBuffer objects) to String objects only when you need the actual String.
  3. Size StringBuilder (or StringBuffer) objects appropriately so that dynamic expansion of their internal character buffers is not needed.

With that in mind we can rewrite the above method much more efficiently (assuming we know how big attribute strings are expected to be):

  static final int ATTR_STRING_SIZE = 10;
  static final String ATTR_LIST = "Attribute list: ";
  static final String SEPARATOR = ", ";

  String attributeString(Attribute[] attrs) {
    int size = ATTR_LIST.length() +
               attrs.length * (ATTR_STRING_SIZE + SEPARATOR.length());

    StringBuilder str = new StringBuilder(size);
    str.append(ATTR_LIST);
    for (Attribute a : attrs) {
      str.append(a.toString());
      str.append(SEPARATOR);
    }
    return str.toString();
  }

Now all we create on each iteration is the actual String for the current attribute. There is one StringBuilder created, which should never need to dynamically grow, and one String created from that StringBuilder.
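
A quick plain-Java check (using String values in place of the Attribute type, which is not defined here) confirms that both versions build identical results:

```java
public class AttributeStringCheck {
    static final int ATTR_STRING_SIZE = 10;
    static final String ATTR_LIST = "Attribute list: ";
    static final String SEPARATOR = ", ";

    // The original, garbage-heavy version.
    static String naive(String[] attrs) {
        String str = ATTR_LIST;
        for (String a : attrs) {
            str += a;
            str += SEPARATOR;
        }
        return str;
    }

    // The presized StringBuilder version.
    static String presized(String[] attrs) {
        int size = ATTR_LIST.length()
                 + attrs.length * (ATTR_STRING_SIZE + SEPARATOR.length());
        StringBuilder str = new StringBuilder(size);
        str.append(ATTR_LIST);
        for (String a : attrs) {
            str.append(a);
            str.append(SEPARATOR);
        }
        return str.toString();
    }

    public static void main(String[] args) {
        String[] attrs = { "color=red", "size=42" };
        System.out.println(naive(attrs).equals(presized(attrs)));  // true
    }
}
```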

[Contents]


Memory Checks and Standard JDK Libraries

Under the RTSJ memory model, a legacy Java method might throw new exceptions when writing or reading an object. Hence, some legacy code, including the standard JDK libraries, might result in MemoryAccessError or IllegalAssignmentException exceptions being thrown.

It is important to note that this never happens for a regular java.lang.Thread, which can safely use legacy code and standard Java libraries. Such threads are unaffected because an assignment check fails only if the written object is in a scoped memory area, and an access check is performed only if the current thread is an NHRT. Note that for this reason, legacy code will also continue to work with RealtimeThreads as long as scoped memory is not used. Library calls, however, might fail for NoHeapRealtimeThreads or for RealtimeThreads running inside a scoped memory area, as RTSJ does not define the API set that must be supported in these two cases.

Rewriting a method so that it does not cause a memory exception when used by NHRTs or from scoped memory might result in a performance loss, because additional checks must be performed, or optimizations such as caching must be disabled. Note also that the design of most of the libraries is based on the assumption that, thanks to the GC, dynamic allocation is inexpensive and harmless. However, dynamic allocation can lead to memory leaks in ImmortalMemory or increase the size required for scoped memory areas. Whenever possible, try to delegate the standard library calls from NHRTs to RealtimeThreads, and perform these calls in the heap memory area.

In any case, if the determinism provided by the Real-Time Garbage Collector (RTGC) is sufficient, the safest solution is to avoid NHRTs and scoped memory altogether.

[Contents]


Synchronization

The HotSpot virtual machine handles object locking in a very efficient manner. A locking operation can be performed in compiled code in a few CPU cycles as long as there is no contention. Contended monitors are more expensive because VM calls have to be performed. In addition, the first contention for a given lock can allocate new system resources, so considerable jitter might appear on the first contention for a given monitor. If necessary, this jitter can be avoided by explicitly calling MonitorControl.setMonitorControl(obj, PriorityInheritance.instance()), which preallocates the system resources.

The Priority Ceiling Emulation Protocol (PCEP) is an optional feature of RTSJ, and is not supported by Java RTS, mostly for reasons of efficiency.

In order to avoid being subject to GC-induced jitter, an NHRT should never lock an object that can potentially be locked by a non-NHRT thread. Note also that such a lock can impact other real-time threads as well, since the owner of the lock could be boosted to the priority of the blocked NHRT. This might even cause the priority of the garbage collector to be boosted as well. In order to anticipate these issues, carefully check the code executed by the NHRTs and pay special attention to the synchronized statements. This is less critical with the RTGC, since the worst-case pause time would be in the hundreds of microseconds.

[Contents]


Real-Time Priorities

In addition to the 10 regular Java priorities, Java RTS provides a number of additional real-time priority levels. The maximum number of real-time priorities available to Java RTS depends on the host operating system: Solaris provides 60 real-time priorities (numbered 0 to 59), while Linux provides 99 (numbered 1 to 99).

The actual number of real-time priorities used by Java RTS is controlled by the RTSJNumRTPrio flag. The default value for this flag on Solaris OS is 60, allowing Java RTS to use all the real-time priorities from 0 to 59. However, the default value of RTSJNumRTPrio on Linux is 49, allowing Java RTS to use real-time priorities 1 through 49. The reason for this value is that by default real-time Linux systems run some of their interrupt service routines (the "soft-IRQ" handlers) at priority 50. If application threads use priority 50 or higher, they can interfere with these interrupt service routines and, for example, cause delays in the processing of system timers and consequently in the release of periodic real-time threads or deadline miss handlers. By limiting RTSJNumRTPrio to 49, you prevent this interference from happening. If you have determined that your application code must run at a higher priority (perhaps after reconfiguring the interrupt service routines to use a higher priority), then you can set RTSJNumRTPrio to the value that is appropriate for the application.

Caution: Setting RTSJNumRTPrio too high could make your system unstable or unresponsive. It is your responsibility to determine a value for RTSJNumRTPrio that is appropriate for your application and the configuration of the system it is running on.

The minimum real-time priority value, RTSJMinPriority, is always 11, which is one greater than the maximum java.lang.Thread priority. The maximum real-time priority value, RTSJMaxPriority, is equal to (RTSJMinPriority + RTSJNumRTPrio - 1). The RTSJMaxPriority value is reserved for use by Java RTS itself. Therefore, the maximum real-time priority available to application code is equal to (RTSJMaxPriority - 1), and the number of available real-time priorities is equal to (RTSJNumRTPrio - 1).

The RTGCBoostedPriority parameter specifies the priority at which the Real-Time Garbage Collector (RTGC) should execute in order to guarantee determinism. (See the Garbage Collection Guide for a detailed description.) The value of this parameter is set by default to be the middle of the range of real-time priorities, that is, (RTSJMinPriority + RTSJMaxPriority) / 2. Therefore, the resulting default value is 40 on Solaris OS, and 35 on Linux.
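Because these values are simple integer arithmetic over the documented defaults, the derivation can be checked with plain Java, without any RTSJ classes (the variable names below are illustrative):

```java
public class PriorityMath {
    public static void main(String[] args) {
        // RTSJMinPriority is always one above the maximum java.lang.Thread priority.
        int rtsjMinPriority = Thread.MAX_PRIORITY + 1;           // 11

        // Solaris default: RTSJNumRTPrio = 60
        int solarisMax = rtsjMinPriority + 60 - 1;               // 70
        int solarisAppMax = solarisMax - 1;                      // 69 (highest usable by application code)
        int solarisBoosted = (rtsjMinPriority + solarisMax) / 2; // 40 (default RTGCBoostedPriority)

        // Linux default: RTSJNumRTPrio = 49
        int linuxMax = rtsjMinPriority + 49 - 1;                 // 59
        int linuxAppMax = linuxMax - 1;                          // 58
        int linuxBoosted = (rtsjMinPriority + linuxMax) / 2;     // 35

        System.out.println(solarisMax + " " + solarisAppMax + " " + solarisBoosted);
        System.out.println(linuxMax + " " + linuxAppMax + " " + linuxBoosted);
    }
}
```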

Application code should always use the methods of javax.realtime.PriorityScheduler.instance() to determine the minimum and maximum real-time priorities available to it: getMinPriority() and getMaxPriority(), respectively.

In the original version 1.0 of the RTSJ specification, the javax.realtime.PriorityScheduler class provided two fields, MIN_PRIORITY and MAX_PRIORITY, describing the range of supported real-time priorities. Version 1.0.1 of the RTSJ specification later deprecated these fields; the minimum and maximum real-time priorities must now be obtained by invoking getMinPriority() and getMaxPriority(), respectively.

Caution: Since the two deprecated fields are declared static and final, Java compilers are allowed to inline their values. Because different Java RTS virtual machines can implement different priority mappings, inlined values might be meaningless on another VM. The Java RTS compiler does not inline these values, in order to generate portable code. However, code produced by other vendors' compilers might not run as expected on Java RTS if those compilers inlined these values.

[Contents]


Scheduling

Threads running at a regular Java priority level are scheduled according to a time-sharing policy. This policy provides a fair allocation of processor resources among threads. The time-sharing policy is intended to provide good response time to interactive threads and good throughput to processor-bound threads, but not to provide any temporal guarantee. In contrast, threads running at a real-time priority level are scheduled according to a real-time, fixed-priority preemptive policy. Threads scheduled according to this policy are granted the highest range of scheduling priorities on the system.

The regular Java threads are restricted to the regular Java priority range, while the real-time threads can only be assigned real-time priority levels. However, keep in mind that, due to priority inheritance, a regular Java thread can be boosted to a real-time priority level if it holds a lock claimed by a real-time thread. Code executed by a non-real-time thread sharing a lock with high-priority real-time threads must consequently be written carefully.

Caution: Nothing prevents runaway threads running at real-time priority levels from effectively saturating the processor(s), possibly causing the machine to become unresponsive. In this case, the machine might have to be rebooted, either manually or through the service processor or remote management console, if available.


Order of Priority Setting

Note that PriorityParameters.setPriority(priority) is not atomic. If a thread increases the priority value of a PriorityParameters object shared by several other threads, it can be preempted by the first thread that has its new priority set, delaying the priority update of the other threads sharing the same PriorityParameters. In some cases, this can cause priority inversions. The workaround is to ensure that the priority of the thread invoking PriorityParameters.setPriority(newpriority) is greater than or equal to the new priority value.

[Contents]


Asynchronous Transfer of Control

Asynchronous Transfer of Control, or ATC, is a powerful mechanism that safely extends Thread.stop().

ATC users should keep in mind the following rules:

  • ATC has been defined at the Java-language level, rather than at the byte-code level. For performance reasons, Java RTS relies on the fact that locks are always properly nested in a Java frame. Hence, ATC might not work as expected if locks are not properly nested in a frame corresponding to an interruptible method. Improperly nested locks should never happen if the byte code has been generated with a compliant Java compiler. However, if you plan to use a code obfuscator, please check with your vendor before using such a tool on RTSJ code.
  • Do not forget that finally statements are not guaranteed to execute in asynchronously-interruptible methods. A finally statement is transformed into statements equivalent to catch(Throwable e) { ... code ... ; throw e; }. When an ATC is delivered, the code statements might not be executed, because they are not in a deferred section.
  • Do not forget to call aie.clear() in the catch handler. This is the main difference between the AsynchronouslyInterruptedException delivered by ATC and a regular exception. An instance of AIE delivered by aie.fire() or Thread.interrupt() is re-thrown as soon as the thread exits from the deferred section in which the handler was found, unless it has been explicitly cleared.
  • Note that throw new AsynchronouslyInterruptedException() uses the regular exception delivery mechanism rather than the ATC mechanism. Such an exception is delivered synchronously and need not be cleared.
  • As a special case of an uncleared AIE, you should never include a loop in a deferred section that contains a generic handler (catch (Exception e) or catch (Throwable t)). If this handler catches an AIE, the exception will be re-thrown as soon as the thread reenters the non-deferred section inside the loop.
  • In your design, implement interruptions through aie.doInterruptible(logic) and aie.fire() rather than through Thread.interrupt(). This allows you to interrupt the right logic in case of nested interruptible logic. Thread.interrupt() should be used only to safely terminate a thread.

[Contents]


Security

Java RTS 2.2 Update 1 provides a secure environment on a par with Java SE 5.0 Update 22. For Java RTS 2.2 Update 1 deployments that are not "closed," the use of trusted third-party applets and applications helps ensure the highest security.

Changes in Acquisition of Access Control Context

In Java RTS 2.0, when a No-Heap Realtime (NHRT) thread was created, it acquired its inherited access control context in the normal manner. However, when an NHRT thread created another thread, the inherited access control context of the new thread would be set to null. This was done to avoid having the NHRT thread encounter a heap-allocated AccessControlContext instance. The net effect of this is that the created thread, due to the missing inherited access control context, might have a higher level of permissions than it would otherwise have had.

In Java RTS 2.0 Update 1, this behavior was modified as follows. When an NHRT thread is created, it is given a null inherited access control context. This ensures that the NHRT thread object will not hold a reference to a heap-allocated AccessControlContext instance. However, when the NHRT thread creates another thread, that thread (unless also an NHRT thread) will acquire an inherited access control context in the normal manner, based (in part) on the code in the current execution stack.

This change implies that the NHRT thread might have permissions it would otherwise not have had, but an NHRT thread (like a real-time thread) is already considered a privileged execution context that should only be created by, and execute, trusted code.

Further, it is a simple matter to remove permissions from the NHRT thread, or from any thread created by the NHRT thread, by using a custom security policy that does not grant the permission to the codebase from which the code for the thread comes, or via which the new thread is created. For example, suppose all classes in the application classpath form a codebase that has all permissions granted, and you wanted to restrict the permissions of a plain java.lang.Thread when created by a specific NHRT thread. You could define the following factory class to be used by the NHRT thread to do the Thread creation:

public class Restricted {
    public static Thread newThread(Runnable r) {
        return new Thread(r);
    }
}

Then you place this class in a separate codebase, which does not have the permissions of interest, in the custom security policy used. When the NHRT thread calls Restricted.newThread to create the new thread, the inherited access control context of the new thread will contain the restrictions applied to the Restricted class. When the new thread executes, its current access control context will also contain those restrictions and the thread will be unable to perform actions that require the permissions that have been denied.
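Such a policy could be sketched as follows. This is only an illustration: the codeBase URLs and the specific permission granted to the restricted codebase are hypothetical, and must be adapted to your actual deployment layout and security requirements.

```
// Hypothetical layout: all application classes get full permissions.
grant codeBase "file:/opt/app/classes/" {
    permission java.security.AllPermission;
};

// The codebase holding the Restricted factory is deliberately granted
// only the permissions that threads it creates should inherit.
grant codeBase "file:/opt/app/restricted/" {
    permission java.util.PropertyPermission "java.version", "read";
};
```

The policy file is then selected at launch time with the standard -Djava.security.policy option.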

For more details on access control contexts, security policies and permissions see the Java Platform Security Guide.

For backward compatibility with Java RTS 2.0, two system properties can be defined to restore the old behavior:

  • rtsj.security.setNHRTInheritedACC

    If this property is defined, then at construction, an NHRT thread will acquire an inherited access control context using the normal mechanism, rather than setting it to null. Note that if this inherited access control context is heap-allocated, then the NHRT thread might encounter a MemoryAccessError if it attempts a security-checked action.

  • rtsj.security.IgnoreInheritedACCFromNHRT

    If this property is defined, then at construction, a thread created by an NHRT thread will have its inherited access control context set to null.

[Contents]


Minimum Period Enforced

Java RTS enforces a minimum period for periodic release parameters that are associated with a real-time thread. Periods shorter than the minimum value are accepted but are silently raised to that minimum value to try to avoid system hangs. No errors, warnings, or exceptions are currently reported.

This minimum period value is set by default to 50 microseconds.

  • Solaris: This value can be changed by editing the /platform/<arch>/kernel/drv/cyclic.conf file, setting the cyclic_min_interval variable to the appropriate value (expressed in nanoseconds), and rebooting your system.

    Note that lowering the value of this minimum period can potentially cause your system to hang, if an application specifies an unreasonably short period.

  • Linux: The minimum period value is hard-coded to 50 microseconds and cannot be changed.

[Contents]


Abbreviations Used in This Document

AEH - Asynchronous Event Handler

ATC - Asynchronous Transfer of Control

GC - Garbage Collector

ITC - Initialization Time Compilation

Java RTS - Sun Java™ Real-Time System

JIT - Just In Time compilation

JLT - an instance of the java.lang.Thread class.

NHRT - No Heap Realtime Thread; an instance of the NoHeapRealtimeThread class.

RTGC - Real-Time Garbage Collector

RTT - Realtime Thread; an instance of the RealtimeThread class.

RTSJ - Real-time Specification for Java

[Contents]


Copyright © 2008, 2010, Oracle Corporation and/or its affiliates