Real-time response is guaranteed when certain conditions are met. This section identifies these conditions and some of the more significant design errors that can cause problems or disable a system.
Most of the potential problems described here can degrade a system's response time. One of them can freeze a workstation. Other, more subtle, mistakes include priority inversion and system overload.
A Solaris real-time process:
Runs in the RT scheduling class, as described in "Scheduling".
Locks down all the memory in its process address space, as described in "Memory Locking".
Is from a statically-linked program or from a program in which all dynamic binding is completed early, as described in "Shared Libraries".
A process that meets these requirements:

Acquires the processor within the guaranteed dispatch latency period of becoming runnable (see "Dispatch Latency")

Continues to run for as long as it remains the highest-priority runnable process

Real-time operations are described in this chapter in terms of single-threaded processes, but the description also applies to multithreaded processes (for detailed information about multithreaded processes, see the Multithreaded Programming Guide). To guarantee real-time scheduling of a thread, it must be created as a bound thread, and the thread's LWP must be run in the RT scheduling class. The locking of memory and early dynamic binding is effective for all threads in a process.
A real-time process can lose control of the processor or can be unable to gain control of the processor because of other events on the system. These events include external events (such as interrupts), resource starvation, waiting on external events (synchronous I/O), and preemption by a higher priority process.
The problems described in this section all increase the response time of the system to varying extents. The degradation can be serious enough to cause an application to miss a critical deadline.
Real-time processing can also significantly impact other applications active on a system running a real-time application. Because real-time processes have higher priority, time-sharing processes can be prevented from running for significant amounts of time. This can noticeably slow interactive activities such as display updates and keyboard response.
SunOS 5.0 through 5.8 places no bound on the time an I/O event takes to complete. Synchronous I/O calls should therefore never be included in any program segment whose execution is time-critical; even program segments that can tolerate very large time bounds must not perform synchronous I/O. Mass storage I/O is such a case: a synchronous read or write operation suspends the caller for as long as the operation takes place.
A common application mistake is to perform I/O to retrieve error message text from disk. Such I/O should be done from an independent, non-real-time process or thread.
Interrupt priorities are independent of process priorities. Prioritizing processes does not carry through to prioritizing the services of hardware interrupts that result from the actions of the processes. This means that interrupt processing for a device controlled by a real-time process is not necessarily performed before interrupt processing for another device controlled by a time-sharing process.
Time-sharing processes can save significant amounts of memory by using dynamically linked, shared libraries. This type of linking is implemented through a form of file mapping, so the first reference to a dynamically linked routine causes an implicit read from the library file.
Real-time programs can use shared libraries, yet avoid dynamic binding, by setting the environment variable LD_BIND_NOW to a non-NULL value when the program is invoked. This forces all dynamic linking to be bound before the program begins execution. See the Linker and Libraries Guide for more information.
A time-sharing process can block a real-time process by acquiring a resource that the real-time process requires. Priority inversion is the condition in which a higher-priority process is blocked by a lower-priority process. The term blocking describes a situation in which a process must wait for one or more other processes to relinquish control of resources. If this blocking is prolonged, even for lower-level resources, deadlines might be missed.
By way of illustration, consider the case in Figure 8-1: a high-priority process that wants to use a shared resource is blocked because a lower-priority process holds the resource, and the lower-priority process is then preempted by an intermediate-priority process. This condition can persist for an arbitrarily long time, because the time the high-priority process must wait for the resource depends not only on the duration of the critical section being executed by the lower-priority process, but also on how long it takes the intermediate-priority process to block. Any number of intermediate-priority processes can be involved.
This issue and the methods of dealing with it are described in "Mutual Exclusion Lock Attributes" in Multithreaded Programming Guide.
A page is permanently locked into memory when its lock count reaches 65535 (0xFFFF). The value 0xFFFF is implementation-defined and might change in future releases. Pages locked this way cannot be unlocked.
Runaway real-time processes can cause the system to halt or can slow the system response so much that the system appears to halt.
When a high-priority real-time process does not relinquish control of the CPU, there is no simple way to regain control of the system until the runaway process is forced to terminate. Such a process does not respond to Control-C, and attempts to use a shell set at a higher priority than the runaway process do not work.

If you have a runaway process on a SPARC system, press Stop-A; you might have to do so more than once. If this does not work, or on non-SPARC systems, turn the power off, wait a moment, then turn it back on.
There is no guarantee that asynchronous I/O operations will be performed in the sequence in which they are queued to the kernel, nor that their results will be returned to the caller in the sequence in which the operations completed.
If a single buffer is specified for a rapid sequence of calls to aioread(3AIO), there is no guarantee about the state of the buffer between the time that the first call is made and the time that the last result is signaled to the caller.
SunOS 5.0 through 5.8 provides no facilities to ensure that files are allocated in physically contiguous blocks.