During the Solaris boot procedure, various facets of the Solaris Resource Manager software are enabled at different points. The major steps are presented here:
When the kernel starts, various parameters are loaded from the /etc/system file. Some of these affect Solaris Resource Manager. These are documented in the next section, Booting Without Solaris Resource Manager.
As the kernel continues its initialization, after process 0 has been created, but before process 1 is started, Solaris Resource Manager is initialized by starting init in the Solaris Resource Manager CPU scheduling class (SHR) instead of the default scheduling class. The SHR module is loaded and process 1 (the init process) is scheduled by the Solaris Resource Manager software. (See init(1M).)
Initially the init process and all its children are attached to the surrogate root lnode.
When the kernel is fully initialized, the system will transition from single-user to one of the multi-user modes (usually run-level 2 or 3). Early in this procedure, the /etc/init.d/init.srm script is run. The actions performed by this script are described in Boot Sequence Events and enable normal Solaris Resource Manager operations.
If you must boot the system without Solaris Resource Manager active, change the initclass variable in the /etc/system file to refer to timesharing (TS) instead of SHR. A simple way of doing this is to use the -a (ask) option of the boot command, which prompts for a system file. Press the Return key to accept the default values at the other prompts; when prompted for the name of the system file, type etc/system.noshrload (no leading slash). Here is an example of the procedure:
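The following console transcript is illustrative only; the exact prompts and defaults vary by system and release, and the Return responses to the other prompts are elided. Only the etc/system.noshrload response comes from the text above.

```
ok boot -a
...
Name of system file [etc/system]: etc/system.noshrload
...
```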
Note that /etc/system.noshrload is a backup copy of /etc/system made at the time Solaris Resource Manager was installed. If there have been subsequent edits to /etc/system, then /etc/system.noshrload should be maintained in parallel so that it differs only by the occurrence of the Solaris Resource Manager modification:
# diff /etc/system /etc/system.noshrload
< # enable srm
< set initclass="SHR"
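A quick way to confirm that the two files are still in parallel is to diff them and check that only the Solaris Resource Manager lines differ. The following sketch demonstrates this against mock copies in a scratch directory (the file contents here are invented stand-ins; do not run edits of this kind against the live /etc files without care):

```shell
#!/bin/sh
# Build mock copies of /etc/system and /etc/system.noshrload in a
# scratch directory, then verify they differ only by the SRM lines.
mkdir -p /tmp/srm-demo && cd /tmp/srm-demo

cat > system <<'EOF'
* standard settings
set maxusers=64
# enable srm
set initclass="SHR"
EOF

cat > system.noshrload <<'EOF'
* standard settings
set maxusers=64
EOF

# diff should report only the Solaris Resource Manager addition;
# strip the range line so just the differing lines remain.
diff system system.noshrload | grep -v '^[0-9]' > delta
cat delta
```

If anything other than the initclass modification (and its comment) appears in the output, the backup copy has fallen out of step and should be updated.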
The sequence in which events occur while switching to multi-user mode is particularly important in Solaris Resource Manager. The following sequence of steps correctly establishes the Solaris Resource Manager system:
1. Configure and enable Solaris Resource Manager using the srmadm command. At this point, the limits database will be opened and the SHR scheduler will be enabled. See Enabling Solaris Resource Manager Using srmadm for information on this process.
2. Assign the 'lost' (srmlost) and 'idle' (srmidle) lnodes.
3. Start the Solaris Resource Manager daemon. See Starting the Solaris Resource Manager Daemon for information on this procedure.
4. Start other system daemons on an appropriate lnode.
The default script used in Steps 1 through 3 of the above process is shown in the appendix.
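A minimal sketch of such a script is shown below. The overall sequence follows Steps 1 through 3, but the command paths and the srmadm argument syntax shown here are assumptions; the script shipped with the product and the srmadm(1MSRM) man page are authoritative.

```shell
#!/sbin/sh
# Illustrative outline of an /etc/init.d/init.srm-style script.
# All paths and srmadm arguments below are assumptions.

# Step 1: open the limits database, set scheduler parameters, and
# enable limit enforcement and the SHR scheduler.
/usr/srm/sbin/srmadm set share=y:limits=y          # assumed syntax

# Step 2: assign the 'lost' and 'idle' lnodes.
/usr/srm/sbin/srmadm set lost=srmlost:idle=srmidle # assumed syntax

# Step 3: start the Solaris Resource Manager daemon.
/usr/srm/lib/limdaemon                             # assumed path
```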
Of particular importance is the attachment of daemons (system maintenance processes which normally run permanently) to an lnode other than the root lnode. Processes attached to the root lnode are scheduled specially and will always be given all the CPU resources they demand, so it is best not to attach any process that is potentially CPU-intensive to the root lnode. Attaching daemons to their own lnode allows the central system administrator to allocate them a suitable CPU share.
During the boot procedure, each new process inherits its lnode attachment from its parent process. Since the init process is attached to the root lnode, so are all subsequent processes. Until the Solaris Resource Manager initialization script is run and the limits database is opened, processes cannot be attached to other lnodes; even then this only happens when a process does an explicit setuid system call (using login(1), for example) or explicitly asks Solaris Resource Manager to attach to a nominated lnode, like the srmuser(1SRM) command does. Running a program with the setuid file mode bit set does not change the lnode attachment.
Consequently, all system programs started automatically during system startup will be attached to the root lnode. This is often not desirable, since any process attached to the root lnode that becomes CPU intensive will severely disrupt the execution of other processes. Therefore, it is recommended that any daemon processes started as part of the boot procedure be explicitly attached to their own lnode by using the srmuser command to invoke them. This will not affect their real or effective UIDs.
A possible example is shown here:
/usr/srm/bin/srmuser network in.named
This could be used to replace the existing invocation of the named(1M) daemon in its startup script. This requires that a user account and lnode for network be established beforehand.
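In the startup script itself, the change amounts to wrapping the existing invocation; the original line and the daemon's path shown here are illustrative, and only the srmuser wrapping comes from the document:

```shell
# Original invocation in the startup script (illustrative):
#     /usr/sbin/in.named
# Replacement: start named attached to the 'network' lnode:
/usr/srm/bin/srmuser network /usr/sbin/in.named
```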
The srmadm command allows the administrator to control the operating state and system-wide configuration of Solaris Resource Manager. It is typically run during the transition to run-level 2 or 3, from within the Solaris Resource Manager init.d(4) script /etc/init.d/init.srm, to set appropriate values for all parameters each time the system is booted and to ensure that Solaris Resource Manager is enabled before users have access to the system. The srmadm command is also used to administer the global Solaris Resource Manager parameters; see the srmadm(1MSRM) man page for a list of the parameters that can be set. The srmadm commands issued in the Solaris Resource Manager init.d script will:
Open the limits database. Up until this point, any processes that are started are attached automatically to a surrogate root lnode. The surrogate root lnode is used to ensure that there is always an lnode available to connect processes to, regardless of the operational state of Solaris Resource Manager. For this reason, it is important that the limits database be opened before any non-root processes are started. When the limits database is opened, the values in the usage attributes in the surrogate root lnode are added into their counterparts in the real root lnode. This ensures that usage accrued before the limits database was opened is not discarded. A limitation of this technique is that any net decrease in usage will not be counted.
Enable limit enforcement.
Set the parameters which control the behavior of the Solaris Resource Manager SHR scheduler, for example, the usage decay rate.
Enable the SHR scheduler. Prior to this, processes in the SHR scheduling class are scheduled in a simple round-robin fashion and the CPU entitlements set within the Solaris Resource Manager system have no effect.
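The usage merge performed when the limits database is opened is a simple addition of the surrogate root lnode's usage counters into the real root lnode's. As a toy model (the numbers and variable names are invented for illustration; the real usages are per-attribute kernel counters):

```shell
#!/bin/sh
# Toy model of the usage merge at limits-database open (values invented).
surrogate_usage=120   # usage accrued against the surrogate root lnode
root_usage=40         # usage already recorded in the real root lnode

# On open, surrogate usages are added into the real root lnode,
# so usage accrued before the open is not lost:
root_usage=$((root_usage + surrogate_usage))
echo "root lnode usage after open: $root_usage"
```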
Refer to Global Solaris Resource Manager Parameters via srmadm for some common invocations of the srmadm command.
The limdaemon(1MSRM) program is the Solaris Resource Manager user-mode daemon. It is normally invoked at transition to run-level 2 or 3 as the last step in the Solaris Resource Manager init.d script. It should not be confused with the srmgr system process (in the SYS class), which is initiated by the kernel. The following ps(1) listing shows both processes:
# ps -efc | egrep 'limdaemon|srmgr'
    root     4     0  SYS  60   18:42:14 ?        0:05 srmgr
    root    92     1  SHR  19   18:42:32 ?        0:41 limdaemon
The limdaemon program performs the following functions:
Receives notification messages and delivers them to the terminals of destination users
Receives login and logout notification messages, maintaining an exact record of all Solaris Resource Manager login sessions currently in progress
Periodically updates the connect-time usages for all users who have Solaris Resource Manager login sessions currently in progress (optional)
Detects users who have reached their connect-time limit and, after a grace interval, kills their processes and logs them out (optional)
Logs all actions using syslog(3C) to syslogd(1M)
When notified of Solaris Resource Manager login sessions, limdaemon monitors the terminal connect-time of all users and checks it against their connect-time limits. When limits are nearly reached, users are sent notification messages. Once the expiration time is reached, a further grace period is allowed before all their processes are terminated and they are logged out.
The limdaemon program also decays connect-time usages. If connect-time limits are used, usage decay for the terminal device category must be performed. Refer to Using limdaemon for information on limdaemon command-line options.