Solaris Resource Manager 1.3 System Administration Guide

Chapter 4 Boot Procedure

During the Solaris boot procedure, various facets of the Solaris Resource Manager software are enabled at different points. The major steps are described in the sections that follow.

Booting Without Solaris Resource Manager

If you must boot the system without Solaris Resource Manager active, the initclass variable in the /etc/system file must refer to timesharing (TS) instead of SHR. A simple way to achieve this without editing /etc/system itself is to use the -a (ask) option of the boot command, which prompts for the name of a system file. Press the Return key to accept the default values at the other prompts; when you are prompted for the name of the system file, type etc/system.noshrload (no leading slash). Here is an example of the procedure:

ok boot -a  
Booting from: sd(0,0,0) -a 
Enter filename [kernel/unix]:
Enter default directory for modules
 [/platform/SUNW,UltraSPARC/kernel /kernel /usr/kernel]: 
SunOS Release 5.6 Version ... [UNIX(R) System V Release 4.0]
Copyright (c) 1983-1997, Sun Microsystems, Inc. 
Name of system file [etc/system]: etc/system.noshrload
root filesystem type [ufs]: 
Enter physical name of root device 
 [/sbus@1,f8000000/esp@0,800000/sd@3,0:a]:
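
After the system comes up, a quick way to confirm that the SHR class is not in effect is to check the scheduling class of the init process; the CLS column of the output should show TS rather than SHR:

# ps -cp 1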

Note that /etc/system.noshrload is a backup copy of /etc/system made when Solaris Resource Manager was installed. If /etc/system is edited afterward, /etc/system.noshrload should be maintained in parallel so that the two files differ only in the Solaris Resource Manager modification:

# diff /etc/system /etc/system.noshrload 
< # enable srm 
< set initclass="SHR"
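
If /etc/system is edited later, the backup need not be updated by hand; it can be regenerated by stripping the Solaris Resource Manager lines from the current file. A minimal sketch, assuming the two-line modification shown above is the only difference between the files:

# sed -e '/^# enable srm/d' -e '/^set initclass="SHR"/d' \
      /etc/system > /etc/system.noshrload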

Boot Sequence Events

The sequence in which events occur while switching to multi-user mode is particularly important in Solaris Resource Manager. The following sequence of steps correctly establishes the Solaris Resource Manager system:

  1. Configure and enable Solaris Resource Manager using the srmadm command.

    At this point, the limits database will be opened and the SHR scheduler will be enabled. See Enabling Solaris Resource Manager Using srmadm for information on this process.

  2. Assign the 'lost' (srmlost) and 'idle' (srmidle) lnodes.

  3. Start the Solaris Resource Manager daemon.

    See Starting the Solaris Resource Manager Daemon for information on this procedure.

  4. Start other system daemons on an appropriate lnode.

The default script used in Steps 1 through 3 of the above process is shown in the appendix.
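
The following is a hypothetical sketch of how such a script might be laid out; it shows the required ordering only. The limdaemon path is an assumption, the srmadm and lnode-assignment invocations are deliberately left as comments because they are release-specific, and the script shipped with the product, reproduced in the appendix, is authoritative.

#!/sbin/sh
# Hypothetical outline of a Solaris Resource Manager startup script.
case "$1" in
'start')
        # Step 1: configure and enable Solaris Resource Manager with
        # srmadm; this opens the limits database and enables the SHR
        # scheduler.  See srmadm(1MSRM) for the actual arguments.

        # Step 2: assign the 'lost' (srmlost) and 'idle' (srmidle)
        # lnodes before any daemons are started.

        # Step 3: start the user-mode daemon last (path assumed).
        /usr/srm/bin/limdaemon
        ;;
esac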

System Daemon Processes

Of particular importance is the attachment of daemons (system maintenance processes which normally run permanently) to an lnode other than the root lnode. Processes attached to the root lnode are scheduled specially and will always be given all the CPU resources they demand, so it is best not to attach any process that is potentially CPU-intensive to the root lnode. Attaching daemons to their own lnode allows the central system administrator to allocate them a suitable CPU share.

During the boot procedure, each new process inherits its lnode attachment from its parent process. Because the init process is attached to the root lnode, so are all subsequent processes. Processes cannot be attached to other lnodes until the Solaris Resource Manager initialization script has run and the limits database is open. Even then, a process changes lnodes only when it makes an explicit setuid system call (as login(1) does, for example) or explicitly asks Solaris Resource Manager to attach it to a nominated lnode (as the srmuser(1SRM) command does). Running a program with the setuid file mode bit set does not change the lnode attachment.

Consequently, all system programs started automatically during system startup will be attached to the root lnode. This is often not desirable, since any process attached to the root lnode that becomes CPU intensive will severely disrupt the execution of other processes. Therefore, it is recommended that any daemon processes started as part of the boot procedure be explicitly attached to their own lnode by using the srmuser command to invoke them. This will not affect their real or effective UIDs.

A possible example is shown here:

/usr/srm/bin/srmuser network in.named 

This line could replace the existing invocation of the named(1M) daemon in its startup script. It requires that a user account and lnode for network be established beforehand.
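
In practice this means editing the startup script that invokes the daemon. For example, if in.named is started from /etc/init.d/inetsvc (the script name and daemon path here are assumptions for illustration), the change might look like this:

# Existing invocation:
#       /usr/sbin/in.named
# Replaced with an invocation attached to the 'network' lnode:
        /usr/srm/bin/srmuser network /usr/sbin/in.named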

Enabling Solaris Resource Manager Using srmadm

The srmadm command allows the administrator to control the operating state and system-wide configuration of Solaris Resource Manager. This command is typically used during the transition to run level 2 or 3, from within the Solaris Resource Manager init.d(4) script /etc/init.d/init.srm. It is run to ensure that appropriate values for all parameters are set each time the system is booted, and that the Solaris Resource Manager system is enabled before users have access to the system. The srmadm command is also used to administer the global Solaris Resource Manager parameters; see the srmadm(1MSRM) man page for a list of the parameters that can be set. The srmadm commands issued in the Solaris Resource Manager init.d script enable the Solaris Resource Manager system, open the limits database, and set the global parameters to appropriate values.

Refer to Global Solaris Resource Manager Parameters via srmadm for some common invocations of the srmadm command.
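
The current operating state can also be inspected interactively, for example with the show form of the command (see srmadm(1MSRM) for the output fields):

# /usr/srm/bin/srmadm show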

Starting the Solaris Resource Manager Daemon

The limdaemon(1MSRM) program is the Solaris Resource Manager user-mode daemon. It is normally invoked during the transition to run level 2 or 3, as the last step in the Solaris Resource Manager init.d script. It should not be confused with the srmgr system process (in the SYS class), which is started by the kernel. The following ps(1) listing shows both processes:

# ps -efc | egrep 'limdaemon|srmgr' 
root     4     0  SYS  60 18:42:14 ?        0:05 srmgr    
root    92     1  SHR  19 18:42:32 ?        0:41 limdaemon
 

The limdaemon program performs the following functions:

  - When notified of Solaris Resource Manager login sessions, limdaemon monitors the terminal connect-time of all users and checks it against their connect-time limits. When a limit is nearly reached, the user is sent notification messages; once the expiration time is reached, a further grace period is allowed before all of the user's processes are terminated and the user is logged out.

  - It decays connect-time usages. Usage decay for the terminal device category must be performed if connect-time limits are used.

Refer to Using limdaemon for information on limdaemon command-line options.
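
The same check can be scripted for monitoring purposes. A small sketch based on the ps(1) listing above (process names as shown there):

#!/bin/sh
# Confirm that the SRM kernel scheduler process (srmgr) and the
# user-mode daemon (limdaemon) are both running.  The bracketed
# patterns stop egrep from matching its own entry in the ps output.
if ps -efc | egrep '[s]rmgr' > /dev/null &&
   ps -efc | egrep '[l]imdaemon' > /dev/null
then
        echo "srmgr and limdaemon are running"
else
        echo "one or both SRM processes are missing" >&2
        exit 1
fi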