The following points relate to the structure of the scheduling tree, which is an area requiring special consideration by the central administrator:
The scheduling tree is the structure used by Solaris Resource Manager to implement a hierarchy of resource and privilege control. If a sub-administrator gains control over a sub-tree of the scheduling tree that the sub-administrator would normally not have access to, that person can gain access to additional resource usage and privileges without the approval of the central administrator. One way for this to happen is if an administrator removes an lnode and leaves an orphaned sub-tree behind.
The central administrator can use the limreport(1SRM) command with its built-in orphan identifier to find orphaned sections of the scheduling tree. Any orphans found should be reattached immediately.
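To illustrate what the orphan identifier checks for, the sketch below is a hypothetical model (not SRM source code): each lnode records its parent, and an lnode is orphaned when following parent links never reaches the root lnode. The UIDs and structure are invented for the example.

```python
def find_orphans(parents):
    """parents maps each lnode UID to its parent UID; UID 0 is the root lnode.
    An lnode is orphaned if walking its parent chain never reaches the root,
    either because an ancestor was removed or because the chain loops."""
    orphans = set()
    for uid in parents:
        seen = set()
        cur = uid
        while cur != 0:
            if cur in seen or cur not in parents:
                orphans.add(uid)  # cycle or missing ancestor: never reaches root
                break
            seen.add(cur)
            cur = parents[cur]
    return orphans

# Example: lnode 300's parent (999) was removed, orphaning 300 and its child 301.
tree = {100: 0, 200: 100, 300: 999, 301: 300}
print(sorted(find_orphans(tree)))  # [300, 301]
```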
When a new lnode is created, it is mostly zero-filled, which gives most flags the default value of inherit. This is the desired effect for most flags, because they are used to indicate device privileges. Two flags are explicitly cleared at lnode creation time: the uselimadm and admin flags. This prevents new users from automatically gaining any administrative privilege.
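The creation-time rule can be sketched as follows. This is an illustrative model only, not SRM's implementation; the flag names beyond uselimadm and admin, and the numeric flag values, are invented for the example.

```python
# Illustrative sketch: zero-filling a new lnode leaves every flag at the
# default value "inherit" (modeled here as 0); uselimadm and admin are then
# explicitly cleared so a new user never starts with administrative privilege.

INHERIT, SET, CLEAR = 0, 1, 2  # hypothetical encodings; 0 = zero-filled default
FLAG_NAMES = ["onelogin", "admin", "uselimadm", "nologin"]  # illustrative subset

def new_lnode():
    flags = {name: INHERIT for name in FLAG_NAMES}  # zero-fill => inherit
    flags["uselimadm"] = CLEAR  # explicitly cleared at creation time
    flags["admin"] = CLEAR
    return flags
```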
The tree shown below defines a structure consisting of several group headers and several ordinary users. The top of the tree is the root user. A group header lnode is shown with two integers, which represent the values of its cpu.shares and cpu.myshares attributes, respectively. A leaf lnode is shown with a single integer, which represents the value of its cpu.shares attribute only.
Using the previous figure as an example, nodes A, C, and N currently have processes attached to them. At the topmost level, the CPU would only need to be shared between A and M since there are no processes for W or any member of scheduling group W. The ratio of shares between A and M is 3:1, so the allocated share at the topmost level would be 75 percent to group A, and 25 percent to group M.
The 75 percent allocated to group A would then be shared between its active users (A and C), in the ratio of their shares within group A (that is, 1:2). Note that the myshares attribute is used when determining A's shares with respect to its children. User A would therefore get one third of the group's allocated share, and C would get the remaining two thirds. The whole of the allocation for group M would go to lnode N since it is the only lnode with processes.
The overall distribution of allocated share of available CPU would therefore be 0.25 for A, 0.5 for C, and 0.25 for N.
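The two-level calculation above can be reproduced with a short sketch. This is just the arithmetic of the example, not the scheduler itself; only groups and users with attached processes participate at each level, and a group header competes against its own children using its myshares value.

```python
def split(entitlement, shares):
    """Divide an entitlement among active members in proportion to their shares."""
    total = sum(shares.values())
    return {name: entitlement * s / total for name, s in shares.items()}

# Top level: only groups A (3 shares) and M (1 share) are active; W is idle.
top = split(1.0, {"A": 3, "M": 1})           # A-group 0.75, M-group 0.25

# Inside group A: header A competes with user C using A's myshares (1)
# against C's shares (2).
group_a = split(top["A"], {"A": 1, "C": 2})  # A 0.25, C 0.50

# Inside group M: N is the only lnode with processes, so it gets all of 0.25.
group_m = split(top["M"], {"N": 1})

result = {**group_a, **group_m}
print(result)  # {'A': 0.25, 'C': 0.5, 'N': 0.25}
```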
Further suppose that the A, C, and N processes are all continually demanding CPU and that the system has at most two CPUs. In this case, Solaris Resource Manager will schedule them so that the individual processes receive these percentages of total available CPU:
For the two A processes: 12.5 percent each
For the C process: 50 percent
For the three N processes: 8.3 percent each
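Dividing each lnode's entitlement evenly among its runnable processes reproduces the per-process figures above. The process counts are taken from the example (two for A, one for C, three for N).

```python
# Each lnode's allocation is divided evenly among its runnable processes.
entitlement = {"A": 0.25, "C": 0.50, "N": 0.25}
nprocs = {"A": 2, "C": 1, "N": 3}

per_proc = {u: entitlement[u] / nprocs[u] for u in entitlement}
for user, share in per_proc.items():
    print(f"{user}: {share * 100:.1f}% per process")
# A: 12.5% per process, C: 50.0% per process, N: 8.3% per process
```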
The rate of progress of the individual processes is controlled so that the target for each lnode is met. On a system with more than two CPUs and only these six runnable processes, the C process cannot consume its 50 percent entitlement, because a single process can use at most one CPU; the residue is shared between A and N in proportion to their entitlements.
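The last point can be illustrated with some back-of-the-envelope arithmetic. This assumes the residue is redistributed in proportion to A's and N's entitlements, as stated above; the scheduler's actual redistribution dynamics may differ in detail.

```python
# Illustrative arithmetic: on an n-CPU machine a single process can use at
# most 1/n of total CPU, so on four CPUs the lone C process caps at 25%.
ncpus = 4
target = {"A": 0.25, "C": 0.50, "N": 0.25}

c_used = min(target["C"], 1 / ncpus)  # C's single process caps at 1/4
residue = target["C"] - c_used        # 0.25 of total CPU left over

# Share the residue between A and N in proportion to their entitlements.
total_an = target["A"] + target["N"]
actual = {
    "A": target["A"] + residue * target["A"] / total_an,
    "C": c_used,
    "N": target["N"] + residue * target["N"] / total_an,
}
print(actual)  # {'A': 0.375, 'C': 0.25, 'N': 0.375}
```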