The examples in this section show you how to use rcapstat to monitor the resource utilization of collections of processes that have physical memory caps defined.
Caps are defined for two lnodes associated with two users. user1 has a cap of 50 megabytes, and user2 has a cap of 10 megabytes.
The following command produces reports at 5-second sampling intervals. A report will be issued five times, once after each sample.
usermachine% rcapstat 5 5
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   50M   50M    0K 3312K    0K
 78194 user2      1 2368K 1856K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   50M    0K    0K    0K    0K
 78194 user2      1 2368K 1856K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   50M    0K    0K    0K    0K
 78194 user2      1 2368K 1928K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   50M    0K    0K    0K    0K
 78194 user2      1 2368K 1928K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   50M    0K    0K    0K    0K
 78194 user2      1 2368K 1928K   10M    0K    0K    0K    0K
The first three lines of output constitute the first report, which contains the cap and lnode information for the two lnodes, along with paging statistics accumulated since rcapd was started. The at and pg columns are greater than zero for user1 and zero for user2, which indicates that at some point in the daemon's history user1 exceeded its cap but user2 did not.
The subsequent reports contain paging statistics for the interval since the previous report; they show no significant activity.
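The at and pg columns lend themselves to simple scripting. The following sketch (a hypothetical helper, not part of the rcapstat distribution) reads rcapstat output on standard input and prints each lnode whose at column is nonzero, that is, each lnode that has exceeded its cap at some point:

```shell
# exceeded_lnodes: hypothetical helper.  Reads rcapstat output on
# stdin and prints the lnode name (field 2) of every line whose "at"
# column (field 7) is nonzero, skipping the header lines.
exceeded_lnodes() {
    awk '$1 != "id" && $7 != "0K" { print $2 }'
}

# Example usage (hypothetical): rcapstat 5 1 | exceeded_lnodes
```

Applied to the first report above, this would print only user1, since user2's at column is 0K.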
The limadm(1MSRM) command can be used to lower the memory cap of an lnode, making the cap more restrictive. rcapd enforces the new cap after the next configuration interval (see rcapadm(1MSRM)). Alternatively, sending rcapd a SIGHUP signal causes it to enforce the new cap immediately.
admin# limadm set rss.limit=30M user1
admin# pkill -HUP rcapd
The following command produces reports at 5-second sampling intervals. A report will be issued five times, once after each sample.
admin# rcapstat 5 5
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   35M   30M   50M    0K 3312K    0K
 78194 user2      1 2368K 1856K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   36M   30M   52M   52M  632K  632K
 78194 user2      1 2368K 2096K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   33M   30M   57M   52M  816K  632K
 78194 user2      1 2368K 1968K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   27M   30M 4792K 4792K   40K   40K
 78194 user2      1 2368K 1144K   10M    0K    0K    0K    0K
    id lnode  nproc    vm   rss   cap    at avgat    pg avgpg
112270 user1     24  123M   27M   30M    0K    0K    0K    0K
 78194 user2      1 2368K 1144K   10M    0K    0K    0K    0K
When the cap was lowered from 50 megabytes to 30 megabytes, rcapd responded by attempting to page out the 6 megabytes of resident memory above the cap value (the second report shows an RSS of 36 megabytes against the 30-megabyte cap). The goal was reached and, in fact, exceeded by a small amount: the RSS fell to 27 megabytes, 3 megabytes below the cap.
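The amount rcapd must page out is simply the resident set size minus the new cap. A minimal arithmetic sketch, using sample values from the second report above:

```shell
# Excess resident memory that rcapd must page out after the cap
# change: rss - cap.  Values in megabytes, taken from the second
# sample for user1.
rss_mb=36    # rss reported for user1
cap_mb=30    # new cap set with limadm
excess_mb=$((rss_mb - cap_mb))
echo "${excess_mb}M"    # prints "6M"
```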
The following command produces reports at 5-second sampling intervals. A report will be issued five times, once after each sample.
user1machine% rcapstat 5 5
    id project  nproc    vm   rss   cap    at avgat    pg avgpg
376565 user1       57  209M   46M   10M  440M  220M 5528K 2764K
376565 user1       57  209M   44M   10M  394M  131M 4912K 1637K
376565 user1       56  207M   43M   10M  440M  147M 6048K 2016K
376565 user1       56  207M   42M   10M  522M  174M 4368K 1456K
376565 user1       56  207M   44M   10M  482M  161M 3376K 1125K
In this example, the project user1 has an RSS in excess of its physical memory cap. The nonzero values in the pg column indicate that rcapd is consistently paging out memory as it attempts to meet the cap by lowering the physical memory utilization of the project's processes. However, rcapd is unsuccessful: the rss values vary but show no corresponding decrease. This means that the application is actively using its resident memory, forcing rcapd to page out parts of the working set. Under this condition, the system continues to experience high page fault rates, and the associated I/O, until the working set size is reduced, the cap is raised, or the application changes its memory access pattern.
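This condition can be recognized mechanically: paging in every sample, with no sustained drop in rss. The following sketch is a hypothetical helper (not part of rcapstat) that reads samples for one project on standard input and reports whether rcapd paged in every sample:

```shell
# detect_thrash: hypothetical helper.  Reads rcapstat samples on
# stdin; if the "pg" column (field 9) is nonzero in every sample,
# rcapd is paging continuously, which suggests the cap is cutting
# into the application's working set.
detect_thrash() {
    awk '
        $1 == "id" { next }        # skip header lines
        { n++ }
        $9 == "0K" { quiet++ }     # a sample with no paging
        END {
            if (n > 0 && quiet == 0)
                print "thrashing: paging in every sample"
            else
                print "ok"
        }'
}

# Example usage (hypothetical): rcapstat 5 5 | detect_thrash
```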
The following example is a continuation of the previous example, and it uses the same project.
example% rcapstat 5 5
    id project  nproc    vm   rss   cap    at avgat    pg avgpg
376565 user1       56  207M   44M   10M  381M  191M   15M 7924K
376565 user1       56  207M   46M   10M  479M  160M 2696K  898K
376565 user1       56  207M   46M   10M  424M  141M 7280K 2426K
376565 user1       56  207M   43M   10M  401M  201M 4808K 2404K
376565 user1       56  207M   43M   10M  456M  152M 4800K 1600K
376565 user1       56  207M   44M   10M  486M  162M 4064K 1354K
376565 user1       56  207M   52M  100M  191M   95M 1944K  972K
376565 user1       56  207M   55M  100M    0K    0K    0K    0K
376565 user1       56  207M   56M  100M    0K    0K    0K    0K
376565 user1       56  207M   56M  100M    0K    0K    0K    0K
376565 user1       56  207M   56M  100M    0K    0K    0K    0K
376565 user1       56  207M   56M  100M    0K    0K    0K    0K
By inhibiting cap enforcement, either by raising the cap of a project or by changing the minimum memory pressure at which caps are enforced (see rcapadm(1MSRM)), the resident set can again become the working set. The rss column might then stabilize at the project's working set size, as shown in this example. The working set size is the minimum cap value that allows the project's processes to operate without perpetually incurring page faults.
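Once the rss column has stabilized, the working set size can be read off as the largest rss observed. A sketch of a hypothetical helper that does this, assuming the rss values are reported in megabytes as in the example above:

```shell
# working_set_mb: hypothetical helper.  Reads rcapstat samples on
# stdin and prints the largest rss seen (field 5, assumed to carry
# an "M" suffix, e.g. "56M").  After enforcement stops and rss
# stabilizes, this approximates the project's working set size --
# the minimum cap that avoids perpetual page faults.
working_set_mb() {
    awk '
        $1 == "id" { next }          # skip header lines
        { mb = $5 + 0 }              # "56M" -> 56 (numeric prefix)
        mb > max { max = mb }
        END { print max "M" }'
}

# Example usage (hypothetical): rcapstat 5 5 | working_set_mb
```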