The host metrics chapter provides a description, collection statistics, the data source, multiple thresholds (where applicable), and user action information for each metric.
This metric category provides data on aggregate resource usage on a per-project basis. This metric category is available only for Solaris version 9 and later.
This metric displays the cumulative number of seconds that this process has spent waiting for CPU over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in Data Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Major Page Faults engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Minor Page Faults engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of character I/O bytes Read and Written by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of blocks Read by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of blocks Written by the process over its lifetime.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Involuntary Context Switches made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Messages Received by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Messages Sent by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Signals taken by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of system calls made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Voluntary Context Switches made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping on User Lock Waits over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in all other ways over its lifetime.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent Stopped over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of swap operations engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in System mode over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in System Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in System Traps over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in Text Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in User mode over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the number of processes owned by the project measured in the aggregate.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the percentage of CPU time used by the process.
Target Version | Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Solaris version 9 and later | Every 15 Minutes | Not Defined | Not Defined | User CPU Time is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the ratio of the process resident set size to physical memory.
Target Version | Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Solaris version 9 and later | Every 15 Minutes | Not Defined | Not Defined | User Process Memory Size is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Solaris CIM Object Manager
Specific to your site.
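As a quick sanity check, the ratio this metric reports can be reproduced by hand. The sketch below is illustrative only; the function name and inputs are hypothetical and are not part of the collector:

```python
def memory_percentage(rss_kb, physical_mem_kb):
    """Resident set size as a percentage of physical memory.

    Mirrors the ratio this metric reports: a value approaching 100%
    means the process's resident pages approach total RAM.
    (Illustrative sketch, not the agent's actual code.)
    """
    if physical_mem_kb <= 0:
        raise ValueError("physical memory must be positive")
    return 100.0 * rss_kb / physical_mem_kb

# Example: a 512 MB resident set on a host with 8 GB of RAM.
pct = memory_percentage(rss_kb=512 * 1024, physical_mem_kb=8 * 1024 * 1024)
print(f"{pct:.2f}%")  # 6.25%
```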
This metric displays the total number of kilobytes of memory consumed by the process heap at the time that it is sampled.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the resident set size of the process in kilobytes.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric category provides data on aggregate resource usage on a per-user basis.
This metric category is available for Solaris version 9 and later only.
This metric displays the cumulative number of seconds that this process has spent Waiting for CPU over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in Data Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Major Page Faults engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Minor Page Faults engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of character I/O bytes Read and Written by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of blocks Read by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of blocks Written by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Involuntary Context Switches made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Messages Received by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Messages Sent by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Signals taken by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of system calls made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Voluntary Context Switches made by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent Stopped over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in System mode over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in System Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in System Traps over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of Swap Operations engendered by the process over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in Text Page Faults over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping on User Lock Waits over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent in User mode over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the cumulative number of seconds that this process has spent sleeping in all other ways over its lifetime.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the number of processes owned by the user measured in the aggregate.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the number of threads active in the current process.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the percentage of CPU time used by the process.
Target Version | Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Solaris version 9 and later | Every 15 Minutes | Not Defined | Not Defined | User CPU Time is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the ratio of the process resident set size to physical memory.
Target Version | Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Solaris version 9 and later | Every 15 Minutes | Not Defined | Not Defined | User Process Memory Size is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Solaris CIM Object Manager
Specific to your site.
This metric displays the total number of kilobytes of memory consumed by the process heap at the time that it is sampled.
Target Version | Collection Frequency |
---|---|
Solaris version 9 and later | Every 15 Minutes |
The Solaris CIM Object Manager
Specific to your site.
This metric category provides information about batteries.
This metric category is for Dell PowerEdge only.
This metric displays the battery reading code.
Target Version | Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 15 Minutes | Not Defined | Not Defined | Battery reading code (Object identifier: 1.3.6.1.4.1.674.10892.1.600.50.1.6) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %BatteryReading% |
The metrics in this category provide information about the status of the boot environment.
This metric specifies if the boot environment will start up on the next reboot of the system.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/beadm list -H |
This metric provides the name of the boot environment.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/beadm list -H |
This metric specifies if the boot environment is active now.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/beadm list -H |
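The beadm list -H output that these boot-environment metrics are derived from is machine-parsable: one boot environment per line with semicolon-separated fields, where the active-flags field contains N (active now) and/or R (active on the next reboot). A minimal parsing sketch, assuming the typical Solaris 11 field order (verify against your release; this is not the agent's actual code):

```python
def parse_beadm_list(output):
    """Parse `beadm list -H` output (semicolon-separated fields).

    Assumed field order (typical of Solaris 11; verify locally):
    name;uuid;active-flags;mountpoint;space;policy;created
    The active-flags field contains 'N' if the boot environment is
    active now and 'R' if it becomes active on the next reboot.
    """
    envs = []
    for line in output.strip().splitlines():
        fields = line.split(";")
        flags = fields[2]
        envs.append({
            "name": fields[0],
            "active_now": "N" in flags,
            "active_on_reboot": "R" in flags,
        })
    return envs

# Illustrative sample output (values are made up).
sample = ("solaris;12345;NR;/;4.5G;static;2024-01-01\n"
          "solaris-backup;67890;-;-;2.1G;static;2023-12-01")
for be in parse_beadm_list(sample):
    print(be["name"], be["active_now"], be["active_on_reboot"])
```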
The Buffer Activity metric category provides information about OS memory buffer usage. This metric reports buffer activity for transfers, accesses, and cache (kernel block buffer cache) hit ratios per second.
This metric represents the number of reads from block devices to buffer cache as a percentage of all buffer reads.
Table 2-1 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Buffer Cache Read Hit Ratio %value%%%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
Specific to your site.
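On platforms where sar supplies this metric, the ratio corresponds to sar -b's %rcache, derived from lread/s (logical reads against the buffer cache) and bread/s (physical reads from block devices). A sketch of that computation, with illustrative counter names:

```python
def read_hit_ratio(lreads, breads):
    """Buffer cache read hit ratio, as sar's %rcache computes it:
    the fraction of logical reads satisfied without a physical read
    from a block device, expressed as a percentage.
    (Illustrative; the zero-read convention is an assumption.)
    """
    if lreads == 0:
        return 100.0  # no reads at all: treat as fully cached
    return 100.0 * (lreads - breads) / lreads

print(read_hit_ratio(lreads=1000, breads=50))  # 95.0
```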
This metric represents the number of reads performed on the buffer cache per second.
Table 2-2 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Buffer Cache Reads (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
Specific to your site.
This metric represents the number of writes from block devices to buffer cache as a percentage of all buffer writes.
Table 2-3 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Buffer Cache Write Hit Ratio %value%%%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
Specific to your site.
This metric represents the number of writes performed on the buffer cache per second.
Table 2-4 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Buffer Cache Writes (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
Specific to your site.
This metric represents the number of reads per second from character devices using physical I/O mechanisms.
Table 2-5 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Physical I/O Reads (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
An unusually high value might indicate an abnormal situation, so it is important to set thresholds based on the average value observed over a period of time. An abnormally high value may cause performance issues. The user action varies from case to case; observe the running processes to track down any errant process. Placing highly active directories on different disks may help.
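The guidance above (set thresholds based on the average value observed over a period of time) can be sketched as a mean-plus-deviation heuristic. This is purely illustrative; Enterprise Manager does not compute thresholds this way automatically, and the function name is hypothetical:

```python
import statistics

def suggest_thresholds(samples, warn_sigmas=2, crit_sigmas=3):
    """Suggest warning/critical thresholds from historical samples
    as mean + k standard deviations (an illustrative heuristic,
    not an Enterprise Manager feature)."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return mean + warn_sigmas * sd, mean + crit_sigmas * sd

# Reads/sec observed over a representative period (made-up values).
warn, crit = suggest_thresholds([120, 130, 125, 135, 128, 122])
print(f"warning: {warn:.1f}, critical: {crit:.1f}")
```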
This metric represents the number of writes per second from character devices using physical I/O mechanisms.
Table 2-6 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Physical I/O Writes (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
An unusually high value might indicate an abnormal situation, so it is important to set thresholds based on the average value observed over a period of time. An abnormally high value may cause performance issues. The user action varies from case to case; observe the running processes to track down any errant process. Placing highly active directories on different disks may help.
This metric represents the number of physical reads per second from block devices to the system buffer cache.
Table 2-7 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Physical Reads (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
An unusually high value might indicate an abnormal situation, so it is important to set thresholds based on the average value observed over a period of time. An abnormally high value may cause performance issues. The user action varies from case to case; observe the running processes to track down any errant process. Placing highly active directories on different disks may help.
This metric represents the number of physical writes per second from block devices to the system buffer cache.
Table 2-8 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 5 Minutes | Not Defined | Not Defined | Physical Writes (per second) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
An unusually high value might indicate an abnormal situation, so it is important to set thresholds based on the average value observed over a period of time. An abnormally high value may cause performance issues. The user action varies from case to case; observe the running processes to track down any errant process. Placing highly active directories on different disks may help.
The metrics in this category provide information about the temperature of the compute node.
The metric provides information on the processors of the host.
This is the size of the cache memory, measured in MB.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This is the clock frequency of the processor.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This indicates whether hyper-threading is enabled for this processor.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This is the implementation type of the processor.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This is the count of rows that have the same information in other columns, such as vendor_name or num_cores. It is added so that the table has at least one key.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This is the mask used to manufacture the CPU. The Solaris prtdiag command reports a CPU mask field; this column stores that information.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This represents the number of cores per physical CPU. For example, for dual-core processors this count is two, and for quad-core processors it is four.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This is the revision of the processor.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
This effectively represents the number of logical processors per physical processor. For example, for one dual-core processor with hyper-threading enabled, this value is four.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/proc/cpuinfo
None.
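The relationship described above is a simple product, sketched below. This is illustrative only; on a real system the topology is read from /proc/cpuinfo fields such as "siblings" and "cpu cores", and the function name here is hypothetical:

```python
def logical_processors(cores_per_cpu, hyperthreading):
    """Logical processors per physical processor: cores times
    threads per core (assumed 2 with hyper-threading, else 1)."""
    return cores_per_cpu * (2 if hyperthreading else 1)

print(logical_processors(2, True))   # 4: dual-core with hyper-threading, as in the text
print(logical_processors(4, False))  # 4: quad-core without hyper-threading
```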
The metrics in this category provide information about the CPU frequency state.
The metrics in this category provide information about the CPU power state.
This metric displays the CPU power state.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/powertop -d 1 -t 1 |
The metric in this category provides information about the overall average CPU usage.
The CPU Usage metric category provides information about the percentage of time the CPU was in various states, for example, idle state and wait state. The metric also provides information about the percentage of CPU time spent in user and system mode. All data is per-CPU in a multi-CPU system.
On HP Tru64, this information is available only as a cumulative total for all CPUs (monitored in the Load metric), not for each individual CPU. Hence, this metric is not available on HP Tru64.
This metric represents the percentage of time that the CPU was idle and the system did not have an outstanding disk I/O request. This metric checks the percentage of processor time in idle mode for the CPU(s) specified by the Host CPU parameter, such as cpu_stat0, CPU0, or * (for all CPUs on the system).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class cpu_stat) |
HP | pstat_getprocessor() system call |
Linux | /proc/stat |
HP Tru64 | not available |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates an underutilized CPU. The user action varies from case to case.
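On Linux, the idle, user, and system percentages in this category are all derived from the cumulative tick counters in /proc/stat by differencing two samples. A hedged sketch of that derivation (field order per the documented proc layout: user, nice, system, idle, iowait, ...; the tick values below are made up):

```python
def cpu_percentages(prev, curr):
    """Per-state CPU percentages from two samples of a /proc/stat
    'cpu' line, given as lists of numeric fields in the documented
    order: user, nice, system, idle, iowait, ...
    (Illustrative sketch, not the agent's actual code.)
    """
    deltas = [c - p for p, c in zip(prev, curr)]
    total = sum(deltas)
    user, nice, system, idle = deltas[0], deltas[1], deltas[2], deltas[3]
    return {
        "user": 100.0 * (user + nice) / total,
        "system": 100.0 * system / total,
        "idle": 100.0 * idle / total,
    }

# Two samples taken one collection interval apart (illustrative ticks).
prev = [1000, 10, 500, 8000, 100]
curr = [1600, 10, 800, 8500, 190]
pct = cpu_percentages(prev, curr)
print({k: round(v, 1) for k, v in pct.items()})
```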
This metric represents the percentage of time that the CPU receives and services hardware interruptions during representative intervals. This metric checks the percentage of processor time in interrupt mode for the CPU(s) specified by the Host CPU parameter, such as cpu_stat0, CPU0, or * (for all CPUs on the system).
This metric is available only on Windows.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
For this metric you can set different warning and critical threshold values for each "CPU Number" object.
If warning or critical threshold values are currently set for any "CPU Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "CPU Number" object, use the Edit Thresholds page. See the Editing Thresholds topic in the Enterprise Manager online help for information on accessing the Edit Thresholds page.
The data sources for this metric are Performance Data counters.
This indicates the amount of time the processor spends handling interrupts. If an unusually high value is observed, there is a possibility of a hardware problem.
This metric represents the percentage of time that the CPU is running in system mode (kernel). This metric checks the percentage of processor time in system mode for the CPU(s) specified by the Host CPU parameter, such as cpu_stat0, CPU0, or * (for all CPUs on the system).
Table 2-9 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux | Every 15 Minutes | Not Defined | Not Defined | CPU System Time (%%) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Linux | /proc/stat |
An abnormally high value (determined on the basis of historical data) indicates that the machine is doing a lot of work at the system (kernel) level. The user action varies from case to case.
This metric represents the portion of processor time running in user mode. This metric checks the percentage of processor time in user mode for the CPU(s) specified by the Host CPU parameter, such as cpu_stat0, CPU0, or * (for all CPUs on the system).
Table 2-10 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux | Every 15 Minutes | Not Defined | Not Defined | CPU User Time (%%) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Linux | /proc/stat |
An abnormally high value (determined on the basis of historical data) indicates that the CPU is doing a lot of work at the user (application) level. An examination of the top processes on the system may help identify problematic processes.
This metric represents the percentage utilization of a CPU.
Table 2-11 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All versions | Every 15 Minutes | Not Defined | Not Defined | CPU Utilization for %keyvalue% is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class cpu_stat) |
HP | pstat_getprocessor() system call |
Linux | /proc/stat |
HP Tru64 | not available |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates that the system is under heavy load. If the value is consistently high, consider reducing the load on the system.
This metric represents the percentage of time that the CPU was idle during which the system had an outstanding disk I/O request. This metric checks the percentage of processor time in wait mode for the CPU(s) specified by the Host CPU parameter, such as cpu_stat0, CPU0, or * (for all CPUs on the system).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | not available |
HP | pstat_getprocessor() system call |
Linux | not available |
HP Tru64 | not available |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
A high value indicates that the CPU spends a lot of time waiting for disk I/O to complete. Examine the disk errors and disk activity metrics to see whether there are any problems with disk performance. Consider keeping heavily accessed directories on separate disks.
The Disk Activity metric category monitors the hard disk activity on the target being monitored. For each device on the system, this metric provides information about access to the device. This information includes: device name, disk utilization, write statistics, and read statistics for the device.
This metric represents the sum of average wait time and average run time.
Table 2-12 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Average service time for disk %keyvalue% is %value% ms, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Disk Device" object.
If warning or critical threshold values are currently set for any "Disk Device" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Disk Device" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
This number should be low. A high number can indicate a disk that is slow due to excessive load or hardware issues. See also the CPU in IO-Wait (%) metric.
This metric represents the average time a command spends waiting on the queue before it is executed.
Table 2-13 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Average wait time for disk %keyvalue% is %value% ms, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Disk Device" object.
If warning or critical threshold values are currently set for any "Disk Device" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Disk Device" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
A high figure indicates a slow disk. Use the OS iostat -xn command to check wait time and service time for local disks and NFS mounted file systems. See also the CPU in IO-Wait (%) metric.
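The wait and service times suggested above can be pulled out of `iostat -xn` output. The sketch below assumes the Solaris column layout, where `wsvc_t` is the average wait time and `asvc_t` the average active service time (both in ms); the sample text and device names are made up for illustration:

```python
SAMPLE = """\
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.2    3.4   10.1   40.2  0.0  0.1    0.4    8.7   0   2 c0t0d0
    0.0    0.5    0.0    2.1  0.0  0.0    0.2   25.3   0   1 c0t1d0
"""

def disk_times(text):
    """Map device name -> (avg wait ms, avg service ms) from iostat -xn text."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    wi, si, di = header.index("wsvc_t"), header.index("asvc_t"), header.index("device")
    out = {}
    for line in lines[1:]:
        cols = line.split()
        out[cols[di]] = (float(cols[wi]), float(cols[si]))
    return out

print(disk_times(SAMPLE)["c0t1d0"])  # → (0.2, 25.3)
```

A device whose `asvc_t` is consistently high relative to its peers is a candidate for the load-balancing advice above.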
This metric represents the average number of commands waiting for service (queue length).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the average time a command spends on the active queue before its execution completes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the time spent in Input/Output operations (ms).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Linux: /proc/diskstats or /proc/partitions
Specific to your site.
This metric represents the number of disk reads since the last collection.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Linux: /proc/diskstats or /proc/partitions
Specific to your site.
This metric represents the number of disk writes since the last collection.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Linux: /proc/diskstats or /proc/partitions
Specific to your site.
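On Linux, the read, write, and time-in-I/O counters above come from /proc/diskstats. A minimal parsing sketch, assuming the field positions documented in the kernel's iostats documentation (major, minor, device name, then eleven counters); the sample line is illustrative:

```python
def parse_diskstats_line(line):
    """Return (device, reads_completed, writes_completed, ms_in_io)
    for one /proc/diskstats line."""
    f = line.split()
    # f[0]=major, f[1]=minor, f[2]=name, f[3]=reads completed,
    # f[7]=writes completed, f[12]=milliseconds spent doing I/O
    return f[2], int(f[3]), int(f[7]), int(f[12])

sample = "   8       0 sda 124321 530 4000000 90000 98765 210 3200000 450000 0 123456 540000"
print(parse_diskstats_line(sample))  # → ('sda', 124321, 98765, 123456)
```

The per-collection values reported by these metrics would be the differences between two such samples.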
This metric represents the number of blocks (512 bytes) written per second.
Table 2-14 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Disk Block Writes (per second) for disk %keyvalue% is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | not available |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the number of blocks (512 bytes) read per second.
Note: On HP UNIX, this metric is named Disk Blocks Transferred (per second).
Table 2-15 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Disk Blocks Reads (per second) for disk %keyvalue% is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the percentage of time that the disk device is busy.
Note: On HP UNIX, this metric is named Device Busy (%).
Table 2-16 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
80 |
95 |
Disk Device %keyValue% is %value%%% busy. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | pstat_getdisk system call |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the disk reads per second for the specified disk device.
Table 2-17 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Disk Reads (per second) for disk %keyvalue% is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | not available |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
This metric represents the disk writes per second for the specified disk device.
Table 2-18 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Disk Writes (per second) for disk %keyvalue% is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class kstat_io) |
HP | not available |
Linux | iostat command |
HP Tru64 | table() system call |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Specific to your site.
The metrics in this category provide a summary of disk activity.
This metric displays the longest service time for disk I/Os in milliseconds.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays the maximum disk I/O per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays how many disk I/Os are being performed per second.
Table 2-19 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Total Disk I/O made across all disks is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Disk Device Errors metric category provides the number of errors on the disk device.
Note: These metrics are available on Solaris only.
This metric represents the error count of hard errors encountered while accessing the disk. Hard errors are considered serious and may be traced to misconfigured or bad disk devices.
Target Version | Collection Frequency |
---|---|
All Versions | Every 72 Hours |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | iostat -e command |
Specific to your site.
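Since the data source is `iostat -e`, the hard error count is one column of its error summary. A hedged sketch that parses sample output in the Solaris layout (soft, hard, transport, and total error columns); the device names and counts below are made up:

```python
SAMPLE = """\
          ---- errors ---
device  s/w  h/w trn tot
sd0       0    2   0   2
sd1       1    0   0   1
"""

def hard_errors(text):
    """Map device -> hard error count from iostat -e style output."""
    out = {}
    for line in text.strip().splitlines():
        cols = line.split()
        # Data rows have 5 columns and numeric error counts.
        if len(cols) == 5 and cols[1].isdigit():
            out[cols[0]] = int(cols[2])  # h/w column
    return out

print(hard_errors(SAMPLE))  # → {'sd0': 2, 'sd1': 0}
```

The same parse, taken at the other column indexes, yields the soft, transport, and total error metrics in this category.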
This metric represents the error count of soft errors encountered while accessing the disk. Soft errors are synonymous with warnings.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 72 Hours |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | iostat -e command |
Specific to your site.
This metric represents the sum of all errors on the particular device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 72 Hours |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | iostat -e command |
Specific to your site.
This metric represents the error count of network errors encountered. This generally indicates a problem with the network layer.
Target Version | Collection Frequency |
---|---|
All Versions | Every 72 Hours |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | iostat -e command |
Specific to your site.
The Fans metric category monitors the status of the various fans present in the system.
Note: This metric category is available only on Dell PowerEdge Linux systems.
This metric represents the status of the fan.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-20 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of Fan at device %FanIndex% in chassis %ChassisIndex% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %FanStatus% |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index" and "Fan Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index" and "Fan Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index" and "Fan Index" objects, use the Edit Thresholds page.
SNMP MIB object: coolingDeviceStatus (1.3.6.1.4.1.674.10892.1.700.12.1.5)
None.
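The status values in the table above can be mapped to names and checked against the default thresholds (warning at 4, critical at 5). The sketch below assumes "at or above threshold triggers the alert" semantics, which is an assumption on top of the documented values:

```python
# Status values per the SNMP MIB table above (coolingDeviceStatus).
FAN_STATUS = {1: "Other", 2: "Unknown", 3: "Normal",
              4: "Warning", 5: "Critical", 6: "Non-Recoverable"}

def classify_fan(value, warning=4, critical=5):
    """Return the alert level implied by a coolingDeviceStatus value,
    assuming at-or-above threshold comparison."""
    if value >= critical:
        return "critical"
    if value >= warning:
        return "warning"
    return "clear"

print(classify_fan(3), classify_fan(4), classify_fan(6))  # → clear warning critical
```

Note that value 6 (Non-Recoverable) lands in "critical" under this comparison, which matches its severity ordering in the MIB.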
The File Access System Calls metric category provides information about the usage of file access system calls.
Note: This metric is available on Solaris, HP, and IBM AIX.
This metric represents the number of file system blocks read per second performing direct lookup.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval. The results are essentially the number of file system blocks read performing directory lookup over this five-second period divided by five.
None.
This metric represents the number of system iget() calls made per second. iget is a file access system routine.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structure (class cpu_vminfo) |
HP | sar command |
IBM AIX | kernel memory structure (class cpu_vminfo) |
This data is obtained using the OS sar command, which is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval. The results are essentially the number of iget() calls made over this five-second period divided by five.
This metric represents the number of file system lookuppn() (pathname translation) calls made per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval. The results are essentially the number of lookuppn() calls made over this five-second period divided by five.
None.
The File and Directory Monitoring metric category monitors various attributes of specific files and directories. Setting key-value-specific thresholds triggers the monitoring of the files or directories referred to in the given key value. The operator must specify key-value-specific thresholds to monitor any file or directory.
This metric reports issues encountered while fetching the attributes of a file or directory. Errors encountered while monitoring the files and directories specified by the key-value-based thresholds are reported.
Table 2-21 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
%file_attribute_not_found% . |
For this metric you can set different warning and critical threshold values for each "File or Directory Name" object.
If warning or critical threshold values are currently set for any "File or Directory Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File or Directory Name" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
HP | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
Linux | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
This metric looks for file and directory attributes such as the inode ID, user ID, and group ID. If they cannot be found, an alert is raised so that the user can investigate.
This metric fetches the octal value of file permissions on the different variations of UNIX operating systems including Linux. Setting a key value specific warning or critical threshold value against this metric would result in the monitoring of a critical file or directory. For example, to monitor the file permissions for file name /etc/passwd, you should set a threshold for /etc/passwd.
Table 2-22 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Current permissions for %file_name% are %file_permissions%, different from warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "File or Directory Name" object.
If warning or critical threshold values are currently set for any "File or Directory Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File or Directory Name" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
HP | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
Linux | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
IBM AIX | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
An alert is raised if the permissions for a file or directory have changed. You may want to verify the change.
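The octal permission value this metric reports can be reproduced with a stat call. A minimal sketch using Python's standard library, demonstrated on a throwaway file rather than a real system file such as /etc/passwd:

```python
import os
import stat
import tempfile

def octal_permissions(path):
    """Return the permission bits of a file as an octal string, e.g. '644'."""
    mode = os.stat(path).st_mode
    return format(stat.S_IMODE(mode), "o")

# Demonstrate on a temporary file instead of a monitored system file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
os.chmod(path, 0o640)
print(octal_permissions(path))  # → 640
os.unlink(path)
```

Comparing successive values of this function against a stored baseline is essentially what the permission-change alert does.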
This metric fetches the current size of the file or directory in megabytes. Setting a key-value-specific warning or critical threshold value against this metric results in the monitoring of a critical file or directory. For example, to monitor the size of the directory /absolute_directory_path, you should set a threshold for /absolute_directory_path.
Table 2-23 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Size of %file_name% is %file_size% MB, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "File or Directory Name" object.
If warning or critical threshold values are currently set for any "File or Directory Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File or Directory Name" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
HP | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
Linux | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
IBM AIX | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
If a threshold is exceeded, you may need to take action to adjust the file size or the threshold level.
This metric provides the rate at which the file's size is changing. Setting a key-value-specific warning or critical threshold value against this metric results in the monitoring of the critical file or directory. For example, to monitor the file change rate for the file name /absolute_file_path, the operator should set a threshold for /absolute_file_path.
Table 2-24 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
%file_name% is growing at the rate of %file_sizechangerate% (KB/hour), crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "File or Directory Name" object.
If warning or critical threshold values are currently set for any "File or Directory Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File or Directory Name" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
HP | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
Linux | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
IBM AIX | perl stat command for files; df for directories that are file system mount points; du for directories that are not file system mount points |
This metric reports the rate of change of the size of the file or directory. An abnormally high value (determined on the basis of historical data) indicates a sudden increase in size. You may want to take action based on the alert.
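The KB/hour figure in the alert text can be computed from two (timestamp, size) samples. A sketch of the arithmetic this metric implies:

```python
def size_change_rate_kb_per_hour(t0_s, size0_bytes, t1_s, size1_bytes):
    """Rate of change of a file's size in KB/hour between two samples
    (timestamps in seconds, sizes in bytes)."""
    hours = (t1_s - t0_s) / 3600.0
    kb = (size1_bytes - size0_bytes) / 1024.0
    return kb / hours

# Illustrative: a file grew by 1 MiB over 30 minutes -> 2048 KB/hour.
print(size_change_rate_kb_per_hour(0, 0, 1800, 1024 * 1024))  # → 2048.0
```

A negative result simply means the file shrank between samples.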
This metric category lists all file systems mounted on the host.
This metric provides the File System capacity in GB.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/etc/mtab
None.
This metric is applicable to Windows NT only.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/etc/mtab
None.
This metric provides the mount location of the file system.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/etc/mtab
None.
This metric contains details about the mount options. These could be similar to "rw,intr,largefiles,logging,xattr,onerror=panic."
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/etc/mtab
None.
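The data source for these metrics is /etc/mtab, whose whitespace-separated fields give the device, mount point, file system type, and mount options. A minimal parsing sketch; the sample entries are illustrative:

```python
SAMPLE_MTAB = """\
/dev/sda1 / ext4 rw,relatime 0 0
/dev/sdb1 /data xfs rw,noatime 0 0
"""

def parse_mtab(text):
    """Return a list of (device, mount_point, fs_type, options) tuples
    from /etc/mtab-style content."""
    entries = []
    for line in text.strip().splitlines():
        dev, mnt, fstype, opts = line.split()[:4]
        entries.append((dev, mnt, fstype, opts))
    return entries

print(parse_mtab(SAMPLE_MTAB)[1])  # → ('/dev/sdb1', '/data', 'xfs', 'rw,noatime')
```

On a live system you would pass the contents of /etc/mtab (or /proc/mounts) to this function.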
The Filesystems metrics provide information about local file systems on the computer.
This metric represents the name of the disk device resource.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /etc/mnttab file entries |
HP | bdf command |
Linux | df command |
HP Tru64 | df command |
IBM AIX | /etc/mnttab file entries |
Windows | not available |
None.
This metric represents the total space (in megabytes) allocated in the file system.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | statvfs() system call |
HP | bdf command |
Linux | df command |
HP Tru64 | df command |
IBM AIX | statvfs() system call |
Windows | Windows API |
None.
This metric represents the percentage of free space available in the file system.
Table 2-25 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
20 |
5 |
Filesystem %keyValue% has %value%%% available space, fallen below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Mount Point" object.
If warning or critical threshold values are currently set for any "Mount Point" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Mount Point" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | statvfs() system call |
HP | bdf command |
Linux | df command |
HP Tru64 | df command |
IBM AIX | statvfs() system call |
Windows | Windows API |
Use the OS du -k command to check which directories are taking up the most space (du -k|sort -rn).
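A programmatic equivalent of the `du -k | sort -rn` suggestion is to walk the tree and total file sizes. The sketch below approximates `du -k` (du counts allocated blocks; this counts apparent file bytes) and demonstrates it on a throwaway directory:

```python
import os
import tempfile

def dir_size_kb(path):
    """Approximate 'du -k' for a directory: total file bytes // 1024.
    (du reports allocated blocks; this sums apparent sizes.)"""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total // 1024

# Demonstrate on a throwaway tree containing one 4 KB file.
top = tempfile.mkdtemp()
with open(os.path.join(top, "blob"), "wb") as f:
    f.write(b"\0" * 4096)
print(dir_size_kb(top))  # → 4
```

Sorting subdirectories by this value descending reproduces the effect of `du -k | sort -rn` for finding the space hogs.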
This metric represents the amount (in MB) of free space available in the file system.
Table 2-26 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
20 |
5 |
Filesystem %keyValue% has %value% MB free space, fallen below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Mount Point" object.
If warning or critical threshold values are currently set for any "Mount Point" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Mount Point" object, use the Edit Thresholds page.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | statvfs() system call |
HP | bdf command |
Linux | df command |
HP Tru64 | df command |
IBM AIX | statvfs() system call |
Windows | Windows API |
Use the OS du -k command to check which directories are taking up the most space (du -k|sort -rn).
This metric represents the total space, expressed in megabytes, allocated in the file system.
This metric is available only on Windows.
The data source for this metric is GetDiskFreeSpaceEx.
A high value indicates that the file system has very little free space remaining. You might want to free up space on the file system.
The metrics in this category provide information about Fault Management Architecture (FMA) fault activity.
This metric provides the Automated System Reconfiguration (ASR) status for the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
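Most metrics in this category are parsed from `/usr/sbin/fmadm faulty`, which reports each fault under a TIME / EVENT-ID / MSG-ID / SEVERITY header. A hedged sketch that extracts those fields from sample output in that shape; the UUID, message ID, and severity below are made up, and the row-detection heuristic is an assumption:

```python
SAMPLE = """\
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Sep 21 10:01:36 d482f935-5c8f-e9ab-9f25-d0aaafec1e6c  AMD-8000-2F    Major
"""

def parse_fmadm(text):
    """Return (event_id, msg_id, severity) tuples from fmadm faulty output."""
    faults = []
    for line in text.splitlines():
        cols = line.split()
        # Heuristic: a data row ends with '<uuid> <msg-id> <severity>',
        # where the UUID contains exactly four hyphens.
        if len(cols) >= 6 and cols[-3].count("-") == 4:
            faults.append((cols[-3], cols[-2], cols[-1]))
    return faults

print(parse_fmadm(SAMPLE))
```

The EVENT-ID corresponds to the UUID metric, the MSG-ID to the message identifier used to look up the associated knowledge article, and SEVERITY to the fault severity metric.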
This metric provides the UUID that was assigned to this fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the action that was assigned to this fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric provides the description of the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the Diagnosis Engine that identified the problem.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric indicates whether or not an error has impacted the services provided by the device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides a message identifier that can be used to identify and view an associated knowledge article.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides a response to the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the severity of the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the status of the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the timestamp associated with the fault occurrence.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the probability (percentage) that the suspected event is the source of the problem.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the location of the suspected FRU.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the name of the manufacturer of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric provides the name of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the part number of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the suspect FRU resource.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the revision level of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the serial number of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the status of the FRU suspected of causing a fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
This metric provides the fault class, which represents a hierarchical classification string indicating the type of problem detected, as reported by the fault management subsystem.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/fmadm faulty |
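The fault records behind these metrics come from the `fmadm faulty` report. As an illustration only, the following sketch parses a hypothetical excerpt of that report with awk; the sample text and column positions are assumptions, and real output varies by Solaris release.

```shell
# Hypothetical excerpt of `fmadm faulty` output; real output varies by release.
cat <<'EOF' > /tmp/fmadm_sample.txt
TIME            EVENT-ID                              MSG-ID         SEVERITY
Sep 21 10:01:36 d482f935-5c8f-e9ab-9f25-d0aaafec1e6c  ZFS-8000-D3    Major
EOF

# Extract the event UUID and severity from the data row (assumed layout).
uuid=$(awk 'NR==2 {print $4}' /tmp/fmadm_sample.txt)
severity=$(awk 'NR==2 {print $6}' /tmp/fmadm_sample.txt)
echo "uuid=$uuid severity=$severity"
```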
The metrics in this category provide information about FMA SNMP traps.
This metric provides the UUID that was assigned to this problem.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric provides the status of the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This section describes the HCA Port Configuration monitor metrics.
Channel Adapter Display Name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Channel Adapter Name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Channel Adapter Type.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Firmware version of the HCA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The Globally Unique Identifier of the HCA node.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The hardware version of the HCA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This section describes the HCA Port Connections and Configuration metrics.
The Channel Adapter Display name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The HCA port number.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The Globally Unique Identifier of the switch to which this HCA port is connected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The port number of the switch to which this HCA port is connected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The HCA Node Globally Unique Identifier.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The HCA Port Globally Unique Identifier.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The enabled speed of this link (Gbps).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The enabled width of this link (for example, 1X or 4X).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This section describes the HCA Port Errors metrics.
This metric provides the HCA Node Globally Unique Identifier.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the HCA port number.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of errors of this type in the last collection interval.
Table 2-27 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% excessive buffer overruns, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of errors of this type in the last collection interval.
Table 2-28 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% link integrity errors, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of errors of this type in the last collection interval.
Table 2-29 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% link recovers, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of received packets discarded due to constraints.
Table 2-30 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% received packets discarded due to constraints, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of errors of this type in the last collection interval.
Table 2-31 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% received packets containing an error, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric shows the number of errors of this type in the last collection interval.
Table 2-32 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% received packets marked with the EBP delimiter, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric shows the number of errors of this type in the last collection interval.
Table 2-33 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% symbol errors, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of errors of this type in the last collection interval.
Table 2-34 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | 10 | Not Defined | Port %PortNumber% has %value% total errors, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric shows the number of errors of this type in the last collection interval.
Table 2-35 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% incoming VL15 packets dropped, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric shows the number of errors of this type in the last collection interval.
Table 2-36 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Port %PortNumber% has %value% packets not transmitted due to constraints, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
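The total-errors figure that Table 2-34 thresholds is an aggregate of the per-type counters above. The following sketch shows the threshold comparison with made-up counter values; the counter names and the idea of summing exactly these three here are illustrative assumptions.

```shell
# Hypothetical per-interval error counters for one HCA port.
symbol_errors=3
link_integrity_errors=2
buffer_overruns=7
warning_threshold=10   # default warning threshold from Table 2-34

# Sum the counters and compare the total against the warning threshold.
total=$((symbol_errors + link_integrity_errors + buffer_overruns))
if [ "$total" -gt "$warning_threshold" ]; then
  echo "Port 1 has $total total errors, crossed warning ($warning_threshold) threshold."
fi
```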
This section describes the HCA Port State metrics.
This metric displays the Globally Unique Identifier of the Host Channel Adapter node.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
This metric displays the port number of the Host Channel Adapter.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
This metric displays the active speed of this link (Gbps).
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
This metric displays whether the link is operating in degraded mode. If the active link speed or width is less than the enabled speed or width, respectively, then the link is operating in degraded mode.
This metric specifies the active width of this link (for example, 1X or 4X).
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
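The degraded-mode check described above can be sketched as a comparison of active against enabled link parameters; the sample values below are hypothetical.

```shell
# Hypothetical active vs. enabled link parameters for one HCA port.
active_width=1      # 1X
enabled_width=4     # 4X
active_speed=10     # Gbps
enabled_speed=10    # Gbps

# The link is degraded when either the active speed or the active width
# is below its enabled value.
if [ "$active_speed" -lt "$enabled_speed" ] || [ "$active_width" -lt "$enabled_width" ]; then
  degraded=yes
else
  degraded=no
fi
echo "degraded=$degraded"   # here: yes, because width 1X < enabled 4X
```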
This section describes the HCA Port State Alert metrics.
This metric indicates whether the HCA port is checking or polling for a peer port.
Table 2-39 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | 1 | Port %PortNumber%(%ca_disp_name%) is polling for peer port. This could happen when the cable is unplugged from one of the ends or the other end port is disabled. |
The metrics in this category provide information about the performance of the host channel adapter (HCA) port.
This metric displays the number of packets received per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
This metric displays the number of bytes received per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
This metric displays the link throughput.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
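Per-second rates such as these are conventionally derived from the change in a counter across the collection interval. A minimal sketch with made-up counter samples:

```shell
# Two hypothetical samples of a port's byte counter, 300 seconds apart
# (the 5-minute collection interval used by these metrics).
bytes_t0=1000000
bytes_t1=4000000
interval=300

# Rate = counter delta divided by the sampling interval.
bytes_per_sec=$(( (bytes_t1 - bytes_t0) / interval ))
echo "throughput: $bytes_per_sec bytes/sec"   # prints: throughput: 10000 bytes/sec
```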
This metric category collects IP addresses and aliases from each line of the /etc/hosts file.
Note:
This metric category is supported for Linux, Oracle Solaris on SPARC, Oracle Solaris on x86, IBM AIX on POWER Systems, HP-UX PA-RISC (64-bit), and HP-UX Itanium.
The metrics in this category provide information about the status of the IPCS message queues.
This metric provides the identifier for the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the number of megabytes in messages currently outstanding on the associated message queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the maximum number of megabytes allowed in messages outstanding on the associated queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the key that is used as an argument to create the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the name of the group of the creator of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the login name of the creator of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the time when the associated entry was created or last changed.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the name of the group of the owner of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the identifier of the last process to have received a message from this message queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the identifier of the last process to have sent a message to this message queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the facility access modes.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the login name of the owner of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the number of messages in this queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
This metric provides the last time a message was received from this queue.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -qaZ |
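As an illustration of the `ipcs -qa` source these metrics share, the following parses a hypothetical excerpt; the column layout shown is an assumption (on Solaris the agent runs /usr/bin/ipcs -qaZ, whose output differs by release).

```shell
# Hypothetical excerpt of `ipcs -qa` output (column layout is an assumption).
cat <<'EOF' > /tmp/ipcs_q_sample.txt
T  ID     KEY        MODE       OWNER  GROUP CBYTES QNUM
q  128    0x4e0c0002 --rw-rw--- oracle dba   2048   3
EOF

# Number of messages (QNUM) currently on the queue with ID 128.
qnum=$(awk '$1 == "q" && $2 == 128 {print $8}' /tmp/ipcs_q_sample.txt)
echo "messages on queue 128: $qnum"
```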
The metrics in this category provide information about the IPCS semaphores status.
This metric provides the identifier for the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the key that is used as an argument to create the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the name of the group of the owner of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the login name of the owner of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the access permissions.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the name of the group of the creator of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the login name of the creator of the facility entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the time when the associated entry was created or last changed.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
This metric provides the number of semaphores in the set associated with the semaphore entry.
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -saZ |
The metrics in this category provide information about the status of the IPCS shared memory.
This metric provides the identifier for the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the key that is used as an argument to create the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the name of the group of the owner of the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the login name of the owner of the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the access permissions.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the size (in MB) of segments for shared memory.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the time the last attach on the associated shared memory segment was completed.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the name of the group of the creator of the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the process ID of the creator of the shared memory entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the login name of the creator of the facility entry.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the time when the associated entry was created or last changed.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the time the last detach on the associated shared memory segment was completed.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the process ID of the last process to attach or detach the shared memory segment.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
This metric provides the number of processes attached to the associated shared memory segment.
Table 2-40 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every Hour | Not Defined | Not Defined | IPCS Shared Memory attached processes %value% has gone above the warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/ipcs -maZ |
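The attached-process count behind Table 2-40 corresponds to the NATTCH column of `ipcs -ma` output. The following sketches the threshold check against a hypothetical excerpt; the column layout and the threshold value are assumptions (no default threshold is defined).

```shell
# Hypothetical excerpt of `ipcs -ma` output; NATTCH is the number of
# processes attached to each shared memory segment (layout assumed).
cat <<'EOF' > /tmp/ipcs_m_sample.txt
T  ID  KEY        OWNER  NATTCH
m  10  0x51060000 oracle 42
EOF

warning=100   # hypothetical site-specific warning threshold
nattch=$(awk '$1 == "m" && $2 == 10 {print $5}' /tmp/ipcs_m_sample.txt)
if [ "$nattch" -gt "$warning" ]; then
  echo "IPCS Shared Memory attached processes $nattch has gone above the warning ($warning) threshold."
else
  echo "segment 10: $nattch attached processes (below threshold)"
fi
```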
This metric category provides information about the IO cards in the host, including PCI cards and USB devices.
This is the bus type of the IO card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
lspci
None.
The clock frequency of the IO card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
lspci
None.
This represents the name of the IO card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
lspci
None.
The Kernel Memory metric provides information on kernel memory allocation (KMA) activities.
This metric is available only on Solaris. The data source is the sar command; the data is obtained by sampling system counters once in a five-second interval.
This metric represents the number of requests for large memory that failed, that is, requests that were not satisfied.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the number of oversized requests made that could not be satisfied. Oversized memory requests are allocated dynamically so there is no pool for such requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the number of requests for small memory that failed, that is, requests that were not satisfied.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the amount of memory, in bytes, the kernel memory allocation (KMA) has for the large pool, the pool used for allocating and reserving large memory requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the amount of memory allocated for oversized memory requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the amount of memory, in bytes, the kernel memory allocation (KMA) has for the small pool, the pool used for allocating and reserving small memory requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the amount of memory, in bytes, the kernel allocated to satisfy large memory requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
This metric represents the amount of memory, in bytes, the kernel allocated to satisfy small memory requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/sar -k 2. The data is obtained by sampling system counters once in a five-second interval. |
None.
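As a sketch of reading the KMA failure counts from a `sar -k` sample line, the following parses a hypothetical row; the column order assumed here (small pool, large pool, then oversize figures, each followed by a fail count) should be checked against the sar man page for your release.

```shell
# Hypothetical `sar -k` sample line (timestamp, then small-pool size/alloc/fail,
# large-pool size/alloc/fail, oversize alloc/fail; layout is an assumption).
cat <<'EOF' > /tmp/sar_k_sample.txt
01:00:00 2949120 2314500 0 5472256 4406312 0 1234944 0
EOF

# Pull the failed-request counts for the small and large pools.
read -r _ sml_mem sml_alloc sml_fail lg_mem lg_alloc lg_fail ovsz ovsz_fail < /tmp/sar_k_sample.txt
echo "small-pool failures: $sml_fail, large-pool failures: $lg_fail"
```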
The metrics in this category provide information about the kernel memory usage.
This metric provides the number of available memory pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of desfree pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of desscan pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of contiguous kernel memory pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of fast scan pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of system free memory pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of kernel base pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the amount (in MB) of the kernel memory.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of lotsfree pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of minfree pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of kernel memory allocator calls.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of kernel memory allocation pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of kernel memory free call pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of kernel memory allocator free pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of scanned pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the total number of pages used by the kernel.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the page size in bytes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of system free pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of locked pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the total number of available pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of physical pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the number of pages scanned per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the timestamp for the last data snapshot.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
This metric provides the current size (in MB) of the ZFS Adaptive Replacement Cache (ARC).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data source for this metric includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p unix:0:system_pages |
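Most of the page counts in this category convert to bytes or megabytes using the page size reported by the same kstat. A sketch with hypothetical values (the figures below are made up; the actual values come from `kstat -p unix:0:system_pages`):

```shell
# Hypothetical kstat values; freemem is in pages, so free memory in MB
# is freemem * pagesize / 1048576.
freemem=262144     # pages (assumed sample value)
pagesize=8192      # bytes per page (typical on SPARC; x86 commonly uses 4096)

free_mb=$(( freemem * pagesize / 1048576 ))
echo "free memory: ${free_mb} MB"   # prints: free memory: 2048 MB
```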
The Load metric provides information about the number of runnable processes on the system run queue. If this is greater than the number of CPUs on the system, then excess load exists.
This metric provides the value of the active logical memory.
Table 2-41 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | Not Defined | Not Defined | Active Logical Memory is %value% KB, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the average number of jobs waiting for I/O in the last interval.
Table 2-42 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions | Every 5 Minutes | 40 | 80 | CPU I/O Wait is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
A high percentage of I/O wait can indicate a hardware problem, a slow NFS server, or poor load-balancing among local file systems and disks. Check the system messages log for any hardware errors. Use the iostat -xn command or the nfsstat -c (NFS client-side statistics) command or both to determine which disks or file systems are slow to respond. Check to see if the problem is with one or more swap partitions, as lack of swap or poor disk load balancing can cause these to become overloaded. Depending on the specific problem, fixes may include: NFS client or server tuning, hardware replacement, moving applications to other file systems, adding swap space, or restructuring a file system for better performance.
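As a sketch of the `iostat -xn` check suggested above, the following flags devices with a high %b (percent of time busy) in a hypothetical output excerpt; the sample figures are made up, and the column positions assume the Solaris extended format.

```shell
# Hypothetical excerpt of `iostat -xn` output; %b is the 10th column and
# the device name is the 11th in the assumed extended format.
cat <<'EOF' > /tmp/iostat_sample.txt
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    5.0    3.0  120.0   48.0  0.0  0.1    1.2    4.5   0   2 c0t0d0
  210.0  180.0 9500.0 7200.0  4.0  6.5   55.0   88.0  35  97 c0t1d0
EOF

# Report devices that are busy more than 90% of the time.
awk 'NR > 1 && $10 > 90 {print $11, "is", $10"% busy"}' /tmp/iostat_sample.txt
# prints: c0t1d0 is 97% busy
```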
For UNIX-based platforms, this metric represents the amount of CPU being used in SYSTEM mode as a percentage of total CPU processing power.
For Windows, this metric represents the percentage of time the process threads spent executing code in privileged mode.
Table 2-43 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU in Kernel Mode, %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates that the machine is doing a lot of work at the system (kernel) level. The user action varies from case to case.
For UNIX-based platforms, this metric represents the amount of CPU being used in USER mode as a percentage of total CPU processing power. For Windows, this metric represents the percentage of time the processor spends in user mode. This metric displays the average busy time as a percentage of the sample time.
Table 2-44 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU in User Mode, %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates the cpu is doing a lot of work at the user (application) level. An examination of the top processes on the system may help identify problematic processes.
This metric represents the percentage of time the processor spends receiving and servicing hardware interrupts during sample intervals. This value is an indirect indicator of the activity of devices that generate interrupts, such as the system clock, the mouse, disk drivers, data communication lines, network interface cards, and other peripheral devices. These devices normally interrupt the processor when they have completed a task or require attention. Normal thread execution is suspended during interrupts. Most system clocks interrupt the processor every 10 milliseconds, creating a background of interrupt activity.
This metric is available only on Windows.
Table 2-45 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU %% Interrupt Time is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric are Performance Data counters.
None.
Processor Queue Length is the number of ready threads in the processor queue. There is a single queue for processor time even on computers with multiple processors. A sustained processor queue of less than 10 threads per processor is normally acceptable, dependent on the workload.
This metric is available only on Windows.
Table 2-46 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU Queue Length is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric are Performance Data counters.
A consistently high value indicates a number of CPU-bound tasks. This information should be correlated with other metrics such as Page Transfer Rate. Tuning the system and adding memory should help.
For UNIX-based platforms, this metric represents the amount of CPU utilization as a percentage of total CPU processing power available.
For Windows, this metric represents the percentage of time the CPU spends to execute a non-Idle thread. CPU Utilization (%) is the primary indicator of processor activity.
Table 2-47 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
80 |
95 |
CPU Utilization is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates that the system is under heavy load. If the value is consistently high, consider reducing the load on the system.
This metric represents logical free memory in a system (discounting memory used for filesystem buffers). Note that this memory can potentially be freed, and may not be available immediately.
Table 2-48 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Logical Free Memory, %value%%%, gone below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
Linux: /proc/meminfo
AIX: libperfstat
A very low value (determined on the basis of historical data) indicates that the system is running out of RAM. This could be due to one or more of the following reasons: more than the planned number of processes are running on the system, one or more processes are taking much more memory than expected, or a specific process is consistently leaking memory.
This metric represents the available memory left after the current active memory is consumed out of total memory.
Table 2-49 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Free Memory, %value%%%, gone below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | Windows API |
For Linux OS, this value might always be close to 0%. See Free Logical Memory (%) for the actual free memory available to users. Do not take any action based on the value of this metric.
This metric represents the amount of free memory in kilobytes.
Table 2-50 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Free Memory Size %value%, gone below the warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the maximum of the average service time of all disks. Units are represented in milliseconds.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | Not available |
For UNIX-based systems, this metric represents the number of pages per second scanned by the page stealing daemon.
For Windows, this metric represents the rate at which pages are read from or written to disk to resolve hard page faults. The metric is a primary indicator of the kinds of faults that cause system-wide delays.
Table 2-51 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Page scan rate is %value% /sec, crossed warning (%warning_threshold% /sec) or critical (%critical_threshold% /sec) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
If this number is zero or close to zero, the system has sufficient memory. If the scan rate is consistently high, adding memory should help.
This metric represents the amount of used memory as a percentage of total memory.
Table 2-52 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
80 |
95 |
Memory Utilization is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | Windows API |
For Linux OS, this value might always be close to 100%. See Section 2.35.8, "Free Logical Memory (%)" for actual free memory that is available for users. Do not take any action based on the value of this metric.
This metric indicates the rate at which pages are read from or written to disk to resolve hard page faults. It is a primary indicator of the kinds of faults that cause system-wide delays. It is counted in numbers of pages. It includes pages retrieved to satisfy faults in the file system cache (usually requested by applications) and in non-cached mapped memory files.
This metric is available only on Windows.
The data sources for this metric are Windows Performance counters.
High transfer rates indicate memory contention. Adding memory should help.
This metric represents the average number of processes in memory and subject to be run in the last interval. This metric checks the run queue.
This metric is not available on Windows.
Table 2-53 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU Load (Run Queue Length averaged over 1 minute) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Check the load on the system using the UNIX uptime or top commands. Also, check for processes using too much CPU time by using the top and ps -ef commands. Note that the issue may be a large number of instances of one or more processes, rather than a few processes each taking up a large amount of CPU time. Kill processes using excessive CPU time.
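On Linux, the 1-, 5-, and 15-minute run-queue load averages behind these metrics are exposed in /proc/loadavg, so a quick check can be scripted without parsing uptime output:

```shell
# Read the 1-, 5-, and 15-minute load averages from Linux /proc/loadavg
# (the same figures that uptime and w report).
read load1 load5 load15 rest < /proc/loadavg
echo "load1=$load1 load5=$load5 load15=$load15"
```

Sustained values well above the number of available CPUs suggest the run queue is backing up.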
This metric represents the average number of processes in memory and subject to be run in the last interval. This metric checks the run queue.
This metric is not available on Windows.
Table 2-54 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU Load (Run Queue Length averaged over 5 minutes) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Check the load on the system using the UNIX uptime or top commands. Also, check for processes using too much CPU time by using the top and ps -ef commands. Note that the issue may be a large number of instances of one or more processes, rather than a few processes each taking up a large amount of CPU time. Kill processes using excessive CPU time.
This metric represents the average number of processes in memory and subject to be run in the last interval. This metric checks the run queue.
This metric is not available on Windows.
Table 2-55 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
CPU Load (Run Queue Length averaged over 15 minutes) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Check the load on the system using the UNIX uptime or top commands. Also, check for processes using too much CPU time by using the top and ps -ef commands. Note that the issue may be a large number of instances of one or more processes, rather than a few processes each taking up a large amount of CPU time. Kill processes using excessive CPU time.
This metric represents the amount of free swap space available (in KB).
Table 2-56 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Free Swap, %value% KB, gone below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For UNIX-based platforms, this metric represents the percentage of swapped memory in use for the last interval.
For Windows, this metric represents the percentage of page file instance used.
Table 2-57 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
80 |
95 |
Swap Utilization is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | Windows API and Performance data counters |
For UNIX-based platforms, check the swap usage using the UNIX top command or the Solaris swap -l command. Additional swap can be added to an existing file system by creating a swap file and then adding the file to the system swap pool. (See documentation for your UNIX OS). If swap is mounted on /tmp, space can be freed by removing any junk files in /tmp. If it is not possible to add file system swap or free up enough space, additional swap will have to be added by adding a raw disk partition to the swap pool. See UNIX documentation for procedures.
For Windows, check the page file usage and add an additional page file if current limits are insufficient.
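On Linux, swap utilization can be computed from the SwapTotal and SwapFree fields of /proc/meminfo. The sketch below evaluates the percentage on hypothetical sample values; on a live host, pipe /proc/meminfo itself into the awk program.

```shell
# Compute Swap Utilization (%) from SwapTotal/SwapFree.
# The here-document supplies sample /proc/meminfo values for illustration.
swap_util=$(awk '
/^SwapTotal:/ { total = $2 }
/^SwapFree:/  { free  = $2 }
END { if (total > 0) printf "%.1f", 100.0 * (total - free) / total }
' <<'EOF'
SwapTotal:      4000000 kB
SwapFree:       3000000 kB
EOF
)
echo "Swap Utilization: ${swap_util}%"
```

With 1,000,000 KB of 4,000,000 KB in use, the script reports 25.0%.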
For UNIX-based platforms, this metric represents the amount of swapped memory in use for the last interval.
For Windows, this metric represents the amount of page file instance used.
This metric represents the total number of processes currently running on the system. This metric checks the number of processes running on the system.
Table 2-59 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Number of processes is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | performance data counters |
An abnormally high value (determined on the basis of historical data) indicates that the system is under heavy load. If the value is consistently high, consider reducing the load on the system by reducing the number of processes.
Total amount of page file space available to be allocated by processes. Paging files are shared by all processes and the lack of space in paging files can prevent processes from allocating memory.
This metric is available only on Windows.
Performance Data counters and Windows API GlobalMemoryStatusEx
An abnormally high value (determined on the basis of historical data) indicates that the system is doing a lot of swapping by moving data either to or from the disk. This typically will slow down the system because of the relatively slower access to the disk. The reason for this could be one or more of the following:
There are many processes running on the system competing for limited RAM, resulting in more swapping. You can try to reduce the load by stopping some processes.
A process is occupying more memory than expected, leading to a shortage of available memory.
Typically these kinds of problems are solved by adding more RAM.
This metric represents the total number of users currently logged into the system. This metric checks the number of users running on the system.
Table 2-60 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Number of users is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel statistics |
HP | pstat_getdynamic(), pstat_getprocessor(), pstat_getproc(), pstat_getstatic(), getutent(), pstat_getvminfo() system calls |
Linux | uptime, free, getconf, ps, iostat, sar, w OS commands; /proc/stat |
HP Tru64 | table() system call, uptime, vmstat, psrinfo, ps, who, swapon OS commands |
IBM AIX | oracle_kstat(), getutent(), getproc(), sysconf() system calls |
Windows | not available |
An abnormally high value (determined on the basis of historical data) indicates that the system is under heavy load. If the value is consistently high, consider reducing the load on the system by restricting or removing active users from the system.
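On UNIX hosts (where the data sources include the w and who OS commands), the logged-in user count this metric reports can be reproduced directly:

```shell
# Count logged-in users, as this metric does on UNIX-based hosts.
user_count=$(who | wc -l)
echo "Number of users: $user_count"
```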
This metric represents the percentage of active logical memory.
Table 2-61 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Used Logical Memory, %value%%%, gone above warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For the following hosts:
Host | Data Source |
---|---|
Linux | COMPUTE_EXPR="(100.0 - logicMemfreePct)"
where: logicMemfreePct: COMPUTE_EXPR="(100* freeLogicMem / realMem)" freeLogicMem: COMPUTE_EXPR="freeMem+buffers+cached" realMem, freeMem, buffers, cached are calculated using |
IBM AIX | COMPUTE_EXPR="(100.0 - logicMemfreePct)"
where: logicMemfreePct: COMPUTE_EXPR="(100* freeLogicMem / realMem)" freeLogicMem: COMPUTE_EXPR="(freeMemRaw + memFilesUsed) / 1024.0" realMem, freeMemRaw , memFilesUsed are calculated using |
If this alert is raised, then you must analyze the problem to determine the root cause and resolve the underlying issue.
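The Linux COMPUTE_EXPR above can be evaluated directly from /proc/meminfo fields: freeLogicMem = MemFree + Buffers + Cached, and Used Logical Memory (%) = 100 - (100 * freeLogicMem / MemTotal). The sketch below uses hypothetical sample values; on a live host, pipe /proc/meminfo itself into the awk program.

```shell
# Evaluate the Used Logical Memory expression on sample /proc/meminfo
# values (KB). Sample numbers are for illustration only.
used_pct=$(awk '
/^MemTotal:/ { real  = $2 }
/^MemFree:/  { free  = $2 }
/^Buffers:/  { buf   = $2 }
/^Cached:/   { cache = $2 }
END { printf "%.1f", 100.0 - (100.0 * (free + buf + cache) / real) }
' <<'EOF'
MemTotal:       8000000 kB
MemFree:        1000000 kB
Buffers:         500000 kB
Cached:         2500000 kB
EOF
)
echo "Used Logical Memory: ${used_pct}%"
```

With 4,000,000 KB logically free out of 8,000,000 KB, the script reports 50.0%.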
This metric represents the size in kilobytes of the page file instance used.
This metric is available only on Windows.
Performance Data counters and Windows API GlobalMemoryStatusEx.
An abnormally high value (determined on the basis of historical data) indicates that the system is doing a lot of swapping by moving data either to or from the disk. This typically will slow down the system because of the relatively slower access to the disk. The reason for this could be one or more of the following:
There are many processes running on the system competing for limited RAM, resulting in more swapping. You can try to reduce the load by stopping some processes.
A process is occupying more memory than expected, leading to a shortage of available memory.
Typically these kinds of problems are solved by adding more RAM.
The Log File Monitor metric category allows the operator to monitor one or more log files for the occurrence of one or more Perl patterns in their content. In addition, the operator can specify a Perl pattern to be ignored for the log file. Periodic scanning is performed against new content added since the last scan. Lines matching the ignore pattern are ignored first; then, for each line matching a specified match pattern, one record is uploaded to the Management Repository. The user can set a threshold against the number of lines matching a given pattern. File rotation is handled within the given file.
This metric returns the actual content if this file has been specifically registered for content uploading. Otherwise, it returns the count of lines that matched the pattern specified.
The operator can list the names of files or directories that should never be monitored in the EMDROOT/sysman/config/lfm_efiles file. The operator can list the names of files or directories whose contents can be uploaded to the Management Repository in the EMDROOT/sysman/config/lfm_ifiles file.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Oracle-provided Perl program that scans files for the occurrence of user-specified Perl patterns.
None.
This metric returns the number of lines matching the pattern specified in this file. Setting warning or critical thresholds against this column for a specific {log file name, match pattern in Perl, ignore pattern in Perl} triggers the monitoring of specified criteria against this log file.
Table 2-62 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
%log_file_message% Crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Log File Name", "Match Pattern in Perl", "Ignore Pattern in Perl", and "Time Stamp" objects.
If warning or critical threshold values are currently set for any unique combination of "Log File Name", "Match Pattern in Perl", "Ignore Pattern in Perl", and "Time Stamp" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Log File Name", "Match Pattern in Perl", "Ignore Pattern in Perl", and "Time Stamp" objects, use the Edit Thresholds page.
Oracle-supplied Perl program monitors the log files for user specified criteria.
None.
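The match/ignore semantics described above can be approximated in shell for a quick sanity check of a pattern pair: lines matching the ignore pattern are discarded first, then the remaining matches are counted. The log content and patterns below are hypothetical, for illustration only.

```shell
# Approximate the Log File Monitor's match/ignore pattern semantics:
# discard ignore-pattern lines first, then count match-pattern lines.
log=$(mktemp)
cat > "$log" <<'EOF'
ORA-00600: internal error code
INFO: startup complete
ORA-00600: internal error code (expected during test)
EOF
matches=$(grep -v 'expected during test' "$log" | grep -c 'ORA-00600')
echo "Lines matching pattern: $matches"
rm -f "$log"
```

Of the two ORA-00600 lines, one is ignored, so the count reported is 1.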
This metric category provides information about Logical Partitioning (LPAR) performance on IBM AIX systems.
This metric represents the percentage of the entitled processing capacity used while executing at the user level (application).
Table 2-63 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
AIX 5.3.0.0, 6.1.0.0, 7.1.0.0 |
Every 15 Minutes |
Not Defined |
Not Defined |
CPU in user mode for LPAR Performance metric is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the percentage of the entitled processing capacity used while executing at the system level (kernel).
Table 2-64 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
CPU in sys mode for LPAR Performance metric is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the percentage of the entitled processing capacity unused while the partition was idle and had outstanding disk I/O request(s).
Table 2-65 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
CPU in wait mode for LPAR Performance metric is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the percentage of the entitled processing capacity unused while the partition was idle and did not have any outstanding disk I/O request.
Table 2-66 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
CPU in idle mode for LPAR Performance metric is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the number of physical processors consumed.
Table 2-67 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Physical Processor Consumed is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric represents the percentage of the LPAR's CPU entitlement consumed.
Table 2-68 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Entitlement consumed is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The Memory Devices metric category monitors the status of memory devices configured in the system.
This metric represents the bank location name of the memory device, when applicable.
This metric is available only on Dell Poweredge Linux Systems.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: memoryDeviceStatus (1.3.6.1.4.1.674.10892.1.1100.50.1.5)
None.
This metric represents the location name of the memory device, for example, "DIMM A".
This metric is available only on Dell Poweredge Linux Systems.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: memoryDeviceStatus (1.3.6.1.4.1.674.10892.1.1100.50.1.5)
None.
This metric represents the status of the memory device.
This metric is available only on Dell Poweredge Linux Systems.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-69 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of Memory(Object identifier:1.3.6.1.4.1.674.10892.1.1100.50.1.5) at bank location %MemoryBankLocation% and location %MemoryLocation% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %MemoryStatus% |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis" and "Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis" and "Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis" and "Index" objects, use the Edit Thresholds page.
SNMP MIB object: memoryDeviceStatus (1.3.6.1.4.1.674.10892.1.1100.50.1.5)
None.
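The status values and default thresholds described above can be sketched as a small classifier. This is an illustrative Python sketch, not part of the monitoring agent; the function and label names are hypothetical.

```python
# Hypothetical sketch: classify a memoryDeviceStatus value
# (OID 1.3.6.1.4.1.674.10892.1.1100.50.1.5) against the default
# warning (4) and critical (5) thresholds described above.

STATUS_LABELS = {
    1: "Other",
    2: "Unknown",
    3: "Normal",
    4: "Warning",
    5: "Critical",
    6: "Non-Recoverable",
}

def classify(value, warning=4, critical=5):
    """Return the status label and alert severity for a value."""
    label = STATUS_LABELS.get(value, "Unrecognized")
    if value >= critical:
        return label, "critical"
    if value >= warning:
        return label, "warning"
    return label, "clear"

print(classify(3))  # ('Normal', 'clear')
print(classify(4))  # ('Warning', 'warning')
print(classify(6))  # ('Non-Recoverable', 'critical')
```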
The metrics in this category provide information about memory usage.
This metric provides the amount of free memory available in MB.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
This metric provides the available memory left after the current active memory is consumed out of total memory.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
This metric provides the total amount of memory in MB.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
This metric provides the amount of used memory in MB.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
This metric provides the amount of used memory as a percentage of total memory.
Target Version | Collection Frequency |
---|---|
All Versions | Every 10 Minutes |
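On Linux, the free, total, used, and used-percentage values above can be derived from /proc/meminfo. This is a minimal illustrative sketch, not the agent's actual collector; the sample text below is fabricated.

```python
# Illustrative sketch: derive memory usage metrics from a
# /proc/meminfo-style snapshot. Sample values are hypothetical.

SAMPLE = """\
MemTotal:       16326428 kB
MemFree:         4085604 kB
MemAvailable:    9613240 kB
"""

def memory_usage(meminfo_text):
    kb = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        kb[key] = int(rest.split()[0])
    total_mb = kb["MemTotal"] / 1024.0
    free_mb = kb["MemFree"] / 1024.0
    used_mb = total_mb - free_mb
    return {
        "total_mb": round(total_mb, 1),
        "free_mb": round(free_mb, 1),
        "used_mb": round(used_mb, 1),
        "used_pct": round(100.0 * used_mb / total_mb, 1),
    }

print(memory_usage(SAMPLE))
```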
The metrics in this category provide information about memory usage.
This metric provides the memory type.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the restriction or limit (MB) for this memory type (if memory cap is set).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the memory restriction as a percentage of the total memory.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The Message and Semaphore Activity metric category provides information about the message and semaphore activity of the host system being monitored.
This metric represents the number of msgrcv system calls made per second. The msgrcv system call reads a message from a message queue into a user-defined buffer.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | ipcs command |
IBM AIX | sar command |
None.
This metric represents the number of semop system calls made per second. The semop system call is used to perform semaphore operations on a set of semaphores.
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | ipcs command |
IBM AIX | sar command |
None.
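On the platforms that use sar, these two rates appear as the msg/s and sema/s columns of `sar -m` output. The following hedged Python sketch pulls those columns out of a fabricated sample; real sar output varies by platform and version.

```python
# Hedged sketch: extract msg/s and sema/s from sar -m style output.
# The sample text is fabricated for illustration.

SAMPLE = """\
12:00:01   msg/s  sema/s
12:10:01    0.42   13.57
"""

def message_semaphore_rates(sar_text):
    lines = [l.split() for l in sar_text.splitlines() if l.strip()]
    header, data = lines[0], lines[-1]
    # Pair each column label with the last sample's value.
    cols = dict(zip(header[1:], data[1:]))
    return float(cols["msg/s"]), float(cols["sema/s"])

print(message_semaphore_rates(SAMPLE))  # (0.42, 13.57)
```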
The metrics in this category provide information about network datalink bandwidth.
This metric provides the datalink name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric provides the relative bandwidth priority.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric provides the differentiated service field.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric displays the name of the datalink flow.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric displays the local address for the datalink flow.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric displays the service specified by the local port.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric displays the maximum bandwidth of the datalink.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
This metric displays the remote address for the datalink flow.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The metrics in this category provide information about network datalink performance.
This metric provides the datalink name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of collisions.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound broadcast octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound broadcasts.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound dropped octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound drops.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound errors.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound multicast octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound multicasts.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of inbound packets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound broadcast octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound broadcasts.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound dropped octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound drops since the last collection.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound errors.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound multicast octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound multicasts.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound octets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the average of the outbound octet rate for this interface.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the number of outbound packets.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric category describes the metrics associated with network interfaces.
This metric represents the number of collisions per second. This metric checks the rate of collisions on the network interface specified by the network device names parameter, such as le0 or * (for all network interfaces).
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX | Every 15 Minutes |
Table 2-70 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Collisions (%%) for %keyvalue% is %value% , crossed warning (%warning_threshold% (%%)) or critical (%critical_threshold% (%%)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | netstat command |
IBM AIX | oracle_kstat() system call |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the percentage of network bandwidth being used by reading and writing from and to the network for full-duplex network connections.
Table 2-71 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Network utilization for %keyvalue% is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Network Interface Name" object.
If warning or critical threshold values are currently set for any "Network Interface Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Network Interface Name" object, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | netstat command |
IBM AIX | oracle_kstat() system call |
Windows | not available |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
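The utilization arithmetic for a full-duplex link can be illustrated with a short sketch: reads and writes each get the full line rate, so utilization is taken against twice the link speed. The function name and the numbers below are hypothetical.

```python
# Minimal sketch of full-duplex network utilization, assuming reads
# and writes can each use the full line rate simultaneously.

def full_duplex_utilization(read_mb_s, write_mb_s, link_mb_s):
    """Percentage of total (bidirectional) bandwidth in use."""
    return 100.0 * (read_mb_s + write_mb_s) / (2.0 * link_mb_s)

# A 1 Gb/s link is roughly 125 MB/s in each direction.
print(full_duplex_utilization(25.0, 50.0, 125.0))  # 30.0
```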
This metric represents the number of input errors, per second, encountered on the device for unsuccessful reception due to hardware/network errors. This metric checks the rate of input errors on the network interface specified by the network device names parameter, such as le0 or * (for all network interfaces).
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX | Every 15 Minutes |
Table 2-72 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Input Errors (%%) for %keyvalue% is %value% , crossed warning (%warning_threshold% (%%)) or critical (%critical_threshold% (%%)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | netstat command |
IBM AIX | oracle_kstat() system call |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the number of output errors per second. This metric checks the rate of output errors on the network interface specified by the network device names parameter, such as le0 or * (for all network interfaces).
Target Version | Collection Frequency |
---|---|
All versions | Every 15 Minutes |
Table 2-73 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Output Errors (%%) for %keyvalue% is %value% , crossed warning (%warning_threshold% (%%)) or critical (%critical_threshold% (%%)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | netstat command |
IBM AIX | oracle_kstat() system call |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the amount of megabytes per second read from the specific interface.
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX | Every 15 Minutes |
Table 2-74 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Read (MB/s) for %keyvalue% is %value% , crossed warning (%warning_threshold% MB/sec) or critical (%critical_threshold% MB/sec) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
IBM AIX | perfstat system call |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the amount of network bandwidth being used for reading from the network as a percentage of total read capacity.
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX | Every 15 Minutes |
Table 2-75 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Read Utilization (%%) for %keyvalue% is %value% , crossed warning (%warning_threshold% (%%)) or critical (%critical_threshold% (%%)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
IBM AIX | perfstat system call |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the number of total errors per second, encountered on the network interface. It is the rate of read and write errors encountered on the network interface.
Table 2-76 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Error Rate for %keyvalue% is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Network Interface Name" object.
If warning or critical threshold values are currently set for any "Network Interface Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Network Interface Name" object, use the Edit Thresholds page.
It is computed as the sum of Network Interface Input Errors (%) and Network Interface Output Errors (%).
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the total I/O rate on the network interface. It is measured as the sum of Network Interface Read (MB/s) and Network Interface Write (MB/s).
Table 2-77 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Network I/O Rate for %keyvalue% is %value%MB/Sec, crossed warning (%warning_threshold%MB/Sec) or critical (%critical_threshold%MB/Sec) threshold. |
For this metric you can set different warning and critical threshold values for each "Network Interface Name" object.
If warning or critical threshold values are currently set for any "Network Interface Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Network Interface Name" object, use the Edit Thresholds page.
It is computed as the sum of Network Interface Read (MB/s) and Network Interface Write (MB/s).
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
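The read, write, and total I/O rates can be illustrated by differencing two byte-counter snapshots, as /proc/net/dev exposes on Linux. This is an illustrative sketch only; the function name and byte counts are hypothetical.

```python
# Illustrative sketch: compute per-interface read/write MB/s (and
# their sum, the total I/O rate) from two cumulative byte counters
# sampled `interval` seconds apart.

def io_rates(rx1, tx1, rx2, tx2, interval):
    """Return (read_mb_s, write_mb_s, total_mb_s) for one interface."""
    mb = 1024.0 * 1024.0
    read = (rx2 - rx1) / mb / interval
    write = (tx2 - tx1) / mb / interval
    return read, write, read + write

# Two snapshots 10 seconds apart: 20 MiB received, 10 MiB sent.
r, w, total = io_rates(0, 0, 20 * 1024 * 1024, 10 * 1024 * 1024, 10)
print(r, w, total)  # 2.0 1.0 3.0
```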
This metric represents the amount of megabytes per second written to the specific interface.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX, Windows | Every 15 Minutes |
Table 2-78 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Write (MB/s) for %keyvalue% is %value% , crossed warning (%warning_threshold% (MB/s)) or critical (%critical_threshold% (MB/s)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | netstat command |
IBM AIX | oracle_kstat() system call |
Windows | not available |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
This metric represents the amount of network bandwidth being used for writing to the network as a percentage of total write capacity.
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX, Windows | Every 15 Minutes |
Table 2-79 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Write Utilization (%%) for %keyvalue% is %value% , crossed warning (%warning_threshold% (%%)) or critical (%critical_threshold% (%%)) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel memory structures (kstat) |
HP | netstat, lanscan, and lanadmin commands |
Linux | netstat command and /proc/net/dev |
HP Tru64 | not available |
IBM AIX | perfstat system call |
Windows | not available |
Use the OS netstat -i command to check the performance of the interface. Also, check the system messages file for messages relating to duplex setting by using the OS grep -i command and searching for the word 'duplex'.
The metrics in this category provide information about network interface bandwidth.
This metric provides the name of the network interface.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the total network collisions (in percentage) of the network interface since the last collection.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the average input/output operations over 10 minutes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the average percentage of activity over 10 minutes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the amount of bandwidth used since the last collection.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the average input operations over 10 minutes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the percentage of input errors on the interface.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the percentage of input/output errors.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric category relates to bonded network interface cards. Slave interface cards have the same information as bonded cards.
Name of the bond.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Mode of the bond, for example, balance-alb.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Options/properties of the bond, for example, "miimon=100 max_bonds=4".
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric category relates to network interface cards, both unbonded and bonded interface cards. Slave interface cards have the same information as bonded cards.
Broadcast address of the local area network.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
Default gateway configured for this host.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
A description of the Network Interface Card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
This metric represents whether this Network Interface Card (NIC) is configured for dynamic or static IP addresses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
Flags of the network interface card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
This represents the aliases for the host corresponding to this Network Interface Card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
arp
None.
IP address associated with this Network Interface Card. This is expected to be an IPv4 address.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
This is a comma-separated list of IPv6 addresses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
Hardware address of the Network Interface Card.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
This is the subnet mask, expressed as an inet address.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
ifconfig
None.
The metrics in this category provide information about network interface performance.
This metric provides the average of the inbound octet rate for this interface.
This metric provides the average of the outbound octet rate for this interface.
The Network Interfaces Summary metric category provides information about all network interfaces.
This metric represents the amount of megabytes per second read from all the network interfaces.
Table 2-80 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Total Network Interface Read (MB/s) for is %value% , crossed warning (%warning_threshold% MB/sec) or critical (%critical_threshold% MB/sec) threshold. |
This metric represents the total I/O rate on all the network interfaces. It is measured as the sum of All Network Interfaces Write Rate (MB/sec) and All Network Interfaces Read Rate (MB/sec).
Table 2-81 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Total Network Interface Read/Write (MB/s) for is %value% , crossed warning (%warning_threshold% MB/sec) or critical (%critical_threshold% MB/sec) threshold. |
This metric represents the amount of megabytes per second written to all the interfaces.
Table 2-82 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Network Interface Write (MB/s) for %keyvalue% is %value% , crossed warning (%warning_threshold% (MB/s)) or critical (%critical_threshold% (MB/s)) threshold. |
The metrics in this category provide information about the new disk activity summary IO.
The metrics in this category provide information about new CPU usage.
This metric provides the CPU ID.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the number of context switches per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of IO waiting time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the number of interrupts per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of soft interrupt time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of system time.
Table 2-83 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
CPU System Time (%%) is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
This metric provides the usage restriction or cap.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the usage restriction as a percentage of the total CPU usage.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of user time.
Table 2-84 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
CPU User Time (%%) is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
This metric provides the amount of CPU utilization as a percentage of total CPU processing power available.
Table 2-85 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
CPU Utilization for %keyvalue% is %value%%%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the percentage of time that the CPU was idle and the system did not have an outstanding disk I/O request.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
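The CPU utilization and idle-time percentages above can be illustrated by differencing two jiffy-counter samples, as /proc/stat exposes on Linux. This is a sketch under that assumption; the field layout and the sample numbers are hypothetical.

```python
# Illustrative sketch: CPU utilization from two (user, nice, system,
# idle, iowait) jiffy samples. Utilization is the non-idle share of
# elapsed jiffies between the two samples.

def cpu_utilization(sample1, sample2):
    busy1 = sum(sample1) - sample1[3] - sample1[4]  # drop idle, iowait
    busy2 = sum(sample2) - sample2[3] - sample2[4]
    total1, total2 = sum(sample1), sum(sample2)
    return 100.0 * (busy2 - busy1) / (total2 - total1)

# (user, nice, system, idle, iowait)
t1 = (1000, 0, 500, 8000, 500)
t2 = (1300, 0, 600, 8500, 600)
print(cpu_utilization(t1, t2))  # 40.0
```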
The metrics in this category provide information about new paging activity.
This metric provides the number of pages put on the freelist per second by the page stealing daemon.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the number of page-in requests per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the page-out requests per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the number of pages scanned per second by the page stealing daemon.
Table 2-86 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
Pages Paged-In (per second) %value%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of paged-in pages per second.
Table 2-87 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
Pages Paged-in (per second) is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
This metric provides the number of paged-out pages per second.
Table 2-88 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 60 Minutes |
Not Defined |
Not Defined |
Pages Paged-out (per second) %value%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric provides the number of page faults from software lock requests.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
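The per-second paging rates in this category follow the same pattern: a cumulative kernel counter sampled twice and divided by the interval. A minimal sketch, with hypothetical counter values:

```python
# Minimal sketch: convert two samples of cumulative paged-in /
# paged-out page counts into the per-second rates these metrics report.

def paging_rates(pgin1, pgout1, pgin2, pgout2, interval):
    return ((pgin2 - pgin1) / interval, (pgout2 - pgout1) / interval)

# 600 pages in and 1200 pages out over a 60-second interval.
print(paging_rates(10_000, 20_000, 10_600, 21_200, 60))  # (10.0, 20.0)
```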
This metric category contains the operating summary information. There will be one row per host.
This is the OS address length, either 32-bit or 64-bit.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
Base version of the OS.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
None.
When the host is a member of a Database Machine, this column has a value of 1.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
Default run level of the OS running on the host. Whenever the OS is booted, it boots to this run level.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
This metric is applicable only for Linux. It represents the OS distribution version.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
This metric represents the maximum swap space available to the OS.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
Name of the OS.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/etc/enterprise-release for OEL
/etc/redhat-release for redhat
/etc/UnitedLinux-release
/etc/SuSE-release
None.
This is the platform ID number.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
uname -a
None.
This metric stores the information about OS components, including Patches, Bundles, and Packages.
Description of the component.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
None.
Installation date of the component.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
None.
Name of the OS component.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
None.
This metric lists some of the OS properties, including OPEN_MAX, Semaphore values, and kernel.pid_max.
Name of configuration variable of the OS, for example OPEN_MAX.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/getconf, ulimit
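On a POSIX system, values such as OPEN_MAX can also be read programmatically rather than by shelling out to getconf or ulimit. A hedged sketch, assuming a POSIX host:

```python
# Hedged sketch: read OS configuration limits from Python instead of
# invoking getconf/ulimit. POSIX-only.

import os
import resource

# Per-process open-file limit (what `ulimit -n` reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# os.sysconf exposes many of the same variables as getconf.
open_max = os.sysconf("SC_OPEN_MAX")

print(soft, hard, open_max)
```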
This metric contains details of all the OS Registered Software.
Any vendor description for the software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
ID of the software installed on the host. Applicable only for NT/Windows.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Installation date of the software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
The location where the software is installed.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Installation or distribution source of the installed product. For example, the package name, bundling application, or distro.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Name of installed software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Vendor who provided the software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Any other information related to the software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Any vendor description for the software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Represents the Solaris zone name in which the product is installed.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
Parent ID of the installed product. Applicable to the Sun Service Tag product taxonomy.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
/bin/rpm
Informational only.
The metrics in this category provide information about the operating system service status.
This metric provides the Fault Management Resource Identifier (FMRI).
Target Version | Collection Frequency |
---|---|
All Versions | Every 6 Hours |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
This metric provides the contract identifier.
Target Version | Collection Frequency |
---|---|
All Versions | Every 6 Hours |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
This metric provides the name and location of the error log file.
Target Version | Collection Frequency |
---|---|
All Versions | Every 6 Hours |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
This metric provides the next service state.
Target Version | Collection Frequency |
---|---|
All Versions | Every 6 Hours |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
This metric specifies if the service is enabled.
Target Version | Collection Frequency |
---|---|
All Versions | Every 6 Hours |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
This metric provides the service state.
Table 2-89 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 6 Hours |
DISABLED|disabled |
MAINTENANCE|maintenance |
Service status is %value%, Service name is %ServiceId% |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/svcs -a -o STATE,FMRI |
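All of the service metrics above derive from a single svcs invocation. A sketch of scanning STATE,FMRI output against the default disabled/maintenance thresholds (svcs exists only on Solaris, so the sample output is hardcoded and illustrative):

```python
# Sketch: evaluate svcs output against the default thresholds
# (warning on "disabled", critical on "maintenance").
# Sample output in the format of: /usr/bin/svcs -a -o STATE,FMRI
SAMPLE = """\
STATE          FMRI
online         svc:/system/filesystem/local:default
disabled       svc:/network/telnet:default
maintenance    svc:/application/print/server:default
"""

def evaluate_services(text):
    """Return (severity, FMRI) pairs for services matching a threshold."""
    alerts = []
    for line in text.splitlines()[1:]:          # skip the header row
        if not line.strip():
            continue
        state, fmri = line.split(None, 1)
        if state == "maintenance":
            alerts.append(("critical", fmri))
        elif state == "disabled":
            alerts.append(("warning", fmri))
    return alerts

for severity, fmri in evaluate_services(SAMPLE):
    print(f"{severity}: service {fmri} needs attention")
```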
This metric category contains details of the operating system ULIMITS.
Limits the size of a "core" file left behind when a process encounters a segmentation fault or other unexpected fatal error.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Maximum CPU time a process can use before it is terminated. CPU time is the amount of time the CPU actually spends executing processor instructions and is often much less than the total program run time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Limits the amount of memory that a process can allocate on the heap, as with malloc, calloc, C++ "new," and most object creation in higher-level languages. Specified in kilobytes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Maximum size of a file that a process can create. The number is in 512-byte blocks.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This number represents the maximum number of files that a process can have open at a time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Limits the amount of memory a process can allocate on the stack, as in the case of local variables in C, C++, and many other languages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Maximum memory that can be allocated to a process. This includes all types of memory, including the stack, the heap, and memory-mapped files. Attempts to allocate memory in excess of this limit fail with an out-of-memory error.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
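The ULIMITS metrics above map directly onto POSIX resource limits. A sketch of reading the same (soft, hard) pairs in-process with Python's resource module (a portable illustration, not the agent's collection mechanism):

```python
import resource

# Each ULIMITS metric corresponds to a (soft, hard) resource limit pair;
# resource.RLIM_INFINITY marks an "unlimited" value.
LIMITS = {
    "core file size":   resource.RLIMIT_CORE,    # core dumps on fatal errors
    "CPU time":         resource.RLIMIT_CPU,     # seconds before termination
    "data seg (heap)":  resource.RLIMIT_DATA,    # malloc/new allocations
    "file size":        resource.RLIMIT_FSIZE,   # largest creatable file
    "open files":       resource.RLIMIT_NOFILE,  # simultaneously open files
    "stack size":       resource.RLIMIT_STACK,   # local-variable stack
    "virtual memory":   resource.RLIMIT_AS,      # total address space
}

def fmt(value):
    return "unlimited" if value == resource.RLIM_INFINITY else str(value)

for name, rlim in LIMITS.items():
    soft, hard = resource.getrlimit(rlim)
    print(f"{name}: soft={fmt(soft)} hard={fmt(hard)}")
```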
This metric category provides information about Solaris Engineered Systems. These configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Oracle Engineered Systems configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
Note:
These metrics are supported for Solaris hosts only. Configuration history is turned off for all these metrics, but configuration comparison is available.
This metric displays the Oracle Engineered System Identifier. This is the serial number of the Engineered System specified during production.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
rack/serial_number property from the oes/id Oracle Solaris Service Management Facility (SMF) service.
Informational only.
This metric displays the Oracle Engineered system name and optionally the build ID.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
oes/type and configuration/build properties from the oes/id SMF service.
Informational only.
This metric can display the hardware or software version, revision, or other details associated with this Engineered System.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
oes/node, configuration/name and configuration/domain_type properties from the oes/id SMF service.
Informational only.
The Paging Activity metric category provides the amount of paging activity on the system.
This metric displays the number of active pages.
Table 2-90 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Active Pages are %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
This metric represents the minor page faults by way of hat_fault() per second. This metric checks the number of faults for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo() system call |
HP Tru64 | table() system call and vmstat command |
IBM AIX | oracle_kstat() system call |
Informational only.
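System-wide fault counts come from kernel statistics, but the minor/major fault distinction can be observed per process through getrusage. A sketch (illustrative only; the agent does not collect the metric this way):

```python
import resource

# Per-process fault counters for the calling process:
# ru_minflt -- minor faults, serviced without disk I/O
# ru_majflt -- major faults, requiring a page to be read from disk
usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", usage.ru_minflt)
print("major faults:", usage.ru_majflt)

# Touching freshly allocated memory typically raises the minor-fault count:
buf = bytearray(4 * 1024 * 1024)                  # 4 MB, faulted in page by page
buf[::4096] = b"\x01" * (len(buf) // 4096)        # write to one byte per page
after = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults after touch:", after.ru_minflt)
```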
The Cache Faults/sec is the rate at which faults occur when a page sought in the file system cache is not found and must be retrieved from elsewhere in memory (a soft fault) or from disk (a hard fault). The file system cache is an area of physical memory that stores recently used pages of data for applications. Cache activity is a reliable indicator of most application I/O operations. This metric shows the number of faults, without regard for the number of pages faulted in each operation.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Windows | Performance Data counters |
Informational only.
Copy-on-Write Faults/sec is the rate at which page faults are caused by attempts to write that have been satisfied by copying the page from elsewhere in physical memory. This is an economical way of sharing data, since pages are copied only when they are written to; otherwise, the page is shared. This metric shows the number of copies, without regard for the number of pages copied in each operation.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Windows | Performance Data counters |
Informational only.
Demand Zero Faults/sec is the rate at which a zeroed page is required to satisfy the fault. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security feature of Windows that prevent processes from seeing data stored by earlier processes that used the memory space. Windows maintains a list of zeroed pages to accelerate this process. This metric shows the number of faults, without regard to the number of pages retrieved to satisfy the fault.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Windows | Performance Data counters |
Informational only.
This metric represents the percentage of UFS inodes taken off the freelist by iget which had reusable pages associated with them. These pages are flushed and cannot be reclaimed by processes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
IBM AIX | oracle_kstat() system call |
Informational only.
Page Faults/sec is the average number of pages faulted per second. It is measured in number of pages faulted per second; because only one page is faulted in each fault operation, this is also equal to the number of page fault operations. This metric includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). Most processors can handle large numbers of soft faults without significant consequence. However, hard faults, which require disk access, can cause significant delays.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Windows | Performance Data Counters |
Informational only.
This metric represents the number of protection faults per second. These faults occur when a program attempts to access memory it should not access, receives a segmentation violation signal, and dumps a core file. This metric checks the number of faults for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system).
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
HP Tru64 | table() system call and vmstat command |
IBM AIX | perfstat system call |
Informational only.
For UNIX-based systems, this metric represents the number of page-ins per second (pages read from disk to resolve faulted memory references) by the virtual memory manager. Along with Page Outs, this statistic represents the amount of real I/O initiated by the virtual memory manager. This metric checks the number of page-ins for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system). For Windows, this metric is the rate at which the disk was read to resolve hard page faults. It shows the number of read operations, without regard to the number of pages retrieved in each operation. Hard page faults occur when a process references a page in virtual memory that is not in its working set or elsewhere in physical memory, and must be retrieved from disk. This metric is a primary indicator of the kinds of faults that cause system-wide delays. It includes read operations to satisfy faults in the file system cache (usually requested by applications) and in non-cached memory-mapped files.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
HP Tru64 | table() system call and vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Informational only.
For UNIX-based systems, this metric represents the number of page write-outs to disk per second. This metric checks the number of page write-outs for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system). For Windows, this metric is the rate at which pages are written to disk to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data, not code. This metric shows write operations, without regard to the number of pages written in each operation.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
HP Tru64 | vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Informational only.
For UNIX-based systems, this metric represents the number of pages paged in (read from disk to resolve faulted memory references) per second. This metric checks the number of pages paged in for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system). For Windows, this metric is the rate at which pages are read from disk to resolve hard page faults. Hard page faults occur when a process refers to a page in virtual memory that is not in its working set or elsewhere in physical memory, and must be retrieved from disk. When a page is faulted, the system tries to read multiple contiguous pages into memory to maximize the benefit of the read operation.
Target Version | Collection Frequency |
---|---|
Sun Solaris, AIX, HP-UX, Windows | Every 15 Minutes |
Table 2-91 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux |
Every 15 Minutes |
Not Defined |
Not Defined |
Pages Paged-in (per second) is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
Linux | sar command |
HP Tru64 | table() system call and vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Informational only.
For UNIX-based systems, this metric represents the number of pages written out per second by the virtual memory manager. Along with Page Ins, this statistic represents the amount of real I/O initiated by the virtual memory manager. This metric checks the number of pages paged out for the CPU(s) specified by the Host CPU(s) parameter, such as cpu_stat0 or * (for all CPUs on the system). For Windows, this metric is the rate at which pages are written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows writes more pages back to disk to free up space when physical memory is in short supply.
Table 2-92 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Pages Paged-out (per second) %value%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo() system call |
Linux | sar command |
HP Tru64 | vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | performance data counters |
Informational only.
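On Linux, the cumulative counters behind the paged-in and paged-out figures are exposed in /proc/vmstat, and sar derives rates from successive samples of such counters. A sketch of that rate calculation, with the two samples hardcoded as illustrative values (on Linux, pgpgin/pgpgout are counted in kilobytes):

```python
# Sketch: /proc/vmstat exposes cumulative pgpgin/pgpgout counters;
# a per-second rate is the delta between two samples over the interval.
SAMPLE_T0 = "pgpgin 1000\npgpgout 500\npgfault 42000\n"   # illustrative values
SAMPLE_T1 = "pgpgin 1600\npgpgout 800\npgfault 43000\n"
INTERVAL = 15 * 60   # seconds, matching the 15-minute collection frequency

def counters(text):
    """Parse 'name value' lines into a dict of integer counters."""
    return {k: int(v) for k, v in (line.split() for line in text.splitlines())}

t0, t1 = counters(SAMPLE_T0), counters(SAMPLE_T1)
pgin_rate = (t1["pgpgin"] - t0["pgpgin"]) / INTERVAL
pgout_rate = (t1["pgpgout"] - t0["pgpgout"]) / INTERVAL
print(f"paged in: {pgin_rate:.3f}/s, paged out: {pgout_rate:.3f}/s")
```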
This metric represents the number of pages that are determined to be unused by the pageout daemon (also called the page-stealing daemon) and put on the list of free pages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo system call |
Linux | not available |
HP Tru64 | table() system call and vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | not available |
Informational only.
This metric represents the scan rate, that is, the number of pages per second scanned by the page-stealing daemon.
Table 2-93 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Pages Paged-In (per second) %value%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | kernel statistics (class misc cpu_stat) |
HP | pstat_getvminfo() system call |
Linux | not available |
HP Tru64 | table() system call and vmstat command |
IBM AIX | oracle_kstat() system call |
Windows | not available |
If this number is zero or close to zero, the system has sufficient memory. If the number is consistently high, adding memory is likely to help.
Transition Faults/sec is the rate at which page faults are resolved by recovering pages that were being used by another process sharing the page, or were on the modified page list or the standby list, or were being written to disk at the time of the page fault. The pages were recovered without additional disk activity. Transition faults are counted in numbers of faults; because only one page is faulted in each operation, it is also equal to the number of pages faulted.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Windows | performance data counters |
Informational only.
The Peripheral Component Interconnect (PCI) Devices metric monitors the status of PCI devices.
Descriptive name of the Dell Peripheral Component Interconnect (PCI) Device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: pCIDeviceDescriptionName (1.3.6.1.4.1.674.10892.1.1100.80.1.9)
None.
Name of the Dell Peripheral Component Interconnect (PCI) Device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: pCIDeviceManufacturerName (1.3.6.1.4.1.674.10892.1.1100.80.1.8)
None.
Represents the status of the Dell Peripheral Component Interconnect (PCI) Device.
This metric is available only on Dell PowerEdge Linux systems.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-94 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of PCIDevice(Object identifier:1.3.6.1.4.1.674.10892.1.1100.80.1.5) is %PCIDeviceIndex% in chassis %ChassisIndex% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %PCIDeviceStatus% |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index", "PCI Device Index", and "System Slot Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index", "PCI Device Index", and "System Slot Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index", "PCI Device Index", and "System Slot Index" objects, use the Edit Thresholds page.
SNMP MIB object: PCIDeviceStatus (1.3.6.1.4.1.674.10892.1.1100.80.1.5)
None.
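The 1-6 status codes in the table above, together with the default warning (4) and critical (5) thresholds from Table 2-94, reduce to a small lookup. A sketch (the code-to-meaning mapping is taken from the MIB table above; the evaluation logic is an illustration of how the thresholds apply):

```python
# Dell SNMP status codes, per the MIB table above.
STATUS = {
    1: "Other",
    2: "Unknown",
    3: "Normal",
    4: "Warning",
    5: "Critical",
    6: "Non-Recoverable",
}
WARNING_THRESHOLD, CRITICAL_THRESHOLD = 4, 5   # defaults from Table 2-94

def evaluate(code):
    """Return the alert severity for a raw status code."""
    if code >= CRITICAL_THRESHOLD:
        return "critical"
    if code >= WARNING_THRESHOLD:
        return "warning"
    return "clear"

for code in (3, 4, 6):
    print(code, STATUS[code], "->", evaluate(code))
```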
This metric category provides information about the array disk state.
This metric is available only on Dell PowerEdge Linux systems.
This metric provides the severity of the array disk state. This is the combined status of the array disk and its components.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-95 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux Dell PowerEdge |
Every 15 Minutes |
Not Defined |
Not Defined |
Physical devices array disk rollup status code(Object identifier:1.3.6.1.4.1.674.10893.1.20.130.4.1.23) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %ArrayDiskRollUpStatus% |
This metric category provides information about the controller state.
This metric is available only on Dell PowerEdge Linux systems.
This metric provides the severity of the controller state. This is the combined status of the controller and its components.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-96 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
Linux Dell PowerEdge |
Every 15 Minutes |
Not Defined |
Not Defined |
Status of physical devices controller(Object identifier:1.3.6.1.4.1.674.10893.1.20.130.1.1.37) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %ControllerRollUpStatus% |
The Power Supplies metric monitors the status of various power supplies present in the host system.
This metric is available only on Dell PowerEdge Linux systems.
This metric represents the location name of the power supply.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: powerSupplyLocationName (1.3.6.1.4.1.674.10892.1.600.12.1.8)
None.
This metric represents the maximum sustained output wattage of the power supply, in tenths of watts.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: powerSupplyOutputWatts (1.3.6.1.4.1.674.10892.1.600.12.1.6)
None.
This metric represents the status of the power supply.
This metric is available only on Dell PowerEdge Linux systems.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-97 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of Power Supply %PSIndex% in chassis %ChassisIndex% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index" and "Power Supply Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index" and "Power Supply Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index" and "Power Supply Index" objects, use the Edit Thresholds page.
SNMP MIB object: powerSupplyStatus (1.3.6.1.4.1.674.10892.1.600.12.1.5)
None.
The metrics in this category provide information about the process IPCS usage.
This metric provides the IPCS identifier.
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
This metric provides the type of interprocess communication (IPC).
Target Version | Collection Frequency |
---|---|
All Versions | Every Hour |
The Top Processes metric category lists up to 20 processes: the 10 processes consuming the largest percentage of memory and the 10 processes consuming the largest percentage of CPU time. The processes are listed in order of memory consumption.
This metric represents the command and all its arguments.
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo args |
HP | ps command, for example, ps -efo args |
Linux | ps command, for example, ps -efo args |
HP Tru64 | ps command, for example, ps -efo args |
IBM AIX | ps command, for example, ps -efo args |
Windows | performance data counters |
None.
This metric represents the CPU utilization time in seconds.
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo time |
HP | ps command, for example, ps -efo time |
Linux | ps command, for example, ps -efo time |
HP Tru64 | ps command, for example, ps -efo time |
IBM AIX | ps command, for example, ps -efo time |
Windows | performance data counters |
None.
This metric represents the percentage of CPU time consumed by the process.
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo pcpu |
HP | ps command, for example, ps -efo pmem |
Linux | ps command, for example, ps -efo pcpu |
HP Tru64 | ps command, for example, ps -efo pcpu |
IBM AIX | ps command, for example, ps -efo pcpu |
Windows | performance data counters |
None.
This metric represents the percentage of memory consumed by the process.
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo pmem |
HP | ps command, for example, ps -efo pmem |
Linux | ps command, for example, ps -efo pmem |
HP Tru64 | ps command, for example, ps -efo pmem |
IBM AIX | ps command, for example, ps -efo pmem |
Windows | performance data counters |
None.
This metric represents the number of kilobytes of physical memory being used.
For the following hosts:
Host | Data Source |
---|---|
Solaris | kernel memory structure (class vminfo) |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | kernel memory structure (struct vminfo) |
Windows | Windows API |
None.
This metric represents the user name that owns the process, that is, the user ID of the process being reported on.
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo user |
HP | ps command |
Linux | ps command, for example, ps -efo user |
HP Tru64 | ps command, for example, ps -efo user |
IBM AIX | ps command, for example, ps -efo user |
Windows | Windows API |
None.
This metric represents the total size of the process in virtual memory in kilobytes (KB).
For the following hosts:
Host | Data Source |
---|---|
Solaris | ps command, for example, ps -efo vsz |
HP | ps command, for example, ps -efo vsz |
Linux | ps command, for example, ps -efo vsz |
HP Tru64 | ps command, for example, ps -efo vsz |
IBM AIX | ps command, for example, ps -efo vsz |
Windows | Windows API |
None.
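The Top Processes listing reduces to sorting ps output by %MEM and %CPU. A sketch against sample rows in the shape of ps -eo pid,pcpu,pmem,args (the rows are illustrative and hardcoded so the example is self-contained):

```python
# Sketch: rank processes by %MEM and %CPU, as the Top Processes
# metric does. Sample rows in the shape of: ps -eo pid,pcpu,pmem,args
SAMPLE = """\
  101  2.5  8.0 /usr/bin/java -Xmx2g Server
  202 15.0  3.1 /usr/bin/python3 worker.py
  303  0.1  0.2 /usr/sbin/sshd -D
"""

def parse(text):
    """Parse ps rows into dicts; args may itself contain spaces."""
    procs = []
    for line in text.splitlines():
        if not line.strip():
            continue
        pid, pcpu, pmem, args = line.split(None, 3)
        procs.append({"pid": int(pid), "pcpu": float(pcpu),
                      "pmem": float(pmem), "args": args})
    return procs

procs = parse(SAMPLE)
top_mem = sorted(procs, key=lambda p: p["pmem"], reverse=True)[:10]
top_cpu = sorted(procs, key=lambda p: p["pcpu"], reverse=True)[:10]
print("top memory:", [p["pid"] for p in top_mem])
print("top CPU:", [p["pid"] for p in top_cpu])
```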
The Process, Inode, File Tables Stats metric category provides information about the process, inode, and file tables status.
This metric represents the number of times the system file table overflowed, that is, the number of times that the OS could not find any available entries in the table in the sampling period chosen to collect the data.
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the number of times the inode table overflowed, that is, the number of times the OS could not find any available inode table entries.
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the maximum size of the inode table.
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the maximum size of the process table.
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the maximum size of the system file table.
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
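On Solaris, the table metrics above come from the columns of sar -v, where each sz field is a current/maximum pair and each ov field is an overflow count. A parsing sketch (the sample line uses illustrative values, hardcoded so the example is self-contained):

```python
# Sketch: parse Solaris-style `sar -v 5 1` output, whose columns are
#   proc-sz  ov  inod-sz  ov  file-sz  ov  lock-sz
# "sz" fields are current/maximum pairs; "ov" fields count table overflows.
SAMPLE_LINE = "12:00:05  120/4058  0  3856/8064  0  450/7000  0  40"

fields = SAMPLE_LINE.split()
proc_cur, proc_max = map(int, fields[1].split("/"))
proc_ov = int(fields[2])
inod_cur, inod_max = map(int, fields[3].split("/"))
inod_ov = int(fields[4])
file_cur, file_max = map(int, fields[5].split("/"))
file_ov = int(fields[6])

print("process table:", proc_cur, "of", proc_max, "overflows:", proc_ov)
print("inode table:", inod_cur, "of", inod_max, "overflows:", inod_ov)
print("file table:", file_cur, "of", file_max, "overflows:", file_ov)
```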
This metric represents the number of allocated disk quota entries.
Table 2-98 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Number Of Allocated Disk Quota Entries is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric provides the number of queued RT signals.
Table 2-99 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Number Of Queued RT Signals is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric provides the number of allocated super block handlers.
Table 2-100 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Allocated Super Block Handlers %value%%% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric represents the current size of the system file table.
Table 2-101 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Number Of Used File Handles is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric represents the Percentage Of Allocated Disk Quota Entries against the maximum number of cached disk quota entries that can be allocated.
Table 2-102 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Allocated Disk Quota Entries %value%%% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric represents the Percentage Of Allocated Super Block Handlers against the maximum number of super block handlers that Linux can allocate.
Table 2-103 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
not defined |
not defined |
Allocated Super Block Handlers %value%%% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric represents the percentage of queued RT signals.
Table 2-104 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Queued RT Signals %value%%% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
This metric represents the percentage of used file handles against the maximum number of file handles that the Linux kernel can allocate.
Table 2-105 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Used File Handles %value%%%, has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Linux | sar command |
None.
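On Linux, the counts behind the used file handles metrics are exposed in `/proc/sys/fs/file-nr` (allocated handles, allocated-but-unused handles, and the `fs.file-max` limit). The sketch below computes the percentage from a sample line; the sample values are illustrative, not from a live system.

```python
# Sketch: compute the Used File Handles percentage from the three
# fields of /proc/sys/fs/file-nr. On 2.6+ kernels the second field
# (allocated-but-unused) is always 0.

def used_file_handles_pct(file_nr_line):
    allocated, unused, maximum = map(int, file_nr_line.split())
    in_use = allocated - unused
    return 100.0 * in_use / maximum

# Illustrative sample values:
pct = used_file_handles_pct("5248 0 262144")
print(round(pct, 2))  # 2.0
```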
This metric represents the number of times the process table overflowed, that is, the number of times the OS could not find any available process table entries in a five-second interval.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the current size of the inode table.
Table 2-106 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Size of Inode Table is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
None.
This metric represents the current size of the process table.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
HP Tru64 | table() system call |
IBM AIX | sar command |
The Processors metric category monitors the state of each CPU in the host.
This metric is available only on Dell PowerEdge Linux systems.
This metric represents the family of the Dell processor devices.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: processorDeviceFamily (1.3.6.1.4.1.674.10892.1.1100.30.1.10)
None.
This metric represents the name of the manufacturer of the Dell processor.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: processorDeviceManufacturerName (1.3.6.1.4.1.674.10892.1.1100.30.1.8)
None.
This metric represents the status of the Dell processor device.
This metric is available only on Dell PowerEdge Linux systems.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-107 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of Processor %ProcessorIndex% in chassis %ChassisIndex% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index" and "Processor Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index" and "Processor Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index" and "Processor Index" objects, use the Edit Thresholds page.
SNMP MIB object: processorDeviceStatus (1.3.6.1.4.1.674.10892.1.1100.30.1.5)
None.
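The Dell status metrics in this chapter share the value-to-severity mapping above. The following is a minimal sketch of how the default thresholds (warning 4, critical 5) might classify a polled SNMP value; the greater-than-or-equal comparison is an assumption about the threshold semantics, not documented behavior.

```python
# Sketch: classify a Dell SNMP status value (for example,
# processorDeviceStatus) against the default thresholds.
# The >= comparison is an assumption.

STATUS = {1: "Other", 2: "Unknown", 3: "Normal",
          4: "Warning", 5: "Critical", 6: "Non-Recoverable"}

def alert_level(value, warning=4, critical=5):
    if value >= critical:
        return "critical"
    if value >= warning:
        return "warning"
    return "clear"

for v in (3, 4, 6):
    print(v, STATUS[v], alert_level(v))
```

Under this reading, a value of 6 (Non-Recoverable) also raises a critical alert because it exceeds the critical threshold of 5.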
This metric represents the current speed of the Dell processor device in megahertz (MHz). A value of zero indicates that the speed is unknown.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: processorDeviceCurrentSpeed (1.3.6.1.4.1.674.10892.1.1100.30.1.12)
None.
The metrics in this category provide information about processor group usage.
This metric provides the hardware load as a percentage.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of time that no software threads ran on CPUs in the processor group.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the software load as a percentage.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The metrics in this category provide information about the processor set usage.
This metric provides the name of the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of time that the CPU was idle and the system did not have an outstanding disk I/O request.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of process time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The metrics in this category provide information about the processor set zone usage.
This metric provides the name of the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the name of the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the amount of CPU time used by the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of CPU time used by the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the processor set schedulers.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the number of shares allocated to the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of shares allocated to the processor set.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the percentage of used allocated shares.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The Program Resource Utilization metric category provides flexible resource monitoring functionality. The operator must specify the criteria for the programs to be monitored by specifying key value specific thresholds. Values for the key value columns {program name, owner} define the unique criteria to be monitored for resource utilization in the system.
By default, no programs will be tracked by this metric. Key Values entered as part of a key value specific threshold setting define the criteria for monitoring and tracking.
This metric is available on Solaris only.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
Windows | ps command |
None.
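The metrics in this category are derived by grouping ps output on the {program name, owner} key. The sketch below illustrates that aggregation on sample output; the ps column selection and the sample process list are illustrative assumptions, not the agent's actual invocation.

```python
# Sketch: aggregate sample `ps -eo user,comm,pcpu,rss`-style output
# by the {program name, owner} key, producing the process count,
# total CPU utilization, and maximum resident memory per key.
from collections import defaultdict

SAMPLE_PS = """\
oracle ora_pmon 0.3 51200
oracle ora_dbw0 1.2 204800
root   sshd     0.0 4096
oracle ora_dbw0 0.8 198656
"""

def aggregate(ps_text):
    stats = defaultdict(lambda: {"count": 0, "total_cpu_util": 0.0,
                                 "max_rss_kb": 0})
    for line in ps_text.strip().splitlines():
        user, comm, pcpu, rss = line.split()
        key = (comm, user)                      # {program name, owner}
        s = stats[key]
        s["count"] += 1
        s["total_cpu_util"] += float(pcpu)
        s["max_rss_kb"] = max(s["max_rss_kb"], int(rss))
    return stats

s = aggregate(SAMPLE_PS)[("ora_dbw0", "oracle")]
print(s)
```

A key value specific threshold on, say, the count field then flags when the number of matching processes crosses the operator-set limit.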
This metric represents the maximum CPU time accumulated by the most active process matching the {program name, owner} key value criteria.
Table 2-108 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Process %prog_max_cpu_time_pid% matched by the program name ''%prog_name%'' and owner ''%owner%'' has accumulated %prog_max_cpu_time% minutes of cpu time. This duration has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric represents the maximum percentage of CPU utilized by a single process matching the {program name, owner} key value criteria since last scan.
Table 2-109 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Process %prog_max_cpu_util_pid% running program %prog_name% is utilizing %prog_max_cpu_util%%% cpu. This percentage crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric fetches the current number of processes matching the {program name, owner} key value criteria. It can be used for setting warning or critical thresholds to monitor the maximum number of processes that a given {program name, owner} key value criteria should not exceed.
Table 2-110 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
%prog_max_process_count% processes are matched by the program name ''%prog_name%'' and owner ''%owner%''. They have crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric represents the maximum resident memory occupied by a single process matching the {program name, owner} key value criteria. It can be used for setting warning or critical thresholds to monitor the maximum resident memory that any process matching a given {program name, owner} key value criteria should not exceed.
Table 2-111 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
Process %prog_max_rss_pid% matched by the program name ''%prog_name%'' and owner ''%owner%'' is utilizing %prog_max_rss% (MB) of resident memory. It has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric fetches the current number of processes matching the {program name, owner} key value criteria. It can be used for setting warning or critical thresholds to monitor the minimum number of processes below which a given {program name, owner} key value criteria should never fall.
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 2-112 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
%prog_min_process_count% processes are matched by the program name ''%prog_name%'' and owner ''%owner%''. They have fallen below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric represents the total CPU time accumulated by all active processes matching the {program name, owner} key value criteria.
Table 2-113 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
%prog_max_process_count% processes matched by the program name ''%prog_name%'' and owner ''%owner%'' have accumulated %prog_total_cpu_time% minutes of cpu time. This duration has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
This metric represents the percentage of CPU time utilized by all active processes matching the {program name, owner} key value criteria since the last collection.
Table 2-114 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 Minutes |
Not Defined |
Not Defined |
%prog_max_process_count% processes matched by the program name ''%prog_name%'' and owner ''%owner%'' are utilizing %prog_total_cpu_util%%% of the cpu. It has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each unique combination of "Program Name" and "Owner" objects.
If warning or critical threshold values are currently set for any unique combination of "Program Name" and "Owner" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Program Name" and "Owner" objects, use the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
None.
The Remote Access Card metric category monitors the status of the Remote Access Card.
This metric category is available for Dell PowerEdge Linux systems only.
This metric determines whether the dynamic host configuration protocol (DHCP) was used to obtain the network interface card (NIC) information.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessNICCurrentInfoFromDHCP (1.3.6.1.4.1.674.10892.1.1700.10.1.33)
None.
This metric represents the IP address for the gateway currently being used by the onboard network interface card (NIC) provided by the remote access (RAC) hardware.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessNICCurrentGatewayAddress (1.3.6.1.4.1.674.10892.1.1700.10.1.32)
None.
This metric provides the internet protocol (IP) address currently being used by the onboard network interface card (NIC) provided by the remote access (RAC) hardware.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessNICCurrentIPAddress (1.3.6.1.4.1.674.10892.1.1700.10.1.30)
None.
This metric represents the local area network (LAN) settings of the remote access hardware.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessLANSettings (1.3.6.1.4.1.674.10892.1.1700.10.1.15)
None.
This metric represents the subnet mask currently being used by the onboard network interface card (NIC) provided by the remote access (RAC) hardware.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessLANSettings (1.3.6.1.4.1.674.10892.1.1700.10.1.15)
None.
This metric represents the name of the product providing the remote access (RAC) functionality.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessProductInfoName (1.3.6.1.4.1.674.10892.1.1700.10.1.7)
None.
This metric represents the state of the remote access (RAC) hardware.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: remoteAccessStateSettings (1.3.6.1.4.1.674.10892.1.1700.10.1.5)
None.
This metric represents the status of the remote access (RAC) hardware.
This metric is available for Dell PowerEdge Linux systems only.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-115 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of Remote Access Card (Object identifier: 1.3.6.1.4.1.674.10892.1.1700.10.1.6) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %RACStatus%. |
SNMP MIB object: remoteAccessStatus (1.3.6.1.4.1.674.10892.1.1700.10.1.6)
None.
This metric category provides the status of the Reliable Datagram Sockets (RDS) protocol layer.
This section provides details on the Windows Services metrics.
This metric displays a unique identifier of the service, indicating the functionality that it manages.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays a fully qualified path to the service binary file that implements the service. For example, C:\Windows\Oracle\monitor.sys.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides a description of the object.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the display name of the service. This string has a maximum length of 256 characters. The name is case-preserved in the Service Control Manager.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays the process identifier of the service.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays the start mode of the Windows base service. Possible values are Boot, System, Auto, Manual, and Disabled.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric provides the account name under which a service runs. Depending on the service type, the account name may be in the form of DomainName\Username.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric specifies whether the service has started. Possible values are true and false.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This metric displays the current state of the base service. The following are the possible values:
Stopped
Start Pending
Stop Pending
Running
Continue Pending
Pause Pending
Paused
Unknown
Table 2-117 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
State for the service:(%name%) is (%state%) matches warning threshold (%warning_threshold%) or critical threshold (%critical_threshold%) |
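The state threshold for this metric is a string match against the values listed above. A minimal sketch of that evaluation follows; the case-insensitive equality comparison is an assumption about how the string thresholds behave, not documented semantics.

```python
# Sketch: evaluate a Windows service state string against
# operator-set warning/critical state strings. The matching rule
# (case-insensitive equality) is assumed.

VALID_STATES = {"Stopped", "Start Pending", "Stop Pending", "Running",
                "Continue Pending", "Pause Pending", "Paused", "Unknown"}

def evaluate(state, warning=None, critical=None):
    assert state in VALID_STATES
    if critical and state.lower() == critical.lower():
        return "critical"
    if warning and state.lower() == warning.lower():
        return "warning"
    return "clear"

print(evaluate("Stopped", critical="Stopped"))  # critical
print(evaluate("Running", critical="Stopped"))  # clear
```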
This metric displays the current status of the object. This includes both operational and non-operational statuses provided by the service. The following are the possible values:
OK
Error
Degraded
Unknown
Pred Fail
Starting
Stopping
Service
Table 2-118 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Status for the service:(%name%) is (%status%) matches warning threshold (%warning_threshold%) or critical threshold (%critical_threshold%) |
This section provides information about the metric categories and metrics relating to Solaris Virtualization technologies.
These metrics support the following Solaris Virtualization technologies:
Zones
Non-global zone
Global zone
Virtual Box
Solaris running as Guest OS
Oracle VM for x86
domU
Oracle VM for SPARC (Logical Domains (LDOMs))
Guest
Control
I/O
Services
Root
Solaris Virtualization metrics use Oracle Certified monitoring templates to enable or disable scheduled collections. By default these metrics are disabled and are not collected.
For more information about monitoring templates, see the Oracle Enterprise Manager Cloud Control Administrator's Guide.
To enable the Solaris Virtualization configuration metrics:
From the Cloud Control console, select Setup, then Security, and then Privilege Delegation. On this page, you can either set the privilege delegation for each host manually or you can create a Privilege Delegation Setting template.
Privilege delegation for each host must have the SUDO setting enabled with the appropriate SUDO command filled in (for example, /usr/local/bin/sudo).
Note:
SUDO setup must pass the LD_LIBRARY_PATH environment variable to root.
Select Setup, then Security, and then Monitoring Credentials. From this page, select the Host target type and click Manage Monitoring Credentials.
For each entry with the credential "Privileged Host Monitoring Credentials", select the entry and click Set Credentials. You will be asked for a credential set to use. Ensure you add "sudo" to Run Privilege and "root" to the Run As entry.
Click Test and Save.
Then from the Enterprise menu, select Monitoring, and then Monitoring Templates.
Select Display Oracle Certified Templates and from the Target Type list, select Host.
From the results, select Oracle Certified - Enable Solaris Host Virtualization metrics, then click Apply.
Under Destination Targets, click Add to add the required Host targets, then click Select, and then click OK.
This metric category provides information about the Solaris OVM for SPARC configuration metrics.
These configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Solaris OVM for SPARC configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
This metric displays the number of CPUs available to this domain.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
The ncpu field retrieved using the command-line interface for the Logical Domains Manager utility (ldm(1M)).
Informational only
This metric displays a list of the roles that a logical domain can perform.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
DOMAINROLE field retrieved using the virtinfo(1M) utility.
Informational only
This metric displays the domain name from the ldm(1M) utility or the DOMAINUUID field retrieved using the virtinfo(1M) utility.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
Domain Name from the ldm(1M) utility or the DOMAINUUID field retrieved using the virtinfo(1M) utility.
Informational only
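Both of the preceding metrics read labeled fields from virtinfo(1M) output. The sketch below parses a sample of that output into a dictionary; the labels and values shown are illustrative and the exact output format can vary by Solaris release.

```python
# Sketch: parse illustrative virtinfo(1M)-style output into a dict
# keyed by field label. The sample text and UUID are made up.

SAMPLE = """\
Domain role: LDoms control I/O service root
Domain name: primary
Domain UUID: 4e8cfe66-9f46-4a7a-a6a1-63f0a3f0a3f0
"""

def parse_virtinfo(text):
    info = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

info = parse_virtinfo(SAMPLE)
print(info["Domain role"])
```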
This metric displays the comma-separated list of all I/O types used by the domain.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
Comma-separated list of unique I/O types retrieved using the ldm(1M) utility.
Informational only
This metric displays 1 or 0. A value of 1 indicates that at least one virtual switch (VSW) or virtual network (VNET) device has a maximum transmission unit (MTU) greater than 1500.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
Check based on the MTU value retrieved by the ldm(1M) utility.
Informational only
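As a sketch, the MTU check described above can be reproduced by parsing device MTU values from ldm(1M) output. The function and the sample lines below are illustrative assumptions, not the agent's actual parser; the exact ldm output format varies by version.

```python
def any_jumbo_mtu(ldm_lines, threshold=1500):
    """Return 1 if any VSW/VNET device reports an MTU above the threshold, else 0.

    `ldm_lines` is assumed to hold "NAME MTU" pairs, one device per line;
    treat this as a sketch rather than a parser for a specific ldm release.
    """
    for line in ldm_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1].isdigit() and int(parts[1]) > threshold:
            return 1
    return 0

# Hypothetical sample: one VNET device configured with jumbo frames.
sample = ["vsw0 1500", "vnet1 9000"]
```

With the sample above, the function reports 1 because vnet1 exceeds the default 1500-byte threshold.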
This metric specifies the maximum number of cores that are permitted to be assigned to a domain.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
max-cores property retrieved using the ldm(1M) utility.
Informational only
This metric displays the migration mode type, which can be generic or native.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
Generic or native value from the cpu-arch property retrieved using the ldm(1M) utility.
Informational only
This metric displays the chassis serial number retrieved by the virtinfo(1M) utility.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
DOMAINCHASSIS field retrieved using the virtinfo(1M) utility.
Informational only
This metric specifies whether CPU cores are allocated to a domain rather than virtual CPUs.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
whole-core property retrieved using the ldm(1M) utility.
Informational only
This metric category provides information about the Solaris Zones Configuration metrics.
These configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Solaris Zones configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
This metric displays the system-assigned processor set ID for the processor set used by the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
cpu.sys_id property for the pset in use, retrieved by the pooladm(1M) command.
Informational only
This metric displays the value that sets an absolute limit on the amount of CPU resources a zone can consume.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
capped-cpu/ncpus property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the current number of CPUs in the processor set used by the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
pset.size property for the pset in use, retrieved by the pooladm(1M) command.
Informational only
This metric displays the host name of a zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
hostOSD::getHostName()
Informational only
This metric displays the IP type used by the zone. There are two IP types available for non-global zones:
shared-IP
exclusive-IP
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
zoneadm(1M).
Informational only
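The shared-IP versus exclusive-IP value can be read from the machine-parsable output of zoneadm list -p, where ip-type is the final colon-separated field (printed as shared or excl). A minimal sketch, assuming the common field layout; verify the layout on your target release:

```python
def zone_ip_types(zoneadm_p_output):
    """Map zone name -> ip-type from `zoneadm list -p` output.

    Each line is colon-separated:
    zoneid:zonename:state:zonepath:uuid:brand:ip-type
    (field layout assumed from common Solaris releases).
    """
    result = {}
    for line in zoneadm_p_output.splitlines():
        fields = line.split(":")
        if len(fields) >= 7:
            result[fields[1]] = fields[6]
    return result
```

For example, a line ending in `:excl` maps that zone to the exclusive-IP type.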
This metric displays the limit on locked memory used by processes in the non-global zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
The capped-memory/locked property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the limit on the number of processors that can be allocated to the processor set used by a zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
pset.max property for the pset in use, retrieved by the pooladm(1M) command.
Informational only
This metric displays the value that controls the maximum number of processes in the non-global zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
max-processes property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the value specifying the number of the system's processors that should be dedicated to a non-global zone while it is running.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
dedicated-cpu/ncpus property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the value that caps the physical memory consumed by processes in the non-global zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
capped-memory/physical property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the processor set name within the resource pool currently used by a zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
pset name for the pool in use, retrieved by the pooladm(1M) command.
Informational only
This metric displays the resource pool name used by a zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
pool property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the value that caps the swap space used by the non-global zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
capped-memory/swap property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the brand of the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
zoneadm(1M).
Informational only
This metric displays the name of the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
zoneadm(1M)
Informational only
This metric lists the privileges for this non-global zone. In the non-global zone, processes are restricted to a subset of privileges. Privilege restriction prevents a zone from performing operations that might affect other zones. The set of privileges limits the capabilities of privileged users within the zone.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
limitpriv property retrieved by the zonecfg(1M) utility.
Informational only
This metric displays the status of the zone, such as running, configured, ready, or installed.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
zoneadm(1M).
Informational only
This metric displays the zone universally unique identifier (UUID), zone name, or unique identifier from the smbios(1M) utility.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Solaris Virtualization Technology Metrics". |
Zone UUID, zone name, or UUID from the smbios(1M) utility.
zoneadm(1M) or smbios(1M)
Informational only
This metric category provides information about the Solaris Zones SMF Services Configuration metrics.
These configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Solaris Zones SMF Services configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
This metric displays the Fault Management Resource Identifier (FMRI) column for each Oracle Solaris Service Management Facility (SMF) service in all running zones except for services in a disabled state.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The svcs(1) command used with the zlogin(1) command.
Informational only
This metric displays the STATE column for each SMF service in all running zones except for services in a disabled state.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The svcs(1) command used with the zlogin(1) command.
Informational only
This metric category provides information about the Virtualization Technologies metrics. It collects generic details about virtualization technologies found on the system.
These configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Virtualization Technologies configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
This metric can display the hardware or software version, revision, or any other information associated with the selected virtualization technology.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
ldm(1M), smbios(1M), zoneadm(1M)
Informational only
This metric can display a UUID, or another unique identifier preceded by the Virtualization Technology prefix.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
UUID from the smbios(1M) utility, zone name or UUID, or Oracle VM for SPARC domain UUID or domain name, prefixed with the virtualization technology name.
smbios(1M), zoneadm(1M), virtinfo(1M), ldm(1M)
Informational only
This metric category provides information about the Storage Management SNMP agent.
This metric provides global health information for the subsystem managed by the Storage Management software.
Table 2-119 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Status of Global Storage management data group(Object identifier:1.3.6.1.4.1.674.10893.1.20.110.13) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %AgentGlobalSystemStatus% |
This metric category provides information about the Storage Area Network configuration metrics. Configuration metrics are not available from the All Metrics page of the Cloud Control console.
To view the Storage Area Network configuration metrics:
From the Cloud Control UI, select your Host target type.
Right-click the target type name, and select Configuration, then select Last Collected.
The metrics appear under Latest Configuration.
Note:
These metrics are supported for Linux and Solaris hosts only. By default, these metrics are disabled and are not collected.
Storage Area Network metrics use Oracle Certified monitoring templates to enable or disable scheduled collections.
For more information about monitoring templates, see the Oracle Enterprise Manager Cloud Control Administrator's Guide.
To enable these configuration metrics:
From the Cloud Control console, select Setup, then Security, and then Privilege Delegation. On this page you can either set privilege delegation for each host manually or you can create a Privilege Delegation Setting Template.
Privilege delegation for each host must have the SUDO setting enabled with the appropriate SUDO command filled in (for example, /usr/local/bin/sudo).
Select Setup, then Security, and then Monitoring Credentials. From this page, select the Host target type and click Manage Monitoring Credentials.
For each entry with the credential "Privileged Host Monitoring Credentials", select the entry and click Set Credentials. You will be asked for a credential set to use. Ensure you add "sudo" to Run Privilege and "root" to the Run As entry.
Click Test and Save.
Then from the Enterprise menu, select Monitoring, and then Monitoring Templates.
Select Display Oracle Certified Templates and from the Target Type list, select Host.
From the results, select Oracle Certified Template for SAN metrics - Enable, then click Apply.
Under Destination Targets, click Add to add the required Host targets, then click Select, and then click OK.
This metric specifies whether the Fibre Channel protocol is supported.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class directory |
This metric specifies whether Internet Small Computer System Interface (iSCSI) protocol is supported.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm list initiator-node command |
Linux | /etc/initiatorname.iscsi file |
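On Linux, the initiator IQN lives in the initiatorname.iscsi file listed above, which holds `InitiatorName=...` lines alongside `#` comments. A minimal sketch, assuming that file format:

```python
def initiator_iqn(text):
    """Extract the initiator IQN from initiatorname.iscsi file content.

    Expects lines such as `InitiatorName=iqn.1994-05.com.redhat:host1`
    (the IQN value here is a made-up example); returns None if absent.
    """
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("InitiatorName="):
            return line.split("=", 1)[1]
    return None
```

A non-None return value indicates iSCSI initiator configuration is present on the host.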
This metric displays the name of the multipath software.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | powermt command or mpathadm command |
Linux | powermt command or multipath command |
This metric subcategory provides information about the Storage Area Network (SAN) devices.
This metric displays the World Wide Port Name (WWPN) assigned to a port in a Fibre Channel fabric.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the name of the SAN fabric. The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the maximum frame size for the fibre channel frame.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the model description of the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the World Wide Node Name (WWNN) assigned to a node in a Fibre Channel fabric.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the port ID of the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric indicates the status of the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the type of Fibre Channel port.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the speed of the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the classes supported by the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the speeds supported by the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the bind type of the target ID and specifies the method of binding for each port.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | From sysfs, /sys/class/fc_host/host*/ |
This metric displays the manufacturer or vendor of the HBA.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo hba-port command |
Linux | lspci -vmm command |
This metric subcategory provides information about the Storage Area Network (SAN) devices.
This metric displays the type of the property name, such as IQN (iSCSI Qualified Name).
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm list initiator-node command |
Linux | /etc/initiatorname.iscsi file |
This metric displays the value of the property name.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm list initiator-node command |
Linux | /etc/initiatorname.iscsi file |
This metric category provides information about the Storage Area Network (SAN) devices. These are configuration metrics and cannot be viewed from the All Metrics page in the Cloud Control UI.
Note:
These metrics are supported for Linux and Solaris hosts only.
This metric displays the disk path to the SAN device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm / fcinfo command |
Linux | /sys/class/ directory |
This metric displays the Logical Unit Number (LUN). The LUN is a disk presented to a computer system by a storage array.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm / fcinfo command |
Linux | /sys/class/ directory |
This metric displays the unique ID assigned to a LUN.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | mpathadm or powermt and inquiry.pp command |
Linux | multipath or powermt and scsi_id command |
This metric displays the SAN protocol.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm / fcinfo command |
Linux | /sys/class/ directory |
This metric displays the path to the pseudo device used by multipathing to facilitate the sharing and balancing of I/O operations across all of the available I/O paths.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | mpathadm or powermt command |
Linux | multipath or powermt command |
This metric displays the iSCSI Qualified Name (IQN) of the storage server.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm command |
Linux | /sys/class/ directory |
This metric displays the World Wide Node Name (WWNN) assigned to the storage server.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo command |
Linux | /sys/class/ directory |
This metric displays the World Wide Port Name (WWPN) of the storage server.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | fcinfo command |
Linux | /sys/class/ directory |
This metric displays the vendor or manufacturer of the SAN device.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours when enabled. For more information, see "Enabling Storage Area Network Metrics". |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | iscsiadm / fcinfo command |
Linux | /sys/class/ directory |
The Storage Summary metrics collectively represent the summary of storage data on a host target. These metrics are derived from the various metrics collected and uploaded into the Oracle Management Repository by the Management Agent. They are computed every time the Management Agent populates the Management Repository with storage data. This collection is also triggered automatically whenever the user manually refreshes the host storage data from the Storage Details page.
This metric represents the total storage allocated to Oracle databases from Automatic Storage Management (ASM) instances on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the number of metric collection errors attributed to the storage-related metrics of the Automatic Storage Management (ASM) targets on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage overhead of Automatic Storage Management (ASM) targets on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage available in Automatic Storage Management (ASM) targets on the host for allocating to databases.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total free storage available in the databases on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the metric collection errors of storage-related metrics of databases on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total free storage available in the databases on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage allocated from the total disk storage available on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage that is available for allocation in disks on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of storage-related metric collection errors of the host target.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
The possible values for this metric are:
1 (one) if this host storage was computed successfully (possibly with partial errors)
0 (zero) if the storage computation did not proceed at all for some reason (for example, failure to collect critical storage metric data)
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total free storage in all distinct local file systems on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total used space in all distinct local file systems on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of Automatic Storage Management (ASM) instances whose storage data was used in computing the storage summary of this host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of databases whose storage data was used in computing the storage summary of this host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the number of storage metric mapping issues on the host, excluding mapping errors caused by unmonitored NFS servers.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of Automatic Storage Management (ASM) instances on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of databases on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total storage allocated from the host-visible storage available on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the free storage available from the total allocated storage on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the overhead associated with storage on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total unallocated storage on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total storage used in the file systems and databases on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total number of storage mapping issues that result from unmonitored Network File System (NFS) servers.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total storage allocated from the volumes available on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage overhead in the volumes on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage available for allocation in the volumes on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the total free space available in all distinct writeable NFS mounts on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric represents the storage used in all writeable NFS mounts on the host.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 24 hours or when the user manually refreshes storage data from the Storage Details page. |
These metrics are available on the Linux and Solaris hosts.
For more details on how these metrics are computed see the "About Storage Computation Formulas" topic in the Enterprise Manager online help. The online help also provides information about ASM, databases, disks, file systems, volumes, and storage details.
This metric category is used to determine whether the operating system being monitored is supported. It is also used to inform the user whether the number of storage entities (disks, file systems, and volumes) being monitored is within the applicable limits, so that the performance of the Management Agent is not affected.
The metric is used to enable or disable collection, depending on whether the OS is supported and on the number of storage entities being monitored.
The operating systems supported are Linux, Solaris, AIX, and HPUX.
The maximum number of monitored storage entities is set to 100 in the configuration file located at EMAgent/sysman/emd/emagent_storage.config. The disks, file systems, and volumes to be monitored can be added in this configuration file. If more than 100 storage entities are monitored, the response time of the Management Agent increases.
Not available
Edit the configuration file (emagent/sysman/emd/emagent_storage.config), and add the Disks, Filesystems, and Volumes to be monitored.
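A minimal sketch of checking a configuration file against the 100-entity limit described above. The one-entry-per-line format and the helper name `count_entities` are assumptions for illustration; the real emagent_storage.config syntax may differ.

```python
# Count monitored storage entities and flag when the assumed 100-entity
# limit is exceeded. Blank lines and '#' comments are skipped.
def count_entities(lines, limit=100):
    """Return (entity count, True if the count exceeds the limit)."""
    entities = [ln.strip() for ln in lines
                if ln.strip() and not ln.strip().startswith("#")]
    return len(entities), len(entities) > limit
```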
The Swap Area Status metric category provides the status of the swap memory on the system.
This metric represents the number of 1K blocks in the swap area that are not allocated.
Table 2-120 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 24 hours |
Not Defined |
Not Defined |
Swap Free Size %value% has gone below the warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
For this metric you can set different warning and critical threshold values for each "Swap File" object. If warning or critical threshold values are currently set for any "Swap File" object, those thresholds can be viewed on the Metric Detail page for this metric. To specify or change warning or critical threshold values for each "Swap File" object, use the Edit Thresholds page. See Editing Thresholds for information on accessing the Edit Thresholds page.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -l |
HP | swapinfo |
Linux | /sbin/swapon -s |
HP Tru64 | swapon |
IBM AIX | lsps |
Check the swap usage using the UNIX top command or the Solaris swap -l command. Additional swap can be added to an existing file system by creating a swap file and then adding the file to the system swap pool (see the documentation for your UNIX OS). If swap is mounted on /tmp, space can be freed by removing any unneeded files in /tmp. If it is not possible to add file system swap or free up enough space, additional swap must be added by adding a raw disk partition to the swap pool. See your UNIX documentation for procedures.
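On Linux, free swap blocks can be derived from the `/sbin/swapon -s` output listed in the data-source table above. A minimal parsing sketch: the column layout (Filename, Type, Size, Used, Priority, with sizes in 1K blocks) is assumed from typical Linux output, and `parse_swapon` is an illustrative name.

```python
def parse_swapon(output):
    """Parse `/sbin/swapon -s` output into (device, free 1K blocks) pairs.

    Size and Used columns are reported in 1K blocks, so free space is
    simply their difference.
    """
    rows = []
    for line in output.strip().splitlines()[1:]:   # skip the header row
        parts = line.split()
        name, size_kb, used_kb = parts[0], int(parts[2]), int(parts[3])
        rows.append((name, size_kb - used_kb))
    return rows
```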
This metric represents the size of the swap file.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -l |
HP | swapinfo |
Linux | /sbin/swapon -s |
HP Tru64 | swapon |
IBM AIX | lsps |
None.
The metrics in this category provide information about swap usage.
This metric provides the amount of free swap space.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -s |
Linux | /usr/bin/free -m |
This metric provides the free swap space as a percentage of the total swap space.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -s |
Linux | /usr/bin/free -m |
This metric provides the total amount of swap space.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -s |
Linux | /usr/bin/free -m |
This metric provides the amount of used swap space.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -s |
Linux | /usr/bin/free -m |
This metric provides the amount of used swap space as a percentage of the total swap space.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/sbin/swap -s |
Linux | /usr/bin/free -m |
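The swap utilization metrics above are simple arithmetic over the totals reported by `/usr/sbin/swap -s` (Solaris) or `/usr/bin/free -m` (Linux). A sketch, with an illustrative function name:

```python
def swap_utilization(total_mb, used_mb):
    """Derive free/used swap figures from the reported totals (MB)."""
    free_mb = total_mb - used_mb
    used_pct = 100.0 * used_mb / total_mb if total_mb else 0.0
    return {"free_mb": free_mb,
            "free_pct": 100.0 - used_pct,
            "used_pct": used_pct}
```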
The Switch/Swap Activity metric category reports on system switching and swapping activity.
This metric displays the number of process context switches per second.
Table 2-121 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 minutes |
Not Defined |
Not Defined |
Process Context Switches (per second) %value% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
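The sampling scheme described above (two readings of a cumulative counter taken five seconds apart, with the difference divided by the interval) can be sketched as follows; the function name is illustrative:

```python
def rate_per_second(sample_start, sample_end, interval_s=5):
    """Convert two samples of a cumulative counter (such as the context
    switch count reported by sar) into a per-second rate."""
    return (sample_end - sample_start) / interval_s
```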
This metric represents the number of 512-byte units transferred for swapins per second.
Table 2-122 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 minutes |
Not Defined |
Not Defined |
Swapins Transfers (per second) %value% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
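Because this metric is reported in 512-byte units per second, converting it to KB/s amounts to halving the value (two 512-byte units per KB). The helper name is illustrative:

```python
def swap_transfer_kb_per_sec(units_per_sec):
    """Convert a 512-byte-unit transfer rate into KB per second."""
    return units_per_sec * 512 / 1024
```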
This metric represents the number of 512-byte units transferred for swapouts per second.
Table 2-123 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 minutes |
Not Defined |
Not Defined |
Swapout Transfers (per second) %value% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of process swapins per second.
Table 2-124 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 minutes |
Not Defined |
Not Defined |
System Swapins (per second) %value% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of process swapouts per second.
Table 2-125 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 5 minutes |
Not Defined |
Not Defined |
System Swapouts (per second) %value% , has crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | sar command |
IBM AIX | sar command |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
The System BIOS (Basic Input/Output System) metric category monitors the BIOS status for Dell Poweredge Linux systems.
This metric represents the manufacturer's name of the System BIOS (Basic Input/Output System). This metric is available only on Dell Poweredge Linux Systems.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: systemBIOSManufacturerName (1.3.6.1.4.1.674.10892.1.300.50.1.11)
None.
This metric represents the image size of the System BIOS (Basic Input/Output System) in kilobytes. A value of zero indicates that the size is unknown. This metric is available only on Dell Poweredge Linux Systems.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: systemBIOSSize (1.3.6.1.4.1.674.10892.1.300.50.1.6)
None.
This metric represents the status of the System BIOS (Basic Input/Output System) in this chassis.
This metric is available only on Dell Poweredge Linux Systems.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-126 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of BIOS(Object identifier:1.3.6.1.4.1.674.10892.1.300.50.1.5) is %BiosIndex% in chassis %ChassisIndex% is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %BiosStatus% |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index" and "System BIOS Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index" and "System BIOS Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index" and "System BIOS Index" objects, use the Edit Thresholds page.
SNMP MIB object: systemBIOSStatus (1.3.6.1.4.1.674.10892.1.300.50.1.5)
None.
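The numeric systemBIOSStatus value (OID 1.3.6.1.4.1.674.10892.1.300.50.1.5) can be decoded per the status table above and compared against the default thresholds (warning = 4, critical = 5). The `classify` helper is an illustrative sketch, not part of any product API:

```python
# Status values and meanings as documented in the SNMP MIB table above.
STATUS = {1: "Other", 2: "Unknown", 3: "Normal",
          4: "Warning", 5: "Critical", 6: "Non-Recoverable"}

def classify(value, warning=4, critical=5):
    """Return (status name, severity) for a raw SNMP status value."""
    if value >= critical:
        severity = "critical"
    elif value >= warning:
        severity = "warning"
    else:
        severity = "clear"
    return STATUS.get(value, "Unknown"), severity
```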
This metric represents the version name of the System BIOS (Basic Input/Output System). This metric is available only on Dell Poweredge Linux Systems.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: systemBIOSVersionName (1.3.6.1.4.1.674.10892.1.300.50.1.8)
None.
The System Calls metric category provides statistics about the system calls made over a five-second interval.
This metric represents the number of characters transferred by read system calls (block devices only) per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of characters transferred by write system calls (block devices only) per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of exec() system calls made per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of fork() system calls made per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of read() system calls made per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of system calls made per second. This includes system calls of all types.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
This metric represents the number of write() system calls made per second.
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling the system counters once over a five-second interval; the result is essentially the change in the relevant counter over this five-second period divided by five.
None.
The metrics in this category provide information about system load.
This metric provides the total number of CPU cores.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the total number of processes.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the total number of users.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the average number of runnable processes in memory over the last interval. This metric checks the run queue.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the average number of runnable processes in memory per CPU core over the last interval. This metric checks the run queue.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the average number of runnable processes in memory over the last interval. This metric checks the run queue.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
This metric provides the average number of runnable processes in memory per CPU core over the last interval. This metric checks the run queue.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
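The per-core run queue metrics above normalize a load average by the core count. A sketch, assuming `os.cpu_count()` as a stand-in for the collected core total (the function name is illustrative):

```python
import os

def run_queue_per_core(load_avg, cores=None):
    """Divide a load average by the number of CPU cores."""
    cores = cores or os.cpu_count() or 1
    return load_avg / cores
```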
This metric category provides information about the global system status.
This metric is available for Dell Poweredge Linux Systems only.
This metric displays the global system status of all chassis being monitored by the system's management software.
Table 2-127 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Status of System state(Object identifier:1.3.6.1.4.1.674.10892.1.200.10.1.2) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold.Status message is %GlobalSystemStatus% |
The metrics in this category provide information about system times.
This metric provides the system's last boot time.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
Linux | /bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
This metric provides the current system running time (in minutes) since the last time the system was started.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
Linux | /bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
This metric provides the current date and time of the system.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
The data sources for the metrics in this category include the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
Linux | /bin/date +'%D %r %Z', /usr/bin/who -b, /usr/bin/uptime |
The Temperature metric category monitors the readings reported by the system temperature probes.
This metric is available for Dell Poweredge Linux Systems only.
This metric represents the current reading of the temperature probe. The value represents temperature in tenths of degrees Centigrade.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: temperatureProbeReading (1.3.6.1.4.1.674.10892.1.700.20.1.6)
An abnormally high value indicates that the system is working hard and overheating, possibly because of inadequate cooling by the fan.
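Because temperatureProbeReading reports tenths of a degree Centigrade, converting a raw value to degrees is a division by ten (the helper name is illustrative):

```python
def probe_celsius(reading_tenths):
    """Convert a temperatureProbeReading value (tenths of a degree
    Centigrade) to degrees Centigrade: a raw 305 means 30.5 C."""
    return reading_tenths / 10.0
```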
This metric provides a description of the location name of the temperature probe. Examples of values are: "CPU Temp" and "System Temp".
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
SNMP MIB object: temperatureProbeLocationName (1.3.6.1.4.1.674.10892.1.700.20.1.8)
None.
This metric represents the status of the temperature probe.
This metric is available for Dell Poweredge Linux Systems only.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-128 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
4 |
5 |
Temperature(Object Identifier:1.3.6.1.4.1.674.10892.1.700.20.1.5) at probe %ProbeIndex% in chassis %ChassisIndex% is %TemperatureReading% (C). Status is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %TemperatureStatus% |
For this metric you can set different warning and critical threshold values for each unique combination of "Chassis Index" and "Temperature Probe Index" objects.
If warning or critical threshold values are currently set for any unique combination of "Chassis Index" and "Temperature Probe Index" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Chassis Index" and "Temperature Probe Index" objects, use the Edit Thresholds page.
SNMP MIB object: temperatureProbeStatus (1.3.6.1.4.1.674.10892.1.700.20.1.5)
This describes the status of the temperature probe. A failed probe needs to be physically examined or replaced.
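The status codes in the table above can be decoded in a monitoring script. The sketch below assumes the numeric value has already been fetched (for example, by an SNMP query of temperatureProbeStatus) and simply maps it to its MIB meaning:

```shell
# Hypothetical status value; in practice this would come from an SNMP query
# of temperatureProbeStatus (1.3.6.1.4.1.674.10892.1.700.20.1.5).
status=4

# Map the numeric code to its meaning per the SNMP MIB.
case "$status" in
  1) echo "Other" ;;
  2) echo "Unknown" ;;
  3) echo "Normal" ;;
  4) echo "Warning" ;;
  5) echo "Critical" ;;
  6) echo "Non-Recoverable" ;;
  *) echo "Unexpected value: $status" ;;
esac   # prints "Warning"
```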
This metric category provides information about the disk space usage.
This metric displays the total amount of free disk space (in MB) across all local file systems.
Table 2-129 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
TotalDiskUsage %keyValue% has %value%%% available space, fallen below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
This metric displays the total percentage of free disk space across all local file systems.
Table 2-130 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
TotalDiskUsage %keyValue% has %value%%% available space, fallen below warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
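The agent's exact collection method for these two metrics is not documented here, but a rough shell equivalent over local file systems looks like the following (the -l flag restricts df to local file systems; column positions assume POSIX `df -P` output):

```shell
# Total free space (MB) and free-space percentage across local file systems.
# This is an approximation only; the agent's own collection may differ.
df -P -l -k | awk '
  NR > 1 { total += $2; free += $4 }
  END {
    printf "Free: %.0f MB\n", free / 1024
    if (total > 0) printf "Free: %.1f%%\n", 100 * free / total
  }'
```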
This metric reports TTY device activity.
This metric represents the number of received incoming character interrupts per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
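Because sar samples cumulative counters, every per-second rate in this TTY metric family is the difference between two samples divided by the sampling interval. The counter values below are hypothetical; the arithmetic is what matters:

```shell
# Two hypothetical samples of a cumulative interrupt counter, taken 5 seconds apart.
c1=10400
c2=10650
interval=5

# Per-second rate over the sampling interval.
awk -v a="$c1" -v b="$c2" -v t="$interval" \
  'BEGIN { printf "%.1f interrupts/s\n", (b - a) / t }'   # prints "50.0 interrupts/s"
```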
This metric represents the input characters processed by canon() per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
This metric represents the modem interrupt rate.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
This metric represents the number of transmit outgoing character interrupts per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
This metric represents the number of output characters per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
This metric represents the raw input characters per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Seconds |
For the following hosts:
Host | Data Source |
---|---|
Solaris | sar command |
HP | sar command |
Linux | not available |
HP Tru64 | table() system call |
IBM AIX | sar command |
Windows | not available |
The OS sar command is used to sample cumulative activity counters maintained by the OS. The data is obtained by sampling system counters once in a five-second interval.
None.
The Users metric category provides information about the users currently on the system being monitored.
This metric represents the number of times a user with a certain user name is logged on to the host target.
Table 2-131 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Number of Logons is %value% , crossed warning (%warning_threshold% ) or critical (%critical_threshold% ) threshold. |
For Solaris, HP, Linux, HP Tru64, and IBM AIX, the number of times a user is logged on is obtained from the OS w command.
For Windows, the source of information is Windows API.
None.
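On the UNIX platforms the agent derives this count from w output; counting sessions per user name from who gives a similar view. The awk grouping below is the core idea (the agent's own parsing may differ):

```shell
# Count login sessions per user name, approximating the Number of Logons metric.
# The user name is the first column of both w and who output.
who | awk '{ count[$1]++ } END { for (u in count) print u, count[u] }'
```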
The user-defined metric (UDM) allows you to execute your own scripts. The data returned by these scripts can be compared against thresholds and can generate severity alerts similar to alerts in predefined metrics. UDM is similar to the Oracle 9i Management Agent's UDE functionality.
This metric category provides information about the severity status of the virtual disk.
This metric category is available for Dell PowerEdge Linux systems only.
This metric displays the severity of the virtual disk state. This is the combined status of the virtual disk and its components.
This metric is available for Dell PowerEdge Linux systems only.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-132 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Logical devices virtual disk rollup status code (Object identifier: 1.3.6.1.4.1.674.10893.1.20.140.1.1.19) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %VirtualDiskRollUpStatus% |
This metric category displays information about Virtualization configuration metrics. The value of the collected metrics depends on the platform.
The following properties will be collected for Solaris platforms. The collections vary depending on the type of virtualization technology implemented.
Property Name | Property Value | Description |
---|---|---|
Virtual | Yes, No, Unknown | Indicates whether this host is running on physical or virtual hardware |
Virtual Processors | Site specific | Indicates the number of allocated virtual processors |
Zone Names | List of all zone names (for a global zone) or the non-global zone name | Provides a list of all the zone names for the global zone, or else provides the non-global zone name. |
Logical Domains Manager | Site specific | Indicates the version number of the Logical Domain Manager |
Domain Names | Site specific | Provides a comma-separated Domain Names list |
dedicated-cpu | True, False | Uses the dedicated-cpu resource (specifies whether a subset of the system's processors should be dedicated to a non-global zone while it is running) |
capped-cpu | True, False | Uses the capped-cpu resource (provides an absolute limit on the amount of CPU resources that can be consumed by a project or a zone) |
ncpus | Site specific | Indicates the number or range of CPUs |
Pool Name | Site specific | Pool name |
pset.min | Site specific | Indicates the minimum number of CPUs for this processor set |
pset.max | Site specific | Indicates the maximum number of CPUs for this processor set |
cpu.sys_id | Site specific | Indicates the CPU sys_id |
Pool name list | Site specific | Provides a list of pool names |
Pset name list | Site specific | Provides a list of processor sets |
The following properties will be collected for IBM AIX:
Version | Property Name | Property Value |
---|---|---|
5.3 - 6.0 | Virtual | Yes, No, Unknown |
5.3 - 6.0 | Node Name | Site specific |
5.3 - 6.0 | Partition Name | Site specific |
5.3 - 6.0 | Partition Number | Site specific |
5.3 - 6.0 | Type | Site specific |
5.3 - 6.0 | Mode | Site specific |
5.3 - 6.0 | Entitled Capacity | Site specific |
5.3 - 6.0 | Partition Group-ID | Site specific |
5.3 - 6.0 | Shared Pool ID | Site specific |
5.3 - 6.0 | Online Virtual CPUs | Site specific |
5.3 - 6.0 | Active Physical CPUs in system | Site specific |
5.3 - 6.0 | Active CPUs in Pool | Site specific |
5.3 - 6.1 and later | WPAR Key | Site specific |
5.3 - 6.1 and later | WPAR Configured ID | Site specific |
5.3 - 6.1 and later | WPAR Maximum Logical CPUs | Site specific |
5.3 - 6.1 and later | WPAR Maximum Virtual CPUs | Site specific |
5.3 - 6.1 and later | WPAR Percentage CPU Limit | Site specific |
The following properties will be collected for Microsoft Windows:
Property Name | Property Value | Description |
---|---|---|
Virtual | Yes, No, Unknown | Indicates whether this host is running on physical or virtual hardware |
Model | Site specific | Indicates the model details such as HVM domU |
Virtual Machine | Site specific | Indicates the type of virtual machine such as Xen |
This metric category provides information about the voltage probe.
This metric category is available for Dell PowerEdge Linux systems only.
This metric displays the status of the voltage probe.
This metric is available for Dell PowerEdge Linux systems only.
The following table lists the possible values for this metric and their meaning.
Metric Value | Meaning (per SNMP MIB) |
---|---|
1 | Other (not one of the following) |
2 | Unknown |
3 | Normal |
4 | Warning |
5 | Critical |
6 | Non-Recoverable |
Table 2-133 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Status code of voltage probe (Object Identifier: 1.3.6.1.4.1.674.10892.1.600.20.1.5) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. Status message is %VoltageProbeStatus% |
This metric displays the value of the voltage probe reading. The value is an integer representing the voltage in millivolts that the probe is reading.
This metric is available for Dell PowerEdge Linux systems only.
Table 2-134 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
Status code of voltage probe reading (Object identifier: 1.3.6.1.4.1.674.10892.1.600.20.1.6) is %value%, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The purpose of this metric is to collect those entries from all available Windows NT event log files whose type is either Error or Warning. A critical or a warning alert is raised only for System and Security Event log file entries.
Note: Because log files continue to grow, this metric outputs only the log events written after the last collection time; that is, only records whose timeGenerated (the time when the event was generated) falls after the last collection time, through the last record of the log file. When this metric is collected for the first time, only the events generated on the current date are output.
This metric is available only on Windows.
This metric displays a list of all categories for the events matching Log Name, Source, and Event ID defined for the monitored object. The actual category of the event can be found in the Windows event log message.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 15 Minutes |
Windows Management Instrumentation (WMI)
None.
This metric displays the time at which the metric last scanned through the event logs. This metric is available only on Windows.
Target Version | Collection Frequency |
---|---|
3.0 and higher | Every 15 Minutes |
Windows Management Instrumentation (WMI)
None.
This metric is a digest of all the events that match the Log Name, Source, and Event ID specified for the monitored object. After this filtering, the events are grouped on Log Name, Source, Event ID, Category, and User to obtain the counts of error events and warning events. The column holds the event details in the following format:
[LogName:Source:Event ID:Category:User: :]
Example: [Application:Symantec AntiVirus:2: : :error=2:] [Application:Symantec AntiVirus:3: : :warning=1:]
To see the exact message for the events that satisfied the criteria set on the monitored object, use the Event Viewer provided by Microsoft Windows.
This metric is available only on Windows.
Table 2-135 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
3.0 and later |
Every 15 Minutes |
Not Defined |
Not Defined |
Message is of the following format: Logfile:Sourcename:EventCode:CategoryString:User:ErrorCount:WarningCount [%message%] |
Windows Management Instrumentation (WMI)
None.
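A digest entry of the form shown above can be split on the colon delimiter. The sketch below parses the sample entry from this section (the event counts are hypothetical, and field positions follow the Logfile:Sourcename:EventCode:... layout given in the alert text):

```shell
# Sample digest entry as shown in this section (hypothetical event counts).
digest='[Application:Symantec AntiVirus:2: : :error=2:]'

# Strip the brackets and pull out selected fields by position:
# $1 = log name, $2 = source, $3 = event ID, $6 = error/warning counts.
printf '%s\n' "$digest" | tr -d '[]' | awk -F: \
  '{ printf "log=%s source=%s id=%s counts=%s\n", $1, $2, $3, $6 }'
```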
This metric is the Perl pattern used to match the string defined for the Event ID in the monitored objects. The actual Event ID of the event can be found in the Windows event log message. This metric is available for Windows only.
Target Version | Collection Frequency |
---|---|
3.0 and later | Every 15 Minutes |
Windows Management Instrumentation (WMI)
None.
This metric displays the seriousness of the event. Possible values are: Warning and Error.
This metric is available for Windows only.
Table 2-136 Metric Summary Table
Target Version | Key | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|---|
All Versions |
logfile: "system" |
Every 15 Minutes |
warning |
error |
X1User[%user%]:Category[%categorystring%]:Description[%message%] |
For this metric you can set different warning and critical threshold values for each unique combination of "Log Name", "Source", and "Event ID" objects.
If warning or critical threshold values are currently set for any unique combination of "Log Name", "Source", and "Event ID" objects, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each unique combination of "Log Name", "Source", and "Event ID" objects, use the Edit Thresholds page.
WMI Operating System Classes
None.
The metrics in this category provide information about ZFS ARC Cache usage.
This metric provides the number of demand data misses per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the percentage of demand data misses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
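kstat exists only on Solaris, so the sketch below parses a hypothetical excerpt of `kstat -p zfs:0:arcstats` output rather than running the command; the demand_data_hits and demand_data_misses statistic names are the arcstats counters this percentage is derived from:

```shell
# Hypothetical excerpt of `kstat -p zfs:0:arcstats` output (tab-separated).
sample="$(printf 'zfs:0:arcstats:demand_data_hits\t9000\nzfs:0:arcstats:demand_data_misses\t1000\n')"

# Demand data miss percentage: misses / (hits + misses).
printf '%s\n' "$sample" | awk -F'\t' '
  $1 ~ /demand_data_hits$/   { hits = $2 }
  $1 ~ /demand_data_misses$/ { misses = $2 }
  END { printf "%.1f%% demand data misses\n", 100 * misses / (hits + misses) }'
```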
This metric provides the current size of the ZFS metadata.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the percentage size of the ZFS metadata.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the number of read misses per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the percentage of read misses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the number of metadata misses per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the percentage of metadata misses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the number of prefetch data misses per second.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
This metric provides the percentage of prefetch data misses.
Target Version | Collection Frequency |
---|---|
All Versions | Every 60 Minutes |
The data source for the metrics in this category includes the following:
Host | Data Source |
---|---|
Solaris | /usr/bin/kstat -p zfs:0:arcstats |
The Zombie Processes metric category monitors zombie (defunct) processes on the different variations of UNIX systems.
This metric represents the percentage of all processes running on the system that are currently in zombie state.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
HP Tru64 | ps command |
IBM AIX | ps command |
Windows | ps command |
None.
This metric represents the percentage of all processes running on the system that are currently in zombie state.
Table 2-137 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Default Warning Threshold | Default Critical Threshold | Alert Text |
---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Defined |
Not Defined |
%value%%% of all processes are in zombie state, crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold. |
The data sources for this metric include the following:
Host | Data Source |
---|---|
Solaris | ps command |
HP | ps command |
Linux | ps command |
None.
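The ps-based collection can be approximated on Linux with the stat column: processes in zombie state report a state code beginning with Z. Column names and flags vary across the platforms listed above; this sketch uses the Linux procps form:

```shell
# Percentage of processes currently in zombie state, derived from ps.
# The stat= format suppresses the header; state codes beginning with Z
# denote zombie (defunct) processes on Linux.
ps -eo stat= | awk '
  { total++; if ($1 ~ /^Z/) zombies++ }
  END { printf "%.1f%%\n", total ? 100 * zombies / total : 0 }'
```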