This chapter explains how to start, stop, and monitor Oracle Communications Offline Mediation Controller components.
To start, stop, and monitor Offline Mediation Controller components with the ProcessControl script, define in the offline_mediation.conf file the components that run, the installation they run from, and the ports and IP addresses they use. The ProcessControl script uses this information to start, stop, and monitor multiple servers from multiple Offline Mediation Controller installations.
To configure the offline_mediation.conf file:
Open the OMC_home/offline_mediation/offline_mediation.conf file in a text editor, where OMC_home is the directory in which Offline Mediation Controller is installed.
Specify the Offline Mediation Controller components to start or stop using the following syntax:
daemon_name:OMC_home:port:[IP_Address]:{Y|N}
where:
daemon_name is the name for the Offline Mediation Controller component:
admnsvr (Administration Server)
nodemgr (Node Manager)
port is the port on which the server component runs. The port number range is between 49152 and 65535.
IP_Address is the IP address of the host computer. Use this field when you start, stop, and monitor multiple servers from multiple Offline Mediation Controller installations.
Y or N indicates whether the component is started when the system starts.
For example:
admnsvr:/OMC_home:55105::Y
nodemgr:/OMC_home:55109::Y
Save and close the file.
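As a quick sanity check, the entry format above can be validated with a short script. This is a hedged sketch, not part of the product; it checks only the field count and the documented port range.

```shell
# check_conf: sanity-check offline_mediation.conf entries.
# Each entry must have five colon-separated fields
# (daemon_name:OMC_home:port:IP_Address:Y|N) and a port in the
# documented range 49152-65535. The IP_Address field may be empty.
check_conf() {
  awk -F: '
    NF != 5                               { print "line " NR ": expected 5 fields"; bad = 1 }
    NF == 5 && ($3 < 49152 || $3 > 65535) { print "line " NR ": port " $3 " out of range"; bad = 1 }
    END { exit bad }
  ' "$1"
}
```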
You can start and stop Offline Mediation Controller by using the following methods:
The ProcessControl script. See "Starting and Stopping Offline Mediation Controller by Using the ProcessControl Script."
The individual component commands, described in the sections that follow.
You can start or stop Offline Mediation Controller by using the ProcessControl script. This script preserves the node status when you restart Node Manager.
Important:
Before running the ProcessControl script, ensure that you have run the configure script. For more information, see the discussion about adding Offline Mediation Controller service to system startup in Offline Mediation Controller Installation Guide.

To start and stop Offline Mediation Controller by using the ProcessControl script:
Go to the OMC_home/bin directory.
Run the following command, which starts the Offline Mediation Controller components that are defined in the offline_mediation.conf file on the appropriate ports:
./ProcessControl start
Run the following command, which stops the Offline Mediation Controller components that are defined in the offline_mediation.conf file:
./ProcessControl stop
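For convenience, the two commands above can be wrapped in a small restart helper. This is an illustrative sketch, not a supplied script; it assumes ProcessControl is at the path passed as the first argument (defaulting to ./ProcessControl in OMC_home/bin).

```shell
# restart_omc: stop, then start, all components defined in
# offline_mediation.conf by calling the ProcessControl script.
# The ProcessControl path can be passed as the first argument.
restart_omc() {
  pc="${1:-./ProcessControl}"
  "$pc" stop || return 1   # do not start if the stop fails
  "$pc" start
}
```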
To start and stop Node Manager:
Go to the OMC_home/bin directory.
Run the following command, which starts Node Manager:
./nodemgr [-d | -f | -F | -j] [-p port] [-i IP_Address]
where:
-d runs Node Manager in the background with debug output redirected to OMC_home/log/nodemgr_port.out.
This option consumes a large amount of CPU while it runs.
-f runs Node Manager in the foreground.
-F runs Node Manager in the foreground with debug output.
-j runs Node Manager, with the just-in-time (JIT) compiler enabled, in the background with debug output redirected to OMC_home/log/nodemgr_port.out.
-p port runs Node Manager on port.
-i IP_Address specifies the IP address of the host computer on which Node Manager is installed. Use this parameter to start Node Manager installed on multiple computers.
If you run this command with no options, Node Manager starts in the background with no debug output.
Run one of the following commands to stop Node Manager:
To shut down Node Manager:
./nodemgr -s [-p port]
To stop the Node Manager process:
./nodemgr -k [-p port]
To start and stop Administration Server:
Go to the OMC_home/bin directory.
Run the following command, which starts Administration Server:
./adminsvr [-d | -f | -F | -j] [-x] [-p port] [-i IP_Address]
where:
-d runs Administration Server in the background with debug output redirected to OMC_home/log/adminsvr_port.out.
-f runs Administration Server in the foreground.
-F runs Administration Server in the foreground with debug output.
-j runs Administration Server, with the JIT compiler enabled, in the background with debug output redirected to OMC_home/log/adminsvr_port.out.
-x disables user authentication.
-p port runs Administration Server on port.
-i IP_Address specifies the IP address to use. Use this parameter on multihomed systems.
If you run this command with no options, Administration Server starts in the background with no debug output.
Run one of the following commands to stop Administration Server:
To shut down Administration Server:
./adminsvr -s [-p port]
To stop the Administration Server process:
./adminsvr -k [-p port]
To start Administration Client:
Go to the OMC_home/bin directory.
Run the following command:
./gui [-d | -f | -F]
where:
-d runs Administration Client in the background with debug output redirected to OMC_home/log/gui_port.out.
-f runs Administration Client in the foreground.
-F runs Administration Client in the foreground with debug output.
If you run this command with no options, Administration Client starts in the background with no debug output.
You cannot change the IP address of a mediation host directly. Instead, you must remove the mediation host that uses that IP address and reassign the IP address in the offline_mediation.conf file.
To change the IP address of a mediation host:
Write down the port number on which Offline Mediation Controller is connected.
In the Admin Client, delete the mediation host running on the Offline Mediation Controller workstation. To do so:
Delete nodes from the node chain from left to right; otherwise, the dependence of one node on the previous node may prevent you from removing it.
Delete the mediation host.
Stop all Offline Mediation Controller related processes on the workstation.
For UNIX machines, modify the /etc/hosts file with the new IP addresses and reboot the workstations. Restart the Offline Mediation Controller processes.
Look in the directory OMC_Home/offline_mediation for the offline_mediation.conf file and replace any occurrences of the old IP address with the new IP address, where OMC_Home is the directory in which you installed Offline Mediation Controller. Entering an IP address is optional in this file, so if the field has no value, you can leave it as is.
When you log in to the Administration Client, enter the new IP address of the workstation on which the adminsvr will be running.
In the Admin Client, add a mediation host for each workstation.
Restart the adminsvr and nodemgr processes on the workstation.
Restart all nodes on the workstation.
Ensure that the SystemModel.cfg file in the Administration Server config directory OMC_Home/config/adminserver has the new IP addresses and the correct port number.
Ensure that the dataflowmap.cfg and nmPort files in the Node Manager config directory OMC_Home/config/nodemgr have the new IP addresses and the correct port number.
Note:
OMC_Home is the directory in which you installed Offline Mediation Controller. Should this fail, stop all nodes, stop all processes (Client, adminsvr, nodemgr), and manually change any configuration files in the config directories that still contain the old IP addresses.
Restart the adminsvr and nodemgr processes on the primary workstation (nodemgr only on the backup workstation).
Restart all the nodes on the primary workstation (for backup workstations, restart the CC node only).
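Steps such as replacing the old IP address across configuration files can be scripted. The helper below is an illustrative sketch, not part of the product; it keeps a backup copy before editing.

```shell
# replace_ip: replace every occurrence of an old IP address with a
# new one in a configuration file, keeping the original as file.bak.
# Note: the dots in the IP are treated as regex wildcards here,
# which is acceptable for a quick sketch.
replace_ip() {
  old="$1"; new="$2"; file="$3"
  cp "$file" "$file.bak" || return 1
  sed "s/$old/$new/g" "$file.bak" > "$file"
}
```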
You can use the ProcessControl script to monitor Offline Mediation Controller components to ensure they are still running and to restart the server.
To run the ProcessControl script to monitor Offline Mediation Controller components:
Stop all Offline Mediation Controller components.
Open the /etc/inittab file in a text editor.
Add the following entry:
NT:3:respawn:/etc/init.d/ProcessControl monitor
Save and close the file.
Run the following command, which periodically monitors the status of the Offline Mediation Controller components that are defined in the offline_mediation.conf file:
./ProcessControl monitor
By default, Record Editor uses the name Network Accounting Record for each NAR in your system. This means that each NAR will be displayed as Network Accounting Record in the left pane of the Record Editor window. When you expand a NAR, the NAR attribute names are listed.
To modify the attribute name:
Open the OMC_home/datadict/Data_Dictionary.xml file in a text editor, where OMC_home is the directory in which Offline Mediation Controller is installed.
Search for the attribute ID.
Change the <Attr> element to <Attr tagForName="true">.
The tagForName option overrides the default attribute name.
Set the <Name> element to the attribute name you want to display in Record Editor.
Note:
If you leave the <Name> element blank, Record Editor displays the attribute ID as the attribute name.

Save the file.
Restart Record Editor.
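For illustration, an edited entry might look like the fragment below. Only the tagForName attribute and the <Name> element come from the steps above; the attribute name is a placeholder, and real Data_Dictionary.xml entries contain additional detail.

```xml
<!-- Illustrative fragment only; real entries contain more detail.
     tagForName="true" makes Record Editor display the <Name> value. -->
<Attr tagForName="true">
  <Name>CallDuration</Name>
</Attr>
```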
This section explains the system-monitoring options for the OMC_home/config/nodemgr/nodemgr.cfg file. If you do not modify the nodemgr.cfg file, Offline Mediation Controller uses the default threshold values.
You can modify nodemgr.cfg to manage your threshold options to monitor disk, memory, and CPU usage levels. You can set a warning threshold and an error threshold for these areas. You must decide what action to take if the thresholds are crossed.
To monitor disk errors, see "Using the Disk Status Monitor".
To monitor memory errors, see "Using the Memory Monitor".
To monitor CPU usage levels, see "Using the CPU Usage Monitor".
By default, Offline Mediation Controller generates a single alarm for each error condition, even if the error condition occurs multiple times. To generate an alarm or trap for every error occurrence, open the nodemgr.cfg file and change the SUPPRESS_MULTIPLE_ALARMS parameter value to No.
You use the disk status monitor to alert you to potential disk issues, so you can take action to avoid unrecoverable errors.
Note:
The disk status monitor runs only on Solaris workstations that have the Sun Solstice DiskSuite metastat command installed.

Table 1-1 lists the parameters you can add or modify in the nodemgr.cfg file.
You use the memory monitor to alert you when memory usage exceeds a specified threshold. In addition to the threshold, you can configure the memory monitor to log memory usage statistics.
Table 1-2 lists the parameters you can add or modify in the nodemgr.cfg file.
Table 1-2 Memory Monitor Parameters
| Parameter | Description |
|---|---|
| LOG_MEMORY_USAGE | Set to Y to log memory usage statistics. The default is N. |
| MEMORY_MAJOR_THRESHOLD | The level at which a major alarm is raised, as a percentage. The default is 85. |
| MEMORY_WARNING_THRESHOLD | The level at which a warning alarm is raised, as a percentage. The default is 70. |
| MEMORY_SAMPLE_TIME | The time interval, in seconds, during which the memory usage must be above a specific threshold level before an alarm is raised. The default is 60. |
| MEMORY_SAMPLE_FREQ | The number of polls that are taken during each sample period. The default is 4. |
For example, using the default values for MEMORY_SAMPLE_TIME (60 seconds) and MEMORY_SAMPLE_FREQ (4), the memory usage polls would occur every 15 seconds (60 seconds divided by 4). In this case, an alarm would be generated if the memory usage level was above the specified threshold for 4 consecutive polls.
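The interval arithmetic above can be sketched as follows (the values are the documented defaults):

```shell
# Poll interval = sample period / polls per period.
sample_time=60   # MEMORY_SAMPLE_TIME, in seconds
sample_freq=4    # MEMORY_SAMPLE_FREQ, polls per sample period
interval=$((sample_time / sample_freq))
echo "poll every ${interval}s"   # prints: poll every 15s
```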
The CPU usage monitor generates a critical or major alarm if the CPU usage level reaches a specified value.
Table 1-3 lists the parameters you can add or modify in the nodemgr.cfg file.
Table 1-3 CPU Usage Monitor Parameters
| Parameter | Description |
|---|---|
| CPU_REDTHRESHOLD | The percentage of CPU in use that generates a critical alarm. The default is 90. |
| CPU_YELLOWTHRESHOLD | The percentage of CPU in use that generates a major alarm. The default is 80. |
| CPU_SAMPLETIME | The period, in seconds, in which to poll a fixed number of times. The default is 60. |
| CPU_SAMPLEFREQ | How often to poll during the fixed period. The default is 3. |
For example, using the default values for CPU_SAMPLETIME (60 seconds) and CPU_SAMPLEFREQ (3), a poll will take place every 20 seconds (60 seconds divided by 3).
Offline Mediation Controller uses specific ports to send data to and to receive data from external devices and applications. Use the port information in Table 1-4 when you are planning the network and configuring routers and firewalls that communicate between Offline Mediation Controller components.
| Application | Protocol | Source | Source Port | Destination | Destination Port |
|---|---|---|---|---|---|
| GTP | UDP | GSN | 1024 or higher | Offline Mediation Controller | 3386 |
| Open FTP and FTP | TCP | MSC, Application Server, or Offline Mediation Controller | 20 or 21 | Application Server or Offline Mediation Controller | 20 or 21 |
| SNMP | UDP | Offline Mediation Controller | 161 | EMS | 162 |
| RADIUS | UDP | GSN or RADIUS Server | 1814 | Offline Mediation Controller | 1813 |
| DBSR | TCP | Offline Mediation Controller | 1521 | Oracle database | 1521 |
By default, all Administration Servers can connect to the mediation host (also called a node manager). You can limit access to a mediation host by using its associated OMC_home/config/nodemgr/nodemgr_allow.cfg file. The file lists the IP addresses for all Administration Servers that are allowed to connect to the mediation host. You can edit the list at any time to allow or disallow Administration Server access to the mediation host.
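As an illustration, the file might look like the fragment below. The text states only that the file lists the allowed Administration Server IP addresses; the one-address-per-line layout and the addresses themselves are assumptions.

```
192.0.2.10
192.0.2.11
```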
You can set up a network firewall between the Offline Mediation Controller servers and the corporate intranet or external Internet. Administration Client can connect with and operate the Offline Mediation Controller servers through this firewall.
To set up a firewall, perform the following tasks:
These port numbers are defined during the installation process but can be modified to accommodate your particular firewall configuration.
To change the default Administration Server's port number for the firewall:
Stop all Offline Mediation Controller components.
Open the OMC_home/config/adminserver/firewallportnumber.cfg file in a text editor.
Change the value of the following entry:
AdminServer=port
where port is the port on which Administration Server runs. The suggested port number range is between 49152 and 65535. The default port number in the configuration file is 55110.
Save and close the file.
To change the default firewall port number range values:
Stop all Offline Mediation Controller components.
Open the OMC_home/config/GUI/firewallportnumber.cfg file in a text editor.
Change the values of the following entries:
RangeFrom=port RangeTo=port
where port is the port on which Administration Client runs. The suggested port number range is between 49152 and 65535. The default port number range in the configuration file is 55150 to 55199.
Save and close the file.
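The two edits above can also be made non-interactively. The helper below is an illustrative sketch; the RangeFrom and RangeTo entry names come from the procedure, but the helper itself is not part of the product.

```shell
# set_range: rewrite the RangeFrom and RangeTo entries in a
# firewallportnumber.cfg-style file.
set_range() {
  from="$1"; to="$2"; file="$3"
  sed -e "s/^RangeFrom=.*/RangeFrom=$from/" \
      -e "s/^RangeTo=.*/RangeTo=$to/" "$file" > "$file.tmp" &&
  mv "$file.tmp" "$file"
}
```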
To configure Node Manager memory limits:
Note:
Changing these settings can affect system performance. By default, they are optimized for most Offline Mediation Controller applications.

Go to the OMC_home/customization directory and verify that the nodemgr.var file exists. On a newly installed system, the nodemgr.var file may not yet exist.
If the file does not exist, run the following command, which creates the file:
cp OMC_home/config/nodemgr/nodemgr.var.reference OMC_home/customization/nodemgr.var
Open the nodemgr.var file in a text editor.
Specify the upper memory size by modifying the NM_MAX_MEMORY parameter. The default is 3500 megabytes.
The valid range for a Solaris installation is from 500 to 3500.
The valid range for an Oracle/Red Hat Enterprise Linux installation is from 500 to 3500.
Specify the lower memory size by modifying the NM_MIN_MEMORY parameter. The default is 1024 megabytes.
The valid range for a Solaris installation is from 50 to 3500.
The valid range for an Oracle/Red Hat Enterprise Linux installation is from 50 to 3500.
Save and close the file.
Restart Node Manager.
You can configure the maximum and minimum memory sizes the Java Virtual Machine (JVM) uses when running Administration Client. By configuring the maximum memory size, you can reduce the amount of memory JVM uses when running Administration Client.
To set the maximum and minimum memory sizes:
Open the OMC_Home/bin/gui file in a text editor, where OMC_Home is the directory in which Administration Client is installed.
Add or modify the following entries:
InitializeExternalConfig(){
NM_MIN_MEMORY=value
NM_MAX_MEMORY=value
}
where value is the appropriate value for the respective entry.
For example:
InitializeExternalConfig(){
NM_MIN_MEMORY=1024
NM_MAX_MEMORY=3500
}
Save and close the file.
You can configure the maximum and minimum memory sizes the JVM uses when running Administration Server. By configuring the maximum memory size, you can reduce the amount of memory JVM uses when running Administration Server.
To set the maximum and minimum memory sizes:
Open the OMC_Home/bin/adminsvr file in a text editor.
Add or modify the following entries:
InitializeExternalConfig(){
NM_MIN_MEMORY=value
NM_MAX_MEMORY=value
}
where value is the appropriate value for the respective entry.
For example:
InitializeExternalConfig(){
NM_MIN_MEMORY=1024
NM_MAX_MEMORY=3500
}
Save and close the file.
Offline Mediation Controller records system activity in log files. One log file is generated for each Offline Mediation Controller component and for each node on the mediation host. Review the log files daily to monitor your system and detect and diagnose system problems.
Offline Mediation Controller generates log files for Offline Mediation Controller components and for the nodes you create.
For Offline Mediation Controller components such as Administration Server, Node Manager, and Administration Client, log files are named component.log; for example, nodemgr.log, adminserver.log, and GUI.log. The closed log files are saved using the Offline Mediation Controller component or cartridge node name, and an incrementing number; for example, nodemgr.log.1, nodemgr.log.2.
For each node on the mediation host, the log file is named nodeID.log, where nodeID is the unique ID that is assigned by Administration Server when you create a node on a mediation host; for example, if the unique ID of the node is 2ys4tt-16it-hslskvi1, the log file name is 2ys4tt-16it-hslskvi1.log.
The following are the minimum Offline Mediation Controller log files:
nodemgr.log
adminserver.log
GUI.log
Depending on the number of nodes you create, your installation will have one or more node log files.
The log files for Offline Mediation Controller components are stored in the OMC_home/log/component directory; for example, the Node Manager log file is in the OMC_home/log/nodemgr directory.
The log files for the nodes are stored in the OMC_home/log/nodeID directory; for example, if the node ID of the node is 2ys4tt-16it-hslskvi1, the node log file is in the OMC_home/log/2ys4tt-16it-hslskvi1 directory.
Note:
Oracle recommends that you not change the default location of the log files. If you change the location of the log files, you cannot access the log information from Administration Client.

Table 1-5 lists the components and their corresponding log file locations:
Each Offline Mediation Controller component or node has its own logger properties file. When an Offline Mediation Controller component or a node is started for the first time, the logger properties file is dynamically created in the OMC_home/config/component directory. By default, the logger properties file is set to the default logging level.
Table 1-6 lists the components and their corresponding logger properties file locations:
Table 1-6 Offline Mediation Controller Components and Logger Properties File Locations
| Component | Logger Properties File Locations |
|---|---|
| Node Manager | OMC_home/config/nodemgr/nodemgrLogger.properties |
| Administration Server | OMC_home/config/adminserver/adminserverLogger.properties |
| Administration Client | OMC_home/config/GUI/GUILogger.properties |
| Node | OMC_home/config/nodeID/nodeIDLogger.properties |
By default, Offline Mediation Controller components report information messages. You can set Offline Mediation Controller to report or to not report information messages. The following levels of reporting are supported:
NO = no logging
WARN = log only warning messages
INFO = (default) log information messages
DEBUG = log debug messages
ALL = log warning, information, and debug messages
Important:
To avoid performance degradation, use the default INFO level for routine logging; enable the DEBUG or ALL level only while actively debugging.

To change the severity level for logging:
Open the logger properties file for the component in a text editor. See "About the Logger Properties Files".
Search for the following entry:
log4j.logger.componentName.component=severity,componentAppender
where:
componentName is the name of the Offline Mediation Controller component.
component is the Offline Mediation Controller component or the node ID.
severity is the current severity level for the logging.
For example:
log4j.logger.NodeManager.nodemgr=WARN,nodemgrAppender
Change the entry to the desired severity level for logging.
For example, to change the log level from WARN to INFO for Node Manager:
log4j.logger.NodeManager.nodemgr=INFO,nodemgrAppender
Save and close the file.
Note:
You do not need to restart the running process to enable the changes in the logging level. A predefined delay of two minutes is set before the changed logger configuration takes effect.

The Offline Mediation Controller server monitoring feature creates log files that continuously report hardware performance, and divides that data into convenient statistical categories.
Each statistical category has an entry in the nodemgr.cfg file to indicate if performance logging is desired. The default values are pre-set in this file and you can change them where necessary. The nodemgr.cfg file is located at OMC_Home/config/nodemgr. The statistical categories are listed in the following sections.
This function monitors the percentage of total disk space currently used on the Offline Mediation Controller partition. The corresponding entry in the nodemgr.cfg file is: SERVERMONITOR_DISK_UTILIZATION
The disk utilization log file is located in OMC_Home/serverMonitoring/IP_Port/diskUtilization.
The log file values are as follows:
partition = the Offline Mediation Controller installation partition being monitored
kbytes = total disk space in the partition, in kbytes
used = disk space in use, in kbytes
available = disk space not in use
capacity = percentage of disk space in use
Here is an example of the disk utilization log file:
<poll date="2005/09/27" time="14:44:18" partition="/opt/nm500" kbytes="5886725" used="5139094" avail="688764" capacity="89%" />
<poll date="2005/09/27" time="14:49:19" partition="/opt/nm500" kbytes="5886725" used="5139129" avail="688729" capacity="89%" />
<poll date="2005/09/27" time="14:54:19" partition="/opt/nm500" kbytes="5886725" used="5139137" avail="688721" capacity="89%" />
<poll date="2005/09/27" time="14:59:20" partition="/opt/nm500" kbytes="5886725" used="5139144" avail="688714" capacity="89%" />
<poll date="2005/09/27" time="15:04:20" partition="/opt/nm500" kbytes="5886725" used="5139150" avail="688708" capacity="89%" />
This function monitors the health of the disk containing Offline Mediation Controller by using the metastat command. The metastat command must be installed on the system for this feature to work. The corresponding entry in the nodemgr.cfg file is: SERVERMONITOR_DISK_STATUS
The disk status log file is located in OMC_Home/serverMonitoring/IP_Port/diskStatus.
Here is an example of the disk status log file:
<poll date="2005/09/27" time="16:26:15" diskHealth="healthy" />
<poll date="2005/09/27" time="16:36:15" diskHealth="healthy" />
<poll date="2005/09/27" time="16:46:15" diskHealth="healthy" />
This function monitors the percentage of the processor(s) currently in use in the system. The corresponding entry in the nodemgr.cfg file is: SERVERMONITOR_CPU_UTILIZATION
The CPU utilization log file is located in OMC_Home/serverMonitoring/IP_Port/cpuUtilization.
The log file values are as follows:
cpuActive = percentage of CPU used by user processes
cpuSystem = percentage of CPU used by system processes
cpuIdle = percentage of CPU not in use
Here is an example of the CPU utilization log file:
<poll date="2005/09/27" time="14:39:46" cpuActive="34" cpuSystem="4" cpuIdle="62" />
<poll date="2005/09/27" time="14:40:06" cpuActive="62" cpuSystem="3" cpuIdle="35" />
<poll date="2005/09/27" time="14:40:26" cpuActive="38" cpuSystem="4" cpuIdle="58" />
<poll date="2005/09/27" time="14:40:46" cpuActive="16" cpuSystem="3" cpuIdle="81" />
This function monitors the percentage of the memory currently in use in the system. The corresponding entry in the nodemgr.cfg file is: SERVERMONITOR_MEMORY_UTILIZATION
The memory utilization log file is located in OMC_Home/serverMonitoring/IP_Port/memoryUtilization.
The log file values are as follows:
freeMemory = amount of memory not used in the heap, in bytes
maxMemory = memory limit available for the process to grow into (the -Xmx option), in bytes
usedMemory = maxMemory - freeMemory, in bytes
memoryUtilization = (currently allocated process limit - freeMemory) / maxMemory
Here is an example of the memory utilization log file:
<poll date="2005/09/27" time="14:36:07" memoryUtilization="2.8679903" usedMemory="5.4105304E7" freeMemory="1.23482E7" maxMemory="6.6453504E7" />
<poll date="2005/09/27" time="14:36:22" memoryUtilization="2.4237578" usedMemory="5.6333232E7" freeMemory="1.0120272E7" maxMemory="6.6453504E7" />
<poll date="2005/09/27" time="14:36:37" memoryUtilization="2.577369" usedMemory="5.6435312E7" freeMemory="1.0018192E7" maxMemory="6.6453504E7" />
<poll date="2005/09/27" time="14:36:52" memoryUtilization="2.6964898" usedMemory="5.6514472E7" freeMemory="9939032.0" maxMemory="6.6453504E7" />
Open Files - System File Monitoring
This function tracks the number of files open on the operating system. To enable system file monitoring in Offline Mediation Controller, the open source package "lsof" must be installed in a location accessible from the $PATH variable. The corresponding entry in the nodemgr.cfg file is: SERVERMONITOR_OPEN_FILES
The log file is located in OMC_Home/serverMonitoring/IP_Port/systemFiles.
The log file value "openFiles" is the number of open files in the entire system.
Here is an example of the open files log file:
<poll date="2005/09/27" time="16:19:55" openFiles="2203" />
<poll date="2005/09/27" time="16:20:58" openFiles="2222" />
<poll date="2005/09/27" time="16:22:02" openFiles="2298" />
<poll date="2005/09/27" time="16:23:05" openFiles="2298" />
<poll date="2005/09/27" time="16:24:09" openFiles="2201" />
<poll date="2005/09/27" time="16:25:12" openFiles="2247" />
<poll date="2005/09/27" time="16:26:15" openFiles="2201" />
A server monitor log is an xml file, which tracks performance values gathered at each poll instance. Each log contains a date and timestamp, followed by the statistical values gathered during that period. Each statistical category has its own performance log file. For example:
<poll date="04/27/2005" time="13:49:07" cpuActive= "4" cpuIdle= "96" />
<poll date="04/27/2005" time="13:50:07" cpuActive= "6" cpuIdle= "94" />
<poll date="04/27/2005" time="13:51:07" cpuActive= "5" cpuIdle= "95" />
<poll date="04/27/2005" time="13:52:07" cpuActive= "5" cpuIdle= "95" />
A server monitor log file contains performance data spanning a day or month, depending on which value you select in the nodemgr.cfg file. The default value is daily. For example:
SERVERMONITOR_LOG_GRANULARITY 'monthly/daily'
You can specify the number of performance logs the node manager will retain. The default value of 180 allows for half of a year of data retention. For example:
SERVERMONITOR_LOG_RETENTION '###'
As a new day or month begins, the node manager automatically opens a file for the new period. At that time, the node manager also performs post-processing on the xml file from the previous day or month. The post-processing involves adding opening and closing tags to the xml file to ensure the data is well-formed.
For each performance log xml file, the node manager creates a corresponding csv file. This comma-delimited file mirrors the information in the performance log xml file and is suitable for importing into Microsoft Excel. For example:
Date,time,cpuActive,cpuIdle
04/27/2005,13:49:07,4,96
04/27/2005,13:50:07,6,94
04/27/2005,13:51:07,5,95
The performance log files (xml and csv), are named according to the statistical category and time period to which they pertain. The datestamp is in the format: YearMonthDay. For example:
cpu_utilization_20050431.xml (daily)
cpu_utilization_20050431.csv
cpu_utilization_200504.xml (monthly)
cpu_utilization_200504.csv
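Because the csv files are plain comma-separated data, they are easy to post-process. A hedged sketch, assuming the column layout shown in the csv example above (header row, then cpuActive in the third column):

```shell
# avg_cpu: print the average of the cpuActive column (column 3)
# of a performance csv log, skipping the header row.
avg_cpu() {
  awk -F, 'NR > 1 { sum += $3; n++ } END { if (n) printf "%.1f\n", sum / n }' "$1"
}
```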
You can use the Offline Mediation Controller Shell (NMShell) tool to access Offline Mediation Controller system information, discover node status, and perform start and stop operations, basic alarm monitoring, traffic monitoring, and node configuration changes. The NMShell tool runs on Unix workstations and is useful for low-speed connections or for accessing Offline Mediation Controller from behind a firewall that does not allow GUI access.
NMShell navigation is similar to navigation in a file system. The components of the system (the admin server, node managers, and nodes) make up a tree of contexts you can access with the cd command. To list the information available in a specific context, use the ls command. Certain contexts have other, context-specific commands available. For example, if you execute the cd command to access a node manager context, you can then use the start and stop commands for the nodes. If you are starting or stopping more than one node, list the node IDs with a space between each one.
Before accessing the system information, you must use the login command, which logs you on to the admin server. You can then navigate the Offline Mediation Controller system.
The NMShell tool is located in the OMC_Home/bin/tools directory.
The commands for NMShell are shown in the following table:
| Component | Command | Description |
|---|---|---|
| admin server | login [IP address] [port] | Log on to the specified admin server. |
| cd [IP address] [port] | Change to the specified node manager. If no node manager is specified, change to the parent admin server. | |
| ls | Display list of node managers configured for this admin server. | |
| export [file name] | Export the system configuration to an xml file. | |
| help | Display list of available commands. | |
| node manager | cd [node ID] | Change to the specified node. If no node is specified, change to the parent node manager. |
| ls | Display list of nodes controlled by this node manager. | |
| ls [node ID] | Display list of configuration parameters for a specific node. | |
| start [node ID] | Start one or more nodes on the node manager. Separate the node IDs with a space. | |
| startall or start all | Start all nodes on this node manager. | |
| stop [node ID] | Stop one or more nodes on the node manager. Separate the node IDs with a space. | |
| stopall or stop all | Stop all nodes on this node manager. | |
| topalarm | Display the top level node manager alarm. | |
| perf or performance | Display the current node's performance window. | |
| help | Display list of available commands. | |
| node | cd | Return to the parent node manager. |
| ls | List the configuration for the node. | |
| all | exit/quit | Exit the NMShell tool. |
| help | List the available commands for the current context. | |
| pwd | Show the current context. |
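To tie the commands together, a session might proceed as sketched below. The commands are from the table above; the addresses, ports, and node IDs are placeholders, and prompts and output are omitted.

```
login 192.0.2.5 55105    # log on to the admin server
ls                       # list node managers
cd 192.0.2.6 55109       # enter a node manager context
ls                       # list the nodes it controls
start nodeA nodeB        # start two nodes (IDs separated by spaces)
cd                       # return to the admin server
exit                     # leave NMShell
```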
You can monitor node performance in the Administration client in the following ways:
For a current view of node performance, use the Node Performance View. This view displays a list of nodes running on the selected host. You can monitor node up time, current NARs, current rate, average rate, and total NARs.
To gather node performance statistics, configure statistics reporting for the following counts: input records, output records, duplicate records, aggregated records, and discarded records. You can enable statistics reporting for individual nodes, for all nodes, or for none.
An SNMP trap host is an IP host designated to receive SNMP trap messages from the Offline Mediation Controller system. Offline Mediation Controller issues SNMP trap messages to notify one or more external SNMP-based network management systems of mediation host and node alarm events. You can view trap message text on an external SNMP management system, such as the Hewlett Packard OpenView system.
Note:
The same alarms are visible in the Offline Mediation Controller client and in the host and node log files.
Offline Mediation Controller supports only the trap messages defined in the Offline Mediation Controller SNMP trap management information base (MIB), which defines the severity levels and meanings of the SNMP trap messages issued by any Offline Mediation Controller host.
To ensure that traps are received from both local and remote node managers, enter the IP address or host name of the local machine, not the string "localhost", when adding the target host.
Offline Mediation Controller supports the generation of SNMP V1 and V2C traps.
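Both V1 and V2C traps are delivered as single BER-encoded UDP datagrams, conventionally to port 162 on the trap host. As a rough sketch of the receiving side only (this is not part of Offline Mediation Controller; the function name and buffer size are assumptions), a management station collects each datagram and hands it to an SNMP decoder:

```python
import socket

def recv_traps(sock, max_datagrams=1):
    """Collect raw SNMP trap datagrams from a bound UDP socket.

    Each trap arrives as one BER-encoded datagram; decoding the payload
    is left to a real SNMP library or management system.
    """
    received = []
    for _ in range(max_datagrams):
        data, sender = sock.recvfrom(65535)  # one trap PDU per datagram
        received.append((sender, data))
    return received
```

In practice the socket would be bound to UDP port 162 (the standard trap port, which requires elevated privileges on most systems) on the address configured as the trap host.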
The following listing contains the Offline Mediation Controller MIB, which the Offline Mediation Controller system uses to define SNMP trap message descriptions, fault types, fault severity levels, possible data values, data types, and so on. Refer to this information when you configure SNMP network management workstations to receive, interpret, display, and store SNMP trap messages from Offline Mediation Controller.
-- ***************************************************************************
-- NM-SNMP-MIB
--
-- Version 1.0 - February 1, 2002
--
-- Revision History
-- Feb/01/2002 Creation of MIB
--
--
-- ***************************************************************************
NM-SNMP-MIB DEFINITIONS ::= BEGIN
IMPORTS
MODULE-IDENTITY,
NOTIFICATION-TYPE,
OBJECT-TYPE,
enterprises,
Counter32
FROM SNMPv2-SMI
DisplayString,
DateAndTime
FROM SNMPv2-TC
MODULE-COMPLIANCE,
OBJECT-GROUP
FROM SNMPv2-CONF
IpAddress
FROM RFC1155-SMI;
-- Define the associated MIB root
nortel OBJECT IDENTIFIER ::= { enterprises 562 }
udc OBJECT IDENTIFIER ::= { nortel 57 }
PM OBJECT IDENTIFIER ::= { udc 1 }
registration OBJECT IDENTIFIER ::= { udc 2 }
udc-r330-mib MODULE-IDENTITY
LAST-UPDATED "0202010000Z" -- February 1st, 2002
ORGANIZATION "Oracle Communications Software"
CONTACT-INFO
"Contact: Oracle Communications Software"
DESCRIPTION
"This MIB module defines all of the managed objects
and traps for Offline Mediation Controller"
::= { registration 1 }
--
-- Textual Conventions used in this module
--
--
-- The main MIB branches introduced in this module.
--
udcFaultManagement OBJECT IDENTIFIER ::= { udc 3 }
udcFaultNotificationsPrefix OBJECT IDENTIFIER ::= { udcFaultManagement 1 }
udcFaultNotifications OBJECT IDENTIFIER ::= { udcFaultNotificationsPrefix 0 }
udcFaultObjects OBJECT IDENTIFIER ::= { udcFaultManagement 2 }
--
-- NM Fault Notification definitions
--
udcClearFault NOTIFICATION-TYPE
OBJECTS { udcFaultComponentName, udcFaultComponentType,
udcFaultHostName, udcFaultSeverity, udcFaultCategory,
udcFaultSpecificText, udcFaultAdditionalText,
udcFaultTime, udcFaultNotificationId }
STATUS current
DESCRIPTION
"A previously reported fault condition on the component
has been cleared."
::= { udcFaultNotifications 1 }
udcWarningFault NOTIFICATION-TYPE
OBJECTS { udcFaultComponentName, udcFaultComponentType,
udcFaultHostName, udcFaultSeverity, udcFaultCategory,
udcFaultSpecificText, udcFaultAdditionalText,
udcFaultTime, udcFaultNotificationId }
STATUS current
DESCRIPTION
"A non-service affecting condition has occurred.
The component continues to operate properly."
::= { udcFaultNotifications 2 }
udcMinorFault NOTIFICATION-TYPE
OBJECTS { udcFaultComponentName, udcFaultComponentType,
udcFaultHostName, udcFaultSeverity, udcFaultCategory,
udcFaultSpecificText, udcFaultAdditionalText,
udcFaultTime, udcFaultNotificationId }
STATUS current
DESCRIPTION
"A non-service affecting condition has occurred.
The component is operating properly, but corrective action
is required to prevent escalation."
::= { udcFaultNotifications 3 }
udcMajorFault NOTIFICATION-TYPE
OBJECTS { udcFaultComponentName, udcFaultComponentType,
udcFaultHostName, udcFaultSeverity, udcFaultCategory,
udcFaultSpecificText, udcFaultAdditionalText,
udcFaultTime, udcFaultNotificationId }
STATUS current
DESCRIPTION
"A service affecting condition has occurred. The component is
functioning with degraded performance and requires immediate
operator action."
::= { udcFaultNotifications 4 }
udcCriticalFault NOTIFICATION-TYPE
OBJECTS { udcFaultComponentName, udcFaultComponentType,
udcFaultHostName, udcFaultSeverity, udcFaultCategory,
udcFaultSpecificText, udcFaultAdditionalText,
udcFaultTime, udcFaultNotificationId }
STATUS current
DESCRIPTION
"A service affecting condition has occurred. The component is
out of service and requires immediate operator action."
::= { udcFaultNotifications 5 }
--
-- UDC Fault Notification Objects
--
udcFaultComponentName OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Unique name for the component with the fault condition.
For example:
nodemanager10900
d6hvac-8es-cvfbb830 (a node-id)
"
::= { udcFaultObjects 1 }
udcFaultComponentType OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"The component type at which the fault condition originates."
::= { udcFaultObjects 2 }
udcFaultHostName OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"The hostname or IP Address (dot notation) where the
NM component executes"
::= { udcFaultObjects 3 }
udcFaultSeverity OBJECT-TYPE
SYNTAX INTEGER {
clear (1),
warning (2),
minor (3),
major (4),
critical (5)
}
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Severity of the reported fault condition. This object value
duplicates the severity of the notification. For example,
a udcWarningFault notification must have udcFaultSeverity
equal to warning"
::= { udcFaultObjects 4 }
udcFaultCategory OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"The category of fault condition reported reflects the
general type of the problem."
::= { udcFaultObjects 5 }
udcFaultSpecificText OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"The specific text describing the fault condition. This
text can be used as a key to uniquely identify the fault
condition within the category. The same text is provided
in the corresponding clear trap. This field does not
contain any variable text."
::= { udcFaultObjects 6 }
udcFaultAdditionalText OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Additional text describing the fault condition. This
text may contain variable information such as:
IP addresses or hostnames, protocol port numbers,
file names, or timestamp information that is useful
to the operator in characterizing the fault condition."
::= { udcFaultObjects 7 }
udcFaultTime OBJECT-TYPE
SYNTAX DateAndTime
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"The date and time at which the fault condition was
detected."
::= { udcFaultObjects 8 }
udcFaultNotificationId OBJECT-TYPE
SYNTAX Counter32
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Numeric identifier for a specific incidence of this fault
condition. This parameter is guaranteed unique for this
component name during the life of the fault condition.
The notification identifier will be provided in the
corresponding clear notification."
::= { udcFaultObjects 9 }
udcFaultReferenceId OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
"Provides an alpha-numeric key that the network operator can
use to look up a text message in any suitable language. For
instance, the key may take the form of an integer, or even
a URL."
::= { udcFaultObjects 10}
END
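The udcFaultSeverity integer values defined above correspond one-to-one to the five notification types. A trivial helper for display purposes on a receiving station (the function name is illustrative; the code values come directly from the MIB) might look like:

```python
# Severity codes as defined by udcFaultSeverity in NM-SNMP-MIB
UDC_FAULT_SEVERITY = {
    1: "clear",
    2: "warning",
    3: "minor",
    4: "major",
    5: "critical",
}

def severity_label(code):
    """Return the textual severity for a udcFaultSeverity value."""
    return UDC_FAULT_SEVERITY.get(code, "unknown")
```

Per the MIB, the value carried in udcFaultSeverity always matches the severity of the notification type itself, so a udcWarningFault trap carries code 2.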