This chapter explains how to start, stop, and monitor Oracle Communications Offline Mediation Controller components.
To start, stop, and monitor Offline Mediation Controller components with the ProcessControl script, define in the offline_mediation.conf file the components that run, the installation they run from, and the ports and IP addresses they use. The ProcessControl script uses this information to start, stop, and monitor multiple servers from multiple Offline Mediation Controller installations.
To configure the offline_mediation.conf file:
Open the OMC_home/offline_mediation/offline_mediation.conf file in a text editor, where OMC_home is the directory in which Offline Mediation Controller is installed.
Specify the Offline Mediation Controller components to start or stop using the following syntax:
daemon_name:OMC_home:port:[IP_Address]:{Y|N}
where:
daemon_name is the name for the Offline Mediation Controller component:
admnsvr (Administration Server)
nodemgr (Node Manager)
OMC_home is the directory in which the component's Offline Mediation Controller installation resides.
port is the port on which the server component runs. The port number range is between 49152 and 65535.
IP_Address is the IP address of the host computer. Specify it when you start, stop, and monitor multiple servers from multiple Offline Mediation Controller installations.
Y or N indicates whether the component starts when the system starts.
For example:
admnsvr:/OMC_home:55105::Y
nodemgr:/OMC_home:55109::Y
Save and close the file.
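For example, a configuration that runs servers from two installations might look like the following. The directory paths and the second host's IP address are illustrative only; substitute the values for your installations:

```
# Hypothetical entries; paths, ports, and the IP address are examples only
admnsvr:/opt/OMC1:55105::Y
nodemgr:/opt/OMC1:55109::Y
nodemgr:/opt/OMC2:55209:192.0.2.10:Y
```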
You can start and stop Offline Mediation Controller by using the following methods:
The ProcessControl script. See "Starting and Stopping Offline Mediation Controller by Using the ProcessControl Script."
The individual component commands, described in the sections that follow.
You can start or stop Offline Mediation Controller by using the ProcessControl script. This script preserves the node status when you restart Node Manager.
Important:
Before running the ProcessControl script, ensure that you have run the configure script. For more information, see the discussion about adding the Offline Mediation Controller service to system startup in Offline Mediation Controller Installation Guide.

To start and stop Offline Mediation Controller by using the ProcessControl script:
Go to the OMC_home/bin directory.
Run the following command, which starts the Offline Mediation Controller components that are defined in the offline_mediation.conf file on the appropriate ports:
./ProcessControl start
Run the following command, which stops the Offline Mediation Controller components that are defined in the offline_mediation.conf file:
./ProcessControl stop
To start and stop Node Manager:
Go to the OMC_home/bin directory.
Run the following command, which starts Node Manager:
./nodemgr [-d | -f | -F | -j] [-p port] [-i IP_Address]
where:
-d runs Node Manager in the background with debug output redirected to OMC_home/log/nodemgr_port.out.
This option consumes a large amount of CPU while running.
-f runs Node Manager in the foreground.
-F runs Node Manager in the foreground with debug output.
-j runs Node Manager, with the just-in-time (JIT) compiler enabled, in the background with debug output redirected to OMC_home/log/nodemgr_port.out.
-p port runs Node Manager on port.
-i IP_Address specifies the IP address of the host computer on which Node Manager is installed. Use this parameter to start Node Manager installed on multiple computers.
If you run this command with no options, Node Manager starts in the background with no debug output.
Run one of the following commands to stop Node Manager:
To shut down Node Manager:
./nodemgr -s [-p port]
To stop the Node Manager process:
./nodemgr -k [-p port]
To start and stop Administration Server:
Go to the OMC_home/bin directory.
Run the following command, which starts Administration Server:
./adminsvr [-d | -f | -F | -j] [-x] [-p port] [-i IP_Address]
where:
-d runs Administration Server in the background with debug output redirected to OMC_home/log/adminsvr_port.out.
-f runs Administration Server in the foreground.
-F runs Administration Server in the foreground with debug output.
-j runs Administration Server, with the JIT compiler enabled, in the background with debug output redirected to OMC_home/log/adminsvr_port.out.
-x disables user authentication.
-p port runs Administration Server on port.
-i IP_Address specifies the IP address to use. Use this parameter on multihomed systems.
If you run this command with no options, Administration Server starts in the background with no debug output.
Run one of the following commands to stop Administration Server:
To shut down Administration Server:
./adminsvr -s [-p port]
To stop the Administration Server process:
./adminsvr -k [-p port]
To start Administration Client:
Go to the OMC_home/bin directory.
Run the following command:
./gui [-d | -f | -F]
where:
-d runs Administration Client in the background with debug output redirected to OMC_home/log/gui_port.out.
-f runs Administration Client in the foreground.
-F runs Administration Client in the foreground with debug output.
If you run this command with no options, Administration Client starts in the background with no debug output.
You can use the ProcessControl script to monitor Offline Mediation Controller components to ensure that they are still running and to restart them if they stop.
To run the ProcessControl script to monitor Offline Mediation Controller components:
Stop all Offline Mediation Controller components.
Open the /etc/inittab file in a text editor.
Add the following entry:
NT:3:respawn:/etc/init.d/ProcessControl monitor
Save and close the file.
Run the following command, which periodically monitors the status of the Offline Mediation Controller components that are defined in the offline_mediation.conf file:
./ProcessControl monitor
This section explains the system-monitoring options for the OMC_home/config/nodemgr/nodemgr.cfg file. If you do not modify the nodemgr.cfg file, Offline Mediation Controller uses the default threshold values.
You can modify nodemgr.cfg to manage your threshold options to monitor disk, memory, and CPU usage levels. You can set a warning threshold and an error threshold for these areas. You must decide what action to take if the thresholds are crossed.
To monitor disk errors, see "Using the Disk Status Monitor".
To monitor memory errors, see "Using the Memory Monitor".
To monitor CPU usage levels, see "Using the CPU Usage Monitor".
By default, Offline Mediation Controller generates a single alarm for each error condition, even if the error condition occurs multiple times. To generate an alarm or trap for every error occurrence, open the nodemgr.cfg file and change the SUPPRESS_MULTIPLE_ALARMS parameter value to No.
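For example, the following nodemgr.cfg entry enables an alarm for every occurrence. The key=value form shown here is an assumption about the file's syntax; match the format of the existing entries in your nodemgr.cfg file:

```
SUPPRESS_MULTIPLE_ALARMS=No
```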
You use the disk status monitor to alert you to potential disk issues, so you can take action to avoid unrecoverable errors.
Note:
The disk status monitor runs only on Solaris workstations that have the Sun Solstice DiskSuite metastat command installed.

Table 2-1 lists the parameters you can add or modify in the nodemgr.cfg file.
You use the memory monitor to alert you when memory usage exceeds a specified threshold. In addition to the threshold, you can configure the memory monitor to log memory usage statistics.
Table 2-2 lists the parameters you can add or modify in the nodemgr.cfg file.
Table 2-2 Memory Monitor Parameters
| Parameter | Description |
|---|---|
| LOG_MEMORY_USAGE | Set to Y to log memory usage statistics. The default is N. |
| MEMORY_MAJOR_THRESHOLD | The level at which a major alarm is raised, as a percentage. The default is 85. |
| MEMORY_WARNING_THRESHOLD | The level at which a warning alarm is raised, as a percentage. The default is 70. |
| MEMORY_SAMPLE_TIME | The time interval, in seconds, during which memory usage must be above a specific threshold level before an alarm is raised. The default is 60. |
| MEMORY_SAMPLE_FREQ | The number of polls taken during each sample period. The default is 4. |
For example, using the default values for MEMORY_SAMPLE_TIME (60 seconds) and MEMORY_SAMPLE_FREQ (4), the memory usage polls would occur every 15 seconds (60 seconds divided by 4). In this case, an alarm would be generated if the memory usage level was above the specified threshold for 4 consecutive polls.
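The sampling arithmetic above can be sketched as follows. This is an illustrative model of the polling logic using the default values, not Offline Mediation Controller code:

```python
# Illustrative model of the memory monitor's sampling logic (not product code).
MEMORY_SAMPLE_TIME = 60      # seconds per sample period
MEMORY_SAMPLE_FREQ = 4       # polls per sample period
MEMORY_MAJOR_THRESHOLD = 85  # percent

# Polls occur every MEMORY_SAMPLE_TIME / MEMORY_SAMPLE_FREQ seconds.
poll_interval = MEMORY_SAMPLE_TIME / MEMORY_SAMPLE_FREQ  # 15.0 seconds

def should_raise_alarm(usage_samples, threshold=MEMORY_MAJOR_THRESHOLD):
    """Alarm only if every poll in the sample period is above the threshold."""
    return (len(usage_samples) == MEMORY_SAMPLE_FREQ
            and all(u > threshold for u in usage_samples))

print(poll_interval)                         # 15.0
print(should_raise_alarm([90, 91, 88, 86]))  # True: all 4 polls above 85
print(should_raise_alarm([90, 91, 80, 86]))  # False: one poll at or below 85
```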
The CPU usage monitor generates a critical or major alarm if the CPU usage level reaches a specified value.
Table 2-3 lists the parameters you can add or modify in the nodemgr.cfg file.
Table 2-3 CPU Usage Monitor Parameters
| Parameter | Description |
|---|---|
| CPU_REDTHRESHOLD | The percentage of CPU in use that generates a critical alarm. The default is 90. |
| CPU_YELLOWTHRESHOLD | The percentage of CPU in use that generates a major alarm. The default is 80. |
| CPU_SAMPLETIME | The period, in seconds, in which to poll a fixed number of times. The default is 60. |
| CPU_SAMPLEFREQ | The number of polls taken during each sample period. The default is 3. |
For example, using the default values for CPU_SAMPLETIME (60 seconds) and CPU_SAMPLEFREQ (3), a poll will take place every 20 seconds (60 seconds divided by 3).
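For example, nodemgr.cfg entries that set the defaults above explicitly would look like the following. The key=value form is an assumption about the file's syntax; match the format of the existing entries in your nodemgr.cfg file:

```
CPU_REDTHRESHOLD=90
CPU_YELLOWTHRESHOLD=80
CPU_SAMPLETIME=60
CPU_SAMPLEFREQ=3
```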
Offline Mediation Controller uses specific ports to send data to and to receive data from external devices and applications. Use the port information in Table 2-4 when you are planning the network and configuring routers and firewalls that communicate between Offline Mediation Controller components.
| Application | Protocol | Source | Source Port | Destination | Destination Port |
|---|---|---|---|---|---|
| GTP | UDP | GSN | 1024 or higher | Offline Mediation Controller | 3386 |
| Open FTP and FTP | TCP | MSC, Application Server, or Offline Mediation Controller | 20 or 21 | Application Server or Offline Mediation Controller | 20 or 21 |
| SNMP | UDP | Offline Mediation Controller | 161 | EMS | 162 |
| RADIUS | UDP | GSN or RADIUS Server | 1814 | Offline Mediation Controller | 1813 |
| DBSR | TCP | Offline Mediation Controller | 1521 | Oracle database | 1521 |
By default, all Administration Servers can connect to the mediation host (also called a node manager). You can limit access to a mediation host by using its associated OMC_home/config/nodemgr/nodemgr_allow.cfg file. The file lists the IP addresses for all Administration Servers that are allowed to connect to the mediation host. You can edit the list at any time to allow or disallow Administration Server access to the mediation host.
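For example, a nodemgr_allow.cfg file that admits two Administration Servers might contain the following. One address per line is an assumption about the file's layout, and the addresses are illustrative only:

```
192.0.2.15
192.0.2.16
```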
You can set up a network firewall between the Offline Mediation Controller servers and the corporate intranet or external Internet. Administration Client can connect with and operate the Offline Mediation Controller servers through this firewall.
To set up a firewall, perform the following tasks:
These port numbers are defined during the installation process but can be modified to accommodate your particular firewall configuration.
To change the default Administration Server's port number for the firewall:
Stop all Offline Mediation Controller components.
Open the OMC_home/config/adminserver/firewallportnumber.cfg file in a text editor.
Change the value of the following entry:
AdminServer=port
where port is the port on which Administration Server runs. The suggested port number range is between 49152 and 65535. The default port number in the configuration file is 55110.
Save and close the file.
To change the default firewall port number range values:
Stop all Offline Mediation Controller components.
Open the OMC_home/config/GUI/firewallportnumber.cfg file in a text editor.
Change the values of the following entries:
RangeFrom=port
RangeTo=port
where port is the port on which Administration Client runs. The suggested port number range is between 49152 and 65535. The default port number range in the configuration file is 55150 to 55199.
Save and close the file.
To configure Node Manager memory limits:
Note:
The performance of the system can be affected by changing these settings, which by default are optimized for most Offline Mediation Controller applications.

Go to the OMC_home/customization directory and verify that the nodemgr.var file exists. On a newly installed system, the nodemgr.var file may not yet exist.
If the file does not exist, run the following command, which creates the file:
cp OMC_home/config/nodemgr/nodemgr.var.reference OMC_home/customization/nodemgr.var
Open the nodemgr.var file in a text editor.
Specify the upper memory size by modifying the NM_MAX_MEMORY parameter. The default is 3500 megabytes.
The valid range for a Solaris installation is from 500 to 3500.
The valid range for an Oracle/Red Hat Enterprise Linux installation is from 500 to 3500.
Specify the lower memory size by modifying the NM_MIN_MEMORY parameter. The default is 1024 megabytes.
The valid range for a Solaris installation is from 50 to 3500.
The valid range for an Oracle/Red Hat Enterprise Linux installation is from 50 to 3500.
Save and close the file.
Restart Node Manager.
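An edited nodemgr.var might contain entries such as the following. The key=value form is an assumption based on the reference file's conventions; values are in megabytes and match the defaults described above:

```
NM_MAX_MEMORY=3500
NM_MIN_MEMORY=1024
```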
Offline Mediation Controller records system activity in log files. One log file is generated for each Offline Mediation Controller component and for each node on the mediation host. Review the log files daily to monitor your system and detect and diagnose system problems.
Offline Mediation Controller generates log files for Offline Mediation Controller components and for the nodes you create.
For Offline Mediation Controller components such as Administration Server, Node Manager, and Administration Client, log files are named component.log; for example, nodemgr.log, adminserver.log, and GUI.log.
For each node on the mediation host, the log file is named nodeID.log, where nodeID is the unique ID that is assigned by Administration Server when you create a node on a mediation host; for example, if the unique ID of the node is 2ys4tt-16it-hslskvi1, the log file name is 2ys4tt-16it-hslskvi1.log.
The following are the minimum Offline Mediation Controller log files:
nodemgr.log
adminserver.log
GUI.log
Depending on the number of nodes you create, your installation will have one or more node log files.
The log files for Offline Mediation Controller components are stored in the OMC_home/log/component directory; for example, the Node Manager log file is in the OMC_home/log/nodemgr directory.
The log files for the nodes are stored in the OMC_home/log/nodeID directory; for example, if the node ID of the node is 2ys4tt-16it-hslskvi1, the node log file is in the OMC_home/log/2ys4tt-16it-hslskvi1 directory.
Note:
Oracle recommends that you not change the default location of the log files. If you change the location of the log files, you cannot access the log information from Administration Client.

Table 2-5 lists the components and their corresponding log file locations:
Each Offline Mediation Controller component or node has its own logger properties file. When an Offline Mediation Controller component or a node is started for the first time, the logger properties file is dynamically created in the OMC_home/config/component directory. By default, the logger properties file is set to the default logging level.
Table 2-6 lists the components and their corresponding logger properties file locations:
Table 2-6 Offline Mediation Controller Components and Logger Properties File Locations
| Component | Logger Properties File Location |
|---|---|
| Node Manager | OMC_home/config/nodemgr/nodemgrLogger.properties |
| Administration Server | OMC_home/config/adminserver/adminserverLogger.properties |
| Administration Client | OMC_home/config/GUI/GUILogger.properties |
| Node | OMC_home/config/nodeID/nodeIDLogger.properties |
By default, Offline Mediation Controller components report information messages. You can set Offline Mediation Controller to report or to not report information messages. The following levels of reporting are supported:
NO = no logging
WARN = log only warning messages
INFO = (default) log information messages
DEBUG = log debug messages
ALL = log warning, information, and debug messages
Important:
To avoid performance degradation, use INFO level logging for debugging.

To change the severity level for logging:
Open the logger properties file for the component in a text editor. See "About the Logger Properties Files".
Search for the following entry:
log4j.logger.componentName.component=severity,componentAppender
where:
componentName is the name of the Offline Mediation Controller component.
component is the Offline Mediation Controller component or the node ID.
severity is the current severity level for the logging.
For example:
log4j.logger.NodeManager.nodemgr=WARN,nodemgrAppender
Change the entry to the desired severity level for logging.
For example, to change the log level from WARN to INFO for Node Manager:
log4j.logger.NodeManager.nodemgr=INFO,nodemgrAppender
Save and close the file.
Note:
You do not need to restart the running process to enable the changes in the logging level. A predefined delay of two minutes is set before the changed logger configuration takes effect.