1 Configuring Offline Mediation Controller

Learn how to start, stop, and manage Oracle Communications Offline Mediation Controller components.

Topics in this document:

• Configuring the ProcessControl Script to Run Components
• Starting and Stopping Offline Mediation Controller
• Changing the IP Address of a Mediation Host
• Modifying Attribute Names Displayed in Record Editor
• Managing Ports
• Managing Mediation Host Security
• Setting Up an Offline Mediation Controller Administration Server Firewall
• Configuring Node Manager Memory Limits
• Configuring the Java Virtual Machine Memory Usage when Running Administration Client
• Configuring the Java Virtual Machine Memory Usage when Running Administration Server
• Using the Offline Mediation Controller Shell Tool
• About SNMP Trap Hosts

Configuring the ProcessControl Script to Run Components

You can use the ProcessControl script to start or stop Offline Mediation Controller components. To do so, you must first configure the offline_mediation.conf file to include the components that are running, the installation they run from, and the ports and IP addresses they use. The ProcessControl script uses the information in the offline_mediation.conf file to start or stop multiple servers from multiple Offline Mediation Controller installations.

To configure the ProcessControl script to run components:

  1. Open the OMC_home/offline_mediation/offline_mediation.conf file in a text editor, where OMC_home is the directory in which Offline Mediation Controller is installed.

  2. Specify the Offline Mediation Controller components to start or stop using the following syntax:

    daemon_name:OMC_home:port:[IP_Address]:[Y|N]

    where:

    • daemon_name is the name of the Offline Mediation Controller component:

      • admnsvr (Administration Server)

      • nodemgr (Node Manager)

    • OMC_home is the directory of the Offline Mediation Controller installation from which the component runs.

    • port is the port on which the server component runs. The valid port number range is 49152 through 65535.

    • IP_Address is the IP address of the host computer. Specify it to start, stop, and monitor multiple servers from multiple Offline Mediation Controller installations.

    • Y or N indicates whether the component is started automatically when the system starts.

    For example:

    admnsvr:/OMC_home:55105::Y
    nodemgr:/OMC_home:55109::Y
  3. Save and close the file.

Starting and Stopping Offline Mediation Controller

You can start and stop Offline Mediation Controller by using the following methods:

• Starting and Stopping Offline Mediation Controller by Using the ProcessControl Script
• Starting and Stopping Node Manager
• Starting and Stopping Administration Server
• Starting Administration Client

Starting and Stopping Offline Mediation Controller by Using the ProcessControl Script

You can start or stop Offline Mediation Controller by using the ProcessControl script. This script preserves the node status when you restart Node Manager.

Note:

Before running the ProcessControl script, ensure that you have run the configure script. For more information, see "Adding Offline Mediation Controller Service to System Startup" in Offline Mediation Controller Installation Guide.

To start and stop Offline Mediation Controller by using the ProcessControl script:

  1. Go to the OMC_home/bin directory.

  2. Run the following command, which starts the Offline Mediation Controller components that are defined in the offline_mediation.conf file on the appropriate ports:

    ./ProcessControl start
  3. Run the following command, which stops the Offline Mediation Controller components that are defined in the offline_mediation.conf file:

    ./ProcessControl stop

Starting and Stopping Node Manager

To start and stop Node Manager:

  1. Go to the OMC_home/bin directory.

  2. Run the following command, which starts Node Manager:

    ./nodemgr [-d | -f | -F | -j] [-p port] [-i IP_Address]
    

    where:

    • -d runs Node Manager in the background with debug output redirected to OMC_home/log/nodemgr_port.out.

      This option consumes a large amount of CPU while it runs.

    • -f runs Node Manager in the foreground.

    • -F runs Node Manager in the foreground with debug output.

    • -j runs Node Manager, with the just-in-time (JIT) compiler enabled, in the background with debug output redirected to OMC_home/log/nodemgr_port.out.

    • -p port runs Node Manager on port.

    • -i IP_Address specifies the IP address of the host computer on which Node Manager is installed. Use this parameter to start Node Manager installed on multiple computers.

    If you run this command with no options, Node Manager starts in the background with no debug output.

  3. Run one of the following commands, which stops Node Manager:

    • To shut down Node Manager:

      ./nodemgr -s [-p port]
      
    • To stop the Node Manager process:

      ./nodemgr -k [-p port]
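
For example, the following commands start Node Manager in the background with debug output and then shut it down. The port value 55109 is only the sample port used in the offline_mediation.conf example earlier in this document; adjust it for your installation.

  # Start Node Manager in the background with debug output on port 55109
  ./nodemgr -d -p 55109
  # Shut down the Node Manager instance running on port 55109
  ./nodemgr -s -p 55109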
      

Starting and Stopping Administration Server

To start and stop Administration Server:

  1. Go to the OMC_home/bin directory.

  2. Run the following command, which starts Administration Server:

    ./adminsvr [-d | -f | -F | -j] [-x] [-p port] [-i IP_Address]
    

    where:

    • -d runs Administration Server in the background with debug output redirected to OMC_home/log/adminsvr_port.out.

    • -f runs Administration Server in the foreground.

    • -F runs Administration Server in the foreground with debug output.

    • -j runs Administration Server, with the JIT compiler enabled, in the background with debug output redirected to OMC_home/log/adminsvr_port.out.

    • -x disables user authentication.

    • -p port runs Administration Server on port.

    • -i IP_Address specifies the IP address to use. Use this parameter on multihomed systems.

    If you run this command with no options, Administration Server starts in the background with no debug output.

  3. Run one of the following commands, which stops Administration Server:

    • To shut down Administration Server:

      ./adminsvr -s [-p port]
      
    • To stop the Administration Server process:

      ./adminsvr -k [-p port]
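
For example, the following commands start Administration Server in the background with debug output and then shut it down. The port value 55105 matches the sample admnsvr entry shown earlier in this document; adjust it for your installation.

  # Start Administration Server in the background with debug output on port 55105
  ./adminsvr -d -p 55105
  # Shut down the Administration Server instance running on port 55105
  ./adminsvr -s -p 55105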
      

Starting Administration Client

To start Administration Client:

  1. Go to the OMC_home/bin directory.

  2. Run the following command:

    ./gui [-d | -f | -F]
    

    where:

    • -d runs Administration Client in the background with debug output redirected to OMC_home/log/gui_port.out.

    • -f runs Administration Client in the foreground.

    • -F runs Administration Client in the foreground with debug output.

    If you run this command with no options, Administration Client starts in the background with no debug output.
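
For example, to run Administration Client in the foreground with debug output:

  ./gui -F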

Changing the IP Address of a Mediation Host

You cannot change the IP address of a mediation host directly. Instead, you must remove the mediation host that uses the old IP address and then update the IP address in the offline_mediation.conf file.

To change the IP address of a mediation host:

  1. Write down the port number on which Offline Mediation Controller is connected.

  2. In Administration Client, delete the mediation host running on the Offline Mediation Controller workstation. To do so:

    1. Delete the nodes in the node chain from left to right. Otherwise, the dependency of one node on the previous node may prevent you from removing it.

    2. Delete the mediation host.

  3. Stop all Offline Mediation Controller related processes on the workstation.

  4. For UNIX machines, modify the /etc/hosts file with the new IP addresses and reboot the workstations. Restart the Offline Mediation Controller processes.

  5. In the OMC_home/offline_mediation directory, where OMC_home is the directory in which you installed Offline Mediation Controller, open the offline_mediation.conf file and replace any occurrences of the old IP address with the new IP address (see the example after this procedure). Entering an IP address is optional in this file, so if the field has no value, you can leave it as is.

  6. When you log in to the Administration Client, enter the new IP address of the workstation on which the adminsvr will be running.

  7. In Administration Client, add a mediation host for each workstation.

  8. Restart the adminsvr and nodemgr processes on the workstation.

  9. Restart all nodes on the workstation.

  10. Ensure that the SystemModel.cfg file in the Administration Server config directory (OMC_home/config/adminserver) has the new IP address and the correct port number.

  11. Ensure that the dataflowmap.cfg and nmPort files in the Node Manager config directory (OMC_home/config/nodemgr) have the new IP address and the correct port number.

  12. If this fails, stop all nodes, stop all processes (Administration Client, adminsvr, and nodemgr), and manually update any configuration files in the config directories that still contain the old IP address.

  13. Restart the adminsvr and nodemgr processes on the primary workstation (nodemgr only on the backup workstation).

  14. Restart all of the nodes on the primary workstation (for backup workstations, restart the CC node only).
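
For example, if the mediation host's IP address changes from 192.0.2.10 to 192.0.2.20 (illustrative addresses only), a nodemgr entry in the offline_mediation.conf file referenced in step 5 would change as follows:

  Before: nodemgr:/OMC_home:55109:192.0.2.10:Y
  After:  nodemgr:/OMC_home:55109:192.0.2.20:Y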

Modifying Attribute Names Displayed in Record Editor

By default, Record Editor uses the name Network Accounting Record for each NAR in your system. This means that each NAR will be displayed as Network Accounting Record in the left pane of the Record Editor window. When you expand a NAR, the NAR attribute names are listed.

To modify the attribute name:

  1. Open the OMC_home/datadict/Data_Dictionary.xml file in a text editor, where OMC_home is the directory in which Offline Mediation Controller is installed.

  2. Search for the attribute ID.

  3. Change the <Attr> element to <Attr tagForName="true">.

    The tagForName option overrides the default attribute name.

  4. Set the <Name> element to the attribute name you want to display in Record Editor.

    Note:

    If you leave the <Name> element blank, Record Editor displays the attribute ID as the attribute name.

  5. Save the file.

  6. Restart Record Editor.
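
The following fragment is a sketch of what a modified Data_Dictionary.xml entry might look like. Only the <Attr> element, the tagForName attribute, and the <Name> element come from the procedure above; the nested layout and the name value are illustrative assumptions rather than the actual file schema.

  <!-- Illustrative sketch only; the attribute ID representation is omitted -->
  <Attr tagForName="true">
      <Name>Calling Station ID</Name>
      ...
  </Attr>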

Managing Ports

Offline Mediation Controller uses specific ports to send data to and to receive data from external devices and applications. Use the port information in Table 1-1 when you are planning the network and configuring routers and firewalls that communicate between Offline Mediation Controller components.

Table 1-1  Port Information

Application | Protocol | Source | Source Port | Destination | Destination Port
GTP | UDP | GSN | 1024 or higher | Offline Mediation Controller | 3386
Open FTP and FTP | TCP | MSC, Application Server or Offline Mediation Controller | 20 or 21 | Application Server or Offline Mediation Controller | 20 or 21
SNMP | UDP | Offline Mediation Controller | 161 | EMS | 162
RADIUS | UDP | GSN or RADIUS Server | 1814 | Offline Mediation Controller | 1813
DBSR | TCP | Offline Mediation Controller | 1521 | Oracle database | 1521

Managing Mediation Host Security

By default, all Administration Servers can connect to the mediation host (also called a Node Manager). You can limit access to a mediation host by using its associated OMC_home/config/nodemgr/nodemgr_allow.cfg file. The file lists the IP addresses for all Administration Servers that are allowed to connect to the mediation host. You can edit the list at any time to allow or disallow Administration Server access to the mediation host.
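
For example, a nodemgr_allow.cfg file that allows two Administration Servers to connect might look like the following. The one-address-per-line layout and the addresses themselves are assumptions for illustration; check the file delivered with your installation for the exact format.

  192.0.2.15
  192.0.2.16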

Setting Up an Offline Mediation Controller Administration Server Firewall

You can set up a network firewall between the Offline Mediation Controller servers and the corporate intranet or external Internet. Administration Client can connect with and operate the Offline Mediation Controller servers through this firewall.

To set up a firewall, perform the following tasks:

• Defining Administration Server Port Number
• Defining Administration Client Port Numbers

These port numbers are defined during the installation process but can be modified to accommodate your particular firewall configuration.

Defining Administration Server Port Number

To change the Administration Server's default port number for the firewall:

  1. Stop all Offline Mediation Controller components.

  2. Open the OMC_home/config/adminserver/firewallportnumber.cfg file in a text editor.

  3. Change the value of the following entry:

    AdminServer=port
    

    where port is the port on which Administration Server runs. The suggested port number range is between 49152 and 65535. The default port number in the configuration file is 55110.

  4. Save and close the file.
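
For example, to run Administration Server on port 55120 (an arbitrary value within the suggested range), the entry would read:

  AdminServer=55120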

Defining Administration Client Port Numbers

To change the default firewall port number range values:

  1. Stop all Offline Mediation Controller components.

  2. Open the OMC_home/config/GUI/firewallportnumber.cfg file in a text editor.

  3. Change the values of the following entries:

    RangeFrom=port
    RangeTo=port
    

    where port is the port on which Administration Client runs. The suggested port number range is between 49152 and 65535. The default port number range in the configuration file is 55150 to 55199.

  4. Save and close the file.
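
For example, to restrict Administration Client to ports 55200 through 55249 (arbitrary values within the suggested range), the entries would read:

  RangeFrom=55200
  RangeTo=55249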

Configuring Node Manager Memory Limits

To configure Node Manager memory limits:

Note:

Changing these settings can affect system performance. By default, they are optimized for most Offline Mediation Controller applications.

  1. Go to the OMC_home/customization directory and verify that the nodemgr.var file exists. On a newly installed system, the nodemgr.var file may not yet exist.

    If the file does not exist, run the following command, which creates the file:

    cp OMC_home/config/nodemgr/nodemgr.var.reference OMC_home/customization/nodemgr.var
    
  2. Open the nodemgr.var file in a text editor.

  3. Specify the upper memory size by modifying the NM_MAX_MEMORY parameter (see the example after this procedure). The default is 3500 megabytes.

    • The valid range for a Solaris installation is from 500 to 3500.

    • The valid range for an Oracle/Red Hat Enterprise Linux installation is from 500 to 3500.

  4. Specify the lower memory size by modifying the NM_MIN_MEMORY parameter. The default is 1024 megabytes.

    • The valid range for a Solaris installation is from 50 to 3500.

    • The valid range for an Oracle/Red Hat Enterprise Linux installation is from 50 to 3500.

  5. Save and close the file.

  6. Restart Node Manager.
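
For reference, a nodemgr.var fragment that keeps the default memory settings might look like the following. This is a sketch only; the file may contain other settings, and its exact layout may differ in your installation.

  NM_MAX_MEMORY=3500
  NM_MIN_MEMORY=1024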

Configuring the Java Virtual Machine Memory Usage when Running Administration Client

You can configure the maximum and minimum memory sizes the Java Virtual Machine (JVM) uses when running Administration Client. By configuring the maximum memory size, you can reduce the amount of memory JVM uses when running Administration Client.

To set the maximum and minimum memory sizes:

  1. Open the OMC_home/bin/gui file in a text editor, where OMC_home is the directory in which Administration Client is installed.

  2. Add or modify the following entries:

    InitializeExternalConfig(){
       NM_MIN_MEMORY=value
       NM_MAX_MEMORY=value
    }

    where value is the memory size, in megabytes, for the respective entry.

    For example:

    InitializeExternalConfig(){
       NM_MIN_MEMORY=1024
       NM_MAX_MEMORY=3500
    }
  3. Save and close the file.

Configuring the Java Virtual Machine Memory Usage when Running Administration Server

You can configure the maximum and minimum memory sizes the JVM uses when running Administration Server. By configuring the maximum memory size, you can reduce the amount of memory JVM uses when running Administration Server.

To set the maximum and minimum memory sizes:

  1. Open the OMC_home/bin/adminsvr file in a text editor.

  2. Add or modify the following entries:

    InitializeExternalConfig(){
       NM_MIN_MEMORY=value
       NM_MAX_MEMORY=value
    }

    where value is the memory size, in megabytes, for the respective entry.

    For example:

    InitializeExternalConfig(){
       NM_MIN_MEMORY=1024 
       NM_MAX_MEMORY=3500 
    }
  3. Save and close the file.

Using the Offline Mediation Controller Shell Tool

You can use the Offline Mediation Controller Shell (NMShell) tool to access Offline Mediation Controller system information, discover node status, and perform start and stop operations, basic alarm monitoring, traffic monitoring, and node configuration changes. The NMShell tool runs on UNIX workstations and is useful for low-speed connections or for accessing Offline Mediation Controller from behind a firewall that does not allow GUI access.

NMShell navigation is similar to the navigation in a file system. The components of the file system (the Administration Server, Node Managers, and nodes) make up a tree of contexts you can access with the cd command. To list the information available in a specific context, you can use the ls command. Certain contexts have other, context-specific commands available. For example, if you run the cd command to access a Node Manager context, you can then use the start and stop commands for the nodes. If you are starting or stopping more than one node, list the node IDs with a space between each one.

Before accessing the system information, you must use the login command, which logs you on to the Administration Server. You can then navigate the Offline Mediation Controller system.

The NMShell tool is located in the OMC_home/bin/tools directory.
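
The following is a sketch of a typical NMShell session. The launcher script name, the login arguments, and the Node Manager and node identifiers are assumptions for illustration only; the commands themselves (login, ls, cd, start, stop, and exit) are described in Table 1-2 through Table 1-5.

  cd OMC_home/bin/tools
  ./NMShell                # assumed launcher name; check the tools directory for the actual script
  login                    # log on to the Administration Server (arguments depend on your configuration)
  ls                       # list the Node Managers configured for this Administration Server
  cd nodemgr1              # change to a Node Manager context (illustrative name)
  ls                       # list the nodes controlled by this Node Manager
  start node1 node2        # start two nodes; separate node IDs with a space
  stop node1               # stop a node
  exit                     # exit the NMShell tool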

Table 1-2 lists the NMShell commands for the Administration Server.

Table 1-2 NMShell Commands for Administration Server

Command | Description
login | Log on to the specified Administration Server.
cd | Change to the specified Node Manager. If no Node Manager is specified, change to the parent Administration Server.
ls | Display the list of Node Managers configured for this Administration Server.
lsNodesWithName | List the node ID and type of all nodes having a specified name.
cmd -status | Display whether the last NMShell command was successful or failed.
status | Check the status of all nodes in a mediation host, all nodes for a specific Node Manager, or a specific node.
import | Import the node configuration from an XML file.
export | Export the node configuration to an XML file.
addhost | Add a mediation host to the specified Node Manager.
startNodes or startNode | Start all nodes in the currently running mediation host, all nodes for a specific mediation host, or a specific node.
deleteNodes or deleteNode | Delete all nodes in the currently running mediation host, all nodes for a specific mediation host, or a specific node.
stopNodes or stopNode | Stop all nodes in a mediation host, all nodes for a specific mediation host, or a specific node.
compileNpl | Compile the NPL rule file.
addRoute | Add a route between the two nodes you specify.
removeRoute | Remove an existing route between the two nodes you specify.
help | Display the list of available commands.

Table 1-3 lists the NMShell commands for the Node Manager.

Table 1-3 NMShell Commands for Node Manager

Command | Description
cd | Change to the specified node. If no node is specified, change to the parent Node Manager.
ls | Display the list of nodes controlled by this Node Manager.
ls | Display the list of configuration parameters for a specific node.
start | Start one or more nodes on the Node Manager. Separate the node IDs with a space.
startall or start all | Start all nodes on this Node Manager.
stop | Stop one or more nodes on the Node Manager. Separate the node IDs with a space.
stopall or stop all | Stop all nodes on this Node Manager.
topalarm | Display the top-level Node Manager alarm.
addattribute | Add back-end only configurations at the node level.
change | Change the value of an existing configuration at the node level.
perf or performance | Display the current node's performance window.
help | Display the list of available commands.

Table 1-4 lists the NMShell commands for nodes.

Table 1-4 NMShell Commands for Nodes

Command | Description
cd | Return to the parent Node Manager.
ls | List the configuration for the node.

Table 1-5 lists the NMShell commands for all components.

Table 1-5 NMShell Commands for All Components

Command | Description
exit or quit | Exit the NMShell tool.
help | List the available commands for the current context.
pwd | Show the current context.

For more information about running NMShell commands, see "Managing Nodes Using NMShell Command-Line Components".

About SNMP Trap Hosts

An SNMP trap host is an IP host designated to receive SNMP trap messages from the Offline Mediation Controller system. Offline Mediation Controller sends SNMP trap messages to one or more external SNMP-based network management systems when:

  • The Mediation Host or Node Manager experiences critical, warning, major, or minor events. You define the supported severity levels and their meanings in the Offline Mediation Controller SNMP trap management information base (MIB).
  • You manually start or stop the Administration Server or Node Manager. For example, when you start the Administration Server, Offline Mediation Controller sends the “Server run state change” trap message with the additional text “Admin server has been started”.

You can view trap message text on an external SNMP management system, such as the Hewlett Packard OpenView system, in the Offline Mediation Controller client, and in the host and node log files.

To ensure that trap messages are received from both the local and remote Node Managers, enter the IP address or the host name of the local machine, not the string "localhost", when you add the target host. Offline Mediation Controller supports the generation of SNMP V1 and V2C traps.