2 New Features
Learn about the feature enhancements introduced in the Oracle Communications Offline Mediation Controller 12.0 patch sets.
Topics in this document:
- New Features in Offline Mediation Controller 12.0 Patch Set 8
- New Features in Offline Mediation Controller 12.0 Patch Set 6
- New Features in Offline Mediation Controller 12.0 Patch Set 5
- New Features in Offline Mediation Controller 12.0 Patch Set 4
- New Features in Offline Mediation Controller 12.0 Patch Set 3
- New Features in Offline Mediation Controller 12.0 Patch Set 2
- New Features in Offline Mediation Controller 12.0 Patch Set 1
New Features in Offline Mediation Controller 12.0 Patch Set 8
Offline Mediation Controller 12.0 Patch Set 8 includes the following enhancements:
Monitoring Node Performance with Prometheus and Grafana
Both on-premises and cloud native versions of Offline Mediation Controller now track and expose the following Node Manager-level statistics through a single endpoint in Prometheus format:
- The total number of network accounting records (NARs) processed
- The current number of NARs processed
- The current processing rate
- The average processing rate
By default, the metric data for all Node Manager components are exposed at http://localhost:8082/metrics.
To more easily monitor Node Manager, you can configure Prometheus to scrape the metrics from the endpoint and store them for analysis and monitoring. You can then set up Grafana to display your metric data in a graphical format.
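For example, a minimal Prometheus scrape configuration for the default endpoint might look like the following; the job name and scrape interval are illustrative choices, not product defaults:
scrape_configs:
  - job_name: 'ocomc-node-manager'
    scrape_interval: 30s
    static_configs:
      - targets: ['localhost:8082']
You can first verify that the endpoint responds by requesting http://localhost:8082/metrics with a tool such as curl.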
For more information about:
- Monitoring Node Managers in on-premises Offline Mediation Controller, see "Monitoring Node Performance with Prometheus and Grafana" in Offline Mediation Controller System Administrator's Guide.
- Monitoring Node Managers in Offline Mediation Controller cloud native, see "Using Prometheus to Monitor Offline Mediation Controller Cloud Native" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
New Grafana Dashboards for Offline Mediation Controller
The Offline Mediation Controller on-premises and cloud native packages now include sample Grafana dashboard templates that you can use to visualize Offline Mediation Controller metrics. The packages include the following Grafana dashboards:
- OCOMC_JVM_Dashboard.json: Allows you to view JVM-related metrics for Offline Mediation Controller.
- OCOMC_Node_Manager_Summary.json: Allows you to view NAR processing metrics for the Node Manager.
- OCOMC_Node_Summary.json: Allows you to view NAR processing metrics for all nodes.
- OCOMC_Summary_Dashboard.json: Allows you to view NAR-related metrics for all Offline Mediation Controller components.
To use the sample dashboards, import the JSON files from the OMC_home/sampleData/dashboards directory into Grafana. For information about importing dashboards into Grafana, see "Export and Import" in the Grafana Dashboards documentation.
New Features in Offline Mediation Controller 12.0 Patch Set 6
Offline Mediation Controller 12.0 Patch Set 6 includes the following enhancements:
Scaling Node Manager Pods without Affecting Non-Scalable Pods
You can now scale up or scale down the number of Node Manager Pod replicas in your Offline Mediation Controller cloud native environment without affecting the non-scalable Node Manager Pods.
For more information, see "Scaling CC, EP, and DC Nodes without Impacting Non-Scalable Nodes (Patch Set 5.1 and Later)" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
Additional NMShell Command-Line Components
Offline Mediation Controller now allows you to perform the following tasks by using NMShell:
- Add one of these types of routes between two nodes: round robin, multicast, modulus, or directed
- Delete an existing route between two nodes
- Change the value of an existing configuration, such as a password, at the node level
- Add back-end-only configurations at the node level
- List the node ID and type of all nodes having a specified name
For information, see "Managing Nodes Using NMShell Command-Line Components" in Offline Mediation Controller System Administrator's Guide.
Customizing File Names for Sequencing
In previous releases, the file-based sequencing feature in Offline Mediation Controller required your CDR input file names to follow this syntax:
sourceFilename[_seqNum].fileExtension
You can now customize the file name syntax used with file-based sequencing. For information, see "Customizing the Sequencing File Name Syntax" in Offline Mediation Controller User's Guide.
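For example, with the default syntax, an input stream might consist of files such as CDR_MUNICH_1.csv, CDR_MUNICH_2.csv, and CDR_MUNICH_3.csv (illustrative names), where _1, _2, and _3 are the sequence numbers.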
Connecting Administration Client to Administration Server Cloud Native
The Offline Mediation Controller documentation now includes instructions for connecting an on-premises version of Administration Client with Administration Server cloud native. See "Connecting Your Administration Client" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
New Features in Offline Mediation Controller 12.0 Patch Set 5
Offline Mediation Controller 12.0 Patch Set 5 includes the following enhancements:
Scaling Down Node Manager Pods in Cloud Native
You can now scale down the number of Node Manager Pod replicas in your Offline Mediation Controller cloud native environment based on a Pod's CPU or memory utilization. This helps ensure that your Node Manager Pods have enough capacity to handle the current traffic demand while still controlling costs.
For more information, see "Scaling Down Node Manager Pods (Patch Set 5 and Later)" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
Enhancements to Scaling Up of Node Manager Pods
The process for scaling up your Offline Mediation Controller Pods has been simplified.
For more information, see "Scaling Up Node Manager Pods (Patch Set 5 and Later)" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
NMShell Tool Exposed Outside of Pods
In Offline Mediation Controller cloud native, the NMShell tool is now exposed outside of your Pods. Instead of executing the tool inside the Administration Server Pod, you can now run the tool through an NMShell job. This makes it easier to access Offline Mediation Controller cloud native system information, manage nodes, and perform standard operations.
For more information, see "Using NMShell to Automate Deployment of Node Chains (Patch Set 5 and Later)" in Offline Mediation Controller Installation and Administration Guide.
JVM GC and Memory Parameters Now Exposed at Pod Level
In Offline Mediation Controller cloud native, you can now set the JVM garbage collection (GC) and JVM memory values at the Pod level. To do so, use the following new keys in the oc-cn-ocomc-helm-chart/values.yaml file (a sample fragment follows the list):
- ocomc.nodeMgrOptions.gcOptions.globalGC
- ocomc.nodeMgrOptions.gcOptions.gc.x
- ocomc.nodeMgrOptions.memoryOptions.globalMem
- ocomc.nodeMgrOptions.memoryOptions.mem.x
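For example, an override-values.yaml fragment that applies a global GC option and global memory settings to all Node Managers might look like the following; the option strings are illustrative assumptions, and the exact value format is described in the referenced guide:
ocomc:
  nodeMgrOptions:
    gcOptions:
      globalGC: "-XX:+UseG1GC"           # hypothetical GC flag for all Node Managers
    memoryOptions:
      globalMem: "-Xms512m -Xmx2048m"    # hypothetical heap settings for all Node Managers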
Previously, you set the JVM GC and memory values by changing the internal files in admin-server-pvc and node-manager-pvc and then restarting the Pods.
For more information, see "Configuring Offline Mediation Controller Services" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
Administration Server and Node Manager Service Types Exposed at Pod Level
In Offline Mediation Controller cloud native, you can now assign different service types to the Node Manager Pod and the Administration Server Pod. To do so, use the following new keys in the oc-cn-ocomc-helm-chart/values.yaml file (a sample fragment follows the list):
- ocomc.service.adminserver.type
- ocomc.service.nodemgr.type
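For example, to expose Administration Server through a load balancer while keeping Node Manager cluster-internal, an override-values.yaml fragment might look like the following (the service type choices are illustrative):
ocomc:
  service:
    adminserver:
      type: LoadBalancer   # expose Administration Server outside the cluster
    nodemgr:
      type: ClusterIP      # keep Node Manager reachable only inside the cluster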
Previously, both Pods were assigned the same service type.
For more information, see "Configuring Offline Mediation Controller Services" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
Monitoring JVM Metrics with Prometheus and Grafana
Offline Mediation Controller cloud native now tracks and exposes the following JVM metrics for all Node Manager components through a single endpoint in Prometheus format:
- Performance at the Node Manager level
- JVM parameters
The metric data is exposed at http://hostname:portJVM/metrics, where hostname is the host name of the machine on which Offline Mediation Controller cloud native is running and portJVM is the port number where the JVM metrics are exposed. You can set the port number by using the new ocomc.configEnv.metricsPortCN key in your override-values.yaml file for oc-cn-ocomc-helm-chart.
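For example, to expose the JVM metrics on port 8083 (an illustrative choice), you might add the following to your override-values.yaml file:
ocomc:
  configEnv:
    metricsPortCN: 8083   # port on which the JVM metrics endpoint listens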
You can configure Prometheus to scrape the metrics from the endpoint and store them for analysis and monitoring. You can then set up Grafana to display your metric data in a graphical format.
For more information, see "Using Prometheus to Monitor Offline Mediation Controller Cloud Native" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
New Features in Offline Mediation Controller 12.0 Patch Set 4
Offline Mediation Controller 12.0 Patch Set 4 includes the following enhancements:
Scaling Up of Node Manager Pods in Cloud Native
You can now scale up the number of Node Manager Pod replicas in your Offline Mediation Controller cloud native environment based on the Pod's CPU or memory utilization. This helps ensure that your Node Manager Pods have enough capacity to handle the current traffic demand while still controlling costs.
For more information, see "Scaling Up Node Manager Pods (Patch Set 4 Only)" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
New Features in Offline Mediation Controller 12.0 Patch Set 3
Offline Mediation Controller 12.0 Patch Set 3 includes the following enhancement:
Deploying Offline Mediation Controller Services on a Cloud Native Environment
Oracle Communications Offline Mediation Controller can now be deployed in a cloud native environment.
For more information, see "Overview of the Offline Mediation Controller Cloud Native Deployment" in Offline Mediation Controller Cloud Native Installation and Administration Guide.
New Features in Offline Mediation Controller 12.0 Patch Set 2
Offline Mediation Controller 12.0 Patch Set 2 includes the following enhancements:
Hostname Now Used for Identifying Mediation Hosts
In previous releases, Offline Mediation Controller used only the IP addresses specified in the SystemModel.cfg file in the Administration Server configuration directory (OMC_home/config/adminserver) to identify mediation hosts. The IP addresses in this file could not be changed directly, and the workaround required several manual steps.
This process has now been simplified. The SystemModel.cfg file contains the details of the Node Managers and the corresponding nodes for one Administration Server. If the IP address of a Node Manager is not provided, Offline Mediation Controller reads the Node Manager's host name in this file and derives the IP address for identifying the corresponding mediation host. If you change the host name of a Node Manager in the SystemModel.cfg file, Offline Mediation Controller reads the new host name and derives the IP address of the corresponding mediation host.
Additional NMShell Command-Line Components
In previous releases, when you edited a node programming language (NPL) rule file in the Offline Mediation Controller NPL Editor, your only option was to compile and validate the NPL rule file by using the NPL Editor. You also could not delete nodes by using NMShell or check the status of an NMShell command.
With this enhancement, you can perform the following by using NMShell command-line components:
- Compile and validate an NPL rule file, and correct the rule file if there are validation errors.
- Check the status of the last command run.
- Delete all nodes or a specific node.
See the following topics for more information:
Compiling the NPL Rule File by Using NMShell
To compile the NPL rule file by using NMShell:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Enter the following command:
compileNpl -f npl_file_name -d compiled_npl_class -majorType major_type_of_the_node -minorType minor_type_of_the_node -id node_id
where:
- -f npl_file_name specifies the absolute path of the NPL file that you want to compile.
- -d compiled_npl_class specifies the absolute path of the compiled NPL class that the command produces.
- -majorType major_type_of_the_node specifies the major type of the node for which the NPL rule file is compiled. This parameter is not applicable if the node is specified by using the -id argument.
- -minorType minor_type_of_the_node specifies the minor type of the node for which the NPL rule file is compiled. This parameter is not applicable if the node is specified by using the -id argument.
- -id node_id specifies the unique ID assigned to the node for which the NPL rule file is compiled. This parameter is not applicable if -majorType and -minorType are specified.
The NPL rule file is compiled. If the compilation fails, update the rule file and recompile.
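For example, the following command (with hypothetical file paths and node ID) compiles a rule file for a specific node:
compileNpl -f /u01/omc/npl/collector.npl -d /u01/omc/config/collector.class -id 31a80o-16it-jrzysls9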
You can store the compiled NPL rule file in the classpath directory in the config folder of the node and update the general.cfg file to use the compiled NPL rule file.
Note:
After compiling the NPL rule file, you must start and stop nodes only by using NMShell command-line components. Using the GUI to start or stop nodes uses only the attributes and NPL that are defined in GUI components.
Checking NMShell Command Status
To check the status of the last NMShell command run:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Enter the following command:
cmd -status
This command returns the following results:
- -1 specifies that the last command run failed or that no command has been run yet.
- 0 specifies that the last command run was successful.
Note:
When multiple nodes are started or stopped by using NMShell, the status of the command can be retrieved only by running the status command. See "Checking Node Status" for more information.
The cmd -status command confirms only whether the last command run succeeded or failed.
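An illustrative session, assuming the status code is printed to the console:
nmsh> cmd -status
0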
Deleting Nodes
To delete nodes:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Do one of the following:
- To delete all the nodes in the mediation host, enter the following command:
deleteNodes
All the nodes for the currently running mediation host are deleted.
- To delete all the nodes managed by a specific Node Manager, enter the following command:
deleteNode -ip mediation_hostname -p port
where:
- mediation_hostname is the mediation host's IP address or host name.
- port is the mediation host port number.
All the nodes for the specified mediation host are deleted.
- To delete specific nodes, enter the following command:
deleteNode node_id_1 node_id_2...
All the specified nodes are deleted.
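For example, the following command deletes a single node with a hypothetical node ID:
nmsh> deleteNode 31a80o-16it-jrzysls9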
Incremental Import and Export of Specific Nodes
In previous releases, you had to export or import the node configuration and customization of all the mediation hosts configured in Node Manager, even if the configuration or customization of only one node chain had been modified.
With this enhancement, you can export or import the node configuration and customization of one or more node chains under a Node Manager by using the Offline Mediation Controller user interface (GUI) or NMShell command-line components.
See the following topics for more information:
- Exporting Node Chain Configuration and Customization by Using GUI
- Importing Node Chain Configuration and Customization by Using GUI
- Exporting Node Chain Configuration and Customization by Using NMShell
- Importing Node Chain Configuration or Customization by Using NMShell
Note:
If you terminate the export or import process (by using GUI or NMShell), or if the system fails or an error occurs repeatedly, intermediate files, data files, and folders are created in the nodes directory in OMC_home. You must manually create an offline copy of these files and delete the nodes before running the command again.
Exporting Node Chain Configuration and Customization by Using GUI
To export the node chain configuration and customization by using GUI:
1. Log on to Offline Mediation Controller Administration Client.
The Node Hosts & Nodes (logical view) screen appears.
2. In the Mediation Hosts table, select a host.
3. In the Nodes on Mediation Host section, select the node from which you want to export the configuration and customization.
4. Right-click the node and select Export Node Chain, or click Export Node Chain on the node host panel.
The Export Configuration dialog box appears.
5. In the Directory field, enter the full path or browse to the directory to which you want to export the node chain configuration and customization.
6. Click Export.
The node chain configuration and customization are exported to the export_timestamp.xml and export_timestamp.nmx files respectively.
Importing Node Chain Configuration and Customization by Using GUI
To import the node chain configuration and customization by using GUI:
1. Log on to Offline Mediation Controller Administration Client.
The Node Hosts & Nodes (logical view) screen appears.
2. In the Mediation Hosts table, select a host.
3. In the Nodes on Mediation Host section, right-click and select one of the following, or select it from the node host panel:
- Import Node Chain Customization
- Import Node Chain Configuration
The Import Configuration dialog box appears.
4. In the Import File field, enter the full path or browse to the .xml or .nmx file from which you want to import the node chain configuration or customization.
The node managers are displayed under the Old Node Manager column in the Node Manager mapping pane.
5. Select a Node Manager from the list and click Map.
The Map dialog box appears.
6. Enter the Name, IP address or host name, and Port number for the new Node Manager.
7. Repeat step 5 and step 6 for the rest of the Node Managers in the list.
8. Select Regenerate Node id(s) to regenerate the node IDs of the nodes for which the configuration or customization is imported.
9. After mapping all Node Managers, click Import.
The node chain configuration and customization are imported into the selected Node Manager. After the import, a backup of the existing nodes is created in the OMC_home/importbackup directory.
Exporting Node Chain Configuration and Customization by Using NMShell
To export the node chain configuration and customization:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Enter the following command:
export [-n mediation_name@mediation_hostname:port] -f filename [-c value -nc y -id node_id]
where:
- -n mediation_name@mediation_hostname:port exports the mediation host's node configuration or node customization, where:
- mediation_name is the mediation host's name configured in Node Manager.
- mediation_hostname is the mediation host's IP address or host name.
- port is the port number at which the mediation host communicates with Node Manager.
To export from multiple mediation hosts, enter the hosts separated by commas (,).
- -f filename specifies the name and path of the output files. Do not include the file extension.
- -c value specifies whether to export both the node configuration and customization or only the node configuration, where value is one of the following:
- Y to export both the node configuration and node customization. Two files are generated: a filename.xml file with the node configuration and a filename.nmx file with the node customization. This is the default.
- N to export only the node configuration. One file is generated: a filename.xml file with the node configuration.
- -nc y specifies to export only the node chain configuration and customization.
- -id node_id specifies to export the node chain configuration and customization for the specified node_id. node_id is the unique ID assigned to the node when the node configuration is saved. You can specify one or more node IDs as comma-separated values.
For example:
export -n abc@localhost:55109 -f .../testnodechain/test/exportfile -c y -nc y -id 31a80o-16it-jrzysls9
The node configuration and customization from the 31a80o-16it-jrzysls9 node chain in the mediation host (abc@localhost:55109) are exported to the specified files.
Importing Node Chain Configuration or Customization by Using NMShell
To import the node chain configuration or customization:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
Note:
Ensure that Administration Server and Node Manager are available in the same OMC_home directory. If Node Manager is in a different directory, the node IDs are regenerated during the import by default and a backup of the old node chain is not created.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
Note:
Before importing the node chain configuration or customization, ensure that you stop the nodes for which you want to import the customization or configuration. See "Stopping Nodes" for more information.
5. Enter the following command:
import -n mediation_name@mediation_hostname:port -f filename -c value -nc y -r y
where:
- -n mediation_name@mediation_hostname:port specifies the mediation host configured in Node Manager, where:
- mediation_name is the mediation host's name configured in Node Manager.
- mediation_hostname is the IP address or host name of the mediation host you are importing to.
- port is the port number at which the mediation host you are importing to communicates with Node Manager.
The command verifies whether the mediation host exists in Node Manager. If the mediation host does not exist, the command generates an error.
- -f filename specifies the name and path of the input file. Use a filename.xml file to import the node configuration and a filename.nmx file to import the node customization.
- -c value specifies whether to import the node customization or the node configuration, where value is one of the following:
- Y to import only the node customization. Use this value with a filename.nmx file.
- N to import only the node configuration. Use this value with a filename.xml file.
- -nc y specifies to import only the node chain customization or configuration.
- -r y specifies to regenerate the node IDs of the nodes for which the configuration or customization is imported. This value must be set if you are importing node chain configuration or customization.
The node chain configuration or customization is imported into the specified mediation host. After the import, a backup of the existing nodes is created in the OMC_home/importbackup directory.
For example:
import -n linux1@10.10.10.111:55109 -f import.xml -c N -nc y -r y
The node chain configuration is imported from the import.xml file into the specified mediation host (linux1@10.10.10.111:55109).
After importing the node chain configuration or customization, you must manually map the nodes that are not part of the imported node chain and delete undesired nodes.
New Features in Offline Mediation Controller 12.0 Patch Set 1
Offline Mediation Controller 12.0 Patch Set 1 includes the following enhancements:
Configurable Location for Storing ECE Response Records
By default, the Offline Mediation Controller Oracle Communications Elastic Charging Engine (ECE) Distribution Cartridge (DC) node writes the ECE response records to the files in the default output directory of the ECE DC node; for example, the success response records are written to the file in the OMC_home/ocomc/output/ecedc_NodeID/success directory, where:
- OMC_home is the directory in which you installed Offline Mediation Controller.
- ecedc_NodeID is the unique identifier of the ECE DC node.
With this enhancement, you can configure a custom location for storing the ECE response records. If a custom location is not configured, the ECE DC node writes the records to the files in the default output directory of the ECE DC node.
You can configure the location for storing the ECE response records by using the following options in the Output Directory Configuration tab in the Node Configuration section:
Table 2-1 Output Directory Configuration Options for ECE Response Records
Field | Description
---|---
Duplicate request directory | Enter the path to the directory where all the files containing the duplicate response records must be stored.
Success response directory | Enter the path to the directory where all the files containing the success response records must be stored.
Suspense directory | Enter the path to the directory where all the files containing the suspense response records must be stored.
No-response directory | Enter the path to the directory where all the files containing the no-response records must be stored.
Delayed response directory | Enter the path to the directory where all the files containing the delayed response records must be stored.
For more information on the ECE DC node, see the discussion about the ECE cartridge pack in Offline Mediation Controller Cartridge Packs.
ECE Distribution Cartridge Can Be Configured for Disaster Recovery
The ECE DC node creates usage requests based on the call detail record (CDR) input stream and submits them to ECE for rating. If Node Manager or the system fails during this process, you might lose the input CDR data and be unable to create the usage requests.
To recover input CDRs and to allow failover in case of system failure, you can now configure the ECE DC for disaster recovery. This ensures that the CDR files are retained in the system until the ECE DC node receives a success response from ECE.
To configure the ECE DC node for disaster recovery:
1. Open the OMC_home/web/htdocs/AdminServerImpl.properties file in a text editor.
2. Set the following entry to true:
com.nt.udc.admin.server.AdminServerImpl.disasterRecovery true
3. Save and close the file.
4. Restart Administration Server and Administration Client.
With disaster recovery configured, if Node Manager or the system fails, the CDRs for which a response has not been received from ECE are stored in recovery (.archdel) files. The recovery files are stored in the input directory of the ECE DC node (the OMC_home/ocomc/input/ecedc_NodeID directory). You can use the RatedEventsChecker utility to reprocess the recovery files. For more information, see "Support for Filtering Delayed Response Records from ECE".
After you restart the system, you can copy the NAR files from the outputdir directory of the RatedEventsChecker utility to the input directory of the ECE DC node to reprocess the records.
Support for Filtering Delayed Response Records from ECE
In previous releases, the ECE DC node reprocessed all the delayed response records from ECE irrespective of the response received, such as success or failure.
With this enhancement, you can avoid reprocessing delayed response records that ECE has already processed by filtering the delayed response records based on the response received. You do this by using the NARComparator and RatedEventsChecker utilities.
The NARComparator utility compares the network accounting records (NARs) in the delayedresponsedir and noresponsedir directories:
- If the session ID of a NAR in the delayedresponsedir directory matches the session ID of a NAR in the noresponsedir directory, NARComparator writes the NAR to the file in the filteroutdir/success directory.
- If no match is found, NARComparator writes the NAR to the file in the filteroutdir/reprocess directory.
- If any error occurs during this process, NARComparator writes the NAR to the file in the filteroutdir/error directory.
The RatedEventsChecker utility checks whether the narfield values of the NARs in the inputdir directory exist in the Oracle Communications Billing and Revenue Management (BRM) database. The utility compares the narfield values of the NARs in the inputdir directory with the values stored in the columnname column in the BRM database. If no match is found, the NAR is copied to the file in the outputdir/reprocess directory for reprocessing.
Note:
After running the RatedEventsChecker utility, you must copy the files in the outputdir/reprocess directory to the input directory of the NAR CC node (the OMC_home/suspense directory) to reprocess the response records. Also ensure the following:
- The InputRec block of the NAR CC Node Programming Language (NPL) rule file is compatible with the NAR fields specified in the output file generated by RatedEventsChecker.
- The OutputRec block of the NAR CC NPL is compatible with the InputRec block of the ECE DC NPL.
You can configure the NARComparator and RatedEventsChecker utilities by using the OMC_home/ocomc/web/htdocs/NarComparator.properties and OMC_home/ocomc/web/htdocs/RatedEventsChecker.properties files respectively.
For more information, see the following:
Configuring NARComparator and RatedEventsChecker
To configure the NARComparator and RatedEventsChecker utilities:
1. Open the OMC_home/ocomc/web/htdocs/NarComparator.properties file.
2. Edit the configuration entries listed in Table 2-2:
Table 2-2 NARComparator Configuration Entries

Entry | Description
---|---
noresponsedir | Specify the path to the directory in which you want to store the no-response records.
delayedresponsedir | Specify the path to the directory in which you want to store the delayed response records.
filteroutdir | Specify the path to the directory in which you want to store the response records filtered by NARComparator.
narfilesuffix | Specify the string to append at the end of the NAR file name; for example, .arch or .archdel.
3. Save and close the file.
4. Open the OMC_home/ocomc/web/htdocs/RatedEventsChecker.properties file.
5. Edit the configuration entries listed in Table 2-3:
Table 2-3 RatedEventsChecker Configuration Entries

Entry | Description
---|---
dbuser | Specify the name of the BRM database user.
dbhost | Specify the host name or IP address of the computer on which the BRM database runs.
dbport | Specify the Oracle database port number.
dbsid | Specify the Oracle database alias.
dbservicename | Specify the BRM database service name.
JDBCUrl | Specify the Oracle JDBC URL to use to connect to the BRM database, such as jdbcUrl="jdbc:oracle:thin:@//hostname:port/servicename", where hostname and port are the host name and port number of the computer on which the database resides, and servicename is the name of the BRM database service.
JDBCDriver | Specify the Oracle JDBC driver to use to connect to the BRM database; for example, oracle.jdbc.driver.OracleDriver.
inputdir | Specify the path to the directory in which you want to store the NAR files from the filteroutdir/reprocess directory filtered by NARComparator.
outputdir | Specify the path to the directory in which you want to store the response records filtered by RatedEventsChecker.
inputfilesuffix | Specify the string to append at the end of the input file name; for example, .arch or .archdel.
tablename | Specify the name of the BRM database table in which the NAR session IDs are stored; for example, EVENT_T.
columnname | Specify the name of the column in the BRM database table that must be used for comparing NAR session IDs; for example, NETWORK_SESSION_ID.
narfield | Specify the name of the NAR field that must be used for comparing NAR session IDs; for example, session_id.
6. Save and close the file.
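For reference, the following is a minimal sketch of the two properties files, using the entries documented in Table 2-2 and Table 2-3; all paths and connection values are illustrative and must be adapted to your installation:
NarComparator.properties:
noresponsedir=/u01/omc/filter/noresponse
delayedresponsedir=/u01/omc/filter/delayedresponse
filteroutdir=/u01/omc/filter/out
narfilesuffix=.archdel
RatedEventsChecker.properties:
dbuser=pin
dbhost=brmhost.example.com
dbport=1521
dbservicename=pindb.example.com
JDBCUrl=jdbc:oracle:thin:@//brmhost.example.com:1521/pindb.example.com
JDBCDriver=oracle.jdbc.driver.OracleDriver
inputdir=/u01/omc/filter/out/reprocess
outputdir=/u01/omc/checker/out
inputfilesuffix=.archdel
tablename=EVENT_T
columnname=NETWORK_SESSION_ID
narfield=session_id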
Filtering Delayed Response Records
To filter the delayed response records received from ECE:
1. Copy the NAR files from the no-response directory of the ECE DC node into the noresponsedir directory specified in the OMC_home/ocomc/web/htdocs/NarComparator.properties file.
2. Copy the NAR files from the delayed response directory of the ECE DC node into the delayedresponsedir directory specified in the OMC_home/ocomc/web/htdocs/NarComparator.properties file.
3. Go to the OMC_home/bin/tools directory.
4. Enter the following command, which compares the NARs in the noresponsedir and delayedresponsedir directories:
./NARComparator
5. Verify that the success, error, and reprocess response records are written to the NAR files in the respective subdirectories of the filteroutdir directory.
The location of the filteroutdir directory is specified in the OMC_home/ocomc/web/htdocs/NarComparator.properties file.
6. Copy the ojdbc-version.jar file into the OMC_home/ocomc/3rdparty_jars/ directory, where version is the latest version of Java certified with Offline Mediation Controller.
See the discussion about Offline Mediation Controller system requirements in Offline Mediation Controller Installation Guide for the Java version.
7. Copy the NAR files from the filteroutdir/reprocess directory into the inputdir directory specified in the OMC_home/ocomc/web/htdocs/RatedEventsChecker.properties file.
8. Go to the OMC_home/bin/tools directory.
9. Enter the following command, which compares the NARs against the BRM database:
./RatedEventsChecker -p BRMdbPassword
where BRMdbPassword is the password of the BRM database user.
10. Verify that the response records are written to the reprocess directory in the outputdir directory specified in the OMC_home/ocomc/web/htdocs/RatedEventsChecker.properties file.
Enhanced NMShell Command-Line Components
In previous releases, you could only start or stop all the nodes in the currently running mediation host by using the NMShell command-line components.
Offline Mediation Controller now allows you to perform the following tasks by using NMShell:
- Start or stop all nodes in the mediation host.
- Start or stop all nodes for a specific Node Manager.
- Start or stop specific nodes by using their node IDs.
- Check the status of a node; for example, stopped, running, or suspended.
For more information, see:
Starting Nodes
To start nodes:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Do one of the following:
- To start all nodes in the mediation host, enter the following command:
startNodes
All the nodes for the currently running mediation host are started.
- To start all nodes managed by a specific Node Manager, enter the following command:
startNode -ip mediation_hostname -p port
where:
- mediation_hostname is the mediation host's IP address or host name.
- port is the mediation host port number.
All the nodes for the specified mediation host are started.
- To start specific nodes, enter the following command:
startNode node_id_1 node_id_2...
All the specified nodes are started.
Stopping Nodes
To stop nodes:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Do one of the following:
- To stop all the nodes in the mediation host, enter the following command:
stopNodes
All the nodes for the currently running mediation host are stopped.
- To stop all the nodes managed by a specific Node Manager, enter the following command:
stopNode -ip mediation_hostname -p port
where:
- mediation_hostname is the mediation host's IP address or host name.
- port is the mediation host port number.
All the nodes for the specified mediation host are stopped.
- To stop specific nodes, enter the following command:
stopNode node_id_1 node_id_2...
All the specified nodes are stopped.
Checking Node Status
To check the status of nodes:
1. Start Administration Server and Node Manager daemons. See the discussion about starting component daemons in Offline Mediation Controller Installation Guide.
2. Go to OMC_home/bin/tools and enter the following command:
./NMShell
The prompt changes to nmsh>.
3. Enter the following command:
login server_hostname port
where:
- server_hostname is the IP address or host name of the computer on which Administration Server is running.
- port is the Administration Server port number.
4. When prompted, enter the user name and password.
You are connected to Administration Server.
5. Do one of the following:
- To check the status of all the nodes in the mediation host, enter the following command:
status
The status of all the nodes for the currently running mediation host is displayed.
- To check the status of all the nodes managed by a specific Node Manager, enter the following command:
status -ip mediation_hostname -p port
where:
- mediation_hostname is the mediation host's IP address or host name.
- port is the mediation host port number.
The status of all the nodes for the specified mediation host is displayed.
- To check the status of specific nodes, enter the following command:
status node_id_1 node_id_2...
The status of all the specified nodes is displayed.
Offline Mediation Controller Is Now Certified with Oracle Unified Directory 12.2
Offline Mediation Controller 12.0 is now certified with Oracle Unified Directory 11.1.2.3.0 and 12.2.