7 ECE Post-Installation Tasks

Learn about the post-installation tasks required for Oracle Communications Billing and Revenue Management (BRM) Elastic Charging Engine (ECE). You must install or upgrade ECE before following these procedures.

Topics in this document:

Post-Installation Tasks Common to All ECE Installations

Post-Installation Tasks for an ECE Integrated Installation

Post-Installation Tasks for ECE Software Upgrade

Perform the following post-installation tasks for all ECE installation types:

  1. Set the properties for your driver machine. See "Specifying Driver Machine Properties".

  2. Set the properties for your server machine. See "Specifying Server Machine Properties".

  3. Expose the ECE MBeans, so you can configure ECE nodes through a JMX editor. See "Enabling Charging Server Nodes for JMX Management".

  4. Configure ECE to use multicast or unicast communication. See "Configuring ECE for Multicast or Unicast".

  5. Configure any Diameter Gateway nodes. See "Adding and Configuring Diameter Gateway Nodes for Online Charging".

  6. Configure any RADIUS Gateway nodes. See "Adding and Configuring RADIUS Gateway Nodes for Authentication and Accounting".

  7. Set your system's default currency. See "Configuring Default System Currency".

  8. If you installed ECE 12.0 Patch Set 7 or later, configure the ecs service for TLS 1.2. See "(Patch Set 7 or later) Configuring ecs for TLS 1.2".

  9. Deploy ECE from the driver machine onto the server machines in the cluster. See "Deploying ECE onto Server Machines".

Specifying Driver Machine Properties

The driver machine is the machine on which you installed ECE, and it is the machine used to administer the ECE system. You specify the driver machine properties in the ece.properties file.

If you installed an ECE standalone installation, you must add an entry to the properties file that specifies that Oracle Communications Billing and Revenue Management (BRM) is not installed. If you do not, ECE tries to load BRM update events and cannot transition into a usage processing state.

To specify the ECE driver machine properties:

  1. Open the ECE_home/oceceserver/config/ece.properties file.

  2. Specify the driver machine:

    • For a standalone installation, set the driverIP parameter either to localhost or to the explicit IP address or hostname of the machine. For example:

      driverIP = localhost

    • For an ECE system that has more than one machine, set driverIP to the explicit IP address of the driver machine.

  3. (ECE standalone installation) Specify that BRM is not installed by adding the following entry:

    java.property.skipBackLogProcessing=true
  4. For an ECE system that has more than one machine, specify that configuration settings of a secondary machine should not be loaded into the driver machine by adding the following entry:

    loadConfigSettings = false
  5. Save and close the file.
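Taken together, the entries added by this procedure might look like the following sketch of ece.properties (192.0.2.10 is a placeholder IP address; leave the file's other entries unchanged):

```
# Standalone installation:
driverIP = localhost
java.property.skipBackLogProcessing=true

# Multimachine installation (instead of the two entries above):
# driverIP = 192.0.2.10
# loadConfigSettings = false
```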

Specifying Server Machine Properties

You specify server machine properties to configure your ECE topology and tune the nodes in the cluster for garbage collection and heap size.

When you configure your ECE topology, you specify the ECE nodes in the cluster. This includes the physical host machines, or server machines, on which to deploy ECE nodes and the nodes themselves. Each server machine is a part of the Coherence cluster.

For an ECE standalone installation, you can accept all default values in the topology file if desired. You can add any number of charging server nodes (nodes that have the role server specified) and modify or delete existing charging server nodes. You must have at least one charging server node.

Note:

The topology file is pre-configured with several nodes that are required by ECE. Do not delete existing rows in this file.

To specify server machine properties:

  1. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  2. Add a row for each Coherence node for each physical host computer (server machine) in the cluster.

    For example, if you have three physical server machines and each physical server machine has three nodes, you require nine rows.

  3. For each row, enter the following information:

    • Name of the JVM process for that node.

      You can assign an arbitrary name. This name is used to distinguish processes that have the same role.

    • Role of the JVM process for that node.

      Each node in the ECE cluster plays a certain role.

    • Host name of the physical server machine on which the node resides.

      For a standalone system, enter localhost.

      A standalone system means that all ECE-related processes are running on a single physical server machine.

    • (For multihomed hosts) IP address of the server machine on which the node resides.

      For those hosts that have multiple IP addresses, enter the IP address explicitly so that Coherence binds to the intended network interface and port.

    • Whether you want the node to be JMX-management enabled.

      See "Enabling Charging Server Nodes for JMX Management".

    • The JVM tuning file that contains the tuning profile for that node.

  4. (For Diameter Gateway nodes) For one Diameter Gateway node, specify a JMX port.

    Choose a port number that is not in use by another application.

    By specifying a JMX port number for one Diameter Gateway node, you expose MBeans for setting performance-related properties and collecting statistics for all Diameter Gateway node processes.

  5. (For SDK sample programs) To run the SDK sample programs by using the sdkCustomerLoader, uncomment the line where the sdkCustomerLoader node is defined.

  6. Save the file.

    You must specify the JVM tuning parameters (the number of threads, memory, and heap size) for each Coherence node that you specified in the eceTopology.conf file by editing or creating the JVM tuning file(s).

  7. Open the ECE_home/oceceserver/config/defaultTuningProfile.properties file.

    You can create your own JVM tuning file and save it in this directory. You can name the file anything you want.

  8. Set the parameters as needed.

  9. Save the file.

  10. In the topology file (ECE_home/oceceserver/config/eceTopology.conf), ensure your JVM tuning file is associated with the node to which you want the tuning profile (as set by these parameters) to apply.

    The JVM tuning file is referenced by name in the topology file as mentioned earlier in this procedure.
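As a sketch only, a standalone topology edited by the steps above might contain rows like the following. The layout shown here (pipe-separated columns: node name, role, host name, host IP, JMX port, start CohMgt, JVM tuning file) and all values are illustrative; follow the column layout of the rows already present in your eceTopology.conf:

```
ecs1             | server          | localhost |  | 9999 | true  | defaultTuningProfile.properties
ecs2             | server          | localhost |  |      | false | defaultTuningProfile.properties
diameterGateway1 | diameterGateway | localhost |  | 9990 | false | defaultTuningProfile.properties
```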

Enabling Charging Server Nodes for JMX Management

After installing ECE, you may reset system configurations, such as connection parameters for connecting to other applications, and set business configurations, such as charging-related rules you want to apply at run time. To set most configuration parameters, you use a JMX editor such as JConsole. Before you can use a JMX editor to set configuration parameters, you must expose ECE MBeans. You expose ECE MBeans by enabling one ECE node for JMX management for each unique IP address in your topology. When a JMX-management-enabled node starts, it provides a JMX management service on the specified host and port which is used to expose the ECE configuration MBeans.

Though any ECE node can be enabled for JMX management, you enable charging server nodes for JMX management to support central configuration of the ECE system. Charging server nodes are always running, and enabling them for JMX management exposes MBeans for all ECE node processes (such as Diameter Gateway node instances, simulators, and data loaders).

To enable a charging server node for JMX management:

  1. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  2. For each physical server machine or unique IP address in the cluster, provide the following information for one charging server node (node with role server):

    • JMX port of the JVM process for that node.

      Enter any free port, such as 9999 (the default), for the charging server node that is to be the JMX-management-enabled node. Choose a port number that is not in use by another application.

    • Specify that you want the node to be JMX-management enabled by entering true in the start CohMgt column.

      For charging server nodes (nodes with the role server), always enable JMX management for the node for which a JMX port is supplied.

      Although multiple charging server nodes run on a single physical machine, set CohMgt=true for only one charging server node on each physical machine. Each machine must have exactly one charging server node with CohMgt=true for centralized configuration of ECE to work.

  3. Save the file.

Configuring ECE for Multicast or Unicast

You can configure ECE for multicast or unicast communication. Oracle Coherence uses the TCMP protocol, which can use the UDP/IP multicast or UDP/IP unicast methods of data transmission over the network. See the discussion about network protocols in Oracle Coherence Getting Started Guide for detailed information about how Oracle Coherence uses the TCMP protocol.

When ECE is deployed in a distributed environment (multiple machines), it uses multicast or unicast for discovering other nodes when forming a cluster; for example, for allowing a newly started node to discover a pre-existing cluster. Multicast is preferred because it allows a packet to be sent once rather than once for each node. Multicast can be used only if it is enabled in the operating system and the network.

To configure ECE for multicast or unicast, see the following topics:

Determining Whether Multicast Is Enabled

Configuring ECE for Multicast

Configuring ECE for Unicast

Determining Whether Multicast Is Enabled

To determine whether multicast is enabled in the operating system, use the Oracle Coherence multicast test utility. See the discussion about performing a multicast connectivity test in Oracle Coherence Administrator's Guide for detailed information about using the multicast test utility and how to understand the output of the test:

https://docs.oracle.com/cd/E18686_01/coh.37/e18679/tune_multigramtest.htm

To run the test on a single machine, go to the directory where the multicast-test.sh script is located and enter the following command:

$ ./multicast-test.sh -ttl 0

You can use the following tests to determine if multicast is enabled in the network. Start the test on Machine A and Machine B by entering the following command into the respective command window of each and pressing ENTER:

Machine A $ ./multicast-test.sh -ttl 1
Machine B $ ./multicast-test.sh -ttl 1

If multicast across Machine A and Machine B is not working with a TTL (time to live) setting of 1, repeat this test with the default TTL setting of 4. A TTL setting of 4 is required when the machines are not on the same subnet. If all participating machines are connected to the same switch, and therefore in the same subnet, use the TTL setting of 1.

If Machine A and Machine B both have multicast enabled in the environment, the test output for each machine will show the machine issuing multicast packets and seeing both its own packets as well as the packets of the other machine. This indicates that multicast is functioning properly between the machines.

Configuring ECE for Multicast

To configure ECE when using multicast:

  1. Verify the TTL value you must use in your environment.

    See "Determining Whether Multicast Is Enabled".

  2. Open the ECE Coherence override file your ECE system uses (for example, ECE_home/oceceserver/config/charging-coherence-override-prod.xml).

    To confirm which ECE Coherence override file is used, refer to the tangosol.coherence.override parameter of the ECE_home/oceceserver/config/ece.properties file.

    Tip:

    When multicast is used, charging-coherence-override-prod.xml enables multicast across multiple computers within a single subnet.

  3. In the multicast-listener section, update the tangosol.coherence.ttl parameter to match the TTL value you must use in your environment.

    For example, to set a TTL value of 4:

    <multicast-listener>                                                                                                                   
       <address system-property="tangosol.coherence.clusteraddress">ip_address</address>                                                
       <port system-property="tangosol.coherence.clusterport">port</port>                                                                
       <time-to-live system-property="tangosol.coherence.ttl">4</time-to-live>                                                            
    </multicast-listener>
    

    Note:

    You can segregate multiple ECE clusters within the same subnet by assigning distinct tangosol.coherence.clusteraddress values for each cluster.

  4. Save the file.

Configuring ECE for Unicast

If multicast is not used, you must set up the Well Known Addresses (WKA) mechanism for your ECE cluster. Configuring a list of well known addresses prevents Coherence from using multicast.

To configure ECE when not using multicast:

  1. Open the ECE Coherence override file your ECE system uses (for example, ECE_home/oceceserver/config/charging-coherence-override-prod.xml).

    To confirm which ECE Coherence override file is used, refer to the tangosol.coherence.override parameter of the ECE_home/oceceserver/config/ece.properties file.

  2. Comment out the multicast-listener section.

  3. Add the following unicast-listener section to the file:

    <unicast-listener>
      <well-known-addresses>
        <socket-address id="id">
          <address system-property="tangosol.coherence.wka">ip_address</address>
          <port system-property="tangosol.coherence.wka.port">port</port>
        </socket-address>
        ...
      </well-known-addresses>
      <port system-property="tangosol.coherence.localport">port</port>
    </unicast-listener>

    where:

    • id is the ID for a particular cluster member.

    • tangosol.coherence.wka must refer to the machine that runs the first Elastic Charging Server node (the ecs1 charging server node).

    • ip_address is the IP address of the cluster member.

    • port is the value specified for the member's unicast listener port.

  4. Save the file.
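For illustration, a filled-in well-known-addresses configuration for two cluster members might look like the following (the IP addresses and ports are placeholders; adjust them to your cluster members, with 192.0.2.10 standing in for the machine that runs the ecs1 node):

```
<unicast-listener>
  <well-known-addresses>
    <socket-address id="1">
      <address system-property="tangosol.coherence.wka">192.0.2.10</address>
      <port system-property="tangosol.coherence.wka.port">8088</port>
    </socket-address>
    <socket-address id="2">
      <address>192.0.2.11</address>
      <port>8088</port>
    </socket-address>
  </well-known-addresses>
  <port system-property="tangosol.coherence.localport">8088</port>
</unicast-listener>
```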

Adding and Configuring Diameter Gateway Nodes for Online Charging

During ECE installation, if you specified that Diameter Gateway must be started when ECE is started, the ECE Installer creates a single instance (node) of Diameter Gateway (diameterGateway1) and adds it to your topology. By default, this instance listens on all network interfaces for Diameter messages.

For a standalone installation, a single node is sufficient for basic testing directly after installation; for example, to test if the Diameter client can send a Diameter request to the Diameter Gateway node. Add additional Diameter Gateway nodes to your topology, configure them to listen on the different network interfaces in your environment, and perform performance testing. For information, see "Adding Diameter Gateway Nodes for Online Charging" in BRM System Administrator's Guide.

Note:

When configuring additional Diameter Gateway nodes, ensure that you configure the Diameter peers and alternative peers for routing notifications. See "Configuring Alternative Diameter Peers for Notifications" in ECE Implementing Charging for more information.

Adding and Configuring RADIUS Gateway Nodes for Authentication and Accounting

During ECE installation, if you specified that RADIUS Gateway must be started when ECE is started, the ECE Installer creates a single instance (node) of RADIUS Gateway (radiusGateway1) and adds it to your topology. By default, this instance listens for RADIUS messages.

For a standalone installation, a single node is sufficient for basic testing directly after installation; for example, to test if the RADIUS client can send a RADIUS request to the RADIUS Gateway node. Add additional RADIUS Gateway nodes to your topology and configure them to listen on the different network interfaces in your environment. For information, see "Adding RADIUS Gateway Nodes" in BRM System Administrator's Guide.

Configuring Default System Currency

During rating, ECE uses the subscriber's primary currency or the secondary currency for charging subscribers. If the currency used in the rate plans does not match the subscriber's primary or secondary currency, ECE uses the default system currency, US dollars.

For more information, see "Configuring Default System Currency" in BRM System Administrator's Guide.

(Patch Set 7 or later) Configuring ecs for TLS 1.2

If you installed ECE 12.0 Patch Set 7 or later, update the ecs service to use TLS 1.2.

To configure the ecs service for TLS 1.2:

  1. Open the following files in a text editor:

    • ECE_home/bin/environment.sh

    • ECE_home/config/defaultTuningProfile.properties

    • SDK_home/bin/launcher.sh

  2. Add the following lines to each file.

    -Djdk.tls.client.protocols=TLSv1.2 \
    -Djdk.tls.server.protocols=TLSv1.2
  3. Save and close the files.

Deploying ECE onto Server Machines

If you installed an ECE standalone installation on a single machine only, you can skip this task.

If your ECE cluster includes multiple physical server machines, you run the ECE sync command to deploy ECE from the driver machine onto the server machines in the cluster.

Deploying ECE onto the server machines (in your distributed environment) means that you distribute the Elastic Charging Server node instances (charging server nodes) and other nodes defined in your topology file across the server machines.

Tip:

For the sync command to work as expected, all hosts included in the eceTopology.conf file must have the same login ID (user ID), and password-less SSH must be configured from the driver machine to all hosts.

To deploy ECE onto server machines:

  1. Log on to the driver machine.

  2. Change directory to the ECE_home/oceceserver/bin directory.

  3. Start Elastic Charging Controller (ECC):

    ./ecc
  4. Deploy the ECE installation onto server machines:

    sync

    The sync command copies the relevant files of the ECE installation onto the server machines you have defined to be part of the ECE cluster.

Post-Installation Tasks for an ECE Integrated Installation

For an ECE integrated installation, you perform the post-installation tasks common to all ECE installations and also the tasks described in this section. See "Post-Installation Tasks Common to All ECE Installations" for information on common post-installation tasks.

For an integrated installation, after you install ECE, you must do the following:

  1. If you specified to use WebLogic JMS queues during the ECE installation process, create the queues in Table 7-1 for BRM, ECE, and Pricing Design Center (PDC).

    Note:

    HTTP Gateway does not support WebLogic JMS queues. Use Apache Kafka topics with HTTP Gateway.

    Table 7-1 WebLogic JMS Queues

    • Suspense queue: Set up a suspense queue on a server running Oracle WebLogic Server where ECE publishes failed data updates from Customer Updater.

      See "Configuring the Customer Updater Suspense Queue" in BRM System Administrator's Guide for more information.

    • Acknowledgment queue: Set up an acknowledgment queue where ECE publishes acknowledgments for BRM.

      For example, ECE uses this queue to send acknowledgment events to BRM during the rerating process, indicating that the process can start or finish.

    • ECE notification queue (JMS topic): Set up an ECE notification queue on a server running Oracle WebLogic Server where ECE can publish notification events for consumption by external systems, such as Oracle Communications Offline Mediation Controller. The ECE notification queue is a JMS topic; it can be on the same WebLogic server as the JMS queue where PDC publishes pricing updates.

      If you set up multiple JMS WebLogic servers for failover, you must enter their connection information in the ECE_home/oceceserver/config/JMSConfiguration.xml file. See "Configuring Credentials for Multiple JMS WebLogic Servers".

      See "Configuring Notifications in ECE" in ECE Implementing Charging for more information.

    For instructions on creating these queues, see "Creating WebLogic JMS Queues for BRM".

  2. If you specified to use Apache Kafka topics during the ECE installation process, do this:

    • Create the following Kafka topics for ECE: ECE Notification topic, ECE failure topic, Suspense topic, and ECE overage topic. For instructions on creating Kafka topics, see "Creating Kafka Topics for ECE".

    • Create an Acknowledgment queue for BRM. For instructions, see "Creating WebLogic JMS Queues for BRM"; when prompted, create only the Acknowledgment queue.

  3. Install and configure your network mediation software. For example:

    • If you use Diameter Gateway as your network integration for online charging, ensure that you have added and configured Diameter Gateway nodes to listen on the different network interfaces in your environment. See "Adding and Configuring Diameter Gateway Nodes for Online Charging" for more information.

    • If you use Offline Mediation Controller as network mediation software for offline charging, see Offline Mediation Controller Cartridge Packs for instructions on installing and configuring Offline Mediation Controller to access ECE SDK libraries and send usage requests for offline CDRs.

  4. Enable secure communication between components in the ECE integrated installation. See the following topics for more information:

    Generating Java KeyStore Certificates

    Exporting Java KeyStore Certificates

    Importing Java KeyStore Certificates

Creating WebLogic JMS Queues for BRM

Use the post_Install.pl script to create the required WebLogic JMS queues: Suspense queue, Acknowledgment queue, and Notification queue.

Note:

If you have a multischema BRM environment, you must manually create suspense, acknowledgment, and notification queues for each secondary BRM schema by using the pin_ifw_sync_oracle.pl script. See "Creating Additional Queues for Multischema BRM Systems" in BRM Installation Guide for more information.

Location

ECE_home/oceceserver/post_installation/

Syntax

perl post_Install.pl

You are prompted to install the BRM Suspense queue, Acknowledgment queue, and Notification queue. You can choose to install one, two, or all three queues.

Note:

If you are using Kafka topics for HTTP Gateway, install only the Acknowledgment queue.

The queue names are specified during the ECE installation process and are used by the post installation script.

If queues are already created, you see a message in the log files: if the suspense and acknowledgment queues are already created, a note is logged in brm_queue.log, and if the notification queue is already created, a note is logged in output.log. Alternatively, you can check whether the BRM queues exist by querying the user_queues table on your BRM machine.
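For example, the following query, run as the BRM database user (for instance in SQL*Plus), lists the existing queues so you can confirm which ones were created:

```
SELECT name FROM user_queues;
```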

Parameters

For the BRM suspense and acknowledgment queues, enter the BRM machine password in addition to the following parameters:

  • BRM_HOSTNAME: The IP address or the host name of the computer on which the BRM database is configured.

  • BRM_USER: The BRM user name on the specified host.

  • BRM_DB_PASSWORD: The password for the BRM database user.

For the ECE notification queue, you enter the following WebLogic server parameters:

  • JMS_PASSWORD: The password for logging on to the WebLogic server on which the JMS queue resides.

  • JMS_MODULE_NAME: The name of the JMS system module that has already been created on the WebLogic server.

  • JMS_SUBDEPLOYMENT: The name of the subdeployment target in the JMS system module that has already been created on the WebLogic server.

After the JMS ECE notification queue is created, do the following in the WebLogic server:

  1. Log on to the WebLogic Server on which the JMS topic for the ECE notification queue resides.

  2. In the WebLogic Server Administration Console, from the JMS modules list, select the connection factory that applies to the JMS topic.

  3. In the Client tab, do the following:

    1. Set Reconnect Policy to None.

    2. Set Client ID Policy to Unrestricted.

    3. Set Subscription Sharing Policy to Sharable.

  4. In the Transactions tab, set Transaction Timeout to 2147483647.

Creating Kafka Topics for ECE

Use the kafka_post_install.sh script to create the ECE Notification topic, ECE failure topic, Suspense topic, and ECE overage topic in Apache Kafka.

Location

ECE_home/oceceserver/post_installation/

Syntax

sh kafka_post_install.sh

When running the script, enter your Kafka bin path when prompted to do so.

The script creates the Kafka topics using the Kafka host name, topic names, number of partitions, and replication factor details that you specified during the ECE installation process.

Configuring Credentials for Multiple JMS WebLogic Servers

The ECE installer gathers connection information for only one JMS WebLogic server on which the ECE notification queue (JMS topic) is to reside. If your ECE system includes multiple ECE notification queue hosts for failover, you must specify connection information for all hosts.

To configure credentials for multiple JMS WebLogic servers:

  1. Open the ECE_home/oceceserver/config/JMSConfiguration.xml file.

  2. Locate the <MessagesConfigurations> section.

  3. Specify values for the parameters in the following JMSDestination name sections:

    • NotificationQueue: Read by the ECE charging nodes.

    • BRMGatewayNotificationQueue: Read by BRM Gateway.

    • DiameterGatewayNotificationQueue: Read by Diameter Gateway.

    Note:

    Do not change the value of the JMSDestination name parameter.

    Each JMSDestination name section contains the following parameters:

    • HostName: If you provided a value for the Host Name field on the ECE Notification Queue Details installer screen, that value appears here. Add this host to the ConnectionURL parameter, which takes precedence over HostName.

    • Port: If you provided a value for the Port Number field on the ECE Notification Queue Details installer screen, that value appears here. Add this port number to the ConnectionURL parameter, which takes precedence over Port.

    • Protocol: Specify the wire protocol used by your WebLogic servers in the ConnectionURL parameter, which takes precedence over Protocol.

    • ConnectionURL: List all the URLs that applications can use to connect to the JMS WebLogic servers on which your ECE notification queue (JMS topic) or queues reside.

      Note:

      When this parameter contains values, it takes precedence over the deprecated HostName, Port, and Protocol parameters.

      Use the following URL syntax:

      [t3|t3s|http|https|iiop|iiops]://address[,address]...

      where:

      t3, t3s, http, https, iiop, or iiops is the wire protocol used.

         For a WebLogic server, use t3.

      address is hostlist:portlist.

      hostlist is hostname[,hostname]...

      hostname is the name of a WebLogic server on which a JMS topic resides.

      portlist is portrange[+portrange]...

      portrange is port[-port]

      port is the port number on which the WebLogic server resides.

      Examples:

      t3://hostA:7001
      t3://hostA,hostB:7001-7002

      The preceding URL is equivalent to all the following URLs:

      t3://hostA,hostB:7001+7002 
      t3://hostA:7001-7002,hostB:7001-7002 
      t3://hostA:7001+7002,hostB:7001+7002 
      t3://hostA:7001,hostA:7002,hostB:7001,hostB:7002 

      Note:

      If multiple URLs are specified for a high-availability configuration, an application randomly selects one URL and then tries the others until one succeeds.

    • ConnectionRetryCount: Specify the number of times a connection is retried after it fails.

      This applies only to clients that receive notifications from BRM.

    • ConnectionRetrySleepInterval: Specify the number of milliseconds between connection retry attempts.

  4. (Optional) Modify the values of the following parameters in the JMSDestination name section:

    • UserName: Specify the user for logging on to the WebLogic server.

      This user must have write privileges for the JMS topic.

    • Password: Specify the password for logging on to the WebLogic server.

      When you install ECE, the password you enter is encrypted and stored in the KeyStore. If you change the password, you must run a utility to encrypt the new password before entering it here. See "About Encrypting Passwords" in BRM System Administrator's Guide.

    • ConnectionFactory: Specify the connection factory used to create connections to the JMS topic on the WebLogic server to which ECE publishes notification events.

      You must also configure settings in Oracle WebLogic Server for the connection factory. For the discussion about setting up a JMS topic on a WebLogic server, see the Oracle WebLogic Server documentation.

    • QueueName: Specify the JMS topic that holds the published external notification messages.

    • InitialContextFactory: Specify the name of the initial context factory used to create connections to the JMS topic on each WebLogic server to which ECE will publish notification events.

    • RequestTimeOut: Specify the number of milliseconds in which requests to the WebLogic server must be completed before the operation times out.

    • KeyStorePassword: If SSL is used to secure the ECE JMS queue connection, specify the password used to access the SSL KeyStore file.

    • KeyStoreLocation: If SSL is used to secure the ECE JMS queue connection, specify the full path to the SSL KeyStore file.

  5. Save and close the file.
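As an illustrative sketch only, a NotificationQueue destination configured for two failover hosts might resemble the following. The element names reflect the parameters described in this procedure, and jms_user, ECENotificationConnectionFactory, ECENotificationTopic, hostA, and hostB are placeholder values; your installed JMSConfiguration.xml is the authoritative reference for the exact structure:

```
<JMSDestination name="NotificationQueue">
  <ConnectionURL>t3://hostA:7001,hostB:7001</ConnectionURL>
  <UserName>jms_user</UserName>
  <ConnectionFactory>ECENotificationConnectionFactory</ConnectionFactory>
  <QueueName>ECENotificationTopic</QueueName>
  <ConnectionRetryCount>10</ConnectionRetryCount>
  <ConnectionRetrySleepInterval>30000</ConnectionRetrySleepInterval>
</JMSDestination>
```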

Generating Java KeyStore Certificates

To generate Java KeyStore certificates for connecting to the WebLogic server, PDC, BRM, and the HTTP Gateway:

  1. Log on to the driver machine.

  2. Go to the Java_home/bin directory, where Java_home is the directory in which you installed the latest supported Java version.

  3. Run the following commands:

    keytool -genkey -alias weblogic -dname "CN=commonname, OU=organizationalunit, O=organization, C=countryname" -keyalg RSA -keypass mykeypass -keystore mykeystore -storepass mystorepass -validity valdays

    keytool -genkey -alias pdc -dname "CN=commonname, OU=organizationalunit, O=organization, C=countryname" -keyalg RSA -keypass mykeypass -keystore mykeystore -storepass mystorepass -validity valdays

    keytool -genkey -alias brm -dname "CN=commonname, OU=organizationalunit, O=organization, C=countryname" -keyalg RSA -keypass mykeypass -keystore mykeystore -storepass mystorepass -validity valdays

    keytool -genkey -alias http_gateway -dname "CN=commonname, OU=organizationalunit, O=organization, C=countryname" -keyalg RSA -keystore ${oracleHome}/oceceserver/config/httpGatewayServer.jks -keypass mykeypass -storepass mystorepass

    where:

    • commonname is the first and last name.

    • organizationalunit is the container within a domain that can hold users, groups, and computers.

    • organization is the name of the organization.

    • countryname is the name of the country.

    • mykeypass is the key password for the certificate.

    • mykeystore is the KeyStore.

    • mystorepass is the KeyStore password.

    • valdays is the number of days that the KeyStore is valid.

    The Java KeyStore certificates for the WebLogic server, PDC, BRM, and the HTTP Gateway are generated.

Exporting Java KeyStore Certificates

To export the Java KeyStore certificates to a file:

  1. Log on to the driver machine.

  2. Go to the Java_home/bin directory, where Java_home is the directory in which you installed the latest supported Java version.

  3. Run the following commands:

    keytool -export -alias weblogic -keystore mykeystore -storepass mystorepass -rfc -file certificatename
    
    keytool -export -alias pdc -keystore mykeystore -storepass mystorepass -rfc -file certificatename
    
    keytool -export -alias brm -keystore mykeystore -storepass mystorepass -rfc -file certificatename
    
    keytool -export -alias http_gateway -keystore mykeystore -storepass mystorepass -rfc -file certificatename

    where:

    • certificatename is the name of the file in which to store the exported Java KeyStore certificate for connecting to the WebLogic server, PDC, BRM, or the HTTP Gateway. Specify a different file name for each alias so that the exported certificates do not overwrite one another.

    • mykeystore is the KeyStore.

    • mystorepass is the KeyStore password.

    The Java KeyStore certificates are exported to the specified certificate files (for example, public-admin.cer).
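One way to guarantee a unique file per alias is to derive the file name from the alias, as in this sketch (keystore name, password, and file-naming scheme are illustrative placeholders; the commands are echoed as a dry run):

```shell
# Dry-run sketch: export each alias to its own certificate file so the
# files do not overwrite one another. Remove "echo" to execute.
KEYSTORE=mykeystore
for alias in weblogic pdc brm http_gateway; do
  CERT_FILE="public-$alias.cer"
  echo keytool -export -alias "$alias" -keystore "$KEYSTORE" \
       -storepass mystorepass -rfc -file "$CERT_FILE"
done
```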

Importing Java KeyStore Certificates

To import the Java KeyStore certificates into the default Java KeyStore:

  1. Log on to the driver machine.

  2. Go to the Java_home/bin directory, where Java_home is the directory in which you installed the latest supported Java version.

  3. Run the following commands:

    keytool -import -alias weblogic -keystore Java_home/jre/lib/security/cacerts -storepass mystorepass -file certificatename -noprompt

    keytool -import -alias pdc -keystore Java_home/jre/lib/security/cacerts -storepass mystorepass -file certificatename -noprompt

    keytool -import -alias brm -keystore Java_home/jre/lib/security/cacerts -storepass mystorepass -file certificatename -noprompt

    keytool -import -alias http_gateway -keystore Java_home/jre/lib/security/cacerts -storepass mystorepass -file certificatename -noprompt

    After the imports succeed, optionally remove the intermediate KeyStore and certificate files:

    rm mykeystore certificatename

    where:

    • certificatename is the name of the certificate file in which the Java KeyStore certificates for connecting to the WebLogic server, PDC, BRM, and the HTTP Gateway are stored.

    • mykeystore is the KeyStore.

    • mystorepass is the KeyStore password.

    The Java KeyStore certificates are imported into the default Java KeyStore.
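Assuming the per-alias certificate file names suggested earlier, the imports can be sketched as one loop. JAVA_HOME, the fallback path, and the file names are illustrative placeholders; the stock cacerts password is "changeit" unless your site has changed it. Commands are echoed as a dry run.

```shell
# Dry-run sketch: import each exported certificate into the default Java
# KeyStore (cacerts). Remove "echo" to execute.
CACERTS="${JAVA_HOME:-/usr/java/latest}/jre/lib/security/cacerts"
for alias in weblogic pdc brm http_gateway; do
  echo keytool -import -alias "$alias" -keystore "$CACERTS" \
       -storepass changeit -file "public-$alias.cer" -noprompt
done
echo rm mykeystore public-*.cer   # optional cleanup of intermediate files
```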

Post-Installation Tasks for ECE Software Upgrade

After upgrading to ECE 12.0 or an ECE 12.0 Patch Set, perform the following post-installation tasks:

Note:

ECE 12.0 does not support rolling upgrades. You must stop all ECE nodes in your existing installation, upgrade, and then restore your upgraded ECE system.

  1. Reconfigure the default system and business configuration files. See "Reconfiguring Configuration File Settings to Match Your Old Release".

  2. Manually copy your existing mediation specification data to the new installation. See "Copying the Mediation Specification Data to the New Installation".

  3. Apply any ECE extensions to the new installation. See "Upgrading Extension Code".

  4. Merge your existing ECE configuration files with the new installation files. See "Updating the BRM Configuration Files".

  5. Deploy ECE onto your server machines. See "Deploying ECE 12.0 Onto Server Machines".

  6. Restore your upgraded ECE system. See "Stopping and Restoring Your ECE System".

Reconfiguring Configuration File Settings to Match Your Old Release

After installing ECE 12.0, reconfigure the default system and business configuration files of the new installation to match the settings in your old release configuration files.

Reconfigure all settings in the files of the following directories to match your old installation settings:

  • ECE_home/oceceserver/

  • ECE_home/oceceserver/config

  • ECE_home/oceceserver/brm_config

You must move the configuration data, such as your custom customer profile data and request specification files, into the ECE_New_home created for ECE 12.0.

You can also use a merge tool to merge the configuration files you have changed in your old installation with the configuration files in the new installation.

Note:

Do not use a merge tool for reconfiguring the settings in the ECE_home/oceceserver/config/management/charging-settings.xml file. New and changed properties can be introduced in this file, which would make the file difficult to merge.

To reconfigure the settings of the ECE_New_home/oceceserver/config/management/charging-settings.xml file on your new installation:

  1. On the driver machine, open the ECE_New_home/oceceserver/config/eceTopology.conf file.

  2. For each physical server machine or unique IP address in the cluster, enable charging server nodes (such as the ecs1 node) for JMX management by specifying a JMX port for each node and setting start CohMgt to true for that node.

    Note:

    Do not specify the same port for the JMX management service that is used by your old ECE installation. Enable charging server nodes on your new installation for JMX management by using unique port numbers.

  3. Save and close the file.

  4. Start the JMX-management-enabled charging server nodes by doing the following:

    1. Change directory to the ECE_New_home/oceceserver/bin directory.

    2. Start Elastic Charging Controller (ECC):

      ./ecc
    3. Run the following command:

      start server
  5. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    3. Connect to the ECE charging server node for which start CohMgt is set to true in the ECE_New_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    4. In the editor's MBean hierarchy, expand the ECE Configuration node.

  6. Use the JMX editor to enter values for all settings associated with your old release's ECE_home/oceceserver/config/management/charging-settings.xml file and enter values for any new settings introduced in the new release.

  7. Stop the JMX-management-enabled charging server nodes.
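For step 2, a JMX-enabled entry in eceTopology.conf might look like the following fragment. This is for illustration only: the column layout shown in the comments of your own eceTopology.conf file is authoritative, the host names and ports here are placeholders, and the new installation's JMX ports must differ from the old installation's.

```
# node-name | role | host name | JMX port | start CohMgt | JVM tuning file
ecs1 | server | host1.example.com | 9999  | true  | defaultTuningProfile.properties
ecs2 | server | host2.example.com | 10000 | false | defaultTuningProfile.properties
```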

Copying the Mediation Specification Data to the New Installation

The mediation specification data is not automatically copied to the new installation.

You must manually copy the mediation specification data (for example, diameter_mediation.spec) in your old release's ECE_home/oceceserver/config/management directory to the new installation.

Upgrading Extension Code

If you customized rating by implementing ECE extensions using the ECE extensions API, apply the customizations to the corresponding files of the new installation.

Upgrade your extension code and recompile the sources in ECE_home/oceceserver/config/extensions against the new libraries. Ensure that the packaged extensions JAR files are available to the ECE runtime environment in the ECE_home/lib directory.
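As an illustrative sketch only (the ECE_home path, output directory, and extension JAR name are assumptions, not values from this guide), the recompile-and-package step might look like the following. The compile and package commands are echoed as a dry run; remove echo to execute them.

```shell
# Dry-run sketch: recompile extension sources against the new ECE libraries
# and package them where the runtime can load them. All paths are assumed.
ECE_HOME=/opt/ece
SRC_DIR="$ECE_HOME/oceceserver/config/extensions"
OUT_DIR=/tmp/ece-extension-classes

mkdir -p "$OUT_DIR"
echo javac -classpath "$ECE_HOME/lib/*" -d "$OUT_DIR" "$SRC_DIR"/*.java
echo jar cf "$ECE_HOME/lib/my_extensions.jar" -C "$OUT_DIR" .
```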

Updating the BRM Configuration Files

Update the BRM configuration files on your BRM installation.

Copy the ECE_home/oceceserver/brm_config/payloadconfig_ece_sync.xml file to your BRM environment and merge it with the version you are currently running on the BRM installation. Add your customizations to this file.

Deploying ECE 12.0 Onto Server Machines

If you installed an ECE standalone installation, you can skip this task.

Deploying ECE 12.0 onto the server machines means that you distribute the Elastic Charging Server node instances (charging server nodes) and other nodes defined in your topology file across the server machines.

To deploy ECE 12.0 onto your server machines:

  1. Open the ECE_New_home/oceceserver/config/eceTopology.conf file and your old release topology file.

  2. Verify the following:

    1. The settings in the ECE_New_home/oceceserver/config/eceTopology.conf file are the same as specified in your old release topology file.

      Your topology configuration must be identical to that of your old installation. Oracle recommends that you copy the topology file from your old installation.

    2. All the hosts included in the ECE_New_home/oceceserver/config/eceTopology.conf file have the same login ID (user ID) and the password-less SSH has been configured to all hosts from the driver machine.

  3. Save and close the files.

  4. Verify that all of the custom, system, and business configuration files of the new installation match the settings of your old installation's configuration files, and that your custom request specification files and custom customer profile data are carried over.

  5. Open the ECE_New_home/oceceserver/config/management/migration-configuration.xml file.

  6. Verify that the configObjectsDataDirectory parameter is set to the directory where you store your configuration data (mediation specification used by Diameter Gateway).

  7. Save and close the file.

  8. Log on to the driver machine.

  9. Change directory to ECE_New_home/oceceserver/bin.

  10. Start ECC:

    ./ecc
  11. Run the following command, which deploys the ECE installation onto server machines:

    sync

    The sync command copies the relevant files of the ECE installation onto the server machines in the ECE cluster.
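Because sync depends on password-less SSH from the driver machine to every host, you can check connectivity and then spot-check the copied files along these lines. The host names, user ID, and ECE_home path below are placeholders, and the commands are echoed as a dry run; remove echo to execute them.

```shell
# Dry-run sketch: distribute the driver machine's public key to each host
# (one-time setup) and confirm the synced files exist on each server machine.
HOSTS="server1 server2"
ECE_USER=ece_user
ECE_HOME=/opt/ece

for h in $HOSTS; do
  echo ssh-copy-id "$ECE_USER@$h"                 # skip if already configured
  echo ssh "$ECE_USER@$h" "ls $ECE_HOME/oceceserver/config/eceTopology.conf"
done
```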

Stopping and Restoring Your ECE System

Perform this task only if you want to restore the upgraded ECE system.

Caution:

Restarting the ECE system removes all data from the Coherence caches, because stopping all charging server nodes clears the caches.

To restore an upgraded ECE system:

  1. Stop all ECE nodes of the old ECE installation.

  2. In PDC, publish all the PDC pricing data (the metadata, setup, pricing, and profile data) from the PDC database to ECE by running the following command:

    ImportExportPricing -publish -metadata -config -pricing -profile -target [ece] 

    Running this command publishes all the metadata, setup, pricing, and profile data in the PDC database to ECE.

  3. Ensure that you have reconfigured the ECE_New_home/config/management/charging-settings.xml file to match the settings (including customizations) in your old release and enter values for settings introduced in the new release.

  4. On the driver machine, go to the ECE_New_home/oceceserver/bin directory.

  5. Start ECC:

    ./ecc
  6. Enable real-time synchronization of BRM and ECE customer data updates. See "Synchronizing Data Between ECE and the BRM Database" in ECE Implementing Charging for more information.

  7. Start ECE processes and gateways in the following order:

    Note:

    Depending on your installation, you start Diameter Gateway, RADIUS Gateway, or both.

    start server
    start configLoader
    start pricingUpdater
    start customerUpdater
    start emGateway
    start brmGateway
    start ratedEventFormatter
    start diameterGateway
    start radiusGateway

    All data is now back in the ECE data grid.

    Real-time data updates, which were temporarily disrupted by the shutdown, are processed upon restart.