6 Upgrading Existing ECE 11.3 Installation

This chapter describes how to upgrade the existing Oracle Communications Billing and Revenue Management Elastic Charging Engine (ECE) 11.3 installation.

In this chapter, the current ECE 11.3 patch set release running on your system is called the old release. The ECE patch set you are upgrading to is called the new release.

When upgrading the existing ECE installation, note the following:

  • A direct upgrade from the ECE 11.1 or ECE 11.2 release is not supported.

  • If you are upgrading to the latest patch set from ECE 11.3 or an earlier ECE 11.3 patch set release, you must upgrade your system to all prior patch set releases first. For example, if you are running ECE 11.3 and upgrading to ECE 11.3 Patch Set 9, you must upgrade in the following order:

    • ECE 11.3 Patch Set 1

    • ECE 11.3 Patch Set 2

    • ECE 11.3 Patch Set 3

    • ECE 11.3 Patch Set 4

    • ECE 11.3 Patch Set 5

    • ECE 11.3 Patch Set 6

    • ECE 11.3 Patch Set 7

    • ECE 11.3 Patch Set 8

    • ECE 11.3 Patch Set 9

  • The ECE Installer installs the complete ECE software and copies the configuration files to the new installation so that they match your existing ECE 11.3 patch set settings.

Overview of Upgrading Existing ECE 11.3 Installation

Important:

To upgrade the existing ECE installation by using the zero downtime upgrade method, see "Performing Zero Downtime Upgrade".

If you have an existing installation of ECE integrated with Oracle Communications Billing and Revenue Management (BRM) and Pricing Design Center (PDC) and you are upgrading that installation, do the following:

Important:

Ensure that you install the following components in this order:
  1. The ECE patch set.

  2. A compatible version of BRM. See the corresponding BRM 7.5 Patch Set Installation Guide for installing BRM.

  3. A compatible version of PDC. See PDC Installation and System Administration Guide for installing PDC.

See "ECE System Requirements" for the compatible version of BRM and PDC.

  1. Plan your installation. See "About Planning Your ECE Installation" for more information.

  2. Review system requirements. See "ECE System Requirements" for more information.

  3. Perform the pre-upgrade tasks. See "Performing the Pre-Upgrade Tasks".

  4. Perform the upgrade tasks. See "Performing the Upgrade Tasks".

    Caution:

    If you are upgrading to ECE 11.3 Patch Set 7 or later releases, you must install ECE 11.3 Patch Set 7 interim patch 27976672 (IP2) and update the Coherence libraries before performing the rolling upgrade. Otherwise, the rolling upgrade for ECE 11.3 Patch Set 8 will not work. In such a case, you must stop all ECE nodes of your existing installation, restore your ECE system, and then start all ECE nodes of your existing installation. See "Stopping and Restoring Your ECE System".

    See "Upgrading to ECE 11.3 Patch Set 7" and "Upgrading to ECE 11.3 Patch Set 8" for more information.

  5. Perform the post-upgrade tasks. See "Performing the Post-Upgrade Tasks".

Upgrading to ECE 11.3 Patch Set 7

To upgrade to ECE 11.3 Patch Set 7:

  1. Upgrade your system to all prior patch set releases, up to and including ECE 11.3 Patch Set 6.

  2. Install ECE 11.3 Patch Set 7.

  3. Install the ECE 11.3 Patch Set 7 interim patch 27976672 (IP2).

  4. Delete the Coherence libraries in the ECE_11.3_PS7_IP2_home/oceceserver/lib directory.

  5. Copy the Coherence 12.2.1.0.6 libraries manually from the ECE_11.3_PS6_home/oceceserver/lib directory to the ECE_11.3_PS7_IP2_home/oceceserver/lib directory.

  6. Perform the rolling upgrade for ECE 11.3 Patch Set 7 IP2.
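Steps 4 and 5 above amount to swapping the Coherence JARs between the two homes. The following shell sketch demonstrates the pattern against stand-in directories; the paths and JAR file names are illustrative only, not the actual library names shipped with ECE:

```shell
# Stand-in homes created for demonstration; in a real upgrade, point
# these at your actual ECE_11.3_PS6_home and ECE_11.3_PS7_IP2_home.
BASE=$(mktemp -d)
PS6_HOME="$BASE/ECE_11.3_PS6_home"
PS7_IP2_HOME="$BASE/ECE_11.3_PS7_IP2_home"
mkdir -p "$PS6_HOME/oceceserver/lib" "$PS7_IP2_HOME/oceceserver/lib"
touch "$PS6_HOME/oceceserver/lib/coherence-12.2.1.0.6.jar"      # illustrative name
touch "$PS7_IP2_HOME/oceceserver/lib/coherence-12.2.1.0.7.jar"  # illustrative name

# Step 4: delete the Coherence libraries in the PS7 IP2 home.
rm -f "$PS7_IP2_HOME"/oceceserver/lib/coherence*.jar

# Step 5: copy the Coherence 12.2.1.0.6 libraries from the PS6 home.
cp -p "$PS6_HOME"/oceceserver/lib/coherence*.jar "$PS7_IP2_HOME/oceceserver/lib/"
```

The same delete-then-copy pattern applies when upgrading to Patch Set 8, with the source and target directories adjusted accordingly.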

You can later upgrade to ECE 11.3 Patch Set 8 and then ECE 11.3 Patch Set 9 by following the standard instructions in "Performing the Upgrade Tasks" and "Performing the Post-Upgrade Tasks".

Upgrading to ECE 11.3 Patch Set 8

To upgrade to ECE 11.3 Patch Set 8 from any release prior to ECE 11.3 Patch Set 7:

  1. Upgrade your system to all prior patch set releases, up to and including ECE 11.3 Patch Set 6.

  2. Install ECE 11.3 Patch Set 7.

  3. Install the ECE 11.3 Patch Set 7 interim patch 27976672 (IP2).

  4. Install ECE 11.3 Patch Set 8.

  5. Delete the Coherence libraries in the ECE_11.3_PS7_IP2_home/oceceserver/lib directory.

  6. Copy the Coherence 12.2.1.0.7 libraries manually from the ECE_11.3_PS8_home/oceceserver/lib directory to the ECE_11.3_PS7_IP2_home/oceceserver/lib directory.

  7. Perform the rolling upgrade for ECE 11.3 Patch Set 7 IP2.

  8. Perform the rolling upgrade for ECE 11.3 Patch Set 8.

You can later upgrade to ECE 11.3 Patch Set 9 by following the standard instructions in "Performing the Upgrade Tasks" and "Performing the Post-Upgrade Tasks".

Performing Zero Downtime Upgrade

If you have created an active-hot standby disaster recovery system, you can use the zero downtime upgrade method to upgrade the existing ECE installation with minimal disruption to the installation and to the services provided to your customers.

For more information on creating an active-hot standby disaster recovery system, see the discussion about configuring ECE for disaster recovery in BRM Elastic Charging Engine System Administrator's Guide.

Before you perform the zero downtime upgrade, ensure the following:

  • The same versions of ECE 11.3 or an ECE 11.3 patch set, the BRM 7.5 patch set, and the PDC 11.1 patch set are installed in your production and backup sites.

  • The ECE, BRM, and PDC instances installed in your production and backup sites, and all the components connected to your ECE system, are currently running.

To perform the zero downtime upgrade:

  1. Ensure that all the requests and updates (such as usage requests, top-up requests, and pricing and customer data updates) are routed to your production site.

  2. In your backup site, do the following:

    1. Stop the BRM and PDC instances.

    2. Upgrade the BRM instance to the version compatible with your new release.

      See "ECE System Requirements" for the compatible BRM version and the corresponding BRM 7.5 Patch Set Installation Guide for installing BRM using the zero downtime upgrade method.

    3. Upgrade the PDC instance to the version compatible with your new release.

      See "ECE System Requirements" for the compatible PDC version and PDC Installation and System Administration Guide for installing PDC.

    4. Stop replicating the ECE cache data to your production site by running the following command:

      gridSync stop [ProductionClusterName]
      

      where ProductionClusterName is the name of the ECE cluster in your production site.

  3. In your production site, do the following:

    1. Stop replicating the ECE cache data to your backup site by running the following command:

      gridSync stop [BackupClusterName]
      

      where BackupClusterName is the name of the ECE cluster in your backup site.

    2. Verify that the ECE and BRM data updates are synchronized in real time and that all the rated events are being published to the Oracle NoSQL database.

  4. In your backup site, do the following:

    1. Start the BRM and PDC instances and their processes.

    2. Upgrade ECE directly to the new release. You need not upgrade to all prior patch set releases. You can also skip the "Performing a Rolling Upgrade" task.

    3. Start ECE. See the discussion about starting and stopping ECE in BRM Elastic Charging Engine System Administrator's Guide for more information.

    4. Start the following ECE processes and gateways:

      Note:

      Depending on your installation, you start Diameter Gateway, RADIUS Gateway, or both.
      start emGateway
      start brmGateway
      start ratedEventFormatter
      start diameterGateway
      start radiusGateway
      
  5. In your production site, do the following:

    1. Stop the BRM and PDC instances.

    2. Upgrade ECE to the new release. You must upgrade to all prior patch set releases first before upgrading to the new release. Perform all the tasks described in "Overview of Upgrading Existing ECE 11.3 Installation".

    3. Upgrade the BRM instance to the version compatible with your new release.

      See "ECE System Requirements" for the compatible BRM version and the corresponding BRM 7.5 Patch Set Installation Guide for installing BRM using the zero downtime upgrade method.

    4. Upgrade the PDC instance to the version compatible with your new release.

      See "ECE System Requirements" for the compatible PDC version and PDC Installation and System Administration Guide for installing PDC.

    5. Start the BRM and PDC instances and their processes.

    6. Start replicating the ECE cache data to your backup site by running the following commands:

      gridSync start
      gridSync replicate
      
  6. In your backup site, start replicating the ECE cache data to your production site by running the following commands:

    gridSync start
    gridSync replicate
    
  7. Verify that the ECE data is automatically replicated to both sites.

Performing the Pre-Upgrade Tasks

This section provides instructions for ECE pre-upgrade tasks.

Backing Up Your Existing Configuration

Back up your existing configuration and installation area (the ECE installation directory and its contents: ECE_home). In particular, ensure that you back up all customized files.

Important:

Store this backup in a safe location. The data in these files is necessary if you encounter any issues in the installation process.
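The backup can be scripted. A minimal sketch follows, run here against a stand-in ECE_home so it is self-contained; in practice, substitute your actual installation directory and a backup location on separate storage:

```shell
# Stand-in ECE_home for demonstration (use your real installation directory).
ECE_HOME=$(mktemp -d)/ECE_home
mkdir -p "$ECE_HOME/oceceserver/config"
echo "custom=true" > "$ECE_HOME/oceceserver/config/ece.properties"

# Archive the whole installation area, including customized files,
# into a dated tarball in the backup location.
BACKUP_DIR=$(mktemp -d)
tar -C "$(dirname "$ECE_HOME")" -czf \
    "$BACKUP_DIR/ece_home_backup_$(date +%Y%m%d).tar.gz" "$(basename "$ECE_HOME")"

# List the archive contents to confirm the customized files are included.
tar -tzf "$BACKUP_DIR"/ece_home_backup_*.tar.gz
```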

Creating the Home Directory for the New Release

Create a directory to be the new ECE 11.3 patch set home directory, ECE_New_home; for example, ECE_113PS9. Because you have your old release on the same driver machine, be careful to specify the home details for the new release when you run the ECE 11.3 patch set Installer. The home details consist of the home directory path and a unique name you give to the new installation.

When you run the Installer, it displays the home details of any old release installations it detects on your driver machine in the Specify Home Details list.

Performing the Upgrade Tasks

This section provides instructions for ECE upgrade tasks.

Obtaining the ECE 11.3 Patch Set Software

To obtain ECE 11.3 Patch Set software:

  1. Create a temporary directory (temp_dir).

  2. Go to the My Oracle Support Web site:

    http://support.oracle.com

  3. Sign in with your user name and password.

  4. Click the Patches & Updates tab.

  5. From the list, select Patch Name or Number.

  6. In the text field, enter the PatchNumber and click Search.

    where PatchNumber:

    • For ECE 11.3 Patch Set 1 is 24489027

    • For ECE 11.3 Patch Set 2 is 24708603

    • For ECE 11.3 Patch Set 3 is 25655714

    • For ECE 11.3 Patch Set 4 is 26420699

    • For ECE 11.3 Patch Set 5 is 27145275

    • For ECE 11.3 Patch Set 6 is 27406358

    • For ECE 11.3 Patch Set 7 is 27531574

    • For ECE 11.3 Patch Set 8 is 28133198

    • For ECE 11.3 Patch Set 9 is 28738541

    The Patch Search Results page appears.

  7. Click the patch name.

    The patch details appear.

  8. From the Platform list, select the platform and click Download.

    The File Download dialog box appears.

  9. Download the pPatchNumber_PatchSet_platform.zip software pack to temp_dir.

    where:

    • PatchSet:

      • For ECE 11.3 Patch Set 1 is 113010

      • For ECE 11.3 Patch Set 2 is 113020

      • For ECE 11.3 Patch Set 3 is 113030

      • For ECE 11.3 Patch Set 4 is 113040

      • For ECE 11.3 Patch Set 5 is 113050

      • For ECE 11.3 Patch Set 6 is 113060

      • For ECE 11.3 Patch Set 7 is 113070

      • For ECE 11.3 Patch Set 8 is 113080

      • For ECE 11.3 Patch Set 9 is 113090

    • platform is linux or solaris.

  10. Unzip pPatchNumber_PatchSet_platform.zip and extract the contents to temp_dir:

    The extracted software pack has the following structure:

    ocece/Disk1/install

    ocece/Disk1/stage

Installing the ECE 11.3 Patch Set for Your Upgrade

Install the ECE 11.3 patch set using the Patchset installer type into ECE_New_home.

Follow the instructions in "Installing Elastic Charging Engine" to install ECE using the Patchset installer type.

In the Existing ocece Installation Details screen, ensure that you enter the full path or browse to the directory in which you installed the existing ECE installation.

Reconfiguring Configuration File Settings to Match Your Old Release

After installing the new patch set, reconfigure the default system and business configuration files of the new installation to match the settings in your old release configuration files.

Reconfigure all settings in the files of the following directories to match your old installation settings:

  • ECE_New_home/oceceserver/

  • ECE_New_home/oceceserver/config

  • ECE_New_home/oceceserver/brm_config

You must move the configuration data, such as your custom customer profile data and request specification files, into ECE_New_home.

You can also use a merge tool to merge the configuration files you have changed in your old installation with the configuration files in the new installation.

Important:

Do not use a merge tool for reconfiguring the settings in the ECE_home/oceceserver/config/management/charging-settings.xml file. New and changed properties can be introduced in this file, which would make the file difficult to merge.

To reconfigure settings of the ECE_New_home/oceceserver/config/management/charging-settings.xml file:

  1. On the driver machine, open the ECE_New_home/oceceserver/config/eceTopology.conf file.

  2. For each physical server machine or unique IP address in the cluster, enable charging server nodes (such as the ecs1 node) for JMX management by specifying a port for each node and setting start CohMgt to true for that node.

    Important:

    Do not specify the same port for the JMX management service that is used by your old ECE installation. Enable charging server nodes on your new installation for JMX management by using unique port numbers.
  3. Save and close the file.

  4. Start the JMX-management-enabled charging server nodes by doing the following:

    1. Change directory to the ECE_New_home/oceceserver/bin directory.

    2. Start Elastic Charging Controller (ECC):

      ./ecc
      
    3. Run the following command:

      start server
      
  5. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    3. Connect to the ECE charging server node for which start CohMgt is set to true in the ECE_New_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    4. In the editor's MBean hierarchy, expand the ECE Configuration node.

  6. Use the JMX editor to enter values for all settings associated with your old release's ECE_home/oceceserver/config/management/charging-settings.xml file and enter values for new settings introduced in the new release.

    Your configurations are saved to the ECE_New_home/oceceserver/config/management/charging-settings.xml file.

  7. Stop the JMX-management-enabled charging server nodes.
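For reference, the JMX-management settings edited in this procedure live in eceTopology.conf. A hypothetical fragment is shown below; the pipe-delimited column layout is an assumption here, and the host names and ports are illustrative (the JMX port must differ from the one used by your old installation):

```
# node-name |role   |host name |host ip  |JMX port |start CohMgt |JVM Tuning File
ecs1        |server |hostA     |10.0.0.1 |9999     |true         |defaultTuningProfile.properties
ecs2        |server |hostB     |10.0.0.2 |9998     |false        |defaultTuningProfile.properties
```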

Copying the Mediation Specification File to the New Installation

When installing the new release, the mediation specification file is not automatically copied to the new installation.

You must manually copy the mediation specification file (for example, diameter_mediation.spec) in your old release's ECE_home/oceceserver/config/management directory to the new installation.

For information about the mediation specification file, see BRM Elastic Charging Engine Implementation Guide.

Reconfiguring Log4j2 Configuration File Settings to Match Your Old Settings

Note:

Perform this step only if you are upgrading from ECE 11.3 Patch Set 3 to ECE 11.3 Patch Set 4.

In ECE 11.3 Patch Set 4, the log4j.properties file is replaced with the log4j2.xml file. After installing ECE 11.3 Patch Set 4, reconfigure all the default settings in the ECE_home/oceceserver/config/log4j2.xml file to match your customized settings in the log4j.properties file in your old release.

From ECE 11.3 Patch Set 4, use the log4j2.xml file to configure logging in the XML format for the entire cluster.

Important:

The existing log4j.properties configuration is supported only for backward compatibility.

For information about Log4j settings and configuring logging, see BRM Elastic Charging Engine System Administrator's Guide.

Configuring Persistence Environment

Important:

Perform this step only in the test environment.

In the test environment, you can configure persistence if you are upgrading from ECE 11.3 Patch Set 4 to ECE 11.3 Patch Set 5.

To configure the persistence environment:

  1. On the driver machine, open the ECE_home/oceceserver/config/ece.properties file.

  2. Ensure that the java.property.ece.persistence.mode and java.property.coherence.distributed.persistence.base.dir entries are set:

    java.property.ece.persistence.mode=on-demand
    java.property.coherence.distributed.persistence.base.dir=ECE_home/persistence/
    

    If you want to persist the pricing and configuration data in the active persistence mode, set the java.property.ece.persistence.mode entry to active. For more information, see the discussion about active persistence of ECE caches in ECE Release Notes.

  3. Save and close the file.

  4. Open the ECE Coherence override file your ECE system uses (for example, ECE_home/oceceserver/config/charging-coherence-override-secure-prod.xml).

    To confirm which ECE Coherence override file is used, refer to the tangosol.coherence.override parameter of the ECE_home/oceceserver/config/ece.properties file.

  5. Add the following entries:

      <!-- ECE persistence environment -->
      <persistence-environments>
          <persistence-environment id="ece-environment">
              <persistence-mode system-property="ece.persistence.mode">persistence_mode</persistence-mode>
          </persistence-environment>
      </persistence-environments>
    

    where persistence_mode is active or on-demand. If you have set java.property.ece.persistence.mode to active in step 2, set persistence_mode to active.

  6. Save and close the file.
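The ece.properties check in step 2 can be scripted. The sketch below uses a stand-in properties file for demonstration (the persistence directory path is a placeholder; the property names and values are those shown above):

```shell
# Stand-in ece.properties for demonstration; in practice, operate on
# ECE_home/oceceserver/config/ece.properties.
PROPS=$(mktemp -d)/ece.properties
cat > "$PROPS" <<'EOF'
java.property.ece.persistence.mode=on-demand
java.property.coherence.distributed.persistence.base.dir=/opt/ece/persistence/
EOF

# Confirm both persistence entries are present.
grep -c '^java.property.ece.persistence.mode=' "$PROPS"
grep -c '^java.property.coherence.distributed.persistence.base.dir=' "$PROPS"

# Switch to active persistence mode if required (GNU sed, in-place edit).
sed -i 's/^java.property.ece.persistence.mode=.*/java.property.ece.persistence.mode=active/' "$PROPS"
grep '^java.property.ece.persistence.mode=' "$PROPS"
```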

Upgrading Extension Code

If you customized rating by implementing extensions using the ECE extensions interface, apply the customizations to the corresponding files of the new installation.

Upgrade your extension code and recompile it. Recompile ECE_home/oceceserver/config/extensions with the new library. Ensure that the packaged extensions JAR files are available to the ECE runtime environment in the ECE_home/lib folder.

Verifying the New Parameters in the Upgraded ECE Configuration Files

The upgrade process automatically adds or updates parameters in the following configuration files:

  • JMSConfiguration.xml: The following JMSDestination name sections in this configuration file are updated or added:

    • NotificationQueue: New parameters read by the ECE charging nodes.

    • BRMGatewayNotificationQueue: New section read by BRM Gateway.

    • DiameterGatewayNotificationQueue: New section read by Diameter Gateway.

  • migration-configuration.xml: The pricingUpdater section in this configuration file is updated.

Perform the following procedures to verify that the new parameters were successfully added to the configuration files and that the default values of the new and updated parameters are appropriate for your system. If necessary, change the values.

Verifying New and Updated Parameters in the Upgraded JMSConfiguration.xml File

To verify the new and updated parameters in the upgraded JMSConfiguration.xml file:

  1. Open the ECE_New_home/oceceserver/config/JMSConfiguration.xml file in a text editor.

  2. Locate the <MessagesConfigurations> section and its three JMSDestination name sections.

  3. In each JMSDestination name section, verify that the values of the following parameters are appropriate for your system:

    <JMSDestination name="JMS_destination_name">
        <HostName>host_name</HostName>
        <Port>port_number</Port>
        <UserName>user_name</UserName>
        <Password>password</Password>
        <ConnectionFactory>connection_factory_name</ConnectionFactory>
        <QueueName>queue_name</QueueName>
        <SuspenseQueueName>suspense_queue_name</SuspenseQueueName>
        <Protocol>protocol</Protocol>
        <ConnectionURL>connection_URL</ConnectionURL>
        <ConnectionRetryCount>connection_retry_count</ConnectionRetryCount>
        <ConnectionRetrySleepInterval>connection_retry_sleep_interval
            </ConnectionRetrySleepInterval>
        <InitialContextFactory>initial_context_factory_name
            </InitialContextFactory>
        <RequestTimeOut>request_timeout</RequestTimeOut>
        <KeyStorePassword>keystore_password</KeyStorePassword>
        <keyStoreLocation>keystore_location</keyStoreLocation>
    </JMSDestination>
     
    

    where:

    • JMS_destination_name is one of the following names:

      • NotificationQueue

      • BRMGatewayNotificationQueue

      • DiameterGatewayNotificationQueue

      Note:

      Do not change the value of the JMSDestination name parameters.
    • host_name specifies the name of a WebLogic server on which a JMS topic resides.

      If you provided a value for the Host Name field on the ECE Notification Queue Details installer screen, that value appears here. Add this host to the ConnectionURL parameter, which takes precedence over HostName.

    • port_number specifies the port number on which the WebLogic server resides.

      If you provided a value for the Port Number field on the ECE Notification Queue Details installer screen, that value appears here. Add this port number to the ConnectionURL parameter, which takes precedence over Port.

    • user_name specifies the user for logging on to the WebLogic server.

      This user must have write privileges for the JMS topic.

    • password specifies the password for logging on to the WebLogic server.

      When you install ECE, the password you enter is encrypted and stored in the keystore. If you change the password, you must run a utility to encrypt the new password before entering it here. See the discussion about encrypting new passwords in BRM Elastic Charging Engine System Administrator's Guide.

    • connection_factory_name specifies the connection factory used to create connections to the JMS topic on the WebLogic server to which ECE publishes notification events.

      You must also configure settings in Oracle WebLogic Server for the connection factory. For more information, see the discussion about configuring a WebLogic Server connection factory for a JMS topic.

    • queue_name specifies the JMS topic that holds the published external notification messages.

    • suspense_queue_name specifies the name of the queue that holds failed updates sent through the BRM Gateway. This parameter is applicable only for the BRMGatewayNotificationQueue section.

    • protocol specifies the wire protocol used by your WebLogic servers in the ConnectionURL parameter, which takes precedence over Protocol. The default is t3.

    • connection_URL lists all the URLs that applications can use to connect to the JMS WebLogic servers on which your ECE notification queue (JMS topic) or queues reside.

      Note:

      • When this parameter contains values, it takes precedence over the deprecated HostName, Port, and Protocol parameters.

      • If multiple URLs are specified for a high-availability configuration, an application randomly selects one URL and then tries the others until one succeeds.

      Use the following URL syntax:

      [t3|t3s|http|https|iiop|iiops]://address[,address]...
      

      where:

      t3, t3s, http, https, iiop, or iiops is the wire protocol used.

         For a WebLogic server, use t3.

      address is hostlist:portlist.

      hostlist is hostname[,hostname]...

      hostname is the name of a WebLogic server on which a JMS topic resides.

      portlist is portrange[+portrange]...

      portrange is port[-port].

      port is the port number on which the WebLogic server resides.

      Examples:

      t3://hostA:7001
      t3://hostA,hostB:7001-7002
      

      The second example is equivalent to all the following URLs:

      t3://hostA,hostB:7001+7002 
      t3://hostA:7001-7002,hostB:7001-7002 
      t3://hostA:7001+7002,hostB:7001+7002 
      t3://hostA:7001,hostA:7002,hostB:7001,hostB:7002 
      
    • connection_retry_count specifies the number of times a connection is retried after it fails. The default is 10.

      This applies only to clients that receive notifications from BRM.

    • connection_retry_sleep_interval specifies the number of milliseconds between connection retry attempts. The default is 10000.

    • initial_context_factory_name specifies the name of the initial connection factory used to create connections to the JMS topic queue on each WebLogic server to which ECE will publish notification events.

    • request_timeout specifies the number of milliseconds in which requests to the WebLogic server must be completed before the operation times out. The default is 3000.

    • keystore_password specifies the password used to access the SSL keystore file if SSL is used to secure the ECE JMS queue connection.

    • keystore_location specifies the full path to the SSL keystore file if SSL is used to secure the ECE JMS queue connection.

  4. Save and close the file.

For more information about these parameters, see the discussion about configuring JMS credentials for publishing external notifications in the ECE Implementation Guide.
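As an illustration, a NotificationQueue section with hypothetical values might look like the following. The host names, port, user, queue, and connection factory names are placeholders, not shipped defaults; the retry, sleep-interval, and timeout values are the documented defaults, and WLInitialContextFactory is the standard WebLogic initial context factory:

```xml
<JMSDestination name="NotificationQueue">
    <HostName>wlsHostA</HostName>
    <Port>7001</Port>
    <UserName>ece_jms_user</UserName>
    <Password>encrypted_password</Password>
    <ConnectionFactory>ECENotificationConnectionFactory</ConnectionFactory>
    <QueueName>ECENotificationTopic</QueueName>
    <SuspenseQueueName></SuspenseQueueName>
    <Protocol>t3</Protocol>
    <ConnectionURL>t3://wlsHostA,wlsHostB:7001</ConnectionURL>
    <ConnectionRetryCount>10</ConnectionRetryCount>
    <ConnectionRetrySleepInterval>10000</ConnectionRetrySleepInterval>
    <InitialContextFactory>weblogic.jndi.WLInitialContextFactory</InitialContextFactory>
    <RequestTimeOut>3000</RequestTimeOut>
    <KeyStorePassword></KeyStorePassword>
    <keyStoreLocation></keyStoreLocation>
</JMSDestination>
```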

Verifying New and Updated Parameters in the Upgraded migration-configuration.xml File

To verify the new and updated parameters in the upgraded migration-configuration.xml file:

  1. Open the ECE_New_home/oceceserver/config/management/migration-configuration.xml file.

  2. Locate the pricingUpdater section.

  3. Verify that the default values of the following parameters are appropriate for your system:

    <pricingUpdater
      . . .
      hostName="host_name"
      port="port_number"
      . . .
      connectionURL="connection_URL"
      connectionRetryCount="connection_retry_count"
      connectionRetrySleepInterval="connection_retry_sleep_interval"
      . . .
      protocol="protocol"
      . . .
      requestTimeOut="request_timeout"
      . . .
      </pricingUpdater>
     
    

    where:

    • host_name specifies the name of the server on which a JMS queue to which PDC publishes pricing data resides.

      If you provided a value for the Host Name field on the PDC Pricing Components Queue Details installer screen, that value appears here. Add this host to the ConnectionURL parameter, which takes precedence over HostName.

    • port_number specifies the port number of the server on which the PDC JMS queue resides.

      If you provided a value for the Port Number field on the PDC Pricing Components Queue Details installer screen, that value appears here. Add this port number to the ConnectionURL parameter, which takes precedence over Port.

    • connection_URL lists all the URLs that applications can use to connect to the servers on which the PDC JMS queue or queues reside.

      Note:

      • When this parameter contains values, it takes precedence over the deprecated hostName, port, and protocol parameters.

      • If multiple URLs are specified for a high-availability configuration, an application randomly selects one URL and then tries the others until one succeeds.

      Use the following URL syntax:

      [t3|t3s|http|https|iiop|iiops]://address[,address]...
      

      where:

      t3, t3s, http, https, iiop, or iiops is the wire protocol used.

         For a WebLogic server, use t3.

      address is hostlist:portlist.

      hostlist is hostname[,hostname]...

      hostname is the name of a server on which a PDC JMS queue resides.

      portlist is portrange[+portrange]...

      portrange is port[-port].

      port is the port number of the server on which the PDC JMS queue resides.

      Examples:

      t3://hostA:7001
      t3://hostA,hostB:7001-7002
      

      The second example is equivalent to all the following URLs:

      t3://hostA,hostB:7001+7002 
      t3://hostA:7001-7002,hostB:7001-7002 
      t3://hostA:7001+7002,hostB:7001+7002 
      t3://hostA:7001,hostA:7002,hostB:7001,hostB:7002 
      
    • connection_retry_count specifies the number of times a connection is retried after it fails. The default is 10.

      This applies only to clients that receive notifications from BRM.

    • connection_retry_sleep_interval specifies the number of milliseconds between connection retry attempts. The default is 10000.

    • protocol specifies the wire protocol used by the servers listed in the ConnectionURL parameter, which takes precedence over Protocol. The default is t3.

    • request_timeout specifies the number of milliseconds in which requests to the PDC JMS queue server must be completed before the operation times out. The default is 3000.

  4. Save and close the file.

Performing the Post-Upgrade Tasks

This section provides instructions for ECE post-upgrade tasks.

Deploying the Patch Set Onto Server Machines

If you installed an ECE standalone installation, you can skip this task.

Deploying the patch set onto the server machines means that you distribute the Elastic Charging Server node instances (charging server nodes) and other nodes defined in your topology file across the server machines.

To deploy the patch set onto your server machines:

  1. Open the ECE_New_home/oceceserver/config/eceTopology.conf file and your old release topology file.

  2. Verify the following:

    1. The settings in the ECE_New_home/oceceserver/config/eceTopology.conf file are the same as those specified in your old release topology file.

      Your topology configuration must be identical to that of your old installation. Oracle recommends that you copy the topology file from your old installation.

    2. All the hosts included in the ECE_New_home/oceceserver/config/eceTopology.conf file have the same login ID (user ID), and password-less SSH has been configured from the driver machine to all hosts.

  3. Save and close the files.

  4. Verify that all of the custom files and the system and business configuration files of the new installation match the settings of your old installation's configuration files, and that your custom request specification files and custom customer profile data are carried over.

  5. Open the ECE_New_home/oceceserver/config/management/migration-configuration.xml file.

  6. Verify that the configObjectsDataDirectory parameter is set to the directory where you store your configuration data (mediation specification used by Diameter Gateway).

  7. Save and close the file.

  8. Log on to the driver machine.

  9. Go to the ECE_New_home/oceceserver/bin directory.

  10. Start Elastic Charging Controller (ECC):

    ./ecc
    
  11. Run the following command, which deploys the ECE installation onto server machines:

    sync
    

    The sync command copies the relevant files of the ECE installation onto the server machines in the ECE cluster.

Performing a Rolling Upgrade

Caution:

If you are upgrading to ECE 11.3 Patch Set 7 or ECE 11.3 Patch Set 8, you must install ECE 11.3 Patch Set 7 interim patch 27976672 (IP2) and update the Coherence libraries before performing the rolling upgrade. Otherwise, the rolling upgrade for ECE 11.3 Patch Set 8 will not work. In such a case, you must stop all ECE nodes of your existing installation, restore your ECE system, and then start all ECE nodes of your existing installation. See "Stopping and Restoring Your ECE System".

See "Upgrading to ECE 11.3 Patch Set 7" and "Upgrading to ECE 11.3 Patch Set 8" for more information.

Note:

You can skip this step if you are upgrading from ECE 11.3 to ECE 11.3 Patch Set 1.

Rolling upgrade does not work for upgrading to ECE 11.3 Patch Set 1. You must stop all ECE nodes of your existing installation, restore your ECE system, and then start all ECE nodes of your existing installation. For more information, see "Stopping and Restoring Your ECE System".

A rolling upgrade gracefully shuts down the processes of the old ECE installation and starts the processes of the new installation while keeping the overall ECE system in operation.

Rolling upgrades are intended for production systems to prevent interruption of service for customers during the upgrade. Rolling upgrades are also useful for test systems to avoid tedious restarts of ECE charging server nodes that would require reloading data from BRM and PDC to re-prime ECE caches.

To perform a rolling upgrade:

Caution:

(Production systems) To mitigate charging server node failures that might threaten your system's ability to handle your customer base:
  • Schedule the rolling upgrade outside of your regular peak processing time.

  • Ensure that you have an appropriate number of charging server nodes for your customer base. If the minimum number of charging server nodes needed for your customer base is N, run at least N+1 nodes so that usage processing continues uninterrupted while each node is taken down during the rolling upgrade. For example, if your customer base requires four charging server nodes, run at least five.

Tip:

Before performing the rolling upgrade, install the new release in a different directory. After launching ECC from the new installation, run the rollingUpgrade command to upgrade the system to the new release.
  1. Ensure that you deploy the new release onto server machines. See "Deploying the Patch Set Onto Server Machines".

  2. Run the following command to start the rolling upgrade in the new release while ECE is still operating on the old release:

    groovy:000> rollingUpgrade
    

    One by one, each node of the old installation is brought down, upgraded, and rejoined to the cluster.

    When you run the rollingUpgrade command with no parameters specified, all running nodes are upgraded (charging server nodes, data-loading utility nodes, data updating nodes, and so on) except for simulator nodes.

    The order in which the nodes are restarted adheres to the order in which the nodes are listed in the ECE_New_home/config/eceTopology.conf file.

    You can also upgrade nodes (bring them down, upgrade them, and rejoin them to the cluster) by node role. The recommended order is to upgrade all nodes of role ratedEventFormatter first, then all nodes of role server, then all nodes of role updater, and finally all nodes of role diametergateway; for example:

    rollingUpgrade ratedEventFormatter
    rollingUpgrade server
    rollingUpgrade updater
    rollingUpgrade diametergateway
    
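The role-by-role sequence above can also be scripted. The following sketch emits the rollingUpgrade commands in the recommended order; whether ./ecc accepts commands on standard input is an assumption you should verify against your release before relying on it, and when no executable is found the script only prints the commands (a dry run).

```shell
#!/bin/sh
# Sketch: emit the recommended role-by-role rollingUpgrade commands in order.
# ASSUMPTION: piping commands into ./ecc works in your release; verify first.
# When ECC_BIN is not an executable, the commands are only printed (dry run).
ECC_BIN="${ECC_BIN:-./ecc}"
ROLES="ratedEventFormatter server updater diametergateway"

emit_upgrade_commands() {
  for role in $ROLES; do
    echo "rollingUpgrade $role"
  done
}

if [ -x "$ECC_BIN" ]; then
  emit_upgrade_commands | "$ECC_BIN"
else
  emit_upgrade_commands   # dry run: show the order without executing
fi
```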

After the upgrade is complete, ECE runs on the new release, and you can decide what to do with the old installation directory.

When you use the new release, verify that the path to your configuration data (your custom customer profile data and request specification data) correctly points to where that data resides in the new release:

  1. Access the ECE MBeans by launching a JMX editor and entering the IP address (or host name) and port of your JMX-management-enabled charging server node on the running new release.

  2. Click the MBeans tab.

  3. Expand ECE Configuration.

  4. Expand migration.loader.

  5. In the Name column, select configObjectsDataDirectory.

  6. In the Value column, enter the directory where you store your configuration data (your mediation specification files).

    Your configuration is saved to the ECE_New_home/config/management/migration-configuration.xml file (do not edit this file directly).
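For reference, the saved entry in migration-configuration.xml might look like the following. This is an illustrative sketch only: the enclosing element names, attribute form, and the example directory path are assumptions. Inspect your own generated file rather than editing it.

```xml
<!-- Illustrative sketch of migration-configuration.xml; the element names
     and the example path are assumptions. ECE writes this file itself;
     do not edit it directly. -->
<migration>
  <loader configObjectsDataDirectory="/scratch/ece/config_data"/>
</migration>
```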

Loading Pricing Data From PDC into ECE

After you perform the rolling upgrade, load all the pricing data (the metadata, setup, pricing, and profile data) from the PDC database into ECE.

To load the pricing data from PDC into ECE:

  1. In PDC, publish all the PDC pricing data (the metadata, setup, pricing, and profile data) from the PDC database to ECE by running the following command:

    ImportExportPricing -publish -metadata -config -pricing -profile -target [ece]
    

    Running this command publishes all the metadata, setup, pricing, and profile data in the PDC database to ECE.

  2. Log on to the driver machine.

  3. Go to the ECE_New_home/bin directory.

  4. Start ECC:

    ./ecc
    
  5. Run the following commands in this order:

    start
    start configLoader
    start pricingUpdater
    

    All the pricing data from the PDC database is loaded into ECE.

Stopping and Restoring Your ECE System

Caution:

Restarts of the ECE system are not intended for production systems. If you are upgrading a production system, perform a rolling upgrade. See "Performing a Rolling Upgrade" for information.

Restart the ECE system only on test systems, and only when you intend to remove all data from the Coherence caches.

To restore an upgraded ECE system:

  1. Stop all ECE nodes of the old ECE installation.

  2. In PDC, publish all the PDC pricing data (the metadata, setup, pricing, and profile data) from the PDC database to ECE by running the following command:

    ImportExportPricing -publish -metadata -config -pricing -profile -target [ece] 
    

    Running this command publishes all the metadata, setup, pricing, and profile data in the PDC database to ECE.

  3. Reconfigure the ECE_New_home/config/management/charging-settings.xml file to match the settings (including customizations) in your old release and enter values for settings introduced in the new release.

  4. On the driver machine, go to the ECE_New_home/bin directory.

  5. Start ECC:

    ./ecc
    
  6. Enable real-time synchronization of BRM and ECE customer data updates. See the discussion about configuring ECE for synchronizing BRM and ECE customer data in real time in BRM Elastic Charging Engine Implementation Guide for more information.

  7. Start ECE processes and gateways in the following order:

    Important:

    Depending on your installation, you start Diameter Gateway, RADIUS Gateway, or both.
    start server
    start configLoader
    start pricingUpdater
    start customerUpdater
    start emGateway
    start brmGateway
    start ratedEventFormatter
    start diameterGateway
    start radiusGateway
    

    All data is now back in the ECE data grid.

    Real-time data updates, which were temporarily disrupted by the shutdown, are processed upon restart.

Verifying the Installation After the Upgrade

Note:

Test the patch set that you installed on a non-production system with a copy of your production data before you deploy it on a production system.

Verify the ECE installation by starting the ECE nodes in the cluster, loading the data needed for rating, and generating usage to verify that usage requests can be processed and customer balances can be impacted.

See "Verifying the ECE Installation" for information about verifying the ECE installation.