6 Upgrading Existing ECE 12.0 Installation

Learn how to upgrade an existing Oracle Communications Billing and Revenue Management (BRM) Elastic Charging Engine (ECE) 12.0 installation.

Note:

When upgrading an existing ECE installation, note the following:

  • A direct upgrade from the ECE 11.1 or ECE 11.2 release is not supported.

  • You must upgrade to ECE 12.0 before you can upgrade to ECE 12.0 Patch Set 2.

  • The ECE Installer installs the complete ECE software and copies the configuration files into the new installation to match your existing ECE 12.0 patch set settings.

In this document, the current ECE 12.0 patch set release running on your system is called the old release. The ECE patch set you are upgrading to is called the new release.

Overview of Upgrading Existing ECE 12.0 Installation

If you have an existing installation of ECE integrated with BRM and Pricing Design Center (PDC) and you are upgrading that installation, do the following:

Note:

Ensure that you install these in the following order:

  1. The ECE patch set.

  2. A compatible version of BRM.

  3. A compatible version of PDC.

See "BRM Suite Compatibility" in BRM Compatibility Matrix for the compatible version of BRM and PDC.

  1. Plan your installation. See "About Planning Your ECE Installation".

  2. Review system requirements. See "ECE System Requirements".

  3. Perform the pre-upgrade tasks. See "Performing the Pre-Upgrade Tasks".

  4. Perform the upgrade tasks. See "Performing the Upgrade Tasks".

  5. Perform the post-upgrade tasks. See "Performing the Post-Upgrade Tasks".

Performing Zero Downtime Upgrade

If you have created an active-hot standby disaster recovery system, you can use the zero downtime upgrade method to upgrade your existing ECE installation with minimal disruption to the installation and to the services provided to your customers.

For more information on creating an active-hot standby disaster recovery system, see "Configuring ECE for Disaster Recovery" in BRM System Administrator's Guide.

Before you perform the zero downtime upgrade, ensure the following:

  • The same versions of ECE 12.0, the BRM 12.0 patch set, and the PDC 12.0 patch set are installed in your production and backup sites.

  • The ECE, BRM, and PDC instances in both your production and backup sites, and all the components connected to your ECE system, are currently running.

To perform the zero downtime upgrade:

  1. Ensure that all the requests and updates (such as usage requests, top-up requests, and pricing and customer data updates) are routed to your production site.

  2. In your backup site, do the following:

    1. Stop the BRM and PDC instances.

    2. Upgrade the BRM instance to the version compatible with your new release.

    3. Upgrade the PDC instance to the version compatible with your new release.

    4. Stop replicating the ECE cache data to your production site by running the following command:

      gridSync stop [ProductionClusterName]

      where ProductionClusterName is the name of the ECE cluster in your production site.

  3. In your production site, do the following:

    1. Stop replicating the ECE cache data to your backup site by running the following command:

      gridSync stop [BackupClusterName]

      where BackupClusterName is the name of the ECE cluster in your backup site.

    2. Verify that the ECE and BRM data updates are synchronized in real time and all the rated events are getting published to the persistence database or Oracle NoSQL database.

  4. In your backup site, do the following:

    1. Start the BRM and PDC instances and their processes.

    2. Upgrade ECE directly to the new release. You need not upgrade to all prior patch set releases. You can also skip the "Performing a Rolling Upgrade" task.

    3. Start ECE. See "Starting ECE" in BRM System Administrator's Guide for more information.

    4. Start the following ECE processes and gateways:

      Note:

      Depending on your installation, you start Diameter Gateway, RADIUS Gateway, or both.

      start emGateway
      start brmGateway
      start ratedEventFormatter
      start diameterGateway
      start radiusGateway
  5. In your production site, do the following:

    1. Stop the BRM and PDC instances.

    2. Upgrade ECE to the new release. You must upgrade to all prior patch set releases before upgrading to the new release. Perform all the tasks described in "Overview of Upgrading Existing ECE 12.0 Installation".

    3. Upgrade the BRM instance to the version compatible with your new release.

    4. Upgrade the PDC instance to the version compatible with your new release.

    5. Start the BRM and PDC instances and their processes.

    6. Start replicating the ECE cache data to your backup site by running the following commands:

      gridSync start
      gridSync replicate
  6. In your backup site, start replicating the ECE cache data to your production site by running the following commands:

    gridSync start
    gridSync replicate
  7. Verify that the ECE data is automatically replicated to both sites.

Performing the Pre-Upgrade Tasks

Before upgrading your existing ECE 12.0 system, perform these tasks:

  1. Backing Up Your Existing Configuration

  2. Creating the Home Directory for the New Release

Backing Up Your Existing Configuration

Back up your existing configuration and installation area (the ECE installation directory and its content: ECE_home). In particular, make sure you back up all customized files.

Note:

Store this backup in a safe location. The data in these files is necessary if you encounter any issues during the installation process.
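As an illustration, the installation tree can be archived with tar. The following is a self-contained sketch that uses a throwaway stand-in directory; in practice, point ECE_HOME at your real installation directory:

```shell
# Stand-in paths for illustration only; use your real ECE_home in practice.
ECE_HOME="/tmp/ece_demo/ECE_120PS1"
BACKUP_DIR="/tmp/ece_demo/backups"

# Create a minimal stand-in installation so this sketch is runnable.
mkdir -p "$ECE_HOME/oceceserver/config" "$BACKUP_DIR"
echo "<settings/>" > "$ECE_HOME/oceceserver/config/charging-settings.xml"

# Archive the whole installation tree, preserving permissions.
tar -czpf "$BACKUP_DIR/ece_home_backup.tar.gz" \
    -C "$(dirname "$ECE_HOME")" "$(basename "$ECE_HOME")"

# List the archive contents to confirm customized files are included.
tar -tzf "$BACKUP_DIR/ece_home_backup.tar.gz"
```

Verifying the archive listing before you upgrade is cheaper than discovering a missing customized file afterward.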

Creating the Home Directory for the New Release

Create a directory to be the new ECE 12.0 patch set home directory, ECE_New_home; for example, ECE_120PS2. Because you have your old release on the same driver machine, be careful to specify the home details for the new release when you run the ECE 12.0 patch set Installer. The home details consist of the home directory path and a unique name you give to the new installation.

When you run the Installer, it displays the home details of any old release installations it detects on your driver machine in the Specify Home Details list.

Performing the Upgrade Tasks

To upgrade your existing ECE 12.0 system to the latest patch set, perform these tasks:

  1. Installing the ECE 12.0 Patch Set for Your Upgrade

  2. Reconfiguring Configuration File Settings to Match Your Old Release

  3. Copying the Mediation Specification File to the New Installation

  4. Upgrading Extension Code

  5. Verifying the New Parameters in the Upgraded ECE Configuration Files

Installing the ECE 12.0 Patch Set for Your Upgrade

Download the ECE 12.0 Patch Set software from the Oracle Support website (https://support.oracle.com). Then, install the ECE 12.0 patch set using the Patchset installer type into ECE_New_home.

Follow the instructions in "Installing Elastic Charging Engine" to install ECE using the Patchset installer type.

In the Existing ocece Installation Details window, ensure that you enter the full path or browse to the directory in which you installed the existing ECE installation.

Reconfiguring Configuration File Settings to Match Your Old Release

After installing the new patch set, reconfigure the default system and business configuration files of the new installation to match the settings in your old release configuration files.

Reconfigure all settings in the files of the following directories to match your old installation settings:

  • ECE_New_home/oceceserver

  • ECE_New_home/oceceserver/config

  • ECE_New_home/oceceserver/brm_config

You must move the configuration data, such as your custom customer profile data and request specification files, into ECE_New_home.

You can also use a merge tool to merge the configuration files you have changed in your old installation with the configuration files in the new installation.

Note:

Do not use a merge tool for reconfiguring the settings in the ECE_home/oceceserver/config/management/charging-settings.xml file. New and changed properties can be introduced in this file, which would make the file difficult to merge.
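For the files where a merge tool is appropriate, a recursive diff shows which settings differ between the installations. The following is a self-contained sketch using stand-in directories and a stand-in property name; in practice, compare your old ECE_home and new ECE_New_home configuration directories:

```shell
# Stand-in directories and property name, for illustration only.
OLD_CFG="/tmp/ece_demo/old/oceceserver/config"
NEW_CFG="/tmp/ece_demo/new/oceceserver/config"
mkdir -p "$OLD_CFG" "$NEW_CFG"
echo "somePoolSize=8" > "$OLD_CFG/ece.properties"
echo "somePoolSize=4" > "$NEW_CFG/ece.properties"

# Show every setting that differs between the two installations.
# diff exits with status 1 when differences are found, so mask that here.
diff -ru "$OLD_CFG" "$NEW_CFG" || true
```

Review each difference and carry forward only your deliberate changes.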

To reconfigure settings of the ECE_New_home/oceceserver/config/management/charging-settings.xml file:

  1. On the driver machine, open the ECE_New_home/oceceserver/config/eceTopology.conf file.

  2. For each physical server machine or unique IP address in the cluster, enable charging server nodes (such as the ecs1 node) for JMX management by specifying a port for each node and setting it to start CohMgt = true.

    Note:

    Do not specify the same port for the JMX management service that is used by your old ECE installation. Enable charging server nodes on your new installation for JMX management by using unique port numbers.

  3. Save and close the file.

  4. Start the JMX-management-enabled charging server nodes by doing the following:

    1. Change directory to the ECE_New_home/oceceserver/bin directory.

    2. Start Elastic Charging Controller (ECC):

      ./ecc
    3. Run the following command:

      start server
  5. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    3. Connect to the ECE charging server node set to start CohMgt = true in the ECE_New_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    4. In the editor's MBean hierarchy, expand the ECE Configuration node.

  6. Use the JMX editor to enter values for all settings associated with your old release's ECE_home/config/management/charging-settings.xml file and enter values for new settings introduced in the new release.

    Your configurations are saved to the ECE_New_home/oceceserver/config/management/charging-settings.xml file.

  7. Stop the JMX-management-enabled charging server nodes.

Copying the Mediation Specification File to the New Installation

When installing the new release, the mediation specification file is not automatically copied to the new installation.

You must manually copy the mediation specification file (for example, diameter_mediation.spec) in your old release's ECE_home/oceceserver/config/management directory to the new installation.
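The copy itself is a single cp. The following sketch uses stand-in directories so it is runnable as-is; substitute your real old and new installation paths:

```shell
# Stand-in paths for illustration only.
OLD_MGMT="/tmp/ece_demo/old_home/oceceserver/config/management"
NEW_MGMT="/tmp/ece_demo/new_home/oceceserver/config/management"
mkdir -p "$OLD_MGMT" "$NEW_MGMT"
echo "spec-content" > "$OLD_MGMT/diameter_mediation.spec"

# Carry every mediation specification file over to the new installation.
cp "$OLD_MGMT"/*.spec "$NEW_MGMT"/
ls "$NEW_MGMT"
```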

Upgrading Extension Code

If you customized rating by implementing extensions using the ECE extensions interface, apply the customizations to the corresponding files of the new installation.

Upgrade your extension code and recompile it. Recompile ECE_home/oceceserver/config/extensions with the new library. Ensure that the packaged extensions JAR files are available to the ECE runtime environment in the ECE_home/lib folder.

Verifying the New Parameters in the Upgraded ECE Configuration Files

The upgrade process automatically adds or updates parameters in the following configuration files:

  • JMSConfiguration.xml: The following JMSDestination name sections in this configuration file are updated or added:

    • NotificationQueue: New parameters read by the ECE charging nodes.

    • BRMGatewayNotificationQueue: New section read by BRM Gateway.

    • DiameterGatewayNotificationQueue: New section read by Diameter Gateway.

  • migration-configuration.xml: The pricingUpdater section in this configuration file is updated.

Perform the following procedures to verify that the new parameters were successfully added to the configuration files and that the default values of the new and updated parameters are appropriate for your system. If necessary, change the values.

Verifying New and Updated Parameters in the Upgraded JMSConfiguration.xml File

To verify the new and updated parameters in the upgraded JMSConfiguration.xml file:

  1. Open the ECE_New_home/config/JMSConfiguration.xml file in a text editor.

  2. Locate the <MessagesConfigurations> section and its three JMSDestination name sections.

  3. In each JMSDestination name section, verify that the values of the following parameters are appropriate for your system:

    <JMSDestination name="JMS_destination_name">
        <HostName>host_name</HostName>
        <Port>port_number</Port>
        <UserName>user_name</UserName>
        <Password>password</Password>
        <ConnectionFactory>connection_factory_name</ConnectionFactory>
        <QueueName>queue_name</QueueName>
        <SuspenseQueueName>suspense_queue_name</SuspenseQueueName>
        <Protocol>protocol</Protocol>
        <ConnectionURL>connection_URL</ConnectionURL>
        <ConnectionRetryCount>connection_retry_count</ConnectionRetryCount>
        <ConnectionRetrySleepInterval>connection_retry_sleep_interval
            </ConnectionRetrySleepInterval>
        <InitialContextFactory>initial_context_factory_name
            </InitialContextFactory>
        <RequestTimeOut>request_timeout</RequestTimeOut>
        <KeyStorePassword>keystore_password</KeyStorePassword>
        <keyStoreLocation>keystore_location</keyStoreLocation>
    </JMSDestination>

    where:

    • JMS_destination_name is one of the following names:

      • NotificationQueue

      • BRMGatewayNotificationQueue

      • DiameterGatewayNotificationQueue

      Note:

      Do not change the value of the JMSDestination name parameters.

    • host_name specifies the name of a WebLogic server on which a JMS topic resides.

      If you provided a value for the Host Name field on the ECE Notification Queue Details installer screen, that value appears here. Add this host to the ConnectionURL parameter, which takes precedence over HostName.

    • port_number specifies the port number on which the WebLogic server resides.

      If you provided a value for the Port Number field on the ECE Notification Queue Details installer screen, that value appears here. Add this port number to the ConnectionURL parameter, which takes precedence over Port.

    • user_name specifies the user for logging on to the WebLogic server.

      This user must have write privileges for the JMS topic.

    • password specifies the password for logging on to the WebLogic server.

      When you install ECE, the password you enter is encrypted and stored in the KeyStore. If you change the password, you must run a utility to encrypt the new password before entering it here. See "About Encrypting Passwords" in BRM Developer's Guide.

    • connection_factory_name specifies the connection factory used to create connections to the JMS topic on the WebLogic server to which ECE publishes notification events.

      You must also configure settings in Oracle WebLogic Server for the connection factory.

    • queue_name specifies the JMS topic that holds the published external notification messages.

    • suspense_queue_name specifies the name of the queue that holds failed updates sent through the BRM Gateway. This parameter is applicable only for the BRMGatewayNotificationQueue section.

    • protocol specifies the wire protocol used by the WebLogic servers listed in the ConnectionURL parameter, which takes precedence over Protocol. The default is t3.

    • connection_URL lists all the URLs that applications can use to connect to the JMS WebLogic servers on which your ECE notification queue (JMS topic) or queues reside.

      Note:

      • When this parameter contains values, it takes precedence over the deprecated HostName, Port, and Protocol parameters.

      • If multiple URLs are specified for a high-availability configuration, an application randomly selects one URL and then tries the others until one succeeds.

      Use the following URL syntax:

      [t3|t3s|http|https|iiop|iiops]://address[,address]. . . 

      where:

      t3, t3s, http, https, iiop, or iiops is the wire protocol used. For a WebLogic server, use t3.

      address is hostlist:portlist.

      hostlist is hostname[,hostname.]

      hostname is the name of a WebLogic server on which a JMS topic resides.

      portlist is portrange[+portrange.]

      portrange is port[-port.]

      port is the port number on which the WebLogic server resides.

      Examples:

      t3://hostA:7001
      t3://hostA,hostB:7001-7002

      The preceding URL is equivalent to all the following URLs:

      t3://hostA,hostB:7001+7002 
      t3://hostA:7001-7002,hostB:7001-7002 
      t3://hostA:7001+7002,hostB:7001+7002 
      t3://hostA:7001,hostA:7002,hostB:7001,hostB:7002 
    • connection_retry_count specifies the number of times a connection is retried after it fails. The default is 10.

      This applies only to clients that receive notifications from BRM.

    • connection_retry_sleep_interval specifies the number of milliseconds between connection retry attempts. The default is 10000.

    • initial_context_factory_name specifies the name of the initial connection factory used to create connections to the JMS topic queue on each WebLogic server to which ECE will publish notification events.

    • request_timeout specifies the number of milliseconds in which requests to the WebLogic server must be completed before the operation times out. The default is 3000.

    • keystore_password specifies the password used to access the SSL KeyStore file if SSL is used to secure the ECE JMS queue connection.

    • keystore_location specifies the full path to the SSL KeyStore file if SSL is used to secure the ECE JMS queue connection.

  4. Save and close the file.

For more information about these parameters, see "Configuring Notifications in ECE" in ECE Implementing Charging.

Verifying New and Updated Parameters in the Upgraded migration-configuration.xml File

To verify the new and updated parameters in the upgraded migration-configuration.xml file:

  1. Open the ECE_New_home/oceceserver/config/management/migration-configuration.xml file.

  2. Locate the pricingUpdater section.

  3. Verify that the default values of the following parameters are appropriate for your system:

    <pricingUpdater
      . . .
      hostName="host_name"
      port="port_number"
      . . .
      connectionURL="connection_URL"
      connectionRetryCount="connection_retry_count"
      connectionRetrySleepInterval="connection_retry_sleep_interval"
      . . .
      protocol="protocol"
      . . .
      requestTimeOut="request_timeout"
      . . .
      </pricingUpdater>

    where:

    • host_name specifies the name of the server hosting the JMS queue to which PDC publishes pricing data.

      If you provided a value for the Host Name field on the PDC Pricing Components Queue Details installer screen, that value appears here. Add this host to the ConnectionURL parameter, which takes precedence over HostName.

    • port_number specifies the port number of the server on which the PDC JMS queue resides.

      If you provided a value for the Port Number field on the PDC Pricing Components Queue Details installer screen, that value appears here. Add this port number to the ConnectionURL parameter, which takes precedence over Port.

    • connection_URL lists all the URLs that applications can use to connect to the servers on which the PDC JMS queue or queues reside.

      Note:

      • When this parameter contains values, it takes precedence over the deprecated hostName, port, and protocol parameters.

      • If multiple URLs are specified for a high-availability configuration, an application randomly selects one URL and then tries the others until one succeeds.

      Use the following URL syntax:

      [t3|t3s|http|https|iiop|iiops]://address[,address]. . . 

      where:

      t3, t3s, http, https, iiop, or iiops is the wire protocol used. For a WebLogic server, use t3.

      address is hostlist:portlist.

      hostlist is hostname[,hostname.]

      hostname is the name of a server on which a PDC JMS queue resides.

      portlist is portrange[+portrange.]

      portrange is port[-port.]

      port is the port number of the server on which the PDC JMS queue resides.

      Examples:

      t3://hostA:7001
      t3://hostA,hostB:7001-7002

      The preceding URL is equivalent to all the following URLs:

      t3://hostA,hostB:7001+7002 
      t3://hostA:7001-7002,hostB:7001-7002 
      t3://hostA:7001+7002,hostB:7001+7002 
      t3://hostA:7001,hostA:7002,hostB:7001,hostB:7002 
    • connection_retry_count specifies the number of times a connection is retried after it fails. The default is 10.

      This applies only to clients that receive notifications from BRM.

    • connection_retry_sleep_interval specifies the number of milliseconds between connection retry attempts. The default is 10000.

    • protocol specifies the wire protocol used by the servers listed in the ConnectionURL parameter, which takes precedence over Protocol. The default is t3.

    • request_timeout specifies the number of milliseconds in which requests to the PDC JMS queue server must be completed before the operation times out. The default is 3000.

  4. Save and close the file.

Performing the Post-Upgrade Tasks

After upgrading ECE 12.0 to the latest patch set, perform these tasks:

  1. Deploying the Patch Set Onto Server Machines

  2. Performing a Rolling Upgrade

  3. Enabling Per Site Rated Event Formatter Instances in Persistence-Enabled Active-Active Systems

  4. Stopping and Restoring Your ECE System

Deploying the Patch Set Onto Server Machines

If you installed an ECE standalone installation, you can skip this task.

Deploying the patch set onto the server machines means that you distribute the Elastic Charging Server node instances (charging server nodes) and other nodes defined in your topology file across the server machines.

To deploy the patch set onto your server machines:

  1. Open the ECE_New_home/config/eceTopology.conf file and your old release topology file.

  2. Verify the following:

    1. The settings in the ECE_New_home/config/eceTopology.conf file are the same as specified in your old release topology file.

      Your topology configuration must be identical to that of your old installation. Oracle recommends that you copy the topology file from your old installation.

    2. All the hosts included in the ECE_New_home/config/eceTopology.conf file have the same login ID (user ID), and password-less SSH has been configured from the driver machine to all hosts.

  3. Save and close the files.

  4. Verify that the custom files and the system and business configuration files of the new installation match the settings in your old installation's configuration files, and that your custom request specification files and custom customer profile data are carried over.

  5. Open the ECE_New_home/config/management/migration-configuration.xml file.

  6. Verify that the configObjectsDataDirectory parameter is set to the directory where you store your configuration data (mediation specification used by Diameter Gateway).

  7. Save and close the file.

  8. Log on to the driver machine.

  9. Go to the ECE_New_home/bin directory.

  10. Start Elastic Charging Controller (ECC):

    ./ecc
    
  11. Run the following command, which deploys the ECE installation onto server machines:

    sync
    

    The sync command copies the relevant files of the ECE installation onto the server machines in the ECE cluster.
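Step 2 of this procedure assumes password-less SSH from the driver machine to every host in the topology. A minimal sketch of generating the key pair follows; the key path here is a stand-in, and in practice you would use $HOME/.ssh/id_ed25519:

```shell
# Stand-in key path for illustration; use ~/.ssh/id_ed25519 in practice.
KEY="/tmp/ece_demo/id_ed25519"
mkdir -p "$(dirname "$KEY")"

# Generate an ed25519 key pair with no passphrase, if one does not exist.
[ -f "$KEY" ] || ssh-keygen -t ed25519 -N "" -q -f "$KEY"
ls "$KEY" "$KEY.pub"
```

After generating the key, install the public key on each host listed in eceTopology.conf (for example, with ssh-copy-id user@host) so that ECC and the sync command can log in without prompting for a password.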

Performing a Rolling Upgrade

A rolling upgrade gracefully shuts down processes of an old ECE installation and starts up the processes of the new ECE installation while maintaining the operation of the overall ECE system.

Rolling upgrades are intended for production systems to prevent interruption of service for customers during the upgrade. Rolling upgrades are also useful for test systems to avoid tedious restarts of ECE charging server nodes that would require reloading data from BRM and PDC to re-prime ECE caches.

Caution:

(Production systems) To mitigate charging server node failures that might threaten your system's ability to handle your customer base:

  • Schedule the rolling upgrade outside of your regular peak processing time.

  • Ensure that you have an appropriate number of charging server nodes for your customer base. If the minimum number of charging server nodes needed for your customer base is N, you must run at least N+1 nodes to have uninterrupted usage processing during a rolling upgrade.

To perform a rolling upgrade:

  1. Ensure that the new release is installed in a different directory from the old release.

  2. Start ECC using the new installation.

  3. Run the rolling upgrade script. See "Starting the Rolling Upgrade Process".

  4. Verify that your new ECE installation is using the correct configuration data. See "Verifying the Configuration Data Path".

To roll back the upgrade, see "Rolling Back an Upgrade".

Starting the Rolling Upgrade Process

To start the rolling upgrade process:

  1. Ensure that you deploy the new release onto server machines. See "Deploying the Patch Set Onto Server Machines".

  2. Open the ECE_home/oceceserver/config/ece.properties file in a text editor.

  3. Specify the amount of time, in seconds, that the rollingupgrade script waits after a node has finished upgrading before starting the upgrade process for the next node by using this parameter:

    numberOfPauseSecondsForRollingUpgrade = PauseSeconds

    This is also the wait time used between rolling restarts of successive nodes.

  4. Specify the maximum amount of time, in seconds, that the rollingupgrade script will wait for a node to start up (that is, enter the NODE_SAFE state) by setting this parameter:

    rollingUpgradePauseSecondsPerNode = MaxSeconds

    If a node's start-up time exceeds the specified number of seconds, the rolling upgrade fails.

  5. (For all gateway nodes) To specify the number of seconds that the utility waits for each gateway node to restart before continuing to the next gateway node, add the following line:

    rollingUpgradeGatewayReadinessWait=seconds

    The default is 12 seconds.

  6. (For individual gateway nodes) To specify the number of seconds the utility waits for a specific gateway node to restart before continuing to the next gateway node, add one or more of the following lines:

    rollingUpgradeEmGatewayReadinessWait=seconds
    rollingUpgradeCdrGatewayReadinessWait=seconds
    rollingUpgradeDiameterGatewayReadinessWait=seconds
    rollingUpgradeRadiusGatewayReadinessWait=seconds
    rollingUpgradeHttpGatewayReadinessWait=seconds

    These values override the value set in rollingUpgradeGatewayReadinessWait.

  7. Save and close the ece.properties file.

  8. Specify the order in which to restart nodes in the ECE_New_home/config/eceTopology.conf file. Nodes are started in the order in which they are listed in the configuration file.

  9. Start the rolling upgrade in the new release while ECE is still operating on the old release by doing one of the following:

    • To upgrade all running nodes, run the following command:

      groovy:000> rollingUpgrade

      All running nodes are upgraded (charging server nodes, data-loading utility nodes, data updating nodes, and so on) except for simulator nodes. Each node in the old installation is brought down, upgraded, and joined back to the cluster.

    • To upgrade nodes (bring them down, upgrade them, and join them back to the cluster) by node role, run the following commands:

      rollingUpgrade ratedEventFormatter
      rollingUpgrade server
      rollingUpgrade updater
      rollingUpgrade diametergateway

      Oracle recommends upgrading all ratedEventFormatter nodes first, then all server nodes, then all updater nodes, and lastly all diametergateway nodes.

After the upgrade is completed, the new release is used. You can decide what to do with the old installation directory.
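Taken together, the rolling-upgrade settings described in the procedure above might look like the following in ece.properties. All values here are illustrative examples, not recommendations:

```
numberOfPauseSecondsForRollingUpgrade = 30
rollingUpgradePauseSecondsPerNode = 300
rollingUpgradeGatewayReadinessWait=12
rollingUpgradeDiameterGatewayReadinessWait=20
```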

Verifying the Configuration Data Path

Before you use the new release, verify that the path to your configuration data (your custom customer profile data and request specification data) is specified correctly for the location of that data in your new release.

To verify the path to the configuration data:

  1. Access the ECE MBeans by launching a JMX editor and entering the IP address (or host name) and port of your JMX-management-enabled charging server node on the running new release.

  2. Click the MBeans tab.

  3. Expand ECE Configuration.

  4. Expand migration.loader.

  5. In the Name column, select configObjectsDataDirectory.

  6. In the Value column, enter the directory where you store your configuration data (your mediation specification files).

    Your configuration is saved to the ECE_New_home/config/management/migration-configuration.xml file (do not edit this file directly).

Rolling Back an Upgrade

To roll back an upgrade:

  1. Go to the ECE_New_home/bin directory.

  2. Start ECC.

    ./ecc
  3. Run the following command to start rolling back the upgrade:

    groovy:000> rollingUpgrade server

    One by one, each charging server node is rolled back.

  4. After all charging server nodes are rolled back, restart the following nodes in the given order:

    1. Pricing Updater

    2. Customer Updater

    3. BRM Gateway

    4. Rated Event Formatter

    See "Starting and Stopping ECE" in BRM System Administrator's Guide for information about starting these nodes.

Enabling Per Site Rated Event Formatter Instances in Persistence-Enabled Active-Active Systems

After upgrading all sites in an active-active system, you may still have a single primary Rated Event Formatter instance used by all sites. When data persistence is enabled, you can change to the recommended architecture of one primary Rated Event Formatter instance and one secondary instance for each site. To make this change, do the following:

  1. Configure your Rated Event Formatter instances in the charging-settings.xml file as described in "Configuring an Active-Active System" in BRM System Administrator's Guide.

  2. Start the primary Rated Event Formatter instance on each site. You can optionally start the secondary instances as well, or you can wait until you need them for failover.

Stopping and Restoring Your ECE System

Perform a full restart of the ECE system only on test systems, and only when you intend to remove all data from the Coherence caches.

Note:

Restarts of the ECE system are not intended for production systems. If you are upgrading a production system, perform a rolling upgrade. See "Performing a Rolling Upgrade" for information.

To restore an upgraded ECE system:

  1. Stop all ECE nodes of the old ECE installation.

  2. In PDC, publish all PDC pricing data (the metadata, setup, pricing, and profile data) from the PDC database to ECE by running the following command:

    ImportExportPricing -publish -metadata -config -pricing -profile -target [ece] 

    Running this command publishes all metadata, setup, pricing, and profile data from the PDC database to ECE.

  3. Reconfigure the ECE_New_home/config/management/charging-settings.xml file to match the settings (including customizations) in your old release and enter values for settings introduced in the new release.

  4. On the driver machine, go to the ECE_New_home/bin directory.

  5. Start ECC:

    ./ecc
  6. Enable real-time synchronization of BRM and ECE customer data updates. See "Synchronizing Data Between ECE and the BRM Database" in ECE Implementing Charging for more information.

  7. Start ECE processes and gateways in the following order:

    start server
    start configLoader
    start pricingUpdater
    start customerUpdater
    start emGateway
    start brmGateway
    start ratedEventFormatter
    start diameterGateway
    start radiusGateway

    All data is now back in the ECE data grid.

    Real-time data updates, which were temporarily disrupted by the shutdown, are processed upon restart.

Verifying the Installation After the Upgrade

Note:

Test the patch set that you installed on a non-production system with a copy of your production data before you deploy it on a production system.

Verify the ECE installation by starting the ECE nodes in the cluster, loading the data needed for rating, and generating usage to verify that usage requests can be processed and customer balances can be impacted.

See "Verifying the ECE Installation" for information about verifying the ECE installation.