4 Configuring the ECE System

This chapter describes how to configure the underlying system on which Oracle Communications Billing and Revenue Management Elastic Charging Engine (ECE) runs and describes the client-side configurations that control how requests are sent to ECE.

See BRM Elastic Charging Engine Implementation Guide for information about configuring charging business rules at the server level that influence how usage requests are charged, including taxation configurations.

About ECE Configuration

You set ECE configuration parameters in one of the following ways:

  • Prior to starting the ECE charging servers, by editing the XML files located in the ECE_home/oceceserver/config/management directory, where ECE_home is the directory in which ECE is installed.

  • After starting the ECE charging servers, by using the Java Management Extensions (JMX) MBean editor of your choice.

You configure usage-charging business parameters that control how ECE charges offline and online usage requests as well as system configuration parameters that influence how ECE charging servers operate. You can configure standard JVM tuning parameters for ECE nodes that are running JVMs in the cluster. See "Configuring JVM Tuning Parameters".

Several system configuration parameters are initially configured during ECE installation, when you enter values for fields requested by the Oracle Universal Installer GUI. These parameters can be modified after installation if needed.

Other system configuration parameters are set in a configuration file and properties file by using a text editor immediately after ECE installation.

The configuration parameters of configuration files in the ECE_home/oceceserver/config/management directory can be edited through JConsole on a running system. For example, usage-charging business parameters can be edited by using the ECE configuration service that exposes ECE configuration parameters as MBeans. See the discussion about accessing and editing ECE MBean parameters in BRM Elastic Charging Engine Implementation Guide for more information.

You can also edit configuration parameters by editing the configuration XML files directly in a text editor, but these edits must be made before you start the charging servers. After ECE is running, you must use a JMX editor such as JConsole to edit configuration parameters so that your edits take effect on the running system (through the MBean API).

For information about the configurable parameters in ECE and how you configure them, see the remaining topics in this chapter.

When you are within the network (inside the firewall), you can configure ECE configuration parameters remotely by logging in to the Coherence management JMX server using the host name and JMX port.
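For example, assuming a JMX-management-enabled charging server node on a hypothetical host ece-host1 with the default JMX port 9999, you could connect with JConsole as follows:

    jconsole ece-host1:9999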

About Centralized Configuration

ECE offers centralized configuration. See the discussion about accessing and editing ECE MBean parameters in BRM Elastic Charging Engine Implementation Guide for more information.

For centralized configuration to work, two-way password-less SSH must be configured between client and server machines. See "Setting Up Password-less SSH Between the Driver and Servers" for instructions.

Initial Configuration

You typically install ECE on a machine that is meant to be used to administer the ECE system and is referred to as the driver machine. After installing ECE on the driver machine, you perform an initial configuration on the driver machine as follows:

  • Specify the machines/hosts for each Coherence node in the ECE_home/oceceserver/config/eceTopology.conf file. See "Configuring ECE Topology" for more information.

  • Specify required properties in the ECE_home/oceceserver/config/ece.properties file, including the ECE root directory, the ECE user name, and the IP address of the driver machine (driverIp).
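A minimal sketch of these entries follows. The driverIp property is named in this guide; the other property names shown are hypothetical placeholders, so check the comments in ece.properties for the actual keys:

    # IP address of the driver machine.
    driverIp=192.0.2.10
    # ECE root directory and ECE user name (hypothetical property names).
    eceRootDir=/opt/ece
    eceUser=eceuser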

All configuration settings configured on the driver machine are saved to XML files in the driver machine ECE_home/oceceserver/config/management directory.

After you complete the initial configuration on the driver machine, if your ECE cluster includes multiple machines, you use the Elastic Charging Controller (ECC) application to deploy ECE from the driver machine to all machines specified in your topology (using the sync command). See the discussion of ECE post-installation tasks in BRM Elastic Charging Engine Installation Guide for more information.

Important:

You must provision the other machines on which you will deploy ECE. The sync command does not provision the other machines for you. See the discussion of ECE pre-installation tasks in BRM Elastic Charging Engine Installation Guide for information about provisioning machines for an ECE integrated system.

Note:

ECC commands are meant to be run from the driver machine for administering the ECE system (managing the nodes of the cluster).

System-Level Configuration

System-level configuration files are located in ECE_home/oceceserver/config/*.xml. See "ECE Configuration File Reference" for a summary of system-level configuration files. See the comments in each file for descriptions of the file parameters, including default values and accepted range of values.

For further information about ECE system-level configuration, see the remaining topics in this chapter.

Usage-Charging Configuration

See BRM Elastic Charging Engine Implementation Guide for information about configuring usage charging business rules (setting parameters that control usage-charging behavior).

You can configure usage-charging business rules in the following two ways:

  • Before starting ECE by directly editing the ECE_home/oceceserver/config/management/*.xml files on the driver machine.

  • After starting ECE by using the configuration service. See the discussion about accessing and editing ECE MBean parameters in BRM Elastic Charging Engine Implementation Guide for more information.

Configuring ECE Topology

Each node you define in the topology file must have a role associated with it. The role identifies the application that ECC starts when you enter commands for managing nodes. For example, when you enter the ECC command start configLoader, ECC starts the node with the role configLoader. When you enter the ECC command start server, ECC starts all nodes with the role server.

To configure ECE topology:

Important:

The topology file is pre-configured with several nodes that are required by ECE. Do not delete existing rows in this file.
  1. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  2. Add a row for each node on each physical server machine in the Coherence cluster.

    For example, if you have three physical server machines and each physical server machine has three nodes, add nine rows.

  3. For each row, enter the following information:

    • Name of the JVM process for that node.

      You can assign an arbitrary name. This name is used to distinguish processes that have the same role.

    • Role of the JVM process for that node.

      Each node in the ECE cluster plays a certain role.

    • Host name of the physical server machine on which the node resides.

      For a standalone system, use localhost.

      A standalone system means that all ECE-related processes are running on a single physical server machine.

    • (For multihomed hosts) IP address of the server machine on which the node resides.

      For hosts that have multiple IP addresses, specify the IP address so that Coherence binds to the intended network interface.

    • Whether you want the node to be JMX-management enabled.

      See "Enabling a Charging Server Node for JMX Management".

    • The tuning profile for that node. Each node is associated with a JVM tuning file. See "Configuring JVM Tuning Parameters".

  4. (For SDK sample programs) To run the SDK sample programs by using sdkCustomerLoader, uncomment the line that defines the sdkCustomerLoader node.

  5. Save the file.
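For illustration, the rows described in steps 2 and 3 might look like the following. The column order and delimiter shown here are illustrative only; the exact format is documented in the comments at the top of the eceTopology.conf file:

    # node-name | role   | host name | JMX port | start CohMgt | JVM tuning file
    server1     | server | host1     | 9999     | true         | defaultTuningProfile.properties
    server2     | server | host1     |          | false        | defaultTuningProfile.properties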

Enabling a Charging Server Node for JMX Management

For each unique IP address in your physical topology, you must enable one charging server node for JMX management. When the JMX-enabled node starts, it provides a JMX management service on the host and port specified in the topology file. The service exposes ECE MBeans so that you can edit the MBean attributes by using a JMX editor, such as JConsole. The service also enables ECC to verify the status of nodes as it enables the Coherence management framework.

Note:

Any ECE node can be enabled for JMX management, but you must enable the charging-server nodes for JMX management for central configuration of ECE to work. Charging-server nodes are always running, and enabling them for JMX management exposes MBeans for all ECE node processes (such as simulators and data loaders).

To enable a charging server node for JMX management:

  1. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  2. In the row for one charging server node (node with role server), for each physical server machine or unique IP address in the cluster, provide the following information:

    • JMX port of the JVM process for that node.

      Enter any free port, such as 9999, for the charging server node that is to be the JMX-management-enabled node.

      Choose a port number that is not in use by another application.

      The default port number is 9999.

    • Specify whether you want the node to be JMX-management enabled by entering true in the start CohMgt column.

      For charging server nodes (nodes with the role server), always enable JMX-management when a JMX port is supplied.

      Only one charging server node per physical server should be JMX-management enabled.

      Because multiple charging server nodes are running on a single physical machine, you set CohMgt=true for only one charging server node on each physical machine. Each machine must have one charging server node with CohMgt=true for centralized configuration of ECE to work.

  3. Save the file.

Configuring JVM Tuning Parameters

Configure JVM tuning parameters for garbage collection and heap size tuning.

Each row in the topology file represents an ECE component that is a running JVM in the cluster. In the topology file, you can specify a JVM tuning profile file for each node defined. This allows you to provide specific tuning settings for each node in the cluster. Multiple nodes can point to the same tuning profile.

To configure JVM tuning parameters:

  1. Open the ECE_home/oceceserver/config/defaultTuningProfile.properties file.

    You can create your own JVM tuning file and save it in this directory. You can give the file any name.

  2. Set the parameters as needed.

  3. Save the file.

  4. In the topology file, ensure your JVM tuning file is associated with the node to which you want the parameters to apply.

    The JVM tuning file is referenced by name in the topology file.

    See "Configuring ECE Topology" for information about the topology file.

Deploying JVM Tuning Parameter Updates onto a Running System

After configuring JVM tuning parameters, you can deploy JVM tuning parameter updates onto a running ECE system.

To deploy JVM tuning parameter updates onto a running system:

  1. Log on to the driver machine.

  2. Change directory to the ECE_home/oceceserver/bin directory.

  3. Start the Elastic Charging Controller.

    ./ecc
    
  4. Run the sync command to deploy the ECE installation onto server machines:

    sync
    

    The sync command copies the relevant files of the ECE installation (which includes your JVM tuning parameter changes) onto the server machines you have defined to be part of the ECE cluster.

  5. Open the ECE_home/oceceserver/config/eceTopology.conf file and define the following nodes at the top of the file (before the charging server nodes):

    • Updater nodes

    • Gateway nodes

    • Formatter nodes

    These processes must be restarted before restarting the charging server nodes.

  6. Run the rollingUpgrade command to perform a rolling restart of ECE nodes that are currently running on your topology.

    rollingUpgrade
    

    One by one, each currently running node listed in the topology file is brought down and then rejoined to the cluster.

    When you run the rollingUpgrade command with no parameters specified, all running nodes are upgraded (charging server nodes, data-loading utility nodes, data updater nodes, and so on) except for simulator nodes (nodes that have the role simulator).

Configuring Trusted Hosts of the Cluster

As part of initial configuration, configure the trusted hosts for the cluster. Trusted hosts are the machines or processes that are allowed from a security perspective to be part of the cluster. Because the cluster contains private customer data, ensure that only trusted hosts can access this data. You enter trusted host information during the ECE installation process. See "Adding Trusted Hosts" for information about adding or modifying trusted host information for your ECE system.

See BRM Elastic Charging Engine Security Guide for overall information about installing and administering a secure system.

Configuring ECE System-Level Settings

See the following topics for information about configuring ECE system-level parameters:

Configuring Coherence

The ECE configuration files related to configuring Coherence are as follows:

Note:

In an ECE standalone system, the default values in these files typically do not need to be modified.
  • ECE_home/oceceserver/config/charging-cache-config.xml

  • ECE_home/oceceserver/config/charging-coherence-override-dev.xml

  • ECE_home/oceceserver/config/charging-coherence-override-prod.xml

  • ECE_home/oceceserver/config/charging-coherence-override-secure-prod.xml

  • ECE_home/oceceserver/config/charging-pof-config.xml

Refer to the comments in each file for information about configuring its parameters.

See the Oracle Coherence documentation for general information about configuring Coherence.

Configuring Logging

Configure logging parameters so that you have the log levels set to the granularity you want. You can configure logging for each node in the cluster. The ECE_home/oceceserver/logs directory contains the log files for each node on each machine of your topology.

You can configure log levels and control logging in the following ways:

  • Controlling log level for each node or for the entire grid by way of JMX (using JConsole).

    A grid-level log level means that the log level is applied to all Elastic Charging Server nodes that run the Coherence work manager agent.

    See "Setting Log Levels by Editing MBeans".

  • Controlling log level for all nodes in the cluster.

    You set a global log level for the entire cluster by configuring the ECE_home/oceceserver/config/log4j.properties file. You must edit this file before starting ECE charging servers.

  • Turning logging on or off using ECC.

    Nodes that are started using ECC produce a log under the logs directory using the filename format node_name.log.

    To look up the logs produced by the nodes under the logs directory, use the ECC cat, ls, and tail commands.

For collecting diagnostic information, Oracle recommends that you turn on ECC feedback mode, which produces extra information when running commands. For example:

set feedback true

The feedback mode setting is saved in your local profile so you do not need to set it every time you start the Elastic Charging Controller.

Setting Log Levels by Editing MBeans

You can set the log level of ECE modules by ECE functional domain. Use this method to turn on debugging for all ECE modules that are used in the flow of the functional domain associated with your debugging scenario. For example, if you are debugging a problem in which events are not being rerated properly, use this method to turn on debugging for all ECE modules used in the rerating functional domain.

To set log levels for ECE functional domains:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Logging node.

  2. Expand Configuration.

  3. Expand Attributes.

  4. Select FunctionalDomains.

  5. Double-click in the attribute's Value field.

    A list of ECE functional domains for which you can turn on debugging appears in the field.

  6. In the list, copy the name of the ECE functional domain relevant to your debugging scenario (for example, Policy).

  7. Under Attributes, select LoggerLevels.

  8. Double-click in the attribute Value field.

    A list of log levels appears in the field.

  9. Scroll through the list to determine which log level you want to use for the functional domain you chose in step 6 (for example, DEBUG).

  10. Under Configuration, expand Operations.

  11. Select one of the following operations:

    • To set the log level for one ECE node, select setLogLevelForFunctionalDomain.

    • To set the log level for the grid, select setGridLogLevelForFunctionalDomain.

      This applies the log level to all Elastic Charging Server nodes that run the Coherence work manager agent.

  12. Specify values for the following operation parameters:

    • p0: Enter the name of the ECE functional domain you copied in step 6.

      Enter the name exactly as it appears in the list, including brackets, capitalization, and so on.

    • p1: Enter the log level you chose in step 9.

  13. Click the operation button.
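You can also invoke the same operation programmatically through the standard JMX remote API. The following Java sketch is illustrative only: the host, port, and MBean ObjectName are assumptions, so copy the actual ObjectName from the ECE Logging node in your JMX editor's MBean hierarchy:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SetEceLogLevel {
        public static void main(String[] args) throws Exception {
            // Host and JMX port of the JMX-management-enabled charging server
            // node, as listed in eceTopology.conf (values are examples).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://ece-host1:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();

                // Hypothetical ObjectName; copy the real one from your JMX editor.
                ObjectName logging = new ObjectName("ECE Logging:type=Configuration");

                // p0 = functional domain (step 6), p1 = log level (step 9).
                conn.invoke(logging, "setLogLevelForFunctionalDomain",
                        new Object[] { "Policy", "DEBUG" },
                        new String[] { "java.lang.String", "java.lang.String" });
            }
        }
    }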

Configuring Charging-Server Health Thresholds

This section describes how to configure a charging-server health threshold.

About the Charging-Server Health Threshold

To mitigate charging server node failures that might threaten your system's ability to handle your customer base, you can configure a charging-server health threshold: the minimum number of charging server nodes needed for your customer base. If the number of charging server nodes running on your system goes below the threshold, ECE stops processing usage requests and issues a SystemHealthException. For example, if you require six charging server nodes to process requests for your customer base, you set the threshold to 6; if the number of available charging server nodes then falls to five, ECE stops processing usage requests and issues the exception. ECE continues to process update, management, query, top-up, debit, and refund requests.

When setting a charging-server health threshold, note the following:

  • If the threshold is N, you must run at least N+1 nodes to have uninterrupted usage processing during a rolling upgrade.

  • For an integrated system, have a minimum of two charging server nodes per machine (provided the total number of charging server nodes can handle the normal expected throughput for your system).

  • For a standalone system for design or test environment, note the following guidelines:

    • Although you can use one charging server node in a design or test environment, setting a charging-server health threshold of 1 is not a valid configuration for deploying into a runtime environment.

    • The minimum configuration for an ECE standalone system is three charging server nodes: two charging server nodes plus an additional node in case a charging server node fails. In this case, you would set the charging-server health threshold to 2.

See "Configuring the Charging-Server Health Threshold" for information about configuring the charging-server health threshold.

Configuring the Charging-Server Health Threshold

To configure a charging-server health threshold:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand chargingServer.

  3. Expand Attributes.

  4. Set the degradedModeThreshold attribute to the minimum number of charging server nodes needed for your customer base (the number that can handle the normal expected throughput for your system). The default is 0.
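As a hedged illustration, the same change can be made over the JMX API. The sketch below assumes the imports and the MBeanServerConnection conn from the example in "Setting Log Levels by Editing MBeans", and the ObjectName is hypothetical:

    // Hypothetical ObjectName; copy the real one from the ECE Configuration node.
    ObjectName chargingServer = new ObjectName("ECE Configuration:type=chargingServer");

    // Require at least six charging server nodes before ECE stops
    // processing usage requests.
    conn.setAttribute(chargingServer, new Attribute("degradedModeThreshold", 6));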

Checking the Ongoing Health of ECE Charging Client Nodes

The charging client reports its health through the SystemHealth attribute of the client-side BatchRequestService MBean.

To check the ongoing health of ECE charging client nodes:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the appropriate ECE charging client node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ChargingClient node.

  2. Expand BatchRequestService.

  3. Expand Attributes.

  4. Check the value of the SystemHealth attribute:

    • HEALTHY: Charging client nodes are functioning.

    • DEGRADED: Charging client nodes are unavailable.
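The check can also be scripted over the JMX API. The following sketch assumes the connection setup from "Setting Log Levels by Editing MBeans", a hypothetical ObjectName, and that the attribute is exposed as a string:

    // Hypothetical ObjectName for the client-side BatchRequestService MBean.
    ObjectName brs = new ObjectName("ChargingClient:type=BatchRequestService");

    String health = (String) conn.getAttribute(brs, "SystemHealth");
    System.out.println("Charging client health: " + health);  // HEALTHY or DEGRADED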

Configuring System Overload Protection

Because ECE is always on, it needs to handle system overload. The following scenarios might cause an overloaded system:

  • An undersized ECE deployment or lack of infrastructure for usage growth

  • Large batches of offline records

  • Bulk customer updates that trigger numerous update requests

Though ECE scales dynamically so that you can adjust sizing for peak times, overload protection is intended for an exceptionally overloaded system. ECE protects the system infrastructure from meltdown by controlling the number of requests submitted for processing. System overload causes the threads executing requests to become stuck or busy. To ensure that ECE is not overloaded by a high volume of requests, the request volume can be monitored and controlled in real time. When the maximum throughput is exceeded or there are spikes in the number of usage requests, ECE does not suffer performance degradation if system overload protection measures are in place.

About Overload Protection Infrastructure

Overload protection uses thread pools to accept and process requests submitted to the system. Thread pools improve performance when executing large numbers of updates because of the reduced per-update overhead. They also provide a means of bounding and managing resources, including the requests themselves.

When the request throughput is too large for the system to handle, ECE reduces its throughput until it reaches a sustainable, error-free level. It informs the client of any submitted requests that cannot be processed.

As with the charging-server health threshold, when overload protection is enabled, only usage requests are affected. Usage requests continue to be accepted until capacity reaches the configurable pending count.

The pending count should generally be at least equal to the thread count. Select this value carefully, based on the expected throughput of the ECE instance and the expected latency of each request, as revealed by your performance testing results.

Update, management, query, top-up, debit, and refund requests are always accepted, even when the system is overloaded.

Configuring Overload Protection

You configure overload protection as a client-side configuration.

To configure overload protection:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand ChargingClient.

  3. Expand BatchRequestService.

  4. Set the OverloadProtection attribute to true.

About MBean Attributes Used to Configure Overload

The BatchRequestService MBean contains the attributes for configuring the system to handle unexpected throughput. Table 4-1 describes the attributes that can be configured for overload protection.

Note:

Changes that you make to the following MBean attributes are not saved. If the client is restarted, the attributes are reset to their default values.

Table 4-1 MBean Attributes to Configure Overload Protection

MBean Attribute Description

AcceptablePendingCount

Number of requests that are accepted and queued for processing. ECE rejects requests if the number of pending requests in the queue exceeds this value. Determine this value through performance testing and by monitoring the ThreadPendingCount attribute; it is typically larger than the ECE thread count.

AcceptedTaskCount

Number of requests that have been accepted for processing since the start of the ECE instance. This attribute is read-only.

BatchSize

Size of the ECE batch when it is submitted for processing.

BatchTimeOut

Amount of time ECE waits before the batch is submitted for processing, regardless of how full the batch is.

OverloadProtection

Flag that enables overload protection. Overload protection is disabled by default (set to false).

RejectedTaskCount

Number of requests rejected since the start of the ECE instance. RejectedTaskCount plus AcceptedTaskCount equals the total number of submitted requests on a single ECE instance. This attribute is read-only.

ThreadPendingCount

Number of requests that are pending in the queue. Monitor this attribute before setting a value for AcceptablePendingCount. This attribute is read-only.
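As a hedged sketch of how these attributes fit together, the fragment below assumes the connection setup from "Setting Log Levels by Editing MBeans" and a hypothetical ObjectName; the attribute names come from Table 4-1:

    ObjectName brs = new ObjectName("ECE Configuration:type=BatchRequestService");

    // Observe the queue depth under load before choosing a limit.
    int pending = (Integer) conn.getAttribute(brs, "ThreadPendingCount");

    // Accept and queue up to 500 requests; reject usage requests beyond that.
    conn.setAttribute(brs, new Attribute("AcceptablePendingCount", 500));
    conn.setAttribute(brs, new Attribute("OverloadProtection", true));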


About Request Prioritization

Request prioritization lets you control when requests are sent through ECE.

New client-side ECE request queues can be set up by modifying the brs-client-config.xml file in the Elastic Charging Client JAR file. For more information about creating new queues, refer to "Configuring Client-Side ECE Request Queues".

In addition, when both online and offline network mediation clients are sending requests through ECE and throughput is at its limit, online requests are given priority over offline requests.

Configuring Client-Side ECE Request Queues

You can configure the request queues that your charging clients use to submit requests to ECE. The Elastic Charging Client, which is installed on the charging client (such as the network mediation client application), can use queues for sending requests to the charging server nodes. For each queue, you can set the local thread pool used for submitting requests for processing and for handling responses from the charging server nodes. You can also set the batch size and batch timeout of each queue.

To configure client-side ECE request queues:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand ChargingClient.

  3. Expand BatchRequestService.

  4. Set the thread pool size, batch timeout, and batch size attributes for the request queues.

    For descriptions of each attribute, see the documentation for the BRSStatMXBean for oracle.communication.brm.charging.brs in ECE Java API Reference.

Configuring Default System Currency

You can configure ECE to use a default system currency for charging subscribers. During rating, ECE uses the subscriber's primary or secondary currency to charge subscribers. If the currency used in the rate plans matches neither the subscriber's primary nor secondary currency, ECE uses the default system currency (by default, US dollars).

To configure a default system currency:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand charging.server.

  3. Expand Attributes and select systemCurrencyNumericCode.

  4. Set the numeric code of the currency for the system. For example, enter 840, the ISO 4217 numeric code for US dollars.

    For a description of this attribute, see ECE Java API Reference.
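As a hedged illustration, the fragment below sets the attribute over the JMX API, assuming the connection setup from "Setting Log Levels by Editing MBeans" and a hypothetical ObjectName:

    // Hypothetical ObjectName; copy the real one from the ECE Configuration node.
    ObjectName server = new ObjectName("ECE Configuration:type=charging.server");

    // 840 is the ISO 4217 numeric code for US dollars.
    conn.setAttribute(server, new Attribute("systemCurrencyNumericCode", 840));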

Configuring Housekeeping: Expired Object Clean Up

Housekeeping tasks efficiently manage ECE server memory. One such housekeeping task is the cleanup of expired objects: historic data that is no longer used for processing customer updates and usage information.

ECE uses its existing request-processing architecture to clean up expired objects during update and usage request processing. Cleaning up expired objects while these requests are processed ensures regular system upkeep and avoids dedicated cleanup processing during peak times, which could impact performance.

About Cleaning Up Expired Objects in Update Requests

With update requests, expired objects are checked against their retention time. When a customer has an update request and an expired object for that customer has exceeded its retention time, the object is removed during update request processing. You can configure the retention time of expired objects; see "Configuring Expired Object Clean Up in Update Requests".

The following expired objects are processed in a customer's update request:

  • Purchased charge offers

  • Purchased alteration (discount) offers

  • Balance items

  • Expired audit data

    • Purchased charge offers

    • Purchased alteration (discount) offers

    • Products (Services)

    • Used alteration agreements

    • Used distribution agreements

Cleanup for a customer is performed as part of an update request only if the last cleanup occurred more than one day earlier. A timestamp at the customer level records the last cleanup time. For example, a customer has balance updates at 10:00, 11:00, 12:00, and 13:00 on day 1 and another update at 10:30 on day 2. The first cleanup is done at 10:00 on day 1, and cleanups are skipped at 11:00, 12:00, and 13:00. The next cleanup occurs at 10:30 on day 2 as part of the update request.

Configuring Expired Object Clean Up in Update Requests

To configure the retention time (in days) of an expired object in an update request:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand charging.expirationConfiguration.

  3. Expand Attributes.

  4. Specify values for the expiration configuration attributes listed in Table 4-2.

About MBean Attributes Used to Configure Expired Object Retention Time

Table 4-2 lists MBean attributes and their default values that are used to configure expired object retention time. The maximum allowed retention time is 180 days and the minimum allowed retention time is 0 days.

Table 4-2 MBean Attributes to Configure Expired Object Retention Time

MBean Attribute Default Retention Time (in Days)

expiredAuditRetentionIntervalInDays

60

expiredPurchasedProductRetentionIntervalInDays

30

expiredPurchasedAlterationRetentionIntervalInDays

30

expiredRatingProfileRetentionIntervalInDays

30

defaultExpirationRetentionIntervalInDays

30

defaultExpiredBalanceItemRetentionIntervalInDays

30


Expired audit objects share a single common retention time, whereas other expired objects have individual retention times.

Expired balance items are configured at the balance element level. If there is no configuration for a given balance element, defaultExpiredBalanceItemRetentionIntervalInDays is used. Table 4-3 lists the expired balance elements and example retention times.

Table 4-3 Expired Balance Element Retention Times

Expired Balance Element Retention Time (in Days)

FREE_MIN

60

BONUS_POINTS

15


About Cleaning Up Expired Objects in Usage Requests

When the TERMINATE or CANCEL operation type in a customer's usage request is processed, the following objects are checked for expiration and removed if they have expired.

  • Active sessions

  • Balance reservations

Note:

Expired active sessions and balance reservations are considered for removal immediately. You cannot configure a retention time for these objects.

Expired active sessions and corresponding expired balance reservations are cleaned up so that reserved balances are made available for future usage requests. All expired active sessions are terminated, with no allowances for exceptions. Only used units are considered when terminating expired active sessions. In most instances, cleanup is performed only for the product for which the TERMINATE or CANCEL operation type is being processed, not for all of the customer's products. One exception: if other products share the same balance object as the product being cleaned up, those products are cleaned up as well.

The requests from the active sessions that have expired are converted to TERMINATE requests and processed.

Setting Eviction Policies for the Identity Cache

The identity cache is part of the Elastic Charging Client and stores the public user identity information of customers; the cache is populated as requests are processed. Each time a request comes into the system for a new customer, that customer's identity information is created in the cache. You can set eviction policies for the identity cache to remove entries (or units) from it when a maximum number of units (the high-units parameter) is reached.

The identity cache is configured as a near cache whose front local scheme is a size-limited local cache. The local cache uses a HYBRID eviction policy that combines LRU (least recently used) and LFU (least frequently used) policies. When entries must be evicted from the front scheme (when the configured high-units number is reached), the entries that have been least recently or least frequently used are evicted.

By default, the identity cache is configured with a HYBRID eviction policy and a high-units value of 20,500,000.

For more information about the hybrid eviction policy for Coherence caches, refer to the Oracle Coherence documentation.
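For orientation, a size-limited near cache of this kind is expressed in Coherence cache configuration roughly as follows. The scheme names below are hypothetical; the actual definition lives in ECE_home/oceceserver/config/charging-cache-config.xml:

    <near-scheme>
      <scheme-name>identity-near</scheme-name>            <!-- hypothetical name -->
      <front-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>20500000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>identity-distributed</scheme-ref>   <!-- hypothetical name -->
        </distributed-scheme>
      </back-scheme>
    </near-scheme>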

Configuring Notifications

You can configure ECE to send notifications, either in-session notifications that are part of the usage response or external notifications that are JMS messages. ECE can generate notifications for the following:

  • Notifications for BRM. Notifications that are used by Oracle Communications Billing and Revenue Management (BRM) that are enabled as a best practice when integrating with the BRM system. When you use ECE as a charging engine for BRM, you can trigger notifications for sending information (updates) to BRM from ECE. For more information, see the chapter on implementing ECE with BRM in BRM Elastic Charging Engine Implementation Guide.

  • Notifications for online network mediation. Notifications that are used by online network mediation software programs. These notifications are also used by Diameter Gateway. Typically, these notifications are used to send information to the customer. For more information, see the chapter on sending requests from Diameter Gateway to charging servers in BRM Elastic Charging Engine Implementation Guide.

  • Notifications for policy and charging rules functions (PCRF). Notifications that are used by PCRFs for policy and control. When you use ECE as a subscriber profile repository (SPR) for a PCRF, you can trigger notifications for sending information from ECE to the PCRF. For more information, see the chapter on implementing ECE with a PCRF in BRM Elastic Charging Engine Implementation Guide.

For ECE to publish external notifications, configure the JMS credentials for the JMS server on which the notification queue (JMS topic) resides. See the discussion of implementing ECE with BRM in BRM Elastic Charging Engine Implementation Guide for instructions.

Configuring ECE Data-Loading Utilities and Data Updaters

When you install ECE, you provide information for configuring the following data-loading utilities, which are used for loading data into ECE and updating that data:

  • Data-loading utilities

    • configLoader

  • Data-loading utilities used only for ECE standalone systems

    • pricingLoader

    • customerLoader

  • Data updaters

    • Pricing Updater: Keeps ECE synchronized with Pricing Design Center (PDC)

    • Customer Updater: Keeps ECE synchronized with BRM asynchronously (not in real time)

    • External Manager (EM) Gateway: Keeps ECE synchronized with BRM in real time

To change configurations:

  • For data-loading utilities, see the discussion of data-loading utilities in BRM Elastic Charging Engine Implementation Guide.

  • For Customer Updater, see the discussion of implementing ECE with BRM in BRM Elastic Charging Engine Implementation Guide.

  • For Pricing Updater, see the discussion of implementing ECE with PDC in BRM Elastic Charging Engine Implementation Guide.

  • For EM Gateway, see the discussion of configuring EM Gateway in BRM Elastic Charging Engine Implementation Guide.

For information about asynchronous and synchronous data updates, see the discussion about synchronizing BRM and ECE customer data in BRM Elastic Charging Engine Concepts.

Configuring Usage-Charging Settings

See BRM Elastic Charging Engine Implementation Guide for information about configuring settings that control how Elastic Charging Server processes usage requests.

Updating Subscriber Lifecycle States for BRM

ECE supports the BRM subscriber lifecycle state feature. When new lifecycle states are added in BRM, you must update the ECE lifecycle state configuration so that the BRM information and ECE information remain synchronized. See the discussion of implementing ECE with BRM in BRM Elastic Charging Engine Implementation Guide for information about updating subscriber lifecycle states in ECE.

Adding Diameter Gateway Nodes for Online Charging

The ECE installer process creates a single instance of a Diameter Gateway node (diameterGateway1) that is added to your topology (added to your ECE_home/oceceserver/config/eceTopology.conf file). By default, this single node listens to all network interfaces for Diameter messages.

For a standalone system, a single node is sufficient for basic testing directly after installation; for example, to test whether the Diameter client can send a Diameter request to the Diameter Gateway node. Add additional Diameter Gateway nodes to your topology, configure them to listen on the different network interfaces in your environment, and run performance tests to determine the minimum number of Diameter Gateway nodes needed for your customer base (the number that can handle the normal expected throughput for your system).

When adding Diameter Gateway nodes, note the following:

  • In an ECE integrated system, have a minimum of two charging server nodes per machine (provided the total number of charging server nodes can handle the normal expected throughput for your system). The guideline is to have two Diameter Gateway node instances to allow for failover and additional nodes as needed to handle the expected throughput for your system.

  • In a standalone system, the minimum configuration is three charging server nodes and two Diameter Gateway node instances to allow for failover. Server redundancy is a minimum requirement of ECE installations.

To add Diameter Gateway nodes:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand charging.diameterGatewayConfigurations.

  3. Expand Operations.

  4. Select addDiameterGatewayConfiguration.

  5. In the name parameter, enter a name for the Diameter Gateway node.

  6. Click the addDiameterGatewayConfiguration button.

    You have created a Diameter Gateway node.

    (Optional) To fully configure the Diameter Gateway node now, specify values for all Diameter Gateway node configuration properties. See "Specifying Diameter Gateway Node Properties". Alternatively, create multiple nodes first and configure them later.

  7. To add a peer to the Diameter Gateway node, expand charging.diameterGatewayPeerConfigurations.

  8. Expand Operations and select addPeer.

  9. Specify the value for the following parameter:

    • peerName. Enter the name of the Diameter peer.

  10. Click the addPeer button.

    The peer is added to Diameter Gateway.

  11. Expand charging.diameterGatewayPeerConfigurations.Peer_Name, where Peer_Name is the name of the Diameter peer.

  12. Expand Attributes.

  13. For each peer connected to the Diameter Gateway, configure alternative peers by specifying values for the following attribute:

    • alternatePeerNames. Enter the names of the alternative peers for the specified Diameter peer. You can specify two alternative peers for each Diameter peer. If the peer connected to Diameter Gateway fails or is unavailable, Diameter Gateway routes the notifications to the configured alternate peers.

  14. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  15. Add a row for the Diameter Gateway node instance.

  16. For that row, enter the following:

    • The name of the JVM process that you used when you created the Diameter Gateway node instance in the JMX editor.

    • The role of the JVM process for that node, diameterGateway.

    • The host name of the physical server machine on which the Diameter Gateway node resides.

    • The JVM tuning file that contains the tuning profile for the Diameter Gateway node.

  17. Save the file.
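For example, the new row might look like the following. The column layout is illustrative only; the exact format is documented in the comments in the eceTopology.conf file:

    # node-name      | role            | host name | JVM tuning file
    diameterGateway2 | diameterGateway | host1     | defaultTuningProfile.properties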

Specifying Diameter Gateway Node Properties

For each Diameter Gateway node, you must specify node properties for configuring the node to communicate with your network as well as for tuning the node for optimal performance.

To specify Diameter Gateway node properties:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand charging.diameterGatewayConfigurations.Instance_Name, where Instance_Name is the name of the instance to configure.

  3. Expand Attributes.

  4. Specify values for all the attributes required to configure the instance.

    See Table 4-4 for attribute descriptions and default values.

  5. Change directory to the ECE_home/oceceserver/bin directory.

  6. Start the Elastic Charging Controller:

    ./ecc
    
  7. Do one of the following:

    • If the Diameter Gateway instance is not running, start it.

      The instance reads its configuration information by name at startup.

    • If the Diameter Gateway instance is running, stop and restart it.

    For information about stopping and starting Diameter Gateway instances, see the discussion about starting and stopping ECE in BRM Elastic Charging Engine System Administrator's Guide.

Table 4-4 Diameter Gateway Node Configuration Parameters

Name Default Description

name

"diameterGateway1"

The name of the Diameter Gateway instance.

Name Diameter Gateway node instances consistently and uniquely (for example, diameterGateway1, diameterGateway2, and so on).

If you want to use the same name for Diameter Gateway instances (for example, in a disaster recovery configuration), ensure that the cluster name is unique for each of these instances. ECE uses both name and clusterName to identify the Diameter Gateway instance.

The name you specify must match the name for this Diameter Gateway instance in the node-name column of the ECE_home/oceceserver/config/eceTopology.conf file. If you change the name of an existing instance by using the JMX editor, you must update the name of the instance in the topology file.

clusterName

""

The cluster name of the Diameter Gateway instance.

Name clusters consistently and uniquely (for example, cluster1, cluster2, and so on).

Specify a unique cluster name if you are configuring Diameter Gateway nodes with the same name, which is required for the disaster recovery configuration. For example, you can configure two Diameter Gateway nodes as follows:

Diameter Gateway 1

name="diameterGateway1"
clusterName="cluster1"

Diameter Gateway 2

name="diameterGateway1"
clusterName="cluster2"

The cluster name you specify must match the cluster name for this Diameter Gateway instance in the ECE_home/oceceserver/config/charging-coherence-override-secure-prod.xml file. If you change the cluster name of an existing instance by using the JMX editor, you must update the name of the instance in the Coherence override file.

diameterTrafficPort

"3868"

The port (on the physical host computer that is running the Diameter Gateway node instance) the Diameter Gateway instance listens on for handling Diameter messages.

When adding new Diameter Gateway instances, choose a port number that is not in use by another application.

When multiple Diameter Gateway instances run on the same physical host computer, each instance must use a different port number.

The value set here is used by the Diameter Gateway instance to determine which port to bind to on the server where the Diameter Gateway node instance is running.

diameterTrafficHost

""

The network interface (on the physical or virtual host computer that is running the Diameter Gateway node instance) that the Diameter Gateway node binds to and listens on for Diameter messages (sent from Diameter clients).

The Diameter Gateway instance uses this value to determine which network interface to bind to on the server where the Diameter Gateway node instance is running.

The value can be either an IP address or a host name. The value can also be an empty string.

  • If the value is an IP address or a host name, the Diameter Gateway instance listens for Diameter messages only on that one network interface.

  • If the value is an empty string (default), the Diameter Gateway instance listens for Diameter messages on all network interfaces available on the server.

diameterTrafficHostSctp

""

When SCTP is used, the network interface (on the physical or virtual host computer that is running the Diameter Gateway node instance) that the Diameter Gateway node binds to and listens on for Diameter messages (sent from Diameter clients).

The Diameter Gateway instance uses this value to determine which network interface to bind to on the server where the Diameter Gateway node instance is running.

The value can be either an SCTP IP address or host name or multiple SCTP IP addresses or host names. The value can also be an empty string.

For a multihomed system, multiple IP addresses can be specified, separated with a comma (,).

For example:

10.240.179.147,10.240.182.149

  • If the value is one or more SCTP IP addresses or host names, SCTP transport is enabled and the Diameter Gateway instance supports Diameter messages that use SCTP transport on those network interfaces.

  • If the value is an empty string (default), SCTP transport is disabled.

To use this configuration, your operating system must support SCTP. If it does not, install the SCTP system package for your operating system version.

originHost

n/a

Note: Setting a value for this field is mandatory.

Enter the value for the Origin-Host attribute-value pair (AVP) to be sent in the Diameter request. This is a unique identifier that you assign your Diameter Gateway server on its host. It can be any string value. The value set here is used by the Diameter client to identify your Diameter Gateway server (at the application layer) as the connecting Diameter peer that is the source of the Diameter message.

originRealm

n/a

Note: Setting a value for this field is mandatory.

Enter the value for the Origin-Realm AVP to be sent by the Diameter Gateway in outgoing Diameter requests. This is the signaling realm (domain) that you assign your Diameter Gateway server.

You must set the same origin realm value for all Diameter Gateway instances in the same ECE topology. The value set here is used by Diameter clients to identify your Diameter Gateway server as the source of the Diameter message.

loopback

"false"

Specifies the loopback setting for performance testing.

Valid values are:

  • true. Specifies that the Diameter Gateway instance does not send the credit-control request to ECE. Instead, the Diameter Gateway instance returns the success result code to the network element.

  • false. Specifies that the Diameter Gateway instance sends the credit-control request to ECE.

ioThreadPoolSize

"10"

The number of threads used by the network I/O thread pool, which the Diameter Gateway node instance uses for sending and receiving Diameter requests over a network socket using TCP.

Valid values are greater than zero and up to any number the system resources allow.

responseTimeout

"10"

The maximum duration in seconds that the Diameter Gateway instance waits for a response from the Diameter client for a notification message the Diameter Gateway has sent to it. If the Diameter Gateway instance does not receive a response from the Diameter client within the specified duration, the Diameter Gateway instance stops waiting for a response and removes the notification from the JMS queue.

Valid values are greater than zero and up to any number the system resources allow. Tune this value to the expected workload in the deployed environment.

requestProcessorThreadPoolSize

"10"

The number of threads used by the request-processor thread pool.

The request-processor thread pool is a Diameter Gateway thread pool that is dedicated to processing Diameter requests handed off to it from the I/O thread pool.

Valid values are greater than zero and up to any number the system resources allow. Tune this value to the expected workload in the deployed environment.

requestProcessorBatchSize

"10"

The batch size of the Diameter requests handed off by the network I/O thread pool to the request-processor thread pool.

Valid values are greater than zero and up to any number the system resources allow. Tune this value to the expected workload in the deployed environment.

watchDogInterval

"30"

The duration in seconds that the Diameter Gateway instance waits before it issues a Device-Watchdog-Request message (DWR).

notificationThreadPoolSize

"10"

The number of threads used by the Diameter Gateway instance to process notification messages.

Valid values are greater than zero and up to any number the system resources allow. Tune this value to the expected workload in the deployed environment.

maxNotificationCommitSize

"100"

The maximum number of dequeued notification messages from the JMS topic that can remain uncommitted.

If the number of dequeued notification messages from the JMS topic exceeds this number, the Diameter Gateway instance stops reading messages until the read messages are committed.

ccFailover

"FAILOVER_SUPPORTED"

Indicates whether the Diameter Gateway instance is operating in a cluster that supports failover.

Valid values are:

  • "FAILOVER_SUPPORTED"

  • "FAILOVER_NOT_SUPPORTED"

The value set here is the value the Diameter Gateway instance sends for the CC-Session-Failover AVP in all credit-control answers (CCAs) that the instance produces.

For more information, see Diameter Credit-Control Application standard at:

https://tools.ietf.org/html/rfc4006#section-8.4

creditControlFailureHandling

"RETRY_AND_TERMINATE"

Indicates how the Diameter client should proceed if a CCA is not received prior to the Tx timeout.

Valid values are:

  • "TERMINATE"

  • "CONTINUE"

  • "RETRY_AND_TERMINATE"

The value set here is the value the Diameter Gateway instance sends for the Credit-Control-Failure-Handling AVP in all CCAs that the instance produces.

For more information, see Diameter Credit-Control Application standard at:

https://tools.ietf.org/html/rfc4006#section-8.14

directDebitingFailureHandling

"TERMINATE_OR_BUFFER"

Indicates how the Diameter client should proceed if a Direct Debit CCA is not received prior to the Tx timeout.

Valid values are:

  • "TERMINATE_OR_BUFFER"

  • "CONTINUE"

The value set here is the value the Diameter Gateway instance sends for the Direct-Debiting-Failure-Handling AVP in all credit-control answers (CCAs) that it produces.

For more information, see Diameter Credit-Control Application standard at:

https://tools.ietf.org/html/rfc4006#section-8.15


Adding RADIUS Gateway Nodes for Authentication and Accounting

During ECE installation, if you specified that RADIUS Gateway must be started when ECE is started, the ECE installer process creates a single instance (node) of RADIUS Gateway (radiusGateway1) that is added to your topology (added to your ECE_home/oceceserver/config/eceTopology.conf file). By default, this single node listens for RADIUS messages.

For a standalone system, a single node is sufficient for basic testing directly after installation; for example, to test whether the RADIUS client can send a RADIUS request to the RADIUS Gateway node. Add additional RADIUS Gateway nodes to your topology, configure them to listen on the different network interfaces in your environment, and test performance to determine the minimum number of RADIUS Gateway nodes needed for your customer base.

When adding RADIUS Gateway nodes, note the following:

  • In an ECE integrated system, have two RADIUS Gateway nodes to allow for failover and additional nodes as needed to handle the expected throughput for your system.

  • For a standalone system, the minimum configuration is two RADIUS Gateway nodes to allow for failover.

To add RADIUS Gateway nodes:

  1. Log on to the driver machine.

  2. Change directory to ECE_home/oceceserver/bin.

  3. Start the Elastic Charging Controller:

    ./ecc
    
  4. Access the ECE MBeans:

    1. Start the ECE charging servers (if they are not started).

    2. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    3. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    4. In the editor's MBean hierarchy, expand the ECE Configuration node.

  5. Expand charging.radiusGatewayConfigurations.

  6. Expand Operations.

  7. Select addRadiusGatewayConfiguration.

  8. In the name parameter, enter a name for the RADIUS Gateway node.

  9. Click the addRadiusGatewayConfiguration button.

    You have created a RADIUS Gateway node.

  10. Open the ECE_home/oceceserver/config/eceTopology.conf file.

  11. Add a row for the RADIUS Gateway node instance.

  12. For that row, enter the following information (see the sample row following these steps):

    • Name of the JVM process for the node instance.

      Enter the name used when the node instance was created in the JMX editor.

    • Role of the JVM process for the node instance.

      Enter the role radiusGateway.

    • Host name of the physical server machine on which the node resides.

    • JVM tuning file that contains the tuning profile for the node.

  13. Save the file.
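
For reference, a row for a new RADIUS Gateway node might look like the following. This is a hypothetical sketch assuming a pipe-delimited layout; the exact column order, plus any additional columns (such as the JMX port and the CohMgt flag), must follow the format of the rows already present in your file. The values radiusGateway2, host1.example.com, and defaultTuningProfile.xml are placeholders:

    radiusGateway2 | radiusGateway | host1.example.com | defaultTuningProfile.xml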

After adding the RADIUS Gateway nodes, you must specify the node properties for configuring the nodes to communicate with your network and for tuning the node for optimal performance. For information on the RADIUS Gateway configuration properties and default values, see "Configuring RADIUS Gateway Nodes".

Configuring RADIUS Gateway Nodes

You must configure each RADIUS Gateway node to communicate with your network and to perform optimally.

To configure RADIUS Gateway nodes:

  1. Access the ECE MBeans:

    1. Log on to the driver machine.

    2. Start the ECE charging servers (if they are not started).

      See "Starting and Stopping ECE" in ECE System Administrator's Guide.

    3. Start a JMX editor, such as JConsole, that enables you to edit MBean attributes.

    4. Connect to the ECE charging server node set to start CohMgt = true in the ECE_home/oceceserver/config/eceTopology.conf file.

      The eceTopology.conf file also contains the host name and port number for the node.

    5. In the editor's MBean hierarchy, expand the ECE Configuration node.

  2. Expand charging.radiusGatewayConfigurations.

  3. Expand Attributes.

  4. Specify values for the attributes listed in Table 4-5:

    Note:

    Changing the values of base attributes affects all RADIUS Gateway node instances in your system.

    Table 4-5 RADIUS Gateway Node Base Configuration Attributes

    Name Default Description

    avpName

    "Service-Type"

    The name of the attribute value pair (AVP) that is used to determine the product type during authentication. This is used in conjunction with vendorId.

    timeToLive

    "30000"

    The expiry time (in milliseconds) for the RADIUS requests stored in the ECE cache.

    wallet

    "opt/wallet"

    The path to the Oracle wallet file containing the SSL trusted certificates and the BRM root key for RADIUS Gateway. When RADIUS Gateway is started, the BRM root key from the Oracle wallet file is stored in memory.

    keyPass

    "@KEY_PASS@"

    The key password required for accessing certificates in the keystore.jks file. This password is stored in encrypted format.

    queueSize

    "8"

    The number of incoming requests that the RADIUS server can process simultaneously. Adjust the queue size to correspond to the number of threads.

    keyStoreLocation

    "@KEY_STORE_LOCATION"

    The path to the keystore.jks file that contains the certificates to support Extensible Authentication Protocol - Tunneled Transport Layer Security (EAP TTLS) authentication.

    vendorId

    "0"

    The vendor ID of the AVP that you configured to determine the product type. This is used in conjunction with avpName.

    enableRetransmissionChecks

    "true"

    The flag that enables or disables the duplicate packet detection feature. This feature is enabled by default. RADIUS Gateway uses it to identify duplicate requests from RADIUS clients by validating incoming requests against the requests stored in the ECE cache.

    Note: You must restart RADIUS Gateway after enabling or disabling the duplicate packet detection feature.


  5. Expand charging.radiusGatewayConfigurations.Instance_Name, where Instance_Name is the name of the RADIUS Gateway node to configure.

  6. Expand Attributes.

  7. Specify values for the attributes listed in Table 4-6:

    Table 4-6 RADIUS Gateway Node Instance Configuration Attributes

    Name Default Description

    radiusTrafficPort

    "1812"

    The number assigned to the port on which RADIUS Gateway listens. Add one radiusTrafficPort entry for each port on which you want RADIUS Gateway to listen.

    name

    "radiusGateway1"

    The name of the RADIUS Gateway instance.

    Name RADIUS Gateway node instances consistently and uniquely (for example, radiusGateway1, radiusGateway2, and so on).

    The name you specify must match the name for this RADIUS Gateway instance in the node-name column of the ECE_home/oceceserver/config/eceTopology.conf file. If you change the name of an existing instance by using the JMX editor, you must update the name of the instance in the topology file.

    noOfChallenges

    "1"

    The maximum number of challenges that can be sent to RADIUS clients when Challenge-Handshake Authentication Protocol (CHAP) is used for authentication. During authentication, a random number up to this value is chosen as the number of challenges for a given authentication session.

    If the password is authenticated successfully, the challenge process begins and an Access-Challenge message is sent in reply to the request. If any challenge response fails authentication, an Access-Reject message is sent. If all challenge responses succeed, an Access-Accept message is sent.

    sharedSecret

    "e59VPnxr1o5+FGW97w/aMA=="

    The common password shared between RADIUS Gateway and the Network Access Server (NAS). The RADIUS protocol uses it for security. Each RADIUS Gateway instance must have a unique password, stored in encrypted format.

    ioThreadPoolSize

    "16"

    The number of I/O threads, which determines the maximum number of requests that RADIUS Gateway can process simultaneously. Increase the number of threads to increase server throughput; reduce it to reduce throughput.

    There is no single criterion for setting the number of threads. Many factors affect the number of threads required, such as the cache size of each CPU, memory size, and swap size. Systems can typically handle up to eight threads per CPU. On production systems, set this value higher.


  8. Expand charging.radiusGatewayEapPriorityConfiguration.

  9. Expand Operations.

  10. Select addEapType.

  11. Specify values for the parameters listed in Table 4-7:

    Table 4-7 RADIUS Gateway Node Extensible Authentication Protocol (EAP) Parameters

    Name Default Description

    id

    "21"

    The unique identifier of the Extensible Authentication Protocol (EAP) type used for authentication.

    name

    "TTLS"

    The name of the EAP type used for authentication. This is associated with the EAP ID. By default, the following EAP types are supported for authentication: TTLS and MD5.

    priority

    "1"

    The priority set for the EAP type. 1 is the highest priority.


  12. Click the addEapType button.

  13. Change directory to the ECE_home/oceceserver/bin directory.

  14. Start the Elastic Charging Controller:

    ./ecc
    
  15. Do one of the following:

    • If the RADIUS Gateway instance is not running, start it.

      The instance reads its configuration information by name at startup.

    • If the RADIUS Gateway instance is running, stop and restart it.

    For information about stopping and starting RADIUS Gateway instances, see "Starting and Stopping RADIUS Gateway".
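
For example, from within the ECC shell, a role-based restart might look like this (a minimal sketch: the start command is shown later in this chapter, and the matching stop command is assumed to accept the same role argument):

    stop radiusGateway
    start radiusGateway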

Customizing the RADIUS Data Dictionary

This section covers customizing the RADIUS data dictionary.

About the RADIUS Data Dictionary

The data dictionary includes a list of AVPs that are used by RADIUS Gateway to perform authentication and accounting operations. The RADIUS data dictionary contains the standard AVPs that are prescribed in RADIUS Request for Comments (RFC) 2865, 2866, and 2869, and also some sample vendor-specific attributes. You can use the sample vendor-specific attributes as a template for adding custom vendor-specific attributes. The default location of the RADIUS data dictionary file is ECE_home/config/radius/radiusDictionary.xml.

Important:

Do not remove, rename, or move the RADIUS data dictionary file to a different location.

Creating a Custom Data Dictionary

You can create a custom data dictionary file by using the ECE_home/config/radius/radiusDictionary.xml file as a template. The default location for your custom data dictionary file is ECE_home/config/radius/custom/dictionary_file, where dictionary_file is the name of your custom data dictionary file. You can add new vendor-specific attributes to your custom data dictionary file. See "Adding Custom Vendor-Specific Attributes".

Selecting a RADIUS Data Dictionary When Using Different NAS Vendors

If you must use NAS servers from multiple vendors, you have the following options:

  • If your NAS is RFC 2865 compliant, you can use the RFC2865 data dictionary. This is the preferred solution. Update the dictionary file with any vendor-specific attributes associated with the NAS.

  • If your NAS is not RFC 2865 compliant, you can use the RADIUS data dictionary files for adding vendor-specific attributes. See "Adding Custom Vendor-Specific Attributes" for more information.

Adding Custom Vendor-Specific Attributes

In special cases where you use NAS servers from multiple vendors, you must add the vendor and its vendor-specific attributes to your custom data dictionary file.

The syntax for adding a vendor-specific attribute is:

<?xml version="1.0" encoding="UTF-8"?>
    <dictionary schemaLocation= "radiusDictionary.xsd"
       <vendor value="vendor_ID"name="vendor_name"/>
       </attribute name="attribute_name" vendor="vendor_name" syntax="data_type" code="attribute_ID"/>
     /dictionary>

Table 4-8 lists the vendor-specific attribute values and descriptions.

Table 4-8 Vendor-specific Attribute Values

Parameters Description

vendor_ID

Number used to identify the NAS or gateway vendor. These vendor IDs (private enterprise numbers) are assigned by the Internet Assigned Numbers Authority (IANA). See your vendor's documentation for details.

Some common vendor identification numbers are:

  • 9 (Cisco)

  • 10415 (3GPP)

  • 2636 (Juniper)

vendor_name

Name of the vendor.

attribute_name

Name of the attribute. This must be unique.

Important: Do not use the same attribute name as used in the default RADIUS data dictionary file. Using the same attribute name in the custom data dictionary file overrides the attribute values in the default RADIUS data dictionary file.

attribute_ID

Identification number assigned to the attribute in the dictionary.

data_type

Any one of the following data types:

  • UnsignedInt

    32-bit unsigned value in big endian order (high byte first).

  • Integer

    32-bit value in big endian order (high octet first).

  • String

    0-253 octets

  • Ipaddr

    4 octets in network octet order

  • Binary

    0-254 octets

  • Password

    (n * 16) (>= 16) octets. This field is encrypted according to the User-Password AVP in RFC 2865.

  • Short

    16-bit value

  • Octet

    8-bit value

  • ifid

    IPv6 interface ID

  • ipv6addr

    IPv6 address

  • date

    UNIX timestamp in seconds (since January 1, 1970 GMT)
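
As an illustration, the following hypothetical entry defines the well-known Cisco-AVPair vendor-specific attribute (vendor ID 9, attribute code 1, String syntax) using the syntax shown above. Verify attribute names and codes against your vendor's documentation before adding them:

<?xml version="1.0" encoding="UTF-8"?>
    <dictionary schemaLocation="radiusDictionary.xsd">
       <vendor value="9" name="Cisco"/>
       <attribute name="Cisco-AVPair" vendor="Cisco" syntax="String" code="1"/>
    </dictionary>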


Loading the RADIUS Mediation Specification Data

RADIUS Gateway uses the RADIUS mediation specification data to determine which product and event type combination and network mapping applies to an incoming request from the RADIUS client.

To load the RADIUS mediation specification data:

  1. Create a mediation specification file or open the sample RADIUS mediation specification file.

    A sample mediation specification file (ECE_home/oceceserver/sample_data/config_data/specifications/ece_simple) is available.

    Important:

    Create only one RADIUS mediation specification file to represent the mediation specification for RADIUS Gateway.
  2. Load the pricing data from PDC into ECE.

    For every event definition that contains charging operation types (for example, Initiate) loaded into ECE from PDC, ECE generates network mapping files.

    See the discussion about loading pricing data from PDC in BRM Elastic Charging Engine Implementation Guide.

  3. Add a row to the mediation specification table for each new product to be rated, specifying the following information:

    • Service-Identifier AVP

      A unique identifier of the service. The Service-Identifier AVP value is sent in the RADIUS request. "null" is valid if the field is not expected to be present in the request.

    • ProductType

      The product type that you have defined for the event in its associated request specification.

    • EventType

      The event type that you have defined for the event in its associated request specification.

    • Version

      The version number of the request specification that you want to apply to the event.

    • ValidFrom

      A future date and time when you want RADIUS Gateway to recognize a newly deployed request specification.

      To have requests processed according to a new specification, you would enter:

      yyyy-mm-ddThh:mm:ss [timezone]

      If timezone is not specified, it defaults to UTC.

    • Network-Mapping-FileName

      The name of the network mapping file generated for the product and event combination.

    See Example 4-1 for a sample entry in the RADIUS mediation specification file.

  4. Open the ECE_home/oceceserver/config/management/migration-configuration.xml file.

  5. Search for the configObjectsDataDirectory parameter and copy the value. For example:

    configObjectsDataDirectory = ECE_home/oceceserver/sample_data/config_data
    
  6. Save the mediation specification file to that same directory.

  7. Load the file into the ECE server by running the following command:

    start configLoader
    

    The utility loads the RADIUS mediation specification data into the ECE cluster, reading it from the directory specified by the configObjectsDataDirectory parameter. Because mediation specification files have the same name, any existing RADIUS mediation specification data in the ECE cluster is overwritten.
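
Putting these steps together, a typical loading sequence from the driver machine is shown below; the configLoader utility runs from within the ECC shell, as in the procedures earlier in this chapter:

    cd ECE_home/oceceserver/bin
    ./ecc
    start configLoader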

Example 4-1 Sample RADIUS Mediation Specification Entry

RadiusMediationTable {
Service-Identifier| ProductType | EventType | Version |  ValidFrom |  Network-Mapping-FileName|
 "1" | "TelcoGprs" | "EventDelayedSessionTelcoGprs" | 2.0 | "2010-12-31T12:01:01 PST" | "EventDelayedSessionTelcoGprs_TelcoGprs.xml" |
}

When you load the RADIUS mediation specification data into the ECE cluster, RADIUS Gateway re-creates its in-memory usage-request builder map and uses the mapping definitions to send requests to ECE.

About Mapping RADIUS Network Attributes to Event Attributes

To process requests from RADIUS clients, you map network attributes from RADIUS clients to the corresponding event attributes in ECE. You do this by editing the network mapping file. When you load the pricing data from PDC into ECE, ECE generates a network mapping file for each product and event combination. Some default network mappings are preconfigured in the generated files. You can update the default values in these files.

RADIUS Gateway uses this mapping in ECE to process requests by dynamically mapping the values of the network attributes in the RADIUS request to the corresponding event attributes in ECE.

Mapping RADIUS Network Attributes to Event Attributes

If you add or remove an event attribute from the event definition in PDC, you must add or remove the corresponding network attributes in ECE. You do this by editing the network mapping file in ECE.

Before you map the attributes, load the RADIUS mediation specification file. See "Loading the RADIUS Mediation Specification Data" for more information.

To map network attributes to event attributes:

  1. Load the pricing data from PDC into ECE.

    Mapping files are generated automatically when the pricing data is published from PDC to ECE.

    See the discussion about loading pricing data from PDC in BRM Elastic Charging Engine Implementation Guide.

    For every event definition that contains charging operation types (for example, Initiate) loaded into ECE from PDC, ECE generates the network mapping files. The network mapping files are stored in the directory specified by the configObjectsDataDirectory parameter in the ECE_home/oceceserver/config/management/migration-configuration.xml file.

    A sample network mapping file is available in the ECE_home/oceceserver/sample_data/config_data/specifications/ece_end2end/network_mapping directory. You can use it as a reference for mapping the attributes.

  2. Open a network mapping file in a text editor.

  3. Ensure that the ORIGIN_NETWORK event attribute is added as a top-level attribute in the network mapping file.

  4. Map the network attributes to the event attributes by doing the following:

    1. Search for the event attribute that you want to map to the network attribute.

    2. Add the following entry:

      <networkField>NetworkAttribute</networkField>
      

      where NetworkAttribute is the name of the attribute in the requests received from RADIUS clients.

      For example:

      <attributeMapping type="RadiusMediationEntries">
        <attribute>
          <name>TERMINATE_CAUSE</name>
          <networkField>Acct-Terminate-Cause</networkField>
        </attribute>
      </attributeMapping>
      
  5. Save and close the file.

    Important:

    Verify that the name of this network mapping file is specified in the RADIUS mediation specification file.
  6. Load the network mapping data by doing one of the following:

    • If RADIUS Gateway is running, run the following command:

      start configLoader loadNetworkMapping
      
    • If RADIUS Gateway is not running, run the following commands:

      start customerUpdater
      start radiusGateway
      

      The network mapping data is loaded into the ECE cluster. Any existing network mapping data available for the product and event specification in the ECE cluster is overwritten. ECE is now in a usage-processing state, where it can accept requests from RADIUS Gateway.

When you load the network mapping into the ECE cluster, RADIUS Gateway re-creates its in-memory usage-request builder map and begins using the latest mapping definitions to send requests to ECE.