13 Configuring a High-Availability System

This chapter provides guidelines for configuring an Oracle Communications Billing and Revenue Management (BRM) high-availability system for real-time processing of prepaid and postpaid services.

For an overview of a high-availability BRM system, see "Understanding a High-Availability System".

About Setting Up a High-Availability System

To create a high-availability system, you must install and configure at least two instances of each component and then connect each instance to all instances of the component's server-side peer.

To create a high-availability BRM system, follow these guidelines:

  • Install and configure the following components:

    • A single-schema or multischema BRM database.

    • One Oracle Real Application Clusters (Oracle RAC) instance for each database schema, and at least one additional Oracle RAC instance to use as a backup.

    • Oracle Clusterware.

    • One or more pairs of active and standby Oracle In-Memory Database Cache (Oracle IMDB Cache) instances for each database schema. (Oracle IMDB Cache instances are also called data stores.)

    • One or more pairs of active and standby IMDB Cache Data Managers (DMs) for each database schema.

    • At least two Connection Managers (CMs). Use as many as you estimate will support your workload, and then add one more.

    • One real-time pipeline for each CM.

      Important:

      Duplicate components should reside on different physical hosts and use different power sources. They should not be located in the same rack. If you use blade servers, install duplicate components in different blade chassis. (This also applies to active and standby Oracle IMDB Cache and IMDB Cache DM instances.)

      For more information about configuring these components in a high-availability system, see the remaining sections of this chapter.

  • Connect each instance of a component to all instances of the component's server-side peer. For example, configure each CM to connect to all the IMDB Cache DMs.

  • When setting timeout periods for processing a request, ensure that each client-side (or calling) component has a longer timeout period than the server-side component that it calls.

    The timeout period is the maximum amount of time that a calling component waits for a response after sending a request. It should be longer than the called component's typical response time (latency).

    Make the timeout period long enough to accommodate slow responses caused by overload, so that an occasional slow response is still received rather than being treated as no response.

    Table 13-1 lists the timeout settings that affect component failover in a high-availability system. The components are listed in the order that they process requests received from the network.

    Table 13-1 Timeout Settings for High Availability Systems

    Component: Connection Manager
    Timeout setting: pcm_timeout_in_msecs
    Description: Specifies the amount of time that the CM waits for a response from an IMDB Cache DM before failing over to the next IMDB Cache DM in the dm_pointer list. This entry is known as the CM's long (failover) timeout. See "Connecting CMs to IMDB Cache DMs".
    Suggested value: 100 seconds

    Component: Connection Manager
    Timeout setting: pcm_bad_connection_retry_delay_time_in_secs
    Description: Specifies the interval at which the CM tries again to connect to the active IMDB Cache DM after it fails to connect. The interval should be long enough for the CM to reestablish a connection with the IMDB Cache DM. See "Connecting CMs to IMDB Cache DMs".
    Suggested value: 100 seconds

    Component: Oracle IMDB Cache
    Timeout setting: LockWait
    Description: Specifies the number of seconds that Oracle IMDB Cache (the data store) waits to acquire a lock for the BRM database before returning a timeout error to the IMDB Cache DM. See "Creating and Configuring Active Data Stores".
    Suggested value: 30 seconds

    Component: Oracle RAC database
    Timeout setting: FAST_START_MTTR_TARGET
    Description: Specifies the time limit for Oracle RAC instance recovery. See "Minimizing Recovery Time of Oracle RAC Instances".
    Suggested value: 30 seconds


    For a diagram of the data processing flow in a high-availability system, see "About the Architecture of a High-Availability BRM System".

    Important:

    Timeout periods must be long enough to accommodate slow responses caused by overload.
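
For example, with the suggested values in Table 13-1, the client-side CM timeout (100 seconds) exceeds both the server-side data store lock wait and the Oracle RAC recovery target (30 seconds each). The following excerpts are illustrative; the entries themselves are described in detail later in this chapter:

# BRM_home/sys/cm/pin.conf: the CM waits up to 100 seconds before failing over
- cm pcm_timeout_in_msecs 100000

# IMDB_home/info/sys.odbc.ini: the data store returns a lock timeout error after 30 seconds
LockWait=30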

Configuring the BRM Database for High Availability

The BRM database in a high-availability system is a single-schema or multischema BRM database running on Oracle Real Application Clusters (Oracle RAC).

The following sections explain how to configure Oracle RAC for failover in a high-availability system.

Note:

Standby databases and database recovery are outside the scope of this chapter.

Setting Up Oracle RAC for Failover in a High-Availability System

Figure 13-1 shows the configuration of Oracle RAC in a basic high-availability system. Dashed lines represent backup connections.

Figure 13-1 Oracle RAC Configuration for a Basic High-Availability System


To configure Oracle RAC for failover in a high-availability system:

  1. Set up an Oracle RAC instance for each database schema in your system and then add at least one more Oracle RAC instance to function as the backup. For example, if you have three schemas, set up at least four Oracle RAC instances.

    For information about how to set up an Oracle RAC system, see the Oracle RAC documentation.

    Note:

    The number of backup Oracle RAC instances depends on your high-availability requirements and on the number of primary Oracle RAC instances. Typically, one backup Oracle RAC instance should be configured for every four or five primary Oracle RAC instances. (This recommendation assumes that after a failed Oracle RAC instance is fixed, you switch the database service that it originally supported back to it.)

    The Oracle RAC instances should reside on different physical hosts and use different power sources.

  2. Configure Oracle database services. See "Configuring Oracle Database Services".

  3. Add entries for the Oracle database services to all tnsnames.ora files that will be referenced by the IMDB Cache DMs in your system. See "Defining Connections to the Oracle Database Services".

  4. Configure your IMDB Cache DMs to connect to the Oracle RAC instances. See "Connecting IMDB Cache DMs to Oracle RAC Instances for High Availability".

Configuring Oracle Database Services

You use Oracle database services to connect IMDB Cache DMs to Oracle RAC instances. To create a high-availability system, you must map each database service to one primary Oracle RAC instance and to one backup Oracle RAC instance.

Note:

In Oracle RAC systems, the primary Oracle RAC instance is called the preferred instance, and the backup Oracle RAC instance is called the available instance.

For example, if your system has four database schemas and five Oracle RAC instances, configure the database services as shown in Table 13-2:

Table 13-2 Example Database Service Configuration

Database Service    Primary Oracle RAC Instance    Backup Oracle RAC Instance
Service1            Oracle RAC instance 1          Oracle RAC instance 5
Service2            Oracle RAC instance 2          Oracle RAC instance 5
Service3            Oracle RAC instance 3          Oracle RAC instance 5
Service4            Oracle RAC instance 4          Oracle RAC instance 5


To create the services in the preceding table, log on to any Oracle RAC node as the Oracle database administrator, and run the following commands:

srvctl add service -d racDatabaseName -s service1 -r racInstanceName1 -a racInstanceName5 -P Basic
srvctl add service -d racDatabaseName -s service2 -r racInstanceName2 -a racInstanceName5 -P Basic
srvctl add service -d racDatabaseName -s service3 -r racInstanceName3 -a racInstanceName5 -P Basic
srvctl add service -d racDatabaseName -s service4 -r racInstanceName4 -a racInstanceName5 -P Basic

For information about the srvctl command, see the Oracle RAC documentation.

You must also configure each service so that its clients are notified if the backup Oracle RAC instance becomes unreachable or network connections fail. To do this, enable Fast Application Notification (FAN) for each database service. For information about FAN, see Automatic Workload Management with Oracle Real Application Clusters on the Oracle Technology Network (http://www.oracle.com/technetwork/index.html).

Optionally, you can enable Transparent Application Failover (TAF) in Basic failover mode for the database services. TAF is not required by IMDB Cache DMs, but it benefits other BRM applications that connect directly to the database, such as Rated Event Loader. In Basic mode, applications connect to a backup Oracle RAC node only after their connection to the primary Oracle RAC node fails. This approach has low overhead, but end users might experience a delay while the new connection is created. For more information about TAF, see the Oracle database documentation.
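
For example, the following command enables TAF in Basic mode for Service1, using the same retry and delay values as the connect descriptor shown later in this chapter. This is a sketch that assumes the srvctl modify service options of your Oracle RAC release; verify the syntax in the Oracle RAC documentation before using it:

srvctl modify service -d racDatabaseName -s service1 -P BASIC -e SELECT -m BASIC -z 180 -w 5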

Defining Connections to the Oracle Database Services

Perform the following procedure in each tnsnames.ora file that is referenced by an IMDB Cache DM in your system.

To define connections to the Oracle database services:

  1. Open the tnsnames.ora file in a text editor.

    By default, that file is in the Oracle_home/network/admin/ directory.

  2. For each database service, add the following connect descriptor (a filled-in example appears after this procedure):

    connectionString =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = primaryRacInstanceHostName)(PORT = oraHostPortNo))
        (ADDRESS = (PROTOCOL = TCP)(HOST = backupRacInstanceHostName)(PORT = oraHostPortNo))
        (LOAD_BALANCE = OFF)
        (CONNECT_DATA = 
          (SERVER = DEDICATED)
          (SERVICE_NAME = serviceName)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    

    where:

    • connectionString is the connection name. The sm_database entry in the IMDB Cache DM pin.conf file must match this entry.

    • ADDRESS defines a single listener protocol address. Add an entry for the primary Oracle RAC instance's listener and the backup Oracle RAC instance's listener.

    • primaryRacInstanceHostName is the name of the computer on which the service's primary Oracle RAC instance resides.

    • backupRacInstanceHostName is the name of the computer on which the service's backup Oracle RAC instance resides.

    • oraHostPortNo is the port number for the Oracle database on the host computer. Typically, this number is 1521.

    • LOAD_BALANCE specifies whether to distribute connection requests across the listeners specified in the ADDRESS entries. For high-availability BRM systems, set this to OFF.

    • serviceName specifies the name of the database service. The sm_svcname entry in the IMDB Cache DM pin.conf file must match this entry.

    • For FAILOVER_MODE:

      TYPE = SELECT specifies that work in progress is not lost when failover occurs from the primary to the backup Oracle RAC node.

      METHOD = BASIC specifies that applications connect to a backup Oracle RAC node only after their connection to the primary Oracle RAC node fails. Backup connections are not preestablished.

  3. Save and close the file.
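
For reference, the following filled-in descriptor applies the template from step 2 to Service1 from Table 13-2. The connection name (service1Conn) and host names are hypothetical, and the port is the typical 1521:

    service1Conn =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rachost1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = rachost5)(PORT = 1521))
        (LOAD_BALANCE = OFF)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = service1)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )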

Connecting IMDB Cache DMs to Oracle RAC Instances for High Availability

To connect each IMDB Cache DM to its primary and backup Oracle RAC instances in a high-availability BRM system:

  1. Open the IMDB Cache DM configuration file in a text editor:

    BRM_home/sys/dm_tt/pin.conf

  2. Set the sm_database entry to the appropriate connect descriptor (a combined example appears after this procedure):

    - dm sm_database connectionString
    

    connectionString must match the appropriate connectionString entry in the tnsnames.ora file referenced by the IMDB Cache DM.

  3. Add the following entry:

    - dm sm_svcname serviceName
    

    serviceName must match the SERVICE_NAME entry in the connect descriptor specified in the preceding step.

  4. Save and close the file.

  5. Stop and restart the IMDB Cache DM. See "Starting and Stopping the BRM System".

  6. Repeat this procedure for each IMDB Cache DM in your high-availability system.
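
For example, an IMDB Cache DM that uses the hypothetical service1Conn connect descriptor and service1 database service shown earlier in this chapter would contain the following entries:

    - dm sm_database service1Conn
    - dm sm_svcname service1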

Minimizing Recovery Time of Oracle RAC Instances

To minimize the recovery time of Oracle RAC instances, use the FAST_START_MTTR_TARGET initialization parameter, which limits the amount of redo that must be applied during instance recovery. When setting this parameter, you must balance system performance against failure recovery time. Use the following values:

  • 0—Disables the parameter. In this case, blocks containing modified data not yet written to disk (dirty blocks) are flushed mainly during checkpoints triggered by redo log switches, so instance recovery time depends primarily on the amount of redo log data to apply at the time of failure. If the redo logs are huge and the current log is almost full, recovery might take several hours.

  • 1 through 3600—Specifies the time limit in seconds for database instance recovery. To meet this target, the Oracle database adjusts the frequency of checkpoint creation. It might need to proactively flush dirty blocks from the database cache to disks. This requires additional input/output operations, which can degrade performance.

For example:

alter system set FAST_START_MTTR_TARGET=30;

The preceding command sets database instance recovery time to 30 seconds. This value is a recommended starting point, but you should test it in your environment to find the optimal setting.
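
To see how the current setting behaves in your environment, you can compare the target with the estimated recovery time by querying the standard V$INSTANCE_RECOVERY view (a quick check, not part of the BRM configuration):

sqlplus / as sysdba
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;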

Important:

The timeout period of all BRM components in a high-availability system should be greater than FAST_START_MTTR_TARGET.

When FAST_START_MTTR_TARGET is set to a short time period, such as 30 seconds, you can further reduce service downtime by lowering the database cluster heartbeat interval. (By default, Oracle RAC waits 30 seconds before resetting a node after the loss of its heartbeat.) A very short heartbeat interval, however, might result in unnecessary node resets due to network blips.
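
If you decide to tune the heartbeat interval, the relevant Oracle Clusterware setting is the CSS misscount value. The following commands are a sketch that assumes the crsctl syntax of your Oracle Clusterware release; check the Oracle Clusterware documentation before changing this value:

crsctl get css misscount
crsctl set css misscount 30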

Note:

Failure of one Oracle RAC node interrupts service in all Oracle RAC nodes because Oracle RAC must remaster internal services and restore the database by using the current state of the redo log file.

For more information about redo log files, see "Assigning Storage for Redo Log Files" in BRM Installation Guide.

Configuring IMDB Cache Manager for High Availability

This section provides information about configuring IMDB Cache Manager for high availability.

The basic BRM high-availability architecture has one pair of active and standby IMDB Cache DM instances and one pair of active and standby Oracle IMDB Cache instances (data stores) for each BRM database schema. Larger high-availability systems can have several IMDB Cache DMs and Oracle IMDB Cache pairs for each schema.

For more information, see "About IMDB Cache DMs and Data Stores in a High-Availability System".

Figure 13-2 shows a basic configuration of IMDB Cache DMs and their data stores in a high-availability system with two logical partitions and one BRM database schema. Dashed lines represent backup connections.

Figure 13-2 Basic IMDB Cache DM and Data Store Configuration for High Availability


Note:

In a high-availability system, an IMDB Cache DM and its associated data store reside on the same physical server (node). Each node should contain only one DM–data store pair.

This section explains how to set up a high-availability BRM system that contains the following components:

  • Two logical partitions for a single-schema database

  • The ttGrid cache grid

  • Data stores tt_0.0.0.1 and tt_0.1.0.1

For more information about installing IMDB Cache Manager, including hardware and software requirements, see "Installing IMDB Cache Manager".

For an overview of IMDB Cache Manager, including information about cache grids and logical partitions, see "Using Oracle IMDB Cache Manager".

To set up a high-availability system for data stores tt_0.0.0.1 and tt_0.1.0.1:

  1. Install and configure the BRM database with Oracle RAC. See "Configuring the BRM Database for High Availability".

  2. Install Oracle Clusterware. See the Oracle Clusterware documentation.

  3. On each node on which you plan to configure a data store, install an instance of Oracle IMDB Cache. See the Oracle TimesTen In-Memory Database Installation Guide.

    To support the example data stores tt_0.0.0.1 and tt_0.1.0.1, you must configure a total of four instances of Oracle IMDB Cache (that is, an active and a standby instance for each data store). Each instance must reside on a different node.

    Important:

    The primary group of the Oracle IMDB Cache owner should be the same as the primary group of the Oracle Clusterware owner. Their user names, however, can be different.
  4. Install BRM. See "Installing BRM" in BRM Installation Guide.

  5. On each node on which you plan to configure a data store, install an instance of the IMDB Cache DM. See "Installing IMDB Cache Manager".

  6. Install any optional components that you want to add to your system.

  7. Create and configure the active data stores tt_0.0.0.1 and tt_0.1.0.1. See "Creating and Configuring Active Data Stores".

  8. If you have existing BRM data that was created in a BRM system before IMDB Cache Manager was installed, run the load_pin_uniqueness utility to prepare the data for migration to an IMDB Cache Manager–enabled system.

    Note:

    Stop and restart the CM and DM.
  9. Create the schema and load BRM objects into the active data stores. See "Initializing an Active Data Store".

  10. Configure clusters for the active data stores. See "Configuring the Cluster for an Active Data Store".

  11. Configure the standby data stores. See "Configuring Standby Data Stores".

  12. Create the active/standby data store pairs, and register them with Oracle Clusterware. See "Creating Active and Standby Data Store Pairs".

  13. Configure IMDB Cache DM instances to connect to the active and standby data stores. See "Associating IMDB Cache DM Instances with Data Stores".

  14. Configure the CMs to connect to the IMDB Cache DM instances. See "Configuring Connection Managers for High Availability".

Creating and Configuring Active Data Stores

To create and configure data stores for high availability, perform the following procedure on each node on which you want an active data store to reside.

  1. Log on to the node for the active data store.

  2. Create a directory for storing database files:

    mkdir BRM_home/Database_Files_Location
    

    For example:

    mkdir BRM_home/database_files
    

    Note:

    Oracle recommends using a local disk for database files instead of a network-mounted disk.
  3. Go to the following directory:

    cd IMDB_home/info
    

    where IMDB_home is the directory in which Oracle IMDB Cache is installed.

  4. Add the data store attributes to the sys.odbc.ini data store configuration file (see "sys.odbc.ini configuration file").

    Note:

    You can edit the sys.odbc.ini file in the IMDB_home/info directory by commenting out the default configurations.

    For example:

    [DSN]
    DataStore=BRM_home/Database_Files_Location/Data_Store_Name
    OracleNetServiceName=PinDB
    DatabaseCharacterSet=AL32UTF8
    ConnectionCharacterSet=AL32UTF8
    PLSQL=1
    oraclepwd=pin01
    Driver=IMDB_home/lib/libtten.so
    #Shared-memory size in megabytes allocated for the data store
    PermSize=32
    #Shared-memory size in megabytes allocated for the temporary data partition, generally half of PermSize
    TempSize=16
    PassThrough=0
    #Use a large log buffer and log file size
    LogFileSize=512
    #Asynchronous replication flushes to disk before sending batches; method 2 is faster on Linux
    LogFlushMethod=2
    #Limit the checkpoint rate to 10 MB/s
    CkptFrequency=200
    CkptLogVolume=0
    CkptRate=10
    Connections=200
    #Oracle recommends setting LockWait to 30 seconds
    LockWait=30
    DurableCommits=0
    CacheGridEnable=1
    

    where:

    • DSN is the data source name, which is the same as the data store name. The DSN must also be the same as the database alias name in the tnsnames.ora file. For more information about setting database alias names, see "Making a Data Store Accessible to IMDB Cache DM".

    • BRM_home is the directory in which BRM is installed.

    • Data_Store_Name is the name of the data store.

    • IMDB_home is the directory in which Oracle IMDB Cache is installed.

  5. Save and close the file.

  6. Go to the IMDB_home directory and source the ttenv.csh file:

    cd IMDB_home/bin
    source ttenv.csh
    

    where IMDB_home is the directory in which Oracle IMDB Cache is installed.

  7. Set up the Oracle IMDB Cache grid privileges in the BRM database:

    1. Connect to the BRM database as a system administrator:

      cd IMDB_home/oraclescripts
      sqlplus sys as sysdba
      
    2. Run the following SQL scripts:

      @IMDB_home/oraclescripts/initCacheGlobalSchema.sql "Data_Store_User"
      @IMDB_home/oraclescripts/grantCacheAdminPrivileges.sql "Data_Store_User"
      

      where Data_Store_User is the IMDB Cache data store user.

    3. Run the following commands to grant privileges:

      grant all on TIMESTEN.TT_GRIDID to "Oracle_DB_User";
      grant all on TIMESTEN.TT_GRIDINFO to "Oracle_DB_User";
      

      where Oracle_DB_User is the BRM database user.

    For more information, see the Oracle TimesTen In-Memory Database Cache User's Guide.

  8. Create the data store:

    cd IMDB_home/bin
    ttIsql Data_Store_Name
    

    where Data_Store_Name is the name of the data store, such as tt_0.0.0.1 and tt_0.1.0.1.

  9. Create the data store user and grant all permissions:

    ttIsql Data_Store_Name
    create user Data_Store_User identified by Data_Store_Password; 
    grant all to Data_Store_User;
    

    where:

    Data_Store_Name is the name of the data store.

    Data_Store_User is the IMDB Cache data store user.

    Data_Store_Password is the password for the IMDB Cache data store user.

    Important:

    The IMDB Cache data store user must be the same as the BRM database user. However, the data store user password can be different from the database user password.
  10. Set the data store user and password, and make the data store grid-enabled:

    ttIsql "uid=Data_Store_User;pwd=Data_Store_Password;dsn=Data_Store_Name"
    call ttcacheuidpwdset('Cache_Admin_User', 'Cache_Admin_User_Pwd');
    call ttGridCreate('ttGrid');
    call ttGridNameSet('ttGrid');
    

    where:

    Data_Store_User is the Oracle IMDB Cache data store user name, which must be the same as the BRM database user name.

    Data_Store_Password is the password for the Oracle IMDB Cache user name.

    Data_Store_Name is the name of the data store.

    Cache_Admin_User and Cache_Admin_User_Pwd are the cache user name and password.

    Important:

    Run the call ttGridCreate('ttGrid') command only once per database schema. For example, if you are configuring multiple active data stores for a single schema, run this command only when you configure the first active data store.

    For more information about creating a cache grid, see the Oracle TimesTen In-Memory Database Cache User's Guide.

    Note:

    To initialize the data stores for a multischema setup, you must generate the schema and load SQL files for each database schema by using the pin_tt_schema_gen utility. Then follow the preceding steps to initialize the data stores for each schema. See "Generating the BRM Cache Group Schema".

Initializing an Active Data Store

To load BRM objects into an active data store, you must perform the following procedures:

  1. Generate the BRM cache groups schema using the BRM database, and extract the data from the BRM database for caching. See "Generating the Schema and Load SQL Files for the Active Data Store".

  2. Create and initialize the BRM cache groups schema in the active data store. See "Initializing the Data Store".

Note:

If you have existing BRM data that was created on a BRM system before IMDB Cache Manager was installed, you must run the load_pin_uniqueness utility before performing these procedures.

Generating the Schema and Load SQL Files for the Active Data Store

Use the pin_tt_schema_gen utility to generate the schema SQL file with the BRM cache groups schema and the load SQL file with the BRM data. For more information, see "Generating the BRM Cache Group Schema".

To generate the schema and load SQL files, perform the following procedure on the node on which the active data store resides:

  1. Open the BRM_home/bin/pin_tt_schema_gen.values file, and configure the values in the file.

    Note:

    You must generate the load SQL for each active data store.

    For example, generate tt_load_0.0.0.1.sql with $db_no_for_load_sql set to 0.0.0.1, and generate tt_load_0.1.0.1.sql with $db_no_for_load_sql set to 0.1.0.1 in the pin_tt_schema_gen.values file.

    See "Configuring the pin_tt_schema_gen.values File".

  2. Save and close the file.

  3. Run the following command:

    source BRM_home/source.me.csh
    
  4. Run the pin_tt_schema_gen utility with the -a parameter:

    ./pin_tt_schema_gen -a
    

    See "pin_tt_schema_gen".

    Note:

    If you do not specify the values for MAIN_DB{'user'} and MAIN_DB{'password'} in the pin_tt_schema_gen.values file, the pin_tt_schema_gen utility prompts you to enter those values.

This updates the BRM database with unique indexes and non-null constraints and generates the following files:

  • tt_schema.sql

  • tt_load_Logical_Partition.sql

  • tt_drop.sql

where Logical_Partition is the name of the logical partition in which the data store resides, such as 0.0.0.1 and 0.1.0.1.

Initializing the Data Store

Use the schema and load SQL files to create the BRM cache groups schema and to load the BRM data.

To initialize an active data store, perform the following procedure on the node on which the active data store resides:

  1. Set the data store user and password:

    ttIsql "uid=Data_Store_User; pwd=Data_Store_Password; dsn=Data_Store_Name"
    call ttcacheuidpwdset('Cache_Admin_User','Cache_Admin_User_Pwd');
    

    where:

    Data_Store_User is the Oracle IMDB Cache data store user name, which must be the same as the BRM database user name.

    Data_Store_Password is the password for the Oracle IMDB Cache user name.

    Data_Store_Name is the name of the data store, such as tt_0.0.0.1 and tt_0.1.0.1.

    Cache_Admin_User and Cache_Admin_User_Pwd are the cache user name and password.

  2. Start the cache agent:

    call ttcachestart;
    
  3. Create the schema:

    run BRM_home/bin/tt_schema.sql;
    
  4. Create stored procedures:

    run BRM_home/sys/dm_tt/data/tt_create_pkg_pin_sequence.plb;
    run BRM_home/sys/dm_tt/data/tt_create_procedures.plb;
    run BRM_home/sys/dm_tt/data/create_tt_wrappers.plb;
    

    Note:

    Load the stored procedures in tt_create_pkg_pin_sequence.plb before the procedures in tt_create_procedures.plb.
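
To confirm that the schema and stored procedures were created, you can list the cache groups and tables from a ttIsql session (a quick check, not part of the documented procedure; tt_0.0.0.1 is the example data store used in this chapter):

    ttIsql tt_0.0.0.1
    cachegroups;
    tables;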

Configuring the Cluster for an Active Data Store

To set up the cluster configuration for an active data store:

  1. Log on to the node on which the active data store resides.

  2. Go to the following directory:

    cd IMDB_home/info
    

    where IMDB_home is the directory in which Oracle IMDB Cache is installed.

  3. Add the following entries to the cluster.oracle.ini data store configuration file. (A filled-in example appears after this procedure.)

    You can edit the cluster.oracle.ini file in the IMDB_home/info directory by commenting out the default configurations.

    Note:

    The MasterHosts entry identifies the nodes on which a pair of active/standby data stores resides. The order in which the nodes are specified sets the default states (active, standby) of the data stores.

    [DSN]
    MasterHosts = Active_Node,Standby_Node
    ScriptInstallDir = /export/home/ttinstaller/TimesTen/tt1121_HA/info/crs_scripts
    CacheConnect = Y
    ReturnServiceAttribute = RETURN TWOSAFE
    GridPort = Active_Port, Standby_Port
    AppName = DataStoreName
    AppType = Active
    AppCheckCmd = BRM_home/bin/pin_ctl_dmtt.sh check
    AppStartCmd = BRM_home/bin/pin_ctl_dmtt.sh start
    AppStopCmd = BRM_home/bin/pin_ctl_dmtt.sh stop
    MonInterval = MonitorInterval
    AppRestartAttempts = NumAttempts
    AppUptimeThreshold = AttemptReset
    

    where:

    • DSN is the data source name, which is the same as the data store name. The DSN must also be the same as the database alias name in the tnsnames.ora file. For more information about setting database alias names, see "Making a Data Store Accessible to IMDB Cache DM".

    • Active_Node is the host name for the active data store.

    • Standby_Node is the host name for the standby data store.

    • Active_Port is an unused port number assigned to the active data store.

    • Standby_Port is an unused port number assigned to the standby data store.

    • DataStoreName is the name of the active data store.

    • MonitorInterval specifies the interval, in seconds, at which the Oracle Clusterware processes monitor the active/standby pair.

    • NumAttempts specifies the number of times Oracle Clusterware attempts to restart the IMDB Cache DM on the node before the IMDB Cache data store fails over. When set to 0, the IMDB Cache data store fails over immediately.

    • AttemptReset specifies the amount of time, in seconds, until the AppRestartAttempts entry is reset.

    Notes:

    • Active_Port and Standby_Port are ports on the private network that is set up as part of Oracle Clusterware installation.

    • The AppName and AppType entries must be the same on both nodes.

    • The cluster.oracle.ini data store configuration file includes other optional entries that can be used to fine-tune your system. For more information, see the Oracle TimesTen In-Memory Database TimesTen to TimesTen Replication Guide.

  4. Save and close the file.
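
For reference, the following filled-in cluster.oracle.ini entry applies the template from step 3 to the example data store tt_0.0.0.1. The host names, ports, and monitoring values are illustrative only:

    [tt_0.0.0.1]
    MasterHosts = node1,node2
    ScriptInstallDir = /export/home/ttinstaller/TimesTen/tt1121_HA/info/crs_scripts
    CacheConnect = Y
    ReturnServiceAttribute = RETURN TWOSAFE
    GridPort = 16000, 16001
    AppName = tt_0.0.0.1
    AppType = Active
    AppCheckCmd = BRM_home/bin/pin_ctl_dmtt.sh check
    AppStartCmd = BRM_home/bin/pin_ctl_dmtt.sh start
    AppStopCmd = BRM_home/bin/pin_ctl_dmtt.sh stop
    MonInterval = 30
    AppRestartAttempts = 2
    AppUptimeThreshold = 300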

Configuring Standby Data Stores

To configure a standby data store for an active data store, perform this procedure on the node on which you want the standby data store to reside.

Before performing this procedure, obtain the entries that were added to the active data store's sys.odbc.ini and cluster.oracle.ini files, because you add the same entries on the standby node.

To create and configure a standby data store for an active data store:

  1. Log on to the node on which you want the standby data store to reside.

  2. Create a directory for storing database files:

    mkdir BRM_home/Database_Files_Location
    

    Important:

    The name and location of the directory must match the corresponding directory created on the node on which the active data store resides. See step 2 in "Creating and Configuring Active Data Stores".
  3. Go to the following directory:

    cd IMDB_home/info
    

    where IMDB_home is the directory in which Oracle IMDB Cache is installed.

  4. In the sys.odbc.ini data store configuration file, add the same entries that you added to the active data store's sys.odbc.ini file. See step 4 in "Creating and Configuring Active Data Stores".

  5. Save and close the file.

  6. In the cluster.oracle.ini data store configuration file, add the same entries that you added to the active data store's cluster.oracle.ini file. See step 3 in "Configuring the Cluster for an Active Data Store".

  7. Save and close the file.

Creating Active and Standby Data Store Pairs

Use this procedure to perform the following tasks:

  • Create a pair of active and standby data stores.

  • Register the TimesTen agent and the data stores with Oracle Clusterware.

  • Replicate the active data store's BRM cache groups schema and data in the standby data store.

Perform the procedure on each active data store node.

  1. Go to the bin directory on the node on which the active data store resides:

    cd IMDB_home/bin
    

    where IMDB_home is the directory in which Oracle IMDB Cache is installed.

  2. Register the cluster information on the host as a root user by entering the following command:

    ttCWAdmin -ocrconfig
    
  3. Start the Oracle IMDB Cache cluster agent using the TimesTen administrator user login:

    ttCWAdmin -init
    
  4. Create an active/standby pair replication scheme:

    ttCWAdmin -create -dsn Data_Store_Name
    

    where Data_Store_Name is the name of the data store, such as tt_0.0.0.1 or tt_0.1.0.1.

  5. Provide the required information, such as the admin user ID and password. A confirmation message is displayed when the registration is complete.

  6. Start the active/standby pair replication scheme:

    ttCWAdmin -start -dsn Data_Store_Name 
    

    Oracle Clusterware automatically starts the data stores.

  7. Load data into the active/standby pair replication scheme:

    ttIsql Data_Store_Name
    run BRM_home/bin/tt_load_Logical_Partition.sql;
    

    where Logical_Partition is the database number of the logical partition in which the data stores reside, such as 0.0.0.1 or 0.1.0.1.

To initialize the data stores for a multischema setup, you must generate the schema and load SQL files for each database schema by using the pin_tt_schema_gen utility. Then follow the steps in this section to initialize the data stores for each schema.

For more information about registering data stores with Oracle Clusterware, see Oracle Clusterware Administration and Deployment Guide.

For more information about TimesTen replication, see TimesTen to TimesTen Replication Guide.

Associating IMDB Cache DM Instances with Data Stores

For high availability, you need an active and a standby instance of each IMDB Cache DM. The active DM instance is connected to the active data store, and the standby DM instance is connected to the standby data store.

To connect active and standby IMDB Cache DM instances to the active and standby data stores, perform the following procedures on each node on which an active or a standby data store resides:

  1. Making a Data Store Accessible to IMDB Cache DM

  2. Configuring an IMDB Cache DM Instance for a Data Store

  3. Configuring pin_ctl to Start and Stop IMDB Cache DM

Note:

These procedures assume that you have installed an instance of IMDB Cache Manager on each node on which an active or a standby data store resides.

Making a Data Store Accessible to IMDB Cache DM

To configure a data store so that IMDB Cache DM can directly connect to it, perform the following procedure on the node on which the data store resides:

  1. Open the tnsnames.ora file located in the directory specified by $TNS_ADMIN.

  2. Add the following entry:

    Database_Alias_Name = (DESCRIPTION = (ADDRESS = (PROTOCOL = ) (HOST = ) (PORT = )) 
                           (CONNECT_DATA = 
                              (SERVICE_NAME = Data_Store_Name) 
                                (SERVER = timesten_direct )))
    

    where:

    • Database_Alias_Name is the data store name specified in the sys.odbc.ini file.

    • Data_Store_Name is the data store name specified in the sys.odbc.ini file.

    • timesten_direct indicates that instances of IMDB Cache DM on the same node as the data store can directly connect to the data store.

    Note:

    You must add a separate entry for each logical partition.

    For example:

    tt_0.0.0.1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = ) (HOST = ) (PORT = )) 
                     (CONNECT_DATA = 
                         (SERVICE_NAME = tt_0.0.0.1)
                         (SERVER = timesten_direct )))
     
    tt_0.1.0.1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = ) (HOST = ) (PORT = )) 
                     (CONNECT_DATA = 
                         (SERVICE_NAME = tt_0.1.0.1)
                         (SERVER = timesten_direct )))
    
  3. Save and close the file.

Configuring an IMDB Cache DM Instance for a Data Store

Use the existing IMDB Cache DM installation to configure the settings for the active data store.

To configure IMDB Cache DM for the active data store (a consolidated example appears after this procedure):

  1. Open the BRM_home/sys/dm_tt/pin.conf file.

  2. Set the tt_ha_enabled entry to 1:

    - dm tt_ha_enabled 1
    
  3. Set the sm_database_tt entry to the data store:

    - dm sm_database_tt Data_Store_Name
    

    where Data_Store_Name is the name of the data store, such as tt_0.0.0.1 or tt_0.1.0.1.

  4. Set the sm_pw_tt entry to the data store password:

    - dm sm_pw_tt Data_Store_Password
    

    Data_Store_Password is the password for the IMDB Cache data store user.

    Important:

    The IMDB Cache data store user must be the same as the BRM database user. However, the data store user password can be different from the database user password.
  5. Set the logical_partition entry to 1 to enable logical partitioning:

    - dm logical_partition 1
    
  6. Set the pcm_op_max_retries entry to a value of 1 or greater. This entry specifies the number of times the IMDB Cache DM retries an opcode after a failure.

    - dm pcm_op_max_retries 3
    
  7. Set the pcm_reconnect_max_retries entry to a value of 1 or greater.

    - dm pcm_reconnect_max_retries 3
    
  8. Save and close the file.
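
For reference, the following entries show a completed BRM_home/sys/dm_tt/pin.conf configuration for the example data store tt_0.0.0.1; the password placeholder and retry values are illustrative:

    - dm tt_ha_enabled 1
    - dm sm_database_tt tt_0.0.0.1
    - dm sm_pw_tt Data_Store_Password
    - dm logical_partition 1
    - dm pcm_op_max_retries 3
    - dm pcm_reconnect_max_retries 3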

Configuring pin_ctl to Start and Stop IMDB Cache DM

To configure the pin_ctl utility to start and stop an instance of IMDB Cache DM, perform the following procedure on the node on which the DM and its associated data store reside:

  1. Set the TIMESTEN_HOME environment variable to the directory in which Oracle IMDB Cache is installed on the node. For example:

    /export/home/ttinstaller/TimesTen/tt1121_HA
    
  2. Go to the directory in which the IMDB Cache DM is installed and source the source.me file:

    • Bourne shell:

      . source.me.sh 
      
    • C shell:

      source source.me.csh
      
  3. Open the BRM_home/bin/pin_ctl.conf file in a text editor.

  4. Add the following line to the startup configuration section of the file:

    Start_DMTTInstance_Service_Name cpidproc:DMTTInstance_Service_Name: cport:DMTT_Port host:DMTT_Host dbno:Data_Store_DB_Number
    

    where:

    • Start_DMTTInstance_Service_Name is the name of the start command for the IMDB Cache DM instance.

    • DMTTInstance_Service_Name is a simple process name matching filter.

    • DMTT_Port is the IMDB Cache DM port number.

    • DMTT_Host is the IMDB Cache DM host name.

    • Data_Store_DB_Number is the data store database number.

    For example:

    start_dm_tt cpidproc:dm_tt: cport:1234 host:vm31230 dbno:0.0.0.1
    
  5. Save and close the file.

  6. To ensure that pin_ctl is configured correctly, run the following commands:

    • pin_ctl start dm_tt

    • pin_ctl stop dm_tt

For more information about configuring pin_ctl for high availability, see "Using the pin_ctl Utility to Monitor BRM".

Configuring Connection Managers for High Availability

Figure 13-3 shows a basic configuration of CMs in a high-availability system with two logical partitions. Dashed lines represent backup connections.

Figure 13-3 Basic CM Configuration for High Availability


To configure CMs for high availability:

  1. Install at least two CMs. Use as many as you estimate will support your workload, and then add one more.

  2. Install one real-time pipeline for each CM. See "Configuring Pipeline Manager".

  3. Connect each CM to all the active and standby IMDB Cache DMs in your BRM System. See "Connecting CMs to IMDB Cache DMs".

Connecting CMs to IMDB Cache DMs

To configure a CM to connect to the active and standby IMDB Cache DM instances in a high-availability system, set the CM configuration file (BRM_home/sys/cm/pin.conf) entries shown in Table 13-3; a consolidated example follows the table. These settings minimize CM request failures and enable CM connections to succeed if an IMDB Cache DM failover occurs.

For information about how CMs handle IMDB Cache DM failure, see "About Connection Managers in a High-Availability System".

Table 13-3 CM pin.conf Entries for a High-Availability System

Entry: dm_pointer

Description: Specifies the host and port number of the IMDB Cache DM instances to connect to. Include an entry for each active and standby pair of IMDB Cache DMs in your system. Oracle recommends that the active DM be listed first in each entry.

The CM pin.conf file should contain one dm_pointer entry per logical partition. Therefore, because the active and standby DMs in each pair support the same logical partition, they must be on the same dm_pointer line:

- cm dm_pointer lp_number ip active_dm_host active_dm_port ip standby_dm_host standby_dm_port

Example:

- cm dm_pointer 0.0.0.1 ip 156.151.2.168 33950 ip 168.35.37.128 12960
- cm dm_pointer 0.1.0.1 ip 156.151.2.168 32250 ip 168.35.37.128 12850

Entry: pcm_timeout_in_msecs

Description: Specifies the amount of time in milliseconds that the CM waits for a response from an IMDB Cache DM before failing over to the standby DM. This entry is called the CM's long (failover) timeout (see "Configuring Multilevel CM Timeout for Client Requests").

The default value is 120000 (120 seconds). For high-availability systems, the recommended value is 100000 (100 seconds).

Important: In a high-availability system, each client-side component should have a longer timeout period than the server-side component that it calls. For example, to minimize CM request failures, make this timeout long enough for Oracle Clusterware to restart the IMDB Cache DM. For suggested timeout values, see "About Setting Up a High-Availability System".

Example:

- cm pcm_timeout_in_msecs 100000

Entry: pcm_op_max_retries

Description: Specifies the maximum number of times an opcode is retried in the Portal Communications Model (PCM). The default value is 1. For high availability, the value must be at least 2.

Example:

- cm pcm_op_max_retries 2

Entry: cm_op_max_retries

Description: Specifies the maximum number of times an opcode is retried in the CM. The default value is 1. For high availability, the value must be at least 2.

Example:

- cm cm_op_max_retries 2

Entry: pcm_bad_connection_retry_delay_time_in_secs

Description: Specifies the interval, in seconds, at which the CM tries to connect to the active IMDB Cache DM after it fails to connect. See "How CMs Handle IMDB Cache DM Failure".

Important: In a high-availability system, each client-side component should have a longer timeout period than the server-side component that it calls. For suggested timeout values, see "About Setting Up a High-Availability System".

Note: This entry appears in both the CM and IMDB Cache DM pin.conf files.

Example:

- cm pcm_bad_connection_retry_delay_time_in_secs 100
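
For reference, the following entries show a consolidated CM pin.conf configuration that combines the examples in Table 13-3 for a system with two logical partitions; the IP addresses and ports are illustrative:

- cm dm_pointer 0.0.0.1 ip 156.151.2.168 33950 ip 168.35.37.128 12960
- cm dm_pointer 0.1.0.1 ip 156.151.2.168 32250 ip 168.35.37.128 12850
- cm pcm_timeout_in_msecs 100000
- cm pcm_op_max_retries 2
- cm cm_op_max_retries 2
- cm pcm_bad_connection_retry_delay_time_in_secs 100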


Restoring a High-Availability System After Failover

After failed components in a high-availability system are fixed, return the system to its original configuration by switching the workload back to the primary components. This enables the system to use its optimal architecture.

The following sections explain how to restore components in a high-availability system:

Switching Back to the Primary Oracle RAC Instance

To restart a failed Oracle RAC instance, run the following command:

srvctl start instance -d racDatabaseName -i primary_racInstanceName

where:

  • racDatabaseName is the name of the Oracle database.

  • primary_racInstanceName is the name of the primary (preferred) Oracle RAC instance.

For information about the srvctl command, see the Oracle RAC documentation.

After a failed database instance is restarted, the services for which it is the primary database instance do not automatically switch back to it from the backup instance.

To switch a database service from its backup database instance to its primary database instance, run the following command:

srvctl relocate service -d racDatabaseName -s serviceName
   -i backup_racInstanceName -t primary_racInstanceName -f

where:

  • racDatabaseName is the name of the Oracle database.

  • serviceName is the name of the database service that the primary Oracle RAC instance originally supported.

  • backup_racInstanceName is the name of the backup (available) Oracle RAC instance.

  • primary_racInstanceName is the name of the primary (preferred) Oracle RAC instance.
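
For example, to move service1 from the backup instance back to its primary instance in the configuration shown in Table 13-2 (the database and instance names are placeholders):

srvctl relocate service -d racDatabaseName -s service1 -i racInstanceName5 -t racInstanceName1 -f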

Note:

Switching database services back to their primary Oracle RAC instance causes a service interruption. Usually, however, switching back to the primary node takes less time than failing over to the backup node.

Ensuring All Accounts Are Billed

If a system failure occurs while the pin_bill_day application is running, some operations might fail, and thus bills might not be generated for all accounts.

To ensure that all accounts are billed, Oracle recommends rerunning pin_bill_day after the system is restored.