8 Using Oracle Clusterware to Manage Active Standby Pairs
Oracle Clusterware monitors and controls applications to provide high availability. Oracle Clusterware is a general purpose cluster manager that manages and monitors the availability of software components that participate in a cluster.
Note:
See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation.
Overview of How Oracle Clusterware Can Manage TimesTen
Use Oracle Clusterware to manage only the following configurations for active standby pair replication schemes:
-
Active standby pair with or without read-only subscribers
-
Active standby pair (with or without read-only subscribers) with AWT cache groups and read-only cache groups
Figure 8-1 shows an active standby pair with one read-only subscriber in the same local network. The active master, the standby master and the read-only subscriber are on different nodes. There are two nodes that are not part of the active standby pair that are also running TimesTen. An application updates the active database. An application reads from the standby and the subscriber. All of the nodes are connected to shared storage.
Figure 8-1 Active Standby Pair With One Subscriber

Description of "Figure 8-1 Active Standby Pair With One Subscriber"
You can use Oracle Clusterware to start, monitor, and automatically fail over TimesTen databases and applications in response to node failures and other events. See Clusterware Management and Recovering from Failures.
Oracle Clusterware can be implemented at two levels of availability for TimesTen.
-
The basic level of availability manages two master nodes configured as an active standby pair and up to 127 read-only subscriber nodes in the cluster. The active standby pair is defined with local host names or IP addresses. If both master nodes fail, user intervention is necessary to migrate the active standby scheme to new hosts. When both master nodes fail, Oracle Clusterware notifies the user.
-
The advanced level of availability uses virtual IP addresses for the active, standby, and read-only subscriber databases. Extra nodes can be included in the cluster that are not part of the initial active standby pair. If a failure occurs, the use of virtual IP addresses enables one of the extra nodes to take on the role of a failed node automatically.
Note:
If your applications connect to TimesTen in a client/server configuration, automatic client failover enables the client to reconnect automatically to the active database after a failure. See Using Automatic Client Failover for an Active Standby Pair and TTC_FailoverPortRange in the Oracle TimesTen In-Memory Database Reference.
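For illustration, a client DSN that names both master hosts so that automatic client failover can reconnect might look like the following sketch. The entry name, host names, and DSN are hypothetical; the TTC_Server, TTC_Server_DSN, TTC_Server2, and TTC_Server_DSN2 attributes are described in the Oracle TimesTen In-Memory Database Reference.

```ini
[myClientDSN]
TTC_Server=host1
TTC_Server_DSN=myDSN
TTC_Server2=host2
TTC_Server_DSN2=myDSN
```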
The ttCWAdmin utility is used to administer TimesTen active standby
pairs in a cluster that is managed by Oracle Clusterware. The configuration for each
active standby pair is manually created in an initialization file called
cluster.oracle.ini. The information in this file is used to create
Oracle Clusterware resources. Resources are used to manage the TimesTen daemon,
TimesTen databases, TimesTen processes, user applications, and virtual IP addresses. You
can run the ttCWAdmin utility from any host in the cluster, as long as
the cluster.oracle.ini file is reachable and readable from this host.
For more information about the ttCWAdmin utility, see ttCWAdmin in Oracle TimesTen In-Memory Database
Reference. For
more information about the cluster.oracle.ini file, see Configuring Oracle Clusterware Management with the cluster.oracle.ini File.
Requirements, Considerations, and Installation for Your Cluster
There are requirements and installation steps when creating your cluster.
Required Privileges
There are privileges required to run ttCWAdmin commands.
See ttCWAdmin in Oracle TimesTen In-Memory Database Reference.
Hardware and Software Requirements
TimesTen supports Oracle Clusterware with TimesTen active standby pair replication on UNIX and Linux platforms, except Linux arm64. See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation for network and storage requirements and information about Oracle Clusterware configuration files.
Oracle Clusterware and TimesTen should be installed in the same location on all nodes. The TimesTen instance administrator must belong to the same UNIX or Linux primary group as the Oracle Clusterware installation owner.
Note:
The /tmp directory contains essential TimesTen Oracle Clusterware directories. Their names have the prefix crsTT. Do not delete them.
All hosts should use Network Time Protocol (NTP) or a similar system so that clocks on the hosts remain within 250 milliseconds of each other. When adjusting the system clocks on any nodes to be synchronized with each other, do not set any clock backward in time.
Install Oracle Clusterware
By default, when you install Oracle Clusterware, the installation occurs on all hosts concurrently. See Oracle Clusterware installation documentation for your platform.
For example, see the Oracle Grid Infrastructure Installation and Upgrade Guide.
Oracle Clusterware starts automatically after successful installation.
Note:
You can verify whether Oracle Clusterware is running on all hosts in the cluster by running the following:
crsctl check cluster -all
Install TimesTen on Each Host
Use the ttInstanceCreate command to install TimesTen in the same
location on each host in the cluster, including extra hosts.
See Create an Instance Interactively for Oracle Clusterware in Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
When responding to the various prompts, note that:
-
The instance name must be the same on each host.
-
The user name of the instance administrator must be the same on all hosts.
-
The TimesTen instance administrator must belong to the same UNIX or Linux primary group as the Oracle Clusterware installation owner.
In addition, when you respond yes to the following question:
Would you like to use TimesTen Replication with Oracle Clusterware?
Then, the ttInstanceCreate command prompts you for values used for
Oracle Clusterware, each of which is stored in the ttcrsagent.options
file:
-
The TCP/IP port number associated with the TimesTen cluster agent (ttCRSAgent). The port number must be the same on all nodes of the cluster. If you do not provide a port number, then TimesTen adds six to the default TimesTen daemon port number to derive the TCP/IP port number for the TimesTen cluster agent. Thus, the default port number for the TimesTen cluster agent is 3574 for 64-bit systems.
-
The Oracle Clusterware location. The location must be the same on each host.
-
The hosts included in the cluster, including spare hosts, with host names separated by commas. This list must be the same on each host.
See Installing Oracle Clusterware for Use with TimesTen and Create an Instance Interactively for Oracle Clusterware in Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
The ttCWAdmin –init and ttCWAdmin –shutdown commands use the ttcrsagent.options file to initiate and shut down the TimesTen cluster. The ttcrsagent.options file is located in the TimesTen daemon home directory.
You should not manually alter the ttcrsagent.options file. Instead, use the ttInstanceModify -crs command to create or modify the information in this file after the TimesTen cluster has been initiated. You can also use the -record and -batch options for setup.sh to perform identical installations on additional hosts.
Note:
See Change the Oracle Clusterware Configuration for an Instance in Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
The current home location of Oracle Clusterware is set in the CRS_HOME environment variable. In addition, the ttInstanceModify -crs command shows the current location of the Oracle Clusterware home as part of the prompts.
Note:
See Start the TimesTen Cluster Agent for more information on the
ttcrsagent.options file. For more information about
ttInstanceCreate and ttInstanceModify, see
ttInstanceCreate and ttInstanceModify respectively in Oracle TimesTen In-Memory Database
Reference.
The following example shows how the ttInstanceModify -crs prompts for you to modify each item in the ttcrsagent.options file:
% ttInstanceModify -crs
Cannot find instance_info file : /etc/TimesTen/instance_info
Would you like to modify the existing TimesTen Replication with Oracle Clusterware configuration? [ no ] yes
This TimesTen instance is configured to use an Oracle Clusterware installation located in : /mydir/oracle/crs/app/11.2.0
Would you like to change this value? [ no ] no
The TimesTen Clusterware agent is configured to use port 54504
Would you like to change this value? [ no ] no
The TimesTen Clusterware agent is currently configured with these nodes :
node1
node2
node3
node4
Would you like to change these values? [ no ]
Overwrite the existing TimesTen Clusterware options file? [ no ] no
Restricted Commands and SQL Statements
When you use Oracle Clusterware with TimesTen, the active standby pair
replication scheme is created on the active database with the ttCWAdmin
-create command and dropped with the ttCWAdmin -drop
command.
After you create the replication scheme with the ttCWAdmin -create command and before you drop it with the ttCWAdmin -drop command, you cannot run certain commands or SQL statements.
However, you can perform these commands or SQL statements when you use the
ttCWAdmin -beginAlterSchema and the ttCWAdmin
-endAlterSchema commands, as described in Changing the Schema.
You cannot run the following commands or SQL statements:
-
Creating, altering, or dropping the active standby pair with the CREATE ACTIVE STANDBY PAIR, ALTER ACTIVE STANDBY PAIR, and DROP ACTIVE STANDBY PAIR SQL statements.
-
Starting or stopping the replication agent with either the -repStart and -repStop options of the ttAdmin utility or the ttRepStart or ttRepStop built-in procedures. For more information, see Starting and Stopping the Replication Agents.
-
Starting or stopping the cache agent after the active standby pair has been created with either the -cacheStart and -cacheStop options of the ttAdmin utility or the ttCacheStart and ttCacheStop built-in procedures.
-
Duplicating the database with the -duplicate option of the ttRepAdmin utility.
In addition, do not call ttDaemonAdmin -stop before calling ttCWAdmin -shutdown.
The TimesTen integration with Oracle Clusterware accomplishes these operations with the ttCWAdmin utility and the attributes specified in the cluster.oracle.ini file.
Creating and Initializing a Cluster
There are procedures to create and initialize a cluster.
-
Create the Oracle Clusterware Resources to Manage Virtual IP Addresses
-
Configure an Oracle Database as a Disaster Recovery Subscriber
-
Configure a Read-Only Subscriber That Is Not Managed by Oracle Clusterware
If you plan to have more than one active standby pair in the cluster, see Include More Than One Active Standby Pair in a Cluster.
If you want to configure an Oracle database as a remote disaster recovery subscriber, see Configure an Oracle Database as a Disaster Recovery Subscriber.
If you want to set up a read-only subscriber that is not managed by Oracle Clusterware, see Configure a Read-Only Subscriber That Is Not Managed by Oracle Clusterware.
Start the TimesTen Cluster Agent
Start a TimesTen cluster agent (ttCRSAgent) and TimesTen cluster
daemon monitor (ttCRSDaemon) on all hosts in the cluster by running the
ttCWAdmin -init command.
You can run this command on any host in the cluster that is defined in the
ttcrsagent.options file.
For example:
ttCWAdmin -init
The ttCWAdmin -init command performs the following:
-
Reads the ttcrsagent.options file and launches the TimesTen main daemon on each of the hosts defined in this file.
-
Starts and registers the TimesTen cluster agent (ttCRSAgent) and the TimesTen cluster daemon monitor (ttCRSDaemon) on all hosts in the cluster. There is one TimesTen cluster agent and one TimesTen cluster daemon monitor for the TimesTen installation on each host. When the TimesTen cluster agent has started, Oracle Clusterware begins monitoring the TimesTen daemon on each host and restarts the TimesTen daemon if it fails.
To start and register the TimesTen cluster agent and the TimesTen cluster daemon monitor on specific hosts in the cluster, use the -hosts option to specify the desired hosts:
ttCWAdmin -init -hosts "host1, host2"
Note:
You must stop the TimesTen cluster agent on the local host with the ttCWAdmin -shutdown command before you run a ttDaemonAdmin -stop command; otherwise, the cluster agent restarts the TimesTen daemon.
Create and Populate a TimesTen Database on One Host
Create a database on the host where you intend the active database to reside. The DSN must be the same as the database file name.
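As a quick sanity check for this naming rule, the following sketch (the file path and DSN are hypothetical) parses a sys.odbc.ini entry and confirms that the entry name matches the file name portion of its DataStore attribute:

```shell
# Create a sample sys.odbc.ini entry (hypothetical path and DSN).
cat > /tmp/sys.odbc.ini <<'EOF'
[basicDSN]
DataStore=/path1/basicDSN
LogDir=/path1/log
EOF

# For each entry, compare the DSN name with the DataStore file name.
awk -F'[][=]' '
  /^\[/        { dsn = $2 }
  /^DataStore/ { n = split($2, p, "/")
                 if (p[n] == dsn) print dsn ": OK"
                 else             print dsn ": MISMATCH" }
' /tmp/sys.odbc.ini
```

For the entry above, the check prints `basicDSN: OK`.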
Create schema objects (such as tables, AWT cache groups, and read-only cache groups) and populate with data as appropriate. However, before you create cache groups, you must first decide when to load the cache groups.
-
For best performance, load the cache group tables from the Oracle database tables before running the ttCWAdmin -create command. There is less performance overhead when cache groups are loaded with initial data before the duplicate is performed on the active database to create the standby database (and any subscriber databases). For this option, perform the following:
-
Start the cache agent as follows:
call ttCacheStart;
Note:
Since this is before the ttCWAdmin -start command, you can start the cache agent at this time. The ttCWAdmin -start command notes that the cache agent is already started and continues.
-
Use the LOAD CACHE GROUP statement to load the cache group tables from the Oracle database tables.
-
If using cache groups with autorefresh, set the autorefresh state to pause with the ALTER CACHE GROUP SET AUTOREFRESH STATE PAUSED statement. The autorefresh state will be set to ON as part of the ttCWAdmin -start command.
The following example demonstrates how to create a read-only cache group with autorefresh, load the data, and then set the autorefresh state to pause:
Command> call ttCacheStart;
Command> CREATE READONLY CACHE GROUP my_cg AUTOREFRESH MODE INCREMENTAL INTERVAL 60 SECONDS FROM t1 (c1 NUMBER(22) NOT NULL PRIMARY KEY, c2 DATE, c3 VARCHAR(30));
Command> LOAD CACHE GROUP my_cg COMMIT EVERY 100 ROWS PARALLEL 4;
Command> ALTER CACHE GROUP my_cg SET AUTOREFRESH STATE PAUSED;
-
-
Alternatively, wait to load the cache group tables until after the ttCWAdmin -start command, as described in Load Cache Groups. The data will be replicated to the standby database and any subscriber databases.
Create System DSN Files on Other Hosts
On all hosts that are to be included in the cluster, create the system DSN
(sys.odbc.ini) files.
The DataStore attribute and the Data Source
Name must be the same as the entry name for the
cluster.oracle.ini file. See Configuring Oracle Clusterware Management with the cluster.oracle.ini File.
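A sketch of this consistency check follows; the file locations and entry names are hypothetical, and in a real cluster you would point it at the actual cluster.oracle.ini and sys.odbc.ini locations:

```shell
# Sample files standing in for the real configuration.
cat > /tmp/cluster.oracle.ini <<'EOF'
[basicDSN]
MasterHosts=host1,host2
EOF
cat > /tmp/sys.odbc.ini <<'EOF'
[basicDSN]
DataStore=/path1/basicDSN
EOF

# Every entry name in cluster.oracle.ini must also be a system DSN.
for entry in $(grep -o '^\[[^]]*\]' /tmp/cluster.oracle.ini | tr -d '[]'); do
  if grep -q "^\[$entry\]" /tmp/sys.odbc.ini; then
    echo "$entry: system DSN found"
  else
    echo "$entry: MISSING from sys.odbc.ini"
  fi
done
```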
Create a cluster.oracle.ini File
Create a cluster.oracle.ini file as a text file.
See Configuring Oracle Clusterware Management with the cluster.oracle.ini File for details about its contents and acceptable locations for the file.
Create the Oracle Clusterware Resources to Manage Virtual IP Addresses
Advanced availability involves configuring spare master or subscriber hosts that are idle until needed to replace master or subscriber hosts (used in the active standby pair replication scheme) that either shut down unexpectedly or experience an unrecoverable error.
This is an optional step that is only necessary if you decide to configure advanced availability.
If you are planning on providing additional master or subscriber hosts for advanced availability, then you need to configure virtual IP addresses (one for each master host and subscriber actively used in the active standby pair). See Configuring Advanced Availability for more details on how many virtual IP addresses should be created.
In this case, perform the following:
-
Designate (or create) new virtual IP addresses on the network that are to be used solely for managing multiple hosts in a TimesTen replication environment managed by Oracle Clusterware.
-
Configure these VIP addresses for use to manage multiple hosts for advanced availability in the cluster.oracle.ini file, as described in Configuring Advanced Availability.
-
Create the Oracle Clusterware resources that manage these VIP addresses by running the ttCWAdmin -createVIPs command as the root user on any host in the cluster. For example:
ttCWAdmin -createVIPs -dsn myDSN
The VIP address names created by this command start with network_ followed by the TimesTen instance name, TimesTen instance administrator, and the DSN. In contrast, the VIP addresses created for use by Oracle Clusterware itself are prefixed with ora.
Note:
You must run the ttCWAdmin -createVIPs command before the ttCWAdmin -create command. If you decide that you want to use VIP addresses for advanced availability after you run the ttCWAdmin -create command, you must perform the following:
-
Run ttCWAdmin -drop to drop the active standby pair replication scheme.
-
Add the VIP addresses to the cluster.oracle.ini file.
-
Run ttCWAdmin -createVIPs to create the resources to manage the VIP addresses.
-
Run ttCWAdmin -create to create the active standby pair replication scheme managed by Oracle Clusterware.
-
Once created, the only way to drop the Oracle Clusterware resources that manage the VIP addresses is to run the ttCWAdmin -dropVIPs command. Before you can drop the virtual IP addresses, you must first run the ttCWAdmin -drop command.
The following is an example of how to drop the virtual IP addresses:
ttCWAdmin -dropVIPs -dsn myDSN
For an example of when to use the ttCWAdmin -dropVIPs command, see
Removing an Active Standby Pair from a Cluster.
Create an Active Standby Pair Replication Scheme
Create an active standby pair replication scheme by running the ttCWAdmin
-create command on any host in the cluster.
Note:
The cluster.oracle.ini file contains the configuration needed to perform the ttCWAdmin -create command and so must be reachable and readable by the ttCWAdmin executable. See Configuring Oracle Clusterware Management with the cluster.oracle.ini File.
For example:
ttCWAdmin -create -dsn myDSN
The ttCWAdmin -create command prompts for the following:
-
Prompts for the name of a TimesTen user with ADMIN privileges. If cache groups are being managed by Oracle Clusterware, enter the TimesTen cache administration user name.
-
Prompts for the TimesTen password for the previously entered user name.
-
If cache groups are being used, prompts for the password for the Oracle cache administration user. This password is provided in the OraclePWD connection attribute when the cache administration user connects.
-
Prompts for a random string used to encrypt the above information.
If you want to specify the path and name of a file to be used as the cluster.oracle.ini file, use the -ttclusterini option of the ttCWAdmin -create command.
ttCWAdmin -create -dsn myDSN -ttclusterini path/to/cluster/mycluster.ini
To drop the active standby pair, use the ttCWAdmin -drop command, as follows:
ttCWAdmin -drop -dsn myDSN
Note:
If your application connects to the TimesTen database using the virtual IP address, then this connection drops with the ttCWAdmin -drop command, since the virtual IP address is managed by Oracle Clusterware. However, if your application connects to the TimesTen database using the host name, the connection is not dropped.
For examples showing the sequence in which to use the ttCWAdmin
-create and ttCWAdmin -drop commands, see Managing Active Standby Pairs in a Cluster and Managing Read-Only Subscribers in the Active Standby Pair.
Start the Active Standby Pair and the Applications
Start the cluster with the active standby pair replication scheme by running the
ttCWAdmin -start command on any host.
This starts the cache agent (if not already started) and replication agent on the active database, performs the duplicate to create the standby database (and any subscriber databases), and starts the cache agent and replication agent on the standby (and any subscribers).
If you do not specify the -noapp option, the applications are also started. If you do specify the -noapp option, then you can start and stop the applications with the -startApps and -stopApps options respectively.
For example:
ttCWAdmin -start -dsn myDSN
This command starts the following processes for the active standby pair:
-
TimesTen daemon monitor (ttCRSMaster)
-
Active service (ttCRSActiveService)
-
Standby service (ttCRSsubservice)
-
Monitor for the application (AppName)
The following example starts the cache and replication agents, but does not start the applications because of the inclusion of the -noapp option:
ttCWAdmin -start -noapp -dsn myDSN
To start and stop applications, use the ttCWAdmin -startApps and -stopApps commands as shown below:
ttCWAdmin -startapps -dsn myDSN
ttCWAdmin -stopapps -dsn myDSN
To stop the TimesTen database monitor (ttCRSMaster), the cache agent, and the replication agent, and to disconnect the application from both databases, run the ttCWAdmin -stop command:
ttCWAdmin -stop -dsn myDSN
Note:
If your application connects to the TimesTen database using a virtual IP address, then this connection drops with the ttCWAdmin -stop command, since the virtual IP address is managed by Oracle Clusterware. If your application connects to the TimesTen database using the host name, the connection is not dropped; however, replication to the standby no longer occurs.
See Managing Active Standby Pairs in a Cluster and Managing Read-Only Subscribers in the Active Standby Pair for examples showing the sequence in which
to use the ttCWAdmin -start and -stop commands.
Load Cache Groups
Use the LOAD CACHE GROUP statement to load the cache group tables from the Oracle database tables.
For more information on when to load cache groups, see Create and Populate a TimesTen Database on One Host.
Include More Than One Active Standby Pair in a Cluster
If you want to use Oracle Clusterware to manage more than one active standby pair in
a cluster, include additional configuration in the cluster.oracle.ini file.
Oracle Clusterware can only manage more than one active standby pair in a cluster if all TimesTen databases are part of the same TimesTen instance on a single host.
For example, the following cluster.oracle.ini file contains configuration information for two active standby pair replication schemes on the same host:
[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0

[advSub2DSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.4, 192.168.1.5
SubscriberVIP=192.168.1.6
VIPInterface=eth0
VIPNetMask=255.255.255.0
Perform these tasks for additional replication schemes:
-
Create and populate the databases.
-
Create the virtual IP addresses. Use the ttCWAdmin -createVIPs command.
-
Create the active standby pair replication scheme. Use the ttCWAdmin -create command.
-
Start the active standby pair. Use the ttCWAdmin -start command.
Configure an Oracle Database as a Disaster Recovery Subscriber
See Using a Disaster Recovery Subscriber in an Active Standby Pair.
Oracle Clusterware manages the active standby pair, but does not manage the disaster recovery subscriber. The user must explicitly switch to use the remote site if the primary site fails.
To use Oracle Clusterware to manage an active standby pair that has a remote disaster recovery subscriber, perform the tasks described in Using a Disaster Recovery Subscriber in an Active Standby Pair.
Configuring Oracle Clusterware Management with the cluster.oracle.ini File
The information in the cluster.oracle.ini file is used to create
Oracle Clusterware resources that manage TimesTen databases, TimesTen processes, user
applications, and virtual IP addresses. Create an initialization file called
cluster.oracle.ini as a text file.
Note:
See TimesTen Configuration Attributes for Oracle Clusterware for details on all of the attributes that can be used in the cluster.oracle.ini file.
The ttCWAdmin -create command reads this file for configuration
information, so the location of the text file must be reachable and readable by
ttCWAdmin. The ttCWAdmin utility is used to
administer TimesTen active standby pairs in a cluster that is managed by Oracle
Clusterware.
It is recommended that you place this file in the TimesTen daemon home directory on the host for the active database. However, you can place this file in any directory or shared drive on the same host as where you run the ttCWAdmin -create command.
The default location for this file is in the timesten_home/conf directory. If you place this file in another location, identify the path of the location with the -ttclusterini option.
The entry name in the cluster.oracle.ini file must be the same as
an existing system DSN in the sys.odbc.ini file. For example,
[basicDSN] is the entry name in the
cluster.oracle.ini file described in Configuring Basic Availability. [basicDSN] must also be the
DataStore and Data Source Name data store
attributes in the sys.odbc.ini files on each host. For example, the
sys.odbc.ini file for the basicDSN DSN on
host1 might be:
[basicDSN]
DataStore=/path1/basicDSN
LogDir=/path1/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
The sys.odbc.ini file for basicDSN on host2 can have a different path, but all other attributes should be the same:
[basicDSN]
DataStore=/path2/basicDSN
LogDir=/path2/log
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
The following sections demonstrate sample configurations of the cluster.oracle.ini file:
Configuring Basic Availability
This example shows an active standby pair with no subscribers.
The host for the active database is the first MasterHost
defined (host1) and the standby database is the second
MasterHost in the list (host2). Each host in the
list is delimited by commas. You can include spaces for readability, if desired.
[basicDSN]
MasterHosts=host1,host2
The following is an example of a cluster.oracle.ini file for an
active standby pair with one subscriber on host3:
[basicSubscriberDSN]
MasterHosts=host1,host2
SubscriberHosts=host3
Configuring Advanced Availability
Advanced availability involves configuring spare master or subscriber hosts that are idle until needed to replace master or subscriber hosts (used in the active standby pair replication scheme) that either shut down unexpectedly or experience an unrecoverable error.
As mentioned in Configuring Basic Availability, the MasterHosts
attribute in the cluster.oracle.ini file configures the hosts that are
used as the master nodes. For an active standby pair replication scheme, you only need
two master hosts (one to become the active and one to become the standby). In the event
of a failure, the host that did not fail becomes the active (if not already the active)
and the failed host is recovered and becomes the standby. However, if the failed host
cannot be recovered and if you specified more than two hosts as master hosts in the
cluster.oracle.ini file, then the next master host in the list can
be instantiated to take the place of an unrecoverable master host.
For example, the following shows a configuration of several master hosts. The first two master hosts (host1 and host2) become the active and the standby; the latter two master hosts (host3 and host4) can be used to take the place of either host1 or host2 if either encounter an unrecoverable failure.
MasterHosts=host1,host2,host3,host4
When you configure more than two master hosts, you should also configure two virtual IP (VIP) addresses used only by Oracle Clusterware resources that manage TimesTen resources. With these VIP addresses, TimesTen internal processes (those that manage replication) are isolated from any master host changes that may occur because of an unrecoverable host error.
Note:
As described in Create the Oracle Clusterware Resources to Manage Virtual IP Addresses, the Oracle Clusterware resources that manage these VIP addresses (used in advanced availability) are created with the ttCWAdmin -createVIPs command.
These VIP addresses must be different from any other VIP addresses defined for Oracle Clusterware use or any VIP addresses that are to be used by user applications. Furthermore, if an application does use these VIP addresses, the application may encounter errors when a master host fails (whether the failure is recoverable or unrecoverable). These VIP addresses cannot be used by a user application as a method for client failover or as a way to isolate itself when the active and standby databases switch.
Specify two VIP addresses in the MasterVIP parameter, one for each master host in the active standby pair replication scheme. The VIP addresses specified for the TimesTen cluster must be different from any VIP addresses already defined and used by Oracle Clusterware. In particular, the VIP addresses that are created during the Oracle Clusterware install cannot be used with TimesTen.
MasterVIP=192.168.1.1, 192.168.1.2
The following parameters are also associated with advanced availability in the cluster.oracle.ini file:
-
SubscriberHosts, similar to MasterHosts, lists the host names that can contain subscriber databases.
-
SubscriberVIP, similar to MasterVIP, provides VIP addresses that can be used by TimesTen internally to manage a subscriber node.
-
VIPInterface is the name of the public network adaptor.
-
VIPNetMask defines the netmask of the virtual IP addresses.
In the following example, the hosts for the active database and the standby database are host1 and host2. The hosts available for instantiation in case of an unrecoverable error are host3 and host4. There are no subscriber nodes. VIPInterface is the name of the public network adaptor. VIPNetMask defines the netmask of the virtual IP addresses.
[advancedDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
The following example configures a single subscriber on host4. There is one extra host defined in SubscriberHosts that can be used for failover of the master databases and one extra node that can be used for failover of the subscriber database. MasterVIP and SubscriberVIP specify the virtual IP addresses defined for the master and subscriber hosts.
[advancedSubscriberDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4,host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
Ensure that the extra master nodes:
-
Have TimesTen installed
-
Have the direct mode application installed if this is part of the configuration. See Implementing Application Failover.
Including Cache Groups in the Active Standby Pair
If the active standby pair replicates one or more AWT or read-only cache groups, set the CacheConnect attribute to y.
This example sets the CacheConnect attribute to y. The example specifies an active standby pair with one subscriber in an advanced availability configuration. The active standby pair replicates one or more cache groups.
[advancedCacheDSN]
MasterHosts=host1,host2,host3
SubscriberHosts=host4, host5
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
CacheConnect=y
Implementing Application Failover
TimesTen integration with Oracle Clusterware can facilitate the failover of a TimesTen application that is linked to any of the databases in the active standby pair.
TimesTen can manage both direct and client/server mode applications that are on the same host as Oracle Clusterware and TimesTen.
The required attributes in the cluster.oracle.ini file for failing over a TimesTen application are as follows:
-
AppName - Name of the application to be managed by Oracle Clusterware
-
AppStartCmd - Command line for starting the application
-
AppStopCmd - Command line for stopping the application
-
AppCheckCmd - Command line for running an application that checks the status of the application specified by AppName
-
AppType - Determines the database to which the application is linked. The possible values are Active, Standby, DualMaster, Subscriber (all), and Subscriber[index].
There are also several optional attributes that you can configure, such as AppFailureThreshold, DatabaseFailoverDelay, and AppScriptTimeout. Table A-3 lists and describes all optional attributes and their default values.
The TimesTen application monitor process uses the user-supplied script or program specified by AppCheckCmd to monitor the application. The script that checks the status of the application must be written to return 0 for success and a nonzero number for failure. When Oracle Clusterware detects a nonzero value, it takes action to recover the failed application.
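For illustration, a minimal check script might test whether the application's process is still alive. The PID-file path, the READER_PIDFILE variable, and the script layout below are assumptions made for this sketch, not TimesTen requirements:

```shell
#!/bin/sh
# app_check.sh -- hypothetical status-check script for the "reader"
# application (PID-file convention and paths are illustrative).
# Oracle Clusterware treats exit status 0 as healthy; any nonzero
# status causes it to take action to recover the application.

pidfile="${READER_PIDFILE:-/mycluster/reader/reader.pid}"

is_alive() {
    # Healthy if the PID file exists and its process still responds.
    [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

if [ "$1" = "check" ]; then
    is_alive "$pidfile"
    exit $?    # 0 = running, nonzero = failed
fi
```

With AppCheckCmd=/mycluster/reader/app_check.sh check, Oracle Clusterware invokes the script with the check argument and acts on its exit status.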
This example shows advanced availability configured for an active standby pair with
no subscribers. The reader application is an application that queries
the data in the standby database. AppStartCmd,
AppStopCmd and AppCheckCmd can include arguments
such as start, stop and check
commands.
Note:
Do not use quotes in the values for AppStartCmd, AppStopCmd and AppCheckCmd.
[appDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh start
AppStopCmd=/mycluster/reader/app_stop.sh stop
AppCheckCmd=/mycluster/reader/app_check.sh check
You can configure failover for more than one application. Use AppName to name the application and provide values for AppType, AppStartCmd, AppStopCmd and AppCheckCmd immediately following the AppName attribute. You can include blank lines for readability. For example:
[app2DSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0

AppName=reader
AppType=Standby
AppStartCmd=/mycluster/reader/app_start.sh
AppStopCmd=/mycluster/reader/app_stop.sh
AppCheckCmd=/mycluster/reader/app_check.sh

AppName=update
AppType=Active
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh
If you set AppType to DualMaster, the application starts on both the active and the standby hosts. The failure of the application on the active host causes the active database and all other applications on the host to fail over to the standby host. You can configure the failure interval, the number of restart attempts, and the uptime threshold by setting the AppFailureInterval, AppRestartAttempts and AppUptimeThreshold attributes. These attributes have default values. For example:
[appDualDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AppName=update
AppType=DualMaster
AppStartCmd=/mycluster/update/app2_start.sh
AppStopCmd=/mycluster/update/app2_stop.sh
AppCheckCmd=/mycluster/update/app2_check.sh
AppRestartAttempts=5
AppUptimeThreshold=300
AppFailureInterval=30
Configuring for Recovery When Both Master Nodes Permanently Fail
If both master nodes fail and then come back up, Oracle Clusterware can automatically recover the master databases.
Automatic recovery of a temporary dual failure requires the following:
-
RETURN TWOSAFE is not specified for the active standby pair.
-
AutoRecover is set to y.
-
RepBackupDir specifies a directory on shared storage.
-
RepBackupPeriod is set to a value greater than 0.
If both master nodes fail permanently, Oracle Clusterware can automatically recover the master databases to two new nodes if the following is true:
-
Advanced availability is configured (virtual IP addresses and at least four hosts).
-
The active standby pair does not replicate cache groups.
-
RETURN TWOSAFE is not specified.
-
AutoRecover is set to y.
-
RepBackupDir specifies a directory on shared storage.
-
RepBackupPeriod must be set to a value greater than 0.
TimesTen first performs a full backup of the active database and then performs incremental backups. You can specify the optional attribute RepFullBackupCycle to manage when TimesTen performs subsequent full backups. By default, TimesTen performs a full backup after every five incremental backups.
If RepBackupDir and RepBackupPeriod are configured for backups, TimesTen performs backups for any master database that becomes active. It does not delete backups taken for a database that was formerly the active and has become the standby unless that database becomes the active again. Ensure that the shared storage has enough space for two complete database backups. The ttCWAdmin -restore command automatically chooses the correct backup files.
Incremental backups increase the number of log records in the transaction log files. Ensure that the values of RepBackupPeriod and RepFullBackupCycle are small enough to prevent a large accumulation of log records in the transaction log files.
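For example, a hypothetical [backupCycleDSN] configuration (the DSN name and values here are illustrative) takes an incremental backup every 15 minutes and, with RepFullBackupCycle=4, a full backup roughly every hour:

```ini
[backupCycleDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=900
RepFullBackupCycle=4
```

Shorter values keep the transaction log growth between full backups small, at the cost of more frequent backup activity on the active database.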
This example shows attribute settings for automatic recovery.
[autorecoveryDSN]
MasterHosts=host1,host2,host3,host4
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
AutoRecover=y
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

If you have cache groups in the active standby pair or prefer to recover manually from failure of both master hosts, ensure that AutoRecover is set to n (the default). Manual recovery requires the following:
-
RepBackupDir specifies a directory on shared storage
-
RepBackupPeriod must be set to a value greater than 0
This example shows attribute settings for manual recovery. The default value for AutoRecover is n, so it is not included in the file.
[manrecoveryDSN]
MasterHosts=host1,host2,host3
MasterVIP=192.168.1.1, 192.168.1.2
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

Using the RepDDL Attribute
The RepDDL attribute represents the SQL statement that creates the active standby pair.
The RepDDL attribute is optional. You can use it to exclude tables, cache groups and sequences from the active standby pair.
If you include RepDDL in the cluster.oracle.ini file, do not specify ReturnServiceAttribute, MasterStoreAttribute or SubscriberStoreAttribute in the cluster.oracle.ini file. Include those replication settings in the RepDDL attribute.
When you specify a value for RepDDL, use the <DSN> macro for the database file name prefix. Use the <MASTERHOST[1]> and <MASTERHOST[2]> macros to specify the master host names. TimesTen substitutes the correct values from the MasterHosts or MasterVIP attributes, depending on whether your configuration uses virtual IP addresses. Similarly, use the <SUBSCRIBERHOST[n]> macro to specify subscriber host names, where n is a number from 1 to the total number of SubscriberHosts attribute values or 1 to the total number of SubscriberVIP attribute values if virtual IP addresses are used.
Use the RepDDL attribute to exclude tables, cache groups, and
sequences from the active standby pair:
[excludeDSN]
MasterHosts=host1,host2,host3,host4
SubscriberHosts=host5,host6
MasterVIP=192.168.1.1, 192.168.1.2
SubscriberVIP=192.168.1.3
VIPInterface=eth0
VIPNetMask=255.255.255.0
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]> SUBSCRIBER <DSN> ON <SUBSCRIBERHOST[1]>\
EXCLUDE TABLE pat.salaries, \
EXCLUDE CACHE GROUP terry.salupdate, \
EXCLUDE SEQUENCE ttuser.empcount
The replication agent transmitter obtains route information as follows, in order of priority:
-
From the ROUTE clause in the RepDDL setting, if a ROUTE clause is specified. Do not specify a ROUTE clause if you are configuring advanced availability.
-
From Oracle Clusterware, which provides the private host names and public host names of the local and remote hosts as well as the remote daemon port number. The private host name is preferred over the public host name. If the replication agent transmitter cannot connect to the IPC socket, it attempts to connect to the remote daemon using information that Oracle Clusterware maintains about the replication scheme.
-
From the active and standby hosts. If they fail, then the replication agent chooses the connection method based on host name.
This is an example of specifying the ROUTE clause in RepDDL:
[routeDSN]
MasterHosts=host1,host2,host3,host4
RepDDL=CREATE ACTIVE STANDBY PAIR \
<DSN> ON <MASTERHOST[1]>, <DSN> ON <MASTERHOST[2]>\
ROUTE MASTER <DSN> ON <MASTERHOST[1]> SUBSCRIBER <DSN> ON <MASTERHOST[2]>\
MASTERIP "192.168.1.2" PRIORITY 1\
SUBSCRIBERIP "192.168.1.3" PRIORITY 1\
MASTERIP "10.0.0.1" PRIORITY 2\
SUBSCRIBERIP "10.0.0.2" PRIORITY 2\
MASTERIP "140.87.11.203" PRIORITY 3\
SUBSCRIBERIP "140.87.11.204" PRIORITY 3\
ROUTE MASTER <DSN> ON <MASTERHOST[2]> SUBSCRIBER <DSN> ON <MASTERHOST[1]>\
MASTERIP "192.168.1.3" PRIORITY 1\
SUBSCRIBERIP "192.168.1.2" PRIORITY 1\
MASTERIP "10.0.0.2" PRIORITY 2\
SUBSCRIBERIP "10.0.0.1" PRIORITY 2\
MASTERIP "140.87.11.204" PRIORITY 3\
SUBSCRIBERIP "140.87.11.203" PRIORITY 3
Monitoring Cluster Status
You can retrieve cluster status and message log files.
The following sections describe how to retrieve the status of the cluster:
Obtaining Cluster Status
The ttCWAdmin -status command reports information about all of the
active standby pairs in a TimesTen instance that are managed by the same instance
administrator.
If you specify the DSN, the utility reports information for the active standby pair with that DSN.
When you run the ttCWAdmin -status command after you have created an active standby pair replication scheme but have not yet started replication, the status appears as follows:
% ttCWAdmin -status
TimesTen Cluster status report as of Thu Nov 11 13:54:35 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore: NOT RUNNING
Monitor Process for Standby datastore: NOT RUNNING
Monitor Process for Master Datastore 1 on Host host1: NOT RUNNING
Monitor Process for Master Datastore 2 on Host host2: NOT RUNNING

2. Status of Datastores comprising the cluster
Master Datastore 1:
Host: host1
Status: AVAILABLE
State: ACTIVE
Master Datastore 2:
Host: host2
Status: UNAVAILABLE
State: UNKNOWN
====================================================================
The cluster containing the replicated DSN is offline
After you have started the replication scheme and the active database is running but the standby database is not yet running, ttCWAdmin -status returns:
% ttCWAdmin -status
TimesTen Cluster status report as of Thu Nov 11 13:58:25 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore: RUNNING on Host host1
Monitor Process for Standby datastore: RUNNING on Host host1
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING

2. Status of Datastores comprising the cluster
Master Datastore 1:
Host: host1
Status: AVAILABLE
State: ACTIVE
Master Datastore 2:
Host: host2
Status: AVAILABLE
State: IDLE
====================================================================
The cluster containing the replicated DSN is online
After you have started the replication scheme and the active database and the standby database are both running, ttCWAdmin -status returns:
% ttCWAdmin -status
TimesTen Cluster status report as of Thu Nov 11 13:59:20 2010

====================================================================
TimesTen daemon monitors:
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================
====================================================================
TimesTen Cluster agents
Host:HOST1  Status: online
Host:HOST2  Status: online
====================================================================

Status of Cluster related to DSN MYDSN:
====================================================================
1. Status of Cluster monitoring components:
Monitor Process for Active datastore: RUNNING on Host host1
Monitor Process for Standby datastore: RUNNING on Host host2
Monitor Process for Master Datastore 1 on Host host1: RUNNING
Monitor Process for Master Datastore 2 on Host host2: RUNNING

2. Status of Datastores comprising the cluster
Master Datastore 1:
Host: host1
Status: AVAILABLE
State: ACTIVE
Master Datastore 2:
Host: host2
Status: AVAILABLE
State: STANDBY
====================================================================
The cluster containing the replicated DSN is online
Message Log Files
The monitor processes report events and errors to the ttcwerrors.log
and ttcwmsg.log files.
The files are located in the
daemon_home/info directory. The default
size of these files is the same as the default maximum size of the user log. The maximum
number of log files is the same as the default number of files for the user log. When
the maximum number of files has been written, additional errors and messages overwrite
the files, beginning with the oldest file.
For the default values for number of log files and log file size, see Error, Warning, and Informational Messages in Oracle TimesTen In-Memory Database Operations Guide.
Recovering from Failures
Oracle Clusterware can recover automatically from many kinds of failures.
The following sections describe several failure scenarios and how Oracle Clusterware manages the failures.
How TimesTen Performs Recovery When Oracle Clusterware is Configured
The TimesTen database monitor (the ttCRSmaster process) performs
recovery.
It attempts to connect to the failed database without using the
forceconnect option. If the connection fails with error 994
("Data store connection terminated"), the database monitor tries to
reconnect 10 times. If the connection fails with error 707 ("Attempt to connect
to a data store that has been manually unloaded from RAM"), the database
monitor changes the RAM policy and tries to connect again. If the database monitor
cannot connect, it returns a connection failure.
If the database monitor can connect to the database, then it performs these tasks:
-
It queries the CHECKSUM column in the TTREP.REPLICATIONS replication table.
-
If the value in the CHECKSUM column matches the checksum stored in the Oracle Cluster Registry, then the database monitor verifies the role of the database. If the role is ACTIVE, then recovery is complete.

If the role is not ACTIVE, then the database monitor queries the replication Commit Ticket Number (CTN) in the local database and the CTN in the active database to find out whether there are transactions that have not been replicated. If all transactions have been replicated, then recovery is complete.
-
If the checksum does not match or if some transactions have not been replicated, then the database monitor performs a duplicate operation from the remote database to re-create the local database.
If the database monitor fails to connect with the database because of error 8110 or
8111 (master catchup required or in progress), then it uses the
forceconnect=1 option to connect and starts master catchup.
Recovery is complete when master catchup has been completed. If master catchup fails
with error 8112 ("Operation not permitted"), then the database monitor
performs a duplicate operation from the remote database. See Automatic Catch-Up of a Failed Master Database.
If the connection fails because of other errors, then the database monitor tries to perform a duplicate operation from the remote database.
The duplicate operation verifies that:
-
The remote database is available.
-
The replication agent is running.
-
The remote database has the correct role. The role must be ACTIVE when the duplicate operation is attempted for creation of a standby database. The role must be STANDBY or ACTIVE when the duplicate operation is attempted for creation of a read-only subscriber.
When the conditions for the duplicate operation are satisfied, the existing failed database is destroyed and the duplicate operation starts.
When an Active Database or Its Host Fails
If there is a failure on the node where the active database resides, Oracle
Clusterware automatically changes the state of the standby database to
ACTIVE. If application failover is configured, then the application
begins updating the new active database.
Figure 8-2 shows that the state of the old standby database has changed to ACTIVE and that the application is updating the new active database.
Figure 8-2 Standby Database Becomes Active

Description of "Figure 8-2 Standby Database Becomes Active"
Oracle Clusterware tries to restart the database or host where the failure occurred. If it is successful, then that database becomes the standby database.
Figure 8-3 shows a cluster where the former active master becomes the standby master.
Figure 8-3 Standby Database Starts on Former Active Host

Description of "Figure 8-3 Standby Database Starts on Former Active Host"
If the failure of the former active master is permanent and advanced availability is configured, Oracle Clusterware starts a standby master on one of the extra nodes.
Figure 8-4 shows a cluster in which the standby master is started on one of the extra nodes.
Figure 8-4 Standby Database Starts on Extra Host

Description of "Figure 8-4 Standby Database Starts on Extra Host"
See Perform a Forced Switchover After Failure of the Active Database or Host if you do not want to wait for these automatic actions to occur.
When a Standby Database or Its Host Fails
If there is a failure on the standby master, Oracle Clusterware first tries to restart the database or host. If it cannot restart the standby master on the same host and advanced availability is configured, Oracle Clusterware starts the standby master on an extra node.
Figure 8-5 shows a cluster in which the standby master is started on one of the extra nodes.
When Read-Only Subscribers or Their Hosts Fail
If there is a failure on a subscriber node, Oracle Clusterware first tries to restart the database or host. If it cannot restart the database on the same host and advanced availability is configured, Oracle Clusterware starts the subscriber database on an extra node.
When Failures Occur on Both Master Nodes
There are both automatic and manual methods for recovery when failures occur on both master nodes.
This section includes these topics:
Automatic Recovery
Oracle Clusterware can achieve automatic recovery from temporary failure on both master nodes after the nodes come back up.
Automatic recovery can occur if:
-
RETURN TWOSAFE is not specified for the active standby pair.
-
AutoRecover is set to y.
-
RepBackupDir specifies a directory on shared storage.
-
RepBackupPeriod is set to a value greater than 0.
Oracle Clusterware can achieve automatic recovery from permanent failure on both master nodes if:
-
Advanced availability is configured (virtual IP addresses and at least four hosts).
-
The active standby pair does not replicate cache groups.
-
RETURN TWOSAFE is not specified for the active standby pair.
-
AutoRecover is set to y.
-
RepBackupDir specifies a directory on shared storage.
-
RepBackupPeriod is set to a value greater than 0.
See Configuring for Recovery When Both Master Nodes Permanently Fail for examples of cluster.oracle.ini
files.
Manual Recovery for Advanced Availability
This section assumes that the failed master nodes are recovered to new hosts on which TimesTen and Oracle Clusterware are installed.
These steps use the manrecoveryDSN database and
cluster.oracle.ini file for examples.
To perform manual recovery in an advanced availability configuration, perform these tasks:
Manual Recovery for Basic Availability
This section assumes that the failed master nodes are recovered to new hosts on which TimesTen and Oracle Clusterware are installed.
These steps use the basicDSN database and
cluster.oracle.ini file for examples.
To perform manual recovery in a basic availability configuration, perform these steps:
Manual Recovery to the Same Master Nodes When Databases Are Corrupt
Failures can occur on both master nodes so that the databases are corrupt. You can recover to the same master nodes.
To recover to the same master nodes, perform the following steps:
Manual Recovery When RETURN TWOSAFE Is Configured
You can configure an active standby pair to have a return service of RETURN TWOSAFE.
You configure RETURN TWOSAFE by using the ReturnServiceAttribute Clusterware attribute in the cluster.oracle.ini file.
This cluster.oracle.ini example includes backup configuration in case the database logs are not available:
[basicTwosafeDSN]
MasterHosts=host1,host2
ReturnServiceAttribute=RETURN TWOSAFE
RepBackupDir=/shared_drive/dsbackup
RepBackupPeriod=3600

Perform these recovery tasks:
When More Than Two Master Hosts Fail
Approach a failure of more than two master hosts as a more extreme case of dual host failure.
Use these guidelines:
-
Address the root cause of the failure if it is something like a power outage or network failure.
-
Identify or obtain at least two healthy hosts for the active and standby databases.
-
Update the MasterHosts and SubscriberHosts entries in the cluster.oracle.ini file.
-
See Manual Recovery for Advanced Availability and Manual Recovery for Basic Availability for guidelines on subsequent actions to take.
Perform a Forced Switchover After Failure of the Active Database or Host
If you want to force a switchover to the standby database without waiting for automatic recovery to be performed by TimesTen and Oracle Clusterware, you can write an application that uses Oracle Clusterware commands.
Perform the following:
-
Use the crsctl stop resource command to stop the TimesTen daemon monitor (ttCRSmaster) resource on the active database. This causes the role of the standby database to change to active.
-
Use the crsctl start resource command to restart the ttCRSmaster resource on the former active database. This causes the database to recover and become the standby database.
The following example demonstrates a forced switchover from the active database on host1 to the standby database on host2.
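A sketch of that sequence is shown below. The ttCRSmaster resource name is illustrative; list the actual resource names registered for your instance and DSN with crsctl status resource:

```shell
# Stop the ttCRSmaster resource for the active database on host1.
# Oracle Clusterware changes the role of the standby on host2 to active.
crsctl stop resource TT_Master_myDSN_0 -n host1    # resource name is illustrative

# Restart the ttCRSmaster resource on the former active host.
# The database on host1 recovers and becomes the new standby.
crsctl start resource TT_Master_myDSN_0 -n host1
```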
See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation for more information about the crsctl start resource and crsctl stop resource commands.
Clusterware Management
There are certain procedures for managing clusterware when used in conjunction with TimesTen.
This section includes the following topics:
Changing User Names or Passwords When Using Oracle Clusterware
When you create the active standby pair replication scheme with the ttCWAdmin
-create command, Oracle Clusterware prompts for the required user names and
passwords in order to manage the TimesTen environment.
Oracle Clusterware stores these user names and passwords. After modifying
any user name or password, you must run the ttCWAdmin
-reauthenticate command to enable Oracle Clusterware to store these new
user names and passwords.
Managing Hosts in the Cluster
The following sections describe how to add or remove hosts when using a cluster.
Adding a Host to the Cluster
Adding a host requires that the cluster be configured for advanced availability.
The examples in this section use the
advancedSubscriberDSN.
To add two spare master hosts to a cluster, enter a command similar to the following:
ttCWAdmin -addMasterHosts -hosts "host8,host9" -dsn advancedSubscriberDSN
To add a spare subscriber host to a cluster, enter a command similar to the following:
ttCWAdmin -addSubscriberHosts -hosts "subhost1" -dsn advancedSubscriberDSN
Removing a Host from the Cluster
Removing a host from the cluster requires that the cluster be configured for advanced availability.
MasterHosts must list more than two hosts if one of the
master hosts is to be removed. SubscriberHosts must list at least one
more host than the number of subscriber databases if one of the subscriber hosts is to
be removed.
The examples in this section use the advancedSubscriberDSN.
To remove two spare master hosts from the cluster, enter a command similar to the following:
ttCWAdmin -delMasterHosts "host8,host9" -dsn advancedSubscriberDSN
To remove a spare subscriber host from the cluster, enter a command similar to the following:
ttCWAdmin -delSubscriberHosts "subhost1" -dsn advancedSubscriberDSN
Managing Active Standby Pairs in a Cluster
The following sections describe how to add or remove an active standby pair to a cluster.
Adding an Active Standby Pair to a Cluster
You can add an active standby pair (with or without subscribers) to a cluster that is already managing an active standby pair.
Managing Read-Only Subscribers in the Active Standby Pair
The following sections describe how to manage read-only subscribers in the active standby pair that is managed by Oracle Clusterware.
Adding a Read-Only Subscriber Managed by Oracle Clusterware
To add a read-only subscriber that is to be managed by Oracle Clusterware to an active standby pair replication scheme, perform these steps:
Removing a Read-Only Subscriber Managed by Oracle Clusterware
To remove a read-only subscriber that is managed by Oracle Clusterware from an active standby pair, perform these steps:
Adding or Dropping a Read-Only Subscriber Not Managed by Oracle Clusterware
You can add or drop a read-only subscriber that is not managed by Oracle Clusterware to or from an existing active standby pair replication scheme that is managed by Oracle Clusterware.
Using the ttCWAdmin -beginAlterSchema command enables
you to add a subscriber without dropping and re-creating the replication scheme.
Oracle Clusterware does not manage the subscriber, because it is not part of the
configuration that was set up for Oracle Clusterware management.
Perform these steps:
If you added a subscriber, ensure that the read-only subscriber is included if the
cluster is dropped and re-created by adding the RemoteSubscriberHosts Oracle Clusterware attribute for the read-only subscriber in the
cluster.oracle.ini file as described in Step 1 in Configure a Read-Only Subscriber That Is Not Managed by Oracle Clusterware. Alternatively, if you dropped a subscriber, remove the
RemoteSubscriberHosts Oracle Clusterware attribute for the
dropped subscriber in the cluster.oracle.ini file (if it is
configured).
Rebuilding a Read-Only Subscriber Not Managed by Oracle Clusterware
Perform the following tasks to destroy and rebuild a read-only subscriber that is not managed by Oracle Clusterware:
- Stop the replication agent on the subscriber host.
- Use the ttDestroy utility to destroy the subscriber database.
- On the subscriber host, use ttRepAdmin -duplicate to duplicate the standby database to the read-only subscriber. See Duplicating a Database.
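The three steps above can be sketched as commands run on the subscriber host; the DSNs, the standby host name, and the credentials shown are illustrative:

```shell
# 1. Stop the replication agent for the subscriber database.
ttAdmin -repStop subscriberDSN

# 2. Destroy the subscriber database.
ttDestroy subscriberDSN

# 3. Duplicate the standby database to re-create the read-only subscriber.
ttRepAdmin -duplicate -from standbyDSN -host standbyhost \
  -uid adminuser -pwd adminpwd subscriberDSN
```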
Reversing the Roles of the Master Databases
After a failover, the active and standby databases are on different hosts than they
were before the failover. You can use the -switch option of the
ttCWAdmin utility to restore the original configuration.
Optionally, you can also use the -timeout option with the
-switch option to set a timeout for the number of seconds to wait
for the active and standby database switch to complete.
For example:
ttCWAdmin -switch -dsn basicDSN
Ensure that there are no open transactions before using the -switch option. If there are open transactions, the command fails.
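For example, to wait at most 60 seconds for the switch to complete (the timeout value is illustrative):

```shell
ttCWAdmin -switch -timeout 60 -dsn basicDSN
```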
Note:
See ttCWAdmin in the Oracle TimesTen In-Memory Database Reference.
Figure 8-6 shows the hosts for an active standby pair. The active database resides on host A, and the standby database resides on host B.
Figure 8-6 Hosts for an Active Standby Pair

Description of "Figure 8-6 Hosts for an Active Standby Pair"
The ttCWAdmin -switch command performs these tasks:
-
Deactivates the TimesTen cluster agent (ttCRSAgent) on host A (the active master).
-
Disables the TimesTen database monitor (ttCRSmaster) on host A.
-
Calls the ttRepSubscriberWait, ttRepStop and ttRepDeactivate built-in procedures on host A.
-
Stops the active service (ttCRSActiveService) on host A and reports a failure event to the Oracle Clusterware CRSD process.
-
Enables monitoring on host A and moves the active service to host B.
-
Starts the replication agent on host A, stops the standby service (ttCRSsubservice) on host B and reports a failure event to the Oracle Clusterware CRSD process on host B.
-
Starts the standby service (ttCRSsubservice) on host A.
Modifying Connection Attribute Values
When you modify connection attributes across an active standby pair with subscribers, the connection attributes must be modified on all hosts within this configuration.
Note:
You cannot modify any DATASTORE connection attributes since they are only allowed to be set at data store creation time. This procedure can, for example, be used to change the PermSize value, which is a first connection attribute.
Use the ttCWAdmin -beginAlterSchema and
-endAlterSchema commands to facilitate the change of any connection
attribute values on the active and standby databases and any subscribers.
-
The ttCWAdmin -beginAlterSchema command suspends the Oracle Clusterware management and stops the replication agents on the active and standby databases and any subscriber databases in preparation for any changes.
-
After you complete all changes, the ttCWAdmin -endAlterSchema command resumes Oracle Clusterware management and restarts all replication agents on the active and standby databases and any subscriber databases.
Perform the following tasks when altering any connection attributes for the active standby pair when using Oracle Clusterware:
-
Suspend Oracle Clusterware and stop all replication agents for the active and standby databases with the ttCWAdmin -beginAlterSchema command. The active database continues to accept requests and updates, but any changes are not propagated to the standby database and any subscribers until the replication agents are restarted.

The ttCWAdmin -beginAlterSchema command also changes the RAM policy temporarily for the standby database and all subscriber databases to inUse with ramGrace, where the grace period is set to 60 seconds, to enable these databases to be unloaded by TimesTen. Once the standby and subscriber databases are unloaded from memory, the connection attributes for these databases can be modified.

ttCWAdmin -beginAlterSchema -dsn advancedDSN
-
Disconnect any application connections and wait for the standby and subscriber databases to unload from memory (based on the RAM policy).

Once the standby and subscriber databases are unloaded from memory, alter any connection attributes, such as PermSize, on the hosts for the standby and all subscriber databases in their respective sys.odbc.ini files.
-
Resume Oracle Clusterware and restart all replication agents for the active and standby databases with the ttCWAdmin -endAlterSchema command. The configured RAM policy for each TimesTen database is set back to always. The active database propagates any transactions that occurred while the standby database and subscribers were down.

ttCWAdmin -endAlterSchema -dsn advancedDSN
Note:
Wait an appropriate amount of time for all changes to propagate from the active database to the standby database and all subscribers before performing the next step.
The only host whose connection attributes have not been changed is the host with the active database. Switch the active and standby databases so that you can modify the connection attributes on this host.
-
Suspend all application workload and disconnect all applications on the active database.
-
Switch the active and standby databases with the
ttCWAdmin -switch command.
ttCWAdmin -switch -dsn advancedDSN
-
Suspend Oracle Clusterware and stop all replication agents for all databases with the
ttCWAdmin -beginAlterSchema command. The new active database may still accept requests and updates, but any changes are not propagated to the new standby database and any subscribers until the replication agents are restarted.
The RAM policy for the new standby database (and all subscriber databases) temporarily changes to inUse with RamGrace, with a grace period of 60 seconds, so that TimesTen can unload these databases.
ttCWAdmin -beginAlterSchema -dsn advancedDSN
-
Wait for the new standby database to unload from memory. Once unloaded, alter the same connection attributes, such as
PermSize, on the new standby database in its sys.odbc.ini file. The connection attributes are now modified on all hosts. -
Run the
ttCWAdmin -endAlterSchema command to resume Oracle Clusterware management and restart the replication agents on the active and standby databases. The configured RAM policy reverts to always.
ttCWAdmin -endAlterSchema -dsn advancedDSN
-
Suspend all application workload and disconnect all applications on the active database.
-
If desired, you can switch the active and standby databases with the
ttCWAdmin -switch command to restore the active standby pair to the original configuration.
ttCWAdmin -switch -dsn advancedDSN
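The steps above can be sketched as a dry-run script. Everything here is an assumption for illustration: advancedDSN is the placeholder DSN from the examples, PermSize=512 is an arbitrary new value, and the run() wrapper only records each ttCWAdmin command instead of executing it.

```shell
# Dry-run sketch of the connection-attribute change workflow.
# run() records each command rather than executing it; remove the
# wrapper to run the real ttCWAdmin commands on a live cluster.
run() { echo "would run: $*"; echo "$*" >> attr_plan.txt; }

DSN=advancedDSN          # placeholder DSN
NEW_PERMSIZE=512         # hypothetical new PermSize value (MB)

# Step 1: suspend Clusterware management; the replication agents stop
# and the standby/subscriber RAM policy relaxes so they can unload.
run ttCWAdmin -beginAlterSchema -dsn "$DSN"

# Step 2: once the standby and subscribers unload, edit sys.odbc.ini on
# each of those hosts (a local stand-in file is edited here with sed).
printf '[%s]\nPermSize=256\n' "$DSN" > sys.odbc.ini
sed -i "s/^PermSize=.*/PermSize=$NEW_PERMSIZE/" sys.odbc.ini

# Step 3: resume Clusterware management and restart the agents.
run ttCWAdmin -endAlterSchema -dsn "$DSN"

# Steps 4-5: quiesce applications, then switch roles so the former
# active host can be modified the same way.
run ttCWAdmin -switch -dsn "$DSN"

# Steps 6-8: repeat suspend/edit/resume for the new standby host.
run ttCWAdmin -beginAlterSchema -dsn "$DSN"
run ttCWAdmin -endAlterSchema -dsn "$DSN"

# Step 10 (optional): switch back to the original configuration.
run ttCWAdmin -switch -dsn "$DSN"
```

The recorded plan makes the order of operations easy to review before running the real commands on each host.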
Managing the TimesTen Database RAM Policy
By default, the TimesTen database RAM policy is set to always when Oracle Clusterware manages the TimesTen database. However, if you stop Oracle Clusterware management, the TimesTen database RAM policy is set to inUse.
If you no longer use Oracle Clusterware to manage TimesTen, you should set the TimesTen RAM policy to what is appropriate for your environment. Typically, the recommended setting is manual.
See Specifying a RAM Policy in the Oracle TimesTen In-Memory Database Operations Guide.
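As a minimal sketch of that cleanup, the RAM policy could be reset once Clusterware no longer manages the database. The dry-run wrapper only records the command, and advancedDSN is a placeholder DSN; on a real host you would run ttAdmin directly.

```shell
# Hedged sketch: reset the RAM policy after removing Oracle Clusterware
# management. run() records the command instead of executing it.
run() { echo "would run: $*"; echo "$*" >> ram_plan.txt; }

# advancedDSN is a placeholder; substitute your own DSN.
run ttAdmin -ramPolicy manual advancedDSN
```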
Changing the Schema
When using Oracle Clusterware to manage an active standby pair, you can modify the schema by running DDL statements as in a normal replication environment, except that Oracle Clusterware must stop and restart the replication agents when it is necessary to do so.
Thus, when you change the schema, note the following:
-
For those DDL statements on objects that are automatically replicated, you do not need to stop the replication agents. In this case, no further action is required, since these DDL statements are automatically propagated and applied to the standby database and any subscribers. The
DDLReplicationLevel connection attribute controls which DDL statements are replicated. -
For those objects that are a part of the replication scheme, but any DDL statements processed on these objects are not replicated (these objects are listed in Making Other Changes to an Active Standby Pair), run the Oracle Clusterware
ttCWAdmin -beginAlterSchema command on the active database, which suspends Oracle Clusterware management and stops the replication agents on each node in the replication scheme. Then, run the DDL statement on the active database in the replication scheme. Finally, run the Oracle Clusterware ttCWAdmin -endAlterSchema command on the active database to restart all replication agents. Because these objects are a part of the replication scheme, but the DDL statements are not replicated, a duplicate occurs after the
ttCWAdmin -endAlterSchema command to propagate these schema changes to the standby database and any subscribers. This is the only scenario in which a duplicate is used to propagate the schema changes. Follow the instructions described in Facilitating Schema Change for Oracle Clusterware.
-
For those DDL statements on objects that are not automatically replicated and are not part of the replication scheme, run the Oracle Clusterware
ttCWAdmin -beginAlterSchema command on the active database, which suspends Oracle Clusterware management and stops the replication agents on all nodes. Then, you can synchronize all nodes by manually running these DDL statements as indicated in Making DDL Changes in an Active Standby Pair. Finally, run the Oracle Clusterware ttCWAdmin -endAlterSchema command on the active database to restart all replication agents. Follow the instructions described in Facilitating Schema Change for Oracle Clusterware.
Note:
The Making DDL Changes in an Active Standby Pair and Making Other Changes to an Active Standby Pair sections describe which DDL statements are and are not automatically replicated for an active standby pair. These sections also describe what objects are a part of the replication scheme.
Facilitating Schema Change for Oracle Clusterware
Use the ttCWAdmin -beginAlterSchema and
-endAlterSchema commands to facilitate a schema change on the active
and standby databases.
-
The
ttCWAdmin -beginAlterSchema command suspends Oracle Clusterware management and stops the replication agents on both the active and standby databases in preparation for any schema changes. -
After you complete all schema changes, run the
ttCWAdmin -endAlterSchema command. For those objects that are a part of the replication scheme, but any DDL statements processed on these objects are not automatically replicated, a duplicate occurs after the ttCWAdmin -endAlterSchema command to propagate only these schema changes to the standby database and any subscribers. This command registers the altered replication scheme, restarts the replication agents on the active and standby databases, and reinstates Oracle Clusterware control.
Perform the following tasks when altering the schema of the active standby pair when using Oracle Clusterware:
-
Suspend Oracle Clusterware and stop the replication agents on both the active and standby databases.
ttCWAdmin -beginAlterSchema -dsn advancedDSN
-
Make any desired schema changes.
If you create, alter, or drop any objects whose DDL is not replicated, you should also manually create, alter, or drop the same objects on the standby database and subscribers while the replication agents are inactive to ensure that the same objects exist on all databases in the replication scheme. For example, if you create a materialized view on the active database, create the materialized view on the standby and subscriber databases at this time.
-
If the object is not automatically replicated but is a part of the replication scheme (such as a sequence) and you want to include it in the active standby pair replication scheme, alter the active standby pair.
ALTER ACTIVE STANDBY PAIR INCLUDE samplesequence;
-
If the object is a cache group, see Making Schema Changes to Cache Groups for instructions to create, alter, or drop a cache group.
-
Run the
ttCWAdmin -endAlterSchema command to resume Oracle Clusterware management and restart the replication agents on the active and standby databases. If you modified objects that are a part of the replication scheme, but any DDL statements processed on these objects are not automatically replicated, a duplicate occurs after the ttCWAdmin -endAlterSchema command to propagate only these schema changes to the standby database and any subscribers.
ttCWAdmin -endAlterSchema -dsn advancedDSN
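As an illustrative dry run of the tasks above, advancedDSN and samplesequence are placeholders, and the run() wrapper records each command instead of executing it; on a live cluster you would drop the wrapper and run the statements through ttIsql on the active database.

```shell
# Dry-run sketch of a schema change bracketed by -beginAlterSchema and
# -endAlterSchema. run() records each command instead of executing it.
run() { echo "would run: $*"; echo "$*" >> schema_plan.txt; }

# 1. Suspend Clusterware management and stop the replication agents.
run ttCWAdmin -beginAlterSchema -dsn advancedDSN

# 2. Make the schema change on the active database; samplesequence is a
#    hypothetical object added to the replication scheme.
run ttIsql -connStr "DSN=advancedDSN" \
  -e "CREATE SEQUENCE samplesequence; ALTER ACTIVE STANDBY PAIR INCLUDE samplesequence; quit"

# 3. Resume Clusterware management; a duplicate propagates the change
#    to the standby and any subscribers.
run ttCWAdmin -endAlterSchema -dsn advancedDSN
```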
Making Schema Changes to Cache Groups
Use the ttCWAdmin -beginAlterSchema and
-endAlterSchema commands to facilitate the schema changes on cache
groups within Oracle Clusterware.
Add a Cache Group
You can add a cache group on the active database of the active standby pair.
Perform these steps on the active database of the active standby pair.
You can load the cache group at any time after you create the cache group.
Drop a Cache Group
Dropping a cache group within a Clusterware environment requires several steps.
Perform these steps to drop a cache group.
Change an Existing Cache Group
Changing an existing cache group involves dropping and adding the cache group.
To change an existing cache group, first drop the existing cache group as described in Drop a Cache Group. Then add the cache group with the desired changes as described in Add a Cache Group.
Moving a Database to a Different Host
When a cluster is configured for advanced availability, you can use the
ttCWAdmin -relocate command to move a database from the local host to
the next available spare host specified in the MasterHosts attribute in the
cluster.oracle.ini file.
If the database on the local host has the active role, the
-relocate option first reverses the roles. Thus, the newly migrated
active database becomes the standby database and the old standby database becomes the
new active database.
The ttCWAdmin -relocate command is useful for relocating a database if you decide to take the host offline. Ensure that there are no open transactions before you use the command.
If the ttCWAdmin -relocate command requires a role switch, then you can optionally use the -timeout option with the -relocate option to set a timeout for the number of seconds to wait for the role switch.
For example:
ttCWAdmin -relocate -dsn advancedDSN
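When a role switch is involved, the -timeout option can be added. This hypothetical variant assumes a 60-second timeout and the same placeholder DSN, again shown as a dry run that only records the command:

```shell
# Dry-run sketch: relocate with a hypothetical 60-second role-switch
# timeout. run() records the command instead of executing it.
run() { echo "would run: $*"; echo "$*" >> relocate_plan.txt; }

run ttCWAdmin -relocate -dsn advancedDSN -timeout 60
```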
Note:
See ttCWAdmin in the Oracle TimesTen In-Memory Database Reference.
Performing a Rolling Upgrade of Oracle Clusterware Software
You can perform a rolling upgrade of the Oracle Clusterware software.
See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation.
Upgrading TimesTen When Using Oracle Clusterware
To upgrade TimesTen on all hosts when using Oracle Clusterware, see:
-
Performing an Offline TimesTen Upgrade When Using Oracle Clusterware in Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
-
Performing an Online TimesTen Upgrade When Using Oracle Clusterware in Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
Performing Host or Network Maintenance
When you need to perform host or network maintenance, you need to stop the Oracle Clusterware resources and take down one or more of the TimesTen databases in the cluster.
In order to maintain data consistency for the database, you need to ensure that the TimesTen databases included in the active standby pair are brought down properly so that no transactions are lost.
One of the decisions you make when performing maintenance is whether to leave Oracle Clusterware enabled or disabled. If you leave Oracle Clusterware enabled, all Oracle Clusterware and TimesTen processes restart automatically after a system reboot. If you disable Oracle Clusterware, none of these processes restart automatically.
Perform Maintenance on All Hosts in the Cluster Simultaneously
You can perform the following tasks to minimize downtime while performing maintenance on all hosts in the cluster simultaneously.
Note:
If you have an active, a standby, and one or more subscriber databases, you need to run some of these tasks on each host that contains the designated database.
-
Stop Oracle Clusterware and the replication agents by running the Oracle Clusterware
crsctl stop crs command as root or the OS administrator on each of the hosts that contain the active, standby, and subscriber databases. Since the active database is down, all requests are refused until the replication agents are restarted.
crsctl stop crs
The
crsctl stop crs command temporarily changes the RAM policy for the active, standby, and all subscriber databases to inUse with RamGrace, with a grace period of 60 seconds, so that TimesTen can unload these databases. -
Optionally, you can prevent the automatic startup of Oracle Clusterware and TimesTen when the server boots by running the Oracle Clusterware
crsctl disable crs command as root or the OS administrator on all hosts in the cluster:
crsctl disable crs
-
Disconnect any application connections and wait for the active, standby, and subscriber databases to unload from memory.
-
To gracefully shutdown each TimesTen database in the active standby pair, run the following command on each of the hosts that contain the active, standby, and subscriber databases:
ttDaemonAdmin -stop
Note:
Because the TimesTen main daemon process manages all databases under the same TimesTen installation, be sure to disconnect from all databases before running this command.
-
Perform maintenance on the hosts that contain the active, standby, and subscriber databases.
-
After the maintenance is complete, either:
-
If you did not disable the automatic startup of Oracle Clusterware and TimesTen, reboot all hosts, then wait until the Oracle Clusterware and TimesTen processes are running (which can take several minutes).
-
If you disabled the automatic startup of Oracle Clusterware and TimesTen, or if you are not rebooting the hosts after maintenance, perform the following tasks on each host in the cluster:
-
Start the TimesTen database by running the following command:
ttDaemonAdmin -start
-
Enable the automatic startup of Oracle Clusterware when the server boots by running
crsctl enable crs as root or the OS administrator:
crsctl enable crs
-
Start Oracle Clusterware on the local server by running
crsctl start crs as root or the OS administrator. Wait until all of the Oracle Clusterware resources come up before continuing to the next step.
crsctl start crs
-
Once everything is up, you can reconnect your applications and the active database starts to replicate all updates to the standby and subscriber databases. The configured RAM policy reverts to always.
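The full-outage sequence above can be sketched per host. The host names, the ssh layout, and the run() recorder are all illustrative assumptions; on a real cluster you would run the crsctl commands as root and the ttDaemonAdmin commands as the instance administrator on each host directly.

```shell
# Dry-run sketch of full-cluster maintenance. run() records commands;
# host1..host3 stand in for the active, standby, and subscriber hosts.
run() { echo "would run: $*"; echo "$*" >> outage_plan.txt; }
HOSTS="host1 host2 host3"

for h in $HOSTS; do
  run ssh root@$h crsctl stop crs        # stop Clusterware + agents
  run ssh root@$h crsctl disable crs     # optional: no auto-restart
done
# ...disconnect applications, wait for the databases to unload...
for h in $HOSTS; do
  run ssh instanceadmin@$h ttDaemonAdmin -stop
done

# ...perform the maintenance on every host...

for h in $HOSTS; do
  run ssh instanceadmin@$h ttDaemonAdmin -start
  run ssh root@$h crsctl enable crs      # re-enable auto-restart
  run ssh root@$h crsctl start crs       # bring Clusterware back up
done
```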
Perform Maintenance While Still Accepting Requests
You can minimize downtime while performing maintenance on all hosts in the cluster by maintaining the standby and subscriber hosts first, while the active database continues to accept requests.
Note:
If you have an active, a standby, and one or more subscriber databases, you need to run some of these tasks on each host that contains the designated database.
-
Stop Oracle Clusterware and the replication agents by running the Oracle Clusterware
crsctl stop crs command as root or the OS administrator on each of the hosts that contain the standby and subscriber databases. The active database continues to accept requests and updates, but any changes are not propagated to the standby database and any subscribers until the replication agents are restarted.
crsctl stop crs
The
crsctl stop crs command also temporarily changes the RAM policy for the standby and all subscriber databases to inUse with RamGrace, with a grace period of 60 seconds, so that TimesTen can unload these databases. -
Optionally, you can prevent the automatic startup of Oracle Clusterware and TimesTen when the server boots by running the Oracle Clusterware
crsctl disable crs command as root or the OS administrator on each of the hosts that contain the standby and subscriber databases.
crsctl disable crs
-
Disconnect any application connections and wait for the standby and subscriber databases to unload from memory.
-
To gracefully shutdown a TimesTen database, run the following command on each of the hosts that contain the standby and subscriber databases:
ttDaemonAdmin -stop
Note:
Because the TimesTen main daemon process manages all databases under the same TimesTen installation, be sure to disconnect from all databases before running this command.
-
Perform maintenance on the hosts that contain the standby and subscriber databases.
-
After the maintenance is complete, either:
-
If you did not disable the automatic startup of Oracle Clusterware and TimesTen, reboot all hosts, then wait until the Oracle Clusterware and TimesTen processes are running (which can take several minutes).
-
If you disabled the automatic startup of Oracle Clusterware and TimesTen, or if you are not rebooting the hosts after maintenance, perform the following tasks on each host in the cluster:
-
Start the TimesTen database by running the following command:
ttDaemonAdmin -start
-
Enable the automatic startup of Oracle Clusterware when the server boots by running
crsctl enable crs as root or the OS administrator:
crsctl enable crs
-
Start Oracle Clusterware on the local server by running
crsctl start crs as root or the OS administrator. Wait until all of the Oracle Clusterware resources come up before continuing to the next step.
crsctl start crs
-
Once everything is up, the active replicates all updates to the standby and subscriber databases.
-
-
Switch the active and standby databases with the
ttCWAdmin -switch command so you can perform the same maintenance on the host with the active database.
ttCWAdmin -switch -dsn advancedDSN
Note:
See Reversing the Roles of the Master Databases for more details on the
ttCWAdmin -switch command. -
Stop Oracle Clusterware and the replication agents by running the Oracle Clusterware
crsctl stop crs command as root or the OS administrator on the host with the new standby database. The new active database continues to accept requests and updates, but any changes are not propagated to the new standby database and any subscribers until the replication agents are restarted.
crsctl stop crs
-
Disconnect any application connections and wait for the standby and subscriber databases to unload from memory.
-
To gracefully shutdown the TimesTen database, run the following command on the host that contains the new standby database:
ttDaemonAdmin -stop
-
Perform maintenance on the host that contains the new standby database. Now the maintenance has been performed on all hosts in the cluster.
-
After the maintenance is complete, either:
-
If you did not disable the automatic startup of Oracle Clusterware and TimesTen, reboot all hosts, then wait until the Oracle Clusterware and TimesTen processes are running (which can take several minutes).
-
If you disabled the automatic startup of Oracle Clusterware and TimesTen, or if you are not rebooting the hosts after maintenance, perform the following tasks on each host in the cluster:
-
Start the TimesTen database by running the following command:
ttDaemonAdmin -start
-
Enable the automatic startup of Oracle Clusterware when the server boots by running
crsctl enable crs as root or the OS administrator:
crsctl enable crs
-
Start Oracle Clusterware on the local server by running
crsctl start crs as root or the OS administrator. Wait until all of the Oracle Clusterware resources come up before continuing to the next step.
crsctl start crs
-
Once everything is up, the active database replicates all updates to the standby and subscriber databases. The RAM policy reverts to always.
-
-
Switch back to the original configuration for the active and standby roles for the active standby pair with the
ttCWAdmin -switch command.
ttCWAdmin -switch -dsn advancedDSN
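Condensed as a dry run, the rolling sequence alternates maintenance and role switches. The host names, DSN, ssh layout, and the run() recorder are illustrative assumptions; on a real cluster you would run each command on its host directly.

```shell
# Dry-run sketch of rolling maintenance: standby/subscriber hosts
# first, then a role switch so the former active host can follow.
run() { echo "would run: $*"; echo "$*" >> rolling_plan.txt; }
DSN=advancedDSN                      # placeholder DSN

for h in standby-host sub-host; do   # active keeps serving requests
  run ssh root@$h crsctl stop crs
  run ssh instanceadmin@$h ttDaemonAdmin -stop
  # ...maintain $h, then restart its TimesTen daemon and Clusterware...
  run ssh instanceadmin@$h ttDaemonAdmin -start
  run ssh root@$h crsctl start crs
done

run ttCWAdmin -switch -dsn "$DSN"    # old active becomes standby

run ssh root@active-host crsctl stop crs
run ssh instanceadmin@active-host ttDaemonAdmin -stop
# ...maintain the former active host, then restart...
run ssh instanceadmin@active-host ttDaemonAdmin -start
run ssh root@active-host crsctl start crs

run ttCWAdmin -switch -dsn "$DSN"    # restore the original roles
```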
Note:
See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation.