6 Upgrades in TimesTen Classic

This chapter describes the process for upgrading to a new release of TimesTen Classic. For information on the upgrade process for TimesTen Scaleout, see "Upgrading a grid" and "Migrating, Backing Up and Restoring Data" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

Ensure you review the installation process in the preceding chapters before completing the upgrade procedures described in this chapter.

Topics include:

  • Overview of release numbers

  • Types of upgrades

  • Offline upgrade: Moving to a different patch or patch set: ttInstanceModify

  • Unloading a database from memory

  • Reloading a database into memory

  • Offline upgrade: Moving to a different patch or patch set: ttBackup

  • Offline upgrade: Moving to a different major release

  • Online upgrade: Using TimesTen replication

Overview of release numbers

TimesTen releases use a five-part release numbering scheme, which is relevant when discussing upgrades. For a given release, a.b.c.d.e:

  • a indicates the first part of the major release.

  • b indicates the second part of the major release.

  • c indicates the patch set.

  • d indicates the patch level within the patch set.

  • e is reserved.

Important considerations:

  • Releases within the same major release (a.b) are binary compatible. If releases are binary compatible, you do not have to recreate the database for the upgrade (or downgrade).

  • Releases from different major releases are not binary compatible. In this case, you must recreate the database. See "Migrating a database" for details.

As an example, for the 18.1.4.1.0 release:

  • The first two numbers of the five-place release number (18.1) indicate the major release.

  • The third number of the five-place release number (4) indicates the patch set. For example, 18.1.4.1.0 is binary compatible with 18.1.3.5.0 because the first two numbers (18 and 1) are the same.

  • The fourth number of the five-place release number (1) indicates the patch level within the patch set. 18.1.4.1.0 is the first patch level within patch set four.

  • The fifth number of the five-place release number (0) is reserved.

Note:

In releases 11.2.1.w.x and 11.2.2.y.z, the first three digits signified the major release. Thus, 11.2.1 is considered a major release as is 11.2.2.
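
The numbering rules above can be sketched as a small shell check. This is an illustrative helper, not a TimesTen utility; the release strings are examples:

```shell
# Sketch: decide whether two release numbers are binary compatible.
# Per the rules above, the major release is the first two numbers (a.b),
# except in 11.2.1.w.x and 11.2.2.y.z, where it is the first three.
major_release() {
  case "$1" in
    11.*) echo "$1" | cut -d. -f1-3 ;;   # e.g. 11.2.1.9.0 -> 11.2.1
    *)    echo "$1" | cut -d. -f1-2 ;;   # e.g. 18.1.4.1.0 -> 18.1
  esac
}

binary_compatible() {
  if [ "$(major_release "$1")" = "$(major_release "$2")" ]; then
    echo yes
  else
    echo no
  fi
}

binary_compatible 18.1.4.1.0 18.1.3.5.0   # yes: same major release (18.1)
binary_compatible 18.1.4.1.0 11.2.2.8.0   # no: different major release
binary_compatible 11.2.1.9.0 11.2.2.8.0   # no: 11.2.1 and 11.2.2 are distinct
```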

Types of upgrades

TimesTen Classic supports two types of upgrades:

  • An offline upgrade requires all applications to disconnect from TimesTen and requires all databases to be unloaded from memory. There are two offline upgrade procedures, depending on whether you are moving to a different patch or patch set within the same major release, or moving to a different major release.

  • An online upgrade involves using a pair of databases that are replicated and then performing an offline upgrade of each database in turn. See "Online upgrade: Using TimesTen replication" for details.

Offline upgrade: Moving to a different patch or patch set: ttInstanceModify

The preferred offline upgrade method for moving to a different patch set or patch level is to create a new installation in a new location and then use the ttInstanceModify utility with the -install option to point the instance to the new installation. This offline upgrade requires the instance administrator to close all databases to user connections, to disconnect all applications from all databases, and to unload all databases from memory.

To perform the upgrade, follow these steps:

  1. Create a new installation in a new location. For example, create the fullinstall_new installation directory. Then unzip the new patch release zip file into that directory. (For example, unzip timesten181410.server.linux8664.zip into the fullinstall_new directory).

    % mkdir fullinstall_new
    % cd fullinstall_new
    % unzip /swdir/TimesTen/ttinstallers/timesten181410.server.linux8664.zip
    [...UNZIP OUTPUT...]
    

    See "TimesTen installations" for detailed information.

  2. Unload all databases. See "Unloading a database from memory" for details.

  3. Stop the TimesTen daemon.

    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 24224, port: 6324) stopped.
    
  4. Modify the instance to point to the new installation. In this example, point the instance to the installation in /swdir/TimesTen/ttinstallations/ttinstalllatest/tt18.1.4.1.0.

    % $TIMESTEN_HOME/bin/ttInstanceModify -install
     /swdir/TimesTen/ttinstallations/ttinstalllatest/tt18.1.4.1.0
    
    Instance Info (UPDATED)
    -----------------------
     
    Name:           ttuserinstance
    Version:        18.1.4.1.0
    Location:       /swdir/TimesTen/ttinstances/ttuserinstance
    Installation:   /swdir/TimesTen/ttinstallations/ttinstalllatest/tt18.1.4.1.0
    Daemon Port:    6324
    Server Port:    6325
     
    **********************************************
     
    NOTE: The ttclasses source code may have changed since the last release.
          Make sure to rebuild the ttclasses library in 
          /swdir/TimesTen/ttinstances/ttuserinstance/ttclasses.
     
    The instance ttuserinstance now points to the installation in 
    /swdir/TimesTen/ttinstallations/ttinstalllatest/tt18.1.4.1.0
    
  5. Restart the daemon.

    % ttDaemonAdmin -start
    TimesTen Daemon (PID: 31202, port: 6324) startup OK.
    
  6. Load the databases. See "Reloading a database into memory" for details.

  7. Optional: Ensure you can connect to the database.

    % ttIsql database1
     
    Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    ...
    Command> SELECT * FROM dual;
    < X >
    1 row found.
    
  8. Optional: Delete the previous patch release installation.

    % chmod -R 750 installation_dir/tt18.1.3.5.0
    % rm -rf installation_dir/tt18.1.3.5.0
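
The steps above can be collected into a dry-run sketch that only prints each TimesTen command in order. The run helper echoes rather than executes, and the installation path and DSN are the ones from the example:

```shell
# Dry-run sketch of the offline upgrade above; nothing is executed.
# NEW_INSTALL and DSN come from the example; adjust for your instance.
NEW_INSTALL=/swdir/TimesTen/ttinstallations/ttinstalllatest/tt18.1.4.1.0
DSN=database1

run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

run ttAdmin -close "$DSN"                        # close (18.1.3.1.0 and later)
run ttAdmin -ramPolicy manual -ramUnload "$DSN"  # unload from memory
run ttDaemonAdmin -stop                          # stop the daemon
run ttInstanceModify -install "$NEW_INSTALL"     # point at the new installation
run ttDaemonAdmin -start                         # restart the daemon
run ttAdmin -ramLoad "$DSN"                      # reload the database
run ttAdmin -open "$DSN"                         # reopen for user connections
```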
    

Unloading a database from memory

Perform the following steps to unload a database from memory.

  1. In release 18.1.3.1.0 and later, close the database. This prevents any future connections to the database. In releases prior to 18.1.3.1.0, ignore this step.

    % ttAdmin -close database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    

    See "Opening and closing the database for user connections" in the Oracle TimesTen In-Memory Database Operations Guide.

  2. If there are connections to the database, disconnect all applications from the database. You can do this manually, or you can instruct TimesTen to perform the disconnects for you. For the latter case, see "Disconnecting from a database" in the Oracle TimesTen In-Memory Database Operations Guide and "ForceDisconnectEnabled" in the Oracle TimesTen In-Memory Database Reference for detailed information.

  3. Ensure the RAM policy is set to either manual or inUse. Then unload the database from memory. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on specifying a RAM policy.

    If the RAM policy is set to always, change it to manual, then unload the database from memory.

    % ttAdmin -ramPolicy manual -ramUnload database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    If the RAM policy is set to manual, unload the database from memory.

    % ttAdmin -ramUnload database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    If the RAM policy is set to inUse and a grace period is set, set the grace period to 0 or wait for the grace period to elapse. TimesTen unloads a database with an inUse RAM policy from memory once all active connections are closed.

    % ttAdmin -ramGrace 0 database1
    
    RAM Residence Policy            : inUse
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    
  4. Run the ttStatus utility to verify that the database has been unloaded from memory and, for release 18.1.3.1.0 and later, the database is closed. See "ttStatus" in Oracle TimesTen In-Memory Database Reference for details.

    % ttStatus
    TimesTen status report as of Mon Jun 29 14:11:19 2020
     
    Daemon pid 24224 port 6324 instance ttuserinstance
    TimesTen server pid 22019 started on port 6325
    ------------------------------------------------------------------------
    Data store /scratch/databases/database1
    Daemon pid 24224 port 6324 instance ttuserinstance
    TimesTen server pid 22019 started on port 6325
    There are no connections to the data store
    Closed to user connections
    RAM residence policy: Manual
    Data store is manually unloaded from RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
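
As a sanity check, a saved ttStatus report can be scanned for the conditions step 4 verifies. This grep-based helper is illustrative and assumes report text shaped like the example above:

```shell
# Sketch: scan a saved ttStatus report for the conditions step 4 checks:
# no connections, closed to user connections, and unloaded from RAM.
# The report text is a trimmed copy of the example output above.
report='Data store /scratch/databases/database1
There are no connections to the data store
Closed to user connections
Data store is manually unloaded from RAM'

ready_for_upgrade() {
  echo "$1" | grep -q 'no connections to the data store' &&
  echo "$1" | grep -q 'Closed to user connections' &&
  echo "$1" | grep -q 'unloaded from RAM'
}

if ready_for_upgrade "$report"; then
  echo "database is unloaded and closed"
fi
```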
    

Reloading a database into memory

Follow these steps to load a database into memory.

  1. Load the database into memory. This example sets the RAM policy to manual and then loads the database1 database into memory.

    Set the RAM policy to manual.

    % ttAdmin -ramPolicy manual database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    Load the database1 database into memory.

    % ttAdmin -ramLoad database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on the RAM policy.

  2. In release 18.1.3.1.0 and later, open the database for user connections. In releases prior to 18.1.3.1.0, ignore this step.

    % ttAdmin -open database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Open
    

    See "Opening and closing the database for user connections" in the Oracle TimesTen In-Memory Database Operations Guide.

Offline upgrade: Moving to a different patch or patch set: ttBackup

You can run the ttBackup and ttRestore utilities to move to a new patch release of TimesTen Classic, although this is not the preferred method.

Perform these steps for each database.

On the old release:

  1. Disconnect all applications from the database. You can do this manually or you can instruct TimesTen to perform the disconnects for you. For the latter case, see "Disconnecting from a database" in the Oracle TimesTen In-Memory Database Operations Guide and "ForceDisconnectEnabled" in the Oracle TimesTen In-Memory Database Reference for detailed information.

  2. Back up the database. In this example, back up the database1_1813 database for release 18.1.3.5.0.

    % ttBackup -dir /tmp/dump/backup_181350 -fname database1_1813 database1_1813
    Backup started ...
    Backup complete
    
  3. Unload the database from memory. This example assumes a RAM policy of manual. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on the RAM policy.

    % ttAdmin -ramUnload database1_1813
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    
  4. Stop the TimesTen daemon.

    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 2749, port: 6666) stopped.
    

For the new release:

  1. Create a new installation in a new location. For example, create the fullinstall_new installation directory. Then unzip the patch release zip file into that directory. (For example, unzip timesten181410.server.linux8664.zip into the fullinstall_new directory). See "TimesTen installations" and "Creating an installation on Linux/UNIX" for detailed information.

    % mkdir fullinstall_new
    % cd fullinstall_new
    % unzip /swdir/TimesTen/ttinstallers/timesten181410.server.linux8664.zip
    [...UNZIP OUTPUT...]
    
  2. Run the ttInstanceCreate utility to create the instance. This example runs the ttInstanceCreate utility interactively. See "ttInstanceCreate" in the Oracle TimesTen In-Memory Database Reference and "Creating an instance on Linux/UNIX: Basics" in this book for details.

    User input is shown in bold.

    % installation_dir/tt18.1.4.1.0/bin/ttInstanceCreate
     
    NOTE: Each TimesTen instance is identified by a unique name.
          The instance name must be a non-null alphanumeric string, not longer
          than 255 characters.
     
    Please choose an instance name for this installation? [ tt181 ] inst1814_new
    Instance name will be 'inst1814_new'.
    Is this correct? [ yes ]
    Where would you like to install the inst1814_new instance of TimesTen? 
    [ /home/ttuser ] /scratch/ttuser
    Creating instance in /scratch/ttuser/inst1814_new ...
    INFO: Mapping files from the installation to 
    /scratch/ttuser/inst1814_new/install
    TCP port 6624 is in use!
     
    NOTE: If you are configuring TimesTen for use with Oracle Clusterware, the
          daemon port number must be the same across all TimesTen installations
          managed within the same Oracle Clusterware cluster.
     
    ** The default daemon port (6624) is already in use or within a range of 8
    ports of an existing TimesTen instance. You must assign a unique daemon port
    number for this instance. This installer will not allow you to assign another
    instance a port number within a range of 8 ports of the port you assign below.
     
    NOTE: All installations that replicate to each other must use the same daemon
          port number that is set at installation time. The daemon port number can
          be verified by running 'ttVersion'.
     
    Please enter a unique port number for the TimesTen daemon (<CR>=list)? [ ] 6324
     
    In order to use the 'TimesTen Application-Tier Database Cache' feature in any 
    databases
    created within this installation, you must set a value for the TNS_ADMIN
    environment variable. It can be left blank, and a value can be supplied later
    using <install_dir>/bin/ttInstanceModify.
     
    Please enter a value for TNS_ADMIN (s=skip)? [  ] s
    What is the TCP/IP port number that you want the TimesTen Server to listen on?
     [ 6325 ]
     
    Would you like to use TimesTen Replication with Oracle Clusterware? [ no ]
     
    NOTE: The TimesTen daemon startup/shutdown scripts have not been installed.
     
    The startup script is located here :
            '/scratch/ttuser/inst1814_new/startup/tt_inst1814_new'
     
    Run the 'setuproot' script :
            /scratch/ttuser/inst1814_new/bin/setuproot -install
    This will move the TimesTen startup script into its appropriate location.
     
    The 18.1 Release Notes are located here :
      '/scratch/ttuser/181410/tt18.1.4.1.0/README.html'
     
    Starting the daemon ...
    TimesTen Daemon (PID: 3253, port: 6324) startup OK.
    
  3. Restore the database. Ensure you source the environment variables, make all necessary changes to your connection attributes in the sys.odbc.ini (or the odbc.ini) file, and start the daemon (if not already started) prior to restoring the database.

    % ttRestore -dir /tmp/dump/backup_181350 -fname database1_1813 database1_1814
    Restore started ...
    Restore complete
    

Once your databases are correctly configured and fully operational, you can optionally remove the backup file (in this example, /tmp/dump/backup_181350/database1_1813).
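
The old-release and new-release command sequence above can be summarized in a dry-run sketch. The run helper echoes instead of executing, and the directory and DSN names are the ones from the example:

```shell
# Dry-run sketch of the ttBackup/ttRestore patch-release move; nothing
# is executed. Names follow the example above.
BACKUP_DIR=/tmp/dump/backup_181350
OLD_DSN=database1_1813
NEW_DSN=database1_1814

run() { echo "+ $*"; }

# On the old release:
run ttBackup -dir "$BACKUP_DIR" -fname "$OLD_DSN" "$OLD_DSN"
run ttAdmin -ramUnload "$OLD_DSN"
run ttDaemonAdmin -stop
# On the new release (after creating the installation and instance):
run ttRestore -dir "$BACKUP_DIR" -fname "$OLD_DSN" "$NEW_DSN"
```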

Offline upgrade: Moving to a different major release

You can have multiple major releases installed on a host at the same time. However, databases created by one major release cannot be accessed directly by applications of a different major release. To migrate data between major releases, for example from TimesTen 11.2.2 to 18.1, you must export the data with the ttMigrate utility of the old release and import it with the ttMigrate utility of the new release.

Before migrating a database from one major release to another, ensure you back up the database in the old release. See "ttBackup" and "ttRestore" in Oracle TimesTen In-Memory Database Reference and "Backing up and restoring a database" in this book for details.

Follow these steps to perform the upgrade:

For the old release:

  1. Disconnect all applications from your database. You can do this manually or you can instruct TimesTen to perform the disconnects for you. For the latter case, see "Disconnecting from a database" in the Oracle TimesTen In-Memory Database Operations Guide and "ForceDisconnectEnabled" in the Oracle TimesTen In-Memory Database Reference for detailed information.

  2. Save a copy of your database with the ttMigrate utility. In this example, several database objects are saved for the database1 database.

    % ttMigrate -c database1 /tmp/database1.data
    Saving user PUBLIC
    User successfully saved.
     
    Saving table TTUSER.COUNTRIES
      Saving foreign key constraint COUNTR_REG_FK
      Saving rows...
      25/25 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.DEPARTMENTS
      Saving foreign key constraint DEPT_LOC_FK
      Saving rows...
      27/27 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.EMPLOYEES
      Saving index TTUSER.TTUNIQUE_0
      Saving foreign key constraint EMP_DEPT_FK
      Saving foreign key constraint EMP_JOB_FK
      Saving rows...
      107/107 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.JOBS
      Saving rows...
      19/19 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.JOB_HISTORY
      Saving foreign key constraint JHIST_DEPT_FK
      Saving foreign key constraint JHIST_EMP_FK
      Saving foreign key constraint JHIST_JOB_FK
      Saving rows...
      10/10 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.LOCATIONS
      Saving foreign key constraint LOC_C_ID_FK
      Saving rows...
      23/23 rows saved.
    Table successfully saved.
     
    Saving table TTUSER.REGIONS
      Saving rows...
      4/4 rows saved.
    Table successfully saved.
     
    Saving view TTUSER.EMP_DETAILS_VIEW
    View successfully saved.
     
    Saving sequence TTUSER.DEPARTMENTS_SEQ
    Sequence successfully saved.
     
    Saving sequence TTUSER.EMPLOYEES_SEQ
    Sequence successfully saved.
     
    Saving sequence TTUSER.LOCATIONS_SEQ
    Sequence successfully saved.
    

    For more information about the ttMigrate utility, see "ttMigrate" in the Oracle TimesTen In-Memory Database Reference.

  3. Unload the database from memory. See "Unloading a database from memory" for details.

  4. Stop the TimesTen daemon.

    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 30841, port: 54496) stopped.
    
  5. Copy the migrate object files to a file system that is accessible by the instance in the new release.

For the new release:

  1. Create a new installation in a new location. For example, create the fullinstall_new installation directory. Then unzip the patch release zip file into that directory. (For example, unzip timesten181410.server.linux8664.zip into the fullinstall_new directory). See "TimesTen installations" and "Creating an installation on Linux/UNIX" for detailed information.

    % mkdir fullinstall_new
    % cd fullinstall_new
    % unzip /swdir/TimesTen/ttinstallers/timesten181410.server.linux8664.zip
    [...UNZIP OUTPUT...]
    
  2. Run the ttInstanceCreate utility to create the instance. This example runs the ttInstanceCreate utility interactively. See "ttInstanceCreate" in the Oracle TimesTen In-Memory Database Reference and "Creating an instance on Linux/UNIX: Basics" in this book for details.

    User input is shown in bold.

    % installation_dir/tt18.1.4.1.0/bin/ttInstanceCreate
     
    NOTE: Each TimesTen instance is identified by a unique name.
          The instance name must be a non-null alphanumeric string, not longer
          than 255 characters.
     
    Please choose an instance name for this installation? [ tt181 ] inst1814_new
    Instance name will be 'inst1814_new'.
    Is this correct? [ yes ]
    Where would you like to install the inst1814_new instance of TimesTen? 
    [ /home/ttuser ] /scratch/ttuser
    Creating instance in /scratch/ttuser/inst1814_new ...
    INFO: Mapping files from the installation to 
    /scratch/ttuser/inst1814_new/install
    TCP port 6624 is in use!
     
    NOTE: If you are configuring TimesTen for use with Oracle Clusterware, the
          daemon port number must be the same across all TimesTen installations
          managed within the same Oracle Clusterware cluster.
     
    ** The default daemon port (6624) is already in use or within a range of 8
    ports of an existing TimesTen instance. You must assign a unique daemon port
    number for this instance. This installer will not allow you to assign another
    instance a port number within a range of 8 ports of the port you assign below.
     
    NOTE: All installations that replicate to each other must use the same daemon
          port number that is set at installation time. The daemon port number can
          be verified by running 'ttVersion'.
     
    Please enter a unique port number for the TimesTen daemon (<CR>=list)? [ ] 6324
     
    In order to use the 'TimesTen Application-Tier Database Cache' feature in any 
    databases
    created within this installation, you must set a value for the TNS_ADMIN
    environment variable. It can be left blank, and a value can be supplied later
    using <install_dir>/bin/ttInstanceModify.
     
    Please enter a value for TNS_ADMIN (s=skip)? [  ] s
    What is the TCP/IP port number that you want the TimesTen Server to listen on?
     [ 6325 ]
     
    Would you like to use TimesTen Replication with Oracle Clusterware? [ no ]
     
    NOTE: The TimesTen daemon startup/shutdown scripts have not been installed.
     
    The startup script is located here :
            '/scratch/ttuser/inst1814_new/startup/tt_inst1814_new'
     
    Run the 'setuproot' script :
            /scratch/ttuser/inst1814_new/bin/setuproot -install
    This will move the TimesTen startup script into its appropriate location.
     
    The 18.1 Release Notes are located here :
      '/scratch/ttuser/181410/tt18.1.4.1.0/README.html'
     
    Starting the daemon ...
    TimesTen Daemon (PID: 3253, port: 6324) startup OK.
    
  3. From the instance of the new release, create a database. Ensure you have sourced the environment variables, made all necessary changes to your connection attributes in the sys.odbc.ini (or the odbc.ini) file, and started the daemon (if not already started).

    To create the database:

    % ttIsql -connstr "DSN=new_database1;AutoCreate=1" -e "quit"
    Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
     
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "DSN=new_database1;AutoCreate=1";
    Connection successful: DSN=new_database1;
    UID=instadmin;DataStore=/scratch/databases/new_database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;
    PermSize=128;
    (Default setting AutoCommit=1)
     
    quit;
    Disconnecting...
    Done.
    

    The database will be empty at this point.

  4. From the instance of the new release, run the ttMigrate utility with the -r and -relaxedUpgrade options to restore the backed up database to the new release. For example:

    % ttMigrate -r -relaxedUpgrade new_database1 /tmp/database1.data
    
    Restoring table TTUSER.JOBS
      Restoring rows...
      19/19 rows restored.
    Table successfully restored.
     
    Restoring table TTUSER.REGIONS
      Restoring rows...
      4/4 rows restored.
    Table successfully restored.
     
    Restoring table TTUSER.COUNTRIES
      Restoring rows...
      25/25 rows restored.
      Restoring foreign key dependency COUNTR_REG_FK on TTUSER.REGIONS
    Table successfully restored.
     
    Restoring table TTUSER.LOCATIONS
      Restoring rows...
      23/23 rows restored.
      Restoring foreign key dependency LOC_C_ID_FK on TTUSER.COUNTRIES
    Table successfully restored.
     
    Restoring table TTUSER.DEPARTMENTS
      Restoring rows...
      27/27 rows restored.
      Restoring foreign key dependency DEPT_LOC_FK on TTUSER.LOCATIONS
    Table successfully restored.
     
    Restoring table TTUSER.EMPLOYEES
      Restoring rows...
      107/107 rows restored.
      Restoring foreign key dependency EMP_DEPT_FK on TTUSER.DEPARTMENTS
      Restoring foreign key dependency EMP_JOB_FK on TTUSER.JOBS
    Table successfully restored.
     
    Restoring table TTUSER.JOB_HISTORY
      Restoring rows...
      10/10 rows restored.
      Restoring foreign key dependency JHIST_DEPT_FK on TTUSER.DEPARTMENTS
      Restoring foreign key dependency JHIST_EMP_FK on TTUSER.EMPLOYEES
      Restoring foreign key dependency JHIST_JOB_FK on TTUSER.JOBS
    Table successfully restored.
     
    Restoring view TTUSER.EMP_DETAILS_VIEW
    View successfully restored.
     
    Restoring sequence TTUSER.DEPARTMENTS_SEQ
    Sequence successfully restored.
     
    Restoring sequence TTUSER.EMPLOYEES_SEQ
    Sequence successfully restored.
     
    Restoring sequence TTUSER.LOCATIONS_SEQ
    Sequence successfully restored.
    

Once the database is operational in the new release, create a backup of this database to have a valid restoration point for your database. Once you have created a backup of your database, you may delete the ttMigrate copy of your database (in this example, /tmp/database1.data). Optionally, for the old release, you can remove the instance and delete the installation.
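
One way to sanity-check a migration like the one above is to compare the row counts in the save and restore logs. This awk-based sketch is illustrative; the log excerpts are trimmed copies of the example output:

```shell
# Sketch: total the "N/N rows saved." and "N/N rows restored." lines
# from ttMigrate output and confirm they match. The logs are trimmed
# copies of the example output above.
saved='25/25 rows saved.
27/27 rows saved.
107/107 rows saved.
19/19 rows saved.
10/10 rows saved.
23/23 rows saved.
4/4 rows saved.'

restored='19/19 rows restored.
4/4 rows restored.
25/25 rows restored.
23/23 rows restored.
27/27 rows restored.
107/107 rows restored.
10/10 rows restored.'

total() { echo "$1" | awk -F/ '{s += $1} END {print s}'; }

echo "saved $(total "$saved") rows, restored $(total "$restored") rows"
```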

Online upgrade: Using TimesTen replication

When upgrading to a new release of TimesTen Classic, you may have a mission-critical database that must remain continuously available to your applications. You can use TimesTen replication to keep two copies of a database synchronized, even when the databases are from different releases of TimesTen, allowing your applications to stay connected to one copy of the database while the instance for the other database is being upgraded. When the upgrade is finished, any updates that have been made on the active database are transmitted immediately to the database in the upgraded instance, and your applications can then be switched with no data loss and no downtime. See "Performing an online upgrade with classic replication" for information.

The online upgrade process supports only updates to user tables during the upgrade. The tables to be replicated must have a PRIMARY KEY or a unique index on non-nullable columns. Data definition changes such as CREATE TABLE or CREATE INDEX are not replicated, except for an active standby pair with DDLReplicationLevel set to 2, in which case CREATE TABLE and CREATE INDEX statements are replicated.

Because two copies of the database (or two copies of each database, if there is more than one) are required during the upgrade, performing the upgrade on a single host requires twice the memory and disk space normally needed.

Notes:

  • Online major upgrades for active standby pairs with cache groups are only supported for read-only cache groups.

  • Online major upgrades for active standby pairs that are managed by Oracle Clusterware are not supported.

Performing an online upgrade with classic replication

This section describes how to use the TimesTen replication feature to perform online upgrades for applications that require continuous data availability.

This procedure is for classic replication in a unidirectional, bidirectional, or multidirectional scenario.

Typically, applications that require high availability of their data use TimesTen replication to keep at least one extra copy of their databases up to date. An online upgrade works by keeping one of these two copies available to the application while the other is being upgraded. The procedures described in this section assume that you have a bidirectional replication scheme configured and running for two databases, as described in "Unidirectional or bidirectional replication" in the Oracle TimesTen In-Memory Database Replication Guide.

The following sections describe how to perform an online upgrade with replication.

Requirements

To perform online upgrades with replication, replication must be configured to use static ports. See "Port assignments" in Oracle TimesTen In-Memory Database Replication Guide for information.

Additional disk space must be allocated to hold a backup copy of the database made by the ttMigrate utility. The size of the backup copy is typically about the same as the in-use size of the database. You can determine this size by querying the v$monitor view using ttIsql:

Command> SELECT perm_in_use_size FROM v$monitor;
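
The reported in-use size can then be compared against free space on the backup file system. The numbers below are illustrative, and both values are assumed to be in the same units:

```shell
# Sketch: is there room for a ttMigrate backup of roughly the in-use size?
# Both values are illustrative and assumed to be in the same units.
perm_in_use=524288    # value returned by the SELECT above
free_space=2097152    # e.g. the "avail" column of df -k for the backup dir

if [ "$free_space" -ge "$perm_in_use" ]; then
  echo "enough space for the backup copy"
else
  echo "short by $((perm_in_use - free_space))"
fi
```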

Upgrade steps

The following steps illustrate how to perform an online upgrade while replication is running. The upgrade host is the host on which the database upgrade is being performed, and the active host is the host containing the database to which the application remains connected.

Note:

The following steps are for a standard upgrade. Upgrading from a TimesTen 11.2.1 database that has the connection attribute ReplicationApplyOrdering set to 0, or from a TimesTen 11.2.1 or later database that has ReplicationParallelism set to a value less than 2, requires that you re-create the database, even if the releases are from the same major release.
1. Upgrade host: Configure replication to replicate to the active host using static ports.
   Active host: Configure replication to replicate to the upgrade host using static ports.
2. Active host: Connect all applications to the active database, if they are not connected.
3. Upgrade host: Disconnect all applications from the database that will be upgraded.
4. Active host: Set replication to the upgrade host to the PAUSE state.
5. Upgrade host: Wait for updates to propagate to the active host.
6. Upgrade host: Stop replication.
7. Upgrade host: Back up the database with ttMigrate -c and run ttDestroy to destroy the database.
8. Upgrade host: Stop the TimesTen daemon for the old release.
9. Upgrade host: Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
10. Upgrade host: Create a DSN for the post-upgrade database for the new release. Adjust parallelism options for the DSN.
11. Upgrade host: Restore the database from the backup with ttMigrate -r.
12. Upgrade host: Clear the replication bookmark and logs using ttRepAdmin -receiver -reset and by setting replication to the active host to the stop state and then the start state.
13. Upgrade host: Start replication.
14. Active host: Set replication to the upgrade host to the start state, ensuring that the accumulated updates propagate once replication is restarted.
15. Active host: Start replication.
16. Active host: Wait for all of the updates to propagate to the upgrade host.
17. Upgrade host: Reconnect all applications to the post-upgrade database.
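For step 10, the DSN for the post-upgrade database is defined in the sys.odbc.ini file of the new instance. The fragment below is only a hypothetical example: the database name, data store path, and attribute values (including the parallelism settings) are illustrative, not prescribed.

```ini
[upgrade]
DataStore=/disk1/data/upgrade
DatabaseCharacterSet=AL32UTF8
PermSize=1024
ReplicationApplyOrdering=0
ReplicationParallelism=4
```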

After the above procedures are completed on the upgrade host, the active host can be upgraded using the same steps.

Online upgrade example

This section describes how to perform an online upgrade in a scenario with two bidirectionally replicated databases.

In the following discussion, the two hosts are referred to as the upgrade host, on which the instance (with its databases) is being upgraded, and the active host, which remains operational and connected to the application for the duration of the upgrade. After the procedure is completed, the same steps can be followed to upgrade the active host. However, you may prefer to delay conversion of the active host to first test the upgraded instance.

The upgrade host in this example consists of the database upgrade on the server upgradehost. The active host consists of the database active on the server activehost.

Follow these steps in the order they are presented:

1. Upgrade host: Use ttIsql to alter the replication scheme repscheme, setting static replication port numbers so that the databases can communicate across releases:

   Command> call ttRepStop;

   Command> ALTER REPLICATION repscheme ALTER STORE upgrade ON upgradehost SET PORT 40000 ALTER STORE active ON activehost SET PORT 40001;

   Command> call ttRepStart;

   Active host: Use ttIsql to alter the replication scheme repscheme in the same way, setting static replication port numbers so that the databases can communicate across releases:

   Command> call ttRepStop;

   Command> ALTER REPLICATION repscheme ALTER STORE upgrade ON upgradehost SET PORT 40000 ALTER STORE active ON activehost SET PORT 40001;

   Command> call ttRepStart;

2. Upgrade host: Disconnect all production applications connected to the database. Any workload being run on the upgrade host must start running on the active host instead.

   Active host: Use the ttRepAdmin utility to pause replication from the database active to the database upgrade:

   ttRepAdmin -receiver -name upgrade
    -state pause active

   This command temporarily stops the replication of updates from the database active to the database upgrade, but it retains any updates made to active in the database transaction log files. The updates made to active during the upgrade procedure are applied later, when upgrade is brought back up.

   See "Set the replication state of subscribers" in Oracle TimesTen In-Memory Database Replication Guide for details.

3. Upgrade host: Wait for all replication updates to be sent to the database active. You can verify that all updates have been sent by applying a recognizable update to a table reserved for that purpose on the database upgrade. When the update appears in the database active, you know that all previous updates have been sent.

   For example, call the ttRepSubscriberWait built-in procedure. You should expect a value of < 00 > to be returned, indicating a clean response rather than a time out. (On a time out, ttRepSubscriberWait returns a value of 01.)

   Command> call ttRepSubscriberWait (,,,,60);
   < 00 >
   1 row found.

   See "ttRepSubscriberWait" in the Oracle TimesTen In-Memory Database Reference for information.
4. Upgrade host: Stop the replication agent with ttAdmin:

   ttAdmin -repStop upgrade

   From this point on, no updates are sent to the database active.

   Active host: Stop the replication agent with ttAdmin:

   ttAdmin -repStop active

   From this point on, no updates are sent to the database upgrade.

   See "Starting and stopping the replication agents" in Oracle TimesTen In-Memory Database Replication Guide for details.

5. Upgrade host: Use ttMigrate to back up the database upgrade. If the database is very large, this step could take a significant amount of time. If sufficient disk space is free on the /backup file system, use the following ttMigrate command:

   ttMigrate -c upgrade /backup/upgrade.dat
6. Upgrade host: If the ttMigrate command is successful, destroy the database upgrade:

   ttDestroy upgrade

   Active host: Restart the replication agent on the database active:

   ttAdmin -repStart active
7. Upgrade host: Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

   Active host: Resume replication from active to upgrade by setting the replication state to start:

   ttRepAdmin -receiver -name upgrade
    -state start active
8. Upgrade host: Use ttMigrate to load the backup created in step 5 into the database upgrade for the new release:

   ttMigrate -r upgrade /backup/upgrade.dat
   ttAdmin -ramLoad upgrade

   Note: In this step, you must use the ttMigrate utility supplied with the new release to which you are upgrading.
9. Upgrade host: Use ttRepAdmin to clear the replication bookmark and logs by resetting the receiver state for the database active, then setting replication to the stop state and then the start state:

   ttRepAdmin -receiver -name active
      -reset upgrade
   ttRepAdmin -receiver -name active
      -state stop upgrade
   sleep 10
   ttRepAdmin -receiver -name active
      -state start upgrade
   sleep 10

   Note: The sleep commands ensure that each state change takes effect; a state change can take up to 10 seconds, depending on system resources and the operating system.
10. Upgrade host: Use ttAdmin to start the replication agent on the new database upgrade and to begin sending updates to the database active:

    ttAdmin -repStart upgrade
11. Upgrade host: Verify that the database upgrade is receiving updates from the database active. You can verify that updates are sent by applying a recognizable update to a table reserved for that purpose in the database active. When the update appears in upgrade, you know that replication is operational. If the applications are still running on the database active, let them continue until the database upgrade has been successfully migrated and you have verified that the updates are being replicated correctly from active to upgrade.
12. Active host: Once you are sure that updates are replicated correctly, you can disconnect all of the applications from the database active and reconnect them to the database upgrade. After verifying that the last of the updates from active are replicated to upgrade, the instance with active is ready to be upgraded.

    Note: You may choose to delay upgrading the instance with active to the new release until sufficient testing has been performed with the database upgrade in the new release.
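Steps 3 and 11 above verify propagation by checking ttRepSubscriberWait output for the clean < 00 > response. That check is easy to script. The sketch below is an illustration only: the helper names, the retry count, and the exact ttIsql invocation are assumptions, not product syntax.

```shell
# replies_ok: succeed when ttRepSubscriberWait output contains the
# clean "< 00 >" marker; a value of 01 indicates a time out.
replies_ok() {
    case "$1" in
        *"< 00 >"*) return 0 ;;
        *)          return 1 ;;
    esac
}

# wait_for_propagation: retry the built-in a few times on the given DSN.
# The retry count of 5 and the 60-second wait are example values.
wait_for_propagation() {
    dsn=$1
    attempts=0
    while [ "$attempts" -lt 5 ]; do
        out=$(ttIsql -e "call ttRepSubscriberWait (,,,,60); quit;" "$dsn" 2>&1)
        if replies_ok "$out"; then
            return 0
        fi
        attempts=$((attempts + 1))
    done
    return 1
}
```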


Performing an upgrade with active standby pair replication

Active standby pair replication provides high availability of your data to your applications. With active standby pairs, you can perform an online upgrade to maintain continuous availability of your data, unless you are upgrading to a new major release of TimesTen in a configuration that also uses asynchronous writethrough cache groups. This section describes the following procedures:

Note:

Only asynchronous writethrough or read-only cache groups are supported with active standby pairs.

Online upgrades for an active standby pair with no cache groups

This section includes the following topics for online upgrades in a scenario with active standby pairs and no cache groups:

Also see "Performing an online upgrade with classic replication" for an overview, limitations, and requirements.

Online patch upgrade for standby master and subscriber

To perform an online upgrade to a new patch release for the standby master database and subscriber databases, complete the following tasks on each database. For this procedure, assume there are no cache groups.

  1. Stop the replication agent on the database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master2 standby database:

    ttAdmin -repStop master2
    
  2. Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  3. Restart the replication agent using the ttRepStart built-in procedure or the ttAdmin utility:

    ttAdmin -repStart master2
    

Online patch upgrade for active master

To perform an online upgrade to a new patch release for the active master database, you must first reverse the roles of the active and standby master databases, then perform the upgrade. For this procedure, assume there are no cache groups.

  1. Pause any applications that are generating updates on the active master database.

  2. Run the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. (The result of the call should be 00. If the value is 01, you should call ttRepSubscriberWait again until the value 00 is returned.) For example, to ensure that all transactions are replicated to the master2 standby master on the master2host:

    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  3. Stop the replication agent on the current active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:

    ttAdmin -repStop master1
    
  4. Execute the ttRepDeactivate built-in procedure on the current active master database. This puts the database in the IDLE state:

    call ttRepDeactivate;
    
  5. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair:

    call ttRepStateSet( 'ACTIVE' );
    
  6. Resume any applications that were paused in step 1, connecting them to the database that is now acting as the active master (for example, master2).

    Note:

    At this point, replication will not yet occur from the new active database to subscriber databases. Replication will resume after the host for the new standby database has been upgraded and the replication agent of the new standby database is running.
  7. Upgrade the instance of the former active master database, which is now the standby master database. See "Offline upgrade: Moving to a different patch or patch set: ttInstanceModify" for details.

  8. Restart replication on the database in the upgraded instance, using the ttRepStart built-in procedure or the ttAdmin utility:

    ttAdmin -repStart master1
    
  9. To make the database in the newly upgraded instance the active master database again, see "Reversing the roles of the active and standby databases" in the Oracle TimesTen In-Memory Database Replication Guide.
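The role reversal in steps 2 through 5 can be collected into one helper. This is only a sketch using the example DSNs master1 and master2 from this section; the function name reverse_roles and the exact ttIsql invocations are assumptions.

```shell
# Sketch: reverse the active and standby roles before upgrading the
# former active master (steps 2-5 above). Illustrative only.
reverse_roles() {
    # Step 2: wait for all transactions to reach the standby.
    ttIsql -e "call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 ); quit;" master1
    # Step 3: stop the replication agent on the current active master.
    ttAdmin -repStop master1
    # Step 4: deactivate the former active master (IDLE state).
    ttIsql -e "call ttRepDeactivate; quit;" master1
    # Step 5: make the standby the new active master.
    ttIsql -e "call ttRepStateSet( 'ACTIVE' ); quit;" master2
}
```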

Online major upgrade for active standby pair

When you perform an online upgrade for an active standby pair to a new major release of TimesTen, you must explicitly specify the TCP/IP port for each database. If your active standby pair replication scheme is not configured with a PORT attribute for each database, you must use the following steps to prepare for the upgrade. For this procedure, assume there are no cache groups. (Online major upgrades for active standby pairs with cache groups are only supported for read-only cache groups.)

  1. Stop the replication agent on every database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent on the master1 database:

    ttAdmin -repStop master1
    
  2. On the active master database, use the ALTER ACTIVE STANDBY PAIR statement to specify a PORT attribute for every database in the active standby pair. For example, to set a PORT attribute for the master1 database on the master1host host and the master2 database on the master2host host and the subscriber1 database on the subscriber1host host:

    ALTER ACTIVE STANDBY PAIR
     ALTER STORE master1 ON "master1host" SET PORT 30000
     ALTER STORE master2 ON "master2host" SET PORT 30001
     ALTER STORE subscriber1 ON "subscriber1host" SET PORT 30002;
    
  3. Destroy the standby master database and all of the subscribers using the ttDestroy utility. For example, to destroy the subscriber1 database:

    ttDestroy subscriber1
    
  4. Follow the normal procedure to start an active standby pair and duplicate the standby and subscriber databases from the active master. See "Setting up an active standby pair with no cache groups" in the Oracle TimesTen In-Memory Database Replication Guide for details.

To upgrade the instances of the active standby pair, first upgrade the instance of the standby master. While this node is being upgraded, there is no standby master database, so updates on the active master database are propagated directly to the subscriber databases. Following the upgrade of the standby node, the active and standby roles are switched and the new standby node is created from the new active node. Finally, the subscriber nodes are upgraded.

  1. Instruct the active master database to stop replicating updates to the standby master by executing the ttRepStateSave built-in procedure on the active master database. For example, to stop replication to the master2 standby master database on the master2host host:

    call ttRepStateSave( 'FAILED', 'master2', 'master2host' );
    
  2. Stop the replication agent on the standby master database using the ttRepStop built-in procedure or the ttAdmin utility. The following example stops the replication agent for the master2 standby master database.

    ttAdmin -repStop master2
    
  3. Use the ttMigrate utility to back up the standby master database to a binary file.

    ttMigrate -c master2 master2.bak
    

    See "ttMigrate" in the Oracle TimesTen In-Memory Database Reference for details.

  4. Destroy the standby master database, using the ttDestroy utility.

    ttDestroy master2
    
  5. Create a new installation and a new instance on the master2host standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  6. In the new instance on master2host, use ttMigrate to restore the standby master database from the binary file created earlier. (This example performs a checkpoint operation after every 20 megabytes of data has been restored.)

    ttMigrate -r -C 20 master2 master2.bak
    
  7. Start the replication agent on the standby master database using the ttRepStart built-in procedure or the ttAdmin utility.

    ttAdmin -repStart master2
    

    When the standby master database in the upgraded instance has become synchronized with the active master database, this standby master database moves from the RECOVERING state to the STANDBY state. The standby master database also starts sending updates to the subscribers. You can determine when the standby master database is in the STANDBY state by calling the ttRepStateGet built-in procedure.

    call ttRepStateGet;
    
  8. Pause any applications that are generating updates on the active master database.

  9. Execute the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. (The result of the call should be 00. If the value is 01, you should call ttRepSubscriberWait again until the value 00 is returned.) For example, to ensure that all transactions are replicated to the master2 standby master on the master2host host:

    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  10. Stop the replication agent on the active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:

    ttAdmin -repStop master1
    
  11. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair.

    call ttRepStateSet( 'ACTIVE' );
    
  12. Instruct the new active master database (master2, in our example) to stop replicating updates to what is now the standby master (master1) by executing the ttRepStateSave built-in procedure on the active master database. For example, to stop replication to the master1 standby master database on the master1host host:

    call ttRepStateSave( 'FAILED', 'master1', 'master1host' );
    
  13. Destroy the former active master database, using the ttDestroy utility.

    ttDestroy master1
    
  14. Create the new installation and the instance for the new release on master1host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  15. Create a new standby master database by duplicating the new active master database, using the ttRepAdmin utility. For example, to duplicate the master2 database on the master2host host to the master1 database, use the following on the host containing the master1 database:

    ttRepAdmin -duplicate -from master2 -host master2host -uid pat -pwd patpwd
     -setMasterRepStart master1
    
  16. Start the replication agent on the new standby master database using the ttRepStart built-in procedure or the ttAdmin utility.

    ttAdmin -repStart master1
    
  17. Stop the replication agent on the first subscriber database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the subscriber1 subscriber database:

    ttAdmin -repStop subscriber1
    
  18. Destroy the subscriber database using the ttDestroy utility.

    ttDestroy subscriber1
    
  19. Create a new installation and a new instance for the new release on the subscriber host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  20. Create the subscriber database by duplicating the new standby master database, using the ttRepAdmin utility, as follows.

    ttRepAdmin -duplicate -from master1 -host master1host -uid pat -pwd patpwd
     -setMasterRepStart subscriber1
    
  21. Start the replication agent for the duplicated subscriber database using the ttRepStart built-in procedure or the ttAdmin utility.

    ttAdmin -repStart subscriber1
    
  22. Repeat step 17 through step 21 for each other subscriber database.
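Step 7 above waits for the standby master to move from the RECOVERING state to the STANDBY state, which you can observe by polling ttRepStateGet. The sketch below shows one way to automate that poll; parse_rep_state, wait_for_standby, the poll interval, and the ttIsql invocation are assumptions for illustration.

```shell
# parse_rep_state: extract the state name from ttRepStateGet output,
# which looks like:  < STANDBY >
parse_rep_state() {
    printf '%s\n' "$1" | sed -n 's/.*< *\([A-Z]*\) *>.*/\1/p' | head -n 1
}

# wait_for_standby: poll the given DSN until it reports STANDBY.
wait_for_standby() {
    dsn=$1
    while :; do
        out=$(ttIsql -e "call ttRepStateGet; quit;" "$dsn" 2>&1)
        [ "$(parse_rep_state "$out")" = "STANDBY" ] && return 0
        sleep 5
    done
}
```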

Online upgrades for an active standby pair with cache groups

This section includes the following topics for online patch upgrades in a scenario with active standby pairs and cache groups:

Also see "Performing an online upgrade with classic replication" for an overview, limitations, and requirements.

Online patch upgrade for standby master and subscriber (cache groups)

To perform an online upgrade to a new patch release for the standby master database and subscriber databases, in a configuration with cache groups, complete the following tasks on each database (with exceptions noted).

  1. Stop the replication agent on the database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master2 standby database:

    ttAdmin -repStop master2
    
  2. Stop the cache agent on the standby database using the ttCacheStop built-in procedure or the ttAdmin utility:

    ttAdmin -cacheStop master2
    
  3. Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  4. Restart the cache agent on the standby database using the ttCacheStart built-in procedure or the ttAdmin utility:

    ttAdmin -cacheStart master2
    
  5. Restart the replication agent using the ttRepStart built-in procedure or the ttAdmin utility:

    ttAdmin -repStart master2
    

Note:

Steps 2 and 4, stopping and restarting the cache agent, are not applicable for subscriber databases.
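As the steps above show, the ordering matters: stop the replication agent before the cache agent and, after the upgrade, start the cache agent before the replication agent. The sketch below captures that ordering; the helper names are assumptions, the DSN is passed as an argument, and (per the note above) the cache-agent calls are omitted for subscriber databases.

```shell
# Stop agents before upgrading: replication agent first, then cache agent.
stop_agents() {
    ttAdmin -repStop "$1"
    ttAdmin -cacheStop "$1"
}

# Start agents after upgrading: cache agent first, then replication agent.
start_agents() {
    ttAdmin -cacheStart "$1"
    ttAdmin -repStart "$1"
}
```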

Online patch upgrade for active master (cache groups)

To perform an online upgrade to a new patch release for the active master database, in a configuration with cache groups, perform the following steps. You must first reverse the roles of the active and standby master databases, then perform the upgrade.

  1. Pause any applications that are generating updates on the active master database.

  2. Stop the cache agent on the current active master database using the ttCacheStop built-in procedure or the ttAdmin utility:

    ttAdmin -cacheStop master1
    
  3. Execute the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. For example, to ensure that all transactions are replicated to the master2 standby master on the master2host host:

    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  4. Stop the replication agent on the current active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:

    ttAdmin -repStop master1
    
  5. Execute the ttRepDeactivate built-in procedure on the current active master database. This puts the database in the IDLE state:

    call ttRepDeactivate;
    
  6. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair:

    call ttRepStateSet( 'ACTIVE' );
    
  7. Resume any applications that were paused in step 1, connecting them to the database that is now acting as the active master (in this example, the master2 database).

  8. Upgrade the instance for the former active master database, which is now the standby master database. See "Offline upgrade: Moving to a different patch or patch set: ttInstanceModify" for details.

  9. Restart the cache agent on the post-upgrade database using the ttCacheStart built-in procedure or the ttAdmin utility:

    ttAdmin -cacheStart master1
    
  10. Restart replication on the post-upgrade database using the ttRepStart built-in procedure or the ttAdmin utility:

    ttAdmin -repStart master1
    
  11. To make the post-upgrade database the active master database again, see "Reversing the roles of the active and standby databases" in the Oracle TimesTen In-Memory Database Replication Guide.

Online major upgrade for active standby pair (read-only cache groups)

Complete the following steps to perform a major upgrade from an 11.2.2 release to an 18.1 release in a scenario with an active standby pair with read-only cache groups.

These steps assume that master1 is the active master database on the master1host host and master2 is the standby master database on the master2host host.

Note:

For more information on the built-in procedures and utilities discussed here, see "Built-In Procedures" and "Utilities" in the Oracle TimesTen In-Memory Database Reference.
  1. On the active master host, run the ttAdmin utility to stop the replication agent for the active master database.

    ttAdmin -repStop master1
    
  2. On the active master database, use the DROP ACTIVE STANDBY PAIR statement to drop the active standby pair. For example, from the ttIsql utility:

    Command> DROP ACTIVE STANDBY PAIR;
    
  3. On the active master database, use the CREATE ACTIVE STANDBY PAIR statement to create a new active standby pair with the cache groups excluded. Ensure that you explicitly specify the TCP/IP port for each database.

    Command> CREATE ACTIVE STANDBY PAIR master1 ON "master1host",
               master2 ON "master2host"
             STORE master1 ON "master1host" PORT 20000
             STORE master2 ON "master2host" PORT 20010
             EXCLUDE CACHE GROUP cacheuser.readcache;
    

    Note:

    You can use the cachegroups command within the ttIsql utility to identify all the cache groups defined in the database. In this example, readcache is a read-only cache group owned by the cacheuser user.
  4. On the active master database, call the ttRepStateSet built-in procedure to set the replication state for the active master database to ACTIVE.

    Command> call ttRepStateSet('ACTIVE');
    

    To verify that the replication state for the active master database is set to ACTIVE, call the ttRepStateGet built-in procedure.

    Command> call ttRepStateGet();
    < ACTIVE >
    1 row found.
    
  5. On the active master database, call the ttRepStart built-in procedure to start the replication agent.

    Command> call ttRepStart();
    
  6. On the standby master host, run the ttAdmin utility to stop the replication agent for the standby master database.

    ttAdmin -repStop master2
    
  7. On the standby master host, run the ttAdmin utility to stop the cache agent for the standby master database.

    ttAdmin -cacheStop master2
    
  8. On the standby master host, run the ttDestroy utility to destroy the standby master database. You must either add the -force option or first drop all cache groups.

    ttDestroy -force master2
    
  9. Create a new standby master database by duplicating the active master database with the ttRepAdmin utility. For example, to duplicate the master1 database on the master1host host to the master2 database, run the following on the host containing the master2 database:

    ttRepAdmin -duplicate -from master1 -host master1host -UID pat -PWD patpwd 
      -keepCG -cacheUid cacheuser -cachePwd cachepwd master2
    

    Note:

    You need a user with ADMIN privileges defined in the active master database for it to be duplicated. In this example, the pat user identified by the patpwd password has ADMIN privileges.

    To keep the cache group tables, you need a cache administration user while adding the -keepCG option. In this example, the cacheuser user identified by the cachepwd password is a cache administration user.

  10. On the new standby master database, use the DROP CACHE GROUP statement to drop all the cache groups.

    Command> DROP CACHE GROUP cacheuser.readcache;
    
  11. On the standby master host, run the ttMigrate utility to back up the standby master database to a binary file.

    ttMigrate -c master2 master2.bak
    
  12. On the standby master host, run the ttDestroy utility to destroy the standby master database.

    ttDestroy master2
    
  13. Create a new installation and a new instance for the new release on the standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  14. In the new instance on the standby master host, run the ttMigrate utility to restore the standby master database from the binary file created earlier.

    ttMigrate -r -C 20 master2 master2.bak
    

    Note:

    This example performs a checkpoint operation after every 20 MB of data has been restored.
  15. On the standby master database, use the CREATE USER statement to create a new cache administration user.

    Command> CREATE USER cacheuser2 IDENTIFIED BY cachepwd;
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE,
             DROP ANY TABLE TO cacheuser2;
    

    Note:

    You must create the new cache administration user in the Oracle database and grant the user the minimum set of privileges required to perform cache group operations. See "Create users in the Oracle database" in the Oracle TimesTen Application-Tier Database Cache User's Guide for information.
  16. Connect to the standby master database as the cache administration user, and call the ttCacheUidPwdSet built-in procedure to set the new cache administration user name and password. Ensure you specify the cache administration user password for the Oracle database in the OraclePWD connection attribute within the connection string.

    ttIsql "DSN=master2;UID=cacheuser2;PWD=cachepwd;OraclePWD=oracle"
    Command> call ttCacheUidPwdSet('cacheuser2','oracle');
    
  17. On the standby master database, call the ttCacheStart built-in procedure to start the cache agent.

    Command> call ttCacheStart();
    
  18. On the standby master database, call the ttRepStart built-in procedure to start the replication agent.

    Command> call ttRepStart();
    

    The replication state will automatically be set to STANDBY. You can call the ttRepStateGet built-in procedure to confirm this. (This occurs asynchronously and may take a little time.)

    Command> call ttRepStateGet();
    < STANDBY >
    1 row found.
    
  19. On the standby master database, use the CREATE READONLY CACHE GROUP statement to create all the read-only cache groups.

    Command> CREATE READONLY CACHE GROUP cacheuser2.readcache
             AUTOREFRESH INTERVAL 10 SECONDS
             FROM oratt.readtbl
               (keyval NUMBER NOT NULL PRIMARY KEY, str VARCHAR(32));
    

    Note:

    Ensure that the cache administration user has SELECT privileges on the cache group tables in the Oracle database. In this example, the cacheuser2 user has SELECT privileges on the readtbl table owned by the oratt user in the Oracle database. For more information, see "Create the Oracle Database tables to be cached" in the Oracle TimesTen Application-Tier Database Cache User's Guide.
  20. On the standby master database, use the LOAD CACHE GROUP statement to load the data from the Oracle database tables into the TimesTen cache groups.

    Command> LOAD CACHE GROUP cacheuser2.readcache
             COMMIT EVERY 200 ROWS;
    
  21. Pause any applications that are generating updates on the active master database.

  22. On the active master database, call the ttRepSubscriberWait built-in procedure using the DSN and host of the standby master database. For example, to ensure that all transactions are replicated to the master2 database on the master2host host:

    Command> call ttRepSubscriberWait(NULL,NULL,'master2','master2host',120);
    
  23. On the active master database, call the ttRepStop built-in procedure to stop the replication agent.

    Command> call ttRepStop();
    
  24. On the active master database, call the ttRepDeactivate built-in procedure to set the replication state for the active master database to IDLE.

    Command> call ttRepDeactivate();
    
  25. On the standby master database, call the ttRepStateSet built-in procedure to set the replication state for the standby master database to ACTIVE. This database and its host become the active master in the active standby pair replication scheme.

    Command> call ttRepStateSet('ACTIVE');
    

    Note:

    In this example, the master2 database on the master2host host just became the active master in the active standby pair replication scheme. Likewise, the master1 database on the master1host host is henceforth considered the standby master in the active standby pair replication scheme.
  26. On the new active master database, call the ttRepStop built-in procedure to stop the replication agent.

    Command> call ttRepStop();
    
  27. On the active master database, use the ALTER CACHE GROUP statement to set the AUTOREFRESH mode of all cache groups to PAUSED.

    Command> ALTER CACHE GROUP cacheuser2.readcache
             SET AUTOREFRESH STATE PAUSED;
    
  28. On the active master database, use the DROP ACTIVE STANDBY PAIR statement to drop the active standby pair.

    Command> DROP ACTIVE STANDBY PAIR;
    
  29. On the active master database, use the CREATE ACTIVE STANDBY PAIR statement to create a new active standby pair with the cache groups included. Ensure you explicitly specify the TCP/IP port for each database.

    Command> CREATE ACTIVE STANDBY PAIR master1 ON "master1host",
               master2 ON "master2host"
             STORE master1 ON "master1host" PORT 20000
             STORE master2 ON "master2host" PORT 20010;
    
  30. On the active master database, call the ttRepStateSet built-in procedure to set the replication state for the active master database to ACTIVE.

    Command> call ttRepStateSet('ACTIVE');
    
  31. On the active master database, call the ttRepStart built-in procedure to start the replication agent.

    Command> call ttRepStart();
    
  32. Resume any applications that were paused in step 21, connecting them to the new active master database.

  33. On the new standby master host, run the ttDestroy utility to destroy the new standby master database.

    ttDestroy master1
    
  34. Create a new installation and a new instance for the new release on the standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  35. Create a new standby master database by duplicating the active master database with the ttRepAdmin utility. For example, to duplicate the master2 database on the master2host host to the master1 database, run the following on the host containing the master1 database:

    ttRepAdmin -duplicate -from master2 -host master2host -UID pat -PWD patpwd 
      -keepCG -cacheUid cacheuser2 -cachePwd cachepwd master1
    
  36. On the standby master host, run the ttAdmin utility to start the cache agent for the standby master database.

    ttAdmin -cacheStart master1
    
  37. On the standby master host, run the ttAdmin utility to start the replication agent for the standby master database.

    ttAdmin -repStart master1
    

Offline upgrades for an active standby pair with cache groups

Performing a major upgrade in a scenario with an active standby pair with asynchronous writethrough cache groups requires an offline upgrade, as described in the subsection that follows.

Offline major upgrade for active standby pair (cache groups)

Complete the following steps to perform a major upgrade from an 11.2.2 release to an 18.1 release in a scenario with an active standby pair with cache groups. You must perform this upgrade offline.

These steps assume master1 is an active master database on the master1host host and master2 is a standby master database on the master2host host. (For information about the built-in procedures and utilities discussed, refer to "Built-In Procedures" and "Utilities" in Oracle TimesTen In-Memory Database Reference.)

  1. Stop any updates to the active database before you upgrade.

  2. From master1, call the ttRepSubscriberWait built-in procedure to ensure that all data updates have been applied to the standby database, where numsec is the desired wait time.

    call ttRepSubscriberWait(null, null, 'master2', 'master2host', numsec);
    
  3. From master2, call ttRepSubscriberWait to ensure that all data updates have been applied to the Oracle database.

    call ttRepSubscriberWait(null, null, '_ORACLE', null, numsec);
    
  4. On master1host, use the ttAdmin utility to stop the replication agent for the active database.

    ttAdmin -repStop master1
    
  5. On master2host, use ttAdmin to stop the replication agent for the standby database.

    ttAdmin -repStop master2
    
  6. On master1host, call the ttCacheStop built-in procedure or use ttAdmin to stop the cache agent for the active database.

    ttAdmin -cacheStop master1
    
  7. On master2host, call ttCacheStop or use ttAdmin to stop the cache agent for the standby database.

    ttAdmin -cacheStop master2
    
  8. On master1host, use the ttMigrate utility to back up the active database to a binary file.

    ttMigrate -c master1 master1.bak
    
  9. On master1host, use the ttDestroy utility to destroy the active database. You must either use the -force option or first drop all cache groups. If you use -force, run the script cacheCleanup.sql afterward.

    ttDestroy -force /data_store_path/master1
    

    The cacheCleanup.sql script is a SQL*Plus script that you run after connecting to the Oracle database as the cache user. It is located in the installation_dir/oraclescripts directory (and accessible through timesten_home/install/oraclescripts). The script takes as parameters the host name and the database name (with full path). For information, refer to "Dropping Oracle Database objects used by autorefresh cache groups" in the Oracle TimesTen Application-Tier Database Cache User's Guide.
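
    As a sketch only, the invocation could be assembled as follows. The installation path, Oracle net service name (ORCL), and cache user credentials are placeholders, not values prescribed by this procedure; substitute your own before executing.

```shell
# Hedged sketch: build (and display, rather than run) the SQL*Plus
# invocation of cacheCleanup.sql after a "ttDestroy -force".
# All values below are placeholders.
TT_INSTALL=/opt/TimesTen/tt18.1          # placeholder installation_dir
CACHE_CONN="cacheuser2/cachepwd@ORCL"    # cache user connection (placeholder)
HOST=master1host                         # host that held the destroyed database
DB_PATH=/data_store_path/master1         # full path of the destroyed database

# The script takes the host name and the database path as parameters.
CMD="sqlplus $CACHE_CONN @$TT_INSTALL/oraclescripts/cacheCleanup.sql $HOST $DB_PATH"
echo "$CMD"    # inspect the command before executing it yourself
```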

  10. Create a new installation and a new instance for the new major release on master1host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  11. Create a new database in 18.1.w.x using ttIsql with DSN connection attribute setting AutoCreate=1. In this new database, create a cache user. The following example is a sequence of commands to execute in ttIsql to create this cache user and give it appropriate access privileges.

    The cache user requires ADMIN privilege to execute the next step, ttMigrate -r. Once migration is complete, you can revoke the ADMIN privilege from this user if desired.

    Command> CREATE USER cacheuser IDENTIFIED BY cachepassword;
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE, 
             DROP ANY TABLE TO cacheuser;
    Command> GRANT ADMIN TO cacheuser;
    
  12. In the new instance on master1host, use the ttMigrate utility as the cache user to restore master1 from the binary file created earlier. (This example performs a checkpoint operation after every 20 megabytes of data has been restored, and assumes the password is the same in the Oracle database as in TimesTen.)

    ttMigrate -r -cacheuid cacheuser -cachepwd cachepassword -C 20 -connstr
     "DSN=master1;uid=cacheuser;pwd=cachepassword;oraclepwd=cachepassword"
     master1.bak
    
  13. On master1host, use ttAdmin to start the replication agent.

    ttAdmin -repStart master1
    

    Note:

    This step also sets the database to the active state. You can then call the ttRepStateGet built-in procedure (which takes no parameters) to confirm the state.
  14. On master1host, call the ttCacheStart built-in procedure or use ttAdmin to start the cache agent.

    ttAdmin -cacheStart master1
    

    Then you can use the ttStatus utility to confirm the replication and cache agents have started.

  15. Put each automatic refresh cache group into the AUTOREFRESH PAUSED state. This example uses ttIsql:

    Command> ALTER CACHE GROUP mycachegroup SET AUTOREFRESH STATE paused;
    
  16. From master1, reload each cache group, specifying the name of the cache group and how often to commit during the operation. This example uses ttIsql:

    Command> LOAD CACHE GROUP cachegroupname COMMIT EVERY n ROWS;
    

    You can optionally specify parallel loading as well. See the "LOAD CACHE GROUP" SQL statement in the Oracle TimesTen In-Memory Database SQL Reference for details.

  17. On master2host, use ttDestroy to destroy the standby database. You must either use the -force option or first drop all cache groups. If you use -force, run the script cacheCleanup.sql afterward (as discussed earlier).

    ttDestroy -force /data_store_path/master2
    
  18. Create the new installation and the new instance for the new major release on master2host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

  19. In the new instance on master2host, use the ttRepAdmin utility with the -duplicate option to create a duplicate of active database master1 to use as standby database master2. Specify the appropriate administrative user on master1, the cache manager user and password, and to keep cache groups.

    ttRepAdmin -duplicate -from master1 -host master1host -uid pat -pwd patpwd 
    -cacheUid orcluser -cachePwd orclpwd -keepCG master2
    
  20. On master2host, use ttAdmin to start the replication agent. (You could optionally have used the ttRepAdmin option -setMasterRepStart in the previous step instead.)

    ttAdmin -repStart master2
    
  21. On master2, the replication state will automatically be set to STANDBY. You can call the ttRepStateGet built-in procedure to confirm this. (This occurs asynchronously and may take a little time.)

    call ttRepStateGet();
    
  22. On master2host, call the ttCacheStart built-in procedure or use ttAdmin to start the cache agent.

    ttAdmin -cacheStart master2
    

    After this, you can use the ttStatus utility to confirm the replication and cache agents have started.

If you want to create read-only subscriber databases, on each subscriber host you can create the subscriber by using the ttRepAdmin utility -duplicate option to duplicate the standby database. The following example creates subscriber1, using the same ADMIN user as above and the -nokeepCG option to convert the cache tables to normal TimesTen tables, as appropriate for a read-only subscriber.

ttRepAdmin -duplicate -from master2 -host master2host -nokeepCG 
-uid pat -pwd patpwd subscriber1

For related information, refer to "Rolling out a disaster recovery subscriber" in the Oracle TimesTen In-Memory Database Replication Guide.

Performing an offline TimesTen upgrade when using Oracle Clusterware

This section discusses the steps for an offline upgrade of TimesTen when using TimesTen with Oracle Clusterware. You have the option of also upgrading Oracle Clusterware, independently, while upgrading TimesTen. (See "Performing an online TimesTen upgrade when using Oracle Clusterware" for details on online upgrade.)

Notes:

  • These instructions apply for either a TimesTen patch upgrade (18.1.w.x to 18.1.y.z) or a TimesTen major upgrade (11.2.2 to 18.1).

  • Refer to Oracle TimesTen In-Memory Database Release Notes for information about versions of Oracle Clusterware that are supported by TimesTen.

For this procedure, except where noted, you can execute the ttCWAdmin commands from any host in the cluster. Each command affects all hosts.

  1. Stop the replication agents on the databases in the active standby pair:

    ttCWAdmin -stop -dsn advancedDSN
    
  2. Drop the active standby pair:

    ttCWAdmin -drop -dsn advancedDSN
    
  3. Stop the TimesTen cluster agent. This removes the hosts from the cluster and stops the TimesTen daemon:

    ttCWAdmin -shutdown
    
  4. Upgrade TimesTen on the desired hosts.

  5. Upgrade Oracle Clusterware if desired. See the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation for information.

  6. If you have upgraded Oracle Clusterware, use the ttInstanceModify utility to configure TimesTen with Oracle Clusterware. On each host, run:

    ttInstanceModify -crs
    

    For Linux or UNIX hosts, see "Change the Oracle Clusterware configuration for an instance" for details.

  7. Start the TimesTen cluster agent. This includes the hosts defined in the cluster as specified in ttcrsagent.options. This also starts the TimesTen daemon.

    ttCWAdmin -init
    
  8. Create the active standby pair replication scheme:

    ttCWAdmin -create -dsn advancedDSN
    

    Important: The host from which you run this command must have access to the cluster.oracle.ini file. (See "Configuring Oracle Clusterware management with the cluster.oracle.ini file" in the Oracle TimesTen In-Memory Database Replication Guide for information about this file.)

  9. Start the active standby pair replication scheme:

    ttCWAdmin -start -dsn advancedDSN
    

Performing an online TimesTen upgrade when using Oracle Clusterware

This section discusses how to perform an online rolling upgrade (patch) for TimesTen, from TimesTen 18.1.w.x to 18.1.y.z, in a configuration where Oracle Clusterware manages active standby pairs. (See "Performing an offline TimesTen upgrade when using Oracle Clusterware" for an offline upgrade.)

The following topics are covered:

Supported configurations

The following basic configurations are supported for online rolling upgrades for TimesTen. In all cases, Oracle Clusterware manages the hosts.

  • One active standby pair on two hosts.

  • Multiple active standby pairs with one database on each host.

  • Multiple active standby pairs with one or more databases on each host.

(Other scenarios, such as with additional spare hosts, are effectively equivalent to one of these scenarios.)

Restrictions and assumptions

Note the following assumptions for upgrading TimesTen when using Oracle Clusterware:

  • The existing active standby pairs are configured and operating properly.

  • Oracle Clusterware commands are used correctly to stop and start the standby database.

  • The upgrade does not change the TimesTen environment for the active and standby databases.

  • These instructions are for TimesTen patch upgrades only. Online major upgrades are not supported in configurations where Oracle Clusterware manages active standby pairs.

  • There are at least two hosts managed by Oracle Clusterware.

    Multiple active or standby databases managed by Oracle Clusterware can exist on a host only if there are at least two hosts in the cluster.

Important:

Upgrade Oracle Clusterware if desired, but not concurrently with an online TimesTen upgrade. When performing an online TimesTen patch upgrade in configurations where Oracle Clusterware manages active standby pairs, you must perform the Clusterware upgrade independently and separately, either before or after the TimesTen upgrade.

Note:

For information about Oracle Clusterware, see the Oracle Clusterware Administration and Deployment Guide in the Oracle Database documentation.

Upgrade tasks for one active standby pair

This section describes the following tasks:

Note:

In examples in the following subsections, the host name is host2, the DSN is myDSN, the instance name is upgrade2, and the instance administrator is terry.

Verify that the active standby pair is operating properly

Complete these steps to confirm that the active standby pair is operating properly.

  1. Verify the following.

    • The active and the standby databases run a TimesTen 18.1.w.x release.

    • The active and standby databases are on separate hosts managed by Oracle Clusterware.

    • Replication is working.

    • If the active standby pair replication scheme includes cache groups, the following are true:

      • AWT and SWT writes are working from the standby database in TimesTen to the Oracle database.

      • Refreshes are working from the Oracle database to the active database in TimesTen.

  2. Run the ttCWAdmin -status -dsn yourDSN command to verify the following.

    • The active database is on a different host than the standby database.

    • The state of the active database is 'ACTIVE' and the status is 'AVAILABLE'.

    • The state of the standby database is 'STANDBY' and the status is 'AVAILABLE'.

  3. Run the ttStatus command on the active database to verify the following.

    • The ttCRSactiveservice and ttCRSmaster processes are running.

    • The subdaemon and the replication agents are running.

    • If the active standby pair replication scheme includes cache groups, the cache agent is running.

  4. Run the ttStatus command on the standby database to verify the following.

    • The ttCRSsubservice and ttCRSmaster processes are running.

    • The subdaemon and the replication agents are running.

    • If the active standby pair replication scheme includes cache groups, the cache agent is running.

Shut down the standby database

Complete these steps to shut down the standby database.

  1. Run an Oracle Clusterware command similar to the following to obtain the names of the Oracle Clusterware Master, Daemon, and Agent processes on the host of the standby database. Filtering the output through grep TT is recommended:

    crsctl status resource -n standbyHostName | grep TT
    
  2. Run Oracle Clusterware commands to shut down the standby database. These commands stop the Master process for the standby database, and the Daemon and Agent processes for the instance.

    crsctl stop resource TT_Master_upgrade2_terry_myDSN_1
    crsctl stop resource TT_Daemon_upgrade2_terry_host2
    crsctl stop resource TT_Agent_upgrade2_terry_host2
    
  3. Stop the TimesTen main daemon.

    ttDaemonAdmin -stop
    

    If the ttDaemonAdmin -stop command gives error 10028, retry the command.
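
    The retry can be scripted. The wrapper below is an illustrative sketch only: "$@" stands in for ttDaemonAdmin -stop, and the single-retry policy simply mirrors the advice above; nothing here is a prescribed TimesTen interface.

```shell
# Illustrative retry wrapper; "$@" stands in for "ttDaemonAdmin -stop".
# If the first attempt fails (for example with error 10028), pause
# briefly and retry once, returning the status of the second attempt.
retry_once() {
  if "$@"; then
    return 0
  fi
  sleep 2        # brief pause before the single retry
  "$@"
}

# Example usage (placeholder command):
# retry_once ttDaemonAdmin -stop
```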

Perform an upgrade for the standby database

Complete these steps for an offline upgrade of the instance for the standby database.

  1. Create a new installation. See "Creating an installation on Linux/UNIX" for information.

  2. Point the instance to the new installation. See "Associate an instance with a different installation (upgrade or downgrade)" for details.

  3. Configure the new installation for Oracle Clusterware.

Start the standby database

Complete these steps to start the standby database.

  1. Run the following ttCWAdmin command to start the TimesTen main daemon, the TimesTen Oracle Clusterware agent process and the TimesTen Oracle Clusterware Daemon process:

    ttCWAdmin -init -hosts localhost
    
  2. Start the Oracle Clusterware Master process for the standby database.

    crsctl start resource TT_Master_upgrade2_terry_MYDSN_1
    

Switch the roles of the active and standby databases

Use the ttCWAdmin -switch command to switch the roles of the active and standby databases to enable the offline upgrade on the other master database.

ttCWAdmin -switch -dsn myDSN

Use the ttCWAdmin -status command to verify that the switch operation has completed before starting the next task.
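
A simple way to script this wait is to poll the status output until the expected text appears. The helper below is a generic, hedged sketch: the command arguments stand in for ttCWAdmin -status -dsn myDSN, and the pattern is whatever state text you expect to see; neither is a documented TimesTen interface.

```shell
# Generic polling helper: run a status command repeatedly until its
# output matches a pattern, or give up after max_tries attempts.
wait_for_pattern() {
  pattern=$1; max_tries=$2; shift 2
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    if "$@" | grep -q "$pattern"; then
      return 0                   # expected state observed
    fi
    sleep 5                      # poll interval; tune as needed
    i=$((i + 1))
  done
  return 1                       # state never appeared
}

# Example (placeholders): wait_for_pattern "ACTIVE" 12 ttCWAdmin -status -dsn myDSN
```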

Shut down the new standby database

Use the Oracle Clusterware crsctl status resource command to obtain the names of the Master, Daemon, and Agent processes on the host of the new standby database. This example assumes the host host1 and filters the output through grep TT:

crsctl status resource -n host1 | grep TT

Run commands such as those in "Shut down the standby database" and use the appropriate instance name, instance administrator, DSN, and host name. For example:

crsctl stop resource TT_Master_upgrade2_terry_MYDSN_0
crsctl stop resource TT_Daemon_upgrade2_terry_host1
crsctl stop resource TT_Agent_upgrade2_terry_host1
ttDaemonAdmin -stop

Perform an upgrade of the new standby database

See "Perform an upgrade for the standby database" for the steps.

Start the new standby database

See "Start the standby database" and use the Master process name obtained by the crsctl status resource command from "Shut down the new standby database" as outlined above.

ttCWAdmin -init -hosts localhost
crsctl start resource TT_Master_upgrade2_terry_MYDSN_0

Upgrades for multiple active standby pairs on many pairs of hosts

The process to upgrade the instances for multiple active standby pairs on multiple pairs of hosts is essentially the same as the process to upgrade the instances for a single active standby pair on two hosts. See "Upgrade tasks for one active standby pair" for details. The best practice is to perform the upgrades for the active standby pairs one at a time.

Use the ttCWAdmin -status command to determine the state of the databases managed by Oracle Clusterware.

Upgrades for multiple active standby pairs on a pair of hosts

Multiple active standby pairs can be on multiple pairs of hosts. See "Upgrades for multiple active standby pairs on many pairs of hosts" for details. Alternatively, multiple active standby pairs can be on a single pair of hosts. One scenario is for all the active databases to be on one host and all the standby databases to be on the other. A more typical scenario, to better balance the workload, is for each host to have some active databases and some standby databases.

Figure 6-1 shows two active standby pairs on two hosts managed by Oracle Clusterware. The active database called active1 on host1 replicates to standby1 on host2. The active database called active2 on host2 replicates to standby2 on host1. AWT updates from both standby databases are propagated to the Oracle database. Read-only updates from the Oracle database are propagated to the active databases.

Figure 6-1 Multiple active standby pairs on two hosts

Description of Figure 6-1 follows
Description of ''Figure 6-1 Multiple active standby pairs on two hosts''

This configuration can result in greater write throughput for cache groups and more balanced resource usage. See the next section, "Sample configuration files: multiple active standby pairs on one pair of hosts", for sample sys.odbc.ini entries and a sample cluster.oracle.ini file for this kind of configuration. (See "Configuring Oracle Clusterware management with the cluster.oracle.ini file" in the Oracle TimesTen In-Memory Database Replication Guide for information about that file.)

The rolling upgrade process for multiple active standby pairs on a single pair of hosts is similar to the process for multiple active standby pairs on multiple pairs of hosts. See "Upgrades for multiple active standby pairs on many pairs of hosts" for details.

First, however, if the active and standby databases are mixed between the two hosts, switch all standby databases to one host and all active databases to the other host. Use the ttCWAdmin -switch -dsn DSN command to switch active and standby databases between hosts. Once all the active databases are on one host and all the standby databases are on the other host, follow the steps below to perform the upgrade for the entire "standby" host.
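
Consolidating the roles can be scripted by iterating the switch command over each affected DSN. The sketch below only emits the commands for review rather than executing them; the DSN list, and the decision of which databases need to move, are yours (the names here are placeholders).

```shell
# Hedged sketch: print a switch command for each DSN whose active
# database should move to the other host. DSNS is a placeholder list.
DSNS="databaseb databasec"
for dsn in $DSNS; do
  CMD="ttCWAdmin -switch -dsn $dsn"
  echo "$CMD"    # review, then execute manually or via eval
done
```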

Be aware that upgrades affect the entire instance and associated databases on one host.

  1. Verify that the standby databases run on the desired host. Use the ttCWAdmin -status -dsn DSN command and the ttCWAdmin -status command.

  2. Modify the Oracle Clusterware stop commands to stop all Master processes on the host where all the standby databases reside.

  3. Modify the Oracle Clusterware start commands to start all Master processes on the host where all the standby databases reside.

The following subsections contain related samples.

Sample configuration files: multiple active standby pairs on one pair of hosts

The following are sample sys.odbc.ini entries:

[databasea]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databasea
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL
 
[databaseb]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databaseb
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

[databasec]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databasec
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

[databased]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databased
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

The following is a sample cluster.oracle.ini file:

[databasea]
MasterHosts=host1,host2
CacheConnect=Y
 
[databaseb]
MasterHosts=host2,host1
CacheConnect=Y
 
[databasec]
MasterHosts=host2,host1
CacheConnect=Y
 
[databased]
MasterHosts=host1,host2
CacheConnect=Y

The cluster.oracle.ini file places one active database and one standby database on each host. This is accomplished by reversing the order of the host names specified for the MasterHosts attribute.

Sample scripts: stopping and starting multiple standby processes on one host

Run an Oracle Clusterware command similar to the following to obtain the names of the Oracle Clusterware Master, Daemon, and Agent processes on the host of the standby database. Filtering the output through grep TT is recommended:

crsctl status resource -n standbyHostName | grep TT

The following script is an example of a "stop standby" script for multiple databases on the same host that Oracle Clusterware manages. The instance name is upgrade2. The instance administrator is terry. The host is host2. There are two standby databases: databasea and databaseb.

crsctl stop resource TT_Master_upgrade2_terry_DATABASEA_0
crsctl stop resource TT_Master_upgrade2_terry_DATABASEB_1
crsctl stop resource TT_Daemon_upgrade2_terry_HOST2
crsctl stop resource TT_Agent_upgrade2_terry_HOST2
ttDaemonAdmin -stop

The following script is an example of a "start standby" script for the same configuration.

ttCWAdmin -init -hosts localhost
crsctl start resource TT_Master_upgrade2_terry_DATABASEA_0
crsctl start resource TT_Master_upgrade2_terry_DATABASEB_1
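
The resource names in these scripts follow the pattern visible in the examples above (TT_Master, TT_Daemon, or TT_Agent, followed by the instance name, instance administrator, and DSN or host). The generator below is a hedged sketch built from that observed pattern; the numeric suffixes are assigned by Oracle Clusterware, so always confirm the actual names with crsctl status resource before running anything.

```shell
# Hedged sketch: print the stop sequence for multiple standby databases
# on one host, using the naming pattern shown in the samples above.
# Confirm real names with: crsctl status resource -n host | grep TT
INSTANCE=upgrade2
ADMIN=terry
HOST=HOST2
MASTERS="DATABASEA_0 DATABASEB_1"   # DSN_suffix pairs from crsctl output

for m in $MASTERS; do
  echo "crsctl stop resource TT_Master_${INSTANCE}_${ADMIN}_${m}"
done
echo "crsctl stop resource TT_Daemon_${INSTANCE}_${ADMIN}_${HOST}"
echo "crsctl stop resource TT_Agent_${INSTANCE}_${ADMIN}_${HOST}"
echo "ttDaemonAdmin -stop"
```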

Upgrades when using parallel replication

Automatic parallel replication is enabled by default beginning in TimesTen release 11.2.2.2.0. In earlier releases, user-defined parallel replication was available, but automatic parallel replication was not. Automatic parallel replication with disabled commit dependencies was first available in TimesTen release 11.2.2.8.0. In TimesTen release 18.1.4.1.0, user-defined parallel replication is not available; however, both automatic parallel replication options (with or without disabled commit dependencies) are available.

Note:

The values for the "ReplicationApplyOrdering" attribute, in the Oracle TimesTen In-Memory Database Reference, have changed. Beginning in release 11.2.2.2.0, a value of 0 enables automatic parallel replication. Before release 11.2.2.2.0, a value of 0 disabled user-defined parallel replication. Beginning in release 11.2.2.8.0, a value of 2 enables automatic parallel replication with disabled commit dependencies. In 18.1 releases, user-defined parallel replication (set with a value of 1) is not supported.
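
For illustration, a hypothetical sys.odbc.ini entry that enables automatic parallel replication with disabled commit dependencies might carry attributes like the following. The size and parallelism values are placeholders for illustration, not recommendations:

```
[master2]
Driver=timesten_home/install/lib/libtten.so
DataStore=/data_store_path/master2
PermSize=400
DatabaseCharacterSet=WE8MSWIN1252
ReplicationApplyOrdering=2
ReplicationParallelism=2
LogBufParallelism=4
```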

You can perform an online or offline upgrade from a database that has not enabled parallel replication to a database of this release that has enabled parallel replication (with or without disabled commit dependencies).

The rest of this section discusses additional considerations along with scenarios where an offline upgrade is required.

Considerations regarding parallel replication

Be aware of the following considerations when upgrading hosts that use parallel replication:

  • Consider an active standby pair without parallel replication enabled. To upgrade the instances to an 18.1 release and use automatic parallel replication (default value of 0 for the ReplicationApplyOrdering attribute), simply use the appropriate procedure for an active standby pair upgrade. See "Performing an upgrade with active standby pair replication" for details.

  • Consider an active standby pair with no cache groups and automatic parallel replication enabled (value of 0 for the ReplicationApplyOrdering attribute). To upgrade the instances to an 18.1 release and use automatic parallel replication with disabled commit dependencies (value of 2 for the ReplicationApplyOrdering attribute), use the procedure for an active standby pair online major upgrade. See "Online major upgrade for active standby pair" for details. The value for the ReplicationApplyOrdering attribute must be changed from 0 to 2 before restoring any of the databases. For example:

    ttMigrate -r "DSN=master2;ReplicationApplyOrdering=2;ReplicationParallelism=2;
      LogBufParallelism=4" master2.bak
    

    Note:

    You may upgrade a database with a replication scheme with ReplicationApplyOrdering=2 to a database with ReplicationApplyOrdering=0 by using the same active standby pair online major upgrade procedure.

    Automatic parallel replication with disabled commit dependencies supports only asynchronous active standby pairs with no cache groups. For more information, see "Configuring parallel replication" in the Oracle TimesTen In-Memory Database Replication Guide.

  • You cannot replicate between databases that have the ReplicationParallelism attribute set to greater than 1 but have different values for the ReplicationApplyOrdering attribute.

Scenarios that require an offline upgrade

You must use an offline upgrade for these scenarios:

  • Moving from user-defined parallel replication to automatic parallel replication. For example, from a release preceding 11.2.2.3.0 to an 18.1 release with the ReplicationApplyOrdering attribute set to the default value (0). Note that user-defined parallel replication is not supported in release 18.1.4.1.0.

  • Moving from an automatic parallel replication environment to another automatic parallel replication environment with a different number of tracks, as indicated by the value of the ReplicationParallelism attribute.

  • Moving between major releases (from 11.2.2 to 18.1) and using asynchronous writethrough cache groups.

  • Moving from regular replication with asynchronous writethrough in 11.2.2 to automatic parallel replication with asynchronous writethrough in 18.1.

For offline upgrades, you can use the procedure described in "Offline upgrade: Moving to a different major release". Alternatively, you can upgrade one side and use the ttRepAdmin -duplicate -recreate command to create the new database.

Performing an upgrade of your client instance

You can upgrade a client instance that is being used to access a database in a full instance. For information on instances, see "Overview of installations and instances" and "TimesTen instances". For information on Client/Server, see "Overview of the TimesTen Client/Server" in the Oracle TimesTen In-Memory Database Operations Guide.

To perform the upgrade, follow these steps:

  1. Optional: This step is included for informational purposes to assist you in identifying and verifying the TimesTen client release information.

    In the client instance, run the ttVersion utility to verify the client release and the client instance. In this example, running ttVersion in the client instance shows the client release is 18.1.4.1.0 and the client instance is instance_1814_client.

    % ttVersion
    TimesTen Release 18.1.4.1.0 (64 bit Linux/x86_64) (instance_1814_client)
    2020-06-29T23:22:07Z
      Instance home directory: /scratch/instance_1814_client
      Group owner: g900
    
  2. Optional: This step is included for informational purposes to establish and then show a client connection to the database1_1814 database. In the client instance, run ttIsqlCS to connect to the database1_1814 database in the full instance (on the server). Note that the TCP_PORT is not specified. The default value is assumed.

    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814";
     
    Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1_1814";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814;
    ...
    (Default setting AutoCommit=1)
    
  3. Stop all applications using the client instance. In this example, in the client instance, first run ttIsqlCS to connect to the database1_1814 database, then exit from ttIsqlCS.

    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814";
     
    Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1_1814";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814;
    ...
    (Default setting AutoCommit=1)
    Command> exit
    Disconnecting...
    Done.
    
  4. Create a new client installation in a new location. For example, create the clientinstall_new installation directory and unzip the new release zip file into it. To create the 18.1.4.1.0 installation on Linux 64-bit, unzip timesten181410.server.linux8664.zip into the clientinstall_new directory. (Note that there is only one distribution on Linux 64-bit; it contains both the server and the client installation.)

    % mkdir clientinstall_new
    % cd clientinstall_new
    % unzip /swdir/TimesTen/ttinstallers/timesten181410.server.linux8664.zip
    [...UNZIP OUTPUT...]
    

    See "TimesTen installations" for detailed information.

  5. Modify the client instance to point to the new installation. Do this by running the ttInstanceModify utility with the -install option from the $TIMESTEN_HOME/bin directory of the client instance.

    In this example, point the client instance to the installation in /clientinstall_new/tt18.1.4.1.0.

    % $TIMESTEN_HOME/bin/ttInstanceModify -install 
     /clientinstall_new/tt18.1.4.1.0
     
    Instance Info (UPDATED)
    -----------------------
     
    Name:           instance_1814_client
    Version:        18.1.4.1.0
    Location:       /scratch/instance_1814_client
    Installation:   /clientinstall_new/tt18.1.4.1.0
     
    * Client-Only Installation
     
     
    The instance instance_1814_client now points to the installation in 
    clientinstall_new/tt18.1.4.1.0
    
  6. Optional: In the client instance, run the ttVersion utility to verify the client release is 18.1.4.1.0.

    % ttVersion
    TimesTen Release 18.1.4.1.0 (64 bit Linux/x86_64) (instance_1814_client) 2020-06-28T22:37:51Z
      Instance home directory: /scratch/instance_1814_client
      Group owner: g900
    
  7. Restart the applications that use the client instance.

    In this example, in the client instance, run ttIsqlCS to connect to the database1_1814 database in the full instance.

    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814";
     
    Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1_1814";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1_1814;
    ...
    (Default setting AutoCommit=1)
    
  8. Optional: Delete the previous release installation (used for the client).

    % chmod -R 750 installation_dir/tt18.1.3.5.0
    % rm -rf installation_dir/tt18.1.3.5.0