7 Upgrades in TimesTen Classic

This chapter describes the process for upgrading to a new release of TimesTen Classic. For information on the upgrade process for TimesTen Scaleout, see "Upgrading a grid" and "Migrating, Backing Up and Restoring Data" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

Ensure you review the installation process in the preceding chapters before completing the upgrade procedures described in this chapter.

Topics include:

  • Overview of release numbers

  • Types of upgrades

  • About moving to a different patch release by modifying the instance

  • Moving to a different patch release using ttBackup and ttRestore

  • Moving to a different major release using ttMigrate

  • Online upgrade: Using TimesTen replication

Overview of release numbers

TimesTen uses a five-part release numbering scheme, which is relevant when discussing upgrades. For a given release a.b.c.d.e:

  • a indicates the first part of the major release.

  • b indicates the second part of the major release.

  • c indicates the patch set.

  • d indicates the patch level within the patch set.

  • e is reserved.

Important considerations:

  • Releases within the same major release (a.b) are binary compatible. When releases are binary compatible, you do not have to recreate the database for the upgrade (or downgrade).

  • Releases with different major release numbers are not binary compatible. In this case, you must recreate the database. See "Moving to a different major release using ttMigrate" for details.

As an example, for the 22.1.1.10.0 release:

  • The first two numbers of the five-place release number (22.1) indicate the major release.

  • The third number of the five-place release number (1) indicates the patch set. For example, 22.1.1.10.0 is binary compatible with 22.1.1.11.0 because the first two numbers of the five-place release number (22 and 1) are the same.

  • The fourth number of the five-place release number (10) indicates the patch level within the patch set. For example, 22.1.1.11.0 is patch level 11 within patch set 1.

  • The fifth number of the five-place release number (0) is reserved.
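
    For example, you can check the five-place release number of an existing instance with the ttVersion utility before planning an upgrade (output abbreviated; the instance shown is an example used later in this chapter):

    % ttVersion
    TimesTen Release 22.1.1.10.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-16T07:41:05Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      ...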

Types of upgrades

TimesTen Classic supports two types of upgrades:

  • An offline upgrade requires that you close all TimesTen databases to prevent new connections, disconnect all applications from TimesTen, and stop all TimesTen databases. This type of upgrade is useful when some amount of downtime is acceptable. During this downtime, the TimesTen databases are unavailable.

    An offline upgrade enables you to upgrade (or downgrade) to a new patch release or to upgrade to a new major release.

  • An online upgrade involves using a pair of databases that are replicated and then performing an offline upgrade of each database in turn. This type of upgrade is useful when it is critically important that downtime be at a minimum. See "Online upgrade: Using TimesTen replication" for details.

About moving to a different patch release by modifying the instance

This section contains information about moving to a different patch release of TimesTen by modifying the TimesTen instance. Moving to a different patch release includes upgrades and downgrades. See "Moving to a different patch release using ttBackup and ttRestore" for information on moving to a different patch release using backup and restore operations.

Concepts that are important when moving to a different patch release:
  • Start a database: The subdaemon either creates a new shared memory segment or re-attaches to an existing one. These operations are used to start a database:

    • Load: The subdaemon creates a new shared memory segment, and loads the contents of the most recent checkpoint file into this new shared memory segment.

    • Remap: The subdaemon re-attaches to an existing shared memory segment.

  • Stop a database: The subdaemon disconnects from the shared memory segment and either destroys the shared memory segment or preserves it. These operations are used to stop a database:

    • Unload (clean): The shared memory segment is written to the checkpoint file on disk (by performing a static checkpoint operation). The subdaemon disconnects from and destroys the shared memory segment. A subsequent load operation starts the database again.

    • Detach (clean): The shared memory segment is optionally written to the checkpoint file on disk (by performing a static checkpoint operation). The subdaemon disconnects from the shared memory segment, but does not destroy it. The shared memory segment remains in memory. A subsequent remap operation starts the database again.

Note:

For a complete list of the operations you use to start and stop a database, see "Managing TimesTen Databases" in the Oracle TimesTen In-Memory Database Operations Guide.
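
As a quick reference, the start and stop operations described above correspond to ttAdmin options that are used throughout this chapter. The following sketch assumes an example database named database1:

% ttAdmin -ramLoad database1     # load: create a new shared memory segment from the checkpoint files
% ttAdmin -ramUnload database1   # unload (clean): checkpoint to disk, then destroy the shared memory segment
% ttAdmin -shmDetach database1   # detach (clean): disconnect the subdaemon but preserve the shared memory segment
% ttAdmin -shmAttach database1   # remap: attach a new subdaemon to the preserved shared memory segment
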
There are two types of patch upgrades (or downgrades):
  • Basic patch upgrade: A type of upgrade where the shared memory segment is destroyed when the database is stopped. A new shared memory segment is created when the database is started. This is the preferred method for performing a patch upgrade. See "About performing a basic patch upgrade" for details.

  • Fast patch upgrade: A type of upgrade where the shared memory segment is preserved in memory when the database is stopped. The same memory segment is used when the database is started. This is the preferred method if your databases are large and you have both critical uptime requirements and short maintenance windows. See "About performing a fast patch upgrade" for details.

About performing a basic patch upgrade

A basic patch upgrade is appropriate when you do not have critical uptime requirements or short maintenance windows. When you stop the database, the contents of the shared memory segment are written to the checkpoint file on disk and the shared memory segment is then destroyed. When you start the database after the upgrade, a new shared memory segment is created and the contents of the checkpoint file are read into it. Depending on the size of your database, the checkpoint operation performed when the database stops and the subsequent ramLoad operation performed when the database starts can be time-consuming.

The process involves downloading the TimesTen full distribution (the upgrade release) and creating a new installation. The instance that requires upgrading is then modified to point to the new installation. The ttInstanceModify utility is used to perform this instance modification. As previously noted, you must close all TimesTen databases and disconnect all applications from TimesTen.
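
At a glance, the basic patch upgrade procedure described in the following sections reduces to the command sequence below. This is a condensed sketch only; database1 and the installation path are examples, and each command is explained in the steps that follow:

% ttAdmin -close database1                           # prevent new connections, then disconnect all applications
% ttAdmin -ramPolicy manual -ramUnload database1     # unload the database from memory
% ttDaemonAdmin -stop                                # stop the TimesTen main daemon
% $TIMESTEN_HOME/bin/ttInstanceModify -install new_installation_dir/tt22.1.1.11.0
% ttDaemonAdmin -start                               # restart the TimesTen main daemon
% ttAdmin -ramLoad database1                         # load the database into memory
% ttAdmin -open database1                            # reopen the database for user connections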

Download and create the new installation

To upgrade to a new patch release of TimesTen, you must first create the new installation.

  1. Create the subdirectory into which you will download and unzip the new full distribution of TimesTen. Navigate to this directory and download the new full distribution into this directory. Then, use the ZIP utility to unpack this distribution. This example creates the new_installation_dir subdirectory and unpacks the timesten2211110.server.linux8664.zip file (the 22.1.1.11.0 full distribution for Linux 64-bit). Unzipping the timesten2211110.server.linux8664.zip file creates the new installation that will be used for this patch upgrade.
    % mkdir -p new_installation_dir
    % cd new_installation_dir

    Download the full distribution into the new_installation_dir subdirectory. Then use the ZIP utility to unpack the distribution.

    % unzip /timesten/installations/timesten2211110.server.linux8664.zip
    Archive:  /timesten/installations/timesten2211110.server.linux8664.zip
       creating: tt22.1.1.11.0/
    ...
  2. Optional: Use the ttInstallationCheck utility, located in the bin subdirectory of the new installation (new_installation_dir/tt22.1.1.11.0/bin in this example), to verify that the installation is successful.
    % new_installation_dir/tt22.1.1.11.0/bin/ttInstallationCheck
    This installation has been verified.
    
  3. Optional: Verify the subdirectories are created under the full installation directory. These subdirectories may change from release to release.
    % ls new_installation_dir/tt22.1.1.11.0
    3rdparty     bin      info        network        plsql    ttoracle_home
    PERL         grid     kubernetes  nls            startup
    README.html  include  lib         oraclescripts  support
    
You have successfully created the new installation.

Unload the database from memory

Perform the following steps to unload the database from memory.

  1. Close the database. This prevents any future connections to the database.
    % ttAdmin -close database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    

    See "Opening and closing the database for user connections" in the Oracle TimesTen In-Memory Database Operations Guide.

  2. Disconnect all applications from the database. See "Disconnecting from a database" in the Oracle TimesTen In-Memory Database Operations Guide for details.
  3. Unload the database from memory. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on specifying a RAM policy.

    If the RAM policy is set to always, change it to manual, then unload the database from memory.

    % ttAdmin -ramPolicy manual -ramUnload database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    If the RAM policy is set to manual, unload the database from memory.

    ttAdmin -ramUnload database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    If the RAM policy is set to inUse and a grace period is set, set the grace period to 0 or wait for the grace period to elapse. TimesTen unloads a database with an inUse RAM policy from memory once all active connections are disconnected.

    % ttAdmin -ramGrace 0 database1
    
    RAM Residence Policy            : inUse
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

Modify the instance to point to the new installation

The patch upgrade process requires you to modify the existing TimesTen instance to point to the new installation.
Perform these steps:
  1. Use the ttDaemonAdmin utility to stop the TimesTen main daemon.
    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 21031, port: 6624) stopped.
    
  2. Use the ttInstanceModify utility to modify the myinstance instance to point to the new installation. Recall that the TimesTen full distribution was unpacked and a new installation was created in new_installation_dir/tt22.1.1.11.0. See "Download and create the new installation" for details.
    % $TIMESTEN_HOME/bin/ttInstanceModify -install new_installation_dir/tt22.1.1.11.0
    
    Instance Info (UPDATED)
    -----------------------
    
    Name:           myinstance
    Version:        22.1.1.11.0
    Location:       /scratch/ttuser/myinstance
    Installation:   new_installation_dir/tt22.1.1.11.0
    Daemon Port:    6624
    Server Port:    6625
    
    
    The instance myinstance now points to the installation in new_installation_dir/tt22.1.1.11.0
  3. Use the ttDaemonAdmin utility to restart the TimesTen main daemon. Then run the ttVersion utility to verify the myinstance instance has been upgraded to the new patch release (22.1.1.11.0, in this example).
    % ttDaemonAdmin -start
    TimesTen Daemon (PID: 20699, port: 6624) startup OK.
    
    % ttVersion
    TimesTen Release 22.1.1.11.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-15T16:53:47Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      Group owner: g900
      Daemon home directory: /scratch/ttuser/myinstance/info
      PL/SQL enabled.
    
You have successfully modified the instance to point to the new installation.

Load the database into memory

Follow these steps to load a database into memory.

  1. Load the database into memory. This example sets the RAM policy to manual and then loads the database1 database into memory.

    Set the RAM policy to manual.

    % ttAdmin -ramPolicy manual database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    Load the database1 database into memory.

    % ttAdmin -ramLoad database1
    
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database state                  : closed
    

    See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on the RAM policy.

  2. Open the database for user connections.
    % ttAdmin -open database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Open
    

    See "Opening and closing the database for user connections" in the Oracle TimesTen In-Memory Database Operations Guide.

Verify the patch upgrade

Verify the patch upgrade:
  1. Verify the instance administrator user (instanceadmin, in this example) can connect to the database1 database and perform a query.
    % ttIsql database1
    
    Copyright (c) 1996, 2021, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    
    
    
    connect "DSN=database1";
    Connection successful: DSN=database1;UID=instanceadmin;DataStore=/scratch/ttuser/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogBufMB=1024;
    PermSize=500;TempSize=300;
    (Default setting AutoCommit=1)
    Command> connect adding "uid=user1;pwd=********" as user1;
    Connection successful: DSN=database1;UID=user1;DataStore=/scratch/ttuser/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogBufMB=1024;
    PermSize=500;TempSize=300;
    (Default setting AutoCommit=1)
    user1: Command> SELECT COUNT (*) FROM employees;
    < 107 >
    1 row found.
    
You have successfully performed the patch upgrade.

About performing a fast patch upgrade

Consider performing a fast patch upgrade when you have large databases and you have both critical uptime requirements and short maintenance windows. During a fast patch upgrade, the static checkpoint operation performed at database stop is optional, and the shared memory segment is preserved after the subdaemon disconnects. When the database is started, no checkpoint load is performed and a new subdaemon attaches to the preserved shared memory segment. This reduces the time required to upgrade an instance, especially for large databases, because both the checkpoint operation at shutdown and the reload of the database into memory at startup are skipped.

To use a fast patch upgrade, the ramPolicy for the database must be set to enduring. This keeps the database image in memory after the subdaemon disconnects. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on setting a RAM policy.

The size of the TimesTen shared memory segment must remain the same before and after the fast patch upgrade. The ttShmSize utility calculates the size of the shared memory segment. Run this utility before you upgrade the instance and again after you upgrade the instance to ensure that the size of the shared memory segment has not changed. In addition, do not modify the PermSize, TempSize, LogBufMB, or Connections connection attributes after the upgrade, because these attributes affect the size of the shared memory segment.

See "ttShmSize" and "Connection Attributes" in the Oracle TimesTen In-Memory Database Reference for information on the ttShmSize utility and the TimesTen connection attributes.

Download and create the new installation

To upgrade to a new patch release of TimesTen, you must first create the new installation.

  1. Create the subdirectory into which you will download and unzip the new full distribution of TimesTen. Navigate to this directory and download the new full distribution into this directory. Then, use the ZIP utility to unpack this distribution. This example creates the new_installation_dir subdirectory and unpacks the timesten2211110.server.linux8664.zip file (the 22.1.1.11.0 full distribution for Linux 64-bit). Unzipping the timesten2211110.server.linux8664.zip file creates the new installation that will be used for this patch upgrade.
    % mkdir -p new_installation_dir
    % cd new_installation_dir

    Download the full distribution into the new_installation_dir subdirectory. Then use the ZIP utility to unpack the distribution.

    % unzip /timesten/installations/timesten2211110.server.linux8664.zip
    Archive:  /timesten/installations/timesten2211110.server.linux8664.zip
       creating: tt22.1.1.11.0/
    ...
  2. Optional: Use the ttInstallationCheck utility, located in the bin subdirectory of the new installation (new_installation_dir/tt22.1.1.11.0/bin in this example), to verify that the installation is successful.
    % new_installation_dir/tt22.1.1.11.0/bin/ttInstallationCheck
    This installation has been verified.
    
  3. Optional: Verify the subdirectories are created under the full installation directory. These subdirectories may change from release to release.
    % ls new_installation_dir/tt22.1.1.11.0
    3rdparty     bin      info        network        plsql    ttoracle_home
    PERL         grid     kubernetes  nls            startup
    README.html  include  lib         oraclescripts  support
    
You have successfully created the new installation.

Prepare to detach the subdaemon from the shared memory segment

Perform these operations on the instance created with the current release of TimesTen (22.1.1.10.0, in this example).

  1. Optional: Run the ttVersion utility to verify the current TimesTen release (22.1.1.10.0, in this example).
    % ttVersion
    TimesTen Release 22.1.1.10.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-16T07:41:05Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      Group owner: g900
      Daemon home directory: /scratch/ttuser/myinstance/info
      PL/SQL enabled.
    
  2. Run the ttStatus utility to check if the database is open to user connections and if there are connections to the database (database1, in this example). In this example, the database1 database is open and there are two user connections to the database (in addition to the subdaemon connections).
    % ttStatus
    TimesTen status report as of Fri Sep 24 05:46:05 2021
    
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    There are 14 connections to the data store
    Shared Memory Key 0x0b100699 ID 547979276
    PL/SQL Memory Key 0x0a100699 ID 547946502 Address 0x5000000000
    Type            PID     Context             Connection Name              ConnID
    Process         15076   0x0000000001f09990  database1                         1
    Process         15076   0x00000000020272b0  conn2                             2
    Subdaemon       21036   0x0000000000f3c260  Manager                        2047
    Subdaemon       21036   0x0000000000fbdbc0  Rollback                       2046
    Subdaemon       21036   0x000000000103cf40  XactId Rollback                2037
    Subdaemon       21036   0x00007f9fbc0008c0  Deadlock Detector              2043
    Subdaemon       21036   0x00007f9fc00008c0  Checkpoint                     2042
    Subdaemon       21036   0x00007f9fc007f9e0  Garbage Collector              2036
    Subdaemon       21036   0x00007f9fc40008c0  Monitor                        2044
    Subdaemon       21036   0x00007f9fcc0008c0  Flusher                        2045
    Subdaemon       21036   0x00007f9fcc0a0e70  Aging                          2041
    Subdaemon       21036   0x00007fa04c0008c0  HistGC                         2039
    Subdaemon       21036   0x00007fa0501bbb70  Log Marker                     2040
    Subdaemon       21036   0x00007fa054048370  IndexGC                        2038
    Open for user connections
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
  3. Use the ttAdmin utility to close the database1 database. This prevents further user connections.
    % ttAdmin -close database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  4. Disconnect all applications from the database. Run the ttStatus utility to verify there are no connections to the database (database1, in this example).
    % ttStatus
    TimesTen status report as of Fri Sep 24 05:49:55 2021
    
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    There are 12 connections to the data store
    Shared Memory Key 0x0b100699 ID 547979276
    PL/SQL Memory Key 0x0a100699 ID 547946502 Address 0x5000000000
    Type            PID     Context             Connection Name              ConnID
    Subdaemon       21036   0x0000000000f3c260  Manager                        2047
    Subdaemon       21036   0x0000000000fbdbc0  Rollback                       2046
    Subdaemon       21036   0x000000000103cf40  XactId Rollback                2037
    Subdaemon       21036   0x00007f9fbc0008c0  Deadlock Detector              2043
    Subdaemon       21036   0x00007f9fc00008c0  Checkpoint                     2042
    Subdaemon       21036   0x00007f9fc007f9e0  Garbage Collector              2036
    Subdaemon       21036   0x00007f9fc40008c0  Monitor                        2044
    Subdaemon       21036   0x00007f9fcc0008c0  Flusher                        2045
    Subdaemon       21036   0x00007f9fcc0a0e70  Aging                          2041
    Subdaemon       21036   0x00007fa04c0008c0  HistGC                         2039
    Subdaemon       21036   0x00007fa0501bbb70  Log Marker                     2040
    Subdaemon       21036   0x00007fa054048370  IndexGC                        2038
    Closed to user connections
    RAM residence policy: Manual
    Data store is manually loaded into RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
  5. Run the ttShmSize utility to return the size of the shared memory segment. This size must match the size of the shared memory segment after the fast patch upgrade is completed.
    % ttShmSize -connStr DSN=database1
    The required shared memory size is 2148239512 bytes.
    
You have completed the preparatory steps to disconnect the subdaemon from the shared memory segment.

Detach the subdaemon from the shared memory segment

Perform these steps to disconnect the subdaemon from the shared memory segment.

  1. Run the ttAdmin utility to check the ramPolicy for the database1 database. In this example, the ramPolicy is set to manual and the database1 database is manually loaded in RAM.
    % ttAdmin -query database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  2. Use the ttAdmin utility to change the ramPolicy to enduring. The enduring setting preserves the shared memory segment in memory when the subdaemon disconnects from the shared memory segment.
    % ttAdmin -ramPolicy enduring database1
    RAM Residence Policy            : enduring
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  3. Use the ttAdmin utility with the -shmDetach option to disconnect the subdaemon from the shared memory segment.
    % ttAdmin -shmDetach database1
    RAM Residence Policy            : enduring
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  4. Use the ttStatus utility to verify the subdaemon is disconnected from the shared memory segment.
    % ttStatus
    TimesTen status report as of Fri Sep 24 06:12:04 2021
    
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 21031 port 6624 instance myinstance
    TimesTen server pid 21039 started on port 6625
    There are no connections to the data store
    Closed to user connections
    RAM residence policy: Enduring
    Subdaemon is manually detached from data store (Shared Memory Key 0x0b100699 ID 547979276)
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
The subdaemon is disconnected from the shared memory segment. You are now ready to perform the patch upgrade.

Modify the instance to point to the new installation

The patch upgrade process requires you to modify the existing TimesTen instance to point to the new installation.
Perform these steps:
  1. Use the ttDaemonAdmin utility to stop the TimesTen main daemon.
    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 21031, port: 6624) stopped.
    
  2. Use the ttInstanceModify utility to modify the myinstance instance to point to the new installation. Recall that the TimesTen full distribution was unpacked and a new installation was created in new_installation_dir/tt22.1.1.11.0. See "Download and create the new installation" for details.
    % $TIMESTEN_HOME/bin/ttInstanceModify -install new_installation_dir/tt22.1.1.11.0
    
    Instance Info (UPDATED)
    -----------------------
    
    Name:           myinstance
    Version:        22.1.1.11.0
    Location:       /scratch/ttuser/myinstance
    Installation:   new_installation_dir/tt22.1.1.11.0
    Daemon Port:    6624
    Server Port:    6625
    
    
    The instance myinstance now points to the installation in new_installation_dir/tt22.1.1.11.0
  3. Use the ttDaemonAdmin utility to restart the TimesTen main daemon. Then run the ttVersion utility to verify the myinstance instance has been upgraded to the new patch release (22.1.1.11.0, in this example).
    % ttDaemonAdmin -start
    TimesTen Daemon (PID: 20699, port: 6624) startup OK.
    
    % ttVersion
    TimesTen Release 22.1.1.11.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-15T16:53:47Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      Group owner: g900
      Daemon home directory: /scratch/ttuser/myinstance/info
      PL/SQL enabled.
    
You have successfully modified the instance to point to the new installation.

Attach a new subdaemon to the existing shared memory segment

Perform these steps to connect a new subdaemon to the existing shared memory segment:

  1. Run the ttShmSize utility to return the size of the shared memory segment. This size must match the size of the shared memory segment before the patch upgrade. Recall the size was 2148239512 bytes. See "Prepare to detach the subdaemon from the shared memory segment" for details.
    % ttShmSize -connStr DSN=database1
    The required shared memory size is 2148239512 bytes.
    
  2. Use the ttAdmin utility to attach a new subdaemon to the existing shared memory segment.
    % ttAdmin -shmAttach database1
    RAM Residence Policy            : enduring
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  3. Use the ttStatus utility to verify the subdaemon is connected to the shared memory segment.
    % ttStatus
    TimesTen status report as of Fri Sep 24 06:35:10 2021
    
    Daemon pid 20699 port 6624 instance myinstance
    TimesTen server pid 20706 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 20699 port 6624 instance myinstance
    TimesTen server pid 20706 started on port 6625
    There are 12 connections to the data store
    Shared Memory Key 0x0b100699 ID 547979276
    PL/SQL Memory Key 0x0d100699 ID 548044806 Address 0x5000000000
    Type            PID     Context             Connection Name              ConnID
    Subdaemon       20704   0x000000000207f260  Manager                        2047
    Subdaemon       20704   0x0000000002100bc0  Rollback                       2046
    Subdaemon       20704   0x000000000217ff40  Aging                          2041
    Subdaemon       20704   0x00007f7ac40008c0  Checkpoint                     2042
    Subdaemon       20704   0x00007f7ac407f9e0  Garbage Collector              2040
    Subdaemon       20704   0x00007f7acc0008c0  Monitor                        2045
    Subdaemon       20704   0x00007f7acc0a0e70  IndexGC                        2038
    Subdaemon       20704   0x00007f7ad00008c0  Deadlock Detector              2043
    Subdaemon       20704   0x00007f7ad007f9e0  XactId Rollback                2039
    Subdaemon       20704   0x00007f7ad40008c0  Flusher                        2044
    Subdaemon       20704   0x00007f7ad407f9e0  HistGC                         2037
    Subdaemon       20704   0x00007f7b580bed90  Log Marker                     2036
    Closed to user connections
    RAM residence policy: Enduring
    Data store is manually loaded into RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
  4. Use the ttAdmin utility to change the ramPolicy back to manual.
    % ttAdmin -ramPolicy manual database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  5. Use the ttAdmin utility to open the database1 database for user connections.
    % ttAdmin -open database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Open
    
  6. Verify the instance administrator user (instanceadmin, in this example) can connect to the database1 database and perform a query.
    % ttIsql database1
    
    Copyright (c) 1996, 2021, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    
    
    
    connect "DSN=database1";
    Connection successful: DSN=database1;UID=instanceadmin;DataStore=/scratch/ttuser/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogBufMB=1024;PermSize=500;
    TempSize=300;
    (Default setting AutoCommit=1)
    Command> connect adding "uid=user1;pwd=********" as user1;
    Connection successful: DSN=database1;UID=user1;DataStore=/scratch/ttuser/database1;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogBufMB=1024;PermSize=500;
    TempSize=300;
    (Default setting AutoCommit=1)
    user1: Command> SELECT COUNT (*) FROM employees;
    < 107 >
    1 row found.
    
A new subdaemon connected to the preserved shared memory segment. The fast patch upgrade is successful.

Moving to a different patch release using ttBackup and ttRestore

You can run the ttBackup and ttRestore utilities to move to a new patch release, although this is not the preferred method. See "About moving to a different patch release by modifying the instance" for the preferred method.

Perform these steps for each database.

On the old release:

  1. Use the ttAdmin utility to close the database1 database. This prevents further user connections.
    % ttAdmin -close database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  2. Disconnect all applications from the database. Run the ttStatus utility to verify there are no connections to the database (database1, in this example).
    % ttStatus
    TimesTen status report as of Sat Oct  2 04:37:10 2021
    
    Daemon pid 4649 port 6624 instance myinstance
    TimesTen server pid 4656 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 4649 port 6624 instance myinstance
    TimesTen server pid 4656 started on port 6625
    There are no connections to the data store
    Closed to user connections
    RAM residence policy: manual
    Data store is manually loaded into RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
  3. Run the ttVersion utility to verify the current release (22.1.1.10.0, in this example).

    % ttVersion
    TimesTen Release 22.1.1.10.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-16T07:41:05Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      Group owner: g900
      Daemon home directory: /scratch/ttuser/myinstance/info
      PL/SQL enabled.
  4. Back up the database. In this example, back up the database1 database for release 22.1.1.10.0.

    % ttBackup -dir /tmp/dump/backup -fname database1_2211 database1
    Backup started ...
    Backup complete
    
  5. Unload the database from memory. This example assumes a RAM policy of manual. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on the RAM policy.
    % ttAdmin -ramUnload database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  6. Stop the TimesTen main daemon.
    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 4649, port: 6624) stopped.

For the new release:

  1. Create the subdirectory into which you will download and unzip the new full distribution of TimesTen. Navigate to this directory and download the new full distribution into this directory. Then, use the ZIP utility to unpack this distribution. This example creates the new_installation_dir subdirectory and unpacks the timesten2211110.server.linux8664.zip file (the 22.1.1.11.0 full distribution for Linux 64-bit). Unzipping the timesten2211110.server.linux8664.zip file creates the new installation that will be used for this patch upgrade.
    % mkdir -p new_installation_dir
    % cd new_installation_dir

    Download the full distribution into the new_installation_dir subdirectory. Then use the ZIP utility to unpack the distribution.

    % unzip /timesten/installations/timesten2211110.server.linux8664.zip
    Archive:  /timesten/installations/timesten2211110.server.linux8664.zip
       creating: tt22.1.1.11.0/
    ...
  2. Run the ttInstanceCreate utility to create the instance. This example runs the ttInstanceCreate utility interactively. See "ttInstanceCreate" in the Oracle TimesTen In-Memory Database Reference and "Creating an instance on Linux/UNIX: Basics" in this book for details.

    Navigate to the new_installation_dir/tt22.1.1.11.0/bin area of the installation directory and then run the ttInstanceCreate utility located in that directory. The ttInstanceCreate utility must be run from the installation directory. User input is shown in bold.

    % new_installation_dir/tt22.1.1.11.0/bin/ttInstanceCreate
    
    NOTE: Each TimesTen instance is identified by a unique name.
          The instance name must be a non-null alphanumeric string, not longer
          than 255 characters.
    
    Please choose an instance name for this installation? [ tt221 ] myinstance
    Instance name will be 'myinstance'.
    Is this correct? [ yes ]
    Where would you like to install the myinstance instance of TimesTen? [ /home/ttuser ] /scratch/ttuser
    The directory /scratch/ttuser/ does not exist.
    Do you want to create it? [ yes ]
    Creating instance in /scratch/ttuser/myinstance ...
    
    NOTE: If you are configuring TimesTen for use with Oracle Clusterware, the
          daemon port number must be the same across all TimesTen installations
          managed within the same Oracle Clusterware cluster.
    
    NOTE: All installations that replicate to each other must use the same daemon
          port number that is set at installation time. The daemon port number can
          be verified by running 'ttVersion'.
    
    The default port number is 6624.
    
    Do you want to use the default port number for the TimesTen daemon? [ yes ]
    The daemon will run on the default port number (6624).
    
    In order to use the cache features in any TimesTen databases
    created within this instance, you must set a value for the TNS_ADMIN
    environment variable. It can be left blank, and a value can be supplied later
    using <install_dir>/bin/ttInstanceModify.
    
    Please enter a value for TNS_ADMIN (s=skip)? [  ] s
    What is the TCP/IP port number that you want the TimesTen Server to listen on? [ 6625 ]
    
    Would you like to use TimesTen Replication with Oracle Clusterware? [ no ]
    
    Would you like to use systemd to manage TimesTen? [ no ]
    
    NOTE: The TimesTen daemon startup/shutdown scripts have not been installed.
    
    The startup script is located here :
            '/scratch/ttuser/myinstance/startup/tt_myinstance'
    
    Run the 'setuproot' script :
            /scratch/ttuser/myinstance/bin/setuproot -install
    This will move the TimesTen startup script into its appropriate location.
    
    The 22.1 Release Notes are located here :
      'new_installation_dir/tt22.1.1.11.0/README.html'
    
    Starting the daemon ...
    TimesTen Daemon (PID: 11121, port: 6624) startup OK.
    Instance created successfully.
    
  3. Restore the database. Source the environment variables, make all necessary changes to your connection attributes in the sys.odbc.ini (or the odbc.ini) file, and start the daemon (if not already started) prior to restoring the database. A sketch of these preparation steps appears after this procedure.
    % ttRestore -dir /tmp/dump/backup -fname database1_2211 database1
    Restore started ...
    Restore complete

Once your databases are correctly configured and fully operational, you can optionally remove the backup file (in this example, /tmp/dump/backup/database1_2211).
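
As referenced in step 3, the following is a minimal sketch of preparing the environment before running ttRestore. It assumes the ttenv.sh script in the instance bin directory and a database1 DSN whose attributes match the example connection strings used earlier in this chapter; adjust the paths and attribute values for your own instance:

% . /scratch/ttuser/myinstance/bin/ttenv.sh     # source the instance environment variables (assumes ttenv.sh is used)

A corresponding sys.odbc.ini entry for the database might look like this (attribute values are examples only):

[database1]
DataStore=/scratch/ttuser/database1
DatabaseCharacterSet=AL32UTF8
ConnectionCharacterSet=AL32UTF8
PermSize=500
TempSize=300
LogBufMB=1024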

Moving to a different major release using ttMigrate

Moving to a different major release is done through migration. Migration includes upgrading from one major TimesTen release to a new major TimesTen release, or changing the operating system platform that TimesTen runs on.

Migration involves copying out the schema and data from one database, creating a new database with the new release, and then creating the schema and inserting the data into the new database. The ttMigrate utility is used to automate the migration of databases. See "ttMigrate" in the Oracle TimesTen In-Memory Database Reference for information on the ttMigrate utility.

Before migrating a database from one major release to another, ensure you back up the database in the old release. See "ttBackup" and "ttRestore" in the Oracle TimesTen In-Memory Database Reference and "Backing up and restoring a database" in this book for details.
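
In outline, the migration uses two ttMigrate invocations, one against each instance. This is only a sketch; the data file name is an example and full transcripts appear in the steps below:

% ttMigrate -c database1 /tmp/database1.data                       # on the old release: copy out the schema and data
% ttMigrate -r -relaxedUpgrade mynewdatabase /tmp/database1.data   # on the new release: restore into the new database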

Follow these steps to perform the upgrade:

For the old release:

  1. Use the ttAdmin utility to close the database1 database. This prevents further user connections.

    % ttAdmin -close database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : True
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  2. Disconnect all applications from the database. Run the ttStatus utility to verify there are no connections to the database (database1, in this example).

    % ttStatus
    TimesTen status report as of Sat Oct  2 18:31:59 2021
    
    Daemon pid 28436 port 6624 instance myinstance
    TimesTen server pid 28443 started on port 6625
    ------------------------------------------------------------------------
    ------------------------------------------------------------------------
    Data store /scratch/ttuser/database1
    Daemon pid 28436 port 6624 instance myinstance
    TimesTen server pid 28443 started on port 6625
    There are 13 connections to the data store
    Shared Memory KEY 0x061014ae ID 491521
    PL/SQL Memory Key 0x071014ae ID 524290 Address 0x5000000000
    Type            PID     Context             Connection Name              ConnID
    Subdaemon       28440   0x0000000001893250  Manager                        2047
    Subdaemon       28440   0x0000000001914210  Rollback                       2046
    Subdaemon       28440   0x00007f55d80008c0  Deadlock Detector              2043
    Subdaemon       28440   0x00007f55d807f330  Log Marker                     2040
    Subdaemon       28440   0x00007f55dc0008c0  Monitor                        2044
    Subdaemon       28440   0x00007f55dc07f330  AsyncMV                        2039
    Subdaemon       28440   0x00007f55e00008c0  Checkpoint                     2042
    Subdaemon       28440   0x00007f55e007f330  Aging                          2041
    Subdaemon       28440   0x00007f55e40008c0  Flusher                        2045
    Subdaemon       28440   0x00007f55e40a6970  HistGC                         2038
    Subdaemon       28440   0x00007f56600008c0  XactId Rollback                2036
    Subdaemon       28440   0x00007f56641b9cb0  IndexGC                        2037
    Subdaemon       28440   0x00007f5668048360  Garbage Collector              2035
    Closed to user connections
    RAM residence policy: Manual
    Data store is manually loaded into RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    ------------------------------------------------------------------------
    Accessible by group g900
    End of report
    
  3. Run the ttVersion utility to verify the current release.

    % ttVersion
    TimesTen Release 22.1.1.10.0 (64 bit Linux/x86_64) (myinstance:6624) 2021-09-12T07:34:06Z
      Instance admin: instanceadmin
      Instance home directory: /scratch/ttuser/myinstance
      Group owner: g900
      Daemon home directory: /scratch/ttuser/myinstance/info
      PL/SQL enabled.
    
  4. Use the ttMigrate utility to copy out the schema and data from the database (database1, in this example).
    % ttMigrate -c database1 /tmp/database1.data
    
    Saving profile DEFAULT
    Profile successfully saved.
    
    Saving profile SYSTEM
    Profile successfully saved.
    
    Saving user PUBLIC
    User successfully saved.
    
    Saving table TTUSER.COUNTRIES
      Saving foreign key constraint COUNTR_REG_FK
      Saving rows...
      25/25 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.DEPARTMENTS
      Saving foreign key constraint DEPT_LOC_FK
      Saving rows...
      27/27 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.EMPLOYEES
      Saving index TTUSER.TTUNIQUE_0
      Saving foreign key constraint EMP_DEPT_FK
      Saving foreign key constraint EMP_JOB_FK
      Saving rows...
      107/107 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.JOBS
      Saving rows...
      19/19 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.JOB_HISTORY
      Saving foreign key constraint JHIST_DEPT_FK
      Saving foreign key constraint JHIST_EMP_FK
      Saving foreign key constraint JHIST_JOB_FK
      Saving rows...
      10/10 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.LOCATIONS
      Saving foreign key constraint LOC_C_ID_FK
      Saving rows...
      23/23 rows saved.
    Table successfully saved.
    
    Saving table TTUSER.REGIONS
      Saving rows...
      4/4 rows saved.
    Table successfully saved.
    
    Saving view TTUSER.EMP_DETAILS_VIEW
    View successfully saved.
    
    Saving sequence TTUSER.DEPARTMENTS_SEQ
    Sequence successfully saved.
    
    Saving sequence TTUSER.EMPLOYEES_SEQ
    Sequence successfully saved.
    
    Saving sequence TTUSER.LOCATIONS_SEQ
    Sequence successfully saved.
    
  5. Unload the database from memory. This example assumes a RAM policy of manual. See "Specifying a RAM policy" in the Oracle TimesTen In-Memory Database Operations Guide for information on the RAM policy.

    % ttAdmin -ramUnload database1
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    Database State                  : Closed
    
  6. Stop the TimesTen main daemon.

    % ttDaemonAdmin -stop
    TimesTen Daemon (PID: 28436, port: 6624) stopped.
    
  7. Copy the migrated object files (/tmp/database1.data, in this example) to a file system that is accessible by the instance in the new release.

For the new release:

  1. Create the subdirectory into which you will download and unzip the new full distribution of TimesTen. Navigate to this directory and download the new full distribution into this directory. Then, use the ZIP utility to unpack this distribution. This example creates the new_installation_dir subdirectory and unpacks the timesten2211100.server.linux8664.zip file (the 22.1.1.10.0 full distribution for Linux 64-bit). Unzipping the timesten2211100.server.linux8664.zip file creates the new installation that will be used for this upgrade.
    % mkdir -p new_installation_dir
    % cd new_installation_dir

    Download the full distribution into the new_installation_dir subdirectory. Then use the ZIP utility to unpack the distribution.

    % unzip /timesten/installations/timesten2211100.server.linux8664.zip
    Archive:  /timesten/installations/timesten2211100.server.linux8664.zip
       creating: tt22.1.1.10.0/
    ...
  2. Run the ttInstanceCreate utility to create the instance. This example runs the ttInstanceCreate utility interactively. See "ttInstanceCreate" in the Oracle TimesTen In-Memory Database Reference and "Creating an instance on Linux/UNIX: Basics" in this book for details.
    User input is shown in bold.
    % new_installation_dir/tt22.1.1.10.0/bin/ttInstanceCreate
    
    NOTE: Each TimesTen instance is identified by a unique name.
          The instance name must be a non-null alphanumeric string, not longer
          than 255 characters.
    
    Please choose an instance name for this installation? [ tt221 ] myinstance
    Instance name will be 'myinstance'.
    Is this correct? [ yes ]
    Where would you like to install the myinstance instance of TimesTen? [ /home/ttuser ] /scratch/ttuser
    The directory /scratch/ttuser/myinstance does not exist.
    Do you want to create it? [ yes ]
    Creating instance in /scratch/ttuser/myinstance ...
    
    NOTE: If you are configuring TimesTen for use with Oracle Clusterware, the
          daemon port number must be the same across all TimesTen installations
          managed within the same Oracle Clusterware cluster.
    
    NOTE: All installations that replicate to each other must use the same daemon
          port number that is set at installation time. The daemon port number can
          be verified by running 'ttVersion'.
    
    The default port number is 6624.
    
    Do you want to use the default port number for the TimesTen daemon? [ yes ]
    The daemon will run on the default port number (6624).
    
    In order to use the cache features in any TimesTen databases
    created within this instance, you must set a value for the TNS_ADMIN
    environment variable. It can be left blank, and a value can be supplied later
    using <install_dir>/bin/ttInstanceModify.
    
    Please enter a value for TNS_ADMIN (s=skip)? [  ] s
    What is the TCP/IP port number that you want the TimesTen Server to listen on? [ 6625 ]
    
    Would you like to use TimesTen Replication with Oracle Clusterware? [ no ]
    
    Would you like to use systemd to manage TimesTen? [ no ]
    
    NOTE: The TimesTen daemon startup/shutdown scripts have not been installed.
    
    The startup script is located here :
            '/scratch/ttuser/myinstance/startup/tt_myinstance'
    
    Run the 'setuproot' script :
            /scratch/ttuser/myinstance/bin/setuproot -install
    This will move the TimesTen startup script into its appropriate location.
    
    The 22.1 Release Notes are located here :
      'new_installation_dir/tt22.1.1.10.0/README.html'
    
    Starting the daemon ...
    TimesTen Daemon (PID: 2214, port: 6624) startup OK.
    Instance created successfully.
  3. From the instance of the new release, create a database. Ensure you have sourced the environment variables, made all necessary changes to your connection attributes in the sys.odbc.ini (or the odbc.ini) file, and started the daemon (if not already started).

    To create the database:

    % ttIsql -connStr "DSN=mynewdatabase;AutoCreate=1" -e "quit"
    
    Copyright (c) 1996, 2021, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    
    
    
    connect "DSN=mynewdatabase;AutoCreate=1";
    Connection successful: DSN=mynewdatabase;UID=ttuser;DataStore=/scratch/ttuser/mynewdatabase;
    DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogBufMB=1024;PermSize=500;
    TempSize=300;
    (Default setting AutoCommit=1)
    
    quit;
    Disconnecting...
    Done.
    

    The database will be empty at this point.

  4. From the instance of the new release, run the ttMigrate utility with the -r and -relaxedUpgrade options to restore the backed up database to the new release. For example:
    % $TIMESTEN_HOME/bin/ttMigrate -r -relaxedUpgrade mynewdatabase /tmp/database1.data
    
    Restoring profile DEFAULT
    Profile successfully restored.
    
    Restoring profile SYSTEM
    Profile successfully restored.
    
    Restoring table TTUSER.JOBS
      Restoring rows...
      19/19 rows restored.
    Table successfully restored.
    
    Restoring table TTUSER.REGIONS
      Restoring rows...
      4/4 rows restored.
    Table successfully restored.
    
    Restoring table TTUSER.COUNTRIES
      Restoring rows...
      25/25 rows restored.
      Restoring foreign key dependency COUNTR_REG_FK on TTUSER.REGIONS
    Table successfully restored.
    
    Restoring table TTUSER.LOCATIONS
      Restoring rows...
      23/23 rows restored.
      Restoring foreign key dependency LOC_C_ID_FK on TTUSER.COUNTRIES
    Table successfully restored.
    
    Restoring table TTUSER.DEPARTMENTS
      Restoring rows...
      27/27 rows restored.
      Restoring foreign key dependency DEPT_LOC_FK on TTUSER.LOCATIONS
    Table successfully restored.
    
    Restoring table TTUSER.EMPLOYEES
      Restoring rows...
      107/107 rows restored.
      Restoring foreign key dependency EMP_DEPT_FK on TTUSER.DEPARTMENTS
      Restoring foreign key dependency EMP_JOB_FK on TTUSER.JOBS
    Table successfully restored.
    
    Restoring table TTUSER.JOB_HISTORY
      Restoring rows...
      10/10 rows restored.
      Restoring foreign key dependency JHIST_DEPT_FK on TTUSER.DEPARTMENTS
      Restoring foreign key dependency JHIST_EMP_FK on TTUSER.EMPLOYEES
      Restoring foreign key dependency JHIST_JOB_FK on TTUSER.JOBS
    Table successfully restored.
    
    Restoring view TTUSER.EMP_DETAILS_VIEW
    View successfully restored.
    
    Restoring sequence TTUSER.DEPARTMENTS_SEQ
    Sequence successfully restored.
    
    Restoring sequence TTUSER.EMPLOYEES_SEQ
    Sequence successfully restored.
    
    Restoring sequence TTUSER.LOCATIONS_SEQ
    Sequence successfully restored.

Once the database is operational in the new release, create a backup of this database to have a valid restoration point for your database. Once you have created a backup of your database, you may delete the ttMigrate copy of your database (in this example, /tmp/database1.data). Optionally, for the old release, you can remove the instance and delete the installation.
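
For example, you might create the post-migration backup with the ttBackup utility (a sketch; the backup directory and file name are examples only):

% ttBackup -dir /tmp/dump/backup -fname mynewdatabase_postmigration mynewdatabase
Backup started ...
Backup complete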

Ensure you recompile and relink existing ODBC and OCI applications after you perform the upgrade and before you use the new release of TimesTen. See "Overview of ODBC API incompatibilities" in the Oracle TimesTen In-Memory Database C Developer's Guide for more information.

Online upgrade: Using TimesTen replication

When upgrading to a new release of TimesTen Classic, you may have a mission-critical database that must remain continuously available to your applications. You can use TimesTen replication to keep two copies of a database synchronized, even when the copies are from different releases of TimesTen. Your applications stay connected to one copy of the database while the instance for the other copy is being upgraded. When the upgrade is finished, any updates that have been made on the active database are transmitted immediately to the database in the upgraded instance, and your applications can then be switched over with no data loss and no downtime. See "Performing an online upgrade with classic replication" for information.

The online upgrade process supports only updates to user tables during the upgrade. The tables to be replicated must have a PRIMARY KEY or a unique index on non-nullable columns. Data definition changes such as CREATE TABLE or CREATE INDEX are not replicated, except for an active standby pair with DDLReplicationLevel set to 2, in which case CREATE TABLE and CREATE INDEX statements are replicated.
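
For example, a minimal sketch of enabling this behavior from ttIsql before issuing the DDL, assuming you prefer to change the attribute at the session level rather than in the DSN:

Command> ALTER SESSION SET ddl_replication_level = 2;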

Because two copies of the database (or two copies of each database, if there is more than one) are required during the upgrade, you must have twice the memory and disk space normally required available if you perform the upgrade on a single host.

Note:

  • Online major upgrades for active standby pairs with cache groups are only supported for read-only cache groups.

  • Online major upgrades for active standby pairs that are managed by Oracle Clusterware are not supported.

Performing an online upgrade with classic replication

This section describes how to use the TimesTen replication feature to perform online upgrades for applications that require continuous data availability.

This procedure is for classic replication in a unidirectional, bidirectional, or multidirectional scenario.

Typically, applications that require high availability of their data use TimesTen replication to keep at least one extra copy of their databases up to date. An online upgrade works by keeping one of these two copies available to the application while the other is being upgraded. The procedures described in this section assume that you have a bidirectional replication scheme configured and running for two databases, as described in "Unidirectional or bidirectional replication" in the Oracle TimesTen In-Memory Database Replication Guide.

The following sections describe how to perform an online upgrade with replication.

Requirements

To perform online upgrades with replication, replication must be configured to use static ports. See "Port assignments" in Oracle TimesTen In-Memory Database Replication Guide for information.

Additional disk space must be allocated to hold a backup copy of the database made by the ttMigrate utility. The size of the backup copy is typically about the same as the in-use size of the database. This size may be determined by querying the v$monitor view, using ttIsql:

Command> SELECT perm_in_use_size FROM v$monitor;
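
If you prefer to run the check from the shell, the following sketch combines the same query with a free-space check, assuming a DSN named upgrade and a backup destination under /backup:

% ttIsql -e 'SELECT perm_in_use_size FROM v$monitor; quit;' upgrade
% df -h /backup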

Upgrade steps

The following steps illustrate how to perform an online upgrade while replication is running. The upgrade host is the host on which the database upgrade is being performed, and the active host is the host containing the database to which the application remains connected.

Step 1
Upgrade host: Configure replication to replicate to the active host using static ports.
Active host: Configure replication to replicate to the upgrade host using static ports.

Step 2
Upgrade host: n/a
Active host: Connect all applications to the active database, if they are not already connected.

Step 3
Upgrade host: Disconnect all applications from the database that will be upgraded.
Active host: n/a

Step 4
Upgrade host: n/a
Active host: Set replication to the upgrade host to the pause state.

Step 5
Upgrade host: Wait for updates to propagate to the active host.
Active host: n/a

Step 6
Upgrade host: Stop replication.
Active host: n/a

Step 7
Upgrade host: Back up the database with ttMigrate -c and then run ttDestroy to destroy the database.
Active host: n/a

Step 8
Upgrade host: Stop the TimesTen daemon for the old release.
Active host: n/a

Step 9
Upgrade host: Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
Active host: n/a

Step 10
Upgrade host: Create a DSN for the post-upgrade database for the new release. Adjust parallelism options for the DSN.
Active host: n/a

Step 11
Upgrade host: Restore the database from the backup with ttMigrate -r.
Active host: n/a

Step 12
Upgrade host: Clear the replication bookmark and logs by running ttRepAdmin -receiver -reset and by setting replication to the active host first to the stop state and then to the start state.
Active host: n/a

Step 13
Upgrade host: Start replication.
Active host: n/a

Step 14
Upgrade host: n/a
Active host: Set replication to the upgrade host to the start state so that the accumulated updates propagate once replication is restarted.

Step 15
Upgrade host: n/a
Active host: Start replication.

Step 16
Upgrade host: n/a
Active host: Wait for all of the updates to propagate to the upgrade host.

Step 17
Upgrade host: Reconnect all applications to the post-upgrade database.
Active host: n/a

After the above procedures are completed on the upgrade host, the active host can be upgraded using the same steps.

Online upgrade example

This section describes how to perform an online upgrade in a scenario with two bidirectionally replicated databases.

In the following discussion, the two hosts are referred to as the upgrade host, on which the instance (with its databases) is being upgraded, and the active host, which remains operational and connected to the application for the duration of the upgrade. After the procedure is completed, the same steps can be followed to upgrade the active host. However, you may prefer to delay conversion of the active host to first test the upgraded instance.

The upgrade host in this example consists of the database upgrade on the server upgradehost. The active host consists of the database active on the server activehost.

Follow these steps in the order they are presented:

Step 1

Upgrade host: Use ttIsql to alter the replication scheme repscheme, setting static replication port numbers so that the databases can communicate across releases:

Command> call ttRepStop;
Command> ALTER REPLICATION repscheme ALTER STORE upgrade ON upgradehost SET PORT 40000 ALTER STORE active ON activehost SET PORT 40001;
Command> call ttRepStart;

Active host: Use ttIsql to alter the replication scheme repscheme, setting static replication port numbers so that the databases can communicate across releases:

Command> call ttRepStop;
Command> ALTER REPLICATION repscheme ALTER STORE upgrade ON upgradehost SET PORT 40000 ALTER STORE active ON activehost SET PORT 40001;
Command> call ttRepStart;

Step 2

Upgrade host: Disconnect all production applications connected to the database. Any workload being run on the upgrade host must start running on the active host instead.

Active host: Use the ttRepAdmin utility to pause replication from the database active to the database upgrade:

ttRepAdmin -receiver -name upgrade
 -state pause active

This command temporarily stops the replication of updates from the database active to the database upgrade, but it retains any updates made to active in the database transaction log files. The updates made to active during the upgrade procedure are applied later, when upgrade is brought back up.

See "Set the replication state of subscribers" in Oracle TimesTen In-Memory Database Replication Guide for details.

Step 3

Upgrade host: Wait for all replication updates to be sent to the database active. You can verify that all updates have been sent by applying a recognizable update to a table reserved for that purpose on the database upgrade. When the update appears in the database active, you know that all previous updates have been sent.

For example, call the ttRepSubscriberWait built-in procedure. You should expect a value of <00> to be returned, indicating a clean response rather than a time out. (If there is a time out, ttRepSubscriberWait returns a value of 01.)

Command> call ttRepSubscriberWait (,,,,60);
< 00 >
1 row found.

See "ttRepSubscriberWait" in the Oracle TimesTen In-Memory Database Reference for information.

Active host: n/a

Step 4

Upgrade host: Stop the replication agent with ttAdmin:

ttAdmin -repStop upgrade

From this point on, no updates are sent to the database active.

Active host: Stop the replication agent with ttAdmin:

ttAdmin -repStop active

From this point on, no updates are sent to the database upgrade.

See "Starting and stopping the replication agents" in Oracle TimesTen In-Memory Database Replication Guide for details.

Step 5

Upgrade host: Use ttMigrate to back up the database upgrade. If the database is very large, this step could take a significant amount of time. If sufficient disk space is free on the /backup file system, use the following ttMigrate command:

ttMigrate -c upgrade /backup/upgrade.dat

Active host: n/a

Step 6

Upgrade host: If the ttMigrate command is successful, destroy the database upgrade:

ttDestroy upgrade

Active host: Restart the replication agent on the database active:

ttAdmin -repStart active

Step 7

Upgrade host: Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.

Active host: Resume replication from active to upgrade by setting the replication state to start:

ttRepAdmin -receiver -name upgrade
 -state start active

Step 8

Upgrade host: Use ttMigrate to load the backup created in step 5 into the database upgrade for the new release:

ttMigrate -r upgrade /backup/upgrade.dat

Change the RAM policy to manual (the RAM policy is set to inUse by default) and load the database into memory:

ttAdmin -ramPolicy manual upgrade
ttAdmin -ramLoad upgrade

Note: In this step, you must use the ttMigrate utility contained in the new release to which you are upgrading.

Active host: n/a

Step 9

Upgrade host: Use ttRepAdmin to clear the replication bookmark and logs by resetting the receiver state for the database active and then setting replication to the stop state and then the start state:

ttRepAdmin -receiver -name active
   -reset upgrade
ttRepAdmin -receiver -name active
   -state stop upgrade
sleep 10
ttRepAdmin -receiver -name active
   -state start upgrade
sleep 10

Note: The sleep command ensures that each state change takes effect; a state change can take up to 10 seconds, depending on the resources and operating system.

Active host: n/a

Step 10

Upgrade host: Use ttAdmin to start the replication agent on the new database upgrade and to begin sending updates to the database active:

ttAdmin -repStart upgrade

Active host: n/a

Step 11

Upgrade host: Verify that the database upgrade is receiving updates from the database active. You can verify that updates are sent by applying a recognizable update to a table reserved for that purpose in the database active. When the update appears in upgrade, you know that replication is operational.

Active host: If the applications are still running on the database active, let them continue until the database upgrade has been successfully migrated and you have verified that the updates are being replicated correctly from active to upgrade.

Step 12

Upgrade host: n/a

Active host: Once you are sure that updates are replicated correctly, you can disconnect all of the applications from the database active and reconnect them to the database upgrade. After verifying that the last of the updates from active are replicated to upgrade, the instance with active is ready to be upgraded.

Note: You may choose to delay upgrading the instance with active to the new release until sufficient testing has been performed with the database upgrade in the new release. When you are ready to upgrade the instance with the active database to the new release, follow the steps in "Online patch upgrade for active master".

Performing an upgrade with active standby pair replication

Active standby pair replication provides high availability of your data to your applications. With active standby pairs, you can perform an online upgrade and maintain continuous availability of your data during the upgrade, unless you are upgrading to a new major release in a configuration that also uses asynchronous writethrough cache groups. This section describes the following procedures:

Note:

Only asynchronous writethrough or read-only cache groups are supported with active standby pairs.

Online upgrades for an active standby pair with no cache groups

This section includes the following topics for online upgrades in a scenario with active standby pairs and no cache groups:

Also see "Performing an online upgrade with classic replication" for an overview, limitations, and requirements.

Online patch upgrade for standby master and subscriber

To perform an online upgrade to a new patch release for the standby master database and subscriber databases, complete the following tasks on each database. For this procedure, assume there are no cache groups.

  1. Stop the replication agent on the database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master2 standby database:
    ttAdmin -repStop master2
    
  2. Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  3. Restart the replication agent using the ttRepStart built-in procedure or the ttAdmin utility:
    ttAdmin -repStart master2
Online patch upgrade for active master

To perform an online upgrade to a new patch release for the active master database, you must first reverse the roles of the active and standby master databases, then perform the upgrade. For this procedure, assume there are no cache groups.

  1. Pause any applications that are generating updates on the active master database.
  2. Run the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. (The result of the call should be 00. If the value is 01, you should call ttRepSubscriberWait again until the value 00 is returned.) For example, to ensure that all transactions are replicated to the master2 standby master on the master2host:
    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  3. Stop the replication agent on the current active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:
    ttAdmin -repStop master1
    
  4. Execute the ttRepDeactivate built-in procedure on the current active master database. This puts the database in the IDLE state:
    call ttRepDeactivate;
    
  5. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair:
    call ttRepStateSet( 'ACTIVE' );
    
  6. Resume any applications that were paused in step 1, connecting them to the database that is now acting as the active master (for example, master2).

    Note:

    At this point, replication will not yet occur from the new active database to subscriber databases. Replication will resume after the host for the new standby database has been upgraded and the replication agent of the new standby database is running.

  7. Upgrade the instance of the former active master database, which is now the standby master database. See "About performing a basic patch upgrade" for details.
  8. Restart replication on the database in the upgraded instance, using the ttRepStart built-in procedure or the ttAdmin utility:
    ttAdmin -repStart master2
    
  9. To make the database in the newly upgraded instance the active master database again, see "Reversing the roles of the active and standby databases" in the Oracle TimesTen In-Memory Database Replication Guide.
Online major upgrade for active standby pair

When you perform an online upgrade for an active standby pair to a new major release of TimesTen, you must explicitly specify the TCP/IP port for each database. If your active standby pair replication scheme is not configured with a PORT attribute for each database, you must use the following steps to prepare for the upgrade. For this procedure, assume there are no cache groups. (Online major upgrades for active standby pairs with cache groups are only supported for read-only cache groups.)

  1. Stop the replication agent on every database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent on the master1 database:

    ttAdmin -repStop master1
    
  2. On the active master database, use the ALTER ACTIVE STANDBY PAIR statement to specify a PORT attribute for every database in the active standby pair. For example, to set a PORT attribute for the master1 database on the master1host host and the master2 database on the master2host host and the subscriber1 database on the subscriber1host host:

    ALTER ACTIVE STANDBY PAIR
     ALTER STORE master1 ON "master1host" SET PORT 30000
     ALTER STORE master2 ON "master2host" SET PORT 30001
     ALTER STORE subscriber1 ON "subscriber1host" SET PORT 30002;
    
  3. Destroy the standby master database and all of the subscribers using the ttDestroy utility. For example, to destroy the subscriber1 database:

    ttDestroy subscriber1
    
  4. Follow the normal procedure to start an active standby pair and duplicate the standby and subscriber databases from the active master. See "Setting up an active standby pair with no cache groups" in the Oracle TimesTen In-Memory Database Replication Guide for details.

To upgrade the instances of the active standby pair, first upgrade the instance of the standby master. While this node is being upgraded, there is no standby master database, so updates on the active master database are propagated directly to the subscriber databases. Following the upgrade of the standby node, the active and standby roles are switched and the new standby node is created from the new active node. Finally, the subscriber nodes are upgraded.

  1. Instruct the active master database to stop replicating updates to the standby master by executing the ttRepStateSave built-in procedure on the active master database. For example, to stop replication to the master2 standby master database on the master2host host:
    call ttRepStateSave( 'FAILED', 'master2', 'master2host' );
    
  2. Stop the replication agent on the standby master database using the ttRepStop built-in procedure or the ttAdmin utility. The following example stops the replication agent for the master2 standby master database.
    ttAdmin -repStop master2
    
  3. Use the ttMigrate utility to back up the standby master database to a binary file.
    ttMigrate -c master2 master2.bak
    

    See "ttMigrate" in the Oracle TimesTen In-Memory Database Reference for details.

  4. Destroy the standby master database, using the ttDestroy utility.
    ttDestroy master2
    
  5. Create a new installation and a new instance on the master2host standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  6. In the new instance on master2host, use ttMigrate to restore the standby master database from the binary file created earlier. (This example performs a checkpoint operation after every 20 megabytes of data has been restored.)
    ttMigrate -r -C 20 master2 master2.bak
    
  7. Start the replication agent on the standby master database using the ttRepStart built-in procedure or the ttAdmin utility.
    ttAdmin -repStart master2
    

    When the standby master database in the upgraded instance has become synchronized with the active master database, this standby master database moves from the RECOVERING state to the STANDBY state. The standby master database also starts sending updates to the subscribers. You can determine when the standby master database is in the STANDBY state by calling the ttRepStateGet built-in procedure.

    call ttRepStateGet;
    
  8. Pause any applications that are generating updates on the active master database.
  9. Execute the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. (The result of the call should be 00. If the value is 01, you should call ttRepSubscriberWait again until the value 00 is returned.) For example, to ensure that all transactions are replicated to the master2 standby master on the master2host host:
    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  10. Stop the replication agent on the active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:
    ttAdmin -repStop master1
    
  11. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair.
    call ttRepStateSet( 'ACTIVE' );
    
  12. Instruct the new active master database (master2, in our example) to stop replicating updates to what is now the standby master (master1) by executing the ttRepStateSave built-in procedure on the active master database. For example, to stop replication to the master1 standby master database on master1host host:
    call ttRepStateSave( 'FAILED', 'master1', 'master1host' );
    
  13. Destroy the former active master database, using the ttDestroy utility.
    ttDestroy master1
    
  14. Create the new installation and the instance for the new release on master1host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  15. Create a new standby master database by duplicating the new active master database, using the ttRepAdmin utility. For example, to duplicate the master2 database on the master2host host to the master1 database, use the following on the host containing the master1 database:
    ttRepAdmin -duplicate -from master2 -host master2host -uid pat -pwd patpwd
     -setMasterRepStart master1
    
  16. Start the replication agent on the new standby master database using the ttRepStart built-in procedure or the ttAdmin utility.
    ttAdmin -repStart master1
    
  17. Stop the replication agent on the first subscriber database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the subscriber1 subscriber database:
    ttAdmin -repStop subscriber1
    
  18. Destroy the subscriber database using the ttDestroy utility.
    ttDestroy subscriber1
    
  19. Create a new installation and a new instance for the new release on the subscriber host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  20. Create the subscriber database by duplicating the new standby master database, using the ttRepAdmin utility, as follows.
    ttRepAdmin -duplicate -from master1 -host master1host -uid pat -pwd patpwd
     -setMasterRepStart subscriber1
    
  21. Start the replication agent for the duplicated subscriber database using the ttRepStart built-in procedure or the ttAdmin utility.
    ttAdmin -repStart subscriber1
    
  22. Repeat step 17 through step 21 for each other subscriber database.

Online upgrades for an active standby pair with cache groups

This section includes the following topics for online patch upgrades in a scenario with active standby pairs and cache groups:

Also see "Performing an online upgrade with classic replication" for an overview, limitations, and requirements.

Online patch upgrade for standby master and subscriber (cache groups)

To perform an online upgrade to a new patch release for the standby master database and subscriber databases, in a configuration with cache groups, complete the following tasks on each database (with exceptions noted).

  1. Stop the replication agent on the database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master2 standby database:
    ttAdmin -repStop master2
    
  2. Stop the cache agent on the standby database using the ttCacheStop built-in procedure or the ttAdmin utility:
    ttAdmin -cacheStop master2
    
  3. Create a new installation and a new instance for the new release. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  4. Restart the cache agent on the standby database using the ttCacheStart built-in procedure or the ttAdmin utility:
    ttAdmin -cacheStart master2
    
  5. Restart the replication agent using the ttRepStart built-in procedure or the ttAdmin utility:
    ttAdmin -repStart master2

Note:

Steps 2 and 4, stopping and restarting the cache agent, are not applicable for subscriber databases.

Online patch upgrade for active master (cache groups)

To perform an online upgrade to a new patch release for the active master database, in a configuration with cache groups, perform the following steps. You must first reverse the roles of the active and standby master databases, and then perform the upgrade.

  1. Pause any applications that are generating updates on the active master database.
  2. Stop the cache agent on the current active master database using the ttCacheStop built-in procedure or the ttAdmin utility:
    ttAdmin -cacheStop master1
    
  3. Execute the ttRepSubscriberWait built-in procedure on the active master database, using the DSN and host of the standby master database. For example, to ensure that all transactions are replicated to the master2 standby master on the master2host host:
    call ttRepSubscriberWait( null, null, 'master2', 'master2host', 120 );
    
  4. Stop the replication agent on the current active master database using the ttRepStop built-in procedure or the ttAdmin utility. For example, to stop the replication agent for the master1 active master database:
    ttAdmin -repStop master1
    
  5. Execute the ttRepDeactivate built-in procedure on the current active master database. This puts the database in the IDLE state:
    call ttRepDeactivate;
    
  6. On the standby master database, set the database to the ACTIVE state using the ttRepStateSet built-in procedure. This database becomes the active master in the active standby pair:
    call ttRepStateSet( 'ACTIVE' );
    
  7. Resume any applications that were paused in step 1, connecting them to the database that is now acting as the active master (in this example, the master2 database).
  8. Upgrade the instance for the former active master database, which is now the standby master database. See "About performing a basic patch upgrade" for details.
  9. Restart the cache agent on the post-upgrade database using the ttCacheStart built-in procedure or the ttAdmin utility:
    ttAdmin -cacheStart master1
    
  10. Restart replication on the post-upgrade database using the ttRepStart built-in procedure or the ttAdmin utility:
    ttAdmin -repStart master1
    
  11. To make the post-upgrade database the active master database again, see "Reversing the roles of the active and standby databases" in the Oracle TimesTen In-Memory Database Replication Guide.
Online major upgrade for active standby pair (read-only cache groups)

Complete the following steps to perform a major upgrade in a scenario with an active standby pair with read-only cache groups. This example upgrades from the 18.1 release to the 22.1 release.

These steps assume that master1 is the active master database on the master1host host and master2 is the standby master database on the master2host host.

Note:

For more information on the built-in procedures and utilities discussed here, see "Built-In Procedures" and "Utilities" in the Oracle TimesTen In-Memory Database Reference.

  1. On the active master host, run the ttAdmin utility to stop the replication agent for the active master database.
    ttAdmin -repStop master1
    
  2. On the active master database, use the DROP ACTIVE STANDBY PAIR statement to drop the active standby pair. For example, from the ttIsql utility:
    Command> DROP ACTIVE STANDBY PAIR;
    
  3. On the active master database, use the CREATE ACTIVE STANDBY PAIR statement to create a new active standby pair with the cache groups excluded. Ensure that you explicitly specify the TCP/IP port for each database.
    Command> CREATE ACTIVE STANDBY PAIR master1 ON "master1host",
               master2 ON "master2host"
             STORE master1 ON "master1host" PORT 20000
             STORE master2 ON "master2host" PORT 20010
             EXCLUDE CACHE GROUP cacheuser.readcache;

    Note:

    You can use the cachegroups command within the ttIsql utility to identify all the cache groups defined in the database. In this example, readcache is a read-only cache group owned by the cacheuser user.

  4. On the active master database, call the ttRepStateSet built-in procedure to set the replication state for the active master database to ACTIVE.
    Command> call ttRepStateSet('ACTIVE');
    

    To verify that the replication state for the active master database is set to ACTIVE, call the ttRepStateGet built-in procedure.

    Command> call ttRepStateGet();
    < ACTIVE >
    1 row found.
    
  5. On the active master database, call the ttRepStart built-in procedure to start the replication agent.
    Command> call ttRepStart();
    
  6. On the standby master host, run the ttAdmin utility to stop the replication agent for the standby master database.
    ttAdmin -repStop master2
    
  7. On the standby master host, run the ttAdmin utility to stop the cache agent for the standby master database.
    ttAdmin -cacheStop master2
    
  8. On the standby master host, run the ttDestroy utility to destroy the standby master database. You must either add the -force option or first drop all cache groups. After you run the ttDestroy utility, run the cacheCleanUp.sql script as described below.
    ttDestroy -force master2
    

    Run the timesten_home/install/oraclescripts/cacheCleanUp.sql SQL*Plus script as the cache administration user to drop the Oracle Database objects. This script takes the host name and the database name (with full path) as parameters. See "Dropping Oracle Database objects used by cache groups with autorefresh" in the Oracle TimesTen In-Memory Database Cache Guide for details.

  9. Create a new standby master database by duplicating the active master database with the ttRepAdmin utility. For example, to duplicate the master1 database on the master1host host to the master2 database, run the following on the host containing the master2 database:
    ttRepAdmin -duplicate -from master1 -host master1host -UID pat -PWD patpwd 
      -keepCG -cacheUid cacheuser -cachePwd cachepwd master2

    Note:

    You need a user with ADMIN privileges defined in the active master database for it to be duplicated. In this example, the pat user identified by the patpwd password has ADMIN privileges.

    To keep the cache group tables, you need a cache administration user while adding the -keepCG option. In this example, the cacheuser user identified by the cachepwd password is a cache administration user.

  10. On the new standby master database, use the DROP CACHE GROUP statement to drop all the cache groups.
    Command> DROP CACHE GROUP cacheuser.readcache;
    
  11. On the standby master host, run the ttMigrate utility to back up the standby master database to a binary file.
    ttMigrate -c master2 master2.bak
    
  12. On the standby master host, run the ttDestroy utility to destroy the standby master database. After you run the ttDestroy utility, run the cacheCleanUp.sql script as described below.
    ttDestroy master2
    

    Run the timesten_home/install/oraclescripts/cacheCleanUp.sql SQL*Plus script as the cache administration user to drop the Oracle Database objects. This script takes the host name and the database name (with full path) as parameters. See "Dropping Oracle Database objects used by cache groups with autorefresh" in the Oracle TimesTen In-Memory Database Cache Guide for details.

  13. Create a new installation and a new instance for the new release on the standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  14. In the new instance on the standby master host, run the ttMigrate utility to restore the standby master database from the binary file created earlier.
    ttMigrate -r -C 20 master2 master2.bak

    Note:

    This example performs a checkpoint operation after every 20 MB of data has been restored.

  15. On the standby master database, use the CREATE USER statement to create a new cache administration user.
    Command> CREATE USER cacheuser2 IDENTIFIED BY cachepwd;
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE,
             DROP ANY TABLE TO cacheuser2;

    Note:

    You must create the new cache administration user in the Oracle database and grant the user the minimum set of privileges required to perform cache group operations. See "Create users in the Oracle database" in the Oracle TimesTen In-Memory Database Cache Guide for information.

  16. Connect to the standby master database as the cache administration user, and call the ttCacheUidPwdSet built-in procedure to set the new cache administration user name and password. Ensure you specify the cache administration user password for the Oracle database in the OraclePWD connection attribute within the connection string.
    ttIsql "DSN=master2;UID=cacheuser2;PWD=cachepwd;OraclePWD=oracle"
    Command> call ttCacheUidPwdSet('cacheuser2','oracle');
    
  17. On the standby master database, call the ttCacheStart built-in procedure to start the cache agent.
    Command> call ttCacheStart();
    
  18. On the standby master database, call the ttRepStart built-in procedure to start the replication agent.
    Command> call ttRepStart();
    

    The replication state will automatically be set to STANDBY. You can call the ttRepStateGet built-in procedure to confirm this. (This occurs asynchronously and may take a little time.)

    Command> call ttRepStateGet();
    < STANDBY >
    1 row found.
    
  19. On the standby master database, use the CREATE READONLY CACHE GROUP statement to create all the read-only cache groups.
    Command> CREATE READONLY CACHE GROUP cacheuser2.readcache
             AUTOREFRESH INTERVAL 10 SECONDS
             FROM oratt.readtbl
               (keyval NUMBER NOT NULL PRIMARY KEY, str VARCHAR(32));

    Note:

    Ensure that the cache administration user has SELECT privileges on the cache group tables in the Oracle database. In this example, the cacheuser2 user has SELECT privileges on the readtbl table owned by the oratt user in the Oracle database. For more information, see "Create the Oracle Database tables to be cached" in the Oracle TimesTen In-Memory Database Cache Guide.

  20. On the standby master database, use the LOAD CACHE GROUP statement to load the data from the Oracle database tables into the TimesTen cache groups.
    Command> LOAD CACHE GROUP cacheuser2.readcache
             COMMIT EVERY 200 ROWS;
    
  21. Pause any applications that are generating updates on the active master database.
  22. On the active master database, call the ttRepSubscriberWait built-in procedure using the DSN and host of the standby master database. For example, to ensure that all transactions are replicated to the master2 database on the master2host host:
    Command> call ttRepSubscriberWait(NULL,NULL,'master2','master2host',120);
    
  23. On the active master database, call the ttRepStop built-in procedure to stop the replication agent.
    Command> call ttRepStop();
    
  24. On the active master database, call the ttRepDeactivate built-in procedure to set the replication state for the active master database to IDLE.
    Command> call ttRepDeactivate();
    
  25. On the standby master database, call the ttRepStateSet built-in procedure to set the replication state for the standby master database to ACTIVE. This database and its host become the active master in the active standby pair replication scheme.
    Command> call ttRepStateSet('ACTIVE');

    Note:

    In this example, the master2 database on the master2host host just became the active master in the active standby pair replication scheme. Likewise, the master1 database on the master1host host is henceforth considered the standby master in the active standby pair replication scheme.

  26. On the new active master database, call the ttRepStop built-in procedure to stop the replication agent.
    Command> call ttRepStop();
    
  27. On the active master database, use the ALTER CACHE GROUP statement to set the AUTOREFRESH mode of all cache groups to PAUSED.
    Command> ALTER CACHE GROUP cacheuser2.readcache
             SET AUTOREFRESH STATE PAUSED;
    
  28. On the active master database, use the DROP ACTIVE STANDBY PAIR statement to drop the active standby pair.
    Command> DROP ACTIVE STANDBY PAIR;
    
  29. On the active master database, use the CREATE ACTIVE STANDBY PAIR statement to create a new active standby pair with the cache groups included. Ensure you explicitly specify the TCP/IP port for each database.
    Command> CREATE ACTIVE STANDBY PAIR master1 ON "master1host",
               master2 ON "master2host"
             STORE master1 ON "master1host" PORT 20000
             STORE master2 ON "master2host" PORT 20010;
    
  30. On the active master database, call the ttRepStateSet built-in procedure to set the replication state for the active master database to ACTIVE.
    Command> call ttRepStateSet('ACTIVE');
    
  31. On the active master database, call the ttRepStart built-in procedure to start the replication agent.
    Command> call ttRepStart();
    
  32. Resume any applications that were paused in step 21, connecting them to the new active master database.
  33. On the new standby master host, run the ttDestroy utility to destroy the new standby master database. After you run the ttDestroy utility, run the cacheCleanUp.sql script as described below.
    ttDestroy master1
    

    Run the timesten_home/install/oraclescripts/cacheCleanUp.sql SQL*Plus script as the cache administration user to drop the Oracle Database objects. This script takes the host name and the database name (with full path) as parameters. See "Dropping Oracle Database objects used by cache groups with autorefresh" in the Oracle TimesTen In-Memory Database Cache Guide for details.

  34. Create a new installation and a new instance for the new release on the standby master host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  35. Create a new standby master database by duplicating the active master database with the ttRepAdmin utility. For example, to duplicate the master2 database on the master2host host to the master1 database, run the following on the host containing the master1 database:
    ttRepAdmin -duplicate -from master2 -host master2host -UID pat -PWD patpwd 
      -keepCG -cacheUid cacheuser2 -cachePwd cachepwd master1
    
  36. On the standby master host, run the ttAdmin utility to start the cache agent for the standby master database.
    ttAdmin -cacheStart master1
    
  37. On the standby master host, run the ttAdmin utility to start the replication agent for the standby master database.
    ttAdmin -repStart master1
    

Offline upgrades for an active standby pair with cache groups

Performing a major upgrade in a scenario with an active standby pair with asynchronous writethrough cache groups requires an offline upgrade. This is discussed in the subsection that follows.

Offline major upgrade for active standby pair (cache groups)

Complete the following steps to perform a major upgrade in a scenario with an active standby pair with cache groups. You must perform this upgrade offline. (This example assumes you want to upgrade from release 18.1 to release 22.1.)

These steps assume master1 is an active master database on the master1host host and master2 is a standby master database on the master2host host. (For information about the built-in procedures and utilities discussed, refer to "Built-In Procedures" and "Utilities" in Oracle TimesTen In-Memory Database Reference.)

  1. Stop any updates to the active database before you upgrade.
  2. From master1, call the ttRepSubscriberWait built-in procedure to ensure that all data updates have been applied to the standby database, where numsec is the desired wait time.
    call ttRepSubscriberWait(null, null, 'master2', 'master2host', numsec);
    
  3. From master2, call ttRepSubscriberWait to ensure that all data updates have been applied to the Oracle database.
    call ttRepSubscriberWait(null, null, '_ORACLE', null, numsec);
    
  4. On master1host, use the ttAdmin utility to stop the replication agent for the active database.
    ttAdmin -repStop master1
    
  5. On master2host, use ttAdmin to stop the replication agent for the standby database.
    ttAdmin -repStop master2
    
  6. On master1host, call the ttCacheStop built-in procedure or use ttAdmin to stop the cache agent for the active database.
    ttAdmin -cacheStop master1
    
  7. On master2host, call ttCacheStop or use ttAdmin to stop the cache agent for the standby database.
    ttAdmin -cacheStop master2
    
  8. On master1host, use the ttMigrate utility to back up the active database to a binary file.
    ttMigrate -c master1 master1.bak
    
  9. On master1host, use the ttDestroy utility to destroy the active database. You must either use the -force option or first drop all cache groups. If you use -force, run the script cacheCleanup.sql afterward.
    ttDestroy -force /data_store_path/master1
    

    The cacheCleanup.sql script is a SQL*Plus script, located in the installation_dir/oraclescripts directory (and accessible through timesten_home/install/oraclescripts), that you run after connecting to the Oracle database as the cache user. It takes as parameters the host name and the database name (with full path). For information, refer to "Dropping Oracle Database objects used by autorefresh cache groups" in the Oracle TimesTen In-Memory Database Cache Guide.

  10. Create a new installation and a new instance for the new major release on master1host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  11. Create a new database in the new 22.1 release using ttIsql, with the connection attribute AutoCreate=1 set in the DSN. In this new database, create a cache user. The following example is a sequence of ttIsql commands that creates this cache user and grants it the appropriate privileges.

    The cache user requires ADMIN privilege to execute the next step, ttMigrate -r. Once migration is complete, you can revoke the ADMIN privilege from this user if desired.

    Command> CREATE USER cacheuser IDENTIFIED BY cachepassword;
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE, 
             DROP ANY TABLE TO cacheuser;
    Command> GRANT ADMIN TO cacheuser;
    
  12. In the new instance on master1host, use the ttMigrate utility as the cache user to restore master1 from the binary file created earlier. (This example performs a checkpoint operation after every 20 megabytes of data has been restored, and assumes the password is the same in the Oracle database as in TimesTen.)
    ttMigrate -r -cacheuid cacheuser -cachepwd cachepassword -C 20 -connstr
     "DSN=master1;uid=cacheuser;pwd=cachepassword;oraclepwd=cachepassword"
     master1.bak
    
  13. On master1host, use ttAdmin to start the replication agent.
    ttAdmin -repStart master1

    Note:

    This step also sets the database to the active state. You can then call the ttRepStateGet built-in procedure (which takes no parameters) to confirm the state.

  14. On master1host, call the ttCacheStart built-in procedure or use ttAdmin to start the cache agent.
    ttAdmin -cacheStart master1
    

    Then you can use the ttStatus utility to confirm the replication and cache agents have started.

  15. Put each automatic refresh cache group into the AUTOREFRESH PAUSED state. This example uses ttIsql:
    Command> ALTER CACHE GROUP mycachegroup SET AUTOREFRESH STATE paused;
    
  16. From master1, reload each cache group, specifying the name of the cache group and how often to commit during the operation. This example uses ttIsql:
    Command> LOAD CACHE GROUP cachegroupname COMMIT EVERY n ROWS;
    

    You can optionally specify parallel loading as well. See the "LOAD CACHE GROUP" SQL statement in the Oracle TimesTen In-Memory Database SQL Reference for details.

  17. On master2host, use ttDestroy to destroy the standby database. You must either use the -force option or first drop all cache groups. If you use -force, run the script cacheCleanup.sql afterward (as discussed earlier).
    ttDestroy -force /data_store_path/master2
    
  18. Create the new installation and the new instance for the new major release on master2host. See "Creating an installation on Linux/UNIX" and "Creating an instance on Linux/UNIX: Basics" for information.
  19. In the new instance on master2host, use the ttRepAdmin utility with the -duplicate option to create a duplicate of active database master1 to use as standby database master2. Specify the appropriate administrative user on master1, the cache manager user and password, and to keep cache groups.
    ttRepAdmin -duplicate -from master1 -host master1host -uid pat -pwd patpwd 
    -cacheUid orcluser -cachePwd orclpwd -keepCG master2
    
  20. On master2host, use ttAdmin to start the replication agent. (You could optionally have used the ttRepAdmin option -setMasterRepStart in the previous step instead.)
    ttAdmin -repStart master2
    
  21. On master2, the replication state will automatically be set to STANDBY. You can call the ttRepStateGet built-in procedure to confirm this. (This occurs asynchronously and may take a little time.)
    call ttRepStateGet();
    
  22. On master2host, call the ttCacheStart built-in procedure or use ttAdmin to start the cache agent.
    ttAdmin -cacheStart master2
    

    After this, you can use the ttStatus utility to confirm the replication and cache agents have started.
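
    For example, a quick way to check both agents from the shell is to filter the ttStatus output for its agent status lines (a sketch; the exact output wording varies by release):

    ttStatus | grep -i agent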

If you want to create read-only subscriber databases, on each subscriber host you can create the subscriber by using the ttRepAdmin utility -duplicate option to duplicate the standby database. The following example creates subscriber1, using the same ADMIN user as above and the -nokeepCG option to convert the cache tables to normal TimesTen tables, as appropriate for a read-only subscriber.

ttRepAdmin -duplicate -from master2 -host master2host -nokeepCG 
-uid pat -pwd patpwd subscriber1
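
After the duplicate operation completes, the replication agent of the new subscriber must be running before the subscriber receives updates from the standby database. For example, assuming the subscriber DSN is subscriber1:

ttAdmin -repStart subscriber1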

For related information, refer to "Rolling out a disaster recovery subscriber" in the Oracle TimesTen In-Memory Database Replication Guide.

Performing an offline TimesTen upgrade when using Oracle Clusterware

This section discusses the steps for an offline upgrade of TimesTen when using TimesTen with Oracle Clusterware. You have the option of also upgrading Oracle Clusterware, independently, while upgrading TimesTen. (See "Performing an online TimesTen upgrade when using Oracle Clusterware" for details on online upgrade.)

Note:

  • These instructions apply for either a TimesTen patch upgrade (for example, from 22.1.w.x to 22.1.y.z) or a TimesTen major upgrade (for example, from 18.1 to 22.1).

  • Refer to the Oracle TimesTen In-Memory Database Release Notes for information about versions of Oracle Clusterware that are supported by TimesTen.

For this procedure, except where noted, you can execute the ttCWAdmin commands from any host in the cluster. Each command affects all hosts.

  1. Stop the replication agents on the databases in the active standby pair:
    ttCWAdmin -stop -dsn advancedDSN
    
  2. Drop the active standby pair:
    ttCWAdmin -drop -dsn advancedDSN
    
  3. Stop the TimesTen cluster agent. This removes the hosts from the cluster and stops the TimesTen daemon:
    ttCWAdmin -shutdown
    
  4. Upgrade TimesTen on the desired hosts.
  5. Upgrade Oracle Clusterware if desired. See "Oracle Clusterware" in the Oracle Database documentation for information.
  6. If you have upgraded Oracle Clusterware, use the ttInstanceModify utility to configure TimesTen with Oracle Clusterware. On each host, run:
    ttInstanceModify -crs
    

    For Linux or UNIX hosts, see "Change the Oracle Clusterware configuration for an instance" for details.

  7. Start the TimesTen cluster agent. This includes the hosts defined in the cluster as specified in ttcrsagent.options. This also starts the TimesTen daemon.
    ttCWAdmin -init
    
  8. Create the active standby pair replication scheme:
    ttCWAdmin -create -dsn advancedDSN
    

    Important: The host from which you run this command must have access to the cluster.oracle.ini file. (See "Configuring Oracle Clusterware management with the cluster.oracle.ini file" in the Oracle TimesTen In-Memory Database Replication Guide for information about this file.)

  9. Start the active standby pair replication scheme:
    ttCWAdmin -start -dsn advancedDSN
    

Performing an online TimesTen upgrade when using Oracle Clusterware

This section discusses how to perform an online rolling upgrade (patch) for TimesTen, from TimesTen 22.1.w.x to 22.1.y.z, in a configuration where Oracle Clusterware manages active standby pairs. (See "Performing an offline TimesTen upgrade when using Oracle Clusterware" for an offline upgrade.)

The following topics are covered:

Supported configurations

The following basic configurations are supported for online rolling upgrades for TimesTen. In all cases, Oracle Clusterware manages the hosts.

  • One active standby pair on two hosts.

  • Multiple active standby pairs with one database on each host.

  • Multiple active standby pairs with one or more databases on each host.

(Other scenarios, such as with additional spare hosts, are effectively equivalent to one of these scenarios.)

Restrictions and assumptions

Note the following assumptions for upgrading TimesTen when using Oracle Clusterware:

  • The existing active standby pairs are configured and operating properly.

  • Oracle Clusterware commands are used correctly to stop and start the standby database.

  • The upgrade does not change the TimesTen environment for the active and standby databases.

  • These instructions are for TimesTen patch upgrades only. Online major upgrades are not supported in configurations where Oracle Clusterware manages active standby pairs.

  • There are at least two hosts managed by Oracle Clusterware.

    Multiple active or standby databases managed by Oracle Clusterware can exist on a host only if there are at least two hosts in the cluster.

Note:

Upgrade Oracle Clusterware if desired, but not concurrently with an online TimesTen upgrade. When performing an online TimesTen patch upgrade in configurations where Oracle Clusterware manages active standby pairs, you must perform the Clusterware upgrade independently and separately, either before or after the TimesTen upgrade.

Note:

For information about Oracle Clusterware, see "Oracle Clusterware" in the Oracle Database documentation.

Upgrade tasks for one active standby pair

This section describes the following tasks:

Note:

In examples in the following subsections, the host name is host2, the DSN is myDSN, the instance name is upgrade2, and the instance administrator is terry.

Verify that the active standby pair is operating properly

Complete these steps to confirm that the active standby pair is operating properly.

  1. Verify the following.
    • The active and the standby databases run a TimesTen 22.1.w.x release.

    • The active and standby databases are on separate hosts managed by Oracle Clusterware.

    • Replication is working.

    • If the active standby pair replication scheme includes cache groups, the following are true:

      • AWT and SWT writes are working from the standby database in TimesTen to the Oracle database.

      • Refreshes are working from the Oracle database to the active database in TimesTen.

  2. Run the ttCWAdmin -status -dsn yourDSN command to verify the following.
    • The active database is on a different host than the standby database.

    • The state of the active database is 'ACTIVE' and the status is 'AVAILABLE'.

    • The state of the standby database is 'STANDBY' and the status is 'AVAILABLE'.

  3. Run the ttStatus command on the active database to verify the following.
    • The ttCRSactiveservice and ttCRSmaster processes are running.

    • The subdaemon and the replication agents are running.

    • If the active standby pair replication scheme includes cache groups, the cache agent is running.

  4. Run the ttStatus command on the standby database to verify the following.
    • The ttCRSsubservice and ttCRSmaster processes are running.

    • The subdaemon and the replication agents are running.

    • If the active standby pair replication scheme includes cache groups, the cache agent is running.

Shut down the standby database

Complete these steps to shut down the standby database.

  1. Run an Oracle Clusterware command similar to the following to obtain the names of the Oracle Clusterware Master, Daemon, and Agent processes on the host of the standby database. It is convenient to filter the output with grep TT:
    crsctl status resource -n standbyHostName | grep TT
    
  2. Run Oracle Clusterware commands to shut down the standby database. The Oracle Clusterware commands stop the Master processes for the standby database, the Daemon process for the instance, and the Agent process for the instance.
    crsctl stop resource TT_Master_upgrade2_terry_myDSN_1
    crsctl stop resource TT_Daemon_upgrade2_terry_host2
    crsctl stop resource TT_Agent_upgrade2_terry_host2
    
  3. Stop the TimesTen main daemon.
    ttDaemonAdmin -stop
    

    If the ttDaemonAdmin -stop command gives error 10028, retry the command.
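
    For example, a minimal retry sketch for a Bourne-compatible shell, assuming the instance environment is already set in this shell:

    for attempt in 1 2 3; do
      ttDaemonAdmin -stop && break   # stop succeeded; otherwise retry (for example, after error 10028)
      sleep 5
    done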

Perform an upgrade for the standby database

Complete these steps for an offline upgrade of the instance for the standby database.

  1. Create a new installation. See "Creating an installation on Linux/UNIX" for information.
  2. Point the instance to the new installation. See "Associate an instance with a different installation (upgrade or downgrade)" for details.
  3. Configure the new installation for Oracle Clusterware (see the sketch after this list).
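
For step 3, one approach is the same ttInstanceModify command shown in "Performing an offline TimesTen upgrade when using Oracle Clusterware"; a minimal sketch, run from the upgraded instance on the standby host (verify it against your Oracle Clusterware configuration before relying on it):

ttInstanceModify -crs
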
Start the standby database

Complete these steps to start the standby database.

  1. Run the following ttCWAdmin command to start the TimesTen main daemon, the TimesTen Oracle Clusterware agent process and the TimesTen Oracle Clusterware Daemon process:
    ttCWAdmin -init -hosts localhost
    
  2. Start the Oracle Clusterware Master process for the standby database.
    crsctl start resource TT_Master_upgrade2_terry_MYDSN_1
Switch the roles of the active and standby databases

Use the ttCWAdmin -switch command to switch the roles of the active and standby databases to enable the offline upgrade on the other master database.

ttCWAdmin -switch -dsn myDSN

Use the ttCWAdmin -status command to verify that the switch operation has completed before starting the next task.
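
For example, assuming the myDSN DSN used throughout this section:

ttCWAdmin -status -dsn myDSN

The roles reported for the two hosts should now be reversed relative to the output you saw before the switch.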

Shut down the new standby database

Use the Oracle Clusterware crsctl status resource command to obtain the names of the Master, Daemon, and Agent processes on the host of the new standby database. This example assumes the host host1 and filters the output through grep TT:

crsctl status resource -n host1 | grep TT

Run commands such as those in "Shut down the standby database" and use the appropriate instance name, instance administrator, DSN, and host name. For example:

crsctl stop resource TT_Master_upgrade2_terry_MYDSN_0
crsctl stop resource TT_Daemon_upgrade2_terry_host1
crsctl stop resource TT_Agent_upgrade2_terry_host1
ttDaemonAdmin -stop
Perform an upgrade of the new standby database

Follow the same steps described in "Perform an upgrade for the standby database", this time for the instance of the new standby database.
Start the new standby database

See "Start the standby database" and use the Master process name obtained by the crsctl status resource command from "Shut down the new standby database" as outlined above.

ttCWAdmin -init -hosts localhost
crsctl start resource TT_Master_upgrade2_terry_MYDSN_0

Upgrades for multiple active standby pairs on many pairs of hosts

The process to upgrade the instances for multiple active standby pairs on multiple pairs of hosts is essentially the same as the process to upgrade the instances for a single active standby pair on two hosts. See "Upgrade tasks for one active standby pair" for details. The best practice is to perform the upgrades for the active standby pairs one at a time.

Use the ttCWAdmin -status command to determine the state of the databases managed by Oracle Clusterware.

Upgrades for multiple active standby pairs on a pair of hosts

Multiple active standby pairs can be on multiple pairs of hosts. See "Upgrades for multiple active standby pairs on many pairs of hosts" for details. Alternatively, multiple active standby pairs can be on a single pair of hosts. One scenario is for all the active databases to be on one host and all the standby databases to be on the other. A more typical scenario, to better balance the workload, is for each host to have some active databases and some standby databases.

Figure 7-1 shows two active standby pairs on two hosts managed by Oracle Clusterware. The active database called active1 on host1 replicates to standby1 on host2. The active database called active2 on host2 replicates to standby2 on host1. AWT updates from both standby databases are propagated to the Oracle database. Read-only updates from the Oracle database are propagated to the active databases.

Figure 7-1 Multiple active standby pairs on two hosts


This configuration can result in greater write throughput for cache groups and more balanced resource usage. See the next section, "Sample configuration files: multiple active standby pairs on one pair of hosts", for sample sys.odbc.ini entries and a sample cluster.oracle.ini file for this kind of configuration. (See "Configuring Oracle Clusterware management with the cluster.oracle.ini file" in the Oracle TimesTen In-Memory Database Replication Guide for information about that file.)

The rolling upgrade process for multiple active standby pairs on a single pair of hosts is similar in nature to the process of upgrading multiple active standby pairs on multiple pairs of hosts. See "Upgrades for multiple active standby pairs on many pairs of hosts" for details.

First, however, if the active and standby databases are mixed between the two hosts, switch all standby databases to one host and all active databases to the other host. Use the ttCWAdmin -switch -dsn DSN command to switch active and standby databases between hosts. Once all the active databases are on one host and all the standby databases are on the other host, follow the steps below to perform the upgrade for the entire "standby" host.
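
For example, assuming the sample configuration shown later in "Sample configuration files: multiple active standby pairs on one pair of hosts", where databaseb and databasec have their active databases on host2 and their standby databases on host1, the following commands consolidate all standby databases onto host2 by switching the roles of those two pairs:

ttCWAdmin -switch -dsn databaseb
ttCWAdmin -switch -dsn databasec
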

Be aware that upgrades affect the entire instance and associated databases on one host.

  1. Verify that the standby databases run on the desired host. Use the ttCWAdmin -status -dsn DSN command and the ttCWAdmin -status command.
  2. Modify the Oracle Clusterware stop commands to stop all Master processes on the host where all the standby databases reside.
  3. Modify the Oracle Clusterware start commands to start all Master processes on the host where all the standby databases reside.
Sample configuration files: multiple active standby pairs on one pair of hosts

The following are sample sys.odbc.ini entries:

[databasea]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databasea
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL
 
[databaseb]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databaseb
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

[databasec]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databasec
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

[databased]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/databased
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=ORCL

The following is a sample cluster.oracle.ini file:

[databasea]
MasterHosts=host1,host2
CacheConnect=Y
 
[databaseb]
MasterHosts=host2,host1
CacheConnect=Y
 
[databasec]
MasterHosts=host2,host1
CacheConnect=Y
 
[databased]
MasterHosts=host1,host2
CacheConnect=Y

This cluster.oracle.ini file balances the active and standby databases across the two hosts by reversing the order of the host names specified for the MasterHosts attribute. Because the active database is initially created on the first host listed in MasterHosts, databasea and databased are active on host1 with their standby databases on host2, while databaseb and databasec are active on host2 with their standby databases on host1. Each host therefore has two active databases and two standby databases.

Sample scripts: stopping and starting multiple standby processes on one host

Run an Oracle Clusterware command similar to the following to obtain the names of the Oracle Clusterware Master, Daemon, and Agent processes on the host of the standby database. Filtering the output through grep TT is suggested:

crsctl status resource -n standbyHostName | grep TT

The following script is an example of a "stop standby" script for multiple databases on the same host that Oracle Clusterware manages. The instance name is upgrade2. The instance administrator is terry. The host is host2. There are two standby databases: databasea and databaseb.

crsctl stop resource TT_Master_upgrade2_terry_DATABASEA_0
crsctl stop resource TT_Master_upgrade2_terry_DATABASEB_1
crsctl stop resource TT_Daemon_upgrade2_terry_HOST2
crsctl stop resource TT_Agent_upgrade2_terry_HOST2
ttDaemonAdmin -stop

The following script is an example of a "start standby" script for the same configuration.

ttCWAdmin -init -hosts localhost
crsctl start resource TT_Master_upgrade2_terry_DATABASEA_0
crsctl start resource TT_Master_upgrade2_terry_DATABASEB_1

Upgrades when using parallel replication

You can perform an online or offline upgrade from a database that does not have parallel replication enabled to a database that has automatic parallel replication enabled (with or without disabled commit dependencies). See the "ReplicationApplyOrdering" attribute in the Oracle TimesTen In-Memory Database Reference for information on setting automatic parallel replication values.

The remainder of this section discusses additional considerations along with scenarios where an offline upgrade is required.
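
As a point of reference for the considerations that follow, the following is a minimal sys.odbc.ini sketch (the DSN name and sizes are hypothetical) of a database that uses automatic parallel replication. ReplicationApplyOrdering=0 selects automatic parallel replication and ReplicationParallelism sets the number of replication tracks:

[master2]
Driver=timesten_home/install/lib/libtten.so
DataStore=/scratch/terry/ds/master2
PermSize=400
TempSize=320
DatabaseCharacterSet=WE8MSWIN1252
ReplicationApplyOrdering=0
ReplicationParallelism=2
LogBufParallelism=4
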

Considerations regarding parallel replication

Be aware of the following considerations when upgrading hosts that use parallel replication:

  • Consider an active standby pair without parallel replication enabled. To upgrade the instances to a 22.1 release and use automatic parallel replication (default value of 0 for the ReplicationApplyOrdering attribute), use the appropriate procedure for an active standby pair upgrade. See "Performing an upgrade with active standby pair replication" for details.

  • Consider an active standby pair with no cache groups and automatic parallel replication enabled (value of 0 for the ReplicationApplyOrdering attribute). To upgrade the instances to a 22.1 release to use automatic parallel replication with disabled commit dependencies (value of 2 for the ReplicationApplyOrdering attribute), use the procedure for an active standby pair online major upgrade. See "Online major upgrade for active standby pair" for details. The value for the ReplicationApplyOrdering attribute must be changed from 0 to 2 before restoring any of the databases (a sketch of the full save and restore sequence appears after this list). For example:

    ttMigrate -r "DSN=master2;ReplicationApplyOrdering=2;ReplicationParallelism=2;
      LogBufParallelism=4" master2.bak

    Note:

    You can also use the same active standby pair online major upgrade procedure to move a database from ReplicationApplyOrdering=2 to ReplicationApplyOrdering=0.

    Automatic parallel replication with disabled commit dependencies supports only asynchronous active standby pairs with no cache groups. For more information, see "Configuring parallel replication" in the Oracle TimesTen In-Memory Database Replication Guide.

  • You cannot replicate between databases that have the ReplicationParallelism attribute set to greater than 1 but have different values for the ReplicationApplyOrdering attribute.
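
The following is a minimal sketch of where the ttMigrate restore shown above fits in the save and restore sequence for the hypothetical master2 database. It omits the surrounding steps of the online major upgrade procedure (such as stopping the replication agents and duplicating the restored database), which are covered in "Online major upgrade for active standby pair".

# Save the database objects from the old instance.
ttMigrate -c DSN=master2 master2.bak
# Destroy the old database, then restore it with the new parallel replication attributes.
ttDestroy master2
ttMigrate -r "DSN=master2;ReplicationApplyOrdering=2;ReplicationParallelism=2;
  LogBufParallelism=4" master2.bak
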

Scenarios that require an offline upgrade

You must use an offline upgrade for these scenarios:

  • Moving from an automatic parallel replication environment to another automatic parallel replication environment with a different number of tracks, as indicated by the value of the ReplicationParallelism attribute.

  • Moving between major releases (for example, from 18.1 to 22.1) and using asynchronous writethrough cache groups.

  • Moving from regular replication with asynchronous writethrough in 18.1 to automatic parallel replication with asynchronous writethrough in 22.1.

Use the procedure described in "Moving to a different major release using ttMigrate" for offline upgrades. Alternatively, you can upgrade one side and use the ttRepAdmin -duplicate -recreate command to create the new database.

Performing an upgrade of your client instance

You can upgrade a client instance that is being used to access a database in a full instance. For information on instances, see "Overview of installations and instances" and "TimesTen instances". For information on Client/Server, see "Overview of the TimesTen Client/Server" in the Oracle TimesTen In-Memory Database Operations Guide.

To perform the upgrade, follow these steps:

  1. Optional: This step is included for informational purposes to assist you in identifying and verifying the TimesTen client release information.

    In the client instance, run the ttVersion utility to verify the client release and the client instance. In this example, running ttVersion in the client instance shows the client release is 22.1.1.10.0 and the client instance is instance_221_client.

    % ttVersion
    TimesTen Release 22.1.1.10.0  (64 bit Linux/x86_64) (instance_221_client)
    2023-06-29T23:22:07Z
      Instance home directory: /scratch/instance_221_client
      Group owner: g900
    
  2. Optional: This step is included for informational purposes to establish and then show a client connection to the database1 database. In the client instance, run ttIsqlCS to connect to the database1 database in the full instance (on the server). Note that the TCP_PORT is not specified. The default value is assumed.
    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1";
     
    Copyright (c) 1996, 2023, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1;
    ...
    (Default setting AutoCommit=1)
    
  3. Stop all applications using the client instance. In this example, in the client instance, first run ttIsqlCS to connect to the database1 database, then exit from ttIsqlCS.
    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1";
     
    Copyright (c) 1996, 2021, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1;
    ...
    (Default setting AutoCommit=1)
    Command> exit
    Disconnecting...
    Done.
    
  4. Create a new client installation in a new location. For example, create the clientinstall_new installation directory, then unzip the new release zip file into that directory. For example, to create the 22.1.1.11.0 installation on Linux 64-bit, unzip timesten2211110.server.linux8664.zip into the clientinstall_new directory. (Note that there is only one distribution for Linux 64-bit; it contains both the server and the client installation.)
    % mkdir clientinstall_new
    % cd clientinstall_new
    % unzip /swdir/TimesTen/ttinstallers/timesten2211110.server.linux8664.zip
    [...UNZIP OUTPUT...]
    

    See "TimesTen installations" for detailed information.

  5. Modify the client instance to point to the new installation. Do this by running the ttInstanceModify utility with the -install option from the $TIMESTEN_HOME/bin directory of the client instance.

    In this example, point the client instance to the installation in /clientinstall_new/tt22.1.1.11.0.

    % $TIMESTEN_HOME/bin/ttInstanceModify -install 
     /clientinstall_new/tt22.1.1.11.0
     
    Instance Info (UPDATED)
    -----------------------
     
    Name:           instance_221_client
    Version:        22.1.1.11.0
    Location:       /scratch/instance_221_client
    Installation:   /clientinstall_new/tt22.1.1.11.0
     
    * Client-Only Installation
     
     
    The instance instance_221_client now points to the installation in 
    /clientinstall_new/tt22.1.1.11.0
    
  6. Optional: In the client instance, run the ttVersion utility to verify the client release is 22.1.1.11.0.
    % ttVersion
    TimesTen Release 22.1.1.11.0 (64 bit Linux/x86_64) (instance_221_client) 2021-06-28T22:37:51Z
      Instance home directory: /scratch/instance_221_client
      Group owner: g900
    
  7. Restart the applications that use the client instance.

    In this example, in the client instance, run ttIsqlCS to connect to the database1 database in the full instance.

    % ttIsqlCS -connstr "TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1";
     
    Copyright (c) 1996, 2021, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
     
     
     
    connect "TTC_SERVER=server.mycompany.com;TTC_SERVER_DSN=database1";
    Connection successful: DSN=;TTC_SERVER=server.mycompany.com;
    TTC_SERVER_DSN=database1;
    ...
    (Default setting AutoCommit=1)
    
  8. Optional: Delete the previous release installation (used for the client).
    % chmod -R 750 installation_dir/tt22.1.1.10.0
    % rm -rf installation_dir/tt22.1.1.10.0