6 Scenario: Upgrading Database with Manually Configured Oracle Data Guard

This scenario describes upgrading a database with manually configured Oracle Data Guard on Oracle Database Appliance on the DCS stack.

Note:

The following section does not apply to Oracle Data Guard configurations set up with ODACLI commands. For such configurations, use ODACLI commands to upgrade the databases.

Upgrading All Components

Upgrading an Oracle Database Appliance environment consists of upgrading DCS, server, storage, and database components.

When upgrading an Oracle Database Appliance environment where a standby system is already implemented, you can use the standby system to reduce the downtime required for completing the upgrade activities. The purpose of this section is to provide a high-level overview of the upgrade process in a primary-standby setup.

  1. Verify that the system is operating correctly by running pre-checks, validating hardware and system processes, and verifying system configuration using ORAChk.
  2. Take a backup of the operating system, Oracle Grid Infrastructure, Oracle homes, and databases in the primary environment. Refer to My Oracle Support Note 2466177.1 - ODA (Oracle Database Appliance): ODABR a System Backup/Restore Utility.
  3. Upgrade DCS and server components on the standby Oracle Database Appliance system.
  4. Switch over the primary database role and application connections to the standby system.
  5. Upgrade DCS and server components on the current standby, that is, the former primary system.
  6. For deployments with Oracle Database Appliance releases earlier than 19.12, patch or upgrade the database as described in this chapter. On bare metal deployments with Oracle Database Appliance release 19.12 and later, refer to the Oracle Database Appliance Deployment and User's Guide for the steps to patch the databases using ODACLI commands. If your Oracle Data Guard deployment was configured using ODACLI commands, then ODACLI provides complete lifecycle management for the environment, including database patching and upgrade.

With this upgrade process, the downtime during the upgrade is minimized and system availability is affected only for the duration of upgrade or patching of the database component.
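The ordering above can be sketched as a checklist. This is a minimal illustration only; the phase labels below are placeholders for the activities described in steps 1 through 6, not ODACLI commands.

```shell
#!/bin/sh
# Minimal sketch of the primary-standby upgrade ordering described above.
# The phase labels are illustrative placeholders, not ODACLI commands.
upgrade_phases() {
  cat <<'EOF'
precheck-both-systems
backup-primary-environment
upgrade-standby-dcs-and-server
switchover-to-standby
upgrade-former-primary-dcs-and-server
patch-or-upgrade-database
EOF
}

upgrade_phases
```

The key design point is that the role switch sits between the two server upgrades, so each system is upgraded while it holds the standby role.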

Upgrading Oracle Database Release 19.12 and Earlier

The purpose of this section is to provide a high-level overview of the upgrade process in a primary-standby setup.

Upgrading the DCS, server, operating system, Oracle Grid Infrastructure, firmware, and storage components with a switchover and switchback can help reduce downtime during the upgrade. If you are upgrading only the database component, then some application downtime is expected unless you use a zero-downtime solution such as an active-active Oracle GoldenGate configuration. The following is the process for a database upgrade when a standby configuration exists:
  1. Verify that the system is operating correctly by running pre-checks, validating hardware and system processes, and verifying system configuration using ORAChk.
  2. Take a backup of the database and Oracle homes.
  3. Stop the standby database.
    [oracle@stbydb1]$ srvctl stop database -d boston
  4. Create a new database home or use an existing one on the standby with the version that you want to upgrade the database to on the primary.
    [root@stbydb1]# odacli create-dbhome -v 19.14.0.0.220118
  5. Stop log shipping on the primary.
    [oracle@proddb1]$ dgmgrl
    DGMGRL> connect sys/welcome1@chicago
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'ON'
    DGMGRL> edit database 'boston' SET PROPERTY 'LogShipping'='OFF';
    Property "LogShipping" updated
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'OFF'
  6. Create a new database home or use an existing database home on the primary with the version to which you want to upgrade the database.
    # odacli create-dbhome -v 19.14.0.0.220118
  7. Stop the application.
  8. Upgrade the primary database using the odacli upgrade database command.
    [root@proddb1]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    713b68d3-8c43-4d10-973e-90a3fa88a84a chicago  Rac      18.11.0.0.200714  false  Oltp   Odb1   Asm      Configured  288ce2c7-fa3d-4f93-802a-bfa50d180758

    [root@proddb1]# odacli list-dbhomes

    ID                                   Name              DB Version        Home Location                                         Status
    ------------------------------------ ----------------- ----------------- ----------------------------------------------------- ----------
    863c8cbe-1c5f-450e-866c-15c384580ad3 OraDB19000_home1  19.14.0.0.220118  /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1  Configured
    288ce2c7-fa3d-4f93-802a-bfa50d180758 OraDB18000_home1  18.11.0.0.200714  /u01/app/oracle/product/18.0.0.0/dbhome_1             Configured

    [root@proddb1]# odacli upgrade-database -i 713b68d3-8c43-4d10-973e-90a3fa88a84a -destDbHomeId 863c8cbe-1c5f-450e-866c-15c384580ad3 -sourceDbHomeId 288ce2c7-fa3d-4f93-802a-bfa50d180758

    [root@proddb1]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    713b68d3-8c43-4d10-973e-90a3fa88a84a chicago  Rac      19.14.0.0.220118  false  Oltp   Odb1   Asm      Configured  863c8cbe-1c5f-450e-866c-15c384580ad3
  9. Start the application.
  10. Copy the tnsnames.ora file on the standby from the old Oracle home to the new Oracle home, on all nodes.
  11. Copy the password file from the primary to the standby.
    [oracle@proddb1]$ srvctl config database -d chicago | grep Password
    Password file: +DATA/CHICAGO/PASSWORD/pwdchicago.277.1023633847

    [grid@proddb1 ~]$ asmcmd
    ASMCMD> pwcopy +DATA/CHICAGO/PASSWORD/pwdchicago.277.1023633847 /tmp/pwdboston
    copying +DATA/CHICAGO/PASSWORD/pwdchicago.277.1023633847 -> /tmp/pwdboston

    [oracle@proddb1]$ scp /tmp/pwdboston oracle@stbydb1:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/orapwboston

    [grid@stbydb1 ~]$ asmcmd
    ASMCMD> pwcopy /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/orapwboston +DATA/BOSTON/PASSWORDFILE/pwdboston
    copying /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/orapwboston -> +DATA/BOSTON/PASSWORDFILE/pwdboston
  12. Remove the Oracle Database 18c database from Oracle Clusterware on the standby.
    [oracle@stbydb1]$ srvctl remove database -db boston
    Remove the database boston? (y/[n]) y
  13. Add the database back to the Clusterware on the standby. The Oracle home must point to the new version of the home.
    Example with single-instance Oracle Database:
    [oracle@stbydb1]$ srvctl add database -db boston -oraclehome /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 -dbtype SINGLE -instance boston1 -node stbydb1 -dbname chicago -diskgroup 'DATA,REDO,RECO' -role physical_standby -spfile '+DATA/BOSTON/PARAMETERFILE/spfileboston' -pwfile '+DATA/BOSTON/PASSWORDFILE/pwdboston' -startoption mount
    
    Example with Oracle RAC Database:
    [oracle@stbydb1]$ srvctl add database -db boston -oraclehome /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 -dbtype RAC -dbname chicago -diskgroup 'DATA,RECO,REDO' -role physical_standby -spfile '+DATA/BOSTON/PARAMETERFILE/spfileboston' -pwfile '+DATA/BOSTON/PASSWORDFILE/pwdboston' -startoption mount 
    [oracle@stbydb1]$ srvctl add instance -database boston -instance boston1 -node stbydb1 
    [oracle@stbydb1]$ srvctl add instance -database boston -instance boston2 -node stbydb2
    [oracle@stbydb1]$ srvctl start instance -db boston -instance boston1 -o mount 
    [oracle@stbydb1]$ srvctl start instance -db boston -instance boston2 -o mount
  14. Enable log shipping and validate Oracle Data Guard configuration.
    [oracle@stbydb1]$ dgmgrl 
    DGMGRL> connect sys/welcome1@chicago 
    DGMGRL> edit database 'boston' SET PROPERTY 'LogShipping'='ON';
    Property "LogShipping" updated
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'ON'
    DGMGRL> show configuration verbose
    DGMGRL> show database verbose chicago
    DGMGRL> show database verbose boston
    DGMGRL> validate database chicago
    DGMGRL> validate database boston
  15. Verify switchover and failover operations.

    Switchover tests are as follows:

    $ dgmgrl
    DGMGRL> connect sys/welcome1@boston
    DGMGRL> switchover to boston 
    DGMGRL> connect sys/welcome1@chicago 
    DGMGRL> switchover to chicago;

    Failover tests are as follows:

    //Connect to standby before failover: 
    $ dgmgrl 
    DGMGRL> connect sys/welcome1@boston
    DGMGRL> failover to boston
    DGMGRL> reinstate database chicago
    
    //Connect to former primary before failover: 
    DGMGRL> connect sys/welcome1@chicago
    DGMGRL> failover to chicago;
    DGMGRL> reinstate database boston
    
    //Health check:
    DGMGRL> show database verbose chicago 
    DGMGRL> show database verbose boston 
    DGMGRL> validate database chicago 
    DGMGRL> validate database boston
  16. Sync up the registry on the standby system:
    [root@stbydb1 ~]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    e6450a56-5a7d-4dab-9ca9-25b004b66646 chicago  Rac      18.11.0.0.200714  false  Oltp   Odb1   Asm      Configured  755b4b5d-6211-4d94-81e8-cf611868fe39

    Sync up the registry entries:

    [root@stbydb1 ~]# odacli update-registry -n db -f
    [root@stbydb1 ~]# odacli describe-job -i 25ec2987-4c93-4d25-97db-bad2f6f602f6

    Job details
    ----------------------------------------------------------------
    ID: 25ec2987-4c93-4d25-97db-bad2f6f602f6
    Description: Discover Components : db
    Status: Success
    Created: November 6, 2021 11:00:50 PM CET
    Message:

    Task Name             Start Time                       End Time                         Status
    --------------------- -------------------------------- -------------------------------- -------
    Rediscover DBHome     November 6, 2021 11:00:54 PM CET November 6, 2021 11:00:56 PM CET Success
    Rediscover DB: boston November 6, 2021 11:00:56 PM CET November 6, 2021 11:01:02 PM CET Success

    Confirm the changes in the registry:

    [root@stbydb1 ~]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    e6450a56-5a7d-4dab-9ca9-25b004b66646 chicago  Rac      19.14.0.0.220118  false  Oltp   Odb1   Asm      Configured  17f68bbf-b812-42e5-96ba-1433c30f75ed

    The total downtime required is the duration of the database upgrade. A switchover and switchback are not required for a database upgrade. Note that the update registry operation removes the backup, dbdomain, CPU pool, and associated network settings for all databases. Backup, CPU pool, and associated network settings can be added again with the odacli modify-database command.
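When working through the odacli listings in the steps above, the home ID for a target version can be pulled out of saved `odacli list-dbhomes` output instead of being matched by eye. The helper below is a hypothetical sketch; it assumes the whitespace-separated column layout (ID, Name, DB Version, Home Location, Status) shown in the listings.

```shell
#!/bin/sh
# Hypothetical helper: extract the Oracle home ID for a given DB version
# from saved `odacli list-dbhomes` output, assuming whitespace-separated
# columns: ID, Name, DB Version, Home Location, Status.
find_dbhome_id() {
  version="$1"; listing="$2"
  awk -v v="$version" '$3 == v { print $1 }' "$listing"
}

# Sample rows matching the listings above:
cat > /tmp/dbhomes.txt <<'EOF'
863c8cbe-1c5f-450e-866c-15c384580ad3 OraDB19000_home1 19.14.0.0.220118 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 Configured
288ce2c7-fa3d-4f93-802a-bfa50d180758 OraDB18000_home1 18.11.0.0.200714 /u01/app/oracle/product/18.0.0.0/dbhome_1 Configured
EOF

find_dbhome_id 19.14.0.0.220118 /tmp/dbhomes.txt
# prints 863c8cbe-1c5f-450e-866c-15c384580ad3
```

In practice the listing would come from `odacli list-dbhomes` redirected to a file; the ID returned is the one passed to `odacli upgrade-database -destDbHomeId`.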

Patching Oracle Database Release 19.12 and Earlier

Patching databases on Oracle Database Appliance is an online operation. The following steps describe how to patch databases in a standby configuration. These steps apply to databases on bare metal and databases on DB systems.

Note that if the database uses Oracle JVM, then you cannot patch the standby system first. Refer to My Oracle Support Note 2217053.1 - RAC Rolling Install Process for the "Oracle JavaVM Component Database PSU/RU" to confirm Oracle JVM usage. In that case, defer log shipping on the primary system and patch the primary system first.

Follow these steps:
  1. Verify that the system is operating correctly by running pre-checks, validating hardware and system processes, and verifying system configuration using ORAChk.
  2. Take a backup of the database.
  3. Stop log shipping on the primary.
    $ dgmgrl
    DGMGRL> connect sys/welcome1@chicago
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'ON'
    DGMGRL> edit database 'boston' SET PROPERTY 'LogShipping'='OFF';
    Property "LogShipping" updated
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'OFF'
  4. Stop the standby database and restart it in read only mode.
    [oracle@stbydb1]$ srvctl stop database -d boston 
    [oracle@stbydb1]$ srvctl start database -db boston -o "read only"
  5. Patch the standby database first. Identify the Oracle home of the database.
    [root@stbydb1 ~]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    667a0eec-910c-404b-9820-aedcddf668d7 chicago  Rac      19.11.0.0.210420  false  Oltp   Odb1   Asm      Configured  863c8cbe-1c5f-450e-866c-15c384580ad3

    [root@stbydb1 ~]# odacli list-dbhomes

    ID                                   Name              DB Version        Home Location                                         Status
    ------------------------------------ ----------------- ----------------- ----------------------------------------------------- ----------
    863c8cbe-1c5f-450e-866c-15c384580ad3 OraDB19000_home1  19.11.0.0.210420  /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1  Configured
    Run pre-checks on the Oracle home.

    [root@stbydb1 ~]# odacli update-dbhome -p -i 863c8cbe-1c5f-450e-866c-15c384580ad3 -v 19.14.0.0
    [root@stbydb1 ~]# odacli describe-job -i b4ee24d9-2b82-4c80-b789-ced90013e4b3

    Job details
    ----------------------------------------------------------------
    ID: b4ee24d9-2b82-4c80-b789-ced90013e4b3
    Description: DB Home Prechecks
    Status: Success
    Created: November 7, 2021 6:26:51 PM CET
    Apply the patches.

    [root@stbydb1 ~]# odacli update-dbhome -i 863c8cbe-1c5f-450e-866c-15c384580ad3 -v 19.14.0.0
    [root@stbydb1 ~]# odacli describe-job -i "e3556125-7ce6-4560-9f22-3fdd9738f955"

    Job details
    ----------------------------------------------------------------
    ID: e3556125-7ce6-4560-9f22-3fdd9738f955
    Description: DB Home Patching: Home Id is e4e9fcbd-63d4-4c56-bb0c-b239a4e749f3
    Status: Success
    Created: November 7, 2021 7:09:52 PM CET
    Verify the results.

    [root@stbydb1 ~]# odacli list-dbhomes

    ID                                   Name              DB Version        Home Location                                         Status
    ------------------------------------ ----------------- ----------------- ----------------------------------------------------- ----------
    e4e9fcbd-63d4-4c56-bb0c-b239a4e749f3 OraDB19000_home2  19.14.0.0.220118  /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2  Configured
    863c8cbe-1c5f-450e-866c-15c384580ad3 OraDB19000_home1  19.11.0.0.210420  /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1  Configured

    [root@stbydb1 ~]# odacli list-databases

    ID                                   DB Name  DB Type  DB Version        CDB    Class  Shape  Storage  Status      DbHomeID
    ------------------------------------ -------- -------- ----------------- ------ ------ ------ -------- ----------  ------------------------------------
    667a0eec-910c-404b-9820-aedcddf668d7 chicago  Rac      19.14.0.0.220118  false  Oltp   Odb1   Asm      Configured  e4e9fcbd-63d4-4c56-bb0c-b239a4e749f3
  6. Patch the primary database, similar to the steps for patching the standby database.
  7. Start log shipping on the primary and verify Oracle Data Guard configuration.
    DGMGRL> connect sys/welcome1@chicago 
    DGMGRL> edit database 'boston' SET PROPERTY 'LogShipping'='ON';
    Property "LogShipping" updated
    DGMGRL> SHOW DATABASE 'boston' 'LogShipping';
    LogShipping = 'ON'
    DGMGRL> show configuration verbose 
    DGMGRL> show database verbose chicago 
    DGMGRL> show database verbose boston 
    DGMGRL> validate database chicago 
    DGMGRL> validate database boston
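Each patching step above gates on a Success status from `odacli describe-job` before you move to the next system. A hypothetical check of a saved job transcript could look like this; the transcript file and its contents are samples modeled on the output shown above.

```shell
#!/bin/sh
# Hypothetical helper: confirm that a saved `odacli describe-job` transcript
# reports "Status: Success" before proceeding to the next patching step.
job_succeeded() {
  grep -q '^Status:[[:space:]]*Success' "$1"
}

# Sample transcript modeled on the describe-job output shown above:
cat > /tmp/job.txt <<'EOF'
ID: e3556125-7ce6-4560-9f22-3fdd9738f955
Description: DB Home Patching: Home Id is e4e9fcbd-63d4-4c56-bb0c-b239a4e749f3
Status: Success
Created: November 7, 2021 7:09:52 PM CET
EOF

job_succeeded /tmp/job.txt && echo "job succeeded"
# prints: job succeeded
```

A wrapper like this is useful when scripting the standby-then-primary patch sequence, since patching the primary should not begin until the standby job has finished successfully.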

Configuring NFS Server on Oracle Database Appliance

If neither NAS nor Oracle Object Storage is an option, then configure NFS on one of the Oracle Database Appliance systems to take a backup of the source database and to restore it as a standby on the target system.

The NFS server must be configured on the bare metal systems of both the primary and the standby, whether Oracle Data Guard is set up on bare metal systems or on DB systems.

Follow these steps:
  1. Create an ADVM volume on source bare metal system node0 as the grid operating system user.
    [grid@odabm1 ~]$ asmcmd 
    ASMCMD> volcreate -G data -s 100G backup
    ASMCMD> volinfo -G data backup 
    Diskgroup Name: DATA 
    Volume Name: BACKUP 
    Volume Device: /dev/asm/backup-322 
    State: ENABLED 
    Size (MB): 102400 
    Resize Unit (MB): 64 
    Redundancy: HIGH 
    Stripe Columns: 8 
    Stripe Width (K): 4096 
    Usage: Mountpath:
  2. Format the volume as Oracle ACFS.
    [grid@odabm1 ~]$ mkfs -t acfs /dev/asm/backup-322 
    mkfs.acfs: version = 19.0.0.0.0 
    mkfs.acfs: on-disk version = 46.0 
    mkfs.acfs: volume = /dev/asm/backup-322 
    mkfs.acfs: volume size = 107374182400 ( 100.00 GB ) 
    mkfs.acfs: Format complete.
  3. Create a mount point on both nodes. Run the command on both nodes on the bare metal system:
    # mkdir /backup
  4. Register the file system with Oracle Clusterware and start it as root operating system user.
    [root@odabm1 ~]# /u01/app/19.15.0.0/grid/bin/srvctl add filesystem -d /dev/asm/backup-322 -path /backup -mountowner oracle -mountgroup dba 
    [root@odabm1 ~]# /u01/app/19.15.0.0/grid/bin/srvctl start filesystem -d /dev/asm/backup-322
  5. Append to /etc/exports on node0 on the bare metal system and make it active.
    [root@odabm1 ~]# vi /etc/exports 
    /backup *(rw,sync,no_root_squash) 
    
    //or add each source and target nodes separately:
    /backup primary1(rw,sync,no_root_squash) 
    /backup primary2(rw,sync,no_root_squash) 
    /backup standby1(rw,sync,no_root_squash) 
    /backup standby2(rw,sync,no_root_squash)
     
    //where primary1, primary2 nodes refer to the nodes hosting the primary database and standby1, 
    //standby2 refer to the nodes hosting the standby
    [root@odabm1 ~]# exportfs -a 
    [root@odabm1 ~]# exportfs -v 
    ... 
    /backup *(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
  6. Create a mount point on the source and the target nodes using the same mount point name.
    # mkdir /odabackup
  7. Mount the file system on both nodes using the public IP address of node0 on the source bare metal system.
    # mount -t nfs 192.168.17.2:/backup /odabackup
  8. As the oracle user ID may be different between the source and the target, create a subfolder under /odabackup and change its ownership to oracle:dba.
    # mkdir /odabackup/db 
    If the database is TDE-enabled, then one more folder is required:
    # mkdir /odabackup/tde
    # chown -R oracle:dba /odabackup
  9. After configuring NFS on both the source and the target, follow the Oracle Data Guard configuration process until the step to restore the database as a standby.
  10. Before restoring the database, change the ownership to oracle:dba on the target. The user and group IDs may be different between the source and target.
    # chown -R oracle:dba /odabackup
  11. Complete the Oracle Data Guard configuration.
  12. After configuring Oracle Data Guard, revert all NFS-related changes.
    1. Unmount /odabackup on the source and target nodes.
      # umount /odabackup
    2. Unmount /backup on all bare metal system nodes.
      # umount /backup
    3. Remove /backup from /etc/exports.
    4. Update the NFS configuration on the first bare metal system node.
      [root@odabm1 ~]# exportfs -a
    5. Delete the backup Oracle ACFS file system from the Oracle Clusterware configuration.
      [root@odabm1 ~]# /u01/app/19.15.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/backup-322 
      [root@odabm1 ~]# /u01/app/19.15.0.0/grid/bin/srvctl remove filesystem -d /dev/asm/backup-322
    6. Delete the backup-related Oracle ADVM volume as the grid operating system user on the bare metal system node.
      [grid@odabm1 ~]$ asmcmd 
      ASMCMD> voldelete -G data backup
    7. Reassign the original backup configuration to the primary database. In this example, the database is reset to the backup configuration named default.
      [root@proddb1 ~]# odacli modify-database -in databasename -bin default
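The per-node /etc/exports entries shown in step 5 of the NFS setup can be generated rather than typed by hand. This is a minimal sketch using the same placeholder hostnames; substitute the actual node names of your primary and standby systems.

```shell
#!/bin/sh
# Hypothetical sketch: emit one /etc/exports entry per Data Guard node,
# in the per-node form shown in the NFS setup above. The hostnames are
# the same placeholders used there.
gen_exports() {
  for node in "$@"; do
    printf '/backup %s(rw,sync,no_root_squash)\n' "$node"
  done
}

gen_exports primary1 primary2 standby1 standby2
```

Restricting the export to named nodes, instead of the wildcard `*` form, limits which hosts can mount the backup file system.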