9 Upgrading Oracle Database Appliance Using Data Preserving Reprovisioning

Understand how you can directly upgrade your Oracle Database Appliance software from Oracle Database Appliance releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8 to the latest release without upgrading to intermediate releases.

About Upgrading Using Data Preserving Reprovisioning

Understand how you can upgrade your appliance from Oracle Database Appliance releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8 to the latest release without upgrading to the intermediate releases.

When you upgrade from Oracle Database Appliance releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8, you are normally required to upgrade through intermediate releases before you can finally patch your appliance to the latest release. This upgrade path involves many steps, and can result in a long patching duration and extended application downtime. With Data Preserving Reprovisioning, you can upgrade your appliance directly to the latest release.

About Upgrading Using Data Preserving Reprovisioning

Data Preserving Reprovisioning enables reprovisioning of an already deployed Oracle Database Appliance system without modifying the storage and the databases on the appliance. The advantage of this method over the regular upgrade process is a much shorter upgrade path. This is achieved by saving the configuration information of the source system in server data archive files. The appliance is then reimaged to the desired version, such as Oracle Database Appliance release 19.20 or later, and the saved metadata is used to directly reprovision the system and bring back all the databases.

Advantages of using Data Preserving Reprovisioning for upgrade are as follows:
  • The upgrade utility runs prechecks on the system, such as detecting inactive databases, and warns you before you upgrade the appliance. You can proactively address these failures beforehand rather than encounter them during reprovisioning of the appliance.
  • During the first step of detaching the node, information about the system is collected and preserved, including the VLAN, CPU, and Oracle AFD settings. These settings survive the reimage, and the third step of the process reprovisions them.
  • The deployment is initially at Oracle Database Appliance release 12.1.2.12, 12.2.1.4, or 18.x, but after the reprovisioning process, the software is upgraded to Oracle Database Appliance release 19.20 and the deployment automatically starts using new features wherever applicable. For example, the database software is installed on Oracle ACFS-based storage.
  • You can upgrade your appliance directly to the latest Oracle Database Appliance release without upgrading to intermediate releases.

Steps in the Data Preserving Reprovisioning for Upgrade Process

There are three steps in this process:
  1. Detach Nodes using the Oracle Database Appliance upgrade utility from the source version of the appliance: This step saves the metadata about the databases, listeners, networks, and other configuration details in archive files, namely, the server data archive files. Then, the services running on the system are shut down and uninstalled to prepare the environment for the reimage in step 2.

    The server data archive files are generated after the nodes are successfully detached. You must save the server data archive files in a location outside the appliance being upgraded, and copy these files back to the appliance to restore the system in step 3.

  2. Reimage Nodes using the Oracle Database Appliance ISO image: The procedure is similar to provisioning the appliance. This step sets up the operating system and DCS software with the Oracle Database Appliance release you want to upgrade to.
  3. Provision Nodes using the Data Preserving Reprovisioning method: This step reconfigures networks, operating system users, and operating system groups, installs Oracle Grid Infrastructure, and configures the licensed CPU cores. Then, this step reprovisions the databases to the same state they were in before they were detached in step 1. The databases are restarted and added to the Oracle Grid Infrastructure cluster.

The procedure for each step is detailed in the subsequent topics in this chapter.

Customizations to the Appliance and Their Persistence After Upgrade

As part of the upgrade process, the Data Preserving Reprovisioning procedure involves reimaging the appliance with the latest ISO image. Hence, any prior customizations to the operating system configuration or settings are lost during the reimage. Note the impact on the following customizations during the Data Preserving Reprovisioning process:
  • Custom RPMs: If your appliance has any custom operating system RPMs installed from the Oracle Linux Yum repository, then the prechecks report lists these custom RPMs. You must uninstall these RPMs and then continue with the next step in the upgrade process. You can reinstall these custom RPMs as required, after the upgrade.
  • Fixes applied by STIG and CIS scripts: Since the system is reimaged during the upgrade process, fixes applied on the appliance to conform with Security Technical Implementation Guides (STIG) and Center for Internet Security (CIS) benchmarks are lost. When you reimage with the latest ISO image, the operating system is upgraded to Oracle Linux 7. You must then run the STIG and CIS scripts again.
  • Oracle ASR: Oracle ASR is not restored during the reprovisioning process. After the reprovisioning process, you can manually configure Oracle ASR with the latest RPMs using the command odacli configure-asr.

Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning

Detaching the Oracle Database Appliance nodes is step 1 in upgrading from Oracle Database Appliance releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8 to the latest release.

This step first checks the system to verify that the nodes can be detached. If the checks pass, the step saves the metadata about the databases, listeners, and networks that helps bring back all the services in step 3. Then, the services running on the machine are shut down and uninstalled to prepare the environment for the reimage in step 2. Note that the data on the root file system of the boot disk is removed during the reimage in step 2, but the data in the Oracle ASM disk groups is kept intact.

Important:

Run the commands in this topic in the same order as documented.

Important:

Ensure that you take a backup of the databases before you start this process. The nodes are reimaged during this process, so ensure that your backup is stored in a location outside the appliance.

WARNING:

Do not run cleanup.pl either before or after running the odaupgradeutil detach-node command. Running cleanup.pl erases all the Oracle ASM disk groups on the storage and you cannot reprovision your Oracle Database Appliance system.

Follow these steps. For high-availability systems, run the commands on both nodes, one node at a time. Complete all the steps below on the first node and, if all of them succeed, repeat them on the second node.
  1. Download the Oracle Database Appliance Upgrade Utility (Patch 33594115) from My Oracle Support to a temporary location on an external client. Refer to the Oracle Database Appliance Release Notes for details about the patch numbers and software for the latest release.
  2. Unzip the software and save it to the /opt/oracle directory.
    cd /opt/oracle
    unzip p33594115_1920000_Linux-x86-64.zip
    unzip -d /opt/oracle odaupgradeutil_date.zip

    The utility is extracted to the /opt/oracle/odaupgradeutil location.

  3. Run pre-checks to evaluate whether the system is ready for upgrade.
    [root@node1 odaupgradeutil]# ./odaupgradeutil run-prechecks
    The command runs pre-checks related to Oracle Grid Infrastructure, databases, OAK, firmware, and other components. These checks determine whether the current node of the Oracle Database Appliance can be successfully detached. If failures are reported, then review them in the report and take appropriate action. The precheck report is generated in the location /opt/oracle/oak/restore/prechecks/precheck_report.json. The log of the precheck operation is saved at /opt/oracle/oak/restore/log/odaupgradeutil_prechecks_timestamp.log. The odaupgradeutil utility logs are stored at /opt/oracle/oak/restore/log.
    For information about the components in the prechecks report and the errors and possible fixes for the prechecks report, see Troubleshooting Data Preserving Reprovisioning on Oracle Database Appliance in this guide.
    For example:
    [root@node1 odaupgradeutil]# ./odaupgradeutil run-prechecks
    ******************************
    ODAUPGRADEUTIL
    ------------------------------
    Version : 19.20.0.0.0
      Build : 19.20.0.0.0.230629
    ******************************
    
    Initializing...
    ########################## ODAUPGRADEUTIL - INIT - BEGIN ##########################
    Please check /opt/oracle/oak/restore/log/odaupgradeutil_init_05-05-2023_12:05:35.log for details.
    Get System Version...BEGIN
    System Version is: 12.2.1.4.0
    Get System Version...DONE
    Get Hardware Info...BEGIN
    Hardware Model: X6-2, Hardware Platform: S
    Get Hardware Info...DONE
    Get Grid home...BEGIN
    Grid Home is: /u01/app/12.2.0.1/grid
    Get Grid home...DONE
    Get system configuration details...BEGIN
    Grid user is: grid
    Oracle user is: oracle
    Get system configuration details...DONE
    ########################## ODAUPGRADEUTIL - INIT - END ##########################
    *********
    IMPORTANT
    *********
    odaupgradeutil will bring down the databases and grid services on the system. 
    The files that belong to the databases, which are stored on ASM or ACFS, 
    are left intact on the storage. The databases will be started up back after 
    re-imaging the ODA system using 'odacli restore-node' commands.
    As a good precautionary measure, please backup all the databases on the
    system before you start this process. Do not store the backup on this ODA 
    machine since the local file system will be wiped out as part of the re-image.
    *********
    IMPORTANT
    *********
    ########################## ODAUPGRADEUTIL - PRECHECKS - BEGIN ##########################
    Please check /opt/oracle/oak/restore/log/odaupgradeutil_prechecks_05-05-2023_12:05:55.log for details.
    System version precheck...BEGIN
    System version precheck...PASSED
    System config precheck...BEGIN
    System config precheck...PASSED
    Required Files precheck...BEGIN
    Required Files precheck...PASSED
    Need to discover DB homes
    Get Database homes...BEGIN
    Get Database homes...SUCCESS
    Disk space precheck...BEGIN
    Get Quorum Disks...BEGIN
    Get Quorum Disks...SUCCESS
    Disk space precheck...PASSED
    DCS Agent status precheck...BEGIN
    DCS Agent status precheck...PASSED
    OAK precheck...BEGIN
    OAK precheck...PASSED
    ASM precheck...BEGIN
    ASM precheck...PASSED
    Database precheck...BEGIN
    Get databases...BEGIN
      Database Name: tdb
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_1
      Database Name: acZ
      Oracle Home: /u01/app/oracle/product/11.2.0.4/dbhome_1
      Database Name: KLv
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_1
      Database Name: U4
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_2
      Database Name: onetdb1
      Oracle Home: /u01/app/oracle/product/11.2.0.4/dbhome_2
      Database Name: onetdb3
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_3
      Database Name: sitdb2
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_2
      Database Name: sitdb4
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_3
    Get databases...SUCCESS
    Database precheck...PASSED
    Audit Files precheck...BEGIN
    Audit Files precheck...WARNING
    Custom RPMs precheck...BEGIN
    Custom RPMs precheck...PASSED
    ########################## ODAUPGRADEUTIL - PRECHECKS - END ##########################
    Use 'odaupgradeutil describe-precheck-report [-j]' to view the precheck report.
  4. Review the prechecks report:
    [root@node1 odaupgradeutil]# ./odaupgradeutil describe-precheck-report

    For example:

    [root@node1 odaupgradeutil]# ./odaupgradeutil describe-precheck-report
    COMPONENT       STATUS  MESSAGE                                                        ACTION                                                        
    -------------------------------------------------------------------------------------------------------------------------------------------------------
    SYSTEM VERSION  PASSED  PASSED                                                         NONE                                                          
     
    REQUIRED FILES  PASSED  PASSED                                                         NONE                                                          
     
    DISK SPACE      PASSED  PASSED                                                         NONE                                                          
     
    OAK             PASSED  PASSED                                                         NONE                                                          
     
    ASM             PASSED  PASSED                                                         NONE                                                          
     
    DATABASES       PASSED  PASSED                                                         NONE                                                          
     
    AUDIT FILES     WARNING Audit files found under ['/u01/app/oracle/product/12.2.0.1/    These files will be lost after reimage, advise is to backup   
                            dbhome_1/rdbms/audit', '/u01/app/oracle/admin', '/var/log']    if necessary                                                  
     
    OSRPMS          PASSED  PASSED                                                         NONE                                                           

    Note:

    If the odaupgradeutil run-prechecks command is run on an Oracle Database Appliance system with DCS version 18.x that was migrated from an OAK environment, then the operating system RPMs precheck displays a warning with a list of additional RPMs. These RPMs remain from the migration process. This warning is expected behavior, because the installed RPMs are compared with the RPMs in the ISO image for the Oracle Database Appliance release. It is merely a warning and does not cause any issues.
  5. If there are failures in the precheck report, take corrective action as suggested in the ACTION column. After fixing the failures, rerun the precheck as explained in step 3 in this procedure. If there are no failures in the precheck report, then run the command to detach the node:
    [root@node1 odaupgradeutil]# ./odaupgradeutil detach-node
    ******************************
    ODAUPGRADEUTIL
    ------------------------------
    Version : 19.20.0.0.0
      Build : 19.20.0.0.0.230629
    ******************************
    
    *********
    IMPORTANT
    *********
    odaupgradeutil will bring down the databases and grid services on the system. 
    The files that belong to the databases, which are stored on ASM or ACFS, 
    are left intact on the storage. The databases will be started up back after 
    re-imaging the ODA system using 'odacli restore-node' commands.
    As a good precautionary measure, please backup all the databases on the
    system before you start this process. Do not store the backup on this ODA 
    machine since the local file system will be wiped out as part of the re-image.
    *********
    IMPORTANT
    *********
    Do you want to continue? [y/n]: Y
    ########################## ODAUPGRADEUTIL - SAVECONF - BEGIN ##########################
    Please check /opt/oracle/oak/restore/log/odaupgradeutil_saveconf_05-05-2023_12:07:36.log for details.
    Backup files to /opt/oracle/oak/restore/bkp...BEGIN
    Backup files to /opt/oracle/oak/restore/bkp...SUCCESS
    Get provision instance...BEGIN
    Get provision instance...SUCCESS
    Get network configuration...BEGIN
    Get network configuration...SUCCESS
    Get databases...BEGIN
      Database Name: tdb
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_1
      Database Name: acZ
      Oracle Home: /u01/app/oracle/product/11.2.0.4/dbhome_1
      Database Name: KLv
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_1
      Database Name: U4
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_2
      Database Name: onetdb1
      Oracle Home: /u01/app/oracle/product/11.2.0.4/dbhome_2
      Database Name: onetdb3
      Oracle Home: /u01/app/oracle/product/12.2.0.1/dbhome_3
      Database Name: sitdb2
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_2
      Database Name: sitdb4
      Oracle Home: /u01/app/oracle/product/12.1.0.2/dbhome_3
    Get databases...SUCCESS
    Get Database homes...BEGIN
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.2.0.1/dbhome_1'
      Unified Auditing is set to FALSE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/11.2.0.4/dbhome_1'
      Could not determine Unified Auditing status, defaulting to TRUE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.1.0.2/dbhome_1'
      Unified Auditing is set to FALSE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.2.0.1/dbhome_2'
      Unified Auditing is set to FALSE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/11.2.0.4/dbhome_2'
      Could not determine Unified Auditing status, defaulting to TRUE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.2.0.1/dbhome_3'
      Unified Auditing is set to FALSE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.1.0.2/dbhome_2'
      Unified Auditing is set to FALSE
      Checking Unified Auditing for dbhome '/u01/app/oracle/product/12.1.0.2/dbhome_3'
      Unified Auditing is set to FALSE
    Get Database homes...SUCCESS
    Get Database storages...BEGIN
      Database Name: tdb
        DATA destination: /u02/app/oracle/oradata/tdb/
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
      Database Name: acZ
        DATA destination: /u02/app/oracle/oradata/acZ
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
      Database Name: KLv
        DATA destination: +DATA
        RECO destination: +RECO
        REDO destination: +RECO
        Flash Cache destination: 
      Database Name: U4
        DATA destination: +DATA
        RECO destination: +RECO
        REDO destination: +RECO
        Flash Cache destination: 
      Database Name: onetdb1
        DATA destination: /u02/app/oracle/oradata/onetdb1
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
      Database Name: onetdb3
        DATA destination: /u02/app/oracle/oradata/onetdb3/
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
      Database Name: sitdb2
        DATA destination: /u02/app/oracle/oradata/sitdb2
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
      Database Name: sitdb4
        DATA destination: /u02/app/oracle/oradata/sitdb4
        RECO destination: /u03/app/oracle/fast_recovery_area/
        REDO destination: /u03/app/oracle/redo/
        Flash Cache destination: 
    Get Database storages...SUCCESS
    Get Volumes...BEGIN
    Get Volumes...SUCCESS
    Get Filesystems...BEGIN
    Get Filesystems...SUCCESS
    Get Quorum Disks...BEGIN
    Get Quorum Disks...SUCCESS
    SAVECONF: SUCCESS
    ########################## ODAUPGRADEUTIL - SAVECONF - END ##########################
    ########################## ODAUPGRADEUTIL - DETACHNODE - BEGIN ##########################
    Please check /opt/oracle/oak/restore/log/odaupgradeutil_detachnode_05-05-2023_12:09:44.log for details.
    Deconfigure databases...BEGIN
      Database Name: tdb
      Local Instance: tdb
      Local Instance Status: RUNNING
      Stopping database 'tdb'...
      Removing database 'tdb' from CRS...
      Database Name: acZ
      Local Instance: acZ
      Local Instance Status: RUNNING
      Stopping database 'acZ'...
      Removing database 'acZ' from CRS...
      Database Name: KLv
      Local Instance: KLv
      Local Instance Status: RUNNING
      Stopping database 'KLv'...
      Removing database 'KLv' from CRS...
      Database Name: U4
      Local Instance: U4
      Local Instance Status: RUNNING
      Stopping database 'U4'...
      Removing database 'U4' from CRS...
      Database Name: onetdb1
      Local Instance: onetdb1
      Local Instance Status: RUNNING
      Stopping database 'onetdb1'...
      Removing database 'onetdb1' from CRS...
      Database Name: onetdb3
      Local Instance: onetdb3_1
      Local Instance Status: RUNNING
      Stopping database 'onetdb3'...
      Removing database 'onetdb3' from CRS...
      Database Name: sitdb2
      Local Instance: sitdb2
      Local Instance Status: RUNNING
      Stopping database 'sitdb2'...
      Removing database 'sitdb2' from CRS...
      Database Name: sitdb4
      Local Instance: sitdb4
      Local Instance Status: RUNNING
      Stopping database 'sitdb4'...
      Removing database 'sitdb4' from CRS...
    Deconfigure databases...SUCCESS
    Get DB backup metadata...BEGIN
    No backupconfigs found
    No backupreports found
    Quorum Disks were found
      Quorum Disk: /dev/SSD_QRMDSK_p1, Size: 10240 MB... will be resized to 1024 MB
      Resizing disk: /dev/SSD_QRMDSK_p1 ...
      Quorum Disk: /dev/SSD_QRMDSK_p2, Size: 10240 MB... will be resized to 1024 MB
      Resizing disk: /dev/SSD_QRMDSK_p2 ...
    Deconfigure Grid Infrastructure...BEGIN
    Deconfigure Grid Infrastructure...SUCCESS
    Backup quorum disks...
      Backing up quorum disk '/dev/SSD_QRMDSK_p1'
      Backing up quorum disk '/dev/SSD_QRMDSK_p2'
    Create serverarchives...BEGIN
      Serverarchive '/opt/oracle/oak/restore/out/serverarchive_rwsoda6s002.zip' created
      Size = 381604 bytes
      SHA256 checksum = 405ab7068fee857755836d1174eec2ab1fb2a7accbab4655a828e01b22da50e8
    Create serverarchives...DONE
    DETACHNODE: SUCCESS
    [CRITICAL] Server data archive file(s) generated at /opt/oracle/oak/restore/out . Please ensure the file(s) are copied outside the ODA system and preserved.
    The checksums of the server archive files are stored as serverarchive_name.sha256, for example, serverarchive_host123.sha256, in the same location where the server archive files are generated, that is, /opt/oracle/oak/restore/out. After copying a file to the external location, check that the checksum of the copy is identical to the checksum displayed in the output. The checksum confirms that all the bytes were transferred and that there were no network issues. If the checksums are not identical after the copy operation, then copy the files again.
    When you run the odaupgradeutil detach-node command, it saves all the metadata and generates the server data archive files after the process is completed. The current node is detached from the Oracle Grid Infrastructure cluster by deinstalling the databases and Oracle Grid Infrastructure. The deinstallation does not affect the Oracle ASM disk groups and Oracle ACFS volumes, or the stored files such as data files, control files, and archive logs.

    In the case of high-availability systems, run all the steps in this procedure on each node separately, one node at a time. When the command is run on the first node, the services are brought down and the software is deinstalled on that node, while the services continue to run on the second node. The database, for example, can still be connected to through the second node, and queries can be issued. The second node remains functional until it is detached, at which point there is full downtime of the database and Oracle Grid Infrastructure.

    WARNING:

    Ensure that there is no hardware or networking change after issuing the command odaupgradeutil detach-node.
  6. On successful completion of the command (on both nodes, for high-availability systems), the following zip files are generated:
    For high-availability systems, three zip files are generated:
    /opt/oracle/oak/restore/out/serverarchive_node0_hostname.zip,
    /opt/oracle/oak/restore/out/serverarchive_node1_hostname.zip, and
    /opt/oracle/oak/restore/out/serverarchive_cluster_name_common.zip
    The serverarchive_node0_hostname.zip and serverarchive_node1_hostname.zip files contain the file configure-firstnet.rsp. This file contains the values that you must provide when running odacli configure-firstnet after reimaging the system in step 2.
    For single-node systems, only one zip file is generated:
    /opt/oracle/oak/restore/out/serverarchive_host_name.zip
    This zip file also contains configure-firstnet.rsp, which stores the values that you must provide when running odacli configure-firstnet after reimaging the system in step 2.
  7. Copy the files to a location outside of the Oracle Database Appliance system, to prepare the environment for reimage.
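    The copy-and-verify flow in steps 6 and 7 can be sketched as follows. This is a hypothetical, locally runnable illustration: the /tmp paths stand in for /opt/oracle/oak/restore/out and the external backup location, and the archive name and contents are placeholders for your actual server data archive files.

```shell
# Hypothetical sketch of the copy-and-verify flow. /tmp paths stand in for
# /opt/oracle/oak/restore/out and the external backup location; the archive
# name and contents are placeholders.
src=/tmp/oda_restore_out
dst=/tmp/external_backup
mkdir -p "$src" "$dst"
printf 'stand-in archive bytes\n' > "$src/serverarchive_host123.zip"
# The detach step records a SHA-256 checksum next to each archive:
( cd "$src" && sha256sum serverarchive_host123.zip > serverarchive_host123.sha256 )
# Copy both files off the appliance (in practice, scp them to another host):
cp "$src"/serverarchive_host123.zip "$src"/serverarchive_host123.sha256 "$dst"/
# At the destination, confirm the copy is intact before proceeding:
( cd "$dst" && sha256sum -c serverarchive_host123.sha256 )
```

    If sha256sum -c reports a mismatch, copy the files again before proceeding to the reimage.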

WARNING:

Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance in Step 2 of this process. Without these files, the system cannot be reprovisioned in Step 3 and you will lose all data stored in the Oracle ASM disk groups.

Important:

When the source version is running DCS software, the odaupgradeutil commands do not edit the DCS metadata. This implies that when a resource, such as a database, is deconfigured, the command odacli list-databases continues to show the status as CONFIGURED. However, in reality, the database service is brought down and the listeners are no longer active, which can be verified using the srvctl command from the database home. This is expected behavior.

Step 2: Reimaging Nodes for Upgrading Using Data Preserving Reprovisioning

Reimaging the Oracle Database Appliance nodes is step 2 in upgrading from Oracle Database Appliance releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8 to the latest release.

WARNING:

Do not run cleanup.pl either before or after reimaging the nodes. Running cleanup.pl erases all the Oracle ASM disk groups on the storage and you cannot reprovision your Oracle Database Appliance system.
  1. Download the Oracle Database Appliance release 19.20 bare metal ISO image and reimage the appliance as described in the topic Reimaging an Oracle Database Appliance Baremetal System.
  2. Plumb the network as described in the topic Plumbing the Network.

Important:

For high-availability systems, serverarchive_node0_hostname.zip and serverarchive_node1_hostname.zip contain the file configure-firstnet.rsp. For single-node systems, serverarchive_hostname.zip contains the file configure-firstnet.rsp. The configure-firstnet.rsp file contains the values that you must provide when running odacli configure-firstnet after reimaging the system. Extract the file configure-firstnet.rsp, use any text editor to open it, and then provide the IP address that was saved in the file.

Step 3: Provisioning Nodes Using Data Preserving Reprovisioning Method

Provisioning the nodes is step 3 in upgrading Oracle Database Appliance from releases 12.1.2.12, 12.2.1.4, 18.3, 18.5, 18.7, and 18.8 to the latest release.

After reimaging of the nodes has completed successfully, the operating system and DCS software are now updated to the latest release. Update the firmware and other components by downloading the Server Patch and updating the repository, server, storage, and other components.

WARNING:

Update the firmware immediately after reimaging the system with Oracle Database Appliance release 19.20 or later. Failing to update the firmware can lead to errors during the reprovisioning step.

WARNING:

Do not run cleanup.pl before you run the command odacli restore-node -g. Running cleanup.pl erases all the Oracle ASM disk groups on the storage, and you cannot then reprovision your Oracle Database Appliance system with all databases intact. However, after you run the command odacli restore-node -g at least once and the reprovisioning process has started, the cleanup is specific to that reprovisioning attempt and does not erase the Oracle ASM disk groups. If the command odacli restore-node -g fails, then you can use cleanup.pl to clean up failures in that step. In that case, run the command odacli restore-node -g again to complete the provisioning.

WARNING:

After reimaging the appliance, do not log into the Browser User Interface (BUI). When running the odacli restore-node -g command, you are prompted for the password for the oda-admin user. Use this password to log into the BUI after the odacli restore-node -g and odacli restore-node -d commands complete successfully. Do not start the BUI before completing the odacli restore-node -g and odacli restore-node -d operations.

Follow these steps. For high-availability systems, run the commands on one node only.
  1. Download the Oracle Database Appliance Server Patch for the ODACLI/DCS stack and update the repository with the server software file as described in the topic Patching Oracle Database Appliance Bare Metal Systems Using the Command-Line:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
  2. Create the pre-patch report for the odacli update-server command by specifying the -s option.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -v 19.20.0.0.0 -s

    Fix the warnings and errors mentioned in the report and proceed with the server patching.

  3. Update the server:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v version

    Updating the server in this step updates only the Oracle ILOM and boot disk firmware since the appliance is not yet reprovisioned.

  4. Update the storage:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-storage -v version
  5. Update the repository with the server data archive files generated in Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning of this upgrade process.
    For High-Availability systems, specify the three zip files generated in Step 1: Detaching Nodes.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f serverarchive_node0_hostname.zip,serverarchive_node1_hostname.zip,serverarchive_cluster_name_common.zip
    For single-node systems, specify the zip file generated in Step 1: Detaching Nodes.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f serverarchive_host_name.zip
  6. Update the repository with the Oracle Grid Infrastructure clone of release 19.20 or later:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f odacli-dcs-19.version.0.0.0-date-GI-19.version.0.0.zip
  7. Reprovision the appliance:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli restore-node -g
    This command reconfigures networks, operating system users, and operating system groups and installs the latest Oracle Grid Infrastructure. At this step, the installation reuses the existing Oracle ASM disk groups instead of creating new ones.
    For example:
    [root@oak clones]# odacli restore-node -g
    Enter new system password: 
    Retype new system password: 
    Enter an initial password for Web Console account (oda-admin):
    Retype the password for Web Console account (oda-admin):
    User 'oda-admin' created successfully...
    {
      "jobId" : "120d447f-be28-46b4-b9cd-da652133bbee",
      "status" : "Created",
      "message" : "The system will reboot, if required, to enable the licensed number of CPU cores",
      "reports" : [ ],
      "createTimestamp" : "October 05, 2022 15:02:31 PM UTC",
      "resourceList" : [ ],
      "description" : "Restore node service - GI",
      "updatedTime" : "October 05, 2022 15:02:31 PM UTC"
    }
    
    [root@oak ~]# /opt/oracle/dcs/bin/odacli describe-job -i 120d447f-be28-46b4-b9cd-da652133bbee 
    
    Job details                                                      
    ----------------------------------------------------------------
                         ID:  120d447f-be28-46b4-b9cd-da652133bbee
                Description:  Restore node service - GI
                     Status:  Success
                    Created:  June 6, 2023 3:02:31 PM UTC
                    Message:  The system will reboot, if required, to enable the licensed number of CPU cores
    
    Task Name                                Start Time                          End Time                            Status    
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Restore node service creation            June 6, 2023 3:02:46 PM UTC          June 6, 2023 3:29:35 PM UTC          Success   
    Setting up Network                       June 6, 2023 3:02:47 PM UTC          June 6, 2023 3:02:47 PM UTC          Success   
    Setting up Vlan                          June 6, 2023 3:03:09 PM UTC          June 6, 2023 3:03:37 PM UTC          Success   
    Setting up Network                       June 6, 2023 3:03:59 PM UTC          June 6, 2023 3:03:59 PM UTC          Success   
    network update                           June 6, 2023 3:04:26 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    updating network                         June 6, 2023 3:04:26 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    Setting up Network                       June 6, 2023 3:04:26 PM UTC          June 6, 2023 3:04:26 PM UTC          Success   
    OS usergroup 'asmdba'creation            June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS usergroup 'asmoper'creation           June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS usergroup 'asmadmin'creation          June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS usergroup 'dba'creation               June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS usergroup 'dbaoper'creation           June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS usergroup 'oinstall'creation          June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS user 'grid'creation                   June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    OS user 'oracle'creation                 June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    Default backup policy creation           June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    Backup config metadata persist           June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    Grant permission to RHP files            June 6, 2023 3:04:49 PM UTC          June 6, 2023 3:04:49 PM UTC          Success   
    Add SYSNAME in Env                       June 6, 2023 3:04:50 PM UTC          June 6, 2023 3:04:50 PM UTC          Success   
    Install oracle-ahf                       June 6, 2023 3:04:50 PM UTC          June 6, 2023 3:05:57 PM UTC          Success   
    Stop DCS Admin                           June 6, 2023 3:05:58 PM UTC          June 6, 2023 3:05:59 PM UTC          Success   
    Generate mTLS certificates               June 6, 2023 3:05:59 PM UTC          June 6, 2023 3:06:00 PM UTC          Success   
    Exporting Public Keys                    June 6, 2023 3:06:00 PM UTC          June 6, 2023 3:06:02 PM UTC          Success   
    Creating Trust Store                     June 6, 2023 3:06:02 PM UTC          June 6, 2023 3:06:04 PM UTC          Success   
    Update config files                      June 6, 2023 3:06:04 PM UTC          June 6, 2023 3:06:04 PM UTC          Success   
    Restart DCS Admin                        June 6, 2023 3:06:04 PM UTC          June 6, 2023 3:06:25 PM UTC          Success   
    Unzipping storage configuration files    June 6, 2023 3:06:25 PM UTC          June 6, 2023 3:06:25 PM UTC          Success   
    Reloading multipath devices              June 6, 2023 3:06:25 PM UTC          June 6, 2023 3:06:25 PM UTC          Success   
    restart oakd                             June 6, 2023 3:06:25 PM UTC          June 6, 2023 3:06:36 PM UTC          Success   
    Reloading multipath devices              June 6, 2023 3:07:36 PM UTC          June 6, 2023 3:07:36 PM UTC          Success   
    restart oakd                             June 6, 2023 3:07:36 PM UTC          June 6, 2023 3:07:47 PM UTC          Success   
    Restore Quorum Disks                     June 6, 2023 3:07:47 PM UTC          June 6, 2023 3:07:47 PM UTC          Success   
    Creating GI home directories             June 6, 2023 3:07:47 PM UTC          June 6, 2023 3:07:47 PM UTC          Success   
    Extract GI clone                         June 6, 2023 3:07:47 PM UTC          June 6, 2023 3:09:32 PM UTC          Success   
    Creating wallet for Root User            June 6, 2023 3:09:32 PM UTC          June 6, 2023 3:09:37 PM UTC          Success   
    Creating wallet for ASM Client           June 6, 2023 3:09:37 PM UTC          June 6, 2023 3:09:41 PM UTC          Success   
    Grid stack creation                      June 6, 2023 3:09:41 PM UTC          June 6, 2023 3:20:49 PM UTC          Success   
    GI Restore with RHP                      June 6, 2023 3:09:41 PM UTC          June 6, 2023 3:17:28 PM UTC          Success   
    Updating GIHome version                  June 6, 2023 3:17:29 PM UTC          June 6, 2023 3:17:33 PM UTC          Success   
    Post cluster OAKD configuration          June 6, 2023 3:20:49 PM UTC          June 6, 2023 3:24:12 PM UTC          Success   
    Mounting disk group DATA                 June 6, 2023 3:24:12 PM UTC          June 6, 2023 3:24:14 PM UTC          Success   
    Mounting disk group RECO                 June 6, 2023 3:24:21 PM UTC          June 6, 2023 3:24:30 PM UTC          Success   
    Setting ACL for disk groups              June 6, 2023 3:24:36 PM UTC          June 6, 2023 3:24:41 PM UTC          Success   
    Register Scan and Vips to Public Network June 6, 2023 3:24:41 PM UTC          June 6, 2023 3:24:43 PM UTC          Success   
    Configure export clones resource         June 6, 2023 3:25:44 PM UTC          June 6, 2023 3:25:45 PM UTC          Success   
    Adding Volume COMMONSTORE to Clusterware June 6, 2023 3:25:47 PM UTC          June 6, 2023 3:25:52 PM UTC          Success   
    Adding Volume DATACZ to Clusterware      June 6, 2023 3:25:52 PM UTC          June 6, 2023 3:25:56 PM UTC          Success   
    Adding Volume DATONETDB1 to Clusterware  June 6, 2023 3:25:56 PM UTC          June 6, 2023 3:26:00 PM UTC          Success   
    Adding Volume DATONETDB3 to Clusterware  June 6, 2023 3:26:00 PM UTC          June 6, 2023 3:26:04 PM UTC          Success   
    Adding Volume DATSITDB2 to Clusterware   June 6, 2023 3:26:04 PM UTC          June 6, 2023 3:26:09 PM UTC          Success   
    Adding Volume DATSITDB4 to Clusterware   June 6, 2023 3:26:09 PM UTC          June 6, 2023 3:26:13 PM UTC          Success   
    Adding Volume DATTDB to Clusterware      June 6, 2023 3:26:13 PM UTC          June 6, 2023 3:26:17 PM UTC          Success   
    Adding Volume RECO to Clusterware        June 6, 2023 3:26:17 PM UTC          June 6, 2023 3:26:21 PM UTC          Success   
    Enabling Volume(s)                       June 6, 2023 3:26:21 PM UTC          June 6, 2023 3:27:53 PM UTC          Success   
    Provisioning service creation            June 6, 2023 3:29:34 PM UTC          June 6, 2023 3:29:34 PM UTC          Success   
    persist new agent state entry            June 6, 2023 3:29:34 PM UTC          June 6, 2023 3:29:34 PM UTC          Success   
    persist new agent state entry            June 6, 2023 3:29:34 PM UTC          June 6, 2023 3:29:34 PM UTC          Success   
    Restart DCS Agent                        June 6, 2023 3:29:34 PM UTC          June 6, 2023 3:29:35 PM UTC          Success   
    When you run the command odacli restore-node -g, the number of enabled CPU cores is reset at the BIOS level. The system may restart as part of this operation.

    Note:

    The command odacli restore-node -g must not be run twice. If the command odacli restore-node -g does not run successfully, then you can run cleanup.pl to clean up the system while preserving the data on Oracle ASM disk groups. After cleanup.pl completes successfully, you can run the command odacli restore-node -g again.
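Because restore-node -g must not be run twice, it is worth confirming the job status before deciding whether cleanup.pl is needed. A hedged sketch of that check follows; the here-doc stands in for real output, and on the appliance you would pipe odacli describe-job -i job_id instead:

```shell
# Hedged sketch: inspect the restore-node -g job status to decide the next
# step. The sample text mirrors the describe-job output shown above.
job_output=$(cat <<'EOF'
                     ID:  120d447f-be28-46b4-b9cd-da652133bbee
            Description:  Restore node service - GI
                 Status:  Success
EOF
)
# Extract the value after "Status:" (colon followed by any spaces).
status=$(echo "$job_output" | awk -F': *' '/Status:/ {print $2}')
if [ "$status" = "Success" ]; then
  echo "restore-node -g succeeded; do not run it again"
else
  echo "job not successful; run cleanup.pl, then retry restore-node -g"
fi
```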
  8. Update the repository with the Oracle Database clones as described in /opt/oracle/oak/restore/metadata/dbVersions.list:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/DB_software_file
    For example:
    [root@oda1 opt]# cat /opt/oracle/oak/restore/metadata/dbVersions.list
    # List of all db versions found, to be used for downloading required clones before DB restore
    11.2.0.4.180417
    12.1.0.2.180417
    You can download the Oracle Database clones for an Oracle Database Appliance release from My Oracle Support. For details about the patch numbers and Oracle Database clones for all supported Oracle Database Appliance releases, refer to the Oracle Database Appliance FAQs Guide.

    Important:

    If the source Oracle Database Appliance version on which the odacli detach-node command is run is an OAK stack, then after re-imaging to Oracle Database Appliance release 19.15 or later, the software runs on a DCS stack. The Oracle Database clone file corresponding to the DCS stack must be used.
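The dbVersions.list file can be read directly to enumerate the clone versions you need to download before restoring the databases. A minimal sketch, using a here-doc that mirrors the sample file above (on the appliance, read /opt/oracle/oak/restore/metadata/dbVersions.list instead):

```shell
# Hedged sketch: list the database versions recorded in dbVersions.list,
# skipping the comment header, so the required clones can be downloaded.
versions=$(grep -v '^#' <<'EOF'
# List of all db versions found, to be used for downloading required clones before DB restore
11.2.0.4.180417
12.1.0.2.180417
EOF
)
for v in $versions; do
  echo "need DB clone for version: $v"
done
```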
  9. Configure the storage location and size for database homes on Oracle ACFS. Specify a database home size equal to or greater than the value indicated at the time of the odaupgradeutil detach-node operation. To determine the space requirement, check the prechecks report located at /opt/oracle/oak/restore/log/odaupgradeutil_prechecks_timestamp.log for an entry similar to the following:
    odaupgradeutil_prechecks_11-05-2022_04:13:49.log:2022-05-11 04:13:55,523 - DEBUG - Total space required for ACFS DB homes = 15360 MB
    Configure the storage location and size for database homes on Oracle ACFS:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli configure-dbhome-storage -dg DATA -s 80
    For information about creating database homes on Oracle ACFS, see the topic About Creating Database Homes on Oracle ACFS Storage in this guide.
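The required size can be pulled out of the prechecks log line and rounded up to whole gigabytes for the -s option. A hedged sketch follows; the log line is the sample quoted above, and the arithmetic is only an illustration of sizing at or above the reported minimum:

```shell
# Hedged sketch: extract the ACFS DB home space requirement (in MB) from a
# prechecks log line and convert it to GB for configure-dbhome-storage -s.
logline='2022-05-11 04:13:55,523 - DEBUG - Total space required for ACFS DB homes = 15360 MB'
mb=$(echo "$logline" | sed -n 's/.*= \([0-9]*\) MB/\1/p')
gb=$(( (mb + 1023) / 1024 ))   # round up to a whole number of GB
echo "configure at least ${gb} GB, e.g.: odacli configure-dbhome-storage -dg DATA -s ${gb}"
```

Specifying a larger value, as in the -s 80 example above, is also valid; the reported figure is only the minimum.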
  10. The database files are intact on the Oracle ASM disk groups, but the database software must be reinstalled and the database instances restarted. Run the following command to create the database homes on Oracle ACFS and then start the instances on the nodes. Single-instance Oracle databases and Oracle RAC One Node databases have only one running instance. For Oracle RAC deployments on high-availability systems, the database instances are restarted on both nodes.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli restore-node -d
    {
      "updatedTime" : 1638912060633,
      "jobId" : "045b8492-7d0c-4c45-a00f-65bc2535f884",
      "status" : "Created",
      "message" : null,
      "description" : "Restore node service - Database",
      "createTimestamp" : 1638912060633,
      "diagCollectionId" : null,
      "reports" : [ ],
      "resourceList" : [ ],
      "uniqueIds" : [ ]
    }
    This command restores the databases at the same versions they ran before the upgrade. The databases are restarted and added to the cluster.

    Note:

    You must not run cleanup.pl after running the command odacli restore-node -d. Ensure that the command odacli restore-node -g completed successfully before you run the command odacli restore-node -d. If the command odacli restore-node -d fails, then you can run it again.
  11. View the progress of the restore node operation:
    [root@oda1 opt]# odacli describe-job -i fd71a38e-10a7-4fab-ba21-d80ad51b9e20
    
    Job details                                                      
    ----------------------------------------------------------------
                         ID:  fd71a38e-10a7-4fab-ba21-d80ad51b9e20
                Description:  Restore node service - Database
                     Status:  Success
                    Created:  April 9, 2022 12:53:03 PM CST
                    Message:  
    
    Task Name                                Start Time                          End Time                            Status    
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Storage creation for DB homes on ACFS    April 9, 2022 12:53:32 PM CST    April 9, 2022 12:54:54 PM CST    Success   
    Setting up ssh equivalance               April 9, 2022 12:54:54 PM CST    April 9, 2022 12:54:54 PM CST    Success   
    DB home creation : OraDB12201_home1      April 9, 2022 12:54:55 PM CST    April 9, 2022 12:58:29 PM CST    Success   
    Validating dbHome available space        April 9, 2022 12:54:55 PM CST    April 9, 2022 12:54:55 PM CST    Success   
    Creating DbHome Directory                April 9, 2022 12:54:55 PM CST    April 9, 2022 12:54:55 PM CST    Success   
    Create required directories              April 9, 2022 12:54:55 PM CST    April 9, 2022 12:54:55 PM CST    Success   
    Extract DB clone                         April 9, 2022 12:54:55 PM CST    April 9, 2022 12:56:13 PM CST    Success   
    ProvDbHome by using RHP                  April 9, 2022 12:56:13 PM CST    April 9, 2022 12:58:03 PM CST    Success   
    Enable DB options                        April 9, 2022 12:58:03 PM CST    April 9, 2022 12:58:18 PM CST    Success   
    Creating wallet for DB Client            April 9, 2022 12:58:25 PM CST    April 9, 2022 12:58:29 PM CST    Success   
    DB home creation : OraDB11204_home1      April 9, 2022 12:58:29 PM CST    April 9, 2022 1:01:11 PM CST     Success   
    Validating dbHome available space        April 9, 2022 12:58:29 PM CST    April 9, 2022 12:58:29 PM CST    Success   
    Creating DbHome Directory                April 9, 2022 12:58:29 PM CST    April 9, 2022 12:58:29 PM CST    Success   
    Create required directories              April 9, 2022 12:58:29 PM CST    April 9, 2022 12:58:29 PM CST    Success   
    Extract DB clone                         April 9, 2022 12:58:29 PM CST    April 9, 2022 12:59:22 PM CST    Success   
    ProvDbHome by using RHP                  April 9, 2022 12:59:22 PM CST    April 9, 2022 1:00:58 PM CST     Success   
    Enable DB options                        April 9, 2022 1:00:58 PM CST     April 9, 2022 1:01:06 PM CST     Success   
    Creating wallet for DB Client            April 9, 2022 1:01:11 PM CST     April 9, 2022 1:01:11 PM CST     Success   
    Adding database odacn to GI              April 9, 2022 1:01:11 PM CST     April 9, 2022 1:01:15 PM CST     Success   
      Adding database instance(s) to GI      April 9, 2022 1:01:15 PM CST     April 9, 2022 1:01:15 PM CST     Success   
      Modifying SPFILE for database          April 9, 2022 1:01:15 PM CST     April 9, 2022 1:02:02 PM CST     Success   
      Restore password file for database     April 9, 2022 1:02:02 PM CST     April 9, 2022 1:02:02 PM CST     Success   
      Start instance(s) for database         April 9, 2022 1:02:02 PM CST     April 9, 2022 1:02:23 PM CST     Success   
      Persist metadata for database          April 9, 2022 1:02:23 PM CST     April 9, 2022 1:02:23 PM CST     Success   
    Adding database db11g to GI              April 9, 2022 1:02:23 PM CST     April 9, 2022 1:02:29 PM CST     Success   
      Adding database instance(s) to GI      April 9, 2022 1:02:29 PM CST     April 9, 2022 1:02:29 PM CST     Success   
      Modifying SPFILE for database          April 9, 2022 1:02:29 PM CST     April 9, 2022 1:02:51 PM CST     Success   
      Restore password file for database     April 9, 2022 1:02:51 PM CST     April 9, 2022 1:02:51 PM CST     Success   
      Start instance(s) for database         April 9, 2022 1:02:51 PM CST     April 9, 2022 1:03:08 PM CST     Success   
      Persist metadata for database          April 9, 2022 1:03:08 PM CST     April 9, 2022 1:03:08 PM CST     Success  
    Restore custom network 'mynet2'          April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Failure   
    Restore custom network 'mynet4'          April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Failure   
    Restore Object Stores                    April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Success   
    Remount NFS backups                      April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Success   
    Restore BackupConfigs                    April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Success   
    Reattach backupconfigs to DBs            April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Success   
    Restore backup reports                   April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Failure
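Individual task failures, such as the custom network and backup report entries in the sample output above, need manual follow-up. Pulling just the failed task names out of the describe-job output makes that review easier; a hedged sketch, with a here-doc standing in for the real odacli describe-job output:

```shell
# Hedged sketch: list only the tasks that ended in Failure. Fields in the
# describe-job table are separated by runs of two or more spaces.
failed=$(awk -F'  +' '/Failure$/ {print $1}' <<'EOF'
Restore custom network 'mynet2'          April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Failure
Restore Object Stores                    April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Success
Restore backup reports                   April 9, 2022 01:03:08 PM UTC     April 9, 2022 01:03:08 PM UTC      Failure
EOF
)
echo "$failed"
```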
  12. View the databases and database homes that are restored:
    [root@oda1 opt]# odacli list-dbhomes
    [root@oda1 opt]# odacli list-databases 
    Note that starting with Oracle Database Appliance release 19.11, database homes are created on an Oracle ACFS file system, and no longer in the local /u01 directory. All restored database homes are created in the Oracle ACFS location. For example, if your 11.2.0.4 database home was at /u01/app/oracle/product/11.2.0.4/dbhome_1, then the new location after the restore completes is /u01/app/odaorahome/oracle/product/11.2.0.4/dbhome_1.
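The path change described above follows a simple pattern: the odaorahome component is inserted after /u01/app. A sketch of the mapping, useful for updating scripts or environment files that hard-code the old home path (the example path is the one from the text above):

```shell
# Hedged sketch: map a pre-upgrade local database home path to its restored
# Oracle ACFS location by inserting 'odaorahome' after /u01/app.
old_home='/u01/app/oracle/product/11.2.0.4/dbhome_1'
new_home=$(echo "$old_home" | sed 's|^/u01/app/|/u01/app/odaorahome/|')
echo "$new_home"   # /u01/app/odaorahome/oracle/product/11.2.0.4/dbhome_1
```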
  13. Patch or upgrade the databases to the latest release as described in the topic Patching Oracle Database Appliance in this guide.