9 Upgrading Oracle Database Appliance to Release 19.23 Using Data Preserving Reprovisioning

Understand how you can upgrade your Oracle Database Appliance deployment from Oracle Database Appliance release 19.19, 19.20, 19.21, or 19.22 to Oracle Database Appliance release 19.23 on Oracle Linux 8.

If your deployment is on Oracle Database Appliance release 19.23, then patch your appliance as described in the chapter Patching Oracle Database Appliance.

About Upgrading Using Data Preserving Reprovisioning

Understand how you can upgrade your appliance from Oracle Database Appliance releases 19.19, 19.20, 19.21, and 19.22 to Oracle Database Appliance release 19.23.

Note:

To upgrade to Oracle Database Appliance release 19.23, you must be on Oracle Database Appliance release 19.19 at the minimum. To patch your appliance to Oracle Database Appliance release 19.19, refer to the Oracle Database Appliance Deployment and User's Guide for your hardware model in the Oracle Database Appliance release 19.19 documentation library.

Note:

Data Preserving Reprovisioning does not support encrypted Oracle ACFS. Use the acfsutil encr info command to check whether Oracle ACFS encryption is enabled. If Oracle ACFS encryption is enabled, then disable Oracle ACFS encryption using the acfsutil encr off command before you proceed with the upgrade. You can enable Oracle ACFS encryption after the upgrade. For more information, see the Oracle Automatic Storage Management Administrator's Guide in the Oracle Database 19c documentation library.
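
The encryption check described in this note can be scripted. A minimal sketch follows; the wording of the `acfsutil encr info` output is an assumption, so verify the exact text on your system and adjust the patterns accordingly:

```shell
#!/bin/bash
# Sketch: decide whether an Oracle ACFS mount point must be decrypted before
# the upgrade. Assumption: `acfsutil encr info` output contains the word
# "enabled" (and not "disabled"/"not enabled") when encryption is on.

# Returns success if the given `acfsutil encr info` output reports encryption on.
encr_enabled() {
  echo "$1" | grep -viE 'disabled|not enabled' | grep -qi 'enabled'
}

# Example usage (as root on the appliance; the mount point is a placeholder):
#   if encr_enabled "$(acfsutil encr info /u01/app/odaorahome)"; then
#     echo "Disable encryption before the upgrade with: acfsutil encr off ..."
#   fi
```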

Starting with Oracle Database Appliance release 19.21, the operating system of the appliance is Oracle Linux 8. You must upgrade the system to Oracle Linux 8 before updating Oracle Grid Infrastructure and databases to release 19.23. Use Data Preserving Reprovisioning to upgrade your appliance running Oracle Linux 7 to Oracle Database Appliance release 19.23 with Oracle Linux 8.

About Upgrading Using Data Preserving Reprovisioning

Data Preserving Reprovisioning enables reprovisioning of an already deployed Oracle Database Appliance system without modifying the storage and the databases on the appliance. This is achieved by saving the configuration information of the source system in server data archive files. Then, the appliance is reimaged to Oracle Database Appliance release 19.23 and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.

When you upgrade your Oracle Database Appliance hardware models X9-2, X8-2, and X7-2 to Oracle Database Appliance release 19.23, you upgrade the operating system to Oracle Linux 8. Data Preserving Reprovisioning enables you to perform this upgrade, and has the following features:
  • The odacli create-preupgradereport command runs prechecks on the system, such as detecting databases and DB systems that are inactive, and checking Oracle Data Guard and TDE-enabled database settings. The report lists errors, warnings, and alerts if the system is determined to not be ready for the upgrade. You must review these and take the corrective actions that the report suggests. This ensures that no failures occur later in the process, which could prevent the resources from being restarted.
  • During the first step of detaching the node, information about the system is collected and preserved in a server archive file. Make sure this file is saved outside the Oracle Database Appliance system before reimaging the system with the Oracle Database Appliance release 19.23 ISO image. The settings preserved in the file are used to reprovision the system after reimaging the appliance. The stored information includes details about the Oracle ACFS volumes that store database homes, DB system cluster settings, VLAN, custom networks, CPU and Oracle AFD settings. These settings are migrated after the reimage and the third step in this process reprovisions these settings.

Steps in the Data Preserving Reprovisioning for Upgrade Process

There are four steps in this process:
  1. Detach resources and software from the source version of the appliance: This step saves the metadata about the databases, listeners, networks, DB systems, application KVMs, CPU pools, Oracle ASR, and other configuration details in archive files, namely, the server data archive files. Then, the services running on the system are shut down and uninstalled to prepare the environment for reimaging in step 2. The data on the storage is kept intact.
    The server data archive files are generated after the nodes are successfully detached. You must save the server data archive files in a location outside the appliance that is being upgraded, and copy these files back to the appliance to restore the system in step 3.

    WARNING:

    Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance in Step 2 of this process. Without these files, the system cannot be reprovisioned in Step 3 and you will lose all data stored in the Oracle ASM disk groups.
  2. Reimage nodes using the Oracle Database Appliance ISO image: The procedure is similar to imaging the appliance. This step installs Oracle Linux 8 as the operating system.
  3. Restore nodes using the Data Preserving Reprovisioning method: After successful completion of the previous step, the operating system and DCS software are already on the required target version. However, to update firmware, you must patch your deployment using the Server Patch. After successfully patching the appliance, you can restore the system by restoring the Oracle Grid Infrastructure, databases, listeners, networks, DB systems, application KVMs, CPU pools, Oracle ASR, and other services on the nodes.
  4. Upgrade DB systems: After reprovisioning databases, application KVMs, and DB systems, upgrade the DB systems to Oracle Linux 8.
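
The server data archive files saved in step 1 can be located and copied off the appliance with ordinary shell tools. A sketch, assuming the archive directory `/opt/oracle/oak/restore/out` (verify against the location reported by the detach-node job output) and a hypothetical backup host:

```shell
#!/bin/bash
# Sketch: list the server data archive files produced by `odacli detach-node`
# so they can be copied to a host outside the appliance. The default directory
# below is an assumption; use the path reported in the detach-node output.

list_archives() {
  ls "${1:-/opt/oracle/oak/restore/out}"/serverarchive*.zip 2>/dev/null
}

# Usage (as root on the appliance; backup-host is a placeholder):
#   for f in $(list_archives); do
#     scp "$f" root@backup-host:/backup/oda/
#   done
```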

The procedure for each step is detailed in the subsequent topics in this chapter.

Customizations to the Appliance and Their Persistence After Upgrade

As part of the upgrade process, the Data Preserving Reprovisioning procedure involves reimaging the appliance with the latest ISO image. Hence, any prior customizations to the operating system configuration or settings are lost during the reimage. Note the impact on the following customizations during the Data Preserving Reprovisioning process:
  • Custom RPMs: If your appliance has any custom operating system RPMs installed from the Oracle Linux Yum repository, then the precheck report lists these custom RPMs. You must uninstall these RPMs and then continue with the next step in the upgrade process for bare metal system and DB system upgrades. You can reinstall these custom RPMs as required, after the upgrade.
  • Multi-User Access Enabled Systems: If your deployment did not have multi-user access configured before the upgrade, the newly-upgraded deployment will not have multi-user access enabled. The upgrade restores your deployment to the same configuration that existed prior to the upgrade, but with the software upgraded to Oracle Database Appliance release 19.23.
  • Fixes applied by STIG and CIS scripts: Since the system is reimaged during the upgrade process, fixes applied on the appliance to conform with Security Technical Implementation Guides (STIG) and Center for Internet Security (CIS) benchmarks are lost, on both bare metal and DB systems.
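
For the custom RPM case above, recording the installed package list before the detach makes it easy to see which packages must be reinstalled afterwards. A sketch; the file paths are examples:

```shell
#!/bin/bash
# Sketch: diff two sorted package lists taken before and after the upgrade.
# Prints packages present before but missing after, i.e. candidates to reinstall.

missing_after() {
  comm -23 <(sort "$1") <(sort "$2")
}

# Usage on the appliance (file names are examples):
#   rpm -qa --qf '%{NAME}\n' | sort > /tmp/rpms-before.txt   # before detach-node
#   rpm -qa --qf '%{NAME}\n' | sort > /tmp/rpms-after.txt    # after reprovisioning
#   missing_after /tmp/rpms-before.txt /tmp/rpms-after.txt   # dnf install these as needed
```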

Upgrading Bare Metal System to Oracle Linux 8 and Oracle Database Appliance Release 19.23 Using the CLI

Follow these steps to upgrade your Oracle Database Appliance bare metal deployment and existing Oracle Database homes to Oracle Database Appliance release 19.23, using CLI commands.

To upgrade your Oracle Database Appliance deployment to the current release running Oracle Linux 8, you must download the Oracle Database Appliance Server patch and update the repository on the bare metal system.

WARNING:

Do not run cleanup.pl either before or after running the odacli detach-node command. Running cleanup.pl erases all the Oracle ASM disk groups on the storage and you cannot reprovision your Oracle Database Appliance system.

Note:

Run the steps in this procedure in the same order as documented. Run the odacli update-dcsadmin, odacli update-dcscomponents, and odacli update-dcsagent commands in the order documented.

Note:

For high-availability systems, run all the commands on one node only unless specified in the procedure step.

Note:

Note that for DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified.

Note:

If the Oracle ASR configuration type is Internal, and there are external assets registered with Oracle ASR Manager, then after upgrading the appliance (the internal Oracle ASR) using Data Preserving Reprovisioning, ensure that you also upgrade the external assets.

If the Oracle ASR configuration type is External, then ensure that you upgrade the appliance using Data Preserving Reprovisioning before upgrading the external Oracle ASR. You can check the Oracle ASR configuration type with the odacli describe-asr command.
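
A small helper can pull the configuration type out of the `odacli describe-asr` output to drive the decision above. The field label matched here is an assumption; check the field name in your actual output and adjust the pattern:

```shell
#!/bin/bash
# Sketch: extract the Oracle ASR configuration type (Internal or External)
# from `odacli describe-asr` output. The "Type" field label is an assumption.

asr_type() {
  echo "$1" | awk -F: 'tolower($1) ~ /type/ {gsub(/[[:space:]]/,"",$2); print $2; exit}'
}

# Usage:
#   case "$(asr_type "$(odacli describe-asr)")" in
#     Internal) echo "Upgrade the appliance first, then the external assets" ;;
#     External) echo "Upgrade the appliance before the external Oracle ASR" ;;
#   esac
```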

Important:

Ensure that there is sufficient space on your appliance to download the patches.

Important:

If you want to install third-party software on your Oracle Database Appliance, then ensure that the software does not impact the Oracle Database Appliance software. The version lock on Oracle Database Appliance RPMs displays a warning if the third-party software tries to override Oracle Database Appliance RPMs. You must restore the affected RPMs before patching Oracle Database Appliance so that patching completes successfully.
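
Most odacli commands in the procedure below print a JSON blob whose jobId value is then passed to `odacli describe-job`. A sketch of extracting it with plain sed (jq may not be installed on the appliance), assuming the JSON formatting shown in the sample outputs in this chapter:

```shell
#!/bin/bash
# Sketch: pull the "jobId" value out of odacli's JSON output so it can be fed
# to `odacli describe-job -i`. Matches the formatting of the sample outputs.

job_id() {
  sed -n 's/.*"jobId" *: *"\([^"]*\)".*/\1/p' | head -1
}

# Usage:
#   id=$(odacli update-repository -f /tmp/oda-sm-19.23.0.0.0-date-server.zip | job_id)
#   odacli describe-job -i "$id"
```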
Follow these steps to upgrade your Oracle Database Appliance bare metal system to Oracle Linux 8 and Oracle Database Appliance release 19.23:

Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning

  1. Download the Oracle Database Appliance Server Patch for the ODACLI/DCS stack (patch 35938481) from My Oracle Support to a temporary location on an external client. Refer to the release notes for details about the patch numbers and software for the latest release.
    For example, download the server patch for 19.23:
    p35938481_1923000_Linux-x86-64.zip
  2. Unzip the software. It contains README.html and one or more zip files for the patch.
    unzip p35938481_1923000_Linux-x86-64.zip
    The zip file contains the following software file:
    oda-sm-19.23.0.0.0-date-server.zip
  3. Copy all the software files from the external client to Oracle Database Appliance. For High-Availability deployments, copy the software files to only one node. The software files are copied to the other node during the patching process. Use the scp or sftp protocol to copy the bundle.
    Example using scp command:
    # scp software_file root@oda_host:/tmp
    Example using sftp command:
    # sftp root@oda_host
    Enter the root password, and copy the files.
    put software_file
  4. Update the repository with the server software file:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
    For example, for 19.23:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.23.0.0.0-date-server.zip
  5. Confirm that the repository update is successful:
    [root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  6c5e8990-298d-4070-aeac-76f1e55e5fe5
                Description:  Repository Update
                     Status:  Success
                    Created:  April 8, 2024 3:21:21 PM UTC
                    Message:  /tmp/oda-sm-19.23.0.0.0-date-server.zip
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Unzip bundle                             April 8, 2024 3:21:21 PM UTC   April 8, 2024 3:21:45 PM UTC   Success
  6. Update the DCS admin:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.23.0.0.0
    
    {
      "jobId" : "95178f45-b72f-46ef-b971-741f3fad51c4",
      "status" : "Created",
      "message" : null,
      "reports" : [ ],
      "createTimestamp" : "April 8, 2024 07:45:54 AM UTC",
      "resourceList" : [ ],
      "description" : "DcsAdmin patching",
      "updatedTime" : "April 8, 2024 07:45:54 AM UTC",
      "jobType" : null
    }
    # odacli describe-job -i 95178f45-b72f-46ef-b971-741f3fad51c4
    
    Job details                                                      
    ----------------------------------------------------------------
                         ID:  95178f45-b72f-46ef-b971-741f3fad51c4
                Description:  DcsAdmin patching
                     Status:  Success
                    Created:  April 8, 2024 7:45:54 AM UTC
                    Message:  
    
    Task Name                                Node Name                 Start Time                               End Time                                 Status    
    ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------
    Patch location validation                node1             April 8, 2024 7:45:58 AM UTC         April 8, 2024 7:45:58 AM UTC         Success   
    Patch location validation                node2             April 8, 2024 7:45:58 AM UTC         April 8, 2024 7:45:58 AM UTC         Success   
    Dcs-admin upgrade                        node1             April 8, 2024 7:45:59 AM UTC         April 8, 2024 7:45:59 AM UTC         Success   
    Dcs-admin upgrade                        node2             April 8, 2024 7:45:59 AM UTC         April 8, 2024 7:45:59 AM UTC         Success
    
  7. Update the DCS components:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.23.0.0.0
    {
      "jobId" : "e9862ac9-ed92-4934-a71a-93cea4c20a68",
      "status" : "Success",
      "message" : " DCS-Agent shutdown is successful. Skipping MySQL upgrade on OL7 Metadata schema update is done. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. Successfully reset the Keystore password. HAMI is not enabled Skipped removing old Libs. Successfully ran setupAgentAuth.sh ",
      "reports" : null,
      "createTimestamp" : "April 8, 2024 13:47:22 PM GMT",
      "description" : "Update-dcscomponents job completed and is not part of Agent job list",
      "updatedTime" : "April 8, 2024 13:49:44 PM GMT"
    }

    If the DCS components are updated, then the message "status" : "Success" is displayed on the command line. For failed updates, fix the error and then proceed with the update by re-running the odacli update-dcscomponents command. See the topic Resolving Errors When Updating DCS Components During Patching for more information about DCS component check errors.

    Note:

    Note that for DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified in this procedure.
  8. Update the DCS agent:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.23.0.0.0
    [root@oda1 opt]# odacli describe-job -i a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
                Description:  DcsAgent patching
                     Status:  Success
                    Created:  April 8, 2024 3:35:01 PM UTC
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Dcs-agent upgrade  to version            April 8, 2024 3:35:01 PM UTC   April 8, 2024 3:38:50 PM UTC   Success  
    19.23.0.0.0                                                                                                               
    Update System version                    April 8, 2024 3:38:50 PM UTC   April 8, 2024 3:38:50 PM UTC   Success
  9. Similarly, log into each DB system and update the DCS components, DCS admin, and DCS agent on every DB system in your deployment:
    [root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.23.0.0.0
    [root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.23.0.0.0
    [root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.23.0.0.0
  10. On the bare metal system, create the pre-upgrade report to run upgrade prechecks. If the report lists errors, take the corrective action described in the "Action" column, fix the errors, and rerun the pre-upgrade report until all checks pass. If the report lists alerts, review them and perform the recommended action, if any. Then proceed to run the detach-node operation.
    [root@oda1 opt]# odacli create-preupgradereport -bm 
    [root@oda1 opt]# odacli describe-preupgradereport -i ID

    For example:

    [root@oda1 opt]# odacli describe-preupgradereport -i 31d5304a-d234-4f87-84ec-0297020f518a
     
    Upgrade pre-check report                                        
    ------------------------------------------------------------------------
                     Job ID:  31d5304a-d234-4f87-84ec-0297020f518a
                Description:  Run pre-upgrade checks for Bare Metal
                     Status:  SUCCESS
                    Created:  April 8, 2024 7:15:28 AM UTC
                     Result:  All pre-checks succeeded
     
    Node Name      
    ---------------
    node1
     
    Check                          Status   Message                                Action                               
    ------------------------------ -------- -------------------------------------- --------------------------------------
    __GI__
    Check presence of databases    Success  No additional database found           None                                 
    not managed by ODA                      registered in CRS                                                           
    Check custom filesystems       Success  All file systems are owned and used    None                                 
                                            by OS users provisioned by ODA                                              
     
    __OS__
    Check Required OS files        Success  All the required files are present     None                                 
    Check Additional OS RPMs       Success  No RPMs outside of base ISO were       None                                 
                                            found on the system                                                         
     
    __STORAGE__
    Check Required Storage files   Success  All the required files are present     None                                 
    Validate OAK Disks             Success  All OAK disks are in valid state       None                                 
    Validate ASM Disk Groups       Success  All ASM disk groups are in valid state None                                 
    Validate ASM Disks             Success  All ASM disks are in valid state       None                                 
    Check Database Home Storage    Success  The volume(s)                          None                                 
    volumes                                 orahome_sh,odabase_n0,odabase_n1                                            
                                            state is CONFIGURED.                                                        
    Check space under /opt         Success  Free space on /opt: 142750.87 MB is    None                                 
                                            more than required space: 1024 MB                                           
    Check space in ASM disk        Success  Space required for creating local      None                                 
    group(s)                                homes is present in ACFS database                                           
                                            home storage. Required: 78 GB                                               
                                            Available: 245 GB                                                           
     
    __SYS__
    Validate Hardware Type         Success  Current hardware is supported          None                                 
    Validate ILOM interconnect     Success  ILOM interconnect is not enabled       None                                 
    Validate System Version        Success  System version 19.22.0.0.0 is          None                                 
                                            supported                                                                   
    Verify System Timezone         Success  Successfully verified the time zone    None                                 
                                            file                                                                        
    Verify Grid User               Success  Grid user is verified                  None                                 
    Verify Grid Version            Success  Oracle Grid Infrastructure is running  None                                 
                                            on the '19.17.0.0.221018' version on                                        
                                            all nodes                                                                   
    Check Audit Files              Alert    Audit files found under                These files will be lost after       
                                            /u01/app/oracle/product/12.1.0.2/      reimage. Backup the audit files to a 
                                            dbhome_1/rdbms/audit,                  location outside the ODA system      
                                            /u01/app/oracle/product/11.2.0.4/                                           
                                            dbhome_1/rdbms/audit,                                                       
                                            /u01/app/oracle/audit                                                       
     
    __DB__
    Validate Database Status       Success  Database 'myTestDb' is running and is  None                                 
                                            in 'CONFIGURED' state                                                       
    Validate Database Version      Success  Version '19.17.0.0.221018' for         None                                 
                                            database 'myTestDb' is supported                                            
    Validate Database Datapatch    Success  Database 'myTestDb' is completely      None                                 
    Application Status                      applied with datapatch                                                      
    Validate TDE wallet presence   Success  Database 'myTestDb' is not TDE         None                                 
                                            enabled. Skipping TDE wallet presence                                       
                                            check.                                                                      
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database myTestDb_uniq                                                  
    Validate Database Status       Success  Database 's' is running and is in      None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '12.1.0.2.220719' for          None                                 
                                            database 's' is supported                                                   
    Validate Database Datapatch    Success  Database 's' is completely applied     None                                 
    Application Status                      with datapatch                                                              
    Validate TDE wallet presence   Success  Database 's' is not TDE enabled.       None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database s                                                                                                                        
    Validate Database Status       Success  Database 'QyZ6O' is running and is in  None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '11.2.0.4.210119' for          None                                 
                                            database 'QyZ6O' is supported                                               
    Validate TDE wallet presence   Success  Database 'QyZ6O' is not TDE enabled.   None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database QyZ6O                                                          
    Validate Database Status       Success  Database 'EX68' is running and is in   None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '18.14.0.0.210420' for         None                                 
                                            database 'EX68' is supported                                                
    Validate Database Datapatch    Success  The database is SI and is running on   None                                 
    Application Status                      node2. This check is skipped.                                       
    Validate TDE wallet presence   Success  Database 'EX68' is not TDE enabled.    None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database EX68                                                           
    Validate Database Status       Success  Database 'DH1G0' is running and is in  None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '12.2.0.1.220118' for          None                                 
                                            database 'DH1G0' is supported                                               
    Validate Database Datapatch    Success  Database 'DH1G0' is completely         None                                 
    Application Status                      applied with datapatch                                                      
    Validate TDE wallet presence   Success  Database 'DH1G0' is not TDE enabled.   None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database DH1G0                                                          
     
    __CERTIFICATES__
    Check using custom             Success  Using Default key pair                 None                                 
    certificates                                                                                                        
    Check the agent of the DB      Success  All the agents of the DB systems are   None                                 
    System accessible                       accessible                                                                  
     
    __DBSYSTEMS__
    Validate DB System DCS         Success  node1: SUCCESS                 None                                 
    component versions                                                                                                  
    Validate DB System DCS         Success  node1: SUCCESS                 None                                 
    component versions                                                                                                  
     
     
    Node Name      
    ---------------
    node2
     
    Check                          Status   Message                                Action                               
    ------------------------------ -------- -------------------------------------- --------------------------------------
    __GI__
    Check presence of databases    Success  No additional database found           None                                 
    not managed by ODA                      registered in CRS                                                           
    Check custom filesystems       Success  All file systems are owned and used    None                                 
                                            by OS users provisioned by ODA                                              
     
    __OS__
    Check Required OS files        Success  All the required files are present     None                                 
    Check Additional OS RPMs       Success  No RPMs outside of base ISO were       None                                 
                                            found on the system                                                         
     
    __STORAGE__
    Check Required Storage files   Success  All the required files are present     None                                 
    Validate OAK Disks             Success  All OAK disks are in valid state       None                                 
    Validate ASM Disk Groups       Success  All ASM disk groups are in valid state None                                 
    Validate ASM Disks             Success  All ASM disks are in valid state       None                                 
    Check Database Home Storage    Success  The volume(s)                          None                                 
    volumes                                 orahome_sh,odabase_n0,odabase_n1                                            
                                            state is CONFIGURED.                                                        
    Check space under /opt         Success  Free space on /opt: 143154.76 MB is    None                                 
                                            more than required space: 1024 MB                                           
    Check space in ASM disk        Success  Space required for creating local      None                                 
    group(s)                                homes is present in ACFS database                                           
                                            home storage. Required: 78 GB                                               
                                            Available: 245 GB                                                           
     
    __SYS__
    Validate Hardware Type         Success  Current hardware is supported          None                                 
    Validate ILOM interconnect     Success  ILOM interconnect is not enabled       None                                 
    Validate System Version        Success  System version 19.22.0.0.0 is          None                                 
                                            supported                                                                   
    Verify System Timezone         Success  Successfully verified the time zone    None                                 
                                            file                                                                        
    Verify Grid User               Success  Grid user is verified                  None                                 
    Verify Grid Version            Success  Oracle Grid Infrastructure is running  None                                 
                                            on the '19.17.0.0.221018' version on                                        
                                            all nodes                                                                   
    Check Audit Files              Alert    Audit files found under                These files will be lost after       
                                            /u01/app/oracle/product/12.1.0.2/      reimage. Backup the audit files to a 
                                            dbhome_1/rdbms/audit,                  location outside the ODA system      
                                            /u01/app/oracle/product/11.2.0.4/                                           
                                            dbhome_1/rdbms/audit,                                                       
                                            /u01/app/oracle/audit,                                                      
                                            /u01/app/oracle/admin                                                       
     
    __DB__
    Validate Database Status       Success  Database 'myTestDb' is running and is  None                                 
                                            in 'CONFIGURED' state                                                       
    Validate Database Version      Success  Version '19.17.0.0.221018' for         None                                 
                                            database 'myTestDb' is supported                                            
    Validate Database Datapatch    Success  Database 'myTestDb' is completely      None                                 
    Application Status                      applied with datapatch                                                      
    Validate TDE wallet presence   Success  Database 'myTestDb' is not TDE         None                                 
                                            enabled. Skipping TDE wallet presence                                       
                                            check.                                                                      
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database myTestDb_uniq                                                  
    Validate Database Status       Success  Database 's' is running and is in      None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '12.1.0.2.220719' for          None                                 
                                            database 's' is supported                                                   
    Validate Database Datapatch    Success  Database 's' is completely applied     None                                 
    Application Status                      with datapatch                                                              
    Validate TDE wallet presence   Success  Database 's' is not TDE enabled.       None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database s                                                                                                                       
    Validate Database Status       Success  Database 'QyZ6O' is running and is in  None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '11.2.0.4.210119' for          None                                 
                                            database 'QyZ6O' is supported                                               
    Validate TDE wallet presence   Success  Database 'QyZ6O' is not TDE enabled.   None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database QyZ6O                                                          
    Validate Database Status       Success  Database 'EX68' is running and is in   None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '18.14.0.0.210420' for         None                                 
                                            database 'EX68' is supported                                                
    Validate Database Datapatch    Success  Database 'EX68' is completely applied  None                                 
    Application Status                      with datapatch                                                              
    Validate TDE wallet presence   Success  Database 'EX68' is not TDE enabled.    None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database EX68                                                           
    Validate Database Status       Success  Database 'DH1G0' is running and is in  None                                 
                                            'CONFIGURED' state                                                          
    Validate Database Version      Success  Version '12.2.0.1.220118' for          None                                 
                                            database 'DH1G0' is supported                                               
    Validate Database Datapatch    Success  Database 'DH1G0' is completely         None                                 
    Application Status                      applied with datapatch                                                      
    Validate TDE wallet presence   Success  Database 'DH1G0' is not TDE enabled.   None                                 
                                            Skipping TDE wallet presence check.                                         
    Validate Database Home         Success  Database home location check passed    None                                 
    location                                for database DH1G0                                                          
     
    __CERTIFICATES__
    Check using custom             Success  Using Default key pair                 None                                 
    certificates                                                                                                        
    Check the agent of the DB      Success  All the agents of the DB systems are   None                                 
    System accessible                       accessible                                                                  
     
    __DBSYSTEMS__
    Validate DB System DCS         Success  node1: SUCCESS                         None                                 
    component versions                                                                                                  
    Validate DB System DCS         Success  node2: SUCCESS                         None                                 
    component versions                                                                               
  11. On the bare metal system, detach the system for an operating system upgrade. Click Yes when prompted to continue.

    WARNING:

    Ensure that there is no hardware or networking change after issuing the command odacli detach-node.
    [root@oda1 restore]# odacli detach-node -all

    For example:

    [root@oda1 restore]# odacli detach-node -all
    ********************************************************************************
                                      IMPORTANT                                  
    ********************************************************************************
    'odacli detach-node' will bring down the databases and grid services on the
    system. The files that belong to the databases, which are stored on ASM or ACFS,
    are left intact on the storage. The databases will be started up back after
    re-imaging the ODA system using 'odacli restore-node' commands. As a good
    precautionary measure, please backup all the databases on the system before you
    start this process. Do not store the backup on this ODA machine since the local
    file system will be wiped out as part of the re-image.
    ********************************************************************************
     
    Do you want to continue (yes/no)[no] : yes
    
    [root@oda1 opt]# odacli describe-job -i 20b7fced-0aaa-474e-aa80-18e31c215e1c
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  20b7fced-0aaa-474e-aa80-18e31c215e1c
                Description:  Detach node service creation for upgrade
                     Status:  Success
                    Created:  April 8, 2024 4:22:19 PM UTC
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Creating INIT file                       April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Creating firstnet response file          April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Saving Appliance data                    April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Saving OS files                          April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Saving CPU cores information             April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Saving storage files                     April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Saving System                            April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:19 PM UTC   Success  
    Saving Volumes                           April 8, 2024 4:22:19 PM UTC   April 8, 2024 4:22:40 PM UTC   Success  
    Saving File Systems                      April 8, 2024 4:22:40 PM UTC   April 8, 2024 4:22:56 PM UTC   Success  
    Saving Networks                          April 8, 2024 4:22:56 PM UTC   April 8, 2024 4:22:56 PM UTC   Success  
    Saving Quorum Disks                      April 8, 2024 4:22:56 PM UTC   April 8, 2024 4:22:57 PM UTC   Success  
    Saving Database Storages                 April 8, 2024 4:22:57 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    Saving Database Homes                    April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    +-- Saving OraDB19000_home1              April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    +-- Saving OraDB19000_home2              April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    +-- Saving OraDB19000_home3              April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    +-- Saving OraDB19000_home4              April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    +-- Saving OraDB19000_home5              April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:00 PM UTC   Success  
    Saving Databases                         April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    +-- Saving provDb0                       April 8, 2024 4:23:00 PM UTC   April 8, 2024 4:23:04 PM UTC   Success  
    +-- Saving cPjuHX4S                      April 8, 2024 4:23:04 PM UTC   April 8, 2024 4:23:08 PM UTC   Success  
    +-- Saving PJSlOXqa                      April 8, 2024 4:23:08 PM UTC   April 8, 2024 4:23:11 PM UTC   Success  
    +-- Saving O                             April 8, 2024 4:23:11 PM UTC   April 8, 2024 4:23:15 PM UTC   Success  
    +-- Saving mydb                          April 8, 2024 4:23:15 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Saving Object swift stores               April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Saving Database Backups                  April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Saving NFS Backups                       April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Creating databases version list          April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Converting files for old DPR             April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    compatibility                                                                                                             
    Detach node - DPR                        April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:19 PM UTC   Success  
    Deconfiguring Appliance                  April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:32:18 PM UTC   Success  
    Deconfiguring Databases                  April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:25:33 PM UTC   Success  
    +-- Deconfiguring provDb0                April 8, 2024 4:23:19 PM UTC   April 8, 2024 4:23:59 PM UTC   Success  
    +-- Deconfiguring cPjuHX4S               April 8, 2024 4:23:59 PM UTC   April 8, 2024 4:24:18 PM UTC   Success  
    +-- Deconfiguring PJSlOXqa               April 8, 2024 4:24:18 PM UTC   April 8, 2024 4:24:36 PM UTC   Success  
    +-- Deconfiguring O                      April 8, 2024 4:24:36 PM UTC   April 8, 2024 4:25:07 PM UTC   Success  
    +-- Deconfiguring mydb                   April 8, 2024 4:25:07 PM UTC   April 8, 2024 4:25:33 PM UTC   Success  
    Saving database backup reports           April 8, 2024 4:25:33 PM UTC   April 8, 2024 4:25:33 PM UTC   Success  
    Resizing Quorum Disks                    April 8, 2024 4:25:33 PM UTC   April 8, 2024 4:25:33 PM UTC   Success  
    Deconfiguring Grid Infrastructure        April 8, 2024 4:25:33 PM UTC   April 8, 2024 4:32:18 PM UTC   Success  
    Backup Quorum Disks                      April 8, 2024 4:32:18 PM UTC   April 8, 2024 4:32:18 PM UTC   Success  
    Creating the server data archive files   April 8, 2024 4:32:18 PM UTC   April 8, 2024 4:32:20 PM UTC   Success     
  12. Important: Save the files generated by the system deconfiguration and store them outside of the Oracle Database Appliance system. The server archive file is generated at /opt/oracle/oak/restore/out. For Oracle Database Appliance high-availability systems, use the server archive file from node 0.
    In /opt/oracle/oak/restore/out:
     
    [root@oda1 out]# ls -lrt
    total 52
    -rw-r--r-- 1 root root 14325 Sep 13 09:28 serverarchive_cluster_name.zip
    -rw-r--r-- 1 root root   65 Sep 13 09:28 serverarchive_cluster_name.zip.sha256
    [root@oda1 out]# scp serverarchive_cluster_name.zip root@host_outside_ODA
    [root@oda1 out]# scp serverarchive_cluster_name.zip.sha256 root@host_outside_ODA
    A checksum file (SHA256) is generated for the server archive file. Use this checksum to confirm that the file transfer was complete. The file serverarchive_cluster_name.zip.sha256 contains the SHA256 checksum computed when serverarchive_cluster_name.zip was generated. After copying the file outside the appliance with scp, generate the checksum using the sha256sum command. The checksum must match the value in the serverarchive_cluster_name.zip.sha256 file. For example:
    $ cat  serverarchive_oda1.zip.sha256
    7580347b642c2f6689b126d9cb27d0bf8be1f810c580663ad592d35e42d47ae6
     
    $ sha256sum serverarchive_oda1.zip
    7580347b642c2f6689b126d9cb27d0bf8be1f810c580663ad592d35e42d47ae6  serverarchive_oda1.zip
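
    The comparison above can also be scripted. A minimal sketch, assuming the archive and its .sha256 file are in the current directory; the helper name verify_archive is hypothetical, not an ODA tool:

    ```shell
    # Hypothetical helper: compare an archive's SHA256 against the saved
    # checksum file produced by detach-node. The <archive>.sha256 file
    # holds only the bare hash, so "sha256sum -c" cannot be used directly.
    verify_archive() {
      zip="$1"
      expected=$(cat "${zip}.sha256")
      actual=$(sha256sum "$zip" | awk '{print $1}')
      if [ "$expected" = "$actual" ]; then
        echo "checksum OK"
      else
        echo "checksum MISMATCH for $zip" >&2
        return 1
      fi
    }
    ```

    For example, `verify_archive serverarchive_oda1.zip` prints "checksum OK" only when the transferred file is intact.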

    WARNING:

    Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance in Step 2 of this process. Without these files, the system cannot be reprovisioned in Step 3 and you will lose all data stored in the Oracle ASM disk groups.

Step 2: Reimaging Nodes for Upgrading Using Data Preserving Reprovisioning

WARNING:

Do not run cleanup.pl either before or after reimaging the nodes. Running cleanup.pl erases all the Oracle ASM disk groups on the storage and you cannot reprovision your Oracle Database Appliance system.

Follow these steps to reimage nodes:

  1. Download the Oracle Database Appliance release 19.23 bare metal ISO image and reimage the appliance as described in the topic Reimaging an Oracle Database Appliance Baremetal System.
  2. Plumb the network as described in the topic Plumbing the Network.

Important:

For high-availability systems, serverarchive_cluster_name.zip contains the file configure-firstnet.rsp. This file contains the values that you must provide when running odacli configure-firstnet after reimaging the system. Extract the file configure-firstnet.rsp, open it in any text editor, and note the IP address that was saved in the file so that you can provide it when you run odacli configure-firstnet.

Step 3: Reprovisioning Nodes Using Data Preserving Reprovisioning Method

WARNING:

Do not run cleanup.pl before you run the command odacli restore-node -g. Running cleanup.pl erases all the Oracle ASM disk groups on the storage, after which you cannot reprovision your Oracle Database Appliance system with all databases intact. However, after you have run the command odacli restore-node -g at least once and the reprovisioning process has started, the cleanup is specific to that reprovisioning attempt and does not erase the Oracle ASM disk groups. If the command odacli restore-node -g fails, you can then use cleanup.pl to clean up failures in that step, after which you must run odacli restore-node -g again to complete the provisioning.

Follow these steps to reprovision the nodes:

  1. Update the repository with the Oracle Database Appliance release 19.23.0.0.0 Server Patch:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file

    For example, for 19.23:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.23.0.0.0-date-server.zip
    [root@oda1 opt]# odacli describe-job -i 73638e01-afc2-4a64-846c-460b816e227e
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  73638e01-afc2-4a64-846c-460b816e227e
                Description:  Repository Update
                     Status:  Success
                    Created:  January 8, 2024 5:54:02 AM HKT
                    Message:  /tmp/oda-sm-19.23.0.0.0-date-server.zip
     
    Task Name                                Node Name                 Start Time                          End Time                            Status   
    ---------------------------------------- ------------------------- ----------------------------------- ----------------------------------- ----------
    Check AvailableSpace                     node2             January 8, 2024 5:54:07 AM HKT   January 8, 2024 5:54:08 AM HKT   Success  
    Setting up SSH equivalence               node1             January 8, 2024 5:54:08 AM HKT   January 8, 2024 5:54:12 AM HKT   Success  
    Copy BundleFile                          node1             January 8, 2024 5:54:12 AM HKT   January 8, 2024 5:54:17 AM HKT   Success  
    Validating CopiedFile                    node2             January 8, 2024 5:54:17 AM HKT   January 8, 2024 5:54:22 AM HKT   Success  
    Unzip bundle                             node1             January 8, 2024 5:54:22 AM HKT   January 8, 2024 5:54:42 AM HKT   Success  
    Unzip bundle                             node2             January 8, 2024 5:54:42 AM HKT   January 8, 2024 5:55:01 AM HKT   Success  
    Delete PatchBundles                      node2             January 8, 2024 5:55:01 AM HKT   January 8, 2024 5:55:01 AM HKT   Success
  2. After reimaging the appliance with the Oracle Database Appliance release 19.23 ISO image, the operating system is on Oracle Linux 8. Apply the server patch to update the firmware and storage. First, create the pre-patch report for patching the firmware.

    For example:

    [root@oda1 opt]# odacli create-prepatchreport -s -v 19.23.0.0.0
    [root@oda1 opt]# odacli describe-prepatchreport -i 2d24a7e0-4b25-4e9f-8cf7-ea261673ead6
    
    Patch pre-check report                                           
    ------------------------------------------------------------------------
                     Job ID:  2d24a7e0-4b25-4e9f-8cf7-ea261673ead6
                Description:  Patch pre-checks for [OS, ILOM, SERVER]
                     Status:  SUCCESS
                    Created:  January 8, 2024 3:23:04 AM UTC
                     Result:  All pre-checks succeeded
    
    Node Name       
    ---------------
    node1 
    
    Pre-Check                      Status   Comments                              
    ------------------------------ -------- -------------------------------------- 
    __OS__ 
    Validate supported versions     Success   Validated minimum supported versions. 
    Validate patching tag           Success   Validated patching tag: 19.23.0.0.0.  
    Is patch location available     Success   Patch location is available.          
    Verify OS patch                 Success   There are no packages available for   
                                              an update                             
    Validate command execution      Success   Skipped command execution verification
                                              - Instance is not provisioned         
    
    __ILOM__ 
    Validate ILOM server reachable  Success   Successfully connected with ILOM      
                                              server using public IP and USB        
                                              interconnect                          
    Validate supported versions     Success   Validated minimum supported versions. 
    Validate patching tag           Success   Validated patching tag: 19.23.0.0.0.  
    Is patch location available     Success   Patch location is available.          
    Checking Ilom patch Version     Success   Successfully verified the versions    
    Patch location validation       Success   Successfully validated location       
    Validate command execution      Success   Skipped command execution verification
                                              - Instance is not provisioned         
    
    __SERVER__ 
    Validate local patching         Success   Successfully validated server local   
                                              patching                              
    Validate command execution      Success   Skipped command execution verification
                                              - Instance is not provisioned         
  3. Apply the server update.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v version

    For example:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v 19.23.0.0.0
  4. Confirm that the server update is successful:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
  5. Before you update the storage components, run the odacli create-prepatchreport command with the -st option.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -st -v version

    For example, for 19.23:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -st -v 19.23.0.0.0
  6. Verify that the patching pre-checks ran successfully:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli describe-prepatchreport

    For example:

    [root@oda1 opt]# odacli describe-prepatchreport -i 95887f92-7be7-4865-a311-54318ab385f2
    
    Patch pre-check report                                           
    ------------------------------------------------------------------------
                     Job ID:  95887f92-7be7-4865-a311-54318ab385f2
                Description:  Patch pre-checks for [STORAGE]
                     Status:  SUCCESS
                    Created:  April 8, 2024 12:52:37 PM HKT
                     Result:  All pre-checks succeeded
    
    Node Name       
    ---------------
    node1 
    
    Pre-Check                      Status   Comments                              
    ------------------------------ -------- -------------------------------------- 
    __STORAGE__ 
    Validate patching tag           Success   Validated patching tag: 19.23.0.0.0.  
    Patch location validation       Success   Verified patch location               
    Patch tag validation            Success   Verified patch tag                    
    Storage patch tag validation    Success   Verified storage patch location       
    Verify ASM disks status         Success   ASM disks are online                  
    Validate rolling patch          Success   Rolling mode patching allowed as      
                                              there is no expander and controller   
                                              upgrade.                              
    Validate command execution      Success   Validated command execution           
    
    Node Name       
    ---------------
    node2 
    
    Pre-Check                      Status   Comments                              
    ------------------------------ -------- -------------------------------------- 
    __STORAGE__ 
    Validate patching tag           Success   Validated patching tag: 19.23.0.0.0.  
    Patch location validation       Success   Verified patch location               
    Patch tag validation            Success   Verified patch tag                    
    Storage patch tag validation    Success   Verified storage patch location       
    Verify ASM disks status         Success   ASM disks are online                  
    Validate rolling patch          Success   Rolling mode patching allowed as      
                                              there is no expander and controller   
                                              upgrade.                              
    Validate command execution      Success   Validated command execution           

    Use the command odacli describe-prepatchreport to view details of the pre-patch report. The pre-patch report also indicates whether storage patching can be rolling or not, based on whether an Expander or Controller update is also required.

    Fix the warnings and errors mentioned in the report and proceed with the storage components patching.

  7. Update the storage components.

    Specify the --rolling option to patch shared disks in a rolling fashion. Note that if you patch from an Oracle Database Appliance release that requires the expander to be patched, then you cannot use the --rolling option during storage patching.

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-storage -v version --rolling

    For example, for 19.23:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-storage -v 19.23.0.0.0 --rolling
  8. Update the repository with the 19.23 Oracle Grid Infrastructure clone file as follows:
    1. Download the Oracle Database Appliance GI Clone for ODACLI/DCS stack (patch 30403673) from My Oracle Support to a temporary location on an external client. Refer to the release notes for details about the patch numbers and software for the latest release.
      p30403673_1923000_Linux-x86-64.zip
    2. Unzip the software; it contains README.html and one or more zip files for the patch.
      unzip p30403673_1923000_Linux-x86-64.zip
      The zip file contains the following software file:
      odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip
    3. Copy all the software files from the external client to Oracle Database Appliance. For High-Availability deployments, copy the software files to only one node. The software files are copied to the other node during the patching process. Use the scp or sftp protocol to copy the bundle.
      Example using scp command:
      # scp software_file root@oda_host:/tmp
      Example using sftp command:
      # sftp root@oda_host
      Enter the root password, and copy the files.
      put software_file
    4. Update the repository with the Oracle Grid Infrastructure software file:
      [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file

      For example, for 19.23:

      [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip
    5. Confirm that the repository update is successful:
      [root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5
       
      Job details                                                     
      ----------------------------------------------------------------
                           ID:  6c5e8990-298d-4070-aeac-76f1e55e5fe5
                  Description:  Repository Update
                       Status:  Success
                      Created:  January 8, 2024 3:21:21 PM UTC
                      Message:  /tmp/odacli-dcs-19.23.0.0.0-date-GI-19.23.0.0.zip
       
      Task Name                                Start Time                          End Time                            Status   
      ---------------------------------------- ----------------------------------- ----------------------------------- ----------
      Unzip bundle                             January 8, 2024 3:21:21 PM UTC   January 8, 2024 3:21:45 PM UTC   Success
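When scripting these steps, the overall job status can be checked programmatically instead of by reading the describe-job listing by eye. The following is a minimal sketch; the `job_status` helper name is hypothetical (not an odacli feature), and it assumes the `Status:` field format shown in the sample output above.

```shell
# Hypothetical helper: extract the overall Status field from
# 'odacli describe-job' output read on stdin.
job_status() {
  awk -F': *' '/^[[:space:]]*Status:/ { print $2; exit }'
}

# Demonstration against a snippet of the sample output shown above.
printf '%s\n' '            Description:  Repository Update' \
              '                 Status:  Success' | job_status   # prints: Success
```

In practice, pipe the live command into the helper, for example `odacli describe-job -i job_ID | job_status`.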
  9. Update the repository with the server data archive files generated during Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning, earlier in this upgrade process.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f server_archive_file_path

    For example:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/serverarchive_cluster_name.zip
    [root@oda1 opt]# odacli describe-job -i 33787134-0ebd-4d96-ad6f-268bdc154bd3
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  33787134-0ebd-4d96-ad6f-268bdc154bd3
                Description:  Repository Update
                     Status:  Success
                    Created:  January 8, 2024 3:35:16 AM UTC
                    Message:  /tmp/serverarchive_node.zip
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Unzip bundle                             January 8, 2024 3:35:16 AM UTC   January 8, 2024 3:35:17 AM UTC   Success
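On High-Availability systems the detach step produces one server archive per node, so the update-repository call is repeated per file. A minimal sketch, assuming the archives follow the serverarchive_*.zip naming shown above; the `echo` prints each command for review rather than running it (remove `echo` to execute):

```shell
# Sketch (assumed naming): queue one update-repository call per server
# archive zip found in the given directory. 'echo' makes this a dry run.
update_server_archives() {
  dir=${1:-/tmp}
  for f in "$dir"/serverarchive_*.zip; do
    [ -e "$f" ] || continue   # no matches: the glob stays literal, skip it
    echo /opt/oracle/dcs/bin/odacli update-repository -f "$f"
  done
}

update_server_archives /tmp
```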
  10. (Optional) If External Oracle ASR was configured before detaching the node, follow these steps to update the repository with Oracle ASR Manager configuration files:
    1. On the appliance running Oracle ASR Manager, run the odacli export-asrconfig command to create a zip of Oracle ASR configuration files.
    2. Copy this zip file to the current machine.
    3. Run the odacli update-repository command to update the repository with the Oracle ASR Manager zip file.

  11. Restore Oracle Grid Infrastructure. If Oracle ASR was configured before detaching the node, then running the odacli restore-node -g command prompts for the Oracle ASR user password. Note that the restore process can take some time, and that network services are restarted during it.
    [root@oda1 opt]# odacli restore-node -g
    Enter new system password:
    Retype new system password:
    Enter ASR user's password:
    [root@oda1 opt]# odacli describe-job -i 1b110e62-ca70-44f6-9eba-e5fa3fc693eb
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  1b110e62-ca70-44f6-9eba-e5fa3fc693eb
                Description:  Restore node service - GI
                     Status:  Success
                    Created:  January 8, 2024 3:36:15 AM UTC
                    Message:  The system will reboot, if required, to enable the licensed number of CPU cores
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Restore node service creation            January 8, 2024 3:36:23 AM UTC   January 8, 2024 4:05:55 AM UTC   Success  
    Setting up Network                       January 8, 2024 3:36:26 AM UTC   January 8, 2024 3:36:26 AM UTC   Success  
    Setting up Vlan                          January 8, 2024 3:36:59 AM UTC   January 8, 2024 3:37:01 AM UTC   Success  
    Setting up Network                       January 8, 2024 3:37:37 AM UTC   January 8, 2024 3:37:37 AM UTC   Success  
    Network update                           January 8, 2024 3:38:18 AM UTC   January 8, 2024 3:38:55 AM UTC   Success  
    Updating network                         January 8, 2024 3:38:18 AM UTC   January 8, 2024 3:38:55 AM UTC   Success  
    Setting up Network                       January 8, 2024 3:38:18 AM UTC   January 8, 2024 3:38:18 AM UTC   Success  
    OS usergroup 'asmdba' creation           January 8, 2024 3:38:55 AM UTC   January 8, 2024 3:38:55 AM UTC   Success  
    OS usergroup 'asmoper' creation          January 8, 2024 3:38:55 AM UTC   January 8, 2024 3:38:55 AM UTC   Success  
    OS usergroup 'asmadmin' creation         January 8, 2024 3:38:55 AM UTC   January 8, 2024 3:38:56 AM UTC   Success  
    OS usergroup 'dba' creation              January 8, 2024 3:38:56 AM UTC   January 8, 2024 3:38:56 AM UTC   Success  
    OS usergroup 'dbaoper' creation          January 8, 2024 3:38:56 AM UTC   January 8, 2024 3:38:56 AM UTC   Success  
    OS usergroup 'oinstall' creation         January 8, 2024 3:38:56 AM UTC   January 8, 2024 3:38:56 AM UTC   Success  
    OS user 'grid' creation                  January 8, 2024 3:38:56 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    OS user 'oracle' creation                January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    Default backup policy creation           January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    Backup config metadata persist           January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    Grant permission to RHP files            January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    Add SYSNAME in Env                       January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:38:57 AM UTC   Success  
    Install oracle-ahf                       January 8, 2024 3:38:57 AM UTC   January 8, 2024 3:41:54 AM UTC   Success  
    Stop DCS Admin                           January 8, 2024 3:42:41 AM UTC   January 8, 2024 3:42:42 AM UTC   Success  
    Generate mTLS certificates               January 8, 2024 3:42:42 AM UTC   January 8, 2024 3:42:44 AM UTC   Success  
    Exporting Public Keys                    January 8, 2024 3:42:44 AM UTC   January 8, 2024 3:42:46 AM UTC   Success  
    Creating Trust Store                     January 8, 2024 3:42:46 AM UTC   January 8, 2024 3:42:49 AM UTC   Success  
    Update config files                      January 8, 2024 3:42:49 AM UTC   January 8, 2024 3:42:49 AM UTC   Success  
    Restart DCS Admin                        January 8, 2024 3:42:49 AM UTC   January 8, 2024 3:43:10 AM UTC   Success  
    Unzipping storage configuration files    January 8, 2024 3:43:10 AM UTC   January 8, 2024 3:43:10 AM UTC   Success  
    Reloading multipath devices              January 8, 2024 3:43:11 AM UTC   January 8, 2024 3:43:11 AM UTC   Success  
    Restart oakd                             January 8, 2024 3:43:11 AM UTC   January 8, 2024 3:43:22 AM UTC   Success  
    Restart oakd                             January 8, 2024 3:44:22 AM UTC   January 8, 2024 3:44:33 AM UTC   Success  
    Restore Quorum Disks                     January 8, 2024 3:44:33 AM UTC   January 8, 2024 3:44:33 AM UTC   Success  
    Creating GI home directories             January 8, 2024 3:44:33 AM UTC   January 8, 2024 3:44:33 AM UTC   Success  
    Extract GI clone                         January 8, 2024 3:44:33 AM UTC   January 8, 2024 3:45:55 AM UTC   Success  
    Creating wallet for Root User            January 8, 2024 3:45:56 AM UTC   January 8, 2024 3:46:00 AM UTC   Success  
    Creating wallet for ASM Client           January 8, 2024 3:46:00 AM UTC   January 8, 2024 3:46:05 AM UTC   Success  
    Grid stack creation                      January 8, 2024 3:46:05 AM UTC   January 8, 2024 3:59:40 AM UTC   Success  
    GI Restore with RHP                      January 8, 2024 3:46:05 AM UTC   January 8, 2024 3:56:12 AM UTC   Success  
    Updating GIHome version                  January 8, 2024 3:56:13 AM UTC   January 8, 2024 3:56:18 AM UTC   Success  
    Post cluster OAKD configuration          January 8, 2024 3:59:40 AM UTC   January 8, 2024 4:00:39 AM UTC   Success  
    Mounting disk group DATA                 January 8, 2024 4:00:39 AM UTC   January 8, 2024 4:00:40 AM UTC   Success  
    Mounting disk group RECO                 January 8, 2024 4:00:48 AM UTC   January 8, 2024 4:00:55 AM UTC   Success  
    Setting ACL for disk groups              January 8, 2024 4:01:03 AM UTC   January 8, 2024 4:01:07 AM UTC   Success  
    Register Scan and Vips to Public Network January 8, 2024 4:01:07 AM UTC   January 8, 2024 4:01:09 AM UTC   Success  
    Adding Volume ACFSCLONE to Clusterware   January 8, 2024 4:01:25 AM UTC   January 8, 2024 4:01:29 AM UTC   Success  
    Adding Volume COMMONSTORE to Clusterware January 8, 2024 4:01:29 AM UTC   January 8, 2024 4:01:33 AM UTC   Success  
    Adding Volume DATCPJUHX4S to Clusterware January 8, 2024 4:01:33 AM UTC   January 8, 2024 4:01:37 AM UTC   Success  
    Adding Volume DATO to Clusterware        January 8, 2024 4:01:37 AM UTC   January 8, 2024 4:01:41 AM UTC   Success  
    Adding Volume DATPJSLOXQA to Clusterware January 8, 2024 4:01:41 AM UTC   January 8, 2024 4:01:44 AM UTC   Success  
    Adding Volume DATPROVDB to Clusterware   January 8, 2024 4:01:44 AM UTC   January 8, 2024 4:01:48 AM UTC   Success  
    Adding Volume ODABASE_N0 to Clusterware  January 8, 2024 4:01:48 AM UTC   January 8, 2024 4:01:52 AM UTC   Success  
    Adding Volume ORAHOME_SH to Clusterware  January 8, 2024 4:01:52 AM UTC   January 8, 2024 4:01:56 AM UTC   Success  
    Adding Volume RECO to Clusterware        January 8, 2024 4:01:56 AM UTC   January 8, 2024 4:02:00 AM UTC   Success  
    Enabling Volume(s)                       January 8, 2024 4:02:00 AM UTC   January 8, 2024 4:03:52 AM UTC   Success  
    Discover OraHomeStorage - Node Restore   January 8, 2024 4:05:46 AM UTC   January 8, 2024 4:05:50 AM UTC   Success  
    Provisioning service creation            January 8, 2024 4:05:53 AM UTC   January 8, 2024 4:05:53 AM UTC   Success  
    Persist new agent state entry            January 8, 2024 4:05:53 AM UTC   January 8, 2024 4:05:53 AM UTC   Success  
    Persist new agent state entry            January 8, 2024 4:05:53 AM UTC   January 8, 2024 4:05:53 AM UTC   Success  
    Restart DCS Agent                        January 8, 2024 4:05:53 AM UTC   January 8, 2024 4:05:55 AM UTC   Success

    To skip restoring the Oracle ASR configuration during the restore-node operation, use the -sa (--skip-asr) option in the odacli restore-node command. For example:

    odacli restore-node -g -sa
  12. Restore the database.

    When you create Oracle Database homes with Oracle Database Appliance release 19.11 or later, the database homes are created on an Oracle ACFS-managed file system and not on the local disk. For a database user oracle, the new database homes are created under /u01/app/odaorahome/oracle/.

    Run the odacli list-dbhome-storages command to check if the storage for database homes is configured. If the database home is not already configured on Oracle ACFS, then before restoring the database home, configure the database home storage with the odacli configure-dbhome-storage command. For example:
    [root@oda1 opt]# odacli list-dbhome-storages
    [root@oda1 opt]# odacli configure-dbhome-storage -dg DATA

    The command does not allocate storage or create volumes or file systems; it only sets the disk group location in the metadata. For information about managing database homes on Oracle ACFS, see the topic Managing Database Home Storage.
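The check-then-configure logic above can be scripted. A minimal sketch: `needs_dbhome_config` is a hypothetical helper that scans odacli list-dbhome-storages output (read on stdin); the CONFIGURED marker it looks for is an assumption about that command's output format, so verify it against your appliance's actual listing before relying on it.

```shell
# Hypothetical helper: succeed (exit 0) when the listing on stdin shows no
# configured database home storage. 'CONFIGURED' is an assumed marker;
# check the real 'odacli list-dbhome-storages' output on your appliance.
needs_dbhome_config() {
  ! grep -q 'CONFIGURED'
}

# Intended use (not run here):
#   odacli list-dbhome-storages | needs_dbhome_config &&
#       odacli configure-dbhome-storage -dg DATA
```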

  13. To restore database homes that existed on the local drive before the reimage, first update the repository with the Oracle Database clones for the specific Oracle Database release, and then restore the databases. For database homes on Oracle ACFS-managed file system locations, you do not need to update the repository.
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-DB-19.23.0.0.zip

    Restore the databases:

    [root@oda1 opt]# odacli restore-node -d
    [root@oda1 opt]# odacli describe-job -i 8b080e66-b9f0-49c7-ac7e-24907e87066f
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  8b080e66-b9f0-49c7-ac7e-24907e87066f
                Description:  Restore node service - Database
                     Status:  Success
                    Created:  January 8, 2024 4:07:28 AM UTC
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Setting up SSH equivalence               January 8, 2024 4:07:32 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    DB home creation: OraDB19000_home3       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Skipped  
    DB home creation: OraDB19000_home4       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Skipped  
    DB home creation: OraDB19000_home5       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Skipped  
    DB home creation: OraDB19000_home1       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Skipped  
    DB home creation: OraDB19000_home2       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Skipped  
    Persist database storage locations       January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for O                      January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    Save metadata for PJSlOXqa               January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    Save metadata for provDb0                January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    Save metadata for cPjuHX4S               January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    Save metadata for mydb                   January 8, 2024 4:07:35 AM UTC   January 8, 2024 4:07:35 AM UTC   Success  
    Persist database storages                January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for O                      January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for PJSlOXqa               January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for provDb0                January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for cPjuHX4S               January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Save metadata for mydb                   January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:36 AM UTC   Success  
    Restore database: O                      January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:08:48 AM UTC   Success  
    +-- Adding database to GI                January 8, 2024 4:07:36 AM UTC   January 8, 2024 4:07:38 AM UTC   Success  
    +-- Adding database instance(s) to GI    January 8, 2024 4:07:38 AM UTC   January 8, 2024 4:07:38 AM UTC   Success  
    +-- Modifying SPFILE for database        January 8, 2024 4:07:38 AM UTC   January 8, 2024 4:08:15 AM UTC   Success  
    +-- Restore password file for database   January 8, 2024 4:08:15 AM UTC   January 8, 2024 4:08:15 AM UTC   Skipped  
    +-- Start instance(s) for database       January 8, 2024 4:08:15 AM UTC   January 8, 2024 4:08:35 AM UTC   Success  
    +-- Persist metadata for database        January 8, 2024 4:08:35 AM UTC   January 8, 2024 4:08:35 AM UTC   Success  
    +-- Clear all listeners from Database    January 8, 2024 4:08:35 AM UTC   January 8, 2024 4:08:36 AM UTC   Success  
    +-- Create adrci directory               January 8, 2024 4:08:39 AM UTC   January 8, 2024 4:08:39 AM UTC   Success  
    +-- Run SqlPatch                         January 8, 2024 4:08:39 AM UTC   January 8, 2024 4:08:48 AM UTC   Success  
    Restore database: PJSlOXqa               January 8, 2024 4:08:48 AM UTC   January 8, 2024 4:09:44 AM UTC   Success  
    +-- Adding database to GI                January 8, 2024 4:08:48 AM UTC   January 8, 2024 4:08:51 AM UTC   Success  
    +-- Adding database instance(s) to GI    January 8, 2024 4:08:51 AM UTC   January 8, 2024 4:08:51 AM UTC   Success  
    +-- Modifying SPFILE for database        January 8, 2024 4:08:51 AM UTC   January 8, 2024 4:09:16 AM UTC   Success  
    +-- Restore password file for database   January 8, 2024 4:09:16 AM UTC   January 8, 2024 4:09:16 AM UTC   Skipped  
    +-- Start instance(s) for database       January 8, 2024 4:09:16 AM UTC   January 8, 2024 4:09:30 AM UTC   Success  
    +-- Persist metadata for database        January 8, 2024 4:09:30 AM UTC   January 8, 2024 4:09:31 AM UTC   Success  
    +-- Clear all listeners from Database    January 8, 2024 4:09:31 AM UTC   January 8, 2024 4:09:32 AM UTC   Success  
    +-- Create adrci directory               January 8, 2024 4:09:34 AM UTC   January 8, 2024 4:09:34 AM UTC   Success  
    +-- Run SqlPatch                         January 8, 2024 4:09:34 AM UTC   January 8, 2024 4:09:44 AM UTC   Success  
    Restore database: provDb                 January 8, 2024 4:09:44 AM UTC   January 8, 2024 4:11:04 AM UTC   Success  
    +-- Adding database to GI                January 8, 2024 4:09:44 AM UTC   January 8, 2024 4:09:47 AM UTC   Success  
    +-- Adding database instance(s) to GI    January 8, 2024 4:09:47 AM UTC   January 8, 2024 4:09:47 AM UTC   Success  
    +-- Modifying SPFILE for database        January 8, 2024 4:09:47 AM UTC   January 8, 2024 4:10:15 AM UTC   Success  
    +-- Restore password file for database   January 8, 2024 4:10:15 AM UTC   January 8, 2024 4:10:15 AM UTC   Skipped  
    +-- Start instance(s) for database       January 8, 2024 4:10:15 AM UTC   January 8, 2024 4:10:33 AM UTC   Success  
    +-- Persist metadata for database        January 8, 2024 4:10:33 AM UTC   January 8, 2024 4:10:33 AM UTC   Success  
    +-- Clear all listeners from Database    January 8, 2024 4:10:33 AM UTC   January 8, 2024 4:10:34 AM UTC   Success  
    +-- Create adrci directory               January 8, 2024 4:10:36 AM UTC   January 8, 2024 4:10:36 AM UTC   Success  
    +-- Run SqlPatch                         January 8, 2024 4:10:36 AM UTC   January 8, 2024 4:11:04 AM UTC   Success  
    Restore database: cPjuHX4S               January 8, 2024 4:11:04 AM UTC   January 8, 2024 4:12:02 AM UTC   Success  
    +-- Adding database to GI                January 8, 2024 4:11:04 AM UTC   January 8, 2024 4:11:08 AM UTC   Success  
    +-- Adding database instance(s) to GI    January 8, 2024 4:11:08 AM UTC   January 8, 2024 4:11:08 AM UTC   Success  
    +-- Modifying SPFILE for database        January 8, 2024 4:11:08 AM UTC   January 8, 2024 4:11:34 AM UTC   Success  
    +-- Restore password file for database   January 8, 2024 4:11:34 AM UTC   January 8, 2024 4:11:34 AM UTC   Skipped  
    +-- Start instance(s) for database       January 8, 2024 4:11:34 AM UTC   January 8, 2024 4:11:49 AM UTC   Success  
    +-- Persist metadata for database        January 8, 2024 4:11:49 AM UTC   January 8, 2024 4:11:49 AM UTC   Success  
    +-- Clear all listeners from Database    January 8, 2024 4:11:49 AM UTC   January 8, 2024 4:11:50 AM UTC   Success  
    +-- Create adrci directory               January 8, 2024 4:11:52 AM UTC   January 8, 2024 4:11:52 AM UTC   Success  
    +-- Run SqlPatch                         January 8, 2024 4:11:52 AM UTC   January 8, 2024 4:12:02 AM UTC   Success  
    Restore database: mydb                   January 8, 2024 4:12:02 AM UTC   January 8, 2024 4:13:15 AM UTC   Success  
    +-- Adding database to GI                January 8, 2024 4:12:02 AM UTC   January 8, 2024 4:12:05 AM UTC   Success  
    +-- Adding database instance(s) to GI    January 8, 2024 4:12:05 AM UTC   January 8, 2024 4:12:05 AM UTC   Success  
    +-- Modifying SPFILE for database        January 8, 2024 4:12:05 AM UTC   January 8, 2024 4:12:41 AM UTC   Success  
    +-- Restore password file for database   January 8, 2024 4:12:41 AM UTC   January 8, 2024 4:12:41 AM UTC   Skipped  
    +-- Start instance(s) for database       January 8, 2024 4:12:41 AM UTC   January 8, 2024 4:13:01 AM UTC   Success  
    +-- Persist metadata for database        January 8, 2024 4:13:01 AM UTC   January 8, 2024 4:13:01 AM UTC   Success  
    +-- Clear all listeners from Database    January 8, 2024 4:13:01 AM UTC   January 8, 2024 4:13:02 AM UTC   Success  
    +-- Create adrci directory               January 8, 2024 4:13:05 AM UTC   January 8, 2024 4:13:05 AM UTC   Success  
    +-- Run SqlPatch                         January 8, 2024 4:13:05 AM UTC   January 8, 2024 4:13:15 AM UTC   Success  
    Restore Object Stores                    January 8, 2024 4:13:15 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Object Store Swift Creation              January 8, 2024 4:13:15 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Save password in wallet                  January 8, 2024 4:13:15 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Object Store Swift persist               January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Remount NFS backups                      January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Restore BackupConfigs                    January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:28 AM UTC   Success  
    Backup config creation                   January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Backup config metadata persist           January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:16 AM UTC   Success  
    Backup config creation                   January 8, 2024 4:13:16 AM UTC   January 8, 2024 4:13:28 AM UTC   Success  
    Libopc existence check                   January 8, 2024 4:13:17 AM UTC   January 8, 2024 4:13:17 AM UTC   Success  
    Installer existence check                January 8, 2024 4:13:17 AM UTC   January 8, 2024 4:13:17 AM UTC   Success  
    Container validation                     January 8, 2024 4:13:17 AM UTC   January 8, 2024 4:13:17 AM UTC   Success  
    Object Store Swift directory creation    January 8, 2024 4:13:18 AM UTC   January 8, 2024 4:13:18 AM UTC   Success  
    Install Object Store Swift module        January 8, 2024 4:13:18 AM UTC   January 8, 2024 4:13:28 AM UTC   Success  
    Backup config metadata persist           January 8, 2024 4:13:28 AM UTC   January 8, 2024 4:13:28 AM UTC   Success  
    Reattach backupconfigs to DBs            January 8, 2024 4:13:28 AM UTC   January 8, 2024 4:13:28 AM UTC   Success  
    Restore backup reports                   January 8, 2024 4:13:28 AM UTC   January 8, 2024 4:13:28 AM UTC   Success
    If the databases have Oracle Data Guard configured, then the restore operation also restores Oracle Data Guard. For example:
    [root@oda1 opt]# odacli restore-node -d
    [root@oda1 opt]# odacli describe-job -i d5aec86e-767f-4e28-b782-bc3e607f4eb1
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  d5aec86e-767f-4e28-b782-bc3e607f4eb1
                Description:  Restore node service - Database
                     Status:  Success
                    Created:  January 8, 2024 4:41:43 AM GMT
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status         
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------------
    Setting up SSH equivalence for 'oracle'  January 8, 2024 4:41:46 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    DB home creation: OraDB19000_home5       January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Skipped        
    DB home creation: OraDB19000_home1       January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Skipped        
    DB home creation: OraDB19000_home3       January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Skipped        
    DB home creation: OraDB19000_home2       January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Skipped        
    Persist database storage locations       January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd04SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for o1                     January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd03SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd02SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Persist database storages                January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd04SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for o1                     January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd03SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Save metadata for eOd02SyN               January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:49 AM GMT    Success        
    Restore database: o1                     January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:43:15 AM GMT    Success        
    +-- Adding database to GI                January 8, 2024 4:41:49 AM GMT    January 8, 2024 4:41:51 AM GMT    Success        
    +-- Adding database instance(s) to GI    January 8, 2024 4:41:51 AM GMT    January 8, 2024 4:41:51 AM GMT    Success        
    +-- Modifying SPFILE for database        January 8, 2024 4:41:51 AM GMT    January 8, 2024 4:42:29 AM GMT    Success        
    +-- Restore password file for database   January 8, 2024 4:42:29 AM GMT    January 8, 2024 4:42:29 AM GMT    Skipped        
    +-- Start instance(s) for database       January 8, 2024 4:42:29 AM GMT    January 8, 2024 4:43:04 AM GMT    Success        
    +-- Persist metadata for database        January 8, 2024 4:43:04 AM GMT    January 8, 2024 4:43:04 AM GMT    Success        
    +-- Clear all listeners from Database    January 8, 2024 4:43:04 AM GMT    January 8, 2024 4:43:05 AM GMT    Success        
    +-- Create adrci directory               January 8, 2024 4:43:07 AM GMT    January 8, 2024 4:43:07 AM GMT    Success        
    +-- Run SqlPatch                         January 8, 2024 4:43:07 AM GMT    January 8, 2024 4:43:15 AM GMT    Success        
    Restore Object Stores                    January 8, 2024 4:43:15 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Object Store Swift Creation              January 8, 2024 4:43:15 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Save password in wallet                  January 8, 2024 4:43:15 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Object Store Swift persist               January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Object Store Swift Creation              January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Save password in wallet                  January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Object Store Swift persist               January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Remount NFS backups                      January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:16 AM GMT    Success        
    Restore BackupConfigs                    January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Backup config creation                   January 8, 2024 4:43:16 AM GMT    January 8, 2024 4:43:26 AM GMT    Success        
    Libopc existence check                   January 8, 2024 4:43:17 AM GMT    January 8, 2024 4:43:17 AM GMT    Success        
    Installer existence check                January 8, 2024 4:43:17 AM GMT    January 8, 2024 4:43:17 AM GMT    Success        
    Container validation                     January 8, 2024 4:43:17 AM GMT    January 8, 2024 4:43:17 AM GMT    Success        
    Object Store Swift directory creation    January 8, 2024 4:43:17 AM GMT    January 8, 2024 4:43:17 AM GMT    Success        
    Install Object Store Swift module        January 8, 2024 4:43:17 AM GMT    January 8, 2024 4:43:26 AM GMT    Success        
    Backup config metadata persist           January 8, 2024 4:43:26 AM GMT    January 8, 2024 4:43:26 AM GMT    Success        
    Backup config creation                   January 8, 2024 4:43:26 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Libopc existence check                   January 8, 2024 4:43:26 AM GMT    January 8, 2024 4:43:26 AM GMT    Success        
    Installer existence check                January 8, 2024 4:43:26 AM GMT    January 8, 2024 4:43:26 AM GMT    Success        
    Container validation                     January 8, 2024 4:43:26 AM GMT    January 8, 2024 4:43:27 AM GMT    Success        
    Object Store Swift directory creation    January 8, 2024 4:43:27 AM GMT    January 8, 2024 4:43:27 AM GMT    Success        
    Install Object Store Swift module        January 8, 2024 4:43:27 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Backup config metadata persist           January 8, 2024 4:43:35 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Reattach backupconfigs to DBs            January 8, 2024 4:43:35 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Restore backup reports                   January 8, 2024 4:43:35 AM GMT    January 8, 2024 4:43:35 AM GMT    Success        
    Restore dataguard                        January 8, 2024 4:43:35 AM GMT    January 8, 2024 4:43:36 AM GMT    Success        
    Restore dataguard services               January 8, 2024 4:43:36 AM GMT    January 8, 2024 4:43:41 AM GMT    Success
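With task listings this long, a failed step is easy to miss. A minimal sketch: `flag_failed_tasks` (a hypothetical helper, not an odacli feature) scans a describe-job task table on stdin and prints any task whose final status is neither Success nor Skipped, keying on the two-timestamp line format shown in the listings above.

```shell
# Hypothetical helper: print task lines (two UTC/GMT timestamps each)
# whose last field is neither Success nor Skipped. Lines with fewer than
# two timestamp markers (headers, the Created: line) are ignored.
flag_failed_tasks() {
  awk '{ n = gsub(/ (UTC|GMT)/, "&") }
       n >= 2 && $NF != "Success" && $NF != "Skipped"'
}
```

In practice, pipe the live output into it: `odacli describe-job -i job_ID | flag_failed_tasks`.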
  14. If your source deployment had Oracle KVM deployments, or a shared CPU pool or custom vnetwork associated with any of the DB systems, then restore the Oracle KVM deployments.
    [root@oda1 opt]# odacli restore-node -kvm
    [root@oda1 opt]# odacli describe-job -i 2662bdfc-6505-43cf-b1e9-22a34c7b1c31
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  2662bdfc-6505-43cf-b1e9-22a34c7b1c31
                Description:  Restore node service - KVM
                     Status:  Success
                    Created:  January 8, 2024 9:21:29 PM UTC
                    Message: 
     
    Task Name                                Node Name                 Start Time                          End Time                            Status         
    ---------------------------------------- ------------------------- ----------------------------------- ----------------------------------- ----------------
    Validate backup files                    oda1                January 8, 2024 9:21:29 PM UTC   January 8, 2024 9:21:30 PM UTC   Success        
    Read backup metadata                     oda1                January 8, 2024 9:21:30 PM UTC   January 8, 2024 9:21:30 PM UTC   Success        
    Check existing resources                 oda1                January 8, 2024 9:21:30 PM UTC   January 8, 2024 9:21:31 PM UTC   Success        
    Create ACFS mount point                  oda1                January 8, 2024 9:21:31 PM UTC   January 8, 2024 9:21:31 PM UTC   Success        
    Register ACFS resources                  oda1                January 8, 2024 9:21:31 PM UTC   January 8, 2024 9:21:31 PM UTC   Success        
    Restore VM Storages metadata             oda1                January 8, 2024 9:21:31 PM UTC   January 8, 2024 9:21:31 PM UTC   Success        
    Restore VDisks metadata                  oda1                January 8, 2024 9:21:31 PM UTC   January 8, 2024 9:21:32 PM UTC   Success        
    Restore CPU Pools                        oda1                January 8, 2024 9:21:32 PM UTC   January 8, 2024 9:21:32 PM UTC   Success        
    Restore VNetworks                        oda1                January 8, 2024 9:21:32 PM UTC   January 8, 2024 9:21:32 PM UTC   Success        
    Patch VM's domain config files           oda1                January 8, 2024 9:21:32 PM UTC   January 8, 2024 9:21:32 PM UTC   Success        
    Restore VMs                              oda1                January 8, 2024 9:21:32 PM UTC   January 8, 2024 9:21:32 PM UTC   Success        
    Restore VMs metadata                     oda1                January 8, 2024 9:21:32 PM UTC   January 8, 2024 9:21:33 PM UTC   Success        
    Start VMs                                oda1                January 8, 2024 9:21:33 PM UTC   
  15. Restore Oracle DB systems, if your source deployment had any. Note: If your source deployment had a shared CPU pool or custom vnetwork associated with any of the DB systems, then run the KVM restore operation before restoring the DB systems.
    [root@oda1 opt]# odacli restore-node -dbs
    [root@oda1 opt]# odacli describe-job -i 5b81e5ae-1186-45fb-936b-4d21eb803eb8
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  5b81e5ae-1186-45fb-936b-4d21eb803eb8
                Description:  Restore node service - DBSYSTEM
                     Status:  Success
                    Created:  January 8, 2024 4:20:47 AM PDT
                    Message: 
     
    Task Name                                Node Name                 Start Time                          End Time                            Status         
    ---------------------------------------- ------------------------- ----------------------------------- ----------------------------------- ----------------
    Validate DB System json files            oda1             January 8, 2024 4:20:47 AM PDT   January 8, 2024 4:20:48 AM PDT   Success        
    Deserialize resources                    oda1             January 8, 2024 4:20:48 AM PDT   January 8, 2024 4:20:50 AM PDT   Success        
    Persist DB Systems for restore operation oda1             January 8, 2024 4:20:50 AM PDT   January 8, 2024 4:20:52 AM PDT   Success        
    Create DB System ACFS mount points       oda1             January 8, 2024 4:20:52 AM PDT   January 8, 2024 4:20:54 AM PDT   Success        
    Patch libvirt xml for DB Systems         oda1             January 8, 2024 4:20:54 AM PDT   January 8, 2024 4:20:57 AM PDT   Success        
    Restore DB System Networks               oda1             January 8, 2024 4:20:57 AM PDT   January 8, 2024 4:21:06 AM PDT   Success        
    Add DB Systems to Clusterware            oda1             January 8, 2024 4:21:06 AM PDT   January 8, 2024 4:21:10 AM PDT   Success        
    Validate start dependencies              oda1             January 8, 2024 4:21:10 AM PDT   January 8, 2024 4:21:12 AM PDT   Success        
    Start DB Systems                         oda1             January 8, 2024 4:21:12 AM PDT   January 8, 2024 4:21:36 AM PDT   Success        
    Wait DB Systems VM bootstrap             oda1             January 8, 2024 4:21:36 AM PDT   January 8, 2024 4:23:45 AM PDT   Success        
    Export clones repository for DB          oda1             January 8, 2024 4:23:45 AM PDT   January 8, 2024 4:23:47 AM PDT   Success        
    Systems post restore                                                                                                                                      
    Export ASM client cluster config on BM   oda1             January 8, 2024 4:23:47 AM PDT   January 8, 2024 4:23:50 AM PDT   Success        
    Import ASM client cluster config to      oda1             January 8, 2024 4:23:50 AM PDT   January 8, 2024 4:25:19 AM PDT   Success        
    OLR (within DB Systems)                                                                                                                                   
    Import ASM client cluster config to      oda1             January 8, 2024 4:25:19 AM PDT   January 8, 2024 4:26:18 AM PDT   Success        
    OCR (within DB Systems)                            

After upgrading your deployment to Oracle Database Appliance release 19.23, patch your databases to release 19.23 as described in this chapter.

Upgrading DB Systems to Oracle Linux 8 and Oracle Database Appliance Release 19.23 Using the CLI

Follow these steps to upgrade your Oracle Database Appliance DB system deployment using CLI commands.

After upgrading the Oracle Database Appliance bare metal system to Oracle Database Appliance release 19.23, you can upgrade your Oracle Database Appliance DB systems, one DB system at a time, to the current release. Oracle Grid Infrastructure inside a DB system is updated to release 19.23 only after the DB system is upgraded to Oracle Linux 8. Similarly, the databases inside the DB system can be updated to release 19.23 only after the DB system upgrade is complete.
Download the Oracle Database Appliance VM template and update the repository on the bare metal system.

Important:

Ensure that there is sufficient space on your appliance to download the patches.

Note:

You must import the latest Oracle Grid Infrastructure clone files applicable to the DB system to the repository. For example, if the restored DB system runs Oracle Grid Infrastructure release 19.20, 19.19, or 19.18, then the Oracle Grid Infrastructure clone file 19.23.0.0.240416 must be present in the repository. Similarly, if the restored DB system runs Oracle Grid Infrastructure release 21.8, then the Oracle Grid Infrastructure clone 21.8.0.0.221018 must be present in the repository.
Follow these steps to upgrade your Oracle Database Appliance DB system deployment to Oracle Linux 8 and Oracle Database Appliance release 19.23. Run the commands in these steps on the bare metal system.
  1. Download the Oracle Database Appliance VM template. Refer to the release notes for details about the patch numbers and software for the latest release.
    For example, download the VM template for 19.23:
    p32451228_1923000_Linux-x86-64.zip
  2. Unzip the software. It contains README.html and one or more zip files for the patch.
    unzip p32451228_1923000_Linux-x86-64.zip

    The zip file contains the following software file:

    odacli-dcs-19.23.0.0.0-date-ODAVM-19.23.0.0.zip
  3. Copy all the software files from the external client to Oracle Database Appliance. For High-Availability deployments, copy the software files to only one node. The software files are copied to the other node during the upgrade process. Use the scp or sftp protocol to copy the bundle.
    Example using scp command:
    # scp software_file root@oda_host:/tmp
    Example using sftp command:
    # sftp root@oda_host
    Enter the root password, and copy the files.
    put software_file
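Before updating the repository, it can help to confirm that the bundle copied intact, for example by comparing a checksum computed on the appliance against one recorded on the source client. A minimal sketch of such a check (a hypothetical helper for illustration, not an odacli command):

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 digest of a file in chunks, so that
    multi-gigabyte patch bundles do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def bundle_intact(path, expected_sha256):
    """Compare a copied file's checksum against the value recorded on the
    source client before the copy."""
    return sha256_of(path) == expected_sha256.lower()
```

The same comparison can of course be done with the sha256sum utility on both machines; the point is only to verify the copy before running odacli update-repository.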
  4. Update the repository with the VM template:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file

    For example, for 19.23:

    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-date-ODAVM-19.23.0.0.zip
  5. Confirm that the repository update is successful:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
  6. Create the pre-upgrade report to run the upgrade pre-checks. If the report contains errors, then perform the action listed in the Action column of the report to resolve the failure. Fix the errors and rerun the pre-upgrade report until all checks pass. If the report contains alerts, then review them and perform the recommended action, if any. Then run the upgrade DB system operation.
    [root@oda1 opt]# odacli create-preupgradereport -dbs dbs1
     
    Job details
    ----------------------------------------------------------------
                         ID:  ebba922c-e134-474f-8612-3b0728bf87e8
                Description:  Patch pre-upgrade checks for upgrade
                     Status:  Created
                    Created:  January 8, 2024 4:27:41 AM UTC
                    Message:  Use 'odacli describe-preupgradereport -i ebba922c-e134-474f-8612-3b0728bf87e8' to check details of results
     
    Task Name                                Start Time                          End Time                            Status
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------------
     
     
    [root@oda1 opt ~]# odacli describe-job -i ebba922c-e134-474f-8612-3b0728bf87e8
     
    Job details
    ----------------------------------------------------------------
                         ID:  ebba922c-e134-474f-8612-3b0728bf87e8
                Description:  Patch pre-upgrade checks for upgrade
                     Status:  Success
                    Created:  January 8, 2024 4:27:41 AM UTC
                    Message:  Use 'odacli describe-preupgradereport -i <ID>' to check results
     
    Task Name                                Start Time                          End Time                            Status
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------------
    Running prechecks for upgrade            January 8, 2024 4:27:41 AM UTC   January 8, 2024 4:28:13 AM UTC   Success
    Running pre-upgrade check: DBSYSTEM      January 8, 2024 4:27:41 AM UTC   January 8, 2024 4:27:41 AM UTC   Success
    Run DBSystem upgrade prechecks           January 8, 2024 4:27:42 AM UTC   January 8, 2024 4:28:13 AM UTC   Success
    Check pre-check status                   January 8, 2024 4:28:13 AM UTC   January 8, 2024 4:28:13 AM UTC   Success

    The Run DBSystem upgrade prechecks task on the bare metal system issues a job inside the DB system.

    View the pre-upgrade report:

    [root@oda1 opt]# odacli describe-preupgradereport -i 1570e7be-a201-4f9f-aba8-2ef358f2355a
     
    Upgrade pre-check report
    ------------------------------------------------------------------------
                     Job ID:  1570e7be-a201-4f9f-aba8-2ef358f2355a
                Description:  Run pre-upgrade checks for DB System: dbs1
                     Status:  SUCCESS
                    Created:  January 8, 2024 4:39:42 PM UTC
                     Result:  All pre-checks succeeded
     
    Node Name
    ---------------
    node1
     
    Check                          Status   Message                                Action
    ------------------------------ -------- -------------------------------------- --------------------------------------
    __DBSYSTEM__
    Validate DB System State       Success  DB System 'dbs1' is in      None
                                            'CONFIGURED' state
    Verify existence of DBVM image Success  DB System image version '19.23.0.0.0'  None
                                            is present in repository
    Verify existence of Database   Success  Database clone version                 None
    clone                                   '19.20.0.0.230717' is present in
                                            repository
    Verify existence of GI clone   Success  GI clone version '19.23.0.0.240416'    None
                                            is  present in repository
     
    Node Name
    ---------------
    node2
     
    Check                          Status   Message                                Action
    ------------------------------ -------- -------------------------------------- --------------------------------------
    __DB__
    Validate Database Status       Success  Database 'x8II4Deb' is running and is  None
                                            in 'CONFIGURED' state
    Validate Database Datapatch    Success  Database 'x8II4Deb' is completely      None
    Application Status                      applied with datapatch
    Validate TDE wallet presence   Success  TDE Wallet Management of database      None
                                            'x8II4Deb' is ODA. Skipping TDE
                                            wallet presence check.
    Validate Database Home         Success  Database home location check passed    None
    location                                for database x8II4DebU
     
    __SYS__
    Validate System Version        Success  System version 19.20.0.0.0 is          None
                                            supported
    Verify System Timezone         Success  Successfully verified the time zone    None
                                            file
    Verify Grid User               Success  Grid user is verified                  None
    Verify Grid Version            Success  Oracle Grid Infrastructure is running  None
                                            on the '19.20.0.0.230717' version on
                                            all nodes
    Verify number of Databases     Success  Only one database is active            None
    Verify number of Database      Success  Only one database home is configured   None
    Homes
     
    __OS__
    Check Required OS files        Success  All the required files are present     None
     
    __CERTIFICATES__
    Check using custom             Success  Using Default key pair                 None
    certificates
    Check the agent of the DB      Success  All the agents of the DB systems are   None
    System accessible                       accessible
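Because the pre-upgrade report must be rerun until every check passes, it can be convenient to scan the report output mechanically. A minimal sketch, assuming the fixed-width layout shown above (check name in the first 31 columns, status in the next 8); the column positions and status values are inferred from this example output, not a documented odacli interface:

```python
def failed_checks(report_text, passing=("Success",)):
    """Scan `odacli describe-preupgradereport` text output and return the
    names of checks whose status is not a passing value.

    Header, separator, section (__DBSYSTEM__, __SYS__, ...) and wrapped
    continuation lines carry no status in that column and are skipped.
    """
    flagged = []
    for line in report_text.splitlines():
        status = line[31:39].strip()
        if status and status != "Status" and "-" not in status and status not in passing:
            flagged.append(line[:31].strip())
    return flagged
```

An empty result suggests the report is clean and the upgrade operation can proceed; any flagged check should be resolved per its Action column first.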
  7. From the bare metal system, upgrade the DB system.
    [root@oda1 restore]# odacli upgrade-dbsystem -n DBSystemName

    For example:

    [root@oda1 restore]# odacli upgrade-dbsystem -n dbs1
    Enter password for system dbs1:
    Retype password for system dbs1:
     
    Job details
    ----------------------------------------------------------------
                         ID:  289160f7-2c20-48f3-afd6-8ca8e06669b3
                Description:  DB System dbs1 upgrade
                     Status:  Created
                    Created:  January 8, 2024 4:03:16 PM UTC
                    Message:
     
    Task Name                                Node Name                 Start Time                               End Time                                 Status
    ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------
     
     
    [root@oda1 restore]# odacli describe-job -i 289160f7-2c20-48f3-afd6-8ca8e06669b3
     
    Job details
    ----------------------------------------------------------------
                         ID:  289160f7-2c20-48f3-afd6-8ca8e06669b3
                Description:  DB System dbs1 upgrade
                     Status:  Success
                    Created:  January 8, 2024 4:03:16 PM UTC
                    Message:
     
    Task Name                                Node Name                 Start Time                               End Time                                 Status
    ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------
    Run DB System upgrade prechecks          node1             January 8, 2024 4:03:17 PM UTC          January 8, 2024 4:05:16 PM UTC          Success
    Save provisioning payload                node1             January 8, 2024 4:05:16 PM UTC          January 8, 2024 4:05:21 PM UTC          Success
    Set DB System as detaching               node1             January 8, 2024 4:05:21 PM UTC          January 8, 2024 4:05:22 PM UTC          Success
    Detach node - DPR                        node1             January 8, 2024 4:05:22 PM UTC          January 8, 2024 4:13:03 PM UTC          Success
    Copy server archive file from DB System  node1             January 8, 2024 4:13:03 PM UTC          January 8, 2024 4:13:03 PM UTC          Success
    Set DB System as recreating              node1             January 8, 2024 4:13:03 PM UTC          January 8, 2024 4:13:04 PM UTC          Success
    Remove DB System from Clusterware        node1             January 8, 2024 4:13:04 PM UTC          January 8, 2024 4:13:11 PM UTC          Success
    Delete ASM client cluster config         node1             January 8, 2024 4:13:11 PM UTC          January 8, 2024 4:13:17 PM UTC          Success
    Deprovision DB System VM(s)              node1             January 8, 2024 4:13:17 PM UTC          January 8, 2024 4:13:18 PM UTC          Success
    Delete DB System ACFS filesystem         node1             January 8, 2024 4:13:18 PM UTC          January 8, 2024 4:13:21 PM UTC          Success
    Delete DB System ACFS mount point        node1             January 8, 2024 4:13:21 PM UTC          January 8, 2024 4:13:22 PM UTC          Success
    Delete DB System ASM volume              node1             January 8, 2024 4:13:22 PM UTC          January 8, 2024 4:13:33 PM UTC          Success
    Delete DB System Networks                node1             January 8, 2024 4:13:33 PM UTC          January 8, 2024 4:13:35 PM UTC          Success
    Delete imported certificates             node1             January 8, 2024 4:13:35 PM UTC          January 8, 2024 4:13:36 PM UTC          Success
    Delete DB System metadata                node1             January 8, 2024 4:13:36 PM UTC          January 8, 2024 4:13:37 PM UTC          Success
    Load provisioning payload                node1             January 8, 2024 4:13:37 PM UTC          January 8, 2024 4:13:37 PM UTC          Success
    Validate DB System prerequisites         node1             January 8, 2024 4:13:37 PM UTC          January 8, 2024 4:13:43 PM UTC          Success
    Create DB System metadata                node1             January 8, 2024 4:13:43 PM UTC          January 8, 2024 4:13:50 PM UTC          Success
    Create DB System ASM volume              node1             January 8, 2024 4:13:50 PM UTC          January 8, 2024 4:14:05 PM UTC          Success
    Create DB System ACFS mount point        node1             January 8, 2024 4:14:05 PM UTC          January 8, 2024 4:14:05 PM UTC          Success
    Create DB System ACFS filesystem         node1             January 8, 2024 4:14:05 PM UTC          January 8, 2024 4:14:16 PM UTC          Success
    Create DB System VM ACFS snapshots       node1             January 8, 2024 4:14:16 PM UTC          January 8, 2024 4:16:17 PM UTC          Success
    Calculate VLAN ID                        node1             January 8, 2024 4:16:17 PM UTC          January 8, 2024 4:16:18 PM UTC          Success
    Create DB System Networks                node1             January 8, 2024 4:16:18 PM UTC          January 8, 2024 4:16:20 PM UTC          Success
    Create temporary SSH key pair            node1             January 8, 2024 4:16:20 PM UTC          January 8, 2024 4:16:22 PM UTC          Success
    Create DB System cloud-init config       node1             January 8, 2024 4:16:22 PM UTC          January 8, 2024 4:16:24 PM UTC          Success
    Provision DB System VM(s)                node1             January 8, 2024 4:16:24 PM UTC          January 8, 2024 4:16:28 PM UTC          Success
    Attach disks to DB System                node1             January 8, 2024 4:16:28 PM UTC          January 8, 2024 4:16:33 PM UTC          Success
    Add DB System to Clusterware             node1             January 8, 2024 4:16:33 PM UTC          January 8, 2024 4:16:36 PM UTC          Success
    Start DB System                          node1             January 8, 2024 4:16:36 PM UTC          January 8, 2024 4:16:41 PM UTC          Success
    Wait DB System VM first boot             node1             January 8, 2024 4:16:41 PM UTC          January 8, 2024 4:18:25 PM UTC          Success
    Setup Mutual TLS (mTLS)                  node1             January 8, 2024 4:18:25 PM UTC          January 8, 2024 4:19:11 PM UTC          Success
    Export clones repository                 node1             January 8, 2024 4:19:11 PM UTC          January 8, 2024 4:19:11 PM UTC          Success
    Setup ASM client cluster config          node1             January 8, 2024 4:19:11 PM UTC          January 8, 2024 4:19:14 PM UTC          Success
    Copy ASM client cluster config           node1             January 8, 2024 4:19:15 PM UTC          January 8, 2024 4:19:15 PM UTC          Success
    Install DB System                        node1             January 8, 2024 4:19:15 PM UTC          January 8, 2024 4:52:14 PM UTC          Success
    Copy server archive file to DB System    node1             January 8, 2024 4:52:14 PM UTC          January 8, 2024 4:52:16 PM UTC          Success
    Unpack server archive zip file           node1             January 8, 2024 4:52:16 PM UTC          January 8, 2024 4:52:28 PM UTC          Success
    Cleanup temporary SSH key pair           node1             January 8, 2024 4:52:28 PM UTC          January 8, 2024 4:52:29 PM UTC          Success
    Set DB System as reconfiguring           node1             January 8, 2024 4:52:29 PM UTC          January 8, 2024 4:52:30 PM UTC          Success
    Change Database file ownership           node1             January 8, 2024 4:52:30 PM UTC          January 8, 2024 4:52:36 PM UTC          Success
    Restore node - DPR                       node1             January 8, 2024 4:52:36 PM UTC          January 8, 2024 5:02:47 PM UTC          Success
    Set upgraded DB System as configured     node1             January 8, 2024 5:02:47 PM UTC          January 8, 2024 5:02:47 PM UTC          Success
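The upgrade can take a long time, so the odacli describe-job output is typically checked repeatedly. A minimal sketch of pulling the overall job status out of that text; the parsing is based on the 'Status:' field in the job-details header shown above, not a documented odacli interface:

```python
def job_status(describe_job_output):
    """Extract the overall job status from `odacli describe-job` text output.

    Returns the value of the first 'Status:' field in the job-details
    header, or None if no such field is found.
    """
    for line in describe_job_output.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() == "Status":
            return value.strip()
    return None
```

A wrapper script could call odacli describe-job in a loop and stop once this function returns "Success" or "Failure".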
After upgrading your deployment to Oracle Database Appliance release 19.23, patch your databases to release 19.23 as described in this chapter.

Upgrading Oracle Database Appliance to Oracle Linux 8 and Oracle Database Appliance Release 19.23 Using the BUI

Follow these steps to upgrade your Oracle Database Appliance deployment and existing Oracle Database homes, using the Browser User Interface (BUI).

Download the Oracle Database Appliance patches from My Oracle Support and save them in a directory on the appliance. See the Oracle Database Appliance Release Notes for a list of available patches and links to download the patches.
Follow these steps to upgrade your Oracle Database Appliance deployment to Oracle Linux 8 and Oracle Database Appliance release 19.23 using the BUI. Before reprovisioning the appliance using the BUI, you must update the DCS admin, DCS components, and DCS agent as follows:
  1. Download the Oracle Database Appliance Server Patch for the ODACLI/DCS stack (patch 35938481) from My Oracle Support to a temporary location on an external client. Refer to the release notes for details about the patch numbers and software for the latest release.
    For example, download the server patch for 19.23:
    p35938481_1923000_Linux-x86-64.zip
  2. Unzip the software. It contains README.html and one or more zip files for the patch.
    unzip p35938481_1923000_Linux-x86-64.zip
    The zip file contains the following software file:
    oda-sm-19.23.0.0.0-date-server.zip
  3. Copy all the software files from the external client to Oracle Database Appliance. For High-Availability deployments, copy the software files to only one node. The software files are copied to the other node during the patching process. Use the scp or sftp protocol to copy the bundle.
    Example using scp command:
    # scp software_file root@oda_host:/tmp
    Example using sftp command:
    # sftp root@oda_host
    Enter the root password, and copy the files.
    put software_file
  4. Update the repository with the server software file:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
    For example, for 19.23:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.23.0.0.0-date-server.zip
  5. Confirm that the repository update is successful:
    [root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  6c5e8990-298d-4070-aeac-76f1e55e5fe5
                Description:  Repository Update
                     Status:  Success
                    Created:  January 08, 2024 3:21:21 PM UTC
                    Message:  /tmp/oda-sm-19.23.0.0.0-date-server.zip
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Unzip bundle                             January 08, 2024 3:21:21 PM UTC   January 08, 2024 3:21:45 PM UTC   Success
  6. Update DCS admin:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.23.0.0.0
    [root@oda1 opt]# odacli describe-job -i c00f38cd-299d-445b-b623-b24f664d48f9
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  c00f38cd-299d-445b-b623-b24f664d48f9
                Description:  DcsAdmin patching
                     Status:  Success
                    Created:  January 08, 2024 3:22:19 PM UTC
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Patch location validation                January 08, 2024 3:22:19 PM UTC   January 08, 2024 3:22:19 PM UTC   Success  
    Dcs-admin upgrade                        January 08, 2024 3:22:19 PM UTC   January 08, 2024 3:22:25 PM UTC   Success
  7. Update the DCS components:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.23.0.0.0
    {
      "jobId" : "e9862ac9-ed92-4934-a71a-93cea4c20a68",
      "status" : "Success",
      "message" : " DCS-Agent shutdown is successful. Skipping MySQL upgrade on OL7 Metadata schema update is done. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. Successfully reset the Keystore password. HAMI is not enabled Skipped removing old Libs. Successfully ran setupAgentAuth.sh ",
      "reports" : null,
      "createTimestamp" : "January 08, 2024 13:47:22 PM GMT",
      "description" : "Update-dcscomponents job completed and is not part of Agent job list",
      "updatedTime" : "January 08, 2024 13:49:44 PM GMT"
    }

    If the DCS components are updated successfully, then the message "status" : "Success" is displayed on the command line. If the update fails, fix the error and then rerun the odacli update-dcscomponents command. See the topic Resolving Errors When Updating DCS Components During Patching for more information about DCS component check errors.

    Note:

    For the DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified in this procedure.
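Because odacli update-dcscomponents prints a JSON document rather than registering a job in the agent job list, its result can be checked by parsing that JSON. A minimal sketch, assuming the output shape shown above:

```python
import json

def dcscomponents_succeeded(output_text):
    """Parse the JSON printed by `odacli update-dcscomponents` and report
    whether the update succeeded, i.e. the "status" field is "Success"."""
    result = json.loads(output_text)
    return result.get("status") == "Success"
```

If this returns False, the "message" field of the same JSON usually indicates which component update failed.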
  8. Update the DCS agent:
    [root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.23.0.0.0
    [root@oda1 opt]# odacli describe-job -i a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
     
    Job details                                                     
    ----------------------------------------------------------------
                         ID:  a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
                Description:  DcsAgent patching
                     Status:  Success
                    Created:  January 08, 2024 3:35:01 PM UTC
                    Message: 
     
    Task Name                                Start Time                          End Time                            Status   
    ---------------------------------------- ----------------------------------- ----------------------------------- ----------
    Dcs-agent upgrade  to version            January 08, 2024 3:35:01 PM UTC   January 08, 2024 3:38:50 PM UTC   Success  
    19.23.0.0.0                                                                                                               
    Update System version                    January 08, 2024 3:38:50 PM UTC   January 08, 2024 3:38:50 PM UTC   Success

Reprovisioning the Appliance Using BUI

After updating the DCS admin, DCS components, and DCS agent, reprovision the appliance using the BUI as follows:

  1. Navigate to the BUI and log in as the oda-admin user.
    https://Node0-host-ip-address:7093/mgmt/index.html
  2. In the BUI, click Data Preserving Re-provisioning.
  3. In the Re-provision tab, under Run Pre-checks, click Create Pre-Upgrade Report to create a preupgrade report.
  4. After the job completes successfully and the preupgrade report is generated, select the report in the drop-down list, and click View Pre-Upgrade Report.
  5. View the preupgrade report and fix any issues displayed in the report. Click Back to navigate to the Data Preserving Re-provisioning page.
  6. If the preupgrade report does not have any failures, then click Next to start the detach node process.
  7. Click Detach Node. Click Force Run if you are aware of the issues and still want to proceed with the operation. Click Yes to confirm.
  8. Click Activity to monitor the progress. When the job completes successfully, navigate to the Re-provision tab and click Next. The BUI displays a message to reimage the appliance. The detach node operation creates a server data archive file at /opt/oracle/oak/restore/out. Save a copy of the archive to a location outside of the appliance, to prepare for the reimage.

    WARNING:

    Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance. Without these files, the system cannot be reprovisioned and you will lose all data stored in the Oracle ASM disk groups.
  9. Manually reimage the nodes as described in the topic Upgrading Bare Metal System to Oracle Linux 8 and Oracle Database Appliance Release 19.23 Using the CLI.
  10. After reimaging the nodes, log into the BUI:
    https://Node0-host-ip-address:7093/mgmt/index.html

    The BUI prompts you to specify the admin password. Select the option to Enable Multi-User Access only if it was enabled prior to the detach node operation.

  11. Navigate to the Infrastructure Patching tab and apply the server and storage patches.
  12. Copy the server archive file produced during the detach node operation to the appliance and specify the absolute file path to the archive. In the Re-provision tab, specify the Server Archive Location and click Update Repository.
  13. Specify the GI Clone Location and click Update Repository.
  14. After the repository is updated successfully, click Restore Node. If your deployment is multi-user enabled, then specify New System Password, Oracle User Password, and Grid User Password.
  15. If your deployment had Oracle ASR configured when the detach-node operation was run, and you do not want to restore Oracle ASR configuration, then select Skip Restore of ASR configuration. If you want to restore Oracle ASR configuration, then specify the ASR Password. If you select Yes in the HTTPS Proxy Requires Authentication field, then specify the Proxy Password.
  16. Click Yes.
  17. After the restore node job completes successfully, you can restore databases.
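The BUI steps above can also be performed from the command line. The following is a minimal sketch of that flow; the archive and clone file names are placeholders (your actual file names depend on the node and release bundle you downloaded), so verify them, and the command options, against the Oracle Database Appliance Command-Line Interface chapter before running.

```shell
# Placeholders: replace the file paths below with the actual server data
# archive saved before reimaging and the Grid Infrastructure clone file.

# Update the repository with the saved server data archive file:
odacli update-repository -f /tmp/serverarchive_node0.zip

# Update the repository with the Oracle Grid Infrastructure clone file:
odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-GI.zip

# Reprovision the appliance (Grid Infrastructure) from the saved metadata:
odacli restore-node -g
```

Monitor the restore job with odacli describe-job before moving on to database restore.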

Restoring Databases Using the BUI

  1. If database storage is not configured on Oracle ACFS, then you must configure database storage before restoring the database.
  2. In the Restore Database tab, select the Disk Group Name and specify the Size in GB.
  3. Click Configure to submit the job to configure the database home storage.
  4. Update the repository with the database clones. Specify the Database Clone Location and click Update Repository.
  5. After the repository is updated with all the required database clones, click Restore Databases to restore the databases. Confirm that you want to submit the job.
  6. If you have configured Oracle KVM, shared CPU pool, or custom vnetwork resources, then restore VM instances after restoring the databases.
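If you prefer the command line, the database restore steps above broadly correspond to the following sketch. The clone file name is a placeholder; repeat the update-repository step for each database clone version your databases require.

```shell
# Placeholder: replace with the actual database clone file name for each
# database version present on the appliance before the detach operation.
odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-DB-19.23.0.0.zip

# Restore the databases from the saved metadata:
odacli restore-node -d
```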

Restoring VM Instances and DB Systems Using the BUI

  1. In the Restore VM Instances tab, click Restore VM Instances.
  2. To restore DB systems, navigate to the Restore DB Systems tab, and click Restore DB Systems.
  3. After the VM instances and DB systems are restored, you can upgrade DB systems.

Upgrading DB Systems Using the BUI

  1. In the Upgrade DB Systems tab, specify the DB System Clone Location and click Update Repository.
  2. After updating the repository, choose a DB system from the Select DB System drop-down list, and click Create Pre-Upgrade Report to create the preupgrade report for the DB system.
  3. After the preupgrade report is generated, select the report in the drop-down list, and click View Pre-Upgrade Report.
  4. View the preupgrade report and fix any issues displayed in the report. Click Back to navigate to the Data Preserving Re-provisioning page.
  5. If the preupgrade report does not have any failures, then continue with the Oracle Linux 8 upgrade. Choose a DB system from the Select DB System drop-down list and click Upgrade.
  6. Specify System Password and click Yes.
  7. Check the status of the job and verify that it completed successfully.
  8. Repeat this procedure to upgrade all DB systems in your deployment.
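The DB system upgrade can also be driven from the command line. This is a hedged sketch: the clone file name and the DB system name (dbsystem1) are placeholders, and you should confirm the upgrade-dbsystem options available in your release with odacli upgrade-dbsystem -h or the command reference.

```shell
# Placeholder: replace with the actual DB system clone file name.
odacli update-repository -f /tmp/odacli-dcs-19.23.0.0.0-ODAVM.zip

# Placeholder DB system name; repeat for each DB system in the deployment.
odacli upgrade-dbsystem -n dbsystem1
```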
After upgrading your deployment to Oracle Database Appliance release 19.23, patch your databases to release 19.23 as described in this chapter.

Patching Databases Using ODACLI Commands or the BUI

Use ODACLI commands or the Browser User Interface to patch databases to the latest release in your deployment.

Before patching the database home, upload the Oracle Database clone files for the database version to the repository. See Updating Oracle Database Appliance Repository with Database Clone Files Using the CLI for the procedure to update the repository with the latest Oracle Database clone files.

Important:

You must run the odacli create-prepatchreport command before you patch the Oracle databases; otherwise, the odacli update-database command fails with an error message prompting you to run the patching pre-checks.

Patching Databases on Oracle Database Appliance using ODACLI Commands

Run the following command to patch a database using the CLI:

odacli update-database [-a] [-dp] [-f] [-i db_id] [-imp] [-l] [-n db_name] [-ni node] [-r] [-to db_home_id] [-j] [-h]

For more information about the options for the update-database command, see the chapter Oracle Database Appliance Command-Line Interface.
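For example, to patch a single database to an already-patched destination database home, first run the patching pre-checks with odacli create-prepatchreport (see the command reference for its options), then run update-database. The database name and home ID below are placeholders; obtain the real values with odacli list-databases and odacli list-dbhomes.

```shell
# List databases and database homes to find the values to use:
odacli list-databases
odacli list-dbhomes

# Placeholders: mydb is the database name, and the -to value is the ID of
# the destination database home reported by odacli list-dbhomes.
odacli update-database -n mydb -to a1b2c3d4-e5f6-dbhome-id
```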

Patching Databases on Oracle Database Appliance using the BUI

  1. Log into the Browser User Interface with the oda-admin user name and password.
    https://Node0-host-ip-address:7093/mgmt/index.html
  2. Navigate to the Database tab.
  3. Select the database you want to patch.
  4. Click Update.
  5. If you select Apply Data Patch, then the Data Patch for the specified database is applied and you cannot select any other options.
  6. On a high-availability system, you can also select the node in the Select Node to Update list.
  7. Select Ignore Missing Patches to ignore missing patches.
  8. Select Force Run to force the operation to run.
  9. Select the destination database home.
  10. In the Patching Options section, select one of the following:
    • Abort: Aborts a previously unfinished or failed patching operation.
    • Revert: Reverts a previously unfinished or failed patching operation.
    • None: Patches the database.
  11. Click Update.
  12. If you have not run the pre-checks earlier, then an error is displayed when you submit the job to update the database.
  13. In the Database page, select the Database and then click Precheck to run pre-checks for patching the database.
    Click Activity for job status.
  14. In the Database page, for the database to be patched, click Actions and select View Pre-patch reports to view the pre-check report. Fix any errors, and then select Action as Apply to patch the database.
  15. Verify that the patching job completes successfully.

Patching Existing Database Homes Using ODACLI or the BUI

Use ODACLI or BUI to patch database homes in your deployment to the latest release.

Before patching the database, upload the Oracle Database clone files for the database version to the repository. See Updating Oracle Database Appliance Repository with Database Clone Files Using the CLI for the procedure to update the repository with the latest Oracle Database clone files.

Patching Database Homes on Oracle Database Appliance using ODACLI Commands

Run the following command to patch a database home using the CLI:

odacli update-dbhome -i dbhome_id -v version [-f] [-imp] [-p] [-l] [-u node_number] [-j] [-h]

For more information about the options for the update-dbhome command, see the chapter Oracle Database Appliance Command-Line Interface.
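For example, to patch a database home to release 19.23, run the following. The home ID is a placeholder (obtain it with odacli list-dbhomes), and the exact version string should match the release bundle installed in your repository.

```shell
# List database homes to find the ID of the home to patch:
odacli list-dbhomes

# Placeholders: the -i value is the database home ID from the listing above;
# -v is the target release version string.
odacli update-dbhome -i f3b57d7a-dbhome-id -v 19.23.0.0.0
```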

Patching Database Homes on Oracle Database Appliance using the BUI

  1. Log into the Browser User Interface with the oda-admin user name and password.
    https://Node0-host-ip-address:7093/mgmt/index.html
  2. Navigate to the Database Home tab.
  3. Select the database home you want to patch.
  4. Select the Patch Version for the database home.
  5. To patch multiple database homes, select each database home to be patched and the patch version for each database home.
  6. Select the Node to Update. You can choose the node that you want to update or you can choose to update all nodes.
  7. Click Patch. Select Precheck to run pre-checks before patching the database.
    Click Activity for job status.
  8. On the Patch page, for the database to be patched, click Actions and select View Pre-patch reports to view the pre-check report. Fix any errors, and then select Action as Apply to patch the database.
  9. Select Ignore Precheck Failures to ignore failures reported in the precheck results. However, it is recommended that you fix the errors reported in the prechecks.
  10. Select Ignore Missing Patches to ignore missing patches.
  11. Verify that the patching job completes successfully.