9 Upgrading Oracle Database Appliance to Release 19.24 Using Data Preserving Reprovisioning
Understand how you can upgrade your Oracle Database Appliance deployment from Oracle Database Appliance release 19.20 to Oracle Database Appliance release 19.24 on Oracle Linux 8.
If your deployment is on Oracle Database Appliance release 19.21 or later, then patch your appliance as described in the chapter Patching Oracle Database Appliance.
- About Upgrading Using Data Preserving Reprovisioning
Understand how you can upgrade your appliance from Oracle Database Appliance release 19.20 to Oracle Database Appliance release 19.24.
- Upgrading Bare Metal System to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the CLI
Follow these steps to apply patches to your Oracle Database Appliance bare metal deployment and existing Oracle Database homes, using CLI commands.
- Upgrading DB Systems to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the CLI
Follow these steps to upgrade your Oracle Database Appliance DB system deployment using CLI commands.
- Upgrading Oracle Database Appliance to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the BUI
Follow these steps to upgrade your Oracle Database Appliance deployment and existing Oracle Database homes, using the Browser User Interface (BUI).
- Patching Databases Using ODACLI Commands or the BUI
Use ODACLI commands or the Browser User Interface to patch databases to the latest release in your deployment.
- Patching Existing Database Homes Using ODACLI or the BUI
Use ODACLI or the BUI to patch database homes in your deployment to the latest release.
About Upgrading Using Data Preserving Reprovisioning
Understand how you can upgrade your appliance from Oracle Database Appliance release 19.20 to Oracle Database Appliance release 19.24.
Note:
To upgrade to Oracle Database Appliance release 19.24, you must be on Oracle Database Appliance release 19.20 at the minimum. To patch your appliance to Oracle Database Appliance release 19.20, refer to the Oracle Database Appliance Deployment and User's Guide for your hardware model in the Oracle Database Appliance release 19.20 documentation library.

Note:
Data Preserving Reprovisioning does not support encrypted Oracle ACFS. Use the acfsutil encr info command to check whether Oracle ACFS encryption is enabled. If Oracle ACFS encryption is enabled, then disable it using the acfsutil encr off command before you proceed with the upgrade. You can re-enable Oracle ACFS encryption after the upgrade. For more information, see the Oracle Automatic Storage Management Administrator's Guide in the Oracle Database 19c documentation library.
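As a minimal sketch of the check described above, the encryption status query and disable step can be scripted. The mount point, the exact wording matched in the acfsutil encr info output, and the helper function name are assumptions for illustration only:

```shell
# Hypothetical sketch: check whether Oracle ACFS encryption is enabled on a
# mount point and disable it before starting Data Preserving Reprovisioning.
# The grep pattern assumes 'acfsutil encr info' reports a line such as
# "Encryption status: Enabled"; verify against your appliance's actual output.
ensure_acfs_encr_off() {
  local mnt="$1"
  if acfsutil encr info -m "$mnt" 2>/dev/null | grep -qi "enabled"; then
    echo "ACFS encryption enabled on $mnt; disabling before upgrade"
    acfsutil encr off -m "$mnt"
  else
    echo "ACFS encryption not enabled on $mnt"
  fi
}
```

Run this for each Oracle ACFS file system on the appliance before the detach step, and re-enable encryption with acfsutil encr on after the upgrade completes.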
Starting with Oracle Database Appliance release 19.21, the operating system of the appliance is Oracle Linux 8. You must upgrade the system to Oracle Linux 8 before updating Oracle Grid Infrastructure and databases to 19.24. You use Data Preserving Reprovisioning to upgrade your appliance running Oracle Linux 7 to Oracle Database Appliance release 19.24 with Oracle Linux 8.
About Upgrading Using Data Preserving Reprovisioning
Data Preserving Reprovisioning enables reprovisioning of an already deployed Oracle Database Appliance system without modifying the storage and the databases on the appliance. This is achieved by saving the information about the source system and capturing it in server data archive files. The appliance is then reimaged to Oracle Database Appliance release 19.24, and the saved metadata is used to directly reprovision the system and bring back all the resources such as databases, DB systems, Oracle ASR, and others.
- The odacli create-preupgradereport command runs prechecks on the system, such as detection of databases and DB systems that are inactive, Oracle Data Guard, and TDE-enabled database settings. Errors, warnings, and alerts are reported if the system is determined to not be ready for the upgrade. You must review the errors, warnings, and alerts and take the corrective actions the report suggests. This ensures that no failures occur later in the process that could prevent the resources from being restarted.
- During the first step of detaching the node, information about the system is collected and preserved in a server archive file. Make sure this file is saved outside the Oracle Database Appliance system before reimaging the system with the Oracle Database Appliance release 19.24 ISO image. The settings preserved in the file are used to reprovision the system after reimaging the appliance. The stored information includes details about the Oracle ACFS volumes that store database homes, DB system cluster settings, VLANs, custom networks, CPU settings, and Oracle AFD settings. These settings are migrated after the reimage, and the third step in this process reprovisions them.
Steps in the Data Preserving Reprovisioning for Upgrade Process
- Detach resources and software from the source version of the appliance: This step saves the metadata about the databases, listeners, networks, DB systems, application KVMs, CPU pools, Oracle ASR, and other configuration details in archive files, namely, the server data archive files. Then, the services running on the system are shut down and uninstalled to prepare the environment for the reimage in step 2. The data on the storage is kept intact.
The server data archive files are generated after the nodes are successfully detached. You must save the server data archive files in a location outside the appliance that is being upgraded, and copy these files back to the appliance to restore the system in step 3.
WARNING:
Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance in Step 2 of this process. Without these files, the system cannot be reprovisioned in Step 3 and you will lose all data stored in the Oracle ASM disk groups.
- Reimage nodes using the Oracle Database Appliance ISO image: The procedure is similar to imaging the appliance. This step installs Oracle Linux 8 as the operating system.
- Restore nodes using the Data Preserving Reprovisioning method: After successful completion of the previous step, the operating system and DCS software are already on the required target version. However, to update firmware, you must patch your deployment using the Server Patch. After successfully patching the appliance, you can restore the system, by restoring the Oracle Grid Infrastructure, databases, listeners, networks, DB systems, application KVMs, CPU pools, Oracle ASR, and other services on the nodes.
- Upgrade DB systems: After reprovisioning databases, application KVMs, and DB systems, upgrade the DB systems to Oracle Linux 8.
The procedure for each step is detailed in the subsequent topics in this chapter.
Customizations to the Appliance and Their Persistence After Upgrade
- Custom RPMs: If your appliance has any custom operating system RPMs installed from Oracle Linux Yum repository, then the prechecks report lists these custom RPMs. You must uninstall these RPMs and then continue with the next step in the upgrade process for bare metal system and DB system upgrades. You can reinstall these custom RPMs as required, after the upgrade.
- Multi-User Access Enabled Systems: If your deployment did not have multi-user access configured before the upgrade, the newly-upgraded deployment will not have multi-user access enabled. The upgrade restores your deployment to the same configuration that existed prior to the upgrade, but with the software upgraded to Oracle Database Appliance release 19.24.
- Fixes applied by STIG and CIS scripts: Since the system is reimaged during the upgrade process, fixes applied on the appliance to conform with Security Technical Implementation Guides (STIG) and Center for Internet Security (CIS) benchmarks are lost, on both bare metal and DB systems.
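The custom RPM precheck described above compares the installed packages against the base ISO. A minimal sketch of the same idea, for keeping your own record of custom packages to reinstall after the upgrade, is shown below; the function name and the two list files are assumptions, with the current list produced by rpm -qa:

```shell
# Hypothetical sketch: list packages installed beyond a saved baseline.
# base_list: package names shipped in the base ODA ISO (one per line);
# current_list: output of 'rpm -qa | sort' on the appliance.
list_custom_rpms() {
  sort "$1" > /tmp/base.sorted.$$
  sort "$2" > /tmp/cur.sorted.$$
  # comm -13 prints lines that appear only in the second (current) file
  comm -13 /tmp/base.sorted.$$ /tmp/cur.sorted.$$
  rm -f /tmp/base.sorted.$$ /tmp/cur.sorted.$$
}
```

Save the resulting list outside the appliance along with the server data archive files, so you know which RPMs to reinstall after the upgrade.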
Upgrading Bare Metal System to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the CLI
Follow these steps to apply patches to your Oracle Database Appliance bare metal deployment and existing Oracle Database homes, using CLI commands.
WARNING:
Do not run cleanup.pl either before or after running the odacli detach-node command. Running cleanup.pl erases all the Oracle ASM disk groups on the storage and you cannot reprovision your Oracle Database Appliance system.
Note:
Run the steps in this procedure in the same order as documented. Run the odacli update-dcsadmin, odacli update-dcscomponents, and odacli update-dcsagent commands in the order documented.
Note:
For high-availability systems, run all the commands on one node only unless specified in the procedure step.

Note:
For the DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified.
Note:
If Oracle ASR configuration type is Internal, and if there are external assets registered with Oracle ASR Manager, then after upgrading the internal Oracle ASR appliance, ensure that you upgrade the external assets when you upgrade your appliance using Data Preserving Reprovisioning.
If the Oracle ASR configuration type is External, then ensure that you upgrade the appliance using Data Preserving Reprovisioning before upgrading the appliance with the external Oracle ASR. You can check the Oracle ASR configuration type with the odacli describe-asr command.
Important:
Ensure that there is sufficient space on your appliance to download the patches.

Important:
If you want to install third-party software on your Oracle Database Appliance, then ensure that the software does not impact the Oracle Database Appliance software. The version lock on Oracle Database Appliance RPMs displays a warning if the third-party software tries to override Oracle Database Appliance RPMs. You must restore the affected RPMs before patching Oracle Database Appliance so that patching completes successfully.

Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning
- Download the Oracle Database Appliance Server Patch for the ODACLI/DCS stack from My Oracle Support to a temporary location on an external client. Refer to the release notes for details about the patch numbers and software for the latest release.
For example, download the server patch for 19.24:
patch_number_1924000_Linux-x86-64.zip
- Unzip the software. It contains README.html and one or more zip files for the patch.
unzip patch_number_1924000_Linux-x86-64.zip
The zip file contains the following software file: oda-sm-19.24.0.0.0-date-server.zip
- Copy all the software files from the external client to Oracle Database Appliance. For high-availability deployments, copy the software files to only one node. The software files are copied to the other node during the patching process. Use the scp or sftp protocol to copy the bundle.
Example using the scp command:
# scp software_file root@oda_host:/tmp
Example using the sftp command:
# sftp root@oda_host
Enter the root password, and copy the files.
put software_file
- Update the repository with the server software file:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.24.0.0.0-date-server.zip
- Confirm that the repository update is successful:
# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
For example:
[root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5

Job details
----------------------------------------------------------------
ID: 6c5e8990-298d-4070-aeac-76f1e55e5fe5
Description: Repository Update
Status: Success
Created: June 8, 2024 3:21:21 PM UTC
Message: /tmp/oda-sm-19.24.0.0.0-date-server.zip

Task Name     Start Time                   End Time                     Status
------------- ---------------------------- ---------------------------- -------
Unzip bundle  June 8, 2024 3:21:21 PM UTC  June 8, 2024 3:21:45 PM UTC  Success
- Update the DCS admin:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.24.0.0.0
{
  "jobId" : "95178f45-b72f-46ef-b971-741f3fad51c4",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "June 8, 2024 07:45:54 AM UTC",
  "resourceList" : [ ],
  "description" : "DcsAdmin patching",
  "updatedTime" : "June 8, 2024 07:45:54 AM UTC",
  "jobType" : null
}
# odacli describe-job -i 95178f45-b72f-46ef-b971-741f3fad51c4

Job details
----------------------------------------------------------------
ID: 95178f45-b72f-46ef-b971-741f3fad51c4
Description: DcsAdmin patching
Status: Success
Created: June 8, 2024 7:45:54 AM UTC
Message:

Task Name                  Node Name  Start Time                   End Time                     Status
-------------------------- ---------- ---------------------------- ---------------------------- -------
Patch location validation  node1      June 8, 2024 7:45:58 AM UTC  June 8, 2024 7:45:58 AM UTC  Success
Patch location validation  node2      June 8, 2024 7:45:58 AM UTC  June 8, 2024 7:45:58 AM UTC  Success
Dcs-admin upgrade          node1      June 8, 2024 7:45:59 AM UTC  June 8, 2024 7:45:59 AM UTC  Success
Dcs-admin upgrade          node2      June 8, 2024 7:45:59 AM UTC  June 8, 2024 7:45:59 AM UTC  Success
- Update the DCS components:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.24.0.0.0
{
  "jobId" : "e9862ac9-ed92-4934-a71a-93cea4c20a68",
  "status" : "Success",
  "message" : " DCS-Agent shutdown is successful. Skipping MySQL upgrade on OL7 Metadata schema update is done. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. Successfully reset the Keystore password. HAMI is not enabled Skipped removing old Libs. Successfully ran setupAgentAuth.sh ",
  "reports" : null,
  "createTimestamp" : "June 8, 2024 13:47:22 PM GMT",
  "description" : "Update-dcscomponents job completed and is not part of Agent job list",
  "updatedTime" : "June 8, 2024 13:49:44 PM GMT"
}
If the DCS components are updated, then the message "status" : "Success" is displayed on the command line. For failed updates, fix the error and then proceed with the update by re-running the odacli update-dcscomponents command. See the topic Resolving Errors When Updating DCS Components During Patching for more information about DCS components check errors.

Note:
For the DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified in this procedure.
- Update the DCS agent:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.24.0.0.0
[root@oda1 opt]# odacli describe-job -i a9cac320-cebe-4a78-b6e5-ce9e0595d5fa

Job details
----------------------------------------------------------------
ID: a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
Description: DcsAgent patching
Status: Success
Created: June 8, 2024 3:35:01 PM UTC
Message:

Task Name                                 Start Time                   End Time                     Status
----------------------------------------- ---------------------------- ---------------------------- -------
Dcs-agent upgrade to version 19.24.0.0.0  June 8, 2024 3:35:01 PM UTC  June 8, 2024 3:38:50 PM UTC  Success
Update System version                     June 8, 2024 3:38:50 PM UTC  June 8, 2024 3:38:50 PM UTC  Success
- Similarly, log into each DB system and update the DCS components,
DCS admin, and DCS agent on every DB system in your
deployment:
[root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.24.0.0.0 [root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.24.0.0.0 [root@dbsystem1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.24.0.0.0
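Since the same three updates must run on every DB system, the repetition can be scripted. The sketch below is not an Oracle-provided tool; the loop, the helper name, the hostnames, and passwordless root SSH access to each DB system are assumptions for illustration:

```shell
# Hypothetical sketch: run the DCS admin, components, and agent updates on
# each DB system in turn. Hostnames and root SSH access are assumptions.
update_dcs_on_dbsystems() {
  for host in "$@"; do
    echo "Updating DCS stack on ${host}"
    # update-dcscomponents must complete before update-dcsagent runs
    ssh "root@${host}" \
      "/opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.24.0.0.0 && \
       /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.24.0.0.0 && \
       /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.24.0.0.0"
  done
}
```

Example invocation: update_dcs_on_dbsystems dbsystem1 dbsystem2. The chained && preserves the documented ordering and stops on the first failure so you can fix the error before re-running.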
- On the bare metal system, create the pre-upgrade report to run the upgrade prechecks. If the report lists errors, review the "Action" column to resolve each failure. Fix the errors and re-run the pre-upgrade report until all checks pass. If there are alerts in the report, review them and perform the recommended action, if any. Then proceed to run the detach-node operation.
[root@oda1 opt]# odacli create-preupgradereport -bm
[root@oda1 opt]# odacli describe-preupgradereport -i ID
For example:
[root@oda1 opt]# odacli describe-preupgradereport -i 31d5304a-d234-4f87-84ec-0297020f518a Upgrade pre-check report ------------------------------------------------------------------------ Job ID: 31d5304a-d234-4f87-84ec-0297020f518a Description: Run pre-upgrade checks for Bare Metal Status: SUCCESS Created: June 8, 2024 7:15:28 AM UTC Result: All pre-checks succeeded Node Name --------------- node1 Check Status Message Action ------------------------------ -------- -------------------------------------- -------------------------------------- __GI__ Check presence of databases Success No additional database found None not managed by ODA registered in CRS Check custom filesystems Success All file systems are owned and used None by OS users provisioned by ODA __OS__ Check Required OS files Success All the required files are present None Check Additional OS RPMs Success No RPMs outside of base ISO were None found on the system __STORAGE__ Check Required Storage files Success All the required files are present None Validate OAK Disks Success All OAK disks are in valid state None Validate ASM Disk Groups Success All ASM disk groups are in valid state None Validate ASM Disks Success All ASM disks are in valid state None Check Database Home Storage Success The volume(s) None volumes orahome_sh,odabase_n0,odabase_n1 state is CONFIGURED. Check space under /opt Success Free space on /opt: 142750.87 MB is None more than required space: 1024 MB Check space in ASM disk Success Space required for creating local None group(s) homes is present in ACFS database home storage. 
Required: 78 GB Available: 245 GB __SYS__ Validate Hardware Type Success Current hardware is supported None Validate ILOM interconnect Success ILOM interconnect is not enabled None Validate System Version Success System version 19.22.0.0.0 is None supported Verify System Timezone Success Succesfully verified the time zone None file Verify Grid User Success Grid user is verified None Verify Grid Version Success Oracle Grid Infrastructure is running None on the '19.17.0.0.221018' version on all nodes Check Audit Files Alert Audit files found under These files will be lost after /u01/app/oracle/product/12.1.0.2/ reimage. Backup the audit files to a dbhome_1/rdbms/audit, location outside the ODA system /u01/app/oracle/product/11.2.0.4/ dbhome_1/rdbms/audit, /u01/app/oracle/audit __DB__ Validate Database Status Success Database 'myTestDb' is running and is None in 'CONFIGURED' state Validate Database Version Success Version '19.17.0.0.221018' for None database 'myTestDb' is supported Validate Database Datapatch Success Database 'myTestDb' is completely None Application Status applied with datapatch Validate TDE wallet presence Success Database 'myTestDb' is not TDE None enabled. Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database myTestDb_uniq Validate Database Status Success Database 's' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '12.1.0.2.220719' for None database 's' is supported Validate Database Datapatch Success Database 's' is completely applied None Application Status with datapatch Validate TDE wallet presence Success Database 's' is not TDE enabled. None Skipping TDE wallet presence check. 
Validate Database Home Success Database home location check passed None location for database s Validate Database Status Success Database 'QyZ6O' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '11.2.0.4.210119' for None database 'QyZ6O' is supported Validate TDE wallet presence Success Database 'QyZ6O' is not TDE enabled. None Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database QyZ6O Validate Database Status Success Database 'EX68' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '18.14.0.0.210420' for None database 'EX68' is supported Validate Database Datapatch Success The database is SI and is running on None Application Status node2. This check is skipped. Validate TDE wallet presence Success Database 'EX68' is not TDE enabled. None Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database EX68 Validate Database Status Success Database 'DH1G0' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '12.2.0.1.220118' for None database 'DH1G0' is supported Validate Database Datapatch Success Database 'DH1G0' is completely None Application Status applied with datapatch Validate TDE wallet presence Success Database 'DH1G0' is not TDE enabled. None Skipping TDE wallet presence check. 
Validate Database Home Success Database home location check passed None location for database DH1G0 __CERTIFICATES__ Check using custom Success Using Default key pair None certificates Check the agent of the DB Success All the agents of the DB systems are None System accessible accessible __DBSYSTEMS__ Validate DB System DCS Success node1: SUCCESS None component versions Validate DB System DCS Success node1: SUCCESS None component versions Node Name --------------- node2 Check Status Message Action ------------------------------ -------- -------------------------------------- -------------------------------------- __GI__ Check presence of databases Success No additional database found None not managed by ODA registered in CRS Check custom filesystems Success All file systems are owned and used None by OS users provisioned by ODA __OS__ Check Required OS files Success All the required files are present None Check Additional OS RPMs Success No RPMs outside of base ISO were None found on the system __STORAGE__ Check Required Storage files Success All the required files are present None Validate OAK Disks Success All OAK disks are in valid state None Validate ASM Disk Groups Success All ASM disk groups are in valid state None Validate ASM Disks Success All ASM disks are in valid state None Check Database Home Storage Success The volume(s) None volumes orahome_sh,odabase_n0,odabase_n1 state is CONFIGURED. Check space under /opt Success Free space on /opt: 143154.76 MB is None more than required space: 1024 MB Check space in ASM disk Success Space required for creating local None group(s) homes is present in ACFS database home storage. 
Required: 78 GB Available: 245 GB __SYS__ Validate Hardware Type Success Current hardware is supported None Validate ILOM interconnect Success ILOM interconnect is not enabled None Validate System Version Success System version 19.22.0.0.0 is None supported Verify System Timezone Success Succesfully verified the time zone None file Verify Grid User Success Grid user is verified None Verify Grid Version Success Oracle Grid Infrastructure is running None on the '19.17.0.0.221018' version on all nodes Check Audit Files Alert Audit files found under These files will be lost after /u01/app/oracle/product/12.1.0.2/ reimage. Backup the audit files to a dbhome_1/rdbms/audit, location outside the ODA system /u01/app/oracle/product/11.2.0.4/ dbhome_1/rdbms/audit, /u01/app/oracle/audit, /u01/app/oracle/admin __DB__ Validate Database Status Success Database 'myTestDb' is running and is None in 'CONFIGURED' state Validate Database Version Success Version '19.17.0.0.221018' for None database 'myTestDb' is supported Validate Database Datapatch Success Database 'myTestDb' is completely None Application Status applied with datapatch Validate TDE wallet presence Success Database 'myTestDb' is not TDE None enabled. Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database myTestDb_uniq Validate Database Status Success Database 's' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '12.1.0.2.220719' for None database 's' is supported Validate Database Datapatch Success Database 's' is completely applied None Application Status with datapatch Validate TDE wallet presence Success Database 's' is not TDE enabled. None Skipping TDE wallet presence check. 
Validate Database Home Success Database home location check passed None location for database s Validate Database Status Success Database 'QyZ6O' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '11.2.0.4.210119' for None database 'QyZ6O' is supported Validate TDE wallet presence Success Database 'QyZ6O' is not TDE enabled. None Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database QyZ6O Validate Database Status Success Database 'EX68' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '18.14.0.0.210420' for None database 'EX68' is supported Validate Database Datapatch Success Database 'EX68' is completely applied None Application Status with datapatch Validate TDE wallet presence Success Database 'EX68' is not TDE enabled. None Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database EX68 Validate Database Status Success Database 'DH1G0' is running and is in None 'CONFIGURED' state Validate Database Version Success Version '12.2.0.1.220118' for None database 'DH1G0' is supported Validate Database Datapatch Success Database 'DH1G0' is completely None Application Status applied with datapatch Validate TDE wallet presence Success Database 'DH1G0' is not TDE enabled. None Skipping TDE wallet presence check. Validate Database Home Success Database home location check passed None location for database DH1G0 __CERTIFICATES__ Check using custom Success Using Default key pair None certificates Check the agent of the DB Success All the agents of the DB systems are None System accessible accessible __DBSYSTEMS__ Validate DB System DCS Success node1: SUCCESS None component versions Validate DB System DCS Success node1: SUCCESS None component versions
- On the bare metal system, detach the system for an operating system upgrade. Click Yes when prompted to continue.
WARNING:
Ensure that there is no hardware or networking change after issuing the odacli detach-node command.
[root@oda1 restore]# odacli detach-node -all
For example:
[root@oda1 restore]# odacli detach-node -all ******************************************************************************** IMPORTANT ******************************************************************************** 'odacli detach-node' will bring down the databases and grid services on the system. The files that belong to the databases, which are stored on ASM or ACFS, are left intact on the storage. The databases will be started up back after re-imaging the ODA system using 'odacli restore-node' commands. As a good precautionary measure, please backup all the databases on the system before you start this process. Do not store the backup on this ODA machine since the local file system will be wiped out as part of the re-image. ******************************************************************************** Do you want to continue (yes/no)[no] : yes [root@oda1 opt]# odacli describe-job -i 20b7fced-0aaa-474e-aa80-18e31c215e1c Job details ---------------------------------------------------------------- ID: 20b7fced-0aaa-474e-aa80-18e31c215e1c Description: Detach node service creation for upgrade Status: Success Created: June 8, 2024 4:22:19 PM UTC Message: Task Name Start Time End Time Status ---------------------------------------- ----------------------------------- ----------------------------------- ---------- Creating INIT file June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Creating firstnet response file June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Saving Appliance data June 8, 2024 4:22:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Saving OS files June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Saving CPU cores information June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Saving storage files June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Saving System June 8, 2024 4:22:19 PM UTC January 8, 2024 4:22:19 PM UTC Success Saving Volumes June 8, 2024 4:22:19 PM UTC 
January 8, 2024 4:22:40 PM UTC Success Saving File Systems June 8, 2024 4:22:40 PM UTC January 8, 2024 4:22:56 PM UTC Success Saving Networks June 8, 2024 4:22:56 PM UTC January 8, 2024 4:22:56 PM UTC Success Saving Quorum Disks June 8, 2024 4:22:56 PM UTC January 8, 2024 4:22:57 PM UTC Success Saving Database Storages June 8, 2024 4:22:57 PM UTC January 8, 2024 4:23:00 PM UTC Success Saving Database Homes January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success +-- Saving OraDB19000_home1 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success +-- Saving OraDB19000_home2 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success +-- Saving OraDB19000_home3 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success +-- Saving OraDB19000_home4 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success +-- Saving OraDB19000_home5 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:00 PM UTC Success Saving Databases January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:19 PM UTC Success +-- Saving provDb0 January 8, 2024 4:23:00 PM UTC January 8, 2024 4:23:04 PM UTC Success +-- Saving cPjuHX4S January 8, 2024 4:23:04 PM UTC January 8, 2024 4:23:08 PM UTC Success +-- Saving PJSlOXqa January 8, 2024 4:23:08 PM UTC January 8, 2024 4:23:11 PM UTC Success +-- Saving O January 8, 2024 4:23:11 PM UTC January 8, 2024 4:23:15 PM UTC Success +-- Saving mydb January 8, 2024 4:23:15 PM UTC January 8, 2024 4:23:19 PM UTC Success Saving Object swift stores January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Saving Database Backups January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Saving NFS Backups January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Creating databases version list January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Converting files for old DPR January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success compatibility Detach 
node - DPR January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:19 PM UTC Success Deconfiguring Appliance January 8, 2024 4:23:19 PM UTC January 8, 2024 4:32:18 PM UTC Success Deconfiguring Databases January 8, 2024 4:23:19 PM UTC January 8, 2024 4:25:33 PM UTC Success +-- Deconfiguring provDb0 January 8, 2024 4:23:19 PM UTC January 8, 2024 4:23:59 PM UTC Success +-- Deconfiguring cPjuHX4S January 8, 2024 4:23:59 PM UTC January 8, 2024 4:24:18 PM UTC Success +-- Deconfiguring PJSlOXqa January 8, 2024 4:24:18 PM UTC January 8, 2024 4:24:36 PM UTC Success +-- Deconfiguring O January 8, 2024 4:24:36 PM UTC January 8, 2024 4:25:07 PM UTC Success +-- Deconfiguring mydb January 8, 2024 4:25:07 PM UTC January 8, 2024 4:25:33 PM UTC Success Saving database backup reports January 8, 2024 4:25:33 PM UTC January 8, 2024 4:25:33 PM UTC Success Resizing Quorum Disks January 8, 2024 4:25:33 PM UTC January 8, 2024 4:25:33 PM UTC Success Deconfiguring Grid Infrastructure January 8, 2024 4:25:33 PM UTC January 8, 2024 4:32:18 PM UTC Success Backup Quorum Disks January 8, 2024 4:32:18 PM UTC January 8, 2024 4:32:18 PM UTC Success Creating the server data archive files January 8, 2024 4:32:18 PM UTC January 8, 2024 4:32:20 PM UTC Success
- Important: Save the files generated by the system
deconfiguration and store them outside of the Oracle Database Appliance system.
The server archive file is generated at /opt/oracle/oak/restore/out. For Oracle Database Appliance high-availability systems, use the server archive file from node 0. In /opt/oracle/oak/restore/out:
[root@oda1 out]# ls -lrt
total 52
-rw-r--r-- 1 root root 14325 Sep 13 09:28 serverarchive_cluster_name.zip
-rw-r--r-- 1 root root    65 Sep 13 09:28 serverarchive_cluster_name.zip.sha256
[root@oda1 out]# scp serverarchive_cluster_name.zip root@host_outside_ODA
[root@oda1 out]# scp serverarchive_cluster_name.zip.sha256 root@host_outside_ODA
There is a checksum file (SHA256) generated for the server archive file. Use this checksum to confirm that the file transfer was complete. The file serverarchive_cluster_name.zip.sha256 contains the SHA256 checksum of the file serverarchive_cluster_name.zip at the time it was generated. After using scp to copy the file outside the appliance, generate the checksum using the sha256sum command. The checksum must match the checksum present in the serverarchive_cluster_name.zip.sha256 file. For example:
$ cat serverarchive_oda1.zip.sha256
7580347b642c2f6689b126d9cb27d0bf8be1f810c580663ad592d35e42d47ae6
$ sha256sum serverarchive_oda1.zip
7580347b642c2f6689b126d9cb27d0bf8be1f810c580663ad592d35e42d47ae6  serverarchive_oda1.zip
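The checksum comparison can also be scripted so a transfer problem fails loudly. The sketch below is illustrative only: the archive name serverarchive_oda1.zip is an example, the payload is simulated so the snippet is self-contained, and it assumes the .sha256 file holds just the bare checksum, as in the example above.

```shell
# Illustrative sketch: simulate the transferred archive and its checksum file.
# On a real system, both files come from /opt/oracle/oak/restore/out.
echo "sample archive payload" > serverarchive_oda1.zip
sha256sum serverarchive_oda1.zip | awk '{print $1}' > serverarchive_oda1.zip.sha256

# Compare the freshly computed checksum against the recorded one.
expected="$(cat serverarchive_oda1.zip.sha256)"
actual="$(sha256sum serverarchive_oda1.zip | awk '{print $1}')"
if [ "$expected" = "$actual" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

If the checksums do not match, copy the archive off the appliance again before proceeding.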
WARNING:
Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance in Step 2 of this process. Without these files, the system cannot be reprovisioned in Step 3 and you will lose all data stored in the Oracle ASM disk groups.
Step 2: Reimaging Nodes for Upgrading Using Data Preserving Reprovisioning
WARNING:
Do not run cleanup.pl either before or after reimaging the nodes. Running cleanup.pl erases all the Oracle ASM disk groups on the storage, and you cannot reprovision your Oracle Database Appliance system.
Follow these steps to reimage the nodes:
- Download the Oracle Database Appliance release 19.24 bare metal ISO image and reimage the appliance as described in the topic Reimaging an Oracle Database Appliance Baremetal System.
- Plumb the network as described in the topic Plumbing the Network.
Important:
For high-availability systems, serverarchive_cluster_name.zip contains the file configure-firstnet.rsp. The configure-firstnet.rsp file contains the values that you need to provide when running odacli configure-firstnet after reimaging the system. Extract the file configure-firstnet.rsp, open it in any text editor, and then provide the IP address that was saved in the file.
Step 3: Reprovisioning Nodes Using Data Preserving Reprovisioning Method
WARNING:
Do not run cleanup.pl before you run the command odacli restore-node -g. Running cleanup.pl erases all the Oracle ASM disk groups on the storage, and you cannot reprovision your Oracle Database Appliance system with all databases intact. However, after you run the command odacli restore-node -g at least once and the process of reprovisioning has started, the cleanup is specific to that reprovisioning attempt and does not erase the Oracle ASM disk groups. If the command odacli restore-node -g has failed, then cleanup.pl can be used to clean up failures in that step. In such a case, run the command odacli restore-node -g again to complete the provisioning.
Follow these steps to reprovision the nodes:
- Update the repository with the Oracle Database Appliance release 19.24.0.0.0 Server
Patch:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.24.0.0.0-date-server.zip
[root@oda1 opt]# odacli describe-job -i 73638e01-afc2-4a64-846c-460b816e227e

Job details
----------------------------------------------------------------
                     ID:  73638e01-afc2-4a64-846c-460b816e227e
            Description:  Repository Update
                 Status:  Success
                Created:  January 8, 2024 5:54:02 AM HKT
                Message:  /tmp/oda-sm-19.24.0.0.0-date-server.zip

Task Name                   Node Name  Start Time                      End Time                        Status
--------------------------- ---------- ------------------------------- ------------------------------- ----------
Check AvailableSpace        node2      January 8, 2024 5:54:07 AM HKT  January 8, 2024 5:54:08 AM HKT  Success
Setting up SSH equivalence  node1      January 8, 2024 5:54:08 AM HKT  January 8, 2024 5:54:12 AM HKT  Success
Copy BundleFile             node1      January 8, 2024 5:54:12 AM HKT  January 8, 2024 5:54:17 AM HKT  Success
Validating CopiedFile       node2      January 8, 2024 5:54:17 AM HKT  January 8, 2024 5:54:22 AM HKT  Success
Unzip bundle                node1      January 8, 2024 5:54:22 AM HKT  January 8, 2024 5:54:42 AM HKT  Success
Unzip bundle                node2      January 8, 2024 5:54:42 AM HKT  January 8, 2024 5:55:01 AM HKT  Success
Delete PatchBundles         node2      January 8, 2024 5:55:01 AM HKT  January 8, 2024 5:55:01 AM HKT  Success
- In this upgrade process, after you reimage the appliance with the Oracle Database Appliance release 19.24 ISO image, the operating system is on Oracle Linux 8. Apply the server patch to update the firmware and storage. Create the pre-patch report for patching the firmware.
For example:
[root@oda1 opt]# odacli create-prepatchreport -s -v 19.24.0.0.0
[root@oda1 opt]# odacli describe-prepatchreport -i 2d24a7e0-4b25-4e9f-8cf7-ea261673ead6

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  2d24a7e0-4b25-4e9f-8cf7-ea261673ead6
            Description:  Patch pre-checks for [OS, ILOM, SERVER]
                 Status:  SUCCESS
                Created:  January 8, 2024 3:23:04 AM UTC
                 Result:  All pre-checks succeeded

Node Name
---------------
node1

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions    Success  Validated minimum supported versions.
Validate patching tag          Success  Validated patching tag: 19.24.0.0.0.
Is patch location available    Success  Patch location is available.
Verify OS patch                Success  There are no packages available for an update
Validate command execution     Success  Skipped command execution verification - Instance is not provisioned

__ILOM__
Validate ILOM server reachable Success  Successfully connected with ILOM server using public IP and USB interconnect
Validate supported versions    Success  Validated minimum supported versions.
Validate patching tag          Success  Validated patching tag: 19.24.0.0.0.
Is patch location available    Success  Patch location is available.
Checking Ilom patch Version    Success  Successfully verified the versions
Patch location validation      Success  Successfully validated location
Validate command execution     Success  Skipped command execution verification - Instance is not provisioned

__SERVER__
Validate local patching        Success  Successfully validated server local patching
Validate command execution     Success  Skipped command execution verification - Instance is not provisioned
- Apply the server
update.
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v version
For example:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-server -v 19.24.0.0.0
- Confirm that the server update is
successful:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
- Before you update the storage components, run the
odacli create-prepatchreport
command with the -st option:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -st -v version
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli create-prepatchreport -st -v 19.24.0.0.0
- Verify that the patching pre-checks ran
successfully:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli describe-prepatchreport
For example:
[root@oda1 opt]# odacli describe-prepatchreport -i 95887f92-7be7-4865-a311-54318ab385f2

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  95887f92-7be7-4865-a311-54318ab385f2
            Description:  Patch pre-checks for [STORAGE]
                 Status:  SUCCESS
                Created:  June 8, 2024 12:52:37 PM HKT
                 Result:  All pre-checks succeeded

Node Name
---------------
node1

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__STORAGE__
Validate patching tag          Success  Validated patching tag: 19.24.0.0.0.
Patch location validation      Success  Verified patch location
Patch tag validation           Success  Verified patch tag
Storage patch tag validation   Success  Verified storage patch location
Verify ASM disks status        Success  ASM disks are online
Validate rolling patch         Success  Rolling mode patching allowed as there is no expander and controller upgrade.
Validate command execution     Success  Validated command execution

Node Name
---------------
node2

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__STORAGE__
Validate patching tag          Success  Validated patching tag: 19.24.0.0.0.
Patch location validation      Success  Verified patch location
Patch tag validation           Success  Verified patch tag
Storage patch tag validation   Success  Verified storage patch location
Verify ASM disks status        Success  ASM disks are online
Validate rolling patch         Success  Rolling mode patching allowed as there is no expander and controller upgrade.
Validate command execution     Success  Validated command execution
Use the command
odacli describe-prepatchreport
to view details of the pre-patch report. The pre-patch report also indicates whether storage patching can be rolling or not, based on whether an Expander or Controller update is also required. Fix the warnings and errors mentioned in the report and proceed with patching the storage components.
- Update the storage
components.
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-storage -v version
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-storage -v 19.24.0.0.0
- Update the repository with the 19.24
Oracle Grid Infrastructure clone file as follows:
- Download the Oracle Database Appliance GI Clone for
ODACLI/DCS stack (patch 30403673) from My Oracle Support to a temporary
location on an external client. Refer to the release notes for details
about the patch numbers and software for the latest release.
p30403673_1924000_Linux-x86-64.zip
- Unzip the software. It contains README.html and one or more zip files for the patch:
unzip p30403673_1924000_Linux-x86-64.zip
The zip file contains the following software file:
odacli-dcs-19.24.0.0.0-date-GI-19.24.0.0.zip
- Copy all the software files from the external client to
Oracle Database Appliance. For High-Availability deployments, copy the
software files to only one node. The software files are copied to the
other node during the patching process. Use the
scp or sftp protocol to copy the bundle.
Example using the scp command:
# scp software_file root@oda_host:/tmp
Example using the sftp command:
# sftp root@oda_host
Enter the root password, and copy the files:
put software_file
- Update the repository with the Oracle Grid Infrastructure
software file:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/odacli-dcs-19.24.0.0.0-date-GI-19.24.0.0.zip
- Confirm that the repository update is
successful:
# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
For example:
[root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5

Job details
----------------------------------------------------------------
                     ID:  6c5e8990-298d-4070-aeac-76f1e55e5fe5
            Description:  Repository Update
                 Status:  Success
                Created:  January 8, 2024 3:21:21 PM UTC
                Message:  /tmp/odacli-dcs-19.24.0.0.0-date-GI-19.24.0.0.zip

Task Name     Start Time                      End Time                        Status
------------- ------------------------------- ------------------------------- ----------
Unzip bundle  January 8, 2024 3:21:21 PM UTC  January 8, 2024 3:21:45 PM UTC  Success
- Update the repository with the server data archive files generated
in Step 1: Detaching Nodes for Upgrade Using Data Preserving Reprovisioning of
this upgrade
process.
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f server_archive_file_path
For example:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/serverarchive_cluster_name.zip
[root@oda1 opt]# odacli describe-job -i 33787134-0ebd-4d96-ad6f-268bdc154bd3

Job details
----------------------------------------------------------------
                     ID:  33787134-0ebd-4d96-ad6f-268bdc154bd3
            Description:  Repository Update
                 Status:  Success
                Created:  January 8, 2024 3:35:16 AM UTC
                Message:  /tmp/serverarchive_cluster_name.zip

Task Name     Start Time                      End Time                        Status
------------- ------------------------------- ------------------------------- ----------
Unzip bundle  January 8, 2024 3:35:16 AM UTC  January 8, 2024 3:35:17 AM UTC  Success
- (Optional) If External Oracle ASR was configured before detaching
the node, follow these steps to update the repository with Oracle ASR Manager
configuration files:
- On the appliance running Oracle ASR Manager, run the
odacli export-asrconfig
command to create a zip of Oracle ASR configuration files. - Copy this zip file to the current machine.
- Run the
odacli update-repository
command to update the repository with the Oracle ASR manager zip file.
Run these steps only if the External Oracle ASR type was configured on the machine before detaching the node.
- On the appliance running Oracle ASR Manager, run the
- Restore Oracle Grid Infrastructure. If Oracle ASR was configured
before detaching the node, then running the
odacli restore-node -g
command prompts for Oracle ASR user password. Note that the restore process may take some time and the network services are restarted.[root@oda1 opt]# odacli restore-node -g Enter new system password: Retype new system password: Enter ASR user's password: [root@oda1 opt]# odacli describe-job -i 1b110e62-ca70-44f6-9eba-e5fa3fc693eb Job details ---------------------------------------------------------------- ID: 1b110e62-ca70-44f6-9eba-e5fa3fc693eb Description: Restore node service - GI Status: Success Created: January 8, 2024 3:36:15 AM UTC Message: The system will reboot, if required, to enable the licensed number of CPU cores Task Name Start Time End Time Status ---------------------------------------- ----------------------------------- ----------------------------------- ---------- Restore node service creation January 8, 2024 3:36:23 AM UTC January 8, 2024 4:05:55 AM UTC Success Setting up Network January 8, 2024 3:36:26 AM UTC January 8, 2024 3:36:26 AM UTC Success Setting up Vlan January 8, 2024 3:36:59 AM UTC January 8, 2024 3:37:01 AM UTC Success Setting up Network January 8, 2024 3:37:37 AM UTC January 8, 2024 3:37:37 AM UTC Success Network update January 8, 2024 3:38:18 AM UTC January 8, 2024 3:38:55 AM UTC Success Updating network January 8, 2024 3:38:18 AM UTC January 8, 2024 3:38:55 AM UTC Success Setting up Network January 8, 2024 3:38:18 AM UTC January 8, 2024 3:38:18 AM UTC Success OS usergroup 'asmdba' creation January 8, 2024 3:38:55 AM UTC January 8, 2024 3:38:55 AM UTC Success OS usergroup 'asmoper' creation January 8, 2024 3:38:55 AM UTC January 8, 2024 3:38:55 AM UTC Success OS usergroup 'asmadmin' creation January 8, 2024 3:38:55 AM UTC January 8, 2024 3:38:56 AM UTC Success OS usergroup 'dba' creation January 8, 2024 3:38:56 AM UTC January 8, 2024 3:38:56 AM UTC Success OS usergroup 'dbaoper' creation January 8, 2024 3:38:56 AM UTC January 8, 2024 3:38:56 AM UTC Success OS usergroup 'oinstall' creation January 8, 2024 3:38:56 AM UTC January 
8, 2024 3:38:56 AM UTC Success OS user 'grid' creation January 8, 2024 3:38:56 AM UTC January 8, 2024 3:38:57 AM UTC Success OS user 'oracle' creation January 8, 2024 3:38:57 AM UTC January 8, 2024 3:38:57 AM UTC Success Default backup policy creation January 8, 2024 3:38:57 AM UTC January 8, 2024 3:38:57 AM UTC Success Backup config metadata persist January 8, 2024 3:38:57 AM UTC January 8, 2024 3:38:57 AM UTC Success Grant permission to RHP files January 8, 2024 3:38:57 AM UTC January 8, 2024 3:38:57 AM UTC Success Add SYSNAME in Env January 8, 2024 3:38:57 AM UTC January 8, 2024 3:38:57 AM UTC Success Install oracle-ahf January 8, 2024 3:38:57 AM UTC January 8, 2024 3:41:54 AM UTC Success Stop DCS Admin January 8, 2024 3:42:41 AM UTC January 8, 2024 3:42:42 AM UTC Success Generate mTLS certificates January 8, 2024 3:42:42 AM UTC January 8, 2024 3:42:44 AM UTC Success Exporting Public Keys January 8, 2024 3:42:44 AM UTC January 8, 2024 3:42:46 AM UTC Success Creating Trust Store January 8, 2024 3:42:46 AM UTC January 8, 2024 3:42:49 AM UTC Success Update config files January 8, 2024 3:42:49 AM UTC January 8, 2024 3:42:49 AM UTC Success Restart DCS Admin January 8, 2024 3:42:49 AM UTC January 8, 2024 3:43:10 AM UTC Success Unzipping storage configuration files January 8, 2024 3:43:10 AM UTC January 8, 2024 3:43:10 AM UTC Success Reloading multipath devices January 8, 2024 3:43:11 AM UTC January 8, 2024 3:43:11 AM UTC Success Restart oakd January 8, 2024 3:43:11 AM UTC January 8, 2024 3:43:22 AM UTC Success Restart oakd January 8, 2024 3:44:22 AM UTC January 8, 2024 3:44:33 AM UTC Success Restore Quorum Disks January 8, 2024 3:44:33 AM UTC January 8, 2024 3:44:33 AM UTC Success Creating GI home directories January 8, 2024 3:44:33 AM UTC January 8, 2024 3:44:33 AM UTC Success Extract GI clone January 8, 2024 3:44:33 AM UTC January 8, 2024 3:45:55 AM UTC Success Creating wallet for Root User January 8, 2024 3:45:56 AM UTC January 8, 2024 3:46:00 AM UTC Success 
Creating wallet for ASM Client January 8, 2024 3:46:00 AM UTC January 8, 2024 3:46:05 AM UTC Success Grid stack creation January 8, 2024 3:46:05 AM UTC January 8, 2024 3:59:40 AM UTC Success GI Restore with RHP January 8, 2024 3:46:05 AM UTC January 8, 2024 3:56:12 AM UTC Success Updating GIHome version January 8, 2024 3:56:13 AM UTC January 8, 2024 3:56:18 AM UTC Success Post cluster OAKD configuration January 8, 2024 3:59:40 AM UTC January 8, 2024 4:00:39 AM UTC Success Mounting disk group DATA January 8, 2024 4:00:39 AM UTC January 8, 2024 4:00:40 AM UTC Success Mounting disk group RECO January 8, 2024 4:00:48 AM UTC January 8, 2024 4:00:55 AM UTC Success Setting ACL for disk groups January 8, 2024 4:01:03 AM UTC January 8, 2024 4:01:07 AM UTC Success Register Scan and Vips to Public Network January 8, 2024 4:01:07 AM UTC January 8, 2024 4:01:09 AM UTC Success Adding Volume ACFSCLONE to Clusterware January 8, 2024 4:01:25 AM UTC January 8, 2024 4:01:29 AM UTC Success Adding Volume COMMONSTORE to Clusterware January 8, 2024 4:01:29 AM UTC January 8, 2024 4:01:33 AM UTC Success Adding Volume DATCPJUHX4S to Clusterware January 8, 2024 4:01:33 AM UTC January 8, 2024 4:01:37 AM UTC Success Adding Volume DATO to Clusterware January 8, 2024 4:01:37 AM UTC January 8, 2024 4:01:41 AM UTC Success Adding Volume DATPJSLOXQA to Clusterware January 8, 2024 4:01:41 AM UTC January 8, 2024 4:01:44 AM UTC Success Adding Volume DATPROVDB to Clusterware January 8, 2024 4:01:44 AM UTC January 8, 2024 4:01:48 AM UTC Success Adding Volume ODABASE_N0 to Clusterware January 8, 2024 4:01:48 AM UTC January 8, 2024 4:01:52 AM UTC Success Adding Volume ORAHOME_SH to Clusterware January 8, 2024 4:01:52 AM UTC January 8, 2024 4:01:56 AM UTC Success Adding Volume RECO to Clusterware January 8, 2024 4:01:56 AM UTC January 8, 2024 4:02:00 AM UTC Success Enabling Volume(s) January 8, 2024 4:02:00 AM UTC January 8, 2024 4:03:52 AM UTC Success Discover OraHomeStorage - Node Restore January 8, 2024 
4:05:46 AM UTC January 8, 2024 4:05:50 AM UTC Success Provisioning service creation January 8, 2024 4:05:53 AM UTC January 8, 2024 4:05:53 AM UTC Success Persist new agent state entry January 8, 2024 4:05:53 AM UTC January 8, 2024 4:05:53 AM UTC Success Persist new agent state entry January 8, 2024 4:05:53 AM UTC January 8, 2024 4:05:53 AM UTC Success Restart DCS Agent January 8, 2024 4:05:53 AM UTC January 8, 2024 4:05:55 AM UTC Success
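Long-running jobs such as the restore-node operation can be waited on by polling odacli describe-job until the status is no longer Running. The sketch below mocks odacli with a shell function so it is self-contained; on a real appliance, remove the function so the actual /opt/oracle/dcs/bin/odacli binary is called, and adjust the awk pattern if your describe-job output formats the Status line differently.

```shell
# Mock of odacli for illustration only: reports "Running" twice, then
# "Success". A counter file is used because command substitution runs the
# function in a subshell, where plain variables would not persist.
rm -f poll_count
odacli() {
  n=$(( $(cat poll_count 2>/dev/null || echo 0) + 1 ))
  echo "$n" > poll_count
  if [ "$n" -lt 3 ]; then echo "Status: Running"; else echo "Status: Success"; fi
}

job_id="1b110e62-ca70-44f6-9eba-e5fa3fc693eb"   # example job ID
while true; do
  status=$(odacli describe-job -i "$job_id" | awk '/^Status:/ {print $2}')
  [ "$status" != "Running" ] && break
  sleep 1   # poll interval; real jobs warrant a longer interval
done
echo "Job finished with status: $status"
```

The same loop works for any job ID returned by odacli commands in this chapter.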
To skip the restore of the Oracle ASR configuration during the restore-node operation, use the --skip-asr parameter in the odacli restore-node command. For example:
odacli restore-node -g -sa
- Restore the database.
When you create Oracle Database homes with Oracle Database Appliance release 19.11 or later, the database homes are created on an Oracle ACFS-managed file system and not on the local disk. For a database user oracle, the new database homes are created under /u01/app/odaorahome/oracle/. Run the odacli list-dbhome-storages command to check whether the storage for database homes is configured. If the database home is not already configured on Oracle ACFS, then before restoring the database home, configure the database home storage with the odacli configure-dbhome-storage command. For example:
[root@oda1 opt]# odacli list-dbhome-storages
[root@oda1 opt]# odacli configure-dbhome-storage -dg DATA
The command does not cause storage allocation or creation of volumes or file systems. The command only sets the disk group location in the metadata. For information about managing database homes on Oracle ACFS, see the topic Managing Database Home Storage.
- To restore homes that existed on the local drive prior to the
reimage, ensure that you update the repository with the Oracle Database clones
for the specific Oracle Database release, and then restore the databases. For
database homes on Oracle ACFS-managed file system locations, you do not need to
update the
repository.
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/odacli-dcs-19.24.0.0.0-date-DB-19.24.0.0.zip
Restore the databases:
[root@oda1 opt]# odacli restore-node -d
[root@oda1 opt]# odacli describe-job -i 8b080e66-b9f0-49c7-ac7e-24907e87066f

Job details
----------------------------------------------------------------
                     ID:  8b080e66-b9f0-49c7-ac7e-24907e87066f
            Description:  Restore node service - Database
                 Status:  Success
                Created:  January 8, 2024 4:07:28 AM UTC
                Message:

Task Name                               Start Time                      End Time                        Status
--------------------------------------- ------------------------------- ------------------------------- ----------
Setting up SSH equivalence              January 8, 2024 4:07:32 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
DB home creation: OraDB19000_home3      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Skipped
DB home creation: OraDB19000_home4      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Skipped
DB home creation: OraDB19000_home5      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Skipped
DB home creation: OraDB19000_home1      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Skipped
DB home creation: OraDB19000_home2      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Skipped
Persist database storage locations      January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for O                     January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
Save metadata for PJSlOXqa              January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
Save metadata for provDb0               January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
Save metadata for cPjuHX4S              January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
Save metadata for mydb                  January 8, 2024 4:07:35 AM UTC  January 8, 2024 4:07:35 AM UTC  Success
Persist database storages               January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for O                     January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for PJSlOXqa              January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for provDb0               January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for cPjuHX4S              January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Save metadata for mydb                  January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:36 AM UTC  Success
Restore database: O                     January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:08:48 AM UTC  Success
+-- Adding database to GI               January 8, 2024 4:07:36 AM UTC  January 8, 2024 4:07:38 AM UTC  Success
+-- Adding database instance(s) to GI   January 8, 2024 4:07:38 AM UTC  January 8, 2024 4:07:38 AM UTC  Success
+-- Modifying SPFILE for database       January 8, 2024 4:07:38 AM UTC  January 8, 2024 4:08:15 AM UTC  Success
+-- Restore password file for database  January 8, 2024 4:08:15 AM UTC  January 8, 2024 4:08:15 AM UTC  Skipped
+-- Start instance(s) for database      January 8, 2024 4:08:15 AM UTC  January 8, 2024 4:08:35 AM UTC  Success
+-- Persist metadata for database       January 8, 2024 4:08:35 AM UTC  January 8, 2024 4:08:35 AM UTC  Success
+-- Clear all listeners from Database   January 8, 2024 4:08:35 AM UTC  January 8, 2024 4:08:36 AM UTC  Success
+-- Create adrci directory              January 8, 2024 4:08:39 AM UTC  January 8, 2024 4:08:39 AM UTC  Success
+-- Run SqlPatch                        January 8, 2024 4:08:39 AM UTC  January 8, 2024 4:08:48 AM UTC  Success
Restore database: PJSlOXqa              January 8, 2024 4:08:48 AM UTC  January 8, 2024 4:09:44 AM UTC  Success
+-- Adding database to GI               January 8, 2024 4:08:48 AM UTC  January 8, 2024 4:08:51 AM UTC  Success
+-- Adding database instance(s) to GI   January 8, 2024 4:08:51 AM UTC  January 8, 2024 4:08:51 AM UTC  Success
+-- Modifying SPFILE for database       January 8, 2024 4:08:51 AM UTC  January 8, 2024 4:09:16 AM UTC  Success
+-- Restore password file for database  January 8, 2024 4:09:16 AM UTC  January 8, 2024 4:09:16 AM UTC  Skipped
+-- Start instance(s) for database      January 8, 2024 4:09:16 AM UTC  January 8, 2024 4:09:30 AM UTC  Success
+-- Persist metadata for database       January 8, 2024 4:09:30 AM UTC  January 8, 2024 4:09:31 AM UTC  Success
+-- Clear all listeners from Database   January 8, 2024 4:09:31 AM UTC  January 8, 2024 4:09:32 AM UTC  Success
+-- Create adrci directory              January 8, 2024 4:09:34 AM UTC  January 8, 2024 4:09:34 AM UTC  Success
+-- Run SqlPatch                        January 8, 2024 4:09:34 AM UTC  January 8, 2024 4:09:44 AM UTC  Success
Restore database: provDb                January 8, 2024 4:09:44 AM UTC  January 8, 2024 4:11:04 AM UTC  Success
+-- Adding database to GI               January 8, 2024 4:09:44 AM UTC  January 8, 2024 4:09:47 AM UTC  Success
+-- Adding database instance(s) to GI   January 8, 2024 4:09:47 AM UTC  January 8, 2024 4:09:47 AM UTC  Success
+-- Modifying SPFILE for database       January 8, 2024 4:09:47 AM UTC  January 8, 2024 4:10:15 AM UTC  Success
+-- Restore password file for database  January 8, 2024 4:10:15 AM UTC  January 8, 2024 4:10:15 AM UTC  Skipped
+-- Start instance(s) for database      January 8, 2024 4:10:15 AM UTC  January 8, 2024 4:10:33 AM UTC  Success
+-- Persist metadata for database       January 8, 2024 4:10:33 AM UTC  January 8, 2024 4:10:33 AM UTC  Success
+-- Clear all listeners from Database   January 8, 2024 4:10:33 AM UTC  January 8, 2024 4:10:34 AM UTC  Success
+-- Create adrci directory              January 8, 2024 4:10:36 AM UTC  January 8, 2024 4:10:36 AM UTC  Success
+-- Run SqlPatch                        January 8, 2024 4:10:36 AM UTC  January 8, 2024 4:11:04 AM UTC  Success
Restore database: cPjuHX4S              January 8, 2024 4:11:04 AM UTC  January 8, 2024 4:12:02 AM UTC  Success
+-- Adding database to GI               January 8, 2024 4:11:04 AM UTC  January 8, 2024 4:11:08 AM UTC  Success
+-- Adding database instance(s) to GI   January 8, 2024 4:11:08 AM UTC  January 8, 2024 4:11:08 AM UTC  Success
+-- Modifying SPFILE for database       January 8, 2024 4:11:08 AM UTC  January 8, 2024 4:11:34 AM UTC  Success
+-- Restore password file for database  January 8, 2024 4:11:34 AM UTC  January 8, 2024 4:11:34 AM UTC  Skipped
+-- Start instance(s) for database      January 8, 2024 4:11:34 AM UTC  January 8, 2024 4:11:49 AM UTC  Success
+-- Persist metadata for database       January 8, 2024 4:11:49 AM UTC  January 8, 2024 4:11:49 AM UTC  Success
+-- Clear all listeners from Database   January 8, 2024 4:11:49 AM UTC  January 8, 2024 4:11:50 AM UTC  Success
+-- Create adrci directory              January 8, 2024 4:11:52 AM UTC  January 8, 2024 4:11:52 AM UTC  Success
+-- Run SqlPatch                        January 8, 2024 4:11:52 AM UTC  January 8, 2024 4:12:02 AM UTC  Success
Restore database: mydb                  January 8, 2024 4:12:02 AM UTC  January 8, 2024 4:13:15 AM UTC  Success
+-- Adding database to GI               January 8, 2024 4:12:02 AM UTC  January 8, 2024 4:12:05 AM UTC  Success
+-- Adding database instance(s) to GI   January 8, 2024 4:12:05 AM UTC  January 8, 2024 4:12:05 AM UTC  Success
+-- Modifying SPFILE for database       January 8, 2024 4:12:05 AM UTC  January 8, 2024 4:12:41 AM UTC  Success
+-- Restore password file for database  January 8, 2024 4:12:41 AM UTC  January 8, 2024 4:12:41 AM UTC  Skipped
+-- Start instance(s) for database      January 8, 2024 4:12:41 AM UTC  January 8, 2024 4:13:01 AM UTC  Success
+-- Persist metadata for database       January 8, 2024 4:13:01 AM UTC  January 8, 2024 4:13:01 AM UTC  Success
+-- Clear all listeners from Database   January 8, 2024 4:13:01 AM UTC  January 8, 2024 4:13:02 AM UTC  Success
+-- Create adrci directory              January 8, 2024 4:13:05 AM UTC  January 8, 2024 4:13:05 AM UTC  Success
+-- Run SqlPatch                        January 8, 2024 4:13:05 AM UTC  January 8, 2024 4:13:15 AM UTC  Success
Restore Object Stores                   January 8, 2024 4:13:15 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Object Store Swift Creation             January 8, 2024 4:13:15 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Save password in wallet                 January 8, 2024 4:13:15 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Object Store Swift persist              January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Remount NFS backups                     January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Restore BackupConfigs                   January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
Backup config creation                  January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Backup config metadata persist          January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:16 AM UTC  Success
Backup config creation                  January 8, 2024 4:13:16 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
Libopc existence check                  January 8, 2024 4:13:17 AM UTC  January 8, 2024 4:13:17 AM UTC  Success
Installer existence check               January 8, 2024 4:13:17 AM UTC  January 8, 2024 4:13:17 AM UTC  Success
Container validation                    January 8, 2024 4:13:17 AM UTC  January 8, 2024 4:13:17 AM UTC  Success
Object Store Swift directory creation   January 8, 2024 4:13:18 AM UTC  January 8, 2024 4:13:18 AM UTC  Success
Install Object Store Swift module       January 8, 2024 4:13:18 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
Backup config metadata persist          January 8, 2024 4:13:28 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
Reattach backupconfigs to DBs           January 8, 2024 4:13:28 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
Restore backup reports                  January 8, 2024 4:13:28 AM UTC  January 8, 2024 4:13:28 AM UTC  Success
If the databases have Oracle Data Guard configured, then the restore operation also restores Oracle Data Guard. For example:
[root@oda1 opt]# odacli restore-node -d
[root@oda1 opt]# odacli describe-job -i d5aec86e-767f-4e28-b782-bc3e607f4eb1

Job details
----------------------------------------------------------------
                     ID:  d5aec86e-767f-4e28-b782-bc3e607f4eb1
            Description:  Restore node service - Database
                 Status:  Success
                Created:  January 8, 2024 4:41:43 AM GMT
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------------
Setting up SSH equivalence for 'oracle'  January 8, 2024 4:41:46 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
DB home creation: OraDB19000_home5       January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Skipped
DB home creation: OraDB19000_home1       January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Skipped
DB home creation: OraDB19000_home3       January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Skipped
DB home creation: OraDB19000_home2       January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Skipped
Persist database storage locations       January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd04SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for o1                     January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd03SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd02SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Persist database storages                January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd04SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for o1                     January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd03SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Save metadata for eOd02SyN               January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:49 AM GMT      Success
Restore database: o1                     January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:43:15 AM GMT      Success
+-- Adding database to GI                January 8, 2024 4:41:49 AM GMT      January 8, 2024 4:41:51 AM GMT      Success
+-- Adding database instance(s) to GI    January 8, 2024 4:41:51 AM GMT      January 8, 2024 4:41:51 AM GMT      Success
+-- Modifying SPFILE for database        January 8, 2024 4:41:51 AM GMT      January 8, 2024 4:42:29 AM GMT      Success
+-- Restore password file for database   January 8, 2024 4:42:29 AM GMT      January 8, 2024 4:42:29 AM GMT      Skipped
+-- Start instance(s) for database       January 8, 2024 4:42:29 AM GMT      January 8, 2024 4:43:04 AM GMT      Success
+-- Persist metadata for database        January 8, 2024 4:43:04 AM GMT      January 8, 2024 4:43:04 AM GMT      Success
+-- Clear all listeners from Database    January 8, 2024 4:43:04 AM GMT      January 8, 2024 4:43:05 AM GMT      Success
+-- Create adrci directory               January 8, 2024 4:43:07 AM GMT      January 8, 2024 4:43:07 AM GMT      Success
+-- Run SqlPatch                         January 8, 2024 4:43:07 AM GMT      January 8, 2024 4:43:15 AM GMT      Success
Restore Object Stores                    January 8, 2024 4:43:15 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Object Store Swift Creation              January 8, 2024 4:43:15 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Save password in wallet                  January 8, 2024 4:43:15 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Object Store Swift persist               January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Object Store Swift Creation              January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Save password in wallet                  January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Object Store Swift persist               January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Remount NFS backups                      January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:16 AM GMT      Success
Restore BackupConfigs                    January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Backup config creation                   January 8, 2024 4:43:16 AM GMT      January 8, 2024 4:43:26 AM GMT      Success
Libopc existence check                   January 8, 2024 4:43:17 AM GMT      January 8, 2024 4:43:17 AM GMT      Success
Installer existence check                January 8, 2024 4:43:17 AM GMT      January 8, 2024 4:43:17 AM GMT      Success
Container validation                     January 8, 2024 4:43:17 AM GMT      January 8, 2024 4:43:17 AM GMT      Success
Object Store Swift directory creation    January 8, 2024 4:43:17 AM GMT      January 8, 2024 4:43:17 AM GMT      Success
Install Object Store Swift module        January 8, 2024 4:43:17 AM GMT      January 8, 2024 4:43:26 AM GMT      Success
Backup config metadata persist           January 8, 2024 4:43:26 AM GMT      January 8, 2024 4:43:26 AM GMT      Success
Backup config creation                   January 8, 2024 4:43:26 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Libopc existence check                   January 8, 2024 4:43:26 AM GMT      January 8, 2024 4:43:26 AM GMT      Success
Installer existence check                January 8, 2024 4:43:26 AM GMT      January 8, 2024 4:43:26 AM GMT      Success
Container validation                     January 8, 2024 4:43:26 AM GMT      January 8, 2024 4:43:27 AM GMT      Success
Object Store Swift directory creation    January 8, 2024 4:43:27 AM GMT      January 8, 2024 4:43:27 AM GMT      Success
Install Object Store Swift module        January 8, 2024 4:43:27 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Backup config metadata persist           January 8, 2024 4:43:35 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Reattach backupconfigs to DBs            January 8, 2024 4:43:35 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Restore backup reports                   January 8, 2024 4:43:35 AM GMT      January 8, 2024 4:43:35 AM GMT      Success
Restore dataguard                        January 8, 2024 4:43:35 AM GMT      January 8, 2024 4:43:36 AM GMT      Success
Restore dataguard services               January 8, 2024 4:43:36 AM GMT      January 8, 2024 4:43:41 AM GMT      Success
- If your source deployment had a shared CPU pool or a custom vnetwork associated with any of the DB systems, or had Oracle KVM deployments, then restore the Oracle KVM deployments.
[root@oda1 opt]# odacli restore-node -kvm
[root@oda1 opt]# odacli describe-job -i 2662bdfc-6505-43cf-b1e9-22a34c7b1c31

Job details
----------------------------------------------------------------
                     ID:  2662bdfc-6505-43cf-b1e9-22a34c7b1c31
            Description:  Restore node service - KVM
                 Status:  Success
                Created:  January 8, 2024 9:21:29 PM UTC
                Message:

Task Name                                Node Name    Start Time                          End Time                            Status
---------------------------------------- ------------ ----------------------------------- ----------------------------------- ----------------
Validate backup files                    oda1         January 8, 2024 9:21:29 PM UTC      January 8, 2024 9:21:30 PM UTC      Success
Read backup metadata                     oda1         January 8, 2024 9:21:30 PM UTC      January 8, 2024 9:21:30 PM UTC      Success
Check existing resources                 oda1         January 8, 2024 9:21:30 PM UTC      January 8, 2024 9:21:31 PM UTC      Success
Create ACFS mount point                  oda1         January 8, 2024 9:21:31 PM UTC      January 8, 2024 9:21:31 PM UTC      Success
Register ACFS resources                  oda1         January 8, 2024 9:21:31 PM UTC      January 8, 2024 9:21:31 PM UTC      Success
Restore VM Storages metadata             oda1         January 8, 2024 9:21:31 PM UTC      January 8, 2024 9:21:31 PM UTC      Success
Restore VDisks metadata                  oda1         January 8, 2024 9:21:31 PM UTC      January 8, 2024 9:21:32 PM UTC      Success
Restore CPU Pools                        oda1         January 8, 2024 9:21:32 PM UTC      January 8, 2024 9:21:32 PM UTC      Success
Restore VNetworks                        oda1         January 8, 2024 9:21:32 PM UTC      January 8, 2024 9:21:32 PM UTC      Success
Patch VM's domain config files           oda1         January 8, 2024 9:21:32 PM UTC      January 8, 2024 9:21:32 PM UTC      Success
Restore VMs                              oda1         January 8, 2024 9:21:32 PM UTC      January 8, 2024 9:21:32 PM UTC      Success
Restore VMs metadata                     oda1         January 8, 2024 9:21:32 PM UTC      January 8, 2024 9:21:33 PM UTC      Success
Start VMs                                oda1         January 8, 2024 9:21:33 PM UTC
- Restore Oracle DB systems, if your source deployment had any.
Note: If your source deployment had a shared CPU pool or a custom vnetwork associated with any of the DB systems, then run the KVM restore operation before restoring the DB systems.
[root@oda1 opt]# odacli restore-node -dbs
[root@oda1 opt]# odacli describe-job -i 5b81e5ae-1186-45fb-936b-4d21eb803eb8

Job details
----------------------------------------------------------------
                     ID:  5b81e5ae-1186-45fb-936b-4d21eb803eb8
            Description:  Restore node service - DBSYSTEM
                 Status:  Success
                Created:  January 8, 2024 4:20:47 AM PDT
                Message:

Task Name                                Node Name    Start Time                          End Time                            Status
---------------------------------------- ------------ ----------------------------------- ----------------------------------- ----------------
Validate DB System json files            oda1         January 8, 2024 4:20:47 AM PDT      January 8, 2024 4:20:48 AM PDT      Success
Deserialize resources                    oda1         January 8, 2024 4:20:48 AM PDT      January 8, 2024 4:20:50 AM PDT      Success
Persist DB Systems for restore operation oda1         January 8, 2024 4:20:50 AM PDT      January 8, 2024 4:20:52 AM PDT      Success
Create DB System ACFS mount points       oda1         January 8, 2024 4:20:52 AM PDT      January 8, 2024 4:20:54 AM PDT      Success
Patch libvirt xml for DB Systems         oda1         January 8, 2024 4:20:54 AM PDT      January 8, 2024 4:20:57 AM PDT      Success
Restore DB System Networks               oda1         January 8, 2024 4:20:57 AM PDT      January 8, 2024 4:21:06 AM PDT      Success
Add DB Systems to Clusterware            oda1         January 8, 2024 4:21:06 AM PDT      January 8, 2024 4:21:10 AM PDT      Success
Validate start dependencies              oda1         January 8, 2024 4:21:10 AM PDT      January 8, 2024 4:21:12 AM PDT      Success
Start DB Systems                         oda1         January 8, 2024 4:21:12 AM PDT      January 8, 2024 4:21:36 AM PDT      Success
Wait DB Systems VM bootstrap             oda1         January 8, 2024 4:21:36 AM PDT      January 8, 2024 4:23:45 AM PDT      Success
Export clones repository for DB          oda1         January 8, 2024 4:23:45 AM PDT      January 8, 2024 4:23:47 AM PDT      Success
Systems post restore
Export ASM client cluster config on BM   oda1         January 8, 2024 4:23:47 AM PDT      January 8, 2024 4:23:50 AM PDT      Success
Import ASM client cluster config to      oda1         January 8, 2024 4:23:50 AM PDT      January 8, 2024 4:25:19 AM PDT      Success
OLR (within DB Systems)
Import ASM client cluster config to      oda1         January 8, 2024 4:25:19 AM PDT      January 8, 2024 4:26:18 AM PDT      Success
OCR (within DB Systems)
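The restore steps above follow a fixed order. As a rough sketch, the sequence can be scripted as follows; the `odacli restore-node` flags are the ones shown above, while the wrapper function and the `ODACLI` variable are illustrative (parameterized only so the sketch can be dry-run):

```shell
# Sketch of the restore order described above: databases first, then
# Oracle KVM deployments, then DB systems. KVM must be restored before
# DB systems that use a shared CPU pool or custom vnetwork.
# ODACLI defaults to the odacli binary on the appliance.
ODACLI=${ODACLI:-odacli}

restore_all() {
  $ODACLI restore-node -d   || return 1   # databases (and Oracle Data Guard, if configured)
  $ODACLI restore-node -kvm || return 1   # KVM deployments, CPU pools, vnetworks
  $ODACLI restore-node -dbs || return 1   # DB systems
}
```

Each step submits a job; confirm it with odacli describe-job before moving to the next.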
After upgrading your deployment to Oracle Database Appliance release 19.24, patch your databases to release 19.24 as described in this chapter.
Upgrading DB Systems to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the CLI
Follow these steps to upgrade your Oracle Database Appliance DB system deployment using CLI commands.
Important:
Ensure that there is sufficient space on your appliance to download the patches.
Note:
You must import the latest Oracle Grid Infrastructure clone files applicable to the DB system to the repository. For example, if the restored DB system runs Oracle Grid Infrastructure release 19.20, 19.19, or 19.18, then the Oracle Grid Infrastructure clone file 19.24.0.0.240716 must be present in the repository. Similarly, if the restored DB system runs Oracle Grid Infrastructure release 21.8, then the Oracle Grid Infrastructure clone 21.8.0.0.221018 must be present in the repository.

Upgrading Oracle Database Appliance to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the BUI
Follow these steps to upgrade your Oracle Database Appliance deployment and existing Oracle Database homes, using the Browser User Interface (BUI).
- Download the Oracle Database Appliance Server Patch for the
ODACLI/DCS stack (patch 35938481) from My Oracle Support to a temporary location
on an external client. Refer to the release notes for details about the patch
numbers and software for the latest release.
For example, download the server patch for 19.24:
p35938481_1924000_Linux-x86-64.zip
- Unzip the software. It contains README.html and one or more zip files for the patch.
unzip p35938481_1924000_Linux-x86-64.zip
The zip file contains the following software file:
oda-sm-19.24.0.0.0-date-server.zip
- Copy all the software files from the external client to Oracle
Database Appliance. For High-Availability deployments, copy the software files
to only one node. The software files are copied to the other node during the
patching process. Use the scp or sftp protocol to copy the bundle.
Example using the scp command:
# scp software_file root@oda_host:/tmp
Example using the sftp command:
# sftp root@oda_host
Enter the root password, and copy the files:
put software_file
- Update the repository with the server software file:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/software_file
For example, for 19.24:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda-sm-19.24.0.0.0-date-server.zip
- Confirm that the repository update is
successful:
[root@oda1 opt]# odacli describe-job -i 6c5e8990-298d-4070-aeac-76f1e55e5fe5

Job details
----------------------------------------------------------------
                     ID:  6c5e8990-298d-4070-aeac-76f1e55e5fe5
            Description:  Repository Update
                 Status:  Success
                Created:  January 08, 2024 3:21:21 PM UTC
                Message:  /tmp/oda-sm-19.24.0.0.0-date-server.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Unzip bundle                             January 08, 2024 3:21:21 PM UTC     January 08, 2024 3:21:45 PM UTC     Success

The general form of the command is:
# /opt/oracle/dcs/bin/odacli describe-job -i job_ID
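Rather than re-running odacli describe-job by hand, the status check can be scripted by parsing the Status field of its output. A minimal sketch; `wait_for_job` and the injectable `DESCRIBE_CMD` variable are illustrative, not part of the product (on the appliance the describe command is simply `odacli describe-job -i`):

```shell
# Poll a job until it reaches a terminal state by parsing the Status
# field of the describe-job output. DESCRIBE_CMD is injectable so the
# sketch can be dry-run; the default assumes odacli is on the PATH.
wait_for_job() {
  local job_id=$1 status
  local describe=${DESCRIBE_CMD:-"odacli describe-job -i"}
  while true; do
    status=$($describe "$job_id" | awk -F': *' '/Status:/ {print $2; exit}')
    case $status in
      Success) echo "job $job_id: Success"; return 0 ;;
      Failure) echo "job $job_id: Failure"; return 1 ;;
      *)       sleep 10 ;;   # job still running; poll again
    esac
  done
}
```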
- Update DCS
admin:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.24.0.0.0
[root@oda1 opt]# odacli describe-job -i c00f38cd-299d-445b-b623-b24f664d48f9

Job details
----------------------------------------------------------------
                     ID:  c00f38cd-299d-445b-b623-b24f664d48f9
            Description:  DcsAdmin patching
                 Status:  Success
                Created:  January 08, 2024 3:22:19 PM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                January 08, 2024 3:22:19 PM UTC     January 08, 2024 3:22:19 PM UTC     Success
Dcs-admin upgrade                        January 08, 2024 3:22:19 PM UTC     January 08, 2024 3:22:25 PM UTC     Success
- Update the DCS
components:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.24.0.0.0
{
  "jobId" : "e9862ac9-ed92-4934-a71a-93cea4c20a68",
  "status" : "Success",
  "message" : " DCS-Agent shutdown is successful.
                Skipping MySQL upgrade on OL7
                Metadata schema update is done.
                dcsagent RPM upgrade is successful.
                dcscli RPM upgrade is successful.
                dcscontroller RPM upgrade is successful.
                Successfully reset the Keystore password.
                HAMI is not enabled
                Skipped removing old Libs.
                Successfully ran setupAgentAuth.sh ",
  "reports" : null,
  "createTimestamp" : "January 08, 2024 13:47:22 PM GMT",
  "description" : "Update-dcscomponents job completed and is not part of Agent job list",
  "updatedTime" : "January 08, 2024 13:49:44 PM GMT"
}
If the DCS components are updated, then the message "status" : "Success" is displayed on the command line. For failed updates, fix the error and then proceed with the update by re-running the odacli update-dcscomponents command. See the topic Resolving Errors When Updating DCS Components During Patching for more information about DCS component check errors.
Note:
For the DCS agent update to be complete, both the odacli update-dcscomponents and odacli update-dcsagent commands must be run. Ensure that both commands are run in the order specified in this procedure.
- Update the DCS agent:
[root@oda1 opt]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.24.0.0.0
[root@oda1 opt]# odacli describe-job -i a9cac320-cebe-4a78-b6e5-ce9e0595d5fa

Job details
----------------------------------------------------------------
                     ID:  a9cac320-cebe-4a78-b6e5-ce9e0595d5fa
            Description:  DcsAgent patching
                 Status:  Success
                Created:  January 08, 2024 3:35:01 PM UTC
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Dcs-agent upgrade to version             January 08, 2024 3:35:01 PM UTC     January 08, 2024 3:38:50 PM UTC     Success
19.24.0.0.0
Update System version                    January 08, 2024 3:38:50 PM UTC     January 08, 2024 3:38:50 PM UTC     Success
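The three DCS update steps above can be summarized in order. A sketch of the sequence; the `update_dcs_stack` wrapper and the `ODACLI` variable are illustrative, while the subcommands, order, and default binary path come from the steps above:

```shell
# Ordered DCS update sequence from the steps above. The order matters:
# update-dcsadmin, then update-dcscomponents, then update-dcsagent.
# ODACLI is parameterized only so the sketch can be dry-run.
ODACLI=${ODACLI:-/opt/oracle/dcs/bin/odacli}

update_dcs_stack() {
  local v=$1
  $ODACLI update-dcsadmin      -v "$v" || return 1
  $ODACLI update-dcscomponents -v "$v" || return 1
  $ODACLI update-dcsagent      -v "$v" || return 1
}
```

Confirm each job with odacli describe-job before running the next command.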
Reprovisioning the Appliance Using BUI
After updating the DCS admin, DCS components, and DCS agent, reprovision the appliance using the BUI as follows:
- Navigate to the BUI and log in as the oda-admin user:
https://Node0-host-ip-address:7093/mgmt/index.html
- In the BUI, click Data Preserving Re-provisioning.
- In the Re-provision tab, under Run Pre-checks, click Create Pre-Upgrade Report to create a preupgrade report.
- After the job completes successfully and the preupgrade report is generated, select the report in the drop-down list, and click View Pre-Upgrade Report.
- View the preupgrade report and fix any issues displayed in the report. Click Back to navigate to the Data Preserving Re-provisioning page.
- If the preupgrade report does not have any failures, then click Next to start the detach node process.
- Click Detach Node. Click Force Run if you are aware of the issues and still want to proceed with the operation. Click Yes to confirm.
- Click Activity to monitor the progress. When the job completes successfully, navigate to the Re-provision tab and click Next. The BUI displays a message to reimage the appliance. The detach node operation creates a server data archive file at /opt/oracle/oak/restore/out. Save a copy of the archive to a location outside of the appliance, to prepare for the reimage.
WARNING:
Make sure to save these files in a location outside the Oracle Database Appliance system. These files are needed to reprovision the system after you reimage the appliance. Without these files, the system cannot be reprovisioned and you will lose all data stored in the Oracle ASM disk groups.
- Manually reimage the nodes as described in the topic Upgrading Bare Metal System to Oracle Linux 8 and Oracle Database Appliance Release 19.24 Using the CLI.
- After reimaging the nodes, log into the BUI:
https://Node0-host-ip-address:7093/mgmt/index.html
The BUI prompts you to specify the admin password. Select the option to Enable Multi-User Access only if it was enabled prior to the detach node operation.
- Navigate to the Infrastructure Patching tab and apply the server and storage patches.
- Copy the server archive file produced during the detach node operation to the appliance and specify the absolute file path to the archive. In the Re-provision tab, specify the Server Archive Location and click Update Repository.
- Specify the GI Clone Location and click Update Repository.
- After the repository is updated successfully, click Restore Node. If your deployment is multi-user enabled, then specify New System Password, Oracle User Password, and Grid User Password.
- If your deployment had Oracle ASR configured when the detach-node operation was run, and you do not want to restore Oracle ASR configuration, then select Skip Restore of ASR configuration. If you want to restore Oracle ASR configuration, then specify the ASR Password. If you select Yes in the HTTPS Proxy Requires Authentication field, then specify the Proxy Password.
- Click Yes.
- After the restore node job completes successfully, you can restore databases.
Restoring Databases Using the BUI
- If database storage is not configured on Oracle ACFS, then you must configure database storage before restoring the database.
- In the Restore Database tab, select the Disk Group Name and specify the Size in GB.
- Click Configure to submit the job to configure the database home storage.
- Update the repository with the database clones. Specify the Database Clone Location and click Update Repository.
- After the repository is updated with all the required database clones, click Restore Databases to restore the databases. Confirm that you want to submit the job.
- If you have configured Oracle KVM, shared CPU pool, or custom vnetwork resources, then restore VM instances after restoring the databases.
Restoring VM Instances and DB Systems Using the BUI
- In the Restore VM Instances tab, click Restore VM Instances.
- To restore DB systems, navigate to the Restore DB Systems tab, and click Restore DB Systems.
- After the VM instances and DB systems are restored, you can upgrade DB systems.
Upgrading DB Systems Using the BUI
- In the Upgrade DB Systems tab, specify the DB System Clone Location and click Update Repository.
- After updating the repository, choose a DB system from the Select DB System drop-down list, and click Create Pre-Upgrade Report to create the preupgrade report for the DB system.
- After the preupgrade report is generated, select the report in the drop-down list, and click View Pre-Upgrade Report.
- View the preupgrade report and fix any issues displayed in the report. Click Back to navigate to the Data Preserving Re-provisioning page.
- If the preupgrade report does not have any failures, then continue with the Oracle Linux 8 upgrade. Choose a DB system from the Select DB System drop-down list and click Upgrade.
- Specify System Password and click Yes.
- Check the status of the job and verify that it completed successfully.
- Repeat this procedure to upgrade all DB systems in your deployment.
Patching Databases Using ODACLI Commands or the BUI
Use ODACLI commands or the Browser User Interface to patch databases to the latest release in your deployment.
Important:
You must run the odacli create-prepatchreport command before you patch the Oracle databases; otherwise, the odacli update-database command fails with an error message prompting you to run the patching pre-checks.
Patching Databases on Oracle Database Appliance using ODACLI Commands
Run the following command to patch a database using the CLI:
odacli update-database [-a] [-dp] [-f] [-i db_id] [-imp] [-l] [-n db_name] [-ni node] [-r] [-to db_home_id] [-j] [-h]
For more information about the options for the
update-database
command, see the chapter Oracle
Database Appliance Command-Line Interface.
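The pre-check-then-patch order required by the Important note can be sketched as follows. The `-n` flag for update-database comes from the syntax above; the create-prepatchreport flags shown here are assumptions for illustration, so verify them against the Oracle Database Appliance Command-Line Interface chapter for your release. The `patch_database` wrapper and `ODACLI` variable are illustrative:

```shell
# Sketch: run the patching pre-checks, then patch the database.
# The create-prepatchreport options are assumptions -- check the CLI
# reference. ODACLI is parameterized only so the sketch can be dry-run.
ODACLI=${ODACLI:-odacli}

patch_database() {
  local db_name=$1 version=$2
  $ODACLI create-prepatchreport -db -n "$db_name" -v "$version" || return 1
  $ODACLI update-database -n "$db_name"
}
```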
Patching Databases on Oracle Database Appliance using the BUI
Patching Existing Database Homes Using ODACLI or the BUI
Use ODACLI commands or the Browser User Interface (BUI) to patch database homes in your deployment to the latest release.
Patching Database Homes on Oracle Database Appliance using ODACLI Commands
Run the following command to patch a database home using the CLI:
odacli update-dbhome -i dbhome_id -v version [-f] [-imp] [-p] [-l] [-u node_number] [-j] [-h]
For more information about the options for the update-dbhome
command, see the chapter Oracle Database Appliance Command-Line
Interface.
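As a simple illustration of the syntax above; the DB home ID and target version are placeholders for values from your own deployment (odacli list-dbhomes shows the IDs), and the `patch_dbhome` wrapper and `ODACLI` variable are illustrative:

```shell
# Illustration of the update-dbhome syntax above. The DB home ID and
# version are placeholders. ODACLI is parameterized only so the
# sketch can be dry-run.
ODACLI=${ODACLI:-odacli}

patch_dbhome() {
  local dbhome_id=$1 version=$2
  $ODACLI update-dbhome -i "$dbhome_id" -v "$version"
}
```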
Patching Database Homes on Oracle Database Appliance using the BUI