4 Known Issues with Oracle Database Appliance in This Release
The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
  Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
  Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
  Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Error in patching the server with --local option
  An error is encountered when patching the server with the --local option.
- Permissions error when unpacking the server patch on virtualized platform
  An error is encountered when patching the server on virtualized platform.
- Error in patching database homes with --local option
  An error is encountered when patching database homes with the --local option on virtualized platforms.
- Error in Oracle Grid Infrastructure upgrade
  Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.
- Error when running ORAchk or updating the server or database home
  When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
- Errors when running ORAchk or the odacli create-prepatchreport command
  When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
- Error in patching database homes
  An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
- Error in server patching
  An error is encountered when patching the server.
- Server status not set to Normal when patching
  When patching Oracle Database Appliance, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
  When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Patching of M.2 drives not supported
  Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
- 11.2.0.4 databases fail to start after patching
  After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
- Patching errors on Oracle Database Appliance Virtualized Platform
  When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
- Patching Oracle Database home fails with errors
  When applying the patch for Oracle Database homes, an error is encountered.
- Error in patching Oracle Database Appliance
  When applying the server patch for Oracle Database Appliance, an error is encountered.
- Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance
  Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.
Error in patching the server with --local option
An error is encountered when patching the server with the --local option.
When you run the command oakcli update -patch 19.9.0.0.0 --server --local, the following error is encountered:
ERROR: 2020-11-20 21:52:32: Unable to run the command : /usr/bin/yum
--disablerepo=* --enablerepo=ODA_REPOS_LOC install
uptrack-updates-4.14.35-2025.400.9.el7uek.x86_64-20201001-0.noarch -y
ERROR: 2020-11-20 21:52:33: Failed to patch all the server components.
Hardware Models
All Oracle Database Appliance hardware models virtualized platform deployments
Workaround
Reapply the server patch.
This issue is tracked with Oracle bug 32183505.
Parent topic: Known Issues When Patching Oracle Database Appliance
Permissions error when unpacking the server patch on virtualized platform
An error is encountered when patching the server on virtualized platform.
When you run the command oakcli unpack -pack server_patch_zip, the following error is encountered:
mv: missing destination file operand after
/opt/oracle/oak/ahf/oracle-ahf-202100.x86_64.rpm
Try 'mv --help' for more information.
sh: line 1: /opt/oracle/oak/ahf/oracle-ahf-202300.x86_64.rpm: Permission
denied
Successfully unpacked the files to repository.
Hardware Models
All Oracle Database Appliance hardware models virtualized platform deployments
Workaround
Delete the file /opt/oracle/oak/ahf/oracle-ahf-202100.x86_64.rpm.
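As a minimal sketch of this cleanup, assuming the leftover RPM path shown in the error message above, the file can be removed and the removal verified before re-running the unpack:

```shell
# Remove the leftover AHF RPM left behind by the unpack step.
# The path comes from the error message above; -f makes this a
# no-op if the file is already gone.
rm -f /opt/oracle/oak/ahf/oracle-ahf-202100.x86_64.rpm

# Confirm the file is gone before re-running the unpack command.
test ! -e /opt/oracle/oak/ahf/oracle-ahf-202100.x86_64.rpm && echo "cleanup done"
```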
This issue is tracked with Oracle bug 32156614.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database homes with --local option
An error is encountered when patching database homes with the --local option on virtualized platforms.
When you run the command oakcli update -patch 19.9.0.0.0 --database --local on database homes, the following error is encountered:
File does not exist: /opt/oracle/oak/pkgrepos/System/0/conf/PatchImage.xml at
/opt/oracle/oak/lib/oakutilslib/PatchCommonUtils.pm line 420
ERROR: Unable to apply the patch
Hardware Models
All Oracle Database Appliance hardware models with High-Availability deployments
Workaround
Patch the database homes without the --local option. For example:
# oakcli update -patch 19.9.0.0.0 --database
This issue is tracked with Oracle bug 32182669.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in Oracle Grid Infrastructure upgrade
Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.
The following error is seen in the log file in the directory /opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/:
ERROR: The clusterware active state is UPGRADE_AV_UPDATED
INFO: ** Refer to the release notes for more information **
INFO: ** and suggested corrective action **
This is because when the root upgrade scripts run on the last node, the active version is not set to the correct state.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- As root user, run the following command on the second node:
  /u01/app/19.0.0.0/grid/rootupgrade.sh -f
- After the command completes, verify that the active version of the cluster is updated to UPGRADE FINAL:
  /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f
  The cluster upgrade state is [UPGRADE FINAL]
- Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.
This issue is tracked with Oracle bug 31546654.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when running ORAChk or updating the server or database home
When running Oracle ORAchk or the commands odacli create-prepatchreport, odacli update-server, or odacli update-dbhome, an error is encountered.
The following error message is displayed:
Table AUD$[FGA_LOG$] should use Automatic Segment Space Management
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use the following SQL*Plus command:
  select t.table_name,ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
- The output should be similar to the following:
  TABLE_NAME                     SEGMEN
  ------------------------------ ------
  FGA_LOG$                       AUTO
  AUD$                           AUTO
- If one or both of the AUD$ or FGA_LOG$ tables return MANUAL, use the DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:
  BEGIN
    DBMS_AUDIT_MGMT.set_audit_trail_location(
      audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,  -- this moves table AUD$
      audit_trail_location_value => 'SYSAUX');
  END;

  BEGIN
    DBMS_AUDIT_MGMT.set_audit_trail_location(
      audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,  -- this moves table FGA_LOG$
      audit_trail_location_value => 'SYSAUX');
  END;
This issue is tracked with Oracle bug 27856448.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
The following error messages may be displayed:
One or more log archive destination and alternate log archive destination settings are not as recommended
Software home check failed
Hardware Models
Oracle Database Appliance hardware models bare metal deployments
Workaround
Run the odacli update-dbhome, odacli create-prepatchreport, or odacli update-server commands with the -sko option. For example:
odacli update-dbhome -j -v 19.9.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database homes
An error is encountered when patching database homes on databases that have Standard Edition High Availability enabled.
When you run the command odacli update-dbhome -v release_number on database homes that have Standard Edition High Availability enabled, the following error is encountered:
WARNING::Failed to run the datapatch as db <db_name> is not in running state
Hardware Models
All Oracle Database Appliance hardware models with High-Availability deployments
Workaround
- Locate the running node of the target database instance:
  srvctl status database -database dbUniqueName
  Or, relocate the single-instance database instance to the required node:
  odacli modify-database -g node_number (-th node_name)
- On the running node, manually run the datapatch for non-CDB databases:
  dbhomeLocation/OPatch/datapatch
- For CDB databases, locate the PDB list using SQL*Plus, and then run datapatch for those PDBs:
  select name from v$containers where open_mode='READ WRITE';
  dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma
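The CDB step above can be sketched in shell. The PDB names below are hypothetical placeholders standing in for the result of the SQL*Plus query, and dbhomeLocation is the document's placeholder for the actual Oracle home path:

```shell
# Hypothetical PDB names, standing in for the output of:
#   select name from v$containers where open_mode='READ WRITE';
PDB_NAMES="PDB1
PDB2"

# Join the names with commas, the format expected by datapatch -pdbs.
PDB_LIST=$(echo "$PDB_NAMES" | paste -sd, -)
echo "$PDB_LIST"

# On the appliance this list would then be passed as (dbhomeLocation is a placeholder):
#   dbhomeLocation/OPatch/datapatch -pdbs "$PDB_LIST"
```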
This issue is tracked with Oracle bug 31654816.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
An error is encountered when patching the server.
When you run the command odacli update-server -v release_number, the following error is encountered:
DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing
target version for GI.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the grid_home/bin location. For example:
  $ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp
- Run either the update-registry -n gihome or the update-registry -n system command.
This issue is tracked with Oracle bug 31125258.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
  Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
  Verifying OCR Integrity ...WARNING
  PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
  Verifying Single Client Access Name (SCAN) ...WARNING
  PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
- Run the command again till the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.
- You can verify the status with the command:
  Grid_home/bin/crsctl query crs activeversion -f
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching of neither of the two known versions 0112 and 0121 of the M.2 disk is supported. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3. Start each database with the command:
srvctl start database -db db_unique_name
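If several 11.2.0.4 databases are affected, the start commands can be generated in a loop; the unique names below are hypothetical, and on a real system they would come from odacli list-databases:

```shell
# Hypothetical unique names of the affected 11.2.0.4 databases.
DB_UNIQUE_NAMES="testdb1 testdb2"

# Print the srvctl command for each database rather than running it,
# since srvctl is only available on the appliance itself.
for db in $DB_UNIQUE_NAMES; do
  echo "srvctl start database -db $db"
done
```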
This issue is tracked with Oracle bug 28815716.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching errors on Oracle Database Appliance Virtualized Platform
When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
ERROR: Unable to apply the GRID patch
ERROR: Failed to patch server (grid) component
This error can occur even if you stopped Oracle TFA Collector before patching. During server patching on the node, Oracle TFA Collector is updated and this can restart the TFA processes, thus causing an error. To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Run the command:
  /u01/app/18.0.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PREPATCH -status
  Verify that the command output is SUCCESS.
- If the command output was SUCCESS, then run the following commands on all the nodes:
  /u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -prepatch -rollback
  /u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -postpatch
- Restart patching.
This issue is tracked with Oracle bug 30886701.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching Oracle Database home fails with errors
When applying the patch for Oracle Database homes, an error is encountered.
Error Encountered When Patching Oracle Database Homes on Bare Metal Systems:
When patching Oracle Database homes on bare metal systems, the odacli update-dbhome command fails with an error similar to the following:
Please stop TFA before dbhome patching.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run tfactl stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
This issue is tracked with Oracle bug 30799713.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
Error Encountered When Patching Bare Metal Systems:
When patching the appliance on bare metal systems, the odacli update-server command fails with the following error:
command fails with the following error:
Please stop TFA before server patching.
To resolve this issue, follow the steps described in the Workaround.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1
Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log.
Check the last log file in the command output.
In the log file, search for entries similar to the following:
ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
On bare metal systems:
- Run tfactl stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
On Virtualized Platform:
- Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
This issue is tracked with Oracle bug 30260318.
Parent topic: Known Issues When Patching Oracle Database Appliance
Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance
Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.
When cleaning up and reprovisioning Oracle Database Appliance with release 19.9, the Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk RPMs may not be updated to release 19.9. The components are updated when you apply the patches for Oracle Database Appliance release 19.9.
Hardware Models
All Oracle Database Appliance deployments
Workaround
Update to the latest server patch for the release.
This issue is tracked with Oracle bugs 28933900 and 30187516.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error when creating or restoring 11.2.0.4 database
  An error is encountered when creating or restoring 11.2.0.4 databases.
- Error in TFACTL Status
  When running the tfactl status command, only the TFA status of the local node is displayed.
- TFA disabled after patching Oracle Database Appliance
  After patching Oracle Database Appliance, TFA status shows as disabled.
- Error in starting VMs after updating CPU Cores
  When starting VMs after updating CPU cores, an error is encountered.
- Compatibility issues in KVM network association
  When creating or modifying a network on Oracle Database Appliance KVM, the properties of the vnetwork are not validated, and hence an error is encountered.
- Validation error when creating database with CPU pool
  The odacli create-database command does not display a validation error when the local CPU pool is used with options DbType as SI and DbEdition as SE.
- Error in creating database on Virtualized Platform
  When creating a database on Virtualized Platform, an error is encountered.
- Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
  When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
- Error when upgrading 12.1 single-instance database
  When upgrading a 12.1 single-instance database, a job failure error is encountered.
- Failure in creating RECO disk group during provisioning
  When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
- Simultaneous creation of two Oracle ACFS Databases fails
  If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
- Database creation hangs when using a deleted database name for database creation
  Database creation hangs when you use a deleted database name to create another database.
- Error encountered after running cleanup.pl
  Errors are encountered in running odacli commands after running cleanup.pl.
- Accelerator volume for data is not created on flash storage
  The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
- Errors in clone database operation
  Clone database operation fails due to errors.
- Clone database operation fails
  For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Error when creating or restoring 11.2.0.4 database
An error is encountered when creating or restoring 11.2.0.4 databases.
When you run the command odacli create-database or odacli irestore-database for 11.2.0.4 databases, the command fails to run at the Configuring DB Console step. This error may also occur when creating 11.2.0.4 databases using the Browser User Interface.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the commands without enabling DB Console.
This issue is tracked with Oracle bug 31017360.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in TFACTL Status
When running the tfactl status command, only the TFA status of the local node is displayed.
The odacli reinstate-dataguard command fails with the following error:
Message: DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance high-availability hardware models
Workaround
Run tfactl syncnodes to generate the TFA certificates for both nodes.
This issue is tracked with Oracle bug 31759137.
Parent topic: Known Issues When Deploying Oracle Database Appliance
TFA disabled after patching Oracle Database Appliance
After patching Oracle Database Appliance, TFA status shows as disabled.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Run the odacli update-dbhome command with the -sko option:
odacli update-dbhome -j -v 19.9.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bug 32058933.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in starting VMs after updating CPU Cores
When starting VMs after updating CPU cores, an error is encountered.
When you run the command odacli update-cpucores, the VM fails to start, and the following error message is displayed:
'/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d4\x2dol7guest2.scope/emulator/cpuset.cpus': Permission denied'
Compare the output of the following two commands:
cat /sys/fs/cgroup/cpuset/cpuset.cpus
cat /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
Hardware Models
Oracle Database Appliance hardware models Virtualized Platform
Workaround
Update /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus with /sys/fs/cgroup/cpuset/cpuset.cpus as follows:
cat /sys/fs/cgroup/cpuset/cpuset.cpus > /sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus
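A guarded sketch of the copy above, using the cgroup paths from the error message; the checks make it a no-op on hosts that do not have a writable cgroup v1 cpuset hierarchy with a machine.slice:

```shell
SRC=/sys/fs/cgroup/cpuset/cpuset.cpus
DST=/sys/fs/cgroup/cpuset/machine.slice/cpuset.cpus

# Only attempt the copy when the source exists and the destination is
# writable (normally requires root on the KVM host).
if [ -f "$SRC" ] && [ -w "$DST" ]; then
  cat "$SRC" > "$DST"
  echo "machine.slice cpuset updated to: $(cat "$DST")"
else
  echo "cgroup v1 cpuset paths not present or not writable; nothing to do"
fi
```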
This issue is tracked with Oracle bug 31975721.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Compatibility issues in KVM network association
When creating or modifying a network on Oracle Database Appliance KVM, the properties of the vnetwork are not validated, and hence an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with KVM configuration
Workaround
Associate a compatible vnetwork with the high-availability properties of the VM.
This issue is tracked with Oracle bug 32065475.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Validation error when creating database with CPU pool
The odacli create-database command does not display a validation error when the local CPU pool is used with options DbType as SI and DbEdition as SE.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Specify the -sh option to the odacli create-database command when you create the SEHA database. For example:
odacli create-database -n test1 -u test1 -r acfs -y SI -de SE -sh
This issue is tracked with Oracle bug 32040722.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating database on Virtualized Platform
When creating a database on Virtualized Platform, an error is encountered.
INFO : Running on the local node: /bin/su oracle -c /opt/oracle/oak/onecmd/tmp/dbca-sandb1.sh
WARNING: Ignore any errors returned by '/bin/su oracle -c "/opt/oracle/oak/onecmd/tmp/dbca-sandb1.sh"'
cat /opt/oracle/oak/onecmd/tmp/dbca-sandb1.sh
SEVERE: [FATAL] [DBT-06103] The port (5,502) is already in use.
ACTION: Specify a free port.
Hardware Models
Oracle Database Appliance hardware models Virtualized Platform
Workaround
Remove the following line from the script /opt/oracle/oak/onecmd/tmp/dbca-sandb1.sh:
-emConfiguration DBEXPRESS \
Then manually run the shell script as the database user.
This issue is tracked with Oracle bug 32075086.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading database from 11.2.0.4 to 12.1 or 12.2
When upgrading databases from 11.2.0.4 to 12.1 or 12.2, an error is encountered.
The following error is encountered in the UpgradeResults.html file, when upgrading database from 11.2.0.4 to 12.1 or 12.2:
Database is using a newer time zone file version than the Oracle home
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
- Refer to the Database Upgrade Guide for manual steps for fixing the time zone.
- After manually completing the database upgrade, run the following command to update DCS metadata:
  /opt/oracle/dcs/bin/odacli update-registry -n db -f
This issue is tracked with Oracle bug 31125985.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when upgrading 12.1 single-instance database
When upgrading 12.1 single-instance database, a job failure error is encountered.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
- Before upgrading the 12.1 single-instance database, run the following PL/SQL command to change the local_listener to an empty string:
  ALTER SYSTEM SET LOCAL_LISTENER='';
- After upgrading the 12.1 single-instance database successfully, run the following PL/SQL command to change the local_listener to the desired value:
  ALTER SYSTEM SET LOCAL_LISTENER='-oracle-none-';
This issue is tracked with Oracle bugs 31202775 and 31214657.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
Hardware Models
All Oracle Database Appliance X8-2-HA with High Performance configuration
Workaround
- Power off storage expansion shelf.
- Reboot both nodes.
- Proceed with provisioning the default storage shelf (first JBOD).
- After the system is successfully provisioned with the default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode:
  # ps -aef | grep oakd
- Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
- Power on the storage expansion shelf (second JBOD), and wait for a few minutes for the operating system and other subsystems to recognize it.
- Run the following commands from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM:
  # odaadmcli show ismaster
  OAKD is in Master Mode
  # odaadmcli expand storage -ndisk 24 -enclosure 1
  Skipping precheck for enclosure '1'...
  Check the progress of expansion of storage by executing 'odaadmcli show disk'
  Waiting for expansion to finish ...
  #
- Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.
Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.
For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.
This issue is tracked with Oracle bug 30839054.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
DCS-10001:Internal error encountered: Fail to run command Failed to create
volume.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if volume exists in FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G data datdbname (if volume exists in DATA disk group)

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname
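How the placeholders in the blocks above resolve can be sketched as follows; the node number and grid home path are hypothetical values chosen for illustration, not taken from the document:

```shell
# Pick the ASM SID for this node: +ASM1 on the first node, +ASM2 on the second.
NODE_NUMBER=1                               # hypothetical; 1 or 2
export ORACLE_SID="+ASM${NODE_NUMBER}"
export ORACLE_HOME=/u01/app/19.0.0.0/grid   # hypothetical grid home path

# With these set, a voldelete call from the workaround would look like:
#   $ORACLE_HOME/bin/asmcmd --nocp voldelete -G Data datdbname
echo "$ORACLE_SID"
```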
This issue is tracked with Oracle bug 30750497.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation hangs when using a deleted database name for database creation
Database creation hangs when you use a deleted database name to create another database.
If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.
Hardware Models
All Oracle Database Appliance high-availability environments
Workaround
Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.
For example, the following command deletes the DBSNMP user for the database testdb:
/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP
This issue is tracked with Oracle bug 28916487.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors are encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
Hardware Models
Oracle Database Appliance high capacity environments with HDD disks
Workaround
Do not create the database when provisioning the appliance. This creates all required disk groups, including flash. After provisioning the appliance, create the database. The accelerator volume is then created.
This issue is tracked with Oracle bug 28836461.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (at least within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.
If the source database was created less than 60 minutes before the clone operation, force a checkpoint on the source database first:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from the Oracle binaries is set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the compatible parameter value:
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database:
SQL> SHUTDOWN IMMEDIATE
- Start the database:
SQL> STARTUP
- Verify the parameter for the new value:
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in creating an Oracle ACFS database after deletion
If you delete an Oracle ACFS database and then recreate it with the same name, the operation fails. - Error in switchover operation with Oracle Data Guard
When performing switchover operation with Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in irestore operation with Oracle Data Guard
When performing irestore operation with Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered. - Error when restoring a database on the second node with a CPU Pool
When restoring a database on the second node with a CPU pool, an error is encountered. - Error in running other operations when modifying database with CPU pool
When modifying a database with CPU pool, an error is encountered with other operations. - Error in creating a database with a CPU Pool
When creating a database with a CPU pool, an error is encountered. - Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered. - Error in recovering a TDE-enabled database
When recovering a TDE-enabled database on Oracle Database Appliance, an error is encountered. - Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered. - Error in considering memory value unit in BUI
For KVM on Browser User Interface (BUI), the VM memory size is validated against the max VM memory size but the unit is not taken into consideration. - Validation error when deleting a resource after stopping VM
When deleting the associated resource after stopping a VM, an error is encountered. - Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role. - Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered. - Error when rebooting the appliance
When rebooting Oracle Database Appliance, the user interactive screen is displayed. - Job history not erased after running cleanup.pl
After running cleanup.pl, job history is not erased. - Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode. - Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers. - Disk space issues due to Zookeeper logs size
The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues. - Error after running the cleanup script
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake
. - Error in attaching vdisk to guest VM
The current system firmware may be different from the available firmware after applying the latest patch. - Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Error in creating an Oracle ACFS database after deletion
If you delete an Oracle ACFS database and then recreate it with the same name, the operation fails.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Recreate the Oracle ACFS database with a different name.
This issue is tracked with Oracle bug 31833629.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in switchover operation with Oracle Data Guard
When performing switchover operation with Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The output of the odacli describe-dataguardstatus command is inconsistent with the DGMGRL> show configuration; output. The odacli switchover-dataguard command fails because the Role component in the odacli describe-dataguardstatus output is not correct.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run the odacli describe-dataguardstatus -i dgconfigId command a few times to check whether the Role is updated. Perform the switchover operation after the Role component in the output of the odacli describe-dataguardstatus command is updated.
This issue is tracked with Oracle bug 31584695.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in irestore operation with Oracle Data Guard
When performing irestore operation with Oracle Data Guard on Oracle Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
In the backup report, change the time zone of the "pitrTimeStamp" field to match the time zone of the standby machine.
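As an illustration of this workaround, the following sketch re-expresses a pitrTimeStamp value in the standby machine's time zone. The timestamp format and zone names here are assumptions for illustration, not taken from an actual backup report:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def convert_pitr_timestamp(ts: str, src_tz: str, standby_tz: str) -> str:
    """Re-express a backup-report timestamp in the standby machine's time zone.

    Assumes an 'MM/DD/YYYY HH:MM:SS' format; adjust to match the actual
    pitrTimeStamp format in your backup report.
    """
    naive = datetime.strptime(ts, "%m/%d/%Y %H:%M:%S")
    aware = naive.replace(tzinfo=ZoneInfo(src_tz))
    return aware.astimezone(ZoneInfo(standby_tz)).strftime("%m/%d/%Y %H:%M:%S")

# Example: primary report in UTC, standby machine in US Pacific time.
print(convert_pitr_timestamp("10/22/2020 18:45:22", "UTC", "America/Los_Angeles"))
# prints 10/22/2020 11:45:22
```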
This issue is tracked with Oracle bug 31542638.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli configure-dataguard command fails with the following error: DCS-10001:Internal error encountered: Unable to pass postcheckDgStatus. Primary database has taken a non-Archivelog type backup between irestore standby database and configure-dataguard.
Verify the status of the job with the odacli list-jobs command.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, remove the Oracle Data Guard
configuration:
DGMGRL > remove configuration;
- On the standby machine, delete the standby database.
- On the primary machine, disable the database backup
schedule:
odacli update-schedule -i ID -d
- Start the Oracle Data Guard configuration steps.
- Enable primary database backup schedule after Oracle Data Guard configuration is successful.
This issue is tracked with Oracle bug 31880191.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error: Unable to reinstate Dg. Reinstate job was executed within 24hrs after failover job.
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the primary machine, get the
standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database; STANDBY_BECAME_PRIMARY_SCN -------------------------- 3522449
- On the old primary database, flash back to this SCN using RMAN with the backup encryption password:
RMAN> set decryption identified by 'rman_backup_password' ; executing command: SET decryption RMAN> FLASHBACK DATABASE TO SCN 3522449 ; ... Finished flashback at 24-SEP-20 RMAN> exit
- On the new primary machine, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 31884506.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error: DCS-10001:Internal error encountered: Unable enqueue Id and update DgConfig.
Use DGMGRL to confirm that the standby database reports this error:
DGMGRL> show database xxxx
Database - xxxx
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: 4 days 22 hours 1 minute 23 seconds (computed 1 second ago)
Average Apply Rate: 0 Byte/s
Real Time Query: OFF
Instance(s):
xxxx1 (apply instance)
xxxx2
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
ORA-16856: transport lag could not be determined
Database Status:
WARNING
The dcs-agent.log file has the following error entry:
DGMGRL> Reinstating database "xxxx",
please wait...
Oracle Clusterware is restarting database "xxxx" ...
Connected to "xxxx"
Continuing to reinstate database "xxxx" ...
Error: ORA-16653: failed to reinstate database
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- On the new primary machine, get the
standby_became_primary_scn:
SQL> select standby_became_primary_scn from v$database; STANDBY_BECAME_PRIMARY_SCN -------------------------- 4370820
- On the new primary database, check for missing sequences after standby_became_primary_scn:
SQL> select name, sequence#, first_change#, next_change# from v$archived_log where first_change#>4370820 and name is NULL; ... NAME ------------------------------------------------------------------------------- SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# ---------- ------------- ------------ 53 4601014 4601154
- On the new primary machine, restore the missing sequences with RMAN:
$ rman target / RMAN> restore archivelog from logseq=1 until logseq=53;
- On the new standby machine, check if current_scn is increasing, and
check with
DGMGRL> SHOW CONFIGURATION;
to see if the apply lag is being resolved.
This issue is tracked with Oracle bug 32041012.
Parent topic: Known Issues When Managing Oracle Database Appliance
Failure in Reinstating Oracle Data Guard
When reinstating Oracle Data Guard on Oracle Database Appliance, an error is encountered.
The odacli reinstate-dataguard command fails with the following error: Message:
DCS-10001:Internal error encountered: Unable to reinstate Dg.
The dcs-agent.log file has the following error entry:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Follow these steps:
- Make sure the database you are reinstating is started in
MOUNT mode. To start the database in MOUNT mode, run this
command:
srvctl start database -d db-unique-name -o mount
- After the above command runs successfully, run the
odacli reinstate-dataguard
command.
This issue is tracked with Oracle bug 32047967.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when restoring a database on the second node with a CPU Pool
When restoring a database on the second node with a CPU pool, an error is encountered.
DCS-10001:Internal error encountered: Missing arguments : required sqlplus
connection information is not provided.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Restore the single-instance database with CPU pool on the first node (node 0) or restore the single-instance database on the second node (node 1) without CPU pool. Then modify the database to attach to the CPU pool.
This issue is tracked with Oracle bug 32044216.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in running other operations when modifying database with CPU pool
When modifying a database with CPU pool, an error is encountered with other operations.
# odacli create-backup -in dbName -bt Regular-L0
DCS-10089:Database dbName is in an invalid state `{Node Name:closed}'
Hardware Models
All Oracle Database Appliance hardware models with bare metal configuration
Workaround
Wait until the odacli modify-database job completes before you perform any other operation on the same database.
This issue is tracked with Oracle bug 32045674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in creating a database with a CPU Pool
When creating a database with a CPU pool, an error is encountered.
# /opt/oracle/dcs/bin/odacli create-database -n new3 -c -p pdb1 -cl OLTP -r ACFS -y SI -no-f -de EE -u Unew3 -cp local2 -g 1
Hardware Models
All Oracle Database Appliance hardware models for high-availability deployments
Workaround
Run the create single-instance database command on the same node where the local CPU pool exists, instead of using the "--targetnode"/"-g" option.
This issue is tracked with Oracle bug 32040969.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
If the spfile is lost, then the database recovery job fails in the "Database recovery validation" task with the following error: DCS-10001:Internal error encountered: Failed to run RMAN command. Please
refer log at location : scaoda****:
/u01/app/oracle/diag/rdbms/tdbasm1/tdbasm1/scaoda*****/rman/bkup/rman_restore/
2020-10-22/rman_restore_2020-10-22_18-43-14.0540.log
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/22/2020 18:45:22
ORA-19870: error while restoring backup piece c-3022438697-20201022-03
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Copy the 'DATA' and 'RECO' locations of the database by running the odacli list-dbstorages command and matching the corresponding DB unique name. Following is sample output of the odacli list-dbstorages command for a database with DB unique name mydbu
.# odacli list-dbstorages ID Type DBUnique Name DiskGroup Location Total Used Available Status ---------------------------------------- ------ -------------------- ---------- ------------------------------------------------------------ ---------- ---------- ---------- ---------- 3d45c6ac-e9a5-48e0-8412-1c8bec0b95d9 ACFS mydbu Configured DATA /u02/app/oracle/oradata/mydbu 99.99 GB 3.45 GB 96.54 GB REDO /u04/app/oracle/redo/mydb/ 13.99 GB 12.30 GB 1.69 GB RECO /u03/app/oracle/fast_recovery_area/ 803.99 GB 2.64 GB 801.35 GB 640bc6aa-fc97-43c2-a2b5-11534c37c6b7 ASM tdbasm1 Configured DATA +DATA/tdbasm1 2.37 TB 1.70 GB 2.37 TB REDO +RECO/tdbasm1 2.31 TB 12.09 GB 2.30 TB RECO +RECO/tdbasm1 2.31 TB 12.09 GB 2.30 TB
In the example, the DATA location is '/u02/app/oracle/oradata/mydbu' and RECO location is '/u03/app/oracle/fast_recovery_area/' for database mydbu. Also, the DATA location is '+DATA/tdbasm1' and RECO location is '+RECO/tdbasm1' for database tdbasm1.
- Create a file initdbInstanceName.ora under dbhome_location/dbs.
- Add the following entries to the initdbInstanceName.ora file:
db_name=dbName db_unique_name=dbUniqueName wallet_root=data_location tde_configuration='KEYSTORE_CONFIGURATION=FILE' instance_number= 1
Specify 'instance_number' only if it is an Oracle RAC database; otherwise, remove it. The data_location is the value copied in step 1.
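For illustration, a filled-in initdbInstanceName.ora for a hypothetical single-instance Oracle ACFS database might look like the following. All names and paths here are placeholders, not values from a real deployment:

```
# init file for hypothetical instance mydb1
db_name=mydb
db_unique_name=mydbu
wallet_root=/u02/app/oracle/oradata/mydbu
tde_configuration='KEYSTORE_CONFIGURATION=FILE'
# instance_number omitted: single-instance database, not Oracle RAC
```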
- Perform RMAN connection:
- Run su - oracle_user. - Run the . oraenv command. - Enter the Instance Name when prompted for Oracle SID.
- Enter the DB Home location when prompted for Oracle home.
- Run the rman target / command. # su - oracle $ . oraenv ORACLE_SID = [oracle] ? dbInstanceName ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.0.0.0/dbhome_1 The Oracle base has been set to /u01/app/oracle $ rman target / Recovery Manager: Release 19.0.0.0.0 - Production on Fri Oct 23 02:57:14 2020 Version 19.9.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. connected to target database (not started)
- Run the shutdown abort command at the RMAN prompt. RMAN> shutdown abort Oracle instance shut down
- Run the following set of commands in the RMAN prompt, based on the
backup destination type.
- For Disk backup
destination:
startup nomount; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP RECOVERY AREA='reco_location' db_unique_name='dbUniqueName'; shutdown abort;
For an Oracle ACFS database, the value of spfile_location is 'Data_location/dbs/spfiledbinstancename.ora'. For an Oracle ASM database, the value of spfile_location is 'DB home location/dbs/spfiledbinstancename.ora'. The values of reco_location and Data_location are the values copied in step 1.
- For NFS backup destination:
startup nomount; SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE disk TO 'spfilehandle_value/%F'; run { set DBID dbid; ALLOCATE CHANNEL C1 DEVICE TYPE DISK; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP; } shutdown abort;
The spfilehandle_value is the value of the "spfBackupHandle" attribute in the backup report. For example, if "spfBackupHandle" is "/tmp/nfs_backup_path/database/orabackups/test-c/database/3315481963/tdbasm2/db/c-3315481963-20201023-04", then the value of spfilehandle_value is "/tmp/nfs_backup_path/database/orabackups/test-c/database/3315481963/tdbasm2/db"; the trailing "c-3315481963-20201023-04" piece name is ignored. For an Oracle ACFS database, the value of spfile_location is 'DATA location/dbs/spfiledbinstancename.ora'. For an Oracle ASM database, the value of spfile_location is 'DB home location/dbs/spfiledbinstancename.ora'.
- ObjectStore backup
destination:
startup nomount; run { ALLOCATE CHANNEL DISK1 DEVICE TYPE DISK; ALLOCATE CHANNEL C1 DEVICE TYPE 'SBT_TAPE' parms 'SBT_LIBRARY=/opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/objectstore/opc_pfile/dbid/opc_d bUniqueName.ora)'; set DBID = dbid; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP RECOVERY AREA='<recolocation>' db_unique_name='dbuniquename'; } shutdown abort;
The commands mentioned in step 6 are present in the log file specified in the error message, and can be used as reference.
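The derivation of spfilehandle_value from the spfBackupHandle attribute amounts to taking the parent directory of the handle path, dropping the trailing autobackup piece name. A small sketch (the handle string below is an illustrative placeholder):

```python
import os.path

def spfile_handle_dir(spf_backup_handle: str) -> str:
    """Drop the trailing autobackup piece name (c-<dbid>-<date>-<seq>)
    from an spfBackupHandle to obtain the spfilehandle_value directory."""
    return os.path.dirname(spf_backup_handle)

handle = ("/tmp/nfs_backup_path/database/orabackups/test-c/"
          "database/3315481963/tdbasm2/db/c-3315481963-20201023-04")
print(spfile_handle_dir(handle))
# prints /tmp/nfs_backup_path/database/orabackups/test-c/database/3315481963/tdbasm2/db
```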
- Exit the RMAN
prompt:
RMAN> exit Recovery Manager complete.
- Remove the initdbInstanceName.ora file under dbhome_location/dbs, which was created earlier.
- If the storage type of the database is Oracle ASM, then run the
following steps. If the storage type is Oracle ACFS, then go to step
9.
- Start SQL*Plus
connection:
$ sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Fri Oct 23 03:21:14 2020 Version 19.9.0.0.0 Copyright (c) 1982, 2020, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.9.0.0.0 SQL>
- Run the following set of commands:
startup nomount; create pfile from spfile; create spfile='Data_Disk_Group' from pfile; shutdown abort; exit;
Data_Disk_Group is typically '+DATA'. If flash is enabled, then it is '+FLASH'.
- Get the spfile
location:
srvctl config database -db dbUniqueName | grep -i spfile
For example:
srvctl config database -db tdbasm1 | grep -i spfile Spfile: +DATA/TDBASM1/PARAMETERFILE/spfile.272.1054495987
- Create initdbInstanceName.ora file under dbhome_location/dbs.
- Set the permission of the initdbInstanceName.ora file to oracle_user:group_user using the chown command. - Add the 'spfile' value fetched in the previous step to the initdbInstanceName.ora file.
For example:
spfile ='+DATA/TDBASM1/PARAMETERFILE/spfile.272.1054495987'
- Remove the spfileinstancename.ora file present in DB_home_location/dbs/.
- Perform recovery of the database using the odacli recover-database command.
This issue is tracked with Oracle bug 32012176.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in recovering a TDE-enabled database
When recovering a TDE-enabled database on Oracle Database Appliance, an error is encountered.
If the spfile is lost, then the database recovery job fails in the "Database recovery validation" task with the following error: DCS-10001:Internal error encountered: Failed to run RMAN command. Please
refer log at location : scaoda****:
/u01/app/oracle/diag/rdbms/tdbasm1/tdbasm1/scaoda*****/rman/bkup/rman_restore/
2020-10-22/rman_restore_2020-10-22_18-43-14.0540.log
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/22/2020 18:45:22
ORA-19870: error while restoring backup piece c-3022438697-20201022-03
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Copy the 'DATA' and 'RECO' locations of the database by running the odacli list-dbstorages command and matching the corresponding DB unique name. Following is sample output of the odacli list-dbstorages command for a database with DB unique name mydbu
.# odacli list-dbstorages ID Type DBUnique Name DiskGroup Location Total Used Available Status ---------------------------------------- ------ -------------------- ---------- ------------------------------------------------------------ ---------- ---------- ---------- ---------- 3d45c6ac-e9a5-48e0-8412-1c8bec0b95d9 ACFS mydbu Configured DATA /u02/app/oracle/oradata/mydbu 99.99 GB 3.45 GB 96.54 GB REDO /u04/app/oracle/redo/mydb/ 13.99 GB 12.30 GB 1.69 GB RECO /u03/app/oracle/fast_recovery_area/ 803.99 GB 2.64 GB 801.35 GB 640bc6aa-fc97-43c2-a2b5-11534c37c6b7 ASM tdbasm1 Configured DATA +DATA/tdbasm1 2.37 TB 1.70 GB 2.37 TB REDO +RECO/tdbasm1 2.31 TB 12.09 GB 2.30 TB RECO +RECO/tdbasm1 2.31 TB 12.09 GB 2.30 TB
In the example, the DATA location is '/u02/app/oracle/oradata/mydbu' and RECO location is '/u03/app/oracle/fast_recovery_area/' for database mydbu. Also, the DATA location is '+DATA/tdbasm1' and RECO location is '+RECO/tdbasm1' for database tdbasm1.
- Create a file initdbInstanceName.ora under dbhome_location/dbs.
- Add the following entries to the initdbInstanceName.ora file:
db_name=dbName db_unique_name=dbUniqueName wallet_root=data_location tde_configuration='KEYSTORE_CONFIGURATION=FILE' instance_number= 1
Specify 'instance_number' only if it is an Oracle RAC database; otherwise, remove it. The data_location is the value copied in step 1.
- Perform RMAN connection:
- Run su - oracle_user. - Run the . oraenv command. - Enter the Instance Name when prompted for Oracle SID.
- Enter the DB Home location when prompted for Oracle home.
- Run the rman target / command. # su - oracle $ . oraenv ORACLE_SID = [oracle] ? dbInstanceName ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.0.0.0/dbhome_1 The Oracle base has been set to /u01/app/oracle $ rman target / Recovery Manager: Release 19.0.0.0.0 - Production on Fri Oct 23 02:57:14 2020 Version 19.9.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. connected to target database (not started)
- Run the shutdown abort command at the RMAN prompt. RMAN> shutdown abort Oracle instance shut down
- Run the following set of commands in the RMAN prompt, based on the
backup destination type.
- For Disk backup
destination:
startup nomount; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP RECOVERY AREA='reco_location' db_unique_name='dbUniqueName'; shutdown abort;
For an Oracle ACFS database, the value of spfile_location is 'Data_location/dbs/spfiledbinstancename.ora'. For an Oracle ASM database, the value of spfile_location is 'DB home location/dbs/spfiledbinstancename.ora'. The values of reco_location and Data_location are the values copied in step 1.
- For NFS backup destination:
startup nomount; SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE disk TO 'spfilehandle_value/%F'; run { set DBID dbid; ALLOCATE CHANNEL C1 DEVICE TYPE DISK; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP; } shutdown abort;
The spfilehandle_value is the value of the "spfBackupHandle" attribute in the backup report. For example, if "spfBackupHandle" is "/tmp/nfs_backup_path/database/orabackups/test-c/database/3315481963/tdbasm2/db/c-3315481963-20201023-04", then the value of spfilehandle_value is "/tmp/nfs_backup_path/database/orabackups/test-c/database/3315481963/tdbasm2/db"; the trailing "c-3315481963-20201023-04" piece name is ignored. For an Oracle ACFS database, the value of spfile_location is 'DATA location/dbs/spfiledbinstancename.ora'. For an Oracle ASM database, the value of spfile_location is 'DB home location/dbs/spfiledbinstancename.ora'.
- ObjectStore backup
destination:
startup nomount; run { ALLOCATE CHANNEL DISK1 DEVICE TYPE DISK; ALLOCATE CHANNEL C1 DEVICE TYPE 'SBT_TAPE' parms 'SBT_LIBRARY=/opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/objectstore/opc_pfile/dbid/opc_d bUniqueName.ora)'; set DBID = dbid; RESTORE SPFILE TO 'spfile_location' FROM AUTOBACKUP RECOVERY AREA='<recolocation>' db_unique_name='dbuniquename'; } shutdown abort;
The commands mentioned in step 6 are present in the log file specified in the error message, and can be used as reference.
- Exit the RMAN
prompt:
RMAN> exit Recovery Manager complete.
- Remove the initdbInstanceName.ora file under dbhome_location/dbs, which was created earlier.
- If the storage type of the database is Oracle ASM, then run the
following steps. If the storage type is Oracle ACFS, then go to step
9.
- Start SQL*Plus
connection:
$ sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Fri Oct 23 03:21:14 2020 Version 19.9.0.0.0 Copyright (c) 1982, 2020, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.9.0.0.0 SQL>
- Run the following set of commands:
startup nomount; create pfile from spfile; create spfile='Data_Disk_Group' from pfile; shutdown abort; exit;
Data_Disk_Group is typically '+DATA'. If flash is enabled, then it is '+FLASH'.
- Get the spfile
location:
srvctl config database -db dbUniqueName | grep -i spfile
For example:
srvctl config database -db tdbasm1 | grep -i spfile Spfile: +DATA/TDBASM1/PARAMETERFILE/spfile.272.1054495987
- Create initdbInstanceName.ora file under dbhome_location/dbs.
- Set the permission of the initdbInstanceName.ora file to oracle_user:group_user using the chown command. - Add the 'spfile' value fetched in the previous step to the initdbInstanceName.ora file.
For example:
spfile ='+DATA/TDBASM1/PARAMETERFILE/spfile.272.1054495987'
- Remove the spfileinstancename.ora file present in DB_home_location/dbs/.
- Perform recovery of the database using the odacli recover-database command.
This issue is tracked with Oracle bug 32012186.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in restoring a TDE-enabled database
When restoring a TDE-enabled database on Oracle Database Appliance, an error is encountered.
Failed to copy file from : source_location to: destination_location
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not change the database storage type when restoring a TDE-enabled database.
This issue is tracked with Oracle bug 31848183.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in considering memory value unit in BUI
For KVM on the Browser User Interface (BUI), the VM memory size is validated against the maximum VM memory size, but the unit is not taken into consideration. The following error is displayed:
The requested VM memory size exceeds maximum VM memory size.
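To illustrate the nature of this bug, the sketch below contrasts a unit-blind size check (the buggy behavior) with a unit-aware one. The function names and size strings are hypothetical, not taken from the ODA code base:

```python
# Hypothetical illustration of the BUI memory-size validation bug.
UNITS = {"M": 1, "G": 1024, "T": 1024 * 1024}  # megabytes per unit

def to_megabytes(size: str) -> int:
    """Convert a size string like '16G' to megabytes."""
    return int(size[:-1]) * UNITS[size[-1]]

def buggy_check(requested: str, max_size: str) -> bool:
    # Compares the numeric parts only, ignoring the units.
    return int(requested[:-1]) <= int(max_size[:-1])

def correct_check(requested: str, max_size: str) -> bool:
    # Normalizes both sides to megabytes before comparing.
    return to_megabytes(requested) <= to_megabytes(max_size)

# 8192M (8 GB) requested against a 16G maximum:
print(buggy_check("8192M", "16G"))    # prints False: 8192 > 16 numerically
print(correct_check("8192M", "16G"))  # prints True: 8 GB <= 16 GB
```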
Hardware Models
Oracle Database Appliance hardware models
Workaround
None.
This issue is tracked with Oracle bug 32064320.
Parent topic: Known Issues When Managing Oracle Database Appliance
Validation error when deleting a resource after stopping VM
When deleting the associated resource after stopping a VM, an error is encountered.
DCS-10045:Validation error encountered: The following resources are currently
associated to resource type 'resource name': vm list
Hardware Models
Oracle Database Appliance hardware models
Workaround
Start the VM again to sync the live configuration metadata to the existing configuration metadata, and then try to delete the resource, such as the CPU pool, vdisk, or vnetwork.
This issue is tracked with Oracle bug 32078682.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run odacli update-registry -n db --force (or -f) to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when recovering a single-instance database
When recovering a single-instance database, an error is encountered.
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered:
Missing arguments : required sqlplus connection information is not
provided
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Perform recovery of the single-instance database on the node where the database is running.
This issue is tracked with Oracle bug 31399400.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error when rebooting the appliance
When rebooting Oracle Database Appliance, the user interactive screen is displayed.
Hardware Models
Oracle Database Appliance X7-2-HA hardware models
Workaround
From the system console, select or highlight the kernel using the Up or Down arrow keys and then press Enter to continue with the reboot of the appliance.
This issue is tracked with Oracle bug 31196452.
Parent topic: Known Issues When Managing Oracle Database Appliance
Job history not erased after running cleanup.pl
After running cleanup.pl, job history is not erased. When you run the /opt/oracle/dcs/bin/odacli list-jobs command after cleanup.pl, the list is not empty.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
- Stop the DCS Agent by running the following commands on both nodes.
For Oracle Linux 6, run:
initctl stop initdcsagent
For Oracle Linux 7, run:
systemctl stop initdcsagent
- Run the cleanup script sequentially on both nodes.
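The stop command in step 1 differs by init system. The choice can be wrapped in a small helper that detects which init system is present; a sketch, assuming systemctl indicates Oracle Linux 7 (systemd) and its absence indicates Oracle Linux 6 (Upstart). The helper prints the command rather than executing it, so it can be inspected first; the dcsagent_cmd name is illustrative.

```shell
#!/bin/sh
# Sketch: build the DCS agent service command for the detected init system.
# Assumption: systemctl present means Oracle Linux 7 (systemd); otherwise
# Oracle Linux 6 (Upstart, initctl). Service name taken from the steps above.
dcsagent_cmd() {
  action="$1"   # "stop" or "start"
  if command -v systemctl >/dev/null 2>&1; then
    echo "systemctl $action initdcsagent"
  else
    echo "initctl $action initdcsagent"
  fi
}

dcsagent_cmd stop
```

Replace the echo with direct execution once the printed command looks right for your release.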
This issue is tracked with Oracle bug 30529709.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models, bare metal deployments
Workaround
Ignore the counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface, and check the report detail page instead.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption.
Hardware Models
All Oracle Database Appliance hardware models, bare metal deployments
Workaround
Use the -n all option only on migrated systems where all the databases were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name, for each component to be updated, excluding dbstorage.
This issue is tracked with Oracle bug 30274477.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance hardware models, bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In this case, if the Secure Eraser tool is run, then the odaeraser command fails.
Use the odaadmcli shutdown oak command to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
- Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
- After configuring the oda-admin password, the following error is displayed:
Failed to change the default user (oda-admin) account password. Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized
Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.
Hardware Models
All Oracle Database Appliance hardware models, bare metal deployments
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.
Parent topic: Known Issues When Managing Oracle Database Appliance
Disk space issues due to Zookeeper logs size
The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
If the log file size increases, rotate the zookeeper log files manually, as follows:
- Stop the DCS agent service for zookeeper on both nodes:
initctl stop initdcsagent
- Stop the zookeeper service on both nodes:
/opt/zookeeper/bin/zkServer.sh stop
- Clean the zookeeper logs after taking a backup, either by manually deleting the existing file or by following steps 4 to 10.
- Set ZOO_LOG_DIR as an environment variable pointing to a different log directory, before starting the zookeeper server:
export ZOO_LOG_DIR=/opt/zookeeper/log
- Switch to ROLLINGFILE, to set the capability to roll, and restart the zookeeper server for the changes to take effect:
export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
- Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files and the file sizes:
zookeeper.log.dir=/opt/zookeeper/log
zookeeper.log.file=zookeeper.out
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
- Start zookeeper on both nodes:
/opt/zookeeper/bin/zkServer.sh start
- Check the zookeeper status, and verify that zookeeper runs in leader, follower, or standalone mode:
/opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
- Start the DCS agent on both nodes:
initctl start initdcsagent
- Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service to do this.
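The backup-and-delete in step 3 can be scripted with standard tools; a minimal sketch below. The rotate_log helper and the .bak suffix are illustrative, not part of the product; steps 1 and 2 stop the writers first, so truncating the live file here is safe.

```shell
#!/bin/sh
# Sketch of step 3: copy the log aside, then truncate the original in place.
rotate_log() {
  log="$1"                 # e.g. /opt/zookeeper/log/zookeeper.out
  [ -f "$log" ] || return 0
  cp "$log" "$log.bak"     # keep a backup copy before cleaning
  : > "$log"               # truncate the live file to zero bytes
}
```

Truncating with `: >` (rather than deleting the file) preserves the file's ownership and permissions for when the service is restarted.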
This issue is tracked with Oracle bug 29033812.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error after running the cleanup script
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.
The error is caused when you run the following steps:
- Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.
- Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.
- After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.
# odacli list-jobs
DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
- Verify the zookeeper status on both nodes before starting dcsagent:
/opt/zookeeper/bin/zkServer.sh status
For a single-node environment, the status should be leader, follower, or standalone.
- Restart the dcsagent on Node0 after running the cleanup.pl script:
# systemctl stop initdcsagent
# systemctl start initdcsagent
This issue is tracked with Oracle bug 26996134.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in attaching vdisk to guest VM
When attaching multiple vdisks to a guest VM, the vdisks are not attached and the VM may not start.
When multiple vdisks from the oda_base driver_domain are attached to the guest VM, their entries are not written to the xenstore, the vdisks are not attached to the VM, and the VM may not start. The following errors appear in xen-hotplug.log in ODA_BASE:
xenstore-write: could not write path backend/vbd/6/51728/node
xenstore-write: could not write path backend/vbd/6/51728/hotplug-error
Hardware Models
Oracle Database Appliance Virtualized Platform
Workaround
- Add the following to the /etc/sysconfig/xencommons file in dom0:
XENSTORED_ARGS="--entry-nb=4096 --transaction=512"
- Reboot dom0.
This issue is tracked with Oracle bug 30886365.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand-compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
- After patching, update the /etc/opensm/opensm.conf file, in bare metal deployments and in Dom0 in virtualized platform environments, to remove the parameters:
cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
max_seq_redisc 0
rereg_on_guid_migr FALSE
aguid_inout_notice FALSE
sm_assign_guid_func uniq_count
reports 2
per_module_logging FALSE
consolidate_ipv4_mask 0xFFFFFFFF
- Reboot the node. The messages will not appear after rebooting.
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance