4 Known Issues with the Oracle Database Appliance
The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
  Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
  Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
  Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Resilvering of Oracle ADVM processes impacting performance after upgrading to 18.3
  Upgrading to Oracle Database Appliance 18.3 or later can impact performance on some Oracle Database Appliance systems due to Oracle ASM Dynamic Volume Manager (Oracle ADVM) processes consuming excessive CPU.
- Error when patching 11.2.0.4 database homes
  Patching 11.2.0.4 Oracle Database homes to Release 18.5 may fail if bug#2015 exists in the inventory.
- Failure in patching Oracle Grid Infrastructure when applying server patch
  When you apply the server patch to update an Oracle Database Appliance deployment from 18.3 to 18.5, the update may fail.
- Onboard public network interfaces do not come up after patching or imaging
  When you apply patches or re-image Oracle Database Appliance, the onboard public network interfaces may not come up due to a faulty status presented in the ILOM.
- Local server patching hangs on the second node
  Local server patching does not complete on the second node.
- Server patching fails to start Oracle Clusterware
  When applying the server patch, Oracle Clusterware does not start due to an issue with the Oracle Clusterware Time Synchronization Services Daemon (OCTSSD).
- Stack migration fails during patching
  After patching the OAK stack, an error is encountered when running odacli commands.
- DATA disk group fails to start after upgrading Oracle Grid Infrastructure to 18.5
  After upgrading Oracle Grid Infrastructure to 18.5, the DATA disk group fails to start.
- Error when patching an 11.2.0.4 database
  Patching an 11.2.0.4 database home that was created using the 11.2.0.4.190115 database clone fails with an error.
- Some files missing after patching the appliance
  Some files are missing after patching the appliance.
- Space issues with /u01 directory after patching
  After patching to 18.5, the directory /u01/app/18.0.0.0/grid/log/hostname/client fills quickly with gpnp logs.
- Errors when deleting database storage after migration to DCS stack
  After migrating to the DCS stack, some volumes in the database storage cannot be deleted.
- Repository in offline or unknown status after patching
  After rolling or local patching of both nodes to 18.5, repositories are in offline or unknown state on node 0 or 1.
- Oracle ASR version is 5.5.1 after re-imaging Oracle Database Appliance
  The Oracle Auto Service Request (ASR) version is not updated after re-imaging Oracle Database Appliance.
- 11.2.0.4 databases fail to start after patching
  After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
- FLASH disk group is not mounted when patching or provisioning the server
  The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.
- Error in patching database home locally using the Web Console
  Applying a database home patch locally through the Web Console creates a pre-patch submission request.
- Error when patching Oracle Database 11.2.0.4
  When patching Oracle Database 11.2.0.4, the log file may show some errors.
Parent topic: Known Issues with the Oracle Database Appliance
Resilvering of Oracle ADVM processes impacting performance after upgrading to 18.3
Upgrading to Oracle Database Appliance 18.3 or later can impact performance on some Oracle Database Appliance systems due to Oracle ASM Dynamic Volume Manager (Oracle ADVM) processes consuming excessive CPU.
When you upgrade to Oracle Database Appliance 18.3, the storage disks may be resilvered (synchronized again) for mirrored volumes on an Oracle ASM disk group with an Allocation Unit (AU) size greater than 1 MB. The larger the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) volume, the greater the impact.
Hardware Models
All Oracle Database Appliance hardware models, particularly X5-2 and X7-2 High Capacity models that use 8 TB HDDs.
Workaround
For information about resolving this issue, see Oracle Support Note 2525427.1 at:
https://support.oracle.com/rs?type=doc&id=2525427.1
This issue is tracked with Oracle bug 29520544.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching 11.2.0.4 database homes
Patching 11.2.0.4 Oracle Database homes to Release 18.5 may fail if bug#2015 exists in the inventory.
Hardware Models
All Oracle Database Appliance Bare Metal and Virtualized Platform Deployments.
Workaround
- Check if bug#2015 exists in the inventory:
  su - oracle
  export ORACLE_HOME=path_to_the_11.2.0.4_ORACLE_HOME
  $ORACLE_HOME/OPatch/opatch lspatches | grep -i "OCW" | cut -d ';' -f1
  The command returns a bug number, for example, 28729234.
- Navigate to the inventory:
  cd $ORACLE_HOME/inventory/oneoffs/<bug_number_from_above_command>/etc/config
- Check if inventory.xml contains a string such as 'bug number="2015"'. If no match is found, then no action is required, and you can proceed directly with patching.
  grep 'bug number="2015"' inventory.xml
  echo $?
  The command returns 0 if a match is found.
- Take a backup of inventory.xml.
  cp inventory.xml inventory.xml.$(date +%Y%m%d-%H%M)
- Delete the entry like <bug number="2015" ...> from inventory.xml.
  sed -i '/bug number="2015"/d' inventory.xml
This issue is tracked with Oracle bugs 29834563 and 29446248.
Parent topic: Known Issues When Patching Oracle Database Appliance
Failure in patching Oracle Grid Infrastructure when applying server patch
When you apply the server patch to update Oracle Database Appliance deployment from 18.3 to 18.5, the update may fail.
If you upgraded to Oracle Database Appliance release 18.3 from 12.1.2.12, or 12.2.2.1.x, or from 12.1.0.2 through 12.2.0.1 and then to release 18.3, and are now patching to release 18.5, then the patching may fail. The /u01/app/oraInventory/ContentsXML/inventory.xml file may have the 12.1 or 12.2 grid home listed as the default Oracle Clusterware home.
Hardware Models
All Oracle Database Appliance hardware models.
Workaround
- Detach the 12.1.0.2 home:
  /u01/app/18.0.0.0/grid/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="/u01/app/12.1.0.2/grid" ORACLE_HOME_NAME="OraGrid12102" -local
- Detach the 12.2.0.1 home:
  /u01/app/18.0.0.0/grid/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="/u01/app/12.2.0.1/grid" ORACLE_HOME_NAME="OraGrid12201" -local
- Note the output of the following command. You must substitute the node names in the next step.
  /u01/app/18.0.0.0/grid/bin/olsnodes
  If the output of the above command is node1 node2, then the cluster nodes are "node1,node2".
- Run the grid installer in silent mode to update the nodelist in the inventory:
  /u01/app/18.0.0.0/grid/oui/bin/runInstaller -silent -local -updateNodeList ORACLE_HOME_NAME="OraGrid18000" ORACLE_HOME="/u01/app/18.0.0.0/grid" CLUSTER_NODES="node1,node2" CRS="true"
After running the steps above, patch your deployment to Oracle Database Appliance release 18.5.
This issue is tracked with Oracle bug 29476961.
Parent topic: Known Issues When Patching Oracle Database Appliance
Onboard public network interfaces do not come up after patching or imaging
When you apply patches or re-image Oracle Database Appliance, the onboard public network interfaces may not come up due to a faulty status presented in the ILOM.
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M
Workaround
- Clear all faults on the ILOM.
- Reset or power cycle the host.
- Check that the ILOM has the most current version of firmware patches.
- Check that the X7-2 On Board Dual Port 10Gb/25Gb SFP28 Ethernet Controller firmware is up-to-date.
- Collect a new snapshot and monitor your appliance to confirm that the faults did not recur.
- Contact Oracle Support if this issue recurs.
This issue is tracked with Oracle bugs 29206350 and 28308268.
Parent topic: Known Issues When Patching Oracle Database Appliance
Local server patching hangs on the second node
Local server patching does not complete on the second node.
When patching Oracle Database Appliance using the -local option, the patching activity may hang and not complete on the second node due to a HAIP error.
Hardware Models
Oracle Database Appliance high-availability hardware models, bare metal deployments.
Workaround
Follow these steps to stop crs on both nodes and then start crs on each node, one at a time:

- Log in as root on either node and stop the cluster with the -all option.
  # /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
- Stop crs on both nodes.
  [Node 0] # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
  [Node 1] # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
- Start crs on each node, one by one.
  [Node 0] # /u01/app/18.0.0.0/grid/bin/crsctl start crs
  [Node 1] # /u01/app/18.0.0.0/grid/bin/crsctl start crs
- View job details and check if the patching activity completed.
  # odacli list-jobs
  6157593a-e3d1-444c-a99f-7211f05e075c  Server Patching  April 17, 2019 9:47:43 PM BRT  Running
- Verify whether the operating system or firmware was updated in the patching operation.
  # odacli describe-job -i 6157593a-e3d1-444c-a99f-7211f05e075c
  If the operating system or firmware was updated, then restart the nodes manually. If the operating system or firmware was not updated, then restart the dcs-agent using the initctl command.
This issue is tracked with Oracle bug 29663931.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server patching fails to start Oracle Clusterware
When applying the server patch, Oracle Clusterware does not start due to an issue with the Oracle Clusterware Time Synchronization Services Daemon (OCTSSD).
Hardware Models
Oracle Database Appliance high-availability hardware models, bare metal deployments.
Workaround
- Log in as root on either node and stop the cluster with the -force option.
  # export ORACLE_HOME=/u01/app/18.0.0.0/grid
  # $ORACLE_HOME/bin/crsctl stop crs -force
- Restart ctssd on the master node and the failed node.
  On the master node:
  # $ORACLE_HOME/bin/crsctl stop res ora.ctssd -init
  # $ORACLE_HOME/bin/crsctl start res ora.ctssd -init
- Update the server.
  # odacli update-server -v 18.5.0.0.0
This issue is tracked with Oracle bug 29549267.
Parent topic: Known Issues When Patching Oracle Database Appliance
Stack migration fails during patching
After patching the OAK stack, the following error is encountered when running odacli commands:
DCS-10001:Internal error encountered: java.lang.String cannot be cast to
com.oracle.dcs.agent.model.DbSystemNodeComponents.
Hardware Models
All Oracle Database Appliance Hardware models
Workaround
- Rename the /etc/ntp.conf file temporarily and retry patching the appliance.
  # mv /etc/ntp.conf /etc/ntp.conf.orig
- After patching is successful, restore the /etc/ntp.conf file.
  # mv /etc/ntp.conf.orig /etc/ntp.conf
This issue is tracked with Oracle bug 29216717.
Parent topic: Known Issues When Patching Oracle Database Appliance
DATA disk group fails to start after upgrading Oracle Grid Infrastructure to 18.5
After upgrading Oracle Grid Infrastructure to 18.5, the DATA disk group fails to start.
The following error is reported in the log file:
ORA-15038: disk '/dev/mapper/HDD_E1_S13_1931008292p1' mismatch on 'Sector
Size' with target disk group [512] [4096]
Hardware Models
Oracle Database Appliance hardware models X5-2 or later, with mixed storage disks installed
Workaround
To start Oracle Clusterware successfully, connect to Oracle ASM as the grid user, and run the following SQL commands:

SQL> show parameter _disk_sector_size_override;

NAME                          TYPE     VALUE
----------------------------- -------- -----
_disk_sector_size_override    boolean  TRUE

SQL> alter system set "_disk_sector_size_override" = FALSE scope=both;
alter system set "_disk_sector_size_override" = FALSE scope=both
*
ERROR at line 1:
ORA-32000: write to SPFILE requested but SPFILE is not modifiable

SQL> alter diskgroup DATA mount;

Diskgroup altered.

SQL> alter system set "_disk_sector_size_override" = FALSE scope=both;

System altered.
This issue is tracked with Oracle bug 29220984.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching an 11.2.0.4 database
Patching an 11.2.0.4 database home that was created using the 11.2.0.4.190115 database clone fails with the following error:
ERROR: 2019-03-06 00:04:14: Unable to apply db patch on the following Homes :
/u01/app/oracle/product/11.2.0.4/database_name
Hardware Models
Oracle Database Appliance Hardware models running Virtualized Platform
Workaround
None.
This issue is tracked with Oracle bug 29446260.
Parent topic: Known Issues When Patching Oracle Database Appliance
Some files missing after patching the appliance
Some files are missing after patching the appliance.
Hardware Models
Oracle Database Appliance X7-2 hardware models
Workaround
Before patching the appliance, take a backup of the /etc/sysconfig/network-scripts/ifcfg-em* files, and compare the contents after patching. If any ifcfg-em* files or parameters are missing, then recover them from the backup.
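For example, a minimal backup-and-compare sketch (the backup location /root/network-scripts.backup is illustrative). Before patching:

cp -a /etc/sysconfig/network-scripts /root/network-scripts.backup

After patching, list differences in the ifcfg-em* files:

diff -r /root/network-scripts.backup /etc/sysconfig/network-scripts | grep ifcfg-em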
This issue is tracked with Oracle bug 28308268.
Parent topic: Known Issues When Patching Oracle Database Appliance
Space issues with /u01 directory after patching
After patching to 18.5, the directory /u01/app/18.0.0.0/grid/log/hostname/client fills quickly with gpnp logs.
Hardware Models
All Oracle Database Appliance hardware models for virtualized platform deployments (X3-2 HA, X4-2 HA, X5-2 HA, X6-2 HA, X7-2 HA)
Workaround
Run the following commands on both ODA_BASE nodes:

On Node0:
rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
oakcli enable startrepo -node 0
oakcli stop oak
pkill odaBaseAgent
oakcli start oak

On Node1:
rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
oakcli enable startrepo -node 1
oakcli stop oak
pkill odaBaseAgent
oakcli start oak
This issue is tracked with Oracle bug 28865162.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when deleting database storage after migration to DCS stack
After migrating to the DCS stack, some volumes in the database storage cannot be deleted.
This issue occurs if, in the OAK stack, you create Oracle ACFS database storage with the oakcli create dbstorage command for a multitenant environment (CDB) without a database, and then migrate to the DCS stack. When you delete the database storage, only the DATA volume is deleted; the REDO and RECO volumes are not.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create a database on Oracle ACFS database storage with the same name as the database for which you want to delete the storage volumes, and then delete the database. This cleans up all the volumes and file systems.
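For example, a hedged sketch of the workaround using the DCS command-line interface (testdb is a placeholder name; additional odacli create-database options may be required for your configuration, and the ID for delete-database comes from the list-databases output):

# odacli create-database -n testdb -r ACFS
# odacli list-databases
# odacli delete-database -i database_id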
This issue is tracked with Oracle bug 28987135.
Parent topic: Known Issues When Patching Oracle Database Appliance
Repository in offline or unknown status after patching
After rolling or local patching of both nodes to 18.5, repositories are in offline or unknown state on node 0 or 1.
The oakcli start repo <reponame> command fails with the following errors:
OAKERR8038 The filesystem could not be exported as a crs resource
OAKERR:5015 Start repo operation has been disabled by flag
Models
Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1.
Workaround
Log in to oda_base on either node and run the following two commands:
oakcli enable startrepo -node 0
oakcli enable startrepo -node 1
The commands start the repositories and enable them to be available online.
This issue is tracked with Oracle bug 27539157.
Parent topic: Known Issues When Patching Oracle Database Appliance
Oracle ASR version is 5.5.1 after re-imaging Oracle Database Appliance
The Oracle Auto Service Request (ASR) version is not updated after re-imaging Oracle Database Appliance.
When re-imaging Oracle Database Appliance to Release 18.5, the Oracle Auto Service Request (ASR) RPM is not updated to 18.5. Oracle ASR is updated when you apply the patches for Oracle Database Appliance Release 18.5.
Hardware Models
All Oracle Database Appliance deployments that have Oracle Auto Service Request (ASR).
Workaround
Update to the latest server patch for the release.
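For example, to apply the 18.5 server patch (the same command shown in the patching steps earlier in these release notes):

# odacli update-server -v 18.5.0.0.0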
This issue is tracked with Oracle bug 28933900.
Parent topic: Known Issues When Patching Oracle Database Appliance
11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
Hardware Models
All Oracle Database Appliance Hardware models
Workaround
Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.
srvctl start database -db db_unique_name
This issue is tracked with Oracle bug 28815716.
Parent topic: Known Issues When Patching Oracle Database Appliance
FLASH disk group is not mounted when patching or provisioning the server
The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.
# oakcli update -patch 12.2.1.2 --server
****************************************************************************
***** For all X5-2 customers with 8TB disks, please make sure to *****
***** run storage patch ASAP to update the disk firmware to "PAG1". *****
****************************************************************************
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: y
INFO: User has confirmed for the reboot
INFO: Patch bundle must be unpacked on the second Node also before applying the patch
Did you unpack the patch bundle on the second Node? : [Y/N]? : y
Please enter the 'root' password :
Please re-enter the 'root' password:
INFO: Setting up the SSH
..........Completed .....
... ...
INFO: 2017-12-26 00:31:22: -----------------Patching ILOM & BIOS-----------------
INFO: 2017-12-26 00:31:22: ILOM is already running with version 3.2.9.23r116695
INFO: 2017-12-26 00:31:22: BIOS is already running with version 30110000
INFO: 2017-12-26 00:31:22: ILOM and BIOS will not be updated
INFO: 2017-12-26 00:31:22: Getting the SP Interconnect state...
INFO: 2017-12-26 00:31:44: Clusterware is running on local node
INFO: 2017-12-26 00:31:44: Attempting to stop clusterware and its resources locally
Killed
# Connection to server.example.com closed.
The Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services, and Event Manager are online. However, when you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database, you receive an error: flash space is 0.
Hardware Models
Oracle Database Appliance X5-2, X6-2-HA, and X7-2 HA SSD systems.
Workaround
Manually mount FLASH disk group before creating an Oracle ACFS database.
Perform the following steps as the GRID owner:
- Set the environment variables as the grid OS user on node0:
  export ORACLE_SID=+ASM1
  export ORACLE_HOME=/u01/app/12.2.0.1/grid
- Log on to the Oracle ASM instance as sysasm:
  $ORACLE_HOME/bin/sqlplus / as sysasm
- Execute the following SQL command:
  SQL> ALTER DISKGROUP FLASH MOUNT;
This issue is tracked with Oracle bug 27322213.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching database home locally using the Web Console
Applying a database home patch locally through the Web Console creates a pre-patch submission request.
Models
All Oracle Database Appliance Hardware Models
Workaround
Use the odacli update-dbhome --local command to patch database homes locally.
This issue is tracked with Oracle bug 28909972.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching Oracle Database 11.2.0.4
When patching Oracle Database 11.2.0.4, the log file may show some errors.
When patching Oracle Database 11.2.0.4 homes, the following error may be logged in alert.log:
ORA-00600: internal error code, arguments: [kgfmGetCtx0], [kgfm.c],
[2840], [ctx], [], [], [], [], [], [], [], []
Once the patching completes, the error will no longer be raised.
Hardware Models
Oracle Database Appliance X7-2-HA Virtualized Platform, X6-2-HA Bare Metal and Virtualized Platform, X5-2, X4-2, X3-2, and V1.
Workaround
There is no workaround for this issue.
This issue is tracked with Oracle bug 28032876.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Only one network interface displayed after rebooting node
  After rebooting the node, only one network interface is displayed.
- Snapshot databases can only be created on the primary database
  For the oakcli stack, snapshot databases can be created from the primary database, and not from the standby database.
- Creation of CDB for 12.1.0.2 databases may fail
  Creation of a multitenant container database (CDB) for 12.1.0.2 databases on Virtualized Platform may fail.
- DCS-10045:Validation error encountered: Error retrieving the cpucores
  When deploying the appliance, the DCS-10045 error appears. There is an error retrieving the CPU cores of the second node.
- Database creation hangs when using a deleted database name for database creation
  Database creation hangs when you use the name of a deleted database to create a new database.
- Error when updating 12.1.0.2 database homes
  When updating Oracle Database homes from 12.1.0.2 to 18.3 using the command odacli update-dbhome -i dbhomeId -v 18.3.0.0.0, an error may be seen.
- Error encountered after running cleanup.pl
  Errors are encountered in running odacli commands after running cleanup.pl.
- Accelerator volume for data is not created on flash storage
  The accelerator volume for data is not created on flash storage, for databases created during provisioning of the appliance.
- Database connection fails after database upgrade
  After upgrading the database from 11.2 to 12.1.0.2, database connection fails due to the job_queue_processes value.
- Failure in creating 18.3 database with DSS database shape odb1s
  When creating 18.3 databases with the DSS database shape odb1s, the creation fails with an error message.
- Errors in clone database operation
  The clone database operation fails due to errors.
- Database creation fails when multiple SCAN listeners exist
  Creation of an 11.2 database fails when multiple SCAN listeners exist.
- Errors after restarting CRS
  If the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines, errors may occur.
- Unable to create an Oracle ASM Database for Release 12.1
  Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.
- Database creation fails for odb-01s DSS databases
  When attempting to create a DSS database with shape odb-01s, the job may fail with an error.
Parent topic: Known Issues with the Oracle Database Appliance
Only one network interface displayed after rebooting node
After rebooting the node, only one network interface is displayed.
The netstat command returns only one of the two interfaces:

# netstat -nr | grep 169
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0

ora.cluster_interconnect.haip is ONLINE on one node before rebooting (or powering on) the other node:

# /u01/app/18.0.0.0/grid/bin/crsctl stat res -t -init|grep -A1 ora.cluster_interconnect.haip
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       <hostname>               STABLE
Hardware Models
Oracle Database Appliance X4-2 and X7-2 bare metal deployments. X5-2 and X6-2 bare metal deployments with InfiniBand interconnect are not affected.
Workaround
Follow these steps to stop crs on both nodes and then start crs on each node, one at a time:

- Log in as root on either node and stop the cluster with the -all option.
  # /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
- Stop crs on both nodes.
  [Node 0] # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
  [Node 1] # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
- Start crs on each node, one by one.
  [Node 0] # /u01/app/18.0.0.0/grid/bin/crsctl start crs
  [Node 1] # /u01/app/18.0.0.0/grid/bin/crsctl start crs
This issue is tracked with Oracle bug 29613692.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Snapshot databases can only be created on the primary database
For the oakcli stack, snapshot databases can be created from the primary database, and not from the standby database.

If the database name (db_name) and database unique name (db_unique_name) are different when creating a snapshot database, then the following error is encountered:
WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location
Hardware Models
All Oracle Database Appliance hardware models for Virtualized Platform
Workaround
None. For the oakcli stack, create the snapshot database from the primary database, and not from the standby database.
This issue is tracked with Oracle bug 28649665.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Creation of CDB for 12.1.0.2 databases may fail
Creation of multitenant container database (CDB) for 12.1.0.2 databases on Virtualized Platform may fail.
If the database name (db_name) and database unique name (db_unique_name) are different when creating a snapshot database, then the following error is encountered:

WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location
Hardware Models
All Oracle Database Appliance hardware models for Virtualized Platform
Workaround
None.
This issue is tracked with Oracle bug 29231958.
Parent topic: Known Issues When Deploying Oracle Database Appliance
DCS-10045:Validation error encountered: Error retrieving the cpucores
When deploying the appliance, DCS-10045 error appears. There is an error retrieving the CPU cores of the second node.
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
- Remove the following directory on Node0:
  /opt/oracle/dcs/repo/node_0
- Remove the following directory on Node1:
  /opt/oracle/dcs/repo/node_1
- Restart the dcs-agent on both nodes.
  cd /opt/oracle/dcs/bin
  initctl stop initdcsagent
  initctl start initdcsagent
This issue is tracked with Oracle bug 27527676.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation hangs when using a deleted database name for database creation
Database creation hangs when you use the name of a deleted database to create a new database.
If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.
Hardware Models
All Oracle Database Appliance high-availability environments
Workaround
Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.
For example, the following command deletes the DBSNMP user for the database testdb:
/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP
This issue is tracked with Oracle bug 28916487.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error when updating 12.1.0.2 database homes
When updating Oracle Database homes from 12.1.0.2 to 18.3 using the command odacli update-dbhome -i dbhomeId -v 18.3.0.0.0, the following error may be seen:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Apply the patch for bug 24385625, and then run odacli update-dbhome -i dbhomeId -v 18.3.0.0.0 again to fix the issue.
This issue is tracked with Oracle bug 28975529.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
Hardware Models
Oracle Database Appliance high capacity environments with HDD disks
Workaround
Do not create the database when provisioning the appliance. This creates all required disk groups, including flash. After provisioning the appliance, create the database. The accelerator volume is then created.
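For example, after provisioning completes, create the database separately (dbname is a placeholder; additional odacli create-database options may apply to your environment):

# odacli create-database -n dbname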
This issue is tracked with Oracle bug 28836461.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database connection fails after database upgrade
After upgrading the database from 11.2 to 12.1.0.2, database connection fails due to the job_queue_processes value.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Follow these steps:
- Before upgrading the database, check the job_queue_processes parameter, for example, x. If the value of job_queue_processes is less than 4, then set the value to 4.
- Upgrade the database to 12.1.0.2.
- After upgrading the database, set the value of job_queue_processes back to the earlier value, for example, x.
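For example, a minimal SQL sketch of checking, lowering, and later restoring the parameter, assuming the original value was 100 (a placeholder):

SQL> show parameter job_queue_processes;
SQL> alter system set job_queue_processes=4 scope=both;

After the upgrade completes, restore the original value:

SQL> alter system set job_queue_processes=100 scope=both;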
This issue is tracked with Oracle bug 28987900.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating 18.3 database with DSS database shape odb1s
When creating 18.3 databases with the DSS database shape odb1s, the creation fails with the following error message:
ORA-04031: unable to allocate 6029352 bytes of shared memory ("shared
pool","unknown object","sga heap(1,0)","ksipc pct")
Hardware Models
All Oracle Database Appliance Hardware Models
Workaround
None.
This issue is tracked with Oracle bug 28444642.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to the following errors.
If the db_name and db_unique_name are not the same for the source database, or if they are in mixed case (a mix of uppercase and lowercase letters), or if the source database is single-instance or Oracle RAC One Node running on the remote node, then the clone database operation fails, because the paths are not created correctly in the control file.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from a source database that has the same db_name and db_unique_name, in lowercase letters, with the source database instance running on the same node from which the clone database creation is triggered.
This issue is tracked with Oracle bugs 29002231, 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, and 28986950.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation fails when multiple SCAN listeners exist
Creation of 11.2 database fails when multiple SCAN listeners exist.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Apply patch 22258643 to fix the issue.
This issue is tracked with Oracle bug 29056579.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors after restarting CRS
If the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines, then errors may occur.
Repository status is unknown and High Availability Virtual IP is offline if the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines.
Hardware Models
Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1
Workaround
Follow these steps:
- Start the High Availability Virtual IP on node1.
  # /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0
- Stop the oakVmAgent.py process on dom0.
- Run the lazy unmount option on the dom0 repository mounts:
  umount -l mount_points
This issue is tracked with Oracle bug 20461930.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Unable to create an Oracle ASM Database for Release 12.1
Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.
Unable to create an Oracle ASM database lower than the 12.1.0.2.170814 PSU (Oracle Database Appliance 12.1.2.12).
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.
Workaround
There is no workaround. If you have Oracle Database 11.2 or 12.1 that is using Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance 12.1.2.12.0 and Database Home 12.1.0.2.170814.
The upgrade path for Oracle Database 11.2 or 12.1 Oracle ASM is as follows:
- If you are on Oracle Database Appliance version 12.1.2.6.0 or later, then upgrade to 12.1.2.12 or higher before upgrading your database.
- If you are on Oracle Database Appliance version 12.1.2.5 or earlier, then upgrade to 12.1.2.6.0, and then upgrade again to 12.1.2.12 or higher before upgrading your database.
This issue is tracked with Oracle bugs 21626377, 27682997, and 21780146. The issues are fixed in Oracle Database 12.1.0.2.170814.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation fails for odb-01s DSS databases
When attempting to create a DSS database with shape odb-01s, the job may fail with the following error:
CRS-2674: Start of 'ora.test.db' on 'rwsoda609c1n1' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/rwsoda609c1n2/crs/trace/crsd_oraagent_oracle.trc".
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1
Workaround
There is no workaround. Select an alternate shape to create the database.
This issue is tracked with Oracle bug 27768012.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- ODA_BASE is in read-only mode or cannot start
  The /OVS directory is full and ODA_BASE is in read-only mode.
- Restriction in moving database home for database shape greater than odb8
  When creating databases, there is a policy restriction for creating databases with database shapes odb8 or higher for Oracle Database Standard Edition.
- The odaeraser tool does not work if oakd is running in non-cluster mode
  After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
- Unable to use the Web Console on Microsoft web browsers
  Oracle Appliance Manager Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Disk space issues due to Zookeeper logs size
  The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.
- Error after running the cleanup script
  After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.
- Incorrect results returned for the describe-component command in certain cases
  The describe-component command may return incorrect results in some cases.
- Old configuration details persisting in custom environment
  The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.
- Incorrect SGA and PGA values displayed
  For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with the odb36 database shape, the PGA and SGA values are displayed incorrectly.
- OAKERR:7007 Error encountered while starting VM
  When starting a virtual machine (VM), an error message appears that the domain does not exist.
- Error in node number information when running network CLI commands
  Network information for node0 is always displayed for some odacli commands, when the -u option is not specified.
- Unrecognized Token Messages Appear in /var/log/messages
  After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Parent topic: Known Issues with the Oracle Database Appliance
ODA_BASE is in read-only mode or cannot start
The /OVS directory is full and ODA_BASE is in read-only mode.

The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom 0) to become 100% used. When Dom 0 is full, ODA_BASE is in read-only mode or cannot start.
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.
Oracle Database Appliance X7-2-HA Virtualized Platform.
Workaround
Perform the following to correct or prevent this issue:
- Periodically check the file usage on Dom 0 and clean up the vmcore file, as needed.
- Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart'. This is especially important when ODA_BASE is using more than 200 GB of memory.
This issue is tracked with Oracle bug 26121450.
Parent topic: Known Issues When Managing Oracle Database Appliance
Restriction in moving database home for database shape greater than odb8
When creating databases, there is a policy restriction for creating databases with database shapes odb8 or higher for Oracle Database Standard Edition.
To maintain consistency with this policy restriction, do not migrate any database to an Oracle Database Standard Edition database home, where the database shape is greater than odb8. The database migration may not fail, but it may not adhere to policy rules.
Hardware Models
All Oracle Database Appliance Hardware Models bare metal deployments
Workaround
None.
This issue is tracked with Oracle bug 29003323.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In this case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unable to use the Web Console on Microsoft web browsers
Oracle Appliance Manager Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, X6-2L
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 27798498, 27028446, and 27799452.
Parent topic: Known Issues When Managing Oracle Database Appliance
Disk space issues due to Zookeeper logs size
The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
If the log file size increases, rotate the zookeeper log files manually, as follows:
1. Stop the DCS-agent service for zookeeper on both nodes.
   initctl stop initdcsagent
2. Stop the zookeeper service on both nodes.
   /opt/zookeeper/bin/zkServer.sh stop
3. Clean the zookeeper logs after taking a backup, either by manually deleting the existing files or by following steps 4 to 10.
4. Set ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.
   export ZOO_LOG_DIR=/opt/zookeeper/log
5. Switch to ROLLINGFILE, to set the capability to roll. Restart the zookeeper server for the changes to take effect.
   export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files and the file sizes.
   zookeeper.log.dir=/opt/zookeeper/log
   zookeeper.log.file=zookeeper.out
   log4j.appender.ROLLINGFILE.MaxFileSize=10MB
   log4j.appender.ROLLINGFILE.MaxBackupIndex=10
7. Start zookeeper on both nodes.
   /opt/zookeeper/bin/zkServer.sh start
8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.
   /opt/zookeeper/bin/zkServer.sh status
   ZooKeeper JMX enabled by default
   Using config: /opt/zookeeper/bin/../conf/zoo.cfg
   Mode: follower
9. Start the dcs agent on both nodes.
   initctl start initdcsagent
10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service to do this.
This issue is tracked with Oracle bug 29033812.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error after running the cleanup script
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error occurs when you run the following steps:
1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.
2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.
3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.
   # odacli list-jobs
   DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
- Verify the zookeeper status on both nodes before starting dcsagent:
  /opt/zookeeper/bin/zkServer.sh status
  For a single-node environment, the status should be: leader, follower, or standalone.
- Restart the dcsagent on Node0 after running the cleanup.pl script.
  # initctl stop initdcsagent
  # initctl start initdcsagent
Parent topic: Known Issues When Managing Oracle Database Appliance
Incorrect results returned for the describe-component command in certain cases
The describe-component command may return incorrect results in some cases.

For the following disk, the describe-component command shows the available version as QDV1RE14, which is lower than the actual version QDV1RF30:
Disk type: NVMe
Manufacturer : Intel
Model: 0x0a54
Product name: 7335940:ICDPC2DD2ORA6.4T
Version: QDV1RF30
The following disk is not visible when you run the describe-component command. This does not impact the system components, except the display.
Disk type: NVMe
Manufacturer : Intel
Model: 0x0a54
Product name: 7361456_ICRPC2DD2ORA6.4T
Version: VDV1RY03
Hardware Models
All Oracle Database Appliance hardware models.
Workaround
Use the fwupdate list all command to check the correct versions.
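For example, run the following command and compare the reported firmware versions with the describe-component output:

# fwupdate list all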
This issue is tracked with Oracle bug 29680034.
Parent topic: Known Issues When Managing Oracle Database Appliance
Old configuration details persisting in custom environment
The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.

On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.
Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
This issue does not affect the functionality. Manually edit the /etc/security/limits.conf file and remove the invalid entries.
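For example, leftover default entries for a separate grid user might look like the following (hypothetical values; verify against your own file before removing anything):

grid soft nofile 131072
grid hard nofile 131072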
This issue is tracked with Oracle bug 27036374.
Parent topic: Known Issues When Managing Oracle Database Appliance
Incorrect SGA and PGA values displayed
For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with odb36 database shape, the PGA and SGA values are displayed incorrectly.
For OLTP databases created with the odb36 shape, the issues are as follows:

- sga_target is set as 128 GB instead of 144 GB
- pga_aggregate_target is set as 64 GB instead of 72 GB

For DSS databases created with the odb36 shape, the issues are as follows:

- sga_target is set as 64 GB instead of 72 GB
- pga_aggregate_target is set as 128 GB instead of 144 GB

For IMDB databases created with the odb36 shape, the issues are as follows:

- sga_target is set as 128 GB instead of 144 GB
- pga_aggregate_target is set as 64 GB instead of 72 GB
- inmemory_size is set as 64 GB instead of 72 GB
Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
Reset the PGA and SGA sizes manually.
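For example, a minimal SQL sketch for an OLTP database created with the odb36 shape, using the intended values listed above (run as SYSDBA; the values go to the SPFILE, so restart the database for the new sizes to take effect):

SQL> alter system set sga_target=144G scope=spfile;
SQL> alter system set pga_aggregate_target=72G scope=spfile;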
This issue is tracked with Oracle bug 27036374.
Parent topic: Known Issues When Managing Oracle Database Appliance
OAKERR:7007 Error encountered while starting VM
When starting a virtual machine (VM), an error message appears that the domain does not exist.
If a VM was cloned in Oracle Database Appliance 12.1.2.10 or earlier, you cannot start the HVM domain VMs in Oracle Database Appliance 12.1.2.11.
This issue does not impact newly cloned VMs in Oracle Database Appliance 12.1.2.11 or any other type of VM cloned on older versions. The VM templates were fixed in release 12.1.2.11.0.
When trying to start the VM (vm4 in this example), the output is similar to the following:
# oakcli start vm vm4 -d
.
Start VM : test on Node Number : 0 failed.
DETAILS:
Attempting to start vm on node:0=>FAILED.
<OAKERR:7007 Error encountered while starting VM - Error: Domain 'vm4' does not exist.>
The following is an example of the vm.cfg file for vm4:
vif = ['']
name = 'vm4'
extra = 'NODENAME=vm4'
builder = 'hvm'
cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
vcpus = 2
memory = 2048
cpu_cap = 0
vnc = 1
serial = 'pty'
disk =
[u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a
970c644ea.img,xvda,w']
maxvcpus = 2
maxmem = 2048
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1
Oracle Database Appliance X7-2-HA Virtualized Platform.
Workaround
Delete the extra = 'NODENAME=vm_name' line from the vm.cfg file for the VM that failed to start.

- Open the vm.cfg file for the virtual machine (vm) that failed to start.
  - Dom0: /Repositories/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name
  - ODA_BASE: /app/sharedrepo/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name
- Delete the following line: extra = 'NODENAME=vm_name'. For example, if virtual machine vm4 failed to start, delete the line extra = 'NODENAME=vm4'.
  vif = ['']
  name = 'vm4'
  extra = 'NODENAME=vm4'
  builder = 'hvm'
  cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
  vcpus = 2
  memory = 2048
  cpu_cap = 0
  vnc = 1
  serial = 'pty'
  disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
  maxvcpus = 2
  maxmem = 2048
- Start the virtual machine on Oracle Database Appliance 12.1.2.11.0.
  # oakcli start vm vm4
This issue is tracked with Oracle bug 25943318.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in node number information when running network CLI commands
Network information for node0 is always displayed for some odacli commands, when the -u option is not specified.

If the -u option is not provided, then the describe-networkinterface, list-networks, and describe-network odacli commands always display the results for node0 (the default node), irrespective of whether the command is run from node0 or node1.
Hardware Models
Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1
Workaround
Specify the -u option in the odacli command to display details for the current node.
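For example, to display the network details for node1:

# odacli list-networks -u 1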
This issue is tracked with Oracle bug 27251239.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
- After patching, update the /etc/opensm/opensm.conf file in bare metal deployments, and in Dom0 in virtualized platform environments, to remove the parameters.
  # cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
  max_seq_redisc 0
  rereg_on_guid_migr FALSE
  aguid_inout_notice FALSE
  sm_assign_guid_func uniq_count
  reports 2
  per_module_logging FALSE
  consolidate_ipv4_mask 0xFFFFFFFF
- Reboot. The messages will not appear after rebooting the node.
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance