The following are known issues deploying, updating, and managing Oracle Database Appliance X7-2S, X7-2M, and X7-2-HA:
You cannot use odacli to recover the database from a database backup report saved from Oracle Appliance Manager Web Console.
The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.
The odacli update-repository job fails with patch-name.zip file does not exist in the /tmp directory.
ODACLI does not work because of two dcscli.jar files with different versions in the /opt/oracle/dcs/bin/ directory.
When patching a virtualized platform, the --local option is not supported.
The commands oakcli validate -a and oakcli validate -c return some errors on a virtualized platform.
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.
When Oracle ACFS and Oracle ASM databases exist in the same Oracle home, they may fail to start after patching.
This can happen when an Oracle ACFS database starts before any Oracle ASM-based database has started for the first time. It only occurs immediately after database home patching or Oracle binary relinking. A non-ASM database that starts before the Oracle ASM database comes up without issue, but after the Oracle ASM database starts, the following errors might occur:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 1001 (oinstall), current egid = 1006 (asmadmin)
You can prevent this error by adding an Oracle ASM disk group dependency to the non-ASM databases, before patching Oracle Database home.
Follow these steps to add an Oracle ASM disk group dependency:
Run all steps as the Oracle home owner. Set the environment to the GI home.
Determine the databases to be reviewed:
$ /opt/oracle/oak/bin/oakcli show databases
Check the current configuration of each Oracle ACFS database:
crsctl stat res ora.DBName.db -p | grep DEPENDENCIES | grep uniform:global
If nothing is returned, continue to the next step.
Update the dependency after setting environment for that database home:
srvctl modify database -db DBName -diskgroup DATA
Confirm the update:
crsctl stat res ora.DBName.db -p | grep DEP | grep uniform:global
The change must include the following:
START_DEPENDENCIES=hard(uniform:global:ora.DATA.dg,ora.redo.datastore.acfs,ora.data.datastore.acfs,ora.reco.datastore.acfs,ora.flash.flashdata.acfs) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(global:ora.DATA.dg,ora.redo.datastore.acfs,ora.data.datastore.acfs,ora.reco.datastore.acfs,ora.flash.flashdata.acfs)
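The steps above can be scripted. A minimal shell sketch follows, assuming all listed databases share the same Oracle home (set the environment as described above); the names db1 and db2 are placeholders taken from the oakcli show databases output:

#!/bin/bash
# Add the Oracle ASM disk group dependency to each database that lacks it.
for db in db1 db2; do
  if ! crsctl stat res ora.${db}.db -p | grep DEPENDENCIES | grep -q uniform:global; then
    srvctl modify database -db ${db} -diskgroup DATA
  fi
done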
Hardware Models
All supported models of Oracle Database Appliance
Workaround
Restart all Oracle ACFS databases.
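For example, a minimal sketch assuming a database registered with Oracle Clusterware under the placeholder unique name DBName:

# Restart one Oracle ACFS database; repeat for each affected database.
srvctl stop database -db DBName
srvctl start database -db DBName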
This issue is tracked with Oracle bugs 25795974 and 28058830.
Performance can be significantly affected on the virtualized platform for Oracle Database Appliance, if certain configuration changes are not made.
On virtualized platforms, the default scaling governor (scaling_governor) is set to on-demand instead of performance mode, which degrades the CPU clock speed and causes stack stability issues.
Hardware Models
Oracle Database Appliance virtualized platforms
Workaround
Explicitly specify the scaling governor, max_cstate, and maximum grant table frames (gnttab_max_frames) in dom0.
Add the command line options cpufreq=xen:performance max_cstate=1 gnttab_max_frames=256 to the GRUB_CMDLINE_XEN entry in the /etc/default/grub file:
GRUB_CMDLINE_XEN="dom0_mem=max:4096M allowsuperpage extra_guest_irqs=256,2048 nr_irqs=2048 dom0_vcpus_pin dom0_max_vcpus=20 cpufreq=xen:performance max_cstate=1 gnttab_max_frames=256"
Regenerate the GRUB (GRand Unified Boot Loader) configuration file:
For non-X7-2 hardware models:
# grub2-mkconfig -o /boot/grub2/grub.cfg
For X7-2 hardware models:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Reboot the node.
Verify that the command line options are in effect:
# xenpm get-cpuidle-states|grep "Max possible C-state"
Max possible C-state: C1
# xenpm get-cpufreq-para|grep current_governor
current_governor: performance
This issue is tracked with Oracle bug 28057749.
On certain hardware models of Oracle Database Appliance, the node restarts when an NVMe device is powered off.
On Oracle Database Appliance with Oracle Automatic Storage Management Filter Driver (Oracle ASMFD) configured, the node restarts when an NVMe device is powered off.
Hardware Models
Oracle Database Appliance X7-2S, X7-2M, X6-2S, X6-2M, X6-2L
Workaround
None.
This bug is tracked with Oracle bug 28090492.
Oracle Appliance Manager Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, X6-2L
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bug 27028446.
You cannot use odacli to recover the database from a database backup report saved from Oracle Appliance Manager Web Console.
Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, X6-2L
Workaround
Recover the database with the odacli recover-database command, using the Backup Report saved with odacli, and not the Web Console.
This issue is tracked with Oracle bug 27742604.
Error in patching 12.2.0.1 Oracle Database homes to the latest patchset
There may be an upgrade error when trying to patch 12.2.0.1 Oracle Database homes to the latest patch, if you are upgrading an existing 12.1.0.2 database to the patched Oracle Database home.
Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, X6-2L, X5-2, X4-2, X3-2, V1
Workaround
Create a new 12.2 Oracle Database home using the 12.2 January 2018 clones, or manually apply the patch for bug 24923080 to the patched 12.2.0.1 Oracle Database homes, and then attempt the upgrade.
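A hedged sketch of the first option on DCS-based systems; the version string is an assumption, so substitute the 12.2 January 2018 clone version present in your repository:

# Create a new 12.2 Oracle Database home from the patched clone files.
odacli create-dbhome -v 12.2.0.1.180116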
This issue is tracked with Oracle bug 27983436.
After rolling or local patching of both nodes to 12.2.1.4.0, repositories are in offline or unknown state on node 0 or 1.
The command oakcli start repo <reponame> fails with the error:
OAKERR8038 The filesystem could not be exported as a crs resource
OAKERR:5015 Start repo operation has been disabled by flag
Models
Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1.
Workaround
Log in to oda_base of any node and run the following two commands:
oakcli enable startrepo -node 0
oakcli enable startrepo -node 1
The commands start the repositories and enable them to be available online.
This issue is tracked with Oracle bug 27539157.
Stopping or restarting Cluster Ready Services (CRS) before stopping the repository and virtual machines may cause errors.
Repository status is unknown and High Availability Virtual IP is offline if the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines.
Hardware Models
Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1
Workaround
Follow these steps:
Start the High Availability Virtual IP:
# /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0
Stop the oakVmAgent.py process on dom0.
Perform a lazy unmount of the dom0 repository mounts:
umount -l mount_points
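A hedged sketch of the last two steps; the pkill pattern and the mount filter are assumptions, so verify the process and the mount list on your dom0 first:

# Stop the repository agent process on dom0.
pkill -f oakVmAgent.py
# Lazily unmount each dom0 repository mount point (the awk filter is a
# placeholder; list the actual mounts with 'mount' before unmounting).
for mp in $(mount | awk '/repo/ {print $3}'); do
  umount -l "$mp"
done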
This issue is tracked with Oracle bug 20461930.
The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.
On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.
Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
This issue does not affect the functionality. Manually edit the /etc/security/limits.conf file and remove invalid entries.
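A hedged sketch, assuming the obsolete entries belong to a grid user that no longer exists on the system; review the matches before deleting anything:

# Back up the file, inspect the stale grid user entries, then remove them.
cp /etc/security/limits.conf /etc/security/limits.conf.bak
grep '^grid' /etc/security/limits.conf
sed -i '/^grid[[:space:]]/d' /etc/security/limits.conf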
This issue is tracked with Oracle bug 27036374.
Upgrade from Oracle Database 12.1 to 12.2 fails if the database is not running.
Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
Start the database manually before performing the upgrade.
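A minimal sketch; the Oracle home path and DBName are placeholders for your environment:

# Set the environment to the database home, then start the database.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
srvctl start database -db DBName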
This issue is tracked with Oracle bug 27054542.
For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with the odb36 database shape, the PGA and SGA values are displayed incorrectly.
For OLTP databases created with the odb36 shape, the following are the issues:
sga_target is set to 128 GB instead of 144 GB
pga_aggregate_target is set to 64 GB instead of 72 GB
For DSS databases created with the odb36 shape, the following are the issues:
sga_target is set to 64 GB instead of 72 GB
pga_aggregate_target is set to 128 GB instead of 144 GB
For IMDB databases created with the odb36 shape, the following are the issues:
sga_target is set to 128 GB instead of 144 GB
pga_aggregate_target is set to 64 GB instead of 72 GB
inmemory_size is set to 64 GB instead of 72 GB
Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
Reset the PGA and SGA sizes manually.
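A hedged sketch for an OLTP odb36 database, using the target values listed above (substitute the DSS or IMDB values as appropriate); the changes take effect after the database is restarted:

# Correct the memory targets in the spfile, then restart the database.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET sga_target=144G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=72G SCOPE=SPFILE;
EXIT;
EOF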
This issue is tracked with Oracle bug 27036374.
The odacli update-repository job fails with patch-name.zip file does not exist in the /tmp directory.
When updating the repository, the update is not able to validate the copied file and the job fails. An error similar to the following appears:
DCS-10001:Internal error encountered: /tmp/oda-sm-12.2.1.2.0-171124-GI-12.2.0.1.zip does not exist in the /tmp directory.
Hardware Models
Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, and X6-2L.
Workaround
An invalid null_null auth-key is in ZooKeeper. Remove the invalid key, restart the dcsagent on each node, then execute the command odacli update-repository.
Navigate to the /bin directory in ZooKeeper.
# cd /opt/zookeeper/bin
Connect to ZooKeeper.
# ./zkCli.sh
List all of the auth-keys.
# ls /ssh-auth-keys
Delete the invalid key.
# rmr /ssh-auth-keys/null_null
Quit from ZooKeeper.
quit
Restart the dcsagent on each node.
/opt/oracle/dcs/bin/restartagent.sh
Execute the command odacli update-repository.
In some cases, the dbhome patch update fails due to a timezone issue in opatch.
An error similar to the following appears in the job details:
DCS-10001:Internal error encountered: run datapatch after bundlePatch application on the database home dbhomeID
Hardware Models
Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, and X6-2L
Workaround
Open the /u01/app/oracle/product/*/*/inventory/ContentsXML/comps.xml file.
Search for four-character timezone (TZ) information, for example, HADT and HAST.
Take a backup of those files.
Convert the 4-character timezone to a 3-character timezone, for example, convert HADT and HAST to HST (see the sketch after these steps).
Patch dbhome.
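A hedged sketch of the steps above, assuming HADT and HAST are the four-character time zones found; substitute whatever your search returns:

# Back up each comps.xml, then replace the 4-character time zones with HST.
for f in /u01/app/oracle/product/*/*/inventory/ContentsXML/comps.xml; do
  cp "$f" "$f.bak"
  sed -i 's/HADT/HST/g; s/HAST/HST/g' "$f"
done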
ODACLI does not work because of two dcscli.jar files with different versions in the /opt/oracle/dcs/bin/ directory.
ODA CLI commands do not work.
odacli: '/opt/oracle/dcs/bin/dcscli-2.4.*-SNAPSHOT.jar' is not an odacli command.
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L
Workaround
Navigate to the /opt/oracle/dcs/bin directory.
ls dcscli-*
Verify whether there are two CLI jar files.
Delete the older CLI jar file.
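A minimal sketch; the file name of the older jar is hypothetical, so check the listing before removing anything:

# List the dcscli jar files, newest first, then delete the older one.
ls -lt /opt/oracle/dcs/bin/dcscli-*
rm /opt/oracle/dcs/bin/dcscli-2.4.17-SNAPSHOT.jar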
This issue is tracked with Oracle bug 27807116.
An issue with Oracle Database 12.2.1.2 might cause an internal error CRS-01019: THE OCR SERVICE EXITED. If this occurs, the Cluster Ready Services daemon (crsd) is down.
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L
Workaround
Restart the CRS daemon.
Stop CRS:
# crsctl stop crs -f
Start CRS:
# crsctl start crs -wait
This issue is tracked with Oracle bug 27060167.
When patching a virtualized platform, the --local option is not supported.
When you patch Oracle Database Appliance from Release 12.1.2.x to Release 12.2.1.x on a virtualized platform, attempting to use the --local option to patch a single node will result in an error.
# oakcli update -patch 12.2.1.2.0 --server --local
ERROR: -local is not supported for server patching, on VM systems.
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
Use the following command to update the software on Oracle Database Appliance; it applies the patch to both nodes.
# oakcli update -patch 12.2.1.4.0 --server
The commands oakcli validate -a and oakcli validate -c return some errors on a virtualized platform.
With one exception, the options of the oakcli validate command are not supported in the Oracle Database Appliance 12.2.1.1.0 release.
Note: The command oakcli validate -c storagetopology is supported.
Hardware Models
Oracle Database Appliance X7-2-HA virtualized platform.
Workaround
A workaround is not available.
This issue is tracked with Oracle bugs 27022056 and 27021403.
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.
The following are the steps to reproduce the issue:
Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.
Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.
After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.
# odacli list-jobs
DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
Verify the ZooKeeper status on both nodes before starting dcsagent:
/opt/zookeeper/bin/zkServer.sh status
For a single-node environment, the status should be leader, follower, or standalone.
Restart the dcsagent on Node0 after running the cleanup.pl script.
# initctl stop initdcsagent
# initctl start initdcsagent
When using Oracle Database 12.2.0.1 grid infrastructure (GI) to create an 11.2.0.4 database, the option to configure Oracle Enterprise Manager DB Console is disabled.
An issue with the Enterprise Manager Control (emctl) command line utility and Enterprise Manager Configuration Assistant (emca) occurs when using the 12.2.0.1 GI to create an 11.2.0.4 database.
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L that are using the 12.2.0.1 GI.
Workaround
Manually configure Oracle Enterprise Manager DB Console after creating the database.
If the appliance is a multi-node system, perform the steps on both nodes. The example assumes a multi-node system:
Create a dbconsole.rsp response file, as follows, based on your environment.
To obtain the cluster name for your environment, run the command $GI_HOME/bin/cemutlo -n
DB_UNIQUE_NAME=db_unique_name
SERVICE_NAME=db_unique_name.db_domain
PORT=scan listener port
LISTENER_OH=$GI_HOME
SYS_PWD=admin password
DBSNMP_PWD=admin password
SYSMAN_PWD=admin password
CLUSTER_NAME=cluster name
ASM_OH=$GI_HOME
ASM_SID=+ASM1
ASM_PORT=asm listener port
ASM_USER_NAME=ASMSNMP
ASM_USER_PWD=admin password
Run the command to configure the dbcontrol using the response file. The command will fail with an error. You will use the steps in the output in Step 4.
$ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster -silent -respFile dbconsole.rsp

Error securing Database Control. Database Control has not been brought-up on nodes node1 node2
Execute the following command(s) on nodes: node1 node2
1. Set the environment variable ORACLE_UNQNAME to the Database unique name.
2. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl config emkey -repos -sysman_pwd Password for SYSMAN user -host node -sid Database unique name
3. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl secure dbconsole -sysman_pwd Password for SYSMAN user -host node -sid Database unique name
4. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl start dbconsole
To secure Em Key, run /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl config emkey -remove_from_repos -sysman_pwd Password for SYSMAN user
Use the vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid
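A hedged scripted alternative to the vi edit, assuming emctl contains a line beginning with CRS_HOME= and that the grid home path matches your system:

# Back up emctl, then point CRS_HOME at the 12.2.0.1 grid home.
cp $ORACLE_HOME/bin/emctl $ORACLE_HOME/bin/emctl.bak
sed -i 's|^CRS_HOME=.*|CRS_HOME=/u01/app/12.2.0.1/grid|' $ORACLE_HOME/bin/emctl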
Run the steps reported by emca in Step 2 with the proper values.
$ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE node0 host -EM_NODE_LIST node1 host -DB_UNIQUE_NAME db_unique_name -SERVICE_NAME db_unique_name.db_domain
$ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE node1 host -EM_NODE_LIST node1 host -DB_UNIQUE_NAME db_unique_name -SERVICE_NAME db_unique_name.db_domain
On the second node, use the vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid
Check the DB Console configuration status:
# /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl status agent
- https://public IP for Node0:1158/em
- https://public IP for Node1:1158/em