4 Known Issues with Oracle Database Appliance in This Release
The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Errors when running oakcli commands on Oracle Database Appliance Virtualized Platform
You may encounter an error when you run oakcli commands on Oracle Database Appliance Virtualized Platforms.
- Patching Oracle Database home fails with errors
When applying the patch for Oracle Database homes, an error is encountered.
- Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
- Patching errors on Oracle Database Appliance Virtualized Platform
When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
- Error encountered when running Oracle Database Appliance commands
If you submit an odacli create-prepatchreport job, and then run any command before the job is completed, an error is encountered.
- Error in updating Oracle ILOM when patching the appliance
When patching the appliance, there may be errors in patching Oracle ILOM.
- Patching pre-checks do not complete with --local option during server patching
Server patching fails while running patching pre-checks with the --local option.
- Relocation of Oracle RAC One Database fails during patching
When relocating Oracle RAC One Database during patching, an error is encountered.
- Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Error in patching NVMe disks to the latest version
Patching of NVMe disks to the latest version may not be supported on some Oracle Database Appliance hardware models.
- Failure in patching Oracle Database Appliance Virtualized Platform
Server patching for Oracle Database Appliance may fail with errors.
- Error in patching Oracle Database Appliance Virtualized Platform
When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
- DATA disk group fails to start after upgrading Oracle Grid Infrastructure to 18.5
After upgrading Oracle Grid Infrastructure to 18.5, the DATA disk group fails to start.
- Errors when deleting database storage after migration to DCS stack
After migrating to the DCS stack, some volumes in the database storage cannot be deleted.
- Repository in offline or unknown status after patching
After rolling or local patching of both nodes to 18.8, repositories are in offline or unknown state on node 0 or 1.
- Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance
Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.
- 11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
- FLASH disk group is not mounted when patching or provisioning the server
The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.
Errors when running oakcli commands on Oracle Database Appliance Virtualized Platform
You may encounter an error when you run oakcli commands on Oracle Database Appliance Virtualized Platforms.
For example, the following error occurs when you run the oakcli expand storage command on Oracle Database Appliance Virtualized Platforms:
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
This error occurs in Oracle Database Appliance Virtualized Platforms with Oracle Autonomous Health Framework version 19.3 or earlier, where Oracle Autonomous Health Framework is installed in the DCS home directory (/opt/oracle/dcs) under /opt/oracle/dcs/oracle.ahf/.
# tfactl -version;orachk -v
TFA Version : 193000
TFA Build ID : 20200108023845
ORACHK VERSION: 19.3.0_20200108
# cd /opt/oracle
# ls
dcs extapi oak
# cd dcs
# ls
oracle.ahf
#
Hardware Models
Oracle Database Appliance High-Availability hardware models with Virtualized Platform deployments
Workaround
Uninstall the installed Oracle Autonomous Health Framework version and download the latest Oracle Autonomous Health Framework version on both nodes of your Oracle Database Appliance High-Availability deployment.
- Uninstall Oracle Autonomous Health Framework:
# tfactl uninstall -local -deleterepo
For Oracle Autonomous Health Framework version 19.3, manually remove the installed version by running the following command:
# rpm -e oracle-ahf
- Check that no oracle-ahf rpm is installed on the system:
# rpm -qi oracle-ahf
package oracle-ahf is not installed
- Remove the DCS home (/opt/oracle/dcs) and its sub-directories:
# cd /opt/oracle
# rm -rf /opt/oracle/dcs
- Check and ensure that the DCS home directory (/opt/oracle/dcs) does not exist under the /opt/oracle directory on the system:
# cd /opt/oracle/
# ls
extapi oak
- Download the latest Oracle Autonomous Health Framework version from My Oracle Support.
- Copy the downloaded zip file to /root on your system, for example, AHF-LINUX_v20.1.1.zip. The zip file name for each Oracle Autonomous Health Framework version may change based on the latest release number.
- Create the ahfinstall directory under /tmp:
# cd /tmp
# mkdir ahfinstall
- Unzip the Oracle Autonomous Health Framework zip file into /tmp/ahfinstall:
# cd ahfinstall
# unzip /root/AHF-LINUX_v20.1.1.zip
- Run the ahf_setup installer from /tmp/ahfinstall. During installation, select the default location for installation. Note: Do not choose the DCS home directory as the Oracle Autonomous Health Framework install location.
# ./ahf_setup
- Check the Oracle TFA and ORAchk versions:
# tfactl -version;orachk -v
TFA Version : 201100
TFA Build ID : 20200331131556
ORACHK VERSION: 20.1.1_20200331
- Check that Oracle Autonomous Health Framework is installed in the /opt/oracle.ahf directory.
# tfactl -version;orachk -v
TFA Version : 193000
TFA Build ID : 20200108023845
ORACHK VERSION: 19.3.0_20200108
# tfactl uninstall -local -deleterepo
Starting AHF Uninstall
AHF will be uninstalled on: node2
Do you want to continue with AHF uninstall ? [Y]|N : y
Stopping AHF service on local node node2...
Sleeping for 10 seconds...
Stopping TFA Support Tools...
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
Stopping orachk scheduler ...
Removing orachk cache discovery....
Successfully completed orachk cache discovery removal.
Removed orachk from inittab
Removing AHF setup on node2:
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /opt/oracle.ahf/jre
Removing /opt/oracle.ahf/common
Removing /opt/oracle.ahf/bin
Removing /opt/oracle.ahf/python
Removing /opt/oracle.ahf/analyzer
Removing /opt/oracle.ahf/tfa
Removing /opt/oracle.ahf/orachk
Removing /opt/oracle.ahf/ahf
Removing /u01/app/grid/oracle.ahf/data/node2
Removing /opt/oracle.ahf/install.properties
Removing /u01/app/grid/oracle.ahf/data/repository
Removing /u01/app/grid/oracle.ahf/data
Removing /u01/app/grid/oracle.ahf
Removing AHF Home : /opt/oracle.ahf
# tfactl -version;orachk -v
-bash: /usr/bin/tfactl: No such file or directory
-bash: /usr/bin/orachk: No such file or directory
# rpm -e oracle-ahf
warning: erase unlink of /opt/oracle.ahf failed: No such file or directory
# rpm -qi oracle-ahf
package oracle-ahf is not installed
#
# cd /opt/oracle
# ls
dcs extapi oak
# rm -rf dcs/
# ls
extapi oak
#
# cd /tmp
# mkdir ahfinstall
# cd ahfinstall/
# pwd
/tmp/ahfinstall
# unzip /root/AHF-LINUX_v20.1.1.zip
Archive: /root/AHF-LINUX_v20.1.1.zip
inflating: README.txt
inflating: ahf_setup
# ls
ahf_setup README.txt
# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_85684_2020_04_01-00_15_38.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.1.1 Build Date: 202003311315
Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /u01/app/grid [Free Space : 51458 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1
AHF Data Directory : /u01/app/grid/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : xxxxxxxxxxxx
AHF will also be installed/upgraded on these Cluster Nodes :
1. node1
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/grid/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : N
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
.----------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+---------------+---------------+-------+------+------------+----------------------+
| node2 | RUNNING | 87939 | 5000 | 20.1.1.0.0 | 20110020200331131556 |
'---------------+---------------+-------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.--------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /u01/app/grid/oracle.ahf/data |
| Repository | /u01/app/grid/oracle.ahf/data/repository |
| Diag Directory | /u01/app/grid/oracle.ahf/data/node2/diag |
'-----------------+--------------------------------------------------'
Starting orachk daemon from AHF ...
AHF install completed on node1.
Installing AHF on Remote Nodes :
AHF will be installed on node1, Please wait.
Installing AHF on node1:
[node1] Copying AHF Installer
[node1] Running AHF Installer
Adding rpm Metadata to rpm database on ODA system
RPM File /opt/oracle.ahf/rpms/oracle-ahf-201100-20200331131556.x86_64.rpm
Preparing... ########################################### [100%]
---------
1:oracle-ahf ########################################### [100%]
Upgrading oracle-ahf
warning: erase unlink of /opt/oracle/dcs/oracle.ahf failed: No such file or directory
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_85684_2020_04_01-00_15_38.log to /u01/app/grid/oracle.ahf/data/node2/diag/ahf/
# tfactl -version;orachk -v
TFA Version : 201100
TFA Build ID : 20200331131556
ORACHK VERSION: 20.1.1_20200331
# ls -l /opt/
........
drwxr-xr-x 11 root root 4096 Mar 31 13:16 oracle.ahf
........
#
This issue is tracked with Oracle bug 31014517.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching Oracle Database home fails with errors
When applying the patch for Oracle Database homes, an error is encountered.
Error Encountered When Patching Oracle Database Homes on Bare Metal Systems:
When patching Oracle Database homes on bare metal systems, the odacli update-dbhome command fails with an error similar to the following:
Please stop TFA before dbhome patching.
To resolve this issue, follow the steps described in the Workaround.
Error Encountered When Patching Oracle Database Homes on Virtualized Platform:
When patching Oracle Database homes on Virtualized Platform, patching fails with an error similar to the following:
INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1
Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log.
Check the last log file in the command output.
In the log file, search for entries similar to the following:
ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On bare metal systems, run tfactl stop on all the nodes in the cluster. On Virtualized Platform, run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
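The workaround can be sketched as a small script; this is only a sketch, assuming a two-node cluster reachable over ssh as root, and the node names are placeholders. The deployment-type mapping follows the bare metal and Virtualized Platform cases described above.

```shell
#!/bin/sh
# Sketch: stop Oracle TFA Collector on every node before retrying patching.
# NODES and PLATFORM are assumptions; adjust for your deployment.
NODES="node0 node1"
PLATFORM=bm   # bm (bare metal) or vp (Virtualized Platform)

# Map deployment type to the documented TFA stop command.
tfa_stop_cmd() {
    case "$1" in
        bm) echo "tfactl stop" ;;
        vp) echo "/etc/init.d/init.tfa stop" ;;
    esac
}

# Only attempt the remote stops on a real appliance (tfactl present locally).
if command -v tfactl >/dev/null 2>&1; then
    for n in $NODES; do
        ssh "root@$n" "$(tfa_stop_cmd "$PLATFORM")"
    done
    # Once TFA is stopped on all nodes, restart the patch operation,
    # for example: odacli update-dbhome -i <dbhome-id> -v <version>
fi
```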
This issue is tracked with Oracle bugs 30799713 and 30892062.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching Oracle Database Appliance
When applying the server patch for Oracle Database Appliance, an error is encountered.
Error Encountered When Patching Bare Metal Systems:
When patching the appliance on bare metal systems, the odacli update-server command fails with the following error:
Please stop TFA before server patching.
To resolve this issue, follow the steps described in the Workaround.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
INFO: Running prepatching on local node
WARNING: errors seen during prepatch on local node
ERROR: Unable to apply the patch 1
Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0. You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log.
Check the last log file in the command output.
In the log file, search for entries similar to the following:
ERROR: date_time_stamp: TFA is running on one or more nodes.
WARNING: date_time_stamp: Shutdown TFA and then restart patching
INFO: date_time_stamp: Read the Release Notes for additional information.
To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On bare metal systems, run tfactl stop on all the nodes in the cluster. On Virtualized Platform, run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Restart patching once Oracle TFA Collector has stopped on all nodes.
This issue is tracked with Oracle bugs 30260318 and 30892062.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching errors on Oracle Database Appliance Virtualized Platform
When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
Error Encountered When Patching Virtualized Platform:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the following:
ERROR: Unable to apply the GRID patch
ERROR: Failed to patch server (grid) component
This error can occur even if you stopped Oracle TFA Collector before patching. During server patching on the node, Oracle TFA Collector is updated and this can restart the TFA processes, thus causing an error. To resolve this issue, follow the steps described in the Workaround.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run /etc/init.d/init.tfa stop on all the nodes in the cluster.
- Run the command:
/u01/app/18.0.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_PREPATCH -status
Verify that the command output is SUCCESS.
- If the command output was SUCCESS, then run the following commands on all the nodes:
/u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -prepatch -rollback
/u01/app/18.0.0.0/grid/crs/install/rootcrs.sh -postpatch
- Restart patching.
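The checkpoint check and conditional rollback above can be sketched as a script to run on each node. This is a sketch only; the Grid home path is taken from the commands above and may differ on your system.

```shell
#!/bin/sh
# Sketch of the checkpoint check and rollback; run on each node.
GRID_HOME=/u01/app/18.0.0.0/grid

# The rollback/postpatch pair is only needed when the checkpoint reports SUCCESS.
needs_rollback() {
    [ "$1" = "SUCCESS" ]
}

if [ -x "$GRID_HOME/bin/cluutil" ]; then
    out=$("$GRID_HOME/bin/cluutil" -ckpt -oraclebase /u01/app/grid \
          -chkckpt -name ROOTCRS_PREPATCH -status)
    if needs_rollback "$out"; then
        "$GRID_HOME/crs/install/rootcrs.sh" -prepatch -rollback
        "$GRID_HOME/crs/install/rootcrs.sh" -postpatch
    fi
fi
```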
This issue is tracked with Oracle bug 30886701.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error encountered when running Oracle Database Appliance commands
If you submit an odacli create-prepatchreport job, and then run any command before the job is completed, an error is encountered.
If you issue an odacli command such as odacli create-appliance, odacli update-dbhome, odacli update-server, odacli create-database, or odacli update-repository while an odacli create-prepatchreport job is running, then you may see an error.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Wait for the odacli create-prepatchreport job to complete before issuing any other odacli commands.
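One way to wait is to poll odacli describe-job until the job reports Success, sketched below. The job ID is a placeholder, and the exact format of the Status line in the describe-job output is an assumption to verify against your release.

```shell
#!/bin/sh
# Sketch: block until an odacli create-prepatchreport job finishes before
# issuing other odacli commands. JOB_ID is a placeholder.
JOB_ID="replace-with-your-job-id"

# Pull the Status field out of `odacli describe-job` output (assumed format).
job_status() {
    printf '%s\n' "$1" | awk -F': *' '/^[[:space:]]*Status/ {print $2; exit}'
}

if command -v odacli >/dev/null 2>&1; then
    while :; do
        st=$(job_status "$(odacli describe-job -i "$JOB_ID")")
        case "$st" in
            Success) break ;;
            Failure) echo "prepatch report failed" >&2; exit 1 ;;
        esac
        sleep 30
    done
fi
```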
This issue is tracked with Oracle bug 30892528.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in updating Oracle ILOM when patching the appliance
When patching the appliance, there may be errors in patching Oracle ILOM.
When patching Oracle Database Appliance, patching of Oracle ILOM fails. The odacli describe-component command output does not display Oracle ILOM as up-to-date.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Run the command ln -s /usr/bin/ipmiflash /usr/sbin/ipmiflash and then run the odacli update-server command again.
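A guarded version of this workaround is sketched below; the paths come from the text, and the symlink is only created when it is actually missing.

```shell
#!/bin/sh
# Sketch of the Oracle ILOM patching workaround: link ipmiflash into
# /usr/sbin when only the /usr/bin copy exists.
if [ -x /usr/bin/ipmiflash ] && [ ! -e /usr/sbin/ipmiflash ]; then
    ln -s /usr/bin/ipmiflash /usr/sbin/ipmiflash
fi
# Then rerun server patching, for example: odacli update-server -v <version>
```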
This issue is tracked with Oracle bug 30619842.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching pre-checks do not complete with --local option during server patching
Server patching fails while running patching pre-checks with the --local option.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not run patching pre-checks on the server with the --local option.
This issue is tracked with Oracle bug 30255817.
Parent topic: Known Issues When Patching Oracle Database Appliance
Relocation of Oracle RAC One Database fails during patching
When relocating Oracle RAC One Database during patching, an error is encountered.
When patching a database home in which one or more Oracle RAC One Databases are running, the relocation of the Oracle RAC One Database may fail. This causes the Oracle Database home patching to fail.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Shut down the Oracle RAC One node manually and then patch the database home. After patching completes successfully, start Oracle Database.
This issue is tracked with Oracle bug 30187542.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error when patching to 12.1.0.2.190716 Bundle Patch
When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an error is encountered.
The ODACLI job displays the following error:
DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.
The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
All Oracle Database Appliance hardware models with bare metal deployments
Workaround
Install the same patch again.
This issue is tracked with Oracle bugs 30026438 and 30155710.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
/u01/app/18.0.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
- Run the command again until the output displays only the two warnings above. The status of Oracle Clusterware should then be Normal again.
- You can verify the status with the command:
/u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion -f
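The repeat-until-clean step can be sketched as a loop that tolerates only the two expected warnings. The warning IDs and output format are assumptions based on the messages quoted above.

```shell
#!/bin/sh
# Sketch: rerun cluvfy until the output contains no warnings other than the
# two expected ones from the workaround above.
GRID_HOME=/u01/app/18.0.0.0/grid

# True when every WARNING line mentions one of the two expected message IDs.
only_expected_warnings() {
    ! printf '%s\n' "$1" | grep WARNING | grep -qEv 'P?RVG-(6017|11368)'
}

if [ -x "$GRID_HOME/bin/cluvfy" ]; then
    until only_expected_warnings \
          "$("$GRID_HOME/bin/cluvfy" stage -post crsinst -collect cluster -gi_upgrade -n all)"; do
        sleep 30
    done
    # Status should now be Normal; confirm with:
    # $GRID_HOME/bin/crsctl query crs activeversion -f
fi
```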
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching NVMe disks to the latest version
Patching of NVMe disks to the latest version may not be supported on some Oracle Database Appliance hardware models.
On Oracle Database Appliance X8-2 hardware models, the NVMe controller 7361456_ICRPC2DD2ORA6.4T is installed with the higher version VDV1RL01/VDV1RL02. Patching of this controller is not supported on Oracle Database Appliance X8-2 hardware models. For other platforms, if the installed version is QDV1RE0F, QDV1RE13, QDV1RD09, or QDV1RE14, then when you patch the storage, the NVMe controller version is updated to qdv1rf30.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
None
This issue is tracked with Oracle bug 30287439.
Parent topic: Known Issues When Patching Oracle Database Appliance
Failure in patching Oracle Database Appliance Virtualized Platform
Server patching for Oracle Database Appliance may fail with errors.
Patching the appliance server fails with the following error:
Worker 0: IOError: [Errno 28] No space left on device
This can occur during server patching. The space issue may occur either on ODA_BASE or dom0. The issue occurs when the log files opensm.log on dom0 and ibacm.log on ODA_BASE increase in size and consume all free space on the volume.
Hardware Models
Oracle Database Appliance hardware models X6-2 and X5-2 Virtualized Platform with InfiniBand
Workaround
Follow these steps:
- On ODA_BASE, truncate /var/log/opensm.log.
- On dom0, truncate /var/log/ibacm.log.
- Stop Oracle Clusterware:
/u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
- After the cluster and the cluster resources are stopped, start Oracle Clusterware:
/u01/app/18.0.0.0/grid/bin/crsctl start crs
Restart Oracle Database Appliance server patching.
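The two truncation steps can be done in place, as sketched below; truncating rather than deleting keeps any open file handles on the logs valid, which is an assumption about why the workaround says truncate rather than remove.

```shell
#!/bin/sh
# Sketch: empty the InfiniBand log files in place. Per the workaround above,
# opensm.log lives on ODA_BASE and ibacm.log on dom0.
for f in /var/log/opensm.log /var/log/ibacm.log; do
    [ -f "$f" ] && : > "$f"   # truncate to zero bytes without removing the file
done
```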
This issue is tracked with Oracle bug 30327847.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching Oracle Database Appliance Virtualized Platform
When applying the server patch for Oracle Database Appliance Virtualized Platform, an error is encountered.
Patching the appliance server fails with the following error:
ERROR: Host 192.168.16.28 listed in file /opt/oracle/oak/temp_privips.txt is not pingable at /opt/oracle/oak/pkgrepos/System/18.7.0.0.0/bin/pkg_install.pl
line 1806
ERROR: Unable to apply the patch 2
This can occur during non-local (rolling) server patch. The error is seen on the first node after patching of ODA_BASE and dom0 is complete. This issue is caused because the remote node Node1 rebooted during patching.
Hardware Models
Oracle Database Appliance hardware models X6-2 and X5-2 Virtualized Platform with InfiniBand
Workaround
- Shut down Oracle TFA Collector:
/u01/app/18.0.0.0/grid/bin/tfactl stop
- Restart Oracle Database Appliance server patching.
This issue is tracked with Oracle bug 30318927.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching of neither of the two known versions, 0112 and 0121, of the M.2 disk is supported. Patching the LSI controller version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
DATA disk group fails to start after upgrading Oracle Grid Infrastructure to 18.5
After upgrading Oracle Grid Infrastructure to 18.5, the DATA disk group fails to start.
The following error is reported in the log file:
ORA-15038: disk '/dev/mapper/HDD_E1_S13_1931008292p1' mismatch on 'Sector Size' with target disk group [512] [4096]
Hardware Models
Oracle Database Appliance hardware models X5-2 or later, with mixed storage disks installed
Workaround
To start Oracle Clusterware successfully, connect to Oracle ASM as the grid user, and run the following SQL commands:
SQL> show parameter _disk_sector_size_override;

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
_disk_sector_size_override           boolean     TRUE
SQL> alter system set "_disk_sector_size_override" = FALSE scope=both;
alter system set "_disk_sector_size_override" = FALSE scope=both
*
ERROR at line 1:
ORA-32000: write to SPFILE requested but SPFILE is not modifiable
SQL> alter diskgroup DATA mount;
Diskgroup altered.
SQL> alter system set "_disk_sector_size_override" = FALSE scope=both;
System altered.
This issue is tracked with Oracle bug 29220984.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when deleting database storage after migration to DCS stack
After migrating to the DCS stack, some volumes in the database storage cannot be deleted.
If you create Oracle ACFS database storage using the oakcli create dbstorage command for a multitenant environment (CDB) without a database in the OAK stack, and then migrate to the DCS stack, deleting the database storage removes only the DATA volume; the REDO and RECO volumes are not deleted.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create a database on Oracle ACFS database storage with the same name as the database for which you want to delete the storage volumes, and then delete the database. This cleans up all the volumes and file systems.
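A hypothetical sketch of that cleanup sequence follows; the database name is a placeholder, the list-databases column layout is an assumption, and the exact options should be checked with odacli create-database -h and odacli delete-database -h on your release.

```shell
#!/bin/sh
# Hypothetical sketch: recreate a database with the same name as the orphaned
# storage, then delete it so DCS removes the DATA, REDO, and RECO volumes.
DBNAME=mydb   # placeholder: name of the database whose volumes remain

if command -v odacli >/dev/null 2>&1; then
    odacli create-database -n "$DBNAME"
    # Wait for the create job to finish, then look up the database ID
    # (assumes ID and DB Name are the first two columns of list-databases).
    DBID=$(odacli list-databases | awk -v db="$DBNAME" '$2 == db {print $1}')
    odacli delete-database -i "$DBID"
fi
```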
This issue is tracked with Oracle bug 28987135.
Parent topic: Known Issues When Patching Oracle Database Appliance
Repository in offline or unknown status after patching
After rolling or local patching of both nodes to 18.8, repositories are in offline or unknown state on node 0 or 1.
The command oakcli start repo <reponame> fails with the following errors:
OAKERR8038 The filesystem could not be exported as a crs resource
OAKERR:5015 Start repo operation has been disabled by flag
Hardware Models
Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1.
Workaround
Log in to oda_base of any node and run the following two commands:
oakcli enable startrepo -node 0
oakcli enable startrepo -node 1
The commands start the repositories and enable them to be available online.
This issue is tracked with Oracle bug 27539157.
Parent topic: Known Issues When Patching Oracle Database Appliance
Versions of some components not updated after cleaning up and reprovisioning Oracle Database Appliance
Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk versions are not updated after cleaning up and reprovisioning Oracle Database Appliance.
When cleaning up and reprovisioning Oracle Database Appliance with release 18.8, the Oracle Auto Service Request (ASR), Oracle TFA Collector, or Oracle ORAchk RPMs may not be updated to release 18.8. The components are updated when you apply the patches for Oracle Database Appliance release 18.8.
Hardware Models
All Oracle Database Appliance deployments
Workaround
Update to the latest server patch for the release.
This issue is tracked with Oracle bugs 28933900 and 30187516.
Parent topic: Known Issues When Patching Oracle Database Appliance
11.2.0.4 databases fail to start after patching
After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.
srvctl start database -db db_unique_name
This issue is tracked with Oracle bug 28815716.
Parent topic: Known Issues When Patching Oracle Database Appliance
FLASH disk group is not mounted when patching or provisioning the server
The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.
# oakcli update -patch 12.2.1.2 --server
****************************************************************************
***** For all X5-2 customers with 8TB disks, please make sure to *****
***** run storage patch ASAP to update the disk firmware to "PAG1". *****
****************************************************************************
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: y
INFO: User has confirmed for the reboot
INFO: Patch bundle must be unpacked on the second Node also before applying the patch
Did you unpack the patch bundle on the second Node? : [Y/N]? : y
Please enter the 'root' password :
Please re-enter the 'root' password:
INFO: Setting up the SSH
..........Completed .....
... ...
INFO: 2017-12-26 00:31:22: -----------------Patching ILOM & BIOS-----------------
INFO: 2017-12-26 00:31:22: ILOM is already running with version 3.2.9.23r116695
INFO: 2017-12-26 00:31:22: BIOS is already running with version 30110000
INFO: 2017-12-26 00:31:22: ILOM and BIOS will not be updated
INFO: 2017-12-26 00:31:22: Getting the SP Interconnect state...
INFO: 2017-12-26 00:31:44: Clusterware is running on local node
INFO: 2017-12-26 00:31:44: Attempting to stop clusterware and its resources locally
Killed
# Connection to server.example.com closed.
The Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services, and Event Manager are online. However, when you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database, you receive the error: flash space is 0.
Hardware Models
Oracle Database Appliance X5-2, X6-2-HA, and X7-2 HA SSD systems.
Workaround
Manually mount the FLASH disk group before creating an Oracle ACFS database.
Perform the following steps as the grid owner:
- Set the environment variables as the grid OS user on node0:
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0.1/grid
- Log on to the Oracle ASM instance as sysasm:
$ORACLE_HOME/bin/sqlplus / as sysasm
- Execute the following SQL command:
SQL> ALTER DISKGROUP FLASH MOUNT
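The steps above can be consolidated into one small script, sketched here; run it as the grid OS user on node0, with the SID and paths as shown in the steps (adjust them if your installation differs).

```shell
#!/bin/sh
# Sketch of the workaround: mount the FLASH disk group from the +ASM1 instance.
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0.1/grid

if [ -x "$ORACLE_HOME/bin/sqlplus" ]; then
    "$ORACLE_HOME/bin/sqlplus" -S / as sysasm <<'EOF'
ALTER DISKGROUP FLASH MOUNT;
EXIT;
EOF
fi
```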
This issue is tracked with Oracle bug 27322213.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Incorrect RAID configuration of some Oracle Database Appliance X8-2 Systems
Oracle Database Appliance X8-2 Systems shipped before December 20, 2019 may have local system or boot disks with incorrect RAID configuration, and hence require reimaging the system. - Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails. - Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error. - Shared repositories are not online after patching Virtualized Platform
The oakcli show repo command displays only the local repository or the shared repository status is unknown. - Creation of CDB for 12.1.0.2 databases may fail
Creation of multitenant container database (CDB) for 12.1.0.2 databases on Virtualized Platform may fail. - Database creation hangs when using a deleted database name for database creation
If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs. - Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl. - Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance. - Errors in clone database operation
Clone database operation fails due to errors. - Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0. - Errors after restarting CRS
If the Cluster Ready Services (CRS) are stopped or restarted, before stopping the repository and virtual machines, then this may cause errors. - Unable to create an Oracle ASM Database for Release 12.1
Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1. - Database creation fails for odb-01s DSS databases
When attempting to create a DSS database with shape odb-01s, the job may fail with errors.
Incorrect RAID configuration of some Oracle Database Appliance X8-2 Systems
Oracle Database Appliance X8-2 Systems shipped before December 20, 2019 may have local system or boot disks with incorrect RAID configuration, and hence require reimaging the system.
Hardware Models
Oracle Database Appliance X8-2 Systems shipped before December 20, 2019
Workaround
See My Oracle Support Note 2622035.1 for more details.
https://support.oracle.com/rs?type=patch&id=2622035.1
This issue is tracked with Oracle bug 30651492.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Failure in creating RECO disk group during provisioning
When provisioning Oracle Database Appliance X8-2-HA with High Performance configuration containing default storage and expansion shelf, creation of RECO disk group fails.
Hardware Models
All Oracle Database Appliance X8-2-HA with High Performance configuration
Workaround
- Power off storage expansion shelf.
- Reboot both nodes.
- Proceed with provisioning the default storage shelf (first JBOD).
- After the system is successfully provisioned with the default storage shelf (first JBOD), check that oakd is running on both nodes in foreground mode.
# ps -aef | grep oakd
- Check that all first JBOD disks have the status online, good in oakd, and CACHED in Oracle ASM.
- Power on the storage expansion shelf (second JBOD), and wait for a few minutes for the operating system and other subsystems to recognize it.
- Run the following commands from the master node to add the storage expansion shelf disks (two JBOD setup) to oakd and Oracle ASM.
# odaadmcli show ismaster
OAKD is in Master Mode
# odaadmcli expand storage -ndisk 24 -enclosure 1
Skipping precheck for enclosure '1'...
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish ...
#
- Check that the storage expansion shelf disks (two JBOD setup) are added to oakd and Oracle ASM.
Replace odaadmcli with oakcli commands on Oracle Database Appliance Virtualized Platform in the procedure.
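As a sketch of the verification and expansion steps above, assuming the odaadmcli path and the disk/enclosure counts shown in the procedure; the command existence checks are an addition so the flow can be traced on any host.

```shell
# Sketch: verify oakd, then expand storage for the second JBOD (24 disks, enclosure 1).
ODAADMCLI=/opt/oracle/oak/bin/odaadmcli   # assumed tool path; use oakcli on Virtualized Platform
NDISK=24
ENCLOSURE=1
if pgrep -f oakd >/dev/null 2>&1; then
  echo "oakd is running"
else
  echo "oakd is not running; do not proceed with expansion"
fi
if [ -x "$ODAADMCLI" ]; then
  "$ODAADMCLI" show ismaster && \
    "$ODAADMCLI" expand storage -ndisk "$NDISK" -enclosure "$ENCLOSURE"
else
  echo "would run: odaadmcli expand storage -ndisk $NDISK -enclosure $ENCLOSURE"
fi
```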
For more information, see the chapter Managing Storage in the Oracle Database Appliance X8-2 Deployment Guide.
This issue is tracked with Oracle bug 30839054.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Simultaneous creation of two Oracle ACFS Databases fails
If you try to create two Oracle ACFS databases on a system where there is no database or database storage already created, then database creation fails for one of the databases with an error.
DCS-10001:Internal error encountered: Fail to run command Failed to create
volume.
Hardware Models
All Oracle Database Appliance bare metal deployments
Workaround
Manually delete the DATA volume (and REDO volume, in case of Oracle Database Appliance X8-2) from the system.
su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Reco rdodbname

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash datdbname (if volume exists in FLASH disk group)
GRID_HOME/bin/asmcmd --nocp voldelete -G Data datdbname (if volume exists in DATA disk group)

su - GRID_USER
export ORACLE_SID=+ASM1 (in case of first node) / +ASM2 (in case of second node)
export ORACLE_HOME=GRID_HOME
GRID_HOME/bin/asmcmd --nocp voldelete -G Flash rdodbname
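The blocks above differ only in disk group and volume name, so they can be looped. In this sketch, the database name, the node number, and the grid home path are assumed placeholders; the asmcmd existence check keeps the loop harmless off-appliance.

```shell
# Sketch: delete leftover ACFS volumes for a failed database creation.
DBNAME=${1:-testdb}                              # assumed example database name
NODE=${2:-0}                                     # 0 for the first node, 1 for the second
GRID_HOME=${GRID_HOME:-/u01/app/18.0.0.0/grid}   # assumed grid home path
if [ "$NODE" = 0 ]; then export ORACLE_SID=+ASM1; else export ORACLE_SID=+ASM2; fi
export ORACLE_HOME="$GRID_HOME"
ASMCMD="$GRID_HOME/bin/asmcmd"
# Disk group / volume pairs to try; a missing volume just makes asmcmd fail harmlessly.
for pair in "Data:dat$DBNAME" "Reco:rdo$DBNAME" "Flash:dat$DBNAME" "Flash:rdo$DBNAME"; do
  dg=${pair%%:*}; vol=${pair#*:}
  if [ -x "$ASMCMD" ]; then
    "$ASMCMD" --nocp voldelete -G "$dg" "$vol" || true
  else
    echo "would run: asmcmd --nocp voldelete -G $dg $vol"
  fi
done
```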
This issue is tracked with Oracle bug 30750497.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Shared repositories are not online after patching Virtualized Platform
The oakcli show repo command displays only the local repository or the shared repository status is unknown.
Hardware Models
All Oracle Database Appliance Virtualized Platform
Workaround
# oakcli enable startrepo -node 0
# oakcli enable startrepo -node 1
# oakcli restart oak
This issue is tracked with Oracle bug 30325619.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Creation of CDB for 12.1.0.2 databases may fail
Creation of multitenant container database (CDB) for 12.1.0.2 databases on Virtualized Platform may fail.
If the database name (db_name) and the database unique name (db_unique_name) are different when creating a snapshot database, then the following error is encountered:
WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location
Hardware Models
All Oracle Database Appliance hardware models for Virtualized Platform
Workaround
None.
This issue is tracked with Oracle bug 29231958.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation hangs when using a deleted database name for database creation
Database creation hangs when you use the name of a deleted database to create a new database.
If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.
Hardware Models
All Oracle Database Appliance high-availability environments
Workaround
Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.
For example, the following command deletes the DBSNMP user for the database testdb.
/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP
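A hedged sketch of the same cleanup, parameterized on the database name; testdb and the grid home path are assumptions carried over from the example above, and the executable check lets the script degrade to a message off-appliance.

```shell
# Sketch: remove a stale DBSNMP wallet entry before re-creating a database of the same name.
DBNAME=${1:-testdb}                                  # assumed example database name
CRSCTL=${CRSCTL:-/u01/app/18.0.0.0/grid/bin/crsctl}  # assumed grid home path
if [ -x "$CRSCTL" ]; then
  "$CRSCTL" delete wallet -type CVUDB -name "$DBNAME" -user DBSNMP
else
  echo "crsctl not found at $CRSCTL; run as root on the appliance"
fi
```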
This issue is tracked with Oracle bug 28916487.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin in the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Accelerator volume for data is not created on flash storage
The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.
Hardware Models
Oracle Database Appliance high capacity environments with HDD disks
Workaround
Do not create the database when provisioning the appliance. This creates all required disk groups, including flash. After provisioning the appliance, create the database. The accelerator volume is then created.
This issue is tracked with Oracle bug 28836461.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or is running on the remote node, the clone database operation fails because the paths are not created correctly in the control file.
The clone database operation may also fail with errors if the source database was created less than 60 minutes before the clone operation.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered.
If the source database was created recently, force a checkpoint on the source database before cloning:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Clone database operation fails
For Oracle Database release 12.1 databases, the database clone creation may fail because the default compatible version from Oracle binaries was set to 12.0.0.0.0.
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
- Change the parameter value.
SQL> ALTER SYSTEM SET COMPATIBLE = '12.1.0.2.0' SCOPE=SPFILE;
- Shut down the database.
SQL> SHUTDOWN IMMEDIATE
- Start the database.
SQL> STARTUP
- Verify the parameter for the new value.
SQL> SELECT name, value, description FROM v$parameter WHERE name ='compatible';
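The four steps above can be scripted with a sqlplus here-document. This is a sketch, assuming ORACLE_HOME and the SID are already set for the target 12.1 database; the sqlplus check is an addition so the script only prints a message elsewhere.

```shell
# Sketch: raise COMPATIBLE to 12.1.0.2.0 and bounce the database.
TARGET_COMPATIBLE=12.1.0.2.0
SQLPLUS="${ORACLE_HOME:-/u01/app/oracle}/bin/sqlplus"   # assumed default home path
if [ -x "$SQLPLUS" ]; then
  "$SQLPLUS" -S / as sysdba <<EOF
ALTER SYSTEM SET COMPATIBLE = '$TARGET_COMPATIBLE' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
SELECT name, value FROM v\$parameter WHERE name = 'compatible';
EXIT;
EOF
else
  echo "sqlplus not found; set COMPATIBLE to $TARGET_COMPATIBLE manually"
fi
```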
This issue is tracked with Oracle bug 30309914.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors after restarting CRS
If the Cluster Ready Services (CRS) are stopped or restarted, before stopping the repository and virtual machines, then this may cause errors.
Repository status is unknown and High Availability Virtual IP is offline if the Cluster Ready Services (CRS) are stopped or restarted before stopping the repository and virtual machines.
Hardware Models
Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1
Workaround
Follow these steps:
- Start the High Availability Virtual IP on node1.
# /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0
- Stop the oakVmAgent.py process on dom0.
- Run the lazy unmount option on the dom0 repository mounts:
umount -l mount_points
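A sketch of the recovery sequence above; the GI version path, the havip id, and the repository mount point are placeholders, and the mountpoint guard (an addition) keeps the lazy unmount from touching paths that are not mounted.

```shell
# Sketch: restart the HA VIP, then lazily unmount stale repository mounts on dom0.
GI_HOME=${GI_HOME:-/u01/app/18.0.0.0/grid}   # replace 18.0.0.0 with your GI version
SRVCTL="$GI_HOME/bin/srvctl"
if [ -x "$SRVCTL" ]; then
  "$SRVCTL" start havip -id havip_0          # run on node1
else
  echo "srvctl not found at $SRVCTL"
fi
# After stopping the oakVmAgent.py process on dom0:
for mp in /OVS/Repositories/repo1; do        # replace with your repository mount points
  if mountpoint -q "$mp"; then
    umount -l "$mp"                          # lazy unmount
  else
    echo "skipping $mp (not currently mounted)"
  fi
done
```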
This issue is tracked with Oracle bug 20461930.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Unable to create an Oracle ASM Database for Release 12.1
Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.
Unable to create an Oracle ASM database lower than 12.1.0.2.170814 PSU (12.1.2.12).
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.
Workaround
There is no workaround. If you have Oracle Database 11.2 or 12.1 that is using Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance 12.1.2.12.0 and Database Home 12.1.0.2.170814.
The upgrade path for Oracle Database 11.2 or 12.1 Oracle ASM is as follows:
- If you are on Oracle Database Appliance version 12.1.2.6.0 or later, then upgrade to 12.1.2.12 or higher before upgrading your database.
- If you are on Oracle Database Appliance version 12.1.2.5 or earlier, then upgrade to 12.1.2.6.0, and then upgrade again to 12.1.2.12 or higher before upgrading your database.
This issue is tracked with Oracle bugs 21626377, 27682997, 27250552, and 21780146. These issues are fixed in Oracle Database 12.1.0.2.170814.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Database creation fails for odb-01s DSS databases
When attempting to create a DSS database with shape odb-01s, the job may fail with errors:
CRS-2674: Start of 'ora.test.db' on 'example_node' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/example_node/crs/trace/crsd_oraagent_oracle.trc".
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1
Workaround
There is no workaround. Select an alternate shape to create the database.
This issue is tracked with Oracle bug 27768012.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error when expanding storage on Oracle Database Appliance Virtualized Platform
When you run the oakcli expand storage command on Oracle Database Appliance Virtualized Platforms, an error is encountered. - Inconsistency in available and current system firmware
The current system firmware may be different from the available firmware after applying the latest patch. - DCS logs not collected by the odaadmcli manage diagcollect command
By default, the DCS logs are not collected when you run the odaadmcli manage diagcollect command. - Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - Error in attaching vdisk to guest VM
When multiple vdisks are attached from the oda_base driver domain, the vdisks may not be attached to the guest VM and the VM may not start. - Extensive tracing generated for server processes
Extensive tracing files for the server processes are generated with DRM messages. - Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption. - Incorrect Aura8 firmware value displayed
The Aura8 firmware version displayed in the components list is incorrect. - iRestore, recovery, and update operations on a database fail
iRestore, recovery, and update operations on a database fail, if the ObjectStore Container used by the database already has a copy. - ODA_BASE is in read-only mode or cannot start
The /OVS directory is full and ODA_BASE is in read-only mode. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode. - Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers. - Disk space issues due to Zookeeper logs size
The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues. - Error after running the cleanup script
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake. - Old configuration details persisting in custom environment
The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments. - Incorrect SGA and PGA values displayed
For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with odb36 database shape, the PGA and SGA values are displayed incorrectly. - Error in node number information when running network CLI commands
Network information for node0 is always displayed for some odacli commands when the -u option is not specified. - Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Error when expanding storage on Oracle Database Appliance Virtualized Platform
When you run the oakcli expand storage command on Oracle Database Appliance Virtualized Platforms, the following error is encountered:
# oakcli expand storage -ndisk 5 -enclosure 0
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
sh: /opt/oracle/oak/bin/odaadmcli: No such file or directory
The disk(s) [e0_pd_00][e0_pd_01][e0_pd_02][e0_pd_03][e0_pd_04] is/are not
online in oakd
Aborting ...
This error occurs in Oracle Database Appliance Virtualized Platforms with Oracle Autonomous Health Framework version 19.3, where Oracle Autonomous Health Framework is installed in the DCS home directory (/opt/oracle/dcs) under /opt/oracle/dcs/oracle.ahf/.
# tfactl -version;orachk -v
TFA Version : 193000
TFA Build ID : 20200108023845
ORACHK VERSION: 19.3.0_20200108
# cd /opt/oracle
# ls
dcs extapi oak
# cd dcs
# ls
oracle.ahf
#
Hardware Models
Oracle Database Appliance High-Availability hardware models Virtualized Platform deployments
Workaround
Uninstall the installed Oracle Autonomous Health Framework version and download the latest Oracle Autonomous Health Framework version on both nodes of your Oracle Database Appliance High-Availability deployment.
- Uninstall Oracle Autonomous Health Framework.
# tfactl uninstall -local -deleterepo
For Oracle Autonomous Health Framework version 19.3, manually remove the installed version, by running the following command:
# rpm -e oracle-ahf
- Check that no oracle-ahf rpm is installed on the system.
# rpm -qi oracle-ahf
package oracle-ahf is not installed
#
- Remove the DCS home (/opt/oracle/dcs) and its sub-directories.
# cd /opt/oracle
# rm -rf /opt/oracle/dcs
- Check and ensure that the DCS home directory (/opt/oracle/dcs) does not exist under the /opt/oracle directory on the system.
# cd /opt/oracle/
# ls
extapi oak
#
- Download the latest Oracle Autonomous Health Framework version from My Oracle Support.
- Copy the downloaded zip file to /root on your system, for example, AHF-LINUX_v20.1.1.zip. The zip file name for each Oracle Autonomous Health Framework version may change based on the latest release number.
# /root/AHF-LINUX_v20.1.1.zip
- Create the ahfinstall directory under /tmp.
# cd /tmp
# mkdir ahfinstall
- Unzip the Oracle Autonomous Health Framework zip file into /tmp/ahfinstall.
# cd ahfinstall
# unzip /root/AHF-LINUX_v20.1.1.zip
- Run the ahf_setup installer from /tmp/ahfinstall. During installation, select the default location for installation.
Note: Do not choose the DCS home directory as the default Oracle Autonomous Health Framework location for install.
# ./ahf_setup
- Check the Oracle TFA and ORAchk versions.
# tfactl -version;orachk -v
TFA Version : 201100
TFA Build ID : 20200331131556
ORACHK VERSION: 20.1.1_20200331
#
- Check that Oracle Autonomous Health Framework is installed in the /opt/oracle.ahf directory.
The following example shows the complete procedure:
# tfactl -version;orachk -v
TFA Version : 193000
TFA Build ID : 20200108023845
ORACHK VERSION: 19.3.0_20200108
# tfactl uninstall -local -deleterepo
Starting AHF Uninstall
AHF will be uninstalled on: node2
Do you want to continue with AHF uninstall ? [Y]|N : y
Stopping AHF service on local node node2...
Sleeping for 10 seconds...
Stopping TFA Support Tools...
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
Stopping orachk scheduler ...
Removing orachk cache discovery....
Successfully completed orachk cache discovery removal.
Removed orachk from inittab
Removing AHF setup on node2:
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /opt/oracle.ahf/jre
Removing /opt/oracle.ahf/common
Removing /opt/oracle.ahf/bin
Removing /opt/oracle.ahf/python
Removing /opt/oracle.ahf/analyzer
Removing /opt/oracle.ahf/tfa
Removing /opt/oracle.ahf/orachk
Removing /opt/oracle.ahf/ahf
Removing /u01/app/grid/oracle.ahf/data/node2
Removing /opt/oracle.ahf/install.properties
Removing /u01/app/grid/oracle.ahf/data/repository
Removing /u01/app/grid/oracle.ahf/data
Removing /u01/app/grid/oracle.ahf
Removing AHF Home : /opt/oracle.ahf
# tfactl -version;orachk -v
-bash: /usr/bin/tfactl: No such file or directory
-bash: /usr/bin/orachk: No such file or directory
# rpm -e oracle-ahf
warning: erase unlink of /opt/oracle.ahf failed: No such file or directory
# rpm -qi oracle-ahf
package oracle-ahf is not installed
#
# cd /opt/oracle
# ls
dcs extapi oak
# rm -rf dcs/
# ls
extapi oak
#
#cd /tmp
#mkdir ahfinstall
#cd ahfinstall/
# pwd
/tmp/ahfinstall
# unzip /root/AHF-LINUX_v20.1.1.zip
Archive: /root/AHF-LINUX_v20.1.1.zip
inflating: README.txt
inflating: ahf_setup
# ls
ahf_setup README.txt
# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_85684_2020_04_01-00_15_38.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.1.1 Build Date: 202003311315
Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /u01/app/grid [Free Space : 51458 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1
AHF Data Directory : /u01/app/grid/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : xxxxxxxxxxxx
AHF will also be installed/upgraded on these Cluster Nodes :
1. node1
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/grid/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : N
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
.----------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+---------------+---------------+-------+------+------------+----------------------+
| node2 | RUNNING | 87939 | 5000 | 20.1.1.0.0 | 20110020200331131556 |
'---------------+---------------+-------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.--------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /u01/app/grid/oracle.ahf/data |
| Repository | /u01/app/grid/oracle.ahf/data/repository |
| Diag Directory | /u01/app/grid/oracle.ahf/data/node2/diag |
'-----------------+--------------------------------------------------'
Starting orachk daemon from AHF ...
AHF install completed on node1.
Installing AHF on Remote Nodes :
AHF will be installed on node1, Please wait.
Installing AHF on node1:
[node1] Copying AHF Installer
[node1] Running AHF Installer
Adding rpm Metadata to rpm database on ODA system
RPM File /opt/oracle.ahf/rpms/oracle-ahf-201100-20200331131556.x86_64.rpm
Preparing... ########################################### [100%]
---------
1:oracle-ahf ########################################### [100%]
Upgrading oracle-ahf
warning: erase unlink of /opt/oracle/dcs/oracle.ahf failed: No such file or directory
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_85684_2020_04_01-00_15_38.log to /u01/app/grid/oracle.ahf/data/node2/diag/ahf/
# tfactl -version;orachk -v
TFA Version : 201100
TFA Build ID : 20200331131556
ORACHK VERSION: 20.1.1_20200331
# ls -l /opt/
........
drwxr-xr-x 11 root root 4096 Mar 31 13:16 oracle.ahf
........
#
This issue is tracked with Oracle bug 31014517.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in available and current system firmware
The current system firmware may be different from the available firmware after applying the latest patch.
Oracle Database Appliance X8-2 systems with expander model ORACLE/DE3-24C are at version 0309, but patching of expander firmware from earlier versions to 0309 is not supported in this release. Oracle Database Appliance release 18.8 contains the patch for expander version 0306, so when you run the odacli describe-component command, the available expander version is displayed as 0306.
Oracle Database Appliance X8-2 systems with controller model LSI Logic/0x0097 are at version 16.00.00.00, but patching of controller firmware from earlier versions to 16.00.00.00 is not supported in this release. Oracle Database Appliance release 18.8 contains the patch for controller version 13.00.00.00, so when you run the odacli describe-component command, the available controller version is displayed as 13.00.00.00.
Hardware Models
Oracle Database Appliance X8-2 hardware models
Workaround
Ignore this inconsistency, since this is a display issue and does not affect the installed firmware version.
This issue is tracked with Oracle bug 30787910.
Parent topic: Known Issues When Managing Oracle Database Appliance
DCS logs not collected by the odaadmcli manage diagcollect command
By default, the DCS logs are not collected when you run the odaadmcli manage diagcollect command.
Hardware Models
Oracle Database Appliance hardware models bare metal deployments
Workaround
Use the CLI command odaadmcli manage diagcollect --components dcs to collect DCS logs.
This issue is tracked with Oracle bug 30760941.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
Oracle Database Appliance hardware models bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in attaching vdisk to guest VM
When multiple vdisks are attached from the oda_base driver domain, the vdisks may not be attached to the guest VM and the VM may not start.
When multiple vdisks from the oda_base driver_domain are attached to the guest VM, their entries are not written on the xenstore, vdisks are not attached to the VM, and the VM may not start.
The following errors are logged in xen-hotplug.log in ODA_BASE:
xenstore-write: could not write path backend/vbd/6/51728/node
xenstore-write: could not write path backend/vbd/6/51728/hotplug-error
Hardware Models
Oracle Database Appliance Virtualized Platform
Workaround
- Add the following entry to the /etc/sysconfig/xencommons file in dom0:
XENSTORED_ARGS="--entry-nb=4096 --transaction=512"
- Reboot dom0.
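The file edit in step 1 can be made idempotent so repeated runs do not add duplicate lines. In this sketch the file path is parameterized (an assumption) so the edit can be rehearsed against a scratch copy rather than the real /etc/sysconfig/xencommons.

```shell
# Sketch: add xenstore entry/transaction limits to xencommons exactly once.
CONF=${XENCOMMONS:-/tmp/xencommons.example}   # real file: /etc/sysconfig/xencommons in dom0
LINE='XENSTORED_ARGS="--entry-nb=4096 --transaction=512"'
touch "$CONF"
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"   # append only if absent
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"   # second call is a no-op
# Reboot dom0 afterwards for the change to take effect.
```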
This issue is tracked with Oracle bug 30886365.
Parent topic: Known Issues When Managing Oracle Database Appliance
Extensive tracing generated for server processes
Extensive tracing files for the server processes are generated with DRM messages.
2019-08-07 03:35:33.498*:example1():
[0x3fc1001c][0xf02],[TX][ext0x0,0x0][domid 0x0]
maxnodes 16, key 2663540594, node 2 (inst 3), member_node 0
2019-08-07 03:35:33.498*:example1(): delta 15
2019-08-07 03:35:33.498*:example2():
[0x3fc1001c][0xf11],[TX][ext0x0,0x0][domid 0x0]
maxnodes 16, key 2663540609, node 1 (inst 2), member_node 1
Hardware Models
All Oracle Database Appliance hardware models
Workaround
To disable the DRM tracing, run the following SQL command:
alter system set event='trace [rac_enq] disk disable' scope=spfile;
This issue is tracked with Oracle bug 30166512.
Parent topic: Known Issues When Managing Oracle Database Appliance
Missing DATA, RECO, and REDO entries when dbstorage is rediscovered
Running the odacli update-registry command with the -n all --force or -n dbstorage --force option can result in metadata corruption.
Hardware Models
All Oracle Database Appliance hardware models bare metal deployments
Workaround
Run the -n all option only on migrated systems where all the databases in the system were created using OAKCLI. On other systems that run on the DCS stack, update all components other than dbstorage individually, using the odacli update-registry -n component_name command for each component, excluding dbstorage.
This issue is tracked with Oracle bug 30274477.
Parent topic: Known Issues When Managing Oracle Database Appliance
Incorrect Aura8 firmware value displayed
The Aura8 firmware version displayed in the components list is incorrect.
Models
Oracle Database Appliance X8-2S and X8-2M
Workaround
None.
This issue is tracked with Oracle bug 30340410.
Parent topic: Known Issues When Managing Oracle Database Appliance
iRestore, recovery, and update operations on a database fail
iRestore, recovery, and update operations on a database fail, if the ObjectStore Container used by the database already has a copy.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
When performing iRestore, recovery, and update operations on a database, ensure that files are not copied to the ObjectStore Container.
This issue is tracked with Oracle bug 30529607.
Parent topic: Known Issues When Managing Oracle Database Appliance
ODA_BASE is in read-only mode or cannot start
The /OVS directory is full and ODA_BASE is in read-only mode.
The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom 0) to become 100% used. When Dom 0 is full, ODA_BASE is in read-only mode or cannot start.
Hardware Models
Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.
Oracle Database Appliance X7-2-HA Virtualized Platform.
Workaround
Perform the following to correct or prevent this issue:
- Periodically check the file usage on Dom 0 and clean up the vmcore file, as needed.
- Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart', especially when ODA_BASE is using more than 200 GB (gigabytes) of memory.
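The periodic check in the first step above can be automated. This is a sketch with the mount point parameterized (defaulting to / so the df parsing can be exercised off-appliance) and 90% as an assumed alert threshold.

```shell
# Sketch: warn when the /OVS filesystem nears full and locate vmcore files.
MNT=${OVS_MOUNT:-/}          # real target: /OVS on dom0
THRESHOLD=90                 # assumed alert threshold, percent used
USED=$(df -P "$MNT" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
echo "usage of $MNT: ${USED}%"
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "cleanup needed: look for vmcore files under $MNT/var"
  find "$MNT/var" -maxdepth 2 -name 'vmcore*' 2>/dev/null
fi
```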
This issue is tracked with Oracle bug 26121450.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance Hardware bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.
Use the command odaadmcli shutdown oak to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance
Issues with the Web Console on Microsoft web browsers
Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
- Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
- Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
- After configuring the oda-admin password, the following error is displayed:
Failed to change the default user (oda-admin) account password. Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized
Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.
Hardware Models
All Oracle Database Appliance hardware models (bare metal deployments)
Workaround
To access the Web Console, use either Google Chrome or Firefox.
This issue is tracked with Oracle bugs 30077007, 30099089, and 29887027.
Parent topic: Known Issues When Managing Oracle Database Appliance
Disk space issues due to Zookeeper logs size
The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Rotate the zookeeper log file manually, if the log file size increases, as follows:
1. Stop the DCS agent service for zookeeper on both nodes.
   initctl stop initdcsagent
2. Stop the zookeeper service on both nodes.
   /opt/zookeeper/bin/zkServer.sh stop
3. Clean the zookeeper logs after taking a backup, by manually deleting the existing file or by following steps 4 to 10.
4. Set ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.
   export ZOO_LOG_DIR=/opt/zookeeper/log
5. Switch to ROLLINGFILE, to set the capability to roll. Restart the zookeeper server for the change to take effect.
   export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files and the file sizes.
   zookeeper.log.dir=/opt/zookeeper/log
   zookeeper.log.file=zookeeper.out
   log4j.appender.ROLLINGFILE.MaxFileSize=10MB
   log4j.appender.ROLLINGFILE.MaxBackupIndex=10
7. Start zookeeper on both nodes.
   /opt/zookeeper/bin/zkServer.sh start
8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.
   /opt/zookeeper/bin/zkServer.sh status
   ZooKeeper JMX enabled by default
   Using config: /opt/zookeeper/bin/../conf/zoo.cfg
   Mode: follower
9. Start the DCS agent on both nodes.
   initctl start initdcsagent
10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service for this step.
This issue is tracked with Oracle bug 29033812.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error after running the cleanup script
After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.
The error is caused when you perform the following steps:
1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.
2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.
3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.
   # odacli list-jobs
   DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070
Hardware Models
Oracle Database Appliance X7-2-HA
Workaround
1. Verify the zookeeper status on both nodes before starting dcsagent:
   /opt/zookeeper/bin/zkServer.sh status
   For a single-node environment, the status should be leader, follower, or standalone.
2. Restart the dcsagent on Node0 after running the cleanup.pl script.
   # initctl stop initdcsagent
   # initctl start initdcsagent
Parent topic: Known Issues When Managing Oracle Database Appliance
Old configuration details persisting in custom environment
The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.
On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
This issue does not affect functionality. Manually edit the /etc/security/limits.conf file and remove the invalid entries.
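As a hedged sketch of the manual edit (the entries below are illustrative samples, not the actual image defaults), leftover grid user entries can be removed with sed; test on a copy before touching /etc/security/limits.conf itself:

```shell
# Work on a copy first; the sample entries below are illustrative only.
f=/tmp/limits.conf
printf 'grid soft nofile 131072\ngrid hard nofile 131072\noracle soft nofile 131072\n' > "$f"

# Remove the leftover default entries for the grid user.
sed -i '/^grid[[:space:]]/d' "$f"
cat "$f"
```

Only lines beginning with the grid user name are deleted; entries for the configured single user are left intact.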
This issue is tracked with Oracle bug 26978354.
Parent topic: Known Issues When Managing Oracle Database Appliance
Incorrect SGA and PGA values displayed
For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with odb36 database shape, the PGA and SGA values are displayed incorrectly.
For OLTP databases created with the odb36 shape, the following are the issues:
- sga_target is set as 128 GB instead of 144 GB
- pga_aggregate_target is set as 64 GB instead of 72 GB
For DSS databases created with the odb36 shape, the following are the issues:
- sga_target is set as 64 GB instead of 72 GB
- pga_aggregate_target is set as 128 GB instead of 144 GB
For IMDB databases created with the odb36 shape, the following are the issues:
- sga_target is set as 128 GB instead of 144 GB
- pga_aggregate_target is set as 64 GB instead of 72 GB
- inmemory_size is set as 64 GB instead of 72 GB
Hardware Models
Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M
Workaround
Reset the PGA and SGA sizes manually.
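A hedged sketch of the manual reset, using the intended odb36 values listed above (run as SYSDBA; for DSS databases swap the SGA and PGA values, and the inmemory_size line applies only to IMDB databases):

```
-- OLTP and IMDB odb36 targets (DSS: sga_target=72G, pga_aggregate_target=144G)
ALTER SYSTEM SET sga_target=144G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=72G SCOPE=SPFILE;
-- IMDB databases only:
ALTER SYSTEM SET inmemory_size=72G SCOPE=SPFILE;
-- Restart the database instance for the SPFILE changes to take effect.
```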
This issue is tracked with Oracle bug 27036374.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in node number information when running network CLI commands
Network information for node0 is always displayed for some odacli commands when the -u option is not specified.
If the -u option is not provided, then the describe-networkinterface, list-networks, and describe-network odacli commands always display the results for node0 (the default node), irrespective of whether the command is run from node0 or node1.
Hardware Models
Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1
Workaround
Specify the -u option in the odacli command for details about the current node.
This issue is tracked with Oracle bug 27251239.
Parent topic: Known Issues When Managing Oracle Database Appliance
Unrecognized Token Messages Appear in /var/log/messages
After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.
Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:
Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"
You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.
Hardware Models
Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand
Workaround
Perform the following to remove the parameters:
1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters.
   cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
   max_seq_redisc 0
   rereg_on_guid_migr FALSE
   aguid_inout_notice FALSE
   sm_assign_guid_func uniq_count
   reports 2
   per_module_logging FALSE
   consolidate_ipv4_mask 0xFFFFFFFF
2. Reboot. The messages will not appear after rebooting the node.
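The parameter removal above can be scripted. This sketch works on a copy of the file (the sample lines are illustrative stand-ins for the real /etc/opensm/opensm.conf) and comments the tokens out rather than deleting them, so the original values remain visible:

```shell
# Work on a copy; the sample lines stand in for the real /etc/opensm/opensm.conf.
f=/tmp/opensm.conf
printf 'max_seq_redisc 0\nreports 2\nsweep_interval 10\n' > "$f"

# Comment out each unrecognized token so opensm stops logging the messages.
sed -i -E 's/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)\b/# &/' "$f"
cat "$f"
```

Recognized parameters such as the sweep_interval sample line are left untouched; only the seven listed tokens are commented.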
This issue is tracked with Oracle bug 25985258.
Parent topic: Known Issues When Managing Oracle Database Appliance