4 Known Issues with the Oracle Database Appliance

The following are known issues with deploying, updating, and managing Oracle Database Appliance in this release.

Oracle ADVM resilvering processes impacting performance after upgrading to 18.3

Upgrading to Oracle Database Appliance 18.3 or later can impact performance on some Oracle Database Appliance systems due to Oracle ASM Dynamic Volume Manager (Oracle ADVM) processes consuming excessive CPU.

When you upgrade to Oracle Database Appliance 18.3, storage disks may be resilvered (synchronized again) for mirrored volumes on an Oracle ASM disk group with an Allocation Unit (AU) size greater than 1 MB. The larger the Oracle Automatic Storage Management Cluster File System (Oracle ACFS) volume, the greater the impact.
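
To see which disk groups use an AU size greater than 1 MB, you can query Oracle ASM from SQL*Plus (a quick check; the size is stored in bytes and converted to MB here):

SQL> SELECT name, allocation_unit_size/1048576 AS au_size_mb FROM v$asm_diskgroup;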

Hardware Models

All Oracle Database Appliance hardware models, particularly the X5-2 and X7-2 High Capacity models that use 8 TB HDDs.

Workaround

For information about resolving this issue, see Oracle Support Note 2525427.1 at:

https://support.oracle.com/rs?type=doc&id=2525427.1

This issue is tracked with Oracle bug 29520544.

Onboard public network interfaces do not come up after patching or imaging

When you apply patches or re-image Oracle Database Appliance, the onboard public network interfaces may not come up because of a faulty status reported in Oracle ILOM.

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M

Workaround

  1. Clear all faults on the ILOM (see the example after this list).
  2. Reset or power cycle the host.
  3. Check that the ILOM has the most current version of firmware patches.
  4. Check that the X7-2 On Board Dual Port 10Gb/25Gb SFP28 Ethernet Controller firmware is up-to-date.
  5. Collect a new snapshot and monitor your appliance to confirm that the faults did not recur.
  6. Contact Oracle Support if this issue recurs.
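
For example, from the Oracle ILOM command-line interface, you can list open faults and clear a fault on a component as follows (a sketch; /SYS/MB is a placeholder for whatever component ILOM reports as faulted on your system):

-> show /SP/faultmgmt
-> set /SYS/MB clear_fault_action=true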

This issue is tracked with Oracle bugs 29206350 and 28308268.

Stack migration fails during patching

After patching the OAK stack, the following error is encountered when running odacli commands:

DCS-10001:Internal error encountered: java.lang.String cannot be cast to 
com.oracle.dcs.agent.model.DbSystemNodeComponents.

Hardware Models

All Oracle Database Appliance Hardware models

Workaround

  1. Rename the /etc/ntp.conf file temporarily and retry patching the appliance.
    # mv /etc/ntp.conf /etc/ntp.conf.orig
  2. After patching is successful, restore the /etc/ntp.conf file.
    # mv /etc/ntp.conf.orig /etc/ntp.conf

This issue is tracked with Oracle bug 29216717.

DCS-10045:Validation error encountered: Error retrieving the cpucores

When deploying the appliance, the DCS-10045 error appears because the CPU cores of the second node cannot be retrieved.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Remove the following directory on Node0: /opt/oracle/dcs/repo/node_0 (see the complete sequence after these steps)

  2. Remove the following directory in Node1: /opt/oracle/dcs/repo/node_1

  3. Restart the dcs-agent on both nodes.

    cd /opt/oracle/dcs/bin
    initctl stop initdcsagent
    initctl start initdcsagent
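
For example, the complete sequence on Node0 is as follows; on Node1, remove the node_1 directory instead:

# rm -rf /opt/oracle/dcs/repo/node_0
# cd /opt/oracle/dcs/bin
# initctl stop initdcsagent
# initctl start initdcsagent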

This issue is tracked with Oracle bug 27527676.

Database creation hangs when using a deleted database name

If you delete an 11.2.0.4 database and then create a new database with the same name as the deleted database, database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating an 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user for the database testdb:

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 

This issue is tracked with Oracle bug 28916487.

Some files missing after patching the appliance

Some files are missing after patching the appliance.

Hardware Models

Oracle Database Appliance X7-2 hardware models

Workaround

Before patching the appliance, back up the /etc/sysconfig/network-scripts/ifcfg-em* files, and compare the contents after patching. If any ifcfg-em* files or parameters are missing, they can be recovered from the backup, as in the sketch below.
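
For example (a sketch, assuming /root has enough space for the backup):

# cp -a /etc/sysconfig/network-scripts /root/network-scripts.backup

After patching, compare the two directories and restore anything missing:

# diff -r /root/network-scripts.backup /etc/sysconfig/network-scripts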

This issue is tracked with Oracle bug 28308268.

Error when updating 12.1.0.2 database homes

When updating Oracle Database homes from 12.1.0.2 to 18.3 using the command odacli update-dbhome -i dbhomeId -v 18.3.0.0.0, the following error may appear:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Apply the patch for bug 24385625, and then run odacli update-dbhome -i dbhomeId -v 18.3.0.0.0 again to fix the issue.

This issue is tracked with Oracle bug 28975529.

ODA_BASE is in read-only mode or cannot start

The /OVS directory is full and ODA_BASE is in read-only mode.

The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom0) to become 100% full. When Dom0 is full, ODA_BASE is in read-only mode or cannot start.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Perform the following to correct or prevent this issue:

  • Periodically check the file usage on Dom0 and clean up the vmcore file as needed.

  • Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart', as shown below. This is especially important when ODA_BASE is using more than 200 GB (gigabytes) of memory.
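
    For example, in the vm.cfg file, change:

    on_crash = 'coredump-restart'

    to:

    on_crash = 'restart'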

This issue is tracked with Oracle bug 26121450.

Space issues with /u01 directory after patching

After patching to 18.3, the /u01/app/18.0.0.0/grid/log/hostname/client directory fills quickly with gpnp logs.

Hardware Models

All Oracle Database Appliance hardware models for virtualized platforms deployments (X3-2 HA, X4-2 HA, X5-2 HA, X6-2 HA, X7-2 HA)

Workaround

  1. Run the following commands on both ODA_BASE nodes:

    On Node0:

    rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
    oakcli enable startrepo -node 0
    oakcli stop oak
    pkill odaBaseAgent
    oakcli start oak

    On Node1:

    rm -rf /u01/app/18.0.0.0/grid/log/hostname/client/
    oakcli enable startrepo -node 1
    oakcli stop oak
    pkill odaBaseAgent
    oakcli start oak

This issue is tracked with Oracle bug 28865162.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

If the log file size increases, rotate the zookeeper log files manually, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean up the zookeeper logs after taking a backup, either by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR environment variable to a different log directory before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE to enable log rolling, then restart the zookeeper server for the change to take effect.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file to limit the number of backup files and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader, follower, or standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error encountered after running cleanup.pl

Errors are encountered when running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Errors when deleting database storage after migration to DCS stack

After migrating to the DCS stack, some volumes in the database storage cannot be deleted.

If you create Oracle ACFS database storage with the oakcli create dbstorage command for a multitenant environment (CDB), without a database, in the OAK stack, and then migrate to the DCS stack, then when you delete the database storage, only the DATA volume is deleted; the REDO and RECO volumes are not.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create a database on the Oracle ACFS database storage with the same name as the database for which you want to delete the storage volumes, and then delete that database. This cleans up all the volumes and file systems. A sketch of the sequence follows.
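
The following is a hedged sketch of that sequence using odacli; the -r storage option and the placeholder names are assumptions, not taken from this issue's text:

# odacli create-database -n dbname -r ACFS
# odacli list-databases
# odacli delete-database -i db_id

Here db_id is the ID that odacli list-databases reports for the newly created database.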

This issue is tracked with Oracle bug 28987135.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage for databases created during provisioning of the appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning without a database creates all the required disk groups, including flash. After provisioning the appliance, create the database; the accelerator volume is then created.

This issue is tracked with Oracle bug 28836461.

Database connection fails after database upgrade

After upgrading the database from 11.2 to 12.1.0.2, the database connection fails because of the job_queue_processes value.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

  1. Before upgrading the database, note the value of the job_queue_processes parameter (for example, x). If the value is less than 4, set it to 4.

  2. Upgrade the database to 12.1.0.2.

  3. After upgrading the database, set job_queue_processes back to the earlier value, x (see the example after these steps).
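
For example, from SQL*Plus (a sketch; replace x with the value noted in step 1):

SQL> SHOW PARAMETER job_queue_processes
SQL> ALTER SYSTEM SET job_queue_processes=4;

After the upgrade, restore the original value:

SQL> ALTER SYSTEM SET job_queue_processes=x;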

This issue is tracked with Oracle bug 28987900.

Failure in creating 18.3 database with DSS database shape odb1s

When creating an 18.3 database with the DSS database shape odb1s, the creation fails with the following error message:

ORA-04031: unable to allocate 6029352 bytes of shared memory ("shared
pool","unknown object","sga heap(1,0)","ksipc pct")

Hardware Models

All Oracle Database Appliance Hardware Models

Workaround

None.

This issue is tracked with Oracle bug 28444642.

Restriction in moving database home for database shape greater than odb8

When creating databases, there is a policy restriction against database shapes odb8 or higher for Oracle Database Standard Edition.

To maintain consistency with this policy restriction, do not migrate any database to an Oracle Database Standard Edition database home where the database shape is greater than odb8. The database migration may not fail, but it does not adhere to the policy rules.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

None.

This issue is tracked with Oracle bug 29003323.

Errors in clone database operation

Clone database operation fails due to the following errors.

If the DB_NAME and DB_UNIQUE_NAME of the source database are not the same, or they are in mixed case (a mix of uppercase and lowercase letters), or the source database is a single-instance or Oracle RAC One Node database running on the remote node, then the clone database operation fails because the paths are not created correctly in the control file.

Hardware Models

All Oracle Database Appliance high-availability hardware models for bare metal deployments

Workaround

Create the clone database from a source database that has the same DB_NAME and DB_UNIQUE_NAME, in lowercase letters, and whose instance is running on the same node from which the clone database creation is triggered.

This issue is tracked with Oracle bugs 29002231, 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, and 28986950.

Unable to use the Web Console on Microsoft web browsers

Oracle Appliance Manager Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.

Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, X6-2L

Workaround

To access the Web Console, use either Google Chrome or Firefox.

This issue is tracked with Oracle bugs 27798498, 27028446, and 27799452.

Error in patching database home locally using the Web Console

Applying a database home patch locally through the Web Console creates a pre-patch submission request.

Models

All Oracle Database Appliance Hardware Models

Workaround

Use the odacli update-dbhome --local command to patch database homes locally, as in the sketch below.
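
For example (a sketch; the -i and -v values follow the same pattern as the update-dbhome example earlier in this section):

# odacli update-dbhome --local -i dbhomeId -v 18.3.0.0.0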

This issue is tracked with Oracle bug 28909972.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error occurs when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on both nodes before starting the dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    The status should be leader, follower, or standalone; in a single-node environment, it is standalone.

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # initctl stop initdcsagent 
    # initctl start initdcsagent

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In this state, if the Secure Eraser tool is run, the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.

This issue is tracked with Oracle bug 28547433.

Repository in offline or unknown status after patching

After rolling or local patching of both nodes to 18.3, repositories are in an offline or unknown state on node 0 or node 1.

The command oakcli start repo <reponame> fails with the error:

OAKERR8038 The filesystem could not be exported as a crs resource  
OAKERR:5015 Start repo operation has been disabled by flag

Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

Log in to oda_base on either node and run the following two commands:

oakcli enable startrepo -node 0  
oakcli enable startrepo -node 1

The commands start the repositories and make them available online.

This issue is tracked with Oracle bug 27539157.

Oracle ASR version is 5.5.1 after re-imaging Oracle Database Appliance

Oracle Auto Service Request (Oracle ASR) is not updated after re-imaging Oracle Database Appliance.

When re-imaging Oracle Database Appliance to Release 18.3, the Oracle Auto Service Request (ASR) RPM is not updated to 18.3. Oracle ASR is updated when you apply the patches for Oracle Database Appliance Release 18.3.

Hardware Models

All Oracle Database Appliance deployments that have Oracle Auto Service Request (ASR).

Workaround

Update to the latest server patch for the release, as in the sketch below.
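
For example, on a bare metal system, the server patch could be applied as follows (a sketch; substitute the version for your release):

# odacli update-server -v 18.3.0.0.0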

This issue is tracked with Oracle bug 28933900.

11.2.0.4 databases fail to start after patching

After patching Oracle Database Appliance to release 18.3, databases of version 11.2.0.4 fail to start.

Hardware Models

All Oracle Database Appliance Hardware models

Workaround

Databases of versions 11.2.0.4.170814 and 11.2.0.4.171017 must be manually started after patching to Oracle Database Appliance release 18.3.

Start the databases with the command:
srvctl start database -db db_unique_name

This issue is tracked with Oracle bug 28815716.

Database creation fails when multiple SCAN listeners exist

Creation of an 11.2 database fails when multiple SCAN listeners exist.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Apply patch 22258643 to fix the issue.

This issue is tracked with Oracle bug 29056579.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand-compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file, in bare metal deployments and in Dom0 in virtualized platform environments, to remove the parameters (see the sed sketch after this list).

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.
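
As an alternative to editing the file by hand in step 1, the parameters could be commented out with a sed one-liner (a sketch; back up the file first):

# cp /etc/opensm/opensm.conf /etc/opensm/opensm.conf.bak
# sed -i -E 's/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask)/#&/' /etc/opensm/opensm.conf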

This issue is tracked with Oracle bug 25985258.

Unable to patch an empty Oracle Database 12.1 dbhome

Cannot patch an empty Oracle Database Home (dbhome) due to an issue with Oracle Database auto patch.

When attempting to patch an empty dbhome, an error message similar to the following appears:

ERROR: 2017-12-19 18:48:02: Unable to apply db patch on the following Homes : /u01/app/oracle/product/12.1.0.2/dbhome_name

The following is an example excerpt from the dbupdate log:

  OPATCHAUTO-68036: Topology empty. 
  OPATCHAUTO-68036: The topology was empty, unable to proceed. 
  OPATCHAUTO-68036: Check the log for more information. 
  OPatchAuto failed.
opatchauto failed with error code 42

Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

The issue occurs when the dbhome does not have any databases. The workaround is to create a database before patching.

This issue is tracked with Oracle bugs 27292674 and 27126871.

Errors after restarting CRS

If Oracle Cluster Ready Services (CRS) is stopped or restarted before the repository and virtual machines are stopped, errors may occur.

The repository status is unknown and the High Availability Virtual IP is offline if CRS is stopped or restarted before the repository and virtual machines are stopped.

Hardware Models

Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1

Workaround

Follow these steps:

  1. Start the High Availability Virtual IP on node1.
    # /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0  
  2. Stop the oakVmAgent.py process on dom0 (see the sketch after these steps).

  3. Run the lazy unmount option on the dom0 repository mounts:
    umount -l mount_points
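
For step 2, the process can be located and stopped with standard tools (a sketch; verify the match before killing):

# ps -ef | grep oakVmAgent.py
# pkill -f oakVmAgent.py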

This issue is tracked with Oracle bug 20461930.

Error in node number information when running network CLI commands

Network information for node0 is always displayed for some odacli commands when the -u option is not specified.

If the -u option is not provided, then the describe-networkinterface, list-networks, and describe-network odacli commands always display the results for node0 (the default node), irrespective of whether the command is run from node0 or node1.

Hardware Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

Specify the -u option in the odacli command to obtain details for the current node, as in the sketch below.
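
For example (a sketch; it assumes the -u option takes the node number, which this issue's text does not spell out):

# odacli list-networks -u 1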

This issue is tracked with Oracle bug 27251239.

Error when patching Oracle Database 11.2.0.4

When patching Oracle Database 11.2.0.4, the log file may show some errors.

When patching Oracle Database 11.2.0.4 homes, the following error may be logged in alert.log.

ORA-00600: internal error code, arguments: [kgfmGetCtx0], [kgfm.c],
[2840], [ctx], [], [], [], [], [], [], [], []

After patching completes, the error is no longer raised.

Hardware Models

Oracle Database Appliance X7-2-HA Virtualized Platform, X6-2-HA Bare Metal and Virtualized Platform, X5-2, X4-2, X3-2, and V1.

Workaround

There is no workaround for this issue.

This issue is tracked with Oracle bug 28032876.

OAKERR:7007 Error encountered while starting VM

When starting a virtual machine (VM), an error message appears that the domain does not exist.

If a VM was cloned in Oracle Database Appliance 12.1.2.10 or earlier, you cannot start the HVM domain VMs in Oracle Database Appliance 12.1.2.11.

This issue does not impact newly cloned VMs in Oracle Database Appliance 12.1.2.11 or any other type of VM cloned on older versions. The VM templates were fixed in 12.1.2.11.0.

When trying to start the VM (vm4 in this example), the output is similar to the following:

# oakcli start vm vm4 -d 
.
Start VM : test on Node Number : 0 failed.
DETAILS:
        Attempting to start vm on node:0=>FAILED.  
<OAKERR:7007 Error  encountered while starting VM -  Error: Domain 'vm4' does not exist.>                        

The following is an example of the vm.cfg file for vm4:

vif = ['']
name = 'vm4'
extra = 'NODENAME=vm4'
builder = 'hvm'
cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
vcpus = 2
memory = 2048
cpu_cap = 0
vnc = 1
serial = 'pty'
disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
maxvcpus = 2
maxmem = 2048

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Delete the extra = 'NODENAME=vm_name' line from the vm.cfg file of the VM that failed to start.

  1. Open the vm.cfg file for the virtual machine (vm) that failed to start.

    • Dom0: /Repositories/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

    • ODA_BASE: /app/sharedrepo/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

  2. Delete the following line: extra = 'NODENAME=vm_name'. For example, if virtual machine vm4 failed to start, delete the line extra = 'NODENAME=vm4'.

    vif = ['']
    name = 'vm4'
    extra = 'NODENAME=vm4' 
    builder = 'hvm'
    cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
    vcpus = 2
    memory = 2048
    cpu_cap = 0
    vnc = 1
    serial = 'pty'
    disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
    maxvcpus = 2
    maxmem = 2048
  3. Start the virtual machine on Oracle Database Appliance 12.1.2.11.0.

    # oakcli start vm vm4

This issue is tracked with Oracle bug 25943318.

FLASH disk group is not mounted when patching or provisioning the server

The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.

This issue occurs when the node reboots and then you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database. When patching or provisioning a server with Oracle Database Appliance 12.2.1.2, you will encounter an SSH disconnect issue and an error.
# oakcli update -patch 12.2.1.2 --server

**************************************************************************** 
*****   For all X5-2 customers with 8TB disks, please make sure to     *****
*****   run storage patch ASAP to update the disk firmware to "PAG1".  *****
**************************************************************************** 
INFO: DB, ASM, Clusterware may be stopped during the patch if required 
INFO: Both Nodes may get rebooted automatically during the patch if required 
Do you want to continue: [Y/N]?: y 
INFO: User has confirmed for the reboot 
INFO: Patch bundle must be unpacked on the second Node also before applying the patch 
Did you unpack the patch bundle on the second Node? : [Y/N]? : y  
Please enter the 'root'  password :  
Please re-enter the 'root' password:  
INFO: Setting up the SSH 
..........Completed .....  
... ...
INFO: 2017-12-26 00:31:22: -----------------Patching ILOM & BIOS----------------- 
INFO: 2017-12-26 00:31:22: ILOM is already running with version 3.2.9.23r116695 
INFO: 2017-12-26 00:31:22: BIOS is already running with version 30110000 
INFO: 2017-12-26 00:31:22: ILOM and BIOS will not be updated  
INFO: 2017-12-26 00:31:22: Getting the SP Interconnect state... 
INFO: 2017-12-26 00:31:44: Clusterware is running on local node 
INFO: 2017-12-26 00:31:44: Attempting to stop clusterware and its resources locally 
Killed 
# Connection to server.example.com closed. 

The Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services, and Event Manager are online. However, when you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database, you receive an error: flash space is 0.

Hardware Models

Oracle Database Appliance X5-2, X6-2-HA, and X7-2-HA SSD systems.

Workaround

Manually mount the FLASH disk group before creating an Oracle ACFS database.

Perform the following steps as the GRID owner:

  1. Set the environment variables as the grid OS user on node0:

    export ORACLE_SID=+ASM1
    export ORACLE_HOME=/u01/app/12.2.0.1/grid
    
  2. Log on to the Oracle ASM instance as sysasm:

    $ORACLE_HOME/bin/sqlplus / as sysasm
  3. Execute the following SQL command:

    SQL> ALTER DISKGROUP FLASH MOUNT;

This issue is tracked with Oracle bug 27322213.

Unable to create an Oracle ASM Database for Release 12.1

Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.

You cannot create an Oracle ASM database lower than the 12.1.0.2.170814 PSU (12.1.2.12).

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

There is no workaround. If you have Oracle Database 11.2 or 12.1 databases that use Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance 12.1.2.12.0 and Database Home 12.1.0.2.170814.

The upgrade path for Oracle Database 11.2 or 12.1 Oracle ASM is as follows:

  • If you are on Oracle Database Appliance version 12.1.2.6.0 or later, then upgrade to 12.1.2.12 or higher before upgrading your database.

  • If you are on Oracle Database Appliance version 12.1.2.5 or earlier, then upgrade to 12.1.2.6.0, and then upgrade again to 12.1.2.12 or higher before upgrading your database.

This issue is tracked with Oracle bugs 21626377, 27682997, and 21780146. The issues are fixed in Oracle Database 12.1.0.2.170814.

Old configuration details persisting in custom environment

The configuration file /etc/security/limits.conf contains default entries even in the case of custom environments.

On custom environments, when a single user is configured for both grid and oracle, the default grid user entries for the image are not removed from the /etc/security/limits.conf file.

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

This issue does not affect functionality. Manually edit the /etc/security/limits.conf file and remove the invalid entries.

This issue is tracked with Oracle bug 27036374.

Database creation fails for odb-01s DSS databases

When attempting to create a DSS database with shape odb-01s, the job may fail with the following error:

CRS-2674: Start of 'ora.test.db' on 'rwsoda609c1n1' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/rwsoda609c1n2/crs/trace/crsd_oraagent_oracle.trc".

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

There is no workaround. Select an alternate shape to create the database.

This issue is tracked with Oracle bug 27768012.

Incorrect SGA and PGA values displayed

For online transaction processing (OLTP), In-Memory (IMDB), and decision support services (DSS) databases created with the odb36 database shape, the PGA and SGA values are displayed incorrectly.

For OLTP databases created with the odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

For DSS databases created with the odb36 shape, following are the issues:

  • sga_target is set as 64 GB instead of 72 GB

  • pga_aggregate_target is set as 128 GB instead of 144 GB

For IMDB databases created with the odb36 shape, following are the issues:

  • sga_target is set as 128 GB instead of 144 GB

  • pga_aggregate_target is set as 64 GB instead of 72 GB

  • inmemory_size is set as 64 GB instead of 72 GB

Models

Oracle Database Appliance X7-2-HA, X7-2S, and X7-2M

Workaround

Reset the PGA and SGA sizes manually, as in the sketch below.
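
For example, for an OLTP database created with the odb36 shape, the documented targets could be restored as follows (a sketch; values set with SCOPE=SPFILE take effect after a database restart):

SQL> ALTER SYSTEM SET sga_target=144G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target=72G SCOPE=SPFILE;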

This issue is tracked with Oracle bug 27036374.