4 Known Issues with Oracle Database Appliance in This Release

The following are known issues when deploying, updating, and managing Oracle Database Appliance in this release.

Known Issues When Deploying Oracle Database Appliance

Understand the known issues when provisioning or deploying Oracle Database Appliance.

Node is inaccessible when patching or provisioning appliance with reduced CPU core count to release 19.4

On an Oracle Database Appliance system with a reduced CPU core count, the node is inaccessible after provisioning or patching to Oracle Database Appliance release 19.4.

An Oracle Database Appliance system is provisioned with all CPU cores enabled by default. You can reduce the CPU core count after provisioning the appliance with the command odacli update-cpucore -c count. If you provision an Oracle Database Appliance system with release 19.4, and reduce the CPU core count as a postinstallation task, then the nodes become inaccessible after rebooting.

The SYSLOG contains the following log message:
unable to handle kernel paging request at ffff886683719438

This issue also occurs when you patch an Oracle Database Appliance system with a reduced CPU core count to Oracle Database Appliance release 19.4.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

To reduce the CPU core count on a newly provisioned Oracle Database Appliance system successfully, follow these steps:
  1. To reduce the CPU core count on a newly provisioned Oracle Database Appliance system, run the command:
    odacli update-cpucore -c count

    Verify that the job completed successfully.

  2. Update the BIOS and set the enabled cores per socket value.

    Update the BIOS for both nodes with half of the count you set in Step 1. For example, if you set the value to 36, then update the BIOS with the value 18.
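
For example, assuming a target count of 36 (the value used in the BIOS example above), step 1 runs the update and then verifies the resulting job, where job_id is the identifier returned by the update command:

    odacli update-cpucore -c 36
    odacli describe-job -i job_id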

To increase the CPU core count on an Oracle Database Appliance system successfully, follow these steps:
  1. To increase the CPU core count on an Oracle Database Appliance system, run the command:
    odacli update-cpucore -c count

    Verify that the job completed successfully.

  2. Reboot the nodes, update the BIOS, and set the enabled cores per socket value.

    Update the BIOS for both nodes with half of the count you set in Step 1. For example, if you set the value to 36, then update the BIOS with the value 18.

  3. Take a backup of the DCS agent files /etc/init/initdcsagent*.
  4. Stop the DCS agent on both nodes.
    initctl stop initdcsagent
  5. Stop the cluster resource on the system.
    /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -f
  6. Verify that the cluster resource stopped successfully and then remove the DCS agent configuration files /etc/init/initdcsagent*.
  7. Reboot the nodes, update the BIOS, and set the enabled cores per socket value.

    Update the BIOS for both nodes with half of the count you set in Step 1. For example, if you set the value to 36, then update the BIOS with the value 18.

  8. After the reboot, verify that the BIOS has the correct number of CPU cores enabled.

    For High-Availability systems, verify that both nodes have the same number of CPU cores enabled. Use the command lscpu to verify the CPU core count.

  9. Ensure that Oracle Clusterware and the managed resources are running.
  10. Move back the DCS agent files /etc/init/initdcsagent* that you backed up in step 3 and removed in step 6.
  11. Restart the DCS agent.
    initctl start initdcsagent
  12. Set the CPU core count to the desired value with the command odacli update-cpucore -c count on one node only.
  13. Verify that the job completed successfully. Also verify that the output value is the same when you run the commands odacli describe-cpucore and lscpu, as shown in the example below.
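
A minimal verification for step 13 (output formats vary by release):

    odacli describe-cpucore
    lscpu | grep -E '^CPU\(s\):'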

This issue is tracked with Oracle bug 30313635.

Database backup and clone operations fail on Oracle ASM disks with Oracle ASMFD enabled

Database backup or clone operations fail on Oracle Databases that have Oracle Automatic Storage Management Filter Driver (Oracle ASMFD) enabled.

The following error message is displayed.
Failed to run RMAN command. Please refer log at location...

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

Follow these steps:
  1. Generate the JSON file from the Web Console and save it.
  2. Set "enableAFD": "FALSE" in the JSON file:
    "language": "en", 
    "enableAFD": "FALSE" 
    "scan": null 
  3. Provision the appliance again without Oracle ASMFD.
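
For example, assuming the edited file is saved as provision.json (the file name is illustrative):

    odacli create-appliance -r provision.json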

This issue is tracked with Oracle bugs 30423790 and 30404303.

Database recovery fails for Oracle Database Appliance release 19.4 databases

Database iRestore operation fails on Oracle Database Appliance release 19.4 databases.

One of the following error messages may be displayed.
DCS-10001:Internal error encountered: Required redo space in MB:8192.0 is not available for database :KV514d. 
DCS-10001:Internal error encountered:Failed to run RMAN command. Please refer log.
DCS-10001:Internal error encountered: Unable to find database clones for the given database version.

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

None.

This issue is tracked with Oracle bugs 30423720, 30405684, and 30396123.

Database creation fails for Oracle Database Appliance release 19.4 databases

Database creation fails for Oracle Database Appliance release 19.4 databases.

The following error message is displayed.
DCS-10001:Internal error encountered: Unable to find database clones for the given database version. 

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

Specify the database version in the odacli create-database command.

odacli create-database -n test_db -v 19.4.0.0 -m 

This issue is tracked with Oracle bug 30423790.

VLAN public network is not listed in the odacli list-networks command

The odacli list-networks command does not list the VLAN public network.

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

After running configure-firstnet, restart the DCS agent.
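
For example, using the agent control commands shown elsewhere in these notes:

# initctl stop initdcsagent
# initctl start initdcsagent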

This issue is tracked with Oracle bug 30399409.

Only one network interface displayed after rebooting node

After rebooting the node, only one network interface is displayed.

When both nodes reboot or power on simultaneously, only one of the HAIP interfaces is used and Oracle ASM may not be able to start. The netstat command returns only one of the two interfaces.
# netstat -nr | grep 169 
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
Ensure that ora.cluster_interconnect.haip is ONLINE on one node before rebooting (or powering on) the other node.
# /u01/app/18.0.0.0/grid/bin/crsctl stat res -t -init | grep -A1 ora.cluster_interconnect.haip
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details     
--------------------------------------------------------------------------------
Cluster Resources 
--------------------------------------------------------------------------------
ora.cluster_interconnect.haip 
      1        ONLINE  ONLINE       <hostname>            STABLE 

Hardware Models

Oracle Database Appliance X4-2 and X7-2 bare metal deployments. X5-2 and X6-2 bare metal deployments with InfiniBand interconnect are not affected.

Workaround

If both nodes are already rebooted simultaneously and only one interface is configured for high availability, then stop CRS on both nodes and start CRS on one node at a time.
  1. Log in as root on any node and stop the cluster with the -all option.
    # /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
  2. Stop crs on both nodes.
    [Node 0] 
    # /u01/app/18.0.0.0/grid/bin/crsctl stop crs 
    [Node 1] 
    # /u01/app/18.0.0.0/grid/bin/crsctl stop crs
  3. Start crs on each node, one by one.
    [Node 0] 
    # /u01/app/18.0.0.0/grid/bin/crsctl start crs 
    [Node 1] 
    # /u01/app/18.0.0.0/grid/bin/crsctl start crs 

This issue is tracked with Oracle bug 29613692.

Snapshot databases can only be created on the primary database

For the oakcli stack, a snapshot database can be created only from the primary database, not from the standby database.

If the database name (db_name) and the database unique name (db_unique_name) are different when creating a snapshot database, then the following error is encountered:

WARNING: 2018-09-13 12:47:18: Following data files are not on SNAP location

Hardware Models

All Oracle Database Appliance hardware models for Virtualized Platform

Workaround

None. For the oakcli stack, create the snapshot database from the primary database, not from the standby database.

This issue is tracked with Oracle bug 28649665.

DCS-10045:Validation error encountered: Error retrieving the cpucores

When deploying the appliance, the DCS-10045 error appears because there is an error retrieving the CPU cores of the second node.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Remove the following directory in Node0: /opt/oracle/dcs/repo/node_0

  2. Remove the following directory in Node1: /opt/oracle/dcs/repo/node_1

  3. Restart the dcs-agent on both nodes.

    cd /opt/oracle/dcs/bin
    initctl stop initdcsagent
    initctl start initdcsagent

This issue is tracked with Oracle bug 27527676.

Database creation hangs when using a deleted database name

Database creation hangs when you create a new database with the name of a deleted database.

If you delete an 11.2.0.4 database, and then create a new database with the same name as the deleted database, then database creation hangs while unlocking the DBSNMP user for the database.

Hardware Models

All Oracle Database Appliance high-availability environments

Workaround

Before creating the 11.2.0.4 database with the same name as the deleted database, delete the DBSNMP user, if the user exists.

For example, the following command deletes the DBSNMP user entry for the database testdb:

/u01/app/18.0.0.0/grid/bin/crsctl delete wallet -type CVUDB -name testdb -user DBSNMP 

This issue is tracked with Oracle bug 28916487.

Error encountered after running cleanup.pl

Errors are encountered when running odacli commands after running cleanup.pl.

After running cleanup.pl, when you try to use odacli commands, the following error is encountered:

DCS-10042:User oda-cliadmin cannot be authorized.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:

# rm -rf /opt/oracle/dcs/conf/.authconfig  
# /opt/oracle/dcs/bin/setupAgentAuth.sh 

This issue is tracked with Oracle bug 29038717.

Accelerator volume for data is not created on flash storage

The accelerator volume for data is not created on flash storage, for databases created during provisioning of appliance.

Hardware Models

Oracle Database Appliance high capacity environments with HDD disks

Workaround

Do not create the database when provisioning the appliance. Provisioning then creates all required disk groups, including the flash disk group. After provisioning the appliance, create the database; the accelerator volume is then created.
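
For example, after provisioning completes (the database name and version are illustrative):

odacli create-database -n testdb -v 19.4.0.0 -m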

This issue is tracked with Oracle bug 28836461.

Error in provisioning Oracle ASM Database on FLASH storage

On Oracle Database Appliance High-Availability systems with High Capacity storage, Oracle ASM Database creation on FLASH storage fails.

This issue occurs because the FLASH disk group is not mounted.

Hardware Models

All Oracle Database Appliance high-availability hardware models with High Capacity storage configuration

Workaround

Provision the appliance without creating the database, and then create the database.

This issue is tracked with Oracle bug 30309798.

Database cloning not supported in Oracle Database Appliance release 19.4

Database cloning is not supported in Oracle Database Appliance release 19.4.

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

None.

This issue is tracked with Oracle bug 30404303.

Errors after restarting CRS

Errors may occur if Cluster Ready Services (CRS) is stopped or restarted before stopping the repository and virtual machines.

If CRS is stopped or restarted before the repository and virtual machines are stopped, then the repository status is unknown and the High Availability Virtual IP is offline.

Hardware Models

Oracle Database Appliance HA models X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, V1

Workaround

Follow these steps:

  1. Start the High Availability Virtual IP on node1.
    # /u01/app/GI_version/grid/bin/srvctl start havip -id havip_0  
  2. Stop the oakVmAgent.py process on dom0.

  3. Run the lazy unmount option on the dom0 repository mounts:
    umount -l mount_points

This issue is tracked with Oracle bug 20461930.

Database creation fails for odb-01s DSS databases

When attempting to create a DSS database with shape odb-01s, the job may fail with the following errors:

CRS-2674: Start of 'ora.test.db' on 'example_node' failed
CRS-5017: The resource action "ora.test.db start" encountered the following
error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/example_node/crs/trace/crsd_oraagent_oracle.trc".

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

There is no direct workaround; select an alternate shape to create the database.

This issue is tracked with Oracle bug 27768012.

Known Issues When Managing Oracle Database Appliance

Understand the known issues when managing or administering Oracle Database Appliance.

Restoring a database from backup not supported in Oracle Database Appliance release 19.4

Restoring a database from backup is not supported in Oracle Database Appliance release 19.4.

Hardware Models

Oracle Database Appliance X8-2 bare metal deployments with Oracle Database Appliance Release 19.4.

Workaround

None.

This issue is tracked with Oracle bugs 30423790 and 30423720.

Extensive tracing generated for server processes

Extensive tracing files for the server processes are generated with DRM messages.

2019-08-07 03:35:33.498*:example1():   
[0x3fc1001c][0xf02],[TX][ext0x0,0x0][domid 0x0] 
  maxnodes 16, key 2663540594, node 2 (inst 3), member_node 0 
2019-08-07 03:35:33.498*:example1():   delta 15 
2019-08-07 03:35:33.498*:example2():   
[0x3fc1001c][0xf11],[TX][ext0x0,0x0][domid 0x0] 
  maxnodes 16, key 2663540609, node 1 (inst 2), member_node 1 

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Disable tracing by setting the following event. Because the event is set with scope=spfile, restart the database instance for the change to take effect.
alter system set event='trace [rac_enq] disk disable' scope=spfile; 

This issue is tracked with Oracle bug 30166512.

Missing DATA, RECO, and REDO entries when dbstorage is rediscovered

Running the odacli update-registry command with the -n all --force or -n dbstorage --force options can result in metadata corruption.

Hardware Models

All Oracle Database Appliance hardware models, bare metal deployments

Workaround

Use the -n all option only on migrated systems where all the databases were created with OAKCLI. On other systems that run on the DCS stack, update each component other than dbstorage individually, using the command odacli update-registry -n component_name (excluding dbstorage).
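
A minimal sketch, assuming you want to refresh only the dbhome component (the component name is illustrative; repeat for each component you need, excluding dbstorage):

# odacli update-registry -n dbhome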

This issue is tracked with Oracle bug 30274477.

Incorrect Aura8 firmware value displayed

The Aura8 firmware version displayed in the components list is incorrect.

Hardware Models

Oracle Database Appliance X8-2S and X8-2M

Workaround

None.

This issue is tracked with Oracle bug 30340410.

Error encountered for database operations for odb28 database shape

Database operations fail for databases using the odb28 database shape on some hardware models.

The database shape odb28 is listed as an unsupported database shape in the /opt/oracle/dcs/rdbaas/config/opc_sizing_metadata.xml file for some Oracle Database Appliance hardware models.

Hardware Models

Oracle Database Appliance Hardware Models X7-2 and X8-2

Workaround

Update the /opt/oracle/dcs/rdbaas/config/opc_sizing_metadata.xml file with the information for odb28. For example:
<shape name="odb28"> 
    <ocpus>28</ocpus> 
    <memory>224GB</memory> 
    <log_buffer>128M</log_buffer> 
    <redo_size>4GB</redo_size> 
    <db_block_size>8k</db_block_size> 
    <db_size>100</db_size> 
</shape> 

This issue is tracked with Oracle bug 30313914.

ODA_BASE is in read-only mode or cannot start

The /OVS directory is full and ODA_BASE is in read-only mode.

The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom0) to become 100% used. When Dom0 is full, ODA_BASE is in read-only mode or cannot start.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Perform the following to correct or prevent this issue:

  • Periodically check the file usage on Dom0 and clean up the vmcore file, as needed. A minimal sketch follows this list.

  • Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart', especially when ODA_BASE is using more than 200 GB of memory.
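
A minimal sketch of the periodic check, assuming the vmcore files are written under /OVS/var (the paths and file pattern are illustrative):

# df -h /OVS
# rm -f /OVS/var/vmcore*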

This issue is tracked with Oracle bug 26121450.

Restriction in moving database home for database shape greater than odb8

When creating databases, there is a policy restriction for creating databases with database shapes odb8 or higher for Oracle Database Standard Edition.

To maintain consistency with this policy restriction, do not migrate any database to an Oracle Database Standard Edition database home, where the database shape is greater than odb8. The database migration may not fail, but it may not adhere to policy rules.

Hardware Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

None.

This issue is tracked with Oracle bug 29003323.

The odaeraser tool does not work if oakd is running in non-cluster mode

After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.

Hardware Models

All Oracle Database Appliance Hardware bare metal systems

Workaround

After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.

Use the command odaadmcli shutdown oak to stop oakd.
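
For example:

# odaadmcli shutdown oak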

This issue is tracked with Oracle bug 28547433.

Issues with the Web Console on Microsoft web browsers

Oracle Database Appliance Web Console has issues on Microsoft Edge and Microsoft Internet Explorer web browsers.

The following are issues with Microsoft web browsers:
  • Oracle Database Appliance Web Console does not display correctly on Microsoft Edge and Microsoft Internet Explorer web browsers.
  • Advanced Information for the appliance does not display on Microsoft Internet Explorer web browser.
  • Job activity status does not refresh in the Web Console on Microsoft Internet Explorer web browser.
  • After configuring the oda-admin password, the following error is displayed:
    Failed to change the default user (oda-admin) account password. 
    Status Code: 500 DCS-10001: DCS-10001:Internal error encountered: User not authorized

    Workaround: Close the Microsoft Internet Explorer browser session and open another browser session.

Hardware Models

All Oracle Database Appliance Hardware Models bare metal deployments

Workaround

To access the Web Console, use Google Chrome or Mozilla Firefox.

This issue is tracked with Oracle bugs 27798498, 27028446, 30077007, 30099089, 29887027, and 27799452.

Disk space issues due to Zookeeper logs size

The Zookeeper log files, zookeeper.out and /opt/zookeeper/log/zkMonitor.log, are not rotated when new logs are added. This can cause disk space issues.

Hardware Models

All Oracle Database Appliance hardware models for bare metal deployments

Workaround

Rotate the zookeeper log file manually if the log file size increases, as follows:

  1. Stop the DCS-agent service for zookeeper on both nodes.

    initctl stop initdcsagent
  2. Stop the zookeeper service on both nodes.

    /opt/zookeeper/bin/zkServer.sh stop
  3. Clean the zookeeper logs after taking a backup, either by manually deleting the existing file or by following steps 4 to 10.

  4. Set the ZOO_LOG_DIR as an environment variable to a different log directory, before starting the zookeeper server.

    export ZOO_LOG_DIR=/opt/zookeeper/log
  5. Switch to ROLLINGFILE to enable log rolling.

    export ZOO_LOG4J_PROP="INFO, ROLLINGFILE"
    Restart the zookeeper server for the changes to take effect.
  6. Set the following parameters in the /opt/zookeeper/conf/log4j.properties file, to limit the number of backup files, and the file sizes.

    zookeeper.log.dir=/opt/zookeeper/log
    zookeeper.log.file=zookeeper.out
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
  7. Start zookeeper on both nodes.

    /opt/zookeeper/bin/zkServer.sh start
  8. Check the zookeeper status, and verify that zookeeper runs in leader/follower/standalone mode.

    /opt/zookeeper/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
  9. Start the dcs agent on both nodes.

    initctl start initdcsagent
  10. Purge the zookeeper monitor log, zkMonitor.log, in the location /opt/zookeeper/log. You do not have to stop the zookeeper service.

This issue is tracked with Oracle bug 29033812.

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The error is caused when you run the following steps:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Verify the zookeeper status on both nodes before starting dcsagent:

    /opt/zookeeper/bin/zkServer.sh status

    The status should be leader, follower, or standalone (standalone for a single-node environment).

  2. Restart the dcsagent on Node0 after running the cleanup.pl script.

    # initctl stop initdcsagent 
    # initctl start initdcsagent

Incorrect results returned for the describe-component command in certain cases

The describe-component command may return incorrect results in some cases.

For the following disk, the describe-component command shows the available version as QDV1RE14, which is lower than the actual version QDV1RF30:

Disk type: NVMe
    Manufacturer : Intel
    Model:  0x0a54
    Product name: 7335940:ICDPC2DD2ORA6.4T
    Version: QDV1RF30

The following disk is not visible when you run the describe-component command. This does not impact the system components; only the display is affected.

Disk type: NVMe
    Manufacturer : Intel
    Model:  0x0a54
    Product name: 7361456_ICRPC2DD2ORA6.4T
    Version: VDV1RY03

Hardware Models

All Oracle Database Appliance hardware models.

Workaround

Use the fwupdate list all command to check the correct versions.
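
For example:

# fwupdate list all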

This issue is tracked with Oracle bug 29680034.

OAKERR:7007 Error encountered while starting VM

When starting a virtual machine (VM), an error message appears that the domain does not exist.

If a VM was cloned in Oracle Database Appliance 12.1.2.10 or earlier, you cannot start the HVM domain VMs in Oracle Database Appliance 12.1.2.11.

This issue does not impact newly cloned VMs in Oracle Database Appliance 12.1.2.11 or any other type of VM cloned on older versions. The VM templates were fixed in 12.1.2.11.0.

When trying to start the VM (vm4 in this example), the output is similar to the following:

# oakcli start vm vm4 -d 
.
Start VM : test on Node Number : 0 failed.
DETAILS:
        Attempting to start vm on node:0=>FAILED.  
<OAKERR:7007 Error  encountered while starting VM -  Error: Domain 'vm4' does not exist.>                        

The following is an example of the vm.cfg file for vm4:

vif = ['']
name = 'vm4'
extra = 'NODENAME=vm4'
builder = 'hvm'
cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
vcpus = 2
memory = 2048
cpu_cap = 0
vnc = 1
serial = 'pty'
disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
maxvcpus = 2
maxmem = 2048

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Delete the extra = 'NODENAME=vm_name'  line from the vm.cfg file for the VM that failed to start.

  1. Open the vm.cfg file for the virtual machine (vm) that failed to start.

    • Dom0: /Repositories/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

    • ODA_BASE: /app/sharedrepo/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

  2. Delete the following line: extra = 'NODENAME=vm_name'. For example, if virtual machine vm4 failed to start, delete the line extra = 'NODENAME=vm4'.

    vif = ['']
    name = 'vm4'
    extra = 'NODENAME=vm4' 
    builder = 'hvm'
    cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
    vcpus = 2
    memory = 2048
    cpu_cap = 0
    vnc = 1
    serial = 'pty'
    disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
    maxvcpus = 2
    maxmem = 2048
  3. Start the virtual machine on Oracle Database Appliance 12.1.2.11.0.

    # oakcli start vm vm4

This issue is tracked with Oracle bug 25943318.

Error in node number information when running network CLI commands

Network information for node0 is always displayed for some odacli commands when the -u option is not specified.

If the -u option is not provided, then the odacli describe-networkinterface, list-networks, and describe-network commands always display the results for node0 (the default node), irrespective of whether the command is run from node0 or node1.

Hardware Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

Specify the -u option in the odacli command to display details for the current node, as shown in the example below.
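
For example, to list the networks for node 1 (the node number is illustrative):

# odacli list-networks -u 1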

This issue is tracked with Oracle bug 27251239.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments and in Dom0 in virtualized platform environments to remove the parameters.

    cat /etc/opensm/opensm.conf | egrep -w 'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
  2. Reboot. The messages will not appear after rebooting the node.

This issue is tracked with Oracle bug 25985258.