D Issues with Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

The following are known issues deploying, updating, and managing Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1:

GI upgrade to 12.2.1.2 fails due to incorrect permissions in the config.sh file

Unless the workaround is applied before patching, upgrade to 12.2.1.2 fails during grid patching.

Insufficient permissions prevent the grid infrastructure (GI) patch from upgrading: the permissions on the /u01/app/oraInventory/locks directory do not allow the config.sh script to access the inventory locks directory.

Perform the workaround before applying the GI patch to prevent the issue.

If you do not apply the workaround before upgrading to 12.2.1.2, errors similar to the following occur and the upgrade fails:

There is no directory as: /u01/app/12.2.0/grid/perl/bin/ exist in the server   
ERROR : Ran '/bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh"' 

The command returns exit code 127, and the output is as follows:

/u01/app/12.2.0.1/grid/crs/config/config.sh: line 48: /u01/app/12.2.0/grid/perl/bin/perl: No such file or directory
ERROR : /bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh" did not complete successfully.
Exit code 127 #Step -1#

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1 bare metal platform and virtualized platform

Oracle Database Appliance X7-2-HA virtualized platform

Workaround

To prevent the issue, perform the following steps before upgrading Oracle Database Appliance:

  1. Check oraInventory for a locks directory.

    ls -al /u01/app/oraInventory/locks
    
    • If the locks directory does not exist on either node, then there is no issue.

    • If the locks directory exists on both nodes, then go to Step 2.

    • If the locks directory exists only on node 1, then see MOS Note 2360709.1 for how to check for and detach Oracle_Home on the second node before removing the locks directory.

    • If the locks directory exists only on the second node, then see MOS Note 2360709.1.

  2. Remove the locks directory on both nodes.

    rm -R /u01/app/oraInventory/locks
    
  3. Perform the upgrade.

Note:

If you do not perform the workaround before upgrading, a failure might happen at a different point and require a different procedure, depending on the point of failure. See My Oracle Support Note 2360709.1 for more information.

This issue is tracked with Oracle bug 27314077.

Insufficient space in the /boot directory to upgrade to 12.2.1.2

The ERROR: Unable to apply the patch <1> message appears when attempting to upgrade from Oracle Database Appliance 12.1.2.6 to 12.2.1.2.

The error is due to the kernel file (initramfs*94.3.9*.img) in Oracle Database Appliance 12.1.2.6. The file expands to more than 48 MB during patching, which causes the local server patch to fail.
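
To confirm whether this issue applies, you can check the free space in /boot and the size of the current initramfs image before patching (a generic check; the file name depends on the running kernel):

# df -h /boot
# ls -lh /boot/initramfs-$(uname -r).img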

Hardware Models

Oracle Database Appliance X5-2, X4-2, X3-2, and V1

Workaround

  1. Shrink the initramfs by running the following command:

    dracut --force --omit-drivers "oracleoks oracleacfs oracleadvm" "/boot/initramfs-$(uname -r).img" $(uname -r) 
    
  2. Apply the server patch using the --local option.

    # oakcli update -patch 12.2.1.2 --server --local
    
  3. Reboot the system.

  4. Remove the old kernel RPMs and the old initrd kdump image file.

    # rpm -e kernel-uek-4.1.12-94.3.9.el6uek.x86_64 
    # rpm -e kernel-uek-firmware-4.1.12-94.3.9.el6uek.noarch 
    # rm initrd-4.1.12-94.3.9.el6uek.x86_64kdump.img
    

This issue is tracked with Oracle bug 27314077.

FLASH disk group is not mounted when patching or provisioning the server

The FLASH disk group is not mounted after a reboot, including after provisioning, reimaging, or patching the server with Oracle Database Appliance 12.2.1.2.

This issue occurs when the node reboots and then you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database. When patching or provisioning a server with Oracle Database Appliance 12.2.1.2, you will encounter an SSH disconnect issue and an error.
# oakcli update -patch 12.2.1.2 --server

**************************************************************************** 
*****   For all X5-2 customers with 8TB disks, please make sure to     *****
*****   run storage patch ASAP to update the disk firmware to "PAG1".  *****
**************************************************************************** 
INFO: DB, ASM, Clusterware may be stopped during the patch if required 
INFO: Both Nodes may get rebooted automatically during the patch if required 
Do you want to continue: [Y/N]?: y 
INFO: User has confirmed for the reboot 
INFO: Patch bundle must be unpacked on the second Node also before applying the patch 
Did you unpack the patch bundle on the second Node? : [Y/N]? : y  
Please enter the 'root'  password :  
Please re-enter the 'root' password:  
INFO: Setting up the SSH 
..........Completed .....  
... ...
INFO: 2017-12-26 00:31:22: -----------------Patching ILOM & BIOS----------------- 
INFO: 2017-12-26 00:31:22: ILOM is already running with version 3.2.9.23r116695 
INFO: 2017-12-26 00:31:22: BIOS is already running with version 30110000 
INFO: 2017-12-26 00:31:22: ILOM and BIOS will not be updated  
INFO: 2017-12-26 00:31:22: Getting the SP Interconnect state... 
INFO: 2017-12-26 00:31:44: Clusterware is running on local node 
INFO: 2017-12-26 00:31:44: Attempting to stop clusterware and its resources locally 
Killed 
# Connection to server.example.com closed. 

The Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services, and Event Manager are online. However, when you attempt to create an Oracle Automatic Storage Management Cluster File System (Oracle ACFS) database, you receive an error: flash space is 0.

Hardware Models

Oracle Database Appliance X5-2, X6-2-HA, and X7-2-HA SSD systems.

Workaround

Manually mount the FLASH disk group before creating an Oracle ACFS database.

Perform the following steps as the GRID owner:

  1. Set the environment variables as grid OS user:

    # On node0
    export ORACLE_SID=+ASM1
    export ORACLE_HOME=/u01/app/12.2.0.1/grid
    
  2. Log on to the ASM instance as sysasm.

    $ORACLE_HOME/bin/sqlplus / as sysasm
    
  3. Execute the following SQL command:

    SQL> ALTER DISKGROUP FLASH MOUNT;
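
To confirm that the disk group is mounted before creating the Oracle ACFS database, you can query the disk group state from the same sysasm session (a verification sketch; the expected state is MOUNTED):

SQL> SELECT name, state FROM v$asm_diskgroup WHERE name = 'FLASH';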
    

This issue is tracked with Oracle bug 27322213.

Do not use the local patching option on a virtualized platform

When patching a virtualized platform, the --local option is not supported.

On a virtualized platform, attempting to use the --local option to patch a single node will result in an error.

When you use the --local option, the server patch fails with the following error:
# oakcli update -patch 12.2.1.2.0 --server --local 
ERROR: -local is not supported for server patching, on VM systems.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

Use the following command to update the software on Oracle Database Appliance; it applies the patch to both nodes.
# oakcli update -patch 12.2.1.2.0 --server

Unable to upgrade an Oracle Database from version 12.1 to 12.2

When attempting to upgrade an Oracle Database from version 12.1 to 12.2, the upgrade pre-check fails with error 254 and the database is not upgraded.

The log file shows the following issues:
  • ORA-00093: pga_aggregate_limit must be between 4096M and 100000G

  • ORA-01078: failure in processing system parameters

After attempting the upgrade, the database might be offline.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

Set the pga_aggregate_limit to two times the pga_aggregate_target before upgrading to Oracle Database 12.2. The pga_aggregate_limit must be greater than 2G.
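
For example, if the pga_aggregate_target is 4G, you can set the pga_aggregate_limit to 8G before starting the upgrade (an illustrative sketch; substitute values that match your instance):

SQL> SHOW PARAMETER pga_aggregate_target
SQL> ALTER SYSTEM SET pga_aggregate_limit=8G SCOPE=BOTH;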

This issue is tracked with Oracle bug 27251762.

Unable to upgrade an Oracle Database from version 11.2 to 12.1 or 12.2

When attempting to upgrade an Oracle Database from version 11.2 to 12.1 or 12.2, the upgrade pre-check fails and the database is not upgraded.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

A workaround is not available.

This issue is tracked with Oracle bug 27332396.

Error CRS-01019: The OCR Service Exited

An issue with Oracle Database 12.2.1.2 might cause an internal error CRS-01019: THE OCR SERVICE EXITED. If this occurs, the Cluster Ready Services daemon (crsd) is down. 

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA virtualized platform

Workaround

Restart the CRS daemon.

  1. Stop crs.

    # crsctl stop crs -f 
    
  2. Start crs.

     # crsctl start crs -wait 
    

This issue is tracked with Oracle bug 27060167.

CRSD is unresponsive when patching Oracle Database 12.1 ASM

When you run the database patch, Oracle Clusterware does not run after Node 0 is patched, and patching hangs on Node 1.

The Cluster Ready Services daemon (crsd) process becomes unresponsive and the Oracle High Availability Services daemon (OHASD) cannot clean up the resource. The server becomes unresponsive as other processes cannot complete on the CPU and the load on the server increases. This issue is related to bug 27060167.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA

Workaround

Kill the crsd processes running on both nodes, then restart the cluster.
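
One possible sequence, run as root on each node, assuming you first identify the crsd.bin process ID (crsd_pid is a placeholder):

# ps -ef | grep crsd.bin | grep -v grep
# kill -9 crsd_pid
# crsctl stop crs -f
# crsctl start crs -wait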

This issue is tracked with Oracle bug 27366345.

Errors when upgrading a database from 11.2.0.4 dbhome to 12.2.0.2

When upgrading a database from an 11.2.0.4 dbhome to 12.2.0.2, you might receive the error No protocol specified or Exit Code 6.

Oracle Database Appliance performs pre-checks before applying the upgrade. If one or more of the pre-upgrade checks on the database results in warning conditions that require manual intervention, address the warnings as suggested before proceeding with the upgrade. If you receive the No protocol specified or Exit Code 6 errors, then the warnings were not addressed before the upgrade.

Exit code 6 indicates successful execution with warnings.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA virtualized platform

Workaround

None.

This issue is tracked with Oracle bug 27381804.

Unable to create an Oracle ASM Database for Release 12.1

Known issues with Oracle Automatic Storage Management (Oracle ASM) are preventing the REDO diskgroup from mounting for Oracle Database Release 12.1.

You cannot create an Oracle ASM database at a version lower than the 12.1.0.2.170814 PSU (Oracle Database Appliance 12.1.2.12).

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

There is no workaround. If you have Oracle Database 11.2 or 12.1 that is using Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance 12.1.2.12.0 and Database Home 12.1.0.2.170814.

The upgrade path for Oracle Database 11.2 or 12.1 Oracle ASM is as follows (a version check example follows this list):

  • If you are on Oracle Database Appliance version 12.1.2.6.0 or later, then upgrade to 12.1.2.12 or higher before upgrading your database.

  • If you are on Oracle Database Appliance version 12.1.2.5 or earlier, then upgrade to 12.1.2.6.0, and then upgrade again to 12.1.2.12 or higher before upgrading your database.
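
To determine which upgrade path applies, you can check the currently installed Oracle Database Appliance version on each node first; for example:

# oakcli show version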

This issue is tracked with Oracle bugs 21626377 and 21780146. The issues are fixed in Oracle Database 12.1.0.2.170814.

Unable to patch an empty Oracle Database 12.1 dbhome

Cannot patch an empty Oracle Database Home (dbhome) due to an issue with Oracle Database auto patch.

When attempting to patch an empty dbhome, an error message similar to the following appears:
ERROR: 2017-12-19 18:48:02: Unable to apply db patch on the following Homes :  /u01/app/oracle/product/12.1.0.2/dbhome_name

The following is an example excerpt from the dbupdate log:

  OPATCHAUTO-68036: Topology empty. 
  OPATCHAUTO-68036: The topology was empty, unable to proceed. 
  OPATCHAUTO-68036: Check the log for more information. 
  OPatchAuto failed.
opatchauto failed with error code 42

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

The issue occurs when the dbhome does not have any databases. The workaround is to create a database before patching.
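
For example, a placeholder database can be created in the empty dbhome with oakcli before patching (a sketch; tmpdb and OraDb12102_home1 are illustrative names, and the exact options depend on your oakcli release):

# oakcli create database -db tmpdb -oh OraDb12102_home1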

This issue is tracked with Oracle bugs 27292674 and 27126871.

Unable to patch dbhome from 12.1.0.2.170814 to 12.2.0.1.171017

In some cases, the dbhome patch update fails due to a timezone issue in opatch.

An error similar to the following appears in the job details:

DCS-10001:Internal error encountered:  run datapatch after bundlePatch application on the database home dbhomeID 

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

  1. Open the /u01/app/oracle/product/*/*/inventory/ContentsXML/comps.xml file.

  2. Search for four-character timezone (TZ) information.

    For example, HADT and HAST.

  3. Take a backup of those files.

  4. Convert the 4-character timezone to a 3-character timezone (a sed example follows these steps).

    For example, convert HADT and HAST to HST.

  5. Patch dbhome.
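
A minimal sketch of steps 3 and 4 for a single dbhome, assuming the file contains the HADT and HAST tokens (the dbhome path is an example; substitute your own):

# cp /u01/app/oracle/product/12.1.0.2/dbhome_1/inventory/ContentsXML/comps.xml \
     /u01/app/oracle/product/12.1.0.2/dbhome_1/inventory/ContentsXML/comps.xml.bak
# sed -i 's/HADT/HST/g; s/HAST/HST/g' \
     /u01/app/oracle/product/12.1.0.2/dbhome_1/inventory/ContentsXML/comps.xml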

This issue is tracked with Oracle bugs 27313653 and 27331844.

The DB Console option is disabled when creating an 11.2.0.4 database

When using Oracle Database 12.2.0.1 grid infrastructure (GI) to create an 11.2.0.4 database, the option to configure Oracle Enterprise Manager DB Console is disabled.

An issue with the Enterprise Manager Control (emctl) command line utility and Enterprise Manager Configuration Assistant (emca) occurs when using the 12.2.0.1 GI to create an 11.2.0.4 database.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1 that are using the 12.2.0.1 GI.

Oracle Database Appliance X7-2-HA Virtualized Platform that is using the 12.2.0.1 GI.

Workaround

Manually configure Oracle Enterprise Manager DB Console after creating the database.

If the appliance is a multi-node system, perform the steps on both nodes. The following example assumes a multi-node system:

  1. Create a dbconsole.rsp response file, as follows, based on your environment.

    To obtain the cluster name for your environment, run the command $GI_HOME/bin/cemutlo -n

    DB_UNIQUE_NAME=pdb_unique_name 
    SERVICE_NAME=db_unique_name.db_domain 
    PORT=scan listener port
    LISTENER_OH=$GI_HOME
    SYS_PWD=admin password
    DBSNMP_PWD=admin password
    SYSMAN_PWD=admin password
    CLUSTER_NAME=cluster name 
    ASM_OH=$GI_HOME
    ASM_SID=+ASM1
    ASM_PORT=asm listener port
    ASM_USER_NAME=ASMSNMP
    ASM_USER_PWD=admin password   
    
  2. Run the command to configure the dbcontrol using the response file. The command will fail with an error. You will use the steps in the output in Step 4.

    $ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster -silent -respFile dbconsole.rsp 
    
    Error securing Database Control. Database Control has not been brought-up on nodes node1 node2
    Execute the following command(s) on nodes: node1 node2
    
    1. Set the environment variable ORACLE_UNQNAME to the Database unique name.
    2. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl config emkey -repos
    -sysman_pwd Password for SYSMAN user -host node -sid  Database unique
    name
    3. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl secure dbconsole
    -sysman_pwd Password for SYSMAN user -host node -sid  Database unique
    name
    4. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl start dbconsole
    
    To secure Em Key, run /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl  config emkey -remove_from_repos -sysman_pwd Password for SYSMAN user
    
  3. Use vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid

  4. Run the steps reported by emca in Step 2 with the proper values.

  5. Reconfigure dbconsole so that the agent in Node0 reports to the dbconsole in Node0, and the agent in Node1 reports to the dbconsole in Node1:
    $ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE node0
    host -EM_NODE_LIST node1 host -DB_UNIQUE_NAME db_unique_name 
    -SERVICE_NAME db_unique_name.db_domain
    
  6. Use vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid

  7. Check the db console configuration status.

    # /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl  status agent
       - https://public IP for Node0:1158/em
       - https://public IP for Node1:1158/em  
    

This issue is tracked with Oracle bug 27071994.

ODA_BASE is in read-only mode or cannot start

The /OVS directory is full and ODA_BASE is in read-only mode.

The vmcore file in the /OVS/var directory can cause the /OVS directory (Dom0) to become 100% used. When Dom0 is full, ODA_BASE is in read-only mode or cannot start.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Perform the following to correct or prevent this issue:

  • Periodically check the file usage on Dom0 and clean up the vmcore file, as needed (see the sketch after this list).

  • Edit the oda_base vm.cfg file and change the on_crash = 'coredump-restart' parameter to on_crash = 'restart', especially when ODA_BASE is using more than 200 GB (gigabytes) of memory.
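
A minimal sketch of the periodic check on Dom0 (path_to_vmcore_file is a placeholder for whatever the find command returns; remove the file only after confirming it is no longer needed):

# df -h /OVS
# find /OVS/var -name 'vmcore*' -type f
# rm path_to_vmcore_file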

This issue is tracked with Oracle bug 26121450.

OAKERR:7007 Error encountered while starting VM

When starting a virtual machine (VM), an error message appears that the domain does not exist.

If a VM was cloned in Oracle Database Appliance 12.1.2.10 or earlier, you cannot start the HVM domain VMs in Oracle Database Appliance 12.1.2.11.

This issue does not impact newly cloned VMs in Oracle Database Appliance 12.1.2.11 or any other type of VM cloned on older versions. The vm templates were fixed in 12.1.2.11.0.

When trying to start the VM (vm4 in this example), the output is similar to the following:

# oakcli start vm vm4 -d 
.
Start VM : test on Node Number : 0 failed.
DETAILS:
        Attempting to start vm on node:0=>FAILED.  
<OAKERR:7007 Error  encountered while starting VM -  Error: Domain 'vm4' does not exist.>                        

The following is an example of the vm.cfg file for vm4:

vif = ['']
name = 'vm4'
extra = 'NODENAME=vm4'
builder = 'hvm'
cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
vcpus = 2
memory = 2048
cpu_cap = 0
vnc = 1
serial = 'pty'
disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
maxvcpus = 2
maxmem = 2048

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Oracle Database Appliance X7-2-HA Virtualized Platform.

Workaround

Delete the extra = 'NODENAME=vm_name'  line from the vm.cfg file for the VM that failed to start.

  1. Open the vm.cfg file for the virtual machine (vm) that failed to start.

    • Dom0: /Repositories/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

    • ODA_BASE: /app/sharedrepo/vm_repo_name/.ACFS/snaps/vm_name/VirtualMachines/vm_name

  2. Delete the following line: extra = 'NODENAME=vm_name'. For example, if virtual machine vm4 failed to start, delete the line extra = 'NODENAME=vm4' (a sed alternative appears after these steps).

    vif = ['']
    name = 'vm4'
    extra = 'NODENAME=vm4' 
    builder = 'hvm'
    cpus = '0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23'
    vcpus = 2
    memory = 2048
    cpu_cap = 0
    vnc = 1
    serial = 'pty'
    disk = [u'file:/OVS/Repositories/odarepo1/VirtualMachines/vm4/68c32afe2ba8493e89f018a970c644ea.img,xvda,w']
    maxvcpus = 2
    maxmem = 2048
    
  3. Start the virtual machine on Oracle Database Appliance 12.1.2.11.0.

    # oakcli start vm vm4
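
As an alternative to editing the file manually in step 2, the line can be removed with sed; a minimal sketch, run from the directory that contains the vm.cfg file of the failed VM (back up the file first):

# cp vm.cfg vm.cfg.bak
# sed -i "/extra = 'NODENAME=/d" vm.cfg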
    

This issue is tracked with Oracle bug 25943318.

Server patch does not update the kernel version

After applying the server patch and rebooting the node, the kernel version is not updated.

This issue occurs with Oracle Database Appliance 12.1.2.11 and 12.1.2.12.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

  1. Update to release 12.2.1.2.0.

    # oakcli update -patch 12.2.1.2.0 --server --local
    
  2. Remove the kernel rpm.

    # rpm -e kernel-uek-4.1.12-103.3.8.1.el6uek.x86_64
    
  3. Manually install the kernel RPM using rpm -ivh.

    # rpm -ivh kernel-uek-4.1.12-103.3.8.1.el6uek.x86_64.rpm
    
  4. Modify /boot/grub/grub.conf to boot from the new kernel: change default=1 to default=0.

    # cat /boot/grub/grub.conf 
    timeout=5 
    splashimage=(hd0,0)/grub/splash.xpm.gz 
    #hiddenmenu 
    serial --unit=0 --speed=115200  --word=8 --parity=no --stop=1 
    terminal --timeout=5 serial console 
    default=0 
    title Oracle Linux Server Unbreakable Enterprise Kernel  
    (4.1.12-103.3.8.1.el6uek.x86_64)
             root (hd0,0) 
             kernel /vmlinuz-4.1.12-103.3.8.1.el6uek.x86_64 ro root=LABEL=rootfs 
     tsc=reliable nohpet nopmtimer hda=noprobe hdb=noprobe ide0=noprobe numa=off 
     console=tty0 console=ttyS0,115200n8 selinux=0 nohz=off crashkernel=256M@64M 
     loglevel=3 panic=60 ipv6.disable=1 transparent_hugepage=never NODENUM=0 
     PRODUCT=SUN_SERVER_X4-2 TYPE=V3 pci=noaer 
             initrd /initramfs-4.1.12-103.3.8.1.el6uek.x86_64.img
     title Oracle Linux Server (4.1.12-61.44.1.el6uek.x86_64)
             root (hd0,0)
              kernel /vmlinuz-4.1.12-61.44.1.el6uek.x86_64 ro root=LABEL=rootfs 
     tsc=reliable nohpet nopmtimer hda=noprobe hdb=noprobe ide0=noprobe numa=off 
     console=tty0 console=ttyS0,115200n8 selinux=0 nohz=off crashkernel=256M@64M 
     loglevel=3 panic=60 ipv6.disable=1 transparent_hugepage=never NODENUM=0 
     PRODUCT=SUN_SERVER_X4-2 TYPE=V3 pci=noaer 
            initrd /initramfs-4.1.12-61.44.1.el6uek.x86_64.img 
    
  5. Reboot the node.

  6. Confirm the new kernel is running on the system.
    # uname -r
    4.1.12-103.3.8.1.el6uek.x86_64
    
  7. Repeat for Node 1.

This issue is tracked with Oracle bug 26887116.

Server patch does not set an active version of Oracle Clusterware

The server patch does not set an active version of Oracle Clusterware. Before upgrading, relink the grid software with the Reliable Datagram Sockets (RDS) protocol.

For hardware with a CX3 card, the interconnect IPC should be linked with RDS instead of UDP. If the Oracle Grid Infrastructure home is linked with the UDP protocol, the following error message appears:

***************************************************************** 
The grid software on this system is linked with UDP/IP protocol. It should be relinked with RDS protocol.

For more details, please refer to the 12.2.1.2.0 release notes and README. 
*****************************************************************

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2

Workaround

During scheduled downtime, relink Oracle with RDS on both nodes.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_rds

This issue is tracked with Oracle bug 27366876.

Oracle ASR version is 5.7.6 instead of 5.7.7

The Oracle Auto Service Request (Oracle ASR) version is 5.7.6 in Oracle Database Appliance 12.2.1.2.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1.

Workaround

None.

This issue is tracked with Oracle bug 27418286.

Unrecognized Token Messages Appear in /var/log/messages

After updating Oracle Database Appliance, unrecognized token messages appear in /var/log/messages.

Updating to Oracle Database Appliance 12.1.2.11.0 updates the Oracle VM Server version to 3.4.3. After updating, the following messages appear in /var/log/messages:

Unrecognized token: "max_seq_redisc"
Unrecognized token: "rereg_on_guid_migr"
Unrecognized token: "aguid_inout_notice"
Unrecognized token: "sm_assign_guid_func"
Unrecognized token: "reports"
Unrecognized token: "per_module_logging"
Unrecognized token: "consolidate_ipv4_mask"

You can ignore the messages for these parameters; they do not impact the InfiniBand compliant Subnet Manager and Administration (opensm) functionality. However, Oracle recommends removing the parameters to avoid flooding /var/log/messages.

Hardware Models

Oracle Database Appliance X6-2-HA and X5-2 with InfiniBand

Workaround

Perform the following to remove the parameters:

  1. After patching, update the /etc/opensm/opensm.conf file in bare metal deployments, and in Dom0 in virtualized platform environments, to remove the parameters (a sed sketch follows these steps).

    cat /etc/opensm/opensm.conf | egrep -w \
    'max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask' \
    | grep -v ^#
    max_seq_redisc 0
    rereg_on_guid_migr FALSE
    aguid_inout_notice FALSE
    sm_assign_guid_func uniq_count
    reports 2
    per_module_logging FALSE
    consolidate_ipv4_mask 0xFFFFFFFF
    
  2. Reboot. The messages will not appear after rebooting the node.
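
A minimal sketch of step 1, deleting the listed parameters from the configuration file in place (back up the file first; verify the result with the egrep command shown above):

# cp /etc/opensm/opensm.conf /etc/opensm/opensm.conf.bak
# sed -i -E '/^(max_seq_redisc|rereg_on_guid_migr|aguid_inout_notice|sm_assign_guid_func|reports|per_module_logging|consolidate_ipv4_mask) /d' /etc/opensm/opensm.conf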

This issue is tracked with Oracle bug 25985258.

Virtual machine task blocked

After updating to Oracle Database Appliance 12.1.2.11.0, the IOs to local disks can get stuck and block tasks.

The issue is caused by an Oracle Linux bug when using Oracle VM 3.4.3. All Oracle Database Appliance guest virtual machines that use multiple VLANS and have VDISKS might encounter this bug, causing the IO to hang. The problem can manifest itself in different ways, depending on which process gets stuck. For example, after deploying ODA_BASE, the untar command cannot proceed, or virtual machines can hang.

Hardware Models

Oracle Database Appliance X6-2-HA, X5-2, X4-2, X3-2, and V1 with guest virtual machines that use multiple VLANS and VDISKS.

Workaround

Oracle Database Appliance X6-2-HA and X5-2 Dom0 uses grub2. For these models, perform the following to set gnttab_max_frames to 256 on Dom0 of both nodes:

  1. Increase the gnttab_max_frames value in the /etc/default/grub file by changing the following line:

    GRUB_CMDLINE_XEN="dom0_mem=max:4096MM allowsuperpage crashkernel=256M@64M extra_guest_irqs=256,2048 nr_irqs=2048 dom0_vcpus_pin dom0_max_vcpus=20"
    

    to

    GRUB_CMDLINE_XEN="dom0_mem=max:4096MM allowsuperpage crashkernel=256M@64M extra_guest_irqs=256,2048 nr_irqs=2048 dom0_vcpus_pin dom0_max_vcpus=20 gnttab_max_frames=256"
    
  2. Create a new configuration file based on the changes.

    grub2-mkconfig -o /boot/grub2/grub.cfg 
    
  3. Reboot.

  4. Repeat the process on Dom0 of the second node.

Oracle Database Appliance X4-2, X3-2, and V1 Dom0 uses grub1. For these models, perform the following to set the gnttab_max_frames to 256 in the xen hypervisor on Dom0 of both nodes:

  1. Open the /boot/grub/grub.conf file in Dom0.

  2. Add the gnttab_max_frames=256 parameter to the xen.gz command line.

    For example, change the following line:

    kernel /xen.gz dom0_mem=4096M crashkernel=256M@64M
    

    to

    kernel /xen.gz dom0_mem=4096M crashkernel=256M@64M gnttab_max_frames=256
    
  3. Reboot.

  4. Repeat the process on Dom0 of the second node.

This issue is tracked with Oracle bug 26731461.

High Availability IP (HAIP) addresses are not supported

High Availability IP (HAIP) addresses are not supported on Oracle engineered systems.

If you use an HAIP address, then an error message will appear in your operating system log indicating that the address is not supported.

The following error messages might appear in system logs during boot of Oracle Database Appliance systems:

Aug 11 15:31:11 odac1n1 kernel: [ 9932.651622] ** WARNING WARNING WARNING 
WARNING WARNING        ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651624] **                             
                   ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651627] ** RDS/IB: Link local address 
169.254.165.15 NOT SUPPORTED  ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651629] **                             
                   ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651631] ** HAIP IP addresses should 
not be used on ORACLE ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651632] ** engineered systems           
                   ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651634] **                             
                   ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651636] ** If you see this message, 
Please refer to       ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651638] ** cluster_interconnects in 
MOS note #1274318.1   ** 
Aug 11 15:31:11 odac1n1 kernel: [ 9932.651639] 

You can ignore these messages. Functionality is not impacted.

Hardware Models

Oracle Database Appliance X6-2S, X6-2M, X6-2L, X6-2-HA, X5-2, X4-2, X3-2, and V1

This issue is tracked with Oracle bug 26623697.