B Issues with Oracle Database Appliance X7-2S, X7-2M, and X7-2-HA

The following are known issues with deploying, updating, and managing Oracle Database Appliance X7-2S, X7-2M, and X7-2-HA:

GI upgrade to 12.2.1.2 fails due to incorrect permissions in the config.sh file

Unless the workaround is applied before patching, the upgrade to 12.2.1.2 fails during grid patching.

Insufficient permissions prevent the grid infrastructure (GI) patch from being applied. The permissions on the /u01/app/oraInventory/locks directory are not sufficient for the config.sh script to access the inventory locks directory.

Perform the workaround before applying the GI patch to prevent the issue.

If you do not apply the workaround before upgrading to 12.2.1.2, errors similar to the following occur and the upgrade fails:

There is no directory as: /u01/app/12.2.0/grid/perl/bin/ exist in the server   
ERROR : Ran '/bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh"' 

The command returns code (127). The output is as follows:

/u01/app/12.2.0.1/grid/crs/config/config.sh: line 48: /u01/app/12.2.0/grid/perl/bin/perl: No such file or directory
ERROR : /bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh" did not complete successfully.
Exit code 127 #Step -1#

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

To prevent the issue, perform the following steps before upgrading Oracle Database Appliance:

  1. Check oraInventory for a locks directory.

    ls -al /u01/app/oraInventory/locks
    
    • If the locks directory does not exist on either node, then there is no issue.

    • If the locks directory exists on both nodes, then go to Step 2.

    • If the locks directory exists only on node 1, then see MOS Note 2360709.1 for how to check for and detach Oracle_Home on the second node before removing the locks directory.

    • If the locks directory exists only on the second node, then see MOS Note 2360709.1.

  2. Remove the locks directory on both nodes.

    rm -R /u01/app/oraInventory/locks
    
  3. Perform the upgrade.
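
The check in Step 1 and the removal in Step 2 must cover both nodes. The following is a minimal sketch on a two-node system; node1 is a placeholder for the remote node host name.

# Step 1: check both nodes for the locks directory ("node1" is a placeholder).
ls -al /u01/app/oraInventory/locks
ssh root@node1 ls -al /u01/app/oraInventory/locks

# Step 2: if the directory exists on both nodes, remove it on both.
rm -R /u01/app/oraInventory/locks
ssh root@node1 rm -R /u01/app/oraInventory/locks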

Note:

If you do not perform the workaround before upgrading, a failure might happen at a different point and require a different procedure, depending on the point of failure. See My Oracle Support Note 2360709.1 for more information.

Apply the server bundle before provisioning

Apply the patch oda-sm-12.2.1.1.0-171030-server.zip before creating (deploying) Oracle Database Appliance. Apply the patch on all nodes.

When you apply the server patch bundle after creating the appliance and then try to upgrade a database to a database home (dbhome) created as part of deploying the appliance, the job fails and the database cannot be upgraded.

Hardware Models

Oracle Database Appliance X7-2S, X7-2M, and X7-2-HA

Workaround

Apply the server patch bundle before creating the appliance to avoid the issue.

For X7-2-HA, you must save and apply the server patch on both nodes.
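
The following is a minimal sketch of that order of operations, assuming the standard odacli flow for this release; verify the exact options for your version before running it.

# Hedged sketch: stage and apply the server bundle before creating the appliance.
# For X7-2-HA, copy the bundle to both nodes and update the repository on both.
odacli update-repository -f /tmp/oda-sm-12.2.1.1.0-171030-server.zip
odacli update-server -v 12.2.1.1.0
# Only after the server update completes, create (deploy) the appliance as usual.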

During provisioning, error: Failed to activate ASR assets

When deploying the appliance, Oracle Auto Service Request (ASR) installation fails with the following message: error: Failed to activate ASR assets.

Hardware Models

Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, and X6-2L.

Workaround

After deploying the appliance, manually register the asset.

  1. Manually register the asset.

    # /opt/asrmanager/bin/asr activate_asset -i  IP Address 
    hostname : 2 service tags 
    Successfully submitted activation for the asset  
    Host Name: hostname 
    IP Address: IP Address 
    Serial Number: serial number
    The e-mail address associated with the registration id for this asset's ASR 
    Manager will receive an e-mail highlighting the asset activation status and 
    any additional instructions for completing activation. 
    Please use My Oracle Support http://support.oracle.com to complete the  activation process. 
    The Oracle Auto Service Request documentation can be accessed on http://oracle.com/asr
    
  2. Verify that the system is listed.

    # /opt/asrmanager/bin/asr list_asset 
    IP_ADDRESS   HOST_NAME     SERIAL_NUMBER PARENT_SERIAL ASR PROTOCOL SOURCE  LAST_HEARTBEAT PRODUCT_NAME                       
    ----------   ---------     ------------- ------------- --- -------- ------  -------------- ------------                       
    IP Address hostname serial number                  Y   SNMP     FMA    NA              ORACLE SERVER X7-2 x86/x64 System   
    
    Please use My Oracle Support 'http://support.oracle.com' to view the  activation status.
    

Error: patch zip does not exist

The odacli update-repository job fails with an error stating that the patch-name.zip file does not exist in the /tmp directory.

When updating the repository, the update is not able to validate the copied file and the job fails. An error similar to the following appears:

DCS-10001:Internal error encountered: /tmp/oda-sm-12.2.1.2.0-171124-GI-12.2.0.1.zip does not exist in the /tmp directory.

Hardware Models

Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, and X6-2L.

Workaround

An invalid null_null auth-key exists in ZooKeeper. Remove the invalid key, restart the dcsagent on each node, and then run the odacli update-repository command.

  1. Navigate to the /bin directory in ZooKeeper.

    # cd /opt/zookeeper/bin
    
  2. Connect to ZooKeeper.

    # ./zkCli.sh
    
  3. List all of the auth-keys.

    # ls /ssh-auth-keys
    
  4. Delete the invalid key.

    # rmr /ssh-auth-keys/null_null
    
  5. Quit the ZooKeeper client.

    quit
    
  6. Restart the dcsagent on each node.

    /opt/oracle/dcs/bin/restartagent.sh
    
  7. Run the odacli update-repository command.
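
The same cleanup can be scripted. The following is a hedged sketch that assumes your zkCli.sh accepts a command as arguments (otherwise use the interactive steps above); the patch file name is the example from the error message.

# Remove the invalid key, restart the agent, and retry the repository update.
/opt/zookeeper/bin/zkCli.sh rmr /ssh-auth-keys/null_null
/opt/oracle/dcs/bin/restartagent.sh    # run this on each node
odacli update-repository -f /tmp/oda-sm-12.2.1.2.0-171124-GI-12.2.0.1.zip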

An error might occur when updating the patch repository

When updating the patch repository, you might get an internal error (Error DCS-10001) stating that the zip file does not exist in the /tmp directory.

The update repository action fails with the error 'Failed to fetch private IP Address of RemoteNode'. After re-imaging or cleaning up the system, the network object is not synchronized across the nodes, and the private IP addresses of both nodes are not available.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Restart the dcsagent on each node.

  2. Update the patch repository.
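
A minimal sketch of the workaround, reusing the agent restart commands shown elsewhere in these notes; the patch file name is a placeholder.

# Restart the dcsagent (repeat on the other node), then retry the update.
initctl stop initdcsagent
initctl start initdcsagent
odacli update-repository -f /tmp/patch-file.zip    # replace with your patch file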

Unable to patch dbhome

In some cases, the dbhome patch update fails due to a time zone issue in OPatch.

An error similar to the following appears in the job details:

DCS-10001:Internal error encountered:  run datapatch after bundlePatch application on the database home dbhomeID 

Hardware Models

Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, and X6-2L

Workaround

  1. Open the /u01/app/oracle/product/*/*/inventory/ContentsXML/comps.xml file.

  2. Search for four-character time zone (TZ) values.

    For example, HADT and HAST.

  3. Make a backup copy of the files.

  4. Convert each four-character time zone to a three-character time zone, as sketched after this list.

    For example, convert HADT and HAST to HST.

  5. Patch dbhome.
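
The following is a hedged sketch of Steps 1 through 4, using the HADT and HAST values from the example above; adjust the substitutions to the four-character values you actually find.

# Back up and fix each comps.xml that contains a four-character time zone.
for f in /u01/app/oracle/product/*/*/inventory/ContentsXML/comps.xml; do
    grep -qE 'HADT|HAST' "$f" || continue        # Step 2: skip unaffected homes
    cp "$f" "$f.bak"                             # Step 3: back up the file
    sed -i 's/HADT/HST/g; s/HAST/HST/g' "$f"     # Step 4: convert to 3 characters
done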

Error CRS-01019: The OCR Service Exited

An issue with Oracle Database 12.2.1.2 might cause an internal error CRS-01019: THE OCR SERVICE EXITED. If this occurs, the Cluster Ready Services daemon (crsd) is down. 

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

Restart the CRS daemon.

  1. Stop crs.

    # crsctl stop crs -f 
    
  2. Start crs.

     # crsctl start crs -wait 
    

This issue is tracked with Oracle bug 27060167.

Do not use the local patching option on a virtualized platform

When patching a virtualized platform, the --local option is not supported.

On a virtualized platform, attempting to use the --local option to patch a single node will result in an error.

When you use the --local option, server patching fails with the following error:
# oakcli update -patch 12.2.1.2.0 --server --local 
ERROR: -local is not supported for server patching, on VM systems.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

Use the following command to update the software on Oracle Database Appliance; it applies the patch to both nodes.
# oakcli update -patch 12.2.1.2.0 --server

The command oakcli validate returns errors on a virtualized platform

The commands oakcli validate -a and oakcli validate -c return some errors on a virtualized platform.

With one exception, the options of the oakcli validate command are not supported in the Oracle Database Appliance 12.2.1.1.0 release.

Note:

The command oakcli validate -c storagetopology is supported.

Hardware Models

Oracle Database Appliance X7-2-HA virtualized platform.

Workaround

A workaround is not available.

This issue is tracked with Oracle bugs 27022056 and 27021403.

After re-imaging with 12.2.1.2.0 virtualized ISO, the network setup is incorrect

The built-in SFP ports are not being recognized after re-imaging with the 12.2.1.2.0 virtualized OS image.

The issue occurs when using a fiber network. When the 10Gb/25Gb NICs are connected to 10Gb fiber, they are not identified during OS imaging. See ODA X7-2 HA Network Issue On Virtualized Platform, the built-in SFP ports are not being recognized (Doc ID 2358976.1) for more information.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

  1. Unplug all network cables and transceivers.

  2. Re-image the machine with the virtualized ISO image.

  3. Plug in the network cables and transceivers.

  4. Reboot both nodes.

  5. Run ifconfig to confirm that the Broadcom ports (eth2 and eth3) are present, then check their firmware with ethtool -i. The firmware should be 20.06.04.07 or higher.

    # ethtool -i eth2 
    driver: bnxt_en 
    version: 1.7.0 
    firmware-version: 20.6.156/1.8.1 pkg 20.06.04.07 <<<<<<<<<<<< 
    bus-info: 0000:18:00.0 
    supports-statistics: yes 
    supports-test: no 
    supports-eeprom-access: yes 
    supports-register-dump: no 
    supports-priv-flags: no 
    
    # ethtool -i eth3 
    driver: bnxt_en 
    version: 1.7.0 
    firmware-version: 20.6.156/1.8.1 pkg 20.06.04.07 <<<<<<<<<<<< 
    bus-info: 0000:18:00.1 
    supports-statistics: yes 
    supports-test: no 
    supports-eeprom-access: yes 
    supports-register-dump: no 
    supports-priv-flags: no
    
    • If the firmware version is okay, go to Step 6.

    • If the firmware is not at least version 20.06.04.07, then go to My Oracle Support and follow the instructions in DOC ID 2358976.1 for how to download and apply the firmware update.

  6. If your switch speed is set to 10Gb, then run the following ethtool commands to manually force the speed to 10000 and turn off auto-negotiation (a verification check follows this procedure).

    ethtool -s eth2 speed 10000 autoneg off 
    ethtool -s eth3 speed 10000 autoneg off
    

    Note:

    This is required because the ports are set to 25Gb at the factory and do not auto-negotiate down to 10Gb when a 10Gb optical transceiver is used.
  7. Run the command oakcli configure firstnet to configure the network.

    # oakcli configure firstnet
    
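To confirm that the forced speed took effect, a hedged check (not part of the original procedure) is to query the link settings again:

# The Speed line should report 10000Mb/s after Step 6 (output format may vary).
ethtool eth2 | grep -i speed
ethtool eth3 | grep -i speed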

Error after running the cleanup script

After running the cleanup.pl script, the following error message appears: DCS-10001:Internal error encountered: Fail to start hand shake.

The following are the steps to reproduce the issue:

  1. Run cleanup.pl on the first node (Node0). Wait until the cleanup script finishes, then reboot the node.

  2. Run cleanup.pl on the second node (Node1). Wait until the cleanup script finishes, then reboot the node.

  3. After both nodes are started, use the command-line interface to list the jobs on Node0. An internal error appears.

    # odacli list-jobs
    DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070
    

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

Restart the dcsagent on Node0 after running the cleanup.pl script.

# initctl stop initdcsagent 
# initctl start initdcsagent
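
You can then rerun the command that originally failed to confirm that the agent is reachable again:

# odacli list-jobs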

Do not create a network on the p1p2 interface

Do not create a network on the p1p2 interface on Oracle Database Appliance X7-2-HA. The p1p2 interface is configured and reserved for high availability.

Hardware Models

Oracle Database Appliance X7-2-HA

Workaround

A workaround is not available.

This issue is tracked with Oracle bug 27048925.

Unable to create an Oracle Database 11g Standard Edition RAC database with Oracle ACFS

Unable to create an Oracle Database 11g Standard Edition RAC database with Oracle Automatic Storage Management Cluster File System (Oracle ACFS) storage.

Standard Edition for Oracle Database 11.2.0.4 includes support for Oracle RAC and Oracle RAC One Node. When trying to create a Standard Edition two-node Oracle RAC database on a multi-node Oracle Database Appliance (HA model) with Oracle ACFS storage, the following message appears: The current home was detected to have Standard Edition licensing. You must choose Oracle ASM for database storage.

Hardware Models

Oracle Database Appliance X7-2-HA, X6-2-HA, X5-2, X4-2, X3-2, and V1

Workaround

A workaround is not available.

This issue is tracked with Oracle bug 27071989.

The DB Console option is disabled when creating an 11.2.0.4 database

When using Oracle Database 12.2.0.1 grid infrastructure (GI) to create an 11.2.0.4 database, the option to configure Oracle Enterprise Manager DB Console is disabled.

An issue with the Enterprise Manager Control (emctl) command line utility and Enterprise Manager Configuration Assistant (emca) occurs when using the 12.2.0.1 GI to create an 11.2.0.4 database.

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L that are using the 12.2.0.1 GI.

Workaround

Manually configure Oracle Enterprise Manager DB Console after creating the database.

If the appliance is a multi-node system, perform the steps on both nodes. The example assumes a multi-node system:

  1. Create a dbconsole.rsp response file, as follows, based on your environment.

    To obtain the cluster name for your environment, run the command $GI_HOME/bin/cemutlo -n

    DB_UNIQUE_NAME=db_unique_name 
    SERVICE_NAME=db_unique_name.db_domain 
    PORT=scan listener port
    LISTENER_OH=$GI_HOME
    SYS_PWD=admin password
    DBSNMP_PWD=admin password
    SYSMAN_PWD=admin password
    CLUSTER_NAME=cluster name 
    ASM_OH=$GI_HOME
    ASM_SID=+ASM1
    ASM_PORT=asm listener port
    ASM_USER_NAME=ASMSNMP
    ASM_USER_PWD=admin password   
    
  2. Run the following command to configure DB Control using the response file. The command fails with an error; you will use the steps from its output in Step 4.

    $ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster -silent -respFile dbconsole.rsp 
    
    Error securing Database Control. Database Control has not been brought-up on nodes node1 node2
    Execute the following command(s) on nodes: node1 node2
    
    1. Set the environment variable ORACLE_UNQNAME to the Database unique name.
    2. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl config emkey -repos
    -sysman_pwd Password for SYSMAN user -host node -sid  Database unique
    name
    3. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl secure dbconsole
    -sysman_pwd Password for SYSMAN user -host node -sid  Database unique
    name
    4. /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl start dbconsole
    
    To secure Em Key, run /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl  config emkey -remove_from_repos -sysman_pwd Password for SYSMAN user
    
  3. Use the vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid.

  4. Run the steps reported by emca in Step 2 with the proper values (an illustrative example follows this procedure).

  5. Configure the dbconsole on Node0, so that the agent on Node1 reports to the dbconsole on Node0:
    $ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE node0
    host -EM_NODE_LIST node1 host -DB_UNIQUE_NAME db_unique_name 
    -SERVICE_NAME db_unique_name.db_domain
    
  6. If the appliance is a multi-node system, configure the dbconsole for the second node, so that the agent on Node0 reports to the dbconsole on Node1:
    $ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE node1
    host -EM_NODE_LIST node0 host -DB_UNIQUE_NAME db_unique_name 
    -SERVICE_NAME db_unique_name.db_domain
    
  7. On the second node, use the vi editor to open $ORACLE_HOME/bin/emctl, then change the setting CRS_HOME= to CRS_HOME=/u01/app/12.2.0.1/grid.

  8. Check the db console configuration status.

    # /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl  status agent
       - https://public IP for Node0:1158/em
       - https://public IP for Node1:1158/em
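
As an illustration of Step 4, the emctl commands reported by emca might be run as follows on one node; the database unique name (mydb), node name (node0), and SYSMAN password placeholder are hypothetical values, not output from your environment.

# Hypothetical example of Step 4; substitute the values for your environment.
export ORACLE_UNQNAME=mydb
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl config emkey -repos -sysman_pwd <sysman_password> -host node0 -sid mydb
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl secure dbconsole -sysman_pwd <sysman_password> -host node0 -sid mydb
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/emctl start dbconsole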