C Issues with Oracle Database Appliance X6-2S, X6-2M, and X6-2L

The following are known issues with deploying, updating, and managing Oracle Database Appliance X6-2S, X6-2M, and X6-2L:

GI upgrade to 12.2.1.2 fails due to incorrect permissions in the config.sh file

Unless the workaround is applied before patching, the upgrade to 12.2.1.2 fails during grid patching.

The permissions on the /u01/app/oraInventory/locks directory are not sufficient for the grid infrastructure (GI) patch to access the inventory locks directory, and the upgrade fails.

Perform the workaround before applying the GI patch to prevent the issue.

If you do not apply the workaround before upgrading to 12.2.1.2, errors similar to the following occur and the upgrade fails:

There is no directory as: /u01/app/12.2.0/grid/perl/bin/ exist in the server   
ERROR : Ran '/bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh"' 

and the command returns code 127. The output is as follows:

/u01/app/12.2.0.1/grid/crs/config/config.sh: line 48: /u01/app/12.2.0/grid/perl/bin/perl: No such file or directory
ERROR : /bin/su grid -c "/opt/oracle/oak/onecmd/tmp/gridconfig.sh" did not complete successfully.
Exit code 127 #Step -1#

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

To prevent the issue, perform the following steps before upgrading Oracle Database Appliance:

  1. Check oraInventory for a locks directory.

    ls -al /u01/app/oraInventory/locks
    
    • If the locks directory does not exist on either node, then there is no issue.

    • If the locks directory exists on both nodes, then go to Step 2.

    • If the locks directory exists only on node 1, then see MOS Note 2360709.1 for how to check for and detach Oracle_Home on the second node before removing the locks directory.

    • If the locks directory exists only on the second node, then see MOS Note 2360709.1.

  2. Remove the locks directory on both nodes. (A combined sketch of steps 1 and 2 follows this procedure.)

    rm -R /u01/app/oraInventory/locks
    
  3. Perform the upgrade.
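The checks in steps 1 and 2 can be combined into a short script. The following is a minimal sketch, assuming the standard /u01/app/oraInventory/locks path shown above; run it as root on each node, and consult MOS Note 2360709.1 before removing a locks directory that exists on only one node of a two-node system.

    #!/bin/bash
    # Pre-upgrade check: remove a stale oraInventory locks directory.
    # On two-node systems, see MOS Note 2360709.1 before removing a
    # directory that exists on only one node.
    LOCKS_DIR=/u01/app/oraInventory/locks

    if [ -d "$LOCKS_DIR" ]; then
        ls -al "$LOCKS_DIR"        # Step 1: inspect the locks directory
        rm -R "$LOCKS_DIR"         # Step 2: remove it before patching
        echo "Removed $LOCKS_DIR"
    else
        echo "No locks directory found; no action needed."
    fi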

Note:

If you do not perform the workaround before upgrading, a failure might happen at a different point and require a different procedure, depending on the point of failure. See My Oracle Support Note 2360709.1 for more information.

Unable to patch an empty Oracle Database 12.1 dbhome

An issue with Oracle Database auto patch (opatchauto) prevents patching an empty Oracle Database home (dbhome).

When attempting to patch an empty dbhome, an error message similar to the following appears:

  ERROR: 2017-12-19 18:48:02: Unable to apply db patch on the following Homes :  /u01/app/oracle/product/12.1.0.2/dbhome_name

The following is an example excerpt from the dbupdate log:

  OPATCHAUTO-68036: Topology empty. 
  OPATCHAUTO-68036: The topology was empty, unable to proceed. 
  OPATCHAUTO-68036: Check the log for more information. 
  OPatchAuto failed.
opatchauto failed with error code 42

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

The issue occurs when the dbhome does not contain any databases. The workaround is to create a database in the dbhome before patching.
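For example, on appliances managed with odacli, you can create a small placeholder database in the empty home and then patch. This is a sketch: the database name tmpdb and the dbhome_ID value are placeholders, and options can vary by release, so confirm them with odacli create-database -h.

    # /opt/oracle/dcs/bin/odacli list-dbhomes
    # /opt/oracle/dcs/bin/odacli create-database -m -n tmpdb -dh dbhome_ID

The -m option prompts for the administrator password; use the ID reported by list-dbhomes for the empty home.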

Patch zip files require concatenation

The patch to update Oracle Database Appliance to release 12.1.2.12.0 consists of two zip files. You must concatenate the two zip files before updating the repository.

Hardware Models

Oracle Database Appliance X6-2S, X6-2M, and X6-2L 

Workaround

Perform the following to concatenate the files before updating the repository:
  1. Download both zip files for patch 26433721 from My Oracle Support.

    p26433721_1212120_Linux-x86-64_1of2.zip and p26433721_1212120_Linux-x86-64_2of2.zip

  2. Upload the files to a temporary location in the /u01 directory on Oracle Database Appliance, then unzip the files.

    When extracted, the files are named oda-sm-12.1.2.12.0-170920-server_1of2.zippart and oda-sm-12.1.2.12.0-170920-server_2of2.zippart.

  3. Concatenate the two zip files into a single zip file, for example, p26433721_1212120_Linux-x86-64.zip.

    # cat oda-sm-12.1.2.12.0-170920-server_1of2.zippart oda-sm-12.1.2.12.0-170920-server_2of2.zippart > p26433721_1212120_Linux-x86-64.zip

  4. Update the repository.

    # /opt/oracle/dcs/bin/odacli update-repository -f /u01/tmpdir/p26433721_1212120_Linux-x86-64.zip
    
    {   
    "jobId" : "c5288c4f-4a0e-4977-9aa4-4acbf81b65a1",   
    "status" : "Created",   
    "message" : "/u01/tmpdir/p26433721_1212120_Linux-x86-64.zip",   
    "reports" : [ ],   
    "createTimestamp" : "October 7, 2017 06:52:01 AM WSDT",   
    "resourceList" : [ ],   
    "description" : "Repository Update",   
    "updatedTime" : "October 7, 2017 06:52:01 AM WSDT" 
    }
    
  5. Verify that the job completed successfully.

    # odacli describe-job -i c5288c4f-4a0e-4977-9aa4-4acbf81b65a1  
    Job details                                                      
    ----------------------------------------------------------------                      
    ID:  c5288c4f-4a0e-4977-9aa4-4acbf81b65a1             
    Description:  Repository Update                  
    Status:  Success                 
    Created:  October 7, 2017 6:52:01 AM WSDT                 
    Message:  /u01/tmpdir/p26433721_1212120_Linux-x86-64.zip
    Task Name                        Start Time                                     End Time                                         Status 
    -------------------- --------------------------------- ----------------------------------- -------
    Unzip patch bundle      October 7, 2017 6:52:01 AM WSDT     October 7, 2017 6:52:31 AM WSDT      Success  
    
  6. Update the agent, server, and database, as described in “Updating Oracle Database Appliance Software”.
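As a sketch of the usual order (the options shown are the common ones for this release; confirm against the guide referenced below), update the agent first, wait for its job to complete, then update the server, then each Database Home:

    # /opt/oracle/dcs/bin/odacli update-dcsagent -v 12.1.2.12.0
    # /opt/oracle/dcs/bin/odacli update-server -v 12.1.2.12.0
    # /opt/oracle/dcs/bin/odacli update-dbhome -i dbhome_ID -v 12.1.2.12.0

Each command returns a job ID; check it with odacli describe-job before starting the next step.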

 

See Also:

For more information about updating to release 12.1.2.12.0, including how to update the agent, server, and database, see Updating Oracle Database Appliance Software in the Oracle Database Appliance X6-2S/X6-2M/X6-2L Deployment and User’s Guide.

Upgrading an SE database results in an error: Failed to run datapatch

After successfully upgrading an Oracle Database Standard Edition (SE) database to release 12.1.0.2, the following error appears in the log file: Failed to run datapatch

Datapatch is a tool that automates post-patch SQL actions for RDBMS patches. The error affects all Standard Edition databases upgraded to release 12.1.0.2.

The following is an excerpt of the log:

...
The following patches will be applied: 25397136
(DATABASE BUNDLE PATCH 12.1.0.2.170418) 

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 26609798 apply: WITH ERRORS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_XT_2017Sep28_06_58_16.log (errors)
    Error at line 1310: Warning: Package altered with compilation errors.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_95130_2017_09_28_06_57_51/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Sep 28 06:58:51 2017
2017-09-28 06:58:51,867 ERROR [Running DataPatch] []
c.o.d.a.r.s.d.DbOperations:  run datapatch
2017-09-28 06:58:51,867 WARN [Running DataPatch] [] c.o.d.a.r.s.d.DbActions:
Failed to run datapatch.

Hardware Models

Oracle Database Appliance X7-2S, X7-2M, X7-2-HA, X6-2S, X6-2M, X6-2L

Workaround

See My Oracle Support (MOS) Note: Datapatch Known Issues, Doc ID 1609718.1.
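Once the underlying errors are resolved per that note, the post-patch SQL actions can typically be completed by re-running datapatch manually. This is a sketch, assuming the oracle user's environment (ORACLE_HOME and ORACLE_SID) points at the upgraded database:

    $ cd $ORACLE_HOME/OPatch
    $ ./datapatch -verbose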

After replacing a disk, the disk is not added to Oracle ASM

When you replace or add a disk on Oracle Database Appliance X6-2S, X6-2M, or X6-2L, the disk is recognized as good, but it is not added to Oracle Automatic Storage Management (Oracle ASM).

Even when you follow the procedure to add or expand storage and wait the recommended time between tasks, the new disks are not added to Oracle ASM.

Hardware Models

Oracle Database Appliance X6-2S, X6-2M, and X6-2L

Workaround

After expanding, adding, or replacing disks, restart oakd.
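To confirm that the disk was picked up after the restart, you can check the physical disk status and the Oracle ASM disk view. The following is a sketch: odaadmcli show disk reports disk status on these models, and the v$asm_disk query (run as the grid user) shows whether the new disk joined a disk group.

    # odaadmcli show disk
    $ sqlplus / as sysasm
    SQL> SELECT path, mount_status, header_status FROM v$asm_disk;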

This issue is tracked with Oracle bug 26283996.

Unable to upgrade an Oracle Database from version 12.1 to 12.2

When attempting to upgrade an Oracle Database from version 12.1 to 12.2 on a bare metal system, the upgrade pre-check fails and the database is not upgraded.

The job details report displays an internal error message similar to the following:
# odacli describe-job -i database_ID
 Job details                                                      
 ---------------------------------------------------------------- 
                     ID:  7857876b-8289-47c5-a6a9-dc4f9a9c6e19 
            Description:  Database service upgrade with db ids: database ID 
                 Status:  Failure 
                Created:  December 12, 2017 3:39:51 PM CST 
                Message:  DCS-10001:Internal error encountered: Databases failed to upgrade are : [database ID].

The database is not upgraded, but there is no impact on the existing database.

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

Set the pga_aggregate_limit to two times the pga_aggregate_target before upgrading to Oracle Database 12.2. The pga_aggregate_limit must be greater than 2G.
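For example, from SQL*Plus as SYSDBA on the database being upgraded. The 4G value below is a placeholder that assumes a 2G pga_aggregate_target; apply the two-times rule to your own target, keeping the result above 2G:

    SQL> SHOW PARAMETER pga_aggregate_target
    SQL> ALTER SYSTEM SET pga_aggregate_limit = 4G SCOPE=BOTH;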

Unable to create an Oracle ASM Database for Release 12.1

Known issues with Oracle Automatic Storage Management (Oracle ASM) prevent the REDO disk group from mounting for Oracle Database release 12.1.

You cannot create an Oracle ASM database at a version lower than 12.1.0.2.170117 BP (12.1.2.10).

Hardware Models

Oracle Database Appliance X7-2-HA, X7-2S, X7-2M, X6-2S, X6-2M, and X6-2L

Workaround

There is no workaround. If you have an Oracle Database 11.2 or 12.1 database that uses Oracle Automatic Storage Management (Oracle ASM) and you want to upgrade to a higher release of Oracle Database, then you must be on at least Oracle Database Appliance release 12.1.2.12.0 and Database Home 12.1.0.2.170814.
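To check whether a Database Home is already at the required 12.1.0.2.170814 level, you can list its installed patches. A sketch, assuming ORACLE_HOME points at the home in question; look for the DATABASE BUNDLE PATCH entry in the output:

    $ $ORACLE_HOME/OPatch/opatch lspatches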

This issue is tracked with Oracle bugs 21626377 and 21780146. The issues are fixed in Oracle Database 12.1.0.2.170814.