9 Managing Storage

Understand the storage options and how to manage storage for your Oracle Database Appliance deployment.

About Managing Storage

You can add storage at any time without shutting down your databases or applications.

Oracle Database Appliance uses raw storage to protect data in the following ways:

  • Fast Recovery Area (FRA) backup. FRA is a storage area (directory on disk or Oracle ASM diskgroup) that contains redo logs, control file, archived logs, backup pieces and copies, and flashback logs.

  • Mirroring. Double or triple mirroring provides protection against mechanical issues.

The amount of available storage is determined by the location of the FRA backup (external or internal) and whether double or triple mirroring is used. External NFS storage is supported for online backups, data staging, or additional database files.
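As a rough illustration of how mirroring reduces usable capacity, the sketch below divides a hypothetical raw capacity by the number of copies kept. The figures are illustrative only, not Oracle Database Appliance sizing guidance:

```shell
#!/bin/sh
# Illustrative only: usable capacity shrinks with the mirroring level.
# raw_tb is a made-up raw DATA capacity, not an ODA specification.
raw_tb=12
double=$((raw_tb / 2))   # double mirroring keeps two copies of each extent
triple=$((raw_tb / 3))   # triple mirroring keeps three copies of each extent
echo "Double mirroring usable: ${double} TB"
echo "Triple mirroring usable: ${triple} TB"
```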

Oracle Database Appliance X7-2M and X7-2-HA models provide storage expansion options from the base configuration.

Note:

The storage expansion shelf is no longer available for Oracle Database Appliance X7-2-HA and other older models. You can repurpose an existing storage expansion shelf from one Oracle Database Appliance system to another.

When you add storage, Oracle Automatic Storage Management (Oracle ASM) automatically rebalances the data across all of the storage including the new drives. Rebalancing a disk group moves data between disks to ensure that every file is evenly spread across all of the disks in a disk group and all of the disks are evenly filled to the same percentage. Oracle ASM automatically initiates a rebalance after storage configuration changes, such as when you add disks.
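Because the rebalance runs in the background, it can be useful to wait for it to finish before making further storage changes. The sketch below shows the shape of such a polling loop; check_rebalance is a hypothetical stub standing in for a real query of the GV$ASM_OPERATION view (run as the grid user), which reports in-flight rebalance operations:

```shell
#!/bin/sh
# Sketch: wait until Oracle ASM reports no rebalance in progress.
# check_rebalance is a stub; in practice you would count rows in
# GV$ASM_OPERATION via sqlplus as the grid user.
check_rebalance() {
  echo "0"   # stub result: 0 rebalance operations running
}
while [ "$(check_rebalance)" != "0" ]; do
  echo "Rebalance still running; waiting 60 seconds..."
  sleep 60
done
echo "No rebalance in progress; safe to continue."
```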

The redundancy level for FLASH is based on the DATA and RECO selection. If you choose High redundancy (triple mirroring), then FLASH is also High redundancy.

WARNING:

Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. Do not pull a drive while its LED is amber or green. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. If you have more than one disk to replace, complete the replacement of one disk before starting the replacement of the next disk.

Storage on Single Node Platforms

Review storage options on Oracle Database Appliance single node platforms.

Storage Options for Single Node Systems

Oracle Database Appliance X7-2S and X7-2M have NVMe storage configurations with storage expansion options.

Table 9-1 Storage Options for Oracle Database Appliance X7-2S and X7-2M

Base Configuration

  • Oracle Database Appliance X7-2S: Two (2) 6.4 TB NVMe drives populated in slots 0 and 1.
  • Oracle Database Appliance X7-2M: Two (2) 6.4 TB NVMe drives populated in slots 0 and 1.

Expansion Options

  • Oracle Database Appliance X7-2S: None.
  • Oracle Database Appliance X7-2M, either of the following:
    - Three (3) 6.4 TB NVMe drives populated in slots 2 to 4. Order Qty 1 - 7117431 (3-pack of 6.4 TB NVMe SSD) and upgrade to Oracle Database Appliance release 18.8 or later.
    - Six (6) 6.4 TB NVMe drives populated in slots 2 to 7. Order Qty 2 - 7117431 (3-pack of 6.4 TB NVMe SSD) and upgrade to Oracle Database Appliance release 18.8 or later.

Adding NVMe Storage Disks

Depending on the available drives, you can expand Oracle Database Appliance X7-2M storage to add NVMe disks or replace existing NVMe disks.

Use the ODAADMCLI commands to perform appliance storage maintenance tasks, including performing storage diagnostics and collecting diagnostic logs for storage components.

Preparing for a Storage Upgrade

Review and perform these best practices before adding storage.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.

    # odacli describe-component
  2. Check the disk health of the existing storage disks.

    Use the default checks option to check the NetworkComponents, OSDiskStorage, SharedStorage, and SystemComponents.

    # odaadmcli validate -d
  3. Run the odaadmcli show diskgroup command to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.

  4. Use orachk to confirm Oracle ASM and CRS health.

Adding NVMe Storage Disks

The default configuration for Oracle Database Appliance X7-2S or X7-2M includes two (2) NVMe disks. You cannot expand storage for Oracle Database Appliance X7-2S.

For Oracle Database Appliance X7-2M, you can expand storage by adding three (3) additional disks for a total of five (5) NVMe disks or by adding six (6) additional disks for a total of eight (8) NVMe disks. When you expand storage, adding just one or two NVMe drives is not supported.
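The three-or-six rule above can be expressed as a simple guard. valid_expansion below is a hypothetical helper for illustration, not an ODA command:

```shell
#!/bin/sh
# Sketch: X7-2M storage expansion accepts only 3 or 6 additional NVMe disks.
valid_expansion() {
  case "$1" in
    3|6) echo "supported" ;;
    *)   echo "unsupported" ;;
  esac
}
valid_expansion 3   # supported
valid_expansion 1   # unsupported: single-disk expansion is not allowed
```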

WARNING:

Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. Do not pull a drive while its LED is amber or green. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. If you have more than one disk to replace, complete the replacement of one disk before starting the replacement of the next disk.

Follow these steps to add NVMe storage disks:

  1. Before adding the NVMe disks, ensure that the current disks are online in oakd and Oracle ASM. Otherwise, the prechecks fail. For example, for a three-disk expansion into slots 2 to 4, the disks in slots 0 and 1 must be online in Oracle ASM and oakd. For a three-disk expansion into slots 5 to 7 when slots 0 to 4 are filled, all disks in slots 0 to 4 must be online. For a six-disk expansion into slots 2 to 7, the disks in slots 0 and 1 must be online.
  2. Insert disks one at a time in the slots and power on the device.
    For example, to add three (3) NVMe disks, insert the disks in slots 2 to 4. To add six (6) NVMe drives, insert the disks in slots 2 to 7.
    # odaadmcli power disk on slot_number
    For example, when adding three (3) NVMe disks:
    # odaadmcli power disk on pd_02 
    # odaadmcli power disk on pd_03
    # odaadmcli power disk on pd_04

    Allow at least one minute between inserting each disk.

  3. Run the odaadmcli expand storage command to add the new storage drives:
    # odaadmcli expand storage -ndisk number_of_disks
    For example, to add three (3) NVMe drives:
    # odaadmcli expand storage -ndisk 3
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
  4. Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
    # odaadmcli show disk
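A quick way to scan the output of the final step for trouble is to filter for any disk that is not ONLINE and Good. The block below runs against a canned sample; the column layout is illustrative and the real odaadmcli show disk output may differ:

```shell
#!/bin/sh
# Sketch: flag disks whose state is not ONLINE/Good in saved 'show disk' output.
# The sample text is illustrative, not captured from a real appliance.
sample='NAME   PATH          TYPE  STATE   STATE_DETAILS
pd_00  /dev/nvme0n1  NVD   ONLINE  Good
pd_01  /dev/nvme1n1  NVD   ONLINE  Good
pd_02  /dev/nvme2n1  NVD   ONLINE  Good'
bad=$(printf '%s\n' "$sample" | awk 'NR > 1 && ($4 != "ONLINE" || $5 != "Good")')
if [ -z "$bad" ]; then
  echo "All disks ONLINE and Good"
else
  printf 'Attention needed:\n%s\n' "$bad"
fi
```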

Storage on Multi Node Platforms

Review storage options on Oracle Database Appliance multi node platforms.

About Expanding Storage on Multi-Node Systems

Oracle Database Appliance X7-2-HA platforms have options for high performance and high capacity storage configurations.

Oracle Database Appliance X7-2-HA is shipped with the base configuration of 16 TB SSD raw storage for DATA and 3.2 TB SSD raw storage for REDO, leaving 15 available slots to expand the storage. If you choose to expand the storage, you can fill the 15 slots with either SSD or HDD drives. For a high performance configuration, you can expand storage by adding 15 SSDs. If you want to add 15 HDDs, then the high performance configuration changes to a high capacity configuration. In this case, you must reimage and redeploy the appliance.

In all configurations, the base storage and the storage expansion shelf each have four (4) 800 GB SSDs for the REDO disk group and five (5) 3.2 TB SSDs (used for DATA/RECO in the SSD option or for FLASH in the HDD option).

Note:

With Oracle Database Appliance release 18.8, you can add 7.68 TB SSDs to a configuration with existing 3.2 TB SSDs. The 7.68 TB SSDs are partitioned down to match the 3.2 TB SSD capacity. The 3.2 TB SSDs and the expansion shelf are no longer available. However, if you replace all your existing 3.2 TB SSDs with 7.68 TB SSDs, then the entire 7.68 TB capacity of the SSDs is utilized for storage.
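The capacity effect of the partition-down behavior can be seen with some back-of-the-envelope arithmetic; the disk counts below are illustrative:

```shell
#!/bin/sh
# Illustrative arithmetic: five existing 3.2 TB SSDs plus five added 7.68 TB
# SSDs (each partitioned down to 3.2 TB) versus ten 7.68 TB SSDs after a
# full replacement.
mixed=$(awk 'BEGIN { print 5 * 3.2 + 5 * 3.2 }')   # added disks capped at 3.2 TB
replaced=$(awk 'BEGIN { print 10 * 7.68 }')        # full capacity after replacement
echo "Mixed 3.2/7.68 TB raw: ${mixed} TB"
echo "All 7.68 TB raw: ${replaced} TB"
```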

High Performance

A high performance configuration uses solid state drives (SSDs) for DATA and REDO storage. The base configuration has 16 TB SSD raw storage for DATA and 3.2 TB SSD raw storage for REDO.

You can add up to fifteen 7.68 TB SSDs (available in five-packs). Note that 3.2 TB SSDs are no longer available. To support 7.68 TB SSDs in the system, ensure that your deployment is on Oracle Database Appliance release 18.7 or later.

High Capacity

A high capacity configuration uses a combination of SSD and HDD drives.

The base configuration has 16 TB SSD raw storage for the FLASH disk group and 3.2 TB SSD raw storage for REDO.

With Oracle Database Appliance release 18.8, the following expansion options are available:

  • Base shelf: Additional fifteen 14 TB HDDs (available in a fifteen-pack). Note that 10 TB HDDs are no longer available. To support 14 TB HDDs in the system, ensure that your deployment is on Oracle Database Appliance release 18.7 or later.

  • Storage Expansion shelf: The expansion shelf is no longer available.

Note:

When you expand storage to include HDD on the base storage shelf, you must reposition the drives to the correct slots and redeploy the appliance after adding the HDD drives.

Note:

The 10 TB HDDs are no longer available. To expand storage, use fifteen-packs of 14 TB HDD drives.

Table 9-2 Storage Options for Oracle Database Appliance X7-2-HA

Base Configuration

  • SSD-only configuration for high performance, JBOD:
    - Four (4) 800 GB SSD
    - Five (5) 3.2 TB SSD
  • SSD and HDD configuration for high capacity, JBOD:
    - Four (4) 800 GB SSD
    - Five (5) 3.2 TB SSD
    - Fifteen (15) 10 TB HDD

Base Shelf Expansion Options

  • SSD-only configuration for high performance:
    With Oracle Database Appliance release 18.8, you can add 7.68 TB SSDs to a configuration with existing 3.2 TB SSDs. The 7.68 TB SSDs are partitioned down to match the 3.2 TB SSD capacity. The 3.2 TB SSDs are no longer available.
    With Oracle Database Appliance release 18.8, you can also reimage and redeploy the appliance to completely replace the 3.2 TB SSDs with 7.68 TB SSDs. The entire 7.68 TB capacity of the SSDs is then utilized for storage.
    Order 7600790: Five-pack of 7.68 TB SSD drives.
  • SSD and HDD configuration for high capacity:
    You can use fifteen-packs of 14 TB HDD drives. The 10 TB HDDs are no longer available.
    With Oracle Database Appliance release 18.8, you can also replace all five 3.2 TB SSDs in the base configuration with 7.68 TB SSDs.
    If you replace all HDDs or SSDs in the base configuration with higher capacity disks, then you must reimage and redeploy the appliance with Oracle Database Appliance release 18.8.
    Order Qty 1 - 7600792: Fifteen-pack of 14 TB HDD drives.
    Order 7600790: Five-pack of 7.68 TB SSD drives.

Storage Expansion Shelf

  • The expansion shelf is no longer available for either configuration.

Preparing for a Storage Upgrade

Review and perform these best practices before adding storage to the base shelf or adding the expansion shelf.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.
  2. Confirm both nodes are at the same version and patch bundle level for software and firmware.
    # odacli describe-component  
  3. Check the disk health of the existing storage disks.

    Run the check on both nodes and use the default checks option to check the NetworkComponents, OSDiskStorage, SharedStorage, and SystemComponents.

    # odaadmcli validate -d
  4. Run the odaadmcli show diskgroup command on each node to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.
    # odaadmcli show diskgroup DATA
    # odaadmcli show diskgroup RECO
    # odaadmcli show diskgroup REDO
  5. Confirm Oracle ASM and CRS health on both nodes.
    Run orachk on each node. If there is a problem connecting to either node, then check the /etc/bashrc file and remove (or comment out) any values set in the profile for the root, oracle, and grid users.
  6. Confirm communications between the nodes and that SSH is working using the same password for the oracle, root, and grid users.
    From each node:
    1. ssh to both nodes.
    2. Ping both nodes.
  7. Confirm there is at least 10 GB of space available on each node.
    [root@oda]# df -h
    [root@odb]# df -h 
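Step 7's check can be scripted rather than eyeballed. The sketch below parses POSIX `df -k` output for the root filesystem; the mount point to check on a real appliance may differ:

```shell
#!/bin/sh
# Sketch: verify at least 10 GB of free space before expanding storage.
# Checks "/" here; on a real deployment substitute the relevant mount point.
need_kb=$((10 * 1024 * 1024))
avail_kb=$(df -kP / | awk 'NR == 2 { print $4 }')
if [ "$avail_kb" -ge "$need_kb" ]; then
  echo "OK: $((avail_kb / 1024 / 1024)) GB available"
else
  echo "WARNING: less than 10 GB available"
fi
```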

Preparing for a Storage Upgrade for a Virtualized Platform

Review and perform these best practices before adding storage to the base shelf or adding the expansion shelf.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.
  2. Confirm both nodes are at the same version and patch bundle level for software and firmware.
    # oakcli show version -detail 
    # oakcli inventory -q 

    Note:

    If oakd is not running on either node, fix the problem before adding storage.
  3. Check the disk health of the existing storage disks.

    Run the check on both nodes and use the default checks option to check the NetworkComponents, OSDiskStorage, SharedStorage, and SystemComponents.

    # oakcli validate -d
  4. Run the command oakcli show diskgroup on each node to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.
    # oakcli show diskgroup data
    # oakcli show diskgroup reco
    # oakcli show diskgroup redo
  5. Confirm Oracle ASM and CRS health on both nodes.
    Run the oakcli orachk command on each node. If there is a problem connecting to either node, then check the /etc/bashrc file and remove (or comment out) any values set in the profile for the root, oracle, and grid users.

    Run oakcli orachk on Node 0:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm1       Yes           No          Yes              No        No        No          ........
    -------------------------------------------------------------------------------------------------------
    
     ...

    Run oakcli orachk on Node 1:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm2      Yes           Yes           Yes            Yes       Yes        Yes      b22S2 b23S2 b24S2
    -------------------------------------------------------------------------------------------------------
    
    ...
  6. Confirm communications between the nodes and that SSH is working using the same password for the oracle, root, and grid users.
    From each node:
    1. ssh to both nodes.
    2. Ping both nodes.
  7. Confirm that there is at least 10 GB of space available on each node.
    [root@oda]# df -h
    [root@odb]# df -h 
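The Oracle Stack Status rows in the orachk output above can also be checked mechanically. The sketch below inspects one row (columns as in the sample: host, CRS installed, ASM home, RDBMS installed, CRS up, ASM up, RDBMS up) and flags any "No" in the three UP columns; the row text mirrors the Node 0 sample:

```shell
#!/bin/sh
# Sketch: flag a node whose orachk Oracle Stack Status row shows a component down.
# The row below mirrors the illustrative Node 0 sample output.
row='odax3rm1 Yes No Yes No No No'
host=$(printf '%s' "$row" | awk '{ print $1 }')
if printf '%s\n' "$row" | awk '{ exit !($5 == "No" || $6 == "No" || $7 == "No") }'; then
  echo "Stack down on ${host}; fix before adding storage"
else
  echo "Stack healthy on ${host}"
fi
```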

Adding Solid-State Drives (SSDs) for Data Storage

Add a pack of solid-state drives (SSDs) for data storage into the existing Oracle Database Appliance base configuration to fully populate the base storage shelf.

If you need to add storage to the base configuration, you can order one, two, or three 5-packs of SSDs to complete the base configuration on Oracle Database Appliance X7-2-HA.

Note:

You can add SSDs only to the base storage shelf, for a high-performance configuration. For a high-capacity configuration, you can expand storage to use HDDs.

Before adding the disks to the system, ensure that Oracle Database Appliance is on the latest update version.

The 3.2 TB SSDs are no longer available. You can use the 5-pack of 7.68 TB SSDs for storage expansion.
  1. Insert disks one at a time in the slots.

    To add one 5-pack of SSDs, insert the disks in slots 5 to 9. To add two 5-packs of SSDs, insert the disks in slots 5 to 14. To add three 5-packs of SSDs, insert the disks in slots 5 to 19.

    Note:

    Allow at least one minute between inserting each disk.
    After all disks are added, go to Step 2.
  2. Run the odaadmcli show ismaster command to determine which node is the master.
    # odaadmcli show ismaster
  3. Run the odaadmcli expand storage command on the master node.
    # odaadmcli expand storage -ndisk number_of_disks -enclosure enclosure_number

    where number_of_disks is the number of disks to be added and enclosure_number is the enclosure of the new disks, either 0 or 1.
    

    For example:

    
    # odaadmcli expand storage -ndisk 5 -enclosure 0
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
    It takes 10 to 12 minutes to add all of the disks to the configuration.
  4. Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
    # odaadmcli show disk
  5. Verify that the disks in slots 5 to 9 are added to Oracle Automatic Storage Management (Oracle ASM).
    1. Run the asm_script as the grid user to verify that the disks in slots 5 to 9 are added to Oracle Automatic Storage Management (Oracle ASM). If the 5 disks are successfully added (CACHED and MEMBER), then go to Step 7.
      # su - grid -c "/opt/oracle/oak/bin/stordiag/asm_script.sh 1 6"
      

      For example:

      #/opt/oracle/oak/bin/stordiag/asm_script.sh 1 6 | grep CACHED
      .......
      /dev/mapper/SSD_E0_S05_1399652120p1 SSD_E0_S05_1399652120P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S05_1399652120p2 SSD_E0_S05_1399652120P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S06_1399645200p1 SSD_E0_S06_1399645200P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S06_1399645200p2 SSD_E0_S06_1399645200P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S07_1399646692p1 SSD_E0_S07_1399646692P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S07_1399646692p2 SSD_E0_S07_1399646692P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S08_1399649840p1 SSD_E0_S08_1399649840P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S08_1399649840p2 SSD_E0_S08_1399649840P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S09_1399649424p1 SSD_E0_S09_1399649424P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S09_1399649424p2 SSD_E0_S09_1399649424P2 3 NORMAL ONLINE CACHED MEMBER
    2. If the disks are not added to Oracle ASM, then add them manually. As grid user, execute the sqlplus '/as sysasm' command on the master node to add the disks to Oracle ASM.

      For a system without Oracle Automatic Storage Management Filter Driver (Oracle ASM Filter Driver) configured, add the Oracle ASM disks as follows:

      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ data add disk 
      '/dev/mapper/SSD_E0_S05_1399764284p1' name SSD_E0_S05_1399764284p1, 
      '/dev/mapper/SSD_E0_S06_1399765076p1' name SSD_E0_S06_1399765076p1, 
      '/dev/mapper/SSD_E0_S07_1399765116p1' name SSD_E0_S07_1399765116p1, 
      '/dev/mapper/SSD_E0_S08_1399765484p1' name SSD_E0_S08_1399765484p1, 
      '/dev/mapper/SSD_E0_S09_1399765504p1' name SSD_E0_S09_1399765504p1;  
      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ reco add disk 
      '/dev/mapper/SSD_E0_S05_1399764284p2' name SSD_E0_S05_1399764284p2, 
      '/dev/mapper/SSD_E0_S06_1399765076p2' name SSD_E0_S06_1399765076p2, 
      '/dev/mapper/SSD_E0_S07_1399765116p2' name SSD_E0_S07_1399765116p2, 
      '/dev/mapper/SSD_E0_S08_1399765484p2' name SSD_E0_S08_1399765484p2, 
      '/dev/mapper/SSD_E0_S09_1399765504p2' name SSD_E0_S09_1399765504p2;  
      
      

      For a system with Oracle Automatic Storage Management Filter Driver (Oracle ASM Filter Driver) configured, add the Oracle ASM disks as follows:

      SQL> alter diskgroup /*+ _OAK_AsmCookie */ data add disk 
      'AFD:SSD_E0_S05_1399764284P1' name SSD_E0_S05_1399764284p1, 
      'AFD:SSD_E0_S06_1399765076P1' name SSD_E0_S06_1399765076p1, 
      'AFD:SSD_E0_S07_1399765116P1' name SSD_E0_S07_1399765116p1, 
      'AFD:SSD_E0_S08_1399765484P1' name SSD_E0_S08_1399765484p1, 
      'AFD:SSD_E0_S09_1399765504P1' name SSD_E0_S09_1399765504p1;  
      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ reco add disk 
      'AFD:SSD_E0_S05_1399764284P2' name SSD_E0_S05_1399764284p2, 
      'AFD:SSD_E0_S06_1399765076P2' name SSD_E0_S06_1399765076p2, 
      'AFD:SSD_E0_S07_1399765116P2' name SSD_E0_S07_1399765116p2, 
      'AFD:SSD_E0_S08_1399765484P2' name SSD_E0_S08_1399765484p2, 
      'AFD:SSD_E0_S09_1399765504P2' name SSD_E0_S09_1399765504p2;  
      
  6. Use the odaadmcli show validation storage errors command to show hard storage errors.
    Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
    # odaadmcli show validation storage errors
  7. Use the odaadmcli show validation storage failures command to show soft validation errors.
    A typical soft disk error would be an invalid version of the disk firmware.
    # odaadmcli show validation storage failures
  8. Confirm that the oak_storage_conf.xml file shows the number of disks added on both nodes, after the addition. For example, if you added 10 disks to the base configuration, then the oak_storage_conf.xml file must show 19.
    #cat /opt/oracle/oak/conf/oak_storage_conf.xml
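The final check of the disk count can also be scripted. In the sketch below the XML is inline and the element name numberOfDisks is an assumption for illustration; inspect your own oak_storage_conf.xml for the actual tag:

```shell
#!/bin/sh
# Sketch: extract a disk count from an oak_storage_conf.xml-style file.
# <numberOfDisks> is an assumed element name; verify against your file.
sample='<Conf><numberOfDisks>19</numberOfDisks></Conf>'
count=$(printf '%s' "$sample" | sed -n 's/.*<numberOfDisks>\([0-9]*\)<\/numberOfDisks>.*/\1/p')
echo "Configured disks: ${count}"
```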

Adding the Storage Expansion Shelf

Use the following procedure only if you want to repurpose an existing storage expansion shelf from one Oracle Database Appliance system to another.

Note:

The storage expansion shelf is no longer available for Oracle Database Appliance X7-2-HA and other older models. Use the following procedure only if you want to repurpose an existing storage expansion shelf from one Oracle Database Appliance system to another. If an existing storage shelf is repurposed, that is, moved from a system where oakd and Oracle ASM were configured, then you must clean up the disks on the second JBOD before adding them to the new deployment. See the topic Performing Secure Erase of Data on Storage Disks in this guide.
You must fully populate the base configuration before you can add an expansion shelf. If you add an expansion shelf, the shelf must have the same disk storage configuration as the base storage shelf.

Note:

You can follow the same procedure to add storage to the base configuration on a Virtualized Platform by using the oakcli equivalents of the odacli or odaadmcli commands in the procedure.

Note:

Oracle recommends that you add a storage expansion shelf when you have relatively little activity on your databases. When the system discovers the new storage, Oracle Automatic Storage Management (Oracle ASM) automatically rebalances the disk groups. The rebalance operation may degrade database performance until the operation completes.
  1. Install and cable the storage expansion shelf, but do not power on the expansion shelf.

    Caution:

    Review cabling instructions carefully to ensure that you have carried out cabling correctly. Incorrect connections can cause data loss when adding a storage expansion shelf to Oracle Database Appliance with existing databases.

  2. If this is a new deployment or re-image of Oracle Database Appliance, perform the following steps in order:
    1. Power on the base storage.
    2. Power on Node 0.
    3. Power on Node 1.

    Caution:

    Do not power on the expansion shelf yet.
  3. Verify that both nodes plus the base storage shelf are up and running. Log into each server node and run the odacli validate-storagetopology command to confirm that the base configuration cabling is correct.
    # odacli validate-storagetopology
     ...
          INFO  : Check if JBOD powered on
     SUCCESS    : JBOD : Powered-on
          INFO  : Check for correct number of EBODS(2 or 4)
     SUCCESS    : EBOD found : 2
          INFO  : Check for overall status of cable validation on Node0
     SUCCESS    : Overall Cable Validation on Node0
     SUCCESS    : JBOD Nickname set correctly : Oracle Database Appliance - E0
    Run the command to confirm that the two server nodes are properly cabled to the base storage shelf and that all disks are online, in a good state, and added to the existing disk groups on both nodes. If there are any failures, then fix the cabling before proceeding to the next step.

    Note:

    If the output shows that EBOD found is 2, then you only have the base storage shelf. If EBOD found is 4, then you have a base storage shelf and an expansion shelf.

    Note:

    If the JBOD was configured earlier, then the EBOD found message is displayed. If an unconfigured JBOD is added, then a warning message is displayed.
     # odacli validate-storagetopology
     ...
    WARNING : JBOD Nickname is incorrectly set to :
  4. Power on the storage expansion shelf.
  5. Log in to each server node and run the odacli validate-storagetopology command to validate the storage cabling and confirm that the new storage shelf is recognized.
    
    # odacli validate-storagetopology
    
      INFO    : Check if JBOD powered on
      SUCCESS : 2JBOD : Powered-on                                               
      INFO    : Check for correct number of EBODS(2 or 4)
      SUCCESS : EBOD found : 4                                                   
       ...
       ...
    
       INFO    : Check for overall status of cable validation on Node0
       SUCCESS : Overall Cable Validation on Node0            
       SUCCESS : JBOD0 Nickname set correctly : Oracle Database Appliance - E0
       SUCCESS : JBOD1 Nickname set correctly : Oracle Database Appliance - E1                 
    Look for the following indicators that both storage shelves are recognized:
    • When there are two shelves, the JBOD (just a bunch of disks) is numbered. For example:
      SUCCESS : 2JBOD : Powered-on
    • When both shelves are recognized, the EBOD found value is 4.
      SUCCESS : EBOD found : 4
    • When the expansion shelf is cabled properly, the nickname is E1. For example:

              SUCCESS : JBOD0 Nickname set correctly : Oracle Database Appliance - E0
              SUCCESS : JBOD1 Nickname set correctly : Oracle Database Appliance - E1  

    Fix any errors before proceeding.

  6. Run the odaadmcli show disk command to ensure that all disks in the expansion shelf are listed, are online, and are in a good state.
    # odaadmcli show disk
    When all disks are online and in a good state, proceed to the next step.
  7. Run the odaadmcli show enclosure command to check the health of components in the expansion shelf.
    # odaadmcli show enclosure
  8. Run the odaadmcli show ismaster command on Node 0 to confirm that Node 0 is the master.
    # odaadmcli show ismaster
  9. Run the odaadmcli expand storage command on the master node.
    # odaadmcli expand storage -ndisk 24 -enclosure 1 
    
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
    It takes approximately 30 to 40 minutes to add all of the disks to the configuration.
  10. Use the odaadmcli show validation storage errors command to show hard storage errors.
    Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
    # odaadmcli show validation storage errors
  11. Use the odaadmcli show validation storage failures command to show soft validation errors.
    A typical soft disk error would be an invalid version of the disk firmware.
    # odaadmcli show validation storage failures
  12. Run the odacli describe-component command to verify that all firmware components in the storage expansion are current.
    # odacli describe-component
  13. If needed, update the storage shelf and then run the odacli describe-component command to confirm that the firmware is current.
    # odaadmcli update
    # odacli describe-component
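The EBOD count reported by validate-storagetopology maps directly to the shelf configuration described above. shelves_for_ebod below is a hypothetical helper for illustration, not an ODA command:

```shell
#!/bin/sh
# Sketch: interpret the 'EBOD found' value from validate-storagetopology output.
shelves_for_ebod() {
  case "$1" in
    2) echo "base storage shelf only" ;;
    4) echo "base storage shelf plus expansion shelf" ;;
    *) echo "unexpected EBOD count: $1" ;;
  esac
}
shelves_for_ebod 2
shelves_for_ebod 4
```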