10 Managing Storage

Understand the storage options and how to manage storage for your Oracle Database Appliance deployment.

About Managing Storage

Understand Oracle Database Appliance storage options.

Oracle Database Appliance uses raw storage to protect data in the following ways:

  • Fast Recovery Area (FRA) backup. FRA is a storage area (directory on disk or Oracle ASM diskgroup) that contains redo logs, control file, archived logs, backup pieces and copies, and flashback logs.

  • Mirroring. Double or triple mirroring provides protection against mechanical issues.

The amount of available storage is determined by the location of the FRA backup (external or internal) and whether double or triple mirroring is used. External NFS storage is supported for online backups, data staging, or additional database files.

Oracle Database Appliance X8-2M and X8-2-HA models provide storage expansion options from the base configuration. In addition, on Oracle Database Appliance X8-2-HA multi-node platforms, you can add an optional storage expansion shelf.

The redundancy level for FLASH is based on the DATA and RECO selection. If you choose High redundancy (triple mirroring) for DATA and RECO, then FLASH is also High redundancy.

About Managing Oracle ASM Disks

Understand the Oracle ASM disk management features that Oracle Database Appliance supports.

Oracle Database Appliance enables you to manage your Oracle ASM disks.

Bringing Oracle ASM Disk Groups Online Automatically

Oracle Database Appliance periodically checks the status of Oracle ASM disks in disk groups. If any Oracle ASM disk is OFFLINE due to transient disk errors, then Oracle Database Appliance attempts to bring the disk ONLINE.

Optimizing Oracle ASM Disk Group Rebalance Operations

Oracle Database Appliance ensures that rebalancing of Oracle ASM disks completes as quickly as possible without overloading the system and the disks, so that the system returns to a steady state with the appropriate redundancy. Default thresholds are defined for rebalancing operations, and you can also set custom threshold values. For example:
odacli update-agentconfig-parameters -n ASMRM_CPU_RQ -v 50 -d "CPU RUN QUEUE THRESHOLD" -u
odacli update-agentconfig-parameters -n ASMRM_MAX_HDD_DISK_RQ -v 2 -d "HDD DISK QUEUE THRESHOLD" -u
odacli update-agentconfig-parameters -n ASMRM_MAX_SSD_DISK_RQ -v 32 -d "SSD DISK QUEUE THRESHOLD" -u
odacli update-agentconfig-parameters -n ASMRM_MAX_NVME_DISK_RQ -v 50 -d "NVME DISK QUEUE THRESHOLD" -u

The above command options set custom threshold limits for rebalance monitoring of Oracle ASM disks.
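To review the threshold values currently in effect, you can list the agent configuration parameters by name. This is a sketch; it assumes the odacli list-agentconfig-parameters command is available in your deployed release:

```shell
# List the current value of one rebalance-monitoring threshold
# (repeat with the other ASMRM_* parameter names as needed).
odacli list-agentconfig-parameters -n ASMRM_CPU_RQ
```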

You can monitor rebalance operations using the odacli describe-schedule -i schedule_id and odacli list-scheduled-executions commands.
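For example, the monitoring commands can be run as follows (the schedule ID is a placeholder; copy the actual ID from the output of the first command):

```shell
# Find the schedule ID of the rebalance monitoring job ...
odacli list-scheduled-executions
# ... then describe that schedule using its ID
odacli describe-schedule -i schedule_id
```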

Managing Storage on Single-Node Systems

Understand the storage options for your Oracle Database Appliance X8-2S and X8-2M systems.

About Storage on Oracle Database Appliance X8-2S and X8-2M

Understand the storage for your Oracle Database Appliance single-node system.

Oracle Database Appliance X8-2S has two 6.4 TB NVMe disks that host the DATA and RECO disk groups. Each disk has two partitions, one for DATA and one for RECO. The storage capacity is fixed and cannot be expanded.

Oracle Database Appliance X8-2M has two 6.4 TB NVMe disks that host the DATA and RECO disk groups. Each disk has two partitions, one for DATA and one for RECO. When you first deploy and configure X8-2M in this release, you can expand the storage in multiples of two NVMe drives (adding 2, 4, 6, 8, or 10 disks), up to a maximum of 12 disks.

The table describes the NVMe storage configurations and storage expansion options for single-node systems.

Table 10-1 Storage Options for Oracle Database Appliance X8-2S and X8-2M

Base Configuration

  • Oracle Database Appliance X8-2S: 2 x 6.4 TB NVMe = 12.8 TB NVMe

  • Oracle Database Appliance X8-2M: 2 x 6.4 TB NVMe = 12.8 TB NVMe

Storage addition options

  • Oracle Database Appliance X8-2S: None

  • Oracle Database Appliance X8-2M: Up to 10 additional 6.4 TB NVMe storage drives, added in packs of two, to a maximum of 12 disks, for a total storage of 76.8 TB NVMe.

    Order Qty 1 - 7600927 (2-pack 6.4 TB 2.5-inch NVMe PCIe 3.0 SSD v2 with coral-d bracket for Oracle Database Appliance X8-2M)

Adding NVMe Storage Disks

Depending on the available drives, you can expand Oracle Database Appliance X8-2M storage by adding NVMe disks or replacing existing NVMe disks.

Use ODAADMCLI commands to perform appliance storage maintenance tasks, including performing storage diagnostics and collecting diagnostic logs for storage components.

Preparing for a Storage Upgrade

Review and perform these best practices before adding storage.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.

    # odacli describe-component
  2. Check the disk health of the existing storage disks.

    # odaadmcli show disk
  3. Run the odaadmcli show diskgroup command to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.

    # odaadmcli show diskgroup
  4. Use orachk to confirm Oracle ASM and CRS health.

Adding NVMe Storage Disks

The default configuration for Oracle Database Appliance X8-2S or X8-2M includes two (2) NVMe disks. You cannot expand storage for Oracle Database Appliance X8-2S.

For Oracle Database Appliance X8-2M, you can expand storage by adding 2, 4, 6, 8, or 10 disks, up to a maximum of 12 disks. Adding an odd number of NVMe drives is not supported.

WARNING:

Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. Do not pull the drive while its LED is amber or green. If you have more than one disk to replace, complete the replacement of one disk before starting replacement of the next disk.
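For example, to power off the drive in slot 3 and confirm its state before pulling it, you can use the odaadmcli power disk commands (the slot name pd_03 is illustrative; substitute the slot you are servicing):

```shell
# Power off the NVMe drive in slot 3 before physical removal
odaadmcli power disk off pd_03
# Confirm the drive state before pulling the drive
odaadmcli power disk status pd_03
```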

Follow these steps to add NVMe storage disks:

  1. Before adding the NVMe disks, ensure that the current disks are online in oakd and Oracle ASM; otherwise, the prechecks fail. For example, for a 2-disk expansion into slots 2 and 3, the disks in slots 0 and 1 must be online in Oracle ASM and oakd. Likewise, for a 4-disk expansion into slots 2 to 5, or a 10-disk expansion into slots 2 to 11, all disks in slots 0 and 1 must be online.
  2. Insert disks one at a time in the slots and power on the device.
    For example, to add two (2) NVMe disks, insert the disks in slots 2 and 3. To add four (4) NVMe drives, insert the disks in slots 2 to 5.
    # odaadmcli power disk on slot_number
    For example, when adding four (4) NVMe disks:
    # odaadmcli power disk on pd_02 
    # odaadmcli power disk on pd_03
    # odaadmcli power disk on pd_04
    # odaadmcli power disk on pd_05

    Allow at least one minute between inserting each disk.

  3. Run the odaadmcli expand storage command to add the new storage drives:
    # odaadmcli expand storage -ndisk number_of_disks
    For example, to add four (4) NVMe drives:
    # odaadmcli expand storage -ndisk 4
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
  4. Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
    # odaadmcli show disk

Managing Storage on High-Availability Systems

Understand the storage for your Oracle Database Appliance X8-2-HA system.

About Storage Options for Oracle Database Appliance X8-2-HA

Oracle Database Appliance High-Availability systems have options for high performance and high capacity storage configurations.

The base configuration of the Oracle Database Appliance X8-2-HA hardware model has six slots (slots 0-5) populated with 7.68 TB SSDs for raw storage. If you choose to order and deploy the full storage capacity, then you can fill the remaining 18 slots (slots 6-23) with either SSD or HDD drives. For even more storage, you can add a storage expansion shelf to double the storage capacity of your appliance.

In all configurations, the base storage and the storage expansion shelf each have six SSDs for DATA/RECO in the SSD option or FLASH in the HDD option.

Oracle Database Appliance X8-2-HA does not allocate dedicated SSD drives for REDO disk groups. Instead, the space for REDO logs is allocated on SSD drives as required.

For Oracle ASM storage, the REDO logs are stored in the available disk group space during database creation, based on the database shape selected. For Oracle ACFS storage, the space for REDO logs is allocated during database storage creation, assuming the minimum database shape (odb1s). If you create the database storage without a database, then 4 GB is allocated for REDO logs, assuming the minimum database shape (odb1s). Subsequently, when you create a database with your required shape on the existing database storage, the REDO log space is extended based on the shape of the database.
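For example, after creating a database you can confirm the resulting REDO log sizes from SQL*Plus. This query uses the standard V$LOG dynamic performance view; the sizes reported depend on the database shape you selected:

```sql
-- Show each online REDO log group and its size in MB
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;
```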

On Oracle Database Appliance X8-2-HA High Performance configurations, with only SSD drives, the DATA and RECO disk groups use all the SSD drives whether 6, 12, 18, 24, or 48 with storage expansion shelf. REDO logs are stored in the RECO disk group.

On Oracle Database Appliance X8-2-HA High Capacity configurations, with both HDD and SSD drives, the DATA and RECO disk groups use the HDD drives, and the SSD drives store the FLASH disk group. REDO logs are stored in the FLASH disk group.

On both High Performance and High Capacity configurations, REDO logs are always created on SSD drives, similar to earlier Oracle Database Appliance hardware models. REDO logs are always created with high redundancy irrespective of the redundancy level of the disk group, whether RECO or FLASH.

High Performance

A high performance configuration uses solid state drives (SSDs) for DATA and RECO storage. The base configuration has six disks, each with 7.68 TB SSD raw storage for DATA and RECO.

You can add up to three (3) 6-Pack SSDs on the base configuration, for a total of 184.32 TB SSD raw storage. If you need more storage, you can double the capacity by adding an expansion shelf of SSD drives. The expansion shelf provides an additional 24 SSDs, each with 7.68TB raw storage for DATA and RECO, for a total of another 184.32 TB SSD raw storage.

Adding an expansion shelf requires that the base storage shelf and expansion shelf are fully populated with SSD drives. When you expand the storage using only SSD, there is no downtime.

A system fully configured for high performance has 368.64 TB SSD raw storage for DATA and RECO.

High Capacity

A high capacity configuration uses a combination of SSD and HDD drives.

The base configuration has six disks, each with 7.68 TB SSD raw storage for FLASH.

The following expansion options are available:

  • Base shelf: additional 252 TB HDD raw storage for DATA and RECO (18 HDDs, each with 14 TB storage)

  • Expansion Storage shelf: additional shelf storage configuration must be identical to the storage configuration of the base shelf.

A system fully configured for high capacity has a total of 596.16 TB raw storage for DATA, RECO, and FLASH, with 92.16 TB SSD and 504 TB HDD.

Table 10-2 Storage Options for Oracle Database Appliance X8-2-HA

Base configuration

  Oracle Database Appliance X8-2-HA SSD-Only Configuration for High Performance:

  Base storage shelf contains 6 SSDs of 7.68 TB.

  • 6 x 7.68 TB SSD = 46 TB SSD

  Oracle Database Appliance X8-2-HA SSD and HDD Configuration for High Capacity:

  Base storage shelf is fully populated with a 6-pack of 7.68 TB SSDs and 18 HDDs of 14 TB.

  • 6 x 7.68 TB SSD = 46 TB SSD

  • 18 x 14 TB HDD = 252 TB HDD

  • Total storage on the first JBOD = 298 TB, with 46 TB SSD and 252 TB HDD

Storage addition options

  High Performance (SSD-only):

  Base shelf contains 6 SSDs. Additional 18 SSDs must be added in packs of 6.

  • Base system: 6 x 7.68 TB SSD = 46 TB SSD

  • Adding 6 SSDs: 12 x 7.68 TB SSD = 92 TB SSD

  • Adding 12 SSDs: 18 x 7.68 TB SSD = 138 TB SSD

  • Adding 18 SSDs: 24 x 7.68 TB SSD = 184 TB SSD (full shelf)

  High Capacity (SSD and HDD):

  Not applicable. Base storage shelf is fully populated.

Storage shelf expansion options

  High Performance (SSD-only):

  • The optional expansion storage shelf can only be installed after the base storage shelf is fully populated, and it must have the same configuration as the base storage shelf.
  • Total storage on the base storage shelf = 184 TB SSD
  • Storage on the expansion shelf = 24 x 7.68 TB SSD = 184 TB SSD
  • Total storage including both JBODs = 368.64 TB SSD

  High Capacity (SSD and HDD):

  • The optional expansion storage shelf can only be installed after the base storage shelf is fully populated, and it must have the same configuration as the base storage shelf.
  • Total storage on the base storage shelf = 298 TB, with 46 TB SSD and 252 TB HDD
  • Total storage including both JBODs = 596 TB, with 92 TB SSD and 504 TB HDD

Preparing for a Storage Upgrade for a Virtualized Platform

Review and perform these best practices before adding storage to the base shelf or adding the expansion shelf.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.
  2. Confirm both nodes are at the same version and patch bundle level for software and firmware.
    # oakcli show version -detail 
    # oakcli inventory -q 

    Note:

    If oakd is not running on either node, fix the problem before adding storage.
  3. Check the disk health of the existing storage disks.

    Run the check on both nodes and use the default checks option to check the NetworkComponents, OSDiskStorage, SharedStorage, and SystemComponents.

    # oakcli validate -d
  4. Run the command oakcli show diskgroup on each node to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.
    # oakcli show diskgroup data
    # oakcli show diskgroup reco
     # oakcli show diskgroup redo 
  5. Confirm Oracle ASM and CRS health on both nodes.
    Run the oakcli orachk command on each node. If there is a problem connecting to either node, then check the /etc/bashrc file and remove (or comment out) any values set in the profile for the root, oracle, and grid users.

    Run oakcli orachk on Node 0:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm1       Yes           No          Yes              No        No        No          ........
    -------------------------------------------------------------------------------------------------------
    
     ...

    Run oakcli orachk on Node 1:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm2      Yes           Yes           Yes            Yes       Yes        Yes      b22S2 b23S2 b24S2
    -------------------------------------------------------------------------------------------------------
    
    ...
  6. Confirm communications between the nodes and that SSH is working using the same password for the oracle, root, and grid users.
    From each node:
    1. ssh to both nodes.
    2. Ping both nodes.
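    These connectivity checks can be scripted; a minimal sketch, assuming the node hostnames are oda0 and oda1 (substitute your own node names):

```shell
# Hypothetical hostnames; replace oda0/oda1 with your node names.
for node in oda0 oda1; do
  ssh root@$node hostname    # confirms SSH login works
  ping -c 2 $node            # confirms network reachability
done
```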
  7. Confirm that there is at least 10 GB of space available on each node.
    [root@oda]# df -h
    [root@odb]# df -h 

Adding Solid-State Drives (SSDs) for Data Storage

Add a pack of solid-state drives (SSDs) for data storage into the existing Oracle Database Appliance X8-2-HA base configuration to fully populate the base storage shelf.

If you need to add storage to the base configuration, you can order one, two, or three 6-pack of SSDs to complete the base configuration on Oracle Database Appliance X8-2-HA.

You must fully populate the base configuration before you can add an expansion shelf to Oracle Database Appliance X8-2-HA. If you add an expansion shelf, the shelf must have the same disk storage configuration as the base configuration.

Note:

For a high-performance configuration, you can add SSDs to the base storage shelf or add a storage expansion shelf. For high-capacity base configuration with 6-SSDs, if you want to expand storage to use HDDs, then you must reimage and deploy the appliance.

Note:

You can follow the same procedure to add storage to the base configuration on a Virtualized Platform by using the oakcli equivalents of the odacli and odaadmcli commands in the procedure.

Before adding the disks to the system, ensure that Oracle Database Appliance is on the latest release.
  1. Insert disks one at a time in the slots.

    To add one 6-pack of SSDs, insert the disks in slots 6 to 11. To add two 6-packs of SSDs, insert the disks in slots 6 to 17. To add three 6-packs of SSDs, insert the disks in slots 6 to 23.

    Note:

    Allow at least one minute between inserting each disk.
    After all disks are added, go to Step 2.
  2. Run the odaadmcli expand storage command on any node.
    # odaadmcli expand storage -ndisk number_of_disks_to_be_added -enclosure enclosure_number_of_the_disks_to_be_added

    The enclosure number is 0 when you add storage disks to the first JBOD.

    For example:

    
    # odaadmcli expand storage -ndisk 6 -enclosure 0
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
    It takes 10 to 12 minutes to add all of the disks to the configuration.
  3. Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
    # odaadmcli show disk
  4. Verify that the disks in slots 6 to 11 are added to Oracle Automatic Storage Management (Oracle ASM).
    1. Run the asm_script to verify that the disks in slots 6 to 11 are added to Oracle Automatic Storage Management (Oracle ASM). If the 6 disks are successfully added (CACHED and MEMBER), then go to Step 7.
      # su - grid -c "/opt/oracle/oak/bin/stordiag/asm_script.sh 1 6"

      For example:

      # /opt/oracle/oak/bin/stordiag/asm_script.sh 1 6 | grep CACHED
      .......
      /dev/mapper/SSD_E0_S06_1399645200p1 SSD_E0_S06_1399645200P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S06_1399645200p2 SSD_E0_S06_1399645200P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S07_1399646692p1 SSD_E0_S07_1399646692P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S07_1399646692p2 SSD_E0_S07_1399646692P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S08_1399649840p1 SSD_E0_S08_1399649840P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S08_1399649840p2 SSD_E0_S08_1399649840P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S09_1399649424p1 SSD_E0_S09_1399649424P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S09_1399649424p2 SSD_E0_S09_1399649424P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S10_1399649846p1 SSD_E0_S10_1399649846P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S10_1399649846p2 SSD_E0_S10_1399649846P2 3 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S11_1399649428p1 SSD_E0_S11_1399649428P1 1 NORMAL ONLINE CACHED MEMBER
      /dev/mapper/SSD_E0_S11_1399649428p2 SSD_E0_S11_1399649428P2 3 NORMAL ONLINE CACHED MEMBER
    2. If the disks are not added to Oracle ASM, then add them manually. As the grid user, run sqlplus / as sysasm on the first node to add the disks to Oracle ASM.

      For a system without Oracle Automatic Storage Management Filter Driver (Oracle ASM Filter Driver) configured, add the Oracle ASM disks as follows:

      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ data add disk 
      '/dev/mapper/SSD_E0_S06_1399765076p1' name SSD_E0_S06_1399765076p1,
      '/dev/mapper/SSD_E0_S07_1399765116p1' name SSD_E0_S07_1399765116p1,
      '/dev/mapper/SSD_E0_S08_1399765484p1' name SSD_E0_S08_1399765484p1,
      '/dev/mapper/SSD_E0_S09_1399765504p1' name SSD_E0_S09_1399765504p1,
      '/dev/mapper/SSD_E0_S10_1399765506p1' name SSD_E0_S10_1399765506p1,
      '/dev/mapper/SSD_E0_S11_1399765508p1' name SSD_E0_S11_1399765508p1;
      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ reco add disk 
      '/dev/mapper/SSD_E0_S06_1399765076p2' name SSD_E0_S06_1399765076p2,
      '/dev/mapper/SSD_E0_S07_1399765116p2' name SSD_E0_S07_1399765116p2,
      '/dev/mapper/SSD_E0_S08_1399765484p2' name SSD_E0_S08_1399765484p2,
      '/dev/mapper/SSD_E0_S09_1399765504p2' name SSD_E0_S09_1399765504p2,
      '/dev/mapper/SSD_E0_S10_1399765506p2' name SSD_E0_S10_1399765506p2,
      '/dev/mapper/SSD_E0_S11_1399765508p2' name SSD_E0_S11_1399765508p2; 

      For a system with Oracle Automatic Storage Management Filter Driver (Oracle ASM Filter Driver) configured, add the Oracle ASM disks as follows:

      SQL> alter diskgroup /*+ _OAK_AsmCookie */ data add disk 
      'AFD:SSD_E0_S06_1399765076P1' name SSD_E0_S06_1399765076p1,
      'AFD:SSD_E0_S07_1399765116P1' name SSD_E0_S07_1399765116p1,
      'AFD:SSD_E0_S08_1399765484P1' name SSD_E0_S08_1399765484p1,
      'AFD:SSD_E0_S09_1399765504P1' name SSD_E0_S09_1399765504p1,
      'AFD:SSD_E0_S10_1399765506P1' name SSD_E0_S10_1399765506p1,
      'AFD:SSD_E0_S11_1399765508P1' name SSD_E0_S11_1399765508p1;
      
      SQL> alter diskgroup /*+ _OAK_AsmCookie */ reco add disk 
      'AFD:SSD_E0_S06_1399765076P2' name SSD_E0_S06_1399765076p2,
      'AFD:SSD_E0_S07_1399765116P2' name SSD_E0_S07_1399765116p2,
      'AFD:SSD_E0_S08_1399765484P2' name SSD_E0_S08_1399765484p2,
      'AFD:SSD_E0_S09_1399765504P2' name SSD_E0_S09_1399765504p2,
      'AFD:SSD_E0_S10_1399765506P2' name SSD_E0_S10_1399765506p2,
      'AFD:SSD_E0_S11_1399765508P2' name SSD_E0_S11_1399765508p2; 
  5. Use the odaadmcli show validation storage errors command to show hard storage errors.
    Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
    # odaadmcli show validation storage errors
  6. Use the odaadmcli show validation storage failures command to show soft validation errors.
    A typical soft disk error would be an invalid version of the disk firmware.
    # odaadmcli show validation storage failures
  7. Confirm that the oak_storage_conf.xml file shows the number of disks added on both nodes. For example, if you added 6 disks to the base configuration, then the oak_storage_conf.xml file must show 12. If you added 12 disks to the base configuration, then the oak_storage_conf.xml file must show 18.
    # cat /opt/oracle/oak/conf/oak_storage_conf.xml

Adding the Storage Expansion Shelf

After the base storage shelf is fully populated, you can add the storage expansion shelf to expand your data storage on your multi-node platform.

The expansion shelf is available on Oracle Database Appliance multi-node platforms, such as Oracle Database Appliance X8-2-HA. The addition of the storage expansion shelf includes checks across both nodes. It is important to confirm that SSH works across the nodes and that all users can connect as expected using their shared password.

You must fully populate the base configuration before you can add an expansion shelf. If you add an expansion shelf, the shelf must have the same disk storage configuration as the base storage shelf.

Note:

You can follow the same procedure to add storage to the base configuration on a Virtualized Platform by using the oakcli equivalents of the odacli or odaadmcli commands in the procedure.

Note:

Oracle recommends that you add a storage expansion shelf when you have relatively little activity on your databases. When the system discovers the new storage, Oracle Automatic Storage Management (Oracle ASM) automatically rebalances the disk groups. The rebalance operation may degrade database performance until the operation completes.
  1. Install and cable the storage expansion shelf, but do not power on the expansion shelf.

    Caution:

    Review cabling instructions carefully to ensure that you have carried out cabling correctly. Incorrect connections can cause data loss when adding a storage expansion shelf to Oracle Database Appliance with existing databases.

  2. If this is a new deployment or re-image of Oracle Database Appliance, perform the following steps in order:
    1. Power on the base storage.
    2. Power on Node 0.
    3. Power on Node 1.

    Caution:

    Do not power on the expansion shelf yet.
  3. Verify that both nodes plus the base storage shelf are up and running. Log into each server node and run the odacli validate-storagetopology command to confirm that the base configuration cabling is correct.
    
    # odacli validate-storagetopology
     ...
          INFO  : Check if JBOD powered on
      SUCCESS   : JBOD : Powered-on
          INFO  : Check for correct number of EBODS(2 or 4)
      SUCCESS   : EBOD found : 2
          INFO  : Check for overall status of cable validation on Node0
      SUCCESS   : Overall Cable Validation on Node0
      SUCCESS   : JBOD Nickname set correctly : Oracle Database Appliance - E0
    Run the command to confirm that the two server nodes are properly cabled to the base storage shelf and that all disks are online, in a good state, and added to the existing disk groups on both nodes. If there are any failures, then fix the cabling before proceeding to the next step.

    Note:

    If the output shows that EBOD found is 2, then you only have the base storage shelf. If EBOD found is 4, then you have a base storage shelf and an expansion shelf.

    Note:

    If you add a new JBOD fresh from the factory, then the output of the odacli validate-storagetopology command is:
    # odacli validate-storagetopology
     ...
    WARNING : JBOD Nickname is incorrectly set to :
  4. Power on the storage expansion shelf and wait for 20 minutes before issuing the CLI command for storage expansion.
  5. Log in to each server node and run the odacli validate-storagetopology command to validate the storage cabling and confirm that the new storage shelf is recognized.
    
    # odacli validate-storagetopology
    
      INFO    : Check if JBOD powered on
      SUCCESS : 2JBOD : Powered-on                                               
      INFO    : Check for correct number of EBODS(2 or 4)
      SUCCESS : EBOD found : 4                                                   
       ...
       ...
    
       INFO    : Check for overall status of cable validation on Node0
       SUCCESS : Overall Cable Validation on Node0            
       SUCCESS : JBOD0 Nickname set correctly : Oracle Database Appliance - E0
       SUCCESS : JBOD1 Nickname set correctly : Oracle Database Appliance - E1                 
    If you add a new JBOD fresh from the factory, then the output of the odacli validate-storagetopology command is:
    # odacli validate-storagetopology
     ...
    WARNING : JBOD Nickname is incorrectly set to :
    Look for the following indicators that both storage shelves are recognized:
    • When there are two shelves, the JBOD (just a bunch of disks) is numbered. For example:
      SUCCESS : 2JBOD : Powered-on
    • When both shelves are recognized, the EBOD found value is 4.
      SUCCESS : EBOD found : 4
    • When the expansion shelf is cabled properly, the nickname is E1. For example:

              SUCCESS : JBOD0 Nickname set correctly : Oracle Database Appliance - E0
              SUCCESS : JBOD1 Nickname set correctly : Oracle Database Appliance - E1  

    Fix any errors before proceeding.

  6. Run the odaadmcli show disk command to ensure that all disks in the expansion shelf are listed, are online, and are in a good state.
    # odaadmcli show disk
    When all disks are online and in a good state, proceed to the next step.
  7. Run the odaadmcli show enclosure command to check the health of components in the expansion shelf.
    # odaadmcli show enclosure
  8. Run the odaadmcli expand storage command.
    # odaadmcli expand storage -ndisk 24 -enclosure 1 
    
    Precheck passed. 
    Check the progress of expansion of storage by executing 'odaadmcli show disk' 
    Waiting for expansion to finish ...
    It takes approximately 30 to 40 minutes to add all of the disks to the configuration.
  9. Use the odaadmcli show validation storage errors command to show hard storage errors.
    Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
    # odaadmcli show validation storage errors
  10. Use the odaadmcli show validation storage failures command to show soft validation errors.
    A typical soft disk error would be an invalid version of the disk firmware.
    # odaadmcli show validation storage failures
  11. Run the odacli describe-component command to verify that all firmware components in the storage expansion are current.
    # odacli describe-component
  12. If needed, update the storage shelf and then run the odacli describe-component command to confirm that the firmware is current.
    # odacli update
    # odacli describe-component