3 Managing Storage

You can add storage to fully populate the base storage shelf and add a storage expansion shelf to your Oracle Database Appliance.

Topics:

  • About Managing Storage

  • About Expanding Storage

  • Preparing for a Storage Expansion

About Managing Storage

You can add storage at any time without shutting down your databases or applications.

Oracle Database Appliance uses raw storage to protect data in the following ways:

  • Flash or Fast Recovery Area (FRA) backup. The fast recovery area is a storage area (a directory on disk or an Oracle ASM disk group) that contains redo logs, the control file, archived logs, backup pieces and copies, and flashback logs.

  • Mirroring. Double or triple mirroring provides protection against mechanical issues.

The amount of available storage is determined by the location of the FRA backup (external or internal) and by whether double or triple mirroring is used.
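For example, to see how much usable space each disk group currently provides under its mirroring setting, you can query Oracle ASM with ASMCMD. The following is a minimal sketch only; it assumes you run it as the grid operating system user and that the Oracle ASM instance on the first node is named +ASM1:

    # su - grid
    $ . oraenv          # when prompted, enter the Oracle ASM SID, for example +ASM1
    $ asmcmd lsdg

In the lsdg output, the Type column shows the redundancy level (NORMAL for double mirroring, HIGH for triple mirroring), and the Usable_file_MB column shows the space that remains available after mirroring.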

Oracle Database Appliance uses storage shelves, a base shelf and an optional storage expansion shelf. You can expand the base storage by adding a pack of solid-state drives (SSDs) to fully populate the base storage. You can further expand the storage by adding a second storage shelf. External NFS storage is supported for online backups, data staging, or additional database files.

Note:

You must fully populate the base storage shelf before adding the expansion shelf.
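Before you plan an expansion, you can confirm how the base shelf is currently populated by listing the disks that the appliance recognizes. The following is a minimal sketch, run as root on either node; the exact columns and component names in the output vary by hardware model and software release:

    # oakcli show disk

    # oakcli show storage

Every expected slot should report a disk in a good state before you add more storage.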

When you add storage, Oracle Automatic Storage Management (Oracle ASM) automatically rebalances the data across all of the storage including the new drives. Rebalancing a disk group moves data between disks to ensure that every file is evenly spread across all of the disks in a disk group and all of the disks are evenly filled to the same percentage. Oracle ASM automatically initiates a rebalance after storage configuration changes, such as when you add disks.
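If you want to watch a rebalance while it runs, Oracle ASM reports the operation through ASMCMD. A minimal sketch, run as the grid user with the Oracle ASM environment set (as in the earlier lsdg example):

    $ asmcmd lsop

An empty listing means that no rebalance is in progress; while one is running, the command reports the affected disk group and the state of the rebalance operation.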

About Expanding Storage

If you need additional storage after fully populating the base shelf, you can add a storage expansion shelf. The expansion shelf is hot-pluggable, enabling you to expand storage without database downtime. After cabling and powering up the expansion shelf, the system automatically configures Oracle ASM storage and data is automatically distributed to the new shelf.

Note:

The process of rebalancing the data might impact performance until the new storage is correctly balanced across all drives. If possible, add a storage expansion shelf during a non-peak or non-production time period to minimize the performance impact of the automatic storage balancing.

Adding the storage expansion shelf includes checks across both nodes. It is important to confirm that SSH works between the nodes and that the required users (root, oracle, and grid) can connect as expected using their shared password.
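A quick way to confirm this is to connect in both directions for each account before you start. The following is a minimal sketch, where oda1 and oda2 are hypothetical host names for the two server nodes:

    [root@oda1]# ssh root@oda2 hostname
    [root@oda1]# ssh oracle@oda2 hostname
    [root@oda1]# ssh grid@oda2 hostname

Repeat the same connections from the second node back to the first, and resolve any password or connectivity problems before expanding storage.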

The following are the high-level steps to expand storage:

  1. Review the Oracle Database Appliance storage options.

  2. Prepare for a storage upgrade by running checks to verify that the configuration is ready before adding storage to the base shelf or adding the expansion shelf.

  3. Add storage if the base configuration is not full (a command sketch follows this list).

  4. Add the storage expansion shelf, then log in to each server node and validate the cabling. After you confirm that the cabling is correct, power on the shelf and validate the storage.

    Caution:

    Review cabling instructions carefully to ensure that you have carried out cabling correctly. Incorrect connections can cause data loss when adding a storage expansion shelf to Oracle Database Appliance with existing databases.
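For step 3, the new drives added to the base shelf are configured with an oakcli storage expansion command, and for step 4 the cabling and storage checks are also run with oakcli. The following is a minimal sketch only; the exact commands, options, and supported check names depend on your appliance model and software release, so confirm them in the oakcli command reference for your version:

    # oakcli expand storage

    # oakcli validate -c StorageTopology

Run the validate check on both nodes and review its output at the points in step 4 where you are asked to validate the cabling and the storage.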

Preparing for a Storage Expansion

Review and perform these best practices before adding storage to the base shelf or adding the expansion shelf.

  1. Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.
  2. Confirm both nodes are at the same version and patch bundle level for software and firmware.
    # oakcli show version -detail

    # oakcli inventory -q
    

    Note:

    If oakd is not running in foreground mode on either node, fix the problem before adding storage (a quick process check is sketched after this procedure).
  3. Check the disk health of the existing storage disks.

    Run the check on both nodes and use the default checks option to check the NetworkComponents, OSDiskStorage, SharedStorage, and SystemComponents.

    # oakcli validate -d
    
  4. Run the oakcli show diskgroup command on each node to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information. Verify that all disks are listed, are online, and are in a good state.
    # oakcli show diskgroup data

    # oakcli show diskgroup reco

    # oakcli show diskgroup redo
    
  5. Confirm Oracle ASM and CRS health on both nodes.
    Run the oakcli orachk command on each node. If there is a problem connecting to either node, then check the /etc/bashrc file and remove (or comment out) any values set in the profile for the root, oracle, and grid users.

    Run oakcli orachk on Node 0:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm1       Yes           No          Yes              No        No        No          ........
    -------------------------------------------------------------------------------------------------------
    
     ...
    

    Run oakcli orachk on Node 1:

    # oakcli orachk
    ...
    
    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
    
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed  ASM HOME   RDBMS Installed    CRS UP    ASM UP    RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    odax3rm2      Yes           Yes           Yes            Yes       Yes        Yes      b22S2 b23S2 b24S2
    -------------------------------------------------------------------------------------------------------
    
    ...
    
  6. Confirm communication between the nodes and that SSH works using the same password for the oracle, root, and grid users.
    From each node:
    1. ssh to both nodes.
    2. Ping both nodes.
  7. Confirm there is at least 10 GB of space available on each node.
    [root@oda]# df -h
    
    [root@odb]# df -h
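If you need to confirm that oakd is running on a node (see the note in step 2), you can check for its process directly. A minimal sketch, run as root on each node:

    [root@oda]# ps -ef | grep [o]akd     # the [o] keeps grep from matching itself

    [root@odb]# ps -ef | grep [o]akd

If the command returns no oakd process on a node, resolve that before continuing with the storage expansion.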