Chapter 8 Extending Oracle Private Cloud Appliance - Additional Storage

Extending the Oracle Private Cloud Appliance by connecting and configuring additional storage can enhance the product by providing the disk space required for large repositories, for backup and recovery storage, and to provision virtual machines with virtual disks beyond the capacity already available on the appliance. This process does not require that the system is powered down, but cabling and configuration must match the Oracle Private Cloud Appliance requirements.

8.1 Extending Storage Capacity

This section describes the options to extend storage capacity for systems with an Ethernet-based network architecture.

8.1.1 Adding Disk Space to the Internal Storage Appliance

Systems with an Ethernet-based network architecture are built with an internal Oracle ZFS Storage Appliance ZS7-2. A major advantage is that the standard disk shelf already provides approximately 100 TB of disk space. This is sufficient for the appliance internal 'system disk', as well as an Oracle VM storage repository for a virtualized production environment. Another significant feature is that, for environments with high storage demands, more disk shelves can be attached to the internal storage appliance.

Some additional storage hardware can be installed inside the Oracle Private Cloud Appliance base rack. It occupies 4 rack units, the space of two empty rack units and two compute nodes, thereby reducing the maximum number of compute nodes in the rack. For the extra storage inside the base rack, the customer can choose between one high-capacity disk shelf or two high-performance disk shelves.

If a larger amount of additional storage capacity is required, up to 14 extra disk shelves can be installed in an additional rack. All of the extra disk shelves are connected to, and managed by, the two storage appliance controllers located in the bottom rack units of the Oracle Private Cloud Appliance base rack.

The addition of disk shelves to the Oracle ZFS Storage Appliance ZS7-2 is handled entirely by Oracle Advanced Customer Services. The process includes cabling, removal of compute nodes where required, and updating server pool configurations. Please contact Oracle for more details about extending the capacity of the ZFS storage appliance.

8.1.2 Adding External Ethernet Storage

Ethernet-based external storage hardware must be made accessible over the data center network. The Oracle Private Cloud Appliance, and the virtualized environment it hosts, access the storage through a custom network with external connectivity. Each external network corresponds to an isolated tunnel.

For instructions to set up a custom network, refer to the section entitled Network Customization in the Monitoring and Managing Oracle Private Cloud Appliance chapter of the Oracle Private Cloud Appliance Administrator's Guide. For information about discovering and using storage resources within Oracle VM, refer to the Viewing and Managing Storage Resources section of the Managing the Oracle VM Virtual Infrastructure chapter, and the Oracle VM documentation.

8.1.3 Adding External Fibre Channel Storage

Oracle Server X9-2 expansion compute nodes can be ordered with optional 32Gbit Fibre Channel cards pre-installed. Field installation at a later time is possible as well. However, it is not possible to order new base racks with FC cards already installed in the compute nodes. Therefore, if Fibre Channel storage is part of your intended design, you should order a minimal base rack configuration and expansion compute nodes with the FC option.

Fibre Channel cables, switches and patch panels must be supplied by the customer. The installation of compute nodes with FC cards is performed by Oracle Advanced Customer Services. Once these compute nodes are integrated in the Oracle Private Cloud Appliance environment, the Fibre Channel HBAs can connect to standard FC switches and storage hardware in your data center. External FC storage configuration is managed through Oracle VM Manager. For more information, refer to the Fibre Channel Storage Attached Network section of the Oracle VM Concepts Guide.


When re-provisioning an Oracle Server X8-2 or Oracle Server X9-2 compute node with an optional dual-port FC HBA card installed, the provisioning process fails if the SAN Zoning model is not amended. During the provisioning step where Oracle VM is installed, all FC-presented disks remain visible to the installer. This causes an error because the installer cannot identify the correct disk on which to install Oracle VM. Eventually, the provisioning process times out and flags the compute node as DEAD.

Avoid this error by updating the existing SAN Zoning model to disable FC storage presentation to the compute node being re-provisioned, prior to starting the re-provision process. You can re-enable the SAN Zoning after the provisioning process completes.
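As an illustration, on a Brocade Fabric OS switch the change might look like the following sketch. The configuration name (cfg_pca), zone names (cn07_cloudA, cn07_cloudB) and compute node name (ovcacn07r1) are hypothetical placeholders; other switch vendors use different commands for the same operations.

```
# On the FC switch: remove the node's zones from the active configuration
cfgremove "cfg_pca", "cn07_cloudA; cn07_cloudB"
cfgsave
cfgenable "cfg_pca"

# On the active management node: re-provision the compute node
pca-admin reprovision ovcacn07r1

# After provisioning completes: restore the zones and re-enable the config
cfgadd "cfg_pca", "cn07_cloudA; cn07_cloudB"
cfgsave
cfgenable "cfg_pca"
```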

It is required that you configure Fibre Channel Zoning on the external FC switches you connect to in your data center. For more information see Section 8.1.4, “Zone Configuration”.

8.1.4 Zone Configuration

The Oracle Private Cloud Appliance requires that Fibre Channel Zoning is configured on the external FC switches, in order to control and limit traffic on the storage network, ensure efficient LUN discovery, and maximize stability within the environment.

FC Zoning is also used to enhance security by providing an extra layer of traffic separation on the storage network. Even if you are using storage initiator groups to perform LUN masking, it is generally considered good practice to also configure FC zones to limit LUN exposure and prevent unrestricted use of this network medium. Zone configuration is particularly useful when the FC switch or switches are shared with other devices apart from the Oracle Private Cloud Appliance.

The Oracle Private Cloud Appliance supports single initiator pWWN zoning, in line with industry best practice. It is highly recommended that you configure single initiator pWWN zoning for all Fibre Channel connections to the rack. This requires a Fibre Channel switch that supports NPIV.

For all storage clouds that are cable-connected, at least one zone should be configured per WWPN on each compute node. However, multiple zones may be created for each WWPN depending on your cabling. In a setup using all four storage clouds, four zones should exist on the Fibre Channel switch for each compute node. You can obtain a listing of the WWPNs for the compute nodes by running the pca-admin list wwpn-info command.
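To clarify the zone layout, the following sketch generates single initiator pWWN zones: each zone contains exactly one compute node WWPN (the initiator) plus the storage target ports it should reach. All WWPNs and names below are hypothetical placeholders; in practice the initiator WWPNs come from the pca-admin list wwpn-info output.

```python
# Sketch: build single initiator pWWN zones. One zone per initiator WWPN;
# each zone pairs that single initiator with the storage target ports.
# All WWPNs and zone names are hypothetical placeholders.

def single_initiator_zones(initiators, targets):
    """Return {zone_name: [initiator_wwpn, target_wwpn, ...]}."""
    zones = {}
    for name, wwpn in initiators.items():
        zones["z_" + name] = [wwpn] + list(targets)
    return zones

# One compute node with two FC ports (hypothetical WWPNs)
initiators = {
    "cn07_p0": "10:00:00:10:9b:aa:00:01",
    "cn07_p1": "10:00:00:10:9b:aa:00:02",
}
# Two storage target ports visible on this fabric (hypothetical WWPNs)
targets = ["21:00:00:24:ff:35:aa:01", "21:00:00:24:ff:35:bb:01"]

zones = single_initiator_zones(initiators, targets)
for zone, members in zones.items():
    print(zone, members)
```

The key property of this layout is that no zone ever contains two initiators, so compute nodes cannot see each other's HBA ports and a misbehaving initiator disturbs only its own zones.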

Using a Fibre Channel storage device with two controller heads in an active/active cluster, and two targets configured for each head, every LUN has 4 storage paths. A storage path is a connection between a target and a storage cloud. Each LUN has two active paths between a storage cloud and the controller head that owns the LUN, and an additional two standby paths between the same storage cloud and the other controller head. If the storage head that owns the LUN should fail, the other storage head takes over and the standby paths become active. This way the storage connectivity for a given cloud remains uninterrupted. To better support failover/failback operations, consider employing a zoning strategy using two zones per storage cloud. This approach involves one zone connecting a given cloud to a target of one controller head, and a second zone connecting that same cloud to a target of the other controller head.
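The path arithmetic above can be sketched as follows, assuming a hypothetical cluster with two heads and two targets per head; the head and target names are placeholders, not real device identifiers.

```python
# Sketch: enumerate the four storage paths one LUN has in one storage cloud,
# for an active/active cluster with two controller heads and two targets
# per head. Head and target names are hypothetical placeholders.

HEADS = {"head1": ["tgt1a", "tgt1b"], "head2": ["tgt2a", "tgt2b"]}

def paths_for_lun(owning_head, cloud="cloudA"):
    """Return (active, standby) path lists for a LUN owned by owning_head."""
    active, standby = [], []
    for head, targets in HEADS.items():
        for tgt in targets:
            path = (cloud, head, tgt)
            (active if head == owning_head else standby).append(path)
    return active, standby

active, standby = paths_for_lun("head1")
print(len(active), len(standby))  # 2 active paths, 2 standby paths

# On failover, head2 takes ownership: the former standby paths become active
active_after, standby_after = paths_for_lun("head2")
```

In the two-zones-per-cloud strategy, one zone would carry the active paths (cloud to owning head) and the other the standby paths (cloud to the peer head), so a failover only changes which zone's paths are in use, not the zoning itself.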

The configuration of Fibre Channel single initiator pWWN zones is not optional. If you had previously attempted to extend your storage using Fibre Channel and did not configure any zones on your FC switch, you should do so now. Furthermore, if you previously configured D,P zones, it is important that you rezone your switches to use single initiator pWWN zones.

Please refer to the documentation of your switch vendor for more information on the configuration steps that you must perform to configure single initiator pWWN zoning.