8.3 Adding External InfiniBand Storage

InfiniBand can be used to connect additional ZFS Storage Appliances to the Oracle Private Cloud Appliance (PCA) using the available ports on the Fabric Interconnects. Since ZFS Storage Appliances are dual-headed, each ZFS Controller head has its own InfiniBand connection to each of the Fabric Interconnects, providing redundancy both on the side of the ZFS Storage Appliance and on the side of the Oracle PCA.

The Oracle Fabric Interconnect F1-15s have only 4 available InfiniBand ports – 2 per Fabric Interconnect – to attach external storage. If higher performance is required, or if multiple storage appliances need to be connected, you may add two external NM2-36P Sun Datacenter InfiniBand Expansion Switches between the external storage and the Fabric Interconnects inside the appliance rack. A ZFS Storage Appliance may be connected to the InfiniBand Switches in active/passive or active/active configuration. If an active/active configuration is selected, then both storage controllers must be expanded with additional HCAs for a total of 4 InfiniBand ports per ZFS controller head.

Tip

For the latest, detailed information about InfiniBand storage with Oracle PCA, please refer to the white paper entitled Expanding Oracle Private Cloud Appliance Using Oracle ZFS Storage Appliance.

8.3.1 Connecting InfiniBand Storage Hardware

The recommended cabling configuration cross-connects each controller head of the ZFS Storage Appliance to each of the Fabric Interconnects, for a total of four connections. This configuration maximizes redundancy and throughput. The following table describes the cabling between the Fabric Interconnects on the Oracle PCA and the ZFS Storage Appliance.

Table 8.3 InfiniBand Cabling to Attach a ZFS Storage Appliance to the Oracle PCA Fabric Interconnects

Top Oracle Fabric           Bottom Oracle Fabric        ZFS Storage Appliance   ZFS Storage Appliance
Interconnect F1-15          Interconnect F1-15          Controller Head 1       Controller Head 2
(RU 22-25)                  (RU 15-18)
-------------------------   -------------------------   ---------------------   ---------------------
IB Port 1                   none                        IB Port 1 (ibp0)        none
IB Port 11                  none                        none                    IB Port 1 (ibp0)
none                        IB Port 1                   IB Port 2 (ibp1)        none
none                        IB Port 11                  none                    IB Port 2 (ibp1)


If you install two InfiniBand switches between the external storage and the appliance rack, then those switches connect to the Fabric Interconnect ports using the same cabling pattern as the two controllers of a directly attached ZFS Storage Appliance. On the side of the ZFS Storage Appliance, the cabling pattern in active/passive configuration remains identical whether it is connected to the InfiniBand switches or directly to the Fabric Interconnects.

If the ZFS Storage Appliance is in active/active configuration, the cable connections must be doubled to provide redundancy between both ZFS controllers and the pair of InfiniBand switches. Because both controller heads are in active mode, they both require redundant connections to the two switches, meaning 4 cables per controller.

The different supported cabling layouts are illustrated in the figures below.

Figure 8.6 IPoIB Storage Directly Attached

Figure showing IPoIB storage connected directly to the appliance. The illustration shows an active/passive ZFS storage controller configuration with each controller cross-cabled to both Fabric Interconnects for a redundant HA connection. If the connection to the active controller fails, the standby controller takes ownership of all the storage resources and presents them over the same redundant network connection, thereby avoiding downtime due to unavailable storage.

Figure 8.7 IPoIB Storage Connected to Switches in Active/Passive Configuration

Figure showing IPoIB storage connected to a pair of InfiniBand switches. The illustration shows an active/passive ZFS storage controller configuration with 2 cable connections per controller for high availability. If the connection to the active controller fails, the standby controller takes ownership of all the storage resources and presents them over the same redundant network connection, thereby avoiding downtime due to unavailable storage. The switches are also cross-cabled to the Fabric Interconnects for a redundant HA connection.

Figure 8.8 IPoIB Storage Connected to Switches in Active/Active Configuration

Figure showing IPoIB storage connected to a pair of InfiniBand switches. The illustration shows an active/active ZFS storage controller configuration with 4 cable connections per controller for high availability of both active controller heads. The switches are also cross-cabled to the Fabric Interconnects for a redundant HA connection.

8.3.2 IP Address Allocation

The following IP address blocks have been reserved for use by a ZFS Storage Appliance external to the Oracle PCA rack:

  • 192.168.40.242

  • 192.168.40.243

  • 192.168.40.244

  • 192.168.40.245

  • 192.168.40.246

  • 192.168.40.247

  • 192.168.40.248

  • 192.168.40.249

8.3.3 Configuring the ZFS Storage Appliance

This section describes the configuration steps that you must perform on the ZFS Storage Appliance to use IPoIB in conjunction with the Oracle PCA. The description provided here assumes a typical configuration with an active/passive management cluster, using iSCSI to serve LUNs to compute nodes or virtual machines as physical disks. The ZFS Storage Appliance supports a standard NFS configuration as well.

Note

If you install extra expansion cards in the ZFS Storage Appliance controllers, and add a pair of InfiniBand switches between the external storage and the Fabric Interconnects inside the appliance rack, you can set up the ZFS Storage Appliance management cluster in active/active configuration. Notes in the procedure below highlight the required additional configuration.

Creating a Basic IPoIB Configuration on a ZFS Storage Appliance

  1. Create Datalinks

    Log on to the management user interface on the controller head you intend to use as the master. Go to Configuration and then to Network. If the cabling is correct, two active devices are listed that map onto the cabled IB ports.

    Note

    In active/active configuration both controller heads have 4 active devices: typically ibp0, ibp1, ibp2 and ibp3.

    Drag each of these across to the Datalink menu to create a new datalink for each device. Edit each of these datalinks to provide a datalink name and partition key.

    Figure 8.9 Datalink Configuration

    Figure showing ZFS Storage Appliance datalink configuration for InfiniBand. The illustration shows the IB Partition option is checked, the Name and Partition Key fields are filled out and the Link Mode is set to Connected Mode.
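
    The same step can be scripted in the appliance CLI. The session below is only a minimal sketch: the datalink label pca_ib_dl0 is hypothetical, the partition key 8503 is a placeholder that must match the Oracle PCA storage network, and property names may differ between appliance software releases, so verify them with the help command in the datalinks context before committing.

      hostname:> configuration net datalinks
      hostname:configuration net datalinks> partition
      hostname:configuration net datalinks partition (uncommitted)> set links=ibp0
      hostname:configuration net datalinks partition (uncommitted)> set label=pca_ib_dl0
      hostname:configuration net datalinks partition (uncommitted)> set pkey=8503
      hostname:configuration net datalinks partition (uncommitted)> set linkmode=cm
      hostname:configuration net datalinks partition (uncommitted)> commit

    Repeat the sequence for ibp1, and for ibp2 and ibp3 in active/active configuration.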

  2. Create Interfaces

    Drag each configured datalink across into the Interface menu to create an interface for each datalink that you have defined. Edit each interface to provide a value for the Name field that makes it easy to identify the interface. Add the netmask 0.0.0.0/8 to each interface to prevent the system from probing and testing the routing details behind the network connection. Leave the IP MultiPathing Group unchecked. Do not configure an IP address at this stage.

    Figure 8.10 Interface Configuration

    Figure showing ZFS Storage Appliance interface configuration for InfiniBand. The illustration shows the interfaces configured with default options; one interface for each datalink configured in the previous step. A netmask has been added to the interfaces to prevent the system from probing and checking all routing details.
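
    As with the datalinks, the interfaces can also be created from the appliance CLI. The session below is a minimal sketch, reusing the hypothetical names from the previous step; verify the property names in your appliance software release.

      hostname:> configuration net interfaces
      hostname:configuration net interfaces> ip
      hostname:configuration net interfaces ip (uncommitted)> set label=pca_ib_if0
      hostname:configuration net interfaces ip (uncommitted)> set links=pca_ib_dl0
      hostname:configuration net interfaces ip (uncommitted)> set v4addrs=0.0.0.0/8
      hostname:configuration net interfaces ip (uncommitted)> commit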

  3. Configure Multipathing

    Click the Plus icon in the Interface menu to create an additional interface. This additional interface enables IPMP across the two interfaces you created in the previous step. Enter a value in the Name field that makes it easy to identify the IPMP interface. Assign an IP address from the list of reserved IPs in the Oracle PCA storage network.

    Near the bottom of the window, select the IP MultiPathing Group check box. Select the two previously configured interfaces, mark the first one as Active and the second one as Standby. The selected IP address will be configured on the active interface until a failover to the standby interface occurs.

    Note

    In active/active configuration you create 2 IPMP interfaces: one across interfaces ibp0 and ibp2, and the other across ibp1 and ibp3. This ensures redundancy across the two PCIe InfiniBand expansion cards. Assign an IP address to each IPMP interface. The configuration is replicated to the other storage controller.

    Figure 8.11 Multipathing (IPMP) Configuration

    Figure showing ZFS Storage Appliance interface configuration for InfiniBand. The illustration shows an additional interface being configured for IPMP across the two existing interfaces. The IP address is assigned to the active underlying interface. The other interface remains in standby, ready to take over the master role in case the active interface should fail.
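
    A CLI equivalent of the IPMP step might look as follows. This is a sketch only, assuming the hypothetical interface names created earlier and a /24 netmask on the storage network; substitute the netmask used in your installation and verify the property names in your software release.

      hostname:> configuration net interfaces
      hostname:configuration net interfaces> ipmp
      hostname:configuration net interfaces ipmp (uncommitted)> set label=pca_ib_ipmp0
      hostname:configuration net interfaces ipmp (uncommitted)> set links=pca_ib_if0,pca_ib_if1
      hostname:configuration net interfaces ipmp (uncommitted)> set standbys=pca_ib_if1
      hostname:configuration net interfaces ipmp (uncommitted)> set v4addrs=192.168.40.242/24
      hostname:configuration net interfaces ipmp (uncommitted)> commit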

  4. Apply Connectivity Configuration

    Click the Apply button to commit the connectivity configuration you performed up to this point. When you select any device, datalink or interface associated with your configuration, all related items are highlighted in the user interface, as shown in the screenshot below. All other items unrelated to the configuration in this procedure have been blurred out in the image.

    Figure 8.12 Final Connectivity Configuration

    Figure showing ZFS Storage Appliance interface configuration for InfiniBand. The connectivity configuration is now final and active.

  5. Verify Management Cluster Configuration

    The external storage setup proposed in this chapter has an active/passive cluster configuration. Consequently all storage resources for Oracle PCA have to be available at one particular IP address at any time. The IP address and the storage resources are owned by the active head in the management cluster, and the standby head is ready to take over the IP address and expose the same storage resources if the active head should fail.

    On the active storage controller, go to Configuration and then to Cluster. The active/passive cluster configuration is illustrated in the screenshot below. All resources involved in the Oracle PCA external storage setup are owned by the active head: the storage appliance management interface, the IPMP interface configured specifically for Oracle PCA external storage connectivity, and the storage pool where the LUNs will be created.

    In this setup the entire configuration is applied on the active controller and automatically replicated on the standby controller. Since all resources are owned by the active head, there is no need to log on to the standby head and make any configuration changes.

    Note

    In active/active configuration both storage controllers take ownership of the storage resources in their storage pool. Each storage controller presents its storage resources to Oracle PCA using separate redundant network paths. The active path at any given moment is determined through multipathing.

    Figure 8.13 Management Cluster Configuration

    Figure showing ZFS Storage Appliance management cluster configuration as seen from the perspective of the active head. All resources are owned by the active head, none by the standby head.

  6. Configure iSCSI Targets

    On the active controller, go to Configuration, then SAN, and then to iSCSI. Click the Plus icon in the Targets menu to create a new iSCSI target. Enter a value in the Alias field that makes it easy to identify the iSCSI target. Select the IPMP interface you configured specifically for Oracle PCA external storage connectivity. Click OK to create the new target.

    Figure 8.14 Create iSCSI Target

    Figure showing ZFS Storage Appliance iSCSI target configuration for InfiniBand. The illustration shows the dialog to add an iSCSI target. Enter an alias to properly identify the target and associate it with the IPMP interface you configured earlier.

    Drag the iSCSI target that you have created into the Target Group area to create a target group. Edit the target group, give it an appropriate name, and make sure the IQN of the correct iSCSI target is selected. Click OK to save your changes.

    Figure 8.15 Create iSCSI Target Group

    Figure showing ZFS Storage Appliance iSCSI target group configuration for InfiniBand. The illustration shows the dialog to add an iSCSI target group. Enter a name to properly identify the target group and select the correct iSCSI target IQN.

    Note

    In active/active configuration you must create an iSCSI target and target group on each storage controller. In the target configuration select the two IPMP interfaces you created.

    The iSCSI initiators and initiator group, which are configured in the steps below, are automatically replicated to the second storage controller.
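
    If you prefer the appliance CLI for this step, the target can be created in the iSCSI targets context. The session below is only a sketch: the exact context path varies between appliance software releases (for example, configuration san iscsi targets versus configuration san targets iscsi), and the alias and interface names are the hypothetical ones used earlier.

      hostname:> configuration san iscsi targets
      hostname:configuration san iscsi targets> create
      hostname:configuration san iscsi targets target (uncommitted)> set alias=pca_tgt0
      hostname:configuration san iscsi targets target (uncommitted)> set interfaces=pca_ib_ipmp0
      hostname:configuration san iscsi targets target (uncommitted)> commit

    Target groups, initiators and initiator groups can be created in the corresponding groups and initiators contexts in the same way.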

  7. Configure iSCSI Initiators

    First, you need to retrieve the IQN of each compute node you wish to grant access to the external storage. Log on to the master management node of your Oracle PCA and proceed as follows:

    1. Using the CLI, list all compute nodes.

      [root@ovcamn05r1 ~]# pca-admin list compute-node
      
      Compute_Node  IP_Address    Provisioning_Status  ILOM_MAC            Provisioning_State
      ------------  ----------    -------------------  --------            ------------------
      ovcacn09r1    192.168.4.7   RUNNING              00:10:e0:3f:82:75   running
      ovcacn13r1    192.168.4.11  RUNNING              00:10:e0:3f:87:73   running
      ovcacn26r1    192.168.4.13  RUNNING              00:10:e0:3e:46:db   running
      ovcacn12r1    192.168.4.10  RUNNING              00:10:e0:3f:8a:c7   running
      ovcacn08r1    192.168.4.6   RUNNING              00:10:e0:3f:84:df   running
      ovcacn27r1    192.168.4.14  RUNNING              00:10:e0:3f:9f:13   running
      ovcacn07r1    192.168.4.5   RUNNING              00:10:e0:3f:75:73   running
      ovcacn11r1    192.168.4.9   RUNNING              00:10:e0:3f:83:23   running
      ovcacn10r1    192.168.4.8   RUNNING              00:10:e0:3f:89:83   running
      ovcacn14r1    192.168.4.12  RUNNING              00:10:e0:3f:8b:5d   running
      -----------------
      10 rows displayed
      
      Status: Success
    2. SSH into each compute node and display the contents of the file initiatorname.iscsi.

      [root@ovcamn05r1 ~]# ssh ovcacn07r1
      root@ovcacn07r1's password:
      [root@ovcacn07r1 ~]# cat /etc/iscsi/initiatorname.iscsi
      InitiatorName=iqn.1988-12.com.oracle:a72be49151
      
      [root@ovcacn07r1 ~]# exit
      logout
      Connection to ovcacn07r1 closed.
      
      [root@ovcamn05r1 ~]#
      Tip

      Using SSH to connect to each of the listed compute nodes is the fastest way to obtain the IQNs. However, they can also be copied from the Oracle VM Manager user interface, in the Storage Initiator perspective for each server.
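
      If many compute nodes are involved, the collection can be scripted from the master management node. The loop below is a minimal sketch, assuming password-less SSH to the compute nodes and an abbreviated host list; extend the list with all compute nodes reported by pca-admin.

      [root@ovcamn05r1 ~]# for cn in ovcacn07r1 ovcacn08r1 ovcacn09r1; do
      >   echo -n "$cn: "
      >   ssh root@$cn "sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi"
      > done
      ovcacn07r1: iqn.1988-12.com.oracle:a72be49151
      ...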

    3. Copy the IQN of the compute node and use it to define a corresponding iSCSI initiator in the ZFS Storage Appliance user interface.

    Click on the Initiators link to define the iSCSI initiators for the compute nodes that you wish to expose LUNs to. Click the Plus icon to identify a new iSCSI initiator. Enter the compute node Initiator IQN and an Alias that makes it easy to identify the iSCSI initiator. Repeat this for each compute node so that all initiators appear in the list.

    Figure 8.16 Identify iSCSI Initiators

    Figure showing ZFS Storage Appliance iSCSI initiator configuration for InfiniBand. The illustration shows an example of an initiator being defined for one of the compute nodes.

    Drag one of the iSCSI initiators that you have created into the Initiator Group area to create a new initiator group. Edit the initiator group, give it an appropriate name, and make sure that the IQNs of all compute nodes that you want to make members of this initiator group are selected. Click OK to save your changes.

    Figure 8.17 Create iSCSI Initiator Group

    Figure showing ZFS Storage Appliance iSCSI initiator group configuration for InfiniBand. The illustration shows the dialog to edit an iSCSI initiator group. Enter a name to properly identify the initiator group and select all required compute node IQNs.

  8. Apply iSCSI Target and Initiator Configuration

    Click the Apply button to commit the iSCSI target and initiator configuration you performed up to this point. When you select any initiator or initiator group associated with your configuration, all related items are highlighted in the user interface, as shown in the screenshot below. Other items unrelated to the configuration in this procedure have been blurred out in the image.

    Figure 8.18 Final iSCSI Configuration

    Figure showing ZFS Storage Appliance iSCSI configuration for InfiniBand. The iSCSI configuration is now final and active.

  9. Create a LUN

    For easy management it is good practice to organize storage resources in separate projects. Create a project first for your Oracle PCA external storage, and then add LUNs as you need them.

    On the active storage controller, go to Shares. On the left hand side, click the Plus icon to create a new Project. In the Create Project dialog box, enter a Name to make it easy to identify the project. Click Apply to save the new project.

    Figure 8.19 Create ZFS Storage Project

    Figure showing ZFS Storage Appliance storage project configuration for InfiniBand. The illustration shows the dialog to add a new project. Enter a name to properly identify the project.

    In the navigation pane on the left hand side, select the project you just created. In the Shares window, first select LUNs, and then click the Plus icon to create a new LUN as part of this project.

    Fill out the LUN properties, select the iSCSI target group and initiator group you created earlier, and enter a Name that makes it easy to identify the LUN. Click Apply to add the LUN to the selected project.

    Figure 8.20 Create LUN

    Figure showing ZFS Storage Appliance Create LUN dialog. A LUN is configured for the target and initiator groups created in the previous steps.
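
    The project and LUN can also be created from the appliance CLI. The session below is a minimal sketch, assuming a hypothetical project pca_ext, a hypothetical LUN pca_lun0 of 1 TB, and the target group and initiator group names you chose in the earlier steps; verify the property names in your software release.

      hostname:> shares
      hostname:shares> project pca_ext
      hostname:shares pca_ext (uncommitted)> commit
      hostname:shares> select pca_ext
      hostname:shares pca_ext> lun pca_lun0
      hostname:shares pca_ext/pca_lun0 (uncommitted)> set volsize=1T
      hostname:shares pca_ext/pca_lun0 (uncommitted)> set targetgroup=pca-targets
      hostname:shares pca_ext/pca_lun0 (uncommitted)> set initiatorgroup=pca-initiators
      hostname:shares pca_ext/pca_lun0 (uncommitted)> commit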

    Note

    In active/active configuration each storage controller owns a storage pool. You must create a storage project for use with Oracle PCA on each storage controller. Associate the LUNs you create with the iSCSI initiator group containing the Oracle PCA compute nodes, and with the iSCSI target group of the selected storage controller. As a result, the LUNs in question are owned by that storage controller. For optimal performance the ownership of storage resources should be balanced between both active storage controllers.

If you wish to access these LUNs as physical disks within Oracle VM Manager, you must configure Oracle VM Manager first. Refer to Section 8.3.4.1, “iSCSI Configuration” for more information.

8.3.4 Enabling External IPoIB Storage in Oracle VM Manager

If you intend to use your ZFS appliance to provide storage for use directly by Oracle VM, to host repositories and virtual machines, you must configure the storage within Oracle VM Manager before you are able to use it. The configuration steps that you must perform depend on whether you have configured iSCSI or NFS on your ZFS Storage Appliance. This section provides a brief outline of the steps that you must perform to configure Oracle VM Manager for each of these technologies. For more detailed information, you should refer to the Oracle VM documentation.

If you only intend to make this storage available to individual virtual machines and do not intend to use the storage for underlying Oracle VM infrastructure, you do not need to perform any of the steps documented in this section, but you will need to configure each virtual machine directly to access the storage either over NFS or iSCSI.
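
For reference, a virtual machine running Oracle Linux could attach the external storage directly with the standard open-iscsi tools, or mount an NFS share exported by the appliance. The commands below are a minimal sketch, assuming the IPMP address 192.168.40.242 and hypothetical target and share names; the actual target IQN is reported by the discovery command.

  # Discover the iSCSI targets exposed at the IPMP address of the appliance
  [root@myvm ~]# iscsiadm -m discovery -t sendtargets -p 192.168.40.242
  # Log in to the discovered target, using the IQN returned above
  [root@myvm ~]# iscsiadm -m node -T iqn.1986-03.com.sun:02:mytarget -p 192.168.40.242 --login

  # Alternatively, mount an NFS share exported by the appliance
  [root@myvm ~]# mkdir -p /mnt/pca-share
  [root@myvm ~]# mount -t nfs 192.168.40.242:/export/myshare /mnt/pca-share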

Caution

Reprovisioning restores a compute node to a clean state. If a compute node with active connections to external storage repositories is reprovisioned, the external storage connections need to be configured again after reprovisioning.

8.3.4.1 iSCSI Configuration

The following configuration steps should be performed in Oracle VM Manager if you have configured your storage appliance for iSCSI. The process to add a SAN Server in Oracle VM Manager is clearly documented in the Oracle VM Manager User's Guide. Refer to the section entitled Discover SAN Server.

  1. Log into Oracle VM Manager on the Oracle PCA.

  2. Select the Storage tab to configure your storage.

  3. Click on the Discover SAN Server icon to load the wizard.

  4. Enter the DNS name of the storage appliance in the Name field. In the Storage Type field, use the drop-down selector to select iSCSI Storage Server. In the Storage Plug-in field, you must select the Oracle Generic SCSI plugin. Note that alternate storage plugins are not supported in this configuration. Click Next.

    Figure 8.21 Discover the SAN Server

    Figure showing the Discover SAN Server wizard in Oracle VM Manager. Note that the Oracle Generic SCSI Plugin is selected.

  5. In the Access Information dialog, click on the icon that allows you to add a new Access Host. This opens the Create Access Host dialog. Enter the IP address that you configured for the IPMP interface of the storage appliance, for example 192.168.40.242. If you have configured your storage appliance with CHAP access, you must also enter the CHAP username and password here. Click OK to close the Create Access Host dialog. Click Next.

    Figure 8.22 Create Access Host

    Figure showing the Create Access Host dialog in Oracle VM Manager. Use this dialog to add an access host for the IP address of the IPMP interface on the ZFS Storage Appliance.

  6. In the Add Admin Servers dialog, select all of the servers in the Available Servers frame and move them to the Selected Servers frame. Click Next.

  7. In the Manage Access Group dialog, click on Default Access Group and then click on the Edit icon. The Edit Access Group dialog opens. Click on the Storage Initiators tab. Select the IQNs of all the initiators that you configured on the ZFS Storage Appliance and move them into the Selected Storage Initiators pane. Usually it is acceptable to move all of the IQNs across. Click OK to save the changes.

    Figure 8.23 Edit Access Group

    Figure showing the Edit Access Group dialog in Oracle VM Manager. The initiators that are configured on the ZFS Storage appliance are added to the Selected Storage Initiators for the Access Group.

  8. Click the Finish button to exit the wizard and to save your changes.

    The iSCSI server appears in the SAN Servers tree in the navigation pane. After storage discovery the LUN appears in the Physical Disks perspective.

    Figure 8.24 New Physical Disk Available

    Figure showing the Storage tab in Oracle VM Manager. The LUN exposed by the ZFS storage appliance appears in the Physical Disks perspective for the selected SAN Server.

  9. If the ZFS Storage Appliance is running in active/active configuration, the two exposed iSCSI targets are detected as separate SAN servers by Oracle VM Manager. You must execute the Discover SAN Server procedure separately for each access host IP address, so that the storage resources owned by each of the two respective controllers are available for use with Oracle PCA.

8.3.4.2 NFS Configuration

The following configuration steps should be performed in Oracle VM Manager if you have configured your storage appliance for NFS. The process to add a File Server in Oracle VM Manager is clearly documented in the Oracle VM Manager User's Guide. Refer to the section entitled Discover File Server.

  1. Log into Oracle VM Manager on the Oracle PCA.

  2. Select the Storage tab to configure your storage.

  3. Click on the Discover File Server icon to load the wizard.

  4. Enter all required information into the wizard. Use the IP addresses that you configured for the device as the Access Host IP, for example 192.168.40.242.

  5. Select all of the compute nodes that should be designated as Admin Servers.

  6. Select two or three compute nodes that should be used as Refresh Servers.

  7. Select the file systems that you would like to use.

  8. Click the Finish button.

  9. The file server appears in the File Servers tree in the navigation pane.

  10. If the ZFS Storage Appliance is running in active/active configuration, you must execute the Discover File Server procedure separately for each access host IP address, so that the storage resources owned by each of the two respective controllers are available for use with Oracle PCA.