8.2 Adding External Fibre Channel Storage

Fibre Channel (FC) can be used to connect additional FC-capable storage devices to the Oracle Private Cloud Appliance (PCA) using the FC ports available on the Fabric Interconnects. The Oracle PCA automatically creates virtual Host Bus Adaptors (vHBAs) on each server in the Oracle PCA rack. The World Wide Port Names (WWPNs) defined for these vHBAs can be used to define Initiator Groups on your storage appliance to facilitate access to the LUNs that you wish to make available to your compute and management nodes.

Note

Oracle PCA can be ordered without Fibre Channel I/O modules in the Fabric Interconnects. However, these modules can be ordered separately and installed at a later time. For installation instructions, see Section 8.2.2, “Installing Optional Fibre Channel I/O Modules”.

Oracle PCA introduces the concept of "storage clouds" that group together the FC ports used to connect the Fabric Interconnects to your own FC switch or switches. Each cloud consists of two FC ports on one of the Fabric Interconnects. A vHBA is created on each server for each storage cloud. A total of four storage clouds are defined when the Oracle PCA is provisioned, resulting in four vHBAs on each of the compute and management nodes.

Storage clouds allow you to cable and configure your external storage in such a way as to improve overall throughput or to build a fully HA-enabled infrastructure. The storage clouds are created and configured automatically, so all you need to do is choose the type of HA configuration that you wish to use and then cable accordingly. The storage cloud design reduces configuration complexity and allows the greatest possible flexibility in terms of your HA requirements.

At a bare minimum, a single cloud can be used to access storage on each of your compute nodes or on your management nodes. However, a simple HA configuration is recommended. To achieve this, you should use at least two clouds cross-cabled from each of the Fabric Interconnects. For a fully HA-enabled environment, you can use all four clouds cross-cabled between the Fabric Interconnects and two of your own FC switches.

If you do not require HA, you can use multiple clouds to increase overall throughput. In this configuration, you can cable three clouds into separate switches and connect them to the same storage. If one cloud should fail, you are able to bring up the fourth cloud to take over its function and to maintain the same level of throughput.

Each of these configurations is achieved entirely through the different ways in which your FC switch or switches are patched into the Fabric Interconnects. These approaches are described in more detail in Section 8.2.3, “Connecting Fibre Channel Hardware”.

Since storage clouds map directly to the vHBAs on each server, it is possible to configure zones on your FC switch or switches to securely separate traffic for each storage cloud. This gives you the opportunity to use separate clouds for different purposes using secured channels to facilitate communication between servers and the LUNs on your storage. This is described in Section 8.2.4, “Zone Configuration”.

To use Fibre Channel with the Oracle PCA, you must supply your own NPIV-capable FC switch or switches. It is not possible to simply patch your FC-capable storage directly into the FC ports on the Fabric Interconnects. This is because the Fabric Interconnects use NPIV (N_Port ID Virtualization) to map the port nodes to the World Wide Node Names (WWNNs) of the vHBAs that are created on each server. Software required to translate WWPNs to WWNNs does not exist on the storage heads of most FC storage devices, so directly attaching the storage device would prevent registration of the WWPNs for the vHBAs available on each server.

8.2.1 Creation of Virtual Host Bus Adaptors (vHBAs)

Each server in the Oracle PCA is connected to the Fabric Interconnects via an InfiniBand (IB) connection. The Fabric Interconnects are capable of translating connections on their Fibre Channel ports and rerouting them over these IB connections. To facilitate this, vHBAs must be defined on each server to map to a storage cloud defined on the Fabric Interconnects. The storage cloud that each vHBA maps to determines which FC ports it relates to on the Fabric Interconnects.

During the initial configuration of the management nodes, each of the storage clouds is configured automatically on the Fabric Interconnects and vHBAs for each cloud are created on each of the management nodes. WWNNs and WWPNs are generated for the vHBAs on the management nodes. When you cable for a storage cloud and connect your storage appliance, you can add these WWPNs to a storage initiator group and export the LUNs that you wish to make available to your management nodes. These disks can then be mounted on the management nodes as needed. Management nodes are configured with vHBAs to allow them to connect to the different storage clouds so that it is possible to use externally connected storage to store backup and log data, if required.

Tip

It is important to distinguish between WWNNs and WWPNs. A WWNN is used to identify a device or node such as an HBA, while a WWPN is used to identify a port that is accessible via that same device. Since some devices can have multiple ports, a device may have a single WWNN and multiple WWPNs.

In the case of the vHBAs that are generated on each compute node, there is a single WWNN and a single WWPN for each vHBA. While these may look almost identical, the fourth hexadecimal octet differs between the WWPN and the WWNN. This is illustrated as follows:

WWPN                       WWNN
50:01:39:70:00:4F:91:00    50:01:39:71:00:4F:91:00

When configuring storage initiators and initiator groups, you should ensure that you are configuring these for the WWPN for each vHBA. If you use the WWNN for any vHBA within an initiator group, that initiator group may not function as expected.

Compute nodes are configured in the same way, although this takes place during compute node provisioning. This means that if you add a compute node to the rack, it is automatically configured so that it is ready to access any externally attached storage. Creation and configuration of vHBAs is also built into the upgrade process, so that external FC storage can be used with existing environments. Once you have cabled for your storage and configured the initiator groups on your storage appliance, you can view the LUNs that you have made available to your compute nodes directly in Oracle VM Manager.

If you purchased an Oracle PCA with factory-installed Fibre Channel I/O modules, then these processes are entirely automated and do not require any intervention.

If you purchased an Oracle PCA without Fibre Channel I/O modules, and you need to install the optional modules into the Fabric Interconnects first, then carefully follow the instructions in Section 8.2.2, “Installing Optional Fibre Channel I/O Modules”.

8.2.2 Installing Optional Fibre Channel I/O Modules

The X5-2 base rack with Oracle PCA Release 2.0.3 software marks the first configuration that can be ordered with or without Fibre Channel I/O modules. In an Oracle PCA without Fibre Channel connectivity, slots 3 and 12 of the two Fabric Interconnects are empty. As a consequence, the storage clouds and vHBAs for Fibre Channel connectivity are not created during provisioning.

If you decide to add Fibre Channel connectivity at a later time, you must order and install the optional Fibre Channel I/O modules, upgrade the controller software and configure the necessary storage clouds and vHBAs.

Adding Fibre Channel Connectivity to a Provisioned Appliance

  1. If your Oracle PCA has no factory-installed Fibre Channel I/O modules, you must first make sure that the appliance controller software is Release 2.1.1 or newer. The update process typically takes at least 2.5 hours.

    For detailed software update instructions, see Updating Oracle Private Cloud Appliance in the Oracle Private Cloud Appliance Administrator's Guide.

  2. When the controller software update has completed successfully, install a total of four Fibre Channel I/O modules in slots 3 and 12 of each Oracle Fabric Interconnect F1-15. You can install the new modules without powering down the Fabric Interconnect.

    Caution

    You must use only slots 3 and 12. Fibre Channel I/O modules installed in other slots are likely to cause future upgrade issues and are therefore not supported within Oracle PCA.

    1. Ensure that the module's component side is facing towards the right edge of the chassis. Also, ensure that the module's top and bottom edges are correctly aligned with the upper and lower tracks at the left edge of each module slot.

    2. Secure the module in the slot by tightening the securing screw located at the bottom edge of the module's rear panel.

      Caution

      Tighten the screws so that they are lightly snug. If you overtighten the screws, they can break.

  3. When all four Fibre Channel I/O modules have been installed, configure the storage clouds and vHBAs by running the appropriate commands from the Oracle PCA Command Line Interface (CLI).

    1. Configure the vHBAs on both management nodes.

      PCA> configure vhbas ovcamn05r1 ovcamn06r1
      
      Compute_Node         Status         
      ------------         ------         
      ovcamn05r1           Succeeded
      ovcamn06r1           Succeeded    
      ----------------
      2 rows displayed
      
      Status: Success
    2. Verify that the clouds have been configured.

      PCA> list storage-network
      
      Network_Name              Description
      ------------              -----------
      Cloud_A                   Default Storage Cloud ru22 port1 - Do not delete or modify
      Cloud_B                   Default Storage Cloud ru22 port2 - Do not delete or modify
      Cloud_C                   Default Storage Cloud ru15 port1 - Do not delete or modify
      Cloud_D                   Default Storage Cloud ru15 port2 - Do not delete or modify
      ----------------
      4 rows displayed
      
      Status: Success
    3. If the four storage clouds have been configured correctly, configure the vHBAs on all compute nodes.

      PCA> configure vhbas ALL
      
      Compute_Node         Status         
      ------------         ------         
      ovcacn07r1           Succeeded
      ovcacn08r1           Succeeded
      [...]
      ovcacn36r1           Succeeded
      ovcacn37r1           Succeeded
      ----------------
      20 rows displayed
      
      Status: Success

    These steps provide only an overview of the procedure. For detailed instructions, see Enabling Fibre Channel Connectivity on a Provisioned Appliance in the Oracle Private Cloud Appliance Administrator's Guide.
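
    As an additional check before you move on to cabling and storage configuration, you can list the WWPNs that were generated for the new vHBAs. This is a quick sanity check only; the same command is used again in Section 8.2.5 when you create storage initiators, and the output shown here is abbreviated and reuses the example values from that section, so the WWPNs on your rack will differ.

      PCA> list wwpn-info

      WWPN                      vHBA     Cloud_Name    Server       Type  Alias
      ----                      ----     ----------    ------       ----  -----
      50:01:39:70:00:69:F1:06   vhba01   Cloud_A       ovcacn08r1   CN    ovcacn08r1-Cloud_A
      [...]

      Status: Success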

8.2.3 Connecting Fibre Channel Hardware

Cabling requirements to attach external FC-capable storage to the Oracle PCA are very specific due to the grouping of FC ports for storage clouds on the Fabric Interconnects. You must ensure that, when cabling for a storage cloud, both ports defined for the storage cloud are cabled into your FC switch. Ideally, you should cable all four FC ports on each of the Fabric Interconnects within the rack, for a total of eight cables, at the same time. These should either connect to a single Fibre Channel switch outside of the rack, or be cross-connected to two Fibre Channel switches outside of the rack to improve redundancy. The following table describes how ports are grouped to create each cloud:

Table 8.2 Fibre Channel Port and Cloud Mappings

Cloud Name   Top Oracle Fabric Interconnect F1-15 (RU 22-25)   Bottom Oracle Fabric Interconnect F1-15 (RU 15-18)
             I/O Module 3        I/O Module 12                 I/O Module 3        I/O Module 12
A            Port 1              Port 1                        none                none
B            Port 2              Port 2                        none                none
C            none                none                          Port 1              Port 1
D            none                none                          Port 2              Port 2


For a bare minimum configuration, you can use two cables to connect your FC switch to cloud A. This would require that you connect cables to the top Fabric Interconnect in the rack, using I/O Module 3 Port 1 and I/O Module 12 Port 1. Connect each controller head on your FC storage appliance to your FC switch. Typically you would use two cables for each controller head.

For a basic HA configuration, you should cable for cloud A and cloud C. This allows you to use both of the Fabric Interconnects in the rack to improve redundancy. This is the recommended default minimum configuration. It is important that cloud A and cloud C are cabled to the same FC switch.

Figure 8.1 Minimum HA FC Storage Cabling Configuration

Figure showing the minimum HA FC storage cabling configuration, with cloud A on the top Fabric Interconnect and cloud C on the bottom Fabric Interconnect connected to the FC switch. Two cables are connected from each controller head on the storage appliance into the same FC switch.

To maximize your HA configuration by using an additional FC switch, you can cable cloud B and cloud D into the second FC switch. In this situation you should cross-cable the controller heads on your FC storage appliance, so that each controller head is connected to each of your FC switches.

Figure 8.2 Full HA FC Storage Cabling Configuration

Figure showing the full HA FC storage cabling configuration, with cloud A on the top Fabric Interconnect and cloud C on the bottom Fabric Interconnect connected to the same FC switch. Cloud B on the top Fabric Interconnect and cloud D on the bottom Fabric Interconnect are connected to an alternate FC switch. Each controller head on the storage appliance is connected to each of the FC switches.

Other cabling possibilities exist, such as the aforementioned configuration where the clouds are each connected to independent FC switches to maximize throughput or to achieve physical traffic separation. Usually, in this configuration, one cloud is left disconnected in standby so that, if one of the other clouds fails, the standby cloud can be connected to take over the function of the failed cloud. In this situation, you would need to reconfigure the initiator groups on your storage appliance so that the WWPNs of the standby cloud are substituted for those of the failed cloud.
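
For example, if cloud D is the cloud that you keep in standby, the following command, run from a shell on the master management node, lists the WWPNs that you would register in the initiator groups in place of those of the failed cloud. This is a minimal sketch: the cloud name is illustrative, and the command simply filters the same listing that is shown in full in Section 8.2.5.

  # pca-admin list wwpn-info | grep Cloud_D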

The most important points to remember when cabling are as follows:

  • Clouds are composed of two equivalent ports on alternate I/O Modules on the same Fabric Interconnect. For instance, cloud A on the top Fabric Interconnect in the rack consists of Port 1 on I/O Module 3 and Port 1 on I/O Module 12, while cloud D on the bottom Fabric Interconnect in the rack consists of Port 2 on I/O Module 3 and Port 2 on I/O Module 12.

  • Although not required, you should cable both ports that belong to a cloud to provide redundancy and to allow better throughput.

  • Unless you have defined zones that span more than one FC switch, the ports that belong to a cloud must be cabled to connect to the same FC switch.

  • For HA configurations, clouds on alternate Fabric Interconnects should be connected to the same FC switch. For instance, cloud A and cloud C should be connected to the same FC switch; while cloud B and cloud D can be connected to an alternate switch.

You can connect FC cables to the appliance at any time, and no system shutdown is required, allowing you to dynamically add storage or improve redundancy as required.

8.2.4 Zone Configuration

The Oracle PCA requires that Fibre Channel zoning be configured on the external FC switches to control and limit the amount of traffic on the storage network, for efficient LUN discovery, and to maximize stability within the environment.

FC zoning is also used to enhance security by providing an extra layer of traffic separation on the storage network. Even if you are using storage initiator groups to perform LUN masking, it is generally considered good practice to also configure FC zones to limit the exposure of LUNs and to prevent unrestricted use of this network medium. Zone configuration is particularly useful where the FC switch or switches are shared with devices other than the Oracle PCA.

The Oracle PCA supports single initiator pWWN zoning, in line with industry best-practice. It is highly recommended that you configure single initiator pWWN zoning for all Fibre Channel connections to the rack. This requires a Fibre Channel switch that supports NPIV.
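
As a reference point, NPIV support can usually be verified on the switch itself. On a Brocade Fabric OS switch, for example, the port configuration, including whether NPIV capability is enabled, can be displayed with the command shown below. The port number is illustrative, and other vendors use different commands, so consult your switch documentation for the equivalent procedure.

  switch:admin> portcfgshow 4

If NPIV is disabled on the ports that connect to the Fabric Interconnects, enable it by following your switch vendor's instructions before configuring zones.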

For all storage clouds that are cable-connected, at least one zone should be configured per WWPN on each compute node. However, multiple zones may be created for each WWPN depending on your cabling. In a setup using all four storage clouds, four zones should exist on the Fibre Channel switch for each compute node. You can obtain a listing of the WWPNs for the compute nodes by running the pca-admin list wwpn-info command.

With a Fibre Channel storage device that has two controller heads in an active/active cluster, and two targets configured for each head, every LUN has four storage paths. A storage path is a connection between a target and a storage cloud. Each LUN has two active paths between a storage cloud and the controller head that owns the LUN, and an additional two standby paths between the same storage cloud and the other controller head. If the storage head that owns the LUN fails, the other storage head takes over and the standby paths become active. This way the storage connectivity for a given cloud remains uninterrupted. To better support failover/failback operations, consider employing a zoning strategy using two zones per storage cloud: one zone connecting a given cloud to a target of one controller head, and a second zone connecting that same cloud to a target of the other controller head.
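
The following listing is a minimal sketch of this two-zones-per-cloud approach for a single initiator, using Brocade Fabric OS zoning commands as an example; other switch vendors use different syntax. The initiator WWPN and alias are taken from the example CLI output in Section 8.2.5, while the storage target WWPNs, the zone names, and the configuration name are hypothetical placeholders that you would replace with the values from your own environment.

  switch:admin> alicreate "ovcacn08r1_Cloud_A", "50:01:39:70:00:69:F1:06"
  switch:admin> alicreate "zfs_head1_tgt", "21:00:00:00:00:00:00:a1"
  switch:admin> alicreate "zfs_head2_tgt", "21:00:00:00:00:00:00:b1"
  switch:admin> zonecreate "z_ovcacn08r1_A_head1", "ovcacn08r1_Cloud_A; zfs_head1_tgt"
  switch:admin> zonecreate "z_ovcacn08r1_A_head2", "ovcacn08r1_Cloud_A; zfs_head2_tgt"
  switch:admin> cfgcreate "pca_fc_cfg", "z_ovcacn08r1_A_head1; z_ovcacn08r1_A_head2"
  switch:admin> cfgenable "pca_fc_cfg"

The first zone connects the Cloud_A initiator on ovcacn08r1 to a target port on one controller head, and the second zone connects the same initiator to a target port on the other head. Repeat this pattern for every cabled cloud on every compute node.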

The configuration of Fibre Channel single initiator pWWN zones is not optional. If you previously attempted to extend your storage using Fibre Channel and did not configure any zones on your FC switch, you should do so now. Furthermore, if you previously configured D,P (domain, port) zones, it is important that you rezone your switches to use single initiator pWWN zones.

Please refer to the documentation of your switch vendor for more information on the configuration steps that you must perform to configure single initiator pWWN zoning.

8.2.5 Configuring Your Storage Appliance

Some initial configuration steps may be required on your appliance before you are able to access storage directly within Oracle VM Manager. Typically, these steps involve configuring some form of LUN masking by mapping LUNs to particular initiator groups that ultimately define which servers and clouds have access to each LUN. These steps are outlined as follows:

  1. Create the initiators that you intend to use to identify the different servers in each cloud on your storage appliance. To do this, register the World Wide Port Names (WWPNs) that were created for the vHBAs on each server in the Oracle PCA rack with your appliance, and assign them aliases so that they can easily be identified as belonging to a particular server and cloud. Most appliances are able to see the WWPNs presented by the Fabric Interconnects, and you can use the Oracle PCA command line interface to identify the matching WWPNs along with the recommended aliases to use when creating each initiator. To do so, run the pca-admin list wwpn-info command:

    PCA> list wwpn-info
    
    WWPN                      vHBA     Cloud_Name    Server       Type  Alias
    ----                      ----     ----------    ------       ----  -----
    50:01:39:70:00:69:F1:06   vhba01   Cloud_A       ovcacn08r1   CN    ovcacn08r1-Cloud_A
    50:01:39:70:00:69:F1:08   vhba01   Cloud_A       ovcacn32r1   CN    ovcacn32r1-Cloud_A
    50:01:39:70:00:69:F1:0C   vhba01   Cloud_A       ovcacn30r1   CN    ovcacn30r1-Cloud_A
    50:01:39:70:00:69:F1:0A   vhba01   Cloud_A       ovcacn29r1   CN    ovcacn29r1-Cloud_A
    50:01:39:70:00:69:F1:07   vhba02   Cloud_B       ovcacn08r1   CN    ovcacn08r1-Cloud_B
    50:01:39:70:00:69:F1:09   vhba02   Cloud_B       ovcacn32r1   CN    ovcacn32r1-Cloud_B
    50:01:39:70:00:69:F1:0B   vhba02   Cloud_B       ovcacn29r1   CN    ovcacn29r1-Cloud_B
    50:01:39:70:00:69:F1:0D   vhba02   Cloud_B       ovcacn30r1   CN    ovcacn30r1-Cloud_B
    50:01:39:70:00:6A:11:0A   vhba03   Cloud_C       ovcacn29r1   CN    ovcacn29r1-Cloud_C
    50:01:39:70:00:6A:11:0C   vhba03   Cloud_C       ovcacn30r1   CN    ovcacn30r1-Cloud_C
    50:01:39:70:00:6A:11:08   vhba03   Cloud_C       ovcacn32r1   CN    ovcacn32r1-Cloud_C
    50:01:39:70:00:6A:11:06   vhba03   Cloud_C       ovcacn08r1   CN    ovcacn08r1-Cloud_C
    50:01:39:70:00:6A:11:0D   vhba04   Cloud_D       ovcacn30r1   CN    ovcacn30r1-Cloud_D
    50:01:39:70:00:6A:11:0B   vhba04   Cloud_D       ovcacn29r1   CN    ovcacn29r1-Cloud_D
    50:01:39:70:00:6A:11:09   vhba04   Cloud_D       ovcacn32r1   CN    ovcacn32r1-Cloud_D
    50:01:39:70:00:6A:11:07   vhba04   Cloud_D       ovcacn08r1   CN    ovcacn08r1-Cloud_D
    -----------------
    16 rows displayed
    
    Status: Success

    Note the Alias column in the example output. Use the values presented in this column for each matching WWPN when you configure the initiators on your appliance. The Oracle PCA CLI is discussed in more detail in the section entitled The Oracle Private Cloud Appliance Command Line Interface (CLI) in the Oracle Private Cloud Appliance Administrator's Guide.

  2. Create initiator groups that define how your storage should be presented to the different compute nodes and virtual machines within your environment. This step is very dependent on your own storage requirements and should be carefully planned for before you start using the storage appliance in conjunction with the Oracle PCA.

    Remember that there is a WWPN for each vHBA on each compute node, representing each storage cloud defined on the Fabric Interconnects. In an HA configuration, you attach the WWPNs for the different storage clouds to the same initiator group. Using this approach, if a storage cloud fails, the LUNs that are exposed for the initiator group are still available to each compute node via an alternate vHBA. Alternatively, you can sacrifice high availability for a more flexible configuration, where the WWPN for each vHBA on each compute node is used independently across different initiator groups. This type of configuration may make more sense where you intend to separate out storage for different functions, such as storage provided to virtual machines and storage used for repositories and other server-related functions. A sketch of an HA initiator group is shown after this procedure.

  3. Map the LUNs that you have created on your storage appliance to the initiator groups you have just created. You should only map LUNs to the initiator groups that you intend to use for those LUNs. Although remapping LUNs is possible, remapping a LUN once it is in use by Oracle VM is not recommended and may cause unexpected behavior within Oracle VM. Therefore, it is important to plan this step carefully around your requirements before proceeding with this configuration.

    Caution

    A maximum of 1024 LUN paths per host applies.
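
As an illustration of the HA approach described in step 2, an initiator group for one compute node might contain that node's initiators for both cabled clouds. The group name below is hypothetical; the aliases and WWPNs are taken from the example output in step 1 and will differ on your rack.

  Initiator group: ovcacn08r1-ha
      ovcacn08r1-Cloud_A    50:01:39:70:00:69:F1:06
      ovcacn08r1-Cloud_C    50:01:39:70:00:6A:11:06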

To perform these steps, refer to the appropriate documentation for your appliance or contact your vendor for assistance. Example 8.1, “Configuring an Oracle Storage Appliance ZS3-4” shows how to configure an Oracle Storage Appliance ZS3-4 correctly for this purpose.

Example 8.1 Configuring an Oracle Storage Appliance ZS3-4

It is possible to configure an Oracle Storage Appliance ZS3-4 either using the web-based user interface or the command line interface. These instructions presume that you are using the web-based user interface, since the interface shows all of the WWPNs that it can actually detect.

  1. Log into the web-based user interface as either root or a user with adequate permissions to create initiator groups and LUNs.

  2. Click on the Configuration link, then click on the SAN link. If Fibre Channel is not already selected, click on the Fibre Channel link. The SAN Summary page is displayed.

  3. Click on the + Initiators link as indicated in the image below:

    Figure 8.3 Click on the + Initiators link.

    Figure showing the SAN Summary page in the Oracle Storage Appliance ZS3-4 web user interface. The + Initiators link is circled in red.

  4. The Identify FC Initiator dialog opens and lists the WWPNs that the ZS3-4 is able to detect. You should create a user-friendly alias for each WWPN listed so that it is easy to identify how these map onto the nodes and clouds configured within the Oracle PCA. Use the Oracle PCA command line interface on the master management node to obtain a listing of all of these WWPNs and the recommended aliases that you should use. You can do this by running the pca-admin list wwpn-info command. Find the matching WWPN in the Identify FC Initiator dialog and click on it to enter the recommended alias in the Alias field. Repeat this action for every WWPN. This step is illustrated in the figure below:

    Figure 8.4 Use the Identify FC Initiator dialog to define user-friendly aliases for the WWPNs.

    Figure showing the Identify FC Initiator dialog in the Oracle Storage Appliance ZS3-4 web user interface. A WWN has been selected from the list of Unaliased ports and an Alias is provided. The Alias should match the alias provided for the equivalent WWPN in the CLI when you run the list wwpn-info command.

  5. When you have finished creating aliases for each WWPN, you can start to create the storage initiator groups where LUNs are presented later in the configuration. To create a new storage initiator group in the SAN Summary view, move your mouse over the left hand side of the WWN name until you see the move icon appear, then drag it to the right hand side of the screen and drop it in the Create New Initiator Group box that appears. This step is illustrated in the figure below:

    Figure 8.5 Use the SAN Summary page to create a new storage initiator group by dragging an initiator across the page.

    Figure showing how to use the SAN Summary page in the Oracle Storage Appliance ZS3-4 web user interface to create a new storage initiator group by dragging an initiator across the page and dropping it in the Create New Initiator Group box that appears.

    You can create as many initiator groups as you like. In most cases, if your storage is not used outside of the Oracle PCA and you do not need to segregate storage within the Oracle PCA, you can simply use the default initiator group. However, if your storage is shared with appliances outside of the Oracle PCA, you must at least create an initiator group for all of the initiators that belong to nodes within the Oracle PCA. If you need to segregate storage further, you can create initiator groups that only include the initiators for particular compute nodes, as illustrated in the screenshot.

  6. Once you have created a new initiator group, you can roll over it using your mouse and then click on the edit icon that appears in the form of a pencil. A dialog appears where you are able to edit the initiator name to change it to something more appropriate. You can also select the rest of the initiators that you wish to include in the initiator group. Click OK when you have finished editing the initiator group.

  7. When you have finished creating and editing initiator groups, it is important that you click on the Apply button on the SAN Summary page to save the changes that you have made.

  8. You can now define how LUNs map to your initiator groups depending on your requirements. This is achieved by clicking on the Shares link in the navigation bar at the top of the page. On the Project Summary page, you can view any storage pools that you have defined and the LUNs associated with those pools. You can also create new LUNs for each storage pool as required. On this page, you can click to edit a LUN and edit the storage initiator groups that it is exposed to. In this view, you can make a LUN accessible to any of the storage initiators that you defined in the previous steps.


8.2.6 Updating Physical Disks in Oracle VM Manager

Since vHBAs are created on each compute node during the provisioning or upgrade process, the storage should be automatically refreshed within Oracle VM Manager as part of this process, and the physical disks should be visible within Oracle VM Manager immediately. However, if a disk is not displayed within Oracle VM Manager after the initial server discovery is performed, you may need to rediscover the compute nodes so that Oracle VM Manager sees the vHBAs and the LUNs that are accessible to them. The following steps describe the actions to perform in Oracle VM Manager to start making use of your newly attached FC storage if these disks do not appear automatically. A command line cross-check from the compute node side is sketched after this procedure.

Caution

Reprovisioning restores a compute node to a clean state. If a compute node with active connections to external storage repositories is reprovisioned, the external storage connections need to be configured again after reprovisioning.

  1. Log into the Oracle VM Manager web-interface on the Oracle PCA.

  2. Click on the Servers and VMs tab and select all of your compute node servers.

  3. Click on the Rediscover Servers icon to rediscover all of your servers.

  4. After you have finished rediscovery, check that there are four new vHBAs for each server by clicking on the server in the navigation pane and then changing the perspective to Storage Initiators.

  5. Click on the Storage tab.

  6. Expand the SAN Servers item in the navigation pane and select the Unmanaged Fibre Channel Storage Array.

  7. Click on the Edit icon and navigate to the Admin Servers tab in the dialog that appears.

  8. Add all of the compute node servers as admin servers for the storage array and click OK to close the dialog.

  9. Click on the Refresh SAN Server icon and click OK on the confirmation dialog.

  10. Wait for all of your servers to be updated. This operation can take several minutes to complete.

  11. The LUNs are displayed in the Physical Disks perspective for the Unmanaged Fibre Channel Storage Array.

  12. You can check that the LUNs are also available to your servers on the Servers and VMs tab by clicking on a server and changing the perspective to the Physical Disks view.
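
If you want to cross-check the configuration from the operating system of a compute node, the vHBAs and their LUN paths are normally visible through the standard Linux Fibre Channel and multipath tools. The commands below are a minimal sketch: they assume root access on the compute node, that the vHBAs are exposed through the fc_host sysfs interface, and that the device-mapper-multipath tools are installed.

  # cat /sys/class/fc_host/host*/port_name
  # multipath -ll

The first command should return four WWPNs, one per vHBA, matching the values reported by the pca-admin list wwpn-info command for that node. The second command lists each LUN together with its active and standby paths.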