
Oracle® Solaris Cluster With Network-Attached Storage Device Manual


Updated: February 2017
 
 

Installing an Oracle ZFS Storage Appliance NAS Device in an Oracle Solaris Cluster Environment

How to Install an Oracle ZFS Storage Appliance in a Cluster


Note -  You can also add an Oracle ZFS Storage Appliance using the Oracle Solaris Cluster Manager (Cluster Manager) browser interface. For Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide. After you install the appliance, you can also use Cluster Manager to edit the Export List property.

Before You Begin

    This procedure relies on the following assumptions:

  • Your cluster nodes have the operating system and Oracle Solaris Cluster software installed.

  • You have administrative access to the Oracle ZFS Storage Appliance.

To perform this procedure, assume the root role or a role that provides solaris.cluster.read and solaris.cluster.modify authorization.

  1. Set up the Oracle ZFS Storage Appliance.

    You can set up the appliance at any point in your cluster installation. Follow the instructions in your Oracle ZFS Storage Appliance's documentation. You can also click Help in the Oracle ZFS Storage Appliance GUI to access information specific to the device you are installing.

    When setting up your Oracle ZFS Storage Appliance, follow the standards that are described in Requirements, Recommendations, and Restrictions for Oracle ZFS Storage Appliance NAS Devices.

  2. On each cluster node, add the Oracle ZFS Storage Appliance name to the /etc/inet/hosts file.

    Add a hostname-to-address mapping for the device in the /etc/inet/hosts file on all cluster nodes, as shown in the following example:

    192.192.11.191 zfssa-123
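This step can be scripted so that it is safe to re-run on every node. The following is a sketch only: the IP address and hostname are the hypothetical values from the example entry above, and a scratch file stands in for /etc/inet/hosts so the sketch can be tried safely anywhere.

```shell
# Hypothetical values taken from the example entry above; substitute
# your appliance's address and name.
ZFSSA_IP="192.192.11.191"
ZFSSA_NAME="zfssa-123"

# On a cluster node this would be /etc/inet/hosts; a scratch copy is
# used here so the sketch can be run without privileges.
HOSTS_FILE="${HOSTS_FILE:-./hosts.example}"
touch "$HOSTS_FILE"

# Append the mapping only if the hostname is not already present, so
# the snippet is idempotent when run on each node.
if ! grep -qw "$ZFSSA_NAME" "$HOSTS_FILE"; then
    printf '%s\t%s\n' "$ZFSSA_IP" "$ZFSSA_NAME" >> "$HOSTS_FILE"
fi
```

Running the snippet a second time leaves the file unchanged, which is why the grep guard precedes the append.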
  3. In the /etc/nsswitch.conf file on every cluster node, ensure that the hosts lookup order lists cluster first, then files, before any other name service or directory service.

    For example:

    hosts:     cluster files nis
    1. Display the current setting for the host and netmask lookup.
      # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
      config/host astring "cluster files nis dns"
      
      # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
      config/netmask astring  "cluster files nis"
    2. If the cluster lookup is not included at the beginning of either lookup list, set the correct lookup list and refresh the name-service switch.
      # /usr/sbin/svccfg \
      -s svc:/system/name-service/switch setprop config/host =astring: \"cluster files nis\"
      # /usr/sbin/svccfg \
      -s svc:/system/name-service/switch setprop config/netmask =astring: \"cluster files nis\"
      # /usr/sbin/svcadm refresh svc:/system/name-service/switch
    3. Verify that the lookup lists now have the cluster lookup at the beginning of the list.
      # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
      # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
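The verification in the sub-steps above can be reduced to a simple string check. This hedged sketch validates that a lookup list begins with cluster followed by files; on Solaris the list itself would come from the svccfg listprop commands shown above, but here it is passed in directly so the check can run anywhere.

```shell
# Sketch: confirm a name-service lookup list starts with "cluster files",
# as the cluster framework requires. On a cluster node the list would be
# the value printed by:
#   /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
check_lookup_order() {
    case "$1" in
        "cluster files"*) return 0 ;;   # correct ordering
        *)                return 1 ;;   # needs the setprop correction
    esac
}

# The example config/host output shown above passes the check.
if check_lookup_order "cluster files nis dns"; then
    echo "host lookup order OK"
else
    echo "host lookup order must be corrected"
fi
```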
  4. Configure the filer workflow for Oracle Solaris Cluster NFS.
    1. In the Oracle ZFS Storage Appliance GUI, select Maintenance, select Workflows, and click the workflow called Configure for Oracle Solaris Cluster NFS.
    2. Provide a password for this workflow.

      This same password will be used again in Step 7.

    Perform the workflow configuration from only one head in a dual-head configuration.


    Note -  If the workflow of the specified name is not present, it is likely that the filer is not running the correct software release. See Requirements for Oracle ZFS Storage Appliance NAS Devices for an example of a supported software release.
  5. In the global zone, install the zfssa-client package on all cluster nodes.

    On all nodes within the global zone, install the zfssa-client package from the repository. You can use the pkg publisher command to check that the publisher is already set for the zfssa-client package. For example, the pkg publisher command might return the following location: https://pkg.oracle.com/ha-cluster/release. For more information about setting the publisher, see How to Install Oracle Solaris Cluster Software Packages in Oracle Solaris Cluster 4.3 Software Installation Guide.

    For example:

    # pkg publisher
    # pkg list -af zfssa-client
    # pkg install zfssa-client
               Packages to install:  1
           Create boot environment: No
    Create backup boot environment: No

    DOWNLOAD                        PKGS       FILES    XFER (MB)
    Completed                       1/1        7/7      0.2/0.2

    PHASE                           ACTIONS
    Install Phase                   17/17

    PHASE                           ITEMS
    Package State Update Phase      1/1
    Image State Update Phase        2/2
    #
  6. If you have a zone cluster, log in to the zone cluster and install the zfssa-client package in the zone cluster's non-global zones.
    # pkg install zfssa-client
  7. Configure Oracle Solaris Cluster fencing support for the Oracle ZFS Storage Appliance.

    Note -  If you skip this step, Oracle Solaris Cluster will not provide fencing support for the appliance.
    1. Add the device and provide the cluster network addresses used to access the appliance.

      Perform this command from any cluster node:

      # clnasdevice add -t sun_uss -p userid=osc_agent -p "nodeIPs{node_name}"=ip_address myfiler

      For example:

      # clnasdevice add -t sun_uss -p userid=osc_agent \
      -p "nodeIPs{node1}"=10.111.11.111 \
      -p "nodeIPs{node2}"=10.111.11.112 device1.us.example.com
      Please enter password
      -t sun_uss

      Specifies sun_uss as the type of device you are adding.

      ip_address

      Specifies the IP address used to perform I/O to the appliance from this node.

      myfiler

      Specifies the name of the Oracle ZFS Storage Appliance that you are adding.

      node_name

      Specifies the name of the cluster node whose IP address is being added.

      This step enables the cluster fencing framework to restrict the specified IP address to read-only access to the filer when its node leaves the cluster.


      Note -  The IP addresses configured for the cluster nodes should match the ones configured in the Oracle ZFS Storage Appliance as described in Requirements for Oracle ZFS Storage Appliance NAS Devices.

      Note -  If you want to add an appliance and provide the cluster network addresses used to access the appliance for a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice add -t sun_uss -p userid=osc_agent -Z zcname \
      -p "nodeIPs{node_name}"=ip_address myfiler
      Please enter password
      -Z zcname

      Specifies the name of the zone cluster where the Oracle ZFS Storage Appliance is being added.


    2. At the prompt, type the same password that you used in Step 4.
    3. Confirm that the device has been added to the cluster.

      Perform this command from any cluster node:

      # clnasdevice show
      ===NAS Devices===
      Nas Device:                  device1.us.example.com
      Type:                       sun_uss
      userid:                     osc_agent
      nodeIPs{node1}                  10.111.11.111
      nodeIPs{node2}                  10.111.11.112
      nodeIPs{node3}                  10.111.11.113
      nodeIPs{node4}                  10.111.11.114

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


      Note -  If you are checking for the device for a zone cluster but you need to issue the command from the global zone, use the clnasdevice show command with the -Z option:
      # clnasdevice show -Z zcname

      You can also perform zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
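The per-node nodeIPs properties in this step can be assembled programmatically when a cluster has several nodes. The following sketch builds the clnasdevice add command line from a list of node/address pairs for review before it is run; the node names, addresses, and filer name are the hypothetical values from the example above.

```shell
# Sketch only: node names, addresses, and the filer name are the
# hypothetical values from the example in this step.
FILER="device1.us.example.com"
CMD="clnasdevice add -t sun_uss -p userid=osc_agent"

# One "node=address" pair per cluster node.
for pair in "node1=10.111.11.111" "node2=10.111.11.112"; do
    node="${pair%%=*}"
    addr="${pair#*=}"
    CMD="$CMD -p \"nodeIPs{$node}\"=$addr"
done
CMD="$CMD $FILER"

# Review the assembled command before running it on a cluster node.
echo "$CMD"
```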


  8. To enable fencing support for the NFS file systems used by the cluster nodes, add the associated projects to the cluster configuration.

    Follow the directions in How to Add Oracle ZFS Storage Appliance Directories and Projects to a Cluster.

  9. Configure a LUN on the Oracle ZFS Storage Appliance NAS device as a quorum device.

    Note -  You can skip this step if the cluster does not require a quorum device or if it has been configured with quorum services from other devices or quorum servers.

    See How to Add an Oracle ZFS Storage Appliance NAS Quorum Device in Oracle Solaris Cluster 4.3 System Administration Guide for instructions for configuring an Oracle ZFS Storage Appliance NAS quorum device.