Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS

Chapter 1 Installing and Maintaining Sun Network-Attached Storage Devices in a Sun Cluster Environment

This chapter contains procedures for installing and maintaining Sun network-attached storage (NAS) devices in a Sun™ Cluster environment. Before you perform any of the procedures in this chapter, read the entire procedure. If you are not reading an online version of this document, have the books listed in Related Books available.

This chapter contains the following procedures.

  • How to Install a Sun NAS Device in a Cluster

  • How to Prepare the Cluster for Sun NAS Device Maintenance

  • How to Restore Cluster Configuration After Sun NAS Device Maintenance

  • How to Remove a Sun NAS Device From a Cluster

  • How to Add Sun NAS Directories to a Cluster

  • How to Remove Sun NAS Directories From a Cluster

For conceptual information about multihost storage devices, see the Sun Cluster Concepts Guide for Solaris OS.

Requirements, Recommendations, and Restrictions for Sun NAS Devices

This section includes only restrictions and requirements that have a direct impact on the procedures in this chapter. A Sun NAS device is supported as a quorum device only in a two-node cluster. For general support information, contact your Sun service provider.

Requirements for Sun NAS Devices

This section describes the following requirements.

  • Requirements When Configuring Sun NAS Devices

  • Requirements When Configuring Sun NAS Devices for Use With Oracle Real Application Clusters

  • Requirements When Configuring Sun NAS Devices as Quorum Devices

Requirements When Configuring Sun NAS Devices

When you configure a Sun NAS device, you must meet the following requirements.

Requirements When Configuring Sun NAS Devices for Use With Oracle Real Application Clusters

When you configure your Sun NAS device for use with Oracle Real Application Clusters (RAC), you must meet the following requirements.

Requirements When Configuring Sun NAS Devices as Quorum Devices

You can choose whether to use the Sun NAS device as a quorum device.

When you use a Sun NAS device as a quorum device, you must meet the following requirements.

Recommendations for Sun NAS Devices

It is strongly recommended that you use a Sun StorageTek™ 5320 NAS Cluster Appliance. Clustered filers provide high availability with respect to the filer data and do not constitute a single point of failure in the cluster.

It is strongly recommended that you use the network time protocol (NTP) to synchronize time on the cluster nodes and the Sun NAS device. Refer to your Sun documentation for instructions about how to configure NTP on the Sun NAS device. Select at least one NTP server for the Sun NAS device that also serves the cluster nodes.
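For example, a minimal sketch of this recommendation, assuming a hypothetical NTP server named ntp1 that both the cluster nodes and the filer can reach, is to point each cluster node (typically in /etc/inet/ntp.conf.cluster on Sun Cluster nodes) and the NAS device's NTP configuration at the same server:

    server ntp1 prefer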

Restrictions for Sun NAS Devices

A Sun NAS device must be connected to all nodes. A Sun NAS device is supported as a quorum device only in a two-node cluster. A Sun NAS device appears as a SCSI shared disk to the quorum subsystem. The iSCSI connection to the Sun NAS device is completely invisible to the quorum subsystem.

Fencing of NFS file systems that are exported from a NAS device is not supported in non-global zones, including nodes of a zone cluster. Fencing support for NAS devices is provided only in global zones.

Installing a Sun NAS Device in a Sun Cluster Environment

How to Install a Sun NAS Device in a Cluster

Before You Begin

This procedure relies on the following assumptions:

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC (role-based access control) authorization.

  1. Set up the Sun NAS device.

    You can set up the Sun NAS device at any point in your cluster installation. Follow the instructions in your Sun NAS device's documentation.

    When setting up your Sun NAS device, follow the standards that are described in Requirements, Recommendations, and Restrictions for Sun NAS Devices.

  2. On each cluster node, add the Sun NAS device name to the /etc/inet/hosts file.

    Add a hostname-to-address mapping for the device in the /etc/inet/hosts file on all cluster nodes, as shown in the following example:

    192.168.11.123 sunnas-123
  3. On each node in the cluster, add the device netmasks to the /etc/inet/netmasks file.

    Add an entry to the /etc/inet/netmasks file for the subnet on which the filer is located, as shown in the following example.

    192.168.11.0 255.255.255.0
  4. In /etc/nsswitch.conf on every cluster node, ensure that files precedes nis and dns information sources for hosts and netmasks information types, as shown in the following example.

    hosts:     files nis
    netmasks:  files nis
  5. Use Sun StorEdge™ Web Administrator to add net addresses for all cluster nodes to the Sun NAS device.

    “Product Overview” in the Sun StorageTek NAS OS Administration Guide describes the Sun StorEdge Web Administrator graphical user interface (GUI). “Adding and Editing Hosts” in the Sun StorageTek NAS OS Administration Guide describes how to add net addresses.

  6. Log into your Sun NAS device and use the Sun StorEdge hostlook command to verify that the net address for each cluster node resolves correctly, as shown in the following example.


    pschost-2# telnet 10.8.165.42
    Trying 10.8.165.42...
    Connected to 10.8.165.42.
    Escape character is '^]'.
    connect to (? for list) ? [menu] admin
    password for admin access ? ********
    n1nas20 > hostlook pschost-1
    pschost-1:
      Name:  pschost-1
      Addr:  10.8.165.42

    If the NIS+ configuration is correct and is used as the primary Host Order naming service, information about the entered host is displayed.

  7. If you are attaching the cluster to the Sun StorageTek 5320 NAS Cluster Appliance filer for the first time, log into the NAS device and use the load command to load the NAS fencing command, as shown in the following example.


    pschost-2# telnet 10.8.165.42
    Trying 10.8.165.42...
    Connected to 10.8.165.42.
    Escape character is '^]'.
    connect to (? for list) ? [menu] admin
    password for admin access ? ********
    n1nas20 > load fencing
    n1nas20 > 
  8. If you are attaching the cluster to the Sun StorageTek 5320 NAS Cluster Appliance filer for the first time, configure the fencing command so that it loads automatically after the filer reboots.

    1. Use ftp(1) to get the /dvol/etc/inetload.ncf file from your Sun NAS device onto your local machine.

    2. Using a text editor, add the following entry to the inetload.ncf file on your local machine.

      fencing
    3. Use ftp to put the edited inetload.ncf file back onto your Sun NAS device (as /dvol/etc/inetload.ncf).
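    The following is a hypothetical sketch of these ftp transfers, assuming the device name sunnas-123 from the earlier example; login prompts are omitted and your paths may differ.

    # ftp sunnas-123
    ftp> get /dvol/etc/inetload.ncf inetload.ncf
    ftp> quit
        (append the line "fencing" to the local copy of inetload.ncf)
    # ftp sunnas-123
    ftp> put inetload.ncf /dvol/etc/inetload.ncf
    ftp> quit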

  9. Use Sun StorEdge Web Administrator to add trusted administrator access to every cluster node.

    “Product Overview” in the Sun StorageTek NAS OS Administration Guide describes the Sun StorEdge Web Administrator graphical user interface (GUI).

    1. In Web Administrator, select UNIX Configuration->Configure NFS->Set Up Hostgroups in the Navigation Pane and create a host group for the cluster that includes every cluster node.

    2. Use ftp(1) to get the /dvol/etc/approve file from your Sun NAS device onto your local machine.

    3. Using a text editor, add the following entry to the approve file on your local machine.

      admin * @cluster-host-group access=trusted

      Note –

      You must add this entry before any existing entries in the approve file, as shown in the following example.

      admin * @schostgroup access=trusted
      admin * @general access=granted

      The approve file is searched in sequence, and the search stops at the first match. Placing the entry that you add before any existing entries ensures that it is matched first.


      admin

      A service type that controls administrative access to StorEdge configuration menus and commands through rlogin and rsh or ssh clients. Each admin entry in the approve file specifies the users and hosts that are allowed administrative access.

      @cluster-host-group

      The name of the host group that you previously created (preceded by the “at” symbol (@)).

      access=trusted

      How the host group can access administrative services on the Sun NAS device. Sun Cluster requires that you grant trusted access to the cluster nodes. Trusted access grants the user access without requiring an administrative password.

      For example, change the contents of your approve file from that shown in the first example to that shown in the second example.

      # Approve file -- controls client access to resources
      files / @trusted access=rw uid0=0

      # Approve file -- controls client access to resources
      files / @trusted access=rw uid0=0
      admin * @schostgroup access=trusted
    4. Use ftp to put the edited approve file back onto your Sun NAS device (as /dvol/etc/approve).

  10. Log into your NAS device and use the Sun StorEdge approve reload command to reload the updated approve file, as shown in the following example.


    pschost-2# telnet 10.8.165.42
    Trying 10.8.165.42...
    Connected to 10.8.165.42.
    Escape character is '^]'.
    connect to (? for list) ? [menu] admin
    password for admin access ? ********
    n1nas20 > approve reload
    n1nas20 > 
  11. Configure Sun Cluster fencing support for the Sun NAS device. If you skip this step, Sun Cluster will not provide fencing support for the NAS device.

    1. From any cluster node, add the device.

      • If you are using Sun Cluster 3.2, use the following command:


        # clnasdevice add -t sun myfiler
        
        -t sun

        Enter sun as the type of device you are adding.

        myfiler

        Enter the name of the Sun NAS device that you are adding.

      • If you are using Sun Cluster 3.1, use the following command:


        # scnas -a -h myfiler -t sunnas
        
        -a

        Add the device to cluster configuration.

        -h myfiler

        Enter the name of the Sun NAS device that you are adding.

        -t sunnas

        Enter sunnas as the type of device that you are adding.

    2. Confirm that the device has been added to the cluster.

      • If you are using Sun Cluster 3.2, use the following command:


        # clnasdevice list
        

        For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

      • If you are using Sun Cluster 3.1, use the following command:


        # scnas -p
        
  12. After the NAS device has been configured to support fencing, add the Sun NAS directories to the cluster.

    Follow the directions in How to Add Sun NAS Directories to a Cluster.

  13. (Optional) Configure a LUN on the Sun NAS device as a quorum device.

    See How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS for instructions on configuring a Sun NAS quorum device.
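    As a rough sketch only (the referenced guide is authoritative), after the iSCSI LUN on the filer is visible to both nodes as a shared DID device, you add it as you would any shared-disk quorum device. The DID name d20 below is hypothetical.

    • If you are using Sun Cluster 3.2, use a command of the following form:

      # clquorum add d20

    • If you are using Sun Cluster 3.1, use a command of the following form:

      # scconf -a -q globaldev=d20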

Maintaining a Sun NAS Device in a Sun Cluster Environment

This section contains procedures for maintaining Sun NAS devices that are attached to a cluster. If a maintenance procedure might jeopardize the device's availability to the cluster, always perform the steps in How to Prepare the Cluster for Sun NAS Device Maintenance before you perform the maintenance procedure. After the maintenance procedure is complete, perform the steps in How to Restore Cluster Configuration After Sun NAS Device Maintenance to return the cluster to its original configuration.

How to Prepare the Cluster for Sun NAS Device Maintenance

Follow the instructions in this procedure whenever the Sun NAS device maintenance you are performing might affect the device's availability to the cluster nodes.

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Stop I/O to the Sun NAS device.

  2. On each cluster node, unmount the Sun NAS device directories.

  3. Determine whether a LUN on this Sun NAS device is a quorum device.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      
  4. If no LUNs on this Sun NAS device are quorum devices, you are finished with this procedure.

  5. If a LUN is a quorum device, perform the following steps:

    1. If your cluster uses other shared storage devices, select and configure another quorum device.

    2. Remove this quorum device.

      See Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for instructions about adding and removing quorum devices.


      Note –

      If your cluster requires a quorum device (for example, a two-node cluster) and you are maintaining the only shared storage device in the cluster, your cluster is in a vulnerable state throughout the maintenance procedure. Loss of a single node during the procedure causes the other node to panic, and your entire cluster becomes unavailable. Limit the amount of time that you spend performing such procedures. To protect your cluster against this vulnerability, add another shared storage device to the cluster.


How to Restore Cluster Configuration After Sun NAS Device Maintenance

Follow the instructions in this procedure after performing any Sun NAS device maintenance that might affect the device's availability to the cluster nodes.

  1. Mount the Sun NAS directories.

  2. Determine whether you want an iSCSI LUN on this Sun NAS device to be a quorum device.

  3. Restore I/O to the Sun NAS device.

How to Remove a Sun NAS Device From a Cluster

Before You Begin

This procedure relies on the following assumptions:


Note –

When you remove the device from cluster configuration, the data on the device is not available to the cluster. Ensure that other shared storage in the cluster can continue to serve the data when the Sun NAS device is removed.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. From any cluster node, remove the device.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice remove myfiler
      

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you are using Sun Cluster 3.1, use the following command:


      # scnas -r -h myfiler
      
      -r

      Remove the device from cluster configuration.

      -h

      Enter the name of the Sun NAS device that you are removing.

  2. Confirm that the device has been removed from the cluster.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice list
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scnas -p
      

How to Add Sun NAS Directories to a Cluster

Before You Begin

The procedure relies on the following assumptions:

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Use Sun StorEdge Web Administrator to create the Sun NAS volumes.

    “Product Overview” in the Sun StorageTek NAS OS Administration Guide describes the Sun StorEdge Web Administrator graphical user interface (GUI). “Creating File Volumes or Segments” in the Sun StorageTek NAS OS Administration Guide describes how to create file volumes.

  2. Use Sun StorEdge Web Administrator to add read/write access to every cluster node.


    Caution –

    You must explicitly grant read/write access to each cluster node. Do not enable general access and do not add access by specifying a cluster host group.


    “Setting Up NFS Exports” in the Sun StorageTek NAS OS Administration Guide describes how to add read/write access to nodes in the cluster.

  3. Log into your NAS device and use the Sun StorEdge approve list command to verify the changes that you made to the approve file, as shown in the following example.


    pschost-2# telnet 10.8.165.42
    Trying 10.8.165.42...
    Connected to 10.8.165.42.
    Escape character is '^]'.
    connect to (? for list) ? [menu] admin
    password for admin access ? ********
    n1nas20 > approve list
    ====================
    acache: approve
    ====================
    files / @trusted access=rw uid0=0
    admin * @schostgroup access=trusted
    files /vol1 schost1 access=rw
    files /vol1 schost2 access=rw
    files /vol2 schost1 access=rw
    files /vol2 schost2 access=rw
    ====================
    acache: hostgrps
    ====================
    trusted schostgroup
    n1nas20 > 
  4. From any cluster node, add the directories.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice add-dir -d /export/dir1,/export/dir2 myfiler
      
      -d /export/dir1,/export/dir2

      Enter the directory or directories that you are adding.

      myfiler

      Enter the name of the Sun NAS device containing the directories.

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you are using Sun Cluster 3.1, use the following command:


      # scnasdir -a -h myfiler -d /vol/DB1 -d /vol/DB2
      
      -a

      Add the directory or directories to cluster configuration.

      -h myfiler

      Enter the name of the Sun NAS device whose directories you are adding.

      -d

      Enter the directory to add. Use this option once for each directory you are adding. This value must match the name of one of the directories exported by the Sun NAS device.

  5. Confirm that the directories have been added.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice show -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scnas -p
      
  6. If you do not use the automounter, mount the directories by performing the following steps:

    1. On each node in the cluster, create a mount-point directory for each Sun NAS directory that you added.


      # mkdir -p /path-to-mountpoint
      
      path-to-mountpoint

      Name of the mount-point directory on which to mount the Sun NAS directory.

    2. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

      If you are using your Sun NAS device for Oracle Real Application Clusters database files, set the following mount options:

      • forcedirectio

      • noac

      • proto=tcp

      When mounting Sun NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Sun Cluster places no additional restrictions or requirements on the options that you use.
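      The following is a hypothetical /etc/vfstab entry, assuming the device name sunnas-123 and the exported directory /vol/DB1 from earlier examples, a local mount point of /oradata/DB1, and the Oracle RAC mount options listed above; substitute your own names and options.

      sunnas-123:/vol/DB1  -  /oradata/DB1  nfs  -  yes  forcedirectio,noac,proto=tcp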

How to Remove Sun NAS Directories From a Cluster

Before You Begin

This procedure assumes that your cluster is operating.


Note –

When you remove the device directories, the data on those directories is not available to the cluster. Ensure that other device directories or shared storage in the cluster can continue to serve the data when these directories have been removed.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If you are using hard mounts rather than the automounter, unmount the Sun NAS directories:

    1. On each node in the cluster, unmount the directories you are removing.


      # umount /mount-point
      
    2. On each node in the cluster, remove the entries in the /etc/vfstab file for the directories you are removing.

  2. From any cluster node, remove the directories.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice remove-dir -d /export/dir1 myfiler
      
      -d /export/dir1

      Enter the directory or directories that you are removing.

      myfiler

      Enter the name of the Sun NAS device containing the directories.

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you are using Sun Cluster 3.1, use the following command:


      # scnasdir -r -h myfiler -d /vol/DB1 -d /vol/DB2 
      
      -r

      Remove the directory or directories from cluster configuration.

      -h myfiler

      Enter the name of the Sun NAS device whose directories you are removing.

      -d

      Enter the directory to remove. Use this option once for each directory you are removing.

      To remove all of this device's directories, specify all for the -d option:


      # scnasdir -r -h myfiler -d all 
      
  3. Confirm that the directories have been removed.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnasdevice show -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scnas -p
      
See Also

To remove the device, see How to Remove a Sun NAS Device From a Cluster.