Oracle Solaris Cluster With Network-Attached Storage Device Manual (Oracle Solaris Cluster 4.0)

Maintaining a Sun ZFS Storage Appliance NAS Device in an Oracle Solaris Cluster Environment

This section contains procedures for maintaining Sun ZFS Storage Appliance NAS devices that are attached to a cluster. If a maintenance procedure might jeopardize the device's availability to the cluster, you must always perform the steps in How to Prepare the Cluster for Sun ZFS Storage Appliance NAS Device Maintenance before performing the maintenance procedure. After performing the maintenance procedure, perform the steps in How to Restore Cluster Configuration After Sun ZFS Storage Appliance NAS Device Maintenance to return the cluster to its original configuration.

How to Prepare the Cluster for Sun ZFS Storage Appliance NAS Device Maintenance

Follow the instructions in this procedure whenever the Sun ZFS Storage Appliance NAS device maintenance you are performing might affect the device's availability to the cluster nodes.


Note - If your cluster requires a quorum device (for example, a two-node cluster) and you are maintaining the only shared storage device in the cluster, your cluster is in a vulnerable state throughout the maintenance procedure. Loss of a single node during the procedure causes the other node to panic and your entire cluster becomes unavailable. Limit the amount of time for performing such procedures. To protect your cluster against such vulnerability, add a shared storage device to the cluster.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Stop I/O to the Sun ZFS Storage Appliance NAS device.

    If you have data services using NFS file systems from the Sun ZFS Storage Appliance, bring the data services offline and disable the resources for the applications using those file systems. On each node, ensure that no existing processes are still using any of the NFS file systems from the device.
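
    For example, if a hypothetical resource group app-rg contains the application resources that use these file systems, you might bring it offline and then verify that no processes still reference the mount point (the resource group, resource, and mount-point names here are placeholders):

      # clresourcegroup offline app-rg
      # clresource disable app-server-rs
      # fuser -c /global/nasdata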

  2. On each cluster node, unmount the NFS file systems from the Sun ZFS Storage Appliance NAS device.

    If a resource of type SUNW.ScalMountPoint manages the file system, disable that resource to unmount the file system.


    Note - For more information on disabling a resource, see How to Disable a Resource and Move Its Resource Group Into the UNMANAGED State in Oracle Solaris Cluster Data Services Planning and Administration Guide.

    If that resource is not configured, use the Oracle Solaris umount(1M) command. If the file system cannot be unmounted because it is still busy, check for applications or processes that are still using that file system, as explained in Step 1. You can also force the unmount by using the -f option with the umount command.
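
    For example, with a hypothetical SUNW.ScalMountPoint resource named nas-mp-rs that manages a file system mounted at /global/nasdata, the sequence might look like the following:

      # clresource disable nas-mp-rs
      # umount /global/nasdata

    If the file system is still busy, force the unmount:

      # umount -f /global/nasdata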


  3. Determine whether a LUN on this Sun ZFS Storage Appliance NAS device is a quorum device.
    # clquorum show
  4. If the LUNs on this NAS device are not quorum devices, you are finished with this procedure.
  5. If a LUN is a quorum device, perform the following steps:
    1. If your cluster uses other shared storage devices or a quorum server, select and configure another quorum device.
    2. Remove this quorum device.

      See Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide for instructions on adding and removing quorum devices.
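
      For example, if the appliance LUN is currently configured as quorum device d3 and another shared disk is available as d4, you might add the new quorum device and then remove the old one (the device names here are hypothetical):

        # clquorum add d4
        # clquorum remove d3
        # clquorum status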

How to Restore Cluster Configuration After Sun ZFS Storage Appliance NAS Device Maintenance

Follow the instructions in this procedure after performing any Sun ZFS Storage Appliance NAS device maintenance that might affect the device's availability to the cluster nodes.

  1. Mount the NFS file systems from the Sun ZFS Storage Appliance NAS device.

    If you have configured a resource of type SUNW.ScalMountPoint for the file system, enable the resource and bring its resource group online.
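
    For example, with a hypothetical SUNW.ScalMountPoint resource nas-mp-rs in resource group nas-mp-rg:

      # clresource enable nas-mp-rs
      # clresourcegroup online nas-mp-rg

    If no such resource is configured, mount the file system manually on each node by using its /etc/vfstab entry, for example:

      # mount /global/nasdata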

  2. Determine whether you want an iSCSI LUN on this Sun ZFS Storage Appliance NAS device to be a quorum device.

    If you do, configure the LUN as a quorum device by following the steps in How to Add a Sun ZFS Storage Appliance NAS Quorum Device in Oracle Solaris Cluster System Administration Guide.

    Remove any extraneous quorum device that you configured in How to Prepare the Cluster for Sun ZFS Storage Appliance NAS Device Maintenance.
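
    For example, assuming the appliance LUN is visible to the cluster as the hypothetical DID device d3 and that a temporary quorum device d4 was added during preparation, the final adjustment might look like the following (see the referenced procedure for the complete steps to configure the LUN as a quorum device):

      # clquorum add d3
      # clquorum remove d4
      # clquorum status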

  3. Resume I/O to the NFS file systems from the Sun ZFS Storage Appliance NAS device by bringing up the applications that use the file systems. If an application is managed by a data service, enable the corresponding resources and bring their resource group online.
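
    For example, with a hypothetical application resource app-server-rs in resource group app-rg:

      # clresource enable app-server-rs
      # clresourcegroup online app-rg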

How to Remove a Sun ZFS Storage Appliance NAS Device From a Cluster

Before You Begin

This procedure relies on the following assumptions:


Note - When you remove the device from cluster configuration, the data on the device is not available to the cluster. Ensure that other shared storage in the cluster can continue to serve the data when the Sun ZFS Storage Appliance NAS device is removed. When the device is removed, change the following items in the cluster configuration:


This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Remove the device.
    • Perform this command from any cluster node:
      # clnasdevice remove myfiler
      myfiler

      Enter the name of the Sun ZFS Storage Appliance NAS device that you are removing.

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you want to remove a NAS device from a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice remove -Z zcname myfiler
      zcname

      Enter the name of the zone cluster where the Sun ZFS Storage Appliance NAS device is being removed.

  2. Confirm that the device has been removed from the cluster.
    • Perform this command from any cluster node:
      # clnasdevice list
    • If you want to check the NAS device for a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice list -Z zcname

    Note - You can also perform zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


How to Add Sun ZFS Storage Appliance Directories and Projects to a Cluster

Before You Begin

The procedure relies on the following assumptions:

An NFS file system or directory from the Sun ZFS Storage Appliance is already created in a project, which is itself in one of the storage pools of the device. For a directory (that is, an NFS file system) to be used by the cluster, you must perform the configuration at the project level, as described below.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Use the Sun ZFS Storage Appliance GUI to identify the project associated with the NFS file systems for use by the cluster.

    After you have identified the appropriate project, click Edit for that project.

  2. If read/write access to the project has not been configured, set up read/write access to the project for the cluster nodes.
    1. Access the NFS properties for the project.

      In the Sun ZFS Storage Appliance GUI, select the Protocols tab in the Edit Project page.

    2. Set the Share Mode for the project to None or Read only, depending on the access rights that you want nonclustered systems to have. You can set the Share Mode to Read/Write if the project must be world-writable, but this setting is not recommended.
    3. Add a read/write NFS Exception for each cluster node by performing the following steps.
      • Under NFS Exceptions, click +.
      • Select Network as the Type.
      • Enter the public IP address that the cluster node will use to access the appliance as the Entity. Use a CIDR mask of /32. For example, 192.168.254.254/32.
      • Select Read/Write as the Access Mode.
      • If desired, select Root Access. Root Access is required when configuring applications, such as Oracle RAC or HA Oracle.
      • Add exceptions for all cluster nodes.
      • Click Apply after the exceptions have been added for all IP addresses.
  3. Ensure that the directory being added is set to inherit its NFS properties from its parent project.
    1. Navigate to the Shares tab in the Sun ZFS Storage Appliance GUI.
    2. Click Edit Entry to the right of the Share that will have fencing enabled.
    3. Navigate to the Protocols tab for that share, and ensure that the Inherit from project property is set in the NFS section.

    [Figure: The Protocols tab for the share that will have fencing enabled, with the Inherit from project property set in the NFS section.]

    If you are adding multiple directories within the same project, verify that each directory that needs to be protected by cluster fencing has the Inherit from project property set.

  4. If the project has not already been configured with the cluster, add the project to the cluster configuration.

    Use the clnasdevice show -v command to determine whether the project has already been configured with the cluster.

    # clnasdevice show -v
    
    === NAS Devices ===
    Nas Device:                  device1.us.example.com
     Type:                       sun_uss
     userid:                     osc_agent
     nodeIPs{node1}                  10.111.11.111
     nodeIPs{node2}                  10.111.11.112
     nodeIPs{node3}                  10.111.11.113
     nodeIPs{node4}                  10.111.11.114
     Project:                    pool-0/local/projecta
     Project:                    pool-0/local/projectb
    • Perform this command from any cluster node:
      # clnasdevice add-dir -d project1,project2 myfiler
      -d project1,project2

      Enter the project or projects that you are adding.

      Specify the full path name of the project, including the pool. For example, pool-0/local/projecta.

      myfiler

      Enter the name of the NAS device containing the projects.

      For example:

      # clnasdevice add-dir -d pool-0/local/projecta device1.us.example.com
      # clnasdevice add-dir -d pool-0/local/projectb device1.us.example.com

      If you do not know the full project names, you can list the projects on the device that are not yet configured in the cluster by using the clnasdevice find-dir command. For example:

      # clnasdevice find-dir -v
      === NAS Devices ===
      
      Nas Device:                      device1.us.example.com
        Type:                          sun_uss
        Unconfigured Project:            pool-0/local/projecta
          File System:                       /export/projecta/filesystem-1
          File System:                       /export/projecta/filesystem-2
        Unconfigured Project:            pool-0/local/projectb
          File System:                       /export/projectb/filesystem-1

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you want to add the project from a Sun ZFS Storage Appliance to a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice add-dir -d project1,project2 -Z zcname myfiler
      zcname

      Enter the name of the zone cluster where the NAS projects are being added.

  5. Confirm that the directory and project have been configured.
    • Perform this command from any cluster node:
      # clnasdevice show -v -d all

      For example:

      # clnasdevice show -v -d all
      
      === NAS Devices ===
      Nas Device:                  device1.us.example.com
       Type:                       sun_uss
       nodeIPs{node1}                  10.111.11.111
       nodeIPs{node2}                  10.111.11.112
       nodeIPs{node3}                  10.111.11.113
       nodeIPs{node4}                  10.111.11.114
       userid:                     osc_agent
       Project:                    pool-0/local/projecta
        File System:                   /export/projecta/filesystem-1
        File System:                   /export/projecta/filesystem-2
       Project:                    pool-0/local/projectb
        File System:                   /export/projectb/filesystem-1
    • If you want to check the projects for a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice show -v -Z zcname

      Note - You can also perform zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


    After you confirm that a project name is associated with the desired NFS file system, use that project name in the configuration command.

  6. If you do not use the automounter, mount the directories by performing the following steps:
    1. On each node in the cluster, create a mount-point directory for each Sun ZFS Storage Appliance NAS project that you added.
      # mkdir -p /path-to-mountpoint
      path-to-mountpoint

      Name of the directory on which to mount the project.

    2. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

      If you are using your Sun ZFS Storage Appliance NAS device for Oracle RAC or HA Oracle, consult your Oracle database guide or log into My Oracle Support for a current list of supported files and mount options. After you log into My Oracle Support, click the Knowledge tab and search for Bulletin 359515.1.

      When mounting Sun ZFS Storage Appliance NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Oracle Solaris Cluster places no additional restrictions or requirements on the options that you use.
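
      For example, a hypothetical /etc/vfstab entry for one of the file systems in projecta might look like the following (the mount point and the rw,hard options are placeholders; choose the options that suit your applications):

      device1.us.example.com:/export/projecta/filesystem-1  -  /global/nasdata  nfs  -  yes  rw,hard

      You can then mount and verify the file system on each node:

      # mount /global/nasdata
      # df -k /global/nasdata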

  7. To enable file system monitoring, configure a resource of type SUNW.ScalMountPoint for the file systems.

    For more information, see Configuring Failover and Scalable Data Services on Shared File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
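
    The exact configuration depends on your resource groups and topology. The following minimal sketch registers the resource type and creates one SUNW.ScalMountPoint resource; the resource group name, resource name, mount point, and file system path are hypothetical:

      # clresourcetype register SUNW.ScalMountPoint
      # clresourcegroup create -p Maximum_primaries=4 -p Desired_primaries=4 scal-mp-rg
      # clresource create -g scal-mp-rg -t SUNW.ScalMountPoint \
        -p MountPointDir=/global/nasdata \
        -p FileSystemType=nas \
        -p TargetFileSystem=device1.us.example.com:/export/projecta/filesystem-1 \
        nas-mp-rs
      # clresourcegroup online -eM scal-mp-rg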

How to Remove Sun ZFS Storage Appliance Directories and Projects From a Cluster

Before You Begin

This procedure relies on the following assumptions:


Note - When you remove the directories, the data on those directories is not available to the cluster. Ensure that other device projects or shared storage in the cluster can continue to serve the data when these directories are removed. When the directory is removed, change the following items in the cluster configuration:


This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If you are using hard mounts or the automounter, unconfigure the NFS file system.
    1. On each node in the cluster, unmount the file system you are removing.
      # umount /mount-point
    2. On each node in the cluster, remove the entries in the /etc/vfstab file for the projects you are removing.

      Skip this step if you are using the automounter.
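
      For example, on each node you might confirm that no /etc/vfstab entry for the appliance remains (the device host name shown is a placeholder):

      # grep myfiler /etc/vfstab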

  2. (Optional) Perform the remaining steps in this procedure only if you want to remove the project that contains this directory from the cluster configuration. Before you remove the project, ensure that no directories within the project are still in use by the cluster. Then remove the project.
    • Perform this command from any cluster node:
      # clnasdevice remove-dir -d project1 myfiler
      -d project1

      Enter the project or projects that you are removing.

      myfiler

      Enter the name of the Sun ZFS Storage Appliance NAS device containing the projects.

      To remove all of this device's projects, specify all for the -d option:

      # clnasdevice remove-dir -d all myfiler

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

    • If you want to remove a Sun ZFS Storage Appliance project from a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice remove-dir -d project1 -Z zcname myfiler
      zcname

      Enter the name of the zone cluster where the Sun ZFS Storage Appliance NAS projects are being removed.

      To remove all of this device's projects, specify all for the -d option:

      # clnasdevice remove-dir -d all -Z zcname myfiler

      For more information about the clnasdevice command, see the clnasdevice(1CL) man page.

  3. Confirm that the projects have been removed.
    • Perform this command from any cluster node:
      # clnasdevice show -v
    • If you want to check the NAS projects for a zone cluster but you need to issue the command from the global zone, use the clnasdevice command with the -Z option:
      # clnasdevice show -v -Z zcname

    Note - You can also perform zone cluster-related commands inside the zone cluster by omitting the -Z option. For more information about the clnasdevice command, see the clnasdevice(1CL) man page.


See Also

To remove the device, see How to Remove a Sun ZFS Storage Appliance NAS Device From a Cluster.