This section contains procedures about maintaining NAS devices that are attached to a cluster. If a device's maintenance procedure might jeopardize the device's availability to the cluster, you must always perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before performing the maintenance procedure. After performing the maintenance procedure, perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance to return the cluster to its original configuration.
The following Network Appliance clustered-filer procedures can be performed without affecting the filer's availability.
Monitoring the status of the cluster as a whole
Viewing information about the cluster
Enabling and disabling takeover on a cluster to perform software upgrades or other maintenance
Halting a filer in a cluster without causing a takeover
Performing a takeover on the partner filer
Performing license operations on the cluster feature
Enabling and disabling the negotiated failover feature on a cluster
When performing any maintenance procedure other than those listed, perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before the maintenance procedure. Perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance after performing the maintenance procedure.
If you fail to prepare the cluster, you can experience loss of cluster availability. If the cluster loses access to the NAS device's directories, your cluster applications will experience I/O errors, might not be able to fail over correctly, and might fail. If your cluster experiences this kind of failure, you must reboot the entire cluster (booting the NAS device before the cluster nodes). If your cluster loses access to a NAS quorum device, and then a node fails, the entire cluster can become unavailable. In this case, you must either reboot the entire cluster (booting the NAS device before the cluster nodes) or remove the quorum device and configure it again.
Follow the instructions in this procedure whenever the NAS device maintenance you are performing might affect the device's availability to the cluster nodes.
Stop I/O to the NAS device.
On each cluster node, unmount the NAS device directories.
Determine whether a LUN on this NAS device is a quorum device.
# scstat -q
If no, you are finished with this procedure.
If a LUN is a quorum device, perform the following steps:
If your cluster uses other shared storage devices, select and configure another quorum device.
Remove this quorum device.
See Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for instructions about adding and removing quorum devices.
If your cluster requires a quorum device (for example, a two-node cluster) and you are maintaining the only shared storage device in the cluster, your cluster is in a vulnerable state throughout the maintenance procedure. Loss of a single node during the procedure causes the other node to panic and your entire cluster becomes unavailable. Limit the amount of time that you spend performing such procedures. To protect your cluster against such vulnerability, add a shared storage device to the cluster.
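The quorum swap in the preceding steps (configure an alternate quorum device, then remove the NAS-backed one) can be sketched as a dry-run script. The DID device name d20, the NAS quorum device name nas-quorum, and the exact scconf quorum syntax for NAS devices are assumptions for illustration only; verify them against the scconf(1M) man page and Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS before running the commands for real.

```shell
#!/bin/sh
# Dry-run sketch of the quorum swap performed before NAS maintenance.
# Nothing is executed against the cluster: plan() only records and
# prints each command. Remove the wrapper to run the commands for real.
PLAN=""
plan() {
    PLAN="$PLAN$*; "
    echo "would run: $*"
}

# 1. Add another shared storage device (hypothetical DID device d20)
#    as a quorum device so the cluster keeps its quorum votes.
plan scconf -a -q globaldev=d20

# 2. Remove the NAS-backed quorum device (hypothetical name) that the
#    maintenance would make unavailable.
plan scconf -r -q name=nas-quorum

# 3. Verify the resulting quorum configuration.
plan scstat -q
```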
Follow the instructions in this procedure after performing any NAS device maintenance that might affect the device's availability to the cluster nodes.
Mount the NAS directories.
Determine whether you want an iSCSI LUN on this NAS device to be a quorum device.
If no, continue to Step 3.
If yes, configure the LUN as a quorum device, following the steps in How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS.
Remove any extraneous quorum device that you configured in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
Restore I/O to the NAS device.
This procedure relies on the following assumptions:
Your cluster is operational.
You have prepared the cluster by performing the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
You have removed any device directories from the cluster by performing the steps in How to Remove Network Appliance NAS Directories From a Cluster.
When you remove the device from the cluster configuration, the data on the device is not available to the cluster. Ensure that other shared storage in the cluster can continue to serve the data after the NAS device is removed.
From any cluster node, remove the device by using the scnas command.
# scnas -r -h myfiler

-r
    Removes the device from the cluster configuration.
-h myfiler
    Specifies the name of the NAS device that you are removing.
For more information about the scnas command, see the scnas(1M) man page.
Confirm that the device has been removed from the cluster.
# scnas -p
The procedure relies on the following assumptions:
Your cluster is operational.
The NAS device is properly configured and the directories that the cluster will use have been exported.
See Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices for details about the required device configuration.
You have added the device to the cluster by performing the steps in How to Install a Network Appliance NAS Device in a Cluster.
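For illustration, the corresponding entries in the filer's /etc/exports file might look like the following fragment. The node names clusternode1 and clusternode2 are hypothetical, and the exact export options your configuration requires are described in Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices.

```
/vol/DB1  -rw=clusternode1:clusternode2,root=clusternode1:clusternode2
/vol/DB2  -rw=clusternode1:clusternode2,root=clusternode1:clusternode2
```

Granting root access to the cluster nodes, as sketched here, is commonly needed so that cluster services can manage files in the exported volumes; confirm against your filer's documentation.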
From any cluster node, add the directories by using the scnasdir command.
# scnasdir -a -h myfiler -d /vol/DB1 -d /vol/DB2

-a
    Adds the directory or directories to the cluster configuration.
-h myfiler
    Specifies the name of the NAS device whose directories you are adding.
-d
    Specifies a directory to add. Use this option once for each directory that you are adding. This value must match the name of one of the directories exported by the NAS device.
For more information about the scnasdir command, see the scnasdir(1M) man page.
Confirm that the directories have been added.
# scnasdir -p
If you do not use the automounter, mount the directories by performing the following steps:
On each node in the cluster, create a mount-point directory for each NAS directory that you added.
# mkdir -p /path-to-mountpoint

/path-to-mountpoint
    The path of the mount-point directory on which to mount the NAS directory.
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
If you are using your NAS device for Oracle Real Application Clusters database files, set the following mount options:
forcedirectio
noac
proto=tcp
When mounting NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Sun Cluster places no additional restrictions or requirements on the options that you use.
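As an example, a /etc/vfstab entry for one of the Oracle Real Application Clusters volumes above might look like the following fragment. The mount point /oradata/DB1 and the additional rw and hard options are illustrative assumptions; only forcedirectio, noac, and proto=tcp come from the guidance above.

```
#device            device     mount          FS    fsck  mount    mount
#to mount          to fsck    point          type  pass  at boot  options
myfiler:/vol/DB1   -          /oradata/DB1   nfs   -     yes      rw,hard,forcedirectio,noac,proto=tcp
```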
This procedure assumes that your cluster is operational.
When you remove the device directories, the data on those directories is not available to the cluster. Ensure that other device directories or shared storage in the cluster can continue to serve the data when these directories have been removed.
If you are using hard mounts rather than the automounter, unmount the NAS directories on each cluster node.
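If you created /etc/vfstab entries when you added the directories, you can also remove those entries at this point. The following sketch filters a filer's NFS entries out of a copy of the file; the filer name myfiler is taken from the examples in this section, and the script deliberately works on a temporary sample copy rather than editing /etc/vfstab in place.

```shell
#!/bin/sh
# Sketch: strip a filer's NFS entries from a copy of /etc/vfstab.
# Works on a sample file so it is safe to experiment with; point
# VFSTAB at /etc/vfstab (and keep a backup) to use it for real.
VFSTAB=/tmp/vfstab.sample
cat > "$VFSTAB" <<'EOF'
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /             ufs  1  no   -
myfiler:/vol/DB1   -                   /oradata/DB1  nfs  -  yes  forcedirectio,noac,proto=tcp
myfiler:/vol/DB2   -                   /oradata/DB2  nfs  -  yes  forcedirectio,noac,proto=tcp
EOF

# Keep every line that does not mount from myfiler.
grep -v '^myfiler:' "$VFSTAB" > "$VFSTAB.new"

echo "Remaining entries:"
cat "$VFSTAB.new"
```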
From any cluster node, remove the directories by using the scnasdir command.
# scnasdir -r -h myfiler -d /vol/DB1 -d /vol/DB2

-r
    Removes the directory or directories from the cluster configuration.
-h myfiler
    Specifies the name of the NAS device whose directories you are removing.
-d
    Specifies a directory to remove. Use this option once for each directory that you are removing.
To remove all of this device's directories, specify all for the -d option:
# scnasdir -r -h myfiler -d all
For more information about the scnasdir command, see the scnasdir(1M) man page.
Confirm that the directories have been removed.
# scnasdir -p
To remove the device, see How to Remove a Network Appliance NAS Device From a Cluster.