This chapter contains procedures for installing and maintaining Network Appliance network-attached storage (NAS) devices in a Sun Cluster environment. Before you perform any of the procedures in this chapter, read the entire procedure. If you are not reading an online version of this document, have the books listed in Related Books available.
This chapter contains the following procedures.
How to Prepare the Cluster for Network Appliance NAS Device Maintenance
How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance
How to Remove Network Appliance NAS Directories From a Cluster
For conceptual information about multihost storage devices, see the Sun Cluster Concepts Guide for Solaris OS.
This section includes only restrictions and requirements that have a direct impact on the procedures in this chapter. For general support information, contact your Sun service provider.
This section describes the following requirements.
When you configure a Network Appliance NAS device, you must meet the following requirements.
Allow HTTP administrative access.
Sun Cluster uses the HTTP administrative access to support fencing. Sun Cluster automatically removes file system write permission for a node that has left the cluster, and grants file system write permission for a node that has just joined the cluster. These actions ensure that a node that departed the cluster can no longer modify data. You must have the administrator login and password. You need these items when you are configuring the device in the cluster.
Cluster nodes must be configured with Solaris Internet Protocol Multipathing (IPMP).
See IP Network Multipathing Administration Guide for details on configuring IPMP.
Ensure that the SUNWzlib package is available on your cluster.
The Network Appliance NAS device requires the SUNWzlib package to run on Sun Cluster. If you installed the End User Solaris Software Group on your cluster, you must add the SUNWzlib package using the pkgadd command.
# pkgadd SUNWzlib
When exporting Network Appliance NAS directories for use with the cluster, you must use the form rw=nodename1:nodename2 to specify access to the directories. You make these entries in the exports file on the NAS device.
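For example, an entry in the exports file on the NAS device might look like the following, where the volume name and node names are illustrative:

/vol/DB1 -rw=phys-schost-1:phys-schost-2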
When you configure your Network Appliance NAS device for use with Oracle Real Application Clusters (RAC), you must meet the following requirements.
You must configure the Network Appliance NAS device with fencing support in order to guarantee data integrity.
You must create a volume on each Network Appliance NAS device for storing Oracle database files, namely:
Data files
Control files
Online redo log files
Archived redo log files
You must create a quota tree (qtree) for each directory in the following list:
The directory that contains Oracle data files for the cluster
The Oracle home directory that is to be mounted on each node
On each Network Appliance NAS device, you must add an entry to the /etc/exports file for the root of the volume that you created for storing Oracle database files.
You must ensure that the volume is exported without the nosuid option.
When adding the Network Appliance NAS directories to the cluster, ensure that the following mount options are set:
forcedirectio
noac
proto=tcp
You can optionally use the Network Appliance NAS device as a quorum device.
When you use a Network Appliance NAS device as a quorum device, you must meet the following requirements.
You must configure an iSCSI LUN on the Network Appliance NAS device for use as the quorum device.
When booting the cluster, you must always boot the Network Appliance NAS device before you boot the cluster nodes.
If you boot devices in the wrong order, your nodes cannot find the quorum device. If a node should fail in this situation, your cluster might be unable to remain in service. If the cluster fails because the Network Appliance NAS quorum device was not available, bring up the NAS device. After that action completes, boot the cluster.
It is strongly recommended that you use a Network Appliance clustered filer. Clustered filers provide high availability with respect to the filer data and do not constitute a single point of failure in the cluster.
It is strongly recommended that you use the network time protocol (NTP) to synchronize time on the cluster nodes and the NAS device. Refer to your Network Appliance documentation for instructions about how to configure NTP on the NAS device. Select at least one NTP server for the NAS device that also serves the cluster nodes.
When you configure a Network Appliance NAS device as a quorum device, you can add the quorum device only when all cluster nodes are operating and communicating with the Network Appliance NAS device.
There is no fencing support for NFS-exported file systems from a NAS device when used in a non-global zone, including nodes of a zone cluster. Fencing support of Network Appliance NAS devices is only provided in global zones.
This procedure relies on the following assumptions:
Your cluster nodes have the operating system and Sun Cluster software installed.
You have the HTTP administrator login and password for the Network Appliance NAS device.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC (role-based access control) authorization.
Set up the Network Appliance NAS device.
You can set up the device at any point in your cluster installation. Follow the instructions in your device's documentation. See Related Third-Party Web Site References for a list of related device documentation.
When setting up your Network Appliance NAS device, follow the standards that are described in Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices.
Install the NAS-support software package NTAPclnas on each node in the cluster.
Perform this step after you have installed the Solaris OS and the Sun Cluster software.
If this is the first Network Appliance NAS device in your cluster, or if you need to upgrade the NAS-support software package, perform this step. See Related Third-Party Web Site References for instructions about downloading and installing this software.
On each cluster node, add the Network Appliance NAS device name to the /etc/inet/hosts file.
Add a hostname-to-address mapping for the device in the /etc/inet/hosts file on all cluster nodes. For example:
192.168.11.123 netapp-123
On each cluster node, add the device netmasks to the /etc/inet/netmasks file.
Add an entry to the /etc/inet/netmasks file for the subnet the filer is on. For example:
192.168.11.0 255.255.255.0
Verify that, in the hosts and netmasks entries of the /etc/nsswitch.conf file on all cluster nodes, files appears before nis and dns. If it does not, edit the corresponding line in /etc/nsswitch.conf to move files before nis and dns.
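The check above can be sketched with a short script. This is only an illustration, not part of the product: it inspects a sample copy of the file (the path and contents below are assumptions for the example) and reports whether files precedes nis and dns on each relevant line.

```shell
# Create a sample nsswitch.conf fragment to inspect (illustrative content).
cat > /tmp/nsswitch.sample <<'EOF'
hosts:     files dns nis
netmasks:  files nis
EOF

# For the hosts and netmasks lines, report OK when "files" appears
# before any "nis" or "dns" source, and NEEDS EDIT otherwise.
awk '/^(hosts|netmasks):/ {
    fpos = npos = dpos = 0
    for (i = 2; i <= NF; i++) {
        if ($i == "files") fpos = i
        if ($i == "nis")   npos = i
        if ($i == "dns")   dpos = i
    }
    ok = (fpos > 0) && (npos == 0 || fpos < npos) && (dpos == 0 || fpos < dpos)
    print $1, (ok ? "OK" : "NEEDS EDIT")
}' /tmp/nsswitch.sample
```

Run the same awk program against /etc/nsswitch.conf on each node; any line reported as NEEDS EDIT must be corrected before you continue.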
Configure Sun Cluster fencing support for the Network Appliance NAS device. If you skip this step, Sun Cluster will not provide fencing support for the Network Appliance NAS device.
From any cluster node, add the device.
If you are using Sun Cluster 3.2, use the following command:
# clnasdevice add -t netapp -p userid=root myfiler
Please enter password
Enter netapp as the type of device you are adding.
Enter the HTTP administrator login for the NAS device.
Enter the name of the NAS device you are adding.
If you are using Sun Cluster 3.1, use the following command:
# scnas -a -h myfiler -t netapp -o userid=root
Please enter password
Add the device to cluster configuration.
Enter the name of the NAS device you are adding.
Enter the HTTP administrator login for the NAS device.
At the prompt, type the HTTP administrator password.
Confirm that the device has been added to the cluster.
If you are using Sun Cluster 3.2, use the following command:
# clnasdevice list
For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
If you are using Sun Cluster 3.1, use the following command:
# scnas -p
Add the Network Appliance NAS directories to the cluster when the NAS device has been configured to support fencing.
Follow the directions in How to Add Network Appliance NAS Directories to a Cluster.
(Optional) Configure a LUN on the NAS device as a quorum device.
See How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS for instructions for configuring a Network Appliance NAS quorum device.
This section contains procedures about maintaining Network Appliance NAS devices that are attached to a cluster. If a device's maintenance procedure might jeopardize the device's availability to the cluster, you must always perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before performing the maintenance procedure. After performing the maintenance procedure, perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance to return the cluster to its original configuration.
The following Network Appliance clustered-filer procedures can be performed without affecting the filer's availability.
Monitoring the status of the cluster as a whole
Viewing information about the cluster
Enabling and disabling takeover on a cluster to perform software upgrades or other maintenance
Halting a filer in a cluster without causing a takeover
Performing a takeover on the partner filer
Performing license operations on the cluster feature
Enabling and disabling the negotiated failover feature on a cluster
When performing any maintenance procedure other than those listed, perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before the maintenance procedure. Perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance after performing the maintenance procedure.
If you fail to prepare the cluster, you can experience loss of cluster availability. If the cluster loses access to the Network Appliance NAS device's directories, your cluster applications will experience I/O errors, might not be able to fail over correctly, and might fail. If your cluster experiences this kind of failure, you must reboot the entire cluster (booting the Network Appliance NAS device before the cluster nodes).
If your cluster loses access to a Network Appliance NAS quorum device, and then a node fails, the entire cluster can become unavailable. In this case, you must either reboot the entire cluster (booting the Network Appliance NAS device before the cluster nodes) or remove the quorum device and configure it again.
Follow the instructions in this procedure whenever the Network Appliance NAS device maintenance you are performing might affect the device's availability to the cluster nodes.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
Stop I/O to the NAS device.
On each cluster node, unmount the Network Appliance NAS device directories.
Determine whether a LUN on this Network Appliance NAS device is a quorum device.
If no LUNs on this Network Appliance NAS device are quorum devices, you are finished with this procedure.
If a LUN is a quorum device, perform the following steps:
If your cluster uses other shared storage devices, select and configure another quorum device.
Remove this quorum device.
See Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for instructions about adding and removing quorum devices.
If your cluster requires a quorum device (for example, a two-node cluster) and you are maintaining the only shared storage device in the cluster, your cluster is in a vulnerable state throughout the maintenance procedure. Loss of a single node during the procedure causes the other node to panic and your entire cluster becomes unavailable. Limit the amount of time for performing such procedures. To protect your cluster against such vulnerability, add a shared storage device to the cluster.
Follow the instructions in this procedure after performing any Network Appliance NAS device maintenance that might affect the device's availability to the cluster nodes.
Mount the Network Appliance NAS directories.
Determine whether you want an iSCSI LUN on this Network Appliance NAS device to be a quorum device.
If no, continue to Step 3.
If yes, configure the LUN as a quorum device, following the steps in How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS.
Remove any extraneous quorum device that you configured in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
Restore I/O to the Network Appliance NAS device.
This procedure relies on the following assumptions:
Your cluster is operating.
You have prepared the cluster by performing the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
You have removed any device directories from the cluster by performing the steps in How to Remove Network Appliance NAS Directories From a Cluster.
When you remove the device from cluster configuration, the data on the device is not available to the cluster. Ensure that other shared storage in the cluster can continue to serve the data when the Network Appliance NAS device is removed.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
From any cluster node, remove the device.
If you are using Sun Cluster 3.2, use the following command:
# clnasdevice remove myfiler
For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
If you are using Sun Cluster 3.1, use the following command:
# scnas -r -h myfiler
Remove the device from cluster configuration.
Enter the name of the NAS device you are removing.
Confirm that the device has been removed from the cluster.
The procedure relies on the following assumptions:
Your cluster is operating.
The Network Appliance NAS device is properly configured and the directories the cluster will use have been exported.
See Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices for the details about required device configuration.
You have added the device to the cluster by performing the steps in How to Install a Network Appliance NAS Device in a Cluster.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
From any cluster node, add the directories.
If you are using Sun Cluster 3.2, use the following command:
# clnasdevice add-dir -d /export/dir1,/export/dir2 myfiler
Enter the directory or directories that you are adding.
Enter the name of the NAS device containing the directories.
For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
If you are using Sun Cluster 3.1, use the following command:
# scnasdir -a -h myfiler -d /vol/DB1 -d /vol/DB2
Add the directory or directories to cluster configuration.
Enter the name of the NAS device whose directories you are adding.
Enter the directory to add. Use this option once for each directory you are adding. This value must match the name of one of the directories exported by the NAS device.
Confirm that the directories have been added.
If you do not use the automounter, mount the directories by performing the following steps:
On each node in the cluster, create a mount-point directory for each Network Appliance NAS directory that you added.
# mkdir -p /path-to-mountpoint
The path of the mount-point directory on which you mount the NAS directory
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
If you are using your Network Appliance NAS device for Oracle Real Application Clusters database files, set the following mount options:
forcedirectio
noac
proto=tcp
When mounting Network Appliance NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Sun Cluster places no additional restrictions or requirements on the options that you use.
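For example, an /etc/vfstab entry for one of these directories might look like the following. The device name, exported volume, and mount point shown here are illustrative assumptions; the mount options are those required for Oracle Real Application Clusters.

netapp-123:/vol/DB1 - /oradata/DB1 nfs - yes forcedirectio,noac,proto=tcp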
This procedure assumes that your cluster is operating.
When you remove the device directories, the data on those directories is not available to the cluster. Ensure that other device directories or shared storage in the cluster can continue to serve the data when these directories have been removed.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
If you are using hard mounts rather than the automounter, unmount the NAS directories.
From any cluster node, remove the directories.
If you are using Sun Cluster 3.2, use the following command:
# clnasdevice remove-dir -d /export/dir1 myfiler
Enter the directory or directories that you are removing.
Enter the name of the NAS device containing the directories.
For more information about the clnasdevice command, see the clnasdevice(1CL) man page.
If you are using Sun Cluster 3.1, use the following command:
# scnasdir -r -h myfiler -d /vol/DB1 -d /vol/DB2
Remove the directory or directories from cluster configuration.
Enter the name of the NAS device whose directories you are removing.
Enter the directory to remove. Use this option once for each directory you are removing.
To remove all of this device's directories, specify all for the -d option:
# scnasdir -r -h myfiler -d all
Confirm that the directories have been removed.
To remove the device, see How to Remove a Network Appliance NAS Device From a Cluster.