This chapter contains procedures for installing and maintaining Network Appliance network-attached storage (NAS) devices in a Sun™ Cluster environment. Before you perform any of the procedures in this chapter, read the entire procedure. If you are not reading an online version of this document, have the books listed in Related Books available.
This chapter contains the following procedures:
How to Prepare the Cluster for Network Appliance NAS Device Maintenance
How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance
How to Remove Network Appliance NAS Directories From a Cluster
For conceptual information about multihost storage devices, see the Sun Cluster Concepts Guide for Solaris OS.
This section includes only restrictions and requirements that have a direct impact on the procedures in this chapter. For general support information, contact your Sun service provider.
When you use a Network Appliance NAS device, the following are always required:
The Network Appliance NAS device requires the SUNWzlib package to run on Sun Cluster. If you installed the End User Solaris Software Group on your cluster, you must add the SUNWzlib package using the pkgadd command.
# pkgadd SUNWzlib
You must configure the NAS device to allow HTTP administrative access.
You must have the administrator login and password. You need these items when you are configuring the device in the cluster.
When exporting NAS directories for use with the cluster, you must use the form rw=nodename1:nodename2 to specify access to the directories. You make these entries in the exports file on the NAS device.
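For example, a hypothetical entry in the NAS device's exports file that grants read-write access to two cluster nodes might resemble the following line; the volume path and node names are placeholders, not values from this document:

```
/vol/DB1 -rw=phys-schost-1:phys-schost-2
```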
When setting up your NAS device:
Create a volume on each Network Appliance NAS device for storing the following Oracle database files:
Data files
Control files
Online redo log files
Archived redo log files
Create a quota tree (qtree) for each directory in the following list:
The directory that contains Oracle data files for the cluster
The Oracle home directory that is to be mounted on each node
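The steps above can be sketched with the filer's qtree command; the volume and qtree names below are placeholders chosen for illustration:

```
netapp> qtree create /vol/oradata/dbfiles
netapp> qtree create /vol/oradata/orahome
```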
On each Network Appliance NAS device, add an entry to the /etc/exports file for the root of the volume that you created for storing Oracle database files.
Ensure that the volume is exported without the nosuid option.
When adding the NAS directories to the cluster, ensure that the following mount options are set:
forcedirectio
noac
proto=tcp
When you use a NAS device as a quorum device, the following are required:
You must install the iSCSI license from your NAS device vendor.
You must configure an iSCSI LUN on the NAS device for use as the quorum device.
Cluster physical node names must be in the same LAN as the NAS device.
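As an illustrative sketch only, creating and mapping an iSCSI LUN on a Network Appliance filer might proceed along the following lines; the LUN size, paths, initiator-group name, and iSCSI node name are placeholders, and the exact commands depend on your filer's software version:

```
netapp> lun create -s 50m -t solaris /vol/qvol/qlun0
netapp> igroup create -i -t solaris cl-igroup iqn.1986-03.com.sun:01:phys-schost-1
netapp> lun map /vol/qvol/qlun0 cl-igroup
```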
When booting the cluster, you must always boot the NAS device before you boot the cluster nodes.
If you boot devices in the wrong order, your nodes cannot find the quorum device. If a node fails in this situation, your cluster might be unable to remain in service. If your cluster experiences these failures, you must either reboot the entire cluster or remove the NAS quorum device and add it again.
A cluster can use a NAS device for only a single quorum device.
You can configure other shared storage if you need additional quorum devices. Additional clusters using the same NAS device can use separate LUNs on that device as their quorum devices.
It is strongly recommended that you use a NetApp clustered filer. Clustered filers provide high availability with respect to the filer data and do not constitute a single point of failure in the cluster.
It is strongly recommended that you use the network time protocol (NTP) to synchronize time on the cluster nodes and the NAS device. Refer to your Network Appliance documentation for instructions about how to configure NTP on the NAS device. Select at least one NTP server for the NAS device that also serves the cluster nodes.
When configuring a NAS device as a quorum device, you can add the quorum device only when all cluster nodes are operational and communicating with the NAS device.
This procedure relies on the following assumptions:
Your cluster nodes have the operating system and Sun Cluster software installed.
You have the HTTP administrator login and password for the NAS device.
Set up the NAS device.
You can set up the device at any point in your cluster installation. Follow the instructions in your device's documentation. See Related Third-Party Web Site References for a list of related device documentation.
When setting up your NAS device, follow the standards that are described in Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices.
Install the NAS-support software package NTAPclnas on each node in the cluster.
Perform this step after you have installed the Solaris OS and the Sun Cluster software.
If this is the first NAS device in your cluster, or if you need to upgrade the NAS-support software package, perform this step. See Related Third-Party Web Site References for instructions about downloading and installing this software.
On each cluster node, add the NAS device name to the /etc/inet/hosts file.
Add a hostname-to-address mapping for the device in the /etc/inet/hosts file on all cluster nodes. For example:
192.168.11.123 netapp-123
On each cluster node, add the device netmasks to the /etc/inet/netmasks file.
Add an entry to the /etc/inet/netmasks file for the subnet the filer is on. For example:
192.168.11.0 255.255.255.0
Verify that, in the /etc/nsswitch.conf file on all cluster nodes, the hosts and netmasks entries list files before nis and dns. If they do not, edit the corresponding lines in /etc/nsswitch.conf to move files before nis and dns.
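For example, correctly ordered hosts and netmasks entries in /etc/nsswitch.conf resemble the following; the name services listed after files depend on your site configuration:

```
hosts:      files nis dns
netmasks:   files nis
```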
Add the NAS device to the cluster.
From any cluster node, add the device by using the scnas command.
# scnas -a -h myfiler -t netapp -o userid=root
-a
Add the device to cluster configuration.

-h myfiler
Is the name of the NAS device you are adding.

-o userid=root
Is the HTTP administrator login for the NAS device.
At the prompt, type the HTTP administrator password.
Confirm that the device has been added to the cluster.
# scnas -p
For more information about the scnas command, see the scnas(1M) man page.
Add the NAS directories to the cluster.
Follow the directions in How to Add Network Appliance NAS Directories to a Cluster.
(Optional) Configure a LUN on the NAS device as a quorum device.
See How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS for instructions for configuring a NAS quorum device.
This section contains procedures about maintaining NAS devices that are attached to a cluster. If a device's maintenance procedure might jeopardize the device's availability to the cluster, you must always perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before performing the maintenance procedure. After performing the maintenance procedure, perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance to return the cluster to its original configuration.
The following Network Appliance clustered-filer procedures can be performed without affecting the filer's availability.
Monitoring the status of the cluster as a whole
Viewing information about the cluster
Enabling and disabling takeover on a cluster to perform software upgrades or other maintenance
Halting a filer in a cluster without causing a takeover
Performing a takeover on the partner filer
Performing license operations on the cluster feature
Enabling and disabling the negotiated failover feature on a cluster
When performing any maintenance procedure other than those listed, perform the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance before the maintenance procedure. Perform the steps in How to Restore Cluster Configuration After Network Appliance NAS Device Maintenance after performing the maintenance procedure.
If you fail to prepare the cluster, you can experience loss of cluster availability. If the cluster loses access to the NAS device's directories, your cluster applications will experience I/O errors, might not be able to fail over correctly, and might fail. If your cluster experiences this kind of failure, you must reboot the entire cluster (booting the NAS device before the cluster nodes). If your cluster loses access to a NAS quorum device, and a node then fails, the entire cluster can become unavailable. In this case, you must either reboot the entire cluster (booting the NAS device before the cluster nodes) or remove the quorum device and configure it again.
Follow the instructions in this procedure whenever the NAS device maintenance you are performing might affect the device's availability to the cluster nodes.
Stop I/O to the NAS device.
On each cluster node, unmount the NAS device directories.
Determine whether a LUN on this NAS device is a quorum device.
# scstat -q
If no, you are finished with this procedure.
If a LUN is a quorum device, perform the following steps:
If your cluster uses other shared storage devices, select and configure another quorum device.
Remove this quorum device.
See Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for instructions about adding and removing quorum devices.
If your cluster requires a quorum device (for example, a two-node cluster) and you are maintaining the only shared storage device in the cluster, your cluster is in a vulnerable state throughout the maintenance procedure. Loss of a single node during the procedure causes the other node to panic and your entire cluster becomes unavailable. Limit the amount of time for performing such procedures. To protect your cluster against such vulnerability, add a shared storage device to the cluster.
Follow the instructions in this procedure after performing any NAS device maintenance that might affect the device's availability to the cluster nodes.
Mount the NAS directories.
Determine whether you want an iSCSI LUN on this NAS device to be a quorum device.
If no, continue to Step 3.
If yes, configure the LUN as a quorum device, following the steps in How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS.
Remove any extraneous quorum device that you configured in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
Restore I/O to the NAS device.
This procedure relies on the following assumptions:
Your cluster is operational.
You have prepared the cluster by performing the steps in How to Prepare the Cluster for Network Appliance NAS Device Maintenance.
You have removed any device directories from the cluster by performing the steps in How to Remove Network Appliance NAS Directories From a Cluster.
When you remove the device from cluster configuration, the data on the device is not available to the cluster. Ensure that other shared storage in the cluster can continue to serve the data when the NAS device is removed.
From any cluster node, remove the device by using the scnas command.
# scnas -r -h myfiler
-r
Remove the device from cluster configuration.

-h myfiler
Is the name of the NAS device you are removing.
For more information about the scnas command, see the scnas(1M) man page.
Confirm that the device has been removed from the cluster.
# scnas -p
The procedure relies on the following assumptions:
Your cluster is operational.
The NAS device is properly configured and the directories the cluster will use have been exported.
See Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices for the details about required device configuration.
You have added the device to the cluster by performing the steps in How to Install a Network Appliance NAS Device in a Cluster.
From any cluster node, add the directories by using the scnasdir command.
# scnasdir -a -h myfiler -d /vol/DB1 -d /vol/DB2
-a
Add the directory or directories to cluster configuration.

-h myfiler
Is the name of the NAS device whose directories you are adding.

-d
Is the directory to add. Use this option once for each directory you are adding. This value must match the name of one of the directories exported by the NAS device.
For more information about the scnasdir command, see the scnasdir(1M) man page.
Confirm that the directories have been added.
# scnasdir -p
If you do not use the automounter, mount the directories by performing the following steps:
On each node in the cluster, create a mount-point directory for each NAS directory that you added.
# mkdir -p /path-to-mountpoint

/path-to-mountpoint
Is the name of the directory on which to mount the NAS directory.
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
If you are using your NAS device for Oracle Real Application Clusters database files, set the following mount options:
forcedirectio
noac
proto=tcp
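As a sketch, an /etc/vfstab entry carrying these options might look like the following line; the filer name, exported volume, and mount point are placeholders. The fields are device to mount, device to fsck, mount point, file-system type, fsck pass, mount-at-boot, and mount options:

```
netapp-123:/vol/DB1  -  /oradata  nfs  -  yes  forcedirectio,noac,proto=tcp
```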
When mounting NAS directories, select the mount options appropriate to your cluster applications. Mount the directories on each node that will access the directories. Sun Cluster places no additional restrictions or requirements on the options that you use.
This procedure assumes that your cluster is operational.
When you remove the device directories, the data on those directories is not available to the cluster. Ensure that other device directories or shared storage in the cluster can continue to serve the data when these directories have been removed.
If you are using hard mounts rather than the automounter, unmount the NAS directories:
From any cluster node, remove the directories by using the scnasdir command.
# scnasdir -r -h myfiler -d /vol/DB1 -d /vol/DB2
-r
Remove the directory or directories from cluster configuration.

-h myfiler
Is the name of the NAS device whose directories you are removing.

-d
Is the directory to remove. Use this option once for each directory you are removing.
To remove all of this device's directories, specify all for the -d option:
# scnasdir -r -h myfiler -d all
For more information about the scnasdir command, see the scnasdir(1M) man page.
Confirm that the directories have been removed.
# scnasdir -p
To remove the device, see How to Remove a Network Appliance NAS Device From a Cluster.