CHAPTER 8 |
Sun StorEdge QFS in a Sun Cluster Environment |
This chapter describes how the Sun StorEdge QFS software works in a Sun Cluster environment. It also provides configuration examples for a Sun StorEdge QFS shared file system in a Sun Cluster environment and for an unshared Sun StorEdge QFS file system in a Sun Cluster environment.
This chapter contains the following sections:
With version 4.2 of the Sun StorEdge QFS software, you can install a Sun StorEdge QFS file system in a Sun Cluster environment and can configure the file system for high availability. The configuration method you use varies, depending on whether your file system is shared or unshared.
This chapter assumes that you are an experienced user of both the Sun StorEdge QFS software and the Sun Cluster environment. It also assumes you have performed either or both of the following:
It is recommended that you read the following documentation before continuing with this chapter:
Note - All references in this document to "Oracle Real Application Clusters" apply also to "Oracle Parallel Server" unless otherwise specified. |
The following restrictions apply to the Sun StorEdge QFS software in a Sun Cluster environment:
The shared file system uses Sun Cluster Disk ID (DID) support to enable data access by the Sun Cluster data service for Oracle Real Application Clusters. The unshared file system uses global device volume support and volume manager-controlled volume support to enable data access by failover applications supported by Sun Cluster.
With DID support, each device that is under the control of the Sun Cluster system, whether it is multipathed or not, is assigned a unique disk ID. For every unique DID device, there is a corresponding global device. The Sun StorEdge QFS shared file system can be configured on redundant storage that consists only of DID devices (/dev/did/*), where DID devices are accessible only on nodes that have a direct connection to the device through a host bus adapter (HBA).
Configuring the Sun StorEdge QFS shared file system on DID devices and configuring the SUNW.qfs resource type for use with the file system makes the file system's shared metadata server highly available. The Sun Cluster data service for Oracle Real Application Clusters can then access data from within the file system. Additionally, the Sun StorEdge QFS Sun Cluster agent can then automatically relocate the metadata server for the file system as necessary.
A global device is Sun Cluster's mechanism for accessing an underlying DID device from any node within the Sun Cluster, assuming that the nodes hosting the DID device are available. Global devices and volume manager-controlled volumes can be made accessible from every node in the Sun Cluster. The unshared Sun StorEdge QFS file system can be configured on redundant storage that consists of either raw global devices (/dev/global/*) or volume manager-controlled volumes.
Configuring the unshared file system on these global devices or volume manager-controlled devices and configuring the HAStoragePlus resource type for use with the file system makes the file system highly available with the ability to fail over to other nodes.
This chapter provides configuration examples for the Sun StorEdge QFS shared file system on a Sun Cluster and for the unshared Sun StorEdge QFS file system on a Sun Cluster. All configuration examples are based on a platform consisting of the following:
All configurations in this chapter are also based on CODE EXAMPLE 8-1. In this code example, the scdidadm(1M) command displays the disk identifier (DID) devices, and the -L option lists the DID device paths, including those on all nodes in the Sun Cluster system.
CODE EXAMPLE 8-1 shows that DID devices d4 through d8 are accessible from both Sun Cluster systems (scnode-A and scnode-B). With the Sun StorEdge QFS file system sizing requirements and with knowledge of your intended application and configuration, you can decide on the most appropriate apportioning of devices to file systems. By using the Solaris format(1M) command, you can determine the sizing and partition layout of each DID device and resize the partitions on each DID device, if needed. Given the available DID devices, you can also configure multiple devices and their associated partitions to contain the file systems, according to your sizing requirements.
When you install a Sun StorEdge QFS shared file system on a Sun Cluster, you configure the file system's metadata server under the SUNW.qfs resource type. This makes the metadata server highly available and enables the Sun StorEdge QFS shared file system to be globally accessible on all configured nodes in the Sun Cluster.
A Sun StorEdge QFS shared file system is typically associated with a scalable application. The Sun StorEdge QFS shared file system is mounted on, and the scalable application is active on, one or more Sun Cluster nodes.
If a node in the Sun Cluster system fails, or if you switch over the resource group, the metadata server resource (Sun StorEdge QFS Sun Cluster agent) automatically relocates the file system's metadata server as necessary. This ensures that the other nodes' access to the shared file system is not affected.
When the Sun Cluster boots, the metadata server resource ensures that the file system is mounted on all nodes that are part of the resource group. However, the file system mount on those nodes is not monitored. Therefore, in certain failure cases, the file system might be unavailable on certain nodes, even if the metadata server resource is in the online state.
If you use Sun Cluster administrative commands to bring the metadata server resource group offline, the file system under the metadata server resource remains mounted on the nodes. To unmount the file system (with the exception of a node that is shut down), you must bring the metadata server resource group into the unmanaged state by using the appropriate Sun Cluster administrative command.
To remount the file system at a later time, you must bring the resource group into a managed state and then into an online state.
This section shows an example of the Sun StorEdge QFS shared file system installed on raw DID devices with the Sun Cluster data service for Oracle Real Application Clusters. For detailed information on how to use the Sun StorEdge QFS shared file system with the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
As shown in CODE EXAMPLE 8-1, DID devices d4 through d8 are highly available and are contained on controller-based storage. For you to configure a Sun StorEdge QFS shared file system on a Sun Cluster, the controller-based storage must support device redundancy by using RAID-1 or RAID-5.
For simplicity in this example, two file systems are created:
Additionally, device d4 is used for Sun StorEdge QFS metadata. This device has two 50 GB slices. The remaining devices, d5 through d8, are used for Sun StorEdge QFS file data.
This configuration involves five main steps, as detailed in the following subsections:
1. Preparing to create Sun StorEdge QFS file systems.
2. Creating the file systems and configuring the Sun Cluster nodes.
3. Validating the configuration.
4. Configuring the network name service.
5. Configuring the Sun Cluster data service for Oracle Real Application Clusters.
Steps 1 through 3 in this procedure must be performed from one node in the Sun Cluster system. In this example, the steps are performed from node scnode-A.
1. From one node in the Sun Cluster system, use the format(1M) utility to lay out partitions on /dev/did/dsk/d4.
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 50 GB partition. Partition 1 is configured to be the same size as partition 0.
2. Use the format(1M) utility to lay out partitions on /dev/did/dsk/d5.
3. Replicate the device d5 partitioning to devices d6 through d8.
This example shows the command for device d6.
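One common way to replicate a partition table on Solaris (the device paths shown are illustrative; adapt them to your configuration) is to pipe the output of prtvtoc(1M) into fmthard(1M):

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d6s2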
4. On all nodes that are potential hosts of the file systems, perform the following:
a. Configure the six partitions into two Sun StorEdge QFS shared file systems by adding two new configuration entries (qfs1 and qfs2) to the mcf file.
For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
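As a sketch only, the two mcf entries might look like the following, assuming slices s0 and s1 of device d4 hold the metadata and devices d5 through d8 hold the file data; the equipment ordinals and slice assignments are illustrative:

#
# Equipment          Eq  Eq   Family  Device  Additional
# Identifier         Ord Type Set     State   Parameters
#
qfs1                 10  ma   qfs1    -       shared
/dev/did/dsk/d4s0    11  mm   qfs1    -
/dev/did/dsk/d5s0    12  mr   qfs1    -
/dev/did/dsk/d6s0    13  mr   qfs1    -
qfs2                 20  ma   qfs2    -       shared
/dev/did/dsk/d4s1    21  mm   qfs2    -
/dev/did/dsk/d7s0    22  mr   qfs2    -
/dev/did/dsk/d8s0    23  mr   qfs2    -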
b. Edit the /etc/opt/SUNWsamfs/samfs.cmd file to add the mount options that are required for the Sun Cluster data service for Oracle Real Application Clusters.
fs = qfs2
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  nstreams = 1024
  rdlease = 600
For more information about the mount options that are required by the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
c. Validate that the configuration is correct.
Be sure to perform this validation after you have configured the mcf file and the samfs.cmd file on each node.
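One way to validate the configuration on each node is to run the sam-fsd(1M) command, which reads the mcf and samfs.cmd files and reports configuration errors:

# sam-fsd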
To Create the Sun StorEdge QFS Shared File System and Configure Sun Cluster Nodes |
Perform this procedure for each file system you are creating. This example describes how to create the qfs1 file system.
1. Obtain the Sun Cluster private interconnect names by using the following command.
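For example, the scconf(1M) command can list the private interconnect hostnames; the exact output format varies by Sun Cluster release:

# scconf -p | egrep "Cluster node name:|private hostname:"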
2. On all nodes that are potential hosts of the file system, perform the following:
a. Use the samd(1M) config command, which signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
b. Create the Sun StorEdge QFS shared hosts file for the file system (/etc/opt/SUNWsamfs/hosts.family-set-name), based on the Sun Cluster's private interconnect names that you obtained in Step 1.
3. Edit the unique Sun StorEdge QFS shared file system's host configuration file with the Sun Cluster interconnect names.
For Sun Cluster failover and fencing operations, the Sun StorEdge QFS shared file system must use the same interconnect names as the Sun Cluster system.
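A hypothetical /etc/opt/SUNWsamfs/hosts.qfs1 file for this two-node cluster might look like the following, where clusternode1-priv and clusternode2-priv are the private interconnect names and scnode-A (server priority 1) is designated the initial metadata server:

scnode-A   clusternode1-priv   1   -   server
scnode-B   clusternode2-priv   2   -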
4. From one node in the Sun Cluster, use the sammkfs(1M) -S command to create the Sun StorEdge QFS shared file system.
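For example:

# sammkfs -S qfs1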
5. On all nodes that are potential hosts of the file system, perform the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to set the mount point's owner to root and its group to other, and use the chmod(1M) command to give group and other users read and execute (755) access.
# mkdir /global/qfs1
# chmod 755 /global/qfs1
# chown root:other /global/qfs1
b. Add the Sun StorEdge QFS shared file system entry to the /etc/vfstab file.
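The qfs1 entry might look like the following; note the shared mount option and that mount-at-boot is set to no:

qfs1   -   /global/qfs1   samfs   -   no   shared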
Perform this procedure for each file system you create. This example describes how to validate the configuration for file system qfs1.
1. If you do not know which node is acting as the metadata server for the file system, use the samsharefs(1M) -R command.
The example shows that the metadata server for qfs1 is scnode-A.
2. Use the mount(1M) command to mount the file system first on the metadata server and then on each node in the Sun Cluster system.
It is very important that you mount the file system on the metadata server first.
# mount qfs1
# ls /global/qfs1
lost+found/
3. Validate voluntary failover by issuing the samsharefs(1M) -s command, which moves the Sun StorEdge QFS shared file system's metadata server between nodes.
# samsharefs -s scnode-B qfs1
# ls /global/qfs1
lost+found/
# samsharefs -s scnode-A qfs1
# ls /global/qfs1
lost+found/
4. Validate that the required Sun Cluster resource type is added to the resource configuration.
5. If you cannot find the Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the resource configuration.
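For example:

# scrgadm -a -t SUNW.qfs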
6. Register and configure the SUNW.qfs resource type.
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
-x QFSFileSystem=/global/qfs1,/global/qfs2
7. Use the scswitch(1M) -Z -g command to bring the resource group online.
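For example:

# scswitch -Z -g qfs-rg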
8. Ensure that the resource group is functional on all configured nodes.
# scswitch -z -g qfs-rg -h scnode-B
# scswitch -z -g qfs-rg -h scnode-A
To Configure the Sun Cluster Data Service for Oracle Real Application Clusters |
This section provides an example of how to configure the data service for Oracle Real Application Clusters for use with Sun StorEdge QFS shared file systems. For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
1. Install the data service as described in the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
2. Mount the Sun StorEdge QFS shared file systems.
3. Set the correct ownership and permissions on the file systems so that the Oracle database operations are successful.
# chown oracle:dba /global/qfs1 /global/qfs2
# chmod 755 /global/qfs1 /global/qfs2
4. As the oracle user, create the subdirectories that are required for the Oracle Real Application Clusters installation and database files.
$ id
uid=120(oracle) gid=520(dba)
$ mkdir /global/qfs1/oracle_install
$ mkdir /global/qfs2/oracle_db
The Oracle Real Application Clusters installation uses the /global/qfs1/oracle_install directory path as the value for the ORACLE_HOME environment variable that is used in Oracle operations. The Oracle Real Application Clusters database files' path is prefixed with the /global/qfs2/oracle_db directory path.
5. Install the Oracle Real Application Clusters software.
During the installation, provide the path for the installation as defined in Step 4 (/global/qfs1/oracle_install).
6. Create the Oracle Real Application Clusters database.
During database creation, specify that you want the database files located in the qfs2 shared file system.
7. If you are automating the startup and shutdown of Oracle Real Application Clusters database instances, ensure that the required dependencies for resource groups and resources are set.
For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
Note - If you plan to automate the startup and shutdown of Oracle Real Application Clusters database instances, you must use Sun Cluster 3.1 9/04 or a compatible version. |
When you install the unshared Sun StorEdge QFS file system on a Sun Cluster system, you configure the file system for high availability (HA) under the Sun Cluster HAStoragePlus resource type. An unshared Sun StorEdge QFS file system on a Sun Cluster is typically associated with one or more failover applications, such as HA-NFS, HA-ORACLE, and so on. Both the unshared Sun StorEdge QFS file system and the failover applications are active in a single resource group; the resource group is active on one Sun Cluster node at a time.
An unshared Sun StorEdge QFS file system is mounted on a single node at any given time. If the Sun Cluster fault monitor detects an error, or if you switch over the resource group, the unshared Sun StorEdge QFS file system and its associated HA applications fail over to another node, depending on how the resource group has been previously configured.
Any file system contained on a Sun Cluster global device group (/dev/global/*) can be used with the HAStoragePlus resource type. When a file system is configured with the HAStoragePlus resource type, it becomes part of a Sun Cluster resource group and the file system under Sun Cluster Resource Group Manager (RGM) control is mounted locally on the node where the resource group is active. When the RGM causes a resource group switchover or fails over to another configured Sun Cluster node, the unshared Sun StorEdge QFS file system is unmounted from the current node and remounted on the new node.
Each unshared Sun StorEdge QFS file system requires a minimum of two raw disk partitions or volume manager-controlled volumes (Solstice DiskSuite/Solaris Volume Manager or VERITAS Clustered Volume Manager), one for Sun StorEdge QFS metadata (inodes) and one for Sun StorEdge QFS file data. Configuring multiple partitions or volumes across multiple disks through multiple data paths increases unshared Sun StorEdge QFS file system performance. For information about sizing metadata and file data partitions, see Design Basics.
This section provides three examples of Sun Cluster configurations using the unshared Sun StorEdge QFS file system. In these examples, a file system is configured in combination with an HA-NFS file mount point on the following:
For simplicity in all of these configurations, ten percent of each file system is used for Sun StorEdge QFS metadata and the remaining space is used for Sun StorEdge QFS file data. For information about sizing and disk layout considerations, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on raw global devices. For this configuration, the raw global devices must be contained on controller-based storage. This controller-based storage must support device redundancy by using RAID-1 or RAID-5.
As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. The HAStoragePlus resource type requires the use of global devices, so each DID device (/dev/did/dsk/dx) is accessible as a global device by using the following syntax: /dev/global/dsk/dx.
The main steps in this example are as follows:
1. Prepare to create an unshared file system.
2. Create the file system and configure the Sun Cluster nodes.
3. Configure the network name service and the IPMP validation testing.
4. Configure HA-NFS and configure the file system for high availability.
To Prepare to Create an Unshared Sun StorEdge QFS File System |
1. Use the format(1M) utility to lay out the partitions on /dev/global/dsk/d4.
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20 GB partition. The remaining space is configured into partition 1.
2. Replicate the global device d4 partitioning to global devices d5 through d7.
This example shows the command for global device d5.
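As with DID devices, one way to replicate the partition table (device paths shown are illustrative) is to pipe the output of prtvtoc(1M) into fmthard(1M):

# prtvtoc /dev/global/rdsk/d4s2 | fmthard -s - /dev/global/rdsk/d5s2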
3. On all nodes that are potential hosts of the file system, perform the following:
a. Configure the eight partitions (four global devices, with two partitions each) into a Sun StorEdge QFS file system by adding a new file system entry to the mcf file.
For information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
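As a sketch only, a possible qfsnfs1 entry, assuming slice 0 of each global device holds metadata and slice 1 holds file data; the equipment ordinals are illustrative:

#
# Equipment           Eq   Eq   Family   Device  Additional
# Identifier          Ord  Type Set      State   Parameters
#
qfsnfs1               100  ma   qfsnfs1  -
/dev/global/dsk/d4s0  101  mm   qfsnfs1  -
/dev/global/dsk/d5s0  102  mm   qfsnfs1  -
/dev/global/dsk/d6s0  103  mm   qfsnfs1  -
/dev/global/dsk/d7s0  104  mm   qfsnfs1  -
/dev/global/dsk/d4s1  105  mr   qfsnfs1  -
/dev/global/dsk/d5s1  106  mr   qfsnfs1  -
/dev/global/dsk/d6s1  107  mr   qfsnfs1  -
/dev/global/dsk/d7s1  108  mr   qfsnfs1  -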
b. Validate that the configuration information you added to the mcf file is correct.
It is important to complete this step before you configure the Sun StorEdge QFS file system under the HAStoragePlus resource type.
Step 2: Create the Sun StorEdge QFS File System and Configure the Sun Cluster Nodes |
1. On all nodes that are potential hosts of the file system, use the samd(1M) config command, which signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
2. From one node in the Sun Cluster, use the sammkfs(1M) command to create the file system.
3. On all nodes that are potential hosts of the file system, perform the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to set the mount point's owner to root and its group to other, and use the chmod(1M) command to give group and other users read and execute (755) access.
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
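The entry might look like the following:

qfsnfs1   -   /global/qfsnfs1   samfs   -   no   sync_meta=1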
c. Validate the configuration by mounting and unmounting the file system.
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1
4. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration.
# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs" |
5. If you cannot find a required Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the configuration.
# scrgadm -a -t SUNW.HAStoragePlus |
To Configure the Network Name Service and the IPMP Validation Testing |
This section provides an example of how to configure the network name service and the IPMP Validation Testing for your Sun Cluster nodes. For more information, see the Sun Cluster Software Installation Guide for Solaris OS.
1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it consults the Sun Cluster (cluster) source and the local files before NIS when looking up node names.
Perform this step before you configure the NIS server.
2. Verify that the changes you made to the /etc/nsswitch.conf are correct.
# grep '^hosts:' /etc/nsswitch.conf
hosts: cluster files nis [NOTFOUND=return]
#
3. Set up IPMP validation testing by using available network adapters.
The adapters qfe2 and qfe3 are used as examples.
a. Statically configure the IPMP test address for each adapter.
b. Dynamically configure the IPMP adapters.
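A hypothetical configuration for these two steps, assuming test hostnames scnode-A-qfe2-test and scnode-A-qfe3-test have already been added to /etc/hosts and an IPMP group named ipmp0 (adapt all names to your site): the static configuration places the test address in the adapter's /etc/hostname file, and the equivalent dynamic configuration uses ifconfig(1M):

# cat > /etc/hostname.qfe2 << EOF
scnode-A-qfe2 group ipmp0 up \
addif scnode-A-qfe2-test deprecated -failover netmask + broadcast + up
EOF
# ifconfig qfe3 plumb scnode-A-qfe3-test group ipmp0 deprecated \
-failover netmask + broadcast + up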
To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability |
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorEdge QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
/global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group.
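One way to create a failover resource group for HA-NFS, with a PathPrefix directory for the NFS administrative files (the group name and path match those used elsewhere in this example):

# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs -h scnode-A,scnode-B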
3. Add the NFS logical host to the /etc/hosts table, using the address for your site.
# cat >> /etc/hosts << EOF
#
# IP Addresses for LogicalHostnames
#
192.168.2.10   lh-nfs1
EOF
4. Use the scrgadm(1M) -a -L -g command to add the logical host to the NFS resource group.
5. Use the scrgadm(1M) -c -g command to configure the HAStoragePlus resource type.
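A sketch of one way to add and configure the HAStoragePlus resource (the resource name qfsnfs1-res is assumed here; setting FilesystemCheckCommand to /bin/true bypasses the file system check, since fsck is not used with Sun StorEdge QFS):

# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/qfsnfs1 \
-x FilesystemCheckCommand=/bin/true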
6. Bring the resource group online.
7. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.
# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y \
Resource_dependencies=qfsnfs1-res
8. Bring the NFS resource online.
The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and highly available.
9. Before announcing the availability of the highly available NFS file system on the Sun StorEdge QFS file system, ensure that the resource group can be switched between all configured nodes without errors and can be taken online and offline.
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on volumes controlled by Solstice DiskSuite/Solaris Volume Manager software. With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5 volumes. Typically, Solaris Volume Manager is used only when the underlying controller-based storage is not redundant.
As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. Solaris Volume Manager requires that DID devices be used to populate the raw devices from which Solaris Volume Manager can configure volumes. Solaris Volume Manager creates globally accessible disk groups, which can then be used by the HAStoragePlus resource type for creating Sun StorEdge QFS file systems.
This example follows these steps:
1. Prepare the Solstice DiskSuite/Solaris Volume Manager software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Configure the network name service and the IPMP validation testing.
5. Configure HA-NFS and configure the file system for high availability.
To Prepare the Solstice DiskSuite/Solaris Volume Manager Software |
1. Determine whether a Solaris Volume Manager metadatabase (metadb) is already configured on each node that is a potential host of the Sun StorEdge QFS file system.
If the metadb(1M) command does not return a metadatabase configuration, then on each node, create three or more database replicas on one or more local disks. Each replica must be at least 16 MB in size. For more information about creating the metadatabase configuration, see the Sun Cluster Software Installation Guide for Solaris OS.
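For example, to create three state database replicas on a local disk slice (the slice name is site-specific):

# metadb -a -f -c3 c0t0d0s7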
2. Create an HA-NFS disk group to contain all Solaris Volume Manager volumes for this Sun StorEdge QFS file system.
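For example, to create the nfsdg disk set with both cluster nodes as hosts:

# metaset -s nfsdg -a -h scnode-A scnode-B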
3. Add DID devices d4 through d7 to the pool of raw devices from which Solaris Volume Manager can create volumes.
# metaset -s nfsdg -a /dev/did/dsk/d4 /dev/did/dsk/d5 \
/dev/did/dsk/d6 /dev/did/dsk/d7
1. Use the format(1M) utility to lay out partitions on /dev/global/dsk/d4.
CODE EXAMPLE 8-36 shows that partition or slice 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20 GB partition. The remaining space is configured into partition 1.
2. Replicate the partitioning of DID device d4 to DID devices d5 through d7.
This example shows the command for device d5.
3. Configure the eight partitions (four DID devices, two partitions each) into two RAID-1 (mirrored) Sun StorEdge QFS metadata volumes and two RAID-5 (parity-striped) Sun StorEdge QFS file data volumes.
Combine partition (slice) 0 of these four drives into two RAID-1 sets.
4. Combine partition 1 of these four drives into two RAID-5 sets.
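As a syntax sketch only (the volume names are hypothetical, and because a RAID-5 metadevice requires at least three columns, your slice-to-set mapping may differ), a mirror and a RAID-5 set could be created as follows:

# metainit nfsdg/d1 1 1 /dev/did/dsk/d4s0
# metainit nfsdg/d2 1 1 /dev/did/dsk/d5s0
# metainit nfsdg/d10 -m nfsdg/d1
# metattach nfsdg/d10 nfsdg/d2
# metainit nfsdg/d20 -r /dev/did/dsk/d4s1 /dev/did/dsk/d5s1 \
/dev/did/dsk/d6s1 /dev/did/dsk/d7s1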
5. On each node that is a potential host of the file system, add the Sun StorEdge QFS file system entry to the mcf file.
For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
6. Validate that the mcf configuration is correct on each node.
To Create the Sun StorEdge QFS File System and Configure Sun Cluster Nodes |
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
2. Enable Solaris Volume Manager mediation detection of disk groups, which assists the Sun Cluster system in the detection of drive errors.
# metaset -s nfsdg -a -m scnode-A
# metaset -s nfsdg -a -m scnode-B
3. On each node that is a potential host of the file system, ensure that the NFS disk group exists.
4. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the Sun StorEdge QFS file system.
5. On each node that is a potential host of the file system, perform the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to set the mount point's owner to root and its group to other, and use the chmod(1M) command to give group and other users read and execute (755) access.
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
c. Ensure that the nodes are configured correctly by mounting and unmounting the file system.
Perform this step one node at a time. In this example, the qfsnfs1 file system is being mounted and unmounted on one node.
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1
6. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration.
If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands.
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs
To Configure the Network Name Service and the IPMP Validation Testing |
This section provides an example of how to configure the network name service and IPMP validation testing for use with the Sun StorEdge QFS software. For more information, see the System Administration Guide: IP Services and the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it consults the Sun Cluster (cluster) source and the local files before NIS when looking up node names.
Perform this step before you configure the NIS server.
2. Verify that the changes you made to the /etc/nsswitch.conf are correct.
# grep '^hosts:' /etc/nsswitch.conf
hosts: cluster files nis [NOTFOUND=return]
#
3. Set up IPMP validation testing using available network adapters.
The adapters qfe2 and qfe3 are used in the examples.
a. Statically configure the IPMP test address for each adapter.
b. Dynamically configure the IPMP adapters.
c. Validate the configuration.
To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability |
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorEdge QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
/global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group.
3. Add a logical host to the NFS resource group.
4. Configure the HAStoragePlus resource type.
5. Bring the resource group online.
6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.
# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y \
Resource_dependencies=qfsnfs1-res
7. Use the scswitch(1M) -e -j command to bring the NFS resource online.
The NFS resource /net/lh-nfs1/global/qfsnfs1 is fully configured and highly available.
8. Before you announce the availability of the highly available NFS file system on the Sun StorEdge QFS file system, ensure that the resource group can be switched between all configured nodes without errors and can be taken online and offline.
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on VERITAS Clustered Volume Manager-controlled volumes (VxVM volumes). With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5. Typically, VxVM is used only when the underlying storage is not redundant.
As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. VxVM requires that shared DID devices be used to populate the raw devices from which VxVM configures volumes. VxVM creates highly available disk groups by registering the disk groups as Sun Cluster device groups. These disk groups are not globally accessible, but can be failed over, making them accessible to at least one node. The disk groups can be used by the HAStoragePlus resource type.
Note - The VxVM packages are separate, additional packages that must be installed, patched, and licensed. For information about installing VxVM, see the VxVM Volume Manager documentation. |
To use Sun StorEdge QFS software with VxVM, you must install the following VxVM packages:
This example follows these steps:
1. Configure the VxVM software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Validate the configuration.
5. Configure the network name service and the IPMP validation testing.
6. Configure HA-NFS and configure the file system for high availability.
This section provides an example of how to configure the VxVM software for use with the Sun StorEdge QFS software. For more detailed information about the VxVM software, see the VxVM documentation.
1. Determine the status of DMP (dynamic multipathing) for VERITAS.
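One way to check DMP status is with the vxdmpadm(1M) utility; this is a sketch, and the exact subcommands and output depend on your VxVM version and storage:
# vxdmpadm listctlr all |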
2. Use the scdidadm(1M) utility to determine the HBA controller number of the physical devices to be used by VxVM.
As shown in the following example, the multi-node accessible storage is available from scnode-A using HBA controller c6, and from node scnode-B using controller c7.
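A hedged sketch of the scdidadm(1M) invocation; the DID instance numbers, controller numbers, and node names in your listing will differ:
# scdidadm -L |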
3. Use VxVM to configure all available storage as seen through controller c6.
4. Place all of this controller's devices under VxVM control.
5. Create a disk group, create volumes, and then start the new disk group. Ensure that the previously started disk group is active on this system.
# vxdg import nfsdg
# vxdg free |
6. Configure two mirrored volumes for Sun StorEdge QFS metadata and two mirrored volumes for Sun StorEdge QFS file data.
These mirroring operations are performed as background processes, given the length of time they take to complete.
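The volume creation can be sketched with vxassist(1M) as follows; the volume names (m1, m2, v1, v2) and sizes are illustrative, and the trailing ampersands run the mirroring operations in the background:
# vxassist -g nfsdg make m1 10g layout=mirror &
# vxassist -g nfsdg make m2 10g layout=mirror &
# vxassist -g nfsdg make v1 200g layout=mirror &
# vxassist -g nfsdg make v2 200g layout=mirror & |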
7. Configure the previously created VxVM disk group as a Sun Cluster-controlled disk group.
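The registration can be sketched with the scconf(1M) command; the node names are illustrative:
# scconf -a -D type=vxvm,name=nfsdg,nodelist=scnode-A:scnode-B |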
Perform this procedure on each node that is a potential host of the file system.
1. Add the Sun StorEdge QFS file system entry to the mcf file.
For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
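A hypothetical mcf entry for a file system named qfsnfs1 built on the VxVM volumes created earlier; the equipment ordinals (200-202) and volume paths are illustrative:
qfsnfs1              200   ma   qfsnfs1   on
/dev/vx/dsk/nfsdg/m1 201   mm   qfsnfs1
/dev/vx/dsk/nfsdg/v1 202   mr   qfsnfs1 |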
2. Validate that the mcf configuration is correct.
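One way to validate the configuration is to run the sam-fsd(1M) command, which reads the mcf file and reports any errors it finds:
# sam-fsd |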
To Create the Sun StorEdge QFS File System and Configure Sun Cluster Nodes |
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
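For example:
# samd config |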
2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the Sun StorEdge QFS file system.
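For example, assuming the family set name qfsnfs1 from the mcf file:
# sammkfs qfsnfs1 |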
3. On each node that is a potential host of the file system, perform the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chmod(1M) command to set 755 access on the mount point, and use the chown(1M) command to assign ownership to root with group other.
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1 |
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
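A hypothetical vfstab entry for this file system; the fields are device to mount, device to fsck, mount point, file system type, fsck pass, mount-at-boot flag, and mount options:
qfsnfs1 - /global/qfsnfs1 samfs - no sync_meta=1 |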
1. Validate that all nodes that are potential hosts of the file system are configured correctly.
To do this, move the disk group that you created in To Configure the VxVM Software to the node, and mount and then unmount the file system. Perform this validation one node at a time.
# scswitch -z -D nfsdg -h scnode-B
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1 |
2. Ensure that the required Sun Cluster resource types have been added to the resource configuration. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands.
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs |
To Configure the Network Name Service and the IPMP Validation Testing |
This section provides an example of how to configure the network name service and the IPMP validation testing. For more information, see the Sun Cluster Software Installation Guide for Solaris OS.
1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it searches the cluster and files sources for node names before consulting NIS.
Perform this step before you configure the NIS server.
2. Verify that the changes you made to the /etc/nsswitch.conf file are correct.
# grep '^hosts:' /etc/nsswitch.conf
hosts: cluster files nis [NOTFOUND=return]
# |
3. Set up IPMP validation testing using available network adapters.
The adapters qfe2 and qfe3 are used as examples.
a. Statically configure an IPMP test address for each adapter.
b. Dynamically configure IPMP adapters.
c. Validate the configuration.
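The steps above can be sketched as follows; the test-address host names (qfe2-test, qfe3-test) and the group name ipmp0 are illustrative:
# ifconfig qfe2 plumb qfe2-test netmask + broadcast + deprecated -failover group ipmp0 up
# ifconfig qfe3 plumb qfe3-test netmask + broadcast + deprecated -failover group ipmp0 up |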
To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability |
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. On each node that is a potential host of the file system, create the NFS share point for the Sun StorEdge QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.
# mkdir -p /global/qfsnfs1/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
/global/qfsnfs1/SUNW.nfs/dfstab.nfs1-res |
2. From one node in the Sun Cluster system, create the NFS resource group.
3. Add a logical host to the NFS resource group.
4. Configure the HAStoragePlus resource type.
5. Bring the resource group online.
6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.
7. Bring the NFS resource online.
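Steps 2 through 7 can be sketched as follows; the resource group, resource, and logical host names match the earlier examples, but the property values are illustrative:
# scrgadm -a -g nfs-rg -y Pathprefix=/global/qfsnfs1
# scrgadm -a -L -g nfs-rg -l lh-nfs1
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/qfsnfs1
# scswitch -Z -g nfs-rg
# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y Resource_dependencies=qfsnfs1-res
# scswitch -e -j nfs1-res |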
The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and highly available.
8. Before you announce the availability of the highly available NFS file system on the Sun StorEdge QFS file system, validate that the resource group can be switched between all configured nodes without errors and taken online and offline.
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg |
This section demonstrates how to make changes to, disable, or remove the Sun StorEdge QFS shared or unshared file system configuration. It contains the following sections:
To Change the Shared File System Configuration |
This procedure is based on the example in Example Configuration.
1. Log into each node as the oracle user, shut down the database instance, and stop the listener.
$ sqlplus "/as sysdba"
SQL> shutdown immediate
SQL> exit
$ lsnrctl stop listener |
2. Log into the metadata server as superuser and bring the metadata server resource group into the unmanaged state.
# scswitch -F -g qfs-rg
# scswitch -u -g qfs-rg |
At this point, the shared file systems are unmounted on all nodes. You can now apply any changes to the file systems' configuration, mount options, and so on. You can also re-create the file systems, if necessary. To use the file systems again after re-creating them, follow the steps in Example Configuration.
If you want to make changes to the metadata server resource group configuration or to the Sun StorEdge QFS software (for example, to upgrade to new packages), continue to Step 3.
3. As superuser, remove the resource, the resource group, and the resource type, and verify that everything is removed.
# scswitch -n -j qfs-res
# scswitch -r -j qfs-res
# scrgadm -r -g qfs-rg
# scrgadm -r -t SUNW.qfs
# scstat |
At this point, you can re-create the resource group to define different names, node lists, and so on. You can also remove or upgrade the Sun StorEdge QFS shared software, if necessary. After the new software is installed, the metadata resource group and the resource can be recreated and can be brought online.
To Disable HA-NFS on a File System That Uses Raw Global Devices |
Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using raw global devices. This example procedure is based on Example 1.
1. Use the scswitch(1M) -F -g command to take the resource group offline.
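For example:
# scswitch -F -g nfs-rg |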
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1 |
3. Remove the previously configured resources.
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1 |
4. Remove the previously configured resource group.
5. Clean up the NFS configuration directories.
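Steps 4 and 5 can be sketched as follows; the resource group name and share-point directory match the earlier examples:
# scrgadm -r -g nfs-rg
# rm -fr /global/qfsnfs1/SUNW.nfs |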
6. Disable the resource types used, if they were previously added and are no longer needed.
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs |
To Disable HA-NFS on a File System That Uses Solaris Volume Manager-Controlled Volumes |
Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using Solstice DiskSuite/Solaris Volume Manager-controlled volumes. This example procedure is based on Example 2.
1. Take the resource group offline.
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1 |
3. Remove the previously configured resources.
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1 |
4. Remove the previously configured resource group.
5. Clean up the NFS configuration directories.
6. Disable the resource types used, if they were previously added and are no longer needed.
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs |
7. Delete RAID-5 and RAID-1 sets.
# metaclear -s nfsdg -f d30 d20 d21 d22 d23 d11 d1 d2 d3 d4 |
8. Remove the mediator hosts used for mediation detection of drive errors.
# metaset -s nfsdg -d -m scnode-A
# metaset -s nfsdg -d -m scnode-B |
9. Remove the shared DID devices from the nfsdg disk group.
10. Remove the configuration of disk group nfsdg across nodes in the Sun Cluster system.
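Steps 9 and 10 can be sketched with the metaset(1M) command; the DID device paths and node names are illustrative:
# metaset -s nfsdg -d /dev/did/rdsk/d4 /dev/did/rdsk/d5 /dev/did/rdsk/d6 /dev/did/rdsk/d7
# metaset -s nfsdg -d -h scnode-A scnode-B |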
11. Delete the metadatabase, if it is no longer needed.
# metadb -d -f /dev/dsk/c0t0d0s7
# metadb -d -f /dev/dsk/c1t0d0s7
# metadb -d -f /dev/dsk/c2t0d0s7 |
To Disable HA-NFS on a Sun StorEdge QFS File System That Uses VxVM-Controlled Volumes |
Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using VxVM-controlled volumes. This example procedure is based on Example 3.
1. Take the resource group offline.
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1 |
3. Remove the previously configured resources.
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1 |
4. Remove the previously configured resource group.
5. Clean up the NFS configuration directories.
6. Disable the resource types used, if they were previously added and are no longer needed.
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs |
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.