CHAPTER 6

Configuring Sun StorEdge QFS in a Sun Cluster Environment
This chapter describes how the Sun StorEdge QFS software works in a Sun Cluster environment. It also provides configuration examples for both shared and unshared Sun StorEdge QFS file systems in a Sun Cluster environment.
This chapter contains the following sections:
With versions 4U2 and later of the Sun StorEdge QFS software, you can install a Sun StorEdge QFS file system in a Sun Cluster environment and configure the file system for high availability. The configuration method you use varies, depending on whether your file system is shared or unshared.
This chapter assumes that you are an experienced user of both the Sun StorEdge QFS software and the Sun Cluster environment. It also assumes you have performed either or both of the following:
It is recommended that you read the following documentation before continuing with this chapter:
The following restrictions apply to the Sun StorEdge QFS software in a Sun Cluster environment:
The shared file system uses Sun Cluster disk identifier (DID) support to enable data access by the Sun Cluster data service for Oracle Real Application Clusters. The unshared file system uses global device volume support and volume manager-controlled volume support to enable data access by failover applications supported by the Sun Cluster system.
With DID support, each device that is under the control of the Sun Cluster system, whether it is multipathed or not, is assigned a unique DID. For every unique DID device, there is a corresponding global device. The Sun StorEdge QFS shared file system can be configured on redundant storage that consists only of DID devices (/dev/did/*), where DID devices are accessible only on nodes that have a direct connection to the device through a host bus adapter (HBA).
Configuring the Sun StorEdge QFS shared file system on DID devices and configuring the SUNW.qfs resource type for use with the file system makes the file system's shared metadata server highly available. The Sun Cluster data service for Oracle Real Application Clusters can then access data from within the file system. Additionally, the Sun StorEdge QFS Sun Cluster agent can then automatically relocate the metadata server for the file system as necessary.
A global device is the Sun Cluster system's mechanism for accessing an underlying DID device from any node within the Sun Cluster system, assuming that the nodes hosting the DID device are available. Global devices and volume manager-controlled volumes can be made accessible from every node in the Sun Cluster system. The unshared Sun StorEdge QFS file system can be configured on redundant storage that consists of either raw global devices (/dev/global/*) or volume manager-controlled volumes.
Configuring the unshared file system on these global devices or volume manager-controlled devices and configuring the HAStoragePlus resource type for use with the file system makes the file system highly available with the ability to fail over to other nodes.
In the 4U4 release of Sun StorEdge QFS, support was added for Solaris Volume Manager for Sun Cluster, an extension to Solaris Volume Manager that is bundled with the Solaris 9 and Solaris 10 OS releases. Sun StorEdge QFS supports Solaris Volume Manager for Sun Cluster only with shared Sun StorEdge QFS file systems on the Solaris 10 OS.
Sun StorEdge QFS support for Solaris Volume Manager for Sun Cluster was introduced to take advantage of host-based mirroring with shared Sun StorEdge QFS, as well as Oracle's implementation of application binary recovery (ABR) and direct mirror reads (DMR) for Oracle RAC-based applications.
Use of Solaris Volume Manager for Sun Cluster with Sun StorEdge QFS requires Sun Cluster software and an additional unbundled software package included with the Sun Cluster software.
With the addition of Solaris Volume Manager for Sun Cluster support, four new mount options were introduced. These mount options are available only if Sun StorEdge QFS detects that it is configured on Solaris Volume Manager for Sun Cluster. The mount options are:
The following is a configuration example for using Sun StorEdge QFS with Solaris Volume Manager for Sun Cluster.
In the example below, it is assumed that the following configuration is already in place:
In this example there are three shared Sun StorEdge QFS file systems:
1. Create the metadb on each node.
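For example, assuming one unused slice on each of three local disks per node (device names are illustrative):

# metadb -a -f c0t0d0s7 c1t0d0s7 c2t0d0s7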
2. Create the disk group on one node.
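For example, a multi-owner disk set named datadg hosted by both nodes might be created as follows (node names are illustrative):

# metaset -s datadg -M -a -h scnode-A scnode-B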
3. Run scdidadm to obtain devices on one node.
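For example:

# scdidadm -L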
The mirroring scheme is as follows:
d21 <-> d13
d14 <-> d17
d23 <-> d16
d15 <-> d19
4. Add devices to the set on one node.
# metaset -s datadg -a /dev/did/rdsk/d21 /dev/did/rdsk/d13 /dev/did/rdsk/d14 \
  /dev/did/rdsk/d17 /dev/did/rdsk/d23 /dev/did/rdsk/d16 /dev/did/rdsk/d15 \
  /dev/did/rdsk/d19
5. Create the mirrors on one node.
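A sketch for the first mirrored pair (d21 <-> d13); the submirror and mirror metadevice names and the use of slice 0 are illustrative, and the same pattern is repeated for the remaining pairs:

# metainit -s datadg d1 1 1 /dev/did/rdsk/d21s0
# metainit -s datadg d2 1 1 /dev/did/rdsk/d13s0
# metainit -s datadg d10 -m d1
# metattach -s datadg d10 d2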
6. Perform the Sun StorEdge QFS installation on each node.
7. Create the mcf file on each node.
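The mcf entries follow the usual shared file system pattern; a sketch for the Data file system, assuming illustrative metadevice names in the datadg set and illustrative equipment ordinals, might look like this (similar entries are added for Redo and Crs):

#
# File system Data
#
Data                     2    ma   Data   on   shared
/dev/md/datadg/dsk/d10  10    mm   Data   on
/dev/md/datadg/dsk/d11  11    mr   Data   on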
8. Create file system hosts files.
9. Create /etc/opt/SUNWsamfs/samfs.cmd file.
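The samfs.cmd file typically carries the mount options required for Oracle RAC; a sketch for the Data file system, with values mirroring CODE EXAMPLE 6-5 later in this chapter:

fs = Data
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio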
10. Create the Sun StorEdge QFS file systems. See Sun StorEdge QFS Installation and Upgrade Guide for more information.
11. Configure the resource group in Sun Cluster to manage failover of the Sun StorEdge QFS metadata server.
a. Build and append the /etc/vfstab mount entries:
#
# RAC on shared QFS
#
Data  -  /cluster/Data  samfs  -  no  shared,notrace
Redo  -  /cluster/Redo  samfs  -  no  shared,notrace
Crs   -  /cluster/Crs   samfs  -  no  shared,notrace
b. Mount the file systems across the cluster on each node. First, mount the shared Sun StorEdge QFS file systems on the current metadata server, and then mount the file system on each metadata client.
To verify this step, type:
# df -h -F samfs
c. Create the Sun Cluster resource group to manage the metadata server.
Register the QFS resource type:
# scrgadm -a -t SUNW.qfs
Add the resource group with the Sun Cluster and shared Sun StorEdge QFS metadata nodes:
# scrgadm -a -g sc-QFS-rg -h scnode-A,scnode-B \
  -y RG_DEPENDENCIES="rac-framework-rg"
Add the shared Sun StorEdge QFS file system resource, of resource type SUNW.qfs, to the resource group:
# scrgadm -a -g sc-QFS-rg -t SUNW.qfs -j sc-qfs-fs-rs \
  -x QFSFileSystem=/cluster/Data,/cluster/Redo,/cluster/Crs
Bring the resource group online:
# scswitch -Z -g sc-QFS-rg
The shared Sun StorEdge QFS file system is now ready to use.
This chapter provides configuration examples for the Sun StorEdge QFS shared file system on a Sun Cluster system and for the unshared Sun StorEdge QFS file system on a Sun Cluster system. All configuration examples are based on a platform consisting of the following:
All configurations in this chapter are also based on CODE EXAMPLE 6-1. In this code example, the scdidadm(1M) command displays the DID devices, and the -L option lists the DID device paths, including those on all nodes in the Sun Cluster system.
CODE EXAMPLE 6-1 shows that DID devices d4 through d8 are accessible from both Sun Cluster nodes (scnode-A and scnode-B). With the Sun StorEdge QFS file system sizing requirements and with knowledge of your intended application and configuration, you can decide on the most appropriate apportioning of devices to file systems. By using the Solaris format(1M) command, you can determine the sizing and partition layout of each DID device and resize the partitions on each DID device, if needed. Given the available DID devices, you can also configure multiple devices and their associated partitions to contain the file systems, according to your sizing requirements.
When you install a Sun StorEdge QFS shared file system in a Sun Cluster environment, you configure the file system's metadata server under the SUNW.qfs resource type. This makes the metadata server highly available and enables the Sun StorEdge QFS shared file system to be globally accessible on all configured nodes in the Sun Cluster environment.
A Sun StorEdge QFS shared file system is typically associated with a scalable application. The Sun StorEdge QFS shared file system is mounted on, and the scalable application is active on, one or more Sun Cluster nodes.
If a node in the Sun Cluster system fails, or if you switch over the resource group, the metadata server resource (Sun StorEdge QFS Sun Cluster agent) automatically relocates the file system's metadata server as necessary. This ensures that the other nodes' access to the shared file system is not affected.
When the Sun Cluster system boots, the metadata server resource ensures that the file system is mounted on all nodes that are part of the resource group. However, the file system mount on those nodes is not monitored. Therefore, in certain failure cases, the file system might be unavailable on certain nodes, even if the metadata server resource is in the online state.
If you use Sun Cluster administrative commands to bring the metadata server resource group offline, the file system under the metadata server resource remains mounted on the nodes. To unmount the file system (with the exception of a node that is shut down), you must bring the metadata server resource group into the unmanaged state by using the appropriate Sun Cluster administrative command.
To remount the file system at a later time, you must bring the resource group into a managed state and then into an online state.
This section shows an example of the Sun StorEdge QFS shared file system installed on raw DID devices with the Sun Cluster data service for Oracle Real Application Clusters. For detailed information on how to use the Sun StorEdge QFS shared file system with the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
As shown in CODE EXAMPLE 6-1, DID devices d4 through d8 are highly available and are contained on controller-based storage. For you to configure a Sun StorEdge QFS shared file system in a Sun Cluster environment, the controller-based storage must support device redundancy by using RAID-1 or RAID-5.
For simplicity in this example, two file systems are created:
Additionally, device d4 is used for Sun StorEdge QFS metadata. This device has two 50-gigabyte slices. The remaining devices, d5 through d8, are used for Sun StorEdge QFS file data.
This configuration involves five main steps, as detailed in the following subsections:
1. Preparing to create Sun StorEdge QFS file systems.
2. Creating the file systems and configuring the Sun Cluster nodes.
3. Validating the configuration.
4. Configuring the network name service.
5. Configuring the Sun Cluster data service for Oracle Real Application Clusters.
1. From one node in the Sun Cluster system, use the format(1M) utility to lay out partitions on /dev/did/dsk/d4 (CODE EXAMPLE 6-2).
In this example, the action is performed from node scnode-A.
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 50-gigabyte partition. Partition 1 is configured to be the same size as partition 0.
2. On the same node, use the format(1M) utility to lay out partitions on /dev/did/dsk/d5 (CODE EXAMPLE 6-3).
3. Still on the same node, replicate the device d5 partitioning to devices d6 through d8.
This example shows the command for device d6:
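One common way to replicate the partitioning is to copy the VTOC with prtvtoc(1M) and write it with fmthard(1M); the same command is then repeated for devices d7 and d8:

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d6s2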
4. On all nodes that are potential hosts of the file systems, perform the following:
a. Configure the six partitions into two Sun StorEdge QFS shared file systems by adding two new configuration entries (qfs1 and qfs2) to the mcf(4) file (CODE EXAMPLE 6-4).
For more information about the mcf(4) file, see Function of the mcf File or the Sun StorEdge QFS Installation and Upgrade Guide.
b. Edit the /etc/opt/SUNWsamfs/samfs.cmd file to add the mount options that are required for the Sun Cluster data service for Oracle Real Application Clusters (CODE EXAMPLE 6-5).
fs = qfs2
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  nstreams = 2048
  rdlease = 300
For more information about the mount options that are required by the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
c. Validate that the configuration is correct.
Be sure to perform this validation after you have configured the mcf(4) file and the samfs.cmd file on each node.
Perform this procedure for each file system you are creating. This example describes how to create the qfs1 file system.
1. Obtain the Sun Cluster private interconnect names by using the following command:
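For example, one way to list the names (the output labels can vary by Sun Cluster release):

# scconf -p | egrep "Cluster node name:|Node private hostname:"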
2. On each node that is a potential host of the file system, do the following:
a. Use the samd(1M) config command, which signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available:
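For example:

# samd config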
b. Create the Sun StorEdge QFS shared hosts file for the file system (/etc/opt/SUNWsamfs/hosts.family-set-name), based on the Sun Cluster system's private interconnect names that you obtained in Step 1.
3. Edit the unique Sun StorEdge QFS shared file system's host configuration file with the Sun Cluster system's interconnect names (CODE EXAMPLE 6-6).
For Sun Cluster software failover and fencing operations, the Sun StorEdge QFS shared file system must use the same interconnect names as the Sun Cluster system.
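A sketch of the resulting hosts file for qfs1, assuming the default Sun Cluster private hostnames clusternode1-priv and clusternode2-priv:

# /etc/opt/SUNWsamfs/hosts.qfs1
scnode-A   clusternode1-priv   1   -   server
scnode-B   clusternode2-priv   2   -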
4. From one node in the Sun Cluster system, use the sammkfs(1M) -S command to create the Sun StorEdge QFS shared file system:
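For example, for the qfs1 family set:

# sammkfs -S qfs1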
5. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's permissions to 755 (read and execute access for group and other):
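For example, for the qfs1 file system:

# mkdir -p /global/qfs1
# chown root /global/qfs1
# chmod 755 /global/qfs1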
b. Add the Sun StorEdge QFS shared file system entry to the /etc/vfstab file:
# cat >> /etc/vfstab <<EOF
# device    device    mount          FS     fsck   mount     mount
# to mount  to fsck   point          type   pass   at boot   options
#
qfs1        -         /global/qfs1   samfs  -      no        shared
EOF
Perform this procedure for each file system you create. This example describes how to validate the configuration for file system qfs1.
1. If you do not know which node is acting as the metadata server for the file system, use the samsharefs(1M) -R command.
In CODE EXAMPLE 6-7 the metadata server for qfs1 is scnode-A.
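For example:

# samsharefs -R qfs1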
2. Use the mount(1M) command to mount the file system first on the metadata server and then on each node in the Sun Cluster system.
Note - It is important that you mount the file system on the metadata server first.
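With the vfstab entry in place, the mount can be done by mount point; for example, on the metadata server and then on each remaining node:

# mount /global/qfs1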
3. Validate voluntary failover by issuing the samsharefs(1M) -s command, which changes the Sun StorEdge QFS shared file system between nodes:
# samsharefs -s scnode-B qfs1
# ls /global/qfs1
lost+found/
# samsharefs -s scnode-A qfs1
# ls /global/qfs1
lost+found/
4. Validate that the required Sun Cluster resource type is added to the resource configuration:
5. If you cannot find the Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the resource configuration:
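For example:

# scrgadm -a -t SUNW.qfs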
6. Register and configure the SUNW.qfs resource type:
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
  -x QFSFileSystem=/global/qfs1,/global/qfs2
7. Use the scswitch(1M) -Z -g command to bring the resource group online:
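For example:

# scswitch -Z -g qfs-rg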
8. Ensure that the resource group is functional on all configured nodes:
This section provides an example of how to configure the data service for Oracle Real Application Clusters for use with Sun StorEdge QFS shared file systems. For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
1. Install the data service as described in the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
2. Mount the Sun StorEdge QFS shared file systems.
3. Set the correct ownership and permissions on the file systems so that the Oracle database operations are successful:
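For example, assuming the oracle user and dba group shown in the next step:

# chown oracle:dba /global/qfs1 /global/qfs2
# chmod 755 /global/qfs1 /global/qfs2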
4. As the oracle user, create the subdirectories that are required for the Oracle Real Application Clusters installation and database files:
$ id
uid=120(oracle) gid=520(dba)
$ mkdir /global/qfs1/oracle_install
$ mkdir /global/qfs2/oracle_db
The Oracle Real Application Clusters installation uses the /global/qfs1/oracle_install directory path as the value for the ORACLE_HOME environment variable that is used in Oracle operations. The Oracle Real Application Clusters database files' path is prefixed with the /global/qfs2/oracle_db directory path.
5. Install the Oracle Real Application Clusters software.
During the installation, provide the path for the installation defined in Step 4 (/global/qfs1/oracle_install).
6. Create the Oracle Real Application Clusters database.
During database creation, specify that you want the database files located in the qfs2 shared file system.
7. If you are automating the startup and shutdown of Oracle Real Application Clusters database instances, ensure that the required dependencies for resource groups and resources are set.
For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
Note - If you plan to automate the startup and shutdown of Oracle Real Application Clusters database instances, you must use Sun Cluster software version 3.1 9/04 or a compatible version.
When you install the unshared Sun StorEdge QFS file system on a Sun Cluster system, you configure the file system for high availability (HA) under the Sun Cluster HAStoragePlus resource type. An unshared Sun StorEdge QFS file system in a Sun Cluster system is typically associated with one or more failover applications, such as HA-NFS, HA-ORACLE, and so on. Both the unshared Sun StorEdge QFS file system and the failover applications are active in a single resource group; the resource group is active on one Sun Cluster node at a time.
An unshared Sun StorEdge QFS file system is mounted on a single node at any given time. If the Sun Cluster fault monitor detects an error, or if you switch over the resource group, the unshared Sun StorEdge QFS file system and its associated HA applications fail over to another node, depending on how the resource group has been previously configured.
Any file system contained on a Sun Cluster global device group (/dev/global/*) can be used with the HAStoragePlus resource type. When a file system is configured with the HAStoragePlus resource type, it becomes part of a Sun Cluster resource group, and the file system, under Sun Cluster Resource Group Manager (RGM) control, is mounted locally on the node where the resource group is active. When the RGM switches the resource group over, or fails it over, to another configured Sun Cluster node, the unshared Sun StorEdge QFS file system is unmounted from the current node and remounted on the new node.
Each unshared Sun StorEdge QFS file system requires a minimum of two raw disk partitions or volume manager-controlled volumes (Solstice DiskSuite/Solaris Volume Manager or VERITAS Clustered Volume Manager), one for Sun StorEdge QFS metadata (inodes) and one for Sun StorEdge QFS file data. Configuring multiple partitions or volumes across multiple disks through multiple data paths increases unshared Sun StorEdge QFS file system performance. For information about sizing metadata and file data partitions, see Design Basics.
This section provides three examples of Sun Cluster system configurations using the unshared Sun StorEdge QFS file system. In these examples, a file system is configured in combination with an HA-NFS file mount point on the following:
For simplicity in all of these configurations, ten percent of each file system is used for Sun StorEdge QFS metadata, and the remaining space is used for Sun StorEdge QFS file data. For information about sizing and disk layout considerations, see the Sun StorEdge QFS Installation and Upgrade Guide.
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on raw global devices. For this configuration, the raw global devices must be contained on controller-based storage. This controller-based storage must support device redundancy through RAID-1 or RAID-5.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. The HAStoragePlus resource type requires the use of global devices, so each DID device (/dev/did/dsk/dx) is accessible as a global device by using the following syntax: /dev/global/dsk/dx.
The main steps in this example are as follows:
1. Prepare to create an unshared file system.
2. Create the file system and configure the Sun Cluster nodes.
3. Configure the network name service and the IP network multipathing (IPMP) validation testing.
4. Configure HA-NFS and configure the file system for high availability.
1. Use the format(1M) utility to lay out the partitions on /dev/global/dsk/d4:
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.
2. Replicate the global device d4 partitioning to global devices d5 through d7.
This example shows the command for global device d5:
3. On all nodes that are potential hosts of the file system, perform the following:
a. Configure the eight partitions (four global devices, with two partitions each) into a Sun StorEdge QFS file system by adding a new file system entry to the mcf(4) file.
For information about the mcf(4) file, see Function of the mcf File.
b. Validate that the configuration information you added to the mcf(4) file is correct, and fix any errors in the mcf(4) file before proceeding.
It is important to complete this step before you configure the Sun StorEdge QFS file system under the HAStoragePlus resource type.
1. On each node that is a potential host of the file system, issue the samd(1M) config command.
This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
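For example:

# sammkfs qfsnfs1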
3. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's permissions to 755 (read and execute access for group and other):
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab <<EOF
# device    device    mount             FS     fsck   mount     mount
# to mount  to fsck   point             type   pass   at boot   options
#
qfsnfs1     -         /global/qfsnfs1   samfs  2      no        sync_meta=1
EOF
c. Validate the configuration by mounting and unmounting the file system:
4. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:
5. If you cannot find a required Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the configuration:
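For example, to register the resource types used in this example (register only those that are missing):

# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs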
This section provides an example of how to configure the network name service and the IPMP Validation Testing for your Sun Cluster nodes. For more information, see the Sun Cluster Software Installation Guide for Solaris OS, the System Administration Guide: IP Services, and the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it looks in the Sun Cluster system and files for node names.
Perform this step before you configure the NIS server.
2. Verify that the changes you made to the /etc/nsswitch.conf are correct:
3. Set up IPMP validation testing using available network adapters.
The adapters qfe2 and qfe3 are used as examples.
a. Statically configure the IPMP test address for each adapter:
b. Dynamically configure the IPMP adapters:
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorEdge QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
  /global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group:
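A sketch, assuming the Pathprefix directory /global/nfs that holds the SUNW.nfs/dfstab file created in Step 1:

# scrgadm -a -g nfs-rg -y Pathprefix=/global/nfs -h scnode-A,scnode-B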
3. Add the NFS logical host to the /etc/hosts table, using the address for your site:
4. Use the scrgadm(1M) -a -L -g command to add the logical host to the NFS resource group:
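For example, using the lh-nfs1 logical host added to /etc/hosts in Step 3:

# scrgadm -a -L -g nfs-rg -l lh-nfs1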
5. Use the scrgadm(1M) -c -g command to configure the HAStoragePlus resource type:
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
  -x FilesystemMountPoints=/global/qfsnfs1 \
  -x FilesystemCheckCommand=/bin/true
6. Bring the resource group online:
7. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:
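A sketch, assuming the resource name nfs1-res (matching the dfstab.nfs1-res file created in Step 1) and a dependency on the qfsnfs1-res resource:

# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
  -y Resource_dependencies=qfsnfs1-res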
8. Bring the NFS resource online:
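For example, assuming the nfs1-res resource name used above:

# scswitch -e -j nfs1-res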
The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and is also highly available.
9. Before announcing the availability of the highly available NFS file system on the Sun StorEdge QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on volumes controlled by Solstice DiskSuite/Solaris Volume Manager software. With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5 volumes. Typically, Solaris Volume Manager is used only when the underlying controller-based storage is not redundant.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. Solaris Volume Manager requires that DID devices be used to populate the raw devices from which Solaris Volume Manager can configure volumes. Solaris Volume Manager creates globally accessible disk groups, which can then be used by the HAStoragePlus resource type for creating Sun StorEdge QFS file systems.
This example follows these steps:
1. Prepare the Solstice DiskSuite/Solaris Volume Manager software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Configure the network name service and the IPMP validation testing.
5. Configure HA-NFS and configure the file system for high availability.
1. Determine whether a Solaris Volume Manager metadatabase (metadb) is already configured on each node that is a potential host of the Sun StorEdge QFS file system:
# metadb
        flags          first blk      block count
     a m  p  luo        16             8192          /dev/dsk/c0t0d0s7
     a    p  luo        16             8192          /dev/dsk/c1t0d0s7
     a    p  luo        16             8192          /dev/dsk/c2t0d0s7
If the metadb(1M) command does not return a metadatabase configuration, then on each node, create three or more database replicas on one or more local disks. Each replica must be at least 16 megabytes in size. For more information about creating the metadatabase configuration, see the Sun Cluster Software Installation Guide for Solaris OS.
2. Create a HA-NFS disk group to contain all Solaris Volume Manager volumes for this Sun StorEdge QFS file system:
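For example, a disk set named nfsdg hosted by both nodes:

# metaset -s nfsdg -a -h scnode-A scnode-B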
3. Add DID devices d4 through d7 to the pool of raw devices from which Solaris Volume Manager can create volumes:
1. Use the format(1M) utility to lay out partitions on /dev/global/dsk/d4:
This example shows that partition or slice 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.
2. Replicate the partitioning of DID device d4 to DID devices d5 through d7.
This example shows the command for device d5:
3. Configure the eight partitions (four DID devices, two partitions each) into two RAID-1 (mirrored) Sun StorEdge QFS metadata volumes and two RAID-5 (parity-striped) Sun StorEdge QFS file data volumes:
a. Combine partition (slice) 0 of these four drives into two RAID-1 sets:
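A sketch for the first RAID-1 set (metadevice names are illustrative); the second set is built the same way from d6s0 and d7s0:

# metainit -s nfsdg d20 1 1 /dev/did/rdsk/d4s0
# metainit -s nfsdg d21 1 1 /dev/did/rdsk/d5s0
# metainit -s nfsdg d10 -m d20
# metattach -s nfsdg d10 d21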
b. Combine partition 1 of these four drives into two RAID-5 sets:
c. On each node that is a potential host of the file system, add the Sun StorEdge QFS file system entry to the mcf(4) file:
For more information about the mcf(4) file, see Function of the mcf File.
4. Validate that the mcf(4) configuration is correct on each node, and fix any errors in the mcf(4) file before proceeding.
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
2. Enable Solaris Volume Manager mediation detection of disk groups, which assists the Sun Cluster system in the detection of drive errors:
3. On each node that is a potential host of the file system, ensure that the NFS disk group exists:
4. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
5. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's permissions to 755 (read and execute access for group and other):
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab << EOF
# device    device    mount             FS     fsck   mount     mount
# to mount  to fsck   point             type   pass   at boot   options
#
qfsnfs1     -         /global/qfsnfs1   samfs  2      no        sync_meta=1
EOF
c. Validate the configuration by mounting and unmounting the file system.
Perform this step one node at a time. In this example, the qfsnfs1 file system is mounted and unmounted on one node.
6. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:
7. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:
To configure the network name service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorEdge QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
  /global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group:
3. Add a logical host to the NFS resource group:
4. Configure the HAStoragePlus resource type:
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
  -x FilesystemMountPoints=/global/qfsnfs1 \
  -x FilesystemCheckCommand=/bin/true
5. Bring the resource group online:
6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:
7. Use the scswitch(1M) -e -j command to bring the NFS resource online:
The NFS resource /net/lh-nfs1/global/qfsnfs1 is fully configured and highly available.
8. Before you announce the availability of the highly available NFS file system on the Sun StorEdge QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on VERITAS Clustered Volume Manager-controlled volumes (VxVM volumes). With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5. Typically, VxVM is used only when the underlying storage is not redundant.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. VxVM requires that shared DID devices be used to populate the raw devices from which VxVM configures volumes. VxVM creates highly available disk groups by registering the disk groups as Sun Cluster device groups. These disk groups are not globally accessible, but can be failed over, making them accessible to at least one node. The disk groups can be used by the HAStoragePlus resource type.
Note - The VxVM packages are separate, additional packages that must be installed, patched, and licensed. For information about installing VxVM, see the VxVM Volume Manager documentation.
To use Sun StorEdge QFS software with VxVM, you must install the following VxVM packages:
This example follows these steps:
1. Configure the VxVM software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Validate the configuration.
5. Configure the network name service and the IPMP validation testing.
6. Configure HA-NFS and configure the file system for high availability.
This section provides an example of how to configure the VxVM software for use with the Sun StorEdge QFS software. For more detailed information about the VxVM software, see the VxVM documentation.
1. Determine the status of DMP (dynamic multipathing) for VERITAS.
2. Use the scdidadm(1M) utility to determine the HBA controller number of the physical devices to be used by VxVM.
As shown in the following example, the multi-node accessible storage is available from scnode-A using HBA controller c6, and from node scnode-B using controller c7:
# scdidadm -L
[some output deleted]
4   scnode-A:/dev/dsk/c6t60020F20000037D13E26595500062F06d0   /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t60020F20000037D13E26595500062F06d0   /dev/did/dsk/d4
3. Use VxVM to configure all available storage as seen through controller c6:
4. Place all of this controller's devices under VxVM control:
5. Create a disk group, create volumes, and then start the new disk group:
6. Ensure that the previously started disk group is active on this system:
7. Configure two mirrored volumes for Sun StorEdge QFS metadata and two volumes for Sun StorEdge QFS file data volumes.
These mirroring operations are performed as background processes, given the length of time they take to complete.
8. Configure the previously created VxVM disk group as a Sun Cluster-controlled disk group:
Perform this procedure on each node that is a potential host of the file system.
1. Add the Sun StorEdge QFS file system entry to the mcf(4) file.
For more information about the mcf(4) file, see Function of the mcf File.
2. Validate that the mcf(4) configuration is correct, and correct any errors in the mcf(4) file before proceeding:
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.
2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
3. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's permissions to 755 (read and execute access for group and other):
b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab << EOF
# device    device    mount             FS     fsck   mount     mount
# to mount  to fsck   point             type   pass   at boot   options
#
qfsnfs1     -         /global/qfsnfs1   samfs  2      no        sync_meta=1
EOF
1. Validate that all nodes that are potential hosts of the file system are configured correctly.
To do this, move the disk group that you created in To Configure the VxVM Software to the node, and mount and then unmount the file system. Perform this validation one node at a time.
# scswitch -z -D nfsdg -h scnode-B
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1
2. Ensure that the required Sun Cluster resource types have been added to the resource configuration:
3. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:
To configure the network name service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.
To configure HA-NFS and the file system for high availability, follow the instructions in To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability.
This section demonstrates how to make changes to, disable, or remove the Sun StorEdge QFS shared or unshared file system configuration in a Sun Cluster environment. It contains the following sections:
This example procedure is based on the example in Example Configuration.
1. Log in to each node as the oracle user, shut down the database instance, and stop the listener:
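A sketch; the exact commands depend on your Oracle release and configuration:

$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> quit
$ lsnrctl stop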
2. Log in to the metadata server as superuser and bring the metadata server resource group into the unmanaged state:
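A sketch, assuming the qfs-rg resource group and qfs-res resource created earlier in this chapter; the resource must be disabled before the group can be placed in the unmanaged state:

# scswitch -F -g qfs-rg
# scswitch -n -j qfs-res
# scswitch -u -g qfs-rg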
At this point, the shared file systems are unmounted on all nodes. You can now apply any changes to the file systems' configuration, mount options, and so on. You can also re-create the file systems, if necessary. To use the file systems again after re-creating them, follow the steps in Example Configuration.
3. If you want to make changes to the metadata server resource group configuration or to the Sun StorEdge QFS software, remove the resource, the resource group, and the resource type, and verify that everything is removed.
For example, you might need to upgrade to new packages.
# scswitch -n -j qfs-res
# scswitch -r -j qfs-res
# scrgadm -r -g qfs-rg
# scrgadm -r -t SUNW.qfs
# scstat
At this point, you can re-create the resource group to define different names, node lists, and so on. You can also remove or upgrade the Sun StorEdge QFS shared software, if necessary. After the new software is installed, the metadata resource group and the resource can be re-created and can be brought online.
Use this general example procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using raw global devices. This example procedure is based on Example 1: HA-NFS on Raw Global Devices.
1. Use the scswitch(1M) -F -g command to take the resource group offline:
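For example:

# scswitch -F -g nfs-rg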
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources:
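A sketch, assuming the resource names used in Example 1 (the logical host resource name is assumed to default to the hostname lh-nfs1):

# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1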
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
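For example:

# rm -fr /global/nfs/SUNW.nfs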
6. Disable the resource types used, if they were previously added and are no longer needed:
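For example:

# scrgadm -r -t SUNW.nfs
# scrgadm -r -t SUNW.HAStoragePlus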
Use this general example procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using Solstice DiskSuite/Solaris Volume Manager-controlled volumes. This example procedure is based on Example 2: HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager.
1. Take the resource group offline:
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources:
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
6. Disable the resource types used, if they were previously added and are no longer needed:
7. Delete RAID-5 and RAID-1 sets:
8. Remove mediation detection of drive errors:
9. Remove the shared DID devices from the nfsdg disk group:
10. Remove the configuration of disk group nfsdg across nodes in the Sun Cluster system:
11. Delete the metadatabase, if it is no longer needed:
Use this general example procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using VxVM-controlled volumes. This example procedure is based on Example 3: HA-NFS on VxVM Volumes.
1. Take the resource group offline:
2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources:
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
6. Disable the resource types used, if they were previously added and are no longer needed:
Copyright © 2006, Sun Microsystems, Inc. All Rights Reserved.