This section describes how disk device groups and resource groups are configured for an NFS application. For additional information, see Configuring Replication Resource Groups and Configuring Application Resource Groups.
This section contains the following procedures:
How to Configure a Disk Device Group on the Secondary Cluster
How to Configure the File System on the Primary Cluster for the NFS Application
How to Configure the File System on the Secondary Cluster for the NFS Application
How to Create a Replication Resource Group on the Primary Cluster
How to Create a Replication Resource Group on the Secondary Cluster
How to Create an NFS Application Resource Group on the Primary Cluster
How to Create an NFS Application Resource Group on the Secondary Cluster
The following table lists the names of the groups and resources that are created for the example configuration.
Table 6–3 Summary of the Groups and Resources in the Example Configuration
| Group or Resource | Name | Description |
|---|---|---|
| Disk device group | devicegroup | The disk device group |
| Replication resource group and resources | devicegroup-stor-rg | The replication resource group |
| | lhost-reprg-prim, lhost-reprg-sec | The logical hostnames for the replication resource group on the primary cluster and the secondary cluster |
| | devicegroup-stor | The HAStoragePlus resource for the replication resource group |
| Application resource group and resources | nfs-rg | The application resource group |
| | lhost-nfsrg-prim, lhost-nfsrg-sec | The logical hostnames for the application resource group on the primary cluster and the secondary cluster |
| | nfs-dg-rs | The HAStoragePlus resource for the application |
| | nfs-rs | The NFS resource |
With the exception of devicegroup-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroup-stor-rg, where devicegroup is the name of the disk device group.
This example configuration uses VxVM software. For information about Solstice DiskSuite or Solaris Volume Manager software, see Chapter 3, Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software.
The following figure illustrates the volumes that are created in the disk device group.
The volumes defined in this procedure must not include disk-label private areas, for example, cylinder 0. The VxVM software manages this constraint automatically.
Ensure that you have completed the following tasks:
Read the guidelines and requirements in the following sections:
Set up the primary and secondary clusters as described in Connecting and Installing the Clusters.
Access nodeA as superuser.
nodeA is the first node of the primary cluster. For a reminder of which node is nodeA, see Figure 6–7.
Create a disk group on nodeA that contains four volumes, vol01 through vol04.
For information about configuring a disk group by using the VxVM software, see Chapter 4, SPARC: Installing and Configuring VERITAS Volume Manager.
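The exact VxVM commands depend on your disk layout. The following is a minimal sketch that generates the vxassist commands for the four volumes; the disk name (disk1) and volume size (2g) are assumptions, not values from this example configuration. The loop prints each command for review rather than running it, so you can adapt the sizes and disks before execution.

```shell
#!/bin/sh
# Sketch only: the disk name (disk1) and volume size (2g) are hypothetical.
# Prints the vxassist commands instead of executing them (dry run).
DG=devicegroup
for vol in vol01 vol02 vol03 vol04; do
    echo "/usr/sbin/vxassist -g $DG make $vol 2g disk1"
done
```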
Configure the disk group to create a disk device group.
nodeA# /usr/cluster/bin/scconf -a \
-D type=vxvm,name=devicegroup,nodelist=nodeA:nodeB
The disk device group is called devicegroup.
Create the file system for the disk device group.
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol01 < /dev/null
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol02 < /dev/null
Redirecting input from /dev/null causes newfs to run without waiting for interactive confirmation. No file system is needed for vol03 or vol04, which are used as raw volumes instead.
Go to How to Configure a Disk Device Group on the Secondary Cluster.
Ensure that you have completed the steps in How to Configure a Disk Device Group on the Primary Cluster.
Access nodeC as superuser.
Create a disk group on nodeC that contains four volumes, vol01 through vol04.
Configure the disk group to create a disk device group.
nodeC# /usr/cluster/bin/scconf -a \
-D type=vxvm,name=devicegroup,nodelist=nodeC
The disk device group is called devicegroup.
Create the file system for the disk device group.
nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol01 < /dev/null
nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol02 < /dev/null
Redirecting input from /dev/null causes newfs to run without waiting for interactive confirmation. No file system is needed for vol03 or vol04, which are used as raw volumes instead.
Go to How to Configure the File System on the Primary Cluster for the NFS Application.
Ensure that you have completed the steps in How to Configure a Disk Device Group on the Secondary Cluster.
On nodeA and nodeB, create a mount point directory for the NFS file system.
For example:
nodeA# mkdir /global/mountpoint
On nodeA and nodeB, configure the master volume to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 \
/global/mountpoint ufs 3 no global,logging
For a reminder of the volume names and volume numbers that are used in the disk device group, see Figure 6–8.
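A vfstab entry has seven whitespace-separated fields: device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options. Because the entry must be on a single line, a quick field count can catch a line that was accidentally wrapped or left incomplete. The following sketch checks the entry added in this procedure; it embeds the entry as a string so it is self-contained and does not touch the real /etc/vfstab.

```shell
#!/bin/sh
# Sanity-check a vfstab entry: it must have exactly 7 fields on one line.
# The entry below is the one added in this procedure.
entry='/dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 /global/mountpoint ufs 3 no global,logging'
nfields=$(echo "$entry" | awk '{print NF}')
if [ "$nfields" -eq 7 ]; then
    echo "vfstab entry OK: $nfields fields"
else
    echo "vfstab entry malformed: $nfields fields (expected 7)" >&2
    exit 1
fi
```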
On nodeA, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.
nodeA# /usr/sbin/vxassist -g devicegroup make vol05 120m disk1
Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.
On nodeA, resynchronize the device group with the Sun Cluster software.
nodeA# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
On nodeA, create the file system for vol05.
nodeA# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol05
On nodeA and nodeB, create a mount point for vol05.
For example:
nodeA# mkdir /global/etc
On nodeA and nodeB, configure vol05 to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol05 /dev/vx/rdsk/devicegroup/vol05 \
/global/etc ufs 3 yes global,logging
Mount vol05 on nodeA.
nodeA# mount /global/etc
Make vol05 accessible to remote systems.
Create a directory called /global/etc/SUNW.nfs on nodeA.
nodeA# mkdir -p /global/etc/SUNW.nfs
Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeA.
nodeA# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeA:
share -F nfs -o rw -d "HA NFS" /global/mountpoint
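If you script this step, appending the share line only when it is not already present keeps dfstab.nfs-rs from accumulating duplicate entries on repeated runs. A minimal sketch follows; a temporary file stands in for /global/etc/SUNW.nfs/dfstab.nfs-rs so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch: add the share line to dfstab.nfs-rs only if it is missing.
# A temporary file stands in for /global/etc/SUNW.nfs/dfstab.nfs-rs.
DFSTAB=$(mktemp /tmp/dfstab.XXXXXX)
LINE='share -F nfs -o rw -d "HA NFS" /global/mountpoint'
for run in 1 2; do    # run twice to show the append is idempotent
    grep -qF "$LINE" "$DFSTAB" || echo "$LINE" >> "$DFSTAB"
done
grep -cF "$LINE" "$DFSTAB"   # the line is present exactly once
rm -f "$DFSTAB"
```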
Go to How to Configure the File System on the Secondary Cluster for the NFS Application.
Ensure that you have completed the steps in How to Configure the File System on the Primary Cluster for the NFS Application.
On nodeC, create a mount point directory for the NFS file system.
For example:
nodeC# mkdir /global/mountpoint
On nodeC, configure the master volume to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol01 /dev/vx/rdsk/devicegroup/vol01 \
/global/mountpoint ufs 3 no global,logging
On nodeC, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.
nodeC# /usr/sbin/vxassist -g devicegroup make vol05 120m disk1
Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.
On nodeC, resynchronize the device group with the Sun Cluster software.
nodeC# /usr/cluster/bin/scconf -c -D name=devicegroup,sync
On nodeC, create the file system for vol05.
nodeC# /usr/sbin/newfs /dev/vx/rdsk/devicegroup/vol05
On nodeC, create a mount point for vol05.
For example:
nodeC# mkdir /global/etc
On nodeC, configure vol05 to be mounted automatically on the mount point.
Add or replace the following text in the /etc/vfstab file on nodeC. The text must be on a single line.
/dev/vx/dsk/devicegroup/vol05 /dev/vx/rdsk/devicegroup/vol05 \
/global/etc ufs 3 yes global,logging
Mount vol05 on nodeC.
nodeC# mount /global/etc
Make vol05 accessible to remote systems.
Create a directory called /global/etc/SUNW.nfs on nodeC.
nodeC# mkdir -p /global/etc/SUNW.nfs
Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeC.
nodeC# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeC:
share -F nfs -o rw -d "HA NFS" /global/mountpoint
Go to How to Create a Replication Resource Group on the Primary Cluster.
Ensure that you have completed the steps in How to Configure the File System on the Secondary Cluster for the NFS Application.
Access nodeA as superuser.
Register SUNW.HAStoragePlus as a resource type.
nodeA# /usr/cluster/bin/scrgadm -a -t SUNW.HAStoragePlus
Create a replication resource group for the disk device group.
nodeA# /usr/cluster/bin/scrgadm -a -g devicegroup-stor-rg -h nodeA,nodeB
devicegroup: The name of the disk device group
devicegroup-stor-rg: The name of the replication resource group
-h nodeA,nodeB: Specifies the cluster nodes that can master the replication resource group
Add a SUNW.HAStoragePlus resource to the replication resource group.
nodeA# /usr/cluster/bin/scrgadm -a -j devicegroup-stor \
-g devicegroup-stor-rg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=devicegroup \
-x AffinityOn=True
devicegroup-stor: The HAStoragePlus resource for the replication resource group.
-x GlobalDevicePaths=devicegroup: Specifies the extension property that Sun StorEdge Availability Suite software relies on.
-x AffinityOn=True: Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems that are defined by -x GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the replication resource group.
nodeA# /usr/cluster/bin/scrgadm -a -L -j lhost-reprg-prim \
-g devicegroup-stor-rg -l lhost-reprg-prim
lhost-reprg-prim is the logical hostname for the replication resource group on the primary cluster.
Enable the resources, manage the resource group, and bring the resource group online.
nodeA# /usr/cluster/bin/scswitch -Z -g devicegroup-stor-rg
nodeA# /usr/cluster/bin/scswitch -z -g devicegroup-stor-rg -h nodeA
Verify that the resource group is online.
nodeA# /usr/cluster/bin/scstat -g
Examine the resource group state field to confirm that the replication resource group is online on nodeA.
Go to How to Create a Replication Resource Group on the Secondary Cluster.
Ensure that you have completed the steps in How to Create a Replication Resource Group on the Primary Cluster.
Access nodeC as superuser.
Register SUNW.HAStoragePlus as a resource type.
nodeC# /usr/cluster/bin/scrgadm -a -t SUNW.HAStoragePlus
Create a replication resource group for the disk device group.
nodeC# /usr/cluster/bin/scrgadm -a -g devicegroup-stor-rg -h nodeC
devicegroup: The name of the disk device group
devicegroup-stor-rg: The name of the replication resource group
-h nodeC: Specifies the cluster node that can master the replication resource group
Add a SUNW.HAStoragePlus resource to the replication resource group.
nodeC# /usr/cluster/bin/scrgadm -a -j devicegroup-stor \
-g devicegroup-stor-rg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=devicegroup \
-x AffinityOn=True
devicegroup-stor: The HAStoragePlus resource for the replication resource group.
-x GlobalDevicePaths=devicegroup: Specifies the extension property that Sun StorEdge Availability Suite software relies on.
-x AffinityOn=True: Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems that are defined by -x GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the replication resource group.
nodeC# /usr/cluster/bin/scrgadm -a -L -j lhost-reprg-sec \
-g devicegroup-stor-rg -l lhost-reprg-sec
lhost-reprg-sec is the logical hostname for the replication resource group on the secondary cluster.
Enable the resources, manage the resource group, and bring the resource group online.
nodeC# /usr/cluster/bin/scswitch -Z -g devicegroup-stor-rg
Verify that the resource group is online.
nodeC# /usr/cluster/bin/scstat -g
Examine the resource group state field to confirm that the replication resource group is online on nodeC.
Go to How to Create an NFS Application Resource Group on the Primary Cluster.
This procedure describes how application resource groups are created for NFS. This procedure is specific to this application and cannot be used for another type of application.
Ensure that you have completed the steps in How to Create a Replication Resource Group on the Secondary Cluster.
Access nodeA as superuser.
Register SUNW.nfs as a resource type.
nodeA# scrgadm -a -t SUNW.nfs
If SUNW.HAStoragePlus has not been registered as a resource type, register it.
nodeA# scrgadm -a -t SUNW.HAStoragePlus
Create an application resource group for the devicegroup.
nodeA# scrgadm -a -g nfs-rg \
-y Pathprefix=/global/etc \
-y Auto_start_on_new_cluster=False \
-y RG_dependencies=devicegroup-stor-rg
nfs-rg: The name of the application resource group.
-y Pathprefix=/global/etc: Specifies a directory into which the resources in the group can write administrative files.
-y Auto_start_on_new_cluster=False: Specifies that the application resource group is not started automatically.
-y RG_dependencies=devicegroup-stor-rg: Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.
If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.
Add a SUNW.HAStoragePlus resource to the application resource group.
nodeA# scrgadm -a -j nfs-dg-rs -g nfs-rg \
-t SUNW.HAStoragePlus \
-x FileSystemMountPoints=/global/mountpoint \
-x AffinityOn=True
nfs-dg-rs: The name of the HAStoragePlus resource for the NFS application.
-x FileSystemMountPoints=/global/mountpoint: Specifies that the mount point for the file system is global.
-t SUNW.HAStoragePlus: Specifies that the resource is of the type SUNW.HAStoragePlus.
-x AffinityOn=True: Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems that are defined by -x FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the application resource group.
nodeA# /usr/cluster/bin/scrgadm -a -L -j lhost-nfsrg-prim -g nfs-rg \
-l lhost-nfsrg-prim
lhost-nfsrg-prim is the logical hostname of the application resource group on the primary cluster.
Add an NFS resource to the application resource group.
nodeA# /usr/cluster/bin/scrgadm -a -g nfs-rg \
-j nfs-rs -t SUNW.nfs -y Resource_dependencies=nfs-dg-rs
Enable the resources, manage the application resource group, and bring the application resource group online on nodeA.
nodeA# /usr/cluster/bin/scswitch -Z -g nfs-rg
nodeA# /usr/cluster/bin/scswitch -z -g nfs-rg -h nodeA
Verify that the application resource group is online.
nodeA# /usr/cluster/bin/scstat -g
Examine the resource group state field to confirm that the application resource group is online on nodeA.
Go to How to Create an NFS Application Resource Group on the Secondary Cluster.
Ensure that you have completed the steps in How to Create an NFS Application Resource Group on the Primary Cluster.
Access nodeC as superuser.
Register SUNW.nfs as a resource type.
nodeC# scrgadm -a -t SUNW.nfs
If SUNW.HAStoragePlus has not been registered as a resource type, register it.
nodeC# scrgadm -a -t SUNW.HAStoragePlus
Create an application resource group for the devicegroup.
nodeC# scrgadm -a -g nfs-rg \
-y Pathprefix=/global/etc \
-y Auto_start_on_new_cluster=False \
-y RG_dependencies=devicegroup-stor-rg
nfs-rg: The name of the application resource group.
-y Pathprefix=/global/etc: Specifies a directory into which the resources in the group can write administrative files.
-y Auto_start_on_new_cluster=False: Specifies that the application resource group is not started automatically.
-y RG_dependencies=devicegroup-stor-rg: Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.
If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.
Add a SUNW.HAStoragePlus resource to the application resource group.
nodeC# scrgadm -a -j nfs-dg-rs -g nfs-rg \
-t SUNW.HAStoragePlus \
-x FileSystemMountPoints=/global/mountpoint \
-x AffinityOn=True
nfs-dg-rs: The name of the HAStoragePlus resource for the NFS application.
-x FileSystemMountPoints=/global/mountpoint: Specifies that the mount point for the file system is global.
-t SUNW.HAStoragePlus: Specifies that the resource is of the type SUNW.HAStoragePlus.
-x AffinityOn=True: Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems that are defined by -x FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.
For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
Add a logical hostname resource to the application resource group.
nodeC# /usr/cluster/bin/scrgadm -a -L -j lhost-nfsrg-sec -g nfs-rg \
-l lhost-nfsrg-sec
lhost-nfsrg-sec is the logical hostname of the application resource group on the secondary cluster.
Add an NFS resource to the application resource group.
nodeC# /usr/cluster/bin/scrgadm -a -g nfs-rg \
-j nfs-rs -t SUNW.nfs -y Resource_dependencies=nfs-dg-rs
Ensure that the application resource group does not come online on nodeC.
nodeC# /usr/cluster/bin/scswitch -n -j nfs-rs
nodeC# /usr/cluster/bin/scswitch -n -j nfs-dg-rs
nodeC# /usr/cluster/bin/scswitch -n -j lhost-nfsrg-sec
nodeC# /usr/cluster/bin/scswitch -z -g nfs-rg -h ""
The resource group remains offline after a reboot, because Auto_start_on_new_cluster=False.
If the global volume is mounted on the primary cluster, unmount the global volume from the secondary cluster.
nodeC# umount /global/mountpoint
If the volume is mounted on a secondary cluster, the synchronization fails.
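You can verify programmatically that the volume is not mounted on the secondary cluster before starting replication. The following sketch reads a mount table in /etc/mnttab format (second field is the mount point); a temporary file stands in for the real table so the example is self-contained, and the mount point is the one used in this procedure.

```shell
#!/bin/sh
# Sketch: report whether a given mount point appears in the mount table.
# A temp file simulates /etc/mnttab (field 2 is the mount point).
MNTTAB=$(mktemp /tmp/mnttab.XXXXXX)
printf '/dev/dsk/c0t0d0s0\t/\tufs\trw\t0\n' > "$MNTTAB"   # sample entry

is_mounted() {
    # $1 = mount point, $2 = mnttab file; exit 0 if mounted
    awk -v mp="$1" '$2 == mp {found=1} END {exit !found}' "$2"
}

if is_mounted /global/mountpoint "$MNTTAB"; then
    echo "/global/mountpoint is mounted: unmount it before replication" >&2
else
    echo "/global/mountpoint is not mounted on this cluster"
fi
rm -f "$MNTTAB"
```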