Sun Cluster System Administration Guide for Solaris OS

Example of How to Configure Device Groups and Resource Groups

This section describes how device groups and resource groups are configured for an NFS application. For additional information, see Configuring Replication Resource Groups and Configuring Application Resource Groups.

This section contains the following procedures:

  * How to Configure a Device Group on the Primary Cluster
  * How to Configure a Device Group on the Secondary Cluster
  * How to Configure the File System on the Primary Cluster for the NFS Application
  * How to Configure the File System on the Secondary Cluster for the NFS Application
  * How to Create a Replication Resource Group on the Primary Cluster
  * How to Create a Replication Resource Group on the Secondary Cluster
  * How to Create an NFS Application Resource Group on the Primary Cluster
  * How to Create an NFS Application Resource Group on the Secondary Cluster

The following table lists the names of the groups and resources that are created for the example configuration.

Table A–3 Summary of the Groups and Resources in the Example Configuration

Group or Resource                          Name                                Description
-----------------------------------------  ----------------------------------  -----------------------------------------------------------
Device group                               devgrp                              The device group
Replication resource group and resources   devgrp-stor-rg                      The replication resource group
                                           lhost-reprg-prim, lhost-reprg-sec   The logical host names for the replication resource group on the primary cluster and the secondary cluster
                                           devgrp-stor                         The HAStoragePlus resource for the replication resource group
Application resource group and resources   nfs-rg                              The application resource group
                                           lhost-nfsrg-prim, lhost-nfsrg-sec   The logical host names for the application resource group on the primary cluster and the secondary cluster
                                           nfs-dg-rs                           The HAStoragePlus resource for the application
                                           nfs-rs                              The NFS resource

With the exception of devgrp-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroupname-stor-rg.
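For example, a device group named salesdg (a hypothetical name) would require a replication resource group named salesdg-stor-rg. In this example, the device group is named devgrp, so the replication resource group is named devgrp-stor-rg.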

This example configuration uses VxVM software. For information about Solaris Volume Manager software, see Chapter 4, Configuring Solaris Volume Manager Software, in Sun Cluster Software Installation Guide for Solaris OS.

The following figure illustrates the volumes that are created in the device group.

Figure A–8 Volumes for the Device Group



Note –

The volumes that are defined in this procedure must not include disk-label private areas, for example, cylinder 0. The VxVM software manages this constraint automatically.


How to Configure a Device Group on the Primary Cluster

Before You Begin

Ensure that you have completed the preceding setup tasks, including configuring the primary and secondary clusters and installing the Sun StorageTek Availability Suite software.

  1. Access nodeA as superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

    The node nodeA is the first node of the primary cluster. For a reminder of which node is nodeA, see Figure A–7.

  2. Create a disk group on nodeA that contains four volumes: volume 1, vol01, through volume 4, vol04.

    For information about configuring a disk group by using the VxVM software, see Chapter 5, Installing and Configuring Veritas Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS.
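
    This step does not show commands. For illustration only, the following is a minimal sketch, assuming two disks, c1t1d0 and c1t2d0, that are already initialized for VxVM use, and 2-Gbyte volumes; your disk names and volume sizes will differ. The same pattern applies on nodeC in the next procedure.


    nodeA# vxdg init devgrp disk1=c1t1d0 disk2=c1t2d0
    nodeA# vxassist -g devgrp make vol01 2g
    nodeA# vxassist -g devgrp make vol02 2g
    nodeA# vxassist -g devgrp make vol03 2g
    nodeA# vxassist -g devgrp make vol04 2g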

  3. Configure the disk group to create a device group.


    nodeA# cldevicegroup create -t vxvm -n nodeA,nodeB devgrp
    

    The device group is called devgrp.
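
    Optionally, you can verify the new device group's type and node list with the cldevicegroup show command:


    nodeA# cldevicegroup show devgrp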

  4. Create the file system for the device group.


    nodeA# newfs /dev/vx/rdsk/devgrp/vol01 < /dev/null
    nodeA# newfs /dev/vx/rdsk/devgrp/vol02 < /dev/null
    

    No file system is needed for vol03 or vol04, which are instead used as raw volumes. Redirecting stdin from /dev/null lets newfs run without prompting for confirmation.

Next Steps

Go to How to Configure a Device Group on the Secondary Cluster.

How to Configure a Device Group on the Secondary Cluster

Before You Begin

Complete the procedure How to Configure a Device Group on the Primary Cluster.

  1. Access nodeC as superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create a disk group on nodeC that contains four volumes: volume 1, vol01, through volume 4, vol04.

  3. Configure the disk group to create a device group.


    nodeC# cldevicegroup create -t vxvm -n nodeC devgrp
    

    The device group is named devgrp.

  4. Create the file system for the device group.


    nodeC# newfs /dev/vx/rdsk/devgrp/vol01 < /dev/null
    nodeC# newfs /dev/vx/rdsk/devgrp/vol02 < /dev/null
    

    No file system is needed for vol03 or vol04, which are instead used as raw volumes.

Next Steps

Go to How to Configure the File System on the Primary Cluster for the NFS Application.

How to Configure the File System on the Primary Cluster for the NFS Application

Before You Begin

Complete the procedure How to Configure a Device Group on the Secondary Cluster.

  1. On nodeA and nodeB, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.

  2. On nodeA and nodeB, create a mount-point directory for the NFS file system.

    For example:


    nodeA# mkdir /global/mountpoint
    
  3. On nodeA and nodeB, configure the master volume to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The entry must occupy a single line in the file; the backslash shown here indicates continuation for display purposes only.


    /dev/vx/dsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol01 \
    /global/mountpoint ufs 3 no global,logging

    For a reminder of the volume names and volume numbers that are used in the device group, see Figure A–8.

  4. On nodeA, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.


    nodeA# vxassist -g devgrp make vol05 120m disk1
    

    Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.

  5. On nodeA, resynchronize the device group with the Sun Cluster software.


    nodeA# cldevicegroup sync devgrp
    
  6. On nodeA, create the file system for vol05.


    nodeA# newfs /dev/vx/rdsk/devgrp/vol05
    
  7. On nodeA and nodeB, create a mount point for vol05.

    The following example creates the mount point /global/etc.


    nodeA# mkdir /global/etc
    
  8. On nodeA and nodeB, configure vol05 to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeA and nodeB. The entry must occupy a single line in the file; the backslash shown here indicates continuation for display purposes only.


    /dev/vx/dsk/devgrp/vol05 /dev/vx/rdsk/devgrp/vol05 \
    /global/etc ufs 3 yes global,logging
  9. Mount vol05 on nodeA.


    nodeA# mount /global/etc
    
  10. Make vol05 accessible to remote systems.

    1. Create a directory called /global/etc/SUNW.nfs on nodeA.


      nodeA# mkdir -p /global/etc/SUNW.nfs
      
    2. Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeA.


      nodeA# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
      
    3. Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeA.


      share -F nfs -o rw -d "HA NFS" /global/mountpoint
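
    The NFS resource itself is created and brought online in a later procedure. Once it is online, you can optionally confirm the export from any NFS client host (client# is a hypothetical prompt); dfshares is shown here as an assumed verification step:


      client# dfshares lhost-nfsrg-prim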
      
Next Steps

Go to How to Configure the File System on the Secondary Cluster for the NFS Application.

How to Configure the File System on the Secondary Cluster for the NFS Application

Before You Begin

Complete the procedure How to Configure the File System on the Primary Cluster for the NFS Application.

  1. On nodeC, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.

  2. On nodeC, create a mount-point directory for the NFS file system.

    For example:


    nodeC# mkdir /global/mountpoint
    
  3. On nodeC, configure the master volume to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeC. The entry must occupy a single line in the file; the backslash shown here indicates continuation for display purposes only.


    /dev/vx/dsk/devgrp/vol01 /dev/vx/rdsk/devgrp/vol01 \
    /global/mountpoint ufs 3 no global,logging
  4. On nodeC, create a volume for the file system information that is used by the Sun Cluster HA for NFS data service.


    nodeC# vxassist -g devgrp make vol05 120m disk1
    

    Volume 5, vol05, contains the file system information that is used by the Sun Cluster HA for NFS data service.

  5. On nodeC, resynchronize the device group with the Sun Cluster software.


    nodeC# cldevicegroup sync devgrp
    
  6. On nodeC, create the file system for vol05.


    nodeC# newfs /dev/vx/rdsk/devgrp/vol05
    
  7. On nodeC, create a mount point for vol05.

    The following example creates the mount point /global/etc.


    nodeC# mkdir /global/etc
    
  8. On nodeC, configure vol05 to be mounted automatically on the mount point.

    Add or replace the following text in the /etc/vfstab file on nodeC. The entry must occupy a single line in the file; the backslash shown here indicates continuation for display purposes only.


    /dev/vx/dsk/devgrp/vol05 /dev/vx/rdsk/devgrp/vol05 \
    /global/etc ufs 3 yes global,logging
  9. Mount vol05 on nodeC.


    nodeC# mount /global/etc
    
  10. Make vol05 accessible to remote systems.

    1. Create a directory called /global/etc/SUNW.nfs on nodeC.


      nodeC# mkdir -p /global/etc/SUNW.nfs
      
    2. Create the file /global/etc/SUNW.nfs/dfstab.nfs-rs on nodeC.


      nodeC# touch /global/etc/SUNW.nfs/dfstab.nfs-rs
      
    3. Add the following line to the /global/etc/SUNW.nfs/dfstab.nfs-rs file on nodeC:


      share -F nfs -o rw -d "HA NFS" /global/mountpoint
      
Next Steps

Go to How to Create a Replication Resource Group on the Primary Cluster.

How to Create a Replication Resource Group on the Primary Cluster

Before You Begin

Complete the procedure How to Configure the File System on the Secondary Cluster for the NFS Application.

  1. Access nodeA as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.

  2. Register the SUNW.HAStoragePlus resource type.


    nodeA# clresourcetype register SUNW.HAStoragePlus
    
  3. Create a replication resource group for the device group.


    nodeA# clresourcegroup create -n nodeA,nodeB devgrp-stor-rg
    
    -n nodeA,nodeB

    Specifies that cluster nodes nodeA and nodeB can master the replication resource group.

    devgrp-stor-rg

    The name of the replication resource group. In this name, devgrp specifies the name of the device group.

  4. Add a SUNW.HAStoragePlus resource to the replication resource group.


    nodeA# clresource create -g devgrp-stor-rg -t SUNW.HAStoragePlus \
    -p GlobalDevicePaths=devgrp \
    -p AffinityOn=True \
    devgrp-stor
    
    -g

    Specifies the resource group to which the resource is added.

    -p GlobalDevicePaths=

    Specifies the extension property that Sun StorageTek Availability Suite software relies on.

    -p AffinityOn=True

    Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -p GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.
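
    Optionally, to review how these extension properties are set on the new resource, you can list them with the clresource show command:


    nodeA# clresource show -v devgrp-stor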

  5. Add a logical hostname resource to the replication resource group.


    nodeA# clreslogicalhostname create -g devgrp-stor-rg lhost-reprg-prim
    

    The logical hostname for the replication resource group on the primary cluster is named lhost-reprg-prim.

  6. Enable the resources, manage the resource group, and bring the resource group online.


    nodeA# clresourcegroup online -e -M -n nodeA devgrp-stor-rg
    
    -e

    Enables associated resources.

    -M

    Manages the resource group.

    -n

    Specifies the node on which to bring the resource group online.

  7. Verify that the resource group is online.


    nodeA# clresourcegroup status devgrp-stor-rg
    

    Examine the resource group state field to confirm that the replication resource group is online on nodeA.
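
    Because AffinityOn is set to True, you can optionally also confirm that the device group devgrp is mastered by the same node:


    nodeA# cldevicegroup status devgrp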

Next Steps

Go to How to Create a Replication Resource Group on the Secondary Cluster.

How to Create a Replication Resource Group on the Secondary Cluster

Before You Begin

Complete the procedure How to Create a Replication Resource Group on the Primary Cluster.

  1. Access nodeC as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.

  2. Register SUNW.HAStoragePlus as a resource type.


    nodeC# clresourcetype register SUNW.HAStoragePlus
    
  3. Create a replication resource group for the device group.


    nodeC# clresourcegroup create -n nodeC devgrp-stor-rg
    
    create

    Creates the resource group.

    -n

    Specifies the node list for the resource group.

    devgrp-stor-rg

    The name of the replication resource group. In this name, devgrp specifies the name of the device group.

  4. Add a SUNW.HAStoragePlus resource to the replication resource group.


    nodeC# clresource create -g devgrp-stor-rg \
    -t SUNW.HAStoragePlus \
    -p GlobalDevicePaths=devgrp \
    -p AffinityOn=True \
    devgrp-stor
    
    create

    Creates the resource.

    -g

    Specifies the resource group to which the resource is added.

    -t

    Specifies the resource type.

    -p GlobalDevicePaths=

    Specifies the extension property that Sun StorageTek Availability Suite software relies on.

    -p AffinityOn=True

    Specifies that the SUNW.HAStoragePlus resource must perform an affinity switchover for the global devices and cluster file systems defined by -p GlobalDevicePaths=. Therefore, when the replication resource group fails over or is switched over, the associated device group is switched over.

    devgrp-stor

    The HAStoragePlus resource for the replication resource group.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  5. Add a logical hostname resource to the replication resource group.


    nodeC# clreslogicalhostname create -g devgrp-stor-rg lhost-reprg-sec
    

    The logical hostname for the replication resource group on the secondary cluster is named lhost-reprg-sec.

  6. Enable the resources, manage the resource group, and bring the resource group online.


    nodeC# clresourcegroup online -e -M -n nodeC devgrp-stor-rg
    
    online

    Brings the resource group online.

    -e

    Enables associated resources.

    -M

    Manages the resource group.

    -n

    Specifies the node on which to bring the resource group online.

  7. Verify that the resource group is online.


    nodeC# clresourcegroup status devgrp-stor-rg
    

    Examine the resource group state field to confirm that the replication resource group is online on nodeC.

Next Steps

Go to How to Create an NFS Application Resource Group on the Primary Cluster.

How to Create an NFS Application Resource Group on the Primary Cluster

This procedure describes how application resource groups are created for NFS. This procedure is specific to this application and cannot be used for another type of application.

Before You Begin

Complete the procedure How to Create a Replication Resource Group on the Secondary Cluster.

  1. Access nodeA as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.

  2. Register SUNW.nfs as a resource type.


    nodeA# clresourcetype register SUNW.nfs
    
  3. If SUNW.HAStoragePlus has not been registered as a resource type, register it.


    nodeA# clresourcetype register SUNW.HAStoragePlus
    
  4. Create an application resource group for the device group devgrp.


    nodeA# clresourcegroup create \
    -p Pathprefix=/global/etc \
    -p Auto_start_on_new_cluster=False \
    -p RG_dependencies=devgrp-stor-rg \
    nfs-rg
    
    Pathprefix=/global/etc

    Specifies the directory into which the resources in the group can write administrative files.

    Auto_start_on_new_cluster=False

    Specifies that the application resource group is not started automatically.

    RG_dependencies=devgrp-stor-rg

    Specifies the resource group that the application resource group depends on. In this example, the application resource group depends on the replication resource group devgrp-stor-rg.

    If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over, as shown in the example after this list.

    nfs-rg

    The name of the application resource group.
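
    For illustration, if the replication resource group has been switched to nodeB, you could switch the application resource group manually with a command of this form (nodeB is the assumed target node):


    nodeA# clresourcegroup switch -n nodeB nfs-rg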

  5. Add a SUNW.HAStoragePlus resource to the application resource group.


    nodeA# clresource create -g nfs-rg \
    -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/mountpoint \
    -p AffinityOn=True \
    nfs-dg-rs
    
    create

    Creates the resource.

    -g

    Specifies the resource group to which the resource is added.

    -t SUNW.HAStoragePlus

    Specifies that the resource is of the type SUNW.HAStoragePlus.

    -p FileSystemMountPoints=/global/mountpoint

    Specifies the mount point for the file system. In this example, the mount point is the globally mounted /global/mountpoint directory.

    -p AffinityOn=True

    Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -p FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.

    nfs-dg-rs

    The name of the HAStoragePlus resource for the NFS application.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  6. Add a logical hostname resource to the application resource group.


    nodeA# clreslogicalhostname create -g nfs-rg \
    lhost-nfsrg-prim
    

    The logical hostname of the application resource group on the primary cluster is named lhost-nfsrg-prim.

  7. Add an NFS resource to the application resource group.


    nodeA# clresource create -g nfs-rg \
    -t SUNW.nfs -p Resource_dependencies=nfs-dg-rs nfs-rs
    

    The NFS resource is named nfs-rs. It depends on the HAStoragePlus resource nfs-dg-rs.

  8. Enable the resources, manage the application resource group, and bring the application resource group online.

    1. Enable the NFS resource.


      nodeA# clresource enable nfs-rs
      
    2. Bring the application resource group online on nodeA.


      nodeA# clresourcegroup online -e -M -n nodeA nfs-rg
      
      online

      Brings the resource group online.

      -e

      Enables the associated resources.

      -M

      Manages the resource group.

      -n

      Specifies the node on which to bring the resource group online.

      nfs-rg

      The name of the resource group.

  9. Verify that the application resource group is online.


    nodeA# clresourcegroup status
    

    Examine the resource group state field to confirm that the application resource group is online on nodeA.

Next Steps

Go to How to Create an NFS Application Resource Group on the Secondary Cluster.

How to Create an NFS Application Resource Group on the Secondary Cluster

Before You Begin

Complete the procedure How to Create an NFS Application Resource Group on the Primary Cluster.

  1. Access nodeC as superuser or assume a role that provides solaris.cluster.modify, solaris.cluster.admin, and solaris.cluster.read RBAC authorization.

  2. Register SUNW.nfs as a resource type.


    nodeC# clresourcetype register SUNW.nfs
    
  3. If SUNW.HAStoragePlus has not been registered as a resource type, register it.


    nodeC# clresourcetype register SUNW.HAStoragePlus
    
  4. Create an application resource group for the device group.


    nodeC# clresourcegroup create \
    -p Pathprefix=/global/etc \
    -p Auto_start_on_new_cluster=False \
    -p RG_dependencies=devgrp-stor-rg \
    nfs-rg
    
    create

    Creates the resource group.

    -p

    Specifies a property of the resource group.

    Pathprefix=/global/etc

    Specifies a directory into which the resources in the group can write administrative files.

    Auto_start_on_new_cluster=False

    Specifies that the application resource group is not started automatically.

    RG_dependencies=devgrp-stor-rg

    Specifies the resource groups that the application resource group depends on. In this example, the application resource group depends on the replication resource group.

    If the application resource group is switched over to a new primary node, the replication resource group is automatically switched over. However, if the replication resource group is switched over to a new primary node, the application resource group must be manually switched over.

    nfs-rg

    The name of the application resource group.

  5. Add a SUNW.HAStoragePlus resource to the application resource group.


    nodeC# clresource create -g nfs-rg \
    -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/mountpoint \
    -p AffinityOn=True \
    nfs-dg-rs
    
    create

    Creates the resource.

    -g

    Specifies the resource group to which the resource is added.

    -t SUNW.HAStoragePlus

    Specifies that the resource is of the type SUNW.HAStoragePlus.

    -p

    Specifies a property of the resource.

    FileSystemMountPoints=/global/mountpoint

    Specifies the mount point for the file system. In this example, the mount point is the globally mounted /global/mountpoint directory.

    AffinityOn=True

    Specifies that the application resource must perform an affinity switchover for the global devices and cluster file systems defined by -p FileSystemMountPoints=. Therefore, when the application resource group fails over or is switched over, the associated device group is switched over.

    nfs-dg-rs

    The name of the HAStoragePlus resource for the NFS application.

    For more information about these extension properties, see the SUNW.HAStoragePlus(5) man page.

  6. Add a logical hostname resource to the application resource group.


    nodeC# clreslogicalhostname create -g nfs-rg \
    lhost-nfsrg-sec
    

    The logical hostname of the application resource group on the secondary cluster is named lhost-nfsrg-sec.

  7. Add an NFS resource to the application resource group.


    nodeC# clresource create -g nfs-rg \
    -t SUNW.nfs -p Resource_dependencies=nfs-dg-rs nfs-rs
    
  8. Ensure that the application resource group does not come online on nodeC.


    nodeC# clresource disable -n nodeC nfs-rs
    nodeC# clresource disable -n nodeC nfs-dg-rs
    nodeC# clresource disable -n nodeC lhost-nfsrg-sec
    nodeC# clresourcegroup online -n "" nfs-rg
    

    The resource group remains offline after a reboot, because Auto_start_on_new_cluster=False.
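
    During an actual switchover or takeover to the secondary cluster, described in later procedures, the group is brought online explicitly with a command of this form:


    nodeC# clresourcegroup online -e -M -n nodeC nfs-rg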

  9. If the global volume is mounted on the primary cluster, unmount the global volume from the secondary cluster.


    nodeC# umount /global/mountpoint
    

    If the volume is mounted on the secondary cluster, the synchronization fails.

Next Steps

Go to Example of How to Enable Data Replication.