Sun Cluster Software Installation Guide for Solaris OS

Chapter 3 Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software

Install and configure your local and multihost disks for Solstice DiskSuite or Solaris Volume Manager software by using the procedures in this chapter, along with the planning information in Planning Volume Management. See your Solstice DiskSuite or Solaris Volume Manager documentation for additional details.


Note –

DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster software. Use the command-line interface or Sun Cluster utilities to configure Solstice DiskSuite or Solaris Volume Manager software.


The following information and procedures are in this chapter:

Task Map: Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software

The following table lists the tasks that you perform to install and configure Solstice DiskSuite or Solaris Volume Manager software for Sun Cluster configurations. Conditions under which you can skip a procedure are noted at the beginning of that procedure.

Table 3–1 Task Map: Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software

Task 

Instructions 

1. Install and configure Solstice DiskSuite or Solaris Volume Manager software 

1a. Plan the layout of your Solstice DiskSuite or Solaris Volume Manager configuration. 

Planning Volume Management

1b. (Solaris 8 only) Install Solstice DiskSuite software.

How to Install Solstice DiskSuite Software

1c. Calculate the number of metadevice names and disk sets needed for your configuration, and modify the /kernel/drv/md.conf file.

How to Set the Number of Metadevice or Volume Names and Disk Sets

1d. Create state database replicas on the local disks. 

How to Create State Database Replicas

1e. (Optional) Mirror file systems on the root disk.

Mirroring the Root Disk

2. Create disk sets 

2a. Create disk sets by using the metaset command.

How to Create a Disk Set

2b. Add drives to the disk sets. 

How to Add Drives to a Disk Set

2c. (Optional) Repartition drives in a disk set to allocate space to slices 1 through 6.

How to Repartition Drives in a Disk Set

2d. List DID pseudo-driver mappings and define metadevices or volumes in the /etc/lvm/md.tab files.

How to Create an md.tab File

2e. Initialize the md.tab files.

How to Activate Metadevices or Volumes

3. (Dual-string configurations only) Configure dual-string mediator hosts, check the status of mediator data, and, if necessary, fix bad mediator data.

  1. How to Add Mediator Hosts

  2. How to Check the Status of Mediator Data

4. Configure the cluster. 

Configuring the Cluster

Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software

This section provides the following information and procedures to install and configure Solstice DiskSuite or Solaris Volume Manager software:

Solstice DiskSuite or Solaris Volume Manager Configuration Example

The following example helps to explain the process for determining the number of drives to place in each disk set. In this example, three storage devices are used. The existing applications are NFS (two file systems of 5 Gbytes each) and two ORACLE databases (one of 5 Gbytes and one of 10 Gbytes).

The following table shows the calculations that are used to determine the number of drives that are needed in the sample configuration. In a configuration with three storage devices, you would need 28 drives, which would be divided as evenly as possible among each of the three storage devices. Note that the 5-Gbyte file systems were given an additional 1 Gbyte of disk space because the number of drives needed was rounded up.

Table 3–2 Determining the Number of Drives Needed for a Configuration

Use               Data        Disk Storage Needed               Drives Needed
nfs1              5 Gbytes    3x2.1 Gbyte disks * 2 (Mirror)    6
nfs2              5 Gbytes    3x2.1 Gbyte disks * 2 (Mirror)    6
SPARC: oracle1    5 Gbytes    3x2.1 Gbyte disks * 2 (Mirror)    6
SPARC: oracle2    10 Gbytes   5x2.1 Gbyte disks * 2 (Mirror)    10
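
For example, each 5-Gbyte application needs three 2.1-Gbyte drives to hold its data (3 x 2.1 Gbytes = 6.3 Gbytes), and two-way mirroring doubles that to 6 drives. The 10-Gbyte database needs five 2.1-Gbyte drives, doubled to 10 drives. The total is therefore 6 + 6 + 6 + 10 = 28 drives.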

The following table shows the allocation of drives among the two disk sets and four data services.

Table 3–3 Division of Disk Sets

Disk Set       Data Services    Drives    Storage Device 1    Storage Device 2    Storage Device 3
dg-schost-1    nfs1, oracle1    12        4                   4                   4
dg-schost-2    nfs2, oracle2    16        6                   5                   5

Initially, four drives on each storage device (a total of 12 drives) are assigned to dg-schost-1, and five or six drives on each (a total of 16 drives) are assigned to dg-schost-2.

No hot spare disks are assigned to either disk set. However, assigning a minimum of one hot spare disk per storage device per disk set enables one drive to be hot spared, which restores full two-way mirroring.

How to Install Solstice DiskSuite Software


Note –

Do not perform this procedure under the following circumstances:

  • You used SunPlex Installer to install Solstice DiskSuite software.

  • Your cluster runs the Solaris 9 OS, which installs Solaris Volume Manager software as part of the Solaris installation.


Perform this task on each node in the cluster.

  1. Have available the following information.

  2. Become superuser on the cluster node.

  3. If you install from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive on the node.

    This step assumes that the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices.

  4. Install the Solstice DiskSuite software packages in the order that is shown in the following example.


    # cd /cdrom/sol_8_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages
    # pkgadd -d . SUNWmdr SUNWmdu [SUNWmdx] optional-pkgs
    


    Note –

    If you have Solstice DiskSuite software patches to install, do not reboot after you install the Solstice DiskSuite software.


    The SUNWmdr and SUNWmdu packages are required for all Solstice DiskSuite installations. The SUNWmdx package is also required for the 64-bit Solstice DiskSuite installation.

    See your Solstice DiskSuite installation documentation for information about optional software packages.

  5. If you installed from a CD-ROM, eject the CD-ROM.
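
    For example, if the vold daemon manages the CD-ROM device, a command similar to the following ejects the disc:


    # eject cdrom
    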

  6. Install any Solstice DiskSuite patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
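
    For example, patches are typically applied with the patchadd(1M) command. The path shown here is a placeholder for the location to which you downloaded and unpacked the patch:


    # patchadd patch-dir/patch-id
    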

  7. Repeat Step 1 through Step 6 on the other nodes of the cluster.

  8. From one node of the cluster, manually populate the global-device namespace for Solstice DiskSuite.


    # scgdevs
    


    Note –

    The scgdevs command might return a message similar to the following:


    Could not open /dev/rdsk/c0t6d0s2 to verify device id, Device busy

    If the listed device is a CD-ROM device, you can safely ignore the message.


  9. Set the number of metadevice names and disk sets that are expected in the cluster.

    Go to How to Set the Number of Metadevice or Volume Names and Disk Sets.

How to Set the Number of Metadevice or Volume Names and Disk Sets


Note –

If you used SunPlex Installer to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.


This procedure describes how to determine the number of Solstice DiskSuite metadevice or Solaris Volume Manager volume names and disk sets that are needed for your configuration. This procedure also describes how to modify the /kernel/drv/md.conf file to specify these numbers.


Tip –

The default number of metadevice or volume names per disk set is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later.

At the same time, keep the value of the nmd field and the md_nsets field as low as possible. Memory structures exist for all possible devices as determined by nmd and md_nsets, even if you have not created those devices. For optimal performance, keep the value of nmd and md_nsets only slightly higher than the number of metadevices or volumes that you plan to use.


  1. Have available the Disk Device Group Configurations Worksheet.

  2. Determine the total number of disk sets that you expect to need in the cluster, then add one more disk set for private disk management.

    The cluster can have a maximum of 32 disk sets, 31 disk sets for general use plus one disk set for private disk management. The default number of disk sets is 4. You supply this value for the md_nsets field in Step 4.

  3. Determine the largest metadevice or volume name that you expect to need for any disk set in the cluster.

    Each disk set can have a maximum of 8192 metadevice or volume names. You supply this value for the nmd field in Step 4.

    1. Determine the quantity of metadevice or volume names that you expect to need for each disk set.

      If you use local metadevices or volumes, ensure that each local metadevice or volume name is unique throughout the cluster and does not use the same name as any device-ID name in the cluster.


      Tip –

      Choose a range of numbers to use exclusively for device-ID names and a range for each node to use exclusively for its local metadevice or volume names. For example, device-ID names might use the range from d1 to d100. Local metadevices or volumes on node 1 might use names in the range from d100 to d199. And local metadevices or volumes on node 2 might use d200 to d299.


    2. Determine the highest of the metadevice or volume names that you expect to use in any disk set.

      The quantity of metadevice or volume names to set is based on the metadevice or volume name value rather than on the actual quantity. For example, if your metadevice or volume names range from d950 to d1000, Solstice DiskSuite or Solaris Volume Manager software requires that you set the value at 1000 names, not 50.

  4. On each node, become superuser and edit the /kernel/drv/md.conf file.


    Caution –

    All cluster nodes (or cluster pairs in the cluster-pair topology) must have identical /kernel/drv/md.conf files, regardless of the number of disk sets served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite or Solaris Volume Manager errors and possible loss of data.


    1. Set the md_nsets field to the value that you determined in Step 2.

    2. Set the nmd field to the value that you determined in Step 3.
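
      For example, if your configuration needs four disk sets plus one for private disk management (md_nsets set to 5) and the highest metadevice or volume name that you expect to use is d1000 (nmd set to 1000), the md driver line in the md.conf file might look similar to the following. The exact contents of the file vary by Solaris release:


      name="md" parent="pseudo" nmd=1000 md_nsets=5;
      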

  5. On each node, perform a reconfiguration reboot.


    # touch /reconfigure
    # shutdown -g0 -y -i6
    

    Changes to the /kernel/drv/md.conf file become operative after you perform a reconfiguration reboot.

  6. Create local state database replicas.

    Go to How to Create State Database Replicas.

How to Create State Database Replicas


Note –

If you used SunPlex Installer to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.


Perform this procedure on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Create state database replicas on one or more local devices for each cluster node.

    Use the physical name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.


    # metadb -af slice-1 slice-2 slice-3
    


    Tip –

    To provide protection of state data, which is necessary to run Solstice DiskSuite or Solaris Volume Manager software, create at least three replicas for each node. Also, you can place replicas on more than one device to provide protection if one of the devices fails.


    See the metadb(1M) man page and your Solstice DiskSuite or Solaris Volume Manager documentation for details.

  3. Verify the replicas.


    # metadb
    

    The metadb command displays the list of replicas.

  4. To mirror file systems on the root disk, go to Mirroring the Root Disk.

    Otherwise, go to Creating Disk Sets in a Cluster to create Solstice DiskSuite or Solaris Volume Manager disk sets.

Example—Creating State Database Replicas

The following example shows three Solstice DiskSuite state database replicas. Each replica is created on a different device. For Solaris Volume Manager, the replica size would be larger.


# metadb -af c0t0d0s7 c0t1d0s7 c1t0d0s7
# metadb
flags            first blk      block count
    a       u       16          1034         /dev/dsk/c0t0d0s7
    a       u       16          1034         /dev/dsk/c0t1d0s7
    a       u       16          1034         /dev/dsk/c1t0d0s7

Mirroring the Root Disk

Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk. Each file-system type is mirrored by using a different method.

Use the following procedures to mirror each type of file system.


Note –

Some of the steps in these mirroring procedures can cause an error message similar to the following, which is harmless and can be ignored.


metainit: dg-schost-1: d1s0: not a metadevice



Caution –

For local disk mirroring, do not use /dev/global as the path when you specify the disk name. If you specify this path for anything other than cluster file systems, the system cannot boot.


How to Mirror the Root (/) File System

Use this procedure to mirror the root (/) file system.

  1. Become superuser on the node.

  2. Use the metainit(1M) command to put the root slice in a single-slice (one-way) concatenation.

    Specify the physical disk name of the root-disk slice (cNtXdYsZ).


    # metainit -f submirror1 1 1 root-disk-slice
    

  3. Create a second concatenation.


    # metainit submirror2 1 1 submirror-disk-slice
    

  4. Create a one-way mirror with one submirror.


    # metainit mirror -m submirror1
    


    Note –

    The metadevice or volume name for the mirror must be unique throughout the cluster.


  5. Run the metaroot(1M) command.

    This command edits the /etc/vfstab and /etc/system files so the system can be booted with the root (/) file system on a metadevice or volume.


    # metaroot mirror
    

  6. Run the lockfs(1M) command.

    This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems.


    # lockfs -fa
    

  7. Move any resource groups or device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups

  8. Reboot the node.

    The reboot remounts the newly mirrored root (/) file system.


    # shutdown -g0 -y -i6
    

  9. Use the metattach(1M) command to attach the second submirror to the mirror.


    # metattach mirror submirror2
    

  10. If the disk that is used to mirror the root disk is physically connected to more than one node (multihosted), enable the localonly property.

    Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the scdidadm(1M) -L command to display the full device-ID path name of the raw-disk device group.

      In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.


      # scdidadm -L
      …
      1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
      

    2. View the node list of the raw-disk device group.

      Output looks similar to the following:


      # scconf -pvv | grep dsk/d2
      Device group name:						dsk/d2
      …
        (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
      …

    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      # scconf -r -D name=dsk/dN,nodelist=node
      
      -D name=dsk/dN

      Specifies the cluster-unique name of the raw-disk device group

      nodelist=node

      Specifies the name of the node or nodes to remove from the node list

    4. Use the scconf(1M) command to enable the localonly property.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      # scconf -c -D name=rawdisk-groupname,localonly=true
      
      -D name=rawdisk-groupname

      Specifies the name of the raw-disk device group

      For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  11. Record the alternate boot path for possible future use.

    If the primary boot device fails, you can then boot from this alternate boot device. See “Troubleshooting the System” in Solstice DiskSuite 4.2.1 User's Guide or “Mirroring root (/) Special Considerations” in Solaris Volume Manager Administration Guide for more information about alternate boot devices.


    # ls -l /dev/rdsk/root-disk-slice
    

  12. Repeat Step 1 through Step 11 on each remaining node of the cluster.

    Ensure that each metadevice or volume name for a mirror is unique throughout the cluster.

  13. (Optional) To mirror the global namespace, /global/.devices/node@nodeid, go to How to Mirror the Global Namespace.

  14. (Optional) To mirror file systems other than root (/) that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.

  15. (Optional) To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.

  16. Go to Creating Disk Sets in a Cluster to create a disk set.

Example—Mirroring the Root (/) File System

The following example shows the creation of mirror d0 on the node phys-schost-1, which consists of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


(Create the mirror)
# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
# metainit d20 1 1 c2t2d0s0
d20: Concat/Stripe is setup
# metainit d0 -m d10
d0: Mirror is setup
# metaroot d0
# lockfs -fa
 
(Move resource groups and device groups from phys-schost-1)
# scswitch -S -h phys-schost-1
 
(Reboot the node)
# shutdown -g0 -y -i6
 
(Attach the second submirror)
# metattach d0 d20
d0: Submirror d20 is attached
 
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
 
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
 
(Record the alternate boot path)
# ls -l /dev/rdsk/c2t2d0s0
lrwxrwxrwx  1 root     root          57 Apr 25 20:11 /dev/rdsk/c2t2d0s0 
-> ../../devices/node@1/pci@1f,0/pci@1/scsi@3,1/disk@2,0:a,raw

How to Mirror the Global Namespace

Use this procedure to mirror the global namespace, /global/.devices/node@nodeid/.

  1. Become superuser on a node of the cluster.

  2. Put the global namespace slice in a single-slice (one-way) concatenation.

    Use the physical disk name of the disk slice (cNtXdYsZ).


    # metainit -f submirror1 1 1 diskslice
    

  3. Create a second concatenation.


    # metainit submirror2 1 1 submirror-diskslice
    

  4. Create a one-way mirror with one submirror.


    # metainit mirror -m submirror1
    


    Note –

    The metadevice or volume name for the mirror must be unique throughout the cluster.


  5. Attach the second submirror to the mirror.

    This attachment starts a synchronization of the submirrors.


    # metattach mirror submirror2
    

  6. Edit the /etc/vfstab file entry for the /global/.devices/node@nodeid file system.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    # vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /global/.devices/node@nodeid ufs 2 no global

  7. Repeat Step 1 through Step 6 on each remaining node of the cluster.

  8. Wait for the synchronization of the mirrors, started in Step 5, to complete.

    Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.


    # metastat mirror
    

  9. If the disk that is used to mirror the global namespace is physically connected to more than one node (multihosted), enable the localonly property.

    Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the global namespace. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the scdidadm(1M) command to display the full device-ID path name of the raw-disk device group.

      In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.


      # scdidadm -L
      …
      1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
      

    2. View the node list of the raw-disk device group.

      Output looks similar to the following.


      # scconf -pvv | grep dsk/d2
      Device group name:						dsk/d2
      …
        (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
      …

    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose disk is mirrored.

      Only the node whose disk is mirrored should remain in the node list for the raw-disk device group.


      # scconf -r -D name=dsk/dN,nodelist=node
      
      -D name=dsk/dN

      Specifies the cluster-unique name of the raw-disk device group

      nodelist=node

      Specifies the name of the node or nodes to remove from the node list

    4. Use the scconf(1M) command to enable the localonly property.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      # scconf -c -D name=rawdisk-groupname,localonly=true
      
      -D name=rawdisk-groupname

      Specifies the name of the raw-disk device group

      For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  10. (Optional) To mirror file systems other than root (/) that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.

  11. (Optional) To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted

  12. Go to Creating Disk Sets in a Cluster to create a disk set.

Example—Mirroring the Global Namespace

The following example shows creation of mirror d101, which consists of submirror d111 on partition c0t0d0s3 and submirror d121 on partition c2t2d0s3. The /etc/vfstab file entry for /global/.devices/node@1 is updated to use the mirror name d101. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


(Create the mirror)
# metainit -f d111 1 1 c0t0d0s3
d111: Concat/Stripe is setup
# metainit d121 1 1 c2t2d0s3
d121: Concat/Stripe is setup
# metainit d101 -m d111
d101: Mirror is setup
# metattach d101 d121
d101: Submirror d121 is attached
 
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d101 /dev/md/rdsk/d101 /global/.devices/node@1 ufs 2 no global
 
(View the sync status)
# metastat d101
d101: Mirror
      Submirror 0: d111
         State: Okay
      Submirror 1: d121
         State: Resyncing
      Resync in progress: 15 % done
…
 
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
 
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
 
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true

How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted

Use this procedure to mirror file systems other than root (/) that cannot be unmounted during normal system usage, such as /usr, /opt, or swap.

  1. Become superuser on a node of the cluster.

  2. Put the slice on which an unmountable file system resides in a single-slice (one-way) concatenation.

    Specify the physical disk name of the disk slice (cNtXdYsZ).


    # metainit -f submirror1 1 1 diskslice
    

  3. Create a second concatenation.


    # metainit submirror2 1 1 submirror-diskslice
    

  4. Create a one-way mirror with one submirror.


    # metainit mirror -m submirror1
    


    Note –

    The metadevice or volume name for this mirror does not need to be unique throughout the cluster.


  5. Repeat Step 1 through Step 4 for each remaining unmountable file system that you want to mirror.

  6. On each node, edit the /etc/vfstab file entry for each unmountable file system you mirrored.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    # vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /filesystem ufs 2 no global

  7. Move any resource groups or device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups

  8. Reboot the node.


    # shutdown -g0 -y -i6
    

  9. Attach the second submirror to each mirror.

    This attachment starts a synchronization of the submirrors.


    # metattach mirror submirror2
    

  10. Wait for the synchronization of the mirrors, started in Step 9, to complete.

    Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.


    # metastat mirror
    

  11. If the disk that is used to mirror the unmountable file system is physically connected to more than one node (multihosted), enable the localonly property.

    Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the unmountable file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the scdidadm -L command to display the full device-ID path name of the raw-disk device group.

      In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.


      # scdidadm -L
      …
      1            phys-schost-3:/dev/rdsk/c1t1d0    /dev/did/rdsk/d2
      

    2. View the node list of the raw-disk device group.

      Output looks similar to the following.


      # scconf -pvv | grep dsk/d2
      Device group name:						dsk/d2
      …
        (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
      …

    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk is mirrored.

      Only the node whose root disk is mirrored should remain in the node list for the raw-disk device group.


      # scconf -r -D name=dsk/dN,nodelist=node
      
      -D name=dsk/dN

      Specifies the cluster-unique name of the raw-disk device group

      nodelist=node

      Specifies the name of the node or nodes to remove from the node list

    4. Use the scconf(1M) command to enable the localonly property.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      # scconf -c -D name=rawdisk-groupname,localonly=true
      
      -D name=rawdisk-groupname

      Specifies the name of the raw-disk device group

      For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  12. (Optional) To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.

  13. Go to Creating Disk Sets in a Cluster to create a disk set.

Example—Mirroring File Systems That Cannot Be Unmounted

The following example shows the creation of mirror d1 on the node phys-schost-1 to mirror /usr, which resides on c0t0d0s1. Mirror d1 consists of submirror d11 on partition c0t0d0s1 and submirror d21 on partition c2t2d0s1. The /etc/vfstab file entry for /usr is updated to use the mirror name d1. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


(Create the mirror)
# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
# metainit d21 1 1 c2t2d0s1
d21: Concat/Stripe is setup
# metainit d1 -m d11
d1: Mirror is setup
 
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr ufs  2       no global
 
(Move resource groups and device groups from phys-schost-1)
# scswitch -S -h phys-schost-1
 
(Reboot the node)
# shutdown -g0 -y -i6
 
(Attach the second submirror)
# metattach d1 d21
d1: Submirror d21 is attached
 
(View the sync status)
# metastat d1
d1: Mirror
      Submirror 0: d11
         State: Okay
      Submirror 1: d21
         State: Resyncing
      Resync in progress: 15 % done
…
 
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
 
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
 
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true

How to Mirror File Systems That Can Be Unmounted

Use this procedure to mirror user-defined file systems that can be unmounted. In this procedure, the nodes do not need to be rebooted.

  1. Become superuser on a node of the cluster.

  2. Unmount the file system to mirror.

    Ensure that no processes are running on the file system.


    # umount /mount-point
    

    See the umount(1M) man page and “Mounting and Unmounting File Systems” in System Administration Guide: Basic Administration for more information.

  3. Put the slice that contains a user-defined file system that can be unmounted in a single-slice (one-way) concatenation.

    Specify the physical disk name of the disk slice (cNtXdYsZ).


    # metainit -f submirror1 1 1 diskslice
    

  4. Create a second concatenation.


    # metainit submirror2 1 1 submirror-diskslice
    

  5. Create a one-way mirror with one submirror.


    # metainit mirror -m submirror1
    


    Note –

    The metadevice or volume name for this mirror does not need to be unique throughout the cluster.


  6. Repeat Step 1 through Step 5 for each mountable file system to be mirrored.

  7. On each node, edit the /etc/vfstab file entry for each file system you mirrored.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    # vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /filesystem ufs 2 no global

  8. Attach the second submirror to the mirror.

    This attachment starts a synchronization of the submirrors.


    # metattach mirror submirror2
    

  9. Wait for the synchronization of the mirrors, started in Step 8, to be completed.

    Use the metastat(1M) command to view mirror status.


    # metastat mirror
    

  10. If the disk that is used to mirror the user-defined file system is physically connected to more than one node (multihosted), enable the localonly property.

    Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the user-defined file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the scdidadm -L command to display the full device-ID path name of the raw-disk device group.

      In the following example, the raw-disk device-group name dsk/d4 is part of the third column of output, which is the full device-ID path name.


      # scdidadm -L
      …
      1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
      

    2. View the node list of the raw-disk device group.

      Output looks similar to the following.


      # scconf -pvv | grep dsk/d2
      Device group name:						dsk/d2
      …
        (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
      …

    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      # scconf -r -D name=dsk/dN,nodelist=node
      
      -D name=dsk/dN

      Specifies the cluster-unique name of the raw-disk device group

      nodelist=node

      Specifies the name of the node or nodes to remove from the node list

    4. Use the scconf(1M) command to enable the localonly property.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      # scconf -c -D name=rawdisk-groupname,localonly=true
      
      -D name=rawdisk-groupname

      Specifies the name of the raw-disk device group

      For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  11. Mount the mirrored file system.


    # mount /mount-point
    

    See the mount(1M) man page and “Mounting and Unmounting File Systems” in System Administration Guide: Basic Administration for more information.

  12. Create a disk set.

    Go to Creating Disk Sets in a Cluster.

Example—Mirroring File Systems That Can Be Unmounted

The following example shows creation of mirror d4 to mirror /export, which resides on c0t0d0s4. Mirror d4 consists of submirror d14 on partition c0t0d0s4 and submirror d24 on partition c2t2d0s4. The /etc/vfstab file entry for /export is updated to use the mirror name d4. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


(Unmount the file system)
# umount /export
 
(Create the mirror)
# metainit -f d14 1 1 c0t0d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c2t2d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup
 
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d4 /dev/md/rdsk/d4 /export ufs 2 no	global
 
(Attach the second submirror)
# metattach d4 d24
d4: Submirror d24 is attached
 
(View the sync status)
# metastat d4
d4: Mirror
      Submirror 0: d14
         State: Okay
      Submirror 1: d24
         State: Resyncing
      Resync in progress: 15 % done
…
 
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
 
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
 
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
 
(Mount the file system)
# mount /export

Creating Disk Sets in a Cluster

This section describes how to create disk sets for a cluster configuration. You might not need to create disk sets under the following circumstances:

The following procedures are in this section:

How to Create a Disk Set

Perform this procedure to create disk sets.

  1. Determine whether the cluster will have more than three disk sets after you create the new disk sets.

    • If the cluster will have no more than three disk sets, skip to Step 2.

    • If the cluster will have four or more disk sets, perform the following steps to prepare the cluster.

      You must perform this task whether you are installing disk sets for the first time or whether you are adding more disk sets to a fully configured cluster.

    1. On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.
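
      For example, a command similar to the following displays the line that contains the current md_nsets setting:


      # grep md_nsets /kernel/drv/md.conf
      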

    2. If the total number of disk sets in the cluster will be greater than the existing value of md_nsets minus one, increase the value of md_nsets to the desired value.

      The maximum permissible number of disk sets is one less than the configured value of md_nsets. The maximum possible value of md_nsets is 32, therefore the maximum permissible number of disk sets that you can create is 31.

    3. Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.


      Caution –

      Failure to follow this guideline can result in serious Solstice DiskSuite or Solaris Volume Manager errors and possible loss of data.
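
      One way to verify that the files match, assuming that remote shell access is configured between the cluster nodes, is to compare a checksum of the file on each node. The node name shown here is an example:


      # sum /kernel/drv/md.conf
      # rsh phys-schost-2 sum /kernel/drv/md.conf
      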


    4. If you made changes to the md.conf file on any node, perform the following steps to make those changes active.

      1. From one node, shut down the cluster.


        # scshutdown -g0 -y
        

      2. Reboot each node of the cluster.


        ok> boot
        

    5. On each node in the cluster, run the devfsadm(1M) command.

      You can run this command on all nodes in the cluster at the same time.
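
      The command typically needs no options for this purpose:


      # devfsadm
      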

    6. From one node of the cluster, run the scgdevs(1M) command to update the global-devices namespace.
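
      For example:


      # scgdevs
      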

    7. On each node, verify that the scgdevs command has completed processing before you attempt to create any disk sets.

      The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.


      % ps -ef | grep scgdevs
      

  2. Ensure that the disk set you intend to create meets one of the following requirements.

    • If the disk set is configured with exactly two disk strings, the disk set must connect to exactly two nodes and use exactly two mediator hosts. These mediator hosts must be the same two nodes that are connected to the disk set. See Configuring Dual-String Mediators for details on how to configure dual-string mediators.

    • If the disk set is configured with more than two disk strings, ensure that for any two disk strings S1 and S2, the sum of the number of drives on those strings exceeds the number of drives on the third string S3. Stated as a formula, the requirement is that count(S1) + count(S2) > count(S3).

  3. Ensure that the local state database replicas exist.

    For instructions, see How to Create State Database Replicas.

  4. Become superuser on the cluster node that will master the disk set.

  5. Create the disk set.

    The following command creates the disk set and registers the disk set as a Sun Cluster disk device group.


    # metaset -s setname -a -h node1 node2
    
    -s setname

    Specifies the disk set name

    -a

    Adds (creates) the disk set

    -h node1

    Specifies the name of the primary node to master the disk set

    node2

    Specifies the name of the secondary node to master the disk set


    Note –

    When you run the metaset command to configure a Solstice DiskSuite or Solaris Volume Manager device group on a cluster, the command designates one secondary node by default. You can change the desired number of secondary nodes in the device group by using the scsetup(1M) utility after the device group is created. Refer to “Administering Disk Device Groups” in Sun Cluster System Administration Guide for Solaris OS for more information about how to change the numsecondaries property.


  6. Verify the status of the new disk set.


    # metaset -s setname
    

  7. Add drives to the disk set.

    Go to Adding Drives to a Disk Set.

Example—Creating a Disk Set

The following command creates two disk sets, dg-schost-1 and dg-schost-2, with the nodes phys-schost-1 and phys-schost-2 specified as the potential primaries.


# metaset -s dg-schost-1 -a -h phys-schost-1 phys-schost-2
# metaset -s dg-schost-2 -a -h phys-schost-1 phys-schost-2

Adding Drives to a Disk Set

When you add a drive to a disk set, the volume management software repartitions the drive so that the state database for the disk set can be placed on the drive: a small portion of each drive is reserved in slice 7 for use by Solstice DiskSuite or Solaris Volume Manager software, and the remainder of the space on each drive is placed into slice 0.

How to Add Drives to a Disk Set

  1. Become superuser on the node.

  2. Ensure that the disk set has been created.

    For instructions, see How to Create a Disk Set.

  3. List the DID mappings.


    # scdidadm -L
    

    • Choose drives that are shared by the cluster nodes that will master or potentially master the disk set.

    • Use the full device-ID path names when you add drives to a disk set.

    The first column of output is the DID instance number, the second column is the full physical path name, and the third column is the full device-ID path name (pseudo path). A shared drive has more than one entry for the same DID instance number.

    In the following example, the entries for DID instance number 2 indicate a drive that is shared by phys-schost-1 and phys-schost-2, and the full device-ID path name is /dev/did/rdsk/d2.


    1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    …

  4. Take ownership of the disk set.


    # metaset -s setname -t
    
    -s setname

    Specifies the disk set name

    -t

    Takes ownership of the disk set

  5. Add the drives to the disk set.

    Use the full device-ID path name.


    # metaset -s setname -a drivename
    

    -a

    Adds the drive to the disk set

    drivename

    Full device-ID path name of the shared drive


    Note –

    Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set. Because the lower-level device name is a local name that is not unique throughout the cluster, using this name might prevent the disk set from being able to switch over.


  6. Verify the status of the disk set and drives.


    # metaset -s setname
    

  7. (Optional) To repartition drives for use in metadevices or volumes, go to How to Repartition Drives in a Disk Set.

  8. Go to How to Create an md.tab File to define metadevices or volumes by using an md.tab file.

Example—Adding Drives to a Disk Set

The metaset command adds the drives /dev/did/rdsk/d1 and /dev/did/rdsk/d2 to the disk set dg-schost-1.


# metaset -s dg-schost-1 -a /dev/did/rdsk/d1 /dev/did/rdsk/d2

How to Repartition Drives in a Disk Set

The metaset(1M) command repartitions drives in a disk set so that a small portion of each drive is reserved in slice 7 for use by Solstice DiskSuite or Solaris Volume Manager software. The remainder of the space on each drive is placed into slice 0. To make more effective use of the drive, use this procedure to modify the disk layout. If you allocate space to slices 1 through 6, you can use these slices when you set up Solstice DiskSuite metadevices or Solaris Volume Manager volumes.

  1. Become superuser on the cluster node.

  2. Use the format command to change the disk partitioning for each drive in the disk set.

    When you repartition a drive, you must meet the following conditions to prevent the metaset(1M) command from repartitioning the drive.

    • Create slice 7 starting at cylinder 0, large enough to hold a state database replica. See your Solstice DiskSuite or Solaris Volume Manager administration guide to determine the size of a state database replica for your version of the volume-manager software.

    • Set the Flag field in slice 7 to wu (read-write, unmountable). Do not set it to read-only.

    • Do not allow slice 7 to overlap any other slice on the drive.

    See the format(1M) man page for details.
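
    After you repartition a drive, you can check the resulting layout. For example, a command similar to the following prints the drive's volume table of contents; the device name shown is a placeholder:


    # prtvtoc /dev/rdsk/cNtXdYs2
    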

  3. Define metadevices or volumes by using an md.tab file.

    Go to How to Create an md.tab File.

How to Create an md.tab File

Create an /etc/lvm/md.tab file on each node in the cluster. Use the md.tab file to define Solstice DiskSuite metadevices or Solaris Volume Manager volumes for the disk sets that you created.


Note –

If you are using local metadevices or volumes, ensure that local metadevice or volume names are distinct from the device-ID names that are used to form disk sets. For example, if the device-ID name /dev/did/dsk/d3 is used in a disk set, do not use the name /dev/md/dsk/d3 for a local metadevice or volume. This requirement does not apply to shared metadevices or volumes, which use the naming convention /dev/md/setname/{r}dsk/d#.



Tip –

To avoid possible confusion between device-ID names and local metadevice or volume names in a cluster environment, use a naming scheme that makes each local metadevice or volume name unique throughout the cluster. For example, for node 1 choose names from d100 to d199, and for node 2 use d200 to d299.


  1. Become superuser on the cluster node.

  2. List the DID mappings for reference when you create your md.tab file.

    Use the full device-ID path names in the md.tab file in place of the lower-level device names (cNtXdY).


    # scdidadm -L
    

    In the following example, the first column of output is the DID instance number, the second column is the full physical path name, and the third column is the full device-ID path name (pseudo path).


    1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    …

  3. Create an /etc/lvm/md.tab file and edit it by hand with your preferred text editor.

    See your Solstice DiskSuite or Solaris Volume Manager documentation and the md.tab(4) man page for details on how to create an md.tab file.


    Note –

    If you have existing data on the drives that will be used for the submirrors, you must back up the data before metadevice or volume setup. Then restore the data onto the mirror.


  4. Activate the metadevices or volumes that are defined in the md.tab files.

    Go to How to Activate Metadevices or Volumes.

Example—Sample md.tab File

The following sample md.tab file defines the disk set that is named dg-schost-1. The ordering of lines in the md.tab file is not important.


dg-schost-1/d0 -m dg-schost-1/d10 dg-schost-1/d20
    dg-schost-1/d10 1 1 /dev/did/rdsk/d1s0
    dg-schost-1/d20 1 1 /dev/did/rdsk/d2s0

The following example uses Solstice DiskSuite terminology. For Solaris Volume Manager, a trans metadevice is instead called a transactional volume and a metadevice is instead called a volume. Otherwise, the following process is valid for both volume managers.

The sample md.tab file is constructed as follows.

  1. The first line defines the device d0 as a mirror of metadevices d10 and d20. The -m signifies that this device is a mirror device.


    dg-schost-1/d0 -m dg-schost-1/d10 dg-schost-1/d20

  2. The second line defines metadevice d10, the first submirror of d0, as a one-way stripe.


    dg-schost-1/d10 1 1 /dev/did/rdsk/d1s0

  3. The third line defines metadevice d20, the second submirror of d0, as a one-way stripe.


    dg-schost-1/d20 1 1 /dev/did/rdsk/d2s0

How to Activate Metadevices or Volumes

Perform this procedure to activate Solstice DiskSuite metadevices or Solaris Volume Manager volumes that are defined in md.tab files.

  1. Become superuser on the cluster node.

  2. Ensure that md.tab files are located in the /etc/lvm directory.

  3. Ensure that you have ownership of the disk set on the node where the command will be executed.
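
    For example, you can view the disk set status to see which node, if any, currently owns the disk set:


    # metaset -s setname
    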

  4. Take ownership of the disk set.


    # metaset -s setname -t
    
    -s setname

    Specifies the disk set name

    -t

    Takes ownership of the disk set

  5. Activate the disk set's metadevices or volumes, which are defined in the md.tab file.


    # metainit -s setname -a
    
    -a

    Activates all metadevices in the md.tab file

  6. For each master and log device, attach the second submirror (submirror2).

    When the metadevices or volumes in the md.tab file are activated, only the first submirror (submirror1) of the master and log devices is attached, so submirror2 must be attached by hand.


    # metattach mirror submirror2
    

  7. Repeat Step 3 through Step 6 for each disk set in the cluster.

    If necessary, run the metainit(1M) command from another node that has connectivity to the drives. This step is required for cluster-pair topologies, where the drives are not accessible by all nodes.

  8. Check the status of the metadevices or volumes.


    # metastat -s setname
    

    See the metastat(1M) man page for more information.

  9. If your cluster contains disk sets that are configured with exactly two disk enclosures and two nodes, add dual-string mediators.

    Go to Configuring Dual-String Mediators.

  10. Go to How to Create Cluster File Systems to create a cluster file system.

Example—Activating Metadevices or Volumes in the md.tab File

In the following example, all metadevices that are defined in the md.tab file for disk set dg-schost-1 are activated. Then the second submirrors of master device dg-schost-1/d1 and log device dg-schost-1/d4 are attached.


# metainit -s dg-schost-1 -a
# metattach dg-schost-1/d1 dg-schost-1/d3
# metattach dg-schost-1/d4 dg-schost-1/d6

Configuring Dual-String Mediators

This section contains the following information and procedures:

Requirements for Dual-String Mediators

A dual-string mediator, or mediator host, is a cluster node that stores mediator data. Mediator data provides information on the location of other mediators and contains a commit count that is identical to the commit count stored in the database replicas. This commit count is used to confirm that the mediator data is in sync with the data in the database replicas.

Dual-string mediators are required for all Solstice DiskSuite or Solaris Volume Manager disk sets that are configured with exactly two disk strings and two cluster nodes. A disk string consists of a disk enclosure, its physical drives, cables from the enclosure to the node(s), and the interface adapter cards. The use of mediators enables the Sun Cluster software to ensure that the most current data is presented in the event of a single-string failure in a dual-string configuration. The following rules apply to dual-string configurations that use mediators.

These rules do not require that the entire cluster must have exactly two nodes. Rather, only those disk sets that have two disk strings must be connected to exactly two nodes. An N+1 cluster and many other topologies are permitted under these rules.

How to Add Mediator Hosts

Perform this procedure if your configuration requires dual-string mediators.

  1. Become superuser on the node that currently masters the disk set to which you intend to add mediator hosts.

  2. Run the metaset(1M) command to add each node with connectivity to the disk set as a mediator host for that disk set.


    # metaset -s setname -a -m mediator-host-list
    
    -s setname

    Specifies the disk set name

    -a

    Adds to the disk set

    -m mediator-host-list

    Specifies the name of the node to add as a mediator host for the disk set

    See the mediator(7D) man page for details about mediator-specific options to the metaset command.

  3. Check the status of mediator data.

    Go to How to Check the Status of Mediator Data.

Example—Adding Mediator Hosts

The following example adds the nodes phys-schost-1 and phys-schost-2 as mediator hosts for the disk set dg-schost-1. Both commands are run from the node phys-schost-1.


# metaset -s dg-schost-1 -a -m phys-schost-1
# metaset -s dg-schost-1 -a -m phys-schost-2

How to Check the Status of Mediator Data

  1. Add mediator hosts as described in How to Add Mediator Hosts.

  2. Run the medstat command.


    # medstat -s setname
    
    -s setname

    Specifies the disk set name

    See the medstat(1M) man page for more information.

  3. If Bad is the value in the Status field of the medstat output, repair the affected mediator host.

    Go to How to Fix Bad Mediator Data.

  4. Go to How to Create Cluster File Systems to create a cluster file system.

How to Fix Bad Mediator Data

Perform this procedure to repair bad mediator data.

  1. Identify all mediator hosts with bad mediator data as described in the procedure How to Check the Status of Mediator Data.

  2. Become superuser on the node that owns the affected disk set.

  3. Remove all mediator hosts with bad mediator data from all affected disk sets.


    # metaset -s setname -d -m mediator-host-list
    
    -s setname

    Specifies the disk set name

    -d

    Deletes from the disk set

    -m mediator-host-list

    Specifies the name of the node to remove as a mediator host for the disk set

  4. Restore each mediator host that you removed in Step 3.


    # metaset -s setname -a -m mediator-host-list
    
    -a

    Adds to the disk set

    -m mediator-host-list

    Specifies the name of the node to add as a mediator host for the disk set

    See the mediator(7D) man page for details about mediator-specific options to the metaset command.

  5. Create cluster file systems.

    Go to How to Create Cluster File Systems.