Sun Cluster Software Installation Guide for Solaris OS

Configuring Solaris Volume Manager Software

The following table lists the tasks that you perform to configure Solaris Volume Manager software for Sun Cluster configurations.

Table 4–1 Task Map: Configuring Solaris Volume Manager Software

Task 

Instructions 

1. Plan the layout of your Solaris Volume Manager configuration. 

Planning Volume Management

2. (Solaris 9 only) Calculate the number of volume names and disk sets that you need for your configuration, and modify the /kernel/drv/md.conf file.

SPARC: How to Set the Number of Volume Names and Disk Sets

3. Create state database replicas on the local disks. 

How to Create State Database Replicas

4. (Optional) Mirror file systems on the root disk.

Mirroring the Root Disk

SPARC: How to Set the Number of Volume Names and Disk Sets


Note –

This procedure is only required for the Solaris 9 OS. If the cluster runs on the Solaris 10 OS, proceed to How to Create State Database Replicas.

With the Solaris 10 release, Solaris Volume Manager has been enhanced to configure volumes dynamically. You no longer need to edit the nmd and the md_nsets parameters in the /kernel/drv/md.conf file. New volumes are dynamically created, as needed.


This procedure describes how to determine the number of Solaris Volume Manager volume names and disk sets that you need for your configuration. This procedure also describes how to modify the /kernel/drv/md.conf file to specify these numbers.


Tip –

The default number of volume names per disk set is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later.

At the same time, keep the value of the nmd field and the md_nsets field as low as possible. Memory structures exist for all possible devices as determined by nmd and md_nsets, even if you have not created those devices. For optimal performance, keep the values of nmd and md_nsets only slightly higher than the number of volumes that you plan to use.


Before You Begin

Have available the completed Device Group Configurations Worksheet.

  1. Calculate the total number of disk sets that you expect to need in the cluster, then add one more disk set for private disk management.

    The cluster can have a maximum of 32 disk sets, 31 disk sets for general use plus one disk set for private disk management. The default number of disk sets is 4. You supply this value for the md_nsets field in Step 3.

  2. Calculate the largest volume name that you expect to need for any disk set in the cluster.

    Each disk set can have a maximum of 8192 volume names. You supply this value for the nmd field in Step 3.

    1. Determine the quantity of volume names that you expect to need for each disk set.

      If you use local volumes, ensure that each local volume name on which a global-devices file system, /global/.devices/node@nodeid, is mounted is unique throughout the cluster and does not use the same name as any device-ID name in the cluster.


      Tip –

      Choose a range of numbers to use exclusively for device-ID names and a range for each node to use exclusively for its local volume names. For example, device-ID names might use the range from d1 to d100. Local volumes on node 1 might use names in the range from d101 to d199. And local volumes on node 2 might use d200 to d299.


    2. Calculate the highest of the volume names that you expect to use in any disk set.

      The quantity of volume names to set is based on the highest volume name value rather than on the actual quantity. For example, if your volume names range from d950 to d1000, Solaris Volume Manager software requires that you set the value at 1000 names, not 50.

  3. On each node, become superuser and edit the /kernel/drv/md.conf file.


    Caution –

    All cluster nodes (or cluster pairs in the cluster-pair topology) must have identical /kernel/drv/md.conf files, regardless of the number of disk sets served by each node. Failure to follow this guideline can result in serious Solaris Volume Manager errors and possible loss of data.


    1. Set the md_nsets field to the value that you determined in Step 1.

    2. Set the nmd field to the value that you determined in Step 2.
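
    For reference, the nmd and md_nsets values appear on the md driver line in the /kernel/drv/md.conf file. The following fragment is a sketch only; the values md_nsets=20 and nmd=2048 are illustrative and must be replaced with the numbers that you calculated in Step 1 and Step 2.


    # Fragment of /kernel/drv/md.conf (illustrative values)
    name="md" parent="pseudo" nmd=2048 md_nsets=20;
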

  4. On each node, perform a reconfiguration reboot.


    phys-schost# touch /reconfigure
    phys-schost# shutdown -g0 -y -i6
    

    Changes to the /kernel/drv/md.conf file become operative after you perform a reconfiguration reboot.

Next Steps

Create local state database replicas. Go to How to Create State Database Replicas.

How to Create State Database Replicas

Perform this procedure on each node in the cluster.

  1. Become superuser.

  2. Create state database replicas on one or more local devices for each cluster node.

    Use the physical name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.


    phys-schost# metadb -af slice-1 slice-2 slice-3
    

    Tip –

    To provide protection of state data, which is necessary to run Solaris Volume Manager software, create at least three replicas for each node. Also, you can place replicas on more than one device to provide protection if one of the devices fails.


    See the metadb(1M) man page and your Solaris Volume Manager documentation for details.

  3. Verify the replicas.


    phys-schost# metadb
    

    The metadb command displays the list of replicas.


Example 4–1 Creating State Database Replicas

The following example shows three state database replicas. Each replica is created on a different device.


phys-schost# metadb -af c0t0d0s7 c0t1d0s7 c1t0d0s7
phys-schost# metadb
flags            first blk      block count
    a       u       16          8192         /dev/dsk/c0t0d0s7
    a       u       16          8192         /dev/dsk/c0t1d0s7
    a       u       16          8192         /dev/dsk/c1t0d0s7

Next Steps

To mirror file systems on the root disk, go to Mirroring the Root Disk.

Otherwise, go to Creating Disk Sets in a Cluster to create Solaris Volume Manager disk sets.

Mirroring the Root Disk

Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk. Each file-system type is mirrored by using a different method.

Use the following procedures to mirror each type of file system.


Caution –

For local disk mirroring, do not use /dev/global as the path when you specify the disk name. If you specify this path for anything other than cluster file systems, the system cannot boot.


How to Mirror the Root (/) File System

Use this procedure to mirror the root (/) file system.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


  1. Become superuser.

  2. Place the root slice in a single-slice (one-way) concatenation.

    Specify the physical disk name of the root-disk slice (cNtXdYsZ).


    phys-schost# metainit -f submirror1 1 1 root-disk-slice
    
  3. Create a second concatenation.


    phys-schost# metainit submirror2 1 1 submirror-disk-slice
    
  4. Create a one-way mirror with one submirror.


    phys-schost# metainit mirror -m submirror1
    

    Note –

    If the device is a local device to be used to mount a global-devices file system, /global/.devices/node@nodeid, the volume name for the mirror must be unique throughout the cluster.


  5. Set up the system files for the root (/) directory.


    phys-schost# metaroot mirror
    

    This command edits the /etc/vfstab and /etc/system files so the system can be booted with the root (/) file system on a metadevice or volume. For more information, see the metaroot(1M) man page.
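
    For reference, the resulting edits typically look similar to the following sketch, which assumes that the mirror is named d0. The exact entries on your system might differ.


    # /etc/vfstab entry for root (/) after metaroot d0 (illustrative)
    /dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -

    # Lines that metaroot adds to /etc/system (illustrative)
    * Begin MDD root info (do not edit)
    rootdev:/pseudo/md@0:0,0,blk
    * End MDD root info (do not edit)
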

  6. Flush all file systems.


    phys-schost# lockfs -fa
    

    This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems. For more information, see the lockfs(1M) man page.

  7. Move any resource groups or device groups from the node.


    phys-schost# clnode evacuate from-node
    
    from-node

    Specifies the name of the node from which to evacuate resource or device groups.

  8. Reboot the node.

    This command remounts the newly mirrored root (/) file system.


    phys-schost# shutdown -g0 -y -i6
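
    When the node is back up, you can optionally confirm that the root (/) file system is now mounted from the mirror. The following check is a sketch only; the mirror name d0 and the sizes shown are illustrative.


    phys-schost# df -k /
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/md/dsk/d0       8705501 3604630 5013816    42%    /
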
    
  9. Attach the second submirror to the mirror.


    phys-schost# metattach mirror submirror2
    

    See the metattach(1M) man page for more information.

  10. If the disk that is used to mirror the root disk is physically connected to more than one node (multihosted), modify the device group's properties to support its use as a mirror.

    Ensure that the device group meets the following requirements:

    • The raw-disk device group must have only one node configured in its node list.

    • The localonly property of the raw-disk device group must be enabled. The localonly property prevents unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the cldevice command to determine the name of the raw-disk device group.


      phys-schost# cldevice show node:/dev/rdsk/cNtXdY
      

      Tip –

      If you issue the command from a node that is physically connected to the disk, you can specify the disk name as cNtXdY instead of by its full device path name.


      In the following example, the raw-disk device-group name dsk/d2 is part of the DID device name.


      === DID Device Instances ===                   
      
      DID Device Name:                                /dev/did/rdsk/d2
        Full Device Path:                               phys-schost-1:/dev/rdsk/c1t1d0
        Full Device Path:                               phys-schost-3:/dev/rdsk/c1t1d0
      …

      See the cldevice(1CL) man page for more information.

    2. View the node list of the raw-disk device group.


      phys-schost# cldevicegroup show dsk/dN
      

      Output looks similar to the following for the device group dsk/d2:


      Device Group Name:                              dsk/d2
      …
        Node List:                                      phys-schost-1, phys-schost-3
      …
        localonly:                                      false
    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      phys-schost# cldevicegroup remove-node -n node devicegroup
      
      -n node

      Specifies the node to remove from the device-group node list.

    4. Enable the localonly property of the raw-disk device group, if it is not already enabled.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      phys-schost# cldevicegroup set -p localonly=true devicegroup
      
      -p

      Sets the value of a device-group property.

      localonly=true

      Enables the localonly property of the device group.

      For more information about the localonly property, see the cldevicegroup(1CL) man page.

  11. Record the alternate boot path for possible future use.

    If the primary boot device fails, you can then boot from this alternate boot device. See Special Considerations for Mirroring root (/) in Solaris Volume Manager Administration Guide or Creating a RAID-1 Volume in Solaris Volume Manager Administration Guide for more information about alternate boot devices.


    phys-schost# ls -l /dev/rdsk/root-disk-slice
    
  12. Repeat Step 1 through Step 11 on each remaining node of the cluster.

    Ensure that each volume name for a mirror on which a global-devices file system, /global/.devices/node@nodeid, is to be mounted is unique throughout the cluster.


Example 4–2 Mirroring the Root (/) File System

The following example shows the creation of mirror d0 on the node phys-schost-1, which consists of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Device c2t2d0 is a multihost disk, so the localonly property is enabled. The example also displays the alternate boot path for recording.


phys-schost# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
phys-schost# metainit d20 1 1 c2t2d0s0
d20: Concat/Stripe is setup
phys-schost# metainit d0 -m d10
d10: Mirror is setup
phys-schost# metaroot d0
phys-schost# lockfs -fa
phys-schost# clnode evacuate phys-schost-1
phys-schost# shutdown -g0 -y -i6
phys-schost# metattach d0 d20
d0: Submirror d20 is attached
phys-schost# cldevicegroup show dsk/d2
Device Group Name:                              dsk/d2
…
  Node List:                                      phys-schost-1, phys-schost-3
…
  localonly:                                     false
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevicegroup set -p localonly=true dsk/d2
phys-schost# ls -l /dev/rdsk/c2t2d0s0
lrwxrwxrwx  1 root     root          57 Apr 25 20:11 /dev/rdsk/c2t2d0s0 
-> ../../devices/node@1/pci@1f,0/pci@1/scsi@3,1/disk@2,0:a,raw

Next Steps

To mirror the global devices namespace, /global/.devices/node@nodeid, go to How to Mirror the Global Devices Namespace.

To mirror file systems other than root (/) that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.

To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.

Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.

Troubleshooting

Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.

How to Mirror the Global Devices Namespace

Use this procedure to mirror the global devices namespace, /global/.devices/node@nodeid/.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


  1. Become superuser.

  2. Place the global devices namespace slice in a single-slice (one-way) concatenation.

    Use the physical disk name of the disk slice (cNtXdYsZ).


    phys-schost# metainit -f submirror1 1 1 diskslice
    
  3. Create a second concatenation.


    phys-schost# metainit submirror2 1 1 submirror-diskslice
    
  4. Create a one-way mirror with one submirror.


    phys-schost# metainit mirror -m submirror1
    

    Note –

    The volume name for a mirror on which a global-devices file system, /global/.devices/node@nodeid, is to be mounted must be unique throughout the cluster.


  5. Attach the second submirror to the mirror.

    This attachment starts a synchronization of the submirrors.


    phys-schost# metattach mirror submirror2
    
  6. Edit the /etc/vfstab file entry for the /global/.devices/node@nodeid file system.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    phys-schost# vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /global/.devices/node@nodeid ufs 2 no global
  7. Repeat Step 1 through Step 6 on each remaining node of the cluster.

  8. Wait for the synchronization of the mirrors, started in Step 5, to be completed.

    Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.


    phys-schost# metastat mirror
    
  9. If the disk that is used to mirror the global devices namespace is physically connected to more than one node (multihosted), ensure that the device-group node list contains only one node and that the localonly property is enabled.

    Ensure that the device group meets the following requirements:

    • The raw-disk device group must have only one node configured in its node list.

    • The localonly property of the raw-disk device group must be enabled. The localonly property prevents unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the cldevice command to determine the name of the raw-disk device group.


      phys-schost# cldevice show node:/dev/rdsk/cNtXdY
      

      Tip –

      If you issue the command from a node that is physically connected to the disk, you can specify the disk name as cNtXdY instead of by its full device path name.


      In the following example, the raw-disk device-group name dsk/d2 is part of the DID device name.


      === DID Device Instances ===                   
      
      DID Device Name:                                /dev/did/rdsk/d2
        Full Device Path:                               phys-schost-1:/dev/rdsk/c1t1d0
        Full Device Path:                               phys-schost-3:/dev/rdsk/c1t1d0
      …

      See the cldevice(1CL) man page for more information.

    2. View the node list of the raw-disk device group.


      phys-schost# cldevicegroup show dsk/dN
      

      Output looks similar to the following for the device group dsk/d2:


      Device Group Name:                              dsk/d2
      …
        Node List:                                      phys-schost-1, phys-schost-3
      …
        localonly:                                      false
    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      phys-schost# cldevicegroup remove-node -n node devicegroup
      
      -n node

      Specifies the node to remove from the device-group node list.

    4. Enable the localonly property of the raw-disk device group, if it is not already enabled.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      phys-schost# cldevicegroup set -p localonly=true devicegroup
      
      -p

      Sets the value of a device-group property.

      localonly=true

      Enables the localonly property of the device group.

      For more information about the localonly property, see the cldevicegroup(1CL) man page.


Example 4–3 Mirroring the Global Devices Namespace

The following example shows creation of mirror d101, which consists of submirror d111 on partition c0t0d0s3 and submirror d121 on partition c2t2d0s3. The /etc/vfstab file entry for /global/.devices/node@1 is updated to use the mirror name d101. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


phys-schost# metainit -f d111 1 1 c0t0d0s3
d111: Concat/Stripe is setup
phys-schost# metainit d121 1 1 c2t2d0s3
d121: Concat/Stripe is setup
phys-schost# metainit d101 -m d111
d101: Mirror is setup
phys-schost# metattach d101 d121
d101: Submirror d121 is attached
phys-schost# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d101 /dev/md/rdsk/d101 /global/.devices/node@1 ufs 2 no global
phys-schost# metastat d101
d101: Mirror
      Submirror 0: d111
         State: Okay
      Submirror 1: d121
         State: Resyncing
      Resync in progress: 15 % done
…
phys-schost# cldevice show phys-schost-3:/dev/rdsk/c2t2d0 
=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                               phys-schost-1:/dev/rdsk/c2t2d0
  Full Device Path:                               phys-schost-3:/dev/rdsk/c2t2d0
…

phys-schost# cldevicegroup show | grep dsk/d2
Device Group Name:                              dsk/d2
…
  Node List:                                      phys-schost-1, phys-schost-3
…
  localonly:                                      false
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevicegroup set -p localonly=true dsk/d2

Next Steps

To mirror file systems other than root (/) that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.

To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.

Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.

Troubleshooting

Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.

How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted

Use this procedure to mirror file systems other than root (/) that cannot be unmounted during normal system usage, such as /usr, /opt, or swap.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


  1. Become superuser.

  2. Place the slice on which an unmountable file system resides in a single-slice (one-way) concatenation.

    Specify the physical disk name of the disk slice (cNtXdYsZ).


    phys-schost# metainit -f submirror1 1 1 diskslice
    
  3. Create a second concatenation.


    phys-schost# metainit submirror2 1 1 submirror-diskslice
    
  4. Create a one-way mirror with one submirror.


    phys-schost# metainit mirror -m submirror1
    

    Note –

    The volume name for this mirror does not need to be unique throughout the cluster.


  5. Repeat Step 1 through Step 4 for each remaining unmountable file system that you want to mirror.

  6. On each node, edit the /etc/vfstab file entry for each unmountable file system you mirrored.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    phys-schost# vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /filesystem ufs 2 no global
  7. Move any resource groups or device groups from the node.


    phys-schost# clnode evacuate from-node
    
    from-node

    Specifies the name of the node from which to move resource or device groups.

  8. Reboot the node.


    phys-schost# shutdown -g0 -y -i6
    
  9. Attach the second submirror to each mirror.

    This attachment starts a synchronization of the submirrors.


    phys-schost# metattach mirror submirror2
    
  10. Wait for the synchronization of the mirrors, started in Step 9, to complete.

    Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.


    phys-schost# metastat mirror
    
  11. If the disk that is used to mirror the unmountable file system is physically connected to more than one node (multihosted), ensure that the device-group node list contains only one node and that the localonly property is enabled.

    Ensure that the device group meets the following requirements:

    • The raw-disk device group must have only one node configured in its node list.

    • The localonly property of the raw-disk device group must be enabled. The localonly property prevents unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the cldevice command to determine the name of the raw-disk device group.


      phys-schost# cldevice show node:/dev/rdsk/cNtXdY
      

      Tip –

      If you issue the command from a node that is physically connected to the disk, you can specify the disk name as cNtXdY instead of by its full device path name.


      In the following example, the raw-disk device-group name dsk/d2 is part of the DID device name.


      === DID Device Instances ===                   
      
      DID Device Name:                                /dev/did/rdsk/d2
        Full Device Path:                               phys-schost-1:/dev/rdsk/c1t1d0
        Full Device Path:                               phys-schost-3:/dev/rdsk/c1t1d0
      …

      See the cldevice(1CL) man page for more information.

    2. View the node list of the raw-disk device group.


      phys-schost# cldevicegroup show dsk/dN
      

      Output looks similar to the following for the device group dsk/d2:


      Device Group Name:                              dsk/d2
      …
        Node List:                                      phys-schost-1, phys-schost-3
      …
        localonly:                                      false
    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      phys-schost# cldevicegroup remove-node -n node devicegroup
      
      -n node

      Specifies the node to remove from the device-group node list.

    4. Enable the localonly property of the raw-disk device group, if it is not already enabled.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      phys-schost# cldevicegroup set -p localonly=true devicegroup
      
      -p

      Sets the value of a device-group property.

      localonly=true

      Enables the localonly property of the device group.

      For more information about the localonly property, see the cldevicegroup(1CL) man page.


Example 4–4 Mirroring File Systems That Cannot Be Unmounted

The following example shows the creation of mirror d1 on the node phys-schost-1 to mirror /usr, which resides on c0t0d0s1. Mirror d1 consists of submirror d11 on partition c0t0d0s1 and submirror d21 on partition c2t2d0s1. The /etc/vfstab file entry for /usr is updated to use the mirror name d1. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


phys-schost# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
phys-schost# metainit d21 1 1 c2t2d0s1
d21: Concat/Stripe is setup
phys-schost# metainit d1 -m d11
d1: Mirror is setup
phys-schost# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d1 /dev/md/rdsk/d1 /usr ufs  2       no global
…
phys-schost# clnode evacuate phys-schost-1
phys-schost# shutdown -g0 -y -i6
phys-schost# metattach d1 d21
d1: Submirror d21 is attached
phys-schost# metastat d1
d1: Mirror
      Submirror 0: d11
         State: Okay
      Submirror 1: d21
         State: Resyncing
      Resync in progress: 15 % done
…
phys-schost# cldevice show phys-schost-3:/dev/rdsk/c2t2d0
…
DID Device Name:                                /dev/did/rdsk/d2
phys-schost# cldevicegroup show dsk/d2
Device Group Name:                              dsk/d2
…
  Node List:                                      phys-schost-1, phys-schost-3
…
  localonly:                                      false
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevicegroup set -p localonly=true dsk/d2

Next Steps

To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.

Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.

Troubleshooting

Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.

How to Mirror File Systems That Can Be Unmounted

Use this procedure to mirror user-defined file systems that can be unmounted. In this procedure, the nodes do not need to be rebooted.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


  1. Become superuser.

  2. Unmount the file system to mirror.

    Ensure that no processes are running on the file system.


    phys-schost# umount /mount-point
    

    See the umount(1M) man page and Chapter 18, Mounting and Unmounting File Systems (Tasks), in System Administration Guide: Devices and File Systems for more information.
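
    Optionally, you can check for processes that are using the file system before you unmount it. The following check is a sketch only; fuser lists no process IDs when the file system is idle.


    phys-schost# fuser -c /mount-point
    /mount-point:
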

  3. Place the slice that contains a user-defined file system that can be unmounted in a single-slice (one-way) concatenation.

    Specify the physical disk name of the disk slice (cNtXdYsZ).


    phys-schost# metainit -f submirror1 1 1 diskslice
    
  4. Create a second concatenation.


    phys-schost# metainit submirror2 1 1 submirror-diskslice
    
  5. Create a one-way mirror with one submirror.


    phys-schost# metainit mirror -m submirror1
    

    Note –

    The volume name for this mirror does not need to be unique throughout the cluster.


  6. Repeat Step 1 through Step 5 for each remaining file system that can be unmounted and that you want to mirror.

  7. On each node, edit the /etc/vfstab file entry for each file system you mirrored.

    Replace the names in the device to mount and device to fsck columns with the mirror name.


    phys-schost# vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    /dev/md/dsk/mirror /dev/md/rdsk/mirror /filesystem ufs 2 no global
  8. Attach the second submirror to the mirror.

    This attachment starts a synchronization of the submirrors.


    phys-schost# metattach mirror submirror2
    
  9. Wait for the synchronization of the mirrors, started in Step 8, to be completed.

    Use the metastat(1M) command to view mirror status.


    phys-schost# metastat mirror
    
  10. If the disk that is used to mirror the user-defined file system is physically connected to more than one node (multihosted), ensure that the device-group node list contains only one node and that the localonly property is enabled.

    Ensure that the device group meets the following requirements:

    • The raw-disk device group must have only one node configured in its node list.

    • The localonly property of the raw-disk device group must be enabled. The localonly property prevents unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.

    1. If necessary, use the cldevice command to determine the name of the raw-disk device group.


      phys-schost# cldevice show node:/dev/rdsk/cNtXdY
      

      Tip –

      If you issue the command from a node that is physically connected to the disk, you can specify the disk name as cNtXdY instead of by its full device path name.


      In the following example, the raw-disk device-group name dsk/d2 is part of the DID device name.


      === DID Device Instances ===                   
      
      DID Device Name:                                /dev/did/rdsk/d2
        Full Device Path:                               phys-schost-1:/dev/rdsk/c1t1d0
        Full Device Path:                               phys-schost-3:/dev/rdsk/c1t1d0
      …

      See the cldevice(1CL) man page for more information.

    2. View the node list of the raw-disk device group.


      phys-schost# cldevicegroup show dsk/dN
      

      Output looks similar to the following for the device group dsk/d2:


      Device Group Name:                              dsk/d2
      …
        Node List:                                      phys-schost-1, phys-schost-3
      …
        localonly:                                      false
    3. If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.

      Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


      phys-schost# cldevicegroup remove-node -n node devicegroup
      
      -n node

      Specifies the node to remove from the device-group node list.

    4. Enable the localonly property of the raw-disk device group, if it is not already enabled.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


      phys-schost# cldevicegroup set -p localonly=true devicegroup
      
      -p

      Sets the value of a device-group property.

      localonly=true

      Enables the localonly property of the device group.

      For more information about the localonly property, see the cldevicegroup(1CL) man page.

  11. Mount the mirrored file system.


    phys-schost# mount /mount-point
    

    See the mount(1M) man page and Chapter 18, Mounting and Unmounting File Systems (Tasks), in System Administration Guide: Devices and File Systems for more information.
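
    Optionally, you can confirm that the file system is now mounted from the mirror volume. The following check is a sketch only; the mirror name d4 and the sizes shown are illustrative.


    phys-schost# df -k /mount-point
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/md/dsk/d4       4032504 1059408 2932766    27%    /mount-point
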


Example 4–5 Mirroring File Systems That Can Be Unmounted

The following example shows creation of mirror d4 to mirror /export, which resides on c0t0d0s4. Mirror d4 consists of submirror d14 on partition c0t0d0s4 and submirror d24 on partition c2t2d0s4. The /etc/vfstab file entry for /export is updated to use the mirror name d4. Device c2t2d0 is a multihost disk, so the localonly property is enabled.


phys-schost# umount /export
phys-schost# metainit -f d14 1 1 c0t0d0s4
d14: Concat/Stripe is setup
phys-schost# metainit d24 1 1 c2t2d0s4
d24: Concat/Stripe is setup
phys-schost# metainit d4 -m d14
d4: Mirror is setup
phys-schost# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
/dev/md/dsk/d4 /dev/md/rdsk/d4 /export ufs 2 no    global
phys-schost# metattach d4 d24
d4: Submirror d24 is attached
phys-schost# metastat d4
d4: Mirror
       Submirror 0: d14
          State: Okay
       Submirror 1: d24
          State: Resyncing
       Resync in progress: 15 % done
…
phys-schost# cldevice show phys-schost-3:/dev/rdsk/c2t2d0
…
DID Device Name:                                /dev/did/rdsk/d2
phys-schost# cldevicegroup show dsk/d2
Device Group Name:                              dsk/d2
…
  Node List:                                      phys-schost-1, phys-schost-3
…
  localonly:                                      false
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevicegroup set -p localonly=true dsk/d2 
phys-schost# mount /export

Next Steps

If you need to create disk sets, go to one of the following:

If you have sufficient disk sets for your needs, go to one of the following:

Troubleshooting

Some of the steps in this mirroring procedure might cause an error message that is similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.