Install and configure your local and multihost disks for Solstice DiskSuite/Solaris Volume Manager software by using the procedures in this appendix, along with the planning information in Planning Volume Management. See your Solstice DiskSuite/Solaris Volume Manager documentation for additional details.
The following procedures are in this appendix.
How to Set the Number of Metadevice or Volume Names and Disksets
How to Mirror File Systems Other than Root (/) That Cannot Be Unmounted
Before you begin, have available the following information.
Mappings of your storage disk drives.
The following completed configuration planning worksheets. See Planning Volume Management for planning guidelines.
“Local File Systems With Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes or “Local File Systems with Non-Mirrored Root Worksheet” in Sun Cluster 3.1 Release Notes
“Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes
“Volume Manager Configurations Worksheet” in Sun Cluster 3.1 Release Notes
“Metadevices Worksheet (Solstice DiskSuite/Solaris Volume Manager)” in Sun Cluster 3.1 Release Notes
The following table lists the tasks that you perform to install and configure Solstice DiskSuite/Solaris Volume Manager software for Sun Cluster configurations.
If you used SunPlex Manager to install Solstice DiskSuite software (Solaris 8), the procedures How to Install Solstice DiskSuite Software through How to Create State Database Replicas are already completed. Go to Mirroring the Root Disk or How to Create a Diskset to continue to configure Solstice DiskSuite software.
If you installed Solaris 9, Solaris Volume Manager is already installed and you can start at How to Set the Number of Metadevice or Volume Names and Disksets.
Task | For Instructions, Go To …
---|---
Plan the layout of your Solstice DiskSuite/Solaris Volume Manager configuration. | Solstice DiskSuite/Solaris Volume Manager Configuration Example
For Solaris 8, install Solstice DiskSuite software. | How to Install Solstice DiskSuite Software
Calculate the number of metadevice names and disksets needed for your configuration, and modify the /kernel/drv/md.conf file. | How to Set the Number of Metadevice or Volume Names and Disksets
Create state database replicas on the local disks. | How to Create State Database Replicas
(Optional) Mirror file systems on the root disk. | Mirroring the Root Disk
Create disksets by using the metaset command. | How to Create a Diskset
Add disk drives to the disksets. | Adding Drives to a Diskset
Repartition drives in a diskset to allocate space to slices 1 through 6. | How to Repartition Drives in a Diskset
List device ID pseudo-driver mappings and define metadevices or volumes in the /etc/lvm/md.tab files. | How to Create an md.tab File
Initialize the md.tab files. | How to Activate Metadevices or Volumes
For dual-string configurations, configure mediator hosts, check the status of mediator data, and, if necessary, fix bad mediator data. | Mediators Overview
Configure the cluster. | How to Add Cluster File Systems
The following example helps to explain the process of determining the number of disks to place in each diskset when you use Solstice DiskSuite/Solaris Volume Manager software. In this example, three storage devices are used. The existing applications are NFS (two file systems of 5 Gbytes each) and two ORACLE databases (one of 5 Gbytes and one of 10 Gbytes).
The following table shows the calculations that were used to determine the number of drives needed in the sample configuration. With three storage devices, you would need 28 drives, divided as evenly as possible among the three storage devices. Note that the 5-Gbyte file systems were given an additional Gbyte of disk space because the number of disks needed was rounded up. A quick shell cross-check of this arithmetic follows Table A–2.
Table A–2 Determining Drives Needed for a Configuration
Use | Data | Disk Storage Needed | Drives Needed
---|---|---|---
nfs1 | 5 Gbytes | 3x2.1 Gbyte disks * 2 (Mirror) | 6
nfs2 | 5 Gbytes | 3x2.1 Gbyte disks * 2 (Mirror) | 6
oracle1 | 5 Gbytes | 3x2.1 Gbyte disks * 2 (Mirror) | 6
oracle2 | 10 Gbytes | 5x2.1 Gbyte disks * 2 (Mirror) | 10
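The per-service rounding in Table A–2 can be reproduced from a shell. The following is a minimal sketch using nawk (standard on Solaris); the 5-Gbyte requirement and 2.1-Gbyte drive size come from the table, and the result is doubled for two-way mirroring.

# nawk 'BEGIN {
      gb = 5; drive = 2.1             # nfs1: 5 Gbytes on 2.1-Gbyte drives
      n = int(gb / drive)             # whole drives that fit
      if (n * drive < gb) n++        # round the partial drive up
      print "drives needed (mirrored):", n * 2
  }'
drives needed (mirrored): 6

Substituting 10 Gbytes for oracle2 yields 5 drives, or 10 when mirrored, which matches the table.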
The following table shows the allocation of drives between the two disksets and the four data services.
Table A–3 Division of Disksets
Diskset | Data Services | Disks | Storage Device 1 | Storage Device 2 | Storage Device 3
---|---|---|---|---|---
dg-schost-1 | nfs1, oracle1 | 12 | 4 | 4 | 4
dg-schost-2 | nfs2, oracle2 | 16 | 5 | 6 | 5
Initially, four disks on each storage device (a total of 12 disks) are assigned to dg-schost-1, and five or six disks on each (a total of 16) are assigned to dg-schost-2.
No hot spare disks are assigned to either diskset. A minimum of one hot spare disk per storage device per diskset enables one drive to be hot spared, which restores full two-way mirroring.
If you used SunPlex Manager to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.
If you installed Solaris 9 software, do not perform this procedure. Solaris Volume Manager software is installed with Solaris 9 software. Instead, go to How to Set the Number of Metadevice or Volume Names and Disksets.
Perform this task on each node in the cluster.
Become superuser on the cluster node.
If you install from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive on the node.
This step assumes that the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices.
Install the Solstice DiskSuite software packages.
If you have Solstice DiskSuite software patches to install, do not reboot after you install the Solstice DiskSuite software.
Install software packages in the order shown in the following example.
# cd /cdrom/sol_8_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages
# pkgadd -d . SUNWmdr SUNWmdu [SUNWmdx] optional-pkgs
The SUNWmdr and SUNWmdu packages are required for all Solstice DiskSuite installations. The SUNWmdx package is also required for the 64-bit Solstice DiskSuite installation.
See your Solstice DiskSuite installation documentation for information about optional software packages.
If you installed from a CD-ROM, eject the CD-ROM.
Install any Solstice DiskSuite patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Repeat Step 1 through Step 5 on the other nodes of the cluster.
From one node of the cluster, manually populate the global device namespace for Solstice DiskSuite.
# scgdevs |
Set the number of metadevice names and disksets expected in the cluster.
Go to How to Set the Number of Metadevice or Volume Names and Disksets.
If you used SunPlex Manager to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.
This procedure describes how to determine the number of Solstice DiskSuite metadevice or Solaris Volume Manager volume names and disksets needed for your configuration. This procedure also describes how to modify the /kernel/drv/md.conf file to specify these numbers.
The default number of metadevice or volume names per diskset is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later.
At the same time, keep the value of the nmd field and the md_nsets field as low as possible. Memory structures exist for all possible devices as determined by nmd and md_nsets, even if you have not created those devices. For optimal performance, keep the value of nmd and md_nsets only slightly higher than the number of metadevices or volumes you will use.
Have available the following completed worksheets:
“Disk Device Groups Worksheet” in Sun Cluster 3.1 Release Notes
Determine the total number of disksets you expect to need in the cluster, then add one for private disk management.
The cluster can have a maximum of 32 disksets, 31 disksets for general use plus one diskset for private disk management. The default number of disksets is 4. You will supply this value for the md_nsets field in Step 4.
Determine the largest metadevice or volume name you expect to need for any diskset in the cluster.
Each diskset can have a maximum of 8192 metadevice or volume names. You will supply this value for the nmd field in Step 4.
Determine the quantity of metadevice or volume names you expect to need for each diskset.
If you use local metadevices or volumes, ensure that each local metadevice or volume name is unique throughout the cluster and does not use the same name as any device ID (DID) in the cluster.
Choose a range of numbers to use exclusively for DID names and a range for each node to use exclusively for its local metadevice or volume names. For example, DIDs might use names in the range d1 to d99, local metadevices or volumes on node 1 might use names in the range d100 to d199, local metadevices or volumes on node 2 might use d200 to d299, and so on.
Determine the highest of the metadevice or volume names you expect to use in any diskset.
The quantity of metadevice or volume names to set is based on the metadevice or volume name value rather than on the actual quantity. For example, if your metadevice or volume names range from d950 to d1000, Solstice DiskSuite/Solaris Volume Manager software requires that you set the value at 1000 names, not 50.
On each node, become superuser and edit the /kernel/drv/md.conf file.
All cluster nodes (or cluster pairs in the cluster-pair topology) must have identical /kernel/drv/md.conf files, regardless of the number of disksets served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite/Solaris Volume Manager errors and possible loss of data.
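For illustration, a node's md entry might look similar to the following. The values here are hypothetical: md_nsets is set to 6 (five disksets plus one for private disk management) and nmd to 1024 (metadevice names up to d1024). Substitute the values you calculated in Step 1 and Step 2.

name="md" parent="pseudo" nmd=1024 md_nsets=6;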
On each node, perform a reconfiguration reboot.
# touch /reconfigure
# shutdown -g0 -y -i6
Changes to the /kernel/drv/md.conf file become operative after you perform a reconfiguration reboot.
Create local state database replicas. Go to How to Create State Database Replicas.
If you used SunPlex Manager to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.
Perform this procedure on each node in the cluster.
Become superuser on the cluster node.
Create replicas on one or more local disks for each cluster node by using the metadb command.
# metadb -af slice-1 slice-2 slice-3 |
To provide protection of state data, which is necessary to run Solstice DiskSuite/Solaris Volume Manager software, create at least three replicas for each node. Also, you can place replicas on more than one disk to provide protection if one of the disks fails.
See the metadb(1M) man page and your Solstice DiskSuite/Solaris Volume Manager documentation for details.
Verify the replicas.
# metadb |
The metadb command displays the list of replicas.
Do you intend to mirror file systems on the root disk?
If yes, go to Mirroring the Root Disk.
If no, go to How to Create a Diskset to create Solstice DiskSuite/Solaris Volume Manager disksets.
The following example shows three Solstice DiskSuite state database replicas, each created on a different disk. For Solaris Volume Manager, the replica size would be larger.
# metadb -af c0t0d0s7 c0t1d0s7 c1t0d0s7
# metadb
        flags           first blk       block count
     a       u          16              1034            /dev/dsk/c0t0d0s7
     a       u          1050            1034            /dev/dsk/c0t1d0s7
     a       u          2084            1034            /dev/dsk/c1t0d0s7
Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk. Each file system type is mirrored by using a different method.
Use the following procedures to mirror each type of file system.
Some of the steps in these mirroring procedures can cause an error message similar to the following, which is harmless and can be ignored.
metainit: dg-schost-1: d1s0: not a metadevice |
For local disk mirroring, do not use /dev/global as the path when you specify the disk name. If you specify this path for anything other than cluster file systems, the system cannot boot.
Use this procedure to mirror the root (/) file system.
Become superuser on a node of the cluster.
Use the metainit(1M) command to put the root slice in a single-slice (one-way) concatenation.
Use the physical disk name of the root-disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 root-disk-slice |
Create a second concatenation.
# metainit submirror2 1 1 submirror-disk-slice |
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1 |
The metadevice or volume name for the mirror must be unique throughout the cluster.
Run the metaroot(1M) command.
This command edits the /etc/vfstab and /etc/system files so the system can be booted with the root (/) file system on a metadevice or volume.
# metaroot mirror |
Run the lockfs(1M) command.
This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems.
# lockfs -fa |
Evacuate any resource groups or device groups from the node.
# scswitch -S -h node |
-S   Evacuates all resource groups and device groups
-h node   Specifies the name of the node from which to evacuate resource or device groups
Reboot the node.
This command remounts the newly mirrored root (/) file system.
# shutdown -g0 -y -i6 |
Use the metattach(1M) command to attach the second submirror to the mirror.
# metattach mirror submirror2 |
Is the disk that is used to mirror the root disk physically connected to more than one node (multiported)?
If no, go to Step 11.
If yes, enable the localonly property of the raw disk device group for the disk used to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm(1M) -L command to display the full device ID (DID) pseudo-driver name of the raw disk device group.
In the following example, the raw disk device group name dsk/d2 is part of the third column of output, which is the full DID pseudo-driver name.
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
# scconf -c -D name=dsk/d2,localonly=true
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
View the node list of the raw disk device group.
Output will look similar to the following, where N is the DID number.
# scconf -pvv | grep dsk/dN
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...
Does the node list contain more than one node name?
If yes, remove all nodes from the node list for the raw disk device group except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list.
# scconf -r -D name=dsk/dN,nodelist=node |
name=dsk/dN   Specifies the cluster-unique name of the raw disk device group
nodelist=node   Specifies the name of the node or nodes to remove from the node list
Use the scconf(1M) command to enable the localonly property.
When the localonly property is enabled, the raw disk device group is used exclusively by the node in its node list. This prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true |
name=rawdisk-groupname   Specifies the name of the raw disk device group
Record the alternate boot path for possible future use.
If the primary boot device fails, you can then boot from this alternate boot device. See “Troubleshooting the System” in Solstice DiskSuite 4.2.1 User's Guide or “Mirroring root (/) Special Considerations” in Solaris Volume Manager Administration Guide for more information about alternate boot devices.
# ls -l /dev/rdsk/root-disk-slice |
Repeat Step 1 through Step 11 on each remaining node of the cluster.
Ensure that each metadevice or volume name for a mirror is unique throughout the cluster.
Do you intend to mirror the global namespace, /global/.devices/node@nodeid?
If yes, go to How to Mirror the Global Namespace.
If no, go to Step 14.
Do you intend to mirror file systems that cannot be unmounted?
If yes, go to How to Mirror File Systems Other than Root (/) That Cannot Be Unmounted.
If no, go to Step 15.
Do you intend to mirror user-defined file systems?
If yes, go to How to Mirror File Systems That Can Be Unmounted.
If no, go to How to Create a Diskset to create a diskset.
The following example shows creation of mirror d0 on the node phys-schost-1, which consists of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Disk c2t2d0 is a multiported disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
# metainit d20 1 1 c2t2d0s0
d20: Concat/Stripe is setup
# metainit d0 -m d10
d0: Mirror is setup
# metaroot d0
# lockfs -fa

(Reboot the node)
# scswitch -S -h phys-schost-1
# shutdown -g0 -y -i6

(Attach the second submirror)
# metattach d0 d20
d0: Submirror d20 is attached

(Display the node list of the mirror disk's raw disk device group)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...

(Remove phys-schost-3 from the node list for the raw disk device group)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3

(Enable the localonly property of the mirrored disk's raw disk device group)
# scconf -c -D name=dsk/d2,localonly=true

(Record the alternate boot path)
# ls -l /dev/rdsk/c2t2d0s0
lrwxrwxrwx  1 root  root  57 Apr 25 20:11 /dev/rdsk/c2t2d0s0 ->
../../devices/node@1/pci@1f,0/pci@1/scsi@3,1/disk@2,0:a,raw
Use this procedure to mirror the global namespace, /global/.devices/node@nodeid.
Become superuser on a node of the cluster.
Put the global namespace slice in a single slice (one-way) concatenation.
Use the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice |
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice |
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1 |
The metadevice or volume name for the mirror must be unique throughout the cluster.
Attach the second submirror to the mirror.
This attachment starts a sync of the submirrors.
# metattach mirror submirror2 |
Edit the /etc/vfstab file entry for the /global/.devices/node@nodeid file system.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device              device               mount                          FS    fsck  mount    mount
#to mount            to fsck              point                          type  pass  at boot  options
#
/dev/md/dsk/mirror   /dev/md/rdsk/mirror  /global/.devices/node@nodeid   ufs   2     no       global
Repeat Step 1 through Step 6 on each remaining node of the cluster.
Ensure that each metadevice or volume name for a mirror is unique throughout the cluster.
Wait for the sync of the mirrors, started in Step 5, to complete.
Use the metastat(1M) command to view mirror status.
# metastat mirror |
Is the disk that is used to mirror the global namespace physically connected to more than one node (multiported)?
If no, go to Step 10.
If yes, enable the localonly property of the raw disk device group for the disk used to mirror the global namespace. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm(1M) command to display the full device ID (DID) pseudo-driver name of the raw disk device group.
In the following example, the raw disk device group name dsk/d2 is part of the third column of output, which is the full DID pseudo-driver name.
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
# scconf -c -D name=dsk/d2,localonly=true
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
View the node list of the raw disk device group.
Output will look similar to the following, where N is the DID number.
# scconf -pvv | grep dsk/dN
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...
Does the node list contain more than one node name?
If yes, remove all nodes from the node list for the raw disk device group except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list.
# scconf -r -D name=dsk/dN,nodelist=node |
name=dsk/dN   Specifies the cluster-unique name of the raw disk device group
nodelist=node   Specifies the name of the node or nodes to remove from the node list
Use the scconf(1M) command to enable the localonly property.
When the localonly property is enabled, the raw disk device group is used exclusively by the node in its node list. This prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true |
name=rawdisk-groupname   Specifies the name of the raw disk device group
Do you intend to mirror file systems other than root (/) that cannot be unmounted?
If yes, go to How to Mirror File Systems Other than Root (/) That Cannot Be Unmounted.
If no, go to Step 11.
Do you intend to mirror user-defined file systems?
If yes, go to How to Mirror File Systems That Can Be Unmounted.
If no, go to How to Create a Diskset to create a diskset.
The following example shows creation of mirror d101, which consists of submirror d111 on partition c0t0d0s3 and submirror d121 on partition c2t2d0s3. The /etc/vfstab file entry for /global/.devices/node@1 is updated to use the mirror name d101. Disk c2t2d0 is a multiported disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d111 1 1 c0t0d0s3
d111: Concat/Stripe is setup
# metainit d121 1 1 c2t2d0s3
d121: Concat/Stripe is setup
# metainit d101 -m d111
d101: Mirror is setup
# metattach d101 d121
d101: Submirror d121 is attached

(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device             device              mount                         FS    fsck  mount    mount
#to mount           to fsck             point                         type  pass  at boot  options
#
/dev/md/dsk/d101    /dev/md/rdsk/d101   /global/.devices/node@1       ufs   2     no       global

(View the sync status)
# metastat d101
d101: Mirror
      Submirror 0: d111
         State: Okay
      Submirror 1: d121
         State: Resyncing
      Resync in progress: 15 % done
...

(Identify the DID name of the mirrored disk's raw disk device group)
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2

(Display the node list of the mirror disk's raw disk device group)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...

(Remove phys-schost-3 from the node list for the raw disk device group)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3

(Enable the localonly property of the mirrored disk's raw disk device group)
# scconf -c -D name=dsk/d2,localonly=true
Use this procedure to mirror file systems other than root (/) that cannot be unmounted during normal system usage, such as /usr, /opt, or swap.
Become superuser on a node of the cluster.
Put the slice on which an unmountable file system resides in a single slice (one-way) concatenation.
Use the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice |
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice |
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1 |
The metadevice or volume name for the mirror does not need to be unique throughout the cluster.
Repeat Step 1 through Step 4 for each unmountable file system to be mirrored.
On each node, edit the /etc/vfstab file entry for each unmountable file system you mirrored.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device              device               mount         FS    fsck  mount    mount
#to mount            to fsck              point         type  pass  at boot  options
#
/dev/md/dsk/mirror   /dev/md/rdsk/mirror  /filesystem   ufs   2     no       global
Evacuate any resource groups or device groups from the node.
# scswitch -S -h node |
-S   Evacuates all resource groups and device groups
-h node   Specifies the name of the node from which to evacuate resource or device groups
Reboot the node.
# shutdown -g0 -y -i6 |
Attach the second submirror to each mirror.
This attachment starts a sync of the submirrors.
# metattach mirror submirror2 |
Wait for the sync of the mirrors, started in Step 9, to complete.
Use the metastat(1M) command to view mirror status.
# metastat mirror |
Is the disk that is used to mirror the unmountable file system physically connected to more than one node (multiported)?
If no, go to Step 12.
If yes, enable the localonly property of the raw disk device group for the disk used to mirror the unmountable file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm -L command to display the full device ID (DID) pseudo-driver name of the raw disk device group.
In the following example, the raw disk device group name dsk/d2 is part of the third column of output, which is the full DID pseudo-driver name.
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
# scconf -c -D name=dsk/d2,localonly=true
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
View the node list of the raw disk device group.
Output will look similar to the following, where N is the DID number.
# scconf -pvv | grep dsk/dN
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...
Does the node list contain more than one node name?
If yes, remove all nodes from the node list for the raw disk device group except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list.
# scconf -r -D name=dsk/dN,nodelist=node |
name=dsk/dN   Specifies the cluster-unique name of the raw disk device group
nodelist=node   Specifies the name of the node or nodes to remove from the node list
Use the scconf(1M) command to enable the localonly property.
When the localonly property is enabled, the raw disk device group is used exclusively by the node in its node list. This prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true |
name=rawdisk-groupname   Specifies the name of the raw disk device group
Do you intend to mirror user-defined file systems?
If yes, go to How to Mirror File Systems That Can Be Unmounted.
If no, go to How to Create a Diskset to create a diskset.
The following example shows the creation of mirror d1 on the node phys-schost-1 to mirror /usr, which resides on c0t0d0s1. Mirror d1 consists of submirror d11 on partition c0t0d0s1 and submirror d21 on partition c2t2d0s1. The /etc/vfstab file entry for /usr is updated to use the mirror name d1. Disk c2t2d0 is a multiported disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
# metainit d21 1 1 c2t2d0s1
d21: Concat/Stripe is setup
# metainit d1 -m d11
d1: Mirror is setup

(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device             device              mount     FS    fsck  mount    mount
#to mount           to fsck             point     type  pass  at boot  options
#
/dev/md/dsk/d1      /dev/md/rdsk/d1     /usr      ufs   2     no       global

(Reboot the node)
# scswitch -S -h phys-schost-1
# shutdown -g0 -y -i6

(Attach the second submirror)
# metattach d1 d21
d1: Submirror d21 is attached

(View the sync status)
# metastat d1
d1: Mirror
      Submirror 0: d11
         State: Okay
      Submirror 1: d21
         State: Resyncing
      Resync in progress: 15 % done
...

(Identify the DID name of the mirrored disk's raw disk device group)
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2

(Display the node list of the mirror disk's raw disk device group)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...

(Remove phys-schost-3 from the node list for the raw disk device group)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3

(Enable the localonly property of the mirrored disk's raw disk device group)
# scconf -c -D name=dsk/d2,localonly=true
Use this procedure to mirror user-defined file systems that can be unmounted. In this procedure, nodes do not need to be rebooted.
Become superuser on a node of the cluster.
Put the slice on which a user-defined file system that can be unmounted resides in a single-slice (one-way) concatenation.
Use the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice |
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice |
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1 |
The metadevice or volume name for the mirror does not need to be unique throughout the cluster.
Repeat Step 1 through Step 4 for each mountable file system to be mirrored.
On each node, edit the /etc/vfstab file entry for each file system you mirrored.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device              device               mount         FS    fsck  mount    mount
#to mount            to fsck              point         type  pass  at boot  options
#
/dev/md/dsk/mirror   /dev/md/rdsk/mirror  /filesystem   ufs   2     no       global
Attach the second submirror to the mirror.
This attachment starts a sync of the submirrors.
# metattach mirror submirror2 |
Wait for the sync of the mirrors, started in Step 7, to complete.
Use the metastat(1M) command to view mirror status.
# metastat mirror |
Is the disk that is used to mirror the user-defined file system physically connected to more than one node (multiported)?
If no, go to Step 10.
If yes, enable the localonly property of the raw disk device group for the disk used to mirror the user-defined file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm -L command to display the full device ID (DID) pseudo-driver name of the raw disk device group.
In the following example, the raw disk device group name dsk/d2 is part of the third column of output, which is the full DID pseudo-driver name.
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
# scconf -c -D name=dsk/d2,localonly=true
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
View the node list of the raw disk device group.
Output will look similar to the following, where N is the DID number.
# scconf -pvv | grep dsk/dN
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...
Does the node list contain more than one node name?
If yes, remove all nodes from the node list for the raw disk device group except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list.
# scconf -r -D name=dsk/dN,nodelist=node |
name=dsk/dN   Specifies the cluster-unique name of the raw disk device group
nodelist=node   Specifies the name of the node or nodes to remove from the node list
Use the scconf(1M) command to enable the localonly property.
When the localonly property is enabled, the raw disk device group is used exclusively by the node in its node list. This prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true |
name=rawdisk-groupname   Specifies the name of the raw disk device group
Create a diskset.
Go to How to Create a Diskset.
The following example shows creation of mirror d4 to mirror /export, which resides on c0t0d0s4. Mirror d4 consists of submirror d14 on partition c0t0d0s4 and submirror d24 on partition c2t2d0s4. The /etc/vfstab file entry for /export is updated to use the mirror name d4. Disk c2t2d0 is a multiported disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d14 1 1 c0t0d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c2t2d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup

(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device             device              mount     FS    fsck  mount    mount
#to mount           to fsck             point     type  pass  at boot  options
#
/dev/md/dsk/d4      /dev/md/rdsk/d4     /export   ufs   2     no       global

(Attach the second submirror)
# metattach d4 d24
d4: Submirror d24 is attached

(View the sync status)
# metastat d4
d4: Mirror
      Submirror 0: d14
         State: Okay
      Submirror 1: d24
         State: Resyncing
      Resync in progress: 15 % done
...

(Identify the DID name of the mirrored disk's raw disk device group)
# scdidadm -L
...
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2

(Display the node list of the mirror disk's raw disk device group)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
...
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
...

(Remove phys-schost-3 from the node list for the raw disk device group)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3

(Enable the localonly property of the mirrored disk's raw disk device group)
# scconf -c -D name=dsk/d2,localonly=true
Perform this procedure for each diskset you create.
If you used SunPlex Manager to install Solstice DiskSuite, one to three disksets might already exist. See Using SunPlex Manager to Install Sun Cluster Software for information about the metasets created by SunPlex Manager.
Do you intend to create more than three disksets in the cluster?
Ensure that the value of the md_nsets variable is set high enough to accommodate the total number of disksets you intend to create in the cluster.
On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.
If the total number of disksets in the cluster will be greater than the existing value of md_nsets minus one, on each node increase the value of md_nsets to the desired value.
The maximum permissible number of disksets is one less than the value of md_nsets. The maximum possible value of md_nsets is 32.
Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.
Failure to follow this guideline can result in serious Solstice DiskSuite/Solaris Volume Manager errors and possible loss of data.
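One quick way to compare the setting across nodes is to display the md entry on each node; the output below is illustrative, matching the hypothetical values shown earlier, and must be identical everywhere.

# grep md_nsets /kernel/drv/md.conf
name="md" parent="pseudo" nmd=1024 md_nsets=6;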
From one node, shut down the cluster.
# scshutdown -g0 -y |
Reboot each node of the cluster.
ok boot |
On each node in the cluster, run the devfsadm(1M) command.
You can run this command on all nodes in the cluster at the same time.
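No options are needed for this purpose; as superuser, run the bare command on each node.

# devfsadm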
From one node of the cluster, run the scgdevs(1M) command.
On each node, verify that the scgdevs command has completed before you attempt to create any disksets.
The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.
% ps -ef | grep scgdevs |
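While scgdevs is still running on a node, the output includes a line similar to the following illustrative example (process IDs and timestamps will differ); when processing has completed, only the grep command itself appears.

% ps -ef | grep scgdevs
    root  5296  5273  0 12:27:01 pts/0  0:00 /usr/cluster/bin/scgdevs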
Ensure that the diskset you intend to create meets one of the following requirements.
If configured with exactly two disk strings, the diskset must connect to exactly two nodes and use exactly two mediator hosts, which must be the same two hosts used for the diskset. See Mediators Overview for details on how to set up mediators.
If configured with more than two disk strings, ensure that for any two disk strings S1 and S2, the sum of the number of disks on those strings exceeds the number of disks on the third string S3. Stated as a formula, the requirement is that count(S1) + count(S2) > count(S3). For example, strings of 10, 10, and 15 disks qualify because 10 + 10 > 15.
Ensure that the local state database replicas exist.
For instructions, see How to Create State Database Replicas.
Become superuser on the cluster node that will master the diskset.
Create the diskset.
This command also registers the diskset as a Sun Cluster disk device group.
# metaset -s setname -a -h node1 node2 |
-s setname   Specifies the diskset name
-a   Adds (creates) the diskset
-h node1   Specifies the name of the primary node to master the diskset
node2   Specifies the name of the secondary node to master the diskset
Running the metaset command to set up a Solstice DiskSuite/Solaris Volume Manager device group on a cluster creates one secondary node by default, regardless of the number of nodes that are included in that device group. You can change the desired number of secondary nodes by using the scsetup(1M) utility after the device group is created. Refer to “Administering Disk Device Groups” in Sun Cluster 3.1 System Administration Guide for more information about how to change the numsecondaries property.
Verify the status of the new diskset.
# metaset -s setname |
Add drives to the diskset.
Go to Adding Drives to a Diskset.
The following command creates two disksets, dg-schost-1 and dg-schost-2, with the nodes phys-schost-1 and phys-schost-2 assigned as the potential primaries.
# metaset -s dg-schost-1 -a -h phys-schost-1 phys-schost-2
# metaset -s dg-schost-2 -a -h phys-schost-1 phys-schost-2
When you add a disk drive to a diskset, Solstice DiskSuite/Solaris Volume Manager repartitions it as follows so that the state database for the diskset can be placed on the drive.
A small portion of each drive is reserved in slice 7 for use by Solstice DiskSuite/Solaris Volume Manager software. The remainder of the space on each drive is placed into slice 0.
Drives are repartitioned when they are added to the diskset only if slice 7 is not set up correctly.
Any existing data on the disks is lost by the repartitioning.
If slice 7 starts at cylinder 0, and the disk is large enough to contain a state database replica, the disk is not repartitioned.
Become superuser on the node.
Ensure that the diskset has been created.
For instructions, see How to Create a Diskset.
List the device ID (DID) mappings.
# scdidadm -L |
Choose drives that are shared by the cluster nodes that will master or potentially master the diskset.
Use the full DID pseudo-driver names when you add drives to a diskset.
The first column of output is the DID instance number, the second column is the full path (physical path), and the third column is the full DID pseudo-driver name (pseudo path). A shared drive has more than one entry for the same DID instance number.
In the following example, the entries for DID instance number 2 indicate a drive that is shared by phys-schost-1 and phys-schost-2, and the full DID name is /dev/did/rdsk/d2.
1         phys-schost-1:/dev/rdsk/c0t0d0     /dev/did/rdsk/d1
2         phys-schost-1:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
2         phys-schost-2:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
3         phys-schost-1:/dev/rdsk/c1t2d0     /dev/did/rdsk/d3
3         phys-schost-2:/dev/rdsk/c1t2d0     /dev/did/rdsk/d3
...
Take ownership of the diskset.
# metaset -s setname -t |
-s setname   Specifies the diskset name
-t   Takes ownership of the diskset
Add the drives to the diskset.
Use the full DID pseudo-driver name.
# metaset -s setname -a DIDname |
-a   Adds the disk drive to the diskset
DIDname   Device ID (DID) name of the shared disk
Do not use the lower-level device name (cNtXdY) when you add a drive to a diskset. Because the lower-level device name is a local name and not unique throughout the cluster, using this name might prevent the metaset from being able to switch over.
Verify the status of the diskset and drives.
# metaset -s setname |
Do you intend to repartition drives for use in metadevices or volumes?
If yes, go to How to Repartition Drives in a Diskset.
If no, go to How to Create an md.tab File to define metadevices or volumes by using an md.tab file.
The metaset command adds the disk drives /dev/did/dsk/d1 and /dev/did/dsk/d2 to the diskset dg-schost-1.
# metaset -s dg-schost-1 -a /dev/did/dsk/d1 /dev/did/dsk/d2 |
The metaset(1M) command repartitions drives in a diskset so that a small portion of each drive is reserved in slice 7 for use by Solstice DiskSuite/Solaris Volume Manager software. The remainder of the space on each drive is placed into slice 0. To make more effective use of the disk, use this procedure to modify the disk layout. If you allocate space to slices 1 through 6, you can use these slices when you set up Solstice DiskSuite metadevices or Solaris Volume Manager volumes.
Become superuser on the cluster node.
Use the format command to change the disk partitioning for each drive in the diskset.
When you repartition a drive, you must meet the following conditions to prevent the metaset(1M) command from repartitioning the disk.
Create a partition 7 starting at cylinder 0 that is large enough to hold a state database replica (approximately 2 Mbytes).
Set the Flag field in slice 7 to wu (read-write, unmountable). Do not set it to read-only.
Do not allow slice 7 to overlap any other slice on the disk.
See the format(1M) man page for details.
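After you relabel a drive, you can verify the layout with the prtvtoc(1M) command before you return the drive to service. The following sketch is illustrative (the device name and sizes are hypothetical); the line for slice 7 shows a first sector of 0, meaning the slice starts at cylinder 0, and flag 01 (read-write, unmountable).

# prtvtoc /dev/rdsk/c1t1d0s2
...
*                            First     Sector    Last
* Partition  Tag  Flags      Sector    Count     Sector  Mount Directory
        7      0    01            0      4096      4095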
Define metadevices or volumes by using an md.tab file.
Go to How to Create an md.tab File.
Create an /etc/lvm/md.tab file on each node in the cluster. Use the md.tab file to define Solstice DiskSuite metadevices or Solaris Volume Manager volumes for the disksets you created.
If you are using local metadevices or volumes, ensure that each local metadevice or volume name is distinct from the device ID (DID) names used to form disksets. For example, if the DID name /dev/did/dsk/d3 is used in a diskset, do not use the name /dev/md/dsk/d3 for a local metadevice or volume. This requirement does not apply to shared metadevices or volumes, which use the naming convention /dev/md/setname/{r}dsk/d#.
To avoid possible confusion between local metadevices or volumes in a cluster environment, use a naming scheme that makes each local metadevice or volume name unique throughout the cluster. For example, for node 1 choose names from d100-d199, for node 2 use d200-d299, and so on.
Become superuser on the cluster node.
List the DID mappings for reference when you create your md.tab file.
Use the full DID pseudo-driver names in the md.tab file in place of the lower-level device names (cNtXdY).
# scdidadm -L |
In the following example, the first column of output is the DID instance number, the second column is the full path (physical path), and the third column is the full DID pseudo-driver name (pseudo path).
1         phys-schost-1:/dev/rdsk/c0t0d0     /dev/did/rdsk/d1
2         phys-schost-1:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
2         phys-schost-2:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
3         phys-schost-1:/dev/rdsk/c1t2d0     /dev/did/rdsk/d3
3         phys-schost-2:/dev/rdsk/c1t2d0     /dev/did/rdsk/d3
...
Create an /etc/lvm/md.tab file and edit it by hand with your preferred text editor.
See your Solstice DiskSuite/Solaris Volume Manager documentation and the md.tab(4) man page for details on how to create an md.tab file.
If you have existing data on the disks that will be used for the submirrors, you must back up the data before metadevice or volume setup and restore it onto the mirror.
Activate the metadevices or volumes defined in the md.tab files.
The following sample md.tab file defines the metadevices for the diskset named dg-schost-1. The ordering of lines in the md.tab file is not important.
dg-schost-1/d0 -t dg-schost-1/d1 dg-schost-1/d4
dg-schost-1/d1 -m dg-schost-1/d2
dg-schost-1/d2 1 1 /dev/did/rdsk/d1s4
dg-schost-1/d3 1 1 /dev/did/rdsk/d55s4
dg-schost-1/d4 -m dg-schost-1/d5
dg-schost-1/d5 1 1 /dev/did/rdsk/d3s5
dg-schost-1/d6 1 1 /dev/did/rdsk/d57s5
The sample md.tab file is constructed as follows.
The following example uses Solstice DiskSuite terminology. For Solaris Volume Manager, a trans metadevice is instead called a transactional volume and a metadevice is instead called a volume. Otherwise, the following process is valid for both volume managers.
The first line defines the trans metadevice d0 to consist of a master (UFS) metadevice d1 and a log device d4. The -t signifies this is a trans metadevice. The master and log devices are specified by their position after the -t flag.
dg-schost-1/d0 -t dg-schost-1/d1 dg-schost-1/d4 |
The second line defines the master device as a mirror of the metadevices. The -m in this definition signifies a mirror device, and one of the submirrors, d2, is associated with the mirror device, d1.
dg-schost-1/d1 -m dg-schost-1/d2 |
The fifth line similarly defines the log device, d4, as a mirror of metadevices.
dg-schost-1/d4 -m dg-schost-1/d5 |
The third line defines the first submirror of the master device, d2, as a one-way stripe.
dg-schost-1/d2 1 1 /dev/did/rdsk/d1s4 |
The fourth line defines the second submirror of the master device, d3.
dg-schost-1/d3 1 1 /dev/did/rdsk/d55s4 |
Finally, the log device submirrors, d5 and d6, are defined. In this example, a simple metadevice is created for each submirror.
dg-schost-1/d5 1 1 /dev/did/rdsk/d3s5
dg-schost-1/d6 1 1 /dev/did/rdsk/d57s5
Perform this procedure to activate Solstice DiskSuite metadevices or Solaris Volume Manager volumes defined in the md.tab files.
Become superuser on the cluster node.
Ensure that md.tab files are located in the /etc/lvm directory.
Ensure that you have ownership of the diskset on the node where the command will be executed.
Take ownership of the diskset.
# metaset -s setname -t |
-s setname   Specifies the diskset name
-t   Takes ownership of the diskset
Activate the diskset's metadevices or volumes, which are defined in the md.tab file.
# metainit -s setname -a |
-a   Activates all metadevices in the md.tab file
For each master and log device, attach the second submirror (submirror2).
When the metadevices or volumes in the md.tab file are activated, only the first submirror (submirror1) of the master and log devices is attached, so submirror2 must be attached by hand.
# metattach mirror submirror2 |
Repeat Step 3 through Step 6 for each diskset in the cluster.
If necessary, run the metainit(1M) command from another node that has connectivity to the disks. This step is required for cluster-pair topologies, where the disks are not accessible by all nodes.
Check the status of the metadevices or volumes.
# metastat -s setname |
See the metastat(1M) man page for more information.
Does your cluster contain disksets configured with exactly two disk enclosures and two nodes?
If yes, those disksets require mediators. Go to Mediators Overview to add mediator hosts.
If no, go to How to Add Cluster File Systems to create a cluster file system.
In the following example, all metadevices defined in the md.tab file for diskset dg-schost-1 are activated. Then the second submirrors of the master device dg-schost-1/d1 and the log device dg-schost-1/d4 are attached.
# metainit -s dg-schost-1 -a
# metattach dg-schost-1/d1 dg-schost-1/d3
# metattach dg-schost-1/d4 dg-schost-1/d6
A mediator, or mediator host, is a cluster node that stores mediator data. Mediator data provides information on the location of other mediators and contains a commit count that is identical to the commit count stored in the database replicas. This commit count is used to confirm that the mediator data is in sync with the data in the database replicas.
Mediators are required for all Solstice DiskSuite/Solaris Volume Manager disksets configured with exactly two disk strings and two cluster nodes. A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the node or nodes, and the interface adapter cards. The use of mediators enables the Sun Cluster software to ensure that the most current data is presented in the event of a single-string failure in a dual-string configuration. The following rules apply to dual-string configurations that use mediators.
Disksets must be configured with exactly two mediator hosts, and those two mediator hosts must be the same two cluster nodes used for the diskset.
A diskset cannot have more than two mediator hosts.
Mediators cannot be configured for disksets that do not meet the two-string and two-host criteria.
These rules do not require that the entire cluster have exactly two nodes. Rather, they only require that those disksets that have two disk strings must be connected to exactly two nodes. An N+1 cluster and many other topologies are permitted under these rules.
Perform this procedure if your configuration requires mediators.
Become superuser on the node that currently masters the diskset you intend to add mediator hosts to.
Run the metaset(1M) command to add each node with connectivity to the diskset as a mediator host for that diskset.
# metaset -s setname -a -m mediator-host-list |
-s setname   Specifies the diskset name
-a   Adds to the diskset
-m mediator-host-list   Specifies the name of the node to add as a mediator host for the diskset
See the mediator(7D) man page for details about mediator-specific options to the metaset command.
Check the status of mediator data.
The following example adds the nodes phys-schost-1 and phys-schost-2 as mediator hosts for the diskset dg-schost-1. Both commands are run from the node phys-schost-1.
# metaset -s dg-schost-1 -a -m phys-schost-1
# metaset -s dg-schost-1 -a -m phys-schost-2
Add mediator hosts as described in How to Add Mediator Hosts.
Run the medstat command.
# medstat -s setname |
-s setname   Specifies the diskset name
See the medstat(1M) man page for more information.
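For a healthy dual-string diskset, the output looks similar to the following illustrative example, which uses the host names from the earlier examples; a faulted mediator host instead shows Bad in the Status field.

# medstat -s dg-schost-1
Mediator                 Status      Golden
phys-schost-1            Ok          No
phys-schost-2            Ok          No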
Is the value in the Status field Bad?
If yes, go to How to Fix Bad Mediator Data to repair the affected mediator host.
If no, go to How to Add Cluster File Systems to create a cluster file system.
Perform this procedure to repair bad mediator data.
Identify the mediator host(s) with bad mediator data as described in the procedure How to Check the Status of Mediator Data.
Become superuser on the node that owns the affected diskset.
Remove the mediator host(s) with bad mediator data from all affected disksets.
# metaset -s setname -d -m mediator-host-list |
-s setname   Specifies the diskset name
-d   Deletes from the diskset
-m mediator-host-list   Specifies the name of the node to remove as a mediator host for the diskset
Restore the mediator host.
# metaset -s setname -a -m mediator-host-list |
-a   Adds to the diskset
-m mediator-host-list   Specifies the name of the node to add as a mediator host for the diskset
See the mediator(7D) man page for details about mediator-specific options to the metaset command.
Create a cluster file system. Go to How to Add Cluster File Systems.