This section provides information and procedures to install and configure Solstice DiskSuite or Solaris Volume Manager software. You can skip certain procedures under the following conditions:
If you used SunPlex Installer to install Solstice DiskSuite software (Solaris 8), the procedures How to Install Solstice DiskSuite Software through How to Create State Database Replicas are already completed. Go to Mirroring the Root Disk or Creating Disk Sets in a Cluster to continue to configure Solstice DiskSuite software.
If you installed Solaris 9 or Solaris 10 software, Solaris Volume Manager is already installed. You can start configuration at How to Set the Number of Metadevice or Volume Names and Disk Sets.
The following table lists the tasks that you perform to install and configure Solstice DiskSuite or Solaris Volume Manager software for Sun Cluster configurations.
Table 3–1 Task Map: Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software
| Task | Instructions |
|---|---|
| 1. Plan the layout of your Solstice DiskSuite or Solaris Volume Manager configuration. | |
| 2. (Solaris 8 only) Install Solstice DiskSuite software. | |
| 3. (Solaris 8 and Solaris 9 only) Calculate the number of metadevice names and disk sets needed for your configuration, and modify the /kernel/drv/md.conf file. | How to Set the Number of Metadevice or Volume Names and Disk Sets |
| 4. Create state database replicas on the local disks. | |
| 5. (Optional) Mirror file systems on the root disk. | |
Do not perform this procedure under the following circumstances:
You installed Solaris 9 software. Solaris Volume Manager software is automatically installed with Solaris 9 software. Instead, go to How to Set the Number of Metadevice or Volume Names and Disk Sets.
You installed Solaris 10 software. Instead, go to How to Create State Database Replicas.
You used SunPlex Installer to install Solstice DiskSuite software. Instead, do one of the following:
If you plan to create additional disk sets, go to How to Set the Number of Metadevice or Volume Names and Disk Sets.
If you do not plan to create additional disk sets, go to Mirroring the Root Disk.
Perform this task on each node in the cluster.
Perform the following tasks:
Make mappings of your storage drives.
Complete the following configuration planning worksheets. See Planning Volume Management for planning guidelines.
Become superuser on the cluster node.
If you install from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM in the CD-ROM drive on the node.
This step assumes that the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices.
Install the Solstice DiskSuite software packages.
Install the packages in the order that is shown in the following example.
# cd /cdrom/sol_8_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages
# pkgadd -d . SUNWmdr SUNWmdu [SUNWmdx] optional-pkgs
The SUNWmdr and SUNWmdu packages are required for all Solstice DiskSuite installations.
The SUNWmdx package is also required for the 64-bit Solstice DiskSuite installation.
See your Solstice DiskSuite installation documentation for information about optional software packages.
If you have Solstice DiskSuite software patches to install, do not reboot after you install the Solstice DiskSuite software.
If you installed from a CD-ROM, eject the CD-ROM.
Install any Solstice DiskSuite patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
Repeat Step 1 through Step 5 on each of the other nodes of the cluster.
From one node of the cluster, manually populate the global-device namespace for Solstice DiskSuite.
# scgdevs
If you used SunPlex Installer to install Solstice DiskSuite software, go to Mirroring the Root Disk.
If the cluster runs on the Solaris 10 OS, go to How to Create State Database Replicas.
Otherwise, go to How to Set the Number of Metadevice or Volume Names and Disk Sets.
The scgdevs command might return a message similar to the message Could not open /dev/rdsk/c0t6d0s2 to verify device id, Device busy. If the listed device is a CD-ROM device, you can safely ignore the message.
Do not perform this procedure in the following circumstances:
The cluster runs on the Solaris 10 OS. Instead, go to How to Create State Database Replicas.
With the Solaris 10 release, Solaris Volume Manager has been enhanced to configure volumes dynamically. You no longer need to edit the nmd and the md_nsets parameters in the /kernel/drv/md.conf file. New volumes are dynamically created, as needed.
You used SunPlex Installer to install Solstice DiskSuite software. Instead, go to Mirroring the Root Disk.
This procedure describes how to determine the number of Solstice DiskSuite metadevice or Solaris Volume Manager volume names and disk sets that are needed for your configuration. This procedure also describes how to modify the /kernel/drv/md.conf file to specify these numbers.
The default number of metadevice or volume names per disk set is 128, but many configurations need more than the default. Increase this number before you implement a configuration, to save administration time later.
At the same time, keep the values of the nmd field and the md_nsets field as low as possible. Memory structures exist for all possible devices as determined by nmd and md_nsets, even if you have not created those devices. For optimal performance, keep the values of nmd and md_nsets only slightly higher than the number of metadevices or volumes that you plan to use.
Have available the completed Disk Device Group Configurations Worksheet.
Calculate the total number of disk sets that you expect to need in the cluster, then add one more disk set for private disk management.
The cluster can have a maximum of 32 disk sets, 31 disk sets for general use plus one disk set for private disk management. The default number of disk sets is 4. You supply this value for the md_nsets field in Step 3.
Calculate the largest metadevice or volume name that you expect to need for any disk set in the cluster.
Each disk set can have a maximum of 8192 metadevice or volume names. You supply this value for the nmd field in Step 3.
Determine the quantity of metadevice or volume names that you expect to need for each disk set.
If you use local metadevices or volumes, ensure that each local metadevice or volume name on which a global-devices file system, /global/.devices/node@nodeid, is mounted is unique throughout the cluster and does not use the same name as any device-ID name in the cluster.
Choose a range of numbers to use exclusively for device-ID names and a range for each node to use exclusively for its local metadevice or volume names. For example, device-ID names might use the range from d1 to d100. Local metadevices or volumes on node 1 might use names in the range from d100 to d199. And local metadevices or volumes on node 2 might use d200 to d299.
Calculate the highest of the metadevice or volume names that you expect to use in any disk set.
The quantity of metadevice or volume names to set is based on the metadevice or volume name value rather than on the actual quantity. For example, if your metadevice or volume names range from d950 to d1000, Solstice DiskSuite or Solaris Volume Manager software requires that you set the value at 1000 names, not 50.
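The sizing rule above can be illustrated with a trivial shell sketch. The variable names are ours, for illustration only; they are not part of the procedure:

```shell
# Illustration only: the nmd value is sized by the highest metadevice or
# volume name in use, not by the count of volumes. Names d950 through
# d1000 require nmd to cover 1000 names, even though only 50 volumes exist.
highest_name=1000   # highest metadevice/volume number you plan to use
volume_count=50     # number of volumes actually created; irrelevant to nmd
nmd=$highest_name
echo "set nmd to at least $nmd (the count of $volume_count does not matter)"
```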
On each node, become superuser and edit the /kernel/drv/md.conf file.
All cluster nodes (or cluster pairs in the cluster-pair topology) must have identical /kernel/drv/md.conf files, regardless of the number of disk sets served by each node. Failure to follow this guideline can result in serious Solstice DiskSuite or Solaris Volume Manager errors and possible loss of data.
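For reference, the fields being edited look similar to the following excerpt. The values shown (md_nsets=10, nmd=1024) are illustrative assumptions for a configuration that needs 10 disk sets and names up to d1024; they are not recommendations:

```
# Excerpt from /kernel/drv/md.conf (hypothetical values)
name="md" parent="pseudo" nmd=1024 md_nsets=10;
```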
On each node, perform a reconfiguration reboot.
# touch /reconfigure
# shutdown -g0 -y -i6
Changes to the /kernel/drv/md.conf file become operative after you perform a reconfiguration reboot.
Create local state database replicas. Go to How to Create State Database Replicas.
If you used SunPlex Installer to install Solstice DiskSuite software, do not perform this procedure. Instead, go to Mirroring the Root Disk.
Perform this procedure on each node in the cluster.
Become superuser on the cluster node.
Create state database replicas on one or more local devices for each cluster node.
Use the physical name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.
# metadb -af slice-1 slice-2 slice-3
To provide protection of state data, which is necessary to run Solstice DiskSuite or Solaris Volume Manager software, create at least three replicas for each node. Also, you can place replicas on more than one device to provide protection if one of the devices fails.
See the metadb(1M) man page and your Solstice DiskSuite or Solaris Volume Manager documentation for details.
Verify the replicas.
# metadb
The metadb command displays the list of replicas.
The following example shows three Solstice DiskSuite state database replicas. Each replica is created on a different device. For Solaris Volume Manager, the replica size would be larger.
# metadb -af c0t0d0s7 c0t1d0s7 c1t0d0s7
# metadb
        flags           first blk       block count
     a       u          16              1034            /dev/dsk/c0t0d0s7
     a       u          16              1034            /dev/dsk/c0t1d0s7
     a       u          16              1034            /dev/dsk/c1t0d0s7
To mirror file systems on the root disk, go to Mirroring the Root Disk.
Otherwise, go to Creating Disk Sets in a Cluster to create Solstice DiskSuite or Solaris Volume Manager disk sets.
Mirroring the root disk prevents the cluster node itself from shutting down because of a system disk failure. Four types of file systems can reside on the root disk. Each file-system type is mirrored by using a different method.
Use the following procedures to mirror each type of file system.
For local disk mirroring, do not use /dev/global as the path when you specify the disk name. If you specify this path for anything other than cluster file systems, the system cannot boot.
Use this procedure to mirror the root (/) file system.
Become superuser on the node.
Place the root slice in a single-slice (one-way) concatenation.
Specify the physical disk name of the root-disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 root-disk-slice
Create a second concatenation.
# metainit submirror2 1 1 submirror-disk-slice
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1
If the device is a local device to be used to mount a global-devices file system, /global/.devices/node@nodeid, the metadevice or volume name for the mirror must be unique throughout the cluster.
Run the metaroot(1M) command.
This command edits the /etc/vfstab and /etc/system files so the system can be booted with the root (/) file system on a metadevice or volume.
# metaroot mirror
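For reference, after metaroot runs, the root entry in /etc/vfstab and the section that metaroot adds to /etc/system look similar to the following. The mirror name d0 is assumed here for illustration; your mirror name will differ:

```
# /etc/vfstab root entry rewritten by metaroot d0:
/dev/md/dsk/d0  /dev/md/rdsk/d0  /  ufs  1  no  -

* Lines added to /etc/system by metaroot:
* Begin MDD root info (do not edit)
forceload: misc/md_mirror
forceload: misc/md_stripe
forceload: drv/md
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
```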
Run the lockfs(1M) command.
This command flushes all transactions out of the log and writes the transactions to the master file system on all mounted UFS file systems.
# lockfs -fa
Move any resource groups or device groups from the node.
# scswitch -S -h from-node
-S
  Moves all resource groups and device groups.
-h from-node
  Specifies the name of the node from which to move resource or device groups.
Reboot the node.
This command remounts the newly mirrored root (/) file system.
# shutdown -g0 -y -i6
Use the metattach(1M) command to attach the second submirror to the mirror.
# metattach mirror submirror2
If the disk that is used to mirror the root disk is physically connected to more than one node (multihosted), enable the localonly property.
Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm(1M) -L command to display the full device-ID path name of the raw-disk device group.
In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
View the node list of the raw-disk device group.
Output looks similar to the following:
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.
# scconf -r -D name=dsk/dN,nodelist=node
name=dsk/dN
  Specifies the cluster-unique name of the raw-disk device group.
nodelist=node
  Specifies the name of the node or nodes to remove from the node list.
Use the scconf(1M) command to enable the localonly property.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true
name=rawdisk-groupname
  Specifies the name of the raw-disk device group.
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
Record the alternate boot path for possible future use.
If the primary boot device fails, you can then boot from this alternate boot device. See Chapter 7, Troubleshooting the System, in Solstice DiskSuite 4.2.1 User’s Guide, Special Considerations for Mirroring root (/) in Solaris Volume Manager Administration Guide, or Creating a RAID-1 Volume in Solaris Volume Manager Administration Guide for more information about alternate boot devices.
# ls -l /dev/rdsk/root-disk-slice
Repeat Step 1 through Step 11 on each remaining node of the cluster.
Ensure that each metadevice or volume name for a mirror on which a global-devices file system, /global/.devices/node@nodeid, is to be mounted is unique throughout the cluster.
The following example shows the creation of mirror d0 on the node phys-schost-1, which consists of submirror d10 on partition c0t0d0s0 and submirror d20 on partition c2t2d0s0. Device c2t2d0 is a multihost disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
# metainit d20 1 1 c2t2d0s0
d20: Concat/Stripe is setup
# metainit d0 -m d10
d0: Mirror is setup
# metaroot d0
# lockfs -fa
(Move resource groups and device groups from phys-schost-1)
# scswitch -S -h phys-schost-1
(Reboot the node)
# shutdown -g0 -y -i6
(Attach the second submirror)
# metattach d0 d20
d0: Submirror d20 is attached
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
(Record the alternate boot path)
# ls -l /dev/rdsk/c2t2d0s0
lrwxrwxrwx  1 root     root          57 Apr 25 20:11 /dev/rdsk/c2t2d0s0
-> ../../devices/node@1/pci@1f,0/pci@1/scsi@3,1/disk@2,0:a,raw
To mirror the global namespace, /global/.devices/node@nodeid, go to How to Mirror the Global Namespace.
To mirror file systems that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.
To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.
Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.
Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.
Use this procedure to mirror the global namespace, /global/.devices/node@nodeid/.
Become superuser on a node of the cluster.
Place the global namespace slice in a single-slice (one-way) concatenation.
Use the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1
The metadevice or volume name for a mirror on which a global-devices file system, /global/.devices/node@nodeid, is to be mounted must be unique throughout the cluster.
Attach the second submirror to the mirror.
This attachment starts a synchronization of the submirrors.
# metattach mirror submirror2
Edit the /etc/vfstab file entry for the /global/.devices/node@nodeid file system.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device             device              mount                         FS    fsck  mount    mount
#to mount           to fsck             point                         type  pass  at boot  options
#
/dev/md/dsk/mirror  /dev/md/rdsk/mirror /global/.devices/node@nodeid  ufs   2     no       global
Repeat Step 1 through Step 6 on each remaining node of the cluster.
Wait for the synchronization of the mirrors, started in Step 5, to be completed.
Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.
# metastat mirror
If the disk that is used to mirror the global namespace is physically connected to more than one node (multihosted), enable the localonly property.
Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the global namespace. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm(1M) command to display the full device-ID path name of the raw-disk device group.
In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
View the node list of the raw-disk device group.
Output looks similar to the following.
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
If the node list contains more than one node name, remove all nodes from the node list except the node whose disk is mirrored.
Only the node whose disk is mirrored should remain in the node list for the raw-disk device group.
# scconf -r -D name=dsk/dN,nodelist=node
name=dsk/dN
  Specifies the cluster-unique name of the raw-disk device group.
nodelist=node
  Specifies the name of the node or nodes to remove from the node list.
Enable the localonly property.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true
name=rawdisk-groupname
  Specifies the name of the raw-disk device group.
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
The following example shows creation of mirror d101, which consists of submirror d111 on partition c0t0d0s3 and submirror d121 on partition c2t2d0s3. The /etc/vfstab file entry for /global/.devices/node@1 is updated to use the mirror name d101. Device c2t2d0 is a multihost disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d111 1 1 c0t0d0s3
d111: Concat/Stripe is setup
# metainit d121 1 1 c2t2d0s3
d121: Concat/Stripe is setup
# metainit d101 -m d111
d101: Mirror is setup
# metattach d101 d121
d101: Submirror d121 is attached
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device           device            mount                     FS    fsck  mount    mount
#to mount         to fsck           point                     type  pass  at boot  options
#
/dev/md/dsk/d101  /dev/md/rdsk/d101 /global/.devices/node@1   ufs   2     no       global
(View the sync status)
# metastat d101
d101: Mirror
      Submirror 0: d111
         State: Okay
      Submirror 1: d121
         State: Resyncing
      Resync in progress: 15 % done
…
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
To mirror file systems other than root (/) that cannot be unmounted, go to How to Mirror File Systems Other Than Root (/) That Cannot Be Unmounted.
To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.
Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.
Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.
Use this procedure to mirror file systems other than root (/) that cannot be unmounted during normal system usage, such as /usr, /opt, or swap.
Become superuser on a node of the cluster.
Place the slice on which an unmountable file system resides in a single-slice (one-way) concatenation.
Specify the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1
The metadevice or volume name for this mirror does not need to be unique throughout the cluster.
Repeat Step 1 through Step 4 for each remaining unmountable file system that you want to mirror.
On each node, edit the /etc/vfstab file entry for each unmountable file system you mirrored.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device             device              mount         FS    fsck  mount    mount
#to mount           to fsck             point         type  pass  at boot  options
#
/dev/md/dsk/mirror  /dev/md/rdsk/mirror /filesystem   ufs   2     no       global
Move any resource groups or device groups from the node.
# scswitch -S -h from-node
-S
  Moves all resource groups and device groups.
-h from-node
  Specifies the name of the node from which to move resource or device groups.
Reboot the node.
# shutdown -g0 -y -i6
Attach the second submirror to each mirror.
This attachment starts a synchronization of the submirrors.
# metattach mirror submirror2
Wait for the synchronization of the mirrors, started in Step 9, to complete.
Use the metastat(1M) command to view mirror status and to verify that mirror synchronization is complete.
# metastat mirror
If the disk that is used to mirror the unmountable file system is physically connected to more than one node (is multihosted), enable the localonly property.
Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the unmountable file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm -L command to display the full device-ID path name of the raw-disk device group.
In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
View the node list of the raw-disk device group.
Output looks similar to the following.
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk is mirrored.
Only the node whose root disk is mirrored should remain in the node list for the raw-disk device group.
# scconf -r -D name=dsk/dN,nodelist=node
name=dsk/dN
  Specifies the cluster-unique name of the raw-disk device group.
nodelist=node
  Specifies the name of the node or nodes to remove from the node list.
Enable the localonly property.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true
name=rawdisk-groupname
  Specifies the name of the raw-disk device group.
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
The following example shows the creation of mirror d1 on the node phys-schost-1 to mirror /usr, which resides on c0t0d0s1. Mirror d1 consists of submirror d11 on partition c0t0d0s1 and submirror d21 on partition c2t2d0s1. The /etc/vfstab file entry for /usr is updated to use the mirror name d1. Device c2t2d0 is a multihost disk, so the localonly property is enabled.
(Create the mirror)
# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
# metainit d21 1 1 c2t2d0s1
d21: Concat/Stripe is setup
# metainit d1 -m d11
d1: Mirror is setup
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device          device           mount   FS    fsck  mount    mount
#to mount        to fsck          point   type  pass  at boot  options
#
/dev/md/dsk/d1   /dev/md/rdsk/d1  /usr    ufs   2     no       global
(Move resource groups and device groups from phys-schost-1)
# scswitch -S -h phys-schost-1
(Reboot the node)
# shutdown -g0 -y -i6
(Attach the second submirror)
# metattach d1 d21
d1: Submirror d21 is attached
(View the sync status)
# metastat d1
d1: Mirror
      Submirror 0: d11
         State: Okay
      Submirror 1: d21
         State: Resyncing
      Resync in progress: 15 % done
…
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
To mirror user-defined file systems, go to How to Mirror File Systems That Can Be Unmounted.
Otherwise, go to Creating Disk Sets in a Cluster to create a disk set.
Some of the steps in this mirroring procedure might cause an error message similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.
Use this procedure to mirror user-defined file systems that can be unmounted. In this procedure, the nodes do not need to be rebooted.
Become superuser on a node of the cluster.
Unmount the file system to mirror.
Ensure that no processes are running on the file system.
# umount /mount-point
See the umount(1M) man page and Chapter 19, Mounting and Unmounting File Systems (Tasks), in System Administration Guide: Devices and File Systems for more information.
Place in a single-slice (one-way) concatenation the slice that contains a user-defined file system that can be unmounted.
Specify the physical disk name of the disk slice (cNtXdYsZ).
# metainit -f submirror1 1 1 diskslice
Create a second concatenation.
# metainit submirror2 1 1 submirror-diskslice
Create a one-way mirror with one submirror.
# metainit mirror -m submirror1
The metadevice or volume name for this mirror does not need to be unique throughout the cluster.
Repeat Step 1 through Step 5 for each mountable file system to be mirrored.
On each node, edit the /etc/vfstab file entry for each file system you mirrored.
Replace the names in the device to mount and device to fsck columns with the mirror name.
# vi /etc/vfstab
#device             device              mount         FS    fsck  mount    mount
#to mount           to fsck             point         type  pass  at boot  options
#
/dev/md/dsk/mirror  /dev/md/rdsk/mirror /filesystem   ufs   2     no       global
Attach the second submirror to the mirror.
This attachment starts a synchronization of the submirrors.
# metattach mirror submirror2
Wait for the synchronization of the mirrors, started in Step 8, to be completed.
Use the metastat(1M) command to view mirror status.
# metastat mirror
If the disk that is used to mirror the user-defined file system is physically connected to more than one node (multihosted), enable the localonly property.
Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the user-defined file system. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm -L command to display the full device-ID path name of the raw-disk device group.
In the following example, the raw-disk device-group name dsk/d2 is part of the third column of output, which is the full device-ID path name.
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c1t1d0     /dev/did/rdsk/d2
View the node list of the raw-disk device group.
Output looks similar to the following.
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
If the node list contains more than one node name, remove all nodes from the node list except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.
# scconf -r -D name=dsk/dN,nodelist=node
name=dsk/dN
  Specifies the cluster-unique name of the raw-disk device group.
nodelist=node
  Specifies the name of the node or nodes to remove from the node list.
Enable the localonly property.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
# scconf -c -D name=rawdisk-groupname,localonly=true
name=rawdisk-groupname
  Specifies the name of the raw-disk device group.
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
Mount the mirrored file system.
# mount /mount-point
See the mount(1M) man page and Chapter 19, Mounting and Unmounting File Systems (Tasks), in System Administration Guide: Devices and File Systems for more information.
The following example shows creation of mirror d4 to mirror /export, which resides on c0t0d0s4. Mirror d4 consists of submirror d14 on partition c0t0d0s4 and submirror d24 on partition c2t2d0s4. The /etc/vfstab file entry for /export is updated to use the mirror name d4. Device c2t2d0 is a multihost disk, so the localonly property is enabled.
(Unmount the file system)
# umount /export
(Create the mirror)
# metainit -f d14 1 1 c0t0d0s4
d14: Concat/Stripe is setup
# metainit d24 1 1 c2t2d0s4
d24: Concat/Stripe is setup
# metainit d4 -m d14
d4: Mirror is setup
(Edit the /etc/vfstab file)
# vi /etc/vfstab
#device          device           mount     FS    fsck  mount    mount
#to mount        to fsck          point     type  pass  at boot  options
#
/dev/md/dsk/d4   /dev/md/rdsk/d4  /export   ufs   2     no       global
(Attach the second submirror)
# metattach d4 d24
d4: Submirror d24 is attached
(View the sync status)
# metastat d4
d4: Mirror
      Submirror 0: d14
         State: Okay
      Submirror 1: d24
         State: Resyncing
      Resync in progress: 15 % done
…
(Identify the device-ID name of the mirrored disk's raw-disk device group)
# scdidadm -L
…
1         phys-schost-3:/dev/rdsk/c2t2d0     /dev/did/rdsk/d2
(Display the device-group node list)
# scconf -pvv | grep dsk/d2
Device group name:                              dsk/d2
…
  (dsk/d2) Device group node list:              phys-schost-1, phys-schost-3
…
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true
(Mount the file system)
# mount /export
If you need to create disk sets, go to one of the following:
To create a Solaris Volume Manager for Sun Cluster disk set for use by Oracle Real Application Clusters, go to Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters Database in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
To create a disk set for any other application, go to Creating Disk Sets in a Cluster.
If you used SunPlex Installer to install Solstice DiskSuite, one to three disk sets might already exist. See Using SunPlex Installer to Configure Sun Cluster Software for information about the metasets that were created by SunPlex Installer.
If you have sufficient disk sets for your needs, go to one of the following:
If your cluster contains disk sets that are configured with exactly two disk enclosures and two nodes, you must add dual-string mediators. Go to Configuring Dual-String Mediators.
If your cluster configuration does not require dual-string mediators, go to How to Create Cluster File Systems.
Some of the steps in this mirroring procedure might cause an error message that is similar to metainit: dg-schost-1: d1s0: not a metadevice. Such an error message is harmless and can be ignored.