This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.
The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations. Complete the procedures in the order that is indicated.
Table 5–1 Task Map: Installing and Configuring VxVM Software

| Task | Instructions |
|---|---|
| Plan the layout of your VxVM configuration. | |
| (Optional) Determine how you will create the root disk group on each node. | |
| Install VxVM software. | How to Install Veritas Volume Manager Software; VxVM installation documentation |
| (Optional) Create a root disk group. You can either encapsulate the root disk (UFS only) or create the root disk group on local, nonroot disks. | SPARC: How to Encapsulate the Root Disk; How to Create a Root Disk Group on a Nonroot Disk |
| (Optional) Mirror the encapsulated root disk. | How to Mirror the Encapsulated Root Disk |
| Create disk groups. | Creating Disk Groups in a Cluster |
The creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to How to Install Veritas Volume Manager Software.
Access to a node's root disk group must be restricted to only that node.
Remote nodes must never access data stored in another node's root disk group.
Do not use the cldevicegroup command to register the root disk group as a device group.
Whenever possible, configure the root disk group for each node on a nonshared disk.
Sun Cluster software supports the following methods to configure the root disk group.
Encapsulate the node's root disk (UFS only) – This method enables the root disk to be mirrored, which provides a boot alternative if the root disk is corrupted or damaged. To encapsulate the root disk you need two free disk slices as well as free cylinders, preferably at the beginning or the end of the disk.
You cannot encapsulate the root disk if it uses the ZFS file system. Instead, configure the root disk group on local nonroot disks.
Use local nonroot disks – This method provides an alternative to encapsulating the root disk. If a node's root disk is encapsulated, certain tasks that you might later perform, such as upgrading the Solaris OS or performing disaster recovery procedures, could be more complicated than if the root disk is not encapsulated. To avoid this potential added complexity, you can instead initialize or encapsulate local nonroot disks for use as root disk groups.
A root disk group that is created on local nonroot disks is local to that node, neither globally accessible nor highly available. As with the root disk, to encapsulate a nonroot disk you need two free disk slices as well as free cylinders at the beginning or the end of the disk.
See your VxVM installation documentation for more information.
Perform this procedure to install Veritas Volume Manager (VxVM) software on each global-cluster node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.
Perform the following tasks:
Ensure that all nodes in the cluster are running in cluster mode.
Obtain any Veritas Volume Manager (VxVM) license keys that you need to install.
Have available your VxVM installation documentation.
Become superuser on a cluster node that you intend to install with VxVM.
Insert the VxVM CD-ROM in the CD-ROM drive on the node.
Follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.
Run the clvxvm utility in noninteractive mode.
phys-schost# clvxvm initialize
The clvxvm utility performs necessary postinstallation tasks. The clvxvm utility also selects and configures a cluster-wide vxio driver major number. See the clvxvm(1CL) man page for more information.
SPARC: To enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.
See your VxVM documentation for information about how to add a license.
(Optional) Install the VxVM GUI.
See your VxVM documentation for information about installing the VxVM GUI.
Eject the CD-ROM.
Install any VxVM patches to support Sun Cluster software.
See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
Repeat Step 1 through Step 8 to install VxVM on any additional nodes.
SPARC: To enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.
If you do not install one or more nodes with VxVM, modify the /etc/name_to_major file on each non-VxVM node.
On a node that is installed with VxVM, determine the vxio major number setting.
phys-schost# grep vxio /etc/name_to_major
Become superuser on a node that you do not intend to install with VxVM.
Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.
phys-schost# vi /etc/name_to_major
vxio NNN
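As an alternative to editing the file interactively, the entry can be added with a short script. This is an illustrative sketch only, not part of the documented procedure: the path /tmp/name_to_major, the sample contents, and the major number 270 are made-up stand-ins for /etc/name_to_major and the NNN value obtained in Step a.

```shell
# Illustrative sketch: append a vxio entry only if one is not already
# present. /tmp/name_to_major and major number 270 are sample stand-ins
# for /etc/name_to_major and the NNN value from the VxVM node.
f=/tmp/name_to_major
printf 'clone 11\nsd 32\n' > "$f"    # sample existing contents
NNN=270
grep -q '^vxio ' "$f" || printf 'vxio %s\n' "$NNN" >> "$f"
grep '^vxio' "$f"
```

Guarding the append with grep keeps the edit idempotent: rerunning the snippet does not add a duplicate vxio line.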
Initialize the vxio entry.
phys-schost# drvconfig -b -i vxio -m NNN
Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.
When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
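That consistency requirement can be checked mechanically. The sketch below compares two sample files; on a real cluster you would collect /etc/name_to_major from each node (the file paths, contents, and major number 270 here are illustrative).

```shell
# Sketch: verify that two name_to_major-style files agree on the vxio
# major number. The files below are made-up samples standing in for
# /etc/name_to_major as gathered from two cluster nodes.
cat > /tmp/node1_n2m <<'EOF'
clone 11
vxio 270
EOF
cat > /tmp/node2_n2m <<'EOF'
clone 11
vxio 270
EOF
m1=$(awk '$1 == "vxio" {print $2}' /tmp/node1_n2m)
m2=$(awk '$1 == "vxio" {print $2}' /tmp/node2_n2m)
if [ "$m1" = "$m2" ]; then
    echo "vxio major numbers match: $m1"
else
    echo "vxio MISMATCH: $m1 vs $m2" >&2
fi
```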
To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.
Otherwise, proceed to Step 12.
A root disk group is optional.
Reboot each node on which you installed VxVM.
phys-schost# shutdown -g0 -y -i6
To create a root disk group, go to SPARC: How to Encapsulate the Root Disk (UFS only) or How to Create a Root Disk Group on a Nonroot Disk.
Otherwise, create disk groups. Go to Creating Disk Groups in a Cluster.
Perform this procedure to create a root disk group by encapsulating the UFS root disk. Root disk groups are optional. See your VxVM documentation for more information.
If your root disk uses ZFS, you can create a root disk group only on local nonroot disks. To create a root disk group on nonroot disks, instead perform the procedure in How to Create a Root Disk Group on a Nonroot Disk.
Ensure that you have installed VxVM as described in How to Install Veritas Volume Manager Software.
Become superuser on a node that you installed with VxVM.
Encapsulate the UFS root disk.
phys-schost# clvxvm encapsulate
See the clvxvm(1CL) man page for more information.
Repeat for any other node on which you installed VxVM.
To mirror the encapsulated root disk, go to How to Mirror the Encapsulated Root Disk.
Otherwise, go to Creating Disk Groups in a Cluster.
Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. The creation of a root disk group is optional.
If you want to create a root disk group on the root disk and the root disk uses UFS, instead perform procedures in SPARC: How to Encapsulate the Root Disk.
If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
Become superuser.
Start the vxinstall utility.
phys-schost# vxinstall
When prompted by the vxinstall utility, make the following choices or entries.
SPARC: To enable the VxVM cluster feature, supply the cluster feature license key.
Choose Custom Installation.
Do not encapsulate the boot disk.
Choose any disks to add to the root disk group.
Do not accept automatic reboot.
If the root disk group that you created contains one or more disks that connect to more than one node, ensure that fencing is disabled for such disks.
Use the following command to disable fencing for each shared disk in the root disk group.
phys-schost# cldevice set -p default_fencing=nofencing device

-p
Specifies a device property.

default_fencing=nofencing
Disables fencing for the specified device.
Disabling fencing for the device prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.
For more information about the default_fencing property, see the cldevice(1CL) man page.
Evacuate any resource groups or device groups from the node.
phys-schost# clnode evacuate from-node

from-node
Specifies the name of the node from which to move resource or device groups.
Reboot the node.
phys-schost# shutdown -g0 -y -i6
Use the vxdiskadm command to add multiple disks to the root disk group.
The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
Create disk groups. Go to Creating Disk Groups in a Cluster.
After you install VxVM and encapsulate the root disk, perform this procedure on each node whose encapsulated root disk you want to mirror.
Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.
Become superuser.
List the devices.
phys-schost# cldevice list -v
Output looks similar to the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
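A DID device that appears on more than one node (d3 above) is a multihost disk, which matters later when fencing is configured. Such duplicates can be spotted with a one-line filter; the sketch below runs against a saved copy of the sample listing, not against a live cluster.

```shell
# Sketch: find DID devices listed on more than one path. The input is
# the sample `cldevice list -v` output from the text, saved to a file.
cat > /tmp/cldevice_out <<'EOF'
d1    phys-schost-1:/dev/rdsk/c0t0d0
d2    phys-schost-1:/dev/rdsk/c0t6d0
d3    phys-schost-2:/dev/rdsk/c1t1d0
d3    phys-schost-1:/dev/rdsk/c1t1d0
EOF
# Count occurrences of each DID name; print names seen more than once.
awk '{count[$1]++} END {for (d in count) if (count[d] > 1) print d}' /tmp/cldevice_out
```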
Mirror the encapsulated root disk.
Follow the procedures in your VxVM documentation.
For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.
Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.
View the node list of the raw-disk device group for the device that you used to mirror the root disk.
The name of the device group is of the form dsk/dN, where dN is the DID device name.
phys-schost# cldevicegroup list -v dsk/dN

-v
Displays verbose output.
Output looks similar to the following.
Device group        Type                Node list
------------        ----                ---------
dsk/dN              Local_Disk          phys-schost-1, phys-schost-3
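Because of the dsk/dN naming convention, the device-group name can be derived directly from the DID device name. A trivial sketch, using the sample DID name d2:

```shell
# Sketch: build the raw-disk device-group name from a DID device name.
# d2 is a sample DID name as reported by `cldevice list`.
did=d2
dg="dsk/$did"
echo "$dg"    # the name to pass to cldevicegroup
```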
If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.
phys-schost# cldevicegroup remove-node -n node dsk/dN

-n node
Specifies the node to remove from the device-group node list.
Disable fencing for all disks in the raw-disk device group that connect to more than one node.
Disabling fencing for a device prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
phys-schost# cldevice set -p default_fencing=nofencing device

-p
Sets the value of a device property.

default_fencing=nofencing
Disables fencing for the specified device.
For more information about the default_fencing property, see the cldevice(1CL) man page.
Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.
The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c0t0d0, whose raw-disk device-group name is dsk/d2. Disk c0t0d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and fencing is disabled.
phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d2                  pcircinus1:/dev/rdsk/c0t0d0
…
Create the mirror by using VxVM procedures
phys-schost# cldevicegroup list -v dsk/d2
Device group        Type                Node list
------------        ----                ---------
dsk/d2              Local_Disk          phys-schost-1, phys-schost-3
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevice set -p default_fencing=nofencing c0t0d0
Create disk groups. Go to Creating Disk Groups in a Cluster.