Install and configure your local and multihost disks for Veritas Volume Manager (VxVM) by using the procedures in this chapter, along with the planning information in Planning Volume Management. See your VxVM documentation for additional details.
This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.
The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations. Complete the procedures in the order that is indicated.
Table 5–1 Task Map: Installing and Configuring VxVM Software
Task | Instructions
---|---
Plan the layout of your VxVM configuration. |
(Optional) Determine how you will create the root disk group on each node. |
Install VxVM software. | How to Install Veritas Volume Manager Software; VxVM installation documentation
(Optional) Create a root disk group. You can either encapsulate the root disk or create the root disk group on local, nonroot disks. |
(Optional) Mirror the encapsulated root disk. |
Create disk groups. |
The creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to How to Install Veritas Volume Manager Software.
Access to a node's root disk group must be restricted to only that node.
Remote nodes must never access data stored in another node's root disk group.
Do not use the cldevicegroup command to register the root disk group as a device group.
Whenever possible, configure the root disk group for each node on a nonshared disk.
Sun Cluster software supports the following methods to configure the root disk group.
Encapsulate the node's root disk – This method enables the root disk to be mirrored, which provides a boot alternative if the root disk is corrupted or damaged. To encapsulate the root disk, you need two free disk slices as well as free cylinders, preferably at the beginning or the end of the disk.
Use local nonroot disks – This method provides an alternative to encapsulating the root disk. If a node's root disk is encapsulated, certain tasks that you might later perform, such as upgrading the Solaris OS or performing disaster recovery procedures, could be more complicated than if the root disk were not encapsulated. To avoid this potential added complexity, you can instead initialize or encapsulate local nonroot disks for use as root disk groups.
A root disk group that is created on local nonroot disks is local to that node; it is neither globally accessible nor highly available. As with the root disk, to encapsulate a nonroot disk you need two free disk slices as well as free cylinders at the beginning or the end of the disk.
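Before you choose a disk to encapsulate, you can check how many slices are still unassigned. The following sketch counts unassigned slices in prtvtoc-style output; the VTOC data shown is illustrative sample text, not output from a real cluster node:

```shell
# Sample prtvtoc-style VTOC listing (illustrative only): comment lines start
# with '*', data lines start with the slice number.
vtoc='*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector
       0      2    00          0   4194304   4194303
       1      3    01    4194304   2097152   6291455
       2      5    00          0  17682084  17682083'

# Count how many of slices 0-7 have no entry in the VTOC, and so are free
# for use by VxVM during encapsulation.
free=$(echo "$vtoc" | awk '$1 ~ /^[0-9]+$/ { used[$1] = 1 }
    END { n = 0; for (s = 0; s <= 7; s++) if (!(s in used)) n++; print n }')
echo "unassigned slices: $free"
```

On a live node you would feed the sketch real `prtvtoc /dev/rdsk/cNtXdYs2` output instead of the sample string.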
See your VxVM installation documentation for more information.
Perform this procedure to install Veritas Volume Manager (VxVM) software on each global-cluster node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.
Perform the following tasks:
Ensure that all nodes in the cluster are running in cluster mode.
Obtain any Veritas Volume Manager (VxVM) license keys that you need to install.
Have available your VxVM installation documentation.
Become superuser on a cluster node that you intend to install with VxVM.
Insert the VxVM CD-ROM in the CD-ROM drive on the node.
Follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.
Run the clvxvm utility in noninteractive mode.
phys-schost# clvxvm initialize
The clvxvm utility performs necessary postinstallation tasks. The clvxvm utility also selects and configures a cluster-wide vxio driver major number. See the clvxvm(1CL) man page for more information.
SPARC: To enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.
See your VxVM documentation for information about how to add a license.
(Optional) Install the VxVM GUI.
See your VxVM documentation for information about installing the VxVM GUI.
Eject the CD-ROM.
Install any VxVM patches to support Sun Cluster software.
See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
Repeat Step 1 through Step 8 to install VxVM on any additional nodes.
SPARC: To enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.
If you do not install one or more nodes with VxVM, modify the /etc/name_to_major file on each non-VxVM node.
On a node that is installed with VxVM, determine the vxio major number setting.
phys-schost# grep vxio /etc/name_to_major
Become superuser on a node that you do not intend to install with VxVM.
Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.
phys-schost# vi /etc/name_to_major
vxio NNN
Initialize the vxio entry.
phys-schost# drvconfig -b -i vxio -m NNN
Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.
When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
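The consistency check described above can be sketched as follows. The file contents and the major number 315 are sample values standing in for each node's /etc/name_to_major file:

```shell
# Create two sample name_to_major-style files (illustrative contents only;
# on a cluster you would compare /etc/name_to_major from each node).
printf 'clone 11\nvxio 315\nsd 32\n' > /tmp/ntm_node1
printf 'vxio 315\nsd 32\n'           > /tmp/ntm_node2

# Extract the vxio major number from each file and compare.
m1=$(awk '$1 == "vxio" { print $2 }' /tmp/ntm_node1)
m2=$(awk '$1 == "vxio" { print $2 }' /tmp/ntm_node2)
if [ "$m1" = "$m2" ]; then
    echo "vxio major number consistent: $m1"
else
    echo "mismatch: $m1 vs $m2"
fi
```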
To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.
Otherwise, proceed to Step 12.
A root disk group is optional.
Reboot each node on which you installed VxVM.
phys-schost# shutdown -g0 -y -i6
To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.
Otherwise, create disk groups. Go to Creating Disk Groups in a Cluster.
Perform this procedure to create a root disk group by encapsulating the root disk. Root disk groups are optional. See your VxVM documentation for more information.
If you want to create a root disk group on nonroot disks, instead perform procedures in How to Create a Root Disk Group on a Nonroot Disk.
Ensure that you have installed VxVM as described in How to Install Veritas Volume Manager Software.
Become superuser on a node that you installed with VxVM.
Encapsulate the root disk.
phys-schost# clvxvm encapsulate
See the clvxvm(1CL) man page for more information.
Repeat for any other node on which you installed VxVM.
To mirror the encapsulated root disk, go to How to Mirror the Encapsulated Root Disk.
Otherwise, go to Creating Disk Groups in a Cluster.
Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. The creation of a root disk group is optional.
If you want to create a root disk group on the root disk, instead perform procedures in SPARC: How to Encapsulate the Root Disk.
If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
Become superuser.
Start the vxinstall utility.
phys-schost# vxinstall
When prompted by the vxinstall utility, make the following choices or entries.
SPARC: To enable the VxVM cluster feature, supply the cluster feature license key.
Choose Custom Installation.
Do not encapsulate the boot disk.
Choose any disks to add to the root disk group.
Do not accept automatic reboot.
If the root disk group that you created contains one or more disks that connect to more than one node, ensure that fencing is disabled for such disks.
Use the following command to disable fencing for each shared disk in the root disk group.
phys-schost# cldevice set -p default_fencing=nofencing device
-p
Specifies a device property.
default_fencing=nofencing
Disables fencing for the specified device.
Disabling fencing for the device prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.
For more information about the default_fencing property, see the cldevice(1CL) man page.
Evacuate any resource groups or device groups from the node.
phys-schost# clnode evacuate from-node
from-node
Specifies the name of the node from which to move resource or device groups.
Reboot the node.
phys-schost# shutdown -g0 -y -i6
Use the vxdiskadm command to add multiple disks to the root disk group.
The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
Create disk groups. Go to Creating Disk Groups in a Cluster.
After you install VxVM and encapsulate the root disk, perform this procedure on each node on which you mirror the encapsulated root disk.
Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.
Become superuser.
List the devices.
phys-schost# cldevice list -v
Output looks similar to the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
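When you choose a disk for the mirror, it helps to know which DID devices are multihost disks. This sketch parses the kind of listing that `cldevice list -v` produces (the sample data below mirrors the illustrative output above) and reports DID devices that appear with paths on more than one node:

```shell
# Sample cldevice list -v data lines (illustrative only).
listing='d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0'

# A DID device listed with more than one full device path is connected to
# more than one node, so it is a multihost disk.
shared=$(echo "$listing" | awk '{ count[$1]++ }
    END { for (d in count) if (count[d] > 1) print d }')
echo "multihost DID devices: $shared"
```

Disks reported here are the ones to avoid when you want a local disk for the mirror, and the ones that need fencing disabled if they end up in a raw-disk device group with the mirror.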
Mirror the encapsulated root disk.
Follow the procedures in your VxVM documentation.
For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.
Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.
View the node list of the raw-disk device group for the device that you used to mirror the root disk.
The name of the device group is of the form dsk/dN, where dN is the DID device name.
phys-schost# cldevicegroup list -v dsk/dN
-v
Displays verbose output.
Output looks similar to the following.
Device group        Type          Node list
------------        ----          ---------
dsk/dN              Local_Disk    phys-schost-1, phys-schost-3
If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.
Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.
phys-schost# cldevicegroup remove-node -n node dsk/dN
-n node
Specifies the node to remove from the device-group node list.
Disable fencing for all disks in the raw-disk device group that connect to more than one node.
Disabling fencing for a device prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
phys-schost# cldevice set -p default_fencing=nofencing device
-p
Sets the value of a device property.
default_fencing=nofencing
Disables fencing for the specified device.
For more information about the default_fencing property, see the cldevice(1CL) man page.
Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.
The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c0t0d0, whose raw-disk device-group name is dsk/d2. Disk c0t0d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and fencing is disabled.
phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d2                  pcircinus1:/dev/rdsk/c0t0d0
…
Create the mirror by using VxVM procedures
phys-schost# cldevicegroup list -v dsk/d2
Device group        Type          Node list
------------        ----          ---------
dsk/d2              Local_Disk    phys-schost-1, phys-schost-3
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevice set -p default_fencing=nofencing c0t0d0
Create disk groups. Go to Creating Disk Groups in a Cluster.
This section describes how to create VxVM disk groups in a cluster. The following table describes the types of VxVM disk groups you can configure in a Sun Cluster configuration and their characteristics.
Disk Group Type | Use | Registered with Sun Cluster? | Storage Requirement
---|---|---|---
VxVM disk group | Device groups for failover or scalable data services, global devices, or cluster file systems | Yes | Shared storage
Local VxVM disk group | Applications that are not highly available and are confined to a single node | No | Shared or unshared storage
VxVM shared disk group | Oracle Real Application Clusters (also requires the VxVM cluster feature) | No | Shared storage
The following table lists the tasks to perform to create VxVM disk groups in a Sun Cluster configuration. Complete the procedures in the order that is indicated.
Table 5–2 Task Map: Creating VxVM Disk Groups
Task | Instructions
---|---
Create disk groups and volumes. |
Register as Sun Cluster device groups those disk groups that are not local and that do not use the VxVM cluster feature. |
If necessary, resolve any minor-number conflicts between device groups by assigning a new minor number. |
Verify the disk groups and volumes. |
Use this procedure to create your VxVM disk groups and volumes.
Perform this procedure from a node that is physically connected to the disks that make up the disk group that you add.
Perform the following tasks:
Make mappings of your storage disk drives. See the appropriate manual in the Sun Cluster Hardware Administration Collection to perform an initial installation of your storage device.
Complete the following configuration planning worksheets.
See Planning Volume Management for planning guidelines.
If you did not create root disk groups, ensure that you have rebooted each node on which you installed VxVM, as instructed in Step 12 of How to Install Veritas Volume Manager Software.
Become superuser on the node that will own the disk group.
Create the VxVM disk groups and volumes.
Observe the following special instructions:
SPARC: If you are installing Oracle Real Application Clusters, create shared VxVM disk groups by using the cluster feature of VxVM. Observe guidelines and instructions in How to Create a VxVM Shared-Disk Group for the Oracle RAC Database in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS and in the Veritas Volume Manager Administrator's Reference Guide.
Otherwise, create VxVM disk groups by using the standard procedures that are documented in the VxVM documentation.
You can use Dirty Region Logging (DRL) to decrease volume recovery time if a node failure occurs. However, DRL might decrease I/O throughput.
For local disk groups, set the localonly property and add a single node to the disk group's node list.
A disk group that is configured to be local only is not highly available or globally accessible.
Start the clsetup utility.
phys-schost# clsetup
Choose the menu item, Device groups and volumes.
Choose the menu item, Set localonly on a VxVM disk group.
Follow the instructions to set the localonly property and to specify the single node that will exclusively master the disk group.
Only one node at any time is permitted to master the disk group. You can later change which node is the configured master.
When finished, quit the clsetup utility.
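As a side note to the Dirty Region Logging guidance in the special instructions above, the following sketch composes, but does not run, a vxassist command that creates a mirrored volume with a DRL log. The disk group name appdg, volume name appvol, and size are hypothetical; the layout=mirror and logtype=drl attributes follow standard VxVM vxassist usage:

```shell
# Hypothetical names used for illustration only.
dg=appdg
vol=appvol
size=2g

# Compose the command string rather than executing it, since vxassist is
# only available on a node with VxVM installed.
cmd="vxassist -g $dg make $vol $size layout=mirror logtype=drl"
echo "$cmd"
```

Adding the DRL log shortens mirror resynchronization after a node failure, at the cost of some I/O throughput, as noted above.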
Determine your next step:
SPARC: If the VxVM cluster feature is enabled, go to How to Verify the Disk Group Configuration.
If you created disk groups that are not local and the VxVM cluster feature is not enabled, register the disk groups as Sun Cluster device groups. Go to How to Register a Disk Group.
If you created only local disk groups, go to How to Verify the Disk Group Configuration.
If the VxVM cluster feature is not enabled, perform this procedure to register disk groups that are not local as Sun Cluster device groups.
SPARC: If the VxVM cluster feature is enabled or you created a local disk group, do not perform this procedure. Instead, proceed to How to Verify the Disk Group Configuration.
Become superuser on a node of the cluster.
Register the global disk group as a Sun Cluster device group.
Start the clsetup utility.
phys-schost# clsetup
Choose the menu item, Device groups and volumes.
Choose the menu item, Register a VxVM disk group.
Follow the instructions to specify the VxVM disk group that you want to register as a Sun Cluster device group.
When finished, quit the clsetup utility.
Deport and re-import each local disk group.
phys-schost# vxdg deport diskgroup
phys-schost# vxdg import diskgroup
Restart each local disk group.
phys-schost# vxvol -g diskgroup startall
Verify the local-only status of each local disk group.
If the value of the flags property of the disk group is nogdl, the disk group is correctly configured for local-only access.
phys-schost# vxdg list diskgroup | grep flags
flags: nogdl
Verify that the device group is registered.
Look for the disk device information for the new disk that is displayed by the following command.
phys-schost# cldevicegroup status
Go to How to Verify the Disk Group Configuration.
Stack overflow – If a stack overflows when the device group is brought online, the default value of the thread stack size might be insufficient. On each node, add the entry set cl_haci:rm_thread_stacksize=0xsize to the /etc/system file, where size is a number greater than 8000, which is the default setting.
Configuration changes – If you change any configuration information for a VxVM device group or its volumes, you must register the configuration changes by using the clsetup utility. Configuration changes that you must register include adding or removing volumes and changing the group, owner, or permissions of existing volumes. See Administering Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures to register configuration changes that are made to a VxVM device group.
If device group registration fails because of a minor-number conflict with another disk group, you must assign the new disk group a new, unused minor number. Perform this procedure to reminor a disk group.
Become superuser on a node of the cluster.
Determine the minor numbers in use.
phys-schost# ls -l /global/.devices/node@1/dev/vx/dsk/*
Choose any other multiple of 1000 that is not in use to become the base minor number for the new disk group.
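The choice of an unused base minor number can be sketched as follows. The in-use minor numbers are sample values; the loop finds the lowest multiple of 1000 whose 1000-number block contains none of them:

```shell
# Minor numbers already in use (sample values for illustration; on a node
# you would collect these from ls -l /global/.devices/node@N/dev/vx/dsk/*).
in_use="16000 16001 16002 4000 4001"

# Walk multiples of 1000 until one is found whose block [base, base+1000)
# contains no in-use minor number.
base=1000
while :; do
    conflict=0
    for m in $in_use; do
        if [ "$m" -ge "$base" ] && [ "$m" -lt $((base + 1000)) ]; then
            conflict=1
            break
        fi
    done
    [ "$conflict" -eq 0 ] && break
    base=$((base + 1000))
done
echo "candidate base minor number: $base"
```

The resulting base is what you would pass to `vxdg reminor diskgroup base-minor-number`.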
Assign the new base minor number to the disk group.
phys-schost# vxdg reminor diskgroup base-minor-number
This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command reminors the new device group to use the base minor number 5000.
phys-schost# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root    56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root    56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root    56,16002 Oct  7 11:32 dg1v3
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root    56,4000  Oct  7 11:32 dg2v1
brw-------   1 root     root    56,4001  Oct  7 11:32 dg2v2
phys-schost# vxdg reminor dg3 5000
Register the disk group as a Sun Cluster device group. Go to How to Register a Disk Group.
Perform this procedure on each node of the cluster.
Become superuser.
List the disk groups.
phys-schost# vxdisk list
List the device groups.
phys-schost# cldevicegroup list -v
Verify that all disk groups are correctly configured.
Ensure that the following requirements are met:
The root disk group includes only local disks.
All disk groups, including any local disk groups, are imported on the current primary node only.
Verify that all volumes have been started.
phys-schost# vxprint
Verify that all disk groups have been registered as Sun Cluster device groups and are online.
phys-schost# cldevicegroup status
Output should not display any local disk groups.
(Optional) Capture the disk partitioning information for future reference.
phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename
Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.
(Optional) Make a backup of your cluster configuration.
An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration. For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.
Observe the following guidelines for administering VxVM disk groups in a Sun Cluster configuration:
VxVM device groups – VxVM disk groups that have been registered as device groups are managed by Sun Cluster software. After a disk group is registered as a device group, you should never import or deport that VxVM disk group by using VxVM commands. The Sun Cluster software can handle all cases where device groups need to be imported or deported. See Administering Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures about how to manage device groups.
Local disk groups – Local VxVM disk groups are not managed by Sun Cluster software. Use VxVM commands to administer local disk groups as you would in a nonclustered system.
If the output of the cldevicegroup status command includes any local disk groups, the displayed disk groups are not configured correctly for local-only access. Return to How to Create a Disk Group to reconfigure the local disk group.
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
To create cluster file systems, go to How to Create Cluster File Systems.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Global-Cluster Node.
SPARC: To configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This section describes how to unencapsulate the root disk in a Sun Cluster configuration.
Perform this procedure to unencapsulate the root disk.
Perform the following tasks:
Ensure that only Solaris root file systems are present on the root disk. The Solaris root file systems are root (/), swap, the global devices namespace, /usr, /var, /opt, and /home.
Back up and remove from the root disk any file systems other than the Solaris root file systems.
Become superuser on the node that you intend to unencapsulate.
Evacuate all resource groups and device groups from the node.
phys-schost# clnode evacuate from-node
from-node
Specifies the name of the node from which to move resource or device groups.
Determine the node-ID number of the node.
phys-schost# clinfo -n
Unmount the global-devices file system for this node, where N is the node ID number that is returned in Step 3.
phys-schost# umount /global/.devices/node@N
View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.
phys-schost# vi /etc/vfstab
#device        device        mount    FS     fsck    mount    mount
#to mount      to fsck       point    type   pass    at boot  options
#
#NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated
#partition cNtXdYsZ
Remove from the root disk group the VxVM volume that corresponds to the global-devices file system.
phys-schost# vxedit -g rootdiskgroup -rf rm rootdiskxNvol
Do not store data other than device entries for global devices in the global-devices file system. All data in the global-devices file system is destroyed when you remove the VxVM volume. Only data that is related to global devices entries is restored after the root disk is unencapsulated.
Unencapsulate the root disk.
Do not accept the shutdown request from the command.
phys-schost# /etc/vx/bin/vxunroot
See your VxVM documentation for details.
Use the format(1M) command to add a 512-Mbyte partition to the root disk to use for the global-devices file system.
Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.
Set up a file system on the partition that you created in Step 8.
phys-schost# newfs /dev/rdsk/cNtXdYsZ
Determine the DID name of the root disk.
phys-schost# cldevice list cNtXdY
dN
In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path that you identified in Step 10.
The original entry would look similar to the following.
phys-schost# vi /etc/vfstab
/dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global
The revised entry that uses the DID path would look similar to the following.
/dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global
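The edit from the original entry to the DID-based entry can be sketched as a sed substitution. The volume name rootdiskxNvol follows the pattern shown in the procedure, and the DID slice d4s3 is a hypothetical stand-in for the actual dNsX value on your node:

```shell
# Original vfstab-style entry with VxVM volume paths (node ID 2 is an
# arbitrary sample value).
entry='/dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@2 ufs 2 no global'

# Replace the block and raw VxVM device paths with DID paths.
revised=$(echo "$entry" | sed \
    -e 's|/dev/vx/dsk/rootdiskxNvol|/dev/did/dsk/d4s3|' \
    -e 's|/dev/vx/rdsk/rootdiskxNvol|/dev/did/rdsk/d4s3|')
echo "$revised"
```

In practice you would make this edit directly in /etc/vfstab, substituting the DID name reported by `cldevice list` for the placeholder.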
Mount the global-devices file system.
phys-schost# mount /global/.devices/node@N
From one node of the cluster, repopulate the global-devices file system with device nodes for any raw-disk devices and Solaris Volume Manager devices.
phys-schost# cldevice populate
VxVM devices are recreated during the next reboot.
On each node, verify that the cldevice populate command has completed processing before you proceed to the next step.
The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
phys-schost# ps -ef | grep scgdevs
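One way to wait for completion is to poll until no scgdevs process remains. This sketch uses an arbitrary 2-second interval; on a machine where scgdevs is not running, the loop exits immediately:

```shell
# Poll until ps no longer reports an scgdevs process. The [s] bracket trick
# keeps the grep command itself from matching its own ps entry.
while ps -ef | grep -w '[s]cgdevs' > /dev/null; do
    sleep 2
done
status=done
echo "scgdevs processing complete"
```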
Reboot the node.
phys-schost# shutdown -g0 -y -i6
Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.