5. Installing and Configuring Veritas Volume Manager
Installing and Configuring VxVM Software
Setting Up a Root Disk Group Overview
How to Install Veritas Volume Manager Software
SPARC: How to Encapsulate the Root Disk
Creating Disk Groups in a Cluster
How to Assign a New Minor Number to a Device Group
How to Verify the Disk Group Configuration
How to Unencapsulate the Root Disk
This section provides information and procedures to install and configure VxVM software in an Oracle Solaris Cluster configuration.
The following table lists the tasks to perform to install and configure VxVM software for Oracle Solaris Cluster configurations. Complete the procedures in the order that is indicated.
Table 5-1 Task Map: Installing and Configuring VxVM Software
The creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to How to Install Veritas Volume Manager Software.
Access to a node's root disk group must be restricted to only that node.
Remote nodes must never access data stored in another node's root disk group.
Do not use the cldevicegroup command to register the root disk group as a device group.
Whenever possible, configure the root disk group for each node on a nonshared disk.
Oracle Solaris Cluster software supports the following methods to configure the root disk group.
Encapsulate the node's root disk (UFS only) – This method enables the root disk to be mirrored, which provides a boot alternative if the root disk is corrupted or damaged. To encapsulate the root disk, you need two free disk slices as well as free cylinders, preferably at the beginning or the end of the disk.
You cannot encapsulate the root disk if it uses the ZFS file system. Instead, configure the root disk group on local nonroot disks.
Use local nonroot disks – This method provides an alternative to encapsulating the root disk. If a node's root disk is encapsulated, certain tasks that you might later perform, such as upgrading the Solaris OS or performing disaster recovery procedures, could be more complicated than if the root disk is not encapsulated. To avoid this potential added complexity, you can instead initialize or encapsulate local nonroot disks for use as root disk groups.
A root disk group that is created on local nonroot disks is local to that node, neither globally accessible nor highly available. As with the root disk, to encapsulate a nonroot disk you need two free disk slices as well as free cylinders at the beginning or the end of the disk.
See your VxVM installation documentation for more information.
Perform this procedure to install Veritas Volume Manager (VxVM) software on each global-cluster node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.
Before You Begin
Perform the following tasks:
Ensure that all nodes in the cluster are running in cluster mode.
Obtain any Veritas Volume Manager (VxVM) license keys that you need to install.
Have available your VxVM installation documentation.
phys-schost# clvxvm initialize
The clvxvm utility performs necessary postinstallation tasks. The clvxvm utility also selects and configures a cluster-wide vxio driver major number. See the clvxvm(1CL) man page for more information.
See your VxVM documentation for information about how to add a license.
See your VxVM documentation for information about installing the VxVM GUI.
See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
phys-schost# grep vxio /etc/name_to_major
phys-schost# vi /etc/name_to_major
vxio NNN
phys-schost# drvconfig -b -i vxio -m NNN
When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
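As a quick consistency check, you can compare the vxio entries gathered from each node. The following sketch parses saved copies of each node's /etc/name_to_major file; the file paths, node names, and major number 327 are illustrative assumptions, not values from your cluster.

```shell
# Extract the vxio major number from a saved name_to_major file.
get_vxio_major() {
  awk '$1 == "vxio" { print $2 }' "$1"
}

# Sample files standing in for copies pulled from each node (hypothetical data).
printf 'clone 11\nvxio 327\n' > /tmp/name_to_major.phys-schost-1
printf 'vxio 327\nmd 85\n'    > /tmp/name_to_major.phys-schost-2

m1=$(get_vxio_major /tmp/name_to_major.phys-schost-1)
m2=$(get_vxio_major /tmp/name_to_major.phys-schost-2)
if [ "$m1" = "$m2" ]; then
  echo "vxio major number $m1 is consistent"
else
  echo "MISMATCH: $m1 vs $m2" >&2
fi
```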
Otherwise, proceed to Step 12.
Note - A root disk group is optional.
phys-schost# shutdown -g0 -y -i6
Next Steps
To create a root disk group, go to (UFS only) SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.
Otherwise, create disk groups. Go to Creating Disk Groups in a Cluster.
Perform this procedure to create a root disk group by encapsulating the UFS root disk. Root disk groups are optional. See your VxVM documentation for more information.
Note - If your root disk uses ZFS, you can only create a root disk group on local nonroot disks. If you want to create a root disk group on nonroot disks, instead perform procedures in How to Create a Root Disk Group on a Nonroot Disk.
Before You Begin
Ensure that you have installed VxVM as described in How to Install Veritas Volume Manager Software.
phys-schost# clvxvm encapsulate
See the clvxvm(1CL) man page for more information.
Next Steps
To mirror the encapsulated root disk, go to How to Mirror the Encapsulated Root Disk.
Otherwise, go to Creating Disk Groups in a Cluster.
Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. The creation of a root disk group is optional.
Note - If you want to create a root disk group on the root disk and the root disk uses UFS, instead perform procedures in SPARC: How to Encapsulate the Root Disk.
Before You Begin
If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
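If you are unsure whether a disk already has the required zero-length slices, you can inspect its VTOC. This sketch filters saved prtvtoc(1M) output for slices whose sector count is 0; the sample table below is an illustrative layout, not output from a real disk.

```shell
# Sample prtvtoc output saved to a file (hypothetical disk layout).
cat > /tmp/vtoc.c1t1d0 <<'EOF'
*                            First     Sector    Last
* Partition  Tag  Flags    Sector      Count    Sector  Mount Directory
       0      2    00          0    1048576   1048575   /
       2      5    00          0    2097152   2097151
       3      0    00          0          0         0
       4      0    00          0          0         0
EOF

# Print slices with a sector count of 0 (candidates for the VxVM slices).
awk '!/^\*/ && $5 == 0 { print "slice", $1, "is free" }' /tmp/vtoc.c1t1d0
```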
phys-schost# vxinstall
SPARC: To enable the VxVM cluster feature, supply the cluster feature license key.
Choose Custom Installation.
Do not encapsulate the boot disk.
Choose any disks to add to the root disk group.
Do not accept automatic reboot.
Use the following command to disable fencing for each shared disk in the root disk group.
phys-schost# cldevice set -p default_fencing=nofencing device
-p
Specifies a device property.
default_fencing=nofencing
Disables fencing for the specified device.
Disabling fencing for the device prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.
For more information about the default_fencing property, see the cldevice(1CL) man page.
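To confirm that the setting took effect, you can filter the device's properties for default_fencing. The sketch below parses saved `cldevice show` output; the sample lines only approximate the real report format and the device name is hypothetical.

```shell
# Saved, abbreviated `cldevice show` output (illustrative format only).
cat > /tmp/cldevice.show <<'EOF'
=== DID Device Instances ===
DID Device Name:          /dev/did/rdsk/d3
  default_fencing:        nofencing
EOF

# Print the current fencing setting for the device.
awk '$1 == "default_fencing:" { print $2 }' /tmp/cldevice.show
```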
phys-schost# clnode evacuate from-node
from-node
Specifies the name of the node from which to move resource or device groups.
phys-schost# shutdown -g0 -y -i6
The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
Next Steps
Create disk groups. Go to Creating Disk Groups in a Cluster.
After you install VxVM and encapsulate the root disk, perform this procedure on each node on which you mirror the encapsulated root disk.
Before You Begin
Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.
phys-schost# cldevice list -v
Output looks similar to the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
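A DID device that appears more than once in this listing, such as d3, is connected to multiple nodes. The following sketch counts occurrences in saved `cldevice list -v` output to flag such multihosted disks; the sample data mirrors the output shown above.

```shell
# Saved `cldevice list -v` output, headers stripped (sample data).
cat > /tmp/cldevice.list <<'EOF'
d1 phys-schost-1:/dev/rdsk/c0t0d0
d2 phys-schost-1:/dev/rdsk/c0t6d0
d3 phys-schost-2:/dev/rdsk/c1t1d0
d3 phys-schost-1:/dev/rdsk/c1t1d0
EOF

# A DID name listed on more than one path is connected to multiple nodes.
awk '{ n[$1]++ } END { for (d in n) if (n[d] > 1) print d, "is connected to multiple nodes" }' /tmp/cldevice.list
```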
Follow the procedures in your VxVM documentation.
For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.
Caution - Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.
The name of the device group is of the form dsk/dN, where dN is the DID device name.
phys-schost# cldevicegroup list -v dsk/dN
-v
Displays verbose output.
Output looks similar to the following.
Device group        Type          Node list
------------        ----          ---------
dsk/dN              Local_Disk    phys-schost-1, phys-schost-3
Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.
phys-schost# cldevicegroup remove-node -n node dsk/dN
-n node
Specifies the node to remove from the device-group node list.
Disabling fencing for a device prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.
phys-schost# cldevice set -p default_fencing=nofencing device
-p
Sets the value of a device property.
default_fencing=nofencing
Disables fencing for the specified device.
For more information about the default_fencing property, see the cldevice(1CL) man page.
Example 5-1 Mirroring the Encapsulated Root Disk
The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c0t0d0, whose raw-disk device-group name is dsk/d2. Disk c0t0d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and fencing is disabled.
phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d2                  pcircinus1:/dev/rdsk/c0t0d0
…
Create the mirror by using VxVM procedures
phys-schost# cldevicegroup list -v dsk/d2
Device group        Type          Node list
------------        ----          ---------
dsk/d2              Local_Disk    phys-schost-1, phys-schost-3
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevice set -p default_fencing=nofencing c0t0d0
Next Steps
Create disk groups. Go to Creating Disk Groups in a Cluster.