Configure your local and multihost disks for VERITAS Volume Manager (VxVM) using the guidelines in this chapter along with the information in Chapter 2, Planning the Configuration. Refer to your VERITAS documentation for additional details.
Verify that the items listed below are in place before configuring the volume manager:
The volume manager and VxFS are installed and licensed on each cluster node.
The volume manager has been installed using the "custom install" option.
After configuring the volume manager, verify that:
Only the private disks are included in the root disk group (rootdg).
Disk groups have been deported from all nodes, then imported to the default master node.
All volumes have been started.
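You can spot-check these items from the active node with standard VxVM status commands. This is a sketch only; the disk group and volume names shown will vary with your configuration:

phys-hahost1# vxdg list
phys-hahost1# vxdisk -g rootdg list
phys-hahost1# vxprint -g diskgroup -v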
Use this procedure to configure your disk groups, volumes, and file systems for the logical hosts.
This procedure is only applicable for high availability (HA) configurations. If you are using Oracle Parallel Server and the cluster feature of VxVM, refer to your VERITAS documentation for configuration information.
Format the disks to be administered by the volume manager.
Use the fmthard(1M) command to create a VTOC on each disk with a single Slice 2 defined for the entire disk.
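For example, a hedged sketch of one invocation. The device name c2t0d0 is illustrative, and the tag, flag, and sector-count values must be taken from the geometry that prtvtoc(1M) reports for your disk:

phys-hahost1# prtvtoc /dev/rdsk/c2t0d0s2
phys-hahost1# fmthard -d 2:5:01:0:8380800 /dev/rdsk/c2t0d0s2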
For each cluster node, create a root disk group (rootdg).
See your VERITAS documentation for guidelines and details about creating a rootdg.
Initialize each disk for use by the volume manager.
You can use the vxdiskadd(1M) or vxinstall(1M) commands to initialize each disk. See your VERITAS documentation for details.
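For example, to initialize one disk interactively (the device name is illustrative; vxdiskadd prompts for the disk group and disk media name):

phys-hahost1# vxdiskadd c2t0d0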
(Optional) Assign hot spares.
For each disk group, use the vxedit(1M) command to assign one disk as a hot spare for each disk controller.
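For example, to mark the disk named disk05 in disk group dg1 as a hot spare (both names are illustrative):

phys-hahost1# vxedit -g dg1 set spare=on disk05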
Reboot all nodes on which you installed VxVM.
For each disk group, create a volume to be used for the HA administrative file system on the multihost disks.
The HA administrative file system is used by Sun Cluster for data service-specific state or configuration information.
Use the vxassist(1M) command to create a 10-Mbyte volume mirrored across two controllers for the HA administrative file system. Name this volume diskgroup-stat.
For each disk group, create the other volumes to be used by HA data services.
Use the vxassist(1M) command to create these volumes.
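As a sketch, the following commands create the administrative volume and one data volume in disk group dg1, each mirrored across controllers. The disk group name, volume names, and sizes are illustrative:

phys-hahost1# vxassist -g dg1 make dg1-stat 10m layout=mirror mirror=ctlr
phys-hahost1# vxassist -g dg1 make vol_1 500m layout=mirror mirror=ctlr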
Start the volumes.
Use the vxvol(1M) command to start the volumes.
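For example, to start all volumes in a disk group (the disk group name is illustrative):

phys-hahost1# vxvol -g dg1 startall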
Create file systems on the volumes.
Refer to "Configuring VxFS File Systems on the Multihost Disks", for details on creating the necessary file systems.
This section contains procedures to configure multihost VxFS file systems. To configure file systems to be shared by NFS, refer to Chapter 11, Installing and Configuring Sun Cluster HA for NFS.
Use the mkfs(1M) command to create file systems on the volumes.
Before you can run the mkfs(1M) command on the disk groups, you might need to take ownership of the disk group containing the volume. Do this by importing the disk group to the active node using the vxdg(1M) command.
phys-hahost1# vxdg import diskgroup
Create the HA administrative file systems on the volumes.
Run the mkfs(1M) command on each volume in the configuration.
phys-hahost1# mkfs -F vxfs /dev/vx/rdsk/diskgroup/diskgroup-stat
Create file systems for all volumes.
These volumes will be mounted by the logical hosts.
phys-hahost1# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume
Create a directory mount point for the HA administrative file system.
Always use the logical host name as the mount point. This is necessary to enable startup of the DBMS fault monitors.
phys-hahost1# mkdir /logicalhost
Mount the HA administrative file system.
phys-hahost1# mount /dev/vx/dsk/diskgroup/diskgroup-stat /logicalhost
Create mount points for the data service file systems created in Step 1b.
phys-hahost1# mkdir /logicalhost/volume
Edit the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file to update the administrative and multihost VxFS file system information.
Make sure that entries for each disk group appear in the vfstab.logicalhost file on every node that is a potential master of that disk group, and that the files contain identical information. Use the cconsole(1) facility to make simultaneous edits to the vfstab.logicalhost files on all nodes in the cluster.
Here is a sample vfstab.logicalhost file showing the administrative file system and two other VxFS file systems. In this example, dg1 is the disk group name and hahost1 is the logical host name.
/dev/vx/dsk/dg1/dg1-stat /dev/vx/rdsk/dg1/dg1-stat /hahost1 vxfs - yes -
/dev/vx/dsk/dg1/vol_1 /dev/vx/rdsk/dg1/vol_1 /hahost1/vol_1 vxfs - yes -
/dev/vx/dsk/dg1/vol_2 /dev/vx/rdsk/dg1/vol_2 /hahost1/vol_2 vxfs - yes -
Unmount the HA administrative file systems that you mounted in Step 3.
phys-hahost1# umount /logicalhost
Export the disk groups.
If you took ownership of the disk groups on the active node by using the vxdg(1M) command before creating the file systems, release ownership of the disk groups once file system creation is complete.
phys-hahost1# vxdg deport diskgroup
Import the disk groups to their default masters.
It is most convenient to create and populate disk groups from the active node that is the default master of the particular disk group.
Each disk group should be imported onto the default master node using the -t option. The -t option is important, as it prevents the import from persisting across the next boot.
phys-hahost1# vxdg -t import diskgroup
(Optional) To make file systems NFS-sharable, refer to Chapter 11, Installing and Configuring Sun Cluster HA for NFS.
To avoid "Stale File handle" errors on the client on NFS failovers, the vxio driver must have identical pseudo-device major numbers on all cluster nodes. This number can be found in the /etc/name_to_major file after you complete the installation. Use the following procedures to verify and change the pseudo-device major numbers.
Verify the pseudo-device major number on all nodes.
For example, enter the following:
# grep vxio /etc/name_to_major
vxio 45
If the pseudo-device number is not the same on all nodes, stop all activity on the system and edit the /etc/name_to_major file to make the number identical on all cluster nodes.
Be sure that the number is unique in the /etc/name_to_major file on each node. A quick way to choose one is to find the highest major number assigned in the /etc/name_to_major file on each node, take the largest of those values, add one, and assign the result to the vxio driver on all nodes.
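A minimal sketch of that check on one node (awk(1) and sort(1) are used here only for illustration; run the same pipeline on every node and compare the results before picking the new number):

# awk '{ print $2 }' /etc/name_to_major | sort -n | tail -1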
Reboot the system immediately after the number is changed.
(Optional) If the system reports disk group errors and the cluster will not start, you might need to perform these steps.
Unencapsulate the root disk using the VxVM upgrade_start script.
Find the script in the /Tools/scripts directory on your VxVM media. Run the script from only one node. In this example, CDROM_path is the path to the scripts on the VxVM media.
phys-hahost1# CDROM_path/upgrade_start
Reboot the node.
Edit the /etc/name_to_major file and remove the appropriate entry, for example, /dev/vx/{dsk,rdsk,dmp,rdmp}.
Reboot the node.
Run the following command:
phys-hahost1# vxconfigd -k -r reset
Re-encapsulate the root disk using the VxVM upgrade_finish script.
Find the path to the script on your VxVM media. Run the script from only one node.
phys-hahost1# CDROM_path/upgrade_finish
Reboot the node.
Use the confccdssa(1M) command to create a disk group and volume to store the CCD database. A shared CCD is supported only on two-node clusters using VERITAS Volume Manager; it is not supported on clusters using Solstice DiskSuite.
The root disk group (rootdg) must be initialized before you run the confccdssa(1M) command.
Make sure you have configured a volume for the CCD.
Run the following command on both nodes. See the scconf(1M) man page for more details.
# scconf clustername -S ccdvol
Run the confccdssa(1M) command on only one node, and use it to select disks for the CCD.
Select two disks from the shared disk expansion unit on which the shared CCD volume will be constructed:
# /opt/SUNWcluster/bin/confccdssa clustername

On a 2-node configured cluster you may select two disks that are shared
between the 2 nodes to store the CCD database in case of a single node failure.

Please, select the disks you want to use from the following list:

Select devices from list.
Type the number corresponding to the desired selection.
For example: 1<CR>

1) SSA:00000078C9BF
2) SSA:00000080295E
3) DISK:c3t32d0s2:9725B71845
4) DISK:c3t33d0s2:9725B70870
Device 1: 3
Disk c3t32d0s2 with serial id 9725B71845 has been selected as device 1.

Select devices from list.
Type the number corresponding to the desired selection.
For example: 1<CR>

1) SSA:00000078C9BF
2) SSA:00000080295E
3) DISK:c3t33d0s2:9725B70870
4) DISK:c3t34d0s2:9725B71240
Device 2: 4
Disk c3t34d0s2 with serial id 9725B71240 has been selected as device 2.

newfs: construct a new file system /dev/vx/rdsk/sc_dg/ccdvol: (y/n)? y
...
After you select the disks, they can no longer be included in any other disk group; the command then creates the volume and lays out a file system on it. See the confccdssa(1M) man page for more details.