This section describes how to install or update CVM. The packages on the CD-ROM can be installed on systems running Solaris 2.6. CVM 2.2.1 requires Sun Cluster 2.2 software. Complete the installation or upgrade to Sun Cluster 2.2 before attempting to install CVM 2.2.1.
CVM installation consists of two parts:
Installing the combined package onto the system. Refer to "1.4.2 Installing CVM for the First Time", or "1.4.3 Upgrading to CVM Release 2.2.1".
Configuring and setting up CVM. Refer to "1.4.4 Creating rootdg", and "1.4.5 Configuring Shared Disks".
If you are installing CVM for the first time, refer to the Sun StorEdge Volume Manager Installation Guide for additional pre-installation information.
Most of the commands involved in the installation of CVM are in the /sbin or /usr/sbin directories. You should add these directories to your PATH environment variable.
If you are using a Bourne Shell (sh or ksh), use the command:
PATH=/sbin:/usr/sbin:$PATH; export PATH
If you are using a C Shell (csh or tcsh), use the command:
setenv PATH /sbin:/usr/sbin:${PATH}
A system using CVM has one or more disk groups, including the root disk group (rootdg). The rootdg must exist and cannot be shared between systems. At least one disk must exist within rootdg while CVM is running. Before installing CVM, you should decide where to place rootdg for each node in the cluster.
You can create rootdg by encapsulating the root disk as described in "1.4.4 Creating rootdg". Before beginning the installation, you must decide on the layout of shared disk groups. There may be one or more shared disk groups.
If you plan to use Dirty Region Logging (DRL) with CVM, consider leaving a small amount of space on the disk for these logs. The log size is proportional to the volume size and the number of nodes (each log has one recovery map plus one active map per node).
For a two-gigabyte volume in a two-node cluster, a log size of five blocks (one block per map) would be required. For every additional two gigabytes of volume size, the log size should then increase by approximately one block per map (so a four-gigabyte volume with two nodes would have a log size of ten blocks) up to a maximum of 96 blocks. For larger volumes, DRL changes the log granularity to accommodate the increased size without exceeding the maximum log size. A four-node cluster requires larger logs. See "2.1.4 Dirty Region Logging and CVM", for more information about log sizes.
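As an illustration only (the disk group and volume names are placeholders), a DRL log can typically be added to an existing volume with the vxassist addlog operation:
# vxassist -g groupname addlog volname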
To use CVM with a SPARCstorage Array, you must use firmware level 3.4 or later.
CVM Release 2.2.1 requires Solaris 2.6, so it may be necessary to upgrade the operating environment before you install CVM.
Load and mount the CVM 2.2.1 CD-ROM.
It should then be visible as the file system mounted on /cdrom.
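On Solaris 2.6, the volume management daemon (vold) normally mounts the CD-ROM automatically; you can confirm the mount with, for example:
# ls /cdrom/cdrom0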
Go to the directory containing the CVM packages:
# cd /cdrom/cdrom0/CVM_2_2_1/Product
Use pkgadd to install the following packages:
# pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
Packages must be installed in the order specified.
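You can verify that the packages were added by using pkginfo, for example:
# pkginfo SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev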
Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.
Note that CVM Release 2.2.1 running on Sun Cluster 2.2 requires Solaris 2.6, so it may be necessary to upgrade the operating environment at the same time. The recommended procedure is to upgrade the operating environment first (if needed), then install or upgrade to Sun Cluster 2.2, and finally, upgrade CVM.
If you have encapsulated one or more disks, you must complete the following procedure through Step 6 before doing the operating environment upgrade.
If you have Sun Cluster 2.0 or 2.1 installed, upgrade the CVM software and the operating environment as follows:
Make sure you have enough space in /opt to upgrade the operating environment.
If any of the file systems /, /usr, /var, or /opt are defined on volumes, make sure that at least one plex for each of those volumes is formed from a single subdisk that begins on a cylinder boundary.
This is a required step. Part of the upgrade process temporarily converts file systems on volumes back to direct disk partitions, and the Solaris operating environment requires that disk partitions start on cylinder boundaries. This conversion is handled automatically by the upgrade scripts, as necessary. If the upgrade scripts detect a problem (such as lack of cylinder alignment), they display an explanation of the problem and the upgrade process stops.
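One way to inspect the plex and subdisk layout of these volumes is with vxprint; the -ht output lists each volume together with its plexes and subdisks:
# vxprint -ht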
Load and mount the CVM 2.2.1 CD-ROM.
It should then be visible as the file system mounted on /cdrom.
Run the upgrade_start script to prepare the previous release of CVM for its removal:
# /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_start
The upgrade_start script looks for volumes containing file systems. If certain key file systems must be converted back to using partitions, this script handles the conversions.
Reboot to single-user mode (using a command such as uadmin 2 3).
Remove the volume manager package(s).
phys-hahost1# pkgrm SUNWvmdev SUNWvmman SUNWvxva SUNWvxvm
Shut down and halt the machine (using a command such as uadmin 2 0).
(Optional) Upgrade the operating environment to Solaris 2.6, if necessary.
Refer to the Solaris installation documentation for instructions on how to upgrade the Solaris software environment.
Go to the directory containing the CVM packages (on the CVM CD-ROM):
# cd /cdrom/cdrom0/CVM_2_2_1/Product
Use pkgadd to install the following packages:
# pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
Complete the upgrade by entering:
# /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_finish
Reboot to multi-user mode.
At this point, your pre-upgrade configuration should be in effect and any file systems previously defined on volumes should be defined and mounted.
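You can confirm the mounted file systems with, for example:
# df -k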
Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.
After loading the CVM software, you must create the default disk group, rootdg. One approach is to place the root disk under CVM control through the process of encapsulation; the disk group resulting from the encapsulation then becomes rootdg. However, if frequent upgrades to this package are anticipated, this may not be convenient, because it is more difficult to upgrade to new versions or to recover from certain errors when the root disk is encapsulated. If you do not want to encapsulate the root disk, you can encapsulate any other disk using vxinstall to create rootdg (which is required for CVM to come up). Another approach is to create a simple volume manager disk (on a partition of a disk that is not shared and has not been encapsulated) and use it for rootdg.
This section describes how to create the root disk group by using encapsulation; we do not recommend using the simple disk approach. After creating rootdg, go to "1.4.5 Configuring Shared Disks".
To encapsulate your root disk, create rootdg as follows:
Invoke vxinstall and follow the instructions in the "Custom Installation" section of the Sun StorEdge Volume Manager 2.6 Installation Guide to encapsulate only the root disk.
For all other disks, select the Leave these disks alone option.
After using vxinstall to encapsulate the root disk, reboot the system.
The vxinstall command will automatically create rootdg.
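After the reboot, you can confirm that rootdg exists by listing the disk groups:
# vxdg list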
If you are installing CVM for the first time or adding disks to an existing cluster, you must configure new shared disks. If you are upgrading CVM, verify that your shared disks still exist.
The shared disks should be configured from one node only. Because the CVM software cannot tell whether a disk is shared, you must specify which disks are the shared disks.
Make sure that nobody else is accessing the shared disks from another node while you are performing the configuration.
If you are upgrading from a previous release of CVM to CVM 2.2.1, verify that your shared disk groups still exist:
Start the cluster on all nodes.
Type the following command on all nodes:
# vxdg list
This should display the shared disk groups that existed before. DRL logs that were created with earlier versions of CVM may be too small for CVM 2.2.1. For additional information, refer to "2.1.4 Dirty Region Logging and CVM".
If you are upgrading from SEVM 2.x to CVM 2.2.1 and want to share existing disk groups, configure the shared disks as follows:
Start the cluster on at least one node.
For a two-node cluster, start the cluster on one node; for a four-node cluster, start the cluster on three nodes.
List all disk groups:
# vxdg list
Deport disk groups to be shared:
# vxdg deport groupname
Import disk groups to be shared:
# vxdg -s import groupname
This will mark the disks in the shared disk groups as shared and stamp them with the ID of the cluster, enabling other nodes to recognize the shared disks.
If there are dirty region logs, make sure they are active; if they are not, replace them with larger logs.
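As an illustration only (the disk group and volume names are placeholders), an undersized log can typically be removed and a new one added with vxassist:
# vxassist -g groupname remove log volname
# vxassist -g groupname addlog volname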
Display the shared flag for all the shared disk groups:
# vxdg list
The disk groups are now ready to be shared.
If the cluster is running with one node only, bring up the other cluster nodes.
When each node is ready, enter the command vxdg list on it.
This should display the same list of shared disk groups that appeared earlier.
If you are installing and setting up CVM for the first time, configure the shared disks as follows:
Start the cluster on at least one node. If the cluster contains more than one node, perform Steps 3 and 4 only on the master node. vxdctl -c mode reports the operating mode of CVM.
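To check the operating mode and identify whether the current node is the master, enter:
# vxdctl -c mode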
From any node, run vxdisksetup to initialize each shared disk; afterward, run vxdctl enable on all nodes.
If you have decided not to put configuration information on every disk, or if you want larger areas for this information, vxdisksetup enables you to specify your choices.
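For example, assuming a shared disk with the placeholder device name c1t0d0, the disk can be initialized with default settings as follows:
# vxdisksetup -i c1t0d0
Afterward, on every node, enter:
# vxdctl enable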
Create disk groups on the shared disks.
You can use vxdg or the Visual Administrator to do this. Use the -s option of vxdg to create shared disk groups.
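For example, a shared disk group could be created from the master node as follows; the group name shdg01 and the device names are placeholders for disks already initialized with vxdisksetup:
# vxdg -s init shdg01 c1t0d0 c1t1d0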
Create volumes in the disk groups.
You can use vxassist or the Visual Administrator to do this.
The volumes must be of type gen. Do not create RAID5 volumes. Before creating any log subdisks, read "2.1.4 Dirty Region Logging and CVM".
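For example, a volume of usage type gen could be created and then mirrored as follows; the disk group name, volume name, and size are placeholders:
# vxassist -g shdg01 -U gen make vol01 2g
# vxassist -g shdg01 mirror vol01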
If the cluster is running with one node only, bring up the other cluster nodes.
When each node is ready, enter the command vxdg list on it. This should display the same list of shared disk groups that appeared earlier.
This section applies to two-node configurations only.
As part of failure fencing, Sun Cluster reserves shared disks when only one node is active. This prevents "rogue" hosts from accessing the shared disks. When this happens, the command vxdisk list on a node that has left the cluster may show all disks on such a controller as having an error status. The more detailed options of vxdisk will show the flag unavailable. When a new node joins the cluster, the Sun Cluster software releases the controllers. CVM attempts to access these disks, and if that is successful, the disks return to an online status. If one system boots while the other system has the disks reserved, the disks may be invisible to the booting system, and vxdisk may display none of the shared disks. When the system joins the cluster, the shared disks become visible.