This chapter provides the installation and release information for Cluster Volume Manager (CVM) Release 2.2.1 for Solaris 2.6.
This release of CVM has been tested on, and requires, Solaris 2.6.
CVM is a cluster-aware version of Veritas Volume Manager (VxVM), and is designed to be used in Oracle Parallel Server configurations.
This document is a supplement to the Sun StorEdge Volume Manager (SSVM) Release Notes and Installation Guide. For general information on SSVM, refer to the Sun StorEdge Volume Manager 2.6 manual set.
Before you attempt to install the package, you should read this entire document.
The document covers the following topics:
For additional information contact your local Enterprise Service representative or Enterprise Service Authorized Service Provider.
For information on the features associated with Cluster Volume Manager, refer to Chapter 2, Cluster Volume Manager. New features in Release 2.2.1 include:
CVM is now compatible with the Sun StorEdge A3000.
Before using CVM with the Sun StorEdge A3000, refer to the Sun StorEdge Volume Manager Release Notes (Release 2.5 or 2.6 as applicable) for information on how to install and use SSVM (and CVM) with the Sun StorEdge A3000.
CVM now supports up to four nodes per cluster. However, storage devices that are physically connected to all nodes are required for shared disk groups to work on more than two nodes.
The Visual Administrator provides the following support for CVM:
The Visual Administrator root window highlights shared disk group view buttons with a green shaded border (to distinguish them from unshared disk groups). In monochrome, a shared disk group button has a grey shaded border instead.
The view window for a shared disk group displays the string Shared Disk Group in its title bar. On color monitors, the background color for a shared disk group view is green; on monochrome monitors, there is no change in the background color.
A disk group can be initialized as cluster-shareable for an active cluster. To accommodate this, the Initialize Disk Group form contains a Shared disk group: field that can be set to Yes or No. If set to Yes, the disk group is defined as cluster-shareable upon initialization. The system administrator is responsible for ensuring that disks specified as members of the cluster-shareable disk group are physically accessible from the hosts that make up the cluster.
A disk group can be imported as shared through the Visual Administrator. To accommodate this, the Import Disk Group form has a Shared disk group: field that can be set to Yes or No. If set to Yes, the disk group is imported as cluster-shareable. This is only valid if the cluster is active on the host where the import takes place. The administrator is responsible for ensuring that all disks in a shared disk group are physically accessible by all hosts; a host that cannot access all disks in a shared disk group cannot join the cluster.
This section describes how to install or update CVM. The packages on the CD-ROM can be installed on systems running Solaris 2.6. CVM 2.2.1 requires Sun Cluster 2.2 software. Complete the installation or upgrade to Sun Cluster 2.2 before attempting to install CVM 2.2.1.
CVM installation consists of two parts:
Installing the combined package onto the system. Refer to "1.4.2 Installing CVM for the First Time", or "1.4.3 Upgrading to CVM Release 2.2.1".
Configuring and setting up CVM. Refer to "1.4.4 Creating rootdg", and "1.4.5 Configuring Shared Disks".
If you are installing CVM for the first time, refer to the Sun StorEdge Volume Manager Installation Guide for additional pre-installation information.
Most of the commands involved in the installation of CVM are in the /sbin or /usr/sbin directories. You should add these directories to your PATH environment variable.
If you are using a Bourne Shell (sh or ksh), use the command:
PATH=/sbin:/usr/sbin:$PATH
export PATH
If you are using a C Shell (csh or tcsh), use the command:
setenv PATH /sbin:/usr/sbin:${PATH}
A system using CVM has one or more disk groups, including the root disk group (rootdg). The rootdg must exist and cannot be shared between systems. At least one disk must exist within rootdg while CVM is running. Before installing CVM, you should decide where to place rootdg for each node in the cluster.
You can create rootdg by encapsulating the root disk as described in "1.4.4 Creating rootdg". Before beginning the installation, you must decide on the layout of shared disk groups. There may be one or more shared disk groups.
If you plan to use Dirty Region Logging (DRL) with CVM, consider leaving a small amount of space on the disk for these logs. The log size is proportional to the volume size and the number of nodes (each log has one recovery map plus one active map per node).
For a two-gigabyte volume in a two-node cluster, a log size of five blocks (one block per map) would be required. For every additional two gigabytes of volume size, the log size should then increase by approximately one block per map (so a four-gigabyte volume with two nodes would have a log size of ten blocks) up to a maximum of 96 blocks. For larger volumes, DRL changes the log granularity to accommodate the increased size without exceeding the maximum log size. A four-node cluster requires larger logs. See "2.1.4 Dirty Region Logging and CVM", for more information about log sizes.
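The sizing rule above can be sketched as a small shell calculation. This is an estimate only, assuming the figures quoted for a two-node cluster (roughly five log blocks per two gigabytes of volume, capped at 96 blocks); the variable names are illustrative and are not CVM parameters.

```shell
# Estimate the DRL log size for a two-node cluster, per the rule above.
# Assumption: about 5 blocks per 2 GB of volume, capped at 96 blocks.
vol_gb=4                                   # volume size in gigabytes
blocks_per_2gb=5                           # two-node figure from the text
log_blocks=$(( (vol_gb + 1) / 2 * blocks_per_2gb ))
if [ "$log_blocks" -gt 96 ]; then
    log_blocks=96                          # maximum DRL log size
fi
echo "estimated DRL log size: $log_blocks blocks"
```

For a four-gigabyte volume this prints an estimate of ten blocks, matching the example above; a four-node cluster needs proportionally more.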
To use CVM with a SPARCstorage Array, you must use firmware level 3.4 or later.
CVM Release 2.2.1 requires Solaris 2.6, so it may be necessary to upgrade the operating environment before you install CVM.
Load and mount the CVM 2.2.1 CD-ROM.
It should then be visible as the file system mounted on /cdrom.
Go to the directory containing the CVM packages:
# cd /cdrom/cdrom0/CVM_2_2_1/Product
Use pkgadd to install the following packages:
# pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
Packages must be installed in the order specified.
Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.
Note that CVM Release 2.2.1 running on Sun Cluster 2.2 requires Solaris 2.6, so it may be necessary to upgrade the operating environment at the same time. The recommended procedure is to upgrade the operating environment first (if needed), then install or upgrade to Sun Cluster 2.2, and finally, upgrade CVM.
If you have encapsulated one or more disks, you must complete the upgrade procedure below through Step 6 before doing the operating environment upgrade.
If you have Sun Cluster 2.0 or 2.1 installed, upgrade the CVM software and the operating environment as follows:
Make sure you have enough space in /opt to upgrade the operating environment.
If any of the file systems /, /usr, /var, or /opt are defined on volumes, make sure that at least one plex for each of those volumes is formed from a single subdisk that begins on a cylinder boundary.
This is a required step. Part of the upgrade process includes temporarily placing file systems onto volumes that are using direct disk partitions. The Solaris operating environment requires that disk partitions start on cylinder boundaries. This conversion is handled automatically by the upgrade scripts, as necessary. If the upgrade scripts detect any problems (such as lack of cylinder alignment), the scripts display an explanation of the problem and the upgrade process stops.
Load and mount the CVM 2.2.1 CD-ROM.
It should then be visible as the file system mounted on /cdrom.
Run the upgrade_start script to prepare the previous release of CVM for its removal:
# /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_start
The upgrade_start script looks for volumes containing file systems. If certain key file systems must be converted back to using partitions, this script handles the conversions.
Reboot to single-user mode (using a command such as uadmin 2 3).
Remove the volume manager package(s).
phys-hahost1# pkgrm SUNWvmdev SUNWvmman SUNWvxva SUNWvxvm
Shut down and halt the machine (using a command such as uadmin 2 0).
(Optional) Upgrade the operating environment to Solaris 2.6, if necessary.
Refer to the Solaris installation documentation for instructions on how to upgrade the Solaris software environment.
Go to the directory containing the CVM packages (on the CVM CD-ROM):
# cd /cdrom/cdrom0/CVM_2_2_1/Product
Use pkgadd to install the following packages:
# pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
Complete the upgrade by entering:
# /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_finish
Reboot to multi-user mode.
At this point, your pre-upgrade configuration should be in effect and any file systems previously defined on volumes should be defined and mounted.
Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.
After loading the CVM software, you must create the default disk group, rootdg. One approach is to place the root disk under CVM control through the process of encapsulation; the disk group that results from the encapsulation then becomes rootdg. However, if frequent upgrades to this package are anticipated, this may not be convenient, because it is more difficult to upgrade to new versions or to recover from certain errors when the root disk is encapsulated. If you do not wish to encapsulate the root disk, you can use vxinstall to encapsulate any other disk to create rootdg (which is required for CVM to come up). Another approach is to create a simple volume manager disk (on a partition of a disk that is not shared and has not been encapsulated) and then use this disk for rootdg.
This section describes how to create the root disk group by using encapsulation; we do not recommend using the simple disk approach. After creating rootdg, go to "1.4.5 Configuring Shared Disks".
To encapsulate your root disk, create rootdg as follows:
Invoke vxinstall and follow the instructions in the "Custom Installation" section of the Sun StorEdge Volume Manager 2.6 Installation Guide to encapsulate only the root disk.
For all other disks, select the Leave these disks alone option.
After using vxinstall to encapsulate the root disk, reboot the system.
The vxinstall command will automatically create rootdg.
If you are installing CVM for the first time or adding disks to an existing cluster, you must configure new shared disks. If you are upgrading CVM, verify that your shared disks still exist.
The shared disks should be configured from one node only. Because the CVM software cannot tell whether a disk is shared or not, you must specify which disks are shared.
Make sure that nobody else is accessing the shared disks from another node while you are performing the configuration.
If you are upgrading from a previous release of CVM to CVM 2.2.1, verify that your shared disk groups still exist:
Start the cluster on all nodes.
Type the following command on all nodes:
# vxdg list
This should display the shared disk groups that existed before. DRL logs that were created with earlier versions of CVM may be too small for CVM 2.2.1. For additional information, refer to "2.1.4 Dirty Region Logging and CVM".
If you are upgrading from SEVM 2.x to CVM 2.2.1 and want to share existing disk groups, configure the shared disks as follows:
Start the cluster on at least one node.
For a two-node cluster, start the cluster on one node; for a four-node cluster, start the cluster on three nodes.
List all disk groups:
# vxdg list
Deport disk groups to be shared:
# vxdg deport groupname
Import disk groups to be shared:
# vxdg -s import groupname
This will mark the disks in the shared disk groups as shared and stamp them with the ID of the cluster, enabling other nodes to recognize the shared disks.
If there are dirty region logs, make sure they are active; if they are not, replace them with larger logs.
Display the shared flag for all the shared disk groups:
# vxdg list
The disk groups are now ready to be shared.
If the cluster is running with one node only, bring up the other cluster nodes.
When each node is ready, enter the vxdg list command on it.
This should display the same list of shared disk groups that appeared earlier.
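The deport and re-import sequence above can be collected into one short script. This is a sketch only: mydg is a placeholder disk group name, and the commands are printed rather than executed, so the fragment is safe to run anywhere; on a live cluster, run the vxdg commands directly on the appropriate node.

```shell
# Dry-run sketch of sharing an existing disk group (steps above).
# "mydg" is a placeholder; the commands are printed, not executed.
DG=mydg
out=$(printf '%s\n' \
    "vxdg list" \
    "vxdg deport $DG" \
    "vxdg -s import $DG" \
    "vxdg list")
echo "$out"
```

Remove the dry-run wrapper and substitute your own disk group name to perform the actual conversion.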
If you are installing and setting up CVM for the first time, configure the shared disks as follows:
Start the cluster on at least one node. If the cluster contains more than one node, perform Steps 3 and 4 only on the master node. vxdctl -c mode reports the operating mode of CVM.
Run vxdisksetup to initialize each shared disk on any node; run vxdctl enable on all nodes afterwards.
If you have decided not to put configuration information on every disk, or if you want larger areas for this information, vxdisksetup enables you to specify your choices.
Create disk groups on the shared disks.
You can use vxdg or the Visual Administrator to do this. Use the -s option of vxdg to create shared disk groups.
Create volumes in the disk groups.
You can use vxassist or the Visual Administrator to do this.
The volumes must be of type gen. Do not create RAID5 volumes. Before creating any log subdisks, read "2.1.4 Dirty Region Logging and CVM".
If the cluster is running with one node only, bring up the other cluster nodes.
When each node is ready, enter the command vxdg list on it. This should display the same list of shared disk groups that appeared earlier.
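The first-time setup steps can likewise be sketched as one sequence. The device, group, and volume names (c1t1d0, shareddg, vol01) are placeholders, and the -U gen option selects the gen usage type required above; the commands are printed as a dry run, so on a live cluster you would instead run them on the master node (with vxdctl enable run on all nodes).

```shell
# Dry-run sketch of first-time shared disk setup (steps above).
# All names are placeholders; commands are printed, not executed.
DISK=c1t1d0
DG=shareddg
out=$(printf '%s\n' \
    "vxdisksetup -i $DISK" \
    "vxdctl enable" \
    "vxdg -s init $DG disk01=${DISK}s2" \
    "vxassist -g $DG -U gen make vol01 1g layout=mirror,log")
echo "$out"
```

The layout=mirror,log attribute is shown here as one way to create the mirrored volume with a DRL log subdisk; read "2.1.4 Dirty Region Logging and CVM" before choosing log sizes.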
This section is applicable for two-node configurations only.
As part of failure fencing, Sun Cluster reserves shared disks when only one node is active. This prevents "rogue" hosts from accessing the shared disks. When this happens, the command vxdisk list on a node that has left the cluster may show all disks on such a controller as having an error status. The more detailed options of vxdisk will show the flag unavailable. When a new node joins the cluster, the Sun Cluster software releases the controllers. CVM attempts to access these disks, and if that is successful, the disks return to an online status. If one system boots while the other system has the disks reserved, the disks may be invisible to the booting system, and vxdisk may display none of the shared disks. When the system joins the cluster, the shared disks become visible.
The following caveats and usage issues are known for CVM Release 2.2.1:
If CVM has deported a disk group because the disk group has lost access to one or more of its disks (due to a node leaving the cluster), the only way to try to regain access to the deported disks that are still attached to nodes in the cluster is to force-import the deported disk group. However, forcing an import in this situation is dangerous because it can cause mirrors to become unsynchronized in such a way that it cannot be determined which mirror has correct data.
It is possible to have private (non-shared) disk groups on physically shared disks. If these disks are on controllers that have been designated for fencing (for example, reserved by Sun Cluster), the owner of the private disk group may not be able to access it when it is not in the cluster.
CVM does not currently support RAID5 volumes.
Only gen volume types are supported in shared disk groups. The use of fsgen volumes can cause system deadlocks.
When a node leaves the cluster due to clean shutdown or abort, the surviving node performs a cluster reconfiguration. If the leaving node attempts to rejoin before the cluster reconfiguration is complete, the outcome depends on whether the leaving node is a slave or master.
If the leaving node is a slave, the attempt will fail with one of the following pairs of error messages:
Resource temporarily unavailable
[vxclust] return from cluster_establish is configuration daemon error -1

Resource temporarily unavailable
master has disconnected
A retry at a later time should succeed.
If the leaving node is a master, the attempt will generate disk-related error messages on both nodes and the remaining node will abort. The joining node will eventually join and may become master.
If vxconfigd is stopped on both the master and slave nodes and then restarted on the slave first, its displays will not be reliable until vxconfigd has started on the master and the slave has reconnected (which may take about 30 seconds). In particular, shared disk groups will be marked "disabled" and no information about them will be available. vxconfigd should therefore be started on the master first.
When a node aborts from the cluster, open volume devices in shared disk groups on which I/O is not active are not removed until the volumes are closed. If this node later joins the cluster as the master while these volumes are still open, the presence of these volumes does not cause a problem. However, if the node tries to rejoin the cluster as a slave, this may fail with the error message:
cannot assign minor #
This is accompanied by the console message:
WARNING: minor number ### disk group group in use
The current disk hot-sparing mechanism does not work well for partial disk failures. The model was written using the presumption that disks fail totally, rather than partially, and that partial errors can usually be fixed by writing back a failing sector. Usually this is a good assumption, but some users have encountered situations where only a few sectors failed and hot-sparing did not occur.
When vxconfigd is stopped and restarted, it may disable large disk groups (for example, disk groups containing hundreds of volumes).
Workaround: Restart vxconfigd with the cleartempdir option. If needed, deport and reimport the disk groups and start all volumes.
Under some circumstances, a node abort may lead to a panic. This is relatively rare, but can occur if I/O cannot be quiesced in a timely manner and the node needs to be brought down to ensure data integrity.