Sun Cluster 2.2 Cluster Volume Manager Guide

Chapter 1 Installation and Release Notes

This chapter provides the installation and release information for Cluster Volume Manager (CVM) Release 2.2.1 for Solaris 2.6.

This release of CVM has been tested on, and requires, Solaris 2.6.

CVM is a cluster-aware version of Veritas Volume Manager (VxVM), and is designed to be used in Oracle Parallel Server configurations.

This document is a supplement to the Sun StorEdge Volume Manager (SSVM) Release Notes and Installation Guide. For general information on SSVM, refer to the Sun StorEdge Volume Manager 2.6 manual set.


Note -

Before you attempt to install the package, you should read this entire document.


The document covers the following topics:

  "1.1 Getting Help"

  "1.2 New Features and Changes"

  "1.3 The Visual Administrator and CVM"

  "1.4 Installing Cluster Volume Manager"

  "1.5 Software Limitations and Known Problems"

1.1 Getting Help

For additional information contact your local Enterprise Service representative or Enterprise Service Authorized Service Provider.

1.2 New Features and Changes

For information on the features associated with Cluster Volume Manager, refer to Chapter 2, Cluster Volume Manager. New features in Release 2.2.1 include:

1.3 The Visual Administrator and CVM

The Visual Administrator provides the following support for CVM:

1.4 Installing Cluster Volume Manager

This section describes how to install or update CVM. The packages on the CD-ROM can be installed on systems running Solaris 2.6. CVM 2.2.1 requires Sun Cluster 2.2 software. Complete the installation or upgrade to Sun Cluster 2.2 before attempting to install CVM 2.2.1.

CVM installation consists of two parts:

  1. Installing the combined package onto the system. Refer to "1.4.2 Installing CVM for the First Time", or "1.4.3 Upgrading to CVM Release 2.2.1".

  2. Configuring and setting up CVM. Refer to "1.4.4 Creating rootdg", and "1.4.5 Configuring Shared Disks".

1.4.1 Pre-installation

If you are installing CVM for the first time, refer to the Sun StorEdge Volume Manager Installation Guide for additional pre-installation information.

Make sure that /sbin and /usr/sbin are in your PATH so that the volume manager commands can be found. If you are using a Bourne Shell (sh or ksh), use the command:


PATH=/sbin:/usr/sbin:$PATH; export PATH

If you are using a C Shell (csh or tcsh), use the command:


setenv PATH /sbin:/usr/sbin:${PATH}

Note -

To use CVM with a SPARCstorage Array, you must use firmware level 3.4 or later.


1.4.2 Installing CVM for the First Time


Note -

CVM Release 2.2.1 requires Solaris 2.6, so it may be necessary to upgrade the operating environment before you install CVM.


  1. Load and mount the CVM 2.2.1 CD-ROM.

    It should then be visible as the file system mounted on /cdrom.
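
    If Volume Management (vold) is not running, you can mount the CD-ROM manually. The following is only a sketch; the CD-ROM device name is an assumption and varies by system:


    # mkdir -p /cdrom/cdrom0
    # mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom/cdrom0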

  2. Go to the directory containing the CVM packages:


    # cd /cdrom/cdrom0/CVM_2_2_1/Product
    
  3. Use pkgadd to install the following packages:


    # pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
    

    Caution -

    Packages must be installed in the order specified.


  4. Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.

1.4.3 Upgrading to CVM Release 2.2.1

Note that CVM Release 2.2.1 running on Sun Cluster 2.2 requires Solaris 2.6, so it may be necessary to upgrade the operating environment at the same time. The recommended procedure is to upgrade the operating environment first (if needed), then install or upgrade to Sun Cluster 2.2, and finally, upgrade CVM.


Note -

If you have encapsulated one or more disks, you must complete Step 1 through Step 6 of the following upgrade procedure before doing the operating environment upgrade.


1.4.3.1 Upgrading From Earlier Versions of CVM

If you have Sun Cluster 2.0 or 2.1 installed, upgrade the CVM software and the operating environment as follows:

  1. Make sure you have enough space in /opt to upgrade the operating environment.

  2. If any of the file systems /, /usr, /var, or /opt are defined on volumes, make sure that at least one plex for each of those volumes is formed from a single subdisk that begins on a cylinder boundary.

    This is a required step. Part of the upgrade process includes temporarily placing file systems onto volumes that are using direct disk partitions. The Solaris operating environment requires that disk partitions start on cylinder boundaries. This conversion is handled automatically by the upgrade scripts, as necessary. If the upgrade scripts detect any problems (such as lack of cylinder alignment), the scripts display an explanation of the problem and the upgrade process stops.
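
    One way to inspect the current layout is shown below; vxprint -ht lists the plex and subdisk layout of each volume, and prtvtoc shows the cylinder geometry of the underlying disk (the device name here is an example only):


    # vxprint -ht
    # prtvtoc /dev/rdsk/c0t0d0s2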

  3. Load and mount the CVM 2.2.1 CD-ROM.

    It should then be visible as the file system mounted on /cdrom.

  4. Run the upgrade_start script to prepare the previous release of CVM for its removal:


    # /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_start
    

    The upgrade_start script looks for volumes containing file systems. If certain key file systems must be converted back to using partitions, this script handles the conversions.

  5. Reboot to single-user mode (using a command such as uadmin 2 3).

  6. Remove the volume manager package(s).


    phys-hahost1# pkgrm SUNWvmdev SUNWvmman SUNWvxva SUNWvxvm
    
  7. Shut down and halt the machine (using a command such as uadmin 2 0).

  8. (Optional) Upgrade the operating environment to Solaris 2.6, if necessary.

    Refer to the Solaris installation documentation for instructions on how to upgrade the Solaris software environment.

  9. Go to the directory containing the CVM packages (on the CVM CD-ROM):


    # cd /cdrom/cdrom0/CVM_2_2_1/Product
    
  10. Use pkgadd to install the following packages:


    # pkgadd -d . SUNWvxvm SUNWvxva SUNWvmman SUNWvmdev
    
  11. Complete the upgrade by entering:


    # /cdrom/cdrom0/CVM_2_2_1/Tools/scripts/upgrade_finish
    
  12. Reboot to multi-user mode.

    At this point, your pre-upgrade configuration should be in effect and any file systems previously defined on volumes should be defined and mounted.
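
    For example, you can confirm this by listing the volumes and the mounted file systems:


    # vxprint -ht
    # df -k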

  13. Go to "1.4.4 Creating rootdg", to proceed with the CVM installation.

1.4.4 Creating rootdg

After loading the CVM software, you must create the default disk group, rootdg. One approach is to place the root disk under CVM control through the process of encapsulation; the disk group resulting from the encapsulation then becomes the rootdg disk group. However, if frequent upgrades to this package are anticipated, this may not be convenient, because it is more difficult to upgrade to new versions or recover from certain errors when the root disk is encapsulated. If you do not wish to encapsulate the root disk, you can encapsulate any other disk using vxinstall to create rootdg (rootdg is required for CVM to come up). Another approach is to create a simple volume manager disk (on a partition of a disk that is not shared and has not been encapsulated) and then use this for rootdg.

This section describes how to create the root disk group by using encapsulation; we do not recommend using the simple disk approach. After creating rootdg, go to "1.4.5 Configuring Shared Disks".

1.4.4.1 Encapsulating Your Root Disk

To encapsulate your root disk, create rootdg as follows:

  1. Invoke vxinstall and follow the instructions in the "Custom Installation" section of the Sun StorEdge Volume Manager 2.6 Installation Guide to encapsulate only the root disk.

    For all other disks, select the Leave these disks alone option.

  2. After using vxinstall to encapsulate the root disk, reboot the system.

    The vxinstall command will automatically create rootdg.
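
    After the reboot, you can confirm that the rootdg disk group exists by listing the disk groups:


    # vxdg list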

1.4.5 Configuring Shared Disks

If you are installing CVM for the first time or adding disks to an existing cluster, you must configure new shared disks. If you are upgrading CVM, verify that your shared disks still exist.


Note -

The shared disks should be configured from one node only. Because the CVM software cannot tell whether a disk is shared or not, you must specify which disks are shared.


Make sure that nobody else is accessing the shared disks from another node while you are performing the configuration.

1.4.6 How to Verify Existing Shared Disks

If you are upgrading from a previous release of CVM to CVM 2.2.1, verify that your shared disk groups still exist:

  1. Start the cluster on all nodes.

  2. Type the following command on all nodes:


    # vxdg list
    

    This should display the shared disk groups that existed before. DRL logs that were created with earlier versions of CVM may be too small for CVM 2.2.1. For additional information, refer to "2.1.4 Dirty Region Logging and CVM".

1.4.7 How to Convert Existing SEVM 2.x Disks to Shared Disks

If you are upgrading from SEVM 2.x to CVM 2.2.1 and want to share existing disk groups, configure the shared disks as follows:

  1. Start the cluster on at least one node.

    For a two-node cluster, start the cluster on one node; for a four-node cluster, start the cluster on three nodes.
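
    The exact commands depend on your cluster configuration; with Sun Cluster 2.2, the first node is typically started with scadmin startcluster and additional nodes with scadmin startnode. The node and cluster names below are placeholders:


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    phys-hahost2# scadmin startnode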

  2. List all disk groups:


    # vxdg list
    
  3. Deport disk groups to be shared:


    # vxdg deport groupname
    
  4. Import disk groups to be shared:


    # vxdg -s import groupname
    

    This will mark the disks in the shared disk groups as shared and stamp them with the ID of the cluster, enabling other nodes to recognize the shared disks.

    If dirty region logs exist, make sure they are active. If they are not, replace them with larger ones.
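
    One way to replace an undersized log is to remove it and add a larger one with vxassist; the disk group name, volume name, and log length below are placeholders:


    # vxassist -g groupname remove log volname
    # vxassist -g groupname addlog volname loglen=1m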

  5. Display the shared flag for all the shared disk groups:


    # vxdg list
    

    The disk groups are now ready to be shared.

  6. If the cluster is running with one node only, bring up the other cluster nodes.

  7. When each node is ready, enter the command vxdg list on it.

    This should display the same list of shared disk groups that appeared earlier.

1.4.8 How to Configure New Disks

If you are installing and setting up CVM for the first time, configure the shared disks as follows:

  1. Start the cluster on at least one node. If the cluster contains more than one node, perform Steps 3 and 4 only on the master node; the command vxdctl -c mode reports whether a node is the master or a slave.
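
    For example, to check the current mode of a node:


    # vxdctl -c mode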

  2. Run vxdisksetup to initialize each shared disk on any node; run vxdctl enable on all nodes afterwards.

    If you have decided not to put configuration information on every disk, or if you want larger areas for this information, vxdisksetup enables you to specify your choices.
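
    As a sketch (the disk device name is a placeholder), initialize a disk and then refresh the disk lists on each node:


    # vxdisksetup -i c1t1d0
    # vxdctl enable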

  3. Create disk groups on the shared disks.

    You can use vxdg or the Visual Administrator to do this. Use the -s option of vxdg to create shared disk groups.
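
    For example, a minimal sketch that creates a shared disk group from one disk (the group and disk names are placeholders):


    # vxdg -s init shared_dg c1t1d0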

  4. Create volumes in the disk groups.

    You can use vxassist or the Visual Administrator to do this.

    The volumes must be of type gen. Do not create RAID5 volumes. Before creating any log subdisks, read "2.1.4 Dirty Region Logging and CVM".
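
    For example, a sketch that creates a mirrored volume of usage type gen (the group name, volume name, and size are placeholders):


    # vxassist -g shared_dg -U gen make vol01 500m layout=mirror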

  5. If the cluster is running with one node only, bring up the other cluster nodes.

    When each node is ready, enter the command vxdg list on it. This should display the same list of shared disk groups that appeared earlier.

1.4.9 Disk Reservation

This section is applicable for two-node configurations only.

As part of failure fencing, Sun Cluster reserves the shared disks (and their controllers) when only one node is active. This prevents "rogue" hosts from accessing the shared disks. When this happens, the command vxdisk list on a node that has left the cluster may show all disks on a reserved controller as having an error status. The more detailed options of vxdisk will show the flag unavailable. When a new node joins the cluster, the Sun Cluster software releases the controllers. CVM attempts to access these disks, and if the access is successful, the disks return to an online status. If one system boots while the other system has the disks reserved, the disks may be invisible to the booting system, and vxdisk may display none of the shared disks. When the system joins the cluster, the shared disks become visible.
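
For example, to check the status of the disks from a node (the disk device name is a placeholder):


vxdisk list
vxdisk list c1t1d0s2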

1.5 Software Limitations and Known Problems

The following caveats and usage issues are known for CVM Release 2.2.1:


Resource temporarily unavailable

[vxclust] return from cluster_establish is configuration
daemon error -1
Resource temporarily unavailable

master has disconnected

A retry at a later time should succeed.

If the leaving node is a master, the attempt will generate disk-related error messages on both nodes and the remaining node will abort. The joining node will eventually join and may become master.


cannot assign minor #

This is accompanied by the console message:


WARNING: minor number ### disk group group in use