Sun Cluster 3.2 Release Notes for Solaris OS

Upgrade

The vxlufinish Script Returns an Error When the Root Disk is Encapsulated (6448341)

Problem Summary: This problem is seen when the root disk is encapsulated and a live upgrade is attempted from VxVM 3.5 on the Solaris 9 8/03 OS to VxVM 5.0 on the Solaris 10 6/06 OS. The vxlufinish script fails with the following error:


#./vxlufinish -u 5.10

    VERITAS Volume Manager VxVM 5.0
    Live Upgrade finish on the Solaris release <5.10>

    Enter the name of the alternate root diskgroup: altrootdg
ld.so.1: vxparms: fatal: libvxscsi.so: open failed: No such file or directory
ld.so.1: vxparms: fatal: libvxscsi.so: open failed: No such file or directory
Killed
ld.so.1: ugettxt: fatal: libvxscsi.so: open failed: No such file or directory
ERROR:vxlufinish Failed: /altroot.5.10/usr/lib/vxvm/bin/vxencap -d -C 10176
-c -p 5555 -g
    -g altrootdg rootdisk=c0t1d0s2
    Please install, if 5.0 or higher version of VxVM is not installed
    on alternate bootdisk.

Workaround: Use the standard upgrade or dual-partition upgrade method instead.

Contact Sun support or your Sun representative to learn whether Sun Cluster 3.2 Live Upgrade support for VxVM 5.0 will become available at a later date.

Live Upgrade Should Support Mounting Global Devices From Boot Disk (6433728)

Problem Summary: During a live upgrade, the lucreate and luupgrade commands fail to change the DID names that correspond to the /global/.devices/node@N entry in the alternate boot environment.

Workaround: Before you start the live upgrade, perform the following steps on each cluster node.

  1. Become superuser.

  2. Back up the /etc/vfstab file.


    # cp /etc/vfstab /etc/vfstab.old
    
  3. Open the /etc/vfstab file for editing.

  4. Locate the line that corresponds to /global/.devices/node@N.

  5. Edit the global device entry.

    • Change the DID names to the physical names.

      Change /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.

    • Remove global from the entry.

    The following example shows the entry for DID device d3s3, which corresponds to /global/.devices/node@2, changed to its physical device names and with the global option removed (a scripted sketch of this edit appears after this procedure):


    Original:
    /dev/did/dsk/d3s3    /dev/did/rdsk/d3s3    /global/.devices/node@2   ufs   2   no   global
    
    Changed:
    /dev/dsk/c0t0d0s3    /dev/rdsk/c0t0d0s3    /global/.devices/node@2   ufs   2   no   -
  6. After the /etc/vfstab file has been modified on all cluster nodes, perform the live upgrade of the cluster, but stop before you reboot from the upgraded alternate boot environment.

  7. On each node, in the current (unupgraded) boot environment, restore the original /etc/vfstab file.


    # cp /etc/vfstab.old /etc/vfstab
    
  8. In the alternate boot environment, open the /etc/vfstab file for editing.

  9. Locate the line that corresponds to /global/.devices/node@N and replace the dash (-) at the end of the entry with the word global.


    /dev/dsk/cNtXdYsZ    /dev/rdsk/cNtXdYsZ    /global/.devices/node@N   ufs   2   no   global
    
  10. Reboot the node from the upgraded alternate boot environment.

    The DID names are substituted in the /etc/vfstab file automatically.
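
The following ksh sketch shows one way to automate the Step 5 edit on a single node. This sketch is not part of the documented procedure. The NODEID and PHYSDEV values are placeholders that you must replace with the node's actual node ID and the physical device that backs its /global/.devices file system.

#!/bin/ksh
# Sketch only: rewrite the /global/.devices/node@N entry in /etc/vfstab to use
# physical device names instead of DID names, and drop the "global" option.
# NODEID and PHYSDEV are placeholders; verify both values before you run this.

NODEID=2                 # node ID N in /global/.devices/node@N
PHYSDEV=c0t0d0s3         # physical slice that backs the DID device (cNtXdYsZ)

cp /etc/vfstab /etc/vfstab.old       # keep a backup, as in Step 2

# Copy every line unchanged except the /global/.devices/node@N entry, which is
# rewritten with the physical device names and a "-" in the options field.
nawk -v node="/global/.devices/node@${NODEID}" -v dev="${PHYSDEV}" '
    $3 == node {
        print "/dev/dsk/" dev "   /dev/rdsk/" dev "   " node "   ufs   2   no   -"
        next
    }
    { print }
' /etc/vfstab.old > /etc/vfstab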

The vxlustart Script Fails to Create the Alternate Boot Environment During a Live Upgrade (6445430)

Problem Summary: This problem is seen when upgrading VERITAS Volume Manager (VxVM) during a Sun Cluster live upgrade. The vxlustart script is used to upgrade the Solaris OS and VxVM from the previous version. The script fails with error messages similar to the following:


# ./vxlustart -u 5.10 -d c0t1d0 -s OSimage

   VERITAS Volume Manager VxVM 5.0.
   Live Upgrade is now upgrading from 5.9 to <5.10>
…
ERROR: Unable to copy file systems from boot environment <sorce.8876> to BE <dest.8876>.
ERROR: Unable to populate file systems on boot environment <dest.8876>.
ERROR: Cannot make file systems for boot environment <dest.8876>.
ERROR: vxlustart: Failed: lucreate -c sorce.8876 -C /dev/dsk/c0t0d0s2 
-m -:/dev/dsk/c0t1d0s1:swap -m /:/dev/dsk/c0t1d0s0:ufs 
-m /globaldevices:/dev/dsk/c0t1d0s3:ufs -m /mc_metadb:/dev/dsk/c0t1d0s7:ufs 
-m /space:/dev/dsk/c0t1d0s4:ufs -n dest.8876

Workaround: Use the standard upgrade or dual-partition upgrade method if you are upgrading the cluster to VxVM 5.0.

Contact Sun support or your Sun representative to learn whether Sun Cluster 3.2 Live Upgrade support for VxVM 5.0 will become available at a later date.

vxio Major Numbers Different Across the Nodes When the Root Disk is Encapsulated (6445917)

Problem Summary: For clusters that run VERITAS Volume Manager (VxVM), a standard upgrade or dual-partition upgrade of the Solaris OS, Sun Cluster software, or VxVM fails if the root disk is encapsulated.

The cluster node panics and fails to boot after upgrade. This is due to the major-number or minor-number changes made by VxVM during the upgrade.

Workaround: Unencapsulate the root disk before you begin the upgrade.


Caution –

If this procedure is not followed correctly, you might experience serious unexpected problems on all nodes that are being upgraded. In addition, each unencapsulation and re-encapsulation of the root disk automatically triggers an additional reboot of the node, which increases the number of required reboots during the upgrade.
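
One way to verify whether a cluster is exposed to this problem is to compare the vxio entry in the /etc/name_to_major file on each node; the major number must be identical cluster-wide. The following commands are a quick check only and are not part of the documented workaround. The host names phys-node-1 and phys-node-2 are placeholders, and rsh access between the nodes is assumed.

# Display the vxio major number on the local node; the value must be the same
# on every cluster node.
grep vxio /etc/name_to_major

# Compare two nodes from a single login; the host names are placeholders.
rsh phys-node-1 grep vxio /etc/name_to_major
rsh phys-node-2 grep vxio /etc/name_to_major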


Cannot Use Zones Following Live Upgrade From Sun Cluster Version 3.1 on Solaris 9 to Version 3.2 on Solaris 10 (6509958)

Problem Summary: Following a live upgrade from Sun Cluster version 3.1 on Solaris 9 to version 3.2 on Solaris 10, zones cannot be used properly with the cluster software. The problem is that the pspool data is not created for the Sun Cluster packages. As a result, packages that must be propagated to the non-global zones, such as SUNWsczu, are not propagated correctly.

Workaround: After the Sun Cluster packages have been upgraded by using the scinstall -R command but before the cluster has booted into cluster mode, run the following script twice:

Procedure: Instructions for Using the Script

Before You Begin

Prepare and run this script as described in the following steps.

  1. Become superuser.

  2. Create a script with the following content.

    #!/bin/ksh
    
    typeset PLATFORM=${PLATFORM:-`uname -p`}
    typeset PATHNAME=${PATHNAME:-/cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster/Solaris_10/Packages}
    typeset BASEDIR=${BASEDIR:-/}
    
    cd $PATHNAME
    for i in *
    do
    	if pkginfo -R ${BASEDIR} $i >/dev/null 2>&1
    	then
    		mkdir -p ${BASEDIR}/var/sadm/pkg/$i/save/pspool
    		pkgadd -d . -R ${BASEDIR} -s ${BASEDIR}/var/sadm/pkg/$i/save/pspool $i
    	fi
    done
  3. Set the variables PLATFORM, PATHNAME, and BASEDIR.

    Either set these variables as environment variables or modify the values in the script directly.

    PLATFORM

    The name of the platform. For example, it could be sparc or x86. By default, the PLATFORM variable is set to the output of the uname -p command.

    PATHNAME

    A path to the device from which the Sun Cluster framework or data-service packages can be installed. This value corresponds to the -d option in the pkgadd command.

    As an example, for Sun Cluster framework packages, this value would be of the following form:


    /cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster/Solaris_10/Packages

    For the data services packages, this value would be of the following form:


    /cdrom/cdrom0/Solaris_${PLATFORM}/Product/sun_cluster_agents/Solaris_10/Packages
    BASEDIR

    The full path name of a directory to use as the root path, which corresponds to the -R option in the pkgadd command. For live upgrade, set this value to the root path that is used with the -R option in the scinstall command. By default, the BASEDIR variable is set to the root (/) file system.

  4. Run the script, once for the Sun Cluster framework packages and once for the data-service packages (an example invocation follows this procedure).

    After the script is run, you should see the following message at the command prompt for each package:


    Transferring pkgname package instance

    Note –

    If the pspool directory already exists for a package or if the script is run twice for the same set of packages, the following error is displayed at the command prompt:


    Transferring pkgname package instance
    pkgadd: ERROR: unable to complete package transfer
        - identical version of pkgname already exists on destination device

    This is a harmless message and can be safely ignored.


  5. After you run the script for both framework packages and data-service packages, boot your nodes into cluster mode.
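
The following invocation shows one way to run the script twice, as described in Step 4. The script file name /var/tmp/pspool_transfer, the /a root path, and the sparc media paths are examples only; use the root path that you specified with the scinstall -R command and the package directories that match your platform and media.

# Example only: the script from Step 2 is saved as /var/tmp/pspool_transfer and
# the alternate root is mounted at /a (the path that was used with scinstall -R).

# First run: Sun Cluster framework packages
PATHNAME=/cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Packages \
BASEDIR=/a ksh /var/tmp/pspool_transfer

# Second run: Sun Cluster data-service packages
PATHNAME=/cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents/Solaris_10/Packages \
BASEDIR=/a ksh /var/tmp/pspool_transfer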

Can't Add Node to an Existing Sun Cluster 3.2–Patched Cluster Without Adding the Sun Cluster 3.2 Core Patch to the Node (6554107)

Problem Summary: Adding a new cluster node without ensuring that the node has the same patches as the existing cluster nodes might cause the cluster nodes to panic.

Workaround: Before adding nodes to the cluster, ensure that the new node is first patched to the same level as the existing cluster nodes. Failure to do this might cause the cluster nodes to panic.
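
One way to compare patch levels before you add the node is to diff the sorted showrev -p output of an existing cluster node against that of the new node. The following commands are a sketch only and are not part of the documented workaround. The host name phys-existing is a placeholder, and rsh access to the existing node is assumed.

# Run on the node that you are about to add; phys-existing is a placeholder
# for one of the current cluster nodes.
rsh phys-existing "showrev -p | sort" > /var/tmp/patches.existing
showrev -p | sort > /var/tmp/patches.newnode

# Any patches that are reported only in the first file must be applied to the
# new node before it joins the cluster.
diff /var/tmp/patches.existing /var/tmp/patches.newnode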