Sun Cluster Software Installation Guide for Solaris OS

Chapter 8 Upgrading Sun Cluster Software

This chapter provides the following information and procedures to upgrade a Sun Cluster 3.0 or 3.1 configuration to Sun Cluster 3.2 software:

Upgrade Requirements and Software Support Guidelines

Observe the following requirements and software-support guidelines when you upgrade to Sun Cluster 3.2 software:

Choosing a Sun Cluster Upgrade Method

Choose from the following methods to upgrade your cluster to Sun Cluster 3.2 software:

For overview information about planning your Sun Cluster 3.2 configuration, see Chapter 1, Planning the Sun Cluster Configuration.

Performing a Standard Upgrade to Sun Cluster 3.2 Software

This section provides the following information to upgrade to Sun Cluster 3.2 software by using the standard upgrade method:

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 8–1 Task Map: Performing a Standard Upgrade to Sun Cluster 3.2 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. Remove the cluster from production and back up shared data. 

How to Prepare the Cluster for Upgrade (Standard)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS. 

How to Upgrade the Solaris OS and Volume Manager Software (Standard)

4. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade Sun Cluster 3.2 Software (Standard)

5. Verify successful completion of upgrade to Sun Cluster 3.2 software. 

How to Verify Upgrade of Sun Cluster 3.2 Software

6. Enable resources and bring resource groups online. Migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 Software

7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

Procedure: How to Prepare the Cluster for Upgrade (Standard)

Perform this procedure to remove the cluster from production before you perform a standard upgrade. On the Solaris 10 OS, perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.


      phys-schost% scstat
      

      See the scstat(1M) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    3. Check the volume-manager status.
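
      For example, on a cluster that uses Solaris Volume Manager you might check the state of disk sets and state database replicas, and on a cluster that uses VERITAS Volume Manager you might check disk group status, with commands similar to the following. The disk set and disk group names are placeholders, and superuser privileges might be required.


      phys-schost# metastat -s setname
      phys-schost# metadb
      phys-schost# vxprint -g diskgroup
      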

  2. Notify users that cluster services will be unavailable during the upgrade.

  3. Become superuser on a node of the cluster.

  4. Take each resource group offline and disable all resources.

    Take offline all resource groups in the cluster, including those that are in non-global zones. Then disable all resources, to prevent the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.

    • If you are upgrading from Sun Cluster 3.1 software and want to use the scsetup utility, perform the following steps:

      1. Start the scsetup utility.


        phys-schost# scsetup
        

        The scsetup Main Menu is displayed.

      2. Type the number that corresponds to the option for Resource groups and press the Return key.

        The Resource Group Menu is displayed.

      3. Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.

      4. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

      5. When all resource groups are offline, type q to return to the Resource Group Menu.

      6. Exit the scsetup utility.

        Type q to back out of each submenu or press Ctrl-C.

    • To use the command line, perform the following steps:

      1. Take each resource group offline.


        phys-schost# scswitch -F -g resource-group
        
        -F

        Switches a resource group offline.

        -g resource-group

        Specifies the name of the resource group to take offline.

      2. From any node, list all enabled resources in the cluster.


        phys-schost# scrgadm -pv | grep "Res enabled"
        (resource-group:resource) Res enabled: True
      3. Identify those resources that depend on other resources.

        You must disable dependent resources first before you disable the resources that they depend on.
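
        For example, one way you might review dependency settings before you choose the order in which to disable resources is to search the verbose configuration output for dependency properties. The exact property names that appear depend on your configuration.


        phys-schost# scrgadm -pvv | grep -i depend
        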

      4. Disable each enabled resource in the cluster.


        phys-schost# scswitch -n -j resource
        
        -n

        Disables the specified resource.

        -j resource

        Specifies the resource.

        See the scswitch(1M) man page for more information.

      5. Verify that all resources are disabled.


        phys-schost# scrgadm -pv | grep "Res enabled"
        (resource-group:resource) Res enabled: False
      6. Move each resource group to the unmanaged state.


        phys-schost# scswitch -u -g resource-group
        
        -u

        Moves the specified resource group to the unmanaged state.

        -g resource-group

        Specifies the name of the resource group to move into the unmanaged state.

  5. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.


    phys-schost# scstat
    
  6. For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Availability Suite software.

    2. Identify the device ID and the slice that is used by the Availability Suite configuration file.


      phys-schost# /usr/opt/SUNWscm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      phys-schost# scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Availability Suite software.


      phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      
  7. (Optional) If you are upgrading from a version of Sun Cluster 3.0 software and do not want your ntp.conf file renamed to ntp.conf.cluster, create an ntp.conf.cluster file.

    On each node, copy /etc/inet/ntp.cluster as ntp.conf.cluster.


    phys-schost# cp /etc/inet/ntp.cluster /etc/inet/ntp.conf.cluster
    

    The existence of an ntp.conf.cluster file prevents upgrade processing from renaming the ntp.conf file. The ntp.conf file will still be used to synchronize NTP among the cluster nodes.

  8. Stop all applications that are running on each node of the cluster.

  9. Ensure that all shared data is backed up.

  10. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.
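
      For example, you might record the mediator hosts for each disk set by viewing the disk set status. The disk set name is a placeholder; when mediators are configured, the output typically includes a section that lists the mediator hosts.


      phys-schost# metaset -s setname
      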

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      phys-schost# scswitch -z -D setname -h node
      
      -z

      Changes mastery.

      -D setname

      Specifies the name of the disk set.

      -h node

      Specifies the name of the node to become primary of the disk set.

    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  11. From one node, shut down the cluster.


    # scshutdown -g0 -y
    

    See the scshutdown(1M) man page for more information.

  12. Boot each node into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  13. Ensure that each system disk is backed up.
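
    For example, you might back up the root (/) file system of a node to tape with a command similar to the following. The tape and disk device names are placeholders; use the backup method and media that your site normally uses.


    phys-schost# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c0t0d0s0
    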

Next Steps

Upgrade software on each node.

Procedure: How to Upgrade the Solaris OS and Volume Manager Software (Standard)

Perform this procedure on each node in the cluster to upgrade the Solaris OS. On the Solaris 10 OS, perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.2 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Upgrade Sun Cluster 3.2 Software (Standard).


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) are completed.

  1. Become superuser on the cluster node to upgrade.

    If you are performing a dual-partition upgrade, the node must be a member of the partition that is in noncluster mode.

  2. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  3. Determine whether the following Apache run-control scripts exist and are enabled or disabled:


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.
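
    For example, one way you might check for these scripts is to list them. The ls command reports an error for any script that does not exist, which itself tells you that the script is absent.


    phys-schost# ls -l /etc/rc0.d/*apache /etc/rc1.d/*apache /etc/rc2.d/*apache \
    /etc/rc3.d/*apache /etc/rcS.d/*apache
    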

    • If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.

    • If these scripts do not exist, in Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

    • If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

  4. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    1. For later reference, make a record of all entries that are already commented out.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
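
      As a sketch, an entry for a globally mounted file system might look similar to the following before and after you comment it out. The device and mount-point names are hypothetical.


      /dev/md/dg-schost-1/dsk/d20 /dev/md/dg-schost-1/rdsk/d20 /global/dg-schost-1 ufs 2 yes global,logging
      #/dev/md/dg-schost-1/dsk/d20 /dev/md/dg-schost-1/rdsk/d20 /global/dg-schost-1 ufs 2 yes global,logging
      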

  5. Determine which procedure to follow to upgrade the Solaris OS.

    Volume Manager 

    Procedure 

    Location of Instructions 

    Solaris Volume Manager 

    Any Solaris upgrade method except the Live Upgrade method

    Solaris installation documentation 

    VERITAS Volume Manager 

    “Upgrading VxVM and Solaris” 

    VERITAS Volume Manager installation documentation 


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 or 10 version of VxVM software as part of the Solaris upgrade process.


  6. Upgrade the Solaris software, following the procedure that you selected in Step 5.


    Note –

    Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:

    1. Return to this procedure to perform Step 7 and Step 8.

    2. Reboot into noncluster mode in Step 9 to complete the Solaris software upgrade.


    • When prompted, choose the manual reboot option.

    • When you are instructed to reboot a node during the upgrade process, always reboot into noncluster mode. For the boot and reboot commands, add the -x option to the command. The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:

    • On SPARC based systems, perform either of the following commands:


      phys-schost# reboot -- -xs
      or
      ok boot -xs
      

      If the instruction says to run the init S command, use the reboot -- -xs command instead.

    • On x86 based systems running the Solaris 9 OS, perform either of the following commands:


      phys-schost# reboot -- -xs
      or
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type  b [file-name] [boot-flags] <ENTER>  to boot with options
      or    i <ENTER>                           to enter boot interpreter
      or    <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -xs
      
    • On x86 based systems running the Solaris 10 OS, perform the following command:


      phys-schost# shutdown -g0 -y -i0
      Press any key to continue
      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

  7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

  8. If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.

    To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.


    phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

    Alternatively, you can rename the scripts to be consistent with your normal administration practices.

  9. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    phys-schost# reboot -- -x
    
  10. If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Make the following changes to the procedure:

    • After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.

      If the VxVM upgrade procedures commented out any of the entries that you uncommented in Step 7, uncomment those entries again.
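
      For example, you might list the entries that contain the global mount option to confirm that they are not commented out.


      phys-schost# grep global /etc/vfstab
      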

    • When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.

      • On SPARC based systems, perform the following command:


        phys-schost# reboot -- -rx
        
      • On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -rx to the kernel boot command instead of -sx.


    Note –

    If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command and do not type Ctrl-D.


    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password
    

  11. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  12. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
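
    For example, if you have staged a patch in a local directory, you might apply it with a command similar to the following. The patch ID is a placeholder; follow the installation instructions that accompany each patch.


    phys-schost# patchadd /var/tmp/123456-01
    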

Next Steps

Upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Standard).


Note –

To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 8 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.


Procedure: How to Upgrade Sun Cluster 3.2 Software (Standard)

Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.2 software. This procedure also upgrades required Sun Java Enterprise System shared components.

You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 8 to Solaris 10 software.

On the Solaris 10 OS, perform all steps from the global zone only.


Tip –

You can perform this procedure on more than one node at the same time.


Before You Begin

Perform the following tasks:

  1. Become superuser on a node of the cluster.

  2. Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following examples show commands that you can use to display the version of various releases of Java software.


      phys-schost# /usr/j2se/bin/java -version
      phys-schost# /usr/java1.2/bin/java -version
      phys-schost# /usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /usr/java
      phys-schost# ln -s /usr/j2se /usr/java
      
  3. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  4. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  5. Start the installation wizard program.


    phys-schost# ./installer
    
  6. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.


    Note –

    Do not use the installation wizard program to upgrade Sun Cluster software packages.


    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  7. Exit the installation wizard program.

  8. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  9. Start the scinstall utility.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  10. Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  4
    

    The Upgrade Menu is displayed.

  11. Type the number that corresponds to the option for Upgrade Sun Cluster framework on this cluster node and press the Return key.

  12. Follow the menu prompts to upgrade the cluster framework.

    During the Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.

  13. Quit the scinstall utility.

  14. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  15. Upgrade data service packages.

    You must upgrade all data services to the Sun Cluster 3.2 version.


    Note –

    For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.


    1. Start the upgraded interactive scinstall utility.


      phys-schost# /usr/cluster/bin/scinstall
      

      Note –

      Do not use the scinstall utility that is on the installation media to upgrade data service packages.


      The scinstall Main Menu is displayed.

    2. Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.

      The Upgrade Menu is displayed.

    3. Type the number that corresponds to the option for Upgrade Sun Cluster data service agents on this node and press the Return key.

    4. Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.

    5. Press Enter.

      The Upgrade Menu is displayed.

  16. Quit the scinstall utility.

  17. If you have Sun Cluster HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.


    Note –

    If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems.


    As of the Sun Cluster 3.2 release, LOFS is no longer disabled by default during Sun Cluster software installation or upgrade. To disable LOFS, ensure that the /etc/system file contains the following entry:


    exclude:lofs

    This change becomes effective at the next system reboot.
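
    To check whether the entry is already present, you might run a command similar to the following.


    phys-schost# grep lofs /etc/system
    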

  18. As needed, manually upgrade any custom data services that are not supplied on the product media.

  19. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  20. Install any Sun Cluster 3.2 framework and data-service software patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

  21. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.

  22. (Optional) Reconfigure the private-network address range.

    Perform this step if you want to increase or decrease the size of the IP address range that is used by the private interconnect. The IP address range that you configure must minimally support the number of nodes and private networks in the cluster. See Private Network for more information.

    1. From one node, start the clsetup utility.

      When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

    2. Type the number that corresponds to the option for Change IP Address Range and press the Return key.

      The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

    3. To change either the private-network IP address or the IP address range, type yes and press the Return key.

      The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

    4. Change or accept the private-network IP address.

      • To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

        The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.

      • To change the default private-network IP address, perform the following substeps.

        1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

          The clsetup utility will prompt for the new private-network IP address.

        2. Type the new IP address and press the Return key.

          The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

    5. Change or accept the default private-network IP address range.

      The default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster.

      • To accept the default IP address range, type yes and press the Return key.

        Then skip to the next step.

      • To change the IP address range, perform the following substeps.

        1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

          When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.

        2. Enter the number of nodes and private networks that you expect to configure in the cluster.

          From these numbers, the clsetup utility calculates two proposed netmasks:

          • The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.

          • The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.

        3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.

    6. Type yes in response to the clsetup utility's question about proceeding with the update.

    7. When finished, exit the clsetup utility.

  23. After all nodes in the cluster are upgraded, reboot the upgraded nodes.

    1. Shut down each node.


      phys-schost# shutdown -g0 -y
      
    2. Boot each node into cluster mode.

      • On SPARC based systems, do the following:


        ok boot
        
      • On x86 based systems, do the following:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

Next Steps

Go to How to Verify Upgrade of Sun Cluster 3.2 Software.

Performing a Dual-Partition Upgrade to Sun Cluster 3.2 Software

This section provides the following information to upgrade from a Sun Cluster 3.1 release to Sun Cluster 3.2 software by using the dual-partition upgrade method:

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 8–2 Task Map: Performing a Dual-Partition Upgrade to Sun Cluster 3.2 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. Partition the cluster into two groups of nodes. 

How to Prepare the Cluster for Upgrade (Dual-Partition)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS. 

How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition)

4. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade Sun Cluster 3.2 Software (Dual-Partition)

5. Verify successful completion of upgrade to Sun Cluster 3.2 software. 

How to Verify Upgrade of Sun Cluster 3.2 Software

6. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 Software

7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

Procedure: How to Prepare the Cluster for Upgrade (Dual-Partition)

Perform this procedure to prepare the cluster for a dual-partition upgrade. These procedures will refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition will continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot the nodes into cluster mode to rejoin the nodes from the first partition.


Note –

If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard) or How to Prepare the Cluster for Upgrade (Live Upgrade).


On the Solaris 10 OS, perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.


      phys-schost% scstat
      

      See the scstat(1M) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    3. Check the volume-manager status.

  2. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Become superuser on a node of the cluster.

  4. For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Availability Suite software.

    2. Identify the device ID and the slice that is used by the Availability Suite configuration file.


      phys-schost# /usr/opt/SUNWscm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      phys-schost# scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Availability Suite software.


      phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      
  5. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      phys-schost# scswitch -z -D setname -h node
      
      -z

      Changes mastery.

      -D setname

      Specifies the name of the disk set.

      -h node

      Specifies the name of the node to become primary of the disk set.

    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  6. If you are running the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service with Sun Java System Application Server EE (HADB) software as of version 4.4, disable the HADB resource and shut down the HADB database.

    If you are running a version of Sun Java System Application Server EE (HADB) software before 4.4, you can skip this step.

    When one cluster partition is out of service during upgrade, there are not enough nodes in the active partition to meet HADB membership requirements. Therefore, you must stop the HADB database and disable the HADB resource before you begin to partition the cluster.


    phys-schost# hadbm stop database-name
    phys-schost# scswitch -n -j hadb-resource
    

    For more information, see the hadbm(1M) man page.

  7. If you are upgrading a two-node cluster, skip to Step 16.

    Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine which nodes each partition will contain, but you will interrupt the partitioning process before the upgrade begins. You will then compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.

  8. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  9. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  10. Start the scinstall utility in interactive mode.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  11. Type the number that corresponds to the option for Manage a dual-partition upgrade and press the Return key.


    *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  3
    

    The Manage a Dual-Partition Upgrade Menu is displayed.

  12. Type the number that corresponds to the option for Display and select possible partitioning schemes and press the Return key.

  13. Follow the prompts to perform the following tasks:

    1. Display the possible partitioning schemes for your cluster.

    2. Choose a partitioning scheme.

    3. Choose which partition to upgrade first.


      Note –

      When you are prompted, Do you want to begin the dual-partition upgrade?, stop and do not respond yet, and do not exit the scinstall utility. You will respond to this prompt in Step 18 of this procedure.


  14. Make note of which nodes belong to each partition in the partition scheme.

  15. On another node of the cluster, become superuser.

  16. Ensure that any critical data services can switch over between partitions.

    For a two-node cluster, each node will be the only node in its partition.

    When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.

    1. Display the node list of each resource group that you require to remain in service during the entire upgrade.


      phys-schost# scrgadm -pv -g resourcegroup | grep "Res Group Nodelist"
      
      -p

      Displays configuration information.

      -v

      Displays in verbose mode.

      -g resourcegroup

      Specifies the name of the resource group.

    2. If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.


      phys-schost# scrgadm -a -g resourcegroup -h nodelist
      
      -a

      Adds a new configuration.

      -h

      Specifies a comma-separated list of node names.

  17. Determine your next step.

    • If you are upgrading a two-node cluster, return to Step 8 through Step 13 to designate your partitioning scheme and upgrade order.

      When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 18.

    • If you are upgrading a cluster with three or more nodes, return to the node that is running the interactive scinstall utility.

      Proceed to Step 18.

  18. At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?, type Yes.

    The command verifies that a remote installation method is available.

  19. When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.

    The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.

  20. After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems running the Solaris 9 OS, perform either of the following commands:


      phys-schost# reboot -- -xs
      or
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type  b [file-name] [boot-flags] <ENTER>  to boot with options
      or    i <ENTER>                           to enter boot interpreter
      or    <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -xs
      
    • On x86 based systems running the Solaris 10 OS, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  21. If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to upgrade those nodes.

    During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle RAC before the nodes in the second partition are halted.

    1. Create the scripts that you need to stop applications that are not under RGM control.

      • Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.

      • To stop applications that are running on more than one node in the partition, write the scripts accordingly.

      • Use any name and directory path for your scripts that you prefer.

    2. Ensure that each node in the cluster has its own copy of your scripts.

    3. On each node, modify the following Sun Cluster scripts to call the scripts that you placed on that node.

      • /etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.

      • /etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.

      The Sun Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
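
      For example, to have the post-halt hook call a site script that stops an application that is not under RGM control, you might add lines such as the following to /etc/cluster/ql/cluster_post_halt_apps on each node. This is a sketch only; the script path /opt/scripts/stop-myapp is hypothetical, and your own script names, locations, and stop logic will differ.


      # Stop site applications that are not under RGM control.
      # /opt/scripts/stop-myapp is a hypothetical site-specific script.
      if [ -x /opt/scripts/stop-myapp ]; then
              /opt/scripts/stop-myapp
      fi
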

Next Steps

Upgrade software on each node in the first partition.

ProcedureHow to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition)

Perform this procedure on each node in the cluster to upgrade the Solaris OS. On the Solaris 10 OS, perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.2 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Upgrade Sun Cluster 3.2 Software (Dual-Partition).


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) are completed.

  1. Become superuser on the cluster node to upgrade.

    The node must be a member of the partition that is in noncluster mode.

  2. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  3. Determine whether the following Apache run-control scripts exist and are enabled or disabled:


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.

    • If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.

    • If these scripts do not exist, in Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

    • If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
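
    One quick way to check which of these scripts exist and whether their names begin with an uppercase or lowercase letter is to list them directly. This is a sketch only; adjust the command if your run-control directories differ.


    phys-schost# ls -l /etc/rc0.d/*apache /etc/rc1.d/*apache /etc/rc2.d/*apache \
    /etc/rc3.d/*apache /etc/rcS.d/*apache
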

  4. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    1. For later reference, make a record of all entries that are already commented out.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
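
      For example, an entry for a globally mounted file system might look like the following; the first line shows the original entry and the second line shows the entry commented out. The device and mount-point names here are hypothetical, and your entries will differ.


      /dev/md/dg-schost-1/dsk/d20 /dev/md/dg-schost-1/rdsk/d20 /global/dg-schost-1 ufs 2 no global,logging
      #/dev/md/dg-schost-1/dsk/d20 /dev/md/dg-schost-1/rdsk/d20 /global/dg-schost-1 ufs 2 no global,logging
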

  5. Determine which procedure to follow to upgrade the Solaris OS.

    Volume Manager: Solaris Volume Manager
    Procedure: Any Solaris upgrade method except the Live Upgrade method
    Location of Instructions: Solaris installation documentation

    Volume Manager: VERITAS Volume Manager
    Procedure: “Upgrading VxVM and Solaris”
    Location of Instructions: VERITAS Volume Manager installation documentation


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 or 10 version of VxVM software as part of the Solaris upgrade process.


  6. Upgrade the Solaris software, following the procedure that you selected in Step 5.

    1. When prompted, choose the manual reboot option.

    2. When prompted to reboot, always reboot into noncluster mode.


      Note –

      Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:

      1. Return to this procedure to perform Step 7 and Step 8.

      2. Reboot into noncluster mode in Step 9 to complete Solaris software upgrade.


      Execute the following commands to boot a node into noncluster mode during Solaris upgrade:

      • On SPARC based systems, perform either of the following commands:


        phys-schost# reboot -- -xs
        or
        ok boot -xs
        

        If the instruction says to run the init S command, use the reboot -- -xs command instead.

      • On x86 based systems, perform the following command:


        phys-schost# shutdown -g0 -y -i0
        
        Press any key to continue
        1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

          The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

          For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

          The GRUB boot parameters screen appears similar to the following:


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot                                     |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        3. Add -x to the command to specify that the system boot into noncluster mode.


          [ Minimal BASH-like line editing is supported. For the first word, TAB
          lists possible command completions. Anywhere else TAB lists the possible
          completions of a device/filename. ESC at any time exits. ]
          
          grub edit> kernel /platform/i86pc/multiboot -x
          
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot -x                                  |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        5. Type b to boot the node into noncluster mode.


          Note –

          This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


        If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

  7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

  8. If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.

    To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.


    phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

    Alternatively, you can rename the scripts to be consistent with your normal administration practices.

  9. Reboot the node into noncluster mode.

    • On SPARC based systems, perform the following command.

      Include the double dashes (--) in the command:


      phys-schost# reboot -- -x
      
    • On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -x to the kernel boot command instead of -sx.

  10. If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Make the following changes to the procedure:

    • After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.

      If any of the entries that you uncommented in Step 7 were commented out, make those entries uncommented again.

    • When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.

      • On SPARC based systems, perform the following command:


        phys-schost# reboot -- -rx
        
      • On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -rx to the kernel boot command instead of -sx.


    Note –

    If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command or type Ctrl-D.


    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password
    

  11. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  12. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

Next Steps

Upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Dual-Partition).


Note –

To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.


ProcedureHow to Upgrade Sun Cluster 3.2 Software (Dual-Partition)

Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.2 software. This procedure also upgrades required Sun Java Enterprise System shared components. You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software.

On the Solaris 10 OS, perform all steps from the global zone only.


Tip –

You can perform this procedure on more than one node of the partition at the same time.


Before You Begin

Perform the following tasks:

  1. Become superuser on a node that is a member of the partition that is in noncluster mode.

  2. Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the versions of the installed releases of Java software.


      phys-schost# /usr/j2se/bin/java -version
      phys-schost# /usr/java1.2/bin/java -version
      phys-schost# /usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /usr/java
      phys-schost# ln -s /usr/j2se /usr/java
      
  3. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  4. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  5. Start the installation wizard program.


    phys-schost# ./installer
    
  6. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.


    Note –

    Do not use the installation wizard program to upgrade Sun Cluster software packages.


    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  7. Exit the installation wizard program.

  8. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
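
    For example, on a SPARC based system that runs Solaris 10, you would type the following command:


    phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
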
  9. Start the scinstall utility.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  10. Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  4
    

    The Upgrade Menu is displayed.

  11. Type the number that corresponds to the option for Upgrade Sun Cluster framework on this cluster node and press the Return key.

  12. Follow the menu prompts to upgrade the cluster framework.

    During the Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.

  13. Quit the scinstall utility.

  14. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  15. Upgrade data service packages.

    You must upgrade all data services to the Sun Cluster 3.2 version.


    Note –

    For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.


    1. Start the upgraded interactive scinstall utility.


      phys-schost# /usr/cluster/bin/scinstall
      

      Note –

      Do not use the scinstall utility that is on the installation media to upgrade data service packages.


      The scinstall Main Menu is displayed.

    2. Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.

      The Upgrade Menu is displayed.

    3. Type the number that corresponds to the option for Upgrade Sun Cluster data service agents on this node and press the Return key.

    4. Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.

    5. Press Enter.

      The Upgrade Menu is displayed.

  16. Quit the scinstall utility.

  17. If you have Sun Cluster HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.


    Note –

    If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems.


    As of the Sun Cluster 3.2 release, LOFS is no longer disabled by default during Sun Cluster software installation or upgrade. To disable LOFS, ensure that the /etc/system file contains the following entry:


    exclude:lofs

    This change becomes effective at the next system reboot.
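
    For example, to confirm that the entry is present, you can search the file directly:


    phys-schost# grep lofs /etc/system
    exclude:lofs
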

  18. As needed, manually upgrade any custom data services that are not supplied on the product media.

  19. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  20. Install any Sun Cluster 3.2 framework and data-service software patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.

  21. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.

  22. After all nodes in a partition are upgraded, apply the upgrade changes.

    1. From one node in the partition that you are upgrading, start the interactive scinstall utility.


      phys-schost# /usr/cluster/bin/scinstall
      

      Note –

      Do not use the scinstall command that is located on the installation media. Only use the scinstall command that is located on the cluster node.


      The scinstall Main Menu is displayed.

    2. Type the number that corresponds to the option for Apply dual-partition upgrade changes to the partition and press the Return key.

    3. Follow the prompts to continue each stage of the upgrade processing.

      The command performs the following tasks, depending on which partition the command is run from:

      • First partition - The command halts each node in the second partition, one node at a time. When a node is halted, any services on that node are automatically switched over to a node in the first partition, provided that the node list of the related resource group contains a node in the first partition. After all nodes in the second partition are halted, the nodes in the first partition are booted into cluster mode and take over providing cluster services.

      • Second partition - The command boots the nodes in the second partition into cluster mode, to join the active cluster that was formed by the first partition. After all nodes have rejoined the cluster, the command performs final processing and reports on the status of the upgrade.

    4. Exit the scinstall utility, if it is still running.

    5. If you are finishing upgrade of the first partition, perform the following substeps to prepare the second partition for upgrade.

      Otherwise, if you are finishing upgrade of the second partition, proceed to How to Verify Upgrade of Sun Cluster 3.2 Software.

      1. Boot each node in the second partition into noncluster mode.

        • On SPARC based systems, perform the following command:


          ok boot -x
          
        • On x86 based systems, perform the following commands:

          1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

            The GRUB menu appears similar to the following:


            GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
            +-------------------------------------------------------------------------+
            | Solaris 10 /sol_10_x86                                                  |
            | Solaris failsafe                                                        |
            |                                                                         |
            +-------------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press enter to boot the selected OS, 'e' to edit the
            commands before booting, or 'c' for a command-line.

            For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

          2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

            The GRUB boot parameters screen appears similar to the following:


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot                                     |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          3. Add -x to the command to specify that the system boot into noncluster mode.


            [ Minimal BASH-like line editing is supported. For the first word, TAB
            lists possible command completions. Anywhere else TAB lists the possible
            completions of a device/filename. ESC at any time exits. ]
            
            grub edit> kernel /platform/i86pc/multiboot -x
            
          4. Press Enter to accept the change and return to the boot parameters screen.

            The screen displays the edited command.


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot -x                                  |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          5. Type b to boot the node into noncluster mode.


            Note –

            This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      2. Upgrade the nodes in the second partition.

        To upgrade Solaris software before you perform Sun Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).

        Otherwise, upgrade Sun Cluster software on the second partition. Return to Step 1.

Next Steps

Go to How to Verify Upgrade of Sun Cluster 3.2 Software.

Troubleshooting

If you experience an unrecoverable error during dual-partition upgrade, perform recovery procedures in How to Recover from a Failed Dual-Partition Upgrade.

Performing a Live Upgrade to Sun Cluster 3.2 Software

This section provides the following information to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software by using the live upgrade method:

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 8–3 Task Map: Performing a Live Upgrade to Sun Cluster 3.2 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. Remove the cluster from production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. 

How to Prepare the Cluster for Upgrade (Live Upgrade)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM)software and disk groups and VERITAS File System (VxFS). 

How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade)

4. Verify successful completion of upgrade to Sun Cluster 3.2 software. 

How to Verify Upgrade of Sun Cluster 3.2 Software

5. Enable resources and bring resource groups online. Migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 Software

6. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

ProcedureHow to Prepare the Cluster for Upgrade (Live Upgrade)

Perform this procedure to prepare a cluster for live upgrade.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.


      phys-schost% scstat
      

      See the scstat(1M) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    3. Check the volume-manager status.

  2. If necessary, notify users that cluster services will be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Become superuser on a node of the cluster.

  4. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  5. For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Availability Suite software.

    2. Identify the device ID and the slice that is used by the Availability Suite configuration file.


      phys-schost# /usr/opt/SUNWscm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      phys-schost# scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Availability Suite software.


      phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      
  6. Ensure that all shared data is backed up.

  7. Ensure that each system disk is backed up.

Next Steps

Perform a live upgrade of the Solaris OS, Sun Cluster 3.2 software, and other software. Go to How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade).

ProcedureHow to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade)

Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method. The Sun Cluster live upgrade method uses the Solaris Live Upgrade feature. For information about live upgrade of the Solaris OS, refer to the documentation for the Solaris version that you are using:


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.


Perform this procedure on each node in the cluster.


Tip –

You can use the cconsole utility to perform this procedure on all nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Live Upgrade) are completed.

  1. Ensure that a supported version of Solaris Live Upgrade software is installed on each node.

    If your operating system is already upgraded to Solaris 9 9/05 software or Solaris 10 11/06 software, you have the correct Solaris Live Upgrade software. If your operating system is an older version, perform the following steps:

    1. Insert the Solaris 9 9/05 software or Solaris 10 11/06 software media.

    2. Become superuser.

    3. Install the SUNWluu and SUNWlur packages.


      phys-schost# pkgadd -d path SUNWluu SUNWlur
      
      path

      Specifies the absolute path to the software packages.

    4. Verify that the packages have been installed.


      phys-schost# pkgchk -v SUNWluu SUNWlur
      
  2. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software. One way to capture this listing is shown in the sketch that follows Step e.

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      phys-schost# scswitch -z -D setname -h node
      
      -z

      Changes mastery.

      -D setname

      Specifies the name of the disk set.

      -h node

      Specifies the name of the node to become primary of the disk set.

    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.
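
    For Step b, one way to capture the list of mediators is with the metaset command, which displays the disk set configuration, including any mediator hosts. The following is a sketch only; the disk set name and host names are hypothetical, and the output is abbreviated.


    phys-schost# metaset -s setname
    …
    Mediator Host(s)               Aliases
    phys-schost-1
    phys-schost-2
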

  3. Build an inactive boot environment (BE).


    phys-schost# lucreate options -n BE-name
    
    -n BE-name

    Specifies the name of the boot environment that is to be upgraded.

    For information about important options to the lucreate command, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
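
    For example, the following sketch creates a new BE named sc32 on a spare slice and names the current BE sc31u2, as in Example 8–1 later in this section. The disk device name is hypothetical; your device will differ.


    phys-schost# lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
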

  4. If necessary, upgrade the Solaris OS software in your inactive BE.

    If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3.2 software, this step is optional.

    • If you use Solaris Volume Manager software, run the following command:


      phys-schost# luupgrade -u -n BE-name -s os-image-path
      
      -u

      Upgrades an operating system image on a boot environment.

      -s os-image-path

      Specifies the path name of a directory that contains an operating system image.

    • If you use VERITAS Volume Manager, follow live upgrade procedures in your VxVM installation documentation.

  5. Mount your inactive BE by using the lumount command.


    phys-schost# lumount -n BE-name -m BE-mount-point
    
    -m BE-mount-point

    Specifies the mount point of BE-name.

    For more information, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.

  6. Ensure that the /BE-mount-point/usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.

    1. Determine what directory the /BE-mount-point/usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /BE-mount-point/usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /BE-mount-point/usr/java -> /BE-mount-point/usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the versions of the installed releases of Java software.


      phys-schost# /BE-mount-point/usr/j2se/bin/java -version
      phys-schost# /BE-mount-point/usr/java1.2/bin/java -version
      phys-schost# /BE-mount-point/usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /BE-mount-point/usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /BE-mount-point/usr/java
      phys-schost# cd /BE-mount-point/usr
      phys-schost# ln -s j2se java
      
  7. Apply any necessary Solaris patches.

    You might need to patch your Solaris software to use the Live Upgrade feature. For details about the patches that the Solaris OS requires and where to download them, see Managing Packages and Patches With Solaris Live Upgrade in Solaris 9 9/04 Installation Guide or Upgrading a System With Packages or Patches in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  8. If necessary and if your version of the VERITAS Volume Manager (VxVM) software supports it, upgrade your VxVM software.

    Refer to your VxVM software documentation to determine whether your version of VxVM can use the live upgrade method.

  9. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  10. If your cluster hosts software applications that require an upgrade and that you can upgrade by using the live upgrade method, upgrade those software applications.

    If your cluster hosts software applications that cannot use the live upgrade method, you will upgrade them later in Step 25.

  11. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.

  12. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  13. Start the installation wizard program to direct output to a state file.

    Specify the name to give the state file and the absolute or relative path where the file should be created.

    • To create a state file by using the graphical interface, use the following command:


      phys-schost# ./installer -no -saveState statefile
      
    • To create a state file by using the text-based interface, use the following command:


      phys-schost# ./installer -no -nodisplay -saveState statefile
      

    See Generating the Initial State File in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  14. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.

    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  15. Exit the installation wizard program.

  16. Run the installer program in silent mode and direct the installation to the alternate boot environment.


    Note –

    The installer program must be the same version that you used to create the state file.



    phys-schost# ./installer -nodisplay -noconsole -state statefile -altroot BE-mount-point
    

    See To Run the Installer in Silent Mode in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  17. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  18. Upgrade your Sun Cluster software by using the scinstall command.


    phys-schost# ./scinstall -u update -R BE-mount-point
    
    -u update

    Specifies that you are performing an upgrade of Sun Cluster software.

    -R BE-mount-point

    Specifies the mount point for your alternate boot environment.

    For more information, see the scinstall(1M) man page.

  19. Upgrade your data services by using the scinstall command.


    phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all  \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
    
  20. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  21. Unmount the inactive BE.


    phys-schost# luumount -n BE-name
    
  22. Activate the upgraded inactive BE.


    phys-schost# luactivate BE-name
    
    BE-name

    The name of the alternate BE that you built in Step 3.

  23. Repeat Step 1 through Step 22 for each node in the cluster.


    Note –

    Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.


  24. Reboot all nodes.


    phys-schost# shutdown -y -g0 -i6
    

    Note –

    Do not use the reboot or halt command. These commands do not activate a new BE. Use only shutdown or init to reboot into a new BE.


    The nodes reboot into cluster mode using the new, upgraded BE.

  25. (Optional) If your cluster hosts software applications that require upgrade for which you cannot use the live upgrade method, perform the following steps.


    Note –

    Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete.


    1. Shut down the node.


      phys-schost# shutdown -y -g0 -i0
      
    2. Boot each node into noncluster mode.

      • On SPARC based systems, perform the following command:


        ok boot -x
        
      • On x86 based systems, perform the following commands:

        1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

          The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

          For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

          The GRUB boot parameters screen appears similar to the following:


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot                                     |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        3. Add -x to the command to specify that the system boot into noncluster mode.


          [ Minimal BASH-like line editing is supported. For the first word, TAB
          lists possible command completions. Anywhere else TAB lists the possible
          completions of a device/filename. ESC at any time exits. ]
          
          grub edit> kernel /platform/i86pc/multiboot -x
          
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot -x                                  |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        5. Type b to boot the node into noncluster mode.


          Note –

          This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


        If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

    3. Upgrade each software application that requires an upgrade.

      Remember to boot into noncluster mode if you are directed to reboot, until all applications have been upgraded.

    4. Boot each node into cluster mode.

      • On SPARC based systems, perform the following command:


        ok boot
        
      • On x86 based systems, perform the following commands:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

Example 8–1 Live Upgrade to Sun Cluster 3.2 Software

This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Solaris 10 OS, Sun Cluster 3.2 framework, and all Sun Cluster data services that support the live upgrade method. In this example, sc31u2 is the original boot environment (BE). The new BE that is upgraded is named sc32 and uses the mount point /sc32. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Solaris 10 OS. The Java ES installer state file is named sc32state.

The following commands typically produce copious output. This output is shown only where necessary for clarity.


phys-schost# lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
…
lucreate: Creation of Boot Environment sc32 successful.

phys-schost# luupgrade -u -n sc32 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc32 is complete.
Apply patches

phys-schost# lumount sc32 /sc32
phys-schost# ls -l /sc32/usr/java
lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /sc32/usr/java -> /sc32/usr/j2se/

Insert the Sun Java Availability Suite DVD-ROM.
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc32state
phys-schost# ./installer -nodisplay -noconsole -state sc32state -altroot /sc32
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc32
phys-schost# /sc32/usr/cluster/bin/scinstall -u update -s all -d /cdrom/cdrom0 -R /sc32
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc32
phys-schost# luactivate sc32
Activation of boot environment sc32 successful.
Upgrade all other nodes

Boot all nodes
phys-schost# shutdown -y -g0 -i6
ok boot

At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.


Troubleshooting

DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, but the device name does exist, specify the file system by its physical device name when you create the BE. Then change the vfstab entry on the alternate BE to use the DID device name instead.
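
The following is a minimal sketch of this workaround, assuming a hypothetical file system /global/oracle whose DID device is /dev/did/dsk/d5s3 and whose underlying physical device is /dev/dsk/c1t1d0s3. Your device names, BE name, and mount points will differ.


phys-schost# lucreate -m /:/dev/dsk/c0t4d0s0:ufs \
-m /global/oracle:/dev/dsk/c1t1d0s3:ufs -n sc32
phys-schost# lumount -n sc32 /sc32
    (In the /sc32/etc/vfstab file, change the device columns for /global/oracle
    back to the DID device names /dev/did/dsk/d5s3 and /dev/did/rdsk/d5s3.)
phys-schost# luumount -n sc32
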

Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.

New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Failure Recovery: Falling Back to the Original Boot Environment (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Chapter 10, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:

mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy

WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag on globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default

Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96. [ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group.

Next Steps

Go to How to Verify Upgrade of Sun Cluster 3.2 Software.

See Also

You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.

You can also maintain the inactive BE. For information about how to maintain the environment, see Chapter 37, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 11, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Completing the Upgrade

This section provides the following information to complete all Sun Cluster 3.2 software upgrade methods:

ProcedureHow to Verify Upgrade of Sun Cluster 3.2 Software

Perform this procedure to verify that the cluster is successfully upgraded to Sun Cluster 3.2 software. On the Solaris 10 OS, perform all steps from the global zone only.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


Before You Begin

Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

  1. On each node, become superuser.

  2. On each upgraded node, view the installed levels of Sun Cluster software.


    phys-schost# clnode show-rev -v
    

    The first line of output states which version of Sun Cluster software the node is running. This version should match the version that you just upgraded to.

  3. From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).


    phys-schost# clnode status
    

    See the clnode(1CL) man page for more information about displaying cluster status.

  4. SPARC: If you upgraded from Solaris 8 to Solaris 9 software, verify the consistency of the storage configuration.

    1. On each node, run the following command to verify the consistency of the storage configuration.


      phys-schost# cldevice check
      

      Caution –

      Do not proceed to Step b until your configuration passes this consistency check. Failure to pass this check might result in errors in device identification and cause data corruption.


      The following table lists the possible output from the cldevice check command and the action you must take, if any.

      Example Message 

      Action 

      device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced

      Go to Recovering From an Incomplete Upgrade and perform the appropriate repair procedure.

      device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run cldevice repair to update

      None. You update this device ID in Step b.

      No output message 

      None. 

      See the cldevice(1CL) man page for more information.

    2. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.


      phys-schost# cldevice repair
      
    3. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful.


      phys-schost# cldevice check
      
      • If the cldevice command displays a message, return to Step a to make further corrections to the storage configuration or the storage database.

      • If the cldevice command displays no messages, the device-ID migration is successful. When device-ID migration is verified on all cluster nodes, proceed to How to Finish Upgrade to Sun Cluster 3.2 Software.


Example 8–2 Verifying Upgrade to Sun Cluster 3.2 Software

The following example shows the commands used to verify upgrade of a two-node cluster to Sun Cluster 3.2 software. The cluster node names are phys-schost-1 and phys-schost-2.


phys-schost# clnode show-rev -v
3.2
…
phys-schost# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                          Status
---------                                          ------
phys-schost-1                                      Online
phys-schost-2                                      Online

Next Steps

Go to How to Finish Upgrade to Sun Cluster 3.2 Software.

ProcedureHow to Finish Upgrade to Sun Cluster 3.2 Software

Perform this procedure to finish the Sun Cluster upgrade. On the Solaris 10 OS, perform all steps from the global zone only. First, reregister all resource types that received a new version from the upgrade. Second, modify eligible resources to use the new version of their resource type. Third, re-enable resources. Finally, bring resource groups back online.

Before You Begin

Ensure that all steps in How to Verify Upgrade of Sun Cluster 3.2 Software are completed.

  1. Copy the security files for the common agent container to all cluster nodes.

    This step ensures that security files for the common agent container are identical on all cluster nodes and that the copied files retain the correct file permissions.

    1. On each node, stop the Sun Java Web Console agent.


      phys-schost# /usr/sbin/smcwebserver stop
      
    2. On each node, stop the security file agent.


      phys-schost# /usr/sbin/cacaoadm stop
      
    3. On one node, change to the /etc/cacao/instances/default/ directory.


      phys-schost-1# cd /etc/cacao/instances/default/
      
    4. Create a tar file of the /etc/cacao/instances/default/security/ directory.


      phys-schost-1# tar cf /tmp/SECURITY.tar security
      
    5. Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.

    6. On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.

      Any security files that already exist in the /etc/cacao/instances/default/ directory are overwritten.


      phys-schost-2# cd /etc/cacao/instances/default/
      phys-schost-2# tar xf /tmp/SECURITY.tar
      
    7. Delete the /tmp/SECURITY.tar file from each node in the cluster.

      You must delete each copy of the tar file to avoid security risks.


      phys-schost-1# rm /tmp/SECURITY.tar
      phys-schost-2# rm /tmp/SECURITY.tar
      
    8. On each node, start the security file agent.


      phys-schost# /usr/sbin/cacaoadm start
      
    9. On each node, start the Sun Java Web Console agent.


      phys-schost# /usr/sbin/smcwebserver start
      
  2. If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.
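
    For example, registration generally uses the clresourcetype command. In the following sketch, SUNW.example is a placeholder; use the resource-type name that your data-service documentation specifies.


    phys-schost# clresourcetype register SUNW.example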

  3. If you upgraded Sun Cluster HA for SAP liveCache from the Sun Cluster 3.0 or 3.1 version to the Sun Cluster 3.2 version, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.

    1. Become superuser on a node that will host the liveCache resource.

    2. Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC_NAME/db/sap/ directory.

      Overwrite the lccluster file that already exists from the previous configuration of the data service.
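
      For example (a sketch; LC_NAME is the placeholder for your liveCache database name, as used in the path above):


      phys-schost-1# cp /opt/SUNWsclc/livecache/bin/lccluster /sapdb/LC_NAME/db/sap/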

    3. Configure this /sapdb/LC_NAME/db/sap/lccluster file as documented in How to Register and Configure Sun Cluster HA for SAP liveCache in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.

  4. If you upgraded the Solaris OS and your configuration uses dual-string mediators for Solaris Volume Manager software, restore the mediator configurations.

    1. Determine which node has ownership of a disk set to which you will add the mediator hosts.


      phys-schost# metaset -s setname
      
      -s setname

      Specifies the disk set name.

    2. On the node that masters or will master the disk set, become superuser.

    3. If no node has ownership, take ownership of the disk set.


      phys-schost# cldevicegroup switch -n node devicegroup
      
      node

      Specifies the name of the node to become primary of the disk set.

      devicegroup

      Specifies the name of the disk set.

    4. Re-create the mediators.


      phys-schost# metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the disk set.

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the disk set.

    5. Repeat these steps for each disk set in the cluster that uses mediators.

  5. If you upgraded VxVM, upgrade all disk groups.

    1. Bring online and take ownership of a disk group to upgrade.


      phys-schost# cldevicegroup switch -n node devicegroup
      
    2. Run the following command to upgrade a disk group to the highest version supported by the VxVM release you installed.


      phys-schost# vxdg upgrade dgname
      

      See your VxVM administration documentation for more information about upgrading disk groups.

    3. Repeat for each remaining VxVM disk group in the cluster.

  6. Migrate resources to new resource type versions.

    You must migrate all resources to the Sun Cluster 3.2 resource-type version.


    Note –

    For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and re-create it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.


    See Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the clsetup utility. The process involves performing the following tasks; a brief command-line sketch follows the list:

    • Registering the new resource type.

    • Migrating the eligible resource to the new version of its resource type.

    • Modifying the extension properties of the resource type as specified in Sun Cluster 3.2 Release Notes for Solaris OS.


      Note –

      The Sun Cluster 3.2 release introduces new default values for some extension properties, such as the Retry_interval property. These changes affect the behavior of any existing resource that uses the default values of such properties. If you require the previous default value for a resource, modify the migrated resource to set the property to the previous default value.
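
    The following command-line sketch summarizes the registration and migration tasks. The names resource-type, new-version, and resource are placeholders for the values from your configuration; see the referenced procedures for the complete steps, including any extension-property changes.


    phys-schost# clresourcetype register resource-type
    phys-schost# clresource set -p Type_version=new-version resource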


  7. If your cluster runs the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service and you shut down the HADB database before you began a dual-partition upgrade, re-enable the resource and start the database.


    phys-schost# clresource enable hadb-resource
    phys-schost# hadbm start database-name
    

    For more information, see the hadbm(1M) man page.

  8. If you upgraded to the Solaris 10 OS and the Apache httpd.conf file is located on a cluster file system, ensure that the HTTPD entry in the Apache control script still points to that location.

    1. View the HTTPD entry in the /usr/apache/bin/apachectl file.

      The following example shows the httpd.conf file located on the /global cluster file system.


      phys-schost# cat /usr/apache/bin/apachectl | grep HTTPD=/usr
      HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
    2. If the file does not show the correct HTTPD entry, update the file.


      phys-schost# vi /usr/apache/bin/apachectl
      #HTTPD=/usr/apache/bin/httpd
      HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
      
  9. From any node, start the clsetup utility.


    phys-schost# clsetup
    

    The clsetup Main Menu is displayed.

  10. Re-enable all disabled resources.

    1. Type the number that corresponds to the option for Resource groups and press the Return key.

      The Resource Group Menu is displayed.

    2. Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.

    3. Choose a resource to enable and follow the prompts.

    4. Repeat Step c for each disabled resource.

    5. When all resources are re-enabled, type q to return to the Resource Group Menu.
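
    Equivalently, you can re-enable a resource from the command line instead of from the clsetup menus. In the following sketch, resource is a placeholder for the resource name.


    phys-schost# clresource enable resource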

  11. Bring each resource group back online.

    This step includes bringing online any resource groups in non-global zones.

    1. Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.

    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.
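
    Equivalently, from the command line, you can bring a resource group to the managed state and online in one step. In the following sketch, resourcegroup is a placeholder for the resource-group name.


    phys-schost# clresourcegroup online -M resourcegroup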

  12. When all resource groups are back online, exit the clsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

  13. If, before the upgrade, you enabled automatic node reboot when all monitored disk paths fail, ensure that the feature is still enabled.

    Also perform this task if you want to configure automatic reboot for the first time.

    1. Determine whether the automatic reboot feature is enabled or disabled.


      phys-schost# clnode show
      
      • If the reboot_on_path_failure property is set to enabled, no further action is necessary.

      • If the reboot_on_path_failure property is set to disabled, proceed to the next step to re-enable the property.

    2. Enable the automatic reboot feature.


      phys-schost# clnode set -p reboot_on_path_failure=enabled
      
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Specifies that the node will reboot if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.

    3. Verify that automatic reboot on disk-path failure is enabled.


      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  14. (Optional) Capture the disk partitioning information for future reference.


    phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename
    

    Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.

  15. (Optional) Make a backup of your cluster configuration.

    An archived backup of your cluster configuration facilitates easier recovery of the cluster configuration.

    For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.
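
    For example, one way to capture the configuration in XML format (a sketch; the file name is arbitrary, and the cluster export subcommand is assumed from the referenced procedure). Copy the resulting file to a location outside the cluster.


    phys-schost# cluster export -o /var/tmp/cluster-config.xml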

Troubleshooting

Resource-type migration failure - Normally, you migrate resources to a new resource type while the resource is offline. However, some resources need to be online for a resource-type migration to succeed. If resource-type migration fails for this reason, error messages similar to the following are displayed:

phys-schost - Resource depends on a SUNW.HAStoragePlus type resource that is not online anywhere.
(C189917) VALIDATE on resource nfsrs, resource group rg, exited with non-zero exit status.
(C720144) Validation of resource nfsrs in resource group rg on node phys-schost failed.

If resource-type migration fails because the resource is offline, use the clsetup utility to re-enable the resource and then bring its related resource group online. Then repeat migration procedures for the resource.
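
For example, a command-line equivalent that uses the resource and resource-group names from the sample messages above (substitute your own names):

phys-schost# clresource enable nfsrs
phys-schost# clresourcegroup online rg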

Java binaries location change - If the location of the Java binaries changed during the upgrade of shared components, you might see error messages similar to the following when you attempt to run the cacaoadm start or smcwebserver start commands:

# /opt/SUNWcacao/bin/cacaoadm start
No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Jan 3 17:10:26 ppups3 cacao: No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Cannot locate all the dependencies

# smcwebserver start
/usr/sbin/smcwebserver: /usr/jdk/jdk1.5.0_04/bin/java: not found

These errors are generated because the start commands cannot locate the current location of the Java binaries. The JAVA_HOME property still points to the directory where the previous version of Java was located, but that previous version was removed during upgrade.

To correct this problem, change the setting of JAVA_HOME in the following configuration files to use the current Java directory:

/etc/webconsole/console/config.properties
/etc/opt/SUNWcacao/cacao.properties
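
For example, a minimal sketch of locating the current Java installation and then editing both files (the JDK path pattern is an assumption; verify the actual directory and the property name used in each file before you edit it):

phys-schost# ls -d /usr/jdk/jdk1.5.0_*
phys-schost# vi /etc/webconsole/console/config.properties
phys-schost# vi /etc/opt/SUNWcacao/cacao.properties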

Next Steps

If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center.

To install or complete upgrade of Sun Cluster Geographic Edition 3.2 software, see Sun Cluster Geographic Edition Installation Guide.

Otherwise, the cluster upgrade is complete.

Recovering From an Incomplete Upgrade

This section provides the following information to recover from certain kinds of incomplete upgrades:

ProcedureHow to Recover from a Failed Dual-Partition Upgrade

If you experience an unrecoverable error during upgrade, perform this procedure to back out of the upgrade.


Note –

You cannot restart a dual-partition upgrade after the upgrade has experienced an unrecoverable error.


  1. Become superuser on each node of the cluster.

  2. Boot each node into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  3. On each node, run the upgrade recovery script from the installation media.

    If the node successfully upgraded to Sun Cluster 3.2 software, you can alternatively run the scinstall command from the /usr/cluster/bin directory.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    phys-schost# ./scinstall -u recover
    
    -u

    Specifies upgrade.

    recover

    Restores the /etc/vfstab file and the Cluster Configuration Repository (CCR) database to their original state before the start of the dual-partition upgrade.

    The recovery process leaves the cluster nodes in noncluster mode. Do not attempt to reboot the nodes into cluster mode.

    For more information, see the scinstall(1M) man page.

  4. Perform either of the following tasks.

    • Restore the old software from backup to return the cluster to its original state.

    • Continue to upgrade software on the cluster by using the standard upgrade method.

      This method requires that all cluster nodes remain in noncluster mode during the upgrade. See the task map for standard upgrade, Table 8–1. You can resume the upgrade at the last task or step in the standard upgrade procedures that you successfully completed before the dual-partition upgrade failed.

ProcedureSPARC: How to Recover From a Partially Completed Dual-Partition Upgrade

Perform this procedure if a dual-partition upgrade fails and the state of the cluster meets all of the following criteria:

You can also perform this procedure if the upgrade has succeeded on the first partition but you want to back out of the upgrade.


Note –

Do not perform this procedure after dual-partition upgrade processes have begun on the second partition. Instead, perform How to Recover from a Failed Dual-Partition Upgrade.


Before You Begin

Before you begin, ensure that all second-partition nodes are halted. First-partition nodes can be either halted or running in noncluster mode.

Perform all steps as superuser.

  1. Boot each node in the second partition into noncluster mode.


    ok boot -x
    
  2. On each node in the second partition, run the scinstall -u recover command.


    # /usr/cluster/bin/scinstall -u recover
    

    The command restores the original CCR information, restores the original /etc/vfstab file, and eliminates modifications for startup.

  3. Boot each node of the second partition into cluster mode.


    # shutdown -g0 -y -i6
    

    When the nodes of the second partition come up, the second partition resumes supporting cluster data services while running the old software with the original configuration.

  4. Restore the original software and configuration data from backup media to the nodes in the first partition.

  5. Boot each node in the first partition into cluster mode.


    # shutdown -g0 -y -i6
    

    The nodes rejoin the cluster.

Procedurex86: How to Recover From a Partially Completed Dual-Partition Upgrade

Perform this procedure if a dual-partition upgrade fails and the state of the cluster meets all of the following criteria:

You can also perform this procedure if the upgrade has succeeded on the first partition but you want to back out of the upgrade.


Note –

Do not perform this procedure after dual-partition upgrade processes have begun on the second partition. Instead, perform How to Recover from a Failed Dual-Partition Upgrade.


Before You Begin

Before you begin, ensure that all second-partition nodes are halted. First-partition nodes can be either halted or running in noncluster mode.

Perform all steps as superuser.

  1. Boot each node in the second partition into noncluster mode by completing the following steps.

  2. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

    The GRUB menu appears similar to the following:


    GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
    +-------------------------------------------------------------------------+
    | Solaris 10 /sol_10_x86                                                  |
    | Solaris failsafe                                                        |
    |                                                                         |
    +-------------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted.
    Press enter to boot the selected OS, 'e' to edit the
    commands before booting, or 'c' for a command-line.

    For more information about GRUB-based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

  3. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

    The GRUB boot parameters screen appears similar to the following:


    GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
    +----------------------------------------------------------------------+
    | root (hd0,0,a)                                                       |
    | kernel /platform/i86pc/multiboot                                     |
    | module /platform/i86pc/boot_archive                                  |
    +----------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted.
    Press 'b' to boot, 'e' to edit the selected command in the
    boot sequence, 'c' for a command-line, 'o' to open a new line
    after ('O' for before) the selected line, 'd' to remove the
    selected line, or escape to go back to the main menu.
  4. Add the -x option to the command to specify that the system boot into noncluster mode.


    Minimal BASH-like line editing is supported.
    For the first word, TAB lists possible command completions.
    Anywhere else TAB lists the possible completions of a device/filename.
    ESC at any time exits.

    grub edit> kernel /platform/i86pc/multiboot -x
    
  5. Press Enter to accept the change and return to the boot parameters screen.

    The screen displays the edited command.


    GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
    +----------------------------------------------------------------------+
    | root (hd0,0,a)                                                       |
    | kernel /platform/i86pc/multiboot -x                                  |
    | module /platform/i86pc/boot_archive                                  |
    +----------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted.
    Press 'b' to boot, 'e' to edit the selected command in the
    boot sequence, 'c' for a command-line, 'o' to open a new line
    after ('O' for before) the selected line, 'd' to remove the
    selected line, or escape to go back to the main menu.
  6. Type b to boot the node into noncluster mode.


    Note –

    This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  7. On each node in the second partition, run the scinstall -u recover command.


    # /usr/cluster/bin/scinstall -u recover
    

    The command restores the original CCR information, restores the original /etc/vfstab file, and eliminates modifications for startup.

  8. Boot each node of the second partition into cluster mode.


    # shutdown -g0 -y -i6
    

    When the nodes of the second partition come up, the second partition resumes supporting cluster data services while running the old software with the original configuration.

  9. Restore the original software and configuration data from backup media to the nodes in the first partition.

  10. Boot each node in the first partition into cluster mode.


    # shutdown -g0 -y -i6
    

    The nodes rejoin the cluster.

Recovering From Storage Configuration Changes During Upgrade

This section provides the following repair procedures to follow if changes were inadvertently made to the storage configuration during upgrade:

ProcedureHow to Handle Storage Reconfiguration During an Upgrade

Any changes to the storage topology, including running Sun Cluster commands, should be completed before you upgrade the cluster to Solaris 9 or Solaris 10 software. If, however, changes were made to the storage topology during the upgrade, perform the following procedure. This procedure ensures that the new storage configuration is correct and that existing storage that was not reconfigured is not mistakenly altered.


Note –

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.


Before You Begin

Ensure that the storage topology is correct. Check whether the devices that were flagged as possibly being replaced map to devices that actually were replaced. If the devices were not replaced, check for and correct possible accidental configuration changes, such as incorrect cabling.
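
For example, one way to review the current mappings between DID instances and physical device paths on a node (a sketch; compare the output against your pre-upgrade records):

phys-schost# cldevice list -v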

  1. On a node that is attached to the unverified device, become superuser.

  2. Manually update the unverified device.


    phys-schost# cldevice repair device
    

    See the cldevice(1CL) man page for more information.

  3. Update the DID driver.


    phys-schost# scdidadm -ui
    phys-schost# scdidadm -r
    
    -u

    Loads the device-ID configuration table into the kernel.

    -i

    Initializes the DID driver.

    -r

    Reconfigures the database.

  4. Repeat Step 2 through Step 3 on all other nodes that are attached to the unverified device.

Next Steps

Return to the remaining upgrade tasks. Go to Step 4 in How to Upgrade Sun Cluster 3.2 Software (Standard).

ProcedureHow to Resolve Mistaken Storage Changes During an Upgrade

If accidental changes are made to the storage cabling during the upgrade, perform the following procedure to return the storage configuration to the correct state.


Note –

This procedure assumes that no physical storage was actually changed. If physical or logical storage devices were changed or replaced, instead follow the procedures in How to Handle Storage Reconfiguration During an Upgrade.


Before You Begin

Return the storage topology to its original configuration. Check the configuration of the devices that were flagged as possibly being replaced, including the cabling.

  1. On each node of the cluster, become superuser.

  2. Update the DID driver on each node of the cluster.


    phys-schost# scdidadm -ui
    phys-schost# scdidadm -r
    
    -u

    Loads the device–ID configuration table into the kernel.

    -i

    Initializes the DID driver.

    -r

    Reconfigures the database.

    See the scdidadm(1M) man page for more information.

  3. If the scdidadm command returned any error messages in Step 2, make further modifications as needed to correct the storage configuration, then repeat Step 2.

Next Steps

Return to the remaining upgrade tasks. Go to Step 4 in How to Upgrade Sun Cluster 3.2 Software (Standard).