Sun Cluster Upgrade Guide for Solaris OS

Chapter 3 Performing a Dual-Partition Upgrade to Sun Cluster 3.2 11/09 Software

This chapter provides the following information to upgrade a multiple-node cluster from a Sun Cluster 3.1 8/05 or 3.2 release to Sun Cluster 3.2 11/09 software by using the dual-partition upgrade method:

Performing a Dual-Partition Upgrade of a Cluster

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 8/05 or 3.2 software to Sun Cluster 3.2 11/09 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS to a new marketing release, such as from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 3–1 Task Map: Performing a Dual-Partition Upgrade to Sun Cluster 3.2 11/09 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. If a quorum server is used, upgrade the Quorum Server software. 

How to Upgrade Quorum Server Software

3. If Sun Cluster Geographic Edition software is installed, uninstall it. Partition the cluster into two groups of nodes. 

How to Prepare the Cluster for Upgrade (Dual-Partition)

4. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade Veritas Volume Manager (VxVM) and Veritas File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS. 

How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition)

5. Upgrade to Sun Cluster 3.2 11/09 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade Sun Cluster 3.2 11/09 Software (Dual-Partition)

6. Use the scversions command to commit the cluster to the upgrade.

How to Commit the Upgraded Cluster to Sun Cluster 3.2 11/09 Software

7. Verify successful completion of upgrade to Sun Cluster 3.2 11/09 software. 

How to Verify Upgrade of Sun Cluster 3.2 11/09 Software

8. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 11/09 Software

9. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

How to Upgrade Quorum Server Software

If the cluster uses a quorum server, upgrade the Quorum Server software on the quorum server before you upgrade the cluster.


Note –

If more than one cluster uses the quorum server, perform these steps for each of those clusters.


Perform all steps as superuser on the cluster and on the quorum server.

  1. If the cluster has two nodes and the quorum server is the cluster's only quorum device, temporarily add a second quorum device.

    See Adding a Quorum Device in Sun Cluster System Administration Guide for Solaris OS.

    If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.2 11/09 version of Quorum Server software.

  2. Unconfigure the quorum server from each cluster that uses the quorum server.


    phys-schost# clquorum remove quorumserver
    
  3. From the quorum server to upgrade, verify that the quorum server no longer serves any cluster.


    quorumserver# clquorumserver show +
    

    If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.


    Note –

    If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Sun Cluster System Administration Guide for Solaris OS.


  4. From the quorum server to upgrade, halt all quorum server instances.


    quorumserver# clquorumserver stop +
    
  5. Uninstall the Quorum Server software from the quorum server to upgrade.

    1. Navigate to the directory where the uninstaller is located.


      quorumserver# cd /var/sadm/prod/SUNWentsysver
      
      ver

      The version of Java Enterprise System that is installed on your system.

    2. Start the uninstallation wizard.


      quorumserver# ./uninstall
      
    3. Follow instructions on the screen to uninstall the Quorum Server software from the quorum-server host computer.

      After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.

    4. (Optional) Clean up or remove the quorum server directories.

      By default, this directory is /var/scqsd.
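
      For example, if the quorum server used only the default directory, you might remove it as follows. This command is a sketch; first verify that no other service uses the directory.


      quorumserver# rm -rf /var/scqsd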

  6. Install the Sun Cluster 3.2 11/09 Quorum Server software, reconfigure the quorum server, and start the quorum server daemon.

    Follow the steps in How to Install and Configure Quorum Server Software in Sun Cluster Software Installation Guide for Solaris OS for installing the Quorum Server software.

  7. From a cluster node, configure the upgraded quorum server as a quorum device.

    Follow the steps in How to Configure Quorum Devices in Sun Cluster Software Installation Guide for Solaris OS.

  8. If you configured a temporary quorum device, unconfigure it.


    phys-schost# clquorum remove tempquorum
    

How to Prepare the Cluster for Upgrade (Dual-Partition)

Perform this procedure to prepare a multiple-node cluster for a dual-partition upgrade. These procedures will refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition will continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot the nodes into cluster mode to rejoin the nodes from the first partition.


Note –

If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard) or How to Prepare the Cluster for Upgrade (Live Upgrade).


Perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost% scstat
        
      • On Sun Cluster 3.2 software, use the following command:


        phys-schost% cluster status
        

      See the scstat(1M) or cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
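
      For example, a hypothetical search for problem entries might look like the following:


      phys-schost# egrep -i "error|warning" /var/adm/messages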

    3. Check the volume-manager status.
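
      For example, on Solaris Volume Manager you might check disk sets with the metastat command, and on VxVM with the vxprint command (the disk set name is hypothetical):


      phys-schost# metastat -s setname
      phys-schost# vxprint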

  2. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Become superuser.

  4. Ensure that the RG_system property of all resource groups in the cluster is set to FALSE.

    A setting of RG_system=TRUE would restrict certain operations that the dual-partition software must perform.

    1. On each node, determine whether any resource groups are set to RG_system=TRUE.


      phys-schost# clresourcegroup show -p RG_system
      

      Make note of which resource groups to change. Save this list to use when you restore the setting after upgrade is completed.
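
      For example, you might save the output to a file for reference after the upgrade (the file name is hypothetical):


      phys-schost# clresourcegroup show -p RG_system > /var/tmp/rg-system-settings.txt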

    2. For each resource group that is set to RG_system=TRUE, change the setting to FALSE.


      phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
      
  5. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  6. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Sun Cluster Software Installation Guide for Solaris OS for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Sun Cluster Software Installation Guide for Solaris OS.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 11/09 Software.
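
      For example, the metaset output for each disk set includes any configured mediator hosts:


      phys-schost# metaset -s setname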

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scswitch -z -D setname -h node
        
        -z

        Changes mastery.

        -D devicegroup

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# cldevicegroup switch -n node devicegroup
        
    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.
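
      As a sketch, using the Sun Cluster 3.2 commands and assuming hypothetical disk sets set1 and set2 whose mediator hosts are phys-schost-1 and phys-schost-2, the repeated steps might look like the following:


      phys-schost# for s in set1 set2; do
      > cldevicegroup switch -n phys-schost-1 $s
      > metaset -s $s -d -m phys-schost-1 phys-schost-2
      > done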

  7. If you are upgrading a two-node cluster, skip to Step 17.

    Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine which nodes each partition will contain, but interrupt the partitioning process. You will then compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.

  8. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  9. Become superuser on a node of the cluster.

  10. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86, and where ver is 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  11. Start the scinstall utility in interactive mode.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  12. Type the option number for Manage a Dual-Partition Upgrade and press the Return key.


    *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  3
    

    The Manage a Dual-Partition Upgrade Menu is displayed.

  13. Type the option number for Display and Select Possible Partitioning Schemes and press the Return key.

  14. Follow the prompts to perform the following tasks:

    1. Display the possible partitioning schemes for your cluster.

    2. Choose a partitioning scheme.

    3. Choose which partition to upgrade first.


      Note –

      When you are prompted, Do you want to begin the dual-partition upgrade?, do not respond yet, and do not exit the scinstall utility. You will respond to this prompt in Step 19 of this procedure.


  15. Make note of which nodes belong to each partition in the partition scheme.

  16. On another node of the cluster, become superuser.

  17. Ensure that any critical data services can switch over between partitions.

    For a two-node cluster, each node will be the only node in its partition.

    When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.

    1. Display the node list of each resource group that you require to remain in service during the entire upgrade.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scrgadm -pv -g resourcegroup | grep "Res Group Nodelist"
        
        -p

        Displays configuration information.

        -v

        Displays in verbose mode.

        -g resourcegroup

        Specifies the name of the resource group.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# clresourcegroup show -p nodelist
        === Resource Groups and Resources ===
        
        Resource Group:                                 resourcegroup
          Nodelist:                                        node1 node2
    2. If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scrgadm -a -g resourcegroup -h nodelist
        
        -a

        Adds a new configuration.

        -h

        Specifies a comma-separated list of node names.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# clresourcegroup add-node -n node resourcegroup
        
  18. Determine your next step.

    • If you are upgrading a two-node cluster, return to Step 8 through Step 14 to designate your partitioning scheme and upgrade order.

      When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 19.

    • If you are upgrading a cluster with three or more nodes, return to the node that is running the interactive scinstall utility.

      Proceed to Step 19.

  19. At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?, type Yes.

    The command verifies that a remote installation method is available.

  20. When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.

    The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.

  21. After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               |
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  22. Ensure that each system disk is backed up.

  23. If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to upgrade those nodes.

    During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle Real Application Clusters before the nodes in the second partition are halted.

    1. Create the scripts that you need to stop applications that are not under RGM control.

      • Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.

      • To stop applications that are running on more than one node in the partition, write the scripts accordingly.

      • Use any name and directory path for your scripts that you prefer.

    2. Ensure that each node in the cluster has its own copy of your scripts.
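
      For example, you might copy a hypothetical script from one node to another as follows (the script path and host names are placeholders):


      phys-schost-1# scp /var/cluster/myscripts/stop-myapp phys-schost-2:/var/cluster/myscripts/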

    3. On each node, modify the following Sun Cluster scripts to call the scripts that you placed on that node.

      • /etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.

      • /etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.

      The Sun Cluster scripts are run from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
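
      As a minimal sketch, assuming a hypothetical script /var/cluster/myscripts/stop-myapp, the call that you add to /etc/cluster/ql/cluster_pre_halt_apps might look like the following:


      # Hypothetical addition: stop an application that is not under RGM control
      /var/cluster/myscripts/stop-myapp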

Next Steps

Upgrade software on each node in the first partition.

How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition)

Perform this procedure on each node in the cluster to upgrade the Solaris OS. Perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.2 11/09 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Upgrade Sun Cluster 3.2 11/09 Software (Dual-Partition).


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 11/09 software. See Supported Products in Sun Cluster Release Notes for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) are completed.

  1. Become superuser on the cluster node to upgrade.

    The node must be a member of the partition that is in noncluster mode.

  2. Determine whether the following Apache run-control scripts exist and are enabled or disabled:


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.

    • If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.

    • If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

    • If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

  3. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    1. For later reference, make a record of all entries that are already commented out.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
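
      For example, a hypothetical global file-system entry would change as follows:


      Before:
      /dev/md/dsk/d10 /dev/md/rdsk/d10 /global/data ufs 2 yes global,logging
      
      After:
      #/dev/md/dsk/d10 /dev/md/rdsk/d10 /global/data ufs 2 yes global,logging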

  4. Determine which procedure to follow to upgrade the Solaris OS.

    Volume Manager: Solaris Volume Manager
    Procedure: Any Solaris upgrade method except the Live Upgrade method
    Location of Instructions: Solaris installation documentation

    Volume Manager: Veritas Volume Manager
    Procedure: “Upgrading VxVM and Solaris”
    Location of Instructions: Veritas Volume Manager installation documentation


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 or 10 version of VxVM software as part of the Solaris upgrade process.


  5. Upgrade the Solaris software, following the procedure that you selected in Step 4.

    1. When prompted, choose the manual reboot option.

    2. When prompted to reboot, always reboot into noncluster mode.


      Note –

      Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:

      1. Return to this procedure to perform Step 6 and Step 7.

      2. Reboot into noncluster mode in Step 8 to complete Solaris software upgrade.


      Execute the following commands to boot a node into noncluster mode during Solaris upgrade:

      • On SPARC based systems, perform either of the following commands:


        phys-schost# reboot -- -x
        or
        ok boot -x
        

        If the instruction says to run the init S command, use the reboot -- -xs command instead.

      • On x86 based systems, perform the following command:


        phys-schost# shutdown -g0 -y -i0
        
        Press any key to continue
        1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

          The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

          For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

          The GRUB boot parameters screen appears similar to the following:


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot                                     |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        3. Add -x to the command to specify that the system boot into noncluster mode.


          [ Minimal BASH-like line editing is supported. For the first word, TAB
          lists possible command completions. Anywhere else TAB lists the possible
          completions of a device/filename. ESC at any time exits. ]
          
          grub edit> kernel /platform/i86pc/multiboot -x
          
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot -x                                  |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        5. Type b to boot the node into noncluster mode.


          Note –

          This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


        If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

  6. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 3.

  7. If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.

    To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.


    phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
    phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

    Alternatively, you can rename the scripts to be consistent with your normal administration practices.

  8. Reboot the node into noncluster mode.

    • On SPARC based systems, perform the following command.

      Include the double dashes (--) in the command:


      phys-schost# reboot -- -x
      
    • On x86 based systems, perform the shutdown and boot procedures that are described in Step 5 except add -x to the kernel boot command instead of -sx.

  9. If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Make the following changes to the procedure:

    • After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.

      If any of the entries that you uncommented in Step 6 became commented out again, uncomment those entries.

    • When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.

      • On SPARC based systems, perform the following command:


        phys-schost# reboot -- -rx
        
      • On x86 based systems, perform the shutdown and boot procedures that are described in Step 5 except add -rx to the kernel boot command instead of -sx.


    Note –

    If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command and do not type Ctrl-D.


    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password
    

  10. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  11. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See Patches and Required Firmware Levels in the Sun Cluster Release Notes for the location of patches and installation instructions.

Next Steps

Upgrade to Sun Cluster 3.2 11/09 software. Go to How to Upgrade Sun Cluster 3.2 11/09 Software (Dual-Partition).


Note –

To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.


How to Upgrade Sun Cluster 3.2 11/09 Software (Dual-Partition)

Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.2 11/09 software. This procedure also upgrades required Sun Java Enterprise System shared components. You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software.

Perform all steps from the global zone only.


Tip –

You can perform this procedure on more than one node of the partition at the same time.


Before You Begin

Perform the following tasks:

  1. Become superuser on a node that is a member of the partition that is in noncluster mode.

  2. Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 11/09 software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following example commands display the versions of the Java software releases that are installed in the related locations.


      phys-schost# /usr/j2se/bin/java -version
      phys-schost# /usr/java1.2/bin/java -version
      phys-schost# /usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, re-create the symbolic link to point to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /usr/java
      phys-schost# ln -s /usr/j2se /usr/java
      
  3. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  4. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  5. Start the installation wizard program.


    phys-schost# ./installer
    
  6. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.


    Note –

    Do not use the installation wizard program to upgrade Sun Cluster software packages.


    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  7. Exit the installation wizard program.

  8. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  9. Start the scinstall utility.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  10. Type the option number for Upgrade This Cluster Node and press the Return key.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  4
    

    The Upgrade Menu is displayed.

  11. Type the option number for Upgrade Sun Cluster Framework On This Cluster Node and press the Return key.

  12. Follow the menu prompts to upgrade the cluster framework.

    During the Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

    • Renames the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.

    • Sets the local-mac-address? variable to true, if the variable is not already set to that value.

    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.
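
    For example, on SPARC based systems you can verify the local-mac-address? setting after the upgrade:


    phys-schost# eeprom "local-mac-address?"
    local-mac-address?=true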

  13. Quit the scinstall utility.

  14. Upgrade data service packages.

    You must upgrade all data services to the Sun Cluster 3.2 version.


    Note –

    For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.


    1. Start the upgraded interactive scinstall utility.


      phys-schost# /usr/cluster/bin/scinstall
      

      Note –

      Do not use the scinstall utility that is on the installation media to upgrade data service packages.


      The scinstall Main Menu is displayed.

    2. Type the option number for Upgrade This Cluster Node and press the Return key.

      The Upgrade Menu is displayed.

    3. Type the option number for Upgrade Sun Cluster Data Service Agents On This Node and press the Return key.

    4. Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.

    5. Press Enter.

      The Upgrade Menu is displayed.

  15. Quit the scinstall utility.

  16. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  17. If you have Sun Cluster HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.


    Note –

    If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems in Sun Cluster Software Installation Guide for Solaris OS.


    To disable LOFS, ensure that the /etc/system file contains the following entry:


    exclude:lofs

    This change becomes effective at the next system reboot.
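
    For example, you might append the entry as follows, after saving a backup copy of the file (the backup name is arbitrary):


    phys-schost# cp /etc/system /etc/system.orig
    phys-schost# echo "exclude:lofs" >> /etc/system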

  18. As needed, manually upgrade any custom data services that are not supplied on the product media.

  19. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  20. Install any Sun Cluster 3.2 11/09 framework and data-service software patches.

    See Patches and Required Firmware Levels in the Sun Cluster Release Notes for the location of patches and installation instructions.

  21. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.

  22. Repeat all steps in this procedure up to this point on all remaining nodes that you need to upgrade in the partition.

  23. After all nodes in a partition are upgraded, apply the upgrade changes.

    1. From one node in the partition that you are upgrading, start the interactive scinstall utility.


      phys-schost# /usr/cluster/bin/scinstall
      

      Note –

      Do not use the scinstall command that is located on the installation media. Only use the scinstall command that is located on the cluster node.


      The scinstall Main Menu is displayed.

    2. Type the option number for Apply Dual-Partition Upgrade Changes to the Partition and press the Return key.

    3. Follow the prompts to continue each stage of the upgrade processing.

      The command performs the following tasks, depending on which partition the command is run from:

      • First partition - The command halts each node in the second partition, one node at a time. When a node is halted, any services on that node are automatically switched over to a node in the first partition, provided that the node list of the related resource group contains a node in the first partition. After all nodes in the second partition are halted, the nodes in the first partition are booted into cluster mode and take over providing cluster services.


        Caution – Caution –

        Do not reboot any node of the first partition again until after the upgrade is completed on all nodes. If you again reboot a node of the first partition before the second partition is upgraded and rebooted into the cluster, the upgrade might fail in an unrecoverable state.


      • Second partition - The command boots the nodes in the second partition into cluster mode, to join the active cluster that was formed by the first partition. After all nodes have rejoined the cluster, the command performs final processing and reports on the status of the upgrade.

    4. Exit the scinstall utility, if it is still running.

    5. If you are finishing the upgrade of the first partition from Sun Cluster 3.1 8/05 software and you want to configure zone clusters (on Solaris 10 only), set the expected number of nodes and private networks in the cluster.

      If you upgraded from Sun Cluster 3.1 8/05 software and do not want to configure zone clusters, or if you upgraded from Sun Cluster 3.2 software, this task is optional.

      1. Boot all nodes in the first partition into noncluster mode.

        • On SPARC based systems, perform the following command:


          ok boot -x
          
        • On x86 based systems, perform the following commands:

          1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

            The GRUB menu appears similar to the following:


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | Solaris 10 /sol_10_x86                                               | 
            | Solaris failsafe                                                     |
            |                                                                      |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press enter to boot the selected OS, 'e' to edit the
            commands before booting, or 'c' for a command-line.

            For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

          2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

            The GRUB boot parameters screen appears similar to the following:


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       | 
            | kernel /platform/i86pc/multiboot                                     | 
            | module /platform/i86pc/boot_archive                                  | 
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          3. Add -x to the command to specify that the system boot into noncluster mode.


            [ Minimal BASH-like line editing is supported. For the first word, TAB
            lists possible command completions. Anywhere else TAB lists the possible
            completions of a device/filename. ESC at any time exits. ]
            
            grub edit> kernel /platform/i86pc/multiboot -x
            
          4. Press Enter to accept the change and return to the boot parameters screen.

            The screen displays the edited command.


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot -x                                  |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          5. Type b to boot the node into noncluster mode.


            Note –

            This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      2. From one node, start the clsetup utility.

        When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

      3. Type the option number for Change IP Address Range and press the Return key.

        The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

      4. To change either the private-network IP address or the IP address range, type yes and press the Return key.

        The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

      5. Change or accept the private-network IP address.

        • To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

          The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.

        • To change the default private-network IP address, perform the following substeps.

          1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

            The clsetup utility will prompt for the new private-network IP address.

          2. Type the new IP address and press the Return key.

            The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

      6. Change or accept the default private-network IP address range.

        • On the Solaris 10 OS, the default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 10 private networks, and up to 12 zone clusters in the cluster.

        • On the Solaris 9 OS, the default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster.

        • To accept the default IP address range, type yes and press the Return key.

          Then skip to the next step.

        • To change the IP address range, perform the following substeps.

          1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

            When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.

          2. Enter the number of nodes and private networks that you expect to configure in the cluster.

            From these numbers, the clsetup utility calculates two proposed netmasks:

            • The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.

            • The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.

          3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.

      7. Type yes in response to the clsetup utility's question about proceeding with the update.

      8. When finished, exit the clsetup utility.

      9. Boot the nodes of the first partition into cluster mode.

    6. If you are finishing upgrade of the first partition, perform the following substeps to prepare the second partition for upgrade.

      Otherwise, if you are finishing upgrade of the second partition, proceed to How to Verify Upgrade of Sun Cluster 3.2 11/09 Software.

      1. Boot each node in the second partition into noncluster mode.

        • On SPARC based systems, perform the following command:


          ok boot -x
          
        • On x86 based systems, perform the following commands:

          1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

            The GRUB menu appears similar to the following:


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | Solaris 10 /sol_10_x86                                               | 
            | Solaris failsafe                                                     |
            |                                                                      |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press enter to boot the selected OS, 'e' to edit the
            commands before booting, or 'c' for a command-line.

            For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

          2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

            The GRUB boot parameters screen appears similar to the following:


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       | 
            | kernel /platform/i86pc/multiboot                                     | 
            | module /platform/i86pc/boot_archive                                  | 
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          3. Add -x to the command to specify that the system boot into noncluster mode.


            [ Minimal BASH-like line editing is supported. For the first word, TAB
            lists possible command completions. Anywhere else TAB lists the possible
            completions of a device/filename. ESC at any time exits. ]
            
            grub edit> kernel /platform/i86pc/multiboot -x
            
          4. Press Enter to accept the change and return to the boot parameters screen.

            The screen displays the edited command.


            GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot -x                                  |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          5. Type b to boot the node into noncluster mode.


            Note –

            This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      2. Upgrade the nodes in the second partition.

        To upgrade Solaris software before you perform Sun Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).

        Otherwise, upgrade Sun Cluster software on the second partition. Return to Step 1.

  24. If you changed the RG_system property of any resource groups to FALSE, change the settings back to TRUE.


    phys-schost# clresourcegroup set -p RG_system=TRUE resourcegroup
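
    If you saved a list of the affected resource groups when you prepared the cluster, a sketch such as the following restores them in one pass (the resource group names are hypothetical):


    phys-schost# for rg in rg-oracle rg-nfs; do
    > clresourcegroup set -p RG_system=TRUE $rg
    > done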
    
Next Steps

Go to Chapter 6, Completing the Upgrade.

Troubleshooting

If you experience an unrecoverable error during dual-partition upgrade, perform recovery procedures in How to Recover from a Failed Dual-Partition Upgrade.