Sun Cluster Upgrade Guide for Solaris OS

Procedure: How to Prepare the Cluster for Upgrade (Dual-Partition)

Perform this procedure to prepare a multiple-node cluster for a dual-partition upgrade. This procedure refers to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot those nodes into cluster mode so that they rejoin the nodes of the first partition.


Note –

If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard) or How to Prepare the Cluster for Upgrade (Live Upgrade).


Perform all steps from the global zone only.
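
To confirm that you are logged in to the global zone, you can run the zonename(1) command, which prints the name of the current zone and returns global in the global zone.


    phys-schost% zonename
    global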

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost% scstat
        
      • On Sun Cluster 3.2 software, use the following command:


        phys-schost% cluster status
        

      See the scstat(1M) or cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
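
      For example, the following egrep command is one way to scan the log for messages that mention errors or warnings; the search patterns shown are only illustrative.


      phys-schost# egrep -i "error|warning" /var/adm/messages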

    3. Check the volume-manager status.
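
      For example, if your cluster uses Solaris Volume Manager, the metastat command reports the status of its volumes, and if it uses Veritas Volume Manager, the vxprint command reports the status of its objects.


      phys-schost# metastat
      phys-schost# vxprint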

  2. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Become superuser.

  4. Ensure that the RG_system property of all resource groups in the cluster is set to FALSE.

    A setting of RG_system=TRUE would restrict certain operations that the dual-partition software must perform.

    1. On each node, determine whether any resource groups are set to RG_system=TRUE.


      phys-schost# clresourcegroup show -p RG_system
      
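
      Output for this command is similar to the following example, in which resourcegroup is a placeholder for an actual resource group name; the exact format might differ in your release.


      === Resource Groups and Resources ===

      Resource Group:                                 resourcegroup
        RG_system:                                       FALSE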

      Make note of which resource groups to change. Save this list to use when you restore the setting after upgrade is completed.

    2. For each resource group that is set to RG_system=TRUE, change the setting to FALSE.


      phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
      
  5. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  6. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Sun Cluster Software Installation Guide for Solaris OS for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Sun Cluster Software Installation Guide for Solaris OS.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 11/09 Software.
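
      One way to record this information is to run the metaset command for each disk set; its output includes any mediator hosts that are configured for the set.


      phys-schost# metaset -s setname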

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scswitch -z -D setname -h node
        
        -z

        Changes mastery.

        -D setname

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# cldevicegroup switch -n node devicegroup
        
    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  7. If you are upgrading a two-node cluster, skip to Step 17.

    Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine which nodes each partition will contain, but you will pause the partitioning process at the confirmation prompt. You will then compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.

  8. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  9. Become superuser on a node of the cluster.

  10. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86, and where ver is 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  11. Start the scinstall utility in interactive mode.


    phys-schost# ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Java Availability Suite DVD-ROM.


    The scinstall Main Menu is displayed.

  12. Type the option number for Manage a Dual-Partition Upgrade and press the Return key.


    *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  3
    

    The Manage a Dual-Partition Upgrade Menu is displayed.

  13. Type the option number for Display and Select Possible Partitioning Schemes and press the Return key.

  14. Follow the prompts to perform the following tasks:

    1. Display the possible partitioning schemes for your cluster.

    2. Choose a partitioning scheme.

    3. Choose which partition to upgrade first.


      Note –

      When you are prompted, Do you want to begin the dual-partition upgrade?, stop and do not respond yet, but do not exit the scinstall utility. You will respond to this prompt in Step 19 of this procedure.


  15. Make note of which nodes belong to each partition in the partition scheme.

  16. On another node of the cluster, become superuser.

  17. Ensure that any critical data services can switch over between partitions.

    For a two-node cluster, each node will be the only node in its partition.

    When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.

    1. Display the node list of each resource group that you require to remain in service during the entire upgrade.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scrgadm -pv -g resourcegroup | grep "Res Group Nodelist"
        
        -p

        Displays configuration information.

        -v

        Displays in verbose mode.

        -g resourcegroup

        Specifies the name of the resource group.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# clresourcegroup show -p nodelist
        === Resource Groups and Resources ===
        
        Resource Group:                                 resourcegroup
          Nodelist:                                        node1 node2

    2. If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scrgadm -a -g resourcegroup -h nodelist
        
        -a

        Adds a new configuration.

        -h

        Specifies a comma-separated list of node names.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# clresourcegroup add-node -n node resourcegroup
        
  18. Determine your next step.

    • If you are upgrading a two-node cluster, return to Step 8 through Step 14 to designate your partitioning scheme and upgrade order.

      When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 19.

    • If you are upgrading a cluster with three or more nodes, return to the node that is running the interactive scinstall utility.

      Proceed to Step 19.

  19. At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?, type Yes.

    The command verifies that a remote installation method is available.

  20. When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.

    The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.

  21. After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               |
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  22. Ensure that each system disk is backed up.
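
    For example, on a system with a UFS root file system you might use the ufsdump command to back up the root (/) file system to a local tape device; the dump level, tape device, and file system shown here are placeholders that you would adapt to your own backup strategy.


    phys-schost# ufsdump 0ucf /dev/rmt/0 /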

  23. If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to upgrade those nodes.

    During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle Real Application Clusters before the nodes in the second partition are halted.

    1. Create the scripts that you need to stop applications that are not under RGM control.

      • Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterward.

      • To stop applications that are running on more than one node in the partition, write the scripts accordingly.

      • Use any name and directory path that you prefer for your scripts.

    2. Ensure that each node in the cluster has its own copy of your scripts.

    3. On each node, modify the following Sun Cluster scripts to call the scripts that you placed on that node.

      • /etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.

      • /etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.

      The Sun Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
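
      The following is a minimal sketch of such a stop script, assuming a hypothetical application whose stop command is /opt/myapp/bin/stop-myapp; the application name, command path, and script location are placeholders that you would replace with the details of your own application.


        #!/usr/bin/ksh
        # Hypothetical example: stop an application that is not under RGM
        # control. Replace /opt/myapp/bin/stop-myapp with the stop command
        # for your application.
        /opt/myapp/bin/stop-myapp
        if [ $? -ne 0 ]; then
            echo "Failed to stop the application" >&2
            exit 1
        fi
        exit 0

      You would then call a script such as this from /etc/cluster/ql/cluster_pre_halt_apps or /etc/cluster/ql/cluster_post_halt_apps, depending on whether the application must be stopped before or after the applications that are under RGM control.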

Next Steps

Upgrade software on each node in the first partition.