Sun Cluster Upgrade Guide for Solaris OS

Procedure: How to Prepare the Cluster for Upgrade (Standard)

Perform this procedure to remove the cluster from production before you perform a standard upgrade. On the Solaris 10 OS, perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.

      • On Sun Cluster 3.0 or 3.1 software, use the following command:


        phys-schost% scstat
        
      • On Sun Cluster 3.2 software, use the following command:


        phys-schost% cluster status
        

      See the scstat(1M) or cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    3. Check the volume-manager status.
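The log check above can be scripted. The following sketch counts entries that mention a warning or an error; the heredoc lines are hypothetical stand-ins for /var/adm/messages, and on a real node you would run the grep against the log file itself.

```shell
#!/bin/sh
# Sketch of the /var/adm/messages check above: count entries that
# mention a warning or an error. The heredoc lines are hypothetical
# stand-ins for the real log; on a node you would instead run:
#   grep -icE 'warning|error' /var/adm/messages
suspects=$(grep -icE 'warning|error' <<'EOF'
Jun  1 12:00:01 phys-schost-1 genunix: [ID 936769 kern.info] boot complete
Jun  1 12:00:02 phys-schost-1 scsi: [ID 107833 kern.warning] WARNING: disk offline
EOF
)
echo "$suspects"
```

A nonzero count indicates messages to investigate and resolve before you proceed with the upgrade.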

  2. Notify users that cluster services will be unavailable during the upgrade.

  3. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  4. Become superuser on a node of the cluster.

  5. Take each resource group offline and disable all resources.

    Take offline all resource groups in the cluster, including those that are in non-global zones. Then disable all resources, to prevent the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.

    • If you are upgrading from Sun Cluster 3.1 or 3.2 software and want to use the scsetup or clsetup utility, perform the following steps:

      1. Start the utility.

        • On Sun Cluster 3.1 software, use the following command:


          phys-schost# scsetup
          
        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clsetup
          

        The Main Menu is displayed.

      2. Type the option number for Resource Groups and press the Return key.

        The Resource Group Menu is displayed.

      3. Type the option number for Online/Offline or Switchover a Resource Group and press the Return key.

      4. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

      5. When all resource groups are offline, type q to return to the Resource Group Menu.

      6. Exit the scsetup utility.

        Type q to back out of each submenu or press Ctrl-C.

    • To use the command line, perform the following steps:

      1. Take each resource group offline.

        • On Sun Cluster 3.0 or 3.1 software, use the following command:


          phys-schost# scswitch -F -g resource-group
          
          -F

          Switches a resource group offline.

          -g resource-group

          Specifies the name of the resource group to take offline.

        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clresourcegroup offline resource-group
          
      2. From any node, list all enabled resources in the cluster.

        • On Sun Cluster 3.0 or 3.1 software, use the following command:


          phys-schost# scrgadm -pv | grep "Res enabled"
          (resource-group:resource) Res enabled: True
        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clresource show -p Enabled
          === Resources ===
          
          Resource:                                       resource
            Enabled{nodename1}:                               True
            Enabled{nodename2}:                               True
          …
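With many resources, the Enabled listing can be reduced to just the resources that are still enabled on some node. The awk sketch below parses output in the format shown above; the resource names in the heredoc are hypothetical, and on a real node you would pipe the actual command output to the awk program instead.

```shell
#!/bin/sh
# Print each resource that is still Enabled on at least one node,
# parsing `clresource show -p Enabled`-style output. The heredoc
# sample below uses hypothetical resource names.
enabled=$(awk '
  $1 == "Resource:" { r = $2 }
  $1 ~ /^Enabled/ && $2 == "True" && r != last { print r; last = r }
' <<'EOF'
Resource:                                       myapp-rs
  Enabled{nodename1}:                               True
  Enabled{nodename2}:                               False
Resource:                                       mylh-rs
  Enabled{nodename1}:                               False
  Enabled{nodename2}:                               False
EOF
)
echo "$enabled"
```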
      3. Identify those resources that depend on other resources.


        phys-schost# clresource show -p resource_dependencies
        === Resources ===
        
        Resource:                                       node
          Resource_dependencies:                           node

        You must disable dependent resources before you disable the resources that they depend on.
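This ordering constraint is a topological sort, which the standard tsort(1) utility can compute. In the sketch below, each input line names a hypothetical dependent resource followed by a resource it depends on; tsort then prints dependents before the resources they depend on, which is a safe disable order.

```shell
#!/bin/sh
# Compute a safe disable order with tsort(1). Each input line is
# "dependent dependency", so tsort prints each dependent before the
# resource it depends on. Resource names are hypothetical.
order=$(tsort <<'EOF'
app-rs hasp-rs
app-rs lh-rs
hasp-rs lh-rs
EOF
)
echo "$order"
```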

      4. Disable each enabled resource in the cluster.

        • On Sun Cluster 3.0 or 3.1 software, use the following command:


          phys-schost# scswitch -n -j resource
          
          -n

          Disables the resource.

          -j resource

          Specifies the resource.

        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clresource disable resource
          

        See the scswitch(1M) or clresource(1CL) man page for more information.

      5. Verify that all resources are disabled.

        • On Sun Cluster 3.0 or 3.1 software, use the following command:


          phys-schost# scrgadm -pv | grep "Res enabled"
          (resource-group:resource) Res enabled: False
        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clresource show -p Enabled
          === Resources ===
          
          Resource:                                       resource
            Enabled{nodename1}:                               False
            Enabled{nodename2}:                               False
          …
      6. Move each resource group to the unmanaged state.

        • On Sun Cluster 3.0 or 3.1 software, use the following command:


          phys-schost# scswitch -u -g resource-group
          
          -u

          Moves the specified resource group to the unmanaged state.

          -g resource-group

          Specifies the name of the resource group to move into the unmanaged state.

        • On Sun Cluster 3.2 software, use the following command:


          phys-schost# clresourcegroup unmanage resource-group
          
  6. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.

    • On Sun Cluster 3.0 or 3.1 software, use the following command:


      phys-schost# scstat -g
      
    • On Sun Cluster 3.2 software, use the following command:


      phys-schost# cluster status -t resource,resourcegroup
      
  7. (Optional) If you are upgrading from a version of Sun Cluster 3.0 software and do not want your ntp.conf file renamed to ntp.conf.cluster, create an ntp.conf.cluster file.

    On each node, copy /etc/inet/ntp.cluster to /etc/inet/ntp.conf.cluster.


    phys-schost# cp /etc/inet/ntp.cluster /etc/inet/ntp.conf.cluster
    

    The existence of an ntp.conf.cluster file prevents upgrade processing from renaming the ntp.conf file. The ntp.conf file will still be used to synchronize NTP among the cluster nodes.

  8. Stop all applications that are running on each node of the cluster.

  9. Ensure that all shared data is backed up.

  10. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Sun Cluster Software Installation Guide for Solaris OS for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Sun Cluster Software Installation Guide for Solaris OS.
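This check can be scripted by filtering the medstat output for a Bad status. The column layout in the heredoc below is a hypothetical sample of a medstat-style listing; verify the field positions against your actual output before relying on the filter.

```shell
#!/bin/sh
# Flag mediator hosts whose Status column reads Bad. The heredoc is a
# hypothetical medstat-style listing; on a node, pipe the real output:
#   medstat -s setname | awk 'NR > 1 && $2 == "Bad" { print $1 }'
bad=$(awk 'NR > 1 && $2 == "Bad" { print $1 }' <<'EOF'
Mediator                Status        Golden
phys-schost-1           Ok            No
phys-schost-2           Bad           No
EOF
)
echo "$bad"
```

Any host that the filter prints must be repaired before you continue.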

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 1/09 Software.

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.

      • On Sun Cluster 3.1 software, use the following command:


        phys-schost# scswitch -z -D setname -h node
        
        -z

        Changes mastery.

        -D setname

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# cldevicegroup switch -n node devicegroup
        
    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the names of the nodes to remove as mediator hosts for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.
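Steps c and d can be repeated over several disk sets with a small loop. The sketch below only prints the commands; the disk set and host names are hypothetical, and on a real node you would run the printed commands instead.

```shell
#!/bin/sh
# Dry-run loop over multiple disk sets: take ownership of each set,
# then remove its mediator hosts. All names are hypothetical.
ds_plan=$(
  for SET in dataset1 dataset2; do
    echo "cldevicegroup switch -n phys-schost-1 $SET"
    echo "metaset -s $SET -d -m phys-schost-1 phys-schost-2"
  done
)
echo "$ds_plan"
```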

  11. From one node, shut down the cluster.

    • On Sun Cluster 3.0 or 3.1 software, use the following command:


      phys-schost# scshutdown -g0 -y
      
    • On Sun Cluster 3.2 software, use the following command:


      phys-schost# cluster shutdown -g0 -y
      

    See the scshutdown(1M) or cluster(1CL) man page for more information.

  12. Boot each node into noncluster mode.

    • On SPARC based systems, run the following command:


      ok boot -x
      
    • On x86 based systems, perform the following steps:

      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               |
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.


        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
        
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.


        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.


        Note –

        This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  13. Ensure that each system disk is backed up.

Next Steps

Upgrade software on each node.