Sun Cluster Software Installation Guide for Solaris OS

How to Prepare a Cluster Node for a Rolling Upgrade

Perform this procedure on one node at a time. You will take the node that you are upgrading out of the cluster while the remaining nodes continue to function as active cluster members.

Before You Begin

Perform the required preparation tasks and observe the rolling-upgrade guidelines before you start this procedure.

Steps
  1. (Optional) Install Sun Cluster 3.1 8/05 documentation.

    Install the documentation packages in your preferred location, such as on an administrative console or a documentation server. For installation instructions, see the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86.
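
    For example, on a SPARC administrative console with the CD-ROM mounted at /cdrom/cdrom0, an installation might look like the following sketch. The directory layout and package name shown here are illustrative; use the directories and package names that index.html specifies:

    # cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster
    # pkgadd -d Packages pkgname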

  2. If you are upgrading from the Sun Cluster 3.1 9/04 release, ensure that the latest Sun Cluster 3.1 Core Patch is installed.

    This Core Patch contains the code fix for 6210440, which is necessary to enable rolling upgrade from Sun Cluster 3.1 9/04 software to Sun Cluster 3.1 8/05 software.
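
    To check whether the patch is already installed, you can query the patch database on the node, where patch-id is a placeholder for the Core Patch number for your platform:

    # showrev -p | grep patch-id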

  3. Become superuser on the node of the cluster that you want to upgrade.

  4. For a two-node cluster that uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.

    2. Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.


      # /usr/opt/SUNWscm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      # scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      # dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.


      # /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
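
      To confirm the change, you can run the dscfg command again with no arguments on each of those nodes. It should now print the quorum-device slice:

      # /usr/opt/SUNWesm/sbin/dscfg
      /dev/did/rdsk/dQsS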
      
  5. From any node, view the current status of the cluster.

    Save the output as a baseline for later comparison.


    % scstat
    % scrgadm -pv[v]

    See the scstat(1M) and scrgadm(1M) man pages for more information.
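
    One way to preserve this baseline is to redirect the output to files, as in the following sketch. The file names are illustrative:

    % scstat > /var/tmp/scstat.before-upgrade
    % scrgadm -pvv > /var/tmp/scrgadm.before-upgrade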

  6. Move all resource groups and device groups off the node to upgrade.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource groups and device groups

    See the scswitch(1M) man page for more information.

  7. Verify that the move was completed successfully.


    # scstat -g -D
    
    -g

    Shows status for all resource groups

    -D

    Shows status for all disk device groups
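
    As a quick spot check, you can filter the status output for the evacuated node, where from-node is the node name that you used in the previous step:

    # scstat -g -D | grep from-node

    Lines that still report from-node as a primary or as online indicate that the move did not complete.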

  8. Ensure that the system disk, applications, and all data are backed up.
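
    For example, the following sketch backs up the root file system to a local tape drive. The dump device is illustrative; use your site's backup tools and targets:

    # ufsdump 0ucf /dev/rmt/0 /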

  9. If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the disk set name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish a Rolling Upgrade to Sun Cluster 3.1 8/05 Software.
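
      You can display the mediator hosts that are configured for a disk set by running the metaset command without options:

      # metaset -s setname

      The output lists the mediator hosts for the disk set along with the set's hosts and drives.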

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      # scswitch -z -D setname -h node
      
      -z

      Changes mastery

      -D

      Specifies the name of the disk set

      -h node

      Specifies the name of the node to become primary of the disk set

    4. Unconfigure all mediators for the disk set.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk-set name

      -d

      Deletes from the disk set

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat these steps for each remaining disk set that uses mediators.

  10. Shut down the node that you want to upgrade and boot it into noncluster mode.

    • On SPARC based systems, perform the following commands:


      # shutdown -y -g0
      ok boot -x
      
    • On x86 based systems, perform the following commands:


      # shutdown -y -g0
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type   b [file-name] [boot-flags] <ENTER>    to boot with options
      or     i <ENTER>                             to enter boot interpreter
      or     <ENTER>                               to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      

    The other nodes of the cluster continue to function as active cluster members.
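
    From one of the remaining nodes, you can confirm that the node has left cluster membership:

    # scstat -n

    The node that you shut down is reported as offline while the other nodes remain online.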

Next Steps

To upgrade the Solaris software to a Maintenance Update release, go to How to Perform a Rolling Upgrade of a Solaris Maintenance Update.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support Sun Cluster 3.1 8/05 software. See the Sun Cluster 3.1 8/05 Release Notes for Solaris OS for information about supported releases of the Solaris OS.
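
You can verify the Solaris release that a node currently runs before you decide:

  # cat /etc/release
  # uname -r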


If you do not intend to upgrade the Solaris OS, go to How to Upgrade Dependency Software Before a Rolling Upgrade.