Sun Cluster System Administration Guide for Solaris OS

Patching Sun Cluster Software

Table 11–1 Task Map: Patching the Cluster

Task 

Instructions 

Apply a nonrebooting Sun Cluster patch to one node at a time without stopping the node 

How to Apply a Nonrebooting Sun Cluster Patch

Apply a rebooting Sun Cluster patch after taking the cluster member to noncluster mode 

How to Apply a Rebooting Patch (Node)

How to Apply a Rebooting Patch (Cluster)

Apply a patch in single-user mode when your cluster contains failover nodes 

How to Apply Patches in Single-User Mode with Failover Nodes

Remove a Sun Cluster patch 

Changing a Sun Cluster Patch

How to Apply a Rebooting Patch (Node)

Apply the patch to one node in the cluster at a time to keep the cluster itself operational during the patch process. With this procedure, you must first stop the node and boot it to single-user mode by using the boot -sx or shutdown -g0 -y -i0 command before applying the patch.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Before applying the patch, check the Sun Cluster product web site for any special preinstallation or postinstallation instructions.

  2. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the node to which you are applying the patch.

  3. List the resource groups and device groups on the node being patched.


    # clresourcegroup status -n node
    # cldevicegroup status -n node
    
  4. Switch all resource groups, resources, and device groups from the node being patched to other cluster members.


    # clnode evacuate node
    
    evacuate

    Evacuates all device groups and resource groups, including all global-cluster non-voting nodes, from the specified node.

    node

    Specifies the node from which you are switching the resource groups and device groups.

  5. Shut down the node.


    # shutdown -g0 -y -i0
  6. Boot the node in noncluster, single-user mode.

    • On SPARC based systems, run the following command.


      ok boot -sx
      
    • On x86 based systems, run the following commands.


      phys-schost# shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add -sx to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -sx
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -sx                                 |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.


      Note –

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -sx option to the kernel boot parameter command.


  7. Apply the software or firmware patch.


    # patchadd -M patch-dir patch-id
    
    patch-dir

    Specifies the directory location of the patch.

    patch-id

    Specifies the patch number of a given patch.


    Note –

    Always defer to the instructions in the patch directory, which supersede procedures in this chapter.
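    The patchadd step can be sketched as a small guard that refuses to run when the patch is missing from the patch directory and reminds you to read the patch's own instructions first. The wrapper below is hypothetical, not part of Sun Cluster, and it echoes the patchadd command as a dry run so the logic can be tried off-cluster; on a real node you would run patchadd directly.

    ```shell
    # Hypothetical guard around the patchadd step (dry run: the final
    # command is echoed, not executed, so this can be tried anywhere).
    apply_patch() {
      patch_dir=$1
      patch_id=$2
      # Refuse to continue if the patch is not present in the patch directory.
      if [ ! -d "$patch_dir/$patch_id" ]; then
        echo "patch $patch_id not found under $patch_dir" >&2
        return 1
      fi
      # The instructions shipped with the patch supersede this chapter.
      echo "review the README in $patch_dir/$patch_id before continuing"
      echo "patchadd -M $patch_dir $patch_id"
    }

    # Example invocation: apply_patch /var/tmp/patches 234567-05
    ```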


  8. Verify that the patch has been installed successfully.


    # showrev -p | grep patch-id
    
  9. Reboot the node into the cluster.


    # reboot
    
  10. Verify that the patch works, and that the node and cluster are operating normally.
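    The verification in Step 8 is only a grep over the patch list. In the sketch below, showrev is stubbed with one line of sample output, an assumption made so that the check can be shown outside a live Solaris node; on a real node, drop the stub and run showrev -p directly.

    ```shell
    # Stubbed verification that a patch id appears in the patch list.
    showrev() {   # hypothetical stand-in for /usr/bin/showrev -p
      echo "Patch: 234567-05 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWscr"
    }

    patch_id=234567-05
    if showrev -p | grep "$patch_id" >/dev/null; then
      echo "patch $patch_id installed"
    else
      echo "patch $patch_id NOT installed" >&2
    fi
    ```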

  11. Repeat Step 2 through Step 10 for all remaining cluster nodes.

  12. Switch resource groups and device groups as needed.

    After you reboot all the nodes, the last node rebooted will not have the resource groups and device groups online.


    # cldevicegroup switch -n node   + | devicegroup ...
    # clresourcegroup switch -n node[:zone][,...] + | resource-group ...
    
    node

    The name of the node to which you are switching the resource groups and device groups.

    zone

    The name of the global-cluster non-voting node (zone) that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.

  13. Check to see if you need to commit the patch software by using the scversions command.


    # /usr/cluster/bin/scversions
    

    You will see one of the following results:


    Upgrade commit is needed.
    
    Upgrade commit is NOT needed. All versions match.
  14. If a commit is needed, commit the patch software.


    # scversions -c
    

    Note –

    Running scversions will cause one or more CMM reconfigurations, depending on the situation.
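    Steps 13 and 14 combine naturally into one check-then-commit sequence. In this sketch, scversions is a hypothetical stub that mimics the command's two outputs so the decision logic can be exercised anywhere; on a cluster node you would call /usr/cluster/bin/scversions itself.

    ```shell
    # Check whether a commit is needed, and commit only in that case.
    scversions() {   # hypothetical stub for /usr/cluster/bin/scversions
      if [ "$1" = "-c" ]; then
        echo "Committed."
      else
        echo "Upgrade commit is needed."
      fi
    }

    if scversions | grep -q "commit is needed"; then
      scversions -c   # on a real node this can trigger CMM reconfigurations
    else
      echo "no commit needed"
    fi
    ```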



Example 11–1 Applying a Rebooting Patch (Node)

The following example shows the application of a rebooting Sun Cluster patch to a node.


# clresourcegroup status -n phys-schost-2
...
Resource Group     Resource
--------------     --------
rg1                rs-2
rg1                rs-3
...
# cldevicegroup status -n phys-schost-2
...
Device Group Name:    dg-schost-1
...
# clnode evacuate phys-schost-2
# shutdown -g0 -y -i0
...

Boot the node in noncluster, single-user mode.


# patchadd -M /var/tmp/patches 234567-05
...
# showrev -p | grep 234567-05

...
# reboot
...
# cldevicegroup switch -n phys-schost-1 dg-schost-1
# clresourcegroup switch -n phys-schost-1 schost-sa-1
# scversions
Upgrade commit is needed.
# scversions -c

See Also

If you need to back out a patch, see Changing a Sun Cluster Patch.

How to Apply a Rebooting Patch (Cluster)

With this procedure, you must first shut down the cluster and boot each node to single-user mode by using the boot -sx or shutdown -g0 -y -i0 command before applying the patch.

  1. Before applying the patch, check the Sun Cluster product web site for any special preinstallation or postinstallation instructions.

  2. Become superuser on any node in the cluster.

  3. Shut down the cluster.


    # cluster shutdown -y -g grace-period message
    
    -y

    Specifies to answer yes to the confirmation prompt.

    -g grace-period

    Specifies, in seconds, the amount of time to wait before shutting down. Default grace period is 60 seconds.

    message

    Specifies the warning message to broadcast. Use quotes if message contains multiple words.

  4. Boot each node into noncluster, single-user mode.

    On the console of each node, run the following commands.

    • On SPARC based systems, run the following command.


      ok boot -sx
      
    • On x86 based systems, run the following commands.


      phys-schost# shutdown -g0 -y -i0
      
      Press any key to continue
    1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

      The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.

      For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

    2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

      The GRUB boot parameters screen appears similar to the following:


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot                                     |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    3. Add -sx to the command to specify that the system boot into noncluster mode.


      [ Minimal BASH-like line editing is supported. For the first word, TAB
      lists possible command completions. Anywhere else TAB lists the possible
      completions of a device/filename. ESC at any time exits. ]
      
      grub edit> kernel /platform/i86pc/multiboot -sx
    4. Press the Enter key to accept the change and return to the boot parameters screen.

      The screen displays the edited command.


      GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
      +----------------------------------------------------------------------+
      | root (hd0,0,a)                                                       |
      | kernel /platform/i86pc/multiboot -sx                                 |
      | module /platform/i86pc/boot_archive                                  |
      +----------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press 'b' to boot, 'e' to edit the selected command in the
      boot sequence, 'c' for a command-line, 'o' to open a new line
      after ('O' for before) the selected line, 'd' to remove the
      selected line, or escape to go back to the main menu.
    5. Type b to boot the node into noncluster mode.


      Note –

      This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -sx option to the kernel boot parameter command.


  5. Apply the software or firmware patch.

    On one node at a time, run the following command.


    # patchadd -M patch-dir patch-id
    
    patch-dir

    Specifies the directory location of the patch.

    patch-id

    Specifies the patch number of a given patch.


    Note –

    Always defer to the instructions in the patch directory, which supersede procedures in this chapter.


  6. Verify that the patch has been installed successfully on each node.


    # showrev -p | grep patch-id
    
  7. After applying the patch to all nodes, reboot the nodes into the cluster.

    On each node, run the following command.


    # reboot
    
  8. Check to see if you need to commit the patch software by using the scversions command.


    # /usr/cluster/bin/scversions
    

    You will see one of the following results:


    Upgrade commit is needed.
    
    Upgrade commit is NOT needed. All versions match.
  9. If a commit is needed, commit the patch software.


    # scversions -c
    

    Note –

    Running scversions will cause one or more CMM reconfigurations, depending on the situation.


  10. Verify that the patch works, and that the nodes and cluster are operating normally.


Example 11–2 Applying a Rebooting Patch (Cluster)

The following example shows the application of a rebooting Sun Cluster patch to a cluster.


# cluster shutdown -g0 -y
...

Boot the cluster in noncluster, single-user mode.


...
# patchadd -M /var/tmp/patches 234567-05
(Apply patch to other cluster nodes)
...
# showrev -p | grep 234567-05
# reboot
# scversions
Upgrade commit is needed.
# scversions -c

See Also

If you need to back out a patch, see Changing a Sun Cluster Patch.

How to Apply a Nonrebooting Sun Cluster Patch

Apply the patch to one node in the cluster at a time. When applying a nonrebooting patch, you do not need to first shut down the node that is receiving the patch.

  1. Before applying the patch, check the Sun Cluster product web site for any special preinstallation or postinstallation instructions.

  2. Apply the patch on a single node.


    # patchadd -M patch-dir patch-id
    
    patch-dir

    Specifies the directory location of the patch.

    patch-id

    Specifies the patch number of a given patch.

  3. Verify that the patch has been installed successfully.


    # showrev -p | grep patch-id
    
  4. Verify that the patch works, and that the node and cluster are operating normally.

  5. Repeat Step 2 through Step 4 for the remaining cluster nodes.
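    The per-node repetition in Steps 2 through 5 amounts to a simple rolling loop. The node names and the echoed commands below are illustrative only; in practice you run Steps 2 through 4 on each node in turn.

    ```shell
    # Rolling nonrebooting patch, one node at a time (dry run: commands
    # are echoed, not executed; node names are examples).
    PATCH_DIR=/var/tmp/patches
    PATCH_ID=234567-05

    patch_all_nodes() {
      for node in phys-schost-1 phys-schost-2 phys-schost-3; do
        echo "on $node: patchadd -M $PATCH_DIR $PATCH_ID"
        echo "on $node: showrev -p | grep $PATCH_ID"
      done
    }

    patch_all_nodes
    ```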

  6. Check to see if you need to commit the patch software by using the scversions command.


    # /usr/cluster/bin/scversions
    

    You will see one of the following results:


    Upgrade commit is needed.
    
    Upgrade commit is NOT needed. All versions match.
  7. If a commit is needed, commit the patch software.


    # scversions -c
    

    Note –

    Running scversions will cause one or more CMM reconfigurations, depending on the situation.



Example 11–3 Applying a Nonrebooting Sun Cluster Patch


# patchadd -M /tmp/patches 234567-05
...
# showrev -p | grep 234567-05
# scversions
Upgrade commit is needed.
# scversions -c

See Also

If you need to back out a patch, see Changing a Sun Cluster Patch.

How to Apply Patches in Single-User Mode with Failover Nodes

Perform this task to apply patches in single-user mode with failover nodes. This patch method is required if you use the Sun Cluster Data Service for Solaris Containers in a failover configuration with Sun Cluster software.

  1. Verify that the quorum device is not configured on one of the LUNs that is used as shared storage in the disksets that contain the zone path that is manually taken over in this procedure.

    1. Determine whether the quorum device is used in the disksets that contain the zone paths, and determine whether the quorum device uses SCSI-2 or SCSI-3 reservations.


      # clquorum show
      
    2. If the quorum device is within a LUN of the disksets, add a new LUN as a quorum device that is not part of any disk set containing the zone path.


      # clquorum add new-didname
      
    3. Remove the old quorum device.


      # clquorum remove old-didname
      
    4. If SCSI-2 reservations are used for the old quorum device, scrub the SCSI-2 reservations from the old quorum device and verify that no SCSI-2 reservations remain.


      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/old-didnames2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/old-didnames2
      

      Note –

      If you accidentally scrub the reservation keys on your active quorum device, you must remove and re-add the quorum device to put new reservation keys on it.


  2. Evacuate the node you want to patch.


    # clresourcegroup evacuate -n node1
    
  3. Take offline the resource group or resource groups that contain HA Solaris Container resources.


    # clresourcegroup offline resourcegroupname
    
  4. Disable all the resources in the resource groups that you took offline.


    # clresource disable resourcename
    
  5. Unmanage the resource groups you took offline.


    # clresourcegroup unmanage resourcegroupname
    
  6. Take offline the corresponding device group or device groups.


    # cldevicegroup offline cldevicegroupname
    
  7. Disable the device groups that you took offline.


    # cldevicegroup disable devicegroupname
    
  8. Boot the passive node out of the cluster.


    # reboot -- -x
    
  9. Verify that the SMF start methods have completed on the passive node before proceeding.


    # svcs -x
    
  10. Verify that any reconfiguration process on the active node has completed.


    # cluster status
    
  11. Determine whether SCSI-2 reservations exist on the disks in the disk set and, if they do, release the keys.

    • For all disks in the disk set, run the following command: /usr/cluster/lib/sc/scsi -c disfailfast -d /dev/did/rdsk/d#s2.

    • If keys are listed, release them by running the following command: /usr/cluster/lib/sc/scsi -c release -d /dev/did/rdsk/d#s2.

    When you finish releasing the reservation keys, skip Step 12 and proceed to Step 13.
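    Looping the disfailfast check over every DID device in the disk set can be sketched as follows. The scsi helper is stubbed here, since it exists only at /usr/cluster/lib/sc/scsi on a cluster node, and the DID names d4 and d5 are placeholders for the disks in your disk set.

    ```shell
    # Stubbed sweep of the SCSI-2 failfast check across a disk set.
    scsi_cmd() {   # hypothetical stand-in for /usr/cluster/lib/sc/scsi
      echo "scsi $*"
    }

    check_disk_set() {
      for did in d4 d5; do   # placeholder DID devices
        scsi_cmd -c disfailfast -d "/dev/did/rdsk/${did}s2"
      done
    }

    check_disk_set
    ```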

  12. Determine if there are any SCSI-3 reservations on the disks in the disksets.

    1. Run the following command on all disks in the disksets.


      # /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/didnames2
      
    2. If keys are listed, scrub them.


      # /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/didnames2
      
  13. Take ownership of the metaset on the passive node.


    # metaset -s disksetname -C take -f
    
  14. Mount the file system or file systems that contain the zone path on the passive node.


    # mount device mountpoint
    
  15. Switch to single-user mode on the passive node.


    # init s
    
  16. Halt any booted zones that are not under the control of the Sun Cluster Data Service for Solaris Containers.


    # zoneadm -z zonename halt
    
  17. (Optional) If you install multiple patches, for performance reasons you can choose to boot all the configured zones in single-user mode.


    # zoneadm -z zonename boot -s
    
  18. Apply the patches.
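    Steps 16 and 17 can be driven by a loop over the zones on the node. The zone list below is a stub, since zoneadm is available only on Solaris, and the halt and boot commands are echoed as a dry run rather than executed.

    ```shell
    # Stubbed sweep over zones: halt each one, then optionally boot it
    # single-user before patching (dry run: commands are echoed).
    list_zones() {   # hypothetical stand-in for zoneadm list output
      printf 'zone1\nzone2\n'
    }

    prepare_zones() {
      list_zones | while read -r zname; do
        echo "zoneadm -z $zname halt"
        echo "zoneadm -z $zname boot -s"
      done
    }

    prepare_zones
    ```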

  19. Reboot the node and wait until all its SMF start methods are finished. Perform the svcs -a command only after the node has been rebooted.


    # reboot
    

    # svcs -a
    

    The first node is now ready.

  20. Evacuate the second node you want to patch.


    # clresourcegroup evacuate -n node2
    
  21. Repeat Step 8 through Step 13 for the second node.

  22. Detach the zones you patched already to speed up the patch process.


    # zoneadm -z zonename detach
    
  23. Switch to single-user mode on the passive node.


    # init s
    
  24. Halt any booted zones that are not under the control of the Sun Cluster Data Service for Solaris Containers.


    # zoneadm -z zonename halt
    
  25. (Optional) If you install multiple patches, for performance reasons you can choose to boot all the configured zones in single-user mode.


    # zoneadm -z zonename boot -s
    
  26. Apply the patches.

  27. Attach the zones you detached.


    # zoneadm -z zonename attach -F
    
  28. Reboot the node into cluster mode.


    # reboot
    
  29. Bring online the device group or device groups.

  30. Start the resource groups.

  31. Check to see if you need to commit the patch software by using the scversions command.


    # /usr/cluster/bin/scversions
    

    You will see one of the following results:


    Upgrade commit is needed.
    
    Upgrade commit is NOT needed. All versions match.
  32. If a commit is needed, commit the patch software.


    # scversions -c
    

    Note –

    Running scversions will cause one or more CMM reconfigurations, depending on the situation.


Changing a Sun Cluster Patch

To remove a Sun Cluster patch that you have applied to your cluster, you must first remove the new Sun Cluster patch, and then reapply the previous patch or update release. To remove the new Sun Cluster patch, see the following procedures. To reapply a previous Sun Cluster patch, see one of the preceding patch-application procedures in this chapter.


Note –

Before applying a Sun Cluster patch, check the patch's README file.


How to Remove a Nonrebooting Sun Cluster Patch

  1. Become superuser on any node in the cluster.

  2. Remove the nonrebooting patch.


    # patchrm patchid
    

How to Remove a Rebooting Sun Cluster Patch

  1. Become superuser on any node in the cluster.

  2. Boot the cluster node into noncluster mode. For information about booting a node into noncluster mode, see How to Boot a Node in Noncluster Mode.

  3. Remove the rebooting patch.


    # patchrm patchid
    
  4. Reboot the cluster node back into cluster mode.


    # reboot
    
  5. Repeat Step 2 through Step 4 for each cluster node.
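The rolling removal in Steps 2 through 5 looks like this in outline. Everything is echoed as a dry run, because booting into noncluster mode and running patchrm must happen on each node in turn; the node names are examples.

```shell
# Dry-run outline of removing a rebooting patch node by node.
PATCH_ID=234567-05

remove_patch_rolling() {
  for node in phys-schost-1 phys-schost-2; do
    echo "on $node: boot into noncluster mode"
    echo "on $node: patchrm $PATCH_ID"
    echo "on $node: reboot"
  done
}

remove_patch_rolling
```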