Sun Cluster Software Installation Guide for Solaris OS

How to Finish a Rolling Upgrade to Sun Cluster 3.1 8/05 Software

Before You Begin

Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

Steps
  1. From one node, check the upgrade status of the cluster.


    # scversions
    
  2. From the following table, perform the action that is listed for the output message from Step 1.

    Output Message: Upgrade commit is needed.
    Action: Proceed to Step 3.

    Output Message: Upgrade commit is NOT needed. All versions match.
    Action: Skip to Step 5.

    Output Message: Upgrade commit cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.
    Action: Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 8/05 Software to upgrade the remaining cluster nodes.

    Output Message: Check upgrade cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.
    Action: Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 8/05 Software to upgrade the remaining cluster nodes.
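    The mapping in this table can be expressed as a small shell helper. The function name is hypothetical; the patterns match the literal messages that scversions prints.

    ```shell
    # Hypothetical helper: map the scversions(1M) output line to the
    # action that the table above calls for.
    next_action() {
      case "$1" in
        "Upgrade commit is NOT needed."*) echo "none" ;;          # all versions match
        "Upgrade commit is needed."*)     echo "commit" ;;        # run scversions -c
        *"cannot be performed"*)          echo "upgrade-nodes" ;; # upgrade remaining nodes first
        *)                                echo "unknown" ;;
      esac
    }
    ```

    For example, on a cluster that still needs the commit, `next_action "$(scversions | head -1)"` would print `commit`.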

  3. After all nodes have rejoined the cluster, commit the cluster to the upgrade from one node.


    # scversions -c
    

    Committing the upgrade enables the cluster to use all features in the newer software. New features are available only after you commit the upgrade.

  4. From one node, verify that the cluster upgrade commitment has succeeded.


    # scversions
    Upgrade commit is NOT needed. All versions match.
  5. Copy the security files for the common agent container to all cluster nodes.

    This step ensures that security files for the common agent container are identical on all cluster nodes and that the copied files retain the correct file permissions.

    1. On each node, stop the Sun Java Web Console agent.


      # /usr/sbin/smcwebserver stop
      
    2. On each node, stop the security file agent.


      # /opt/SUNWcacao/bin/cacaoadm stop
      
    3. On one node, change to the /etc/opt/SUNWcacao/ directory.


      phys-schost-1# cd /etc/opt/SUNWcacao/
      
    4. Create a tar file of the /etc/opt/SUNWcacao/security/ directory.


      phys-schost-1# tar cf /tmp/SECURITY.tar security
      
    5. Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.

    6. On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.

      Any security files that already exist in the /etc/opt/SUNWcacao/ directory are overwritten.


      phys-schost-2# cd /etc/opt/SUNWcacao/
      phys-schost-2# tar xf /tmp/SECURITY.tar
      
    7. Delete the /tmp/SECURITY.tar file from each node in the cluster.

      You must delete each copy of the tar file to avoid security risks.


      phys-schost-1# rm /tmp/SECURITY.tar
      phys-schost-2# rm /tmp/SECURITY.tar
      
    8. On each node, start the security file agent.


      phys-schost-1# /opt/SUNWcacao/bin/cacaoadm start
      phys-schost-2# /opt/SUNWcacao/bin/cacaoadm start
      
    9. On each node, start the Sun Java Web Console agent.


      phys-schost-1# /usr/sbin/smcwebserver start
      phys-schost-2# /usr/sbin/smcwebserver start
      
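    The substeps above can be collected into one script. The sketch below is a dry run that only prints each command; the node names, the choice of phys-schost-1 as the source node, and the use of ssh and scp for remote execution are assumptions, since the guide runs each command directly on the node concerned.

    ```shell
    # Dry-run sketch of the security-file copy; prints the commands
    # instead of executing them. Node names and ssh/scp are assumptions.
    NODES="phys-schost-1 phys-schost-2"
    SRC="phys-schost-1"   # node whose /etc/opt/SUNWcacao/security is copied out

    run() { echo "$@"; }  # print only; drop the echo to execute for real

    sync_cacao_security() {
      for n in $NODES; do   # stop the web console and security file agents
        run ssh "$n" /usr/sbin/smcwebserver stop
        run ssh "$n" /opt/SUNWcacao/bin/cacaoadm stop
      done
      # archive the security files on the source node
      run ssh "$SRC" "cd /etc/opt/SUNWcacao && tar cf /tmp/SECURITY.tar security"
      for n in $NODES; do   # distribute and extract on the other nodes
        [ "$n" = "$SRC" ] && continue
        run scp "$SRC:/tmp/SECURITY.tar" "$n:/tmp/SECURITY.tar"
        run ssh "$n" "cd /etc/opt/SUNWcacao && tar xf /tmp/SECURITY.tar"
      done
      for n in $NODES; do   # remove the archive, then restart the agents
        run ssh "$n" rm /tmp/SECURITY.tar
        run ssh "$n" /opt/SUNWcacao/bin/cacaoadm start
        run ssh "$n" /usr/sbin/smcwebserver start
      done
    }
    sync_cacao_security
    ```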
  6. If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.

    1. Determine which node has ownership of a disk set to which you are adding the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the disk-set name

    2. If no node has ownership, take ownership of the disk set.


      # scswitch -z -D setname -h node
      
      -z

      Changes mastery

      -D

      Specifies the name of the disk set

      -h node

      Specifies the name of the node to become primary of the disk set

    3. Re-create the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the disk set

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the disk set

    4. Repeat Step 1 through Step 3 for each disk set in the cluster that uses mediators.
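    For a cluster with several mediated disk sets, the substeps above can be driven by a loop. The sketch below is a dry run that prints the commands; the disk-set names, the mediator host list, and the owning node are placeholders.

    ```shell
    # Dry-run sketch of restoring mediators for several disk sets;
    # prints the commands instead of running them. Set names, mediator
    # hosts, and the owning node are placeholders.
    DISKSETS="setA setB"
    MEDIATORS="phys-schost-1 phys-schost-2"
    OWNER="phys-schost-1"

    run() { echo "$@"; }   # print only; drop the echo to execute for real

    restore_mediators() {
      for set in $DISKSETS; do
        run metaset -s "$set"                   # 1. check which node owns the set
        run scswitch -z -D "$set" -h "$OWNER"   # 2. take ownership if no node has it
        run metaset -s "$set" -a -m $MEDIATORS  # 3. re-create the mediators
      done
    }
    restore_mediators
    ```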

  7. If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.

  8. (Optional) Switch each resource group and device group back to its original node.


    # scswitch -z -g resource-group -h node
    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch

    -g resource-group

    Specifies the resource group to switch

    -h node

    Specifies the name of the node to switch to

    -D disk-device-group

    Specifies the device group to switch

  9. Restart any applications.

    Follow the instructions that are provided in your vendor documentation.

  10. Migrate resources to new resource type versions.


    Note –

    If you upgrade to the Sun Cluster HA for NFS data service for the Solaris 10 OS, you must migrate to the new resource type version. See Upgrading the SUNW.nfs Resource Type in Sun Cluster Data Service for NFS Guide for Solaris OS for more information.

    For all other data services, this step is optional.


    See Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves performing the following tasks:

    • Registration of the new resource type

    • Migration of the eligible resource to the new version of its resource type

    • Modification of the extension properties of the resource type as specified in the manual for the related data service
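    From the command line, the three tasks map onto scrgadm(1M) roughly as shown in this dry-run sketch; the resource type, resource name, version number, and extension property are all placeholders, and the data service manual may require the resource to be offline before migration.

    ```shell
    # Dry-run sketch of a resource type migration; prints the commands
    # instead of running them. The type, resource name, version, and
    # extension property are placeholders; consult the data service
    # manual for the real values.
    run() { echo "$@"; }   # print only; drop the echo to execute for real

    migrate_resource_type() {
      run scrgadm -a -t SUNW.nfs                        # register the new resource type
      run scrgadm -c -j nfs-rs -y Type_version=3.1      # migrate the resource to it
      run scrgadm -c -j nfs-rs -x Monitor_retry_count=4 # set an extension property
    }
    migrate_resource_type
    ```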

Next Steps

If you have a SPARC-based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center.

Otherwise, the cluster upgrade is complete.