Sun Cluster Software Installation Guide for Solaris OS

Upgrading to Sun Cluster 3.1 9/04 Software (Rolling)

This section provides procedures to perform a rolling upgrade from Sun Cluster 3.1 software to Sun Cluster 3.1 9/04 software. In a rolling upgrade, you upgrade one cluster node at a time, while the other cluster nodes remain in production. After all nodes are upgraded and have rejoined the cluster, you must commit the cluster to the new software version before you can use any new features.

To upgrade from Sun Cluster 3.0 software, follow the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).


Note –

Sun Cluster 3.1 9/04 software does not support a rolling upgrade from Solaris 8 software to Solaris 9 software. You can upgrade the Solaris software to an update release during a Sun Cluster rolling upgrade. To upgrade a Sun Cluster configuration from Solaris 8 software to Solaris 9 software, perform the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).


Task Map: Upgrading to Sun Cluster 3.1 9/04 Software (Rolling)

To perform a rolling upgrade, follow the tasks that are listed in Table 5–2.

Table 5–2 Task Map: Upgrading to Sun Cluster 3.1 9/04 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. 

Upgrade Requirements and Support Guidelines

2. On one node of the cluster, move resource groups and device groups to another cluster node, and ensure that shared data and system disks are backed up. If the cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure the mediators. Then reboot the node into noncluster mode. 

How to Prepare a Cluster Node for a Rolling Upgrade

3. Upgrade the Solaris OS on the cluster node, if necessary, to a supported Solaris update release. SPARC: Optionally, upgrade VERITAS File System (VxFS) and VERITAS Volume Manager (VxVM). 

How to Perform a Rolling Upgrade of a Solaris Maintenance Update

4. Upgrade the cluster node to Sun Cluster 3.1 9/04 framework and data-service software. If necessary, upgrade applications. SPARC: If you upgraded VxVM, upgrade disk groups. Then reboot the node back into the cluster. 

How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software

5. Repeat Tasks 2 through 4 on each remaining node to upgrade. 

 

6. Use the scversions command to commit the cluster to the upgrade. If the cluster uses dual-string mediators, reconfigure the mediators. Optionally, migrate existing resources to new resource types.

How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software

7. (Optional) SPARC: Upgrade the Sun Cluster module to Sun Management Center.

SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center

How to Prepare a Cluster Node for a Rolling Upgrade

Perform this procedure on one node at a time. You take each node out of the cluster while you upgrade it; the remaining nodes continue to function as active cluster members.


Note –

Perform a rolling upgrade on one node at a time. Until all cluster nodes are upgraded and the upgrade is committed, do not use features that are introduced by the new release, and keep cluster configuration changes to a minimum.


  1. Ensure that the configuration meets requirements for upgrade.

    See Upgrade Requirements and Support Guidelines.

  2. Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading before you begin to upgrade the cluster.

    • Solaris 8 or Solaris 9 OS

    • Sun Cluster 3.1 9/04 framework

    • Sun Cluster 3.1 9/04 data services (agents)

    • Applications that are managed by Sun Cluster 3.1 9/04 data-service agents

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. (Optional) Install Sun Cluster 3.1 9/04 documentation.

    Install the documentation packages in your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 9/04 CD-ROM to access installation instructions.

  4. Become superuser on one node of the cluster to upgrade.

  5. If not already installed, install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  6. For a two-node cluster, if the cluster uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.

    2. Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.


      # /usr/opt/SUNWesm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S is the slice of device N.
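      For scripted comparisons against the quorum device in the next step, the device ID and slice can be split out of this path with ordinary shell parameter expansion. The following sketch uses a hypothetical sample path; it is an illustration, not part of the documented procedure.

```shell
# Hypothetical sample of the dNsS path that dscfg prints.
path=/dev/did/rdsk/d4s7

dev=${path##*/}     # strip the directory: d4s7
id=${dev%s*}        # drop the slice suffix: d4
id=${id#d}          # device ID N: 4
slice=${dev##*s}    # slice S: 7

echo "device=$id slice=$slice"   # prints "device=4 slice=7"
```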

    3. Identify the existing quorum device.


      # scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      # dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      


      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.


      # /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      

  7. From any node, view the current status of the cluster.

    Save the output as a baseline for later comparison.


    % scstat
    % scrgadm -pv[v]

    See the scstat(1M) and scrgadm(1M) man pages for more information.

  8. Move all resource groups and device groups that are running on the node to upgrade.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource groups and device groups

    See the scswitch(1M) man page for more information.

  9. Verify that the move was completed successfully.


    # scstat -g -D
    
    -g

    Shows status for all resource groups

    -D

    Shows status for all disk device groups

  10. Ensure that the system disk, applications, and all data are backed up.

  11. If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the disk set name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software.
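      One way to capture the list is to save the metaset output and extract the mediator host section. The awk pattern below is a sketch written against sample output; the exact layout of the "Mediator Host(s)" section may differ on your release, so verify it before relying on it. On a cluster node you would pipe real `metaset -s setname` output instead of the sample text.

```shell
# Sample fragment of 'metaset -s setname' output (layout is an assumption).
sample='Mediator Host(s)    Aliases
  phys-schost-1
  phys-schost-2'

# Print each mediator host listed after the "Mediator Host(s)" header.
hosts=$(printf '%s\n' "$sample" |
  awk '/Mediator Host/ {found=1; next} found && NF {print $1}')
printf '%s\n' "$hosts"
```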

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      # metaset -s setname -t
      
      -t

      Takes ownership of the disk set

    4. Unconfigure all mediators for the disk set.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk-set name

      -d

      Deletes from the disk set

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step 3 through Step 4 for each remaining disk set that uses mediators.

  12. Shut down the node that you want to upgrade and boot it into noncluster mode.

    On SPARC based systems, perform the following commands:


    # shutdown -y -g0
    ok boot -x
    

    On x86 based systems, perform the following commands:


    # shutdown -y -g0
    ...
                          <<< Current Boot Parameters >>>
    Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
    Boot args:
    
    Type   b [file-name] [boot-flags] <ENTER>    to boot with options
    or     i <ENTER>                             to enter boot interpreter
    or     <ENTER>                               to boot with defaults
    
                      <<< timeout in 5 seconds >>>
    Select (b)oot or (i)nterpreter: b -x
    

    The other nodes of the cluster continue to function as active cluster members.

  13. To upgrade the Solaris software to a Maintenance Update release, go to How to Perform a Rolling Upgrade of a Solaris Maintenance Update.


    Note –

    The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support Sun Cluster 3.1 9/04 software. See the Sun Cluster Release Notes for Solaris OS for information about supported releases of the Solaris OS.


  14. Go to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.

How to Perform a Rolling Upgrade of a Solaris Maintenance Update

Perform this procedure to upgrade the Solaris 8 or Solaris 9 OS to a supported Maintenance Update release.


Note –

To upgrade a cluster from Solaris 8 to Solaris 9 software, with or without upgrading Sun Cluster software as well, you must perform a nonrolling upgrade. Go to Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).


  1. Ensure that all steps in How to Prepare a Cluster Node for a Rolling Upgrade are completed.

  2. Temporarily comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    Perform this step to prevent the Solaris upgrade from attempting to mount the global devices.
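    The commenting can be scripted. The sed sketch below assumes that global file-system entries carry the global mount option, which is typical of Sun Cluster vfstab entries; it is demonstrated on a scratch copy rather than the live /etc/vfstab, and the sample entries are hypothetical.

```shell
# Build a scratch vfstab with one global and one local entry (sample data).
vfstab=$(mktemp)
printf '%s\n' \
  '/dev/md/ds/dsk/d20 /dev/md/ds/rdsk/d20 /global/ds ufs 2 no global,logging' \
  '/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export ufs 2 yes -' > "$vfstab"

# Keep a backup, then comment out every uncommented line that mentions 'global'.
cp "$vfstab" "$vfstab.bak"
sed '/global/ s/^[^#]/#&/' "$vfstab.bak" > "$vfstab"

cat "$vfstab"
# To restore in Step 4, the backup can be copied back into place.
```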

  3. Follow the instructions in the Solaris maintenance update installation guide to install the Maintenance Update release.


    Note –

    Do not reboot the node when prompted to reboot at the end of installation processing.


  4. Uncomment all entries in the /a/etc/vfstab file for globally mounted file systems that you commented out in Step 2.

  5. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.


    Note –

    Do not reboot the node until Step 6.


  6. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    

  7. Upgrade the Sun Cluster software.

    Go to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.

How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software

Perform this procedure to upgrade a node to Sun Cluster 3.1 9/04 software while the remaining cluster nodes are in cluster mode.


Note –

Until all nodes of the cluster are upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.


  1. Ensure that all steps in How to Prepare a Cluster Node for a Rolling Upgrade are completed.

  2. If you upgraded the Solaris OS to a Maintenance Update release, ensure that all steps in How to Perform a Rolling Upgrade of a Solaris Maintenance Update are completed.

  3. Ensure that you have installed all required Solaris software patches and hardware-related patches.

    For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.

  4. Become superuser on the node of the cluster.

  5. Install Sun Web Console packages.

    Perform this step on each node of the cluster. These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  6. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    

  7. Upgrade the cluster framework software.


    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is on the Sun Cluster 3.1 9/04 CD-ROM.



    # ./scinstall -u update [-M patchdir=dirname]
    
    -M patchdir=dirname[,patchlistfile=filename]

    Specifies the path to patch information so that the specified patches can be installed by the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

    The -M option is not required. You can use any method you prefer for installing patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


    Note –

    Sun Cluster 3.1 9/04 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.


    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and the path to the upgrade log.

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  9. (Optional) Upgrade Sun Cluster data services.


    Note –

    If you are using the Sun Cluster HA for Oracle 3.0 64-bit for Solaris 9 data service, you must upgrade to the Sun Cluster 3.1 9/04 version.

    You can continue to use any other Sun Cluster 3.0 data services after you upgrade to Sun Cluster 3.1 9/04 software.


    1. Insert the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive on the node.

    2. Upgrade the data-service software.

      Use one of the following methods:

      • To upgrade one or more specified data services, type the following command.


        # scinstall -u update -s srvc[,srvc,…] -d /cdrom/cdrom0
        

        -u update

        Upgrades a cluster node to a later Sun Cluster software release

        -s srvc

        Upgrades the specified data service

        -d

        Specifies an alternate directory location for the CD-ROM image

      • To upgrade all data services present on the node, type the following command.


        # scinstall -u update -s all -d /cdrom/cdrom0
        

        -s all

        Upgrades all data services

      The scinstall command assumes that updates for all installed data services exist in the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and displays the path to the upgrade log.

    3. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  10. As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM.

  11. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  12. Install any Sun Cluster 3.1 9/04 software patches, if you did not already install them by using the scinstall command.

  13. Install any Sun Cluster 3.1 9/04 data-service software patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  14. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions. In addition, follow these guidelines to upgrade applications in a Sun Cluster 3.1 9/04 configuration:

    • If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.

    • If you are instructed to reboot a node during the upgrade process, always add the -x option to the command.

      The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:

      On SPARC based systems, perform the following commands:


      # reboot -- -xs
      ok boot -xs
      

      On x86 based systems, perform the following commands:


      # reboot -- -xs
      ...
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type   b [file-name] [boot-flags] <ENTER>    to boot with options
      or     i <ENTER>                             to enter boot interpreter
      or     <ENTER>                               to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -xs
      


    Note –

    Do not upgrade an application if the newer version of the application cannot coexist in the cluster with the older version of the application.


  15. Reboot the node into the cluster.


    # reboot
    

  16. Run the following command on the upgraded node to verify that Sun Cluster 3.1 9/04 software was installed successfully.


    # scinstall -pv
    

    The first line of output states which version of Sun Cluster software the node is running. This version should match the version you just upgraded to.

  17. From any node, verify the status of the cluster configuration.


    % scstat
    % scrgadm -pv[v]

    Output should be the same as for Step 7 in How to Prepare a Cluster Node for a Rolling Upgrade.

  18. If you have another node to upgrade, return to How to Prepare a Cluster Node for a Rolling Upgrade and repeat all upgrade procedures on the next node to upgrade.

  19. When all nodes in the cluster are upgraded, go to How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software.

Example—Rolling Upgrade From Sun Cluster 3.1 to Sun Cluster 3.1 9/04 Software

The following example shows the process of a rolling upgrade of a cluster node from Sun Cluster 3.1 to Sun Cluster 3.1 9/04 software on the Solaris 8 OS. The example includes the installation of Sun Web Console software and the upgrade of all installed data services that have new versions on the Sun Cluster 3.1 9/04 Agents CD-ROM. The cluster node name is phys-schost-1.


(Install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_web_console/2.1
phys-schost-1# ./setup

(Upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-1# ./scinstall -u update -S interact
 
(Upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-1# scinstall -u update -s all -d /cdrom/cdrom0

(Reboot the node into the cluster)
phys-schost-1# reboot

(Verify that software upgrade succeeded)
# scinstall -pv

(Verify cluster status)
# scstat
# scrgadm -pv

How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software

  1. Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

  2. From one node, check the upgrade status of the cluster.


    # scversions
    

  3. From the following table, perform the action that is listed for the output message from Step 2.

    Output Message 

    Action 

    Upgrade commit is needed.

    Go to Step 4.

    Upgrade commit is NOT needed. All versions match.

    Skip to Step 6.

    Upgrade commit cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.

    Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software to upgrade the remaining cluster nodes.

    Check upgrade cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.

    Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software to upgrade the remaining cluster nodes.
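    When this check is scripted, the decision table above can be encoded directly. The message strings below are the ones listed in the table; the next_step function name and the action labels are illustrative, not Sun Cluster commands.

```shell
# Map a scversions message to the next action from the table above.
next_step() {
  case "$1" in
    'Upgrade commit is needed.')              echo commit ;;       # go to Step 4
    'Upgrade commit is NOT needed.'*)         echo finished ;;     # skip to Step 6
    *'until all cluster nodes are upgraded'*) echo upgrade-rest ;; # upgrade remaining nodes
    *)                                        echo unknown ;;
  esac
}

# On a cluster node you would call: next_step "$(scversions)"
next_step 'Upgrade commit is needed.'   # prints "commit"
```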

  4. After all nodes have rejoined the cluster, from one node commit the cluster to the upgrade.


    # scversions -c
    

    Committing the upgrade enables the cluster to use all features of the newer software. New features are available only after you commit the upgrade.

  5. From one node, verify that the cluster upgrade commitment has succeeded.


    # scversions
    Upgrade commit is NOT needed. All versions match.

  6. If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.

    1. Determine which node has ownership of a disk set to which you are adding the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the disk-set name

    2. If no node has ownership, take ownership of the disk set.


      # metaset -s setname -t
      
      -t

      Takes ownership of the disk set

    3. Re-create the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the disk set

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the disk set

    4. Repeat Step 1 through Step 3 for each disk set in the cluster that uses mediators.

  7. If you upgraded any data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.

  8. (Optional) Switch each resource group and device group back to its original node.


    # scswitch -z -g resource-group -h node
    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch

    -g resource-group

    Specifies the resource group to switch

    -h node

    Specifies the name of the node to switch to

    -D disk-device-group

    Specifies the device group to switch

  9. Restart any applications.

    Follow the instructions that are provided in your vendor documentation.

  10. (Optional) Migrate resources to new resource type versions.

    See “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves the following tasks:

    • Register the new resource type.

    • Migrate the eligible resource to the new version of its resource type.

    • Modify the extension properties of the resource type as specified in the manual for the related data service.

  11. If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center.

The cluster upgrade is complete.