Sun Cluster 3.1 10/03 Software Installation Guide

Chapter 3 Upgrading Sun Cluster Software

This chapter provides the following step-by-step procedures to upgrade a Sun Cluster 3.x configuration to Sun Cluster 3.1 10/03 software:

Overview of Upgrading a Sun Cluster Configuration

This section provides the following guidelines to upgrade a Sun Cluster configuration:

Upgrade Requirements and Restrictions

Observe the following requirements and restrictions when you upgrade to Sun Cluster 3.1 10/03 software:

Choosing a Sun Cluster Upgrade Method

Choose one of the following methods to upgrade your cluster to Sun Cluster 3.1 10/03 software:

  • Nonrolling upgrade, in which you shut down the entire cluster before you upgrade the cluster nodes. Use this method to upgrade from Sun Cluster 3.0 software or to upgrade the cluster from Solaris 8 to Solaris 9 software.

  • Rolling upgrade, in which you upgrade one cluster node at a time while the other cluster nodes remain in production. Use this method only to upgrade from Sun Cluster 3.1 software, and only if you do not need to upgrade from Solaris 8 to Solaris 9 software.

If your cluster configuration meets the requirements to perform a rolling upgrade, you can still choose to perform a nonrolling upgrade instead.

For overview information about planning your Sun Cluster 3.1 10/03 configuration, see Chapter 1, Planning the Sun Cluster Configuration.

Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling)

Perform the following tasks for a nonrolling upgrade from Sun Cluster 3.x software to Sun Cluster 3.1 10/03 software. In a nonrolling upgrade, you shut down the entire cluster before you upgrade the cluster nodes. This procedure also enables you to upgrade the cluster from Solaris 8 software to Solaris 9 software.


Note –

To perform a rolling upgrade to Sun Cluster 3.1 10/03 software, instead perform the procedures in Upgrading to Sun Cluster 3.1 10/03 Software (Rolling).


Table 3–1 Task Map: Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling)

Task 

Instructions 

1. Read the upgrade requirements and restrictions. 

Upgrade Requirements and Restrictions

2. Take the cluster out of production, disable resources, and back up shared data and system disks.  

How to Prepare the Cluster for Upgrade (Nonrolling)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update release. Optionally, upgrade VERITAS Volume Manager (VxVM).  

How to Upgrade the Solaris Operating Environment (Nonrolling)

4. Upgrade to Sun Cluster 3.1 10/03 framework and data-service software. If necessary, upgrade applications. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade to Sun Cluster 3.1 10/03 Software (Nonrolling)

5. (Optional) Upgrade the Sun Cluster–module software for Sun Management Center.

How to Upgrade Sun Cluster–Module Software for Sun Management Center (Nonrolling)

6. Reregister resource types, enable resources, and bring resource groups online. 

How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling)

How to Prepare the Cluster for Upgrade (Nonrolling)

Before you upgrade the software, perform the following steps to take the cluster out of production:

  1. Ensure that the configuration meets requirements for upgrade.

    See Upgrade Requirements and Restrictions.

  2. Have available the CD-ROMs, documentation, and patches for all software products you are upgrading.

    • Solaris 8 or Solaris 9 operating environment

    • Sun Cluster 3.1 10/03 framework

    • Sun Cluster 3.1 10/03 data services (agents)

    • Applications that are managed by Sun Cluster 3.1 10/03 data-service agents

    • VERITAS Volume Manager

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. (Optional) Install Sun Cluster 3.1 10/03 documentation.

    Install the documentation packages on your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 10/03 CD-ROM to access installation instructions.

  4. Are you upgrading from Sun Cluster 3.0 software?

    • If no, proceed to Step 5.

    • If yes, have available your list of test IP addresses, one for each public network adapter in the cluster.

      A test IP address is required for each public network adapter in the cluster, regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses will be used to reconfigure the adapters to use IP Network Multipathing.


      Note –

      Each test IP address must be on the same subnet as the existing IP address that is used by the public network adapter.


      To list the public network adapters on a node, run the following command:


      % pnmstat
      

      See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.

  5. Notify users that cluster services will be unavailable during upgrade.

  6. Ensure that the cluster is functioning normally.

    • To view the current status of the cluster, run the following command from any node:


      % scstat
      

      See the scstat(1M) man page for more information.

    • Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    • Check volume-manager status.
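
    For example, you might run commands similar to the following to scan the messages log and to check volume-manager status. These commands are suggestions only; metastat applies to Solstice DiskSuite/Solaris Volume Manager and vxprint applies to VERITAS Volume Manager.


    # egrep -i 'error|warn' /var/adm/messages
    # metastat
    # vxprint -ht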

  7. Become superuser on a node of the cluster.

  8. Switch each resource group offline.


    # scswitch -F -g resource-group
    

    -F

    Switches a resource group offline

    -g resource-group

    Specifies the name of the resource group to take offline

  9. Disable all resources in the cluster.

    Disabling resources before the upgrade prevents the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.


    Note –

    If you are upgrading from a Sun Cluster 3.1 release, you can use the scsetup(1M) utility instead of the command line. From the Main Menu, choose Resource Groups, then choose Enable/Disable Resources.


    1. From any node, list all enabled resources in the cluster.


      # scrgadm -pv | grep "Res enabled"
      

    2. Identify those resources that depend on other resources.

      You must disable dependent resources before you disable the resources that they depend on.
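
      For example, you can search the verbose resource-property output for dependency settings. The following command is one possible approach; adjust the search pattern as needed.


      # scrgadm -pvv | grep -i dependencies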

    3. Disable each enabled resource in the cluster.


      # scswitch -n -j resource
      
      -n

      Disables

      -j resource

      Specifies the resource

      See the scswitch(1M) man page for more information.

  10. Move each resource group to the unmanaged state.


    # scswitch -u -g resource-group
    

    -u

    Moves the specified resource group to the unmanaged state

    -g resource-group

    Specifies the name of the resource group to move into the unmanaged state

  11. Verify that all resources on all nodes are disabled and that all resource groups are in the unmanaged state.


    # scstat -g
    

  12. Stop all databases that are running on each node of the cluster.

  13. Ensure that all shared data is backed up.

  14. From one node, shut down the cluster.


    # scshutdown
    ok

    See the scshutdown(1M) man page for more information.

  15. Boot each node into noncluster mode.


    ok boot -x
    

  16. Ensure that each system disk is backed up.
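
    For example, one common way to back up the root file system of a system disk is the ufsdump command. The tape device shown here is a placeholder; use the backup method and devices that apply to your site.


    # ufsdump 0ucf /dev/rmt/0 /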

  17. Determine whether to upgrade the Solaris operating environment.

    See “Supported Products” in Sun Cluster 3.1 10/03 Release Notes for more information.

How to Upgrade the Solaris Operating Environment (Nonrolling)

Perform this procedure on each node in the cluster to upgrade the Solaris operating environment. If the cluster already runs on a version of the Solaris environment that supports Sun Cluster 3.1 10/03 software, this procedure is optional.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 or Solaris 9 environment to support Sun Cluster 3.1 10/03 software. See “Supported Products” in Sun Cluster 3.1 10/03 Release Notes for more information.


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Nonrolling) are completed.

  2. Become superuser on the cluster node to upgrade.

  3. Determine whether the following Apache links already exist, and if so, whether the file names contain an uppercase K or S:


    /etc/rc0.d/K16apache 
    /etc/rc1.d/K16apache 
    /etc/rc2.d/K16apache 
    /etc/rc3.d/S50apache 
    /etc/rcS.d/K16apache

    • If these links already exist and do contain an uppercase K or S in the file name, no further action is necessary for these links.

    • If these links do not exist, or if these links exist but instead contain a lowercase k or s in the file name, you move aside these links in Step 8.

  4. Comment out all entries for globally mounted file systems in the /etc/vfstab file.

    1. Make a record of all entries that are already commented out for later reference.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
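
      For example, an entry for a globally mounted file system might look similar to the following before and after you comment it out. The device and mount-point names shown here are placeholders.


      /dev/md/nfsset/dsk/d100 /dev/md/nfsset/rdsk/d100 /global/nfs ufs 2 yes global,logging
      #/dev/md/nfsset/dsk/d100 /dev/md/nfsset/rdsk/d100 /global/nfs ufs 2 yes global,logging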

  5. Determine which procedure to follow to upgrade the Solaris operating environment.

    Volume Manager 

    Procedure to Use 

    Location of Instructions 

    Solstice DiskSuite/Solaris Volume Manager 

    Upgrading Solaris software  

    Solaris 8 or Solaris 9 installation documentation 

    VERITAS Volume Manager 

    Upgrading VxVM and Solaris software 

    VERITAS Volume Manager installation documentation 

  6. Upgrade the Solaris software, following the procedure you selected in Step 5.


    Note –

    Ignore the instruction to reboot at the end of the Solaris software upgrade process. You must first perform Step 7 and Step 8, then reboot into noncluster mode in Step 9 to complete the Solaris software upgrade.

    If you are instructed to reboot a node at other times during the upgrade process, always add the -x option to the command. This option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:


    # reboot -- -xs
    ok boot -xs
    


  7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

  8. If the Apache links in Step 3 did not already exist or if they contained a lowercase k or s in the file names before you upgraded the Solaris software, move aside the restored Apache links.

    Use the following commands to rename the files with a lowercase k or s:


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

  9. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    

  10. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.
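
    For example, you might apply an individual patch with the patchadd command. The patch location shown here is a placeholder; you can use any patch-installation method that your site prefers.


    # patchadd /var/tmp/patch-id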


    Note –

    Do not reboot after you add patches. You reboot the node after you upgrade the Sun Cluster software.


    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  11. Upgrade to Sun Cluster 3.1 10/03 software.

    Go to How to Upgrade to Sun Cluster 3.1 10/03 Software (Nonrolling).


    Note –

    To complete upgrade from Solaris 8 to Solaris 9 software, you must also upgrade to the Solaris 9 version of Sun Cluster 3.1 10/03 software, even if the cluster already runs on Sun Cluster 3.1 10/03 software.


How to Upgrade to Sun Cluster 3.1 10/03 Software (Nonrolling)

This procedure describes how to upgrade the cluster to Sun Cluster 3.1 10/03 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software.


Tip –

You can perform this procedure on more than one node at the same time.


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Nonrolling) are completed.

    If you upgraded from Solaris 8 to Solaris 9 software, also ensure that all steps in How to Upgrade the Solaris Operating Environment (Nonrolling) are completed.

  2. Become superuser on a node of the cluster.

  3. Ensure that you have installed all required Solaris software patches and hardware-related patches.

    For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.

  4. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive on the node.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

  5. Upgrade the node to Sun Cluster 3.1 10/03 software.

    1. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools
      

    2. Upgrade the cluster framework software.

      • To upgrade from Sun Cluster 3.0 software, run the following command:


        # ./scinstall -u update -S interact -M patchdir=dirname
        
        -S

        Specifies the test IP addresses to use to convert NAFO groups to IP Network Multipathing groups

        interact

        Specifies that scinstall prompts the user for each test IP address needed

        -M patchdir=dirname[[,patchlistfile=filename]]

        Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

        The -M option is not required. You can use any method you prefer for installing patches.

      • To upgrade from Sun Cluster 3.1 software, run the following command:


        # ./scinstall -u update -M patchdir=dirname
        
        -M patchdir=dirname[[,patchlistfile=filename]]

        Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

      The -M option is not required. You can use any method you prefer for installing patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


      Tip –

      If upgrade processing is interrupted, use the scstat(1M) command to ensure that the node is in noncluster mode (Offline), then restart the scinstall command.


      # scstat -n
      -- Cluster Nodes --
                         Node name      Status
                         ---------      ------
        Cluster node:    nodename        Offline
        Cluster node:    nodename        Offline

      See the scinstall(1M) man page for more information. See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test addresses for IP Network Multipathing.


      Note –

      Sun Cluster 3.1 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster 3.1 10/03 software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.


      During Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

      • Convert NAFO groups to IP Network Multipathing groups but keep the original NAFO-group name.

      • Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.

      • Set the local-mac-address? variable to true, if the variable is not already set to that value.

    3. Change to the CD-ROM root directory and eject the CD-ROM.
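
      For example, if vold manages the CD-ROM, you might run the following commands:


      # cd /
      # eject cdrom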

  6. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current version of Sun Cluster and Solaris software. See your application documentation for installation instructions. In addition, follow these guidelines to upgrade applications in a Sun Cluster 3.1 10/03 configuration:

    • If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.

    • If you are instructed to reboot a node during the upgrade process, always add the -x option to the command. This option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:


      # reboot -- -xs
      ok boot -xs
      

  7. (Optional) Upgrade Sun Cluster data services to the Sun Cluster 3.1 10/03 software versions.


    Note –

    You must upgrade the Sun Cluster HA for Oracle 3.0 64–bit for Solaris 9 data service to the Sun Cluster 3.1 10/03 version. Otherwise, you can continue to use Sun Cluster 3.0 data services after upgrade to Sun Cluster 3.1 10/03 software.


    Only those data services that are provided on the Sun Cluster 3.1 Agents CD-ROM are automatically upgraded by scinstall(1M). You must manually upgrade any custom or third-party data services.

    1. Insert the Sun Cluster 3.1 Agents CD-ROM into the CD-ROM drive on the node to upgrade.

    2. Upgrade the data-service software.


      # scinstall -u update -s all -d /cdrom/cdrom0
      

      -u update

      Specifies upgrade

      -s all

      Updates all Sun Cluster data services that are installed on the node


      Tip –

      If upgrade processing is interrupted, use the scstat(1M) command to ensure that the node is in noncluster mode (Offline), then restart the scinstall command.


      # scstat -n
      -- Cluster Nodes --
                         Node name      Status
                         ---------      ------
        Cluster node:    nodename        Offline
        Cluster node:    nodename        Offline

    3. Change to the CD-ROM root directory and eject the CD-ROM.

    4. As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.1 Agents CD-ROM.

    5. Install any Sun Cluster 3.1 10/03 data-service patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  8. After all nodes are upgraded, reboot each node into the cluster.


    # reboot							
    

  9. Verify that all upgraded software is at the same version on all upgraded nodes.

    1. On each upgraded node, view the installed levels of Sun Cluster software.


      # scinstall -pv
      

    2. From one node, verify that all upgraded cluster nodes are running in cluster mode (Online).


      # scstat -n
      

      See the scstat(1M) man page for more information about displaying cluster status.

  10. Did you upgrade from Solaris 8 to Solaris 9 software?

    • If no, proceed to Step 14.

    • If yes, proceed to Step 11.

  11. On each node, run the following command to verify the consistency of the storage configuration:


    # scdidadm -c
    
    -c

    Perform a consistency check


    Caution –

    Do not proceed to Step 12 until your configuration passes this consistency check. Failure to do so might result in errors in device identification and cause data corruption.


    The following table lists the possible output from the scdidadm -c command and the action you must take, if any.

    Example Message 

    Action to Take 

    device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced

    Go to Recovering From Storage Configuration Changes During Upgrade and perform the appropriate repair procedure.

    device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update

    None. You update this device ID in Step 12.

    No output message 

    None 

    See the scdidadm(1M) man page for more information.

  12. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.


    # scdidadm -R all
    
    -R

    Perform repair procedures

    all

    Specify all devices

  13. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful:


    # scdidadm -c
    
    • If the scdidadm command displays a message, return to Step 11 to make further corrections to the storage configuration or the storage database.

    • If the scdidadm command displays no messages, the device-ID migration is successful. If device-ID migration is verified on all cluster nodes, proceed to Step 14.

  14. Did you upgrade VxVM?

    • If no, proceed to Step 15.

    • If yes, upgrade all disk groups.

      To upgrade a disk group to the highest version supported by the VxVM release you installed, run the following command from the primary node of the disk group:


      # vxdg upgrade dgname
      

      See your VxVM administration documentation for more information about upgrading disk groups.

  15. Do you intend to use Sun Management Center to monitor the cluster?

    • If yes, go to How to Upgrade Sun Cluster–Module Software for Sun Management Center (Nonrolling).

    • If no, go to How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling).

Example—Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 10/03 Software

The following example shows the process of a nonrolling upgrade of a two-node cluster from Sun Cluster 3.0 to Sun Cluster 3.1 10/03 software on the Solaris 8 operating environment. The cluster node names are phys-schost-1 and phys-schost-2.


(On the first node, upgrade framework software from the Sun Cluster 3.1 10/03 CD-ROM)
phys-schost-1# cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Tools
phys-schost-1# ./scinstall -u update -S interact
 
(On the first node, upgrade data services from the Sun Cluster 3.1 Agents CD-ROM)
phys-schost-1# ./scinstall -u update -s all -d /cdrom/cdrom0
 
(On the second node, upgrade framework software from the Sun Cluster 3.1 10/03 CD-ROM)
phys-schost-2# cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Tools
phys-schost-2# ./scinstall -u update -S interact
 
(On the second node, upgrade data services from the Sun Cluster 3.1 Agents CD-ROM)
phys-schost-2# ./scinstall -u update -s all -d /cdrom/cdrom0
 
(Reboot each node into the cluster)
phys-schost-1# reboot
phys-schost-2# reboot
 
(Verify cluster membership)
# scstat
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    phys-schost-1  Online
  Cluster node:    phys-schost-2  Online

How to Upgrade Sun Cluster–Module Software for Sun Management Center (Nonrolling)

Perform the following steps to upgrade to the Sun Cluster 3.1 10/03 module software packages for Sun Management Center on the Sun Management Center server machine and help-server machine.

  1. Ensure that all Sun Management Center core packages are installed on the appropriate machines, as described in your Sun Management Center installation documentation.

    This step includes installing Sun Management Center agent packages on each cluster node.

  2. Become superuser on the Sun Management Center server machine.

  3. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive.

  4. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages
    

  5. Install the Sun Cluster–module server package SUNWscssv.


    # pkgadd -d . SUNWscssv
    

  6. Change to the CD-ROM root directory and eject the CD-ROM.

  7. Become superuser on the Sun Management Center help-server machine.

  8. Repeat Step 3 through Step 6, but in Step 5 install the Sun Cluster–module help-server package SUNWscshl instead of SUNWscssv.

  9. Finish the upgrade.

    Go to How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling).

How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling)

Perform this procedure to reregister and re-version all resource types that received a new version from the upgrade, and then to re-enable resources and bring resource groups back online.


Note –

To upgrade future versions of resource types, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service 4/03 Planning and Administration Guide.


  1. Ensure that all steps in How to Upgrade to Sun Cluster 3.1 10/03 Software (Nonrolling) are completed.

  2. From any node, start the scsetup(1M) utility.


    # scsetup
    

  3. To work with resource groups, type 2 (Resource groups).

  4. To register resource types, type 4 (Resource type registration).

    Type yes when prompted to continue.

  5. Type 1 (Register all resource types which are not yet registered).

    The scsetup utility displays all resource types that are not registered.

    Type yes to continue to register these resource types.

  6. Type 8 (Change properties of a resource).

    Type yes to continue.

  7. Type 3 (Manage resource versioning).

    Type yes to continue.

  8. Type 1 (Show versioning status).

    The scsetup utility displays which resources you can upgrade to new versions of the same resource type. The utility also displays the state that the resource should be in before the upgrade can begin.

    Type yes to continue.

  9. Type 4 (Re-version all eligible resources).

    Type yes to continue when prompted.

  10. Return to the Resource Group Menu.

  11. Type 6 (Enable/Disable a resource).

    Type yes to continue when prompted.

  12. Select a resource to enable and follow the prompts.

  13. Repeat Step 12 for each disabled resource.

  14. When all resources are re-enabled, type q to return to the Resource Group Menu.

  15. Type 5 (Online/Offline or Switchover a resource group).

    Type yes to continue when prompted.

  16. Follow the prompts to bring each resource group online.

  17. Exit the scsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

The cluster upgrade is complete. You can now return the cluster to production.

Upgrading to Sun Cluster 3.1 10/03 Software (Rolling)

This section provides the following procedures to perform a rolling upgrade from Sun Cluster 3.1 software to Sun Cluster 3.1 10/03 software. In a rolling upgrade, you upgrade one cluster node at a time, while the other cluster nodes remain in production.

To upgrade from Sun Cluster 3.0 software, perform the procedures in Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling).


Note –

Sun Cluster 3.1 10/03 software does not support rolling upgrade from Solaris 8 software to Solaris 9 software. You can upgrade Solaris software to an update release during Sun Cluster rolling upgrade. To upgrade a Sun Cluster configuration from Solaris 8 software to Solaris 9 software, perform the procedures in Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling).


Table 3–2 Task Map: Upgrading to Sun Cluster 3.1 10/03 Software (Rolling)

Task 

Instructions 

1. Read the upgrade requirements and restrictions. 

Upgrade Requirements and Restrictions

2. Move resource groups and device groups off the node to upgrade, and ensure that shared data and system disks are backed up. 

How to Prepare the Cluster for Upgrade (Rolling)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update release. Optionally, upgrade VERITAS Volume Manager (VxVM).  

How to Upgrade to a Solaris Maintenance Update Release (Rolling)

4. Upgrade to Sun Cluster 3.1 10/03 framework and data-service software. If necessary, upgrade applications. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade to Sun Cluster 3.1 10/03 Software (Rolling)

5. Upgrade the Sun Cluster–module software for Sun Management Center, if needed. Reregister resource types, enable resources, and bring resource groups online. 

How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Rolling)

How to Prepare the Cluster for Upgrade (Rolling)

Perform this procedure on one node at a time. The node that you upgrade is taken out of the cluster, while the remaining nodes continue to function as active cluster members.


Note –

Do not use any new features of the update release, install new data services, or issue any administrative configuration commands until all nodes of the cluster are successfully upgraded.


  1. Ensure that the configuration meets requirements for upgrade.

    See Upgrade Requirements and Restrictions.

  2. Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading before you begin to upgrade the cluster.

    • Solaris 8 or Solaris 9 operating environment

    • Sun Cluster 3.1 10/03 framework

    • Sun Cluster 3.1 10/03 data services (agents)

    • Applications that are managed by Sun Cluster 3.1 10/03 data-service agents

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. (Optional) Install Sun Cluster 3.1 10/03 documentation.

    Install the documentation packages on your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 10/03 CD-ROM to access installation instructions.

  4. From any node, view the current status of the cluster.

    Save the output as a baseline for later comparison.


    % scstat
    % scrgadm -pv[v]

    See the scstat(1M) and scrgadm(1M) man pages for more information.
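
    For example, to keep a baseline you might redirect the output to files. The file names shown here are placeholders.


    % scstat > /var/tmp/scstat.before
    % scrgadm -pvv > /var/tmp/scrgadm.before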

  5. Become superuser on one node of the cluster to upgrade.

  6. Move all resource groups and device groups off the node to upgrade.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource groups and device groups

    See the scswitch(1M) man page for more information.

  7. Verify that the evacuation completed successfully.


    # scstat -g -D
    
    -g

    Show status for all resource groups

    -D

    Show status for all disk device groups

  8. Ensure that the system disk and data are backed up.

  9. Shut down the node to upgrade and boot it into noncluster mode.


    # shutdown -y -g0
    ok boot -x
    

    The other nodes of the cluster continue to function as active cluster members.

  10. Do you intend to upgrade the Solaris software to a Maintenance Update release?

    • If yes, go to How to Upgrade to a Solaris Maintenance Update Release (Rolling).

    • If no, go to How to Upgrade to Sun Cluster 3.1 10/03 Software (Rolling).


    Note –

    The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris operating environment to support Sun Cluster 3.1 10/03 software. See the Sun Cluster 3.1 10/03 Release Notes for information about supported releases of the Solaris operating environment.


How to Upgrade to a Solaris Maintenance Update Release (Rolling)

Perform this procedure to upgrade the Solaris 8 or Solaris 9 operating environment to a supported Maintenance Update release.


Note –

To upgrade a cluster from Solaris 8 to Solaris 9 software, with or without upgrading Sun Cluster software as well, you must perform a nonrolling upgrade. Go to Upgrading to Sun Cluster 3.1 10/03 Software (Nonrolling).


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Rolling) are completed.

  2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

    Perform this step to prevent the Solaris upgrade from attempting to mount the global devices.

  3. Follow the installation instructions for the Solaris Maintenance Update release to which you are upgrading.


    Note –

    Do not reboot the node when prompted to reboot.


  4. Uncomment all entries in the /a/etc/vfstab file for globally mounted file systems that you commented out in Step 2.

  5. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.


    Note –

    Do not reboot the node until Step 6.


  6. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    

  7. Upgrade the Sun Cluster software.

    Go to How to Upgrade to Sun Cluster 3.1 10/03 Software (Rolling).

How to Upgrade to Sun Cluster 3.1 10/03 Software (Rolling)

Perform this procedure to upgrade a node to Sun Cluster 3.1 10/03 software while the remaining cluster nodes are in cluster mode.


Note –

Do not use any new features provided in the Sun Cluster 3.1 10/03 software until all nodes of the cluster are upgraded.


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Rolling) are completed.

    If you upgraded the Solaris operating environment to a Maintenance Update release, also ensure that all steps in How to Upgrade to a Solaris Maintenance Update Release (Rolling) are completed.

  2. Upgrade to Sun Cluster 3.1 10/03 software.

    1. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive on the node.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.

    2. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools
      

    3. Install the Sun Cluster 3.1 10/03 software.


      Note –

      Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Cluster 3.1 10/03 CD-ROM.



      # ./scinstall -u update -M patchdir=dirname
      
      -M patchdir=dirname[[,patchlistfile=filename]]

      Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

      The -M option is not required. You can use any method you prefer for installing patches.


      Tip –

      If upgrade processing is interrupted, use the scstat(1M) command to ensure that the node is in noncluster mode (Offline), then restart the scinstall command.


      # scstat -n
      -- Cluster Nodes --
                         Node name      Status
                         ---------      ------
        Cluster node:    nodename        Offline
        Cluster node:    nodename        Offline

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

      See the scinstall(1M) man page for more information.


      Note –

      Sun Cluster 3.1 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster 3.1 10/03 software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.


    4. Change to the CD-ROM root directory and eject the CD-ROM.

    5. Install any Sun Cluster 3.1 10/03 software patches.

  3. Do you intend to upgrade any data services?

  4. Upgrade applications as needed.


    Note –

    Do not upgrade an application if the newer version cannot coexist in the cluster with the older version.


    Follow the instructions that are provided in your third-party documentation.

  5. (Optional) For each node on which data services are installed, upgrade to the Sun Cluster 3.1 10/03 data-service update software.


    Note –

    You must upgrade the Sun Cluster HA for Oracle 3.0 64–bit for Solaris 9 data service to the Sun Cluster 3.1 10/03 version. Otherwise, you can continue to use Sun Cluster 3.0 data services after upgrade to Sun Cluster 3.1 10/03 software.


    1. Insert the Sun Cluster 3.1 Agents CD-ROM into the CD-ROM drive on the node.

    2. Upgrade to the Sun Cluster 3.1 10/03 data-service update software.

      Use one of the following methods:

      • To upgrade one or more specified data services, type the following command.


        # scinstall -u update -s srvc[,srvc,…] -d cdrom-image
        
        -u update

        Upgrades a cluster node to a later Sun Cluster software release

        -s srvc

        Upgrades the specified data service

        -d cdrom-image

        Specifies an alternate directory location for the CD-ROM image

      • To upgrade all data services present on the node, type the following command.


        # scinstall -u update -s all -d cdrom-image
        

        -s all

        Upgrades all data services

        This command assumes that updates for all installed data services exist on the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.

    3. Change to the CD-ROM root directory and eject the CD-ROM.

    4. Install any Sun Cluster 3.1 10/03 data-service software patches.

    5. Verify that each data-service update patch is installed successfully.

      View the upgrade log file that is referenced at the end of the upgrade output messages.
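
      For example, you can also list the installed patches with the showrev command and search for a specific patch ID. The patch ID shown here is a placeholder.


      # showrev -p | grep patch-id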

  6. Reboot the node into the cluster.


    # reboot
    

  7. Run the following command on the upgraded node to verify that Sun Cluster 3.1 10/03 software is installed successfully.


    # scinstall -pv
    

  8. From any node, verify the status of the cluster configuration.


    % scstat
    % scrgadm -pv[v]

    Output should be the same as for Step 4 in How to Prepare the Cluster for Upgrade (Rolling).

  9. Do you have another node to upgrade?

    • If yes, return to How to Prepare the Cluster for Upgrade (Rolling) and repeat these procedures on the next node.

    • If no, go to How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Rolling).

How to Finish Upgrading to Sun Cluster 3.1 10/03 Software (Rolling)

  1. Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

  2. Are you using Sun Management Center to monitor your Sun Cluster configuration?

    • If no, proceed to Step 3.

    • If yes, perform the following steps.

    1. Ensure that all Sun Management Center core packages are installed on the appropriate machines, as described in your Sun Management Center installation documentation.

      This step includes installing Sun Management Center agent packages on each cluster node.

    2. Become superuser on the Sun Management Center server machine.

    3. Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive.

    4. Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages
      

    5. Install the Sun Cluster–module server package SUNWscssv.


      # pkgadd -d . SUNWscssv
      

    6. Change to the CD-ROM root directory and eject the CD-ROM.

    7. Become superuser on the Sun Management Center help-server machine.

    8. Repeat Step 3 through Step 6, but in Step 5 install the Sun Cluster–module help-server package SUNWscshl instead of SUNWscssv.

  3. Reregister and re-version all resource types that received a new version from the upgrade, then re-enable resources and bring resource groups back online.

    1. From any node, start the scsetup(1M) utility.


      # scsetup
      

    2. To work with resource groups, type 2 (Resource groups).

    3. To register resource types, type 4 (Resource type registration).

      Type yes when prompted to continue.

    4. Type 1 (Register all resource types which are not yet registered).

      The scsetup utility displays all resource types that are not registered.

      Type yes to continue to register these resource types.

    5. Type 8 (Change properties of a resource).

      Type yes to continue.

    6. Type 3 (Manage resource versioning).

      Type yes to continue.

    7. Type 1 (Show versioning status).

      The scsetup utility displays which resources you can upgrade to new versions of the same resource type, and the state that the resource should be in before the upgrade can begin.

      Type yes to continue.

    8. Type 4 (Re-version all eligible resources).

      Type yes to continue when prompted.

    9. Return to the Resource Group Menu.

    10. Type 6 (Enable/Disable a resource).

      Type yes to continue when prompted.

    11. Select a resource to enable and follow the prompts.

    12. Repeat Step 11 for each disabled resource.

    13. When all resources are re-enabled, type q to return to the Resource Group Menu.

    14. Type 5 (Online/Offline or Switchover a resource group).

      Type yes to continue when prompted.

    15. Follow the prompts to bring each resource group online.

    16. Exit the scsetup utility.

      Type q to back out of each submenu, or press Ctrl-C.

  4. Restart any applications.

    Follow the instructions that are provided in your third-party documentation.

The cluster upgrade is complete.

Recovering From Storage Configuration Changes During Upgrade

This section provides the following repair procedures to follow if changes were inadvertently made to the storage configuration during upgrade:

How to Handle Storage Reconfiguration During an Upgrade

Any changes to the storage topology, including running Sun Cluster commands, should be completed before you upgrade the cluster to Solaris 9 software. If, however, changes were made to the storage topology during the upgrade, perform the following procedure. This procedure ensures that the new storage configuration is correct and that existing storage that was not reconfigured is not mistakenly altered.

  1. Ensure that the storage topology is correct.

    Check whether the devices that were flagged as possibly being replaced map to devices that actually were replaced. If the devices were not replaced, check for and correct possible accidental configuration changes, such as incorrect cabling.

  2. Become superuser on a node that is attached to the unverified device.

  3. Manually update the unverified device.


    # scdidadm -R device
    
    -R device

    Performs repair procedures on the specified device

    See the scdidadm(1M) man page for more information.

  4. Update the DID driver.


    # scdidadm -ui
    # scdidadm -r
    
    -u

    Loads the device ID configuration table into the kernel

    -i

    Initializes the DID driver

    -r

    Reconfigures the database

  5. Repeat Step 2 through Step 4 on all other nodes that are attached to the unverified device.

  6. Return to the remaining upgrade tasks.

How to Resolve Mistaken Storage Changes During an Upgrade

If accidental changes are made to the storage cabling during the upgrade, perform the following procedure to change the storage configuration back to the correct state.


Note –

This procedure assumes that no physical storage was actually changed. If physical or logical storage devices were changed or replaced, instead follow procedures in How to Handle Storage Reconfiguration During an Upgrade.


  1. Change the storage topology back to its original configuration.

    Check the configuration of the devices that were flagged as possibly being replaced, including the cabling.

  2. As superuser, update the DID driver on each node of the cluster.


    # scdidadm -ui
    # scdidadm -r
    
    -u

    Loads the device–ID configuration table into the kernel

    -i

    Initializes the DID driver

    -r

    Reconfigures the database

    See the scdidadm(1M) man page for more information.

  3. Did the scdidadm command return any error messages in Step 2?

    • If no, proceed to Step 4.

    • If yes, return to Step 1 to make further modifications to correct the storage configuration, then repeat Step 2.

  4. Return to the remaining upgrade tasks.

Sun Management Center Software Upgrade

This section describes how to upgrade from Sun Management Center 2.1.1 to either Sun Management Center 3.0 software or Sun Management Center 3.5 software on a Sun Cluster 3.1 10/03 configuration.

How to Upgrade Sun Management Center Software

  1. Have available the following items:

    • Sun Cluster 3.1 10/03 CD-ROM or the path to the CD-ROM image. You use the CD-ROM to reinstall the Sun Cluster 3.1 10/03 version of the Sun Cluster–module packages after you upgrade Sun Management Center software.

    • Sun Management Center documentation.

    • Sun Management Center patches and Sun Cluster–module patches, if any.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  2. Stop any Sun Management Center processes.

    1. If the Sun Management Center console is running, exit the console.

      In the console window, select File>Exit from the menu bar.

    2. On each Sun Management Center agent machine (cluster node), stop the Sun Management Center agent process.


      # /opt/SUNWsymon/sbin/es-stop -a
      

    3. On the Sun Management Center server machine, stop the Sun Management Center server process.


      # /opt/SUNWsymon/sbin/es-stop -S
      

  3. As superuser, remove Sun Cluster–module packages.

    Use the pkgrm(1M) command to remove all Sun Cluster–module packages from all locations listed in the following table.

    Location 

    Package to Remove 

    Each cluster node 

    SUNWscsam, SUNWscsal

    Sun Management Center console machine 

    SUNWscscn

    Sun Management Center server machine 

    SUNWscssv

    Sun Management Center help-server machine 

    SUNWscshl

    If you do not remove the listed packages, the Sun Management Center software upgrade might fail because of package dependency problems. You reinstall these packages in Step 5, after you upgrade Sun Management Center software.
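
    For example, to remove the Sun Cluster–module agent packages from a cluster node, you might run the following command:


    # pkgrm SUNWscsam SUNWscsal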

  4. Upgrade the Sun Management Center software.

    Follow the upgrade procedures in your Sun Management Center documentation.

  5. As superuser, reinstall Sun Cluster–module packages to the locations listed in the table below.


    # cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Packages/
    # pkgadd module-package
    

    Location 

    Package to Install 

    Each cluster node 

    SUNWscsam, SUNWscsal

    Sun Management Center server machine 

    SUNWscssv

    Sun Management Center console machine 

    SUNWscshl

    Sun Management Center help-server machine 

    SUNWscshl

    You install the help-server package SUNWscshl on both the console machine and the help-server machine.

  6. Apply any Sun Management Center patches and any Sun Cluster–module patches to each node of the cluster.

  7. Restart Sun Management Center agent, server, and console processes.

    Follow procedures in How to Start Sun Management Center.

  8. Load the Sun Cluster module.

    Follow procedures in How to Load the Sun Cluster Module.

    If the Sun Cluster module was previously loaded, unload the module and then reload it to clear all cached alarm definitions on the server. To unload the module, select Module⇒Unload Module from the console's Details window.