Sun Cluster Software Installation Guide for Solaris OS

Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling)

Follow the tasks in this section to perform a nonrolling upgrade from Sun Cluster 3.x software to Sun Cluster 3.1 4/04 software. In a nonrolling upgrade, you shut down the entire cluster before you upgrade the cluster nodes. This procedure also enables you to upgrade the cluster from Solaris 8 software to Solaris 9 software.


Note –

To perform a rolling upgrade to Sun Cluster 3.1 4/04 software, instead follow the procedures in Upgrading to Sun Cluster 3.1 4/04 Software (Rolling).


Task Map: Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling)

Table 5–1 Task Map: Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling)

Task 

Instructions 

1. Read the upgrade requirements and restrictions. 

Upgrade Requirements and Restrictions

2. Take the cluster out of production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solstice DiskSuite/Solaris Volume Manager, unconfigure the mediators. 

How to Prepare the Cluster for Upgrade (Nonrolling)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update release. Optionally, upgrade VERITAS Volume Manager (VxVM). 

How to Upgrade the Solaris Operating Environment (Nonrolling)

4. Upgrade to Sun Cluster 3.1 4/04 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. 

How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling)

5. (Optional) Upgrade the Sun Cluster module software for Sun Management Center. 

How to Upgrade Sun Cluster-Module Software for Sun Management Center (Nonrolling)

6. Register new resource types, migrate existing resources to new resource types, modify resource type extension properties as needed, enable resources, and bring resource groups online. 

How to Finish Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling)

How to Prepare the Cluster for Upgrade (Nonrolling)

Before you upgrade the software, perform the following steps to take the cluster out of production:

  1. Ensure that the configuration meets requirements for upgrade.

    See Upgrade Requirements and Restrictions.

  2. Have available the CD-ROMs, documentation, and patches for all software products you are upgrading.

    • Solaris 8 or Solaris 9 operating environment

    • Sun Cluster 3.1 4/04 framework

    • Sun Cluster 3.1 4/04 data services (agents)

    • Applications that are managed by Sun Cluster 3.1 4/04 data-service agents

    • VERITAS Volume Manager

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. (Optional) Install Sun Cluster 3.1 4/04 documentation.

    Install the documentation packages in your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Java Enterprise System Accessory CD 3 CD-ROM for installation instructions.

  4. Are you upgrading from Sun Cluster 3.0 software?

    • If no, proceed to Step 5.

    • If yes, have available your list of test IP addresses, one for each public-network adapter in the cluster.

      A test IP address is required for each public-network adapter in the cluster, regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses will be used to reconfigure the adapters to use IP Network Multipathing.


      Note –

      Each test IP address must be on the same subnet as the existing IP address that is used by the public-network adapter.


      To list the public-network adapters on a node, run the following command:


      % pnmstat
      

      See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.
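The same-subnet requirement in the note above can be checked mechanically. The following sketch is illustrative only; the addresses and netmask are hypothetical examples. It compares two IPv4 addresses octet by octet under a netmask:

```shell
# Return success if two IPv4 addresses fall in the same subnet for a
# given netmask. All addresses below are hypothetical examples.
same_subnet() {
  local -a x y m
  IFS=. read -r -a x <<< "$1"   # existing adapter address
  IFS=. read -r -a y <<< "$2"   # candidate test IP address
  IFS=. read -r -a m <<< "$3"   # netmask
  local i
  for i in 0 1 2 3; do
    # Compare the network portion of each octet.
    [ $(( x[i] & m[i] )) -eq $(( y[i] & m[i] )) ] || return 1
  done
}

# Example: a test IP on the same /24 subnet as the adapter address.
same_subnet 192.168.10.17 192.168.10.200 255.255.255.0 && echo "same subnet"
```

A test address that fails this check cannot be used for the adapter's IP Network Multipathing group.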

  5. Notify users that cluster services will be unavailable during upgrade.

  6. Ensure that the cluster is functioning normally.

    • To view the current status of the cluster, run the following command from any node:


      % scstat
      

      See the scstat(1M) man page for more information.

    • Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    • Check volume-manager status.
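The log search in the second bullet can be scripted. The sketch below greps a hypothetical sample of /var/adm/messages for warning and error entries; on a cluster node you would point the grep at the real log file instead of the sample text:

```shell
# Count warning and error entries in a messages log.
# The sample lines are hypothetical; on a cluster node, replace the
# printf pipeline with: grep -Eci 'warning|error' /var/adm/messages
log_sample='Jan 10 02:11:05 phys-schost-1 genunix: [ID 936769 kern.info] sd1 ok
Jan 10 02:13:41 phys-schost-1 scsi: WARNING: /pci@1f/ide@d/sd@1,0 (sd1): incomplete read- retrying'

printf '%s\n' "$log_sample" | grep -Eci 'warning|error'
```

A nonzero count indicates messages that should be investigated before you continue.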

  7. Become superuser on a node of the cluster.

  8. Switch each resource group offline.


    # scswitch -F -g resource-group
    

    -F

    Switches a resource group offline

    -g resource-group

    Specifies the name of the resource group to take offline

  9. Disable all resources in the cluster.

    Disabling resources before the upgrade prevents the cluster from bringing them online automatically if a node is mistakenly rebooted into cluster mode.


    Note –

    If you are upgrading from a Sun Cluster 3.1 release, you can use the scsetup(1M) utility instead of the command line. From the Main Menu, choose Resource Groups, then choose Enable/Disable Resources.


    1. From any node, list all enabled resources in the cluster.


      # scrgadm -pv | grep "Res enabled"
      (resource-group:resource) Res enabled: True

    2. Identify those resources that depend on other resources.

      Disable dependent resources before you disable the resources that they depend on.

    3. Disable each enabled resource in the cluster.


      # scswitch -n -j resource
      
      -n

      Disables

      -j resource

      Specifies the resource

      See the scswitch(1M) man page for more information.

    4. Verify that all resources are disabled.


      # scrgadm -pv | grep "Res enabled"
      (resource-group:resource) Res enabled: False
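
When many resources are enabled, the resource names can be extracted from the scrgadm output with a short pipeline. The sketch below runs against a hypothetical sample of the output; the group and resource names are invented examples:

```shell
# Extract the names of enabled resources from sample `scrgadm -pv` output.
# On a cluster node, replace the printf pipeline with:
#   scrgadm -pv | grep "Res enabled"
scrgadm_out='(rg-nfs:nfs-res) Res enabled: True
(rg-nfs:hastorage-res) Res enabled: True
(rg-web:apache-res) Res enabled: False'

# Split each line on (, ), and : so that field 3 is the resource name.
printf '%s\n' "$scrgadm_out" |
  awk -F'[():]' '/Res enabled: True/ { print $3 }'
```

Each name that the pipeline prints can then be passed to scswitch -n -j to disable that resource.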
  10. Move each resource group to the unmanaged state.


    # scswitch -u -g resource-group
    

    -u

    Moves the specified resource group to the unmanaged state

    -g resource-group

    Specifies the name of the resource group to move into the unmanaged state

  11. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.


    # scstat -g
    

  12. Does your cluster use dual-string mediators for Solstice DiskSuite/Solaris Volume Manager?

    • If no, proceed to Step 13.

    • If yes, perform the following steps to unconfigure the mediators.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the diskset name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Use this information when you restore the mediators during the procedure How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling).

    3. For a diskset that uses mediators, take ownership of the diskset if no node already has ownership.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    4. Unconfigure all mediators for the diskset.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the diskset name

      -d

      Deletes from the diskset

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the diskset

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step 3 and Step 4 for each remaining diskset that uses mediators.
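
The take-ownership and unconfigure commands above can be wrapped in a loop over the affected disksets. The sketch below uses hypothetical diskset and mediator-host names, and takes the command to run as an argument so that a dry run can substitute echo for the real metaset command:

```shell
# Unconfigure mediators for every diskset that uses them.
# Diskset and mediator-host names are hypothetical examples; the first
# argument lets a dry run substitute `echo` for the real metaset command.
unconfigure_mediators() {
  cmd=${1:-metaset}
  for set in nfs-set web-set; do
    "$cmd" -s "$set" -t                                  # take ownership
    "$cmd" -s "$set" -d -m phys-schost-1 phys-schost-2   # remove mediator hosts
  done
}

# Dry run: print the metaset invocations instead of executing them.
unconfigure_mediators echo
```

Record the host list that each metaset -d -m invocation removes; you need the same list when you restore the mediators after the upgrade.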

  13. Stop all applications that are running on each node of the cluster.

  14. Ensure that all shared data is backed up.

  15. From one node, shut down the cluster.


    # scshutdown -g0 -y
    

    See the scshutdown(1M) man page for more information.

  16. Boot each node into noncluster mode.


    ok boot -x
    

  17. Ensure that each system disk is backed up.

  18. Determine whether to upgrade the Solaris operating environment.

    See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.

How to Upgrade the Solaris Operating Environment (Nonrolling)

Perform this procedure on each node in the cluster to upgrade the Solaris operating environment. If the cluster already runs on a version of the Solaris environment that supports Sun Cluster 3.1 4/04 software, this procedure is optional.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 or Solaris 9 environment to support Sun Cluster 3.1 4/04 software. See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Nonrolling) are completed.

  2. Become superuser on the cluster node to upgrade.

  3. Determine whether the following Apache links already exist, and if so, whether the file names contain an uppercase K or S:


    /etc/rc0.d/K16apache 
    /etc/rc1.d/K16apache 
    /etc/rc2.d/K16apache 
    /etc/rc3.d/S50apache 
    /etc/rcS.d/K16apache
    • If these links already exist and do contain an uppercase K or S in the file name, no further action is necessary for these links.

    • If these links do not exist, or if they exist but contain a lowercase k or s in the file name, you will move these links aside in Step 8.

  4. Comment out all entries for globally mounted file systems in the /etc/vfstab file.

    1. Make a record of all entries that are already commented out for later reference.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
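
The temporary commenting can be done with a one-line sed edit. The sketch below runs against a hypothetical two-line vfstab fragment; on a node, apply the expression to a copy of /etc/vfstab and review the result, because the pattern matches any line that contains the string global:

```shell
# Comment out vfstab lines that contain the string "global".
# The fragment below is a hypothetical example; review the output before
# use, because the pattern is a coarse match on the whole line.
vfstab_sample='/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/.devices/node@1 ufs 2 no global
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export ufs 2 yes -'

printf '%s\n' "$vfstab_sample" | sed '/global/s/^/#/'
```

Keep a record of which lines were already commented out so that you restore only the lines that you changed, as Step a instructs.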

  5. Determine which procedure to follow to upgrade the Solaris operating environment.

    Volume Manager 

    Procedure to Use 

    Location of Instructions 

    Solstice DiskSuite/Solaris Volume Manager 

    Any Solaris upgrade method except the Live Upgrade method.

    Solaris 8 or Solaris 9 installation documentation 

    VERITAS Volume Manager 

    “Upgrading VxVM and Solaris” 

    VERITAS Volume Manager installation documentation 


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 version of VxVM software as part of the Solaris upgrade process.


  6. Upgrade the Solaris software, following the procedure that you selected in Step 5.

    Note the following special instructions:

    • Do not perform the final reboot instruction in the Solaris software upgrade. Instead, return to this procedure to perform Step 7 and Step 8, then reboot into noncluster mode in Step 9 to complete the Solaris software upgrade.

    • When you are instructed to reboot a node during the upgrade process, always add the -x option to the command.

      The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:


      # reboot -- -xs
      ok boot -xs
      
  7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

  8. Move aside restored Apache links if either of the following conditions was true before you upgraded the Solaris software:

    • The Apache links listed in Step 3 did not exist.

    • The Apache links listed in Step 3 existed and contained a lowercase k or s in the file names.

    To move aside restored Apache links, which contain an uppercase K or S in the name, use the following commands to rename the files with a lowercase k or s.


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    
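The five mv commands above can also be expressed as a loop. The sketch below is an equivalent reformulation, not an additional step: it takes the root of the upgraded system as an argument (the guide uses /a) and lowercases the leading K or S of each restored link:

```shell
# Rename restored Apache run-control links under the given root so that
# the leading K or S of each file name becomes lowercase.
rename_apache_links() {
  root=$1
  for link in "$root"/etc/rc0.d/K16apache "$root"/etc/rc1.d/K16apache \
              "$root"/etc/rc2.d/K16apache "$root"/etc/rc3.d/S50apache \
              "$root"/etc/rcS.d/K16apache; do
    [ -e "$link" ] || continue        # skip links that were not restored
    dir=$(dirname "$link")
    base=$(basename "$link")
    first=$(printf '%s' "$base" | cut -c1 | tr 'KS' 'ks')
    rest=$(printf '%s' "$base" | cut -c2-)
    mv "$link" "$dir/$first$rest"
  done
}

# As in Step 8, operate on the upgraded root mounted at /a:
rename_apache_links /a
```

Run this only if one of the two conditions listed in Step 8 applied before you upgraded the Solaris software.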
  9. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    

  10. If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Note the following special instructions:

    • If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command or type Ctrl-D.


      WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
      Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
      shell when done to continue the boot process.
      
      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):  Type the root password
      

    • When the VxVM procedures instruct you to perform a final reconfiguration reboot by using the -r option, reboot into noncluster mode by using the -x option instead.


      # reboot -- -x
      
    • After the VxVM upgrade is complete, verify the entries in the /etc/vfstab file. If the upgrade commented out any of the entries that you uncommented in Step 7, uncomment those entries again.

  11. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  12. Upgrade to Sun Cluster 3.1 4/04 software.

    Go to How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling).


    Note –

    To complete upgrade from Solaris 8 to Solaris 9 software, you must also upgrade to the Solaris 9 version of Sun Cluster 3.1 4/04 software, even if the cluster already runs on Sun Cluster 3.1 4/04 software.


How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling)

This procedure describes how to upgrade the cluster to Sun Cluster 3.1 4/04 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software.


Tip –

You can perform this procedure on more than one node at the same time.


  1. Ensure that all steps in How to Prepare the Cluster for Upgrade (Nonrolling) are completed.

    If you upgraded from Solaris 8 to Solaris 9 software, also ensure that all steps in How to Upgrade the Solaris Operating Environment (Nonrolling) are completed.

  2. Become superuser on a node of the cluster.

  3. Ensure that you have installed all required Solaris software patches and hardware-related patches.

    For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.

  4. Insert the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM into the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  5. Upgrade the node to Sun Cluster 3.1 4/04 software.

    1. From the /cdrom/cdrom0 directory, change to the Solaris_sparc/Product/sun_cluster/Solaris_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

      The following example uses the path to the Solaris 8 version of Sun Cluster software.


      # cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
      

    2. Upgrade the cluster framework software.

      • To upgrade from Sun Cluster 3.0 software, run the following command:


        # ./scinstall -u update -S interact -M patchdir=dirname
        
        -S

        Specifies the test IP addresses to use to convert NAFO groups to IP Network Multipathing groups

        interact

        Specifies that scinstall prompts the user for each test IP address needed

        -M patchdir=dirname[[,patchlistfile=filename]]

        Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

        The -M option is not required. You can use any method you prefer for installing patches.

      • To upgrade from Sun Cluster 3.1 software, run the following command:


        # ./scinstall -u update -M patchdir=dirname
        
        -M patchdir=dirname[[,patchlistfile=filename]]

        Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

      The -M option is not required. You can use any method you prefer for installing patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

      Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and the path to the upgrade log.

      See the scinstall(1M) man page for more information. See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test addresses for IP Network Multipathing.


      Note –

      Sun Cluster 3.1 4/04 software requires at least version 3.5.1 of Sun Explorer software. The Sun Cluster upgrade includes installation of the Sun Explorer data collector software, which is used in conjunction with the sccheck utility. If another version of Sun Explorer software was installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be recreated manually.


      During Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

      • Convert NAFO groups to IP Network Multipathing groups but keep the original NAFO-group name.

      • Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.

      • Set the local-mac-address? variable to true, if the variable is not already set to that value.

    3. Change to the CD-ROM root directory and eject the CD-ROM.

  6. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current version of Sun Cluster and Solaris software. See your application documentation for installation instructions. In addition, follow these guidelines to upgrade applications in a Sun Cluster 3.1 4/04 configuration:

    • If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.

    • If you are instructed to reboot a node during the upgrade process, always add the -x option to the command.

      The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:


      # reboot -- -xs
      ok boot -xs
      
  7. (Optional) Upgrade Sun Cluster data services to the Sun Cluster 3.1 4/04 software versions.


    Note –

    You must upgrade the Sun Cluster HA for Oracle 3.0 64-bit for Solaris 9 data service to the Sun Cluster 3.1 4/04 version. Otherwise, you can continue to use Sun Cluster 3.0 data services after you upgrade to Sun Cluster 3.1 4/04 software.


    Only those data services that are delivered on the Java Enterprise System Accessory CD 3 CD-ROM are automatically upgraded by the scinstall(1M) utility. You must manually upgrade any custom or third-party data services. Follow the procedures provided with those data services.

    1. Insert the Java Enterprise System Accessory CD 3 CD-ROM into the CD-ROM drive on the node to upgrade.

    2. Upgrade the data-service software.


      # scinstall -u update -s all -d /cdrom/cdrom0
      

      -u update

      Specifies upgrade

      -s all

      Updates all Sun Cluster data services that are installed on the node

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and the path to the upgrade log.

    3. Change to the CD-ROM root directory and eject the CD-ROM.

    4. As needed, manually upgrade any custom data services that are not supplied on the Java Enterprise System Accessory CD 3 CD-ROM.

    5. Install any Sun Cluster 3.1 4/04 data-service patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  8. After all nodes are upgraded, reboot each node into the cluster.


    # reboot
    

  9. Verify that all upgraded software is at the same version on all upgraded nodes.

    1. On each upgraded node, view the installed levels of Sun Cluster software.


      # scinstall -pv
      

    2. From one node, verify that all upgraded cluster nodes are running in cluster mode (Online).


      # scstat -n
      

      See the scstat(1M) man page for more information about displaying cluster status.

  10. Did you upgrade from Solaris 8 to Solaris 9 software?

    • If no, proceed to Step 14.

    • If yes, perform Step 11 through Step 13 to verify the storage configuration and to migrate the storage database to Solaris 9 device IDs.

  11. On each node, run the following command to verify the consistency of the storage configuration:


    # scdidadm -c
    
    -c

    Performs a consistency check


    Caution –

    Do not proceed to Step 12 until your configuration passes this consistency check. Failure to do so might result in errors in device identification and cause data corruption.


    The following table lists the possible output from the scdidadm -c command and the action you must take, if any.

    Example Message 

    Action to Take 

    device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced

    Go to Recovering From Storage Configuration Changes During Upgrade and perform the appropriate repair procedure.

    device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update

    None. You update this device ID in Step 12.

    No output message 

    None 

    See the scdidadm(1M) man page for more information.
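
The decision table above can be read as a small case statement. The following sketch is illustrative only; the sample message is hypothetical, and the function simply maps each class of scdidadm -c output to the action that the table lists:

```shell
# Map a line of scdidadm -c output to the action from the table above.
classify_scdidadm() {
  case $1 in
    "")  echo "none: configuration passed the check" ;;
    *"does not match physical device"*)
         echo "repair: see Recovering From Storage Configuration Changes During Upgrade" ;;
    *"needs to be updated"*)
         echo "none yet: scdidadm -R updates this device ID in the next step" ;;
    *)   echo "unrecognized: see the scdidadm(1M) man page" ;;
  esac
}

# Hypothetical sample message of the second kind:
classify_scdidadm "device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update"
```

Only after every node's output falls in the first or third class should you continue to the device-ID migration.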

  12. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.


    # scdidadm -R all
    
    -R

    Performs repair procedures

    all

    Specifies all devices

  13. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful:


    # scdidadm -c
    
    • If the scdidadm command displays a message, return to Step 11 to make further corrections to the storage configuration or the storage database.

    • If the scdidadm command displays no messages, the device-ID migration is successful. If device-ID migration is verified on all cluster nodes, proceed to Step 14.

  14. Does your configuration use dual-string mediators for Solstice DiskSuite/Solaris Volume Manager?

    • If no, proceed to Step 15.

    • If yes, restore the mediator configurations.

    1. Determine which node has ownership of a diskset to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the diskset name

    2. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    3. Recreate the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the diskset

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the diskset

    4. Repeat Step 1 through Step 3 for each diskset in the cluster that uses mediators.

  15. Did you upgrade VxVM?

    • If no, proceed to Step 16.

    • If yes, upgrade all disk groups.

      To upgrade a disk group to the highest version supported by the VxVM release you installed, run the following command from the primary node of the disk group:


      # vxdg upgrade dgname
      

      See your VxVM administration documentation for more information about upgrading disk groups.

  16. Do you use Sun Management Center to monitor the cluster?

    • If yes, go to How to Upgrade Sun Cluster-Module Software for Sun Management Center (Nonrolling).

    • If no, go to How to Finish Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling).

Example—Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 4/04 Software

The following example shows the process of a nonrolling upgrade of a two-node cluster from Sun Cluster 3.0 to Sun Cluster 3.1 4/04 software on the Solaris 8 operating environment. The cluster node names are phys-schost-1 and phys-schost-2.


(On the first node, upgrade framework software from the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/ \
Solaris_8/Tools
phys-schost-1# ./scinstall -u update -S interact
 
(On the first node, upgrade data services from the Java Enterprise System Accessory CD 3 CD-ROM)
phys-schost-1# ./scinstall -u update -s all -d /cdrom/cdrom0
 
(On the second node, upgrade framework software from the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM)
phys-schost-2# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/ \
Solaris_8/Tools
phys-schost-2# ./scinstall -u update -S interact
 
(On the second node, upgrade data services from the Java Enterprise System Accessory CD 3 CD-ROM)
phys-schost-2# ./scinstall -u update -s all -d /cdrom/cdrom0
 
(Reboot each node into the cluster)
phys-schost-1# reboot
phys-schost-2# reboot
 
(Verify cluster membership)
# scstat
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    phys-schost-1  Online
  Cluster node:    phys-schost-2  Online

How to Upgrade Sun Cluster-Module Software for Sun Management Center (Nonrolling)

Perform the following steps to upgrade Sun Cluster-module software on the Sun Management Center server machine, help-server machine, and console machine.

If you intend to upgrade the Sun Management Center software itself, do not perform this procedure. Instead, proceed to How to Finish Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling) to finish Sun Cluster software upgrade. Then go to How to Upgrade Sun Management Center Software to upgrade Sun Management Center software and the Sun Cluster module.

  1. As superuser, remove the existing Sun Cluster-module packages.

    Use the pkgrm(1M) command to remove all Sun Cluster-module packages from all locations that are listed in the following table.


    # pkgrm module-package
    

    Location 

    Module Package to Remove 

    Sun Management Center console machine 

    SUNWscscn

    Sun Management Center server machine 

    SUNWscssv

    Sun Management Center help-server machine 

    SUNWscshl


    Note –

    Sun Cluster-module software on the cluster nodes was already upgraded during the cluster-framework upgrade.


  2. As superuser, reinstall Sun Cluster-module packages from the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM to the locations listed in the following table.

    In the CD-ROM path, the value of ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/
    # pkgadd module-package
    

    Location 

    Module Package to Install 

    Sun Management Center console machine 

    SUNWscshl

    Sun Management Center server machine 

    SUNWscssv

    Sun Management Center help-server machine 

    SUNWscshl

    You install the help-server package SUNWscshl on both the console machine and the help-server machine. You do not upgrade to a new SUNWscscn package on the console machine.

  3. Finish the upgrade.

    Go to How to Finish Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling).

How to Finish Upgrading to Sun Cluster 3.1 4/04 Software (Nonrolling)

Perform this procedure to reregister all resource types that received a new version from the upgrade, to migrate eligible resources to the new versions of their resource types, and then to re-enable resources and bring resource groups back online.


Note –

To upgrade future versions of resource types, see “Upgrading a Resource Type” in Sun Cluster Data Service Planning and Administration Guide for Solaris OS.


  1. Ensure that all steps in How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling) are completed.

  2. If you upgraded any data services that are not supplied on the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM or the Java Enterprise System Accessory CD 3 CD-ROM, register the new resource types for those data services.

    Follow the documentation that comes with the data services.

  3. From any node, start the scsetup(1M) utility.


    # scsetup
    

  4. Register the new resource types.

    1. From the Main Menu, choose Resource groups.

    2. Choose Resource type registration.

    3. Choose Register all resource types which are not yet registered.

      The scsetup utility displays all resource types that are not registered.

      Follow the prompts to register the new resource types.

  5. Migrate all eligible resources to the new versions of their resource types.

    1. From the Resource Group menu, choose Change properties of a resource.

    2. Choose Manage resource versioning.

    3. Choose Show versioning status.

      The scsetup utility displays all resources for which a new version of the resource type was installed during the upgrade. Note the new resource types to which you will migrate the resources.

    4. Choose Re-version all eligible resources.

      Follow the prompts to upgrade eligible resources to the new versions of their resource types.

    5. Return to the Change properties of a resource menu.

  6. Modify extension properties for new resource type versions.

    1. For each new resource type that you migrated existing resources to, determine whether the new resource type requires additional modifications to its extension properties.

      Refer to each related data service manual for the requirements of each new resource type.


      Note –

      You do not need to change the Type_version property of a new resource type. That property was modified when you migrated resources to their new resource types in Step 5.


      • If no resource type requires additional modifications other than the Type_version property, go to Step 7.

      • If one or more resource types require additional modifications to extension properties, proceed to Step b.

    2. From the Change properties of a resource menu, choose Change extension resource properties.

    3. Follow the prompts to modify the necessary extension properties.

      Refer to your data service documentation for the names of extension properties and values to modify.

    4. Repeat for each resource type that requires modifications.

    5. Return to the Resource Groups menu.

  7. Re-enable all disabled resources.

    1. From the Resource Group Menu, choose Enable/Disable a resource.

    2. Choose a resource to enable and follow the prompts.

    3. Repeat Step b for each disabled resource.

    4. When all resources are re-enabled, type q to return to the Resource Group Menu.

  8. Bring each resource group back online.

    1. From the Resource Group Menu, choose Online/Offline or Switchover a resource group.

    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.

  9. When all resource groups are back online, exit the scsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

    The cluster upgrade is complete. You can now return the cluster to production.