Sun Cluster Software Installation Guide for Solaris OS

Performing a Nonrolling Upgrade

Follow the tasks in this section to perform a nonrolling upgrade from Sun Cluster 3.x software to Sun Cluster 3.1 8/05 software. In a nonrolling upgrade, you shut down the entire cluster before you upgrade the cluster nodes. This procedure also enables you to upgrade the cluster from Solaris 8 software to Solaris 9 software or from Solaris 9 software to Solaris 10 10/05 software or compatible.


Note –

To perform a rolling upgrade to Sun Cluster 3.1 8/05 software, instead follow the procedures in Performing a Rolling Upgrade.


Table 5–1 Task Map: Performing a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software

1. Read the upgrade requirements and restrictions.
   Instructions: Upgrade Requirements and Software Support Guidelines

2. Remove the cluster from production and back up shared data. If the cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure the mediators.
   Instructions: How to Prepare the Cluster for a Nonrolling Upgrade

3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Optionally, upgrade VERITAS Volume Manager (VxVM).
   Instructions: How to Perform a Nonrolling Upgrade of the Solaris OS

4. Install or upgrade software on which Sun Cluster 3.1 8/05 software has a dependency.
   Instructions: How to Upgrade Dependency Software Before a Nonrolling Upgrade

5. Upgrade to Sun Cluster 3.1 8/05 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. SPARC: If you upgraded VxVM, upgrade disk groups.
   Instructions: How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software

6. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types.
   Instructions: How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software

7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.
   Instructions: SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

How to Prepare the Cluster for a Nonrolling Upgrade

Perform this procedure to remove the cluster from production.

Before You Begin

Perform the following tasks:

Steps
  1. Ensure that the cluster is functioning normally.

    • To view the current status of the cluster, run the following command from any node:


      % scstat
      

      See the scstat(1M) man page for more information.

    • Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    • Check the volume-manager status.
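
    For example, the following commands perform these checks (a sketch; metastat applies to Solstice DiskSuite or Solaris Volume Manager, vxprint to VERITAS Volume Manager, and the volume-manager commands might require superuser privileges):


      % egrep -i "error|warning" /var/adm/messages
      # metastat
      # vxprint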

  2. (Optional) Install Sun Cluster 3.1 8/05 documentation.

    Install the documentation packages in your preferred location, such as on an administrative console or a documentation server. See the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86, for installation instructions.

  3. Notify users that cluster services will be unavailable during the upgrade.

  4. Become superuser on a node of the cluster.

  5. Start the scsetup(1M) utility.


    # scsetup
    

    The Main Menu is displayed.

  6. Switch each resource group offline.

    1. From the scsetup Main Menu, choose the menu item, Resource groups.

    2. From the Resource Group Menu, choose the menu item, Online/Offline or Switchover a resource group.

    3. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

    4. When all resource groups are offline, type q to return to the Resource Group Menu.

  7. Disable all resources in the cluster.

    Disabling resources before the upgrade prevents the cluster from bringing them online automatically if a node is mistakenly rebooted into cluster mode.

    1. From the Resource Group Menu, choose the menu item, Enable/Disable a resource.

    2. Choose a resource to disable and follow the prompts.

    3. Repeat Step b for each resource.

    4. When all resources are disabled, type q to return to the Resource Group Menu.

  8. Exit the scsetup utility.

    Type q to back out of each submenu or press Ctrl-C.

  9. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.


    # scstat -g
    
  10. If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the disk set name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software.
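
      For example, the metaset command output for a disk set includes the configured mediator hosts (a sketch; setname is your disk set name):


      # metaset -s setname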

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      # scswitch -z -D setname -h node
      
      -z

      Changes mastery

      -D setname

      Specifies the name of the disk set

      -h node

      Specifies the name of the node to become primary of the disk set

    4. Unconfigure all mediators for the disk set.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name

      -d

      Deletes from the disk set

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  11. For a two-node cluster that uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.

    2. Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.


      # /usr/opt/SUNWesm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S is the slice of device N.

    3. Identify the existing quorum device.


      # scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      # dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.


      # /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      
  12. Stop all applications that are running on each node of the cluster.

  13. Ensure that all shared data is backed up.

  14. From one node, shut down the cluster.


    # scshutdown -g0 -y
    

    See the scshutdown(1M) man page for more information.

  15. Boot each node into noncluster mode.

    • On SPARC based systems, perform the following command:


      ok boot -x
      
    • On x86 based systems, perform the following commands:


      …
                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type  b [file-name] [boot-flags] <ENTER>    to boot with options
      or    i <ENTER>                             to enter boot interpreter
      or    <ENTER>                               to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      
  16. Ensure that each system disk is backed up.
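
    For example, you might back up the root file system with the ufsdump command (a sketch; the tape device name is hypothetical, and your site's backup practices might differ):


    # ufsdump 0ucf /dev/rmt/0 /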

Next Steps

To upgrade the Solaris software before you upgrade the Sun Cluster software, go to How to Perform a Nonrolling Upgrade of the Solaris OS.

Otherwise, upgrade dependency software. Go to How to Upgrade Dependency Software Before a Nonrolling Upgrade.

How to Perform a Nonrolling Upgrade of the Solaris OS

Perform this procedure on each node in the cluster to upgrade the Solaris OS. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.1 8/05 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software.


Caution –

Sun Cluster 3.1 8/05 software does not support upgrade from the Solaris 9 OS to the original release of the Solaris 10 OS, which was distributed in March 2005. You must upgrade to at least the Solaris 10 10/05 release or compatible.


Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on the cluster node to upgrade.

  2. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  3. Determine whether the following Apache run control scripts exist and are enabled or disabled:


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.

    • If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.

    • If these scripts do not exist, in Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

    • If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

  4. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    1. For later reference, make a record of all entries that are already commented out.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
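
      For example, a commented-out entry for a globally mounted file system might look like the following line (the device names and mount point are hypothetical):


      #/dev/md/dsk/d20  /dev/md/rdsk/d20  /global/data  ufs  2  yes  global,logging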

  5. Determine which procedure to follow to upgrade the Solaris OS.

    Volume Manager: Solstice DiskSuite or Solaris Volume Manager
    Procedure: Any Solaris upgrade method except the Live Upgrade method
    Location of Instructions: Solaris installation documentation

    Volume Manager: SPARC: VERITAS Volume Manager
    Procedure: “Upgrading VxVM and Solaris”
    Location of Instructions: VERITAS Volume Manager installation documentation


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 version of VxVM software as part of the Solaris upgrade process.


  6. Upgrade the Solaris software, following the procedure that you selected in Step 5.

    Make the following changes to the procedures that you use:

    • When you are instructed to reboot a node during the upgrade process, always reboot into noncluster mode.

      • For the boot and reboot commands, add the -x option to the command.

        The -x option ensures that the node reboots into noncluster mode. For example, either of the following commands boots a node into single-user noncluster mode:

        • On SPARC based systems, perform either of the following commands:


          # reboot -- -xs
          or
          ok boot -xs
          
        • On x86 based systems, perform either of the following commands:


          # reboot -- -xs
          or
          ...
                                <<< Current Boot Parameters >>>
          Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
          Boot args:
          
          Type  b [file-name] [boot-flags] <ENTER>  to boot with options
          or    i <ENTER>                           to enter boot interpreter
          or    <ENTER>                             to boot with defaults
          
                            <<< timeout in 5 seconds >>>
          Select (b)oot or (i)nterpreter: b -xs
          
      • If the instruction says to run the init S command, use the reboot -- -xs command instead.

    • Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:

      1. Return to this procedure to perform Step 7 and Step 8.

      2. Reboot into noncluster mode in Step 9 to complete Solaris software upgrade.

  7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

  8. If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.

    To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    

    Alternatively, you can rename the scripts to be consistent with your normal administration practices.

  9. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    
  10. SPARC: If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Make the following changes to the procedure:

    • After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.

      If the upgrade process commented out any of the entries that you uncommented in Step 7, uncomment those entries again.

    • When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.


      # reboot -- -rx
      

    Note –

    If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command and do not type Ctrl-D.


    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password
    

  11. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

Next Steps

Upgrade dependency software. Go to How to Upgrade Dependency Software Before a Nonrolling Upgrade.


Note –

To complete the upgrade from Solaris 8 to Solaris 9 software or from Solaris 9 to Solaris 10 10/05 software or compatible, you must also upgrade to the Solaris 9 or Solaris 10 version of Sun Cluster 3.1 8/05 software, including dependency software. You must perform this task even if the cluster already runs on Sun Cluster 3.1 8/05 software for another version of Solaris software.


How to Upgrade Dependency Software Before a Nonrolling Upgrade

Perform this procedure on each cluster node to install or upgrade software on which Sun Cluster 3.1 8/05 software has a dependency. The cluster remains in production during this procedure.

If you are running SunPlex Manager, status on a node will not be reported during the period that the node's security file agent is stopped. Status reporting resumes when the security file agent is restarted, after the common agent container software is upgraded.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on the cluster node.

  2. For the Solaris 8 and Solaris 9 OS, ensure that the Apache Tomcat package is at the required patch level, if the package is installed.

    1. Determine whether the SUNWtcatu package is installed.


      # pkginfo SUNWtcatu
      SUNWtcatu       Tomcat Servlet/JSP Container
    2. If the Apache Tomcat package is installed, determine whether the required patch for the platform is installed.

      • SPARC based platforms require at least 114016-01

      • x86 based platforms require at least 114017-01


      # patchadd -p | grep 114016
      Patch: 114016-01 Obsoletes: Requires: Incompatibles: Packages: SUNWtcatu
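
      On x86 based platforms, run the corresponding query for the x86 patch (a sketch):


      # patchadd -p | grep 114017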
    3. If the required patch is not installed, remove the Apache Tomcat package.


      # pkgrm SUNWtcatu
      
  3. Insert the Sun Cluster 1 of 2 CD-ROM.

  4. Change to the /cdrom/cdrom0/Solaris_arch/Product/shared_components/Packages/ directory, where arch is sparc or x86.


    # cd /cdrom/cdrom0/Solaris_arch/Product/shared_components/Packages/
    
  5. Ensure that at least version 4.3.1 of the Explorer packages is installed.

    These packages are required by Sun Cluster software for use by the sccheck utility.

    1. Determine whether the Explorer packages are installed and, if so, what version.


      # pkginfo -l SUNWexplo | grep SUNW_PRODVERS
      SUNW_PRODVERS=4.3.1
    2. If a version earlier than 4.3.1 is installed, remove the existing Explorer packages.


      # pkgrm SUNWexplo SUNWexplu SUNWexplj
      
    3. If you removed Explorer packages or none were installed, install the latest Explorer packages from the Sun Cluster 1 of 2 CD-ROM.

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # pkgadd -d . SUNWexpl*
        
      • For the Solaris 10 OS, use the following command:


        # pkgadd -G -d . SUNWexpl*
        

        The -G option adds packages to the current zone only. You must add these packages only to the global zone. Therefore, this option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  6. Ensure that at least version 5.1,REV=34 of the Java Dynamic Management Kit (JDMK) packages is installed.

    1. Determine whether JDMK packages are installed and, if so, what version.


      # pkginfo -l SUNWjdmk-runtime | grep VERSION
      VERSION=5.1,REV=34
    2. If a version earlier than 5.1,REV=34 is installed, remove the existing JDMK packages.


      # pkgrm SUNWjdmk-runtime SUNWjdmk-runtime-jmx
      
    3. If you removed JDMK packages or none were installed, install the latest JDMK packages from the Sun Cluster 1 of 2 CD-ROM.

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # pkgadd -d . SUNWjdmk*
        
      • For the Solaris 10 OS, use the following command:


        # pkgadd -G -d . SUNWjdmk*
        
  7. Change to the Solaris_arch/Product/shared_components/Solaris_ver/Packages/ directory, where arch is sparc or x86 and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd ../Solaris_ver/Packages
    
  8. Ensure that at least version 4.5.0 of the Netscape Portable Runtime (NSPR) packages is installed.

    1. Determine whether NSPR packages are installed and, if so, what version.


      # cat /var/sadm/pkg/SUNWpr/pkginfo | grep SUNW_PRODVERS
      SUNW_PRODVERS=4.5.0
    2. If a version earlier than 4.5.0 is installed, remove the existing NSPR packages.


      # pkgrm packages
      

      The following table lists the applicable packages for each hardware platform.


      Note –

      Install packages in the order in which they are listed in the following table.


      Hardware Platform 

      NSPR Package Names 

      SPARC 

      SUNWpr SUNWprx

      x86 

      SUNWpr

    3. If you removed NSPR packages or none were installed, install the latest NSPR packages.

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # pkgadd -d . packages
        
      • For the Solaris 10 OS, use the following command:


        # pkgadd -G -d . packages
        
  9. Ensure that at least version 3.9.4 of the Network Security Services (NSS) packages is installed.

    1. Determine whether NSS packages are installed and, if so, what version.


      # cat /var/sadm/pkg/SUNWtls/pkginfo | grep SUNW_PRODVERS
      SUNW_PRODVERS=3.9.4
    2. If a version earlier than 3.9.4 is installed, remove the existing NSS packages.


      # pkgrm packages
      

      The following table lists the applicable packages for each hardware platform.


      Note –

      Install packages in the order in which they are listed in the following table.


      Hardware Platform 

      NSS Package Names 

      SPARC 

      SUNWtls SUNWtlsu SUNWtlsx

      x86 

      SUNWtls SUNWtlsu
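
      For example, on a SPARC based platform, the removal command would be the following (a sketch):


      # pkgrm SUNWtls SUNWtlsu SUNWtlsx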

    3. If you removed NSS packages or none were installed, install the latest NSS packages from the Sun Cluster 1 of 2 CD-ROM.

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # pkgadd -d . packages
        
      • For the Solaris 10 OS, use the following command:


        # pkgadd -G -d . packages
        
  10. Change back to the Solaris_arch/Product/shared_components/Packages/ directory.


    # cd ../../Packages
    
  11. Ensure that at least version 1.0,REV=25 of the common agent container packages is installed.

    1. Determine whether the common agent container packages are installed and, if so, what version.


      # pkginfo -l SUNWcacao | grep VERSION
      VERSION=1.0,REV=25
    2. If a version earlier than 1.0,REV=25 is installed, stop the security file agent for the common agent container on each cluster node.


      # /opt/SUNWcacao/bin/cacaoadm stop
      
    3. If a version earlier than 1.0,REV=25 is installed, remove the existing common agent container packages.


      # pkgrm SUNWcacao SUNWcacaocfg
      
    4. If you removed the common agent container packages or none were installed, install the latest common agent container packages from the Sun Cluster 1 of 2 CD-ROM.

      • For the Solaris 8 or Solaris 9 OS, use the following command:


        # pkgadd -d . SUNWcacao*
        
      • For the Solaris 10 OS, use the following command:


        # pkgadd -G -d . SUNWcacao*
        
  12. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  13. Insert the Sun Cluster 2 of 2 CD-ROM.

  14. For upgrade from Solaris 8 to Solaris 9 OS, install or upgrade Sun Java Web Console packages.

    1. Change to the Solaris_arch/Product/sunwebconsole/ directory, where arch is sparc or x86.

    2. Install the Sun Java Web Console packages.


      # ./setup
      

      The setup command installs or upgrades all packages to support Sun Java Web Console.

  15. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  16. Ensure that the /usr/java/ directory is a symbolic link to a supported version of Java software.

    Sun Cluster software requires at least version 1.4.2_03 of Java software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      # ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following example commands display the versions of different installed releases of Java software.


      # /usr/j2se/bin/java -version
      # /usr/java1.2/bin/java -version
      # /usr/jdk/jdk1.5.0_01/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.4.2_03 software.


      # rm /usr/java
      # ln -s /usr/j2se /usr/java
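
      To confirm the new link, you can list it and display the version that the linked release reports (a sketch):


      # ls -l /usr/java
      # /usr/java/bin/java -version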
      
Next Steps

Upgrade to Sun Cluster 3.1 8/05 software. Go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software.

How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software

Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.1 8/05 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software or from Solaris 9 to Solaris 10 10/05 software or compatible.


Tip –

You can perform this procedure on more than one node at the same time.


Before You Begin

Ensure that dependency software is installed or upgraded. See How to Upgrade Dependency Software Before a Nonrolling Upgrade.

Steps
  1. Become superuser on a node of the cluster.

  2. Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  4. Start the scinstall utility.


    # ./scinstall
    

    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Cluster 2 of 2 CD-ROM.


  5. From the Main Menu, choose the menu item, Upgrade this cluster node.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Add support for new data services to this cluster node
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  4
    
  6. From the Upgrade Menu, choose the menu item, Upgrade Sun Cluster framework on this node.

  7. Follow the menu prompts to upgrade the cluster framework.

    During the Sun Cluster upgrade, scinstall might make one or more configuration changes on the node.

    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.

  8. Press Enter.

    The Upgrade Menu is displayed.

  9. (Optional) Upgrade Java Enterprise System data services from the Sun Cluster 2 of 2 CD-ROM.

    1. From the Upgrade Menu of the scinstall utility, choose the menu item, Upgrade Sun Cluster data service agents on this node.

    2. Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.

    3. Press Enter.

      The Upgrade Menu is displayed.

  10. Quit the scinstall utility.

  11. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  12. Upgrade Sun Cluster data services from the Sun Cluster Agents CD.

    • If you are using the Sun Cluster HA for NFS data service and you upgrade to the Solaris 10 OS, you must upgrade the data service and migrate the resource type to the new version. See Upgrading the SUNW.nfs Resource Type in Sun Cluster Data Service for NFS Guide for Solaris OS for more information.

    • If you are using the Sun Cluster HA for Oracle 3.0 64-bit for Solaris 9 data service, you must upgrade to the Sun Cluster 3.1 8/05 version.

    • The upgrade of any other data services to the Sun Cluster 3.1 8/05 version is optional. You can continue to use any other Sun Cluster 3.x data services after you upgrade the cluster to Sun Cluster 3.1 8/05 software.

    Only those data services that are delivered on the Sun Cluster Agents CD are automatically upgraded by the scinstall(1M) utility. You must manually upgrade any custom or third-party data services. Follow the procedures that are provided with those data services.

    1. Insert the Sun Cluster Agents CD in the CD-ROM drive on the node.

    2. Start the scinstall utility.

      For data-service upgrades, you can use the /usr/cluster/bin/scinstall command that is already installed on the node.


      # scinstall
      
    3. From the Main Menu, choose the menu item, Upgrade this cluster node.

    4. From the Upgrade Menu, choose the menu item, Upgrade Sun Cluster data service agents on this node.

    5. Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.

    6. Press Enter.

      The Upgrade Menu is displayed.

    7. Quit the scinstall utility.

    8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      
  13. As needed, manually upgrade any custom data services that are not supplied on the product media.

  14. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  15. Install any Sun Cluster 3.1 8/05 software patches, if you did not already install them by using the scinstall command.

  16. Install any Sun Cluster 3.1 8/05 data-service software patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  17. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.

  18. After all nodes are upgraded, reboot each node into the cluster.


    # reboot
    
  19. Copy the security files for the common agent container to all cluster nodes.

    This step ensures that security files for the common agent container are identical on all cluster nodes and that the copied files retain the correct file permissions.

    1. On each node, stop the Sun Java Web Console agent.


      # /usr/sbin/smcwebserver stop
      
    2. On each node, stop the security file agent.


      # /opt/SUNWcacao/bin/cacaoadm stop
      
    3. On one node, change to the /etc/opt/SUNWcacao/ directory.


      phys-schost-1# cd /etc/opt/SUNWcacao/
      
    4. Create a tar file of the /etc/opt/SUNWcacao/security/ directory.


      phys-schost-1# tar cf /tmp/SECURITY.tar security
      
    5. Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.
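
      For example, you might use the rcp command (a sketch; your site might require a different file-transfer method):


      phys-schost-1# rcp /tmp/SECURITY.tar phys-schost-2:/tmp/SECURITY.tar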

    6. On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.

      Any security files that already exist in the /etc/opt/SUNWcacao/ directory are overwritten.


      phys-schost-2# cd /etc/opt/SUNWcacao/
      phys-schost-2# tar xf /tmp/SECURITY.tar
      
    7. Delete the /tmp/SECURITY.tar file from each node in the cluster.

      You must delete each copy of the tar file to avoid security risks.


      phys-schost-1# rm /tmp/SECURITY.tar
      phys-schost-2# rm /tmp/SECURITY.tar
      
    8. On each node, start the security file agent.


      phys-schost-1# /opt/SUNWcacao/bin/cacaoadm start
      phys-schost-2# /opt/SUNWcacao/bin/cacaoadm start
      
    9. On each node, start the Sun Java Web Console agent.


      phys-schost-1# /usr/sbin/smcwebserver start
      phys-schost-2# /usr/sbin/smcwebserver start
      
Next Steps

Go to How to Verify a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software.

How to Verify a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software

Perform this procedure to verify that the cluster is successfully upgraded to Sun Cluster 3.1 8/05 software.

Before You Begin

Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

Steps
  1. On each upgraded node, view the installed levels of Sun Cluster software.


    # scinstall -pv
    

    The first line of output states which version of Sun Cluster software the node is running. This version should match the version that you just upgraded to.

  2. From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).


    # scstat -n
    

    See the scstat(1M) man page for more information about displaying cluster status.

  3. If you upgraded from Solaris 8 to Solaris 9 software, verify the consistency of the storage configuration.

    1. On each node, run the following command to verify the consistency of the storage configuration.


      # scdidadm -c
      
      -c

      Performs a consistency check


      Caution –

      Do not proceed to Step b until your configuration passes this consistency check. Failure to pass this check might result in errors in device identification and cause data corruption.


      The following table lists the possible output from the scdidadm -c command and the action you must take, if any.

      Example Message: device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced
      Action: Go to Recovering From Storage Configuration Changes During Upgrade and perform the appropriate repair procedure.

      Example Message: device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update
      Action: None. You update this device ID in Step b.

      Example Message: No output message
      Action: None.

      See the scdidadm(1M) man page for more information.

    2. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.


      # scdidadm -R all
      
      -R

      Performs repair procedures

      all

      Specifies all devices

    3. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful.


      # scdidadm -c
      

Example 5–1 Verifying a Nonrolling Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 8/05 Software

The following example shows the commands used to verify a nonrolling upgrade of a two-node cluster from Sun Cluster 3.0 to Sun Cluster 3.1 8/05 software on the Solaris 8 OS. The cluster node names are phys-schost-1 and phys-schost-2.


(Verify that software versions are the same on all nodes)
# scinstall -pv
 
(Verify cluster membership)
# scstat -n
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    phys-schost-1  Online
  Cluster node:    phys-schost-2  Online

Next Steps

Go to How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software.

How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software

Perform this procedure to finish the Sun Cluster upgrade. First, reregister all resource types that received a new version from the upgrade. Second, modify eligible resources to use the new version of their resource type. Third, re-enable resources. Finally, bring resource groups back online.

Before You Begin

Ensure that all steps in How to Verify a Nonrolling Upgrade of Sun Cluster 3.1 8/05 Software are completed.

Steps
  1. If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.

  2. If you upgraded Sun Cluster HA for SAP liveCache from the version for Sun Cluster 3.0 to the version for Sun Cluster 3.1, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.

    1. Become superuser on a node that will host the liveCache resource.

    2. Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC_NAME/db/sap/ directory.

      Overwrite the lccluster file that already exists from the previous configuration of the data service.
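
      A sketch of this copy, where LC_NAME is the name of your liveCache instance:


      # cp /opt/SUNWsclc/livecache/bin/lccluster /sapdb/LC_NAME/db/sap/lccluster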

    3. Configure this /sapdb/LC_NAME/db/sap/lccluster file as documented in How to Register and Configure Sun Cluster HA for SAP liveCache in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.

  3. If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.

    1. Determine which node has ownership of a disk set to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the disk set name

    2. If no node has ownership, take ownership of the disk set.


      # scswitch -z -D setname -h node
      
      -z

      Changes mastery

      -D setname

      Specifies the name of the disk set

      -h node

      Specifies the name of the node to become primary of the disk set

    3. Re-create the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the disk set

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the disk set

    4. Repeat these steps for each disk set in the cluster that uses mediators.

  4. SPARC: If you upgraded VxVM, upgrade all disk groups.

    1. Bring online and take ownership of a disk group to upgrade.


      # scswitch -z -D setname -h thisnode
      
    2. Run the following command to upgrade a disk group to the highest version supported by the VxVM release you installed.


      # vxdg upgrade dgname
      

      See your VxVM administration documentation for more information about upgrading disk groups.
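
      To confirm the version to which a disk group was upgraded, you can use the vxdg list command (a sketch; dgname is a hypothetical disk group name):


      # vxdg list dgname | grep version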

    3. Repeat for each remaining VxVM disk group in the cluster.

  5. Migrate resources to new resource type versions.


    Note –

    If you upgrade to the Sun Cluster HA for NFS data service for the Solaris 10 OS, you must migrate to the new resource type version. See Upgrading the SUNW.nfs Resource Type in Sun Cluster Data Service for NFS Guide for Solaris OS for more information.

    For all other data services, this step is optional.


    See Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves performing the following tasks:

    • Registration of the new resource type

    • Migration of the eligible resource to the new version of its resource type

    • Modification of the extension properties of the resource type as specified in the manual for the related data service

  6. From any node, start the scsetup(1M) utility.


    # scsetup
    
  7. Re-enable all disabled resources.

    1. From the Resource Group Menu, choose the menu item, Enable/Disable a resource.

    2. Choose a resource to enable and follow the prompts.

    3. Repeat Step b for each disabled resource.

    4. When all resources are re-enabled, type q to return to the Resource Group Menu.

  8. Bring each resource group back online.

    1. From the Resource Group Menu, choose the menu item, Online/Offline or Switchover a resource group.

    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.

  9. When all resource groups are back online, exit the scsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

Next Steps

If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center.

Otherwise, the cluster upgrade is complete.

See Also

To upgrade future versions of resource types, see Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.