Sun Cluster Software Installation Guide for Solaris OS

Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling)

Follow the tasks in this section to perform a nonrolling upgrade from Sun Cluster 3.x software to Sun Cluster 3.1 9/04 software. In a nonrolling upgrade, you shut down the entire cluster before you upgrade the cluster nodes. This procedure also enables you to upgrade the cluster from Solaris 8 software to Solaris 9 software.


Note –

To perform a rolling upgrade to Sun Cluster 3.1 9/04 software, instead follow the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Rolling).


Task Map: Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling)

Table 5–1 Task Map: Upgrading to Sun Cluster 3.1 9/04 Software

Task: 1. Read the upgrade requirements and restrictions.
Instructions: Upgrade Requirements and Support Guidelines

Task: 2. Remove the cluster from production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure the mediators.
Instructions: How to Prepare the Cluster for a Nonrolling Upgrade

Task: 3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Optionally, upgrade VERITAS Volume Manager (VxVM).
Instructions: How to Perform a Nonrolling Upgrade of the Solaris OS

Task: 4. Upgrade to Sun Cluster 3.1 9/04 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. SPARC: If you upgraded VxVM, upgrade disk groups.
Instructions: How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software

Task: 5. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types.
Instructions: How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software

Task: 6. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.
Instructions: SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center

How to Prepare the Cluster for a Nonrolling Upgrade

Before you upgrade the software, perform the following steps to remove the cluster from production:

  1. Ensure that the configuration meets requirements for upgrade.

    See Upgrade Requirements and Support Guidelines.

  2. Have available the CD-ROMs, documentation, and patches for all software products you are upgrading.

    • Solaris 8 or Solaris 9 OS

    • Sun Cluster 3.1 9/04 framework

    • Sun Cluster 3.1 9/04 data services (agents)

    • Applications that are managed by Sun Cluster 3.1 9/04 data-service agents

    • SPARC: VERITAS Volume Manager

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. (Optional) Install Sun Cluster 3.1 9/04 documentation.

    Install the documentation packages in your preferred location, such as on an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 9/04 CD-ROM to access installation instructions.

  4. If you are upgrading from Sun Cluster 3.0 software, have available your list of test IP addresses.

    Each public-network adapter in the cluster must have at least one test IP address. This requirement applies regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses are used to reconfigure the adapters to use IP Network Multipathing.


    Note –

    Each test IP address must be on the same subnet as the existing IP address that is used by the public-network adapter.


    To list the public-network adapters on a node, run the following command:


    % pnmstat
    

    See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.

  5. Notify users that cluster services will be unavailable during the upgrade.

  6. Ensure that the cluster is functioning normally.

    • To view the current status of the cluster, run the following command from any node:


      % scstat
      

      See the scstat(1M) man page for more information.

    • Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

    • Check the volume-manager status.
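
      For example, with Solstice DiskSuite or Solaris Volume Manager you might check status by running metastat, and with VERITAS Volume Manager by running vxprint. This is a hedged sketch; use whichever checks apply to your volume manager.


      # metastat
      # vxprint -ht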

  7. Become superuser on a node of the cluster.

  8. Start the scsetup(1M) utility.


    # scsetup
    

    The scsetup Main Menu is displayed.

  9. Switch each resource group offline.

    1. From the scsetup Main Menu, choose Resource groups.

    2. From the Resource Group Menu, choose Online/Offline or Switchover a resource group.

    3. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

    4. When all resource groups are offline, type q to return to the Resource Group Menu.

  10. Disable all resources in the cluster.

    Disabling resources before the upgrade prevents the cluster from bringing them online automatically if a node is mistakenly rebooted into cluster mode. (A command-line alternative to the scsetup menus is sketched after this step.)

    1. From the Resource Group Menu, choose Enable/Disable a resource.

    2. Choose a resource to disable and follow the prompts.

    3. Repeat the preceding substep for each resource.

    4. When all resources are disabled, type q to return to the Resource Group Menu.
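
    If you prefer the command line to the scsetup menus, the operations in the previous two steps can typically also be performed with the scswitch command, as in the following hedged sketch. Here resource-group and resource are placeholder names; scswitch -F -g takes a resource group offline, scswitch -u -g puts the group in the unmanaged state, and scswitch -n -j disables a resource (repeat the last command for each resource).


    # scswitch -F -g resource-group
    # scswitch -u -g resource-group
    # scswitch -n -j resource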

  11. Exit the scsetup utility.

    Type q to back out of each submenu or press Ctrl-C.

  12. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.


    # scstat -g
    

  13. If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the disk set name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software.
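
      One hedged way to list the mediators is to display the disk set configuration with the metaset command; when mediators are configured, its output includes a mediator host section (setname is a placeholder).


      # metaset -s setname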

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      # metaset -s setname -t
      
      -t

      Takes ownership of the disk set

    4. Unconfigure all mediators for the disk set.


      # metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name

      -d

      Deletes from the disk set

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat the preceding two substeps (taking ownership and unconfiguring the mediators) for each remaining disk set that uses mediators.

  14. If not already installed, install Sun Web Console packages.

    Perform this step on each node of the cluster. These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  15. For a two-node cluster, if the cluster uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.

    2. Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.


      # /usr/opt/SUNWscm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      # scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      # dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      


      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.


      # /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      

  16. Stop all applications that are running on each node of the cluster.

  17. Ensure that all shared data is backed up.

  18. From one node, shut down the cluster.


    # scshutdown -g0 -y
    

    See the scshutdown(1M) man page for more information.

  19. Boot each node into noncluster mode.

    On SPARC based systems, perform the following command:


    ok boot -x
    

    On x86 based systems, perform the following commands:


    ...
                          <<< Current Boot Parameters >>>
    Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
    Boot args:
    
    Type  b [file-name] [boot-flags] <ENTER>    to boot with options
    or    i <ENTER>                             to enter boot interpreter
    or    <ENTER>                               to boot with defaults
    
                      <<< timeout in 5 seconds >>>
    Select (b)oot or (i)nterpreter: b -x
    

  20. Ensure that each system disk is backed up.
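
    As one hedged illustration only, a UFS root file system might be backed up to tape with the ufsdump command; /dev/rmt/0 is an example tape device, and your site might use a different backup method entirely.


    # ufsdump 0ucf /dev/rmt/0 /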

  21. Upgrade the Sun Cluster software or the Solaris operating system.

How to Perform a Nonrolling Upgrade of the Solaris OS

Perform this procedure on each node in the cluster to upgrade the Solaris OS. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.1 9/04 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 or Solaris 9 OS to support Sun Cluster 3.1 9/04 software. See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.


  1. Ensure that all steps in How to Prepare the Cluster for a Nonrolling Upgrade are completed.

  2. Become superuser on the cluster node to upgrade.

  3. (Optional) Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  4. Determine whether the following Apache links already exist, and if so, whether the file names contain an uppercase K or S (a sample check command is shown after this list):


    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache
    • If these links already exist and do contain an uppercase K or S in the file name, no further action is necessary for these links.

    • If these links do not exist, or if these links exist but contain a lowercase k or s in the file name, you will move these links aside in Step 9.
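
    One hedged way to check for these links is to list them directly, as in the following sketch. If a link does not exist, the ls command reports that there is no such file or directory.


    # ls -l /etc/rc0.d/*apache /etc/rc1.d/*apache /etc/rc2.d/*apache \
      /etc/rc3.d/*apache /etc/rcS.d/*apache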

  5. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    1. For later reference, make a record of all entries that are already commented out.

    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
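
      The following hedged example shows how such an entry might look before and after you comment it out; the metadevice and mount point are placeholders, not values from your configuration.


      /dev/md/dsk/d30  /dev/md/rdsk/d30  /global/nfs  ufs  2  yes  global,logging
      #/dev/md/dsk/d30  /dev/md/rdsk/d30  /global/nfs  ufs  2  yes  global,logging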

  6. Determine which procedure to follow to upgrade the Solaris OS.

    Volume Manager: Solstice DiskSuite or Solaris Volume Manager
    Procedure to Use: Any Solaris upgrade method except the Live Upgrade method
    Location of Instructions: Solaris 8 or Solaris 9 installation documentation

    Volume Manager: SPARC: VERITAS Volume Manager
    Procedure to Use: “Upgrading VxVM and Solaris”
    Location of Instructions: VERITAS Volume Manager installation documentation


    Note –

    If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 version of VxVM software as part of the Solaris upgrade process.


  7. Upgrade the Solaris software, following the procedure that you selected in Step 6.

    1. When you are instructed to reboot a node during the upgrade process, always add the -x option to the command. Or, if the instruction says to run the init S command, use the reboot -- -xs command instead.

      The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:

      • On SPARC based systems, perform the following commands:


        # reboot -- -xs
        ok boot -xs
        
      • On x86 based systems, perform the following commands:


        # reboot -- -xs
        ...
                              <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
        Boot args:
        
        Type  b [file-name] [boot-flags] <ENTER>  to boot with options
        or    i <ENTER>                           to enter boot interpreter
        or    <ENTER>                             to boot with defaults
        
                          <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -xs
        

    2. Do not perform the final reboot instruction in the Solaris software upgrade. Instead, return to this procedure to perform Step 8 and Step 9, then reboot into noncluster mode in Step 10 to complete the Solaris software upgrade.

  8. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 5.

  9. Move aside restored Apache links if either of the following conditions was true before you upgraded the Solaris software:

    • The Apache links listed in Step 4 did not exist.

    • The Apache links listed in Step 4 existed and contained a lowercase k or s in the file names.

    To move aside restored Apache links, which contain an uppercase K or S in the name, use the following commands to rename the files with a lowercase k or s.


    # mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    # mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    # mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    # mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    # mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
    
  10. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:


    # reboot -- -x
    

  11. SPARC: If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.

    Note the following special instructions:

    1. After the VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file. If any of the entries that you uncommented in Step 8 have been commented out again, uncomment those entries.

    2. When the VxVM procedures instruct you to perform a final reconfiguration reboot by using the -r option, reboot into noncluster mode by using the -x option instead.


      # reboot -- -x
      

    Note –

    If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command, and do not type Ctrl-D.


    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password
    


  12. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.


    Note –

    Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.


    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
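
    As a hedged illustration of this step, an individual patch that you have downloaded might be added with the patchadd command; the directory and patch-id below are placeholders.


    # patchadd /var/spool/patch/patch-id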

  13. Upgrade to Sun Cluster 3.1 9/04 software.

    Go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.


    Note –

    To complete the upgrade from Solaris 8 to Solaris 9 software, you must also upgrade to the Solaris 9 version of Sun Cluster 3.1 9/04 software, even if the cluster already runs on the Solaris 8 version of Sun Cluster 3.1 9/04 software.


How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software

Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.1 9/04 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software.


Tip –

You can perform this procedure on more than one node at the same time.


  1. Ensure that all steps in How to Prepare the Cluster for a Nonrolling Upgrade are completed.

  2. If you upgraded from Solaris 8 to Solaris 9 software, ensure that all steps in How to Perform a Nonrolling Upgrade of the Solaris OS are completed.

  3. Ensure that you have installed all required Solaris software patches and hardware-related patches.

    For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.

  4. Become superuser on a node of the cluster.

  5. Insert the Sun Java Enterprise System 1/05 2 of 2 CD-ROM into the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  6. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    

  7. Upgrade the cluster framework software.


    Note –

    Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is on the Sun Cluster 3.1 9/04 CD-ROM.


    • To upgrade from Sun Cluster 3.0 software, run the following command:


      # ./scinstall -u update -S interact [-M patchdir=dirname]
      
      -S

      Specifies the test IP addresses to use to convert NAFO groups to IP Network Multipathing groups

      interact

      Specifies that scinstall prompts the user for each test IP address needed

      -M patchdir=dirname[[,patchlistfile=filename]]

      Specifies the path to patch information so that the specified patches can be installed using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

      The -M option is not required. You can use any method you prefer for installing patches.

    • To upgrade from Sun Cluster 3.1 software, run the following command:


      # ./scinstall -u update [-M patchdir=dirname]
      
      -M patchdir=dirname[[,patchlistfile=filename]]

      Specifies the path to patch information so that the specified patches can be installed by the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

      The -M option is not required. You can use any method you prefer for installing patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.


    Note –

    Sun Cluster 3.1 9/04 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.


    During Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

    • Convert NAFO groups to IP Network Multipathing groups but keep the original NAFO-group name.

      See the scinstall(1M) man page for more information. See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test addresses for IP Network Multipathing.

    • Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.

    • Set the local-mac-address? variable to true, if the variable is not already set to that value.

    Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and the path to the upgrade log.
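
    On SPARC based systems, if you want to confirm the local-mac-address? setting after the upgrade, you can display it with the eeprom command. This is an optional, hedged check; the value shown is only an example.


    # eeprom "local-mac-address?"
    local-mac-address?=true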

  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  9. (Optional) Upgrade Sun Cluster data services.


    Note –

    If you are using the Sun Cluster HA for Oracle 3.0 64-bit for Solaris 9 data service, you must upgrade to the Sun Cluster 3.1 9/04 version.

    You can continue to use any other Sun Cluster 3.0 data services after you upgrade to Sun Cluster 3.1 9/04 software.


    1. Insert the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive on the node.

    2. Upgrade the data-service software.

      Use one of the following methods:

      • To upgrade one or more specified data services, type the following command.


        # scinstall -u update -s srvc[,srvc,…] -d /cdrom/cdrom0
        

        -u update

        Upgrades a cluster node to a later Sun Cluster software release

        -s srvc

        Upgrades the specified data service

        -d

        Specifies an alternate directory location for the CD-ROM image

      • To upgrade all data services present on the node, type the following command.


        # scinstall -u update -s all -d /cdrom/cdrom0
        

        -s all

        Upgrades all data services

      The scinstall command assumes that updates for all installed data services exist in the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.

      Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and displays the path to the upgrade log.

    3. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  10. As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM.

  11. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  12. Install any Sun Cluster 3.1 9/04 software patches, if you did not already install them by using the scinstall command.

  13. Install any Sun Cluster 3.1 9/04 data-service software patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  14. Upgrade software applications that are installed on the cluster.

    Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.

  15. After all nodes are upgraded, reboot each node into the cluster.


    # reboot
    

  16. Verify that all upgraded software is at the same version on all upgraded nodes.

    1. On each upgraded node, view the installed levels of Sun Cluster software.


      # scinstall -pv
      

      The first line of output states which version of Sun Cluster software the node is running. This version should match the version that you just upgraded to.

    2. From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).


      # scstat -n
      

      See the scstat(1M) man page for more information about displaying cluster status.

  17. If you upgraded from Solaris 8 to Solaris 9 software, verify the consistency of the storage configuration.

    1. On each node, run the following command to verify the consistency of the storage configuration.


      # scdidadm -c
      
      -c

      Perform a consistency check


      Caution –

      Do not proceed to the next substep until your configuration passes this consistency check. Failure to pass this check might result in errors in device identification and cause data corruption.


      The following table lists the possible output from the scdidadm -c command and the action you must take, if any.

      Example Message: device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced
      Action: Go to Recovering From Storage Configuration Changes During Upgrade and perform the appropriate repair procedure.

      Example Message: device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update
      Action: None. You update this device ID in the next substep.

      Example Message: (no output message)
      Action: None.

      See the scdidadm(1M) man page for more information.

    2. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.


      # scdidadm -R all
      
      -R

      Perform repair procedures

      all

      Specify all devices

    3. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful.


      # scdidadm -c
      
      • If the scdidadm command displays a message, return to the consistency check in the first substep of this step and make further corrections to the storage configuration or the storage database.

      • If the scdidadm command displays no messages, the device-ID migration is successful. When device-ID migration is verified on all cluster nodes, proceed to Step 18.

  18. Go to How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software.

Example—Nonrolling Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 9/04 Software

The following example shows the process of a nonrolling upgrade of a two-node cluster from Sun Cluster 3.0 to Sun Cluster 3.1 9/04 software on the Solaris 8 OS. The example includes the installation of Sun Web Console software and the upgrade of all installed data services that have new versions on the Sun Cluster 3.1 9/04 Agents CD-ROM. The cluster node names are phys-schost-1 and phys-schost-2.


(On the first node, install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/ \
Solaris_8/Misc
phys-schost-1# ./setup

(On the first node, upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-1# ./scinstall -u update -S interact
 
(On the first node, upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-1# scinstall -u update -s all -d /cdrom/cdrom0
 
(On the second node, install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-2# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/ \
Solaris_8/Misc
phys-schost-2# ./setup

(On the second node, upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-2# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-2# ./scinstall -u update -S interact
 
(On the second node, upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-2# scinstall -u update -s all -d /cdrom/cdrom0
 
(Reboot each node into the cluster)
phys-schost-1# reboot
phys-schost-2# reboot

(Verify that software versions are the same on all nodes)
# scinstall -pv 

(Verify cluster membership)
# scstat -n
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    phys-schost-1  Online
  Cluster node:    phys-schost-2  Online

How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software

Perform this procedure to finish Sun Cluster upgrade. First, reregister all resource types that received a new version from the upgrade. Second, modify eligible resources to use the new version of the resource type that the resource uses. Third, re-enable resources. Finally, bring resource groups back online.


Note –

To upgrade future versions of resource types, see “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


  1. Ensure that all steps in How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software are completed.

  2. If you upgraded any data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.

  3. If you upgraded Sun Cluster HA for SAP liveCache from the version for Sun Cluster 3.0 to the version for Sun Cluster 3.1, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.

    In the lccluster file, replace put-Confdir_list-here in the CONFDIR_LIST="put-Confdir_list-here" entry with the value for your configuration. This entry did not exist in the Sun Cluster 3.0 version of the lccluster file. Follow instructions in “Registering and Configuring the Sun Cluster HA for SAP liveCache” in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
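
    A hedged example of the resulting entry, using a hypothetical liveCache configuration-directory path, might look like the following.


    # hypothetical path; use the configuration directory for your liveCache instance
    CONFDIR_LIST="/sapdb/LC1/db"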

  4. If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.

    1. Determine which node has ownership of a disk set to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the disk set name

    2. If no node has ownership, take ownership of the disk set.


      # metaset -s setname -t
      
      -t

      Takes ownership of the disk set

    3. Re-create the mediators.


      # metaset -s setname -a -m mediator-host-list
      
      -a

      Adds to the disk set

      -m mediator-host-list

      Specifies the names of the nodes to add as mediator hosts for the disk set

    4. Repeat the preceding three substeps for each disk set in the cluster that uses mediators.

  5. SPARC: If you upgraded VxVM, upgrade all disk groups.

    To upgrade a disk group to the highest version supported by the VxVM release you installed, run the following command from the primary node of the disk group:


    # vxdg upgrade dgname
    

    See your VxVM administration documentation for more information about upgrading disk groups.
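
    If you want to confirm a disk group's version before or after you upgrade it, you can display the disk group record with the vxdg list command, whose detailed output includes a version field (dgname is a placeholder).


    # vxdg list dgname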

  6. From any node, start the scsetup(1M) utility.


    # scsetup
    

  7. Re-enable all disabled resources.

    1. From the Resource Group Menu, choose Enable/Disable a resource.

    2. Choose a resource to enable and follow the prompts.

    3. Repeat the preceding substep for each disabled resource.

    4. When all resources are re-enabled, type q to return to the Resource Group Menu.

  8. Bring each resource group back online.

    1. From the Resource Group Menu, choose Online/Offline or Switchover a resource group.

    2. Follow the prompts to put each resource group into the managed state and then bring the resource group online.

  9. When all resource groups are back online, exit the scsetup utility.

    Type q to back out of each submenu, or press Ctrl-C.

  10. (Optional) Migrate resources to new resource type versions.

    See “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves performing the following tasks:

    • Register the new resource type.

    • Migrate the eligible resource to the new version of its resource type.

    • Modify the extension properties of the resource type as specified in the manual for the related data service.

  11. If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center.

The cluster upgrade is complete.