Sun Cluster 3.0 Installation Guide

Chapter 3 Upgrading Sun Cluster Software

This chapter provides step-by-step procedures for upgrading a two-node Sun Cluster 2.2 configuration to Sun Cluster 3.0 software.

The following step-by-step procedures are in this chapter.

  • "How to Shut Down the Cluster"

  • "How to Uninstall VERITAS Volume Manager Software"

  • "How to Upgrade the Solaris Operating Environment"

  • "How to Upgrade Cluster Software Packages"

  • "How to Update the Root User's Environment"

  • "How to Upgrade Data Service Software Packages"

  • "How to Finish Upgrading Cluster Software"

  • "How to Verify Cluster Membership"

For overview information about planning your Sun Cluster configuration, see Chapter 1, Planning the Sun Cluster Configuration. For a high-level description of the related procedures in this chapter, see "Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software".

Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software

Perform the following tasks to upgrade your two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 software.

Table 3-1 Task Map: Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software

  • Read upgrade conditions and restrictions, and plan a root disk partitioning scheme to support Sun Cluster 3.0 software. For instructions, go to "Overview of Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software".

  • Take the cluster out of production. For instructions, go to "How to Shut Down the Cluster".

  • If your cluster uses VERITAS Volume Manager (VxVM), deport disk groups and remove VxVM software packages. For instructions, go to "How to Uninstall VERITAS Volume Manager Software".

  • Upgrade to the Solaris 8 operating environment if necessary, add a new /globaldevices file system, and change file system allocations to support Sun Cluster 3.0 software. If your cluster uses Solstice DiskSuite software, remove mediators and upgrade Solstice DiskSuite software. For instructions, go to "How to Upgrade the Solaris Operating Environment".

  • Upgrade to Sun Cluster 3.0 framework software. If your cluster uses Solstice DiskSuite software, recreate mediators. For instructions, go to "How to Upgrade Cluster Software Packages".

  • Update the PATH and MANPATH. For instructions, go to "How to Update the Root User's Environment".

  • Upgrade to Sun Cluster 3.0 data services software. If necessary, upgrade third-party applications. For instructions, go to "How to Upgrade Data Service Software Packages".

  • Assign a quorum device, finish the cluster software upgrade, and start device groups and data services. If your cluster uses VERITAS Volume Manager (VxVM), reinstall VxVM software packages and import and register disk groups. If your cluster uses Solstice DiskSuite software, restore mediators. For instructions, go to "How to Finish Upgrading Cluster Software".

  • Verify that all nodes have joined the cluster. For instructions, go to "How to Verify Cluster Membership".

Overview of Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software

This section provides conditions, restrictions, and planning guidelines for upgrading from Sun Cluster 2.2 to Sun Cluster 3.0 software.

Conditions and Restrictions

The following conditions must be met to upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 software.

Planning the Upgrade

To support Sun Cluster 3.0 software, you probably need to change your current system disk layout. Consider the following when planning your new partitioning scheme.

Refer to "System Disk Partitions" for more information about disk space requirements to support Sun Cluster 3.0 software.

How to Shut Down the Cluster

Before upgrading the software, take the cluster out of production.

  1. Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading.

    • Solaris 8 operating environment

    • Solstice DiskSuite software or VERITAS Volume Manager

    • Sun Cluster 3.0 framework

    • Sun Cluster 3.0 data services

    • Third-party applications

    Solstice DiskSuite software and documentation are now part of the Solaris 8 product.


    Note -

    These procedures assume you are installing from CD-ROMs. If you are installing from a network, ensure that the CD-ROM image for each software product is loaded on the network.


    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  2. Notify users that the cluster will be down.

  3. Become superuser on each node of the cluster.

  4. Search the /var/adm/messages log for unresolved error or warning messages.

    Correct any problems.
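
    For example, you might scan the log for error and warning entries with a command similar to the following; adjust the search patterns to your needs.


    # egrep -i "error|warning" /var/adm/messages
    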

  5. Verify that no logical hosts are in the maintenance state.

    1. Become superuser on a node of the cluster.

    2. Use the hastat(1M) command to display the status of the cluster.


      # hastat
      HIGH AVAILABILITY CONFIGURATION AND STATUS
      -------------------------------------------
      ...
      LOGICAL HOSTS IN MAINTENANCE STATE

      If the screen output displays NONE, no logical hosts are in the maintenance state. Proceed to Step 6.

    3. If a logical host is in the maintenance state, use the haswitch(1M) command to perform switchover.


      # haswitch hostname logicalhostname
      
      hostname

      Specifies the name of the node that is to own the logical host

      logicalhostname

      Specifies the name of the logical host

    4. Run the hastat command to verify the switchover completed successfully.

  6. Ensure that the size of each logical host administrative file system is at least 10 Mbytes.


    # df -k /logicalhostname
    

    A logical host administrative file system that is not the required minimum size of 10 Mbytes will not be mountable after upgrade to Sun Cluster 3.0. If a logical host administrative file system is smaller than 10 Mbytes, follow your volume manager documentation procedure for growing this file system.
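
    For example, if the administrative file system resides on a simple concatenated Solstice DiskSuite metadevice, you might attach an additional slice to that metadevice and then grow the mounted file system, as sketched below. The diskset name, metadevice, and slice are placeholders; a mirrored metadevice requires attaching the slice to its submirrors instead.


    # metattach -s setname d100 c1t2d0s0
    # growfs -M /logicalhostname /dev/md/setname/rdsk/d100
    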

  7. Back up your system.

    Ensure that all users are logged off the system before you back it up.
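
    For example, one common approach is a full ufsdump(1M) of each file system to a locally attached tape drive. The tape device below is a placeholder and the root (/) file system is shown only as an illustration; use whatever backup method your site normally uses.


    # ufsdump 0ucf /dev/rmt/0 /
    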

  8. Stop the Sun Cluster 2.2 software on each node of the cluster.


    # scadmin stopnode
    
  9. Run the hastat command to verify that no nodes are in the cluster.
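
    Run hastat as in Step 5 and verify from its output that no nodes are listed as cluster members.


    # hastat
    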

Where to Go From Here

If your cluster uses VERITAS Volume Manager, go to "How to Uninstall VERITAS Volume Manager Software". If your cluster uses Solstice DiskSuite software, to upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 software, go to "How to Upgrade the Solaris Operating Environment".

How to Uninstall VERITAS Volume Manager Software

If your cluster uses VERITAS Volume Manager (VxVM), perform this procedure on each node of the cluster to uninstall the VxVM software. Existing disk groups are retained and automatically reimported after you have upgraded all software.


Note -

To upgrade to Sun Cluster 3.0 software, you must remove VxVM software and later reinstall it, regardless of whether you have the latest version of VxVM installed.


  1. Become superuser on a cluster node.

  2. Deport all VxVM disk groups.

    Refer to your VxVM documentation for procedures.
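
    For example, disk groups are typically listed and deported with the vxdg command; the disk group name below is a placeholder.


    # vxdg list
    # vxdg deport dgname
    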


    Note -

    Ensure that disks containing data to be preserved are not used for other purposes during the upgrade.


  3. Unencapsulate the root disk, if it is encapsulated.

    Refer to your VxVM documentation for procedures.

  4. Shut down VxVM.

    Refer to your VxVM documentation for procedures.

  5. Remove all installed VxVM software packages.

    Refer to your VxVM documentation for procedures.
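
    For example, installed VxVM packages can be identified with pkginfo and removed with pkgrm; the package names shown below are typical of VxVM 3.x releases and vary by version.


    # pkginfo | grep VRTS
    # pkgrm VRTSvmsa VRTSvmdoc VRTSvmman VRTSvmdev VRTSvxvm
    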

  6. Remove the VxVM device namespace.


    # rm -rf /dev/vx
    

Where to Go From Here

To upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 software, go to "How to Upgrade the Solaris Operating Environment".

How to Upgrade the Solaris Operating Environment

Perform this procedure on each node in the cluster to upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 software.

  1. Become superuser on the cluster node.

  2. If your volume manager is Solstice DiskSuite and you are using mediators, unconfigure mediators.

    1. Run the following command to verify that no mediator data problems exist.


      # medstat -s setname
      
      -s setname

      Specifies the diskset name

      If the value in the Status field is Bad, repair the affected mediator host by following the procedure "How to Fix Bad Mediator Data".

      See the medstat(1M) man page for more information.

    2. List all mediators.

      Use this information to determine which node, if any, has ownership of the diskset from which you will remove mediators.


      # metaset -s setname
      

      Save this information for when you restore the mediators during the procedure "How to Upgrade Cluster Software Packages".

    3. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    4. Unconfigure all mediators.


      # metaset -s setname -d -m mediator_host_list
      
      -s setname

      Specifies the diskset name

      -d

      Deletes from the diskset

      -m mediator_host_list

      Specifies the name of the node to remove as a mediator host for the diskset

      Refer to the mediator(7) man page for further information about mediator-specific options to the metaset command.

    5. Remove the mediator software.


      # pkgrm SUNWmdm
      
  3. Does your configuration currently run the Solaris 8 operating environment?

    If not, go to Step 4 to upgrade to Solaris 8 software. If Solaris 8 is already installed, perform the following steps.

    1. Create a file system of at least 100 Mbytes and set its mount point as /globaldevices.


      Note -

      The /globaldevices file system is necessary for Sun Cluster 3.0 software installation to succeed.
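

      One way to create this file system, sketched below, assumes an unused slice of the root disk is available (slice 4 in this example); the device names are placeholders. Create the file system, add an /etc/vfstab entry so that it is mounted at boot, and mount it.


      # newfs /dev/rdsk/c0t0d0s4
      # mkdir /globaldevices

      (Add an entry similar to the following to /etc/vfstab, then mount the file system:)
      /dev/dsk/c0t0d0s4  /dev/rdsk/c0t0d0s4  /globaldevices  ufs  2  yes  -

      # mount /globaldevices
      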


    2. Reallocate space in other partitions as needed to support Sun Cluster 3.0 software.

      Refer to "System Disk Partitions" for guidelines.

    3. Go to Step 6.

  4. Determine which procedure to use to upgrade to Solaris 8 software.

    • If your volume manager is Solstice DiskSuite, upgrade both Solaris and Solstice DiskSuite software. For instructions, go to the Solstice DiskSuite installation documentation.

    • If your volume manager is VxVM, perform a standard Solaris software installation. For instructions, go to the Solaris 8 installation documentation.

  5. Upgrade to Solaris 8 software, following the procedure you selected in Step 4.

    During installation, make the following changes to the root disk partitioning scheme.

    • Create a file system of at least 100 Mbytes and set its mount point as /globaldevices. The /globaldevices file system is necessary for Sun Cluster 3.0 software installation to succeed.

    • Reallocate space in other partitions as needed to support Sun Cluster 3.0 software.

    Refer to "System Disk Partitions" for partitioning guidelines.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. Refer to the ifconfig(1M) man page for more information about Solaris interface groups.


  6. Install any Solaris software patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
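
    Patches are generally applied with the patchadd(1M) command; the patch directory and patch ID below are placeholders.


    # patchadd /var/tmp/patches/patch-id
    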

  7. Install any hardware-related patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  8. For Solstice DiskSuite software, install any Solstice DiskSuite software patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

Example--Unconfiguring Mediators

The following example shows the mediator host phys-schost-1 unconfigured from the Solstice DiskSuite diskset schost-1 before the upgrade to Solaris 8 software.


(Check mediator status:)
# medstat -s schost-1
 
(List all mediators:)
# metaset -s schost-1
 
(Unconfigure the mediator:)
# metaset -s schost-1 -d -m phys-schost-1
 
(Remove mediator software:)
# pkgrm SUNWmdm
 
(Begin software upgrade)

Where to Go From Here

To upgrade to Sun Cluster 3.0 software, go to "How to Upgrade Cluster Software Packages".

How to Upgrade Cluster Software Packages

Perform this procedure on each node. You can perform this procedure on both nodes simultaneously if you have two copies of the Sun Cluster 3.0 framework CD-ROM.


Note -

The scinstall(1M) upgrade is a two-step process: the -u begin option and the -u finish option. This procedure runs the begin option. The finish option is run in "How to Finish Upgrading Cluster Software".


  1. Become superuser on a cluster node.

  2. If your volume manager is Solstice DiskSuite, install the latest Solstice DiskSuite mediator package (SUNWmdm) on each node.

    1. If you are installing from the CD-ROM, insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive on a node.

      If the volume daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.

    2. Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages directory.


      # cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages
      
    3. Add the SUNWmdm package.


      # pkgadd -d . SUNWmdm
      
    4. Reboot the node.


      # shutdown -g 0 -y -i 6
      
    5. Repeat on the other node.

  3. Reconfigure mediators.

    1. Determine which node has ownership of the diskset to which you will add the mediator hosts.


      # metaset -s setname
      
      -s setname

      Specifies the diskset name

    2. If no node has ownership, take ownership of the diskset.


      # metaset -s setname -t
      
      -t

      Takes ownership of the diskset

    3. Recreate the mediators.


      # metaset -s setname -a -m mediator_host_list
      
      -a

      Adds to the diskset

      -m mediator_host_list

      Specifies the names of the nodes to add as mediator hosts for the diskset

    4. Repeat for each diskset.

  4. On each node, begin upgrade to Sun Cluster 3.0 software.

    1. On one node, change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools directory.


      # cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools
      
    2. Upgrade the cluster software framework.

      On the first node to be upgraded, use this command:

      # ./scinstall -u begin -F

      On the second node, use this command:

      # ./scinstall -u begin -N clusternode1

      -F

      Specifies that this is the first node in the cluster that will be upgraded

      -N clusternode1

      Specifies the name of the first node in the cluster that will be upgraded, not the name of the second node to be upgraded

      Refer to the scinstall(1M) man page for more information.

    3. Reboot the node.


      # shutdown -g 0 -y -i 6
      

      When the first node reboots into cluster mode, it establishes the cluster. The second node waits if necessary for the cluster to be established before completing its own processes and joining the cluster.

    4. Repeat on the other cluster node.

  5. On each node, install any Sun Cluster patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

Example--Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software - Begin Process

The following example shows the beginning process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 software. The cluster node names are phys-schost-1, the sponsor node, and phys-schost-2, which joins the cluster that phys-schost-1 established. The volume manager is Solstice DiskSuite and both nodes are used as mediator hosts for the diskset schost-1.


(Install the latest Solstice DiskSuite mediator package on each node:)
# cd /cdrom/suncluster_3_0/SunCluster_3.0/Packages
# pkgadd -d . SUNWmdm
 
(Restore the mediator:)
# metaset -s schost-1 -t
# metaset -s schost-1 -a -m phys-schost-1 phys-schost-2
 
(Begin upgrade on the first node:)
phys-schost-1# cd /cdrom/suncluster_3_0/SunCluster_3.0/Tools
phys-schost-1# ./scinstall -u begin -F
 
(Begin upgrade on the second node:)
phys-schost-2# cd /cdrom/suncluster_3_0/SunCluster_3.0/Tools
phys-schost-2# ./scinstall -u begin -N phys-schost-1
 
(Reboot each node:)
# shutdown -g 0 -y -i 6

Where to Go From Here

To update the directory paths, go to "How to Update the Root User's Environment".

How to Update the Root User's Environment

Perform the following tasks on each node of the cluster.

  1. Set the PATH to include /usr/sbin and /usr/cluster/bin (an example follows Step 4).

    For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.

  2. Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.

    • For Solstice DiskSuite software, set your MANPATH to include /usr/share/man.

    • For VERITAS Volume Manager, set your MANPATH to include /opt/VRTSvxvm/man. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.

  3. (Optional) For ease of administration, set the same root password on each node.

  4. Start a new shell to activate the environment changes.
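

For example, for a root login that uses the Bourne shell on a node that runs VxVM with the VRTSvmsa package installed, you might add entries similar to the following to the /.profile file. The file location and syntax are assumptions; adjust them for your shell and volume manager.


PATH=$PATH:/usr/sbin:/usr/cluster/bin:/etc/vx/bin:/opt/VRTSvmsa/bin
MANPATH=$MANPATH:/usr/cluster/man:/opt/VRTSvxvm/man:/opt/VRTSvmsa/man
export PATH MANPATH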

Where to Go From Here

To upgrade to Sun Cluster 3.0 data service software, go to "How to Upgrade Data Service Software Packages".

How to Upgrade Data Service Software Packages

Perform this procedure on each cluster node.

  1. Become superuser on a node of the cluster.

  2. Upgrade applications and apply application patches as needed.

    Refer to your application documentation for installation instructions.


    Note -

    If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.


  3. Add data services.

    1. Insert the Sun Cluster 3.0 data services CD-ROM into the CD-ROM drive on the node.

    2. Enter the scinstall(1M) utility.


      # scinstall
      

      Follow these guidelines while using the interactive scinstall utility.

      • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

      • Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.

    3. To add data services, type 4 (Add support for a new data service to this cluster node).

      Follow the prompts to add data services.

    4. Eject the CD-ROM.

  4. Install any Sun Cluster data service patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  5. Repeat Step 1 through Step 4 on the other node of the cluster.

  6. Shut down the second node to be upgraded to Sun Cluster 3.0 software.

    Leave the second node shut down until after the first, or sponsor, node has been rebooted.


    phys-schost-2# shutdown -g 0 -y -i 0
    
  7. Reboot the first, sponsor node of the cluster.

    Ensure that the second node has been shut down before rebooting the first node. Otherwise, rebooting the first node while the second node is still up will panic the second node because quorum votes have not yet been assigned.


    phys-schost-1# shutdown -g 0 -y -i 6
    
  8. After the first node has completed booting, boot the second node.


    ok boot
    
  9. After both nodes have rebooted, verify from either node that both nodes are cluster members.


    # scstat -n
    Node                                               
      Node Name:                                        phys-schost-1
      Status:                                           Online
    
      Node Name:                                        phys-schost-2
      Status:                                           Online

    See the scstat(1M) man page for more information about displaying cluster status.

Where to Go From Here

To assign a quorum device and finish the upgrade, go to "How to Finish Upgrading Cluster Software".

How to Finish Upgrading Cluster Software

This procedure finishes the scinstall(1M) upgrade process begun in "How to Upgrade Cluster Software Packages". Perform these steps on each node of the cluster.

  1. Become superuser on each node of the cluster.

  2. Choose a shared disk to be the quorum device.

    You can use any disk shared by both nodes as a quorum device. From either node, use the scdidadm(1M) command to determine the shared disk's device ID (DID) name. You specify this device name in Step 5, in the -q globaldev=devicename option to scinstall.


    # scdidadm -L
    
  3. If your volume manager is VxVM, reinstall the VxVM software on each node of the cluster.


    Note -

    Whenever you need to reboot, shut down the second node of the cluster before you reboot the first, or sponsor, node. After the first node has rebooted, bring the second node back up. Otherwise, rebooting the first node while the second node is still up panics the second node because quorum votes have not yet been assigned.


    1. Install VxVM software, including any patches.

      Follow the procedures in "How to Install VERITAS Volume Manager Software".

    2. Configure VxVM.

      Follow the procedures listed in "Configuring VxVM for Sun Cluster Configurations".

  4. Insert the Sun Cluster 3.0 data services CD-ROM into the CD-ROM drive on the node.

    This step assumes that the volume daemon vold(1M) is running and configured to manage CD-ROM devices.

  5. Finish the cluster software upgrade on that node.


    # scinstall -u finish -q globaldev=devicename \
    -d /cdrom_image/scdataservices_3_0 -s srvc[,srvc]
    -q globaldev=devicename

    Specifies the name of the quorum device

    -d /cdrom_image/scdataservices_3_0

    Specifies the directory location of the CD-ROM image

    -s srvc

    Specifies the name of the data service to configure


    Note -

    An error message similar to the following might be generated. You can safely ignore it.



    ** Installing Sun Cluster - Highly Available NFS Server **
    Skipping "SUNWscnfs" - already installed

  6. Eject the CD-ROM.

  7. Repeat Step 4 through Step 6 on the other node.

    When completed on both nodes, the cluster is removed from install mode and all quorum votes are assigned.

  8. If your volume manager is Solstice DiskSuite, from either node bring pre-existing disk device groups online.


    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch

    -D disk-device-group

    Specifies the name of the disk device group, which for Solstice DiskSuite software is the same as the diskset name

    -h node

    Specifies the name of the cluster node that serves as the primary of the disk device group

  9. From either node, bring pre-existing data service resource groups online.

    At this point, Sun Cluster 2.2 logical hosts are converted to Sun Cluster 3.0 resource groups, and the names of logical hosts are appended with the suffix -lh. For example, a logical host named lhost-1 is upgraded to a resource group named lhost-1-lh. Use these converted resource group names in the following command.


    # scswitch -z -g resource-group -h node
    
    -g resource-group

    Specifies the name of the resource group to bring online

    You can use the scrgadm -p command to display a list of all resource types and resource groups in the cluster. The scrgadm -pv command displays this list with more detail.

  10. If you are using the Sun Management Center product to monitor your Sun Cluster configuration, install the Sun Cluster module for the Sun Management Center product.

    1. Ensure that you are using the most recent version of Sun Management Center software (formerly Sun Enterprise SyMON).

      Refer to the Sun Management Center documentation for installation or upgrade procedures.

    2. Follow guidelines and procedures in "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring" to install the Sun Cluster module packages.

Example--Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software - Finish Process

The following example shows the finish process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 software. The cluster node names are phys-schost-1 and phys-schost-2, the device group names are dg-schost-1 and dg-schost-2, and the data service resource group names are lh-schost-1 and lh-schost-2.


(Determine the DID of the shared quorum device:)
phys-schost-1# scdidadm -L
 
(Finish upgrade on each node:)
phys-schost-1# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0 -s nfs
phys-schost-2# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0 -s nfs
 
(Bring device groups and data service resource groups on each node online:)
phys-schost-1# scswitch -z -D dg-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -g lh-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -D dg-schost-2 -h phys-schost-2 
phys-schost-1# scswitch -z -g lh-schost-2 -h phys-schost-2

Where to Go From Here

To verify that all nodes have joined the cluster, go to "How to Verify Cluster Membership".

How to Verify Cluster Membership

Perform this procedure to verify that all nodes have joined the cluster.

  1. Become superuser on any node in the cluster.

  2. Display cluster status.

    Verify that cluster nodes are online and that the quorum device, device groups, and data services resource groups are configured and online.


    # scstat
    

    See the scstat(1M) man page for more information about displaying cluster status.

  3. On each node, display a list of all devices and verify their connectivity to the cluster nodes.

    The output on each node should be the same.


    # scdidadm -L
    

The cluster upgrade is complete. You can now return the cluster to production.