Sun Cluster 2.2 Software Installation Guide

Chapter 4 Upgrading Sun Cluster Software

This chapter contains guidelines and procedures for upgrading to the latest release of Sun Cluster 2.2 from Solstice HA 1.3, Sun Cluster 2.0, Sun Cluster 2.1, and Sun Cluster 2.2.

The software to be upgraded might include the Solaris operating environment, Sun Cluster, and volume management software (Solstice DiskSuite or VERITAS Volume Manager).

This chapter includes the following sections:

• Upgrade Overview

• Upgrading to Sun Cluster 2.2 From Solstice HA 1.3

• Configuring Mediators When Migrating From Solstice HA 1.3 to Sun Cluster 2.2

• Upgrading to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1

• Upgrading to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7

Upgrade Overview

This section describes the procedures for upgrading to the latest release of Sun Cluster 2.2 from existing Solstice HA 1.3, Sun Cluster 2.0, Sun Cluster 2.1, and Sun Cluster 2.2 configurations. The upgrade paths documented in this chapter preserve the existing cluster configuration and data. Your systems can remain online and available during most of the upgrade.

To upgrade from Solstice HA 1.3, use the procedure "How to Upgrade to Sun Cluster 2.2 From HA 1.3".

To upgrade from Sun Cluster 2.0 or 2.1, use the procedure "How to Upgrade to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1".

To upgrade to the latest version of Sun Cluster 2.2 on Solaris 8 from an earlier version of Sun Cluster 2.2 on Solaris 2.6 or 7, use the procedures in "Upgrading to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7".

If you also want to make configuration changes such as adding disks or services, first complete the upgrade and then make the configuration changes by following the procedures documented in the Sun Cluster 2.2 System Administration Guide.

Before starting your upgrade, make sure the versions of any applications you plan to run are compatible with the version of the Solaris operating environment you plan to run.

To upgrade Solaris software, you might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration, to accommodate the Solaris operating environment.

You must install the Entire Distribution Solaris software packages. See your Solaris advanced system administration documentation for details.
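
For example, you can check the current size and available space of the root (/) and /usr file systems on each node with a command similar to the following:


    phys-hahost1# df -k / /usr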


Note -

The behavior of DNS changes between Solaris 2.6 and Solaris 8. This is because the default bind version differs between these operating environments. This change requires an update to some DNS configuration files. See your DNS documentation for details and instructions.


Upgrading to Sun Cluster 2.2 From Solstice HA 1.3

You can perform the upgrade either from an administrative workstation or from the console of any physical host in the cluster. Using an administrative workstation provides the most flexibility during the upgrade process.


Caution -

Back up all local and multihost disks before starting the upgrade. All systems must be operable and robust. Do not attempt to upgrade if systems are experiencing any difficulties.



Caution -

On each node, if you customized hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2. Restore the scripts after completing the upgrade. Save and restore these scripts to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts. The configuration parameters implemented in Sun Cluster 2.2 are different from those implemented in Solstice HA 1.3 and Sun Cluster 2.1. Therefore, after upgrading to Sun Cluster 2.2, you must re-configure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables. For Solstice HA 1.3, use the hainetconfig(1M) command to view the configuration. For Sun Cluster 2.1, use the hadsconfig(1M) command to view the configuration. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.



Caution -

If you created your own data services using the Sun Cluster API, make sure those data services have a base directory associated with them before you begin the upgrade. This base directory defines the location of the methods associated with the data service. If the data service was registered with the -b option to hareg(1M), the base directory is defined in the data services configuration file. By default, data services supplied by Sun are registered with the -b option to hareg(1M). To check whether a base directory is defined, view the file /etc/opt/SUNWhadf/hadf/.hadfconfig_services and look for the SERVICE_BASEDIR= entry for your data service. If no entry exists, unregister the data service using the command hareg -u dataservice, then re-register the data service by specifying the -b option to hareg(1M). If you attempt to upgrade while any data services do not have an associated base directory for methods, the upgrade will fail.
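
For example, to check whether a base directory is defined for a data service on a node, and to unregister the data service if it is not, you might use commands similar to the following. The data service name mydataservice is hypothetical; when you re-register, supply the -b option along with the registration options your data service normally uses (see the hareg(1M) man page).


    phys-hahost1# grep SERVICE_BASEDIR= /etc/opt/SUNWhadf/hadf/.hadfconfig_services
    phys-hahost1# hareg -u mydataservice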


How to Upgrade to Sun Cluster 2.2 From HA 1.3

Note -

This procedure assumes you are using an administrative workstation.



Note -

While performing this upgrade, you might see network interface and mediator errors on the console. These messages are side effects of the upgrade and can be ignored safely.


  1. (Solstice HA 1.3 for SAP only) Run hainetconfig(1M) to obtain the current SAP configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hainetconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hainetconfig
    

  2. Load the Sun Cluster 2.2 client packages onto the administrative workstation.

    Refer to Chapter 3, Installing and Configuring Sun Cluster Software to set up the administrative workstation, if you have not done so already.

  3. Stop Solstice HA on the first server to be upgraded.


    phys-hahost1# hastop
    

    If your cluster is already running Solaris 2.6 and you do not want to upgrade to Solaris 7 or Solaris 8, skip to Step 6.

  4. Upgrade the operating environment to Solaris 2.6, Solaris 7, or Solaris 8.

    To upgrade Solaris, you must use the suninstall(1M) upgrade procedure (rather than reinstalling the operating environment). You might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris operating environment. You must install the Entire Distribution software group. See your Solaris advanced system administration documentation for details.


    Note -

    For some hardware platforms, Solaris attempts to configure power management settings to shut down the server automatically if it has been idle for 30 minutes. The cluster heartbeat is not enough to prevent the Sun Cluster servers from appearing idle and shutting down. Therefore, you must disable this feature when you install the Solaris software. The dialog used to configure power management settings is shown in the next code sample. If you do not see this dialog, then your hardware platform does not support this feature. If the dialog appears, you must answer n to the first question and y to the second to configure the server to work correctly in the Sun Cluster environment.



    ****************************************************************
    This system is configured to conserve energy.
    After 30 minutes without activity, the system state will be
    saved to disk and the system will be powered off automatically.
    
    A system that has been suspended in this way can be restored
    back to exactly where it was by pressing the power key.
    The definition of inactivity and the timeout are user
    configurable. The dtpower(1M) man page has more information.
    ****************************************************************
    
    Do you wish to accept this default configuration, allowing
    your system to save its state then power off automatically
    when it has been idle for 30 minutes? (If this system is used
    as a server, answer n. By default autoshutdown is
    enabled.) [y,n,?] n
    
    Autoshutdown disabled.
    
    Should the system save your answer so it won't need to ask
    the question again when you next reboot? (By default the
    question will not be asked again.) [y,n,?] y
    
  5. Update the Solaris kernel files.

    As part of the Solaris upgrade, the files /kernel/drv/sd.conf and /kernel/drv/ssd.conf will be renamed to /kernel/drv/sd.conf:2.x and /kernel/drv/ssd.conf:2.x respectively. New /kernel/drv/sd.conf and /kernel/drv/ssd.conf files will be created. Run the diff(1) command to identify the differences between the old files and the new ones. Copy the additional information that was inserted by Sun Cluster from the old files into the new files. The information will look similar to the following:


    # Start of lines added by Solstice HA
    sd_retry_on_reservation_conflict=0;
    # End of lines added by Solstice HA
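
    For example, you might compare the old and new files as follows, then copy the Solstice HA lines shown above from the old files into the new ones (2.x stands for the previous Solaris release number used in the renamed file names):


    phys-hahost1# diff /kernel/drv/sd.conf:2.x /kernel/drv/sd.conf
    phys-hahost1# diff /kernel/drv/ssd.conf:2.x /kernel/drv/ssd.conf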

  6. Upgrade to Solstice DiskSuite 4.2 or 4.2.1.

    1. Upgrade Solstice DiskSuite using the detailed procedures in your Solstice DiskSuite documentation.

    2. On the local host, upgrade the Solstice DiskSuite mediator package, SUNWmdm.

      For Solstice DiskSuite 4.2, the path to the SUNWmdm package is /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/. For Solstice DiskSuite 4.2.1, the path is /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.8/Packages/.

      In the following example, several existing files are reported as being in conflict. You must answer y at each prompt to install the new files.


      Caution -

      Do not remove the old SUNWmdm package before adding the new one. Doing so will make all data inaccessible.



      phys-hahost1# pkgadd -d /cdrom_path/ SUNWmdm
      
      Processing package instance <SUNWmdm>...
      
      Solstice DiskSuite (Mediator)
      (sparc) 4.2,REV=1998.23.10.09.59.06
      Copyright 1998 Sun Microsystems, Inc. All rights reserved.
      
      ## Executing checkinstall script.
      			This is an upgrade. Conflict approval questions may be
      			displayed. The listed files are the ones that will be
      			upgraded. Please answer "y" to these questions if they are
      			presented.
      Using </> as the package base directory.
      ## Processing package information.
      ## Processing system information.
         10 package pathnames are already properly installed.
      ## Verifying package dependencies.
      ## Verifying disk space requirements.
      ## Checking for conflicts with packages already installed.
      
      The following files are already installed on the system and are being used by another package:
        /etc/opt/SUNWmd/meddb
        /usr/opt <attribute change only>
        /usr/opt/SUNWmd/man/man1m/medstat.1m
        /usr/opt/SUNWmd/man/man1m/rpc.metamedd.1m
        /usr/opt/SUNWmd/man/man4/meddb.4
        /usr/opt/SUNWmd/man/man7/mediator.7
        /usr/opt/SUNWmd/sbin/medstat
        /usr/opt/SUNWmd/sbin/rpc.metamedd
      
      Do you want to install these conflicting files [y,n,?,q] y
      ## Checking for setuid/setgid programs.
      
      This package contains scripts which will be executed with super-user permission during the process of installing this package.
      Do you want to continue with the installation of <SUNWmdm.2> [y,n,?] y
      Installing Solstice DiskSuite (Mediator) as <SUNWmdm.2>
      ...
  7. Before updating the cluster package, remove patch 104996 (the Solstice HA 1.3 SUNWhaor patch), if it is installed.

    When scinstall(1M) updates cluster packages in Step 9, the command attempts to remove a patch on which patch 104996 is dependent. To prevent scinstall(1M) from failing, remove patch 104996 manually now.


    phys-hahost1# patchrm 104996-xx
    

  8. (Solstice HA for SAP only) Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts, before beginning the upgrade to Sun Cluster 2.2.

    Save these scripts to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location. You will restore the scripts in Step 10. Use the following commands:


    # cp /opt/SUNWhasap/clust_progs/hasap_start_all_instances /safe_place
    # cp /opt/SUNWhasap/clust_progs/hasap_stop_all_instances /safe_place
    

  9. Use the scinstall(1M) command to update the cluster packages.

    Select Upgrade from the scinstall(1M) menu. Respond to the prompts that ask for the location of the Framework packages and the cluster name. The scinstall(1M) command replaces Solstice HA 1.3 packages with Sun Cluster 2.2 packages.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Installing: SUNWscins
    Installation of <SUNWscins> was successful.
    Checking on installed package state
    ............
    None of the Sun Cluster software has been installed
    <<Press return to continue>> 
    
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    Enter the number of the package set [6]: 1
    What is the directory where the Framework packages can be found [/cdrom/cdrom0]: .
    
    ** Upgrading from Solstice HA 1.3 **
    What is the name of the cluster? sc-cluster
    ...

  10. (Solstice HA 1.3 for SAP only) Restore the customized scripts saved in Step 8.

    Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 8. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


    phys-hahost1# cd /opt/SUNWcluster/ha/sap
    phys-hahost1# cp /safe_place/hasap_start_all_instances .
    phys-hahost1# cp /safe_place/hasap_stop_all_instances .
    phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
    -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
    -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances
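
    If the restored copies are not executable, you might reset the permissions to match the listing above, for example (run from the /opt/SUNWcluster/ha/sap directory):


    phys-hahost1# chmod 544 hasap_start_all_instances hasap_stop_all_instances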

  11. Add required entries to the /.rhosts file.

    The /.rhosts file contains one or more sets of three IP addresses (depending on the number of nodes in the cluster). These are private network IP addresses used internally by Sun Cluster. During the upgrade, only some of the IP addresses are added to the /.rhosts files; the first IP address in each set is lost. You must manually insert the missing addresses in the /.rhosts file on each node.

    The number of sets you need depends on the number of nodes in the cluster. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below.


    # node 0
    204.152.65.33         # Manually insert this address on all nodes other than node0
    204.152.65.1
    204.152.65.17

    # node 1
    204.152.65.34         # Manually insert this address on all nodes other than node1
    204.152.65.2
    204.152.65.18

    # node 2
    204.152.65.35         # Manually insert this address on all nodes other than node2
    204.152.65.3
    204.152.65.19

    # node 3
    204.152.65.36         # Manually insert this address on all nodes other than node3
    204.152.65.4
    204.152.65.20
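
    For example, in a two-node cluster you might append the missing node 0 address on the other node as follows; the backup file name is arbitrary:


    phys-hahost2# cp /.rhosts /.rhosts.orig
    phys-hahost2# echo "204.152.65.33" >> /.rhosts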

  12. (Solaris 2.6 and 7 only) Use install_scpatches to install Sun Cluster patches from the Sun Cluster product CD-ROM.

    Use the install_scpatches utility to install Sun Cluster patches from the Sun Cluster CD-ROM.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    # ./install_scpatches
    
    Patch install script for Sun Cluster 2.2 July 2000 Release
    
    *WARNING* SYSTEMS WITH LIMITED DISK SPACE SHOULD *NOT* INSTALL PATCHES:
    With or without using the save option, the patch installation process
    will still require some amount of disk space for installation and
    administrative tasks in the /, /usr, /var, or /opt partitions where
    patches are typically installed.  The exact amount of space will
    depend on the machine's architecture, software packages already 
    installed, and the difference in the patched objects size.  To be
    safe, it is not recommended that a patch cluster be installed on a
    system with less than 4 MBytes of available space in each of these
    partitions.  Running out of disk space during installation may result
    in only partially loaded patches.  Check and be sure adequate disk space
    is available before continuing.
    
    Are you ready to continue with install? [y/n]: y
    
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    Installing patches located in /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    Using patch_order file for patch installation sequence
    Installing 107388-03 ... okay.
    Installing 107748-02 ... okay.
    ...
    For more installation messages refer to the installation logfile:
      /var/sadm/install_data/Sun_Cluster_2.2_July_2000_Release_log
    
    Use '/usr/bin/showrev -p' to verify installed patch-ids.
    Refer to individual patch README files for more patch detail.
    Rebooting the system is usually necessary after installation.
    
    #

  13. Install any required or recommended Sun Cluster and volume manager patches.

    Besides those patches installed in Step 12, also obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.
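
    For example, after downloading a patch and unpacking it into a temporary directory (the directory and patch-ID shown here are placeholders), you might install and verify it as follows:


    phys-hahost1# patchadd /var/tmp/patch-ID
    phys-hahost1# showrev -p | grep patch-ID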

  14. Set the PATH environment variable for user root to include the command directories /opt/SUNWcluster/bin and /opt/SUNWpnm/bin. Set the MANPATH environment variable for user root to include /opt/SUNWcluster/man.
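
    For example, for a Bourne or Korn shell you might add lines similar to the following to root's /.profile on each node (the startup file to edit depends on root's login shell):


    PATH=$PATH:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man
    export PATH MANPATH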

  15. Reboot the machine.


    phys-hahost1# reboot
    


    Note -

    During the reboot process, you might see error messages pertaining to the loss of a private network. At this time, it is safe to ignore these error messages.


  16. Switch ownership of disks and data services from the remote host to the upgraded local host.

    1. Stop Solstice HA 1.3 services on the remote host.

      The remote host in this example is phys-hahost2.


      phys-hahost2# hastop
      

    2. After Solstice HA 1.3 is stopped on the remote host, start Sun Cluster 2.2 on the upgraded local host.

      After the hastop(1M) operation has completed, use the scadmin(1M) command to start Sun Cluster 2.2. This causes the upgraded local host to take over all data services. In this example, phys-hahost1 is the local physical host name and sc-cluster is the cluster name.


      phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
      

  17. Re-create instance configuration data for the highly available databases.

    During the upgrade, the instance configuration data is not upgraded for the highly available databases. You must use the appropriate hadbms insert command to manually re-create each database instance, where dbms is the name of the database; for example, haoracle insert, hainformix insert, or hasybase insert.

    Find the pre-upgrade instance configuration information in the /etc/opt/SUNWhadf.obsolete/hadf/hadbms_databases file. For information about the parameters for each hadbms insert command, see the man page for that command and the appropriate database chapter in this book. For example, for information on haoracle(1M), see the haoracle(1M) man page and Chapter 5, Installing and Configuring Sun Cluster HA for Oracle.
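
    For example, for an Oracle instance you might view the saved configuration and then re-create the instance as follows; the exact parameters to haoracle insert depend on your configuration and are not shown here (see the haoracle(1M) man page):


    phys-hahost1# cat /etc/opt/SUNWhadf.obsolete/hadf/haoracle_databases
    phys-hahost1# haoracle insert ...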

  18. Turn on the database instances.

    Use the appropriate hadbms command to turn on each database instance. For example, for Oracle:


    phys-hahost1# haoracle start instance
    

  19. (Sun Cluster HA for SAP only) Unregister and re-register the Sun Cluster HA for SAP data service.

    After the upgrade, the method names for the Sun Cluster HA for SAP data service are incorrect in the Cluster Configuration Database (CCD). To correct the method names, first turn off and unregister the Sun Cluster HA for SAP data service and then register it again in order to log the correct method names in the CCD file. In addition, re-create the SAP instance that you noted in Step 1.

    1. Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -n sap
      

    2. Unregister the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -u sap
      

    3. Register the Sun Cluster HA for SAP data service.

      In this example, CI_logicalhost is the logical host name.


      phys-hahost1# hareg -s -r sap -h CI_logicalhost
      

    4. Run hadsconfig(1M) to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters. Also, refer to the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    5. After setting the configuration parameters, use the hareg(1M) command to activate the data service.


      phys-hahost1# hareg -y sap
      

    6. Manually copy the configuration file, /etc/opt/SUNWscsap/hadsconf, to all other cluster nodes.

      First create the /etc/opt/SUNWscsap/hadsconf directory if it does not exist. Then copy the configuration file to all nodes.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf
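
      For example, on each remote node you might first make sure the directory exists before transferring the file:


      phys-hahost2# mkdir -p /etc/opt/SUNWscsap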
      

  20. Verify operations on the local host.

    1. Verify that the configuration on the local host is stable.


      phys-hahost1# hastat
      

    2. Verify that clients are receiving services from the local host.

  21. Repeat Step 3 through Step 20 on the remote host.

  22. Return the remote host to the cluster.


    phys-hahost2# scadmin startnode
    

  23. After cluster reconfiguration on the remote host is complete, switch over the data services to the remote host from the local host.


    phys-hahost1# haswitch phys-hahost2 hahost2
    

  24. Verify that the Sun Cluster 2.2 configuration on the remote host is in a stable state, and that clients are receiving services.


    phys-hahost2# hastat
    

This completes the procedure to upgrade to Sun Cluster 2.2 from Solstice HA 1.3.

Configuring Mediators When Migrating From Solstice HA 1.3 to Sun Cluster 2.2

This section is only relevant to clusters that were originally set up under Solstice HA 1.3 using Solstice DiskSuite mediators (two-string configurations). It describes changes that are automatically made to a mediator configuration when you upgrade from Solstice HA 1.3 to Sun Cluster 2.2. There is no direct user impact, but you should note the changes in any configuration information you keep on the cluster.

The procedure "How to Upgrade to Sun Cluster 2.2 From HA 1.3" changes the Solstice HA 1.3 mediator configuration. The original Solstice HA 1.3 mediator configuration resembles the following:


Mediator Host(s)     Aliases
ha-red               ha-red-priv1, ha-red-priv2
ha-green             ha-green-priv1, ha-green-priv2

After running the Sun Cluster 2.2 upgrade procedure, this configuration is converted to one similar to the following:


Mediator Host(s)     Aliases
ha-red               204.152.65.34
ha-green             204.152.65.33


Note -

In Solstice HA 1.3, the hosts referred to the private links by physical names, whereas in Sun Cluster 2.2, the private link IP addresses are used.


For more information about configuring mediators for Sun Cluster 2.2, see the dual-string mediators chapter in the Sun Cluster 2.2 System Administration Guide.

Upgrading to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1

To upgrade to Sun Cluster 2.2 from Sun Cluster 2.0 or 2.1, you must upgrade the Sun Cluster client software on the administrative workstation or install server, and then upgrade the Sun Cluster server software on all nodes in the cluster. Use the procedure "How to Upgrade to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1".

Planning the Upgrade

If you are working with clusters of more than two nodes, consider logical host availability when planning your upgrade. Depending on the cluster configuration, it might not be possible for all logical hosts to remain available during the upgrade process. The following configuration examples illustrate upgrade strategies that minimize downtime of logical hosts.

Two Ring (Cascade) Configuration

Table 4-1 shows a four-node cluster with four logical hosts defined. The table shows which physical nodes can master each of the four logical hosts.

To upgrade this configuration, you can remove nodes 1 and 3 from the cluster and upgrade them without losing access to any logical hosts. After you upgrade nodes 1 and 3 there will be a brief service outage while you shut down nodes 2 and 4 and bring up nodes 1 and 3. Nodes 1 and 3 can then provide access to all logical hosts while nodes 2 and 4 are upgraded.

Table 4-1 Four Nodes With Four Logical Hosts

         Logical Host 1   Logical Host 2   Logical Host 3   Logical Host 4
Node 1        X                                                   X
Node 2        X                X
Node 3                         X                X
Node 4                                          X                 X

N+1 Configuration

In an N+1 configuration, one node is the backup for all other nodes in the cluster. Table 4-2 shows the logical host distribution for a four-node N+1 configuration with three logical hosts. In this configuration, upgrade node 4 first. After you upgrade node 4, it can provide all services while nodes 1, 2, and 3 are upgraded.

Table 4-2 Four Nodes With Three Logical Hosts

         Logical Host 1   Logical Host 2   Logical Host 3
Node 1        X
Node 2                         X
Node 3                                          X
Node 4        X                X                X

Using Terminal Concentrator and System Service Processor Monitoring

Sun Cluster 2.2 monitors the Terminal Concentrator (TC), or the System Service Processor (SSP) on E10000 machines, on clusters with more than two nodes. You can use this feature if you are upgrading from Sun Cluster 2.0 to Sun Cluster 2.2. To enable it, you will need to provide the following information to the scinstall(1M) command during the upgrade procedure: the architecture type of each node, the name and IP address of the TC (or the SSP name for E10000 nodes), the physical TC port to which each node is connected, and the TC or SSP password.


Caution -

The TC and SSP passwords are required for failure fencing to work correctly in the cluster. Failure to correctly set the TC or SSP password might cause unpredictable results in the event of a node failure.


Performing the Upgrade

This procedure describes the steps required to upgrade the server software on a Sun Cluster 2.0 or Sun Cluster 2.1 system to Sun Cluster 2.2, with a minimum of downtime. You should become familiar with the entire procedure before starting the upgrade.


Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Caution -

If you are running VxVM with an encapsulated root disk, you must unencapsulate the root disk before installing Sun Cluster 2.2. After you install Sun Cluster 2.2, encapsulate the disk again. Refer to your VxVM documentation for the procedures to encapsulate and unencapsulate the root disk.



Note -

During the upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.



Note -

If you want to use the Cluster Monitor to continue monitoring the cluster during the upgrade, upgrade the server software first and the client software last.


How to Upgrade to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1

This example assumes an N+1 configuration using an administrative workstation.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Stop the first node.


    phys-hahost1# scadmin stopnode
    

  3. If you are upgrading the operating environment or upgrading from SSVM to VxVM, run the command upgrade_start from the new VxVM media.

    In this example, CDROM_path is the path to the scripts on the new VxVM CD-ROM.


    phys-hahost1# CDROM_path/upgrade_start
    

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and also see Chapter 2, Planning the Configuration.

    To upgrade from SSVM to VxVM, refer to your VERITAS Volume Manager documentation.

  4. If you are upgrading the operating environment but not the volume manager, perform the following steps.

    1. Remove the volume manager package.

      For example:


      phys-hahost1# pkgrm SUNWvxvm
      

    2. Upgrade the operating system.

      Refer to your Solaris installation documentation for instructions.

    3. If you are using NIS+, modify the /etc/nsswitch.conf file.

      Ensure that "service," "group," and "hosts" lookups are directed to files first. For example:


      hosts: files nisplus
      services: files nisplus
      group: files nisplus

    4. Restore the volume manager package removed in Step 4a.

      Obtain the volume manager package from the Sun Cluster 2.2 CD-ROM. In this example, CDROM_path is the path to the tools on the VxVM CD-ROM.


      phys-hahost1# pkgadd -d CDROM_path/SUNWvxvm
      

  5. If you upgraded from SSVM to VxVM, run the command upgrade_finish from the VxVM media.

    In this example, CDROM_path is the path to the scripts on the VxVM CD-ROM.


    phys-hahost1# CDROM_path/upgrade_finish
    

  6. Reboot the system.


    Caution -

    You must reboot at this time.


  7. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts in Sun Cluster 2.1, before beginning the upgrade to Sun Cluster 2.2.

      Save the scripts to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Restore the scripts after completing the upgrade. Copy the scripts to a safe location. You will restore the scripts later in Step 9b.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place
      

    2. Remove the SUNWscsap package before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). You must first remove this package. You will add an updated version in Step 9.


      phys-hahost1# pkgrm SUNWscsap
      

  8. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke scinstall(1M) and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client  Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...

  9. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 7. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWscsap
      

    2. Restore the customized scripts saved in Step 7a.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 7a. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  10. If the cluster has more than two nodes and you are upgrading from Sun Cluster 2.0, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.
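
    For example, you might transfer the file with ftp, as is done for other configuration files in this chapter; repeat for each remaining cluster node:


    phys-hahost1# ftp phys-hahost2
    ftp> put /var/tmp/tc_ssp_info /var/tmp/tc_ssp_info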

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

    See Chapter 1, Understanding the Sun Cluster Environment for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is other, and a terminal concentrator is used:


    What type of architecture does phys-hahost1 have? (E10000|other) [other] [?] other
    What is the name of the Terminal Concentrator connected to the serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
    Is 123.4.5.678 the correct IP address for this Terminal Concentrator (yes|no) [yes] [?] yes
    Which physical port on the Terminal Concentrator is phys-hahost1 connected to [?] 2
    What type of architecture does phys-hahost2 have? (E10000|other) [other] [?] other
    Which Terminal Concentrator is phys-hahost2 connected to:
    
    0) cluster-tc       123.4.5.678
    1) Create A New Terminal Concentrator Entry
    
    Select a device [?] 0
    Which physical port on the Terminal Concentrator is phys-hahost2 connected to [?] 3
    The terminal concentrator/system service processor (TC/SSP) information has been stored in file /var/tmp/tc_ssp_data. Please put a copy of this file into /var/tmp on the rest of the nodes in the cluster. This way you don't have to re-enter the TC/SSP values, but you will, however, still be prompted for the TC/SSP passwords.

  11. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the SNMP appendix to the Sun Cluster 2.2 System Administration Guide.

  12. (Solaris 2.6 and 7 only) Use the install_scpatches utility to install Sun Cluster patches from the Sun Cluster product CD-ROM.

    Run the command from the Patches subdirectory on the new Sun Cluster CD-ROM.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    # ./install_scpatches
    
    Patch install script for Sun Cluster 2.2 July 2000 Release
    
    *WARNING* SYSTEMS WITH LIMITED DISK SPACE SHOULD *NOT* INSTALL PATCHES:
    With or without using the save option, the patch installation process
    will still require some amount of disk space for installation and
    administrative tasks in the /, /usr, /var, or /opt partitions where
    patches are typically installed.  The exact amount of space will
    depend on the machine's architecture, software packages already 
    installed, and the difference in the patched objects size.  To be
    safe, it is not recommended that a patch cluster be installed on a
    system with less than 4 MBytes of available space in each of these
    partitions.  Running out of disk space during installation may result
    in only partially loaded patches.  Check and be sure adequate disk space
    is available before continuing.
    
    Are you ready to continue with install? [y/n]: y
    
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    Installing patches located in /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    Using patch_order file for patch installation sequence
    Installing 107388-03 ... okay.
    Installing 107748-02 ... okay.
    ...
    For more installation messages refer to the installation logfile:
      /var/sadm/install_data/Sun_Cluster_2.2_July_2000_Release_log
    
    Use '/usr/bin/showrev -p' to verify installed patch-ids.
    Refer to individual patch README files for more patch detail.
    Rebooting the system is usually necessary after installation.
    
    #

  13. Install any required or recommended Sun Cluster and volume manager patches.

    Besides those patches installed in Step 12, also obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.

  14. Reboot the system.


    Caution -

    You must reboot at this time.


  15. If you are using a shared CCD, put all logical hosts into maintenance mode.


    phys-hahost2# haswitch -m hahost1 hahost2 
    


    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for these clusters, you do not need to put the data services into maintenance mode before beginning the upgrade.


  16. If your configuration includes Oracle Parallel Server (OPS), make sure OPS is halted.

    Refer to your OPS documentation for instructions on halting OPS.

  17. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.


    phys-hahost2# scadmin stopnode
    

  18. Start the upgraded node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  19. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -n sap
      


      Note -

      It is safe to ignore any errors generated while turning off Sun Cluster HA for SAP by running hareg(1M).


    2. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP, for descriptions of the new configuration parameters. Also, refer to the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    3. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    4. Manually copy the configuration file to other nodes in the cluster by using ftp.

      Overwrite the Sun Cluster 2.1 configuration files with the new Sun Cluster 2.2 files.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  20. If you are using a shared CCD and if you upgraded from Sun Cluster 2.0, update the shared CCD now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first.


    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
    

  21. If you stopped the data services previously, restart them on the upgraded node.


    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    

    If your cluster includes Sun Cluster HA for SAP, you must explicitly unregister and re-register the data service, using the following commands. Replace the string CI_logicalhost with the name of the logical host on which the SAP central instance is installed.


    phys-hahost1# hareg -n sap
    phys-hahost1# hareg -u sap
    phys-hahost1# hareg -s -r sap -h CI_logicalhost
    phys-hahost1# hareg -y sap
    

  22. Upgrade the remaining nodes.

    Repeat Step 3 through Step 14 on the remaining Sun Cluster 2.0 or Sun Cluster 2.1 nodes.

  23. After each node is upgraded, add it to the cluster.


    phys-hahost2# scadmin startnode sc-cluster
    

  24. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in the chapter on Sun Cluster administration tools in the Sun Cluster 2.2 System Administration Guide.

This completes the upgrade to Sun Cluster 2.2 from Sun Cluster 2.0 or 2.1.

Upgrading to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7

Use the procedures in the following sections to upgrade to Sun Cluster 2.2 on Solaris 8 from earlier releases of Sun Cluster 2.2 on Solaris 2.6 or 7.


Note -

No upgrade is necessary to move to this release of Sun Cluster 2.2 on Solaris 8 from previous releases of Sun Cluster 2.2 on Solaris 8. Simply update all cluster nodes with any applicable Sun Cluster and Solaris patches, available from your service provider or from the Sun patch website, http://sunsolve.sun.com.


Upgrade Procedures - Solstice DiskSuite

This section describes the upgrade to Sun Cluster 2.2 on Solaris 8 from Sun Cluster 2.2 on Solaris 2.6 or Solaris 7, for clusters using Solstice DiskSuite as the volume manager.

You should become familiar with the entire procedure before starting the upgrade. For your convenience, have your volume manager-specific documentation at hand for reference.


Caution -

You must take all nodes out of the cluster (that is, take the cluster down) to perform this upgrade. Data and data services will be inaccessible while the cluster is down.



Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.



Note -

The behavior of DNS changes between Solaris 2.6 and Solaris 8. This is because the default bind version differs between these operating environments. This bind change requires an update to some DNS configuration files. See your DNS documentation for details and instructions.


How to Upgrade to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7 (Solstice DiskSuite)

The examples assume an N+1 configuration using an administrative workstation.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Stop all data services.

  3. Stop all nodes and bring down the cluster.

    Run the following command on all nodes.


    phys-hahost1# scadmin stopnode
    

  4. On all nodes, upgrade the operating system.

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and also see Chapter 2, Planning the Configuration.

  5. If you are using NIS+, modify the /etc/nsswitch.conf file on all nodes.

    Ensure that "service," "group," and "hosts" lookups are directed to files first. For example:


    hosts: files nisplus
    services: files nisplus
    group: files nisplus

  6. Upgrade from Solstice DiskSuite 4.2 to Solstice DiskSuite 4.2.1.

    Solaris 8 requires Solstice DiskSuite 4.2.1.

    1. Add the Solstice DiskSuite 4.2.1 package from the Solstice DiskSuite media, using pkgadd(1M).

      During the pkgadd(1M) operation, several existing files are noted as being in conflict. You must answer y at each pkgadd prompt to install the new files.


      Caution -

      If your original configuration includes mediators, do not remove the old SUNWmdm (mediators) package before adding the new one. Doing so will make all data inaccessible.
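
      A minimal example, assuming the 4.2.1 packages are in a Packages directory on your Solstice DiskSuite media (the mount point, directory layout, and package list vary by distribution; SUNWmdm is needed only if you use mediators):


      phys-hahost1# cd /cdrom/cdrom0/Packages
      phys-hahost1# pkgadd -d . SUNWmdr SUNWmdu SUNWmdx SUNWmdm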


    2. Install any applicable Solstice DiskSuite patches.

    3. Reboot all nodes.

      At this time, you must reboot all cluster nodes.

  7. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts.

      Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Restore the scripts after completing the upgrade. Copy the scripts to a safe location. You will restore the scripts later in Step 9.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place
      

    2. Remove the SUNWscsap package from all nodes before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). You must remove this package now. You will add an updated version in Step 9.


      phys-hahost1# pkgrm SUNWscsap
      

  8. On all nodes, update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke scinstall(1M) and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...

  9. (Sun Cluster HA for SAP only) Perform the following steps.

    1. On all nodes using SAP, add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 7. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWscsap
      

    2. On all nodes using SAP, restore the customized scripts saved in Step 7.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 7. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  10. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond). Do this on all nodes.

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the SNMP appendix in the Sun Cluster 2.2 System Administration Guide.

  11. On all nodes, install any required or recommended Sun Cluster patches.

    Obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.

  12. Start the cluster and add all nodes to it.

    Start the cluster by running the following command on the first node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster 
    

    Then add each node to the cluster by running the following command on each node, sequentially. Allow the cluster to reconfigure before you add each subsequent node.


    phys-hahost2# scadmin startnode
    

  13. If necessary, update device IDs.

    While starting the cluster, if you received error messages pertaining to invalid device IDs, you must follow these steps to update the device IDs.

    1. On any node, make a backup copy of the /etc/did.conf file.
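
      For example (the backup file name is arbitrary):


      phys-hahost1# cp /etc/did.conf /etc/did.conf.orig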

    2. Get a list of all affected instance numbers by running the following command from node 0. The output of the command will indicate the instance numbers.


      phys-hahost1# scdidadm -l
      

    3. From node 0 only, update device IDs by running the following command.

      This command re-initializes the devices for all multihost disks and for the local disks on node 0. The command must be run from the node defined as node 0. Specify the instance numbers of all multihost disks. You must run the command once for each multihost disk in the cluster.


      Caution -

      Use extreme caution when running the scdidadm -R command. Use only upper case R, never lower case. Lower case r might re-assign device numbers to all disks, making data inaccessible. See the scdidadm(1M) man page for more information.


      phys-hahost1# scdidadm -R instance_number1
      ...
      phys-hahost1# scdidadm -R instance_number2
      ...
      phys-hahost1# scdidadm -R instance_number3
      ...



      Note -

      The scdidadm -R command does not re-initialize the local disks on cluster nodes other than node 0. This is acceptable, because the device IDs of local disks are not used by Sun Cluster. However, you will see error messages related to this, for all nodes other than node 0. These error messages are expected and can be ignored safely.


    4. Stop all cluster nodes.

    5. Reboot all cluster nodes.
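
      For example, you might use the same commands used elsewhere in this chapter on every node:


      phys-hahost2# scadmin stopnode    # run on every node first
      phys-hahost2# reboot              # then reboot every node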

  14. Start the cluster and add all nodes to it.

    Start the cluster on the first node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    

    Then run the following command on each node, sequentially. Allow the cluster to reconfigure before you add each subsequent node.


    phys-hahost2# scadmin startnode
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  15. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    2. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    3. Manually copy the configuration file to other nodes in the cluster.

      Overwrite the old Sun Cluster 2.2 configuration files with the new Sun Cluster 2.2 files. For example:


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  16. Start the data services.

    Run the following commands for all data services.


    phys-hahost1# hareg -s -r dataservice -h CI_logicalhost
    phys-hahost1# hareg -y dataservice
    

  17. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide. Also see the Sun Cluster 2.2 Release Notes.

This completes the upgrade to the latest version of Sun Cluster 2.2 on Solaris 8 from earlier versions of Sun Cluster 2.2 on Solaris 2.6 or 7.

Upgrade - VERITAS Volume Manager

This section describes the upgrade to Sun Cluster 2.2 on Solaris 8 from Sun Cluster 2.2 on Solaris 2.6 or Solaris 7, for clusters using VERITAS Volume Manager (VxVM, with or without the cluster feature). The VxVM cluster feature is used with Oracle Parallel Server.

This procedure describes the steps required to upgrade the software on a Sun Cluster 2.2 system running VERITAS Volume Manager and Solaris 2.6 or 7 to the latest version of Sun Cluster 2.2 running Solaris 8.

You should become familiar with the entire procedure before starting the upgrade. For your convenience, have your volume manager-specific documentation at hand for reference.


Caution - Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Caution - Caution -

If you are running VxVM with an encapsulated root disk, you must unencapsulate the root disk before installing Sun Cluster 2.2. After you install Sun Cluster 2.2, encapsulate the root disk again. Refer to your VxVM documentation for the procedures to encapsulate and unencapsulate the root disk.



Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.



Note -

The behavior of DNS changes between Solaris 2.6 and Solaris 8. This is because the default bind version differs between these operating environments. This bind change requires an update to some DNS configuration files. See your DNS documentation for details and instructions.


How to Upgrade to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7 (VERITAS Volume Manager)

The examples assume an N+1 configuration using an administrative workstation.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and note the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP, for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Select a node to upgrade first and switch any data services from that node to backup nodes.

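    For example, the following sketch uses haswitch(1M) to move the logical host hahost1, and with it the data services, from phys-hahost1 to its backup node phys-hahost2. The node and logical host names are the ones assumed elsewhere in this example; substitute your own.


    phys-hahost1# haswitch phys-hahost2 hahost1
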
  3. If your cluster includes OPS, make sure OPS is halted.

  4. Stop the node you are upgrading.


    phys-hahost1# scadmin stopnode
    

  5. If you are upgrading the operating environment, start the volume manager upgrade using your VERITAS documentation.

    You must perform some volume manager-specific tasks before upgrading the operating environment. See your VERITAS documentation for detailed information. You will finish the volume manager upgrade after you upgrade the operating environment.

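    As a rough sketch only: many VxVM releases provide upgrade_start and upgrade_finish scripts in the scripts directory of the VxVM distribution for exactly this split, with upgrade_start run before the operating environment upgrade and upgrade_finish run afterward in Step 8. The script name and path shown below are assumptions; confirm them against your VERITAS documentation before running anything.


    # Hypothetical location; substitute the scripts directory of your VxVM distribution.
    phys-hahost1# /cdrom/cdrom0/scripts/upgrade_start
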
  6. Upgrade the operating environment.

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and also see Chapter 2, Planning the Configuration.

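    One common path, sketched here under the assumption that you are upgrading a SPARC node from local Solaris 8 installation media, is to halt the node, boot from the installation media, and choose the upgrade option (rather than an initial installation) when the installer prompts for the installation type:


    phys-hahost1# init 0
    ok boot cdrom
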
  7. If you are using NIS+, modify the /etc/nsswitch.conf file.

    Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:


    hosts: files nisplus
    services: files nisplus
    group: files nisplus

  8. Complete the volume manager upgrade, using your VERITAS documentation.

  9. Reboot the node.

  10. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts.

      Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location now; you will restore them later in Step 12.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place
      

    2. Remove the SUNWscsap package from all nodes before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). Remove the package now; you will add an updated version in Step 12.


      phys-hahost1# pkgrm SUNWscsap
      

  11. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke scinstall(1M) and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...

  12. (Sun Cluster HA for SAP only) Perform the following steps.

    1. On all nodes using SAP, add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 10. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product SUNWscsap
      

    2. On all nodes using SAP, restore the customized scripts saved in Step 10.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 10. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  13. If the cluster has more than two nodes, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.


    SC2.2 uses the terminal concentrator (or system service processor in the case of an E10000) for failure fencing. During the SC2.2 installation the IP address for the terminal concentrator along with the physical port numbers that each server is connected to is requested. This information can be changed using scconf.
    
    After the upgrade has completed you need to run scconf to specify terminal concentrator information for each server. This will need to be done on each server in the cluster.
    
    The specific commands that need to be run are:
    
    scconf clustername -t <nts name> -i <nts name|IP address>
    scconf clustername -H <node 0> -p <serial port for node 0> \
    -d <other|E10000> -t <nts name>
    
    Repeat the second command for each node in the cluster. Repeat the first command if you have more than one terminal concentrator in your configuration.
    Or you can choose to set this up now. The information you will need is:
    
    			+terminal concentrator/system service processor names
    			+the architecture type (E10000 for SSP or other for tc)
    			+the ip address for the terminal concentrator/system service 
             processor (these will be looked up based on the name, you 
             will need to confirm)
    			+for terminal concentrators, you will need the physical 
             ports the systems are connected to (physical ports 
             (2,3,4... not the telnet ports (5002,...)
    
    Do you want to set the TC/SSP info now (yes/no) [no]?  y
    

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

      See Chapter 1, Understanding the Sun Cluster Environment for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is other, and a terminal concentrator is used:


      What type of architecture does phys-hahost1 have? (E10000|other) [other] [?] other
      What is the name of the Terminal Concentrator connected to the serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
      Is 123.4.5.678 the correct IP address for this Terminal Concentrator (yes|no) [yes] [?] yes
      Which physical port on the Terminal Concentrator is phys-hahost1 connected to [?] 2
      What type of architecture does phys-hahost2 have? (E10000|other) [other] [?] other
      Which Terminal Concentrator is phys-hahost2 connected to:
      
      0) cluster-tc       123.4.5.678
      1) Create A New Terminal Concentrator Entry
      
      Select a device [?] 0
      Which physical port on the Terminal Concentrator is phys-hahost2 connected to [?] 3
      The terminal concentrator/system service processor (TC/SSP) information has been stored in file /var/tmp/tc_ssp_data. Please put a copy of this file into /var/tmp on the rest of the nodes in the cluster. This way you don't have to re-enter the TC/SSP values, but you will, however, still be prompted for the TC/SSP passwords.

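    One way to distribute the saved file to the remaining cluster nodes is with rcp(1), as sketched below. This assumes the file name given earlier in this step (use the name that scinstall(1M) actually reports on your system) and that root rcp access is available between the nodes; the scinstall(1M) upgrade adds the private link addresses to /.rhosts. Repeat the copy for each additional node.


    phys-hahost1# rcp /var/tmp/tc_ssp_info phys-hahost2:/var/tmp/
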
  14. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number by using the procedure described in the SNMP appendix of the Sun Cluster 2.2 System Administration Guide.

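    The procedure for changing the port is in the SNMP appendix. As an optional sanity check (a sketch, not part of the documented procedure), you can confirm the default port assignment and see whether anything is already bound to UDP port 161:


    phys-hahost1# grep 161 /etc/services
    phys-hahost1# netstat -an | grep 161
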
  15. Install any required or recommended Sun Cluster and volume manager patches.

    Obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.

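    For example, a sketch using patchadd(1M) with a hypothetical patch ID; substitute the directory and patch IDs named in the patch README files.


    phys-hahost1# cd /var/tmp/patches
    phys-hahost1# patchadd 123456-01
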
  16. Reboot the node.


    Caution - Caution -

    You must reboot at this time.


  17. If you are using a shared CCD, put all logical hosts into maintenance mode.


    phys-hahost2# haswitch -m hahost1 hahost2 
    


    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for these clusters, you do not need to put the logical hosts into maintenance mode at this time.


  18. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.


    phys-hahost2# scadmin stopnode
    

  19. Start the cluster on the upgraded node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  20. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    2. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    3. Manually copy the configuration file to other nodes in the cluster by using ftp.

      Overwrite the old Sun Cluster 2.2 configuration files with the new Sun Cluster 2.2 files.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  21. If you are using a shared CCD, update it now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first.


    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
    

  22. Restart the data services on the upgraded node.

    Run the following commands for each data service.


    phys-hahost1# hareg -s -r dataservice -h CI_logicalhost
    phys-hahost1# hareg -y dataservice
    

  23. Repeat Steps 4 through 15 on the next node to be upgraded.

    Repeat the upgrade procedure for each node, sequentially.

  24. After each node is upgraded, add it to the cluster.


    phys-hahost2# scadmin startnode sc-cluster
    

  25. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide. Also see the Sun Cluster 2.2 Release Notes.

This completes the upgrade to the latest version of Sun Cluster 2.2 on Solaris 8 from earlier versions of Sun Cluster 2.2 on Solaris 2.6 or 7.