Sun Cluster 2.2 Software Installation Guide

Chapter 4 Upgrading Sun Cluster Software

This chapter contains guidelines and procedures for upgrading to Sun Cluster 2.2 from Solstice HA 1.3, Sun Cluster 2.0, and Sun Cluster 2.1.

The software to be upgraded might include the Solaris operating environment, Sun Cluster, and volume management software (Solstice DiskSuite, Sun StorEdge Volume Manager, or Cluster Volume Manager).

This chapter includes the following procedures:

• "4.2.1 How to Upgrade From Solstice HA 1.3 to Sun Cluster 2.2"

• "4.3.4 How to Upgrade the Client Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2"

• "4.3.5 How to Upgrade the Server Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2"

4.1 Upgrade Overview

This section describes the procedures for upgrading to Sun Cluster 2.2 from existing Solstice HA 1.3, Sun Cluster 2.0, and Sun Cluster 2.1 configurations. The upgrade paths documented here preserve the existing cluster configuration and data. Your systems can remain online and available during most of the upgrade, keeping interruption to services minimal.

To upgrade from Solstice HA 1.3, see "4.2 Upgrading From Solstice HA 1.3 to Sun Cluster 2.2".

To upgrade from Sun Cluster 2.0 or 2.1, see "4.3 Upgrading From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2".


Note -

Converting from Solstice DiskSuite to SEVM 2.5, SSVM or CVM is not supported by Sun Cluster 2.2.


If you also want to make configuration changes such as adding disks or services, first complete the upgrade and then make the configuration changes by following the procedures documented in the Sun Cluster 2.2 System Administration Guide.

Before starting your upgrade, make sure the versions of any applications you plan to run are compatible with the version of the Solaris operating environment you plan to run.

To upgrade Solaris, you might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris 2.6 or Solaris 7 environment. You must install the Entire Distribution Solaris software packages. See the Solaris Advanced Installation Guide for details.
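
If you are not sure whether the existing partitions are large enough, you can check the current file system usage and slice layout before you begin. The following is a minimal sketch; the disk device name is an example and will differ on your servers:

    phys-hahost1# df -k / /usr
    phys-hahost1# prtvtoc /dev/rdsk/c0t0d0s2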

4.2 Upgrading From Solstice HA 1.3 to Sun Cluster 2.2


Note -

While performing this upgrade, you might see network interface and mediator errors on the console. These messages are side effects of the upgrade and can be ignored safely.


4.2.1 How to Upgrade From Solstice HA 1.3 to Sun Cluster 2.2

Following is an overview of the steps to upgrade from Solstice HA 1.3 to Sun Cluster 2.2: stop Solstice HA on one server; upgrade the Solaris operating environment, Solstice DiskSuite, and the cluster software on that server; switch services to the upgraded server; then repeat the process on the other server. You can perform the upgrade either from an administrative workstation or from the console of any physical host in the cluster. Upgrading by using an administrative workstation provides the most flexibility during the process.


Note -

This procedure assumes you are using an administrative workstation.



Caution -

Back up all local and multihost disks before starting the upgrade. Also, all systems must be operable and robust. Do not attempt to upgrade if systems are experiencing any difficulties.



Caution -

On each node, if you customized hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2. Restore the scripts after completing the upgrade. Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts.

The configuration parameters implemented in Sun Cluster 2.2 are different from those implemented in Solstice HA 1.3 and Sun Cluster 2.1. Therefore, after upgrading to Sun Cluster 2.2, you will have to re-configure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables. For Solstice HA 1.3, use the hainetconfig(1M) command to view the configuration. For Sun Cluster 2.1, use the hadsconfig(1M) command to view the configuration. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.
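
For example, the following is a minimal sketch of saving the customized scripts and recording the existing Sun Cluster HA for SAP configuration before the upgrade. The script location shown is a placeholder; use the paths that apply to your installation:

    phys-hahost1# mkdir /var/tmp/hasap.save
    phys-hahost1# cp -p /path/to/hasap_start_all_instances \
     /path/to/hasap_stop_all_instances /var/tmp/hasap.save
    phys-hahost1# hainetconfig

After the upgrade, copy the saved scripts back into place and use hadsconfig(1M) to re-create the instance.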


These are the detailed steps to upgrade from Solstice HA 1.3 to Sun Cluster 2.2.

  1. Load the Sun Cluster 2.2 client packages onto the administrative workstation.

    Refer to "3.2 Installation Procedures", to set up the administrative workstation, if you have not done so already.

  2. Stop Solstice HA on the first server to be upgraded.

    phys-hahost1# hastop
    

    If your cluster is already running Solaris 2.6, and you do not want to upgrade to Solaris 7, skip to Step 5.

  3. Upgrade the operating environment to Solaris 2.6 or Solaris 7.

    To upgrade Solaris, you must use the suninstall(1M) upgrade procedure (rather than reinstalling the operating environment). You might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris 2.6 or Solaris 7 environment. You must install the Entire Distribution Solaris software packages. See the Solaris Advanced Installation Guide for details.


    Note -

    On some hardware platforms, Solaris 2.6 and Solaris 7 attempt to configure power management settings that shut down the server automatically if it has been idle for 30 minutes. The cluster heartbeat is not enough to prevent the Sun Cluster servers from appearing idle and shutting down. Therefore, you must disable this feature when you install Solaris 2.6 or Solaris 7. The dialog used to configure power management settings is shown below. If you do not see this dialog, your hardware platform does not support this feature. If the dialog appears, answer n to the first question and y to the second to configure the server to work correctly in the Sun Cluster environment.


    ****************************************************************
     This system is configured to conserve energy.
     After 30 minutes without activity, the system state will be
     saved to disk and the system will be powered off automatically.
    
     A system that has been suspended in this way can be restored
     back to exactly where it was by pressing the power key.
     The definition of inactivity and the timeout are user
     configurable. The dtpower(1M) man page has more information.
     ****************************************************************
    
     Do you wish to accept this default configuration, allowing
     your system to save its state then power off automatically
     when it has been idle for 30 minutes?  (If this system is used
     as a server, answer n. By default autoshutdown is
     enabled.) [y,n,?] n
    
     Autoshutdown disabled.
    
     Should the system save your answer so it won't need to ask
     the question again when you next reboot? (By default the
     question will not be asked again.) [y,n,?] y
    
  4. Update the Solaris 2.6 or Solaris 7 kernel files.

    As part of the Solaris upgrade, the files /kernel/drv/sd.conf and /kernel/drv/ssd.conf will be renamed to /kernel/drv/sd.conf:2.x and /kernel/drv/ssd.conf:2.x respectively. New /kernel/drv/sd.conf and /kernel/drv/ssd.conf files will be created. Run the diff(1) command to identify the differences between the old files and the new ones. Copy the additional information that was inserted by Sun Cluster from the old files into the new files. The information will look similar to the following:

    # Start of lines added by Solstice HA
     sd_retry_on_reservation_conflict=0;
     # End of lines added by Solstice HA
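
    For example, to compare the renamed files with the new ones:

    phys-hahost1# diff /kernel/drv/sd.conf:2.x /kernel/drv/sd.conf
    phys-hahost1# diff /kernel/drv/ssd.conf:2.x /kernel/drv/ssd.conf
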
  5. Upgrade to Solstice DiskSuite 4.2.

    1. Upgrade Solstice DiskSuite using the detailed procedure in the Solstice DiskSuite 4.2 Installation and Product Notes.

    2. On the local host, upgrade the Solstice DiskSuite mediator package, SUNWmdm.

      phys-hahost1# pkgadd -d /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/ \
       Product/ SUNWmdm
      
       Processing package instance <SUNWmdm>...
      
       Solstice DiskSuite (Mediator)
       (sparc) 4.2,REV=1998.23.10.09.59.06
       Copyright 1998 Sun Microsystems, Inc. All rights reserved.
      
       ## Executing checkinstall script.
       			This is an upgrade. Conflict approval questions may be
       			displayed. The listed files are the ones that will be
       			upgraded. Please answer "y" to these questions if they are
       			presented.
       Using </> as the package base directory.
       ## Processing package information.
       ## Processing system information.
          10 package pathnames are already properly installed.
       ## Verifying package dependencies.
       ## Verifying disk space requirements.
       ## Checking for conflicts with packages already installed.
      
       The following files are already installed on the system and are 
       being used by another package:
         /etc/opt/SUNWmd/meddb
         /usr/opt <attribute change only>
         /usr/opt/SUNWmd/man/man1m/medstat.1m
         /usr/opt/SUNWmd/man/man1m/rpc.metamedd.1m
         /usr/opt/SUNWmd/man/man4/meddb.4
         /usr/opt/SUNWmd/man/man7/mediator.7
         /usr/opt/SUNWmd/sbin/medstat
         /usr/opt/SUNWmd/sbin/rpc.metamedd
      
       Do you want to install these conflicting files [y,n,?,q] y
      ## Checking for setuid/setgid programs.
      
       This package contains scripts which will be executed with super-user 
       permission during the process of installing this package.
      
       Do you want to continue with the installation of <SUNWmdm.2> [y,n,?] y
      
       Installing Solstice DiskSuite (Mediator) as <SUNWmdm.2>
       ...
  6. From the root (/) directory on the local host, use the scinstall(1M) command to update the cluster packages.

    Select Upgrade from the scinstall(1M) menu. Respond to the prompts asking for the location of the Framework packages and cluster name. The scinstall(1M) command replaces Solstice HA 1.3 packages with Sun Cluster 2.2 packages.

    phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    Installing: SUNWscins
    
     Installation of <SUNWscins> was successful.
    
             Checking on installed package state
     ............
    
     None of the Sun Cluster software has been installed
    
             <<Press return to continue>>
    
     ==== Install/Upgrade Software Selection Menu =======================
     Upgrade to the latest Sun Cluster Server packages or select package
     sets for installation. The list of package sets depends on the Sun
     Cluster packages that are currently installed.
    
     Choose one:
     1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
     2) Server             Install the Sun Cluster packages needed on a server
     3) Client             Install the admin tools needed on an admin workstation
     4) Server and Client  Install both Client and Server packages
    
     5) Close              Exit this Menu
     6) Quit               Quit the Program
     
     Enter the number of the package set [6]: 1
    
     What is the directory where the Framework packages can be found
     [/cdrom/cdrom0]: .
    
     ** Upgrading from Solstice HA 1.3 **
    
     What is the name of the cluster? sc-cluster
    ...
  7. Install the required patches for Sun Cluster 2.2.

    Install all applicable Solstice DiskSuite and Sun Cluster patches. If you are using SPARCstorage Arrays, the latest SPARCstorage Array patch should have been installed when you installed the operating environment. Obtain the necessary patches from Sun Enterprise Services. Use the instructions in the patch README files to install the patches.
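
    For example, patches are typically applied with the patchadd(1M) command; the staging directory and patch ID below are placeholders, and each patch README might specify a different procedure:

    phys-hahost1# patchadd /var/tmp/patches/<patch-id>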

  8. Reboot the machine.

    phys-hahost1# reboot
    
  9. Switch ownership of disks and data services from the remote host to the upgraded local host.

    1. Stop Solstice HA 1.3 services on the remote host.

      The remote host in this example is phys-hahost2.

      phys-hahost2# hastop
      
    2. After Solstice HA 1.3 is stopped on the remote host, start Sun Cluster 2.2 on the upgraded local host.

      Since the remote host is no longer running HA, use the scadmin(1M) command to start Sun Cluster. This causes the upgraded local host to take over all data services. In this example, phys-hahost1 is the local physical host name, and sc-cluster is the cluster name.

      phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
      
    3. Verify that the configuration on the local host is stable.

      phys-hahost1# hastat
      
    4. Verify that clients are receiving services from the local host.

  10. Repeat Step 2 through Step 8 on the remote host.

  11. Return the remote host to the cluster.

    phys-hahost2# scadmin startnode
    
  12. After cluster reconfiguration on the remote host is complete, switch over the data services to the remote host from the local host.

    phys-hahost1# haswitch phys-hahost2 hahost2
    
  13. Verify that the Sun Cluster 2.2 configuration on the remote host is in a stable state, and that clients are receiving services.

    phys-hahost2# hastat
    

    This completes the procedure to upgrade from Solstice HA 1.3 to Sun Cluster 2.2.

4.3 Upgrading From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2

To upgrade from Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2, you must upgrade the Sun Cluster client software on the administrative workstation or install server, and then upgrade the Sun Cluster server software on all nodes in the cluster. Use the two procedures described in "4.3.3 Performing the Upgrade".

4.3.1 Planning the Upgrade

If you are working with a cluster that has more than two nodes, consider logical host availability when planning your upgrade. Depending on the cluster configuration, it might not be possible for all logical hosts to remain available during the upgrade process. The following configuration examples illustrate upgrade strategies that minimize downtime of logical hosts.

Two Ring (Cascade) Configuration

Table 4-1 shows a four-node cluster with four logical hosts defined. The table shows which physical nodes can master each of the four logical hosts.

To upgrade this configuration, you can remove nodes 1 and 3 from the cluster and upgrade them without losing access to any logical hosts. After you upgrade nodes 1 and 3 there will be a brief service outage while you take down nodes 2 and 4 and bring up nodes 1 and 3. Nodes 1 and 3 can then provide access to all logical hosts while nodes 2 and 4 are upgraded.

Table 4-1 Four Nodes with Four Logical Hosts

            Logical Host 1   Logical Host 2   Logical Host 3   Logical Host 4
Node 1      X                                                  X
Node 2      X                X
Node 3                       X                X
Node 4                                        X                X

N+1 Configuration

In an N+1 configuration, one node is the backup for all other nodes in the cluster. Table 4-2 shows the logical host distribution for a four-node N+1 configuration with three logical hosts. In this configuration, upgrade node 4 first. After you upgrade node 4, it can provide all services while nodes 1, 2, and 3 are upgraded.

Table 4-2 Four Nodes with Three Logical Hosts

            Logical Host 1   Logical Host 2   Logical Host 3
Node 1      X
Node 2                       X
Node 3                                        X
Node 4      X                X                X

4.3.2 Using Terminal Concentrator and System Service Processor Monitoring

Sun Cluster 2.2 monitors the Terminal Concentrator (TC), or the System Service Processor (SSP) on E10000 machines, on clusters with more than two nodes. You can use this feature if you are upgrading from Sun Cluster 2.0 to Sun Cluster 2.2. To enable it, you will need to provide the following information to the scinstall(1M) command during the upgrade procedure: the TC or SSP name, the architecture type (E10000 for an SSP, other for a TC), the TC or SSP IP address, the physical TC port to which each node is connected, and the TC or SSP password.


Caution -

The TC and SSP passwords are required for failure fencing to work correctly in the cluster. Failure to correctly set the TC or SSP password might cause unpredictable results in the event of a node failure.
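
For reference, using the example cluster configuration shown in "4.3.5 How to Upgrade the Server Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2" (cluster sc-cluster, terminal concentrator cluster-tc, node phys-hahost1 on physical port 2), the scconf(1M) commands to register the TC information would look like the following sketch; substitute your own cluster, TC, and node names:

    phys-hahost1# scconf sc-cluster -t cluster-tc -i 123.4.5.678
    phys-hahost1# scconf sc-cluster -H phys-hahost1 -p 2 -d other -t cluster-tc

Repeat the second command on each node, using that node's name and physical port.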


4.3.3 Performing the Upgrade

Use the two procedures in this section to perform the upgrade. You should also have the Sun StorEdge Volume Manager Installation Guide available for reference.


Note -

If you want to use the Cluster Monitor to continue monitoring the cluster during the upgrade, then upgrade the server software first and the client software last. That is, first perform the procedure "4.3.5 How to Upgrade the Server Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2" and then perform the procedure "4.3.4 How to Upgrade the Client Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2".



Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Caution -

If you customized hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2. Restore the scripts after completing the upgrade. Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts.



Caution -

The configuration parameters implemented in Sun Cluster 2.2 are different from those implemented in Solstice HA 1.3 and Sun Cluster 2.1. Therefore, after upgrading to Sun Cluster 2.2, you will have to re-configure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables. For Solstice HA 1.3, use the hainetconfig(1M) command to view the configuration. For Sun Cluster 2.1, use the hadsconfig(1M) command to view the configuration. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.


4.3.4 How to Upgrade the Client Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2

Upgrading the client software involves removing the old client software packages from the administrative workstation and replacing them with the new client software packages.

Upgrading the client software includes upgrading the Solaris operating environment on the administrative workstation, removing the Sun Cluster 2.0 or 2.1 client packages, installing the Sun Cluster 2.2 client packages, and, if you use Sun Cluster SNMP, changing the SNMP port number.

These are the detailed steps to upgrade the client software from Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2. This procedure assumes you are using an administrative workstation.

  1. Upgrade the Solaris operating environment on the administrative workstation to Solaris 2.6 or 7.

    For details, see the Solaris Advanced System Administration Guide and Chapter 2, Planning the Configuration.

    1. Partition your local disk according to Sun Cluster guidelines.

      See Chapter 2, Planning the Configuration, for details.

    2. Install Solaris patches.

      Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, or your volume management software.

      Install any required patches by following the instructions in the README file accompanying each patch.

    3. Reboot the administrative workstation.

  2. Load the Sun Cluster 2.2 CD-ROM on the administrative workstation.

  3. Use the Sun Cluster 2.0 or 2.1 version of scinstall(1M) to remove the 2.0 or 2.1 client software packages from the administrative workstation.

    As root, invoke the Sun Cluster 2.0 or 2.1 version of scinstall(1M). Select "remove" from the scinstall(1M) menu and remove the Sun Cluster 2.0 or 2.1 client packages. The screens may vary, depending on your software version.

    # /opt/SUNWcluster/bin/scinstall
    
     ====================================================
     	Sun Cluster package manager
     	Version: 2.1,rev 1.9
    
     	Checking on installed package state.............
    
     ================= Package Set Selection ================
    
     The Sun Cluster software packages can be selected in sets,
     depending on the current state of installation
    
     Choose the appropriate set from the choices below:
        all       All the client and server packages in this machine
        client    All the admin tools needed on an admin workstation
        server    All the Sun Cluster packages needed on a server
     ...
     Select: [all client server] [all]: client
    ...
    
     =============== Sun Cluster Installation Manager ==================
      
     Current package set: client packages
     ...
     Choices:
        choose     Select the package set to manipulate
        install    Install the selected package sets
        remove     Remove the selected package sets
        obsolete   Remove obsolete packages
        verify     Sanity check the current installation
     ...
     Command: [choose install remove obsolete verify] [install]: remove
    ...
     Mode [manual automatic] [manual]: automatic
    
  4. Exit the Sun Cluster 2.0 or 2.1 version of scinstall(1M).

  5. Use the Sun Cluster 2.2 version of scinstall(1M) to install the Sun Cluster 2.2 client software packages on the administrative workstation.

    1. From the scinstall(1M) main menu, select the client package set:

      # cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
      # ./scinstall
      
       ==== Install/Upgrade Software Selection Menu =====================
       Upgrade to the latest Sun Cluster Server packages or select package
       sets for installation. The list of package sets depends on the Sun
       Cluster packages that are currently installed.
      
       Choose one:
       1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
       2) Server             Install the Sun Cluster packages needed on a server
       3) Client             Install the admin tools needed on an admin workstation
       4) Server and Client  Install both Client and Server packages

       5) Close              Exit this Menu
       6) Quit               Quit the Program
       
       Enter the number of the package set [6]:  3
      
    2. Choose an install path for the client packages.

      Normally the default location is acceptable:

      What is the path to the CD-ROM image [/cdrom/cdrom0]: 
    3. Install the client packages.

      Note that your packages might differ from those shown in the example.

      Installing Client packages
       
       Installing the following packages: SUNWscch SUNWccon SUNWccp
       SUNWcsnmp SUNWscsdb
       
                   >>>> Warning <<<<
         The installation process will run several scripts as root.  In
         addition, it may install setUID programs.  If you choose automatic
         mode, the installation of the chosen packages will proceed without
         any user interaction.  If you wish to manually control the install
         process you must choose the manual installation option.
       
       Choices:
        manual       Interactively install each package
        automatic    Install the selected packages with no user interaction.
       
       In addition, the following commands are supported:
          list        Show a list of the packages to be installed
          help        Show this command summary
          close       Return to previous menu
          quit        Quit the program
       
       
       Install mode [manual automatic] [automatic]:  automatic
      

      The scinstall(1M) command now installs the client packages. After the packages have been installed, the main scinstall(1M) menu is displayed.

  6. Verify the client installation and then quit scinstall(1M).

    From the main menu, you can choose to verify the installation. Then quit to exit scinstall(1M).

    ============ Main Menu =================
     
     1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
     2) Remove  - Remove Server or Client Packages.
     3) Verify  - Verify installed package sets.
     4) List    - List installed package sets.
    
     5) Quit    - Quit this program.
     6) Help    - The help screen for this menu.
    
     Please choose one of the menu items: [6]:  3
    
     ==== Verify Package Installation ==========================
     Installation
        All of the install     packages have been installed
     Framework
        All of the client      packages have been installed
        None of the server     packages have been installed
     Communications
        None of the SMA        packages have been installed
     ...
  7. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the appendix on Sun Cluster SNMP management solutions in the Sun Cluster 2.2 System Administration Guide. You must stop and restart both the snmpd and smond daemons after changing the port number.
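
    As a quick sanity check (not part of the documented procedure), you can list what is bound to the default SNMP port before and after changing it:

    # netstat -an | grep '\.161 '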

4.3.5 How to Upgrade the Server Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2

This procedure describes the steps required to upgrade the server software on a Sun Cluster 2.0 or Sun Cluster 2.1 system to Sun Cluster 2.2, with a minimum of downtime. You should become familiar with the entire procedure before starting the upgrade.

Upgrading the server software includes stopping the cluster software on each node, upgrading the Solaris operating environment and the volume manager if necessary, upgrading the Sun Cluster packages with scinstall(1M), supplying the TC/SSP information where required, and rebooting each node and returning it to the cluster.


Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.


These are the detailed steps to upgrade the server software from Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2. This example assumes an N+1 configuration using SSVM.

  1. Stop the first node.

    phys-hahost1# scadmin stopnode
    
  2. If you are upgrading the operating environment and/or SSVM or CVM, run the command upgrade_start from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_start
    

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and see also Chapter 2, Planning the Configuration.

    If you are upgrading to Solaris 7, you must use SSVM 3.x. Refer to the Sun StorEdge Volume Manager Installation Guide for details.

    To upgrade CVM, refer to the Sun Cluster 2.2 Cluster Volume Manager Guide.

  3. If you are upgrading the operating environment but not the volume manager, perform the following steps:

    1. Remove the volume manager package.

      Normally, the package name is SUNWvxvm for both SSVM and CVM. For example:

      phys-hahost1# pkgrm SUNWvxvm
      
    2. Upgrade the operating system.

      Refer to the Solaris installation documentation for instructions.

    3. Modify the /etc/nsswitch.conf file.

      Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:

      hosts:    files nisplus
      services: files nisplus
      group:    files nisplus
    4. From the Sun Cluster 2.2 CD-ROM, restore the volume manager package that you removed earlier in this step.

      phys-hahost1# pkgadd -d CDROM_path/SUNWvxvm
      
  4. If you upgraded SSVM or CVM, run the command upgrade_finish from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_finish
    
  5. Reboot the system.

  6. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke the scinstall(1M) command and select the Upgrade option from the menu presented:

    phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
     Removal of <SUNWscins> was successful.
     Installing: SUNWscins
    
     Installation of <SUNWscins> was successful.
         Assuming a default cluster name of sc-cluster
    
     Checking on installed package state............
    
     ============ Main Menu =================
    
     1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
     2) Remove  - Remove Server or Client Packages.
     3) Change  - Modify cluster or data service configuration
     4) Verify  - Verify installed package sets.
     5) List    - List installed package sets.
    
     6) Quit    - Quit this program.
     7) Help    - The help screen for this menu.
    
     Please choose one of the menu items: [7]:  1
    ...
     ==== Install/Upgrade Software Selection Menu =======================
     Upgrade to the latest Sun Cluster Server packages or select package
     sets for installation. The list of package sets depends on the Sun
     Cluster packages that are currently installed.
    
     Choose one:
     1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
     2) Server             Install the Sun Cluster packages needed on a server
     3) Client             Install the admin tools needed on an admin workstation
     4) Server and Client  Install both Client and Server packages
    
     5) Close              Exit this Menu
     6) Quit               Quit the Program
    
     Enter the number of the package set [6]:  1
    
     What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
     ** Upgrading from Sun Cluster 2.1 **
     	Removing "SUNWccm" ... done
     ...
  7. If the cluster has more than two nodes and you are upgrading from Sun Cluster 2.0, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to a file, /var/tmp/tc_ssp_info. Copy this file to the /var/tmp directory on all other cluster nodes so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.

    SC2.2 uses the terminal concentrator (or system service processor in the
     case of an E10000) for failure fencing. During the SC2.2 installation the
     IP address for the terminal concentrator along with the physical port numbers
     that each server is connected to is requested. This information can be changed 
     using scconf.
    
     After the upgrade has completed you need to run scconf to specify terminal
     concentrator information for each server. This will need to be done on each
     server in the cluster.
    
     The specific commands that need to be run are:
    
     scconf clustername -t <nts name> -i <nts name|IP address>
     scconf clustername -H <node 0> -p <serial port for node 0> \
             -d <other|E10000> -t <nts name>
    
     Repeat the second command for each node in the cluster. Repeat the first
     command if you have more than one terminal concentrator in your
     configuration.
    
     Or you can choose to set this up now. The information you will need is:
    
             +terminal concentrator/system service processor names
             +the architecture type (E10000 for SSP or other for tc)
             +the ip address for the terminal concentrator/system service
              processor (these will be looked up based on the name, you
              will need to confirm)
             +for terminal concentrators, you will need the physical
              ports the systems are connected to (physical ports
              (2,3,4... not the telnet ports (5002,...)
    
     Do you want to set the TC/SSP info now (yes/no) [no]?  y
    

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

    See "1.2.7 Terminal Concentrator or System Service Processor and Administrative Workstation", for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is "other," and a terminal concentrator is used:

    What type of architecture does phys-hahost1 have? (E10000|other)
     [other] [?] other
    What is the name of the Terminal Concentrator connected to the
     serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
    Is 123.4.5.678 the correct IP address for this Terminal
     Concentrator (yes|no) [yes] [?] yes
    Which physical port on the Terminal Concentrator is phys-hahost1
     connected to [?] 2
    What type of architecture does phys-hahost2 have? (E10000|other)
     [other] [?] other
    Which Terminal Concentrator is phys-hahost2 connected to:
    
     0) cluster-tc       123.4.5.678
     1) Create A New Terminal Concentrator Entry
    
     Select a device [?] 0
    Which physical port on the Terminal Concentrator is phys-hahost2
     connected to [?] 3
    The terminal concentrator/system service processor (TC/SSP)
     information has been stored in file /var/tmp/tc_ssp_data. Please
     put a copy of this file into /var/tmp on the rest of the nodes in
     the cluster. This way you don't have to re-enter the TC/SSP values,
     but you will, however, still be prompted for the TC/SSP passwords.
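
    For example, assuming remote shell access between the cluster nodes (the /.rhosts entries added during the upgrade provide this for root), you could copy the saved TC/SSP file to another node with rcp(1); use the file name that scinstall(1M) actually reports:

    phys-hahost1# rcp /var/tmp/tc_ssp_info phys-hahost2:/var/tmp/
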
  8. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the appendix on Sun Cluster SNMP management solutions in the Sun Cluster 2.2 System Administration Guide.

  9. Reboot the system.


    Caution -

    You must reboot at this time.


  10. If your cluster has two nodes and you are using a shared CCD, put all logical hosts into maintenance mode.

    phys-hahost2# haswitch -m hahost1 hahost2 
    

    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for clusters with more than two nodes, you do not need to put the data services into maintenance mode before beginning the upgrade.


  11. If your configuration includes Oracle Parallel Server (OPS), make sure OPS is halted.

    Refer to your OPS documentation for instructions on halting OPS.

  12. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.

    phys-hahost2# scadmin stopnode
    
  13. Start the upgraded node.

    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    

    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. At this point these messages are expected and can be safely ignored.


  14. If you are using a shared CCD and if you upgraded from Sun Cluster 2.0, update the shared CCD now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first:

    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
    
  15. If you stopped the data services previously, restart them on the upgraded node.

    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    
  16. Upgrade the remaining nodes.

    Repeat Step 2 through Step 9 on the remaining Sun Cluster 2.0 or Sun Cluster 2.1 nodes.

  17. After each node is upgraded, add it to the cluster:

    phys-hahost2# scadmin startnode sc-cluster
    
  18. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on monitoring Sun Cluster servers with Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide.

    This completes the upgrade to Sun Cluster 2.2.