Sun Cluster 2.2 Software Installation Guide

4.3.5 How to Upgrade the Server Software From Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2

This procedure describes the steps required to upgrade the server software on a Sun Cluster 2.0 or Sun Cluster 2.1 system to Sun Cluster 2.2, with a minimum of downtime. You should become familiar with the entire procedure before starting the upgrade.

Upgrading the server software includes:

  • Upgrading the Solaris operating environment (optional)

  • Upgrading the volume manager, SSVM or CVM (optional)

  • Upgrading the Sun Cluster packages by using the scinstall(1M) command


Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.
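
For illustration only, a root-only /.rhosts entry consists of an address followed by the user name root, as in the hypothetical line below; the actual private link addresses added by scinstall(1M) depend on your cluster.

    204.152.65.2 root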


These are the detailed steps to upgrade the server software from Sun Cluster 2.0 or 2.1 to Sun Cluster 2.2. This example assumes an N+1 configuration using SSVM.

  1. Stop the first node.

    phys-hahost1# scadmin stopnode
    
  2. If you are upgrading the operating environment, the volume manager (SSVM or CVM), or both, run the upgrade_start command from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_start
    

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and see also Chapter 2, Planning the Configuration.

    If you are upgrading to Solaris 7, you must use SSVM 3.x. Refer to the Sun StorEdge Volume Manager Installation Guide for details.

    To upgrade CVM, refer to the Sun Cluster 2.2 Cluster Volume Manager Guide.
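
    If you want to record the Solaris release and volume manager package version currently installed before upgrading them, one optional check (not part of the documented procedure) is:

    phys-hahost1# uname -r
    phys-hahost1# pkginfo -l SUNWvxvm | grep VERSION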

  3. If you are upgrading the operating environment but not the volume manager, perform the following steps:

    1. Remove the volume manager package.

      Normally, the package name is SUNWvxvm for both SSVM and CVM. For example:

      phys-hahost1# pkgrm SUNWvxvm
      
    2. Upgrade the operating system.

      Refer to the Solaris installation documentation for instructions.

    3. Modify the /etc/nsswitch.conf file.

      Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:

      hosts:     files nisplus
      services:  files nisplus
      group:     files nisplus

    4. Restore the volume manager package you removed earlier in this step from the Sun Cluster 2.2 CD-ROM.

      phys-hahost1# pkgadd -d CDROM_path/SUNWvxvm
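
    As an optional sanity check (not part of the documented procedure), you can confirm that the lookups point to files first and that the volume manager package was restored:

    phys-hahost1# egrep '^(hosts|services|group):' /etc/nsswitch.conf
    phys-hahost1# pkginfo -l SUNWvxvm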
      
  4. If you upgraded SSVM or CVM, run the command upgrade_finish from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_finish
    
  5. Reboot the system.
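
    For example, one common way to reboot immediately (adjust the grace period to suit your site):

    phys-hahost1# shutdown -g0 -y -i6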

  6. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke the scinstall(1M) command and select the Upgrade option from the menu presented:

    phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
     Removal of <SUNWscins> was successful.
     Installing: SUNWscins
    
     Installation of <SUNWscins> was successful.
         Assuming a default cluster name of sc-cluster
    
     Checking on installed package state............
    
     ============ Main Menu =================
    
     1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
     2) Remove  - Remove Server or Client Packages.
     3) Change  - Modify cluster or data service configuration
     4) Verify  - Verify installed package sets.
     5) List    - List installed package sets.
    
     6) Quit    - Quit this program.
     7) Help    - The help screen for this menu.
    
     Please choose one of the menu items: [7]:  1
    ...
     ==== Install/Upgrade Software Selection Menu =======================
     Upgrade to the latest Sun Cluster Server packages or select package
     sets for installation. The list of package sets depends on the Sun
     Cluster packages that are currently installed.
    
     Choose one:
     1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
     2) Server             Install the Sun Cluster packages needed on a server
     3) Client             Install the admin tools needed on an admin workstation
     4) Server and Client  Install both Client and Server packages
    
     5) Close              Exit this Menu
     6) Quit               Quit the Program
    
     Enter the number of the package set [6]:  1
    
     What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
     ** Upgrading from Sun Cluster 2.1 **
     	Removing "SUNWccm" ... done
     ...
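
    To confirm which Sun Cluster packages are now installed, you can rerun scinstall(1M) and choose the Verify option from the Main Menu, or use a rough check such as:

    phys-hahost1# pkginfo | grep SUNWsc
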
  7. If the cluster has more than two nodes and you are upgrading from Sun Cluster 2.0, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes so that the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command; a worked example for this cluster appears at the end of this step. See the scconf(1M) man page for details.

    SC2.2 uses the terminal concentrator (or system service processor in the
     case of an E10000) for failure fencing. During the SC2.2 installation the
     IP address for the terminal concentrator along with the physical port numbers
     that each server is connected to is requested. This information can be changed 
     using scconf.
    
     After the upgrade has completed you need to run scconf to specify terminal
     concentrator information for each server. This will need to be done on each
     server in the cluster.
    
     The specific commands that need to be run are:
    
     scconf clustername -t <nts name> -i <nts name|IP address>
     scconf clustername -H <node 0> -p <serial port for node 0> \
             -d <other|E10000> -t <nts name>
    
     Repeat the second command for each node in the cluster. Repeat the first
     command if you have more than one terminal concentrator in your
     configuration.
    
     Or you can choose to set this up now. The information you will need is:
    
             +terminal concentrator/system service processor names
             +the architecture type (E10000 for SSP or other for tc)
             +the ip address for the terminal concentrator/system service
              processor (these will be looked up based on the name, you
              will need to confirm)
             +for terminal concentrators, you will need the physical
              ports the systems are connected to (physical ports
              (2,3,4... not the telnet ports (5002,...)
    
     Do you want to set the TC/SSP info now (yes/no) [no]?  y
    

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

    See "1.2.7 Terminal Concentrator or System Service Processor and Administrative Workstation", for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is "other," and a terminal concentrator is used:

    What type of architecture does phys-hahost1 have? (E10000|other)
     [other] [?] other
    What is the name of the Terminal Concentrator connected to the
     serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
    Is 123.4.5.678 the correct IP address for this Terminal
     Concentrator (yes|no) [yes] [?] yes
    Which physical port on the Terminal Concentrator is phys-hahost1
     connected to [?] 2
    What type of architecture does phys-hahost2 have? (E10000|other)
     [other] [?] other
    Which Terminal Concentrator is phys-hahost2 connected to:
    
     0) cluster-tc       123.4.5.678
     1) Create A New Terminal Concentrator Entry
    
     Select a device [?] 0
    Which physical port on the Terminal Concentrator is phys-hahost2
     connected to [?] 3
    The terminal concentrator/system service processor (TC/SSP)
     information has been stored in file /var/tmp/tc_ssp_data. Please
     put a copy of this file into /var/tmp on the rest of the nodes in
     the cluster. This way you don't have to re-enter the TC/SSP values,
     but you will, however, still be prompted for the TC/SSP passwords.
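
    For example, using the configuration values listed above and the scconf(1M) syntax shown in the screen output, the deferred setup would look like the following sketch, run on each node after the upgrade. The rcp command assumes remote copy access between the nodes; otherwise, copy the file by any means you prefer.

    phys-hahost1# rcp /var/tmp/tc_ssp_info phys-hahost2:/var/tmp/tc_ssp_info
    phys-hahost1# scconf sc-cluster -t cluster-tc -i 123.4.5.678
    phys-hahost1# scconf sc-cluster -H phys-hahost1 -p 2 -d other -t cluster-tc
    phys-hahost1# scconf sc-cluster -H phys-hahost2 -p 3 -d other -t cluster-tc
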
  8. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number by using the procedure described in the appendix on Sun Cluster SNMP management solutions in the Sun Cluster 2.2 System Administration Guide.
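
    If you are unsure what currently uses port 161 on a node, a rough, illustrative check is:

    phys-hahost1# grep 161 /etc/services
    phys-hahost1# ps -ef | grep snmp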

  9. Reboot the system.


    Caution -

    You must reboot at this time.


  10. If your cluster has two nodes and is using a shared CCD, put all logical hosts into maintenance mode.

    phys-hahost2# haswitch -m hahost1 hahost2 
    

    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for such clusters, you do not need to put the data services into maintenance mode before beginning the upgrade.


  11. If your configuration includes Oracle Parallel Server (OPS), make sure OPS is halted.

    Refer to your OPS documentation for instructions on halting OPS.
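
    One rough way to confirm that no Oracle instance processes remain, assuming Oracle's default background-process naming (ora_<process>_<SID>), is:

    phys-hahost2# ps -ef | grep ora_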

  12. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.

    phys-hahost2# scadmin stopnode
    
  13. Start the upgraded node.

    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    

    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. At this point these messages are expected and can be safely ignored.
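
    After the node has joined, you can optionally check cluster status and membership with the hastat(1M) command; the output depends on your configuration:

    phys-hahost1# hastat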


  14. If you are using a shared CCD and if you upgraded from Sun Cluster 2.0, update the shared CCD now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first:

    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
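
    Optionally, you can then check CCD consistency. The -v (verify) option is assumed here to be available in your ccdadm(1M) release; check the man page before relying on it:

    phys-hahost1# ccdadm sc-cluster -v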
    
  15. If you stopped the data services previously, restart them on the upgraded node.

    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    
  16. Upgrade the remaining nodes.

    Repeat Step 2 through Step 9 on the remaining Sun Cluster 2.0 or Sun Cluster 2.1 nodes.

  17. After each node is upgraded, add it to the cluster:

    phys-hahost2# scadmin startnode sc-cluster
    
  18. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on monitoring Sun Cluster servers with Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide.

    This completes the upgrade to Sun Cluster 2.2.