Sun Cluster 2.2 Release Notes Addendum

Chapter 1 Sun Cluster 2.2 Release Notes Addendum

This document provides information to supplement the Sun Cluster 2.2 Release Notes (part number 805-4243) and the Sun Cluster 2.2 Locale Installation Notes (part number 806-4172). For more information about Sun Cluster, see the web site http://www.sun.com/clusters.

Support for SAP 4.5B

SAP 4.5B is now supported with Sun Cluster 2.2, in the Solaris 2.6 operating environment only. At this time, HA-SAP with SAP 4.5B is qualified with the Oracle database only.

All administrative procedures for SAP 4.5B are identical to those documented for SAP 4.0x in the Sun Cluster 2.2 Software Installation Guide.

Upgrading to Sun Cluster 2.2

Several errors exist in the upgrade procedures documented in Chapter 4 of the Sun Cluster 2.2 Software Installation Guide. To upgrade to Sun Cluster 2.2 from HA 1.3, Sun Cluster 2.0, or Sun Cluster 2.1, use the following procedures instead.

How to Upgrade to Sun Cluster 2.2 From HA 1.3

These are the high-level steps to upgrade from Solstice(TM) HA 1.3 to Sun Cluster 2.2. You can perform the upgrade either from an administrative workstation or from the console of any physical host in the cluster. Using an administrative workstation allows the most flexibility during the upgrade process.


Note -

This procedure assumes you are using an administrative workstation.



Caution -

Back up all local and multihost disks before starting the upgrade. Also, all systems must be operable and robust. Do not attempt to upgrade if systems are experiencing any difficulties.



Caution -

On each node, if you customized hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2. Restore the scripts after completing the upgrade. Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts.


The configuration parameters implemented in Sun Cluster 2.2 are different from those implemented in Solstice HA 1.3 and Sun Cluster 2.1. Therefore, after upgrading to Sun Cluster 2.2, you will have to re-configure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables. For Solstice HA 1.3, use the hainetconfig(1M) command to view the configuration. For Sun Cluster 2.1, use the hadsconfig(1M) command to view the configuration. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.


Caution -

If you created your own data services using the Sun Cluster API, make sure those data services have a base directory associated with them before you begin the upgrade. This base directory defines the location of the methods associated with the data service. If the data service was registered with the -b option to hareg(1M), the base directory is defined in the data services configuration file. Data services supplied by Sun are registered by default with the -b option to hareg(1M).


To check whether a base directory is defined, view the file /etc/opt/SUNWhadf/hadf/.hadfconfig_services and look for the SERVICE_BASEDIR= entry for your data service. If no entry exists, you must unregister the data service using the command hareg -u dataservice, then re-register the data service by specifying the -b option to hareg(1M).
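
One quick way to check is to list the SERVICE_BASEDIR entries in that file, for example:


# grep SERVICE_BASEDIR /etc/opt/SUNWhadf/hadf/.hadfconfig_services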

If you attempt to upgrade while any data services do not have an associated base directory for methods, the upgrade will fail.

  1. (Solstice HA 1.3 for SAP only) Run hainetconfig(1M) to obtain the current SAP configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hainetconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP," in the Sun Cluster 2.2 Software Installation Guide for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hainetconfig
    

  2. Load the Sun Cluster 2.2 client packages onto the administrative workstation.

    Refer to Chapter 3 in the Sun Cluster 2.2 Software Installation Guide to set up the administrative workstation, if you have not done so already.

  3. Stop Solstice HA on the first server to be upgraded.


    phys-hahost1# hastop
    

    If your cluster is already running Solaris(TM) 2.6 and you do not want to upgrade to Solaris 7, skip to Step 6.

  4. Upgrade the operating environment to Solaris 2.6 or Solaris 7.

    To upgrade Solaris, you must use the suninstall(1M) upgrade procedure (rather than reinstalling the operating environment). You might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris 2.6 or Solaris 7 environment. You must install the Entire Distribution software group. See the Solaris Advanced Installation Guide for details.


    Note -

    For some hardware platforms, Solaris 2.6 and Solaris 7 attempt to configure power management settings to shut down the server automatically if it has been idle for 30 minutes. The cluster heartbeat is not enough to prevent the Sun Cluster servers from appearing idle and shutting down. Therefore, you must disable this feature when you install Solaris 2.6 or Solaris 7. The dialog used to configure power management settings is shown below. If you do not see this dialog, then your hardware platform does not support this feature. If the dialog appears, you must answer n to the first question and y to the second to configure the server to work correctly in the Sun Cluster environment.



    ****************************************************************
    This system is configured to conserve energy.
    After 30 minutes without activity, the system state will be
    saved to disk and the system will be powered off automatically.
    
    A system that has been suspended in this way can be restored
    back to exactly where it was by pressing the power key.
    The definition of inactivity and the timeout are user
    configurable. The dtpower(1M) man page has more information.
    ****************************************************************
    
    Do you wish to accept this default configuration, allowing
    your system to save its state then power off automatically
    when it has been idle for 30 minutes?  (If this system is used
    as a server, answer n. By default autoshutdown is
    enabled.) [y,n,?] n
    
    Autoshutdown disabled.
    
    Should the system save your answer so it won't need to ask
    the question again when you next reboot? (By default the
    question will not be asked again.) [y,n,?] y
    

  5. Update the Solaris 2.6 or Solaris 7 kernel files.

    As part of the Solaris upgrade, the files /kernel/drv/sd.conf and /kernel/drv/ssd.conf will be renamed to /kernel/drv/sd.conf:2.x and /kernel/drv/ssd.conf:2.x respectively. New /kernel/drv/sd.conf and /kernel/drv/ssd.conf files will be created. Run the diff(1) command to identify the differences between the old files and the new ones. Copy the additional information that was inserted by Sun Cluster from the old files into the new files. The information will look similar to the following:


    # Start of lines added by Solstice HA
    sd_retry_on_reservation_conflict=0;
    # End of lines added by Solstice HA
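
    For example, you might compare each pair of files with diff(1) as follows (the :2.x suffix stands for the previous Solaris release, as noted above):


    phys-hahost1# diff /kernel/drv/sd.conf:2.x /kernel/drv/sd.conf
    phys-hahost1# diff /kernel/drv/ssd.conf:2.x /kernel/drv/ssd.conf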

  6. Upgrade to Solstice DiskSuite 4.2.

    1. Upgrade Solstice DiskSuite using the detailed procedure in the Solstice DiskSuite 4.2 Installation and Product Notes.

    2. On the local host, upgrade the Solstice DiskSuite mediator package, SUNWmdm.


      phys-hahost1# pkgadd -d \
      /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWmdm
      
      Processing package instance <SUNWmdm>...
      
      Solstice DiskSuite (Mediator)
      (sparc) 4.2,REV=1998.23.10.09.59.06
      Copyright 1998 Sun Microsystems, Inc. All rights reserved.
      
      ## Executing checkinstall script.
      			This is an upgrade. Conflict approval questions may be
      			displayed. The listed files are the ones that will be
      			upgraded. Please answer "y" to these questions if they are
      			presented.
      Using </> as the package base directory.
      ## Processing package information.
      ## Processing system information.
       10 package pathnames are already properly installed.
      ## Verifying package dependencies.
      ## Verifying disk space requirements.
      ## Checking for conflicts with packages already installed.
      
      The following files are already installed on the system and are being used by another package:
      
      /etc/opt/SUNWmd/meddb
       /usr/opt <attribute change only>
       /usr/opt/SUNWmd/man/man1m/medstat.1m
       /usr/opt/SUNWmd/man/man1m/rpc.metamedd.1m
       /usr/opt/SUNWmd/man/man4/meddb.4
       /usr/opt/SUNWmd/man/man7/mediator.7
       /usr/opt/SUNWmd/sbin/medstat
       /usr/opt/SUNWmd/sbin/rpc.metamedd
      
      ## Checking for setuid/setgid programs.
      
      This package contains scripts which will be executed with super-user permission during the process of installing this package.
      
      Do you want to continue with the installation of <SUNWmdm.2> [y,n,?] y
      
      Installing Solstice DiskSuite (Mediator) as <SUNWmdm.2>
      ...
  7. Before updating the cluster package, remove patch 104996, the Solstice HA 1.3 SUNWhaor patch, if it is installed.

    When scinstall(1M) updates cluster packages in Step 9, the command attempts to remove a patch on which patch 104996 is dependent. To prevent scinstall(1M) from failing, remove patch 104996 manually now:


    phys-hahost1# patchrm 104996-xx
    

  8. (Solstice HA for SAP only) Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts, before beginning the upgrade to Sun Cluster 2.2.

    Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location. You will restore the scripts in Step 10. Use the following commands:


    # cp /opt/SUNWhasap/clust_progs/hasap_start_all_instances /safe_place
    # cp /opt/SUNWhasap/clust_progs/hasap_stop_all_instances /safe_place
    

  9. Use the scinstall(1M) command to update the cluster packages.

    Select Upgrade from the scinstall(1M) menu. Respond to the prompts that ask for the location of the Framework packages and the cluster name. The scinstall(1M) command replaces Solstice HA 1.3 packages with Sun Cluster 2.2 packages.


    phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    
     Checking on installed package state
    ............
    
    None of the Sun Cluster software has been installed
     <<Press return to continue>> 
    
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]: 1
    
    What is the directory where the Framework packages can be found 
    
    [/cdrom/cdrom0]: .
    
    ** Upgrading from Solstice HA 1.3 **
    
    What is the name of the cluster? sc-cluster
    ...
  10. (Solstice HA 1.3 for SAP only) Restore the customized scripts saved in Step 8.

    Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 8. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


    phys-hahost1# cd /opt/SUNWcluster/ha/sap
    phys-hahost1# cp /safe_place/hasap_start_all_instances .
    phys-hahost1# cp /safe_place/hasap_stop_all_instances .
    phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
    -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
    -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  11. Add required entries to the /.rhosts file.

    The /.rhosts file contains one or more sets of three IP addresses (depending on the number of nodes in the cluster). These are private network IP addresses used internally by Sun Cluster. During the upgrade, only some of the IP addresses are added to the /.rhosts files; the first IP address in each set is lost. You must manually insert the missing addresses in the /.rhosts file on each node.

    The number of sets you need depends on the number of nodes in the cluster. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below.


    # node 0
    204.152.65.33         # Manually insert this address on all nodes other than node 0
    204.152.65.1
    204.152.65.17

    # node 1
    204.152.65.34         # Manually insert this address on all nodes other than node 1
    204.152.65.2
    204.152.65.18

    # node 2
    204.152.65.35         # Manually insert this address on all nodes other than node 2
    204.152.65.3
    204.152.65.19

    # node 3
    204.152.65.36         # Manually insert this address on all nodes other than node 3
    204.152.65.4
    204.152.65.20

  12. Install the required patches for Sun Cluster 2.2.

    Install all applicable Solstice DiskSuite(TM) and Sun Cluster patches, including the Sun Cluster internationalization patches listed in the Sun Cluster 2.2 Locale Installation Notes (part number 806-4172). If you are using SPARCstorage Arrays, the latest SPARCstorage Array patch should have been installed when you installed the operating environment. Obtain the necessary patches from your service provider or from the Sun patch web site http://sunsolve.sun.com. Use the instructions in the patch README files to install the patches.

  13. Set the PATH environment variable for user root to include the command directories /opt/SUNWcluster/bin and /opt/SUNWpnm/bin. Set the MANPATH environment variable for user root to include /opt/SUNWcluster/man.
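
    For example, you might add lines such as the following to root's profile (a sketch for the Bourne shell; adjust for your shell and existing settings):


    PATH=$PATH:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin; export PATH
    MANPATH=/usr/share/man:/opt/SUNWcluster/man; export MANPATH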

  14. Reboot the machine.


    phys-hahost1# reboot
    


    Note -

    During the reboot process, you might see error messages pertaining to loss of private network. At this time, it is safe to ignore these error messages.


  15. Switch ownership of disks and data services from the remote host to the upgraded local host.

    1. Stop Solstice HA 1.3 services on the remote host.

      The remote host in this example is phys-hahost2.


      phys-hahost2# hastop
      

    2. After Solstice HA 1.3 is stopped on the remote host, start Sun Cluster 2.2 on the upgraded local host.

      After the hastop(1M) operation has completed, use the scadmin(1M) command to start Sun Cluster 2.2. This causes the upgraded local host to take over all data services. In this example, phys-hahost1 is the local physical host name and sc-cluster is the cluster name.


      phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
      

  16. Recreate instance configuration data for the highly available databases.

    During the upgrade, the instance configuration data is not upgraded for the highly available databases. You must use the appropriate hadbms insert command to manually recreate each database instance, where dbms is the name of the database; for example, haoracle insert, hainformix insert, or hasybase insert.

    Find the pre-upgrade instance configuration information in the /etc/opt/SUNWhadf.obsolete/hadf/hadbms_databases file. For information on the parameters to each hadbms insert command, see the man page for that command and the appropriate chapter in the Sun Cluster 2.2 Software Installation Guide. For example, for information on haoracle(1M), see the haoracle(1M) man page and the chapter, "Setting Up and Administering Sun Cluster HA for Oracle."
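
    For example, assuming the naming pattern above, the saved Oracle configuration would be in a file similar to the following (hainformix_databases and hasybase_databases would be the Informix and Sybase equivalents):


    phys-hahost1# more /etc/opt/SUNWhadf.obsolete/hadf/haoracle_databases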

  17. Turn on the database instances.

    Use the appropriate hadbms command to turn on each database instance. For example, for Oracle:


    phys-hahost1# haoracle start instance
    

  18. (Sun Cluster HA for SAP only) Unregister and re-register the Sun Cluster HA for SAP data service.

    After the upgrade, the method names for the Sun Cluster HA for SAP data service are incorrect in the CCD. To correct them, first turn off and unregister the Sun Cluster HA for SAP data service and then register it again in order to log the correct method names in the CCD file. In addition, recreate the SAP instance that you noted in Step 1.

    1. Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -n sap
      

    2. Unregister the Sun Cluster HA for SAP data service:


      phys-hahost1# hareg -u sap
      

    3. Register the Sun Cluster HA for SAP data service:

      In this example, CI_logicalhost is the logical host name.


      phys-hahost1# hareg -s -r sap -h CI_logicalhost
      

    4. Run hadsconfig(1M) to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP," in the Sun Cluster 2.2 Software Installation Guide for descriptions of the new configuration parameters, and refer to the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    5. After setting the configuration parameters, use the hareg(1M) command to activate the data service.


      phys-hahost1# hareg -y sap
      

    6. Manually copy the configuration file, /etc/opt/SUNWscsap/hadsconf, to all other cluster nodes.

      First create the /etc/opt/SUNWscsap directory on the other nodes if it does not exist. Then copy the configuration file to all nodes.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf
      

  19. Verify operations on the local host.

    1. Verify that the configuration on the local host is stable.


      phys-hahost1# hastat
      

    2. Verify that clients are receiving services from the local host.

  20. Repeat Step 3 through Step 19 on the remote host.

  21. Return the remote host to the cluster.


    phys-hahost2# scadmin startnode
    

  22. After cluster reconfiguration on the remote host is complete, switch over the data services to the remote host from the local host.


    phys-hahost1# haswitch phys-hahost2 hahost2
    

  23. Verify that the Sun Cluster 2.2 configuration on the remote host is in a stable state, and that clients are receiving services.


    phys-hahost2# hastat
    

This completes the procedure to upgrade to Sun Cluster 2.2 from Solstice HA 1.3.

How to Upgrade to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1

This procedure describes the steps required to upgrade the server software on a Sun Cluster 2.0 or Sun Cluster 2.1 system to Sun Cluster 2.2, with a minimum of downtime. You should become familiar with the entire procedure before starting the upgrade.


Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.


This example assumes an N+1 configuration using SSVM.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP" in the Sun Cluster 2.2 Software Installation Guide, for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Stop the first node.


    phys-hahost1# scadmin stopnode
    

  3. If you are upgrading the operating environment and/or SSVM or CVM, run the command upgrade_start from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.


    phys-hahost1# CDROM_path/Tools/scripts/upgrade_start
    

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and see also Chapter 2 in the Sun Cluster 2.2 Software Installation Guide.

    To upgrade CVM, refer to the Sun Cluster 2.2 Cluster Volume Manager Guide.

  4. If you are upgrading the operating environment but not the volume manager, perform the following steps.

    1. Remove the volume manager package.

      Normally, the package name is SUNWvxvm for both SSVM and CVM. For example:


      phys-hahost1# pkgrm SUNWvxvm
      

    2. Upgrade the operating system.

      Refer to the Solaris installation documentation for instructions.

    3. If you are using NIS+, modify the /etc/nsswitch.conf file.

      Ensure that "service," "group," and "hosts" lookups are directed to files first. For example:


      hosts: files nisplus
      services: files nisplus
      group: files nisplus

    4. Restore the volume manager removed in Step 4a.

      Obtain the volume manager packages from the Sun Cluster 2.2 CD-ROM.


      phys-hahost1# pkgadd -d CDROM_path/SUNWvxvm
      

  5. If you upgraded SSVM or CVM, run the command upgrade_finish from the SSVM or CVM media.

    In this example, CDROM_path is the path to the tools on the SSVM CD.


    phys-hahost1# CDROM_path/Tools/scripts/upgrade_finish
    

  6. Reboot the system.


    Caution -

    You must reboot at this time.


  7. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts in Sun Cluster 2.1, before beginning the upgrade to Sun Cluster 2.2.

      Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location; you will restore them in Step 9b.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances \
      /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances \
      /safe_place
      

    2. Remove the SUNWscsap package before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). You must first remove this package and then add an updated version in Step 9a.


      phys-hahost1# pkgrm SUNWscsap
      

  8. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke the scinstall(1M) command and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client                     Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...
  9. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 7b. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWscsap
      

    2. Restore the customized scripts saved in Step 7a.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 7a. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  10. If the cluster has more than two nodes and you are upgrading from Sun Cluster 2.0, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.
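
    For example, you might copy the file to another node with rcp(1), assuming root rcp access is allowed between the cluster nodes:


    phys-hahost1# rcp /var/tmp/tc_ssp_info phys-hahost2:/var/tmp/tc_ssp_info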


    SC2.2 uses the terminal concentrator (or system service processor in the case of an E10000) for failure fencing. During the SC2.2 installation the IP address for the terminal concentrator along with the physical port numbers that each server is connected to is requested. This information can be changed using scconf.
    
    After the upgrade has completed you need to run scconf to specify terminal concentrator information for each server. This will need to be done on each server in the cluster.
    
    The specific commands that need to be run are:
    
    scconf clustername -t <nts name> -i <nts name|IP address>
    scconf clustername -H <node 0> -p <serial port for node 0> \
            -d <other|E10000> -t <nts name>
    
    Repeat the second command for each node in the cluster. Repeat the first command if you have more than one terminal concentrator in your configuration.
    
    Or you can choose to set this up now. The information you will need is:
            +terminal concentrator/system service processor names
    			+the architecture type (E10000 for SSP or other for tc)
            +the ip address for the terminal concentrator/system service 
             processor (these will be looked up based on the name, you 
             will need to confirm)
            +for terminal concentrators, you will need the physical 
             ports the systems are connected to (physical ports 
             (2,3,4... not the telnet ports (5002,...)
    
    Do you want to set the TC/SSP info now (yes/no) [no]?  y
    

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

      See the section on terminal concentrators and SSPs in Chapter 1 of the Sun Cluster 2.2 Software Installation Guide for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is other, and a terminal concentrator is used:


      What type of architecture does phys-hahost1 have? (E10000|other) [other] [?] other
      What is the name of the Terminal Concentrator connected to the serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
      Is 123.4.5.678 the correct IP address for this Terminal Concentrator (yes|no) [yes] [?] yes
      Which physical port on the Terminal Concentrator is phys-hahost1 connected to [?] 2
      What type of architecture does phys-hahost2 have? (E10000|other) [other] [?] other
      Which Terminal Concentrator is phys-hahost2 connected to:
      
      0) cluster-tc       123.4.5.678
      1) Create A New Terminal Concentrator Entry
      
      Select a device [?] 0
      Which physical port on the Terminal Concentrator is phys-hahost2 connected to [?] 3
      The terminal concentrator/system service processor (TC/SSP) information has been stored in file /var/tmp/tc_ssp_info. Please put a copy of this file into /var/tmp on the rest of the nodes in the cluster. This way you don't have to re-enter the TC/SSP values, but you will, however, still be prompted for the TC/SSP passwords.

  11. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in Appendix D of the Sun Cluster 2.2 System Administration Guide.

  12. Reboot the system.


    Caution -

    You must reboot at this time.


  13. If you are using a shared CCD, put all logical hosts into maintenance mode.


    phys-hahost2# haswitch -m hahost1 hahost2 
    


    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for these clusters, you do not need to put the data services into maintenance mode before beginning the upgrade.


  14. If your configuration includes Oracle Parallel Server (OPS), make sure OPS is halted.

    Refer to your OPS documentation for instructions on halting OPS.

  15. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.


    phys-hahost2# scadmin stopnode
    

  16. Start the upgraded node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  17. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -n sap
      


      Note -

      It is safe to ignore any errors generated while turning off Sun Cluster HA for SAP by running hareg(1M).


    2. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP" in the Sun Cluster 2.2 Software Installation Guide, for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    3. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    4. Manually copy the configuration file to other nodes in the cluster by using ftp.

      Overwrite the Sun Cluster 2.1 configuration files with the new Sun Cluster 2.2 files.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  18. If you are using a shared CCD and if you upgraded from Sun Cluster 2.0, update the shared CCD now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first.


    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
    

  19. If you stopped the data services previously, restart them on the upgraded node.


    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    

  20. Upgrade the remaining nodes.

    Repeat Step 3 through Step 12 on the remaining Sun Cluster 2.0 or Sun Cluster 2.1 nodes.

  21. After each node is upgraded, add it to the cluster.


    phys-hahost2# scadmin startnode sc-cluster
    

  22. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide.

This completes the upgrade to Sun Cluster 2.2.

Adding a Data Service to a Two-Node Cluster With Shared CCD

Use the following procedure to add a data service to an existing two-node cluster with a shared Cluster Configuration Database (CCD). See also "Data Service Bugs".

How to Add a Data Service to a Two-Node Cluster With Shared CCD
  1. Unshare the shared CCD.

    You must reconfigure the cluster to unshare the CCD before you add any new data services. Run the following command on both nodes, as root, while both nodes are in the cluster:


    phys-hahost1# /opt/SUNWcluster/bin/scconf clustername -S none
    phys-hahost2# /opt/SUNWcluster/bin/scconf clustername -S none
    

    You must unshare the CCD. If you attempt to add a data service while the CCD is in shared state, only the local ccd.database file will be updated, and not the shared CCD file. This will cause registration of the new data service to fail.

  2. Add the new data services, using the following commands.

    Run all commands as root. In these examples, the node names are phys-hahost1 and phys-hahost2.

    1. Stop the cluster on the first node.


      phys-hahost1# scadmin stopnode
      
    2. Use scinstall(1M) to add the new data service package to the first node.

      See Chapter 3 of the Sun Cluster 2.2 Software Installation Guide for details. This step automatically updates the local CCD file.


      phys-hahost1# scinstall
      
    3. Stop the cluster on the second node.


      Note -

      The existing data services will be unavailable to clients from the time you stop the cluster on the second node (Step c) until you restart the cluster on the first node (Step d).



      phys-hahost2# scadmin stopnode
      
    4. Restart the cluster on the first node.


      phys-hahost1# scadmin startcluster phys-hahost1 clustername
      
    5. Use scinstall(1M) to add the new data service package to the second node. See Chapter 3 of the Sun Cluster 2.2 Software Installation Guide for details. This step automatically updates the local CCD file.


      phys-hahost2# scinstall
      
    6. Add the second node to the cluster.


      phys-hahost2# scadmin startnode
      
  3. Reinstate the shared CCD.

    Run the following command on both nodes, as root.


    phys-hahost1# /opt/SUNWcluster/bin/scconf clustername -S ccdvol
    phys-hahost2# /opt/SUNWcluster/bin/scconf clustername -S ccdvol
    

Configuring Sun Cluster Manager

You can run Sun Cluster Manager (SCM) as a stand-alone application or through Netscape(TM) or HotJava(TM) browsers. To configure SCM to run with Netscape 4.5, use the procedures documented here. To run SCM with HotJava, use the procedures documented in the Sun Cluster 2.2 Release Notes. To run SCM as a stand-alone application on a cluster node or client workstation, use the instructions documented in the README file associated with the SCM patch (107388). Patches are available through your service provider or through the SunSolve web site at http://sunsolve.sun.com/

How to Run the SCM Applet in a Netscape Browser From a Cluster Node
  1. Install Netscape 4.5 on the cluster nodes.

  2. Install SCM and the required SCM patch on the cluster nodes.

    To install SCM, use scinstall(1M). The scinstall(1M) command installs the SCM package, SUNWscmgr, as part of the server package set. To get the SCM patch, see your service representative or the SunSolve web site:

    http://sunsolve.sun.com/

  3. Add the following lines to the preferences.js file, if necessary.

    The file is located in the $HOME/.netscape directory. If the preferences are not included in the file already, add the following lines:


    user_pref("security.lower_java_network_security_by_trusting_proxies", true);
    user_pref("signed.applets.codebase_principal_support", true);

  4. On a cluster node, set your DISPLAY environment variable so that the Netscape browser is displayed remotely on your X Windows workstation, and then run the Netscape browser on that cluster node.
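
    For example, using the Bourne shell (admin-ws is a hypothetical workstation name, and the netscape binary is assumed to be in root's search path):


    phys-hahost1# DISPLAY=admin-ws:0.0
    phys-hahost1# export DISPLAY
    phys-hahost1# netscape &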

  5. When you are ready to begin monitoring the cluster with SCM, enter the appropriate URL.


    file:/opt/SUNWcluster/scmgr/index.html
    

  6. Click Grant on Java Security dialog boxes that ask for permission to access certain files, ports, and so forth from the remote display workstation.

    As Sun Cluster Manager comes up, you might see error messages similar to the following at the tty from which you started the browser:

    File not found when
    looking for:
    netscape.security.PrivilegeManager

    These messages are generated by the browser but do not affect Sun Cluster Manager; it is safe to ignore them.

    Refer to the online help for complete information on menu navigation, tasks, and reference.

How to Set Up Netscape to Run With SCM Using a Web Server
  1. Install a web server on all nodes in the cluster.


    Note -

    If you are running the Sun Cluster HA for Netscape HTTP data service and an HTTP server for SCM, configure the two HTTP servers to listen on different ports. Otherwise, there will be a port conflict between them.


  2. Follow the web server's configuration procedure to make sure that SCM's index.html file is accessible to the clients.

    The client applet for SCM is in the index.html file in the /opt/SUNWcluster/scmgr directory. For example, go to your HTTP server's document root and create a link to the /opt/SUNWcluster/scmgr directory.
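
    For example, if the web server's document root were /var/apache/htdocs (a hypothetical path; use your server's actual document root), you could create the link as follows:


    # cd /var/apache/htdocs
    # ln -s /opt/SUNWcluster/scmgr scmgr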

  3. Set security preferences by adding the following lines to the preferences.js file, if necessary.

    The file is located in the $HOME/.netscape directory. If the preferences are not included in the file already, add the following lines:


    user_pref("security.lower_java_network_security_by_trusting_proxies", true);
    user_pref("signed.applets.codebase_principal_support", true);

  4. Run the Netscape browser from your workstation.

  5. When you are ready to begin monitoring the cluster with SCM, enter the appropriate URL.

    For example, if you had created a link from the web server's document_root directory to the /opt/SUNWcluster/scmgr directory, you would enter the following URL, where clusternode is the name of the physical host:


    http://clusternode/scmgr/index.html
    

  6. Click Grant on Java Security dialog boxes that ask for permission to access certain files, ports, and so forth from the remote display workstation.

    As Sun Cluster Manager comes up, you might see error messages similar to the following at the tty from which you started the browser:

    File not found when
    looking for:
    netscape.security.PrivilegeManager

    These messages are generated by the browser but do not affect Sun Cluster Manager; it is safe to ignore them.

    Refer to the online help for complete information on menu navigation, tasks, and reference.

Known Problems

The following known problems affect the operation of Sun Cluster 2.2. These are in addition to the known problems described in the Sun Cluster 2.2 Release Notes.

Framework Bugs

4218052 - Sun Cluster should support modification of TCP ports used by CVM cluster daemons. TCP ports used by CVM cluster daemons might conflict with ports used by other applications running in the cluster. You cannot modify which TCP ports are used by CVM cluster daemons. Instead, you must modify any applications that use conflicting ports.

CVM uses the following port numbers:

cvm.port.vxkmsgd        5559
cvm.port.vxconfigd      5560
cvm.port.vxclust        5568
vxclust                 5568-5600

4233113 - Documentation omission regarding logical host timeout values and how they are used. When you configure the cluster, you set a timeout value for the logical host. This timeout value is used by the CCD when you bring a data service up or down using the hareg(1M) command. The CCD operation occurs in two steps; half of the timeout value is used for each step. Therefore, when configuring START and STOP methods for data services, make sure each method uses no more than half of the timeout value set for the logical host.

4291427 - Locales only: uninstall fails to remove the SUNWccon and SUNWscch packages. In all locale versions of Sun Cluster 2.2 running on Solaris 7, removal of the client packages using the scinstall(1M) command can fail with the following error message:


Patch 108400-02 is required to be installed by patch 108446-02
it cannot be backed out until patch 108446-02 is backed out.

This occurs because of patch dependencies between patch 108446-02 and patch 108400-02. Work around the problem by manually removing patches 108446-02 and 108400-02, and then re-starting the package removal process using scinstall(1M).
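
For example, the manual workaround might look like the following; remove the dependent patch 108446-02 first, then restart the package removal with scinstall(1M):


# patchrm 108446-02
# patchrm 108400-02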

Data Service Bugs

4213692 - If the data service or cluster is configured incorrectly, problems with the startup of a data service might cause the cluster framework to switch the data service to the backup node. If the data service fails to start on the backup node, it is switched back to the original node. This switching behavior continues until stopped by manual intervention.

4304532 - Adding a data service to an existing two-node cluster with shared CCD fails with registration errors. After adding a new data service to a two-node cluster with shared CCD, registration of the data service will fail because the shared CCD was not updated correctly. To correct this situation, stop the cluster, uninstall the new data service packages using the scinstall(1M) command, restart the cluster on both nodes, and then use the procedure "Adding a Data Service to a Two-Node Cluster With Shared CCD" to add the data services correctly.

4247239 - Cannot add data service if shared CCD is used and both nodes are not in cluster. In a two-node cluster with shared CCD, adding a data service fails with error messages indicating a corrupted ccd.database file. To correct this situation, stop the cluster, uninstall the new data service packages using the scinstall(1M) command, restart the cluster on both nodes, and then use the procedure "Adding a Data Service to a Two-Node Cluster With Shared CCD" to add the data services correctly.

SCM Bugs

4221612 - SCM sometimes incorrectly reports that the Sun Cluster HA for Netscape HTTP data service is down when it is up.


Upgrade-Related Bugs

4215070 - The scinstall(1M) command does not upgrade the Sun Cluster HA for SAP package, SUNWscsap, during upgrade to Sun Cluster 2.2 from Sun Cluster 2.1. Work around the problem by replacing the SUNWscsap package manually during the upgrade, as described in the procedure "How to Upgrade to Sun Cluster 2.2 From Sun Cluster 2.0 or 2.1".

4218558 - The Sun Cluster HA for SAP data service is not registered correctly during upgrade to Sun Cluster 2.2 from HA 1.3. This prevents the data service from starting up correctly after the upgrade has been completed. Work around the problem by explicitly unregistering and then registering the data service by using the hareg(1M) command:


# hareg -n sap
# hareg -u sap
# hareg -s -r sap -h CI_logicalhost
# hareg -y sap

For the complete upgrade procedure, see "How to Upgrade to Sun Cluster 2.2 From HA 1.3".

4218574 - Upgrade to Sun Cluster 2.2 from HA 1.3 fails if patch 104996 (required for Solstice HA-DBMS for Oracle7) is installed on the pre-upgrade system. This occurs because patch 104996 depends upon patch 105008, which the scinstall(1M) command attempts to remove during the upgrade. Work around the problem by removing patch 104996 manually before using scinstall(1M) to upgrade from HA 1.3. See "How to Upgrade to Sun Cluster 2.2 From HA 1.3" for the complete upgrade procedure.

4218613 - During upgrade to Sun Cluster 2.2 from HA 1.3, instance configuration information for the HA-DBMS data services is not propagated to the new cluster. This prevents the database instances from starting when the new cluster is started. This bug affects the Sun Cluster HA for Oracle, Sun Cluster HA for Sybase, and Sun Cluster HA for Informix data services.

Work around the problem by manually recreating the database instance after completing the upgrade. Use the appropriate hadbms insert command (haoracle insert, hasybase insert, or hainformix insert) as described in the associated man pages, and in the appropriate data service chapters in the Sun Cluster 2.2 Software Installation Guide. For the complete upgrade procedures, see "How to Upgrade to Sun Cluster 2.2 From HA 1.3".

After you recreate the database instances, start the instances by using the appropriate hadbms start command.
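
For example, for Oracle, where instance is the name of the database instance you recreated:


# haoracle start instance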

4218620 - During upgrade to Sun Cluster 2.2, existing instance configuration data for Sun Cluster HA for SAP is not propagated to 2.2. Therefore, the SAP instance fails to start when the cluster is started. Work around the problem by manually re-creating the Sun Cluster HA for SAP instance after completing the upgrade, by using the hadsconfig(1M) command to specify all instance parameters. See the revised upgrade procedures in "Upgrading to Sun Cluster 2.2". See also Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP" in the Sun Cluster 2.2 Software Installation Guide, for a full description of the Sun Cluster HA for SAP parameters. New parameters exist that significantly impact the behavior of the data service.

4218823 - During the upgrade from HA 1.3 to Sun Cluster 2.2, only two of three required IP addresses are added to the /.rhosts file on each node. The address lost is the highly available IP address for the private interconnects. Utilities such as hadsconfig(1M) will not work without this entry. The user must manually add the required entries to the /.rhosts file. The procedure is documented on page 3-26 of the Sun Cluster 2.2 Software Installation Guide, and in the revised upgrade procedures in "Upgrading to Sun Cluster 2.2".

4219689 - Adding a data service immediately after upgrading to Sun Cluster 2.2 removes a required entry from the cdb file. Restore the correct entry to the cdb file by selecting "Remove Volume Manager" from the scinstall(1M) Change menu. Then select "Choose Volume Manager" from the same menu, and select the volume manager that you are using.

Documentation Errata

4220504 - Page 4-3 in the Sun Cluster 2.2 System Administration Guide includes instructions to run the scadmin startnode command simultaneously on all nodes. Instead, the scadmin startnode command should be run on only one node at a time.

4222817 - Page 8-20 in the Sun Cluster 2.2 Software Installation Guide includes instructions to install Sun Cluster HA for Netscape LDAP by adding the SUNWhadns package. The correct package name is SUNWscnsl.

4224989 - Page 1-25 in the Sun Cluster 2.2 Software Installation Guide includes the statement:

"When Solstice DiskSuite is specified as the volume manager, you cannot configure direct-attach devices, that is, devices that directly attach to more than 2 nodes. Disks can only be connected to pairs of nodes."

This statement is incorrect. Direct-attach devices are supported with Solstice DiskSuite and Sun Cluster 2.2.

4258156 - Page 1-10 in the Sun Cluster 2.2 Software Installation Guide includes the statement that in parallel database configurations, any server failure is recognized by the cluster software, and subsequent user queries are re-routed through one of the remaining servers. This statement is untrue. In the case of a server failure, a cluster reconfiguration occurs automatically and the user queries are dropped. The user must initiate a new query through an active server, or through the original server after it has been restored to service.

You can configure Oracle Parallel Server such that a restart of the application will reconnect the clients to an active server. Configure this by modifying the tnsnames.ora file on all clients, using the procedure described in Section 14.1.4.2, "Configuring Oracle SQL*Net," in the Sun Cluster 2.2 Software Installation Guide.

Impact of quorum device failure - page 1-18 in the Sun Cluster 2.2 Software Installation Guide includes this note:

"The failure of a quorum device is similar to the failure of a node in a two-node cluster."

This note is misleading. Although the failure of a quorum device does not cause a failover of services, it does reduce the high availability of a two-node cluster in that no further node failures can be tolerated. A failed quorum device can be reconfigured or replaced while the cluster is running. The cluster can remain running as long as no other component failure occurs while the quorum repair or replacement is in progress.

Using scconf(1M) to remove a cluster node - In Chapter 3 of the Sun Cluster 2.2 System Administration Guide, the procedure "How to Remove a Cluster Node" includes a step to use scconf clustername -A n to remove a cluster node. Note that in this command, the number n does not represent a node number, but instead represents the total number of cluster nodes that will be active after the scconf operation. The scconf operation always removes from the cluster the node with the highest node number. For example, the following command would remove the two highest-numbered nodes from a four-node cluster, resulting in a two-node cluster:


# scconf sc-cluster -A 2

Undocumented Error Messages

The following error messages for Sun Cluster HA for SAP were omitted from the Sun Cluster 2.2 Error Messages Manual.


SUNWcluster.ha.sap.stop_net.2076: proha:SUNWscsap_PRO: Found 2 leftover IPC objects for SAP instance, removing via cleanipc

This message indicates that during shutdown of the SAP central instance by the stop_net method, two IPC segments from the central instance were found. The stop_net code uses the SAP-supplied utility cleanipc to remove all IPC segments of the central instance during shutdown (and also before startup). This is to ensure a thorough shutdown as well as a clean startup. The error message is an informational message only, and is expected. No user action is required.


Graceful shutdown failed for oracle instance PRO, starting abort

This message indicates that the HA-Oracle oracle_db_shutdown script did not complete a graceful shutdown of the database within the timeout limit (30 seconds, by default). If the normal shutdown does not complete during the allowed time, then a shutdown abort is issued. This is an informational message and no user action is required.


SUNWcluster.ccd.ccdctl.4403: (error) checkpoint, ccdd, ticlts: RPC: Program not registered

This message indicates that the ccdadm command could not contact the ccdd daemon for the requested operation--the RPC call clnt_create() failed. Verify that the cluster has been started on the current node and that the ccdd daemon is running.
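
For example, you might confirm that the daemon is running with a command such as the following:


# ps -ef | grep ccdd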


SUNWcluster.clustd.transition.4010: cluster aborted on this node nodename

This message indicates that the current node is being aborted. Other error messages should indicate why this is occurring; check the scadmin.log log file in /var/opt/SUNWcluster.


reconf.pnm.3009: pnminit faced problems

This message is generated by the script /opt/SUNWcluster/bin/pnm. This script is called during step 1 of cluster reconfiguration, when PNM is initialized with pnminit. The error message appears if pnminit exits with a nonzero status.

Check /var/opt/SUNWcluster/ccd/ccd.log for any error messages that indicate why pnminit failed, then restart the cluster reconfiguration.


SUNWcluster.reconfig.4018: Aborting--received abort request from nodename

This message indicates a request from a remote node to abort the current node. Use checksum to verify that the /etc/opt/SUNWcluster/conf/clustername.cdb files are identical on all nodes. If necessaryt, manually copy the most recent clustername.cdb file to all nodes, and then restart the cluster.