Sun Cluster 2.2 Software Installation Guide

Upgrading to Sun Cluster 2.2 From Solstice HA 1.3

You can perform the upgrade either from an administrative workstation or from the console of any physical host in the cluster. Using an administrative workstation provides the most flexibility during the upgrade process.


Caution -

Back up all local and multihost disks before starting the upgrade. All systems must be operable and robust. Do not attempt to upgrade if systems are experiencing any difficulties.
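
For example, a level 0 ufsdump(1M) of a local UFS file system might look like the following. This is only a sketch; the tape device and raw disk device names are placeholders, and backups of multihost data depend on your volume manager configuration.


    phys-hahost1# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c0t0d0s0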



Caution -

On each node, if you customized the hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2 and restore them after the upgrade completes. Saving and restoring these scripts prevents loss of your customizations when Sun Cluster 2.2 removes the old scripts. The configuration parameters implemented in Sun Cluster 2.2 differ from those implemented in Solstice HA 1.3 and Sun Cluster 2.1, so after upgrading to Sun Cluster 2.2 you must reconfigure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables: use the hainetconfig(1M) command for Solstice HA 1.3, or the hadsconfig(1M) command for Sun Cluster 2.1. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.



Caution -

If you created your own data services using the Sun Cluster API, make sure those data services have a base directory associated with them before you begin the upgrade. This base directory defines the location of the methods associated with the data service. If the data service was registered with the -b option to hareg(1M), the base directory is defined in the data services configuration file. By default, data services supplied by Sun are registered with the -b option to hareg(1M). To check whether a base directory is defined, view the file /etc/opt/SUNWhadf/hadf/.hadfconfig_services and look for the SERVICE_BASEDIR= entry for your data service. If no entry exists, unregister the data service using the command hareg -u dataservice, then re-register the data service by specifying the -b option to hareg(1M). If you attempt to upgrade while any data services do not have an associated base directory for methods, the upgrade will fail.
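
To check quickly from the command line, you can search the file for the entry; for example:


    phys-hahost1# grep SERVICE_BASEDIR /etc/opt/SUNWhadf/hadf/.hadfconfig_services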


How to Upgrade to Sun Cluster 2.2 From HA 1.3

Note -

This procedure assumes you are using an administrative workstation.



Note -

While performing this upgrade, you might see network interface and mediator errors on the console. These messages are side effects of the upgrade and can be ignored safely.


  1. (Solstice HA 1.3 for SAP only) Run hainetconfig(1M) to obtain the current SAP configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hainetconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hainetconfig
    

  2. Load the Sun Cluster 2.2 client packages onto the administrative workstation.

    Refer to Chapter 3, Installing and Configuring Sun Cluster Software to set up the administrative workstation, if you have not done so already.
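
    For example, assuming the Sun Cluster CD-ROM is mounted on the administrative workstation (the workstation name used here is hypothetical), you run scinstall(1M) from the Tools directory and choose the Client package set from the menu:


    adminws# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    adminws# ./scinstall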

  3. Stop Solstice HA on the first server to be upgraded.


    phys-hahost1# hastop
    

    If your cluster is already running Solaris 2.6 and you do not want to upgrade to Solaris 7 or Solaris 8, skip to Step 6.

  4. Upgrade the operating environment to Solaris 2.6, Solaris 7, or Solaris 8.

    To upgrade Solaris, you must use the suninstall(1M) upgrade procedure (rather than reinstalling the operating environment). You might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris operating environment. You must install the Entire Distribution software group. See your Solaris advanced system administration documentation for details.


    Note -

    For some hardware platforms, Solaris attempts to configure power management settings to shut down the server automatically if it has been idle for 30 minutes. The cluster heartbeat is not enough to prevent the Sun Cluster servers from appearing idle and shutting down. Therefore, you must disable this feature when you install the Solaris software. The dialog used to configure power management settings is shown in the next code sample. If you do not see this dialog, then your hardware platform does not support this feature. If the dialog appears, you must answer n to the first question and y to the second to configure the server to work correctly in the Sun Cluster environment.



    ****************************************************************
    This system is configured to conserve energy.
    After 30 minutes without activity, the system state will be
    saved to disk and the system will be powered off automatically.
    
    A system that has been suspended in this way can be restored
    back to exactly where it was by pressing the power key.
    The definition of inactivity and the timeout are user
    configurable. The dtpower(1M) man page has more information.
    ****************************************************************
    
    Do you wish to accept this default configuration, allowing
    your system to save its state then power off automatically
    when it has been idle for 30 minutes? (If this system is used
    as a server, answer n. By default autoshutdown is
    enabled.) [y,n,?] n
    
    Autoshutdown disabled.
    
    Should the system save your answer so it won't need to ask
    the question again when you next reboot? (By default the
    question will not be asked again.) [y,n,?] y
    
  5. Update the Solaris kernel files.

    As part of the Solaris upgrade, the files /kernel/drv/sd.conf and /kernel/drv/ssd.conf will be renamed to /kernel/drv/sd.conf:2.x and /kernel/drv/ssd.conf:2.x respectively. New /kernel/drv/sd.conf and /kernel/drv/ssd.conf files will be created. Run the diff(1) command to identify the differences between the old files and the new ones. Copy the additional information that was inserted by Sun Cluster from the old files into the new files. The information will look similar to the following:


    # Start of lines added by Solstice HA
    sd_retry_on_reservation_conflict=0;
    # End of lines added by Solstice HA
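
    For example, to identify the lines that must be carried forward, you can compare each pair of files (a sketch, assuming the renamed files described above):


    phys-hahost1# diff /kernel/drv/sd.conf:2.x /kernel/drv/sd.conf
    phys-hahost1# diff /kernel/drv/ssd.conf:2.x /kernel/drv/ssd.conf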

  6. Upgrade to Solstice DiskSuite 4.2 or 4.2.1.

    1. Upgrade Solstice DiskSuite using the detailed procedures in your Solstice DiskSuite documentation.

    2. On the local host, upgrade the Solstice DiskSuite mediator package, SUNWmdm.

      For Solstice DiskSuite 4.2, the path to the SUNWmdm package is /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/. For Solstice DiskSuite 4.2.1, the path is /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.8/Packages/.

      In the following example, several existing files are reported as being in conflict. You must answer y at each prompt to install the new files.


      Caution -

      Do not remove the old SUNWmdm package before adding the new one. Doing so will make all data inaccessible.



      phys-hahost1# pkgadd -d /cdrom_path/ SUNWmdm
      
      Processing package instance <SUNWmdm>...
      
      Solstice DiskSuite (Mediator)
      (sparc) 4.2,REV=1998.23.10.09.59.06
      Copyright 1998 Sun Microsystems, Inc. All rights reserved.
      
      ## Executing checkinstall script.
      			This is an upgrade. Conflict approval questions may be
      			displayed. The listed files are the ones that will be
      			upgraded. Please answer "y" to these questions if they are
      			presented.
      Using </> as the package base directory.
      ## Processing package information.
      ## Processing system information.
         10 package pathnames are already properly installed.
      ## Verifying package dependencies.
      ## Verifying disk space requirements.
      ## Checking for conflicts with packages already installed.
      
      The following files are already installed on the system and are being used by another package:
        /etc/opt/SUNWmd/meddb
        /usr/opt <attribute change only>
        /usr/opt/SUNWmd/man/man1m/medstat.1m
        /usr/opt/SUNWmd/man/man1m/rpc.metamedd.1m
        /usr/opt/SUNWmd/man/man4/meddb.4
        /usr/opt/SUNWmd/man/man7/mediator.7
        /usr/opt/SUNWmd/sbin/medstat
        /usr/opt/SUNWmd/sbin/rpc.metamedd
      
      Do you want to install these conflicting files [y,n,?,q] y
      ## Checking for setuid/setgid programs.
      
      This package contains scripts which will be executed with super-user permission during the process of installing this package.
      Do you want to continue with the installation of <SUNWmdm.2> [y,n,?] y
      Installing Solstice DiskSuite (Mediator) as <SUNWmdm.2>
      ...
  7. Before updating the cluster package, remove patch 104996 (the Solstice HA 1.3 SUNWhaor patch), if it is installed.

    When scinstall(1M) updates cluster packages in Step 9, the command attempts to remove a patch on which patch 104996 is dependent. To prevent scinstall(1M) from failing, remove patch 104996 manually now.


    phys-hahost1# patchrm 104996-xx
    

  8. (Solstice HA for SAP only) Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts, before beginning the upgrade to Sun Cluster 2.2.

    Save these scripts to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location. You will restore the scripts in Step 10. Use the following commands:


    # cp /opt/SUNWhasap/clust_progs/hasap_start_all_instances /safe_place
    # cp /opt/SUNWhasap/clust_progs/hasap_stop_all_instances /safe_place
    

  9. Use the scinstall(1M) command to update the cluster packages.

    Select Upgrade from the scinstall(1M) menu. Respond to the prompts that ask for the location of the Framework packages and the cluster name. The scinstall(1M) command replaces Solstice HA 1.3 packages with Sun Cluster 2.2 packages.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Installing: SUNWscins
    Installation of <SUNWscins> was successful.
    Checking on installed package state
    ............
    None of the Sun Cluster software has been installed
    <<Press return to continue>> 
    
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    Enter the number of the package set [6]: 1
    What is the directory where the Framework packages can be found 
    
    [/cdrom/cdrom0]: .
    
    ** Upgrading from Solstice HA 1.3 **
    What is the name of the cluster? sc-cluster
    ...

  10. (Solstice HA 1.3 for SAP only) Restore the customized scripts saved in Step 8.

    Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 8. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


    phys-hahost1# cd /opt/SUNWcluster/ha/sap
    phys-hahost1# cp /safe_place/hasap_start_all_instances .
    phys-hahost1# cp /safe_place/hasap_stop_all_instances .
    phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
    -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
    -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  11. Add required entries to the /.rhosts file.

    The /.rhosts file contains one or more sets of three IP addresses (depending on the number of nodes in the cluster). These are private network IP addresses used internally by Sun Cluster. During the upgrade, only some of the IP addresses are added to the /.rhosts files; the first IP address in each set is lost. You must manually insert the missing addresses in the /.rhosts file on each node.

    The number of sets you need depends on the number of nodes in the cluster. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below.


    # node 0
    204.152.65.33         # Manually insert this address on all nodes other than node 0
    204.152.65.1
    204.152.65.17

    # node 1
    204.152.65.34         # Manually insert this address on all nodes other than node 1
    204.152.65.2
    204.152.65.18

    # node 2
    204.152.65.35         # Manually insert this address on all nodes other than node 2
    204.152.65.3
    204.152.65.19

    # node 3
    204.152.65.36         # Manually insert this address on all nodes other than node 3
    204.152.65.4
    204.152.65.20
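
    For example, on node 1 you could append node 0's missing address with a command such as the following; repeat on each node with the appropriate addresses (a sketch only):


    phys-hahost2# echo "204.152.65.33" >> /.rhosts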

  12. (Solaris 2.6 and 7 only) Use install_scpatches to install Sun Cluster patches from the Sun Cluster product CD-ROM.

    Use the install_scpatches utility to install Sun Cluster patches from the Sun Cluster CD-ROM.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    # ./install_scpatches
    
    Patch install script for Sun Cluster 2.2 July 2000 Release
    
    *WARNING* SYSTEMS WITH LIMITED DISK SPACE SHOULD *NOT* INSTALL PATCHES:
    With or without using the save option, the patch installation process
    will still require some amount of disk space for installation and
    administrative tasks in the /, /usr, /var, or /opt partitions where
    patches are typically installed.  The exact amount of space will
    depend on the machine's architecture, software packages already 
    installed, and the difference in the patched objects size.  To be
    safe, it is not recommended that a patch cluster be installed on a
    system with less than 4 MBytes of available space in each of these
    partitions.  Running out of disk space during installation may result
    in only partially loaded patches.  Check and be sure adequate disk space
    is available before continuing.
    
    Are you ready to continue with install? [y/n]: y
    
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    Installing patches located in /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    Using patch_order file for patch installation sequence
    Installing 107388-03 ... okay.
    Installing 107748-02 ... okay.
    ...
    For more installation messages refer to the installation logfile:
      /var/sadm/install_data/Sun_Cluster_2.2_July_2000_Release_log
    
    Use '/usr/bin/showrev -p' to verify installed patch-ids.
    Refer to individual patch README files for more patch detail.
    Rebooting the system is usually necessary after installation.
    
    #

  13. Install any required or recommended Sun Cluster and volume manager patches.

    Besides those patches installed in Step 12, also obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.
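
    For example, to confirm that a particular patch is installed (shown here with one of the patch IDs from Step 12):


    phys-hahost1# showrev -p | grep 107388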

  14. Set the PATH environment variable for user root to include the command directories /opt/SUNWcluster/bin and /opt/SUNWpnm/bin. Set the MANPATH environment variable for user root to include /opt/SUNWcluster/man.
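
    For example, assuming user root uses the Bourne shell, you could add lines such as the following to root's /.profile (a sketch; adjust for your shell and any existing settings):


    PATH=$PATH:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man
    export PATH MANPATH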

  15. Reboot the machine.


    phys-hahost1# reboot
    


    Note -

    During the reboot process, you might see error messages pertaining to the loss of a private network. At this time, it is safe to ignore these error messages.


  16. Switch ownership of disks and data services from the remote host to the upgraded local host.

    1. Stop Solstice HA 1.3 services on the remote host.

      The remote host in this example is phys-hahost2.


      phys-hahost2# hastop
      

    2. After Solstice HA 1.3 is stopped on the remote host, start Sun Cluster 2.2 on the upgraded local host.

      After the hastop(1M) operation has completed, use the scadmin(1M) command to start Sun Cluster 2.2. This causes the upgraded local host to take over all data services. In this example, phys-hahost1 is the local physical host name and sc-cluster is the cluster name.


      phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
      

  17. Re-create instance configuration data for the highly available databases.

    During the upgrade, the instance configuration data is not upgraded for the highly available databases. You must use the appropriate hadbms insert command to manually re-create each database instance, where dbms is the name of the database; for example, haoracle insert, hainformix insert, or hasybase insert.

    Find the pre-upgrade instance configuration information in the /etc/opt/SUNWhadf.obsolete/hadf/hadbms_databases file. For information about the parameters for each hadbms insert command, see the man page for that command and the appropriate database chapter in this book. For example, for information on haoracle(1M), see the haoracle(1M) man page and Chapter 5, Installing and Configuring Sun Cluster HA for Oracle.

  18. Turn on the database instances.

    Use the appropriate hadbms command to turn on each database instance. For example, for Oracle:


    phys-hahost1# haoracle start instance
    

  19. (Sun Cluster HA for SAP only) Unregister and re-register the Sun Cluster HA for SAP data service.

    After the upgrade, the method names for the Sun Cluster HA for SAP data service are incorrect in the Cluster Configuration Database (CCD). To correct the method names, first turn off and unregister the Sun Cluster HA for SAP data service and then register it again in order to log the correct method names in the CCD file. In addition, re-create the SAP instance that you noted in Step 1.

    1. Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -n sap
      

    2. Unregister the Sun Cluster HA for SAP data service.


      phys-hahost1# hareg -u sap
      

    3. Register the Sun Cluster HA for SAP data service.

      In this example, CI_logicalhost is the logical host name.


      phys-hahost1# hareg -s -r sap -h CI_logicalhost
      

    4. Run hadsconfig(1M) to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters. Also, refer to the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    5. After setting the configuration parameters, use the hareg(1M) command to activate the data service.


      phys-hahost1# hareg -y sap
      

    6. Manually copy the configuration file, /etc/opt/SUNWscsap/hadsconf, to all other cluster nodes.

      First create the /etc/opt/SUNWscsap directory if it does not exist. Then copy the configuration file to all nodes.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf
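
      Alternatively, if remote shell access between the nodes is permitted, you could copy the file with rcp(1); this is a sketch only:


      phys-hahost2# mkdir -p /etc/opt/SUNWscsap
      phys-hahost1# rcp /etc/opt/SUNWscsap/hadsconf phys-hahost2:/etc/opt/SUNWscsap/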
      

  20. Verify operations on the local host.

    1. Verify that the configuration on the local host is stable.


      phys-hahost1# hastat
      

    2. Verify that clients are receiving services from the local host.
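
      For example, from a client system you might confirm that the logical host responds on the network (the logical host name here is hypothetical); checking the data service itself depends on which services you run:


      client% ping hahost1
      hahost1 is alive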

  21. Repeat Step 3 through Step 20 on the remote host.

  22. Return the remote host to the cluster.


    phys-hahost2# scadmin startnode
    

  23. After cluster reconfiguration on the remote host is complete, switch over the data services to the remote host from the local host.


    phys-hahost1# haswitch phys-hahost2 hahost2
    

  24. Verify that the Sun Cluster 2.2 configuration on the remote host is in a stable state, and that clients are receiving services.


    phys-hahost2# hastat
    

This completes the procedure to upgrade to Sun Cluster 2.2 from Solstice HA 1.3.

Configuring Mediators When Migrating From Solstice HA 1.3 to Sun Cluster 2.2

This section is only relevant to clusters that were originally set up under Solstice HA 1.3 using Solstice DiskSuite mediators (two-string configurations). It describes changes that are automatically made to a mediator configuration when you upgrade from Solstice HA 1.3 to Sun Cluster 2.2. There is no direct user impact, but you should note the changes in any configuration information you keep on the cluster.

The procedure "How to Upgrade to Sun Cluster 2.2 From HA 1.3" changes the Solstice HA 1.3 mediator configuration. The original Solstice HA 1.3 mediator configuration resembles the following:


Mediator Host(s)      Aliases
ha-red                ha-red-priv1, ha-red-priv2
ha-green              ha-green-priv1, ha-green-priv2

After running the Sun Cluster 2.2 upgrade procedure, this configuration is converted to one similar to the following:


Mediator Host(s)      Aliases
ha-red                204.152.65.34
ha-green              204.152.65.33


Note -

In Solstice HA 1.3, the mediator configuration referred to the private links by their physical host names; in Sun Cluster 2.2, the private link IP addresses are used instead.


For more information about configuring mediators for Sun Cluster 2.2, see the dual-string mediators chapter in the Sun Cluster 2.2 System Administration Guide.
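
To confirm the converted mediator configuration after the upgrade, you can check mediator status with medstat(1M); the diskset name shown here is hypothetical:


    phys-hahost1# medstat -s hahost1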