Sun Cluster 2.2 Software Installation Guide

Upgrading to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7

Use the procedures in the following sections to upgrade to Sun Cluster 2.2 on Solaris 8 from earlier releases of Sun Cluster 2.2 on Solaris 2.6 or 7.


Note -

No upgrade is necessary to move to this release of Sun Cluster 2.2 on Solaris 8 from previous releases of Sun Cluster 2.2 on Solaris 8. Simply update all cluster nodes with any applicable Sun Cluster and Solaris patches, available from your service provider or from the Sun patch website, http://sunsolve.sun.com.


Upgrade Procedures - Solstice DiskSuite

This section describes the upgrade to Sun Cluster 2.2 on Solaris 8 from Sun Cluster 2.2 on Solaris 2.6 or Solaris 7, for clusters using Solstice DiskSuite as the volume manager.

You should become familiar with the entire procedure before starting the upgrade. For your convenience, have your volume manager-specific documentation at hand for reference.


Caution -

You must take all nodes out of the cluster (that is, take the cluster down) to perform this upgrade. Data and data services will be inaccessible while the cluster is down.



Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.



Note -

The behavior of DNS changes between Solaris 2.6 and Solaris 8 because the default version of BIND differs between these operating environments. This BIND change requires an update to some DNS configuration files. See your DNS documentation for details and instructions.


How to Upgrade to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7 (Solstice DiskSuite)

The examples assume an N+1 configuration using an administrative workstation.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Stop all data services.
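
    One way to stop each data service is with the hareg(1M) command. A minimal sketch, assuming a hypothetical data service name dataservice (repeat the command for each registered data service; the -n option turns the service off):

    phys-hahost1# hareg -n dataservice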

  3. Stop all nodes and bring down the cluster.

    Run the following command on all nodes.


    phys-hahost1# scadmin stopnode
    

  4. On all nodes, upgrade the operating system.

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and also see Chapter 2, Planning the Configuration.

  5. If you are using NIS+, modify the /etc/nsswitch.conf file on all nodes.

    Ensure that "service," "group," and "hosts" lookups are directed to files first. For example:


    hosts: files nisplus
    services: files nisplus
    group: files nisplus

  6. Upgrade from Solstice DiskSuite 4.2 to Solstice DiskSuite 4.2.1.

    Solaris 8 requires Solstice DiskSuite 4.2.1.

    1. Add the Solstice DiskSuite 4.2.1 package from the Solstice DiskSuite media, using pkgadd(1M).

      During the pkgadd(1M) operation, several existing files are noted as being in conflict. You must answer y at each pkgadd prompt to install the new files. An example pkgadd command is shown after the caution below.


      Caution -

      If your original configuration includes mediators, do not remove the old SUNWmdm (mediators) package before adding the new one. Doing so will make all data inaccessible.
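
      A minimal sketch of the pkgadd(1M) step, assuming the Solstice DiskSuite 4.2.1 packages are available under /cdrom/cdrom0 and that the common package names (SUNWmdr, SUNWmdu, SUNWmdx, and, for mediator configurations, SUNWmdm) apply; adjust the path and package list to match your media:

      phys-hahost1# pkgadd -d /cdrom/cdrom0 SUNWmdr SUNWmdu SUNWmdx SUNWmdm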


    2. Install any applicable Solstice DiskSuite patches.

    3. Reboot all nodes.

      At this time, you must reboot all cluster nodes.
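
      One way to reboot each node is with shutdown(1M); the -i6 option reboots the system:

      phys-hahost1# shutdown -y -g0 -i6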

  7. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts.

      Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location; you will restore them in Step 9.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place
      

    2. Remove the SUNWscsap package from all nodes before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). Remove this package now; you will add an updated version in Step 9.


      phys-hahost1# pkgrm SUNWscsap
      

  8. On all nodes, update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke scinstall(1M) and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...

  9. (Sun Cluster HA for SAP only) Perform the following steps.

    1. On all nodes using SAP, add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 7. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product SUNWscsap
      

    2. On all nodes using SAP, restore the customized scripts saved in Step 7.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 7. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  10. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond). Do this on all nodes.

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the SNMP appendix in the Sun Cluster 2.2 System Administration Guide.

  11. On all nodes, install any required or recommended Sun Cluster patches.

    Obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.
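
    A minimal sketch of applying a single patch with patchadd(1M), assuming a hypothetical patch ID that has been downloaded and unpacked under /var/tmp; always follow the patch README, which might require additional steps or a reboot:

    phys-hahost1# patchadd /var/tmp/111111-01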

  12. Start the cluster and add all nodes to it.

    Start the cluster by running the following command on the first node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster 
    

    Then add each node to the cluster by running the following command on each node, sequentially. Allow the cluster to reconfigure before you add each subsequent node.


    phys-hahost2# scadmin startnode
    

  13. If necessary, update device IDs.

    If you received error messages pertaining to invalid device IDs while starting the cluster, follow these steps to update the device IDs.

    1. On any node, make a backup copy of the /etc/did.conf file.
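
      For example (the backup file name is arbitrary):

      phys-hahost1# cp /etc/did.conf /etc/did.conf.orig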

    2. Get a list of all affected instance numbers by running the following command from node 0. The output of the command will indicate the instance numbers.


      phys-hahost1# scdidadm -l
      

    3. From node 0 only, update device IDs by running the following command.

      This command re-initializes the devices for all multihost disks and for the local disks on node 0. The command must be run from the node defined as node 0. Specify the instance numbers of all multihost disks. You must run the command once for each multihost disk in the cluster.


      Caution -

      Use extreme caution when running the scdidadm -R command. Use only the uppercase R option, never lowercase. The lowercase r option might reassign device numbers to all disks, making data inaccessible. See the scdidadm(1M) man page for more information.


      phys-hahost1# scdidadm -R instance_number1
      ...
      phys-hahost1# scdidadm -R instance_number2
      ...
      phys-hahost1# scdidadm -R instance_number3
      ...



      Note -

      The scdidadm -R command does not re-initialize the local disks on cluster nodes other than node 0. This is acceptable, because the device IDs of local disks are not used by Sun Cluster. However, you will see related error messages for all nodes other than node 0. These error messages are expected and can be ignored safely.


    4. Stop all cluster nodes.

    5. Reboot all cluster nodes.
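
      A combined sketch of these two sub-steps, run on every cluster node (scadmin stopnode removes the node from the cluster; shutdown then reboots it):

      phys-hahost1# scadmin stopnode
      phys-hahost1# shutdown -y -g0 -i6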

  14. Start the cluster and add all nodes to it.

    Start the cluster on the first node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    

    Then run the following command on each node, sequentially. Allow the cluster to reconfigure before you add each subsequent node.


    phys-hahost2# scadmin startnode
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  15. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    2. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    3. Manually copy the configuration file to other nodes in the cluster.

      Overwrite the old Sun Cluster 2.2 configuration files with the new Sun Cluster 2.2 files. For example:


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  16. Start the data services.

    Run the following commands for all data services.


    phys-hahost1# hareg -s -r dataservice -h CI_logicalhost
    phys-hahost1# hareg -y dataservice
    

  17. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide. Also see the Sun Cluster 2.2 Release Notes.

This completes the upgrade to the latest version of Sun Cluster 2.2 on Solaris 8 from earlier versions of Sun Cluster 2.2 on Solaris 2.6 or 7.

Upgrade - VERITAS Volume Manager

This section describes the upgrade to Sun Cluster 2.2 on Solaris 8 from Sun Cluster 2.2 on Solaris 2.6 or Solaris 7, for clusters using VERITAS Volume Manager (VxVM, with or without the cluster feature). The VxVM cluster feature is used with Oracle Parallel Server.

This procedure describes the steps required to upgrade the software on a Sun Cluster 2.2 system running VERITAS Volume Manager and Solaris 2.6 or 7 to the latest version of Sun Cluster 2.2 running Solaris 8.

You should become familiar with the entire procedure before starting the upgrade. For your convenience, have your volume manager-specific documentation at hand for reference.


Caution -

Before starting the upgrade, you should have an adequate backup of all configuration information and key data, and the cluster must be in a stable, non-degraded state.



Caution -

If you are running VxVM with an encapsulated root disk, you must unencapsulate the root disk before installing Sun Cluster 2.2. After you install Sun Cluster 2.2, encapsulate the root disk again. Refer to your VxVM documentation for the procedures to encapsulate and unencapsulate the root disk.



Note -

During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.



Note -

The behavior of DNS changes between Solaris 2.6 and Solaris 8 because the default version of BIND differs between these operating environments. This BIND change requires an update to some DNS configuration files. See your DNS documentation for details and instructions.


How to Upgrade to Sun Cluster 2.2 on Solaris 8 From Sun Cluster 2.2 on Solaris 2.6 or Solaris 7 (VERITAS Volume Manager)

The examples assume an N+1 configuration using an administrative workstation.

  1. (Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.

    The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Chapter 10, Installing and Configuring Sun Cluster HA for SAP for a description of the new Sun Cluster HA for SAP configuration parameters.


    phys-hahost1# hadsconfig
    

  2. Select a node to upgrade first and switch any data services from that node to backup nodes.
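
    A minimal sketch using haswitch(1M), assuming phys-hahost1 is upgraded first and its logical host hahost1 is moved to the backup node phys-hahost2 (host names taken from the example configuration; repeat for each logical host mastered by the node being upgraded):

    phys-hahost2# haswitch phys-hahost2 hahost1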

  3. If your cluster includes OPS, make sure OPS is halted.

  4. Stop the node you are upgrading.


    phys-hahost1# scadmin stopnode
    

  5. If you are upgrading the operating environment, start the volume manager upgrade using your VERITAS documentation.

    You must perform some volume manager specific tasks before upgrading the operating environment. See your VERITAS documentation for detailed information. You will finish the volume manager upgrade after you upgrade the operating environment.

  6. Upgrade the operating environment.

    To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and also see Chapter 2, Planning the Configuration.

  7. If you are using NIS+, modify the /etc/nsswitch.conf file.

    Ensure that "service," "group," and "hosts" lookups are directed to files first. For example:


    hosts: files nisplus
    services: files nisplus
    group: files nisplus

  8. Complete the volume manager upgrade, using your VERITAS documentation.

  9. Reboot the node.

  10. (Sun Cluster HA for SAP only) Perform the following steps.

    1. Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts.

      Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location; you will restore them in Step 12.


      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
      phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place
      

    2. Remove the SUNWscsap package from all nodes before using scinstall(1M) to update the cluster software.

      The SUNWscsap package is not updated automatically by scinstall(1M). Remove this package now; you will add an updated version in Step 12.


      phys-hahost1# pkgrm SUNWscsap
      

  11. Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.

    Invoke scinstall(1M) and select the Upgrade option from the menu presented.


    phys-hahost1# cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    phys-hahost1# ./scinstall
    
    Removal of <SUNWscins> was successful.
    Installing: SUNWscins
    
    Installation of <SUNWscins> was successful.
    Assuming a default cluster name of sc-cluster
    
    Checking on installed package state............
    
    ============ Main Menu =================
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
    
    Please choose one of the menu items: [7]:  1
    ...
    ==== Install/Upgrade Software Selection Menu =======================
    Upgrade to the latest Sun Cluster Server packages or select package
    sets for installation. The list of package sets depends on the Sun
    Cluster packages that are currently installed.
    
    Choose one:
    1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
    2) Server             Install the Sun Cluster packages needed on a server
    3) Client             Install the admin tools needed on an admin workstation
    4) Server and Client  Install both Client and Server packages
    5) Close              Exit this Menu
    6) Quit               Quit the Program
    
    Enter the number of the package set [6]:  1
    
    What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .
    
    ** Upgrading from Sun Cluster 2.1 **
    	Removing "SUNWccm" ... done
    ...

  12. (Sun Cluster HA for SAP only) Perform the following steps.

    1. On all nodes using SAP, add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.

      Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 10. Answer y to all screen prompts that appear during the pkgadd process.


      phys-hahost1# pkgadd -d \
      /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product SUNWscsap
      

    2. On all nodes using SAP, restore the customized scripts saved in Step 10.

      Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 10. After restoring the scripts, use the ls -l command to verify that the scripts are executable.


      phys-hahost1# cd /opt/SUNWcluster/ha/sap
      phys-hahost1# cp /safe_place/hasap_start_all_instances .
      phys-hahost1# cp /safe_place/hasap_stop_all_instances .
      phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
      -r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
      -r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances

  13. If the cluster has more than two nodes, supply the TC/SSP information.

    The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes (for example, by using ftp, as shown after this step) so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.


    SC2.2 uses the terminal concentrator (or system service processor in the case of an E10000) for failure fencing. During the SC2.2 installation the IP address for the terminal concentrator along with the physical port numbers that each server is connected to is requested. This information can be changed using scconf.
    
    After the upgrade has completed you need to run scconf to specify terminal concentrator information for each server. This will need to be done on each server in the cluster.
    
    The specific commands that need to be run are:
    
    scconf clustername -t <nts name> -i <nts name|IP address>
    scconf clustername -H <node 0> -p <serial port for node 0> \
    -d <other|E10000> -t <nts name>
    
    Repeat the second command for each node in the cluster. Repeat the first command if you have more than one terminal concentrator in your configuration.
    Or you can choose to set this up now. The information you will need is:
    
    			+terminal concentrator/system service processor names
    			+the architecture type (E10000 for SSP or other for tc)
    			+the ip address for the terminal concentrator/system service 
             processor (these will be looked up based on the name, you 
             will need to confirm)
    			+for terminal concentrators, you will need the physical 
             ports the systems are connected to (physical ports 
             (2,3,4... not the telnet ports (5002,...)
    
    Do you want to set the TC/SSP info now (yes/no) [no]?  y
    

    When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.

    The example cluster assumes the following configuration information:

    • Cluster name: sc-cluster

    • Number of nodes in the cluster: 2

    • Node names: phys-hahost1 and phys-hahost2

    • Logical host names: hahost1 and hahost2

    • Terminal concentrator name: cluster-tc

    • Terminal concentrator IP address: 123.4.5.678

    • Physical TC port connected to phys-hahost1: 2

    • Physical TC port connected to phys-hahost2: 3

      See Chapter 1, Understanding the Sun Cluster Environment for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is other, and a terminal concentrator is used:


      What type of architecture does phys-hahost1 have? (E10000|other) [other] [?] other
      What is the name of the Terminal Concentrator connected to the serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
      Is 123.4.5.678 the correct IP address for this Terminal Concentrator (yes|no) [yes] [?] yes
      Which physical port on the Terminal Concentrator is phys-hahost1 connected to [?] 2
      What type of architecture does phys-hahost2 have? (E10000|other) [other] [?] other
      Which Terminal Concentrator is phys-hahost2 connected to:
      
      0) cluster-tc       123.4.5.678
      1) Create A New Terminal Concentrator Entry
      
      Select a device [?] 0
      Which physical port on the Terminal Concentrator is phys-hahost2 connected to [?] 3
      The terminal concentrator/system service processor (TC/SSP) information has been stored in file /var/tmp/tc_ssp_data. Please put a copy of this file into /var/tmp on the rest of the nodes in the cluster. This way you don't have to re-enter the TC/SSP values, but you will, however, still be prompted for the TC/SSP passwords.
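
    One way to copy the saved TC/SSP file to another node, following the ftp convention used elsewhere in this procedure (repeat for each remaining cluster node; the file name is as described earlier in this step):

    phys-hahost1# ftp phys-hahost2
    ftp> put /var/tmp/tc_ssp_info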

  14. If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the SNMP appendix in the Sun Cluster 2.2 System Administration Guide.

  15. Install any required or recommended Sun Cluster and volume manager patches.

    Obtain any required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.

  16. Reboot the node.


    Caution -

    You must reboot at this time.


  17. If you are using a shared CCD, put all logical hosts into maintenance mode.


    phys-hahost2# haswitch -m hahost1 hahost2 
    


    Note -

    Clusters with more than two nodes do not use a shared CCD. Therefore, for these clusters, you do not need to put the logical hosts into maintenance mode at this time.


  18. Stop the cluster software on the remaining nodes running the old version of Sun Cluster.


    phys-hahost2# scadmin stopnode
    

  19. Start the upgraded node.


    phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster
    


    Note -

    As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.


  20. (Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.

    1. Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.

      Refer to Chapter 10, Installing and Configuring Sun Cluster HA for SAP for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.


      phys-hahost1# hadsconfig
      


      Note -

      It is safe to ignore any errors generated by hadsconfig(1M) at this time.


    2. After you set the configuration parameters, use hareg(1M) to activate the data service:


      phys-hahost1# hareg -y sap
      

    3. Manually copy the configuration file to other nodes in the cluster by using ftp.

      Overwrite the old Sun Cluster 2.2 configuration files with the new Sun Cluster 2.2 files.


      phys-hahost1# ftp phys-hahost2
      ftp> put /etc/opt/SUNWscsap/hadsconf 
      

  21. If you are using a shared CCD, update it now.

    Run the ccdadm(1M) command only once, on the host that joined the cluster first.


    phys-hahost1# cd /etc/opt/SUNWcluster/conf
    phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade
    

  22. Restart the data services on the upgraded node.

    Run the following commands for each data service.


    phys-hahost1# hareg -s -r dataservice -h CI_logicalhost
    phys-hahost1# hareg -y dataservice
    

  23. Repeat Steps 4 through 15 on the next node to be upgraded.

    Repeat the upgrade procedure for each node, sequentially.

  24. After each node is upgraded, add it to the cluster.


    phys-hahost2# scadmin startnode sc-cluster
    

  25. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 in the Sun Cluster 2.2 System Administration Guide. Also see the Sun Cluster 2.2 Release Notes.

This completes the upgrade to the latest version of Sun Cluster 2.2 on Solaris 8 from earlier versions of Sun Cluster 2.2 on Solaris 2.6 or 7.