Several errors exist in the upgrade procedures documented in Chapter 4 of the Sun Cluster 2.2 Software Installation Guide. To upgrade to Sun Cluster 2.2 from HA 1.3, Sun Cluster 2.0, or Sun Cluster 2.1, use the following procedures instead.
These are the high-level steps to upgrade from Solstice(TM) HA 1.3 to Sun Cluster 2.2. You can perform the upgrade either from an administrative workstation or from the console of any physical host in the cluster. Using an administrative workstation allows the most flexibility during the upgrade process.
This procedure assumes you are using an administrative workstation.
Back up all local and multihost disks before starting the upgrade. All systems must also be fully operational; do not attempt the upgrade if any system is experiencing problems.
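As one hedged illustration only (assuming a local tape drive at /dev/rmt/0; substitute your own backup method, devices, and file systems), a level 0 dump of the root file system might look like this:
phys-hahost1# ufsdump 0ucf /dev/rmt/0 / |
Repeat the backup for each local and multihost file system in the configuration.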
On each node, if you customized hasap_start_all_instances or hasap_stop_all_instances scripts in Solstice HA 1.3 or Sun Cluster 2.1, save them to a safe location before beginning the upgrade to Sun Cluster 2.2. Restore the scripts after completing the upgrade. Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts.
The configuration parameters implemented in Sun Cluster 2.2 are different from those implemented in Solstice HA 1.3 and Sun Cluster 2.1. Therefore, after upgrading to Sun Cluster 2.2, you will have to re-configure Sun Cluster HA for SAP by running the hadsconfig(1M) command. Before starting the upgrade, view the existing configuration and note the current configuration variables. For Solstice HA 1.3, use the hainetconfig(1M) command to view the configuration. For Sun Cluster 2.1, use the hadsconfig(1M) command to view the configuration. After upgrading to Sun Cluster 2.2, use the hadsconfig(1M) command to re-create the instance.
If you created your own data services using the Sun Cluster API, make sure those data services have a base directory associated with them before you begin the upgrade. This base directory defines the location of the methods associated with the data service. If the data service was registered with the -b option to hareg(1M), the base directory is defined in the data services configuration file. Data services supplied by Sun are registered by default with the -b option to hareg(1M).
To check whether a base directory is defined, view the file /etc/opt/SUNWhadf/hadf/.hadfconfig_services and look for the SERVICE_BASEDIR= entry for your data service. If no entry exists, you must unregister the data service using the command hareg -u dataservice, then re-register the data service by specifying the -b option to hareg(1M).
If you attempt to upgrade while any data services do not have an associated base directory for methods, the upgrade will fail.
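For example, assuming a hypothetical user-written data service named mydataservice with its methods installed under /opt/mydataservice (both names are placeholders; supply the same registration options you used originally with hareg(1M)), the check and re-registration might look like this:
phys-hahost1# grep SERVICE_BASEDIR /etc/opt/SUNWhadf/hadf/.hadfconfig_services
phys-hahost1# hareg -u mydataservice
phys-hahost1# hareg -r mydataservice -b /opt/mydataservice ... |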
(Solstice HA 1.3 for SAP only) Run hainetconfig(1M) to obtain the current SAP configuration parameters.
The SAP instance configuration data is lost during the upgrade. Therefore, run the hainetconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP," in the Sun Cluster 2.2 Software Installation Guide for a description of the new Sun Cluster HA for SAP configuration parameters.
phys-hahost1# hainetconfig |
Load the Sun Cluster 2.2 client packages onto the administrative workstation.
Refer to Chapter 3 in the Sun Cluster 2.2 Software Installation Guide to set up the administrative workstation, if you have not done so already.
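For example, assuming the Sun Cluster 2.2 CD-ROM is mounted on the administrative workstation (here called admin-ws, a placeholder name), you might start scinstall(1M) from the Tools directory and choose the Client package set from the Install/Upgrade Software Selection Menu shown later in this procedure:
admin-ws# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
admin-ws# ./scinstall |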
Stop Solstice HA on the first server to be upgraded.
phys-hahost1# hastop |
If your cluster is already running Solaris(TM) 2.6 and you do not want to upgrade to Solaris 7, skip to Step 6.
Upgrade the operating environment to Solaris 2.6 or Solaris 7.
To upgrade Solaris, you must use the suninstall(1M) upgrade procedure (rather than reinstalling the operating environment). You might need to increase the size of your root (/) and /usr partitions on the root disks of all Sun Cluster servers in the configuration to accommodate the Solaris 2.6 or Solaris 7 environment. You must install the Entire Distribution software group. See the Solaris Advanced Installation Guide for details.
For some hardware platforms, Solaris 2.6 and Solaris 7 attempt to configure power management settings to shut down the server automatically if it has been idle for 30 minutes. The cluster heartbeat is not enough to prevent the Sun Cluster servers from appearing idle and shutting down. Therefore, you must disable this feature when you install Solaris 2.6 or Solaris 7. The dialog used to configure power management settings is shown below. If you do not see this dialog, then your hardware platform does not support this feature. If the dialog appears, you must answer n to the first question and y to the second to configure the server to work correctly in the Sun Cluster environment.
****************************************************************
This system is configured to conserve energy.
After 30 minutes without activity, the system state will be
saved to disk and the system will be powered off automatically.

A system that has been suspended in this way can be restored
back to exactly where it was by pressing the power key.
The definition of inactivity and the timeout are user
configurable. The dtpower(1M) man page has more information.
****************************************************************

Do you wish to accept this default configuration, allowing
your system to save its state then power off automatically
when it has been idle for 30 minutes?
(If this system is used as a server, answer n. By default
autoshutdown is enabled.) [y,n,?] n

Autoshutdown disabled.

Should the system save your answer so it won't need to ask
the question again when you next reboot?
(By default the question will not be asked again.) [y,n,?] y |
Update the Solaris 2.6 or Solaris 7 kernel files.
As part of the Solaris upgrade, the files /kernel/drv/sd.conf and /kernel/drv/ssd.conf will be renamed to /kernel/drv/sd.conf:2.x and /kernel/drv/ssd.conf:2.x respectively. New /kernel/drv/sd.conf and /kernel/drv/ssd.conf files will be created. Run the diff(1) command to identify the differences between the old files and the new ones. Copy the additional information that was inserted by Sun Cluster from the old files into the new files. The information will look similar to the following:
# Start of lines added by Solstice HA
sd_retry_on_reservation_conflict=0;
# End of lines added by Solstice HA |
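For example, assuming the upgrade saved the previous files with the :2.x suffix described above, you can compare the old and new files as follows:
phys-hahost1# diff /kernel/drv/sd.conf:2.x /kernel/drv/sd.conf
phys-hahost1# diff /kernel/drv/ssd.conf:2.x /kernel/drv/ssd.conf |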
Upgrade to Solstice DiskSuite 4.2.
Upgrade Solstice DiskSuite using the detailed procedure in the Solstice DiskSuite 4.2 Installation and Product Notes.
On the local host, upgrade the Solstice DiskSuite mediator package, SUNWmdm.
phys-hahost1# pkgadd -d \
/cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWmdm

Processing package instance <SUNWmdm>...

Solstice DiskSuite (Mediator)
(sparc) 4.2,REV=1998.23.10.09.59.06
Copyright 1998 Sun Microsystems, Inc. All rights reserved.

## Executing checkinstall script.
This is an upgrade. Conflict approval questions may be displayed.
The listed files are the ones that will be upgraded. Please answer
"y" to these questions if they are presented.
Using </> as the package base directory.
## Processing package information.
## Processing system information.
   10 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.

The following files are already installed on the system and are being
used by another package:
  /etc/opt/SUNWmd/meddb
  /usr/opt <attribute change only>
  /usr/opt/SUNWmd/man/man1m/medstat.1m
  /usr/opt/SUNWmd/man/man1m/rpc.metamedd.1m
  /usr/opt/SUNWmd/man/man4/meddb.4
  /usr/opt/SUNWmd/man/man7/mediator.7
  /usr/opt/SUNWmd/sbin/medstat
  /usr/opt/SUNWmd/sbin/rpc.metamedd

## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWmdm.2> [y,n,?] y

Installing Solstice DiskSuite (Mediator) as <SUNWmdm.2>
... |
Before updating the cluster package, remove patch 104996, the Solstice HA 1.3 SUNWhaor patch, if it is installed.
When scinstall(1M) updates cluster packages in Step 9, the command attempts to remove a patch on which patch 104996 is dependent. To prevent scinstall(1M) from failing, remove patch 104996 manually now:
phys-hahost1# patchrm 104996-xx |
(Solstice HA for SAP only) Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts, before beginning the upgrade to Sun Cluster 2.2.
Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location. You will restore the scripts in Step 10. Use the following commands:
# cp /opt/SUNWhasap/clust_progs/hasap_start_all_instances /safe_place
# cp /opt/SUNWhasap/clust_progs/hasap_stop_all_instances /safe_place |
Use the scinstall(1M) command to update the cluster packages.
Select Upgrade from the scinstall(1M) menu. Respond to the prompts that ask for the location of the Framework packages and the cluster name. The scinstall(1M) command replaces Solstice HA 1.3 packages with Sun Cluster 2.2 packages.
phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
phys-hahost1# ./scinstall

Installing: SUNWscins

Installation of <SUNWscins> was successful.
Checking on installed package state
............

None of the Sun Cluster software has been installed

<<Press return to continue>>

==== Install/Upgrade Software Selection Menu =======================
Upgrade to the latest Sun Cluster Server packages or select package
sets for installation. The list of package sets depends on the Sun
Cluster packages that are currently installed.

Choose one:
1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
2) Server             Install the Sun Cluster packages needed on a server
3) Client             Install the admin tools needed on an admin workstation
4) Server and Client  Install both Client and Server packages
5) Close              Exit this Menu
6) Quit               Quit the Program

Enter the number of the package set [6]:  1

What is the directory where the Framework packages can be found [/cdrom/cdrom0]:  .

Do you want to install these conflicting files [y,n,?,q] y

** Upgrading from Solstice HA 1.3 **

What is the name of the cluster? sc-cluster
... |
(Solstice HA 1.3 for SAP only) Restore the customized scripts saved in Step 8.
Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 8. After restoring the scripts, use the ls -l command to verify that the scripts are executable.
phys-hahost1# cd /opt/SUNWcluster/ha/sap
phys-hahost1# cp /safe_place/hasap_start_all_instances .
phys-hahost1# cp /safe_place/hasap_stop_all_instances .
phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
-r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
-r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances |
Add required entries to the /.rhosts file.
The /.rhosts file contains one or more sets of three IP addresses (depending on the number of nodes in the cluster). These are private network IP addresses used internally by Sun Cluster. During the upgrade, only some of the IP addresses are added to the /.rhosts files; the first IP address in each set is lost. You must manually insert the missing addresses in the /.rhosts file on each node.
The number of sets you need depends on the number of nodes in the cluster. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below.
# node 0
204.152.65.33    # Manually insert this address on all nodes
204.152.65.1     #   other than node0
204.152.65.17
# node 1
204.152.65.34    # Manually insert this address on all nodes
204.152.65.2     #   other than node1
204.152.65.18
# node 2
204.152.65.35    # Manually insert this address on all nodes
204.152.65.3     #   other than node2
204.152.65.19
# node 3
204.152.65.36    # Manually insert this address on all nodes
204.152.65.4     #   other than node3
204.152.65.20 |
Install the required patches for Sun Cluster 2.2.
Install all applicable Solstice DiskSuite(TM) and Sun Cluster patches, including the Sun Cluster internationalization patches listed in the Sun Cluster 2.2 Locale Installation Notes (part number 806-4172). If you are using SPARCstorage Arrays, the latest SPARCstorage Array patch should have been installed when you installed the operating environment. Obtain the necessary patches from your service provider or from the Sun patch website http://sunsolve.sun.com. Use the instructions in the patch README files to install the patches.
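For example, after downloading a patch and unpacking it under /var/spool/patch, you would typically install it with patchadd(1M); the patch ID shown here is a placeholder:
phys-hahost1# patchadd /var/spool/patch/123456-01 |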
Set the PATH environment variable for user root to include the command directories /opt/SUNWcluster/bin and /opt/SUNWpnm/bin. Set the MANPATH environment variable for user root to include /opt/SUNWcluster/man.
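A minimal sketch of the corresponding entries in root's /.profile, assuming the Bourne shell:
PATH=$PATH:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin
MANPATH=$MANPATH:/opt/SUNWcluster/man
export PATH MANPATH |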
Reboot the machine.
phys-hahost1# reboot |
During the reboot process, you might see error messages pertaining to loss of private network. At this time, it is safe to ignore these error messages.
Switch ownership of disks and data services from the remote host to the upgraded local host.
Stop Solstice HA 1.3 services on the remote host.
The remote host in this example is phys-hahost2.
phys-hahost2# hastop |
After Solstice HA 1.3 is stopped on the remote host, start Sun Cluster 2.2 on the upgraded local host.
After the hastop(1M) operation has completed, use the scadmin(1M) command to start Sun Cluster 2.2. This causes the upgraded local host to take over all data services. In this example, phys-hahost1 is the local physical host name and sc-cluster is the cluster name.
phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster |
Recreate instance configuration data for the highly available databases.
During the upgrade, the instance configuration data is not upgraded for the highly available databases. You must use the appropriate hadbms insert command to manually recreate each database instance, where dbms is the name of the database; for example, haoracle insert, hainformix insert, or hasybase insert.
Find the pre-upgrade instance configuration information in the /etc/opt/SUNWhadf.obsolete/hadf/hadbms_databases file. For information on the parameters to each hadbms insert command, see the man page for that command and the appropriate chapter in the Sun Cluster 2.2 Software Installation Guide. For example, for information on haoracle(1M), see the haoracle(1M) man page and the chapter, "Setting Up and Administering Sun Cluster HA for Oracle."
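For example, for an Oracle instance you might review the saved configuration and then recreate the instance with haoracle(1M); the insert parameters (elided here) are described on the haoracle(1M) man page:
phys-hahost1# more /etc/opt/SUNWhadf.obsolete/hadf/haoracle_databases
phys-hahost1# haoracle insert ... |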
Turn on the database instances.
Use the appropriate hadbms command to turn on each database instance. For example, for Oracle:
phys-hahost1# haoracle start instance |
(Sun Cluster HA for SAP only) Unregister and re-register the Sun Cluster HA for SAP data service.
After the upgrade, the method names for the Sun Cluster HA for SAP data service are incorrect in the CCD. To correct them, first turn off and unregister the Sun Cluster HA for SAP data service, then register it again to log the correct method names in the CCD file. In addition, recreate the SAP instance using the configuration parameters you noted in Step 1.
Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.
phys-hahost1# hareg -n sap |
Unregister the Sun Cluster HA for SAP data service:
phys-hahost1# hareg -u sap |
Register the Sun Cluster HA for SAP data service:
In this example, CI_logicalhost is the logical host name.
phys-hahost1# hareg -s -r sap -h CI_logicalhost |
Run hadsconfig(1M) to restore the Sun Cluster HA for SAP configuration parameters.
Refer to Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP," in the Sun Cluster 2.2 Software Installation Guide for descriptions of the new configuration parameters, and refer to the configuration information you saved in Step 1.
phys-hahost1# hadsconfig |
It is safe to ignore any errors generated by hadsconfig(1M) at this time.
After setting the configuration parameters, use the hareg(1M) command to activate the data service.
phys-hahost1# hareg -y sap |
Manually copy the configuration file, /etc/opt/SUNWscsap/hadsconf, to all other cluster nodes.
First create the /etc/opt/SUNWscsap directory on each node if it does not exist. Then copy the configuration file to all nodes.
phys-hahost1# ftp phys-hahost2
ftp> put /etc/opt/SUNWscsap/hadsconf |
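If the /etc/opt/SUNWscsap directory does not yet exist on a remote node, create it there before copying the file, for example:
phys-hahost2# mkdir -p /etc/opt/SUNWscsap |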
Verify operations on the local host.
Return the remote host to the cluster.
phys-hahost2# scadmin startnode |
After cluster reconfiguration on the remote host is complete, switch over the data services to the remote host from the local host.
phys-hahost1# haswitch phys-hahost2 hahost2 |
Verify that the Sun Cluster 2.2 configuration on the remote host is in a stable state, and that clients are receiving services.
phys-hahost2# hastat |
This completes the procedure to upgrade to Sun Cluster 2.2 from Solstice HA 1.3.
This procedure describes the steps required to upgrade the server software on a Sun Cluster 2.0 or Sun Cluster 2.1 system to Sun Cluster 2.2, with a minimum of downtime. You should become familiar with the entire procedure before starting the upgrade.
During the scinstall(1M) upgrade procedure, all non-local private link IP addresses will be added, with root access only, to the /.rhosts file on every cluster node.
This example assumes an N+1 configuration using SSVM.
(Sun Cluster HA for SAP only) Run the hadsconfig(1M) command to obtain the current configuration parameters.
The SAP instance configuration data is lost during the upgrade. Therefore, run the hadsconfig(1M) command and make note of the current SAP parameters so you can restore them manually later. See Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP," in the Sun Cluster 2.2 Software Installation Guide for a description of the new Sun Cluster HA for SAP configuration parameters.
phys-hahost1# hadsconfig |
Stop the first node.
phys-hahost1# scadmin stopnode |
If you are upgrading the operating environment and/or SSVM or CVM, run the command upgrade_start from the SSVM or CVM media.
In this example, CDROM_path is the path to the tools on the SSVM CD.
phys-hahost1# CDROM_path/Tools/scripts/upgrade_start |
To upgrade the operating environment, follow the detailed instructions in the appropriate Solaris installation manual and see also Chapter 2 in the Sun Cluster 2.2 Software Installation Guide.
To upgrade CVM, refer to the Sun Cluster 2.2 Cluster Volume Manager Guide.
If you are upgrading the operating environment but not the volume manager, perform the following steps.
Remove the volume manager package.
Normally, the package name is SUNWvxvm for both SSVM and CVM. For example:
phys-hahost1# pkgrm SUNWvxvm |
Upgrade the operating system.
Refer to the Solaris installation documentation for instructions.
If you are using NIS+, modify the /etc/nsswitch.conf file.
Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:
hosts:     files nisplus
services:  files nisplus
group:     files nisplus |
Restore the volume manager removed in Step 4a.
Obtain the volume manager packages from the Sun Cluster 2.2 CD-ROM.
phys-hahost1# pkgadd -d CDROM_path/SUNWvxvm |
If you upgraded SSVM or CVM, run the command upgrade_finish from the SSVM or CVM media.
In this example, CDROM_path is the path to the tools on the SSVM CD.
phys-hahost1# CDROM_path/Tools/scripts/upgrade_finish |
Reboot the system.
You must reboot at this time.
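For example:
phys-hahost1# reboot |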
(Sun Cluster HA for SAP only) Perform the following steps.
Save to a safe location any customized hasap_start_all_instances or hasap_stop_all_instances scripts in Sun Cluster 2.1, before beginning the upgrade to Sun Cluster 2.2.
Do this to prevent loss of your customizations when Sun Cluster 2.2 removes the old scripts during the upgrade. Copy the scripts to a safe location; you will restore them in Step 9b.
phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_start_all_instances /safe_place
phys-hahost1# cp /opt/SUNWcluster/ha/sap/hasap_stop_all_instances /safe_place |
Remove the SUNWscsap package before using scinstall(1M) to update the cluster software.
The SUNWscsap package is not updated automatically by scinstall(1M). You must first remove this package and then add an updated version in Step 9a.
phys-hahost1# pkgrm SUNWscsap |
Update the cluster software by using the scinstall(1M) command from the Sun Cluster 2.2 CD-ROM.
Invoke the scinstall(1M) command and select the Upgrade option from the menu presented.
phys-hahost1# cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
phys-hahost1# ./scinstall

Removal of <SUNWscins> was successful.
Installing: SUNWscins

Installation of <SUNWscins> was successful.

    Assuming a default cluster name of sc-cluster

Checking on installed package state............

============ Main Menu =================

1) Install/Upgrade - Install or Upgrade Server Packages or Install
                     Client Packages.
2) Remove          - Remove Server or Client Packages.
3) Change          - Modify cluster or data service configuration
4) Verify          - Verify installed package sets.
5) List            - List installed package sets.
6) Quit            - Quit this program.
7) Help            - The help screen for this menu.

Please choose one of the menu items: [7]:  1
...
==== Install/Upgrade Software Selection Menu =======================
Upgrade to the latest Sun Cluster Server packages or select package
sets for installation. The list of package sets depends on the Sun
Cluster packages that are currently installed.

Choose one:
1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
2) Server             Install the Sun Cluster packages needed on a server
3) Client             Install the admin tools needed on an admin workstation
4) Server and Client  Install both Client and Server packages
5) Close              Exit this Menu
6) Quit               Quit the Program

Enter the number of the package set [6]:  1

What is the path to the CD-ROM image? [/cdrom/cdrom0]:  .

** Upgrading from Sun Cluster 2.1 **
        Removing "SUNWccm" ... done
... |
(Sun Cluster HA for SAP only) Perform the following steps.
Add the SUNWscsap package from the Sun Cluster 2.2 CD-ROM.
Use pkgadd(1M) to add an updated SUNWscsap package to replace the package removed in Step 7b. Answer y to all screen prompts that appear during the pkgadd process.
phys-hahost1# pkgadd -d \
/cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Product/ SUNWscsap |
Restore the customized scripts saved in Step 7a.
Copy the scripts to the /opt/SUNWcluster/ha/sap directory. The safe_place directory is the directory into which you saved the scripts in Step 7a. After restoring the scripts, use the ls -l command to verify that the scripts are executable.
phys-hahost1# cd /opt/SUNWcluster/ha/sap
phys-hahost1# cp /safe_place/hasap_start_all_instances .
phys-hahost1# cp /safe_place/hasap_stop_all_instances .
phys-hahost1# ls -l /opt/SUNWcluster/ha/sap/hasap_st*
-r-xr--r--   1 root     sys        18400 Feb  9 19:04 hasap_start_all_instances
-r-xr--r--   1 root     sys        25963 Feb  9 19:04 hasap_stop_all_instances |
If the cluster has more than two nodes and you are upgrading from Sun Cluster 2.0, supply the TC/SSP information.
The first time the scinstall(1M) command is invoked, the TC/SSP information is automatically saved to the /var/tmp/tc_ssp_info file. Copy this file to the /var/tmp directory on all other cluster nodes so the information can be reused when you upgrade those nodes. You can either supply the TC/SSP information now, or do so later by using the scconf(1M) command. See the scconf(1M) man page for details.
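For example, assuming the root /.rhosts entries added during the upgrade allow rcp(1) between the nodes, one way to copy the file is:
phys-hahost1# rcp /var/tmp/tc_ssp_info phys-hahost2:/var/tmp/ |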
SC2.2 uses the terminal concentrator (or system service processor in
the case of an E10000) for failure fencing. During the SC2.2
installation the IP address for the terminal concentrator along with
the physical port numbers that each server is connected to is
requested. This information can be changed using scconf.

After the upgrade has completed you need to run scconf to specify
terminal concentrator information for each server. This will need to
be done on each server in the cluster.

The specific commands that need to be run are:

scconf clustername -t <nts name> -i <nts name|IP address>
scconf clustername -H <node 0> -p <serial port for node 0> \
  -d <other|E10000> -t <nts name>

Repeat the second command for each node in the cluster.
Repeat the first command if you have more than one terminal
concentrator in your configuration.

Or you can choose to set this up now. The information you will need is:

  +terminal concentrator/system service processor names
  +the architecture type (E10000 for SSP or other for tc)
  +the ip address for the terminal concentrator/system service
   processor (these will be looked up based on the name, you will
   need to confirm)
  +for terminal concentrators, you will need the physical ports the
   systems are connected to (physical ports (2,3,4...) not the telnet
   ports (5002,...))

Do you want to set the TC/SSP info now (yes/no) [no]?  y |
When the scinstall(1M) command prompts for the TC/SSP information, you can either force the program to query the tc_ssp_info file, or invoke an interactive session that will prompt you for the required information.
The example cluster assumes the following configuration information:
Cluster name: sc-cluster
Number of nodes in the cluster: 2
Node names: phys-hahost1 and phys-hahost2
Logical host names: hahost1 and hahost2
Terminal concentrator name: cluster-tc
Terminal concentrator IP address: 123.4.5.678
Physical TC port connected to phys-hahost1: 2
Physical TC port connected to phys-hahost2: 3
See the section on terminal concentrators and SSPs in Chapter 1 of the Sun Cluster 2.2 Software Installation Guide for more information on server architectures and TC/SSPs. In this example, the configuration is not an E10000 cluster, so the architecture specified is other, and a terminal concentrator is used:
What type of architecture does phys-hahost1 have? (E10000|other) [other] [?] other
What is the name of the Terminal Concentrator connected to the serial port of phys-hahost1 [NO_NAME] [?] cluster-tc
Is 123.4.5.678 the correct IP address for this Terminal Concentrator (yes|no) [yes] [?] yes
Which physical port on the Terminal Concentrator is phys-hahost1 connected to [?] 2
What type of architecture does phys-hahost2 have? (E10000|other) [other] [?] other
Which Terminal Concentrator is phys-hahost2 connected to:

0) cluster-tc       123.4.5.678
1) Create A New Terminal Concentrator Entry

Select a device [?] 0
Which physical port on the Terminal Concentrator is phys-hahost2 connected to [?] 3

The terminal concentrator/system service processor (TC/SSP) information
has been stored in file /var/tmp/tc_ssp_data. Please put a copy of this
file into /var/tmp on the rest of the nodes in the cluster. This way you
don't have to re-enter the TC/SSP values, but you will, however, still
be prompted for the TC/SSP passwords. |
If you will be using Sun Cluster SNMP, change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).
The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in Appendix D of the Sun Cluster 2.2 System Administration Guide.
Reboot the system.
You must reboot at this time.
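For example:
phys-hahost1# reboot |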
If you are using a shared CCD, put all logical hosts into maintenance mode.
phys-hahost2# haswitch -m hahost1 hahost2 |
Clusters with more than two nodes do not use a shared CCD. Therefore, for these clusters, you do not need to put the data services into maintenance mode before beginning the upgrade.
If your configuration includes Oracle Parallel Server (OPS), make sure OPS is halted.
Refer to your OPS documentation for instructions on halting OPS.
Stop the cluster software on the remaining nodes running the old version of Sun Cluster.
phys-hahost2# scadmin stopnode |
Start the upgraded node.
phys-hahost1# scadmin startcluster phys-hahost1 sc-cluster |
As the upgraded node joins the cluster, the system might report several warning messages stating that communication with the terminal concentrator is invalid. These messages are expected at this point and can be ignored safely. You can also ignore any errors generated by Sun Cluster HA for SAP at this time.
(Sun Cluster HA for SAP only) Reconfigure the SAP instance by performing the following steps.
Use the hareg(1M) command to turn off the Sun Cluster HA for SAP data service.
phys-hahost1# hareg -n sap |
It is safe to ignore any errors generated while turning off Sun Cluster HA for SAP by running hareg(1M).
Run the hadsconfig(1M) command to restore the Sun Cluster HA for SAP configuration parameters.
Refer to Section 10.6.1, "Configuration Parameters for Sun Cluster HA for SAP" in the Sun Cluster 2.2 Software Installation Guide, for descriptions of the new configuration parameters and look at the configuration information you saved in Step 1.
phys-hahost1# hadsconfig |
It is safe to ignore any errors generated by hadsconfig(1M) at this time.
After you set the configuration parameters, use hareg(1M) to activate the data service:
phys-hahost1# hareg -y sap |
Manually copy the configuration file to other nodes in the cluster by using ftp.
Overwrite the Sun Cluster 2.1 configuration files with the new Sun Cluster 2.2 files.
phys-hahost1# ftp phys-hahost2
ftp> put /etc/opt/SUNWscsap/hadsconf |
If you are using a shared CCD and if you upgraded from Sun Cluster 2.0, update the shared CCD now.
Run the ccdadm(1M) command only once, on the host that joined the cluster first.
phys-hahost1# cd /etc/opt/SUNWcluster/conf
phys-hahost1# ccdadm sc-cluster -r ccd.database_post_sc2.0_upgrade |
If you stopped the data services previously, restart them on the upgraded node.
phys-hahost1# haswitch phys-hahost1 hahost1 hahost2 |
Upgrade the remaining nodes.
Repeat Step 3 through Step 12 on the remaining Sun Cluster 2.0 or Sun Cluster 2.1 nodes.
After each node is upgraded, add it to the cluster.
phys-hahost2# scadmin startnode sc-cluster |
Set up and start Sun Cluster Manager.
Sun Cluster Manager is used to monitor the cluster. For instructions, see the section on Sun Cluster Manager in Chapter 2 of the Sun Cluster 2.2 System Administration Guide.