Perform this procedure to finish the Sun Cluster upgrade. On the Solaris 10 OS, perform all steps from the global zone only. In this procedure, you first reregister all resource types that received a new version from the upgrade. You then modify eligible resources to use the new version of their resource type, re-enable the resources, and bring the resource groups back online.
Ensure that all steps in How to Verify Upgrade of Sun Cluster 3.2 Software are completed.
The following steps ensure that the security files for the common agent container are identical on all cluster nodes and that the copied files retain the correct file permissions.
On each node, stop the Sun Java Web Console agent.
phys-schost# /usr/sbin/smcwebserver stop
On each node, stop the security file agent.
phys-schost# /usr/sbin/cacaoadm stop
On one node, change to the /etc/cacao/instances/default/ directory.
phys-schost-1# cd /etc/cacao/instances/default/
Create a tar file of the /etc/cacao/instances/default/security/ directory.
phys-schost-1# tar cf /tmp/SECURITY.tar security
Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.
On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.
Any security files that already exist in the /etc/cacao/instances/default/ directory are overwritten.
phys-schost-2# cd /etc/cacao/instances/default/
phys-schost-2# tar xf /tmp/SECURITY.tar
Delete the /tmp/SECURITY.tar file from each node in the cluster.
You must delete each copy of the tar file to avoid security risks.
phys-schost-1# rm /tmp/SECURITY.tar
phys-schost-2# rm /tmp/SECURITY.tar
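The copy-and-extract sequence works because tar records permission bits in the archive, so the security files arrive on the other nodes with their restrictive modes intact. The following is a minimal local sketch of the pattern, using hypothetical /tmp directories in place of the real cacao directories:

```shell
# Sketch of the tar-based sync pattern. The /tmp paths below are
# stand-ins for /etc/cacao/instances/default/ on each node.
set -e
SRC=/tmp/sync-demo-src
DST=/tmp/sync-demo-dst
rm -rf "$SRC" "$DST"
mkdir -p "$SRC/security" "$DST"

# Simulate a security file with restrictive permissions.
echo "keydata" > "$SRC/security/agent.key"
chmod 600 "$SRC/security/agent.key"

# "Node 1": archive the directory relative to its parent.
(cd "$SRC" && tar cf /tmp/SECURITY-demo.tar security)

# "Node 2": extract in place; existing files are overwritten.
(cd "$DST" && tar xf /tmp/SECURITY-demo.tar)

# The restrictive mode survives the round trip.
ls -l "$DST/security/agent.key"

# Remove the archive afterward, as the procedure requires.
rm /tmp/SECURITY-demo.tar
```

On a real cluster the copy between nodes would be done with a tool such as scp; the sketch keeps everything on one machine so the permission behavior is easy to verify.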
On each node, start the security file agent.
phys-schost# /usr/sbin/cacaoadm start
On each node, start the Sun Java Web Console agent.
phys-schost# /usr/sbin/smcwebserver start
If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.
If you upgraded Sun Cluster HA for SAP liveCache from the Sun Cluster 3.0 or 3.1 version to the Sun Cluster 3.2 version, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.
Become superuser on a node that will host the liveCache resource.
Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC_NAME/db/sap/ directory.
Overwrite the lccluster file that already exists from the previous configuration of the data service.
Configure this /sapdb/LC_NAME/db/sap/lccluster file as documented in How to Register and Configure Sun Cluster HA for SAP liveCache in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
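Because the copy deliberately overwrites the existing lccluster file, a cautious variant is to keep the previous copy before replacing it. A sketch of that pattern, using hypothetical /tmp paths in place of /opt/SUNWsclc/livecache/bin/lccluster and the /sapdb/LC_NAME/db/sap/ directory:

```shell
# Sketch: replace a configuration file but keep the old version.
# The /tmp paths are stand-ins for the real lccluster locations.
set -e
NEW=/tmp/lcc-demo-new/lccluster
DEST_DIR=/tmp/lcc-demo-sap
rm -rf /tmp/lcc-demo-new "$DEST_DIR"
mkdir -p /tmp/lcc-demo-new "$DEST_DIR"

echo "new lccluster script" > "$NEW"
echo "old lccluster script" > "$DEST_DIR/lccluster"

# Save the previous copy, then overwrite, preserving modes (-p).
cp -p "$DEST_DIR/lccluster" "$DEST_DIR/lccluster.pre-upgrade"
cp -p "$NEW" "$DEST_DIR/lccluster"

cat "$DEST_DIR/lccluster"
```

The .pre-upgrade suffix is an arbitrary choice for the sketch; any name outside the data service's expected paths works.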
Determine which node has ownership of a disk set to which you will add the mediator hosts.
phys-schost# metaset -s setname
-s setname   Specifies the disk set name.
On the node that masters or will master the disk set, become superuser.
If no node has ownership, take ownership of the disk set.
phys-schost# cldevicegroup switch -n node devicegroup
-n node   Specifies the name of the node to become primary of the disk set.
devicegroup   Specifies the name of the disk set.
Re-create the mediators.
phys-schost# metaset -s setname -a -m mediator-host-list
-a   Adds to the disk set.
-m mediator-host-list   Specifies the names of the nodes to add as mediator hosts for the disk set.
Repeat these steps for each disk set in the cluster that uses mediators.
Take ownership of a disk group to upgrade and bring the disk group online.
phys-schost# cldevicegroup switch -n node devicegroup
Upgrade the disk group to the highest version supported by the VxVM release that you installed.
phys-schost# vxdg upgrade dgname
See your VxVM administration documentation for more information about upgrading disk groups.
Repeat for each remaining VxVM disk group in the cluster.
Migrate resources to new resource type versions.
You must migrate all resources to the Sun Cluster 3.2 resource-type version.
For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource, a web application server component resource, or both, you must delete the resource and re-create it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.
See Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the clsetup utility. The process involves the following tasks:
Registering the new resource type.
Migrating the eligible resource to the new version of its resource type.
Modifying the extension properties of the resource type as specified in Sun Cluster 3.2 Release Notes for Solaris OS.
The Sun Cluster 3.2 release introduces new default values for some extension properties, such as the Retry_interval property. These changes affect the behavior of any existing resource that uses the default values of such properties. If you require the previous default value for a resource, modify the migrated resource to set the property to the previous default value.
If your cluster runs the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service and you shut down the HADB database before you began a dual-partition upgrade, re-enable the resource and start the database.
phys-schost# clresource enable hadb-resource
phys-schost# hadbm start database-name
For more information, see the hadbm(1M) man page.
If you upgraded to the Solaris 10 OS and the Apache httpd.conf file is located on a cluster file system, ensure that the HTTPD entry in the Apache control script still points to that location.
View the HTTPD entry in the /usr/apache/bin/apachectl file.
The following example shows the httpd.conf file located on the /global cluster file system.
phys-schost# cat /usr/apache/bin/apachectl | grep HTTPD=/usr
HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
If the file does not show the correct HTTPD entry, update the file.
phys-schost# vi /usr/apache/bin/apachectl
#HTTPD=/usr/apache/bin/httpd
HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
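The manual edit can also be scripted: check for the expected HTTPD entry and rewrite it with sed only if it is wrong. A sketch, run against a hypothetical copy of the control script under /tmp rather than the real file:

```shell
# Sketch: verify and, if needed, fix the HTTPD entry in an Apache
# control script. /tmp/apachectl-demo stands in for the real
# /usr/apache/bin/apachectl file.
set -e
CTL=/tmp/apachectl-demo
CONF=/global/web/conf/httpd.conf

# Simulate a control script whose HTTPD entry was not updated.
printf '%s\n' '#!/bin/sh' 'HTTPD=/usr/apache/bin/httpd' > "$CTL"

if ! grep -q "HTTPD=\"/usr/apache/bin/httpd -f $CONF\"" "$CTL"; then
    # Point HTTPD at the httpd.conf on the cluster file system.
    sed "s|^HTTPD=.*|HTTPD=\"/usr/apache/bin/httpd -f $CONF\"|" \
        "$CTL" > "$CTL.new" && mv "$CTL.new" "$CTL"
fi

grep HTTPD "$CTL"
```

The sed expression only rewrites uncommented HTTPD= lines, so a commented-out previous entry such as #HTTPD=/usr/apache/bin/httpd is left in place.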
From any node, start the clsetup utility.
phys-schost# clsetup
The clsetup Main Menu is displayed.
Re-enable all disabled resources.
Type the number that corresponds to the option for Resource groups and press the Return key.
The Resource Group Menu is displayed.
Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.
Choose a resource to enable and follow the prompts.
Repeat the previous step for each disabled resource.
When all resources are re-enabled, type q to return to the Resource Group Menu.
Bring each resource group back online.
This step also brings online resource groups in non-global zones.
When all resource groups are back online, exit the clsetup utility.
Type q to back out of each submenu, or press Ctrl-C.
If you enabled automatic node reboot on failure of all monitored disk paths before the upgrade, ensure that the feature is still enabled.
Also perform this task if you want to configure automatic reboot for the first time.
Determine whether the automatic reboot feature is enabled or disabled.
phys-schost# clnode show
If the automatic reboot feature is disabled, enable it.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p   Specifies the property to set.
reboot_on_path_failure=enabled   Specifies that the node reboots if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
Verify that automatic reboot on disk-path failure is enabled.
phys-schost# clnode show
=== Cluster Nodes ===
Node Name:                                      node
  …
  reboot_on_path_failure:                       enabled
  …
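Rather than scanning the full clnode show output by eye, you can filter for the property with grep. A sketch of that check, with a here-document standing in for the real clnode show command, which is available only on a cluster node:

```shell
# Sketch: check the reboot_on_path_failure property in clnode show
# output. The here-document below simulates `clnode show` output.
set -e
clnode_output=$(cat <<'EOF'
=== Cluster Nodes ===
Node Name:                    phys-schost-1
  reboot_on_path_failure:     enabled
EOF
)

if printf '%s\n' "$clnode_output" |
        grep -q 'reboot_on_path_failure: *enabled'; then
    echo "automatic reboot on path failure: enabled"
else
    echo "automatic reboot on path failure: NOT enabled"
fi
```

On a live cluster, replace the here-document with the command itself, for example `clnode show | grep reboot_on_path_failure`.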
(Optional) Capture the disk partitioning information for future reference.
phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename
Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.
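The same capture-and-compare idea applies to any configuration snapshot: store the output once, then diff a fresh capture against it to detect changes. A generic sketch, using a stand-in function instead of prtvtoc so that it runs anywhere:

```shell
# Sketch: snapshot configuration output and detect later changes.
# fake_vtoc stands in for `prtvtoc /dev/rdsk/cNtXdYsZ`, which only
# works against a real disk device.
set -e
fake_vtoc() { printf 'slice 0: 1048576 sectors\nslice 1: 2097152 sectors\n'; }

SNAP=/tmp/vtoc-demo.snapshot
fake_vtoc > "$SNAP"          # baseline, stored for future reference

# Later: compare a fresh capture against the stored baseline.
if fake_vtoc | cmp -s - "$SNAP"; then
    echo "partitioning unchanged"
else
    echo "partitioning changed - refresh the stored snapshot"
fi
```

If the comparison reports a change, re-run the capture and replace the stored file, as the procedure advises after any disk configuration change.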
(Optional) Make a backup of your cluster configuration.
An archived backup of your cluster configuration makes it easier to recover that configuration later.
For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.
Resource-type migration failure - Normally, you migrate resources to a new resource type while the resource is offline. However, some resources need to be online for a resource-type migration to succeed. If resource-type migration fails for this reason, error messages similar to the following are displayed:
phys-schost - Resource depends on a SUNW.HAStoragePlus type resource that is not online anywhere.
(C189917) VALIDATE on resource nfsrs, resource group rg, exited with non-zero exit status.
(C720144) Validation of resource nfsrs in resource group rg on node phys-schost failed.
If resource-type migration fails because the resource is offline, use the clsetup utility to re-enable the resource and then bring its related resource group online. Then repeat migration procedures for the resource.
Java binaries location change - If the location of the Java binaries changed during the upgrade of shared components, you might see error messages similar to the following when you attempt to run the cacaoadm start or smcwebserver start commands:
# /opt/SUNWcacao/bin/cacaoadm start
No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Jan 3 17:10:26 ppups3 cacao: No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Cannot locate all the dependencies
# smcwebserver start
/usr/sbin/smcwebserver: /usr/jdk/jdk1.5.0_04/bin/java: not found
These errors are generated because the start commands cannot locate the current location of the Java binaries. The JAVA_HOME property still points to the directory where the previous version of Java was located, but that previous version was removed during upgrade.
To correct this problem, change the setting of JAVA_HOME in the affected configuration files to use the current Java directory.
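As a sketch of that fix, the following rewrites a JAVA_HOME line in a hypothetical copy of such a configuration file. The file path and the /usr/jdk/latest target are assumptions for the sketch; substitute the real configuration files and the directory of your current JDK:

```shell
# Sketch: repoint JAVA_HOME in a configuration file.
# /tmp/javahome-demo.conf and /usr/jdk/latest are hypothetical;
# use your real configuration files and current Java directory.
set -e
CFG=/tmp/javahome-demo.conf
NEW_JAVA=/usr/jdk/latest

echo 'JAVA_HOME=/usr/jdk/jdk1.5.0_04' > "$CFG"   # stale location

sed "s|^JAVA_HOME=.*|JAVA_HOME=$NEW_JAVA|" "$CFG" > "$CFG.new" &&
    mv "$CFG.new" "$CFG"

cat "$CFG"
```

After updating each file, rerun the failing cacaoadm start or smcwebserver start command to confirm that the Java runtime is found.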
If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center.
To install or complete upgrade of Sun Cluster Geographic Edition 3.2 software, see Sun Cluster Geographic Edition Installation Guide.
Otherwise, the cluster upgrade is complete.