The following matrices summarize the supported upgrade methods for each Solaris OS version and platform, provided that all other requirements for any supported method are met.
Table 1–1 Upgrade From Sun Cluster 3.1 8/05 Through 3.2 1/09 Software, Including Solaris OS Upgrade
Table 1–2 Upgrade From Sun Cluster 3.2 Through 3.2 1/09 Software, With Solaris OS Update Only
Choose from the following methods to upgrade your cluster to Sun Cluster 3.2 11/09 software:
For overview information about planning your Sun Cluster 3.2 11/09 configuration, see Chapter 1, Planning the Sun Cluster Configuration, in Sun Cluster Software Installation Guide for Solaris OS.
In a standard upgrade, you shut down the cluster before you upgrade the cluster nodes. You return the cluster to production after all nodes are fully upgraded.
If you also upgrade the Solaris OS, do not use this upgrade method if the cluster uses ZFS for the root file system. Instead, you must use the live-upgrade method to upgrade the cluster.
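The standard upgrade flow can be sketched as the following command sequence. This is a minimal outline, not the full procedure: the shutdown command shown is the Sun Cluster 3.1 form (`scshutdown`), the `ok` prompt is the SPARC OpenBoot prompt, and you must follow the complete standard upgrade procedure in the upgrade guide.

```shell
# From one node, shut down the entire cluster
# (a standard upgrade takes all nodes out of production at once).
scshutdown -g0 -y

# On each node, boot into noncluster mode (SPARC OpenBoot example).
ok boot -x

# On each node, upgrade the Solaris OS if applicable, then upgrade
# the Sun Cluster framework software from the installation media.
scinstall -u update
```

After all nodes are upgraded, you boot the nodes back into cluster mode and verify the upgrade before returning the cluster to production.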
In a dual-partition upgrade, you divide the cluster into two groups of nodes. You bring down one group of nodes and upgrade those nodes. The other group of nodes continues to provide services. After you complete upgrade of the first group of nodes, you switch services to those upgraded nodes. You then upgrade the remaining nodes and boot them back into the rest of the cluster.
The cluster outage time is limited to the amount of time that is needed for the cluster to switch over services to the upgraded partition, with one exception. If you upgrade from the Sun Cluster 3.1 8/05 release and you intend to configure zone clusters (Solaris 10 only), you must temporarily take the upgraded first partition out of cluster mode to set new private-network settings that were introduced in the Sun Cluster 3.2 release.
Observe the following additional restrictions and requirements for the dual-partition upgrade method:
ZFS root file system requirement – If you also upgrade the Solaris OS, do not use this upgrade method if the cluster uses ZFS for the root file system. Instead, you must use the live-upgrade method to upgrade the cluster.
Sun Cluster HA for Sun Java System Application Server EE (HADB) - If you are running the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service with Sun Java System Application Server EE (HADB) software version 4.4 or later, you must shut down the database before you begin the dual-partition upgrade. The HADB database does not tolerate the loss of membership that occurs when a partition of nodes is shut down for upgrade. This requirement does not apply to versions before version 4.4.
Data format changes - Do not use the dual-partition upgrade method if you intend to upgrade an application that requires that you change its data format during the application upgrade. The dual-partition upgrade method is not compatible with the extended downtime that is needed to perform data transformation.
Location of application software - Applications must be installed on nonshared storage. Shared storage is not accessible to a partition that is in noncluster mode. Therefore, it is not possible to upgrade application software that is located on shared storage.
Division of storage - Each shared storage device must be connected to a node in each group.
Single-node clusters - Dual-partition upgrade is not available to upgrade a single-node cluster. Use the standard upgrade or live upgrade method instead.
Configuration changes - Do not make cluster configuration changes that are not documented in the upgrade procedures. Such changes might not be propagated to the final cluster configuration. Also, validation attempts of such changes would fail because not all nodes are reachable during a dual-partition upgrade.
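A dual-partition upgrade is driven by `scinstall` from the installation media. The following sketch shows the command forms as described in the scinstall(1M) man page; node names are placeholders, and you should verify the exact syntax against your release before use.

```shell
# Display the possible ways to partition the cluster nodes,
# given the shared-storage connectivity requirements.
scinstall -u plan

# Assign the named nodes to the first partition and begin the
# dual-partition upgrade; the remaining nodes continue to
# provide services while the first partition is upgraded.
scinstall -u begin -h phys-schost-1,phys-schost-3
```

When the first partition is upgraded, services switch over to it and the process repeats on the remaining partition.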
A live upgrade maintains your previous cluster configuration until you have upgraded all nodes and you commit to the upgrade. If the upgraded configuration causes a problem, you can revert to your previous cluster configuration until you can rectify the problem.
The cluster outage is limited to the amount of time that is needed to reboot the cluster nodes into the upgraded boot environment.
Observe the following additional restrictions and requirements for the live upgrade method:
Minimum Solaris OS patch level - The following are the minimum Solaris OS patch levels that are required to use Solaris Live Upgrade.
Solaris 10 OS on SPARC — 137321–01
Solaris 10 OS on x86 — 137322–01
Solaris 9 OS on SPARC — 137477–01
Solaris 9 OS on x86 — 137478–01
Minimum version of Live Upgrade software - To use the live upgrade method, you must use the Solaris Live Upgrade packages from at least the Solaris 9 9/05 or Solaris 10 release. This requirement applies to clusters running on all Solaris OS versions, including Solaris 8 software. The live upgrade procedures provide instructions for upgrading these packages.
ZFS root file system – If you also upgrade the Solaris OS, use only the live-upgrade method to upgrade a cluster that uses ZFS for the root file system.
Dual-partition upgrade - The live upgrade method cannot be used in conjunction with a dual-partition upgrade.
Non-global zones - Unless the cluster is already running on at least Solaris 10 11/06, the live upgrade method does not support the upgrade of clusters that have non-global zones that are configured on any of the cluster nodes. Instead, use the standard upgrade or dual-partition upgrade method.
Disk space - To use the live upgrade method, you must have enough spare disk space available to make a copy of each node's boot environment. You reclaim this disk space after the upgrade is complete and you have verified and committed the upgrade. For information about space requirements for an inactive boot environment, refer to Solaris Live Upgrade Disk Space Requirements in Solaris 9 9/04 Installation Guide or Allocating Disk and Swap Space in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.
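On each node, the live upgrade method follows the standard Solaris Live Upgrade command sequence. The sketch below uses placeholder boot-environment names, disk slice, and media path; note that after `luactivate` you must use `init 6` (not `reboot` or `halt`) so that the boot-environment switch is performed correctly.

```shell
# Create an inactive boot environment (BE) as a copy of the
# active one. BE names and the target slice are placeholders.
lucreate -c sc-old -n sc-new -m /:/dev/dsk/c0t1d0s0:ufs

# Upgrade the Solaris OS in the inactive BE from the install media.
luupgrade -u -n sc-new -s /net/installserver/export/solaris10

# Activate the upgraded BE; the change takes effect at the
# next init 6. Do not use reboot or halt after luactivate.
luactivate sc-new
init 6
```

If the upgraded boot environment causes a problem, you can reactivate and boot the original boot environment until you can rectify the problem.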
In a rolling upgrade, you upgrade software to an update release on one node at a time. Services continue on the other nodes, except for the brief time that is needed to switch services from the node being upgraded to a node that remains in service.
Observe the following additional restrictions and requirements for the rolling upgrade method:
Minimum Sun Cluster version - The cluster must be running a Sun Cluster 3.2 release.
ZFS root file system – If you also upgrade the Solaris OS, do not use this upgrade method if the cluster uses ZFS for the root file system. Instead, you must use the live-upgrade method to upgrade the cluster.
Required patch - The following are the minimum required Sun Cluster software patch levels to use the rolling upgrade method:
Solaris 9 OS - 125510–02
Solaris 10 OS on SPARC - 125511–02
Solaris 10 OS on x86 - 125512–02
Solaris upgrade paths - You can upgrade the Solaris OS only to another update version of the same release. For example, you can perform a rolling upgrade from Solaris 10 5/08 to Solaris 10 10/09, but you cannot perform a rolling upgrade from a version of Solaris 9 to a version of Solaris 10.
Hardware configuration changes - Do not make any changes to the cluster configuration during a rolling upgrade. For example, do not add to or change the cluster interconnect or quorum devices. If you need to make such a change, do so before you start the rolling upgrade procedure or wait until after all nodes are upgraded and the cluster is committed to the new software version.
Duration of the upgrade - Limit the amount of time that you take to complete a rolling upgrade of all cluster nodes. After a node is upgraded, begin the upgrade of the next cluster node as soon as possible. You can experience performance penalties and other penalties when you run a mixed-version cluster for an extended period of time.
Software configuration changes - Avoid installing new data services or issuing any administrative configuration commands during the upgrade.
New-feature availability - Until all nodes of the cluster are successfully upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.
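The per-node flow of a rolling upgrade can be sketched as follows. The node name is a placeholder, `clnode evacuate` and `scversions` are from the Sun Cluster 3.2 command set, and the `ok` prompt is the SPARC OpenBoot prompt; consult the full rolling upgrade procedure for the complete steps.

```shell
# Move resource groups and device groups off the node to be upgraded.
clnode evacuate phys-schost-1

# Shut the node down and boot it into noncluster mode.
shutdown -g0 -y -i0
ok boot -x

# Apply the Sun Cluster update release and required patches,
# then reboot the node back into the cluster.
init 6

# After every node is upgraded, commit the cluster to the
# new software version. New features are unavailable until
# this commit completes.
scversions -c
```

Repeat the evacuate-upgrade-reboot cycle on each remaining node before committing with `scversions -c`.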