The following matrices summarize the supported upgrade methods for each Oracle Solaris OS version and platform, provided that all other requirements for any supported method are met. Check the documentation for other products in the cluster, such as volume management software and other applications, for any additional upgrade requirements or restrictions.
Note - If your cluster uses a ZFS root file system, you can upgrade the Oracle Solaris OS only by using the live upgrade method. See Oracle Solaris upgrade documentation for more information.
This limitation does not apply if you are not upgrading the Oracle Solaris OS.
Table 1-1 Upgrade From Oracle Solaris Cluster 3.1 8/05 Through 3.2 11/09 Software, Including Oracle Solaris OS Upgrade
Table 1-2 Upgrade of the Oracle Solaris OS to an Update Release Only, on Oracle Solaris Cluster 3.3 Software
Choose from the following methods to upgrade your cluster to Oracle Solaris Cluster 3.3 software:
For overview information about planning your Oracle Solaris Cluster 3.3 configuration, see Chapter 1, Planning the Oracle Solaris Cluster Configuration, in Oracle Solaris Cluster Software Installation Guide.
In a standard upgrade, you shut down the cluster before you upgrade the cluster nodes. You return the cluster to production after all nodes are fully upgraded.
ZFS root file systems - If your cluster uses a ZFS root file system, you cannot use standard upgrade to upgrade the Solaris OS. You must use only the live upgrade method to upgrade the Solaris OS. But you can use standard upgrade to separately upgrade Oracle Solaris Cluster and other software.
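As a hedged illustration of the first step of a standard upgrade, the sketch below prints, rather than runs, the command that shuts down the entire cluster from one node. The cluster shutdown form applies to Oracle Solaris Cluster 3.2 and later; on Sun Cluster 3.1 releases the equivalent command is scshutdown. The run() helper is our own dry-run wrapper, not part of the product.

```shell
#!/bin/sh
# Dry-run sketch: record and print the shutdown step of a standard upgrade.
# The run() helper only records and echoes each command; on a real cluster
# node you would execute the command itself instead.
PLAN=""
run() { PLAN="$PLAN$* ; "; echo "$@"; }

# Shut down the whole cluster from any one node (Oracle Solaris Cluster
# 3.2 and later syntax; on Sun Cluster 3.1, use: scshutdown -g0 -y).
run cluster shutdown -g0 -y
```

After all nodes are fully upgraded, boot each node back into cluster mode before returning the cluster to production.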
In a dual-partition upgrade, you divide the cluster into two groups of nodes. You bring down one group of nodes and upgrade those nodes. The other group of nodes continues to provide services. After you complete upgrade of the first group of nodes, you switch services to those upgraded nodes. You then upgrade the remaining nodes and boot them back into the rest of the cluster.
The cluster outage time is limited to the amount of time that is needed for the cluster to switch over services to the upgraded partition, with one exception. If you upgrade from the Sun Cluster 3.1 8/05 release and you intend to configure zone clusters, you must temporarily take the upgraded first partition out of cluster mode to set new private-network settings that were introduced in the Sun Cluster 3.2 release.
Observe the following additional restrictions and requirements for the dual-partition upgrade method:
ZFS root file systems - If your cluster uses a ZFS root file system, you cannot use dual-partition upgrade to upgrade the Solaris OS. You must use only the live upgrade method to upgrade the Solaris OS. But you can use dual-partition upgrade to separately upgrade Oracle Solaris Cluster and other software.
HA for Sun Java System Application Server EE (HADB) - If you are running the HA for Sun Java System Application Server EE (HADB) data service with Sun Java System Application Server EE (HADB) software version 4.4 or later, you must shut down the database before you begin the dual-partition upgrade. The HADB database does not tolerate the loss of membership that occurs when a partition of nodes is shut down for upgrade. This requirement does not apply to versions earlier than 4.4.
Data format changes - Do not use the dual-partition upgrade method if you intend to upgrade an application that requires that you change its data format during the application upgrade. The dual-partition upgrade method is not compatible with the extended downtime that is needed to perform data transformation.
Location of application software - Applications must be installed on nonshared storage. Shared storage is not accessible to a partition that is in noncluster mode. Therefore, it is not possible to upgrade application software that is located on shared storage.
Division of storage - Each shared storage device must be connected to a node in each group.
Single-node clusters - Dual-partition upgrade is not available to upgrade a single-node cluster. Use the standard upgrade or live upgrade method instead.
Configuration changes - Do not make cluster configuration changes that are not documented in the upgrade procedures. Such changes might not be propagated to the final cluster configuration. Also, validation attempts of such changes would fail because not all nodes are reachable during a dual-partition upgrade.
A live upgrade maintains your previous cluster configuration until you have upgraded all nodes and you commit to the upgrade. If the upgraded configuration causes a problem, you can revert to your previous cluster configuration until you can rectify the problem.
The cluster outage is limited to the amount of time that is needed to reboot the cluster nodes into the upgraded boot environment.
Observe the following additional restrictions and requirements for the live upgrade method:
ZFS root file systems - If your cluster configuration uses a ZFS root file system, you must use only live upgrade to upgrade the Solaris OS. See Solaris documentation for more information.
Dual-partition upgrade - The live upgrade method cannot be used in conjunction with a dual-partition upgrade.
Non-global zones - Unless the cluster is already running on at least Solaris 10 11/06, the live upgrade method does not support the upgrade of clusters that have non-global zones that are configured on any of the cluster nodes. Instead, use the standard upgrade or dual-partition upgrade method.
Disk space - To use the live upgrade method, you must have enough spare disk space available to make a copy of each node's boot environment. You reclaim this disk space after the upgrade is complete and you have verified and committed the upgrade. For information about space requirements for an inactive boot environment, refer to Allocating Disk and Swap Space in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.
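The live upgrade flow described above can be sketched as the following dry-run sequence of Solaris Live Upgrade commands on one node. The boot-environment name (sc33be) and the install-image path are placeholders of our own; see the Live Upgrade documentation for the options that your storage configuration requires. The run() helper only echoes each command.

```shell
#!/bin/sh
# Dry-run sketch of the Solaris Live Upgrade sequence on one cluster node.
# sc33be and the install-image path are placeholders, not values from the
# product documentation.
PLAN=""
run() { PLAN="$PLAN$* ; "; echo "$@"; }

# 1. Create an inactive copy of the current boot environment.
run lucreate -n sc33be

# 2. Upgrade the Solaris OS inside the inactive boot environment
#    (placeholder path to the Solaris install image).
run luupgrade -u -n sc33be -s /net/installserver/export/solaris10

# 3. Activate the upgraded boot environment; the switch takes effect at
#    the next reboot, and luactivate prints fallback instructions.
run luactivate sc33be

# 4. Reboot with init or shutdown, not reboot or halt, which bypass the
#    scripts that complete the boot-environment switch.
run init 6
```

Until you commit the upgrade, the previous boot environment remains available as the fallback described above.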
In a rolling upgrade, you upgrade software to an update release on one node at a time. Services continue on the other nodes, except for the brief time that is needed to switch services from the node to be upgraded to a node that remains in service.
Observe the following additional restrictions and requirements for the rolling upgrade method:
Minimum Oracle Solaris Cluster version - The cluster must be running an Oracle Solaris Cluster 3.3 release.
Solaris upgrade paths - You can upgrade the Solaris OS only to an update version of the same release. For example, you can perform a rolling upgrade from Solaris 10 5/08 to Solaris 10 10/09. But you cannot perform a rolling upgrade from a version of Solaris 9 to a version of Oracle Solaris 10.
ZFS root file systems - If your cluster configuration uses a ZFS root file system, you cannot use rolling upgrade to upgrade the Solaris OS. You must use only live upgrade to upgrade the Solaris OS. See Solaris documentation for more information.
Hardware configuration changes - Do not change the cluster configuration during a rolling upgrade. For example, do not add to or change the cluster interconnect or quorum devices. If you need to make such a change, do so before you start the rolling upgrade procedure or wait until after all nodes are upgraded and the cluster is committed to the new software version.
Duration of the upgrade - Limit the amount of time that you take to complete a rolling upgrade of all cluster nodes. After a node is upgraded, begin the upgrade of the next cluster node as soon as possible. You can experience performance degradation and other penalties when you run a mixed-version cluster for an extended period of time.
Software configuration changes - Avoid installing new data services or issuing any administrative configuration commands during the upgrade.
New-feature availability - Until all nodes of the cluster are successfully upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.
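The per-node steps of a rolling upgrade can be sketched as the dry-run sequence below. The node name phys-schost-1 is a placeholder; clnode evacuate is the Oracle Solaris Cluster 3.2 and later command for switching resource groups and device groups off a node (on Sun Cluster 3.1, scswitch performs this role). The run() helper only echoes each command.

```shell
#!/bin/sh
# Dry-run sketch of the per-node steps in a rolling upgrade. The node
# name phys-schost-1 is a placeholder; run() only echoes each command.
PLAN=""
run() { PLAN="$PLAN$* ; "; echo "$@"; }

# 1. Switch all resource groups and device groups off the node so that
#    services continue on the remaining nodes.
run clnode evacuate phys-schost-1

# 2. Shut the node down; it is then booted into noncluster mode (for
#    example, boot -x at the SPARC OpenBoot prompt) so that the update
#    release can be applied.
run shutdown -g0 -y -i0

# 3. After the upgrade, boot the node back into the cluster and repeat
#    the sequence on the next node as soon as possible.
```

Repeating this sequence node by node keeps the cluster in service for the whole upgrade, subject to the duration and configuration-change restrictions above.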