Before you upgrade the software, perform the following steps to take the cluster out of production:
Ensure that the configuration meets requirements for upgrade.
Have available the CD-ROMs, documentation, and patches for all software products you are upgrading.
Solaris 8 or Solaris 9 operating environment
Sun Cluster 3.1 4/04 framework
Sun Cluster 3.1 4/04 data services (agents)
Applications that are managed by Sun Cluster 3.1 4/04 data-service agents
VERITAS Volume Manager
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
(Optional) Install Sun Cluster 3.1 4/04 documentation.
Install the documentation packages in your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Java Enterprise System Accessory CD 3 CD-ROM to access installation instructions.
Are you upgrading from Sun Cluster 3.0 software?
If no, proceed to Step 5.
If yes, have available your list of test IP addresses, one for each public-network adapter in the cluster.
A test IP address is required for each public-network adapter in the cluster, regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses will be used to reconfigure the adapters to use IP Network Multipathing.
Each test IP address must be on the same subnet as the existing IP address that is used by the public-network adapter.
To list the public-network adapters on a node, run the following command:
% pnmstat
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.
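The same-subnet rule above can be checked with standard shell arithmetic before you commit to a test IP address. This is a hedged sketch, not part of the Sun Cluster tooling; the addresses and netmask below are hypothetical, and on a live node you would take the adapter's actual address and netmask from ifconfig output.

```shell
#!/bin/sh
# network IP MASK: print the network address (dotted quad) of IP under MASK.
network() {
    oldIFS=$IFS
    IFS=.
    set -- $1 $2        # split both dotted quads into eight octets
    IFS=$oldIFS
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

# Hypothetical values; substitute the adapter's real address and netmask.
adapter_ip=192.168.10.15
test_ip=192.168.10.201
netmask=255.255.255.0

if [ "$(network "$adapter_ip" "$netmask")" = "$(network "$test_ip" "$netmask")" ]; then
    echo "test IP is on the same subnet as the adapter"
else
    echo "test IP is NOT on the same subnet as the adapter"
fi
```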
Notify users that cluster services will be unavailable during upgrade.
Ensure that the cluster is functioning normally.
To view the current status of the cluster, run the following command from any node:
% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
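A non-interactive way to perform this search is sketched below. The severity keywords are only the common ones; real clusters may log other patterns, so treat this as a first pass rather than a definitive check.

```shell
#!/bin/sh
# scan_log FILE: print warning/error lines from FILE; return 1 if any exist.
scan_log() {
    if grep -Ei 'warning|error' "$1"; then
        return 1
    fi
    return 0
}

# Typical use on a cluster node:
#   scan_log /var/adm/messages || echo "resolve the messages above first"
```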
Check volume-manager status.
Become superuser on a node of the cluster.
Switch each resource group offline.
# scswitch -F -g resource-group

-F                   Switches a resource group offline
-g resource-group    Specifies the name of the resource group to take offline
Disable all resources in the cluster.
The disabling of resources before upgrade prevents the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.
If you are upgrading from a Sun Cluster 3.1 release, you can use the scsetup(1M) utility instead of the command line. From the Main Menu, choose Resource Groups, then choose Enable/Disable Resources.
From any node, list all enabled resources in the cluster.
# scrgadm -pv | grep "Res enabled"
(resource-group:resource) Res enabled: True
Identify those resources that depend on other resources.
You must disable dependent resources first before you disable the resources that they depend on.
Disable each enabled resource in the cluster.
# scswitch -n -j resource

-n            Disables the resource
-j resource   Specifies the name of the resource to disable
See the scswitch(1M) man page for more information.
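Because resource names differ per cluster, the listing step can be scripted generically. The sketch below only parses scrgadm-style "Res enabled" lines and prints each enabled resource name; feeding those names to scswitch -n -j, dependents first, is the site-specific part and is shown only as a comment. The sample resource groups and resources are hypothetical.

```shell
#!/bin/sh
# enabled_resources: read "(<group>:<resource>) Res enabled: True" lines on
# stdin and print the name of each enabled resource, one per line.
enabled_resources() {
    awk -F'[():]' '/Res enabled: *True/ { print $3 }'
}

# On a live node you would run something like:
#   scrgadm -pv | grep "Res enabled" | enabled_resources
# and pass each printed name to `scswitch -n -j`, dependents first.
# Hypothetical sample input:
enabled_resources <<'EOF'
(nfs-rg:nfs-res) Res enabled: True
(web-rg:web-res) Res enabled: False
(db-rg:db-res) Res enabled: True
EOF
```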
Verify that all resources are disabled.
# scrgadm -pv | grep "Res enabled"
(resource-group:resource) Res enabled: False
Move each resource group to the unmanaged state.
# scswitch -u -g resource-group

-u                   Moves the specified resource group to the unmanaged state
-g resource-group    Specifies the name of the resource group to move into the unmanaged state
Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
# scstat -g
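The enabled-resource check can be reduced to a pass/fail test in a script. This is a sketch against scrgadm-style text, not a replacement for reading the scstat -g output yourself:

```shell
#!/bin/sh
# all_disabled: succeed only if stdin contains no "Res enabled: True" line.
all_disabled() {
    ! grep -q "Res enabled: True"
}

# On a live node:
#   scrgadm -pv | grep "Res enabled" | all_disabled || echo "resources still enabled"
```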
Does your cluster use dual-string mediators for Solstice DiskSuite/Solaris Volume Manager?
If no, proceed to Step 13.
If yes, unconfigure your mediators.
See Configuring Dual-String Mediators for more information.
Run the following command to verify that no mediator data problems exist.
# medstat -s setname

-s setname    Specifies the diskset name
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
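If you want to automate the Status check, mediator hosts whose status is Bad can be pulled out of the medstat output with awk. The two-column layout assumed here (host name first, status second) and the sample lines are assumptions; verify them against your actual medstat output before relying on this:

```shell
#!/bin/sh
# bad_mediators: read medstat-style rows on stdin (host in column 1, status in
# column 2 -- an assumed layout) and print each host whose status is Bad.
bad_mediators() {
    awk '$2 == "Bad" { print $1 }'
}

# Hypothetical sample (on a live node: medstat -s setname):
bad_mediators <<'EOF'
phys-host1 Ok
phys-host2 Bad
EOF
```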
List all mediators.
Use this information when you restore the mediators during the procedure How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling).
For a diskset that uses mediators, take ownership of the diskset if no node already has ownership.
# metaset -s setname -t

-t    Takes ownership of the diskset
Unconfigure all mediators for the diskset.
# metaset -s setname -d -m mediator-host-list

-s setname               Specifies the diskset name
-d                       Deletes from the diskset
-m mediator-host-list    Specifies the name of the node to remove as a mediator host for the diskset
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining diskset that uses mediators.
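Steps b through d can be wrapped in one loop per diskset. The diskset names and mediator hosts below are hypothetical placeholders, and the sketch only echoes the metaset commands it would run; remove the echo prefixes to execute them on a live cluster.

```shell
#!/bin/sh
# Dry run: print the metaset commands that would unconfigure mediators for
# each diskset. DISKSETS and MEDIATOR_HOSTS are hypothetical placeholders;
# take the real values from your metaset output.
DISKSETS="set1 set2"
MEDIATOR_HOSTS="phys-node1,phys-node2"

for ds in $DISKSETS; do
    echo "metaset -s $ds -t"                      # take ownership of the diskset
    echo "metaset -s $ds -d -m $MEDIATOR_HOSTS"   # remove its mediator hosts
done
```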
Stop all applications that are running on each node of the cluster.
Ensure that all shared data is backed up.
From one node, shut down the cluster.
# scshutdown -g0 -y
See the scshutdown(1M) man page for more information.
Boot each node into noncluster mode.
ok boot -x |
Ensure that each system disk is backed up.
Determine whether to upgrade the Solaris operating environment.
If Sun Cluster 3.1 4/04 software does not support the release of the Solaris environment that you currently run on your cluster, you must upgrade the Solaris software to a supported release. Go to How to Upgrade the Solaris Operating Environment (Nonrolling).
If your cluster configuration already runs on a release of the Solaris environment that supports Sun Cluster 3.1 4/04 software, further Solaris software upgrade is optional.
To upgrade Sun Cluster software, go to How to Upgrade to Sun Cluster 3.1 4/04 Software (Nonrolling).
To upgrade Solaris software, go to How to Upgrade the Solaris Operating Environment (Nonrolling).
See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.