Perform this procedure before you upgrade the software to remove the cluster from production.
Perform the following tasks:
Ensure that the configuration meets requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the CD-ROMs, documentation, and patches for all software products you are upgrading, including the following software:
Solaris OS
Sun Cluster 3.1 8/05 framework
Sun Cluster 3.1 8/05 data services (agents)
Applications that are managed by Sun Cluster 3.1 8/05 data-service agents
SPARC: VERITAS Volume Manager, if applicable
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
If you are upgrading from Sun Cluster 3.0 software, have available your list of test IP addresses. Each public-network adapter in the cluster must have at least one test IP address. This requirement applies regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses are used to reconfigure the adapters to use IP Network Multipathing.
Each test IP address must be on the same subnet as the existing IP address that is used by the public-network adapter.
To list the public-network adapters on a node, run the following command:
% pnmstat
See one of the following manuals for more information about test IP addresses for IP Network Multipathing:
IP Network Multipathing Administration Guide (Solaris 8)
Configuring Test Addresses in Administering Multipathing Groups With Multiple Physical Interfaces in System Administration Guide: IP Services (Solaris 9)
Test Addresses in System Administration Guide: IP Services (Solaris 10)
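As a rough pre-check, not part of the documented procedure, you can compare a candidate test IP address against the adapter's existing address. The sketch below assumes a netmask whose octets are each 255 or 0; the addresses and mask are hypothetical examples.

```shell
#!/bin/sh
# Sketch: report whether two dotted-quad addresses share a subnet.
# Assumes each netmask octet is 255 or 0 (e.g. 255.255.255.0).
same_subnet() {
  # $1 = existing address, $2 = candidate test address, $3 = netmask
  awk -v a="$1" -v b="$2" -v m="$3" 'BEGIN {
    split(a, A, "."); split(b, B, "."); split(m, M, ".")
    for (i = 1; i <= 4; i++)
      if (M[i] == 255 && A[i] != B[i]) { print "no"; exit }
    print "yes"
  }'
}
same_subnet 192.168.10.20 192.168.10.99 255.255.255.0   # hypothetical addresses
```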
Ensure that the cluster is functioning normally.
To view the current status of the cluster, run the following command from any node:
% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
Check the volume-manager status.
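The log search above can be scripted. This is a sketch, not the documented procedure: the pattern below is a starting point and does not catch every message class.

```shell
#!/bin/sh
# Sketch: print any line of a messages file that contains "error" or
# "warning", case-insensitively. Typical use on a cluster node:
#   scan_log /var/adm/messages
scan_log() {
  grep -i -E 'error|warning' "$1"
}
```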
(Optional) Install Sun Cluster 3.1 8/05 documentation.
Install the documentation packages in your preferred location, such as on an administrative console or a documentation server. See the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86, for installation instructions.
Notify users that cluster services will be unavailable during the upgrade.
Become superuser on a node of the cluster.
Start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
Switch each resource group offline.
From the scsetup Main Menu, choose the menu item, Resource groups.
From the Resource Group Menu, choose the menu item, Online/Offline or Switchover a resource group.
Follow the prompts to take offline all resource groups and to put them in the unmanaged state.
When all resource groups are offline, type q to return to the Resource Group Menu.
Disable all resources in the cluster.
Disabling resources before the upgrade prevents the cluster from bringing them online automatically if a node is mistakenly rebooted into cluster mode.
From the Resource Group Menu, choose the menu item, Enable/Disable a resource.
Choose a resource to disable and follow the prompts.
Repeat Step b for each resource.
When all resources are disabled, type q to return to the Resource Group Menu.
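The interactive scsetup steps above can also be performed with scswitch(1M). The sketch below is an echo dry run with hypothetical resource-group and resource names; verify the flags against your scswitch man page before removing the echo.

```shell
#!/bin/sh
# Sketch: non-interactive equivalent of the scsetup menu steps.
RUN=echo                       # set RUN= (empty) to actually execute
RGS="rg-nfs rg-web"            # hypothetical resource groups
RESOURCES="rs-nfs rs-web"      # hypothetical resources
for rg in $RGS; do
  $RUN scswitch -F -g "$rg"    # take the resource group offline
done
for rs in $RESOURCES; do
  $RUN scswitch -n -j "$rs"    # disable the resource
done
for rg in $RGS; do
  $RUN scswitch -u -g "$rg"    # move the group to the unmanaged state
done
```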
Exit the scsetup utility.
Type q to back out of each submenu or press Ctrl-C.
Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
# scstat -g
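A quick filter can flag any resource that is still online. This is a sketch that reads scstat-style text on standard input; on a node you would pipe the real command, for example scstat -g | check_offline. Note that a case-sensitive match on Online does not match Offline.

```shell
#!/bin/sh
# Sketch: report whether any line of scstat -g output still says Online.
check_offline() {
  if grep -q 'Online'; then
    echo "some resources are still online"
  else
    echo "all resources offline"
  fi
}
```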
If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information.
Run the following command to verify that no mediator data problems exist.
# medstat -s setname
-s setname
Specifies the disk set name
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 8/05 Software.
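To record the mediator hosts before you unconfigure them, you can parse the metaset output for a disk set. This sketch assumes the hosts are listed one per line under a "Mediator Host(s)" heading, which may differ on your system; typical use would be metaset -s setname | list_mediators.

```shell
#!/bin/sh
# Sketch: print the mediator hosts from metaset -s setname output.
# Assumes a "Mediator Host(s)" section followed by one host per line.
list_mediators() {
  awk '/Mediator Host/ {seen=1; next}
       seen && NF      {print $1}
       seen && !NF     {exit}'
}
```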
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
# scswitch -z -D setname -h node
-z
Changes mastery
-D setname
Specifies the name of the disk set
-h node
Specifies the name of the node to become primary of the disk set
Unconfigure all mediators for the disk set.
# metaset -s setname -d -m mediator-host-list
-s setname
Specifies the disk set name
-d
Deletes from the disk set
-m mediator-host-list
Specifies the name of the node to remove as a mediator host for the disk set
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
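Repeating Steps c and d for several disk sets can be scripted. The sketch below is an echo dry run; the disk set names, node name, and mediator hosts are hypothetical, and you should confirm the flags against the scswitch(1M) and metaset(1M) man pages before removing the echo.

```shell
#!/bin/sh
# Sketch: take ownership of each disk set, then remove its mediator hosts.
RUN=echo                         # set RUN= (empty) to actually execute
NODE=phys-schost-1               # hypothetical node to master each set
for set in setA setB; do         # hypothetical disk set names
  $RUN scswitch -z -D "$set" -h "$NODE"                      # take ownership
  $RUN metaset -s "$set" -d -m phys-schost-1 phys-schost-2   # remove mediators
done
```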
For a two-node cluster that uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.
Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.
# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS
In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
# scstat -q
-- Quorum Votes by Device --
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/dQsS  1       1        Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
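A small guard can catch the raw-versus-block mistake before the dd command runs. This is a sketch; the sample device paths are hypothetical.

```shell
#!/bin/sh
# Sketch: classify a DID device path as raw (/dev/did/rdsk/),
# block (/dev/did/dsk/), or unknown.
is_raw_did() {
  case $1 in
    /dev/did/rdsk/*) echo raw ;;
    /dev/did/dsk/*)  echo block ;;
    *)               echo unknown ;;
  esac
}
```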
If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.
# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
Stop all applications that are running on each node of the cluster.
Ensure that all shared data is backed up.
From one node, shut down the cluster.
# scshutdown -g0 -y
See the scshutdown(1M) man page for more information.
Boot each node into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems, perform the following commands:
…
                    <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                    <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -x
Ensure that each system disk is backed up.
To upgrade Solaris software before you perform the Sun Cluster software upgrade, go to How to Perform a Nonrolling Upgrade of the Solaris OS.
If Sun Cluster 3.1 8/05 software does not support the release of the Solaris OS that you currently run on your cluster, you must upgrade the Solaris software to a supported release. See “Supported Products” in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for more information.
If Sun Cluster 3.1 8/05 software supports the release of the Solaris OS that you currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade dependency software. Go to How to Upgrade Dependency Software Before a Nonrolling Upgrade.