This chapter provides the following information and procedures to upgrade a Sun Cluster 3.x configuration to Sun Cluster 3.1 9/04 software:
Task Map: Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling)
How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software
How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software
Task Map: Upgrading to Sun Cluster 3.1 9/04 Software (Rolling)
How to Perform a Rolling Upgrade of a Solaris Maintenance Update
How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software
How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software
SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center
This section provides the following guidelines to upgrade a Sun Cluster configuration:
Observe the following requirements and support guidelines when you upgrade to Sun Cluster 3.1 9/04 software:
The cluster must run on or be upgraded to at least Solaris 8 2/02 software, including the most current required patches.
The cluster hardware must be a supported configuration for Sun Cluster 3.1 9/04 software. Contact your Sun representative for information about current supported Sun Cluster configurations.
You must upgrade all software to a version that is supported by Sun Cluster 3.1 9/04 software. For example, if a data service is supported on Sun Cluster 3.0 software but not on Sun Cluster 3.1 9/04 software, you must upgrade to the version of that data service that Sun Cluster 3.1 9/04 software supports. See “Supported Products” in Sun Cluster 3.1 9/04 Release Notes for Solaris OS for support information about specific data services.
If the related application of a data service is not supported on Sun Cluster 3.1 9/04 software, you must upgrade that application to a supported release.
The scinstall upgrade utility only upgrades those data services that are provided with Sun Cluster 3.1 9/04 software. You must manually upgrade any custom or third-party data services.
For upgrade from a Sun Cluster 3.0 release, have available the test IP addresses to use with your public-network adapters when NAFO groups are converted to Internet Protocol (IP) Network Multipathing groups. The scinstall upgrade utility prompts you for a test IP address for each public-network adapter in the cluster. A test IP address must be on the same subnet as the primary IP address for the adapter.
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test IP addresses for IP Network Multipathing groups.
Sun Cluster 3.1 9/04 software supports only nonrolling upgrade from Solaris 8 software to Solaris 9 software.
Sun Cluster 3.1 9/04 software supports direct upgrade only from Sun Cluster 3.x software.
Sun Cluster 3.1 9/04 software does not support any downgrade of Sun Cluster software.
Sun Cluster 3.1 9/04 software does not support upgrade between architectures.
Sun Cluster 3.1 9/04 software does not support the Live Upgrade method to upgrade Solaris software in a Sun Cluster configuration.
Choose one of the following methods to upgrade your cluster to Sun Cluster 3.1 9/04 software:
Nonrolling upgrade – In a nonrolling upgrade, you shut down the cluster before you upgrade the cluster nodes. You return the cluster to production after all nodes are fully upgraded. You must use the nonrolling-upgrade method if one or more of the following conditions apply:
You are upgrading from Sun Cluster 3.0 software.
You are upgrading from Solaris 8 software to Solaris 9 software.
Any software products that you are upgrading, such as applications or databases, require that the same version of the software is running on all cluster nodes at the same time.
You are upgrading the Sun Cluster-module software for Sun Management Center.
You are also upgrading VxVM or VxFS.
Rolling upgrade – In a rolling upgrade, you upgrade one node of the cluster at a time. The cluster remains in production with services running on the other nodes. You can use the rolling-upgrade method only if all of the following conditions apply:
You are upgrading from Sun Cluster 3.1 software.
You are upgrading the Solaris operating system only to a Solaris update, if at all.
For any applications or databases you must upgrade, the current version of the software can coexist in a running cluster with the upgrade version of that software.
If your cluster configuration meets the requirements to perform a rolling upgrade, you can still choose to perform a nonrolling upgrade instead. A nonrolling upgrade might be preferable if you want to use the Cluster Control Panel to issue commands to all cluster nodes at the same time and you can tolerate the cluster downtime.
For overview information about planning your Sun Cluster 3.1 9/04 configuration, see Chapter 1, Planning the Sun Cluster Configuration.
Follow the tasks in this section to perform a nonrolling upgrade from Sun Cluster 3.x software to Sun Cluster 3.1 9/04 software. In a nonrolling upgrade, you shut down the entire cluster before you upgrade the cluster nodes. This procedure also enables you to upgrade the cluster from Solaris 8 software to Solaris 9 software.
To perform a rolling upgrade to Sun Cluster 3.1 9/04 software, instead follow the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Rolling).
| Task | Instructions |
|---|---|
| 1. Read the upgrade requirements and restrictions. | Upgrade Requirements and Support Guidelines |
| 2. Remove the cluster from production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure the mediators. | How to Prepare the Cluster for a Nonrolling Upgrade |
| 3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Optionally, upgrade VERITAS Volume Manager (VxVM). | How to Perform a Nonrolling Upgrade of the Solaris OS |
| 4. Upgrade to Sun Cluster 3.1 9/04 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. SPARC: If you upgraded VxVM, upgrade disk groups. | How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software |
| 5. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types. | How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software |
| 6. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed. | SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center |
Before you upgrade the software, perform the following steps to remove the cluster from production:
Ensure that the configuration meets requirements for upgrade.
Have available the CD-ROMs, documentation, and patches for all software products you are upgrading:
Solaris 8 or Solaris 9 OS
Sun Cluster 3.1 9/04 framework
Sun Cluster 3.1 9/04 data services (agents)
Applications that are managed by Sun Cluster 3.1 9/04 data-service agents
SPARC: VERITAS Volume Manager
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
(Optional) Install Sun Cluster 3.1 9/04 documentation.
Install the documentation packages in your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 9/04 CD-ROM to access installation instructions.
If you are upgrading from Sun Cluster 3.0 software, have available your list of test IP addresses.
Each public-network adapter in the cluster must have at least one test IP address. This requirement applies regardless of whether the adapter is the active adapter or the backup adapter in the group. The test IP addresses are used to reconfigure the adapters to use IP Network Multipathing.
Each test IP address must be on the same subnet as the existing IP address that is used by the public-network adapter.
To list the public-network adapters on a node, run the following command:
% pnmstat
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.
Notify users that cluster services will be unavailable during the upgrade.
Ensure that the cluster is functioning normally.
To view the current status of the cluster, run the following command from any node:
% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
Check the volume-manager status.
Become superuser on a node of the cluster.
Start the scsetup(1M) utility.

# scsetup
The Main Menu displays.
Switch each resource group offline.
From the scsetup Main Menu, choose Resource groups.
From the Resource Group Menu, choose Online/Offline or Switchover a resource group.
Follow the prompts to take offline all resource groups and to put them in the unmanaged state.
When all resource groups are offline, type q to return to the Resource Group Menu.
Disable all resources in the cluster.
Disabling resources before the upgrade prevents the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.
From the Resource Group Menu, choose Enable/Disable a resource.
Choose a resource to disable and follow the prompts.
Repeat Step b for each resource.
When all resources are disabled, type q to return to the Resource Group Menu.
Exit the scsetup utility.
Type q to back out of each submenu or press Ctrl-C.
Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
# scstat -g
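The scsetup menus in the steps above drive the same operations that the scswitch command performs directly. As a hedged sketch of the command-line equivalents (the resource-group name nfs-rg and resource name nfs-res are illustrative, not from this document):

```shell
# Hedged scswitch equivalents of the scsetup steps above;
# nfs-rg and nfs-res are illustrative names.
scswitch -F -g nfs-rg    # take the resource group offline on all nodes
scswitch -n -j nfs-res   # disable a resource (repeat for each resource)
scswitch -u -g nfs-rg    # put the resource group in the unmanaged state
scstat -g                # confirm: resources Offline, groups Unmanaged
```

These commands run only on a Sun Cluster node; see the scswitch(1M) man page to confirm the options for your release.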
If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information.
Run the following command to verify that no mediator data problems exist.
# medstat -s setname

-s setname
    Specifies the disk set name
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software.
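The document does not show a listing command at this step. Assuming the Solstice DiskSuite/Solaris Volume Manager metaset utility, a sketch (setname is illustrative):

```shell
# metaset with no arguments summarizes every disk set the node knows about;
# with -s it prints one set's full configuration, including mediator hosts.
metaset
metaset -s setname
```

Record the Mediator Host entries printed for each disk set that uses mediators.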
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
# metaset -s setname -t

-t
    Takes ownership of the disk set
Unconfigure all mediators for the disk set.
# metaset -s setname -d -m mediator-host-list

-s setname
    Specifies the disk set name
-d
    Deletes from the disk set
-m mediator-host-list
    Specifies the name of the node to remove as a mediator host for the disk set
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
If not already installed, install Sun Web Console packages.
Perform this step on each node of the cluster. These packages are required by Sun Cluster software, even if you do not use Sun Web Console.
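Based on the example session at the end of this procedure, the packages can be installed by running the setup script from the Misc directory of the Sun Cluster 3.1 9/04 CD-ROM. A sketch (the Solaris_sparc and Solaris_8 path components vary with your architecture and Solaris release):

```shell
# Run on each node; adjust the arch and Solaris-version path components
# for your platform.
cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Misc
./setup
```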
For a two-node cluster, if the cluster uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.
Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.
# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS

In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
# scstat -q
-- Quorum Votes by Device --

                    Device Name         Present  Possible  Status
                    -----------         -------  --------  ------
  Device votes:     /dev/did/rdsk/dQsS  1        1         Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.
# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
Stop all applications that are running on each node of the cluster.
Ensure that all shared data is backed up.
From one node, shut down the cluster.
# scshutdown -g0 -y
See the scshutdown(1M) man page for more information.
Boot each node into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems, perform the following commands:
...
                      <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                      <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -x
Ensure that each system disk is backed up.
Upgrade the Sun Cluster software or the Solaris operating system.
To upgrade Solaris software before you perform Sun Cluster upgrade, go to How to Perform a Nonrolling Upgrade of the Solaris OS.
If Sun Cluster 3.1 9/04 software does not support the release of the Solaris OS that you currently run on your cluster, you must upgrade the Solaris software to a supported release. If Sun Cluster 3.1 9/04 software supports the release of the Solaris OS that you currently run on your cluster, further Solaris software upgrade is optional. See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.
To upgrade Sun Cluster software, go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.
Perform this procedure on each node in the cluster to upgrade the Solaris OS. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.1 9/04 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 or Solaris 9 OS to support Sun Cluster 3.1 9/04 software. See “Supported Products” in Sun Cluster Release Notes for Solaris OS for more information.
Ensure that all steps in How to Prepare the Cluster for a Nonrolling Upgrade are completed.
Become superuser on the cluster node to upgrade.
(Optional) Upgrade VxFS.
Follow procedures that are provided in your VxFS documentation.
Determine whether the following Apache links already exist, and if so, whether the file names contain an uppercase K or S:
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
If these links already exist and do contain an uppercase K or S in the file name, no further action is necessary for these links.
If these links do not exist, or if they exist but contain a lowercase k or s in the file name, you will move these links aside in Step 9.
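The check above turns entirely on the case of the first character of each link name. A small sketch of that test, applied to sample names rather than the live /etc/rc?.d directories:

```shell
# Hypothetical sketch: classify a run-control link name by the case of
# its first character, as the check above describes.
classify() {
  case "$1" in
    [KS]*) echo "uppercase" ;;   # e.g. K16apache, S50apache
    [ks]*) echo "lowercase" ;;   # e.g. k16apache, s50apache
  esac
}
classify K16apache   # prints "uppercase"
classify s50apache   # prints "lowercase"
```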
Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
For later reference, make a record of all entries that are already commented out.
Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
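One way to do the commenting mechanically is with sed. The sketch below runs against a two-line sample file rather than the real /etc/vfstab, and the device names are illustrative; on a real node, back up /etc/vfstab first and review the result, since a bare "global" match is only an approximation of parsing the mount-options field.

```shell
# Hypothetical sample vfstab entries: the first carries the "global"
# mount option, the second does not.
cat > /tmp/vfstab.sample <<'EOF'
/dev/md/dsk/d10 /dev/md/rdsk/d10 /global/devices ufs 2 no global
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
EOF
# Prefix "#" to every line where "global" appears as a whitespace-delimited
# field (the /global mount point itself does not match).
sed '/[[:space:]]global/s/^/#/' /tmp/vfstab.sample
```

Only the first sample line comes out commented.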
Determine which procedure to follow to upgrade the Solaris OS.
| Volume Manager | Procedure to Use | Location of Instructions |
|---|---|---|
| Solstice DiskSuite or Solaris Volume Manager | Any Solaris upgrade method except the Live Upgrade method | Solaris 8 or Solaris 9 installation documentation |
| SPARC: VERITAS Volume Manager | “Upgrading VxVM and Solaris” | VERITAS Volume Manager installation documentation |
If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 version of VxVM software as part of the Solaris upgrade process.
Upgrade the Solaris software, following the procedure that you selected in Step 6.
When you are instructed to reboot a node during the upgrade process, always add the -x option to the command. Or, if the instruction says to run the init S command, use the reboot -- -xs command instead.
The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boot a node into single-user noncluster mode:
On SPARC based systems, perform the following commands:
# reboot -- -xs
ok boot -xs
On x86 based systems, perform the following commands:
# reboot -- -xs
...
                      <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                      <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -xs
Do not perform the final reboot instruction in the Solaris software upgrade. Instead, return to this procedure to perform Step 8 and Step 9, then reboot into noncluster mode in Step 10 to complete Solaris software upgrade.
In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 5.
Move aside restored Apache links if either of the following conditions was true before you upgraded the Solaris software:
The Apache links listed in Step 4 did not exist.
The Apache links listed in Step 4 existed and contained a lowercase k or s in the file names.
To move aside restored Apache links, which contain an uppercase K or S in the name, use the following commands to rename the files with a lowercase k or s.
# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Reboot the node into noncluster mode.
Include the double dashes (--) in the following command:
# reboot -- -x
SPARC: If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.
Note the following special instructions:
After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file. If any of the entries that you uncommented in Step 8 were commented out, make those entries uncommented again.
When the VxVM procedures instruct you to perform a final reconfiguration reboot by using the -r option, reboot into noncluster mode by using the -x option instead.
# reboot -- -x
If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command or type Ctrl-D.
WARNING - Unable to repair the /global/.devices/node@1 filesystem.
Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol).
Exit the shell when done to continue the boot process.

Type control-d to proceed with normal startup,
(or give root password for system maintenance): Type the root password
Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.
For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.
Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Upgrade to Sun Cluster 3.1 9/04 software.
Go to How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.
To complete the upgrade from Solaris 8 to Solaris 9 software, you must also upgrade to the Solaris 9 version of Sun Cluster 3.1 9/04 software, even if the cluster already runs on the Solaris 8 version of Sun Cluster 3.1 9/04 software.
Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.1 9/04 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software.
You can perform this procedure on more than one node at the same time.
Ensure that all steps in How to Prepare the Cluster for a Nonrolling Upgrade are completed.
If you upgraded from Solaris 8 to Solaris 9 software, ensure that all steps in How to Perform a Nonrolling Upgrade of the Solaris OS are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.
Become superuser on a node of the cluster.
Insert the Sun Java Enterprise System 1/05 2 of 2 CD-ROM into the CD-ROM drive on the node.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Upgrade the cluster framework software.
Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is on the Sun Cluster 3.1 9/04 CD-ROM.
To upgrade from Sun Cluster 3.0 software, run the following command:
# ./scinstall -u update -S interact [-M patchdir=dirname]

-S
    Specifies the test IP addresses to use to convert NAFO groups to IP Network Multipathing groups
interact
    Specifies that scinstall prompts the user for each test IP address needed
-M patchdir=dirname
    Specifies the path to patch information so that the specified patches can be installed by the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

The -M option is not required. You can use any method you prefer for installing patches.
To upgrade from Sun Cluster 3.1 software, run the following command:
# ./scinstall -u update [-M patchdir=dirname]

-M patchdir=dirname
    Specifies the path to patch information so that the specified patches can be installed by the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.

The -M option is not required. You can use any method you prefer for installing patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Sun Cluster 3.1 9/04 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.
During Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:
Convert NAFO groups to IP Network Multipathing groups but keep the original NAFO-group name.
See the scinstall(1M) man page for more information. See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test addresses for IP Network Multipathing.
Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and the path to the upgrade log.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
(Optional) Upgrade Sun Cluster data services.
If you are using the Sun Cluster HA for Oracle 3.0 64-bit for Solaris 9 data service, you must upgrade to the Sun Cluster 3.1 9/04 version.
You can continue to use any other Sun Cluster 3.0 data services after you upgrade to Sun Cluster 3.1 9/04 software.
Insert the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive on the node.
Upgrade the data-service software.
Use one of the following methods:
To upgrade one or more specified data services, type the following command.
# scinstall -u update -s srvc[,srvc,…] -d /cdrom/cdrom0

-u update
    Upgrades a cluster node to a later Sun Cluster software release
-s srvc
    Upgrades the specified data service
-d
    Specifies an alternate directory location for the CD-ROM image
To upgrade all data services present on the node, type the following command.
# scinstall -u update -s all -d /cdrom/cdrom0

-s all
    Upgrades all data services
The scinstall command assumes that updates for all installed data services exist on the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.
Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and displays the path to the upgrade log.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Sun Cluster 3.1 9/04 software patches, if you did not already install them by using the scinstall command.
Install any Sun Cluster 3.1 9/04 data-service software patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.
After all nodes are upgraded, reboot each node into the cluster.
# reboot
Verify that all upgraded software is at the same version on all upgraded nodes.
On each upgraded node, view the installed levels of Sun Cluster software.
# scinstall -pv
The first line of output states which version of Sun Cluster software the node is running. This version should match the version that you just upgraded to.
From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).
# scstat -n
See the scstat(1M) man page for more information about displaying cluster status.
If you upgraded from Solaris 8 to Solaris 9 software, verify the consistency of the storage configuration.
On each node, run the following command to verify the consistency of the storage configuration.
# scdidadm -c

-c
    Perform a consistency check
Do not proceed to Step b until your configuration passes this consistency check. Failure to pass this check might result in errors in device identification and cause data corruption.
The following table lists the possible output from the scdidadm -c command and the action you must take, if any.
| Example Message | Action |
|---|---|
| device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced | Go to Recovering From Storage Configuration Changes During Upgrade and perform the appropriate repair procedure. |
| device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run scdidadm -R to update | None. You update this device ID in Step b. |
| No output message | None. |
See the scdidadm(1M) man page for more information.
On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.
# scdidadm -R all

-R
    Perform repair procedures
all
    Specify all devices
On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful.
# scdidadm -c
If the scdidadm command displays a message, return to Step a to make further corrections to the storage configuration or the storage database.
If the scdidadm command displays no messages, the device-ID migration is successful. When device-ID migration is verified on all cluster nodes, proceed to Step 4.
Go to How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software.
The following example shows the process of a nonrolling upgrade of a two-node cluster from Sun Cluster 3.0 to Sun Cluster 3.1 9/04 software on the Solaris 8 OS. The example includes the installation of Sun Web Console software and the upgrade of all installed data services that have new versions on the Sun Cluster 3.1 9/04 Agents CD-ROM. The cluster node names are phys-schost-1 and phys-schost-2.
(On the first node, install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Misc
phys-schost-1# ./setup

(On the first node, upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-1# ./scinstall -u update -S interact

(On the first node, upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-1# scinstall -u update -s all -d /cdrom/cdrom0

(On the second node, install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-2# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Misc
phys-schost-2# ./setup

(On the second node, upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-2# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-2# ./scinstall -u update -S interact

(On the second node, upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-2# scinstall -u update -s all -d /cdrom/cdrom0

(Reboot each node into the cluster)
phys-schost-1# reboot
phys-schost-2# reboot

(Verify that software versions are the same on all nodes)
# scinstall -pv

(Verify cluster membership)
# scstat -n
-- Cluster Nodes --
                   Node name      Status
                   ---------      ------
  Cluster node:    phys-schost-1  Online
  Cluster node:    phys-schost-2  Online
Perform this procedure to finish Sun Cluster upgrade. First, reregister all resource types that received a new version from the upgrade. Second, modify eligible resources to use the new version of the resource type that the resource uses. Third, re-enable resources. Finally, bring resource groups back online.
To upgrade future versions of resource types, see “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Ensure that all steps in How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software are completed.
If you upgraded any data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM, register the new resource types for those data services.
Follow the documentation that accompanies the data services.
If you upgraded Sun Cluster HA for SAP liveCache from the version for Sun Cluster 3.0 to the version for Sun Cluster 3.1, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.
In the lccluster file, specify the value of put-Confdir_list-here in the CONFDIR_LIST="put-Confdir_list-here" entry. This entry did not exist in the Sun Cluster 3.0 version of the lccluster file. Follow instructions in “Registering and Configuring the Sun Cluster HA for SAP liveCache” in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.
Determine which node has ownership of a disk set to which you will add the mediator hosts.
# metaset -s setname

-s setname
    Specifies the disk set name
If no node has ownership, take ownership of the disk set.
# metaset -s setname -t

-t
    Takes ownership of the disk set
Re-create the mediators.
# metaset -s setname -a -m mediator-host-list

-a
    Adds to the disk set
-m mediator-host-list
    Specifies the names of the nodes to add as mediator hosts for the disk set
Repeat Step a through Step c for each disk set in the cluster that uses mediators.
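Steps a through c can be sketched as a loop over a saved mediator list. The file name /var/tmp/mediators.list (one "setname host1,host2" pair per line, recorded when the mediators were unconfigured) is an assumption, not a name the procedure prescribes.

```shell
# Hedged sketch: re-create mediators for every disk set recorded in a
# saved list; the list file name is an assumption.
if [ -f /var/tmp/mediators.list ]; then
    while read -r setname hosts; do
        metaset -s "$setname" -t                # take ownership
        metaset -s "$setname" -a -m "$hosts"    # re-create the mediators
    done < /var/tmp/mediators.list
fi
```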
SPARC: If you upgraded VxVM, upgrade all disk groups.
To upgrade a disk group to the highest version supported by the VxVM release you installed, run the following command from the primary node of the disk group:
# vxdg upgrade dgname
See your VxVM administration documentation for more information about upgrading disk groups.
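Assuming the output of vxdg list has a header line followed by one imported disk group per line with the group name in the first column, a loop like the following sketch upgrades every group; run it only on the node that masters each group.

```shell
# Sketch, assuming "vxdg list" prints a header line and the disk group
# name in the first column.
if command -v vxdg >/dev/null 2>&1; then
    vxdg list | awk 'NR > 1 { print $1 }' | while read -r dg; do
        vxdg upgrade "$dg"
    done
fi
```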
From any node, start the scsetup(1M) utility.
# scsetup
Re-enable all disabled resources.
From the Resource Group Menu, choose Enable/Disable a resource.
Choose a resource to enable and follow the prompts.
Repeat Step b for each disabled resource.
When all resources are re-enabled, type q to return to the Resource Group Menu.
Bring each resource group back online.
When all resource groups are back online, exit the scsetup utility.
Type q to back out of each submenu, or press Ctrl-C.
(Optional) Migrate resources to new resource type versions.
See “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves the following tasks:
Register the new resource type.
Migrate the eligible resource to the new version of its resource type.
Modify the extension properties of the resource type as specified in the manual for the related data service.
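The tasks above can be sketched from the command line with scrgadm. The type name SUNW.nfs and resource name nfs-res are hypothetical placeholders, and the Type_version value comes from the manual for the data service in question.

```shell
# Hedged sketch of a command-line resource type migration; the names and
# version shown are hypothetical examples.
if command -v scrgadm >/dev/null 2>&1; then
    scrgadm -a -t SUNW.nfs                        # register the new type version
    scrgadm -c -j nfs-res -y Type_version=3.1     # migrate the resource
fi
```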
If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center.
The cluster upgrade is complete.
This section provides procedures to perform a rolling upgrade from Sun Cluster 3.1 software to Sun Cluster 3.1 9/04 software. In a rolling upgrade, you upgrade one cluster node at a time, while the other cluster nodes remain in production. After all nodes are upgraded and have rejoined the cluster, you must commit the cluster to the new software version before you can use any new features.
To upgrade from Sun Cluster 3.0 software, follow the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).
Sun Cluster 3.1 9/04 software does not support rolling upgrade from Solaris 8 software to Solaris 9 software. You can upgrade Solaris software to an update release during Sun Cluster rolling upgrade. To upgrade a Sun Cluster configuration from Solaris 8 software to Solaris 9 software, perform the procedures in Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).
To perform a rolling upgrade, follow the tasks that are listed in Table 5–2.
Table 5–2 Task Map: Upgrading to Sun Cluster 3.1 9/04 Software
| Task | Instructions |
| --- | --- |
| 1. Read the upgrade requirements and restrictions. | |
| 2. On one node of the cluster, move resource groups and device groups to another cluster node, and ensure that shared data and system disks are backed up. If the cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure the mediators. Then reboot the node into noncluster mode. | |
| 3. Upgrade the Solaris OS on the cluster node, if necessary, to a supported Solaris update release. SPARC: Optionally, upgrade VERITAS File System (VxFS) and VERITAS Volume Manager (VxVM). | How to Perform a Rolling Upgrade of a Solaris Maintenance Update |
| 4. Upgrade the cluster node to Sun Cluster 3.1 9/04 framework and data-service software. If necessary, upgrade applications. SPARC: If you upgraded VxVM, upgrade disk groups. Then reboot the node back into the cluster. | How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software |
| 5. Repeat Tasks 2 through 4 on each remaining node to upgrade. | |
| 6. Use the scversions command to commit the cluster to the upgrade. If the cluster uses dual-string mediators, reconfigure the mediators. Optionally, migrate existing resources to new resource types. | How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software |
| 7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center. | SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center |
Perform this procedure on one node at a time. You will take the upgraded node out of the cluster while the remaining nodes continue to function as active cluster members.
Observe the following guidelines when you perform a rolling upgrade:
Limit the amount of time that you take to complete a rolling upgrade of all cluster nodes. After a node is upgraded, begin the upgrade of the next cluster node as soon as possible. You can experience performance and other penalties when you run a mixed-version cluster for an extended period of time.
Avoid installing new data services or issuing any administrative configuration commands during the upgrade.
Until all nodes of the cluster are successfully upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.
Ensure that the configuration meets requirements for upgrade.
Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading before you begin to upgrade the cluster.
Solaris 8 or Solaris 9 OS
Sun Cluster 3.1 9/04 framework
Sun Cluster 3.1 9/04 data services (agents)
Applications that are managed by Sun Cluster 3.1 9/04 data-service agents
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
(Optional) Install Sun Cluster 3.1 9/04 documentation.
Install the documentation packages on your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.1 9/04 CD-ROM to access installation instructions.
Become superuser on one node of the cluster to upgrade.
If not already installed, install Sun Web Console packages.
These packages are required by Sun Cluster software, even if you do not use Sun Web Console.
For a two-node cluster, if the cluster uses Sun StorEdge Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Sun StorEdge Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Sun StorEdge Availability Suite software.
Identify the device ID and the slice that is used by the Sun StorEdge Availability Suite configuration file.
# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS

In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
# scstat -q

-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/dQsS  1       1        Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Sun StorEdge Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
If you moved the configuration data, configure Sun StorEdge Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Sun StorEdge Availability Suite software.
# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
From any node, view the current status of the cluster.
Save the output as a baseline for later comparison.
% scstat
% scrgadm -pv[v]
See the scstat(1M) and scrgadm(1M) man pages for more information.
Move all resource groups and device groups that are running on the node to upgrade.
# scswitch -S -h from-node

-S
    Moves all resource groups and device groups
-h from-node
    Specifies the name of the node from which to move resource groups and device groups
See the scswitch(1M) man page for more information.
Verify that the move was completed successfully.
# scstat -g -D

-g
    Shows status for all resource groups
-D
    Shows status for all disk device groups
Ensure that the system disk, applications, and all data are backed up.
If your cluster uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information.
Run the following command to verify that no mediator data problems exist.
# medstat -s setname

-s setname
    Specifies the disk set name
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software.
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
# metaset -s setname -t

-t
    Takes ownership of the disk set
Unconfigure all mediators for the disk set.
# metaset -s setname -d -m mediator-host-list

-s setname
    Specifies the disk-set name
-d
    Deletes from the disk set
-m mediator-host-list
    Specifies the name of the node to remove as a mediator host for the disk set
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
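Steps c and d can be sketched as a loop over the mediator list saved earlier. The file name /var/tmp/mediators.list (one "setname host1,host2" pair per line) is an assumption, not a name the procedure prescribes.

```shell
# Hedged sketch: unconfigure mediators for each disk set recorded in a
# saved list; the list file name is an assumption.
if [ -f /var/tmp/mediators.list ]; then
    while read -r setname hosts; do
        metaset -s "$setname" -t                # take ownership
        metaset -s "$setname" -d -m "$hosts"    # remove the mediator hosts
    done < /var/tmp/mediators.list
fi
```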
Shut down the node that you want to upgrade and boot it into noncluster mode.
On SPARC based systems, perform the following commands:
# shutdown -y -g0
ok boot -x
On x86 based systems, perform the following commands:
# shutdown -y -g0
...
                     <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>    to boot with options
or      i <ENTER>                             to enter boot interpreter
or      <ENTER>                               to boot with defaults

                     <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -x
The other nodes of the cluster continue to function as active cluster members.
To upgrade the Solaris software to a Maintenance Update release, go to How to Perform a Rolling Upgrade of a Solaris Maintenance Update.
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support Sun Cluster 3.1 9/04 software. See the Sun Cluster Release Notes for Solaris OS for information about supported releases of the Solaris OS.
Go to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.
Perform this procedure to upgrade the Solaris 8 or Solaris 9 OS to a supported Maintenance Update release.
To upgrade a cluster from Solaris 8 to Solaris 9 software, with or without upgrading Sun Cluster software as well, you must perform a nonrolling upgrade. Go to Upgrading to Sun Cluster 3.1 9/04 Software (Nonrolling).
Ensure that all steps in How to Prepare a Cluster Node for a Rolling Upgrade are completed.
Temporarily comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
Perform this step to prevent the Solaris upgrade from attempting to mount the global devices.
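One hedged way to script the comment-out step, assuming the globally mounted entries can be recognized by the word "global" on their vfstab lines; inspect the edited file by hand before you continue, and keep the original copy for the later uncomment step.

```shell
# Sketch, assuming global mounts carry the word "global" in their vfstab
# entry; the backup name /etc/vfstab.cluster is an assumption.
if [ -w /etc/vfstab ]; then
    cp /etc/vfstab /etc/vfstab.cluster          # keep the original
    awk '/^[^#].*global/ { print "#" $0; next } { print }' \
        /etc/vfstab.cluster > /etc/vfstab
fi
```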
Follow the instructions in the Solaris maintenance update installation guide to install the Maintenance Update release.
Do not reboot the node when prompted to reboot at the end of installation processing.
Uncomment all entries in the /a/etc/vfstab file for globally mounted file systems that you commented out in Step 2.
Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.
Do not reboot the node until Step 6.
Reboot the node into noncluster mode.
Include the double dashes (--) in the following command:
# reboot -- -x
Upgrade the Sun Cluster software.
Go to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.
Perform this procedure to upgrade a node to Sun Cluster 3.1 9/04 software while the remaining cluster nodes are in cluster mode.
Until all nodes of the cluster are upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.
Ensure that all steps in How to Prepare a Cluster Node for a Rolling Upgrade are completed.
If you upgraded the Solaris OS to a Maintenance Update release, ensure that all steps in How to Perform a Rolling Upgrade of a Solaris Maintenance Update are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.
Become superuser on the node of the cluster.
Install Sun Web Console packages.
Perform this step on each node of the cluster. These packages are required by Sun Cluster software, even if you do not use Sun Web Console.
On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Upgrade the cluster framework software.
Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is on the Sun Cluster 3.1 9/04 CD-ROM.
# ./scinstall -u update [-M patchdir=dirname]

-M patchdir=dirname
    Specifies the path to patch information so that the specified patches can be installed by the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname, including tarred, jarred, and zipped patches.
The -M option is not required. You can use any method you prefer for installing patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Sun Cluster 3.1 9/04 software requires at least version 3.5.1 of Sun Explorer software. Upgrading to Sun Cluster software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually re-created.
Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and the path to the upgrade log.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
(Optional) Upgrade Sun Cluster data services.
If you are using the Sun Cluster HA for Oracle 3.0 64–bit for Solaris 9 data service, you must upgrade to the Sun Cluster 3.1 9/04 version.
You can continue to use any other Sun Cluster 3.0 data services after you upgrade to Sun Cluster 3.1 9/04 software.
Insert the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive on the node.
Upgrade the data-service software.
Use one of the following methods:
To upgrade one or more specified data services, type the following command.
# scinstall -u update -s srvc[,srvc,…] -d /cdrom/cdrom0

-u update
    Upgrades a cluster node to a later Sun Cluster software release
-s srvc
    Upgrades the specified data service
-d
    Specifies an alternate directory location for the CD-ROM image
To upgrade all data services present on the node, type the following command.
# scinstall -u update -s all -d /cdrom/cdrom0

-s all
    Upgrades all data services
The scinstall command assumes that updates for all installed data services exist on the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.
Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and displays the path to the upgrade log.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Sun Cluster 3.1 9/04 software patches, if you did not already install them by using the scinstall command.
Install any Sun Cluster 3.1 9/04 data-service software patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions. In addition, follow these guidelines to upgrade applications in a Sun Cluster 3.1 9/04 configuration:
If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.
If you are instructed to reboot a node during the upgrade process, always add the -x option to the command.
The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:
On SPARC based systems, perform the following commands:
# reboot -- -xs
ok boot -xs
On x86 based systems, perform the following commands:
# reboot -- -xs
...
                     <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>    to boot with options
or      i <ENTER>                             to enter boot interpreter
or      <ENTER>                               to boot with defaults

                     <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -xs
Do not upgrade an application if the newer version of the application cannot coexist in the cluster with the older version of the application.
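The first guideline above, mastering the relevant disk group and mounting its file system before upgrading an application on shared storage, can be sketched as follows. The device group name app-dg and mount point /global/app are hypothetical placeholders.

```shell
# Hedged sketch; app-dg and /global/app are hypothetical names for the
# application's device group and file system.
if command -v scswitch >/dev/null 2>&1; then
    scswitch -z -D app-dg -h "$(uname -n)"      # master the device group here
    mount /global/app                           # mount the app file system
fi
```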
Reboot the node into the cluster.
# reboot
Run the following command on the upgraded node to verify that Sun Cluster 3.1 9/04 software was installed successfully.
# scinstall -pv
The first line of output states which version of Sun Cluster software the node is running. This version should match the version you just upgraded to.
From any node, verify the status of the cluster configuration.
% scstat
% scrgadm -pv[v]
Output should be the same as for Step 7 in How to Prepare a Cluster Node for a Rolling Upgrade.
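One hedged way to make the comparison mechanical is to snapshot the status output to files and diff the before and after copies; the /var/tmp file names are assumptions, not names the procedure prescribes.

```shell
# Sketch: capture post-upgrade status and diff it against the snapshots
# saved during preparation; the file names are assumptions.
if command -v scstat >/dev/null 2>&1; then
    scstat > /var/tmp/scstat.after
    scrgadm -pv > /var/tmp/scrgadm.after
    diff /var/tmp/scstat.before /var/tmp/scstat.after
    diff /var/tmp/scrgadm.before /var/tmp/scrgadm.after
fi
```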
If you have another node to upgrade, return to How to Prepare a Cluster Node for a Rolling Upgrade and repeat all upgrade procedures on the next node to upgrade.
When all nodes in the cluster are upgraded, go to How to Finish a Rolling Upgrade to Sun Cluster 3.1 9/04 Software.
The following example shows the process of a rolling upgrade of a cluster node from Sun Cluster 3.1 to Sun Cluster 3.1 9/04 software on the Solaris 8 OS. The example includes the installation of Sun Web Console software and the upgrade of all installed data services that have new versions on the Sun Cluster 3.1 9/04 Agents CD-ROM. The cluster node's name is phys-schost-1.
(Install Sun Web Console software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Misc
phys-schost-1# ./setup

(Upgrade framework software from the Sun Cluster 3.1 9/04 CD-ROM)
phys-schost-1# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Tools
phys-schost-1# ./scinstall -u update -S interact

(Upgrade data services from the Sun Cluster 3.1 9/04 Agents CD-ROM)
phys-schost-1# scinstall -u update -s all -d /cdrom/cdrom0

(Reboot the node into the cluster)
phys-schost-1# reboot

(Verify that software upgrade succeeded)
# scinstall -pv

(Verify cluster status)
# scstat
# scrgadm -pv
Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.
From one node, check the upgrade status of the cluster.
# scversions
From the following table, perform the action that is listed for the output message from Step 2.
| Output Message | Action |
| --- | --- |
| Upgrade commit is needed. | Go to Step 4. |
| Upgrade commit is NOT needed. All versions match. | Skip to Step 6. |
| Upgrade commit cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions. | Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software to upgrade the remaining cluster nodes. |
| Check upgrade cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions. | Return to How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software to upgrade the remaining cluster nodes. |
After all nodes have rejoined the cluster, from one node commit the cluster to the upgrade.
# scversions -c
Committing the upgrade enables the cluster to utilize all features in the newer software. New features are available only after you perform the upgrade commitment.
From one node, verify that the cluster upgrade commitment has succeeded.
# scversions
Upgrade commit is NOT needed. All versions match.
If your configuration uses dual-string mediators for Solstice DiskSuite or Solaris Volume Manager software, restore the mediator configurations.
Determine which node has ownership of a disk set to which you are adding the mediator hosts.
# metaset -s setname

-s setname
    Specifies the disk-set name
If no node has ownership, take ownership of the disk set.
# metaset -s setname -t

-t
    Takes ownership of the disk set
Re-create the mediators.
# metaset -s setname -a -m mediator-host-list

-a
    Adds to the disk set
-m mediator-host-list
    Specifies the names of the nodes to add as mediator hosts for the disk set
Repeat Step a through Step c for each disk set in the cluster that uses mediators.
If you upgraded any data services that are not supplied on the Sun Cluster 3.1 9/04 Agents CD-ROM, register the new resource types for those data services.
Follow the documentation that accompanies the data services.
(Optional) Switch each resource group and device group back to its original node.
# scswitch -z -g resource-group -h node
# scswitch -z -D disk-device-group -h node

-z
    Performs the switch
-g resource-group
    Specifies the resource group to switch
-h node
    Specifies the name of the node to switch to
-D disk-device-group
    Specifies the device group to switch
Restart any applications.
Follow the instructions that are provided in your vendor documentation.
(Optional) Migrate resources to new resource type versions.
See “Upgrading a Resource Type” in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the scsetup utility. The process involves the following tasks:
Register the new resource type.
Migrate the eligible resource to the new version of its resource type.
Modify the extension properties of the resource type as specified in the manual for the related data service.
If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster-Module Software for Sun Management Center.
The cluster upgrade is complete.
This section provides the following repair procedures to follow if changes were inadvertently made to the storage configuration during upgrade:
Any changes to the storage topology, including running Sun Cluster commands, should be completed before you upgrade the cluster to Solaris 9 software. If, however, changes were made to the storage topology during the upgrade, perform the following procedure. This procedure ensures that the new storage configuration is correct and that existing storage that was not reconfigured is not mistakenly altered.
Ensure that the storage topology is correct.
Check whether the devices that were flagged as possibly being replaced map to devices that actually were replaced. If the devices were not replaced, check for and correct possible accidental configuration changes, such as incorrect cabling.
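As a hedged aid to this check, the full DID-to-physical-path mapping can be listed and compared against what you expect for the flagged devices.

```shell
# Sketch: print the DID device mapping, including paths from all nodes,
# to verify flagged DID instances against their physical devices.
if command -v scdidadm >/dev/null 2>&1; then
    scdidadm -L
fi
```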
Become superuser on a node that is attached to the unverified device.
Manually update the unverified device.
# scdidadm -R device

-R device
    Performs repair procedures on the specified device
See the scdidadm(1M) man page for more information.
Update the DID driver.
# scdidadm -ui
# scdidadm -r

-u
    Loads the device ID configuration table into the kernel
-i
    Initializes the DID driver
-r
    Reconfigures the database
Repeat Step 2 through Step 4 on all other nodes that are attached to the unverified device.
Return to the remaining upgrade tasks.
For a nonrolling upgrade, go to Step a in How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.
For a rolling upgrade, go to Step 4 in How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.
If accidental changes are made to the storage cabling during the upgrade, perform the following procedure to return the storage configuration to the correct state.
This procedure assumes that no physical storage was actually changed. If physical or logical storage devices were changed or replaced, instead follow the procedures in How to Handle Storage Reconfiguration During an Upgrade.
Return the storage topology to its original configuration.
Check the configuration of the devices that were flagged as possibly being replaced, including the cabling.
As superuser, update the DID driver on each node of the cluster.
# scdidadm -ui
# scdidadm -r

-u
    Loads the device ID configuration table into the kernel
-i
    Initializes the DID driver
-r
    Reconfigures the database
See the scdidadm(1M) man page for more information.
If the scdidadm command returned any error messages in Step 2, return to Step 1 to make further modifications to correct the storage configuration, then repeat Step 2.
Return to the remaining upgrade tasks.
For a nonrolling upgrade, go to Step a in How to Perform a Nonrolling Upgrade of Sun Cluster 3.1 9/04 Software.
For a rolling upgrade, go to Step 4 in How to Perform a Rolling Upgrade of Sun Cluster 3.1 9/04 Software.
This section provides procedures to upgrade the Sun Cluster–module software for Sun Management Center and procedures to upgrade both Sun Management Center software and the Sun Cluster–module software.
Perform the following steps to upgrade Sun Cluster-module software on the Sun Management Center server machine, help-server machine, and console machine.
If you intend to upgrade the Sun Management Center software itself, do not perform this procedure. Instead, go to SPARC: How to Upgrade Sun Management Center Software to upgrade the Sun Management Center software and the Sun Cluster module.
As superuser, remove the existing Sun Cluster–module packages.
Use the pkgrm(1M) command to remove all Sun Cluster–module packages from all locations that are listed in the following table.
# pkgrm module-package
| Location | Module Package to Remove |
| --- | --- |
| Sun Management Center console machine | SUNWscscn |
| Sun Management Center server machine | SUNWscssv |
| Sun Management Center help-server machine | SUNWscshl |
Sun Cluster-module software on the cluster nodes was already upgraded during the cluster-framework upgrade.
As superuser, reinstall Sun Cluster–module packages from the Sun Cluster 3.1 9/04 CD-ROM to the locations that are listed in the following table.
In the CD-ROM path, the value of arch is sparc or x86, and the value of ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
# pkgadd -d . module-package
| Location | Module Package to Install |
| --- | --- |
| Sun Management Center console machine | SUNWscshl |
| Sun Management Center server machine | SUNWscssv |
| Sun Management Center help-server machine | SUNWscshl |
Note that you install the help-server package SUNWscshl on both the console machine and the help-server machine. Also, you do not upgrade to a new SUNWscscn package on the console machine.
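The remove-and-reinstall sequence on, for example, the server machine can be sketched as follows; sparc and Solaris_8 are example values for arch and ver, chosen for illustration.

```shell
# Hedged sketch of the module package replacement on the Sun Management
# Center server machine; the CD-ROM path uses example arch/ver values.
if [ -d /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages ]; then
    pkgrm SUNWscssv
    cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages/
    pkgadd -d . SUNWscssv
fi
```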
Perform the following steps to upgrade from Sun Management Center 2.1.1 to either Sun Management Center 3.0 software or Sun Management Center 3.5 software.
Have available the following items:
Sun Cluster 3.1 9/04 CD-ROM or the path to the CD-ROM image.
You use the CD-ROM to reinstall the Sun Cluster 3.1 9/04 version of the Sun Cluster–module packages after you upgrade Sun Management Center software.
Sun Management Center documentation.
Sun Management Center patches and Sun Cluster–module patches, if any.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
Stop any Sun Management Center processes.
If the Sun Management Center console is running, exit the console.
In the console window, choose Exit from the File menu.
On each Sun Management Center agent machine (cluster node), stop the Sun Management Center agent process.
# /opt/SUNWsymon/sbin/es-stop -a
On the Sun Management Center server machine, stop the Sun Management Center server process.
# /opt/SUNWsymon/sbin/es-stop -S
As superuser, remove Sun Cluster–module packages.
Use the pkgrm(1M) command to remove all Sun Cluster–module packages from all locations that are listed in the following table.
# pkgrm module-package
| Location | Module Package to Remove |
| --- | --- |
| Each cluster node | SUNWscsam, SUNWscsal |
| Sun Management Center console machine | SUNWscscn |
| Sun Management Center server machine | SUNWscssv |
| Sun Management Center help-server machine | SUNWscshl |
If you do not remove the listed packages, the Sun Management Center software upgrade might fail because of package dependency problems. You reinstall these packages in Step 5, after you upgrade Sun Management Center software.
Upgrade the Sun Management Center software.
Follow the upgrade procedures in your Sun Management Center documentation.
As superuser, reinstall Sun Cluster–module packages from the Sun Cluster 3.1 9/04 CD-ROM to the locations that are listed in the following table.
In the CD-ROM path, the value of arch is sparc or x86, and the value of ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
# pkgadd -d . module-package
| Location | Module Package to Install |
| --- | --- |
| Each cluster node | SUNWscsam, SUNWscsal |
| Sun Management Center server machine | SUNWscssv |
| Sun Management Center console machine | SUNWscshl |
| Sun Management Center help-server machine | SUNWscshl |
You install the help-server package SUNWscshl on both the console machine and the help-server machine.
Apply any Sun Management Center patches and any Sun Cluster–module patches to each node of the cluster.
Restart Sun Management Center agent, server, and console processes.
Follow procedures in SPARC: How to Start Sun Management Center.
Load the Sun Cluster module.
Follow procedures in SPARC: How to Load the Sun Cluster Module.
If the Sun Cluster module was previously loaded, unload the module and then reload it to clear all cached alarm definitions on the server. To unload the module, choose Unload Module from the Module menu on the console's Details window.