This chapter provides step-by-step procedures to upgrade a two-node Sun Cluster 2.2 configuration to Sun Cluster 3.0 Update 2 (12/01) software, or to upgrade a Sun Cluster 3.0 7/01 (Update 1) configuration to the Sun Cluster 3.0 12/01 update release.
The following step-by-step instructions are in this chapter.
"How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration"
"How to Upgrade to a Sun Cluster 3.0 Software Update Release"
For overview information about planning your Sun Cluster 3.0 configuration, see Chapter 1, Planning the Sun Cluster Configuration. For a high-level description of the related procedures for Sun Cluster 2.2-to-Sun Cluster 3.0 upgrade, see "Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Update 2 Software".
Perform the following tasks to upgrade your two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 Update 2 (12/01) software. To upgrade Sun Cluster 3.0 7/01 (Update 1) software to Sun Cluster 3.0 12/01 software, go to "Upgrading to a Sun Cluster 3.0 Software Update Release".
Table 3-1 Task Map: Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software
Task | For Instructions, Go To ...
---|---
Read upgrade conditions and restrictions, and plan a root disk partitioning scheme to support Sun Cluster 3.0 12/01 software. | "Overview of Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 12/01 Software"
Take the cluster out of production. For VERITAS Volume Manager (VxVM), also disable shared CCD. |
If your cluster uses VxVM, deport disk groups and remove VxVM software packages. | "How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration"
Upgrade to the Solaris 8 operating environment if necessary, add a new /globaldevices file system, and change file system allocations to support Sun Cluster 3.0 12/01 software. If your cluster uses Solstice DiskSuite software, also remove mediators and upgrade Solstice DiskSuite software. |
Upgrade to Sun Cluster 3.0 12/01 framework software. If your cluster uses Solstice DiskSuite software, also recreate mediators. |
Update the PATH and MANPATH. |
Upgrade to Sun Cluster 3.0 12/01 data services software. If necessary, upgrade third-party applications. |
Assign a quorum device, finish the cluster software upgrade, and start device groups and data services. If your cluster uses VERITAS Volume Manager (VxVM), reinstall VxVM software packages and import and register disk groups. If your cluster uses Solstice DiskSuite software, restore mediators. |
Verify that all nodes have joined the cluster. |
This section provides conditions, restrictions, and planning guidelines for upgrading from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software.
The following conditions must be met to upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software.
The cluster must have exactly two nodes and be a supported configuration for Sun Cluster 3.0 12/01 software. The upgrade does not support clusters of three or more nodes.
Only Ethernet adapters are supported. Transport adapters must have a transmission rate of 100 Mbit/sec or greater.
All cluster hardware must be stable and working properly.
All third-party applications must be functioning properly.
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 operating environment that supports Sun Cluster 3.0 12/01 software.
You must upgrade all Sun Cluster software, both framework and data services, at the same time.
You cannot upgrade directly to Sun Cluster 3.0 12/01 software from Solstice HA 1.3, Sun Cluster 2.0, or Sun Cluster 2.1 software.
Sun Cluster 3.0 12/01 software does not support converting from one volume manager product to another during upgrade.
The upgrade from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software cannot be reversed after the scinstall(1M) command has been started on a node, even if the command does not complete successfully. To restart a failed upgrade, you must first reinstall Sun Cluster 2.2 software on the node.
To support Sun Cluster 3.0 12/01 software, you probably need to change your current system disk layout. Consider the following when planning your new partitioning scheme.
Global devices namespace - On each node you must create a file system of at least 100 Mbytes and set its mount point as /globaldevices. This file system is converted during upgrade to the appropriate global device namespace. If necessary, you can remove some of the swap space for this purpose, or use an external disk that is not shared with any other node.
Mirrored root - If your root disks are mirrored, you must unmirror them before you modify partitions. The mirror can be used to recover the original configuration if the upgrade procedure fails. See your volume manager documentation for information.
Root (/) file system allocation - If you intend to upgrade your configuration to the Solaris 8 operating environment, you probably need to increase the size of your root (/) partition on the root disks of all Sun Cluster nodes.
See "System Disk Partitions" for more information about disk space requirements to support Sun Cluster 3.0 12/01 software.
Before you upgrade the software, take the cluster out of production.
Have available the CD-ROMs, documentation, and patches for all the software products you are upgrading.
Solaris 8 operating environment
Solstice DiskSuite software or VERITAS Volume Manager
Sun Cluster 3.0 12/01 framework
Sun Cluster 3.0 12/01 data services (agents)
Third-party applications
Solstice DiskSuite software and documentation are now part of the Solaris 8 product.
These procedures assume you are installing from CD-ROMs. If you are installing from a network, ensure that the CD-ROM image for each software product is loaded on the network.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
Notify users that the cluster will be down.
Become superuser on each node of the cluster.
Search the /var/adm/messages log for unresolved error or warning messages.
Correct any problems.
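One way to perform the log search is with grep; the log excerpt below is fabricated purely for illustration, and on a real cluster node you would run the grep against /var/adm/messages itself:

```shell
# Fabricated messages-log excerpt, used only to illustrate the search;
# on a real node, run the grep against /var/adm/messages instead.
cat > /tmp/messages.sample <<'EOF'
Jan 10 12:00:01 phys-schost-1 unix: NOTICE: system booted
Jan 10 12:05:42 phys-schost-1 scsi: WARNING: /sbus@1f,0/SUNW,fas@e: timeout
Jan 10 12:06:03 phys-schost-1 cluster: error reading configuration database
EOF

# Case-insensitive search for unresolved error and warning messages.
grep -i -e 'error' -e 'warning' /tmp/messages.sample
```

Each matching line identifies a message to investigate before proceeding with the upgrade.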
Verify that no logical hosts are in the maintenance state.
Become superuser on a node of the cluster.
Use the hastat(1M) command to display the status of the cluster.
# hastat
HIGH AVAILABILITY CONFIGURATION AND STATUS
-------------------------------------------
...
LOGICAL HOSTS IN MAINTENANCE STATE
If the screen output displays NONE, no logical hosts are in the maintenance state. Go to Step 6.
If a logical host is in the maintenance state, use the haswitch(1M) command to perform switchover.
# haswitch hostname logical-hostname
hostname - Specifies the name of the node that is to own the logical host
logical-hostname - Specifies the name of the logical host
Run the hastat command to verify the switchover completed successfully.
Ensure that the size of each logical host administrative file system is at least 10 Mbytes.
# df -k /logical-hostname
A logical host administrative file system that is not the required minimum size of 10 Mbytes will not be mountable after upgrade to Sun Cluster 3.0 12/01 software. If a logical host administrative file system is smaller than 10 Mbytes, follow your volume manager documentation procedure for growing this file system.
Back up your system.
Ensure that all users are logged off the system before you back it up.
(VxVM only) Disable the shared Cluster Configuration Database (CCD).
From either node, create a backup copy of the shared CCD.
# ccdadm -c backup-filename
See the ccdadm(1M) man page for more information.
On each node of the cluster, remove the shared CCD.
# scconf clustername -S none
On each node, run the mount(1M) command to determine on which node the ccdvol is mounted.
The ccdvol entry looks similar to the following.
# mount
...
/dev/vx/dsk/sc_dg/ccdvol /etc/opt/SUNWcluster/conf/ccdssa ufs suid,rw,largefiles,dev=27105b8 982479320
Run the cksum(1) command on each node to ensure that the ccd.database file is identical on both nodes.
# cksum ccd.database
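The comparison itself can be illustrated as follows; the two sample files here stand in for the copies of ccd.database on each node, which on a real cluster you would checksum in place:

```shell
# Illustrative check using sample stand-ins for ccd.database; on a real
# cluster, run "cksum ccd.database" on each node and compare the results.
echo "sample CCD contents" > /tmp/ccd.node1
echo "sample CCD contents" > /tmp/ccd.node2

# The first field of each output line is the CRC checksum; identical
# checksums (and sizes) mean the two copies are identical.
cksum /tmp/ccd.node1 /tmp/ccd.node2
```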
If the ccd.database files are different, from either node restore the shared CCD backup that you created in Step a.
# ccdadm -r backup-filename
Stop the Sun Cluster 2.2 software on the node on which the ccdvol is mounted.
# scadmin stopnode
From the same node, unmount the ccdvol.
# umount /etc/opt/SUNWcluster/conf/ccdssa
Stop the Sun Cluster 2.2 software on each node of the cluster.
# scadmin stopnode
Run the hastat command to verify that no nodes are in the cluster.
Does the cluster use VERITAS Volume Manager? If yes, uninstall the VxVM software as described in "How to Uninstall VERITAS Volume Manager Software From a Sun Cluster 2.2 Configuration". If no, proceed to upgrade or prepare the Solaris operating environment.
If your cluster uses VERITAS Volume Manager (VxVM), perform this procedure on each node of the cluster to uninstall the VxVM software. Existing disk groups are retained and automatically reimported after you have upgraded all software.
To upgrade to Sun Cluster 3.0 12/01 software, you must remove VxVM software and later reinstall it, regardless of whether you have the latest version of VxVM installed.
Become superuser on a cluster node.
Uninstall VxVM.
Follow procedures in your VxVM documentation. This process involves the following tasks.
Deport all VxVM disk groups. Ensure that disks containing data to be preserved are not used for other purposes during the upgrade.
Unencapsulate the root disk, if it is encapsulated.
Shut down VxVM.
Remove all installed VxVM software packages.
Remove the VxVM device namespace.
# rm -rf /dev/vx
Upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 12/01 software.
Perform this procedure on each node in the cluster to upgrade or prepare the Solaris operating environment to support Sun Cluster 3.0 12/01 software.
Become superuser on the cluster node.
If your volume manager is Solstice DiskSuite and you are using mediators, unconfigure mediators.
Run the following command to verify that no mediator data problems exist.
# medstat -s setname
-s setname - Specifies the diskset name
If the value in the Status field is Bad, repair the affected mediator host by following the procedure "How to Fix Bad Mediator Data".
See the medstat(1M) man page for more information.
List all mediators.
Use this information to determine which node, if any, has ownership of the diskset from which you will remove mediators.
# metaset -s setname
Save this information for when you restore the mediators during the procedure "How to Upgrade Cluster Software Packages".
If no node has ownership, take ownership of the diskset.
# metaset -s setname -t
-t - Takes ownership of the diskset
Unconfigure all mediators.
# metaset -s setname -d -m mediator-host-list
-s setname - Specifies the diskset name
-d - Deletes from the diskset
-m mediator-host-list - Specifies the name of the node to remove as a mediator host for the diskset
See the mediator(7) man page for further information about mediator-specific options to the metaset command.
Remove the mediator software.
# pkgrm SUNWmdm
Does your configuration currently run Solaris 8 software?
If no, go to Step 4.
If yes, perform the following steps.
Create a file system of at least 100 Mbytes and set its mount point as /globaldevices.
The /globaldevices file system is necessary for Sun Cluster 3.0 12/01 software installation to succeed.
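The matching /etc/vfstab entry might look like the following. The slice name c0t0d0s4 is an assumption; substitute whatever slice you freed, for example from swap. The example writes to a scratch file purely for illustration; on a real node the line belongs in /etc/vfstab:

```shell
# Hypothetical vfstab entry for the new 100-Mbyte /globaldevices file system.
# Written to a scratch file here; on a real node this line goes in /etc/vfstab.
cat > /tmp/vfstab.globaldevices <<'EOF'
/dev/dsk/c0t0d0s4       /dev/rdsk/c0t0d0s4      /globaldevices  ufs     2       yes     -
EOF
cat /tmp/vfstab.globaldevices
```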
Reallocate space in other partitions as needed to support Sun Cluster 3.0 12/01 software.
See "System Disk Partitions" for guidelines.
Go to Step 6.
Determine which procedure to use to upgrade to Solaris 8 software.
Volume Manager | Procedure to Use | For Instructions, Go To ...
---|---|---
Solstice DiskSuite | Upgrading both Solaris and Solstice DiskSuite software | Solstice DiskSuite installation documentation
VxVM | Performing a standard Solaris software installation | Solaris 8 installation documentation
Upgrade to Solaris 8 software, following the procedure you selected in Step 4.
During installation, make the following changes to the root disk partitioning scheme.
Create a file system of at least 100 Mbytes and set its mount point as /globaldevices. The /globaldevices file system is necessary for Sun Cluster 3.0 12/01 software installation to succeed.
Reallocate space in other partitions as needed to support Sun Cluster 3.0 12/01 software.
See "System Disk Partitions" for partitioning guidelines.
The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.
Install any Solaris software patches.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
Install any hardware-related patches.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
For Solstice DiskSuite software, install any Solstice DiskSuite software patches.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
Upgrade to Sun Cluster 3.0 12/01 software.
The following example shows the mediator host phys-schost-1 unconfigured from the Solstice DiskSuite diskset schost-1 before the upgrade to Solaris 8 software.
(Check mediator status)
# medstat -s schost-1
(List all mediators)
# metaset -s schost-1
(Unconfigure the mediator)
# metaset -s schost-1 -d -m phys-schost-1
(Remove mediator software)
# pkgrm SUNWmdm
(Begin software upgrade)
Perform this procedure on each node. You can perform this procedure on both nodes simultaneously if you have two copies of the Sun Cluster 3.0 12/01 CD-ROM.
The scinstall(1M) upgrade command is a two-step process: the -u begin option and the -u finish option. This procedure runs the begin option. The finish option is run in "How to Finish Upgrading Cluster Software".
Become superuser on a cluster node.
If you are installing from the CD-ROM, insert the Sun Cluster 3.0 12/01 CD-ROM into the CD-ROM drive on a node.
If the volume daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0_u2 directory.
Change to the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages directory.
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
If your volume manager is Solstice DiskSuite, install the latest Solstice DiskSuite mediator package (SUNWmdm) on each node.
# pkgadd -d . SUNWmdm
Reconfigure mediators.
Determine which node has ownership of the diskset to which you will add the mediator hosts.
# metaset -s setname
-s setname - Specifies the diskset name
If no node has ownership, take ownership of the diskset.
# metaset -s setname -t
-t - Takes ownership of the diskset
Recreate the mediators.
# metaset -s setname -a -m mediator-host-list
-a - Adds to the diskset
-m mediator-host-list - Specifies the names of the nodes to add as mediator hosts for the diskset
Repeat for each diskset.
Begin upgrade to Sun Cluster 3.0 12/01 software.
On one node, change to the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory.
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
Upgrade the cluster software framework.
Node To Upgrade | Command to Use
---|---
First node | ./scinstall -u begin -F
Second node | ./scinstall -u begin -N node1

-F - Specifies that this is the first-installed node in the cluster
-N node1 - Specifies the name of the first-installed node in the cluster, not the name of the second node to be installed
See the scinstall(1M) man page for more information.
Reboot the node.
# shutdown -g0 -y -i6
When the first node reboots into cluster mode, it establishes the cluster. The second node waits if necessary for the cluster to be established before completing its own processes and joining the cluster.
Repeat on the other cluster node.
On each node, install any Sun Cluster patches.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
Update the directory paths.
The following example shows the beginning process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software. The cluster node names are phys-schost-1, the first-installed node, and phys-schost-2, which joins the cluster that phys-schost-1 established. The volume manager is Solstice DiskSuite and both nodes are used as mediator hosts for the diskset schost-1.
(Install the latest Solstice DiskSuite mediator package on each node)
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Packages
# pkgadd -d . SUNWmdm
(Restore the mediator)
# metaset -s schost-1 -t
# metaset -s schost-1 -a -m phys-schost-1 phys-schost-2
(Begin upgrade on the first node)
phys-schost-1# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
phys-schost-1# ./scinstall -u begin -F
(Begin upgrade on the second node)
phys-schost-2# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
phys-schost-2# ./scinstall -u begin -N phys-schost-1
(Reboot each node)
# shutdown -g0 -y -i6
Perform the following tasks on each node of the cluster.
In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell before attempting to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See the Solaris system administration documentation for more information on customizing a user's work environment.
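In the C shell, such a guard can be written as follows; this is a config fragment for .cshrc, and the host name in the message is purely illustrative:

```csh
# .cshrc guard: only write to the terminal from an interactive shell.
# The $?prompt test is true only for interactive csh sessions, so data
# services that spawn non-interactive shells are not disturbed.
if ($?prompt) then
    echo "Welcome to phys-schost-1"
endif
```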
Become superuser on a cluster node.
Modify the .cshrc file PATH and MANPATH entries.
Set the PATH to include /usr/sbin and /usr/cluster/bin.
For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.
For VERITAS File System, also set your PATH to include /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs/bin, and /etc/fs/vxfs.
Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.
For Solstice DiskSuite software, also set your MANPATH to include /usr/share/man.
For VERITAS Volume Manager, also set your MANPATH to include /opt/VRTSvxvm/man. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.
For VERITAS File System, also add /opt/VRTS/man to your MANPATH.
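Taken together, the .cshrc additions for a Solstice DiskSuite configuration might look like the following sketch; a VxVM or VERITAS File System configuration would add the extra directories listed above, and the fragment assumes MANPATH is already set:

```csh
# Example .cshrc entries for a Solstice DiskSuite configuration.
# Prepend the Sun Cluster directories so cluster commands are found first.
set path = (/usr/sbin /usr/cluster/bin $path)
setenv MANPATH /usr/cluster/man:/usr/share/man:${MANPATH}
```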
(Optional) For ease of administration, set the same root password on each node, if you have not already done so.
Start a new shell to activate the environment changes.
Upgrade to Sun Cluster 3.0 12/01 data service software.
Perform this procedure on each cluster node.
Become superuser on a node of the cluster.
Upgrade applications and apply application patches as needed.
See your application documentation for installation instructions.
If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.
Add data services.
Insert the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive on the node.
Enter the scinstall(1M) utility.
# scinstall
Follow these guidelines to use the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
To add data services, type 4 (Add support for a new data service to this cluster node).
Follow the prompts to add data services.
Eject the CD-ROM.
Install any Sun Cluster data service patches.
See the Sun Cluster 3.0 12/01 Release Notes for the location of patches and installation instructions.
Repeat Step 1 through Step 4 on the other node of the cluster.
Shut down the second node to be upgraded to Sun Cluster 3.0 12/01 software.
phys-schost-2# shutdown -g0 -y -i0
Leave the second node shut down until after the first-installed node is rebooted.
Reboot the first-installed node of the cluster.
Ensure that the second node is shut down before rebooting the first-installed node. Otherwise, the second node will panic because quorum votes are not yet assigned.
phys-schost-1# shutdown -g0 -y -i6
After the first-installed node has completed booting, boot the second node.
ok boot
After both nodes are rebooted, run the scstat(1M) command from either node to verify that both nodes are cluster members.
-- Cluster Nodes --
                  Node name        Status
                  ---------        ------
  Cluster node:   phys-schost-1    Online
  Cluster node:   phys-schost-2    Online
See the scstat(1M) man page for more information about displaying cluster status.
Assign a quorum device and finish the upgrade.
This procedure finishes the scinstall(1M) upgrade process begun in "How to Upgrade Cluster Software Packages". Perform these steps on each node of the cluster.
If you must reboot the first-installed node, first shut down the cluster by using the scshutdown(1M) command, then reboot. Do not reboot the first-installed node of the cluster until after the cluster is shut down.
Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. To determine which node is the first-installed node, view quorum vote assignments by using the scconf -p command. The only node that has a quorum vote is the first-installed node.
After you complete Step 7, quorum votes are assigned and this reboot restriction is no longer necessary.
Become superuser on each node of the cluster.
Choose a shared disk to be the quorum device.
You can use any disk shared by both nodes as a quorum device. From either node, use the scdidadm(1M) command to determine the shared disk's device ID (DID) name. You specify this device name in Step 5, in the -q globaldev=DIDname option to scinstall.
# scdidadm -L
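The listing below is fabricated to show the shape of the output; a DID instance that appears under both node names (d2 here) is on a shared disk and is therefore a valid quorum device choice:

```shell
# Fabricated scdidadm -L style listing (hypothetical device names).
cat > /tmp/scdidadm.sample <<'EOF'
1  phys-schost-1:/dev/rdsk/c0t0d0  /dev/did/rdsk/d1
2  phys-schost-1:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
2  phys-schost-2:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
3  phys-schost-2:/dev/rdsk/c0t0d0  /dev/did/rdsk/d3
EOF

# A DID instance number listed more than once is accessible from both
# nodes, so the corresponding disk is shared.
awk '{print $1}' /tmp/scdidadm.sample | sort | uniq -d
```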
If your volume manager is VxVM, reinstall and configure the VxVM software on each node of the cluster, including any patches.
Otherwise, go to Step 4.
Install VxVM and create the root disk group (rootdg) as for a new installation.
To install VxVM and encapsulate the root disk, perform the procedures in "How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk". To mirror the root disk, perform the procedures in "How to Mirror the Encapsulated Root Disk".
To install VxVM and create rootdg on local, non-root disks, perform the procedures in "How to Install VERITAS Volume Manager Software Only" and in "How to Create a rootdg Disk Group on a Non-Root Disk".
If you have any existing disk groups, import them.
Perform the procedures in "How to Make an Existing Disk Group Into a Disk Device Group" in the Sun Cluster 3.0 12/01 System Administration Guide.
Create any additional disk groups.
Perform the procedures in "How to Create a New Disk Group When Encapsulating Disks" or "How to Create a New Disk Group When Initializing Disks" in the Sun Cluster 3.0 12/01 System Administration Guide.
Insert the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive on a node.
This step assumes that the volume daemon vold(1M) is running and configured to manage CD-ROM devices.
Finish the cluster software upgrade on that node.
# scinstall -u finish -q globaldev=DIDname \
-d /cdrom/scdataservices_3_0_u2 -s srvc[,srvc]
-q globaldev=DIDname - Specifies the device ID (DID) name of the quorum device
-d - Specifies the directory location of the CD-ROM image
-s srvc - Specifies the name of the data service to configure
An error message similar to the following might be generated. You can safely ignore it.
** Installing Sun Cluster - Highly Available NFS Server **
Skipping "SUNWscnfs" - already installed
Eject the CD-ROM.
Repeat Step 4 through Step 6 on the other node.
When completed on both nodes, cluster install mode is disabled and all quorum votes are assigned.
If your volume manager is Solstice DiskSuite, from either node bring pre-existing disk device groups online.
# scswitch -z -D disk-device-group -h node
-z - Performs the switch
-D disk-device-group - Specifies the name of the disk device group, which for Solstice DiskSuite software is the same as the diskset name
-h node - Specifies the name of the cluster node that serves as the primary of the disk device group
From either node, bring pre-existing data service resource groups online.
At this point, Sun Cluster 2.2 logical hosts are converted to Sun Cluster 3.0 12/01 resource groups, and the names of logical hosts are appended with the suffix -lh. For example, a logical host named lhost-1 is upgraded to a resource group named lhost-1-lh. Use these converted resource group names in the following command.
# scswitch -z -g resource-group -h node
-g resource-group - Specifies the name of the resource group to bring online
You can use the scrgadm -p command to display a list of all resource types and resource groups in the cluster. The scrgadm -pv command displays this list with more detail.
If you are using Sun Management Center to monitor your Sun Cluster configuration, install the Sun Cluster module for Sun Management Center.
Ensure that you are using the most recent version of Sun Management Center.
See your Sun Management Center documentation for installation or upgrade procedures.
Follow guidelines and procedures in "Installation Requirements for Sun Cluster Monitoring" to install the Sun Cluster module packages.
Verify that all nodes have joined the cluster.
The following example shows the finish process of upgrading a two-node cluster from Sun Cluster 2.2 to Sun Cluster 3.0 12/01 software. The cluster node names are phys-schost-1 and phys-schost-2, the device group names are dg-schost-1 and dg-schost-2, and the data service resource group names are lh-schost-1 and lh-schost-2.
(Determine the DID of the shared quorum device)
phys-schost-1# scdidadm -L
(Finish upgrade on each node)
phys-schost-1# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u2 -s nfs
phys-schost-2# scinstall -u finish -q globaldev=d1 \
-d /cdrom/scdataservices_3_0_u2 -s nfs
(Bring device groups and data service resource groups on each node online)
phys-schost-1# scswitch -z -D dg-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -g lh-schost-1 -h phys-schost-1
phys-schost-1# scswitch -z -D dg-schost-2 -h phys-schost-2
phys-schost-1# scswitch -z -g lh-schost-2 -h phys-schost-2
Perform this procedure to verify that all nodes have joined the cluster.
Become superuser on any node in the cluster.
Display cluster status.
Verify that cluster nodes are online and that the quorum device, device groups, and data services resource groups are configured and online.
# scstat
See the scstat(1M) man page for more information about displaying cluster status.
On each node, display a list of all devices the system checks to verify their connectivity to the cluster nodes.
The output on each node should be the same.
# scdidadm -L
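One way to confirm that the listings match is to capture the output from each node and diff the files; the sample listings below are fabricated, and on a real cluster each file would hold the output of scdidadm -L run on one node:

```shell
# Illustrative comparison using fabricated listings; on a real cluster each
# file would hold "scdidadm -L" output captured on one node.
cat > /tmp/did.node1 <<'EOF'
1  phys-schost-1:/dev/rdsk/c0t0d0  /dev/did/rdsk/d1
2  phys-schost-1:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
EOF
cp /tmp/did.node1 /tmp/did.node2

# diff prints nothing and exits 0 when the two listings are identical.
diff /tmp/did.node1 /tmp/did.node2 && echo "device lists match"
```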
The cluster upgrade is complete. You can now return the cluster to production.
Use the following procedure to upgrade Sun Cluster 3.0 7/01 (Update 1) software to the Sun Cluster 3.0 12/01 update release. To upgrade from Sun Cluster 2.2 software, see "Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Update 2 Software".
You cannot use this procedure to upgrade software from more than one release prior to the current release. For example, you can upgrade from the Update 1 release to the Update 2 release, but you cannot upgrade from the GA release directly to the Update 2 release. To upgrade from the Sun Cluster 3.0 GA release to the Sun Cluster 3.0 7/01 (Update 1) release, follow instructions in the README file on the Sun Cluster 3.0 7/01 CD-ROM. This README file is located in the /cdrom/suncluster_3_0_u1/SunCluster_3.0/Tools/Upgrade/ directory.
Do not use any new features of the update release, install new data services, or issue any administrative configuration commands until all nodes of the cluster are successfully upgraded.
Get any necessary patches for your cluster configuration.
In addition to Sun Cluster software patches, get any patches for your hardware, Solaris operating environment, volume manager, applications, and any other software products currently running on your cluster. See the Sun Cluster 3.0 12/01 Release Notes for the location of Sun patches and installation instructions.
From any node, view the current status of the cluster.
Save the output as a baseline for comparison.
% scstat
% scrgadm -pv[v]
See the scstat(1M) and scrgadm(1M) man pages for more information.
Become superuser on a node of the cluster to upgrade.
Evacuate all resource groups and device groups running on the node to upgrade.
# scswitch -S -h node
-S - Evacuates all resource groups and device groups
-h node - Specifies the name of the node from which to evacuate resource groups and device groups
See the scswitch(1M) man page for more information.
Verify that the evacuation completed successfully.
# scstat -g -D
Back up the system disk and data.
Do you intend to upgrade Solaris 8 software?
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 operating environment to support Sun Cluster 3.0 12/01 software.
(Optional) Upgrade Solaris 8 software.
Temporarily comment out all global device entries in the /etc/vfstab file.
Do this to prevent the Solaris upgrade from attempting to mount the global devices.
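The edit can be done with a one-line sed command, sketched below against a fabricated vfstab excerpt; on a real node you would edit /etc/vfstab itself, and the device names shown are assumptions:

```shell
# Fabricated /etc/vfstab excerpt, written to a scratch file for illustration;
# on a real node you would edit /etc/vfstab itself.
cat > /tmp/vfstab.sample <<'EOF'
/dev/dsk/c0t0d0s0       /dev/rdsk/c0t0d0s0      /       ufs     1       no      -
/dev/md/dsk/d40 /dev/md/rdsk/d40        /global/.devices/node@1 ufs     2       no      global
/dev/md/dsk/d50 /dev/md/rdsk/d50        /global/oracle  ufs     2       yes     global,logging
EOF

# Prefix every global-device entry with a comment character so the Solaris
# upgrade does not try to mount those file systems.
sed '/global/s/^/#/' /tmp/vfstab.sample > /tmp/vfstab.commented
cat /tmp/vfstab.commented
```

Keep a copy of the original file so the entries can be uncommented exactly as they were after the upgrade.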
Shut down the node to upgrade.
# shutdown -y -g0
ok
Follow instructions in the installation guide for the Solaris 8 Maintenance Update version you want to upgrade to.
When prompted to reboot, reboot the node in non-cluster mode.
Include the double dashes (--) and two quotation marks (") in the command.
# reboot -- "-x"
Install any Solaris software patches and hardware-related patches, and download any needed firmware contained in the hardware patches.
If any patches require rebooting, reboot the node in non-cluster mode as described in Step d.
Uncomment all global device entries in the /etc/vfstab file that you commented out in Step a.
Upgrade to the Sun Cluster 3.0 update software.
If you are installing from the CD-ROM, insert the Sun Cluster 3.0 12/01 CD-ROM into the CD-ROM drive on the node.
If the volume daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0_u2 directory.
Change to the Tools directory.
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
Install the Sun Cluster 3.0 Update 2 patches.
# ./scinstall -u update
See the scinstall(1M) man page for more information.
Install any Sun Cluster software patches.
Reboot the node into the cluster.
# reboot
Verify that each Sun Cluster 3.0 Update 2 software patch is installed correctly.
View the upgrade log file referenced at the end of the upgrade output messages.
Verify the status of the cluster configuration.
% scstat
% scrgadm -pv[v]
Output should be the same as for Step 2.
Repeat Step 3 through Step 12 on each remaining cluster node.
Do you intend to upgrade any data services?
If yes, go to Step 15.
If no, stop. The software upgrade is complete.
Take offline all resource groups for the data services you will upgrade.
# scswitch -F -g resource-grp
-F - Takes offline
-g resource-grp - Specifies the name of the resource group to take offline
Upgrade applications as needed.
Follow the instructions provided in your third-party documentation.
For each node on which data services are installed, upgrade to the Sun Cluster 3.0 data services update software.
If you are installing from the CD-ROM, insert the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive on the node.
Install the Sun Cluster 3.0 data services update patches.
Use one of the following methods.
To upgrade one or more specified data services, type the following command.
# scinstall -u update -s srvc[,srvc,...] -d cdrom-image
To upgrade all data services present on the node, type the following command.
# scinstall -u update -s all -d cdrom-image
This command assumes that updates for all installed data services exist on the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.
Install any Sun Cluster data service software patches.
Verify that each data service update patch is installed successfully.
View the upgrade log file referenced at the end of the upgrade output messages.
Bring back online the resource groups for each upgraded data service.
# scswitch -Z -g resource-grp
-Z - Brings online
Verify the status of the cluster configuration.
% scstat
% scrgadm -pv[v]
Output should be the same as for Step 2.
Restart any applications.
Follow the instructions provided in your third-party documentation.