This chapter provides the following information and procedures to upgrade a Sun Cluster 3.0 or 3.1 configuration to Sun Cluster 3.2 software:
Observe the following requirements and software-support guidelines when you upgrade to Sun Cluster 3.2 software:
Upgrade of x86 based systems - On x86 based systems, you cannot upgrade from the Solaris 9 OS to the Solaris 10 OS. You must reinstall the cluster with a fresh installation of the Solaris 10 OS and Sun Cluster 3.2 software for x86 based systems. Follow procedures in Chapter 2, Installing Software on the Cluster.
Minimum Sun Cluster software version - Sun Cluster 3.2 software supports the following direct upgrade paths:
SPARC: From version 3.0, including update releases, to version 3.2 - Use the standard upgrade method only.
SPARC: From version 3.1, 3.1 10/03, 3.1 4/04, or 3.1 9/04 to version 3.2 - Use the standard, dual-partition, or live upgrade method.
From version 3.1 8/05 to version 3.2 - Use the standard, dual-partition, or live upgrade method.
See Choosing a Sun Cluster Upgrade Method for additional requirements and restrictions for each upgrade method.
Minimum Solaris OS - The cluster must run on or be upgraded to at least Solaris 9 9/05 software or Solaris 10 11/06 software, including the most current required patches. The Solaris 9 OS is supported only on SPARC based platforms.
Supported hardware - The cluster hardware must be a supported configuration for Sun Cluster 3.2 software. Contact your Sun representative for information about current supported Sun Cluster configurations.
Architecture changes during upgrade - Sun Cluster 3.2 software does not support upgrade between architectures.
Software migration - Do not migrate from one type of software product to another product during Sun Cluster upgrade. For example, migration from Solaris Volume Manager disk sets to VxVM disk groups or from UFS file systems to VxFS file systems is not supported during Sun Cluster upgrade. Perform only software configuration changes that are specified by upgrade procedures of an installed software product.
Global-devices partition size - If the size of your /global/.devices/node@nodeid partition is less than 512 Mbytes but it provides sufficient space for existing device nodes, you do not need to change the file-system size. The 512-Mbyte minimum applies to new installations of Sun Cluster 3.2 software. However, you must still ensure that the global-devices file system has ample space and ample inode capacity for existing devices and for any new devices that you intend to configure. Certain configuration changes, such as adding disks, disk volumes, or metadevices, might require increasing the partition size to provide sufficient additional inodes.
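A quick way to check that headroom is sketched below. The df invocations and the default path are illustrative assumptions, not part of the documented procedure; on a cluster node, point FS at /global/.devices/node@N for your node ID.

```shell
# Sketch: show capacity and free inodes for a file system.
# FS defaults to / so the commands run anywhere; substitute
# /global/.devices/node@N (your node ID) on a cluster node.
FS=${FS:-/}
df -Pk "$FS"                                 # kilobytes: size, used, available
df -Pi "$FS" 2>/dev/null || df -o i "$FS"    # inode usage (Solaris UFS form as fallback)
```

If the reported free space or free inodes look tight for the devices you intend to add, plan to grow the partition before you upgrade.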
Data services - You must upgrade all Sun Cluster data service software to version 3.2 and migrate resources to the new resource-type version. Sun Cluster 3.0 and 3.1 data services are not supported on Sun Cluster 3.2 software.
Upgrading to compatible versions - You must upgrade all software on the cluster nodes to a version that is supported by Sun Cluster 3.2 software. For example, if a version of a data service is supported on Sun Cluster 3.1 software but is not supported on Sun Cluster 3.2 software, you must upgrade that data service to the version that is supported on Sun Cluster 3.2 software, if such a version exists. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for information about supported products.
Converting from NAFO to IPMP groups - For upgrade from a Sun Cluster 3.0 release, have available the test IP addresses to use with your public-network adapters when NAFO groups are converted to IP network multipathing groups. The scinstall upgrade utility prompts you for a test IP address for each public-network adapter in the cluster. A test IP address must be on the same subnet as the primary IP address for the adapter.
See IPMP in System Administration Guide: IP Services (Solaris 9 or Solaris 10) for information about test IP addresses for IPMP groups.
Downgrade - Sun Cluster 3.2 software does not support any downgrade of Sun Cluster software.
Limitation of scinstall for data-service upgrades - The scinstall upgrade utility only upgrades those data services that are provided with Sun Cluster 3.2 software. You must manually upgrade any custom or third-party data services.
Choose from the following methods to upgrade your cluster to Sun Cluster 3.2 software:
Standard upgrade – In a standard upgrade, you shut down the cluster before you upgrade the cluster nodes. You return the cluster to production after all nodes are fully upgraded. Use this method if you are upgrading from a Sun Cluster 3.0 release.
Dual-partition upgrade - In a dual-partition upgrade, you divide the cluster into two groups of nodes. You bring down one group of nodes and upgrade those nodes. The other group of nodes continues to provide services. After you complete upgrade of the first group of nodes, you switch services to those upgraded nodes. You then upgrade the remaining nodes and boot them back into the rest of the cluster. The cluster outage time is limited to the amount of time needed for the cluster to switch over services to the upgraded partition.
Observe the following additional restrictions and requirements for the dual-partition upgrade method:
Sun Cluster HA for Sun Java System Application Server EE (HADB) - If you are running the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service with Sun Java System Application Server EE (HADB) software as of version 4.4, you must shut down the database before you begin the dual-partition upgrade. The HADB database does not tolerate the loss of membership that would occur when a partition of nodes is shut down for upgrade. This requirement does not apply to versions before version 4.4.
Data format changes - Do not use the dual-partition upgrade method if you intend to upgrade an application that requires that you change its data format during the application upgrade. The dual-partition upgrade method is not compatible with the extended downtime that is needed to perform data transformation.
Location of application software - Applications must be installed on nonshared storage. Shared storage is not accessible to a partition that is in noncluster mode. Therefore, it is not possible to upgrade application software that is located on shared storage.
Division of storage - Each shared storage device must be connected to a node in each group.
Single-node clusters - Dual-partition upgrade is not available to upgrade a single-node cluster. Use the standard upgrade or live upgrade method instead.
Minimum Sun Cluster version - The cluster must be running a Sun Cluster 3.1 release before you begin the dual-partition upgrade.
Configuration changes - Do not make cluster configuration changes that are not documented in the upgrade procedures. Such changes might not be propagated to the final cluster configuration. Also, validation attempts of such changes would fail because not all nodes are reachable during a dual-partition upgrade.
Live upgrade - A live upgrade maintains your previous cluster configuration until you have upgraded all nodes and you commit to the upgrade. If the upgraded configuration causes a problem, you can revert to your previous cluster configuration until you can rectify the problem.
Observe the following additional restrictions and requirements for the live upgrade method:
Minimum Sun Cluster version - The cluster must be running a Sun Cluster 3.1 release before you begin the live upgrade.
Minimum version of Live Upgrade software - To use the live upgrade method, you must use the Solaris Live Upgrade packages from at least the Solaris 9 9/04 or Solaris 10 release. This requirement applies to clusters running on all Solaris OS versions, including Solaris 8 software. The live upgrade procedures provide instructions for upgrading these packages.
Dual-partition upgrade - The live upgrade method cannot be used in conjunction with a dual-partition upgrade.
Non-global zones - The live upgrade method does not support the upgrade of clusters that have non-global zones configured on any of the cluster nodes. Instead, use the standard upgrade or dual-partition upgrade method.
Disk space - To use the live upgrade method, you must have enough spare disk space available to make a copy of each node's boot environment. You reclaim this disk space after the upgrade is complete and you have verified and committed the upgrade. For information about space requirements for an inactive boot environment, refer to Solaris Live Upgrade Disk Space Requirements in Solaris 9 9/04 Installation Guide or Allocating Disk and Swap Space in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
For overview information about planning your Sun Cluster 3.2 configuration, see Chapter 1, Planning the Sun Cluster Configuration.
This section provides the following information to upgrade to Sun Cluster 3.2 software by using the standard upgrade method:
The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Table 8–1 Task Map: Performing a Standard Upgrade to Sun Cluster 3.2 Software
Task | Instructions
---|---
1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. | Upgrade Requirements and Software Support Guidelines; Choosing a Sun Cluster Upgrade Method
2. Remove the cluster from production and back up shared data. | How to Prepare the Cluster for Upgrade (Standard)
3. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS. | How to Upgrade the Solaris OS and Volume Manager Software (Standard)
4. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. | How to Upgrade Sun Cluster 3.2 Software (Standard)
5. Verify successful completion of upgrade to Sun Cluster 3.2 software. |
6. Enable resources and bring resource groups online. Migrate existing resources to new resource types. | How to Finish Upgrade to Sun Cluster 3.2 Software
7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed. | SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center
Perform this procedure to remove the cluster from production before you perform a standard upgrade. On the Solaris 10 OS, perform all steps from the global zone only.
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Solaris OS
Sun Cluster 3.2 framework
Sun Cluster 3.2 data services (agents)
Applications that are managed by Sun Cluster 3.2 data services
VERITAS Volume Manager, if applicable
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Sun Cluster commands. This series of upgrade procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
Ensure that the cluster is functioning normally.
View the current status of the cluster by running the following command from any node.
phys-schost% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
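One way to scan the log is sketched below. The search patterns are illustrative assumptions, not an exhaustive check, and the sample log content exists only to make the sketch self-contained; on a cluster node, set LOG=/var/adm/messages.

```shell
# Sketch: pull recent warning/error lines from the messages log.
# LOG defaults to a made-up sample file so the sketch runs anywhere;
# on a cluster node use LOG=/var/adm/messages instead.
LOG=${LOG:-/var/tmp/messages.sample}
[ -f "$LOG" ] || printf '%s\n' \
    'Oct 12 14:02:11 phys-schost-1 WARNING: /dev/did/rdsk/d4: disk offline' \
    'Oct 12 14:02:12 phys-schost-1 last message repeated 1 time' > "$LOG"
grep -iE 'warning|error|fail' "$LOG" | tail -20
```

Investigate and resolve any hits before you take the cluster out of production.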
Check the volume-manager status.
Notify users that cluster services will be unavailable during the upgrade.
Become superuser on a node of the cluster.
Take each resource group offline and disable all resources.
Take offline all resource groups in the cluster, including those that are in non-global zones. Then disable all resources, to prevent the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.
If you are upgrading from Sun Cluster 3.1 software and want to use the scsetup utility, perform the following steps:
Start the scsetup utility.
phys-schost# scsetup
The scsetup Main Menu is displayed.
Type the number that corresponds to the option for Resource groups and press the Return key.
The Resource Group Menu is displayed.
Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.
Follow the prompts to take offline all resource groups and to put them in the unmanaged state.
When all resource groups are offline, type q to return to the Resource Group Menu.
Exit the scsetup utility.
Type q to back out of each submenu or press Ctrl-C.
To use the command line, perform the following steps:
Take each resource offline.
phys-schost# scswitch -F -g resource-group

-F
    Switches a resource group offline.

-g resource-group
    Specifies the name of the resource group to take offline.
From any node, list all enabled resources in the cluster.
phys-schost# scrgadm -pv | grep "Res enabled"
(resource-group:resource) Res enabled: True
Identify those resources that depend on other resources.
You must disable dependent resources first before you disable the resources that they depend on.
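The safe disable order can be derived mechanically. The sketch below is illustrative only: the resource names and the pairs file are made up, and on a real cluster you would read the dependencies from the resource properties (for example, from scrgadm -pvv output) rather than type them by hand.

```shell
# Sketch: each input line is "dependent-resource  resource-it-depends-on".
# tsort prints a safe disable order: every dependent appears before
# the resources it depends on. (Names below are hypothetical.)
cat > /var/tmp/res-deps.txt <<'EOF'
apache-rs hasp-rs
apache-rs lh-rs
EOF
tsort /var/tmp/res-deps.txt    # apache-rs is printed before hasp-rs and lh-rs
```

Disabling resources in the printed order guarantees that no resource is disabled before its dependents.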
Disable each enabled resource in the cluster.
phys-schost# scswitch -n -j resource

-n
    Disables.

-j resource
    Specifies the resource.
See the scswitch(1M) man page for more information.
Verify that all resources are disabled.
phys-schost# scrgadm -pv | grep "Res enabled"
(resource-group:resource) Res enabled: False
Move each resource group to the unmanaged state.
phys-schost# scswitch -u -g resource-group

-u
    Moves the specified resource group to the unmanaged state.

-g resource-group
    Specifies the name of the resource group to move into the unmanaged state.
Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
phys-schost# scstat
For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek™ Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Availability Suite software.
Identify the device ID and the slice that is used by the Availability Suite configuration file.
phys-schost# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS

In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
phys-schost# scstat -q

-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/dQsS  1       1        Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
If you moved the configuration data, configure Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Availability Suite software.
phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
(Optional) If you are upgrading from a version of Sun Cluster 3.0 software and do not want your ntp.conf file renamed to ntp.conf.cluster, create an ntp.conf.cluster file.
On each node, copy /etc/inet/ntp.cluster as ntp.conf.cluster.
phys-schost# cp /etc/inet/ntp.cluster /etc/inet/ntp.conf.cluster
The existence of an ntp.conf.cluster file prevents upgrade processing from renaming the ntp.conf file. The ntp.conf file will still be used to synchronize NTP among the cluster nodes.
Stop all applications that are running on each node of the cluster.
Ensure that all shared data is backed up.
If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information about mediators.
Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname

-s setname
    Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
phys-schost# scswitch -z -D setname -h node

-z
    Changes mastery.

-D setname
    Specifies the name of the disk set.

-h node
    Specifies the name of the node to become primary of the disk set.
Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list

-s setname
    Specifies the disk set name.

-d
    Deletes from the disk set.

-m mediator-host-list
    Specifies the name of the node to remove as a mediator host for the disk set.
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
From one node, shut down the cluster.
# scshutdown -g0 -y
See the scshutdown(1M) man page for more information.
Boot each node into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems, perform the following commands:
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Ensure that each system disk is backed up.
Upgrade software on each node.
To upgrade Solaris software before you perform Sun Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Standard).
You must upgrade the Solaris software to a supported release if Sun Cluster 3.2 software does not support the release of the Solaris OS that your cluster currently runs. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.
If Sun Cluster 3.2 software supports the release of the Solaris OS that you currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Standard).
Perform this procedure on each node in the cluster to upgrade the Solaris OS. On the Solaris 10 OS, perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.2 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Upgrade Sun Cluster 3.2 Software (Standard).
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.
Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) are completed.
Become superuser on the cluster node to upgrade.
If you are performing a dual-partition upgrade, the node must be a member of the partition that is in noncluster mode.
If Sun Cluster Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.
Determine whether the following Apache run-control scripts exist and are enabled or disabled:
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.
If these scripts do not exist, in Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
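The enabled/disabled distinction above can be expressed as a small check. The apache_state helper name is mine, not part of the upgrade tooling; it simply classifies a script by the case of its leading letter.

```shell
# Sketch: a run-control script is enabled when its file name starts
# with an uppercase K or S, disabled when that letter is lowercase.
apache_state() {
    case "$(basename "$1")" in
        [KS]*) echo enabled ;;
        [ks]*) echo disabled ;;
        *)     echo unknown ;;
    esac
}
apache_state /etc/rc3.d/S50apache    # -> enabled
apache_state /etc/rc3.d/s50apache    # -> disabled
```

Running the helper over the five script paths listed above tells you at a glance which ones will need renaming in Step 8.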
Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
For later reference, make a record of all entries that are already commented out.
Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
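As an illustration, the sketch below applies that edit to a made-up vfstab fragment. The sed expression is an assumption and matches any line containing the string global (so a mount point under /global also matches); review the result before applying anything like it to the real /etc/vfstab.

```shell
# Sketch, shown on a fabricated vfstab fragment: prefix "#" to any
# not-yet-commented line that mentions the global mount option.
cat > /var/tmp/vfstab.sample <<'EOF'
/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/.devices/node@1 ufs 2 no global
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /export ufs 2 yes logging
EOF
sed '/global/ s/^[^#]/#&/' /var/tmp/vfstab.sample > /var/tmp/vfstab.edited
cat /var/tmp/vfstab.edited
```

Lines that are already commented out are left untouched, which preserves the record of previously commented entries called for above.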
Determine which procedure to follow to upgrade the Solaris OS.
Volume Manager | Procedure | Location of Instructions
---|---|---
Solaris Volume Manager | Any Solaris upgrade method except the Live Upgrade method | Solaris installation documentation
VERITAS Volume Manager | “Upgrading VxVM and Solaris” | VERITAS Volume Manager installation documentation
If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 or 10 version of VxVM software as part of the Solaris upgrade process.
Upgrade the Solaris software, following the procedure that you selected in Step 5.
Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:
Reboot into noncluster mode in Step 9 to complete Solaris software upgrade.
When prompted, choose the manual reboot option.
When you are instructed to reboot a node during the upgrade process, always reboot into noncluster mode. For the boot and reboot commands, add the -x option to the command. The -x option ensures that the node reboots into noncluster mode. For example, either of the following commands boots a node into single-user noncluster mode:
On SPARC based systems, perform either of the following commands:
phys-schost# reboot -- -xs

or

ok boot -xs
If the instruction says to run the init S command, use the reboot -- -xs command instead.
On x86 based systems running the Solaris 9 OS, perform either of the following commands:
phys-schost# reboot -- -xs

or

...
                    <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>  to boot with options
or      i <ENTER>                           to enter boot interpreter
or      <ENTER>                             to boot with defaults

                    <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -xs
On x86 based systems running the Solaris 10 OS, perform the following command:
phys-schost# shutdown -g0 -y -i0
Press any key to continue
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.
If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.
To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.
phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Alternatively, you can rename the scripts to be consistent with your normal administration practices.
Reboot the node into noncluster mode.
Include the double dashes (--) in the following command:
phys-schost# reboot -- -x
If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.
Make the following changes to the procedure:
After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.
If any of the entries that you uncommented in Step 7 have become commented out again, uncomment those entries.
When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.
On SPARC based systems, perform the following command:
phys-schost# reboot -- -rx
On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -rx to the kernel boot command instead of -sx.
If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command and do not type Ctrl-D.
WARNING - Unable to repair the /global/.devices/node@1 filesystem.
Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the
shell when done to continue the boot process.

Type control-d to proceed with normal startup,
(or give root password for system maintenance): Type the root password
(Optional) SPARC: Upgrade VxFS.
Follow procedures that are provided in your VxFS documentation.
Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.
Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
Upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Standard).
To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 8 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.2 software. This procedure also upgrades required Sun Java Enterprise System shared components.
You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 8 to Solaris 10 software.
On the Solaris 10 OS, perform all steps from the global zone only.
You can perform this procedure on more than one node at the same time.
Perform the following tasks:
Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) are completed.
If you upgraded to a new marketing release of the Solaris OS, such as from Solaris 8 to Solaris 10 software, ensure that all steps in How to Upgrade the Solaris OS and Volume Manager Software (Standard) are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
Become superuser on a node of the cluster.
Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.
Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.
Determine what directory the /usr/java/ directory is symbolically linked to.
phys-schost# ls -l /usr/java
lrwxrwxrwx   1 root   other   9 Apr 19 14:05 /usr/java -> /usr/j2se/
Determine what version or versions of Java software are installed.
The following example commands display the version of their related releases of Java software.
phys-schost# /usr/j2se/bin/java -version
phys-schost# /usr/java1.2/bin/java -version
phys-schost# /usr/jdk/jdk1.5.0_06/bin/java -version
If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.
The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.
phys-schost# rm /usr/java
phys-schost# ln -s /usr/j2se /usr/java
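The version comparison can be scripted rather than checked by eye. The following is an illustrative sketch, not a Sun-provided tool: it compares a version string of the form major.minor.micro_update against the 1.5.0_06 minimum. In practice the string would be parsed from the output of java -version.

```shell
#!/bin/sh
# Sketch: test whether a Java version string such as "1.5.0_06" meets
# the Sun Cluster 3.2 minimum of 1.5.0_06. Illustrative only.
meets_minimum() {
    # Split major.minor.micro_update into numeric fields.
    set -- $(echo "$1" | tr '._' '  ')
    v1=${1:-0}; v2=${2:-0}; v3=${3:-0}; v4=${4:-0}
    if [ "$v1" -ne 1 ]; then [ "$v1" -gt 1 ]; return; fi
    if [ "$v2" -ne 5 ]; then [ "$v2" -gt 5 ]; return; fi
    if [ "$v3" -ne 0 ]; then return 0; fi
    [ "$v4" -ge 6 ]
}

meets_minimum "1.5.0_06" && echo "1.5.0_06 is sufficient"
meets_minimum "1.4.2"    || echo "1.4.2 is too old"
```

A result of "too old" means the /usr/java symbolic link must be recreated as shown in the preceding step.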
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.
Change to the installation wizard directory of the DVD-ROM.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
Start the installation wizard program.
phys-schost# ./installer
Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.
Do not use the installation wizard program to upgrade Sun Cluster software packages.
The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.
Exit the installation wizard program.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools

Start the scinstall utility in interactive mode.

phys-schost# ./scinstall
Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the Sun Java Availability Suite DVD-ROM.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.
*** Main Menu ***

Please select from one of the following (*) options:

      1) Create a new cluster or add a cluster node
      2) Configure a cluster to be JumpStarted from this install server
    * 3) Manage a dual-partition upgrade
    * 4) Upgrade this cluster node
    * 5) Print release information for this cluster node

    * ?) Help with menu options
    * q) Quit

Option:  4
The Upgrade Menu is displayed.
Type the number that corresponds to the option for Upgrade Sun Cluster framework on this cluster node and press the Return key.
Follow the menu prompts to upgrade the cluster framework.
During the Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:
Convert NAFO groups to IPMP groups but keep the original NAFO-group name.
See one of the following manuals for information about test addresses for IPMP:
Configuring Test Addresses in Administering Multipathing Groups With Multiple Physical Interfaces in System Administration Guide: IP Services (Solaris 9)
Test Addresses in System Administration Guide: IP Services (Solaris 10)
See the scinstall(1M) man page for more information about the conversion of NAFO groups to IPMP during Sun Cluster software upgrade.
Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.
Quit the scinstall utility.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
Upgrade data service packages.
You must upgrade all data services to the Sun Cluster 3.2 version.
For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.
Start the upgraded interactive scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Do not use the scinstall utility that is on the installation media to upgrade data service packages.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.
The Upgrade Menu is displayed.
Type the number that corresponds to the option for Upgrade Sun Cluster data service agents on this node and press the Return key.
Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.
You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.
Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.
Press Enter.
The Upgrade Menu is displayed.
Quit the scinstall utility.
If you have Sun Cluster HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.
If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems.
As of the Sun Cluster 3.2 release, LOFS is no longer disabled by default during Sun Cluster software installation or upgrade. To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
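Checking for and appending the entry can be done mechanically. The sketch below operates on a scratch copy rather than the live /etc/system file; on a real node you would target /etc/system itself and then reboot.

```shell
#!/bin/sh
# Sketch: ensure an /etc/system-style file carries the exclude:lofs
# entry. Runs against a scratch copy; the existing content shown is a
# stand-in, not real cluster configuration.
SYSFILE=$(mktemp /tmp/system.XXXXXX)
printf 'set maxusers=512\n' > "$SYSFILE"   # stand-in for existing content

if ! grep -q '^exclude:lofs' "$SYSFILE"; then
    echo 'exclude:lofs' >> "$SYSFILE"
fi
grep -c '^exclude:lofs' "$SYSFILE"   # prints 1: entry present exactly once
rm -f "$SYSFILE"
```

Because the script appends only when the entry is absent, running it twice still leaves a single exclude:lofs line.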
As needed, manually upgrade any custom data services that are not supplied on the product media.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Sun Cluster 3.2 framework and data-service software patches.
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.
(Optional) Reconfigure the private-network address range.
Perform this step if you want to increase or decrease the size of the IP address range that is used by the private interconnect. The IP address range that you configure must minimally support the number of nodes and private networks in the cluster. See Private Network for more information.
From one node, start the clsetup utility.
When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.
Type the number that corresponds to the option for Change IP Address Range and press the Return key.
The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.
To change either the private-network IP address or the IP address range, type yes and press the Return key.
The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.
Change or accept the private-network IP address.
To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.
The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.
To change the default private-network IP address, perform the following substeps.
Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.
The clsetup utility will prompt for the new private-network IP address.
Type the new IP address and press the Return key.
The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.
Change or accept the default private-network IP address range.
The default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster.
To accept the default IP address range, type yes and press the Return key.
Then skip to the next step.
To change the IP address range, perform the following substeps.
Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.
When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.
Enter the number of nodes and private networks that you expect to configure in the cluster.
From these numbers, the clsetup utility calculates two proposed netmasks:
The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.
The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.
Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.
Type yes in response to the clsetup utility's question about proceeding with the update.
When finished, exit the clsetup utility.
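The relationship between the netmask and the size of the private-network address range can be illustrated with a little arithmetic. The sketch below only counts the addresses that a dotted-quad netmask spans; it does not reproduce the sizing formula that clsetup applies internally when you enter node and private-network counts.

```shell
#!/bin/sh
# Sketch: count the addresses spanned by a dotted-quad netmask, to
# illustrate why declining the default 255.255.248.0 netmask changes
# the size of the private-network range. Not clsetup's own formula.
range_size() {
    set -- $(echo "$1" | tr '.' ' ')
    mask=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
    echo $(( 4294967296 - mask ))
}

range_size 255.255.248.0   # default netmask: 2048 addresses
range_size 255.255.240.0   # one bit shorter: 4096 addresses
```

A shorter netmask doubles the range with each bit, which is how the second proposed netmask accommodates twice the specified number of nodes and private networks.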
After all nodes in the cluster are upgraded, reboot the upgraded nodes.
Shut down each node.
phys-schost# shutdown -g0 -y
Boot each node into cluster mode.
On SPARC based systems, do the following:
ok boot
On x86 based systems, do the following:
When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
Go to How to Verify Upgrade of Sun Cluster 3.2 Software
This section provides the following information to upgrade from a Sun Cluster 3.1 release to Sun Cluster 3.2 software by using the dual-partition upgrade method:
The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Table 8–2 Task Map: Performing a Dual-Partition Upgrade to Sun Cluster 3.2 Software
| Task | Instructions |
|---|---|
| 1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. | |
| 2. Partition the cluster into two groups of nodes. | |
| 3. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS. | How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition) |
| 4. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups. | |
| 5. Verify successful completion of upgrade to Sun Cluster 3.2 software. | |
| 6. Enable resources and bring resource groups online. Optionally, migrate existing resources to new resource types. | |
| 7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed. | SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center |
Perform this procedure to prepare the cluster for a dual-partition upgrade. These procedures will refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition will continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot the nodes into cluster mode to rejoin the nodes from the first partition.
If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard) or How to Prepare the Cluster for Upgrade (Live Upgrade).
On the Solaris 10 OS, perform all steps from the global zone only.
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Solaris OS
Sun Cluster 3.2 framework
Sun Cluster 3.2 data services (agents)
Applications that are managed by Sun Cluster 3.2 data services
VERITAS Volume Manager, if applicable
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Sun Cluster commands. This series of upgrade procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
Ensure that the cluster is functioning normally.
View the current status of the cluster by running the following command from any node.
% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
Check the volume-manager status.
If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
Become superuser on a node of the cluster.
For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Availability Suite software.
Identify the device ID and the slice that is used by the Availability Suite configuration file.
phys-schost# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS

In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
phys-schost# scstat -q
-- Quorum Votes by Device --
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/dQsS  1       1        Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
If you moved the configuration data, configure Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Availability Suite software.
phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information about mediators.
Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname

-s setname: Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
phys-schost# scswitch -z -D setname -h node

-z: Changes mastery.
-D setname: Specifies the name of the disk set.
-h node: Specifies the name of the node to become primary of the disk set.
Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list

-s setname: Specifies the disk set name.
-d: Deletes from the disk set.
-m mediator-host-list: Specifies the name of the node to remove as a mediator host for the disk set.
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
If you are running the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service with Sun Java System Application Server EE (HADB) software as of version 4.4, disable the HADB resource and shut down the HADB database.
If you are running a version of Sun Java System Application Server EE (HADB) software before 4.4, you can skip this step.
When one cluster partition is out of service during upgrade, there are not enough nodes in the active partition to meet HADB membership requirements. Therefore, you must stop the HADB database and disable the HADB resource before you begin to partition the cluster.
phys-schost# hadbm stop database-name
phys-schost# scswitch -n -j hadb-resource
For more information, see the hadbm(1M) man page.
If you are upgrading a two-node cluster, skip to Step 16.
Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine which nodes each partition will contain, then interrupt the partitioning process. You will then compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Start the scinstall utility in interactive mode.
phys-schost# ./scinstall
Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the Sun Java Availability Suite DVD-ROM.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Manage a dual-partition upgrade and press the Return key.
*** Main Menu ***

Please select from one of the following (*) options:

      1) Create a new cluster or add a cluster node
      2) Configure a cluster to be JumpStarted from this install server
    * 3) Manage a dual-partition upgrade
    * 4) Upgrade this cluster node
    * 5) Print release information for this cluster node

    * ?) Help with menu options
    * q) Quit

Option:  3
The Manage a Dual-Partition Upgrade Menu is displayed.
Type the number that corresponds to the option for Display and select possible partitioning schemes and press the Return key.
Follow the prompts to perform the following tasks:
Display the possible partitioning schemes for your cluster.
Choose a partitioning scheme.
Choose which partition to upgrade first.
When you are prompted, Do you want to begin the dual-partition upgrade?, stop and do not respond yet, but do not exit the scinstall utility. You will respond to this prompt in Step 18 of this procedure.
Make note of which nodes belong to each partition in the partition scheme.
On another node of the cluster, become superuser.
Ensure that any critical data services can switch over between partitions.
For a two-node cluster, each node will be the only node in its partition.
When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.
Display the node list of each resource group that you require to remain in service during the entire upgrade.
phys-schost# scrgadm -pv -g resourcegroup | grep "Res Group Nodelist"

-p: Displays configuration information.
-v: Displays in verbose mode.
-g resourcegroup: Specifies the name of the resource group.
If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.
phys-schost# scrgadm -a -g resourcegroup -h nodelist

-a: Adds a new configuration.
-h nodelist: Specifies a comma-separated list of node names.
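The node-list check can be scripted when many resource groups are involved. The sketch below takes a resource group's node list and the two planned partitions as comma-separated strings and reports whether each partition contributes at least one potential primary. All node and partition names here are hypothetical examples, not output from a real cluster.

```shell
#!/bin/sh
# Sketch: verify that a resource-group node list contains at least one
# member of each upgrade partition. Names are hypothetical examples.
covers() {
    # $1: resource-group node list   $2: partition node list
    for node in $(echo "$2" | tr ',' ' '); do
        case ",$1," in *",$node,"*) return 0 ;; esac
    done
    return 1
}

check_group() {
    if covers "$1" "$P1" && covers "$1" "$P2"; then
        echo "ok: $1"
    else
        echo "needs node-list change: $1"
    fi
}

P1="phys-schost-1,phys-schost-2"
P2="phys-schost-3,phys-schost-4"
check_group "phys-schost-1,phys-schost-3"
check_group "phys-schost-1,phys-schost-2"
```

The second group is flagged because both of its nodes belong to the first partition, so its services could not switch over when that partition shuts down.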
Determine your next step.
If you are upgrading a two-node cluster, return to Step 8 through Step 13 to designate your partitioning scheme and upgrade order.
When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 18.
If you are upgrading a cluster with three or more nodes, return to the node that is running the interactive scinstall utility.
Proceed to Step 18.
At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?, type Yes.
The command verifies that a remote installation method is available.
When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.
The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.
After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems running the Solaris 9 OS, perform either of the following commands:
phys-schost# reboot -- -xs

or

...
                    <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                    <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -xs
On x86 based systems running the Solaris 10 OS, perform the following commands:
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to upgrade those nodes.
During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle RAC before the nodes in the second partition are halted.
Create the scripts that you need to stop applications that are not under RGM control.
Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterward.
To stop applications that are running on more than one node in the partition, write the scripts accordingly.
Use any name and directory path for your scripts that you prefer.
Ensure that each node in the cluster has its own copy of your scripts.
On each node, modify the following Sun Cluster scripts to call the scripts that you placed on that node.
/etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.
/etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.
The Sun Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
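As an illustration, a halt script called from cluster_pre_halt_apps or cluster_post_halt_apps might look like the following. The application name and stop action below are placeholders, not real Sun Cluster or application commands; substitute whatever actually stops your non-RGM application on every node of the partition.

```shell
#!/bin/sh
# Hypothetical halt script for an application that is not under RGM
# control. The stop action is a placeholder; a real script would
# invoke the application's own shutdown mechanism.
APP_NAME="myapp"                     # hypothetical application name

stop_app() {
    # Placeholder stop action; replace with the real stop command.
    echo "stopping $APP_NAME"
}

if stop_app; then
    echo "$APP_NAME halted"
else
    echo "failed to halt $APP_NAME" >&2
fi
```

Because the calling Sun Cluster script runs on one arbitrary node, the stop action should reach the application instances on every node of the partition, not just the local one.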
Upgrade software on each node in the first partition.
To upgrade Solaris software before you perform Sun Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).
If Sun Cluster 3.2 software does not support the release of the Solaris OS that you currently run on your cluster, you must upgrade the Solaris software to a supported release. See “Supported Products” in Sun Cluster 3.2 Release Notes for Solaris OS for more information.
If Sun Cluster 3.2 software supports the release of the Solaris OS that you currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Dual-Partition).
Perform this procedure on each node in the cluster to upgrade the Solaris OS. On the Solaris 10 OS, perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Sun Cluster 3.2 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS, proceed to How to Upgrade Sun Cluster 3.2 Software (Dual-Partition).
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.
Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) are completed.
Become superuser on the cluster node to upgrade.
The node must be a member of the partition that is in noncluster mode.
If Sun Cluster Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.
Determine whether the following Apache run-control scripts exist and are enabled or disabled:
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
Some applications, such as Sun Cluster HA for Apache, require that Apache run control scripts be disabled.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.
If these scripts do not exist, in Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 8 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
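The enabled/disabled distinction above rests entirely on the case of the leading K or S in each file name. As an illustrative sketch (not part of the documented procedure), the following portable shell fragment classifies script names the same way; it uses a scratch directory with two sample file names rather than the live /etc/rc*.d directories, so it is safe to run anywhere:

```shell
# Classify run-control scripts as enabled or disabled by the case of
# their leading K/S. Sample names in a scratch directory stand in for
# the real /etc/rc*.d scripts.
scratch=$(mktemp -d)
touch "$scratch/K16apache" "$scratch/s50apache"   # one enabled, one disabled

classify() {
    case $(basename "$1") in
        [KS]*) echo enabled ;;
        [ks]*) echo disabled ;;
    esac
}

for script in "$scratch"/*apache; do
    printf '%s: %s\n' "$(basename "$script")" "$(classify "$script")"
done
rm -rf "$scratch"
```

The same classification logic applies to the rename-to-disable convention used later in this procedure, where an uppercase K or S is replaced with its lowercase equivalent.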
Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
For later reference, make a record of all entries that are already commented out.
Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
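As a sketch of the edit that this step describes, the following fragment comments out vfstab lines that end with the global mount option. It operates on a temporary sample file, and it assumes that global is the entire mount-options field (an entry such as global,logging would need a looser pattern); on a cluster node you would edit /etc/vfstab itself after saving a backup copy:

```shell
# Comment out entries whose options field is "global", using a sample
# vfstab in a temporary file.
vfstab=$(mktemp)
cat > "$vfstab" <<'EOF'
/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/devices ufs 2 no global
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /export ufs 2 yes -
EOF

# Prefix "#" to every uncommented line whose last field is "global".
result=$(sed '/^[^#].*[[:space:]]global$/s/^/#/' "$vfstab")
printf '%s\n' "$result"
rm -f "$vfstab"
```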
Determine which procedure to follow to upgrade the Solaris OS.
Volume Manager | Procedure | Location of Instructions
---|---|---
Solaris Volume Manager | Any Solaris upgrade method except the Live Upgrade method | Solaris installation documentation
VERITAS Volume Manager | “Upgrading VxVM and Solaris” | VERITAS Volume Manager installation documentation
If your cluster has VxVM installed, you must reinstall the existing VxVM software or upgrade to the Solaris 9 or 10 version of VxVM software as part of the Solaris upgrade process.
Upgrade the Solaris software, following the procedure that you selected in Step 5.
When prompted, choose the manual reboot option.
When prompted to reboot, always reboot into noncluster mode.
Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:
Reboot into noncluster mode in Step 9 to complete Solaris software upgrade.
Execute the following commands to boot a node into noncluster mode during Solaris upgrade:
On SPARC based systems, perform either of the following commands:
phys-schost# reboot -- -xs

ok boot -xs
If the instruction says to run the init S command, use the reboot -- -xs command instead.
On x86 based systems, perform the following command:
phys-schost# shutdown -g0 -y -i0
Press any key to continue
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.
If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.
To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.
phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Alternatively, you can rename the scripts to be consistent with your normal administration practices.
Reboot the node into noncluster mode.
On SPARC based systems, perform the following command.
Include the double dashes (--) in the command:
phys-schost# reboot -- -x
On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -x to the kernel boot command instead of -sx.
If your cluster runs VxVM, perform the remaining steps in the procedure “Upgrading VxVM and Solaris” to reinstall or upgrade VxVM.
Make the following changes to the procedure:
After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.
If any of the entries that you uncommented in Step 7 were commented out, make those entries uncommented again.
When the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.
On SPARC based systems, perform the following command:
phys-schost# reboot -- -rx
On x86 based systems, perform the shutdown and boot procedures that are described in Step 6 except add -rx to the kernel boot command instead of -sx.
If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command nor type Ctrl-D.
WARNING - Unable to repair the /global/.devices/node@1 filesystem.
Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the
shell when done to continue the boot process.

Type control-d to proceed with normal startup,
(or give root password for system maintenance):
Type the root password
(Optional) SPARC: Upgrade VxFS.
Follow procedures that are provided in your VxFS documentation.
Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.
Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Sun Cluster software.
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
Upgrade to Sun Cluster 3.2 software. Go to How to Upgrade Sun Cluster 3.2 Software (Dual-Partition).
To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Perform this procedure to upgrade each node of the cluster to Sun Cluster 3.2 software. This procedure also upgrades required Sun Java Enterprise System shared components. You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software.
On the Solaris 10 OS, perform all steps from the global zone only.
You can perform this procedure on more than one node of the partition at the same time.
Perform the following tasks:
Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) are completed.
Ensure that the node you are upgrading belongs to the partition that is not active in the cluster and that the node is in noncluster mode.
If you upgraded to a new marketing release of the Solaris OS, such as from Solaris 9 to Solaris 10 software, ensure that all steps in How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition) are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
Become superuser on a node that is a member of the partition that is in noncluster mode.
Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.
Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.
Determine what directory the /usr/java/ directory is symbolically linked to.
phys-schost# ls -l /usr/java
lrwxrwxrwx   1 root   other   9 Apr 19 14:05 /usr/java -> /usr/j2se/
Determine what version or versions of Java software are installed.
The following are examples of commands that you can use to display the versions of the related releases of Java software.
phys-schost# /usr/j2se/bin/java -version
phys-schost# /usr/java1.2/bin/java -version
phys-schost# /usr/jdk/jdk1.5.0_06/bin/java -version
If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.
The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.
phys-schost# rm /usr/java
phys-schost# ln -s /usr/j2se /usr/java
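The version check in this step can also be scripted. The fragment below is a sketch, not part of the documented procedure: it compares dotted version strings against the 1.5.0_06 minimum by sorting them, and it assumes major.minor.micro versions with an optional two-digit _NN patch suffix, as in the examples above:

```shell
# Compare a reported Java version string against the Sun Cluster 3.2
# minimum of 1.5.0_06. Assumes versions like 1.5.0_06, with a
# zero-padded two-digit patch suffix.
meets_minimum() {
    minimum=1.5.0_06
    highest=$(printf '%s\n%s\n' "$minimum" "$1" |
        sort -t. -k1,1n -k2,2n -k3,3 | tail -1)
    # True when $1 sorts at or after the minimum in version order.
    [ "$highest" = "$1" ]
}

for v in 1.4.2_12 1.5.0_06 1.6.0; do
    if meets_minimum "$v"; then
        echo "$v meets the minimum"
    else
        echo "$v is too old"
    fi
done
```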
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.
Change to the installation wizard directory of the DVD-ROM.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
Start the installation wizard program.
phys-schost# ./installer
Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.
Do not use the installation wizard program to upgrade Sun Cluster software packages.
The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.
Exit the installation wizard program.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
Start the scinstall utility.

phys-schost# ./scinstall
Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the Sun Java Availability Suite DVD-ROM.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.
*** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
      * 3) Manage a dual-partition upgrade
      * 4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  4
The Upgrade Menu is displayed.
Type the number that corresponds to the option for Upgrade Sun Cluster framework on this cluster node and press the Return key.
Follow the menu prompts to upgrade the cluster framework.
During the Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:
Convert NAFO groups to IPMP groups but keep the original NAFO-group name.
See one of the following manuals for information about test addresses for IPMP:
Configuring Test Addresses in Administering Multipathing Groups With Multiple Physical Interfaces in System Administration Guide: IP Services (Solaris 9)
Test Addresses in System Administration Guide: IP Services (Solaris 10)
See the scinstall(1M) man page for more information about the conversion of NAFO groups to IPMP during Sun Cluster software upgrade.
Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Sun Cluster framework upgrade and prompts you to press Enter to continue.
Quit the scinstall utility.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
Upgrade data service packages.
You must upgrade all data services to the Sun Cluster 3.2 version.
For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource, a web application server component resource, or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.
Start the upgraded interactive scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Do not use the scinstall utility that is on the installation media to upgrade data service packages.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Upgrade this cluster node and press the Return key.
The Upgrade Menu is displayed.
Type the number that corresponds to the option for Upgrade Sun Cluster data service agents on this node and press the Return key.
Follow the menu prompts to upgrade Sun Cluster data service agents that are installed on the node.
You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.
Upgrade processing is finished when the system displays the message Completed upgrade of Sun Cluster data services agents and prompts you to press Enter to continue.
Press Enter.
The Upgrade Menu is displayed.
Quit the scinstall utility.
If you have Sun Cluster HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.
If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems.
As of the Sun Cluster 3.2 release, LOFS is no longer disabled by default during Sun Cluster software installation or upgrade. To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
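The edit above can be made idempotent so that repeated runs never add a duplicate exclude line. The following sketch demonstrates this against a temporary copy; on a cluster node you would operate on /etc/system itself and then reboot for the change to take effect:

```shell
# Append "exclude:lofs" to a system file only if it is not already
# present. A temporary file stands in for /etc/system.
system_file=$(mktemp)
echo 'set maxusers=64' > "$system_file"   # stand-in for existing content

add_lofs_exclude() {
    grep -q '^exclude:lofs' "$1" || echo 'exclude:lofs' >> "$1"
}

add_lofs_exclude "$system_file"
add_lofs_exclude "$system_file"   # second call is a no-op
count=$(grep -c '^exclude:lofs' "$system_file")
echo "exclude:lofs entries: $count"
rm -f "$system_file"
```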
As needed, manually upgrade any custom data services that are not supplied on the product media.
Verify that each data-service update is installed successfully.
View the upgrade log file that is referenced at the end of the upgrade output messages.
Install any Sun Cluster 3.2 framework and data-service software patches.
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
Upgrade software applications that are installed on the cluster.
Ensure that application levels are compatible with the current versions of Sun Cluster and Solaris software. See your application documentation for installation instructions.
After all nodes in a partition are upgraded, apply the upgrade changes.
From one node in the partition that you are upgrading, start the interactive scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Do not use the scinstall command that is located on the installation media. Only use the scinstall command that is located on the cluster node.
The scinstall Main Menu is displayed.
Type the number that corresponds to the option for Apply dual-partition upgrade changes to the partition and press the Return key.
Follow the prompts to continue each stage of the upgrade processing.
The command performs the following tasks, depending on which partition the command is run from:
First partition - The command halts each node in the second partition, one node at a time. When a node is halted, any services on that node are automatically switched over to a node in the first partition, provided that the node list of the related resource group contains a node in the first partition. After all nodes in the second partition are halted, the nodes in the first partition are booted into cluster mode and take over providing cluster services.
Second partition - The command boots the nodes in the second partition into cluster mode, to join the active cluster that was formed by the first partition. After all nodes have rejoined the cluster, the command performs final processing and reports on the status of the upgrade.
Exit the scinstall utility, if it is still running.
If you are finishing upgrade of the first partition, perform the following substeps to prepare the second partition for upgrade.
Otherwise, if you are finishing upgrade of the second partition, proceed to How to Verify Upgrade of Sun Cluster 3.2 Software.
Boot each node in the second partition into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems, perform the following commands:
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Upgrade the nodes in the second partition.
To upgrade Solaris software before you perform Sun Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).
Otherwise, upgrade Sun Cluster software on the second partition. Return to Step 1.
Go to How to Verify Upgrade of Sun Cluster 3.2 Software.
If you experience an unrecoverable error during dual-partition upgrade, perform recovery procedures in How to Recover from a Failed Dual-Partition Upgrade.
This section provides the following information to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software by using the live upgrade method:
The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Table 8–3 Task Map: Performing a Live Upgrade to Sun Cluster 3.2 Software
Task | Instructions
---|---
1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. | Upgrade Requirements and Software Support Guidelines; Choosing a Sun Cluster Upgrade Method
2. Remove the cluster from production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. | How to Prepare the Cluster for Upgrade (Live Upgrade)
3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) software and disk groups and VERITAS File System (VxFS). | How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade)
4. Verify successful completion of upgrade to Sun Cluster 3.2 software. | How to Verify Upgrade of Sun Cluster 3.2 Software
5. Enable resources and bring resource groups online. Migrate existing resources to new resource types. | How to Finish Upgrade to Sun Cluster 3.2 Software
6. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed. | SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center
Perform this procedure to prepare a cluster for live upgrade.
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Solaris OS
Sun Cluster 3.2 framework
Sun Cluster 3.2 data services (agents)
Applications that are managed by Sun Cluster 3.2 data services
VERITAS Volume Manager, if applicable
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Sun Cluster commands. This series of upgrade procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
Ensure that the cluster is functioning normally.
View the current status of the cluster by running the following command from any node.
phys-schost% scstat
See the scstat(1M) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
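A quick way to surface such messages is to count the lines that carry warning or error markers. The following sketch runs against a two-line sample log whose content is invented for illustration; on a cluster node you would point the grep at /var/adm/messages instead:

```shell
# Count warning/error lines in a messages log, using an invented
# two-line sample in a temporary file.
log=$(mktemp)
cat > "$log" <<'EOF'
Jun  1 10:00:00 phys-schost-1 cl_runtime: NOTICE: CMM: node reconfiguration complete.
Jun  1 10:05:00 phys-schost-1 scsi: WARNING: /pci@1f,4000/scsi@3: transport failed
EOF

matches=$(grep -c -i -E 'warning|error' "$log")
echo "warning/error lines: $matches"
rm -f "$log"
```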
Check the volume-manager status.
If necessary, notify users that cluster services will be temporarily interrupted during the upgrade.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
Become superuser on a node of the cluster.
If Sun Cluster Geographic Edition software is installed, uninstall it.
For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.
For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.
The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.
Become superuser on a node of the cluster that runs Availability Suite software.
Identify the device ID and the slice that is used by the Availability Suite configuration file.
phys-schost# /usr/opt/SUNWscm/sbin/dscfg
/dev/did/rdsk/dNsS

In this example output, N is the device ID and S is the slice of device N.
Identify the existing quorum device.
phys-schost# scstat -q

-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/dQsS  1       1        Online
In this example output, dQsS is the existing quorum device.
If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.
phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.
If you moved the configuration data, configure Availability Suite software to use the new location.
As superuser, issue the following command on each node that runs Availability Suite software.
phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
Ensure that all shared data is backed up.
Ensure that each system disk is backed up.
Perform a live upgrade of the Solaris OS, Sun Cluster 3.2 software, and other software. Go to How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade).
Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method. The Sun Cluster live upgrade method uses the Solaris Live Upgrade feature. For information about live upgrade of the Solaris OS, refer to the documentation for the Solaris version that you are using:
Chapter 32, Solaris Live Upgrade (Topics), in Solaris 9 9/04 Installation Guide
Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.
Perform this procedure on each node in the cluster.
You can use the cconsole utility to perform this procedure on all nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console for more information.
Ensure that all steps in How to Prepare the Cluster for Upgrade (Live Upgrade) are completed.
Ensure that a supported version of Solaris Live Upgrade software is installed on each node.
If your operating system is already upgraded to Solaris 9 9/05 software or Solaris 10 11/06 software, you have the correct Solaris Live Upgrade software. If your operating system is an older version, perform the following steps:
Insert the Solaris 9 9/05 software or Solaris 10 11/06 software media.
Become superuser.
Install the SUNWluu and SUNWlur packages.
phys-schost# pkgadd -d path SUNWluu SUNWlur

-d path
  Specifies the absolute path to the software packages.
Verify that the packages have been installed.
phys-schost# pkgchk -v SUNWluu SUNWlur
If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.
See Configuring Dual-String Mediators for more information about mediators.
Run the following command to verify that no mediator data problems exist.
phys-schost# medstat -s setname

-s setname
  Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.
List all mediators.
Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
phys-schost# scswitch -z -D setname -h node

-z
  Changes mastery.
-D setname
  Specifies the name of the disk set.
-h node
  Specifies the name of the node to become primary of the disk set.
Unconfigure all mediators for the disk set.
phys-schost# metaset -s setname -d -m mediator-host-list

-s setname
  Specifies the disk set name.
-d
  Deletes from the disk set.
-m mediator-host-list
  Specifies the name of the node to remove as a mediator host for the disk set.
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Repeat Step c through Step d for each remaining disk set that uses mediators.
Build an inactive boot environment (BE).
phys-schost# lucreate options -n BE-name

-n BE-name
  Specifies the name of the boot environment that is to be upgraded.
For information about important options to the lucreate command, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
If necessary, upgrade the Solaris OS software in your inactive BE.
If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3.2 software, this step is optional.
If you use Solaris Volume Manager software, run the following command:
phys-schost# luupgrade -u -n BE-name -s os-image-path |
Upgrades an operating system image on a boot environment.
Specifies the path name of a directory that contains an operating system image.
If you use VERITAS Volume Manager, follow live upgrade procedures in your VxVM installation documentation.
Mount your inactive BE by using the lumount command.
phys-schost# lumount -n BE-name -m BE-mount-point |
Specifies the mount point of BE-name.
For more information, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.
Ensure that the /BE-mount-point/usr/java/ directory is a symbolic link to the minimum or latest version of Java software.
Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.
Determine what directory the /BE-mount-point/usr/java/ directory is symbolically linked to.
phys-schost# ls -l /BE-mount-point/usr/java
lrwxrwxrwx   1 root   other   9 Apr 19 14:05 /BE-mount-point/usr/java -> /BE-mount-point/usr/j2se/
Determine what version or versions of Java software are installed.
The following are examples of commands that you can use to display the version of installed releases of Java software.
phys-schost# /BE-mount-point/usr/j2se/bin/java -version
phys-schost# /BE-mount-point/usr/java1.2/bin/java -version
phys-schost# /BE-mount-point/usr/jdk/jdk1.5.0_06/bin/java -version
If the /BE-mount-point/usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.
The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.
phys-schost# rm /BE-mount-point/usr/java
phys-schost# cd /BE-mount-point/usr
phys-schost# ln -s j2se java
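If you want to script the comparison against the 1.5.0_06 minimum, the following sketch shows one way to parse a `java -version` string. The version string is hard-coded as a sample; on the alternate BE you would capture it from `/BE-mount-point/usr/java/bin/java -version 2>&1` instead.

```shell
# Sketch: check a Java version string against the Sun Cluster 3.2
# minimum of 1.5.0_06. The sample string stands in for the output of
# `/BE-mount-point/usr/java/bin/java -version 2>&1`.
sample='java version "1.5.0_06"'

ver=$(echo "$sample" | sed -n 's/.*"\(.*\)".*/\1/p')      # 1.5.0_06
minor=$(echo "$ver" | cut -d. -f2)                        # 5
update=$(echo "$ver" | sed -n 's/.*_\([0-9]*\)$/\1/p')    # 06

if [ "$minor" -gt 5 ] || { [ "$minor" -eq 5 ] && [ "${update:-0}" -ge 6 ]; }; then
    echo "Java $ver meets the 1.5.0_06 minimum"
else
    echo "Java $ver is below the 1.5.0_06 minimum; fix the /usr/java link"
fi
```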
Apply any necessary Solaris patches.
You might need to patch your Solaris software to use the Live Upgrade feature. For details about the patches that the Solaris OS requires and where to download them, see Managing Packages and Patches With Solaris Live Upgrade in Solaris 9 9/04 Installation Guide or Upgrading a System With Packages or Patches in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
If necessary and if your version of the VERITAS Volume Manager (VxVM) software supports it, upgrade your VxVM software.
Refer to your VxVM software documentation to determine whether your version of VxVM can use the live upgrade method.
(Optional) SPARC: Upgrade VxFS.
Follow procedures that are provided in your VxFS documentation.
If your cluster hosts software applications that require an upgrade and that you can upgrade by using the live upgrade method, upgrade those software applications.
If your cluster hosts software applications that cannot use the live upgrade method, you will upgrade them later in Step 25.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.
Change to the installation wizard directory of the DVD-ROM.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc |
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86 |
Start the installation wizard program to direct output to a state file.
Specify the name to give the state file and the absolute or relative path where the file should be created.
To create a state file by using the graphical interface, use the following command:
phys-schost# ./installer -no -saveState statefile |
To create a state file by using the text-based interface, use the following command:
phys-schost# ./installer -no -nodisplay -saveState statefile |
See Generating the Initial State File in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.
Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.
The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.
Exit the installation wizard program.
Run the installer program in silent mode and direct the installation to the alternate boot environment.
The installer program must be the same version that you used to create the state file.
phys-schost# ./installer -nodisplay -noconsole -state statefile -altroot BE-mount-point |
See To Run the Installer in Silent Mode in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools |
Upgrade your Sun Cluster software by using the scinstall command.
phys-schost# ./scinstall -u update -R BE-mount-point |
Specifies that you are performing an upgrade of Sun Cluster software.
Specifies the mount point for your alternate boot environment.
For more information, see the scinstall(1M) man page.
Upgrade your data services by using the scinstall command.
phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
Unmount the inactive BE.
phys-schost# luumount -n BE-name |
Activate the upgraded inactive BE.
phys-schost# luactivate BE-name |
The name of the alternate BE that you built in Step 3.
Repeat Step 1 through Step 22 for each node in the cluster.
Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.
Reboot all nodes.
phys-schost# shutdown -y -g0 -i6 |
Do not use the reboot or halt command. These commands do not activate a new BE. Use only shutdown or init to reboot into a new BE.
The nodes reboot into cluster mode using the new, upgraded BE.
(Optional) If your cluster hosts software applications that require upgrade for which you cannot use the live upgrade method, perform the following steps.
Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete.
Shut down the node.
phys-schost# shutdown -y -g0 -i0 |
Boot each node into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x |
On x86 based systems, perform the following commands:
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
Upgrade each software application that requires an upgrade.
Remember to boot into noncluster mode if you are directed to reboot, until all applications have been upgraded.
Boot each node into cluster mode.
On SPARC based systems, perform the following command:
ok boot |
On x86 based systems, perform the following commands:
When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Solaris 10 OS, Sun Cluster 3.2 framework, and all Sun Cluster data services that support the live upgrade method. In this example, sc31u2 is the original boot environment (BE). The new BE that is upgraded is named sc32 and uses the mount point /sc32. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Solaris 10 OS. The Java ES installer state file is named sc32state.
The following commands typically produce copious output. This output is shown only where necessary for clarity.
phys-schost# lucreate sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
…
lucreate: Creation of Boot Environment sc32 successful.

phys-schost# luupgrade -u -n sc32 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc32 is complete.

Apply patches

phys-schost# lumount sc32 /sc32
phys-schost# ls -l /sc32/usr/java
lrwxrwxrwx   1 root   other   9 Apr 19 14:05 /sc32/usr/java -> /sc32/usr/j2se/

Insert the Sun Java Availability Suite DVD-ROM.

phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc32state
phys-schost# ./installer -nodisplay -noconsole -state sc32state -altroot /sc32
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/sun_cluster/Sol_9/Tools
phys-schost# ./scinstall -u update -R /sc32
phys-schost# /sc32/usr/cluster/bin/scinstall -u update -s all -d /cdrom/cdrom0 -R /sc32
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc32
phys-schost# luactivate sc32
Activation of boot environment sc32 successful.

Upgrade all other nodes

Boot all nodes
phys-schost# shutdown -y -g0 -i6
ok boot
At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.
DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/dsk/did/dNsX, does not exist, but the device name does exist, you must specify the device by its physical device name. Then change the vfstab entry on the alternate BE to use the DID device name instead. Perform the following steps:
1) For all unrecognized DID devices, specify the corresponding physical device names as arguments to the -m or -M option in the lucreate command. For example, if /global/.devices/node@nodeid is mounted on a DID device, use lucreate -m /global/.devices/node@nodeid:/dev/dsk/cNtXdYsZ:ufs [-m…] -n BE-name to create the BE.
2) Mount the inactive BE by using the lumount -n BE-name -m BE-mount-point command.
3) Edit the /BE-name/etc/vfstab file to convert the physical device name, /dev/dsk/cNtXdYsZ, to its DID device name, /dev/dsk/did/dNsX.
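The vfstab edit in step 3 can be scripted with sed instead of an interactive editor. The device names, mount point, and file path below are illustrative placeholders, not values from your cluster; adjust both the block (dsk) and raw (rdsk) entries for your actual devices.

```shell
# Sketch: rewrite a physical device name in a copy of the alternate
# BE's vfstab back to its DID name. All names here are placeholders.
printf '%s\n' \
  '/dev/dsk/c0t4d0s0 /dev/rdsk/c0t4d0s0 /global/.devices/node@1 ufs 2 no global' \
  > /tmp/vfstab.alt

# Convert the block-device entry; handle the raw entry the same way.
sed 's|/dev/dsk/c0t4d0s0|/dev/dsk/did/d4s0|' /tmp/vfstab.alt
```

Review the edited file against the original before booting the alternate BE.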
Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.
New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Failure Recovery: Falling Back to the Original Boot Environment (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Chapter 10, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag on
globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default
Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
[ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group.
Go to How to Verify Upgrade of Sun Cluster 3.2 Software.
You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.
If you used an unmirrored volume for your inactive BE, delete the old BE files. For specific information, see Deleting an Inactive Boot Environment in Solaris 9 9/04 Installation Guide or Deleting an Inactive Boot Environment in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
If you detached a plex to use as the inactive BE, reattach the plex and synchronize the mirrors. For more information about working with a plex, see Example of Detaching and Upgrading One Side of a RAID 1 Volume (Mirror) (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (Command-Line Interface) in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
You can also maintain the inactive BE. For information about how to maintain the environment, see Chapter 37, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 11, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
This section provides the following information to complete all Sun Cluster 3.2 software upgrade methods:
Perform this procedure to verify that the cluster is successfully upgraded to Sun Cluster 3.2 software. On the Solaris 10 OS, perform all steps from the global zone only.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.
Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.
On each node, become superuser.
On each upgraded node, view the installed levels of Sun Cluster software.
phys-schost# clnode show-rev -v |
The first line of output states which version of Sun Cluster software the node is running. This version should match the version that you just upgraded to.
From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).
phys-schost# clnode status |
See the clnode(1CL) man page for more information about displaying cluster status.
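On a cluster with many nodes, the Online check can be scripted against saved `clnode status` output. The transcript below is a fabricated two-node sample modeled on the example later in this section; the awk filter is a sketch, not part of the product.

```shell
# Sketch: fail if any node in saved `clnode status` output reports a
# status other than Online. The transcript is a fabricated sample.
cat > /tmp/clnode.status <<'EOF'
=== Cluster Nodes ===

--- Node Status ---

Node Name          Status
---------          ------
phys-schost-1      Online
phys-schost-2      Online
EOF

awk '$2 == "Online" || $2 == "Offline" {
         if ($2 != "Online") { print $1 " is " $2; bad = 1 }
     }
     END { exit bad }' /tmp/clnode.status \
  && echo "all nodes are Online"
```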
SPARC: If you upgraded from Solaris 8 to Solaris 9 software, verify the consistency of the storage configuration.
On each node, run the following command to verify the consistency of the storage configuration.
phys-schost# cldevice check |
Do not proceed to Step b until your configuration passes this consistency check. Failure to pass this check might result in errors in device identification and cause data corruption.
The following table lists the possible output from the cldevice check command and the action you must take, if any.
Example Message:
    device id for 'phys-schost-1:/dev/rdsk/c1t3d0' does not match physical device's id, device may have been replaced
Action:
    Go to Recovering From an Incomplete Upgrade and perform the appropriate repair procedure.

Example Message:
    device id for 'phys-schost-1:/dev/rdsk/c0t0d0' needs to be updated, run cldevice repair to update
Action:
    None. You update this device ID in Step b.

Example Message:
    No output message
Action:
    None.
See the cldevice(1CL) man page for more information.
On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.
phys-schost# cldevice repair |
On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful.
phys-schost# cldevice check |
If the cldevice command displays a message, return to Step a to make further corrections to the storage configuration or the storage database.
If the cldevice command displays no messages, the device-ID migration is successful. When device-ID migration is verified on all cluster nodes, proceed to How to Finish Upgrade to Sun Cluster 3.2 Software.
The following example shows the commands used to verify upgrade of a two-node cluster to Sun Cluster 3.2 software. The cluster node names are phys-schost-1 and phys-schost-2.
phys-schost# clnode show-rev -v
3.2
…
phys-schost# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                Status
---------                ------
phys-schost-1            Online
phys-schost-2            Online
Go to How to Finish Upgrade to Sun Cluster 3.2 Software.
Perform this procedure to finish Sun Cluster upgrade. On the Solaris 10 OS, perform all steps from the global zone only. First, reregister all resource types that received a new version from the upgrade. Second, modify eligible resources to use the new version of the resource type that the resource uses. Third, re-enable resources. Finally, bring resource groups back online.
Ensure that all steps in How to Verify Upgrade of Sun Cluster 3.2 Software are completed.
Copy the security files for the common agent container to all cluster nodes.
This step ensures that security files for the common agent container are identical on all cluster nodes and that the copied files retain the correct file permissions.
On each node, stop the Sun Java Web Console agent.
phys-schost# /usr/sbin/smcwebserver stop |
On each node, stop the security file agent.
phys-schost# /usr/sbin/cacaoadm stop |
On one node, change to the /etc/cacao/instances/default/ directory.
phys-schost-1# cd /etc/cacao/instances/default/ |
Create a tar file of the /etc/cacao/SUNWcacao/security/ directory.
phys-schost-1# tar cf /tmp/SECURITY.tar security |
Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.
On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.
Any security files that already exist in the /etc/cacao/instances/default/ directory are overwritten.
phys-schost-2# cd /etc/cacao/instances/default/
phys-schost-2# tar xf /tmp/SECURITY.tar
Delete the /tmp/SECURITY.tar file from each node in the cluster.
You must delete each copy of the tar file to avoid security risks.
phys-schost-1# rm /tmp/SECURITY.tar
phys-schost-2# rm /tmp/SECURITY.tar
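The copy, extract, and clean-up sequence above can be rehearsed end to end. In the sketch below, local directories stand in for the other cluster nodes; on a real cluster you would copy the archive to each node with scp or rcp, and the directory names here are fabricated.

```shell
# Sketch: simulate copying the security tar file to another node,
# extracting it there, and deleting every copy. /tmp/default and
# /tmp/node2/default stand in for /etc/cacao/instances/default/ on
# two nodes; the key file content is a placeholder.
mkdir -p /tmp/default /tmp/node2/default
( cd /tmp/default && mkdir -p security && echo key > security/keyfile \
  && tar cf /tmp/SECURITY.tar security )

# "Copy" to the other node and extract there.
cp /tmp/SECURITY.tar /tmp/node2/SECURITY.tar
( cd /tmp/node2/default && tar xf /tmp/node2/SECURITY.tar )

# Remove every copy of the archive when done.
rm /tmp/SECURITY.tar /tmp/node2/SECURITY.tar
ls /tmp/node2/default/security/keyfile
```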
On each node, start the security file agent.
phys-schost# /usr/sbin/cacaoadm start |
On each node, start the Sun Java Web Console agent.
phys-schost# /usr/sbin/smcwebserver start |
If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.
Follow the documentation that accompanies the data services.
If you upgraded Sun Cluster HA for SAP liveCache from the Sun Cluster 3.0 or 3.1 version to the Sun Cluster 3.2 version, modify the /opt/SUNWsclc/livecache/bin/lccluster configuration file.
Become superuser on a node that will host the liveCache resource.
Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC_NAME/db/sap/ directory.
Overwrite the lccluster file that already exists from the previous configuration of the data service.
Configure this /sapdb/LC_NAME/db/sap/lccluster file as documented in How to Register and Configure Sun Cluster HA for SAP liveCache in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
If you upgraded the Solaris OS and your configuration uses dual-string mediators for Solaris Volume Manager software, restore the mediator configurations.
Determine which node has ownership of a disk set to which you will add the mediator hosts.
phys-schost# metaset -s setname |
Specifies the disk set name.
On the node that masters or will master the disk set, become superuser.
If no node has ownership, take ownership of the disk set.
phys-schost# cldevicegroup switch -n node devicegroup |
Specifies the name of the node to become primary of the disk set.
Specifies the name of the disk set.
Re-create the mediators.
phys-schost# metaset -s setname -a -m mediator-host-list |
Adds to the disk set.
Specifies the names of the nodes to add as mediator hosts for the disk set.
Repeat these steps for each disk set in the cluster that uses mediators.
If you upgraded VxVM, upgrade all disk groups.
Bring online and take ownership of a disk group to upgrade.
phys-schost# cldevicegroup switch -n node devicegroup |
Run the following command to upgrade a disk group to the highest version supported by the VxVM release you installed.
phys-schost# vxdg upgrade dgname |
See your VxVM administration documentation for more information about upgrading disk groups.
Repeat for each remaining VxVM disk group in the cluster.
Migrate resources to new resource type versions.
You must migrate all resources to the Sun Cluster 3.2 resource-type version.
For Sun Cluster HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.
See Upgrading a Resource Type in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the clsetup utility. The process involves performing the following tasks:
Registering the new resource type.
Migrating the eligible resource to the new version of its resource type.
Modifying the extension properties of the resource type as specified in Sun Cluster 3.2 Release Notes for Solaris OS.
The Sun Cluster 3.2 release introduces new default values for some extension properties, such as the Retry_interval property. These changes affect the behavior of any existing resource that uses the default values of such properties. If you require the previous default value for a resource, modify the migrated resource to set the property to the previous default value.
If your cluster runs the Sun Cluster HA for Sun Java System Application Server EE (HADB) data service and you shut down the HADB database before you began a dual-partition upgrade, re-enable the resource and start the database.
phys-schost# clresource enable hadb-resource
phys-schost# hadbm start database-name
For more information, see the hadbm(1M) man page.
If you upgraded to the Solaris 10 OS and the Apache httpd.conf file is located on a cluster file system, ensure that the HTTPD entry in the Apache control script still points to that location.
View the HTTPD entry in the /usr/apache/bin/apchectl file.
The following example shows the httpd.conf file located on the /global cluster file system.
phys-schost# cat /usr/apache/bin/apchectl | grep HTTPD=/usr
HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
If the file does not show the correct HTTPD entry, update the file.
phys-schost# vi /usr/apache/bin/apchectl
#HTTPD=/usr/apache/bin/httpd
HTTPD="/usr/apache/bin/httpd -f /global/web/conf/httpd.conf"
From any node, start the clsetup utility.
phys-schost# clsetup |
The clsetup Main Menu is displayed.
Re-enable all disabled resources.
Type the number that corresponds to the option for Resource groups and press the Return key.
The Resource Group Menu is displayed.
Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.
Choose a resource to enable and follow the prompts.
Repeat Step c for each disabled resource.
When all resources are re-enabled, type q to return to the Resource Group Menu.
Bring each resource group back online.
This step includes bringing resource groups in non-global zones online.
When all resource groups are back online, exit the clsetup utility.
Type q to back out of each submenu, or press Ctrl-C.
If, before upgrade, you enabled automatic node reboot if all monitored disk paths fail, ensure that the feature is still enabled.
Also perform this task if you want to configure automatic reboot for the first time.
Determine whether the automatic reboot feature is enabled or disabled.
phys-schost# clnode show |
Enable the automatic reboot feature.
phys-schost# clnode set -p reboot_on_path_failure=enabled |
Specifies the property to set.
Specifies that the node will reboot if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
Verify that automatic reboot on disk-path failure is enabled.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                       enabled
…
(Optional) Capture the disk partitioning information for future reference.
phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename |
Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.
(Optional) Make a backup of your cluster configuration.
An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration.
For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.
Resource-type migration failure - Normally, you migrate resources to a new resource type while the resource is offline. However, some resources need to be online for a resource-type migration to succeed. If resource-type migration fails for this reason, error messages similar to the following are displayed:
phys-schost - Resource depends on a SUNW.HAStoragePlus type resource that is not online anywhere.
(C189917) VALIDATE on resource nfsrs, resource group rg, exited with non-zero exit status.
(C720144) Validation of resource nfsrs in resource group rg on node phys-schost failed.
If resource-type migration fails because the resource is offline, use the clsetup utility to re-enable the resource and then bring its related resource group online. Then repeat migration procedures for the resource.
Java binaries location change - If the location of the Java binaries changed during the upgrade of shared components, you might see error messages similar to the following when you attempt to run the cacaoadm start or smcwebserver start commands:
# /opt/SUNWcacao/bin/cacaoadm start
No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Jan 3 17:10:26 ppups3 cacao: No suitable Java runtime found. Java 1.4.2_03 or higher is required.
Cannot locate all the dependencies
# smcwebserver start
/usr/sbin/smcwebserver: /usr/jdk/jdk1.5.0_04/bin/java: not found
These errors are generated because the start commands cannot locate the current location of the Java binaries. The JAVA_HOME property still points to the directory where the previous version of Java was located, but that previous version was removed during upgrade.
To correct this problem, change the setting of JAVA_HOME in the following configuration files to use the current Java directory:
/etc/webconsole/console/config.properties
/etc/opt/SUNWcacao/cacao.properties
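If the property appears as a `JAVA_HOME=` line in these files, the edit can be scripted with sed. This is a hypothetical sketch: the file path is a stand-in, and the Java versions come from the error messages above; check the actual property syntax in each file before editing, and back up the file first.

```shell
# Sketch: point a JAVA_HOME= line at the current Java directory.
# The file path and both Java versions are illustrative placeholders.
printf 'JAVA_HOME=/usr/jdk/jdk1.5.0_04\n' > /tmp/config.properties

sed 's|^JAVA_HOME=.*|JAVA_HOME=/usr/jdk/jdk1.5.0_06|' /tmp/config.properties
```

After the edit, rerun `cacaoadm start` and `smcwebserver start` to confirm that the runtime is found.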
If you have a SPARC based system and use Sun Management Center to monitor the cluster, go to SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center.
To install or complete upgrade of Sun Cluster Geographic Edition 3.2 software, see Sun Cluster Geographic Edition Installation Guide.
Otherwise, the cluster upgrade is complete.
This section provides the following information to recover from certain kinds of incomplete upgrades:
SPARC: How to Recover From a Partially Completed Dual-Partition Upgrade
x86: How to Recover From a Partially Completed Dual-Partition Upgrade
Recovering From Storage Configuration Changes During Upgrade
If you experience an unrecoverable error during upgrade, perform this procedure to back out of the upgrade.
You cannot restart a dual-partition upgrade after the upgrade has experienced an unrecoverable error.
Become superuser on each node of the cluster.
Boot each node into noncluster mode.
On SPARC based systems, perform the following command:
ok boot -x
On x86 based systems, perform the following commands:
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add -x to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
On each node, run the upgrade recovery script from the installation media.
If the node successfully upgraded to Sun Cluster 3.2 software, you can alternatively run the scinstall command from the /usr/cluster/bin directory.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
phys-schost# ./scinstall -u recover
-u
Specifies upgrade.
recover
Restores the /etc/vfstab file and the Cluster Configuration Repository (CCR) database to their original state before the start of the dual-partition upgrade.
The recovery process leaves the cluster nodes in noncluster mode. Do not attempt to reboot the nodes into cluster mode.
For more information, see the scinstall(1M) man page.
Perform either of the following tasks.
Restore the old software from backup to return the cluster to its original state.
Continue to upgrade software on the cluster by using the standard upgrade method.
This method requires that all cluster nodes remain in noncluster mode during the upgrade. See the task map for standard upgrade, Table 8–1. You can resume the upgrade at the last task or step in the standard upgrade procedures that you successfully completed before the dual-partition upgrade failed.
Perform this procedure if a dual-partition upgrade fails and the state of the cluster meets all of the following criteria:
The nodes of the first partition have been upgraded.
None of the nodes of the second partition are yet upgraded.
None of the nodes of the second partition are in cluster mode.
You can also perform this procedure if the upgrade has succeeded on the first partition but you want to back out of the upgrade.
Do not perform this procedure after dual-partition upgrade processes have begun on the second partition. Instead, perform How to Recover from a Failed Dual-Partition Upgrade.
Before you begin, ensure that all second-partition nodes are halted. First-partition nodes can be either halted or running in noncluster mode.
Perform all steps as superuser.
Boot each node in the second partition into noncluster mode.
ok boot -x
On each node in the second partition, run the scinstall -u recover command.
# /usr/cluster/bin/scinstall -u recover
The command restores the original CCR information, restores the original /etc/vfstab file, and eliminates modifications for startup.
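If you kept pre-upgrade backup copies, one way to spot-check the restore is to diff the restored /etc/vfstab against the backup copy. The backup location below is hypothetical, and the demo compares two temp files standing in for the real ones:

```shell
#!/bin/sh
# Sketch only: compare a restored vfstab against a pre-upgrade backup
# copy.  The backup location is an assumption -- use wherever you saved
# your pre-upgrade copies.  Temp files stand in for the real files here.
check_vfstab() {
    # $1 = restored /etc/vfstab, $2 = pre-upgrade backup copy
    if diff "$1" "$2" >/dev/null 2>&1; then
        echo "vfstab matches the pre-upgrade copy"
    else
        echo "vfstab differs from the pre-upgrade copy; inspect manually"
    fi
}

restored=$(mktemp); backup=$(mktemp)
printf '/dev/dsk/c0t0d0s0\t/dev/rdsk/c0t0d0s0\t/\tufs\t1\tno\t-\n' > "$restored"
cp "$restored" "$backup"
msg=$(check_vfstab "$restored" "$backup")
echo "$msg"
rm -f "$restored" "$backup"
```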
Boot each node of the second partition into cluster mode.
# shutdown -g0 -y -i6
When the nodes of the second partition come up, the second partition resumes supporting cluster data services while running the old software with the original configuration.
Restore the original software and configuration data from backup media to the nodes in the first partition.
Boot each node in the first partition into cluster mode.
# shutdown -g0 -y -i6
The nodes rejoin the cluster.
Perform this procedure if a dual-partition upgrade fails and the state of the cluster meets all of the following criteria:
The nodes of the first partition have been upgraded.
None of the nodes of the second partition are yet upgraded.
None of the nodes of the second partition are in cluster mode.
You can also perform this procedure if the upgrade has succeeded on the first partition but you want to back out of the upgrade.
Do not perform this procedure after dual-partition upgrade processes have begun on the second partition. Instead, perform How to Recover from a Failed Dual-Partition Upgrade.
Before you begin, ensure that all second-partition nodes are halted. First-partition nodes can be either halted or running in noncluster mode.
Perform all steps as superuser.
Boot each node in the second partition into noncluster mode by completing the following steps.
In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB-based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Add the -x option to the command to specify that the system boot into noncluster mode.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Type b to boot the node into noncluster mode.
This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
On each node in the second partition, run the scinstall -u recover command.
# /usr/cluster/bin/scinstall -u recover
The command restores the original CCR information, restores the original /etc/vfstab file, and eliminates modifications for startup.
Boot each node of the second partition into cluster mode.
# shutdown -g0 -y -i6
When the nodes of the second partition come up, the second partition resumes supporting cluster data services while running the old software with the original configuration.
Restore the original software and configuration data from backup media to the nodes in the first partition.
Boot each node in the first partition into cluster mode.
# shutdown -g0 -y -i6
The nodes rejoin the cluster.
This section provides the following repair procedures to follow if changes were inadvertently made to the storage configuration during upgrade:
Any changes to the storage topology, including running Sun Cluster commands, should be completed before you upgrade the cluster to Solaris 9 or Solaris 10 software. If, however, changes were made to the storage topology during the upgrade, perform the following procedure. This procedure ensures that the new storage configuration is correct and that existing storage that was not reconfigured is not mistakenly altered.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster System Administration Guide for Solaris OS.
Ensure that the storage topology is correct. Check whether the devices that were flagged as possibly being replaced map to devices that actually were replaced. If the devices were not replaced, check for and correct possible accidental configuration changes, such as incorrect cabling.
On a node that is attached to the unverified device, become superuser.
Manually update the unverified device.
phys-schost# cldevice repair device
See the cldevice(1CL) man page for more information.
Update the DID driver.
phys-schost# scdidadm -ui
phys-schost# scdidadm -r
-u
Loads the device-ID configuration table into the kernel.
-i
Initializes the DID driver.
-r
Reconfigures the database.
Repeat Step 2 through Step 3 on all other nodes that are attached to the unverified device.
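The repeat step can be scripted as a loop over the remaining attached nodes. Everything in this sketch is illustrative: the node names, the DID device instance d4, and the use of ssh are assumptions, and the loop is a dry run that only prints the commands it would issue.

```shell
#!/bin/sh
# Dry run only: print the repair and DID-update commands for each of
# the other nodes attached to the unverified device.  Node names and
# the device instance are hypothetical.
DEVICE=d4
NODES="phys-schost-2 phys-schost-3"

plan=""
for node in $NODES; do
    for cmd in "cldevice repair $DEVICE" "scdidadm -ui" "scdidadm -r"; do
        plan="$plan
ssh $node $cmd"
        echo "ssh $node $cmd"
    done
done
```

Dropping the echo (and running the commands directly, or over a remote shell your site allows) would turn the dry run into the real repeat of Step 2 through Step 3 on each node.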
Return to the remaining upgrade tasks. Go to Step 4 in How to Upgrade Sun Cluster 3.2 Software (Standard).
If accidental changes are made to the storage cabling during the upgrade, perform the following procedure to return the storage configuration to the correct state.
This procedure assumes that no physical storage was actually changed. If physical or logical storage devices were changed or replaced, instead follow the procedures in How to Handle Storage Reconfiguration During an Upgrade.
Return the storage topology to its original configuration. Check the configuration of the devices that were flagged as possibly being replaced, including the cabling.
On each node of the cluster, become superuser.
Update the DID driver on each node of the cluster.
phys-schost# scdidadm -ui
phys-schost# scdidadm -r
-u
Loads the device-ID configuration table into the kernel.
-i
Initializes the DID driver.
-r
Reconfigures the database.
See the scdidadm(1M) man page for more information.
If the scdidadm command returned any error messages in Step 2, make further modifications as needed to correct the storage configuration, then repeat Step 2.
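The fix-and-retry cycle in this step is simply a loop that reruns the DID update until it exits cleanly. In the sketch below, update_did is a stand-in that simulates success on its third call; on a cluster node its body would instead run the real scdidadm -ui and scdidadm -r commands.

```shell
#!/bin/sh
# Sketch only: retry the DID update until it succeeds.  update_did is
# a stand-in for "scdidadm -ui && scdidadm -r"; here it simulates two
# failures before succeeding so the loop structure can be seen.
tries=0
update_did() {
    tries=$((tries + 1))
    [ "$tries" -ge 3 ]      # pretend the first two attempts fail
}

until update_did; do
    echo "scdidadm reported errors; correct the storage configuration and retry"
done
echo "DID update succeeded after $tries attempts"
```

In practice, each pass through the loop body is where you would make the further modifications to the storage configuration before retrying.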
Return to the remaining upgrade tasks. Go to Step 4 in How to Upgrade Sun Cluster 3.2 Software (Standard).