Oracle Solaris Cluster Upgrade Guide: Oracle Solaris Cluster 3.3 3/13
1. Preparing to Upgrade Oracle Solaris Cluster Software
2. Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
Performing a Standard Upgrade of a Cluster
How to Upgrade Quorum Server Software
How to Prepare the Cluster for Upgrade (Standard Upgrade)
How to Upgrade the Solaris OS and Volume Manager Software (Standard Upgrade)
How to Upgrade Oracle Solaris Cluster 3.3 3/13 Software (Standard Upgrade)
3. Performing a Dual-Partition Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
4. Performing a Live Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
5. Performing a Rolling Upgrade
The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 3/13 software. You also perform these tasks to upgrade only the Oracle Solaris OS.
Note - If you upgrade the Oracle Solaris OS to a new marketing release, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new OS version.
Table 2-1 Task Map: Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
If the cluster uses a quorum server, upgrade the Oracle Solaris Cluster Quorum Server software on the quorum server before you upgrade the cluster.
Note - If more than one cluster uses the quorum server, perform the steps to remove the quorum server, and later the steps to add it back, on each of those clusters.
Perform all steps as superuser on the cluster and on the quorum server.
See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.
If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 3/13 version of Quorum Server software.
phys-schost# clquorum remove quorumserver
quorumserver# clquorumserver show +
If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.
Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.
quorumserver# clquorumserver stop +
quorumserver# cd /var/sadm/prod/SUNWentsysver
In this path, ver is the version that is installed on your system.
quorumserver# ./uninstall
After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.
By default, this directory is /var/scqsd.
Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.
Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.
phys-schost# clquorum remove tempquorum
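Taken together, the quorum server steps above can be summarized in a dry-run sketch. The device names quorumserver and tempquorum are the examples used in this procedure, and the RUN wrapper is an assumption added here so that the commands are printed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the quorum server upgrade sequence shown above.
# RUN defaults to "echo" so each command is printed, not executed;
# clear it (RUN=) to run the commands for real on a live system.
RUN=${RUN:-echo}

upgrade_quorum_server() {
    # On a cluster node: remove the quorum server as a quorum device.
    $RUN clquorum remove quorumserver

    # On the quorum server host: confirm that no cluster is still
    # served, then stop all quorum server instances.
    $RUN clquorumserver show +
    $RUN clquorumserver stop +

    # Uninstall the old Quorum Server software (SUNWentsysver is the
    # version directory that is installed on your system).
    $RUN sh -c 'cd /var/sadm/prod/SUNWentsysver && ./uninstall'

    # After the 3.3 3/13 Quorum Server software is reinstalled and the
    # quorum device is reconfigured, remove any temporary quorum device.
    $RUN clquorum remove tempquorum
}

upgrade_quorum_server
```

Run with RUN left at its default to review the sequence; the cluster commands themselves are documented in the procedure above.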
Perform this procedure to remove the cluster from production before you perform a standard upgrade. Perform all steps from the global zone only.
Before You Begin
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Oracle Solaris OS
Oracle Solaris Cluster 3.3 3/13 framework
Oracle Solaris Cluster 3.3 3/13 patches
Oracle Solaris Cluster 3.3 3/13 data services (agents)
Applications that are managed by Oracle Solaris Cluster 3.3 3/13 data services
Any other third-party applications to upgrade
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris Cluster commands. This series of upgrade procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
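As a quick sanity check before you begin, the three required authorizations can be compared against the output of the Solaris auths(1) command. The helper below is an illustrative sketch, not an Oracle-provided tool; it takes the auths output as a string so the check is visible:

```shell
#!/bin/sh
# Sketch: confirm the three RBAC authorizations this procedure needs.
# On a cluster node you might run:  check_cluster_auths "$(auths)"
# The helper name and the substring match are illustrative assumptions.
check_cluster_auths() {
    out=$1    # comma-separated authorization list, as printed by auths
    for a in solaris.cluster.modify solaris.cluster.admin solaris.cluster.read
    do
        case $out in
            *"$a"*) ;;                        # authorization present
            *) echo "missing: $a"; return 1 ;;
        esac
    done
    echo "all required authorizations present"
}
```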
phys-schost% cluster status
See the cluster(1CL) man page for more information.
# clresourcegroup offline -Z zonecluster resource-group
# clresource disable -Z zonecluster resource
# clresourcegroup unmanage -Z zonecluster resource-group
For uninstallation procedures, see the documentation for your version of Geographic Edition software.
Take offline all resource groups in the cluster, including those that are in non-global zones. Then disable all resources, to prevent the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.
phys-schost# clsetup
The Main Menu is displayed.
The Resource Group Menu is displayed.
Type q to back out of each submenu or press Ctrl-C.
phys-schost# clresourcegroup offline resource-group
phys-schost# clresource show -p Enabled

=== Resources ===

Resource:             resource
  Enabled{nodename1}: True
  Enabled{nodename2}: True
…
phys-schost# clresource show -p resource_dependencies

=== Resources ===

Resource:                node
  Resource_dependencies: node
…
You must disable dependent resources before you disable the resources that they depend on.
phys-schost# clresource disable resource
See the clresource(1CL) man page for more information.
phys-schost# clresource show -p Enabled

=== Resources ===

Resource:             resource
  Enabled{nodename1}: False
  Enabled{nodename2}: False
…
phys-schost# clresourcegroup unmanage resource-group
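The offline, disable, and unmanage steps for one resource group can be combined into a small helper. This is a sketch of the command sequence shown above; the function name and arguments are illustrative, and dependent resources must be listed before the resources that they depend on:

```shell
#!/bin/sh
# Sketch: quiesce one resource group before upgrade, using the
# commands shown in this procedure. Pass the group name followed by
# the names of its resources (dependents first).
quiesce_resource_group() {
    rg=$1; shift
    clresourcegroup offline "$rg"     # take the group offline
    for rs in "$@"; do
        clresource disable "$rs"      # disable each resource in order
    done
    clresourcegroup unmanage "$rg"    # move the group to unmanaged
}
```

For example, quiesce_resource_group app-rg web-rs db-rs disables web-rs before db-rs and then unmanages app-rg.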
phys-schost# cluster status -t resource,resourcegroup
phys-schost# cluster shutdown -g0 -y
See the cluster(1CL) man page for more information.
ok boot -x
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The screen displays the edited command.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
Next Steps
Upgrade software on each node.
To upgrade Oracle Solaris software before you perform Oracle Solaris Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Standard Upgrade).
You must upgrade the Oracle Solaris software to a supported release if Oracle Solaris Cluster 3.3 3/13 software does not support the release of the Oracle Solaris OS that your cluster currently runs. See Supported Products in Oracle Solaris Cluster 3.3 3/13 Release Notes for more information.
If Oracle Solaris Cluster 3.3 3/13 software supports the release of the Oracle Solaris OS that you currently run on your cluster, further Oracle Solaris software upgrade is optional.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 3/13 software. Go to How to Upgrade Oracle Solaris Cluster 3.3 3/13 Software (Standard Upgrade).
Perform this procedure on each node in the cluster to upgrade the Oracle Solaris OS. Perform all steps from the global zone only. If the cluster already runs on a version of the Oracle Solaris OS that supports Oracle Solaris Cluster 3.3 3/13 software, further upgrade of the Oracle Solaris OS is optional.
If you do not intend to upgrade the Oracle Solaris OS or volume management software, proceed to How to Upgrade Oracle Solaris Cluster 3.3 3/13 Software (Standard Upgrade).
Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Oracle Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 3/13 software. See Supported Products in Oracle Solaris Cluster 3.3 3/13 Release Notes for more information.
Before You Begin
Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard Upgrade) are completed.
If you are performing a dual-partition upgrade, the node must be a member of the partition that is in noncluster mode.
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
Some applications, such as Oracle Solaris Cluster HA for Apache, require that Apache run control scripts be disabled.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.
If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts that are installed during the Oracle Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 7 you must ensure that any Apache run control scripts that are installed during the Oracle Solaris OS upgrade are disabled.
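The naming convention above can be checked mechanically. The helper below is only a sketch of the rule described in this procedure, not an Oracle-provided tool; it classifies a run control script by the case of the first letter of its file name:

```shell
#!/bin/sh
# Sketch: report whether an Apache run control script is enabled
# (name starts with uppercase K or S) or disabled (lowercase k or s).
apache_script_state() {
    name=${1##*/}          # strip the directory, keep the file name
    case $name in
        [KS]*) echo "enabled"  ;;
        [ks]*) echo "disabled" ;;
        *)     echo "unknown"  ;;
    esac
}
```

For example, apache_script_state /etc/rc3.d/S50apache prints enabled.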
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Oracle Solaris upgrade from attempting to mount the global devices.
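Commenting out the global entries can be done with a short sed filter. This sketch writes to standard output so that you can inspect the result before replacing /etc/vfstab; note that it matches any uncommented line containing the string "global", which can be broader than only the mount-options field, and it assumes GNU-compatible sed brace syntax:

```shell
#!/bin/sh
# Sketch: print a copy of a vfstab file with every uncommented entry
# that mentions "global" commented out.
comment_global_mounts() {
    # For lines that are not already comments, prefix '#' to any line
    # that contains the string "global".
    sed '/^#/!{/global/s/^/#/;}' "$1"
}
```

A possible workflow: comment_global_mounts /etc/vfstab > /etc/vfstab.upgrade, review the result, then copy it back into place.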
To use Live Upgrade, go instead to Chapter 4, Performing a Live Upgrade to Oracle Solaris Cluster 3.3 3/13 Software.
To upgrade a cluster that uses Solaris Volume Manager by a method other than Live Upgrade, follow upgrade procedures in Oracle Solaris installation documentation.
Note - Do not perform the final reboot instruction in the Oracle Solaris software upgrade. Instead, do the following:
Reboot into noncluster mode in Step 8 to complete Oracle Solaris software upgrade.
When prompted, choose the manual reboot option.
When you are instructed to reboot a node during the upgrade process, always reboot into noncluster mode. For the boot and reboot commands, add the -x option to the command. The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:
phys-schost# reboot -- -xs
or
ok boot -xs
If the instruction says to run the init S command, use the reboot -- -xs command instead.
phys-schost# shutdown -g0 -y -i0
Press any key to continue
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The screen displays the edited command.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system and then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.
phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Alternatively, you can rename the scripts to be consistent with your normal administration practices.
Include the double dashes (--) in the following command:
phys-schost# reboot -- -x
Note - Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Oracle Solaris Cluster software.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
Next Steps
If you are only upgrading the Oracle Solaris OS to an Oracle Solaris update release and are not upgrading the Oracle Solaris Cluster software, skip to Chapter 6, Completing the Upgrade.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 3/13 software. Go to How to Upgrade Oracle Solaris Cluster 3.3 3/13 Software (Standard Upgrade).
Perform this procedure to upgrade each node of the cluster to Oracle Solaris Cluster 3.3 3/13 software.
Perform all steps from the global zone only.
Tip - You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.
Before You Begin
Perform the following tasks:
Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard Upgrade) are completed.
Ensure that you have installed all required Oracle Solaris software patches and hardware-related patches.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
phys-schost# ./scinstall
Note - Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the installation DVD-ROM.
The scinstall Main Menu is displayed.
*** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
      * 3) Manage a dual-partition upgrade
      * 4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  4
The Upgrade Menu is displayed.
During the Oracle Solaris Cluster upgrade, scinstall might make one or more of the following configuration changes:
Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
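You can confirm the local-mac-address? setting yourself. On SPARC systems the standard eeprom command prints the variable as name=value; the tiny parser below is a sketch that checks a captured line of that output:

```shell
#!/bin/sh
# Sketch: check a captured line of `eeprom 'local-mac-address?'`
# output. On a node:  local_mac_state "$(eeprom 'local-mac-address?')"
local_mac_state() {
    case $1 in
        *=true) echo "ok" ;;                 # already set correctly
        *)      echo "set it with: eeprom 'local-mac-address?=true'" ;;
    esac
}
```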
Upgrade processing is finished when the system displays the message Completed Oracle Solaris Cluster framework upgrade and prompts you to press Enter to continue.
You must upgrade all data services to the Oracle Solaris Cluster 3.3 3/13 version.
Note - For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.
phys-schost# /usr/cluster/bin/scinstall
Note - Do not use the scinstall utility that is on the installation media to upgrade data service packages.
The scinstall Main Menu is displayed.
The Upgrade Menu is displayed.
You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.
The Upgrade Menu is displayed.
phys-schost# eject cdrom
Note - If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Planning Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
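An idempotent way to add the entry is to append it only when it is absent. This sketch takes the file path as an argument so that it can be tried on a copy of /etc/system first:

```shell
#!/bin/sh
# Sketch: ensure the exclude:lofs entry is present exactly once.
ensure_lofs_excluded() {
    f=$1
    grep -q '^exclude:lofs' "$f" || echo 'exclude:lofs' >> "$f"
}
```

Remember that if non-global zones are configured, LOFS must remain enabled, so use this only when the note above does not apply.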
View the upgrade log file that is referenced at the end of the upgrade output messages.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
Note - If any upgrade procedure instructs you to perform a reboot, you must add the -x option to the boot command. This option boots the node into noncluster mode.
Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster and Oracle Solaris software. See your application documentation for installation instructions.
phys-schost# shutdown -g0 -y
ok boot
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
Next Steps