Oracle Solaris Cluster Upgrade Guide (Oracle Solaris Cluster 3.3 3/13)
1. Preparing to Upgrade Oracle Solaris Cluster Software
2. Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
3. Performing a Dual-Partition Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
4. Performing a Live Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
Performing a Live Upgrade of a Cluster
How to Upgrade Quorum Server Software
How to Prepare the Cluster for Upgrade (Live Upgrade)
How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 3/13 Software (Live Upgrade)
5. Performing a Rolling Upgrade
Performing a Live Upgrade of a Cluster
The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 3/13 software. You also perform these tasks to upgrade only the Oracle Solaris OS.
Note - If you upgrade the Oracle Solaris OS to a new marketing release, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new OS version.
Table 4-1 Task Map: Performing a Live Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
Upgrade Quorum Server software, if the cluster uses a quorum server. See How to Upgrade Quorum Server Software.
Prepare the cluster for upgrade. See How to Prepare the Cluster for Upgrade (Live Upgrade).
Upgrade the Oracle Solaris OS, Oracle Solaris Cluster framework, and data-service software by using the live upgrade method. See How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 3/13 Software (Live Upgrade).
Complete the upgrade. See Chapter 6, Completing the Upgrade.
How to Upgrade Quorum Server Software
If the cluster uses a quorum server, upgrade the Oracle Solaris Cluster Quorum Server software on the quorum server before you upgrade the cluster.
Note - If more than one cluster uses the quorum server, perform these steps for each of those clusters.
Perform all steps as superuser on the cluster and on the quorum server.
See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.
If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 3/13 version of Quorum Server software.
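For example, a temporary quorum server could be configured as a quorum device with a command similar to the following. The device name tempquorum matches the name that you remove at the end of this procedure; the host address and port shown here are placeholders for your temporary quorum server.
phys-schost# clquorum add -t quorum_server -p qshost=192.0.2.10,port=9000 tempquorum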
phys-schost# clquorum remove quorumserver
quorumserver# clquorumserver show +
If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.
Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.
quorumserver# clquorumserver stop +
quorumserver# cd /var/sadm/prod/SUNWentsysver
where ver is the version that is installed on your system.
quorumserver# ./uninstall
After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.
By default, this directory is /var/scqsd.
Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.
Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.
phys-schost# clquorum remove tempquorum
How to Prepare the Cluster for Upgrade (Live Upgrade)
Perform this procedure to prepare a cluster for live upgrade.
Before You Begin
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Oracle Solaris OS
Oracle Solaris Cluster 3.3 3/13 framework
Oracle Solaris Cluster 3.3 3/13 patches
Oracle Solaris Cluster 3.3 3/13 data services (agents)
Applications that are managed by Oracle Solaris Cluster 3.3 3/13 data services
Any other third-party applications to upgrade
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris Cluster commands. This series of upgrade procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
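As a quick check, you can list the authorizations that your current role holds and confirm that the required solaris.cluster authorizations appear. This check uses the standard Solaris auths(1) command; the exact output format varies by release.
phys-schost% auths | tr ',' '\n' | grep solaris.cluster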
phys-schost% cluster status
See the cluster(1CL) man page for more information.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
For uninstallation procedures, see the documentation for your version of Geographic Edition software.
Next Steps
Perform a live upgrade of the Oracle Solaris OS, Oracle Solaris Cluster 3.3 3/13 software, and other software. Go to How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 3/13 Software (Live Upgrade).
How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 3/13 Software (Live Upgrade)
Perform this procedure to upgrade the Oracle Solaris OS, volume-manager software, and Oracle Solaris Cluster software by using the live upgrade method. The Oracle Solaris Cluster live upgrade method uses the Oracle Solaris Live Upgrade feature. For information about live upgrade of the Oracle Solaris OS, refer to the following Oracle Solaris documentation:
Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning
If non-global zones are installed on the cluster, see Chapter 8, Upgrading the Oracle Solaris OS on a System With Non-Global Zones Installed, in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.
Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Oracle Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 3/13 software. See Supported Products in Oracle Solaris Cluster 3.3 3/13 Release Notes for more information.
Perform this procedure on each node in the cluster.
Tip - You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.
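For example, from the administrative console you might open console windows to all cluster nodes at once with a command such as the following. The node names phys-schost-1 and phys-schost-2 are placeholders for your own node names.
admin-console# cconsole phys-schost-1 phys-schost-2 &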
Before You Begin
Ensure that all steps in How to Prepare the Cluster for Upgrade (Live Upgrade) are completed.
Follow instructions in Live Upgrade System Requirements in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning and Installing Live Upgrade in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.
This name change is necessary for live upgrade software to recognize the global-devices file system. You will restore the DID names after the live upgrade is completed.
phys-schost# cp /etc/vfstab /etc/vfstab.old
Change the DID names to the physical names by changing /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.
Remove global from the entry.
The following example shows the names of DID device d3s3, which corresponds to /global/.devices/node@2, changed to its physical device names and the global entry removed:
Original:  /dev/did/dsk/d3s3  /dev/did/rdsk/d3s3  /global/.devices/node@2 ufs 2 no global
Changed:   /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /global/.devices/node@2 ufs 2 no -
phys-schost# lucreate options -n BE-name
-n BE-name
Specifies the name of the boot environment that is to be upgraded.
For information about important options to the lucreate command, see Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
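A representative invocation, using the placeholder disk slice and BE names from Example 4-1 (the original BE is named sc31u4 and the new BE is named sc33u2), resembles the following:
phys-schost# lucreate -c sc31u4 -m /:/dev/dsk/c0t4d0s0:ufs -n sc33u2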
If the cluster already runs on a properly patched version of the Oracle Solaris OS that supports Oracle Solaris Cluster 3.3 3/13 software, this step is optional.
Note - If you use Solaris Volume Manager software, run the following command:
phys-schost# luupgrade -u -n BE-name -s os-image-path
-u
Upgrades an operating system image on a boot environment.
-s os-image-path
Specifies the path name of a directory that contains an operating system image.
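For example, using the placeholder BE name and OS image path from Example 4-1:
phys-schost# luupgrade -u -n sc33u2 -s /net/installmachine/export/solaris10/OS_image/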
phys-schost# lumount -n BE-name -m BE-mount-point
-m BE-mount-point
Specifies the mount point of BE-name.
For more information, see Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning and the lumount(1M) man page.
You might need to patch your Oracle Solaris software to use Oracle Solaris Live Upgrade. For details about the patches that the Oracle Solaris OS requires and where to download them, see Upgrading a System With Packages or Patches in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.
However, if some software applications to upgrade cannot use Oracle Solaris Live Upgrade, such as Sun QFS software, wait to upgrade those applications until Step 21.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
phys-schost# ./scinstall -u update -R BE-mount-point
-u update
Specifies that you are performing an upgrade of Oracle Solaris Cluster software.
-R BE-mount-point
Specifies the mount point for your alternate boot environment.
For more information, see the scinstall(1M) man page.
phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all \
-d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
phys-schost# eject cdrom
Note - Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.
phys-schost# cp /etc/vfstab.old /etc/vfstab
/dev/dsk/cNtXdYsZ /dev/rdsk/cNtXdYsZ /global/.devices/node@N ufs 2 no global
When the node is rebooted into the upgraded alternate BE, the DID names are substituted in the /etc/vfstab file automatically.
phys-schost# luumount -n BE-name
phys-schost# luactivate BE-name
BE-name
The name of the alternate BE that you built in Step 3.
Note - Do not use the reboot or halt command. These commands do not activate a new BE.
phys-schost# shutdown -y -g0 -i0
Ensure that all nodes in the cluster are shut down before you boot nodes into noncluster mode.
ok boot -x
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
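On an x86 based system, the GRUB edit that these steps describe appends the -x option to the end of the kernel boot command. Assuming the default multiboot entry, the edited command resembles the following (the rest of the GRUB entry is unchanged):
grub edit> kernel /platform/i86pc/multiboot -x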
The screen displays the edited command.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
The upgraded BE now runs in noncluster mode.
Note - If an upgrade process directs you to reboot, always reboot into noncluster mode, as described in Step 20, until all upgrades are complete.
phys-schost# shutdown -g0 -y -i0
ok boot
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
The cluster upgrade is completed.
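To confirm that each node now runs the upgraded Oracle Solaris Cluster release, you can display the installed release and package versions on the node; the clnode show-rev subcommand is one way to do this, and the full verification procedure is in Chapter 6, Completing the Upgrade.
phys-schost# /usr/cluster/bin/clnode show-rev -v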
Example 4-1 Live Upgrade to Oracle Solaris Cluster 3.3 3/13 Software
This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Oracle Solaris 10 OS, Oracle Solaris Cluster 3.3 3/13 framework, and all Oracle Solaris Cluster data services that support the live upgrade method. In this example, sc31u4 is the original boot environment (BE). The new BE that is upgraded is named sc33u2 and uses the mount point /sc33u2. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Oracle Solaris 10 OS.
The following commands typically produce copious output. This output is shown only where necessary for clarity.
phys-schost# lucreate -c sc31u4 -m /:/dev/dsk/c0t4d0s0:ufs -n sc33u2
…
lucreate: Creation of Boot Environment sc33u2 successful.

phys-schost# luupgrade -u -n sc33u2 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc33u2 is complete.

Apply patches.

phys-schost# lumount sc33u2 /sc33u2

Insert the installation DVD-ROM.

phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc33u2
phys-schost# /sc33u2/usr/cluster/bin/scinstall -u update -s all \
-d /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents -R /sc33u2
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc33u2

phys-schost# luactivate sc33u2
Activation of boot environment sc33u2 successful.

Upgrade all other nodes.

Shut down all nodes.
phys-schost# shutdown -y -g0 -i0

When all nodes are shut down, boot each node into cluster mode.
ok boot
At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.
Troubleshooting
DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, even though the DID device name does exist, you must specify the device by its physical device name. Then change the vfstab entry on the alternate BE to use the DID device name instead. Perform the following steps:
1) For all unrecognized DID devices, specify the corresponding physical device names as arguments to the -m or -M option in the lucreate command. For example, if /global/.devices/node@nodeid is mounted on a DID device, use lucreate -m /global/.devices/node@nodeid:/dev/dsk/cNtXdYsZ:ufs [-m…] -n BE-name to create the BE.
2) Mount the inactive BE by using the lumount -n BE-name -m BE-mount-point command.
3) Edit the /BE-name/etc/vfstab file to convert the physical device name, /dev/dsk/cNtXdYsZ, to its DID device name, /dev/did/dsk/dNsX.
Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.
New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.
Next Steps
Go to Chapter 6, Completing the Upgrade.
See Also
You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.
If you used an unmirrored volume for your inactive BE, delete the old BE files. For specific information, see Deleting an Inactive Boot Environment in Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.
If you detached a plex to use as the inactive BE, reattach the plex and synchronize the mirrors. For more information about working with a plex, see the appropriate version of the procedure Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) for your original Oracle Solaris OS version.
You can also maintain the inactive BE. For information about how to maintain the environment, see the appropriate version of the procedure Maintaining Solaris Live Upgrade Boot Environments (Tasks), for your original Solaris OS version.