Oracle Solaris Cluster Upgrade Guide
1. Preparing to Upgrade Oracle Solaris Cluster Software
2. Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 Software
3. Performing a Dual-Partition Upgrade to Oracle Solaris Cluster 3.3 Software
Performing a Dual-Partition Upgrade of a Cluster
How to Upgrade Quorum Server Software
How to Prepare the Cluster for Upgrade (Dual-Partition)
How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition)
How to Upgrade Oracle Solaris Cluster 3.3 Software (Dual-Partition)
4. Performing a Live Upgrade to Oracle Solaris Cluster 3.3 Software
5. Performing a Rolling Upgrade
6. Completing the Upgrade
7. Recovering From an Incomplete Upgrade
The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 software. You also perform these tasks to upgrade only the Solaris OS.
Note - If you upgrade the Solaris OS to a new marketing release, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new OS version.
Table 3-1 Task Map: Performing a Dual-Partition Upgrade to Oracle Solaris Cluster 3.3 Software
If the cluster uses a quorum server, upgrade the Quorum Server software on the quorum server before you upgrade the cluster.
Note - If more than one cluster uses the quorum server, perform these steps for each of those clusters.
Perform all steps as superuser on the cluster and on the quorum server.
See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.
If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 version of Quorum Server software.
phys-schost# clquorum remove quorumserver
quorumserver# clquorumserver show +
If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.
Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.
quorumserver# clquorumserver stop +
quorumserver# cd /var/sadm/prod/SUNWentsysver
ver
    The version that is installed on your system.
quorumserver# ./uninstall
After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.
By default, this directory is /var/scqsd.
Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.
Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.
phys-schost# clquorum remove tempquorum
Perform this procedure to prepare a multiple-node cluster for a dual-partition upgrade. These procedures refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot those nodes into cluster mode to rejoin the active cluster that was formed by the first partition.
Note - If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard) or How to Prepare the Cluster for Upgrade (Live Upgrade).
Perform all steps from the global zone only.
Before You Begin
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and patches for all software products that you are upgrading, including the following software:
Solaris OS
Oracle Solaris Cluster 3.3 framework
Oracle Solaris Cluster 3.3 patches
Oracle Solaris Cluster 3.3 data services (agents)
Applications that are managed by Oracle Solaris Cluster 3.3 data services
Veritas Volume Manager, if applicable
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for the location of patches and installation instructions.
If you use role-based access control (RBAC) instead of superuser to access the cluster nodes, ensure that you can assume an RBAC role that provides authorization for all Oracle Solaris Cluster commands. This series of upgrade procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
On Sun Cluster 3.1 8/05 software, use the following command:
phys-schost% scstat
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost% cluster status
See the scstat(1M) or cluster(1CL) man page for more information.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
A setting of RG_system=TRUE would restrict certain operations that the dual-partition software must perform.
phys-schost# clresourcegroup show -p RG_system
Make note of which resource groups to change. Save this list to use when you restore the setting after upgrade is completed.
phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
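Before you change the RG_system property, you need the list of resource groups that currently have it set to TRUE so that you can restore them after the upgrade. The following sketch parses the output of clresourcegroup show -p RG_system; the exact output layout assumed here (a Resource Group: line followed by an RG_system: line) is an assumption, so adjust the patterns to match the output on your cluster.

```shell
# Hypothetical helper: read 'clresourcegroup show -p RG_system' output on
# stdin and print the name of each resource group whose RG_system
# property is TRUE.  The two-line output layout is an assumption.
list_rg_system_true() {
    awk '
        /^Resource Group:/            { rg = $NF }   # remember current group
        /RG_system:/ && $NF == "TRUE" { print rg }   # report if TRUE
    '
}
```

A possible use, saving the list for the post-upgrade restore step: clresourcegroup show -p RG_system | list_rg_system_true > /var/tmp/rg_system.list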
For uninstallation procedures, see the documentation for your version of Geographic Edition software.
See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information about mediators.
phys-schost# medstat -s setname
-s setname
    Specifies the disk set name.
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.
Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software.
On Sun Cluster 3.1 8/05 software, use the following command:
phys-schost# scswitch -z -D setname -h node
-z
    Changes mastery.
-D setname
    Specifies the name of the disk set.
-h node
    Specifies the name of the node to become primary of the disk set.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# cldevicegroup switch -n node devicegroup
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname
    Specifies the disk set name.
-d
    Deletes from the disk set.
-m mediator-host-list
    Specifies the name of the node to remove as a mediator host for the disk set.
See the mediator(7D) man page for further information about mediator-specific options to the metaset command.
Otherwise, proceed to Step 8 to determine the partitioning scheme to use. You will determine which nodes each partition will contain, but you will interrupt the partitioning process before it begins. You will then compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
phys-schost# ./scinstall
Note - Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command on the installation DVD-ROM.
The scinstall Main Menu is displayed.
*** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
      * 3) Manage a dual-partition upgrade
      * 4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  3
The Manage a Dual-Partition Upgrade Menu is displayed.
Note - When you are prompted, Do you want to begin the dual-partition upgrade?, stop and do not respond yet, but do not exit the scinstall utility. You will respond to this prompt in Step 19 of this procedure.
For a two-node cluster, each node will be the only node in its partition.
When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.
On Sun Cluster 3.1 8/05 software, use the following command:
phys-schost# scrgadm -pv -g resourcegroup | grep "Res Group Nodelist"
-p
    Displays configuration information.
-v
    Displays in verbose mode.
-g resourcegroup
    Specifies the name of the resource group.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# clresourcegroup show -p nodelist

=== Resource Groups and Resources ===

Resource Group:     resourcegroup
  Nodelist:         node1 node2
…
On Sun Cluster 3.1 8/05 software, use the following command:
phys-schost# scrgadm -a -g resourcegroup -h nodelist
-a
    Adds a new configuration.
-h nodelist
    Specifies a comma-separated list of node names.
On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:
phys-schost# clresourcegroup add-node -n node resourcegroup
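The node-list requirement described above can be checked mechanically: a resource group can switch over during the dual-partition upgrade only if its node list contains at least one member of each partition. The following sketch encodes that test; the node names and partition contents passed to it are illustrative assumptions, and you would feed it the node lists reported by the commands shown above.

```shell
# Hypothetical check: succeed only if the resource group's node list
# (first argument) contains at least one node from each upgrade
# partition (second and third arguments, space-separated lists).
check_switchover() {
    nodelist=$1 part1=$2 part2=$3
    ok1=no ok2=no
    for n in $nodelist; do
        for p in $part1; do [ "$n" = "$p" ] && ok1=yes; done
        for p in $part2; do [ "$n" = "$p" ] && ok2=yes; done
    done
    # Both partitions must be represented for switchover to succeed.
    [ "$ok1" = yes ] && [ "$ok2" = yes ]
}
```

For example, check_switchover "node1 node2" "node1" "node2" succeeds, while a group whose node list names only first-partition nodes fails the check and needs its node list changed.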
When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 19.
Proceed to Step 19.
The command verifies that a remote installation method is available.
The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.
ok boot -x
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle Real Application Clusters before the nodes in the second partition are halted.
Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.
To stop applications that are running on more than one node in the partition, write the scripts accordingly.
Use any name and directory path for your scripts that you prefer.
/etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.
/etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.
The Oracle Solaris Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
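As a minimal sketch of what the body of a cluster_pre_halt_apps or cluster_post_halt_apps file might look like, the following wrapper calls each site-specific stop script found in a directory, in name order. The /opt/mysite/halt.d directory, the S-prefix naming, and the stop argument are all assumptions for illustration, not part of the Oracle Solaris Cluster interface; only the two /etc/cluster/ql file locations come from the procedure above.

```shell
# Hypothetical wrapper body: run every executable script in the given
# directory with a 'stop' argument, in lexical (S10..., S20...) order.
# Because the wrapper runs on one arbitrary node of the partition, each
# called script must itself handle all nodes in the partition.
run_stop_scripts() {
    dir=$1
    for script in "$dir"/S*; do
        if [ -x "$script" ]; then
            "$script" stop
        fi
    done
}
```

A cluster_pre_halt_apps file could then simply invoke run_stop_scripts /opt/mysite/halt.d (an assumed path) for the applications that must stop before RGM-controlled applications.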
Next Steps
Upgrade software on each node in the first partition.
To upgrade Solaris software before you perform Oracle Solaris Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).
If Oracle Solaris Cluster 3.3 software does not support the release of the Solaris OS that you currently run on your cluster, you must upgrade the Solaris software to a supported release. See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more information.
If Oracle Solaris Cluster 3.3 software supports the release of the Solaris OS that you currently run on your cluster, further Solaris software upgrade is optional.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 software. Go to How to Upgrade Oracle Solaris Cluster 3.3 Software (Dual-Partition).
Perform this procedure on each node in the cluster to upgrade the Solaris OS and optionally VxVM, if used. Perform all steps from the global zone only.
If the cluster already runs on a version of the Solaris OS that supports Oracle Solaris Cluster 3.3 software, further upgrade of the Solaris OS is optional. If you do not intend to upgrade the Solaris OS or VxVM, proceed to How to Upgrade Oracle Solaris Cluster 3.3 Software (Dual-Partition).
Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 software. See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more information.
Before You Begin
Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) are completed.
The node must be a member of the partition that is in noncluster mode.
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
Some applications, such as Oracle Solaris Cluster HA for Apache, require that Apache run control scripts be disabled.
If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.
If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
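The three cases above reduce to a simple rule: an uppercase K or S at the start of the file name means the run control script is enabled, and a lowercase k or s means it is disabled. A small helper (illustrative only, not part of the product) makes the classification explicit:

```shell
# Sketch: report whether a run control script is enabled or disabled,
# judging only by the case of the leading K/S in its file name.
rc_script_state() {
    case $(basename "$1") in
        [KS]*) echo enabled ;;
        [ks]*) echo disabled ;;
        *)     echo unknown ;;
    esac
}
```

For example, rc_script_state /etc/rc3.d/S50apache reports enabled, so for HA for Apache that script must be disabled in Step 7.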
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
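A minimal sketch of commenting out the global entries follows. It assumes the standard seven-field vfstab layout, with the mount options in the seventh field; it prints the edited file to stdout so you can review the result before replacing /etc/vfstab by hand.

```shell
# Hedged sketch: print the given vfstab file with every not-yet-commented
# entry whose mount options (field 7) include 'global' commented out,
# so the Solaris upgrade does not try to mount those file systems.
comment_global_entries() {
    awk '$7 ~ /(^|,)global(,|$)/ && $1 !~ /^#/ { print "#" $0; next }
         { print }' "$1"
}
```

You might run comment_global_entries /etc/vfstab > /var/tmp/vfstab.new, inspect the output, and only then copy it into place.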
To use Live Upgrade, go instead to Chapter 4, Performing a Live Upgrade to Oracle Solaris Cluster 3.3 Software.
To upgrade a cluster that uses Solaris Volume Manager by a method other than Live Upgrade, follow upgrade procedures in Solaris installation documentation.
To upgrade a cluster that uses Veritas Volume Manager by a method other than Live Upgrade, follow upgrade procedures in Veritas Storage Foundation installation documentation.
Note - If your cluster has VxVM installed and you are upgrading the Solaris OS, you must reinstall or upgrade to VxVM software that is compatible with the version of Oracle Solaris 10 you upgraded to.
Note - Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:
Reboot into noncluster mode in Step 8 to complete Solaris software upgrade.
Execute the following commands to boot a node into noncluster mode during Solaris upgrade:
phys-schost# reboot -- -x
or
ok boot -x
If the instruction says to run the init S command, use the reboot -- -xs command instead.
phys-schost# shutdown -g0 -y -i0
Press any key to continue
The GRUB menu appears similar to the following:
GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                                  |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
If the instruction says to run the init S command, shut down the system then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.
To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.
phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Alternatively, you can rename the scripts to be consistent with your normal administration practices.
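The mv commands above can also be expressed as one loop that lowercases the leading K or S of every Apache run control script under an alternate root. This is a sketch, assuming the standard K16apache/S50apache names and an alternate root such as /a; verify the resulting names before rebooting.

```shell
# Sketch: disable Apache run control scripts under the given alternate
# root (e.g. /a) by renaming the leading K/S to lowercase k/s.
disable_apache_rc() {
    root=$1
    for f in "$root"/etc/rc?.d/[KS]*apache; do
        [ -e "$f" ] || continue        # glob matched nothing
        dir=$(dirname "$f") base=$(basename "$f")
        case $base in
            K*) new=k${base#K} ;;
            S*) new=s${base#S} ;;
        esac
        mv "$f" "$dir/$new"
    done
}
```

Calling disable_apache_rc /a performs the same renames as the explicit commands above, including rcS.d, which the rc?.d pattern also matches.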
Include the double dashes (--) in the command:
phys-schost# reboot -- -x
Make the following changes to the procedure:
Uncomment all entries in the /a/etc/vfstab file that you commented out in Step 6.
phys-schost# reboot -- -rx
Note - If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command nor type Ctrl-D.
WARNING - Unable to repair the /global/.devices/node@1 filesystem.
Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the
shell when done to continue the boot process.

Type control-d to proceed with normal startup,
(or give root password for system maintenance):

Type the root password
Follow procedures that are provided in your VxFS documentation.
Note - Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Oracle Solaris Cluster software.
See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for the location of patches and installation instructions.
Next Steps
If you are already running Oracle Solaris Cluster 3.3 software and only upgrading the Oracle Solaris 10 OS to an Oracle Solaris 10 update release, you do not need to upgrade the Oracle Solaris Cluster software. Go to Chapter 6, Completing the Upgrade.
Otherwise, upgrade to Oracle Solaris Cluster 3.3 software. Go to How to Upgrade Oracle Solaris Cluster 3.3 Software (Dual-Partition).
Note - To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.
Perform this procedure to upgrade each node of the cluster to Oracle Solaris Cluster 3.3 software. You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 9 to Oracle Solaris 10 software.
Perform all steps from the global zone only.
Tip - You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.
Before You Begin
Perform the following tasks:
Ensure that all steps in How to Prepare the Cluster for Upgrade (Dual-Partition) are completed.
Ensure that the node you are upgrading belongs to the partition that is not active in the cluster and that the node is in noncluster mode.
If you upgraded to a new marketing release of the Solaris OS, such as from Solaris 9 to Oracle Solaris 10 software, ensure that all steps in How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition) are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
phys-schost# ./scinstall
Note - Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the installation DVD-ROM.
The scinstall Main Menu is displayed.
*** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
      * 3) Manage a dual-partition upgrade
      * 4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  4
The Upgrade Menu is displayed.
During the Oracle Solaris Cluster upgrade, scinstall might make one or more of the following configuration changes:
Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade processing is finished when the system displays the message Completed Oracle Solaris Cluster framework upgrade and prompts you to press Enter to continue.
You must upgrade all data services to the Oracle Solaris Cluster 3.3 version.
Note - For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.
phys-schost# /usr/cluster/bin/scinstall
Note - Do not use the scinstall utility that is on the installation media to upgrade data service packages.
The scinstall Main Menu is displayed.
The Upgrade Menu is displayed.
You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.
The Upgrade Menu is displayed.
phys-schost# eject cdrom
Note - If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
To disable LOFS, ensure that the /etc/system file contains the following entry:
exclude:lofs
This change becomes effective at the next system reboot.
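If you script this check, it should be idempotent so that repeated runs do not duplicate the entry. The following sketch appends exclude:lofs to a copy of /etc/system only when it is not already present; run it against the real file only when non-global zones do not require LOFS, as noted above.

```shell
# Sketch: idempotently ensure the given /etc/system-style file contains
# an 'exclude:lofs' line.  Takes effect only at the next reboot.
ensure_lofs_excluded() {
    file=$1
    grep -q '^exclude:lofs' "$file" || echo 'exclude:lofs' >> "$file"
}
```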
View the upgrade log file that is referenced at the end of the upgrade output messages.
Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster and Solaris software. See your application documentation for installation instructions.
If you want to upgrade VxVM and did not upgrade the Solaris OS, follow procedures in Veritas Storage Foundation installation documentation to upgrade VxVM without upgrading the operating system.
Note - If any upgrade procedure instructs you to perform a reboot, you must add the -x option to the boot command. This option boots the node into noncluster mode.
phys-schost# /usr/cluster/bin/scinstall
Note - Do not use the scinstall command that is located on the installation media. Only use the scinstall command that is located on the cluster node.
The scinstall Main Menu is displayed.
The command performs the following tasks, depending on which partition the command is run from:
First partition - The command halts each node in the second partition, one node at a time. When a node is halted, any services on that node are automatically switched over to a node in the first partition, provided that the node list of the related resource group contains a node in the first partition. After all nodes in the second partition are halted, the nodes in the first partition are booted into cluster mode and take over providing cluster services.
Caution - Do not reboot any node of the first partition again until after the upgrade is completed on all nodes. If you reboot a node of the first partition again before the second partition is upgraded and rebooted into the cluster, the upgrade might fail in an unrecoverable state.
Second partition - The command boots the nodes in the second partition into cluster mode, to join the active cluster that was formed by the first partition. After all nodes have rejoined the cluster, the command performs final processing and reports on the status of the upgrade.
If you upgraded from Sun Cluster 3.1 8/05 software and do not want to configure zone clusters, or if you upgraded from Sun Cluster 3.2 software, this task is optional.
ok boot -x
The GRUB menu appears similar to the following:
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                               |
| Solaris failsafe                                                     |
|                                                                      |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.
The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.
The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.
The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.
The clsetup utility will prompt for the new private-network IP address.
The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.
The default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 10 private networks, and up to 12 zone clusters in the cluster. If you choose to change the netmask, you specify in the following substeps the number of nodes and private networks that you expect in the cluster.
If you also expect to configure zone clusters, you specify that number in How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software, after all nodes are back in cluster mode.
Then skip to the next step.
When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.
From these numbers, the clsetup utility calculates two proposed netmasks:
The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.
The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.
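The two proposals follow a simple doubling pattern. The exact sizing formula that clsetup uses is not documented here; the following sketch only illustrates the idea, assuming each private network is a subnet that must hold every node plus the network and broadcast addresses. The function names and the approximation are illustrative, not Oracle's implementation.

```shell
# Illustrative approximation only -- not clsetup's documented formula.

bits_for() {
    # Smallest b such that 2^b >= $1.
    n=$1; b=0; v=1
    while [ "$v" -lt "$n" ]; do v=$((v * 2)); b=$((b + 1)); done
    echo "$b"
}

netmask_prefix() {
    # $1 = expected nodes, $2 = expected private networks.
    hosts=$(bits_for $(( $1 + 2 )))   # +2 for network/broadcast addresses
    nets=$(bits_for "$2")
    echo $(( 32 - hosts - nets ))
}

# First proposal: minimum range for the stated counts.
echo "minimum: /$(netmask_prefix 4 2)"
# Second proposal: twice the nodes and networks, for future growth.
echo "doubled: /$(netmask_prefix 8 4)"
```

Under this approximation, 4 nodes and 2 private networks yield a /28 minimum and a /26 doubled proposal; the netmasks that clsetup actually proposes may differ.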
Otherwise, if you are finishing upgrade of the second partition, proceed to How to Verify Upgrade of Oracle Solaris Cluster 3.3 Software.
ok boot -x
The GRUB menu appears similar to the following:
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| Solaris 10 /sol_10_x86                                               |
| Solaris failsafe                                                     |
|                                                                      |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.
The GRUB boot parameters screen appears similar to the following:
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot                                     |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
[ Minimal BASH-like line editing is supported. For the first word,
  TAB lists possible command completions. Anywhere else TAB lists
  the possible completions of a device/filename. ESC at any time
  exits. ]

grub edit> kernel /platform/i86pc/multiboot -x
The screen displays the edited command.
GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a)                                                       |
| kernel /platform/i86pc/multiboot -x                                  |
| module /platform/i86pc/boot_archive                                  |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.
Note - This change to the kernel boot parameter command does not persist across system boots. The next time you reboot the node, it boots into cluster mode. To boot into noncluster mode instead, repeat these steps to add the -x option to the kernel boot parameter command.
To upgrade Solaris software before you perform Oracle Solaris Cluster software upgrade, go to How to Upgrade the Solaris OS and Volume Manager Software (Dual-Partition).
Otherwise, upgrade Oracle Solaris Cluster software on the second partition. Return to Step 1.
phys-schost# clresourcegroup set -p RG_system=TRUE resourcegroup
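To confirm that the RG_system property change took effect, you can display the property afterward; this is a suggested check, not a step from the original procedure. The `+` operand expands to all resource groups; substitute a specific resource-group name to check just one.

```
phys-schost# clresourcegroup show -p RG_system +
```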
Next Steps
Go to Chapter 6, Completing the Upgrade.
Troubleshooting
If you experience an unrecoverable error during dual-partition upgrade, perform recovery procedures in How to Recover from a Failed Dual-Partition Upgrade.