Oracle Solaris Cluster Upgrade Guide     Oracle Solaris Cluster 3.3 3/13


Cluster Recovery After an Incomplete Upgrade

This section provides information to recover from an incomplete upgrade of an Oracle Solaris Cluster configuration. It contains the following procedures:

How to Recover from a Failed Dual-Partition Upgrade

x86: How to Recover From a Partially Completed Dual-Partition Upgrade

How to Recover from a Failed Dual-Partition Upgrade

If you experience an unrecoverable error during a dual-partition upgrade, perform this procedure to back out of the upgrade.


Note - You cannot restart a dual-partition upgrade after the upgrade has experienced an unrecoverable error.


  1. Become superuser on each node of the cluster.
  2. Boot each node into noncluster mode.
    • On SPARC based systems, run the following command:
      ok boot -x
    • On x86 based systems, perform the following steps:
      1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
      3. Add -x to the command to specify that the system boot into noncluster mode.
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  3. On each node, run the upgrade recovery script from the installation media.

    You can alternatively run the scinstall command from the /usr/cluster/bin directory.

    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    phys-schost# ./scinstall -u recover
    -u

    Specifies upgrade.

    recover

    Restores the /etc/vfstab file and the Cluster Configuration Repository (CCR) database to their original state before the start of the dual-partition upgrade.

    The recovery process leaves the cluster nodes in noncluster mode. Do not attempt to reboot the nodes into cluster mode.

    For more information, see the scinstall(1M) man page.
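The media path in the example above uses the arch and ver placeholders. As a hedged sketch, and not part of any Oracle Solaris Cluster tool, a small shell function can build that path from the output of uname -p and uname -r; the /cdrom/cdrom0 mount point and the directory layout shown above are assumptions about your media:

```shell
# Build the Tools directory path on the installation media from the node's
# processor type and Solaris release. The layout mirrors the example above;
# the /cdrom/cdrom0 mount point is an assumption.
tools_path() {
    proc=$1      # output of `uname -p`: sparc or i386
    rel=$2       # output of `uname -r`: for example 5.10
    case "$proc" in
        sparc) arch=sparc ;;
        *)     arch=x86 ;;
    esac
    ver=${rel#5.}                      # 5.10 -> 10
    printf '/cdrom/cdrom0/Solaris_%s/Product/sun_cluster/Solaris_%s/Tools\n' \
        "$arch" "$ver"
}

# Example: a Solaris 10 x86 node
tools_path i386 5.10
```

On a live node you would call tools_path "$(uname -p)" "$(uname -r)", change to the resulting directory, and run ./scinstall -u recover as shown above.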

  4. Perform either of the following tasks.
    • Restore the old software from backup to return the cluster to its original state.
    • Continue to upgrade software on the cluster by using the standard upgrade method.

      This method requires that all cluster nodes remain in noncluster mode during the upgrade. See the task map for standard upgrade, Table 2-1. You can resume the upgrade at the last task or step in the standard upgrade procedures that you successfully completed before the dual-partition upgrade failed.

x86: How to Recover From a Partially Completed Dual-Partition Upgrade

Perform this procedure if a dual-partition upgrade fails before upgrade processes have begun on the second partition.

You can also perform this procedure if the upgrade has succeeded on the first partition but you want to back out of the upgrade.


Note - Do not perform this procedure after dual-partition upgrade processes have begun on the second partition. Instead, perform How to Recover from a Failed Dual-Partition Upgrade.


Before You Begin

Before you begin, ensure that all second-partition nodes are halted. First-partition nodes can be either halted or running in noncluster mode.

Perform all steps as superuser.

  1. Boot each node in the second partition into noncluster mode by completing Steps 2 through 6.
  2. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

    For more information about GRUB-based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.

  3. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
  4. Add the -x option to the command to specify that the system boot into noncluster mode.
    phys-schost# grub edit> kernel /platform/i86pc/multiboot -x
  5. Press Enter to accept the change and return to the boot parameters screen.

    The screen displays the edited command.

  6. Type b to boot the node into noncluster mode.

    Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  7. On each node in the second partition, run the scinstall -u recover command.
    phys-schost# /usr/cluster/bin/scinstall -u recover

    The command restores the original CCR information, restores the original /etc/vfstab file, and removes the startup modifications that were made for the upgrade.
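If you saved a copy of /etc/vfstab before starting the upgrade, a quick sanity check that recovery restored it is to compare the two files. The following is an illustrative sketch; the pre-upgrade backup path is an assumption, and no Oracle Solaris Cluster tool creates such a copy for you:

```shell
# Compare a pre-upgrade copy of a file against its restored version.
# $1 = pre-upgrade copy (an assumed, manually made backup)
# $2 = restored file, for example /etc/vfstab
restore_check() {
    if diff "$1" "$2" >/dev/null 2>&1; then
        echo "restored file matches pre-upgrade copy"
    else
        echo "restored file differs from pre-upgrade copy"
    fi
}

# Example: restore_check /var/tmp/vfstab.pre-upgrade /etc/vfstab
```

A difference is not necessarily an error, but it tells you to inspect the restored file before rebooting the nodes.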

  8. Boot each node of the second partition into cluster mode.
    phys-schost# shutdown -g0 -y -i6

    When the nodes of the second partition come up, the second partition resumes supporting cluster data services while running the old software with the original configuration.

  9. Restore the original software and configuration data from backup media to the nodes in the first partition.
  10. Boot each node in the first partition into cluster mode.
    phys-schost# shutdown -g0 -y -i6

    The nodes rejoin the cluster.
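After the final reboots, you can confirm that every node has rejoined the cluster with the clnode status command. The filter below is a sketch that assumes clnode status prints one row per node ending in Online or Offline; verify the output format on your release before relying on it:

```shell
# Succeed only if no node in the piped-in `clnode status` output is Offline.
# Assumes one "node-name ... Online|Offline" row per node.
all_nodes_online() {
    ! grep -iq offline
}

# Example:
#   /usr/cluster/bin/clnode status | all_nodes_online && echo "all nodes online"
printf 'phys-schost-1  Online\nphys-schost-2  Online\n' | all_nodes_online \
    && echo "all nodes online"
```

If the check fails, run clnode status without the filter to see which node has not rejoined.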