Solaris 10 10/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Procedure: How to Create a ZFS Boot Environment Within the Same ZFS Root Pool

  1. Complete the following steps the first time you perform a Solaris Live Upgrade.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.


      Note –

      The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a previous release, you do not need to remove this package.



      # pkgrm SUNWlucfg SUNWluu SUNWlur
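
      If you are not sure which Solaris Live Upgrade packages are installed, you can query the package database before running pkgrm. This check is optional; pkginfo reports an error for any listed package that is not installed.

      # pkginfo SUNWlucfg SUNWluu SUNWlur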
      
    2. Install the new Solaris Live Upgrade packages. For instructions, see Installing Solaris Live Upgrade.
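
      As a sketch, assuming the packages for the target release are available under /cdrom/cdrom0/Solaris_10/Product (a typical location on the installation media; adjust the path for your media or network install image), the installation could look like the following:

      # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu  # example media path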

    3. Before installing or running Solaris Live Upgrade, you are required to install a limited set of patch revisions. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • Become superuser or assume an equivalent role.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
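
        For example, to create the directory:

        # mkdir -p /var/tmp/lupatches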

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd patch_id
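        # patchadd 111111-01 222222-01  # hypothetical patch IDs shown only to illustrate the syntax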
        

        patch_id is the patch number or numbers. Separate multiple patch numbers with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.


        # init 6
        
  2. Create the new boot environment.


    # lucreate [-c zfsBE] -n new-zfsBE
    
    -c zfsBE

    Assigns the name zfsBE to the current boot environment. This option is not required and is used only when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    -n new-zfsBE

    Assigns the name to the boot environment to be created. The name must be unique on the system.

    The creation of the new boot environment is almost instantaneous. A snapshot is created of each dataset in the current ZFS root pool, and a clone is then created from each snapshot. Snapshots are very disk-space efficient, and this process uses minimal disk space. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
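
    For example, after you verify the new boot environment in the steps that follow, it could be upgraded and then activated as shown in this sketch, where os_image_path stands for the path to a Solaris installation image:

    # luupgrade -u -n new-zfsBE -s os_image_path
    # luactivate new-zfsBE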

  3. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    Boot Environment   Is        Active  Active     Can      Copy
    Name               Complete  Now     On Reboot  Delete   Status
    ----------------------------------------------------------------
    zfsBE              yes       yes     yes        no       -
    new-zfsBE          yes       no      no         yes      -
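
    You can also list the file systems that belong to a particular boot environment with the lufslist command, for example:

    # lufslist new-zfsBE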
  4. (Optional) Verify the basic dataset information on the system.

    In this example, the ZFS root pool is named rpool, and the @ symbol indicates a snapshot. The new boot environment mount points are temporary until the luactivate command is executed. The dump and swap volumes, rpool/dump and rpool/swap, reside in the ZFS root pool and are shared by the boot environments within that pool.


    # zfs list
    NAME                                      USED  AVAIL  REFER  MOUNTPOINT 
    rpool                                    9.29G  57.6G    20K  /rpool 
    rpool/ROOT                               5.38G  57.6G    18K  /rpool/ROOT 
    rpool/ROOT/zfsBE                         5.38G  57.6G   551M  
    rpool/ROOT/zfsBE@new-zfsBE               66.5K      -   551M  -
    rpool/ROOT/new-zfsBE                     85.5K  57.6G   551M  /tmp/.alt.103197
    rpool/dump                               1.95G      -  1.95G  - 
    rpool/swap                               1.95G      -  1.95G  - 
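
    To display only the snapshot that lucreate took of the original boot environment, you can restrict the zfs list output to snapshots, for example:

    # zfs list -t snapshot -r rpool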

    You can now upgrade and activate the new boot environment. See Example 13–2.


Example 13–2 Creating a Boot Environment Within the Same ZFS Root Pool

The following commands create a new ZFS boot environment, new-zfsBE. The -p option is not required because the boot environment is being created within the same root pool.


# lucreate -c zfsBE -n new-zfsBE
Analyzing system configuration.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Creating configuration for boot environment new-zfsBE.
Source boot environment is zfsBE.
Creating boot environment new-zfsBE.
Cloning file systems from boot environment zfsBE to create 
boot environment new-zfsBE.
Creating snapshot for <rpool> on <rpool>.
Creating clone for <rpool>.
Setting canmount=noauto for <rpool> in zone <global> on <rpool>. 
Population of boot environment <new-zfsBE> successful on <rpool>.
# lustatus
Boot Environment   Is        Active  Active     Can      Copy
Name               Complete  Now     On Reboot  Delete   Status
----------------------------------------------------------------
zfsBE              yes       yes     yes        no       -
new-zfsBE          yes       no      no         yes      -
# zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT 
rpool                                    9.29G  57.6G    20K  /rpool 
rpool/ROOT                               5.38G  57.6G    18K  /rpool/ROOT 
rpool/ROOT/zfsBE                         5.38G  57.6G   551M  
rpool/ROOT/zfsBE@new-zfsBE               66.5K      -   551M  - 
rpool/ROOT/new-zfsBE                     85.5K  57.6G   551M  /tmp/.alt.103197 
rpool/dump                               1.95G      -  1.95G  - 
rpool/swap                               1.95G      -  1.95G  - 

You can now upgrade and activate the new boot environment. For an example of upgrading a ZFS boot environment, see Example 13–1. For more examples of using the luupgrade command, see Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.


# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.