
Lift and Shift Guide - Migrating Workloads from Oracle Solaris 10 (ZFS) SPARC Systems to Oracle Solaris 10 Guest Domains


Updated: February 2020
 
 

Shift the Source Environment to the Target Guest Domain

This procedure deploys the FLAR on the target system.

  1. Log in to the target guest domain.

    The target guest domain name is solaris10 in this example. If the domain is not yet reachable over the network, a console login example follows this step.
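    One common way to reach a guest domain that is not yet on the network is through its virtual console from the control domain. This is a minimal sketch; the console port (5000 here) is an assumption, so use the port that ldm list reports for your domain.

    root@TargetControlDom# ldm list solaris10
    root@TargetControlDom# telnet localhost 5000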

  2. Ensure that these items are available on the shared storage (/ovas1):
    • FLAR (SourceGlobal.flar)

    • Oracle Solaris 10 ISO image (sol-10-u11-ga-sparc-dvd.iso)

    root@SourceGlobal# cd /ovas1
    root@SourceGlobal# ls -rtlh  *.gz  *.flar
    -rw-r--r--   1 root     root         11G Jul 30 07:58 dbzone.gz
    -rw-r--r--   1 root     root        4.8G Jul 30 08:08 dbzone_db_binary.gz
    -rw-r--r--   1 root     root        5.8G Jul 30 09:43 asm1.img.gz
    -rw-r--r--   1 root     root        5.8G Jul 30 09:45 asm2.img.gz
    -rw-r--r--   1 root     root         59M Jul 30 11:09 redo.ufsdump.gz
    -rw-r--r--   1 root     root        185M Jul 30 11:14 archive.ufsdump.gz
    -rw-r--r--   1 root     root        32G  Jul 30 13:14 SourceGlobal.flar
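    The listing above covers the compressed archives and the FLAR. You can also confirm that the ISO image is reachable and, optionally, verify its checksum. The path below assumes the ISO was staged under /ovas1/Downloads, matching the ldmp2vz_convert example in step 4.

    root@TargetGuestDom# ls -lh /ovas1/Downloads/sol-10-u11-ga-sparc-dvd.iso
    root@TargetGuestDom# digest -a md5 /ovas1/Downloads/sol-10-u11-ga-sparc-dvd.iso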
  3. Ensure that the source database zone (dbzone) zpools and application data have been copied to the target system using zfs send, as sketched after this step.

    Refer to instructions in Lift the Source Environment to the Shared Storage.
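    A minimal sketch of that transfer is shown below. The pool name dbzonepool and the snapshot name migrate are assumptions for illustration only; the actual names and the complete steps are in the Lift the Source Environment to the Shared Storage procedure.

    root@SourceGlobal# zfs snapshot -r dbzonepool@migrate
    root@SourceGlobal# zfs send -R dbzonepool@migrate | gzip > /ovas1/dbzone.gz
    root@TargetGuestDom# gzcat /ovas1/dbzone.gz | zfs receive -F dbzonepool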

  4. Deploy the source image in the target Oracle Solaris 10 guest domain.

    The ldmp2vz_convert command restores the source OS ZFS root file system to the target guest domain in a new boot environment. This restores the zone configuration and updates the new boot environment with SVR4 packages that are required for the sun4v target system.

    Syntax:

    ldmp2vz_convert -f FLAR_filename -i iso_filename -b boot_environment

    Where:

    • -f FLAR_filename – The file name of the FLAR created in Lift the Source Environment to the Shared Storage.

    • -i iso_filename – The file name of the Oracle Solaris 10 ISO image.

    • -b boot_environment – The name of the new boot environment that is created in the guest domain.

    Example:

    root@TargetGuestDom# cd /ovas1
    root@TargetGuestDom# ldmp2vz_convert -f SourceGlobal.flar -i ./Downloads/sol-10-u11-ga-sparc-dvd.iso -b mybe
    Renaming /var/yp to /var/yp.pre_p2v
    Running: lucreate -s - -n mybe -p rpool -o /var/log/ldmp2vz/1669/lucreate.log
    See /var/log/ldmp2vz/1669/lucreate.log for detailed output
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c0d0s0> is not a root device for any boot environment; cannot get BE ID.
    Running: zpool set autoexpand=on rpool
    Renaming zfs dataset rpool/export to rpool/export_pre_p2v
    Running: luupgrade -f -n mybe -s /tmp/p2v.sCa4qd/iso_unbundled -a /ovas1/SourceGlobal.flar -o /var/log/ldmp2vz/1669/luupgrade1.log
    See /var/log/ldmp2vz/1669/luupgrade1.log for detailed output
    67352 blocks
    cannot open 'dbzone_db_binary': dataset does not exist
    cannot open 'dbzone_db_binary': dataset does not exist
    /a
    INFO: Removing sun4u packages:
    SUNWcakr
    SUNWcar
    SUNWced
    SUNWcpc
    SUNWcpr
    SUNWdscpr
    SUNWdscpu
    SUNWefc
    SUNWifb
    SUNWjfb
    SUNWkvm
    SUNWm64
    SUNWnxge
    SUNWpfb
    SUNWpstl
    SUNWsckmu
    SUNWstc
    
    Removal of <SUNWcakr> was successful.
    Removal of <SUNWcar> was successful.
    Removal of <SUNWced> was successful.
    Removal of <SUNWcpc> was successful.
    Removal of <SUNWcpr> was successful.
    Removal of <SUNWdscpr> was successful.
    Removal of <SUNWdscpu> was successful.
    Removal of <SUNWefc> was successful.
    /a/dev/fb*: No such file or directory
    Removal of <SUNWifb> was successful.
    /a/dev/fb*: No such file or directory
    Removal of <SUNWjfb> was successful.
    Removal of <SUNWkvm> was successful.
    Removal of <SUNWm64> was successful.
    Removal of <SUNWnxge> was successful.
    Removal of <SUNWpfb> was successful.
    Removal of <SUNWpstl> was successful.
    Removal of <SUNWsckmu> was successful.
    Removal of <SUNWstc> was successful.
    Running: luupgrade -u -n mybe -s /tmp/p2v.sCa4qd/iso_unbundled -k /tmp/no-autoreg -o /var/log/ldmp2vz/1669/luupgrade2.log
    See /var/log/ldmp2vz/1669/luupgrade2.log for detailed output
    67352 blocks
    cannot open 'dbzone_db_binary': dataset does not exist
    /a
    Extracting configuration files
    x etc/defaultdomain, 11 bytes, 1 tape blocks
    x etc/nsswitch.conf, 434 bytes, 1 tape blocks
    x etc/pam.conf, 3454 bytes, 7 tape blocks
    x var/ldap, 0 bytes, 0 tape blocks
    x var/ldap/secmod.db, 131072 bytes, 256 tape blocks
    x var/ldap/Oracle_SSL_CA_G2.pem, 1830 bytes, 4 tape blocks
    x var/ldap/VTN-PublicPrimary-G5.pem, 1732 bytes, 4 tape blocks
    x var/ldap/cachemgr.log, 10274 bytes, 21 tape blocks
    x var/ldap/key3.db, 131072 bytes, 256 tape blocks
    x var/ldap/oracle_ssl_cert.pem, 2134 bytes, 5 tape blocks
    x var/ldap/ldap_client_cred, 204 bytes, 1 tape blocks
    x var/ldap/ldap_client_file, 2935 bytes, 6 tape blocks
    x var/ldap/cert8.db, 65536 bytes, 128 tape blocks
    Disabling Oracle Configuration Manager service in /a/var/svc/manifest/application/management/ocm.xml
    Running: luumount mybe
    Running: luactivate mybe
    A Live Upgrade Sync operation will be performed on startup of boot environment <mybe>.
    **********************************************************************
    
    The target boot environment has been activated. It will be used when you
    reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
    MUST USE either the init or the shutdown command when you reboot. If you
    do not use either init or shutdown, the system will not boot using the
    target BE.
    
    **********************************************************************
    
    In case of a failure while booting to the target BE, the following process
    needs to be followed to fallback to the currently working boot environment:
    
    1. Enter the PROM monitor (ok prompt).
    
    2. Boot the machine to Single User mode using a different boot device
    (like the Solaris Install CD or Network). Examples:
    
         At the PROM monitor (ok prompt):
         For boot to Solaris CD:  boot cdrom -s
         For boot to network:     boot net -s
    
    3. Mount the Current boot environment root slice to some directory (like
    /mnt). You can use the following commands in sequence to mount the BE:
    
         zpool import rpool
         zfs inherit -r mountpoint rpool/ROOT/s10s_u11wos_24a
         zfs set mountpoint=<mountpointName> rpool/ROOT/s10s_u11wos_24a
         zfs mount rpool/ROOT/s10s_u11wos_24a
    
    4. Run <luactivate> utility with out any arguments from the Parent boot
    environment root slice, as shown below:
    
         <mountpointName>/sbin/luactivate
    
    5. luactivate, activates the previous working boot environment and
    indicates the result.
    6. umount /mnt
    7. zfs set mountpoint=/ rpool/ROOT/s10s_u11wos_24a
    8. Exit Single User mode and reboot the machine.
    
    **********************************************************************
    
    Modifying boot archive service
    Activation of boot environment <mybe> successful.
    Run:
       init 6
    which will cause a reboot.
    After reboot run:
       svcadm disable ocm
       svcadm enable ldap/client
    Then install any desired patch(es). Follow instructions to reboot again
    if necessary. After rebooting, wait until the system boots to
    multi-user-server milestone.
    Check this using the command: svcs multi-user-server
    and look for state 'online'.
    
    Then copy and restore any required additional filesystems, for example non-ZFS filesystems, from system SourceGlobal.
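    Before rebooting, you can optionally confirm that the new boot environment is flagged as active on the next reboot; lustatus is one way to check this (output not shown here).

    root@TargetGuestDom# lustatus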
  5. Reboot the target guest domain.

    Note -  This reboots the target guest domain with the OS image from the source system. Driver and SMF service issues might appear on the target guest domain console; ignore them at this time. The driver issues are resolved by performing the Install the Oracle Solaris 10 Recommended Patch Set procedure. The SMF service issues are resolved in Complete the Target Guest Domain Configuration.
    root@TargetGuestDom# shutdown -i 6
  6. After the system reboots, log in to the target guest domain, disable the Oracle Configuration Manager service, and enable the LDAP client service.
    root@TargetGuestDom# svcadm disable ocm
    root@TargetGuestDom# svcadm enable ldap/client
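    As the ldmp2vz_convert output notes, wait for the system to reach the multi-user-server milestone before continuing. The following checks, which assume the abbreviated FMRIs resolve on your system as they do in the commands above, confirm the milestone and the two service changes:

    root@TargetGuestDom# svcs multi-user-server
    root@TargetGuestDom# svcs ocm ldap/client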
  7. Verify that the zones are present in the boot environment.
    root@TargetGuestDom# zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP
       0 global           running    /                              native   shared
       - webzone          configured /rpool/webzone                 native   shared
       - dbzone           configured /zones/dbzone                  native   shared
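    Optionally, confirm that the zone paths and any pools received in step 3 are present before proceeding. The pool name dbzonepool is the same assumption used in the earlier zfs send sketch:

    root@TargetGuestDom# zpool list
    root@TargetGuestDom# zonecfg -z dbzone info zonepath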
  8. From the target guest domain, mount the shared storage.
    root@TargetGuestDom# mount -F nfs TargetControlDom:/ovas1 /ovas1
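    To confirm the mount, a quick df check is enough:

    root@TargetGuestDom# df -h /ovas1

    If the mount should persist across reboots, an /etc/vfstab entry such as the following is one option; this is an assumption for convenience, not a required part of this procedure.

    TargetControlDom:/ovas1  -  /ovas1  nfs  -  yes  -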