
Lift and Shift Guide - Migrating Workloads from Oracle Solaris 10 SPARC Systems to Oracle Solaris 10 Branded Zones


Updated: February 2020
 
 

Prepare the Target System

Use this procedure to ensure that the target system is configured to provide CPU, memory, networking, and storage resources for the incoming source environment.

  • CPU Resources – During the shift, you can assign the solaris10 branded zone whatever amount of CPU resources is appropriate for the workload (the CPU and memory caps are set with zonecfg during the shift; see the sketch after this list). However, prior to the lift and shift, you must ensure that those CPU resources are available as described in this procedure.

    If you are uncertain about the CPU utilization of the workload on the target system, then the target system should provide at least the same CPU and memory resources that the workload had on the source system. This conservative approach helps maintain the same or better performance level of the workload after the migration. Conversely, if CPU utilization is expected to be significantly lower on the target system, for example because the target system has faster CPUs, then the target system can provide fewer CPU resources to the branded zone. In some cases, using fewer CPU cores reduces software licensing costs.

  • Memory Resources – By default, the target solaris10 branded zone is allocated the same amount of memory that the source environment had. Ensure that at least that amount of memory is available on the target system, as described in this procedure.

  • Storage Resources – The number and sizes of the virtual disks on the target must match the source system's disk configuration, as described in this procedure.

  • Network Resources – A network data link must be created in the global zone for the solaris10 branded zone to use as its network interface. For link redundancy, that data link can be created over a link aggregation in the global zone.
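
The CPU and memory caps themselves are applied when the branded zone is configured during the shift, not in this procedure. The following zonecfg sketch only illustrates where those caps are set; the zone name TargetS10bz and the values shown are examples, not values prescribed by this guide.

    root@TargetGlobal# zonecfg -z TargetS10bz
    zonecfg:TargetS10bz> add dedicated-cpu
    zonecfg:TargetS10bz:dedicated-cpu> set ncpus=16
    zonecfg:TargetS10bz:dedicated-cpu> end
    zonecfg:TargetS10bz> add capped-memory
    zonecfg:TargetS10bz:capped-memory> set physical=64g
    zonecfg:TargetS10bz:capped-memory> set swap=80g
    zonecfg:TargetS10bz:capped-memory> end
    zonecfg:TargetS10bz> exit

In this sketch, dedicated-cpu reserves 16 virtual processors for the zone and capped-memory limits the zone's physical memory and swap; choose values that match the workload requirements identified on the source system.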

  1. List the target system processors.

    In this example, the target system has 16 cores.

    root@TargetGlobal# psrinfo -pv
        The physical processor has 8 cores and 64 virtual processors (0-63)
          The core has 8 virtual processors (0-7)
          The core has 8 virtual processors (8-15)
          The core has 8 virtual processors (16-23)
          The core has 8 virtual processors (24-31)
          The core has 8 virtual processors (32-39)
          The core has 8 virtual processors (40-47)
          The core has 8 virtual processors (48-55)
          The core has 8 virtual processors (56-63)
            SPARC-S7 (chipid 0, clock 4267 MHz)
        The physical processor has 8 cores and 64 virtual processors (64-127)
          The core has 8 virtual processors (64-71)
          The core has 8 virtual processors (72-79)
          The core has 8 virtual processors (80-87)
          The core has 8 virtual processors (88-95)
          The core has 8 virtual processors (96-103)
          The core has 8 virtual processors (104-111)
          The core has 8 virtual processors (112-119)
          The core has 8 virtual processors (120-127)
            SPARC-S7 (chipid 1, clock 4267 MHz)
  2. List the amount of memory.
    root@TargetGlobal# prtconf | grep Mem
    Memory size: 260096 Megabytes
  3. List the amount of swap space.
    root@TargetGlobal# swap -hl
    swapfile                 dev            swaplo      blocks        free
    /dev/zvol/dsk/rpool/swap 288,1              8K        128G        128G
  4. Display the target system's network configuration.
    root@TargetGlobal# dladm show-aggr -x
    LINK       PORT           SPEED DUPLEX   STATE     ADDRESS            PORTSTATE
    aggr0      --             10000Mb full   up        0:10:e0:d5:22:c2   --
               net0           10000Mb full   up        0:10:e0:d5:22:c2   attached
               net2           10000Mb full   up        0:10:e0:d5:22:c4   attached
    
    root@TargetGlobal# dladm show-aggr
    LINK              MODE  POLICY   ADDRPOLICY           LACPACTIVITY LACPTIMER
    aggr0             trunk L3,L4    auto                 active       short
    
    root@TargetGlobal# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    aggr0             ip         ok           --         --
       aggr0/v4       static     ok           --         192.0.2.45/23
    lo0               loopback   ok           --         --
       lo0/v4         static     ok           --         127.0.0.1/8
       lo0/v6         static     ok           --         ::1/128
    net4              ip         ok           --         --
       net4/v4        static     ok           --         203.0.113.77/24
    
    root@TargetGlobal# netstat -rn
    
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref     Use     Interface
    -------------------- -------------------- ----- ----- ---------- ---------
    default              192.0.2.1            UG        4   26860570           
    192.0.2.0            192.0.2.45           U         8   25427791 aggr0     
    127.0.0.1            127.0.0.1            UH        4        576 lo0       
    203.0.113.0          203.0.113.77         U         3    2759126 net4      
    
    Routing Table: IPv6
      Destination/Mask            Gateway                   Flags Ref   Use    If   
    --------------------------- --------------------------- ----- --- ------- -----
    ::1                         ::1                         UH      2       2 lo0   
    
    root@TargetGlobal# cat /etc/resolv.conf
    #
    # _AUTOGENERATED_FROM_SMF_V1_
    #
    # WARNING: THIS FILE GENERATED FROM SMF DATA.
    #   DO NOT EDIT THIS FILE.  EDITS WILL BE LOST.
    # See resolv.conf(4) for details.
    search  us.example.com examplecorp.com example.com
    nameserver      198.51.100.198
    nameserver      198.51.100.197
    nameserver      198.51.100.132
  5. Create a VNIC on top of the trunk aggregation.
    root@TargetGlobal# dladm create-vnic -l aggr0 vnic0
    
    root@TargetGlobal# dladm show-vnic
    LINK            OVER           SPEED  MACADDRESS        MACADDRTYPE IDS
    vnic0           aggr0          10000  2:8:20:df:e0:73   random      VID:0
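
    The VNIC is attached to the solaris10 branded zone when the zone is configured in a later procedure. As a sketch only, assuming an exclusive-IP zone named TargetS10bz, that assignment looks like the following:

    root@TargetGlobal# zonecfg -z TargetS10bz
    zonecfg:TargetS10bz> set ip-type=exclusive
    zonecfg:TargetS10bz> add net
    zonecfg:TargetS10bz:net> set physical=vnic0
    zonecfg:TargetS10bz:net> end
    zonecfg:TargetS10bz> exit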
  6. Display information about the target's global zone root file system.
    root@TargetGlobal# zpool status rpool
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
            NAME                       STATE     READ WRITE CKSUM
            rpool                      ONLINE       0     0     0
              mirror-0                 ONLINE       0     0     0
                c0t5000CCA08040A058d0  ONLINE       0     0     0
                c0t5000CCA080416934d0  ONLINE       0     0     0
     errors: No known data errors
  7. Use the format utility to partition devices that will be used for the UFS/SVM file systems in the solaris10 branded zone.

    This table shows the storage configuration that is created on the target system to support the incoming source system OS and workloads (determined in Review the Source System Configuration). An example partitioning sequence follows the table.

    No. of LUNs   Total Storage Capacity   Data Mgt.        Contents
    -----------   ----------------------   --------------   ---------------------------------------
    2             2 x 1 TB = 2 TB          ZFS mirror       Global zone rpool
    2             2 x 487 GB = 974 GB      ZFS mirror       Oracle Solaris 10 branded zone zonepath
    2             2 x 150 GB = 300 GB      UFS/SVM mirror   Database binaries (/u01)
    2             2 x 200 GB = 400 GB      UFS/SVM mirror   Database files (/oradata)
    1             2 x 1 GB = 2 GB          SVM              metadb in global zone
                                                            (Note -  Recreated on the target system.)

    Totals (rounded): 3.7 TB
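
    One way to lay out the slices non-interactively is to write an SMI (VTOC) label with the format utility and then apply an edited VTOC with fmthard. The following sketch uses one of the iSCSI LUNs from this procedure as an example; the slice layout that you write must match the sizes in the table.

    root@TargetGlobal# format -L vtoc -d c0t600144F09F2C0BFD00005B36D6910058d0
    root@TargetGlobal# prtvtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6910058d0s2 > /tmp/vtoc
    root@TargetGlobal# vi /tmp/vtoc
    root@TargetGlobal# fmthard -s /tmp/vtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6910058d0s2

    Edit /tmp/vtoc so that slice 0 spans the sector range required for the file system before applying it with fmthard. You can also create the layout interactively from the format utility's partition menu.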

  8. Ensure that disk slices are configured to support all the storage items.
    1. Use the prtvtoc command to check the slices for metadb.

      In this example, two local disk slices (1 GB each) are used for the metadb disks:

      c0t5000CCA08040B3BCd0s4

      c0t5000CCA08040AB24d0s4

      root@TargetGlobal# prtvtoc /dev/dsk/c0t5000CCA08040B3BCd0s2|tail -3
      * Partition  Tag  Flags    Sector     Count        Sector       Mount Directory
             2      5    01          0      2344108410   2344108409
             4      0    00      48195         2120580      2168774
      
      root@TargetGlobal# prtvtoc /dev/dsk/c0t5000CCA08040AB24d0s2|tail -3
      * Partition  Tag  Flags    Sector     Count        Sector       Mount Directory
             2      5    01          0      2344108410   2344108409
             4      0    00      48195         2120580      2168774
    2. Use the prtvtoc command to check the slices for the database binaries.

      In this example, the database binaries require two 150 GB iSCSI devices:

      c0t600144F09F2C0BFD00005B36D6910058d0s0

      c0t600144F09F2C0BFD00005B36D6A70059d0s0

      root@TargetGlobal# prtvtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6910058d0s2|tail -3
      * Partition  Tag  Flags    Sector     Count      Sector      Mount Directory
             0      2    00          0      314386468  314386467
             2      5    01          0      314386468  314386467
      
      root@TargetGlobal# prtvtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6A70059d0s2|tail -3
      * Partition  Tag  Flags    Sector     Count      Sector      Mount Directory
             0      2    00          0      314386468  314386467
             2      5    01          0      314386468  314386467
    3. Use the prtvtoc command to check the slices for the database REDO and archive logs.

      In this example, the database REDO and archive logs require two 200 GB iSCSI devices:

      c0t600144F09F2C0BFD00005B36D6C3005Ad0s0

      c0t600144F09F2C0BFD00005B36D6D5005Bd0s0

      root@TargetGlobal# prtvtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6C3005Ad0s2|tail -3
      * Partition  Tag  Flags    Sector     Count      Sector      Mount Directory
             0      2    00          0      419289484  419289483
             2      5    01          0      419289484  419289483
      
      root@TargetGlobal# prtvtoc /dev/rdsk/c0t600144F09F2C0BFD00005B36D6D5005Bd0s2|tail -3
      * Partition  Tag  Flags    Sector     Count      Sector      Mount Directory
             0      2    00          0      419289484  419289483
             2      5    01          0      419289484  419289483
    4. Use the format command to review the disks.
      root@TargetGlobal# echo|format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
             0. c0t5000CCA08040A058d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
                /scsi_vhci/disk@g5000cca08040a058
                /dev/chassis/SYS/HDD0/disk
             1. c0t5000CCA080416934d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
                /scsi_vhci/disk@g5000cca080416934
                /dev/chassis/SYS/HDD1/disk
             2. c0t5000CCA08040B3BCd0 <HGST-H101812SFSUN1.2T-A990 cyl 48638 alt 2 hd 255 sec 189>
                /scsi_vhci/disk@g5000cca08040b3bc
                /dev/chassis/SYS/HDD2/disk
             3. c0t5000CCA08040AB24d0 <HGST-H101812SFSUN1.2T-A990-1.09TB>
                /scsi_vhci/disk@g5000cca08040ab24
                /dev/chassis/SYS/HDD3/disk
             4. c1t0d0 <MICRON-eUSB DISK-1112 cyl 246 alt 0 hd 255 sec 63>
                /pci@300/pci@1/pci@0/pci@2/usb@0/storage@1/disk@0,0
                /dev/chassis/SYS/MB/EUSB_DISK/disk
             5. c0t600144F09F2C0BFD00005B36D6A70059d0 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b36d6a70059
             6. c0t600144F09F2C0BFD00005B36D6C3005Ad0 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b36d6c3005a
             7. c0t600144F09F2C0BFD00005B36D6D5005Bd0 <SUN-ZFS Storage 7355-1.0 cyl 6499 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b36d6d5005b
             8. c0t600144F09F2C0BFD00005B36D6910058d0 <SUN-ZFS Storage 7355-1.0 cyl 4873 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b36d6910058
             9. c0t600144F09F2C0BFD00005B9195D80079d0 <SUN-ZFS Storage 7355-1.0-300.00GB>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b9195d80079
            10. c0t600144F09F2C0BFD00005B9195EF007Ad0 <SUN-ZFS Storage 7355-1.0-300.00GB>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b9195ef007a
            11. c0t600144F09F2C0BFD00005B9020260078d0 <SUN-ZFS Storage 7355-1.0 cyl 3998 alt 2 hd 8 sec 32>
                /scsi_vhci/ssd@g600144f09f2c0bfd00005b9020260078
      Specify disk (enter its number): Specify disk (enter its number):
  9. Install the Solaris Volume Manager package and reboot the system.

    The Solaris Volume Manager package is needed for the metadb. In Oracle Solaris 11.3 and later, the package is not installed by default.

    root@TargetGlobal# pkg install storage/svm
    
    root@TargetGlobal# reboot
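
    Optionally, confirm that the package is installed before you reboot:

    root@TargetGlobal# pkg list storage/svm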
  10. Create a mirrored zpool for the solaris10 branded zone ZFS root file system.

    Note -  Only ZFS is supported for zone root file systems in Oracle Solaris 11.3.
    root@TargetGlobal# zpool create -m /zones/TargetS10bz  TargetS10bz mirror c0t600144F09F2C0BFD00005B9195D80079d0 c0t600144F09F2C0BFD00005B9195EF007Ad0
     
    root@TargetGlobal# zpool status TargetS10bz
      pool: TargetS10bz
     state: ONLINE
      scan: resilvered 73K in 1s with 0 errors on Fri Sep  7 10:15:54 2018
    
    config:
    
            NAME                                       STATE     READ WRITE CKSUM
            TargetS10bz                                ONLINE       0     0     0
              mirror-0                                 ONLINE       0     0     0
                c0t600144F09F2C0BFD00005B9195D80079d0  ONLINE       0     0     0
                c0t600144F09F2C0BFD00005B9195EF007Ad0  ONLINE       0     0     0
    
    errors: No known data errors
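
    zoneadm expects the zonepath directory to have mode 700. You can set that now on the mount point created by the previous command:

    root@TargetGlobal# chmod 700 /zones/TargetS10bz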
  11. Configure the Solaris Volume Manager (SVM) metadevices.
    1. Create the SVM metadevice database using redundant local disks.
      root@TargetGlobal# metadb -a -c 3 -f /dev/dsk/c0t5000CCA08040B3BCd0s4
      
      root@TargetGlobal# metadb -a -c 3 -f /dev/dsk/c0t5000CCA08040AB24d0s4
      
      root@TargetGlobal# metadb
              flags           first blk       block count
           a        u         16              8192            /dev/dsk/c0t5000CCA08040B3BCd0s4
           a        u         8208            8192            /dev/dsk/c0t5000CCA08040B3BCd0s4
           a        u         16400           8192            /dev/dsk/c0t5000CCA08040B3BCd0s4
           a        u         16              8192            /dev/dsk/c0t5000CCA08040AB24d0s4
           a        u         8208            8192            /dev/dsk/c0t5000CCA08040AB24d0s4
           a        u         16400           8192            /dev/dsk/c0t5000CCA08040AB24d0s4
      
    2. Create a 150 GB mirrored SVM metadevice for the database binaries (/u01).
      root@TargetGlobal# metainit d31 1 1 c0t600144F09F2C0BFD00005B36D6910058d0s0
      
      root@TargetGlobal# metainit -f d32 1 1 c0t600144F09F2C0BFD00005B36D6A70059d0s0
      
      root@TargetGlobal# metainit d30 -m d31 d32
      
      root@TargetGlobal# metastat d30
      d30: Mirror
          Submirror 0: d31
            State: Okay         
          Submirror 1: d32
            State: Okay         
          Pass: 1
          Read option: roundrobin (default)
          Write option: parallel (default)
          Size: 314386468 blocks (149 GB)
      d31: Submirror of d30
          State: Okay         
          Size: 314386468 blocks (149 GB)
          Stripe 0:
              Device                                             Start Block  Dbase        State Reloc Hot Spare
              /dev/dsk/c0t600144F09F2C0BFD00005B36D6910058d0s0          0     No            Okay   Yes
      d32: Submirror of d30
          State: Okay         
          Size: 314386468 blocks (149 GB)
          Stripe 0:
              Device                                             Start Block  Dbase        State Reloc Hot Spare
              /dev/dsk/c0t600144F09F2C0BFD00005B36D6A70059d0s0          0     No            Okay   Yes
      Device Relocation Information:
      Device                                           Reloc  Device ID
      /dev/dsk/c0t600144F09F2C0BFD00005B36D6910058d0   Yes    id1,ssd@n600144f09f2c0bfd00005b36d6910058
      /dev/dsk/c0t600144F09F2C0BFD00005B36D6A70059d0   Yes    id1,ssd@n600144f09f2c0bfd00005b36d6a70059
    3. Create a mirrored SVM metadevice for /oradata.
      root@TargetGlobal# metainit d41 1 1 c0t600144F09F2C0BFD00005B36D6C3005Ad0s0
      
      root@TargetGlobal# metainit -f d42 1 1 c0t600144F09F2C0BFD00005B36D6D5005Bd0s0
      
      root@TargetGlobal# metainit d40 -m d41 d42
      
      root@TargetGlobal# metastat d40
      d40: Mirror
          Submirror 0: d41
            State: Okay         
          Submirror 1: d42
            State: Okay         
          Pass: 1
          Read option: roundrobin (default)
          Write option: parallel (default)
          Size: 419289484 blocks (199 GB)
      d41: Submirror of d40
          State: Okay         
          Size: 419289484 blocks (199 GB)
          Stripe 0:
              Device                                             Start Block  Dbase        State Reloc Hot Spare
              /dev/dsk/c0t600144F09F2C0BFD00005B36D6C3005Ad0s0          0     No            Okay   Yes
      d42: Submirror of d40
          State: Okay         
          Size: 419289484 blocks (199 GB)
          Stripe 0:
              Device                                             Start Block  Dbase        State Reloc Hot Spare
              /dev/dsk/c0t600144F09F2C0BFD00005B36D6D5005Bd0s0          0     No            Okay   Yes
      Device Relocation Information:
      Device                                           Reloc  Device ID
      /dev/dsk/c0t600144F09F2C0BFD00005B36D6C3005Ad0   Yes    id1,ssd@n600144f09f2c0bfd00005b36d6c3005a
      /dev/dsk/c0t600144F09F2C0BFD00005B36D6D5005Bd0   Yes    id1,ssd@n600144f09f2c0bfd00005b36d6d5005b
      
      
  12. On the target global zone, adjust the swap space.

    Configure the swap space so that it is similar to the source system configuration; the zone's capped memory and swap values are set later, when the zone is configured.

    root@TargetGlobal# swap -d /dev/zvol/dsk/rpool/swap
    root@TargetGlobal# zfs set volsize=80G rpool/swap
    root@TargetGlobal# swap -a /dev/zvol/dsk/rpool/swap
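
    You can confirm the resized swap device with the same swap -hl command that was used earlier in this procedure:

    root@TargetGlobal# swap -hl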
  13. Create the UFS file systems for the database binaries (/u01) and data files (/oradata).
    root@TargetGlobal# newfs /dev/md/rdsk/d30
    root@TargetGlobal# newfs /dev/md/rdsk/d40
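
    Optionally, confirm that the file systems were created on the metadevices; fstyp reports the file system type. The file systems are mounted into the zone in a later procedure.

    root@TargetGlobal# fstyp /dev/md/rdsk/d30
    ufs
    root@TargetGlobal# fstyp /dev/md/rdsk/d40
    ufs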