
Lift and Shift Guide - Migrating Workloads from Oracle Solaris 10 SPARC Systems to Oracle Solaris 10 Branded Zones


Updated: February 2020
 
 

Review the Source System Configuration

The steps and examples in this section provide commands you can use to determine the configuration of your source system. The configuration information helps you prepare the target system.

After the lift and shift process, compare the source configuration information with the target system configuration to verify workload continuity.

As you perform this procedure, adjust or omit steps as needed based on the state of your source system.
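Because the same commands are run later on the target system, the simplest comparison is to diff the two captured transcripts. A minimal sketch, using hypothetical file names and inline sample content so the technique can be shown end to end; in practice you would diff the real transcripts taken on the source and target systems:

```shell
# Hypothetical capture files created inline for illustration only.
cat > /tmp/source_config.txt <<'EOF'
Memory size: 65536 Megabytes
SunOS 5.10 Generic_150400-61
EOF
cat > /tmp/target_config.txt <<'EOF'
Memory size: 65536 Megabytes
SunOS 5.10 Generic_150400-62
EOF
# A non-empty diff flags settings to reconcile before cutover.
diff /tmp/source_config.txt /tmp/target_config.txt || true
```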

  1. Back up any critical data before starting this process so that you can fall back to the source system if anything goes wrong.
  2. (Optional) On the source system, start a process that captures the output that is collected in this task.

    Capturing the commands and their output gives you a record of the collected data to refer back to later.

    There are a variety of methods to capture output. You can run the script(1M) command to make a record of a terminal session, or use a terminal window with command and output collection capabilities.

    Example:

    root@SourceSystem# script /tmp/source_zones_output.txt

    Note – When you want to stop capturing output, type Ctrl-D.

  3. Check the version of the operating system.

    This lift and shift process is specifically for a source system running Oracle Solaris 10.

    root@SourceSystem# cat /etc/release
                       Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC
      Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                                Assembled 17 January 2013
  4. List the source system architecture and the kernel patch version.
    root@SourceSystem# uname -a
    SunOS SourceSystem 5.10 Generic_150400-61 sun4u sparc SUNW,SPARC-Enterprise
  5. Ensure that the required patches are installed.
    1. Determine whether the patches are installed at or above the minimum revisions.

      Minimum versions:

      119254-75

      119534-24

      140914-02 or 148027-03 (The latter patch obsoletes the former patch. Either patch is sufficient for this migration.)

      root@SourceSystem# showrev -p | egrep "Patch: 119254|Patch: 119534|Patch: 140914|Patch: 148027"
      Patch: 119254-88 Obsoletes: 119015-03 Requires: 121133-02 Incompatibles:  Packages: SUNWinstall-patch-utils-root, SUNWpkgcmdsr, SUNWpkgcmdsu, SUNWswmt
      Patch: 119534-33 Obsoletes:  Requires: 119252-18, 120199-09, 126677-02 Incompatibles:  Packages: SUNWinst
      Patch: 148027-03 Obsoletes: 121002-04, 126316-01, 126651-02, 127920-01, 127922-04, 128330-02, 137088-01, 138275-01, 138621-02, 138623-05, 140914-02, 
      142009-01, 142336-01, 143588-01, 144300-01, 144876-01, 146578-06 Requires: 118833-36, 120011-14, 127127-11, 137137-09, 139555-08, 142909-17 
    2. For any missing or below-revision patches, download and install the latest versions offered from My Oracle Support (http://support.oracle.com).

      For patch details, refer to the patch README.

      Use the patchadd command to install each patch. For example:

      root@SourceSystem# patchadd ./119254-93
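The revision comparison in substep 1 can be scripted. A minimal sketch: parse the patch revision from a showrev output line and compare it against the required minimum. The sample line is hard-coded here so the logic runs anywhere; on the source system you would feed in the actual `showrev -p` output instead.

```shell
# Sample showrev -p output line (hard-coded for illustration).
line="Patch: 119254-88 Obsoletes: 119015-03 Requires: 121133-02"
# Extract the revision suffix: "119254-88" -> "88".
installed=$(echo "$line" | awk '{print $2}' | awk -F'-' '{print $2}')
minimum=75
if [ "$installed" -ge "$minimum" ]; then
    echo "patch 119254 revision $installed meets minimum revision $minimum"
else
    echo "patch 119254 revision $installed is below minimum revision $minimum"
fi
```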
  6. List the amount of memory.
    root@SourceSystem# prtdiag|grep "Memory size"
    Memory size: 65536 Megabytes
  7. List the source system processors.

    In this example, the source system has 4 sockets.

    root@SourceSystem# psrinfo -pv
    The physical processor has 8 virtual processors (0-7)
      SPARC64-VII+ (portid 1024 impl 0x7 ver 0xc0 clock 3000 MHz)
    The physical processor has 8 virtual processors (40-47)
      SPARC64-VII+ (portid 1064 impl 0x7 ver 0xc0 clock 3000 MHz)
    The physical processor has 8 virtual processors (80-87)
      SPARC64-VII+ (portid 1104 impl 0x7 ver 0xc0 clock 3000 MHz)
    The physical processor has 8 virtual processors (120-127)
      SPARC64-VII+ (portid 1144 impl 0x7 ver 0xc0 clock 3000 MHz)
  8. Display network configuration information.
    root@SourceSystem# ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 198.51.100.69 netmask ffffff00 broadcast 198.51.100.255
            groupname ipmp0
            ether 0:b:5d:dc:2:40
    bge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
            inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
            groupname ipmp0
            ether 0:b:5d:dc:2:40
    sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 4
            inet 203.0.113.3 --> 203.0.113.1 netmask ffffff00
            ether 0:0:0:0:0:0
    
    root@SourceSystem# netstat -rn
    Routing Table: IPv4
    
      Destination           Gateway           Flags  Ref     Use     Interface
    -------------------- -------------------- ----- ----- ---------- ---------
    default              198.51.100.1         UG        1    1135000           
    198.51.100.0         198.51.100.69        U         1         13 bge0      
    203.0.113.1          203.0.113.3          UH        1          1 sppp0     
    224.0.0.0            198.51.100.69        U         1          0 bge0      
    127.0.0.1            127.0.0.1            UH        5       4116 lo0       
    
    root@SourceSystem# ls -l /etc/hostname*
    -rw-r--r--   1 root     root          46 Apr 19 16:18 /etc/hostname.bge0
    -rw-r--r--   1 root     root          15 Apr 19 16:18 /etc/hostname.bge2
    
    root@SourceSystem# cat /etc/hostname.bge0
    SourceSystem netmask + broadcast + group ipmp0 up
    
    root@SourceSystem# cat /etc/hostname.bge2
    group ipmp0 up
    
    root@SourceSystem# cat /etc/inet/hosts
    # Internet host table
    ::1     localhost       
    127.0.0.1       localhost       
    198.51.100.69   SourceSystem        loghost
  9. Identify the source system disks and what is stored on them.

    While performing the following substeps, make note of the amount of storage used by the various software components. In the last substep, storage values are tallied to determine the amount of storage space that is needed in shared storage and on the target system.

    1. Use the format utility to list the disks.

      The list of disks provides information about the disks on the source system.

      In this example, disks 0, 1, and 2 are internal physical disks. Disks 3 through 6 (each identified by a WWN) are LUNs.

      root@SourceSystem# echo|format
      
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
             0. c0t0d0 <FUJITSU-MAY2073RC-3701 cyl 14087 alt 2 hd 24 sec 424>
                /pci@0,600000/pci@0/scsi@1/sd@0,0
             1. c0t1d0 <SUN600G cyl 64986 alt 2 hd 27 sec 668>
                /pci@0,600000/pci@0/scsi@1/sd@1,0
             2. c1t0d0 <SUN600G cyl 64986 alt 2 hd 27 sec 668>
                /pci@24,600000/pci@0/scsi@1/sd@0,0
             3. c2t600144F0E635D8C700005AC804B80019d0 <SUN-ZFSStorage7420-1.0 cyl 4873 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f0e635d8c700005ac804b80019
             4. c2t600144F0E635D8C700005AC804D4001Ad0 <SUN-ZFSStorage7420-1.0 cyl 4873 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f0e635d8c700005ac804d4001a
             5. c2t600144F0E635D8C700005AC804F4001Bd0 <SUN-ZFSStorage7420-1.0 cyl 6499 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f0e635d8c700005ac804f4001b
             6. c2t600144F0E635D8C700005AC8050C001Cd0 <SUN-ZFSStorage7420-1.0 cyl 6499 alt 2 hd 254 sec 254>
                /scsi_vhci/ssd@g600144f0e635d8c700005ac8050c001c
      Specify disk (enter its number): Specify disk (enter its number):
      
    2. List the root file system.

      In this example, the root file system is on an SVM root encapsulated mirror.

      root@SourceSystem# df -h /
      Filesystem             size   used  avail capacity  Mounted on
      /dev/md/dsk/d10        487G   4.9G   477G     2%    /
      root@SourceSystem# metastat -p d10
      d10 -m d11 d12 1
      d11 1 1 c0t1d0s0
      d12 1 1 c1t0d0s0
    3. List information about the swap configuration.

      In this example, swap is on the SVM root encapsulated mirror.

      root@SourceSystem# swap -l
      swapfile             dev      swaplo blocks     free
      /dev/md/dsk/d20     85,20     16     133123696  133123696
      root@SourceSystem# metastat -p d20
      d20 -m d21 d22 1
      d21 1 1 c0t1d0s1
      d22 1 1 c1t0d0s1

      Note -  The swap -l command displays the swap space in 512-byte blocks. To convert the value to gigabytes, use the bc command. For example:
      # bc
      133123696*512/1024/1024/1024
      63

    4. Display information about the database binaries.

      In this example, the binaries are on a UFS file system (/u01) on top of an SVM metadevice using two mirrored 150 GB disks.

      root@SourceSystem# df -h /u01
      Filesystem             size   used  avail capacity  Mounted on
      /dev/md/dsk/d30        148G   2.3G   144G     2%    /u01
      
      root@SourceSystem# metastat -p d30
      d30 -m d31 d32 1
      d31 1 1 /dev/dsk/c2t600144F0E635D8C700005AC804B80019d0s0
      d32 1 1 /dev/dsk/c2t600144F0E635D8C700005AC804D4001Ad0s0
      
      root@SourceSystem# grep u01 /etc/mnttab
      /dev/md/dsk/d30 /u01    ufs     rw,intr,forcedirectio,largefiles,logging,xattr,onerror=panic,dev=154001e        1528711266
      
      root@SourceSystem# egrep "mount|u01" /etc/vfstab
      #device         device            mount    FS      fsck    mount      mount
      #to mount       to fsck           point    type    pass    at boot    options
      /dev/md/dsk/d30 /dev/md/rdsk/d30 /u01      ufs      2      yes        forcedirectio,logging
    5. List the Solaris Volume Manager (SVM) metadevice state database (metadb) configuration.

      The metadb is not migrated to the target system. Instead, the metadb is re-created when you prepare the target system (see Prepare the Target System).

      root@SourceSystem# metadb
              flags           first blk       block count
           a m  p  luo        16              8192            /dev/dsk/c0t1d0s4
           a    p  luo        8208            8192            /dev/dsk/c0t1d0s4
           a    p  luo        16400           8192            /dev/dsk/c0t1d0s4
           a    p  luo        16              8192            /dev/dsk/c1t0d0s4
           a    p  luo        8208            8192            /dev/dsk/c1t0d0s4
           a    p  luo        16400           8192            /dev/dsk/c1t0d0s4
    6. Display information about the database files.

      In this example, the Oracle Database data files, redo logs, and archive logs are on a UFS file system on top of an SVM metadevice using two mirrored 200 GB disks.

      root@SourceSystem# df -h /oradata
      Filesystem             size   used  avail capacity  Mounted on
      /dev/md/dsk/d40        197G    12G   182G     7%    /oradata
      
      root@SourceSystem# metastat -p d40
      d40 -m d41 d42 1
      d41 1 1 /dev/dsk/c2t600144F0E635D8C700005AC804F4001Bd0s0
      d42 1 1 /dev/dsk/c2t600144F0E635D8C700005AC8050C001Cd0s0
      
      root@SourceSystem# grep oradata /etc/mnttab
      /dev/md/dsk/d40 /oradata        ufs     rw,intr,forcedirectio,largefiles,logging,xattr,onerror=panic,dev=1540028        1528711266
      
      root@SourceSystem# egrep "mount|oradata" /etc/vfstab
      #device         device          mount           FS      fsck    mount   mount
      #to mount       to fsck         point           type    pass    at boot options
      /dev/md/dsk/d40 /dev/md/rdsk/d40 /oradata ufs 2 yes forcedirectio,logging
    7. Determine the total amount of storage used.

      The disk capacity information is used to configure the target system storage that is needed for the migration. See Prepare the Target System.

      The total amount of used storage is used to calculate the amount of space needed in the shared storage in Prepare the Shared Storage.

      This table lists the disk calculation for this example.

      No. of LUNs  Total Capacity      Used Storage        Data Mgt.           Contents
      -----------  ------------------  ------------------  ------------------  -------------------------------
      2            2 x 487 = 974 GB    4.9 GB (root)       UFS, SVM mirror     Root file system and swap space
                                       + 64 GB (swap)
                                       = 69 GB
      2            2 x 150 = 300 GB    2.3 GB              UFS on SVM metaset  Database binaries (/u01)
      2            2 x 200 = 400 GB    12 GB               UFS on SVM metaset  Database files (/oradata)
      1            n/a                 n/a                 SVM                 metadb
      -----------  ------------------  ------------------
      Totals       1.6 TB              19.2 GB
      (rounded)    (minimum space      (used to calculate
                   required on the     the amount of
                   target system to    required shared
                   host the source)    storage)

      Note -  The swap space is not captured in shared storage and is not migrated to the target system. It is eventually re-created on the target system, so its space must still be accounted for.

      Note -  The metadb is not captured in shared storage and is not migrated to the target system. It is eventually re-created on the target system.

  10. Obtain the encrypted password for the source system's root user.

    This encrypted password is later used to access the root user on the target solaris10 branded zone when it is configured in Configure an Oracle Solaris 10 Branded Zone on the Target System.

    root@SourceSystem# grep '^root:' /etc/shadow | nawk -F':' '{print $2}'
    6Vq4iXheX6nm
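The hash retrieved above is typically supplied during branded-zone setup through the zone's sysidcfg file. A sketch, assuming a sysidcfg(4)-style file; every value other than the copied hash is a placeholder:

```
# Fragment of a sysidcfg file for the target solaris10 branded zone.
# root_password takes the encrypted hash copied from the source /etc/shadow.
root_password=6Vq4iXheX6nm
name_service=NONE
security_policy=NONE
terminal=vt100
```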
  11. Check the status of the database and listener.

    In this example, Oracle Database 10.2.0.5.0 is installed on the source system.

    Note – The second and third commands must be performed by the oracle user.

    root@SourceSystem# cat /var/opt/oracle/oratab
    md1:/u01/oracle/product/10.2.0/db_1:N
    
    oracle1@SourceSystem $ /u01/oracle/product/10.2.0/db_1/bin/lsnrctl status
    LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 25-JUL-2018 16:28:48
    Copyright (c) 1991, 2010, Oracle.  All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Solaris: Version 10.2.0.5.0 - Production
    Start Date                25-JUL-2018 10:51:40
    Uptime                    0 days 5 hr. 37 min. 8 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Log File         /u01/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SourceSystem)(PORT=1521)))
    Services Summary...
    Service "md1" has 1 instance(s).
      Instance "md1", status READY, has 1 handler(s) for this service...
    Service "md1XDB" has 1 instance(s).
      Instance "md1", status READY, has 1 handler(s) for this service...
    Service "md1_XPT" has 1 instance(s).
      Instance "md1", status READY, has 1 handler(s) for this service...
    The command completed successfully
    
    oracle1@SourceSystem $ ptree|grep ora_|grep -v grep
    11221 ora_pmon_md1
    11223 ora_psp0_md1
    11225 ora_mman_md1
    11227 ora_dbw0_md1
    11229 ora_dbw1_md1
    11231 ora_dbw2_md1
    ...
  12. Sample one table in the application schema (SOE).

    As a sanity check, the same table is sampled again after the migration (in Start and Verify the Migrated Database) to confirm data integrity.

    SQL> select count(*) from soe.orders ;
    
      COUNT(*)
    ----------
       1429790
  13. From a database client system, check the functionality of the database environment.

    For example, issue a few SQL statements from the client to verify connectivity and data content.

    oracle@client-sys: $ sqlplus system/welcome1@//SourceSystem.us.example.com:1521/md1
    SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 25 11:00:53 2018
    Copyright (c) 1982, 2016, Oracle.  All rights reserved.
    
    Connected to: 
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
     
    SQL> select name,log_mode,open_mode from v$database ;
    
    NAME      LOG_MODE     OPEN_MODE
    --------- ------------ ----------
    MD1       ARCHIVELOG   READ WRITE
    
    SQL> select * from v$recover_file;
    no rows selected
    
    SQL> select name from v$datafile union select name from v$controlfile union select member from v$logfile union select name from v$archived_log ;
    
    NAME
    --------------------------------------------------------------------------------
    /oradata/md1/archivelog/MD1/archivelog/2018_05_25/o1_mf_1_2_fjhkyx3w_.arc
    /oradata/md1/archivelog/MD1/archivelog/2018_07_26/o1_mf_1_3_fomnzzjk_.arc
    /oradata/md1/archivelog/MD1/archivelog/2018_07_27/o1_mf_1_4_foqy2qjs_.arc
    /oradata/md1/archivelog/MD1/archivelog/2018_07_29/o1_mf_1_5_fovgmd0x_.arc
    /oradata/md1/archivelog/MD1/archivelog/2018_07_30/o1_mf_1_6_fozv6z9k_.arc
    /oradata/md1/control01.ctl
    /oradata/md1/control02.ctl
    /oradata/md1/control03.ctl
    /oradata/md1/redo01.log
    /oradata/md1/redo02.log
    /oradata/md1/redo03.log
    /oradata/md1/sysaux01.dbf
    /oradata/md1/system01.dbf
    /oradata/md1/undotbs01.dbf
    /oradata/md1/users01.dbf
    
    16 rows selected.