In this example, log in to the dbzone zone as the oracle1 user and check the status of the database listener.
-bash-3.2$ lsnrctl status

LSNRCTL for Solaris: Version 12.1.0.2.0 - Production on 30-Jul-2018 03:02:49

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SourceDBzone)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Solaris: Version 12.1.0.2.0 - Production
Start Date                29-JUL-2018 08:56:30
.........
.........
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/SourceDBzone/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SourceDBzone)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=SourceDBzone)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0/dbhome_1/admin/orcl9/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "mydb.us.example.com" has 1 instance(s).
  Instance "orcl9", status READY, has 1 handler(s) for this service...
Service "orcl9.us.example.com" has 1 instance(s).
  Instance "orcl9", status READY, has 1 handler(s) for this service...
Service "orcl9XDB.us.example.com" has 1 instance(s).
  Instance "orcl9", status READY, has 1 handler(s) for this service...
Service "pdborcl.us.example.com" has 1 instance(s).
  Instance "orcl9", status READY, has 1 handler(s) for this service...
The command completed successfully
For example, issue a few SQL statements from the client to verify connectivity and confirm the database contents.
root@client-sys# sqlplus system/welcome1@//SourceDBzone:1521/mydb.us.example.com

SQL> set pages 100
SQL> select * from v$recover_file;

no rows selected

SQL> select name from v$datafile union select member from v$logfile union select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/ORCL9/71335771C4785E92E054000B5DDC023F/DATAFILE/soe.278.981702033
+DATA/ORCL9/71335771C4785E92E054000B5DDC023F/DATAFILE/sysaux.274.981701921
+DATA/ORCL9/71335771C4785E92E054000B5DDC023F/DATAFILE/system.275.981701921
+DATA/ORCL9/71335771C4785E92E054000B5DDC023F/DATAFILE/users.277.981701955
+DATA/ORCL9/CONTROLFILE/current.261.981701477
+DATA/ORCL9/CONTROLFILE/current.262.981701475
+DATA/ORCL9/DATAFILE/undotbs1.260.981701345
/logs/redologs/redo04.log
/logs/redologs/redo05.log
/logs/redologs/redo06.log

10 rows selected.
As a sanity check, record the row count of a sample table; the same table is queried again after the migration to confirm data integrity.
SQL> select count(*) from soe.orders;

  COUNT(*)
----------
   1429790
Shut down the database in the source dbzone cleanly.
-bash-3.2$ srvctl stop database -d orcl9
This step prevents application-related errors when the zone is started on the target system. In this case, the application is the Oracle Database. The following commands also shut down the database listener. After the deployment, crsctl is used to restart the high availability components.
root@SourceDBzone# crsctl stop has
root@SourceDBzone# crsctl disable has
root@SourceWebzone# svcs -a | grep apache2
online         10:06:42 svc:/network/http:apache2
root@SourceWebzone# ps -ef | grep apache
webservd  8224  8181   0 03:19:24 ?           0:00 /usr/apache2/bin/httpd -k start
webservd  8961  8181   0 03:47:50 ?           0:00 /usr/apache2/bin/httpd -k start
root@client-sys# /usr/sfw/bin/wget http://SourceWebzone.com/
--2018-07-17 03:53:23--  http://SourceWebzone.com/
Resolving SourceWebzone.com (SourceWebzone.com)... 203.0.113.191
Connecting to SourceWebzone.com (SourceWebzone.com)|203.0.113.191|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1456 (1.4K) [text/html]
Saving to: 'index.html.2'

index.html.2        100%[===================>]   1.42K  --.-KB/s    in 0s

2018-07-17 03:53:23 (64.9 MB/s) - 'index.html.2' saved [1456/1456]
Halt and detach the non-global zones on the source system; if they are left attached, the archive utility (ldmp2vz_collect) cannot complete.
root@SourceGlobal# zoneadm -z dbzone halt
root@SourceGlobal# zoneadm -z dbzone detach
root@SourceGlobal# zoneadm -z webzone halt
root@SourceGlobal# zoneadm -z webzone detach
root@SourceGlobal# zoneadm list -cv
  ID NAME       STATUS      PATH             BRAND    IP
   0 global     running     /                native   shared
   - webzone    configured  /rpool/webzone   native   shared
   - dbzone     configured  /zones/dbzone    native   shared
The dbzone zone path resides on a zpool named dbzone, and the Database and ASM binaries are installed on a zpool named dbzone_db_binary. Take a recursive snapshot of each pool; the snapshots are later used to extract the dbzone and the database binaries on the target system.
root@SourceGlobal# zfs snapshot -r dbzone@first
root@SourceGlobal# zfs snapshot -r dbzone_db_binary@first
root@SourceGlobal# zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
dbzone@first                0      -  16.8G  -
dbzone_db_binary@first      0      -  21.5G  -
The zfs send command transfers the snapshots from the source to shared storage. Piping the stream through pigz reduces execution time by compressing with multiple threads in parallel.
root@SourceGlobal# zfs send dbzone@first | /opt/SUNWldm/lib/contrib/pigz > /ovas1/dbzone.gz
root@SourceGlobal# zfs send dbzone_db_binary@first | /opt/SUNWldm/lib/contrib/pigz > /ovas1/dbzone_db_binary.gz
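Before relying on a compressed stream archive, it is prudent to confirm that it decompresses cleanly and reproduces the original bytes. The following is a minimal, portable sketch of that check: it uses gzip in place of pigz (pigz output is gzip-compatible) and a scratch file in place of the real zfs send stream, so the file names and payload here are stand-ins, not the commands from the example above.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for "zfs send dbzone@first": any byte stream works for the sketch.
printf 'example stream payload\n' > "$tmp/stream"

# Compress the stream, as the pigz step in the migration does.
gzip -c "$tmp/stream" > "$tmp/dbzone.gz"

# 1. Structural check: the gzip container decompresses without error.
gzip -t "$tmp/dbzone.gz"

# 2. Content check: the decompressed bytes match the original stream.
gzip -dc "$tmp/dbzone.gz" | cmp -s - "$tmp/stream"

verified=yes
echo "archive verified"
rm -rf "$tmp"
```

On the real archives, the same two checks (`gzip -t`, then a decompress-and-compare or checksum comparison) can be run against the files on shared storage before the source pools are touched.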
When transferring raw disk data, use the dd command to copy a slice, or diskio to copy the whole disk. In this example, the ASM data occupies slice 0, so dd copies the contents of that slice. The stream is compressed with pigz, a faster, multithreaded implementation of gzip.
root@SourceGlobal# dd if=/dev/rdsk/c2t600144F0E635D8C700005AC56B080015d0s0 bs=104857600 | /opt/SUNWldm/lib/contrib/pigz > /ovas1/asm1.img.gz
root@SourceGlobal# dd if=/dev/rdsk/c2t600144F0E635D8C700005AC56B2E0016d0s0 bs=104857600 | /opt/SUNWldm/lib/contrib/pigz > /ovas1/asm2.img.gz
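The same dd-and-compress pipeline runs in reverse on the target: decompress the image and write it back with dd. This sketch demonstrates the round trip on a scratch file rather than a raw device, with gzip standing in for pigz; the 1 MiB stand-in "slice" and the temporary paths are illustrative only.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the raw ASM slice: 1 MiB of pseudo-random data.
dd if=/dev/urandom of="$tmp/slice" bs=1048576 count=1 2>/dev/null

# Source side: dd reads the raw data, the compressor writes the image.
dd if="$tmp/slice" bs=1048576 2>/dev/null | gzip -c > "$tmp/asm1.img.gz"

# Target side: decompress and write the image back with dd.
gzip -dc "$tmp/asm1.img.gz" | dd of="$tmp/slice.restored" bs=1048576 2>/dev/null

# The restored image must be byte-identical to the source slice.
cmp -s "$tmp/slice" "$tmp/slice.restored"
roundtrip=ok
rm -rf "$tmp"
```

A large bs value, as in the example above (bs=104857600), keeps dd from issuing many small reads against the raw device; the value itself is a tuning choice, not a requirement.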
In this example, the redo log and archive log file systems reside on UFS metadevices, so ufsdump is used to archive them.

root@SourceGlobal# ufsdump 0cf - /dev/md/rdsk/d20 | /opt/SUNWldm/lib/contrib/pigz > /ovas1/redo.ufsdump.gz
  DUMP: Date of this level 0 dump: Mon Jul 30 11:09:29 2018
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/rdsk/d20 (SourceGlobal:/zones/dbzone/root/logs/redologs) to standard output.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 63 Kilobyte records
  DUMP: Estimated 1232116 blocks (601.62MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 1232026 blocks (601.58MB) on 1 volume at 6315 KB/sec
  DUMP: DUMP IS DONE
root@SourceGlobal# ufsdump 0cf - /dev/md/rdsk/d30 | /opt/SUNWldm/lib/contrib/pigz > /ovas1/archive.ufsdump.gz
  DUMP: Date of this level 0 dump: Mon Jul 30 11:13:20 2018
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/rdsk/d30 (SourceGlobal:/zones/dbzone/root/logs/archivelogs) to standard output.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 63 Kilobyte records
  DUMP: Estimated 1651844 blocks (806.56MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 1651858 blocks (806.57MB) on 1 volume at 7860 KB/sec
  DUMP: DUMP IS DONE
root@SourceGlobal# cd /ovas1
root@SourceGlobal# ls -rtlh *.gz
-rw-r--r--   1 root     root      11G Jul 30 07:58 dbzone.gz
-rw-r--r--   1 root     root     4.8G Jul 30 08:08 dbzone_db_binary.gz
-rw-r--r--   1 root     root     5.8G Jul 30 09:43 asm1.img.gz
-rw-r--r--   1 root     root     5.8G Jul 30 09:45 asm2.img.gz
-rw-r--r--   1 root     root      59M Jul 30 11:09 redo.ufsdump.gz
-rw-r--r--   1 root     root     185M Jul 30 11:14 archive.ufsdump.gz
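With all of the compressed archives on shared storage, a quick batch integrity pass catches a truncated or corrupted file before the source system is dismantled. This is a portable sketch, not part of the original procedure: it builds small stand-in .gz files in a temporary directory (using the archive names from the example) so it can run anywhere, but the same loop works unchanged against the real files in /ovas1.

```shell
set -e
tmp=$(mktemp -d)

# Stand-ins for the archives on shared storage (names from the example).
for f in dbzone dbzone_db_binary asm1.img asm2.img; do
    printf 'payload for %s\n' "$f" | gzip -c > "$tmp/$f.gz"
done

# gzip -t decompresses each archive in memory and fails on corruption.
bad=0
for f in "$tmp"/*.gz; do
    gzip -t "$f" || { echo "CORRUPT: $f"; bad=1; }
done

[ "$bad" -eq 0 ] && echo "all archives verified"
rm -rf "$tmp"
```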
The ldmp2vz_collect command creates a flash archive image (FLAR) of the source file system based on the configuration information that it collects about the source system.
Syntax:
ldmp2vz_collect -c -d FLAR_directory
Where:
-c compresses the .flar file.
-d FLAR_directory is the location where the .flar file is created.
In this example, the command is run on the source system, but the .flar file is written to shared storage, making it available to both the source and target systems.
root@SourceGlobal# ldmp2vz_collect -c -d /ovas1
INFO: Checking for patch 119534-33 (or higher)
INFO: Patch 119534-33 is installed, proceeding...
Backing up config files not retained by luupgrade process
Running: tar cvfX /etc/p2v/configs_tar /etc/p2v/configs_tar_exclude etc/defaultdomain etc/nsswitch.conf etc/pam.conf var/ldap
a etc/defaultdomain 1K
a etc/nsswitch.conf 1K
a etc/pam.conf 4K
a var/ldap/ 0K
a var/ldap/secmod.db 128K
a var/ldap/Oracle_SSL_CA_G2.pem 2K
a var/ldap/VTN-PublicPrimary-G5.pem 2K
a var/ldap/cachemgr.log 11K
a var/ldap/restore excluded
a var/ldap/key3.db 128K
a var/ldap/oracle_ssl_cert.pem 3K
a var/ldap/ldap_client_cred 1K
a var/ldap/ldap_client_file 3K
a var/ldap/cert8.db 64K
Running: flarcreate -n SourceGlobal -c -H -S -U content_architectures=sun4u,sun4v /ovas1/SourceGlobal.flar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...
Added keyword content_architectures=sun4u,sun4v does not begin with X-.
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.
Running pre-exit scripts...
Pre-exit scripts done.
flarcreate completed with exit code 0
Flar has been created in /ovas1/SourceGlobal.flar
-rw-r--r--   1 root     root     32350200956 Jul 30 13:14 /ovas1/SourceGlobal.flar