Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Migrate Oracle Database to OCI using V4 Cross Platform Transportable Tablespaces and Upgrade from 12c to 19c
Introduction
This tutorial covers the steps needed to use V4 Cross Platform Transportable Tablespaces (XTTS) with Oracle Recovery Manager (RMAN) incremental backups to migrate data between systems that have different endian formats, with the least amount of application downtime.
The first step will be to copy a full backup from the source to the destination. Then, by using a series of incremental backups, each smaller than the last, the data at the destination system can be brought nearly current with the source system before any downtime is required. This procedure requires downtime only during the final incremental backup and the metadata export/import.
This tutorial describes the V4 procedures for Cross Platform Incremental Backup which can be used with 11.2.0.3 and higher.
Environment Details
- Source Database:
  - Database Version: 12.1.0.2
  - Operating System: AIX
- Target Database:
  - Database Version: 19.18.0.0 (OCI)
  - Operating System: Oracle Linux
Objectives
- Use Cross Platform Transportable Tablespaces (XTTS) with RMAN incremental backups to migrate data between systems that have different endian formats, with the least amount of application downtime.
Prerequisites
- The current version does not support Windows as either source or destination.
- Cross-platform transport is only possible with Enterprise Edition. This procedure cannot be used with Standard Edition.
- The source database COMPATIBLE parameter must not be greater than the destination database COMPATIBLE parameter.
- The source database must be in ARCHIVELOG mode.
- Before running the XTTS scripts, set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1. Other NLS_LANG settings may cause errors.
- RMAN on the source system must not have:
  - DEVICE TYPE DISK configured with COMPRESSED.
  - BACKUP TYPE TO COPY. The source must have BACKUP TYPE TO BACKUPSET.
  - The default channel configured to type SBT. This procedure can only be used with disk channels.
  - Any channel configuration limitations, for example MAXSETSIZE, MAXPIECESIZE, and so on.
- The set of tablespaces being moved must all be online and contain no offline data files. Tablespaces must be READ WRITE.
- Tablespaces that are READ ONLY may be moved with the normal XTTS method. There is no need to incorporate Cross Platform Incremental Backup to move tablespaces that are always READ ONLY.
- Although the preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of Red Hat Linux), this procedure can be used with other UNIX-based operating systems. However, for any non-Linux operating system, both source and destination must be running 12.1.0.1 or higher.
- The Oracle version of the source must be lower than or equal to that of the destination. Therefore, this procedure can also be used as an upgrade method.
- The minimum version for source and destination is 11.2.0.3. Earlier 11.2 versions will likely function the same, but were not tested.
- Test the procedure before relying on it for a production environment.
- Automatic Storage Management (ASM) can only be used for the final location of data files in the destination; backups cannot be placed on ASM with this version.
- The backup location on the destination must be a device with read/write privileges, that is, it cannot be a READ ONLY device. A READ ONLY device can cause ORA-19624 on the backup set conversion.
- The source and target database must use a compatible character set and national character set.
- The tablespaces must be READ WRITE at the time of the first (level 0) backup.
- The source and target database must:
  - Use a compatible time zone file version.
  - Have the latest patch version applied.
- Copy the following scripts from the Oracle E-Business Suite application node to the target database server (only for Oracle E-Business Suite applications):
  audb19c.sql ausy19c.sql adstats.sql auque2.sql
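Before starting, it can help to confirm a few of these prerequisites with queries along the following lines. This is an illustrative sketch only, not part of the XTTS scripts; review the RMAN defaults separately with SHOW ALL at an RMAN prompt.

-- Run on both source and destination: COMPATIBLE, log mode, and character sets
select name, value from v$parameter where name = 'compatible';
select log_mode from v$database;
select parameter, value from nls_database_parameters
where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');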
Task 1: Prepare the Source Database
- Run the following query to check the endian format of the source database.
  col PLATFORM_NAME for a50
  select platform_name, platform_id, endian_format from v$transportable_platform;
- Identify the tablespace(s) in the source database that will be transported. Run the following query if you are migrating all the tablespaces.
  select tablespace_name from dba_tablespaces where contents='PERMANENT' and tablespace_name not in ('SYSTEM','SYSAUX');
- Run the transportable set check on the source database and review the TRANSPORT_SET_VIOLATIONS view.
  exec dbms_tts.transport_set_check('<comma-separated list of tablespaces>');
  SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;
- (Optional) Enable block change tracking on the source database to improve the performance of incremental backups.
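  For example, block change tracking can be enabled with a statement such as the following; the tracking file location is illustrative, and with Oracle Managed Files the USING FILE clause can be omitted.
  -- Enable block change tracking (example file path)
  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/change_tracking.chg';
  -- Verify that tracking is enabled
  SELECT status, filename FROM v$block_change_tracking;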
- Verify that all tablespaces are online on the source database.
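  For example, queries along the following lines (illustrative only) should return no rows if all tablespaces are online and no data files are offline.
  select tablespace_name, status from dba_tablespaces where status <> 'ONLINE';
  select file_id, tablespace_name, online_status from dba_data_files where online_status not in ('ONLINE','SYSTEM');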
- Remove the rebuild index parameter from spatial indexes and modify unusable spatial indexes.
  - This step checks to ensure that you do not have the rebuild index parameter in the spatial indexes. For multi-tenant environments, run the following command first to set the database SID to the pluggable database name (PDB NAME) for your Oracle database.
    export ORACLE_PDB_SID=[PDB NAME]
  - To see if you have any rebuild index parameters, on the source database server node, as the owner of the source database server file system and database instance, use SQL*Plus to connect to the source database as sysdba and run the following query.
    select owner,index_name from dba_indexes where index_type='DOMAIN' and upper(parameters) like '%REBUILD%';
  - To remove the rebuild index parameter, use SQL*Plus to connect to the source database as the owner of the index and rebuild the index, specifying the parameters without the rebuild clause.
    alter index [index name] rebuild parameters [parameters];
- Synchronize text indexes. Use SQL*Plus to connect to the source database as SYSDBA and run the following query to find all indexes pending synchronization.
  select pnd_index_owner,pnd_index_name,count(*) from ctxsys.ctx_pending group by pnd_index_owner,pnd_index_name;
  To synchronize the indexes, run the following command.
  exec ctx_ddl.sync_index('[index owner].[index name]');
Task 2: Prepare the Target Database
- Create a temporary new users tablespace. If the target database is a multi-tenant environment, connect to the PDB first.
  alter session set container=[PDB NAME];
  select TABLESPACE_NAME from dba_tablespaces where TABLESPACE_NAME='USERS_OCI';
  select TABLESPACE_NAME from dba_tablespaces order by 1;
  set long 99999
  SELECT DBMS_METADATA.GET_DDL ('TABLESPACE','USERS') from dual;
  Use the above Data Definition Language (DDL) output, replace the disk group name as shown below, and create the new users tablespace.
  CREATE BIGFILE TABLESPACE USERS_OCI DATAFILE '+DATAC2' SIZE 1073741824 AUTOEXTEND ON NEXT 17179869184 MAXSIZE 33554431M LOGGING ONLINE PERMANENT BLOCKSIZE 8192 EXTENT MANAGEMENT LOCAL AUTOALLOCATE ENCRYPTION USING 'AES256' ENCRYPT DEFAULT NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;
  Assign the new users tablespace USERS_OCI to all database users whose default tablespace is USERS, using the statements generated by the following query.
  select 'alter user '||username||' default tablespace users_OCI;' from dba_users where default_tablespace='USERS';
  After running the generated statements, the following count must return 0. Then switch the database default tablespace and drop the old USERS tablespace.
  select count(*) from dba_users where default_tablespace='USERS';
  ALTER DATABASE DEFAULT TABLESPACE USERS_OCI;
  drop tablespace users including contents and datafiles;
  select TABLESPACE_NAME from dba_tablespaces where TABLESPACE_NAME='USERS';
- Create temporary tablespaces on the target database. Run the following query on the source and gather the tablespace details.
  select TABLESPACE_NAME from dba_tablespaces where CONTENTS='TEMPORARY';
  Create the temp tablespaces on the target database as in the following example.
  create bigfile temporary tablespace USER_TEMP tempfile size 672m autoextend on next 672m extent management local;
- Create the nls/data/9idata directory for Oracle E-Business Suite applications.
  srvctl getenv database -d $ORACLE_UNQNAME
  perl $ORACLE_HOME/nls/data/old/cr9idata.pl   # run on all the nodes of the cluster
  srvctl setenv database -d $ORACLE_UNQNAME -t ORA_NLS10=$ORACLE_HOME/nls/data/9idata
  srvctl getenv database -d $ORACLE_UNQNAME
- Create $ORACLE_HOME/appsutil/jre for Oracle E-Business Suite applications. Copy the appsutil folder from the source database $ORACLE_HOME to the target database server under $ORACLE_HOME. Once the folder is copied, run the following commands on all nodes of the target database servers.
  cd $ORACLE_HOME/appsutil
  ln -s $ORACLE_HOME/jdk/jre
  cd $ORACLE_HOME/jdk/jre/lib/ext
  cp $ORACLE_HOME/jlib/orai18n.jar .
- Set up the sys schema for Oracle E-Business Suite applications.
  export ORACLE_PDB_SID=[PDB_NAME]
  sqlplus / as sysdba
  show con_name
  @audb19c.sql
- Set up the system schema for Oracle E-Business Suite applications.
  export ORACLE_PDB_SID=[PDB_NAME]
  sqlplus / as sysdba
  show con_name
  @ausy19c.sql
- Set the CTXSYS parameter for Oracle E-Business Suite applications.
  export ORACLE_PDB_SID=[PDB_NAME]
  sqlplus / as sysdba
  show con_name
  exec ctxsys.ctx_adm.set_parameter('file_access_role', 'public');
- Gather statistics automatically for Oracle E-Business Suite applications.
  export ORACLE_PDB_SID=[PDB_NAME]
  sqlplus / as sysdba
  show con_name
  alter system enable restricted session;
  @adstats.sql
  alter system disable restricted session;
Task 3: Incremental V4 XTTS Steps
Task 3.1: Initial Phase
- Download the latest V4 Perl scripts and unzip them; see V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1). Create directories to unzip the scripts and a temp directory.
  For example:
  mkdir -p <Backup_location>/XTTS_Migration
  mkdir -p <Backup_location>/XTTS_Migration/XTTS_TEMP
- Edit the xtt.properties file on the source system with your site-specific configuration. For more information about the parameters in the xtt.properties file, see the description of parameters in configuration file xtt.properties section in V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1). For this procedure, only the following parameters are mandatory; the others are optional and/or available for use.
  tablespaces=<comma-separated list of tablespaces to be migrated>
  platformid=<obtained from the query: select platform_name, platform_id, endian_format from v$transportable_platform>
  src_scratch_location=<backup location on the source database server>
  dest_scratch_location=<backup location on the target database server>
  dest_datafile_location=<path for the datafile location of the PDB>
  destconnstr=sys/<password>@<target PDB connect string>
  usermantransport=1   (recommended when the source database is running 12c or higher)
  parallel=1
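  For illustration only, a completed xtt.properties could look like the following; the tablespace names, backup locations, disk group, and connect string are example values, and platformid must match the value returned by v$transportable_platform for the source platform (for example, 6 for AIX-Based Systems 64-bit).
  tablespaces=TS_DATA,TS_INDEX
  platformid=6
  src_scratch_location=/backup/XTTS_Migration
  dest_scratch_location=/backup/XTTS_Migration
  dest_datafile_location=+DATAC2
  destconnstr=sys/<password>@targetpdb
  usermantransport=1
  parallel=1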
Task 3.2: Prepare Phase
- Run the backup on the source system. On the source system, log in as the Oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, and run the backup as follows.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3 &
- Transfer the following files to the target database server:
  - The backups created from the source src_scratch_location to the destination dest_scratch_location.
  - The res.txt file from the source $TMPDIR to the destination $TMPDIR.
  In the following example, scp is used to transfer the level=0 backup created by the previous step from the source system to the destination system.
  [oracle@source]$ scp /src_scratch_location/* oracle@dest:/dest_scratch
  [oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
- Restore the data files on the destination system.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  cp nohup.out nohup.out_L0
  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3 &
Task 3.3: Roll Forward Phase
During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward. This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database. The data being transported (source) is fully accessible during this phase.
Note: Multiple backups can be executed against the source without applying them to the destination. The backup files and the res.txt file must be copied before the --restore command is executed at the destination.
- Create an incremental backup of the tablespaces being transported on the source system.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  cp nohup.out nohup.out_1
  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3 &
  This step creates an incremental backup for all tablespaces listed in the xtt.properties file.
- Transfer the incremental backups and res.txt to the destination system. Transfer the incremental backup(s) (between src_scratch_location and dest_scratch_location) and the res.txt file (between the $TMPDIRs) from the source to the destination. The list of incremental backup files from the current backup can be found in the incrbackups.txt file on the source system.
  For example:
  [oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/dest_scratch_location
  [oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
- Apply the incremental backup to the datafile copies on the destination system. On the destination system, logged in as the Oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the roll forward datafiles step as follows.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  cp nohup.out nohup.out_3
  $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3
  The roll forward step connects to the destination database and applies the incremental backups on the tablespaces' datafiles for each tablespace being transported.
  Note: Although multiple backups can be executed against the source without being applied on the destination, the res.txt file must be copied after the last backup and before the --restore is executed at the destination.
  Repeat the roll forward phase until the database cutover window.
Task 3.4: Final Incremental Backup Phase
- Alter the source tablespace(s) to READ ONLY in the source database. Run the following query if all the tablespaces are being transported to the target database.
  set lines 300
  select 'alter tablespace "'||tablespace_name||'" read only;' from dba_tablespaces where contents='PERMANENT' and tablespace_name not in ('SYSTEM','SYSAUX');
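  After running the generated statements, a check like the following (illustrative only) confirms that only SYSTEM, SYSAUX, and other non-transported tablespaces remain READ WRITE before taking the final backup.
  select tablespace_name, status from dba_tablespaces where contents='PERMANENT' order by 1;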
- Create the final incremental backup of the tablespaces being transported on the source system.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  cp nohup.out nohup.out_4
  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3 &
- Transfer the incremental backups and res.txt to the destination system.
  For example:
  [oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/dest_scratch_location
  [oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
- Apply the last incremental backup to the destination datafiles.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  cp nohup.out nohup.out_5
  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3 &
Task 3.5: Transport Phase - Export Metadata and Plug-in Tablespaces into Destination Database
- Create a directory to store the export dump files.
  For example:
  mkdir -p <Backup_location>/XTTS_DUMP
  Run the following statements to create the directory object in the source database.
  create or replace directory XTTS_DUMP as '<Backup_location>/XTTS_DUMP';
  grant read,write on directory XTTS_DUMP to system;
- Prepare the export parfile.
  vi xtts_expdp.par
  directory=XTTS_DUMP
  metrics=y
  dumpfile=xttsfulltts_%U.dmp
  filesize=1048576000
  full=y
  exclude=STATISTICS
  logfile=expfulltts.log
  parallel=4
  transportable=always
  EXCLUDE=NORMAL_OPTIONS
- Run the Data Pump export on the source database.
  nohup expdp system/<password>@<source_db_name> parfile=xtts_expdp.par &
- Run the following command on the target database server to generate a sample parfile for the import.
  export TMPDIR=<Backup_location>/XTTS_Migration/XTTS_TEMP
  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
  $ORACLE_HOME/perl/bin/perl xttdriver.pl -e
  Update the parfile generated in the previous step as follows.
  Replace these parameters:
  impdp directory=<DATA_PUMP_DIR> logfile=<tts_imp.log> \
  network_link=<ttslink> transport_full_check=no \
  Delete this line from the parfile:
  transport_tablespaces=<tablespace list> \
  The final import parfile should look like:
  directory=XTTS_DUMP
  logfile=impdpfulltts.log
  metrics=y
  dumpfile=xttsfulltts_%U.dmp
  parallel=6
  full=y
  logtime=all
  exclude=STATISTICS
  transport_datafiles=<list of datafile paths generated in the parfile above>
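  For illustration only, the transport_datafiles entry lists the full paths of the restored datafile copies in dest_datafile_location; the disk group and file names below are hypothetical.
  transport_datafiles='+DATAC2/TARGETPDB/DATAFILE/ts_data_01.dbf','+DATAC2/TARGETPDB/DATAFILE/ts_index_01.dbf'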
- Disable the following audit-related parameters until the import is completed, as shown. Re-enable them after the import is completed.
  show parameter audit_sys_operations
  show parameter audit_trail
  alter system set audit_trail=none scope=spfile sid='*';
  alter system set audit_sys_operations=FALSE scope=spfile sid='*';
  srvctl stop database -d ${ORACLE_UNQNAME}
  srvctl start database -d ${ORACLE_UNQNAME}
  show parameter audit_sys_operations
  show parameter audit_trail
- Run the Data Pump import on the target database.
  - Create a directory to copy the export dump files.
    For example:
    mkdir -p <Backup_location>/XTTS_DUMP
  - Run the following statements to create the directory object in the target database, then run the import.
    create or replace directory XTTS_DUMP as '<Backup_location>/XTTS_DUMP';
    grant read,write on directory XTTS_DUMP to system;
    nohup impdp system/<password>@<target_db_name> parfile=<parfile_name> &
- Enable the following audit-related parameters after the import is completed.
  show parameter audit_sys_operations
  show parameter audit_trail
  alter system set audit_trail=DB scope=spfile sid='*';
  alter system set audit_sys_operations=TRUE scope=spfile sid='*';
  srvctl stop database -d ${ORACLE_UNQNAME}
  srvctl start database -d ${ORACLE_UNQNAME}
  show parameter audit_sys_operations
  show parameter audit_trail
- Validate the transported data.
  rman target sys/<password>@<target_db_name> trace=rman_trace.trc log=<backup_location>/tablespace_validate.log
  validate tablespace <comma-separated list of transported tablespaces> check logical;
- Assign the users tablespace back to the database users to revert the change made during target database preparation.
  select 'alter user '||username||' default tablespace users;' from dba_users where default_tablespace='USERS_OCI';
  Validate the data on the source and target databases.
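  For example, a simple cross-check such as the following (illustrative only) can be run on both databases and the results compared for the migrated schemas.
  select owner, object_type, count(*)
  from dba_objects
  where owner not in ('SYS','SYSTEM')
  group by owner, object_type
  order by owner, object_type;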
Acknowledgments
- Author - Pavan Yennampelli (Senior Cloud Engineer, Oracle North America Cloud Services - NACIE)
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Migrate Oracle Database to OCI using V4 Cross Platform Transportable Tablespaces and Upgrade from 12c to 19c
G17450-01
October 2024