Transportable Tablespaces

This method provides broad cross-platform migration support and limited support for source and destination databases that use different character sets. You can also use the transportable tablespace feature to migrate data to a later version of Oracle Database. This method is often chosen when migrating between platforms with different endian formats, or when physical reorganization of the data is not necessary.

The transportable tablespace method is generally much faster than a conventional export and import of the same data because you do not have to unload and reload the data. Rather, the source data files are transported to the destination system and attached to the target database. For basic migrations using this feature, you use Data Pump to export and import only the metadata associated with the objects in the tablespaces.

The transportable tablespace method provides broad cross-platform support with some limitations. If you are migrating from a big-endian platform to Exadata Cloud at Customer (little-endian), extra processing is required to perform a conversion. Ideally, the source and target database character sets should be the same (AL32UTF8). However, there are limited situations where another source character set can be supported. Administrative tablespaces, such as SYSTEM and SYSAUX, cannot be included in a transportable tablespace set. For details regarding the requirements and limitations for transportable tablespaces, see Transporting Tablespaces Between Databases in Oracle Database Administrator’s Guide for Release 18, 12.2, 12.1, or 11.2.
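
As a quick preliminary check, you can confirm the endian format of the source platform and the database character set by querying the source database, for example:

  SQL> SELECT d.platform_name, tp.endian_format
    2  FROM   v$transportable_platform tp, v$database d
    3  WHERE  tp.platform_name = d.platform_name;

  SQL> SELECT value
    2  FROM   nls_database_parameters
    3  WHERE  parameter = 'NLS_CHARACTERSET';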

To perform a basic migration using the transportable tablespace method, you perform these tasks:

  1. Select a self-contained set of tablespaces. That is, there should be no references from objects inside the set of tablespaces to objects outside the set of tablespaces.

    For example, there should be no:

    • Indexes inside the tablespace set for tables outside the set.

    • Partitioned tables with partitions outside the tablespace set.

    • Referential integrity constraints that point to objects outside the tablespace set.

    • Tables with LOB columns that point to LOBs outside the tablespace set.

    You can use the TRANSPORT_SET_CHECK procedure in the DBMS_TTS package to determine whether a set of tablespaces is self-contained.

  2. On the source database, place the set of tablespaces into read-only mode.

  3. On the source database host, execute Data Pump Export to unload the metadata associated with the tablespace set.

  4. Transfer the Data Pump Export dump file and the tablespace data files to an Exadata Cloud at Customer compute node.

  5. On the Exadata Cloud at Customer compute node, load the tablespace data files into ASM and Exadata Storage Server. If required, perform an endian format conversion at this stage.

    You can load and convert the data files by using the RMAN CONVERT command, or load them by using the PUT_FILE procedure in the DBMS_FILE_TRANSFER package (a sketch of the DBMS_FILE_TRANSFER option follows this list).

  6. On the Exadata Cloud at Customer compute node, use Data Pump Import to load the metadata associated with the tablespace set.

  7. Set the tablespaces on the Exadata Cloud at Customer database to read-write mode.

  8. After verifying that the data has been imported successfully, you can delete the dump file.
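
As an illustration of the DBMS_FILE_TRANSFER option mentioned in step 5, the following is a minimal sketch that copies a data file into ASM on the target database by using the PUT_FILE procedure. The directory object names (SRC_DIR and ASM_DIR) and the database link name (TARGET_DB_LINK) are hypothetical: SRC_DIR points to the data file location on the source host, ASM_DIR is created in the target database and points to an ASM location, and TARGET_DB_LINK is a database link from the source database to the target database. The data file name matches the example later in this topic. Note that DBMS_FILE_TRANSFER copies files as-is, so this option is suitable only when no endian conversion is required, or when any required conversion is performed separately with RMAN CONVERT.

  SQL> -- SRC_DIR, ASM_DIR, and TARGET_DB_LINK are hypothetical names (see above)
  SQL> BEGIN
    2    DBMS_FILE_TRANSFER.PUT_FILE(
    3      source_directory_object      => 'SRC_DIR',
    4      source_file_name             => 'fsdata01.dbf',
    5      destination_directory_object => 'ASM_DIR',
    6      destination_file_name        => 'fsdata01.dbf',
    7      destination_database         => 'TARGET_DB_LINK');
    8  END;
    9  /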

As an alternative to this basic migration procedure, you can use RMAN to migrate a transportable tablespace set. By using RMAN you avoid the need to place the source tablespaces into read-only mode. You can also use a database backup as the migration source, and you can specify a point in time, SCN, or restore point within your recovery window so that the tablespace data is transported as it existed at that time. See Creating Transportable Tablespace Sets in Oracle Database Backup and Recovery User's Guide for Release 18, 12.2, 12.1, or 11.2.
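
For example, a minimal sketch of this RMAN approach, run on the source database host for the FSDATA and FSINDEX tablespaces used in the example that follows, might look like this. The tablespace destination, auxiliary destination, and SCN are hypothetical placeholders; RMAN restores the tablespaces from backup into the auxiliary destination and writes the transportable set (the data files plus a Data Pump export of the metadata) to the tablespace destination.

  RMAN> # the destination paths and the SCN below are hypothetical placeholders
  RMAN> TRANSPORT TABLESPACE fsdata, fsindex
  2> TABLESPACE DESTINATION '/u01/app/oracle/transport/dest'
  3> AUXILIARY DESTINATION '/u01/app/oracle/transport/aux'
  4> UNTIL SCN 1234567;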

Data Pump Transportable Tablespace: Example

This example provides a step-by-step demonstration of the tasks required to migrate tablespaces from an existing Oracle database to Oracle Database Exadata Cloud at Customer.

This example performs a migration of the FSDATA and FSINDEX tablespaces, which contain objects owned by the FSOWNER database user.

In this example, the source database is on a big-endian AIX-based host.

  1. Verify that the source tablespace set is self-contained.

    1. On the source database host, invoke SQL*Plus and log in to the source database as the SYSTEM user.

      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>
    2. Use the TRANSPORT_SET_CHECK procedure in the DBMS_TTS package to determine if the tablespace set is self-contained.

      SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('FSDATA,FSINDEX', TRUE);
    3. Examine the TRANSPORT_SET_VIOLATIONS view. If the tablespace set examined by DBMS_TTS.TRANSPORT_SET_CHECK is self-contained, this view is empty. Otherwise, you must resolve any violation before you proceed.

      SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
  2. On the source database, place the set of tablespaces that will be transported into read-only mode.

    SQL> ALTER TABLESPACE fsindex READ ONLY;
    SQL> ALTER TABLESPACE fsdata READ ONLY;
  3. On the source database host, execute Data Pump Export to unload the metadata associated with the tablespace set.

    1. Create an operating system directory to store the output from the export operation.

      $ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
    2. Create a directory object in the source database to reference the operating system directory.

      SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
    3. Determine the names of the data files that belong to the FSDATA and FSINDEX tablespaces by querying DBA_DATA_FILES. These files will also be listed in the export output.

      SQL> SELECT file_name FROM dba_data_files
        2  WHERE tablespace_name in ('FSDATA','FSINDEX');
      
      FILE_NAME
      -----------------------------------------------------------------
      /u01/app/oracle/oradata/orcl/fsdata01.dbf
      /u01/app/oracle/oradata/orcl/fsindex01.dbf
    4. Invoke Data Pump Export to perform the transportable tablespace export.

      On the source database host, invoke Data Pump Export and connect to the source database. Export the source tablespaces using the TRANSPORT_TABLESPACES option. Provide the password for the SYSTEM user when prompted.

      $ expdp system TRANSPORT_TABLESPACES=fsdata,fsindex TRANSPORT_FULL_CHECK=YES DIRECTORY=dp_for_cloud
  4. Transfer the dump file and tablespace data files to the target Exadata Cloud at Customer compute node.

    In this example, the files are copied across the network by using the SCP utility.

    1. On the target Exadata Cloud at Customer compute node, create a directory to which you will copy the dump file and tablespace data files.

      Choose an appropriate location based on the size of the files that will be transferred.

      $ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_source
    2. Before using the scp command to copy the export dump file, make sure the SSH private key that provides access to the target Exadata Cloud at Customer compute node is available on your source host. For more information about SSH keys, see About Network Access to Exadata Cloud at Customer.
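
      For example, you can confirm that the key provides access before copying any files, using the same placeholders as in the scp commands shown in the next step:

      $ ssh -i private_key_file oracle@compute_node_IP_address 'hostname'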

    3. On the source database host, use the SCP utility to transfer the dump file and tablespace data files to the target Exadata Cloud at Customer compute node.

      $ scp -i private_key_file \
      /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
      oracle@compute_node_IP_address:/u01/app/oracle/admin/ORCL/dpdump/from_source
      
      $ scp -i private_key_file \
      /u01/app/oracle/oradata/orcl/fsdata01.dbf \
      oracle@compute_node_IP_address:/u01/app/oracle/admin/ORCL/dpdump/from_source
      
      $ scp -i private_key_file \
      /u01/app/oracle/oradata/orcl/fsindex01.dbf \
      oracle@compute_node_IP_address:/u01/app/oracle/admin/ORCL/dpdump/from_source
  5. On the target Exadata Cloud at Customer compute node, convert and load the tablespace data files into ASM and Exadata Storage Server.

    In this example, the data files are converted to little-endian format and loaded into ASM by using the RMAN CONVERT command.

    1. Invoke RMAN and log in to the target database as the SYSTEM user.

      $ rman target system
      target database password: <enter the password for the SYSTEM user>
    2. Use the CONVERT command to convert and load the data files into ASM.

      Take note of the ASM file names for your converted data files.

      RMAN> convert datafile
      2> '/u01/app/oracle/admin/ORCL/dpdump/from_source/fsdata01.dbf',
      3> '/u01/app/oracle/admin/ORCL/dpdump/from_source/fsindex01.dbf'
      4> to platform="Linux x86 64-bit"
      5> from platform="AIX-Based Systems (64-bit)"
      6> format '+DATA_SYSNAME';
      
      Starting conversion at target at ...
      ...
      input file name=/u01/app/oracle/admin/ORCL/dpdump/from_source/fsdata01.dbf
      converted datafile=+DATA_SYSNAME/ORCL/datafile/fsdata01.277.821069105
      ...
      
      input file name=/u01/app/oracle/admin/ORCL/dpdump/from_source/fsindex01.dbf
      converted datafile=+DATA_SYSNAME/ORCL/datafile/fsindex01.278.419052810
      
      ...
  6. On the target Exadata Cloud at Customer compute node, use Data Pump Import to load the metadata associated with the tablespace set.

    1. Invoke SQL*Plus and log in to the target database as the SYSTEM user.
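
      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>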

    2. Create a directory object in the target database that points to the operating system directory containing the Data Pump dump file.

      SQL> CREATE DIRECTORY dp_from_source AS '/u01/app/oracle/admin/ORCL/dpdump/from_source';
    3. If they do not already exist, create user accounts for the owners of the objects that will be imported into the target database.

      SQL> CREATE USER fsowner
        2  PROFILE default
        3  IDENTIFIED BY fspass
        4  TEMPORARY TABLESPACE temp
        5  ACCOUNT UNLOCK;
    4. Invoke Data Pump Import and import the tablespace metadata into the target database. Use the TRANSPORT_DATAFILES option and specify the file names of the data files that were converted and loaded into ASM.

      $ impdp system DIRECTORY=dp_from_source \
      TRANSPORT_DATAFILES='+DATA_SYSNAME/ORCL/datafile/fsdata01.277.821069105', \
      '+DATA_SYSNAME/ORCL/datafile/fsindex01.278.419052810'
  7. On the target database, set the FSDATA and FSINDEX tablespaces to READ WRITE mode.

    SQL> ALTER TABLESPACE fsdata READ WRITE;
    Tablespace altered.
    SQL> ALTER TABLESPACE fsindex READ WRITE;
    Tablespace altered.
  8. After verifying that the data has been imported successfully, you can delete the expdat.dmp dump file.
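
    For example, on the target compute node:

    $ rm /u01/app/oracle/admin/ORCL/dpdump/from_source/expdat.dmp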