Conventional Data Pump Export and Import

You can use this method regardless of the endian format and database character set of the source database. You can also use Data Pump to migrate data between different versions of Oracle Database. This method is simple to implement, provides the broadest cross-platform support, and enables you to physically re-organize your target database; however, the time and resources required for export and import can rule out this approach for large databases or migrations with tight timeframes.

Conventional Data Pump Export and Import uses the Data Pump utilities, expdp and impdp, to unload (export) and load (import) Oracle Database data and metadata. During an export, a copy of the source data and metadata is written to a binary dump file. After the dump file is transported to the target system, its contents can be imported into another Oracle database. Because of this architecture, Data Pump provides broad support for data migration between different platforms, different Oracle Database versions, and databases with different character sets.

When using this approach, database administrators can alter the physical properties of database objects in the target database. For example, administrators can optimize table and index extent sizes to suit the characteristics of the target database environment. Therefore, conventional Data Pump Export and Import is well suited for situations where you need to physically re-organize the target database.
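
For example, a Data Pump Import invocation can discard the source storage clauses so that imported objects inherit the target tablespace defaults. The TRANSFORM and REMAP_TABLESPACE parameters shown here are standard impdp options; the schema, directory object, and tablespace names are hypothetical:

  $ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_source \
    TRANSFORM=SEGMENT_ATTRIBUTES:N REMAP_TABLESPACE=fs_data_old:fs_data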

In addition to working on whole databases, conventional Data Pump Export and Import provides the flexibility to export and import specific tables, schemas, or tablespaces, which makes it well suited for situations where you do not want to migrate the entire database. This capability also enables you to migrate a database in pieces if such an approach is logically valid.
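
For example, the standard TABLES and TABLESPACES parameters restrict an export to specific objects; the table, tablespace, and directory object names here are hypothetical:

  $ expdp system TABLES=fsowner.invoices DIRECTORY=dp_for_cloud
  $ expdp system TABLESPACES=fs_data DIRECTORY=dp_for_cloud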

Because of the processing required during export and import, this approach can be more time and resource intensive than other migration approaches. Therefore, other approaches might be preferred for migrations that require minimal downtime.

To migrate your source database, tablespace, schema, or table to Oracle Database Exadata Cloud Service using conventional Data Pump Export and Import, perform these tasks:

  1. On the source database host, use Data Pump Export to unload part or all of the source database to a dump file.

  2. Transfer the resulting dump file to an Exadata Cloud Service compute node.

  3. On the Exadata Cloud Service compute node, use Data Pump Import to load the target database.

  4. After verifying that the dump file contents have been imported successfully, you can delete the dump file.
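
    One quick check, assuming the default Data Pump log file name, is to scan the import log (import.log, written to the directory object location) for ORA- errors:

      $ grep ORA- import.log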

See Oracle Data Pump in Oracle Database Utilities for Release 18, 12.2, 12.1, or 11.2.

Conventional Data Pump Export and Import: Example

This example provides a step-by-step demonstration of the tasks required to migrate a schema from an existing Oracle database to Oracle Database Exadata Cloud Service.

This example illustrates a schema-mode export and import. The same general procedure applies for a full database, tablespace, or table export and import.

In this example, the source database is on a Linux host.

  1. On the source database host, invoke Data Pump Export to export the schema.

    1. On the source database host, create an operating system directory to store the output from the export operation.

      $ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
    2. On the source database host, invoke SQL*Plus and log in to the source database as the SYSTEM user.

      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>
    3. Create a directory object in the source database to reference the operating system directory.

      SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
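
      If you run the export as a user other than SYSTEM, that user needs read and write access to the directory object. A grant along these lines, with FSOWNER as the exporting user, would provide it:

      SQL> GRANT READ, WRITE ON DIRECTORY dp_for_cloud TO fsowner;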
    4. Exit from SQL*Plus.

    5. On the source database host, invoke Data Pump Export as the SYSTEM user or another user with the DATAPUMP_EXP_FULL_DATABASE role and export the required schema. In this example, the schema owner is FSOWNER. Provide the password for the user when prompted.

      $ expdp system SCHEMAS=fsowner DIRECTORY=dp_for_cloud
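
      Because no DUMPFILE or LOGFILE parameters are specified, Data Pump writes the default file names, expdat.dmp and export.log, to the dp_for_cloud directory; the transfer step below relies on the default dump file name. If you prefer explicit names, you can supply them; the file names here are hypothetical:

      $ expdp system SCHEMAS=fsowner DIRECTORY=dp_for_cloud DUMPFILE=fsowner.dmp LOGFILE=fsowner_exp.log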
  2. Transfer the dump file to the target Exadata Cloud Service compute node.

    In this example, the dump file is copied across the network by using the SCP utility.

    1. On the target Exadata Cloud Service compute node, create a directory that you will copy the dump file to.

      Choose an appropriate location based on the size of the file that will be transferred.

      $ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_source
    2. Before using the scp command to copy the export dump file, make sure the SSH private key that provides access to the target Exadata Cloud Service compute node is available on your source host. For more information about SSH keys, see About Network Access to Exadata Cloud Service.

    3. On the source database host, use the SCP utility to transfer the dump file to the target Exadata Cloud Service compute node.

      $ scp -i private_key_file \
      /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
      oracle@compute_node_IP_address:/u01/app/oracle/admin/ORCL/dpdump/from_source
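
      To confirm that the file transferred intact, you can compare checksums of the source and copied files; sha256sum is one common choice on Linux hosts:

      $ sha256sum /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp

      Run the same command against the copied file on the compute node and verify that the two checksums match.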
  3. On the target Exadata Cloud Service compute node, invoke Data Pump Import and import the data into the database.

    1. On the Exadata Cloud Service compute node, invoke SQL*Plus and log in to the database as the SYSTEM user.

      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>
    2. Create a directory object in the Exadata Cloud Service database.

      SQL> CREATE DIRECTORY dp_from_source AS '/u01/app/oracle/admin/ORCL/dpdump/from_source';
    3. If the tablespaces required by the imported objects do not exist, create them.
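
      For example, assuming Oracle Managed Files are configured in the target database and using a hypothetical tablespace name and size:

      SQL> CREATE TABLESPACE fs_data DATAFILE SIZE 10G AUTOEXTEND ON;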

    4. Exit from SQL*Plus.

    5. On the Exadata Cloud Service compute node, invoke Data Pump Import and connect to the database to import the data. Provide the password for the SYSTEM user when prompted.

      $ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_source
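
      If schema or tablespace names differ between the source and target databases, the standard REMAP_SCHEMA and REMAP_TABLESPACE parameters adjust them during the import; the tablespace names here are hypothetical:

      $ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_source REMAP_TABLESPACE=users:fs_data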
  4. After verifying that the data has been imported successfully, you can delete the expdat.dmp file.
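
    One way to verify the import is to compare object counts for the schema between the source and target databases, for example:

      SQL> SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'FSOWNER' GROUP BY object_type;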