Data Pump Conventional Export/Import

You can use this method regardless of the endian format and database character set of the on-premises database.

To migrate an on-premises source database, tablespace, schema, or table to a database on an Oracle Database Cloud Service database deployment using Data Pump Export and Import, perform these tasks:

  1. On the on-premises database host, invoke Data Pump Export and export the on-premises database.

  2. Use a secure copy utility to transfer the dump file to the Database Cloud Service compute node.

  3. On the Database Cloud Service compute node, invoke Data Pump Import and import the data into the database.

  4. After verifying that the data has been imported successfully, you can delete the dump file.

For information about Data Pump Import and Export, see these topics:

  • "Data Pump Export Modes" in Oracle Database Utilities for Release 12.2, 12.1, or 11.2.

  • "Data Pump Import Modes" in Oracle Database Utilities for Release 12.2, 12.1, or 11.2.

Data Pump Conventional Export/Import: Example

This example provides a step-by-step demonstration of the tasks required to migrate a schema from an on-premises Oracle database to an Oracle Database Cloud Service database.

This example illustrates a schema mode export and import. The same general procedure applies for a full database, tablespace, or table export and import.

In this example, the on-premises database is on a Linux host.

  1. On the on-premises database host, invoke Data Pump Export to export the schemas.

    1. On the on-premises database host, create an operating system directory to use for the on-premises database export files.

      $ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
    2. On the on-premises database host, invoke SQL*Plus and log in to the on-premises database as the SYSTEM user.

      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>
    3. Create a directory object in the on-premises database to reference the operating system directory.

      SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
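
      If you later run the export as a user other than SYSTEM, that user also needs read and write access to the directory object. This grant is not required for this example, which exports as SYSTEM:

      SQL> GRANT READ, WRITE ON DIRECTORY dp_for_cloud TO fsowner;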
    4. Exit from SQL*Plus.

    5. On the on-premises database host, invoke Data Pump Export as the SYSTEM user or another user with the DATAPUMP_EXP_FULL_DATABASE role and export the on-premises schemas. Provide the password for the user when prompted.

      $ expdp system SCHEMAS=fsowner DIRECTORY=dp_for_cloud
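
      When you do not specify the DUMPFILE parameter, Data Pump Export writes to a dump file with the default name expdat.dmp, which is the name used in the remaining steps of this example. To choose the file names yourself, you can name the dump file and log file explicitly; the names in this variant are illustrative:

      $ expdp system SCHEMAS=fsowner DIRECTORY=dp_for_cloud DUMPFILE=fsowner.dmp LOGFILE=fsowner_exp.log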
  2. Use a secure copy utility to transfer the dump file to the Database Cloud Service compute node.

    In this example, the dump file is copied to the /u01 directory. Choose an appropriate location based on the size of the file to be transferred.

    1. On the Database Cloud Service compute node, create a directory for the dump file.

      $ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
    2. Before using the scp command to copy the export dump file, make sure the SSH private key that provides access to the Database Cloud Service compute node is available on your on-premises host. For more information about SSH keys, see About Network Access to Database Cloud Service.

    3. On the on-premises database host, use the SCP utility to transfer the dump file to the Database Cloud Service compute node.

      $ scp -i private_key_file \
      /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
      oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
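
    Optionally, you can confirm that the copy was not corrupted by computing a checksum of the dump file on both hosts and comparing the results. The sketch below demonstrates the technique with a temporary file standing in for expdat.dmp:

```shell
# Sketch: verify a copied file by comparing SHA-256 checksums.
# A temporary file stands in for the Data Pump dump file.
src=$(mktemp)
dst=$(mktemp)
echo "dump file contents" > "$src"
cp "$src" "$dst"                      # stands in for the scp transfer
src_sum=$(sha256sum "$src" | awk '{print $1}')
dst_sum=$(sha256sum "$dst" | awk '{print $1}')
if [ "$src_sum" = "$dst_sum" ]; then
  echo "checksums match"
else
  echo "checksums differ" >&2
fi
```

    On the real hosts, run sha256sum against the dump file path on each side and compare the two hashes before starting the import.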
  3. On the Database Cloud Service compute node, invoke Data Pump Import and import the data into the database.

    1. On the Database Cloud Service compute node, invoke SQL*Plus and log in to the database as the SYSTEM user.

      $ sqlplus system
      Enter password: <enter the password for the SYSTEM user>
    2. Create a directory object in the Database Cloud Service database.

      SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
    3. If they do not already exist, create the tablespaces needed by the objects that will be imported.
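
      For example, if the fsowner objects were stored in a tablespace named FSDATA on the source database, you could create a matching tablespace before the import. The tablespace name and sizing values here are illustrative, and the DATAFILE clause assumes Oracle Managed Files; otherwise, specify a data file name:

      SQL> CREATE TABLESPACE fsdata DATAFILE SIZE 500M AUTOEXTEND ON;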

    4. Exit from SQL*Plus.

    5. On the Database Cloud Service compute node, invoke Data Pump Import and connect to the database. Import the data into the database.

      $ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_onprem
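
      If the schema or tablespace names on the Database Cloud Service database differ from those on the source, Data Pump Import can remap them during the import. The target names in this variant are illustrative:

      $ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_onprem REMAP_SCHEMA=fsowner:fsowner_cloud REMAP_TABLESPACE=fsdata:users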
  4. After verifying that the data has been imported successfully, you can delete the expdat.dmp file.
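
    One simple verification is to review the Data Pump Import log for errors and to compare object counts for the imported schema with the source database. For example, as the SYSTEM user:

      SQL> SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'FSOWNER' GROUP BY object_type;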