5 Using Data Pump

You can import and export data and metadata using Data Pump. SQLcl provides Data Pump functionality through the DBMS_DATAPUMP PL/SQL package.

You can do the following:

  • Export one or more schemas.

  • Import one or more schemas with the remap schema option.

  • Use DATAPUMP export and import for Oracle Database and Oracle Autonomous Database Cloud Services.

  • Use a database directory or Oracle Object Store for dump files, which can optionally be encrypted.

  • Perform parallel processing of dump files.

  • Filter objects by object types or names.

  • Use the SET DATAPUMP command to set defaults and save them with the STORE command (see the sketch after this list).

  • Use the Cloud Storage feature to put dump files from a database directory into object storage, or get them from object storage into a database directory.
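
For example, the following is a minimal sketch of saving defaults. It assumes that SET DATAPUMP accepts the same optional arguments described in the tables later in this chapter and that the STORE command writes the current settings to a script; the file name and values are placeholders.

-- Set defaults for later DATAPUMP commands, then save them (hypothetical values)
SQL> set datapump -directory DATA_PUMP_DIR -parallel 2
SQL> store set datapump_defaults.sql replace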

5.1 Getting Started

The following sections provide information needed to get started with the DATAPUMP command in SQLcl.

5.1.1 Prerequisites

To use DATAPUMP in SQLcl, you need to know the following:

  • When exporting or importing your own schema as the logged-in user:

    • You must have the appropriate system privileges or adequate user space resource (quota) on the tablespace.

    • If you do not have the DBA or PDB_DBA role, you need access to the Oracle Directory for importing or exporting your own schema.

      For example:
      grant read, write on directory DATA_PUMP_DIR to dpumpnotdba;
  • When using other schemas:

    • See Required Roles to learn about the required roles for data pump import and export operations.

    • You must have permissions (quota) on the target tablespace to import.

  • If you are unable to use Data Pump, you can instead load a file as described in Loading a File.

  • To connect to an Autonomous Database, you must download the client credentials (wallet) for the database and set it with the SET CLOUDCONFIG command before connecting (see the setup sketch at the end of this list).

  • To use object storage from Oracle Database:

    • See About the Cloud Storage Command Options and Setup and Prerequisites in the Oracle Cloud Infrastructure Documentation.

    • You must install an OCI profile configuration file.

    • The OCI profile must be set using OCI PROFILE <name> or CS OCI <profile>.

    • You can copy dump files to object storage as the last step of an export using the -copycloud option on the data pump export.

    • You can copy dump files from object storage as the first step of an import using the -copycloud option on the data pump import.

    • You can copy dump files between cloud storage and Oracle Directories using the Cloud Storage command (put and get).

  • To use object storage from Autonomous Database:

    • See Using DBMS_CLOUD for Authentication.

    • You must set the credential for object storage using the Cloud Storage, SET DATAPUMP, or DATAPUMP command.

    • You can export dump files directly to object storage without using a database directory.

    • You can import dump files directly from object storage without using a database directory.

    • You cannot write log files directly to object storage but you can copy them to cloud storage using the Cloud Storage command.

  • The Time Zone File version is shown during export. The dump file can only be imported into a database with the same (or later) time zone file version.

  • Transparent and Dual encryption modes require Transparent Data Encryption (TDE). If TDE is not available, you can use only password encryption.
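
As a reference point, the setup steps above typically look like the following sketch. All values are placeholders: the OCI configuration file (commonly ~/.oci/config) follows the standard format described in the Oracle Cloud Infrastructure documentation, and the profile name, bucket URL, wallet path, and connect string must be replaced with your own.

[DEFAULT]
user=ocid1.user.oc1..<unique-id>
fingerprint=<api-key-fingerprint>
tenancy=ocid1.tenancy.oc1..<unique-id>
region=us-ashburn-1
key_file=/path/to/oci_api_key.pem

In SQLcl, select the profile, set the default object storage URI, and (for Autonomous Database) install the wallet before connecting:

SQL> oci profile DEFAULT
SQL> cs https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/<namespace>/<bucket>/
SQL> set cloudconfig /path/to/wallet.zip
SQL> connect <cloud-connect-string>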

5.1.2 Usage

The DATAPUMP command creates and submits data pump jobs using the DBMS_DATAPUMP package.

  • For importing into Autonomous Database, see Cloud Premigration Advisor Tool (CPAT).

  • You can have different hardware, operating systems, character sets, time zones, and versions in the source and target databases.

  • All object types and data types existing in Oracle Database release 11g and later versions are supported.

  • You can transfer and filter data and metadata.

  • You can transform schema names, tablespace names, and metadata at import time.

  • You can unload and load data in an Oracle proprietary format.

  • You can encrypt dump files to keep your data secure. Use transparent encryption mode if it is available on the database, or password encryption mode otherwise.

  • You can filter the import or export process to limit the object types and objects included.
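
For example, the following minimal sketch (schema, file, and object names are illustrative) exports only the tables and views of one schema, encrypts the dump file with a prompted password, and imports it into a different schema:

SQL> dp export -schemas hr -includelist TABLE,VIEW -encryptionpassword -dumpfile hr_subset.dmp
SQL> dp import -dumpfile hr_subset.dmp -encryptionpassword -remapschemas hr=hr_dev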

Export

Data Pump export is used to unload metadata and data into a dump file, which can be stored in an Oracle Directory or object storage.

  • If a schema or list of schemas is not provided, the current schema is exported.

  • If a directory or object storage and a credential are not provided, the dump file is stored in DATA_PUMP_DIR.

  • The job name is created as ESQL_<n> if one is not provided.

  • If a dump file name is not provided, the dump file is stored as <jobname>.DMP.

  • If a log file name is not provided, the log file is stored as <jobname>.LOG.
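
With these defaults, a minimal export of the current schema needs no arguments at all; the second form (schema and file names are illustrative) overrides the schema list and the dump and log file names:

SQL> dp export
SQL> dp export -schemas hr -dumpfile hr.dmp -logfile hr_exp.log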

Import

Data Pump import is used to load metadata and data from a previously exported dump file, which was stored in an Oracle Directory or object storage.

  • If a schema or list of schemas is not provided, all the objects in the dump file are imported (FULL import by default). You can use a filter to limit the number of objects imported.

  • If a database directory or object storage and a credential are not provided, the dump file is read from DATA_PUMP_DIR.

  • The job name is created as ISQL_<n> if one is not provided.

  • If a dump file name is not provided, the dump file name defaults to <jobname>.DMP.

  • If a log file name is not provided, the log file is stored as <jobname>.LOG.
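
Similarly, a minimal import only needs the name of a previously exported dump file; the second form (table names are illustrative) limits the import to two tables:

SQL> dp import -dumpfile hr.dmp
SQL> dp import -dumpfile hr.dmp -namelist TABLE=employees,departments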

5.2 Data Pump Command Syntax and Arguments

You can invoke the Data Pump command using dp or datapump.

Syntax

dp help [examples|syntax] | export [<optional-argument>,...] | import [<optional-argument>,...]

To see the help description for data pump in SQLcl, type one of the following:

datapump help
dp help

To view only the syntax and exclude other details, type:

dp help syntax

To only view the examples in help, type:

dp help examples

<optional-argument>: The following tables describe the possible optional arguments along with the default value for each.

Table 5-1 File Arguments

-credential, -c
    Credential for dump file access in Oracle Object Store.
    Default: As specified in the Cloud Storage command

-directory, -d
    Default database directory for reading and writing dump and log files.
    Default: DATA_PUMP_DIR

-dumpdirectory, -dd
    Database directory for the dump file.
    Default: -directory if specified, otherwise DATA_PUMP_DIR

-dumpfile, -f <file-name>[,...]
    Dump file name(s) when using a database directory. You can specify multiple files whether parallelism is enabled or not. The number of files specified must be at least as large as the degree of parallelism.
    Default: <jobname><n>.DMP

-dumpuri, -u [<uri>[,...] | <qualified-name>[,...]]
    <uri>: Complete URI for the Oracle Object Store file if a default is not set on the Cloud Storage command.
    <qualified-name>: Name of the object, optionally qualified by the namespace and the bucket. The qualified name concatenated to the URI specified on the Cloud Storage command must fully identify the object URI.
    A credential must be set for direct read/write access to Oracle Object Store from Autonomous Database. For -copycloud between a database directory and Oracle Object Store, an OCI profile must be set.
    You can specify multiple URIs whether parallelism is enabled or not. The number of files specified should be at least as large as the degree of parallelism.
    Default: Object name <jobname>.DMP

-logdirectory, -ld
    Database directory for the log file.
    Default: -directory if specified, otherwise DATA_PUMP_DIR

-logfile, -lf
    Log file name in the database directory.
    Default: <jobname><n>.LOG
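
As a sketch of the file arguments (directory and file names are placeholders; -parallel is described in Table 5-3), the following writes two dump files and a named log file to a custom database directory:

SQL> dp export -schemas hr -directory MY_DUMP_DIR -dumpfile hr_01.dmp,hr_02.dmp -logfile hr_exp.log -parallel 2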

Table 5-2 Command Arguments

-noexec, -ne [TRUE | FALSE]
    TRUE: Validate and generate the PL/SQL, but do not execute it.
    Default: FALSE

-verbose, -ve [TRUE | FALSE]
    TRUE: Show additional diagnostic output.
    Default: FALSE
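
For example, to validate an export, preview the generated DBMS_DATAPUMP PL/SQL, and show extra diagnostics without submitting the job (the schema name is illustrative):

SQL> dp export -schemas hr -noexec -verbose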

Table 5-3 Common Arguments

-copycloud, -cc [TRUE | FALSE]
    TRUE: Copy the dump file between the database directory and Oracle Object Store.
    For export, copy the dump file from the database directory to Oracle Object Store after the data pump job completes.
    For import, copy the dump file from Oracle Object Store to the database directory before the data pump job starts.
    Set the OCI profile using the OCI or CLOUDSTORAGE command. Set the cloud storage URI using the CLOUDSTORAGE, SET DATAPUMP, or DATAPUMP command.
    Default: FALSE

-encryptionpassword, -enp <password>
    If a password is not specified, you are prompted for one.
    For export, the dump files are encrypted using the password. For import, provide the same password that was used for the export.
    Default: None

-excludeexpr, -ex <object_type_expression>
    Specify an expression identifying an object type or set of object types to exclude from the job.
    Example: -excludeexpr "IN ('GRANT','INDEX','TRIGGER')"
    Default: None

-excludelist, -el <object-type>[,...]
    Specify a comma-separated list of object types to exclude from the job.
    Example: -excludelist GRANT,INDEX,TRIGGER
    Default: None

-includeexpr, -ix <object_type_expression>
    Specify an expression identifying an object type or set of object types to include in the job. Only matching object types and their dependents are included in the job. Use -excludelist or -excludeexpr to exclude dependent objects.
    Example: -includeexpr "IN ('TABLE','VIEW')"
    Default: None

-includelist, -il <object-type>[,...]
    Specify a comma-separated list of object types to include in the job. Only matching object types and their dependents are included in the job. Use -excludelist or -excludeexpr to exclude dependent objects.
    Example: -includelist TABLE,VIEW
    Default: None

-includemetadata, -im [TRUE | FALSE]
    TRUE: Include metadata in the job.
    Default: TRUE

-includerows, -ir [TRUE | FALSE]
    TRUE: Include data in the job.
    Default: TRUE

-jobname, -j <job-name>
    Name for the data pump job. The job name is appended with a data pump generated number, unless it ends with a number. <jobname><n> is used when submitting the data pump job and as the default for dump and log file names or object names.
    Default: ESQL_<n> (export) or ISQL_<n> (import), where <n> is a data pump generated number

-nameexpr, -nx {<object-type>=<name-expression>}[;...]
    For the specified object type, provide an expression identifying a set of object names to include in the job.
    Example: -nameexpr TABLE="IN ('EMPLOYEES', 'DEPARTMENTS')";PROCEDURE="IN ('ADD_JOB_HISTORY','SECURE_DML')"
    Default: None

-namelist, -nl {<object-type>=<name>[,...]}[;...]
    For the specified object type, provide a comma-separated list of objects to include in the job.
    Example: -namelist TABLE=employees,departments;PROCEDURE=add_job_history,secure_dml
    Default: None

-parallel, -p <degree_integer>
    Adjusts the degree of parallelism within a job, allowing multiple processes to run simultaneously. Specify the same number of files as the degree, or some processes may remain idle.
    Default: 1

-schemas, -s <schema>[,...]
    The schema or list of schemas to process.
    Example: -schemas schema1,schema2
    Default: For export, the schema of the current connection. For import, the default is FULL and all objects in the dump file are imported.

-version, -v {<nn.n> | COMPATIBLE | LATEST}
    <nn.n>: A specific database version, for example, 11.0.0. When exporting from Oracle Database 11g Release 2 (11.2.0.3) or later into Oracle Database 12c Release 1 (12.1), specify a value of 12 to allow all existing database features, components, and options to be exported. This applies to a multitenant container database (CDB) or a non-CDB.
    COMPATIBLE: Uses the metadata version from the database compatibility level and the compatibility release level for the feature.
    LATEST: The version of the metadata corresponds to the database version.
    Default: COMPATIBLE

-wait, -w [TRUE | FALSE]
    TRUE: Wait for the data pump job to finish and show summary results.
    FALSE: Submit the data pump job without waiting and without showing results.
    Default: TRUE
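
A sketch combining several common arguments (all names are illustrative): export two schemas, exclude grants and triggers, run two parallel workers against two dump files, and return immediately instead of waiting for the job to finish:

SQL> dp export -schemas hr,oe -excludelist GRANT,TRIGGER -parallel 2 -dumpfile exp_01.dmp,exp_02.dmp -jobname NIGHTLY_EXP -wait false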

Table 5-4 Export Only Arguments

-compression, -cm {ALL | DATA_ONLY | METADATA_ONLY | NONE}
    Indicates whether compression is needed for user data and metadata.
    ALL: Compress user data and metadata.
    DATA_ONLY: Compress only user data.
    METADATA_ONLY: Compress only metadata.
    NONE: Do not compress user data or metadata.
    Default: METADATA_ONLY

-estimate, -e {BLOCKS | STATISTICS}
    Specifies the method used to estimate the size of the tables. The estimate is performed before the job starts.
    BLOCKS: The estimate is calculated using the count of blocks allocated to the user tables.
    STATISTICS: The estimate is calculated using the statistics for each table. If no statistics are available for a table, BLOCKS is used.

-encryption, -en {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
    Specifying any encryption option turns on encryption unless you specify -encryption NONE.
    Default: NONE, or ALL if any other encryption option is specified

-encryptionalgorithm, -ena {AES128 | AES192 | AES256}
    Identifies the cryptographic algorithm to use.
    Default: AES128

-encryptionmode, -enm {DUAL | PASSWORD | TRANSPARENT}
    Identifies the type of security used for encryption and decryption.
    PASSWORD: Encrypts the dump files using the provided password.
    TRANSPARENT: Enables encryption if the Oracle Encryption Wallet is available.
    DUAL: Enables import using either the Oracle Encryption Wallet or the password. When using DUAL, -encryptionpassword must be specified.
    Default: TRANSPARENT, or PASSWORD if -encryptionpassword is specified

-filesize, -fs {<n>{B | KB | MB | GB | TB}}
    Limit for the size of each file.
    Default: 500 MB

-flashbackscn, -fb [TRUE | FALSE]
    TRUE: Use consistent database content based on the system change number (SCN) at the start time of execution.
    Default: FALSE

-reusefile, -r [TRUE | FALSE]
    TRUE: Replace existing dump file(s) with a new file.
    Default: TRUE
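
For example, a sketch of an export (schema name illustrative) that compresses data and metadata, encrypts with a prompted password, caps each dump file at 2 GB, and uses database content consistent with the SCN at the start of the job:

SQL> dp export -schemas hr -compression ALL -encryptionpassword -filesize 2GB -flashbackscn true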

Table 5-5 Import Only Arguments

-columnencryption, -ce [TRUE | FALSE]
    TRUE: Include the column encryption clause in table metadata.
    FALSE: Omit the column encryption clause.
    Default: TRUE if supported by the database

-objectid, -oid [TRUE | FALSE]
    TRUE: Assign the exported OID.
    FALSE: Assign a new OID.
    Default: TRUE

-remapschemas, -rs {<oldSchema>=<newSchema>[,...]}
    Objects in the job owned by <oldSchema> are moved to <newSchema>.
    Example: -remapschemas oldschema1=newschema1,oldschema2=newschema2
    Default: Not applicable

-remaptablespaces, -rt {<oldTablespace>=<newTablespace>[,...]}
    Storage segments in the job are relocated from <oldTablespace> to <newTablespace>.
    Example: -remaptablespaces oldtablespace1=newtablespace1,oldtablespace2=newtablespace2
    Default: Not applicable

-segmentattributes, -sa [TRUE | FALSE]
    TRUE: Include segment attributes clauses (physical attributes, storage attributes, tablespace, logging).
    Default: TRUE

-skipunusableindexes, -sui [TRUE | FALSE]
    TRUE: Rows are inserted into tables that have unusable indexes.
    Default: TRUE

-storage, -st [TRUE | FALSE]
    TRUE: Include storage clauses.
    Default: TRUE

-tableexists, -te {APPEND | REPLACE | SKIP | TRUNCATE}
    Action to take if the table exists during import.
    APPEND: New rows are added to the existing rows in the table.
    REPLACE: Before creating the new table, the old table is dropped.
    SKIP: The preexisting table is left unchanged.
    TRUNCATE: Rows are removed from the preexisting table before inserting rows from the import.
    Default: SKIP when -includemetadata is true; otherwise APPEND

-tablecompression, -tc [TRUE | FALSE]
    TRUE: The table compression clause is included if supported.
    FALSE: The table has the default compression for the tablespace.
    Default: TRUE
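
For example, a sketch of an import (schema and tablespace names are illustrative) that remaps a schema, relocates its segments to a different tablespace, and truncates existing tables before loading rows:

SQL> dp import -dumpfile hr.dmp -remapschemas hr=hr_test -remaptablespaces users=hr_data -tableexists TRUNCATE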

5.3 Use Cases

The following use cases illustrate how to use the DATAPUMP command to import and export data.

Use Case 1

Copy the tables, views, and functions in the current schema from one database to another.

  1. Export the current schema into DATA_PUMP_DIR.
  2. Import the schema from DATA_PUMP_DIR.
-- Export the current schema into DATA_PUMP_DIR
SQL> connect <db-connect1-string>
SQL> dp export -dumpfile my_dump.dmp -includelist table,view,function

-- Import from DATA_PUMP_DIR
SQL> connect <db-connect2-string>
SQL> dp import -dumpfile my_dump.dmp

Use Case 2

Copy the current schema from a database to Autonomous Database. The dump file is encrypted using a password supplied at a prompt.

  1. Set up for Oracle access to Oracle Object Store using an OCI profile.
  2. Export current schema into an encrypted dump file in DATA_PUMP_DIR and copy it to the CS bucket using profile.
  3. Import encrypted file from Oracle Object Store using credential.
-- Set up for Oracle access to Oracle Object Store using an OCI profile
SQL> oci profile my-profile
SQL> cs https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/abc123/testing-bucket/

-- Export current schema into an encrypted dump file in DATA_PUMP_DIR and copy it to the CS bucket using profile
SQL> connect <db-connect-string>
SQL> dp export -copycloud -dumpfile my_dump.dmp -encryptionpassword

-- Import encrypted file from Oracle Object Store using credential
SQL> set cloudconfig <wallet>
SQL> connect <cloud-connect-string>
SQL> dp import -dumpuri /o/my_dump.dmp -encryptionpassword -c SWIFTCRED

Use Case 3

Copy multiple schemas from database to Autonomous Database with remap schemas.

  1. Set up for Oracle access to Oracle Object Store using an OCI profile.
  2. Export schemas into database directory and copy to cloud.
  3. Import from Oracle Object Store using credential. Remap the schemas.
-- Set up for Oracle access to Oracle Object Store using an OCI profile
SQL> oci profile my-profile
SQL> cs https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/abc123/testing-bucket/

-- Export schemas into database directory and copy to cloud
SQL> connect <db-connect-string>
SQL> dp export -schemas dpumptest1,dpumptest11 -dumpfile DPUMPTEST1_11.DMP -cc

-- Import from Oracle Object Store using credential. Remap the schemas
SQL> set cloudconfig <wallet>
SQL> connect <cloud-connect-string>
SQL> dp import -dumpuri /o/DPUMPTEST1_11.DMP -c SWIFTCRED -rs dpumptest1=dpumptest2,dpumptest11=dpumptest21

Use Case 4

Copy the current schema from a database to Autonomous Database with a remapped tablespace, using only an OCI profile (no credential).

  1. Set up for Oracle access to Oracle Object Store using an OCI profile.
  2. Export the current schema into DATA_PUMP_DIR and copy it to the CS bucket using profile.
  3. Copy from Oracle Object Store into directory and import from directory. Remap the tablespace.
-- Set up for Oracle access to Oracle Object Store using an OCI profile
SQL> oci profile my-profile
SQL> cs https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/abc123/testing-bucket/

-- Export the current schema into DATA_PUMP_DIR and copy it to the CS bucket using profile
SQL> connect <db-connect-string>
SQL> dp export -copycloud

-- Copy from Oracle Object Store into directory and import from directory. Remap the tablespace.
SQL> set cloudconfig <wallet>
SQL> connect <cloud-connect-string>
SQL> dp import -copycloud -dumpuri /o/ESQL_<n>.DMP -rt DATA=USERS

Use Case 5

Copy the current schema from a database to Autonomous Database using parallel processing.

  1. Set up for Oracle access to Oracle Object Store using an OCI profile.
  2. Export the current schema into DATA_PUMP_DIR and copy all files to the CS bucket using profile.
  3. Import from Oracle Object Store using credential.
-- Set up for Oracle access to Oracle Object Store using an OCI profile
SQL> oci profile my-profile
SQL> cs https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/abc123/testing-bucket/

-- Export the current schema into DATA_PUMP_DIR and copy all files to the CS bucket using profile
SQL> connect <db-connect-string>
SQL> dp export -copycloud -parallel 3 -dumpfile my_dump1.dmp,my_dump2.dmp,my_dump3.dmp

-- Import from Oracle Object Store using credential
SQL> set cloudconfig <wallet>
SQL> connect <cloud-connect-string>
SQL> dp import -dumpuri /o/my_dump1.dmp,/o/my_dump2.dmp,/o/my_dump3.dmp -c SWIFTCRED

5.4 Tips and Troubleshooting

The DATAPUMP command builds PL/SQL that uses the DBMS_DATAPUMP package to execute exports and imports.

  • Preview the PL/SQL and Parameters

    • Use the -noexec option to validate the export or import. This option shows the generated PL/SQL and parameters but does not submit the job for execution.

  • Get additional information about the command processing

    • Use the -verbose option to see additional diagnostic information.

    • This option also provides queries that you can copy to view the last line of the log and percentage completed.

  • View the job log

    • When using the -wait option, the log is written to the console.

    • When using -wait false, you can copy the log to object storage if you cannot see files in the directory.

    • When using -wait false, you can view the log file with the following code block for Oracle Database release 12.2 and later versions:

      var c clob;
      set long 32000
      begin select to_clob(BFILENAME('DATA_PUMP_DIR','ESQL_<n>.LOG')) into :c from dual; end;
      /
      print c
      
  • When importing or exporting as the logged-in user, it is recommended not to use the SYS role.

  • A data pump job creates a master table in your schema, so you must have quota on the tablespace (inserting a row into a table is a quick way to confirm that you have quota). If you do not have quota, you see the following error after you run a DATAPUMP command: ORA-31626: job does not exist.

    The following code snippet illustrates this problem:

    *** Error with DataPump command 
    
    grant connect, resource, create session to user1 identified by user1;
    grant read on directory DATA_PUMP_DIR to user1;
    grant write on directory DATA_PUMP_DIR to user1;
    
    SQL> connect <db-connect-string>
    Connected
    
    SQL> datapump export -schemas user1
    
    ** Datapump Command Start ** at 2022.03.10-15.51.28
    Initiating DATA PUMP
    DATABASE TIME ZONE: VERSION:32 CON_ID:0
    Log Location: DATA_PUMP_DIR:ESQL_1614.LOG
    ORA-31626: job does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 1849
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 6869
    ORA-06512: at line 25
     
    ** Datapump Command End ** at 2022.03.10-15.51.30
    SQL> exit
    
    *** Resolve Error by Granting Tablespace Quota to User
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.3.0.0.0
     
    SQL> alter user user1 quota 128M on users;
    User altered.
     
    SQL> connect <db-connect-string>
    Connected.
    
    SQL> datapump export -schemas user1
    
    ** Datapump Command Start ** at 2022.03.10-15.54.15
    Initiating DATA PUMP
    DATABASE TIME ZONE: VERSION:32 CON_ID:0
    Log Location: DATA_PUMP_DIR:ESQL_1616.LOG
    Starting "USER1"."ESQL_1616":
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Master table "USER1"."ESQL_1616" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for USER1.ESQL_1616 is:
      /opt/oracle/admin/DB193C/dpdump/D9C3824B6E651CA4E053020014C3358C/ESQL_1616.DMP
    Job "USER1"."ESQL_1616" successfully completed at Thu Mar 10 15:54:52 2022 elapsed 0 00:00:33
    DataPump Operation Status 'COMPLETED'
    Jobname = ESQL_1616
    ** Datapump Command End ** at 2022.03.10-15.54.56