Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.
An understanding of the following topics can help you to successfully use Oracle Data Pump to its fullest advantage:
1.1 Oracle Data Pump Components
Oracle Data Pump is made up of three distinct components: the command-line clients,
expdp and impdp; the DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API); and the
DBMS_METADATA PL/SQL package (also known as the Metadata API).
The Oracle Data Pump clients, expdp and impdp, start the Oracle Data Pump Export utility and Oracle Data Pump Import utility, respectively.
The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.
When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.
The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
All Oracle Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available.
Starting with Oracle Database 18c, you can include the unified audit trail in either full or partial export and import operations using Oracle Data Pump. There is no change to the user interface. When you perform the export or import operations of a database, the unified audit trail is automatically included in the Oracle Data Pump dump files. See Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_DATAPUMP and DBMS_METADATA packages. See Oracle Database Security Guide for information about exporting and importing the unified audit trail using Oracle Data Pump.
1.2 How Does Data Pump Move Data?
There are several Oracle Data Pump methods that you can use to move data in and out of databases. You can select the method that best fits your use case.
Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.
1.2.1 Using Data File Copying to Move Data
The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file.
Data Pump uses data file copying when either of the following is specified:
The TRANSPORT_TABLESPACES parameter is used to specify a transportable tablespace export. Only metadata for the specified tablespaces is exported.
The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter), on a full mode export (specified with the FULL parameter), or on a full mode network import (specified with the FULL and NETWORK_LINK parameters). A sketch of both styles appears after this list.
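As a sketch, assuming an existing directory object dpump_dir1 and a tablespace tbs1 (both names are illustrative, and the tablespace must be read-only for a transportable export), the two styles might be invoked as follows:

expdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs1
expdp system DIRECTORY=dpump_dir1 DUMPFILE=full_tts.dmp FULL=YES TRANSPORTABLE=ALWAYS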
When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, both the data files and the export dump file must be loaded.
During transportable imports, tablespaces are temporarily made read/write and then set back to read-only. This temporary setting change was introduced with Oracle Database 12c Release 1 (12.1) to improve performance. However, be aware that this behavior also causes the SCNs of the import job data files to change. Changing the SCNs for data files can cause issues during future transportable imports of those files.
For example, if a transportable tablespace import fails at any point after the tablespaces have been made read/write (even if they are now read-only again), then the data files become corrupt. They cannot be recovered.
Because transportable jobs are not restartable, you must restart the failed job from the beginning. You must delete the corrupt data files and copy fresh versions to the target destination.
When transportable jobs are performed, it is best practice to keep a copy of the data files on the source system until the import job has successfully completed on the target system. If the import job fails for some reason, then keeping copies ensures that you can have uncorrupted copies of the data files.
When data is moved by using data file copying, there are some limitations regarding character set compatibility between the source and target databases.
If the source platform and the target platform are of different endianness, then you must convert the data being transported so that it is in the format of the target platform. You can use the DBMS_FILE_TRANSFER PL/SQL package or the RMAN CONVERT command to convert the data.
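For example, a hedged sketch of converting a tablespace's data files on the source platform before transport; the tablespace name, target platform string, and output path are illustrative, and the platform string must match a value in V$TRANSPORTABLE_PLATFORM:

RMAN> CONVERT TABLESPACE tbs1
  TO PLATFORM 'Linux x86 64-bit'
  FORMAT '/tmp/transport/%U';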
1.2.2 Using Direct Path to Move Data
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation.
Data Pump automatically uses the direct path method for loading and unloading data unless the structure of a table does not allow it. For example, if a table contains a column of type
BFILE, then direct path cannot be used to load that table, and the external tables method is used instead.
The following sections describe situations in which direct path cannot be used for loading and unloading.
Situations in Which Direct Path Load Is Not Used
If any of the following conditions exist for a table, then Data Pump uses external tables to load the data for that table, instead of direct path:
A domain index that is not a CONTEXT type index exists for a LOB column.
A global index on multipartition tables exists during a single-partition load. This case includes object tables that are partitioned.
A table is in a cluster.
There is an active trigger on a preexisting table.
Fine-grained access control is enabled in insert mode on a preexisting table.
A table contains BFILE columns or columns of opaque types.
A referential integrity constraint is present on a preexisting table.
A table contains VARRAY columns with an embedded opaque type.
The table has encrypted columns.
The table into which data is being imported is a preexisting table and at least one of the following conditions exists:
There is an active trigger
The table is partitioned
Fine-grained access control is in insert mode
A referential integrity constraint exists
A unique index exists
Supplemental logging is enabled, and the table has at least one LOB column.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
A table contains a column (including a VARRAY column) with a TIMESTAMP WITH TIME ZONE data type, and the version of the time zone data file is different between the export and import systems.
Situations in Which Direct Path Unload Is Not Used
If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to unload the data:
Fine-grained access control for SELECT is enabled.
The table is a queue table.
The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
The table contains encrypted columns.
The table contains a column of an evolved type that needs upgrading.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
Before the unload operation, the table was altered to contain a column that is NOT NULL, and also has a default value specified.
1.2.3 Using External Tables to Move Data
If you do not select data file copying, and the data cannot be moved using direct path, you can use the external tables mechanism.
The external tables mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, use the
APPEND hint on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Because they are the same, Oracle Data Pump can use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Oracle Data Pump can use external tables for the export, but use direct path for the import.
In particular, Oracle Data Pump can use external tables in the following situations:
Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities
Loading tables with global or domain indexes defined on them, including partitioned object tables
Loading tables with active triggers or clustered tables
Loading and unloading tables with encrypted columns
Loading tables with fine-grained access control enabled for inserts
Loading a table not created by the import operation (the table exists before the import starts)
When Oracle Data Pump uses external tables as the data access mechanism, it uses the
ORACLE_DATAPUMP access driver. However, be aware that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL
CREATE TABLE ... ORGANIZATION EXTERNAL statement.
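If you need to influence which mechanism is chosen rather than letting Data Pump decide, recent Data Pump releases provide an ACCESS_METHOD parameter. A hedged sketch (directory object and dump file names are illustrative):

impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp ACCESS_METHOD=EXTERNAL_TABLE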
1.2.4 Using Conventional Path to Move Data
Where there are conflicting table attributes, Oracle Data Pump uses conventional path to move data.
In situations where there are conflicting table attributes, Oracle Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.
1.2.5 Using Network Link Import to Move Data
When the Import
NETWORK_LINK parameter is used to specify a network link for an import operation, the direct path method is used by default. Review supported database link types.
If direct path cannot be used (for example, because one of the columns is a
BFILE), then SQL is used to move the data using an INSERT SELECT statement. (Before Oracle Database 12c Release 2 (12.2.0.1), the default was to use the INSERT SELECT statement.) The
SELECT clause retrieves the data from the remote database over the
network link. The
INSERT clause uses SQL to insert the data into the
target database. There are no dump files involved.
When the Export
NETWORK_LINK parameter is used to specify a network link for an export operation, the data from the remote database is written to dump files on the target database. (Note that to export from a read-only database, the
NETWORK_LINK parameter is required.)
Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
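As a sketch, assuming a TNS alias source_tns for the remote database and an existing directory object dpump_dir1 (all names are illustrative), an export over a database link might look like this:

SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY password USING 'source_tns';

expdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db DUMPFILE=remote_hr.dmp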
Supported Link Types
The following types of database links are supported for use with Data Pump Export and Import:
Public fixed user
Public connected user
Public shared user (only when used by link owner)
Private shared user (only when used by link owner)
Private fixed user (only when used by link owner)
Unsupported Link Types
The following types of database links are not supported for use with Data Pump Export and Import:
Private connected user
Current user
Note: Parallel export or import of metadata is not supported for network jobs. For conventional jobs, if you need parallel metadata import, then use a dump file instead of the NETWORK_LINK parameter.
See also:
The Export NETWORK_LINK parameter for information about performing exports over a database link
The Import NETWORK_LINK parameter for information about performing imports over a database link
Oracle Database Administrator’s Guide for information about creating database links and the different types of links
1.3 Using Oracle Data Pump With CDBs
Oracle Data Pump can migrate all, or portions of, a database from a non-CDB into a PDB, between PDBs within the same or different CDBs, and from a PDB into a non-CDB.
1.3.1 About Using Data Pump in a Multitenant Environment
In general, using Data Pump with PDBs is identical to using Data Pump with a non-CDB.
A multitenant container database (CDB) is an Oracle database that includes zero, one, or many user-created pluggable databases (PDBs). A PDB is a portable set of schemas, schema objects, and nonschema objects that appear to an Oracle Net client as a non-CDB. A non-CDB is an Oracle database that is not a CDB.
You can use Data Pump to migrate all or some of a database in the following scenarios:
From a non-CDB into a PDB
Between PDBs within the same or different CDBs
From a PDB into a non-CDB
Data Pump does not support any CDB-wide operations. If you are connected to the root or seed database of a CDB, then Data Pump issues the following warning:
ORA-39357: Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.
1.3.2 Using Data Pump to Move Data Into a CDB
After you create an empty PDB, you can use an Oracle Data Pump full-mode export and import operation to move data into the PDB.
You can import data with or without the transportable option. If you use the transportable option on a full mode export or import, then it is referred to as a full transportable export/import.
When the transportable option is used, export and import use both transportable tablespace data movement and conventional data movement; the latter for those tables that reside in non-transportable tablespaces such as
SYSAUX. Using the transportable option can reduce the export time and especially the import time, because table data does not need to be unloaded and reloaded, and index structures in user tablespaces do not need to be re-created.
Note the following requirements when using Data Pump to move data into a CDB:
To administer a multitenant environment, you must have the CDB_DBA role.
Full database exports from Oracle Database 11.2.0.2 and earlier can be imported into Oracle Database 12c (CDB or non-CDB). However, Oracle recommends that you first upgrade the source database to Oracle Database 11g release 2 (11.2.0.3 or later), so that information about registered options and components is included in the export.
When migrating Oracle Database 11g release 2 (11.2.0.3 or later) to a CDB (or to a non-CDB) using either full database export or full transportable database export, you must set the Data Pump Export parameter
VERSION=12 in order to generate a dump file that is ready for import into Oracle Database 12c. If you do not set VERSION=12, then the export file that is generated does not contain complete information about registered database options and components.
Network-based full transportable imports require use of the FULL=YES, TRANSPORTABLE=ALWAYS, and TRANSPORT_DATAFILES=datafile_name parameters. When the source database is Oracle Database 11g release 2 (11.2.0.3) or later, but earlier than Oracle Database 12c Release 1 (12.1), the VERSION=12 parameter is also required.
File-based full transportable imports only require use of the TRANSPORT_DATAFILES=datafile_name parameter. Data Pump Import infers the presence of the TRANSPORTABLE=ALWAYS and FULL=YES parameters.
As of Oracle Database 12c release 2 (12.2), in a multitenant container database (CDB) environment, the default Data Pump directory object, DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB. This unique path is defined whether or not the PATH_PREFIX clause of the CREATE PLUGGABLE DATABASE statement is defined for relative paths.
Starting in Oracle Database 19c, the credential parameter of impdp specifies the name of the credential object that contains the user name and password required to access an object store bucket. You can also specify a default credential using the PDB property named DEFAULT_CREDENTIAL. When you run impdp with the default credential, you prefix the dump file name with DEFAULT_CREDENTIAL: and you do not specify the credential parameter.
Example 1-1 Importing a Table into a PDB
To specify a particular PDB for the export/import operation, supply a connect identifier in the connect string when you start Data Pump. For example, to import data to a PDB named
pdb1, you could enter the following on the Data Pump command line:
impdp hr@pdb1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
Example 1-2 Specifying a Credential When Importing Data
This example assumes that you created a credential named HR_CRED using DBMS_CLOUD.CREATE_CREDENTIAL, as follows:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'HR_CRED',
    username        => 'email@example.com',
    password        => 'password');
END;
/
The following command specifies the credential HR_CRED, and specifies a file stored in an object store. The URL of the file is https://example.com/ostore/dnfs/myt.dmp:
impdp hr@pdb1 \
  table_exists_action=replace \
  credential=HR_CRED \
  parallel=16 \
  dumpfile=https://example.com/ostore/dnfs/myt.dmp
Example 1-3 Importing Data Using a Default Credential
You create a credential named HR_CRED, as follows:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'HR_CRED',
    username        => 'firstname.lastname@example.org',
    password        => 'password');
END;
/
You set the PDB property DEFAULT_CREDENTIAL, as follows:
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.HR_CRED'
The following command specifies the default credential as a prefix to the dump file location:
impdp hr@pdb1 \
  table_exists_action=replace \
  parallel=16 \
  dumpfile=default_credential:https://example.com/ostore/dnfs/myt.dmp
Note that the credential parameter is not specified.
1.3.3 Using Data Pump to Move PDBs Within Or Between CDBs
Data Pump export and import operations on PDBs are identical to those on non-CDBs, with the exception of how common users are handled.
If you have created a common user in a CDB, then a full database or privileged schema export of that user from within any PDB in the CDB results in a standard
CREATE USER C##common name DDL statement being performed upon import. The statement will fail because of the common user prefix
C## on the user name. The following error message will be returned:
ORA-65094: invalid local user or role name
In the PDB being exported, if you have created local objects in that user's schema and you want to import them, then either make sure a common user of the same name already exists in the target CDB instance or use the Data Pump Import
REMAP_SCHEMA parameter on the
impdp command, as follows:
REMAP_SCHEMA=C##common name:local user name
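For example, a hedged sketch that remaps a common user's schema to a local user during import (all names are illustrative, and on some shells the # characters require quoting):

impdp system DIRECTORY=dpump_dir1 DUMPFILE=pdb_exp.dmp REMAP_SCHEMA=C##HR:HR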
1.4 Required Roles for Oracle Data Pump Export and Import Operations
The DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles are required for many Export and Import operations.
When you run Export or Import operations, the operation can require that the user account you are using to run the operations is granted either the
DATAPUMP_EXP_FULL_DATABASE role, or the
DATAPUMP_IMP_FULL_DATABASE role, or both roles. These roles are automatically defined for Oracle Database when you run the standard scripts that are part of database creation. (Note that although the names of these roles contain the word
FULL, these roles actually apply to any privileged operations in any export or import mode, not only full mode.) The
DATAPUMP_EXP_FULL_DATABASE role affects only export operations. The
DATAPUMP_IMP_FULL_DATABASE role affects import operations and operations that use the Import
SQLFILE parameter. These roles allow users performing exports and imports to do the following:
Perform the operation outside the scope of their schema
Monitor jobs that were initiated by another user
Export objects (such as tablespace definitions) and import objects (such as directory definitions) that unprivileged users cannot reference
These are powerful roles. Database administrators should use caution when granting these roles to users.
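For example, a DBA can grant both roles to a designated user (the grantee hr is an illustrative assumption):

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO hr;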
Although the SYS schema does not have either of these roles assigned to it, all security checks performed by Oracle Data Pump that require these roles also grant access to the SYS schema.
If you receive an
ORA-39181: Only Partial Data Exported Due to Fine Grain Access Control error message, then see My Oracle Support Note 422480.1 for information about security during an export of table data with fine-grained access control policies enabled.
Some Oracle roles require authorization. If you need to use these roles with Oracle Data
Pump exports and imports, then you must explicitly enable them by setting the
ENABLE_SECURE_ROLES parameter to YES.
See Oracle Database Security Guide for more information about predefined roles in an Oracle Database installation.
1.5 What Happens During Execution of an Oracle Data Pump Job?
Oracle Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.
1.5.1 Coordination of a Job
A master process is created to coordinate every Data Pump Export and Data Pump Import job.
The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.
1.5.2 Tracking Progress Within a Job
While the data and metadata are being transferred, a master table is used to track the progress within a job.
The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:
For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the
CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.
For all operations, the information in the master table is used to restart a job. (Note that transportable jobs are not restartable.)
The master table is either retained or dropped, depending on the circumstances, as follows:
Upon successful job completion, the master table is dropped. You can override this by setting the Data Pump
KEEP_MASTER=YESparameter for the job.
The master table is automatically retained for jobs that do not complete successfully.
If a job is stopped using the
STOP_JOBinteractive command, then the master table is retained for use in restarting the job.
If a job is killed using the
KILL_JOBinteractive command, then the master table is dropped and the job cannot be restarted.
If a job terminates unexpectedly, then the master table is retained. You can delete it if you do not intend to restart the job.
If a job stops before it starts running (that is, before any database objects have been copied), then the master table is dropped.
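For example, if a job was started with JOB_NAME=hr_export (an illustrative name) and stopped with the interactive STOP_JOB command, a sketch of restarting it is:

expdp hr ATTACH=hr_export
Export> START_JOB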
See JOB_NAME for more information about how job names are formed.
1.5.3 Filtering Data and Metadata During a Job
You can use the EXCLUDE and INCLUDE parameters to filter the types of objects that are exported and imported.
Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as
DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
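For example, a hedged sketch that excludes index and statistics metadata and filters rows with a QUERY clause (all names and the WHERE clause are illustrative; on most shells the QUERY value needs quoting, or can be placed in a parameter file):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp EXCLUDE=INDEX,STATISTICS QUERY=hr.employees:'"WHERE department_id > 50"'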
1.5.4 Transforming Metadata During a Job
When you move data from one database to another, you can perform transformations on the metadata using Oracle Data Pump Import parameters.
It is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects. When you move data, you can perform transformations using the Oracle Data Pump Import parameters REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.
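For example, a sketch that redefines the owner and storage of the imported objects (all names are illustrative):

impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users_test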
1.5.5 Maximizing Job Performance
To increase job performance, you can use the Oracle Data Pump
PARALLEL parameter to run multiple worker processes in parallel.
The PARALLEL parameter enables you to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, database administrators can choose to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, during production hours, you can set
PARALLEL to 2, so that you restrict a particular job to only two degrees of parallelism. During nonproduction hours, you can reset the degree of parallelism to 8. The parallelism setting is enforced by the master process, which allocates workloads to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. For recommendations on setting the degree of parallelism, refer to the Export PARALLEL and Import PARALLEL parameter descriptions.
The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.
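For example, a sketch of starting a job with restricted parallelism and raising it later from the interactive prompt (the job name is an illustrative assumption):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=2 JOB_NAME=hr_export
Export> PARALLEL=8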
1.5.6 Loading and Unloading of Data
The worker processes unload and load metadata and table data. For export, all metadata and data are unloaded in parallel, with the exception of jobs that use transportable tablespace. For import, objects must be created in the correct dependency order.
If there are enough objects of the same type to make use of multiple workers, then the objects will be imported by multiple worker processes. Some metadata objects have interdependencies which require one worker process to create them serially to satisfy those dependencies. Worker processes are created as needed until the number of worker processes equals the value supplied for the
PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job. Worker processes can be started on different nodes in an Oracle Real Application Clusters (Oracle RAC) environment.
The value of
PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.
When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from a pool of available processes in an Oracle RAC environment.
1.6 Monitoring Job Status
The Oracle Data Pump Export and Import client utilities can attach to a job in either logging mode or interactive-command mode.
In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be processed, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).
In interactive-command mode, job status can be displayed on request. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.
You can also have a log file written during the execution of a job. The log file summarizes the progress of the job, lists any errors encountered during execution of the job, and records the completion status of the job.
As an alternative way to determine job status or other information about Oracle Data Pump jobs, you can query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. Refer to Oracle Database Reference for more information.
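For example, the following query shows the owner, name, operation, mode, and state of current Data Pump jobs:

SQL> SELECT owner_name, job_name, operation, job_mode, state
  2  FROM dba_datapump_jobs;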
1.7 Monitoring the Progress of Executing Jobs
To monitor table data transfers, you can use the
V$SESSION_LONGOPS dynamic performance view.
Oracle Data Pump operations that transfer table data (export and import) maintain an entry in the
V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.
Use of the QUERY and REMAP_DATA parameters is not reflected in the determination of estimate values.
The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.
V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
USERNAME: Job owner
OPNAME: Job name
TARGET_DESC: Job operation
SOFAR: Megabytes transferred thus far during the job
TOTALWORK: Estimated number of megabytes in the job
UNITS: Megabytes (MB)
MESSAGE: A formatted status message that uses the following format:
'job_name: operation_name : nnn out of mmm MB done'
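For example, a hedged sketch of a progress query for a job named HR_EXPORT (the job name is an illustrative assumption):

SQL> SELECT opname, target_desc, sofar, totalwork,
  2  ROUND(sofar/NULLIF(totalwork,0)*100,1) AS pct_done
  3  FROM v$session_longops
  4  WHERE opname = 'HR_EXPORT' AND sofar <> totalwork;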
1.8 File Allocation with Oracle Data Pump
You can modify how Oracle Data Pump allocates and handles files by using commands in interactive mode.
1.8.1 Understanding File Allocation in Oracle Data Pump
Understanding how Oracle Data Pump allocates and handles files helps you to use Export and Import to their fullest advantage.
Oracle Data Pump jobs manage the following types of files:
Dump files, to contain the data and metadata that is being moved.
Log files, to record the messages associated with an operation.
SQL files, to record the output of a SQLFILE operation. A SQLFILE operation is started using the Oracle Data Pump Import SQLFILE parameter. This operation results in all of the SQL DDL that Import would execute, based on other parameters, being written to a SQL file.
Files specified by the DATA_FILES parameter during a transportable import.
If your Oracle Data Pump job generates errors related to Network File Storage (NFS), then consult the installation guide for your platform to determine the correct NFS mount settings.
1.8.2 Specifying Files and Adding Additional Dump Files
For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation.
If you discover that space is running low during an export operation, then you can add additional dump files by using the Oracle Data Pump Export
ADD_FILE command in interactive mode.
For import operations, all dump files must be specified at the time the job is defined.
Log files and SQL files overwrite previously existing files. For dump files, you can use the Export
REUSE_DUMPFILES parameter to specify whether to overwrite a preexisting dump file.
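For example, from the interactive prompt (reached by pressing Ctrl+C in logging mode, or by attaching to the job), you might add dump files in another directory object (the directory object and template names are illustrative):

Export> ADD_FILE=dpump_dir2:hr_extra%U.dmp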
1.8.3 Default Locations for Dump, Log, and SQL Files
Review these topics to understand the Oracle Data Pump default file locations, and to understand how these locations are affected when you are using Oracle RAC, Oracle Automatic Storage Management, and multitenant architecture.
1.8.3.1 Understanding Dump, Log, and SQL File Default Locations
Oracle Data Pump is server-based, rather than client-based. Dump files, log files, and SQL files are accessed relative to server-based directory paths.
Oracle Data Pump requires that directory paths are specified as directory objects. A directory object maps a name to a directory path on the file system. As a database administrator, you must ensure that only approved users are allowed access to the directory object associated with the directory path.
The following example shows a SQL statement that creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles:
SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';
The reason that a directory object is required is to ensure data security and integrity. For example:
If you are allowed to specify a directory path location for an input file, then it is possible that you could read data that the server has access to, but to which you should not have access.
If you are allowed to specify a directory path location for an output file, then it is possible that you could overwrite a file that normally you do not have privileges to delete.
On Unix, Linux, and Windows operating systems, a default directory object,
DATA_PUMP_DIR, is created at database creation, or whenever the database dictionary is upgraded. By default, this directory object is available only to privileged users. (The user
SYSTEM has read and write access to the
DATA_PUMP_DIR directory, by default.) Oracle can change the definition of the
DATA_PUMP_DIR directory, either during Oracle Database upgrades, or when patches are applied.
If you are not a privileged user, then before you can run Oracle Data Pump Export or Import, a directory object must be created by a database administrator (DBA), or by any user with the
CREATE ANY DIRECTORY privilege.
After a directory is created, the user creating the directory object must grant READ and WRITE permissions on the directory to other users. For example, to allow Oracle Database to read and write files on behalf of user
hr in the directory named by
dpump_dir1, the DBA must run the following command:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
WRITE permission to a directory object only means that Oracle Database can read or write files in the corresponding directory on your behalf. Outside of Oracle Database, you are not given direct access to those files, unless you have the appropriate operating system privileges. Similarly, Oracle Database requires permission from the operating system to read and write files in the directories.
Oracle Data Pump Export and Import use the following order of precedence to determine a file's location:
If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the file name by a colon.)
If a directory object is not specified as part of the file specification, then the directory object named by the
DIRECTORYparameter is used.
If a directory object is not specified as part of the file specification, and if no directory object is named by the
DIRECTORYparameter, then the value of the environment variable,
DATA_PUMP_DIR, is used. This environment variable is defined by using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is
DUMP_FILES1, and it is located at /usr/apps/dumpfiles1:
SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
After this statement is run, a user on a Unix-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp, and the log file export.log, are written to /usr/apps/dumpfiles1:
% setenv DATA_PUMP_DIR DUMP_FILES1
% expdp hr TABLES=employees DUMPFILE=employees.dmp
If none of the previous three conditions yields a directory object, and you are a privileged user, then Oracle Data Pump attempts to use the value of the default server-based directory object,
DATA_PUMP_DIR. This directory object is automatically created, either at database creation, or when the database dictionary is upgraded. To see the path definition for
DATA_PUMP_DIR, you can use the following SQL query:
SQL> SELECT directory_name, directory_path FROM dba_directories
  2  WHERE directory_name='DATA_PUMP_DIR';
If you are not a privileged user, then access to the
DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.
Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.
1.8.3.2 Understanding How to Use Oracle Data Pump with Oracle RAC
Using Oracle Data Pump in an Oracle Real Application Clusters (Oracle RAC) environment requires you to perform a few checks to make sure that you are making cluster member nodes available.
To use Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.
The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Data Pump and/or external tables processes can run.
The default Data Pump behavior is that worker processes can run on any instance in an Oracle RAC configuration. Therefore, workers on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the
CLUSTER=NO parameter to constrain all worker processes to the instance where the Data Pump job was started.
Under certain circumstances, Data Pump uses parallel query slaves to load or unload data. In an Oracle RAC environment, Data Pump does not control where these slaves run. Therefore, these slaves can run on other cluster member nodes in the cluster, regardless of which instance is specified for
SERVICE_NAME for the Data Pump job. Controls for parallel query operations are independent of Data Pump. When parallel query slaves run on other instances as part of a Data Pump job, they also require access to the physical storage of the dump file set.
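For example, a sketch that keeps all worker processes on the starting instance while still running in parallel (all names are illustrative):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr%U.dmp PARALLEL=4 CLUSTER=NO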
1.8.3.3 Using Directory Objects When Oracle Automatic Storage Management Is Enabled
If you use Oracle Data Pump Export or Import with Oracle Automatic Storage Management (Oracle ASM) enabled, then define the directory object used for the dump file.
You must define the directory object used for the dump file so that the Oracle ASM disk group name is used, instead of an operating system directory path.
For the log file, use a separate directory object that points to an operating system directory path.
For example, you can create a directory object for the Oracle ASM dump file using this procedure.
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';
After you create the directory object, you then create a separate directory object for the log file:
SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';
To enable user
hr to have access to these directory objects, you assign the necessary privileges for that user:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;
Finally, you can use the following Data Pump Export command:
> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log
Before the command executes, you are prompted for the password.
If you simply want to copy Data Pump dump files between ASM and disk directories, you can use the
DBMS_FILE_TRANSFER PL/SQL package.
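For example, a hedged sketch that copies a dump file from the ASM-based directory object to the disk-based one using DBMS_FILE_TRANSFER (the directory object and file names follow the preceding example and are illustrative):

BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'DPUMP_DIR',  -- ASM disk group location
    source_file_name             => 'hr.dmp',
    destination_directory_object => 'DPUMP_LOG',  -- operating system directory
    destination_file_name        => 'hr.dmp');
END;
/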
1.8.3.4 The DATA_PUMP_DIR Directory Object and Pluggable Databases
The default Oracle Data Pump directory object,
DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB.
As of Oracle Database 12c Release 2 (12.2), in a multitenant container database (CDB) environment, the default Oracle Data Pump directory object,
DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB, whether or not the
PATH_PREFIX clause of the
CREATE PLUGGABLE DATABASE statement is defined for relative paths.
1.8.4 Using Substitution Variables
If you want to specify multiple dump files during export operations, then use the
DUMPFILE parameter with a substitution variable in the file name.
When you use substitution variables with file names, instead of or in addition to listing specific file names, the file names that contain a substitution variable are called dump file templates.
Note: In the examples that follow, the substitution variable %U is used to explain how Oracle Data Pump uses substitution variables. You can view other available substitution variables under the Import or Export DUMPFILE parameter reference topics.
When you use dump file templates, new dump files are created as they are needed. For example, if you are using the substitution variable
%U, then new dump files are created as needed, beginning with 01 for %U, and then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the
PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the
FILESIZE parameter, then it is closed, and a new dump file (with a new generated name) is created to take its place.
If multiple dump file templates are provided, then they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U are all specified for a job having a parallelism of 6, then the initial dump files created are expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
For import and
SQLFILE operations, if the dump file specifications expa%U, expb%U, and expc%U are specified, then the operation begins by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files. For this reason, until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new file names (for example: expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, then the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if
expb01.dmp and expb02.dmp are found, but
expb03.dmp is not found, then no more files are searched for using the
expb%U specification. After the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.
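For example, a sketch of a parallel export that uses a dump file template and caps each file at 2 GB (all values are illustrative):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=exp%U.dmp FILESIZE=2G PARALLEL=4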
1.9 Exporting and Importing Between Different Oracle Database Releases
You can use Oracle Data Pump to migrate all or any portion of an Oracle Database between different releases of the database software.
Typically, you use the Oracle Data Pump Export
VERSION parameter to migrate between database releases. Using
VERSION generates an Oracle Data Pump dump file set that is compatible with the specified version.
The default value for the VERSION parameter is COMPATIBLE. This value indicates that exported database object definitions are compatible with the release specified for the
COMPATIBLE initialization parameter.
In an upgrade situation, when the target release of an Oracle Data Pump-based migration is higher than the source, you typically do not have to specify the
VERSION parameter. When the target release is higher than the source, all objects in the source database are compatible with the higher target release. However, an exception is when an entire Oracle Database 11g (release 11.2.0.3 or higher) is exported in preparation for importing into Oracle Database 12c Release 1 (12.1.0.1) or later. In this case, to include a complete set of Oracle Database internal component metadata, explicitly specify VERSION=12 in conjunction with FULL=YES.
In a downgrade situation, when the target release of an Oracle Data Pump-based migration is lower than the source, set the
VERSION parameter value to be the same version as the target. An exception is when the target release version is the same as the value of the
COMPATIBLE initialization parameter on the source system. In that case, you do not need to specify
VERSION. In general, however, Oracle Data Pump import cannot read dump file sets created by an Oracle Database release that is newer than the current release, unless you explicitly specify the VERSION parameter.
Keep the following information in mind when you are exporting and importing between different database releases:
On an Oracle Data Pump export, if you specify a database version that is older than the current database version, then a dump file set is created that you can import into that older version of the database. For example, if you are running Oracle Database 12c Release 1 (12.1.0.2), and you specify VERSION=11.2 on an export, then the dump file set that is created can be imported into an Oracle Database 11g (Release 11.2) database. (See the example after this list.)
Database privileges that are valid only in Oracle Database 12c Release 1 (12.1.0.2) and later (for example, the READ privilege on tables, views, materialized views, and synonyms) cannot be imported into Oracle Database 12c Release 1 (12.1.0.1) or earlier. If an attempt is made to do so, then Import reports it as an error, and continues the import operation.
When you export to a release earlier than Oracle Database 12c Release 2 (12.2), Oracle Data Pump does not filter out object names longer than 30 bytes. The objects are exported. At import time, if you attempt to create an object with a name longer than 30 bytes, then an error is returned.
If you specify an Oracle Database release that is older than the current Oracle Database release, then certain features and data types can be unavailable. For example, specifying
VERSION=10.1 causes an error if data compression is also specified for the job, because compression was not supported in Oracle Database 10g release 1 (10.1). Another example: if a user-defined type or Oracle-supplied type in the source Oracle Database release is a later version than the type in the target Oracle Database release, then that type is not loaded, because it does not match any version of the type in the target database.
Oracle Data Pump Import can always read Oracle Data Pump dump file sets created by older Oracle Database releases.
When operating across a network link, Oracle Data Pump requires that the source and target Oracle Database releases differ by no more than two versions.
For example, if one database is Oracle Database 12c, then the other Oracle Database release must be 12c, 11g, or 10g. Oracle Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific Oracle Database release numbers (for example, 12.2, 12.1, 11.1, 11.2, 10.1, or 10.2).
Importing Oracle Database 11g dump files that contain table statistics into Oracle Database 12c Release 1 (12.1) or later Oracle Database releases can result in an Oracle ORA-39346 error. This error occurs because Oracle Database 11g dump files contain table statistics as metadata. Oracle Database 12c Release 1 (12.1) and later releases require table statistics to be presented as table data. The workaround is to ignore the error during the import operation. After the import operation completes, regather table statistics.
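For example, a sketch of the downgrade export described in the first item of this list (the schema and file names are illustrative):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_v112.dmp VERSION=11.2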
1.10 SecureFiles LOB Considerations
Exports of SecureFiles large objects (LOBs) are affected by the content type, VERSION parameter, and other variables.
LOBs are a set of datatypes that are designed to hold large amounts of data. When
you use Oracle Data Pump Export to export SecureFiles LOBs, the export behavior depends
on several things, including the Export
VERSION parameter value,
whether a content type (
ContentType) is present, and whether the LOB is
archived and data is cached.
The following scenarios cover different combinations of these variables:
If a table contains SecureFiles LOBs with a ContentType, and the Export
VERSION parameter is set to a value earlier than 11.2.0.0.0, then the ContentType is not exported.
If a table contains SecureFiles LOBs with a ContentType, and the Export
VERSION parameter is set to a value of 11.2.0.0.0 or later, then the ContentType is exported and restored on a subsequent import.
If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export
VERSION parameter is set to a value earlier than 11.2.0.0.0, then the SecureFiles LOB data is exported and the archive metadata is dropped. In this scenario, if VERSION is set to 11.1 or later, then the SecureFiles LOB becomes a plain SecureFiles LOB. But if VERSION is set to a value earlier than 11.1, then the SecureFiles LOB becomes a BasicFiles LOB.
If a table contains a SecureFiles LOB that is currently archived, but the data is not cached, and the Export
VERSION parameter is set to a value earlier than 11.2.0.0.0, then an ORA-45001 error is returned.
If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export
VERSION parameter is set to a value of 11.2.0.0.0 or later, then both the cached data and the archive metadata are exported.
Refer to Oracle Database SecureFiles and Large Objects Developer's Guide for more information about SecureFiles LOBs.
1.11 Oracle Data Pump Exit Codes
You can review Oracle Data Pump export and import operation results in a log file, and in a process exit code.
Oracle Data Pump provides the results of export and import operations immediately upon completion. In addition to recording the results in a log file, Oracle Data Pump can also report the outcome in a process exit code. Use the Oracle Data Pump exit code to check the outcome of an Oracle Data Pump job from the command line or a script:
Table 1-1 Data Pump Exit Codes

Exit Code: EX_SUCC (0)
Meaning: The export or import job completed successfully. No errors are displayed to the output device or recorded in the log file, if there is one.

Exit Code: EX_SUCC_ERR (5)
Meaning: The export or import job completed successfully, but there were errors encountered during the job. The errors are displayed to the output device and recorded in the log file, if there is one.

Exit Code: EX_FAIL (1)
Meaning: The export or import job encountered one or more fatal errors, such as errors in the command line or command syntax, Oracle Database errors from which the job cannot recover, or operating system errors.

A fatal error is displayed to the output device but may not be recorded in the log file. Whether it is recorded in the log file can depend on several factors, such as whether a log file was requested and whether the error occurred before or after the log file was opened.
1.12 Auditing Data Pump Jobs
To monitor and record specific user database actions, perform auditing on Data Pump jobs with unified auditing.
You can perform auditing on Data Pump jobs to monitor and record specific user database actions. Data Pump uses unified auditing, in which all audit records are centralized in one place.
To set up unified auditing, you create a unified audit policy or alter an existing policy. An audit policy is a named group of audit settings that enable you to audit a particular aspect of user behavior in the database. To create the policy, use the SQL
CREATE AUDIT POLICY statement.
After creating the audit policy, use the AUDIT and NOAUDIT SQL statements to, respectively, enable and disable the policy.
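For example, a hedged sketch that creates a unified audit policy covering all Data Pump operations and enables it for user hr (the policy name and grantee are illustrative assumptions):

SQL> CREATE AUDIT POLICY audit_dp_all_pol ACTIONS COMPONENT=DATAPUMP ALL;
SQL> AUDIT POLICY audit_dp_all_pol BY hr;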
1.13 Encrypted Data Security Warnings for Oracle Data Pump Operations
Oracle Data Pump warns you when encrypted data is exported or imported as unencrypted data.
During Oracle Data Pump export or import operations, you receive a warning when Oracle Data Pump encounters encrypted data in the text and boolean parameters that were specified when the export or import job was started. This warning is also written to the audit record. You can view any cases of encrypted data warnings encountered during the export or import operation by checking the DP_DP_WARNINGS1 column in the unified audit trail. Obtain the audit information by running the following SQL statement:
SELECT DP_DP_WARNINGS1 FROM UNIFIED_AUDIT_TRAIL WHERE AUDIT_TYPE = 'DATAPUMP';
1.14 How Does Data Pump Handle Timestamp Data?
This section describes factors that can affect successful completion of export and import jobs that involve the timestamp data types
TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE.
The information in this section applies only to Oracle Data Pump running on Oracle Database 12c and later.
1.14.1 TIMESTAMP WITH TIME ZONE Restrictions
Export and import jobs that have
TIMESTAMP WITH TIME ZONE data are restricted.
1.14.1.1 Understanding TIMESTAMP WITH TIME ZONE Restrictions
Carrying out export and import jobs that have
TIMESTAMP WITH TIME ZONE data requires understanding information about your time zone file data and Oracle Database release.
Successful job completion can depend on the following factors:
The version of the Oracle Database time zone files on the source and target databases.
The export/import mode and whether the Data Pump version being used supports
TIMESTAMP WITH TIME ZONE data. (Data Pump 11.2.0.1 and later provide support for TIMESTAMP WITH TIME ZONE data.)
To identify the time zone file version of a database, you can execute the following SQL statement:
SQL> SELECT VERSION FROM V$TIMEZONE_FILE;
1.14.1.2 Data Pump Support for TIMESTAMP WITH TIME ZONE Data
Data Pump supports
TIMESTAMP WITH TIME ZONE data during different export and import modes like non-transportable mode, transportable tablespace and transportable table mode, and full transportable mode.
This section describes Data Pump support for
TIMESTAMP WITH TIME ZONE data during different export and import modes when versions of the Oracle Database time zone file are different on the source and target databases.
Non-Transportable Modes
If the dump file is created with a Data Pump version that supports TIMESTAMP WITH TIME ZONE data (11.2.0.1 or later), then the time zone file version of the export system is recorded in the dump file. Data Pump uses that information to determine whether data conversion is necessary. If the target database knows about the source time zone version, but is actually using a later version, then the data is converted to the later version. TIMESTAMP WITH TIME ZONE data cannot be downgraded, so if you attempt to import to a target that is using an earlier version of the time zone file than the source used, then the import fails.
If the dump file is created with a Data Pump version prior to Oracle Database 11g release 2 (11.2.0.1), then TIMESTAMP WITH TIME ZONE data is not supported, so no conversion is done and corruption may occur.
Transportable Tablespace and Transportable Table Modes
In transportable tablespace and transportable table modes, if the source and target have different time zone file versions, then tables with
TIMESTAMP WITH TIME ZONE columns are not created. A warning is displayed at the beginning of the job showing the source and target database time zone file versions. A message is also displayed for each table not created. This is true even if the Data Pump version used to create the dump file supports TIMESTAMP WITH TIME ZONE data. (Release 11.2.0.1 and later support TIMESTAMP WITH TIME ZONE data.)
If the source is earlier than Oracle Database 11g release 2 (11.2.0.1), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TIMESTAMP WITH TIME ZONE columns.
Full Transportable Mode
Full transportable exports and imports are supported when the source database is at least Oracle Database 11g release 2 (11.2.0.3) and the target is Oracle Database 12c release 1 (12.1) or later.
Data Pump 11.2.0.1 and later provide support for
TIMESTAMP WITH TIME ZONE data. Therefore, in full transportable operations, tables with
TIMESTAMP WITH TIME ZONE columns are created. If the source and target database have different time zone file versions, then
TIMESTAMP WITH TIME ZONE columns from the source are converted to the time zone file version of the target.
See also:
Oracle Database Administrator's Guide for more information about transportable tablespaces
Using the Transportable Option During Full Mode Exports for more information about full transportable exports
Using the Transportable Option During Full Mode Imports for more information about full transportable imports
1.14.1.3 Time Zone File Versions on the Source and Target
Successful job completion can depend on whether the source and target time zone file versions match.
If the Oracle Database time zone file version is the same on the source and target databases, then conversion of
TIMESTAMP WITH TIME ZONE data is not necessary. The export/import job should complete successfully.
The exception to this is a transportable tablespace or transportable table export performed using a Data Pump release earlier than 11.2.0.1. In that case, tables in the dump file that have TIMESTAMP WITH TIME ZONE columns are not created on import, even though the time zone file version is the same on the source and target.
If the source time zone file version is not available on the target database, then the job fails. The version of the time zone file on the source may not be available on the target because the source may have had its time zone file updated to a later version but the target has not. For example, if the export is done on Oracle Database 11g release 2 (11.2.0.2) with a time zone file version of 17, and the import is done on 11.2.0.1 with only a time zone file of 16 available, then the job fails.
1.14.2 TIMESTAMP WITH LOCAL TIME ZONE Restrictions
Moving tables using a transportable mode is restricted.
If a table is moved using a transportable mode (transportable table, transportable tablespace, or full transportable), and the following conditions exist, then a warning is issued and the table is not created:
The source and target databases have different database time zones.
The table contains TIMESTAMP WITH LOCAL TIME ZONE data types.
To successfully move a table that was not created because of these conditions, use a non-transportable export and import mode.
1.15 Character Set and Globalization Support Considerations
Globalization support behavior of Data Pump Export and Import.
These sections describe the globalization support behavior of Data Pump Export and Import with respect to character set conversion of user data and data definition language (DDL).
1.15.1 Data Definition Language (DDL)
The Export utility writes dump files using the database character set of the export system.
When the dump file is imported, a character set conversion is required for DDL only if the database character set of the import system is different from the database character set of the export system.
To minimize data loss due to character set conversions, ensure that the import database character set is a superset of the export database character set.
1.15.2 Single-Byte Character Sets and Export and Import
Ensure that the export database and the import database use the same character set.
If the system on which the import occurs uses a 7-bit character set, and you import an 8-bit character set dump file, then some 8-bit characters may be converted to 7-bit equivalents. An indication that this has happened is when accented characters lose the accent mark.
To avoid this unwanted conversion, ensure that the export database and the import database use the same character set.
1.15.3 Multibyte Character Sets and Export and Import
During character set conversion, any characters in the export file that have no equivalent in the import database character set are replaced with a default character. The import database character set defines the default character.
If the import system has to use replacement characters while converting DDL, then a warning message is displayed and the system attempts to load the converted DDL.
If the import system has to use replacement characters while converting user data, then the default behavior is to load the converted data. However, it is possible to instruct the import system to reject rows of user data that were converted using replacement characters. See the Import DATA_OPTIONS parameter for details.
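For example, a hedged sketch using the Import DATA_OPTIONS value that rejects such rows (the directory object and dump file names are illustrative):

impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp DATA_OPTIONS=REJECT_ROWS_WITH_REPL_CHAR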
To guarantee 100% conversion, the import database character set must be a superset (or equivalent) of the character set used to generate the export file.
When the database character set of the export system differs from that of the import system, the import system displays informational messages at the start of the job that show what the database character set is.
When the import database character set is not a superset of the character set used to generate the export file, the import system displays a warning that possible data loss may occur due to character set conversions.
1.16 Oracle Data Pump Behavior with Data-Bound Collation
Oracle Data Pump supports data-bound collation (DBC).
Data Pump Export always includes all available collation metadata in the created dump file, including:
Current default collations of exported users' schemas
Current default collations of exported tables, views, materialized views and PL/SQL units (including user-defined types)
Declared collations of all table and cluster character data type columns
When importing a dump file exported from an Oracle Database 12c Release 2 (12.2) database, Data Pump Import's behavior depends on the effective value of the Data Pump
VERSION parameter at the time of import and on whether the data-bound collation (DBC) feature is enabled in the target database. The effective value of the
VERSION parameter is determined by how it is specified. The parameter may be specified as follows:
VERSION=n, which means the effective value is the specific version number n (for example, VERSION=12.2)
VERSION=LATEST, which means the effective value is the currently running database version
VERSION=COMPATIBLE, which means the effective value is the same as the value of the database initialization parameter COMPATIBLE. This is also true if no value is specified for the VERSION parameter.
For the DBC feature to be enabled in a database, the initialization parameter
COMPATIBLE must be set to 12.2 or higher, and the initialization parameter MAX_STRING_SIZE must be set to EXTENDED.
If the effective value of the Data Pump Import
VERSION parameter is 12.2 and DBC is enabled in the target database, then Data Pump Import generates DDL statements with collation clauses referencing collation metadata from the dump file. Exported objects are created with the original collation metadata that they had in the source database.
No collation syntax is generated if DBC is disabled or if the Data Pump Import
VERSION parameter is set to a value lower than 12.2.