Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.
Oracle Data Pump is made up of three distinct parts:
The command-line clients, expdp and impdp
The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
The DBMS_METADATA PL/SQL package (also known as the Metadata API)
The Data Pump clients, expdp and impdp, start the Data Pump Export utility and Data Pump Import utility, respectively.
The expdp and impdp clients use the procedures provided in the
DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.
When metadata is moved, Data Pump uses functionality provided by the
DBMS_METADATA PL/SQL package. The
DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.
The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
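For example, the following anonymous PL/SQL block is a minimal sketch of a schema-mode export driven directly through the DBMS_DATAPUMP API. The job name, dump file name, directory object (DPUMP_DIR1), and HR schema are illustrative placeholders; the directory object must already exist and the user must have access to it.
DECLARE
  h     NUMBER;
  state VARCHAR2(30);
BEGIN
  -- Define a schema-mode export job (the job name is arbitrary)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'HR_EXPORT_API');
  -- Write the dump file through an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp', directory => 'DPUMP_DIR1');
  -- Restrict the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  -- Block until the job completes, then report its final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || state);
END;
/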
Note:All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available. See "Default Locations for Dump, Log, and SQL Files" for more information about directory objects.
Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_DATAPUMP package
Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_METADATA package
Oracle Database SecureFiles and Large Objects Developer's Guide for information about guidelines to consider when creating directory objects
For information about how Data Pump moves data in and out of databases, see the following sections:
Note:Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.
The following sections briefly explain how and when each of these data movement mechanisms is used.
The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. This method is used in the following situations:
The TRANSPORT_TABLESPACES parameter is used to specify a transportable tablespace export. Only metadata for the specified tablespaces is exported.
The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the
TABLES parameter) or a full mode export (specified with the
FULL parameter) or a full mode network import (specified with the FULL and NETWORK_LINK parameters).
When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, both the data files and the export dump file must be loaded.
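For example, a transportable tablespace export of a single tablespace might look like the following sketch. The tablespace name tbs_1, directory object, and file names are placeholders, and the tablespace must be placed in read-only mode before the export; afterward, both the dump file and the tablespace data files are copied to the target system.
> expdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp LOGFILE=tts.log TRANSPORT_TABLESPACES=tbs_1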
Note:During transportable imports tablespaces are temporarily made read/write and then set back to read-only. This is new behavior introduced as of Oracle Database 12c Release 1 (12.1) to improve performance. However, you should be aware that this behavior also causes the SCNs of the import job's data files to change, which can cause issues during future transportable imports of those files.
For example, if a transportable tablespace import fails at any point after the tablespaces have been made read/write (even if they are now read-only again), then the data files become corrupt. They cannot be recovered.
Because transportable jobs are not restartable, the failed job must be restarted from the beginning. The corrupt data files must be deleted and fresh versions must be copied to the target destination.
When transportable jobs are performed, it is best practice to keep a copy of the data files on the source system until the import job has successfully completed on the target system. If the import job should fail for some reason, you will still have uncorrupted copies of the data files.
When data is moved by using data file copying, there are some limitations regarding character set compatibility between the source and target databases. See Oracle Database Administrator's Guide for details.
If the source platform and the target platform are of different endianness, then you must convert the data being transported so that it is in the format of the target platform. You can use the
DBMS_FILE_TRANSFER PL/SQL package or the
RMAN CONVERT command to convert the data.
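For example, a cross-endian conversion performed on the source system with RMAN might look like the following sketch. The tablespace name, target platform name, and output location are placeholders; valid platform names can be found in the V$TRANSPORTABLE_PLATFORM view.
RMAN> CONVERT TABLESPACE tbs_1
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/tmp/transport/%U';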
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data unless the structure of a table does not allow it. For example, if a table contains a column of type
BFILE, then direct path cannot be used to load that table and the external tables mechanism is used instead.
The following sections describe situations in which direct path cannot be used for loading and unloading:
A domain index that is not a
CONTEXT type index exists for a LOB column.
A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
A table is in a cluster.
There is an active trigger on a preexisting table.
Fine-grained access control is enabled in insert mode on a preexisting table.
A table contains
BFILE columns or columns of opaque types.
A referential integrity constraint is present on a preexisting table.
A table contains
VARRAY columns with an embedded opaque type.
The table has encrypted columns.
The table into which data is being imported is a preexisting table and at least one of the following conditions exists:
There is an active trigger
The table is partitioned
Fine-grained access control is in insert mode
A referential integrity constraint exists
A unique index exists
Supplemental logging is enabled and the table has at least one LOB column.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
A table contains a column (including a
VARRAY column) with a
TIMESTAMP WITH TIME ZONE data type and the version of the time zone data file is different between the export and import systems.
If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to unload the data:
Fine-grained access control for
SELECT is enabled.
The table is a queue table.
The table contains one or more columns of type
BFILE or opaque, or an object type containing opaque columns.
The table contains encrypted columns.
The table contains a column of an evolved type that needs upgrading.
The table contains a column of type
LONG RAW that is not last.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
Prior to the unload operation, the table was altered to contain a column that is NOT NULL and also has a default value specified.
When data file copying is not selected and the data cannot be moved using direct path, the external tables mechanism is used. The external tables mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the
APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.
In particular, Data Pump uses external tables in the following situations:
Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities
Loading tables with global or domain indexes defined on them, including partitioned object tables
Loading tables with active triggers or clustered tables
Loading and unloading tables with encrypted columns
Loading tables with fine-grained access control enabled for inserts
Loading tables that are partitioned differently at load time and unload time
Loading a table not created by the import operation (the table exists before the import starts)
Note:When Data Pump uses external tables as the data access mechanism, it uses the
ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL
CREATE TABLE ... ORGANIZATION EXTERNAL statement.
In situations where there are conflicting table attributes, Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.
When the Import
NETWORK_LINK parameter is used to specify a network link for an import operation, SQL is used to move the data using an
INSERT SELECT statement. The
SELECT clause retrieves the data from the remote database over the network link. The
INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.
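For example, a table-mode network import over a previously created database link might look like the following; the link name source_db_link is a placeholder for an existing link, and a directory object is still needed for the log file even though no dump file is written.
> impdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link TABLES=employees LOGFILE=net_imp.log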
When the Export
NETWORK_LINK parameter is used to specify a network link for an export operation, the data from the remote database is written to dump files on the target database. (Note that to export from a read-only database, the
NETWORK_LINK parameter is required.)
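For example, the following sketch unloads the hr schema from a remote database across a database link and writes the dump file on the database to which the client is connected; the link name and file names are placeholders.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp LOGFILE=remote_hr.log NETWORK_LINK=source_db_link SCHEMAS=hr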
Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
The following types of database links are supported for use with Data Pump Export and Import:
Public (both public and shared)
The database link type, Current User, is not supported for use with Data Pump Export or Import.
The Export NETWORK_LINK parameter for information about performing exports over a database link
The Import NETWORK_LINK parameter for information about performing imports over a database link
Oracle Database Administrator's Guide for information about creating database links and the different types of links
A multitenant container database (CDB) is an Oracle database that includes zero, one, or many user-created pluggable databases (PDBs). A PDB is a portable set of schemas, schema objects, and nonschema objects that appear to an Oracle Net client as a non-CDB. A non-CDB is an Oracle database that is not a CDB.
You can use Data Pump to migrate all, or portions of, a database from a non-CDB into a PDB, between PDBs within the same or different CDBs, and from a PDB into a non-CDB. In general, using Data Pump with PDBs is identical to using Data Pump with a non-CDB.
Note:In Oracle Database 12c Release 1 (12.1), Data Pump does not support any CDB-wide operations. Data Pump issues the following warning if you are connected to the root or seed database of a CDB:
ORA-39357: WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.
After you create an empty PDB in the CDB, you can use an Oracle Data Pump full-mode export and import operation to move data into the PDB. The job can be performed with or without the transportable option. If you use the transportable option on a full mode export or import, it is referred to as a full transportable export/import.
When the transportable option is used, export and import use both transportable tablespace data movement and conventional data movement; the latter for those tables that reside in non-transportable tablespaces such as
SYSAUX. Using the transportable option can reduce the export time and especially, the import time, because table data does not need to be unloaded and reloaded and index structures in user tablespaces do not need to be re-created.
If you want to specify a particular PDB for the export/import operation, then on the Data Pump command line, you can supply a connect identifier in the connect string when you start Data Pump. For example, to import data to a PDB named
pdb1, you could enter the following on the Data Pump command line:
impdp hr@pdb1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
Be aware of the following requirements when using Data Pump to move data into a CDB:
To administer a multitenant environment, you must have the CDB_DBA role.
Full database exports from Oracle Database 11.2.0.2 and earlier may be imported into Oracle Database 12c (CDB or non-CDB). However, Oracle recommends the source database first be upgraded to Oracle Database 11g release 2 (11.2.0.3 or later) so that information about registered options and components is included in the export.
When migrating Oracle Database 11g release 2 (11.2.0.3 or later) to a CDB (or to a non-CDB) using either full database export or full transportable database export, you must set the Data Pump Export parameter
VERSION=12 in order to generate a dump file that is ready for import into Oracle Database 12c. If you do not set
VERSION=12, then the export file that is generated will not contain complete information about registered database options and components.
Network-based full transportable imports require use of the FULL=YES, TRANSPORTABLE=ALWAYS, and TRANSPORT_DATAFILES=datafile_name parameters. When the source database is Oracle Database 11g release 2 (11.2.0.3) or later, but earlier than Oracle Database 12c Release 1 (12.1), the
VERSION=12 parameter is also required.
File-based full transportable imports only require use of the TRANSPORT_DATAFILES=datafile_name parameter. Data Pump Import infers the presence of the TRANSPORTABLE=ALWAYS and FULL=YES parameters.
The default Data Pump directory object,
DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are exporting or importing.
Data Pump export and import operations on PDBs are identical to those on non-CDBs with the exception of how common users are handled. If you have created a common user in a CDB, then a full database or privileged schema export of that user from within any PDB in the CDB results in a standard
CREATE USER C##common name DDL statement being performed upon import. The statement will fail because of the common user prefix
C## on the user name. The following error message will be returned:
ORA-65094: invalid local user or role name
In the PDB being exported, if you have created local objects in that user's schema and you want to import them, then either make sure a common user of the same name already exists in the target CDB instance or use the Data Pump Import
REMAP_SCHEMA parameter on the
impdp command, as follows:
REMAP_SCHEMA=C##common name:local user name
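For example, to remap a common user's objects to an existing local schema during import, a command along the following lines could be used; the dump file, directory object, and schema names are placeholders.
> impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_common.dmp REMAP_SCHEMA=C##HR_ADMIN:HR_LOCAL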
Oracle Database Concepts for more information about CDBs
Oracle Database Administrator's Guide for information about using Data Pump to move a non-CDB into a CDB
Oracle Database Security Guide for more information about privileges and roles in CDBs and PDBs
"Network Considerations" for more information about supplying a connect identifier on the command line
Many Data Pump Export and Import operations require the user to have the
DATAPUMP_EXP_FULL_DATABASE role and/or the
DATAPUMP_IMP_FULL_DATABASE role. These roles are automatically defined for Oracle databases when you run the standard scripts that are part of database creation. (Note that although the names of these roles contain the word FULL, these roles actually apply to any privileged operations in any export or import mode, not only Full mode.)
The DATAPUMP_EXP_FULL_DATABASE role affects only export operations. The
DATAPUMP_IMP_FULL_DATABASE role affects import operations and operations that use the Import
SQLFILE parameter. These roles allow users performing exports and imports to do the following:
Perform the operation outside the scope of their schema
Monitor jobs that were initiated by another user
Export objects (such as tablespace definitions) and import objects (such as directory definitions) that unprivileged users cannot reference
These are powerful roles. Database administrators should use caution when granting these roles to users.
Although the SYS schema does not have either of these roles assigned to it, all security checks performed by Data Pump that require these roles also grant access to the SYS schema.
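For example, a DBA could grant both roles to a trusted user with a statement such as the following (hr is an illustrative user):
SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO hr;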
See Also:Oracle Database Security Guide for more information about predefined roles in an Oracle Database installation
Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.
For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.
While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:
For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the
CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.
For all operations, the information in the master table is used to restart a job. (Note that transportable jobs are not restartable.)
The master table is either retained or dropped, depending on the circumstances, as follows:
Upon successful job completion, the master table is dropped. You can override this by setting the Data Pump
KEEP_MASTER=YES parameter for the job.
The master table is automatically retained for jobs that do not complete successfully.
If a job is stopped using the
STOP_JOB interactive command, then the master table is retained for use in restarting the job.
If a job is killed using the
KILL_JOB interactive command, then the master table is dropped and the job cannot be restarted.
If a job terminates unexpectedly, then the master table is retained. You can delete it if you do not intend to restart the job.
If a job stops before it starts running (that is, before any database objects have been copied), then the master table is dropped.
See Also:"JOB_NAME" for more information about how job names are formed
Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as
DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and
INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
When you are moving data from one database to another, it is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects. This is done using the following Data Pump Import parameters: REMAP_DATAFILE, REMAP_TABLESPACE, and REMAP_SCHEMA.
Data Pump can employ multiple worker processes, running in parallel, to increase job performance. Use the
PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might want to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example,
PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. For recommendations on setting the degree of parallelism, see the Export PARALLEL and Import PARALLEL parameter descriptions.
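For example, a job could be started with a low degree of parallelism and raised later from interactive-command mode (reached by pressing Ctrl+C in the client). The schema, directory object, and dump file template shown here are placeholders; a substitution variable in the dump file name lets each worker write to its own file.
> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=2

Export> PARALLEL=8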
Note:The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.
The worker processes unload and load metadata and table data. During import they also rebuild indexes. Some of these operations may be done in parallel: unloading and loading table data, rebuilding indexes, and loading package bodies. All other operations are done serially. Worker processes are created as needed until the number of worker processes equals the value supplied for the
PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job. Worker processes can be started on different nodes in an Oracle Real Application Clusters (Oracle RAC) environment.
Note:The value of
PARALLELis restricted to 1 in the Standard Edition of Oracle Database.
When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from a pool of available processes in an Oracle RAC environment.
The Data Pump Export and Import client utilities can attach to a job in either logging mode or interactive-command mode.
In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be processed, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).
In interactive-command mode, job status can be displayed on request. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.
A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered during execution of the job, and records the completion status of the job.
An alternative way to determine job status or to get other information about Data Pump jobs, would be to query the
DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS views. See Oracle Database Reference for descriptions of these views.
Data Pump operations that transfer table data (export and import) maintain an entry in the
V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.
Use of the REMAP_DATA parameter is not reflected in the determination of estimate values.
The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.
V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
USERNAME - job owner
OPNAME - job name
TARGET_DESC - job operation
SOFAR - megabytes transferred thus far during the job
TOTALWORK - estimated number of megabytes in the job
UNITS - megabytes (MB)
MESSAGE - a formatted status message of the form:
'job_name: operation_name : nnn out of mmm MB done'
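For example, a query along the following lines returns the progress columns for a running job; the job name SYS_EXPORT_SCHEMA_01 is only an example of a system-generated default name and should be replaced with the actual job name.
SQL> SELECT username, opname, target_desc, sofar, totalwork, message
  2  FROM v$session_longops
  3  WHERE opname = 'SYS_EXPORT_SCHEMA_01';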
Data Pump jobs manage the following types of files:
Dump files to contain the data and metadata that is being moved.
Log files to record the messages associated with an operation.
SQL files to record the output of a SQLFILE operation. A SQLFILE operation is started using the Data Pump Import
SQLFILE parameter and results in all of the SQL DDL that Import would execute, based on other parameters, being written to a SQL file.
Files specified by the
DATA_FILES parameter during a transportable import.
An understanding of how Data Pump allocates and handles these files will help you to use Export and Import to their fullest advantage.
Note:If your Data Pump job generates errors related to Network File Storage (NFS), then consult the installation guide for your platform to determine the correct NFS mount settings.
For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation. For example, if you discover that space is running low during an export operation, then you can add additional dump files by using the Data Pump Export
ADD_FILE command in interactive mode.
For import operations, all dump files must be specified at the time the job is defined.
Log files and SQL files overwrite previously existing files. For dump files, you can use the Export
REUSE_DUMPFILES parameter to specify whether to overwrite a preexisting dump file.
Because Data Pump is server-based rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires that directory paths be specified as directory objects. A directory object maps a name to a directory path on the file system. DBAs must ensure that only approved users are allowed access to the directory object associated with the directory path.
The following example shows a SQL statement that creates a directory object named
dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles:
SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';
The reason that a directory object is required is to ensure data security and integrity. For example:
If you were allowed to specify a directory path location for an input file, then you might be able to read data that the server has access to, but to which you should not.
If you were allowed to specify a directory path location for an output file, then the server might overwrite a file that you might not normally have privileges to delete.
On UNIX and Windows operating systems, a default directory object,
DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users. (The user
SYSTEM has read and write access to the
DATA_PUMP_DIR directory, by default.)
If you are not a privileged user, then before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.
After a directory is created, the user creating the directory object must grant READ or
WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user
hr in the directory named by
dpump_dir1, the DBA must execute the following command:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
Note that READ or WRITE permission to a directory object only means that the Oracle database can read or write files in the corresponding directory on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.
Data Pump Export and Import use the following order of precedence to determine a file's location:
If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the file name by a colon.)
If a directory object is not specified as part of the file specification, then the directory object named by the
DIRECTORY parameter is used.
If a directory object is not specified as part of the file specification, and if no directory object is named by the
DIRECTORY parameter, then the value of the environment variable,
DATA_PUMP_DIR, is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is
DUMP_FILES1, and it is located at /usr/apps/dumpfiles1:
SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
Then, a user on a UNIX-based client system using
csh can assign the value
DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The
DIRECTORY parameter can then be omitted from the command line. The dump file
employees.dmp, and the log file
export.log, are written to '/usr/apps/dumpfiles1'.
%setenv DATA_PUMP_DIR DUMP_FILES1
%expdp hr TABLES=employees DUMPFILE=employees.dmp
If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object,
DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:
SQL> SELECT directory_name, directory_path FROM dba_directories
  2  WHERE directory_name='DATA_PUMP_DIR';
If you are not a privileged user, then access to the
DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.
Do not confuse the default
DATA_PUMP_DIR directory object with the client-based environment variable of the same name.
Keep the following considerations in mind when working in an Oracle RAC environment.
To use Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.
The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Data Pump and/or external tables processes may run.
The default Data Pump behavior is that worker processes can run on any instance in an Oracle RAC configuration. Therefore, workers on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the
CLUSTER=NO parameter to constrain all worker processes to the instance where the Data Pump job was started.
Under certain circumstances, Data Pump uses parallel query slaves to load or unload data. In an Oracle RAC environment, Data Pump does not control where these slaves run, and they may run on other instances in the Oracle RAC, regardless of what is specified for CLUSTER or
SERVICE_NAME for the Data Pump job. Controls for parallel query operations are independent of Data Pump. When parallel query slaves run on other instances as part of a Data Pump job, they also require access to the physical storage of the dump file set.
If you use Data Pump Export or Import with Oracle Automatic Storage Management (Oracle ASM) enabled, then you must define the directory object used for the dump file so that the Oracle ASM disk group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the Oracle ASM dump file as follows:
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';
Then you would create a separate directory object for the log file:
SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';
To enable user
hr to have access to these directory objects, you would assign the necessary privileges, for example:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;
You would then use the following Data Pump Export command (you will be prompted for a password):
> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log
Note:If you simply want to copy Data Pump dump files between ASM and disk directories, you can use the DBMS_FILE_TRANSFER PL/SQL package.
The Export DIRECTORY parameter
The Import DIRECTORY parameter
Oracle Database SQL Language Reference for information about the CREATE DIRECTORY command
Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ASM
Oracle Database PL/SQL Packages and Types Reference for more information about the
DBMS_FILE_TRANSFER PL/SQL package
Instead of, or in addition to, listing specific file names, you can use the
DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable (
%U) in the file name. This is called a dump file template. The new dump files are created as they are needed, beginning with 01 for %U, then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the
PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the
FILESIZE parameter, then it is closed and a new dump file (with a new generated name) is created to take its place.
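For example, the following export sketch combines a dump file template with the FILESIZE and PARALLEL parameters; the schema, directory object, file size, and degree of parallelism are placeholders.
> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=exp%U.dmp FILESIZE=2G PARALLEL=4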
If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, then the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation begins by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new file names (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, then the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.
Data Pump can be used to migrate all or any portion of a database between different releases of the database software. The Data Pump Export
VERSION parameter is typically used to do this. This will generate a Data Pump dump file set compatible with the specified version.
The default value for the VERSION parameter is
COMPATIBLE, indicating that exported database object definitions will be compatible with the release specified for the
COMPATIBLE initialization parameter.
In an upgrade situation, when the target release of a Data Pump-based migration is higher than the source, the
VERSION parameter typically does not have to be specified because all objects in the source database will be compatible with the higher target release. An exception is when an entire Oracle Database 11g (release 11.2.0.3 or higher) is exported in preparation for importing into Oracle Database 12c Release 1 (12.1) or later. In this case, explicitly specify
VERSION=12 in conjunction with
FULL=YES in order to include a complete set of Oracle internal component metadata.
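For example, a full export from such an Oracle Database 11g source, prepared for import into Oracle Database 12c, might be run as in the following sketch; the directory object and file names are placeholders.
> expdp system FULL=YES VERSION=12 DIRECTORY=dpump_dir1 DUMPFILE=full_11g.dmp LOGFILE=full_11g.log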
In a downgrade situation, when the target release of a Data Pump-based migration is lower than the source, the
VERSION parameter should be explicitly specified to be the same version as the target. An exception is when the target release version is the same as the value of the
COMPATIBLE initialization parameter on the source system; then
VERSION does not need to be specified. In general however, Data Pump import cannot read dump file sets created by an Oracle release that is newer than the current release unless the
VERSION parameter is explicitly specified.
Keep the following information in mind when you are exporting and importing between different database releases:
On a Data Pump export, if you specify a database version that is older than the current database version, then a dump file set is created that you can import into that older version of the database. For example, if you are running Oracle Database 12c Release 1 (12.1) and specify
VERSION=11.2 on an export, then the dump file set that is created can be imported into an Oracle 11.2 database.
Note:Database privileges that are valid only in Oracle Database 12c Release 1 (12.1.0.2) and later (for example, the
READ privilege on tables, views, materialized views, and synonyms) cannot be imported into Oracle Database 12c Release 1 (12.1.0.1) or earlier. If an attempt is made to do so, then Import reports it as an error and continues the import operation.
Oracle Database Security Guide for more information about the READ and READ ANY TABLE privileges
If you specify a database release that is older than the current database release, then certain features may be unavailable. For example, specifying
VERSION=10.1 causes an error if data compression is also specified for the job because compression was not supported in Oracle Database 10g release 1 (10.1).
Data Pump Import can always read Data Pump dump file sets created by older releases of the database.
When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific release numbers (for example, 12.1, 10.1, 10.2, 11.1, or 11.2).
When you use Data Pump Export to export SecureFiles LOBs, the resulting behavior depends on several things, including the value of the Export
VERSION parameter, whether ContentType is present, and whether the LOB is archived and data is cached. The following scenarios cover different combinations of these variables:
If a table contains SecureFiles LOBs with ContentType and the Export
VERSION parameter is set to a value earlier than
11.2.0.0.0, then the ContentType is not exported.
If a table contains SecureFiles LOBs with ContentType and the Export
VERSION parameter is set to a value of
11.2.0.0.0 or later, then the ContentType is exported and restored on a subsequent import.
If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export
VERSION parameter is set to a value earlier than
11.2.0.0.0, then the SecureFiles LOB data is exported and the archive metadata is dropped. In this scenario, if
VERSION is set to
11.1 or later, then the SecureFiles LOB becomes a vanilla SecureFiles LOB. But if
VERSION is set to a value earlier than
11.1, then the SecureFiles LOB becomes a BasicFiles LOB.
If a table contains a SecureFiles LOB that is currently archived but the data is not cached, and the Export
VERSION parameter is set to a value earlier than
11.2.0.0.0, then an ORA-45001 error is returned.
If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export
VERSION parameter is set to a value of
11.2.0.0.0 or later, then both the cached data and the archive metadata are exported.
See Also:Oracle Database SecureFiles and Large Objects Developer's Guide for more information about SecureFiles
Oracle Data Pump provides the results of export and import operations immediately upon completion. In addition to recording the results in a log file, Data Pump may also report the outcome in a process exit code. This allows you to check the outcome of a Data Pump job from the command line or a script.
Table 1-1 describes the Data Pump exit codes for Linux, Unix, and Windows operating systems.
EX_SUCC (0): The export or import job completed successfully. No errors are displayed to the output device or recorded in the log file, if there is one.
EX_SUCC_ERR (5): The export or import job completed successfully, but there were errors encountered during the job. The errors are displayed to the output device and recorded in the log file, if there is one.
EX_FAIL (1): The export or import job encountered one or more fatal errors, including the following:
Errors on the command line or in command syntax
Oracle errors that cannot be recovered from
Operating system errors (such as malloc)
Invalid parameter values that prevent the job from starting
A fatal error is displayed to the output device but may not be recorded in the log file. Whether it is recorded in the log file can depend on several factors, including whether a log file was specified with the LOGFILE parameter and whether the fatal error occurred before or after the log file was opened.
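For example, a shell script could act on the exit code as in the following sketch; the export command itself and the file names are placeholders, and any nonzero code simply signals that the log file should be examined.
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
status=$?
if [ "$status" -ne 0 ]; then
  echo "Data Pump export ended with exit code $status; check the log file."
fi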
To set up unified auditing you create a unified audit policy or alter an existing policy. An audit policy is a named group of audit settings that enable you to audit a particular aspect of user behavior in the database. To create the policy, use the SQL
CREATE AUDIT POLICY statement.
After creating the audit policy, use the AUDIT and
NOAUDIT SQL statements to, respectively, enable and disable the policy.
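For example, a minimal policy that audits all Data Pump operations performed by a particular user could be created and enabled as in the following sketch; the policy name and user are placeholders.
SQL> CREATE AUDIT POLICY audit_dp_pol ACTIONS COMPONENT=DATAPUMP ALL;
SQL> AUDIT POLICY audit_dp_pol BY hr;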
Note:The information in this section applies only to Oracle Data Pump running on Oracle Database 12c and later.
This section describes factors that can affect successful completion of export and import jobs that involve the timestamp data types
TIMESTAMP WITH TIME ZONE and
TIMESTAMP WITH LOCAL TIME ZONE.
For export and import jobs that have
TIMESTAMP WITH TIME ZONE data, successful job completion can depend on:
The version of the Oracle Database time zone files on the source and target databases. See "Time Zone File Versions on the Source and Target".
The export/import mode and whether the Data Pump version being used supports
TIMESTAMP WITH TIME ZONE data. (Data Pump 11.2.0.1 and later provide support for
TIMESTAMP WITH TIME ZONE data.) See "Data Pump Support for TIMESTAMP WITH TIME ZONE Data".
To identify the time zone file version of a database, you can execute the following SQL statement:
SQL> SELECT VERSION FROM V$TIMEZONE_FILE;
Oracle Database Globalization Support Guide for more information about time zone files
Successful job completion can depend on whether the source and target time zone file versions match.
If the Oracle Database time zone file version is the same on the source and target databases, then conversion of
TIMESTAMP WITH TIME ZONE data is not necessary. The export/import job should complete successfully.
The exception to this is a transportable tablespace or transportable table export performed using a Data Pump release earlier than 11.2.0.1. In that case, tables in the dump file that have
TIMESTAMP WITH TIME ZONE columns are not created on import even though the time zone file version is the same on the source and target.
If the source time zone file version is not available on the target database, then the job fails. The version of the time zone file on the source may not be available on the target because the source may have had its time zone file updated to a later version but the target has not. For example, if the export is done on Oracle Database 11g release 2 (11.2.0.2) with a time zone file version of 17, and the import is done on 11.2.0.1 with only a time zone file of 16 available, then the job fails.
This section describes Data Pump support for
TIMESTAMP WITH TIME ZONE data during different export and import modes when versions of the Oracle Database time zone file are different on the source and target databases.
If the dump file is created with a Data Pump version that supports
TIMESTAMP WITH TIME ZONE data (11.2.0.1 or later), then the time zone file version of the export system is recorded in the dump file. Data Pump uses that information to determine whether data conversion is necessary. If the target database knows about the source time zone version, but is actually using a later version, then the data is converted to the later version.
TIMESTAMP WITH TIME ZONE data cannot be downgraded, so if you attempt to import to a target that is using an earlier version of the time zone file than the source used, the import fails.
If the dump file is created with a Data Pump version prior to Oracle Database 11g release 2 (11.2.0.1), then
TIMESTAMP WITH TIME ZONE data is not supported, so no conversion is done and corruption may occur.
In transportable tablespace and transportable table modes, if the source and target have different time zone file versions, then tables with
TIMESTAMP WITH TIME ZONE columns are not created. A warning is displayed at the beginning of the job showing the source and target database time zone file versions. A message is also displayed for each table not created. This is true even if the Data Pump version used to create the dump file supports
TIMESTAMP WITH TIME ZONE data. (Release 11.2.0.1 and later support
TIMESTAMP WITH TIME ZONE data.)
If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses
TIMESTAMP WITH TIME ZONE columns.
Oracle Database Administrator's Guide for more information about transportable tablespaces
Full transportable exports and imports are supported when the source database is at least Oracle Database 11g release 2 (11.2.0.3) and the target is Oracle Database 12c Release 1 (12.1) or later.
Data Pump 11.2.0.1 and later provide support for
TIMESTAMP WITH TIME ZONE data. Therefore, in full transportable operations, tables with
TIMESTAMP WITH TIME ZONE columns are created. If the source and target database have different time zone file versions, then
TIMESTAMP WITH TIME ZONE columns from the source are converted to the time zone file version of the target.
"Using the Transportable Option During Full Mode Exports" for more information about full transportable exports
"Using the Transportable Option During Full Mode Imports" for more information about full transportable imports
If a table is moved using a transportable mode (transportable table, transportable tablespace, or full transportable), and the following conditions exist, then a warning is issued and the table is not created:
The source and target databases have different database time zones
The table contains
TIMESTAMP WITH LOCAL TIME ZONE data types
To successfully move a table that was not created because of these conditions, use a non-transportable export and import mode.
The following sections describe the globalization support behavior of Data Pump Export and Import with respect to character set conversion of user data and data definition language (DDL).
The Export utility always exports user data, including Unicode data, in the character set of the export system. (Character sets are specified at database creation.) If the character set of the source database is different than the character set of the import database, then a single conversion is performed to automatically convert the data to the character set of the Import system.
If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results. For example, consider the following table definition, which is produced on a database having an ASCII character set:
CREATE TABLE partlist (
  part   VARCHAR2(10),
  partno NUMBER(2)
)
PARTITION BY RANGE (part)
( PARTITION part_low  VALUES LESS THAN ('Z')      TABLESPACE tbs_1,
  PARTITION part_mid  VALUES LESS THAN ('z')      TABLESPACE tbs_2,
  PARTITION part_high VALUES LESS THAN (MAXVALUE) TABLESPACE tbs_3 );
This partitioning scheme makes sense because
z comes after
Z in ASCII character sets.
When this table is imported into a database based upon an EBCDIC character set, all of the rows in the
part_mid partition will migrate to the
part_low partition because
z comes before
Z in EBCDIC character sets. To obtain the desired results, the owner of
partlist must repartition the table following the import.
The Export utility writes dump files using the database character set of the export system.
When the dump file is imported, a character set conversion is required for DDL only if the import system's database character set is different from the export system's database character set.
To minimize data loss due to character set conversions, ensure that the import database character set is a superset of the export database character set.
If the system on which the import occurs uses a 7-bit character set, and you import an 8-bit character set dump file, then some 8-bit characters may be converted to 7-bit equivalents. An indication that this has happened is when accented characters lose the accent mark.
To avoid this unwanted conversion, ensure that the export database and the import database use the same character set.
During character set conversion, any characters in the export file that have no equivalent in the import database character set are replaced with a default character. The import database character set defines the default character.
If the import system has to use replacement characters while converting DDL, then a warning message is displayed and the system attempts to load the converted DDL.
If the import system has to use replacement characters while converting user data, then the default behavior is to load the converted data. However, it is possible to instruct the import system to reject rows of user data that were converted using replacement characters. See the Import "DATA_OPTIONS" parameter for details.
To guarantee 100% conversion, the import database character set must be a superset (or equivalent) of the character set used to generate the export file.
Caution:When the database character set of the export system differs from that of the import system, the import system displays informational messages at the start of the job that show what the database character set is.
When the import database character set is not a superset of the character set used to generate the export file, the import system displays a warning that possible data loss may occur due to character set conversions.