Oracle® Database Utilities
12c Release 1 (12.1)

E17639-12

1 Overview of Oracle Data Pump

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.

This chapter discusses the following topics:

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

  • The command-line clients, expdp and impdp

  • The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)

  • The DBMS_METADATA PL/SQL package (also known as the Metadata API)

The Data Pump clients, expdp and impdp, start the Data Pump Export utility and Data Pump Import utility, respectively.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.

When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.

The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
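
For example, the following PL/SQL sketch submits a schema-mode export directly through DBMS_DATAPUMP, without the expdp client. It is a minimal illustration only; the job name, dump file name, directory object, and schema are hypothetical and must exist (or be changed) in your environment.

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job (the job name is an example)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                          job_name  => 'HR_EXPORT_API');
  -- Write the dump file to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp',
                         directory => 'DPUMP_DIR1');
  -- Limit the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/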

Note:

All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available. See "Default Locations for Dump, Log, and SQL Files" for more information about directory objects.

How Does Data Pump Move Data?

Data Pump can move data in and out of databases by using several mechanisms: data file copying, direct path, external tables, conventional path, and network link import.

Note:

Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.

The following sections briefly explain how and when each of these data movement mechanisms is used.

Using Data File Copying to Move Data

The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. This method is used in the following situations:

  • The TRANSPORT_TABLESPACES parameter is used to specify a transportable tablespace export. Only metadata for the specified tablespaces is exported.

  • The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter) or a full mode export (specified with the FULL parameter) or a full mode network import (specified with the FULL and NETWORK_LINK parameters).

When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, both the data files and the export dump file must be loaded.

When data is moved by using data file copying, there are some limitations regarding character set compatibility between the source and target databases. See Oracle Database Administrator's Guide for details.

If the source platform and the target platform are of different endianness, then you must convert the data being transported so that it is in the format of the target platform. You can use the DBMS_FILE_TRANSFER PL/SQL package or the RMAN CONVERT command to convert the data. See Oracle Database Administrator's Guide for more information about using either of these options.
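
For example, a transportable tablespace export, followed by an RMAN conversion of the copied data files for a target of different endianness, might look like the following sketch. The directory object, dump file, tablespace, platform name, and output path are placeholders.

> expdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log

RMAN> CONVERT TABLESPACE tbs_1 TO PLATFORM 'Linux x86 64-bit' FORMAT '/tmp/transport/%U';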

Using Direct Path to Move Data

After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it. For example, if a table contains a column of type BFILE, then direct path cannot be used to load that table and the external tables mechanism is used instead.

The following sections describe situations in which direct path cannot be used for loading and unloading:

Situations in Which Direct Path Load Is Not Used

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to load the data for that table:

  • A domain index that is not a CONTEXT type index exists for a LOB column.

  • A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.

  • A table is in a cluster.

  • There is an active trigger on a preexisting table.

  • Fine-grained access control is enabled in insert mode on a preexisting table.

  • A table contains BFILE columns or columns of opaque types.

  • A referential integrity constraint is present on a preexisting table.

  • A table contains VARRAY columns with an embedded opaque type.

  • The table has encrypted columns.

  • The table into which data is being imported is a preexisting table and at least one of the following conditions exists:

    • There is an active trigger

    • The table is partitioned

    • Fine-grained access control is in insert mode

    • A referential integrity constraint exists

    • A unique index exists

  • Supplemental logging is enabled and the table has at least one LOB column.

  • The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

  • A table contains a column (including a VARRAY column) with a TIMESTAMP WITH TIME ZONE data type and the version of the time zone data file is different between the export and import systems.

Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, then Data Pump uses the external table method to unload data, rather than the direct path method:

  • Fine-grained access control for SELECT is enabled.

  • The table is a queue table.

  • The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.

  • The table contains encrypted columns.

  • The table contains a column of an evolved type that needs upgrading.

  • The table contains a column of type LONG or LONG RAW that is not last.

  • The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

  • Prior to the unload operation, the table was altered to contain a column that is NOT NULL and also has a default value specified.

Using External Tables to Move Data

When data file copying is not selected and the data cannot be moved using direct path, the external table mechanism is used. The external table mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.

In particular, Data Pump uses external tables in the following situations:

  • Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities

  • Loading tables with global or domain indexes defined on them, including partitioned object tables

  • Loading tables with active triggers or clustered tables

  • Loading and unloading tables with encrypted columns

  • Loading tables with fine-grained access control enabled for inserts

  • Loading tables that are partitioned differently at load time and unload time

  • Loading a table not created by the import operation (the table exists before the import starts)

Note:

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement.

Using Conventional Path to Move Data

In situations where there are conflicting table attributes, Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.

Using Network Link Import to Move Data

When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, SQL is directly used to move the data using an INSERT SELECT statement. The SELECT clause retrieves the data from the remote database over the network link. The INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.

When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, the data from the remote database is written to dump files on the target database. (Note that to export from a read-only database, the NETWORK_LINK parameter is required.)

Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
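
For example, the following sketch first creates a database link and then performs a network import of a single table over that link. The link name, credentials, connect string, and object names are hypothetical.

SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY password USING 'source_tns_alias';

> impdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db TABLES=employees LOGFILE=net_imp.log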

Supported Link Types

The following types of database links are supported for use with Data Pump Export and Import:

  • Public (both public and shared)

  • Fixed user

  • Connected user

Unsupported Link Types

The database link type, Current User, is not supported for use with Data Pump Export or Import.

See Also:

  • The Export NETWORK_LINK parameter for information about performing exports over a database link

  • The Import NETWORK_LINK parameter for information about performing imports over a database link

  • Oracle Database Administrator's Guide for information about creating database links and the different types of links

Using Data Pump With CDBs

A multitenant container database (CDB) is an Oracle database that includes zero, one, or many user-created pluggable databases (PDBs). A PDB is a portable set of schemas, schema objects, and nonschema objects that appear to an Oracle Net client as a non-CDB. A non-CDB is an Oracle database that is not a CDB.

You can use Data Pump to migrate all, or portions of, a database from a non-CDB into a PDB, between PDBs within the same or different CDBs, and from a PDB into a non-CDB. In general, using Data Pump with PDBs is identical to using Data Pump with a non-CDB.

Note:

In Oracle Database 12c Release 1 (12.1), Data Pump does not support any CDB-wide operations. Data Pump issues the following warning if you are connected to the root or seed database of a CDB:
ORA-39357: WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Using Data Pump to Move Databases Into a CDB

After you create an empty PDB in the CDB, you can use an Oracle Data Pump full-mode export and import operation to move the data into the PDB. The job can be performed with or without the transportable option. If you use the transportable option on a full mode export or import, it is referred to as a full transportable export/import.

When the transportable option is used, export and import use both transportable tablespace data movement and conventional data movement; the latter is used for tables that reside in non-transportable tablespaces such as SYSTEM and SYSAUX. Using the transportable option can reduce the export time and, especially, the import time, because table data does not need to be unloaded and reloaded, and index structures in user tablespaces do not need to be re-created.

If you want to specify a particular PDB for the export/import operation, then on the Data Pump command line, you can supply a connect identifier in the connect string when you start Data Pump. For example, to import data to a PDB named pdb1, you could enter the following on the Data Pump command line:

impdp hr@pdb1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees

Be aware of the following requirements when using Data Pump to move data into a CDB:

  • To administer a multitenant environment, you must have the CDB_DBA role.

  • Full database exports from Oracle Database 11.2.0.2 and earlier may be imported into Oracle Database 12c (CDB or non-CDB). However, Oracle recommends the source database first be upgraded to Oracle Database 11g release 2 (11.2.0.3 or later) so that information about registered options and components is included in the export.

  • When migrating Oracle Database 11g release 2 (11.2.0.3 or later) to a CDB (or to a non-CDB) using either full database export or full transportable database export, you must set the Data Pump Export parameter VERSION=12 in order to generate a dump file that is ready for import into Oracle Database 12c. If you do not set VERSION=12, then the export file that is generated will not contain complete information about registered database options and components.

  • Network-based full transportable imports require use of the FULL=YES, TRANSPORTABLE=ALWAYS, and TRANSPORT_DATAFILES=datafile_name parameters. When the source database is Oracle Database 11g release 11.2.0.3 or later, but earlier than Oracle Database 12c Release 1 (12.1), the VERSION=12 parameter is also required. (A sample command appears after this list.)

  • File-based full transportable imports only require use of the TRANSPORT_DATAFILES=datafile_name parameter. Data Pump Import infers the presence of the TRANSPORTABLE=ALWAYS and FULL=YES parameters.

  • The default Data Pump directory object, DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are exporting or importing.
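
For example, a network-based full transportable import into a PDB from an 11.2.0.3 source might be invoked as shown in the following sketch. The PDB service name, database link, data file path, and log file directory object are placeholders.

> impdp system@pdb1 FULL=YES NETWORK_LINK=source_link TRANSPORTABLE=ALWAYS VERSION=12 TRANSPORT_DATAFILES='/disk1/oradata/pdb1/users01.dbf' LOGFILE=dpump_pdb1:full_tts_imp.log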

Using Data Pump to Move PDBs Within Or Between CDBs

Data Pump export and import operations on PDBs are identical to those on non-CDBs with the exception of how common users are handled. If you have created a common user in a CDB, then a full database or privileged schema export of that user from within any PDB in the CDB results in a standard CREATE USER C##common name DDL statement being performed upon import. The statement will fail because of the common user prefix C## on the user name. The following error message will be returned:

ORA-65094: invalid local user or role name

In the PDB being exported, if you have created local objects in that user's schema and you want to import them, then either make sure a common user of the same name already exists in the target CDB instance or use the Data Pump Import REMAP_SCHEMA parameter on the impdp command, as follows:

REMAP_SCHEMA=C##common name:local user name
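
For example, a complete impdp command using this remapping might look like the following sketch (the dump file, directory object, and user names are hypothetical):

> impdp system@pdb1 DIRECTORY=dpump_dir1 DUMPFILE=cdb_schemas.dmp REMAP_SCHEMA=C##SALES_ADM:SALES_ADM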

Required Roles for Data Pump Export and Import Operations

Many Data Pump Export and Import operations require the user to have the DATAPUMP_EXP_FULL_DATABASE role and/or the DATAPUMP_IMP_FULL_DATABASE role. These roles are automatically defined for Oracle databases when you run the standard scripts that are part of database creation. (Note that although the names of these roles contain the word FULL, these roles actually apply to any privileged operations in any export or import mode, not only Full mode.)

The DATAPUMP_EXP_FULL_DATABASE role affects only export operations. The DATAPUMP_IMP_FULL_DATABASE role affects import operations and operations that use the Import SQLFILE parameter. These roles allow users performing exports and imports to do the following:

  • Perform the operation outside the scope of their schema

  • Monitor jobs that were initiated by another user

  • Export objects (such as tablespace definitions) and import objects (such as directory definitions) that unprivileged users cannot reference

These are powerful roles. Database administrators should use caution when granting these roles to users.
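
When granting these roles is appropriate, a DBA could issue statements such as the following (the grantee is hypothetical):

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO dp_admin;
SQL> GRANT DATAPUMP_IMP_FULL_DATABASE TO dp_admin;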

Although the SYS schema does not have either of these roles assigned to it, all security checks performed by Data Pump that require these roles also grant access to the SYS schema.

See Also:

Oracle Database Security Guide for more information about predefined roles in an Oracle Database installation

What Happens During Execution of a Data Pump Job?

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.

Coordination of a Job

For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

Tracking Progress Within a Job

While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:

  • For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.

  • For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.

The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.

For all operations, the information in the master table is used to restart a job. (Note that transportable jobs are not restartable.)
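
For example, to restart a stopped job, you can reattach to it by name and issue the interactive START_JOB command. The job name shown below is hypothetical; use the name reported when the job was started or the name shown in DBA_DATAPUMP_JOBS.

> expdp hr ATTACH=SYS_EXPORT_SCHEMA_01

Export> START_JOB
Export> CONTINUE_CLIENT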

The master table is either retained or dropped, depending on the circumstances, as follows:

  • Upon successful job completion, the master table is dropped. You can override this by setting the Data Pump KEEP_MASTER=YES parameter for the job.

  • The master table is automatically retained for jobs that do not complete successfully.

  • If a job is stopped using the STOP_JOB interactive command, then the master table is retained for use in restarting the job.

  • If a job is killed using the KILL_JOB interactive command, then the master table is dropped and the job cannot be restarted.

  • If a job terminates unexpectedly, then the master table is retained. You can delete it if you do not intend to restart the job.

  • If a job stops before it starts running (that is, before any database objects have been copied), then the master table is dropped.

See Also:

"JOB_NAME" for more information about how job names are formed

Filtering Data and Metadata During a Job

Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
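
For example, the following parameter file sketch combines metadata filters (EXCLUDE) with a data filter (QUERY); the file name, schema, and filter values are illustrative only.

Contents of a parameter file, hr_subset.par:

SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=hr_subset.dmp
EXCLUDE=INDEX
EXCLUDE=STATISTICS
QUERY=hr.employees:"WHERE department_id = 50"

> expdp hr PARFILE=hr_subset.par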

Transforming Metadata During a Job

When you are moving data from one database to another, it is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects. This is done using the following Data Pump Import parameters: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.
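
For example, the following import sketch remaps a schema and a tablespace and suppresses segment attributes in the generated DDL (all names are placeholders):

> impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=example:users TRANSFORM=SEGMENT_ATTRIBUTES:N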

Maximizing Job Performance

Data Pump can employ multiple worker processes, running in parallel, to increase job performance. Use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might want to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. For recommendations on setting the degree of parallelism, see the Export PARALLEL and Import PARALLEL parameter descriptions.
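
For example, a job could be started with a modest degree of parallelism and adjusted later from interactive-command mode. Using the %U substitution variable in the dump file name helps ensure that enough dump files are available for the requested parallelism; the names below are placeholders.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp SCHEMAS=hr PARALLEL=2

Export> PARALLEL=8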

Note:

The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.

Loading and Unloading of Data

The worker processes unload and load metadata and table data. During import they also rebuild indexes. Some of these operations may be done in parallel: unloading and loading table data, rebuilding indexes, and loading package bodies. All other operations are done serially. Worker processes are created as needed until the number of worker processes equals the value supplied for the PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job. Worker processes can be started on different nodes in an Oracle Real Application Clusters (Oracle RAC) environment.

Note:

The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.

When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from a pool of available processes in an Oracle RAC environment.

Monitoring Job Status

The Data Pump Export and Import client utilities can attach to a job in either logging mode or interactive-command mode.

In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be processed, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).

See Also:

  • The Export STATUS parameter for information about changing the frequency of the status display in command-line Export

  • The Import STATUS parameter for information about changing the frequency of the status display in command-line Import

In interactive-command mode, job status can be displayed on request. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.

See Also:

  • The interactive Export STATUS command

  • The interactive Import STATUS command

A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered during execution of the job, and records the completion status of the job.

See Also:

  • The Export LOGFILE parameter for information on how to set the file specification for an export log file

  • The Import LOGFILE parameter for information on how to set the file specification for an import log file

An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. See Oracle Database Reference for descriptions of these views.
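
For example, the following query (a sketch) lists the Data Pump jobs known to the database, along with their current state:

SQL> SELECT owner_name, job_name, operation, job_mode, state
  2  FROM dba_datapump_jobs;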

Monitoring the Progress of Executing Jobs

Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.

Use of the COMPRESSION, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, ENCRYPTION_PASSWORD, QUERY, and REMAP_DATA parameters is not reflected in the determination of estimate values.

The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.

The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:

  • USERNAME - job owner

  • OPNAME - job name

  • TARGET_DESC - job operation

  • SOFAR - megabytes transferred thus far during the job

  • TOTALWORK - estimated number of megabytes in the job

  • UNITS - megabytes (MB)

  • MESSAGE - a formatted status message of the form:

    'job_name: operation_name : nnn out of mmm MB done'
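
For example, a query along the following lines shows the progress of Data Pump jobs recorded in V$SESSION_LONGOPS. The filter on OPNAME assumes default system-generated job names and is illustrative only.

SQL> SELECT username, opname, target_desc, sofar, totalwork
  2  FROM v$session_longops
  3  WHERE opname LIKE 'SYS_EXPORT%' OR opname LIKE 'SYS_IMPORT%';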
    

File Allocation

Data Pump jobs manage the following types of files:

  • Dump files to contain the data and metadata that is being moved.

  • Log files to record the messages associated with an operation.

  • SQL files to record the output of a SQLFILE operation. A SQLFILE operation is started using the Data Pump Import SQLFILE parameter and results in all of the SQL DDL that Import would otherwise execute, based on the other parameters, being written to a SQL file instead.

  • Data files specified by the Import TRANSPORT_DATAFILES parameter during a transportable import.

An understanding of how Data Pump allocates and handles these files will help you to use Export and Import to their fullest advantage.

Note:

If your Data Pump job generates errors related to Network File Storage (NFS), then consult the installation guide for your platform to determine the correct NFS mount settings.

Specifying Files and Adding Additional Dump Files

For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation. For example, if you discover that space is running low during an export operation, then you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive mode.

For import operations, all dump files must be specified at the time the job is defined.

Log files and SQL files overwrite previously existing files. For dump files, you can use the Export REUSE_DUMPFILES parameter to specify whether to overwrite a preexisting dump file.

Default Locations for Dump, Log, and SQL Files

Because Data Pump is server-based rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires that directory paths be specified as directory objects. A directory object maps a name to a directory path on the file system. DBAs must ensure that only approved users are allowed access to the directory object associated with the directory path.

The following example shows a SQL statement that creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

The reason that a directory object is required is to ensure data security and integrity. For example:

  • If you were allowed to specify a directory path location for an input file, then you might be able to read data that the server has access to, but to which you should not have access.

  • If you were allowed to specify a directory path location for an output file, then the server might overwrite a file that you might not normally have privileges to delete.

On UNIX and Windows operating systems, a default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users. (The user SYSTEM has read and write access to the DATA_PUMP_DIR directory, by default.)

If you are not a privileged user, then before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.

After a directory is created, the user creating the directory object must grant READ or WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user hr in the directory named by dpump_dir1, the DBA must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

Note that READ or WRITE permission to a directory object only means that the Oracle database can read or write files in the corresponding directory on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.

Data Pump Export and Import use the following order of precedence to determine a file's location:

  1. If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the file name by a colon.)

  2. If a directory object is not specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used.

  3. If a directory object is not specified as part of the file specification, and if no directory object is named by the DIRECTORY parameter, then the value of the environment variable, DATA_PUMP_DIR, is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is DUMP_FILES1, and it is located at '/usr/apps/dumpfiles1'.

    SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
    

    Then, a user on a UNIX-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp, and the log file export.log, are written to '/usr/apps/dumpfiles1'.

    %setenv DATA_PUMP_DIR DUMP_FILES1
    %expdp hr TABLES=employees DUMPFILE=employees.dmp
    
  4. If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object, DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:

    SQL> SELECT directory_name, directory_path FROM dba_directories
    2 WHERE directory_name='DATA_PUMP_DIR';
    

    If you are not a privileged user, then access to the DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.

    Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.

Oracle RAC Considerations

Keep the following considerations in mind when working in an Oracle RAC environment.

  • To use Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.

    The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Data Pump and/or external tables processes may run.

  • The default Data Pump behavior is that worker processes can run on any instance in an Oracle RAC configuration. Therefore, workers on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the CLUSTER=NO parameter to constrain all worker processes to the instance where the Data Pump job was started.

  • Under certain circumstances, Data Pump uses parallel query slaves to load or unload data. In an Oracle RAC environment, Data Pump does not control where these slaves run, and they may run on other instances in the Oracle RAC, regardless of what is specified for CLUSTER and SERVICE_NAME for the Data Pump job. Controls for parallel query operations are independent of Data Pump. When parallel query slaves run on other instances as part of a Data Pump job, they also require access to the physical storage of the dump file set.

Using Directory Objects When Oracle Automatic Storage Management Is Enabled

If you use Data Pump Export or Import with Oracle Automatic Storage Management (Oracle ASM) enabled, then you must define the directory object used for the dump file so that the Oracle ASM disk group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the Oracle ASM dump file as follows:

SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';

Then you would create a separate directory object for the log file:

SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';

To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;

You would then use the following Data Pump Export command (you will be prompted for a password):

> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log

Note:

If you simply want to copy Data Pump dump files between ASM and disk directories, you can use the DBMS_FILE_TRANSFER PL/SQL package.
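
For example, the following PL/SQL sketch copies a dump file from the Oracle ASM-based directory object to the file-system directory object created above; the file name is hypothetical.

BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'DPUMP_DIR',   -- ASM-based directory object
    source_file_name             => 'hr.dmp',
    destination_directory_object => 'DPUMP_LOG',   -- file-system directory object
    destination_file_name        => 'hr.dmp');
END;
/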

The DATA_PUMP_DIR Directory Object and Pluggable Databases

The default Data Pump directory object, DATA_PUMP_DIR, does not work with pluggable databases (PDBs). You must define an explicit directory object within the PDB that you are using Data Pump to export or import.

Using Substitution Variables

Instead of, or in addition to, listing specific file names, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable (%U) in the file name. This is called a dump file template. The new dump files are created as they are needed, beginning with 01 for %U, then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the FILESIZE parameter, then it is closed and a new dump file (with a new generated name) is created to take its place.
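
For example, the following export sketch uses a dump file template together with FILESIZE and PARALLEL; the names and sizes are illustrative only.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp%U.dmp FILESIZE=5GB PARALLEL=4 SCHEMAS=hr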

If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, then the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.

For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation begins by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new file names (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, then the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.

Moving Data Between Different Database Releases

Because most Data Pump operations are performed on the server side, if you are moving data between different releases of the database, then you must provide the server with the specific release information. Otherwise, errors may occur. To specify release information, use the VERSION parameter.
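
For example, to create a dump file set on an Oracle Database 12c system that can be imported into an Oracle Database 11g release 2 database, you might specify the following (the schema and file names are placeholders):

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_112.dmp SCHEMAS=hr VERSION=11.2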

Keep the following information in mind when you are using Data Pump Export and Import to move data between different database releases:

  • If you specify a database release that is older than the current database release, then certain features may be unavailable. For example, specifying VERSION=10.1 causes an error if data compression is also specified for the job because compression was not supported in Oracle Database 10g release 1 (10.1).

  • On a Data Pump export, if you specify a database release that is older than the current database release, then a dump file set is created that you can import into that older release of the database. However, the dump file set does not contain any objects that the older database release does not support.

  • Data Pump Import can always read dump file sets created by older releases of the database.

  • Data Pump Import cannot read dump file sets created by a database release that is newer than the current database release, unless those dump file sets were created with the VERSION parameter set to the release of the target database. Therefore, the best way to perform a downgrade is to perform your Data Pump export with the VERSION parameter set to the release of the target database.

  • When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific release numbers (for example, 12.1, 10.1, 10.2, 11.1, or 11.2).

  • A Full-mode export of an Oracle Database 11g (11.2.0.3) database can specify the Data Pump VERSION=12 parameter when the target database is Oracle Database 12c Release 1 (12.1) or later.

SecureFiles LOB Considerations

When you use Data Pump Export to export SecureFiles LOBs, the resulting behavior depends on several things, including the value of the Export VERSION parameter, whether ContentType is present, and whether the LOB is archived and data is cached. The following scenarios cover different combinations of these variables:

  • If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the ContentType is not exported.

  • If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then the ContentType is exported and restored on a subsequent import.

  • If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the SecureFiles LOB data is exported and the archive metadata is dropped. In this scenario, if VERSION is set to 11.1 or later, then the SecureFiles LOB becomes a vanilla SecureFiles LOB. But if VERSION is set to a value earlier than 11.1, then the SecureFiles LOB becomes a BasicFiles LOB.

  • If a table contains a SecureFiles LOB that is currently archived but the data is not cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then an ORA-45001 error is returned.

  • If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then both the cached data and the archive metadata is exported.

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for more information about SecureFiles

Data Pump Exit Codes

Oracle Data Pump provides the results of export and import operations immediately upon completion. In addition to recording the results in a log file, Data Pump may also report the outcome in a process exit code. This allows you to check the outcome of a Data Pump job from the command line or a script.

Table 1-1 describes the Data Pump exit codes for Linux, Unix, and Windows operating systems.

Table 1-1 Data Pump Exit Codes

EX_SUCC (exit code 0)

The export or import job completed successfully. No errors are displayed to the output device or recorded in the log file, if there is one.

EX_SUCC_ERR (exit code 5)

The export or import job completed successfully but there were errors encountered during the job. The errors are displayed to the output device and recorded in the log file, if there is one.

EX_FAIL (exit code 1)

The export or import job encountered one or more fatal errors, including the following:

  • Errors on the command line or in command syntax

  • Oracle database errors from which export or import cannot recover

  • Operating system errors (such as malloc)

  • Invalid parameter values that prevent the job from starting (for example, an invalid directory object specified in the DIRECTORY parameter)

A fatal error is displayed to the output device but may not be recorded in the log file. Whether it is recorded in the log file can depend on several factors, including:

  • Was a log file specified at the start of the job?

  • Did the processing of the job proceed far enough for a log file to be opened?
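
For example, a shell script could test the exit code after running an export. This is a minimal sketch, and the parameters shown are placeholders.

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr LOGFILE=hr.log
status=$?
if [ "$status" -eq 0 ]; then
  echo "Export completed successfully"
elif [ "$status" -eq 5 ]; then
  echo "Export completed with errors; check hr.log"
else
  echo "Export failed with exit code $status"
fi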


Auditing Data Pump Jobs

You can perform auditing on Data Pump jobs in order to monitor and record selected user database actions. Data Pump uses unified auditing, in which all audit records are centralized in one place.

To set up unified auditing you create a unified audit policy or alter an existing policy. An audit policy is a named group of audit settings that enable you to audit a particular aspect of user behavior in the database. To create the policy, use the SQL CREATE AUDIT POLICY statement.

After creating the audit policy, use the AUDIT and NOAUDIT SQL statements to, respectively, enable and disable the policy.
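
For example, the following sketch creates a unified audit policy for Data Pump export operations and then enables it for one user; the policy name and user are hypothetical.

SQL> CREATE AUDIT POLICY audit_dp_export_pol ACTIONS COMPONENT=DATAPUMP EXPORT;
SQL> AUDIT POLICY audit_dp_export_pol BY hr;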

How Does Data Pump Handle Timestamp Data?

Note:

The information in this section applies only to Oracle Data Pump running on Oracle Database 12c and later.

This section describes factors that can affect successful completion of export and import jobs that involve the timestamp data types TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE.

TIMESTAMP WITH TIME ZONE Restrictions

For export and import jobs that have TIMESTAMP WITH TIME ZONE data, successful job completion can depend on the version of the Oracle Database time zone file on the source and target databases, and on the export or import mode and the Data Pump release being used. These factors are described in the following sections.

To identify the time zone file version of a database, you can execute the following SQL statement:

SQL> SELECT VERSION FROM V$TIMEZONE_FILE;

Time Zone File Versions on the Source and Target

Successful job completion can depend on whether the source and target time zone file versions match.

  • If the Oracle Database time zone file version is the same on the source and target databases, then conversion of TIMESTAMP WITH TIME ZONE data is not necessary. The export/import job should complete successfully.

    The exception to this is a transportable tablespace or transportable table export performed using a Data Pump release earlier than 11.2.0.1. In that case, tables in the dump file that have TIMESTAMP WITH TIME ZONE columns are not created on import even though the time zone file version is the same on the source and target.

  • If the source time zone file version is not available on the target database, then the job fails. The version of the time zone file on the source may not be available on the target because the source may have had its time zone file updated to a later version but the target has not. For example, if the export is done on Oracle Database 11g release 2 (11.2.0.2) with a time zone file version of 17, and the import is done on 11.2.0.2 with only time zone file version 16 available, then the job fails.

Data Pump Support for TIMESTAMP WITH TIME ZONE Data

This section describes Data Pump support for TIMESTAMP WITH TIME ZONE data during different export and import modes when versions of the Oracle Database time zone file are different on the source and target databases.

Non-transportable Modes

  • If the dump file is created with a Data Pump version that supports TIMESTAMP WITH TIME ZONE data (11.2.0.1 or later), then the time zone file version of the export system is recorded in the dump file. Data Pump uses that information to determine whether data conversion is necessary. If the target database knows about the source time zone version, but is actually using a later version, then the data is converted to the later version. TIMESTAMP WITH TIME ZONE data cannot be downgraded, so if you attempt to import to a target that is using an earlier version of the time zone file than the source used, the import fails.

  • If the dump file is created with a Data Pump version prior to Oracle Database 11g release 2 (11.2.0.1), then TIMESTAMP WITH TIME ZONE data is not supported, so no conversion is done and corruption may occur.

Transportable Tablespace and Transportable Table Modes

  • In transportable tablespace and transportable table modes, if the source and target have different time zone file versions, then tables with TIMESTAMP WITH TIME ZONE columns are not created. A warning is displayed at the beginning of the job showing the source and target database time zone file versions. A message is also displayed for each table not created. This is true even if the Data Pump version used to create the dump file supports TIMESTAMP WITH TIME ZONE data. (Release 11.2.0.1 and later support TIMESTAMP WITH TIME ZONE data.)

  • If the source is earlier than Oracle Database 11g release 2 (11.2.0.1), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TIMESTAMP WITH TIME ZONE columns.

Full Transportable Mode

Full transportable exports and imports are supported when the source database is at least Oracle Database 11g release 2 (11.2.0.3) and the target is Oracle Database 12c release 1 (12.1) or later.

Data Pump 11.2.0.1 and later provide support for TIMESTAMP WITH TIME ZONE data. Therefore, in full transportable operations, tables with TIMESTAMP WITH TIME ZONE columns are created. If the source and target database have different time zone file versions, then TIMESTAMP WITH TIME ZONE columns from the source are converted to the time zone file version of the target.

TIMESTAMP WITH LOCAL TIME ZONE Restrictions

If a table is moved using a transportable mode (transportable table, transportable tablespace, or full transportable), and the following conditions exist, then a warning is issued and the table is not created:

  • The source and target databases have different database time zones

  • The table contains TIMESTAMP WITH LOCAL TIME ZONE data types

To successfully move a table that was not created because of these conditions, use a non-transportable export and import mode.

Character Set and Globalization Support Considerations

The following sections describe the globalization support behavior of Data Pump Export and Import with respect to character set conversion of user data and data definition language (DDL).

User Data

The Export utility always exports user data, including Unicode data, in the character set of the export system. (Character sets are specified at database creation.) If the character set of the source database is different from the character set of the target database, then a single conversion is performed to automatically convert the data to the character set of the import system.

Effect of Character Set Sorting Order on Conversions

If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results. For example, consider the following table definition, which is produced on a database having an ASCII character set:

CREATE TABLE partlist ( part VARCHAR2(10), 
                             partno NUMBER(2) 
                        ) 
                        PARTITION BY RANGE (part) 
                        ( 
                        PARTITION part_low VALUES LESS THAN ('Z') 
                        TABLESPACE tbs_1, 
                        PARTITION part_mid VALUES LESS THAN ('z') 
                        TABLESPACE tbs_2, 
                        PARTITION part_high VALUES LESS THAN (MAXVALUE) 
                        TABLESPACE tbs_3 
                        );

This partitioning scheme makes sense because z comes after Z in ASCII character sets.

When this table is imported into a database based upon an EBCDIC character set, all of the rows in the part_mid partition will migrate to the part_low partition because z comes before Z in EBCDIC character sets. To obtain the desired results, the owner of partlist must repartition the table following the import.

Data Definition Language (DDL)

The Export utility writes dump files using the database character set of the export system.

When the dump file is imported, a character set conversion is required for DDL only if the import system's database character set is different from the export system's database character set.

To minimize data loss due to character set conversions, ensure that the import database character set is a superset of the export database character set.

Single-Byte Character Sets and Export and Import

If the system on which the import occurs uses a 7-bit character set, and you import an 8-bit character set dump file, then some 8-bit characters may be converted to 7-bit equivalents. An indication that this has happened is when accented characters lose the accent mark.

To avoid this unwanted conversion, ensure that the export database and the import database use the same character set.

Multibyte Character Sets and Export and Import

During character set conversion, any characters in the export file that have no equivalent in the import database character set are replaced with a default character. The import database character set defines the default character.

If the import system has to use replacement characters while converting DDL, then a warning message is displayed and the system attempts to load the converted DDL.

If the import system has to use replacement characters while converting user data, then the default behavior is to load the converted data. However, it is possible to instruct the import system to reject rows of user data that were converted using replacement characters. See the Import "DATA_OPTIONS" parameter for details.

To guarantee 100% conversion, the import database character set must be a superset (or equivalent) of the character set used to generate the export file.

Caution:

When the database character set of the export system differs from that of the import system, the import system displays informational messages at the start of the job that show what the database character set is.

When the import database character set is not a superset of the character set used to generate the export file, the import system displays a warning that possible data loss may occur due to character set conversions.