This section describes new features of the Oracle Database 10g utilities and provides pointers to additional information. For information about features that were introduced in earlier releases of Oracle Database, refer to the documentation for those releases.
Data Pump Export and Data Pump Import
The following features have been added for Oracle Database 10g Release 2 (10.2):
The ability to perform database subsetting. This is done by using the SAMPLE parameter on an export operation or by using the TRANSFORM=PCTSPACE parameter on an import operation.
The ability to compress metadata before it is written to a dump file set.
The ability to encrypt column data on an export operation and then to access that data on an import operation.
The ability to downgrade a database through use of the Data Pump Export VERSION parameter.
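For example, database subsetting might be requested as follows (the hr schema, the dpump_dir directory object, and the file names are illustrative only):

```
# Export roughly a 10% sample of the rows in each table of the hr schema
expdp hr SCHEMAS=hr SAMPLE=10 DIRECTORY=dpump_dir DUMPFILE=hr_sample.dmp

# On import, reduce the space allocated to segments to 50% of normal
impdp hr DIRECTORY=dpump_dir DUMPFILE=hr_sample.dmp TRANSFORM=PCTSPACE:50
```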
Automatic Storage Management Command-Line Utility (ASMCMD)
ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within Automatic Storage Management (ASM) disk groups. It can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.
See Chapter 20, "ASM Command-Line Utility (ASMCMD)" for detailed information about this utility and how to use it.
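A typical interactive ASMCMD session might look like the following sketch; the disk group and directory names shown are hypothetical:

```
asmcmd
ASMCMD> ls                   # list the contents of the current directory
ASMCMD> cd +DGROUP1/SAMPLE   # change into a directory within a disk group
ASMCMD> du                   # display space used under the current directory
ASMCMD> mkdir subdir1        # create a new directory
ASMCMD> exit
```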
Data Pump Technology
Oracle Database 10g introduces the new Oracle Data Pump technology, which enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's new data movement utilities, Data Pump Export and Data Pump Import.
See Chapter 1, "Overview of Oracle Data Pump" for more information.
Data Pump Export
Data Pump Export is a utility that makes use of Oracle Data Pump technology to unload data and metadata at high speeds into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
Although the functionality of Data Pump Export (invoked with the expdp command) is similar to that of the original Export utility (exp), they are completely separate utilities.
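For example, a schema-mode export might be invoked as follows (the schema name, directory object, and file names are illustrative):

```
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log
```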
See Chapter 2, "Data Pump Export" for more information.
Data Pump Import
Data Pump Import is a utility for loading a Data Pump Export dump file set into a target system.
Although the functionality of Data Pump Import (invoked with the impdp command) is similar to that of the original Import utility (imp), they are completely separate utilities.
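For example, a dump file set might be loaded into a different target schema by using the REMAP_SCHEMA parameter (the schema and file names here are illustrative):

```
impdp system DIRECTORY=dpump_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test
```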
See Chapter 3, "Data Pump Import" for more information.
Data Pump API
The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
The Data Pump API is implemented through a PL/SQL package, DBMS_DATAPUMP, that provides programmatic access to Data Pump data and metadata movement capabilities.
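As a minimal sketch, a schema-mode export could be driven through DBMS_DATAPUMP as follows; the directory object and file name are hypothetical:

```sql
-- Sketch of a schema export through the DBMS_DATAPUMP package
DECLARE
  h  NUMBER;          -- job handle
  js VARCHAR2(30);    -- final job state
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr.dmp',
                         directory => 'DPUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
  DBMS_OUTPUT.PUT_LINE('Job state: ' || js);
END;
/
```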
See Chapter 5, "The Data Pump API" for more information.
Metadata API
The following features have been added or updated for Oracle Database 10g:
You can now use remap parameters, which enable you to modify an object by changing specific old attribute values to new values. For example, when you are importing data into a database, you can use the REMAP_SCHEMA parameter to change occurrences of the schema name scott in a dump file set to another schema name.
All dictionary objects needed for a full export are supported.
You can request that a heterogeneous collection of objects be returned in creation order.
In addition to retrieving metadata as XML and creation DDL, you can now submit the XML to re-create the object.
See Chapter 18, "Using the Metadata API" for full descriptions of these features.
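As a simple illustration of metadata retrieval, creation DDL for an object can be fetched through the DBMS_METADATA package (the object and schema names are illustrative):

```sql
-- Retrieve the creation DDL for a table via the Metadata API
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM DUAL;
```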
External Tables
A new access driver, ORACLE_DATAPUMP, is now available. See Chapter 14, "The ORACLE_DATAPUMP Access Driver" for more information.
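For example, the ORACLE_DATAPUMP access driver makes it possible to unload the results of a query into an external table; the directory object, table, and file names below are hypothetical:

```sql
-- Populate an external table (and its dump file) from a query
CREATE TABLE emp_ext
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY dpump_dir
    LOCATION ('emp_ext.dmp')
  )
  AS SELECT * FROM employees;
```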
LogMiner Utility
The LogMiner utility, previously documented in the Oracle9i Database Administrator's Guide, is now documented in this guide. The new and changed LogMiner features for Oracle Database 10g are as follows:
The new DBMS_LOGMNR.REMOVE_LOGFILE() procedure removes log files from the list of those being analyzed. This subprogram replaces the REMOVEFILE option of the DBMS_LOGMNR.ADD_LOGFILE() procedure.
The new NO_ROWID_IN_STMT option for the DBMS_LOGMNR.START_LOGMNR procedure lets you filter out the ROWID clause from reconstructed SQL_REDO and SQL_UNDO statements.
Supplemental logging is enhanced as follows:
At the database level, there are two new options for identification key logging:
FOREIGN KEY: Supplementally logs all other columns of a row's foreign key if any column in the foreign key is modified.
ALL: Supplementally logs all the columns in a row (except for LOBs, LONGs, and ADTs) if any column value is modified.
At the table level, there are these new features:
Identification key logging is now supported at the table level (previously it was supported only at the database level).
The NO LOG option provides a way to prevent a column in a user-defined log group from being supplementally logged.
See Chapter 17, "Using LogMiner to Analyze Redo Log Files" for more information.
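The features above might be exercised as in the following sketch; the table name and redo log file path are hypothetical:

```sql
-- Enable identification key logging at the table level
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Start a LogMiner session, suppressing ROWID clauses in
-- reconstructed SQL_REDO and SQL_UNDO statements
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/logs/redo01.log');
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
               DBMS_LOGMNR.NO_ROWID_IN_STMT);
END;
/
SELECT sql_redo FROM v$logmnr_contents;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```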