15 Transporting Data
Transporting data moves the data from one database to another.
- About Transporting Data
  You can transport data at the following levels: database, tablespaces, tables, partitions, and subpartitions.
- Transporting Databases
  You can transport a database to a new Oracle Database instance.
- Transporting Tablespaces Between Databases
  You can transport tablespaces between databases.
- Transporting Tables, Partitions, or Subpartitions Between Databases
  You can transport tables, partitions, and subpartitions between databases.
- Converting Data Between Platforms
  When you perform a transportable operation, and the source platform and the target platform are of different endianness, you must convert the data being transported to the target format. If they are of the same endianness, then no conversion is necessary, and the data can be transported as if both platforms were the same. You can use the DBMS_FILE_TRANSFER package or the RMAN CONVERT command to convert data.
- Guidelines for Transferring Data Files
  Follow a set of guidelines when transferring the data files.
Parent topic: Oracle Database Structure and Storage
15.1 About Transporting Data
You can transport data at the following levels: database, tablespaces, tables, partitions, and subpartitions.
- Purpose of Transporting Data
  Transporting data is much faster than performing either an export/import or unload/load of the same data, because, for user-defined tablespaces, the data files containing the actual data are copied to the target location, and Data Pump transfers only the metadata of the database objects to the new database.
- Transporting Data: Scenarios
  Transporting data is useful in several scenarios.
- Transporting Data Across Platforms
  You can transport data across platforms.
- General Limitations on Transporting Data
  There are general limitations on transporting data, as well as limitations specific to full transportable export/import, transportable tablespaces, and transportable tables.
- Compatibility Considerations for Transporting Data
  When transporting data, Oracle Database computes the lowest compatibility level at which the target database must run.
Parent topic: Transporting Data
15.1.1 Purpose of Transporting Data
Transporting data is much faster than performing either an export/import or unload/load of the same data. It is faster because, for user-defined tablespaces, the data files containing all of the actual data are copied to the target location, and you use Data Pump to transfer only the metadata of the database objects to the new database.
You can transport data at any of the following levels:
- Database
  You can use the full transportable export/import feature to move an entire database to a different database instance.
- Tablespaces
  You can use the transportable tablespaces feature to move a set of tablespaces between databases.
- Tables, partitions, and subpartitions
  You can use the transportable tables feature to move a set of tables, partitions, and subpartitions between databases.
Transportable tablespaces and transportable tables transport only data that resides in user-defined tablespaces. Full transportable export/import, in contrast, transports data that resides in both user-defined and administrative tablespaces, such as SYSTEM and SYSAUX. Specifically, with full transportable export/import, the export dump file includes only the metadata for objects contained within the user-defined tablespaces, but it includes both the metadata and the data for user-defined objects contained within the administrative tablespaces.
Parent topic: About Transporting Data
15.1.2 Transporting Data: Scenarios
Transporting data is useful in several scenarios.
- Scenarios for Full Transportable Export/import
  The full transportable export/import feature is useful in several scenarios.
- Scenarios for Transportable Tablespaces or Transportable Tables
  The transportable tablespaces or transportable tables feature is useful in several scenarios.
Parent topic: About Transporting Data
15.1.2.1 Scenarios for Full Transportable Export/import
The full transportable export/import feature is useful in several scenarios.
- Moving a Non-CDB Into a CDB
  The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes one or many customer-created pluggable databases (PDBs). You can move a non-CDB into a CDB by transporting the database.
- Moving a Database to a New Computer System
  You can use full transportable export/import to move a database from one computer system to another, for example to upgrade the hardware or to move the database to a different platform.
- Upgrading to a New Release of Oracle Database
  You can use full transportable export/import to upgrade a database from Oracle Database 11g Release 2 (11.2.0.3) or later to Oracle Database 18c.
Parent topic: Transporting Data: Scenarios
15.1.2.1.1 Moving a Non-CDB Into a CDB
The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes one or many customer-created pluggable databases (PDBs). You can move a non-CDB into a CDB by transporting the database.
The transported database becomes a pluggable database (PDB) associated with the CDB. Full transportable export/import can move an Oracle Database 11g Release 2 (11.2.0.3) or later database into an Oracle Database 18c CDB efficiently.
See Also:
- "Transporting a Database Using an Export Dump File" for instructions that describe transporting a non-CDB into a PDB in an Oracle Database 18c CDB
- "Transporting a Database Over the Network" for an example that transports an Oracle Database 11g Release 2 (11.2.0.3) database into a PDB in an Oracle Database 18c CDB
Parent topic: Scenarios for Full Transportable Export/import
15.1.2.1.2 Moving a Database to a New Computer System
You can use full transportable export/import to move a database from one computer system to another. You might want to move a database to a new computer system to upgrade the hardware or to move the database to a different platform.
Parent topic: Scenarios for Full Transportable Export/import
15.1.2.1.3 Upgrading to a New Release of Oracle Database
You can use full transportable export/import to upgrade a database from an Oracle Database 11g Release 2 (11.2.0.3) or later to Oracle Database 18c.
To do so, install Oracle Database 18c and create an empty database. Next, use full transportable export/import to transport the Oracle Database 11g Release 2 (11.2.0.3) or later database into the Oracle Database 18c database.
Parent topic: Scenarios for Full Transportable Export/import
15.1.2.2 Scenarios for Transportable Tablespaces or Transportable Tables
The transportable tablespaces or transportable tables feature is useful in several scenarios.
- Scenarios That Apply to Transportable Tablespaces or Transportable Tables
  For some scenarios, either transportable tablespaces or transportable tables can be useful. For other scenarios, only transportable tablespaces or only transportable tables can be useful.
- Transporting and Attaching Partitions for Data Warehousing
  You can use transportable tables and transportable tablespaces to attach partitions for data warehousing.
- Publishing Structured Data on CDs
  Transportable tablespaces and transportable tables both provide a way to publish structured data on CDs.
- Mounting the Same Tablespace Read-Only on Multiple Databases
  You can use transportable tablespaces to mount a tablespace read-only on multiple databases.
- Archiving Historical Data
  When you use transportable tablespaces or transportable tables, the transported data is a self-contained set of files that can be imported into any Oracle database. Therefore, you can archive old or historical data in an enterprise data warehouse using the transportable tablespaces and transportable tables procedures.
- Using Transportable Tablespaces to Perform TSPITR
  You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
- Copying or Moving Individual Tables
  You can use transportable tables to move a table or a set of tables from one database to another without transporting the entire tablespaces that contain the tables. You can also copy or move individual partitions and subpartitions from one database to another using transportable tables.
Parent topic: Transporting Data: Scenarios
15.1.2.2.1 Scenarios That Apply to Transportable Tablespaces or Transportable Tables
For some scenarios, either transportable tablespaces or transportable tables can be useful. For other scenarios, only transportable tablespaces can be useful, or only transportable tables can be useful.
Table 15-1 shows which feature can be used for each scenario.
Table 15-1 Scenarios for Transportable Tablespaces and Transportable Tables
Scenarios | Transportable Tablespaces | Transportable Tables
---|---|---
Transporting and attaching partitions for data warehousing | Yes | Yes
Publishing structured data on CDs | Yes | Yes
Archiving historical data | Yes | Yes
Mounting the same tablespace read-only on multiple databases | Yes | No
Using transportable tablespaces to perform TSPITR | Yes | No
Copying or moving individual tables | No | Yes
The following sections describe these scenarios in more detail.
15.1.2.2.2 Transporting and Attaching Partitions for Data Warehousing
You can use transportable tables and transportable tablespaces to attach partitions for data warehousing.
Typical enterprise data warehouses contain one or more large fact tables. These fact tables can be partitioned by date, making the enterprise data warehouse a historical database. You can build indexes to speed up star queries. Oracle recommends that you build local indexes for such historically partitioned tables to avoid rebuilding global indexes every time you drop the oldest partition from the historical database.
Suppose every month you would like to load one month of data into the data warehouse. There is a large fact table in the data warehouse called sales, which has the following columns:

CREATE TABLE sales (invoice_no NUMBER,
   sale_year  INT NOT NULL,
   sale_month INT NOT NULL,
   sale_day   INT NOT NULL)
 PARTITION BY RANGE (sale_year, sale_month, sale_day)
   (partition jan2011 VALUES LESS THAN (2011, 2, 1),
    partition feb2011 VALUES LESS THAN (2011, 3, 1),
    partition mar2011 VALUES LESS THAN (2011, 4, 1),
    partition apr2011 VALUES LESS THAN (2011, 5, 1),
    partition may2011 VALUES LESS THAN (2011, 6, 1),
    partition jun2011 VALUES LESS THAN (2011, 7, 1));
You create a local non-prefixed index:
CREATE INDEX sales_index ON sales(invoice_no) LOCAL;
Initially, all partitions are empty, and are in the same default tablespace. Each month, you want to create one partition and attach it to the partitioned sales table.

Suppose it is July 2011, and you would like to load the July sales data into the partitioned table. In a staging database, you create a table, jul_sales, with the same column types as the sales table. Optionally, you can create a new tablespace, ts_jul, before you create the table, and create the table in this tablespace. You can create the table jul_sales using the CREATE TABLE ... AS SELECT statement. After creating and populating jul_sales, you can also create an index, jul_sale_index, for the table, indexing the same column as the local index in the sales table. For detailed information about creating and populating a staging table in a data warehousing environment, see Oracle Database Data Warehousing Guide.
After creating the table and building the index, transport the table's data to the data warehouse in one of the following ways:
- You can use transportable tables to transport the jul_sales table to the data warehouse.
- If you created the ts_jul tablespace, then you can use transportable tablespaces to transport the tablespace ts_jul to the data warehouse.
In the data warehouse, add a partition to the sales table for the July sales data. This also creates another partition for the local non-prefixed index:
ALTER TABLE sales ADD PARTITION jul2011 VALUES LESS THAN (2011, 8, 1);
Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
ALTER TABLE sales EXCHANGE PARTITION jul2011 WITH TABLE jul_sales INCLUDING INDEXES WITHOUT VALIDATION;
This statement places the July sales data into the new partition jul2011, attaching the new data to the partitioned table. This statement also converts the index jul_sale_index into a partition of the local index for the sales table. This statement should return immediately, because it only operates on the structural information and it simply switches database pointers. If you know that the data in the new partition does not overlap with data in previous partitions, you are advised to specify the WITHOUT VALIDATION clause. Otherwise, the statement goes through all the new data in the new partition in an attempt to validate the range of that partition.
If all partitions of the sales table came from the same staging database (the staging database is never destroyed), then the exchange statement always succeeds. In general, however, if data in a partitioned table comes from different databases, then the exchange operation might fail. For example, if the jan2011 partition of sales did not come from the same staging database, then the preceding exchange operation can fail, returning the following error:
ORA-19728: data object number conflict between table JUL_SALES and partition JAN2011 in table SALES
To resolve this conflict, move the offending partition by issuing the following statement:
ALTER TABLE sales MOVE PARTITION jan2011;
Then retry the exchange operation.
After the exchange succeeds, you can safely drop jul_sales and jul_sale_index (both are now empty). Thus you have successfully loaded the July sales data into your data warehouse.
15.1.2.2.3 Publishing Structured Data on CDs
Transportable tablespaces and transportable tables both provide a way to publish structured data on CDs.
You can copy the data to be published, including the data files and export dump file, to a CD. This CD can then be distributed. If you are using transportable tablespaces, then you must generate a transportable set before copying the data to the CD.
When customers receive this CD, they can add the CD contents to an existing database without having to copy the data files from the CD to disk storage. For example, suppose that on a Microsoft Windows system the D: drive is the CD drive. You can import the data in the data file catalog.f and the export dump file expdat.dmp as follows:

impdp user_name/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
  TRANSPORT_DATAFILES='D:\catalog.f'
You can remove the CD while the database is still up. Subsequent queries to the data return an error indicating that the database cannot open the data files on the CD. However, operations to other parts of the database are not affected. Placing the CD back into the drive makes the data readable again.
Removing the CD is the same as removing the data files of a read-only tablespace. If you shut down and restart the database, then the database indicates that it cannot find the removed data file and does not open the database (unless you set the initialization parameter READ_ONLY_OPEN_DELAYED to TRUE). When READ_ONLY_OPEN_DELAYED is set to TRUE, the database reads the file only when someone queries the data. Thus, when transporting data from a CD, set the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE, unless the CD is permanently attached to the database.
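As a sketch, the parameter can be set with an ALTER SYSTEM statement; the SCOPE clause shown assumes the instance uses a server parameter file, and the new value takes effect at the next instance startup:

ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED = TRUE SCOPE = SPFILE;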
15.1.2.2.4 Mounting the Same Tablespace Read-Only on Multiple Databases
You can use transportable tablespaces to mount a tablespace read-only on multiple databases.
In this way, separate databases can share the same data on disk instead of duplicating data on separate disks. The tablespace data files must be accessible by all databases. To avoid database corruption, the tablespace must remain read-only in all the databases mounting the tablespace, and the tablespace's data files must be read-only at the operating system level.
The following are two scenarios for mounting the same tablespace read-only on multiple databases:

- The tablespace originates in a database that is separate from the databases that will share the tablespace.
  You generate a transportable set in the source database, put the transportable set onto a disk that is accessible to all databases, and then import the metadata into each database on which you want to mount the tablespace.
- The tablespace already belongs to one of the databases that will share the tablespace.
  It is assumed that the data files are already on a shared disk. In the database where the tablespace already exists, you make the tablespace read-only, generate the transportable set, and then import the tablespace into the other databases, leaving the data files in the same location on the shared disk.
You can make a disk accessible by multiple computers in several ways. You can use either a cluster file system or raw disk. You can also use network file system (NFS), but be aware that if a user queries the shared tablespace while NFS is down, the database will hang until the NFS operation times out.
Later, you can drop the read-only tablespace in some of the databases. Doing so does not modify the data files for the tablespace. Thus, the drop operation does not corrupt the tablespace. Do not make the tablespace read/write unless only one database is mounting the tablespace.
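For illustration, the read-only step in the database that owns the tablespace looks like this (the tablespace name sales_ts is hypothetical):

ALTER TABLESPACE sales_ts READ ONLY;

After the metadata is imported into the other databases, the tablespace must remain read-only in every database that mounts it, as described above.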
15.1.2.2.5 Archiving Historical Data
When you use transportable tablespaces or transportable tables, the transported data is a self-contained set of files that can be imported into any Oracle database. Therefore, you can archive old or historical data in an enterprise data warehouse using the transportable tablespaces and transportable tables procedures.
See Also:
Oracle Database Data Warehousing Guide for more details
15.1.2.2.6 Using Transportable Tablespaces to Perform TSPITR
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
See Also:
Oracle Database Backup and Recovery User's Guide for information about how to perform TSPITR using transportable tablespaces
15.1.2.2.7 Copying or Moving Individual Tables
You can use transportable tables to move a table or a set of tables from one database to another without transporting the entire tablespaces that contain the tables. You can also copy or move individual partitions and subpartitions from one database to another using transportable tables.
15.1.3 Transporting Data Across Platforms
You can transport data across platforms.
This capability can be used to:
- Allow a database to be migrated from one platform to another
- Provide an easier and more efficient means for content providers to publish structured data and distribute it to customers running Oracle Database on different platforms
- Simplify the distribution of data from a data warehouse environment to data marts, which are often running on smaller platforms
- Enable the sharing of read-only tablespaces between Oracle Database installations on different operating systems or platforms, assuming that your storage system is accessible from those platforms and the platforms all have the same endianness, as described in the sections that follow
Many, but not all, platforms are supported for cross-platform data transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering). The following query displays the platforms that support cross-platform data transport:

COLUMN PLATFORM_NAME FORMAT A40
COLUMN ENDIAN_FORMAT FORMAT A14

SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM
 ORDER BY PLATFORM_ID;

PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                  Big
          2 Solaris[tm] OE (64-bit)                  Big
          3 HP-UX (64-bit)                           Big
          4 HP-UX IA (64-bit)                        Big
          5 HP Tru64 UNIX                            Little
          6 AIX-Based Systems (64-bit)               Big
          7 Microsoft Windows IA (32-bit)            Little
          8 Microsoft Windows IA (64-bit)            Little
          9 IBM zSeries Based Linux                  Big
         10 Linux IA (32-bit)                        Little
         11 Linux IA (64-bit)                        Little
         12 Microsoft Windows x86 64-bit             Little
         13 Linux x86 64-bit                         Little
         15 HP Open VMS                              Little
         16 Apple Mac OS                             Big
         17 Solaris Operating System (x86)           Little
         18 IBM Power Based Linux                    Big
         19 HP IA Open VMS                           Little
         20 Solaris Operating System (x86-64)        Little
         21 Apple Mac OS (x86-64)                    Little
If the source platform and the target platform are of the same endianness, then no conversion is necessary, and the data can be transported as if both were on the same platform.
If the source platform and the target platform are of different endianness, then the data being transported must be converted to the target platform format. You can convert the data using one of the following methods:
- The GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package
  When you use one of these procedures to move data files between the source platform and the target platform, each block in each data file is converted to the target platform's endianness. The conversion occurs on the target platform.
- The RMAN CONVERT command
  Running the RMAN CONVERT command is an additional step that can be completed on the source or target platform. It converts the data being transported to the target platform format.
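As an illustrative sketch of the second method (the tablespace name, target platform string, and output location are hypothetical), a conversion run on the source platform might look like this:

RMAN> CONVERT TABLESPACE sales_ts
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/tmp/transport/%U';

The converted copies written under /tmp/transport are the files you then move to the target system; the platform string must match a PLATFORM_NAME value from V$TRANSPORTABLE_PLATFORM.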
Before the data in a data file can be transported to a different platform, the data file header must identify the platform to which it belongs. When you are transporting read-only tablespaces between Oracle Database installations on different platforms, you can accomplish this by making the data file read/write at least once.
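For example, assuming a read-only tablespace named hist_ts (a hypothetical name), briefly cycling it through read/write mode stamps the platform information into the data file headers:

ALTER TABLESPACE hist_ts READ WRITE;
ALTER TABLESPACE hist_ts READ ONLY;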
Parent topic: About Transporting Data
15.1.4 General Limitations on Transporting Data
There are general limitations on transporting data. There are also limitations that are specific to full transportable export/import, transportable tablespaces, or transportable tables.
Be aware of the following general limitations as you plan to transport data:
- The source and the target databases must use compatible database character sets. Specifically, one of the following must be true:
  - The database character sets of the source and the target databases are the same.
  - The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
    - The source database is Oracle Database 10g Release 1 (10.1.0.3) or later.
    - The tablespaces to be transported contain no table columns with character length semantics, or the maximum character width is the same in both the source and target database character sets.
    - The data to be transported contains no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
  - The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
    - The source database is before Oracle Database 10g Release 1 (10.1.0.3).
    - The maximum character width is the same in the source and target database character sets.

  Note: The subset-superset relationship between character sets recognized by Oracle Database is documented in the Oracle Database Globalization Support Guide.
- The source and the target databases must use compatible national character sets. Specifically, one of the following must be true:
  - The national character sets of the source and target databases are the same.
  - The source database is Oracle Database 10g Release 1 (10.1.0.3) or later, and the tablespaces to be transported contain no columns with NCHAR, NVARCHAR2, or NCLOB data type.
- When running a transportable export operation, the following limitations apply:
  - The default tablespace of the user performing the export must not be one of the tablespaces being transported.
  - The default tablespace of the user performing the export must be writable.
- In a non-CDB, you cannot transport a tablespace to a target database that contains a tablespace of the same name.
  In a CDB, you cannot transport a tablespace to a target container that contains a tablespace of the same name. However, different containers can have tablespaces with the same name.
  You can use the REMAP_TABLESPACE import parameter to import the database objects into a different tablespace. Alternatively, before the transport operation, you can rename either the tablespace to be transported or the target tablespace.
  Starting with Oracle Database 12c Release 2 (12.2), the Recovery Manager (RMAN) RECOVER command can move tables to a different schema while remapping a tablespace. See Oracle Database Backup and Recovery User's Guide for more information.
- In a CDB, the default Data Pump directory object, DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are using with Data Pump export/import.
- Transporting data with XMLTypes has the following limitations:
  - The target database must have XML DB installed.
  - Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
  - If the schema for a transported XMLType table is not present in the target database, then it is imported and registered. If the schema already exists in the target database, then a message is displayed during import.
  - You must use only Data Pump to export and import the metadata for data that contains XMLTypes.

  The following query returns a list of tablespaces that contain XMLTypes:

  SELECT DISTINCT p.tablespace_name
    FROM dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
   WHERE t.table_name = x.table_name
     AND t.tablespace_name = p.tablespace_name
     AND x.owner = u.username;

  See Oracle XML DB Developer's Guide for information on XMLTypes.
- Types whose interpretation is application-specific and opaque to the database (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform. Types and objects that use these opaque types, either directly or indirectly, are also subject to this limitation.
- When you transport a tablespace containing tables with TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) data between databases with different time zones, the tables with the TSLTZ data are not transported. Error messages describe the tables that were not transported. However, tables in the tablespace that do not contain TSLTZ data are transported.
  You can determine the time zone of a database with the following query:

  SELECT DBTIMEZONE FROM DUAL;

  You can alter the time zone for a database with an ALTER DATABASE SQL statement.
  You can use Data Pump to perform a conventional export/import of tables with TSLTZ data after the transport operation completes.
- Analytic workspaces cannot be part of cross-platform transport operations. If the source platform and target platform are different, then use Data Pump export/import to export and import analytic workspaces. See Oracle OLAP DML Reference for more information about analytic workspaces.
Note:
Do not invoke the Data Pump export utility expdp or the import utility impdp as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
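Related to these limitations, a common preliminary check before a transportable tablespaces operation is to verify that the tablespace set is self-contained. As a sketch (the tablespace names are hypothetical), the DBMS_TTS package performs this check and records any violations:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);

SELECT * FROM TRANSPORT_SET_VIOLATIONS;

If the view returns no rows, the tablespace set is self-contained; any rows returned describe objects that span tablespaces outside the set.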
Certain limitations are specific to full transportable export/import, transportable tablespaces, or transportable tables. See the appropriate section for information.
Parent topic: About Transporting Data
15.1.5 Compatibility Considerations for Transporting Data
When transporting data, Oracle Database computes the lowest compatibility level at which the target database must run.
A tablespace or table can always be transported to a database with the same or higher compatibility setting using transportable tablespaces, whether the target database is on the same or a different platform. The database signals an error if the compatibility level of the source database is higher than the compatibility level of the target database.
The following table shows the minimum compatibility requirements of the source and target databases in various scenarios. The source and target database need not have the same compatibility setting.
Table 15-2 Minimum Compatibility Requirements
Transport Scenario | Minimum Compatibility Setting (Source Database) | Minimum Compatibility Setting (Target Database)
---|---|---
Transporting a database using full transportable export/import | 12.0 | 12.0
Transporting a tablespace between databases on the same platform using transportable tablespaces | 8.0 | 8.0
Transporting a tablespace with a database block size different from the target database using transportable tablespaces | 9.0 | 9.0
Transporting a tablespace between databases on different platforms using transportable tablespaces | 10.0 | 10.0
Transporting tables between databases | 11.2.0 | 11.2.0
When you use full transportable export/import, the source database must be an Oracle Database 11g Release 2 (11.2.0.3) or later database, and the target database must be an Oracle Database 12c or later database. When transporting a database from an Oracle Database 11g Release 2 (11.2.0.3) or later database to an Oracle Database 12c or later database, the VERSION Data Pump export parameter must be set to 12 or higher. When transporting a database from an Oracle Database 18c database to an Oracle Database 18c database, the COMPATIBLE initialization parameter must be set to 18.0.0 or higher.
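As a quick sketch, you can check a database's current compatibility level before planning a transport with a standard query against V$PARAMETER:

SELECT name, value FROM v$parameter WHERE name = 'compatible';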
Parent topic: About Transporting Data
15.2 Transporting Databases
You can transport a database to a new Oracle Database instance.
- Introduction to Full Transportable Export/Import
  You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.
- Limitations on Full Transportable Export/import
  There are limitations on full transportable export/import.
- Transporting a Database Using an Export Dump File
  You can transport a database using an export dump file.
- Transporting a Database Over the Network
  You can transport a database over the network.
Parent topic: Transporting Data
15.2.1 Introduction to Full Transportable Export/Import
You can use the full transportable export/import feature to copy an entire database from one Oracle Database instance to another.
You can use Data Pump to produce an export dump file, transport the dump file to the target database if necessary, and then import the export dump file. Alternatively, you can use Data Pump to copy the database over the network.
The tablespaces in the database being transported can be either dictionary managed or locally managed. The tablespaces in the database are not required to be of the same block size as the target database standard block size.
Note:
This method for transporting a database requires that you place the user-defined tablespaces in the database in read-only mode until you complete the export. If this is undesirable, then you can use the transportable tablespaces from backup feature described in Oracle Database Backup and Recovery User's Guide.
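As an illustrative sketch of the network alternative (the connect string, data file path, and database link name source_db_link are hypothetical), a full transportable import over the network might look like this:

impdp system@target_db FULL=Y TRANSPORTABLE=ALWAYS \
  NETWORK_LINK=source_db_link \
  TRANSPORT_DATAFILES='/oradata/target/users01.dbf' \
  LOGFILE=import.log

With NETWORK_LINK, no dump file is produced; the metadata is imported directly over the database link, and only the data files themselves are copied to the target system.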
Parent topic: Transporting Databases
15.2.2 Limitations on Full Transportable Export/import
There are limitations on full transportable export/import.
Be aware of the following limitations on full transportable export/import:
- The general limitations described in "General Limitations on Transporting Data" apply to full transportable export/import.
- Full transportable export/import can export and import user-defined database objects in administrative tablespaces using conventional Data Pump export/import, such as direct path or external table. Administrative tablespaces are non-user tablespaces supplied with Oracle Database, such as the SYSTEM and SYSAUX tablespaces.
- Full transportable export/import cannot transport a database object that is defined in both an administrative tablespace (such as SYSTEM or SYSAUX) and a user-defined tablespace. For example, a partitioned table might be stored in both a user-defined tablespace and an administrative tablespace. If you have such database objects in your database, then you can redefine them before transporting them so that they are stored entirely in either an administrative tablespace or a user-defined tablespace. If the database objects cannot be redefined, then you can use conventional Data Pump export/import.
- When transporting a database over the network using full transportable export/import, auditing cannot be enabled for tables stored in an administrative tablespace (such as SYSTEM or SYSAUX) when the audit trail information itself is stored in a user-defined tablespace. See Oracle Database Security Guide for more information about auditing.
Parent topic: Transporting Databases
15.2.3 Transporting a Database Using an Export Dump File
You can transport a database using an export dump file.
The following list of tasks summarizes the process of transporting a database using an export dump file. Details for each task are provided in the subsequent example.
- At the source database, place each of the user-defined tablespaces in read-only mode and export the database.
  Ensure that the following parameters are set to the specified values:
  - TRANSPORTABLE=ALWAYS
  - FULL=Y
  If the source database is an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database, then you must set the VERSION parameter to 12 or higher.
  If the source database contains any encrypted tablespaces or tablespaces containing tables with encrypted columns, then you must either specify ENCRYPTION_PWD_PROMPT=YES or specify the ENCRYPTION_PASSWORD parameter.
  The export dump file includes the metadata for objects contained within the user-defined tablespaces and both the metadata and data for user-defined objects contained within the administrative tablespaces, such as SYSTEM and SYSAUX.
- Transport the export dump file.
  Copy the export dump file to a place that is accessible to the target database.
- Transport the data files for all of the user-defined tablespaces in the database.
  Copy the data files to a place that is accessible to the target database.
  If the source platform and target platform are different, then check the endian format of each platform by running the query on the V$TRANSPORTABLE_PLATFORM view in "Transporting Data Across Platforms".
  If the source platform's endian format is different from the target platform's endian format, then use one of the following methods to convert the data files:
  - Use the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically.
  - Use the RMAN CONVERT command to convert the data files to the target platform's endian format.
  See "Converting Data Between Platforms" for more information.
- (Optional) Restore the user-defined tablespaces to read/write mode on the source database.
- At the target database, import the database.
  When the import is complete, the user-defined tablespaces are in read/write mode.
Example
These tasks for transporting a database are illustrated more fully in the example that follows, where it is assumed the following data files and tablespaces exist:
Tablespace | Type | Data File
---|---|---
sales | User-defined | /u01/app/oracle/oradata/mydb/sales01.dbf
customers | User-defined | /u01/app/oracle/oradata/mydb/cust01.dbf
employees | User-defined | /u01/app/oracle/oradata/mydb/emp01.dbf
SYSTEM | Administrative | /u01/app/oracle/oradata/mydb/system01.dbf
SYSAUX | Administrative | /u01/app/oracle/oradata/mydb/sysaux01.dbf
This example makes the following additional assumptions:
- The target database is a new database that is being populated with the data from the source database. The name of the source database is mydb.
- Both the source database and the target database are Oracle Database 18c databases.
Complete the following tasks to transport the database using an export dump file:
- Task 1 Generate the Export Dump File
- Generate the export dump file by completing the following steps:
  - Start SQL*Plus and connect to the database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
    See "Connecting to the Database with SQL*Plus" for instructions.
  - Make all of the user-defined tablespaces in the database read-only.
    ALTER TABLESPACE sales READ ONLY;
    ALTER TABLESPACE customers READ ONLY;
    ALTER TABLESPACE employees READ ONLY;
  - Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role and specify the full transportable export/import options.
    SQL> HOST
    $ expdp user_name full=y dumpfile=expdat.dmp directory=data_pump_dir
        transportable=always logfile=export.log
    Password: password
    You must always specify TRANSPORTABLE=ALWAYS, which determines whether the transportable option is used.
    This example specifies the following Data Pump parameters:
    - The FULL parameter specifies that the entire database is being exported.
    - The DUMPFILE parameter specifies the name of the structural information export dump file to be created, expdat.dmp.
    - The DIRECTORY parameter specifies the directory object that points to the operating system or Oracle Automatic Storage Management location of the dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to the user running the Export utility. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
      In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to the users SYS and SYSTEM.
      However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
      See Also:
      - Oracle Database Utilities for information about the default directory when the DIRECTORY parameter is omitted
      - Oracle Multitenant Administrator's Guide for more information about PDBs
    - The LOGFILE parameter specifies the file name of the log file to be written by the export utility. In this example, the log file is written to the same directory as the dump file, but it can be written to a different location.
    To perform a full transportable export on an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database, use the VERSION parameter, as shown in the following example:
    expdp user_name full=y dumpfile=expdat.dmp directory=data_pump_dir
        transportable=always version=12 logfile=export.log
    Full transportable import is supported only for Oracle Database 12c and later databases.
    Note:
    In this example, the Data Pump utility is used to export only data dictionary structural information (metadata) for the user-defined tablespaces. Actual data is unloaded only for the administrative tablespaces (SYSTEM and SYSAUX), so this operation goes relatively quickly even for large user-defined tablespaces.
  - Check the log file for errors, and take note of the dump file and data files that you must transport to the target database. expdp outputs the names and paths of these files in messages like these:
    ******************************************************************************
    Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
      /u01/app/oracle/admin/mydb/dpdump/expdat.dmp
    ******************************************************************************
    Datafiles required for transportable tablespace SALES:
      /u01/app/oracle/oradata/mydb/sales01.dbf
    Datafiles required for transportable tablespace CUSTOMERS:
      /u01/app/oracle/oradata/mydb/cust01.dbf
    Datafiles required for transportable tablespace EMPLOYEES:
      /u01/app/oracle/oradata/mydb/emp01.dbf
  - When finished, exit back to SQL*Plus:
    $ exit
See Also:
Oracle Database Utilities for information about using the Data Pump utility
- Task 2 Transport the Export Dump File
  Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing. The new location must be accessible to the target database.
  At the target database, run the following query to determine the location of DATA_PUMP_DIR:
  SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';

  OWNER      DIRECTORY_NAME   DIRECTORY_PATH
  ---------- ---------------- -----------------------------------
  SYS        DATA_PUMP_DIR    C:\app\orauser\admin\orawin\dpdump\
- Task 3 Transport the Data Files for the User-Defined Tablespaces
  Transport the data files of the user-defined tablespaces in the database to a place that is accessible to the target database.
  In this example, transfer the following data files from the source database to the target database:
  - sales01.dbf
  - cust01.dbf
  - emp01.dbf
If you are transporting the database to a platform different from the source platform, then determine if cross-platform database transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, then no conversion is necessary. Otherwise you must do a conversion of each tablespace in the database either at the source or target database.
If you are transporting the database to a different platform, you can execute the following query on each platform. If the query returns a row, then the platform supports cross-platform tablespace transport.
  SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
   WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
  The following is the query result from the source platform:
  PLATFORM_NAME                      ENDIAN_FORMAT
  ---------------------------------- --------------
  Solaris[tm] OE (32-bit)            Big
  The following is the query result from the target platform:
  PLATFORM_NAME                      ENDIAN_FORMAT
  ---------------------------------- --------------
  Microsoft Windows IA (32-bit)      Little
  In this example, you can see that the endian formats are different. Therefore, in this case, a conversion is necessary for transporting the database. Use either the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically. Transport the data files to the location of the existing data files of the target database. On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/dbname/ or +DISKGROUP/dbname/datafile/. Alternatively, you can use the RMAN CONVERT command to convert the data files. See "Converting Data Between Platforms" for more information.
  Note:
  If no endianness conversion of the tablespaces is needed, then you can transfer the files using any file transfer method.
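As a sketch, pulling a data file from the source database over a database link with DBMS_FILE_TRANSFER might look like the following (the directory object names and the database link name are assumptions for illustration):

```sql
-- Run at the target database; GET_FILE pulls the file over the database link
-- and converts it to the target platform's endian format automatically.
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DATA_DIR',  -- directory object at the source
    source_file_name             => 'sales01.dbf',
    source_database              => 'sourcedb',      -- database link to the source
    destination_directory_object => 'DEST_DATA_DIR', -- directory object at the target
    destination_file_name        => 'sales01.dbf');
END;
/
```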
- Task 4 (Optional) Restore Tablespaces to Read/Write Mode
  Make the transported tablespaces read/write again at the source database, as follows:
  ALTER TABLESPACE sales READ WRITE;
  ALTER TABLESPACE customers READ WRITE;
  ALTER TABLESPACE employees READ WRITE;
You can postpone this task to first ensure that the import process succeeds.
- Task 5 At the Target Database, Import the Database
  Invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable export/import options.
  impdp user_name full=Y dumpfile=expdat.dmp directory=data_pump_dir
     transport_datafiles=
        '/u01/app/oracle/oradata/mydb/sales01.dbf',
        '/u01/app/oracle/oradata/mydb/cust01.dbf',
        '/u01/app/oracle/oradata/mydb/emp01.dbf'
     logfile=import.log
  Password: password
This example specifies the following Data Pump parameters:
  - The FULL parameter specifies that the entire database is being imported in FULL mode.
  - The DUMPFILE parameter specifies the exported file containing the metadata for the user-defined tablespaces and both the metadata and data for the administrative tablespaces to be imported.
  - The DIRECTORY parameter specifies the directory object that identifies the location of the export dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to the user running the Import utility. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
    In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to the users SYS and SYSTEM.
    However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
    See Also:
    - Oracle Database Utilities for information about the default directory when the DIRECTORY parameter is omitted
    - Oracle Multitenant Administrator's Guide for more information about PDBs
  - The TRANSPORT_DATAFILES parameter identifies all of the data files to be imported.
    You can specify the TRANSPORT_DATAFILES parameter multiple times in a parameter file specified with the PARFILE parameter if there are many data files.
  - The LOGFILE parameter specifies the file name of the log file to be written by the import utility. In this example, the log file is written to the directory from which the dump file is read, but it can be written to a different location.
After this statement executes successfully, check the import log file to ensure that no unexpected error has occurred.
When dealing with a large number of data files, specifying the list of data file names on the command line can be a laborious process. It can even exceed the command line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
impdp user_name parfile='par.f'
For example, par.f might contain the following lines:
FULL=Y
DUMPFILE=expdat.dmp
DIRECTORY=data_pump_dir
TRANSPORT_DATAFILES=
'/u01/app/oracle/oradata/mydb/sales01.dbf',
'/u01/app/oracle/oradata/mydb/cust01.dbf',
'/u01/app/oracle/oradata/mydb/emp01.dbf'
LOGFILE=import.log
Note:
- During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
- When performing a network database import, the TRANSPORTABLE parameter must be set to always.
- When you are importing into a PDB in a CDB, specify the connect identifier for the PDB after the user name. For example, if the connect identifier for the PDB is hrpdb, then enter the following when you run the Oracle Data Pump Import utility:
  impdp user_name@hrpdb ...
Parent topic: Transporting Databases
15.2.4 Transporting a Database Over the Network
You can transport a database over the network.
To transport a database over the network, you perform an import using the NETWORK_LINK parameter. The import is performed over a database link, and there is no dump file involved.
The following list of tasks summarizes the process of transporting a database over the network. Details for each task are provided in the subsequent example.
- Create a database link from the target database to the source database.
  The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role. The user on the source database cannot be a user with the SYSDBA administrative privilege. If the database link is a connected user database link, then the user on the target database cannot be a user with the SYSDBA administrative privilege. See "Users of Database Links" for information about connected user database links.
In the source database, make the user-defined tablespaces in the database read-only.
-
Transport the data files for the all of the user-defined tablespaces in the database.
Copy the data files to a place that is accessible to the target database.
If the source platform and target platform are different, then check the endian format of each platform by running the query on the
V$TRANSPORTABLE_PLATFORM
view in "Transporting Data Across Platforms".If the source platform's endian format is different from the target platform's endian format, then use one of the following methods to convert the data files:
-
Use the
GET_FILE
orPUT_FILE
procedure in theDBMS_FILE_TRANSFER
package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically. -
Use the RMAN
CONVERT
command to convert the data files to the target platform's endian format.
See "Converting Data Between Platforms" for more information.
-
- At the target database, import the database.
  Invoke the Data Pump utility to import the metadata for the user-defined tablespaces and both the metadata and data for the administrative tablespaces.
  Ensure that the following parameters are set to the specified values:
  - TRANSPORTABLE=ALWAYS
  - TRANSPORT_DATAFILES=list_of_datafiles
  - FULL=Y
  - NETWORK_LINK=source_database_link
    Replace source_database_link with the name of the database link to the source database.
  - VERSION=12
    If the source database is an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database, then the VERSION parameter is required and must be set to 12. If the source database is an Oracle Database 12c or later database, then the VERSION parameter is not required.
  If the source database contains any encrypted tablespaces or tablespaces containing tables with encrypted columns, then you must either specify ENCRYPTION_PWD_PROMPT=YES or specify the ENCRYPTION_PASSWORD parameter.
  The Data Pump network import copies the metadata for objects contained within the user-defined tablespaces and both the metadata and data for user-defined objects contained within the administrative tablespaces, such as SYSTEM and SYSAUX.
  When the import is complete, the user-defined tablespaces are in read/write mode.
- (Optional) Restore the user-defined tablespaces to read/write mode on the source database.
Example
These tasks for transporting a database are illustrated more fully in the example that follows, where it is assumed the following data files and tablespaces exist:
Tablespace | Type | Data File
---|---|---
sales | User-defined | /u01/app/oracle/oradata/mydb/sales01.dbf
customers | User-defined | /u01/app/oracle/oradata/mydb/cust01.dbf
employees | User-defined | /u01/app/oracle/oradata/mydb/emp01.dbf
SYSTEM | Administrative | /u01/app/oracle/oradata/mydb/system01.dbf
SYSAUX | Administrative | /u01/app/oracle/oradata/mydb/sysaux01.dbf
This example makes the following additional assumptions:
- The target database is a new database that is being populated with the data from the source database. The name of the source database is sourcedb.
- The source database and target database are running on the same platform with the same endianness.
  To check the endianness of a platform, run the following query:
  SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
   WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
- The sales tablespace is encrypted. The other tablespaces are not encrypted.
- The source database is an Oracle Database 11g Release 2 (11.2.0.3) database and the target database is an Oracle Database 18c database.
Note:
This example illustrates the tasks required to transport an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database to a new Oracle Database 18c PDB inside of a CDB. See Oracle Multitenant Administrator's Guide. These tasks also illustrate how to transport one non-CDB to another non-CDB.
Complete the following tasks to transport the database over the network:
- Task 1 Create a Database Link from the Target Database to the Source Database
- Create a database link from the target database to the source database by completing the following steps:
  - Ensure that network connectivity is configured between the source database and the target database.
    See Oracle Database Net Services Administrator's Guide for instructions.
  - Start SQL*Plus and connect to the target database as the administrator who will transport the database with Data Pump import. This user must have the DATAPUMP_IMP_FULL_DATABASE role to transport the database.
    See "Connecting to the Database with SQL*Plus" for instructions.
  - Create the database link:
    CREATE PUBLIC DATABASE LINK sourcedb USING 'sourcedb';
    Specify the service name for the source database in the USING clause.
    During the import operation, the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role. The user on the source database cannot be a user with the SYSDBA administrative privilege.
- Task 2 Make the User-Defined Tablespaces Read-Only
- Complete the following steps:
  - Start SQL*Plus and connect to the source database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
    See "Connecting to the Database with SQL*Plus" for instructions.
  - Make all of the user-defined tablespaces in the database read-only.
    ALTER TABLESPACE sales READ ONLY;
    ALTER TABLESPACE customers READ ONLY;
    ALTER TABLESPACE employees READ ONLY;
- Task 3 Transport the Data Files for the User-Defined Tablespaces
- Transport the data files to the location of the existing data files of the target database.
  On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/dbname/ or +DISKGROUP/dbname/datafile/.
  In this example, transfer the following data files from the source database to the target database:
  - sales01.dbf
  - cust01.dbf
  - emp01.dbf
- Task 4 At the Target Database, Import the Database
- Invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable export/import options.
  impdp user_name full=Y network_link=sourcedb transportable=always
     transport_datafiles=
        '/u01/app/oracle/oradata/mydb/sales01.dbf',
        '/u01/app/oracle/oradata/mydb/cust01.dbf',
        '/u01/app/oracle/oradata/mydb/emp01.dbf'
     encryption_pwd_prompt=YES version=12 logfile=import.log
  Password: password
This example specifies the following Data Pump parameters:
  - The FULL parameter specifies that the entire database is being imported in FULL mode.
  - The NETWORK_LINK parameter specifies the database link used for the network import.
  - The TRANSPORTABLE parameter specifies that the import uses the transportable option.
  - The TRANSPORT_DATAFILES parameter identifies all of the data files to be imported.
    You can specify the TRANSPORT_DATAFILES parameter multiple times in a parameter file specified with the PARFILE parameter if there are many data files.
  - The ENCRYPTION_PWD_PROMPT parameter instructs Data Pump to prompt you for the encryption password, and Data Pump encrypts data and metadata sent over the network connection. Either the ENCRYPTION_PWD_PROMPT parameter or the ENCRYPTION_PASSWORD parameter is required when encrypted tablespaces or tables with encrypted columns are part of the import operation.
  - The VERSION parameter is set to 12 because the source database is an Oracle Database 11g Release 2 (11.2.0.3) or later Oracle Database 11g database.
  - The LOGFILE parameter specifies the file name of the log file to be written by the import utility.
After this statement executes successfully, check the import log file to ensure that no unexpected error has occurred.
When dealing with a large number of data files, specifying the list of data file names on the command line can be a laborious process. It can even exceed the command line limit. In this situation, you can use an import parameter file.
Use of an import parameter file is also recommended when encrypted tablespaces or tables with encrypted columns are part of the import operation. In this case, specify ENCRYPTION_PWD_PROMPT=YES in the import parameter file.
For example, you can invoke the Data Pump import utility as follows:
impdp user_name parfile='par.f'
For example, par.f might contain the following lines:
FULL=Y
NETWORK_LINK=sourcedb
TRANSPORTABLE=always
TRANSPORT_DATAFILES=
'/u01/app/oracle/oradata/mydb/sales01.dbf',
'/u01/app/oracle/oradata/mydb/cust01.dbf',
'/u01/app/oracle/oradata/mydb/emp01.dbf'
ENCRYPTION_PWD_PROMPT=YES
VERSION=12
LOGFILE=import.log
Note:
- During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
- When you are importing into a PDB in a CDB, specify the connect identifier for the PDB after the user name. For example, if the connect identifier for the PDB is hrpdb, then enter the following when you run the Oracle Data Pump Import utility:
  impdp user_name@hrpdb ...
See Also:
Oracle Database Utilities for information about using the import utility
- Task 5 (Optional) Restore User-Defined Tablespaces to Read/Write Mode
  Make the user-defined tablespaces read/write again at the source database, as follows:
  ALTER TABLESPACE sales READ WRITE;
  ALTER TABLESPACE customers READ WRITE;
  ALTER TABLESPACE employees READ WRITE;
You can postpone this task to first ensure that the import process succeeds.
Parent topic: Transporting Databases
15.3 Transporting Tablespaces Between Databases
You can transport tablespaces between databases.
Note:
To import a transportable tablespace set into an Oracle database on a different platform, both databases must have compatibility set to at least 10.0.0. See "Compatibility Considerations for Transporting Data" for a discussion of database compatibility for transporting tablespaces across release levels.
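For example, you can verify the compatibility setting on each database with a query like the following:

```sql
-- Run on both the source and the target; each must return 10.0.0 or higher
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'compatible';
```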
- Introduction to Transportable Tablespaces
You can use the transportable tablespaces feature to copy a set of tablespaces from one Oracle Database to another.
- Limitations on Transportable Tablespaces
There are limitations on transportable tablespaces.
- Transporting Tablespaces Between Databases
You can transport a tablespace between databases.
Parent topic: Transporting Data
15.3.1 Introduction to Transportable Tablespaces
You can use the transportable tablespaces feature to copy a set of tablespaces from one Oracle Database to another.
The tablespaces being transported can be either dictionary managed or locally managed. The transported tablespaces are not required to be of the same block size as the target database standard block size. These scenarios are discussed in "Transporting Data: Scenarios".
There are two ways to transport a tablespace:

- Manually, following the steps described in this section. This involves issuing commands to SQL*Plus and Data Pump.
- Using the Transport Tablespaces Wizard in Oracle Enterprise Manager Cloud Control.
  To run the Transport Tablespaces Wizard:
  - Log in to Cloud Control as a user that has the DATAPUMP_EXP_FULL_DATABASE role.
  - Access the Database Home page.
  - From the Schema menu, select Database Export/Import, then Transport Tablespaces.

Note:

- This method for transporting tablespaces requires that you place the tablespaces to be transported in read-only mode until you complete the transporting process. If this is undesirable, you can use the transportable tablespaces from backup feature, described in Oracle Database Backup and Recovery User's Guide.
- You must use Data Pump for transportable tablespaces. The only circumstance under which you can use the original import and export utilities, IMP and EXP, is for a backward migration of XMLType data to an Oracle Database 10g Release 2 (10.2) or earlier database. See Oracle Database Utilities for more information on these utilities and Oracle XML DB Developer's Guide for more information on XMLTypes.
See Also:
Oracle Database Data Warehousing Guide for information about using transportable tablespaces in a data warehousing environment
Parent topic: Transporting Tablespaces Between Databases
15.3.2 Limitations on Transportable Tablespaces
This section lists the limitations on transportable tablespaces.
Be aware of the following limitations for transportable tablespaces:

- The general limitations described in "General Limitations on Transporting Data" apply to transportable tablespaces.
- When transporting a tablespace set, objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
- Transportable tablespaces cannot transport tables with TIMESTAMP WITH TIMEZONE (TSTZ) data across platforms with different time zone file versions. The transportable tablespace operation skips these tables. You can export and import these tables conventionally.
  See Oracle Database Utilities for more information.
- You cannot include administrative tablespaces, such as SYSTEM and SYSAUX, in a transportable tablespace set.
- Transportable tablespaces cannot contain any tables with columns that are encrypted using TDE column encryption.
- If a tablespace is encrypted using TDE, then you can transport it only to a platform that uses the same endian format. To go across endianness, you must decrypt, transport, and re-encrypt the tablespace. Starting with Oracle Database release 12.2, these operations can be performed online.
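As a sketch, the online decrypt and re-encrypt operations available starting with release 12.2 might look like the following (the tablespace name, algorithm, and file names are illustrative):

```sql
-- At the source, decrypt the TDE tablespace online before cross-endian transport
ALTER TABLESPACE sales ENCRYPTION ONLINE DECRYPT;

-- After transport and import, re-encrypt online at the target; online conversion
-- rewrites the data files, so new file names may need to be supplied
ALTER TABLESPACE sales ENCRYPTION ONLINE USING 'AES256' ENCRYPT
  FILE_NAME_CONVERT = ('sales01.dbf', 'sales01_enc.dbf');
```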
Parent topic: Transporting Tablespaces Between Databases
15.3.3 Transporting Tablespaces Between Databases
You can transport a tablespace between databases.
The following list of tasks summarizes the process of transporting a tablespace. Details for each task are provided in the subsequent example.
Example 15-1
These tasks for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed the following data files and tablespaces exist:
Tablespace | Data File
---|---
sales_1 | /u01/app/oracle/oradata/salesdb/sales_101.dbf
sales_2 | /u01/app/oracle/oradata/salesdb/sales_201.dbf
- Task 1: Pick a Self-Contained Set of Tablespaces
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context, "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces.
- Task 2: Generate a Transportable Tablespace Set
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set.
- Task 3: Transport the Export Dump File
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing. The new location must be accessible to the target database.
- Task 4: Transport the Tablespace Set
Transport the data files of the tablespaces to a place that is accessible to the target database.
- Task 5: (Optional) Restore Tablespaces to Read/Write Mode
Make the transported tablespaces read/write again at the source database.
- Task 6: Import the Tablespace Set
To complete the transportable tablespaces operation, import the tablespace set.
Parent topic: Transporting Tablespaces Between Databases
15.3.3.1 Task 1: Pick a Self-Contained Set of Tablespaces
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces.
Some examples of self-contained tablespace violations are:
- An index inside the set of tablespaces is for a table outside of the set of tablespaces.
Note: It is not a violation if a corresponding index for a table is outside of the set of tablespaces.
- A partitioned table is partially contained in the set of tablespaces.
The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of the partitions of a partitioned table. To transport a subset of a partitioned table, you must exchange the partitions into tables.
See Oracle Database VLDB and Partitioning Guide for information about exchanging partitions.
- A referential integrity constraint points to a table across a set boundary.
When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers.
- A table inside the set of tablespaces contains a LOB column that points to LOBs outside the set of tablespaces.
- An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces.
To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially assigned to SYS) to execute this procedure.
When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self-containment. You can optionally specify whether constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE.
The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set.
For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i, because the index and data will be inconsistent after the transport. A full containment check ensures that there are no dependencies going outside or coming into the transportable set. See the example for TSPITR in the Oracle Database Backup and Recovery User's Guide.
Note:
The default for transportable tablespaces is to check for self containment rather than full containment.
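The full containment check described above can be sketched as follows, assuming the sales_1 and sales_2 tablespaces used throughout this example. Passing TRUE as the third argument requests the TTS_FULL_CHECK behavior:

```sql
-- Sketch: full (strict) containment check, as needed for TSPITR.
-- The third argument (TRUE) enables the TTS_FULL_CHECK behavior, so both
-- inbound and outbound dependencies are reported.
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE, TRUE);

-- Any dependencies crossing the set boundary in either direction appear here:
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
```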
The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE).
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next task, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.
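The two violations above could be resolved in several ways. A minimal sketch, with illustrative partition and table names (q1 and jim.sales_q1_tmp are hypothetical, not from this example):

```sql
-- Option for the constraint violation: disable (or drop) the offending
-- foreign key before the export, or simply exclude constraints from it.
ALTER TABLE jim.emp DISABLE CONSTRAINT dept_fk;

-- Option for the partially contained partitioned table: exchange the
-- partition into a standalone table so the set no longer splits the table.
-- Partition q1 and table jim.sales_q1_tmp are hypothetical names.
ALTER TABLE jim.sales EXCHANGE PARTITION q1 WITH TABLE jim.sales_q1_tmp;

-- Re-run the check to confirm the set is now self-contained:
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
```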
See Also:
- Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_TTS package
- Oracle Database Backup and Recovery User's Guide for information specific to using the DBMS_TTS package for TSPITR
Parent topic: Transporting Tablespaces Between Databases
15.3.3.2 Task 2: Generate a Transportable Tablespace Set
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set.
To generate a transportable tablespace set:
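The steps typically reduce to making the tablespace set read-only and running a metadata-only Data Pump export. A minimal sketch from SQL*Plus, assuming the sales_1 and sales_2 tablespaces from this example (the dump and log file names are illustrative):

```sql
-- 1. Make the tablespaces in the set read-only on the source database.
ALTER TABLESPACE sales_1 READ ONLY;
ALTER TABLESPACE sales_2 READ ONLY;

-- 2. Export only the metadata for the tablespace set, as a user granted the
--    DATAPUMP_EXP_FULL_DATABASE role (shown here with the SQL*Plus HOST command).
HOST expdp system dumpfile=expdat.dmp directory=data_pump_dir transport_tablespaces=sales_1,sales_2 logfile=tts_export.log
```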
Parent topic: Transporting Tablespaces Between Databases
15.3.3.3 Task 3: Transport the Export Dump File
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing. The new location must be accessible to the target database.
At the target database, run the following query to determine the location of DATA_PUMP_DIR:

SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';

OWNER      DIRECTORY_NAME   DIRECTORY_PATH
---------- ---------------- -----------------------------------
SYS        DATA_PUMP_DIR    C:\app\orauser\admin\orawin\dpdump\
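One way to move the dump file is with the DBMS_FILE_TRANSFER package. A sketch, in which the dump file name (expdat.dmp) and the database link to the target (target_db) are illustrative assumptions:

```sql
-- Sketch: push the export dump file from the source database to the target
-- database over a database link. Run on the source database.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'expdat.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'expdat.dmp',
    destination_database         => 'target_db');
END;
/
```

Any other file transfer method (FTP, scp, operating system copy) works equally well for the dump file.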
Parent topic: Transporting Tablespaces Between Databases
15.3.3.4 Task 4: Transport the Tablespace Set
Transport the data files of the tablespaces to a place that is accessible to the target database.
In this example, transfer the following files from the source database to the target database:
- sales_101.dbf
- sales_201.dbf
If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise you must do a conversion of the tablespace set either at the source or target database.
If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport.
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:
PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- --------------
Solaris[tm] OE (32-bit)            Big
The following is the result from the target platform:
PLATFORM_NAME                      ENDIAN_FORMAT
---------------------------------- --------------
Microsoft Windows IA (32-bit)      Little
In this example, you can see that the endian formats are different. Therefore, in this case, a conversion is necessary for transporting the database. Use either the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically. Transport the data files to the location of the existing data files of the target database. On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/dbname/ or +DISKGROUP/dbname/datafile/. Alternatively, you can use the RMAN CONVERT command to convert the data files. See "Converting Data Between Platforms" for more information.
Note:
If no endianness conversion of the tablespaces is needed, then you can transfer the files using any file transfer method.
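The two conversion options above can be sketched as follows. The directory object names (SRC_DATA_DIR, DEST_DATA_DIR) and the database link to the source (source_db) are illustrative assumptions, not names from this example:

```sql
-- Sketch: pull sales_101.dbf from the source database over a database link.
-- DBMS_FILE_TRANSFER converts the endian format automatically in transit.
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DATA_DIR',
    source_file_name             => 'sales_101.dbf',
    source_database              => 'source_db',
    destination_directory_object => 'DEST_DATA_DIR',
    destination_file_name        => 'sales_101.dbf');
END;
/
```

Alternatively, a source-side RMAN conversion might look like this (the output FORMAT location is illustrative):

```
RMAN> CONVERT TABLESPACE sales_1, sales_2
      TO PLATFORM 'Microsoft Windows IA (32-bit)'
      FORMAT '/tmp/%U';
```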
See Also:
Parent topic: Transporting Tablespaces Between Databases
15.3.3.5 Task 5: (Optional) Restore Tablespaces to Read/Write Mode
Make the transported tablespaces read/write again at the source database.
The following statements make the sales_1 and sales_2 tablespaces read/write:

ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;
You can postpone this task to first ensure that the import process succeeds.
Parent topic: Transporting Tablespaces Between Databases
15.3.3.6 Task 6: Import the Tablespace Set
To complete the transportable tablespaces operation, import the tablespace set.
To import the tablespace set:
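The import step typically reduces to a single Data Pump invocation that plugs in the copied data files. A minimal sketch, assuming the dump file name expdat.dmp from the earlier export sketch and target-side paths (both illustrative):

```sql
-- Sketch of the import, run from the OS shell (shown here with the SQL*Plus
-- HOST command) as a user granted the DATAPUMP_IMP_FULL_DATABASE role.
-- TRANSPORT_DATAFILES points at the data files copied in Task 4.
HOST impdp system dumpfile=expdat.dmp directory=data_pump_dir transport_datafiles='/u01/app/oracle/oradata/targetdb/sales_101.dbf','/u01/app/oracle/oradata/targetdb/sales_201.dbf' logfile=tts_import.log
```

After the import completes, the transported tablespaces are part of the target database; make them read/write there if updates are required.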
Parent topic: Transporting Tablespaces Between Databases
15.4 Transporting Tables, Partitions, or Subpartitions Between Databases
You can transport tables, partitions, and subpartitions between databases.
- Introduction to Transportable Tables
You can use the transportable tables feature to copy a set of tables, partitions, or subpartitions from one Oracle Database to another. A transportable tables operation moves metadata for the specified tables, partitions, or subpartitions to the target database.
- Limitations on Transportable Tables
There are limitations on transportable tables.
- Transporting Tables, Partitions, or Subpartitions Using an Export Dump File
You can transport tables, partitions, or subpartitions between databases using an export file.
- Transporting Tables, Partitions, or Subpartitions Over the Network
To transport tables over the network, you perform an import using the NETWORK_LINK parameter. The import is performed using a database link, and there is no dump file involved.
Parent topic: Transporting Data
15.4.1 Introduction to Transportable Tables
You can use the transportable tables feature to copy a set of tables, partitions, or subpartitions from one Oracle Database to another. A transportable tables operation moves metadata for the specified tables, partitions, or subpartitions to the target database.
A transportable tables operation automatically identifies the tablespaces used by the specified tables. To move the data, you copy the data files for these tablespaces to the target database. The Data Pump import automatically frees the blocks in the data files occupied by tables, partitions, or subpartitions that were not part of the transportable tables operation. It also frees the blocks occupied by the dependent objects of the tables that were not part of the transportable tables operation.
You can transport the tables, partitions, and subpartitions in the following ways:
- Using an export dump file
During the export, specify the TABLES parameter and set the TRANSPORTABLE parameter to ALWAYS. During import, do not specify the TRANSPORTABLE parameter. Data Pump import recognizes the transportable tables operation automatically.
- Over the network
During the import, specify the TABLES parameter, set the TRANSPORTABLE parameter to ALWAYS, and specify the NETWORK_LINK parameter to identify the source database.
15.4.2 Limitations on Transportable Tables
There are limitations on transportable tables.
Be aware of the following limitations for transportable tables:
- The general limitations described in "General Limitations on Transporting Data" apply to transportable tables.
- You cannot transport a table to a target database that contains a table of the same name in the same schema. However, you can use the REMAP_TABLE import parameter to import the data into a different table. Alternatively, before the transport operation, you can rename either the table to be transported or the target table.
Starting with Oracle Database 12c Release 2 (12.2), the Recovery Manager (RMAN) RECOVER command can move tables to a different schema while remapping a table. See Oracle Database Backup and Recovery User's Guide for more information.
- You cannot transport tables with TIMESTAMP WITH TIMEZONE (TSTZ) data across platforms with different time zone file versions.
See Oracle Database Utilities for more information.
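The time zone file version limitation can be checked up front. A sketch, run on both the source and the target database to compare versions:

```sql
-- Sketch: report the time zone data file and its version on this database.
-- The versions must match before transporting tables with TSTZ columns
-- across platforms.
SELECT * FROM V$TIMEZONE_FILE;
```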
15.4.3 Transporting Tables, Partitions, or Subpartitions Using an Export Dump File
You can transport tables, partitions, or subpartitions between databases using an export file.
The following list of tasks summarizes the process of transporting tables between databases using an export dump file. Details for each task are provided in the subsequent example.
- Pick a set of tables, partitions, or subpartitions.
If you are transporting partitions, then you can specify partitions from only one table in a transportable tables operation, and no other tables can be transported in the same operation. Also, if only a subset of a table's partitions are exported in a transportable tables operation, then on import each partition becomes a non-partitioned table.
- At the source database, place the tablespaces associated with the data files for the tables, partitions, or subpartitions in read-only mode.
To view the tablespace for a table, query the DBA_TABLES view. To view the data file for a tablespace, query the DBA_DATA_FILES view.
- Perform the Data Pump export.
- Transport the export dump file.
Copy the export dump file to a place that is accessible to the target database.
- Transport the data files for the tables, partitions, or subpartitions.
Copy the data files to a place that is accessible to the target database.
If the source platform and target platform are different, then check the endian format of each platform by running the query on the V$TRANSPORTABLE_PLATFORM view in "Transporting Data Across Platforms".
If the source platform's endian format is different from the target platform's endian format, then use one of the following methods to convert the data files:
- Use the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically.
- Use the RMAN CONVERT command to convert the data files to the target platform's endian format.
See "Converting Data Between Platforms" for more information.
- (Optional) Restore tablespaces to read/write mode on the source database.
- At the target database, perform the import.
Invoke the Data Pump utility to import the metadata for the tables.
Example
These tasks for transporting tables, partitions, and subpartitions using a Data Pump dump file are illustrated more fully in the example that follows, where it is assumed that the following partitions exist in the sh.sales_prt table:
- sales_q1_2000
- sales_q2_2000
- sales_q3_2000
- sales_q4_2000
This example transports two of these partitions to the target database.
The following SQL statements create the sales_prt table and its partitions in the sh schema, and the tablespace and data file for the table. The statements also insert data into the partitions by using data in the sh sample schemas.
CREATE TABLESPACE sales_prt_tbs
  DATAFILE 'sales_prt.dbf' SIZE 20M
  ONLINE;

CREATE TABLE sh.sales_prt
   (prod_id        NUMBER(6),
    cust_id        NUMBER,
    time_id        DATE,
    channel_id     CHAR(1),
    promo_id       NUMBER(6),
    quantity_sold  NUMBER(3),
    amount_sold    NUMBER(10,2))
  PARTITION BY RANGE (time_id)
   (PARTITION SALES_Q1_2000 VALUES LESS THAN
      (TO_DATE('01-APR-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
    PARTITION SALES_Q2_2000 VALUES LESS THAN
      (TO_DATE('01-JUL-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
    PARTITION SALES_Q3_2000 VALUES LESS THAN
      (TO_DATE('01-OCT-2000','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')),
    PARTITION SALES_Q4_2000 VALUES LESS THAN
      (TO_DATE('01-JAN-2001','DD-MON-YYYY','NLS_DATE_LANGUAGE = American')))
  TABLESPACE sales_prt_tbs;

INSERT INTO sh.sales_prt PARTITION(sales_q1_2000)
  SELECT * FROM sh.sales PARTITION(sales_q1_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q2_2000)
  SELECT * FROM sh.sales PARTITION(sales_q2_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q3_2000)
  SELECT * FROM sh.sales PARTITION(sales_q3_2000);
INSERT INTO sh.sales_prt PARTITION(sales_q4_2000)
  SELECT * FROM sh.sales PARTITION(sales_q4_2000);

COMMIT;
This example makes the following additional assumptions:
- The name of the source database is sourcedb.
- The source database and target database are running on the same platform with the same endianness. To check the endianness of a platform, run the following query:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

- Only the sales_q1_2000 and sales_q2_2000 partitions are transported to the target database. The other two partitions are not transported.
Complete the following tasks to transport the partitions using an export dump file:
- Task 1 Generate the Export Dump File
Generate the export dump file by completing the following steps:
- Start SQL*Plus and connect to the source database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
See "Connecting to the Database with SQL*Plus" for instructions.
- Make all of the tablespaces that contain the tables being transported read-only.

ALTER TABLESPACE sales_prt_tbs READ ONLY;

- Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role, and specify the transportable tables options.

SQL> HOST expdp user_name dumpfile=sales_prt.dmp directory=data_pump_dir
  tables=sh.sales_prt:sales_q1_2000,sh.sales_prt:sales_q2_2000
  transportable=always logfile=exp.log
Password: password

You must always specify TRANSPORTABLE=ALWAYS, which specifies that the transportable option is used.
This example specifies the following additional Data Pump parameters:
- The DUMPFILE parameter specifies the name of the structural information export dump file to be created, sales_prt.dmp.
- The DIRECTORY parameter specifies the directory object that points to the operating system or Oracle Automatic Storage Management location of the dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to the user running the Export utility. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to users SYS and SYSTEM.
However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
See Also:
- Oracle Database Utilities for information about the default directory when the DIRECTORY parameter is omitted
- Oracle Multitenant Administrator's Guide for more information about PDBs

- The TABLES parameter specifies the tables, partitions, or subpartitions being exported.
- The LOGFILE parameter specifies the file name of the log file to be written by the export utility. In this example, the log file is written to the same directory as the dump file, but it can be written to a different location.
- Check the log file for unexpected errors, and take note of the dump file and data files that you must transport to the target database. expdp outputs the names and paths of these files in messages like these:

Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/rdbms/log/sales_prt.dmp
******************************************************************************
Datafiles required for transportable tablespace SALES_PRT_TBS:
  /u01/app/oracle/oradata/sourcedb/sales_prt.dbf
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 11:32:13
- When finished, exit back to SQL*Plus:

$ exit

See Also:
Oracle Database Utilities for information about using the Data Pump utility
- Task 2 Transport the Export Dump File
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object on the target database, or to any other directory of your choosing. The new location must be accessible to the target database.
In this example, transfer the sales_prt.dmp dump file from the source database to the target database.
At the target database, run the following query to determine the location of DATA_PUMP_DIR:

SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';

OWNER      DIRECTORY_NAME   DIRECTORY_PATH
---------- ---------------- -----------------------------------
SYS        DATA_PUMP_DIR    /u01/app/oracle/rdbms/log/
- Task 3 Transport the Data Files for the Tables
Transport the data files of the tablespaces containing the tables being transported to a place that is accessible to the target database.
Typically, you transport the data files to the location of the existing data files of the target database. On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/dbname/ or +DISKGROUP/dbname/datafile/.
In this example, transfer the sales_prt.dbf data file from the source database to the target database.
See Also:
- Task 4 (Optional) Restore Tablespaces to Read/Write Mode
Make the tablespaces that contain the tables being transported read/write again at the source database, as follows:

ALTER TABLESPACE sales_prt_tbs READ WRITE;
You can postpone this task to first ensure that the import process succeeds.
- Task 5 At the Target Database, Import the Partitions
At the target database, invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role, and specify the transportable tables options.

impdp user_name dumpfile=sales_prt.dmp directory=data_pump_dir
  transport_datafiles='/u01/app/oracle/oradata/targetdb/sales_prt.dbf'
  tables=sh.sales_prt:sales_q1_2000,sh.sales_prt:sales_q2_2000
  logfile=imp.log
Password: password

This example specifies the following Data Pump parameters:
- The DUMPFILE parameter specifies the exported file containing the metadata for the data to be imported.
- The DIRECTORY parameter specifies the directory object that identifies the location of the export dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to the user running the Import utility. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
In a non-CDB, the directory object DATA_PUMP_DIR is created automatically. Read and write access to this directory is automatically granted to the DBA role, and thus to users SYS and SYSTEM.
However, the directory object DATA_PUMP_DIR is not created automatically in a PDB. Therefore, when importing into a PDB, create a directory object in the PDB and specify the directory object when you run Data Pump.
See Also:
- Oracle Database Utilities for information about the default directory when the DIRECTORY parameter is omitted
- Oracle Multitenant Administrator's Guide for more information about PDBs

- The TRANSPORT_DATAFILES parameter identifies all of the data files to be imported.
You can specify the TRANSPORT_DATAFILES parameter multiple times in a parameter file specified with the PARFILE parameter if there are many data files.
- The TABLES parameter specifies the tables, partitions, or subpartitions being imported.
- The LOGFILE parameter specifies the file name of the log file to be written by the import utility. In this example, the log file is written to the directory from which the dump file is read, but it can be written to a different location.
After this statement executes successfully, check the import log file to ensure that no unexpected error has occurred.
When dealing with a large number of data files, specifying the list of data file names on the command line can be a laborious process. It can even exceed the command line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
impdp user_name parfile='par.f'
For example, par.f might contain the following lines:

DUMPFILE=sales_prt.dmp
DIRECTORY=data_pump_dir
TRANSPORT_DATAFILES='/u01/app/oracle/oradata/targetdb/sales_prt.dbf'
TABLES=sh.sales_prt:sales_q1_2000,sh.sales_prt:sales_q2_2000
LOGFILE=imp.log
Note:
- The partitions are imported as separate tables in the target database because this example transports a subset of partitions.
- During the import, tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
- When performing a network database import, the TRANSPORTABLE parameter must be set to always.

See Also:
Oracle Database Utilities for information about using the import utility
15.4.4 Transporting Tables, Partitions, or Subpartitions Over the Network
To transport tables over the network, you perform an import using the NETWORK_LINK parameter. The import is performed using a database link, and there is no dump file involved.
The following list of tasks summarizes the process of transporting tables, partitions, and subpartitions between databases over the network. Details for each task are provided in the subsequent example.
- Pick a set of tables, partitions, or subpartitions.
If you are transporting partitions, then you can specify partitions from only one table in a transportable tables operation, and no other tables can be transported in the same operation. Also, if only a subset of a table's partitions are exported in a transportable tables operation, then on import each partition becomes a non-partitioned table.
- At the source database, place the tablespaces associated with the data files for the tables, partitions, or subpartitions in read-only mode.
To view the tablespace for a table, query the DBA_TABLES view. To view the data file for a tablespace, query the DBA_DATA_FILES view.
- Transport the data files for the tables, partitions, or subpartitions.
Copy the data files to a place that is accessible to the target database.
If the source platform and target platform are different, then check the endian format of each platform by running the query on the V$TRANSPORTABLE_PLATFORM view in "Transporting Data Across Platforms".
If the source platform's endian format is different from the target platform's endian format, then use one of the following methods to convert the data files:
- Use the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to transfer the data files. These procedures convert the data files to the target platform's endian format automatically.
- Use the RMAN CONVERT command to convert the data files to the target platform's endian format.
See "Converting Data Between Platforms" for more information.
- At the target database, perform the import.
Invoke the Data Pump utility to import the metadata for the tables.
- (Optional) Restore tablespaces to read/write mode on the source database.
Example
These tasks for transporting tables over the network are illustrated more fully in the example that follows, where it is assumed the following tables exist in the source database:

Table | Tablespace | Data File
---|---|---
hr.emp_ttbs | emp_tsp | /u01/app/oracle/oradata/sourcedb/emp.dbf
oe.orders_ttbs | orders_tsp | /u01/app/oracle/oradata/sourcedb/orders.dbf
This example transports these tables to the target database. To complete the example, these tables must exist on the source database.
The following SQL statements create the tables in the hr and oe schemas and the tablespaces and data files for the tables. The statements also insert data into the tables by using data in the hr and oe sample schemas.
CREATE TABLESPACE emp_tsp
  DATAFILE 'emp.dbf' SIZE 1M
  ONLINE;

CREATE TABLE hr.emp_ttbs(
   employee_id    NUMBER(6),
   first_name     VARCHAR2(20),
   last_name      VARCHAR2(25),
   email          VARCHAR2(25),
   phone_number   VARCHAR2(20),
   hire_date      DATE,
   job_id         VARCHAR2(10),
   salary         NUMBER(8,2),
   commission_pct NUMBER(2,2),
   manager_id     NUMBER(6),
   department_id  NUMBER(4))
  TABLESPACE emp_tsp;

INSERT INTO hr.emp_ttbs SELECT * FROM hr.employees;

CREATE TABLESPACE orders_tsp
  DATAFILE 'orders.dbf' SIZE 1M
  ONLINE;

CREATE TABLE oe.orders_ttbs(
   order_id      NUMBER(12),
   order_date    TIMESTAMP WITH LOCAL TIME ZONE,
   order_mode    VARCHAR2(8),
   customer_id   NUMBER(6),
   order_status  NUMBER(2),
   order_total   NUMBER(8,2),
   sales_rep_id  NUMBER(6),
   promotion_id  NUMBER(6))
  TABLESPACE orders_tsp;

INSERT INTO oe.orders_ttbs SELECT * FROM oe.orders;

COMMIT;
This example makes the following additional assumptions:
- The name of the source database is sourcedb.
- The source database and target database are running on the same platform with the same endianness. To check the endianness of a platform, run the following query:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
Complete the following tasks to transport the tables over the network:
- Task 1 Create a Database Link from the Target Database to the Source Database
Create a database link from the target database to the source database by completing the following steps:
- Ensure that network connectivity is configured between the source database and the target database.
See Oracle Database Net Services Administrator's Guide for instructions.
- Start SQL*Plus and connect to the target database as the administrator who will transport the data with Data Pump import. This user must have the DATAPUMP_IMP_FULL_DATABASE role to transport the data.
See "Connecting to the Database with SQL*Plus" for instructions.
- Create the database link:

CREATE PUBLIC DATABASE LINK sourcedb USING 'sourcedb';

Specify the service name for the source database in the USING clause.
During the import operation, the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role. The user on the source database cannot be a user with the SYSDBA administrative privilege.
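Because the link must connect as a specific non-SYSDBA user on the source database, a fixed-user link is one way to satisfy this requirement. A sketch, in which dp_user and its password are illustrative placeholders for a source-side account granted the DATAPUMP_EXP_FULL_DATABASE role:

```sql
-- Sketch: fixed-user database link connecting as a designated export user
-- on the source database (dp_user is a hypothetical account name).
CREATE PUBLIC DATABASE LINK sourcedb
  CONNECT TO dp_user IDENTIFIED BY password
  USING 'sourcedb';
```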
-
- Task 2 Make the Tablespaces Containing the Tables Read-Only
At the source database, complete the following steps:
- Start SQL*Plus and connect to the source database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
See "Connecting to the Database with SQL*Plus" for instructions.
- Make all of the tablespaces that contain data to be transported read-only.

ALTER TABLESPACE emp_tsp READ ONLY;
ALTER TABLESPACE orders_tsp READ ONLY;
-
- Task 3 Transport the Data Files for the Tables
-
Transport the data files of the tablespaces containing the tables being transported to a place that is accessible to the target database.
Typically, you transport the data files to the location of the existing data files of the target database. On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/dbname/ or +DISKGROUP/dbname/datafile/.
In this example, transfer the emp.dbf and orders.dbf data files from the source database to the target database.
See Also:
- Task 4 At the Target Database, Import the Database
-
Invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role, and specify the full transportable export/import options.

impdp user_name network_link=sourcedb transportable=always
  transport_datafiles=
  '/u01/app/oracle/oradata/targetdb/emp.dbf'
  '/u01/app/oracle/oradata/targetdb/orders.dbf'
  tables=hr.emp_ttbs,oe.orders_ttbs logfile=import.log
Password: password
This example specifies the following Data Pump parameters:
- The NETWORK_LINK parameter specifies the database link to the source database used for the network import.
- The TRANSPORTABLE parameter specifies that the import uses the transportable option.
- The TRANSPORT_DATAFILES parameter identifies all of the data files to be imported.
You can specify the TRANSPORT_DATAFILES parameter multiple times in a parameter file specified with the PARFILE parameter if there are many data files.
- The TABLES parameter specifies the tables to be imported.
- The LOGFILE parameter specifies the file name of the log file to be written by the import utility.
After this statement executes successfully, check the import log file to ensure that no unexpected error has occurred.
When dealing with a large number of data files, specifying the list of data file names on the command line can be a laborious process. It can even exceed the command line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
impdp user_name parfile='par.f'
For example, par.f might contain the following lines:

NETWORK_LINK=sourcedb
TRANSPORTABLE=always
TRANSPORT_DATAFILES='/u01/app/oracle/oradata/targetdb/emp.dbf','/u01/app/oracle/oradata/targetdb/orders.dbf'
TABLES=hr.emp_ttbs,oe.orders_ttbs
LOGFILE=import.log
Note:
During the import, user-defined tablespaces might be temporarily made read/write for metadata loading. Ensure that no user changes are made to the data during the import. At the successful completion of the import, all user-defined tablespaces are made read/write.
See Also:
Oracle Database Utilities for information about using the import utility
- Task 5 (Optional) Restore Tablespaces to Read/Write Mode
- Make the tablespaces that contain the tables being transported read/write again at the source database, as follows:

  ALTER TABLESPACE emp_tsp READ WRITE;
  ALTER TABLESPACE orders_tsp READ WRITE;
15.5 Converting Data Between Platforms
When you perform a transportable operation, and the source platform and the target platform are of different endianness, you must convert the data being transported to the target format. If they are of the same endianness, then no conversion is necessary and data can be transported as if they were on the same platform. You can use the DBMS_FILE_TRANSFER package or the RMAN CONVERT command to convert data.
Note:
Some limitations might apply that are not described in these sections. Refer to the following documentation for more information:
- "Transporting Data Across Platforms" for information about checking the endianness of platforms
- Oracle Database PL/SQL Packages and Types Reference for information about limitations related to the DBMS_FILE_TRANSFER package
- Oracle Database Backup and Recovery Reference for information about limitations related to the RMAN CONVERT command
- Converting Data Between Platforms Using the DBMS_FILE_TRANSFER Package
  You can use the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to convert data between platforms during the data file transfer.
- Converting Data Between Platforms Using RMAN
  When you use the RMAN CONVERT command to convert data, you can either convert the data on the source platform after running Data Pump export, or you can convert it on the target platform before running Data Pump import. In either case, you must transfer the data files from the source system to the target system.
Parent topic: Transporting Data
15.5.1 Converting Data Between Platforms Using the DBMS_FILE_TRANSFER Package
You can use the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package to convert data between platforms during the data file transfer.
When you use one of these procedures to move data files between the source platform and the target platform, each block in each data file is converted to the target platform's endianness.
This section uses an example to describe how to use the DBMS_FILE_TRANSFER package to convert a data file to a different platform. The example makes the following assumptions:
- The GET_FILE procedure will transfer the data file.
- The mytable.342.123456789 data file is being transferred to a different platform.
- The endianness of the source platform is different from the endianness of the target platform.
- The global name of the source database is dbsa.example.com.
- Both the source database and the target database use Oracle Automatic Storage Management (Oracle ASM).
Note:
You can also use the DBMS_FILE_TRANSFER package to transfer data files between platforms with the same endianness.
Complete the following steps to convert the data file by transferring it with the GET_FILE procedure:
Note:
In this example, the destination data file name is mytable. Oracle ASM does not allow a fully qualified file name form in the destination_file_name parameter of the GET_FILE procedure.
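The transfer itself can be sketched as the following PL/SQL call, run at the target database. This is a hedged sketch, not the complete documented procedure: the directory objects SOURCE_DIR and TARGET_DIR are hypothetical names you would create first, and the example assumes a database link named after the source global name.

```sql
-- Hedged sketch: SOURCE_DIR and TARGET_DIR are hypothetical directory
-- objects, and dbsa.example.com is assumed to be usable as the name of
-- a database link to the source database.
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'mytable.342.123456789',
    source_database              => 'dbsa.example.com',
    destination_directory_object => 'TARGET_DIR',
    destination_file_name        => 'mytable');
END;
/
```

Because GET_FILE is run at the target, each block is converted to the target platform's endianness as it arrives; no separate conversion step is needed.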
See Also:
- Oracle Database PL/SQL Packages and Types Reference for more information about using the DBMS_FILE_TRANSFER package
- Oracle Automatic Storage Management Administrator's Guide for information about fully qualified file name forms in ASM
Parent topic: Converting Data Between Platforms
15.5.2 Converting Data Between Platforms Using RMAN
When you use the RMAN CONVERT command to convert data, you can either convert the data on the source platform after running Data Pump export, or you can convert it on the target platform before running Data Pump import. In either case, you must transfer the data files from the source system to the target system.
You can convert data with the following RMAN CONVERT commands:

- CONVERT DATAFILE
- CONVERT TABLESPACE
- CONVERT DATABASE

Note:
Datatype restrictions apply to the RMAN CONVERT command.
- Converting Tablespaces on the Source System After Export
  An example illustrates how to use the RMAN CONVERT TABLESPACE command to convert tablespaces to a different platform.
- Converting Data Files on the Target System Before Import
  An example illustrates how to use the RMAN CONVERT DATAFILE command to convert data files to a different platform.
Parent topic: Converting Data Between Platforms
15.5.2.1 Converting Tablespaces on the Source System After Export
An example illustrates how to use the RMAN CONVERT TABLESPACE command to convert tablespaces to a different platform.
The example makes the following assumptions:
- The sales_1 and sales_2 tablespaces are being transported to a different platform.
- The endianness of the source platform is different from the endianness of the target platform.
- You want to convert the data on the source system, before transporting the tablespace set to the target system.
- You have completed the Data Pump export on the source database.
Complete the following steps to convert the tablespaces on the source system:
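Under the assumptions above, the conversion step might look like the following RMAN sketch. The target platform name and the staging directory are assumptions chosen for illustration; query V$TRANSPORTABLE_PLATFORM for the valid platform names on your system.

```text
RMAN> CONVERT TABLESPACE sales_1, sales_2
      TO PLATFORM 'Microsoft Windows IA (32-bit)'
      FORMAT '/tmp/transport_to_target/%U';
```

The converted copies are written to the FORMAT location, leaving the original data files untouched; you then transfer the converted files to the target system.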
Parent topic: Converting Data Between Platforms Using RMAN
15.5.2.2 Converting Data Files on the Target System Before Import
An example illustrates how to use the RMAN CONVERT DATAFILE command to convert data files to a different platform.
During the conversion, you identify the data files by file name, not by tablespace name. Until the tablespace metadata is imported, the target instance has no way of knowing the desired tablespace names.
The example makes the following assumptions:
- You have not yet converted the data files for the tablespaces being transported.
  If you used the DBMS_FILE_TRANSFER package to transfer the data files to the target system, then the data files were converted automatically during the file transfer. See "Converting Data Between Platforms Using the DBMS_FILE_TRANSFER Package".
- The following data files are being transported to a different platform:
  - C:\Temp\sales_101.dbf
  - C:\Temp\sales_201.dbf
  The data files must be accessible to the target database. If they are not accessible to the target database, then transfer the data files to the target system. See "Guidelines for Transferring Data Files".
- The endianness of the source platform is different from the endianness of the target platform.
- You want to convert the data on the target system, before performing the Data Pump import.
- The converted data files are placed in C:\app\orauser\oradata\orawin\, which is the location of the existing data files for the target system.
Complete the following steps to convert the tablespaces on the target system:
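Under the assumptions above, the target-side conversion might look like the following RMAN sketch. Because the tablespace metadata has not been imported yet, the files are named explicitly; the source platform name shown is an assumption for illustration, and must match the actual source platform.

```text
RMAN> CONVERT DATAFILE
        'C:\Temp\sales_101.dbf',
        'C:\Temp\sales_201.dbf'
      FROM PLATFORM 'Solaris[tm] OE (32-bit)'
      DB_FILE_NAME_CONVERT 'C:\Temp\', 'C:\app\orauser\oradata\orawin\';
```

DB_FILE_NAME_CONVERT places the converted copies in the target database's data file location, ready for the Data Pump import.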
Parent topic: Converting Data Between Platforms Using RMAN
15.6 Guidelines for Transferring Data Files
You should follow a set of guidelines when transferring the data files.
If both the source and target are file systems, then you can transport using:
- Any facility for copying flat files (for example, an operating system copy utility or ftp)
- The DBMS_FILE_TRANSFER package
- RMAN
- Any facility for publishing on CDs
If either the source or target is an Oracle Automatic Storage Management (Oracle ASM) disk group, then you can use:
- ftp to or from the /sys/asm virtual folder in the XML DB repository (see Oracle Automatic Storage Management Administrator's Guide for more information)
- The DBMS_FILE_TRANSFER package
- RMAN
Do not transport the data files for the administrative tablespaces (such as SYSTEM and SYSAUX) or any undo or temporary tablespaces.
If you are transporting data of a different block size than the standard block size of the database receiving the data, then you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database parameter file.
For example, if you are transporting data with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it is not already included in the parameter file, then this parameter can be set using the ALTER SYSTEM SET statement.
See Oracle Database Reference for information about specifying values for the DB_nK_CACHE_SIZE initialization parameter.
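For the 8K-into-4K case above, the statement might look like this. The cache size shown is an illustrative assumption, not a recommendation; choose a value appropriate for your workload.

```sql
-- Hedged sketch: 64M is a placeholder size. SCOPE=BOTH applies the
-- setting immediately and persists it in the server parameter file.
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 64M SCOPE = BOTH;
```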
Starting with Oracle Database 12c, the GET_FILE or PUT_FILE procedure in the DBMS_FILE_TRANSFER package can convert data between platforms during the data file transfer. See "Converting Data Between Platforms".
Starting with Oracle Database 12c, RMAN can transfer files using network-enabled restore. RMAN restores database files, over the network, from a remote database instance by using the FROM SERVICE clause of the RESTORE command. The primary advantage of network-enabled restore is that it eliminates the requirement for a restore of the backup to a staging area on disk and the need to transfer the copy. Therefore, network-enabled restore saves disk space and time. This technique can also provide the following advantages during file transfer: compression, encryption, and transfer of used data blocks only. See Oracle Database Backup and Recovery User's Guide for more information.
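As a hedged illustration of the FROM SERVICE clause, a network-enabled restore of a single data file might look like the following; the service name and file number are assumptions for illustration.

```text
RMAN> RESTORE DATAFILE 5 FROM SERVICE sourcedb USING COMPRESSED BACKUPSET;
```

Here sourcedb is a hypothetical net service name for the source database, and USING COMPRESSED BACKUPSET requests compression during the network transfer.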
Note:
Exercise caution when using the UNIX dd utility to copy raw-device files between databases, and note that Oracle Database 12c and later do not support raw devices for database files. The dd utility can be used to copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file.

It is difficult to ascertain the actual data file size of a raw-device file because of hidden control information that is stored as part of the data file. If you must use the dd utility to operate on raw devices, then specify the entire source raw-device file contents. If you move database file content from a raw device to either ASM or a file system to adhere to the desupport of raw devices with Oracle Database 12c and later, then use an Oracle-provided tool such as RMAN.
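As a hedged illustration of copying the entire source contents, the following uses ordinary files in place of raw devices, since real raw-device paths are system-specific; the file names are hypothetical.

```shell
# Create a 1 MiB sample file standing in for a raw-device file.
head -c 1048576 /dev/zero > /tmp/source_raw_copy_demo.img

# Copy the ENTIRE source. Do not pass count= or skip= options that
# limit the block range, or hidden control information may be lost.
dd if=/tmp/source_raw_copy_demo.img of=/tmp/target_raw_copy_demo.img bs=1M

# Verify that the copy is byte-for-byte identical to the source.
cmp /tmp/source_raw_copy_demo.img /tmp/target_raw_copy_demo.img && echo "copy verified"
```

The key point is the absence of any range-limiting options: dd reads from the first byte to end-of-file, so the hidden control information at the end of the file is preserved.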
See Also:
"Copying Files Using the Database Server" for information about using the DBMS_FILE_TRANSFER
package to copy the files that are being transported and their metadata
Parent topic: Transporting Data