Oracle® Database Administrator's Guide
11g Release 2 (11.2)
Part Number E17120-11
Note: You must be using the Enterprise Edition of Oracle Database Release 8i or later to generate a transportable tablespace set. However, you can use any edition of Oracle Database 8i or later to import a transportable tablespace set into an Oracle database on the same platform. To import a transportable tablespace set into an Oracle database on a different platform, both databases must have compatibility set to at least 10.0.0. See "Compatibility Considerations for Transportable Tablespaces" for a discussion of database compatibility for transporting tablespaces across release levels.
You can use the Transportable Tablespaces feature to copy a set of tablespaces from one Oracle Database to another.
Note: This method for transporting tablespaces requires that you place the tablespaces to be transported in read-only mode until you complete the transporting process. If this is undesirable, you can use the Transportable Tablespaces from Backup feature, described in Oracle Database Backup and Recovery User's Guide.
The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the destination database standard block size.
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use Data Pump to transfer only the metadata of the tablespace objects to the new database.
Note: Beginning with Oracle Database 11g Release 1, you must use Data Pump for transportable tablespaces. The only circumstance under which you can use the original import and export utilities, IMP and EXP, is for a backward migration of XMLType data to a database version 10g Release 2 or earlier. See Oracle Database Utilities for more information on these utilities and Oracle XML DB Developer's Guide for more information on XMLTypes.
The transportable tablespace feature is useful in several scenarios, including:
Exporting and importing partitions in data warehousing tables
Publishing structured data on CDs
Copying multiple read-only versions of a tablespace on multiple databases
Archiving historical data
Performing tablespace point-in-time-recovery (TSPITR)
These scenarios are discussed in "Using Transportable Tablespaces: Scenarios".
There are two ways to transport a tablespace:
Manually, following the steps described in this section. This involves issuing commands to SQL*Plus, RMAN, and Data Pump.
Using the Transport Tablespaces Wizard in Enterprise Manager
Log in to Enterprise Manager with a user that has the required privileges.
At the top of the Database Home page, click Data Movement to view the Data Movement page.
Under Move Database Files, click Transport Tablespaces.
See Also: Oracle Database Data Warehousing Guide for information about using transportable tablespaces in a data warehousing environment
Starting with Oracle Database Release 10g, you can transport tablespaces across platforms. This functionality can be used to:
Allow a database to be migrated from one platform to another
Provide an easier and more efficient means for content providers to publish structured data and distribute it to customers running Oracle Database on different platforms
Simplify the distribution of data from a data warehouse environment to data marts, which are often running on smaller platforms
Enable the sharing of read-only tablespaces between Oracle Database installations on different operating systems or platforms, assuming that your storage system is accessible from those platforms and the platforms all have the same endianness, as described in the sections that follow.
Many, but not all, platforms are supported for cross-platform tablespace transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering). The following query displays the platforms that support cross-platform tablespace transport:
SQL> COLUMN PLATFORM_NAME FORMAT A36
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM ORDER BY PLATFORM_NAME;

PLATFORM_ID PLATFORM_NAME                        ENDIAN_FORMAT
----------- ------------------------------------ --------------
          6 AIX-Based Systems (64-bit)           Big
         16 Apple Mac OS                         Big
         19 HP IA Open VMS                       Little
         15 HP Open VMS                          Little
          5 HP Tru64 UNIX                        Little
          3 HP-UX (64-bit)                       Big
          4 HP-UX IA (64-bit)                    Big
         18 IBM Power Based Linux                Big
          9 IBM zSeries Based Linux              Big
         10 Linux IA (32-bit)                    Little
         11 Linux IA (64-bit)                    Little
         13 Linux x86 64-bit                     Little
          7 Microsoft Windows IA (32-bit)        Little
          8 Microsoft Windows IA (64-bit)        Little
         12 Microsoft Windows x86 64-bit         Little
         17 Solaris Operating System (x86)       Little
         20 Solaris Operating System (x86-64)    Little
          1 Solaris[tm] OE (32-bit)              Big
          2 Solaris[tm] OE (64-bit)              Big

19 rows selected.
If the source platform and the destination platform are of different endianness, then an additional step must be done on either the source or destination platform to convert the tablespace being transported to the destination format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or later, you can accomplish this by making the datafile read/write at least once.
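As a sketch, briefly cycling a tablespace through read/write mode is enough to stamp the platform into the datafile headers. The statements below use the sales_1 tablespace name from the example later in this section:

```sql
-- Run on the source database. Taking the tablespace out of read-only
-- mode once causes Oracle to rewrite the datafile headers with
-- platform information; it can then be returned to read-only.
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_1 READ ONLY;
```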
The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
The database character sets of the source and the target databases are the same.
The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
The source database is in version 10.1.0.3 or higher.
The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
The source database is in a version lower than 10.1.0.3.
The maximum character width is the same in the source and target database character sets.
The source and the target databases must use compatible national character sets. That is, one of the following must be true:
The national character sets of the source and target databases are the same.
The source database is in version 10.1.0.3 or higher and the tablespaces to be transported contain no columns with the NCLOB data type.
You cannot transport a tablespace to a destination database that contains a tablespace of the same name. However, before the transport operation, you can rename either the tablespace to be transported or the destination tablespace.
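For example, a rename on the source database before export avoids the name clash. The new name sales_1_old here is illustrative; any name that does not conflict in the destination database works:

```sql
-- sales_1_old is an illustrative name; the same statement could
-- instead be run against the destination tablespace.
ALTER TABLESPACE sales_1 RENAME TO sales_1_old;
```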
Note: The subset-superset relationship between character sets recognized by Oracle Database is documented in the Oracle Database Globalization Support Guide.
Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
Encrypted tablespaces have the following limitations:
Before transporting an encrypted tablespace, you must copy the Oracle wallet manually to the destination database, unless the master encryption key is stored in a Hardware Security Module (HSM) device instead of an Oracle wallet. When copying the wallet, the wallet password remains the same in the destination database. However, it is recommended that you change the password on the destination database so that each database has its own wallet password. See Oracle Database Advanced Security Administrator's Guide for information about HSM devices, about determining the location of the Oracle wallet, and about changing the wallet password with Oracle Wallet Manager.
You cannot transport an encrypted tablespace to a database that already has an Oracle wallet for transparent data encryption. In this case, you must use Oracle Data Pump to export the tablespace's schema objects and then import them to the destination database. You can optionally take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported. See Oracle Database Utilities for more information.
You cannot transport an encrypted tablespace to a platform with different endianness.
Tablespaces that do not use block encryption but that contain tables with encrypted columns cannot be transported. You must use Oracle Data Pump to export and import the tablespace's schema objects. You can take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported. See Oracle Database Utilities for more information.
Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.
The following query returns a list of tablespaces that contain XMLTypes:
select distinct p.tablespace_name
  from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
 where t.table_name = x.table_name
   and t.tablespace_name = p.tablespace_name
   and x.owner = u.username;
See Oracle XML DB Developer's Guide for information on XMLTypes.
Transporting tablespaces with XMLTypes has the following limitations:
The destination database must have XML DB installed.
Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
Schemas referenced by XMLType tables cannot have cyclic dependencies.
XMLType tables with row level security are not supported, because they cannot be exported or imported.
If the schema for a transported XMLType table is not present in the destination database, it is imported and registered. If the schema already exists in the destination database, an error is returned unless the ignore=y option is set.
If an XMLType table uses a schema that is dependent on another schema, the schema that is depended on is not exported. The import succeeds only if that schema is already in the destination database.
Additional limitations include the following:
SYSTEM Tablespace Objects

You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences.

Opaque Types

Types whose interpretation is application-specific and opaque to the database (such as BFILE and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform. Types and objects that use these opaque types, either directly or indirectly, are also subject to this limitation.
When you create a transportable tablespace set, Oracle Database computes the lowest compatibility level at which the destination database must run. This is referred to as the compatibility level of the transportable set. Beginning with Oracle Database 11g, a tablespace can always be transported to a database with the same or higher compatibility setting, whether the destination database is on the same or a different platform. The database signals an error if the compatibility level of the transportable set is higher than the compatibility level of the destination database.
The following table shows the minimum compatibility requirements of the source and destination tablespace in various scenarios. The source and destination database need not have the same compatibility setting.
The following list of tasks summarizes the process of transporting a tablespace. Details for each task are provided in the subsequent example.
Note: This method of generating a transportable tablespace requires that you temporarily make the tablespace read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup. See Oracle Database Backup and Recovery User's Guide for details.
For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. Ignore this task if you are transporting your tablespace set to the same platform.
Pick a self-contained set of tablespaces.
At the source database, place the set of tablespaces in read-only mode and generate a transportable tablespace set.
A transportable tablespace set (or transportable set) consists of datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for the set of tablespaces. You use Data Pump to perform the export.
If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the destination platform. You can perform a source-side conversion at this step in the procedure, or you can perform a destination-side conversion as part of Task 4.
Transport the tablespace set.
Copy the datafiles and the export file to a place that is accessible to the destination database.
If you transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the destination platform, perform a destination-side conversion now.
(Optional) Restore tablespaces to read/write mode.
At the destination database, import the tablespace set.
Invoke the Data Pump utility to import the metadata for the tablespace set.
These tasks for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist:
This task is only necessary if you are transporting the tablespace set to a platform different from the source platform.
If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and destination platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise you must do a conversion of the tablespace set either at the source or destination database.
If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport.
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
 WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:
PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- --------------
Solaris[tm] OE (32-bit)             Big
The following is the result from the destination platform:
PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- --------------
Microsoft Windows IA (32-bit)       Little
You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context, "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces. Some examples of self-contained tablespace violations are:
An index inside the set of tablespaces is for a table outside of the set of tablespaces.
Note: It is not a violation if a corresponding index for a table is outside of the set of tablespaces.
A partitioned table is partially contained in the set of tablespaces.
The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of the partitions of a partitioned table. To transport a subset of a partitioned table, you must exchange the partitions into tables.
See Oracle Database VLDB and Partitioning Guide for information about exchanging partitions.
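As a sketch, a single partition can be exchanged into a standalone table, which can then be placed in the transportable set on its own. The table jan98_sales here is an illustrative, pre-created table whose column definitions match the partitioned table:

```sql
-- jan98_sales is an illustrative empty table with the same column
-- definitions as the partitioned table; the exchange swaps segment
-- ownership without physically moving data.
ALTER TABLE sales EXCHANGE PARTITION jan98 WITH TABLE jan98_sales;
```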
A referential integrity constraint points to a table across a set boundary.
When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers.
A table inside the set of tablespaces contains a LOB column that points to LOBs outside the set of tablespaces.
An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces.
To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially granted to SYS) to execute this procedure.
When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self containment. You can optionally specify if constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE.
The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set.
For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i, because the index and data will be inconsistent after the transport. A full containment check ensures that there are no dependencies going outside or coming into the transportable set. See the example for TSPITR in the Oracle Database Backup and Recovery User's Guide.
Note: The default for transportable tablespaces is to check for self containment rather than full containment.
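A strict check can be sketched as follows, assuming the full-containment flag is passed as the third argument of TRANSPORT_SET_CHECK (after the tablespace list and the constraint-check flag):

```sql
-- Third argument TRUE requests the strict (full) containment check
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE, TRUE);
```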
The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE):
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next task, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_TTS package
Oracle Database Backup and Recovery User's Guide for information specific to using the DBMS_TTS package for TSPITR
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by completing the following steps:
Start SQL*Plus and connect to the database as an administrator or as a user who has either the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
See "Connecting to the Database with SQL*Plus" for instructions.
Make all tablespaces in the set read-only.
SQL> ALTER TABLESPACE sales_1 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE sales_2 READ ONLY;

Tablespace altered.
Invoke the Data Pump export utility as user system and specify the tablespaces in the transportable set.

SQL> HOST

$ expdp system dumpfile=expdat.dmp directory=data_pump_dir 
        transport_tablespaces=sales_1,sales_2 logfile=tts_export.log

Password: password
You must always specify TRANSPORT_TABLESPACES, which determines the mode of the export operation. In this example:
The DUMPFILE parameter specifies the name of the structural information export file to be created, expdat.dmp.
The DIRECTORY parameter specifies the directory object that points to the operating system or Oracle Automatic Storage Management location of the dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to PUBLIC. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
Note: The directory object DATA_PUMP_DIR is automatically created when you install Oracle Database. Read and write access to this directory is automatically granted to the DBA role, and thus to users SYS and SYSTEM. If the DIRECTORY parameter is omitted, DATA_PUMP_DIR is used as the default directory.
The LOGFILE parameter specifies the file name of the log file to be written by the export utility. The log file is written to the same directory as the dump file.
EXPDP prompts for the password for the system account if you do not specify it on the command line.
Triggers and indexes are included in the export operation by default.
To perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following example:

expdp system dumpfile=expdat.dmp directory=data_pump_dir 
      transport_tablespaces=sales_1,sales_2 transport_full_check=y 
      logfile=tts_export.log
In this case, the Data Pump export utility verifies that there are no dependencies between the objects inside the transportable set and objects outside the transportable set. If the tablespace set being transported is not self-contained, then the export fails and indicates that the transportable set is not self-contained. You must then return to Task 2 to resolve all violations.
Note: The Data Pump utility is used to export only data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation goes relatively quickly even for large tablespace sets.
Check the log file for errors, and take note of the dump file and datafiles that you must transport to the destination database.
EXPDP outputs the names and paths of these files in messages like these:
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/salesdb/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace SALES_1:
  /u01/app/oracle/oradata/salesdb/sales_101.dbf
Datafiles required for transportable tablespace SALES_2:
  /u01/app/oracle/oradata/salesdb/sales_201.dbf
When finished, exit back to SQL*Plus:
See Also: Oracle Database Utilities for information about using the Data Pump utility
If sales_1 and sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces:
From SQL*Plus, return to the host system:
Start RMAN and connect to the source database:
$ RMAN TARGET /

Recovery Manager: Release 11.1.0.6.0
Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: salesdb (DBID=3295731590)
In this example, assume that the temporary location, directory /tmp, has already been created. The converted datafiles are assigned names by the system.
RMAN> CONVERT TABLESPACE sales_1,sales_2
2> TO PLATFORM 'Microsoft Windows IA (32-bit)'
3> FORMAT '/tmp/%U';

Starting conversion at source at 30-SEP-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00007 name=/u01/app/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/tmp/data_D-SALESDB_I-1192614013_TS-SALES_1_FNO-7_03jru08s
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00008 name=/u01/app/oracle/oradata/salesdb/sales_201.dbf
converted datafile=/tmp/data_D-SALESDB_I-1192614013_TS-SALES_2_FNO-8_04jru0aa
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25
Finished conversion at source at 30-SEP-08
See Also: Oracle Database Backup and Recovery Reference for a description of the RMAN CONVERT command
Exit Recovery Manager:
RMAN> exit

Recovery Manager complete.
Complete the following steps:
Transport both the datafiles and the export (dump) file of the tablespaces to a place that is accessible to the destination database. To accomplish this, do one of the following:
If no endianness conversion of the tablespace set is needed, or if you already converted the tablespace set:
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing.

Run the following query to determine the location of DATA_PUMP_DIR:

SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';

OWNER      DIRECTORY_NAME   DIRECTORY_PATH
---------- ---------------- -----------------------------------
SYS        DATA_PUMP_DIR    C:\app\orauser\admin\orawin\dpdump\
Transport the datafiles to the location of the existing datafiles of the destination database.
On the UNIX and Linux platforms, this location is typically /u01/app/oracle/oradata/SID/ or +DISKGROUP/SID/datafile/.
If you intend to perform endianness conversion after transporting to the destination host:
Transport the dump file to the directory pointed to by the DATA_PUMP_DIR directory object, or to any other directory of your choosing.
Transport the datafiles to a temporary location on the destination host (for example, /tmp or C:\TEMP). During conversion, you can move the converted datafiles to the location of the existing datafiles of the destination database.
Tip: If both the source and destination are file systems, you can transport using:
Any facility for copying flat files (for example, an operating system copy utility or ftp)
Any facility for publishing on CDs
If either the source or destination is an Oracle Automatic Storage Management (Oracle ASM) disk group, you can use:
ftp to or from the /sys/asm virtual folder in the XML DB repository
See Oracle Automatic Storage Management Administrator's Guide for more information.
Caution: Exercise caution when using the UNIX dd utility to copy raw-device files between databases. The dd utility can be used to copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file.

It is difficult to ascertain actual datafile size for a raw-device file because of hidden control information that is stored as part of the datafile. Thus, it is advisable when using the dd utility to specify copying the entire source raw-device file contents.
If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, do so now with RMAN.
The following example places the converted datafiles into C:\app\orauser\oradata\orawin\, which is the location of the existing datafiles for the destination database:
C:\>RMAN TARGET /

Recovery Manager: Release 11.1.0.6.0
Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ORAWIN (DBID=3462152886)

RMAN> CONVERT DATAFILE 
2> 'C:\Temp\sales_101.dbf',
3> 'C:\Temp\sales_201.dbf'
4> TO PLATFORM="Microsoft Windows IA (32-bit)"
5> FROM PLATFORM="Solaris[tm] OE (32-bit)"
6> DB_FILE_NAME_CONVERT=
7> 'C:\Temp\', 'C:\app\orauser\oradata\orawin\'
8> PARALLELISM=4;
You identify the datafiles by filename, not by tablespace name. Until the tablespace metadata is imported, the destination instance has no way of knowing the desired tablespace names. The source and destination platforms are optional. RMAN determines the source platform by examining the datafile, and the destination platform defaults to the platform of the host running the conversion.
See Also: "Copying Files Using the Database Server" for information about using the DBMS_FILE_TRANSFER package to copy the files that are being transported and their metadata
Make the transported tablespaces read/write again at the source database, as follows:
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;
You can postpone this task to first ensure that the import process succeeds.
Note: If you are transporting a tablespace of a different block size than the standard block size of the database receiving the tablespace set, then you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database parameter file.
For example, if you are transporting a tablespace with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it is not already included in the parameter file, this parameter can be set using the ALTER SYSTEM SET statement.
See Oracle Database Reference for information about specifying values for the DB_nK_CACHE_SIZE initialization parameter.
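Continuing the 8K-into-4K scenario, a sketch of setting the parameter dynamically on the destination database (the 64M value is illustrative only; size the cache for your workload):

```sql
-- Illustrative size; DB_8K_CACHE_SIZE is dynamic and can be set online
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 64M;
```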
Any privileged user can perform this task. To import a tablespace set, complete the following steps:
Import the tablespace metadata using the Data Pump Import utility, impdp:

impdp system dumpfile=expdat.dmp directory=data_pump_dir 
      transport_datafiles=
      c:\app\orauser\oradata\orawin\sales_101.dbf,
      c:\app\orauser\oradata\orawin\sales_201.dbf
      remap_schema=sales1:crm1 remap_schema=sales2:crm2 
      logfile=tts_import.log

Password: password
In this example we specify the following:

The DUMPFILE parameter specifies the exported file containing the metadata for the tablespaces to be imported.

The DIRECTORY parameter specifies the directory object that identifies the location of the dump file.

The TRANSPORT_DATAFILES parameter identifies all of the datafiles containing the tablespaces to be imported.

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the destination database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by sales1 in the source database will be owned by crm1 in the destination database after the tablespace set is imported. Similarly, objects owned by sales2 in the source database will be owned by crm2 in the destination database. In this case, the destination database is not required to have users sales1 and sales2, but must have users crm1 and crm2.

The LOGFILE parameter specifies the file name of the log file to be written by the import utility. The log file is written to the directory from which the dump file is read.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import log file to ensure that no error has occurred.
When dealing with a large number of datafiles, specifying the list of datafile names in the statement line can be a laborious process. It can even exceed the statement line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
impdp system parfile='par.f'
where the parameter file, par.f, contains the following:

DUMPFILE=expdat.dmp
DIRECTORY=data_pump_dir
TRANSPORT_DATAFILES=
C:\app\orauser\oradata\orawin\sales_101.dbf,
C:\app\orauser\oradata\orawin\sales_201.dbf
REMAP_SCHEMA=sales1:crm1
REMAP_SCHEMA=sales2:crm2
LOGFILE=tts_import.log
See Also:Oracle Database Utilities for information about using the import utility
If required, put the tablespaces into read/write mode on the destination database.
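This step uses the ALTER TABLESPACE statement. A minimal sketch, assuming the transported tablespaces are named sales_1 and sales_2 as in the earlier example:

```sql
-- Make the transported tablespaces writable on the destination
-- database once the import log shows no errors.
ALTER TABLESPACE sales_1 READ WRITE;
ALTER TABLESPACE sales_2 READ WRITE;
```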
The following sections describe some uses for transportable tablespaces:
Typical enterprise data warehouses contain one or more large fact tables. These fact tables can be partitioned by date, making the enterprise data warehouse a historical database. You can build indexes to speed up star queries. Oracle recommends that you build local indexes for such historically partitioned tables to avoid rebuilding global indexes every time you drop the oldest partition from the historical database.
Suppose every month you would like to load one month of data into the data warehouse. There is a large fact table in the data warehouse called sales, which has the following columns:
CREATE TABLE sales (invoice_no NUMBER,
   sale_year  INT NOT NULL,
   sale_month INT NOT NULL,
   sale_day   INT NOT NULL)
PARTITION BY RANGE (sale_year, sale_month, sale_day)
  (partition jan98 VALUES LESS THAN (1998, 2, 1),
   partition feb98 VALUES LESS THAN (1998, 3, 1),
   partition mar98 VALUES LESS THAN (1998, 4, 1),
   partition apr98 VALUES LESS THAN (1998, 5, 1),
   partition may98 VALUES LESS THAN (1998, 6, 1),
   partition jun98 VALUES LESS THAN (1998, 7, 1));
You create a local non-prefixed index:
CREATE INDEX sales_index ON sales(invoice_no) LOCAL;
Initially, all partitions are empty, and are in the same default tablespace. Each month, you want to create one partition and attach it to the partitioned sales table.
Suppose it is July 1998, and you would like to load the July sales data into the partitioned table. In a staging database, you create a new tablespace, ts_jul. You also create a table, jul_sales, in that tablespace with the same column types as the sales table. You can create the table jul_sales using the CREATE TABLE ... AS SELECT statement. After creating and populating jul_sales, you can also create an index, jul_sale_index, for the table, indexing the same column as the local index in the sales table. After building the index, transport the tablespace ts_jul to the data warehouse.
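The staging steps above can be sketched as follows. This is an illustration only; the source table staging_sales, the datafile name, and the sizing are assumptions, not part of the original example:

```sql
-- In the staging database: create the tablespace that will be
-- transported, then build the table and index inside it.
CREATE TABLESPACE ts_jul
   DATAFILE 'ts_jul01.dbf' SIZE 100M;

-- Populate jul_sales from a hypothetical staging source table,
-- keeping the same column types as the sales fact table.
CREATE TABLE jul_sales
   TABLESPACE ts_jul
   AS SELECT invoice_no, sale_year, sale_month, sale_day
        FROM staging_sales
       WHERE sale_year = 1998 AND sale_month = 7;

-- Index the same column as the local index on sales.
CREATE INDEX jul_sale_index ON jul_sales (invoice_no)
   TABLESPACE ts_jul;
```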
In the data warehouse, add a partition to the sales table for the July sales data. This also creates another partition for the local non-prefixed index:
ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);
Attach the transported table jul_sales to the table sales by exchanging it with the new partition:
ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales INCLUDING INDEXES WITHOUT VALIDATION;
This statement places the July sales data into the new partition jul98, attaching the new data to the partitioned table. This statement also converts the index jul_sale_index into a partition of the local index for the sales table. This statement should return immediately, because it operates only on the structural information and simply switches database pointers. If you know that the data in the new partition does not overlap with data in previous partitions, you are advised to specify the WITHOUT VALIDATION clause. Otherwise, the statement goes through all the new data in the new partition in an attempt to validate the range of that partition.
If all partitions of the sales table came from the same staging database (the staging database is never destroyed), the exchange statement always succeeds. In general, however, if data in a partitioned table comes from different databases, the exchange operation might fail. For example, if the jan98 partition of sales did not come from the same staging database, the preceding exchange operation can fail, returning the following error:
ORA-19728: data object number conflict between table JUL_SALES and partition JAN98 in table SALES
To resolve this conflict, move the offending partition by issuing the following statement:
ALTER TABLE sales MOVE PARTITION jan98;
Then retry the exchange operation.
After the exchange succeeds, you can safely drop jul_sales and jul_sale_index (both are now empty). Thus you have successfully loaded the July sales data into your data warehouse.
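The cleanup can be sketched in one statement; because the exchange swapped the segments, both objects are empty shells at this point:

```sql
-- Dropping the empty exchanged table also drops its index
-- jul_sale_index, so no separate DROP INDEX is needed.
DROP TABLE jul_sales;
```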
Transportable tablespaces provide a way to publish structured data on CDs. A data provider can load a tablespace with data to be published, generate the transportable set, and copy the transportable set to a CD. This CD can then be distributed.
When customers receive this CD, they can add the CD contents to an existing database without having to copy the datafiles from the CD to disk storage. For example, suppose that on a Windows NT system, the D: drive is the CD drive. You can import a transportable set with datafile catalog.f and export file expdat.dmp as follows:
IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='D:\catalog.f'
You can remove the CD while the database is still up. Subsequent queries to the tablespace return an error indicating that the database cannot open the datafiles on the CD. However, operations to other parts of the database are not affected. Placing the CD back into the drive makes the tablespace readable again.
Removing the CD is the same as removing the datafiles of a read-only tablespace. If you shut down and restart the database, the database indicates that it cannot find the removed datafile and does not open the database (unless you set the initialization parameter READ_ONLY_OPEN_DELAYED to TRUE). When READ_ONLY_OPEN_DELAYED is set to TRUE, the database reads the file only when someone queries the transported tablespace. Thus, when transporting a tablespace from a CD, you should always set the READ_ONLY_OPEN_DELAYED initialization parameter to TRUE, unless the CD is permanently attached to the database.
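Setting the parameter can be sketched as follows. READ_ONLY_OPEN_DELAYED is a static parameter, so this sketch assumes the instance uses a server parameter file and can be restarted:

```sql
-- Record the static parameter in the spfile; it takes effect at
-- the next instance startup.
ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED = TRUE SCOPE = SPFILE;
```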
You can use transportable tablespaces to mount a tablespace read-only on multiple databases. In this way, separate databases can share the same data on disk instead of duplicating data on separate disks. The tablespace datafiles must be accessible by all databases. To avoid database corruption, the tablespace must remain read-only in all the databases mounting the tablespace.
The following are two scenarios for mounting the same tablespace read-only on multiple databases:
The tablespace originates in a database that is separate from the databases that will share the tablespace.
You generate a transportable set in the source database, put the transportable set onto a disk that is accessible to all databases, and then import the metadata into each database on which you want to mount the tablespace.
The tablespace already belongs to one of the databases that will share the tablespace.
It is assumed that the datafiles are already on a shared disk. In the database where the tablespace already exists, you make the tablespace read-only, generate the transportable set, and then import the tablespace into the other databases, leaving the datafiles in the same location on the shared disk.
You can make a disk accessible by multiple computers in several ways. You can use either a cluster file system or raw disk. You can also use network file system (NFS), but be aware that if a user queries the shared tablespace while NFS is down, the database will hang until the NFS operation times out.
Later, you can drop the read-only tablespace in some of the databases. Doing so does not modify the datafiles for the tablespace. Thus, the drop operation does not corrupt the tablespace. Do not make the tablespace read/write unless only one database is mounting the tablespace.
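Dropping the shared read-only tablespace from one of the databases can be sketched as follows; the tablespace name sales_shared is a hypothetical placeholder:

```sql
-- Remove the tablespace definition from this database only.
-- Omitting the AND DATAFILES clause leaves the datafiles on the
-- shared disk untouched for the other databases.
DROP TABLESPACE sales_shared INCLUDING CONTENTS;
```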
Since a transportable tablespace set is a self-contained set of files that can be imported into any Oracle Database, you can archive old/historical data in an enterprise data warehouse using the transportable tablespace procedures described in this chapter.
See Also:Oracle Database Data Warehousing Guide for more details
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
See Also:Oracle Database Backup and Recovery User's Guide for information about how to perform TSPITR using transportable tablespaces
You can use the transportable tablespace feature to migrate a database to a different platform by creating a new database on the destination platform and performing a transport of all the user tablespaces. See Oracle Database Backup and Recovery User's Guide for more information.
You cannot transport the SYSTEM tablespace. Therefore, objects such as sequences, PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not transported. You must either create these objects manually on the destination database, or use Data Pump to transport the objects that are not moved by transportable tablespaces.