Oracle8i Utilities
Release 2 (8.1.6)

Part Number A76955-01


2
Import

This chapter describes how to use the Import utility, which reads an export file into an Oracle database.

Import reads only files created by Export. For information on how to export a database, see Chapter 1. To load data from other operating system files, see the discussion of SQL*Loader in Part II of this manual.

This chapter discusses the following topics:

What Is the Import Utility?

The basic concept behind Import is very simple. Import inserts the data objects extracted from one Oracle database by the Export utility (and stored in an Export dump file) into another Oracle database. Export dump files can only be read by Import. See Chapter 1 for more information about Oracle's Export utility.

Import reads the object definitions and table data that the Export utility extracted from an Oracle database and stored in an Oracle binary-format Export dump file located typically on disk or tape.

Such files can then be transferred using FTP or physically transported (in the case of tape) to a different site. The files can then be used with the Import utility to transfer data between databases that are on machines not connected through a network. The files can also be used as backups in addition to normal backup procedures.

Note: Export dump files can only be read by the Oracle utility Import. If you need to load data from ASCII fixed-format or delimited files, see Part II of this manual.

The Export and Import utilities can also facilitate certain aspects of Oracle Advanced Replication functionality, such as offline instantiation. See Oracle8i Replication for more information.

Figure 2-1 illustrates the process of importing from an Export dump file.

Figure 2-1 Importing an Export File


Table Objects: Order of Import

Table objects are imported as they are read from the export file. The export file contains objects in the following order:

  1. Type definitions

  2. Table definitions

  3. Table data

  4. Table indexes

  5. Integrity constraints, views, procedures, and triggers

  6. Bitmap, functional, and domain indexes

First, new tables are created. Then, data is imported and indexes are built. Then triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, functional, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it was originally inserted and again during the import).

For example, if the EMP table has a referential integrity constraint on the DEPT table and the EMP table is imported first, all EMP rows that reference departments that have not yet been imported into DEPT would be rejected if the constraints were enabled.

When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the EMP table already existed and referential integrity constraints were in force, many rows could be rejected.

A similar situation occurs when a referential integrity constraint on a table references itself. For example, if SCOTT's manager in the EMP table is DRAKE, and DRAKE's row has not yet been loaded, SCOTT's row will fail, even though it would be valid at the end of the import.

Suggestion: For the reasons mentioned previously, it is a good idea to disable referential constraints when importing into an existing table. You can then reenable the constraints after the import is completed.
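The suggestion above can be sketched in a small script. This is an illustrative Python sketch, not part of the Import utility: given a list of (table, constraint) pairs — the names below are hypothetical — it generates the ALTER TABLE statements to disable the referential constraints before the import and reenable them afterward. You would run the generated statements with your usual SQL tool.

```python
# Sketch: generate ALTER TABLE statements to disable referential
# constraints before an import and reenable them afterward.
# Table and constraint names are hypothetical examples.

def constraint_ddl(constraints, action):
    """Build ALTER TABLE ... DISABLE/ENABLE CONSTRAINT statements.

    constraints -- list of (table_name, constraint_name) pairs
    action      -- "DISABLE" or "ENABLE"
    """
    if action not in ("DISABLE", "ENABLE"):
        raise ValueError("action must be DISABLE or ENABLE")
    return [
        f"ALTER TABLE {table} {action} CONSTRAINT {name};"
        for table, name in constraints
    ]

fks = [("EMP", "FK_EMP_DEPT"), ("EMP", "FK_EMP_MGR")]
for stmt in constraint_ddl(fks, "DISABLE"):
    print(stmt)
# After the import completes, run constraint_ddl(fks, "ENABLE").
```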

Compatibility

Import can read export files created by Export release 5.1.22 and higher.

Import Modes

The Import utility provides four modes of import. The objects that are imported depend on the Import mode you choose and the mode that was used during the export. All users have two choices of import mode. A user with the IMP_FULL_DATABASE role (a privileged user) has four choices:

TABLES  

This mode allows you to import specific tables and partitions. A privileged user can qualify the tables by specifying the schema that contains them.  

FROMUSER  

This mode allows you to import all objects that belong to you (such as tables, grants, indexes, and procedures). A privileged user importing in user mode can import all objects in the schemas of a specified set of users.  

FULL  

Only users with the IMP_FULL_DATABASE role can import in this mode, which imports a full database export dump file.  

TRANSPORT_TABLESPACES  

This mode allows a privileged user to move a set of tablespaces from one Oracle database to another.  

See Import Parameters for information on specifying each mode.

A user with the IMP_FULL_DATABASE role must specify one of these options or specify an incremental import. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these options, a user-level import is performed.

Table 1-1 shows the objects that are exported and imported in each mode.

Understanding Table-Level and Partition-Level Import

You can import tables, partitions, and subpartitions in the following ways:

You must set the parameter IGNORE=Y when loading data into an existing table. See IGNORE for more information.

Table-Level Import

For each specified table, table-level Import imports all rows of the table. With table-level Import:

If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the Export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on Import.

Partition-Level Import

Partition-level Import imports a set of partitions or subpartitions from a source table into a target table. Note the following points:

For more information, see Using Table-Level and Partition-Level Export and Import.

Using Import

This section describes what you need to do before you begin importing and how to invoke and use the Import utility.

Before Using Import

To use Import, you must run either the script CATEXP.SQL or CATALOG.SQL (which runs CATEXP.SQL) after the database has been created or migrated to release 8.1.

Additional Information: The actual names of the script files depend on your operating system. The script file names and the method for running them are described in your Oracle operating system-specific documentation.

CATEXP.SQL or CATALOG.SQL need to be run only once on a database. You do not need to run either script again before performing future import operations. Both scripts perform the following tasks to prepare the database for Import:

Invoking Import

You can invoke Import in three ways: by specifying parameters on the command line, by specifying a parameters file containing the parameters, or interactively by letting Import prompt you for parameter values.

You can use a combination of the first and second options. That is, you can list parameters both in the parameters file and on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines what parameters override others. For example, assume the parameters file params.dat contains the parameter INDEXES=Y and Import is invoked with the following line:

imp system/manager PARFILE=params.dat INDEXES=N

In this case, because INDEXES=N occurs after PARFILE=params.dat, INDEXES=N overrides the value of the INDEXES parameter in the PARFILE.
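The precedence rule just described can be sketched as a simple left-to-right merge. This is an illustrative Python sketch of the behavior, not Import's actual implementation: parameters are applied in the order they appear, so a later setting overrides an earlier one.

```python
# Sketch of parameter precedence: settings are applied in order of
# appearance, so the last occurrence of a keyword wins.

def effective_params(ordered_settings):
    """Apply KEYWORD=value settings left to right; the last one wins."""
    result = {}
    for keyword, value in ordered_settings:
        result[keyword.upper()] = value
    return result

# Mirrors the example above: INDEXES=Y comes from params.dat (read at
# the position of PARFILE), INDEXES=N follows it on the command line.
settings = [("INDEXES", "Y"), ("INDEXES", "N")]
print(effective_params(settings)["INDEXES"])  # N
```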

You can specify the username and password in the parameter file, although, for security reasons, this is not recommended.

If you omit the username and password, Import prompts you for them.

See Import Parameters for a description of each parameter.

Invoking Import As SYSDBA

SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Import as SYSDBA. However, there may be a few situations in which you need to do so, usually at the request of Oracle technical support.

To invoke Import as SYSDBA, use the following syntax:

imp username/password AS SYSDBA

or, optionally:

imp username/password@instance AS SYSDBA 
 

Note: Because the string "AS SYSDBA" contains a blank, most operating systems require that the entire string "username/password AS SYSDBA" be placed in quotation marks or marked as a literal by some method. Some operating systems also require that quotation marks on the command line be preceded by an escape character. See your operating system-specific Oracle documentation for information about special and reserved characters on your system.

If either the username or password is omitted, Import will prompt you for it.

If you use the Import interactive mode, you will not be prompted to specify whether you want to connect as SYSDBA or @instance. You must specify "AS SYSDBA" and/or "@instance" with the username.

Getting Online Help

Import provides online help. Enter imp help=y on the command line to invoke it. You will see a display similar to the following:

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:12 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

You can let Import prompt you for parameters by entering the IMP
command followed by your username/password:

     Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:

     Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword  Description (Default)       Keyword      Description (Default)
--------------------------------------------------------------------------
USERID   username/password           FULL         import entire file (N)
BUFFER   size of data buffer         FROMUSER     list of owner usernames
FILE     input files (EXPDAT.DMP)    TOUSER       list of usernames
SHOW     just list file contents (N) TABLES       list of table names
IGNORE   ignore create errors (N)    RECORDLENGTH length of IO record
GRANTS   import grants (Y)           INCTYPE      incremental import type
INDEXES  import indexes (Y)          COMMIT       commit array insert (N)
ROWS     import data rows (Y)        PARFILE      parameter filename
LOG      log file of screen output   CONSTRAINTS  import constraints (Y)
DESTROY  overwrite tablespace data file (N)
INDEXFILE write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE  execute ANALYZE statements in dump file (Y)
FEEDBACK display progress every x rows(0)
TOID_NOVALIDATE  skip validation of specified type ids 
FILESIZE maximum size of each dump file
RECALCULATE_STATISTICS recalculate statistics (N)
VOLSIZE  number of bytes in file on each volume of a file on tape

The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into database
DATAFILES datafiles to be transported into database
TTS_OWNERS users that own data in the transportable tablespace set

Import terminated successfully without warnings.

The Parameter File

The parameter file allows you to specify Import parameters in a file where they can be easily modified or reused. Create a parameter file using any flat file text editor. The command-line option PARFILE=<filename> tells Import to read the parameters from the specified file rather than from the command line. For example:

imp parfile=filename

or

imp username/password parfile=filename

The syntax for parameter file specifications is one of the following:

KEYWORD=value
KEYWORD=(value) 
KEYWORD=(value1, value2, ...)

You can add comments to the parameter file by preceding them with the pound (#) sign. All characters to the right of the pound (#) sign are ignored. The following is an example of a partial parameter file listing:

FULL=y
FILE=DBA.DMP
GRANTS=Y
INDEXES=Y # import all indexes
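The comment rule above (everything to the right of a pound sign is ignored) can be sketched as a small parser. This is an illustrative Python sketch, not how Import itself reads the file:

```python
# Sketch: parse a parameter file of KEYWORD=value lines, ignoring
# everything to the right of a pound (#) sign.

def parse_parfile(text):
    """Return a dict of parameters from parameter-file text."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comment and whitespace
        if not line:
            continue
        keyword, _, value = line.partition("=")
        params[keyword.strip().upper()] = value.strip()
    return params

sample = """FULL=y
FILE=DBA.DMP
GRANTS=Y
INDEXES=Y # import all indexes
"""
print(parse_parfile(sample))
```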

See Import Parameters for a description of each parameter.

Privileges Required to Use Import

This section describes the privileges you need to use the Import utility and to import objects into your own and others' schemas.

Access Privileges

To use Import, you need the privilege CREATE SESSION to log on to the Oracle database server. This privilege belongs to the CONNECT role established during database creation.

You can do an import even if you did not create the export file. However, if the export file was created by someone other than you, you can import that file only if you have the IMP_FULL_DATABASE role.

Importing Objects into Your Own Schema

Table 2-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.

Table 2-1 Privileges Required to Import Objects into Your Own Schema
Object                      Privileges                            Privilege Type
--------------------------  ------------------------------------  --------------
clusters                    CREATE CLUSTER                        system
                            And: tablespace quota, or
                            UNLIMITED TABLESPACE                  system
database links              CREATE DATABASE LINK                  system
                            And: CREATE SESSION on remote db      system
triggers on tables          CREATE TRIGGER                        system
triggers on schemas         CREATE ANY TRIGGER                    system
indexes                     CREATE INDEX                          system
                            And: tablespace quota, or
                            UNLIMITED TABLESPACE                  system
integrity constraints       ALTER TABLE                           object
libraries                   CREATE ANY LIBRARY                    system
packages                    CREATE PROCEDURE                      system
private synonyms            CREATE SYNONYM                        system
sequences                   CREATE SEQUENCE                       system
snapshots                   CREATE SNAPSHOT                       system
stored functions            CREATE PROCEDURE                      system
stored procedures           CREATE PROCEDURE                      system
table data                  INSERT TABLE                          object
table definitions           CREATE TABLE                          system
(including comments and     And: tablespace quota, or
audit options)              UNLIMITED TABLESPACE                  system
views                       CREATE VIEW                           system
                            And: SELECT on the base table, or     object
                            SELECT ANY TABLE                      system
object types                CREATE TYPE                           system
foreign function libraries  CREATE LIBRARY                        system
dimensions                  CREATE DIMENSION                      system
operators                   CREATE OPERATOR                       system
indextypes                  CREATE INDEXTYPE                      system

Importing Grants

To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 2-2 shows the required conditions for the authorizations to be valid on the target system.

Table 2-2 Privileges Required to Import Grants
Grant              Conditions
-----------------  ------------------------------------------------------------
object privileges  Object must exist in the user's schema, or user must have
                   the object privileges with the WITH GRANT OPTION.
system privileges  User must have the system privileges as well as the WITH
                   ADMIN OPTION.

Importing Objects into Other Schemas

To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

Importing System Objects

To import system objects from a full database export file, the role IMP_FULL_DATABASE must be enabled. The parameter FULL specifies that these system objects are included in the import when the export file is a full export.

User Privileges

When user definitions are imported into an Oracle database, they are created with the CREATE USER command. So, when importing from export files created by previous versions of Export, users are not granted CREATE SESSION privileges automatically.

Importing into Existing Tables

This section describes factors to take into account when you import data into existing tables.

Manually Creating Tables Before Importing Data

When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, although you can increase the width of columns and change their order, you cannot do the following:

Disabling Referential Constraints

In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint existed for data that has not yet been imported.

These errors can still occur when data is loaded into existing tables. For example, if table EMP has a referential integrity constraint on the MGR column that verifies that the manager number exists in EMP, a perfectly legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported.

When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.

Referential constraints between tables can also cause problems. For example, if the EMP table appears before the DEPT table in the export file, but a referential check exists from the EMP table into the DEPT table, some of the rows from the EMP table may not be imported due to a referential constraint violation.

To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.

Manually Ordering the Import

When the constraints are reenabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.

To do so, perform several imports from an export file instead of one. First, import tables that are the targets of referential checks. Then, import the tables that reference them. This option works if tables do not reference each other in circular fashion, and if a table does not reference itself.
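The ordering rule above (import the parent tables before the tables that reference them) amounts to a dependency sort. This is an illustrative Python sketch, not part of the Import utility; the table names are hypothetical, and it assumes the stated conditions hold (no circular references and no self-references).

```python
# Sketch: compute an import order that loads referenced (parent) tables
# before the tables that reference them. Assumes no circular or
# self-references, as the text requires.

def import_order(references):
    """Topologically sort tables.

    references -- dict mapping each table to the set of tables it
                  references (its parents, which must be imported first)
    """
    order, done = [], set()

    def visit(table):
        if table in done:
            return
        for parent in references.get(table, set()):
            visit(parent)   # import parents first
        done.add(table)
        order.append(table)

    for table in references:
        visit(table)
    return order

# EMP references DEPT, so DEPT must be imported first:
print(import_order({"EMP": {"DEPT"}, "DEPT": set()}))  # ['DEPT', 'EMP']
```

Each table in the resulting list would then be imported with a separate invocation of Import (using the TABLES parameter).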

Import Parameters

The following diagrams show the syntax for the parameters that you can specify in the parameter file or on the command line.

The diagrams are: Import_start, ImpModes, ImpUserOpts, ImpTableOpts, ImpTTSOpts, ImpTTSFiles, ImpOpts, and ImpFileOpts.
The following sections describe parameter functionality and default values.

ANALYZE

Default: Y

Specifies whether or not the Import utility executes SQL ANALYZE statements found in the export file or loads optimizer statistics for tables, indexes, and columns that were precomputed on the Export system. See also the Import parameter RECALCULATE_STATISTICS and Importing Statistics.

BUFFER

Default: operating system-dependent

The buffer-size is the size, in bytes, of the buffer through which data rows are transferred.

The parameter BUFFER (buffer size) determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:

buffer_size = rows_in_array * maximum_row_size
     

For tables containing LONG, LOB, BFILE, REF, ROWID, LOGICAL ROWID, DATE, or type columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns. If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.

Additional Information: See your Oracle operating system-specific documentation to determine the default value for this parameter.
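The approximation above can be worked through with two small helper functions. This is an illustrative Python sketch of the arithmetic only; the 256-byte row size and 64 KB buffer below are hypothetical values, not defaults.

```python
# Sketch of the BUFFER approximation: buffer_size = rows_in_array *
# maximum_row_size, and the inverse (rows that fit in a given buffer).

def buffer_size(rows_in_array, maximum_row_size):
    """Approximate BUFFER needed to insert rows_in_array rows at a time."""
    return rows_in_array * maximum_row_size

def rows_per_insert(buffer_bytes, maximum_row_size):
    """Rows that fit in a given BUFFER (at least one row per insert)."""
    return max(1, buffer_bytes // maximum_row_size)

# Hypothetical table with 256-byte rows and a 64 KB buffer:
print(buffer_size(100, 256))        # 25600
print(rows_per_insert(65536, 256))  # 256
```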

CHARSET

Note: This parameter applies to Oracle Version 5 and 6 export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported. See The CHARSET Parameter if you still need to use this parameter.

COMMIT

Default: N

Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.

If a table has nested table columns or attributes, the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.

If COMMIT=N and a table is partitioned, each partition and subpartition in the Export file is imported in a separate transaction.

Specifying COMMIT=Y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=Y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a nonfatal error.

If a table does not have a uniqueness constraint, Import could produce duplicate rows when you reimport the data.

For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, DATE, or type columns, array inserts are not done. If COMMIT=Y, Import commits these tables after each row.

CONSTRAINTS

Default: Y

Specifies whether or not table constraints are to be imported. Note that the default is to import constraints. If you do not want constraints to be imported, you must set the parameter value to N.

DATAFILES

Default: none

When TRANSPORT_TABLESPACE is specified as Y, use this parameter to list the datafiles to be transported into the database.

See Transportable Tablespaces for more information.

DESTROY

Default: N

Specifies whether or not the existing data files making up the database should be reused. That is, specifying DESTROY=Y causes Import to include the REUSE option in the datafile clause of the CREATE TABLESPACE command, which causes Import to reuse the original database's data files after deleting their contents.

Note that the export file contains the data file names used in each tablespace. If you specify DESTROY=Y and attempt to create a second database on the same machine (for testing or other purposes), the Import utility will overwrite the first database's data files when it creates the tablespace. In this situation you should use the default, DESTROY=N, so that an error occurs if the data files already exist when the tablespace is created. Also, when you need to import into the original database, you will need to specify IGNORE=Y to add to the existing data files without replacing them.

Caution: If data files are stored on a raw device, DESTROY=N does not prevent files from being overwritten.

FEEDBACK

Default: 0 (zero)

Specifies that Import should display a progress meter in the form of a dot for n number of rows imported. For example, if you specify FEEDBACK=10, Import displays a dot each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.

FILE

Default: expdat.dmp

Specifies the names of the export files to import. The default extension is .dmp. Because Export supports multiple export files (see the following description of the FILESIZE parameter), you may need to specify multiple filenames to be imported.

You need not be the user who exported the export files; however, you must have read access to the files. If you were not the exporter of the export files, you must also have the IMP_FULL_DATABASE role granted to you.

FILESIZE

Export supports writing to multiple export files, and Import can read from multiple export files. If, on export, you specify a value (byte limit) for the Export FILESIZE parameter, Export will write only the number of bytes you specify to each dump file. On import, you must use the Import parameter FILESIZE to tell Import the maximum dump file size you specified on export.

Note: The maximum value that can be stored in a file is operating system-dependent. You should verify this maximum value in your operating system-specific documentation before specifying FILESIZE.

The FILESIZE value can be specified as a number followed by K (number of kilobytes). For example, FILESIZE=2K is the same as FILESIZE=2048. Similarly, M specifies megabytes (1024 * 1024) while G specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048b is the same as FILESIZE=2048).
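The suffix arithmetic above can be captured in a short helper. This is an illustrative Python sketch of the K/M/G/B rules just described, not part of the Import utility:

```python
# Sketch: interpret a FILESIZE value with the suffixes described above
# (K = 1024, M = 1024*1024, G = 1024**3, B = bytes, no multiplier).

def parse_filesize(value):
    """Convert a FILESIZE string such as '2K' or '2048b' to bytes."""
    value = value.strip().upper()
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "B": 1}
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)  # bare number: already in bytes

print(parse_filesize("2K"))     # 2048
print(parse_filesize("2048b"))  # 2048
print(parse_filesize("1M"))     # 1048576
```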

For information on the maximum size of dump files, see FILESIZE.

FROMUSER

Default: none

A comma-separated list of schemas to import. This parameter is relevant only to users with the IMP_FULL_DATABASE role. The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user mode export dump file).

You will typically use FROMUSER in conjunction with the Import parameter TOUSER, which you use to specify a list of usernames whose schemas will be targets for import (see TOUSER). However, if you omit specifying TOUSER, Import will:

FULL

Default: N

Specifies whether to import the entire export file.

GRANTS

Default: Y

Specifies whether to import object grants.

By default, the Import utility imports any object grants that were exported. If the export was a user-mode Export, the export file contains only first-level object grants (those granted by the owner).

If the export was a full database mode Export, the export file contains all object grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION). If you specify GRANTS=N, the Import utility does not import object grants. (Note that system grants are imported even if GRANTS=N.)

Note: Export does not export grants on data dictionary views for security reasons that affect Import. If such grants were exported, access privileges would be changed and the importer would not be aware of this.

HELP

Default: N

Displays a description of the Import parameters.

IGNORE

Default: N

Specifies how object creation errors should be handled. If you specify IGNORE=Y, Import overlooks object creation errors when it attempts to create database objects and continues without reporting the errors. Even if IGNORE=Y, Import will not replace an existing object; instead, it will skip the object.

If you accept the default, IGNORE=N, Import logs and/or displays the object creation error before continuing.

For tables, IGNORE=Y causes rows to be imported into existing tables. No message is given. If a table already exists, IGNORE=N causes an error to be reported, and the table is skipped with no rows inserted. Also, objects dependent on tables, such as indexes, grants, and constraints, will not be created if a table already exists and IGNORE=N.

Note that only object creation errors are ignored; other errors, such as operating system, database, and SQL errors, are not ignored and may cause processing to stop.

In situations where multiple refreshes from a single export file are done with IGNORE=Y, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an import with the value of the parameter CONSTRAINTS set to N. If you do a full import with the CONSTRAINTS parameter set to N, no constraints for any tables are imported.

If you want to import data into tables that already exist--perhaps because you want to use new storage parameters, or because you have already created the table in a cluster--specify IGNORE=Y. The Import utility imports the rows of data into the existing table.

Caution: When you import into existing tables, if no column in the table is uniquely indexed, rows could be duplicated if they were already present in the table. (This applies to nonincremental imports only. Incremental imports replace the table from the last complete export and then rebuild it to its last backup state from a series of cumulative and incremental exports.)

INCTYPE

Default: undefined

Specifies the type of incremental import.

The options are:

SYSTEM  

Imports the most recent version of system objects. You should specify the most recent incremental export file when you use this option. A SYSTEM import imports system objects such as foreign function libraries and object type definitions, but does not import user data or objects.  

RESTORE  

Imports all user database objects and data contained in the export file, excluding system objects.  

See Importing Incremental, Cumulative, and Complete Export Files for more information about the INCTYPE parameter.

INDEXES

Default: Y

Specifies whether or not to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.

You can postpone all user-generated index creation until after Import completes, by specifying INDEXES=N.

If indexes for the target table already exist at the time of the import, Import performs index maintenance when data is inserted into the table.

INDEXFILE

Default: none

Specifies a file to receive index-creation commands.

When this parameter is specified, index-creation commands for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.

If the Import parameter CONSTRAINTS is set to Y, Import also writes table constraints to the index file.

The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.

To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.

Perform the following steps to use this feature:

  1. Import using the INDEXFILE parameter to create a file of index-creation commands.

  2. Edit the file, making certain to add a valid password to the CONNECT strings.

  3. Rerun Import, specifying INDEXES=N.

    (This step imports the database objects while preventing Import from using the index definitions stored in the export file.)

  4. Execute the file of index-creation commands as a SQL script to create the index.

The INDEXFILE parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.

LOG

Default: none

Specifies a file to receive informational and error messages. If you specify a log file, the Import utility writes all information to the log in addition to the terminal display.

PARFILE

Default: undefined

Specifies a filename for a file that contains a list of Import parameters. For more information on using a parameter file, see The Parameter File.

RECALCULATE_STATISTICS

Default: N

Setting this parameter to Y will cause database optimizer statistics to generate when the exported data is imported. See Oracle8i Concepts for information about the optimizer and the statistics it uses. See also the Export parameter STATISTICS, the Import parameter ANALYZE, and Importing Statistics.

RECORDLENGTH

Default: operating system-dependent

Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.

If you do not define this parameter, it defaults to your platform-dependent value for BUFSIZ. For more information about the BUFSIZ default value, see your operating system-specific documentation.

You can set RECORDLENGTH to any value equal to or greater than your system's BUFSIZ. (The highest value is 64KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the database. It does not affect the operating system file block size.

Note: You can use this parameter to specify the size of the Import I/O buffer.

Additional Information: See your Oracle operating system-specific documentation to determine the proper value or to create a file with a different record size.

ROWS

Default: Y

Specifies whether or not to import the rows of table data.

SHOW

Default: N

When SHOW=Y, the contents of the export file are listed to the display and not imported. The SQL statements contained in the export are displayed in the order in which Import will execute them.

The SHOW parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.

SKIP_UNUSABLE_INDEXES

Default: N

Specifies whether or not Import skips building indexes that were set to the Index Unusable state (by either system or user). Refer to the ALTER SESSION statement in the Oracle8i SQL Reference manual for details. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted.

This parameter allows you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import.

You can use the INDEXFILE parameter in conjunction with INDEXES=N to provide the SQL scripts for re-creating the index. Without this parameter, row insertions that attempt to update unusable indexes will fail.

TABLES

Default: none

Specifies a list of table names to import. Use an asterisk (*) to indicate all tables. When specified, this parameter initiates a table mode import, which restricts the import to tables and their associated objects, as listed in Table 1-1. The number of tables that can be specified at the same time is dependent on command-line limits.

Although you can qualify table names with schema names (as in SCOTT.EMP) when exporting, you cannot do so when importing. In the following example, the TABLES parameter is specified incorrectly:

imp system/manager TABLES=(jones.accts, scott.emp,scott.dept)

The valid specification to import these tables is:

imp system/manager FROMUSER=jones TABLES=(accts)
imp system/manager FROMUSER=scott TABLES=(emp,dept)

Additional Information: Some operating systems, such as UNIX, require that you use escape characters before special characters, such as a parenthesis, so that the character is not treated as a special character. On UNIX, use a backslash (\) as the escape character, as shown in the following example:

TABLES=\(EMP,DEPT\)

Table Name Restrictions

Table names specified on the command line or in the parameter file cannot include a pound (#) sign, unless the table name is enclosed in quotation marks.

For example, if the parameter file contains the following line, Import interprets everything on the line after EMP# as a comment. As a result, DEPT and MYDATA are not imported.

TABLES=(EMP#, DEPT, MYDATA)

However, if the parameter file contains the following line, the Import utility imports all three tables:

TABLES=("EMP#", DEPT, MYDATA)

Note: When you specify the table name in quotation marks, it is case-sensitive. The name must exactly match the table name stored in the database. By default, database names are stored as uppercase.

Additional Information: Some operating systems require single quotation marks instead of double quotation marks. See your Oracle operating system-specific documentation.
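The comment behavior described above can be sketched as follows. This is a hypothetical illustration, not the Import utility's actual parameter-file parser: an unquoted pound sign starts a comment, so everything after EMP# on the line is discarded unless the name is quoted.

```python
# Hypothetical sketch (not Import's actual parser): strip a '#' comment
# from a parameter-file line unless the '#' appears inside quotation
# marks, mirroring why TABLES=(EMP#, DEPT, MYDATA) loses everything
# after EMP# while TABLES=("EMP#", DEPT, MYDATA) does not.
def strip_comment(line):
    in_quotes = False
    for i, ch in enumerate(line):
        if ch == '"':
            in_quotes = not in_quotes      # toggle quoted context
        elif ch == '#' and not in_quotes:
            return line[:i]                # unquoted '#' starts a comment
    return line

print(strip_comment('TABLES=(EMP#, DEPT, MYDATA)'))    # TABLES=(EMP
print(strip_comment('TABLES=("EMP#", DEPT, MYDATA)'))  # unchanged
```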

TABLESPACES

Default: none

When TRANSPORT_TABLESPACE is specified as Y, use this parameter to provide a list of tablespaces to be transported into the database.

See Transportable Tablespaces for more information.

TOID_NOVALIDATE

Default: none

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the TOID_NOVALIDATE parameter to specify types to exclude from TOID comparison.

The syntax is as follows:

toid_novalidate=([schemaname.]typename [, ...])

For example:

imp scott/tiger tables=foo toid_novalidate=bar
imp scott/tiger tables=foo toid_novalidate=(fred.type0,sally.type2,type3)

If you do not specify a schemaname for the type, it defaults to the schema of the importing user. For example, in the first preceding example, the type "bar" defaults to "scott.bar".
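The schema-defaulting rule above can be sketched as follows. This is an illustrative sketch of the documented name resolution, not Oracle's implementation: an unqualified type name is resolved against the importing user's schema.

```python
# Hypothetical sketch of TOID_NOVALIDATE name resolution: a type given
# without a schema qualifier defaults to the importing user's schema,
# so toid_novalidate=bar run by scott resolves to SCOTT.BAR.
def resolve_types(spec, importing_user):
    resolved = []
    for name in spec.strip('()').split(','):
        name = name.strip()
        if '.' not in name:                 # no schema given: default it
            name = f'{importing_user}.{name}'
        resolved.append(name.upper())
    return resolved

print(resolve_types('bar', 'scott'))
# ['SCOTT.BAR']
print(resolve_types('(fred.type0,sally.type2,type3)', 'scott'))
# ['FRED.TYPE0', 'SALLY.TYPE2', 'SCOTT.TYPE3']
```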

The output of a typical import with excluded types would contain entries similar to the following:

[...]
. importing IMP3's objects into IMP3
. . skipping TOID validation on type IMP2.TOIDTYP0
. . importing table                  "TOIDTAB3"          
[...]

Note: When you inhibit validation of the type identifier, it is your responsibility to ensure that the attribute list of the imported type matches the attribute list of the existing type. If these attribute lists do not match, results are unpredictable.

TOUSER

Default: none

Specifies a list of usernames whose schemas will be targets for import. The IMP_FULL_DATABASE role is required to use this parameter. To import to a schema other than the one that originally contained the object, specify TOUSER. For example:

imp system/manager FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, the schema names are paired. The following example imports SCOTT's objects into JOE's schema, and FRED's objects into TED's schema:

imp system/manager FROMUSER=scott,fred TOUSER=joe,ted

Note: If the FROMUSER list is longer than the TOUSER list, the remaining schemas will be imported into either the FROMUSER schema, or into the importer's schema, based on normal defaulting rules. You can use the following syntax to ensure that any extra objects go into the TOUSER schema:

imp system/manager FROMUSER=scott,adams TOUSER=ted,ted

Note that user Ted is listed twice.
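The positional pairing described above can be sketched as follows. This is an illustrative sketch of the documented FROMUSER/TOUSER pairing for equal-length lists, not the utility's code; the repeated name (ted,ted) is the documented way to route the extra FROMUSER schema to the intended target.

```python
# Hypothetical sketch of FROMUSER/TOUSER pairing: schemas are paired
# positionally, so FROMUSER=scott,fred TOUSER=joe,ted imports SCOTT's
# objects into JOE and FRED's objects into TED. Listing the last TOUSER
# name again (ted,ted) routes extra FROMUSER schemas to that target.
def pair_schemas(fromusers, tousers):
    return list(zip(fromusers, tousers))

print(pair_schemas(['scott', 'fred'], ['joe', 'ted']))
# [('scott', 'joe'), ('fred', 'ted')]
print(pair_schemas(['scott', 'adams'], ['ted', 'ted']))
# [('scott', 'ted'), ('adams', 'ted')]
```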

TRANSPORT_TABLESPACE

Default: N

When specified as Y, instructs Import to import transportable tablespace metadata from an export file.

See Transportable Tablespaces for more information.

TTS_OWNERS

Default: none

When TRANSPORT_TABLESPACE is specified as Y, use this parameter to list the users who own the data in the transportable tablespace set.

See Transportable Tablespaces for more information.

USERID (username/password)

Default: undefined

Specifies the username/password (and optional connect string) of the user performing the import.

USERID can also be:

username/password AS SYSDBA

or

username/password@instance AS SYSDBA

See Invoking Import As SYSDBA for more information. Note that your operating system may treat AS SYSDBA as a special string, in which case you must enclose the entire string in quotation marks.

Optionally, you can specify the @connect_string clause for Net8. See the user's guide for your Net8 protocol for the exact syntax of @connect_string. See also Oracle8i Distributed Database Systems.

VOLSIZE

Specifies the maximum number of bytes in an export file on each volume of tape.

The VOLSIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits. See your operating system-specific documentation for more information.

The VOLSIZE value can be specified as a number followed by K (number of kilobytes). For example, VOLSIZE=2K is the same as VOLSIZE=2048. Similarly, M specifies megabytes (1024 * 1024) and G specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to get the final file size (VOLSIZE=2048b is the same as VOLSIZE=2048).
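The suffix arithmetic above can be sketched as follows; this is an illustrative parser for the documented K/M/G/B multipliers, not the utility's own code.

```python
# Sketch of the VOLSIZE suffix arithmetic described above: K, M, and G
# multiply by 1024, 1024**2, and 1024**3; B (or no suffix) means bytes.
MULTIPLIERS = {'B': 1, 'K': 1024, 'M': 1024**2, 'G': 1024**3}

def parse_volsize(value):
    value = value.upper()
    if value[-1] in MULTIPLIERS:
        return int(value[:-1]) * MULTIPLIERS[value[-1]]
    return int(value)                       # bare number: bytes

print(parse_volsize('2K'))     # 2048
print(parse_volsize('2048b'))  # 2048
print(parse_volsize('1M'))     # 1048576
```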

Using Table-Level and Partition-Level Export and Import

Both table-level Export and partition-level Export can migrate data across tables, partitions, and subpartitions.

Guidelines for Using Partition-Level Import

This section provides detailed information about partition-level Import. For general information, see Understanding Table-Level and Partition-Level Import.

Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import. Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

If ROWS=Y (default), and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

If ROWS=Y (default) and IGNORE=Y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

If the target table is partitioned, Import reports any rows that are rejected because they fall above the highest partition of the target table.

If ROWS=N, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=Y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

Migrating Data Across Partitions and Tables

Specifying table-name:partition-name with the TABLES parameter restricts the rows read from the Export file to those in the specified source partition or subpartition. If you do not specify the partition or subpartition name, the entire table is used as the source. If you specify a partition name for a composite partition, all subpartitions within the composite partition are used as the source.

Import issues a warning if the specified partition or subpartition is not in the export file.

Data exported from one or more partitions or subpartitions can be imported into one or more partitions or subpartitions. Import inserts rows into partitions or subpartitions based on the partitioning criteria in the target table.

In the following example, the partition specified by the partition-name is a composite partition. All of its subpartitions will be imported:

imp system/manager FILE = expdat.dmp FROMUSER = scott TABLES=b:py

The following example causes row data of partitions qc and qd of table scott.e to be imported into the table scott.e:

imp scott/tiger FILE = expdat.dmp TABLES = (e:qc, e:qd) IGNORE=y

If table "e" does not exist in the Import target database, it is created and data is inserted into the same partitions. If table "e" existed on the target system before Import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc and qd.

Note: With partition-level Import to an existing table, you must set up the target partitions or subpartitions properly and use IGNORE=Y.

Example Import Sessions

This section gives some examples of import sessions that show you how to use the parameter file and command-line methods. Four scenarios are illustrated in the examples that follow.

Example Import of Selected Tables for a Specific User

In this example, using a full database export file, an administrator imports the DEPT and EMP tables into the SCOTT schema.

Parameter File Method
 

> imp system/manager parfile=params.dat

The params.dat file contains the following information:

FILE=dba.dmp
SHOW=n
IGNORE=n
GRANTS=y
FROMUSER=scott
TABLES=(dept,emp)
Command-Line Method
 

> imp system/manager file=dba.dmp fromuser=scott tables=(dept,emp)
Import Messages

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:15 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path
import done in WE8DEC character set and WE8DEC NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table                         "DEPT"          4 rows imported
. . importing table                          "EMP"         14 rows imported
Import terminated successfully without warnings.

Example Import of Tables Exported by Another User

This example illustrates importing the UNIT and MANAGER tables from a file exported by BLAKE into the SCOTT schema.

Parameter File Method
 

> imp system/manager parfile=params.dat

The params.dat file contains the following information:

FILE=blake.dmp
SHOW=n
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=blake
TOUSER=scott
TABLES=(unit,manager)
Command-Line Method
 

    
> imp system/manager fromuser=blake touser=scott file=blake.dmp tables=(unit,manager)
Import Messages

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:18 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path

Warning: the objects were exported by BLAKE, not by you

import done in WE8DEC character set and WE8DEC NCHAR character set
. . importing table                         "UNIT"          4 rows imported
. . importing table                      "MANAGER"          4 rows imported
Import terminated successfully without warnings.

Example Import of Tables from One User to Another

In this example, a DBA imports all tables belonging to SCOTT into user BLAKE's account.

Parameter File Method
 

> imp system/manager parfile=params.dat

The params.dat file contains the following information:

FILE=scott.dmp
FROMUSER=scott
TOUSER=blake
TABLES=(*)
Command-Line Method
 

> imp system/manager file=scott.dmp fromuser=scott touser=blake tables=(*)
Import Messages

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:19 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path

Warning: the objects were exported by SCOTT, not by you

import done in WE8DEC character set and WE8DEC NCHAR character set
. importing SCOTT's objects into BLAKE
. . importing table                        "BONUS"          0 rows imported
. . importing table                         "DEPT"          4 rows imported
. . importing table                          "EMP"         14 rows imported
. . importing table                     "SALGRADE"          5 rows imported
Import terminated successfully without warnings.

Example Import Session Using Partition-Level Import

This section describes an import of a table with multiple partitions, an import of a table with partitions and subpartitions, and the repartitioning of a table on a different column.

Example 1: A Partition-Level Import

In this example, emp is a partitioned table with three partitions: p1, p2, and p3.

A table-level export file was created using the following command:

> exp scott/tiger tables=emp file=exmpexp.dat rows=y 

About to export specified tables via Conventional Path -- 
. . exporting table                       EMP 
. . exporting partition                        P1          7 rows exported 
. . exporting partition                        P2         12 rows exported 
. . exporting partition                        P3          3 rows exported 

Export terminated successfully without warnings. 

In a partition-level import you can specify the specific partitions of an exported table that you want to import. In this example, these are p1 and p3 of table emp:

> imp scott/tiger tables=(emp:p1,emp:p3) file=exmpexp.dat rows=y 

Export file created by EXPORT:V08.01.06 via direct path 
import done in WE8DEC character set and WE8DEC NCHAR character set 
. importing SCOTT's objects into SCOTT 
. . importing partition            "EMP":"P1"                    7 rows imported 
. . importing partition            "EMP":"P3"                    3 rows imported 
Import terminated successfully without warnings. 

Example 2: A Partition-Level Import of a Composite Partitioned Table

This example demonstrates that the partitions and subpartitions of a composite partitioned table are imported. EMP is a partitioned table with two composite partitions: p1 and p2. P1 has three subpartitions: p1_sp1, p1_sp2, and p1_sp3. P2 has two subpartitions: p2_sp1 and p2_sp2.

A table-level export file was created using the following command:

> exp scott/tiger tables=emp file=exmpexp.dat rows=y 

About to export specified tables via Conventional Path -- 
. . exporting table                       EMP 
. . exporting partition                        P1 
. . exporting subpartition                     P1_SP1         11 rows exported 
. . exporting subpartition                     P1_SP2         17 rows exported 
. . exporting subpartition                     P1_SP3          3 rows exported 
. . exporting partition                        P2 
. . exporting subpartition                     P2_SP1          5 rows exported 
. . exporting subpartition                     P2_SP2         12 rows exported 

Export terminated successfully without warnings. 

The following import command imports subpartitions p1_sp2 and p1_sp3 of composite partition p1 in table EMP and all subpartitions of composite partition p2 in table EMP.

> imp scott/tiger tables=(emp:p1_sp2,emp:p1_sp3,emp:p2) file=exmpexp.dat rows=y 
  
Export file created by EXPORT:V08.01.06 via conventional path 
import done in WE8DEC character set and WE8DEC NCHAR character set 
. importing SCOTT's objects into SCOTT 
. . importing table               EMP
. . importing subpartition        "EMP":"P1_SP2"               17 rows imported 
. . importing subpartition        "EMP":"P1_SP3"                3 rows imported 
. . importing subpartition        "EMP":"P2_SP1"                5 rows imported 
. . importing subpartition        "EMP":"P2_SP2"               12 rows imported 
  
Import terminated successfully without warnings. 

Example 3: Repartitioning a Table on a Different Column

This example assumes the EMP table has two partitions, based on the EMPNO column. This example repartitions the EMP table on the DEPTNO column.

Perform the following steps to repartition a table on a different column:

  1. Export the table to save the data.

  2. Delete the table from the database.

  3. Create the table again with the new partitions.

  4. Import the table data.

The following example shows how to repartition a table on a different column:

> exp scott/tiger tables=emp file=empexp.dat

Export: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:34 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production
Export done in WE8DEC character set and WE8DEC NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                        EMP_LOW          4 rows exported
. . exporting partition                       EMP_HIGH         10 rows exported
Export terminated successfully without warnings.

SQL> drop table emp cascade constraints;
Table dropped.
SQL> create table emp
     2>    (
     3>    empno    number(4) not null,
     4>    ename    varchar2(10),
     5>    job      varchar2(9),
     6>    mgr      number(4),
     7>    hiredate date,
     8>    sal      number(7,2),
     9>    comm     number(7,2),
    10>    deptno   number(2)
    11>    )
    12> partition by range (deptno)
    13>   (
    14>   partition dept_low values less than (15)
    15>     tablespace tbs_1,
    16>   partition dept_mid values less than (25)
    17>     tablespace tbs_2,
    18>   partition dept_high values less than (35)
    19>     tablespace tbs_3
    20>   );
Table created.
SQL> exit

> imp scott/tiger tables=emp file=empexp.dat ignore=y

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:38 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path
import done in WE8DEC character set and WE8DEC NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing partition                "EMP":"EMP_LOW"          4 rows imported
. . importing partition               "EMP":"EMP_HIGH"         10 rows imported
Import terminated successfully without warnings.

The following SELECT statements show that the data is partitioned on the DEPTNO column:

SQL> select empno, deptno from emp partition (dept_low);
EMPNO      DEPTNO    
---------- ----------
      7782         10
      7839         10
      7934         10
3 rows selected.
SQL> select empno, deptno from emp partition (dept_mid);
EMPNO      DEPTNO    
---------- ----------
      7369         20
      7566         20
      7788         20
      7876         20
      7902         20
5 rows selected.
SQL> select empno, deptno from emp partition (dept_high);
EMPNO      DEPTNO    
---------- ----------
      7499         30
      7521         30
      7654         30
      7698         30
      7844         30
      7900         30
6 rows selected.

Using the Interactive Method

Starting Import from the command line with no parameters initiates the interactive method. The interactive method does not provide prompts for all Import functionality. The interactive method is provided only for backward compatibility.

If you do not specify a username/password on the command line, the Import utility prompts you for this information. The following example shows the interactive method:

> imp system/manager

Import: Release 8.1.6.0.0 - Production on Wed Oct 6 15:26:46 1999

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production

Import file: expdat.dmp > 
Enter insert buffer size (minimum is 8192) 30720> 
Export file created by EXPORT:V08.01.06 via conventional path

Warning: the objects were exported by BLAKE, not by you

import done in WE8DEC character set and WE8DEC NCHAR character set
List contents of import file only (yes/no): no > 
Ignore create error due to object existence (yes/no): no > 
Import grants (yes/no): yes > 
Import table data (yes/no): yes > 
Import entire export file (yes/no): no > 
. importing BLAKE's objects into SYSTEM
. . importing table                         "DEPT"          4 rows imported
. . importing table                      "MANAGER"          3 rows imported
Import terminated successfully without warnings.

You may not see all prompts in a given Import session because some prompts depend on your responses to other prompts. Some prompts show a default answer; if the default is acceptable, press Enter.

Note: If you specify No at the "Import entire export file" prompt, Import prompts you for a schema name and the table names you want to import for that schema:

Enter table(T) or partition(T:P) names. Null list means all tables for user

Entering a null table list causes all tables in the schema to be imported. You can specify only one schema at a time when you use the interactive method.

Importing Incremental, Cumulative, and Complete Export Files

An incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export. Therefore, an import from an incremental export file imports the table definition and all of its data, not just the changed rows.

Because imports from incremental export files are dependent on the method used to export the data, you should also read Incremental, Cumulative, and Complete Exports.

It is important to note that, because importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported. This behavior differs from a normal import. During a normal import, objects are not dropped and an error is usually generated if the object already exists.

Restoring a Set of Objects

The order in which incremental, cumulative, and complete exports are done is important. A set of objects cannot be restored until a complete export has been run on a database. Once that has been done, take the following steps to restore objects:

  1. Import the most recent incremental export file (specify INCTYPE=SYSTEM for the import) or cumulative export file, if no incremental exports have been taken. This step imports the correct system objects (for example, users, object types, and so forth) for the database.

  2. Import the most recent complete export file. (Specify INCTYPE=RESTORE for the import.)

  3. Import all cumulative export files after the last complete export. (Specify INCTYPE=RESTORE for the import.)

  4. Import all incremental export files after the last cumulative export. (Specify INCTYPE=RESTORE for the import.)

For example, assume that a complete export X1 was followed by two cumulative exports, C1 and C2, and then by three incremental exports, I1, I2, and I3. You should then import in the following order:

imp system/manager INCTYPE=SYSTEM  FULL=Y FILE=I3
imp system/manager INCTYPE=RESTORE FULL=Y FILE=X1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I3
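The restore ordering above can be sketched as an algorithm. This is an illustrative sketch under the stated rules (SYSTEM from the most recent incremental, then RESTORE for the complete export, cumulatives after it, and incrementals after the last cumulative), not an Oracle-supplied tool; the timeline names X1, C1, C2, I1, I2, I3 match the example files above.

```python
# Sketch of the restore ordering above. 'exports' is a chronological
# list of (name, kind) with kind in {'complete','cumulative','incremental'}.
def restore_order(exports):
    last_complete = max(i for i, (_, k) in enumerate(exports) if k == 'complete')
    cumulatives = [i for i, (_, k) in enumerate(exports)
                   if k == 'cumulative' and i > last_complete]
    last_cum = max(cumulatives, default=last_complete)
    incrementals = [i for i, (_, k) in enumerate(exports)
                    if k == 'incremental' and i > last_cum]
    plan = []
    # Step 1: most recent incremental (or cumulative) file, INCTYPE=SYSTEM.
    newest = incrementals[-1] if incrementals else last_cum
    plan.append((exports[newest][0], 'SYSTEM'))
    # Steps 2-4: complete, then cumulatives, then incrementals, INCTYPE=RESTORE.
    for i in [last_complete] + cumulatives + incrementals:
        plan.append((exports[i][0], 'RESTORE'))
    return plan

timeline = [('X1', 'complete'), ('C1', 'cumulative'), ('C2', 'cumulative'),
            ('I1', 'incremental'), ('I2', 'incremental'), ('I3', 'incremental')]
for f, inctype in restore_order(timeline):
    print(f, inctype)
```

Run against this timeline, the plan reproduces the imp command sequence shown above: I3 with INCTYPE=SYSTEM, then X1, C1, C2, I1, I2, and I3 with INCTYPE=RESTORE.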

Importing Object Types and Foreign Function Libraries

For incremental imports only, object types and foreign function libraries are handled as system objects. That is, their definitions are only imported with the other system objects when INCTYPE=SYSTEM. This imports the most recent definition of the object type (including the object identifier) and the most recent definition of the library specification.

Then, as tables are imported from earlier incremental export files using INCTYPE=RESTORE, Import verifies that any object types needed by the table exist and have the same object identifier. If the object type does not exist, or if it exists but its object identifier does not match, the table is not imported. A mismatch indicates that the object type was dropped or replaced subsequent to the incremental export, which would also have required dropping all tables dependent on it.

Controlling Index Creation and Maintenance

This section describes the behavior of Import with respect to index creation and maintenance.

Index Creation and Maintenance Controls

If SKIP_UNUSABLE_INDEXES=Y, the Import utility postpones maintenance on all indexes that were set to Index Unusable before Import. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during Import of existing tables.

Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES=N. The supporting index will be in an UNUSABLE state until the duplicates are removed and the index is rebuilt.

Delaying Index Creation

Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index creation, re-creation, or maintenance after import completes is generally faster than updating the indexes for each row inserted by Import.

Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. You can postpone creation of indexes until after the Import completes by specifying INDEXES=N. (INDEXES=Y is the default.) You can then store the missing index definitions in a SQL script by running Import while using the INDEXFILE parameter. The index-creation commands that would otherwise be issued by Import are instead stored in the specified file.

After the import is complete, you must create the indexes, typically by using the contents of the file (specified with INDEXFILE) as a SQL script after specifying passwords for the connect statements.

If the total amount of index updates is smaller during data insertion than at index rebuild time after import, you can choose to update those indexes at table data insertion time by setting INDEXES=Y.

Example of Postponing Index Maintenance

For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system. Assume that local indexes p1_ind on partition p1 and p2_ind on partition p2 exist also. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted by the Export file (expdat.dmp). Assume that the reverse is true for p2.

Consequently, performing index updates for p1_ind during table data insertion time is more efficient than at partition index rebuild time. The opposite is true for p2_ind.

Users can postpone local index maintenance for p2_ind during Import by using the following steps:

  1. Issue the following SQL statement before Import:

    ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;
    
    
  2. Issue the following Import command:

    imp scott/tiger FILE=expdat.dmp TABLES = (t:p1, t:p2) IGNORE=Y SKIP_UNUSABLE_INDEXES=Y

    Because SKIP_UNUSABLE_INDEXES=Y is specified, Import skips maintenance of p2_ind, which was set to Index Unusable in step 1, while continuing to update p1_ind.

  3. Issue the following SQL statement after Import:

    ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;


In this example, local index p1_ind on p1 will be updated when table data is inserted into partition p1 during Import. Local index p2_ind on p2 will be updated at index rebuild time, after Import.

Reducing Database Fragmentation

A database with many noncontiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export and import as follows:

  1. Do a full database export (FULL=Y) to back up the entire database.

  2. Shut down Oracle after all users are logged off.

  3. Delete the database. See your Oracle operating system-specific documentation for information on how to delete a database.

  4. Re-create the database using the CREATE DATABASE command.

  5. Do a full database import (FULL=Y) to restore the entire database.

See the Oracle8i Administrator's Guide for more information about creating databases.

Warning, Error, and Completion Messages

By default, Import displays all error messages. If you specify a log file by using the LOG parameter, Import writes the error messages to the log file in addition to displaying them on the terminal. You should always specify a log file when you import. (You can redirect Import's output to a file on those systems that permit I/O redirection.)

Additional Information: See LOG. Also see your operating system-specific documentation for information on redirecting output.

When an import completes without errors, the message "Import terminated successfully without warnings" is issued. If one or more nonfatal errors occurred, and Import was able to continue to completion, the message "Import terminated successfully with warnings" occurs. If a fatal error occurs, Import ends immediately with the message "Import terminated unsuccessfully."

Additional Information: Messages are documented in Oracle8i Error Messages and your operating system-specific documentation.

Error Handling

This section describes errors that can occur when you import database objects.

Row Errors

If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.

Failed Integrity Constraints

A row error is generated if a row violates one of the integrity constraints in force on your system, including NOT NULL, UNIQUE, PRIMARY KEY, referential integrity (FOREIGN KEY), and CHECK constraints.

See the Oracle8i Application Developer's Guide - Fundamentals and Oracle8i Concepts for more information on integrity constraints.

Invalid Data

Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid data types, and by any other INSERT error.

Errors Importing Database Objects

Errors can occur for many reasons when you import database objects, as described in this section. When such an error occurs, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.

Object Already Exists

If a database object to be imported already exists in the database, an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.

If IGNORE=N (the default), the error is reported, and Import continues with the next database object. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.

If IGNORE=Y, object creation errors are not reported. The database object is not replaced. If the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors (such as operating system, database, and SQL errors) are reported and processing may stop.

Caution: Specifying IGNORE=Y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.
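For example, to load rows from the export file into tables that you have already created with your own storage parameters (the file and table names are examples only):

```shell
imp scott/tiger FILE=expdat.dmp IGNORE=Y TABLES=(emp,dept)
```

The object creation errors for the existing EMP and DEPT tables are suppressed, and the rows are inserted into the existing tables.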

Sequences

If sequence numbers need to be reset to the values in an export file as part of an import, you should drop the sequences before importing. A sequence that is not dropped before the import is not set to the values captured in the export file, because Import does not drop and re-create a sequence that already exists. If the sequence already exists, the export file's CREATE SEQUENCE statement fails and the sequence is not imported.
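For example, to ensure that a sequence is re-created with the value captured in the export file, drop it before running Import (the sequence name is hypothetical):

```sql
DROP SEQUENCE emp_seq;
```

Because the sequence no longer exists, the CREATE SEQUENCE statement in the export file then succeeds.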

Resource Errors

Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted.

If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=Y, Import commits the partial import of the current table. If not, a rollback of the current table occurs before Import continues. (See the description of COMMIT for information about the COMMIT parameter.)

Domain Index Metadata

Domain indexes can have associated application-specific metadata that is imported using anonymous PL/SQL blocks. These PL/SQL blocks are executed at import time prior to the CREATE INDEX statement. If a PL/SQL block causes an error, the associated index is not created because the metadata is considered an integral part of the index.

Fatal Errors

When a fatal error occurs, Import terminates. For example, if you enter an invalid username/password combination or attempt to run Export or Import without having prepared the database by running the scripts CATEXP.SQL or CATALOG.SQL, a fatal error occurs and causes Import to terminate.

Network Considerations

This section describes factors to take into account when using Export and Import across a network.

Transporting Export Files Across a Network

When transferring an export file across a network, be sure to transmit the file using a protocol that preserves the integrity of the file. For example, when using FTP or a similar file transfer protocol, transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.

Exporting and Importing with Net8

Net8 lets you export and import over a network. For example, running Import locally, you can read data into a remote Oracle database.

To use Import with Net8, you must include the connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Net8 protocol. For more information on Net8, see the Net8 Administrator's Guide. See also Oracle8i Distributed Database Systems.
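For example, assuming a Net8 service name of remotedb is defined for the target database, the following command reads the local export file into the remote database (file and table names are placeholders):

```shell
imp scott/tiger@remotedb FILE=expdat.dmp TABLES=(emp)
```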

Import and Snapshots

Note: In certain situations, particularly those involving data warehousing, snapshots may be referred to as materialized views. This section retains the term snapshot.

The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.

This section discusses how fast refreshes are affected when these objects are imported. See also Oracle8i Replication, for Import-specific information about migration and compatibility and for more information about snapshots and snapshot logs.

Master Table

If the master table already exists in the target database and has a snapshot log, the imported data is recorded in that snapshot log.

Snapshot Log

When a ROWID snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.

To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly. In contrast, when a primary key snapshot log is exported, the keys' values do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import. See Oracle8i Replication for information about primary key snapshots.
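A complete refresh can be requested from SQL*Plus with the DBMS_SNAPSHOT package; the snapshot name below is a placeholder, and 'C' requests a complete refresh:

```sql
EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');
```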

Snapshots

A snapshot that has been restored from an export file has "gone back in time" to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.

Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).

Importing a Snapshot

When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.

Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.

Importing a Snapshot into a Different Schema

Snapshots, snapshot logs, and related items are exported with the schema name explicitly given in the DDL statements; therefore, snapshots and their related items cannot be imported into a different schema.

If you attempt to use FROMUSER/TOUSER to import snapshot data, an error will be written to the Import log file and the items will not be imported.

Import and Instance Affinity

If you use instance affinity to associate jobs with instances in databases you plan to export and import, refer to the Oracle8i Administrator's Guide, the Oracle8i Reference, and Oracle8i Parallel Server Concepts and Administration for information about using instance affinity with the Import and Export utilities.

Fine-Grained Access Support

You can export tables with fine-grained access policies enabled. When doing so, keep the following considerations in mind:

Storage Parameters

By default, a table is imported into its original tablespace.

If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table is partitioned, is a type table, contains LOB or VARRAY columns, or is an index-only table with an overflow segment.

If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. (See Reorganizing Tablespaces to see how you can use this to your advantage.)

The OPTIMAL Parameter

The storage parameter OPTIMAL for rollback segments is not preserved during export and import.

Storage Parameters for OID INDEXes and LOB Columns

Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB or VARRAY columns, the LOB or VARRAY data is created with its current storage parameters.

If you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. Note, however, that storage parameters for LOB data cannot be altered prior to export (for example, chunk size for a LOB column, whether a LOB column is CACHE or NOCACHE, and so forth).

Note that LOB data might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.

If LOB data resides in a tablespace that does not exist at the time of import or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.

Overriding Storage Parameters

Before using the Import utility to import data, you may want to create large tables with different storage parameters. If so, you must specify IGNORE=Y on the command line or in the parameter file.

The Export COMPRESS Parameter

By default at export time, storage parameters are adjusted to consolidate all data into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents are not to be consolidated (by setting COMPRESS=N). See COMPRESS for a description of the COMPRESS parameter.

Read-Only Tablespaces

Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.

If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
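For example, for a hypothetical tablespace named HISTORY:

```sql
-- Before import: make the existing read-only tablespace writable
ALTER TABLESPACE history READ WRITE;

-- After import: restore read-only status if desired
ALTER TABLESPACE history READ ONLY;
```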

Dropping a Tablespace

You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the import command and specify IGNORE=Y.

In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name as the tablespace you want to drop (before logging off). During import, with IGNORE=Y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted tablespace.

All objects from that tablespace will be imported into their owner's default tablespace with the exception of partitioned tables, type tables, and tables that contain LOB or VARRAY columns or index-only tables with overflow segments. Import cannot determine which tablespace caused the error. Instead, you must first create a table and then import the table again, specifying IGNORE=Y.

Objects are not imported into the default tablespace if the tablespace does not exist or you do not have the necessary quotas for your default tablespace.

Reorganizing Tablespaces

If a user's quotas allow it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-only table with an overflow segment. This scenario can be used to move a user's tables from one tablespace to another.

For example, you need to move JOE's tables from tablespace A to tablespace B after a full database export. Follow these steps:

  1. If JOE has the UNLIMITED TABLESPACE privilege, revoke it. Set JOE's quota on tablespace A to zero. Also revoke all roles that might have such privileges or quotas.

    Note: Role revokes do not cascade. Therefore, users who were granted other roles by JOE will be unaffected.

  2. Export JOE's tables.

  3. Drop JOE's tables from tablespace A.

  4. Give JOE a quota on tablespace B and make it the default tablespace.

  5. Import JOE's tables. (By default, Import puts JOE's tables into tablespace B.)
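The privilege and quota changes in steps 1 and 4 might be carried out as follows; the tablespace and user names follow the example above, and the quota values are illustrative:

```sql
-- Step 1 (as a DBA): remove JOE's ability to write to tablespace A
REVOKE UNLIMITED TABLESPACE FROM joe;
ALTER USER joe QUOTA 0 ON a;

-- Step 4 (as a DBA): give JOE a quota on tablespace B and make it his default
ALTER USER joe QUOTA UNLIMITED ON b DEFAULT TABLESPACE b;
```

Steps 2, 3, and 5 use Export, DROP TABLE, and Import in the usual way.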

Character Set and NLS Considerations

This section describes the character set conversions that can take place during export and import operations.

Character Set Conversion

The following sections describe character conversion for CHAR and NCHAR data.

CHAR Data

Up to three character set conversions may be required for character data during an export/import operation:

  1. Export writes export files using the character set specified in the NLS_LANG environment variable for the user session. A character set conversion is performed if the value of NLS_LANG differs from the database character set.

  2. If the character set in the export file is different from the Import user session character set, Import performs a character set conversion to its user session character set. Import can perform this conversion only if the ratio of the width of the widest character in its user session character set to the width of the smallest character in the export file character set is 1.

  3. A final character set conversion may be performed if the target database's character set is different from Import's user session character set.

To minimize data loss due to character set conversions, it is advisable to ensure that the export database, the export user session, the import user session, and the import database all use the same character set.

NCHAR Data

Data of datatypes NCHAR, NVARCHAR2, and NCLOB is written to the export file directly in the national character set of the source database. If the national character set of the source database is different from the national character set of the import database, a conversion is performed.

Import and Single-Byte Character Sets

Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This occurs if the machine on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
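For example, if the export file was written in the WE8ISO8859P1 character set, you might set (Bourne shell syntax; the language and territory values are examples):

```shell
NLS_LANG=american_america.we8iso8859p1
export NLS_LANG
```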

When importing an Oracle Version 5 or 6 export file with a character set different from that of the native operating system or the setting for NLS_LANG, you must set the CHARSET import parameter to specify the character set of the export file.

See Character Set Conversion.

Import and Multibyte Character Sets

For multibyte character sets, Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1. If the ratio is not 1, the user-session character set should be set to match the export character set, so that Import does no conversion.

During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.

For more information, refer to the Oracle8i National Language Support Guide.

Considerations When Importing Database Objects

This section describes the behavior of various database objects during Import.

Importing Object Identifiers

The Oracle database server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by import.

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the parameter TOID_NOVALIDATE to specify types to exclude from TOID comparison. See TOID_NOVALIDATE for more information.

Caution: Be very careful about using TOID_NOVALIDATE, because type validation provides an important capability that helps avoid data corruption. Be sure you feel confident of your knowledge of type validation and how it works before attempting to import with this feature disabled.

Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables:

Because Import preserves object identifiers of object types and object tables, note the following considerations when you import objects from one schema into another schema using the FROMUSER and TOUSER parameters:

Importing Existing Object Tables and Tables That Contain Object Types

Users frequently create tables before importing data to reorganize tablespace usage or change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.

For tables containing columns of object types, the same object type must be specified, and that type must have the same object identifier as the original. If the parameter TOID_NOVALIDATE has been set to ignore the object type, the object IDs do not need to match.

Export writes information about object types used by a table in the Export file, including object types from different schemas. Object types from different schemas used as top level columns are verified for matching name and object identifier at import time. Object types from different schemas that are nested within other object types are not verified.

If the object type already exists, its object identifier is verified. If the parameter TOID_NOVALIDATE has been set to ignore the object type, the object IDs do not need to match. Import retains information about what object types it has created, so that if an object type is used by multiple tables, it is created only once.

Note: In all cases, the object type must be compatible in terms of the internal format used for storage. Import does not verify that the internal format of a type is compatible. If the exported data is not compatible, the results can be unpredictable.

Importing Nested Tables

Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:

You should always carefully examine the log file for errors in both outer tables and inner tables. To restore consistency, table data may need to be modified or deleted.

Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.

Importing REF Data

REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following command to reset the ROWIDs to their proper values:

ANALYZE TABLE [schema.]table VALIDATE REF UPDATE

See Oracle8i SQL Reference for more information about the ANALYZE TABLE statement.

Importing BFILE Columns and Directory Aliases

Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.

When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and file name that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.

For operating system directory aliases, if the directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.

It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.

Importing Foreign Function Libraries

Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and file names used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.

Importing Stored Procedures, Functions, and Packages

When a local stored procedure, function, or package is imported, it retains its original specification timestamp. The procedure, function, or package is recompiled upon import. If the compilation is successful, it can be accessed by remote procedures without error.

Procedures are exported after tables, views, and synonyms; therefore, they usually compile successfully because all dependencies already exist. However, procedures, functions, and packages are not exported in dependency order. If a procedure, function, or package depends on a procedure, function, or package that is stored later in the Export dump file, it will not compile successfully. Later use of the procedure, function, or package will automatically cause a recompile and, if successful, will change the timestamp. This may cause errors in the remote procedures that call it.

Importing Java Objects

When a Java source or class is imported, it retains its original resolver (the list of schemas used to resolve Java full names). If the object is imported into a different schema, that resolver may no longer be valid. For example, the default resolver for a Java object in SCOTT's schema is ((* SCOTT) (* PUBLIC)). If the object is imported into BLAKE's schema, it may be necessary to alter the object so that the resolver references BLAKE's schema.
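For example, the resolver of an imported class might be adjusted with an ALTER JAVA statement; the class name here is hypothetical:

```sql
ALTER JAVA CLASS "Agent"
  RESOLVER ((* BLAKE) (* PUBLIC));
```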

Importing Advanced Queue (AQ) Tables

Importing a queue also imports any underlying queue tables and the related dictionary tables. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.

See Oracle8i Application Developer's Guide - Advanced Queuing for more information.

Importing LONG Columns

LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous because LONG data is loaded in sections.

Importing Views

Views are exported in dependency order. In some cases, Export must determine the ordering itself, rather than obtaining the order from the database server. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported, and in the failure to import column comments on such views.

In particular, if VIEWA uses the stored procedure PROCB and PROCB uses the view VIEWC, Export cannot determine the proper ordering of VIEWA and VIEWC. If VIEWA is exported before VIEWC and PROCB already exists on the import system, VIEWA receives compilation warnings at import time.

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION.

Therefore, access violations could occur when the view is used, if the grantor does not have the proper privileges after the missing tables are created.

Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.
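For example, to allow a hypothetical importing user BLAKE to compile such views, the privilege must be granted directly to that user:

```sql
GRANT SELECT ANY TABLE TO blake;
```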

Importing Tables

Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the value of the IGNORE parameter.

Unless SKIP_UNUSABLE_INDEXES=Y, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Indexes Unusable or otherwise not suitable.

Transportable Tablespaces

The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.

To do this, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export/Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat, binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.

After copying the datafiles and importing the metadata, you can optionally put the tablespaces in read/write mode.

See Transportable Tablespaces for information on creating an Export file containing transportable tablespace metadata.

Import provides the following parameter keywords to enable import of transportable tablespaces metadata.

See TRANSPORT_TABLESPACE, TABLESPACES, DATAFILES, and TTS_OWNERS for more information.

Additional Information: See the Oracle8i Administrator's Guide for details about how to move or copy tablespaces to another database. For an introduction to the transportable tablespaces feature, see Oracle8i Concepts.
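As a sketch, moving a tablespace named TBS1 owned by SCOTT might proceed as follows; the user names, file names, and datafile name are placeholders:

```shell
# On the source database (after making TBS1 read-only in SQL*Plus):
exp system/manager TRANSPORT_TABLESPACE=y TABLESPACES=(tbs1) FILE=tbs1.dmp

# Copy the datafile and tbs1.dmp to the target system in binary mode, then:
imp system/manager TRANSPORT_TABLESPACE=y DATAFILES=('tbs1.dbf') \
    TABLESPACES=(tbs1) TTS_OWNERS=(scott) FILE=tbs1.dmp
```

After the import, you can optionally make the tablespace read/write on the target database.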

Importing Statistics

If statistics are requested at export time and analyzer statistics are available for a table, Export will place the ANALYZE command to recalculate the statistics for the table into the dump file. In certain circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file. See the description of the Export parameter STATISTICS and the Import parameter RECALCULATE_STATISTICS.

Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the precalculated optimizer statistics for a table (and its indexes and columns), rather than executing the ANALYZE statement saved by Export. However, in the following cases, Import will ignore the precomputed statistics because they are potentially unreliable:

Note: Specifying ROWS=N will not prevent the use of precomputed statistics. This feature allows plan generation for queries to be tuned in a nonproduction database using statistics from a production database.

In certain situations, the importer might want to always use ANALYZE statements rather than precomputed statistics. For example, the statistics gathered from a fragmented database may not be relevant when the data is imported in a compressed form. In these cases, the importer may specify RECALCULATE_STATISTICS=Y to force the recalculation of statistics.

If you do not want any statistics to be established by Import, you can specify ANALYZE=N, in which case, the RECALCULATE_STATISTICS parameter is ignored. See ANALYZE.
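For example (the dump file name is a placeholder):

```shell
# Force recalculation via ANALYZE rather than using precomputed statistics
imp scott/tiger FILE=expdat.dmp RECALCULATE_STATISTICS=Y

# Import without establishing any statistics at all
imp scott/tiger FILE=expdat.dmp ANALYZE=N
```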

Using Export Files from a Previous Oracle Release

The following sections describe considerations to keep in mind when you import data from earlier versions of the Oracle database server into an Oracle8i server.

Using Oracle Version 7 Export Files

This section describes guidelines and restrictions that apply when you import data from an Oracle version 7 database into an Oracle8i server. See Oracle8i Migration for additional information.

Check Constraints on DATE Columns

In Oracle8i, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function was not required in earlier Oracle versions, data imported from an earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the Oracle8i database, but they are flagged in the dictionary as invalid.

The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import issues a warning message if invalid date constraints are in the database.

Using Oracle Version 6 Export Files

This section describes guidelines and restrictions that apply when you import data from an Oracle Version 6 database into an Oracle8i server. See Oracle8i Migration for additional information.

CHAR columns

Oracle Version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.

Syntax of Integrity Constraints

Although the SQL syntax for integrity constraints in Oracle Version 6 is different from the Oracle7 and Oracle8i syntax, integrity constraints are correctly imported into Oracle8i.

Status of Integrity Constraints

NOT NULL constraints are imported as ENABLED. All other constraints are imported as DISABLED.

Length of DEFAULT Column Values

A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle8i:

ORA-1401: inserted value too large for column

Oracle Version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their DEFAULT values, so such tables could be imported into a Version 6 database. The Oracle8i server does perform this check, however. As a result, tables that could be imported into a Version 6 database may fail to import into Oracle8i.

If the DEFAULT is a value returned by a function, the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.

Note: The maximum length of the value returned by the USER function increased in Oracle7, so columns with a default of USER may not be long enough. To determine the maximum size that the USER function returns, execute the following SQL command:

DESCRIBE user_sys_privs

The length shown for the USERNAME column is the maximum length returned by the USER function.
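The same figure can be read directly from the data dictionary. This query is a sketch and assumes access to the ALL_TAB_COLUMNS view:

```sql
SELECT data_length
  FROM all_tab_columns
 WHERE table_name  = 'USER_SYS_PRIVS'
   AND column_name = 'USERNAME';
```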

Using Oracle Version 5 Export Files

Oracle8i Import reads Export dump files created by Oracle release 5.1.22 and higher. Keep in mind the following:

The CHARSET Parameter

Default: none

Note: This parameter applies to Oracle Version 5 and 6 export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported.

Oracle Version 5 and 6 export files do not contain the NLS character set identifier. However, a Version 5 or 6 export file does indicate whether the user session character set was ASCII or EBCDIC.

Use this parameter to indicate the actual character set used at export time. Import verifies that the specified character set is consistent with the ASCII or EBCDIC indicator recorded in the export file.

If you do not specify a value for the CHARSET parameter and the export file is ASCII, Import will verify that the user session character set is ASCII. Or, if the export file is EBCDIC, Import will verify that the user session character set is EBCDIC.

If you are using an Oracle7 or Oracle8i export file, the character set is specified within the export file, and conversion to the current database's character set is automatic. Specification of this parameter serves only as a check to ensure that the export file's character set matches the expected value. If not, an error results.
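For example, a Version 6 export file written from an EBCDIC session might be imported as follows. In this sketch, the connect string, file name, and character set name are all placeholders; substitute the character set actually in use at export time:

```
imp system/manager FILE=v6data.dmp FULL=Y CHARSET=WE8EBCDIC500
```

If the named character set were ASCII-based while the export file was marked EBCDIC (or vice versa), Import would reject the parameter with an error.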



Copyright © 1996-2000, Oracle Corporation.

All Rights Reserved.