expdp admin/password@db2022adb_high \
filesize=5GB \
credential=def_cred_name \
dumpfile=https://namespace-string.objectstorage.us-ashburn-1.oci.customer-oci.com/n/namespace-string/b/bucketname/o/exp%L.dmp \
parallel=16 \
encryption_pwd_prompt=yes \
logfile=export.log \
directory=data_pump_dir
Notes for Oracle Data Pump parameters:
- If you used the encryption_pwd_prompt=yes parameter during the export with expdp, then also use encryption_pwd_prompt=yes with impdp and enter the same password at the impdp prompt that you specified during the export (see the impdp sketch after this list).
- The dumpfile parameter supports the %L and %l wildcards in addition to the legacy %U and %u wildcards. For example, dumpfile=export%L.dmp. Use the %L or %l wildcard for exports from Oracle AI Database Release 12.2 and higher. This wildcard expands the dump file name into a 3-digit to 10-digit, variable-width incrementing integer, starting at 100 and ending at 2147483646.
- If your bucket and data do not reside in a commercial (OC1) realm, where OCI Dedicated Endpoints are not supported, use the following URI format for dumpfile:
https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucketname/o/filename
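For example, a matching import might look like the following sketch, which reuses the same placeholder service name, credential, and dump file URI as the export example above (adjust these for your environment):

impdp admin/password@db2022adb_high \
     credential=def_cred_name \
     dumpfile=https://namespace-string.objectstorage.us-ashburn-1.oci.customer-oci.com/n/namespace-string/b/bucketname/o/exp%L.dmp \
     encryption_pwd_prompt=yes \
     parallel=16 \
     directory=data_pump_dir \
     logfile=import.log

impdp then prompts for the encryption password; enter the one you supplied during the export.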
For dumpfile, this example uses the recommended URI format with OCI Dedicated Endpoints for the commercial realm (OC1). The namespace-string is the Oracle Cloud Infrastructure Object Storage namespace and bucketname is the bucket name. See Object Storage Dedicated Endpoints, Regions and Availability Domains, and Understanding Object Storage Namespaces for more information.
The credential parameter cannot be an OCI resource principal, Azure service principal, Amazon Resource Name (ARN), or a Google service account. See Accessing Cloud Resources by Configuring Policies and Roles for more information on resource principal based authentication.
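The export example above references a credential named def_cred_name. As a minimal sketch (the credential name, user name, and auth token are placeholders), such a credential can be created with DBMS_CLOUD.CREATE_CREDENTIAL before running expdp:

BEGIN
  -- DEF_CRED_NAME matches the credential parameter in the expdp example.
  -- username is an Oracle Cloud Infrastructure user; password is an auth token.
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'user1@example.com',
    password        => 'auth_token');
END;
/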
For the best export performance use the HIGH database service for your export connection and set the parallel parameter to one quarter the number of ECPUs (0.25 x ECPU count). If you are using the OCPU compute model, set the parallel parameter to the number of OCPUs (1 x OCPU count).
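As a worked example with hypothetical sizes: a database with 32 ECPUs would use 0.25 x 32 = 8, and a database with 8 OCPUs would use 1 x 8 = 8, so either would set:

parallel=8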
For information on which database service name to connect to run Data Pump Export, see Database Service Names for Autonomous AI Database. For the dump file URL format for different Cloud Object Storage services, see DBMS_CLOUD URI Formats.
This example shows the recommended parameters for exporting from Autonomous AI Database. For these expdp parameters, note the following:
- The maximum filesize parameter value is 10000MB for Oracle Cloud Infrastructure Object Storage exports.
- The maximum filesize parameter value is 20GB for Oracle Cloud Infrastructure Object Storage Classic exports.
- If the specified filesize is too large, the export shows the error message:
ORA-17500: ODM err:ODM HTTP Request Entity Too Large
- The directory parameter specifies the directory data_pump_dir for the specified log file, export.log. See Access Log Files for Data Pump Export for more information (a sketch for copying the log file to Object Storage follows this list).
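As a minimal sketch for saving the log file, assuming the same def_cred_name credential and the same placeholder bucket as the export example, you can copy export.log from data_pump_dir to Object Storage with DBMS_CLOUD.PUT_OBJECT:

BEGIN
  -- Copy the Data Pump log file from the database directory to the bucket.
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri      => 'https://namespace-string.objectstorage.us-ashburn-1.oci.customer-oci.com/n/namespace-string/b/bucketname/o/export.log',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'export.log');
END;
/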
Oracle Data Pump divides each dump file part into smaller chunks for faster uploads. The Oracle Cloud Infrastructure Object Storage console shows multiple files for each dump file part that you export. The size of the actual dump files is displayed as zero (0) and their related file chunks as 10 MB or less. For example:
exp100.dmp
exp100.dmp_aaaaaa
exp101.dmp
exp101.dmp_aaaaaa
Downloading the zero byte dump file from the Oracle Cloud Infrastructure console or using the Oracle Cloud Infrastructure CLI will not give you the full dump files. To download the full dump files from the Object Store, use a tool that supports Swift, such as curl, and provide your user login and Swift auth token. For example:
curl -O -v -X GET -u 'user1@example.com:auth_token' \
https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/namespace-string/bucketname/exp100.dmp
If you import a file with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump', you only need to provide the primary file name. The procedures that support the 'datapump' format type automatically discover and download the chunks.
When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically discovers and deletes the chunks when it deletes the primary file.
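For example, as a sketch reusing the def_cred_name credential and the placeholder bucket from the export example, deleting the primary file exp100.dmp also removes its related chunks (such as exp100.dmp_aaaaaa):

BEGIN
  -- Deleting the primary dump file also deletes its chunk objects.
  DBMS_CLOUD.DELETE_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri      => 'https://namespace-string.objectstorage.us-ashburn-1.oci.customer-oci.com/n/namespace-string/b/bucketname/o/exp100.dmp');
END;
/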