Summary of DBMS_CLOUD Subprograms

This section covers the DBMS_CLOUD subprograms provided with Autonomous Database.

Note:

To run DBMS_CLOUD subprograms with a user other than ADMIN you need to grant EXECUTE privileges to that user. For example, run the following command as ADMIN to grant privileges to atpc_user:
GRANT EXECUTE ON DBMS_CLOUD TO atpc_user;

COPY_COLLECTION Procedure

This procedure loads data into an existing SODA collection from Cloud Object Storage. The overloaded form enables you to use the operation_id parameter.

Syntax

DBMS_CLOUD.COPY_COLLECTION (
	collection_name   IN VARCHAR2,
	credential_name   IN VARCHAR2 DEFAULT NULL,		
	file_uri_list     IN CLOB,	
	format            IN CLOB     DEFAULT NULL
);

DBMS_CLOUD.COPY_COLLECTION (
	collection_name   IN VARCHAR2,
	credential_name   IN VARCHAR2 DEFAULT NULL,		
	file_uri_list     IN CLOB,	
	format            IN CLOB     DEFAULT NULL,
	operation_id      OUT NOCOPY NUMBER
);

Parameters

Parameter Description

collection_name

The name of the SODA collection into which data is loaded. If a collection with this name already exists, the specified data is loaded into it; otherwise, a new collection is created.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

format

The options describing the format of the source files. These options are specified as a JSON string. Supported format options for JSON data are: unpackarrays, recorddelimiter, ignoreblanks, characterset, rejectlimit, and compression.

Apart from the format options mentioned for JSON data, Autonomous Database supports other format options too. For the list of format arguments supported by Autonomous Database, see DBMS_CLOUD Package Format Options.

operation_id

Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.

Example

BEGIN
	DBMS_CLOUD.CREATE_CREDENTIAL(    
			credential_name => 'OBJ_STORE_CRED',    
			username        => 'user_name@oracle.com',    
			password        => 'password'
			);

	DBMS_CLOUD.COPY_COLLECTION(    
			collection_name => 'myCollection',    
			credential_name => 'OBJ_STORE_CRED',    
			file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc4pm/b/json/o/myCollection.json'  
			);
END;
/
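
The overloaded form returns an operation ID for the load. The following is a minimal sketch reusing the values from the example above; the l_operation_id variable is illustrative, and the final query assumes the ID is recorded in the USER_LOAD_OPERATIONS view as described for the operation_id parameter:

DECLARE
	l_operation_id NUMBER;
BEGIN
	DBMS_CLOUD.COPY_COLLECTION(
			collection_name => 'myCollection',
			credential_name => 'OBJ_STORE_CRED',
			file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc4pm/b/json/o/myCollection.json',
			operation_id    => l_operation_id
			);
	-- The returned ID identifies the row for this load in USER_LOAD_OPERATIONS
	DBMS_OUTPUT.PUT_LINE('Load operation ID: ' || l_operation_id);
END;
/

SELECT id, type, status FROM user_load_operations;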

COPY_DATA Procedure

This procedure loads data into existing Autonomous Database tables from files in the Cloud. The overloaded form enables you to use the operation_id parameter.

Syntax

DBMS_CLOUD.COPY_DATA (
	table_name        IN VARCHAR2,
	credential_name   IN VARCHAR2,		
	file_uri_list     IN CLOB,	
	schema_name       IN VARCHAR2,
	field_list        IN CLOB,
	format            IN CLOB);

DBMS_CLOUD.COPY_DATA (
	table_name        IN VARCHAR2,
	credential_name   IN VARCHAR2 DEFAULT NULL,		
	file_uri_list     IN CLOB DEFAULT NULL,	
	schema_name       IN VARCHAR2 DEFAULT NULL,
	field_list        IN CLOB DEFAULT NULL,
	format            IN CLOB DEFAULT NULL,
	operation_id      OUT NOCOPY NUMBER);

Parameters

Parameter Description

table_name

The name of the target table on the database. The target table needs to be created before you run COPY_DATA.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

schema_name

The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure.

field_list

Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the target table definition. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

For an example using field_list, see CREATE_EXTERNAL_TABLE Procedure.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

For Parquet and Avro file format options, see DBMS_CLOUD Package Format Options for Parquet and Avro.

operation_id

Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
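
Example

A minimal sketch of loading a CSV file into an existing table; the table name SALES_DATA, the credential, and the object URI are hypothetical placeholders:

BEGIN
	DBMS_CLOUD.COPY_DATA(
			table_name      => 'SALES_DATA',
			credential_name => 'OBJ_STORE_CRED',
			file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales*.csv',
			format          => json_object('type' value 'csv', 'skipheaders' value '1')
			);
END;
/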

COPY_DATA Procedure for Parquet or Avro Files

When the format parameter type is set to the value parquet or avro, this procedure loads data into existing Autonomous Database tables from Parquet or Avro files in the Cloud. As with text files, the data is copied from the source Parquet or Avro file into the preexisting internal table.

Syntax

DBMS_CLOUD.COPY_DATA (
	table_name        IN VARCHAR2,
	credential_name   IN VARCHAR2,		
	file_uri_list     IN CLOB,	
	schema_name       IN VARCHAR2 DEFAULT,
	field_list        IN CLOB DEFAULT,
	format            IN CLOB DEFAULT);

Parameters

Parameter Description

table_name

The name of the target table on the database. The target table needs to be created before you run COPY_DATA.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

schema_name

The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure.

field_list

Ignored for Parquet and Avro files.

The fields in the source match the external table columns by name. Source data types are converted to the external table column data type.

For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details on mapping.

For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details on mapping.

format

The options describing the format of the source files. For Parquet and Avro files, only two format options are supported; see DBMS_CLOUD Package Format Options for Parquet and Avro.

Usage Notes

As with other data files, Parquet and Avro data loads generate logs that are viewable in the views dba_load_operations and user_load_operations. Each load operation adds a record to dba[user]_load_operations that indicates the table containing the logs.

The log table provides summary information about the load.

Note:

For Parquet or Avro files, when the format parameter type is set to the value parquet or avro, the BADFILE_TABLE table is always empty.

  • For Parquet files, PRIMARY KEY constraint errors throw an ORA error.
  • If data for a column encounters a conversion error, for example, the target column is not large enough to hold the converted value, the value for the column is set to NULL. This does not produce a rejected record.
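
Example

A minimal sketch of loading a Parquet file into an existing table; the table name, credential, and object URI are hypothetical placeholders, and the format type option is set to parquet as described above:

BEGIN
	DBMS_CLOUD.COPY_DATA(
			table_name      => 'SALES_DATA',
			credential_name => 'OBJ_STORE_CRED',
			file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
			format          => json_object('type' value 'parquet')
			);
END;
/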

CREATE_CREDENTIAL Procedure

This procedure stores cloud service credentials in Autonomous Database.

Use stored cloud service credentials to access the cloud service for data loading, for querying external data residing in the cloud, or for other cases when you use DBMS_CLOUD procedures with a credential_name parameter. This procedure is overloaded. Use the Oracle Cloud Infrastructure-related parameters (user_ocid, tenancy_ocid, private_key, and fingerprint) only when you are using Oracle Cloud Infrastructure Signing Keys authentication. See CREATE_CREDENTIAL Procedure (OCI Signing Key Credentials) for more information.

Syntax

DBMS_CLOUD.CREATE_CREDENTIAL (
	credential_name   IN VARCHAR2,
	username          IN VARCHAR2,
	password          IN VARCHAR2 DEFAULT NULL);

Parameters

Parameter Description

credential_name

The name of the credential to be stored.

username

The username and password arguments together specify your cloud service credentials. See below for what to specify for the username and password for different cloud services.

password

The username and password arguments together specify your cloud service credentials.

Usage Notes

  • This operation stores the credentials in the database in an encrypted format.

  • You can see the credentials in your schema by querying the user_credentials view.

  • The ADMIN user can see all the credentials by querying the dba_credentials view.

  • You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.

  • This procedure is overloaded. If you provide one of the key based authentication attributes, user_ocid, tenancy_ocid, private_key, or fingerprint, the call is assumed to be an Oracle Cloud Infrastructure Signing Key based credential. See CREATE_CREDENTIAL Procedure (OCI Signing Key Credentials) for more information.

Oracle Cloud Infrastructure Credentials (Auth Tokens)

For Oracle Cloud Infrastructure the username is your Oracle Cloud Infrastructure user name. The password is your Oracle Cloud Infrastructure auth token. See Working with Auth Tokens.

Oracle Cloud Infrastructure Object Storage Classic Credentials

If your source files reside in Oracle Cloud Infrastructure Object Storage Classic, the username is your Oracle Cloud Infrastructure Classic user name and the password is your Oracle Cloud Infrastructure Classic password.

Amazon Web Services (AWS) Credentials

If your source files reside in Amazon S3 or you are calling an AWS API, the username is your AWS access key ID and the password is your AWS secret access key. See AWS Identity and Access Management.

Microsoft Azure Credentials

If your source files reside in Azure Blob Storage or you are calling an Azure API, the username is your Azure storage account name and the password is an Azure storage account access key. See About Azure storage accounts.
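
Example

A minimal sketch of creating a credential for Oracle Cloud Infrastructure using an auth token; the credential name, user name, and token are placeholders you replace with your own values:

BEGIN
	DBMS_CLOUD.CREATE_CREDENTIAL(
			credential_name => 'OBJ_STORE_CRED',
			username        => 'user_name@example.com',
			password        => 'auth_token_value'
			);
END;
/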

CREATE_CREDENTIAL Procedure (OCI Signing Key Credentials)

This procedure stores Oracle Cloud Infrastructure (OCI) Cloud Service credentials in the Autonomous Database.

Use stored credentials for data loading or for querying external data residing in the Cloud, where you use DBMS_CLOUD procedures with a credential_name parameter. This procedure is overloaded. Use the Oracle Cloud Infrastructure-related parameters (user_ocid, tenancy_ocid, private_key, and fingerprint) only when you are using Oracle Cloud Infrastructure Signing Key based authentication. To create credentials for other cloud services, with the username and password parameters, see CREATE_CREDENTIAL Procedure.

Syntax

DBMS_CLOUD.CREATE_CREDENTIAL (
	credential_name IN VARCHAR2,
	user_ocid       IN VARCHAR2,
	tenancy_ocid    IN VARCHAR2,
	private_key     IN VARCHAR2,
	fingerprint     IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to be stored.

user_ocid

Specifies the user's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the User's OCID.

tenancy_ocid

Specifies the tenancy's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID.

private_key

Specifies the generated private key. Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for details on generating a key pair in PEM format.

fingerprint

Specifies a fingerprint. After a generated public key is uploaded to the user's account the fingerprint is displayed in the console. Use the displayed fingerprint for this argument. See How to Get the Key's Fingerprint and How to Generate an API Signing Key for more details.

Usage Notes

  • This operation stores the credentials in the database in an encrypted format.

  • You can see the credentials in your schema by querying the user_credentials view.

  • The ADMIN user can see all the credentials by querying the dba_credentials view.

  • You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.

  • This procedure is overloaded. Also see CREATE_CREDENTIAL Procedure for more information.

  • Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for more information.
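
Example

A minimal sketch using Signing Key based authentication; the credential name is a placeholder, and the OCIDs, private key, and fingerprint are truncated illustrative values you replace with values from your tenancy:

BEGIN
	DBMS_CLOUD.CREATE_CREDENTIAL(
			credential_name => 'OCI_KEY_CRED',
			user_ocid       => 'ocid1.user.oc1..aaaaaaaa...',
			tenancy_ocid    => 'ocid1.tenancy.oc1..aaaaaaaa...',
			private_key     => 'MIIEogIBAAKCAQEA...',
			fingerprint     => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27'
			);
END;
/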

CREATE_EXTERNAL_TABLE Procedure

This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Database.

Syntax

DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
	table_name       IN VARCHAR2,
	credential_name  IN VARCHAR2,		
	file_uri_list    IN CLOB,	
	column_list      IN CLOB,
	field_list       IN CLOB DEFAULT,
	format           IN CLOB DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

column_list

Comma-delimited list of column names and data types for the external table.

field_list

Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

For Parquet or Avro format files, see CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro Files.

Usage Notes

The procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE supports external partitioned files in the supported cloud object storage services, including: Oracle Cloud Infrastructure Object Storage, Microsoft Azure, and AWS S3. The credential is a table level property; therefore, the external files must be on the same object store. See DBMS_CLOUD Package File URI Formats for more information.

Example

BEGIN  
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(   
      table_name =>'WEATHER_REPORT_DOUBLE_DATE',   
      credential_name =>'OBJ_STORE_CRED',   
      file_uri_list =>'&base_URL/Charlotte_NC_Weather_History_Double_Dates.csv',
      format => json_object('type' value 'csv', 'skipheaders' value '1'),   
      field_list => 'REPORT_DATE DATE ''mm/dd/yy'',
                     REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
                     ACTUAL_MEAN_TEMP,                 
                     ACTUAL_MIN_TEMP,                 
                     ACTUAL_MAX_TEMP,                 
                     AVERAGE_MIN_TEMP,                    
                     AVERAGE_MAX_TEMP,     
                     AVERAGE_PRECIPITATION',   
      column_list => 'REPORT_DATE DATE,   
                     REPORT_DATE_COPY DATE,
                     ACTUAL_MEAN_TEMP NUMBER,  
                     ACTUAL_MIN_TEMP NUMBER,  
                     ACTUAL_MAX_TEMP NUMBER,  
                     AVERAGE_MIN_TEMP NUMBER,   
                     AVERAGE_MAX_TEMP NUMBER,                  
                     AVERAGE_PRECIPITATION NUMBER');
   END;
/ 

SELECT * FROM WEATHER_REPORT_DOUBLE_DATE where
   actual_mean_temp > 69 and actual_mean_temp < 74;

CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro Files

When the format parameter type is set to the value parquet or avro, this procedure creates an external table over Parquet or Avro format files in the Cloud. This allows you to run queries on external data from Autonomous Database.

Syntax

DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
	table_name       IN VARCHAR2,
	credential_name  IN VARCHAR2,		
	file_uri_list    IN CLOB,
	column_list      IN CLOB,
	field_list       IN CLOB DEFAULT,
	format           IN CLOB DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, and the character "?" can be used as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

column_list

(Optional) When specified, this field overrides the format->schema option, which specifies that the schema (the columns and their data types) is derived automatically. See the format parameter for details.

When the column_list is specified for a Parquet or Avro source, the column names must match those columns found in the file. Oracle data types must map appropriately to the Parquet or Avro data types.

For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details.

For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details.

field_list

Ignored for Parquet or Avro files.

The fields in the source match the external table columns by name. Source data types are converted to the external table column data type.

For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details.

For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details.

format

For Parquet and Avro, only two format options are supported. See DBMS_CLOUD Package Format Options for Parquet and Avro for details.

Examples Avro

format => '{"type":"avro", "schema": "all"}'
format => json_object('type' value 'avro', 'schema' value 'first')

Examples Parquet

format => '{"type":"parquet", "schema": "all"}'
format => json_object('type' value 'parquet', 'schema' value 'first')
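
For instance, a complete call might look like the following minimal sketch; the table name, credential, and object URI are hypothetical placeholders, and the schema value first derives the columns and data types from the first Parquet file:

BEGIN
	DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
			table_name      => 'SALES_PARQUET_EXT',
			credential_name => 'OBJ_STORE_CRED',
			file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
			format          => json_object('type' value 'parquet', 'schema' value 'first')
			);
END;
/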

AVRO and Parquet Column Name Mapping to Oracle Column Names

See DBMS_CLOUD Package Parquet and AVRO to Oracle Column Name Mapping for information on column name mapping and column name conversion usage in Oracle SQL.

CREATE_EXTERNAL_PART_TABLE Procedure

This procedure creates an external partitioned table on files in the Cloud. This allows you to run queries on external data from Autonomous Database.

Syntax

DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
	table_name           IN VARCHAR2,
	credential_name      IN VARCHAR2,		
	partitioning_clause  IN CLOB,	
	column_list          IN CLOB,
	field_list           IN CLOB DEFAULT,
	format               IN CLOB DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

partitioning_clause

Specifies the complete partitioning clause, including the location information for individual partitions.

column_list

Comma-delimited list of column names and data types for the external table.

field_list

Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

Usage Notes

  • With Parquet or Avro data format, using DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE, the schema format option is not available and the column_list parameter must be specified. The schema format option is available with DBMS_CLOUD.CREATE_EXTERNAL_TABLE.

  • The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned files in the supported cloud object storage services, including: Oracle Cloud Infrastructure Object Storage, Microsoft Azure, and AWS S3. The credential is a table level property; therefore, the external files must be on the same object store. See DBMS_CLOUD Package File URI Formats for more information.

Example

BEGIN  
   DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
      table_name =>'PET1',  
      credential_name =>'OBJ_STORE_CRED',
      format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),  
      column_list => 'col1 number, col2 number, col3 number',
      partitioning_clause => 'partition by range (col1)
                                (partition p1 values less than (1000) location
                                    ( ''&base_URL/file_11.txt'')
                                 ,
                                 partition p2 values less than (2000) location
                                    ( ''&base_URL/file_21.txt'')
                                 ,
                                 partition p3 values less than (3000) location 
                                    ( ''&base_URL/file_31.txt'')
                                 )'
     );
   END;
/  

CREATE_HYBRID_PART_TABLE Procedure

This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database. Hybrid partitioned tables are only supported with Oracle Database 19c onwards.

Syntax

DBMS_CLOUD.CREATE_HYBRID_PART_TABLE (
	table_name           IN VARCHAR2,
	credential_name      IN VARCHAR2,		
	partitioning_clause  IN CLOB,	
	column_list          IN CLOB,
	field_list           IN CLOB DEFAULT,
	format               IN CLOB DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

partitioning_clause

Specifies the complete partitioning clause, including the location information for individual partitions.

column_list

Comma-delimited list of column names and data types for the external table.

field_list

Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

Usage Note

  • The procedure DBMS_CLOUD.CREATE_HYBRID_PART_TABLE supports external partitioned files in the supported cloud object storage services, including: Oracle Cloud Infrastructure Object Storage, Microsoft Azure, and AWS S3. The credential is a table level property; therefore, the external files must be on the same object store. See DBMS_CLOUD Package File URI Formats for more information.

Example

BEGIN  
   DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
      table_name =>'HPT1',  
      credential_name =>'OBJ_STORE_CRED',  
      format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),  
      column_list => 'col1 number, col2 number, col3 number',
      partitioning_clause => 'partition by range (col1)
                                (partition p1 values less than (1000) external location
                                    ( ''&base_URL/file_11.txt'')
                                 ,
                                 partition p2 values less than (2000) external location
                                    ( ''&base_URL/file_21.txt'')
                                 ,
                                 partition p3 values less than (3000)
                                 )'
     );
   END;
/ 

DELETE_ALL_OPERATIONS Procedure

This procedure clears either all data load operations logged in the user_load_operations view in your schema, or all the data load operations of the specified type, as indicated with the type parameter.

Syntax

DBMS_CLOUD.DELETE_ALL_OPERATIONS (
	type      IN VARCHAR DEFAULT NULL);

Parameters

Parameter Description

type

Specifies the type of operation to delete. Type values can be found in the TYPE column in the user_load_operations view.

If no type is specified all rows are deleted.

Usage Note

  • DBMS_CLOUD.DELETE_ALL_OPERATIONS does not delete currently running operations (operations in a "Running" status).
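
Example

A minimal sketch that clears the logged copy operations; this assumes COPY appears as a value in the TYPE column of your user_load_operations view:

BEGIN
	DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');
END;
/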

DELETE_FILE Procedure

This procedure removes the specified file from the specified directory on Autonomous Database.

Syntax

 DBMS_CLOUD.DELETE_FILE ( 
       directory_name     IN VARCHAR2,
       file_name          IN VARCHAR2); 

Parameters

Parameter Description

directory_name

The name of the directory on the Autonomous Database instance.

file_name

The name of the file to be removed.

Note:

To run DBMS_CLOUD.DELETE_FILE with a user other than ADMIN you need to grant write privileges on the directory that contains the file to that user. For example, run the following command as ADMIN to grant write privileges to atpc_user:
GRANT WRITE ON DIRECTORY data_pump_dir TO atpc_user;
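
Example

A minimal sketch; the file name exp1.dmp is a placeholder for a file that exists in DATA_PUMP_DIR:

BEGIN
	DBMS_CLOUD.DELETE_FILE(
			directory_name => 'DATA_PUMP_DIR',
			file_name      => 'exp1.dmp'
			);
END;
/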

DELETE_OBJECT Procedure

This procedure deletes the specified object on object store.

Syntax

DBMS_CLOUD.DELETE_OBJECT (
       credential_name      IN VARCHAR2,
       object_uri           IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

object_uri

Object or file URI for the object to delete. The format of the URI depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.
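
Example

A minimal sketch; namespace-string, bucketname, and the object name are placeholders:

BEGIN
	DBMS_CLOUD.DELETE_OBJECT(
			credential_name => 'OBJ_STORE_CRED',
			object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp'
			);
END;
/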

DROP_CREDENTIAL Procedure

This procedure removes an existing credential from Autonomous Database.

Syntax

DBMS_CLOUD.DROP_CREDENTIAL (
   credential_name     IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to be removed.
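
Example

A minimal sketch that removes the credential created in the earlier examples:

BEGIN
	DBMS_CLOUD.DROP_CREDENTIAL(credential_name => 'OBJ_STORE_CRED');
END;
/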

EXPORT_DATA Procedure

This procedure exports data from Autonomous Database to Oracle Data Pump dump files in the Cloud based on the result of the specified SQL query. Using this procedure, Autonomous Database uses the ORACLE_DATAPUMP access driver to write data to dump file(s) on the Cloud Object Store. The overloaded form enables you to use the operation_id parameter.

Syntax

DBMS_CLOUD.EXPORT_DATA (
      file_uri_list     IN CLOB,
      format            IN CLOB,
      credential_name   IN VARCHAR2 DEFAULT NULL,
      query             IN CLOB);

DBMS_CLOUD.EXPORT_DATA (
      file_uri_list     IN CLOB DEFAULT NULL,
      format            IN CLOB DEFAULT NULL,
      credential_name   IN VARCHAR2 DEFAULT NULL,
      query             IN CLOB DEFAULT NULL,
      operation_id      OUT NOCOPY NUMBER);

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of the dump files. This specifies the files to be created on the Object Store. Use of wildcard and substitution characters is not supported in the file_uri_list.

The format of the URIs depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

format

Specify export format options. Supported options are:

  • TYPE: The TYPE format parameter is required and must have the value DATAPUMP.

In addition, with the format parameter you can specify supported Oracle Data Pump access parameters. The supported Oracle Data Pump access parameters are:

  • COMPRESSION: The valid values are: BASIC, LOW, MEDIUM, and HIGH.

  • VERSION: The valid values are: COMPATIBLE, LATEST, and a specified version_number.

See access_parameters Clause for more information.

query

Use this parameter to specify a SELECT statement so that only the required data is exported. The query determines the contents of the dump file(s). For example:

'SELECT warehouse_id, quantity FROM inventories'

See Oracle Data Pump Export Data Filters and Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for more information.

operation_id

Use this parameter to track the progress and final status of the export operation as the corresponding ID in the USER_LOAD_OPERATIONS view.

Usage Notes

  • Autonomous Database export using DBMS_CLOUD.EXPORT_DATA only supports Oracle Cloud Infrastructure Object Storage and Oracle Cloud Infrastructure Object Storage Classic object stores.

  • The Oracle Cloud Infrastructure Object Store console shows multiple files for each dump file that you export, and the size of the actual dump files will be displayed as zero (0). For example:
    exp01.dmp
    exp01.dmp_aaaaaa
    exp02.dmp
    exp02.dmp_aaaaaa
    Oracle Data Pump divides each dump file into smaller chunks for faster uploads. Downloading the zero byte dump file from the console or Oracle Cloud Infrastructure CLI does not give you the full dump file. To download the dump files from the Object Store, use a tool that supports Swift such as curl, and provide your user login and Swift auth token. For example, curl with GET:
    curl -O -v -X GET -u 'user1@example.com:auth_token' \
       https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/namespace-string/bucketname/exp01.dmp

    If you import a file with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump', you only need to provide the primary file name. The procedures that support the 'datapump' format type automatically discover and download the chunks.

    When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically discovers and deletes the chunks when the procedure deletes the primary file.

  • The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) from the file_uri_list values that you specify, as follows:

    • As more files are needed, the procedure creates additional files from the file_uri_list.

    • The procedure does not overwrite files. If a dump file in the file_uri_list exists, DBMS_CLOUD.EXPORT_DATA reports an error.

    • DBMS_CLOUD.EXPORT_DATA does not create buckets.

  • The number of dump files that DBMS_CLOUD.EXPORT_DATA generates is determined when the procedure runs. The number of dump files that are generated depends on the number of file names you provide in the file_uri_list parameter, as well as on the number of Autonomous Database OCPUs available to the instance, the service level, and the size of the data.

    For example, if you use a 1 OCPU Autonomous Database instance or the low service, then a single dump file is exported with no parallelism, even if you provide multiple file names. If you use a 4 OCPU Autonomous Database instance with the medium or high service, then the jobs can run in parallel and multiple dump files are exported if you provide multiple file names.

  • The dump files you create with DBMS_CLOUD.EXPORT_DATA cannot be imported using Oracle Data Pump impdp. Depending on the database, you can use these files as follows:

    • On an Autonomous Database instance on Shared Infrastructure, you can use the dump files with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump'. You can import the dump files using DBMS_CLOUD.COPY_DATA or you can call DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create an external table.

    • On any other Oracle Database, such as Oracle Database 19c on-premise, you can import the dump files created with the procedure DBMS_CLOUD.EXPORT_DATA using the ORACLE_DATAPUMP access driver. See Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for more information.

  • The query parameter value that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.

Example

BEGIN  
   DBMS_CLOUD.EXPORT_DATA(
      credential_name =>'OBJ_STORE_CRED',
      file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp',
      format => json_object('type' value 'datapump', 'compression' value 'basic', 'version' value 'latest'),
      query => 'SELECT warehouse_id, quantity FROM inventories'
     );
   END;
/  

In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

GET_OBJECT Procedure

This procedure reads an object from Cloud Object Storage and copies it to Autonomous Database. The maximum file size allowed in this procedure is 5 gigabytes (GB).

Syntax

DBMS_CLOUD.GET_OBJECT (
	credential_name      IN VARCHAR2,		
	object_uri           IN VARCHAR2,		
	directory_name       IN VARCHAR2,
	file_name            IN VARCHAR2 DEFAULT  NULL,
	startoffset          IN NUMBER DEFAULT  0,
	endoffset            IN NUMBER DEFAULT  0,
	compression          IN VARCHAR2 DEFAULT  NULL);	

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

object_uri

Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

directory_name

The name of the directory on the database.

file_name

Specifies the name of the file to create. If the file name is not specified, the file name is taken from after the last slash in the object_uri parameter. For special cases, for example when the file name contains slashes, use the file_name parameter.

startoffset

The offset, in bytes, from where the procedure starts reading.

endoffset

The offset, in bytes, until where the procedure stops reading.

compression

Specifies the compression used to store the object. When compression is set to 'AUTO', the file is uncompressed (the value 'AUTO' implies the object specified with object_uri is compressed with Gzip).

Note:

To run DBMS_CLOUD.GET_OBJECT with a user other than ADMIN you need to grant WRITE privileges on the directory to that user. For example, run the following command as ADMIN to grant write privileges to atpc_user:

GRANT WRITE ON DIRECTORY data_pump_dir TO atpc_user;

Example

BEGIN
   DBMS_CLOUD.GET_OBJECT(
     credential_name => 'OBJ_STORE_CRED',
     object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
     directory_name => 'DATA_PUMP_DIR'); 
END;
/

In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

LIST_FILES Function

This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp.

Syntax

DBMS_CLOUD.LIST_FILES (
	directory_name      IN VARCHAR2)
       RETURN TABLE;

Parameters

Parameter Description

directory_name

The name of the directory on the database.

Usage Notes

  • To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to grant read privileges on the directory to that user. For example, run the following command as ADMIN to grant read privileges to atpc_user:

    GRANT READ ON DIRECTORY data_pump_dir TO atpc_user;
  • This is a pipelined table function with return type DBMS_CLOUD_TYPES.list_object_ret_t.

  • DBMS_CLOUD.LIST_FILES does not obtain the checksum value and returns NULL for this field.

Example

This is a pipelined function that returns a row for each file. For example, use the following query:

SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');

OBJECT_NAME       BYTES   CHECKSUM      CREATED              LAST_MODIFIED
------------ ---------- ----------    ---------------------  ---------------------
cwallet.sso        2965               2018-12-12T18:10:47Z   2019-11-23T06:36:54Z

LIST_OBJECTS Function

This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp.

Syntax

DBMS_CLOUD.LIST_OBJECTS (
       credential_name      IN VARCHAR2,
       location_uri         IN VARCHAR2)
   RETURN TABLE;

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

location_uri

Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

Usage Notes

  • Depending on the capabilities of the object store, DBMS_CLOUD.LIST_OBJECTS may not return values for certain attributes; in this case the return value for the field is NULL.

    All supported Object Stores return values for the OBJECT_NAME, BYTES, and CHECKSUM fields.

    The following table shows support for the fields CREATED and LAST_MODIFIED by Object Store:

    Object Store                            CREATED             LAST_MODIFIED
    Oracle Cloud Infrastructure Native      Returns timestamp   Returns NULL
    Oracle Cloud Infrastructure Swift       Returns NULL        Returns timestamp
    Oracle Cloud Infrastructure Classic    Returns NULL        Returns timestamp
    Amazon S3                               Returns NULL        Returns timestamp
    Azure                                   Returns timestamp   Returns timestamp
  • The checksum value is the MD5 checksum. This is a 32-character hexadecimal number that is computed on the object contents.

  • This is a pipelined table function with return type DBMS_CLOUD_TYPES.list_object_ret_t.

Example

This is a pipelined function that returns a row for each object. For example, use the following query:

SELECT * FROM DBMS_CLOUD.LIST_OBJECTS('OBJ_STORE_CRED', 
    'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');


OBJECT_NAME   BYTES              CHECKSUM                       CREATED         LAST_MODIFIED
------------ ---------- -------------------------------- --------------------- --------------------
cwallet.sso   2965      2339a2731ba24a837b26d344d643dc07 2019-11-23T06:36:54Z          

In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

PUT_OBJECT Procedure

This procedure copies a file from Autonomous Database to the Cloud Object Storage. The maximum file size allowed in this procedure is 5 gigabytes (GB).

Syntax

DBMS_CLOUD.PUT_OBJECT (
	credential_name      IN VARCHAR2,		
	object_uri           IN VARCHAR2,		
	directory_name       IN VARCHAR2,
	file_name            IN VARCHAR2);	

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

object_uri

Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using. For details, see DBMS_CLOUD Package File URI Formats.

directory_name

The name of the directory on the database.

file_name

The name of the file in the specified directory.

Note:

To run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN you need to grant read privileges on the directory to that user. For example, run the following command as ADMIN to grant read privileges to atpc_user:

GRANT READ ON DIRECTORY data_pump_dir TO atpc_user;

Usage Note

Oracle Cloud Infrastructure object store does not allow writing files into a public bucket without supplying credentials (Oracle Cloud Infrastructure allows users to download objects from public buckets). Thus, you must supply a credential name with valid credentials to store an object in an Oracle Cloud Infrastructure public bucket using PUT_OBJECT.

See DBMS_CLOUD Package File URI Formats for more information.
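
Example

A minimal sketch that uploads a file from DATA_PUMP_DIR to object storage; namespace-string, bucketname, and the file name are placeholders:

BEGIN
	DBMS_CLOUD.PUT_OBJECT(
			credential_name => 'OBJ_STORE_CRED',
			object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
			directory_name  => 'DATA_PUMP_DIR',
			file_name       => 'cwallet.sso'
			);
END;
/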

UPDATE_CREDENTIAL Procedure

This procedure updates cloud service credential attributes in Autonomous Database.

Use stored credentials for data loading, for querying external data residing in the Cloud, or wherever you use DBMS_CLOUD procedures with a credential_name parameter. This procedure lets you update an attribute with a new value for a specified credential_name.

Syntax

DBMS_CLOUD.UPDATE_CREDENTIAL (
	credential_name   IN VARCHAR2,
	attribute         IN VARCHAR2,
	value             IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to be stored.

attribute

Name of attribute to update: USERNAME or PASSWORD.

value

New value for the selected attribute.

Usage Notes

  • The user name is case sensitive. It cannot contain double quotes or spaces.

  • The ADMIN user can see all the credentials by querying the dba_credentials view.

  • You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.

Example

BEGIN
  DBMS_CLOUD.UPDATE_CREDENTIAL(
     credential_name => 'OBJ_STORE_CRED',
     attribute => 'PASSWORD',
     value => 'password'); 
END;
/

VALIDATE_EXTERNAL_TABLE Procedure

This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.

Syntax

DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
	table_name      IN VARCHAR2,
	schema_name     IN VARCHAR2 DEFAULT,		
	rowcount        IN NUMBER DEFAULT,
	stop_on_error   IN BOOLEAN DEFAULT);


DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
	table_name      IN VARCHAR2,
	operation_id    OUT NOCOPY NUMBER,
	schema_name     IN VARCHAR2 DEFAULT NULL,		
	rowcount        IN NUMBER DEFAULT 0,
	stop_on_error   IN BOOLEAN DEFAULT TRUE);

Parameters

Parameter Description

table_name

The name of the external table.

operation_id

Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.

schema_name

The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure.

rowcount

Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned.

stop_on_error

Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. Setting the value to FALSE specifies that the validate does not stop at the first rejected row and validates all rows up to the value specified for the rowcount parameter.

If the external table refers to Parquet or Avro files, the validation stops at the first rejected row. When the external table specifies the format parameter type set to the value parquet or avro, the parameter stop_on_error is effectively always TRUE. Thus, the badfile table will always be empty for an external table referring to Parquet or Avro files.

Usage Note

DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE works with both partitioned external tables and hybrid partitioned tables. Validation potentially reads data from all external partitions until rowcount is reached or stop_on_error applies. You do not have control over which partition, or which parts of a partition, are read, or in what order.
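
Example

A minimal sketch that validates the first 100 rows without stopping at the first rejected row; the table name assumes the external table created in the CREATE_EXTERNAL_TABLE example:

BEGIN
	DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
			table_name    => 'WEATHER_REPORT_DOUBLE_DATE',
			rowcount      => 100,
			stop_on_error => FALSE
			);
END;
/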

VALIDATE_EXTERNAL_PART_TABLE Procedure

This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.

Syntax

DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
       table_name                 IN VARCHAR2,
       partition_name             IN CLOB DEFAULT,
       schema_name                IN VARCHAR2 DEFAULT,
       rowcount                   IN NUMBER DEFAULT,
       partition_key_validation   IN BOOLEAN DEFAULT,
       stop_on_error              IN BOOLEAN DEFAULT);


DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
       table_name                 IN VARCHAR2,
       operation_id               OUT NUMBER,
       partition_name             IN CLOB DEFAULT,
       schema_name                IN VARCHAR2 DEFAULT,
       rowcount                   IN NUMBER DEFAULT,
       partition_key_validation   IN BOOLEAN DEFAULT,
       stop_on_error              IN BOOLEAN DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

operation_id

Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.

partition_name

If defined, only the specified partition is validated. If not specified, all partitions are read sequentially until rowcount is reached.

schema_name

The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure.

rowcount

Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned.

partition_key_validation

For internal use only. Do not use this parameter.

stop_on_error

Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. Setting the value to FALSE specifies that the validate does not stop at the first rejected row and validates all rows up to the value specified for the rowcount parameter.
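
Example

A minimal sketch that validates a single partition; the table and partition names assume the table created in the CREATE_EXTERNAL_PART_TABLE example:

BEGIN
	DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE(
			table_name     => 'PET1',
			partition_name => 'P1'
			);
END;
/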

VALIDATE_HYBRID_PART_TABLE Procedure

This procedure validates the source files for a hybrid partitioned table, generates log information, and stores the rows that do not match the format options specified for the hybrid table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter. Hybrid partitioned tables are only supported with Oracle Database 19c onwards.

Syntax

DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
       table_name                 IN VARCHAR2,
       partition_name             IN CLOB DEFAULT,
       schema_name                IN VARCHAR2 DEFAULT,
       rowcount                   IN NUMBER DEFAULT,
       partition_key_validation   IN BOOLEAN DEFAULT,
       stop_on_error              IN BOOLEAN DEFAULT);


DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
       table_name                 IN VARCHAR2,
       operation_id               OUT NUMBER,
       partition_name             IN CLOB DEFAULT,
       schema_name                IN VARCHAR2 DEFAULT,
       rowcount                   IN NUMBER DEFAULT,
       partition_key_validation   IN BOOLEAN DEFAULT,
       stop_on_error              IN BOOLEAN DEFAULT);

Parameters

Parameter Description

table_name

The name of the external table.

operation_id

Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.

partition_name

If defined, only the specified partition is validated. If not specified, all external partitions are read sequentially until rowcount is reached.

schema_name

The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure.

rowcount

Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned.

partition_key_validation

For internal use only. Do not use this parameter.

stop_on_error

Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. Setting the value to FALSE specifies that the validate does not stop at the first rejected row and validates all rows up to the value specified for the rowcount parameter.

Usage Note

Hybrid partitioned tables are only supported with Oracle Database 19c onwards.