DBMS_CLOUD Subprograms and REST APIs
This section covers the DBMS_CLOUD
subprograms and REST APIs provided with Autonomous Database.
Note:
To run DBMS_CLOUD subprograms with a user other than ADMIN you need to grant EXECUTE privileges to that user. For example, run the following command as ADMIN to grant privileges to adb_user:
GRANT EXECUTE ON DBMS_CLOUD TO adb_user;
This DBMS_CLOUD
package is made up of the following:
DBMS_CLOUD for Access Management
The subprograms for credential management within the DBMS_CLOUD package, including creating, deleting, and updating credentials.
Subprogram | Description |
---|---|
CREATE_CREDENTIAL Procedure |
This procedure stores cloud service credentials in Autonomous Database. |
DROP_CREDENTIAL Procedure |
This procedure removes an existing credential from Autonomous Database. |
UPDATE_CREDENTIAL Procedure |
This procedure updates cloud service credential attributes in Autonomous Database. |
CREATE_CREDENTIAL Procedure
This procedure stores cloud service credentials in Autonomous Database.
Use stored cloud service credentials to access the cloud service for data loading, for querying external data residing in the cloud, or for other cases when you use DBMS_CLOUD procedures with a credential_name parameter. This procedure is overloaded. Use the Oracle Cloud Infrastructure-related parameters, including: user_ocid, tenancy_ocid, private_key, and fingerprint, only when you are using Oracle Cloud Infrastructure Signing Keys authentication.
Syntax
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name IN VARCHAR2,
username IN VARCHAR2,
password IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name IN VARCHAR2,
user_ocid IN VARCHAR2,
tenancy_ocid IN VARCHAR2,
private_key IN VARCHAR2,
fingerprint IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to be stored. |
username |
The username, together with the password, specifies your cloud service credentials. See the sections below for what to specify for the username and password for each cloud service. |
password |
The password, together with the username, specifies your cloud service credentials. |
user_ocid |
Specifies the user's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the User's OCID. |
tenancy_ocid |
Specifies the tenancy's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID. |
private_key |
Specifies the generated private key. Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for details on generating a key pair in PEM format. |
fingerprint |
Specifies a fingerprint. After a generated public key is uploaded to the user's account the fingerprint is displayed in the console. Use the displayed fingerprint for this argument. See How to Get the Key's Fingerprint and How to Generate an API Signing Key for more details. |
Usage Notes
- This operation stores the credentials in the database in an encrypted format.
- You can see the credentials in your schema by querying the user_credentials table.
- The ADMIN user can see all the credentials by querying the dba_credentials table.
- You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.
- This procedure is overloaded. If you provide one of the key based authentication attributes, user_ocid, tenancy_ocid, private_key, or fingerprint, the call is assumed to be an Oracle Cloud Infrastructure Signing Key based credential.
- On Oracle Cloud Infrastructure, private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for more information.
Oracle Cloud Infrastructure Credentials (Auth Tokens)
For Oracle Cloud Infrastructure, the username is your Oracle Cloud Infrastructure user name. The password is your Oracle Cloud Infrastructure auth token. See Working with Auth Tokens.
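For example, the following sketch creates an auth-token-based credential; the user name and token shown are placeholders for your own values:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'adb_user@example.com',
    password        => 'auth_token_value');
END;
/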
Oracle Cloud Infrastructure Object Storage Classic Credentials
If your source files reside in Oracle Cloud
Infrastructure Object Storage Classic, the username
is your Oracle Cloud Infrastructure
Classic user name and the password
is your Oracle Cloud Infrastructure
Classic password.
Amazon Web Services (AWS) Credentials
If your source files reside in Amazon S3 or you are calling
an AWS API, the username
is your AWS access key ID and the
password
is your AWS secret access key. See AWS Identity and Access Management.
Microsoft Azure Credentials
If your source files reside in Azure Blob Storage or you are
calling an Azure API, the username
is your Azure storage account
name and the password
is an Azure storage account access key. See
About Azure storage accounts.
Amazon S3-Compatible Credentials
Service | Credentials Information |
---|---|
Oracle Cloud Infrastructure (Customer Secret Keys) |
If your source files reside in Oracle Cloud Infrastructure, then you need to use Customer Secret Keys with S3-compatible URLs. See Working with Customer Secret Keys for more information. |
Google Cloud Storage |
If your source files reside in Google Cloud Storage or you are calling Google Cloud Storage APIs, then you need to set a default Google project and obtain an HMAC key to create credentials to supply with Google Cloud Storage S3-compatible URLs. Use the HMAC key id as the username, and the HMAC secret as the password. |
Wasabi Hot Cloud Storage |
If your source files reside in Wasabi Hot Cloud Storage or you are calling Wasabi Hot Cloud Storage APIs, then you need Access Keys to create credentials to supply with S3-compatible URLs. Use the Wasabi Hot Cloud Storage Access Key as the username, and the Wasabi Hot Cloud Storage Secret Key as the password. See Creating a Wasabi API Access Key Set for more information. |
DROP_CREDENTIAL Procedure
This procedure removes an existing credential from Autonomous Database.
Syntax
DBMS_CLOUD.DROP_CREDENTIAL
(
credential_name IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to be removed. |
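Example
The following sketch removes a credential; the credential name OBJ_STORE_CRED is a placeholder:
BEGIN
  DBMS_CLOUD.DROP_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED');
END;
/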
UPDATE_CREDENTIAL Procedure
This procedure updates cloud service credential attributes in Autonomous Database.
Use stored credentials for data loading, for querying external data residing in the
Cloud, or wherever you use DBMS_CLOUD
procedures with a
credential_name
parameter. This procedure lets you update an
attribute with a new value for a specified credential_name
.
Syntax
DBMS_CLOUD.UPDATE_CREDENTIAL
(
credential_name IN VARCHAR2,
attribute IN VARCHAR2,
value IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to be updated. |
attribute |
Name of the attribute to update. Valid values are: USERNAME, PASSWORD, USER_OCID, TENANCY_OCID, PRIVATE_KEY, or FINGERPRINT. |
value |
New value for the selected attribute. |
Usage Notes
- The user name is case sensitive. It cannot contain double quotes or spaces.
- The ADMIN user can see all the credentials by querying the dba_credentials table.
- You only need to create credentials once unless your cloud service credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.
Example
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL
(
credential_name => 'OBJ_STORE_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
DBMS_CLOUD for Objects and Files
The subprograms for object and file management within the DBMS_CLOUD package.
Subprogram | Description |
---|---|
COPY_COLLECTION Procedure |
This procedure loads data into an existing SODA collection from Cloud Object Storage. |
COPY_DATA Procedure |
This procedure loads data into existing Autonomous Database tables from files in the Cloud. |
COPY_DATA Procedure for Avro, ORC, or Parquet Files |
This procedure with the format parameter type set to the value orc, parquet, or avro loads data into existing Autonomous Database tables from ORC, Parquet, or Avro files in the Cloud. Similar to text files, the data is copied from the source ORC, Parquet, or Avro file into the preexisting internal table. |
CREATE_EXTERNAL_TABLE Procedure |
This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Database. |
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files |
This procedure with the format parameter type set to the value parquet, orc, or avro creates an external table with either Parquet, ORC, or Avro format files in the Cloud. This allows you to run queries on external data from Autonomous Database. |
CREATE_EXTERNAL_PART_TABLE Procedure |
This procedure creates an external partitioned table on files in the Cloud. This allows you to run queries on external data from Autonomous Database. |
CREATE_HYBRID_PART_TABLE Procedure |
This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database. |
DELETE_ALL_OPERATIONS Procedure |
This procedure clears either all data load operations logged in the user_load_operations table in your schema or clears all the data load operations of the specified type, as indicated with the type parameter. |
DELETE_FILE Procedure |
This procedure removes the specified file from the specified directory on Autonomous Database. |
DELETE_OBJECT Procedure |
This procedure deletes the specified object on object store. |
EXPORT_DATA Procedure |
This procedure exports data from Autonomous Database to Oracle Data Pump files in the Cloud based on the result of the specified SQL query. Using this procedure, Autonomous Database uses the ORACLE_DATAPUMP access driver to write data to dump file(s) on the Cloud Object store. |
GET_OBJECT Procedure and Function |
This procedure is overloaded. The procedure form reads an object from Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a BLOB to Autonomous Database. |
LIST_FILES Function |
This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp. |
LIST_OBJECTS Function |
This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp. |
PUT_OBJECT Procedure |
This procedure is overloaded. In one form the procedure copies a file from Autonomous Database to the Cloud Object Storage. In another form the procedure copies a BLOB from Autonomous Database to the Cloud Object Storage. |
VALIDATE_EXTERNAL_TABLE Procedure |
This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. |
VALIDATE_EXTERNAL_PART_TABLE Procedure |
This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database. |
VALIDATE_HYBRID_PART_TABLE Procedure |
This procedure validates the source files for a hybrid partitioned table, generates log information, and stores the rows that do not match the format options specified for the hybrid table in a badfile table on Autonomous Database. |
COPY_COLLECTION Procedure
This procedure loads data into a SODA collection from Cloud Object
Storage. If the specified SODA collection does not exist, the procedure creates it. The
overloaded form enables you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.COPY_COLLECTION
(
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.COPY_COLLECTION
(
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER
);
Parameters
Parameter | Description |
---|---|
collection_name |
The name of the SODA collection into which data will be loaded. If a collection with this name already exists, the specified data will be loaded, otherwise a new collection is created. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, the character "?" can be used as the wildcard for a single character. The format of the URIs depends on the Cloud Object Storage service. See DBMS_CLOUD Package File URI Formats for more information. |
format |
The options describing the format of the source files. These options are specified as a JSON string. For the list of format arguments supported by Autonomous Database for JSON data, see DBMS_CLOUD Package Format Options. |
operation_id |
Use this parameter to track the progress and final status of the load operation as the corresponding ID in the user_load_operations table in your schema. |
Example
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL
(
credential_name => 'OBJ_STORE_CRED',
username => 'user_name@oracle.com',
password => 'password'
);
DBMS_CLOUD.COPY_COLLECTION
(
collection_name => 'myCollection',
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/json/o/myCollection.json'
);
END;
/
COPY_DATA Procedure
This procedure loads data into existing Autonomous Database tables from files in the Cloud. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
schema_name IN VARCHAR2,
field_list IN CLOB,
format IN CLOB);
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL,
field_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the target table on the database. The target table needs to be created before you run DBMS_CLOUD.COPY_DATA. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, the character "?" can be used as the wildcard for a single character. The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats. |
schema_name |
The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure. |
field_list |
Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the target table definition. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For an example using field_list, see CREATE_EXTERNAL_TABLE Procedure. |
format |
The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. For Avro, ORC, or Parquet file format options, see DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet. |
operation_id |
Use this parameter to track the progress and final status of the load operation as the corresponding ID in the user_load_operations table in your schema. |
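Example
The following sketch loads a comma-delimited file from object store into an existing table; the table name, credential, and file URI are placeholders:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ',')
  );
END;
/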
COPY_DATA Procedure for Avro, ORC, or Parquet Files
This procedure with the format parameter type set to the value avro, orc, or parquet loads data into existing Autonomous Database tables from Avro, ORC, or Parquet files in the Cloud. Similar to text files, the data is copied from the source Avro, ORC, or Parquet file into the preexisting internal table.
Syntax
DBMS_CLOUD.COPY_DATA
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
schema_name IN VARCHAR2 DEFAULT,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the target table on the database. The target table needs to be created before you run DBMS_CLOUD.COPY_DATA. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, the character "?" can be used as the wildcard for a single character. The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats. |
schema_name |
The name of the schema where the target table resides. The default value is NULL meaning the target table is in the same schema as the user running the procedure. |
field_list |
Ignored for Avro, ORC, or Parquet files. The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details on mapping. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details on mapping. |
format |
The options describing the format of the source files. For Avro, ORC, or Parquet files, only two options are supported: see DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet. |
Usage Notes
- As with other data files, Avro, ORC, and Parquet data loads generate logs that are viewable in the tables dba_load_operations and user_load_operations. Each load operation adds a record to dba[user]_load_operations that indicates the table containing the logs. The log table provides summary information about the load.
- For Avro, ORC, or Parquet, when the format parameter type is set to the value avro, orc, or parquet, the BADFILE_TABLE table is always empty.
  - For Parquet files, PRIMARY KEY constraint errors throw an ORA error.
  - If data for a column encounters a conversion error, for example, the target column is not large enough to hold the converted value, the value for the column is set to NULL. This does not produce a rejected record.
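Example
A minimal sketch loading a Parquet file into an existing table; the table name, credential, and file URI are placeholders:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES_DATA',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
    format          => '{"type":"parquet"}'
  );
END;
/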
CREATE_EXTERNAL_PART_TABLE Procedure
This procedure creates an external partitioned table on files in the Cloud. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
partitioning_clause |
Specifies the complete partitioning clause, including the location information for individual partitions. |
column_list |
Comma-delimited list of column names and data types for the external table. |
field_list |
Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. |
format |
The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. |
Usage Notes
- With Avro, ORC, or Parquet data format using DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE, the schema format option is not available and the column_list parameter must be specified. The schema format option is available with DBMS_CLOUD.CREATE_EXTERNAL_TABLE.
- The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage.
  See DBMS_CLOUD Package File URI Formats for more information.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
(
table_name =>'PET1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000) location
( ''&base_URL/file_11.txt'')
,
partition p2 values less than (2000) location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000) location
( ''&base_URL/file_31.txt'')
)'
);
END;
/
CREATE_EXTERNAL_TABLE Procedure
This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, the character "?" can be used as the wildcard for a single character. The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats. |
column_list |
Comma-delimited list of column names and data types for the external table. |
field_list |
Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. |
format |
The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. For Avro, ORC, or Parquet format files, see CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files. |
Usage Notes
- The procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage.
  The credential is a table level property; therefore, the external files must be on the same object store.
  See DBMS_CLOUD Package File URI Formats for more information.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name =>'WEATHER_REPORT_DOUBLE_DATE',
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'&base_URL/Charlotte_NC_Weather_History_Double_Dates.csv',
format => json_object('type' value 'csv', 'skipheaders' value '1'),
field_list => 'REPORT_DATE DATE ''mm/dd/yy'',
REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
ACTUAL_MEAN_TEMP,
ACTUAL_MIN_TEMP,
ACTUAL_MAX_TEMP,
AVERAGE_MIN_TEMP,
AVERAGE_MAX_TEMP,
AVERAGE_PRECIPITATION',
column_list => 'REPORT_DATE DATE,
REPORT_DATE_COPY DATE,
ACTUAL_MEAN_TEMP NUMBER,
ACTUAL_MIN_TEMP NUMBER,
ACTUAL_MAX_TEMP NUMBER,
AVERAGE_MIN_TEMP NUMBER,
AVERAGE_MAX_TEMP NUMBER,
AVERAGE_PRECIPITATION NUMBER');
END;
/
SELECT * FROM WEATHER_REPORT_DOUBLE_DATE where
 actual_mean_temp > 69 and actual_mean_temp < 74;
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files
This procedure with the format parameter type set to the value avro, orc, or parquet creates an external table with either Avro, ORC, or Parquet format files in the Cloud. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters, the character "?" can be used as the wildcard for a single character. The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats. |
column_list |
(Optional) This field, when specified, overrides the format schema option, which determines the schema based on the first Avro, ORC, or Parquet file. When the column_list is not specified, the columns are derived from the schema in the source files. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping for details. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details. |
field_list |
Ignored for Avro, ORC, or Parquet files. The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type Mapping. For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details. For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type Mapping for details. |
format |
For Avro, ORC, or Parquet, there are only two supported parameters. See DBMS_CLOUD Package Format Options for Avro, ORC, or Parquet for details. |
Examples ORC
format => '{"type":"orc", "schema": "all"}'
format => json_object('type' value 'orc', 'schema' value 'first')
Examples Avro
format => '{"type":"avro", "schema": "all"}'
format => json_object('type' value 'avro', 'schema' value 'first')
Examples Parquet
format => '{"type":"parquet", "schema": "all"}'
format => json_object('type' value 'parquet', 'schema' value 'first')
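For example, the following sketch shows a full call that creates an external table over a Parquet file, deriving the columns from the file schema; the table name, credential, and file URI are placeholders:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXT',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
    format          => '{"type":"parquet", "schema": "first"}'
  );
END;
/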
Avro, ORC, or Parquet Column Name Mapping to Oracle Column Names
See DBMS_CLOUD Package Avro, ORC, and Parquet to Oracle Column Name Mapping for information on column name mapping and column name conversion usage in Oracle SQL.
CREATE_HYBRID_PART_TABLE Procedure
This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
credential_name |
The name of the credential to access the Cloud Object Storage. |
partitioning_clause |
Specifies the complete partitioning clause, including the location information for individual partitions. |
column_list |
Comma-delimited list of column names and data types for the external table. |
field_list |
Identifies the fields in the source files and their data types. The default value is NULL meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. |
format |
The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options. |
Usage Note
- The procedure DBMS_CLOUD.CREATE_HYBRID_PART_TABLE supports external partitioned files in the supported cloud object storage services, including:
  - Oracle Cloud Infrastructure Object Storage
  - Azure Blob Storage
  - Amazon S3
  - Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage.
  The credential is a table level property; therefore, the external files must be on the same object store.
  See DBMS_CLOUD Package File URI Formats for more information.
Example
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
(
table_name =>'HPT1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000) external location
( ''&base_URL/file_11.txt'')
,
partition p2 values less than (2000) external location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000)
)'
);
END;
/
DELETE_ALL_OPERATIONS Procedure
This procedure clears either all data load operations logged in the
user_load_operations
table in your schema or clears all the data
load operations of the specified type, as indicated with the type
parameter.
Syntax
DBMS_CLOUD.DELETE_ALL_OPERATIONS
(
type IN VARCHAR DEFAULT NULL);
Parameters
Parameter | Description |
---|---|
type |
Specifies the type of operation to delete. Type values can be found in the TYPE column of the user_load_operations table in your schema. If no type is specified, all the logged data load operations are deleted. |
Usage Note
- DBMS_CLOUD.DELETE_ALL_OPERATIONS does not delete currently running operations (operations in a "Running" status).
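Example
A minimal sketch that clears the logged copy operations; this assumes COPY appears as a type value in your user_load_operations table:
BEGIN
  DBMS_CLOUD.DELETE_ALL_OPERATIONS(
    type => 'COPY');
END;
/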
DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on Autonomous Database.
Syntax
DBMS_CLOUD.DELETE_FILE
(
directory_name IN VARCHAR2,
file_name IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
directory_name |
The name of the directory on the Autonomous Database instance. |
file_name |
The name of the file to be removed. |
Note:
To run DBMS_CLOUD.DELETE_FILE with a user other than ADMIN you need to grant write privileges on the directory that contains the file to that user. For example, run the following command as ADMIN to grant write privileges to adb_user:
GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Example
BEGIN
DBMS_CLOUD.DELETE_FILE
(
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp1.dmp');
END;
/
DELETE_OBJECT Procedure
This procedure deletes the specified object on object store.
Syntax
DBMS_CLOUD.DELETE_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to access the Cloud Object Storage. |
object_uri |
Object or file URI for the object to delete. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD Package File URI Formats. |
Example
BEGIN
DBMS_CLOUD.DELETE_OBJECT
(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp');
END;
/
EXPORT_DATA Procedure
This procedure exports data from Autonomous Database to Oracle Data Pump files in the Cloud based on the result of the specified SQL query. Using this procedure, Autonomous Database uses the ORACLE_DATAPUMP access driver to write data to dump file(s) on the Cloud Object store. The overloaded form enables you to use the operation_id parameter.
Syntax
DBMS_CLOUD.EXPORT_DATA
(
file_uri_list IN CLOB,
format IN CLOB,
credential_name IN VARCHAR2 DEFAULT NULL,
query IN CLOB);
DBMS_CLOUD.EXPORT_DATA
(
file_uri_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL,
credential_name IN VARCHAR2 DEFAULT NULL,
query IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to access the Cloud Object Storage. |
file_uri_list |
Comma-delimited list of the dump files. This specifies the files to be created on the Object Store. Use of wildcard and substitution characters is not supported in the file_uri_list. The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats. |
format |
Specifies export format options, provided as a JSON string. The type option is required and must have the value 'datapump' (the example below also shows the compression and version options). In addition, with the 'datapump' type you can specify other ORACLE_DATAPUMP access driver parameters. See access_parameters Clause for more information. |
query |
Use this parameter to specify a SELECT statement so that only the required data is exported. The query determines the contents of the dump file(s). For example: SELECT warehouse_id, quantity FROM inventories. See Oracle Data Pump Export Data Filters and Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for more information. |
operation_id |
Use this parameter to track the progress and final status of the export operation as the corresponding ID in the user_load_operations table in your schema. |
Usage Notes
- Autonomous Database export using DBMS_CLOUD.EXPORT_DATA only supports Oracle Cloud Infrastructure Object Storage and Oracle Cloud Infrastructure Object Storage Classic object stores.
- Oracle Data Pump divides each dump file part into smaller chunks for faster uploads. The Oracle Cloud Infrastructure Object Storage console shows multiple files for each dump file part that you export. The size of the actual dump files will be displayed as zero (0) and its related file chunks as 10mb or less. For example:
  exp01.dmp
  exp01.dmp_aaaaaa
  exp02.dmp
  exp02.dmp_aaaaaa
  Downloading the zero byte dump file from the Oracle Cloud Infrastructure console or using the Oracle Cloud Infrastructure CLI will not give you the full dump files. To download the full dump files from the Object Store, use a tool that supports Swift such as curl, and provide your user login and Swift auth token.
  curl -O -v -X GET -u 'user1@example.com:auth_token' \
     https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/namespace-string/bucketname/exp01.dmp
  If you import a file with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump', you only need to provide the primary file name. The procedures that support the 'datapump' format type automatically discover and download the chunks.
  When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically discovers and deletes the chunks when the procedure deletes the primary file.
- The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) from the file_uri_list values that you specify, as follows:
  - As more files are needed, the procedure creates additional files from the file_uri_list.
  - The procedure does not overwrite files. If a dump file in the file_uri_list exists, DBMS_CLOUD.EXPORT_DATA reports an error.
  - DBMS_CLOUD.EXPORT_DATA does not create buckets.
- The number of dump files that DBMS_CLOUD.EXPORT_DATA generates is determined when the procedure runs. The number of dump files that are generated depends on the number of file names you provide in the file_uri_list parameter, as well as on the number of Autonomous Database OCPUs available to the instance, the service level, and the size of the data. For example, if you use a 1 OCPU Autonomous Database instance or the low service, then a single dump file is exported with no parallelism, even if you provide multiple file names. If you use a 4 OCPU Autonomous Database instance with the medium or high service, then the jobs can run in parallel and multiple dump files are exported if you provide multiple file names.
- The dump files you create with DBMS_CLOUD.EXPORT_DATA cannot be imported using Oracle Data Pump impdp. Depending on the database, you can use these files as follows:
  - On an Autonomous Database instance on Shared Infrastructure, you can use the dump files with the DBMS_CLOUD procedures that support the format parameter type with the value 'datapump'. You can import the dump files using DBMS_CLOUD.COPY_DATA or you can call DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create an external table.
  - On any other Oracle Database, such as Oracle Database 19c on-premise, you can import the dump files created with the procedure DBMS_CLOUD.EXPORT_DATA using the ORACLE_DATAPUMP access driver. See Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for more information.
- The query parameter value that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.
Example
BEGIN
DBMS_CLOUD.EXPORT_DATA
(
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp',
format => json_object('type' value 'datapump', 'compression' value 'basic', 'version' value 'latest'),
query => 'SELECT warehouse_id, quantity FROM inventories'
);
END;
/
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
GET_OBJECT Procedure and Function
This procedure is overloaded. The procedure form reads an object from
Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a
BLOB
to Autonomous Database.
Syntax
DBMS_CLOUD.GET_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2 DEFAULT NULL,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.GET_OBJECT
(
credential_name IN VARCHAR2 DEFAULT NULL,
object_uri IN VARCHAR2,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL)
RETURN BLOB;
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to access the Cloud Object Storage. |
object_uri |
Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD Package File URI Formats. |
directory_name |
The name of the directory on the database. Footnote 1 |
file_name |
Specifies the name of the file to create. If the file name is not specified, the file name is taken from after the last slash in the object_uri parameter. |
startoffset |
The offset, in bytes, from where the procedure starts reading. |
endoffset |
The offset, in bytes, until where the procedure stops reading. |
compression |
Specifies the compression used to store the object. When compression is set to 'AUTO' the file is uncompressed (the value 'AUTO' implies the object is stored with compression). |
Footnote 1
Note:
To run DBMS_CLOUD.GET_OBJECT
with
a user other than ADMIN you need to grant WRITE
privileges on the directory to that user. For example,
run the following command as ADMIN to grant write
privileges to adb_user
:
GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Return Values
The function form reads from Object Store and DBMS_CLOUD.GET_OBJECT
returns a
BLOB
.
Examples
BEGIN
DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt',
directory_name => 'DATA_PUMP_DIR');
END;
/
To read character data from a file in Object Store:
SELECT to_clob(
DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt'))
FROM DUAL;
To add an image stored on Object Store to a BLOB in the database:
DECLARE
l_blob BLOB := NULL;
BEGIN
l_blob := DBMS_CLOUD.GET_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/MyImage.gif' );
END;
/
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
LIST_FILES Function
This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_FILES
(
directory_name IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter | Description |
---|---|
directory_name |
The name of the directory on the database. |
Usage Notes
- To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to grant read privileges on the directory to that user. For example, run the following command as ADMIN to grant read privileges to adb_user:
  GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
- This is a pipelined table function with return type as DBMS_CLOUD_TYPES.list_object_ret_t.
- DBMS_CLOUD.LIST_FILES does not obtain the checksum value and returns NULL for this field.
Example
This is a pipelined function that returns a row for each file. For example, use the following query to call this function:
SELECT * FROM DBMS_CLOUD.LIST_FILES
('DATA_PUMP_DIR');
OBJECT_NAME BYTES CHECKSUM CREATED LAST_MODIFIED
------------ ---------- ---------- --------------------- ---------------------
cwallet.sso 2965 2018-12-12T18:10:47Z 2019-11-23T06:36:54Z
LIST_OBJECTS Function
This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_OBJECTS
(
credential_name IN VARCHAR2,
location_uri IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to access the Cloud Object Storage. |
location_uri |
Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD Package File URI Formats. |
Usage Notes
- Depending on the capabilities of the object store, DBMS_CLOUD.LIST_OBJECTS does not return values for certain attributes and the return value for the field is NULL in this case. All supported Object Stores return values for the OBJECT_NAME, BYTES, and CHECKSUM fields. The following table shows support for the fields CREATED and LAST_MODIFIED by Object Store:

Object Store | CREATED | LAST_MODIFIED |
---|---|---|
Oracle Cloud Infrastructure Native | Returns timestamp | Returns NULL |
Oracle Cloud Infrastructure Swift | Returns NULL | Returns timestamp |
Oracle Cloud Infrastructure Classic | Returns NULL | Returns timestamp |
Amazon S3 | Returns NULL | Returns timestamp |
Amazon S3-Compatible | Returns NULL | Returns timestamp |
Azure | Returns timestamp | Returns timestamp |

- The checksum value is the MD5 checksum. This is a 32-character hexadecimal number that is computed on the object contents.
- This is a pipelined table function with return type as DBMS_CLOUD_TYPES.list_object_ret_t.
Example
This is a pipelined function that returns a row for each object. For example, use the following query to call this function:
SELECT * FROM DBMS_CLOUD.LIST_OBJECTS
('OBJ_STORE_CRED',
'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');
OBJECT_NAME BYTES CHECKSUM CREATED LAST_MODIFIED
------------ ---------- -------------------------------- --------------------- --------------------
cwallet.sso 2965 2339a2731ba24a837b26d344d643dc07 2019-11-23T06:36:54Z
In this example, namespace-string
is the Oracle
Cloud Infrastructure object storage namespace and
bucketname
is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
PUT_OBJECT Procedure
This procedure is overloaded. In one form the procedure copies a file from
Autonomous Database to the Cloud Object
Storage. In another form the procedure copies a BLOB
from Autonomous Database to the Cloud Object
Storage.
Syntax
DBMS_CLOUD.PUT_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2);
DBMS_CLOUD.PUT_OBJECT
(
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
contents IN BLOB,
file_name IN VARCHAR2);
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential to access the Cloud Object Storage. |
object_uri |
Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD Package File URI Formats. |
directory_name |
The name of the directory on the Autonomous Database. Footnote 2 |
file_name |
The name of the file in the specified directory. |
Footnote 2
Note:
To run DBMS_CLOUD.PUT_OBJECT
with
a user other than ADMIN you need to grant read privileges on the
directory to that user. For example, run the following
command as ADMIN to
grant read privileges to adb_user
:
GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
Example
To handle BLOB
data after in-database processing and then store the
data directly into a file in the object store:
DECLARE
my_blob_data BLOB;
BEGIN
/* Some processing producing BLOB data and populating my_blob_data */
DBMS_CLOUD.PUT_OBJECT
(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/my_new_file',
contents => my_blob_data);
END;
/
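A sketch of the file-based form, copying a file from a database directory to object store; the URI, directory, and file name are placeholders:
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'exp1.dmp');
END;
/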
Usage Notes
Depending on your Cloud Object Storage, the size of the object you transfer is limited as follows:
Cloud Object Storage Service | Object Transfer Size Limit |
---|---|
Oracle Cloud Infrastructure Object Storage |
50 GB |
Amazon S3 |
5 GB |
Azure Blob Storage |
256 MB |
Amazon S3-Compatible |
Set by the object store provider. For more information, refer to the provider's documentation. |
Oracle Cloud
Infrastructure object store does not allow writing files into a public bucket without supplying
credentials (Oracle Cloud
Infrastructure allows users to download objects from public buckets). Thus, you must supply a
credential name with valid credentials to store an object in an Oracle Cloud
Infrastructure public bucket using PUT_OBJECT
.
See DBMS_CLOUD Package File URI Formats for more information.
VALIDATE_EXTERNAL_PART_TABLE Procedure
This procedure validates the source files for an external partitioned
table, generates log information, and stores the rows that do not match the format options
specified for the external table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
operation_id |
Use this parameter to track the progress and final status of the load operation as the corresponding ID in the user_load_operations table in your schema. |
partition_name |
If defined, then only a specific partition is validated. If not specified, then all partitions are read sequentially until rowcount is reached. |
schema_name |
The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
rowcount |
Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
partition_key_validation |
For internal use only. Do not use this parameter. |
stop_on_error |
Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. |
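Example
A minimal sketch that validates a single partition of the PET1 table from the CREATE_EXTERNAL_PART_TABLE example, scanning at most 100 rows:
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE(
    table_name     => 'PET1',
    partition_name => 'P1',
    rowcount       => 100);
END;
/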
VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table,
generates log information, and stores the rows that do not match the format options
specified for the external table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NOCOPY NUMBER,
schema_name IN VARCHAR2 DEFAULT NULL,
rowcount IN NUMBER DEFAULT 0,
stop_on_error IN BOOLEAN DEFAULT TRUE);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
operation_id |
Use this parameter to track the progress and final status of the load operation as the corresponding ID in the user_load_operations table in your schema. |
schema_name |
The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
rowcount |
Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
stop_on_error |
Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. If the external table refers to Avro, ORC, or Parquet files then the validate stops at the first rejected row. When the external table specifies the format parameter type set to the value avro, orc, or parquet, the badfile table is always empty. |
Usage Notes
- DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE works with both partitioned external tables and hybrid partitioned tables. This potentially reads data from all external partitions until rowcount is reached or stop_on_error applies. You do not have control over which partition, or parts of a partition, is read in which order.
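Example
A minimal sketch that validates the external table from the CREATE_EXTERNAL_TABLE example and continues past rejected rows:
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
    table_name    => 'WEATHER_REPORT_DOUBLE_DATE',
    stop_on_error => FALSE);
END;
/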
VALIDATE_HYBRID_PART_TABLE Procedure
This procedure validates the source files for a hybrid partitioned
table, generates log information, and stores the rows that do not match the format options
specified for the hybrid table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id
parameter.
Syntax
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE
(
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter | Description |
---|---|
table_name |
The name of the external table. |
operation_id |
Use this parameter to track the progress and final status of the load operation as the corresponding ID in the user_load_operations table in your schema. |
partition_name |
If defined, then only a specific partition is validated. If not specified, then all external partitions are read sequentially until rowcount is reached. |
schema_name |
The name of the schema where the external table resides. The default value is NULL meaning the external table is in the same schema as the user running the procedure. |
rowcount |
Number of rows to be scanned. The default value is NULL meaning all the rows in the source files are scanned. |
partition_key_validation |
For internal use only. Do not use this parameter. |
stop_on_error |
Determines if the validate should stop when a row is rejected. The default value is TRUE meaning the validate stops at the first rejected row. |
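Example
A minimal sketch that validates the HPT1 table from the CREATE_HYBRID_PART_TABLE example, scanning at most 100 rows:
BEGIN
  DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE(
    table_name => 'HPT1',
    rowcount   => 100);
END;
/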
DBMS_CLOUD REST APIs
This section covers the DBMS_CLOUD
REST APIs provided with Autonomous Database.
REST API | Description |
---|---|
GET_RESPONSE_HEADERS Function |
This function returns the HTTP response headers as JSON data in a JSON object in Autonomous Database. |
GET_RESPONSE_RAW Function |
This function returns the HTTP response in RAW format in Autonomous Database. This is useful if the HTTP response is expected to be in binary format. |
GET_RESPONSE_STATUS_CODE Function |
This function returns the HTTP response status code as an integer in Autonomous Database. The status code helps to identify if the request is successful. |
GET_RESPONSE_TEXT Function |
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB) in Autonomous Database. Usually, most Cloud REST APIs return JSON response in text format. This function is useful if you expect the HTTP response in text format. |
SEND_REQUEST Function |
This function begins an HTTP request, gets the response, and ends the response in Autonomous Database. This function provides a workflow for sending a Cloud REST API request with arguments and a return response code and payload. |
DBMS_CLOUD REST API Overview
When you use PL/SQL in your application and you need
to call Cloud REST APIs you can use the DBMS_CLOUD
function DBMS_CLOUD.SEND_REQUEST
to send the REST
API requests.
The DBMS_CLOUD
REST
API functions allow you to make HTTP
requests using DBMS_CLOUD.SEND_REQUEST
. These
functions provide a generic API that lets you call any REST API with the following
supported cloud services:
- Oracle Cloud Infrastructure
- Amazon Web Services (AWS)
- Azure Cloud Footnote 3
- Oracle Cloud Infrastructure Classic
See the following for more information:
- See API Reference and Endpoints for information on Oracle Cloud Infrastructure REST APIs.
- See Guides and API References for information on Amazon Web Services REST APIs.
- See Azure REST API Reference for information on Azure REST APIs.
- See All REST Endpoints for information on Oracle Cloud Infrastructure Classic REST APIs.
DBMS_CLOUD REST API Constants
Describes the DBMS_CLOUD
constants for making HTTP
requests using
DBMS_CLOUD.SEND_REQUEST
.
DBMS_CLOUD
supports
GET
, PUT
, POST
, HEAD
and DELETE
HTTP methods. The REST API method to be used for an HTTP request
is typically documented in the Cloud REST API documentation.
Name | Type | Value |
---|---|---|
METHOD_DELETE |
VARCHAR2(6) |
'DELETE' |
METHOD_GET |
VARCHAR2(3) |
'GET' |
METHOD_HEAD |
VARCHAR2(4) |
'HEAD' |
METHOD_POST |
VARCHAR2(4) |
'POST' |
METHOD_PUT |
VARCHAR2(3) |
'PUT' |
GET_RESPONSE_HEADERS Function
This function returns the HTTP response headers as JSON data in a JSON object.
Syntax
DBMS_CLOUD.GET_RESPONSE_HEADERS
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN JSON_OBJECT_T;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_HEADERS. |
GET_RESPONSE_RAW Function
This function returns the HTTP response in RAW format. This is useful if the HTTP response is expected to be in binary format.
Syntax
DBMS_CLOUD.GET_RESPONSE_RAW
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN BLOB;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_RAW. |
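Example
A sketch that issues a GET request and stores the binary response in a BLOB; the credential and URI are placeholders:
DECLARE
  resp   DBMS_CLOUD_TYPES.resp;
  l_body BLOB;
BEGIN
  resp := DBMS_CLOUD.SEND_REQUEST(
    credential_name => 'OCI_KEY_CRED',
    uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/MyImage.gif',
    method => DBMS_CLOUD.METHOD_GET);
  l_body := DBMS_CLOUD.GET_RESPONSE_RAW(resp);
END;
/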
GET_RESPONSE_STATUS_CODE Function
This function returns the HTTP response status code as an integer. The status code helps to identify if the request is successful.
Syntax
DBMS_CLOUD.GET_RESPONSE_STATUS_CODE
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN PLS_INTEGER;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_STATUS_CODE. |
GET_RESPONSE_TEXT Function
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB). Usually, most Cloud REST APIs return JSON response in text format. This function is useful if you expect the HTTP response in text format.
Syntax
DBMS_CLOUD.GET_RESPONSE_TEXT
(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN CLOB;
Parameters
Parameter | Description |
---|---|
resp |
HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_response |
ORA-20025 |
Invalid response type object passed to DBMS_CLOUD.GET_RESPONSE_TEXT. |
SEND_REQUEST Function
This function begins an HTTP request, gets the response, and ends the response. This function provides a workflow for sending a Cloud REST API request with arguments and a return response code and payload.
Syntax
DBMS_CLOUD.SEND_REQUEST
(
credential_name IN VARCHAR2,
uri IN VARCHAR2,
method IN VARCHAR2,
headers IN CLOB DEFAULT NULL,
async_request_url IN VARCHAR2 DEFAULT NULL,
wait_for_states IN DBMS_CLOUD_TYPES.wait_for_states_t DEFAULT NULL,
timeout IN NUMBER DEFAULT 0,
body IN BLOB DEFAULT NULL)
RETURN DBMS_CLOUD_TYPES.resp;
Parameters
Parameter | Description |
---|---|
credential_name |
The name of the credential for authenticating with the corresponding cloud native API. |
uri |
HTTP URI to make the request. |
method |
HTTP Request Method: GET, PUT, POST, HEAD, or DELETE, specified using the package constants. See DBMS_CLOUD REST API Constants for more information. |
headers |
HTTP Request headers for the corresponding cloud native API in JSON format. The authentication headers are set automatically, only pass custom headers. |
async_request_url |
An asynchronous request URL. To obtain the URL select your request API from the list of APIs (see https://docs.cloud.oracle.com/en-us/iaas/api/). Then, navigate to find the API for your request in the left pane. For example, Database Services API → Autonomous Database → StopAutonomousDatabase. This page shows the API home (and shows the base endpoint). Then, append the base endpoint with the relative path obtained for your work request WorkRequest link. |
wait_for_states |
Wait for states is a status of type DBMS_CLOUD_TYPES.wait_for_states_t. The following are valid values for expected states: 'ACTIVE', 'CANCELED', 'COMPLETED', 'DELETED', 'FAILED', 'SUCCEEDED'. Multiple states are allowed for wait_for_states. |
timeout |
Specifies the timeout, in seconds, for asynchronous requests with the parameters async_request_url and wait_for_states. Default value is 0, which means the request waits for completion without a timeout. |
body |
HTTP Request Body for PUT and POST requests. |
Exceptions
Exception | Error | Description |
---|---|---|
invalid_req_method |
ORA-20023 |
Request method passed to DBMS_CLOUD.SEND_REQUEST is invalid. |
invalid_req_header |
ORA-20024 |
Request headers passed to DBMS_CLOUD.SEND_REQUEST are not in valid JSON format. |
Usage Notes
- If you are using Oracle Cloud Infrastructure, you must use a Signing Key based credential value for the credential_name. See CREATE_CREDENTIAL Procedure for more information.
- The optional parameters async_request_url, wait_for_states, and timeout allow you to handle long running requests. Using this asynchronous form of send_request, the function waits for the completion status specified in wait_for_states before returning. With these parameters you pass the expected return states in the wait_for_states parameter and use the async_request_url parameter to specify an associated work request; the request does not return immediately. Instead, the request probes the async_request_url until the return state is one of the expected states or the timeout is exceeded (timeout is optional). If no timeout is specified, the request waits until a state found in wait_for_states occurs.
DBMS_CLOUD REST API Examples
Shows examples using DBMS_CLOUD.SEND_REQUEST
to create and delete an Oracle Cloud
Infrastructure Object Storage bucket, and an example to list all compartments in the tenancy.
Note:
These examples show Oracle Cloud
Infrastructure request APIs and require that you use a Signing Key based credential for the
credential_name
. Oracle Cloud
Infrastructure Signing Key based credentials include the private_key
and
fingerprint
arguments.
For example:
BEGIN
   DBMS_CLOUD.CREATE_CREDENTIAL (
       credential_name => 'OCI_KEY_CRED',
       user_ocid       => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
       tenancy_ocid    => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
       private_key     => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
       fingerprint     => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
See
CREATE_CREDENTIAL Procedure
for information on DBMS_CLOUD.CREATE_CREDENTIAL
.
Create Bucket Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with
HTTP POST
method to create an object store bucket
named bucketname
.
See CreateBucket for details on the Oracle Cloud Infrastructure Object Storage Service API for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/namespace-string/b/',
method => DBMS_CLOUD.METHOD_POST,
body => UTL_RAW.cast_to_raw(
JSON_OBJECT('name' value 'bucketname',
'compartmentId' value 'compartment_OCID'))
);
-- Response Body in TEXT format
dbms_output.put_line('Body: ' || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_text(resp) || CHR(10));
-- Response Headers in JSON format
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
-- Response Status Code
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_status_code(resp));
END;
/
Notes:
- In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
- Where: region is an endpoint region. See Object Storage API reference in API Reference and Endpoints for more information. For example, where region is: us-phoenix-1.
Delete Bucket Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with
HTTP DELETE
method to delete an object store bucket
named bucketname
.
See DeleteBucket for details on the Oracle Cloud Infrastructure Object Storage Service API for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucketname',
method => DBMS_CLOUD.METHOD_DELETE
);
-- Response Body in TEXT format
dbms_output.put_line('Body: ' || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_text(resp) || CHR(10));
-- Response Headers in JSON format
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
-- Response Status Code
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_status_code(resp));
END;
/
Notes:
- In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
- Where: region is an endpoint region. See Object Storage API reference in API Reference and Endpoints for more information. For example, where region is: us-phoenix-1.
List Compartments Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with HTTP GET
method to
list all compartments in the tenancy (root compartment). This example shows how to pass
request headers in the DBMS_CLOUD.SEND_REQUEST
.
See ListCompartments for details on the Oracle Cloud Infrastructure Identity and Access Management Service API for this example.
--
-- List compartments
--
DECLARE
resp DBMS_CLOUD_TYPES.resp;
root_compartment_ocid VARCHAR2(512) := '&1';
BEGIN
-- Send request
dbms_output.put_line('Send Request');
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://identity.region.oraclecloud.com/20160918/compartments?compartmentId=' || root_compartment_ocid,
method => DBMS_CLOUD.METHOD_GET,
headers => JSON_OBJECT('opc-request-id' value 'list-compartments')
);
dbms_output.put_line('Body: ' || '------------' || CHR(10) || DBMS_CLOUD.get_response_text(resp) || CHR(10));
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10) || DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' || CHR(10) || DBMS_CLOUD.get_response_status_code(resp));
dbms_output.put_line(CHR(10));
END;
/
Where: region
is an endpoint region. See Identity and Access
Management (IAM) API reference in API Reference and Endpoints for more information.
For example, where region
is: uk-london-1
.
Asynchronous Request Example
Shows an example using DBMS_CLOUD.SEND_REQUEST
with HTTP POST
method to
perform the Autonomous Database stop operation and wait for status. This example shows how
to use DBMS_CLOUD.SEND_REQUEST
with the
async_request_url
, wait_for_states
, and
timeout
parameters.
--
-- Sent Work Request Autonomous Database Stop Request with Wait for Status
DECLARE
l_resp DBMS_CLOUD_TYPES.resp;
l_resp_json JSON_OBJECT_T;
l_key_shape JSON_OBJECT_T;
l_body JSON_OBJECT_T;
status_array DBMS_CLOUD_TYPES.wait_for_states_t;
BEGIN
status_array := DBMS_CLOUD_TYPES.wait_for_states_t('SUCCEEDED');
l_body := JSON_OBJECT_T('{}');
l_body.put('autonomousDatabaseId', 'ocid');
-- Send request
dbms_output.put_line(l_body.to_clob);
dbms_output.put_line('Send Request');
l_resp := DBMS_CLOUD.send_request(
credential_name => 'NATIVE_CRED_OCI',
uri => 'https://database.region.oraclecloud.com/20160918/autonomousDatabases/ocid/actions/stop',
method => DBMS_CLOUD.METHOD_POST,
body => UTL_RAW.cast_to_raw(l_body.to_clob),
async_request_url => 'https://iaas.region.oraclecloud.com/20160918/workRequests',
wait_for_states => status_array,
timeout => 600
);
dbms_output.put_line('resp body: '||DBMS_CLOUD.get_response_text(l_resp));
dbms_output.put_line('resp headers: '||DBMS_CLOUD.get_response_headers(l_resp).to_clob);
END;
/
Where: region
is an endpoint region. See Identity and
Access Management (IAM) API reference in API Reference and Endpoints for more
information. For example, where region
is:
uk-london-1
.
The ocid
is the Oracle Cloud
Infrastructure resource identifier. See Resource Identifiers for more information.
Footnote Legend
Footnote 3: Support for Azure Cloud REST API calls is limited to the domain "blob.windows.net".