A Packages DBMS_CLOUD and DBMS_CLOUD_ADMIN

This section provides information about the DBMS_CLOUD and DBMS_CLOUD_ADMIN packages you use with Oracle Autonomous Data Warehouse.

Summary of DBMS_CLOUD Subprograms

This section covers the DBMS_CLOUD subprograms provided with Autonomous Data Warehouse.

COPY_DATA Procedure

This procedure loads data into existing Autonomous Data Warehouse tables from files in the Cloud.

Syntax

DBMS_CLOUD.COPY_DATA (
	table_name      IN VARCHAR2,
	credential_name IN VARCHAR2,
	file_uri_list   IN CLOB,
	schema_name     IN VARCHAR2 DEFAULT NULL,
	field_list      IN CLOB DEFAULT NULL,
	format          IN CLOB DEFAULT NULL);

Parameters

Parameter Description

table_name

The name of the target table on the Autonomous Data Warehouse database. The target table needs to be created before you run COPY_DATA.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters and the character "?" as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats.

schema_name

The name of the schema where the target table resides. The default value is NULL, meaning the target table is in the same schema as the user running the procedure.

field_list

Identifies the fields in the source files and their data types. The default value is NULL, meaning the fields and their data types are determined by the target table definition. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

For an example using field_list, see CREATE_EXTERNAL_TABLE Procedure.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

For parquet file format options, see COPY_DATA Procedure for Parquet Files.
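Example

The following is a sketch of a typical call; the table name SALES, the credential name DEF_CRED_NAME, and the Object Storage namespace and bucket in the URI are placeholders, not values from this document:

```sql
begin
   -- Load all matching CSV files into the preexisting SALES table,
   -- skipping the header row in each file.
   DBMS_CLOUD.COPY_DATA(
      table_name      => 'SALES',
      credential_name => 'DEF_CRED_NAME',
      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucketname/o/sales*.csv',
      format          => json_object('type' value 'csv', 'skipheaders' value '1')
   );
end;
/
```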

COPY_DATA Procedure for Parquet Files

This procedure with the format parameter type set to the value parquet loads data into existing Autonomous Data Warehouse tables from parquet files in the Cloud. As with text files, the data is copied from the source parquet file into the preexisting internal table.

Syntax

DBMS_CLOUD.COPY_DATA (
	table_name      IN VARCHAR2,
	credential_name IN VARCHAR2,
	file_uri_list   IN CLOB,
	schema_name     IN VARCHAR2 DEFAULT NULL,
	field_list      IN CLOB DEFAULT NULL,
	format          IN CLOB DEFAULT NULL);

Parameters

Parameter Description

table_name

The name of the target table on the Autonomous Data Warehouse database. The target table needs to be created before you run COPY_DATA.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters and the character "?" as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats.

schema_name

The name of the schema where the target table resides. The default value is NULL, meaning the target table is in the same schema as the user running the procedure.

field_list

Ignored for parquet files.

The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. See DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details.

format

The options describing the format of the source files. For parquet files, only two format options are supported; see Table A-1.

Usage Notes

As with other data files, parquet data loads generate logs that are viewable in the tables dba_load_operations and user_load_operations. Each load operation adds a record to these tables that indicates the table containing the logs.

The log table provides summary information about the load.

Note:

  • For parquet files, when the format parameter type is set to the value parquet, the BADFILE_TABLE table is always empty.
  • For parquet files, PRIMARY KEY constraint errors throw an ORA error.
  • If data for a column encounters a conversion error, for example, the target column is not large enough to hold the converted value, the value for the column is set to NULL. This does not produce a rejected record.
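For example, a parquet load and a follow-up check of the load logs might look like the following sketch; the table name SALES, the credential DEF_CRED_NAME, and the URI are placeholders:

```sql
begin
   -- Copy a parquet file into the preexisting SALES table.
   -- Source fields match the table columns by name.
   DBMS_CLOUD.COPY_DATA(
      table_name      => 'SALES',
      credential_name => 'DEF_CRED_NAME',
      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucketname/o/sales_extended.parquet',
      format          => '{"type":"parquet"}'
   );
end;
/

-- View the summary records for your load operations
SELECT * FROM user_load_operations;
```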

CREATE_CREDENTIAL Procedure

This procedure stores Cloud Object Storage credentials in the Autonomous Data Warehouse database.

Use stored credentials for data loading or for querying external data residing in the Cloud, where you use DBMS_CLOUD procedures with a credential_name parameter.

Syntax

DBMS_CLOUD.CREATE_CREDENTIAL (
	credential_name IN VARCHAR2,
	username        IN VARCHAR2,
	password        IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to be stored.

username

The username and password arguments together specify your object storage credentials. See below for what to specify for the username and password for different object stores.

password

The username and password arguments together specify your object storage credentials.

Usage Notes

  • This operation stores the credentials in the database in an encrypted format.

  • You can see the credentials in your schema by querying the user_credentials table.

  • The ADMIN user can see all the credentials by querying the dba_credentials table.

  • You only need to create credentials once unless your object store credentials change. Once you store the credentials you can then use the same credential name for DBMS_CLOUD procedures that require a credential_name parameter.
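For example, the following sketch stores Oracle Cloud Infrastructure credentials; the credential name, user name, and auth token value shown are placeholders:

```sql
begin
   -- Store object storage credentials under the name DEF_CRED_NAME.
   -- For OCI Object Storage, password is an auth token, not the account password.
   DBMS_CLOUD.CREATE_CREDENTIAL(
      credential_name => 'DEF_CRED_NAME',
      username        => 'adwc_user@example.com',
      password        => 'auth_token_value'
   );
end;
/
```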

Oracle Cloud Infrastructure Object Storage Credentials

If your source files reside in Oracle Cloud Infrastructure Object Storage the username is your Oracle Cloud Infrastructure user name. The password is your Oracle Cloud Infrastructure auth token. See Working with Auth Tokens.

Oracle Cloud Infrastructure Object Storage Classic Credentials

If your source files reside in Oracle Cloud Infrastructure Object Storage Classic, the username is your Oracle Cloud Infrastructure Classic user name and the password is your Oracle Cloud Infrastructure Classic password.

Amazon S3 Credentials

If your source files reside in Amazon S3 the username is your AWS access key ID and the password is your AWS secret access key. See AWS Identity and Access Management.

Azure Blob Storage Credentials

If your source files reside in Azure Blob Storage, the username is your Azure storage account name and the password is an Azure storage account access key. See About Azure storage accounts.

CREATE_EXTERNAL_TABLE Procedure

This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
	table_name      IN VARCHAR2,
	credential_name IN VARCHAR2,
	file_uri_list   IN CLOB,
	column_list     IN CLOB,
	field_list      IN CLOB DEFAULT NULL,
	format          IN CLOB DEFAULT NULL);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters and the character "?" as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats.

column_list

Comma-delimited list of column names and data types for the external table.

field_list

Identifies the fields in the source files and their data types. The default value is NULL, meaning the fields and their data types are determined by the column_list parameter. This argument's syntax is the same as the field_list clause in regular Oracle external tables. For more information about field_list see Oracle® Database Utilities.

format

The options describing the format of the source files. For the list of the options and how to specify the values see DBMS_CLOUD Package Format Options.

For parquet format files, see CREATE_EXTERNAL_TABLE Procedure for Parquet Files.

Example

begin
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
      table_name      => 'WEATHER_REPORT_DOUBLE_DATE',
      credential_name => 'OBJ_STORE_CRED',
      file_uri_list   => '&base_URL/Charlotte_NC_Weather_History_Double_Dates.csv',
      format          => json_object('type' value 'csv', 'skipheaders' value '1'),
      field_list      => 'REPORT_DATE DATE ''mm/dd/yy'',
                          REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
                          ACTUAL_MEAN_TEMP,
                          ACTUAL_MIN_TEMP,
                          ACTUAL_MAX_TEMP,
                          AVERAGE_MIN_TEMP,
                          AVERAGE_MAX_TEMP,
                          AVERAGE_PRECIPITATION',
      column_list     => 'REPORT_DATE DATE,
                          REPORT_DATE_COPY DATE,
                          ACTUAL_MEAN_TEMP NUMBER,
                          ACTUAL_MIN_TEMP NUMBER,
                          ACTUAL_MAX_TEMP NUMBER,
                          AVERAGE_MIN_TEMP NUMBER,
                          AVERAGE_MAX_TEMP NUMBER,
                          AVERAGE_PRECIPITATION NUMBER');
end;
/

You can then query the external table, for example:

SELECT * FROM WEATHER_REPORT_DOUBLE_DATE
   WHERE actual_mean_temp > 69 AND actual_mean_temp < 74;

CREATE_EXTERNAL_TABLE Procedure for Parquet Files

This procedure with the format parameter type set to the value parquet creates an external table on Parquet format files in the Cloud. This allows you to run queries on external data from Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
	table_name      IN VARCHAR2,
	credential_name IN VARCHAR2,
	file_uri_list   IN CLOB,
	column_list     IN CLOB,
	field_list      IN CLOB DEFAULT NULL,
	format          IN CLOB DEFAULT NULL);

Parameters

Parameter Description

table_name

The name of the external table.

credential_name

The name of the credential to access the Cloud Object Storage.

file_uri_list

Comma-delimited list of source file URIs. You can use wildcards in the file names in your URIs. The character "*" can be used as the wildcard for multiple characters and the character "?" as the wildcard for a single character.

The format of the URIs depends on the Cloud Object Storage service you are using; for details, see DBMS_CLOUD Package File URI Formats.

column_list

(Optional) When this parameter is specified, it overrides the format schema option, which specifies that the schema (the columns and their data types) is derived automatically. See the format parameter for details.

When the column_list is specified for a parquet source, the column names must match those columns found in the file. Oracle data types must map appropriately to the parquet data types. See DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details.

field_list

Ignored for parquet files.

The fields in the source match the external table columns by name. Source data types are converted to the external table column data type. See DBMS_CLOUD Package Parquet to Oracle Data Type Mapping for details.

format

For parquet, there are only two supported parameters. See Table A-1 for details.

Table A-1 DBMS_CLOUD Parquet File Format Options

Format Option Description Syntax

type

Specifies the file type.

type : parquet

schema

When schema is set to first or all, the external table columns and data types are automatically derived from the parquet file. The column names will match those found in parquet. The data types are converted from parquet to Oracle. All columns are added to the table.

Specifies to use the metadata from the first parquet file or from all parquet files in the file_uri_list to auto generate the columns and their data types. Use first if all of the files have the same schema. Use all (slower) if the files may have different schemas.

Default: If column_list is specified, then the schema value, if specified, is ignored. If column_list is not specified, the schema default value is first.

schema : first | all

Examples

format => '{"type":"parquet", "schema": "all"}'
format => json_object('type' value 'parquet', 'schema' value 'first')
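Building on these format values, a complete hypothetical call might look like this; the table name SALES_EXT, the credential DEF_CRED_NAME, and the URI are placeholders:

```sql
begin
   -- Create an external table over all parquet files in the bucket,
   -- deriving the columns from the metadata of the first file.
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
      table_name      => 'SALES_EXT',
      credential_name => 'DEF_CRED_NAME',
      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucketname/o/*.parquet',
      format          => json_object('type' value 'parquet', 'schema' value 'first')
   );
end;
/
```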

DELETE_FILE Procedure

This procedure removes the specified file from the specified directory on Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.DELETE_FILE (
	directory_name IN VARCHAR2,
	file_name      IN VARCHAR2);

Parameters

Parameter Description

directory_name

The name of the directory on the Autonomous Data Warehouse instance. The only allowed value is data_pump_dir.

file_name

The name of the file to be removed.

Note:

To run DBMS_CLOUD.DELETE_FILE with a user other than ADMIN, you need to grant write privileges on the directory data_pump_dir to that user. For example, run the following command as ADMIN to grant write privileges to adwc_user:

GRANT WRITE ON DIRECTORY data_pump_dir TO adwc_user;
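For example, the following sketch removes a file from data_pump_dir; the file name exp1.dmp is a placeholder:

```sql
begin
   -- Remove a file from the database directory DATA_PUMP_DIR.
   DBMS_CLOUD.DELETE_FILE(
      directory_name => 'DATA_PUMP_DIR',
      file_name      => 'exp1.dmp'
   );
end;
/
```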

DROP_CREDENTIAL Procedure

This procedure removes an existing credential from Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.DROP_CREDENTIAL (
   credential_name IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to be removed.

Example

For example, the following sketch removes a credential; the credential name DEF_CRED_NAME is a placeholder:

begin
   DBMS_CLOUD.DROP_CREDENTIAL('DEF_CRED_NAME');
end;
/

LIST_FILES Function

This function lists the files and their sizes in the specified directory on Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.LIST_FILES (
	directory_name IN VARCHAR2);

Parameters

Parameter Description

directory_name

The name of the directory on the Autonomous Data Warehouse instance. The only allowed value is data_pump_dir.

This is a pipelined function that returns a row for each file. For example, use the following query to call this function:

SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');

Note:

To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN, you need to grant read privileges on the directory data_pump_dir to that user. For example, run the following command as ADMIN to grant read privileges to adwc_user:

GRANT READ ON DIRECTORY data_pump_dir TO adwc_user;

PUT_OBJECT Procedure

This procedure copies a file from Autonomous Data Warehouse to the Cloud Object Storage. The maximum file size allowed in this procedure is 5 gigabytes (GB).

Syntax

DBMS_CLOUD.PUT_OBJECT (
	credential_name IN VARCHAR2,
	object_uri      IN VARCHAR2,
	directory_name  IN VARCHAR2,
	file_name       IN VARCHAR2);

Parameters

Parameter Description

credential_name

The name of the credential to access the Cloud Object Storage.

object_uri

Object or file URI. The format of the URI depends on the Cloud Object Storage service you are using, for details see DBMS_CLOUD Package File URI Formats.

directory_name

The name of the directory on the Autonomous Data Warehouse. The only allowed value is data_pump_dir.


file_name

The name of the file in the directory data_pump_dir.


Note:

To run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN, you need to grant read privileges on the directory data_pump_dir to that user. For example, run the following command as ADMIN to grant read privileges to adwc_user:

GRANT READ ON DIRECTORY data_pump_dir TO adwc_user;
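For example, the following sketch uploads the file exp1.dmp from data_pump_dir to Object Storage; the credential DEF_CRED_NAME, the file name, and the URI are placeholders:

```sql
begin
   -- Copy a file from the database directory DATA_PUMP_DIR
   -- to the named object in Cloud Object Storage.
   DBMS_CLOUD.PUT_OBJECT(
      credential_name => 'DEF_CRED_NAME',
      object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucketname/o/exp1.dmp',
      directory_name  => 'DATA_PUMP_DIR',
      file_name       => 'exp1.dmp'
   );
end;
/
```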

VALIDATE_EXTERNAL_TABLE Procedure

This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Data Warehouse.

Syntax

DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
	table_name    IN VARCHAR2,
	schema_name   IN VARCHAR2 DEFAULT NULL,
	rowcount      IN NUMBER DEFAULT NULL,
	stop_on_error IN BOOLEAN DEFAULT TRUE);

Parameters

Parameter Description

table_name

The name of the external table.

schema_name

The name of the schema where the external table resides. The default value is NULL, meaning the external table is in the same schema as the user running the procedure.

rowcount

Number of rows to be scanned. The default value is NULL, meaning all the rows in the source files are scanned.

stop_on_error

Determines if the validation should stop when a row is rejected. The default value is TRUE, meaning the validation stops at the first rejected row. Setting the value to FALSE specifies that the validation does not stop at the first rejected row and validates all rows up to the value specified for the rowcount parameter.

If the external table refers to parquet file(s), the validation stops at the first rejected row. When the external table specifies the format parameter type set to the value parquet, the parameter stop_on_error is effectively always TRUE. Thus, the badfile table will always be empty for an external table referring to parquet file(s).
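Example

The following sketch validates up to 100 rows of a hypothetical external table SALES_EXT without stopping at the first rejected row:

```sql
begin
   -- Scan the first 100 rows of the source files and record
   -- every rejected row rather than stopping at the first one.
   DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
      table_name    => 'SALES_EXT',
      rowcount      => 100,
      stop_on_error => FALSE
   );
end;
/
```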

DBMS_CLOUD Package File URI Formats

Describes the format of the source file URIs in operations with DBMS_CLOUD. The format depends on the object storage service you are using.

Oracle Cloud Infrastructure Object Storage URI Format

If your source files reside in Oracle Cloud Infrastructure Object Storage, you can use the Oracle Cloud Infrastructure native URIs or the Swift URIs. The format for files can be either:

https://objectstorage.region.oraclecloud.com/n/object_storage_namespace/b/bucket/o/filename
or
https://swiftobjectstorage.region.oraclecloud.com/v1/object_storage_namespace/bucket/filename

For example, the Native URI for the file channels.txt in the adwc_user bucket in the adwc object storage namespace in the Phoenix data center:

https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/channels.txt

See Viewing the Oracle Cloud Infrastructure Object Storage Native URI for an example of how you find and copy the Oracle Cloud Infrastructure native URI.

For example, the Swift URI for the file channels.txt in the adwc_user bucket in the adwc object storage namespace in the Phoenix data center:

https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/adwc/adwc_user/channels.txt

Note:

The source files need to be stored in an Object Storage tier bucket. Autonomous Data Warehouse does not support buckets in the Archive Storage tier. See Overview of Object Storage for more information.

Oracle Cloud Infrastructure Object Storage Classic URI Format

If your source files reside in Oracle Cloud Infrastructure Object Storage Classic, see the REST page for a description of the URI format for accessing your files: About REST URLs for Oracle Cloud Infrastructure Object Storage Classic Resources.

Amazon S3 URI Format

If your source files reside in Amazon S3, see the following for a description of the URI format for accessing your files: Accessing a Bucket.

For example, the following refers to the file channels.txt in the adwc bucket in the us-west-2 region:

https://s3-us-west-2.amazonaws.com/adwc/channels.txt 

Azure Blob Storage URI Format

If your source files reside in Azure Blob Storage, see the following for a description of the URI format for accessing your files: Resource URI Syntax.

For example, the following refers to the file channels.txt in the adwc container in the storage account adwc_user:

https://adwc_user.blob.core.windows.net/adwc/channels.txt

Viewing the Oracle Cloud Infrastructure Object Storage Native URI

If you are using Oracle Cloud Infrastructure Object Storage you can find the URI for your files from the Oracle Cloud Infrastructure console. Use this URI with DBMS_CLOUD procedures that take a file_uri_list parameter, such as DBMS_CLOUD.COPY_DATA.

  1. Sign in to your Oracle Cloud Account at cloud.oracle.com.
  2. Open the Oracle Cloud Infrastructure console by clicking the navigation icon next to Oracle Cloud.
  3. From the Oracle Cloud Infrastructure left navigation list, click Object Storage → Object Storage.
  4. On the Object Storage Buckets page, select your bucket. For example, select the credit bucket.
  5. On the Bucket Details page, under Objects, click … → Details to show the object details.
  6. Copy the path in the field URL Path (URI).

DBMS_CLOUD Package Format Options

The format argument in DBMS_CLOUD specifies the format of source files.

The two ways to specify the format argument are:

format => '{"format_option" : "format_value"}'

And:

format => json_object('format_option' value 'format_value')

For example:

format => json_object('type' VALUE 'CSV')

To specify multiple format options, separate the values with a ",".

For example:

format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true')

Note:

For parquet format options, see Table A-1.

Table A-2 DBMS_CLOUD Format Options

Format Option Description Syntax Default

recorddelimiter

Specifies the record delimiter

recorddelimiter : character

newline

The default works with source files created on Unix/Linux platforms. For source files created on Windows with newline as the record delimiter use "\r\n" for this format option. For example: format => json_object('recorddelimiter' value '''\r\n''')

delimiter

Specifies the field delimiter

delimiter : character

| (pipe character)

trimspaces

Specifies how the leading and trailing spaces of the fields are trimmed.

See the description of trim_spec in Oracle Database Utilities, 18c

trimspaces : rtrim| ltrim| notrim| lrtrim| ldrtrim

notrim

characterset

Specifies the characterset of source files

characterset : string

Database characterset

skipheaders

Specifies how many rows should be skipped from the start of the file.

skipheaders : number

0 if not specified, 1 if specified without a value

escape

The character "\" is used as the escape character when specified.

escape : true

False

ignoreblanklines

Blank lines are ignored when set to true.

ignoreblanklines : true

False

type

Specifies the source file type.

Cannot be specified together with delimiter or quote.

See the description of CSV in Oracle Database Utilities, 18c

type : csv|csv with embedded|csv without embedded

csv is the same as csv without embedded.

Null

quote

Specifies the quote character for the fields, the quote characters are removed during loading when specified.

quote : character

Null meaning no quote

ignoremissingcolumns

If there are more columns in the field_list than there are in the source files, the extra columns are stored as null.

ignoremissingcolumns : true

False

truncatecol

If the data in the file is too long for a field, then this option will truncate the value of the field rather than reject the row.

truncatecol : true

False

removequotes

Removes any quotes that are around any field in the source file.

removequotes: true

False

blankasnull

When set to true, loads fields consisting of spaces as null.

blankasnull : true

False

dateformat

Specifies the date format in the source file. The format option AUTO searches for the following formats:

J 
MM-DD-YYYYBC 
MM-DD-YYYY 
YYYYMMDD HHMISS 
YYMMDD HHMISS 
YYYY.DDD 
YYYY-MM-DD

dateformat : string

Database date format

timestampformat

Specifies the timestamp format in the source file. The format option AUTO searches for the following formats:

YYYY-MM-DD HH:MI:SS.FF 
YYYY-MM-DD HH:MI:SS.FF3 
MM/DD/YYYY HH:MI:SS.FF3

timestampformat : string

Database timestamp format

timestamptzformat

Specifies the timestamp with timezone format in the source file. The format option AUTO searches for the following formats:

DD Mon YYYY HH:MI:SS.FF TZR 
MM/DD/YYYY HH:MI:SS.FF TZR 
YYYY-MM-DD HH:MI:SS+/-TZR 
YYYY-MM-DD HH:MI:SS.FF3 
DD.MM.YYYY HH:MI:SS TZR

timestamptzformat: string

Database timestamp with timezone format

timestampltzformat

Specifies the timestamp with local timezone format in the source file. The format option AUTO searches for the following formats:

DD Mon YYYY HH:MI:SS.FF TZR 
MM/DD/YYYY HH:MI:SS.FF TZR 
YYYY-MM-DD HH:MI:SS+/-TZR 
YYYY-MM-DD HH:MI:SS.FF3 
DD.MM.YYYY HH:MI:SS TZR

timestampltzformat : string

Database timestamp with local timezone format

conversionerrors

Specifies how rows with data type conversion errors are handled: either the affected columns are stored as null (store_null) or the row is rejected (reject_record).

conversionerrors : reject_record | store_null

reject_record

rejectlimit

The operation will error out after the specified number of rows is rejected.

rejectlimit : number

0

compression

Specifies the compression type of the source file

compression: auto|gzip|zlib|bzip2

Specifying the value auto checks for the compression types: gzip, zlib, bzip2.

Null, meaning no compression.

DBMS_CLOUD Package Parquet to Oracle Data Type Mapping

External tables support scalar data types only. This section describes the mapping of Parquet data types to Oracle data types.

Table A-3 Parquet Data Type Support

Parquet Type Oracle Type
UINT_64 NUMBER(20)
INT_64 NUMBER(19)
UINT_32 NUMBER(10)
INT_32 NUMBER(10)
UINT_16 NUMBER(5)
INT_16 NUMBER(5)
UINT_8 NUMBER(3)
INT_8 NUMBER(3)
BOOL NUMBER(1)
UTF8 BYTE_ARRAY VARCHAR2(4000 BYTE)
FLT BINARY_FLOAT
DBL BINARY_DOUBLE
DECIMAL(p) NUMBER(p)
DECIMAL(p,s) NUMBER(p,s)
DATE DATE
STRING VARCHAR2(4000)
TIME_MILLIS VARCHAR2(20 BYTE)
TIME_MILLIS_UTC VARCHAR2(20 BYTE)
TIME_MICROS VARCHAR2(20 BYTE)
TIME_MICROS_UTC VARCHAR2(20 BYTE)
TIMESTAMP_MILLIS TIMESTAMP(3)
TIMESTAMP_MILLIS_UTC TIMESTAMP(3)
TIMESTAMP_MICROS TIMESTAMP(6)
TIMESTAMP_MICROS_UTC TIMESTAMP(6)
TIMESTAMP_NANOS TIMESTAMP(9)
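As an illustration of this mapping, a parquet file whose fields are INT_32, DECIMAL(10,2), and DATE could be exposed with an explicit column_list as in the following sketch; the table, credential, column names, and URI are placeholders:

```sql
begin
   -- INT_32 maps to NUMBER(10), DECIMAL(10,2) to NUMBER(10,2),
   -- and DATE to DATE, per Table A-3. Column names must match
   -- the field names in the parquet file.
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
      table_name      => 'SALES_EXT',
      credential_name => 'DEF_CRED_NAME',
      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucketname/o/sales.parquet',
      column_list     => 'PROD_ID   NUMBER(10),
                          AMOUNT    NUMBER(10,2),
                          SALE_DATE DATE',
      format          => json_object('type' value 'parquet')
   );
end;
/
```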

Summary of DBMS_CLOUD_ADMIN Subprograms

This section covers the DBMS_CLOUD_ADMIN subprograms provided with Autonomous Data Warehouse.

DISABLE_APP_CONT Procedure

This procedure disables database application continuity for the session associated with the specified service name in Autonomous Data Warehouse.

Syntax

DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
	service_name IN VARCHAR2);

Parameters

Parameter Description

service_name

The service_name for the Autonomous Data Warehouse service. The service name consists of three parts:

  • database_name: the name of your database.
  • priority: one of _high | _medium | _low.
  • adwc.oraclecloud.com

You can find the service names in the tnsnames.ora file in the wallet.zip that you download from Autonomous Data Warehouse for your connection.

Usage Notes

See Overview of Application Continuity for more information on Application Continuity.

Example

BEGIN
	DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
		service_name => 'nvthp2ht_adb1_high.adwc.oraclecloud.com'
	);
END;
/

Verify the value as follows:

SELECT name, failover_restore, drain_timeout FROM v$services;

NAME                                                    FAILOVER_RESTORE  DRAIN_TIMEOUT
------------------------------------------------------- ----------------- -------------
nvthp2ht_adb1_high.adwc.oraclecloud.com                 NONE                          0

ENABLE_APP_CONT Procedure

This procedure enables database application continuity for the session associated with the specified service name in Autonomous Data Warehouse.

Syntax

DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
	service_name IN VARCHAR2);

Parameters

Parameter Description

service_name

The service_name for the Autonomous Data Warehouse service. The service name consists of three parts:

  • database_name: the name of your database.
  • priority: one of _high | _medium | _low.
  • adwc.oraclecloud.com

You can find the service names in the tnsnames.ora file in the wallet.zip that you download from Autonomous Data Warehouse for your connection.

Usage Notes

See Overview of Application Continuity for more information on Application Continuity.

Example

BEGIN
	DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
		service_name => 'nvthp2ht_adb1_high.adwc.oraclecloud.com'
	);
END;
/

Verify the value as follows:

SELECT name, failover_restore, drain_timeout FROM v$services;

NAME                                                    FAILOVER_RESTORE  DRAIN_TIMEOUT
------------------------------------------------------- ----------------- -------------
nvthp2ht_adb1_high.adwc.oraclecloud.com                 LEVEL1                      300

GRANT_TABLESPACE_QUOTA Procedure

This procedure grants a storage quota to a specified database user. When a tablespace quota is granted to a user, Autonomous Data Warehouse limits the storage space used by that user to the specified quota.

Syntax

DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(
	username         IN VARCHAR2,
	tablespace_quota IN VARCHAR2);

Parameters

Parameter Description

username

The database username to grant the tablespace quota to.

tablespace_quota

The quota to assign to the specified user. This value is specified in bytes. For kilobytes, megabytes, gigabytes, and terabytes, you can specify K, M, G, or T after the numeric value, respectively.

Usage Notes

If you want to allow the user to work without a quota, you can use the GRANT UNLIMITED TABLESPACE command to grant that privilege to a user. For example, the following command grants the unlimited tablespace privilege to the user ADBUSER.

GRANT UNLIMITED TABLESPACE TO adbuser;

See System Privileges (Organized by the Database Object Operated Upon - TABLESPACE) for more information.

See Manage User Privileges with Autonomous Data Warehouse for information on privileges granted with the role DWROLE.

Example


BEGIN
	DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(
           username => 'ADBUSER', 
           tablespace_quota => '10G'
	);
END;
/