Export Data to Object Store as Text Using DBMS_CLOUD.EXPORT_DATA

Use DBMS_CLOUD.EXPORT_DATA to export data as text from an Autonomous Database to cloud Object Store. The export format options are CSV, JSON, Parquet, or XML.

Export JSON Data to Cloud Object Storage

Shows the steps to export table data from your Autonomous Database to Cloud Object Storage as JSON data by specifying a query.

This export method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3), an Azure service principal to access Azure BLOB storage, or a Google service account to access Google Cloud Platform (GCP) resources.

  1. Connect to your Autonomous Database instance.

    See Connect to Autonomous Database for more information.

  2. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.

    For example:

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        username => 'user1@example.com',
        password => 'password'
      );
    END;
    /

    The values you provide for username and password depend on the Cloud Object Storage service you are using.
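    If you use Oracle Cloud Infrastructure signing key (native) authentication instead of an auth token, DBMS_CLOUD.CREATE_CREDENTIAL takes key-based arguments. The following is a minimal sketch with placeholder OCID, private key, and fingerprint values:

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        -- Placeholder values; substitute your own user OCID, tenancy OCID,
        -- PEM private key content, and API key fingerprint
        user_ocid       => 'ocid1.user.oc1..exampleuniqueID',
        tenancy_ocid    => 'ocid1.tenancy.oc1..exampleuniqueID',
        private_key     => 'MIIEogIBAAKCAQEA...example-key-content...',
        fingerprint     => 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99');
    END;
    /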

  3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value json to export the results as JSON files on Cloud Object Storage.
    To generate the JSON output files there are two options for the file_uri_list parameter:
    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage.

    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage and include a file name prefix to use when generating the file names for the exported JSON.

    If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or XML) for details.

    For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix specified in file_uri_list:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query           => 'SELECT * FROM DEPT',
        format          => JSON_OBJECT('type' value 'json'));
    END;
    /

    In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

    When record delimiters include escape characters, such as \r\n or \t, enclose the record delimiter value in double quotes. For example, to use the record delimiter \r\n, enclose the value in double quotes: "\r\n".

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query => 'SELECT * FROM DEPT',
        format => JSON_OBJECT('type' value 'json', 'recorddelimiter' value '"\r\n"' format json));
    END;
    /

    In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
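    The query parameter can also be an advanced query, such as a join, and the output can be compressed. The following is a minimal sketch, assuming hypothetical DEPT and EMP tables and the same bucket as above, that exports a join query as gzip-compressed JSON:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_emp_export',
        -- An advanced query, such as a join, is allowed in the query parameter
        query           => 'SELECT d.deptno, d.dname, e.ename, e.job
                              FROM DEPT d JOIN EMP e ON e.deptno = d.deptno',
        -- gzip compression appends the .gz extension to the generated files
        format          => JSON_OBJECT('type' value 'json', 'compression' value 'gzip'));
    END;
    /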

    For detailed information about the parameters, see EXPORT_DATA Procedure.

    For detailed information about the available format parameters you can use with DBMS_CLOUD.EXPORT_DATA, see DBMS_CLOUD Package Format Options for EXPORT_DATA.

Notes for exporting with DBMS_CLOUD.EXPORT_DATA:

  • The query parameter that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.

  • Specify the format parameter with the compression option to compress the output files.

  • When you no longer need the files that you export, use the procedure DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the files.

Export Data as CSV to Cloud Object Storage

Shows the steps to export table data from your Autonomous Database to Cloud Object Storage as CSV data by specifying a query.

This export method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3), an Azure service principal to access Azure BLOB storage, or a Google service account to access Google Cloud Platform (GCP) resources.

  1. Connect to your Autonomous Database instance.

    See Connect to Autonomous Database for more information.

  2. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.

    For example:

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        username => 'user1@example.com',
        password => 'password'
      );
    END;
    /

    The values you provide for username and password depend on the Cloud Object Storage service you are using.

  3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value csv to export the results as CSV files on Cloud Object Storage.
    To generate the CSV output files there are two options for the file_uri_list parameter:
    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage.

    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage and include a file name prefix to use when generating the file names for the exported CSV files.

    If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or XML) for details.

    For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix specified in file_uri_list:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query           => 'SELECT * FROM DEPT',
        format          => JSON_OBJECT('type' value 'csv', 'delimiter' value '|', 'compression' value 'gzip'));
    END;
    /

    In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.
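    If you want a header row in the CSV output, the header format option is available. The following is a minimal sketch, assuming the header option described in DBMS_CLOUD Package Format Options for EXPORT_DATA, that writes the column names as the first line of each file:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query           => 'SELECT * FROM DEPT',
        -- The format parameter also accepts a JSON string;
        -- header writes the column names as the first line of each CSV file
        format          => '{"type" : "csv", "delimiter" : "|", "header" : true}');
    END;
    /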

    For detailed information about the parameters, see EXPORT_DATA Procedure.

    For detailed information about the available format parameters you can use with DBMS_CLOUD.EXPORT_DATA, see DBMS_CLOUD Package Format Options for EXPORT_DATA.

Notes for exporting with DBMS_CLOUD.EXPORT_DATA:

  • The query parameter that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.

  • Specify the format parameter with the compression option to compress the output files.

  • When you no longer need the files that you export, use the procedure DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the files.

Export Data as Parquet to Cloud Object Storage

Shows the steps to export table data from your Autonomous Database to Cloud Object Storage as Parquet data by specifying a query.

This export method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3), an Azure service principal to access Azure BLOB storage, or a Google service account to access Google Cloud Platform (GCP) resources.

  1. Connect to your Autonomous Database instance.

    See Connect to Autonomous Database for more information.

  2. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.

    For example:

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        username => 'user1@example.com',
        password => 'password'
      );
    END;
    /

    The values you provide for username and password depend on the Cloud Object Storage service you are using.

  3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value parquet to export the results as Parquet files on Cloud Object Storage.
    To generate the Parquet output files there are two options for the file_uri_list parameter:
    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage.

    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage and include a file name prefix to use when generating the file names for the exported Parquet files.

    If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or XML) for details.

    For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix specified in file_uri_list:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query           => 'SELECT * FROM DEPT',
        format          => JSON_OBJECT('type' value 'parquet', 'compression' value 'snappy'));
    END;
    /

    In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

    For detailed information about the parameters, see EXPORT_DATA Procedure.

    For detailed information about the available format parameters you can use with DBMS_CLOUD.EXPORT_DATA, see DBMS_CLOUD Package Format Options for EXPORT_DATA.

Notes for exporting with DBMS_CLOUD.EXPORT_DATA:

  • The query parameter that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.

  • Specify the format parameter with the compression option to compress the output files. The default compression for type parquet is snappy.

  • When you no longer need the files that you export, use the procedure DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the files.

  • See DBMS_CLOUD Package Oracle Data Type to Parquet Mapping for details on Oracle Type to Parquet Type mapping.

    The following Oracle data types are either not supported or have limitations when exporting to Parquet with DBMS_CLOUD.EXPORT_DATA (a workaround sketch follows this list):

    • BFILE: Not supported.

    • BLOB: Not supported.

    • DATE: Supported, with the limitation that only the day, month, and year are exported. Hours, minutes, and seconds are not supported.

      See DBMS_CLOUD Package Oracle Data Type to Parquet Mapping for details on NLS format limitations for exporting DATE to Parquet.

    • INTERVAL DAY TO SECOND: Supported; treated internally as a string.

    • INTERVAL YEAR TO MONTH: Supported; treated internally as a string.

    • LONG: Not supported.

    • LONG RAW: Not supported.

    • NUMBER: Supported, with the following limitations:

      • The maximum precision is 38, and the scale must be less than or equal to the precision.
      • If no precision and scale are specified for a NUMBER column, a default precision of 38 and scale of 20 is used.
      • Negative scale is not supported for NUMBER types.

    • Object types: Not supported.

    • TIMESTAMP: Supported, with the following limitations:

      • If multiple columns have different precisions, the highest precision is used.
      • For the TIMESTAMP WITH TIME ZONE data type, only the timestamp is used.

      See DBMS_CLOUD Package Oracle Data Type to Parquet Mapping for details on NLS format limitations for exporting TIMESTAMP to Parquet.
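    To work within these limitations, you can shape column data types in the query itself. The following is a minimal sketch, assuming a hypothetical EMP table, that casts an unconstrained NUMBER column to an explicit precision and scale and converts a DATE column to TIMESTAMP so the time portion is retained in the Parquet output:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/emp_export',
        -- Cast unconstrained NUMBER columns to avoid the default precision 38, scale 20,
        -- and convert DATE to TIMESTAMP to keep hours, minutes, and seconds
        query           => 'SELECT empno,
                                   ename,
                                   CAST(sal AS NUMBER(10,2)) AS sal,
                                   CAST(hiredate AS TIMESTAMP) AS hiredate
                              FROM EMP',
        format          => JSON_OBJECT('type' value 'parquet', 'compression' value 'snappy'));
    END;
    /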

Export Data as XML to Cloud Object Storage

Shows the steps to export table data from your Autonomous Database to Cloud Object Storage as XML data by specifying a query.

This export method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3), an Azure service principal to access Azure BLOB storage, or a Google service account to access Google Cloud Platform (GCP) resources.

  1. Connect to your Autonomous Database instance.

    See Connect to Autonomous Database for more information.

  2. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.

    For example:

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        username => 'user1@example.com',
        password => 'password'
      );
    END;
    /

    The values you provide for username and password depend on the Cloud Object Storage service you are using.

  3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value xml to export the results as XML files on Cloud Object Storage.
    To generate the XML output files there are two options for the file_uri_list parameter:
    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage.

    • Set the file_uri_list value to the URL for an existing bucket on your Cloud Object Storage and include a file name prefix to use when generating the file names for the exported XML files.

    If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or XML) for details.

    For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix specified in file_uri_list:

    BEGIN
      DBMS_CLOUD.EXPORT_DATA(
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
        query           => 'SELECT * FROM DEPT',
        format          => JSON_OBJECT('type' value 'xml', 'compression' value 'gzip'));
    END;
    /

    In this example, namespace-string is the Oracle Cloud Infrastructure object storage namespace and bucketname is the bucket name. See Understanding Object Storage Namespaces for more information.

    For detailed information about the parameters, see EXPORT_DATA Procedure.

    For detailed information about the available format parameters you can use with DBMS_CLOUD.EXPORT_DATA, see DBMS_CLOUD Package Format Options for EXPORT_DATA.

Notes for exporting with DBMS_CLOUD.EXPORT_DATA:

  • The query parameter that you supply can be an advanced query, if required, such as a query that includes joins or subqueries.

  • Specify the format parameter with the compression option to compress the output files.

  • When you no longer need the files that you export, use the procedure DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the files.

File Naming for Text Output (CSV, JSON, Parquet, or XML)

Describes the output file naming using DBMS_CLOUD.EXPORT_DATA with CSV, JSON, Parquet, or XML text file output.

DBMS_CLOUD.EXPORT_DATA performs the query specified with the query parameter and sends the results to text files either in the Cloud Object Store bucket or to a directory. The output format depends on the format parameter type you specify (one of CSV, JSON, Parquet, or XML).

To speed up the procedure and to generate the output as fast as possible, DBMS_CLOUD.EXPORT_DATA divides its work. This means that, depending on system resources, when you run DBMS_CLOUD.EXPORT_DATA the procedure creates multiple output files either in the Cloud Object Store bucket or in the directory.

The format for each generated file is:

[FileNamePrefix | client_info_module_action]_sequenceNum_timestamp.format_extension.[compression_extension]

  • FileNamePrefix: (optional) If a FileNamePrefix is supplied, DBMS_CLOUD.EXPORT_DATA uses the file name prefix to generate file names for the results. The FileNamePrefix is specified using the text supplied after the bucket or directory name in the file_uri_list parameter value.

    You cannot provide multiple values for the FileNamePrefix in the file_uri_list.

  • client_info_module_action: If a file name prefix is not supplied with the file_uri_list parameter, DBMS_CLOUD.EXPORT_DATA uses the combination of client_info, application module and action as the file name prefix (when this information is available). The procedure obtains these names from the application information for the database session that runs the query. See DBMS_APPLICATION_INFO for information on client_info, module name, and action name.

    If a file name prefix is not supplied with the file_uri_list and the database session attributes are not available, DBMS_CLOUD.EXPORT_DATA uses the file name prefix "data".

  • sequenceNum: The sequence number associated with the DBMS_CLOUD.EXPORT_DATA query. Depending on the query, the database service, and the number of ECPUs (OCPUs if your database uses OCPUs) there are one or more sequenceNums. Also, depending on the size of the results, there are one or more output files for each sequenceNum.

  • timestamp: Timestamp when the file is uploaded.

  • format_extension: The default value depends on the format type value:

    • CSV format: .csv
    • JSON format: .json
    • Parquet format: .parquet
    • XML format: .xml

    For more information, see the description for format option fileextension in DBMS_CLOUD Package Format Options for EXPORT_DATA.

  • compression_extension: When you include the format parameter with the compression option with the value gzip, this is "gz".

    When the format type is parquet, the compression value snappy is also supported and is the default.

For example, the file name prefix in the following DBMS_CLOUD.EXPORT_DATA procedure is specified in the file_uri_list parameter, as dept_export. The example generates the output to the provided Object Store in the specified format.

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
    query           => 'SELECT * FROM DEPT',
    format          => JSON_OBJECT('type' value 'json'));
END;
/

When you specify a file name prefix the generated output files include the file name prefix, similar to the following:

dept_export_1_20210809T173033Z.json
dept_export_2_20210809T173034Z.json
dept_export_3_20210809T173041Z.json
dept_export_4_20210809T173035Z.json

The number of generated output files depends on the size of the results, the database service, and the number of ECPUs (OCPUs if your database uses OCPUs) in the Autonomous Database instance.

In the following example the file_uri_list parameter does not include a file name prefix and the compression format option is supplied with the value gzip:

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/',
    query           => 'SELECT * FROM DEPT',
    format          => json_object('type' value 'json', 'compression' value 'gzip'));
END;
/

When a file name prefix is not included in the file_uri_list parameter, DBMS_CLOUD.EXPORT_DATA uses a file name prefix of the form client_info_module_action. In this example, the generated output files include the file name prefix that DBMS_CLOUD.EXPORT_DATA supplies, the files are compressed with gzip, and the file extension .gz is appended, as follows:

Client1_Module1_Action1_1_20210809T173033Z.json.gz
Client1_Module1_Action1_2_20210809T173034Z.json.gz
Client1_Module1_Action1_3_20210809T173041Z.json.gz
Client1_Module1_Action1_4_20210809T173035Z.json.gz

If the client_info_module_action session information is not available when you run DBMS_CLOUD.EXPORT_DATA, the file name prefix is set to data. For example:

data_1_20210809T173033Z.json.gz
data_2_20210809T173034Z.json.gz
data_3_20210809T173041Z.json.gz
data_4_20210809T173035Z.json.gz

For example, the file name prefix in the following DBMS_CLOUD.EXPORT_DATA procedure is specified in the file_uri_list parameter, as sales. The example generates the output to the provided directory in the specified format.

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    file_uri_list   => 'DATA_PUMP_DIR:sales.json',
    query           => 'SELECT * FROM SALES',
    format          => JSON_OBJECT('type' value 'json'));
END;
/

When you specify a file name prefix, the generated output files include the file name prefix, similar to the following:

sales_1_20230705T124523275915Z.json

Notes for file naming with DBMS_CLOUD.EXPORT_DATA:

  • DBMS_CLOUD.EXPORT_DATA does not create buckets or directories.

  • The number of files that DBMS_CLOUD.EXPORT_DATA generates is determined by the number of ECPUs (OCPUs if your database uses OCPUs), the database service, and the size of the result data.

  • The following applies when providing a directory object name in the file_uri_list parameter:

    • The provided directory must exist and you must have WRITE access to the directory.

    • The directory name is case-sensitive when enclosed in double quotes.

    • The credential_name parameter must not be provided.

  • For CSV, JSON, or XML output, by default a new output file is created when a generated file reaches 10MB of data. However, even when the result data is less than 10MB, you may have multiple output files, depending on the database service and the number of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous Database instance.

    The default output file chunk size is 10MB for CSV, JSON, or XML. You can change this value with the format parameter maxfilesize option, as shown in the sketch after this list. See DBMS_CLOUD Package Format Options for EXPORT_DATA for more information.

  • For Parquet output, each generated file is less than 128MB, and multiple output files may be generated. However, even when the result data is less than 128MB, you may have multiple output files, depending on the database service and the number of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous Database instance.

    The format parameter maxfilesize option does not apply for Parquet files.
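The following is a minimal sketch, assuming the maxfilesize format option described in DBMS_CLOUD Package Format Options for EXPORT_DATA, that raises the output file chunk size for CSV output to approximately 32MB:

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
    query           => 'SELECT * FROM DEPT',
    -- maxfilesize is specified in bytes; it applies to CSV, JSON, and XML output,
    -- but not to Parquet output
    format          => JSON_OBJECT('type' value 'csv', 'maxfilesize' value 33554432));
END;
/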