Loading Data

In the SQL page, you can load data from one or more local files into one or more tables.

The file formats that you can load are CSV, XLS, XLSX, TSV, TXT, XML, JSON, and AVRO. For XML, JSON, and AVRO files, see Format Specifications for JSON, AVRO, and XML Files.

Loading Data from a Local File to a New Table

To load data from a local file to a new table:

  1. You can start in one of the following ways:

    • In the Navigator tab, in the left pane, click the Object submenu icon, select Data Loading, and then select Upload Data into New Table.

      Figure 3-6 Upload Data to New Table Option

    • In the Navigator tab, drag and drop the local file into the left pane. When you drag a file into the pane, the following message is displayed: Drop the file here to start.

    The Upload Data into New Table dialog is displayed. A preview of the data is displayed in a grid format.

  2. Click the Show/Hide options icon to display options that you can modify for data preview:
    • Column names: Select Get from file to use the first row of the file as the column headers.

    • Encoding: An option to select the encoding type is visible when the loaded file is in plain text format (CSV, TSV, or TXT). The default encoding type is UTF-8.
    • Text enclosure and Field delimiter: These options are visible only when the selected file is in plain text format (CSV, TSV, or TXT). Select or enter the character used in the source file for text enclosure and field delimiter.

    • Rows to skip: Enter or use the up and down arrows to select the number of rows to skip.

    • Preview size: Enter or use the up and down arrows to select the number of rows to preview.

    • Limit rows to upload: If you select this option, you need to specify the rows to load. Use the up and down arrows to select the number of rows to load.

    To remove the options selected and the data preview, click Clear.

    After selecting the required options, click Apply, and then click Next.

  3. In Table Definition, do the following:

    • In the Table Name field, enter a name for the target table.

    • Select the check box at the beginning of a row to add the column to the target table.

    • Select or enter values for column attributes such as Column Name, Column Type, Precision, Scale, Default, Primary Key, and Nullable.

    • The Format Mask column appears for date, timestamp, and numeric types of data. For date and timestamp types, you must select a value from the drop-down list or type the value in the Format Mask field. For numeric types, the format mask is optional.

      For a date and timestamp column, you need to supply a compatible format mask that describes the data being uploaded. For example, if the date data looks like 12-FEB-2021 12.21.30, you need to supply a date mask of DD-MON-YYYY HH.MI.SS. The format mask is automatically determined based on the data in the file. You need to review the suggested format mask and if needed, modify it by entering the format directly into the target cell.

    Figure 3-7 Table Definition Step in Upload Data into New Table


    Click Next.
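To illustrate how a format mask describes the uploaded values, the sketch below translates an Oracle-style mask such as DD-MON-YYYY HH.MI.SS into a Python strptime pattern and checks a sample value against it. The token table covers only a few common mask elements and is an assumption for illustration, not the tool's implementation:

```python
from datetime import datetime

# Hypothetical mapping of a few common Oracle date-mask tokens to
# Python strptime directives; the real tool supports many more.
TOKENS = [
    ("YYYY", "%Y"), ("MON", "%b"), ("DD", "%d"),
    ("HH24", "%H"), ("HH", "%I"), ("MI", "%M"), ("SS", "%S"),
]

def mask_to_strptime(mask: str) -> str:
    """Translate an Oracle-style format mask into a strptime pattern."""
    pattern = mask
    for oracle_tok, py_tok in TOKENS:
        pattern = pattern.replace(oracle_tok, py_tok)
    return pattern

def matches_mask(value: str, mask: str) -> bool:
    """Return True if the sample value parses under the given mask."""
    try:
        datetime.strptime(value, mask_to_strptime(mask))
        return True
    except ValueError:
        return False

# The example from the text: 12-FEB-2021 12.21.30 under DD-MON-YYYY HH.MI.SS
print(matches_mask("12-FEB-2021 12.21.30", "DD-MON-YYYY HH.MI.SS"))
```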

  4. Review the generated DDL code based on the selections made in the previous screens. The mapping of the source to target columns is also displayed.

    Click Finish. After the data is successfully loaded, the new table is displayed in the Navigator tab.
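The kind of DDL generated in this step can be sketched as follows. This is a simplified illustration; the column list and the rendering logic are invented for the example, not the tool's actual generator:

```python
# Hypothetical column definitions as chosen in the Table Definition step:
# (name, type, nullable, primary_key)
columns = [
    ("ITEM_NUMBER", "NUMBER", False, True),
    ("DESCRIPTION", "VARCHAR2(4000)", True, False),
    ("QUANTITY", "NUMBER", True, False),
]

def build_ddl(table_name, columns):
    """Render a simple CREATE TABLE statement from column attributes."""
    lines = []
    pk = []
    for name, col_type, nullable, primary_key in columns:
        clause = f"    {name} {col_type}"
        if not nullable:
            clause += " NOT NULL"
        lines.append(clause)
        if primary_key:
            pk.append(name)
    if pk:
        lines.append(f"    PRIMARY KEY ({', '.join(pk)})")
    return "CREATE TABLE {} (\n{}\n)".format(table_name, ",\n".join(lines))

print(build_ddl("ORDERS", columns))
```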

  5. For a detailed report of the total rows loaded and failed, do one of the following:
    • Right-click the table in the Navigator tab, select Data Loading, and then select History. This displays the report for a specific table.

    • In the Navigator tab, click the Object submenu icon, select Data Loading, and then select History. This displays the report for all tables in the schema that is selected in the Navigator tab.

    • In the worksheet output pane, select the Data Loading tab. This displays the report for all visible tables (including tables from other schemas).

    A summary of the data loaded is displayed in the History dialog. If any data failed to load, you can view the number of failed rows in the Failed Rows column. Click the value in the Failed Rows column to open a dialog showing the failed rows.

    In the History dialog, you can also search for files loaded by schema name, table name, or file name. To remove the loaded files, click Remove all history.

    You can also load data from a file to a new table using the steps in Loading Data from Multiple Local Files into Multiple Tables.

Loading Data from a Local File to an Existing Table

To load data from a local file to an existing table:

  1. In the Navigator tab, in the left pane, right-click the table that you want to load data into, select Data Loading, and then select Upload Data.

    Figure 3-8 Upload Data Option for an Existing Table


    The Import data dialog is displayed.

  2. Drag and drop the file from your system into the dialog, or click Select Files to browse for the file and open it.

    A preview of the data is displayed in a grid format.

  3. Click the Show/Hide options icon to display options that you can modify for data preview:
    • Column names: Select Get from file to display column headers in the first row.
    • Encoding: An option to select the encoding type is visible when the loaded file is in plain text format (CSV, TSV, or TXT). The default encoding type is UTF-8.
    • Text enclosure and Field delimiter: These options are visible only when the selected file is in plain text format (CSV, TSV, or TXT). Select or enter the character used in the source file for text enclosure and field delimiter.
    • Rows to skip: Enter or use the up and down arrows to select the number of rows to skip.
    • Rows to load: Enter or use the up and down arrows to select the number of rows to load.
    • Preview size: Enter or use the up and down arrows to select the number of rows to preview.

    To remove the options selected and the data preview, click Clear.

    After selecting the required options, click Apply, and then click Next.

  4. In Data mapping, match the data in the file to the appropriate columns in the target table. By default, the matching is done using column name.

    To modify, click the Show/Hide options icon. In Match columns by:

    • Select Name to match columns based on the name of the column in the target table.
    • Select Position if you want to match columns based on the position of the column in the target table.
    • Select None to remove the current selections and to select the target column for each source column from the drop-down list.
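The three matching modes can be sketched roughly as follows. The function and column names are invented for illustration; the tool's real matching logic may differ:

```python
def match_columns(source_cols, target_cols, mode="name"):
    """Pair source columns with target columns by name or by position.

    Returns a dict of source column -> target column (or None when no
    match is found, leaving the choice to the user).
    """
    if mode == "name":
        # Case-insensitive lookup of target columns by name.
        targets = {c.upper(): c for c in target_cols}
        return {s: targets.get(s.upper()) for s in source_cols}
    if mode == "position":
        pairs = {}
        for i, s in enumerate(source_cols):
            pairs[s] = target_cols[i] if i < len(target_cols) else None
        return pairs
    # mode == "none": leave every mapping for the user to choose.
    return {s: None for s in source_cols}

source = ["item_number", "description", "qty"]
target = ["ITEM_NUMBER", "DESCRIPTION", "QUANTITY"]
print(match_columns(source, target, "name"))
print(match_columns(source, target, "position"))
```

Note that matching by name leaves "qty" unmapped, while matching by position pairs it with QUANTITY.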

    Note:

    Based on the data in the file, attempts are made to automatically retrieve the correct format mask of date-based columns. If this is incorrect, you can change the suggested format by entering it directly into the target cell.

    If there are any issues to resolve, a notification icon is displayed at the top right of the dialog.

    Click Next.

  5. A summary of the previous screens is displayed. Click Finish.

    The data will start uploading to the target table. After it is completed, an entry is added to the Log with the status of the operation. To view the Log, click the timestamp notification at the bottom of the page. If the operation is successful, a Data Import Completed notification is displayed.

  6. For a detailed summary of the upload process, right-click the table in the Navigator tab, select Data Loading, and then select History. A summary of the data loaded is displayed in the Data Loading History dialog.

    If any data failed to load, you can view the number of failed rows in the Failed Rows column. Click the value to open a dialog showing the failed rows.

    In the Data Loading History dialog, you can also search for files loaded by schema name, table name, or file name. To remove the loaded files, click Remove all history.

    You can also load data from a file to an existing table using the steps in Loading Data from Multiple Local Files into Multiple Tables.

Loading Data from Multiple Local Files into Multiple Tables

To load data into multiple tables from multiple files:
  1. In the SQL page, on the top right, click Data Load.

    The Data Loading slider appears.

  2. Click Add File to browse for one or more files and add them concurrently. Alternatively, you can drag and drop the files into the Data Loading slider.

    The added files are displayed as cards.

  3. To open a file, click the name of the file, or click the Actions icon at the top right corner of the card and select Details.
    In the Details page, a preview of the file is displayed. Click Next. To go to the previous page, click All Files.

    Note:

    A preview of the entire file is shown the first time the file is loaded. Subsequently, if any changes are made to the settings, the preview is limited to a maximum of 10 rows. For a description of the settings, see step 2 in Loading Data from a Local File to a New Table.
  4. In the Target Details page, the Actions field contains the following two options:
    • Create New Table to create a table. By default, this option is selected and the Table Name field is prefilled.
    • Append to Existing Table to add the file to an existing table. The data loader attempts to match the file name to an existing table name. Expand Table Rows Preview to preview the existing table.

    For a description of the mapping options, see step 3 in Loading Data from a Local File to a New Table and step 4 in Loading Data from a Local File to an Existing Table.

    Click All Files to return to the initial Data Loading page.

  5. To load data into a table, at the top right of the card, click the Actions icon and select Run.

    To load data into multiple tables simultaneously, click Run All.

    The files that are successfully uploaded are displayed with a green check mark icon and the status UPLOADED.

    Some other possible statuses are:

    • UPLOADED WITH WARNING in yellow.
    • NOT UPLOADED BECAUSE OF ERRORS in red.
    • UPLOADED WITH ERRORS where you can see the number of failed rows.
    • CONTAINS WARNING(S) indicates that something in the settings is wrong. This file will not be loaded until the warnings are resolved.
  6. After a file is loaded, go to the Details slider to see the third step named Results.

    The Results step shows the total number of rows loaded along with the number of failed rows.

    To view all previous files that have been loaded along with their loading statuses, click Open History. You can filter by schema and table name.

Format Specifications for JSON, AVRO, and XML Files

JSON, AVRO, and XML files must store data in a particular format to be loaded successfully into a table.

The format specifications are described in the following sections.

JSON and AVRO Files

For JSON and AVRO files, the conversion of primitive types to table columns is supported only for top-level data. Nested objects are saved as JSON strings in columns of type VARCHAR2 (JSON) or CLOB (JSON).

Note:

JSON check constraints are available only for Oracle Database 12c and later releases.

Consider the following JSON file as an example:

[
  {
    "ItemNumber": 1,
    "Description": "One Magic Christmas",
    "Part": {
      "UnitPrice": 19.95,
      "UPCCode": 13131092899
    },
    "Quantity": 9,
    "Total": 179.55
  },
  {
    "ItemNumber": 2,
    "Description": "Lethal Weapon",
    "Part": {
      "UnitPrice": 17.95,
      "UPCCode": 85391628927
    },
    "Quantity": 5,
    "Total": 89.75
  }
]

The AVRO schema for this file is:

{
     "type": "array",
     "items": {
          "type": "record",
          "fields": [
               {
                    "name": "ItemNumber",
                    "type": "int"
               },
               {
                    "name": "Description",
                    "type": "string"
               },
               {
                    "name": "Part",
                    "type": {
                         "type": "record",
                         "fields": [
                              {
                                   "name": "UnitPrice",
                                   "type": "float"
                              },
                              {
                                   "name": "UPCCode",
                                   "type": "float"
                              }
                         ]
                    }
               },
               {
                    "name": "Quantity",
                    "type": "int"
               },
               {
                    "name": "Total",
                    "type": "float"
               }
          ]
     }
}

When you load the JSON file using Upload Data in the SQL page, it is converted to a table with two rows. Part is a nested object that is assigned the column type CLOB (JSON) during data mapping.
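Using the example records above, the conversion can be sketched as follows: top-level primitives become column values, while the nested Part object is serialized to a JSON string. This is a simplified illustration, not the loader's implementation:

```python
import json

# The two records from the example JSON file above.
records = [
    {"ItemNumber": 1, "Description": "One Magic Christmas",
     "Part": {"UnitPrice": 19.95, "UPCCode": 13131092899},
     "Quantity": 9, "Total": 179.55},
    {"ItemNumber": 2, "Description": "Lethal Weapon",
     "Part": {"UnitPrice": 17.95, "UPCCode": 85391628927},
     "Quantity": 5, "Total": 89.75},
]

def to_row(record):
    """Keep top-level primitives as column values; serialize nested
    objects to JSON strings, as the data loader stores them."""
    row = {}
    for key, value in record.items():
        if isinstance(value, (dict, list)):
            row[key] = json.dumps(value)  # stored in a CLOB (JSON) column
        else:
            row[key] = value
    return row

rows = [to_row(r) for r in records]
print(rows[0]["Part"])
```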

XML Files

This section lists the specifications for loading XML files.

  • Attributes will have their own columns

    If the XML data is structured as:

    <?xml version="1.0"?>
    <catalog>
      <book id="bk102">
        <author>Ralls, Kim</author>
        <title>Midnight Rain</title>
        <genre>Fantasy</genre>
        <publisher>John Doe</publisher>
       </book>
    </catalog>

    The generated columns in the table are id, author, title, genre, and publisher.
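A rough sketch of this behavior, assuming a simple ElementTree-based walk (not the tool's actual parser):

```python
import xml.etree.ElementTree as ET

XML = """<?xml version="1.0"?>
<catalog>
  <book id="bk102">
    <author>Ralls, Kim</author>
    <title>Midnight Rain</title>
    <genre>Fantasy</genre>
    <publisher>John Doe</publisher>
  </book>
</catalog>"""

def rows_from_catalog(xml_text):
    """Turn each second-level element into a row; attributes and child
    tags alike become columns."""
    root = ET.fromstring(xml_text)
    rows = []
    for element in root:
        row = dict(element.attrib)  # attribute columns, e.g. id
        for child in element:
            row[child.tag] = (child.text or "").strip()
        rows.append(row)
    return rows

print(rows_from_catalog(XML))
```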

  • Two or more levels of nesting are needed to parse the data

    In the following example, the data that needs to be parsed will not be located because it has only one level of nesting (catalog).

    <?xml version="1.0"?>
    <catalog>
      <author>Ralls, Kim</author>
      <title>Midnight Rain</title>
      <genre>Fantasy</genre>
      <publisher>John Doe</publisher>
    </catalog>

    However, the following examples will work:

    <?xml version="1.0"?>
    <catalog>
      <book id="bk102">
        <author>Ralls, Kim</author>
        <title>Midnight Rain</title>
        <genre>Fantasy</genre>
        <publisher>John Doe</publisher>
      </book>
    </catalog>

    or

    <?xml version="1.0"?>
    <catalog>
      <bookstore>
        <book id="bk102">
          <author>Ralls, Kim</author>
          <title>Midnight Rain</title>
          <genre>Fantasy</genre>
          <publisher>John Doe</publisher>
        </book>
      </bookstore>
    </catalog>

    or

    <?xml version="1.0"?>
    <catalog>
      <bookstore>
        <shelf>
          <book id="bk102">
            <author>Ralls, Kim</author>
            <title>Midnight Rain</title>
            <genre>Fantasy</genre>
            <publisher>John Doe</publisher>
          </book>
        </shelf>
      </bookstore>
    </catalog>
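The nesting requirement can be expressed as a quick check: data rows are only found when some child of the root element has children of its own. A sketch, not the tool's implementation:

```python
import xml.etree.ElementTree as ET

def has_parseable_rows(xml_text):
    """Return True when at least one child of the root has children of
    its own, i.e. the data sits two or more levels below the root."""
    root = ET.fromstring(xml_text)
    return any(len(child) > 0 for child in root)

# Only one level of nesting: the data cannot be located.
flat = "<catalog><author>Ralls, Kim</author><title>Midnight Rain</title></catalog>"
# Two levels of nesting: the data can be located.
nested = "<catalog><book id='bk102'><author>Ralls, Kim</author></book></catalog>"
print(has_parseable_rows(flat), has_parseable_rows(nested))
```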
  • Special characters such as hyphen (-) and period (.) in tag names are replaced by underscore (_) in the column name

    XML tag names can contain hyphens and periods. Since the parser converts XML to JSON, these characters would become invalid object keys.

    <?xml version="1.0"?>
    <catalog>
      <book id="bk102">
        <author-name>Ralls, Kim</author-name>
        <title.1>Midnight Rain</title.1>
        <genre>Fantasy</genre>
        <publisher>John Doe</publisher>
       </book>
    </catalog>

    The generated columns are id, author_name, title_1, genre, and publisher.
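The renaming rule can be sketched as a one-line substitution (an illustration; the tool may sanitize additional characters):

```python
import re

def column_name(tag):
    """Replace hyphens and periods, which are not valid in the generated
    column names, with underscores."""
    return re.sub(r"[-.]", "_", tag)

print([column_name(t) for t in ["author-name", "title.1", "genre", "publisher"]])
```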

  • First-level text-only tags are ignored

    <?xml version="1.0"?>
    <catalog>
      <library> New Age Library </library>
      <book id="bk102">
        <author>Ralls, Kim</author>
        <title>Midnight Rain</title>
        <genre>Fantasy</genre>
        <publisher>John Doe</publisher>
      </book>
    </catalog>

    The <library> tag is ignored and only the content of the <book> tag is taken into account. The generated columns are id, author, title, genre, and publisher.
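A sketch of how text-only first-level tags are skipped (assumed logic for illustration):

```python
import xml.etree.ElementTree as ET

XML = """<catalog>
  <library> New Age Library </library>
  <book id="bk102"><author>Ralls, Kim</author><title>Midnight Rain</title></book>
</catalog>"""

def row_sources(xml_text):
    """First-level tags that contain only text (like <library>) carry no
    column structure, so they are skipped; only tags with children
    contribute rows."""
    root = ET.fromstring(xml_text)
    return [child.tag for child in root if len(child) > 0]

print(row_sources(XML))
```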

  • First-level repetitive data is interpreted as an array of values

    <?xml version="1.0" encoding="UTF-8"?>
    <items id="orders">
        <item_number>1</item_number>
        <description>One Magic Christmas</description>
        <part>
          <unit_price>19.95</unit_price>
          <upccode>13131092899</upccode>
        </part>
        <quantity>9</quantity>
        <total>179.55</total>
        <item_number>2</item_number>
        <description>Lethal Weapon</description>
        <part>
          <unit_price>17.95</unit_price>
          <upccode>85391628927</upccode>
        </part>
        <quantity>5</quantity>
        <total>89.75</total>
    </items>

    The generated columns include item_number, description, part, and so on, and each column has a single row whose value is an array: [1,2] for item_number, ["One Magic Christmas","Lethal Weapon"] for description, [{"unit_price":19.95,"upccode":13131092899},{"unit_price":17.95,"upccode":85391628927}] for part, and so on for the remaining columns.
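A sketch of how repeated first-level tags collapse into arrays, simplified to two repeating tags (assumed logic, not the tool's implementation):

```python
import xml.etree.ElementTree as ET

XML = """<items id="orders">
  <item_number>1</item_number>
  <description>One Magic Christmas</description>
  <item_number>2</item_number>
  <description>Lethal Weapon</description>
</items>"""

def collect_arrays(xml_text):
    """Group repeated first-level tags into one array of values per tag,
    yielding a single row."""
    root = ET.fromstring(xml_text)
    columns = {}
    for child in root:
        columns.setdefault(child.tag, []).append((child.text or "").strip())
    return columns

print(collect_arrays(XML))
```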

  • Tags containing values and attributes are converted to object

    <?xml version="1.0"?>
    <catalog>
      <book id="bk102">
        <author country="ca">Ralls, Kim</author>
        <title>Midnight Rain</title>
        <genre>Fantasy</genre>
        <publisher>John Doe</publisher>
      </book>
    </catalog>

    The <author> tag is converted to a column and will have an object as value that is structured as:

    {
      "_": "Ralls, Kim",
      "country": "ca"
    }

    Note that the text value of the tag is stored under the underscore ("_") key, and each attribute is stored as "attribute_name": "attribute_value".
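A sketch of this conversion rule (assumed logic for illustration):

```python
import xml.etree.ElementTree as ET

XML = """<catalog>
  <book id="bk102">
    <author country="ca">Ralls, Kim</author>
    <title>Midnight Rain</title>
  </book>
</catalog>"""

def cell_value(element):
    """A tag with both text and attributes becomes an object whose text
    is stored under the "_" key, alongside each attribute."""
    if element.attrib:
        return {"_": (element.text or "").strip(), **element.attrib}
    return (element.text or "").strip()

book = ET.fromstring(XML).find("book")
print(cell_value(book.find("author")))
print(cell_value(book.find("title")))
```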