26.3 Managing Applications with Legacy Data Loading Capability

Applications with legacy data loading capability enable end users to dynamically import data into a table within any schema to which they have access.

26.3.1 Comparing Legacy Data Loading and New Data Loading

Learn about the differences between legacy data loading and new data loading.

Oracle APEX includes two types of data loading: new data loading and legacy data loading. New data loading has many benefits over legacy data loading.

Benefits of new data loading include:

  • New application data loading supports CSV, XLSX, XML, and JSON formats.
  • Column mapping occurs at design time, removing the burden from end users.
  • Flexible column mappings based on simple names or regular expressions.
  • Data conversion with transformation rules or lookup queries.
  • Easy workflow for end users: upload the file, verify the preview, and load the data.
  • CSV, XLSX, XML, and JSON data can be loaded into tables or collections.
  • Data loading can be configured to Append, Merge, or Replace data, with or without error handling.
  • The simple new Data Loading process type lets you customize Data Loading pages as you wish.
  • The APEX_DATA_LOADING PL/SQL API is available for custom processing.
  • Up to 300 columns can be loaded (legacy data loading supports up to 45).
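As a sketch of the kind of custom processing the new PL/SQL API enables, the following anonymous block invokes a data load definition programmatically. The static ID, bind variable, and result field names here are assumptions for illustration; consult the APEX_DATA_LOADING API reference for the exact signatures.

```sql
declare
    l_result apex_data_loading.t_data_load_result;
begin
    -- 'employee_load' is a hypothetical static ID of a data load definition;
    -- :P1_FILE_BLOB stands for the uploaded file content.
    l_result := apex_data_loading.load_data (
        p_static_id    => 'employee_load',
        p_data_to_load => :P1_FILE_BLOB );

    -- The result record reports, for example, how many rows were
    -- processed and how many failed (field names may differ by release).
end;
```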

26.3.2 About Creating a Legacy Data Load Wizard

Create a Data Load Wizard by running the Create Page Wizard and creating a Data Load page.

During the process of creating the Data Load page, developers can specify the upload table and its unique columns, table lookups, and data transformation rules.

Note:

A Data Load Wizard is not designed or intended to load hundreds of thousands of rows of data. While it is possible to use a Data Load Wizard to load this high volume of data, you may encounter performance issues with both transmitting and loading large data files. Tools like Oracle SQL Developer and Oracle SQL*Loader are better suited to loading large volumes of data.

The Data Load Wizard includes support for the following:

  • Table Definitions - This definition specifies the upload table name and its unique key columns.

  • Data Transformation Rules - To apply formatting transformations, such as changing import data to uppercase or lowercase, define data transformation rules. For example, if the import file includes column data in mixed case and the upload table requires all uppercase, you can define a data transformation rule to insert only uppercase values into the target column.

  • Table Lookups - If data in the import file must be mapped to data in another table, specify a table lookup to perform the mapping. For example, if the import file contains a department name for the DEPTNO column but the upload table requires a number for that column, use a table lookup rule to find the corresponding department number for that department name in another table.

  • Column Name Aliases - In many situations, a developer does not wish to expose the table column names, or all of the columns, to the end user. In those situations, you can create column aliases for the columns that need to be exposed.

  • Manage Concurrency - If multiple users upload data at the same time, developers can use an extra column to track the version of the data in the underlying table. The Data Load Wizard can use this column to detect and warn the end user when someone else is working with the same data at the same time. This is particularly important when uploading into a table that is regularly updated.

  • Multiple Spreadsheet Columns - In many situations, a spreadsheet to be uploaded has multiple columns that the developer wants to concatenate and upload into one table column (for example, FirstName and LastName in the spreadsheet can be uploaded into ENAME of the EMP table).

  • Skip Validation - You can improve data loading performance when uploading a large number of records by skipping the validation step. If uploading thousands of records, the end user might not want to validate each record. If you are certain that each record will be inserted as a new record, the data loading process does not need to check for duplicates.
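Taken together, a transformation rule and a table lookup amount to the following plain SQL, sketched here against the sample EMP and DEPT tables. The bind variables stand for values read from the import file; this is an illustration of the effect, not code the wizard generates.

```sql
-- A "to uppercase" transformation rule on ENAME plus a table lookup
-- that maps an uploaded department name to its DEPTNO:
insert into emp (ename, deptno)
values ( upper(:uploaded_ename),              -- transformation rule
         ( select deptno
             from dept
            where dname = :uploaded_dname ) ); -- table lookup
```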

The newly generated Data Load Wizard consists of four pages that enable users to upload data from a file or by copy and paste, define data and table mappings, validate the data, and finally upload the data to the table. The developer can later edit the Data Load Wizard's definitions, such as table lookups and data transformation rules, under Shared Components, Data References, Data Load Definitions.

Supported Data Types

Data Load Wizards support the following data types:

  • VARCHAR2

  • DATE

  • TIMESTAMP

  • NUMBER

Unsupported Data Types

Data Load Wizards do not support the following data types:

  • Large objects (BLOB and CLOB)

  • Complex types (XMLTYPE and SDO_GEOMETRY)

  • CHAR

26.3.3 Creating a Legacy Data Load Page

To create a legacy Data Load Wizard, a developer creates a Data Load page with the Create Page Wizard.

To create a legacy Data Load Wizard:

  1. On the Workspace home page, click the App Builder icon.
  2. Select the application.
  3. On the Application home page, click Create Page.
    The Create a Page wizard appears and features three tabs: Component, Feature, and Legacy Pages.
  4. Click Legacy Pages and select Legacy Data Loading.
  5. For Data Load Table:
    1. Legacy Data Load - Choose whether to create a new or to re-use an existing Legacy Data Load definition.
    2. Definition Name - Enter the name of this data load definition.
    3. Owner - Select the owner of the table on which the form will be based.
    4. Table Name - Select the table to use for data loading (also known as the upload table).
    5. Unique Column 1 - Identify the column to be used as the primary unique key column during the data load process.
    6. Case Sensitive - Identify whether the selected unique key column is case sensitive. By default, this is set to No.
    7. Define additional unique columns if needed. You can define up to three unique key columns.
    8. Click Next.
  6. For Add Transformation Rules (optional) - Transformation Rules enable you to change the data being uploaded before it is inserted into the base table. Select the column to transform and then the desired rule to apply to it.
    1. Select Column(s) to create a transformation rule - Select the columns on which the transformation rule definition is to be based and move them to the right.
    2. Rule Name - Enter a name for this transformation rule.
    3. Sequence - Specify the sequence for the transformation rule. The sequence determines the order of execution.
    4. Type - Select the type of transformation rule you want to perform.
    5. Provide additional details based on the transformation type you have chosen.
    6. Click Add Transformation.
    7. Click Next.
  7. For Table Lookups - Match an uploaded value against another table and use the associated key value, instead of the uploaded value.
    1. Add new table lookup for Column (optional) - Identify the column on which the table lookup definition is to be based.
    2. Lookup Table Owner - Select the owner of the lookup table.
    3. Lookup Table Name - Identify the table to be used for this table lookup definition.
    4. Return Column - Select the name of the column returned by the table lookup. This value will be inserted into the load column specified and is generally the key value of the parent in a foreign key relationship (for example: DEPTNO).
    5. Upload Column - Select the name of the column end users will upload instead of the return column. This is the column that contains the display value from the lookup table (for example: DNAME).
    6. Upload Column 2 - Select the name of the second column to be uploaded to uniquely identify the return column if necessary. For example, to uniquely identify a State Code it may be necessary to upload the State Name and Country.
    7. Upload Column 3 - Select the name of the third column to be uploaded to uniquely identify the return column.
    8. Click Add lookup to add the lookup definition. Repeat the previous steps to add additional table lookups.
    9. Click Next to finish creating lookups.
  8. For Page Attributes:
    1. Step 1, Step 2, Step 3, and Step 4 - Enter a page name for each step.
    2. Page Number - Enter a page number for each step.
    3. Page Mode - Identify the page mode. To learn more, see field-level Help.
    4. Page Group - Identify the name of the page group you would like to associate with this page.
    5. Breadcrumb - Select whether you want to use a breadcrumb navigation control on your page and which breadcrumb navigation control you want to use.
    6. Click Next.
  9. For Navigation Menu:
    1. Navigation Preference - Select how you want this page integrated into the Navigation Menu. To learn more, see field-level Help.
    2. Click Next.
  10. For Buttons and Branching, specify the branching for the buttons on the data load wizard pages:
    1. Next Button Label - Enter text to display on the Next button.
    2. Previous Button Label - Enter text to display on the Previous button.
    3. Cancel Button Label - Enter text to display on the Cancel button.
    4. Cancel Button Branch to Page - Specify the page number to branch to when the user clicks Cancel.
    5. Finish Button Label - Enter text to display on the Submit button.
    6. Finish Button Branch to Page - Specify the number of the page to branch to. You can choose to branch back to the same page or any other page in your application.
    7. Click Create.
  11. Click Save and Run Page to test the Data Load Wizard.

Tip:

After creating Data Load Wizard pages, if you wish to make changes, Oracle recommends re-creating the pages without deleting the data loading definitions, as described in the next section.

26.3.4 Re-creating Legacy Data Load Pages

To make changes to existing legacy Data Load pages, Oracle recommends re-creating new pages without deleting the data loading definitions.

To re-create legacy Data Load Wizard pages without deleting the data loading definitions:

  1. Navigate to the Data Load Definitions page:
    1. On the Workspace home page, click App Builder.
    2. Select an application.
    3. On the Application home page, click Shared Components.

      The Shared Components page appears.

    4. Under Data References, click Data Load Definitions.
    5. Expand the Legacy Data Load region.
  2. Click the Legacy Data Load you want to re-create.

    The Data Load Table Details page appears.

  3. From Tasks, click Create Pages using this Legacy Data Load.

    The Page Attributes page of the Create Data Load Wizard appears.

  4. For each page, edit the appropriate attributes and click Next.

    To learn more about any attributes, see field-level Help.

  5. Click Create.

26.3.5 Editing a Legacy Data Load Definition

Edit the Data Load Definition in Shared Components.

A Data Load Definition comprises a data load table, table rules, and lookup tables used by the Data Load Wizard in your application. A data load table is an existing table in your schema that has been selected as the target for uploaded data.

To edit a Data Load Definition:

  1. Navigate to the Data Load Definitions page:
    1. On the Workspace home page, click App Builder.
    2. Select an application.
    3. On the Application home page, click Shared Components.
      The Shared Components page appears.
    4. Under Data References, click Data Load Definitions.
    5. Expand the Legacy Data Load region and click the Legacy Data Load you want to edit.
      The Data Load Table Details page appears. To learn more about any attributes, see field-level Help.
  2. For Data Load Definition:
    • Name - Name for the data load definition.

    • Table Name - Displays the name of the underlying table used for the data load definition.

    • Unique column 1 - The column name used as the primary unique key column during the data load process.

    • Case Sensitive - Identify whether the selected unique key column 1 is case sensitive. By default, this is set to No.

    • Unique column 2 - If the unique key definition of the selected table is a compound key, consisting of 2 or more columns, this column name is used as the second unique key column during the data load process.

    • Case Sensitive - Identify whether the selected unique key column 2 is case sensitive. By default, this is set to No.

    • Unique column 3 - If the unique key definition of the selected table is a compound key, consisting of 2 or more columns, this column name is used as the third unique key column during the data load process.

    • Case Sensitive - Identify whether the selected unique key column 3 is case sensitive. By default, this is set to No.

    • Skip Validation - One step in data loading validates the actions to be taken on the records to be uploaded. Select Yes to skip validation.

  3. Transformation Rules lists previously defined transformation rules.
    • To create a new rule:

      1. Click Create Transformation Rule.

      2. Edit the attributes.

        To learn more about an attribute, see field-level Help.

      3. Click Create.

    • To edit an existing rule:

      1. Click the rule name.

      2. Edit the attributes.

      3. Click Apply Changes.

  4. Table Lookups map data in the import file to data found in another table.
    • To create a new Table Lookup:

      1. Click Create Table Lookup.

      2. Edit the attributes.

      3. Click Create.

    • To edit an existing Table Lookup:

      1. Click the column name.

      2. Edit the attributes.

      3. Click Apply Changes.

  5. Column Name Aliases define aliases to help users correctly identify the columns to upload.

    To add Column Name Aliases:

    1. Click Edit List of Values.

    2. Edit the attributes.

    3. Click Create.

  6. From Concurrency Column Name, select a column to be used for concurrency management.

    Concurrency gives the developer the option to select a column to check the version of the data in the underlying table. This is particularly important if uploading into a table that is regularly updated.

  7. Click Apply Changes.
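
A concurrency column is typically a number that is incremented on every update, so that a stale value reveals that another user changed the row in the meantime. A minimal sketch, assuming an EMP table; the column and trigger names are illustrative, not required by APEX.

```sql
-- Add an illustrative version column to the upload table:
alter table emp add (row_version number default 1 not null);

-- Bump the version on every update so concurrent changes are detectable:
create or replace trigger emp_bump_version
    before update on emp
    for each row
begin
    :new.row_version := :old.row_version + 1;
end;
/
```

During a load, comparing the uploaded ROW_VERSION with the current value in the table reveals whether the row was modified after the data was exported, which is the check the Concurrency Column Name setting enables.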