Monitor and Troubleshoot Loads

All data load operations performed using the PL/SQL package DBMS_CLOUD are logged in the tables dba_load_operations and user_load_operations:

  • dba_load_operations: shows all load operations.

  • user_load_operations: shows the load operations in your schema.

Query these tables to see information about ongoing and completed data loads. For example:


SELECT table_name, owner_name, type, status, start_time, update_time, logfile_table, badfile_table 
   FROM user_load_operations WHERE type = 'COPY';

TABLE_NAME OWNER_NAME  TYPE   STATUS     START_TIME              UPDATE_TIME           LOGFILE_TABLE   BADFILE_TABLE
---------- ----------- ------- ---------- ---------------------- --------------------- --------------- -------------
CHANNELS   SH          COPY   COMPLETED  06-NOV-18 01.55.19.3    06-NOV-18 01.55.28.2  COPY$21_LOG     COPY$21_BAD

This SELECT statement with a WHERE clause predicate on the TYPE column shows load operations of type COPY.

The LOGFILE_TABLE column shows the name of the table you can query to look at the log of a load operation. For example, the following query shows the log of the load operation:

SELECT * FROM COPY$21_LOG;

The BADFILE_TABLE column shows the name of the table you can query to see the rows that were rejected with errors during the load. For example, the following query shows the rejected records for the load operation:

SELECT * FROM COPY$21_BAD;

Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE table, you can correct the errors by specifying the correct format options in DBMS_CLOUD.COPY_DATA and running the load again.
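
For example, a corrective reload might look like the following sketch; the credential name, object store URI, and the specific format options shown (delimiter, skipheaders, rejectlimit) are illustrative assumptions and will differ for your data:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',   -- assumed credential created with DBMS_CLOUD.CREATE_CREDENTIAL
    file_uri_list   => 'https://objectstorage.example.com/n/namespace/b/bucket/o/channels.txt',   -- placeholder URI
    format          => json_object('delimiter'   value ',',
                                   'skipheaders' value '1',
                                   'rejectlimit' value '10')
  );
END;
/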

When the format type is "datapump", any rows rejected up to the specified rejectlimit are recorded in the log table, but a BADFILE_TABLE is not generated.
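
For instance, a sketch of loading a Data Pump dump file with a reject limit might look as follows; the credential name and URI are placeholders:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',   -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/n/namespace/b/bucket/o/channels.dmp',   -- placeholder URI
    format          => json_object('type' value 'datapump', 'rejectlimit' value '1')
  );
END;
/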

Note:

The LOGFILE_TABLE and BADFILE_TABLE tables for each load operation are retained for two days and then removed automatically.

See DELETE_ALL_OPERATIONS Procedure for information on clearing the user_load_operations table.
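
For example, a minimal sketch of clearing only the COPY entries (omit the type argument to clear all operation types):

BEGIN
  DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');
END;
/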

Monitor and Troubleshoot ORC, Parquet, or Avro File Loading

As with other data files, ORC, Parquet, and Avro data loads generate logs that are viewable in the tables dba_load_operations and user_load_operations. Each load operation adds a record to these tables that indicates the table containing the logs.

The log table provides summary information about the load.
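
For example, the following sketch loads a Parquet file into an existing table and then locates the log table for the most recent load in your schema; the table name, credential name, and object store URI are placeholder assumptions:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES_EXTENDED',   -- assumed existing target table with matching columns
    credential_name => 'DEF_CRED_NAME',    -- placeholder credential
    file_uri_list   => 'https://objectstorage.example.com/n/namespace/b/bucket/o/sales_extended.parquet',
    format          => json_object('type' value 'parquet')
  );
END;
/

SELECT owner_name, table_name, status, logfile_table
  FROM user_load_operations
 WHERE type = 'COPY'
 ORDER BY start_time DESC
 FETCH FIRST 1 ROW ONLY;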

Note:

For ORC, Parquet, or Avro files, when the format parameter type is set to the value orc, parquet, or avro, the BADFILE_TABLE table is always empty.

  • PRIMARY KEY constraint errors raise an ORA error.
  • If the data for a column encounters a conversion error (for example, the target column is not large enough to hold the converted value), the value for the column is set to NULL. This does not produce a rejected record.