7 Oracle Big Data SQL Reference

This chapter contains reference information for Oracle Big Data SQL:

DBMS_HADOOP PL/SQL Package

The DBMS_HADOOP package contains a function to generate the CREATE EXTERNAL TABLE DDL for a Hive table.

CREATE_EXTDDL_FOR_HIVE

This function returns a SQL CREATE TABLE ORGANIZATION EXTERNAL statement for a Hive table. It uses the ORACLE_HIVE access driver.

Syntax

DBMS_HADOOP.CREATE_EXTDDL_FOR_HIVE (
   cluster_id        IN    VARCHAR2,
   db_name           IN    VARCHAR2   := NULL,
   hive_table_name   IN    VARCHAR2,
   hive_partition    IN    BOOLEAN,
   table_name        IN    VARCHAR2   := NULL,
   perform_ddl       IN    BOOLEAN    DEFAULT FALSE,
   text_of_ddl       OUT   VARCHAR2
);

Parameters

Table 7-1 CREATE_EXTDDL_FOR_HIVE Function Parameters

Parameter Description

cluster_id

Hadoop cluster where the Hive metastore is located

db_name

Name of the Hive database

hive_table_name

Name of the Hive table

hive_partition

Whether the table is partitioned (TRUE) or not (FALSE)

table_name

Name of the Oracle external table to be created. It cannot already exist.

perform_ddl

Whether to execute the generated CREATE TABLE statement (TRUE) or just return the text of the command (FALSE).

Do not execute the command automatically if you want to review or modify it.

text_of_ddl

The generated CREATE TABLE ORGANIZATION EXTERNAL statement.


Usage Notes

The Oracle Database system must be configured for Oracle Big Data SQL. See "About Oracle Big Data SQL on Oracle Exadata Database Machine".

The data type conversions are based on the default mappings between Hive data types and Oracle data types. See "About Data Type Conversions".

Example

The following PL/SQL block returns the CREATE EXTERNAL TABLE DDL for my_hive_table from the default Hive database. The connection to Hive is established using the configuration files in the ORACLE_BIGDATA_CONFIG directory, which identify the location of the HADOOP1 cluster.

DECLARE
   DDLtxt VARCHAR2(4000);
BEGIN
   dbms_hadoop.create_extddl_for_hive(
     CLUSTER_ID=>'hadoop1',
     DB_NAME=>'default',
     HIVE_TABLE_NAME=>'my_hive_table',
     HIVE_PARTITION=>FALSE,
     TABLE_NAME=>'my_xt_oracle',
     PERFORM_DDL=>FALSE,
     TEXT_OF_DDL=>DDLtxt
   );
   dbms_output.put_line(DDLtxt);
END;
/

The block prints the text of the following SQL command:

CREATE TABLE my_xt_oracle
(
     c0 VARCHAR2(4000),
     c1 VARCHAR2(4000),
     c2 VARCHAR2(4000),
     c3 VARCHAR2(4000))
     ORGANIZATION EXTERNAL
        (TYPE ORACLE_HIVE
          DEFAULT DIRECTORY DEFAULT_DIR
          ACCESS PARAMETERS (
             com.oracle.bigdata.cluster=hadoop1
             com.oracle.bigdata.tablename=default.my_hive_table
          )
)
PARALLEL 2 REJECT LIMIT UNLIMITED

CREATE TABLE ACCESS PARAMETERS Clause

This section describes the properties that you use when creating an external table that uses the ORACLE_HDFS or ORACLE_HIVE access drivers. In a CREATE TABLE ORGANIZATION EXTERNAL statement, specify the parameters in the opaque_format_spec clause of ACCESS PARAMETERS.


Syntax Rules for Specifying Properties

The properties are set using keyword-value pairs in the SQL CREATE TABLE ACCESS PARAMETERS clause and in the configuration files. The syntax must obey these rules:

  • The format of each keyword-value pair is a keyword, a colon or equal sign, and a value. The following are valid keyword-value pairs:

    keyword=value
    keyword:value
    

    The value is everything from the first non-whitespace character after the separator to the end of the line. Whitespace between the separator and the value is ignored. Trailing whitespace for the value is retained.

  • A property definition can be on one line or multiple lines.

  • A line terminator is a line feed, a carriage return, or a carriage return followed by a line feed.

  • When a property definition spans multiple lines, then precede the line terminators with a backslash (escape character), except on the last line. In this example, the value of the Keyword1 property is Value part 1 Value part 2 Value part 3.

    Keyword1=  Value part 1 \
               Value part 2 \
               Value part 3
    
  • You can create a logical line by stripping each physical line of leading whitespace and concatenating the lines. The parser extracts the property names and values from the logical line.

  • You can embed special characters in a property name or property value by preceding a character with a backslash (escape character), indicating the substitution. Table 7-2 describes the special characters.

Table 7-2 Special Characters in Properties

Escape Sequence Character

\b

Backspace (\u0008)

\t

Horizontal tab (\u0009)

\n

Line feed (\u000a)

\f

Form feed (\u000c)

\r

Carriage return (\u000d)

\"

Double quote (\u0022)

\'

Single quote (\u0027)

\\

Backslash (\u005c)

When multiple backslashes are at the end of the line, the parser continues the value to the next line only for an odd number of backslashes.

\uxxxx

2-byte, big-endian, Unicode code point.

When a character requires two code points (4 bytes), the parser expects \u for the second code point.
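
As a cross-check of these rules, here is a minimal Python sketch (an illustration, not the actual parser) that joins physical lines into logical lines, splits keyword-value pairs on the first colon or equal sign, and substitutes the escape sequences of Table 7-2:

```python
import re

# Escape sequences from Table 7-2 (\uxxxx code points handled separately).
ESCAPES = {'b': '\b', 't': '\t', 'n': '\n', 'f': '\f',
           'r': '\r', '"': '"', "'": "'", '\\': '\\'}

def logical_lines(text):
    """Join physical lines into logical lines.  A line continues onto the
    next one only when it ends in an odd number of backslashes; the
    continuation line is stripped of leading whitespace before joining."""
    lines, buf = [], None
    for line in text.splitlines():
        line = line if buf is None else buf + line.lstrip()
        m = re.search(r'\\+$', line)
        if m and len(m.group()) % 2 == 1:
            buf = line[:-1]              # drop the continuation backslash
        else:
            lines.append(line)
            buf = None
    if buf is not None:
        lines.append(buf)
    return lines

def unescape(value):
    """Substitute the escape sequences of Table 7-2, including \\uxxxx."""
    def repl(m):
        g = m.group(1)
        if g[0] == 'u':                  # \uxxxx: Unicode code point
            return chr(int(g[1:], 16))
        return ESCAPES.get(g, g)
    return re.sub(r'\\(u[0-9a-fA-F]{4}|.)', repl, value)

def parse_properties(text):
    """Parse keyword[:=]value pairs; whitespace after the separator is
    ignored, trailing whitespace in the value is retained.  (Simplified:
    escaped separators inside a keyword are not handled.)"""
    props = {}
    for line in logical_lines(text):
        m = re.match(r'([^:=]+?)\s*[:=]\s*(.*)$', line)
        if m:
            props[m.group(1).strip()] = unescape(m.group(2))
    return props
```

Running parse_properties on the multi-line Keyword1 example above yields the single value "Value part 1 Value part 2 Value part 3".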


ORACLE_HDFS Access Parameters

The access parameters for the ORACLE_HDFS access driver provide the metadata needed to locate the data in HDFS and generate a Hive table over it.

Default Parameter Settings for ORACLE_HDFS

If you omit all access parameters from the CREATE TABLE statement, then ORACLE_HDFS uses the following default values:

com.oracle.bigdata.rowformat=DELIMITED
com.oracle.bigdata.fileformat=TEXTFILE
com.oracle.bigdata.overflow={"action":"truncate"}
com.oracle.bigdata.erroropt={"action":"setnull"}

Optional Parameter Settings for ORACLE_HDFS

ORACLE_HDFS supports a number of optional com.oracle.bigdata parameters, which you can specify in the opaque_format_spec clause. They are described individually later in this chapter.

Example 7-1 shows a CREATE TABLE statement in which multiple access parameters are set.

Example 7-1 Setting Multiple Access Parameters for ORACLE_HDFS

CREATE TABLE ORDER (CUST_NUM VARCHAR2(10), 
                    ORDER_NUM VARCHAR2(20), 
                    ORDER_DATE DATE,
                    ITEM_CNT NUMBER,
                    DESCRIPTION VARCHAR2(100),
                    ORDER_TOTAL NUMBER(8,2)) ORGANIZATION EXTERNAL 
             (TYPE ORACLE_HDFS
 ACCESS PARAMETERS (
  com.oracle.bigdata.fields: (CUST_NUM,              \
                              ORDER_NUM,             \
                              ORDER_DATE,            \
                              ORDER_LINE_ITEM_COUNT, \
                              DESCRIPTION,           \
                              ORDER_TOTAL)
  com.oracle.bigdata.colMap:     {"col":"item_cnt", \
                                  "field":"order_line_item_count"}
  com.oracle.bigdata.overflow:   {"action":"TRUNCATE", \
                                  "col":"DESCRIPTION"}
  com.oracle.bigdata.errorOpt:  [{"action":"replace", \
                                  "value":"INVALID NUM", \
                                  "col":["CUST_NUM","ORDER_NUM"]} , \
                                 {"action":"reject", \
                                  "col":"ORDER_TOTAL"}]
 )
 LOCATION ("hdfs:/usr/cust/summary/*"));

ORACLE_HIVE Access Parameters

ORACLE_HIVE retrieves metadata about external data sources from the Hive catalog. The default mapping of Hive data to columns in the external table is usually appropriate. However, some circumstances require special parameter settings, or you might want to override the default values for reasons of your own.

Default Parameter Settings for ORACLE_HIVE

If you omit all access parameters from the CREATE TABLE statement, then ORACLE_HIVE uses the following default values:

com.oracle.bigdata.tablename=name of external table
com.oracle.bigdata.overflow={"action":"truncate"}
com.oracle.bigdata.erroropt={"action":"setnull"}

Optional Parameter Values for ORACLE_HIVE

ORACLE_HIVE supports a number of optional com.oracle.bigdata parameters, which you can specify in the opaque_format_spec clause. They are described individually later in this chapter.

Example 7-2 shows a CREATE TABLE statement in which multiple access parameters are set.

Example 7-2 Setting Multiple Access Parameters for ORACLE_HIVE

CREATE TABLE ORDER (cust_num VARCHAR2(10), 
                    order_num VARCHAR2(20), 
                    order_date DATE,
                    item_cnt NUMBER,
                    description VARCHAR2(100),
                    order_total NUMBER(8,2)) ORGANIZATION EXTERNAL 
(TYPE oracle_hive
 ACCESS PARAMETERS (
  com.oracle.bigdata.tableName:  order_db.order_summary
  com.oracle.bigdata.colMap:     {"col":"ITEM_CNT", \
                                  "field":"order_line_item_count"}
  com.oracle.bigdata.overflow:   {"action":"ERROR", \
                                  "col":"DESCRIPTION"}
  com.oracle.bigdata.errorOpt:   [{"action":"replace", \
                                   "value":"INV_NUM" , \
                                   "col":["CUST_NUM","ORDER_NUM"]} ,\
                                  {"action":"reject", \
                                   "col":"ORDER_TOTAL"}]
));

com.oracle.bigdata.colmap

Maps a column in the source data to a column in the Oracle external table. Use this property when the source field names exceed the maximum length of Oracle column names, or when you want to use different column names in the external table.

Default Value

A column in the external table with the same name as the Hive column

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

colmap ::=


colmap_entry ::=


Semantics

"col":name

"col": The keyword must be lowercase and enclosed in quotation marks.

name: The name of a column in the Oracle external table. It is case sensitive and must be enclosed in quotation marks.

"field":name

"field": The keyword must be lowercase and enclosed in quotation marks.

name: The name of a field in the data source. It is not case sensitive, but it must be enclosed in quotation marks. See "Syntax Rules for Specifying Properties".

Example

This example maps a Hive column named ORDER_LINE_ITEM_COUNT to an Oracle column named ITEM_CNT:

com.oracle.bigdata.colMap={"col":"ITEM_CNT", \
                           "field":"order_line_item_count"}

com.oracle.bigdata.datamode

Specifies the method that Smart Scan uses to scan a data source composed of delimited text files. The method can make a significant difference in performance.

Default Value

java

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

datamode ::=


Semantics

automatic

Selects c mode if possible, or java mode if the data contains formats that are not supported by c mode.

c

Uses Java to read the file buffers, but C code to process the data and convert it to Oracle format. If the data contains formats that the C code does not support, then it returns an error.

java

Uses the Java SerDes and InputFormats to process the data and convert it to Oracle format.

com.oracle.bigdata.erroropt

Describes how to handle errors that occur while the value of a column is calculated.

Default Value

{"action":"setnull"}

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

erroropt ::=


error_element ::=


Semantics

The "action", "reject", "setnull", "replace", "value", and "col" keywords must be lowercase and enclosed in quotation marks. See "Syntax Rules for Specifying Properties".

"action":value

value: One of these keywords:

  • "reject": Does not load any rows.

  • "setnull": Sets the column to NULL.

  • "replace": Sets the column to the specified value.

"value":string

string: Replaces a bad value in the external table. It must be enclosed in quotation marks.

"col":name

name: Identifies a column in an external table. The column name is case sensitive, must be enclosed in quotation marks, and can be listed only once.

Example

This example sets the value of the CUST_NUM or ORDER_NUM columns to INVALID if the Hive value causes an error. For any other columns, an error just causes the Hive value to be rejected.

com.oracle.bigdata.errorOpt: {"action":"replace",\
                              "value":"INVALID", \
                              "col":["CUST_NUM","ORDER_NUM"]}
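
The semantics above can be illustrated with a small Python sketch (a hypothetical helper, not driver code) that applies an erroropt document when the calculation of a column fails:

```python
def apply_error_option(row, col, erroropt):
    """Resolve a calculation error on `col` using an erroropt document
    (a dict or a list of dicts).  Returns the updated row, or None when
    the row is rejected."""
    elements = erroropt if isinstance(erroropt, list) else [erroropt]
    for element in elements:
        cols = element.get('col')
        names = cols if isinstance(cols, list) else [cols]
        if cols is None or col in names:
            action = element['action']
            if action == 'reject':
                return None                  # do not load the row
            # "replace" stores the given value; "setnull" stores NULL
            row[col] = element['value'] if action == 'replace' else None
            return row
    # Assumed fallback for columns not named in the property:
    # the documented default, {"action":"setnull"}.
    row[col] = None
    return row
```

For the example above, an error in CUST_NUM yields the value INVALID, while an error in any other column sets that column to NULL under the assumed fallback.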

com.oracle.bigdata.fields

Lists the field names and data types of the data source.

Default Value

Not defined

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

fields ::=


data_type ::=


primitive_type ::=


Semantics

The syntax is the same as a field list for a Hive table. If you split the field list across multiple lines, you must use a backslash to escape the new line characters.

field_name

The name of the Hive field. Use only alphanumeric characters and underscores (_). The maximum length is 128 characters. Field names are case-insensitive.

data_type

The data type of the Hive field. Optional; the default is STRING. The character set must be UTF8.

The data type can be complex or primitive:

Hive Complex Data Types

  • ARRAY: Indexable list

  • MAP: Key-value tuples

  • STRUCT: List of elements

  • UNIONTYPE: Multiple data types

Hive Primitive Data Types

  • INT: 4 byte integer

  • BIGINT: 8 byte integer

  • SMALLINT: 2 byte integer

  • TINYINT: 1 byte integer

  • BOOLEAN: TRUE or FALSE

  • FLOAT: single precision

  • DOUBLE: double precision

  • STRING: character sequence
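
As a rough sketch of the default conversions (see "About Data Type Conversions"), the mapping below pairs each Hive primitive type with an Oracle column type. The exact Oracle precisions shown are assumptions for illustration, not authoritative values:

```python
# Assumed default Hive-to-Oracle type mappings; the precisions are
# illustrative only -- consult "About Data Type Conversions" for the
# authoritative table.
HIVE_TO_ORACLE = {
    'TINYINT':  'NUMBER(3)',        # 1 byte integer
    'SMALLINT': 'NUMBER(5)',        # 2 byte integer
    'INT':      'NUMBER(10)',       # 4 byte integer
    'BIGINT':   'NUMBER(19)',       # 8 byte integer
    'BOOLEAN':  'VARCHAR2(5)',      # TRUE or FALSE
    'FLOAT':    'BINARY_FLOAT',     # single precision
    'DOUBLE':   'BINARY_DOUBLE',    # double precision
    'STRING':   'VARCHAR2(4000)',   # character sequence
}

def oracle_type(hive_type):
    """Map a Hive primitive type to an Oracle column type; complex types
    (ARRAY, MAP, STRUCT, UNIONTYPE) fall back to VARCHAR2(4000) here."""
    return HIVE_TO_ORACLE.get(hive_type.upper(), 'VARCHAR2(4000)')
```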

See Also:

"Data Types" in the Apache Hive Language Manual at

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types

COMMENT col_comment

A string literal enclosed in single quotation marks, which is stored as metadata for the Hive table (comment property of TBLPROPERTIES).

com.oracle.bigdata.fileformat

Describes the file format of the data source, based on the STORED AS clause for a Hive table generated by ORACLE_HDFS.

Default Value

TEXTFILE

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

fileformat ::=


Semantics

ORC

Optimized row columnar file format

PARQUET

Not supported in this release of Oracle Big Data Appliance.

RCFILE

Record columnar file format

SEQUENCEFILE

Compressed file format

TEXTFILE

Plain text file format

INPUTFORMAT

Identifies a Java class that can extract records from the data file.

OUTPUTFORMAT

Identifies a Java class that can format the output records in the desired format.

com.oracle.bigdata.log.exec

Specifies how the access driver generates log files from the C code for a query, when it runs as parallel processes on CDH.

The access driver does not create or write log files when executing on a Hadoop cluster node; the parallel query processes write them. The log files from the Java code are controlled by log4j properties, which are specified in the configuration file or the access parameters. See "bigdata-log4j.properties".

Default Value

Not defined (no logging)

Syntax

[directory_object:]file_name_template

Semantics

directory_object

The Oracle directory object for the HDFS path on the Hadoop cluster where the log file is created.

file_name_template

A string used to generate file names. Table 7-3 describes the optional variables that you can use in the template.

Table 7-3 Variables for com.oracle.bigdata.log.exec

Variable Value

%p

Operating system process identifier (PID)

%a

A number that uniquely identifies the process.

%%

A percent sign (%)


Example

The following example generates log file names that include the PID and a unique number, such as xtlogp_hive14_3413_57:

com.oracle.bigdata.log.exec= xtlogp_hive14_%p_%a

com.oracle.bigdata.log.qc

Specifies how the access driver generates log files for a query.

Default Value

Not defined (no logging)

Syntax

[directory_object:]file_name_template

Semantics

directory_object

Name of an Oracle directory object that points to the path where the log files are written. If this value is omitted, then the logs are written to the default directory for the external table.

file_name_template

A string used to generate file names. Table 7-4 describes the optional variables that you can use in the string.

Table 7-4 Variables for com.oracle.bigdata.log.qc

Variable Value

%p

Operating system process identifier (PID)

%%

A percent sign (%)


Example

This example creates log file names that include the PID and a percent sign, such as xtlogp_hive213459_%:

com.oracle.bigdata.log.qc= xtlogp_hive21%p_%%
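
Both log name templates behave like simple token substitution. The following Python sketch (illustrative only) expands the %p, %a, and %% variables of Tables 7-3 and 7-4:

```python
import os

def expand_log_name(template, pid=None, unique=0):
    """Expand %p (process identifier), %a (unique process number), and
    %% (literal percent sign) in a log file name template."""
    pid = os.getpid() if pid is None else pid
    out = (template.replace('%%', '\0')      # protect literal percents
                   .replace('%p', str(pid))
                   .replace('%a', str(unique)))
    return out.replace('\0', '%')
```

With pid=3413 and unique=57, the template xtlogp_hive14_%p_%a expands to xtlogp_hive14_3413_57, matching the com.oracle.bigdata.log.exec example above.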

com.oracle.bigdata.overflow

Describes how to handle string data that is too long for the columns in the external table. The data source can be character or binary. For Hive, the data source can also be STRUCT, UNIONTYPE, MAP, or ARRAY.

Default Value

{"action":"truncate"}

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

overflow ::=


overflow_element ::=


Semantics

The "action", "truncate", "error", and "col" tags must be lowercase and enclosed in quotation marks. See "Syntax Rules for Specifying Properties".

"action":value

The value of "action" can be one of the following keywords:

  • truncate: Shortens the data to fit the column.

  • error: Throws an error. The com.oracle.bigdata.erroropt property controls the result of the error.

"col":name

name: Identifies a column in the external table. The name is case sensitive and must be enclosed in quotation marks.

Example

This example truncates the source data for the DESCRIPTION column, if it exceeds the column width:

com.oracle.bigdata.overflow={"action":"truncate", \
                             "col":"DESCRIPTION"}
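
The overflow actions amount to a truncate-or-raise decision per column value. A minimal Python illustration (not driver code):

```python
def apply_overflow(value, width, action='truncate'):
    """Apply the com.oracle.bigdata.overflow action to a value that may
    exceed the external table column width."""
    if len(value) <= width:
        return value                 # fits: no action needed
    if action == 'truncate':
        return value[:width]         # shorten the data to fit the column
    raise ValueError('column value overflow')   # action == 'error'
```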

com.oracle.bigdata.rowformat

Provides the information the access driver needs to extract fields from the records in a file.

Default Value

DELIMITED

Syntax

A JSON document with the keyword-value pairs shown in the following diagram:

rowformat ::=


Semantics

DELIMITED

Describes the characters used to delimit the fields in a record:

  • FIELDS TERMINATED BY: The character that delimits every field in the record. The optional ESCAPED BY character precedes the delimit character when it appears within a field value.

  • COLLECTION ITEMS TERMINATED BY: The character that marks the end of an array element.

  • MAP KEYS TERMINATED BY: The character that marks the end of an entry in a MAP field.

  • LINES TERMINATED BY: The character that marks the end of a record.

  • NULL DEFINED AS: The character that indicates a null value.
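
A sketch of how one DELIMITED record is split when an ESCAPED BY character is defined (simplified Python illustration; it ignores collection and map delimiters):

```python
def split_fields(record, terminator=',', escape='\\'):
    """Split a DELIMITED record into fields.  A terminator preceded by
    the escape character is taken literally as part of the field value."""
    fields, current, i = [], [], 0
    while i < len(record):
        ch = record[i]
        if ch == escape and i + 1 < len(record):
            current.append(record[i + 1])   # escaped character, literal
            i += 2
        elif ch == terminator:
            fields.append(''.join(current)) # field boundary
            current = []
            i += 1
        else:
            current.append(ch)
            i += 1
    fields.append(''.join(current))
    return fields
```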

SERDE

Identifies a SerDe that can parse the data and any properties of the SerDe that the access driver might need.

Example

This example specifies a SerDe for an Avro container file:

com.oracle.bigdata.rowformat:
   SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'

The next example specifies a SerDe for a file containing regular expressions:

com.oracle.bigdata.rowformat=\
    SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' \
    WITH SERDEPROPERTIES \ 
    ("input.regex" = "(\\\\d{6}) (\\\\d{5}) (.{29}) .*") 

com.oracle.bigdata.tablename

Identifies the Hive table that contains the source data.

Default Value

DEFAULT.external_table_name

Syntax

[hive_database_name.]table_name

Semantics

The maximum length of hive_database_name and table_name is 128 UTF-8 characters (512 bytes).

hive_database_name

The Hive database where the source data resides. DEFAULT is the name of the initial Hive database.

table_name

The Hive table with the data. If you omit table_name, then ORACLE_HIVE searches for a Hive table with the same name as the external table. Table names are case-insensitive.

Example

This setting indicates that the source data is in a table named ORDER_SUMMARY in the Hive ORDER_DB database:

com.oracle.bigdata.tablename=ORDER_DB.ORDER_SUMMARY
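
The defaulting rules for this parameter can be sketched in Python (an illustration of the documented fallbacks, not driver code):

```python
def resolve_hive_table(tablename_setting, external_table_name):
    """Resolve com.oracle.bigdata.tablename to a (database, table) pair.
    An unset parameter falls back to the external table's own name, and
    a missing database prefix falls back to DEFAULT.  Hive names are
    case-insensitive, so both parts are normalized to lowercase."""
    name = tablename_setting or external_table_name
    db, sep, table = name.partition('.')
    if not sep:                      # no database prefix given
        db, table = 'default', db
    return db.lower(), table.lower()
```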

Static Data Dictionary Views for Hive

The Oracle Database catalog contains several static data dictionary views for the Hive tables it can access. For you to access any Hive databases from Oracle Database, you must have read privileges on the ORACLE_BIGDATA_CONFIG directory object.

ALL_HIVE_DATABASES

ALL_HIVE_DATABASES describes all databases in the Hive metastore accessible to the current user.

Related Views

  • DBA_HIVE_DATABASES describes all the databases in the Hive metastore.

  • USER_HIVE_DATABASES describes the databases in the Hive metastore owned by the current user.

Column Datatype NULL Description
CLUSTER_ID VARCHAR2(4000) NOT NULL Hadoop cluster where the Hive metastore is located
DATABASE_NAME VARCHAR2(4000) NOT NULL Hive database name
DESCRIPTION VARCHAR2(4000)   Hive database description
DB_LOCATION VARCHAR2(4000) NOT NULL  
HIVE_URI VARCHAR2(4000)   Hive database URI

ALL_HIVE_TABLES

ALL_HIVE_TABLES describes all tables in the Hive metastore accessible to the current user.

The Oracle Big Data SQL configuration must identify the default Hive database for the current user. The current user must also have READ privileges on the ORA_BIGSQL_CONFIG database directory. See "About the Common Directory".

Related Views

  • DBA_HIVE_TABLES describes all tables in the Hive metastore.

  • USER_HIVE_TABLES describes the tables in the database owned by the current user in the Hive metastore.

Column Datatype NULL Description
CLUSTER_ID VARCHAR2(4000) NOT NULL Hadoop cluster where the Hive metastore is located
DATABASE_NAME VARCHAR2(4000) NOT NULL Name of the Hive database
TABLE_NAME VARCHAR2(4000) NOT NULL Name of the Hive table
LOCATION VARCHAR2(4000)    
NO_OF_COLS NUMBER   Number of columns in the Hive table
CREATION_TIME DATE   Time when the table was created
LAST_ACCESSED_TIME DATE   Time of most recent access
OWNER VARCHAR2(4000)   Owner of the Hive table
TABLE_TYPE VARCHAR2(4000) NOT NULL Type of Hive table, such as external or managed
PARTITIONED VARCHAR2(4000)   Whether the table is partitioned (YES) or not (NO)
NO_OF_PART_KEYS NUMBER   Number of partitions
INPUT_FORMAT VARCHAR2(4000)   Input format
OUTPUT_FORMAT VARCHAR2(4000)   Output format
SERIALIZATION VARCHAR2(4000)   SerDe serialization information
COMPRESSED NUMBER   Whether the table is compressed (YES) or not (NO)
HIVE_URI VARCHAR2(4000)   Hive database URI

ALL_HIVE_COLUMNS

ALL_HIVE_COLUMNS describes the columns of all Hive tables accessible to the current user.

The Oracle Big Data SQL configuration must identify the default Hive database for the current user. The current user must also have READ privileges on the ORA_BIGSQL_CONFIG database directory. See "About the Common Directory".

Related Views

  • DBA_HIVE_COLUMNS describes the columns of all tables in the Hive metastore.

  • USER_HIVE_COLUMNS describes the columns of the tables in the Hive database owned by the current user.

Column Datatype NULL Description
CLUSTER_ID VARCHAR2(4000) NOT NULL Hadoop cluster where the Hive metastore is located
DATABASE_NAME VARCHAR2(4000) NOT NULL Name of the Hive database; if blank, then the default database
TABLE_NAME VARCHAR2(4000) NOT NULL Name of the Hive table
COLUMN_NAME VARCHAR2(4000) NOT NULL Name of the Hive column
HIVE_COLUMN_TYPE VARCHAR2(4000) NOT NULL Data type of the Hive column
ORACLE_COLUMN_TYPE VARCHAR2(4000) NOT NULL Oracle data type equivalent to Hive data type
LOCATION VARCHAR2(4000)    
OWNER VARCHAR2(4000)   Owner of the Hive table
CREATION_TIME DATE   Time when the table was created
HIVE_URI VARCHAR2(4000)   Hive database URI

DBA_HIVE_DATABASES

DBA_HIVE_DATABASES describes all the databases in the Hive metastore. Its columns are the same as those in ALL_HIVE_DATABASES.

DBA_HIVE_TABLES

DBA_HIVE_TABLES describes all tables in the Hive metastore. Its columns are the same as those in ALL_HIVE_TABLES.

The Oracle Big Data SQL configuration must identify the default Hive database for the current user. See "About the Common Directory".

DBA_HIVE_COLUMNS

DBA_HIVE_COLUMNS describes the columns of all tables in the Hive metastore. Its columns are the same as those in ALL_HIVE_COLUMNS.

USER_HIVE_DATABASES

USER_HIVE_DATABASES describes the databases in the Hive metastore owned by the current user. Its columns (except for OWNER) are the same as those in ALL_HIVE_DATABASES.

USER_HIVE_TABLES

USER_HIVE_TABLES describes the tables in the database owned by the current user in the Hive metastore. Its columns (except for OWNER) are the same as those in ALL_HIVE_TABLES.

The Oracle Big Data SQL configuration must identify the default Hive database for the current user. The current user must also have READ privileges on the ORA_BIGSQL_CONFIG database directory. See "About the Common Directory".

USER_HIVE_COLUMNS

USER_HIVE_COLUMNS describes the columns of the tables in the Hive database owned by the current user. Its columns (except for OWNER) are the same as those in ALL_HIVE_COLUMNS.

The Oracle Big Data SQL configuration must identify the default Hive database for the current user. The current user must also have READ privileges on the ORA_BIGSQL_CONFIG database directory. See "About the Common Directory".