Table of Contents
- Title and Copyright Information
- Preface
- 1 Introduction to Oracle Big Data SQL
- 1.1 What Is Oracle Big Data SQL?
- 1.1.1 About Oracle External Tables
- 1.1.2 About the Access Drivers for Oracle Big Data SQL
- 1.1.3 About Smart Scan for Big Data Sources
- 1.1.4 About Storage Indexes
- 1.1.5 About Predicate Push Down
- 1.1.6 About Pushdown of Character Large Object (CLOB) Processing
- 1.1.7 About Aggregation Offload
- 1.1.8 About Oracle Big Data SQL Statistics
- 1.2 Installation
- 2 Use Oracle Big Data SQL to Access Data
- 2.1 About Creating External Tables
- 2.2 Create an External Table for Hive Data
- 2.3 Create an External Table for Oracle NoSQL Database
- 2.4 Create an Oracle External Table for Apache HBase
- 2.5 Create an Oracle External Table for HDFS Files
- 2.6 Create an Oracle External Table for Kafka Topics
- 2.7 Create an Oracle External Table for Object Store Access
- 2.8 Query External Tables
- 2.9 Oracle SQL Access to Kafka
- 2.9.1 About Oracle SQL Access to Kafka
- 2.9.2 Get Started with Oracle SQL Access to Kafka
- 2.9.3 Register a Kafka Cluster
- 2.9.4 Create Views to Access CSV Data in a Kafka Topic
- 2.9.5 Create Views to Access JSON Data in a Kafka Topic
- 2.9.6 Query Kafka Data as Continuous Stream
- 2.9.7 Explore Kafka Data from a Specific Offset
- 2.9.8 Explore Kafka Data from a Specific Timestamp
- 2.9.9 Load Kafka Data into Tables Stored in Oracle Database
- 2.9.10 Load Kafka Data into Temporary Tables
- 2.9.11 Customize Oracle SQL Access to Kafka Views
- 2.9.12 Reconfigure Existing Kafka Views
- 2.10 About Oracle Big Data SQL on the Database Server (Oracle Exadata Machine or Other)
- 3 Information Lifecycle Management: Hybrid Access to Data in Oracle Database and Hadoop
- 3.1 About Storing to Hadoop and Hybrid Partitioned Tables
- 3.2 Use Copy to Hadoop
- 3.2.1 What Is Copy to Hadoop?
- 3.2.2 Getting Started Using Copy to Hadoop
- 3.2.3 Using Oracle Shell for Hadoop Loaders With Copy to Hadoop
- 3.2.4 Copy to Hadoop by Example
- 3.2.5 Querying the Data in Hive
- 3.2.6 Column Mappings and Data Type Conversions in Copy to Hadoop
- 3.2.7 Working With Spark
- 3.2.8 Using Oracle SQL Developer with Copy to Hadoop
- 3.3 Enable Access to Hybrid Partitioned Tables
- 3.4 Store Oracle Tablespaces in HDFS
- 4 Oracle Big Data SQL Security
- 5 Work With Query Server
- 5.1 About Oracle Big Data SQL Query Server
- 5.2 Important Terms and Concepts
- 5.3 Specify the Hive Databases to Synchronize With Query Server
- 5.4 Synchronize Query Server With Hive
- 5.5 Query Server Restarts and Metadata Persistence
- 5.6 Connect to Query Server
- 5.6.1 About Connecting to the Query Server
- 5.6.2 Copy the Client Wallet for TLS Connections
- 5.6.3 Connect to Non-Secure Hadoop Clusters
- 5.6.4 Connect to Secure Hadoop Clusters with Kerberos Authentication
- 5.6.5 Connect to Secure Hadoop Clusters with Password-Based Database Authentication
- 5.6.6 Administrative Connections
- 6 Oracle Big Data SQL Reference
- 6.1 CREATE TABLE ACCESS PARAMETERS Clause
- 6.1.1 Syntax Rules for Specifying Properties
- 6.1.2 ORACLE_HDFS Access Parameters
- 6.1.3 ORACLE_HIVE Access Parameters
- 6.1.4 Full List of Access Parameters for ORACLE_HDFS and ORACLE_HIVE
- 6.1.4.1 com.oracle.bigdata.buffersize
- 6.1.4.2 com.oracle.bigdata.datamode
- 6.1.4.3 com.oracle.bigdata.colmap
- 6.1.4.4 com.oracle.bigdata.erroropt
- 6.1.4.5 com.oracle.bigdata.fields
- 6.1.4.6 com.oracle.bigdata.fileformat
- 6.1.4.7 com.oracle.bigdata.log.exec
- 6.1.4.8 com.oracle.bigdata.log.qc
- 6.1.4.9 com.oracle.bigdata.overflow
- 6.1.4.10 com.oracle.bigdata.rowformat
- 6.1.4.11 com.oracle.bigdata.tablename
- 6.1.5 ORACLE_BIGDATA Access Parameters
- 6.2 Static Data Dictionary Views for Hive
- 6.3 DBMS_BDSQL PL/SQL Package
- 6.4 DBMS_BDSQS PL/SQL Package
- 6.5 DBMS_BDSQS_ADMIN PL/SQL Package
- 6.6 DBMS_HADOOP PL/SQL Package
- Appendices
- A Manual Steps for Using Copy to Hadoop for Staged Copies
- A.1 Generating the Data Pump Files
- A.2 Copying the Files to HDFS
- A.3 Creating a Hive Table
- A.4 Example Using the Sample Schemas
- B Using Copy to Hadoop With Direct Copy
- C Using mtactl to Manage the MTA extproc
- D Diagnostic Tips and Details
- D.1 Running Diagnostics with bdschecksw
- D.2 How to Do a Quick Test
- D.3 Oracle Big Data SQL Database Objects
- D.4 Other Database-Side Artifacts
- D.5 Hadoop Datanode Artifacts
- D.6 Step-by-Step Process for Querying an External Table
- D.7 Step-by-Step for a Hive Data Dictionary Query
- D.8 Key Administration Tasks for Oracle Big Data SQL
- D.9 Additional Java Diagnostics
- D.10 Checking for Correct Oracle Big Data SQL Patches
- D.11 Debugging SQL*Net Issues
- E Oracle Big Data SQL Software Accessibility Recommendations
- Index