External Sources
Oracle AI Data Platform Workbench supports ingestion of data from a wide range of external sources using Spark-based notebook connectors. These connectors let users ingest and process data from external systems in a flexible, code-driven manner.
AI Data Platform Workbench provides sample code templates in the Oracle AI Data Platform Workbench Samples Git repository to support ingesting data from several external systems using Spark in notebooks. These templates are pre-built and customizable, allowing users to quickly connect, read, and write data from various commonly used systems.
Table 14-3 External Ingestion Sources
| Source | Access Type | Integration Method | Description | External Catalog Support | Sample Code Available |
|---|---|---|---|---|---|
| MySQL | Read/Write | JDBC via Spark Notebook | Ingest and export data between AI Data Platform Workbench and MySQL databases using JDBC connectors. | No | Yes |
| PostgreSQL | Read/Write | JDBC via Spark Notebook | Supports bidirectional data movement with PostgreSQL via JDBC. | No | Yes |
| MS SQL Server | Read/Write | JDBC via Spark Notebook | Connect and transfer data from Microsoft SQL Server using Spark and JDBC. | No | Yes |
| Kafka | Read | Kafka Consumer in Spark Notebook | Stream ingestion from Kafka topics. | No | Yes |
| Hive | Read/Write | JDBC via Spark Notebook | Ingest and export data between AI Data Platform Workbench and Hive databases using JDBC connectors. | No | Yes |
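As a minimal sketch of the JDBC-based pattern the table describes, the snippet below assembles the connection options that Spark's JDBC reader expects. The hostname, database, table, and credentials are placeholders, not values from the Workbench samples; the actual templates in the Oracle AI Data Platform Workbench Samples Git repository may structure this differently.

```python
# Sketch: ingesting a MySQL table over JDBC in a Spark notebook.
# All connection values below are illustrative placeholders.

def mysql_jdbc_url(host: str, port: int = 3306, database: str = "") -> str:
    """Build a MySQL JDBC URL in the format Spark's JDBC data source expects."""
    return f"jdbc:mysql://{host}:{port}/{database}"

jdbc_options = {
    "url": mysql_jdbc_url("db.example.com", 3306, "sales"),
    "dbtable": "orders",                      # source table (or a subquery alias)
    "user": "workbench_user",                 # placeholder credentials
    "password": "********",
    "driver": "com.mysql.cj.jdbc.Driver",     # MySQL Connector/J driver class
}

# Inside a Workbench Spark notebook, the session would typically be used like:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
# df.write.format("jdbc") \
#     .options(**{**jdbc_options, "dbtable": "orders_export"}) \
#     .mode("append").save()
```

The same pattern applies to the PostgreSQL, SQL Server, and Hive rows above, swapping the JDBC URL prefix and driver class for the target system.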
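For the Kafka row, streaming ingestion follows a different shape: Spark's Kafka source consumes topics continuously rather than reading a table. The broker addresses and topic name below are hypothetical, and the commented calls assume the standard Spark Structured Streaming Kafka integration rather than any Workbench-specific wrapper.

```python
# Sketch: stream ingestion from a Kafka topic in a Spark notebook.
# Broker addresses and topic name are illustrative placeholders.

kafka_options = {
    "kafka.bootstrap.servers": "broker1:9092,broker2:9092",
    "subscribe": "events",               # topic(s) to consume
    "startingOffsets": "earliest",       # replay the topic from the beginning
}

# In a Workbench Spark notebook, a streaming read would look like:
# stream = spark.readStream.format("kafka").options(**kafka_options).load()
#
# Kafka delivers key and value as binary columns; cast before parsing:
# events = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```

Because the table lists Kafka as read-only, only the consumer side is sketched here.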