You can effectively collect and analyze event data and streaming data from internet of things (IoT) and social media sources, but how do you correlate it with the broad range of enterprise data resources to leverage your investment and gain the insights you want?
Combine the abilities of a data lake and a data warehouse to process streaming data and a broad range of enterprise data resources and leverage the data for business analysis and machine learning.
This reference architecture positions the technology solution within the overall business context:
A data lake enables an enterprise to store all of its data in a cost effective, elastic environment while providing the necessary processing, persistence, and analytic services to discover new business insights. A data lake stores and curates structured and unstructured data and provides methods for organizing large volumes of highly diverse data from multiple sources.
With a data warehouse, you perform data transformation and cleansing before you commit the data to the warehouse. With a data lake, you ingest data quickly and prepare it on the fly as people access it. A data lake supports operational reporting and business monitoring that require immediate access to data and flexible analysis to understand what is happening in the business while it is happening.
At a conceptual level, the technology solution addresses the problem as follows:
This architecture combines the abilities of a data lake and a data warehouse to process streaming data and other types of data from a broad range of enterprise data resources. Use this architecture to leverage the data for business analysis and machine learning.
The following diagram illustrates this reference architecture.
The architecture focuses on the following logical divisions:
- Data refinery
Ingests and refines the data for use in each of the data layers in the architecture. The shape is intended to illustrate the differences in processing costs for storing and refining data at each level and for moving data between them.
- Data persistence platform (curated information layer)
Facilitates access and navigation of the data to show the current business view. For relational technologies, data may be logically or physically structured in simple relational, longitudinal, dimensional, or OLAP forms. For non-relational data, this layer contains one or more pools of data, either output from an analytical process or data optimized for a specific analytical task.
- Access and interpretation
Abstracts the logical business view of the data for the consumers. This abstraction facilitates agile approaches to development, migration to the target architecture, and the provision of a single reporting layer from multiple federated sources.
The architecture has the following components:
- Data integration
Oracle Data Integrator provides comprehensive data integration, from high-volume, high-performance batch loads, to event-driven, trickle-feed integration processes, to SOA-enabled data services. A declarative design approach means faster and simpler development and maintenance, and a unique approach to extract-load-transform (E-LT) guarantees the highest possible performance for data transformation and validation processes.
Oracle Cloud Infrastructure Data Integration is a fully managed, cloud-native, serverless, extract, transform, and load (ETL) tool for data lake and data mart use-cases on Oracle Cloud Infrastructure. Users design data integration processes using an intuitive, codeless user interface that optimizes integration flows to generate the most efficient engine and orchestration, automatically allocating and scaling the execution environment. Oracle Cloud Infrastructure Data Integration provides interactive exploration and data preparation and helps data engineers protect against schema drift by defining rules to handle schema changes.
Oracle Cloud Infrastructure Streaming service provides a fully managed, scalable, and durable storage solution for ingesting continuous, high-volume streams of data that you can consume and process in real time. Streaming can be used for messaging, high-volume application logs, operational telemetry, web click-stream data, or other publish-subscribe messaging model use cases in which data is produced and processed continually and sequentially.
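As a sketch of the ingestion path, the following builds a PutMessages-style request body for the Streaming service, whose API expects each message's key and value as base64-encoded strings. This only constructs the payload locally; it does not call the service, and the sensor event shape is an illustrative assumption.

```python
import base64
import json

def build_put_messages_payload(events):
    """Build a PutMessages-style payload for OCI Streaming.

    The Streaming API expects each message's key and value to be
    base64-encoded strings; this helper only assembles the request
    body locally, without calling the service.
    """
    messages = []
    for key, event in events:
        messages.append({
            "key": base64.b64encode(key.encode("utf-8")).decode("ascii"),
            "value": base64.b64encode(
                json.dumps(event).encode("utf-8")
            ).decode("ascii"),
        })
    return {"messages": messages}

# Two hypothetical IoT sensor readings destined for one stream
payload = build_put_messages_payload([
    ("sensor-1", {"temp_c": 21.4, "ts": "2024-01-01T00:00:00Z"}),
    ("sensor-2", {"temp_c": 19.8, "ts": "2024-01-01T00:00:05Z"}),
])
```

In practice you would pass such a payload to the OCI SDK's Streaming client; keying by device ID, as here, keeps each device's events ordered within a partition.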
Kafka Connect is a scalable and reliable tool for streaming data between Apache Kafka and other systems. Kafka Connect is an open source framework for connecting Kafka and services such as Oracle Cloud Infrastructure Streaming service with external sources.
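A Kafka Connect connector is configured declaratively and submitted as JSON to the Connect REST API. The sketch below assembles such a configuration in Python; the connector class shown is the Confluent JDBC source connector, and the connection URL, column, and topic names are placeholders, not values from this architecture.

```python
import json

# Illustrative Kafka Connect source-connector configuration.
# The connection URL, table column, and topic prefix are placeholders.
connector_config = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
        # Poll for new rows using a monotonically increasing key column
        "mode": "incrementing",
        "incrementing.column.name": "ORDER_ID",
        "topic.prefix": "orders-",
    },
}

# Kafka Connect accepts this document as the JSON body of a POST
# to the /connectors endpoint of its REST API.
request_body = json.dumps(connector_config, indent=2)
```

The same pattern applies when the target cluster is the Streaming service's Kafka-compatible endpoint rather than a self-managed Kafka deployment.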
GoldenGate Stream Analytics processes and analyzes large-scale, real-time information by using sophisticated correlation patterns, enrichment, and machine learning. Users can explore real-time data through live charts, maps, and visualizations, and can graphically build streaming pipelines without any hand coding. These pipelines execute in a scalable and highly available clustered big data environment, using Spark integrated with Oracle's continuous query engine to address critical real-time use cases of modern enterprises.
Oracle Cloud Infrastructure Data Flow is a fully managed big data service that lets you run Apache Spark applications with no infrastructure to deploy or manage. It lets you deliver big data and AI applications faster because you can focus on your applications without getting distracted by operations. Data Flow applications are reusable templates consisting of a Spark application, its dependencies, default parameters, and a default run-time resource specification.
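To make the "reusable template" idea concrete, the sketch below models a Data Flow application as a dictionary of defaults plus per-run parameter overrides. The field names are modeled on the Data Flow API (driver/executor shapes, executor count, parameters); treat the values, bucket path, and shape names as illustrative assumptions rather than an exact SDK call.

```python
# Sketch of a Data Flow application template: a Spark application
# plus default parameters and a run-time resource specification.
# All concrete values here are placeholders.
application = {
    "display_name": "daily-clickstream-aggregation",
    "language": "PYTHON",
    "file_uri": "oci://bucket@namespace/apps/aggregate_clicks.py",
    "driver_shape": "VM.Standard2.1",
    "executor_shape": "VM.Standard2.1",
    "num_executors": 4,
    "arguments": ["--date", "${date}"],
    "parameters": [{"name": "date", "value": "2024-01-01"}],
}

def run_parameters(app, overrides=None):
    """Merge an application's default parameters with per-run overrides,
    mirroring how a run launched from a template resolves its inputs."""
    params = {p["name"]: p["value"] for p in app.get("parameters", [])}
    params.update(overrides or {})
    return params
```

A run launched without overrides uses the template defaults; a nightly scheduler would override only the `date` parameter while reusing the same Spark code and resource specification.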
- Autonomous Data Warehouse
Oracle Autonomous Data Warehouse is a fully managed, preconfigured database environment. You do not need to configure or manage any hardware, or install any software. After provisioning, you can scale the number of CPU cores or the storage capacity of the database at any time without impacting availability or performance.
Oracle Cloud Infrastructure Object Storage is an internet-scale, high-performance storage platform that offers reliable and cost-efficient data durability. Oracle Cloud Infrastructure Object Storage can store an unlimited amount of unstructured data of any content type, including analytic data. You can safely and securely store or retrieve data directly from the internet or from within the cloud platform. Multiple management interfaces let you easily start small and scale seamlessly, without experiencing any degradation in performance or service reliability.
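Large analytic files are typically moved into Object Storage as multipart uploads: the object is split into numbered parts that can be uploaded in parallel and retried independently. The following is a minimal local sketch of that splitting step only, not the SDK's multipart API; the part size is an assumption.

```python
def split_into_parts(data: bytes, part_size: int):
    """Split a byte payload into fixed-size parts, as a multipart
    upload would, returning (part_number, chunk) pairs. Part numbers
    are 1-based, mirroring Object Storage's part numbering."""
    return [
        (i // part_size + 1, data[i : i + part_size])
        for i in range(0, len(data), part_size)
    ]

# A 10 MiB + 1 byte payload with an (assumed) 4 MiB part size
# yields three parts; the last part carries the remainder.
parts = split_into_parts(b"x" * (10 * 1024 * 1024 + 1), 4 * 1024 * 1024)
```

Independent parts are what let an interrupted upload resume from the last completed part instead of restarting from zero.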
Oracle Analytics Cloud is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for you, your workgroup, and your enterprise.
With Oracle Analytics Cloud you also get flexible service management capabilities, including fast setup, easy scaling and patching, and automated lifecycle management.
Oracle Machine Learning provides powerful new machine learning capabilities tightly integrated in Oracle Autonomous Data Warehouse, with new support for Python scripting. Upcoming integration with Oracle Cloud Infrastructure Data Science will enable data scientists to develop models using both open source and scalable in-database algorithms, reducing data preparation and movement times.
Data Science provides infrastructure, open source technologies, libraries, and packages, and data science tools for data science teams to build, train, and manage machine learning (ML) models in Oracle Cloud Infrastructure. The collaborative and project-driven workspace provides an end-to-end cohesive user experience and supports the lifecycle of predictive models.
GoldenGate Stream Analytics runs as a set of native Spark pipelines and provides custom operational dashboards with real-time monitoring and analysis of event streams in an Apache Spark based system. You can identify events of interest, execute queries against those event streams in real time, and drive operational dashboards or raise alerts based on that analysis.
Use the following recommendations as a starting point to process streaming data and a broad range of enterprise data resources for business analysis and machine learning.
Your requirements might differ from the architecture described here.
- Oracle Autonomous Data Warehouse
This architecture uses Oracle Autonomous Data Warehouse on shared infrastructure. Enable auto scaling to give the database workloads up to three times the processing power.
Consider using Oracle Autonomous Data Warehouse on dedicated infrastructure if you want the self-service database capability within a private database cloud environment running on the public cloud.
- Oracle Cloud Infrastructure Data Transfer service
Use Oracle Cloud Infrastructure Data Transfer service when uploading data over the public internet is not feasible. Consider using Data Transfer whenever uploading the data over the public internet would take longer than 1-2 weeks.
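The 1-2 week threshold is easy to check with a back-of-envelope estimate. The helper below converts a dataset size and nominal link speed into an upload time in days; the 80% link-efficiency factor is an assumption, so adjust it to your measured sustained throughput.

```python
def upload_days(dataset_tb: float, mbps: float, efficiency: float = 0.8) -> float:
    """Estimate days needed to upload a dataset over the public internet.

    dataset_tb -- dataset size in terabytes (decimal, 10**12 bytes)
    mbps       -- nominal link speed in megabits per second
    efficiency -- usable fraction of the link (assumed default: 0.8)
    """
    bits = dataset_tb * 10**12 * 8
    seconds = bits / (mbps * 10**6 * efficiency)
    return seconds / 86400

# e.g. 100 TB over a 100 Mbps link at 80% efficiency: roughly 116 days,
# far past the 1-2 week threshold, so Data Transfer is the better fit.
days = upload_days(100, 100)
```

The same arithmetic run for a small dataset on a fast link (say 1 TB at 1 Gbps) comes out well under a day, in which case uploading over the internet is fine.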
When processing streaming data and a broad range of enterprise data resources for business analysis and machine learning, consider these implementation options.
|Guidance|Data Refinery|Data Persistence Platform|Access & Interpretation|
|---|---|---|---|
|Other Options|Oracle Cloud Infrastructure Data Integration. Oracle Data Integrator, which provides two specialized knowledge modules (KMs) for transforming PeopleSoft data structures in the form of datastores. Oracle Cloud Infrastructure Streaming service, a fully managed service. Apache Kafka VM image for enterprise Kafka.|Oracle Cloud Infrastructure Object Storage, which stores unlimited data in raw format. Oracle Autonomous Data Warehouse, an easy-to-use, fully autonomous database that scales elastically, delivers fast query performance, and requires no database administration. It also offers direct access to the data in object storage through external tables.|Oracle Analytics Cloud, which is fully managed and tightly integrated with the curated data layer (Oracle Autonomous Data Warehouse). Data Science, a fully managed, self-service platform that provides infrastructure and data science tools for teams to build, train, and manage machine learning (ML) models in Oracle Cloud Infrastructure.|
The Terraform code for this reference architecture is available on GitHub.
- Go to GitHub.
- Follow the instructions in the
Learn more about the features of this architecture.