About Big Data Cloud

Big Data Cloud leverages Oracle’s Infrastructure Cloud Services to deliver a secure, elastic, integrated platform for all Big Data workloads. You can:

  • Spin up multiple Hadoop or Spark clusters in minutes

  • Use built-in tools such as Apache Zeppelin to understand your data, or use the jobs API to run non-interactive jobs

  • Use open interfaces to integrate third-party tools to analyze your data

  • Launch multiple clusters against a centralized data lake to achieve data sharing without compromising on job isolation

  • Create small clusters or extremely large ones based on workload and use case

  • Elastically scale the compute and storage tiers independently of one another, either manually or automatically

  • Pause a cluster when not in use

  • Monitor, manage, and access the service through REST APIs
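To illustrate the REST bullet above, the sketch below builds the URL for a hypothetical cluster-status endpoint. The path, domain, and names used here are assumptions for illustration only, not the documented Big Data Cloud API; consult the service's REST API reference for the actual endpoints and authentication scheme.

```python
import urllib.parse

def cluster_status_url(base_url, identity_domain, cluster_name):
    """Build a URL for a hypothetical cluster-status REST endpoint.

    The path below is illustrative only -- check the Big Data Cloud
    REST API reference for the real endpoint structure.
    """
    path = (f"/paas/api/v1.1/instancemgmt/{identity_domain}"
            f"/services/BDCSCE/instances/{cluster_name}")
    return urllib.parse.urljoin(base_url, path)

# Hypothetical host and names; the resulting URL could be fetched
# with any HTTP client (urllib.request, curl, etc.) plus credentials.
url = cluster_status_url("https://psm.example.oraclecloud.com",
                         "mydomain", "mycluster")
print(url)
```

A GET against such an endpoint would typically return a JSON document describing the cluster's state, which a monitoring script could poll.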

For information about the open source components used in Big Data Cloud, see Cluster Components.