About the Big Data File System (BDFS)

Not supported on Oracle Cloud Infrastructure.

Oracle Big Data Cloud includes the Oracle Big Data File System (BDFS), an in-memory file system that accelerates access to data stored in multiple locations.

BDFS is compatible with the Hadoop file system and thus can be used with computational technologies such as Hive, MapReduce, and Spark. BDFS (currently based on Alluxio) is designed to accelerate data access for data pipelines and has several features that significantly improve the runtime performance of Spark applications. The focus of BDFS is to accelerate data access to and from Oracle Cloud Infrastructure Object Storage Classic by providing an active caching layer.

Note:

BDFS is available in the Full deployment profile only, not the Basic profile.

Using BDFS

Using BDFS doesn’t require any special integration; you access data through BDFS simply by modifying the URI the application uses to reach the underlying data. Typically, files are accessed in Oracle Cloud Infrastructure Object Storage Classic using a swift:// URL. Oracle Cloud Infrastructure Object Storage Classic can be used for both reading and writing temporal and persistent data. For temporal data, using BDFS yields a significant performance improvement because the data doesn’t need to be transferred outside of the Big Data Cloud cluster. BDFS can also be used to read data stored in Oracle Cloud Infrastructure Object Storage Classic; the data is then cached in BDFS, so subsequent reads perform better.
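For illustration, the following Spark (Scala) sketch, run from a spark-shell session (where sc is the predefined SparkContext), reads persistent input with a swift:// URI and writes an intermediate (temporal) result to BDFS with a bdfs:// URI. The file name events.txt and the tmp/errors path are hypothetical; the bdcsce container and the default BDFS port 19998 match the examples later in this topic.

    // Persistent input read directly from Oracle Cloud Infrastructure Object Storage Classic.
    val events = sc.textFile("swift://bdcsce.default/somepath/events.txt")

    // Intermediate (temporal) result written to BDFS, so the data never has to be
    // transferred outside of the Big Data Cloud cluster.
    events.filter(_.contains("ERROR"))
          .saveAsTextFile("bdfs://localhost:19998/bdcsce/tmp/errors")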

BDFS Topology

The BDFS architecture is composed of master and slave processes. BDFS master processes are responsible for general coordination and bookkeeping, while the slave processes are responsible for the actual caching of data. The amount of memory made available to Alluxio by default is 1 GB per Alluxio worker node in the cluster.

The out-of-the-box configuration can be modified through Apache Ambari by changing the alluxio.worker.memory setting in alluxio-env (Ambari Advanced tab). All workers must be restarted for the change to take effect.

Highly Available

BDFS is made highly available for clusters that have an initial size of at least three nodes. Multiple master components are deployed; one is elected leader, while the others are put in standby mode. If the leader goes down, one of the standby masters is promoted to leader. Leader election is managed by Apache ZooKeeper.

Off-Heap Storage (Cache)

The canonical use case for BDFS is to share in-memory data across different applications, including Spark applications. The benefit of using BDFS for this purpose is that it reduces the memory usage of the Spark process and accelerates data access for downstream Spark applications. It’s common for Spark applications to store data using the RDD cache() or persist() API. This keeps the RDD in the Spark executors, which makes fetching the data efficient because it’s retained in memory. The drawback of this technique is that, depending on how much executor memory the cached data consumes, there may not be enough memory left for successful execution. An alternative is to use BDFS to store the RDD off-heap. This frees up valuable executor memory for processing while off-loading the caching of the data to BDFS.
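As a rough sketch of this trade-off (spark-shell, Scala; the input path and the BDFS output path are hypothetical):

    import org.apache.spark.storage.StorageLevel

    val parsed = sc.textFile("swift://bdcsce.default/somepath/events.txt")
                   .map(_.toUpperCase)

    // Approach 1: keep the RDD in executor memory. Re-reads are fast, but the
    // cached partitions compete with processing for executor memory.
    parsed.persist(StorageLevel.MEMORY_ONLY)
    parsed.count()

    // Approach 2: off-load the data to BDFS instead. Executor memory stays free,
    // and downstream stages or applications re-read the data from the BDFS cache.
    parsed.saveAsObjectFile("bdfs://localhost:19998/bdcsce/parsed")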

Any Spark API used to save RDDs can be used with BDFS. Examples include:

  • rdd.saveAsTextFile(BDFS_FILE_URI) – Suitable for storing the RDD as a text file; each element in the RDD will be written as a separate line

  • rdd.saveAsObjectFile(BDFS_FILE_URI) – Suitable for storing the RDD as a file using Java serialization and deserialization

  • rdd.saveAsSequenceFile(BDFS_FILE_URI) – Suitable for storing key-value pairs as a Hadoop SequenceFile

An example of a BDFS_FILE_URI as used above is: bdfs://localhost:19998/bdcsce/myrdd.txt

Once the RDD is saved to BDFS, it can be accessed by any other Spark application in the job pipeline.
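For example, a downstream application might read the saved data back with the matching load APIs (spark-shell, Scala; the first path is the example URI above, and the second is the hypothetical path from the previous sketch):

    // Read an RDD that was saved with saveAsTextFile.
    val fromText = sc.textFile("bdfs://localhost:19998/bdcsce/myrdd.txt")

    // Read an RDD that was saved with saveAsObjectFile; the type parameter must
    // match the element type that was written.
    val fromObjects = sc.objectFile[String]("bdfs://localhost:19998/bdcsce/parsed")

    fromText.count()
    fromObjects.count()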

Oracle Cloud Infrastructure Object Storage Classic Read Access

BDFS mounts Oracle Cloud Infrastructure Object Storage Classic as a read-only file system, which allows for direct reads from Oracle Cloud Infrastructure Object Storage Classic. This enables files that reside on Oracle Cloud Infrastructure Object Storage Classic to be accessed through BDFS. The benefit of accessing files through BDFS is that the file is cached in BDFS. As such, subsequent reads of the same file are more performant, since the underlying data doesn’t need to be fetched from Oracle Cloud Infrastructure Object Storage Classic for a second time. Instead, the file is read directly from BDFS.

The following are example Swift and BDFS URLs for the same file. In this example, it’s assumed that BDFS is mounted to the bdcsce container in Oracle Cloud Infrastructure Object Storage Classic.

  • Swift URL: swift://bdcsce.default/somepath/file.txt

  • BDFS URL: bdfs://localhost:19998/somepath/file.txt

The host and port in the BDFS URL are required but aren’t used because the cluster is configured for an HA environment. The requirement to provide the host and port will likely be eliminated in a future release.
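Building on the example URLs above, the following spark-shell (Scala) sketch reads the same file both ways; only the URI changes:

    // Direct read from Oracle Cloud Infrastructure Object Storage Classic: the
    // data is fetched from the object store on every read.
    val direct = sc.textFile("swift://bdcsce.default/somepath/file.txt")
    direct.count()

    // Read through BDFS: the first read pulls the file from Object Storage Classic
    // and caches it in BDFS, so later reads of the same file are served from the cache.
    val throughBdfs = sc.textFile("bdfs://localhost:19998/somepath/file.txt")
    throughBdfs.count()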

BDFS Tiered Storage

BDFS Tiered Storage provides the ability to store more objects in the caching layer than can be kept in memory. This is accomplished by evicting objects held in memory to block storage for later retrieval. As memory is exhausted, BDFS automatically moves objects from memory into block storage. This feature allows the caching of objects beyond the capacity of the total memory cache available across the BDFS cluster (the collection of workers). BDFS consumes memory lazily, so if BDFS isn’t used, it incurs little overhead.

You can specify the size of the BDFS cache block storage when you create a cluster. The total amount of cache provided by BDFS is the sum of RAM allocated to BDFS plus the total block storage allocated for spillover. The amount of memory allocated to BDFS is based on the compute shape selected when the cluster was created. The following table summarizes the amount of memory allocated to BDFS.

Compute Shape       Total Memory Available    Total Memory Allocated to BDFS
OC2m                30 GB                     1 GB
All other shapes    Varies by shape           16% of Total Memory Available
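For example, on a hypothetical three-worker OC2m cluster provisioned with 20 GB of spillover block storage per worker, the total BDFS cache would be (3 × 1 GB) + (3 × 20 GB) = 63 GB.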