This topic discusses how BDD fits into the Hadoop environment.
Hadoop is a platform for storing, accessing, and analyzing all kinds of data, both structured and unstructured, including logs and data from the Internet of Things. Hadoop is broadly adopted by IT organizations, and new data sets are added to the Hadoop platform rapidly.
By integrating tightly with Hadoop, Oracle Big Data Discovery provides data discovery for any data, at very large scale, with high query-processing performance.
Big Data Discovery brings discovery to the data where it already resides, natively in Hadoop.
BDD maintains a list of all of a company's data sources found in Hive and registered in HCatalog. When new data arrives, BDD lists it in Studio's Catalog, decorates it with profiling and enrichment metadata, and takes a sample of it when you select the data for further exploration. BDD also lets you explore the source data further through an automatically generated set of visualizations that illustrate the most interesting characteristics of the data. This cuts down the time your team spends identifying useful source data sets and preparing them, and increases the time spent on the analytics that lead to insights and new ideas.
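The flow from Hive registration to Catalog listing is easiest to see from the data-provisioning side. The sketch below is not part of BDD itself; it simply shows, using a hypothetical HDFS path and table name, how raw files landing in HDFS might be registered as a Hive table that a catalog-driven tool such as BDD could then discover, profile, and sample.

```python
# A minimal sketch (not part of BDD): registering raw HDFS files as a Hive
# table so that a catalog-driven tool can discover it through the metastore.
# The path, database, and table name below are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("register-source-table")
    .enableHiveSupport()          # lets Spark create and read tables in the Hive metastore
    .getOrCreate()
)

# Read raw CSV files that have landed in HDFS (hypothetical path and schema).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("hdfs:///data/incoming/orders/")
)

# Save the data as a Hive table; once it appears in the Hive metastore,
# a catalog-driven tool can list, profile, and sample it.
spark.sql("CREATE DATABASE IF NOT EXISTS sales")
orders.write.mode("overwrite").saveAsTable("sales.orders_raw")
```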
Big Data Discovery is deployed directly on a subset of nodes in the pre-existing CDH cluster where you store the data you want to explore, prepare, and analyze.
By analyzing the data in the Hadoop cluster itself, BDD eliminates the cost of moving data around an enterprise's systems, a cost that becomes prohibitive once enterprises deal with hundreds of terabytes of data. Furthermore, BDD's tight integration with HDFS allows data to be profiled, enriched, and indexed in its original file format as soon as it enters the Hadoop cluster. By the time you want to see a data set, BDD has already prepared it for exploration and analysis. BDD also leverages Hadoop's resource management capabilities so that you can run mixed-workload clusters that provide optimal performance and value.
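BDD's profiling and enrichment run inside its own data processing workflows, but as a rough analogy only, the kind of column-level profiling performed on newly arrived data can be sketched directly in Spark. The table and column references below are hypothetical.

```python
# A rough, stand-alone analogy to column-level profiling; BDD performs this
# kind of work internally as data arrives. The table name is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
df = spark.table("sales.orders_raw")

# Basic numeric statistics per column (count, mean, stddev, min, max).
df.describe().show()

# Null counts and distinct-value counts, useful for spotting sparse or
# low-cardinality columns worth enriching or filtering.
profile = df.select(
    *[F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in df.columns],
    *[F.countDistinct(F.col(c)).alias(f"{c}_distinct") for c in df.columns],
)
profile.show(truncate=False)
```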
Finally, direct integration of BDD with the Hadoop ecosystem streamlines the transition between the data preparation done in BDD and the advanced data analysis done in tools such as Oracle R Advanced Analytics for Hadoop (ORAAH) or other third-party tools. BDD lets you export a cleaned, sampled data set as a Hive table, making it immediately available for analysis in ORAAH. BDD can also export data as a file and register it in Hadoop, so that it is ready for future custom analysis.
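ORAAH itself is R-based, but the hand-off point is simply a Hive table, so any Hadoop-aware tool can pick up the exported data set. The following is a hedged sketch of that hand-off, assuming a hypothetical database, table, and column name for the exported data set.

```python
# A minimal sketch of the hand-off: once a cleaned, sampled data set has been
# exported to Hive, any Hadoop-aware tool can read it. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read the exported data set from the Hive metastore.
exported = spark.table("bdd_exports.orders_cleaned_sample")

# Continue with custom analysis, e.g. a simple aggregation by a hypothetical column.
(
    exported.groupBy("region")
    .count()
    .orderBy("count", ascending=False)
    .show()
)
```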
Big Data Discovery works with very large amounts of data that may already be stored in HDFS. A Hadoop distribution is a prerequisite for the product and is critical to the functionality the product provides. Cloudera CDH is the world's most complete, tested, and popular distribution of Apache Hadoop and related projects. CDH is 100% Apache-licensed open source and is the only Hadoop solution to offer unified batch processing, interactive SQL and interactive search, and role-based access controls.
CDH delivers the core elements of Hadoop (scalable storage and distributed computing), along with additional components such as a user interface, plus necessary enterprise capabilities such as security. In particular, BDD uses the HDFS, Hive, Oozie, and Spark components, which are all packaged within the CDH distribution along with easy-to-use Web user interfaces.