About Oracle Big Data Cloud Service Nodes

Every cluster must have at least three permanent Hadoop nodes (a starter pack) and can have up to 57 additional permanent nodes (60 in total), which can be any combination of permanent Hadoop nodes and edge nodes. In addition, a cluster can have up to 15 cluster compute nodes (480 OCPUs).
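
These limits are simple to check programmatically. The following Python sketch is for illustration only; validate_cluster_plan is a hypothetical helper, not part of any Oracle Big Data Cloud Service API. It checks a planned layout against the node counts described in this topic, including the edge-node prerequisite covered under Permanent Hadoop Nodes below.

    # Illustrative sketch only; not an Oracle API.
    STARTER_PACK = 3          # minimum permanent Hadoop nodes in every cluster
    MAX_ADDITIONAL = 57       # additional permanent nodes (Hadoop or edge)
    MAX_COMPUTE_NODES = 15    # cluster compute ("bursting") nodes
    OCPUS_PER_NODE = 32       # every node type has 32 OCPUs

    def validate_cluster_plan(hadoop_nodes: int, edge_nodes: int,
                              compute_nodes: int) -> None:
        """Raise ValueError if the layout violates the documented limits."""
        if hadoop_nodes < STARTER_PACK:
            raise ValueError("At least three permanent Hadoop nodes are required.")
        if edge_nodes > 0 and hadoop_nodes < 4:
            raise ValueError("Four permanent Hadoop nodes are required "
                             "before edge nodes can be added.")
        if hadoop_nodes + edge_nodes > STARTER_PACK + MAX_ADDITIONAL:
            raise ValueError("Permanent nodes are capped at 60 per cluster.")
        if compute_nodes > MAX_COMPUTE_NODES:
            raise ValueError("Cluster compute nodes are capped at 15 (480 OCPUs).")

    # A five-node production cluster with two edge nodes and four compute nodes:
    validate_cluster_plan(hadoop_nodes=5, edge_nodes=2, compute_nodes=4)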

Permanent Hadoop Nodes

Permanent Hadoop nodes last for the lifetime of the cluster. Each node has:

  • 32 Oracle Compute Units (OCPUs)

  • 248 GB of available RAM

  • 48 TB storage

  • Full use of the Cloudera Enterprise Data Hub Edition software stack, including licenses and support

When planning the number of nodes you want for a cluster, be aware of the following:

  • Three-node clusters are recommended for development only. A production cluster should have five or more nodes, so that if a node fails you can migrate its responsibilities to another node and maintain quorum in the cluster's high availability setup.

  • Services are distributed differently on three-node clusters than they are on clusters of four or more nodes. See Where Do the Services Run on a Three-Node, Development-Only Cluster?

  • You must have at least four permanent Hadoop nodes before you can add edge nodes.

  • Installing Oracle Big Data Discovery on an Oracle Big Data Cloud Service cluster requires at least five nodes.

Edge Nodes

Edge nodes provide an interface between the Hadoop cluster and the outside network. They are commonly used to run client applications and cluster administration tools, keeping those workloads separate from the nodes that run Hadoop services. Edge nodes last for the lifetime of the cluster and have the same characteristics as permanent Hadoop nodes:

  • 32 Oracle Compute Units (OCPUs)

  • 248 GB of available RAM

  • 48 TB storage

  • Full use of the Cloudera Enterprise Data Hub Edition software stack, including licenses and support

When you create or expand a cluster, you can specify how many of the nodes are edge nodes, as long as the first four nodes are permanent Hadoop nodes.

Cluster Compute Nodes

Cluster compute nodes have only OCPUs and memory (no storage), and you can add and remove them at will, a process known as “bursting.” Bursting lets the cluster grow and shrink elastically as processing needs fluctuate.

Clusters can be extended by up to 15 cluster compute nodes. However, when you work with cluster compute nodes in the service console, you identify them by their total number of OCPUs. The following values are supported (see the conversion sketch after this list):

  • 32 OCPUs = 1 node

  • 64 OCPUs = 2 nodes

  • 96 OCPUs = 3 nodes

  • 128 OCPUs = 4 nodes

  • 160 OCPUs = 5 nodes

  • 192 OCPUs = 6 nodes

  • 224 OCPUs = 7 nodes

  • 256 OCPUs = 8 nodes

  • 288 OCPUs = 9 nodes

  • 320 OCPUs = 10 nodes

  • 352 OCPUs = 11 nodes

  • 384 OCPUs = 12 nodes

  • 416 OCPUs = 13 nodes

  • 448 OCPUs = 14 nodes

  • 480 OCPUs = 15 nodes
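
Because every cluster compute node provides 32 OCPUs, the console's OCPU figures map to node counts by simple division. The sketch below illustrates the conversion; compute_nodes_for_ocpus is a hypothetical name for illustration, not part of the service console or any Oracle API.

    OCPUS_PER_COMPUTE_NODE = 32
    MAX_COMPUTE_OCPUS = 480   # 15 nodes x 32 OCPUs each

    def compute_nodes_for_ocpus(ocpus: int) -> int:
        """Convert a console OCPU figure to a compute node count.

        Hypothetical helper for illustration; not an Oracle API.
        """
        if ocpus % OCPUS_PER_COMPUTE_NODE or not 0 < ocpus <= MAX_COMPUTE_OCPUS:
            raise ValueError("OCPUs must be a multiple of 32, from 32 to 480.")
        return ocpus // OCPUS_PER_COMPUTE_NODE

    print(compute_nodes_for_ocpus(96))   # 3 cluster compute nodes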

Because cluster compute nodes don’t include the Hadoop Distributed File System (HDFS), you don’t store data on these nodes. Therefore, when you remove cluster compute nodes from the cluster, there is no impact on any data stored in the cluster.