2.2 Using Groups to Deploy Services

Oracle OpenStack for Oracle Linux uses groups to associate the target nodes with OpenStack services. Target nodes in the same group run the same OpenStack services. The default groups are:

  • control: Contains the control-related services, such as glance, keystone, ndbcluster, nova, and rabbitmq.

  • compute: Contains the hypervisor part of the compute services, such as nova-compute.

  • database: Contains the data part of the database services.

  • network: Contains the shared network services, such as neutron-server, neutron-agents, and neutron-plugins.

  • storage: Contains the storage part of the storage services, such as cinder and swift.

A node can belong to more than one group and can run multiple OpenStack services.

The minimum supported deployment of OpenStack contains at least three nodes (as shown in Figure 2.1):

  • Two controller nodes, each belonging to the control, database, network, and storage groups.

  • One or more nodes belonging to the compute group.
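This minimum layout can be sketched as a node-to-groups mapping. The host names below are hypothetical and the dictionary is purely illustrative (it is not a configuration format used by the product); it shows how each controller node belongs to four groups while the compute node belongs to one:

```python
# Hypothetical host names. Each controller node belongs to four
# groups, illustrating that a node can run services from more than
# one group at the same time.
MINIMUM_DEPLOYMENT = {
    "controller1": {"control", "database", "network", "storage"},
    "controller2": {"control", "database", "network", "storage"},
    "compute1": {"compute"},
}

# Invert the mapping to see which nodes serve each group.
nodes_per_group = {}
for node, groups in MINIMUM_DEPLOYMENT.items():
    for group in groups:
        nodes_per_group.setdefault(group, []).append(node)
```

Inverting the mapping makes the layout of Figure 2.1 explicit: every group except compute runs on both controller nodes.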

Note

Single-node deployments (sometimes referred to as all-in-one deployments) are not supported.

Figure 2.1 Minimum Supported Deployment

The diagram shows three boxes: two labeled Controller Node and one labeled Compute Node. Each Controller Node box contains four boxes, one each for the control, database, storage, and network groups. The Compute Node box contains one box for the compute group.

As your scaling and performance requirements change, you can increase the number of nodes and move groups onto separate nodes to spread the workload, as shown in Figure 2.2:

Figure 2.2 Expanding Deployment

The diagram shows five boxes: two labeled Controller Node, two labeled Network Node, and one labeled Compute Nodes. Each Controller Node box contains three boxes, one each for the control, database, and storage groups. Each Network Node box contains one box for the network group. An arrow from the Controller Nodes to the Network Nodes shows that the network group has moved to the Network Nodes. The Compute Nodes box contains one box for the compute group, and the box has several drop-shadows to indicate that there are now multiple compute nodes.

As your deployment expands, note the following "rules" for deployment:

  • The nodes in the compute group must not be assigned to the control group.

  • The control group must contain at least two nodes.

  • The number of nodes in the database group must always be a multiple of two.

  • Each group must contain at least two nodes to enable high availability.
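These rules are simple enough to express as checks. The following is a minimal sketch; the function and mapping names are illustrative and are not part of the product's tooling:

```python
def check_deployment(node_groups):
    """Validate a {node: set-of-groups} mapping against the
    deployment rules listed above. Returns a list of violations."""
    problems = []
    # Rule 1: nodes in the compute group must not be in the control group.
    for node, groups in node_groups.items():
        if "compute" in groups and "control" in groups:
            problems.append(f"{node}: compute node assigned to control group")
    # Count the nodes in each group.
    counts = {}
    for groups in node_groups.values():
        for group in groups:
            counts[group] = counts.get(group, 0) + 1
    # Rule 2: the control group must contain at least two nodes.
    if counts.get("control", 0) < 2:
        problems.append("control group has fewer than two nodes")
    # Rule 3: the database group size must be a multiple of two.
    if counts.get("database", 0) % 2 != 0:
        problems.append("database group size is not a multiple of two")
    # Rule 4: each group needs at least two nodes for high availability.
    for group, count in counts.items():
        if count < 2:
            problems.append(f"{group} group has only {count} node(s)")
    return problems
```

Note that the minimum deployment of Figure 2.1, with a single compute node, trips only the high-availability check for the compute group; this matches the wording of the rule, which ties two nodes per group to high availability rather than to basic operation.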

There is no limit on the number of nodes in a deployment. Figure 2.3 shows a fully-expanded deployment using the default groups. There are at least two nodes for each group to maintain high availability, and the number of database nodes is a multiple of two.

Figure 2.3 Fully-Expanded Deployment

The diagram shows six boxes: two labeled Compute Nodes, and one each labeled Controller Nodes, Database Nodes, Network Nodes, and Storage Nodes. Each box contains a box for the compute, control, database, network, or storage group, according to the node type. Each node box has several drop-shadows to indicate that there are now multiple nodes of each node type. There are two Compute Nodes boxes to indicate the increased scale of the compute node deployment.

You are not restricted to using the default groups. You can change the services a group runs, or configure your own groups. If you configure your own groups, make sure you follow the deployment rules listed above.