OpenStack is a cloud computing software platform that controls large pools of compute, storage, and networking resources in a data center. With OpenStack, you can manage different types of hypervisors, network devices and services, storage components, and more, using a single API that creates a unified data center fabric. OpenStack is therefore a pluggable framework: vendors can write plug-ins that implement a solution using their own technology, and users can integrate their technology of choice.
OpenStack is built as a set of distributed services. These services communicate with each other and are responsible for the various functions expected from virtualization/cloud management software. The following are some of the key OpenStack services (a short example of accessing them through a single API follows the list):
Nova: A compute service responsible for creating virtual machine instances and managing their life cycle, as well as managing the hypervisor of choice. Hypervisors are pluggable into Nova, and the Nova API remains the same regardless of the underlying hypervisor.
Neutron: A network service responsible for creating network connectivity and network services. It is capable of connecting with vendor network hardware via plug-ins. Neutron comes with a set of default services implemented by common tools. Network vendors can create plug-ins to replace any one of the services with their own implementation, adding value to their users.
Cinder: A block storage service responsible for creating and managing external storage, including block devices and NFS. It is capable of connecting to vendor storage hardware through plug-ins. Cinder has several generic plug-ins, which can connect to NFS and iSCSI, for example. Vendors add value by creating dedicated plug-ins for their storage devices.
Swift: An object and Binary Large Object (BLOB) storage service responsible for managing object-based storage.
Keystone: An identity management system responsible for user and service authentication. Keystone is capable of integrating with third-party directory services such as LDAP.
Glance: An image service responsible for managing images uploaded by users. Glance is not a storage service, but it is responsible for saving image attributes, making a virtual catalog of the images.
Heat: An orchestration service responsible for managing the life cycle of the OpenStack infrastructure (such as servers, floating IP addresses, volumes, security groups, and so on) and applications. Heat uses Heat Orchestration Templates (HOT) to describe the infrastructure for an application, and also provides an API compatible with the AWS CloudFormation template format.
Horizon: A dashboard that creates a GUI for users to control the OpenStack deployment. This is an extensible framework to which vendors can add features. Horizon uses the same APIs exposed to users.
Murano: An application catalog service for publishing cloud-ready applications from a catalog. An agent is installed into an instance's operating system, which enables deployment of the applications directly into the guest. Murano also includes a plug-in to the Horizon dashboard.
Ceilometer: A telemetry service that collects, normalizes and transforms data produced by OpenStack services for various telemetry use cases, such as customer billing, resource tracking, metering, and alarming.
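As a simple illustration of this single API, the following sketch uses the openstacksdk Python library to query Nova, Neutron, Cinder, and Glance through one connection. The cloud name mycloud is an assumed entry in a clouds.yaml file and is not part of any particular deployment.

    # Minimal sketch: one connection, several OpenStack services.
    # Assumes a clouds.yaml entry named "mycloud" with valid credentials.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Nova: list compute instances
    for server in conn.compute.servers():
        print("instance:", server.name, server.status)

    # Neutron: list networks
    for network in conn.network.networks():
        print("network:", network.name)

    # Cinder: list block storage volumes
    for volume in conn.block_storage.volumes():
        print("volume:", volume.name, volume.size, "GB")

    # Glance: list images in the virtual catalog
    for image in conn.image.images():
        print("image:", image.name)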
More details are available in the OpenStack Cloud Administrator Guide at:
http://docs.openstack.org/admin-guide-cloud/common/get_started_openstack_services.html
OpenStack has many more services that are responsible for various features and capabilities, and the full list can be found on the OpenStack website.
The list provided here is limited to those needed to get started with Oracle OpenStack for Oracle Linux.
There are a number of node types used in OpenStack. A node is a physical host computer with an operating system installed, running either Oracle Linux with KVM (Kernel-based Virtual Machine) or Oracle VM Server. The main node types discussed in this guide are:
A controller node is a system running Oracle Linux, and is where most of the OpenStack services are installed. The term controller node describes nodes that do not run virtual machine instances. A controller node may have all of the non-compute services or only some of them. A controller node may also include the Oracle OpenStack for Oracle Linux CLI (kollacli), which is used to deploy OpenStack services to other nodes.
A compute node is a system running Oracle Linux using KVM, or Oracle VM Server. A compute node runs the bare minimum of services to manage virtual machine instances.
A database node is a system running Oracle Linux and the services required to manage the databases for images and instances.
A network node is a system running Oracle Linux and the Neutron network worker daemon. The Neutron worker daemon provides network services such as assigning an IP address to a booting Nova instance.
A storage node is a system running Oracle Linux and the services required to manage storage for images and instances.
Some storage is not directly managed by the OpenStack services, but is instead managed by the storage appliance. On the storage node, Cinder communicates with the storage appliance's API, and it is the storage appliance that performs the storage management. For example, when using the Oracle ZFS Storage Appliance, the Cinder driver on the storage node communicates with the appliance through its NFS driver, and it is the appliance that performs the storage management.
A master node is a system running Oracle Linux and kollacli, used to deploy the OpenStack services to the nodes. A master node is not an OpenStack node, although kollacli may be installed on a controller node.
More detailed information on the node types is available in the OpenStack Operations Guide at:
http://docs.openstack.org/openstack-ops/content/example_architecture.html#node_types
OpenStack virtual machines are called instances, mostly because they are instances of an image: they are created upon request and configured when launched. The main difference between OpenStack and traditional virtualization technology is the way state is stored. With traditional virtualization technology, the state of the virtual machine is persistent.
OpenStack can support both persistent and ephemeral models. In the ephemeral model, an instance is launched from an image in the Image service, the image is copied to the run area, and when the copy is completed, the instance starts running. The size and connectivity of the instance are defined at the time of launching the instance. When an instance is terminated, the original image remains intact, but the state of the terminated instance is not retained. This ephemeral model is useful for scaling out quickly and maintaining agility for users.
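As a rough sketch of the ephemeral model, the following Python fragment launches an instance from a Glance image using openstacksdk. The image, flavor, and network names, and the clouds.yaml entry mycloud, are assumptions; substitute values that exist in your deployment.

    # Ephemeral model: boot from an image; size (flavor) and connectivity
    # (network) are fixed at launch time. Names below are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("oracle-linux-7")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("tenant-net")

    server = conn.compute.create_server(
        name="ephemeral-demo",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)

    # Terminating the instance discards its state; the original image is untouched.
    # conn.compute.delete_server(server)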
In the persistent model, the instance is launched from a persistent volume on a compute node, or from a block storage volume, and not from the Image service. A volume can be any kind of persistent storage, including a file, a block device, an LVM partition, or any other form of persistent storage. In this case, when the instance is terminated, any session changes are retained and are present the next time an instance is launched. In the persistent model, the size and connectivity of the instance are also defined at the time the instance launches. In some sense, the persistent model in OpenStack is similar to the traditional approach to virtualization.
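A comparable sketch of the persistent model boots an instance from an existing bootable Cinder volume rather than from an image. The volume, flavor, and network names are assumptions, and the block device entry follows the standard Nova block_device_mapping_v2 format.

    # Persistent model: boot from a volume, so the root disk outlives the instance.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("tenant-net")
    volume = conn.block_storage.find_volume("bootable-vol")  # pre-created bootable volume

    server = conn.compute.create_server(
        name="persistent-demo",
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        block_device_mapping=[{
            "boot_index": "0",
            "uuid": volume.id,
            "source_type": "volume",
            "destination_type": "volume",
            "delete_on_termination": False,  # keep the volume when the instance is terminated
        }],
    )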
The storage used in OpenStack can be either ephemeral or persistent. Ephemeral storage is deleted when an instance is terminated, while persistent storage remains intact. Persistent storage in OpenStack is referred to as a volume, regardless of the technology and device it is backed by. Persistent storage can either be used to launch an instance, or it can be connected to an instance as a secondary storage device to retain state. An example of this is a database launched as an ephemeral instance, with a volume connected to it to save the data. When the instance is terminated, the volume retains the data and can be connected to another instance as needed.
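The database example above can be sketched with openstacksdk's higher-level cloud layer: a volume is created through Cinder and attached to a running instance as secondary storage, so the data outlives the instance. The instance name, volume name, and size are assumptions.

    # Create a volume and attach it to an existing (possibly ephemeral) instance.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    server = conn.get_server("db-instance")
    volume = conn.create_volume(size=20, name="db-data")  # size in GB

    # When the instance is terminated, the volume and its data remain and can
    # be attached to another instance.
    conn.attach_volume(server, volume)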
The OpenStack Cinder service is responsible for managing the volumes, and it offers a framework for vendors to create drivers. If a storage vendor wants to support OpenStack deployment and allow users to create volumes on the device, the vendor must create a Cinder driver that lets users control the storage device through the standard Cinder calls.
OpenStack also supports object storage using the Swift service.
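As a brief illustration, the following sketch stores an object in Swift through openstacksdk. The container and object names are assumptions.

    # Object storage: create a container and upload an object to it.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    conn.object_store.create_container(name="backups")
    conn.object_store.upload_object(
        container="backups",
        name="db-dump.sql",
        data=b"-- contents of the backup --",
    )

    # List the objects in the container.
    for obj in conn.object_store.objects("backups"):
        print(obj.name)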
The OpenStack networking service, Neutron, offers a complete software-defined networking (SDN) solution, along with various network services. The network services Neutron can support include routing, firewall, DNS, DHCP, load balancing, VPN, and more.
Neutron, like Cinder, offers a framework for vendors to write plug-ins for various services. For example, a network vendor might want to offer a custom load balancer instead of the default load balancer provided by Neutron. The plug-in framework offers a powerful tool to build sophisticated network topologies using standard APIs.
Network Isolation: Tenant Networks
Tenant networks are the basis for Neutron's SDN capability. Neutron has full control of layer-2 isolation, and this automatic management of layer-2 isolation is completely hidden from the user, providing the abstraction layer required by SDN.
To perform the layer-2 separation, Neutron supports three layer-2 isolation mechanisms: VLANs, VxLANs, and GRE (Generic Routing Encapsulation) tunnels. You must define which mechanism should be used and set up the physical topology as required; Neutron is then responsible for allocating the resources as needed. For example, when using VLANs, you would configure the VLAN switch, allocate a VLAN range, and configure that range in Neutron. When you define a new network, Neutron automatically allocates a VLAN and takes care of the isolation. You do not have to manage VLANs, and do not need to be aware of which VLAN was assigned to the network.
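A sketch of this abstraction using openstacksdk: the user supplies only a network name and a subnet definition, and Neutron allocates the underlying VLAN, VxLAN, or GRE segment on its own. The names and the CIDR are assumptions.

    # Define a tenant network and subnet; no VLAN ID appears anywhere here,
    # because layer-2 isolation is allocated and managed by Neutron.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    network = conn.network.create_network(name="app-net")
    subnet = conn.network.create_subnet(
        network_id=network.id,
        name="app-subnet",
        ip_version=4,
        cidr="192.168.10.0/24",
        enable_dhcp=True,
    )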
Complete Software-Defined Network Solution
OpenStack, using Neutron, presents a complete SDN solution. You can define isolated networks with any address space, and connect between those networks using virtual routers. You can define firewall rules without the need to touch or change any element of the physical network topology. Furthermore, there is a complete abstraction between the physical topology and the virtual networks, so that multiple virtual networks can share the same physical resources, without any security or address space concerns.
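As a sketch of connecting isolated networks with a virtual router, without touching the physical topology, the following fragment assumes two existing subnets (for example, created as shown earlier); their names are assumptions.

    # Connect two tenant subnets through a virtual router.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    app_subnet = conn.network.find_subnet("app-subnet")
    db_subnet = conn.network.find_subnet("db-subnet")

    router = conn.network.create_router(name="demo-router")
    conn.network.add_interface_to_router(router, subnet_id=app_subnet.id)
    conn.network.add_interface_to_router(router, subnet_id=db_subnet.id)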
Allowing multiple users to share the same physical environment while ensuring complete separation between them is a key feature of OpenStack. OpenStack is designed so that multiple tenants can share physical resources in a way that is transparent to the users. OpenStack offers ways to share virtual resources between tenants, but maintains complete separation where needed.

