In a single-node configuration intended for evaluating OpenStack, one network interface card is sufficient. However, a single network interface in a multinode configuration cannot provide the bandwidth that a cloud's heavy network traffic demands. If you use a single network interface in an enterprise OpenStack setting, performance quickly becomes a serious issue.
Different types of network traffic traverse the cloud infrastructure. You should have separate networks or subnets to host each type of traffic, for example:
Guest or tenant network – Hosts traffic among the virtual machines (VMs) in the OpenStack cloud.
Storage network – Hosts traffic between the VMs and their application datasets that are on external storage systems.
Management or API network – Hosts traffic among the OpenStack components that manage the entire operation of the cloud infrastructure, including administrator-generated traffic.
External network – Hosts traffic between virtual entities in the OpenStack cloud, such as the VMs and their private networks, and the wider network, which includes both the corporate network and the Internet.
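As a sketch of how these traffic types might map to subnets, the following plan uses example address ranges (the ranges and the assignment of traffic types to subnets are illustrative only; choose ranges that fit your site):

```shell
# Hypothetical subnet plan for the four traffic types.
# All addresses are examples only.
#
#   Guest/tenant network:    192.168.10.0/24   (VM-to-VM traffic)
#   Storage network:         192.168.20.0/24   (VM-to-storage traffic)
#   Management/API network:  192.168.30.0/24   (OpenStack service traffic)
#   External network:        203.0.113.0/24    (corporate network/Internet)
```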
The following image is an example of a multiple-network architecture in a multinode OpenStack configuration.
Figure 1 Example of a Multiple Network Architecture
You can expand this architecture further as needed. For example, if you decide to use redundant storage systems, you would create separate storage subnets to manage the traffic to each system.
With different networks for specific traffic, you obtain the following advantages:
Reliability and availability of the network – Multiple networks avoid the risk of a single point of failure inherent in single-network configurations.
Performance and scalability – Compared to using a single network interface, having multiple interfaces to function as different network traffic paths prevents potential congestion and its consequent performance degradation.
Security – Separating the networks makes it easier to control access to different parts of the OpenStack framework.
Manageability – Separating traffic types makes it easier for the cloud administrator to monitor, troubleshoot, and manage each part of the infrastructure independently.
In Oracle Solaris, network adapter datalinks follow the naming convention netn, where n is a number beginning from zero. Numbers are assigned in the order in which the adapters are detected during the kernel boot process.
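You can inspect the datalink-to-device mapping with the dladm command on Oracle Solaris. The output below is illustrative, assuming a system with two detected adapters; your link names, speeds, and device names will differ:

```shell
# List physical datalinks. Names follow the netN convention,
# assigned in the order the adapters were detected at boot.
dladm show-phys
#
# Illustrative output (assumed hardware):
# LINK   MEDIA      STATE   SPEED   DUPLEX   DEVICE
# net0   Ethernet   up      1000    full     igb0
# net1   Ethernet   up      1000    full     igb1
```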
Use the same type of network adapter on each hardware node, and install the adapters in the same slots on each motherboard. On every server, configure each network adapter port for the same network. For example, net0 on every system might connect to the external network, net1 to the guest network, and so on. In this way, the interface numbering in the kernel and the datalink names remain consistent on each OpenStack node. A uniform network configuration simplifies subsequent OpenStack configuration steps, specifically when you set up the Elastic Virtual Switch (EVS) in Neutron.
Using logical host names is a good practice in any network configuration scenario. An enterprise OpenStack infrastructure requires many IP addresses. Configuring the cloud with raw IP addresses complicates administration because you must remember and manage these numbers. In addition, OpenStack configuration information is stored in databases. If you later need to change an IP address, correcting those databases is difficult without deep knowledge of how they store OpenStack information.
Prepare a mapping of the host names and IP addresses that you will use for the setup. Use DNS or the /etc/hosts files for name resolution, and test the configuration to ensure that it functions properly. Then, after installing Oracle OpenStack for Oracle Solaris, specify host names instead of IP addresses when you define connection parameters in the configuration files.
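As a minimal sketch, assuming three nodes with the hypothetical names controller, compute1, and compute2 on an example management subnet, the /etc/hosts entries and a quick resolution check might look like the following (substitute your own names and addresses):

```shell
# Example /etc/hosts entries (hypothetical host names, example
# addresses). Keep the file identical on every node:
#
#   192.168.30.10   controller
#   192.168.30.11   compute1
#   192.168.30.12   compute2

# Verify that each planned name resolves before configuring OpenStack.
# Each iteration prints either the resolved entry or a warning.
for host in controller compute1 compute2; do
  getent hosts "$host" || echo "WARNING: $host does not resolve"
done
```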