This section describes how to prepare an Oracle Linux node for OpenStack. Section 3.6, “Preparing Oracle VM Server Nodes” describes how to prepare an Oracle VM Server node.
You can download the installation ISO for the latest version of Oracle Linux Release 7 from the Oracle Software Delivery Cloud at:
https://edelivery.oracle.com/linux
You prepare an Oracle Linux node for OpenStack by enabling the required repositories and installing the Oracle OpenStack for Oracle Linux preinstallation package. When you install the preinstallation package, it installs all the other required packages on the node. The packages can be installed using either the Oracle Unbreakable Linux Network (ULN) or the Oracle Linux Yum Server. If you are using ULN, the following procedure assumes that you register the system with ULN during installation.
For more information about ULN and registering systems, see:
http://docs.oracle.com/cd/E52668_01/E39381/html/index.html
For more information on using the Oracle Linux Yum Server, see:
http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-yum.html
Oracle OpenStack for Oracle Linux Release 3.0 uses a version of Docker that requires the Unbreakable Enterprise Kernel Release 4 (UEK R4). You must configure the system to use UEK R4 and boot with this kernel.
Oracle OpenStack for Oracle Linux requires a file system mounted on /var/lib/docker with at least 64 GB available. You can use a btrfs file system with the Docker btrfs storage driver, or an ext4 file system with the Docker overlay2 storage driver. The storage device can be a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN.
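The requirements above can be checked with a short script. This is a minimal sketch, not part of the product: the function name is illustrative, and it takes the file system type and available space (in 1K blocks, as df reports them) as arguments.

```shell
# Sketch: check that a file system type and available space meet the
# documented requirements (btrfs or ext4, at least 64 GB available).
# fs_meets_requirements TYPE AVAIL_KB -> prints "ok" or a reason
fs_meets_requirements() {
    fstype=$1
    avail_kb=$2
    min_kb=$((64 * 1024 * 1024))    # 64 GB expressed in 1K blocks
    case "$fstype" in
        btrfs|ext4) ;;
        *) echo "unsupported file system type: $fstype"; return 1 ;;
    esac
    if [ "$avail_kb" -lt "$min_kb" ]; then
        echo "only ${avail_kb}K available, ${min_kb}K required"
        return 1
    fi
    echo ok
}

# On a live system you could feed it from, for example:
#   df --output=fstype,avail /var/lib/docker
fs_meets_requirements btrfs 104857600    # -> ok
```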
To prepare an Oracle Linux node:
Install Oracle Linux using the instructions in the Oracle Linux Installation Guide for Release 7 at:
http://docs.oracle.com/cd/E52668_01/E54695/html/index.html
Select Minimal install as the base environment for all node types.
As part of the install, you should create either a btrfs or an ext4 file system mounted at /var/lib/docker. This file system requires a minimum of 64 GB of disk space and is used to host a local copy of the OpenStack Docker images. If you prefer, you can create the file system after installation, as described in the following steps.

Create a file system mounted on /var/lib/docker.

You create a btrfs file system with the utilities available in the btrfs-progs package, which should be installed by default.

Create either a btrfs or an ext4 file system on one or more block devices:
To create a btrfs file system:

# mkfs.btrfs [-L label] block_device ...

To create an ext4 file system:

# mkfs.ext4 [-L label] block_device

where -L label is an optional label that can be used to mount the file system.
For example:
To create an ext4 file system in a partition /dev/sdb1:

# mkfs.ext4 -L var-lib-docker /dev/sdb1
The partition must already exist. Use a utility such as fdisk (MBR partitions) or gdisk (GPT partitions) to create one if needed.
To create a btrfs file system across two disk devices, /dev/sdc and /dev/sdd:

# mkfs.btrfs -L var-lib-docker /dev/sd[cd]
The default configuration is to stripe the file system data (raid0) and to mirror the file system metadata (raid1) across the devices. Use the -d (data) and -m (metadata) options to specify the required RAID configuration. For raid10, you must specify an even number of devices and there must be at least four devices.

To create a btrfs file system in a logical volume named docker in the ol volume group:

# mkfs.btrfs -L var-lib-docker /dev/ol/docker
The logical volume must already exist. Use Logical Volume Manager (LVM) to create one if needed.
For more information, see the mkfs.btrfs(8) and mkfs.ext4(8) manual pages.

Obtain the UUID of the device containing the new file system.
Use the blkid command to display the UUID of the device and make a note of this value, for example:
# blkid /dev/sdb1
/dev/sdb1: LABEL="var-lib-docker" UUID="460ed4d2-255f-4c1b-bb2a-588783ad72b1" \
           UUID_SUB="3b4562d6-b248-4c89-96c5-53d38b9b8b77" TYPE="btrfs"

If you created a btrfs file system using multiple devices, you can specify any of the devices to obtain the UUID. Alternatively, you can use the btrfs filesystem show command to see the UUID. Ignore any UUID_SUB value displayed. For a logical volume, specify the path to the logical volume as the device, for example /dev/ol/docker.
Edit the /etc/fstab file and add an entry to ensure the file system is mounted when the system boots.

UUID=UUID_value /var/lib/docker btrfs defaults 1 2

Replace UUID_value with the UUID that you found in the previous step. If you created a label for the file system, you can also use the label instead of the UUID, for example:

LABEL=label /var/lib/docker ext4 defaults 1 2
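The fstab entry can be generated rather than typed. A sketch (the function name is illustrative; the field order matches the entries shown above):

```shell
# Sketch: build the /etc/fstab line for the Docker file system.
# make_fstab_entry UUID FSTYPE -> prints the fstab entry
make_fstab_entry() {
    printf 'UUID=%s /var/lib/docker %s defaults 1 2\n' "$1" "$2"
}

make_fstab_entry 460ed4d2-255f-4c1b-bb2a-588783ad72b1 btrfs
# -> UUID=460ed4d2-255f-4c1b-bb2a-588783ad72b1 /var/lib/docker btrfs defaults 1 2
```

Append the output to /etc/fstab (for example with >> as root) rather than copying it by hand.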
/var/lib/docker
directory.# mkdir /var/lib/docker
Mount all the file systems listed in
/etc/fstab
.# mount -a
Verify that the file system is mounted.

# df
Filesystem     1K-blocks  Used  Available Use% Mounted on
...
/dev/sdb1      ...        ...   ...         1% /var/lib/docker
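The mount check can also be scripted against the kernel's mount table instead of reading df output by eye. A sketch that reads the table from stdin so it can be exercised without root (the function name is illustrative):

```shell
# Sketch: report whether a mount point appears in a mount table
# (field 2 of each /proc/mounts line is the mount point).
is_mounted() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { print (found ? "mounted" : "not mounted") }'
}

# On a live system: is_mounted /var/lib/docker < /proc/mounts
printf '/dev/sdb1 /var/lib/docker btrfs rw 0 0\n' | is_mounted /var/lib/docker
# -> mounted
```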
(Optional) If you use a proxy server for Internet access, configure Yum with the proxy server settings.
Edit the /etc/yum.conf file and specify the proxy setting, for example:

proxy=http://proxysvr.example.com:3128
If the proxy server requires authentication, additionally specify the proxy_username and proxy_password settings, for example:

proxy=http://proxysvr.example.com:3128
proxy_username=username
proxy_password=password
If you use the yum plug-in (yum-rhn-plugin) to access ULN, specify the enableProxy and httpProxy settings in the /etc/sysconfig/rhn/up2date file, for example:

enableProxy=1
httpProxy=http://proxysvr.example.com:3128
If the proxy server requires authentication, additionally specify the enableProxyAuth, proxyUser, and proxyPassword settings, as follows:

enableProxy=1
httpProxy=http://proxysvr.example.com:3128
enableProxyAuth=1
proxyUser=username
proxyPassword=password
Make sure the system is up-to-date:
# yum update
Enable the required ULN channels or Yum repositories.
To enable the required ULN channels:
Log in to http://linux.oracle.com with your ULN user name and password.
On the Systems tab, click the link named for the system in the list of registered machines.
On the System Details page, click Manage Subscriptions.
On the System Summary page, use the left and right arrows to move channels to and from the list of subscribed channels.
Subscribe the system to the following channels:

ol7_x86_64_addons - Oracle Linux 7 Addons (x86_64)
ol7_x86_64_latest - Oracle Linux 7 Latest (x86_64)
ol7_x86_64_optional_latest - Oracle Linux 7 Latest Optional Packages (x86_64)
ol7_x86_64_openstack30 - Oracle OpenStack 3.0 (x86_64)
ol7_x86_64_UEKR4 - Unbreakable Enterprise Kernel Release 4 for Oracle Linux 7 (x86_64)
(Optional) ol7_x86_64_UEKR4_OFED - OFED supporting tool packages for Unbreakable Enterprise Kernel Release 4 on Oracle Linux 7 (x86_64)

Subscribe to this channel only if you are using the OFED (OpenFabrics Enterprise Distribution) packages provided by Oracle. UEK R4 requires a different set of OFED packages from UEK R3.
Unsubscribe the system from the following channels:

ol7_x86_64_openstack20 - Oracle OpenStack 2.0 (x86_64)
ol7_x86_64_openstack21 - Oracle OpenStack 2.1 (x86_64)
ol7_x86_64_UEKR3 - Unbreakable Enterprise Kernel Release 3 for Oracle Linux 7 (x86_64) - Latest
ol7_x86_64_UEKR3_OFED20 - OFED supporting tool packages for Unbreakable Enterprise Kernel Release 3 on Oracle Linux 7 (x86_64)
Save the changes to your subscriptions.
To enable the required Yum repositories:
Download the latest Oracle Linux Release 7 Yum Server repository file.
# curl -L -o /etc/yum.repos.d/public-yum-ol7.repo \
  http://yum.oracle.com/public-yum-ol7.repo
Edit the /etc/yum.repos.d/public-yum-ol7.repo file.

Enable the following repositories by setting enabled=1 in the following sections:

[ol7_addons]
[ol7_latest]
[ol7_optional_latest]
[ol7_openstack30]
[ol7_UEKR4]
(Optional) [ol7_UEKR4_OFED]

Enable this repository only if you have InfiniBand-capable devices and you are using the OFED (OpenFabrics Enterprise Distribution) packages provided by Oracle. UEK R4 requires a different set of OFED packages from UEK R3.
Disable the following repositories by setting enabled=0 in the following sections:

[ol7_openstack20]
[ol7_openstack21]
[ol7_UEKR3]
[ol7_UEKR3_OFED20]
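Editing the enabled= flags by hand across several sections is error-prone. The following sketch toggles the flag for one [section] of a yum repo file; the function name and the scratch file in the example are illustrative, not part of the procedure.

```shell
# Sketch: set enabled=VALUE inside one [section] of a yum .repo file.
# set_repo_enabled FILE SECTION VALUE
set_repo_enabled() {
    file=$1 section=$2 value=$3
    awk -v sec="[$section]" -v val="$value" '
        $0 == sec            { insec = 1; print; next }
        /^\[/                { insec = 0 }
        insec && /^enabled=/ { print "enabled=" val; next }
                             { print }
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Example against a scratch copy (not the real repo file):
printf '[ol7_latest]\nenabled=0\n[ol7_UEKR3]\nenabled=1\n' > /tmp/repo-test
set_repo_enabled /tmp/repo-test ol7_latest 1
set_repo_enabled /tmp/repo-test ol7_UEKR3 0
```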
Use the yum command to check the repository configuration.
Clean all yum cached files from all enabled repositories.
# yum clean all
List the configured repositories for the system.
# yum repolist
(Optional) Remove the Open vSwitch kernel module package.
To check if the kmod-openvswitch-uek package is installed:

# yum list installed kmod-openvswitch-uek

If the kmod-openvswitch-uek package is installed, remove it:

# yum -y remove kmod-openvswitch-uek
You must remove the UEK R3 Open vSwitch kernel module package in order to resolve the package dependencies for UEK R4. UEK R4 includes the Open vSwitch kernel module.
(Optional) Remove any existing OFED packages.
Only perform this step if you have InfiniBand-capable devices and you are using the OFED packages provided by Oracle. UEK R4 requires a different set of OFED packages to UEK R3.
For instructions on how to remove the OFED packages, see the release notes for your UEK R4 release, available at http://docs.oracle.com/cd/E52668_01/index.html.
Install the Oracle OpenStack for Oracle Linux preinstallation package.
If you are preparing an Oracle Linux node for a new OpenStack deployment:
# yum install openstack-kolla-preinstall
If you are updating an Oracle Linux node to a new release of Oracle OpenStack for Oracle Linux:
# yum update openstack-kolla-preinstall
This ensures the system has the required packages for OpenStack Kolla deployments.
(Master Node Only) Install the openstack-kollacli package and configure the users that can run the kollacli command.

A master node is a host from which you deploy Oracle OpenStack for Oracle Linux to the target nodes using the kollacli deploy command. Typically, you use a controller node as a master node. If you prefer, you can use a separate host as a master node, see Section 3.7, “Preparing a Separate Master Node”. Only configure one node as a master node.

In order to recover from a failure, you should ensure that you have backups of the /etc/kolla and /usr/share/kolla directories.

To prepare a controller node as a master node:
Install the OpenStack Kolla CLI (kollacli).
If you are preparing a master node for a new OpenStack deployment:
# yum install openstack-kollacli
If you are updating a master node to a new release of Oracle OpenStack for Oracle Linux:
# yum update openstack-kollacli
Add a user to the kolla group.

To add an existing user to the kolla group:

# usermod -aG kolla username

The user must log out and in again for the group setting to take effect.

Important: For security reasons, always run kollacli commands as this user. Never use root or the kolla user.
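A quick way to confirm the group change took effect (after the user has logged back in) is to inspect the output of id -nG. A sketch; the function name is illustrative:

```shell
# Sketch: check whether "kolla" appears in a space-separated group
# list, such as the output of: id -nG username
in_kolla_group() {
    case " $1 " in
        *" kolla "*) echo yes ;;
        *)           echo no ;;
    esac
}

in_kolla_group "wheel docker kolla"    # -> yes
```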
(Optional) Install the OFED packages for UEK R4 and enable the RDMA service.
Only perform this step if you have InfiniBand-capable devices and you are using the OFED packages provided by Oracle.
For instructions on how to install the OFED packages and enable the RDMA service, see the release notes for your UEK R4 release, available at http://docs.oracle.com/cd/E52668_01/index.html.
Reboot the system.
# systemctl reboot
Check the system has booted with the UEK R4 kernel.
# uname -r
4.1.12-37.5.1.el7uek.x86_64

If the output of this command begins with 4.1.12, the system has booted with the UEK R4 kernel.

If the system has not booted with the UEK R4 kernel, you must edit your GRUB configuration to boot with this kernel and reboot, as follows:
Display the menu entries that are defined in the GRUB 2 configuration file.
On UEFI-based systems, the configuration file is /boot/efi/EFI/redhat/grub.cfg. On BIOS-based systems, the configuration file is /boot/grub2/grub.cfg.

# grep '^menuentry' /boot/grub2/grub.cfg
...
menuentry 'Oracle Linux Server 7.2, with Unbreakable Enterprise Kernel 4.1.12-37.5.1.e ... {
menuentry 'Oracle Linux Server (3.8.13-98.7.1.el7uek.x86_64 with Unbreakable Enterpris ... {
...

In this example, the configuration file is for a BIOS-based system. GRUB 2 counts the menu entries in the configuration file starting at 0 for the first entry. In this example, menu entry 0 is for a UEK R4 kernel (4.1.12), and menu entry 1 is for a UEK R3 kernel (3.8.13).

Make UEK R4 the default boot kernel.
In the following example, menu entry 0 is set as the default boot kernel for a BIOS-based system.
# grub2-set-default 0
# grub2-mkconfig -o /boot/grub2/grub.cfg
In the following example, menu entry 0 is set as the default boot kernel for a UEFI-based system.
# grub2-set-default 0
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
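Rather than counting menuentry lines by hand, the index of the UEK R4 entry can be computed, since GRUB 2 numbers entries from 0 in the order they appear. A sketch (the function name is illustrative, and the sample file stands in for grub.cfg):

```shell
# Sketch: print the 0-based index of the first menu entry whose title
# contains the given kernel version string.
# menuentry_index GRUB_CFG VERSION
menuentry_index() {
    awk -v pat="$2" '/^menuentry / { if (index($0, pat)) { print n; exit } n++ }' "$1"
}

# Sample configuration standing in for /boot/grub2/grub.cfg:
printf "menuentry 'Oracle Linux Server 7.2, with UEK 4.1.12-37' {\nmenuentry 'Oracle Linux Server (3.8.13-98)' {\n" > /tmp/grub-test
menuentry_index /tmp/grub-test 4.1.12    # -> 0
```

On a live system you could then run, for example: grub2-set-default "$(menuentry_index /boot/grub2/grub.cfg 4.1.12)".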
Reboot the system and confirm that UEK R4 is the boot kernel.
Ensure that Docker is using the correct storage driver, either btrfs or overlay2.

Docker may not use the correct storage driver for the file system mounted on /var/lib/docker when it starts. Use the docker info command to check the current storage driver.

For a btrfs file system, the storage driver must be btrfs, for example:

# docker info | grep -A 1 Storage
Storage Driver: btrfs
 Build Version: Btrfs v4.4.1

For an ext4 file system, the storage driver must be overlay2, for example:

# docker info | grep -A 1 Storage
Storage Driver: overlay2
 Backing Filesystem: extfs

If the storage driver is incorrect, configure Docker to use the correct driver, as follows:
Edit the /etc/sysconfig/docker file and add a --storage-driver option to the OPTIONS variable.

For example:

OPTIONS='--selinux-enabled --storage-driver=driver'

where driver is either btrfs or overlay2.

Reload the systemd manager configuration.

# systemctl daemon-reload

Restart the docker service.

# systemctl restart docker.service
Check the correct driver is now loaded.
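The driver check can be automated by mapping the file system type to the driver Docker should report. A sketch; the helper name is illustrative, and the commented commands assume the text-parsing form of docker info output shown above:

```shell
# Sketch: map the /var/lib/docker file system type to the storage
# driver Docker should be using.
expected_driver() {
    case "$1" in
        btrfs) echo btrfs ;;
        ext4)  echo overlay2 ;;
        *)     echo unknown ;;
    esac
}

# On a live system you might compare, for example:
#   fstype=$(awk '$2 == "/var/lib/docker" { print $3 }' /proc/mounts)
#   actual=$(docker info 2>/dev/null | awk -F': ' '/Storage Driver/ { print $2 }')
#   [ "$actual" = "$(expected_driver "$fstype")" ] && echo "driver ok"
expected_driver ext4    # -> overlay2
```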
If you are using a web proxy, configure the docker service to use the proxy.

Create the drop-in file /etc/systemd/system/docker.service.d/http-proxy.conf with the following content:

[Service]
Environment="HTTP_PROXY=proxy_URL:port"
Environment="HTTPS_PROXY=proxy_URL:port"

Replace proxy_URL and port with the appropriate URLs and port numbers for your web proxy.

Reload the systemd manager configuration.

# systemctl daemon-reload
Restart the docker service.

# systemctl restart docker.service
Check that the docker service is running.

# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker-sysconfig.conf, http-proxy.conf
   Active: active (running) since Thu 2016-03-31 17:14:04 BST; 30s ago
...

Check the Drop-In: line and ensure that all the required systemd drop-in files are listed.

Check that any environment variables you have configured, such as web proxy settings, are loaded:

# systemctl show docker --property Environment
Environment=HTTP_PROXY=http://proxy.example.com:80
Environment=HTTP_PROXY=http://proxy.example.com:80If you have installed the
mlocate
package, it is recommended that you add/var/lib/docker
to thePRUNEPATHS
entry in/etc/updatedb.conf
to prevent updatedb from indexing directories below/var/lib/docker
.
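The PRUNEPATHS change can be scripted. The sketch below prepends the directory inside the quoted list and is a no-op if it is already present; the function name is illustrative, and the scratch file stands in for /etc/updatedb.conf, assuming the PRUNEPATHS = "..." form of its default entry.

```shell
# Sketch: add a directory to the PRUNEPATHS list in an updatedb.conf
# style file; does nothing if the path is already listed.
add_prunepath() {
    grep -q "$2" "$1" && return 0
    sed -i "s|^PRUNEPATHS = \"|PRUNEPATHS = \"$2 |" "$1"
}

printf 'PRUNEPATHS = "/afs /media"\n' > /tmp/updatedb-test
add_prunepath /tmp/updatedb-test /var/lib/docker
cat /tmp/updatedb-test
# -> PRUNEPATHS = "/var/lib/docker /afs /media"
```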
Synchronize the time.
Time synchronization is essential to avoid errors with OpenStack operations. Before deploying OpenStack, you should ensure that the time is synchronized on all nodes using the Network Time Protocol (NTP).
It is best to configure the controller nodes to synchronize the time from more accurate (lower stratum) NTP servers and to configure the other nodes to synchronize the time from the controller nodes.
Further information on network time configuration can be found in the Oracle Linux Administration Guide for Release 7 at:
http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-nettime.html
The following configuration assumes that the firewall rules for your internal networks enable you to access public or local NTP servers. Perform the following steps on all Oracle Linux nodes.
Time synchronization for Oracle VM Server compute nodes is described in Section 3.6, “Preparing Oracle VM Server Nodes”.
Install the chrony package.

# yum install chrony

Edit the /etc/chrony.conf file to configure the chronyd service.

On the controller nodes, configure the chronyd service to synchronize time from a pool of NTP servers and set the allow directive to enable the controller nodes to act as NTP servers for the other OpenStack nodes, for example:

server NTP_server_1
server NTP_server_2
server NTP_server_3
allow 10.0.0/24

The NTP servers can be public NTP servers or your organization may have its own local NTP servers. In the above example, the allow directive specifies a subnet from which the controller nodes accept NTP requests. Alternatively, you can specify the other OpenStack nodes individually with multiple allow directives.

On all other nodes, configure the chronyd service to synchronize time from the controller nodes, for example:

server control1.example.com iburst
server control2.example.com iburst
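Since the only difference between node roles is the server lines and the allow directive, the configuration fragment can be generated per node. A sketch; the function name, server names, and subnet are illustrative:

```shell
# Sketch: print the chrony.conf lines for a node based on its role.
# chrony_conf ROLE SERVER...
# Controller nodes also get an allow directive so they can serve NTP.
chrony_conf() {
    role=$1; shift
    if [ "$role" = controller ]; then
        for s in "$@"; do echo "server $s"; done
        echo "allow 10.0.0/24"
    else
        for s in "$@"; do echo "server $s iburst"; done
    fi
}

chrony_conf controller ntp1.example.com ntp2.example.com
chrony_conf compute control1.example.com control2.example.com
```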
Start the chronyd service and configure it to start following a system reboot.

# systemctl start chronyd
# systemctl enable chronyd

Verify that chronyd is accessing the correct time sources.

# chronyc -a sources
200 OK
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* control1.example.com          3    6    17     40 +9494ns[ +21us] +/- 29ms
...

On the controller nodes, the Name/IP address column in the command output should list the configured pool of NTP servers. On all other nodes, it should list the controller nodes.
Ensure that the time is synchronized on all nodes.
Use the chronyc -a tracking command to check the offset (the Last offset row):
# chronyc -a tracking
200 OK
Reference ID    : 10.0.0.11 (control1.example.com)
Stratum         : 3
Ref time (UTC)  : Fri Mar  4 16:19:50 2016
System time     : 0.000000007 seconds slow of NTP time
Last offset     : -0.000088924 seconds
RMS offset      : 2.834978580 seconds
Frequency       : 3.692 ppm slow
Residual freq   : -0.006 ppm
Skew            : 0.222 ppm
Root delay      : 0.047369 seconds
Root dispersion : 0.004273 seconds
Update interval : 2.1 seconds
Leap status     : Normal

To force a node to synchronize its time:

# chronyc -a 'burst 4/4'
200 OK
200 OK
# chronyc -a makestep
200 OK
200 OK
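When validating many nodes, the Last offset value can be checked programmatically against a tolerance instead of reading the tracking output by hand. A sketch (the function name and the 0.1 second tolerance are illustrative):

```shell
# Sketch: read `chronyc tracking` output on stdin and report whether
# the absolute Last offset is within MAX seconds.
# offset_within MAX
offset_within() {
    awk -v max="$1" '/^Last offset/ {
        off = $4
        if (off < 0) off = -off
        print (off <= max ? "in sync" : "out of sync")
    }'
}

printf 'Last offset     : -0.000088924 seconds\n' | offset_within 0.1
# -> in sync
```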