3.4 Preparing Oracle Linux Nodes

This section describes how to prepare an Oracle Linux node for OpenStack. Section 3.5, “Preparing Oracle VM Server Compute Nodes” describes how to prepare an Oracle VM Server node.

You can download the installation ISO for the latest version of Oracle Linux Release 7 from the Oracle Software Delivery Cloud at:

https://edelivery.oracle.com/linux

You prepare an Oracle Linux node for OpenStack by enabling the required repositories and installing the Oracle OpenStack for Oracle Linux preinstallation package. When you install the preinstallation package, it installs all the other required packages on the node. The packages can be installed using either the Oracle Unbreakable Linux Network (ULN) or the Oracle Linux Yum Server. If you are using ULN, the following procedure assumes that you register the system with ULN during installation.

For more information about ULN and registering systems, see:

http://docs.oracle.com/cd/E52668_01/E39381/html/index.html

For more information on using the Oracle Linux Yum Server, see:

http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-yum.html

Oracle OpenStack for Oracle Linux Release 2.1 uses a version of Docker that requires the Unbreakable Enterprise Kernel Release 4 (UEK R4). You must configure the system to use UEK R4 and boot the system with this kernel.

Oracle OpenStack for Oracle Linux requires a btrfs file system mounted on /var/lib/docker with at least 64GB available. The following steps include instructions for setting up a btrfs file system using one or more available devices. A device can be a disk partition, an LVM volume, a loopback device, a multipath device, or a LUN.
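As a quick pre-check of this requirement, you can inspect the mount point with df. The following is a minimal sketch; the check_mount helper name and the final invocation are illustrative, not part of the product:

```shell
# Minimal sketch: verify that a mount point is a btrfs file system with at
# least the given capacity in GB. check_mount is a hypothetical helper name.
check_mount() {
    mp=$1; min_gb=$2
    fstype=$(df --output=fstype "$mp" | tail -n 1)
    size_gb=$(df -BG --output=size "$mp" | tail -n 1 | tr -dc '0-9')
    if [ "$fstype" = "btrfs" ] && [ "$size_gb" -ge "$min_gb" ]; then
        echo "OK: $mp is btrfs with ${size_gb}GB"
    else
        echo "FAIL: $mp is $fstype with ${size_gb}GB (need btrfs and at least ${min_gb}GB)"
    fi
}
# On a prepared node you would run: check_mount /var/lib/docker 64
check_mount / 1
```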

To prepare an Oracle Linux node:

  1. Install Oracle Linux using the instructions in the Oracle Linux Installation Guide for Release 7 at:

    http://docs.oracle.com/cd/E52668_01/E54695/html/index.html

    Select Minimal install as the base environment for all node types.

    As part of the install, you should create a btrfs file system mounted at /var/lib/docker. This file system requires a minimum of 64GB of disk space and is used to host a local copy of the OpenStack Docker images. If you prefer, you can create the file system after installation, as described in the following steps.

    If the node will also host the Docker registry, you should create an additional btrfs file system mounted at /var/lib/registry. This file system requires a minimum of 15GB of disk space. If you prefer, you can create the file system after installation, as described in Section 3.7, “Setting up the Docker Registry”.

  2. Disable SELinux.

    To use the btrfs storage engine with Docker, you must either disable SELinux or set the SELinux mode to Permissive.

    To check the current SELinux mode, use the getenforce command. If the output of this command shows Enforcing, you must disable SELinux as follows:

    1. Edit /etc/selinux/config and set the value of the SELINUX directive to disabled.

      Note

      Do not use the setenforce command to change the SELinux mode, as changes made with this command do not persist across reboots.

    2. Reboot the system.

      # systemctl reboot
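    The configuration edit in step 1 can be scripted with sed. The following sketch operates on a sample copy of the file; on the node, run the sed command against /etc/selinux/config and then reboot:

```shell
# Sketch: set SELINUX=disabled in a copy of the SELinux configuration file.
# On the node, target /etc/selinux/config instead of the sample below.
cfg=/tmp/selinux-config-example
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```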
  3. Stop and disable the firewalld service.

    If you require a system firewall, you can use iptables instead of firewalld.

    # systemctl stop firewalld 
    # systemctl disable firewalld

    Confirm that the firewalld service is stopped and disabled.

    # systemctl status firewalld 
    # systemctl is-enabled firewalld
  4. Create a btrfs file system mounted on /var/lib/docker.

    You create a btrfs file system with the utilities available in the btrfs-progs package, which should be installed by default.

    1. Create a btrfs file system on one or more block devices:

      # mkfs.btrfs [-L label] block_device ...

      where -L label is an optional label that can be used to mount the file system.

      For example:

      • To create a file system in a partition /dev/sdb1:

        # mkfs.btrfs -L var-lib-docker /dev/sdb1

        The partition must already exist. Use a utility such as fdisk (MBR partitions) or gdisk (GPT partitions) to create one if needed.

      • To create a file system across two disk devices, /dev/sdc and /dev/sdd:

        # mkfs.btrfs -L var-lib-docker /dev/sd[cd]

        The default configuration is to stripe the file system data (raid0) and to mirror the file system metadata (raid1) across the devices. Use the -d (data) and -m (metadata) options to specify the required RAID configuration. For raid10, you must specify an even number of devices and there must be at least four devices.

      • To create a file system in a logical volume named docker in the ol volume group:

        # mkfs.btrfs -L var-lib-docker /dev/ol/docker

        The logical volume must already exist. Use Logical Volume Manager (LVM) to create one if needed.

      More information on using mkfs.btrfs is available in the Oracle Linux Administrator's Guide for Release 7 at:

      http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-create-btrfs.html

    2. Obtain the UUID of the device containing the btrfs file system.

      Use the blkid command to display the UUID of the device and make a note of this value, for example:

      # blkid /dev/sdb1
      /dev/sdb1: LABEL="var-lib-docker" UUID="460ed4d2-255f-4c1b-bb2a-588783ad72b1" \
      UUID_SUB="3b4562d6-b248-4c89-96c5-53d38b9b8b77" TYPE="btrfs" 
      

      If the btrfs file system is created across multiple devices, you can specify any of the devices to obtain the UUID. Alternatively, you can use the btrfs filesystem show command to see the UUID. For a logical volume, specify the path to the logical volume as the device, for example /dev/ol/docker. Ignore any UUID_SUB value displayed.

    3. Edit the /etc/fstab file and add an entry to ensure the file system is mounted when the system boots.

      UUID=UUID_value /var/lib/docker  btrfs  defaults  1 2

      Replace UUID_value with the UUID that you found in the previous step. If you created a label for the btrfs file system, you can also use the label instead of the UUID, for example:

      LABEL=label /var/lib/docker  btrfs  defaults  1 2
    4. Create the /var/lib/docker directory.

      # mkdir /var/lib/docker
    5. Mount all the file systems listed in /etc/fstab.

      # mount -a
    6. Verify that the file system is mounted.

      # df
      Filesystem     1K-blocks    Used Available Use% Mounted on
      ...
      /dev/sdb1            ...    ...  ...       1%   /var/lib/docker
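      The /etc/fstab entry in step 3 can also be derived from the blkid output with a short script. This sketch uses a sample blkid line matching the example above; on the node, substitute the real output of blkid for your device:

```shell
# Sketch: extract the UUID from a blkid line (ignoring UUID_SUB) and print
# the matching /etc/fstab entry. The sample line mirrors the example above.
blkid_line='/dev/sdb1: LABEL="var-lib-docker" UUID="460ed4d2-255f-4c1b-bb2a-588783ad72b1" UUID_SUB="3b4562d6-b248-4c89-96c5-53d38b9b8b77" TYPE="btrfs"'
uuid=$(printf '%s\n' "$blkid_line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
printf 'UUID=%s /var/lib/docker  btrfs  defaults  1 2\n' "$uuid"
```

The sed pattern requires a literal `UUID="` match, so the UUID_SUB field is not picked up by mistake.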
      
  5. (Optional) If you use a proxy server for Internet access, configure Yum with the proxy server settings.

    Edit the /etc/yum.conf file and specify the proxy setting, for example:

    proxy=http://proxysvr.example.com:3128

    If the proxy server requires authentication, additionally specify the proxy_username and proxy_password settings, for example:

    proxy=http://proxysvr.example.com:3128
    proxy_username=username
    proxy_password=password 

    If you use the yum plug-in (yum-rhn-plugin) to access the ULN, specify the enableProxy and httpProxy settings in the /etc/sysconfig/rhn/up2date file, for example:

    enableProxy=1
    httpProxy=http://proxysvr.example.com:3128

    If the proxy server requires authentication, additionally specify the enableProxyAuth, proxyUser, and proxyPassword settings, as follows:

    enableProxy=1
    httpProxy=http://proxysvr.example.com:3128
    enableProxyAuth=1
    proxyUser=username
    proxyPassword=password
  6. Make sure the system is up to date:

    # yum update
  7. Enable the required ULN channels or Yum repositories.

    To enable the required ULN channels:

    1. Log in to http://linux.oracle.com with your ULN user name and password.

    2. On the Systems tab, click the link named for the system in the list of registered machines.

    3. On the System Details page, click Manage Subscriptions.

    4. On the System Summary page, use the left and right arrows to move channels to and from the list of subscribed channels.

      Subscribe the system to the following channels:

      • ol7_x86_64_UEKR4 - Unbreakable Enterprise Kernel Release 4 for Oracle Linux 7 (x86_64)

      • ol7_x86_64_addons - Oracle Linux 7 Addons (x86_64)

      • ol7_x86_64_openstack21 - Oracle OpenStack 2.1 (x86_64)

      • ol7_x86_64_latest - Oracle Linux 7 Latest (x86_64)

      • (Optional) ol7_x86_64_UEKR4_OFED - OFED supporting tool packages for Unbreakable Enterprise Kernel Release 4 on Oracle Linux 7 (x86_64)

        Subscribe to this channel only if you have InfiniBand-capable devices and you are using the OFED (OpenFabrics Enterprise Distribution) packages provided by Oracle. UEK R4 requires a different set of OFED packages from UEK R3.

      Unsubscribe the system from the following channels:

      • ol7_x86_64_UEKR3 - Unbreakable Enterprise Kernel Release 3 for Oracle Linux 7 (x86_64) - Latest

      • ol7_x86_64_UEKR3_OFED20 - OFED supporting tool packages for Unbreakable Enterprise Kernel Release 3 on Oracle Linux 7 (x86_64)

    5. Click Save Subscriptions.

    To enable the required Yum repositories:

    1. Check that you have the latest Oracle Linux Yum Server repository file.

      Check that the /etc/yum.repos.d/public-yum-ol7.repo file contains an [ol7_UEKR4] section. If it does not, you do not have the most up-to-date version of the repository file.

      To download the latest copy of the repository file:

      # curl -L -o /etc/yum.repos.d/public-yum-ol7.repo \
          http://yum.oracle.com/public-yum-ol7.repo
    2. Edit the /etc/yum.repos.d/public-yum-ol7.repo file.

      Enable the following repositories by setting enabled=1 in the following sections:

      • [ol7_UEKR4]

      • [ol7_addons]

      • [ol7_openstack21]

      • [ol7_latest]

      • (Optional) [ol7_UEKR4_OFED]

        Subscribe to this repository only if you have InfiniBand-capable devices and you are using the OFED (OpenFabrics Enterprise Distribution) packages provided by Oracle. UEK R4 requires a different set of OFED packages from UEK R3.

      Disable the following repositories by setting enabled=0 in the following sections:

      • [ol7_UEKR3]

      • [ol7_UEKR3_OFED20]
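    The enabled flag changes above can be scripted with sed. The following sketch edits a sample copy of the repository file with one repository from each list; on the node, apply the same patterns to /etc/yum.repos.d/public-yum-ol7.repo and extend them to cover all the repositories listed:

```shell
# Sketch: toggle enabled= flags in a copy of the repository file. Only two
# sample sections are shown; extend to the other sections listed above.
repo=/tmp/public-yum-ol7.repo.example
cat > "$repo" <<'EOF'
[ol7_UEKR4]
enabled=0
[ol7_UEKR3]
enabled=1
EOF
sed -i '/^\[ol7_UEKR4\]/,/^\[ol7_UEKR3\]/ s/^enabled=0/enabled=1/' "$repo"
sed -i '/^\[ol7_UEKR3\]/,$ s/^enabled=1/enabled=0/' "$repo"
cat "$repo"
```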

  8. Use the yum command to check the repository configuration.

    Clean all yum cached files from all enabled repositories.

    # yum clean all

    List the configured repositories for the system.

    # yum repolist
  9. (Optional) Remove the Open vSwitch kernel module package.

    To check whether the kmod-openvswitch-uek package is installed:

    # yum list installed kmod-openvswitch-uek 

    If the kmod-openvswitch-uek package is installed, remove it:

    # yum -y remove kmod-openvswitch-uek

    You must remove the UEK R3 Open vSwitch kernel module package in order to resolve the package dependencies for UEK R4. UEK R4 includes the Open vSwitch kernel module.

  10. (Optional) Remove any existing OFED packages.

    Only perform this step if you have InfiniBand-capable devices and you are using the OFED packages provided by Oracle. UEK R4 requires a different set of OFED packages from UEK R3.

    For instructions on how to remove the OFED packages, see the release notes for your UEK R4 release, available at http://docs.oracle.com/cd/E52668_01/index.html.

  11. Install the Oracle OpenStack for Oracle Linux preinstallation package.

    If you are preparing an Oracle Linux node for a new OpenStack deployment:

    # yum install openstack-kolla-preinstall

    If you are updating an Oracle Linux node for a new release of Oracle OpenStack for Oracle Linux:

    # yum update openstack-kolla-preinstall

    This ensures the system has the required packages for OpenStack Kolla deployments.

  12. (Optional) Install the OFED packages for UEK R4 and enable the RDMA service.

    Only perform this step if you have InfiniBand-capable devices and you are using the OFED packages provided by Oracle.

    For instructions on how to install the OFED packages and enable the RDMA service, see the release notes for your UEK R4 release, available at http://docs.oracle.com/cd/E52668_01/index.html.

  13. Reboot the system.

    # systemctl reboot
  14. Check that the system has booted with the UEK R4 kernel.

    # uname -r
    4.1.12-32.2.1.el7uek.x86_64

    If the output of this command begins with 4.1.12, the system has booted with the UEK R4 kernel.

    If the system has not booted with the UEK R4 kernel, you must edit your grub configuration to boot with this kernel and reboot, as follows:

    1. Display the menu entries that are defined in the GRUB 2 configuration file.

      On UEFI-based systems, the configuration file is /boot/efi/EFI/redhat/grub.cfg.
      On BIOS-based systems, the configuration file is /boot/grub2/grub.cfg.

      # grep '^menuentry' /boot/grub2/grub.cfg
      ...
      menuentry 'Oracle Linux Server 7.2, with Unbreakable Enterprise Kernel 4.1.12-32.2.1.e ... {
      menuentry 'Oracle Linux Server (3.8.13-98.7.1.el7uek.x86_64 with Unbreakable Enterpris ... {
      ...

      In this example, the configuration file is for a BIOS-based system. GRUB 2 counts the menu entries in the configuration file starting at 0 for the first entry. In this example, menu entry 0 is for a UEK R4 kernel (4.1.12), and menu entry 1 is for a UEK R3 kernel (3.8.13).

    2. Make UEK R4 the default boot kernel.

      In the following example, menu entry 0 is set as the default boot kernel for a BIOS-based system.

      # grub2-set-default 0
      # grub2-mkconfig -o /boot/grub2/grub.cfg 

      In the following example, menu entry 0 is set as the default boot kernel for a UEFI-based system.

      # grub2-set-default 0
      # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
    3. Reboot the system and confirm that UEK R4 is the boot kernel.
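    The menu entry index passed to grub2-set-default can be derived from the grep output. This sketch parses a sample configuration file with one UEK R4 and one UEK R3 entry; on the node, point it at the real grub.cfg path given above:

```shell
# Sketch: find the menu entry index of the UEK R4 (4.1.12) kernel. GRUB 2
# counts entries from 0, so the line number is decremented by one.
cfg=/tmp/grub.cfg.example
cat > "$cfg" <<'EOF'
menuentry 'Oracle Linux Server 7.2, with Unbreakable Enterprise Kernel 4.1.12-32.2.1.el7uek.x86_64' {
menuentry 'Oracle Linux Server (3.8.13-98.7.1.el7uek.x86_64) 7.2' {
EOF
grep '^menuentry' "$cfg" | awk '/4\.1\.12/ { print NR - 1 }'
```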

  15. If you are using a web proxy, configure the docker service to use the proxy.

    1. Create the drop-in file /etc/systemd/system/docker.service.d/http-proxy.conf with the following content:

      [Service]
      Environment="HTTP_PROXY=proxy_URL:port"
      Environment="HTTPS_PROXY=proxy_URL:port"

      Replace proxy_URL and port with the appropriate URLs and port numbers for your web proxy.

      If the host also runs the Docker registry that stores the Oracle OpenStack for Oracle Linux images, you should specify that local connections do not need to be proxied by setting the NO_PROXY environment variable:

      [Service]
      Environment="HTTP_PROXY=proxy_URL:port" "NO_PROXY=localhost,127.0.0.1"
      Environment="HTTPS_PROXY=proxy_URL:port" "NO_PROXY=localhost,127.0.0.1"
    2. Reload the systemd manager configuration.

      # systemctl daemon-reload
    3. Restart the docker service.

      # systemctl restart docker.service
    4. Check that the docker service is running.

      # systemctl status docker.service
      ● docker.service - Docker Application Container Engine
         Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
        Drop-In: /etc/systemd/system/docker.service.d
                 └─docker-sysconfig.conf, http-proxy.conf
         Active: active (running) since Thu 2016-03-31 17:14:04 BST; 30s ago
      ...

      Check the Drop-In: line and ensure that all the required systemd drop-in files are listed.

      Check that any environment variables you have configured, such as web proxy settings, are loaded:

      # systemctl show docker --property Environment 
      Environment=HTTP_PROXY=http://proxy.example.com:80

      If you have installed the mlocate package, it is recommended that you add /var/lib/docker to the PRUNEPATHS entry in /etc/updatedb.conf to prevent updatedb from indexing directories below /var/lib/docker.
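      The PRUNEPATHS change can be made with sed. This sketch edits a sample copy of the file; on the node, run the sed command against /etc/updatedb.conf:

```shell
# Sketch: append /var/lib/docker to the PRUNEPATHS entry in a copy of the
# updatedb configuration. The sample content below is illustrative.
conf=/tmp/updatedb.conf.example
echo 'PRUNEPATHS = "/afs /media /mnt"' > "$conf"
sed -i 's|^\(PRUNEPATHS = ".*\)"|\1 /var/lib/docker"|' "$conf"
grep '^PRUNEPATHS' "$conf"
```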

  16. Synchronize the time.

    Time synchronization is essential to avoid errors with OpenStack operations. Before deploying OpenStack, you should ensure that the time is synchronized on all nodes using the Network Time Protocol (NTP).

    It is best to configure the controller nodes to synchronize the time from more accurate (lower stratum) NTP servers and to configure the other nodes to synchronize the time from the controller nodes.

    Further information on network time configuration can be found in the Oracle Linux Administration Guide for Release 7 at:

    http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-nettime.html

    The following configuration assumes that the firewall rules for your internal networks enable you to access public or local NTP servers. Perform the following steps on all Oracle Linux nodes.

    Time synchronization for Oracle VM Server compute nodes is described in Section 3.5, “Preparing Oracle VM Server Compute Nodes”.

    1. Install the chrony package.

      # yum install chrony
    2. Edit the /etc/chrony.conf file to configure the chronyd service.

      On the controller nodes, configure the chronyd service to synchronize time from a pool of NTP servers and set the allow directive to enable the controller nodes to act as NTP servers for the other OpenStack nodes, for example:

      server NTP_server_1
      server NTP_server_2
      server NTP_server_3
      allow 10.0.0/24

      The NTP servers can be public NTP servers or your organization may have its own local NTP servers. In the above example, the allow directive specifies a subnet from which the controller nodes accept NTP requests. Alternatively, you can specify the other OpenStack nodes individually with multiple allow directives.

      On all other nodes, configure the chronyd service to synchronize time from the controller nodes, for example:

      server control1.example.com iburst
      server control2.example.com iburst
    3. Start the chronyd service and configure it to start following a system reboot.

      # systemctl start chronyd
      # systemctl enable chronyd
    4. Verify that chronyd is accessing the correct time sources.

      # chronyc -a sources
      200 OK
      210 Number of sources = 2
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^* control1.example.com          3   6    17    40  +9494ns[  +21us] +/-   29ms
      ....

      On the controller nodes, the Name/IP address column in the command output should list the configured pool of NTP servers. On all other nodes, it should list the controller nodes.

    5. Ensure that the time is synchronized on all nodes.

      Use the chronyc -a tracking command to check the offset (the Last offset row):

      # chronyc -a tracking
      200 OK
      Reference ID    : 10.0.0.11 (control1.example.com)
      Stratum         : 3
      Ref time (UTC)  : Fri Mar  4 16:19:50 2016
      System time     : 0.000000007 seconds slow of NTP time
      Last offset     : -0.000088924 seconds
      RMS offset      : 2.834978580 seconds
      Frequency       : 3.692 ppm slow
      Residual freq   : -0.006 ppm
      Skew            : 0.222 ppm
      Root delay      : 0.047369 seconds
      Root dispersion : 0.004273 seconds
      Update interval : 2.1 seconds
      Leap status     : Normal

      To force a node to synchronize its time:

      # chronyc -a 'burst 4/4'
      200 OK
      200 OK
      # chronyc -a makestep
      200 OK
      200 OK
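      The offset check in step 5 can be scripted. This sketch parses a sample Last offset line and flags offsets larger than 0.1 seconds (the threshold is an illustrative choice, not a product requirement); on a node, pipe the real output of chronyc -a tracking instead:

```shell
# Sketch: extract the Last offset value (in seconds) from tracking output
# and report whether its magnitude is within an example 0.1 s threshold.
sample='Last offset     : -0.000088924 seconds'
offset=$(printf '%s\n' "$sample" | awk -F': ' '/Last offset/ { print $2 }' | awk '{ print $1 }')
awk -v o="$offset" 'BEGIN { if (o < 0) o = -o; if (o <= 0.1) print "time in sync"; else print "offset too large" }'
```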