9.2 Configuring Operating System Containers

9.2.1 Installing and Configuring the Software
9.2.2 Setting up the File System for the Containers
9.2.3 Creating and Starting a Container
9.2.4 About the lxc-oracle Template Script
9.2.5 About Veth and Macvlan
9.2.6 Modifying a Container to Use Macvlan

The procedures in the following sections describe how to set up Linux Containers, each of which contains a copy of the root file system installed from packages in the Public Yum repository.

Note

Throughout the following sections in this chapter, the prompts [root@host ~]# and [root@ol6ctr1 ~]# distinguish between commands run by root on the host and in the container.

The software functionality described requires that you boot the system with at least the Unbreakable Enterprise Kernel Release 2 (2.6.39).

9.2.1 Installing and Configuring the Software

To install and configure the software that is required to run Linux Containers:

  1. Use yum to install the btrfs-progs package.

    [root@host ~]# yum install btrfs-progs
  2. Install the lxc packages.

    [root@host ~]# yum install lxc

    This command installs all of the required packages, such as libvirt, libcgroup, and lxc-libs. The LXC template scripts are installed in /usr/share/lxc/templates.

  3. Start the Control Groups (cgroups) service, cgconfig, and configure the service to start at boot time.

    [root@host ~]# service cgconfig start
    [root@host ~]# chkconfig cgconfig on

    LXC uses the cgroups service to control the system resources that are available to containers.

  4. Start the virtualization management service, libvirtd, and configure the service to start at boot time.

    [root@host ~]# service libvirtd start
    [root@host ~]# chkconfig libvirtd on

    LXC uses the virtualization management service to support network bridging for containers. A short verification sketch follows this procedure.

  5. If you are going to compile applications that require the LXC header files and libraries, install the lxc-devel package.

    [root@host ~]# yum install lxc-devel
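
Before you create any containers, you can optionally confirm that the cgroup hierarchy is mounted and that libvirtd has created the default virbr0 bridge, as mentioned in steps 3 and 4. The following is a minimal sketch; it assumes that the lscgroup utility from the libcgroup package is available and that the bridge-utils package, which provides brctl, is installed:

[root@host ~]# lscgroup | head            # lists the mounted cgroup subsystems and groups
[root@host ~]# virsh net-list --all       # the "default" network should be listed as active
[root@host ~]# brctl show virbr0          # shows the bridge that libvirtd manages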

9.2.2 Setting up the File System for the Containers

Note

The LXC template scripts assume that containers are created in /container. You must edit the script if your system's configuration differs from this assumption.

To set up the /container file system:

  1. Create a btrfs file system on a suitably sized device such as /dev/sdb, and create the /container mount point.

    [root@host ~]# mkfs.btrfs /dev/sdb
    [root@host ~]# mkdir /container
  2. Mount the /container file system.

    [root@host ~]# mount /dev/sdb /container
  3. Add an entry for /container to the /etc/fstab file.

    /dev/sdb      /container    btrfs    defaults   0 0
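
    To confirm the configuration, check that the file system is mounted and inspect its space usage. The following is a minimal sketch; the sizes reported depend on your device:

    [root@host ~]# df -h /container                # /container should be shown as mounted from /dev/sdb
    [root@host ~]# btrfs filesystem df /container  # shows how btrfs has allocated the space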

For more information, see Chapter 5, The Btrfs File System.

9.2.3 Creating and Starting a Container

Note

The procedure in this section uses the LXC template script for Oracle Linux (lxc-oracle), which is located in /usr/share/lxc/templates.

An Oracle Linux container requires a minimum of 400 MB of disk space.

To create and start a container:

  1. Create an Oracle Linux 6 container named ol6ctr1 using the lxc-oracle template script.

    [root@host ~]# lxc-create -n ol6ctr1 -B btrfs -t oracle -- --release=6.latest
    
    lxc-create: No config file specified, using the default config /etc/lxc/default.conf
    Host is OracleServer 6.4
    Create configuration file /container/ol6ctr1/config
    Downloading release 6.latest for x86_64
      .
      .
      .
      yum-metadata-parser.x86_64 0:1.1.2-16.el6                                     
      zlib.x86_64 0:1.2.3-29.el6                                                    
    
    Complete!
    Note

    For LXC version 1.0 and later, you must specify the -B btrfs option if you want to use the snapshot features of btrfs. For more information, see the lxc-create(1) manual page.

    The lxc-create command runs the template script lxc-oracle to create the container in /container/ol6ctr1 with the btrfs subvolume /container/ol6ctr1/rootfs as its root file system. The command then uses yum to install the latest available update of Oracle Linux 6 from the Public Yum repository. It also writes the container's configuration settings to the file /container/ol6ctr1/config and its fstab file to /container/ol6ctr1/fstab. The default log file for the container is /container/ol6ctr1/ol6ctr1.log.

    You can specify the following template options after the -- option to lxc-create:

    -a | --arch=i386|x86_64

    Specifies the architecture. The default value is the architecture of the host.

    --baseurl=pkg_repo

    Specifies the file URI of a package repository. You must also use the --arch and --release options to specify the architecture and the release, for example:

    # mount -o loop OracleLinux-R7-GA-Everything-x86_64-dvd.iso /mnt
    # lxc-create -n ol70beta -B btrfs -t oracle -- -R 7.0 -a x86_64 \
      --baseurl=file:///mnt/Server
    -P | --patch=path

    Patches the rootfs at the specified path.

    -R | --release=major.minor

    Specifies the major release number and minor update number of the Oracle Linux release to install. The value of major can be set to 4, 5, 6, or 7. If you specify latest for minor, the latest available release packages for the major release are installed. If the host is running Oracle Linux, the default release is the same as the release installed on the host. Otherwise, the default release is the latest update of Oracle Linux 6.

    -r | --rpms=rpm_name

    Installs the specified RPM in the container.

    -t | --templatefs=rootfs

    Specifies the path to the root file system of an existing system, container, or Oracle VM template that you want to copy. Do not specify this option with any other template option. See Section 9.4, “Creating Additional Containers”.

    -u | --url=repo_URL

    Specifies a yum repository other than the Public Yum repository. For example, you might want to perform the installation from a local yum server. The repository file is configured in /etc/yum.repos.d in the container's root file system. The default URL is http://public-yum.oracle.com.

  2. If you want to create additional copies of the container in its initial state, create a snapshot of the container's root file system, for example:

    # btrfs subvolume snapshot /container/ol6ctr1/rootfs /container/ol6ctr1/rootfs_snap

    See Chapter 5, The Btrfs File System and Section 9.4, “Creating Additional Containers”.

  3. Start the container ol6ctr1 as a daemon that writes its diagnostic output to a log file other than the default log file.

    [root@host ~]# lxc-start -n ol6ctr1 -d -o /container/ol6ctr1_debug.log -l DEBUG
    Note

    If you omit the -d option, the container's console opens in the current shell.

    The following logging levels are available: FATAL, CRIT, WARN, ERROR, NOTICE, INFO, and DEBUG. You can set a logging level for all lxc-* commands.

    If you run the ps -ef --forest command on the host system and the process tree below the lxc-start process shows that the /usr/sbin/sshd and /sbin/mingetty processes have started in the container, you can log in to the container from the host. See Section 9.3, “Logging in to Containers”.
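
    For example, the following is a hedged sketch of commands that you might run on the host to confirm that the container has started; the exact output depends on your system:

    [root@host ~]# lxc-info -n ol6ctr1     # reports the container's state and its init process ID
    [root@host ~]# ps -ef --forest         # look for sshd and mingetty beneath the lxc-start process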

9.2.4 About the lxc-oracle Template Script

Note

If you amend a template script, you alter the configuration files of all containers that you subsequently create from that script. If you amend the config file for a container, you alter the configuration of that container and all containers that you subsequently clone from it.

The lxc-oracle template script defines system settings and resources that are assigned to a running container, including:

  • the default passwords for the oracle and root users, which are set to oracle and root respectively

  • the host name (lxc.utsname), which is set to the name of the container

  • the number of available terminals (lxc.tty), which is set to 4

  • the location of the container's root file system on the host (lxc.rootfs)

  • the location of the fstab mount configuration file (lxc.mount)

  • all system capabilities that are not available to the container (lxc.cap.drop)

  • the local network interface configuration (lxc.network)

  • all whitelisted cgroup devices (lxc.cgroup.devices.allow)

The template script sets the virtual network type (lxc.network.type) and bridge (lxc.network.link) to veth and virbr0. If you want to use a macvlan bridge or Virtual Ethernet Port Aggregator that allows external systems to access your container via the network, you must modify the container's configuration file. See Section 9.2.5, “About Veth and Macvlan” and Section 9.2.6, “Modifying a Container to Use Macvlan”.

To enhance security, you can uncomment lxc.cap.drop capabilities to prevent root in the container from performing certain actions. For example, dropping the sys_admin capability prevents root from remounting the container's fstab entries as writable. However, dropping sys_admin also prevents the container from mounting any file system and disables the hostname command. By default, the template script drops the following capabilities: mac_admin, mac_override, setfcap, setpcap, sys_module, sys_nice, sys_pacct, sys_rawio, and sys_time.
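
In the container's config file, these restrictions appear as lxc.cap.drop entries. The following lines are an illustrative sketch only; the exact set of capabilities and the formatting written by your version of the template script may differ:

# Capabilities dropped in /container/name/config (illustrative; values accumulate across lines)
lxc.cap.drop = mac_admin mac_override setfcap setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct sys_rawio sys_time
# Adding further capabilities, such as sys_admin, restricts root in the container even more:
# lxc.cap.drop = sys_admin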

For more information, see Chapter 8, Control Groups and the capabilities(7) and lxc.conf(5) manual pages.

When you create a container, the template script writes the container's configuration settings and mount configuration to /container/name/config and /container/name/fstab, and sets up the container's root file system under /container/name/rootfs.

Unless you specify an existing root file system to clone (see the --templatefs template option), the template script installs the following packages under rootfs (by default, from the Public Yum repository at http://public-yum.oracle.com):

Package              Description

chkconfig            chkconfig utility for maintaining the /etc/rc*.d hierarchy.
dhclient             DHCP client daemon (dhclient) and dhclient-script.
initscripts          /etc/inittab file and /etc/init.d scripts.
openssh-server       Open source SSH server daemon, /usr/sbin/sshd.
oraclelinux-release  Oracle Linux 6 release and information files.
passwd               passwd utility for setting or changing passwords using PAM.
policycoreutils      SELinux policy core utilities.
rootfiles            Basic files required by the root user.
rsyslog              Enhanced system logging and kernel message trapping daemons.
vim-minimal          Minimal version of the VIM editor.
yum                  yum utility for installing, updating, and managing RPM packages.

The template script edits the system configuration files under rootfs to set up networking in the container and to disable unnecessary services including volume management (LVM), device management (udev), the hardware clock, readahead, and the Plymouth boot system.
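
If you want to review which services remain enabled in the new container's root file system, one possible check is to query chkconfig within a chroot. This sketch assumes a container named ol6ctr1 and only reads the rc*.d links under rootfs:

[root@host ~]# chroot /container/ol6ctr1/rootfs chkconfig --list | grep ':on'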

9.2.5 About Veth and Macvlan

By default, the lxc-oracle template script configures networking by using a veth bridge. In this mode, a container obtains its IP address from the dnsmasq server that libvirtd runs on the private virtual bridge network (virbr0) between the container and the host. The host allows a container to connect to the rest of the network by using NAT rules in iptables, but these rules do not allow incoming connections to the container. Both the host and other containers on the veth bridge have network access to the container via the bridge.
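
The following is a hedged sketch of commands that you might use on the host to inspect this configuration. The bridge address and NAT rules that they display depend on how libvirtd has configured the default network (192.168.122.0/24 is the usual libvirt default):

[root@host ~]# virsh net-info default             # confirms that the default network is active
[root@host ~]# ip addr show virbr0                # the host's address on the private bridge
[root@host ~]# iptables -t nat -L POSTROUTING -n  # the masquerading rules that libvirt adds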

Figure 9.1 illustrates a host system with two containers that are connected via the veth bridge virbr0.

Figure 9.1 Network Configuration of Containers Using a Veth Bridge

The diagram illustrates a host system with two containers that are connected via the veth bridge virbr0. The host uses NAT rules to allow the containers to connect to the rest of the network via eth0, but these rules do not allow incoming connections to the container.


If you want systems outside the host to be able to connect to the container, the container must have an IP address on the same network as the host. One way to achieve this configuration is to use a macvlan bridge to create an independent logical network for the container. This network is effectively an extension of the local network that is connected via the host's network interface. External systems can access the container as though it were an independent system on the network, and the container has network access to other containers that are configured on the bridge and to external systems. The container can also obtain its IP address from an external DHCP server on your local network. However, unlike a veth bridge, the host system does not have network access to the container.

Figure 9.2 illustrates a host system with two containers that are connected via a macvlan bridge.

Figure 9.2 Network Configuration of Containers Using a Macvlan Bridge

The diagram illustrates a host system with two containers that are connected via a macvlan bridge, which is effectively an extension of the network that is connected via eth0.


If you do not want containers to be able to see each other on the network, you can configure the Virtual Ethernet Port Aggregator (VEPA) mode of macvlan. Figure 9.3 illustrates a host system with two containers that are separately connected to a network by a macvlan VEPA. In effect, each container is connected directly to the network, but neither container can access the other container or the host via the network.

Figure 9.3 Network Configuration of Containers Using a Macvlan VEPA

The diagram illustrates a host system with two containers that are separately connected by a macvlan VEPA to the network.


For information about configuring macvlan, see Section 9.2.6, “Modifying a Container to Use Macvlan” and the lxc.conf(5) manual page.

9.2.6 Modifying a Container to Use Macvlan

To modify a container so that it uses the bridge or VEPA mode of macvlan, edit /container/name/config and replace the following lines:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0

with these lines for bridge mode:

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth0

or these lines for VEPA mode:

lxc.network.type = macvlan
lxc.network.macvlan.mode = vepa
lxc.network.flags = up
lxc.network.link = eth0

In these sample configurations, the setting for lxc.network.link assumes that you want the container's network interface to be visible on the network that is accessible via the host's eth0 interface.

9.2.6.1 Modifying a Container to Use a Static IP Address

By default, a container connected by macvlan relies on the DHCP server on your local network to obtain its IP address. If you want the container to act as a server, you would usually configure it with a static IP address. You can configure DHCP to serve a static IP address for a container or you can define the address in the container's config file.

To configure a static IP address for a container instead of obtaining one using DHCP:

  1. Edit /container/name/rootfs/etc/sysconfig/network-scripts/ifcfg-iface, where iface is the name of the network interface, and change the following line:

    BOOTPROTO=dhcp

    to read:

    BOOTPROTO=none
  2. Add the following line to /container/name/config:

    lxc.network.ipv4 = xxx.xxx.xxx.xxx/prefix_length

    where xxx.xxx.xxx.xxx/prefix_length is the IP address of the container in CIDR format, for example: 192.168.56.100/24.

    Note

    The address must not already be in use on the network or potentially be assignable by a DHCP server to another system.

    You might also need to configure the firewall on the host to allow access to a network service that is provided by a container.
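
As an end-to-end illustration of the previous two steps, the following sketch uses hypothetical values: a container named ol6ctr1, a network interface named eth0, and the address 192.168.56.100/24. Substitute the values that apply to your container and network:

[root@host ~]# sed -i 's/^BOOTPROTO=dhcp/BOOTPROTO=none/' \
  /container/ol6ctr1/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
[root@host ~]# echo "lxc.network.ipv4 = 192.168.56.100/24" >> /container/ol6ctr1/config
[root@host ~]# lxc-stop -n ol6ctr1       # stop the container if it is currently running
[root@host ~]# lxc-start -n ol6ctr1 -d   # restart it so that the new address takes effect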