3.1 Installing Oracle VM Server for x86 from PXE Boot

In deployments where multiple systems must be installed, it is common to perform a network-based installation by configuring target systems to load a PXE boot image from a TFTP server configured on the same network. This deployment strategy typically suits environments where many Oracle VM Server instances are to be installed on x86 hardware at once.

This section describes some of the basic configuration steps required on a single Oracle Linux server that is set up to provide all of the services needed to handle a PXE boot environment. There are many different approaches to the architecture and choices of software required to service PXE boot requests. The information provided here is intended only as a guideline for setting up such an environment.

Note

As of Release 3.4.5, the updated Xen hypervisor for Oracle VM Server is delivered as a single binary named xen.mb.efi (instead of xen.gz), which can be loaded by the EFI loader and by the Multiboot and Multiboot2 protocols.

3.1.1 PXE Boot Overview

PXE boot is a method of installing Oracle VM Server on multiple client machines across the network. In general, to successfully perform a PXE boot, you need to do the following:

  1. Install and configure an Oracle Linux server to provide services and host files across the network.

  2. Configure a DHCP service to direct client machines to the location of a boot loader.

  3. Configure a TFTP service to host the boot loader, kernel, initial RAM disk (initrd) image, Xen hypervisor, and configuration files.

  4. Host the contents of the Oracle VM Server ISO image file on an NFS or HTTP server.

  5. Create a kickstart configuration file for the Oracle VM Server installation.

    A kickstart configuration file allows you to automate the Oracle VM Server installation steps that require user input. While not necessary to perform a PXE boot, using a kickstart configuration file is recommended. For more information, see Section 2.1.4, “Performing a Kickstart Installation of Oracle VM Server”.

  6. Set up the PXE client boot loaders.

    1. For BIOS-based PXE clients you use the pxelinux.0 boot loader that is available from the syslinux package. For UEFI-based PXE clients in a non-Secure Boot configuration, you use the grubx64.efi boot loader.

      Note

      Oracle VM Release 3.4.1 and Release 3.4.2 require you to build the boot loader for UEFI-based PXE clients. For more information, see Section A.1, “Setting Up PXE Boot for Oracle VM Server Release 3.4.1 and Release 3.4.2”.

    2. Create the required boot loader configuration files.

    3. Host the boot loader and configuration files on the TFTP server.

3.1.2 Configuring the DHCP Service

The DHCP service handles requests from PXE clients to specify the location of the TFTP service and boot loader files.

Note
  • If your network already has a DHCP service configured, you should edit that configuration to include an entry for PXE clients. If you configure two different DHCP services on a network, requests from clients can conflict and result in network issues and PXE boot failure.

  • The following examples and references are specific to ISC DHCP.

Configure the DHCP service as follows:

  1. Install the dhcp package.

    # yum install dhcp
  2. Edit /etc/dhcp/dhcpd.conf and configure an entry for the PXE clients as appropriate. See Example DHCP Entry for PXE Clients.

  3. Start the DHCP service and configure it to start after a reboot.

    # service dhcpd start
    # chkconfig dhcpd on
    Note

If the server has more than one network interface, the DHCP service uses the /etc/dhcp/dhcpd.conf file to determine which interface to listen on. If you make any changes to /etc/dhcp/dhcpd.conf, restart the dhcpd service.

  4. Configure the firewall to accept DHCP requests, if required.
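
    For example, with the iptables firewall service that is standard on Oracle Linux 6 (matching the service and chkconfig commands used in this section), rules similar to the following sketch accept DHCP requests; adapt them to your own firewall policy:

    # iptables -I INPUT -p udp --dport 67 -j ACCEPT
    # service iptables save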

Example DHCP Entry for PXE Clients

The following is an example entry in dhcpd.conf for PXE clients:

set vendorclass = option vendor-class-identifier;
option pxe-system-type code 93 = unsigned integer 16;
set pxetype = option pxe-system-type;

option domain-name "example.com";

subnet 192.0.2.0 netmask 255.255.255.0 {
  option domain-name-servers 192.0.2.1;
  option broadcast-address 192.0.2.255;
  option routers 192.0.2.1;
  default-lease-time 14400;
  max-lease-time 28800;
  if substring(vendorclass, 0, 9)="PXEClient" {
    if pxetype=00:07 or pxetype=00:09 {
      filename "grub2/grubx64.efi";
    } else {
      filename "pxelinux/pxelinux.0";
    }
  }
  pool {
    range 192.0.2.14 192.0.2.24;
  }
  next-server 192.0.2.10;
}

host svr1 {
hardware ethernet 08:00:27:c6:a1:16;
fixed-address 192.0.2.5;
option host-name "svr1";
}

host svr2 {
hardware ethernet 08:00:27:24:0a:56;
fixed-address 192.0.2.6;
option host-name "svr2";
}
  • The preceding example configures a pool of generally available IP addresses in the range 192.0.2.14 through 192.0.2.24 on the 192.0.2.0/24 subnet. Any PXE-booted system on the subnet uses the boot loader that the filename parameter specifies for its PXE type.

  • The boot loader grubx64.efi for UEFI-based clients is located in the grub2 subdirectory of the TFTP server directory. In a non-Secure Boot configuration, you can specify grubx64.efi as the boot loader.

  • The boot loader pxelinux.0 for BIOS-based clients is located in the pxelinux subdirectory of the TFTP server directory.

  • The next-server statement specifies the IP address of the TFTP server from which a client can download the boot loader file.

    Note

    You should include a next-server statement even if you use the same server to host both the DHCP and TFTP services. Otherwise, some boot loaders cannot retrieve their configuration files, which can cause the client to reboot, hang, or display a boot prompt.

  • The static IP addresses 192.0.2.5 and 192.0.2.6 are reserved for svr1 and svr2, which are identified by their MAC addresses.
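
After editing dhcpd.conf, you can check the file for syntax errors before restarting the service:

# dhcpd -t -cf /etc/dhcp/dhcpd.conf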

3.1.3 Configuring the TFTP Service

The TFTP service hosts boot loader files, configuration files, and binaries on the network so PXE clients can retrieve them.

Configure the TFTP service as follows:

  1. Install the tftp-server package.

    # yum install tftp-server
  2. Open /etc/xinetd.d/tftp for editing and then:

    1. Set no as the value of the disable parameter.

      disable = no
    2. Set /tftpboot as the TFTP root.

      server_args = -s /tftpboot
  3. Save and close /etc/xinetd.d/tftp. See the example entry after this procedure.

  4. Create the /tftpboot directory if it does not already exist.

    # mkdir /tftpboot
  5. Restart the xinetd service.

    # service xinetd restart
  6. Configure the firewall to allow TFTP traffic, if required.
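
After the edits in step 2, the service entry in /etc/xinetd.d/tftp should look similar to the following sketch. Parameter values other than disable and server_args are package defaults and might differ on your system:

service tftp
{
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /tftpboot
        disable                 = no
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}

If the server uses iptables, a rule similar to the following allows the TFTP traffic described in step 6; adapt it to your own firewall policy:

# iptables -I INPUT -p udp --dport 69 -j ACCEPT
# service iptables save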

3.1.4 Copying the Xen Hypervisor, Installer Kernel, and RAM Disk Image

The TFTP service hosts the following files so that PXE clients can retrieve them over the network:

  • xen.mb.efi - Xen hypervisor for Oracle VM Server

  • vmlinuz - installer kernel

  • initrd.img - initial RAM disk image

Copy the files to your TFTP service as follows:

  1. Create an isolinux subdirectory in the TFTP server root.

    # mkdir /tftpboot/isolinux
  2. Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.

  3. Copy the contents of images/pxeboot from the ISO image file into the isolinux subdirectory that you created.

    # cp /mnt/images/pxeboot/* /tftpboot/isolinux/

    Substitute /mnt with the path to the mount point where you mounted the ISO image file.
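
    To verify the copy, list the isolinux directory. With a Release 3.4.5 or later ISO image, it should contain at least the installer kernel, the initial RAM disk image, and the Xen hypervisor binary:

    # ls /tftpboot/isolinux
    initrd.img  vmlinuz  xen.mb.efi

    You can also confirm that a client can fetch these files over the network by using a command-line TFTP client from another machine. The following assumes the tftp client package is installed and that the TFTP server's address is 192.0.2.10, as in the earlier DHCP example:

    # tftp 192.0.2.10 -c get isolinux/vmlinuz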

3.1.5 Hosting the Contents of the Oracle VM Server ISO File

You must host the contents of the Oracle VM Server ISO image file over the network so that the PXE clients can access them. You can use an NFS or HTTP server as appropriate.

Note

Hosting the ISO image file itself over the network is not sufficient. You must make the entire contents of the ISO image file available in a single directory.

The following steps provide an example using an NFS server:

  1. Install an NFS server if necessary.

    # yum install nfs-utils
  2. Create a directory for the contents of the Oracle VM Server ISO image file.

    # mkdir -p /srv/install/ovs
  3. Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.

  4. Copy the contents of the Oracle VM Server ISO image file into the directory you created.

    # cp -r /mnt/* /srv/install/ovs

    Substitute /mnt with the path to the mount point where you mounted the ISO image file.

  5. Edit /etc/exports to configure your NFS exports.

    /srv/install *(ro,async,no_root_squash,no_subtree_check,insecure)

    Depending on your security requirements, you can restrict this export to particular hosts.

  6. Start the NFS service.

    # service nfs start

    If the NFS service is already running and you make any changes to the /etc/exports file, run the following command to update the exports table within the NFS kernel server:

    # exportfs -va

  7. Configure the NFS service to always start at boot.

    # chkconfig nfs on
    # chkconfig nfslock on
  8. Configure the firewall to allow clients to access the NFS server, if required.
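
To verify that the export is visible to clients, you can query the NFS server from another machine on the network. The following sketch assumes the install server's address is 192.0.2.10, as in the earlier DHCP example:

# showmount -e 192.0.2.10
# mount -t nfs 192.0.2.10:/srv/install/ovs /mnt
# ls /mnt
# umount /mnt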

3.1.6 Copying the Kickstart Configuration File

To perform a PXE boot, you should create a kickstart configuration file to automate the installation process. The kickstart configuration file provides the input that the Anaconda installation wizard requires. If you have not already created a kickstart configuration file, see Section 2.1.4, “Performing a Kickstart Installation of Oracle VM Server”.

You must make the kickstart configuration file available to PXE clients over the network. To do this, you can copy the file to the NFS or HTTP server where you host the contents of the Oracle VM Server ISO image file, as follows:

# mkdir -p /srv/install/kickstart
# cp /tmp/OVS_ks.conf /srv/install/kickstart/ks.cfg

Substitute /tmp/OVS_ks.conf with the path to your kickstart configuration file for the Oracle VM Server installation.
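
If you have not yet written the file, the following heavily abbreviated sketch shows the general shape of a kickstart configuration for a network installation. It assumes the NFS locations used elsewhere in this section; Section 2.1.4 describes the directives that Oracle VM Server actually requires:

# Illustrative sketch only; see Section 2.1.4 for the required directives.
install
nfs --server=192.0.2.10 --dir=/srv/install/ovs
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw password
timezone --utc UTC
zerombr
clearpart --all --initlabel
autopart
reboot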

3.1.7 Setting Up the Boot Loader

PXE clients require a boot loader to load the Xen hypervisor and the Linux installation kernel.

For BIOS-based PXE clients you use the pxelinux.0 boot loader that is available from the syslinux package.

For UEFI-based PXE clients in a non-Secure Boot configuration, you use the grubx64.efi boot loader that is available from the Oracle VM Server ISO image file.

Note

Oracle VM Release 3.4.1 and Release 3.4.2 require you to build the boot loader for UEFI-based PXE clients. Before you proceed with any of the following steps, you must first complete the following procedure: Section A.1.1.1, “Building the GRUB2 Boot Loader”.

3.1.7.1 Setting Up the PXELinux Boot Loader for BIOS-based PXE Clients

If you are performing a PXE boot for BIOS-based PXE clients, you use the pxelinux.0 boot loader from the syslinux package.

Getting the PXELinux Boot Loader

To get the PXELinux boot loader, you must install syslinux.

Important

The PXELinux boot loader files must match the kernel requirements of the DHCP server. You should install the syslinux package that is specific to the Oracle Linux installation on which your DHCP service runs.

Complete the following steps:

  1. Install the syslinux package.

    # yum install syslinux
  2. If you have SELinux enabled, install the syslinux-tftpboot package to ensure files have the correct SELinux context.

    # yum install syslinux-tftpboot
Hosting the PXELinux Boot Loader

After you get the PXELinux boot loader, you copy the following files to the TFTP server so the BIOS-based PXE clients can access them over the network:

  • pxelinux.0 - PXELinux binary

  • vesamenu.c32 - graphical menu system module

  • mboot.c32 - multiboot module that loads the Xen hypervisor together with the installer kernel and initial RAM disk. You can use mboot.c32 without vesamenu.c32 if you do not require a graphical boot menu.

To host the boot loader, do the following:

  1. Create a pxelinux directory in the TFTP root.

  2. Copy the boot loader and menu modules to the pxelinux directory.

    # cp /usr/share/syslinux/pxelinux.0 /tftpboot/pxelinux/
    # cp /usr/share/syslinux/vesamenu.c32 /tftpboot/pxelinux/
    # cp /usr/share/syslinux/mboot.c32 /tftpboot/pxelinux/
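
    After the copy, the pxelinux directory should contain all three files:

    # ls /tftpboot/pxelinux
    mboot.c32  pxelinux.0  vesamenu.c32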
Configuring the PXELinux Boot Loader

For BIOS-based PXE clients, you must create two boot loader configuration files on the TFTP server, as follows:

  1. Create the pxelinux.cfg directory.

    # mkdir /tftpboot/pxelinux/pxelinux.cfg
  2. Create a PXE menu configuration file.

    # touch /tftpboot/pxelinux/pxelinux.cfg/pxe.conf
  3. Create a PXE configuration file.

    # touch /tftpboot/pxelinux/pxelinux.cfg/default
  4. Configure pxe.conf and default as appropriate. See Example Boot Loader Configurations.

Example Boot Loader Configurations

The following is an example of pxelinux.cfg/pxe.conf:

MENU TITLE  PXE Server
  NOESCAPE 1
  ALLOWOPTIONS 1
  PROMPT 0
  menu width 80
  menu rows 14
  MENU TABMSGROW 24
  MENU MARGIN 10
  menu color border               30;44      #ffffffff #00000000 std

The following is an example of pxelinux.cfg/default:

DEFAULT vesamenu.c32
  TIMEOUT 800
  ONTIMEOUT BootLocal
  PROMPT 0
  MENU INCLUDE pxelinux.cfg/pxe.conf
  NOESCAPE 1
  LABEL BootLocal
          localboot 0
          TEXT HELP
          Boot to local hard disk
          ENDTEXT
  LABEL OVS
          MENU LABEL OVS
          KERNEL mboot.c32
          # Note that the APPEND statement must be a single line, the \ delimiter indicates
          # line breaks that you should remove
          APPEND /isolinux/xen.mb.efi --- /isolinux/vmlinuz ip=dhcp \
                 dom0_mem=max:128G dom0_max_vcpus=20 \
                 ksdevice=eth0 ks=nfs:192.0.2.10:/srv/install/kickstart/ks.cfg \
                 method=nfs:192.0.2.10:/srv/install/ovs --- /isolinux/initrd.img
          TEXT HELP
          Install OVM Server
          ENDTEXT

The default behavior on timeout is to boot to the local hard disk. To force an installation instead, change the ONTIMEOUT parameter to point to the OVS menu item. Remember that when an installation completes, the server reboots; if this option is not changed back to BootLocal, the server enters an installation loop. There are numerous approaches to handling this, each depending on your environment, requirements, and policies. The most common approach is to boot the servers with one configuration, wait until they are all in the install process, and then change this configuration file so that they return to local boot when they reboot.

The KERNEL location points to the mboot.c32 module, which performs a multiboot operation so that the installer loads within a Xen environment. This is necessary for two reasons. First, it establishes that the Xen hypervisor can at least run on the hardware before installation. Second, and more importantly, device naming can differ after installation if you do not run the installer from within the Xen hypervisor, leading to device configuration problems after installation.

In the APPEND line of the preceding example:

  • Some parameters in APPEND are broken into separate lines with the \ delimiter for readability purposes. A valid configuration places the entire APPEND statement on a single line.

  • The Xen hypervisor is loaded first from isolinux/xen.mb.efi in the TFTP server root.

  • The installer kernel is located within the path isolinux/vmlinuz in the TFTP server root.

  • The IP address for the installer kernel is acquired using DHCP.

  • Limits are applied to dom0 for the installer to ensure that the installer is stable while it runs. This is achieved using the default parameters: dom0_mem=max:128G and dom0_max_vcpus=20.

  • The ksdevice parameter specifies the network interface to use. You should specify a value that reflects your network configuration, such as eth0, a specific MAC address, or an appropriate keyword. Refer to the appropriate kickstart documentation for more information.

  • The initial ramdisk image is located within the path isolinux/initrd.img in the TFTP server root.

3.1.7.2 Setting Up the GRUB2 Boot Loader for UEFI-based PXE Clients

If you are performing a PXE boot for UEFI-based PXE clients, you can use the GRUB2 boot loader that is available on the Oracle VM Server ISO image file.

Hosting the GRUB2 Boot Loader

Host the GRUB2 boot loader on the TFTP server so PXE clients can access it over the network, as follows:

  1. Create a grub2 directory in the TFTP root.

  2. Mount the Oracle VM Server ISO image file as a loopback device. For instructions, see Section 1.4, “Loopback ISO Mounts”.

  3. In a non-Secure Boot configuration, you need only copy the grubx64.efi boot loader from the /EFI/BOOT/ directory to the grub2 directory.

    # cp path/EFI/BOOT/grubx64.efi /tftpboot/grub2/

    Substitute path with the directory where you mounted the Oracle VM Server ISO image file.

  4. Copy the GRUB2 modules and files to the appropriate directory.

    # mkdir -p /tftpboot/grub2/x86_64-efi
    # cp path/grub2/lib/grub/x86_64-efi/*.{lst,mod} /tftpboot/grub2/x86_64-efi

    Substitute path with the path to the contents of the Oracle VM Server ISO image file on your file system.

Setting Up the GRUB2 Configuration

Complete the following steps to set up the GRUB2 configuration:

  1. Create an EFI/redhat subdirectory in the grub2 directory of the TFTP server root.

  2. Copy grub.cfg from the Oracle VM Server ISO image file to the directory.

    # cp path/EFI/BOOT/grub.cfg /tftpboot/grub2/EFI/redhat/grub.cfg-01-cd-ef-gh-ij-kl-mn

    Where 01 is the hardware type (Ethernet) and cd-ef-gh-ij-kl-mn is the MAC address of the network interface card (NIC) of the PXE boot client.

    Substitute path with the directory where you mounted the Oracle VM Server ISO image file.

  3. Modify grub.cfg on the TFTP server as appropriate. See Example grub.cfg.

    Oracle VM Server provides a GRUB2 boot loader for UEFI and BIOS-based PXE clients. However, the grub.cfg must be compatible with GRUB, not GRUB2. The Anaconda installation program for Oracle VM Server is compatible with GRUB only. You can find more information at: http://docs.oracle.com/cd/E37670_01/E41138/html/ch04s02s01.html

Example grub.cfg

The following is an example of grub.cfg:

menuentry 'Install Oracle VM Server' --class fedora --class gnu-linux --class gnu --class os {
        echo 'Loading Xen...'
        multiboot2 /isolinux/xen.mb.efi dom0_mem=max:128G dom0_max_vcpus=20
        echo 'Loading Linux Kernel...'
        module2 /isolinux/vmlinuz ip=dhcp \
        repo=nfs:192.0.2.10:/srv/install/ovs \
        ks=nfs:192.0.2.10:/srv/install/kickstart/ks.cfg \
        ksdevice=00:10:E0:29:B6:C0
        echo 'Loading initrd...'
        module2 /isolinux/initrd.img
}

In the preceding example:

  • Some parameters in the module2 statement are broken into separate lines with the \ delimiter for readability. A valid configuration places all parameters and values on a single line.

  • The Xen hypervisor is loaded first from isolinux/xen.mb.efi in the TFTP server root.

  • The following parameters apply limits to dom0. These limits ensure that the installation program is stable while it runs: dom0_mem=max:128G and dom0_max_vcpus=20.

  • The installer kernel is located within the path isolinux/vmlinuz in the TFTP server root.

  • The IP address for the installer kernel is acquired using DHCP.

  • The repo parameter specifies the IP address of the NFS server that hosts the contents of the Oracle VM Server ISO image file and the path to those contents.

  • The ks parameter specifies the IP address of the NFS server that hosts the kickstart configuration file and the path to the file.

  • The ksdevice parameter specifies the network interface to use. You should specify a value that reflects your network configuration, such as eth0, a specific MAC address, or an appropriate keyword.

  • The initial ramdisk image is located within the path isolinux/initrd.img in the TFTP server root.

3.1.8 Starting the Installation Process

To start the Oracle VM Server installation for PXE clients, do the following:

  1. For BIOS-based PXE clients using the PXELinux boot loader, update the /tftpboot/pxelinux/pxelinux.cfg/default configuration file to use the OVS option as the default in the case of a timeout.

  2. Configure the network boot or PXE boot option in the BIOS or UEFI settings for each client, as appropriate.

  3. Reboot each target client.

Each client makes a DHCP request during the network boot process. The DHCP service allocates an IP address to the client and provides the path to the boot loader on the TFTP server. Each target client then makes a TFTP request for the boot loader and loads the default menu. When the menu timeout elapses, each client loads the kernel and initrd image from the TFTP service and begins the boot process. The client then connects to the NFS or HTTP server and installation begins.
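
To follow the progress from the install server, you can watch the system log, where the dhcpd and in.tftpd services record each client request:

# tail -f /var/log/messages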