The software described in this documentation is either in Extended Support or Sustaining Support. See https://www.oracle.com/us/support/library/enterprise-linux-support-policies-069172.pdf for more information.
Oracle recommends that you upgrade the software described by this documentation as soon as possible.

1.1.2 About UEK Release 2

Note

The kernel version in UEK Release 2 (UEK R2) is stated as 2.6.39, but it is actually based on the 3.0-stable Linux kernel. This renumbering allows some low-level system utilities that expect the kernel version to start with 2.6 to run without change.
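
For example, running uname -r on a UEK R2 system reports a kernel version string that begins with 2.6.39, even though the code base tracks 3.0-stable:

    # uname -r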

UEK R2 includes the following improvements over release 1:

  • Interrupt scalability is refined, and scheduler tuning is improved, especially for Java workloads.

  • Transcendent memory improves the performance of virtualization solutions for a broad range of workloads by allowing a hypervisor to cache clean memory pages, which eliminates costly disk reads of file data by virtual machines and allows you to increase their capacity and usage level. Transcendent memory also implements an LZO-compressed page cache, or zcache, which reduces disk I/O.
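
    How transcendent memory is enabled depends on the hypervisor and on how the kernel was built, and the authoritative details are in the release notes for your kernel. As a rough sketch only, on a Xen deployment it typically amounts to adding boot parameters in the GRUB configuration; the tmem hypervisor option, the tmem and zcache kernel parameters, and the placeholder image paths shown below are assumptions to verify against your release:

    kernel /xen.gz tmem
    module /vmlinuz tmem zcache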

  • Transmit packet steering (XPS) distributes outgoing network packets from a multiqueue network device across the CPUs. XPS chooses the transmit queue for outgoing packets based on the lock contention and NUMA cost on each CPU, and it selects which CPU uses that queue to send a packet.

    To configure the list of CPUs to which XPS can forward traffic, use /sys/class/net/interface/queues/tx-N/xps_cpus, which implements a CPU bitmap for a specified network interface and transmit queue. The default value is zero, which disables XPS. To enable XPS and allow a particular set of CPUs to use a specified transmit queue on an interface, set the bits that correspond to those CPUs in the bitmap to 1. For example, to enable XPS to use CPUs 4, 5, 6, and 7 for the tx-0 queue on eth0, set the value of xps_cpus to f0 (that is, 16 + 32 + 64 + 128 = 240, which is f0 in hexadecimal):

    # echo f0 > /sys/class/net/eth0/queues/tx-0/xps_cpus

    There is no benefit in configuring XPS for a network device with a single transmit queue.

    For a system with a multiqueue network device, configure XPS so that each CPU maps onto one transmit queue. If a system has an equal number of CPUs and transmit queues, you can configure exclusive pairings in XPS to eliminate queue contention. If a system has more CPUs than queues, assign CPUs that share the same cache to the same transmit queue.
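
    For example, the following sketch pairs two CPUs with each transmit queue on a system with eight CPUs and a hypothetical four-queue eth0 device (the bitmap values 03, 0c, 30, and c0 select CPUs 0-1, 2-3, 4-5, and 6-7 respectively):

    # echo 03 > /sys/class/net/eth0/queues/tx-0/xps_cpus
    # echo 0c > /sys/class/net/eth0/queues/tx-1/xps_cpus
    # echo 30 > /sys/class/net/eth0/queues/tx-2/xps_cpus
    # echo c0 > /sys/class/net/eth0/queues/tx-3/xps_cpus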

  • The btrfs file system for Linux is designed to meet the expanding scalability requirements of large storage subsystems. For more information, see Chapter 4, The Btrfs File System.

  • Cgroups provide fine-grained control of CPU, I/O and memory resources. For more information, see Chapter 7, Control Groups.
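
    As a brief illustration, the following sketch uses the cgroup virtual file system directly to create a group and reduce its share of CPU time; the final command moves the current shell into the group. The mount point /sys/fs/cgroup/cpu and the group name app1 are arbitrary choices for this example; see Chapter 7, Control Groups, for the supported procedures:

    # mkdir -p /sys/fs/cgroup/cpu
    # mount -t cgroup -o cpu cgroup /sys/fs/cgroup/cpu
    # mkdir /sys/fs/cgroup/cpu/app1
    # echo 512 > /sys/fs/cgroup/cpu/app1/cpu.shares
    # echo $$ > /sys/fs/cgroup/cpu/app1/tasks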

  • Linux containers provide multiple user-space versions of the operating system on the same server. Each container is an isolated environment with its own process and network space. For more information, see Chapter 8, Linux Containers.
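
    For example, with the LXC tools installed, you can create a container from a template, start it, and stop it with a few commands. This is only a sketch; the template name oracle and the container name ol6ctr1 are illustrative, and the supported options are described in Chapter 8, Linux Containers:

    # lxc-create -n ol6ctr1 -t oracle
    # lxc-start -n ol6ctr1
    # lxc-stop -n ol6ctr1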

  • Transparent huge pages take advantage of the memory management capabilities of modern CPUs to allow the kernel to manage physical memory more efficiently by reducing overhead in the virtual memory subsystem, and by improving the caching of frequently accessed virtual addresses for memory-intensive workloads. For more information, see Chapter 9, HugePages.
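
    You can inspect or adjust the transparent huge page policy at run time through sysfs, assuming the mainline path shown here; the bracketed value in the output is the active policy, and the madvise setting restricts transparent huge pages to memory regions that explicitly request them:

    # cat /sys/kernel/mm/transparent_hugepage/enabled
    [always] madvise never
    # echo madvise > /sys/kernel/mm/transparent_hugepage/enabled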

  • DTrace allows you to explore your system to understand how it works, to track down performance problems across many layers of software, or to locate the causes of aberrant behavior. DTrace is currently available only on ULN. For more information, see Chapter 11, DTrace.
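
    As a quick illustration of the kind of exploration DTrace makes possible, the following one-liner counts system calls by process name until you interrupt it. It assumes that the dtrace utility and the syscall provider are installed from ULN; see Chapter 11, DTrace, for the providers supported in this release:

    # dtrace -n 'syscall:::entry { @num[execname] = count(); }'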

  • The configfs virtual file system, engineered by Oracle, allows you to configure the settings of kernel objects where a file system or device driver implements this feature. configfs provides an alternative to the ioctl() system call for changing the values of settings, and it complements the intended functionality of sysfs as a means to view kernel objects.

    The cluster stack for OCFS2, O2CB, uses configfs to set cluster timeouts and to examine the cluster status.

    The low-level I/O (LIO) driver uses configfs as a multiprotocol SCSI target to support the configuration of FCoE, Fibre Channel, iSCSI, and InfiniBand using the lio-utils tool set.

    For more information about the implementation of configfs, see http://www.kernel.org/doc/Documentation/filesystems/configfs/configfs.txt.
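
    If configfs is not already mounted, you can mount it at its conventional location and browse the objects that configfs-aware drivers expose; the directories that appear depend on which such drivers are loaded:

    # mount -t configfs none /sys/kernel/config
    # ls /sys/kernel/config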

  • The dm-nfs feature creates virtual disk devices (LUNs) where the data is stored in an NFS file instead of on local storage. Managed networked storage has many benefits over keeping virtual devices on a disk that is local to the physical host.

    The dm-nfs kernel module provides a device-mapper target that allows you to treat a file on an NFS file system as a block device that can be loopback-mounted locally.

    The following sample code demonstrates how to use dmsetup to create a mapped device (/dev/mapper/$dm_nfsdev) for the file $filename that is accessible on a mounted NFS file system. Note that the device-mapper table expresses the length of the device in 512-byte sectors, so the file size reported by stat is divided by 512:

    nblks=$((`stat -c '%s' $filename` / 512))
    echo -n "0 $nblks nfs $filename 0" | dmsetup create $dm_nfsdev

    A sample use case is the fast migration of guest VMs for load balancing or when a physical host requires maintenance. This functionality is also possible using iSCSI LUNs, but the advantage of dm-nfs is that you can manage new virtual drives on a local host system, rather than requiring a storage administrator to initialize new LUNs.

    dm-nfs uses asynchronous direct I/O so that I/O is performed efficiently and coherently. A guest's disk data is not cached locally on the host. If the host crashes, there is a lower probability of data corruption. If a guest is frozen, you can take a clean backup of its virtual disk, as you can be certain that its data has been fully written out.
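
    Once the mapping exists, the device behaves like any other block device. As a hedged sketch, you might format it with a file system of your choice and, when it is no longer needed, remove the mapping (mkfs.ext3 is only one possible choice here):

    # mkfs.ext3 /dev/mapper/$dm_nfsdev
    # dmsetup remove $dm_nfsdev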