Chapter 1 New Features and Changes
The Unbreakable Enterprise Kernel Release 3 (UEK R3) is Oracle's third major release of its heavily tested and optimized operating system kernel for Oracle Linux 6 and 7 on the x86-64 architecture. It is based on the mainline Linux kernel version 3.8.13.
The 3.8.13-98 release is the sixth quarterly update release for UEK R3. It includes security and bug fixes, as well as driver updates.
Oracle actively monitors upstream checkins and applies critical bug and security fixes to UEK R3.
UEK R3 uses the same versioning model as the mainline Linux kernel. Some applications might not understand the 3.x versioning scheme. If an application requires a 2.6 context, you can use the uname26 wrapper command to start it. However, regular Linux applications are usually neither aware of, nor affected by, the Linux kernel version number.
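As an illustration of the 2.6 context, the following sketch compares the native version string with what a legacy application would see. It assumes the setarch command from util-linux, whose --uname-2.6 option applies the same personality that uname26 uses on Oracle Linux:

```shell
# The kernel reports its native 3.x version string:
uname -r

# The same query under a 2.6 personality, as a legacy application
# started with "uname26 command" would see it:
setarch "$(uname -m)" --uname-2.6 uname -r
```

Under the 2.6 personality, the kernel maps its real version onto a 2.6.x string, so version checks in older applications continue to succeed.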
1.1 Notable Changes
Support for installing and using Oracle Linux on systems that have enabled UEFI Secure Boot. A system in Secure Boot mode will load only boot loaders and kernels that have been signed by Oracle.
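As a quick way to see whether Secure Boot can apply at all, the presence of the EFI sysfs tree indicates a UEFI boot. This is a sketch using standard sysfs paths; reading the kernel log may require root:

```shell
# Determine how the system was booted; only UEFI systems can use Secure Boot.
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
    # On a Secure Boot system the kernel logs the state at boot time:
    dmesg 2>/dev/null | grep -i "secure boot" || true
else
    echo "Booted via legacy BIOS; Secure Boot does not apply"
fi
```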
Virtual eXtensible Local Area Network (VXLAN) and Generic Routing Encapsulation (GRE) support added to the UEK Open vSwitch kernel module (kmod-openvswitch-uek).
Support for Intel Sandy Bridge memory controllers enabled.
Cisco SCSI NIC driver (snic) version 0.0.1.18 added.
Enabled hardware support for the SGI UltraViolet 3 platform.
Kernel modules are now signed using the SHA-512 hash algorithm (previously SHA-256 was used).
Enabled support for more than eight PTP hardware clocks (PHC).
Bug fixes for btrfs, ext4, xfs, and OCFS2 file systems.
Bug fixes to support Oracle Linux guests running on Microsoft Azure or Hyper-V.
This kernel update also includes updated dependencies for QLogic firmware. The dependencies should be resolved when you install the new kernel.
1.2 LXC Improvements
With version 1.0.7 and later of the Linux Containers (lxc) package under UEK R3 QU6, you can adjust the values of the following kernel parameters under the /proc hierarchy in an Oracle Linux container if you specify the --privileged option to the lxc-oracle template script:
/proc/sys/kernel/msgmax
/proc/sys/kernel/msgmnb
/proc/sys/kernel/sem
/proc/sys/kernel/shmall
/proc/sys/kernel/shmmax
/proc/sys/kernel/shmmni
/proc/sys/net/ipv4/conf/default/accept_source_route
/proc/sys/net/ipv4/conf/default/rp_filter
/proc/sys/net/ipv4/ip_forward
Each of these parameters can have a different value than that configured for the host system and for other containers running on the host system. The default value is derived from the template when you create the container. Oracle recommends that you change a setting only if the Oracle database or other application requires a value other than the default for a container.
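For illustration, reading and setting one of these parameters from a shell inside a privileged container looks like the following sketch; the shmmax value shown is an arbitrary example, not a recommended setting:

```shell
# Current value of the System V shared-memory limit in this namespace:
cat /proc/sys/kernel/shmmax

# In a container created with the --privileged option, the value can be
# changed independently of the host (requires root inside the container):
# echo 68719476736 > /proc/sys/kernel/shmmax
```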
The --privileged option also adds the CAP_SYS_NICE capability, which allows you to set negative nice values (that is, more favored for scheduling) for processes from within the container.
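The effect can be demonstrated with the nice command. This is a sketch: the negative value only takes effect where CAP_SYS_NICE is present, such as in a container created with --privileged:

```shell
# Lowering priority (positive niceness) is always allowed:
nice -n 10 sh -c 'echo "running at niceness $(nice)"'

# Raising priority (negative niceness) needs CAP_SYS_NICE; without it,
# GNU nice prints a warning and runs the command at the current niceness:
nice -n -5 sh -c 'echo "running at niceness $(nice)"'
```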
Prior to UEK R3 QU6, the following host-only parameters were not visible within the container due to kernel limitations:
/proc/sys/net/core/rmem_default
/proc/sys/net/core/rmem_max
/proc/sys/net/core/wmem_default
/proc/sys/net/core/wmem_max
/proc/sys/net/ipv4/ip_local_port_range
/proc/sys/net/ipv4/tcp_syncookies
With UEK R3 QU6 and later, these parameters are read-only within the container to allow Oracle Database and other applications to be installed. You can change the values of these parameters only from the host. Any changes that you make to host-only parameters apply to all containers on the host.
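For example, raising one of these socket-buffer limits is done on the host, and the new value is then visible (read-only) in every container. A sketch, where the value shown is an arbitrary example:

```shell
# Read the current maximum receive-buffer size (works on host and in containers):
cat /proc/sys/net/core/rmem_max

# On the host only, as root, raise it for all containers at once:
# sysctl -w net.core.rmem_max=4194304
```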
For more information, see Configuring Kernel Parameters and Resource Limits in the Oracle Database 11.2 Quick Installation Guide, Configuring Kernel Parameters and Resource Limits in the Oracle Database 12.1 Quick Installation Guide, Linux Containers in Oracle® Linux 6: Administrator's Solutions Guide, and Linux Containers in Oracle® Linux 7: Administrator's Guide.
(Bug ID 21267882)
1.3 Xen Improvements
Prevent soft lockups due to long-running hypercalls.
Rewrite of the Physical to Machine (P2M) table to lower SWIOTLB usage.
Fixed memory leaks in the Xen block driver.
Fixed compound pages that were not handled in the xen-netfront driver.
1.4 DTrace Improvements
DTrace Kernel Modules (Version 0.4.5)
You can now use User-Level Statically Defined Tracing (USDT) probes in 32-bit applications on 64-bit hosts.
The d_path() D subroutine requires its argument to be a pointer to a path structure that corresponds to a file that is known to the current task.
A minor memory leak with the DTrace help tracing facility has been fixed. When the dtrace.ko module was loaded, a buffer (by default 64K) was allocated and never released.
Stack backtraces are more accurate as a result of various fixes to adjust the number of frames to skip for specific probes.
The stack depth was being determined by requesting a backtrace to be written into a temporary buffer that was allocated with vmalloc(), which posed significant problems when probes were executing in a context that does not support memory allocations. The buffer is now obtained from the scratch area of memory that DTrace provides for probe processing.
A system crash could occur if you passed an invalid pointer to d_path(). Due to its implementation, it is not possible to depend on safe memory accesses to avoid this. You must now validate the pointer before calling d_path().
DTrace User Space Tools (Version 0.4.6)
The dtrace-utils-devel package now requires the corresponding version of the dtrace-utils package.
The dtrace-utils package has been renamed.
There is a new dtrace -vV option, which reports information on the released version of DTrace, as well as the internal ID of dtrace(1) and libdtrace(1).
The <dtrace.h> header file can be included to support development of DTrace consumer applications.
DTrace only loads D libraries from directories with a name that corresponds to the current running kernel.
Processes that receive SIGTRAP during normal operation now work even when being traced. Previously, the SIGTRAP was ignored.
DTrace no longer loses track of processes that perform exec() while DTrace is examining their dynamic linker state.
DTrace no longer leaves breakpoints in forked processes.
DTrace no longer considers that it knows the state of the symbol table of processes it has stopped monitoring.
DTrace no longer crashes multi-threaded processes that use dlopen() or dlclose().
1.5 Driver Updates
The Unbreakable Enterprise Kernel supports a wide range of hardware and devices. In close cooperation with hardware and storage vendors, Oracle has updated several device drivers in this release.
| Manufacturer | Driver | Version | Description |
|---|---|---|---|
| Adaptec | | 1.2-1[40709]-ms | SCSI driver |
| Broadcom | | 2.2.5o | NetXtreme II 1 Gigabit network adapter driver |
| Broadcom | | 2.9.3 | NetXtreme II FCoE driver |
| Broadcom | | 2.11.2.0 | NetXtreme II iSCSI driver |
| Broadcom | | 1.712.33 | NetXtreme II 10 Gigabit network adapter driver |
| Broadcom | | 2.5.20h | NetXtreme II converged NIC core driver |
| Cisco | | 1.6.0.18 | FCoE HBA driver |
| Cisco | snic | 0.0.1.18 | SCSI NIC driver |
| Emulex | | 10.6.0.2 | OneConnect (Blade Engine 2) NIC driver |
| Emulex | | 0:10.6.61.1 | LightPulse Fibre Channel SCSI driver |
| Intel | i40e | 1.3.2-k | Ethernet Connection XL710 network driver |
| Intel | | 1.2.25 | XL710 X710 virtual function network driver |
| Intel | | 4.0.3 | 10 Gigabit PCI Express network driver |
| Intel | | 2.16.1 | 10 Gigabit PCI Express virtual function network driver |
| Intel | | 0.10 | NVM Express device driver |
| QLogic | | 8.07.00.18.39.0-k | Fibre Channel HBA driver |
| QLogic | | 5.04.00.07.06.02-uek3 | iSCSI HBA driver |
| VMware | | 1.0.3.0-k | PVSCSI driver |
| VMware | | 1.1.3.0-k | Virtual machine communication interface |
| VMware | | 1.0.1.0-k | Virtual socket family |
1.6 Technology Preview
The following features included in the Unbreakable Enterprise Kernel Release 3 are still under development, but are made available for testing and evaluation purposes. Do not use these features on production systems.
DRBD (Distributed Replicated Block Device)
A shared-nothing, synchronously replicated block device (RAID1 over network), designed to serve as a building block for high availability (HA) clusters. It requires a cluster manager (for example, pacemaker) for automatic failover.
Kernel module signing facility
Applies cryptographic signature checking to modules on module load, checking the signature against a ring of public keys compiled into the kernel. GPG is used to do the cryptographic work and determines the format of the signature and key data.
NFS over RDMA Client
Enables you to use NFS over the RDMA transport on the Oracle InfiniBand stack. This is more efficient than using the TCP/IPoIB transport. The technology preview does not include NFS over RDMA server support, or support for NFS over RDMA in virtualized environments. NFS version 3 and 4 are supported. Currently, only the Mellanox ConnectX-2 and ConnectX-3 Host Channel Adapters (HCAs) are supported. The client passes the full Connectathon NFS test suite using these HCAs. The Release Notes will be updated if additional adapters are supported after the initial release.
See Section 1.6.1, “Using the NFS over RDMA Client” for details of how to use the feature.
Swap files on NFS shares
Ability for a system to use swap files that reside on NFS shares. For information about using swap files, see the swapon(8) manual page and the Administrator's Guide for your Oracle Linux release.
Transcendent Memory
Transcendent Memory (tmem) provides a new approach for improving the utilization of physical memory in a virtualized environment by claiming underutilized memory in a system and making it available where it is most needed. From the perspective of an operating system, tmem is fast pseudo-RAM of indeterminate and varying size that is useful primarily when real RAM is in short supply. To learn more about this technology and its use cases, see the Transcendent Memory project page at https://oss.oracle.com/projects/tmem/.
1.6.1 Using the NFS over RDMA Client
The following instructions also include details for enabling an NFS over RDMA server. These are provided as an example only, as the NFS over RDMA server is currently not supported with the UEK R3 kernel.
Install an RDMA device, set up InfiniBand and enable IPoIB.
The Oracle Linux OFED packages are available from the following channels:
Oracle Linux 6: ol6_x86_64_ofed_UEK
Oracle Linux 7: ol7_x86_64_UEKR3_OFED20
Check that the RDMA device is working.
# cat /sys/class/infiniband/driver_name/ports/1/state
4: ACTIVE
where driver_name is the RDMA device driver, for example mlx4_0.
Verify the physical InfiniBand interfaces and links.
Check that the hosts can be contacted through the InfiniBand switch by using commands such as ibhosts and ibnetdiscover.
Check the connection between the NFS client and NFS server.
You can configure the settings for an InfiniBand interface in the /etc/sysconfig/network-scripts/ifcfg-ibN file.
You can use the ping command to check the connection. For example:
nfs-server$ ip addr add 10.196.0.101/24 dev ib0
nfs-client$ ip addr add 10.196.0.102/24 dev ib0
nfs-server$ ping 10.196.0.102
nfs-client$ ping 10.196.0.101
Install the nfs-utils package on the NFS client and server.
Configure the NFS shares.
Edit the /etc/exports file. Define the directories that the NFS server will make available for clients to mount, using the IPoIB addresses of the clients. For example:
/export_dir 10.196.0.102(fsid=0,rw,async,insecure,no_root_squash)
/export_dir 10.196.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
On the NFS server, load the svcrdma kernel module and start the NFS service.
Oracle Linux 6:
# modprobe svcrdma
# service nfs start
# echo rdma 20049 > /proc/fs/nfsd/portlist
Oracle Linux 7:
# modprobe svcrdma
# systemctl start nfs-server
# echo rdma 20049 > /proc/fs/nfsd/portlist
Note: The rdma 20049 setting does not persist when the NFS service is restarted. You must set it each time the NFS service starts.
On the NFS client, load the xprtrdma kernel module and start the NFS service.
# modprobe xprtrdma
# service nfs start
# mount -o proto=rdma,port=20049 host:/export /mnt
where host is the host name or IP address of the IPoIB server, and export is the name of the NFS share.
To check that the mount over RDMA has succeeded, check the proto field for the mount point.
# nfsstat -m
/mnt from 10.196.0.102:/export
 Flags: rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,proto=rdma,port=20049, ...
Alternatively:
# cat /proc/mounts
Known Issues
Unmount any mounted file systems on the NFS client before you shut down the NFS server. Otherwise, the NFS server hangs during shutdown.
1.7 Compatibility
Oracle Linux maintains user-space compatibility with Red Hat Enterprise Linux, which is independent of the kernel version running underneath the operating system. Existing applications in user space will continue to run unmodified on the Unbreakable Enterprise Kernel Release 3 and no re-certifications are needed for RHEL certified applications.
To minimize impact on interoperability during releases, the Oracle Linux team works closely with third-party vendors whose hardware and software have dependencies on kernel modules. The kernel ABI for UEK R3 will remain unchanged in all subsequent updates to the initial release. In this release, there are changes to the kernel ABI relative to UEK R2 that require recompilation of third-party kernel modules on the system. Before installing UEK R3, verify its support status with your application vendor.
1.8 Header Packages for Development
The kernel-headers packages provide the C header files that specify the interface between user-space binaries or libraries and UEK or RHCK. These header files define the structures and constants that you need to build most standard programs or to rebuild the glibc package.
The kernel-devel and kernel-uek-devel packages provide the kernel headers and makefiles that you need to build modules against UEK and RHCK.
To install the packages required to build modules against UEK and the C header files for both UEK and RHCK:
# yum install kernel-uek-devel-`uname -r` kernel-headers
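With those packages installed, an out-of-tree module can be built against the headers of the running kernel using a minimal kbuild makefile along these lines. This is a sketch only; hello.c is a hypothetical module source file:

```makefile
# Build hello.ko against the build tree of the currently running kernel.
obj-m := hello.o

KDIR := /lib/modules/$(shell uname -r)/build

default:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean
```

Running make in the directory containing this makefile and hello.c produces hello.ko linked against the installed UEK headers.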