Managing Kernels and System Boot on Oracle Linux

Discover the different kernels that are distributed with Oracle Linux, how to manage them, and how to control boot processes.

About System Boot

Understanding the Oracle Linux boot process can help you troubleshoot problems when booting a system.

The boot process involves several files, and errors in these files are the usual cause of boot problems. Boot processes and configuration differ depending on whether the hardware uses UEFI firmware or legacy BIOS to handle system boot.

An installation of Oracle Linux includes the GRUB 2 boot loader, which is installed into a location on the hard disk that's accessible to the BIOS or UEFI firmware. The GRUB 2 boot loader is used to load a kernel and the initramfs into memory. After the kernel is fully initialized, it starts the systemd process that manages the rest of the operating system.

About UEFI-Based Booting

On a UEFI-based system running the Oracle Linux release, the system boot process uses the following sequence:

  1. When the system is powered on, the system performs a power-on self-test (POST) to detect and check the system's core hardware components such as CPU and memory. The UEFI firmware is then initialized.

  2. The UEFI firmware detects any other hardware, such as peripheral components including network devices and storage. The UEFI firmware contains its own boot manager, which can directly interact with boot loaders on various storage devices. The boot manager stores a set of variables including the priority of different boot devices and any detected boot loaders.

    UEFI searches for a FAT32 formatted GPT partition with a specific globally unique identifier (GUID) that identifies it as the EFI System Partition (ESP). This partition contains EFI applications such as boot loaders and other configuration files.

    When more than one boot device is present, the UEFI boot manager uses the appropriate ESP based on the order that's defined in the boot manager. With the efibootmgr tool, you can define a different order if you don't want to use the default definition.

  3. The UEFI boot manager loads the default boot loader. Oracle Linux uses a two-stage boot process to handle the Secure Boot validation process. The first-stage boot loader is the shim boot loader on the ESP, and the second-stage boot loader is GRUB 2. If Secure Boot is disabled, the shim boot loader directly loads the GRUB 2 boot loader on the ESP to continue the boot process. Boot loader files are named according to the system architecture, for example, the shim boot loader is named shimx64.efi on x86_64 systems and shimaa64.efi on aarch64 systems.

    Otherwise, if Secure Boot is enabled, the shim boot loader is validated against keys stored in the UEFI Secure Boot key database, and in turn, verifies the GRUB 2 boot loader signature against certificates stored in the UEFI Secure Boot key database or the Machine Owner Key (MOK) database. If the GRUB 2 signature is valid, the GRUB 2 boot loader runs and, in turn, validates the kernel that it's configured to load.

    See Oracle Linux: Working With UEFI Secure Boot for more information on Secure Boot.

  4. The boot loader loads the vmlinuz kernel image file and the initramfs image file into memory. The kernel extracts the contents of the initramfs image into a temporary, memory-based file system (tmpfs). The initramfs contains essential drivers and utilities needed for booting.

  5. The boot loader passes control to the kernel and provides pointers to the initramfs and any other boot parameters. The kernel continues system initialization, detecting hardware, loading necessary drivers, and mounting the root file system.

  6. The kernel searches for the init process within initramfs and starts the defined process with a process ID of 1 (PID 1). On Oracle Linux, the default init process is configured as systemd. See Managing the System With systemd for more information.

  7. systemd runs any other processes defined for it.

    Note

    Specify any other actions to be processed during the boot process by defining systemd units. This method is preferred to using the /etc/rc.local file.
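Step 2 of the sequence above mentions reordering boot entries with efibootmgr. The following sketch shows a typical inspection; the entry numbers in the comments are examples, and changing the order requires root on a UEFI-booted system:

```shell
# Inspect the UEFI boot entries and boot order; on systems without EFI
# variables (or without the tool installed), print a message instead.
boot_info=$(efibootmgr 2>/dev/null || echo "efibootmgr is unavailable on this system")
echo "$boot_info"                 # BootXXXX entries and the BootOrder line
# To try entry Boot0002 first and fall back to Boot0000 (example numbers):
#   sudo efibootmgr -o 0002,0000
```

The new order is stored in UEFI NVRAM and takes effect at the next boot.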

About BIOS-Based Booting

On a BIOS-based system running the Oracle Linux release, the boot process is as follows:

  1. The system's BIOS performs a power-on self-test (POST), and then detects and initializes any peripheral devices and the hard disk.

  2. The BIOS reads the Master Boot Record (MBR) into memory from the boot device. The MBR stores the partition table, which describes the organization of partitions on that device, and a boot signature that's used for error detection. The MBR also includes a pointer to the boot loader program (GRUB 2), usually on a dedicated /boot partition on the same disk device.

  3. The boot loader loads the vmlinuz kernel image file and the initramfs image file into memory. The kernel then extracts the contents of initramfs into a temporary, memory-based file system (tmpfs).

  4. The kernel loads the driver modules from the initramfs file system that are needed to access the root file system.

  5. The kernel searches for the init process within initramfs and starts the defined process with a process ID of 1 (PID 1). On Oracle Linux, the default init process is configured as systemd. See Managing the System With systemd for more information.

  6. systemd runs any other processes defined for it.
    Note

    Specify any other actions to be processed during the boot process by defining systemd units. This method is preferred to using the /etc/rc.local file.
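The note above recommends systemd units over /etc/rc.local. A minimal sketch of such a one-shot unit follows; the unit name, description, and script path are hypothetical:

```ini
[Unit]
Description=Run local setup commands at boot (example)
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/local-setup.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

After saving the file as /etc/systemd/system/local-setup.service, enable it with sudo systemctl enable local-setup.service so that it runs at every boot.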

About the GRUB 2 Bootloader

Oracle Linux includes version 2 of the GRand Unified Bootloader (GRUB 2), which loads the OS onto a system at boot time.

In addition to Oracle Linux, GRUB 2 can directly load many other operating systems and can chain-load proprietary operating systems. GRUB 2 understands the formats of many different file systems and kernel executable files. GRUB 2 requires the full path to the kernel and initramfs relative to the boot or root device. You can configure this information by using the GRUB 2 menu or by entering it on the GRUB 2 command line.

The grub2-mkconfig command generates the GRUB 2 configuration file using the template scripts in /etc/grub.d and menu-configuration settings taken from the configuration file, /etc/default/grub.

The generated GRUB 2 files are read during system boot from /boot. The main GRUB 2 configuration file is available at /boot/grub2/grub.cfg. On UEFI-based systems, an initial configuration file at /boot/efi/EFI/redhat/grub.cfg is used to help direct GRUB 2 to the correct device and location of the main GRUB 2 configuration file. Each kernel version's boot parameters are stored in independent configuration files in /boot/loader/entries. Depending on the version of Oracle Linux, each kernel configuration is stored with the file name:
  • machine_id-kernel_version.el10.arch.conf
  • machine_id-kernel_version.el9.arch.conf
  • machine_id-kernel_version.el8.arch.conf
Note

Don't edit the GRUB 2 configuration file in /boot directly.

The default menu entry is set by the value of the GRUB_DEFAULT parameter in /etc/default/grub. If GRUB_DEFAULT is set to saved, you can use the grub2-set-default and grub2-reboot commands to specify the default entry. The command grub2-set-default sets the default entry for all reboots, while grub2-reboot sets the default entry for the next reboot only.

If you specify a numeric value as the value of GRUB_DEFAULT or as an argument to either grub2-reboot or grub2-set-default, GRUB 2 counts the menu entries in the configuration file starting at 0 for the first entry.
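For example, with GRUB_DEFAULT=saved in /etc/default/grub, the default entry can be managed as sketched below; the entry indexes are examples, and the commands that change the configuration require root:

```shell
# Make menu entry 1 the default for all boots, and entry 0 for the next
# boot only (shown as comments because they modify the boot configuration):
#   sudo grub2-set-default 1
#   sudo grub2-reboot 0
# grub2-editenv shows the resulting saved_entry and next_entry values:
grub_env=$(grub2-editenv list 2>/dev/null || echo "grub2-editenv is unavailable on this system")
echo "$grub_env"
```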

To update the GRUB 2 boot loader configuration on Oracle Linux, use the grubby command to control and manage all boot requirements.

The grubby command-line tool helps you manage GRUB 2 configuration and kernel boot parameters. It's fully scriptable and abstracts low-level boot loader details, so you don't need to edit GRUB files by hand. See Using grubby to Manage Kernels for more information.
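A few common grubby operations, sketched below, cover most kernel-parameter management tasks; the quiet parameter is only an example, and the commands that change entries require root:

```shell
# Query the default kernel; fall back to a message where grubby is absent.
default_kernel=$(grubby --default-kernel 2>/dev/null || echo "grubby is unavailable on this system")
echo "$default_kernel"
# Persistently add a kernel parameter to every boot entry, then remove it:
#   sudo grubby --update-kernel=ALL --args="quiet"
#   sudo grubby --update-kernel=ALL --remove-args="quiet"
#   sudo grubby --info=ALL        # review all entries and their arguments
```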

Important

Persistent kernel command-line changes must be made with grubby. Regenerating /boot/grub2/grub.cfg from /etc/default/grub won't apply those changes.

If you need to change some parameters in the configuration at boot time, you can temporarily change kernel boot parameters in the GRUB 2 boot menu. See Changing Kernel Boot Parameters Before Booting.

For more information about using, configuring, and customizing GRUB 2, see the GNU GRUB Manual, which is also installed as /usr/share/doc/grub2-tools-2.00/grub.html.

About Linux Kernels

Oracle Linux can be booted with different customized kernels for system interoperability or performance.

The Linux kernel is the core of the OS and provides the interface between system hardware and any applications that run on the system. The kernel manages system resources, handles security, and enables software to interact with hardware without needing direct access. The Linux kernel is an open source project that's made available by the Linux Foundation.

The Linux Foundation provides a hub for open source developers to code, manage, and scale different open technology projects. It also manages the Linux Kernel Organization, which exists to distribute various versions of the Linux kernel, which is at the core of all Linux distributions, including those used by Oracle Linux.

You must install and run one of these Linux kernels with Oracle Linux:

  • Unbreakable Enterprise Kernel (UEK): UEK is based on a stable kernel branch from the Linux Foundation, with customer-driven additions, and several UEKs can exist for a specific Oracle Linux release. Its focus is performance, stability, and minimal backports by tracking the mainline source code provided by the Linux Kernel Organization, as closely as is practical. UEK is tested and used to run Oracle Engineered Systems, Oracle Cloud Infrastructure (OCI), and large enterprise deployments for Oracle customers.

    UEK includes some packages or package versions that aren't available in RHCK. Some examples are btrfs-tools, rds, and rdma related packages, and some kernel tuning tools.

  • Red Hat Compatible Kernel (RHCK): RHCK is fully compatible with the Linux kernel that's distributed in a corresponding Red Hat Enterprise Linux (RHEL) release. You can use RHCK to ensure full compatibility with applications that run on Red Hat Enterprise Linux.

Kernel packages are purposely built to avoid dependencies on a particular kernel type. Any kernel that isn't in use can be removed from the system without impact.

For example, to remove RHCK from a system that's running UEK, you can run:

sudo dnf remove kernel-core

If a system is using RHCK, you can remove UEK by running:

sudo dnf remove kernel-uek-core

See Checking Available Kernels on the System to see what kernels are installed on the system.

See Changing the Default Kernel to learn how to change the default kernel, for example from RHCK to UEK, or from UEK to RHCK.
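As a quick check, the commands below show which kernel is running and which kernel packages are installed; this is a sketch, and the package name pattern assumes the UEK and RHCK package naming described above:

```shell
# The running kernel release; a UEK release string contains "uek".
running=$(uname -r)
echo "Running kernel: $running"
# List installed kernel core packages on an RPM-based system:
installed=$(rpm -qa 2>/dev/null | grep -E '^kernel(-uek)?-core' || echo "no matching kernel packages found (or rpm is unavailable)")
echo "$installed"
```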

Important

Linux kernels are critical for running applications in the Oracle Linux user space. Therefore, you must keep the kernel current with the latest bug fixes, enhancements, and security updates provided by Oracle. To do so, implement a continuous update and upgrade strategy. See Oracle Linux: Ksplice User's Guide for information on how to keep the kernel updated without any requirement to reboot the system. See Oracle Linux: Managing Software on Oracle Linux for general information about keeping software on the system up-to-date.

For more information about available kernels, see:

About Kernel Modules

The boot loader loads the kernel into memory. You can add new code to the kernel by including the source files in the kernel source tree and recompiling the kernel. Kernel modules provide device drivers that enable the kernel to access new hardware, support different file system types, and extend its functionality in other ways. The modules can be dynamically loaded and unloaded on demand. To avoid wasting memory on unused device drivers, Oracle Linux supports loadable kernel modules (LKMs), which enable a system to run with only the device drivers and kernel code that are required to be loaded into memory. See Managing Kernel Modules for more information on how to manage kernel modules on Oracle Linux.

Note

From UEK R7 onward, kernel packaging changes are applied to provide a more streamlined kernel. Kernel modules that are required for most server configurations are provided in the kernel-uek-modules package, while optional kernel modules for hardware less often found in server configurations, such as Bluetooth, Wi-Fi, and video capture cards, can be found in the kernel-uek-modules-extra package. Note that both of these packages require the linux-firmware package to be installed.

You can view the contents of these packages by running:

dnf repoquery -l kernel-uek-modules
dnf repoquery -l kernel-uek-modules-extra

To install all available kernel modules, run:

sudo dnf install -y kernel-uek-modules kernel-uek-modules-extra linux-firmware

See UEK R7 (5.15.0).

Note

From UEK 8 onward, kernel packaging changes are applied to provide a more streamlined kernel. The minimal number of core kernel modules and supporting files, such as the files generated by depmod, are provided in the kernel-uek-modules-core package. Kernel modules that are required for most server configurations are provided in the kernel-uek-modules package, while optional kernel modules for hardware less often found in server configurations, such as Bluetooth, Wi-Fi, and video capture cards, can be found in the kernel-uek-modules-extra package. Note that both of these packages require the linux-firmware package to be installed.

You can view the contents of these packages by running:

dnf repoquery -l kernel-uek-modules-core
dnf repoquery -l kernel-uek-modules
dnf repoquery -l kernel-uek-modules-extra

To install all available kernel modules, run:

sudo dnf install -y kernel-uek-modules-core kernel-uek-modules kernel-uek-modules-extra linux-firmware

See UEK 8 (6.12.0).

Kernel modules can be signed to protect the system from running malicious code at boot time. When UEFI Secure Boot is enabled, only kernel modules that contain the correct signature information can be loaded. See Oracle Linux: Working With UEFI Secure Boot for more information.

About Weak Update Modules

External modules, such as drivers that are installed by using a driver update disk or from an independent package, are typically installed in the /lib/modules/kernel-version/extra directory. When modules are loaded, modules stored in this directory are preferred over any matching modules that are included with the kernel, so installed external drivers can override existing kernel modules to resolve hardware issues. For each kernel update, these external modules must be made available to each compatible kernel to avoid boot issues that driver incompatibilities with the affected hardware might otherwise cause.

Because the requirement to load the external module with each compatible kernel update is system critical, a mechanism exists for external modules to be loaded as weak update modules for compatible kernels.

You make weak update modules available by creating symbolic links to compatible modules in the /lib/modules/kernel-version/weak-updates directory. The package manager handles this process automatically when it detects driver modules that are installed in the /lib/modules/kernel-version/extra directories for any compatible kernels.

For example, if a newer kernel is compatible with a module that was installed for the previous kernel, an external module (such as kmod-kvdo) is automatically added as a symbolic link in the weak-updates directory as part of the installation process, as shown in the following command output:

ls -l /lib/modules/6.12.0-100.28.2.el10.x86_64/weak-updates/kmod-kvdo/uds
lrwxrwxrwx. 1 root root 68 Jul  8 07:57 uds.ko -> /lib/modules/6.12.0-100.28.2.el10.x86_64/extra/kmod-kvdo/uds/uds.ko

The symbolic link enables the external module to load for kernel updates.
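The mechanism can be illustrated with a small simulation in a temporary directory; all paths and module names here are made up, and on a real system the package manager creates the link automatically:

```shell
# Simulate an external module installed under one kernel's extra directory
# being made available to a newer compatible kernel through weak-updates.
modroot=$(mktemp -d)
old="$modroot/5.15.0-100/extra/kmod-example"
new="$modroot/5.15.0-200/weak-updates/kmod-example"
mkdir -p "$old" "$new"
echo "fake module payload" > "$old/example.ko"
ln -s "$old/example.ko" "$new/example.ko"   # what the package manager would do
linked=$(readlink "$new/example.ko")
echo "example.ko -> $linked"
rm -rf "$modroot"
```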

Weak updates ensure that no extra work is required to carry an external module through kernel updates. Because potential driver-related boot issues after kernel upgrades are prevented, this approach makes the behavior of a system and its hardware more predictable.

You can remove weak update modules if a kernel version provides a superior or preferred driver or module version. See Removing Weak Update Modules for more information.

For more information about external driver modules and driver update disks, see the following documents:

About Virtual File Systems and System Configuration

After the system completes the boot process, virtual file systems provide an interface to the running kernel and to processes and hardware that are available on the system. Two virtual file systems are available:
  • procfs: is mounted at /proc and provides an interface to kernel data structures, mostly related to processes and hardware.
  • sysfs: is mounted at /sys and provides information about devices, kernel modules, file systems, and other kernel components.

These virtual file systems are used to control and report on the running kernel, so that the system configuration can be monitored and adjusted while the OS is live.

Although not part of the kernel virtual file system collection, the /etc/sysconfig system configuration file path is also important because it provides an interface to many core system configuration variables that are read when the system boots.

Also see Explore System Configuration Files and Kernel Tunables on Oracle Linux for a hands-on tutorial on how to configure system settings.

About the /etc/sysconfig Files

The /etc/sysconfig directory contains some files that control the system's configuration after boot. The contents of this directory depend on the packages that you have installed on the system. The /etc/sysconfig directory largely provides a single view of many configuration files that are used by systemd and related components that control system configuration, such as Network Manager.

In newer releases of Oracle Linux, the number of configuration files in this directory is diminishing because configuration is better handled by systemd and other configuration units. For more information about systemd, see Managing the System With systemd.

Certain files that you might find in the /etc/sysconfig directory include the following:

atd

Specifies command line arguments for the atd daemon.

autofs

Defines custom options for automatically mounting devices and controlling the operation of the automounter. Not available in Oracle Linux 9 or later.

crond

Passes arguments to the crond daemon at boot time.

chronyd

Passes arguments to the chronyd daemon used for NTP services at boot time.

firewalld

Passes arguments to the firewall daemon (firewalld) at boot time.

grub

Specifies default settings for the GRUB 2 bootloader. This file is a symbolic link to /etc/default/grub. For more information, see About the GRUB 2 Bootloader. Not available in Oracle Linux 9 or later.

named

Passes arguments to the name service daemon at boot time. The named daemon is a Domain Name System (DNS) server that's part of the Berkeley Internet Name Domain (BIND) distribution. This server maintains a table that associates host names with IP addresses on the network.

samba

Passes arguments to the smbd, nmbd, and winbindd daemons at boot time to support file-sharing connectivity for Windows clients, NetBIOS-over-IP naming service, and connection management to domain controllers.

selinux

Controls the state of SELinux on the system. This file is a symbolic link to /etc/selinux/config.

For more information, see Administering SELinux in Oracle Linux.

snapper

Defines a list of btrfs file systems and thinly provisioned LVM volumes whose contents can be recorded as snapshots by the snapper utility.

For more information, see the following documents:

sysstat

Configures logging parameters for system activity data collector utilities such as sar.

On Oracle Linux 8, more information is available in /usr/share/doc/initscripts*/sysconfig.txt. This content isn't available in more recent releases of Oracle Linux.

About the /proc Virtual File System

The files in the /proc directory hierarchy contain information about the system hardware and the processes that are running on the system. You can change the configuration of the kernel by writing to certain files that have write permission.

Files that are under the /proc directory are virtual files that the kernel creates on demand to present a view of the underlying data structures and system information. As such, /proc is an example of a virtual file system. Most virtual files are listed as 0 bytes in size, but they contain a large amount of information when viewed.

Virtual files such as /proc/interrupts, /proc/meminfo, /proc/mounts, and /proc/partitions provide a view of the system's hardware. Other files, such as /proc/filesystems and the files under /proc/sys, provide information about the system's configuration and let you change configuration settings as needed.
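For example, reading tunables under /proc/sys is an ordinary file read, and writing to them (as root) changes the running kernel immediately:

```shell
ostype=$(cat /proc/sys/kernel/ostype)      # the kernel type; prints "Linux"
pid_max=$(cat /proc/sys/kernel/pid_max)    # the largest PID the kernel assigns
echo "$ostype, pid_max=$pid_max"
# A root-only write takes effect at once (shown as a comment):
#   echo 4194304 | sudo tee /proc/sys/kernel/pid_max
```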

Files that contain information about related topics are grouped into virtual directories. A separate directory exists in the /proc directory for each process that's running on the system. The directory's name corresponds to the numeric process ID. For example, /proc/1 corresponds to the systemd process that has a PID of 1.

To examine virtual files, you can use commands such as cat, less, and view, as shown in the following example:

cat /proc/cpuinfo
processor         : 0
vendor_id         : GenuineIntel
cpu family        : 6
model             : 42
model name        : Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
stepping          : 7
cpu MHz           : 2393.714
cache size        : 6144 KB
physical id       : 0
siblings          : 2
core id           : 0
cpu cores         : 2
apicid            : 0
initial apicid    : 0
fpu               : yes
fpu_exception     : yes
cpuid level       : 5
wp                : yes
...

For files whose content isn't human readable, you can use utilities such as lspci, free, top, and sysctl to access information. For example, the lspci command lists PCI devices on a system:

sudo lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
00:04.0 System peripheral: InnoTek Systemberatung GmbH VirtualBox Guest Service
00:05.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01)
00:06.0 USB controller: Apple Inc. KeyLargo/Intrepid USB
00:07.0 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:0b.0 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller
00:0d.0 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode]
        (rev 02)
...

See procfs Directory Reference for more information about the different directories available under /proc. See Managing Kernel Parameters at Runtime for information on how you can view and change kernel parameters in /proc/sys to control system runtime behavior.

About the /sys Virtual File System

In addition to the /proc file system, the kernel exports information to the /sys virtual file system (sysfs). Programs such as the dynamic device manager (udev), use /sys to access device and device driver information. See Managing System Devices With the udev Device Manager for more information about device management.

Note

/sys exposes kernel data structures and control points, and the directory hierarchy contains circular references, where a directory links to an ancestor directory. Thus, a find command that follows symbolic links in /sys might never stop.
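A bounded traversal avoids the problem: plain find doesn't follow symbolic links, and when -L is used to follow them, a -maxdepth limit keeps the search finite. A sketch:

```shell
# List top-level device classes without following links:
classes=$(find /sys/class -maxdepth 1 2>/dev/null | head -5)
[ -n "$classes" ] || classes="/sys is unavailable on this system"
echo "$classes"
# When links must be followed, always bound the depth, for example:
#   find -L /sys/class/net -maxdepth 2 2>/dev/null
```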

See sysfs Directory Reference to see more information about the directories that you can find in /sys.