Device Driver Tutorial

Kernel Overview

The kernel manages system resources, including file systems, processes, and physical devices. The kernel provides applications with system services such as I/O management, virtual memory, and scheduling. The kernel coordinates the interactions of all user processes and system resources: it assigns priorities, services resource requests, and handles hardware interrupts and exceptions. The kernel also schedules and switches threads, pages memory, and swaps processes.

Differences Between Kernel Modules and User Programs

This section discusses several important differences between kernel modules and user programs.

Execution Differences Between Kernel Modules and User Programs

The following characteristics of kernel modules highlight important differences between the execution of kernel modules and the execution of user programs:

Structural Differences Between Kernel Modules and User Programs

The following characteristics of kernel modules highlight important differences between the structure of kernel modules and the structure of user programs:

Data Transfer Differences Between Kernel Modules and User Programs

Data transfer between a device and the system is typically slower than data transfer within the CPU. Therefore, a driver typically suspends execution of the calling thread until the data transfer is complete. While the thread that called the driver is suspended, the CPU is free to execute other threads. When the data transfer is complete, the device sends an interrupt. The driver handles the interrupt and then tells the CPU to resume execution of the calling thread. See Chapter 8, Interrupt Handlers, in Writing Device Drivers.
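
The following minimal sketch illustrates this suspend-and-resume pattern with the DDI synchronization routines mutex_enter(9F), cv_wait(9F), and cv_signal(9F). The xx_ names, the done flag, and the device programming step are assumptions made for illustration; a production driver also registers the handler (for example, with ddi_add_intr(9F)) and deals with timeouts and errors.

#include <sys/types.h>
#include <sys/ksynch.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical per-device state; a real driver allocates this in attach(9E). */
typedef struct xx_state {
    kmutex_t   xx_mutex;   /* protects xx_done */
    kcondvar_t xx_cv;      /* signaled by the interrupt handler */
    boolean_t  xx_done;    /* B_TRUE when the transfer has completed */
} xx_state_t;

/*
 * Runs in the context of the calling thread. Starts a transfer, then
 * suspends the thread until the device interrupts.
 */
static void
xx_wait_for_transfer(xx_state_t *sp)
{
    mutex_enter(&sp->xx_mutex);
    sp->xx_done = B_FALSE;
    /* ... program the device to start the transfer ... */
    while (!sp->xx_done)
        cv_wait(&sp->xx_cv, &sp->xx_mutex); /* releases the mutex while asleep */
    mutex_exit(&sp->xx_mutex);
}

/*
 * Interrupt handler registered with ddi_add_intr(9F). Marks the transfer
 * complete and wakes the waiting thread.
 */
static uint_t
xx_intr(caddr_t arg)
{
    xx_state_t *sp = (xx_state_t *)arg;

    mutex_enter(&sp->xx_mutex);
    sp->xx_done = B_TRUE;
    cv_signal(&sp->xx_cv);
    mutex_exit(&sp->xx_mutex);
    return (DDI_INTR_CLAIMED);
}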

Drivers must work with user process (virtual) addresses, system (kernel) addresses, and I/O bus addresses. Drivers sometimes copy data from one address space to another and sometimes just manipulate address-mapping tables. See Bus Architectures in Writing Device Drivers.

User and Kernel Address Spaces on x86 and SPARC Machines

On SPARC machines, the system panics when a kernel module attempts to directly access user address space. You must make sure your driver does not attempt to directly access user address space on a SPARC machine.

On x86 machines, the system does not enter an error state when a kernel module attempts to directly access user address space. You should still make sure that your driver does not directly access user address space on an x86 machine: drivers should be written to be as portable as possible, and any driver that directly accesses user address space is a poorly written driver.


Caution –

A driver that works on an x86 machine might not work on a SPARC machine because the driver might access an invalid address.


Do not access user data directly. A driver that directly accesses user address space reflects poor programming practice; such a driver is neither portable nor supportable. Use the ddi_copyin(9F) and ddi_copyout(9F) routines to transfer data to and from user address space. These two routines are the only supported interfaces for accessing user memory. Modifying Data Stored in Kernel Memory shows an example driver that uses ddi_copyin(9F) and ddi_copyout(9F).
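
The following sketch shows the typical use of these routines in an ioctl(9E) entry point. The xx_data structure and the XX_GET_VALUE and XX_SET_VALUE commands are hypothetical; the point is that data always moves between user space and a kernel buffer through ddi_copyin(9F) and ddi_copyout(9F).

#include <sys/types.h>
#include <sys/cred.h>
#include <sys/errno.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

#define XX_IOC          ('x' << 8)      /* hypothetical ioctl commands */
#define XX_GET_VALUE    (XX_IOC | 1)
#define XX_SET_VALUE    (XX_IOC | 2)

/* Hypothetical structure exchanged with the user application. */
struct xx_data {
    int xx_value;
};

static int
xx_ioctl(dev_t dev, int cmd, intptr_t arg, int mode, cred_t *credp, int *rvalp)
{
    struct xx_data data;

    switch (cmd) {
    case XX_GET_VALUE:
        data.xx_value = 42;     /* would come from device or driver state */
        /* Copy the result out to the user buffer that arg points to. */
        if (ddi_copyout(&data, (void *)arg, sizeof (data), mode) != 0)
            return (EFAULT);
        return (0);
    case XX_SET_VALUE:
        /* Copy the user buffer that arg points to into kernel memory first. */
        if (ddi_copyin((void *)arg, &data, sizeof (data), mode) != 0)
            return (EFAULT);
        /* ... use data.xx_value ... */
        return (0);
    default:
        return (ENOTTY);
    }
}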

The mmap(2) system call maps pages of memory between a process's address space and a file or shared memory object. In response to an mmap(2) system call, the system calls the devmap(9E) entry point to map device memory into user space. This information is then available for direct access by user applications.
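
As a rough sketch, a devmap(9E) entry point for device memory usually validates the requested range and then calls devmap_devmem_setup(9F) to map a register set into the user's address space. The register number, access attributes, and xx_ names below are assumptions made for illustration.

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/errno.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>
#include <sys/ddidevmap.h>

extern dev_info_t *xx_dip;      /* hypothetical: saved in attach(9E) */
extern size_t xx_reg_size;      /* hypothetical: size of register set 1 */

static int
xx_devmap(dev_t dev, devmap_cookie_t dhp, offset_t off, size_t len,
    size_t *maplen, uint_t model)
{
    ddi_device_acc_attr_t attr;
    size_t rlen = ptob(btopr(len));     /* round the length up to a page boundary */
    int error;

    /* Reject mappings that extend past the device memory. */
    if (off + rlen > xx_reg_size)
        return (EINVAL);

    attr.devacc_attr_version = DDI_DEVICE_ATTR_V0;
    attr.devacc_attr_endian_flags = DDI_NEVERSWAP_ACC;
    attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC;

    /* Map register set 1 of this device into the requesting process. */
    error = devmap_devmem_setup(dhp, xx_dip, NULL, 1, off, rlen,
        PROT_ALL, DEVMAP_DEFAULTS, &attr);

    *maplen = rlen;
    return (error);
}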

Device Drivers

A device driver is a loadable kernel module that manages data transfers between a device and the OS. Loadable modules are loaded at boot time or by request and are unloaded by request. A device driver is a collection of C routines and data structures that can be accessed by other kernel modules. These routines must use standard interfaces called entry points. Through the use of entry points, the calling modules are shielded from the internal details of the driver. See Device Driver Entry Points in Writing Device Drivers for more information on entry points.

A device driver declares its general entry points in its dev_ops(9S) structure. A driver declares entry points for routines that are related to character or block data in its cb_ops(9S) structure. Some entry points and structures that are common to most drivers are shown in the following diagram.

Figure 1–1 Typical Device Driver Entry Points

Diagram shows entry points that are common to most drivers
and how the entry points are used.
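
As a sketch of how these structures are typically filled in, a minimal character driver might declare its cb_ops(9S) and dev_ops(9S) structures as follows. The xx_ routines are hypothetical entry points implemented elsewhere in the driver, and unused entry points are set to nodev(9F) or nulldev(9F).

#include <sys/types.h>
#include <sys/uio.h>
#include <sys/cred.h>
#include <sys/conf.h>
#include <sys/devops.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical entry point routines implemented elsewhere in the driver. */
static int xx_open(dev_t *devp, int flag, int otyp, cred_t *credp);
static int xx_close(dev_t dev, int flag, int otyp, cred_t *credp);
static int xx_read(dev_t dev, struct uio *uiop, cred_t *credp);
static int xx_write(dev_t dev, struct uio *uiop, cred_t *credp);
static int xx_getinfo(dev_info_t *dip, ddi_info_cmd_t cmd, void *arg, void **resultp);
static int xx_attach(dev_info_t *dip, ddi_attach_cmd_t cmd);
static int xx_detach(dev_info_t *dip, ddi_detach_cmd_t cmd);

/* Entry points for character and block data (cb_ops(9S)). */
static struct cb_ops xx_cb_ops = {
    xx_open,            /* open(9E) */
    xx_close,           /* close(9E) */
    nodev,              /* strategy(9E): block devices only */
    nodev,              /* print(9E) */
    nodev,              /* dump(9E) */
    xx_read,            /* read(9E) */
    xx_write,           /* write(9E) */
    nodev,              /* ioctl(9E) */
    nodev,              /* devmap(9E) */
    nodev,              /* mmap(9E) */
    nodev,              /* segmap(9E) */
    nochpoll,           /* chpoll(9E) */
    ddi_prop_op,        /* prop_op(9E) */
    NULL,               /* streamtab: not a STREAMS driver */
    D_NEW | D_MP,       /* cb_flag */
    CB_REV,             /* cb_rev */
    nodev,              /* aread(9E) */
    nodev               /* awrite(9E) */
};

/* General driver entry points (dev_ops(9S)). */
static struct dev_ops xx_dev_ops = {
    DEVO_REV,           /* devo_rev */
    0,                  /* devo_refcnt */
    xx_getinfo,         /* getinfo(9E) */
    nulldev,            /* identify: obsolete */
    nulldev,            /* probe(9E) */
    xx_attach,          /* attach(9E) */
    xx_detach,          /* detach(9E) */
    nodev,              /* reset: obsolete */
    &xx_cb_ops,         /* character and block entry points */
    NULL,               /* bus operations: nexus drivers only */
    NULL,               /* power(9E) */
    ddi_quiesce_not_needed  /* quiesce(9E); omit on releases without this field */
};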

The Oracle Solaris OS provides many driver entry points. Different types of devices require different entry points in the driver. The following diagram shows some of the available entry points, grouped by driver type. No single device driver would use all the entry points shown in the diagram.

Figure 1–2 Entry Points for Different Types of Drivers

Diagram shows subsets of entry points that are used by
various types of device drivers.

In the Oracle Solaris OS, drivers can manage physical devices, such as disk drives, or software (pseudo) devices, such as bus nexus devices or ramdisk devices. In the case of hardware devices, the device driver communicates with the hardware controller that manages the device. The device driver shields the user application layer from the details of a specific device so that application level or system calls can be generic or device independent.

Drivers are accessed in the following situations:

The following diagram illustrates how a device driver interacts with the rest of the system.

Figure 1–3 Typical Device Driver Interactions

Diagram shows typical interactions between a device driver
and other elements in the operating system.

Driver Directory Organization

Device drivers and other kernel modules are organized into the following directories in the Oracle Solaris OS. See the kernel(1M) and system(4) man pages for more information about kernel organization and how to add directories to your kernel module search path.

/kernel

These modules are common across most platforms. Modules that are required for booting or for system initialization belong in this directory.

/platform/`uname -i`/kernel

These modules are specific to the platform identified by the command uname -i.

/platform/`uname -m`/kernel

These modules are specific to the platform identified by the command uname -m. These modules are specific to a hardware class but more generic than modules in the uname -i kernel directory.

/usr/kernel

These are user modules. Modules that are not essential to booting belong in this directory. This tutorial instructs you to put all your drivers in the /usr/kernel directory.
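
For example, installing a hypothetical 64-bit x86 driver named dummy into /usr/kernel might look like the following; see the add_drv(1M) man page for details.

# cp dummy /usr/kernel/drv/amd64
# cp dummy.conf /usr/kernel/drv
# add_drv dummy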

One benefit of organizing drivers into different directories is that you can selectively load different groups of drivers at startup by booting interactively at the boot prompt, as shown in the following example. See the boot(1M) man page for more information.


Type    b [file-name] [boot-flags] <ENTER>      to boot with options
or      i <ENTER>                               to enter boot interpreter
or      <ENTER>                                 to boot with defaults

                  <<< timeout in 5 seconds >>>

Select (b)oot or (i)nterpreter: b -a
bootpath: /pci@0,0/pci8086,2545@3/pci8086,
Enter default directory for modules [/platform/i86pc/kernel /kernel 
/usr/kernel]: /platform/i86pc/kernel /kernel

In this example, the /usr/kernel location is omitted from the list of directories to search for modules to load. You might want to do this if you have a driver in /usr/kernel that causes the kernel to panic during startup or on attach. Instead of omitting all /usr/kernel modules, a better method for testing drivers is to put them in their own directory. Use the moddir kernel variable to add this test directory to your kernel module search path. The moddir kernel variable is described in kernel(1M) and system(4). Another method for working with drivers that might have startup problems is described in Device Driver Testing Tips.
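
As a sketch of the moddir approach (the test directory name here is hypothetical), you could add a line such as the following to the /etc/system file and then reboot:

* Search a test directory for modules before the default directories.
moddir: /export/testmods /platform/i86pc/kernel /kernel /usr/kernel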