A device driver needs to work transparently as an integral part of the operating environment. Understanding how the kernel works is a prerequisite for learning about device drivers. This chapter provides an overview of the Solaris kernel and device tree. For an overview of how device drivers work, see Chapter 2, Overview of Solaris Device Drivers.
This chapter provides information on the following subjects:
The Solaris kernel is a program that manages system resources. It insulates applications from the system hardware and provides them with essential system services such as input/output (I/O) management, virtual memory, and scheduling. The kernel consists of object modules that are dynamically loaded into memory when needed.
One way to look at the Solaris kernel is to divide it into two parts: the first part, referred to as the kernel, manages file systems, scheduling, and virtual memory. The second part, referred to as the I/O subsystem, manages the physical components.
The kernel provides a set of interfaces for applications to use that are accessible through system calls. System calls are documented in the Solaris 9 Reference Manual Collection (see Intro(2)). Some system calls are used to invoke device drivers to perform I/O. Device drivers are loadable kernel modules that manage data transfers while insulating the rest of the kernel from the device hardware. To be compatible with the operating environment, device drivers need to be able to accommodate such features as multithreading, virtual memory addressing, and both 32-bit and 64-bit operation.
The following figure illustrates the kernel, with the kernel modules handling system calls from application programs and the I/O modules communicating with hardware.
The kernel provides access to device drivers through:
Device-to-driver mapping — The kernel maintains the device tree. Each node in the tree represents a virtual or a physical device. The kernel binds each node to a driver by matching the device node name with the set of drivers installed in the system. The device is made accessible to applications only if there is a driver binding.
DDI/DKI interfaces — DDI/DKI stands for Device Driver Interface/Driver-Kernel Interface. The DDI/DKI interfaces standardize interactions between the driver and the kernel, the device hardware, and the boot/configuration software. Use of these interfaces keeps the driver independent from the kernel and improves its portability across successive releases of the operating environment on a particular machine.
The Solaris kernel is multithreaded. On a multiprocessor machine, multiple kernel threads can be running kernel code, and can do so concurrently. Kernel threads can also be preempted by other kernel threads at any time.
The multithreading of the kernel imposes some additional restrictions on device drivers. A device driver must be coded to run at the request of many different threads, and, for each thread, it must handle contention problems that arise from overlapping I/O requests. For more information on multithreading considerations, see Chapter 3, Multithreading.
A complete overview of the Solaris virtual memory system is beyond the scope of this book, but two virtual memory terms of special importance are used when discussing device drivers: virtual address and address space.
Virtual address – A virtual address is an address that is mapped by the memory management unit (MMU) to a physical hardware address. All addresses directly accessible by the driver are kernel virtual addresses; they refer to the kernel address space.
Address space – An address space is a set of virtual address segments, each of which is a contiguous range of virtual addresses. Each user process has an address space called the user address space. The kernel has its own address space called the kernel address space.
Devices are treated as files. They are represented in the file system by special files. In the Solaris operating environment these files reside in the /devices directory hierarchy.
Special files can be of type block or character. The type indicates which kind of device driver operates the device. Drivers can be implemented to operate on both types. For example, disk drivers export a character interface for use by the fsck(1M) and mkfs(1M) utilities, and a block interface for use by the file system.
Associated with each special file is a device number (dev_t). This consists of a major number and a minor number. The major number identifies the device driver associated with the special file. The minor number is created and used by the device driver to further identify the special file. Usually, the minor number is an encoding that identifies the device instance the driver should access and the type of access to perform. The minor number, for example, could identify a tape device used for backup and also specify whether the tape needs to be rewound when the backup operation is complete.
In System V Release 4 (SVR4), the interface between device drivers and the rest of the UNIX kernel was standardized as the DDI/DKI. The Solaris 9 DDI/DKI is documented in Section 9 of the Solaris 9 Reference Manual Collection. This section documents driver entry points, driver-callable functions, and kernel data structures used by device drivers.
The Solaris 9 DDI/DKI, like its SVR4 counterpart, is intended to standardize and document all interfaces between device drivers and the rest of the kernel. In addition, the Solaris 9 DDI/DKI allows source compatibility for drivers on any machine running the Solaris 9 operating environment, regardless of the processor architecture (such as SPARC or IA). It is also intended to provide binary compatibility for drivers running on any Solaris 9–based processor, regardless of the specific platform architecture. Drivers using only kernel facilities that are part of the Solaris 9 DDI/DKI are known as Solaris 9 DDI/DKI-compliant device drivers.
The Solaris 9 DDI/DKI allows platform-independent device drivers to be written for Solaris 9 based machines. These shrink-wrapped (binary compatible) drivers enable third-party hardware and software to be more easily integrated into Solaris 9-based machines. The Solaris 9 DDI/DKI is designed to be architecture independent and enable the same driver to work across a diverse set of machine architectures.
The design of the DDI portion of the Solaris 9 DDI/DKI accomplishes this platform independence by addressing the following main areas:
Dynamic loading and unloading of modules
Power management
Interrupt handling
Accessing the device space from the kernel or a user process (register mapping and memory mapping)
Accessing kernel or user process space from the device (DMA services)
Managing device properties
Devices in Solaris are represented as a tree of interconnected device information nodes.
The system builds a tree structure that contains information about the devices connected to the machine at boot time. The device tree can also be modified by dynamic reconfiguration operations while the system is in normal operation.
The tree begins at the “root” device node, which represents the platform. Below the root node are “branches” of the device tree, where a branch consists of one or more bus nexus devices and a terminating leaf device.
A bus nexus device provides bus mapping and translation services to devices that are subordinate to it in the device tree. PCI-PCI bridges, PCMCIA adapters, and SCSI HBAs are all examples of nexus devices. The discussion of writing drivers for nexus devices is limited to that of developing SCSI HBA drivers (see Chapter 15, SCSI Host Bus Adapter Drivers).
Leaf devices are typically peripheral devices such as disks, tapes, network adapters, frame buffers, and so forth. Drivers for these devices export the traditional character and block driver interfaces for use by user processes to read and write data to storage or communication devices.
Each driver exports a device operations structure dev_ops(9S) that defines the operations that the device driver can perform. The device operations structure contains function pointers for generic operations such as attach(9E), detach(9E), and getinfo(9E). It also contains a pointer to a set of operations specific to bus nexus drivers and a pointer to a set of operations specific to leaf drivers.
The tree structure creates a parent-child relationship between nodes. This parent-child relationship is the key to architectural independence. When a leaf or bus nexus driver requires a service that is architecturally dependent in nature, it requests its parent to provide the service. This approach enables drivers to function regardless of the architecture of the machine or the processor. A typical device tree is shown in the following figure.
The nexus nodes may have one or more children, and the leaf nodes represent individual devices.
The device tree can be displayed in three ways:
The libdevinfo library provides interfaces to access the contents of the device tree programmatically.
The prtconf(1M) command displays the complete contents of the device tree.
The /devices hierarchy is a representation of the device tree; use ls(1) to view it.
/devices displays only devices that have drivers configured into the system. prtconf(1M) shows all device nodes regardless of whether a driver for the device exists on the system.
The libdevinfo library provides interfaces for accessing all public device configuration data. See the libdevinfo(3DEVINFO) man page for a list of interfaces.
The prtconf(1M) command (excerpted example follows) displays all the devices in the system:
System Configuration: Sun Microsystems sun4u
Memory size: 128 Megabytes
System Peripherals (Software Nodes):

SUNW,Ultra-5_10
    packages (driver not attached)
        terminal-emulator (driver not attached)
        deblocker (driver not attached)
        obp-tftp (driver not attached)
        disk-label (driver not attached)
        SUNW,builtin-drivers (driver not attached)
        sun-keyboard (driver not attached)
        ufs-file-system (driver not attached)
    chosen (driver not attached)
    openprom (driver not attached)
        client-services (driver not attached)
    options, instance #0
    aliases (driver not attached)
    memory (driver not attached)
    virtual-memory (driver not attached)
    pci, instance #0
        pci, instance #0
            ebus, instance #0
                auxio (driver not attached)
                power, instance #0
                SUNW,pll (driver not attached)
                se, instance #0
                su, instance #0
                su, instance #1
                ecpp (driver not attached)
                fdthree, instance #0
                eeprom (driver not attached)
                flashprom (driver not attached)
                SUNW,CS4231 (driver not attached)
            network, instance #0
            SUNW,m64B (driver not attached)
            ide, instance #0
                disk (driver not attached)
                cdrom (driver not attached)
                dad, instance #0
                sd, instance #15
    pci, instance #1
        pci, instance #0
            pci108e,1000 (driver not attached)
                SUNW,hme, instance #1
                SUNW,isptwo, instance #0
                    sd (driver not attached)
                    st (driver not attached)
                    sd, instance #0 (driver not attached)
                    sd, instance #1 (driver not attached)
                    sd, instance #2 (driver not attached)
                    ....
    SUNW,UltraSPARC-IIi (driver not attached)
    SUNW,ffb, instance #0
    pseudo, instance #0
The /devices hierarchy provides a name space representing the device tree. Following is an abbreviated listing of the /devices name space. The sample output corresponds to the example device tree and prtconf(1M) output shown previously.
/devices
/devices/pseudo
/devices/pci@1f,0:devctl
/devices/SUNW,ffb@1e,0:ffb0
/devices/pci@1f,0
/devices/pci@1f,0/pci@1,1
/devices/pci@1f,0/pci@1,1/SUNW,m64B@2:m640
/devices/pci@1f,0/pci@1,1/ide@3:devctl
/devices/pci@1f,0/pci@1,1/ide@3:scsi
/devices/pci@1f,0/pci@1,1/ebus@1
/devices/pci@1f,0/pci@1,1/ebus@1/power@14,724000:power_button
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:a
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:b
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:0,hdlc
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:1,hdlc
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:a,cu
/devices/pci@1f,0/pci@1,1/ebus@1/se@14,400000:b,cu
/devices/pci@1f,0/pci@1,1/ebus@1/ecpp@14,3043bc:ecpp0
/devices/pci@1f,0/pci@1,1/ebus@1/fdthree@14,3023f0:a
/devices/pci@1f,0/pci@1,1/ebus@1/fdthree@14,3023f0:a,raw
/devices/pci@1f,0/pci@1,1/ebus@1/SUNW,CS4231@14,200000:sound,audio
/devices/pci@1f,0/pci@1,1/ebus@1/SUNW,CS4231@14,200000:sound,audioctl
/devices/pci@1f,0/pci@1,1/ide@3
/devices/pci@1f,0/pci@1,1/ide@3/sd@2,0:a
/devices/pci@1f,0/pci@1,1/ide@3/sd@2,0:a,raw
/devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:a
/devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:a,raw
/devices/pci@1f,0/pci@1
/devices/pci@1f,0/pci@1/pci@2
/devices/pci@1f,0/pci@1/pci@2/SUNW,isptwo@4:devctl
/devices/pci@1f,0/pci@1/pci@2/SUNW,isptwo@4:scsi
In addition to constructing the device tree, the kernel also determines the drivers that will be used to manage the devices.
Binding a driver to a device refers to the process by which the system selects a driver to manage a particular device. The driver binding name is the name that links a driver to a unique device node in the device information tree. For each device in the device tree, the system attempts to choose a driver from a list of installed drivers.
Each device node has a name property associated with it. This property can be assigned either from an external agent, such as the PROM, during system boot or from a driver.conf configuration file. In either case, the name property represents the node name assigned to a device in the device tree. The node name is the name visible in /devices and listed in the prtconf(1M) output.
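For a device that cannot identify itself to the system, the name property can come from a driver.conf(4) configuration file. The fragment below is a hypothetical example (the node name "examplestat" is invented): it creates a device node under the pseudo nexus, and the value of the name= entry becomes the node's name property.

```
#
# Hypothetical driver.conf fragment: creates one device node named
# "examplestat" under the pseudo nexus driver.
#
name="examplestat" parent="pseudo" instance=0;
```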
A device node can also have a compatible property associated with it. The compatible property (if it exists) contains an ordered list of one or more possible driver names or driver aliases for the device.
The system uses both the compatible and the name properties to select a driver for the device. The system first attempts to match the contents of the compatible property (if the compatible property exists) to a driver on the system. Beginning with the first driver name on the compatible property list, the system attempts to match the driver name to a known driver on the system. It processes each entry on the list until either a match is found or the end of the list is reached.
If the contents of either the name property or the compatible property match a driver on the system, then that driver is bound to the device node. If no match is found, no driver is bound to the device node.
Some devices specify a generic device name as the value for the name property. Generic device names describe the function of a device without actually identifying a specific driver for the device. For example, a SCSI host bus adapter might have a generic device name of scsi. An Ethernet device might have a generic device name of ethernet.
The compatible property allows the system to determine alternate driver names (like glm for scsi HBA device drivers or hme for ethernet device drivers) for devices with a generic device name.
Devices with generic device names are required to supply a compatible property.
For a complete description of generic device names, see the IEEE 1275 Open Firmware Boot Standard.
Figure 1–4 and Figure 1–5 show two device nodes: one node uses a specific device name and the other uses a generic device name. For the device node with a specific device name, the driver binding name SUNW,ffb is the same name as the device node name.
For the device node with the generic device name display, the driver binding name SUNW,ffb is the first name on the compatible property driver list that matches a driver on the system driver list. In this case, display is a generic device name for frame buffers.