Multithreaded Programming Guide

Looking at Multithreading Structure

Traditional UNIX already supports the concept of threads—each process contains a single thread, so programming with multiple processes is programming with multiple threads. But a process is also an address space, and creating a process involves creating a new address space.

Creating a thread is less expensive than creating a new process, because the newly created thread uses the current process address space. The time it takes to switch between threads is less than the time it takes to switch between processes, partly because switching between threads does not involve switching between address spaces.

Communicating between the threads of one process is simple because the threads share everything—address space, in particular. So, data produced by one thread is immediately available to all the other threads.
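The following is a minimal sketch using the POSIX interface (the names worker and buffer are illustrative only). One thread writes into an ordinary global buffer, and the creating thread reads it after the join; no copying or interprocess communication is involved.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char buffer[64];                 /* shared by all threads in the process */

static void *worker(void *arg)
{
    /* Runs in the same address space as main(). */
    strcpy(buffer, "filled in by the worker thread");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);            /* wait; the write is now visible here */
    printf("%s\n", buffer);
    return 0;
}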

The interface to multithreading support is through a subroutine library: libpthread for POSIX threads and libthread for Solaris threads. Multithreading provides flexibility by decoupling kernel-level and user-level resources.
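As a sketch of the two interfaces on a Solaris system (the function work() is an assumed placeholder defined elsewhere), the same thread can be started through either library:

#include <pthread.h>    /* POSIX threads: link with -lpthread */
#include <thread.h>     /* Solaris threads: link with -lthread */

extern void *work(void *arg);          /* assumed to be defined elsewhere */

void start_posix_thread(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, work, NULL);
}

void start_solaris_thread(void)
{
    thread_t tid;
    thr_create(NULL, 0, work, NULL, 0, &tid);
}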

User-Level Threads

Threads are the primary programming interface in multithreaded programming. (User-level threads are so named to distinguish them from kernel-level threads, which are the concern of systems programmers only. Because this book is for application programmers, kernel-level threads are not discussed.) Threads are visible only from within the process, where they share all process resources such as the address space, open files, and so on. Each thread, however, has its own state: a thread ID, a register state (including the program counter and stack pointer), a stack, a signal mask, a priority, and thread-private storage.

Because threads share the process instructions and most of the process data, a change in shared data by one thread can be seen by the other threads in the process. When a thread needs to interact with other threads in the same process, it can do so without involving the operating environment.
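For example, two threads can cooperate on a shared counter using only a mutex; while the mutex is uncontended, locking and unlocking typically stay at user level. This is a sketch with illustrative names (counter, bump):

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                               /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    int i;
    for (i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                    /* serialize updates */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);                /* both threads updated the same variable */
    return 0;
}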

By default, threads are lightweight. But, to get more control over a thread (for instance, to control its scheduling policy), the application can bind the thread. When an application binds threads to execution resources, the threads become kernel resources (see System Scope (Bound Threads) for more information).
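With the POSIX interface, binding is requested through a thread attribute. The following sketch (the function work() is an assumed placeholder defined elsewhere) creates a bound, system-scope thread:

#include <pthread.h>

extern void *work(void *arg);       /* assumed to be defined elsewhere */

int create_bound_thread(pthread_t *tid)
{
    pthread_attr_t attr;
    int err;

    pthread_attr_init(&attr);
    /* PTHREAD_SCOPE_SYSTEM requests a bound (system-scope) thread. */
    err = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (err == 0)
        err = pthread_create(tid, &attr, work, NULL);
    pthread_attr_destroy(&attr);
    return err;
}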

To summarize, user-level threads are inexpensive to create, fast to synchronize and switch between, visible only within their own process, and managed entirely by the threads library.

Lightweight Processes

The threads library uses underlying threads of control called lightweight processes (LWPs) that are supported by the kernel. You can think of an LWP as a virtual CPU that executes code or system calls.

You usually do not need to concern yourself with LWPs to program with threads. The information here about LWPs is provided as background, so you can understand the differences in scheduling scope described in Process Scope (Unbound Threads).

Much as the stdio library routines such as fopen() and fread() use the open() and read() functions, the threads interface uses the LWP interface, and for many of the same reasons.

Lightweight processes (LWPs) bridge the user level and the kernel level. Each process contains one or more LWPs, each of which runs one or more user threads. (See Figure 1–1.)

Figure 1–1 User-level Threads and Lightweight Processes

Diagram showing bound and unbound threads connecting to lightweight processes

Each LWP is a kernel resource in a kernel pool, and is allocated to and de-allocated from a thread on a per-thread basis.