
Multithreading Concepts

This section introduces basic concepts of multithreading.

Concurrency and Parallelism

In a multithreaded process on a single processor, the processor can switch execution resources between threads, resulting in concurrent execution. Concurrency indicates that more than one thread is making progress, but the threads are not actually running simultaneously. The switching between threads happens quickly enough that the threads might appear to run simultaneously.

In the same multithreaded process in a shared-memory multiprocessor environment, each thread in the process can run concurrently on a separate processor, resulting in parallel execution, which is true simultaneous execution. When the number of threads in a process is less than or equal to the number of processors available, the operating system's thread support system ensures that each thread runs on a different processor. For example, in a matrix multiplication that is programmed with four threads and runs on a system that has two dual-core processors, each software thread can run on its own processor core, and the four threads compute their rows of the result at the same time.
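
The following C program is a minimal sketch of that example, not taken from this guide: it starts one POSIX thread per row of a small result matrix, so on a system with four or more cores the rows can be computed in parallel, while on a single processor the same program still runs correctly, only concurrently. The names and the 4x4 matrix size are illustrative; such programs are typically compiled with cc -mt on Oracle Solaris, or gcc -pthread elsewhere.

/* Sketch: one thread per result row of a 4x4 matrix multiplication.
   Illustrative only; a real program would check return values and
   usually use far fewer threads than rows. */
#include <pthread.h>
#include <stdio.h>

#define N 4

static double a[N][N], b[N][N], c[N][N];

static void *row_worker(void *arg)
{
    int i = (int)(long)arg;            /* row index passed as the argument */

    for (int j = 0; j < N; j++) {
        double sum = 0.0;
        for (int k = 0; k < N; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;                 /* each thread writes only its own row */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N];

    /* Fill a with ones and b with the identity matrix so c should equal a. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = 1.0;
            b[i][j] = (i == j) ? 1.0 : 0.0;
        }

    for (int i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, row_worker, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);    /* wait for all rows to finish */

    printf("c[0][0] = %.1f\n", c[0][0]);
    return 0;
}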

Multithreading Structure

Traditional UNIX already supports the concept of threads. Each process contains a single thread, so programming with multiple processes is programming with multiple threads. But a process is also an address space, and creating a process involves creating a new address space.

Creating a thread is less expensive than creating a new process because the newly created thread uses the current process address space. The time that is required to switch between threads is less than the time required to switch between processes. A switch between threads is faster because no switching between address spaces occurs.

Communication between the threads of one process is simple because the threads share everything, most importantly address space. So, data produced by one thread is immediately available to all the other threads in the process.
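
As a minimal sketch of this point (the variable and function names are illustrative), the following program writes a value in one thread and reads it in another without any copying or message passing, because both threads see the same address space. The pthread_join() call provides the ordering here; concurrent access to shared data needs the synchronization discussed next.

/* Sketch: threads in one process share global data.  The join provides
   the ordering; truly concurrent access would need synchronization. */
#include <pthread.h>
#include <stdio.h>

static int shared_value;               /* visible to every thread in the process */

static void *producer(void *arg)
{
    shared_value = 42;                 /* written in one thread ... */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, producer, NULL);
    pthread_join(t, NULL);             /* wait until the producer is done */

    /* ... and read in another thread of the same process. */
    printf("shared_value = %d\n", shared_value);
    return 0;
}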

However, this sharing of data leads to a different set of challenges for the programmer. Care must be taken to synchronize threads to protect data from being modified by more than one thread at once, or from being read by some threads while being modified by another thread at the same time. See Thread Synchronization for more information.

User-Level Threads

Threads are the primary programming interface in multithreaded programming. Threads are visible only from within the process, where the threads share all process resources like address space, open files, and so on.

User-Level Threads State

The following state is unique to each thread:

Thread ID
Register state (including program counter and stack pointer)
Stack
Signal mask
Priority
Thread-private storage

Threads share the process instructions and most of the process data. For that reason, a change in shared data by one thread can be seen by the other threads in the process. When a thread needs to interact with other threads in the same process, the thread can do so without involving the operating environment.
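
The following illustrative sketch, not part of this guide, shows two kinds of per-thread state: each thread has its own thread ID, returned by pthread_self(), and its own stack, so the automatic variable count is private to each thread even though both threads execute the same function.

/* Sketch: per-thread state.  The automatic variable 'count' lives on
   each thread's own stack, so the two threads never interfere. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int count = 0;                      /* private to this thread's stack */

    for (int i = 0; i < 1000; i++)
        count++;

    printf("thread %lu counted %d\n",
           (unsigned long)pthread_self(), count);   /* always prints 1000 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}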


Note - User-level threads are so named to distinguish them from kernel-level threads, which are the concern of systems programmers only. Because this book is for application programmers, kernel-level threads are not discussed.


Thread Scheduling

The POSIX standard specifies three scheduling policies: first-in-first-out (SCHED_FIFO), round-robin (SCHED_RR), and custom (SCHED_OTHER). SCHED_FIFO is a queue-based scheduler with different queues for each priority level. SCHED_RR is like FIFO except that each thread has an execution time quota.

Both SCHED_FIFO and SCHED_RR are POSIX Realtime extensions. Threads executing with these policies are in the Oracle Solaris Real-Time (RT) scheduling class, normally requiring special privilege. SCHED_OTHER is the default scheduling policy. Threads executing with the SCHED_OTHER policy are in the traditional Oracle Solaris Time-Sharing (TS) scheduling class.
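
The sketch below, with illustrative names, shows how a policy can be requested through a thread attribute object. Because SCHED_FIFO is a Realtime policy, pthread_create() can fail with EPERM when the caller lacks the necessary privilege, which is why the return value is checked.

/* Sketch: requesting a scheduling policy through a thread attribute
   object.  SCHED_FIFO and SCHED_RR normally require privilege, so
   pthread_create() may fail with EPERM for an ordinary user. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg) { return NULL; }

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t tid;
    int err;

    pthread_attr_init(&attr);
    /* Without PTHREAD_EXPLICIT_SCHED the new thread simply inherits
       the creator's policy and the settings below are ignored. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = sched_get_priority_min(SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);

    err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0)
        printf("pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}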

Oracle Solaris provides other scheduling classes, namely the Interactive Time-Sharing (IA) class, the Fair-Share (FSS) class, and the Fixed-Priority (FX) class. Such specialized classes are not discussed here. See the Oracle Solaris priocntl(2) manual page for more information.

See LWPs and Scheduling Classes for information about the SCHED_OTHER policy.

Two scheduling scopes are available: process scope (PTHREAD_SCOPE_PROCESS) and system scope (PTHREAD_SCOPE_SYSTEM). Threads with differing scope states can coexist on the same system and even in the same process. Process scope causes such threads to contend for resources only with other such threads in the same process. System scope causes such threads to contend with all other threads in the system. In practice, beginning with the Solaris 9 release, the system makes no distinction between these two scopes.
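
As a brief illustrative sketch, the contention scope is also set through a thread attribute object. On current Oracle Solaris releases the choice makes no practical difference, but the attribute can still be set for portability to other systems.

/* Sketch: setting and reading the contention scope attribute. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) { return NULL; }

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_attr_getscope(&attr, &scope);
    printf("scope is %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}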

Thread Cancellation

A thread can request the termination of any other thread in the process. The target thread, the one being cancelled, can keep cancellation requests pending as well as perform application-specific cleanup when the thread acts upon the cancellation request.

The pthreads cancellation feature permits either asynchronous or deferred termination of a thread. Asynchronous cancellation can occur at any time. Deferred cancellation can occur only at defined points. Deferred cancellation is the default type.
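
The following sketch, with illustrative names, uses the default deferred type: the worker blocks in sleep(), which is a cancellation point, and a cleanup handler registered with pthread_cleanup_push() performs the application-specific cleanup when the cancellation request is acted upon.

/* Sketch: deferred cancellation with a cleanup handler.  The sleep()
   call is a cancellation point, so the cancellation request is acted
   on there rather than at an arbitrary instruction. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void cleanup(void *arg)
{
    printf("cleanup: releasing %s\n", (char *)arg);
}

static void *worker(void *arg)
{
    pthread_cleanup_push(cleanup, "application resources");
    for (;;)
        sleep(1);                       /* cancellation point */
    pthread_cleanup_pop(0);             /* never reached; pairs the push */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    void *status;

    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);                           /* let the worker start */
    pthread_cancel(tid);                /* request termination */
    pthread_join(tid, &status);
    if (status == PTHREAD_CANCELED)
        printf("worker was cancelled\n");
    return 0;
}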

Thread Synchronization

Synchronization enables you to control program flow and access to shared data for concurrently executing threads.

The four synchronization models are mutex locks, read/write locks, condition variables, and semaphores.
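
As a minimal sketch of the first model (the names are illustrative), the following program uses a mutex lock to protect a counter that two threads increment concurrently; the other models are covered in the chapter on programming with synchronization objects.

/* Sketch: a mutex lock protecting a shared counter.  Without the lock,
   lost updates could leave the final value below 200000. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);    /* leave it */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);  /* always 200000 with the lock */
    return 0;
}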