Threads are the primary programming interface in multithreaded programming. User-level threads [User-level threads are so named to distinguish them from kernel-level threads, which are the concern of systems programmers only. Because this book is for application programmers, kernel-level threads are not discussed.] are handled in user space and avoid kernel context-switching penalties. An application can have hundreds of threads and still consume few kernel resources; how many kernel resources it does consume is determined largely by the application itself.
Threads are visible only from within the process, where they share all process resources such as the address space and open files. State such as the thread ID, register state (including the program counter and stack pointer), stack, signal mask, and priority is unique to each thread.
Because threads share the process instructions and most of the process data, a change in shared data by one thread can be seen by the other threads in the process. When a thread needs to interact with other threads in the same process, it can do so without involving the operating environment.
By default, threads are very lightweight. But to gain more control over a thread (for instance, more control over its scheduling policy), the application can bind the thread. When an application binds threads to execution resources, the threads become kernel resources (see "System Scope (Bound Threads)" for more information).
To summarize, user-level threads are:
Inexpensive to create because they do not require a new address space. They are bits of virtual memory allocated from your address space at run time.
Fast to synchronize because synchronization is done at the application level, not at the kernel level.
Easily managed by the threads library, either libpthread or libthread.