This chapter discusses the processes in an Oracle database.
This chapter contains the following sections:
A process is a mechanism in an operating system that can run a series of steps. The process execution architecture depends on the operating system. For example, on Windows an Oracle background process is a thread of execution within a process. On Linux and UNIX, an Oracle process is either an operating system process or a thread within an operating system process.
Processes run code modules. All connected Oracle Database users must run the following modules to access a database instance:
Application or Oracle Database utility
A database user runs a database application, such as a precompiler program or a database tool such as SQL*Plus, that issues SQL statements to a database.
Oracle database code
Each user has Oracle database code executing on his or her behalf that interprets and processes the application's SQL statements.
A process normally runs in its own private memory area. Most processes can periodically write to an associated trace file (see "Trace Files").
A database instance contains or interacts with multiple processes.
Processes are divided into the following types:
A client process runs the application or Oracle tool code.
An Oracle process is a unit of execution that runs the Oracle database code. In the multithreaded architecture, an Oracle process can be an operating system process or a thread within an operating system process. Oracle processes include the following subtypes:
A server process performs work based on a client request.
Server processes, and the process memory allocated in these processes, run in the database instance. The instance continues to function when server processes terminate.
A slave process performs additional tasks for a background or server process.
The process structure varies depending on the operating system and the choice of Oracle Database options. For example, you can configure the code for connected users for dedicated server or shared server connections. In a shared server architecture, each server process that runs database code can serve multiple client processes.
Figure 15-1 shows a system global area (SGA) and background processes using dedicated server connections. For each user connection, a client process runs the application. This client process is different from the dedicated server process that runs the database code. Each client process is associated with its own server process, which has its own program global area (PGA).
Figure 15-1 Oracle Processes and the SGA
Multiprocess Oracle Database (also called multiuser Oracle Database) uses several processes to run different parts of the Oracle Database code and additional Oracle processes for the users—either one process for each connected user or one or more processes shared by multiple users. Most databases are multiuser because a primary advantage of a database is managing data needed by multiple users simultaneously.
Each process in a database instance performs a specific job. By dividing the work of the database and applications into several processes, multiple users and applications can connect to an instance simultaneously while the system maintains good performance.
In releases earlier than Oracle Database 12c, Oracle processes did not run as threads on UNIX and Linux systems. Starting in Oracle Database 12c, the multithreaded Oracle Database model enables Oracle processes to execute as operating system threads in separate address spaces. When Oracle Database 12c is installed, the database runs in process mode. You must set the THREADED_EXECUTION initialization parameter to TRUE to run the database in threaded mode. In threaded mode, some background processes on UNIX and Linux run as processes (with each process containing one thread), whereas the remaining Oracle processes run as threads within processes.
In a database running in threaded mode, PMON and DBW might run as operating system processes, whereas LGWR and CMON might run as threads within a single process. Two foreground processes and a parallel execution (PX) server process might run as threads in a second operating system process. A third operating system process might contain multiple foreground threads. Thus, "Oracle process" does not always mean "operating system process."
When the THREADED_EXECUTION initialization parameter is set to TRUE, operating system authentication is not supported.
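As an illustration, threaded mode could be enabled as follows (a sketch; THREADED_EXECUTION is a static parameter, so the instance must be restarted, and because operating system authentication is not supported in threaded mode, the restart must use password-based authentication):

ALTER SYSTEM SET THREADED_EXECUTION = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP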
Example 15-1 Viewing Oracle Process Metadata
The V$PROCESS view contains one row for each Oracle process connected to a database instance. For example, you can run the following query in SQL*Plus to get the operating system process ID and operating system thread ID for each process:
COL SPID FORMAT a8
COL STID FORMAT a8
SELECT SPID, STID, PROGRAM
  FROM V$PROCESS
 ORDER BY SPID;
The query yields the following partial sample output:
SPID     STID     PROGRAM
-------- -------- ---------------------------
7190     7190     oracle@samplehost (PMON)
7192     7192     oracle@samplehost (PSP0)
7194     7194     oracle@samplehost (VKTM)
7198     7198     oracle@samplehost (SCMN)
7198     7200     oracle@samplehost (GEN0)
7202     7202     oracle@samplehost (SCMN)
7202     7204     oracle@samplehost (DIAG)
7198     7205     oracle@samplehost (DBRM)
7202     7206     oracle@samplehost (DIA0)
. . .
When a user runs an application such as a Pro*C program or SQL*Plus, the operating system creates a client process (sometimes called a user process) to run the user application. The client application has Oracle Database libraries linked into it that provide the APIs required to communicate with the database.
Client processes differ in important ways from the Oracle processes interacting directly with the instance. The Oracle processes servicing the client process can read from and write to the SGA, whereas the client process cannot. A client process can run on a host other than the database host, whereas Oracle processes cannot.
For example, assume that a user on a client host starts SQL*Plus, and then connects over the network to database sample on a different host when the database instance is not started:
SQL> CONNECT SYS@inst1 AS SYSDBA
Enter password: *********
Connected to an idle instance.
On the client host, a search of the processes for either sample or sqlplus shows only the sqlplus client process:
% ps -ef | grep -e sample -e sqlplus | grep -v grep
clientuser 29437 29436  0 15:40 pts/1    00:00:00 sqlplus   as sysdba
On the database host, a search of the processes for either sample or sqlplus shows a server process with a nonlocal connection, but no client process:
% ps -ef | grep -e sample -e sqlplus | grep -v grep
serveruser 29441     1  0 15:40 ?        00:00:00 oraclesample (LOCAL=NO)
See "How an Instance Is Started" to learn how a client can connect to a database when the instance is not started.
A database connection is a physical communication pathway between a client process and a database instance.
During a connection, a communication pathway is established using available interprocess communication mechanisms or network software. Typically, a connection occurs between a client process and a server process or dispatcher, but it can also occur between a client process and Oracle Connection Manager (CMAN).
A database session is a logical entity in the database instance memory that represents the state of a current user login to a database. For example, when a user is authenticated by the database with a password, a session is established for this user. A session lasts from the time the user is authenticated by the database until the time the user disconnects or exits the database application.
A single connection can have 0, 1, or more sessions established on it. The sessions are independent: a commit in one session does not affect transactions in other sessions.
If Oracle Net connection pooling is configured, then it is possible for a connection to drop but leave the sessions intact.
Multiple sessions can exist concurrently for a single database user. As shown in the following figure, user hr can have multiple connections to a database. In dedicated server connections, the database creates a server process on behalf of each connection. Only the client process that causes the dedicated server to be created uses it. In a shared server connection, many client processes access a single shared server process.
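To see the sessions established for a user and the server processes serving them, you could join V$SESSION to V$PROCESS on the process address, as in the following sketch (the user name hr is taken from the example above):

SELECT S.SID, S.SERIAL#, S.USERNAME, P.SPID
  FROM V$SESSION S
  JOIN V$PROCESS P ON S.PADDR = P.ADDR
 WHERE S.USERNAME = 'HR';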
Figure 15-2 One Session for Each Connection
Figure 15-3 illustrates a case in which user hr has a single connection to a database, but this connection has two sessions.
Figure 15-3 Two Sessions in One Connection
Generating an autotrace report of SQL statement execution statistics re-creates the scenario in Figure 15-3. The DISCONNECT command in Example 15-2 actually ends the sessions, not the connection.
Example 15-2 Connections and Sessions
The following example connects SQL*Plus to the database as user SYSTEM and enables tracing, thus creating a new session (sample output included):
SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 90      91 3BE2E41C

SQL> SET AUTOTRACE ON STATISTICS;
SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 88      93 3BE2E41C
 90      91 3BE2E41C
...
SQL> DISCONNECT
The DISCONNECT command actually ends the sessions, not the connection. After opening a new terminal and connecting to the instance as a different user, the following query shows that the connection with the address 3BE2E41C is still active.
SQL> CONNECT dba1@inst1
Password: ********
Connected.
SQL> SELECT PROGRAM FROM V$PROCESS WHERE ADDR = HEXTORAW('3BE2E41C');

PROGRAM
------------------------------------------------
oracle@stbcs09-1 (TNS V1-V3)
In the context of database monitoring, a database operation is a logical entity that includes a SQL statement, a PL/SQL block, or a composite of the two. A simple database operation is either a single SQL statement, or a single PL/SQL procedure or function. A composite database operation is activity between two points in time in a single database session.
You can divide a large set of tasks into database operations, and subdivide operations into phases, so that you can more easily monitor, compare, and tune tasks. You cannot nest operations and phases within one another. A use case is a PL/SQL batch job that takes three times as long as usual to complete. By configuring the job as a database operation, you can easily identify and tune the expensive steps in the job.
You create and manage database operations with the DBMS_SQL_MONITOR package. Each execution of a database operation is uniquely identified by a pair of attributes: operation name and execution ID. Two occurrences of the same database operation can execute at the same time using the same name but different IDs. Each execution of a database operation with the same name can contain different statements.
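For example, a batch job could be bracketed as a database operation with a sketch such as the following (the operation name daily_load is a hypothetical example):

DECLARE
  eid NUMBER;
BEGIN
  eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(dbop_name => 'daily_load');
  -- statements that make up the batch job run here
  DBMS_SQL_MONITOR.END_OPERATION(dbop_name => 'daily_load', dbop_eid => eid);
END;
/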
Oracle Database creates server processes to handle the requests of client processes connected to the instance. A client process always communicates with a database through a separate server process.
Server processes created on behalf of a database application can perform one or more of the following tasks:
Parse and run SQL statements issued through the application, including creating and executing the query plan
Execute PL/SQL code
Read data blocks from data files into the database buffer cache (the DBW background process has the task of writing modified blocks back to disk)
Return results in such a way that the application can process the information
In dedicated server connections, the client connection is associated with one and only one server process (see "Dedicated Server Architecture"). For example, on Linux, 20 client processes connected to a database instance are serviced by 20 server processes.
Each client process communicates directly with its server process. This server process is dedicated to its client process for the duration of the session. The server process stores process-specific information and the UGA in its PGA (see "PGA Usage in Dedicated and Shared Server Modes").
In shared server connections, client applications connect over a network to a dispatcher process, not a server process (see "Shared Server Architecture"). For example, 20 client processes can connect to a single dispatcher process.
The dispatcher process receives requests from connected clients and puts them into a request queue in the large pool (see "Large Pool"). The first available shared server process takes the request from the queue and processes it. Afterward, the shared server places the result into the dispatcher response queue. The dispatcher process monitors this queue and transmits the result to the client.
Like a dedicated server process, a shared server process has its own PGA. However, the UGA for a session is in the SGA so that any shared server can access session data.
Background processes are additional processes used by a multiprocess Oracle database. The background processes perform maintenance tasks required to operate the database and to maximize performance for multiple users.
Each background process has a separate task, but works with the other processes. For example, the LGWR process writes data from the redo log buffer to the online redo log. When a filled redo log file is ready to be archived, LGWR signals another process to archive the redo log file.
Oracle Database creates background processes automatically when a database instance starts. An instance can have many background processes, not all of which always exist in every database configuration. The following query lists the background processes running on your database:
SELECT PNAME FROM V$PROCESS WHERE PNAME IS NOT NULL ORDER BY PNAME;
This section includes the following topics:
See Oracle Database Reference for descriptions of all the background processes.
Mandatory background processes are present in all typical database configurations. These processes run by default in a database instance started with a minimally configured initialization parameter file.
This section describes the following mandatory background processes:
See Oracle Database Reference for descriptions of other mandatory processes, including MMAN, DIAG, VKTM, DBRM, and PSP0.
See Oracle Real Application Clusters Administration and Deployment Guide and Oracle Clusterware Administration and Deployment Guide for more information about background processes specific to Oracle RAC and Oracle Clusterware.
The process monitor (PMON) monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally.
PMON is responsible for cleaning up the database buffer cache and freeing resources that the client process was using. For example, PMON resets the status of the active transaction table, releases locks that are no longer required, and removes the process ID from the list of active processes.
The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net Listener (see "The Oracle Net Listener"). When an instance starts, LREG polls the listener to determine whether it is running. If the listener is running, then LREG passes it relevant parameters. If it is not running, then LREG periodically attempts to contact it.
In releases before Oracle Database 12c, PMON performed the listener registration.
The system monitor process (SMON) is in charge of a variety of system-level cleanup duties.
Duties assigned to SMON include:
Performing instance recovery, if necessary, at instance startup. In an Oracle RAC database, the SMON process of one database instance can perform instance recovery for a failed instance.
Recovering terminated transactions that were skipped during instance recovery because of file-read or tablespace offline errors. SMON recovers the transactions when the tablespace or file is brought back online.
Cleaning up unused temporary segments. For example, Oracle Database allocates extents when creating an index. If the operation fails, then SMON cleans up the temporary space.
Coalescing contiguous free extents within dictionary-managed tablespaces.
SMON checks regularly to see whether it is needed. Other processes can call SMON if they detect a need for it.
The database writer process (DBW) writes the contents of database buffers to data files. DBW processes write modified buffers in the database buffer cache to disk.
Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes—DBW1 through DBW9, DBWa through DBWz, and BW36 through BW99—to improve write performance if your system modifies data heavily. These additional DBW processes are not useful on uniprocessor systems.
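The number of database writer processes is controlled by the DB_WRITER_PROCESSES initialization parameter. As a sketch, the following statement requests four writer processes at the next instance startup (the parameter is static, hence SCOPE = SPFILE):

ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;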
The DBW process writes dirty buffers to disk under the following conditions:
When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBW to write. DBW writes dirty buffers to disk asynchronously if possible while performing other processing.
DBW periodically writes buffers to advance the checkpoint, which is the position in the redo thread from which instance recovery begins. The log position of the checkpoint is determined by the oldest dirty buffer in the buffer cache.
In many cases the blocks that DBW writes are scattered throughout the disk. Thus, the writes tend to be slower than the sequential writes performed by LGWR. DBW performs multiblock writes when possible to improve efficiency. The number of blocks written in a multiblock write varies by operating system.
The log writer process (LGWR) manages the online redo log buffer. LGWR writes one portion of the buffer to the online redo log. By separating the tasks of modifying database buffers, performing scattered writes of dirty buffers to disk, and performing fast sequential writes of redo to disk, the database improves performance.
In the following circumstances, LGWR writes all redo entries that have been copied into the buffer since the last time it wrote:
A user commits a transaction (see "Commits of Transactions").
An online redo log switch occurs.
Three seconds have passed since LGWR last wrote.
The redo log buffer is one-third full or contains 1 MB of buffered data.
DBW must write modified buffers to disk.
Before DBW can write a dirty buffer, the database must write to disk the redo records associated with changes to the buffer (the write-ahead protocol). If DBW discovers that some redo records have not been written, it signals LGWR to write the records to disk, and waits for LGWR to complete before writing the data buffers to disk.
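LGWR activity can be observed through cumulative system statistics, as in the following sketch (statistic names may vary by release):

SELECT NAME, VALUE
  FROM V$SYSSTAT
 WHERE NAME IN ('redo size', 'redo writes', 'redo synch writes');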
Oracle Database uses a fast commit mechanism to improve performance for committed transactions. When a user issues a COMMIT statement, the transaction is assigned a system change number (SCN). LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the commit SCN and the transaction's redo entries.
The redo log buffer is circular. When LGWR writes redo entries from the redo log buffer to an online redo log file, server processes can copy new entries over the entries in the redo log buffer that have been written to disk. LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the online redo log is heavy.
The atomic write of the redo entry containing the transaction's commit record is the single event that determines that the transaction has committed. Oracle Database returns a success code to the committing transaction although the data buffers have not yet been written to disk. The corresponding changes to data blocks are deferred until it is efficient for DBW to write them to the data files.
LGWR can write redo log entries to disk before a transaction commits. The changes that are protected by the redo entries become permanent only if the transaction later commits.
When activity is high, LGWR can use group commits. For example, a user commits, causing LGWR to write the transaction's redo entries to disk. During this write other users commit. LGWR cannot write to disk to commit these transactions until its previous write completes. Upon completion, LGWR can write the list of redo entries of waiting transactions (not yet committed) in one operation. In this way, the database minimizes disk I/O and maximizes performance. If commit requests continue at a high rate, then every write by LGWR can contain multiple commit records.
LGWR writes synchronously to the active mirrored group of online redo log files. If a log file is inaccessible, then LGWR continues writing to other files in the group and writes an error to the LGWR trace file and the alert log. If all files in a group are damaged, or if the group is unavailable because it has not been archived, then LGWR cannot continue to function.
The checkpoint process (CKPT) updates the control file and data file headers with checkpoint information and signals DBW to write blocks to disk. Checkpoint information includes the checkpoint position, SCN, location in online redo log to begin recovery, and so on. As shown in Figure 15-4, CKPT does not write data blocks to data files or redo blocks to online redo log files.
Figure 15-4 Checkpoint Process
The manageability monitor process (MMON) performs many tasks related to the Automatic Workload Repository (AWR). For example, MMON writes when a metric violates its threshold value, takes snapshots, and captures statistics values for recently modified SQL objects.
The manageability monitor lite process (MMNL) writes statistics from the Active Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.
In a distributed database, the recoverer process (RECO) automatically resolves failures in distributed transactions.
The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.
See Oracle Database Administrator's Guide for more information about transaction recovery in distributed systems.
An optional background process is any background process not defined as mandatory. Most optional background processes are specific to tasks or features. For example, background processes that support Oracle Automatic Storage Management (Oracle ASM) are only available when this feature is enabled.
This section describes some common optional processes:
An archiver process (ARCn) copies online redo log files to offline storage after a redo log switch occurs. These processes can also collect transaction redo data and transmit it to standby database destinations. ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled.
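As a sketch, ARCHIVELOG mode could be enabled as follows (the database must be mounted but not open when the mode is changed):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST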
A job queue process runs user jobs, often in batch mode. A job is a user-defined task scheduled to run one or more times. For example, you can use a job queue to schedule a long-running update in the background. Given a start date and a time interval, the job queue processes attempt to run the job at the next occurrence of the interval.
Oracle Database manages job queue processes dynamically, thereby enabling job queue clients to use more job queue processes when required. The database releases resources used by the new processes when they are idle.
Dynamic job queue processes can run many jobs concurrently at a given interval. The sequence of events is as follows:
The job coordinator process (CJQ0) is automatically started and stopped as needed by Oracle Scheduler (see "Oracle Scheduler"). The coordinator process periodically selects jobs that need to be run from the system JOB$ table. New jobs selected are ordered by time.
The coordinator process dynamically spawns job queue slave processes (Jnnn) to run the jobs.
The job queue process runs one of the jobs that was selected by the CJQ0 process for execution. Each job queue process runs one job at a time to completion.
After the process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, then it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, then it terminates after a preset interval.
The initialization parameter JOB_QUEUE_PROCESSES represents the maximum number of job queue processes that can concurrently run on an instance. However, clients should not assume that all job queue processes are available for job execution.
The coordinator process is not started if the initialization parameter JOB_QUEUE_PROCESSES is set to 0.
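Because the parameter is dynamic, the maximum could be adjusted without restarting the instance, as in this sketch:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;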
The flashback data archive process (FBDA) archives historical rows of tracked tables into Flashback Data Archives. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the changed rows into the Flashback Data Archive. It also keeps metadata on the current rows.
FBDA automatically manages the Flashback Data Archive for space, organization, and retention. Additionally, the process keeps track of how long the archiving of tracked transactions has occurred.
The SMCO process coordinates the execution of various space management related tasks, such as proactive space allocation and space reclamation. SMCO dynamically spawns slave processes (Wnnn) to implement the task.
See Oracle Database Development Guide to learn about the Flashback Data Archive and the Temporal History feature.
Slave processes are background processes that perform work on behalf of other processes. This section describes some slave processes used by Oracle Database.
See Oracle Database Reference for descriptions of Oracle Database slave processes.
I/O slave processes (Innn) simulate asynchronous I/O for systems and devices that do not support it. In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start before the transmission has finished.
For example, assume that an application writes 1000 blocks to a disk on an operating system that does not support asynchronous I/O. Each write occurs sequentially and waits for a confirmation that the write was successful. With asynchronous I/O, the application can write the blocks in bulk and perform other work while waiting for a response from the operating system that all blocks were written.
To simulate asynchronous I/O, one process oversees several slave processes. The invoker process assigns work to each of the slave processes, which wait for each write to complete and report back to the invoker when done. In true asynchronous I/O the operating system waits for the I/O to complete and reports back to the process, whereas in simulated asynchronous I/O the slaves wait and report back to the invoker.
The database supports different types of I/O slaves, including the following:
I/O slaves for Recovery Manager (RMAN)
When using RMAN to back up or restore data, you can use I/O slaves for both disk and tape devices.
Database writer slaves
If it is not practical to use multiple database writer processes, such as when the computer has one CPU, then the database can distribute I/O over multiple slave processes. DBW is the only process that scans the buffer cache LRU list for blocks to be written to disk. However, I/O slaves perform the I/O for these blocks.
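Database writer I/O slaves are configured with the DBWR_IO_SLAVES initialization parameter, a static parameter, as in the following sketch:

ALTER SYSTEM SET DBWR_IO_SLAVES = 4 SCOPE = SPFILE;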
In parallel execution or parallel processing, multiple processes work together simultaneously to run a single SQL statement. By dividing the work among multiple processes, Oracle Database can run the statement more quickly. For example, four processes handle four different quarters in a year instead of one process handling all four quarters by itself.
Parallel execution contrasts with serial execution, in which a single server process performs all necessary processing for the sequential execution of a SQL statement. For example, to perform a full table scan such as SELECT * FROM employees, one server process performs all of the work, as shown in Figure 15-5.
Figure 15-5 Serial Full Table Scan
Parallel execution reduces response time for data-intensive operations on large databases such as data warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the largest performance benefits from parallel execution because statement processing can be split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and hybrid systems.
In Oracle RAC systems, the service placement of a specific service controls parallel execution. Specifically, parallel processes run on the nodes on which the service is configured. By default, Oracle Database runs parallel processes only on an instance that offers the service used to connect to the database. This does not affect other parallel operations such as parallel recovery or the processing of GV$ queries.
See Oracle Real Application Clusters Administration and Deployment Guide for considerations regarding parallel execution in Oracle RAC environments.
In parallel execution, the server process acts as the query coordinator (also called the parallel execution coordinator) responsible for the following:
Parsing the query
Allocating and controlling the parallel execution server processes
Sending output to the user
Given a query plan for a query, the coordinator breaks down each operator in a SQL query into parallel pieces, runs them in the order specified in the query, and integrates the partial results produced by the parallel execution servers executing the operators.
The number of parallel execution servers assigned to a single operation is the degree of parallelism for an operation. Multiple operations within the same SQL statement all have the same degree of parallelism.
Parallel execution servers are divided into producers and consumers. The producers are responsible for processing their data and then distributing it to the consumers that need it. The database can perform the distribution using a variety of techniques. Two common techniques are a broadcast, in which each producer sends the rows to all consumers, and a hash, in which the database computes a hash function on a set of keys and makes each consumer responsible for a subset of hash values.
Figure 15-6 represents the interplay between producers and consumers in the parallel execution of the following statement:
SELECT * FROM employees ORDER BY last_name;
The execution plan implements a full scan of the employees table. The scan is followed by a sort of the retrieved rows. All of the producer processes involved in the scan operation send rows to the appropriate consumer process performing the sort.
Figure 15-6 Producers and Consumers
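A statement such as the one above could be directed to run in parallel with a hint, as in this sketch (a degree of parallelism of 4 is an arbitrary example):

SELECT /*+ PARALLEL(employees, 4) */ *
  FROM employees
 ORDER BY last_name;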
In parallel execution, a table is divided dynamically into load units. Each unit, called a granule, is the smallest unit of work when accessing data. A block-based granule is a range of data blocks of the table read by a single parallel execution server (also called a PX server), which uses Pnnn as a name format. To obtain an even distribution of work among parallel server processes, the number of granules is always much higher than the requested DOP.
Figure 15-7 shows a parallel scan of the employees table.
Figure 15-7 Parallel Full Table Scan
The database maps granules to parallel execution servers at execution time. When a parallel execution server finishes reading the rows corresponding to a granule, and when granules remain, it obtains another granule from the query coordinator. This operation continues until the table has been read. The execution servers send results back to the coordinator, which assembles the pieces into the desired full table scan.