Oracle Database 23c Technical Architecture

January 2024

Copyright © 2021, 2024 Oracle and/or its affiliates

Database Server

Diagram: A client process (SQL*Plus, SQL Developer, Oracle Enterprise Manager, or other tools) connects through the Oracle Net Listener and then communicates directly with a server process in the database instance. The instance's memory and processes manage the multitenant container database (CDB), which consists of physical storage structures (data files) and logical data structures (schemas and objects).

Notes

An Oracle Database consists of at least one database instance and one database. The database instance consists of memory and processes to manage the multitenant container database (CDB). The CDB consists of physical storage structures (data files) and logical data structures (schemas and schema objects).

This diagram shows a single-instance database architecture, which has a one-to-one relationship between the database and the database instance. Multiple single-instance databases can exist on the same server machine. This configuration is useful for running different versions of Oracle Database on the same machine.

An Oracle Real Application Clusters (Oracle RAC) database architecture consists of multiple instances that run on separate server machines with a single shared database. For more information about the Oracle RAC architecture, see Oracle Real Application Clusters 19c Technical Architecture.

The Oracle Net Listener is a database server process that receives incoming connections from client processes, establishes connections to the database instance, and then hands over the client connections to communicate directly with the server processes to fulfill the client requests. The listener for a single-instance database can run locally on the database server or run remotely.

You can use several tools to administer the database, including SQL*Plus, SQL Developer, and Oracle Enterprise Manager.


Database Instance

Diagram: Within the database server, the database instance comprises the system global area (SGA), program global areas (PGAs), background processes, and the server processes that handle client processes; the instance manages the multitenant container database (CDB).

Notes

In Oracle Database, a database instance is a set of processes and memory structures that manage database files. The main memory structures are the system global area (SGA) and the program global areas (PGAs). The background processes operate on the database files and use the memory structures to do their work. A database instance exists only in memory.

The database instance also has server processes to handle the connections to the database on behalf of client programs and to perform the work for the client processes. For example, these server processes parse and run SQL statements and retrieve and return results to the client processes. These types of server processes are also called foreground processes.

With the multitenant architecture, a database instance is associated with a single multitenant container database (CDB). All pluggable databases (PDBs) that are plugged into the CDB (either directly or through an application container) share a single set of background processes and SGA.


System Global Area

Diagram: The SGA contains the shared pool, database buffer cache, redo log buffer, large pool, Java pool, Streams pool, shared I/O pool, fixed SGA, flashback buffer, In-Memory Area, and memoptimize pool. The Database Smart Flash Cache and the PMEM Filestore are optional extensions.

Notes

The system global area (SGA) is the memory area that contains data and control information for one Oracle Database instance. All server and background processes share the SGA. When you start a database instance, the amount of memory allocated for the SGA is displayed. The SGA includes data structures such as the shared pool, database buffer cache, redo log buffer, large pool, Java pool, Streams pool, shared I/O pool, and fixed SGA, plus optional areas such as the In-Memory Area and memoptimize pool.
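
As an illustration, you can inspect the sizes of the SGA components in a running instance from a suitably privileged SQL*Plus session (output varies by configuration):

```sql
-- Current size of each SGA component, largest first
SELECT name, bytes, resizeable
FROM   v$sgainfo
ORDER  BY bytes DESC;

-- SQL*Plus also prints summary totals with:
SHOW SGA
```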


Program Global Area

Diagram: A PGA (created for a server process) contains a private SQL area (persistent and runtime areas), SQL work areas (sort, hash, and bitmap merge areas), and the user global area (UGA) with session variables and the OLAP pool. The client process holds a pointer to the cursor, which points into the private SQL area.

Notes

The program global area (PGA) is a nonshared memory region within the database instance that contains data and control information exclusively for use by a server or background process. Oracle Database creates server processes to handle connections to the database on behalf of client programs. (The diagram shows a PGA for a server process.)

In a dedicated server environment, Oracle Database creates one PGA for each server and background process that starts. Oracle Database deallocates a PGA when the associated server or background process is terminated. In a dedicated server session, the PGA consists of a private SQL area (with persistent and runtime areas), SQL work areas (such as the sort, hash, and bitmap merge areas), and the user global area (UGA), which holds session variables and the OLAP pool.

In a shared server environment, multiple client users share the server process. The UGA moves into the large pool, leaving the PGA with only SQL work areas and the private SQL area.
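
As a sketch, a privileged session can observe PGA usage through the dynamic performance views (figures depend entirely on the workload):

```sql
-- Instance-wide PGA memory statistics
SELECT name, value, unit
FROM   v$pgastat;

-- PGA memory currently held by each server and background process
SELECT pid, pga_used_mem, pga_alloc_mem, pga_max_mem
FROM   v$process
ORDER  BY pga_alloc_mem DESC;
```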


Background Processes

Diagram: Background processes fall into mandatory processes (PMON, PMAN, LREG, SMON, DBWn, CKPT, MMNL, MMON, RECO, LGWR), optional processes (ARCn, CJQ0, RVWR, FBDA, SMCO, and others), and worker processes (Dnnn, Snnn, and others).

Notes

Background processes are part of the database instance and perform maintenance tasks to operate the database and to maximize performance for multiple users. Each background process performs a unique task but works with the other processes. Oracle Database creates background processes automatically when you start a database instance. The background processes that are present depend on the features that the database uses. Mandatory background processes start automatically with the instance; you can start optional background processes later as required.

Mandatory background processes are present in all typical database configurations. These processes run by default in a read/write database instance that was started with a minimally configured initialization parameter file. A read-only database instance disables some of these processes. Mandatory background processes include the process monitor (PMON), process manager (PMAN), listener registration (LREG), system monitor (SMON), database writer (DBWn), checkpoint (CKPT), manageability monitor (MMON), manageability monitor lite (MMNL), recoverer (RECO), and log writer (LGWR).

Most optional background processes are specific to tasks or features. Some common optional processes include the archiver (ARCn), job queue coordinator (CJQ0), recovery writer (RVWR), flashback data archive (FBDA), and space management coordinator (SMCO).

Worker processes are background processes that perform work on behalf of other processes. These include the dispatcher (Dnnn) and shared server (Snnn) processes.
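
For example, a privileged session can list the background processes that are currently running, using the common idiom of filtering out slots with a null process address:

```sql
-- Background processes active in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```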


Shared Pool

Diagram: The shared pool contains the library cache (with the shared SQL and PL/SQL area), the reserved pool, the data dictionary cache, the server result cache (SQL query result cache and PL/SQL function result cache), and other components such as latches, enqueues, ASH buffers, and ILM bitmap tables.

Notes

The shared pool is a component of the system global area (SGA) within the database instance. It's responsible for caching various types of program data. For example, the shared pool stores parsed SQL, PL/SQL code, system parameters, and data dictionary information. Almost every operation that occurs in the database involves the shared pool. For example, if a user runs a SQL statement, then Oracle Database accesses the shared pool.

The shared pool consists of subcomponents including the library cache (which contains the shared SQL and PL/SQL area), the reserved pool, the data dictionary cache, and the server result cache (which contains the SQL query result cache and the PL/SQL function result cache), along with other components such as latches, enqueues, Active Session History (ASH) buffers, and ILM bitmap tables.


Large Pool

Diagram: In a shared server environment, dispatcher processes (Dnnn) place client requests on the request queue in the large pool, shared server processes (Snnn) pick up the requests and place responses on the calling dispatcher's response queue, and the dispatcher returns the response to the client. The large pool also holds the user global area (UGA), the deferred inserts pool, the I/O buffer area, and free memory. In a dedicated server environment, each client process communicates with its own dedicated server process.

Notes

The large pool is an optional memory area within the database instance and system global area (SGA). You can configure the large pool to provide large memory allocations for areas such as the user global area (UGA) in a shared server environment, the I/O buffer area, the deferred inserts pool, and the dispatcher request and response queues.

The large pool is different from reserved space in the shared pool, which uses the same least recently used (LRU) list as other memory that is allocated from the shared pool. The large pool does not have an LRU list: pieces of memory are allocated and cannot be freed until the operations that use them complete.
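
As an illustrative sketch (the size shown is an example, not a recommendation), you can examine and size the large pool from a privileged session:

```sql
-- Current allocations within the large pool
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'large pool';

-- Explicitly size the large pool
ALTER SYSTEM SET LARGE_POOL_SIZE = 256M;
```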

A request from a user is a single API call that is part of the user's SQL statement.

In a dedicated server environment, one dedicated server process handles requests for a single client process. Each server process uses system resources, including CPU cycles and memory.

In a shared server environment, the following actions occur:

  1. A client process sends a request to the database instance, and a dispatcher process (Dnnn) receives that request.
  2. The dispatcher places the request in the request queue in the large pool.
  3. The next available shared server process (Snnn) picks up the request. The shared server processes check the common request queue for new requests, picking up new requests on a first-in-first-out basis. One shared server process picks up one request in the queue.
  4. The shared server process makes all the necessary calls to the database to complete the request. First, the shared server process accesses the library cache in the shared pool to verify the requested items. For example, it checks whether the table exists, whether the user has the correct privileges, and so on. Next, the shared server process accesses the buffer cache to retrieve the data. If the data is not there, the shared server process accesses the disk. A different shared server process can handle each database call. Therefore, requests to parse a query, fetch the first row, fetch the next row, and close the result set may each be processed by a different shared server process. Because any shared server process may handle any session's database call, the UGA, which contains information about each client session, must reside in shared memory that is available to all shared server processes.
  5. After the request is completed, a shared server process places the response on the calling dispatcher's response queue in the large pool. Each dispatcher has its own response queue.
  6. The response queue sends the response to the dispatcher.
  7. The dispatcher returns the completed request to the appropriate client process.


Database Buffer Cache

Diagram: The database buffer cache contains the default (8K) pool with its least recently used (LRU) list (cold to hot), the checkpoint queue (in redo byte address order), nondefault buffer pools (the Keep and Recycle pools and the 2K, 4K, 16K, and 32K block-size pools), and PMEM buffer headers. The flash buffer area (KEEP and DEFAULT flash LRU chains) of the optional Database Smart Flash Cache and the optional PMEM Filestore extend the cache.

Notes

The database buffer cache, also called the buffer cache, is a memory area in the system global area (SGA) of the database instance. It stores copies of data blocks that are read from data files. A buffer is a main memory address in which the buffer manager temporarily caches a currently or recently used data block. All users that are concurrently connected to a database instance share access to the buffer cache. The goals of the buffer cache are to optimize physical I/O, keep frequently accessed blocks in the buffer cache, and manage buffer headers that point to data files in the optional Oracle Persistent Memory Filestore (PMEM Filestore).

The first time an Oracle Database client process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it copies the data block from a data file on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than accessing data through a cache miss.

The buffers in the cache are managed by a complex algorithm that uses a combination of least recently used (LRU) lists and touch counts. The LRU algorithm helps ensure that the most recently used blocks tend to stay in memory, minimizing disk access.

The database buffer cache consists of the default (8K) pool, optional nondefault buffer pools (the KEEP and RECYCLE pools and the 2K, 4K, 16K, and 32K block-size pools), the checkpoint queue, and, when configured, the flash buffer area of the Database Smart Flash Cache and buffer headers that point into the PMEM Filestore.
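
As a sketch, nondefault pools are configured with initialization parameters and then assigned per object (the sizes and the sample-schema table name are illustrative):

```sql
-- Carve out a KEEP pool and a nondefault 16K block-size cache
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 128M;
ALTER SYSTEM SET DB_16K_CACHE_SIZE  = 64M;

-- Direct a frequently accessed table to the KEEP pool
ALTER TABLE hr.employees STORAGE (BUFFER_POOL KEEP);
```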


In-Memory Area

Diagram: The In-Memory Area holds the IM column store, in which In-Memory Compression Units (IMCUs) and In-Memory Expression Units (IMEUs) store columnar data and Snapshot Metadata Units (SMUs) store metadata; the Expression Statistics Store (ESS) tracks expression statistics. Server processes scan the columnar data and record invalidations for DML, while the in-memory coordinator (IMCO) background process and space management worker processes (Wnnn) populate the store.

Notes

The In-Memory Area is an optional system global area (SGA) component within the database instance. It contains the In-Memory column store (IM column store), which stores tables and partitions in memory by using a columnar format that is optimized for rapid scans. The IM column store enables data to be simultaneously populated in the SGA in both the traditional row format (in the buffer cache) and a columnar format. The database transparently sends online transactional processing (OLTP) queries, such as primary key lookups, to the buffer cache and analytic and reporting queries to the IM column store. When fetching data, Oracle Database can also read data from both memory areas within the same query. The dual-format architecture does not double memory requirements. The buffer cache is optimized to run with a much smaller size than the size of the database.

You should populate only the most performance-critical data in the IM column store. To add an object to the IM column store, turn on the INMEMORY attribute for an object when you create or alter it. You can specify this attribute on a tablespace (for all new tables and views in the tablespace), table, partition, subpartition, materialized view, or subset of columns within an object.
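
For example, the INMEMORY attribute can be set when altering a table, optionally with a compression level and population priority (the table name is illustrative):

```sql
-- Enable the table for the IM column store, compress for fast query,
-- and populate it eagerly after instance startup
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- Remove the table from the IM column store
ALTER TABLE sales NO INMEMORY;
```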

The IM column store manages both data and metadata in optimized storage units, not in traditional Oracle data blocks. An In-Memory Compression Unit (IMCU) is a compressed, read-only storage unit that contains data for one or more columns. A Snapshot Metadata Unit (SMU) contains metadata and transactional information for an associated IMCU. Every IMCU maps to a separate SMU.

The Expression Statistics Store (ESS) is a repository that stores statistics about expression evaluation. The ESS resides in the SGA and also persists on disk. When an IM column store is enabled, the database leverages the ESS for its In-Memory Expressions (IM expressions) feature. An In-Memory Expression Unit (IMEU) is a storage container for materialized IM expressions and user-defined virtual columns. Note that the ESS is independent of the IM column store. The ESS is a permanent component of the database and cannot be disabled.

Conceptually, an IMEU is a logical extension of its parent IMCU. Just as an IMCU can contain multiple columns, an IMEU can contain multiple virtual columns. Every IMEU maps to exactly one IMCU, mapping to the same row set. The IMEU contains expression results for the data that is contained in its associated IMCU. When the IMCU is populated, the associated IMEU is also populated.

A typical IM expression involves one or more columns, possibly with constants, and has a one-to-one mapping with the rows in the table. For example, an IMCU for an EMPLOYEES table might contain rows 1-1000 for the column weekly_salary. For the rows that are stored in this IMCU, the IMEU calculates the automatically detected IM expression weekly_salary*52 and the user-defined virtual column quarterly_salary, defined as weekly_salary*13. The third row down in the IMCU maps to the third row down in the IMEU.

The In-Memory Area is subdivided into two pools: a 1MB pool, which stores the columnar data (IMCUs and IMEUs), and a 64KB pool, which stores the metadata (SMUs).

The relative size of the two pools is determined by internal heuristics; the majority of the In-Memory Area is allocated to the 1MB pool. The size of the In-Memory Area is controlled by the INMEMORY_SIZE initialization parameter (default 0) and must be at least 100MB. Starting in Oracle Database 12.2, you can increase the size of the In-Memory Area on the fly with an ALTER SYSTEM command, provided that you increase INMEMORY_SIZE by at least 128MB. Note that it is not possible to shrink the In-Memory Area on the fly.
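
As an illustrative sketch (the sizes are examples), the parameter is set persistently before the feature is first enabled, and can then be grown dynamically:

```sql
-- Initial setting: takes effect at the next instance restart
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE=SPFILE;

-- Later, grow a nonzero In-Memory Area on the fly
-- (the increase must be at least 128MB)
ALTER SYSTEM SET INMEMORY_SIZE = 8G;
```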

Oracle Database In-Memory has a Base Level feature that allows you to use up to a 16GB column store without triggering any license tracking.

An in-memory table gets IMCUs allocated in the IM column store when the table data is first accessed or at database startup. An in-memory copy of the table is made by doing a conversion from the on-disk format to the new in-memory columnar format. This conversion is done each time the instance restarts because the IM column store copy resides only in memory. When this conversion is done, the in-memory version of the table gradually becomes available for queries. If a table is partially converted, queries can use the partial in-memory version and go to disk for the rest, rather than waiting for the entire table to be converted.

In-memory hybrid scans can access some data from the IM column store and some data from the row store when not all columns in a table have been populated into the IM column store. This improves performance by orders of magnitude over pure row store queries.

Automatic In-Memory enables, populates, evicts, and recompresses segments without user intervention. When INMEMORY_AUTOMATIC_LEVEL is set to HIGH, the database automatically enables and populates segments based on their usage patterns. This automation helps maximize the number of objects that can be populated into the IM column store at one time.

In response to queries and data manipulation language (DML), server processes scan columnar data and update SMU metadata. Background processes populate row data from disk into the IM column store. The in-memory coordinator process (IMCO) initiates the background population and repopulation of columnar data. The space management coordinator process (SMCO) and space management worker processes (Wnnn) do the actual populating and repopulating of data on behalf of IMCO. DML block changes are written to the buffer cache and then to disk. Background processes then repopulate row data from disk into the IM column store based on the metadata invalidations and query requests.

You can enable the In-Memory FastStart feature to write the columnar data in the IM column store back to a tablespace in the database in compressed columnar format. This feature makes database startup faster. Note that this feature does not apply to IMEUs. They are always populated dynamically from the IMCUs.

In-Memory deep vectorization can optimize complex SQL operators by pipelining the physical operators inside each SQL operator and vectorizing them by using single instruction, multiple data (SIMD) techniques. This feature is enabled by default, but you can disable it by setting the INMEMORY_DEEP_VECTORIZATION initialization parameter to FALSE.


Multitenant Container Database (CDB)

Diagram: The CDB contains the root container (CDB$ROOT), the seed PDB (PDB$SEED), and user-created PDBs (regular PDBs and application containers); the root shares data with the other containers. Each container consists of physical data files organized into logical tablespaces. The CDB's system files include the control file, online redo log files, and other system files.

Notes

In a multitenant container database (CDB), a container is a collection of schemas, objects, and related structures. A pluggable database (PDB) is a user-created container that stores the data and code for an application, such as a human resources application.

From the perspective of a user or application, the PDB appears to be a logically separate, independent database. From the operating system perspective, the CDB is the database.

Every CDB has exactly one root container (CDB$ROOT), which stores Oracle-supplied metadata and common users; exactly one seed PDB (PDB$SEED), which is a system-supplied template for creating new PDBs; and zero or more user-created PDBs and application containers.

At a physical level, containers consist of physical data files. Each container has at least one data file. At a logical level, the database allocates data across data files with logical structures called tablespaces.

The CDB also uses several system files during its operation. The control file and online redo log files reside in the CDB and are shared across all PDBs and application containers. Other system files reside outside the CDB.

Note: Non-CDBs were desupported in Oracle Database 21c, which means that the Oracle Universal Installer and Database Configuration Assistant (DBCA) can no longer create non-CDB Oracle Database instances.
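
As a sketch, a privileged session in the CDB root can list the containers, and a new PDB can be cloned from the seed (the PDB name, admin user, and file paths are illustrative):

```sql
-- List the containers in this CDB
SELECT con_id, name, open_mode
FROM   v$containers
ORDER  BY con_id;

-- Create a PDB from PDB$SEED
CREATE PLUGGABLE DATABASE hrpdb
  ADMIN USER hradmin IDENTIFIED BY password
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/hrpdb/');
```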


Application Containers

Diagram: Within the CDB, an application container contains an application root, an optional application seed, and application PDBs; the application root shares data with them, just as the CDB root shares data with the seed PDB, user-created PDBs, and regular PDBs. Each container consists of physical data files organized into logical tablespaces.

Notes

An application container is an optional, user-created component within a multitenant container database (CDB). An application container stores data and metadata for pluggable databases (PDBs) that are specific to an application.

In some ways, an application container functions as an application-specific CDB within a CDB. An application container, like the CDB itself, can include multiple application PDBs and enables these PDBs to share data and metadata in the same way that the CDB root shares data and metadata with the seed PDB, user-created PDBs, and application containers that are plugged into the CDB root.

A CDB can include zero or more application containers. An application container consists of exactly one application root and one or more application PDBs, which plug into the application root. The application root belongs to the CDB root. An application root differs from both the CDB root and a standard PDB in that it can store user-created common objects, which are accessible to the application PDBs that are plugged in to it.

An application seed is an optional, user-created PDB within an application container. An application container can have zero or one application seed. An application seed enables you to create application PDBs quickly. It serves the same role within the application container as the seed PDB serves within the CDB itself.

At a physical level, containers consist of physical data files. Each container has at least one data file. At a logical level, the database allocates data across data files with logical structures called tablespaces.

A typical application installs application common users, metadata-linked common objects, and data-linked common objects. For example, you might create multiple sales-related PDBs within one application container, with these PDBs sharing an application back end that consists of a set of common tables and table definitions.


Database Storage Structures

Diagram: Two schemas store their tables and indexes in two tablespaces. Each tablespace consists of one or more data files. Each table or index is stored as a segment, each segment consists of extents, and each extent consists of data blocks; a segment can span data files within its tablespace.

Notes

A multitenant container database (CDB) is a collection of schemas and schema objects, such as tables and indexes.

At the physical level, a CDB stores objects in data files. You can store data files in an operating system file system or in Oracle Automatic Storage Management (Oracle ASM) disk groups.

At the logical level, a CDB allocates data across the data files with tablespaces, segments, extents, and data blocks.


Tablespaces

Diagram: In the CDB, the CDB root, PDBs, and application roots each have logical permanent tablespaces (SYSTEM, SYSAUX, USERS, UNDO), which correspond to physical data files, and a temporary tablespace (TEMP), which corresponds to physical temp files.

Notes

A tablespace is a logical storage container for segments. Segments are database objects, such as tables and indexes, that consume storage space.

In a multitenant container database (CDB), each pluggable database (PDB) and application root has its own set of permanent tablespaces, such as SYSTEM, SYSAUX, USERS, and UNDO, which correspond to physical data files.

Each container also has a temporary tablespace, which corresponds to physical temp files. One default temporary tablespace exists for the CDB root and for each application root, application PDB, and PDB. Temporary tablespaces contain transient data that persists only for the duration of a session. No permanent schema objects can reside in a temporary tablespace.
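
As an illustration (the tablespace name, file path, and sizes are examples), you can list the tablespaces visible in the current container and create a new permanent tablespace backed by one data file:

```sql
-- Tablespaces in the current container
SELECT tablespace_name, contents, status
FROM   dba_tablespaces;

-- A permanent tablespace with one autoextending data file
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/hrpdb/app_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 1G;
```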


Schemas and Schema Objects

Diagram: A user account owns a schema that contains tables, indexes, partitions, views, sequences, dimensions, synonyms, and PL/SQL subprograms and packages. Schema objects are stored logically in tablespaces and physically in data files.

Notes

A database schema is a logical container for data structures, called schema objects. Each Oracle Database user account owns a single schema, which has the same name as the user.

Note: The database also stores other types of objects that are not contained in a schema. These objects include database user accounts, roles, contexts, and dictionary objects.

At the physical level, the database stores schema objects in data files. At the logical level, the database allocates data across the data files with tablespaces. There is no relationship between schemas and tablespaces: a tablespace can contain objects from different schemas, and the objects for a schema can be contained in different tablespaces. The data of each object is physically contained in one or more data files. 
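
For example, a privileged session can see that one schema's segments may spread across several tablespaces (the HR sample schema is used for illustration):

```sql
-- Physical placement of one schema's segments;
-- a single schema can span multiple tablespaces
SELECT segment_name, segment_type, tablespace_name
FROM   dba_segments
WHERE  owner = 'HR'
ORDER  BY tablespace_name, segment_name;
```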

Schemas include principal types of objects such as tables, indexes, partitions, views, sequences, dimensions, synonyms, and PL/SQL subprograms and packages.


Database System Files

Diagram: Database system files include the files required for startup (the parameter file, control file, and online redo log files) plus the password file, keystores, archived redo log files, flashback logs, the block change tracking file, backup files, and the Automatic Diagnostic Repository (ADR).

Notes

Oracle Database uses several database system files that reside on the database server. These are different from data files, which are physical files that belong to database containers.

The following files are required for database startup: the parameter file, the control file, and the online redo log files.

Note: Oracle recommends that you maintain multiple copies of the control files and online redo log files in separate locations to avoid a single point of failure. This is called multiplexing. For details, see Multiple Control Files and Multiple Copies of Online Redo Log Files.

The database also uses the following system files, which reside outside the CDB: the password file, keystores, archived redo log files, flashback logs, the block change tracking file, backup files, and the Automatic Diagnostic Repository (ADR).


Automatic Diagnostic Repository (ADR)

Diagram: The ADR contains the alert log file, foreground and background trace files, dump files, incident dumps, incident packages, and Health Monitor reports.

Notes

The Automatic Diagnostic Repository (ADR) is a system-wide central repository for database tracing, logging, and diagnostic data. It includes the alert log file, foreground and background trace files, dump files, incident dumps, incident packages, and Health Monitor reports.


Backup Files

Diagram: Recovery Manager (RMAN) creates physical backups as backup sets and image copies on disk, or through Media Management Library (MML) or System Backup to Tape (SBT) software to tape, Oracle Database Backup Cloud Service, or Recovery Appliance. Operating system utilities also create physical backups. Data Pump Export creates logical backups as dump files.

Notes

You can create two types of backups: physical backups, which are copies of physical database files made with Recovery Manager (RMAN) or operating system utilities, and logical backups, which are dump files created with Data Pump Export.

When you use RMAN, you can store backups on disk in two formats: backup sets and image copies.

RMAN can also interface with Media Management Library (MML) or System Backup to Tape (SBT) software, which can create backups to tape, Oracle Database Backup Cloud Service, or Zero Data Loss Recovery Appliance (commonly known as Recovery Appliance).


Process Monitor Process (PMON)

Diagram: The process monitor process (PMON) scans all processes and delegates cleanup of terminated processes to the cleanup main process (CLMN), which in turn delegates to cleanup worker processes (CLnn).

Notes

The process monitor process (PMON) periodically scans all processes to find any that have terminated abnormally. PMON delegates cleanup work to the cleanup main process (CLMN), which periodically performs cleanup of terminated processes and sessions. CLMN delegates to cleanup worker processes (CLnn).

PMON runs as an operating system process and not as a thread. In addition to database instances, PMON also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle ASM Proxy instances.


Process Manager Process (PMAN)

Diagram: The process manager process (PMAN) monitors, spawns, and stops dispatcher and shared server processes, connection broker and pooled server processes, restartable background processes, and job queue processes.

Notes

The process manager process (PMAN) oversees several background processes, including shared servers, pooled servers, and job queue processes. PMAN monitors, spawns, and stops the following types of processes as needed: dispatcher and shared server processes, connection broker and pooled server processes, restartable background processes, and job queue processes.

PMAN runs as an operating system process and not as a thread. In addition to database instances, PMAN also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle ASM proxy instances.


Listener Registration Process (LREG)

Diagram: The listener registration process (LREG) notifies the Oracle Net Listener.

Notes

The listener registration process (LREG) notifies the listeners about instances, services, handlers, and endpoints.

LREG can run as a thread or an operating system process. In addition to database instances, LREG also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle Real Application Clusters (Oracle RAC).


System Monitor Process (SMON)

Diagram: The system monitor process (SMON) creates and manages temporary segments, maintains undo segments, maintains the data dictionary and the system change number (SCN), and cleans up data and temp files.

Notes

The system monitor process (SMON) performs many database maintenance tasks, including performing instance recovery at startup when necessary, cleaning up unused temporary segments, recovering terminated transactions that were skipped during instance recovery, and coalescing contiguous free extents in dictionary-managed tablespaces.

SMON is resilient to internal and external errors that occur during background activities. SMON can run as a thread or an operating system process. In an Oracle Real Application Clusters (Oracle RAC) database, the SMON process of one instance can perform instance recovery for other instances that have failed.


Database Writer Process (DBWn)

Diagram: The database writer process (DBWn) reads the database buffer cache and writes modified buffers to the data files and, when configured, to the Database Smart Flash Cache.

Notes

The database writer process (DBWn) reads the database buffer cache and writes modified buffers to data files. It also handles checkpoints, file open synchronization, and logging of Block Written records. DBWn also reads the Database Smart Flash Cache (Flash Cache) when Flash Cache is configured.

In many cases the blocks that DBWn writes are scattered throughout the disk, so the writes tend to be slower than the sequential writes performed by the log writer process (LGWR). DBWn performs multiblock writes when possible to improve efficiency. The number of blocks that are written in a multiblock write varies by operating system.

The DB_WRITER_PROCESSES initialization parameter specifies the number of database writer processes. There can be 1 to 100 database writer processes. The names of the first 36 database writer processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th database writer processes are BW36-BW99. The database selects an appropriate default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified setting based on the number of CPUs and processor groups.
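
As a sketch, a privileged session can inspect the current setting and, because the parameter is static, request a new value for the next restart (the value 4 is an example):

```sql
-- Current number of database writer processes (SQL*Plus)
SHOW PARAMETER db_writer_processes

-- Request four database writers at the next instance restart
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE=SPFILE;
```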

Related Resources

Checkpoint Process (CKPT)

Control Files Database Writer Process (DBWn) Data Files Database Buffer Cache Checkpoint Process (CKPT) Updates Updates data Updates headers Reads Triggers

Notes

The checkpoint process (CKPT), at specific times, starts a checkpoint request by triggering the database writer process (DBWn) to read the database buffer cache and write modified buffers to data files. On completion of individual checkpoint requests, CKPT updates data file headers and control files to record the most recent checkpoint.

CKPT checks every three seconds to see whether the amount of memory exceeds the value of the PGA_AGGREGATE_LIMIT initialization parameter, and if so, takes action.
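The limit that CKPT checks can be viewed and adjusted online. A minimal sketch from a SQL*Plus session, assuming ALTER SYSTEM privileges; the 8G value is illustrative:

```sql
-- Inspect the current limit on total PGA memory
SHOW PARAMETER pga_aggregate_limit

-- PGA_AGGREGATE_LIMIT is dynamic, so the new value takes effect immediately
ALTER SYSTEM SET pga_aggregate_limit = 8G SCOPE = BOTH;
```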

CKPT can run as a thread or an operating system process. In addition to database instances, CKPT also runs on Oracle Automatic Storage Management (Oracle ASM) instances.

Related Resources

Manageability Monitor Process (MMON) and Manageability Monitor Lite Process (MMNL)

System Global Area (SGA) Shared Pool SYSAUX Tablespace Automatic Workload Repository (AWR) DBA_HIST_ACTIVE_SESS_HISTORY View Snapshots Active Session History (ASH) Buffer Manageability Monitor Lite Process (MMNL) Manageability Monitor Process (MMON) Flushes ASH buffer Creates snapshots every 60 minutes Samples every second V$SESSION and V$SESSION_WAIT Views V$ACTIVE_SESSION_HISTORY View Records data Gathers and filters statistics

Notes

The manageability monitor process (MMON) and manageability monitor lite process (MMNL) perform tasks related to the Automatic Workload Repository (AWR). The AWR is a repository of historical performance data that includes cumulative statistics for the system, sessions, individual SQL statements, segments, and services. It provides problem detection and self-tuning. The AWR resides in the SYSAUX tablespace. AWR reports can be generated in the CDB root or in any PDB. For more information about how the AWR works in a multitenant container database (CDB), see About Using Manageability Features in a CDB.

MMON gathers memory statistics from the system global area (SGA), filters them, and creates snapshots of those statistics in the AWR every 60 minutes (or another interval that you choose). It also performs Automatic Database Diagnostic Monitor (ADDM) analysis and issues alerts for metrics that exceed their threshold values.
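The snapshot interval and retention can be changed through the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, assuming DBA privileges; the 30-minute interval and 15-day retention are illustrative:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,         -- minutes between automatic snapshots
    retention => 15 * 1440); -- retention in minutes (15 days)
END;
/

-- Take an on-demand snapshot between the automatic ones
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```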

MMNL gathers session statistics (such as the user ID, state, the machine, and the SQL that it's processing) and stores them in the active session history (ASH) buffer, which is part of the shared pool in the SGA. Specifically, MMNL samples the V$SESSION and V$SESSION_WAIT views every second and then records that data in the V$ACTIVE_SESSION_HISTORY view in the ASH buffer. MMNL doesn't sample inactive sessions. The ASH is a rolling buffer in memory, so newer information overwrites earlier information when needed. When the ASH buffer becomes full or when MMON takes a snapshot, MMNL flushes (empties) the ASH buffer into the DBA_HIST_ACTIVE_SESS_HISTORY view in the AWR. Because space is expensive, MMNL flushes only one in every 10 entries. MMNL also computes metrics.
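The two views involved can be queried directly. A minimal sketch, assuming SELECT privileges on the V$ and DBA_HIST views:

```sql
-- Recent samples still in the in-memory ASH buffer (one row per active session per second)
SELECT sample_time, session_id, sql_id, event
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE;

-- Older samples that MMNL flushed to the AWR (roughly one in ten ASH rows)
SELECT sample_time, session_id, sql_id
FROM   dba_hist_active_sess_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '1' DAY;
```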

Both MMON and MMNL can run as threads or as operating system processes. In addition to database instances, MMON and MMNL also run on Oracle Automatic Storage Management (Oracle ASM) instances.

Related Resources

Recoverer Process (RECO)

Distributed Database System Database Server Recoverer Process (RECO) Database Server Database Server Resolves in-doubt distributed transactions

Notes

The recoverer process (RECO) resolves distributed transactions that are pending because of a network or system failure in a distributed database system, which is a set of database servers that can appear to applications as a single data source.

The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.
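The pending transaction table mentioned above is visible through a data dictionary view. A minimal sketch, assuming DBA privileges; the forced transaction ID is illustrative:

```sql
-- In-doubt distributed transactions awaiting resolution by RECO
SELECT local_tran_id, state, fail_time
FROM   dba_2pc_pending;

-- If automatic resolution is impossible, a DBA can force an outcome manually:
-- COMMIT FORCE '1.21.17';     -- transaction ID is illustrative
```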

RECO can run as a thread or as an operating system process.

Related Resources

Log Writer Process (LGWR)

System Global Area (SGA) Database Buffer Cache Online Redo Log Files Redo Transport Worker Processes (TTnn) Log Writer Worker Processes (LGnn) Log Writer Process (LGWR) Redo Log Buffer Server Process Reads Writes Delegates Reads Writes Writes

Notes

Server processes write changes to data blocks in the database buffer cache and write redo data into the redo log buffer. The log writer process (LGWR) writes redo log entries sequentially from the redo log buffer to the online redo log. If the database has a multiplexed redo log, then LGWR writes the same redo log entries to all members of a redo log file group.

LGWR handles operations that are very fast or that must be coordinated. It delegates operations that could benefit from concurrent execution to the log writer worker processes (LGnn), which are numbered LG00-LG99. These operations include writing redo from the log buffer to the redo log file and posting the completed write to the waiting server process.

The redo transport worker processes (TTnn), numbered TT00-TTzz, ship redo from the current online and standby redo logs to remote standby destinations that are configured for asynchronous (ASYNC) redo transport.

LGWR can run as a thread or as an operating system process. In addition to database instances, LGWR also runs on Oracle Automatic Storage Management (Oracle ASM) instances. Each database instance in an Oracle Real Application Clusters (Oracle RAC) configuration has its own set of redo log files.
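The online redo log groups and members that LGWR writes to can be listed from the V$ views. A minimal sketch, assuming SELECT privileges on them:

```sql
-- One row per redo log group, with its current state
SELECT group#, members, status FROM v$log;

-- The file members behind each group (multiplexed logs show several rows per group)
SELECT group#, member FROM v$logfile ORDER BY group#;
```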

Related Resources

Archiver Process (ARCn)

Online Redo Log Files Archived Redo Log Files Archiver Process (ARCn) Writes redo entries Triggers archiving

Notes

The archiver processes (ARCn) exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled, in which case ARCn automatically archives online redo log files. The log writer process (LGWR) cannot reuse and overwrite an online redo log group until it has been archived.

The database starts multiple ARCn processes as needed to ensure that the archiving of filled online redo logs does not fall behind. Possible processes include ARC0-ARC9 and ARCa-ARCt (up to 30 archiver processes).

The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes. If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can increase the maximum number of ARCn processes. There can also be multiple archive log destinations. Oracle recommends at least one ARCn process for each destination.
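Both settings described above can be checked and changed online. A minimal sketch, assuming SYSDBA privileges; the value 8 is illustrative:

```sql
-- Confirm that the database is in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- LOG_ARCHIVE_MAX_PROCESSES is dynamic, so additional archivers start immediately
ALTER SYSTEM SET log_archive_max_processes = 8 SCOPE = BOTH;
```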

ARCn can run as a thread or as an operating system process.

Related Resources

Job Queue Coordinator Process (CJQ0)

Data Dictionary Job Log Table Job Table Job Queue Worker Processes (Jnnn) Job Queue Coordinator Process (CJQ0) Spawns Inserts entries Selects jobs Updates job states

Notes

The job queue coordinator process (CJQ0) selects jobs that need to be run from the data dictionary and spawns job queue worker processes (Jnnn) to run the jobs. Oracle Scheduler automatically starts and stops CJQ0 as needed. The JOB_QUEUE_PROCESSES initialization parameter specifies the maximum number of processes that can be created for running jobs. CJQ0 starts only as many job queue processes as are required by the number of jobs to run and the available resources.
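A job that CJQ0 would coordinate can be created with the DBMS_SCHEDULER package. A minimal sketch, assuming the CREATE JOB privilege; the job name, schema, and schedule are illustrative:

```sql
-- Cap the number of concurrent job queue worker processes
ALTER SYSTEM SET job_queue_processes = 20 SCOPE = BOTH;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',   -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```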

Jnnn processes run jobs that the job coordinator assigns. When workers pick jobs for processing, they do the following:

When a job is done, the workers do the following:

Both CJQ0 and Jnnn can run as threads or as operating system processes.

Related Resources

Recovery Writer Process (RVWR)

Recovery Writer Process (RVWR) System Global Area (SGA) Flashback Logs Flashback Buffer Reads Writes

Notes

When you use Flashback Database, the recovery writer process (RVWR) reads flashback data from the flashback buffer in the system global area (SGA) and writes it to the flashback logs. Flashback Database uses those logs to rewind the database to a time in the past, provided that the required flashback logs are available.
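Flashback Database must be enabled before RVWR starts writing flashback logs. A minimal sketch, assuming SYSDBA privileges, ARCHIVELOG mode, and a configured fast recovery area; the retention target is illustrative:

```sql
-- Keep enough flashback logs to rewind about two days (2880 minutes)
ALTER SYSTEM SET db_flashback_retention_target = 2880 SCOPE = BOTH;
ALTER DATABASE FLASHBACK ON;

-- Later, with the database mounted, rewind it one hour:
-- FLASHBACK DATABASE TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
```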

RVWR can run as a thread or as an operating system process.

Related Resources

Flashback Data Archiver Process (FBDA)

Flashback Data Archiver Process (FBDA) Data Files Flashback Data Archive (Tablespace) Undo Segments System Global Area (SGA) Database Buffer Cache Undo Blocks Writes Reads Reads

Notes

When you use Flashback Data Archive, the flashback data archiver process (FBDA) tracks and stores transactional changes to a table over its lifetime. This way, you can flash back tables to restore them to the way they were at a time in the past.

When a transaction that modifies a tracked table commits, FBDA reads undo blocks in the database buffer cache and undo segments in data files. Then it filters what is relevant to objects that are marked for archival and copies that undo information into the flashback data archive (tablespace) in the data files. FBDA maintains metadata on the current rows and tracks how much data has been archived.

FBDA automatically manages the flashback data archive for space, organization (partitioning tablespaces), and retention. Additionally, FBDA keeps track of how far the archiving of tracked transactions has progressed.
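Tracking is enabled per table by associating it with a flashback data archive. A minimal sketch, assuming the FLASHBACK ARCHIVE ADMINISTER privilege; the archive, tablespace, and table names are illustrative:

```sql
-- Create an archive with a one-year retention and start tracking a table
CREATE FLASHBACK ARCHIVE hr_archive TABLESPACE fba_ts RETENTION 1 YEAR;
ALTER TABLE hr.employees FLASHBACK ARCHIVE hr_archive;

-- Query the table as it looked a week ago
SELECT * FROM hr.employees
AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '7' DAY;
```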

FBDA can run as a thread or as an operating system process.

Related Resources

Space Management Coordinator Process (SMCO)

Worker Processes Wnnn . . . W003 W002 W001 Space Management Coordinator Process (SMCO) Dynamically spawns

Notes

The optional space management coordinator process (SMCO) schedules various space management tasks, including proactive space allocation and space reclamation. SMCO dynamically spawns space management worker processes (Wnnn) to implement these tasks. After it starts, each worker acts as an autonomous agent: when it finishes a task, it automatically picks up another task from the queue, and it terminates itself after being idle for an extended period.

Wnnn worker processes (named W001, W002, and so on) perform tasks on behalf of Space Management and the Oracle Database In-Memory option.

For Space Management, Wnnn processes perform space management tasks in the background, including the following:

For the Oracle Database In-Memory option, Wnnn processes perform the following tasks:

Both SMCO and Wnnn can run as threads or as operating system processes.

Related Resources

Dispatcher Process (Dnnn) and Shared Server Process (Snnn)

Database Instance System Global Area (SGA) Large Pool Request/Response Queues Shared Server Environment Shared Server Processes (Snnn) Dispatcher Processes (Dnnn) Picks up request/sends response Places request/receives response Communicates directly after connection is established Establishes connection Oracle Net Listener Client Process

Notes

In a shared server environment, a dispatcher process (Dnnn) directs multiple incoming network session requests to a pool of shared server processes (Snnn). You can create multiple dispatcher processes for a single database instance.

The Oracle Net Listener process establishes a connection to the dispatcher. When a client process makes a connection request that requires a shared server process, the listener returns the dispatcher address so the client process can communicate with the dispatcher directly after the connection is established.

The dispatcher places the client request on the request queue in the large pool of the system global area (SGA) within the database instance. The next available shared server process picks up and processes the request. When the request is complete, the shared server process places the response on the calling dispatcher's response queue in the large pool. The dispatcher retrieves the response from its queue and returns the completed request to the client process.
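The shared server configuration described above is controlled by two initialization parameters. A minimal sketch, assuming ALTER SYSTEM privileges; the dispatcher and server counts are illustrative:

```sql
-- Start three TCP dispatchers and a pool of ten shared servers
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=3)' SCOPE = BOTH;
ALTER SYSTEM SET shared_servers = 10 SCOPE = BOTH;

-- Monitor the dispatcher and shared server pools
SELECT name, status FROM v$dispatcher;
SELECT name, status, requests FROM v$shared_server;
```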

Both Snnn and Dnnn can run as threads or as operating system processes. Dispatcher and shared server processes exist only when the database is configured for shared server.

Related Resources