Oracle Database 23c Technical Architecture
January 2024
Copyright © 2021, 2024 Oracle and/or its affiliates.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
An Oracle Database consists of at least one database instance and one database. The database instance consists of memory and processes to manage the multitenant container database (CDB). The CDB consists of physical storage structures (data files) and logical data structures (schemas and schema objects).
This diagram shows a single-instance database architecture, which has a one-to-one relationship between the database and the database instance. Multiple single-instance databases can exist on the same server machine. This configuration is useful for running different versions of Oracle Database on the same machine.
An Oracle Real Application Clusters (Oracle RAC) database architecture consists of multiple instances that run on separate server machines with a single shared database. For more information about the Oracle RAC architecture, see Oracle Real Application Clusters 19c Technical Architecture.
The Oracle Net Listener is a database server process that receives incoming connections from client processes, establishes connections to the database instance, and then hands over the client connections to communicate directly with the server processes to fulfill the client requests. The listener for a single-instance database can run locally on the database server or run remotely.
You can use several tools to administer the database, including the following:
In Oracle Database, a database instance is a set of processes and memory structures that manage database files. The main memory structures are the system global area (SGA) and the program global areas (PGAs). The background processes operate on the database files and use the memory structures to do their work. A database instance exists only in memory.
The database instance also has server processes to handle the connections to the database on behalf of client programs and to perform the work for the client processes. For example, these server processes parse and run SQL statements and retrieve and return results to the client processes. These types of server processes are also called foreground processes.
With the multitenant architecture, a database instance is associated with a single multitenant container database (CDB). All pluggable databases (PDBs) that are plugged into the CDB (either directly or through an application container) share a single set of background processes and SGA.
The system global area (SGA) is the memory area that contains data and control information for one Oracle Database instance. All server and background processes share the SGA. When you start a database instance, the amount of memory allocated for the SGA is displayed. The SGA includes the following data structures:
The memoptimize pool stores buffers and related structures for small tables that are specified with the MEMOPTIMIZE FOR READ attribute. These buffers are permanently pinned in the memoptimize pool to avoid disk I/O and are completely separate from the database buffer cache. The hash index is created when the Memoptimized Rowstore is configured, and Oracle Database maintains it automatically.

The program global area (PGA) is a nonshared memory region within the database instance that contains data and control information exclusively for use by a server or background process. Oracle Database creates server processes to handle connections to the database on behalf of client programs. (The diagram shows a PGA for a server process.)
In a dedicated server environment, Oracle Database creates one PGA for each server and background process that starts. Oracle Database deallocates a PGA when the associated server or background process is terminated. In a dedicated server session, the PGA consists of the following components:
In a shared server environment, multiple client users share the server process. The UGA moves into the large pool, leaving the PGA with only SQL work areas and the private SQL area.
Background processes are part of the database instance and perform maintenance tasks to operate the database and to maximize performance for multiple users. Each background process performs a unique task, but works with the other processes. Oracle Database creates background processes automatically when you start a database instance. The background processes that are present depend on the features that the database is using. When you start a database instance, mandatory background processes automatically start. You can start optional background processes later as required.
Mandatory background processes are present in all typical database configurations. These processes run by default in a read/write database instance that was started with a minimally configured initialization parameter file. A read-only database instance disables some of these processes. Mandatory background processes include the process monitor (PMON), process manager (PMAN), listener registration (LREG), system monitor (SMON), database writer (DBWn), checkpoint (CKPT), manageability monitor (MMON), manageability monitor lite (MMNL), recoverer (RECO), and log writer (LGWR).
Most optional background processes are specific to tasks or features. Some common optional processes include the archiver (ARCn), job queue coordinator (CJQ0), recovery writer (RVWR), flashback data archive (FBDA), and space management coordinator (SMCO).
Worker processes are background processes that perform work on behalf of other processes. These include the dispatcher (Dnnn) and shared server (Snnn) processes.
The shared pool is a component of the system global area (SGA) within the database instance. It's responsible for caching various types of program data. For example, the shared pool stores parsed SQL, PL/SQL code, system parameters, and data dictionary information. Almost every operation that occurs in the database involves the shared pool. For example, if a user runs a SQL statement, then Oracle Database accesses the shared pool.
The shared pool consists of the following subcomponents:
The large pool is an optional memory area within the database instance and system global area (SGA). You can configure the large pool to provide large memory allocations for the following areas:
Fast ingest uses the large pool to buffer inserts into tables that are specified with the MEMOPTIMIZE FOR WRITE attribute. The inserts by fast ingest are also called deferred inserts. They are initially buffered in the large pool and later written to disk asynchronously by the space management coordinator (SMCO) and Wnnn worker background processes after 1MB worth of writes per session per object, or after 60 seconds. Sessions can't read any data that is buffered in this pool, even when committed, until the SMCO background process sweeps it. The pool is initialized in the large pool when the first row is inserted into a memoptimized table. 2GB is allocated from the large pool when there is enough space. If there is not enough space in the large pool, an ORA-4031 error is detected internally and automatically cleared, and the allocation is retried with half the requested size. If there is still not enough space, the allocation is retried with 512MB and then 256MB, after which the feature is disabled until the instance is restarted. Once the pool is initialized, its size remains static; it cannot grow or shrink.

The large pool is different from reserved space in the shared pool, which uses the same least recently used (LRU) list as other memory that is allocated from the shared pool. The large pool does not have an LRU list. Pieces of memory are allocated and cannot be freed until they are no longer being used.
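Fast ingest, as described above, is enabled per table and per statement. A minimal sketch, assuming a hypothetical sensor_readings table:

```sql
-- Hedged sketch: enable fast ingest on a hypothetical table.
CREATE TABLE sensor_readings (
  sensor_id  NUMBER,
  reading_ts TIMESTAMP,
  value      NUMBER
) MEMOPTIMIZE FOR WRITE;

-- Deferred (buffered) inserts use the MEMOPTIMIZE_WRITE hint.
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
  VALUES (1, SYSTIMESTAMP, 98.6);
COMMIT;
```

Remember that a session cannot query its own deferred inserts until the SMCO sweep writes them to disk.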
A request from a user is a single API call that is part of the user's SQL statement.
In a dedicated server environment, one dedicated server process handles requests for a single client process. Each server process uses system resources, including CPU cycles and memory.
In a shared server environment, the following actions occur:
The database buffer cache, also called the buffer cache, is a memory area in the system global area (SGA) of the database instance. It stores copies of data blocks that are read from data files. A buffer is a main memory address in which the buffer manager temporarily caches a currently or recently used data block. All users that are concurrently connected to a database instance share access to the buffer cache. The goals of the buffer cache are to optimize physical I/O, keep frequently accessed blocks in the buffer cache, and manage buffer headers that point to data files in the optional Oracle Persistent Memory Filestore (PMEM Filestore).
The first time an Oracle Database client process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it copies the data block from a data file on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than accessing data through a cache miss.
The buffers in the cache are managed by a complex algorithm that uses a combination of least recently used (LRU) lists and touch count. The LRU helps to ensure that the most recently used blocks tend to stay in memory to minimize disk access.
The database buffer cache consists of the following areas:
The Database Smart Flash Cache (flash cache) is an optional extension of the buffer cache on flash devices. You can control retention in the flash cache with the FLASH_CACHE object attribute. The KEEP flash LRU list maintains the buffer headers for objects specified as KEEP on a separate list to prevent the regular buffer headers from replacing them. This means that the flash buffer headers belonging to an object that is specified as KEEP tend to stay in the flash cache longer. If the FLASH_CACHE object attribute is set to NONE, the system does not retain the corresponding buffers in the flash cache or in memory. When a buffer that was already aged out of memory is accessed again, the system checks the flash cache. If the buffer is found, it reads it back from the flash cache, which takes only a fraction of the time of reading from the disk. The consistency of flash cache buffers across Oracle RAC is maintained in the same way as by Oracle RAC Cache Fusion. Because the flash cache is an extended cache and direct path I/O totally bypasses the buffer cache, this feature does not support direct path I/O. Note that the system does not put dirty buffers in the flash cache, because it might have to read buffers into memory to checkpoint them, and writes to the flash cache do not count toward a checkpoint.

The In-Memory Area is an optional system global area (SGA) component within the database instance. It contains the In-Memory column store (IM column store), which stores tables and partitions in memory by using a columnar format that is optimized for rapid scans. The IM column store enables data to be populated in the SGA simultaneously in the traditional row format (in the buffer cache) and in a columnar format. The database transparently sends online transaction processing (OLTP) queries, such as primary key lookups, to the buffer cache, and analytic and reporting queries to the IM column store. When fetching data, Oracle Database can also read data from both memory areas within the same query. The dual-format architecture does not double memory requirements, because the buffer cache is optimized to run with a much smaller size than the size of the database.
You should populate only the most performance-critical data in the IM column store. To add an object to the IM column store, turn on the INMEMORY attribute for the object when you create or alter it. You can specify this attribute on a tablespace (for all new tables and views in the tablespace), table, partition, subpartition, materialized view, or subset of columns within an object.
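The INMEMORY attribute can be set at several of these levels. A minimal sketch, with illustrative object names:

```sql
-- Hedged sketch: setting INMEMORY at different levels (names are illustrative).
ALTER TABLESPACE users DEFAULT INMEMORY;   -- all new tables in the tablespace
ALTER TABLE sales INMEMORY;                -- a whole table
ALTER TABLE sales INMEMORY NO INMEMORY (comments);  -- exclude one column
ALTER TABLE orders NO INMEMORY;            -- remove a table from the column store
```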
The IM column store manages both data and metadata in optimized storage units, not in traditional Oracle data blocks. An In-Memory Compression Unit (IMCU) is a compressed, read-only storage unit that contains data for one or more columns. A Snapshot Metadata Unit (SMU) contains metadata and transactional information for an associated IMCU. Every IMCU maps to a separate SMU.
The Expression Statistics Store (ESS) is a repository that stores statistics about expression evaluation. The ESS resides in the SGA and also persists on disk. When an IM column store is enabled, the database leverages the ESS for its In-Memory Expressions (IM expressions) feature. An In-Memory Expression Unit (IMEU) is a storage container for materialized IM expressions and user-defined virtual columns. Note that the ESS is independent of the IM column store. The ESS is a permanent component of the database and cannot be disabled.
Conceptually, an IMEU is a logical extension of its parent IMCU. Just as an IMCU can contain multiple columns, an IMEU can contain multiple virtual columns. Every IMEU maps to exactly one IMCU, mapping to the same row set. The IMEU contains expression results for the data that is contained in its associated IMCU. When the IMCU is populated, the associated IMEU is also populated.
A typical IM expression involves one or more columns, possibly with constants, and has a one-to-one mapping with the rows in the table. For example, an IMCU for an EMPLOYEES table might contain rows 1-1000 for the column weekly_salary. For the rows that are stored in this IMCU, the IMEU calculates the automatically detected IM expression weekly_salary*52 and the user-defined virtual column quarterly_salary, defined as weekly_salary*13. The third row down in the IMCU maps to the third row down in the IMEU.
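A user-defined virtual column like the one in this example is declared in the table definition. A minimal sketch, assuming the EMPLOYEES table above already has a weekly_salary column:

```sql
-- Hedged sketch: a user-defined virtual column that the IM column store
-- can materialize in an IMEU (names follow the example above).
ALTER TABLE employees ADD (annual_salary AS (weekly_salary * 52));
```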
The In-Memory Area is subdivided into two pools:
The relative size of the two pools is determined by internal heuristics; the majority of the In-Memory Area memory is allocated to the 1MB pool. The size of the In-Memory Area is controlled by the initialization parameter INMEMORY_SIZE (default 0) and must have a minimum size of 100MB. Starting in Oracle Database 12.2, you can increase the size of the In-Memory Area on the fly by raising the INMEMORY_SIZE parameter with an ALTER SYSTEM command by at least 128MB. Note that it is not possible to shrink the size of the In-Memory Area on the fly.
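Growing the In-Memory Area online might look like this (the target value is illustrative and must exceed the current setting by at least 128MB):

```sql
-- Hedged sketch: grow the In-Memory Area on the fly (requires an SPFILE).
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE=BOTH;
```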
Oracle Database In-Memory has a Base Level feature that allows you to use up to a 16GB column store without triggering any license tracking.
An in-memory table gets IMCUs allocated in the IM column store when the table data is first accessed or at database startup. An in-memory copy of the table is made by doing a conversion from the on-disk format to the new in-memory columnar format. This conversion is done each time the instance restarts because the IM column store copy resides only in memory. When this conversion is done, the in-memory version of the table gradually becomes available for queries. If a table is partially converted, queries can use the partial in-memory version and go to disk for the rest, rather than waiting for the entire table to be converted.
In-memory hybrid scans can access some data from the IM column store and some data from the row store when not all columns in a table have been populated into the IM column store. This improves performance by orders of magnitude over pure row store queries.
Automatic In-Memory enables, populates, evicts, and recompresses segments without user intervention. When the INMEMORY_AUTOMATIC_LEVEL initialization parameter is set to HIGH, the database automatically enables and populates segments based on their usage patterns. This automation helps maximize the number of objects that can be populated into the IM column store at one time.
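Enabling this behavior is a single parameter change. A minimal sketch:

```sql
-- Hedged sketch: let the database manage IM column store contents automatically.
ALTER SYSTEM SET INMEMORY_AUTOMATIC_LEVEL = 'HIGH';
```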
In response to queries and data manipulation language (DML), server processes scan columnar data and update SMU metadata. Background processes populate row data from disk into the IM column store. The in-memory coordinator process (IMCO) initiates the background population and repopulation of columnar data. The space management coordinator process (SMCO) and space management worker processes (Wnnn) do the actual populating and repopulating of data on behalf of IMCO. DML block changes are written to the buffer cache and then to disk. Background processes then repopulate row data from disk into the IM column store based on the metadata invalidations and query requests.
You can enable the In-Memory FastStart feature to write the columnar data in the IM column store back to a tablespace in the database in compressed columnar format. This feature makes database startup faster. Note that this feature does not apply to IMEUs. They are always populated dynamically from the IMCUs.
In-Memory deep vectorization can optimize complex SQL operators by pipelining the physical operators inside each SQL operator and vectorizing them using single instruction, multiple data (SIMD) techniques. This feature is enabled by default, but you can disable it by setting the INMEMORY_DEEP_VECTORIZATION initialization parameter to FALSE.
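Disabling the feature, should you need to, is likewise one parameter change:

```sql
-- Hedged sketch: turn off In-Memory deep vectorization (enabled by default).
ALTER SYSTEM SET INMEMORY_DEEP_VECTORIZATION = FALSE;
```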
In a multitenant container database (CDB), a container is a collection of schemas, objects, and related structures. A pluggable database (PDB) is a user-created container that stores the data and code for an application, such as a human resources application.
From the perspective of a user or application, the PDB appears to be a logically separate, independent database. From the operating system perspective, the CDB is the database.
Every CDB has the following containers:
At a physical level, containers consist of physical data files. Each container has at least one data file. At a logical level, the database allocates data across data files with logical structures called tablespaces.
The CDB also uses several system files during its operation. The control file and online redo log files reside in the CDB and are shared across all PDBs and application containers. Other system files reside outside the CDB.
Note: Non-CDBs were desupported in Oracle Database 21c, which means that the Oracle Universal Installer and Database Configuration Assistant (DBCA) can no longer create non-CDB Oracle Database instances.
An application container is an optional, user-created component within a multitenant container database (CDB). An application container stores data and metadata for pluggable databases (PDBs) that are specific to an application.
In some ways, an application container functions as an application-specific CDB within a CDB. An application container, like the CDB itself, can include multiple application PDBs and enables these PDBs to share data and metadata in the same way that the CDB root shares data and metadata with the seed PDB, user-created PDBs, and application containers that are plugged into the CDB root.
A CDB can include zero or more application containers. An application container consists of exactly one application root and one or more application PDBs, which plug into the application root. The application root belongs to the CDB root. An application root differs from both the CDB root and standard PDB because it can store user-created common objects, which are accessible to the application PDBs that are plugged in to the application root.
An application seed is an optional, user-created PDB within an application container. An application container can have zero or one application seed. An application seed enables you to create application PDBs quickly. It serves the same role within the application container as the seed PDB serves within the CDB itself.
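Creating an application container and its application seed follows the standard CREATE PLUGGABLE DATABASE syntax. A minimal sketch, with illustrative names and credentials:

```sql
-- Hedged sketch: create an application container, then its application seed
-- (container and user names are illustrative).
CREATE PLUGGABLE DATABASE hr_app_root AS APPLICATION CONTAINER
  ADMIN USER app_admin IDENTIFIED BY app_admin_pwd;

ALTER SESSION SET CONTAINER = hr_app_root;

CREATE PLUGGABLE DATABASE AS SEED
  ADMIN USER seed_admin IDENTIFIED BY seed_admin_pwd;
```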
At a physical level, containers consist of physical data files. Each container has at least one data file. At a logical level, the database allocates data across data files with logical structures called tablespaces.
A typical application installs application common users, metadata-linked common objects, and data-linked common objects. For example, you might create multiple sales-related PDBs within one application container, with these PDBs sharing an application back end that consists of a set of common tables and table definitions.
A multitenant container database (CDB) is a collection of schemas and schema objects, such as tables and indexes.
At the physical level, a CDB stores objects in data files. You can use the following mechanisms to store data files:
Operating system file system, which is a structure for storing and retrieving data. Most Oracle databases store files in a file system.
Cluster file system, which is a distributed file system that is a cluster of servers that collaborate to provide consistency and high performance to their clients. In an Oracle Real Application Cluster (Oracle RAC) environment, a cluster file system makes shared storage appear as a single file system that many computers share in a clustered environment.
Oracle Persistent Memory Filestore (PMEM Filestore), which stores database files in PMEM (byte-addressable persistent memory).
Oracle Automatic Storage Management (Oracle ASM).
At the logical level, a CDB allocates data across the data files with the following structures:
A tablespace is a logical storage container for segments. Segments are database objects, such as tables and indexes, that consume storage space.
In a multitenant container database (CDB), each pluggable database (PDB) and application root has its own set of permanent tablespaces, which correspond to physical data files:
Zero or more user-created tablespaces, which contain the data for user-defined schemas and objects in the PDB. You can store user data in the default USERS tablespace or create additional tablespaces for specific application data or system data.
Each container also has a temporary tablespace, which corresponds to physical temp files. One default temporary tablespace exists for the CDB root and for each application root, application PDB, and PDB. Temporary tablespaces contain transient data that persists only for the duration of a session. No permanent schema objects can reside in a temporary tablespace.
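Creating both kinds of tablespace inside a PDB might look like this (file paths and sizes are illustrative):

```sql
-- Hedged sketch: a permanent and a temporary tablespace in a PDB
-- (paths and sizes are illustrative).
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/pdb1/app_data01.dbf' SIZE 100M AUTOEXTEND ON;

CREATE TEMPORARY TABLESPACE app_temp
  TEMPFILE '/u01/oradata/pdb1/app_temp01.dbf' SIZE 100M;
```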
A database schema is a logical container for data structures, called schema objects. Each Oracle Database user account owns a single schema, which has the same name as the user.
Note: The database also stores other types of objects that are not contained in a schema. These objects include database user accounts, roles, contexts, and dictionary objects.
At the physical level, the database stores schema objects in data files. At the logical level, the database allocates data across the data files with tablespaces. There is no relationship between schemas and tablespaces: a tablespace can contain objects from different schemas, and the objects for a schema can be contained in different tablespaces. The data of each object is physically contained in one or more data files.
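The one-schema-per-user rule and the schema/tablespace independence can be seen in a short sketch (names and passwords are illustrative):

```sql
-- Hedged sketch: creating a user creates a schema of the same name;
-- its objects can live in any tablespace the user has quota on.
CREATE USER app_owner IDENTIFIED BY app_owner_pwd
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

CREATE TABLE app_owner.customers (
  customer_id NUMBER PRIMARY KEY,
  name        VARCHAR2(100)
) TABLESPACE users;
```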
Schemas include the following principal types of objects:
Oracle Database uses several database system files that reside on the database server. These are different from data files, which are physical files that belong to database containers.
The following files are required for database startup:
Note: Oracle recommends that you maintain multiple copies of the control files and online redo log files in separate locations to avoid a single point of failure. This is called multiplexing. For details, see Multiple Control Files and Multiple Copies of Online Redo Log Files.
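Multiplexing is configured with standard commands. A minimal sketch, with illustrative paths:

```sql
-- Hedged sketch: multiplex control files and redo log members (paths are
-- illustrative). CONTROL_FILES is static, so a restart is required.
ALTER SYSTEM SET CONTROL_FILES =
  '/u01/oradata/cdb1/control01.ctl',
  '/u02/oradata/cdb1/control02.ctl' SCOPE=SPFILE;

ALTER DATABASE ADD LOGFILE MEMBER
  '/u02/oradata/cdb1/redo01b.log' TO GROUP 1;
```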
The database also uses the following system files, which reside outside the CDB:
The Automatic Diagnostic Repository (ADR) is a system-wide tracing and logging central repository for database diagnostic data. It includes the following items:
A background process trace file name contains the Oracle system identifier (SID), the background process name, and the operating system process number. An example of a background process trace file name is mytest_reco_10355.trc. A server process trace file name contains the SID, the string ora, and the operating system process number. An example of a server process trace file name is mytest_ora_10304.trc.

You can create the following types of backups:
When you use RMAN, you can store the following types of backup formats on disk:
RMAN can also interface with Media Management Library (MML) or System Backup to Tape (SBT) software, which can create backups to tape, Oracle Database Backup Cloud Service, or Zero Data Loss Recovery Appliance (commonly known as Recovery Appliance).
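The backup formats above map to RMAN commands (this block is an RMAN script, run in the RMAN client rather than SQL*Plus):

```sql
# Hedged sketch: common RMAN backup formats.
BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;   # backup sets
BACKUP AS COMPRESSED BACKUPSET DATABASE;        # compressed backup sets
BACKUP AS COPY DATABASE;                        # image copies
```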
The process monitor process (PMON) periodically scans all processes to find any that have terminated abnormally. PMON delegates cleanup work to the cleanup main process (CLMN), which periodically performs cleanup of terminated processes and sessions. CLMN delegates to cleanup worker processes (CLnn).
PMON runs as an operating system process and not as a thread. In addition to database instances, PMON also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle ASM Proxy instances.
The process manager process (PMAN) oversees several background processes including shared servers, pooled servers, and job queue processes. PMAN monitors, spawns, and stops the following types of processes as needed:
PMAN runs as an operating system process and not as a thread. In addition to database instances, PMAN also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle ASM proxy instances.
The listener registration process (LREG) notifies the listeners about instances, services, handlers, and endpoints.
LREG can run as a thread or an operating system process. In addition to database instances, LREG also runs on Oracle Automatic Storage Management (Oracle ASM) instances and Oracle Real Application Clusters (Oracle RAC).
The system monitor process (SMON) performs many database maintenance tasks, including the following:
SMON is resilient to internal and external errors that occur during background activities. SMON can run as a thread or an operating system process. In an Oracle Real Application Clusters (Oracle RAC) database, the SMON process of one instance can perform instance recovery for other instances that have failed.
The database writer process (DBWn) reads the database buffer cache and writes modified buffers to data files. It also handles checkpoints, file open synchronization, and logging of Block Written records. DBWn also reads the Database Smart Flash Cache (Flash Cache) when Flash Cache is configured.
In many cases the blocks that DBWn writes are scattered throughout the disk, so the writes tend to be slower than the sequential writes performed by the log writer process (LGWR). DBWn performs multiblock writes when possible to improve efficiency. The number of blocks that are written in a multiblock write varies by operating system.
The DB_WRITER_PROCESSES initialization parameter specifies the number of database writer processes. There can be 1 to 100 database writer processes. The names of the first 36 database writer processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th database writer processes are BW36-BW99. The database selects an appropriate default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified setting based on the number of CPUs and processor groups.
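Adjusting the writer count is a static parameter change. A minimal sketch, with an illustrative value:

```sql
-- Hedged sketch: DB_WRITER_PROCESSES is static, so the new value
-- takes effect at the next instance restart.
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE=SPFILE;
```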
The checkpoint process (CKPT), at specific times, starts a checkpoint request by triggering the database writer process (DBWn) to read the database buffer cache and write modified buffers to data files. On completion of individual checkpoint requests, CKPT updates data file headers and control files to record the most recent checkpoint.
CKPT checks every three seconds to see whether the amount of PGA memory in use exceeds the value of the PGA_AGGREGATE_LIMIT initialization parameter, and if so, takes action.
CKPT can run as a thread or an operating system process. In addition to database instances, CKPT also runs on Oracle Automatic Storage Management (Oracle ASM) instances.
The manageability monitor process (MMON) and manageability monitor lite process (MMNL) perform tasks related to the Automatic Workload Repository (AWR). The AWR is a repository of historical performance data that includes cumulative statistics for the system, sessions, individual SQL statements, segments, and services. It provides problem detection and self-tuning. The AWR resides in the SYSAUX tablespace. AWR reports can be generated in the CDB root or in any PDB. For more information about how the AWR works in a multitenant container database (CDB), see About Using Manageability Features in a CDB.
MMON gathers memory statistics from the system global area (SGA), filters them, and creates snapshots of those statistics in the AWR every 60 minutes (or another interval that you choose). It also performs Automatic Database Diagnostic Monitor (ADDM) analysis and issues alerts for metrics that exceed their threshold values.
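The snapshot interval and retention that MMON uses are adjustable through the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, with illustrative values:

```sql
-- Hedged sketch: take AWR snapshots every 30 minutes and keep 15 days
-- of history (both values are in minutes and are illustrative).
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 15 * 1440);
END;
/
```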
MMNL gathers session statistics (such as the user ID, state, the machine, and the SQL that it's processing) and stores them in the active session history (ASH) buffer, which is part of the shared pool in the SGA. Specifically, MMNL samples the V$SESSION and V$SESSION_WAIT views every second and then records that data in the V$ACTIVE_SESSION_HISTORY view in the ASH buffer. MMNL doesn't sample inactive sessions. The ASH is a rolling buffer in memory, so newer information overwrites earlier information when needed. When the ASH buffer becomes full or when MMON takes a snapshot, MMNL flushes (empties) the ASH buffer into the DBA_HIST_ACTIVE_SESS_HISTORY view in the AWR. Because space is expensive, MMNL flushes only one in every 10 entries. MMNL also computes metrics.
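The two-tier ASH storage described above can be seen directly. A query against the in-memory buffer returns one row per active session per sample, while the AWR view holds the subset that MMNL persisted:

```sql
-- Recent in-memory samples (one row per active session per second)
SELECT sample_time, session_id, session_state, sql_id, event
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE;

-- Persisted samples (roughly one in ten, flushed by MMNL into the AWR)
SELECT sample_time, session_id, sql_id, event
FROM   dba_hist_active_sess_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR;
```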
Both MMON and MMNL can run as threads or as operating system processes. In addition to database instances, MMON and MMNL also run on Oracle Automatic Storage Management (Oracle ASM) instances.
The recoverer process (RECO) resolves distributed transactions that are pending because of a network or system failure in a distributed database system, which is a set of database servers that can appear to applications as a single data source.
The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.
RECO can run as a thread or as an operating system process.
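The pending transaction table that RECO cleans up is visible to a DBA. As a sketch, the query below lists in-doubt transactions awaiting resolution; the forced-outcome statements are shown commented out with a placeholder transaction ID, since forcing an outcome should be a last resort when RECO cannot reconnect:

```sql
-- In-doubt distributed transactions awaiting resolution by RECO
SELECT local_tran_id, global_tran_id, state, mixed
FROM   dba_2pc_pending;

-- If RECO cannot reach the remote database, a DBA can force the outcome
-- (the transaction ID below is a placeholder)
-- COMMIT FORCE '1.21.17';
-- ROLLBACK FORCE '1.21.17';
```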
Server processes write changes to data blocks in the database buffer cache and write redo data into the redo log buffer. The log writer process (LGWR) writes redo log entries sequentially from the redo log buffer to the online redo log. If the database has a multiplexed redo log, then LGWR writes the same redo log entries to all members of a redo log file group.
LGWR handles operations that are very fast or that must be coordinated. It delegates operations that could benefit from concurrency to the log writer worker processes (LGnn), which are numbered LG00-LG99. These operations include writing the redo from the log buffer to the redo log file and posting the completed write to the server process that is waiting.
The redo transport worker processes (TTnn), numbered TT00-TTzz, ship redo from the current online and standby redo logs to remote standby destinations that are configured for asynchronous (ASYNC) redo transport.
LGWR can run as a thread or as an operating system process. In addition to database instances, LGWR also runs on Oracle Automatic Storage Management (Oracle ASM) instances. Each database instance in an Oracle Real Application Clusters (Oracle RAC) configuration has its own set of redo log files.
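The redo log groups that LGWR writes, and the members within each group, can be inspected directly. With a multiplexed log, V$LOGFILE shows several members per group:

```sql
-- Redo log groups and their status (CURRENT is the group LGWR is writing)
SELECT group#, thread#, sequence#, members, status
FROM   v$log;

-- Individual members; a multiplexed log shows multiple rows per group
SELECT group#, member
FROM   v$logfile
ORDER  BY group#;
```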
The archiver processes (ARCn) exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled, in which case ARCn automatically archives online redo log files. The log writer process (LGWR) cannot reuse and overwrite an online redo log group until it has been archived.
The database starts multiple ARCn processes as needed to ensure that the archiving of filled online redo logs does not fall behind. Possible processes include ARC0-ARC9 and ARCa-ARCt (30 possible processes).
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes. If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can increase the maximum number of ARCn processes. There can also be multiple archive log destinations. Oracle recommends at least one ARCn process for each destination.
ARCn can run as a thread or as an operating system process.
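As a sketch of tuning archiving for a bulk-load window, the statements below confirm the log mode, raise the ARCn ceiling, and list the archiver processes that are currently running (the value 8 is illustrative):

```sql
-- Confirm the database is in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- Raise the ceiling on concurrent ARCn processes for a heavy archiving workload
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 8;

-- Observe the archiver processes that are currently active
SELECT process, status, log_sequence
FROM   v$archive_processes
WHERE  status <> 'STOPPED';
```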
The job queue coordinator process (CJQ0) selects jobs that need to be run from the data dictionary and spawns job queue worker processes (Jnnn) to run the jobs. Oracle Scheduler automatically starts and stops CJQ0 as needed. The JOB_QUEUE_PROCESSES initialization parameter specifies the maximum number of processes that can be created for running jobs. CJQ0 starts only as many job queue processes as are required by the number of jobs to run and the available resources.
Jnnn processes run jobs that the job coordinator assigns. When a worker picks up a job, it gathers the metadata needed to run it (such as program arguments and privilege information), starts a database session as the owner of the job, starts a transaction, and then runs the job. When the job is done, the worker commits and ends the transaction, and closes the session.
Both CJQ0 and Jnnn can run as threads or as operating system processes.
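A minimal sketch of a job that CJQ0 would hand to a Jnnn worker is shown below; the job name, schedule, and (deliberately empty) PL/SQL action are invented for illustration:

```sql
-- Create an illustrative nightly job; CJQ0 assigns it to a Jnnn worker
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DEMO_NIGHTLY_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/

-- Past runs of the job are recorded here
SELECT job_name, status, actual_start_date
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'DEMO_NIGHTLY_JOB';
```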
When you use Flashback Database, the recovery writer process (RVWR) reads flashback data from the flashback buffer in the system global area (SGA) and writes it to the flashback logs. Flashback Database uses these logs to rewind the database from its current state to a time in the past, provided that you have the required flashback logs.
RVWR can run as a thread or as an operating system process.
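As a sketch, enabling Flashback Database is what starts RVWR; it assumes the database is in ARCHIVELOG mode with a fast recovery area configured. The rewind statement is shown commented out because it is run against a mounted (not open) database:

```sql
-- Enable Flashback Database; this starts the RVWR process
ALTER DATABASE FLASHBACK ON;

-- Later, rewind the whole database to an earlier point in time
-- (run against a mounted instance, then open with RESETLOGS)
-- FLASHBACK DATABASE TO TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;
```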
When you use Flashback Database, the flashback data archiver process (FBDA) tracks and stores transactional changes to a table over its lifetime. This way, you can flash back tables to restore them to the way they were at a time in the past.
When a transaction that modifies a tracked table commits, FBDA reads undo blocks in the database buffer cache and undo segments in data files. Then it filters what is relevant to objects that are marked for archival and copies that undo information into the flashback data archive (tablespace) in the data files. FBDA maintains metadata on the current rows and tracks how much data has been archived.
FBDA automatically manages the flashback data archive for space, organization (partitioning tablespaces), and retention. Additionally, FBDA keeps track of how far the archiving of tracked transactions has progressed.
FBDA can run as a thread or as an operating system process.
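A minimal sketch of putting FBDA to work follows; the archive name, tablespace, and table are invented for illustration (hr.employees is the standard sample schema):

```sql
-- Create a flashback data archive in an existing tablespace
CREATE FLASHBACK ARCHIVE fda_one
  TABLESPACE fda_ts
  QUOTA 10G
  RETENTION 1 YEAR;

-- Mark a table for tracking; FBDA then archives its committed changes
ALTER TABLE hr.employees FLASHBACK ARCHIVE fda_one;

-- Query the table as it was in the past
SELECT *
FROM   hr.employees
AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' DAY;
```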
The optional space management coordinator process (SMCO) schedules various space management tasks, including proactive space allocation and space reclamation. SMCO dynamically spawns space management worker processes (Wnnn) to implement these tasks. After starting, the worker acts as an autonomous agent. After it finishes a task, the worker process automatically picks up another task from the queue. The process terminates itself after being idle for a long time.
Wnnn worker processes (named W001, W002, and so on) perform tasks on behalf of Space Management and the Oracle Database In-Memory option.
For Space Management, Wnnn processes perform space management tasks in the background, including the following:
For the Oracle Database In-Memory option, Wnnn processes perform the following tasks:
Both SMCO and Wnnn can run as threads or as operating system processes.
In a shared server environment, a dispatcher process (Dnnn) directs multiple incoming network session requests to a pool of shared server processes (Snnn). You can create multiple dispatcher processes for a single database instance.
The Oracle Net Listener process establishes a connection to the dispatcher. When a client process makes a connection request that requires a shared server process, the listener returns the dispatcher address so the client process can communicate with the dispatcher directly after the connection is established.
The dispatcher places the client request in the request queue in the large pool of the system global area (SGA) within the database instance. The next available shared server process picks up and processes the request. When the request is complete, the shared server process places the response on the calling dispatcher's response queue in the large pool. The dispatcher retrieves the response from its queue and returns the completed request to the client process.
Both Snnn and Dnnn can run as threads or as operating system processes. Both exist only when the database instance is configured for shared server.
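As a sketch of configuring the pieces described above (the counts are illustrative), the DISPATCHERS and SHARED_SERVERS parameters start the Dnnn and Snnn processes, which can then be observed through dynamic performance views:

```sql
-- Start three dispatchers for TCP connections and a pool of shared servers
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';
ALTER SYSTEM SET SHARED_SERVERS = 5;

-- Observe dispatcher and shared server activity
SELECT name, status FROM v$dispatcher;
SELECT name, status, requests FROM v$shared_server;
```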