The large pool is an optional memory area within the system global area (SGA) of the database instance. You can configure the large pool to provide large memory allocations for the following areas:
- The user global area (UGA) (session memory) for the shared server environment and the Oracle XA interface (which is used where transactions interact with multiple databases). In a dedicated server environment, the UGA is stored in the program global area (PGA).
- The I/O buffer area includes I/O server processes, message buffers for parallel query operations, buffers for Recovery Manager (RMAN) I/O workers, and advanced queuing memory table storage.
- The deferred inserts pool is used by the fast ingest feature, which enables high-frequency, single-row data inserts into tables that are defined as MEMOPTIMIZE FOR WRITE (a SQL sketch follows this list). Inserts performed by fast ingest are also called deferred inserts. They are initially buffered in the large pool and later written to disk asynchronously by the Space Management Coordinator (SMCO) and Wnnn worker background processes, after 1 MB worth of writes per session per object or after 60 seconds. Sessions cannot read any data that is buffered in this pool, even when committed, until the SMCO background process sweeps it to disk. The pool is initialized in the large pool when the first row is inserted into a memoptimized table. 2 GB is allocated from the large pool when there is enough space. If there is not enough space in the large pool, an ORA-4031 error is detected internally and automatically cleared, and the allocation is retried with half the requested size. If there is still not enough space, the allocation is retried with 512 MB and then 256 MB, after which the feature is disabled until the instance is restarted. Once the pool is initialized, its size remains static; it cannot grow or shrink.
- The parallel execution message pool (PX msg pool) is used by parallel query execution processes to send messages to each other.
- Free memory
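As a rough sketch of how fast ingest is used, the statements below create a hypothetical sensor_readings table with the MEMOPTIMIZE FOR WRITE clause and perform a deferred insert with the MEMOPTIMIZE_WRITE hint; the table name and columns are illustrative, and exact syntax and session-level alternatives to the hint vary by release.

```sql
-- Hypothetical table enabled for fast ingest (deferred inserts).
CREATE TABLE sensor_readings (
  sensor_id  NUMBER,
  reading_ts TIMESTAMP,
  value      NUMBER
) SEGMENT CREATION IMMEDIATE
  MEMOPTIMIZE FOR WRITE;

-- The MEMOPTIMIZE_WRITE hint requests a deferred insert: the row is buffered
-- in the large pool and written to disk later by the SMCO/Wnnn processes,
-- so it is not immediately visible to queries, even after COMMIT.
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (42, SYSTIMESTAMP, 98.6);
COMMIT;
```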
The large pool is different from reserved space in the shared pool, which uses the same least recently used (LRU) list as other memory allocated from the shared pool. The large pool does not have an LRU list: pieces of memory are allocated and cannot be freed until they are no longer in use.
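To see how the large pool is currently divided among these areas, you can query V$SGASTAT; the 'PX msg pool' row, for example, corresponds to the parallel execution message pool described above. The parameter check below is a minimal sketch and returns the manually set value, which may be zero when the pool is automatically sized.

```sql
-- Current allocations within the large pool (names and sizes vary by workload).
SELECT name, bytes
FROM   v$sgastat
WHERE  pool = 'large pool'
ORDER  BY bytes DESC;

-- Manually configured large pool size, if any (0 when automatically managed).
SELECT value
FROM   v$parameter
WHERE  name = 'large_pool_size';
```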
A request from a user is a single API call that is part of the user's SQL statement.
In a dedicated server environment, one dedicated server process handles requests for a single client process. Each server process uses system resources, including CPU cycles and memory.
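As a quick way to see where session memory lives, the following sketch compares the UGA and PGA memory statistics for the current session using the standard V$MYSTAT and V$STATNAME views; in a dedicated server connection the UGA is accounted for within the PGA, whereas under shared servers it is allocated from the SGA (the large pool, when configured).

```sql
-- UGA and PGA memory used by the current session.
SELECT sn.name, ms.value AS bytes
FROM   v$mystat ms
       JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('session uga memory', 'session pga memory');
```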
In a shared server environment, the following actions occur:
- A client process sends a request to the database instance, and a dispatcher process (Dnnn) receives that request.
- The dispatcher places the request in the request queue in the large pool.
- The next available shared server process (Snnn) picks up the request. The shared server processes check the common request queue for new requests, picking up new requests on a first-in-first-out basis. One shared server process picks up one request in the queue.
- The shared server process makes all the necessary calls to the database to complete the request. First, the shared server process accesses the library cache in the shared pool to verify the requested items; for example, it checks whether the table exists and whether the user has the correct privileges. Next, the shared server process accesses the buffer cache to retrieve the data, reading from disk if the data is not in the cache. A different shared server process can handle each database call, so the requests to parse a query, fetch the first row, fetch the next row, and close the result set may each be processed by a different shared server process. Because any shared server process may handle any session's database call, the UGA, which contains information about each client session, must reside in shared memory where every shared server process can access it.
- After the request is completed, a shared server process places the response on the calling dispatcher's response queue in the large pool. Each dispatcher has its own response queue.
- The dispatcher picks up the response from its response queue.
- The dispatcher returns the completed request to the appropriate client process.
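The shared server architecture above is controlled by initialization parameters and can be observed through dynamic performance views. The following is a sketch with illustrative values only; it assumes the ALTER SYSTEM privilege and a TCP listener configuration.

```sql
-- Start a pool of shared servers and a TCP dispatcher (values are examples).
ALTER SYSTEM SET SHARED_SERVERS = 5;
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';

-- Dispatchers (Dnnn), shared servers (Snnn), and the request/response queues.
SELECT name, status, messages, busy FROM v$dispatcher;
SELECT name, status, requests FROM v$shared_server;
SELECT type, queued, wait, totalq FROM v$queue;
```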