6 Performance and High Availability

ACFS Cluster File System

AutoShrink for ACFS

Oracle Advanced Cluster File System (Oracle ACFS) automatic shrinking automatically shrinks an Oracle ACFS file system based on policy, provided there is enough free storage in the volume.

The benefit is an optimization in performance and space utilization, ensuring that the data migration and associated locking do not cause delays or timeouts for running workloads.

Mixed Sector Support

Oracle ACFS mixed sector support enables the Linux primary and accelerator volumes of an Oracle ACFS file system to use a mix of different logical sector sizes, such as 512 bytes and 4096 bytes.

The benefit is flexibility for storage configuration, enabling primary and accelerator volumes to have different logical sector sizes on Linux operating systems.

Oracle ACFS File Based Snapshots

Oracle Advanced Cluster File System (Oracle ACFS) file-based snapshots provide the ability to create snapshots of individual Oracle ACFS files in a space-efficient manner on Linux.

The benefit is storage efficiency because you only snapshot specific files, not all files in the file system. Example use cases are for virtual machine (VM) image files and PDB snapshot copies.

Replication Unplanned Failover

Oracle ACFS replication failover provides unplanned failover, where the standby location assumes the role of the primary in case of failure. When a failure occurs, the standby attempts to contact the primary and, in the absence of a response, assumes the primary role. On recovery of the former primary, the former primary becomes the new standby.

The benefit is faster recovery in the event of unplanned downtime for Oracle ACFS replication.

Application Continuity

Application Continuity Protection Check

Application Continuity Protection Check (ACCHK) provides guidance on the level of protection for each application that uses Application Continuity and helps you increase the protection level, if required.

ACCHK identifies which application configuration is protected to help you make an informed decision about which configuration to use for maximum protection or how to increase protection level for an application configuration. ACCHK also provides diagnostics for an unsuccessful failover.

Session Migration

Oracle Database invokes planned failover at points where the database knows that it can replay the session using Application Continuity and that the session is not expected to drain.

Session Migration is an automatic solution that Oracle Database uses for relocating sessions during planned maintenance for batch and long-running operations that are not expected to complete in the specified drain timeout period.

Reset Session State

The reset session state feature clears the session state set by the application when the request ends. The RESET_STATE database service attribute cleans up dirty sessions so that applications that use these sessions after cleanup cannot see the previous session state.

The RESET_STATE feature enables you to clean the session state at the end of each request, so that database developers do not have to clean the session state manually. You can reset the session state of applications that are stateless between requests.

Transparent Application Continuity

In this release, the following new features are provided for Transparent Application Continuity:

  • Oracle Database clients use implicit request boundaries when connecting to the database using a service that has the attribute FAILOVER_TYPE set to AUTO.
  • Planned failover is introduced: failover that is forced by the Oracle database at points where the database determines that the session can be failed over and is unlikely to drain. This feature is also available for Application Continuity.
  • There is also a new service attribute, RESET_STATE. The resetting of state is an important feature that clears the session state set by the application in a request at the end of the request. This feature is also available for Application Continuity.

This improvement increases coverage of Transparent Application Continuity for applications that do not use an Oracle-supplied connection pool. Planned failover is used for shedding sessions during planned maintenance and for load rebalancing. Without RESET_STATE, application developers need to cancel their cursors and clear any session state that has been set.
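As an illustration, these attributes can be set on a service with the DBMS_SERVICE package; a minimal sketch, where the service name sales and the attribute values shown (including the RESET_STATE key and LEVEL1 value) are illustrative assumptions rather than values taken from this document:

DECLARE
  params dbms_service.svc_parameter_array;
BEGIN
  params('FAILOVER_TYPE')  := 'AUTO';    -- implicit request boundaries (TAC)
  params('COMMIT_OUTCOME') := 'true';    -- preserve commit outcome across failover
  params('RESET_STATE')    := 'LEVEL1';  -- clear session state at end of request (assumed value)
  DBMS_SERVICE.MODIFY_SERVICE('sales', params);
END;
/

In an Oracle RAC or Grid Infrastructure deployment, the equivalent service attributes are typically managed with srvctl rather than DBMS_SERVICE.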

Transparent Application Continuity in the Oracle Cloud

Transparent Application Continuity is enabled by default in an Oracle Cloud environment. Enabling Transparent Application Continuity by default in the Oracle Cloud improves runtime performance and planned failover, and provides broader application coverage.

Transparent Application Continuity reduces overhead, reduces resource consumption, and broadens replay capabilities, so that database requests can replay in Oracle Cloud. Transparent Application Continuity also ensures Continuous Availability for applications working with databases in the Oracle Cloud.

Automatic Operations

Automatic Indexing Enhancements

Automatic indexing considers more cases for potential indexes and allows inclusion or exclusion of specific tables. An enhancement has been introduced to reduce the overhead of cursor invalidations when a new automatic index is created.

The enhancements increase the number of cases where automatic indexes improve query performance.
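Automatic indexing is configured through the DBMS_AUTO_INDEX package; a hedged sketch, where the schema and table names are hypothetical and the AUTO_INDEX_TABLE parameter name for the new table-level inclusion/exclusion is an assumption:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');        -- create and use automatic indexes
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'SH', TRUE);       -- limit candidates to the SH schema
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_TABLE', 'SH.SALES', FALSE); -- exclude one table (assumed parameter name)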

Automatic Index Optimization

ADO Policies for Indexes extends existing Automatic Data Optimization (ADO) functionality to provide compression and optimization capability on indexes. Customers of Oracle Database are interested in leveraging compression tiering and storage tiering to satisfy their Information Lifecycle Management (ILM) requirements. The existing ADO functionality enables you to set policies that enforce compression tiering and storage tiering for data tables and partitions automatically, with minimal user intervention.

In a database, indexes can also consume a significant amount of database space. Reducing the space requirement for indexes, without sacrificing performance, requires ILM actions similar to the existing Automatic Data Optimization feature for data segments. Using this new Index compression and optimization capability, the same ADO infrastructure can also automatically optimize indexes. Similar to ADO for data segments, this automatic index compression and optimization capability achieves ILM on indexes by enabling you to set policies that automatically optimize indexes through actions like compressing, shrinking, and rebuilding indexes.
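A hedged sketch of what such a policy might look like, assuming the documented ILM policy syntax extends to indexes with the OPTIMIZE action; the index name is hypothetical, and ADO relies on Heat Map statistics being enabled:

ALTER SYSTEM SET HEAT_MAP = ON;  -- ADO policies depend on Heat Map usage tracking
ALTER INDEX sh.sales_pk ILM ADD POLICY OPTIMIZE AFTER 30 DAYS OF NO MODIFICATION;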

Automatic Materialized Views

Materialized views offer the potential to improve query performance significantly, but considerable effort and skill is required to identify what materialized views to use. The database now incorporates workload monitoring to establish what materialized views are needed. Based on the decisions it makes, materialized views and materialized view logs are created and maintained automatically without any manual interaction.

Automatic materialized views improve application performance transparently without the need for any user action or management overhead.
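Automatic materialized views are controlled through the DBMS_AUTO_MV package; a minimal sketch, assuming the AUTO_MV_MODE parameter and its IMPLEMENT and REPORT ONLY values:

EXEC DBMS_AUTO_MV.CONFIGURE('AUTO_MV_MODE', 'IMPLEMENT');    -- create and maintain MVs automatically
EXEC DBMS_AUTO_MV.CONFIGURE('AUTO_MV_MODE', 'REPORT ONLY');  -- analyze and report only, no MV creation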

Automatic SQL Tuning Set

The automatic SQL tuning set (ASTS) is a system-maintained record of SQL execution plans and SQL statement performance metrics seen by the database. Over time, the ASTS will include examples of all queries seen on the system.

SQL plans and metrics stored in ASTS are useful for repairing SQL performance regressions quickly using SQL plan management.

ASTS enables SQL performance regressions to be resolved quickly and with minimal manual intervention. It is complementary to the Automatic Workload Repository (AWR) and is considered a similar core manageability infrastructure of Oracle Database.
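Because the ASTS is an ordinary SQL tuning set (named SYS_AUTO_STS), it can be inspected with the usual dictionary views; a small illustrative query:

SELECT sql_id, plan_hash_value, elapsed_time, buffer_gets
FROM   dba_sqlset_statements
WHERE  sqlset_name = 'SYS_AUTO_STS'
ORDER  BY elapsed_time DESC
FETCH  FIRST 10 ROWS ONLY;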

Automatic Temporary Tablespace Shrink

A temporary tablespace can grow to a very large size due to spikes in temp usage by queries. Sorts, hash joins, and query transformations are examples that might cause high temp usage. Automatic Temporary Tablespace Shrink is a feature that shrinks the temporary tablespace in the background when temp usage has subsided. In addition, if temp usage is increasing, the feature preemptively grows the temporary tablespace to ensure that query performance is not impacted.

This feature alleviates the need for a DBA to manually size the temporary tablespace. Two scenarios benefit:

  • The DBA does not need to manually shrink the temporary tablespace to reclaim unused space.
  • The DBA does not need to manually grow the temporary tablespace in anticipation of high temp usage.

Automatic Undo Tablespace Shrink

An undo tablespace can grow to a very large size, and that space may never be needed again. Previously there was no automated way to recover that space, and limits have been placed on the size of the undo tablespace in some environments because space cannot be easily reclaimed; this can prevent large transactions from running successfully. Automatic Undo Tablespace Shrink is a feature that shrinks the undo tablespace in the background by dropping expired undo segments (whether manually or automatically configured) and their corresponding extents, and then performing a datafile shrink if possible. A datafile shrink is not guaranteed; it depends on where the allocated extents are located. Even if a datafile shrink is not possible, releasing undo extents back to the undo tablespace enables the reuse of space by other undo segments.

This feature reduces space requirements and the need for manual intervention:

  • It automatically recovers space used by transactions that are no longer active.
  • It removes the need to restrict the size of the undo tablespace, allowing larger transactions to run with the ability to recover the space once the transaction undo expires.

Automatic Zone Maps

Automatic zone maps are created and maintained for any user table without any customer intervention. Zone maps allow the pruning of block ranges and partitions based on the predicates in the queries. Automatic zone maps are maintained for direct loads, and are maintained and refreshed for any other DML operation incrementally and periodically in the background.

Automatic zone maps improve the performance of queries transparently and automatically without management overhead.
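A minimal sketch of turning the feature on, assuming the DBMS_AUTO_ZONEMAP package and its AUTO_ZONEMAP_MODE parameter (automatic zone maps are an Oracle Exadata feature):

EXEC DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE', 'ON');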

Object Activity Tracking System

Object Activity Tracking System (OATS) tracks the usage of various types of database objects. Usage includes operations such as access, data manipulation, or refresh.

Automated tracking of how database objects are being used enables customers to gain a better insight into how applications are querying and manipulating the database and its objects. Internal clients such as Access Advisors or Automatic Materialized Views leverage and benefit from OATS as well.

Sequence Dynamic Cache Resizing

With dynamic cache resizing, the sequence cache size is now auto-tuned based on the rate of consumption of sequence values. This means the cache can automatically grow and shrink over time, depending on usage, while never falling below the DDL-specified cache size. By dynamically resizing the cache, performance can be improved significantly, especially for fast insert workloads on Oracle RAC, by reducing the number of trips to disk needed to replenish the cache.

Dynamic cache resizing can improve performance significantly for fast insert workloads that use sequences. This is accomplished by reducing the number of trips to disk needed to replenish the sequence cache and can be especially significant in an Oracle RAC environment.
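For example, with the following (hypothetical) sequence, the runtime cache can now grow well beyond 100 under heavy concurrent inserts and shrink back as demand subsides, but it never drops below the DDL-specified value of 100:

CREATE SEQUENCE order_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 100;  -- acts as the floor for the auto-tuned cache size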

Automatic Storage Manager (ASM)

Enable ASMCA to Configure Flex ASM on an Existing NAS Configuration

This feature enables you to install Oracle Flex ASM on a configuration previously set up on network file system (NFS) storage. In particular, Oracle ASM Configuration Assistant (ASMCA) can be run in silent mode to configure Oracle ASM after an Oracle Clusterware installation has been performed using network attached storage (NAS) for the Oracle Cluster Registry (OCR) and voting disks.

The business value of this feature is that it provides an easy way for you to transition from NFS storage to Oracle ASM-managed storage. Without this feature, you would have to do a complete fresh installation and move all databases.

Enhanced Double Parity Protection for Flex and Extended Disk Groups

This feature provides support for double parity protection for write-once files in an Oracle Automatic Storage Management (Oracle ASM) Flex Disk Group.

With this feature you can use double parity protection for write-once files in an Oracle ASM Flex Disk Group. Double parity protection provides greater protection against multiple hardware failures. A previous release of Oracle ASM provided simple parity protection for write-once files in a Flex Disk Group. Write-once files include files such as database backup sets and archive logs. The benefit of parity protection as compared to conventional mirroring is that it reduces storage overhead, but with a slight increase in the risk of data loss after an event involving multiple hardware failures.

File Group Templates

With file group templates you can customize and set default file group properties for automatically created file groups, enabling you to customize file group properties that are inherited by a number of databases.

Without file group templates, if you wanted to change properties for an automatically created file group, you would have to manually change the properties after the associated files are created, which triggers an unnecessary rebalance. The file group templates feature provides a much better option.

Oracle ASM Flex Disk Group Support for Cloning a PDB in One CDB to a New PDB in a Different CDB

Previously, point-in-time database clones could only clone a pluggable database (PDB) in a multitenant container database (CDB) to a new PDB in the same CDB. This restriction is removed as part of this feature. Now, you can clone a PDB in a CDB to a new PDB in a different CDB.

This feature enables you to use Oracle ASM cloning for test and development cloning where the cloned PDB must be in a separate CDB.

Autonomous Health Framework

Enhanced Support for Oracle Exadata

The ability of Oracle Cluster Health Advisor to detect performance and availability issues on Oracle Exadata systems has been improved in this release with the addition of Exadata specific models.

Oracle Cluster Health Advisor detects performance and availability issues using Oracle Database and node models that were developed using machine learning.

With the improved detection of performance and availability issues on Oracle Exadata systems, Oracle Cluster Health Advisor helps to improve Oracle Database availability and performance. The new Exadata-specific models are automatically loaded when Oracle Cluster Health Advisor runs on Oracle Exadata engineered systems.
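Diagnostic findings can be queried with the chactl command-line interface; the database name and time window below are illustrative:

chactl query diagnosis -db sales -start "2021-07-01 01:00:00" -end "2021-07-01 02:00:00"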

Oracle Cluster Health Advisor Support for Solaris

Oracle Cluster Health Advisor supports Oracle Real Application Clusters (Oracle RAC) deployments on Oracle Solaris.

With the Oracle Cluster Health Advisor support for Oracle Solaris, you can now get early detection and prevention of performance and availability issues in your Oracle RAC database deployments.

Oracle Cluster Health Monitor Local Mode Support

You can now configure Oracle Cluster Health Monitor to operate in local mode to report the operating system metrics using the oclumon dumpnodeview command even if you have not deployed Grid Infrastructure Management Repository (GIMR).

In local mode, you can get only the local node data. Oracle Cluster Health Monitor functionality is limited in local mode for deployments where the Grid Infrastructure Management Repository (GIMR) is not installed. In earlier releases, Oracle Cluster Health Monitor required the GIMR to report the operating system metrics using the oclumon dumpnodeview command.

Oracle ORAchk and EXAchk Support for REST API

In this release, support for REST API adds a remote interface to the existing Oracle ORAchk and Oracle EXAchk command-line interfaces (CLI).

You can manage Oracle software deployments remotely from centralized consoles and web interfaces. By supporting the REST interfaces, Oracle ORAchk and Oracle EXAchk integrate into these applications and help support fleet or cloud management.

Oracle Trace File Analyzer Real-Time Health Summary

Oracle Trace File Analyzer generates a real-time health summary report, which shows performance degradation due to faults and workload issues.

Similar to the status scorecard of the deployment configurations that Oracle ORAchk and Oracle EXAchk generate, Oracle Trace File Analyzer also provides a readily consumable and trackable scoring for operational status. The health summary consists of scores in the categories of availability, health, workload, and capacity broken down from cluster-wide through the database, instance, service, and hardware resource.

Oracle Trace File Analyzer Support for Efficient Multiple Service Request Data Collections

Oracle Trace File Analyzer collects multiple Service Request Data Collections into a single collection even if it detects multiple issues or errors at the same time.

Service Request Data Collection mode of operation enables you to collect only the log and trace files that are required for diagnosing a specific type of problem. Even with this optimization, Oracle Trace File Analyzer previously collected the same subset of files multiple times if it detected multiple issues or errors at the same time. The enhancement consolidates multiple Service Request Data Collections into a single collection and thus removes duplication.

It is essential to collect log and trace files upon detection of issues before the files are rotated or purged. However, collecting log and trace files involves resource overhead, which may be critically low due to these issues. The enhancement in this release reduces the resource overhead and disk space needed at a critical time.

Remote GIMR Support for Oracle Standalone Clusters

The remote Grid Infrastructure Management Repository (GIMR) feature for Oracle Standalone Cluster enables you to use a centralized GIMR. This feature does not require local cluster resources to host the GIMR.

The remote GIMR feature provides access to a persistent data store that significantly enhances the proactive diagnostic functionality of Cluster Health Monitor, Cluster Health Advisor, and Autonomous Health Framework clients. The remote GIMR feature saves cost by freeing up local resources and licensed database server resources.

Support for Automatically Enabling Oracle Database Quality of Service (QoS) Management

Oracle Database Quality of Service (QoS) Management automatically configures a default policy set based upon the services it discovers and begins monitoring in measurement mode.

With this implementation, the workload performance data is always available to you and other Oracle Autonomous Health Framework components.

If you do not have Oracle Enterprise Manager deployed to monitor Oracle Database clusters, then you previously could not utilize the functionality of Oracle Database QoS Management, because it could be enabled only with Enterprise Manager. With automatic monitoring, you can now take advantage of the rich set of workload data provided.

In conjunction with the new REST APIs, you can integrate the advanced Oracle Database QoS Management modes into your management systems. In earlier releases, you had to configure the monitoring functionality of Oracle Database QoS Management and enable it with Enterprise Manager.

Support for Deploying Grid Infrastructure Management Repository (GIMR) into a Separate Oracle Home

Starting with this release of Oracle Grid Infrastructure, you must configure the Grid Infrastructure Management Repository (GIMR) in a separate Oracle home, instead of in the Grid home. This option is available when you configure GIMR during a fresh Oracle Grid Infrastructure installation or you add a GIMR to an existing deployment. It is mandatory to configure GIMR in a separate Oracle home when you upgrade Oracle Grid infrastructure with an existing GIMR deployed in it.

A separate Oracle home for the GIMR ensures faster rolling upgrades, fewer errors, and fewer rollback situations. The Oracle Grid Infrastructure installation owner user must own the GIMR home.

Clusterware

Clusterware REST API

The Clusterware REST APIs enable customers to remotely execute commands on their cluster and to monitor the execution, including output, error codes, and time to execute. Support is provided for existing Oracle Clusterware command line interfaces.

REST API-based management, built on the well-known Oracle Clusterware command line interfaces, simplifies cluster management whether in the Oracle Cloud, at remote physical locations, or provisioned locally.

Common Data Model Across Fleet Patching and Provisioning Servers

The common data model across Fleet Patching and Provisioning (FPP) servers provides a unified view of fleet targets regardless of the FPP server deployment.

The unified view of the common model provides an easier and simplified operation of large estates and cloud management across multiple data centers.

FPP Integration with AutoUpgrade

Fleet Patching and Provisioning (FPP) integration with AutoUpgrade provides a new tool for automating and simplifying Oracle Database Upgrade.

This feature makes Oracle Database AutoUpgrade more flexible, provides better control over the upgrade flow mechanism, and provides better usability by showing a progress bar and additional elements. You can upgrade multiple databases in parallel.

Oracle Clusterware 21c Deprecated and Desupported Features

The following are deprecated and desupported features in Oracle Clusterware 21c:

  • Deprecation of Policy-Managed Databases: Starting with Oracle Grid Infrastructure 21c, creation of new server pools is eliminated, and policy-managed databases are deprecated and can be desupported in a future release. Server pools will be migrated to the new Oracle Services feature that provides similar functionality.
  • Deprecation of Cluster Domain - Domain Services Cluster: With the introduction of Oracle Grid Infrastructure 21c, the Domain Services Cluster (DSC), which is part of the Oracle Cluster Domain architecture, is deprecated and can be desupported in a future release.
  • Desupport of Vendor Clusterware Integration with Oracle Clusterware: Starting with Oracle Clusterware 21c, the integration of vendor or third-party clusterware with Oracle Clusterware is desupported.
  • Desupport of Cluster Domain - Member Clusters: Member Clusters, which are part of the Oracle Cluster Domain architecture, are desupported. However, Domain Services Clusters continue to support Member Clusters in releases prior to Oracle Grid Infrastructure 21c.

Database In-Memory

Database In-Memory Base Level

Database In-Memory is an option of Oracle Database Enterprise Edition and now has a new "Base Level" feature. This allows the use of Database In-Memory with up to a 16GB column store without having to license the option. The use of the Base Level feature does not trigger any license tracking.

The IM column store is limited to 16GB when using the Base Level feature. This can allow customers to see the value of Database In-Memory without having to worry about licensing issues. Note that Base Level has some other restrictions; for instance, the CellMemory feature and Automatic In-Memory are not included with the base level.
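A minimal sketch of enabling the Base Level; both parameters are static, so a restart is required, and INMEMORY_SIZE may not exceed 16G at this level:

ALTER SYSTEM SET INMEMORY_FORCE = BASE_LEVEL SCOPE=SPFILE;
ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE;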

Automatic In-Memory

Automatic In-Memory (AIM) enables, populates, evicts, and recompresses segments without user intervention.

When INMEMORY_AUTOMATIC_LEVEL is set to HIGH the database automatically populates segments based on their usage patterns without requiring them to be marked INMEMORY. Combined with support for selective column level recompression, In-Memory population is largely self-managing. This automation helps maximize the number of objects that can be populated into the In-Memory column store (IM column store) at one time and maximizes overall application performance.
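For example, assuming INMEMORY_SIZE is already configured, the fully automatic mode is enabled with a single parameter:

ALTER SYSTEM SET INMEMORY_AUTOMATIC_LEVEL = HIGH;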

Database In-Memory External Table Enhancements

For a partitioned external table or a hybrid partitioned table (a table that has both internal and external partitions), the INMEMORY clause is supported at both the table and partition level. For hybrid partitioned tables, the table-level INMEMORY attribute applies to all partitions, whether internal or external.

This enhancement significantly broadens support for in-memory external tables by allowing partitioned external tables or hybrid partitioned tables to be selectively populated by partition, thereby supporting active subsets of external data rather than having to populate the entire table.

In-Memory Hybrid Scans

In-Memory hybrid scans support in-memory scans when not all columns in a table have been populated into the In-Memory column store (IM column store). This situation can occur when columns have been specified as NO INMEMORY to save space. In previous releases, if a query accessed a column that was not populated, then no data could be accessed from the IM column store.

The In-Memory hybrid scan feature allows a query to combine in-memory accesses with row-store accesses, improving performance by orders of magnitude over pure row store queries.

CellMemory Level

On Exadata systems you can use both memory in the compute servers for the In-Memory column store and the Exadata Smart Flash Cache in the storage servers to populate data in in-memory columnar format (a feature known as CellMemory). CellMemory is enabled by default if the IM column store is enabled on the database servers. You can now selectively enable only CellMemory without enabling the IM column store, by setting INMEMORY_FORCE=CELLMEMORY_LEVEL and INMEMORY_SIZE=0.

If you do not intend to use Database In-Memory on the Database servers, this feature allows you to use CellMemory without incurring the overhead of allocating SGA memory for the IM column store.
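Using the parameter values named above, a CellMemory-only configuration looks as follows (both parameters are static, so a restart is required):

ALTER SYSTEM SET INMEMORY_FORCE = CELLMEMORY_LEVEL SCOPE=SPFILE;
ALTER SYSTEM SET INMEMORY_SIZE = 0 SCOPE=SPFILE;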

Database In-Memory Additional Features

In addition to the features highlighted in the Database In-Memory section, the following features in other sections are applicable for Database In-Memory:

  • In-Memory Full Text Columns
  • Spatial Support for Database In-Memory
  • In-Memory Vectorized Joins
  • New JSON Data Type

Data Guard

Active Data Guard - Standby Result Cache

The result cache in an Active Data Guard standby database is utilized to cache results of queries that were run on the physical standby database. In the case of a role transition to primary, the standby database result cache is now preserved, ensuring that offloaded reporting and other queries continue without compromising the performance benefits of the standby result cache.

Use of the result cache greatly improves query performance for recurring queries and minimizes performance impact on the primary and standby databases. By maintaining the result cache on the standby, the performance of any queries that were running on the standby is maintained, ensuring that previously offloaded reporting and other read-only applications utilizing the standby will not be impacted by the role transition.

Data Guard Broker Far Sync Instance Creation

The Data Guard Broker now enables users to create and add a Far Sync instance to a Data Guard Broker configuration using a single command.

Zero data loss over long distance can be achieved by using Data Guard Far Sync standby instances. The Oracle Data Guard Broker can now be utilized to ease the setup and maintenance of these instances, which simplifies setup and leverages the existing maintenance solution of the overall Data Guard environment.

Data Guard Far Sync instance in Maximum Performance Mode

The far sync instance can be fully utilized in Maximum Performance mode, both in normal configurations and when fast-start failover (FSFO) is enabled.

This additional flexibility allows RTO/RPO objectives to be met more easily when far sync is utilized in Active Data Guard configurations. Maximum Performance mode provides for a predefined data loss, but with the benefit of a faster automated failover, which is critical in disaster recovery events.

Fast-Start Failover Callouts

Callout scripts are now available for use with fast-start failover (FSFO). These scripts can contain user-defined commands that are run before and after a FSFO operation.

With the additional flexibility provided by callout scripts, administrators can automate manual actions that must be performed before FSFO events are initiated as well as after a FSFO event has been completed. This provides a more consistent and adaptable configuration, reducing the chance for human error to be introduced during these events.

Fast-Start Failover Configuration Validation

Oracle Data Guard Broker now provides early detection of fast-start failover (FSFO) configuration issues and reports those misconfigurations, allowing administrators to take action prior to a failover event.

Monitoring and validating a fast-start failover configuration helps maintain and ensure database availability. Potential configuration errors are detected early, thereby preventing problems prior to a fast-start failover event that may be required to protect the configuration. This ensures that administrators can confidently maintain and validate their fast-start failover configuration in cases of new deployments as well as updates to an existing configuration.

PDB Recovery Isolation

PDB Recovery Isolation ensures single PDB recovery while managed standby recovery is recovering other PDBs and prevents administrative PDB operations on the primary database from interfering with recovery of other PDBs on the standby database.

This preserves the PDB isolation principle with regard to Active Data Guard, allowing the maintenance, protection, and query SLAs for the remaining PDBs to continue unabated, which is consistent with how the primary database handles PDB operations.

Standardized Data Guard Broker Directory Structure

Oracle Data Guard broker now utilizes a standardized directory structure to store client-side files.

Using a standardized directory structure helps keep your Oracle Data Guard environment well organized and consistent. Management is further simplified by using configuration aliases to quickly identify a specific Oracle Data Guard configuration.

Other Features

Callout Configuration Scripts

Callout configuration scripts can be used to automatically execute specified tasks before and after a fast-start failover operation. The name of the callout configuration file is fsfocallout.ora. You cannot use a different name for this file. This file is stored in the $DG_ADMIN/config_ConfigurationSimpleName/callout directory. If the DG_ADMIN environment variable is not defined, or the directory specified by this variable does not exist, or the directory does not have the required permissions, fast-start failover callouts will fail.

The names of the callout scripts are specified in fsfocallout.ora. These scripts must be in the same directory as the callout configuration file. You can create two callout scripts, a pre-callout script and a post-callout script. Before a fast-start failover operation, the observer checks whether a fast-start failover callout configuration file exists. If it exists and contains a pre-callout script location, this script is run before the fast-start failover is initiated. After fast-start failover succeeds, if a post-callout script is specified in the fast-start failover callout configuration file, this script is run.

The VALIDATE FAST_START FAILOVER command parses the callout configuration scripts and checks for errors or misconfigurations.

To perform specified actions before or after a fast-start failover operation:

  1. Create a pre-callout script, or a post-callout script, or both.
  2. Create or update the fast-start failover callout configuration file and include the names of the scripts created in the previous step.

Callout Configuration Script Example

The following example displays the contents of the fast-start failover configuration file named /home1/dataguard/config_NorthSales/callout/fsfocallout.ora. The fsfo_precallout and fsfo_postcallout callout configuration scripts are stored in the same location as fsfocallout.ora with the required permissions.

# The pre-callout script that is run before fast-start failover is initiated.
FastStartFailoverPreCallout=fsfo_precallout

# The timeout value (in seconds) for pre-callout script.
FastStartFailoverPreCalloutTimeout=1200 

# The name of the suc file created by the pre-callout script.
FastStartFailoverPreCalloutSucFileName=fsfo_precallout.suc    

# The name of the error file that the pre-callout script creates.
FastStartFailoverPreCalloutErrorFileName=precallout.err   

# Action taken by the observer if the suc file does not exist after
# FastStartFailoverPreCalloutTimeout seconds, or if an error file is detected
# before FastStartFailoverPreCalloutTimeout seconds have passed.
FastStartFailoverActionOnPreCalloutFailure=STOP

# The post-callout script that is run after fast-start failover succeeds.
FastStartFailoverPostCallout=fsfo_postcallout

Data Guard Broker Managed Default Directory

Starting with Oracle Database 21c, the DG_ADMIN environment variable is used to specify the default location for client-side broker files.

Client-side broker files include the following:

  • Observer configuration file (observer.ora)
  • Observer log file
  • Fast-start failover log file (fsfo.dat)
  • Fast-start failover callout configuration file

The files are stored in subdirectories created under DG_ADMIN. You must create the directory specified in DG_ADMIN and ensure that it has the required permissions.

The default directory for client-side files contains the following:

  • admin: Contains the observer configuration file used by DGMGRL to manage multiple observers. This file also declares broker configurations and defines configuration groups used by multiple configuration commands. The default name of the observer configuration file is observer.ora. When DGMGRL starts, if the DG_ADMIN environment variable is set and the specified directory has the required permissions, the admin directory is created under DG_ADMIN. When commands that need access to the observer configuration file are run, such as START OBSERVING, STOP OBSERVING, SET MASTEROBSERVER TO, and SET MASTEROBSERVERHOSTS, DGMGRL reports an error if the directory does not have the required permissions.
  • config_ConfigurationSimpleName: Stores files related to the observer and callout configuration. This directory has the same permissions as its parent directory. For each broker configuration on which one or more observers are registered, a directory named ConfigurationSimpleName is created. ConfigurationSimpleName represents an alias of the broker configuration name. Subdirectories within this directory are used to store the files related to the configuration. This directory is created when you run the CONNECT command. When running the START OBSERVER command, if this directory does not have the required permissions, DGMGRL reports an error.
  • config_ConfigurationSimpleName/log: Contains the observer log file for the broker configuration named ConfigurationSimpleName. The default name of the observer log file is observer_hostname.log.
  • config_ConfigurationSimpleName/dat: Contains the observer runtime data file for the broker configuration named ConfigurationSimpleName. The default name of the observer runtime data file is fsfo_hostname.dat. This file contains important information about the observer. In the event of a crash, data in this file can be used to restart the observer to the status before the crash.
  • config_ConfigurationSimpleName/callout: Contains the callout configuration file, precallout script, post-callout script, and precallout success file for the broker configuration named ConfigurationSimpleName. The default name of the callout configuration file is fsfocallout.ora.

Permissions Required by the DG_ADMIN Directory

On Linux/Unix platforms, the directory specified by the DG_ADMIN environment variable must have read, write, and execute permissions for the directory owner only. The subdirectories that DGMGRL creates under this directory will also have the same permissions.

On Windows platforms, the directory specified by the DG_ADMIN environment variable must have exclusive permissions wherein it can be accessed only by the current operating system user who is running DGMGRL. The subdirectories created under this directory by DGMGRL will also have the same permissions.

If the DG_ADMIN environment variable is not set, the specified directory does not exist, or the permissions are different from the ones specified above, then the broker does the following:

  • Stores the fast-start failover log file and observer configuration file in the current working directory
  • Uses standard output for displaying the observer logs

Every time DGMGRL starts, it checks if the default directories for client-side files exist. If they do not exist, the subdirectories are created.

  • When you run DGMGRL commands, if a path name and file name are explicitly specified for client-side files, the specified values are used.
  • If only a file name is specified, the file is stored in an appropriate directory under the broker's DG_ADMIN directory.
  • If only a path is specified, the files are stored in the specified path using the default file names.
  • If DG_ADMIN is not set, then the files are stored in the current working directory.

Oracle Database 21c Data Guard Desupported Features

The following parameters, which were deprecated in Oracle Database Release 19c, are now desupported in Data Guard and Data Guard Broker:

  • ArchiveLagTarget
  • DataGuardSyncLatency
  • LogArchiveMaxProcesses
  • LogArchiveMinSucceedDest
  • LogArchiveTrace
  • StandbyFileManagement
  • DbfileNameConvert
  • LogArchiveFormat
  • LogFileNameConvert
  • LsbyMaxEventsRecorded
  • LsbyMaxServers
  • LsbyMaxSga
  • LsbyPreserveCommitOrder
  • LsbyRecordAppliedDdl
  • LsbyRecordSkipDdl
  • LsbyRecordSkipErrors
  • LsbyParameters

FastStartFailoverLagLimit Configuration Property

When no synchronous standby destinations are available, a standby that uses asynchronous redo transport can be used as a fast-start failover target provided the new FastStartFailoverLagLimit configuration property is set. When a synchronous standby becomes available, the broker automatically switches back to the synchronous standby.

The FastStartFailoverLagLimit configuration property establishes an acceptable limit, in seconds, that the standby is allowed to fall behind the primary in terms of redo applied. If the limit is reached, then a fast-start failover is not allowed. The lowest possible value is 5 seconds. This property is used when fast-start failover is enabled and the configuration is operating in maximum performance mode.

The following categories are for the FastStartFailoverLagLimit configuration property:

  • Datatype: Integer
  • Valid value: Integral number of seconds. Must be greater than or equal to 5
  • Broker default: 30 seconds
  • Imported?: No
  • Parameter class: Not applicable
  • Role: Primary and standby
  • Standby type: Not applicable
  • Corresponds to: Not applicable
  • Scope: Broker configuration. This property will be consumed by the primary database after fast-start failover has been enabled.
  • Cloud control name: Lag Limit
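The property is set in DGMGRL like any other configuration-level property; for example:

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;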

PREPARE DATABASE FOR DATA GUARD Command

You can configure a database for Data Guard using a single command. The PREPARE DATABASE FOR DATA GUARD command configures a database and sets it up to be used as a primary database in a Data Guard broker configuration. You must connect to the primary database as a user with the SYSDG or SYSDBA privilege. Database versions starting from Oracle Database 12c Release 2 are supported. For a single-instance database, if a server parameter file does not exist, it is created using the current in-memory parameter settings and stored in the default location.

The PREPARE DATABASE FOR DATA GUARD command is as follows:

PREPARE DATABASE FOR DATA GUARD
  [WITH DB_UNIQUE_NAME IS db-unique-name]
  [DB_RECOVERY_FILE_DEST IS directory-location]
  [DB_RECOVERY_FILE_DEST_SIZE IS size]
  [BROKER_CONFIG_FILE_1 IS broker-config-file-1-location]
  [BROKER_CONFIG_FILE_2 IS broker-config-file-2-location]

Command Parameters

db_unique_name: The value for the DB_UNIQUE_NAME initialization parameter. If the initialization parameter has been set to a different value, the existing value is replaced with the value specified by db_unique_name. If this parameter is not specified, the DB_UNIQUE_NAME parameter is set to the value of the DB_NAME parameter.

directory-location: The directory name for the DB_RECOVERY_FILE_DEST initialization parameter, which represents the fast recovery area location. The specified directory must be accessible by all instances of a RAC database. This parameter can be omitted if a local archive destination is set. However, if the DB_RECOVERY_FILE_DEST initialization parameter has not been set and no local archive destination has been set, specifying this parameter is mandatory. If directory-location is specified, a LOG_ARCHIVE_DEST_n initialization parameter is set to the value USE_DB_RECOVERY_FILE_DEST. This is done whether or not a local archive destination is already set.

size: A size value for the DB_RECOVERY_FILE_DEST_SIZE initialization parameter. This parameter is mandatory if DB_RECOVERY_FILE_DEST is specified.

broker-config-file-1-location: A file location that is used to set the DG_BROKER_CONFIG_FILE1 initialization parameter. The file location specified must be accessible by all instances of a RAC database. This is an optional command parameter.

broker-config-file-2-location: A file location that is used to set the DG_BROKER_CONFIG_FILE2 initialization parameter. The file location specified must be accessible by all instances of a RAC database. This is an optional command parameter.

Initialization Parameters

The PREPARE DATABASE FOR DATA GUARD command sets the following initialization parameters, as per the values recommended for the Maximum Availability Architecture (MAA):

  • DB_FILES=1024
  • LOG_BUFFER=256M
  • DB_BLOCK_CHECKSUM=TYPICAL

    If this value is already set to FULL, the value is left unchanged.

  • DB_LOST_WRITE_PROTECT=TYPICAL

    If this value is already set to FULL, the value is left unchanged.

  • DB_FLASHBACK_RETENTION_TARGET=120

    If this parameter is already set to a non-default value, it is left unchanged.

  • PARALLEL_THREADS_PER_CPU=1
  • DG_BROKER_START=TRUE

This command enables archivelog mode, enables force logging, enables Flashback Database, and sets the RMAN archive log deletion policy to SHIPPED TO ALL STANDBY. If standby redo logs do not exist in the primary database, they are added. If the logs exist and are misconfigured, they are deleted and re-created.

Command Example

The following example prepares a database with the name boston for use as a primary database. The recovery destination is $ORACLE_BASE_HOME/dbs.

DGMGRL> PREPARE DATABASE FOR DATA GUARD
WITH DB_UNIQUE_NAME IS boston
DB_RECOVERY_FILE_DEST IS "$ORACLE_BASE_HOME/dbs/"
DB_RECOVERY_FILE_DEST_SIZE IS "400G"
BROKER_CONFIG_FILE_1 IS "$ORACLE_HOME/dbs/file1.dat"
BROKER_CONFIG_FILE_2 IS "$ORACLE_HOME/dbs/file2.dat";
Preparing database "boston" for Data Guard.
Creating server parameter file (SPFILE) from initialization parameter memory values.
Database must be restarted after creating the server parameter file (SPFILE).
Shutting down database "boston".
Database closed.
Database dismounted.
ORACLE instance shut down.
Starting database "boston" to mounted mode.
ORACLE instance started.
Database mounted.
Server parameter file (SPFILE) is "ORACLE_BASE_HOME/dbs/spboston.ora".
Initialization parameter DB_UNIQUE_NAME set to 'boston'.
Initialization parameter DB_FILES set to 1024.
Initialization parameter LOG_BUFFER set to 268435456.
Primary database must be restarted after setting static initialization parameters.
Primary database must be restarted to enable archivelog mode.
Shutting down database "boston".
Database dismounted.
ORACLE instance shut down.
Starting database "boston" to mounted mode.
ORACLE instance started.
Database mounted.
Initialization parameter DB_FLASHBACK_RETENTION_TARGET set to 120.
Initialization parameter DB_BLOCK_CHECKSUM set to 'TYPICAL'.
Initialization parameter DB_LOST_WRITE_PROTECT set to 'TYPICAL'.
Initialization parameter PARALLEL_THREADS_PER_CPU set to 1.
Removing RMAN archivelog deletion policy 1.
Removing RMAN archivelog deletion policy 2.
RMAN configuration archivelog deletion policy set to SHIPPED TO ALL STANDBY.
Initialization parameter DB_RECOVERY_FILE_DEST_SIZE set to '400G'.
Initialization parameter DB_RECOVERY_FILE_DEST set to 'ORACLE_BASE_HOME/dbs/'.
Initialization parameter DG_BROKER_START set to FALSE.
Initialization parameter DG_BROKER_CONFIG_FILE1 set to 'ORACLE_HOME/dbs/file1.dat'.
Initialization parameter DG_BROKER_CONFIG_FILE2 set to 'ORACLE_HOME/dbs/file2.dat'.
LOG_ARCHIVE_DEST_n initialization parameter already set for local archival.
Initialization parameter LOG_ARCHIVE_DEST_2 set to 'location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles)'.
Initialization parameter LOG_ARCHIVE_DEST_STATE_2 set to 'Enable'.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'MANUAL'.
Standby log group 4 will be dropped because it was not configured correctly.
Standby log group 3 will be dropped because it was not configured correctly.
Adding standby log group size 26214400 and assigning it to thread 1.
Initialization parameter STANDBY_FILE_MANAGEMENT set to 'AUTO'.
Initialization parameter DG_BROKER_START set to TRUE.
Database set to FORCE LOGGING.
Database set to ARCHIVELOG.
Database set to FLASHBACK ON.
Database opened.

VALIDATE FAST_START FAILOVER Command

The VALIDATE FAST_START FAILOVER command enables you to validate a fast-start failover configuration. It identifies misconfigurations, both while setting up and when initiating fast-start failover. This command validates the fast-start failover configuration and reports the following information:

  • Incorrectly set up fast-start failover parameters. For example, the fast-start failover threshold is not set appropriately.
  • Issues that prevent the enabling or initiating of fast-start failover. This includes issues that prevent the usage of fast-start failover even when the conditions required for fast-start failover are met (for example, fast-start failover is enabled in Observe-Only mode).
  • Issues that affect actions taken after fast-start failover is initiated
  • Issues that could impact the stability of the broker configuration
  • Issues with fast-start failover callout configuration scripts. The command reports whether the syntax of the callout configuration file fsfocallout.ora is correct and whether the pre-callout and post-callout scripts are accessible.

Example

DGMGRL> VALIDATE FAST_START FAILOVER;
Fast-Start Failover:  Enabled in Potential Data Loss Mode
Protection Mode:      MaxPerformance
Primary:              North_Sales
Active Target:        South_Sales
Fast-Start Failover Not Possible:
  Fast-Start Failover observer not started
Post Fast-Start Failover Issues:
  Flashback database disabled for database 'dgv1'
Other issues:
  FastStartFailoverThreshold may be too low for RAC databases.
  Fast-start failover callout configuration file "fsfocallout.ora" has the following issues:
    Invalid lines
    The specified file "./precallout" contains a path.

Flashback

Migrate Flashback Time Travel-Enabled Tables Between Different Database Releases

A new PL/SQL package called DBMS_FLASHBACK_ARCHIVE_MIGRATE enables the migration of Flashback Time Travel-enabled tables from a database on any release (in which the package exists) to any database on any release (that supports Flashback Time Travel).

Using the DBMS_FLASHBACK_ARCHIVE_MIGRATE PL/SQL package, users can export and import the Flashback Archive base tables, along with their history, to another database via the Oracle Transportable Tablespaces capability. Compression is preserved when History Tables enabled with the Advanced Compression Optimization for Flashback Time Travel History Tables capability are migrated.

Flashback Database Support for Datafile Shrink

The existing Flashback Database capability has some limitations with respect to permanent datafile resize operations. In earlier releases, the behavior of a permanent datafile resize to a smaller size (that is, a shrink) on an Oracle Database with Flashback Database enabled was as follows:

  • When a permanent datafile shrink operation was performed on a database with Flashback Database enabled, the operation was allowed to succeed. However, any subsequent flashback operation to an SCN or timestamp that crossed a shrink operation failed (Flashback Database could not be used to undo or roll back a datafile shrink operation).
  • When a permanent datafile shrink operation was performed on a database with Flashback Database enabled and a guaranteed restore point created, the datafile shrink operation failed with a user error.

This new capability is an enhancement to the current Flashback Database feature, allowing Flashback Database operations to succeed with permanent datafile shrinks, and shrinks to succeed even with guaranteed flashback restore points created on the database.

When objects in a tablespace are deleted, or when blocks in objects belonging to the tablespace are defragmented, the tablespace can be shrunk. Shrinking reduces the size of a datafile and returns unused space to the operating system, including space taken up by undo and defragmented space in tables and LOBs. The existing Flashback Database capability allowed users to "rewind" the database to a point in the past. However, when a permanent datafile shrink operation was performed, users could not use Flashback Database to undo or roll back the shrink. This new Flashback Database support for datafile shrink enables Flashback Database operations to succeed across permanent datafile shrinks, and shrinks to succeed even with guaranteed flashback restore points created on the database.

PDB Point-in-Time Recovery or Flashback to Any Time in the Recent Past

PDBs can be recovered to an orphan PDB incarnation within the same CDB incarnation or an ancestor incarnation.

Availability of PDBs is enhanced. Both flashback and point-in-time recovery operations are supported when recovering PDBs to orphan PDB incarnations.
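A minimal sketch of a PDB-level flashback, with a hypothetical PDB name and SCN; flashback across incarnations follows the same pattern:

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO SCN 3112168;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;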

GoldenGate

Automatic CDR Enhancements

Automatic Conflict Detection and Resolution (CDR) was introduced in Oracle Database 12c Release 2 (and Oracle GoldenGate 12.3.0.1) to automate the conflict detection and resolution configuration in active-active Oracle GoldenGate replication setups. Oracle Database 21c enhances automatic CDR to support earliest-timestamp-based resolution and site-priority-based resolution.

Active-active Oracle GoldenGate replication customers can use the automatic CDR feature on more types of tables simplifying their active-active replication setups.

Improved Support for Table Replication for Oracle GoldenGate

In earlier releases, extracting tables with Oracle GoldenGate required supplemental logging to be enabled for the replicated tables or schemas, and required you to set the TABLE and TABLEEXCLUDE parameters to configure which tables to extract.

Starting with this release, to coordinate with the Oracle GoldenGate feature OGG EXTRACT, the LOGICAL_REPLICATION clause now provides support for automatic extract of tables.

In addition, two new views, DBA_OGG_AUTO_CAPTURED_TABLES and USER_OGG_AUTO_CAPTURED_TABLES, provide you with tools to query which tables are enabled for Oracle GoldenGate automatic capture.
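For example, to list the tables currently enabled for automatic capture (view names as given above):

SELECT * FROM dba_ogg_auto_captured_tables;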

LogMiner Views Added to Assist Replication

The DBMS_ROLLING package contains a new parameter that enables you to block the replication of operations unsupported by Transient Logical Standby.

Starting with this release, the DBMS_ROLLING.SET_PARAMETER() procedure has a new parameter called BLOCK_UNSUPPORTED. By default, BLOCK_UNSUPPORTED is set to 1 (YES), indicating that operations performed on tables that are unsupported by Transient Logical Standby will be blocked on the primary database. If set to 0 (NO), then the DBMS_ROLLING package does not block operations on unsupported tables. Those tables will not be maintained by Transient Logical Standby and will diverge from the primary database.
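A hedged sketch of disabling the blocking behavior; the named-parameter form shown is an assumption about the SET_PARAMETER signature:

EXEC DBMS_ROLLING.SET_PARAMETER(name => 'BLOCK_UNSUPPORTED', value => '0');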

Oracle GoldenGate for Oracle and XStream Support for JSON Data Type

Oracle GoldenGate for Oracle and XStream supports JavaScript Object Notation (JSON) data type.

JSON data type represents JSON in a proprietary binary format that is optimized for query and DML processing and can yield performance improvements for JSON processing in the database. It provides strong typing of JSON values so that the data type can be propagated through SQL expressions and view columns.

Multitenant

DRCP Enhancements for Oracle Multitenant

Starting with Oracle Database 21c, you can configure Database Resident Connection Pooling (DRCP) from the CDB to individual PDBs for improved PDB tenancy management.

In previous releases, the DRCP pool was used by the entire container database (CDB). With per-PDB pools, you can now configure, manage, and monitor pools on individual pluggable databases (PDBs), on the basis of tenancy. This feature also provides the ability to specify connection class and purity support in connect strings for DRCP, so you can leverage DRCP without application code changes. By changing the granularity of the DRCP pool from the entire CDB to a per-PDB pool, tenant administrators can configure and manage independent, tenant-specific DRCP pools, and applications gain some of the DRCP benefits without making any application changes.
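A minimal sketch of managing a per-PDB pool, assuming a hypothetical PDB named sales_pdb; the key difference from earlier releases is that the DBMS_CONNECTION_POOL calls are issued inside the PDB rather than in CDB$ROOT:

ALTER SESSION SET CONTAINER = sales_pdb;
EXEC DBMS_CONNECTION_POOL.CONFIGURE_POOL(minsize => 5, maxsize => 40, inactivity_timeout => 300);
EXEC DBMS_CONNECTION_POOL.START_POOL();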

Related Topics

Expanded Syntax for PDB Application Synchronization

The ALTER PLUGGABLE DATABASE APPLICATION ... SYNC statement now accepts multiple application names and names to be excluded. For example, a single statement issued in an application PDB can synchronize app1 and app2, or synchronize all applications except app3.

The expanded syntax enables you to reduce the number of synchronization statements, and the database replays the statements in the correct order. Assume that you upgrade ussales from v1 to v2, then upgrade eusales from v1 to v2, and then upgrade ussales from v2 to v3. The following statement replays the upgrades in sequence: ussales to v2, then eusales to v2, and then ussales to v3.
ALTER PLUGGABLE DATABASE APPLICATION ussales, eusales SYNC
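
As a sketch of the exclusion form, assuming app3 is an installed application (verify the placement of the EXCEPT clause against the SQL Language Reference):

ALTER PLUGGABLE DATABASE APPLICATION ALL EXCEPT app3 SYNC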

Related Topics

MAX_IDLE_BLOCKER_TIME Parameter

MAX_IDLE_BLOCKER_TIME sets the maximum number of minutes that a session holding resources needed by other sessions can be idle before it becomes a candidate for termination.

MAX_IDLE_TIME sets limits for all idle sessions, whereas MAX_IDLE_BLOCKER_TIME limits only idle sessions that are holding resources. MAX_IDLE_TIME can be problematic for a connection pool because the pool may continually try to re-create sessions that the parameter has terminated.
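
For example, to allow an idle blocking session at most 10 minutes (a value chosen purely for illustration):

ALTER SYSTEM SET MAX_IDLE_BLOCKER_TIME = 10;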

Related Topics

Namespace Integration with Database

Database Nest is an infrastructure that provides operating system resource isolation and management, file system isolation, and secure computing for CDBs and PDBs. This infrastructure enables a database instance to run in a protected, virtualized environment.

Sharing instance-level and operating system resources can lead to security and isolation constraints, especially in large-scale cloud deployments. Vulnerabilities can be external, such as compromised applications, unauthorized access to resources, and shared resources. An example of an internal vulnerability is a compromised Oracle process.

Database Nest isolates a database instance from other databases and applications running on the same host, and also isolates PDBs from each other and from the CDB. The feature is implemented as a Linux-specific package that provides hierarchical containers, called nests. A CDB resides within a single parent nest, while PDBs reside within the individual child nests created within the parent.

Linux processes in a PDB nest have their own process ID (PID) number spaces and cannot access PIDs in other nests. Process isolation provides a last line of defense in a security breach, if a malicious user compromises a process.
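
As a sketch, the feature is controlled through an initialization parameter; the setting below assumes the DBNEST_ENABLE parameter and its CDB_ONLY value, and takes effect at the next instance restart.

-- Enable Database Nest for the CDB (Linux only; requires restart).
ALTER SYSTEM SET DBNEST_ENABLE = 'CDB_ONLY' SCOPE = SPFILE;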

Related Topics

Support Per-PDB Capture for Oracle Autonomous Database

To securely capture and replicate individual pluggable database (PDB) changes to Oracle Autonomous Database, you can now use Oracle GoldenGate to provide per-PDB capture.

You can now provide local user credentials to connect to an individual PDB in a multitenant Oracle Database, and replicate the data from just that PDB to an Oracle Autonomous Database. You no longer need to create a common user with access to all PDBs on the multitenant container database (CDB) to replicate a PDB to an Oracle Autonomous Database. Instead, you can provision a local user with a predefined set of privileges in the source PDB that you want to capture. All LogMiner and capture processing takes place only in this PDB, and only data from this specific PDB is captured and written to the Oracle GoldenGate trail. As part of this feature, the behavior of V$LOGMNR_CONTENTS changes depending on whether you connect to a PDB or to CDB$ROOT.
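
A minimal provisioning sketch, assuming a hypothetical local user GGADMIN in the source PDB and the DBMS_GOLDENGATE_AUTH package available in earlier releases:

-- In the source PDB, create a local capture user (names are examples).
ALTER SESSION SET CONTAINER = sales_pdb;

CREATE USER ggadmin IDENTIFIED BY "example_password";
GRANT CREATE SESSION TO ggadmin;

BEGIN
  -- Grant the GoldenGate capture privileges to the local user.
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee        => 'GGADMIN',
    privilege_type => 'CAPTURE');
END;
/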

Related Topics

Time Zone Support for PDBs in DBCA

In Database Configuration Assistant (DBCA) silent mode, you can optionally use the -pdbTimezone parameter with the -createPluggableDatabase and -configurePluggableDatabase commands to specify a time zone for a pluggable database (PDB).
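
For example, a hypothetical silent-mode invocation (the source database, PDB name, administrator name, and time zone values are placeholders):

dbca -silent -createPluggableDatabase \
  -sourceDB orclcdb \
  -pdbName sales_pdb \
  -pdbAdminUserName pdbadmin \
  -pdbTimezone "America/Los_Angeles"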

Related Topics

Using Non-CDBs and CDBs

The non-CDB architecture was deprecated in Oracle Database 12c and is desupported in Oracle Database 21c, which means that Oracle Universal Installer and DBCA can no longer be used to create non-CDB Oracle Database instances. The multitenant container database (CDB) is the only supported architecture in Oracle Database 21c.

Oracle Real Application Clusters (RAC)

Cache Fusion Hardening

The Global Cache Service (LMS) process is vital to the operation of an Oracle Real Application Clusters (Oracle RAC) database. Cache Fusion Hardening helps to ensure that the critical LMS process remains running despite discrepancies between instances that would otherwise lead to an LMS failure and, consequently, a database instance failure.

Cache Fusion Hardening increases availability by reducing outages, particularly in consolidated environments in which multiple pluggable databases (PDBs) operate in the same Oracle RAC container database.

Related Topics

Database Management Policy change for Oracle RAC in DBCA

DBCA supports specifying a database management policy when creating an Oracle RAC database.

The supported policy variants are AUTOMATIC and RANK.

This feature allows you to control PDB placement in an Oracle RAC database.

Related Topics

Integration of PDB as a Resource in Clusterware

This release completes the integration of pluggable databases (PDBs) as resources in Oracle Clusterware. The integration includes support through utilities and command-line tools.

With resources defined and mapped at the PDB level, rather than at the previously supported CDB level, you can better manage PDBs, manage the workloads and resources that run against them, and monitor them.
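
As an illustrative sketch, assuming the srvctl pdb commands that accompany this integration (the command names, database name, and PDB name here are assumptions, not confirmed syntax):

srvctl add pdb -db racdb -pdb sales_pdb
srvctl status pdb -db racdb -pdb sales_pdb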

Related Topics

Pluggable Database Cluster Resources

Pluggable Database (PDB) Cluster Resources enable direct mapping and control of PDB resources. In previous versions, cluster resources for multitenant databases were mapped against the container database (CDB), and PDBs were controlled indirectly using services.

Pluggable Database (PDB) Cluster Resources enable a tighter and more effective control of PDBs in an Oracle RAC Database.

Related Topics

SecureFiles

SecureFiles Shrink

SecureFiles Shrink provides a way to free the unused space in SecureFiles segments while allowing concurrent reads and writes to SecureFiles data.

SecureFiles Shrink also helps to reduce fragmentation and improve read and write performance. The feature supports all types of SecureFiles LOBs: compressed, deduplicated, and encrypted.
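
For illustration, a sketch that reclaims space in a SecureFiles LOB segment online; the table and column names are hypothetical.

-- Free unused space while concurrent reads and writes continue.
ALTER TABLE documents MODIFY LOB (doc_body) (SHRINK SPACE);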

Related Topics

Sharding

Centralized Backup and Restore of a Sharded Database

Oracle Sharding backup and recovery operations are centralized using new commands in the GDSCTL utility. You can define a backup policy for a sharded database as a whole and restore one or more shards, or the entire sharded database, to the same point in time. Configured backups are run automatically, and you can define a schedule to run backups during off-peak hours.

This feature streamlines backup and restore configuration and simplifies the overall management of backup policies for all of the databases in a sharded database topology. In earlier releases, you had to manually configure backup policies for each shard and the shard catalog database. Some of this work could be done with Oracle Enterprise Manager, but there was no way to orchestrate a complete restore of a sharded database such that all shards and the shard catalog are restored to the same point in time.
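
Purely as a hypothetical sketch of the workflow (these GDSCTL command names and options are assumptions, not confirmed syntax; see the GDSCTL reference for the actual commands):

GDSCTL> config backup
GDSCTL> restore backup -shard shard1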

Related Topics

Create a Sharded Database from Multiple Existing Databases (Federated Sharding)

Federated sharding lets you convert a set of existing databases that run the same application into a sharded database, without modifying the database schemas or the application. The databases can be geographically distributed and can have some differences in their individual schemas.

You can more easily issue queries across multiple independent databases running the same application when they are combined into a sharded database.

Related Topics

Multi-Shard Query, Data Loading, and DML Enhancements

If any shards are unavailable during query execution, then the enhanced multi-shard query attempts to find alternate shards to operate on, and the query resumes without raising an error. Bulk data loading and DML can operate on multiple shards simultaneously.

Multi-shard queries are more fault-tolerant. Bulk data loading and DML operations can occur across all shards simultaneously, making these operations much faster.
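
For example, a multi-shard aggregate issued through the shard catalog's query coordinator; the ORDERS sharded table is hypothetical.

-- Run on the shard catalog: the coordinator fans the query out to all
-- available shards, retrying on alternate shards if one is down.
SELECT order_region, COUNT(*) AS order_count
FROM orders
GROUP BY order_region;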

Related Topics

Sharding Advisor Schema Analysis Tool

Sharding Advisor is a standalone command-line tool that helps you redesign a database schema so that you can efficiently migrate an existing, non-sharded Oracle Database to an Oracle sharding environment. Sharding Advisor analyzes your existing database schema and produces a ranked list of possible sharded database designs.

Using the Sharding Advisor recommendations, you can experience a smoother, faster migration to Oracle Sharding. Sharding Advisor analysis provides you with the information you need to:

  • Maximize availability and scalability
  • Maximize query workload performance
  • Minimize the amount of duplicated data on each shard

Related Topics

Transactional Event Queues (TEQs)

Transactional Event Queues (TEQ) is the new name for AQ sharded queues, which were introduced in Oracle Database 12c and further enhanced in Oracle Database 19c. TEQ includes all features of AQ sharded queues, plus new ones.

Advanced Queuing Support for JSON Data Type

Oracle Database Advanced Queuing now supports the JSON data type.

Many client applications and microservices that use Advanced Queuing for messaging achieve better performance when they use the JSON data type to handle JavaScript Object Notation (JSON) messages.
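
A minimal sketch that creates and starts a classic AQ queue with a JSON payload; the queue names are hypothetical, and passing 'JSON' as the payload type is an assumption based on this feature.

BEGIN
  -- Create a queue table whose payload is the JSON data type.
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'json_msg_qt',
    queue_payload_type => 'JSON');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'json_msg_q',
    queue_table => 'json_msg_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'json_msg_q');
END;
/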

Related Topics

Advanced Queuing: Kafka Java Client for Transactional Event Queues

Kafka Java Client for Transactional Event Queues (TEQ) enables Kafka application compatibility with Oracle Database, providing easy migration of Kafka applications to TEQ.

Customers do not have to manage a separate Kafka infrastructure, and this feature simplifies event-driven application architectures with an Oracle converged database that now includes events data.

Starting from this release, Kafka Java APIs can connect to the Oracle Database server and use Transactional Event Queues (TEQ) as the messaging platform. Developers can migrate an existing Java application that uses Kafka to the Oracle database. A client-side library allows Kafka applications to connect to the Oracle database instead of a Kafka cluster and use the TEQ messaging platform transparently.

Related Topics

Advanced Queuing: PL/SQL Enqueue and Dequeue Support for JMS Payload in Transactional Event Queues

PL/SQL APIs can perform enqueue and dequeue operations for Java Message Service (JMS) payloads in Transactional Event Queues, and the PL/SQL array APIs are also exposed to Transactional Event Queues JMS users. Because JMS supports heterogeneous messages in a single JMS destination, a dequeue returns one of the five JMS message types but cannot predict the type of the next message received, so an application can encounter PL/SQL type mismatch errors. Oracle suggests that applications always dequeue from Transactional Event Queues using the generic type AQ$_JMS_MESSAGE.

Customers can use PL/SQL APIs to enqueue and dequeue JMS payloads in Transactional Event Queues to avoid client-server round trips.
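
Following that advice, a dequeue sketch using the generic type; the queue name is hypothetical.

DECLARE
  dq_opts   DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg       SYS.AQ$_JMS_MESSAGE;
  msg_id    RAW(16);
BEGIN
  -- Do not block if the queue is empty.
  dq_opts.wait := DBMS_AQ.NO_WAIT;
  -- The generic JMS type accepts any of the five JMS message types,
  -- avoiding PL/SQL type mismatch errors.
  DBMS_AQ.DEQUEUE(
    queue_name         => 'jms_teq',
    dequeue_options    => dq_opts,
    message_properties => msg_props,
    payload            => msg,
    msgid              => msg_id);
END;
/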

Related Topics

Advanced Queuing: PL/SQL Enqueue and Dequeue Support for non-JMS Payload in Transactional Event Queues

PL/SQL APIs can now perform enqueue and dequeue operations for ADT and RAW payloads in Transactional Event Queues. Similarly, the PL/SQL array APIs are exposed to Transactional Event Queue users.

ADT payloads are important because they allow you to have different queue payloads required by applications with all the benefits of strong type checking.
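
For illustration, a sketch that creates and starts a Transactional Event Queue with a RAW payload; the queue name is hypothetical, and the queue_payload_type parameter is an assumption based on the DBMS_AQADM conventions.

BEGIN
  DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE(
    queue_name         => 'raw_events',
    queue_payload_type => 'RAW');
  DBMS_AQADM.START_QUEUE(queue_name => 'raw_events');
END;
/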

Related Topics

Advanced Queuing: Simplified Metadata and Schema in Transactional Event Queues

Transactional Event Queues have fewer tables than AQ and implement multiple memory optimizations for higher throughput. Customers will see higher message throughput just by switching from AQ to Transactional Event Queues.

This feature provides improvement in performance, scalability, and manageability.

Related Topics

Advanced Queuing: Transactional Event Queues for Performance and Scalability

Oracle Transactional Event Queues have their queue tables partitioned into multiple event streams, which are distributed across multiple Oracle RAC nodes for high-throughput messaging and streaming of events.

Partitioned tables form part of the foundation to scale and increase performance of Transactional Event Queues, especially on Oracle RAC or Exadata.

Related Topics