3 Administering Oracle Database on Exadata

3.1 Administering SQL Processing Offload

To optimize the performance of queries that do table and index scans, the database can offload data search and retrieval processing to the Exadata storage servers. This feature is controlled by database initialization parameters:

3.1.1 CELL_OFFLOAD_PROCESSING

The CELL_OFFLOAD_PROCESSING initialization parameter enables SQL processing offload to Oracle Exadata Storage Server.

When the value of the parameter is set to TRUE, predicate evaluation can be offloaded to cells. The default value of the parameter is TRUE. If the parameter is set to FALSE at the session or system level, then the database performs all the predicate evaluation with cells serving blocks. You can set CELL_OFFLOAD_PROCESSING dynamically with the SQL ALTER SYSTEM or ALTER SESSION commands, for example:

SQL> ALTER SESSION SET CELL_OFFLOAD_PROCESSING = TRUE;

The CELL_OFFLOAD_PROCESSING parameter can also be set with the OPT_PARAM optimizer hint to enable or disable predicate filtering for a specific SQL command.

  • To disable CELL_OFFLOAD_PROCESSING for a SQL command:

    SELECT /*+ OPT_PARAM('cell_offload_processing' 'false') */ COUNT(*) FROM EMPLOYEES;
    
  • To enable CELL_OFFLOAD_PROCESSING for a SQL command:

    SELECT /*+ OPT_PARAM('cell_offload_processing' 'true') */ COUNT(*) FROM EMPLOYEES;
    

Note:

The CELL_OFFLOAD_PROCESSING initialization parameter cannot be used to compare the performance of Oracle Exadata Storage Server with conventional storage. Even when CELL_OFFLOAD_PROCESSING is set to FALSE, Oracle Exadata Storage Server has many advantages over conventional storage. Oracle Exadata Storage Server is highly optimized for fast processing of large queries. It has no bottlenecks at the controller or other levels inside the cell. Oracle Exadata uses a modern scale-out architecture and state-of-the-art RDMA Network Fabric that has much higher throughput than conventional storage networks. Oracle Exadata is tightly integrated with the Oracle Database, and has unique capabilities for setup, execution, monitoring, diagnostics, resource management, and corruption prevention.

3.1.2 CELL_OFFLOAD_PLAN_DISPLAY

The database parameter CELL_OFFLOAD_PLAN_DISPLAY determines whether the SQL EXPLAIN PLAN command displays the predicates that can be evaluated by Oracle Exadata System Software as STORAGE predicates for a given SQL command.

The values for the CELL_OFFLOAD_PLAN_DISPLAY parameter are AUTO, ALWAYS, or NEVER. The default value is AUTO.

  • AUTO instructs the SQL EXPLAIN PLAN command to display the predicates that can be evaluated as STORAGE only if a cell is present and if a table is on the cell.

  • ALWAYS produces changes to the SQL EXPLAIN PLAN command based on Oracle Exadata System Software, regardless of whether Oracle Exadata System Software is present or the table resides on the cell. You can use this setting to see what can be offloaded to Oracle Exadata Storage Server before migrating to Oracle Exadata Storage Server.

  • NEVER produces no changes to the SQL EXPLAIN PLAN command for Oracle Exadata System Software.

You can set the CELL_OFFLOAD_PLAN_DISPLAY parameter dynamically with the SQL ALTER SYSTEM or ALTER SESSION commands. For example:

SQL> ALTER SESSION SET cell_offload_plan_display = ALWAYS;
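With the parameter set to ALWAYS, the plan display marks offloadable predicates with a storage() entry in the Predicate Information section, even when the table is not on Exadata storage. The following sketch uses a hypothetical EMPLOYEES table, and the predicate output is illustrative only:

```sql
SQL> ALTER SESSION SET CELL_OFFLOAD_PLAN_DISPLAY = ALWAYS;

SQL> EXPLAIN PLAN FOR SELECT * FROM employees WHERE employee_id = 100;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - storage("EMPLOYEE_ID"=100)
       filter("EMPLOYEE_ID"=100)
```

The storage() entry indicates a predicate that can be evaluated by Oracle Exadata System Software.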

3.1.3 CELL_OFFLOAD_DECRYPTION

The CELL_OFFLOAD_DECRYPTION initialization parameter enables decryption offload to Oracle Exadata Storage Servers. This decryption applies to both encrypted tablespaces and encrypted columns. When the value of the parameter is set to TRUE, decryption can be offloaded to cells. The default value of the parameter is TRUE. If the parameter is set to FALSE at the system level, then the database performs all decryption with cells serving blocks. You can set CELL_OFFLOAD_DECRYPTION dynamically with the SQL ALTER SYSTEM command. For example:

SQL> ALTER SYSTEM SET CELL_OFFLOAD_DECRYPTION = FALSE;

3.2 Administering Exadata Smart Flash Cache

Exadata Smart Flash Cache automatically caches frequently-used data in high-performance flash memory.

3.2.1 Overriding the Default Caching Policy

Though typically not required, you can use the CELL_FLASH_CACHE segment storage option to override the automatic caching policy for Exadata Smart Flash Cache. The storage clause can be provided during the CREATE and ALTER commands for a table or other objects.

For example:

SQL> CREATE TABLE t1 (c1 number, c2 varchar2(200)) STORAGE (CELL_FLASH_CACHE KEEP);

The CELL_FLASH_CACHE option supports the following settings:

  • NONE: This value ensures that Exadata Smart Flash Cache never caches the corresponding segment. By using this setting on peripheral database segments, more cache space is available for more-important and frequently-accessed database segments.
  • DEFAULT: This value specifies that database segments are cached using the default LRU (least recently used) algorithm of Exadata Smart Flash Cache. This value is the default setting for CELL_FLASH_CACHE.
  • KEEP: This value elevates the segment priority in Exadata Smart Flash Cache. By using this setting, you can increase the likelihood of keeping data from the corresponding segment in the cache.

You can set the CELL_FLASH_CACHE segment storage option separately on each partition in a partitioned segment, which is particularly useful if you want to influence the caching priority for different partitions based on predictable usage patterns. While setting the CELL_FLASH_CACHE segment storage option on a partition, you can add the DEFERRED INVALIDATION clause. For example:

SQL> ALTER TABLE ptable MODIFY PARTITION p1 STORAGE (CELL_FLASH_CACHE KEEP) DEFERRED INVALIDATION;

By using this option, you can dynamically modify the segment storage option without immediately invalidating dependent cursors. This option requires Oracle Database software that contains the patch for bug 33456703, which is included in Oracle Database version 19.15, Oracle Database version 21.6, and later releases.

Example 3-1 Setting CELL_FLASH_CACHE on partitions

This example shows setting CELL_FLASH_CACHE individually on multiple partitions in a CREATE TABLE command.

CREATE TABLE ptable (c1 number, c2 clob) TABLESPACE TBS_1
          PARTITION BY RANGE(c1) ( PARTITION p1 VALUES LESS THAN (100)
             TABLESPACE TBS_2 STORAGE (CELL_FLASH_CACHE DEFAULT),
          PARTITION p2 VALUES LESS THAN (200) TABLESPACE TBS_3
             STORAGE (CELL_FLASH_CACHE KEEP));

Example 3-2 Setting CELL_FLASH_CACHE on a LOB segment

This example shows setting CELL_FLASH_CACHE for a LOB segment in a CREATE TABLE command.

CREATE TABLE tkbcsrbc (c1 number, l1 clob)
           lob (l1) STORE AS securefile 
              (cache nologging STORAGE (CELL_FLASH_CACHE NONE))
           PCTFREE 0 TABLESPACE tbs_93 STORAGE 
              (initial 128K next 128K pctincrease 0);

Example 3-3 Using ALTER TABLE with CELL_FLASH_CACHE

For objects where altering the storage clause is allowed, the ALTER command can be used with CELL_FLASH_CACHE, as shown in these examples.

ALTER TABLE tkbcsrbc STORAGE(CELL_FLASH_CACHE DEFAULT);

ALTER TABLE tkbcsrbc MODIFY LOB (l1) (STORAGE (CELL_FLASH_CACHE KEEP));

Example 3-4 Using Views to Query CELL_FLASH_CACHE Storage Clause

The CELL_FLASH_CACHE storage clause attribute can be queried using database views based on the object involved.

SELECT TABLESPACE_NAME, TABLE_NAME, CELL_FLASH_CACHE FROM user_tables WHERE table_name='TKBCSRBC';

SELECT CELL_FLASH_CACHE FROM ALL_INDEXES WHERE index_name='TKBCIDX';

3.2.2 Administering In-Memory Columnar Caching

The columnar cache is a section of Exadata Smart Flash Cache that stores data in columnar format. When directed by Oracle Database, Exadata automatically uses a portion of the columnar cache to maintain data using the Oracle Database In-Memory format. You do not need to configure anything on Exadata to use this enhancement.

This feature is available if you have licensed the Oracle Database In-Memory option. To enable this feature, use either of the following database instance parameters:

  • Set the INMEMORY_SIZE database instance parameter to a value greater than zero.

  • Starting with Oracle Database version 19.8.0.0.200714, you can set INMEMORY_FORCE=cellmemory_level.

    This option enables you to use In-Memory Columnar Caching in Exadata Smart Flash Cache without a dedicated In-Memory cache in the database instance.

You can use the CELLMEMORY segment option to override the default behavior for In-Memory Columnar Caching in Exadata Smart Flash Cache:

SQL> ALTER TABLE table_name [ NO ] CELLMEMORY [ MEMCOMPRESS FOR [ QUERY | CAPACITY ] [ LOW | HIGH ] ]

The options and clauses are:

  • NO CELLMEMORY: Indicates that a table is ineligible for the rewrite from the 12.1.0.2 columnar flash cache format into the 12.2 Database In-Memory format.

  • CELLMEMORY and CELLMEMORY MEMCOMPRESS FOR CAPACITY: Allows a table to be cached in the default Oracle Database 12.2 In-Memory format. You only need to use this clause to undo a previously specified NO CELLMEMORY statement or to change the specified compression level.

  • CELLMEMORY MEMCOMPRESS FOR QUERY: Indicates that the In-Memory column store data should be compressed less than with MEMCOMPRESS FOR CAPACITY. This option can increase query performance but requires almost twice as much flash space.

  • LOW and HIGH: Not implemented at this time.
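For example, the following sketch (using a hypothetical table t1) disables the In-Memory format rewrite for a table and later re-enables it at a specific compression level:

```sql
-- Make the table ineligible for the In-Memory format rewrite
SQL> ALTER TABLE t1 NO CELLMEMORY;

-- Undo the NO CELLMEMORY setting and request less compression
SQL> ALTER TABLE t1 CELLMEMORY MEMCOMPRESS FOR QUERY;
```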

Example 3-5 Using CELLMEMORY and INMEMORY Options on the Same Table

You can use both INMEMORY and CELLMEMORY on the same table. For example:

CREATE TABLE t (c1 NUMBER) INMEMORY CELLMEMORY MEMCOMPRESS FOR QUERY;

Specifying both options is useful when you have a low-priority table that is unlikely to be loaded in memory. By also specifying CELLMEMORY, you still get columnar performance.

3.2.3 Optimizing Exadata Smart Flash Cache for Analytical Workloads

By default, Exadata Smart Flash Cache favors caching of frequently accessed data and limits caching of data with low potential for reuse. Specifically, Exadata Smart Flash Cache limits the amount of cache space occupied by large I/Os with low potential for reuse, which includes I/Os associated with temporary segments. Furthermore, if thrashing (caching of data recently removed from the cache) is detected, additional cache space is created by removing data associated with large I/Os (including temporary segment data).

This approach naturally suits online transaction processing (OLTP) workloads. However, analytical workloads typically under-utilize the resources of Exadata Smart Flash Cache when processing large join and sort operations with large temporary segments.

To better utilize Exadata Smart Flash Cache for analytical workloads, Exadata allows administrators to specify the main workload type for each database and pluggable database (PDB) by setting the main_workload_type database instance parameter.

For example:

  • In a container database (CDB) or non-CDB:

    SQL> alter system set main_workload_type = [ OLTP | ANALYTICS ];

  • In a PDB:

    SQL> alter session set container = <pdb_name>;
    SQL> alter system set main_workload_type = [ OLTP | ANALYTICS ];

The available options are:

  • OLTP: Exadata Smart Flash Cache favors caching of frequently accessed data and limits caching of data with low potential for reuse.

  • ANALYTICS: Exadata Smart Flash Cache relaxes the limits on caching data with low potential for reuse, enabling temporary segments to occupy most of the cache.

Note the following details regarding this capability:

  • To enable this capability, the Exadata I/O resource management plan (IORMPLAN) must contain the associated database as part of the interdatabase plan (DBPLAN), and the plan directive must include a flash cache quota specified using the flashCacheSize attribute. For example, on a system using the following IORMPLAN, the main_workload_type setting only takes effect in the sales database and any PDB it contains:

    CellCLI> ALTER IORMPLAN dbplan=((name=sales, share=8, flashCacheSize=20G), -
                                    (name=finance, share=8, flashCacheLimit=10G, flashCacheMin=2G), -     
                                    (name=dev, share=2, flashCacheLimit=4G, flashCacheMin=1G), -
                                    (name=test, share=1))
  • The main_workload_type setting can differ across PDBs or between the CDB and any PDB. Any PDB-level setting overrides the CDB setting.

  • If main_workload_type is not set in a PDB, the corresponding CDB setting is used. If main_workload_type is not set in a CDB, the default setting is OLTP.

  • Minimum system requirements:

    • Exadata System Software release 22.1.12.0.0 or 23.1.3.0.0.

    • Oracle Database 19c with patch 35017301.
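To confirm the effective setting in the current container, you can query the parameter. This is a sketch; the output depends on your configuration:

```sql
SQL> SELECT value FROM v$parameter WHERE name = 'main_workload_type';
```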

3.2.4 Manually Populating Exadata Smart Flash Cache

By default, Exadata Smart Flash Cache is automatically populated when reading data from disk, which is optimal for most application scenarios. However, if your application is sensitive to the increased I/O latency of the initial disk reads, you can manually populate Exadata Smart Flash Cache. This topic describes approaches and considerations for manually populating Exadata Smart Flash Cache.

Scanning Data

If Exadata Smart Flash Cache is not full, you can read data into the cache by simply accessing the desired data, usually through a full table scan.
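For example, a simple aggregate query that scans the whole table touches every row and can be used to warm the cache. In this sketch the table name is hypothetical; the FULL hint helps ensure a table scan rather than an index access path:

```sql
SQL> SELECT /*+ FULL(t) */ COUNT(*) FROM my_table t;
```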

To determine whether Exadata Smart Flash Cache has available space, compare the FC_BY_ALLOCATED metric with the flash cache effectiveCacheSize attribute. For example, the following output shows that each cell contains approximately 4 TB of available space.

# FC_BY_ALLOCATED shows that each cell contains approximately 19 TB of data allocated
$ dcli -g cell_group cellcli -e list metriccurrent where name\=\"FC_BY_ALLOCATED\"
db01celadm01: FC_BY_ALLOCATED        FLASHCACHE      19,313,940 MB
db01celadm02: FC_BY_ALLOCATED        FLASHCACHE      19,311,784 MB
db01celadm03: FC_BY_ALLOCATED        FLASHCACHE      19,311,688 MB

# effectiveCacheSize shows that each cell contains approximately 23 TB of flash cache space
$ dcli -g cell_group cellcli -e list flashcache attributes effectiveCacheSize detail
db01celadm01: effectiveCacheSize:     23.28692626953125T
db01celadm02: effectiveCacheSize:     23.28692626953125T
db01celadm03: effectiveCacheSize:     23.28692626953125T

You can use the following query to confirm whether the desired data is populated in Exadata Smart Flash Cache.

select name,value
  from v$statname n,
       v$mystat s
  where s.statistic# = n.statistic#
    and name in ('physical read IO requests','physical read requests optimized')
  order by name;

For example:

-- get session statistics
SQL> select name,value
  2    from v$statname n,
  3         v$mystat s
  4    where s.statistic# = n.statistic#
  5      and name in ('physical read IO requests','physical read requests optimized')
  6    order by name;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read IO requests                                                 5
physical read requests optimized                                         11

-- run the desired workload...

-- get session statistics again
SQL> select name,value
  2    from v$statname n,
  3         v$mystat s
  4    where s.statistic# = n.statistic#
  5      and name in ('physical read IO requests','physical read requests optimized')
  6    order by name;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read IO requests                                            45,193
physical read requests optimized                                     23,140

In the previous example, the workload performed 45188 physical read requests (45193 - 5), of which 23129 (23140 - 11) are optimized. In this case, 22059 (45188 - 23129) unoptimized (disk) reads are performed.

When you repeat the workload, all of the reads should be optimized (physical read IO requests = physical read requests optimized), which indicates that all of the desired data is populated in Exadata Smart Flash Cache.

Using flashCacheMin

If Exadata Smart Flash Cache is fully populated in a consolidated environment, you can use the I/O Resource Management (IORM) flashCacheMin setting to free up space so that the desired data can be read into the cache. In this case, the flashCacheMin setting must be larger than the current space occupied by the database with sufficient space to add the desired data.

Note:

When using a container database (CDB), a CDB resource plan is required to govern the resource allocation to each pluggable database (PDB), even if there is only one PDB in the CDB.

To determine the current Exadata Smart Flash Cache space allocation for each database, examine the DB_FC_BY_ALLOCATED metric. For example:

$ dcli -g cell_group cellcli -e list metriccurrent where name\=\"DB_FC_BY_ALLOCATED\"
db01celadm01: DB_FC_BY_ALLOCATED     ASM                     0.000 MB
db01celadm01: DB_FC_BY_ALLOCATED     DBCDB1              1,694,241 MB
db01celadm01: DB_FC_BY_ALLOCATED     DBCDB2              4,851,611 MB
db01celadm01: DB_FC_BY_ALLOCATED     DBCDB3              4,638,129 MB
db01celadm01: DB_FC_BY_ALLOCATED     DBCDB4              2,157,755 MB
db01celadm01: DB_FC_BY_ALLOCATED     DBCDB5              9,509,356 MB
db01celadm01: DB_FC_BY_ALLOCATED     _OTHER_DATABASE_      365,790 MB
db01celadm02: DB_FC_BY_ALLOCATED     ASM                     0.000 MB
db01celadm02: DB_FC_BY_ALLOCATED     DBCDB1              1,629,001 MB
db01celadm02: DB_FC_BY_ALLOCATED     DBCDB2              4,761,316 MB
db01celadm02: DB_FC_BY_ALLOCATED     DBCDB3              4,495,902 MB
db01celadm02: DB_FC_BY_ALLOCATED     DBCDB4              2,106,805 MB
db01celadm02: DB_FC_BY_ALLOCATED     DBCDB5              9,848,567 MB
db01celadm02: DB_FC_BY_ALLOCATED     _OTHER_DATABASE_      377,023 MB
db01celadm03: DB_FC_BY_ALLOCATED     ASM                     0.000 MB
db01celadm03: DB_FC_BY_ALLOCATED     DBCDB1              1,664,919 MB
db01celadm03: DB_FC_BY_ALLOCATED     DBCDB2              4,872,123 MB
db01celadm03: DB_FC_BY_ALLOCATED     DBCDB3              4,459,631 MB
db01celadm03: DB_FC_BY_ALLOCATED     DBCDB4              2,096,412 MB
db01celadm03: DB_FC_BY_ALLOCATED     DBCDB5              9,750,586 MB
db01celadm03: DB_FC_BY_ALLOCATED     _OTHER_DATABASE_      315,181 MB

In the previous example, DBCDB1 consumes approximately 1.6 TB of cache space on each cell. To create a minimal IORM plan that increases the DBCDB1 allocation to 2 TB on each cell you could use the following command:

$ dcli -g cell_group cellcli -e alter iormplan dbplan=\(\(name=DBCDB1, flashCacheMin=2T\)\)

If Exadata Smart Flash Cache is fully populated in a consolidated environment, then using the IORM flashCacheMin setting to increase the allocation for one database effectively takes space from all of the others. In such cases, the cache space is not transferred immediately, and it may take more than one scan of the desired data to bring it into the cache.

Using CELL_FLASH_CACHE KEEP

In addition to other approaches, you can set the CELL_FLASH_CACHE segment storage option to KEEP to elevate the segment priority and keep the segment data in Exadata Smart Flash Cache.

Using this setting alone does not populate the cache, which still occurs when the segment is read. But, it increases the likelihood that the data remains in the cache. Also, this approach occupies cache space even when the data is not being used.

Identifying Objects to Populate Manually

You can use the Segments by UnOptimized Reads section of the AWR report to identify objects that are not being cached.

This information is also available by comparing physical reads with optimized physical reads in V$SEGMENT_STATISTICS and V$SEGSTAT.
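For example, the following query (a sketch) compares the two statistics side by side to highlight segments with the most unoptimized reads:

```sql
SELECT object_name, statistic_name, value
  FROM v$segment_statistics
 WHERE statistic_name IN ('physical reads', 'optimized physical reads')
   AND value > 0
 ORDER BY object_name, statistic_name;
```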

Other Considerations

  • Small segments may not qualify for direct path reads and end up populating the database buffer cache when manual population is performed. This can be avoided by using a parallel query to perform the scan.

  • You need to perform multiple scans to populate the cache with data from tables and their associated indexes. To populate the cache with index blocks, use an INDEX FAST FULL SCAN.
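The considerations above can be sketched as follows, assuming a hypothetical table t with index t_idx:

```sql
-- Parallel full scan avoids populating the database buffer cache
-- for small segments
SQL> SELECT /*+ FULL(t) PARALLEL(t 8) */ COUNT(*) FROM t;

-- Separate scan to populate the cache with index blocks
SQL> SELECT /*+ INDEX_FFS(t t_idx) */ COUNT(c1) FROM t;
```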

3.3 Administering Exadata Hybrid Columnar Compression

Use these procedures to administer Oracle Database objects that use Exadata Hybrid Columnar Compression.

3.3.1 Determining If a Table Is Compressed

Query the *_TABLES or *_TAB_PARTITIONS data dictionary views to determine whether a table or partition is compressed.

  • Query the *_TABLES data dictionary views to determine table compression.

    In the *_TABLES data dictionary views, compressed tables have ENABLED in the COMPRESSION column.

    SQL> SELECT table_name, compression, compress_for FROM user_tables;
     
    TABLE_NAME       COMPRESSION   COMPRESS_FOR
    ---------------- ------------  -----------------
    T1               DISABLED
    T2               ENABLED       BASIC
    T3               ENABLED       OLTP
    T4               ENABLED       QUERY HIGH
    T5               ENABLED       ARCHIVE LOW

    For partitioned tables, the COMPRESSION column is NULL in the *_TABLES data dictionary views.
  • For partitioned tables, query the *_TAB_PARTITIONS data dictionary views.

    The COMPRESSION column of the *_TAB_PARTITIONS views indicates the table partitions that are compressed. The COMPRESS_FOR column indicates the compression method in use for the table or partition.

    SQL> SELECT table_name, partition_name, compression, compress_for
      FROM user_tab_partitions;
    
    TABLE_NAME  PARTITION_NAME   COMPRESSION   COMPRESS_FOR
    ----------- ---------------- -----------   ------------------------------
    SALES       Q4_2004          ENABLED       ARCHIVE HIGH
      ...
    SALES       Q3_2008          ENABLED       QUERY HIGH
    SALES       Q4_2008          ENABLED       QUERY HIGH
    SALES       Q1_2009          ENABLED       OLTP
    SALES       Q2_2009          ENABLED       OLTP

3.3.2 Determining Which Rows are Compressed

When Exadata Hybrid Columnar Compression tables are updated, the rows change to a lower level of compression.

For example, the compression level might change from COMP_FOR_QUERY_HIGH to COMP_FOR_OLTP or COMP_NOCOMPRESS.

By sampling the table rows, you can determine the percentage of rows that are no longer at the higher compression level.

  • Use the DBMS_COMPRESSION.GET_COMPRESSION_TYPE function to determine the compression level of a row:

    DBMS_COMPRESSION.GET_COMPRESSION_TYPE (
       ownname    IN    VARCHAR2, 
       tabname    IN    VARCHAR2, 
       row_id     IN    ROWID)
      RETURN NUMBER;
You can use ALTER TABLE or MOVE PARTITION to set the rows to a higher compression level. For example, if 10 percent of the rows are no longer at the highest compression level, then you might alter or move the rows to a higher compression level.
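For example, the following sketch samples one row of a hypothetical SALES table owned by SCOTT and prints the compression type code, which can be decoded using the constants in the DBMS_COMPRESSION package:

```sql
SET SERVEROUTPUT ON
DECLARE
  comp_type NUMBER;
BEGIN
  SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'SALES', rowid)
    INTO comp_type
    FROM scott.sales
   WHERE ROWNUM = 1;
  DBMS_OUTPUT.PUT_LINE('Compression type code: ' || comp_type);
END;
/
```

For a meaningful percentage, repeat the sampling across many rows rather than relying on a single row.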

3.3.3 Changing Compression Level

You can change the compression level for a partition, table, or tablespace.

The following example describes a scenario when you might want to change the compression level.

A company uses warehouse compression for its sales data, but sales data older than six months is rarely accessed. If the sales data is stored in a table that is partitioned based on the age of the data, then the compression level for the older data can be changed to archive compression to free up disk space.

  • To change the compression level of a partitioned table you can use the DBMS_REDEFINITION package.

    This package performs online redefinition of a table by creating a temporary copy of the table that holds the table data while it is being redefined. The table being redefined remains available for queries and DML statements during the redefinition. The amount of free space required for online table redefinition depends on the relative compression levels of the existing table and the new table. Ensure that you have enough disk space on your system before using the DBMS_REDEFINITION package.

  • To change the compression level for a single partition of a partitioned table, you can use the ALTER TABLE ... MODIFY PARTITION command.

  • To change the compression level of a non-partitioned table use the ALTER TABLE ... MOVE command with the COMPRESS FOR clause.

    To perform DML statements against the table while the ALTER TABLE ... MOVE command is running, you must also add the ONLINE clause.

  • To change the compression level for a tablespace, use the ALTER TABLESPACE command.

    This defines the default for new objects created in the tablespace. Existing objects are not changed or moved.

  • You can use Automatic Data Optimization (ADO) to create policies that automatically adjust the compression level.
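The commands described above can be sketched as follows, using hypothetical object names. Note that a MOVE rewrites existing rows at the new compression level, while the tablespace default applies only to new objects:

```sql
-- Recompress a single partition at a new level
SQL> ALTER TABLE sales MOVE PARTITION q1_2004 COMPRESS FOR ARCHIVE LOW;

-- Move a non-partitioned table to a new level, allowing DML during the move
SQL> ALTER TABLE sales_history MOVE ONLINE COMPRESS FOR ARCHIVE LOW;

-- Set the default compression for new objects in a tablespace
SQL> ALTER TABLESPACE tbs_archive DEFAULT COMPRESS FOR ARCHIVE LOW;
```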

3.3.4 Importing and Exporting Exadata Hybrid Columnar Compression Tables

You can use the impdp and expdp commands to import and export Exadata Hybrid Columnar Compression tables.

Exadata Hybrid Columnar Compression tables can be imported using the impdp command of the Data Pump Import utility. By default, the impdp command preserves the table properties, and the imported table is an Exadata Hybrid Columnar Compression table. The tables can also be exported using the expdp command.

On tablespaces not supporting Exadata Hybrid Columnar Compression, the impdp command fails with the following error:

ORA-64307: Exadata Hybrid Columnar Compression is not supported for tablespaces on this storage type

You can import the Exadata Hybrid Columnar Compression table as an uncompressed table by using the TRANSFORM=SEGMENT_ATTRIBUTES:n option of the impdp command.

An uncompressed or OLTP-compressed table can be converted to Exadata Hybrid Columnar Compression format during import. To convert a non-Exadata Hybrid Columnar Compression table to an Exadata Hybrid Columnar Compression table, do the following:

  1. Specify default compression for the tablespace using the ALTER TABLESPACE ... DEFAULT COMPRESS command.
  2. Override the SEGMENT_ATTRIBUTES option of the imported table during import.
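The two steps can be sketched as follows; the tablespace, directory, dump file, and table names are hypothetical:

```shell
# Step 1 (in SQL*Plus): set default compression on the target tablespace
#   SQL> ALTER TABLESPACE tbs_ehcc DEFAULT COMPRESS FOR QUERY HIGH;

# Step 2: import without the stored segment attributes so the table
# inherits the tablespace default compression
impdp system DIRECTORY=dp_dir DUMPFILE=sales.dmp TABLES=sales \
      TRANSFORM=SEGMENT_ATTRIBUTES:n
```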

3.3.5 Restoring an Exadata Hybrid Columnar Compression Table

A backup of a compressed table can be restored to a system that supports Exadata Hybrid Columnar Compression, or to a system that does not support Exadata Hybrid Columnar Compression.

  • When restoring a table with Exadata Hybrid Columnar Compression to a system that supports Exadata Hybrid Columnar Compression, restore the file using Oracle Recovery Manager (RMAN).

  • When an Exadata Hybrid Columnar Compression table is restored to a system that does not support Exadata Hybrid Columnar Compression, you must convert the table from Exadata Hybrid Columnar Compression to an uncompressed format.

Use the following steps to convert an Exadata Hybrid Columnar Compression table to an uncompressed format.

  1. Ensure that there is sufficient space to store the data in uncompressed format.
  2. Use RMAN to restore the Exadata Hybrid Columnar Compression tablespace.
  3. Alter the table compression from Exadata Hybrid Columnar Compression to NOCOMPRESS:

    For example:

    SQL> ALTER TABLE table_name MOVE ONLINE NOCOMPRESS;

    Alternatively, you can use the following command to move the data in parallel:

    SQL> ALTER TABLE table_name MOVE ONLINE NOCOMPRESS PARALLEL;

    If the table is partitioned, then alter the compression method for each partition separately. For example:

    SQL> ALTER TABLE table_name MOVE PARTITION partition_name NOCOMPRESS ONLINE;

After you convert the table to an uncompressed format, you can optionally use another form of compression, such as OLTP compression or Oracle Database In-Memory compression.

3.4 Administering Oracle Database Features on Exadata

Use these guidelines to administer the following Oracle Database features in conjunction with Oracle Exadata.

3.4.1 Using Indexes on Exadata

In the past, databases required indexes for good performance. However, Oracle Exadata can deliver superior scan rates without using indexes.

Review the application execution plans that use indexes to determine if they would run faster with Exadata Smart Scan Offload. To determine if a scan would be faster when there is no index, make the index invisible to the optimizer. An invisible index is maintained by DML operations, but it is not used by the optimizer.

To make an index invisible, use the following command and substitute the name of the index in place of index_name:

SQL> ALTER INDEX index_name INVISIBLE;
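After testing, you can either drop the index or restore it to use. A sketch, using a hypothetical index name:

```sql
-- Make the index invisible and compare query performance
SQL> ALTER INDEX sales_idx INVISIBLE;

-- If performance without the index is acceptable, drop it
SQL> DROP INDEX sales_idx;

-- Otherwise, restore the index to use
SQL> ALTER INDEX sales_idx VISIBLE;
```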

3.4.2 Using SQL Tuning Advisor on Exadata

SQL Tuning Advisor takes one or more SQL statements as input and uses the Automatic Tuning Optimizer to perform SQL tuning on the statements.

The output of SQL Tuning Advisor is in the form of advice or recommendations, along with a rationale for each recommendation and its expected benefit. SQL Tuning Advisor provides information about the following:

  • Missing and stale statistics

  • Better execution plans

  • Better access paths and objects

  • Better SQL statements
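As a sketch, a statement in the cursor cache can be tuned with the DBMS_SQLTUNE package; the sql_id value below is hypothetical:

```sql
-- Create and execute a tuning task for a cached statement
VARIABLE tname VARCHAR2(128)
BEGIN
  :tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'fdc7zaycp1crq');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => :tname);
END;
/

-- Display the recommendations and rationale
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(:tname) FROM dual;
```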

3.4.3 Using SQL Plan Management on Exadata

SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by recording and evaluating the execution plans of SQL statements over time.

With SQL plan management, you build SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve performance of corresponding SQL statements, regardless of changes occurring in the system, such as software upgrades or the deployment of new application modules.

You can also use SQL plan management to gracefully adapt to changes such as new optimizer statistics or indexes. You can verify and accept only plan changes that improve performance. SQL plan evolution is the process by which the optimizer verifies new plans and adds them to an existing SQL plan baseline.

Both SQL profiles and SQL plan baselines help improve the performance of SQL statements by ensuring that the optimizer uses only optimal plans. Typically, you create SQL plan baselines before significant performance problems occur. SQL plan baselines prevent the optimizer from using suboptimal plans in the future. The database creates SQL profiles when you invoke SQL Tuning Advisor, which you typically do only after a SQL statement has shown high-load symptoms. SQL profiles are primarily useful for providing ongoing resolution of optimizer mistakes that have led to suboptimal plans.

The DBMS_SPM package supports the SQL plan management feature by providing an interface for you to perform controlled manipulation of plan history and SQL plan baselines maintained for various SQL statements. The DBMS_SPM package provides procedures and functions for plan evolution.

Starting with Oracle Exadata System Software release 19.1, there is a new parameter AUTO_SPM_EVOLVE_TASK for DBMS_SPM.CONFIGURE, which can be used only with Exadata Database Machine on-premises and Oracle Cloud deployments. The AUTO_SPM_EVOLVE_TASK parameter can have one of three values:

  • ON: The feature is enabled. The SPM evolve advisor creates a task to periodically manage the SQL plan history. The task determines whether there are alternatives and if SQL execution plans should be evolved and accepted. The task runs outside the normal maintenance window in a similar manner to high-frequency statistics gathering.
  • OFF: The feature is disabled. This is the default value.
  • AUTO: Oracle Database decides when to use the feature.
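For example, the following sketch enables the automatic SPM evolve advisor task and then confirms the setting:

```sql
SQL> exec DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'ON');

SQL> SELECT parameter_name, parameter_value
       FROM dba_sql_management_config
      WHERE parameter_name = 'AUTO_SPM_EVOLVE_TASK';
```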

3.4.4 Using Exadata System Statistics

System statistics measure the performance of CPU and storage so that the Oracle Database optimizer can use these inputs when evaluating SQL execution plans.

During first instance startup, Oracle Database automatically gathers default system statistics, which are also known as noworkload statistics. However, you can also gather Exadata-specific system statistics. The Exadata system statistics ensure that the SQL optimizer factors in the performance characteristics of Oracle Exadata Database Machine.

Use the following SQL command to see if Exadata-specific statistics are in use.

SQL> SELECT pname, PVAL1 FROM aux_stats$ WHERE pname='MBRC';

If PVAL1 is NULL, then the default system statistics are in use.

The following command gathers Exadata system statistics for use by the database optimizer:

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');

If required, you can revert back to the default system statistics by using the DBMS_STATS.DELETE_SYSTEM_STATS procedure and then restarting the database.
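A sketch of reverting to the default system statistics:

```sql
SQL> exec DBMS_STATS.DELETE_SYSTEM_STATS;

-- Then restart the database; default (noworkload) statistics
-- are gathered automatically at instance startup.
```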

For new applications or deployments, you should perform pre-production system testing to compare the default system performance with the performance using Exadata system statistics.

For existing applications or deployments, Exadata system statistics should be generated and tested on an equivalent test system before they are first used on a production system. This approach mitigates the risk of an unexpected performance regression on a production system.

If an equivalent test system is unavailable, you can perform more realistic testing by copying the production Exadata system statistics to your test system. The following outlines the procedure:

  1. On the production system, create a statistics table.

    For example:

    SQL> exec DBMS_STATS.CREATE_STAT_TABLE(stattab => 'exadata_stats');
  2. On the production system, gather the Exadata system statistics into the newly created table.

    For example:

    SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'EXADATA', stattab => 'exadata_stats');

    Statistics stored in a user-defined statistics table are not used by the system. So, gathering these statistics does not impact the production system performance.

  3. Copy the statistics table from the production system to the test system.

    You can copy the table by various means, including data pump export and import.

  4. On the test system, import the statistics table into the Oracle data dictionary.

    For example:

    SQL> exec DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'exadata_stats');


3.4.5 Using the Same DB_UNIQUE_NAME for Multiple Database Instances

You can create database instances that use the same DB_UNIQUE_NAME value if the databases are associated with separate Oracle ASM clusters.

Starting with Oracle Exadata System Software release 19.1.0, multiple Oracle databases sharing the same Oracle ASM storage can use the same DB_UNIQUE_NAME.

WARNING:

If you configure databases to have the same DB_UNIQUE_NAME, then those databases cannot be backed up to Oracle Zero Data Loss Recovery Appliance.

To use the same DB_UNIQUE_NAME across multiple Oracle databases you must:

  • Use separate Oracle ASM clusters for the databases using the same DB_UNIQUE_NAME value. The Oracle ASM cluster name is used to qualify the DB_UNIQUE_NAME in I/O Resource Management (IORM), Exadata Smart Flash Cache, and Exadata Smart Scan Offload operations.

  • Implement ASM-scoped security for each Oracle ASM cluster that contains the databases using the same DB_UNIQUE_NAME value.