14 New Features for Oracle Exadata System Software Release 12.x

Several new features were introduced for the various versions of Oracle Exadata System Software Release 12.x.

14.1 What's New in Oracle Exadata Database Machine 12.2.1.1.0

The following features are new for Oracle Exadata Database Machine 12.2.1.1.0:

14.1.1 In-Memory Columnar Caching on Storage Servers

Oracle Exadata System Software release 12.2.1.1.0 can use fast vector-processing in-memory algorithms on data in the storage flash cache. This feature is available if you have licensed the Oracle Database In-Memory (Database In-Memory) option.

The Database In-Memory format cache offers a significant boost to the amount of data held in Database In-Memory format, and to Smart Scan performance, over and above that offered by the pure columnar Exadata Hybrid Columnar Compression format.

Oracle Exadata System Software release 12.1.2.1.0 added a columnar flash cache format which automatically stored Exadata Hybrid Columnar Compression data in pure columnar Exadata Hybrid Columnar Compression format in the flash cache. This release extends support for Exadata Hybrid Columnar Compression data by enabling the cached data to be rewritten into Database In-Memory format, which allows ultra-fast single instruction, multiple data (SIMD) predicates to be used in Smart Scan. With this format, most in-memory performance enhancements are supported in Smart Scan, including joins and aggregation.

Data from normal (unencrypted) as well as encrypted tablespaces can be cached in the in-memory columnar cache format.

Just as with Oracle Database In-Memory, the new Database In-Memory format is created by a background process so that it does not interfere with the performance of queries.

This feature is enabled by default when the INMEMORY_SIZE database initialization parameter is configured and the user does not need to do anything to get this enhancement. See INMEMORY_CLAUSE_DEFAULT in Oracle Database Reference for information about INMEMORY_SIZE. If INMEMORY_SIZE is not configured, then the Exadata Hybrid Columnar Compression format columnar cache is still used exactly as in 12.1.2.1.0.

If you need to disable this feature, you can use a new DDL keyword CELLMEMORY with the ALTER TABLE command. See Enabling or Disabling In-Memory Columnar Caching on Storage Servers in Oracle Exadata System Software User's Guide.
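For example, assuming the Database In-Memory option is licensed, the CELLMEMORY clause can be used as follows (the table name sales is illustrative):

    SQL> ALTER TABLE sales NO CELLMEMORY;

    SQL> ALTER TABLE sales CELLMEMORY MEMCOMPRESS FOR QUERY LOW;

The first statement disables in-memory columnar caching on the storage servers for the table; the second re-enables it and requests the query-optimized in-memory compression level.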

Minimum requirements:

  • Oracle Database 12c release 1 (12.1.0.2) with minimum software version 12.1.0.2.161018DBBP, or

  • Oracle Database 12c release 2 (12.2.0.1)

  • Patch for bug 24521608

14.1.2 Columnar Flash Cache for Encrypted Tablespace

In Oracle Exadata System Software release 12.2.1.1.0, columnar flash cache support has been extended to encrypted tablespaces. If you have licensed the Oracle Database In-Memory (Database In-Memory) option, encrypted tablespace data is stored in in-memory columnar format in the storage server flash cache. If you have not licensed the option, encrypted tablespace data is stored in pure columnar Exadata Hybrid Columnar Compression format in the storage server flash cache.

Minimum requirements:

  • Oracle Database 12c release 1 (12.1.0.2) with minimum software version 12.1.0.2.161018DBBP, or

  • Oracle Database 12c release 2 (12.2.0.1)

  • Patch for bug 24521608

14.1.3 Set Membership in Storage Indexes

In Oracle Exadata System Software release 12.2.1.1.0, when data has been stored using the in-memory format columnar cache, Oracle Exadata Database Machine stores these columns compressed using dictionary encoding. For columns with fewer than 200 distinct values, the storage index creates a very compact in-memory representation of the dictionary and uses this compact representation to filter disk reads based on equality predicates. This feature is called set membership. A more limited filtering ability extends up to 400 distinct values.

For example, suppose a region of disk holds a list of customers in the United States and Canada. When you run a query looking for customers in Mexico, Oracle Exadata Storage Server can use the new set membership capability to improve the performance of the query by filtering out disk regions that do not contain customers from Mexico. In Oracle Exadata System Software releases earlier than 12.2.1.1.0, which do not have the set membership capability, a regular storage index would be unable to filter those disk regions.

Minimum requirements:

  • Oracle Database 12c release 1 (12.1.0.2) with minimum software version 12.1.0.2.161018DBBP, or

  • Oracle Database 12c release 2 (12.2.0.1)

  • Patch for bug 24521608

14.1.4 Storage Index to Store Column Information for More Than Eight Columns

In Oracle Exadata System Software releases earlier than 12.2.1.1.0, storage indexes can hold column information for up to eight columns. In Oracle Exadata System Software release 12.2.1.1.0, storage indexes have been enhanced to store column information for up to 24 columns.

Space to store column information for eight columns is guaranteed. For more than eight columns, space is shared between the column set membership summary and the column minimum/maximum summary. The type of workload determines whether the set membership summary is stored in the storage index.

See Set Membership in Storage Indexes for more information.

14.1.5 5x Faster Oracle Exadata System Software Updates

Updating Oracle Exadata System Software now takes even less time. The Oracle Exadata System Software update process is now up to 2 times faster compared to 12.1.2.3.0, and up to 5 times faster compared to releases earlier than 12.1.2.3.0. A faster update time reduces the cost and effort required to update the software.

14.1.6 Faster Performance for Large Analytic Queries and Large Loads

Temp writes and temp reads are used when large joins or aggregation operations do not fit in memory and must be spilled to storage. In Oracle Exadata System Software releases earlier than 12.2.1.1.0, temp writes were not cached in flash cache; both temp writes and subsequent temp reads went to and from hard disk only. In Oracle Exadata System Software release 12.2.1.1.0, temp writes are sent to flash cache so that subsequent temp reads can also be served from flash cache. This significantly speeds up queries that spill to temp if they are temp I/O bound; for certain queries, performance can be up to four times faster. The effect is comparable to placing the temporary tablespace entirely in flash. Write-back flash cache must be enabled for this feature to work.
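Because this feature requires write-back flash cache, you can confirm the flash cache mode on each storage server before expecting temp I/O to be cached; for example:

    CellCLI> LIST CELL ATTRIBUTES flashCacheMode
             WriteBack

If the attribute shows WriteThrough, temp writes continue to go to hard disk as in earlier releases.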

Prior to Oracle Exadata System Software release 12.2.1.1.0, there was a size threshold for writes to the flash cache: most writes over 128 KB were routed straight to disk, because such writes were not expected to be read again soon. For example, direct load writes, flashback database log writes, archived log writes, and incremental backup writes would bypass the flash cache. Starting with Oracle Exadata System Software release 12.2.1.1.0, the flash cache algorithms have been enhanced to redirect such large writes into the flash cache, provided that they do not disrupt higher priority OLTP or scan workloads. These writes are later written back to disk when the disks are less busy. This feature allows Oracle Exadata Storage Server to use spare flash capacity and I/O bandwidth to provide better overall performance.

Note that this feature is supported on all Oracle Exadata Database Machine hardware except for V2 and X2 storage servers. On X3 and X4 storage servers, flash caching of temp writes and large writes is not supported when flash compression is enabled.

New statistics and report sections related to this feature were added to Automatic Workload Repository (AWR) reports in Oracle Database 18c (18.1.0), also available in the 12.1.0.2.0 July 2017 DBBP and the 12.2.0.1.0 Oct 2017 RUR.

Minimum requirements:

  • Oracle Exadata Database Machine X3-2 or later

  • Oracle Exadata System Software 12.2.1.1.0

  • Oracle Database (one of the following):

    • Oracle Database 11g release 2 (11.2) with the fix for bug 24944847 applied

    • Oracle Database 12c release 1 (12.1) — 12.1.0.2.0 July 2017 DBBP

    • Oracle Database 12c release 2 (12.2.0.1) — 12.2.0.1.0 Oct 2017 RUR

    • Oracle Database 18c (18.1.0)

14.1.7 Secure Eraser

Oracle Exadata System Software releases 12.2.1.1.0 and later provide a secure erasure solution, called Secure Eraser, for every component within Oracle Exadata Database Machine. This comprehensive solution covers all Oracle Exadata Database Machines V2 and later, including both 2-socket and 8-socket servers.

In earlier versions of Oracle Exadata Database Machine, you could securely erase user data through CellCLI commands like DROP CELL ERASE, DROP CELLDISK ERASE, or DROP GRIDDISK ERASE. However, these DROP commands only cover user data on hard drives and flash devices. Secure Eraser sanitizes all content, not only user data but also operating system, Oracle Exadata System Software, and user configurations. In addition, Secure Eraser covers a wider range of hardware components including hard drives, flash devices, internal USB, and ILOM.

The Secure Eraser securely erases all data on both database servers and storage servers, and resets InfiniBand switches, Ethernet switches, and power distribution units back to factory default. You can use this feature to decommission or repurpose an Oracle Exadata machine. The Secure Eraser completely erases all traces of data and metadata on every component of the machine.

For details on the Secure Eraser utility, see Oracle Exadata Database Machine Security Guide.

14.1.8 Cell-to-Cell Offload Support for Oracle ASM-Scoped Security

To perform cell-to-cell offload operations efficiently, storage servers need to access other storage servers directly, instead of through a database server.

If you have configured Oracle ASM-scoped security in your Exadata environment, you need to set up cell keys so that storage servers can authenticate themselves to other storage servers and communicate with each other directly. This applies to Oracle ASM resync, resilver, rebuild, and rebalance operations, and to database high-throughput write operations.
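A sketch of the key setup follows; the key value shown is illustrative, and you should consult the Oracle Exadata System Software User's Guide for the complete procedure:

    CellCLI> CREATE KEY
             fa292e11b31b210c4b7a24c5f1bb4d32

    CellCLI> ASSIGN KEY FOR CELL 'fa292e11b31b210c4b7a24c5f1bb4d32'

CREATE KEY generates a random hexadecimal key, and ASSIGN KEY FOR CELL registers that key so the storage server can authenticate itself to other storage servers.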

14.1.9 Adding an Additional Network Card to Oracle Exadata Database Machine X6-2 Database Servers

Oracle Exadata Database Machine X6-2 database servers offer a highly available copper 10 Gbps network on the motherboard, and an optical 10 Gbps network through a PCIe card in slot 2.

Starting with Oracle Exadata System Software release 12.2.1.1.0, you can add an additional Ethernet card if you require additional connectivity. The additional card can provide either dual port 10 GbE optical connectivity (part number X1109A-Z) or dual port 10 GbE copper connectivity (part number 7100488). You can install this part in PCIe slot 1 on the Oracle Exadata Database Machine X6-2 database server.

After you install the network card and connect it to the network, Oracle Exadata System Software release 12.2.1.1.0 automatically recognizes the new card and configures the two ports as eth6 and eth7 interfaces on the database server. You can use these additional ports for providing an additional client network, or for creating a separate backup or disaster recovery network. On a database server that runs virtual machines, you could use this network card to isolate traffic from two virtual machines.

14.1.10 Automatic Diagpack Upload for Oracle ASR

In Oracle Exadata System Software release 12.2.1.1.0, Management Server (MS) communicates with Oracle ASR Manager to automatically upload a diagnostic package containing information relevant to the Oracle ASR. In earlier releases, you had to manually upload additional diagnostic information after an automatic service request (SR) was opened. By automating this step, this feature significantly reduces the turnaround time of Oracle ASRs.

This feature adds two new attributes to the AlertHistory object:

  • The new serviceRequestNumber attribute shows the associated service request number.

  • The new serviceRequestLink attribute shows the URL to the associated service request number.

Other feature highlights include:

  • The diagnostic package RESTful page (https://hostname/diagpack/download?name=diagpackname) has a new column showing a link to the corresponding service request.

  • Oracle ASR alert emails include SR links.

To enable Automatic Diagpack Upload for Oracle ASR, you must enable http_receiver in the Oracle ASR Manager:

  • To check if http_receiver is enabled, run the following command from Oracle ASR Manager:

    asr show_http_receiver
  • To enable the http_receiver, use the following command, where port is the port the http_receiver listens on.

    asr enable_http_receiver -p port

    Note:

    The port specified here has to be the same as the asrmPort specified for the subscriber on the database server or on the storage server. The following commands show how to verify the asrmPort on a database server and storage server.

    DBMCLI> LIST DBSERVER ATTRIBUTES snmpSubscriber
             ((host=test-engsys-asr1.example.com,port=162,community=public,
               type=ASR,fromIP=10.242.0.55,asrmPort=16168))

    CellCLI> LIST CELL ATTRIBUTES snmpSubscriber
             ((host=test-engsys-asr1.example.com,port=162,community=public,
               type=ASR,asrmPort=16168))

If you do not want to automatically upload diagnostic data to a service request, you can run ALTER CELL diagPackUploadEnabled=FALSE to disable the automatic upload.

Minimum software required: Oracle ASR Manager Release 5.7

14.1.11 CREATE DIAGPACK and LIST DIAGPACK Commands Available for Oracle Exadata Database Servers

The diagnostic package feature, which is available for storage servers, is now available for database servers as well. Management Server (MS) on database servers automatically collects customized diagnostic packages that include relevant logs and traces upon generating a database server alert. This applies to all critical database server alerts. Timely collection of diagnostic information prevents rollover of critical logs.

MS on database servers sends the diagnostic package as an email attachment for every critical email alert. You can also create custom diagnostic packages on demand by providing the start time and duration (in hours) with the new CREATE DIAGPACK DBMCLI command.
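For example, to package diagnostics for a two-hour window starting at a given time (the timestamp is illustrative):

    DBMCLI> CREATE DIAGPACK packStartTime="2017_06_10T12_00_00", durationInHrs=2

You can then monitor the collected packages with the LIST DIAGPACK command.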

See CREATE DIAGPACK and LIST DIAGPACK in the Oracle Exadata Database Machine Maintenance Guide for details.

14.1.12 Rescue Plan

In Oracle Exadata System Software releases earlier than 12.2.1.1.0, after an Oracle Exadata Database Server or Oracle Exadata Storage Server rescue, you must re-run multiple commands to configure items such as I/O Resource Management (IORM) plans, thresholds, and storage server and database server notification settings.

In Oracle Exadata System Software release 12.2.1.1.0, there is a new attribute called rescuePlan for the cell and dbserver objects. You can use this attribute to get a list of commands, which you can store as a script. You can then run the script after a cell rescue to restore the settings.
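For example, on a storage server:

    CellCLI> LIST CELL ATTRIBUTES rescuePlan

The output is a sequence of ALTER commands (for IORM plans, thresholds, notification settings, and so on) that you can save to a script file and replay after the rescue.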

For details on the rescuePlan attribute, see Oracle Exadata Database Machine Maintenance Guide.

14.1.13 Support for IPv6 Oracle VM and Tagged VLANs

Oracle Exadata System Software release 12.2.1.1.0 supports IPv6 Oracle VM and tagged virtual LANs (VLANs) using Oracle Exadata Deployment Assistant (OEDA).

IPv6 VLANs are now supported on the management network; earlier releases did not support this.

See Oracle Exadata Database Machine Installation and Configuration Guide.

14.1.14 Management Server Can Remain Online During NTP, DNS, and ILOM Changes

If you are changing NTP, DNS, or ILOM parameters, the Management Server (MS) can remain online during the operation and does not need to be restarted.

14.1.15 New Charts in ExaWatcher

In Oracle Exadata System Software release 12.2.1.1.0, GetExaWatcherResults.sh generates HTML pages that contain charts for IO, CPU utilization, cell server statistics, and alert history. The IO and CPU utilization charts use data from iostat, while the cell server statistics use data from cellsrvstat. Alert history is retrieved for the specified timeframe.

For details, see ExaWatcher Charts in the Oracle Exadata Database Machine Maintenance Guide.

14.1.16 New Metrics for Redo Log Writes

New metrics are available to help analyze redo log write performance.

Previously, when Automatic Workload Repository (AWR) reported an issue with redo log write wait time for database servers, the storage cells often indicated no issue with redo log write performance. New metrics help to give a better overall picture. These metrics provide insight into the following concerns:

  • Is the I/O latency high, or is it some other factor (for example, the network)?

  • How many redo log writes bypassed Exadata Smart Flash Log?

  • What is the overall latency of redo log writes on each cell, taking into account all redo log writes, not just those handled by Exadata Smart Flash Log?

Oracle Exadata System Software release 12.2.1.1.0 introduces the following metrics related to redo log write requests:

  • FL_IO_TM_W: Cumulative redo log write latency. It includes latency for requests not handled by Exadata Smart Flash Log.

  • FL_IO_TM_W_RQ: Average redo log write latency. It includes write I/O latency only.

  • FL_RQ_TM_W: Cumulative redo log write request latency. It includes networking and other overhead.

    To get the latency overhead due to factors such as network and processing, you can use (FL_RQ_TM_W - FL_IO_TM_W).

  • FL_RQ_TM_W_RQ: Average redo log write request latency.

  • FL_RQ_W: Total number of redo log write requests. It includes requests not handled by Exadata Smart Flash Log.

    To get the number of redo log write requests not handled by Exadata Smart Flash Log, you can use (FL_RQ_W - FL_IO_W).
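You can display the current values of these metrics with a CellCLI query on each storage server; for example:

    CellCLI> LIST METRICCURRENT WHERE name LIKE 'FL_.*' DETAIL

Comparing FL_RQ_TM_W with FL_IO_TM_W, as described above, separates I/O latency from network and processing overhead.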

14.1.17 Quarantine Manager Support for Cell-to-Cell Rebalance and High Throughput Write Operations

Quarantine manager support is enabled for rebalance and high throughput writes in cell-to-cell offload operations. If Oracle Exadata System Software detects a crash during these operations, the offending operation is quarantined, and a less optimized path is used to continue the operation.

The quarantine types for the new quarantines are ASM_OFFLOAD_REBALANCE and HIGH_THROUGHPUT_WRITE.

See Quarantine Manager Support for Cell-to-Cell Offload Operations in the Oracle Exadata System Software User's Guide for details.

14.1.18 ExaCLI and REST API Enabled for Management Server

Both ExaCLI and REST API are enabled for Management Server (MS) on the database servers.

You can now perform remote execution of MS commands. You can access the interface using HTTPS in a web browser, or curl. See Oracle Exadata Database Machine Maintenance Guide for more information.

14.1.19 New Features in Oracle Grid Infrastructure 12.2.1.1.0

The following new features in Oracle Grid Infrastructure 12.2.1.1.0 affect Oracle Exadata Database Machine:

14.1.19.1 Oracle ASM Flex Disk Groups

An Oracle ASM flex disk group is a disk group type that supports Oracle ASM file groups.

An Oracle ASM file group describes a group of files that belong to a database, and enables storage management to be performed at the file group, or database, level. In general, a flex disk group enables users to manage storage at the granularity of the database, in addition to at the disk group level.

See Managing Flex Disk Groups in Oracle Automatic Storage Management Administrator's Guide.

14.1.19.2 Oracle Flex ASM

Oracle Flex ASM enables Oracle ASM instances to run on a separate physical server from the database servers.

If the Oracle ASM instance on a node in a standard Oracle ASM cluster fails, then all of the database instances on that node also fail. In an Oracle Flex ASM configuration, however, Oracle 12c database instances do not fail, because they can remotely access an Oracle ASM instance on another node.

With Oracle Flex ASM, you can consolidate all the storage requirements into a single set of disk groups. All these disk groups are mounted by and managed by a small set of Oracle ASM instances running in a single cluster. You can specify the number of Oracle ASM instances with a cardinality setting.

Oracle Flex ASM is enabled by default with Oracle Database 12c release 2 (12.2). Oracle Exadata Database Machine ships with cardinality set to ALL, which means an Oracle ASM instance is created on every available node. See Oracle Automatic Storage Management Administrator's Guide for details.
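For example, assuming a standard Oracle Grid Infrastructure 12.2 environment, you can inspect or change the cardinality with srvctl (the count value 3 is illustrative):

    $ srvctl status asm -detail
    $ srvctl modify asm -count 3
    $ srvctl modify asm -count ALL

Reducing the count below ALL means some nodes run database instances without a local Oracle ASM instance; those instances connect to a remote Oracle ASM instance on another node.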

14.1.19.3 Faster Redundancy Restoration After Storage Loss

Using Oracle Grid Infrastructure 12c Release 2 (12.2), redundancy restoration after storage loss takes less time than in previous releases.

A new REBUILD phase was introduced to the rebalance operation. The REBUILD phase restores redundancy first after storage failure, greatly reducing the risk window within which a secondary failure could occur. A subsequent BALANCE phase restores balance.

Oracle Grid Infrastructure release 12.1.0.2 with DBBP 12.1.0.2.170718 also includes the Oracle ASM REBUILD phase of rebalance.

Note:

In Oracle Grid Infrastructure 12c release 2 (12.2), rebuild is tracked in V$ASM_OPERATION via a separate pass (REBUILD). In Oracle Grid Infrastructure 12c release 1 (12.1), both rebuild and rebalance phases are tracked in the same pass (REBALANCE).
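For example, while a rebalance operation is running in 12.2, you can check which pass is active:

    SQL> SELECT group_number, operation, pass, state
         FROM V$ASM_OPERATION;

In 12.2 the PASS column reports values such as REBUILD and REBALANCE, so you can distinguish the redundancy-restoration phase from the balancing phase.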

14.1.19.4 Dynamic Power Change

You can adjust the value of the ASM_POWER_LIMIT parameter dynamically.

If the POWER clause is not specified in an ALTER DISKGROUP statement, or when rebalance is implicitly run by adding or dropping a disk, then the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can adjust the value of this parameter dynamically. The range of values for the POWER clause is the same as the range for the ASM_POWER_LIMIT initialization parameter.

The higher the power limit, the more quickly a rebalance operation can complete. Rebalancing takes longer with lower power values, but consumes fewer processing and I/O resources which are shared by other applications, such as the database.
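For example (the disk group name data and the power values are illustrative):

    SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 4;

    SQL> ALTER DISKGROUP data REBALANCE POWER 10;

The first statement changes the default power for subsequent rebalance operations dynamically; the second overrides it for a single operation.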

See Tuning Rebalance Operations in Oracle Automatic Storage Management Administrator's Guide.

14.1.19.5 Quorum Disk Support in Oracle Universal Installer

You can specify a quorum failure group during the installation of Oracle Grid Infrastructure.

On Oracle Exadata Storage Servers, quorum disk groups are automatically created during deployment. A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available.

The installer for Oracle Grid Infrastructure 12.2 was updated to allow you to specify quorum failure groups during installation instead of configuring the quorum failure group after installation using the Quorum Disk Manager utility.

See Identifying Storage Requirements for Oracle Automatic Storage Management in Oracle Grid Infrastructure Installation and Upgrade Guide for Linux.

14.1.20 New Features in Oracle Database 12c Release 2 (12.2.0.1)

The following new features in Oracle Database 12c release 2 (12.2.0.1) affect Oracle Exadata:

14.1.20.1 Database Server I/O Latency Capping

On very rare occasions there may be high I/O latency between a database server and a storage server due to network latency outliers, hardware problems on the storage servers, or some other system problem with the storage servers. Oracle ASM and Oracle Exadata Storage Server software automatically redirect read I/O operations to another storage server when the latency of the read I/O is much longer than expected. Any I/Os issued to the last valid mirror copy of the data are not redirected.

This feature works with all Exadata Storage Software releases. You do not have to perform any configuration to use this feature.

Minimum software required: Oracle Database and Oracle Grid Infrastructure 12c release 2 (12.2.0.1.0)

14.1.20.2 Exadata Smart Scan Offload for Compressed Index Scan

In Oracle Exadata Storage Server Software 12.1.2.3.0 and prior releases, smart scan offload supported normal uncompressed indexes and bitmap indexes.

In Oracle Exadata Storage Server Software 12.2.1.1.0, smart scan offload has been implemented for compressed indexes. Queries involving compressed index scan on Oracle Exadata can benefit from this feature.

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0

14.1.20.3 Exadata Smart Scan Offload Enhancements for In-Memory Aggregation (IMA)

Oracle Exadata Storage Server software supports offloading many SQL operators for predicate evaluation. The In-Memory Aggregation feature attempts to perform a "vector transform" optimization, which takes a star join SQL query with certain aggregation operators (for example, SUM, MIN, MAX, and COUNT) and rewrites it for more efficient processing. A vector transformation query is similar to a query that uses a bloom filter for joins, but is more efficient. When a vector transformed query is used with Oracle Exadata Storage Server release 12.1.2.1.0, the performance of joins in the query is enhanced by the ability to offload filtration for rows used for aggregation. You will see "KEY VECTOR USE" in the query plan when this optimization takes effect.

In Oracle Exadata Storage Server software release 12.2.1.1.0, vector transformed queries benefit from more efficient processing due to the application of group-by columns (key vectors) to the Exadata Storage Index.

Additionally, vector transformed queries that scan data in in-memory columnar format on the storage server can offload processing of aggregation work. These optimizations are automatic and do not depend on user settings.

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0

14.1.20.4 Exadata Smart Scan Offload Enhancements for XML

When XML data is stored using a SecureFiles LOB of less than 4 KB, the evaluation in a SQL WHERE clause of Oracle SQL condition XMLExists or Oracle SQL function XMLCast applied to the return value of Oracle SQL function XMLQuery can sometimes be offloaded to an Oracle Exadata Storage Server.

Minimum software required: Oracle Database 12c Release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0.

14.1.20.5 Exadata Smart Scan Offload Enhancements for LOBs

In Oracle Exadata Storage Server software release 12.2.1.1.0, offload support has been extended to the following LOB operators: LENGTH, SUBSTR, INSTR, CONCAT, LPAD, RPAD, LTRIM, RTRIM, LOWER, UPPER, NLS_LOWER, NLS_UPPER, NVL, REPLACE, REGEXP_INSTR, and TO_CHAR.

Exadata smart scan offload evaluation is supported only on uncompressed inlined LOBs (less than 4 KB in size).
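For example, a predicate such as the following on an uncompressed, inline SecureFiles LOB column can now be evaluated by Smart Scan (the table and column names are illustrative):

    SQL> SELECT COUNT(*)
         FROM documents
         WHERE UPPER(SUBSTR(doc_text, 1, 6)) = 'ORACLE';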

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0.

14.1.20.6 New Features in Oracle Exadata Snapshots

  • Hierarchical snapshot databases

    You can create space-efficient snapshot databases from a parent that is itself a snapshot, which allows for hierarchical snapshot databases. The parent snapshot is also space-efficient, all the way up to the base test master. Multiple users can create their own snapshots from the same parent snapshot. The set of snapshots can be represented as a tree, where the root of the tree is the base test master. All the internal nodes in the tree are read-only databases, and all the leaves in the tree are read/write databases. All Oracle Exadata features are supported on hierarchical snapshot databases. Because there is a performance penalty with every additional depth level of the snapshot, Oracle recommends a snapshot tree with a maximum depth of 10.

  • Sparse Test Master databases

    You can also create and manage a sparse test master, while having active snapshots from it. This feature allows the sparse test master to sync almost continuously with Oracle Data Guard, except for small periods of time when users are creating a snapshot directly from the sparse test master. This feature utilizes the hierarchical snapshot feature described above, by creating read-only hidden parents. Note that Oracle Exadata snapshot databases are intended for test and development databases only.

  • Sparse backup and recovery

    When you choose to perform a sparse backup on DB0, the operation copies data only from the delta storage space of the database and the delta space of the sparse data files. A sparse backup can be either in the backup set format (default) or the image copy format. RMAN restores sparse data files from sparse backups and then recovers them from archive and redo logs. You can perform a complete or a point-in-time recovery on sparse data files. Sparse backups help in efficiently managing storage space and facilitate faster backup and recovery.

    See Oracle Database Backup and Recovery User’s Guide for information about sparse backups.
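The sparse backup formats described above map to RMAN commands as follows (Oracle Database 12.2 RMAN syntax; backup set format is the default):

    RMAN> BACKUP AS SPARSE BACKUPSET DATABASE;

    RMAN> BACKUP AS SPARSE COPY DATABASE;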

Minimum hardware: Storage servers must be X3 or later

Minimum software: Oracle Database and Grid Infrastructure 12c release 2 (12.2), and Oracle Exadata Storage Server software release 12.2.1.1.0.

14.1.21 Oracle Linux Kernel Upgraded to Unbreakable Enterprise Kernel 4 and Oracle VM Upgraded to 3.4.2

This release upgrades Oracle Linux to Unbreakable Enterprise Kernel (UEK) 4 (4.1.12-61.28.1.el6uek.x86_64). For systems using virtualization, the management domain (dom0) is upgraded to Oracle VM 3.4.2, which enables you to use Oracle Linux 6 on the dom0. The Linux kernels used on the dom0 and domU are now unified.

For systems previously using virtualization on the compute nodes, you must upgrade the Oracle Grid Infrastructure home to release 12.1.0.2.161018DBBP or later in all domUs before upgrading the Oracle Exadata System Software to release 12.2.1.1.0. The upgrade to release 12.2.1.1.0 requires you to upgrade all domUs before you upgrade the dom0. This requirement is enforced by the patchmgr software.

If you use Oracle ASM Cluster File System (Oracle ACFS), then you must apply the fix for bug 22810422 prior to the upgrade of the Oracle Grid Infrastructure home to enable Oracle ACFS support on the UEK4 kernel. In addition, Oracle recommends that you install the fix for bug 23642667 on both the Oracle Grid Infrastructure home and the Oracle Database home to increase OLTP workload performance.

14.2 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.3.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.3.0):

14.2.1 Performance Improvement for Oracle Exadata System Software Updates

Updating Oracle Exadata System Software now takes significantly less time. By optimizing internal processing even further, the cell update process is now up to 2.5 times faster compared to previous releases.

14.2.2 Quorum Disk Manager Utility

In earlier releases, when Oracle Exadata systems with fewer than five storage servers were deployed with HIGH redundancy, the voting disk for the cluster was created on a disk group with NORMAL redundancy. If two cells go down in such a system, the data is still preserved due to HIGH redundancy, but the cluster software comes down because the voting disk is on a disk group with NORMAL redundancy.

Quorum disks enable users to deploy and leverage disks on database servers to achieve highest redundancy in quarter rack or smaller configurations. Quorum disks are created on the database servers and added into the quorum failure group.

For new systems configured with HIGH redundancy but fewer than five storage servers, Oracle Exadata Deployment Assistant can automatically create such quorum disks.

Users who have deployed such systems can use the new quorumdiskmgr utility manually. quorumdiskmgr enables you to manage quorum disks on database servers. With this utility, you can create, list, delete, and alter quorum disk configurations, targets, and devices.

See "Managing Quorum Disks for High Redundancy Disk Groups" in the Oracle Exadata Database Machine Maintenance Guide for details.

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with these patches: 22722476 and 22682752; or Grid Infrastructure release 12.1.0.2.160419 or later

  • Patch 23200778 for all Database homes

14.2.3 VLAN Support

OEDA now supports the creation of VLANs on compute nodes and storage servers for the admin, ILOM, client, and backup access networks. Note the following:

  • Client and backup VLAN networks must be bonded. The admin network is never bonded.

  • If the backup network is on a tagged VLAN network, the client network must also be on a separate tagged VLAN network.

  • The backup and client networks can share the same network cables.

  • OEDA supports VLAN tagging for both physical and virtual deployments.

  • IPv6 VLANs are supported for bare metal on all Oracle Exadata systems except for X2 and V2 systems.

    IPv6 VLAN with VM is not supported currently.

Note:

If your system will use more than 10 VIP addresses in the cluster and you have VLAN configured for the Oracle Clusterware client network, then you must use three-digit VLAN IDs. Do not use four-digit VLAN IDs, because the VLAN name can exceed the 15-character operating system interface name limit.
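The interaction between VLAN ID length and VIP alias suffixes can be sketched with a short shell check. The base interface name bondeth0 and the base.vlan:vip naming pattern are assumptions for illustration; actual interface names depend on your configuration:

```shell
#!/bin/bash
# Illustration only: show why four-digit VLAN IDs can overflow the
# 15-character interface name limit once VIP aliases reach two digits
# (that is, with more than 10 VIPs). Names here are hypothetical.
base="bondeth0"
for vlan in 100 1001; do
  for vip in 1 10; do
    name="${base}.${vlan}:${vip}"
    if [ ${#name} -gt 15 ]; then
      echo "${name}: ${#name} characters - exceeds the 15-character limit"
    else
      echo "${name}: ${#name} characters - ok"
    fi
  done
done
```

With a three-digit VLAN ID the name stays within 15 characters even for VIP alias 10, but a four-digit VLAN ID pushes it to 16.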

The following table shows IPv4 and IPv6 support on the admin, client, and backup networks for the different Exadata systems and Oracle Database versions.

Version of Oracle Database   VLAN Tagging on Admin Network                      VLAN Tagging on Client and Backup Networks

11.2.0.4                     Supported only with IPv4 addresses, on X3-2 and    Supported with IPv4 and IPv6 on all
                             above for two-socket servers and X4-8 and above    hardware models.
                             for eight-socket servers.

12.1.0.2                     Supported only with IPv4 addresses, on X3-2 and    Supported with IPv4 on all hardware models.
                             above for two-socket servers and X4-8 and above    Supported with IPv6 on all hardware models
                             for eight-socket servers.                          with the fix for bug 22289350.

See "Using Network VLAN Tagging with Oracle Exadata Database Machine" in the Oracle Exadata Database Machine Installation and Configuration Guide for details.

14.2.4 Adaptive Scrubbing Schedule

Oracle Exadata System Software automatically inspects and repairs hard disks periodically when the hard disks are idle. The default schedule of scrubbing is every two weeks.

However, once a hard disk starts to develop bad sectors, it is better to scrub that disk more frequently because it is likely to develop more bad sectors. In release 12.1.2.3.0, if a bad sector is found on a hard disk in a current scrubbing job, Oracle Exadata System Software will schedule a follow-up scrubbing job for that disk in one week. When no bad sectors are found in a scrubbing job for that disk, the schedule will fall back to the scrubbing schedule specified by the hardDiskScrubInterval attribute.

If you have set hardDiskScrubInterval to an interval of one week or less, Oracle Exadata System Software uses the user-configured frequency instead of the weekly follow-up schedule, even if bad sectors are found. See the ALTER CELL section in the Oracle Exadata System Software User's Guide for more information about scrubbing.

Minimum software required:

  • Oracle Exadata System Software release 12.1.2.3.0

  • Grid Infrastructure home:

    • 11.2.0.4.16 (April 2015) or higher

    • 12.1.0.2.4 (January 2015) or higher

14.2.5 IPv6 Support in ASR Manager

Systems using IPv6 can now connect to Auto Service Request (ASR) using ASR Manager 5.4.

14.2.6 Increased Maximum Number of Database Processes

Table 14-1 shows the maximum number of database processes supported per database node. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the values in the "Maximum Number of Processes with No Parallel Queries" column and the "Maximum Number of Processes with All Running Parallel Queries" column.

Table 14-1 Maximum Number of Database Processes Per Node

Machine Type                 InfiniBand Bonding Type   Maximum Number of Processes   Maximum Number of Processes
                                                       with No Parallel Queries      with All Running Parallel Queries

8-socket (X2-8, X3-8)        Active-passive            28,500                        25,000
8-socket (X4-8, X5-8)        Active bonding            100,000                       44,000
2-socket (X2-2, X3-2)        Active-passive            12,500                        10,000
2-socket (X4-2, X5-2, X6-2)  Active bonding            25,000                        14,000

Table 14-2 shows the maximum number of database processes supported per Oracle VM user domain. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the "Maximum Number of Processes with No Parallel Queries" column and the "Maximum Number of Processes with All Running Parallel Queries" column.

Table 14-2 Maximum Number of Database Processes Per Oracle VM User Domain

Machine Type                 InfiniBand Bonding Type   Maximum Number of Processes   Maximum Number of Processes
                                                       with No Parallel Queries      with All Running Parallel Queries

2-socket (X2-2, X3-2)        Active-passive            11,500                        8,000
2-socket (X4-2, X5-2, X6-2)  Active bonding            23,000                        14,000

The machines are configured as follows:

  • On an 8-socket database node with active bonding InfiniBand configurations (X4-8 and X5-8), there are 8 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On an 8-socket database node with active-passive InfiniBand configurations (X2-8 and X3-8), there are 4 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On a 2-socket database node with active bonding InfiniBand configurations (X4-2, X5-2, and X6-2), there are 2 IP addresses on 1 InfiniBand card (2 InfiniBand ports).

  • On a 2-socket database node with active-passive InfiniBand configurations (X2-2 and X3-2), there is 1 IP address on 1 InfiniBand card (2 InfiniBand ports).

Up to 50,000 RDS sockets are allocated per InfiniBand IP address for database usage. Each I/O-capable database process consumes RDS sockets across IPs with even load balancing.
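The per-node RDS socket capacity implied by the IP-address counts above can be sketched with simple shell arithmetic (illustrative only; the figures come from the bullets above and the 50,000-sockets-per-IP allocation):

```shell
#!/bin/bash
# Illustrative arithmetic: RDS socket capacity per database node,
# derived from the per-node InfiniBand IP counts listed above.
sockets_per_ip=50000
for entry in "X4-8/X5-8:8" "X2-8/X3-8:4" "X4-2/X5-2/X6-2:2" "X2-2/X3-2:1"; do
  model=${entry%:*}
  ips=${entry##*:}
  echo "${model}: ${ips} IPs x ${sockets_per_ip} = $((ips * sockets_per_ip)) RDS sockets"
done
```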

Starting with Exadata 12.1.2.3.0, there is no connection limit on the cell side.

In addition to the higher process count supported by the Exadata image and the Oracle kernel, the following related products have also been enhanced:

  • Oracle Exadata Deployment Assistant automatically configures higher limits in Grid_home/crs/install/s_crsconfig_nodename_env.txt at deployment time.

  • Exadata Patch Manager (patchmgr and dbnodeupdate.sh) automatically configures higher limits in Grid_home/crs/install/s_crsconfig_nodename_env.txt during database node upgrades.

The following best practices should be followed to ensure optimal resource utilization at high process count:

  • Application-initiated Oracle foregrounds should be established through a set of Oracle listeners running on the Exadata database nodes instead of using local bequeath connections.

  • The number of listeners should be at least as high as the number of database node CPU sockets, and every database node CPU socket should run the same number of listeners. For example, on an Oracle Exadata X5-8 database node, eight listeners could be configured, one per database node CPU socket.

  • Listeners should spawn Oracle processes evenly across database node CPU sockets. This can be done by specifying the socket they will run on at startup time. For example, assuming the listener.ora file is configured correctly for listeners 0 through 7, the following script could be used to spawn eight listeners on an X5-8 database node, each on a different socket:

    #!/bin/bash
    # Start one listener per CPU socket, binding each to a different NUMA node.
    export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
    for socket in $(seq 0 7)
    do
      numactl --cpunodebind=${socket} $ORACLE_HOME/bin/lsnrctl start LISTENER${socket}
    done
  • Listener connection rate throttling should be used to control login storms and provide system stability at high process counts.

  • The total number of connections established per second, in other words, the sum of rate_limit for all listeners, should not be more than 400 to avoid excessive client connection timeouts and server-side errors.
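Listener connection rate throttling is configured in listener.ora. The following is a hypothetical sketch for one of the eight listeners, with the rate chosen so that eight listeners together stay at the 400-connections-per-second total; the host name and values are examples only:

```
# listener.ora sketch (illustrative names and values):
# with 8 listeners at 50 connections/sec each, the total stays at 400/sec.
CONNECTION_RATE_LISTENER0 = 50
LISTENER0 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbnode1)(PORT = 1521)(RATE_LIMIT = yes)))
```

Repeating this pattern for LISTENER1 through LISTENER7 keeps the aggregate connection rate within the recommended limit.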

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Oracle Database 12c Release 1 (12.1) release 12.1.0.2.160119 with these patches: 22711561, 22233968, and 22609314

14.2.7 Cell-to-Cell Rebalance Preserves Storage Index

Storage indexes provide a significant performance enhancement by pruning I/O during smart scans. When a disk hits a predictive failure or a true failure, its data must be rebalanced out to disks on other cells. This feature moves the storage index entries created for regions of data on the failed disk along with the data during the cell-to-cell offloaded rebalance, to maintain application performance.

Compared to earlier releases, this feature significantly improves application performance during a rebalance caused by disk failure.

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with patch 22682752

14.2.8 ASM Disk Size Checked When Reducing Grid Disk Size

In releases earlier than 12.1.2.3.0, a user might accidentally decrease the size of a grid disk before decreasing the size of an ASM disk that is part of the disk group. In release 12.1.2.3.0, the resize order is checked so that the user cannot reduce the size of the grid disk to be smaller than the ASM disk.

A new grid disk attribute, asmDiskSize, supports querying the ASM disk size. When the user runs ALTER GRIDDISK to alter the grid disk size, the command now checks the ASM disk size and prevents the user from making the grid disk smaller than the ASM disk.

The check works for both normal data disks and sparse disks. For a sparse grid disk, the check is performed when the virtual size is changed; for a normal grid disk, when the size is changed.

For example, suppose the following command:

CellCLI> list griddisk DATAC1_CD_00_adczarcel04 attributes name,asmdisksize

returns the following output:

DATAC1_CD_00_adczarcel04     14880M

When you try to reduce the size of the grid disk to be smaller than the ASM disk:

CellCLI> alter griddisk DATAC1_CD_00_adczarcel04 size=10G

the command returns an error:

CELL-02894: Requested grid disk size is smaller than ASM disk size. Please resize ASM disk DATAC1_CD_00_ADCZARCEL04 first.

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with patch 22347483

14.2.9 Support for Alerts in CREATE DIAGPACK

The CREATE DIAGPACK command now supports creating diagnostic packages for a specified alert using the alertName parameter.

See CREATE DIAGPACK in Oracle Exadata System Software User's Guide for details.

14.3 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.2.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.2.0):

14.3.1 Smart Fusion Block Transfer

Minimum software required: 12.1.0.2 BP13

Many OLTP workloads can have hot blocks that need to be updated frequently across multiple nodes in Oracle Real Application Clusters (Oracle RAC). One example is Right Growing Index (RGI), where new rows are added to a table with an index from several Oracle RAC nodes. The index leaf block becomes a hot block that needs frequent updates across all nodes.

Without the Smart Fusion Block Transfer feature of Oracle Exadata Database Machine, a hot block can be transferred from a sending node to a receiver node only after the sending node has made the changes in its redo log buffer durable in its redo log. With Smart Fusion Block Transfer, this redo log write latency at the sending node is eliminated: the block is transferred as soon as the I/O to the redo log is issued at the sending node, without waiting for it to complete. Smart Fusion Block Transfer increases throughput (by about 40%) and decreases response times (by about 33%) for RGI workloads.

To enable Smart Fusion Block Transfer:

  • Set the hidden static parameter _cache_fusion_pipelined_updates to TRUE on all Oracle RAC nodes. Because this is a static parameter, you need to restart your database for this change to take effect.

  • Set the exafusion_enabled parameter to 1 on all Oracle RAC instances.

Note:

The Oracle Exadata Database Machine initialization parameter exafusion_enabled is desupported in Oracle Database 19c.
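The two settings described above can be applied with ALTER SYSTEM. This is a sketch only; the use of SCOPE=SPFILE followed by a restart reflects the static-parameter restart requirement noted above:

```sql
-- Sketch: enable Smart Fusion Block Transfer on all Oracle RAC instances.
-- The hidden parameter is static, so it is written to the SPFILE and takes
-- effect only after the database is restarted.
ALTER SYSTEM SET "_cache_fusion_pipelined_updates" = TRUE SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET exafusion_enabled = 1 SCOPE = SPFILE SID = '*';
```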

14.3.2 8 TB Hard Disk Support

In this software release, Exadata storage servers support 8 TB high capacity disks. Supporting 8 TB high capacity disks has the following advantages:

  • Doubles the disk capacity of previous Exadata machines, up to 1344 TB per rack

  • Provides more storage for scaling out your existing databases and data warehouses

Requirements for using 8 TB high capacity disks:

  • Exadata 12.1.2.1.2 or higher

  • Grid Infrastructure home requires one of the following:

    • 12.1.0.2.11 or higher BP

    • 12.1.0.2.10 or lower BP plus the following patches: 21281532, 21099555, and 20904530

    • 11.2.0.4.18 or higher BP

    • 11.2.0.4.17 or lower BP plus patch 21099555

    • 11.2.0.3 plus patch 21099555

14.3.3 IPv6 Support

IPv6 is supported on Ethernet networks only.

Compute nodes and storage servers are now enabled to use IPv6 for the admin network, ILOM, and the client access network. This works for both bare metal and virtualized deployments. The following table describes how various components support IPv6:

Component Description of IPv6 Support

Oracle Exadata Deployment Assistant (OEDA)

OEDA allows a user to enter an IPv6 address in the "Define Customer Networks" screen (see Figure 14-1 for a screenshot). When IPv6 addresses are used for the admin network, the DNS servers, NTP servers, SMTP servers, and SNMP servers need to be on an IPv6 network.

On the "Define Customer Networks" screen, if you specify the gateway with an IPv6 address and the /64 suffix to denote the mask, the Subnet Mask field becomes greyed out and unavailable.

Cisco switch

The Cisco switch can be enabled with an IPv6 address; Cisco 4948E-F switches require a minimum firmware version of 15.2(3)E2.

Refer to My Oracle Support note 1415044.1 for upgrade instructions, and open an SR with Oracle Support to obtain the updated Cisco firmware.

Auto Service Request (ASR)

ASR does not work with IPv6 addresses. This will be resolved in a future release. ASR can be enabled by bridging to an IPv4 network.

Enterprise Manager

Enterprise Manager needs to be on a bridged network such that it can monitor both the InfiniBand switches (on an IPv4 network) and the compute and storage nodes (on an IPv6 network).

dbnodeupdate

dbnodeupdate requires remote repositories hosted on machines with IPv6 addresses; otherwise, an ISO file must be used.

Remote repositories may not be reachable (using http or other protocols) if they use IPv4 and the host only has IPv6 IPs. Some customers may be able to reach IPv4 IPs from their IPv6 hosts if the network routers and devices permit it. Most customers will likely need an IPv6 repository server or use an ISO file.

InfiniBand network

The private InfiniBand network must remain on IPv4. Note that only private addresses are used on the InfiniBand network, so there is little benefit in moving InfiniBand to IPv6.

SMTP and SNMP

SMTP and SNMP servers should usually use IPv6 addresses (or names that resolve to IPv6 addresses) unless the customer network has a bridge or gateway that routes between IPv4 and IPv6.

Platinum Support

Platinum Support will not be available for IPv6 deployments until a subsequent Platinum Gateway software release is available.

Figure 14-1 "Define Customer Networks" Screen in Oracle Exadata Deployment Assistant


14.3.4 Running CellCLI Commands from Compute Nodes

The new ExaCLI utility enables you to run CellCLI commands on storage servers remotely from database servers. This is useful in cases where you have locked the storage servers by disabling SSH access, as described in Disabling SSH on Storage Servers.

ExaCLI also provides an easier-to-use interface for storage server management, and enables you to separate the role of a storage user from that of a storage administrator.

To run ExaCLI, you need to create users on the storage servers and grant roles to the users. Granting roles assigns privileges to users; that is, it specifies which CellCLI commands the users are allowed to run. When connecting to a storage server, ExaCLI authenticates the specified user name and checks that the user has the appropriate privileges to run the specified command.

The new exadcli utility is similar to the dcli utility: exadcli enables you to run CellCLI commands across multiple storage servers.


You can control which commands users can run by granting privileges to roles, and granting roles to users. For example, you can specify that a user can run the LIST GRIDDISK command but not ALTER GRIDDISK. This level of control is useful in Oracle Cloud environments, where you might want to allow full access to the system to only a few users.

You also need to create users if you are using the new ExaCLI utility. You use the CREATE USER, GRANT PRIVILEGE, and GRANT ROLE commands to create users, specify privileges to roles, and grant roles to users. For details, see "Creating Users and Roles" in the Oracle Exadata System Software User's Guide.
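A hedged sketch of this flow follows, using example role and user names (gd_monitor, cell_monitor) that are not part of the product; confirm the exact privilege-clause syntax against the CellCLI command reference:

```
CellCLI> CREATE ROLE gd_monitor
CellCLI> GRANT PRIVILEGE list ON griddisk ALL ATTRIBUTES WITH ALL OPTIONS TO ROLE gd_monitor
CellCLI> CREATE USER cell_monitor PASSWORD = *
CellCLI> GRANT ROLE gd_monitor TO USER cell_monitor
```

With grants like these, cell_monitor could run LIST GRIDDISK through ExaCLI but not ALTER GRIDDISK.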

14.3.5 Disabling SSH on Storage Servers

By default, SSH is enabled on storage servers. If required, you can "lock" the storage servers to disable SSH access. You can still perform operations on the cell using ExaCLI, which runs on compute nodes and communicates over HTTPS and REST APIs with a web service running on the cell.

When you need to perform operations that require you to log in to the cell, you can temporarily unlock the cell. After the operation is complete, you can relock the cell.

For details, see Disabling SSH on Storage Servers in the Oracle Exadata System Software User's Guide.

14.3.6 Fixed Allocations for Databases in the Flash Cache

The ALTER IORMPLAN command has a new attribute called flashcachesize which enables you to allocate a fixed amount of space in the flash cache for a database. The value specified in flashcachesize is a hard limit, which means that the database cannot use more than the specified value. This is different from the flashcachelimit value, which is a "soft" maximum: databases can exceed this value if the flash cache is not full.

flashcachesize is ideal for situations such as Cloud and "pay for performance" deployments where you want to limit databases to their allocated space.


14.3.7 Oracle Exadata Storage Statistics in AWR Reports

The Exadata Flash Cache Performance Statistics sections have been enhanced in the AWR report:

  • Added support for Columnar Flash Cache and Keep Cache.

  • Added a section on Flash Cache Performance Summary to summarize Exadata storage cell statistics along with database statistics.

The Exadata Flash Log Statistics section in the AWR report now includes statistics for first writes to disk and flash.

Minimum software: Oracle Database release 12.1.0.2 Bundle Patch 11

14.3.8 Increased Maximum Number of Database Processes

Minimum software required: 12.1.0.2 BP11, or 11.2.0.4 BP18

The following table shows the maximum number of database processes supported per database node. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the "Number of Processes with No Parallel Queries" column and the "Number of Processes with All Running Parallel Queries" column.

Table 14-3 Maximum Number of Database Processes Per Node

Machine Type                 InfiniBand Bonding Type   Maximum Number of Processes   Maximum Number of Processes
                                                       with No Parallel Queries      with All Running Parallel Queries

8-socket (X2-8, X3-8)        Active-passive            28,500                        25,000
8-socket (X4-8, X5-8)        Active bonding            50,000                        44,000
2-socket (X2-2, X3-2)        Active-passive            12,500                        10,000
2-socket (X4-2, X5-2)        Active bonding            16,500                        14,000

The machines are configured as follows:

  • On an 8-socket database node with active bonding InfiniBand configurations (X4-8 and X5-8), there are 8 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On an 8-socket database node with active-passive InfiniBand configurations (X2-8 and X3-8), there are 4 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On a 2-socket database node with active bonding InfiniBand configurations (X4-2 and X5-2), there are 2 IP addresses on 1 InfiniBand card (2 InfiniBand ports).

  • On a 2-socket database node with active-passive InfiniBand configurations (X2-2 and X3-2), there is 1 IP address on 1 InfiniBand card (2 InfiniBand ports).

50,000 RDS sockets are provisioned per InfiniBand IP address for database usage. Each I/O-capable database process consumes RDS sockets across IPs with even load balancing.

Note that cells have the following connection limits:

  • On X4 and X5 systems, the cell connection limit is 120,000 processes.

  • On X2 and X3 systems, the cell connection limit is 60,000 processes.

This means that the total number of database processes cannot exceed the above limits on the cell nodes. For example, a full rack with 8 database nodes running at the maximum process count would exceed the cell connection limit.
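The full-rack example can be checked with simple arithmetic, using the X4-2 per-node figure from Table 14-3 and an assumed 8-node full rack:

```shell
#!/bin/bash
# Illustration of the cell connection limit check. The 16,500 per-node figure
# is from Table 14-3 (X4-2, active bonding); 8 nodes per full rack is assumed.
nodes=8
max_per_node=16500
cell_limit=120000   # X4/X5 cell connection limit
total=$((nodes * max_per_node))
if [ "$total" -gt "$cell_limit" ]; then
  echo "${total} processes would exceed the ${cell_limit}-process cell connection limit"
else
  echo "${total} processes fit within the ${cell_limit}-process cell connection limit"
fi
```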

14.3.9 Custom Diagnostic Package for Storage Server Alerts

Oracle Exadata Storage Servers automatically collect customized diagnostic packages that include relevant logs and traces upon generating a cell alert. This applies to all cell alerts, including both hardware alerts and software alerts. The timely collection of the diagnostic information prevents rollover of critical logs.

Management Server (MS) sends the diagnostic package as an email attachment for every email alert. You can access the following URL to download an existing diagnostic package if the email attachment is misplaced. In the following URL, hostname refers to the host name of the cell.

https://hostname/diagpack

You can also download the packages using ExaCLI.

You can create hourly custom diagnostic packages by providing the start time and duration using the CREATE DIAGPACK CellCLI command.

For details, see CREATE DIAGPACK in the Oracle Exadata System Software User's Guide.

14.3.10 Updating Nodes Using patchmgr

Starting with Exadata release 12.1.2.2.0, Oracle Exadata database nodes (releases later than 11.2.2.4.2), Oracle Exadata Virtual Server nodes (dom0), and Oracle Exadata Virtual Machines (domU) can be updated, rolled back, and backed up using patchmgr. You can still run dbnodeupdate.sh in standalone mode, but using patchmgr enables you to run a single command to update multiple nodes; you do not need to run dbnodeupdate.sh separately on each node. patchmgr can update the nodes in a rolling or non-rolling fashion.

The updated patchmgr and dbnodeupdate.sh are available in the new dbserver.patch.zip file, which can be downloaded from My Oracle Support note 1553103.1.

For details, see the "Updating Database Nodes with patchmgr" section in the Oracle Exadata Database Machine Maintenance Guide.

14.3.11 kdump Operational for 8-Socket Database Nodes

In releases earlier than 12.1.2.2.0, kdump, a service that creates and stores kernel crash dumps, was disabled on Exadata 8-socket database nodes because generating the vmcore took too long and consumed too much space. Starting with Exadata release 12.1.2.2.0, kdump is fully operational on 8-socket database nodes due to the following optimizations:

  • Hugepages and several other areas of shared memory are now exposed by the Linux kernel to user space, then filtered out by makedumpfile at kernel crash time. This saves both time and space for the vmcore.

  • Memory configuration for the kexec kernel has been optimized.

  • Overall memory used has been reduced by blacklisting unnecessary modules.

  • Snappy compression is enabled on the database nodes to speed up vmcore generation.

14.3.12 Redundancy Check When Powering Down the Storage Server

If you try to gracefully shut down a storage server, by pressing the power button on the front panel or through ILOM, the storage server performs an ASM data redundancy check. If shutting down the storage server could lead to a forced dismount of an ASM disk group due to reduced data redundancy, the shutdown is aborted, and the LEDs blink as follows to alert you that shutting down the storage server is not safe:

  • On high capacity cells, all three LEDs on all hard drives blink for 10 seconds.

  • On extreme flash cells, the blue OK-to-Remove LED blinks for 10 seconds, and the amber LED is lit.

You should not attempt a hard reset on the storage server.

If a storage server cannot be safely shut down because of reduced redundancy (the command "cellcli -e list griddisk attributes name, deactivationOutcome" shows the offline and unhealthy disks), then you must restore data redundancy first. If other disks are offline, bring them back online and wait for the resync to finish. If a rebalance is running to force drop failed disks, or resilvering is running to resilver data blocks after a write-back flash cache failure, wait for the rebalance or resilvering to complete. After data redundancy is restored, you can shut down the storage server again.
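The safe-shutdown decision can be sketched as a small parsing exercise. On a live cell the input would come from the cellcli command quoted above; here a hypothetical two-line output is used so the logic can run anywhere:

```shell
#!/bin/bash
# Sketch: decide whether a cell is safe to shut down by checking that every
# grid disk reports "Yes". The sample output below is hypothetical.
sample_output='DATA_CD_00_cell01   Yes
DATA_CD_01_cell01   Cannot de-activate due to other offline disks'
unsafe=$(echo "$sample_output" | awk '$2 != "Yes" {print $1}')
if [ -n "$unsafe" ]; then
  echo "NOT safe to shut down; restore redundancy for: $unsafe"
else
  echo "Safe to shut down"
fi
```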

14.3.13 Specifying an IP Address for SNMP Traps

If the IP address associated with eth0 is not registered with Oracle ASR Manager, you can specify a different IP address using the new fromIP field in the ALTER CELL command (for storage servers) or the ALTER DBSERVER command (for database servers).

For details, see the description for:

  • ALTER CELL in the Oracle Exadata System Software User's Guide
  • ALTER DBSERVER in the Oracle Exadata Database Machine Maintenance Guide

14.3.14 Reverse Offload Improvements

Minimum software required: 12.1.0.2 BP11

The reverse offload feature enables a storage cell to push some offloaded work back to the database node when the storage cell's CPU is saturated.

Reverse offload from storage servers to database nodes is essential in providing a more uniform usage of all the database and storage CPU resources available in an Exadata environment. In most configurations, there are more database CPUs than storage CPUs, and the ratio may vary depending on the hardware generation and the number of database and cell nodes.

Different queries running at the same time need different rates of reverse offload to perform optimally with regard to elapsed time. Even the same query running in different instances may need different rates of reverse offload.

In this release, a number of heuristic improvements have been added, improving elapsed time by up to 15% when multiple database instances and different queries run in parallel.

14.3.15 Cell-to-Cell Rebalance Preserves Flash Cache Population

Minimum software required: 12.1.0.2 BP11

When a hard disk hits a predictive failure or true failure, and data needs to be rebalanced out of it, some of the data that resides on this hard disk might have been cached on the flash disk, providing better latency and bandwidth accesses for this data. To maintain an application's current performance SLA, it is critical to rebalance the data while honoring the caching status of the different regions on the hard disk during the cell-to-cell offloaded rebalance.

Compared to earlier releases, this feature significantly improves application performance during a rebalance caused by disk failure or disk replacement.

When data is rebalanced from one cell to another, the data that was cached on the source cell is also cached on the target cell.

14.4 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.2)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.2):

14.4.1 InfiniBand Partitioning for Virtualized Exadata Environments

InfiniBand partitioning is now available for virtualized Exadata environments and can be configured with the Oracle Exadata Deployment Assistant (OEDA).

Use the graphical user interface of OEDA to define the InfiniBand partitions on a per-cluster basis, and the command-line interface of OEDA to configure the guests and the InfiniBand switches with the appropriate partition keys and membership requirements to enable InfiniBand partitions.

InfiniBand partitions can be defined on a per-cluster basis. If storage servers are shared among multiple clusters, then all clusters will use the same storage partition key.

14.5 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.1)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.1):

14.5.1 Flash Performance Improvement in X5 Storage Servers

The NVMe flash firmware was updated to improve I/O task handling resources and to modify the background refresh algorithms and operations. Flash performance in this release is equivalent to or slightly higher than in previous releases.

14.5.2 Oracle Exadata Virtual Machines

Consolidated environments can now use Oracle VM on X5-2, X4-2, X3-2, and X2-2 database servers to deliver higher levels of isolation between workloads. Virtual machine isolation is desirable for workloads that cannot be trusted to restrict their security, CPU, or memory usage in a shared environment.

14.5.3 Infrastructure as a Service (IaaS) with Capacity-on-Demand (CoD)

Oracle Exadata Infrastructure as a Service (IaaS) customers now have the Capacity-on-Demand feature to limit the number of active cores in the database servers in order to restrict the number of required database software licenses. Exadata 12.1.2.1.1 Software allows CoD and IaaS to coexist on the same system. Note that IaaS-CoD, the ability to turn on/off a reserved set of cores, is still included with IaaS.

14.5.4 Improved Flash Cache Metrics

This release contains integrated block cache and columnar cache metrics for better flash cache performance analysis.

14.5.5 Leap Second Time Adjustment

This release contains leap second support in anticipation of the June 30, 2015 leap second adjustment.

14.5.6 Network Resource Management

  • Oracle 1.6TB NVMe SSD firmware update to 8DV1RA12 in X5-2 Extreme Flash (EF) Storage Servers

  • Oracle Flash Accelerator F160 PCIe card's firmware update to 8DV1RA12 in X5-2 High Capacity (HC) Storage Servers

14.5.7 DBMCLI Replaces /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl Script

Starting in Oracle Exadata Storage Server Release 12.1.2.1.0, a new command-line interface called DBMCLI is introduced to configure, monitor, and manage the database servers. DBMCLI is pre-installed on each database server and on dom0 of virtualized machines. DBMCLI configures Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts. DBMCLI replaces the /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl Perl script. Refer to the Oracle Exadata Database Machine Maintenance Guide for information on how to use DBMCLI.

14.6 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.0):

14.6.1 Oracle Exadata Database Machine Elastic Configurations

Elastic configurations allow Oracle Exadata Racks to have customer-defined combinations of database servers and Exadata Storage Servers. At least 2 database servers and 3 storage servers must be installed in the rack. The storage servers must all be the same type. Oracle Exadata Database Machine X5-2 Elastic Configurations and Oracle Exadata Database Machine X4-8 Elastic Configuration can have 2 to 19 database servers, 3 to 20 Exadata Storage Servers, or a combination of database servers and Exadata Storage Servers. Oracle Exadata Storage Expansion Rack X5-2 Elastic Configurations can have 4 to 19 storage servers.

This allows the hardware configuration to be tailored to specific workloads such as Database In-Memory, OLTP, Data Warehousing, or Data Retention.

  • Oracle Exadata Database Machine X5-2 Elastic Configurations start with a quarter rack containing 2 database servers and 3 Exadata Storage Servers. Additional database and storage servers (High Capacity (HC) or Extreme Flash (EF)) can be added until the rack fills up or a rack maximum of 22 total servers is reached.

  • Oracle Exadata Storage Expansion Rack X5-2 Elastic Configurations start with a quarter rack containing 4 Exadata Storage Servers. Additional storage servers (HC or EF) can be added to a total of 19 storage servers per rack.

  • Oracle Exadata Database Machine X4-8 Elastic Configurations start with a half rack containing 2 database server X4-8 8-socket servers and 3 Exadata Storage Servers. Up to 2 additional X4-8 8-socket servers can be added per rack. Up to 11 additional Exadata Storage Servers (HC or EF) can be added per rack.

14.6.2 Sparse Grid Disks

Sparse grid disks allocate space as new data is written to the disk, and therefore have a virtual size that can be much larger than the actual physical size. Sparse grid disks can be used to create a sparse disk group to store database files that will use a small portion of their allocated space. Sparse disk groups are especially useful for quickly and efficiently creating database snapshots on Oracle Exadata. Traditional databases can also be created using a sparse disk group.

Minimum hardware: Storage nodes must be X3 or later

Minimum software: Oracle Database 12c Release 1 (12.1) release 12.1.0.2 BP5 or later.
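The allocate-on-write behavior described above can be illustrated with a small Python sketch. This is a toy model, not Oracle code: the class name, sizes, and allocation map are purely illustrative of how a sparse grid disk can expose a virtual size far larger than its physical footprint.

```python
# Toy model (not Oracle code): a sparse disk exposes a large virtual size
# but consumes physical space only for regions that are actually written.
class SparseGridDisk:
    def __init__(self, virtual_size, physical_size):
        self.virtual_size = virtual_size      # size visible to Oracle ASM
        self.physical_size = physical_size    # space actually reserved
        self.allocated = {}                   # offset -> data written there

    def physical_used(self):
        return sum(len(d) for d in self.allocated.values())

    def write(self, offset, data):
        if offset >= self.virtual_size:
            raise ValueError("write beyond virtual size")
        if offset not in self.allocated and \
                self.physical_used() + len(data) > self.physical_size:
            raise OSError("physical space exhausted")
        self.allocated[offset] = data         # space is allocated on first write

disk = SparseGridDisk(virtual_size=1_000_000, physical_size=1_000)
disk.write(999_000, b"x" * 10)   # far beyond physical size: still fine
print(disk.physical_used())      # only the bytes actually written count
```

A snapshot database behaves analogously: reads fall through to the shared base copy, and only changed blocks consume sparse space.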

14.6.3 Snapshot Databases for Test and Development

Space-efficient database snapshots can now be quickly created for test and development purposes. Snapshots start with a shared read-only copy of the production database (or pluggable database (PDB)) that has been cleansed of any sensitive information. As changes are made, each snapshot writes the changed blocks to a sparse disk group.

Multiple users can create independent snapshots from the same base database. Therefore multiple test and development environments can share space while maintaining independent databases for each task. Snapshots on Exadata Storage Servers allow testing and development with Oracle Exadata Storage Server Software features such as Smart Scan.

Exadata database snapshots are integrated with the Multi-tenant Database Option to provide an extremely simple interface for creating new PDB snapshots.

Minimum hardware: Storage nodes must be X3 or later

Minimum software: Oracle Database 12c Release 1 (12.1) release 12.1.0.2 BP5 or later.

14.6.4 Columnar Flash Caching

Oracle Exadata System Software release 12.1.2.1.0 can efficiently support mixed workloads, delivering optimal performance for both OLTP and analytics. This is possible due to the dual format architecture of Exadata Smart Flash Cache, which enables data to be stored in hybrid columnar format for transactional processing and also in pure columnar format, which is optimized for analytical processing.

In addition, Exadata Hybrid Columnar Compression balances the needs of OLTP and analytic workloads. Exadata Hybrid Columnar Compression enables the highest levels of data compression and provides tremendous cost-savings and performance improvements due to reduced I/O, especially for analytic workloads.

In Oracle Exadata System Software release 12.1.2.1.0, Exadata Smart Flash Cache software transforms hybrid columnar compressed data into pure columnar format during flash cache population for optimal analytics processing. Smart Scans on pure columnar data in flash run faster because they read only the selected columns, reducing flash I/Os and storage server CPU consumption.

Oracle Exadata System Software release 12.1.2.1.0 has the ability to cache Exadata Hybrid Columnar Compression table data on flash cache in a pure columnar layout. When Exadata Hybrid Columnar Compression tables are accessed using Smart Scan, the Exadata Hybrid Columnar Compression compressed data is reformatted to a pure columnar layout in the same amount of storage space on flash cache.

The percentage of data for a given column in a compression unit (CU) of a wide table is small compared to that of a narrow table. This results in more CUs being fetched from disks and flash to get the data for an entire column. Queries reading only a few columns of a wide Exadata Hybrid Columnar Compression table therefore exhibit high I/O bandwidth utilization, because irrelevant columns are read from storage. Storing the data in a columnar format on flash cache eliminates the need to read the irrelevant columns and provides a significant performance boost.

Depending on the type of workload (OLTP or data warehousing), the same region of data can be cached in both the traditional block format as well as the columnar format in flash cache.

This feature is enabled by default; you do not need to configure anything to use this feature.

Columnar Flash Caching accelerates reporting and analytic queries while maintaining excellent performance for OLTP style single row lookups.

Columnar Flash Caching implements a dual format architecture in Oracle Exadata Database Machine flash by automatically transforming frequently scanned Exadata Hybrid Columnar Compression compressed data into a pure columnar format as it is loaded into the flash cache. Smart Scans operating on pure columnar data in flash run faster because they read only the selected columns reducing flash I/Os and storage server CPU.

The original Exadata Hybrid Columnar Compression formatted data can also be cached in the flash cache if there are frequent OLTP lookups for the data. Therefore the Exadata Smart Flash Cache automatically optimizes the format of the cached data to accelerate all types of frequent operations.


Minimum software: Oracle Exadata System Software release 12.1.2.1.0 running Oracle Database 12c release 12.1.0.2.0.

See Also:

Oracle Exadata System Software User's Guide for information about the flash cache metrics

14.6.5 Oracle Exadata System Software I/O Latency Capping for Write Operations

This feature helps eliminate outliers due to slow writes. It prevents write outliers that would otherwise have been visible to applications.

Disk drives, disk controllers, and flash devices are complex computers that can occasionally exhibit high latencies while the device is performing an internal maintenance or recovery operation. In addition, devices that are close to failing sometimes exhibit high latencies before they fail. Previously, devices exhibiting high latencies could occasionally cause slow SQL response times. Oracle Exadata System Software I/O latency capping for write operations ensures excellent SQL I/O response times on Oracle Exadata Database Machine by automatically redirecting high latency I/O operations to a mirror copy.

In Oracle Exadata System Software releases 11.2.3.3.1 and 12.1.1.1.1, if Oracle Exadata Database Machine tries to read from a flash device but the latency of the read I/O is longer than expected, the Oracle Exadata System Software automatically redirects the read I/O operations to another storage server (cell). The database server that initiated the read I/O is sent a message that causes the database server to redirect the read I/O to another mirror copy of the data. Any read I/O issued to the last valid mirror copy of the data is not redirected.

In Oracle Exadata System Software release 12.1.2.1.0, if a write operation encounters high latency, then Oracle Exadata System Software automatically redirects the write operation to another healthy flash device on the same storage server. After the write completes successfully, the write I/O is acknowledged as successful to the database server, thereby eliminating the write outlier.

Requirements:

  • Minimum software:

    • Oracle Database 11g release 2 (11.2) Monthly Database Patch For Exadata (June 2014 - 11.2.0.4.8)

    • Oracle Grid Infrastructure 11g release 2 (11.2) Monthly Database Patch For Exadata (June 2014 - 11.2.0.4.8)

  • Enable write-back flash cache on the storage server (cell)
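The redirect-on-latency idea described in this section can be sketched in a few lines of Python. This is an illustrative model only, not Oracle code: the 50 ms threshold and the device functions are assumptions, and the real implementation redirects the write into the write back flash cache of another device on the same cell.

```python
# Hypothetical sketch: if a write to one flash device exceeds a latency
# threshold, retry it on the next healthy device before acknowledging
# the write back to the database server.
import time

LATENCY_CAP = 0.05  # 50 ms threshold (illustrative value, not Oracle's)

def write_with_capping(devices, data):
    """Try each device in turn; redirect when a write is slower than the cap."""
    for device in devices:
        start = time.monotonic()
        ok = device(data)
        elapsed = time.monotonic() - start
        if ok and elapsed <= LATENCY_CAP:
            return True          # write acknowledged as successful
    return False                 # no healthy device absorbed the write

# Simulated devices: the first one stalls, the second is healthy.
def stalled(data):
    time.sleep(0.06)             # exceeds the cap
    return True

def healthy(data):
    return True

print(write_with_capping([stalled, healthy], b"block"))  # True, via redirect
```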

14.6.6 Elimination of False Drive Failures

Disk drives and flash drives are complex computers that can occasionally appear to fail due to internal software lockups without actually physically failing. In the event of an apparent hard drive failure on an X5-2 High Capacity cell or an apparent flash drive failure on an X5-2 Extreme Flash cell, Oracle Exadata System Software automatically redirects I/Os to other drives, and then power cycles the drive. If the drive returns to normal status after the power cycle, then it is re-enabled and resynchronized. If the drive continues to fail after being power cycled, then it is dropped. This feature allows Oracle Exadata System Software to eliminate false-positive disk failures, and therefore helps to preserve data redundancy and reduce management overhead.
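The recovery sequence described in this section is essentially a small state machine. The following Python sketch is illustrative only (not Oracle code); the action names are assumptions chosen to mirror the prose.

```python
# Illustrative state machine for handling an apparent drive failure:
# redirect I/O, power cycle once, re-enable on recovery, drop otherwise.
def handle_apparent_failure(drive_recovers_after_cycle):
    actions = ["redirect_io", "power_cycle"]
    if drive_recovers_after_cycle:
        actions += ["re_enable", "resynchronize"]   # false positive eliminated
    else:
        actions.append("drop_drive")                # genuine failure
    return actions

print(handle_apparent_failure(True))
print(handle_apparent_failure(False))
```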

14.6.7 Flash and Disk Life Cycle Management Alert

Oracle Exadata Storage Server Software now monitors Oracle ASM rebalance operations due to disk failure and replacement. Management Server sends an alert when a rebalance operation completes successfully, or encounters an error.

In previous releases, the user would have to periodically monitor the progress of rebalance operations by querying the V$ASM_OPERATION view. Now the user can subscribe to alerts from Management Server, and receive updates on Oracle ASM rebalance operations.

Minimum software: Oracle Database release 12.1.0.2 BP4 or later, and Oracle Exadata Storage Server Software release 12.1.2.1.0 or later.

14.6.8 Performance Optimization for SQL Queries with Minimum or Maximum Functions

SQL queries using minimum or maximum functions are designed to take advantage of the storage index column summary that is cached in Exadata Storage Server memory. As a query is processed, a running minimum value and a running maximum value are tracked. Before issuing an I/O, the minimum/maximum value cached in the storage index for the data region is checked in conjunction with the running minimum/maximum value to decide whether that I/O should be issued or can be pruned. Overall, this optimization can result in significant I/O pruning during the course of a query and improves query performance. An example of a query that benefits from this optimization is:

SELECT MAX(Salary) FROM EMP WHERE Department = 'sales';

Business intelligence tools that get the shape of a table by querying the minimum or maximum value for each column benefit greatly from this optimization.

The following session statistic shows the amount of I/O saved due to storage index optimization.

cell physical IO bytes saved by storage index
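The pruning logic described above can be sketched as follows. This is an assumed simplification, not Oracle code: each region carries the column maximum cached in the storage index, and an I/O is skipped whenever that cached maximum cannot beat the running maximum already seen.

```python
# Sketch of storage-index pruning for a MAX() query: skip reading a data
# region when its cached column maximum cannot improve the running maximum.
def max_with_pruning(regions):
    """regions: list of (region_max_from_storage_index, rows) pairs."""
    running_max = float("-inf")
    ios_issued = 0
    for region_max, rows in regions:
        if region_max <= running_max:
            continue                       # I/O pruned via the storage index
        ios_issued += 1                    # region must actually be read
        running_max = max(running_max, max(rows))
    return running_max, ios_issued

regions = [(500, [100, 500]),   # read: raises running max to 500
           (400, [400, 250]),   # pruned: index max 400 <= running max 500
           (900, [900, 10])]    # read: raises running max to 900
print(max_with_pruning(regions))
```

Only two of the three regions are read; the saved I/O is what the `cell physical IO bytes saved by storage index` statistic accounts for.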

Minimum software: Oracle Database release 12.1.0.2.

14.6.9 Oracle Exadata Storage Server Software Performance Statistics in AWR Reports

Exadata Storage Server configuration and performance statistics are collected in the Automatic Workload Repository (AWR), and the data is available in the AWR report. The Oracle Exadata section of the AWR report is available in HTML or active report format.

The AWR report contains three main Exadata sections:

  • Exadata Server Configuration: Hardware model information, software versions, and storage configuration

  • Exadata Server Health Report: Offline disks and open alerts

  • Exadata Performance Statistics: Operating system statistics, storage server software statistics, smart scan statistics, and statistics by databases

The AWR report provides storage level performance statistics, not restricted to a specific instance or a database. This is useful for analyzing cases where one database can affect the performance of another database.

Configuration differences are highlighted using specific colors for easy analysis. For example, a cell with a different software release than the other cells, or a cell with different memory configuration than the other cells are highlighted.

Outliers are automatically analyzed and presented for easy performance analysis. Outliers are appropriately colored and linked to detailed statistics.

Minimum software: Oracle Database release 12.1.0.2, and Oracle Exadata Storage Server Software release 12.1.2.1.0.

14.6.10 Exafusion Direct-to-Wire Protocol

The Exafusion Direct-to-Wire protocol allows database processes to read and send Oracle Real Application Clusters (Oracle RAC) messages directly over the InfiniBand network, bypassing the overhead of entering the OS kernel and running the normal networking software stack. This improves the response time and scalability of the Oracle RAC environment on Oracle Exadata Database Machine. Exafusion is especially useful for OLTP applications because per-message overhead is particularly apparent in small OLTP messages.

Minimum software: Oracle Exadata Storage Server Software release 12.1.2.1.0 contains the OS, firmware, and driver support for Exafusion and Oracle Database software release 12.1.0.2.0 BP1.

14.6.11 Management Server on Database Servers

Management Server (MS) on database servers implements a web service for database server management commands, and runs background monitoring threads. The management service provides the following:

  • Comprehensive hardware and software monitoring including monitoring of hard disks, CPUs, and InfiniBand ports.

  • Enhanced alerting capabilities.

  • Important system metric collection and monitoring.

  • A command-line interface called DBMCLI to configure, monitor, and manage the database servers. DBMCLI is pre-installed on each database server and on DOM0 of virtualized machines. DBMCLI configures Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts.

Oracle Exadata Database Machine Command-Line Interface (DBMCLI) utility is the command-line administration tool for managing the database servers. DBMCLI runs on each server to enable you to manage an individual database server. DBMCLI also runs on virtual machines. You use DBMCLI to configure, monitor, and manage the database servers. The command-line utility is already installed when Oracle Exadata Database Machine is shipped.

DBMCLI provides an integrated client interface to configure Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts. It also provides for monitoring of hard disks, CPUs, InfiniBand ports, as well as system metrics and thresholds.

See Also:

Oracle Exadata Database Machine Maintenance Guide for additional information about DBMCLI

14.6.12 SQL Operators for JSON and XML

Oracle Exadata System Software supports offload of many SQL operators for predicate evaluation. Offload of the following SQL operators are now supported by Oracle Exadata System Software:

  • JSON Operators

    • JSON_VALUE

    • JSON_EXISTS

    • JSON_QUERY

    • IS JSON

    • IS NOT JSON

  • XML Operators

    • XMLExists

    • XMLCast(XMLQuery())

Minimum software: Oracle Database release 12.1.0.2 for offload of JSON. Oracle Database release 12.1.0.2 BP1 for XML operator offload.

14.6.13 I/O Resource Management for Flash

I/O Resource Management (IORM) now manages flash drive I/Os in addition to disk drive I/Os to control I/O contention between databases, pluggable databases, and consumer groups. Because it is very rare for Oracle Exadata environments to be limited by OLTP I/Os, IORM automatically prioritizes OLTP flash I/Os over smart scan flash I/Os, ensuring fast OLTP response times with little cost to smart scan throughput.

Minimum software: Exadata cell software release 12.1.2.1.0 running Oracle Database 11g or Oracle Database 12c releases

Minimum hardware: Oracle Exadata Database Machine model X2-*

14.6.14 Flash Cache Space Resource Management

Flash cache is a shared resource. Flash Cache Space Resource Management allows users to specify the minimum and maximum sizes a database can use in the flash cache using interdatabase IORM plans. Flash Cache Space Resource Management also allows users to specify the minimum and maximum sizes a pluggable database (PDB) can use in the flash cache using database resource plans.

Minimum software: Oracle Exadata System Software release 12.1.2.1.0 running Oracle Database 11g or Oracle Database 12c Release 1 (12.1) release 12.1.0.2

Minimum hardware: Oracle Exadata Database Machine model X2-*
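A toy Python model of the min/max space guarantee described above may help. This is not Oracle code, and the attribute names (`min_gb`, `max_gb`) are illustrative rather than the actual IORM plan directive attributes; it only shows the admission check implied by a per-database maximum.

```python
# Toy admission check: each database is capped at its maximum flash cache
# share, so one database cannot crowd out the others' guaranteed minimums.
plan = {"SALES": {"min_gb": 10, "max_gb": 50},    # illustrative directives
        "HR":    {"min_gb": 5,  "max_gb": 20}}
usage = {"SALES": 49, "HR": 5}                    # current cached GB per DB

def may_cache(db, new_gb):
    """Allow caching only while the database stays within its maximum."""
    return usage[db] + new_gb <= plan[db]["max_gb"]

print(may_cache("SALES", 1))   # exactly at the 50 GB cap: allowed
print(may_cache("SALES", 2))   # would exceed the maximum: denied
```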

14.6.15 I/O Resource Management Profiles

IORM interdatabase plans now support profiles, which reduce the management of interdatabase plans for environments with many databases. Previously, a storage administrator had to specify resources for every database in the interdatabase plan, and the plan needed updates each time a new database was created. IORM profiles greatly reduce this management overhead. The storage administrator can now create profile directives that define different profile types based on performance requirements. Next, the administrator maps new and existing databases to one of the defined profiles in the interdatabase plan using the database parameter DB_PERFORMANCE_PROFILE. Each database automatically inherits all of its attributes from the specified profile directive.

Minimum software: Exadata cell software release 12.1.2.1.0 running Oracle Database 12c Release 1 (12.1) release 12.1.0.2 Exadata Bundle Patch 4.
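The profile-mapping idea can be sketched in Python. DB_PERFORMANCE_PROFILE is the real database parameter named above; the profile names and share values here are purely illustrative assumptions.

```python
# Sketch: databases inherit IORM directives from a named profile instead of
# requiring a per-database entry in the interdatabase plan.
profiles = {"gold":   {"share": 8},    # illustrative profile directives
            "silver": {"share": 4},
            "bronze": {"share": 1}}

# Each database sets DB_PERFORMANCE_PROFILE to one of the profile names.
db_performance_profile = {"ERP": "gold", "DW": "silver", "TESTDB": "bronze"}

def directives_for(db):
    """Look up the directives a database inherits from its profile."""
    return profiles[db_performance_profile[db]]

print(directives_for("ERP"))

# A newly created database needs only its profile parameter set; the
# interdatabase plan itself does not change.
db_performance_profile["NEWDB"] = "bronze"
print(directives_for("NEWDB"))
```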

14.6.16 Write Back Flash Cache on Extreme Flash Cells

On Extreme Flash cells, flash cache runs in write back mode by default, and takes 5 percent of the flash space. Flash cache on Extreme Flash cells is not used as a block cache because user grid disks are already created on flash and therefore caching is not needed. However, flash cache is still useful for the following advanced operations:

  • Columnar caching caches Exadata Hybrid Columnar Compression (EHCC) table data on flash cache in a pure columnar layout on an Extreme Flash cell.

  • Write I/O latency capping cancels write I/O operations to a temporarily stalled flash device, and redirects the write to be logged in the write back flash cache of another healthy flash device on an Extreme Flash cell.

  • Fast data file creation persists the metadata about the blocks in the write back flash cache, eliminating the actual formatting writes to user grid disks, on an Extreme Flash cell.

Administrators can choose to configure flash cache in write through mode on Extreme Flash cells. Columnar caching works in write through flash cache mode, but write I/O latency capping and fast data file creation require write back flash cache to be enabled.

14.6.17 Secure Erase for 1.6 TB Flash Drives in Extreme Flash and High Capacity Systems

With this release, Oracle Exadata System Software supports secure erase for 1.6 TB flash drives in the Extreme Flash and High Capacity systems. The 1.6 TB flash drives take approximately 5.5 hours to securely erase.

14.6.18 Increased Exadata Cell Connection Limit

Oracle Exadata X5-2 and X4-2 cells can now support up to 120,000 simultaneous connections originating from one or more database servers using active-active bonding. This means that at most 120,000 processes can simultaneously remain connected to a cell and perform I/O operations.

14.6.19 Support for SNMP v3

Oracle Exadata Database Machine database and storage servers support SNMP v3 for sending alerts. SNMP v3 provides authentication and encryption for alerts sent from the servers to administrators and Oracle Auto Service Request (ASR).

14.6.20 Federal Information Processing Standards (FIPS) 140-2 Compliant Smart Scan

The U.S. Federal Information Processing Standard (FIPS) 140-2 specifies security requirements for cryptographic modules. To support customers with FIPS 140-2 requirements, Oracle Exadata version 12.1.2.1.0 can be configured to use FIPS 140-2 validated cryptographic modules. These modules provide cryptographic services such as Oracle Database password hashing and verification, network encryption (SSL/TLS and native encryption), as well as data at rest encryption (Transparent Data Encryption).

When Transparent Data Encryption is used and Oracle Database is configured for FIPS 140 mode, Oracle Exadata Smart Scan offloads will automatically leverage the same FIPS 140 validated modules for encryption and decryption operations of encrypted columns and encrypted tablespaces.

In Oracle Database release 12.1.0.2.0, the database parameter, DBFIPS_140, provides the ability to turn on and off the FIPS 140 cryptographic processing mode inside the Oracle Database and Exadata Storage Server.

In Oracle Database release 11.2.0.4.0, the underscore parameter _use_fips_mode provides the ability to turn on or off the FIPS 140 cryptographic processing in Oracle Database and Exadata Storage Server.

For example, using DBFIPS_140:

ALTER SYSTEM SET DBFIPS_140 = TRUE;

For example in the parameter file:

DBFIPS_140=TRUE

The following hardware components are now FIPS compliant with the firmware updates in the specified releases:

  • Oracle Server X5-2 and later systems are designed to be FIPS 140-2 compliant

  • Oracle Sun Server X4-8 with ILOM release 3.2.4

  • Sun Server X4-2 and X4-2L with SW1.2.0 and ILOM release 3.2.4.20/22

  • Sun Server X3-2 and X3-2L with SW1.4.0 and ILOM release 3.2.4.26/28

  • Sun Server X2-2 with SW1.8.0 and ILOM release 3.2.7.30.a

  • Cisco Catalyst 4948E-F Ethernet Switch

FIPS compliance for V1, X2-* and database node X3-8 generations of Exadata Database Machine Servers is not planned.

Minimum software: Oracle Database release 12.1.0.2.0 BP3, Oracle Database release 11.2.0.4 with MES Bundle on Top of Quarterly Database Patch For Exadata (APR2014 - 11.2.0.4.6), Oracle Exadata Storage Server Software release 12.1.2.1.0, ILOM 3.2.4.

See Also:

Oracle Database Security Guide for additional information about FIPS

14.6.21 Oracle Exadata Virtual Machines

Consolidated environments can now use Oracle Virtual Machine (Oracle VM) on X5-2, X4-2, X3-2, and X2-2 database servers to deliver higher levels of isolation between workloads. Virtual machine isolation is desirable for workloads that cannot be trusted to restrict their security, CPU, or memory usage in a shared environment. Examples are hosted or cloud environments, cross department consolidation, test and development environments, and non-database or third party applications running on a database machine. Oracle VM can also be used to consolidate workloads that require different versions of clusterware, for example SAP applications that require specific clusterware patches and versions.

The higher isolation provided by virtual machines comes at the cost of increased resource usage, management, and patching because a separate operating system, clusterware, and database install is needed for each virtual machine. Therefore it is desirable to blend Oracle VM with database native consolidation by consolidating multiple trusted databases within a virtual machine. Oracle Resource Manager can be used to control CPU, memory, and I/O usage for the databases within a virtual machine. The Oracle Multitenant option can be used to provide the highest level of consolidation and agility for consolidated Oracle databases.

Exadata Virtual Machines use high speed InfiniBand networking with Single Root I/O Virtualization (SR-IOV) to ensure that performance within a virtual machine is similar to Exadata's famous raw hardware performance. Exadata Smart Scans greatly decrease virtualization overhead compared to other platforms by dramatically reducing message traffic to virtual machines. Exadata Virtual Machines can dynamically expand or shrink CPUs and memory based on the workload requirement of the applications running in that virtual machine.

Virtual machines on Exadata are considered Trusted Partitions, and therefore software can be licensed at the virtual machine level instead of the physical processor level. Without Trusted Partitions, database options and other Oracle software must be licensed at a server or cluster level even though all databases running on that server or cluster may not require a particular option.

14.7 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.1)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.1) and Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1):

The preceding release 11.2.3.3.1 features are described in "What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1)."

14.7.1 Additional SQL Operators and Conditions for LOB and CLOB

Oracle Exadata Storage Server Software supports offload of many SQL operators and conditions for predicate evaluation. Offload of the following SQL operators and conditions are now supported by Oracle Exadata Storage Server Software:

  • LOB and CLOB Conditions

    • LIKE

    • REGEXP_LIKE

Smart Scan evaluates the LOB operators and conditions only when a LOB is inlined (stored in the table row). In addition, Smart Scan handles SecureFiles compression. Using SecureFiles compression helps reduce the size of LOBs so that they can be inlined.

Minimum software: Oracle Database release 12.1.0.2 for offload of LOB/CLOB

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for information about inline LOBs

14.8 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.0)

The following are new for Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.0):

14.8.1 Support for Oracle Database Releases 12c Release 1 (12.1) and 11g Release 2 (11.2)

Oracle Exadata Storage Server Software 12c Release 1 (12.1) supports Oracle Database releases 12c Release 1 (12.1) and 11g Release 2 (11.2) running on a single Oracle Exadata Database Machine. The database servers get full performance, such as smart scans, fast file creation, and fast incremental backup, from all Exadata Storage Servers running Oracle Exadata Storage Server Software release 12c Release 1 (12.1).

Oracle Exadata Storage Server Software 12c Release 1 (12.1) supports multiple database releases by running a separate offload server for each major database release, so that it can fully support all offload operations. Offload requests coming from Oracle Database 11g Release 2 (11.2) are handled automatically by a release 11g offload server, and offload requests coming from Oracle Database 12c Release 1 (12.1) are handled automatically by a 12c offload server.

Exadata Storage Server 12c Release 1 (12.1) can support the following releases of Oracle Database:

Database Release    Minimum Required Release

11.2.0.2            Bundle patch 22
11.2.0.3            Bundle patch 20
11.2.0.4            Current release
12.1.0.1            Current release

14.8.2 IORM Support for CDBs and PDBs

Oracle Database 12c Release 1 (12.1) supports a multitenant architecture. In a multitenant architecture, a container is a collection of schemas, objects, and related structures in an Oracle Multitenant container database (CDB) that appears logically to an application as a separate database. In a CDB, administrators can create multiple pluggable databases (PDBs) to run their workloads, and the workloads within those PDBs compete for shared resources.

By using CDB plans and PDB plans, I/O Resource Management (IORM) provides the ability to manage I/O resource utilization among different PDBs as well as manage the workloads within each PDB.



14.8.3 Cell to Cell Data Transfer

In earlier releases, Exadata Cells did not communicate directly with each other. Any data movement between the cells was done through the database servers. Data was read from a source cell into database server memory, and then written out to the destination cell. Starting with Oracle Exadata Storage Server Software 12c Release 1 (12.1), database server processes can offload data transfer to cells. A database server instructs the destination cell to read the data directly from the source cell and write the data to its local storage. This reduces the amount of data transferred across the fabric by half, reducing InfiniBand bandwidth consumption and memory requirements on the database server. Oracle Automatic Storage Management (Oracle ASM) resynchronization, resilver, and rebalance use this feature to offload data movement. This provides improved bandwidth utilization at the InfiniBand fabric level in Oracle ASM instances. No configuration is needed to use this feature.

Minimum software: Oracle Database 12c Release 1 (12.1) or later, and Oracle Exadata Storage Server Software 12c Release 1 (12.1) or later.
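The bandwidth arithmetic above can be made concrete with a short sketch. This is a back-of-the-envelope model, not Oracle code: the relay path moves every byte across the fabric twice (source cell to database server, then database server to destination cell), while the offloaded cell-to-cell path moves it once.

```python
# Fabric traffic for moving data between two cells: the database-server
# relay crosses the fabric twice per byte; the direct path only once.
def fabric_bytes(data_bytes, via_db_server):
    return data_bytes * (2 if via_db_server else 1)

data = 100 * 1024**3          # e.g. a 100 GiB rebalance (illustrative size)
relay = fabric_bytes(data, via_db_server=True)
direct = fabric_bytes(data, via_db_server=False)
print(relay // direct)        # the direct path halves fabric traffic
```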

14.8.4 Desupport of HP Oracle Database Machine Hardware

Oracle Exadata System Software 12c Release 1 (12.1) is not supported on HP Oracle Database Machine hardware. Oracle continues to support HP Oracle Database Machines running Oracle Exadata System Software 11g Release 2 (11.2).