A What's New in Oracle Exadata Database Machine

This appendix describes the new features included in Oracle Exadata Database Machine and Oracle Exadata System Software.

A.1 What's New in Oracle Exadata Database Machine 18c (18.1.0)

The following features are new for Oracle Exadata System Software 18c (18.1.0):

A.1.1 In-Memory OLTP and Consolidation Acceleration

Exadata Storage Servers add a new memory cache in front of flash memory, similar to how the existing flash cache sits in front of hard disks. This feature provides 100 microsecond (µs) online transaction processing (OLTP) read I/O latency, which is 2.5 times lower than the 250 µs flash OLTP read I/O latency. You can use existing memory upgrade kits to add more memory to storage servers to take advantage of this feature.

Cell RAM cache is a cache in front of the flash cache on the storage servers and is an extension of the database buffer cache. It is faster than the flash cache, but has a smaller capacity. When buffers are aged out of the database buffer cache, the database notifies the cells so that the cell RAM cache can be populated with these buffers according to caching policies. These buffers are cached exclusively in the cell RAM cache. If a data block needs to be returned to the database buffer cache, then the buffer, if present, is evicted from the cell RAM cache. The cell RAM cache is an exclusive cache: a data block is present either in the cell RAM cache or in the database buffer cache, but not in both.

During read operations, if a data block is not found in the database buffer cache (cache miss), then the database issues a read for the data block from the cell. CellSrv checks the RAM cache before accessing a lower layer in the IO stack (flash memory or hard disk):

  • If the data block is found in the RAM cache, then the data block is returned synchronously to the database. If the data block is going to be cached in the database buffer cache, then the storage server evicts the data block from the RAM cache.

  • If the data block is not found in RAM cache, then the storage server looks in the Flash cache. If the data block is not found in the Flash cache, then the data block is read from disk. The data block is returned to the database, but the data block is not added to the RAM cache.

During write operations, the database issues a write for a data block to the storage server. CellSrv checks the RAM cache before accessing a lower layer in the IO stack (Flash Cache or hard disk):

  • If the data block is found in the RAM cache, then CellSrv invalidates the corresponding cache line in the RAM cache, and sends the data block to the lower layer to be written to disk. The RAM cache is not populated.

  • If the data block is not found in RAM cache, then the storage server sends the data block to the lower layer to be written to disk. The RAM cache is not populated.

Unlike the flash cache, the RAM cache on the storage servers is an exclusive cache: a data block is present either in the RAM cache or in the database buffer cache, but not in both. When the database evicts a data block from the buffer cache, it instructs CellSrv to populate the data block in the RAM cache. CellSrv populates the RAM cache asynchronously.

If the storage server holding the primary mirror fails, the database redirects RAM cache population to the secondary mirror, so that the blocks are cached in the RAM cache of the secondary mirror. When the primary mirror comes back online, the blocks are populated back into the primary mirror's RAM cache.

A new Memory Cache section is available in the AWR report for monitoring RAM cache status and activities.
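The RAM cache is managed automatically, but you can check whether it is enabled and adjust its size from CellCLI. The following is a minimal sketch only; it assumes the cell exposes the ramCacheMode and ramCacheMaxSize attributes for this feature, so confirm the attribute names against your software release:

CellCLI> LIST CELL ATTRIBUTES ramCacheMode, ramCacheMaxSize
CellCLI> ALTER CELL ramCacheMode=on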

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0)

  • Oracle Exadata Storage Server X6 or X7

  • Patch for bug 26923396 applied to the Oracle Database home

A.1.2 In-Memory Columnar Caching on Storage Servers

Oracle Exadata System Software release 12.2.1.1.0 introduced support for In-Memory Columnar Caching on storage servers for Hybrid Columnar Compressed (HCC) tables. Oracle Exadata System Software 18c (18.1.0) extends In-Memory Columnar Caching on storage servers to additional table types, specifically uncompressed tables and OLTP compressed tables.

By extending the Database In-Memory format to uncompressed tables and OLTP compressed tables, smart scan queries on more table types can benefit from fast vector-processing in-memory algorithms on data stored in the storage server flash cache. With this format, most in-memory performance enhancements are supported in Smart Scan, including joins and aggregation. The Database In-Memory format is space efficient and usually takes up less space than the uncompressed or OLTP compressed formats, so storing data in Database In-Memory format also results in better flash cache space utilization.

Rewriting data from the traditional uncompressed or OLTP compressed format to Database In-Memory format is very CPU intensive. Oracle Exadata System Software has built-in intelligence to cache data in Database In-Memory format for regions that are not modified frequently.

Data from normal (unencrypted) as well as encrypted tablespaces can be cached in the in-memory columnar cache format.

Just as with Oracle Database In-Memory, the new Database In-Memory format is created by a background process so that it does not interfere with the performance of queries.

This feature is enabled by default when you configure the INMEMORY_SIZE database initialization parameter; no further configuration is required to use this new feature. If INMEMORY_SIZE is not configured, then uncompressed tables and OLTP compressed table data is cached in Flash Cache in its native format and not in the Database In-Memory columnar format.

If you need to disable this feature, you can use a new DDL keyword CELLMEMORY with the ALTER TABLE command. See "Enabling or Disabling In-Memory Columnar Caching on Storage Servers" in Oracle Exadata System Software User's Guide.
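For illustration, the following sketch shows the database-side prerequisite and the opt-out DDL; the table name is hypothetical, and the exact CELLMEMORY options are described in the guide referenced above:

SQL> ALTER SYSTEM SET INMEMORY_SIZE=4G SCOPE=SPFILE;
-- restart the database instance for INMEMORY_SIZE to take effect

SQL> ALTER TABLE sales NO CELLMEMORY;
SQL> ALTER TABLE sales CELLMEMORY MEMCOMPRESS FOR QUERY LOW;  -- re-enable the default columnar caching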

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0)

  • Oracle Database 12c release 1 (12.1.0.2) version 12.1.0.2.161018DBBP or Oracle Database 12c release 2 (12.2.0.1)

  • Patch for bug 24521608 if using Oracle Database 12c release 1 (12.1.0.2)

  • (Recommended) Patch for bug 26261327 (Enables better reverse offload functionality for complex queries)

A.1.3 Storage Server Cloud Scale Software Update

The Storage Server Cloud Scale Software Update feature introduces a new cloud-scale software update process for storage servers. You point the storage servers to a software store, and the storage servers download the new software in the background. You can schedule the preferred time of the software update. Storage servers automatically upgrade the Oracle Exadata System Software in a rolling fashion while keeping the databases online. A single software repository can be used for hundreds of storage servers. This feature provides simpler and faster software updates for cloud and on-premises customers.

Each storage server downloads the software to its active partition, and then installs the software on its passive partition. The storage servers then reboot to the new version according to the specified schedule.

This feature improves the scalability of software updates by allowing storage servers to update without dedicated patchmgr sessions. When updating hundreds of storage servers, administrators can use dcli to issue ALTER SOFTWAREUPDATE commands that set the software location and time parameters for all storage servers. Multiple software locations can be used for very large deployments to reduce contention.
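As an illustration, the following hedged sketch sets the software store and update window on one storage server; the repository URL is hypothetical, and the exact ALTER SOFTWAREUPDATE attribute names and time format should be verified in the maintenance guide. The same commands can be fanned out to all storage servers with dcli.

CellCLI> ALTER SOFTWAREUPDATE store="http://exa-store.example.com/cellsw"
CellCLI> ALTER SOFTWAREUPDATE time="04:00 AM Saturday"
CellCLI> LIST SOFTWAREUPDATE DETAIL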

See "Scheduling Automated Updates of Storage Servers" in the Oracle Exadata Database Machine Maintenance Guide for details.

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0). You can use this feature to install subsequent software updates.

A.1.4 Faster Database Server Software Update

The database server software update process now takes significantly less time than before and is up to 40% faster compared to previous releases. This reduces the cost and effort required to update the software on database servers.

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0)

A.1.5 Improved Ethernet Performance in Oracle VM

Oracle Exadata System Software 18c (18.1.0) optimizes the receiving and transmitting of Ethernet packets on systems using virtualization. This optimization provides a significant network performance boost, with lower network latency and improved CPU utilization on both dom0 and domU.

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0)

A.1.6 Performance Improvement Following Disk Online Completion

In previous releases, during Oracle ASM resync operations, if cachelines were not cached on the source storage server, Oracle Exadata System Software evicted them from the flash cache of the target storage server, even if they were already cached there. This potentially impacted primary mirrors, resulting in cache misses and degraded performance.

Starting with this release, Oracle Exadata System Software ensures that cachelines in the Oracle ASM resync chunk region that are already in the flash cache are preserved instead of being invalidated, which helps prevent cache misses. This provides a significant performance improvement during Oracle ASM resync operations compared to earlier releases.

Minimum requirements:

  • Oracle Exadata System Software 18c (18.1.0)

A.1.7 Improved High Availability After Flash Failures

Overall system performance after flash failures has been improved. Previously, after a flash failure, Oracle ASM would start reading from the disks on the affected Exadata Storage Server as soon as flash resilvering completed. However, that Storage Server would still have fewer flash devices than normal, so its performance was affected. Starting with Oracle Exadata System Software 18c (18.1.0), Oracle ASM starts reading from the disks only after all failed flash devices are replaced on that Storage Server.

Minimum software required:

  • Oracle Exadata System Software 18c (18.1.0)

A.1.8 OEDA Command Line Interface

The OEDA command-line interface (oedacli) is a new interface you can use to update an existing es.xml file. These updates are called Actions. An Action is a single atomic task; an example of an Action is creating a new guest. An Action can have many subcommands; however, most Actions are single commands. Examples of multi-command Actions are CLONE GUEST and CLONE CELL.

You can use oedacli to help with various Exadata life cycle management tasks, such as:

  • Add node to or remove node from a Virtual Cluster on Exadata

  • Add database home to or remove database home from physical cluster

  • Add or remove Storage cell

  • Resize Oracle ASM disk groups

  • Add or remove additional Databases

  • Add or remove additional database homes to an Oracle VM cluster

See "About the OEDA Command Line Interface" in the Oracle Exadata Database Machine Installation and Configuration Guide for details.

Minimum software required:

  • Oracle Exadata System Software 18c (18.1.0)

  • Oracle Exadata Deployment Assistant, August 2017 release

A.1.9 Exadata Database Machine X7 New Features

The following new features are available with Exadata Database Machine X7:

A.1.9.1 Do-Not-Service LED on Storage Servers

Powering off an Exadata Storage Server in a cluster with reduced redundancy may cause an Oracle ASM disk group force dismount and compromise data availability. To prevent human errors such as mistakenly powering off the wrong Storage Server, Exadata Storage Servers in Exadata Database Machine X7 come with a new Do-Not-Service LED. This LED indicates whether it is safe to power off the Exadata Storage Server for servicing. Starting with Exadata Storage Server Software release 18.1, the Do-Not-Service LED is turned on automatically, in real time, when redundancy is reduced, to inform system administrators or field engineers that the storage server should not be powered off for servicing.

For example, if an Exadata Storage Server or disk is offline, Exadata Storage Server Software automatically turns on the Do-Not-Service LED on the Storage Servers that contain the partner disks to indicate these servers should not be turned off for servicing. When redundancy is restored, Exadata Storage Server Software automatically turns off the Do-Not-Service LED to indicate that the Exadata Storage Server can be powered off for servicing.


Minimum requirements:

  • Oracle Exadata Storage Server Software release 18.1.0.0.0

  • Oracle Grid Infrastructure:

    • Release 12.1.0.2 July 2017 BP with ARU 21405133

    • Release 12.1.0.2 October 2017 BP or later

    • Release 12.2.0.1 July 2017 BP with ARU 21405125

    • Release 12.2.0.1 October 2017 BP or later

  • Oracle Exadata Database Machine X7-2 or X7-8 (Storage Servers only)

A.1.9.2 Online Flash Disk Replacement Exadata X7 Storage Servers

Previous generations of Exadata Database Machine allowed for online flash disk replacement in Extreme Flash Storage Server, but flash disk replacement in High Capacity Storage Server still required server downtime. Starting with Exadata Database Machine X7-2L and X7-8, flash disks in High Capacity Storage Server can also be replaced online without server downtime.

Oracle Exadata System Software constantly monitors flash disk health. If a flash disk fails or experiences poor performance, then the disk can be replaced right away. If a flash disk enters predictive failure, then, to ensure redundancy, the flash disk should not be replaced until:

  • Oracle ASM disk rebalance completes if the device is used as a data grid disk

  • Flash cache flush completes if the device is used for flash cache

Oracle Exadata System Software automatically monitors the progress of Oracle ASM disk rebalance and flash cache flush operations and notifies users once the flash disk can be safely replaced without compromising redundancy. In either case, when it is safe to replace a flash disk, Oracle Exadata System Software automatically prepares the flash disk for online replacement and moves the flash disk to the 'dropped for replacement' status to indicate that it is ready to be replaced. In addition, Oracle Exadata System Software automatically turns on the attention LED on the flash card and turns off the power LED on the card to help identify the card to replace.

System administrators or field engineers can open the chassis without shutting down the storage server, easily identify the card by the LED pattern, and replace the disks.
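For example, the following hedged CellCLI query lists flash disks that the software has prepared for online replacement; the exact status string can be confirmed in the maintenance guide:

CellCLI> LIST PHYSICALDISK WHERE diskType=FlashDisk AND status='dropped for replacement' DETAIL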

See "Performing a Hot Pluggable Replacement of a Flash Disk" in the Oracle Exadata Database Machine Maintenance Guidefor details.

Minimum requirements:

  • Exadata Extreme Flash Storage Server, or

  • Oracle Exadata System Software release 18.1.0.0.0 with Exadata High Capacity Storage Server X7-2 or Exadata Database Machine X7-8

A.1.9.3 New Configuration for System Partitions on Storage Servers

Previous generations of Exadata Database Machine used portions of the two disks in slots 0 and 1 as system partitions, where the operating system and Oracle Exadata System Software are installed. Starting with Exadata Database Machine X7, two M.2 disks are dedicated to the system partitions. All hard disks on High Capacity Storage Servers and all flash disks on Extreme Flash Storage Servers are now dedicated to data storage only.

This configuration separates the system I/O from the data I/O and improves the performance of the data disks in slots 0 and 1. Storage server disks can now be created on the entirety of the disks in slots 0 and 1 and have a uniform size across all disks.

Additionally, Oracle Exadata System Software creates system partitions on the M.2 disks with the latest Intel Rapid Storage Technology enterprise (Intel RSTe) RAID, which delivers faster performance and better data protection compared to the traditional software RAID.

Oracle Exadata System Software also supports online replacement of the M.2 disks. The M.2 disks can be replaced without server downtime.

See "Maintaining the M.2 Disks of Exadata Storage Servers" in the Oracle Exadata Database Machine Maintenance Guide for details.

Minimum software required:

  • Oracle Exadata System Software release 18.1.0.0.0

  • Oracle Exadata Database Machine X7-2 or X7-8

A.1.9.4 Secure Boot

Secure Boot is a method used to restrict which binaries can be executed to boot the system. With Secure Boot, the system UEFI firmware will only allow the execution of boot loaders that carry the cryptographic signature of trusted entities. In other words, anything run in the UEFI firmware must be signed with a key that the system recognizes as trustworthy. With each reboot of the server, every executed component is verified. This prevents malware from hiding embedded code in the boot chain.

  • Intended to prevent boot-sector malware or kernel code injection

  • Hardware-based code signing

  • Extension of the UEFI firmware architecture

  • Can be enabled or disabled through the UEFI firmware
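To confirm the boot mode from a running server, one option is the mokutil utility (assuming it is present on the Exadata system image):

# mokutil --sb-state
SecureBoot enabled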

See "Restricting the Binaries Used to Boot the System" in the Oracle Exadata Database Machine Security Guide for details.

Minimum software required:

  • Oracle Exadata Storage Server Software release 18.1.0.0.0

  • Oracle Exadata Database Machine X7-2 or X7-8

  • Bare metal installation

A.2 What's New in Oracle Exadata Database Machine 12c Release 2 (12.2.1.1.0)

The following features are new for Oracle Exadata Database Machine 12c Release 2 (12.2.1.1.0):

A.2.1 In-Memory Columnar Caching on Storage Servers

Oracle Exadata Storage Server Software release 12.2.1.1.0 can use fast vector-processing in-memory algorithms on data in the storage flash cache. This feature is available if you have licensed the Oracle Database In-Memory option.

The Database In-Memory format cache offers a significant boost to the amount of data held in Database In-Memory format and to Smart Scan performance, over and above that offered by the pure columnar Hybrid Columnar Compression (HCC) format.

Oracle Exadata Storage Server Software release 12.1.2.1.0 added a columnar flash cache format which automatically stored HCC data in pure columnar HCC format in the flash cache. This release extends that support by rewriting cached HCC data into Database In-Memory format, enabling ultra-fast single instruction, multiple data (SIMD) predicates to be used in Smart Scan. With this format, most in-memory performance enhancements are supported in Smart Scan, including joins and aggregation.

Data from normal (unencrypted) as well as encrypted tablespaces can be cached in the in-memory columnar cache format.

Just as with Oracle Database In-Memory, the new Database In-Memory format is created by a background process so that it does not interfere with the performance of queries.

This feature is enabled by default when the INMEMORY_SIZE database initialization parameter is configured and the user does not need to do anything to get this enhancement. See Oracle Database Reference for information about INMEMORY_SIZE. If INMEMORY_SIZE is not configured, then the HCC format columnar cache is still used exactly as in 12.1.2.1.0.

If you need to disable this feature, you can use a new DDL keyword CELLMEMORY with the ALTER TABLE command. See "Enabling or Disabling In-Memory Columnar Caching on Storage Servers" in Oracle Exadata System Software User's Guide.

This feature works with Oracle Database 12c release 1 (12.1.0.2) and Oracle Database 12c release 2 (12.2.0.1). Note that if you are using Oracle Database 12c release 1 (12.1.0.2), then the minimum software version required is 12.1.0.2.161018DBBP and you must install the patch for bug 24521608.

A.2.2 Columnar Flash Cache for Encrypted Tablespace

In Oracle Exadata Storage Server release 12.2.1.1.0, columnar flash cache support has been extended to encrypted tablespaces. If you have licensed the Oracle Database In-Memory option, the encrypted tablespace data is stored in in-memory columnar format on storage flash cache. If you have not licensed the option, the encrypted tablespace data is stored in pure columnar HCC format on storage flash cache.

This feature works with Oracle Database 12c release 1 (12.1.0.2) and Oracle Database 12c release 2 (12.2.0.1). Note that if you are using Oracle Database 12c release 1 (12.1.0.2), then the minimum software version required is 12.1.0.2.161018DBBP and you must install the patch for bug 24521608.

A.2.3 Set Membership in Storage Indexes

In Oracle Exadata Storage Server Software release 12.2.1.1.0, when data has been stored using the in-memory format columnar cache, Exadata stores these columns compressed using dictionary encoding. For columns with fewer than 200 distinct values, the storage index creates a very compact in-memory representation of the dictionary and uses this compact representation to filter disk reads based on equality predicates. This feature is called set membership. A more limited filtering ability extends up to 400 distinct values.

For example, suppose a region of disk holds a list of customers in the United States and Canada. When you run a query looking for customers in Mexico, Oracle Exadata Storage Server can use the new set membership capability to improve the performance of the query by filtering out disk regions that do not contain customers from Mexico. In Oracle Exadata Storage Server software releases earlier than 12.2.1.1.0, which do not have the set membership capability, a regular storage index would be unable to filter those disk regions.

This feature works with Oracle Database 12c release 1 (12.1.0.2) and Oracle Database 12c release 2 (12.2.0.1). Note that if you are using Oracle Database 12c release 1 (12.1.0.2), then the minimum software version required is 12.1.0.2.161018DBBP and you must install the patch for bug 24521608.

A.2.4 Storage Index to Store Column Information for More Than Eight Columns

In Oracle Exadata Storage Server Software releases earlier than 12.2.1.1.0, storage indexes can hold column information for up to eight columns. In Oracle Exadata Storage Server Software release 12.2.1.1.0, storage indexes have been enhanced to store column information for up to 24 columns.

Space to store column information for eight columns is guaranteed. For more than eight columns, space is shared between the column set membership summary and the column minimum/maximum summary. The type of workload determines whether the set membership summary is stored in the storage index. See "Set Membership in Storage Indexes" for more information.

A.2.5 5x Faster Storage Server Software Updates

Updating Oracle Exadata Storage Server software now takes even less time. The Oracle Exadata Storage Server software update process is now up to 2 times faster compared to 12.1.2.3.0, and up to 5 times faster compared to releases earlier than 12.1.2.3.0. A faster update time reduces the cost and effort required to update the software.

A.2.6 Faster Performance for Large Analytic Queries and Large Loads

Temp writes and temp reads are used when large joins or aggregation operations do not fit in memory and must be spilled to storage. In Oracle Exadata Storage Server releases earlier than 12.2.1.1.0, temp writes were not cached in the flash cache; both temp writes and subsequent temp reads went to hard disk only. In Oracle Exadata Storage Server release 12.2.1.1.0, temp writes are sent to the flash cache so that subsequent temp reads can also be served from the flash cache. This significantly speeds up queries that spill to temp if they are temp I/O bound; for certain queries, performance can improve by up to four times. This is comparable to placing the temporary tablespace entirely in flash. Write-back flash cache must be enabled for this feature to work.
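Because write-back flash cache is a prerequisite, you can confirm the flash cache mode on each cell before relying on this feature; the procedure for switching from WriteThrough to WriteBack is in the Exadata documentation and is not shown here:

CellCLI> LIST CELL ATTRIBUTES flashCacheMode
         WriteBack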

Prior to Oracle Exadata Storage Server release 12.2.1.1.0, there was a size threshold for writes to the flash cache: most writes over 128 KB were routed straight to disk because those writes were not expected to be read again soon. For example, direct load writes, flashback database log writes, archived log writes, and incremental backup writes would bypass the flash cache. Starting with Oracle Exadata Storage Server release 12.2.1.1.0, the flash cache algorithms have been enhanced to redirect such large writes into the flash cache, provided that they do not disrupt the higher priority OLTP or scan workloads. Such writes are later written back to the disks when the disks are less busy. This feature allows Oracle Exadata Storage Server to utilize spare flash capacity and I/O bandwidth to provide better overall performance.

Note that this feature is supported on all Oracle Exadata hardware except for V2 and X2 storage servers. On X3 and X4 storage servers, flash caching of temp writes and large writes is not supported when flash compression is enabled.

This feature works with all Oracle Database releases. Note that if you are using Oracle Database 11g release 2 (11.2) or Oracle Database 12c release 1 (12.1), then you need the patches for bug 24944847. New statistics and report sections related to this feature were added to Automatic Workload Repository (AWR) reports as part of bug 25410017, available in the July 2017 DBBP.

A.2.7 Secure Eraser

Oracle Exadata Storage Server software releases 12.2.1.1.0 or later provide a secure erasure solution, called Secure Eraser, for every component within Oracle Exadata Database Machine. This is a comprehensive solution that covers all Exadata Database Machines V2 and higher, including both 2-socket and 8-socket servers.

In earlier versions of Oracle Exadata Database Machine, you could securely erase user data through CellCLI commands like DROP CELL ERASE, DROP CELLDISK ERASE, or DROP GRIDDISK ERASE. However, these DROP commands only cover user data on hard drives and flash devices. Secure Eraser sanitizes all content, not only user data but also operating system, Oracle Exadata software, and user configurations. In addition, Secure Eraser covers a wider range of hardware components including hard drives, flash devices, internal USB, and ILOM.

The Secure Eraser securely erases all data on both database servers and storage servers, and resets InfiniBand switches, Ethernet switches, and power distribution units back to factory default. You can use this feature to decommission or repurpose an Oracle Exadata machine. The Secure Eraser completely erases all traces of data and metadata on every component of the machine.

For details on the Secure Eraser utility, see Oracle Exadata Database Machine Security Guide.

A.2.8 Cell-to-Cell Offload Support for Oracle ASM-Scoped Security

To perform cell-to-cell offload operations efficiently, storage servers need to access other storage servers directly, instead of through a database server.

If you have configured Oracle ASM-scoped security in your Exadata environment, you need to set up cell keys to ensure that storage servers can authenticate themselves to other storage servers and communicate with each other directly. This applies to Oracle ASM resync, resilver, rebuild, and rebalance operations, and to database high-throughput write operations.

A.2.9 Adding an Additional Network Card to Oracle Exadata X6-2 Database Servers

Oracle Exadata Database Server X6-2 offers a highly available copper 10 Gbps network on the motherboard, and an optical 10 Gbps network through a PCI card in slot 2. Starting with Oracle Exadata software release 12.2.1.1.0, you can add an additional Ethernet card if you require additional connectivity. The additional card can provide either dual-port 10 GbE optical connectivity (part number X1109A-Z) or dual-port 10 GbE copper connectivity (part number 7100488). You install this card in PCIe slot 1 of the Oracle Exadata X6-2 database server.

After you install the network card and connect it to the network, Oracle Exadata Storage Server software release 12.2.1.1.0 automatically recognizes the new card and configures the two ports as eth6 and eth7 interfaces on the database server. You can use these additional ports for providing an additional client network, or for creating a separate backup or disaster recovery network. On a database server that runs virtual machines, you could use this network card to isolate traffic from two virtual machines.

A.2.10 Automatic Diagpack Upload for ASR

In Oracle Exadata software release 12.2.1.1.0, Management Server (MS) communicates with ASR Manager to automatically upload a diagnostic package containing information relevant to the automatic service request (SR). In earlier releases, you had to upload this diagnostic information manually after an SR was opened. By automating this step, this feature significantly reduces the turnaround time of ASRs.

This feature adds two new attributes to the AlertHistory object:

  • The new serviceRequestNumber attribute shows the associated service request number.

  • The new serviceRequestLink attribute shows the URL to the associated service request number.

Other feature highlights include:

  • The diagnostic package RESTful page (https://hostname/diagpack/download?name=diagpackname) has a new column showing a link to the corresponding service request.

  • ASR alert emails include SR links.

To enable Automatic Diagpack Upload for ASR, you must enable http_receiver in the ASR manager:

  • To check if http_receiver is enabled, run the following command from ASR manager:

    asr show_http_receiver
    
  • To enable the http_receiver, use the following command, where port is the port the http_receiver listens on.

    asr enable_http_receiver -p port
    

    Note:

    The port specified here has to be the same as the asrmPort specified for the subscriber on the database server or on the storage server. The following commands show how to verify the asrmPort on a database server and storage server.

    DBMCLI> LIST DBSERVER ATTRIBUTES snmpSubscriber 
         ((host=test-engsys-asr1.example.com, port=162,community=public, 
    type=ASR,fromIP=10.242.0.55,asrmPort=16168))
    
    
    CellCLI> LIST CELL ATTRIBUTES snmpSubscriber
         ((host=test-engsys-asr1.example.com,port=162,community=public,
    type=ASR,asrmPort=16168))
    

If you do not want to automatically upload diagnostic data to a service request, you can run ALTER CELL diagPackUploadEnabled=FALSE to disable the automatic upload.

Minimum software required: ASR Manager Release 5.7

A.2.11 CREATE DIAGPACK and LIST DIAGPACK Commands Available for Database Servers

The diagnostic package feature, which was previously available for storage servers, is now available for database servers as well. When a critical database server alert is generated, Management Server on the database nodes automatically collects a customized diagnostic package that includes the relevant logs and traces. Timely collection of diagnostic information prevents rollover of critical logs.

Management Server on database nodes sends the diagnostic package as an email attachment for every critical email alert. Users can also create custom diagnostic packages by specifying the start time and duration in hours with the new CREATE DIAGPACK DBMCLI command.
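For example, the following hedged DBMCLI commands create and then list a custom package; the packStartTime and durationInHrs attribute names and the timestamp format should be confirmed in the maintenance guide:

DBMCLI> CREATE DIAGPACK packStartTime="2017_06_06T18_00_00", durationInHrs=1
DBMCLI> LIST DIAGPACK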

See CREATE DIAGPACK and LIST DIAGPACK in the Oracle Exadata Database Machine Maintenance Guide for details.

A.2.12 Rescue Plan

In Oracle Exadata Storage Server software releases earlier than 12.2.1.1.0, after a storage server or database server rescue, you had to re-run multiple commands to configure items such as IORM plans, thresholds, and storage server and database server notification settings. In Oracle Exadata Storage Server software release 12.2.1.1.0, there is a new attribute called rescuePlan for the cell and dbserver objects. You can use this attribute to get a list of commands, which you can store as a script and run after a cell rescue to restore the settings.
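For example, you can display the rescue plan with CellCLI, or capture it to a file from the operating system for later replay (the output file name below is an example):

CellCLI> LIST CELL ATTRIBUTES rescuePlan

# cellcli -e "list cell attributes rescuePlan" > /root/cell_rescue_plan.cli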

For details on the rescuePlan attribute, see Oracle Exadata Database Machine Maintenance Guide.

A.2.13 Support for IPv6 Oracle VM and Tagged VLANs

Oracle Exadata 12.2.1.1.0 supports IPv6 Oracle VM and tagged virtual LANs (VLANs) using Oracle Exadata Deployment Assistant (OEDA).

IPv6 VLANs are now supported on the management network. In earlier releases, this was not supported.

See Oracle Exadata Database Machine Installation and Configuration Guide.

A.2.14 Management Server Can Remain Online During NTP, DNS, and ILOM Changes

If you are changing NTP, DNS, or ILOM parameters, the Management Server can remain online during the operation and does not need to be restarted.

A.2.15 New Charts in ExaWatcher

In Oracle Exadata Storage Software release 12.2.1.1.0, GetExaWatcherResults.sh generates HTML pages that contain charts for IO, CPU utilization, cell server statistics, and alert history. The IO and CPU utilization charts use data from iostat, while the cell server statistics use data from cellsrvstat. Alert history is retrieved for the specified timeframe.
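A hedged invocation sketch follows; the --from and --to option names and the timestamp format are assumptions to be checked against the script's usage help:

# ./GetExaWatcherResults.sh --from "08/07/2017 09:00:00" --to "08/07/2017 11:00:00"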

For details, see "ExaWatcher Charts" in the Oracle Exadata Database Server Maintenance Guide.

A.2.16 New Metrics for Redo Log Writes

New metrics are available to help analyze redo log write performance.

Previously, when Automatic Workload Repository (AWR) reported an issue with redo log write wait time for database servers, the storage cells often indicated no issue with redo log write performance. New metrics help to give a better overall picture. These metrics provide insight into the following concerns:

  • Is the I/O latency high, or is it some other factor (for example, network)?

  • How many redo log writes bypassed Flash Log?

  • What is the overall latency of redo log writes on each cell, taking into account all redo log writes, not just those which were handled by Flash Log?

Oracle Exadata software release 12.2.1.1.0 introduces the following metrics related to redo log write requests:

  • FL_IO_TM_W: Cumulative redo log write latency. It includes latency for requests not handled by Oracle Exadata Smart Flash Log.

  • FL_IO_TM_W_RQ: Average redo log write latency. It includes write I/O latency only.

  • FL_RQ_TM_W: Cumulative redo log write request latency. It includes networking and other overhead.

    To get the latency overhead due to factors such as network and processing, you can use (FL_RQ_TM_W - FL_IO_TM_W).

  • FL_RQ_TM_W_RQ: Average redo log write request latency.

  • FL_RQ_W: Total number of redo log write requests. It includes requests not handled by Oracle Exadata Smart Flash Log.

    To get the number of redo log write requests not handled by Oracle Exadata Smart Flash Log, you can use (FL_RQ_W - FL_IO_W).
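These metrics can be examined with CellCLI, for example:

CellCLI> LIST METRICCURRENT WHERE name LIKE 'FL_.*' ATTRIBUTES name, metricObjectName, metricValue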

A.2.17 Quarantine Manager Support for Cell-to-Cell Rebalance and High Throughput Write Operations

Quarantine manager support is enabled for rebalance and high throughput writes in cell-to-cell offload operations. If Oracle Exadata detects a crash during these operations, the offending operation is quarantined, and a less optimized path is used to continue the operation.

The quarantine types for the new quarantines are ASM_OFFLOAD_REBALANCE and HIGH_THROUGHPUT_WRITE.
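For example, to check whether any such quarantines exist on a cell:

CellCLI> LIST QUARANTINE WHERE quarantineType='ASM_OFFLOAD_REBALANCE' DETAIL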

See "Quarantine Manager Support for Cell-to-Cell Offload Operations" in the Oracle Exadata Storage Server Software User's Guide for details.

A.2.18 ExaCLI and REST API Enabled for Management Server

Both ExaCLI and REST API are enabled for Management Server (MS) on the database nodes.

You can now perform remote execution of MS commands. You can access the interface using HTTPS in a web browser, or curl. See Oracle Exadata Database Machine Maintenance Guide for more information.
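A minimal sketch of a remote call through ExaCLI is shown below; the host name and user are hypothetical, and the user must already be set up for remote MS access:

exacli -l exa_monitor -c dbnode01.example.com -e "list dbserver attributes name"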

A.2.19 New Features in Oracle Grid Infrastructure 12c Release 2 (12.2.0.1)

The following new features in Oracle Grid Infrastructure 12c release 2 (12.2.0.1) affect Oracle Exadata:

A.2.19.1 Oracle ASM Flex Disk Groups

An Oracle ASM flex disk group is a disk group type that supports Oracle ASM file groups.

An Oracle ASM file group describes a group of files that belong to a database, and enables storage management to be performed at the file group, or database, level. In general, a flex disk group enables users to manage storage at the granularity of the database, in addition to at the disk group level.
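The following hedged SQL sketch creates a flex disk group and adds a file group for one database; the disk discovery string, names, and attribute values are illustrative only:

SQL> CREATE DISKGROUP flexdata FLEX REDUNDANCY
       DISK 'o/*/DATAFLEX_*'
       ATTRIBUTE 'compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0';

SQL> ALTER DISKGROUP flexdata ADD FILEGROUP fg_db1 DATABASE db1;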

See "Managing Flex Disk Groups" in Oracle Automatic Storage Management Administrator's Guide.

A.2.19.2 Oracle Flex ASM

Oracle Flex ASM enables Oracle ASM instances to run on a separate physical server from the database servers.

If the Oracle ASM instance on a node in a standard Oracle ASM cluster fails, then all of the database instances on that node also fail. In an Oracle Flex ASM configuration, however, Oracle 12c database instances do not fail, because they can access another Oracle ASM instance remotely on another node.

With Oracle Flex ASM, you can consolidate all the storage requirements into a single set of disk groups. All these disk groups are mounted by and managed by a small set of Oracle ASM instances running in a single cluster. You can specify the number of Oracle ASM instances with a cardinality setting.

Oracle Flex ASM is enabled by default with Oracle Database 12c release 2 (12.2). Oracle Exadata ships with the cardinality set to ALL, which means an Oracle ASM instance is created on every available node.
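You can view or change the cardinality with srvctl, for example:

srvctl config asm
srvctl modify asm -count 3
srvctl modify asm -count ALL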

A.2.19.3 Faster Redundancy Restoration After Storage Loss

Using Oracle Grid Infrastructure 12c Release 2 (12.2), redundancy restoration after storage loss takes less time than in previous releases.

A new REBUILD phase was introduced to the rebalance operation. The REBUILD phase restores redundancy first after storage failure, greatly reducing the risk window within which a secondary failure could occur. A subsequent BALANCE phase restores balance.

Oracle Grid Infrastructure release 12.1.0.2 with DBBP 12.1.0.2.170718 also includes the Oracle ASM REBUILD phase of rebalance.

Note:

In Oracle Grid Infrastructure 12c release 2 (12.2), rebuild is tracked in GV$ASM_OPERATION via a separate pass (REBUILD). In Oracle Grid Infrastructure 12c release 1 (12.1), both rebuild and rebalance phases are tracked in the same pass (REBALANCE). 

A.2.19.4 Dynamic Power Change

You can adjust the value of the ASM_POWER_LIMIT parameter dynamically.

If the POWER clause is not specified in an ALTER DISKGROUP statement, or when rebalance is implicitly run by adding or dropping a disk, then the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can adjust the value of this parameter dynamically. The range of values for the POWER clause is the same as for the ASM_POWER_LIMIT initialization parameter.

The higher the power limit, the more quickly a rebalance operation can complete. Rebalancing takes longer with lower power values, but consumes fewer processing and I/O resources which are shared by other applications, such as the database.
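For example, you can raise the default rebalance power dynamically, or override it for a single operation:

SQL> ALTER SYSTEM SET ASM_POWER_LIMIT=64;          -- on the Oracle ASM instance
SQL> ALTER DISKGROUP data REBALANCE POWER 128;     -- explicit POWER overrides the default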

See "Tuning Rebalance Operations" in Oracle Automatic Storage Management Administrator's Guide

A.2.19.5 Quorum Disk Support in Oracle Installer

You can specify a quorum failure group during the installation of Oracle Grid Infrastructure.

On Oracle Exadata servers, quorum disk groups are automatically created during deployment. A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available.

The installer for Oracle Grid Infrastructure 12.2 was updated to allow you to specify quorum failure groups during installation instead of configuring the quorum failure group after installation using the Quorum Disk Manager utility.

See "Identifying Storage Requirements for Oracle Automatic Storage Management" in Oracle Grid Infrastructure Installation and Upgrade Guide for Linux

A.2.20 New Features in Oracle Database 12c Release 2 (12.2.0.1)

The following new features in Oracle Database 12c release 2 (12.2.0.1) affect Oracle Exadata:

A.2.20.1 Database Server I/O Latency Capping

On very rare occasions there may be high I/O latency between a database server and a storage server due to network latency outliers, hardware problems on the storage servers, or some other system problem with the storage servers. Oracle ASM and Oracle Exadata Storage Server software automatically redirect read I/O operations to another storage server when the latency of the read I/O is much longer than expected. Any I/Os issued to the last valid mirror copy of the data are not redirected.

This feature works with all Exadata Storage Software releases. You do not have to perform any configuration to use this feature.

Minimum software required: Oracle Database and Oracle Grid Infrastructure 12c release 2 (12.2.0.1.0)

A.2.20.2 Exadata Smart Scan Offload for Compressed Index Scan

In Oracle Exadata Storage Server Software 12.1.2.3.0 and prior releases, smart scan offload supported normal uncompressed indexes and bitmap indexes.

In Oracle Exadata Storage Server Software 12.2.1.1.0, smart scan offload has been implemented for compressed indexes. Queries involving compressed index scan on Oracle Exadata can benefit from this feature.

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0

A.2.20.3 Exadata Smart Scan Offload Enhancements for In-Memory Aggregation (IMA)

Oracle Exadata Storage Server software supports offloading many SQL operators for predicate evaluation. The In-Memory Aggregation feature attempts to perform a "vector transform" optimization, which takes a star join SQL query with certain aggregation operators (for example, SUM, MIN, MAX, and COUNT) and rewrites it for more efficient processing. A vector transformation query is similar to a query that uses a bloom filter for joins, but is more efficient. When a vector transformed query is used with Oracle Exadata Storage Server release 12.1.2.1.0, the performance of joins in the query is enhanced by the ability to offload filtration for rows used for aggregation. You will see "KEY VECTOR USE" in the query plan when this optimization is used.

In Oracle Exadata Storage Server software release 12.2.1.1.0, vector transformed queries benefit from more efficient processing due to the application of group-by columns (key vectors) to the Exadata Storage Index.

Additionally, vector transformed queries that scan data in in-memory columnar format on the storage server can offload processing of aggregation work. These optimizations are automatic and do not depend on user settings.

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0

A.2.20.4 Exadata Smart Scan Offload Enhancements for XML

When XML data is stored using a SecureFiles LOB of less than 4 KB, the evaluation in a SQL WHERE clause of Oracle SQL condition XMLExists or Oracle SQL function XMLCast applied to the return value of Oracle SQL function XMLQuery can sometimes be offloaded to an Oracle Exadata Storage Server.

Minimum software required: Oracle Database 12c Release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0.

A.2.20.5 Exadata Smart Scan Offload Enhancements for LOBs

In Oracle Exadata Storage Server 12.2.1.1.0, offload support has been extended to the following LOB operators: LENGTH, SUBSTR, INSTR, CONCAT, LPAD, RPAD, LTRIM, RTRIM, LOWER, UPPER, NLS_LOWER, NLS_UPPER, NVL, REPLACE, REGEXP_INSTR, and TO_CHAR.

Exadata smart scan offload evaluation is supported only on uncompressed inlined LOBs (less than 4 KB in size).
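For illustration, a query of the following shape is a candidate for this offload when the LOB column is an uncompressed inline SecureFiles LOB under 4 KB; the table and column names are hypothetical:

SQL> SELECT order_id
     FROM orders
     WHERE UPPER(SUBSTR(note_lob, 1, 20)) = 'PRIORITY SHIPMENT';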

Minimum software required: Oracle Database 12c release 2 (12.2.0.1.0) and Oracle Exadata Storage Server software release 12.2.1.1.0.

A.2.20.6 New Features in Oracle Exadata Snapshots

  • Hierarchical snapshot databases

    You can create space-efficient snapshot databases from a parent that is itself a snapshot, which allows for hierarchical snapshot databases. The parent snapshot is also space-efficient, all the way to the base test master. Multiple users can create their own snapshots from the same parent snapshot. The set of snapshots can be represented as a tree, where the root of the tree is the base test master. All the internal nodes in the tree are read-only databases and all the leaves in the tree are read/write databases. All Oracle Exadata features are supported on hierarchical snapshot databases. Because there is a performance penalty with every additional depth level of the snapshot, Oracle recommends a snapshot tree with a maximum depth of 10.

  • Sparse Test Master databases

    You can also create and manage a sparse test master, while having active snapshots from it. This feature allows the sparse test master to sync almost continuously with Oracle Data Guard, except for small periods of time when users are creating a snapshot directly from the sparse test master. This feature utilizes the hierarchical snapshot feature described above, by creating read-only hidden parents. Note that Oracle Exadata snapshot databases are intended for test and development databases only.

  • Sparse backup and recovery

    When you perform a sparse backup of a database that uses sparse data files, the operation copies data only from the delta storage space of the database and the delta space of the sparse data files. A sparse backup can be either in the backup set format (the default) or the image copy format. RMAN restores sparse data files from sparse backups and then recovers them from archive and redo logs. You can perform a complete or a point-in-time recovery on sparse data files. Sparse backups help in efficiently managing storage space and facilitate faster backup and recovery.

    See Oracle Database Backup and Recovery User’s Guide for information about sparse backups.
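    A hedged RMAN sketch of taking a sparse backup in the default backup set format follows; confirm the exact syntax in the guide referenced above:

    RMAN> BACKUP AS SPARSE BACKUPSET DATABASE;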

Minimum hardware: Storage servers must be X3 or later

Minimum software: Oracle Database and Grid Infrastructure 12c release 2 (12.2), and Oracle Exadata Storage Server software release 12.2.1.1.0.

A.2.21 Oracle Linux Kernel Upgraded to Unbreakable Enterprise Kernel 4 and Oracle VM Upgraded to 3.4.2

This release upgrades Oracle Linux to Unbreakable Enterprise Kernel (UEK) 4 (4.1.12-61.28.1.el6uek.x86_64). For systems using virtualization, the dom0 is upgraded to Oracle VM 3.4.2, which enables you to use Oracle Linux 6 on the dom0. The Linux kernels used on the dom0 and domU are now unified.

For systems previously using virtualization on the compute nodes, you must upgrade the Oracle Grid Infrastructure home to release 12.1.0.2.161018DBBP or later in all domUs before upgrading the Oracle Exadata Storage Server software to release 12.2.1.1.0. The Oracle Exadata Storage Server software upgrade to release 12.2.1.1.0 requires you to upgrade all the domUs before you upgrade the dom0. This requirement is enforced by the patchmgr software.

If you use Oracle ASM Cluster File System (Oracle ACFS), then you must apply the fix for bug 22810422 prior to the upgrade of the Oracle Grid Infrastructure home to enable Oracle ACFS support on the UEK4 kernel. In addition, Oracle recommends that you install the fix for bug 23642667 on both the Oracle Grid Infrastructure home and the Oracle Database home to increase OLTP workload performance.

A.3 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.3.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.3.0):

A.3.1 Performance Improvement for Storage Server Software Updates

Updating Oracle Exadata Storage Server Software now takes significantly less time. By optimizing internal processing even further, the cell update process is now up to 2.5 times faster compared to previous releases.

A.3.2 Quorum Disk Manager Utility

In earlier releases, when Oracle Exadata systems with fewer than 5 storage servers were deployed with HIGH redundancy, the voting disk for the cluster was created on a disk group with NORMAL redundancy. If two cells go down in such a system, the data is still preserved due to HIGH redundancy but the cluster software comes down because the voting disk is on a disk group with NORMAL redundancy.

Quorum disks enable users to deploy and leverage disks on database servers to achieve the highest redundancy in quarter rack or smaller configurations. Quorum disks are created on the database servers and added to the quorum failure group.

For new systems to be configured with HIGH redundancy but having fewer than 5 storage servers, Oracle Exadata Deployment Assistant can be used to automatically create such quorum disks.

Users who have deployed such systems can use the new quorumdiskmgr utility to create quorum disks manually. quorumdiskmgr enables you to manage quorum disks on the database servers; with this utility, you can create, list, delete, and alter quorum disk configurations, targets, and devices.
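A hedged sketch of inspecting an existing quorum disk configuration with the utility is shown below; the option names should be confirmed in the guide referenced below:

quorumdiskmgr --list --config
quorumdiskmgr --list --target
quorumdiskmgr --list --device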

See "Managing Quorum Disks for High Redundancy Disk Groups" in the Oracle Exadata Database Machine Maintenance Guide for details.

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with these patches: 22722476 and 22682752; or Grid Infrastructure release 12.1.0.2.160419 or later

  • Patch 23200778 for all Database homes

A.3.3 VLAN Support

OEDA now supports the creation of VLANs on compute nodes and storage servers for the admin network, ILOM, client network, and backup access network. Note the following:

  • Client and backup VLAN networks must be bonded. The admin network is never bonded.

  • If the backup network is on a tagged VLAN network, the client network must also be on a separate tagged VLAN network.

  • The backup and client networks can share the same network cables.

  • OEDA supports VLAN tagging for both physical and virtual deployments.

  • IPv6 VLANs are supported for bare metal on all Oracle Exadata systems except for X2 and V2 systems.

    IPv6 VLAN with VM is not supported currently.

Note:

If your system will use more than 10 VIP addresses in the cluster and you have VLAN configured for the Oracle Clusterware client network, then you must use 3 digit VLAN ids. Do not use 4 digit VLAN ids because the VLAN name can exceed the 15 character operating system interface name limit.

The following summarizes IPv4 and IPv6 support for VLAN tagging on the admin, client, and backup networks for the different Exadata systems and Oracle Database versions.

Oracle Database 11.2.0.4:

  • VLAN tagging on the admin network: supported only with IPv4 addresses, on X3-2 and later two-socket servers and X4-8 and later eight-socket servers.

  • Client and backup networks: supported with IPv4 and IPv6 on all hardware models.

Oracle Database 12.1.0.2:

  • VLAN tagging on the admin network: supported only with IPv4 addresses, on X3-2 and later two-socket servers and X4-8 and later eight-socket servers.

  • Client and backup networks: supported with IPv4 on all hardware models, and with IPv6 on all hardware models with the fix for bug 22289350.

See "Using Network VLAN Tagging with Oracle Exadata Database Machine" in the Oracle Exadata Database Machine Installation and Configuration Guide for details.

A.3.4 Adaptive Scrubbing Schedule

Oracle Exadata Storage Server Software automatically inspects and repairs hard disks periodically when the hard disks are idle. The default schedule of scrubbing is every two weeks.

However, once a hard disk starts to develop bad sectors, it is better to scrub that disk more frequently because it is likely to develop more bad sectors. In release 12.1.2.3.0, if a bad sector is found on a hard disk in a current scrubbing job, Oracle Exadata Storage Server Software will schedule a follow-up scrubbing job for that disk in one week. When no bad sectors are found in a scrubbing job for that disk, the schedule will fall back to the scrubbing schedule specified by the hardDiskScrubInterval attribute.

If the user has changed the hardDiskScrubInterval to less than or equal to weekly, Oracle Exadata Storage Server Software will use the user-configured frequency instead of the weekly follow-up schedule even if bad sectors are found. See the ALTER CELL section in the Oracle Exadata Storage Server Software User's Guide for more information about scrubbing.
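For example, to check and change the scrubbing interval on a cell:

CellCLI> LIST CELL ATTRIBUTES hardDiskScrubInterval
CellCLI> ALTER CELL hardDiskScrubInterval=weekly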

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure home:

    • 11.2.0.4.16 (April 2015) or higher

    • 12.1.0.2.4 (January 2015) or higher

A.3.5 IPv6 Support in ASR Manager

Systems using IPv6 can now connect to Auto Service Request (ASR) using ASR Manager 5.4.

A.3.6 Increased Maximum Number of Database Processes

Table A-1 shows the maximum number of database processes supported per database node. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the values in the "Maximum Number of Processes with No Parallel Queries" column and the "Maximum Number of Processes with All Running Parallel Queries" column.

Table A-1 Maximum Number of Database Processes Per Node

Machine Type                  InfiniBand Bonding Type   Maximum Number of Processes    Maximum Number of Processes with
                                                        with No Parallel Queries       All Running Parallel Queries
8-socket (X2-8, X3-8)         Active passive            28,500                         25,000
8-socket (X4-8, X5-8)         Active bonding            100,000                        44,000
2-socket (X2-2, X3-2)         Active passive            12,500                         10,000
2-socket (X4-2, X5-2, X6-2)   Active bonding            25,000                         14,000

Table A-2 shows the maximum number of database processes supported per Oracle VM user domain. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the "Maximum Number of Processes with No Parallel Queries" column and the "Maximum Number of Processes with All Running Parallel Queries" column.

Table A-2 Maximum Number of Database Processes Per Oracle VM User Domain

Machine Type                  InfiniBand Bonding Type   Maximum Number of Processes    Maximum Number of Processes with
                                                        with No Parallel Queries       All Running Parallel Queries
2-socket (X2-2, X3-2)         Active passive            11,500                         8,000
2-socket (X4-2, X5-2, X6-2)   Active bonding            23,000                         14,000

The machines are configured as follows:

  • On an 8-socket database node with active bonding InfiniBand configurations (X4-8 and X5-8), there are 8 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On an 8-socket database node with active-passive InfiniBand configurations (X2-8 and X3-8), there are 4 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On a 2-socket database node with active bonding InfiniBand configurations (X4-2, X5-2, and X6-2), there are 2 IP addresses on 1 InfiniBand card (2 InfiniBand ports).

  • On a 2-socket database node with active-passive InfiniBand configurations (X2-2 and X3-2), there is 1 IP address on 1 InfiniBand card (2 InfiniBand ports).

Up to 50,000 RDS sockets are allocated per InfiniBand IP address for database usage. Each IO-capable database process will consume RDS sockets across IPs with even load balancing.

Starting with Exadata 12.1.2.3.0, there is no connection limit on the cell side.

In addition to the higher process count supported by the Exadata image and the Oracle kernel, the following related products have also been enhanced:

  • Oracle Exadata Deployment Assistant automatically configures higher limits in Grid_home/crs/install/s_crsconfig_<nodename>_env.txt at deployment time.

  • Exadata Patch Manager (patchmgr and dbnodeupdate.sh) automatically configures higher limits in Grid_home/crs/install/s_crsconfig_<nodename>_env.txt during database node upgrades.

The following best practices should be followed to ensure optimal resource utilization at high process count:

  • Application-initiated Oracle foregrounds should be established through a set of Oracle listeners running on the Exadata database nodes instead of using local bequeath connections.

  • The number of listeners should be at least as high as the number of database node CPU sockets, and every database node CPU socket should run the same number of listeners. For example, on an Oracle Exadata X5-8 database node, eight listeners could be configured, one per database node CPU socket.

  • Listeners should spawn Oracle processes evenly across database node CPU sockets. This can be done by specifying the socket they will run on at startup time. For example, assuming the listener.ora file is configured correctly for listeners 0 through 7, the following script could be used to spawn eight listeners on an X5-8 database node, each on a different socket:

    #!/bin/bash
    # Start one listener per database node CPU socket, binding each listener
    # process to its own NUMA node so connections are spread evenly across sockets.
    export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
    for socket in `seq 0 7`
    do
      numactl --cpunodebind=${socket} $ORACLE_HOME/bin/lsnrctl start LISTENER${socket}
    done
    
  • Listener connection rate throttling should be used to control login storms and provide system stability at high process counts.

  • The total number of connections established per second (that is, the sum of the rate_limit values across all listeners) should not exceed 400, to avoid excessive client connection timeouts and server-side errors. A sample listener.ora fragment follows this list.
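
The exact listener.ora syntax depends on your Oracle Net release; the following fragment is a hedged sketch (the listener name, host, port, and rate value are hypothetical) showing one listener configured with connection rate throttling:

CONNECTION_RATE_LISTENER0 = 50        # hypothetical per-listener rate, in connections per second
LISTENER0 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbnode1)(PORT = 1521)(RATE_LIMIT = yes)))

With eight such listeners, the sum of the configured rates should stay at or below the 400 connections per second guideline above.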

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Oracle Database 12c Release 1 (12.1) release 12.1.0.2.160119 with these patches: 22711561, 22233968, and 22609314

A.3.7 Cell-to-Cell Rebalance Preserves Storage Index

Storage index provides significant performance enhancement by pruning I/Os during a smart scan. When a disk hits a predictive failure or a true failure, data needs to be rebalanced out to disks on other cells. This feature enables storage index entries, created for regions of data in the disk that failed, to be moved along with the data during the cell-to-cell offloaded rebalance to maintain application performance.

This feature provides significant performance improvement compared to earlier releases for application performance during a rebalance due to disk failure.

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with patch 22682752

A.3.8 ASM Disk Size Checked When Reducing Grid Disk Size

In releases earlier than 12.1.2.3.0, a user might accidentally decrease the size of a grid disk before decreasing the size of an ASM disk that is part of the disk group. In release 12.1.2.3.0, the resize order is checked so that the user cannot reduce the size of the grid disk to be smaller than the ASM disk.

A new grid disk attribute, asmDiskSize, supports querying the ASM disk size. When the user runs ALTER GRIDDISK to change the grid disk size, the command now checks the ASM disk size and prevents the user from making the grid disk smaller than the ASM disk.

The check works for both normal grid disks and sparse grid disks. For a sparse grid disk, the check is performed when the virtual size is changed; for a normal grid disk, the check is performed when the size is changed.

For example, suppose the following command:

CellCLI> list griddisk DATAC1_CD_00_adczarcel04 attributes name,asmdisksize

returns the following output:

DATAC1_CD_00_adczarcel04     14880M

When you try to reduce the size of the grid disk to be smaller than the ASM disk:

CellCLI> alter griddisk DATAC1_CD_00_adczarcel04 size=10G

the command returns an error:

CELL-02894: Requested grid disk size is smaller than ASM disk size. Please resize ASM disk DATAC1_CD_00_ADCZARCEL04 first.
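
To shrink a grid disk safely, resize the Oracle ASM disks first, wait for the rebalance to finish, and then resize the grid disks. The following is a hedged sketch of that order; the disk group name, target size, and rebalance power are hypothetical:

SQL> ALTER DISKGROUP DATAC1 RESIZE ALL SIZE 10G REBALANCE POWER 32;
SQL> SELECT * FROM gv$asm_operation;   -- wait until no rebalance rows are returned
CellCLI> alter griddisk DATAC1_CD_00_adczarcel04 size=10G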

Minimum software required:

  • Oracle Exadata Storage Server Software release 12.1.2.3.0

  • Grid Infrastructure release 12.1.0.2.160119 with patch 22347483

A.3.9 Support for Alerts in CREATE DIAGPACK

The CREATE DIAGPACK command now supports creating diagnostic packages for a specified alert using the alertName parameter.

See "CREATE DIAGPACK" in the Oracle Exadata Storage Server Software User's Guide for details.

A.4 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.2.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.2.0):

A.4.1 Smart Fusion Block Transfer

Minimum software required: 12.1.0.2 BP13

Many OLTP workloads can have hot blocks that need to be updated frequently across multiple nodes in Oracle Real Application Clusters (Oracle RAC). One example is Right Growing Index (RGI), where new rows are added to a table with an index from several Oracle RAC nodes. The index leaf block becomes a hot block that needs frequent updates across all nodes.

Without Exadata's Smart Fusion Block Transfer feature, a hot block can be transferred from a sending node to a receiver node only after the sending node has made the changes in its redo log buffer durable in its redo log. With Smart Fusion Block Transfer, this latency of the redo log write at the sending node is eliminated. The block is transferred as soon as the I/O to the redo log is issued at the sending node, without waiting for it to complete. Smart Fusion Block Transfer increases throughput by about 40% and decreases response times by about 33% for RGI workloads.

To enable Smart Fusion Block Transfer:

  • Set the hidden static parameter "_cache_fusion_pipelined_updates" to TRUE on all Oracle RAC nodes. Because this is a static parameter, you need to restart your database for this change to take effect.

  • Set the "exafusion_enabled" parameter to 1 on all Oracle RAC instances (see the example following this list).
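
For example, the parameters could be set as follows from any instance. This is a hedged sketch; both settings are written to the server parameter file and take effect only after the instances are restarted.

SQL> ALTER SYSTEM SET "_cache_fusion_pipelined_updates" = TRUE SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET exafusion_enabled = 1 SCOPE=SPFILE SID='*';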

A.4.2 8 TB Hard Disk Support

In this software release, Exadata storage server supports 8 TB high capacity disks. Supporting 8 TB high capacity disks has the following advantages:

  • Doubles the disk capacity of the previous Exadata machines, up to 1344 TB per rack

  • Provides more storage for scaling out your existing databases and data warehouses

Requirements for using 8 TB high capacity disks:

  • Exadata 12.1.2.1.2 or higher

  • Grid Infrastructure home requires one of the following:

    • 12.1.0.2.11 or higher BP

    • 12.1.0.2.10 or lower BP plus the following patches: 21281532, 21099555, and 20904530

    • 11.2.0.4.18 or higher BP

    • 11.2.0.4.17 or lower BP plus patch 21099555

    • 11.2.0.3 plus patch 21099555

A.4.3 IPv6 Support

IPv6 support is for Ethernet.

Compute nodes and storage servers are now enabled to use IPv6 for the admin network, ILOM, and the client access network. This works for both bare metal and virtualized deployments. The following list describes how various components support IPv6:

  • Oracle Exadata Deployment Assistant (OEDA): OEDA allows a user to enter an IPv6 address in the "Define Customer Networks" screen (see Figure A-1 for a screenshot). When IPv6 addresses are used for the admin network, the DNS servers, NTP servers, SMTP servers, and SNMP servers must be on an IPv6 network. On the "Define Customer Networks" screen, if you specify the gateway with an IPv6 address and the /64 suffix to denote the mask, the Subnet Mask field is greyed out and unavailable.

  • Cisco switch: The Cisco 4948E-F switch can be enabled with an IPv6 address; the minimum required firmware version is 15.2(3)E2. Refer to My Oracle Support note 1415044.1 for upgrade instructions, and open an SR with Oracle Support to obtain the updated Cisco firmware.

  • Auto Service Request (ASR): ASR does not work with IPv6 addresses; this will be resolved in a future release. ASR can be enabled by bridging to an IPv4 network.

  • Enterprise Manager: Enterprise Manager needs to be on a bridged network so that it can monitor both the InfiniBand switches (on an IPv4 network) and the compute and storage nodes (on an IPv6 network).

  • dbnodeupdate: dbnodeupdate requires remote repositories that are hosted on machines with an IPv6 address, or an ISO file must be used. Remote repositories may not be reachable (using HTTP or other protocols) if they use IPv4 and the host has only IPv6 addresses. Some customers may be able to reach IPv4 addresses from their IPv6 hosts if the network routers and devices permit it, but most will need an IPv6 repository server or an ISO file.

  • InfiniBand network: The private InfiniBand network must remain on IPv4 only. Only private addresses are used on the InfiniBand network, so there is little benefit to moving InfiniBand to IPv6.

  • SMTP and SNMP: SMTP and SNMP servers should be IPv6 (or a name that resolves to an IPv6 address) unless the customer network has a bridge or gateway to route between IPv4 and IPv6.

  • Platinum Support: Platinum Support will not be available for IPv6 deployments until a subsequent Platinum Gateway software release is available.

Figure A-1 "Define Customer Networks" Screen in Oracle Exadata Deployment Assistant


A.4.4 Running CellCLI Commands from Compute Nodes

The new ExaCLI utility enables you to run CellCLI commands on cell nodes remotely from compute nodes. This is useful in cases where you locked the cell nodes by disabling SSH access; see "Disabling SSH on Storage Servers".

ExaCLI also provides an easier to use interface for cell management, and enables you to separate the roles of a storage user from a storage administrator.

To run ExaCLI, you need to create users on the cell nodes and grant roles to those users. Granting roles assigns privileges to users; that is, it specifies which CellCLI commands the users are allowed to run. When connecting to a cell, ExaCLI authenticates the specified user name and checks that the user has the appropriate privileges to run the specified command.

The new exadcli utility is similar to the dcli utility: exadcli enables you to run CellCLI commands across multiple cells.

You can control which commands users can run by granting privileges to roles, and granting roles to users. For example, you can specify that a user can run the list griddisk command but not alter griddisk. This level of control is useful in Cloud environments, where you might want to allow full access to the system to only a few users.

You also need to create users if you are using the new ExaCLI utility. You use the CREATE USER, GRANT PRIVILEGE, and GRANT ROLE commands to create users, specify privileges to roles, and grant roles to users. For details, see "Creating Users and Roles" in the Oracle Exadata Storage Server Software User's Guide.
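
The following hedged sketch illustrates the flow; the user name, role name, password placeholder, and cell name are hypothetical, and the exact privilege syntax may vary by release. On the cell (using CellCLI, before locking the cell), create a role and user; then run ExaCLI from a compute node, which prompts for the user's password:

CellCLI> CREATE ROLE gd_monitor
CellCLI> GRANT PRIVILEGE LIST ON GRIDDISK ALL ATTRIBUTES WITH ALL OPTIONS TO ROLE gd_monitor
CellCLI> CREATE USER celluser1 PASSWORD="<password>"
CellCLI> GRANT ROLE gd_monitor TO USER celluser1

$ exacli -l celluser1 -c cell01 -e "LIST GRIDDISK"

In this sketch, celluser1 can list grid disks but cannot alter them, matching the list griddisk versus alter griddisk example above.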

A.4.5 Disabling SSH on Storage Servers

By default, SSH is enabled on storage servers. If required, you can "lock" the storage servers to disable SSH access. You can still perform operations on the cell using ExaCLI, which runs on compute nodes and communicates using https and REST APIs to a web service running on the cell.

When you need to perform operations that require you to log in to the cell, you can temporarily unlock the cell. After the operation is complete, you can relock the cell.

For details, see "Disabling SSH on Storage Servers" in the Oracle Exadata Storage Server Software User's Guide.

A.4.6 Fixed Allocations for Databases in the Flash Cache

The ALTER IORMPLAN command has a new attribute called flashcachesize which enables you to allocate a fixed amount of space in the flash cache for a database. The value specified in flashcachesize is a hard limit, which means that the database cannot use more than the specified value. This is different from the flashcachelimit value, which is a "soft" maximum: databases can exceed this value if the flash cache is not full.

flashcachesize is ideal for situations such as Cloud and "pay for performance" deployments where you want to limit databases to their allocated space.
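
For example, the following is a hedged sketch of an interdatabase IORM plan that gives two databases fixed flash cache allocations (the database names and sizes are hypothetical):

CellCLI> ALTER IORMPLAN dbplan=((name=sales, flashcachesize=100G), (name=finance, flashcachesize=50G))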

A.4.7 Oracle Exadata Storage Statistics in AWR Reports

The Exadata Flash Cache Performance Statistics sections have been enhanced in the AWR report:

  • Added support for Columnar Flash Cache and Keep Cache.

  • Added a section on Flash Cache Performance Summary to summarize Exadata storage cell statistics along with database statistics.

The Exadata Flash Log Statistics section in the AWR report now includes statistics for first writes to disk and flash.

Minimum software: Oracle Database release 12.1.0.2 Bundle Patch 11

A.4.8 Increased Maximum Number of Database Processes

Minimum software required: 12.1.0.2 BP11, or 11.2.0.4 BP18

The following table shows the maximum number of database processes supported per database node. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will fall between the values in the "Maximum Number of Processes with No Parallel Queries" column and the "Maximum Number of Processes with All Running Parallel Queries" column.

Table A-3 Maximum Number of Database Processes Per Node

Machine Type | InfiniBand Bonding Type | Maximum Number of Processes with No Parallel Queries | Maximum Number of Processes with All Running Parallel Queries
8-socket (X2-8, X3-8) | Active passive | 28,500 | 25,000
8-socket (X4-8, X5-8) | Active bonding | 50,000 | 44,000
2-socket (X2-2, X3-2) | Active passive | 12,500 | 10,000
2-socket (X4-2, X5-2) | Active bonding | 16,500 | 14,000

The machines are configured as follows:

  • On an 8-socket database node with active bonding InfiniBand configurations (X4-8 and X5-8), there are 8 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On an 8-socket database node with active-passive InfiniBand configurations (X2-8 and X3-8), there are 4 IP addresses across 4 InfiniBand cards (8 InfiniBand ports).

  • On a 2-socket database node with active bonding InfiniBand configurations (X4-2 and X5-2), there are 2 IP addresses on 1 InfiniBand card (2 InfiniBand ports).

  • On a 2-socket database node with active-passive InfiniBand configurations (X2-2 and X3-2), there is 1 IP address on 1 InfiniBand card (2 InfiniBand ports).

50,000 RDS sockets are provisioned per IP for database usage. Each IO-capable database process will consume RDS sockets across IPs with even load balancing.

Note that cells have the following connection limits:

  • On X4 and X5 systems, the cell connection limit is 120,000 processes.

  • On X2 and X3 systems, the cell connection limit is 60,000 processes.

This means that the total number of database processes connecting to a cell cannot exceed the above limits. For example, a full rack with 8 database servers, each running at the maximum process count, would exceed the cell connection limit.

A.4.9 Custom Diagnostic Package for Storage Server Alerts

Storage servers automatically collect customized diagnostic packages that include relevant logs and traces upon generating a cell alert. This applies to all cell alerts, including both hardware alerts and software alerts. The timely collection of the diagnostic information prevents rollover of critical logs.

Management Server sends the diagnostic package as an email attachment for every email alert. In addition, users can access the following URL:

https://hostname/diagpack

to download an existing diagnostic package if the email attachment is misplaced. Users can also download the packages using ExaCLI. In the URL above, hostname refers to the host name of the cell.

Users can create hourly custom diagnostic packages by providing the start time and duration using the new "CREATE DIAGPACK" CellCLI command.

For details, see "CREATE DIAGPACK" in the Oracle Exadata Storage Server Software User's Guide.

A.4.10 Updating Nodes Using patchmgr

Starting with Exadata release 12.1.2.2.0, Oracle Exadata database nodes (releases later than 11.2.2.4.2), Oracle Exadata Virtual Server nodes (dom0), and Oracle Exadata Virtual Machines (domU) can be updated, rolled back, and backed up using patchmgr. You can still run dbnodeupdate.sh in standalone mode, but using patchmgr enables you to run a single command to update multiple nodes; you do not need to run dbnodeupdate.sh separately on each node. patchmgr can update the nodes in a rolling or non-rolling fashion.

The updated patchmgr and dbnodeupdate.sh are available in the new dbserver.patch.zip file, which can be downloaded from My Oracle Support note 1553103.1.

For details, see the "Updating Database Nodes with patchmgr" section in the Oracle Exadata Database Machine Maintenance Guide.

A.4.11 kdump Operational for 8-Socket Database Nodes

In releases earlier than 12.1.2.2.0, kdump, a service that creates and stores kernel crash dumps, was disabled on Exadata 8-socket database nodes because generating the vmcore took too long and consumed too much space. Starting with Exadata release 12.1.2.2.0, kdump is fully operational on 8-socket database nodes due to the following optimizations:

  • Hugepages and several other areas of shared memory are now exposed by the Linux kernel to user space, then filtered out by makedumpfile at kernel crash time. This saves both time and space for the vmcore.

  • Memory configuration for the kexec kernel has been optimized.

  • Overall memory used has been reduced by blacklisting unnecessary modules.

  • Snappy compression is enabled on the database nodes to speed up vmcore generation.

A.4.12 Redundancy Check When Powering Down the Storage Server

If you try to gracefully shut down a storage server by pressing the power button on the front panel or through ILOM, the storage server performs an Oracle ASM data redundancy check. If shutting down the storage server could lead to an ASM disk group force dismount due to reduced data redundancy, then the shutdown is aborted and the LEDs blink as follows to alert you that shutting down the storage server is not safe:

  • On high capacity cells, all three LEDs on all hard drives blink for 10 seconds.

  • On extreme flash cells, the blue OK-to-Remove LED blinks for 10 seconds, and the amber LED is lit.

You should not attempt a hard reset on the storage server.

If a storage server cannot be safely shut down because of reduced redundancy (the command "cellcli -e list griddisk attributes name, deactivationOutcome" shows the offline and unhealthy disks), then you must restore data redundancy first. If other disks are offline, bring them back online and wait for the resync to finish. If a rebalance is running to force drop failed disks, or resilvering is running to resilver data blocks after a write back flash cache failure, then wait for the rebalance or resilvering to complete. After data redundancy is restored, you can shut down the storage server.
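
The following is a hedged sketch of such a check run on the storage server; it assumes the asmDeactivationOutcome attribute (the attribute name may vary by release), and any grid disk not reporting "Yes" indicates that redundancy has not yet been restored:

# List grid disks that are not yet safe to take offline (filter is illustrative)
cellcli -e "LIST GRIDDISK ATTRIBUTES name, asmDeactivationOutcome" | grep -v "Yes"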

A.4.13 Specifying an IP Address for SNMP Traps

If the IP address associated with eth0 is not registered with ASR Manager, you can specify a different IP address using the new fromIP field in the ALTER CELL command (for storage servers) or the ALTER DBSERVER command (for database servers).

For details, see the description for the ALTER CELL command in the Oracle Exadata Storage Server Software User's Guide, and the ALTER DBSERVER command in the Oracle Exadata Database Machine Maintenance Guide.
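
For example, the following is a hedged sketch of setting the source IP for SNMP traps sent to an ASR Manager subscriber (the host name, port, community, and addresses are hypothetical):

CellCLI> ALTER CELL snmpSubscriber=((host='asrmanager.example.com', port=162, community=public, fromIP='203.0.113.10'))
DBMCLI> ALTER DBSERVER snmpSubscriber=((host='asrmanager.example.com', port=162, community=public, fromIP='203.0.113.10'))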

A.4.14 Reverse Offload Improvements

Minimum software required: 12.1.0.2 BP11

The reverse offload feature enables a storage cell to push some offloaded work back to the database node when the storage cell's CPU is saturated.

Reverse offload from storage servers to database nodes is essential in providing a more uniform usage of all the database and storage CPU resources available in an Exadata environment. In most configurations, there are more database CPUs than storage CPUs, and the ratio may vary depending on the hardware generation and the number of database and cell nodes.

Different queries running at the same time need different rates of reverse offload to perform optimally with regard to elapsed time. Even the same query running in different instances may need different rates of reverse offload.

In this release, a number of heuristic improvements have been added for elapsed time improvement of up to 15% for multiple database instances and different queries running in parallel.

A.4.15 Cell-to-Cell Rebalance Preserves Flash Cache Population

Minimum software required: 12.1.0.2 BP11

When a hard disk hits a predictive failure or true failure, and data needs to be rebalanced out of it, some of the data that resides on this hard disk might have been cached on the flash disk, providing better latency and bandwidth accesses for this data. To maintain an application's current performance SLA, it is critical to rebalance the data while honoring the caching status of the different regions on the hard disk during the cell-to-cell offloaded rebalance.

This feature provides significant performance improvement compared to earlier releases for application performance during a rebalance due to disk failure or disk replacement.

When data is rebalanced from one cell to another, the data that was cached on the source cell is also cached on the target cell.

A.5 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.2)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.2):

A.5.1 InfiniBand Partitioning for Virtualized Exadata Environments

InfiniBand partitioning is now available for virtualized Exadata environments and can be configured with the Oracle Exadata Deployment Assistant (OEDA).

Use the graphical user interface of OEDA to define the InfiniBand partitions on a per-cluster basis, and the command line interface of OEDA to configure the guests and the InfiniBand switches with the appropriate partition keys and membership requirements to enable InfiniBand partitions.

InfiniBand partitions can be defined on a per-cluster basis. If storage servers are shared among multiple clusters, then all clusters will use the same storage partition key.

A.6 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.1)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.1):

A.6.1 Flash Performance Improvement in X5 Storage Servers

Changes were made to the NVMe flash firmware to improve I/O task handling resources, and the background refresh algorithms and operations were modified. Flash performance in this release is equivalent or slightly higher.

A.6.2 Oracle Exadata Virtual Machines

Consolidated environments can now use Oracle Virtual Machine (Oracle VM) on X5-2, X4-2, X3-2, and X2-2 database servers to deliver higher levels of isolation between workloads. Virtual machine isolation is desirable for workloads that cannot be trusted to restrict their security, CPU, or memory usage in a shared environment.

A.6.3 Infrastructure as a Service (IaaS) with Capacity-on-Demand (CoD)

Oracle Exadata Infrastructure as a Service (IaaS) customers now have the Capacity-on-Demand feature to limit the number of active cores in the database servers in order to restrict the number of required database software licenses. Exadata 12.1.2.1.1 Software allows CoD and IaaS to coexist on the same system. Note that IaaS-CoD, the ability to turn on/off a reserved set of cores, is still included with IaaS.

A.6.4 Improved Flash Cache Metrics

This release contains integrated block cache and columnar cache metrics for better flash cache performance analysis.

A.6.5 Leap Second Time Adjustment

This release contains leap second support in anticipation of the June 30, 2015 leap second adjustment.

A.6.6 Network Resource Management

  • Oracle 1.6TB NVMe SSD firmware update to 8DV1RA12 in X5-2 Extreme Flash (EF) Storage Servers

  • Oracle Flash Accelerator F160 PCIe card's firmware update to 8DV1RA12 in X5-2 High Capacity (HC) Storage Servers

A.6.7 DBMCLI Replaces /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl Script

Starting in Oracle Exadata Storage Server Release 12.1.2.1.0, a new command-line interface called DBMCLI is introduced to configure, monitor, and manage the database servers. DBMCLI is pre-installed on each database server and on DOM0 of virtualized machines. DBMCLI configures Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts. DBMCLI replaces the /opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl Perl script. Refer to the Oracle Exadata Database Machine Maintenance Guide for information on how to use DBMCLI.

A.7 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.0)

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.2.1.0):

A.7.1 Oracle Exadata Database Machine Elastic Configurations

Elastic configurations allow Oracle Exadata Racks to have customer-defined combinations of database servers and Exadata Storage Servers. At least 2 database servers and 3 storage servers must be installed in the rack. The storage servers must all be the same type. Oracle Exadata Database Machine X5-2 Elastic Configurations and Oracle Exadata Database Machine X4-8 Elastic Configuration can have 2 to 19 database servers, 3 to 20 Exadata Storage Servers, or a combination of database servers and Exadata Storage Servers. Oracle Exadata Storage Expansion Rack X5-2 Elastic Configurations can have 4 to 19 storage servers.

This allows the hardware configuration to be tailored to specific workloads, such as Database In-Memory, OLTP, Data Warehousing, or Data Retention.

  • Oracle Exadata Database Machine X5-2 Elastic Configurations start with a quarter rack containing 2 database servers and 3 Exadata Storage Servers. Additional database and storage servers (High Capacity (HC) or Extreme Flash (EF)) can be added until the rack fills up or a rack maximum of 22 total servers is reached.

  • Oracle Exadata Storage Expansion Rack X5-2 Elastic Configurations start with a quarter rack containing 4 Exadata Storage Servers. Additional storage servers (HC or EF) can be added to a total of 19 storage servers per rack.

  • Oracle Exadata Database Machine X4-8 Elastic Configurations start with a half rack containing 2 database server X4-8 8-socket servers and 3 Exadata Storage Servers. Up to 2 additional X4-8 8-socket servers can be added per rack. Up to 11 additional Exadata Storage Servers (HC or EF) can be added per rack.

A.7.2 Sparse Grid Disks

Sparse grid disks allocate space as new data is written to the disk, and therefore have a virtual size that can be much larger than the actual physical size. Sparse grid disks can be used to create a sparse disk group to store database files that will use a small portion of their allocated space. Sparse disk groups are especially useful for quickly and efficiently creating database snapshots on Oracle Exadata. Traditional databases can also be created using a sparse disk group.

Minimum hardware: Storage nodes must be X3 or later

Minimum software: Oracle Database 12c Release 1 (12.1) release 12.1.0.2 BP5 or later.
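
The following is a hedged sketch of creating sparse grid disks and a sparse disk group; the prefix, sizes, redundancy, and disk group name are hypothetical, and virtualsize sets the virtual capacity:

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=SPARSE, size=100G, virtualsize=1000G

SQL> CREATE DISKGROUP SPARSE NORMAL REDUNDANCY DISK 'o/*/SPARSE_*'
     ATTRIBUTE 'compatible.asm'='12.1.0.2', 'compatible.rdbms'='12.1.0.2',
               'cell.smart_scan_capable'='TRUE', 'cell.sparse_dg'='allsparse',
               'au_size'='4M';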

A.7.3 Snapshot Databases for Test and Development

Space-efficient database snapshots can now be quickly created for test and development purposes. Snapshots start with a shared read-only copy of the production database (or pluggable database (PDB)) that has been cleansed of any sensitive information. As changes are made, each snapshot writes the changed blocks to a sparse disk group.

Multiple users can create independent snapshots from the same base database. Therefore multiple test and development environments can share space while maintaining independent databases for each task. Snapshots on Exadata Storage Servers allow testing and development with Oracle Exadata Storage Server Software features such as Smart Scan.

Exadata database snapshots are integrated with the Multi-tenant Database Option to provide an extremely simple interface for creating new PDB snapshots.

Minimum hardware: Storage nodes must be X3 or later

Minimum software: Oracle Database 12c Release 1 (12.1) release 12.1.0.2 BP5 or later.

A.7.4 Columnar Flash Caching

Oracle Exadata Storage Server Software release 12.1.2.1.0 can efficiently support mixed workloads, delivering optimal performance for both OLTP and analytics. This is possible due to the dual format architecture of Exadata Smart Flash Cache that enables the data to be stored in hybrid columnar for transactional processing and also stored in pure columnar, which is optimized for analytical processing.

In addition, Hybrid Columnar Compression balances the needs of OLTP and analytic workloads. Hybrid Columnar Compression enables the highest levels of data compression and provides tremendous cost-savings and performance improvements due to reduced I/O, especially for analytic workloads.

In Oracle Exadata Storage Server Software release 12.1.2.1.0, Smart Flash Cache software transforms Hybrid Columnar Compression data into a pure columnar format during flash cache population for optimal analytics processing. Smart Scans on pure columnar data in flash run faster because they read only the selected columns, reducing flash I/Os and storage server CPU consumption.

Oracle Exadata Storage Server Software release 12.1.2.1.0 has the ability to cache Hybrid Columnar Compression table data on flash cache in a pure columnar layout. When Exadata Hybrid Columnar Compression tables are accessed using Exadata Smart Scan, the Hybrid Columnar Compression compressed data is reformatted to a pure columnar layout in the same amount of storage space on flash cache.

The percentage of data for a given column in a compression unit (CU) is smaller for a wide table than for a narrow table. This results in more CUs being fetched from disk and flash to get data for the entire column. Queries reading only a few columns of a wide Exadata Hybrid Columnar Compression table therefore exhibit high I/O bandwidth utilization, because irrelevant columns are read from storage. Storing the data in a columnar format on flash cache eliminates the need to read the irrelevant columns and provides a significant performance boost.

Depending on the type of workload (OLTP or data warehousing), the same region of data can be cached in both the traditional block format as well as the columnar format in flash cache.

This feature is enabled by default, and does not need configuration by the user.

Columnar Flash Caching accelerates reporting and analytic queries while maintaining excellent performance for OLTP style single row lookups.

Columnar Flash Caching implements a dual format architecture in Oracle Exadata flash by automatically transforming frequently scanned Hybrid Columnar Compression compressed data into a pure columnar format as it is loaded into the flash cache. Smart scans operating on pure columnar data in flash run faster because they read only the selected columns reducing flash I/Os and storage server CPU.

The original Hybrid Columnar Compression formatted data can also be cached in the flash cache if there are frequent OLTP lookups for the data. Therefore the Exadata Smart Flash Cache automatically optimizes the format of the cached data to accelerate all types of frequent operations.

Minimum software: Oracle Exadata Storage Server Software release 12.1.2.1.0 running Oracle Database 12c release 12.1.0.2.0.

See Also:

Oracle Exadata Storage Server Software User's Guide for information about the flash cache metrics

A.7.5 Oracle Exadata Storage Server Software I/O Latency Capping for Write Operations

This feature helps eliminate outliers due to slow writes. It prevents write outliers that would otherwise have been visible to applications.

Disk drives, disk controllers, and flash devices are complex computers that can, occasionally, exhibit high latencies while the device is performing an internal maintenance or recovery operation. In addition, devices that are close to failing sometimes exhibit high latencies before they fail. Previously, devices exhibiting high latencies could occasionally cause slow SQL response times. Oracle Exadata Storage Server Software I/O latency capping for write operations ensures excellent SQL I/O response times on Oracle Exadata by automatically redirecting high latency I/O operations to a mirror copy.

In Oracle Exadata Storage Server Software releases 11.2.3.3.1 and 12.1.1.1.1, if Oracle Exadata tries to read from a flash device and the latency of the read I/O is longer than expected, then Oracle Exadata automatically redirects the read I/O operation to another cell. The database that initiated the read I/O is sent a message that causes it to redirect the read I/O to another mirror copy of the data. Any read I/O issued to the last valid mirror copy of the data is not redirected.

In Oracle Exadata Storage Server Software release 12.1.2.1.0, if a write operation encounters high latency, then Oracle Exadata automatically redirects the write operation to another healthy flash device on the same cell. After the write is successfully completed, the write I/O is acknowledged as successful to the database, thereby eliminating the write outlier.

Requirements:

  • Minimum software:

    • Oracle Database 11g release 2 (11.2) Monthly Database Patch For Exadata (June 2014 - 11.2.0.4.8)

    • Oracle Grid Infrastructure 11g release 2 (11.2) Monthly Database Patch For Exadata (June 2014 - 11.2.0.4.8)

  • Enable write back flash cache on the cell (see the sketch following this list)
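
The following is a hedged sketch of enabling write back flash cache on a cell using the non-rolling method; the exact procedure and any preliminary checks depend on the release, so verify against the documentation for your version:

CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
CellCLI> ALTER CELL flashCacheMode=writeback
CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
CellCLI> CREATE FLASHCACHE ALL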

A.7.6 Elimination of False Drive Failures

Disk drives and flash drives are complex computers that can occasionally appear to fail due to internal software lockups without actually physically failing. In the event of an apparent hard drive failure on an X5-2 High Capacity cell or an apparent flash drive failure on an X5-2 Extreme Flash cell, Oracle Exadata Storage Server Software automatically redirects I/Os to other drives, and then power cycles the drive. If the drive returns to normal status after the power cycle, then it is re-enabled and resynchronized. If the drive continues to fail after being power cycled, then it is dropped. This feature allows Oracle Exadata Storage Server Software to eliminate false-positive disk failures and therefore helps preserve data redundancy and reduce management overhead.

A.7.7 Flash and Disk Life Cycle Management Alert

Oracle Exadata Storage Server Software now monitors Oracle ASM rebalance operations due to disk failure and replacement. Management Server sends an alert when a rebalance operation completes successfully, or encounters an error.

In previous releases, the user would have to periodically monitor the progress of rebalance operations by querying the V$ASM_OPERATION view. Now the user can subscribe to alerts from Management Server, and receive updates on Oracle ASM rebalance operations.

Minimum software: Oracle Database release 12.1.0.2 BP4 or later, and Oracle Exadata Storage Server Software release 12.1.2.1.0 or later.

A.7.8 Performance Optimization for SQL Queries with Minimum or Maximum Functions

SQL queries using minimum or maximum functions are designed to take advantage of the storage index column summary that is cached in Exadata Storage Server memory. As a query is processed, a running minimum value and a running maximum value are tracked. Before issuing an I/O, the minimum/maximum value cached in the storage index for the data region is checked in conjunction with the running minimum/maximum value to decide whether that I/O should be issued or can be pruned. Overall, this optimization can result in significant I/O pruning during the course of a query and improves query performance. An example of a query that benefits from this optimization is:

Select max(Salary) from EMP where Department = 'sales';

Business intelligence tools that get the shape of a table by querying the minimum or maximum value for each column benefit greatly from this optimization.

The following session statistic shows the amount of I/O saved due to storage index optimization.

cell physical IO bytes saved by storage index
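
For example, the following is a hedged sketch of checking this statistic for the current session:

SQL> SELECT n.name, s.value/1024/1024 AS mb_saved
       FROM v$mystat s, v$statname n
      WHERE n.statistic# = s.statistic#
        AND n.name = 'cell physical IO bytes saved by storage index';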

Minimum software: Oracle Database release 12.1.0.2.

A.7.9 Oracle Exadata Storage Server Software Performance Statistics in AWR Reports

Exadata Storage Server configuration and performance statistics are collected in the Automatic Workload Repository (AWR), and the data is available in the AWR report. The Oracle Exadata section of the AWR report is available in HTML or active report format.

The following are the three main Exadata sections in the AWR report:

  • Exadata Server Configuration: Hardware model information, software versions, and storage configuration

  • Exadata Server Health Report: Offline disks and open alerts

  • Exadata Performance Statistics: Operating system statistics, storage server software statistics, smart scan statistics, and statistics by databases

The AWR report provides storage level performance statistics, not restricted to a specific instance or a database. This is useful for analyzing cases where one database can affect the performance of another database.

Configuration differences are highlighted using specific colors for easy analysis. For example, a cell with a different software release than the other cells, or a cell with a different memory configuration than the other cells, is highlighted.

Outliers are automatically analyzed and presented for easy performance analysis. Outliers are appropriately colored and linked to detailed statistics.

Minimum software: Oracle Database release 12.1.0.2, and Oracle Exadata Storage Server Software release 12.1.2.1.0.

A.7.10 Exafusion Direct-to-Wire Protocol

Exafusion Direct-to-Wire protocol allows database processes to read and send Oracle Real Application Clusters (Oracle RAC) messages directly over the InfiniBand network, bypassing the overhead of entering the OS kernel and running the normal networking software stack. This improves the response time and scalability of the Oracle RAC environment on Oracle Exadata Database Machine. Exafusion is especially useful for OLTP applications because per-message overhead is particularly apparent in small OLTP messages.

Minimum software: Oracle Exadata Storage Server Software release 12.1.2.1.0, which contains the OS, firmware, and driver support for Exafusion, and Oracle Database software release 12.1.0.2.0 BP1.

A.7.11 Management Server on Database Servers

Management Server (MS) on database servers implements a web service for database server management commands, and runs background monitoring threads. The management service provides the following:

  • Comprehensive hardware and software monitoring including monitoring of hard disks, CPUs, and InfiniBand ports.

  • Enhanced alerting capabilities.

  • Important system metric collection and monitoring.

  • A command-line interface called DBMCLI to configure, monitor, and manage the database servers. DBMCLI is pre-installed on each database server and on DOM0 of virtualized machines. DBMCLI configures Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts.

Oracle Exadata Database Machine Command-Line Interface (DBMCLI) utility is the command-line administration tool for managing the database servers. DBMCLI runs on each server to enable you to manage an individual database server. DBMCLI also runs on virtual machines. You use DBMCLI to configure, monitor, and manage the database servers. The command-line utility is already installed when Oracle Exadata Database Machine is shipped.

DBMCLI provides an integrated client interface to configure Auto Service Request, capacity-on-demand, Infrastructure as a Service, and database server e-mail alerts. It also provides for monitoring of hard disks, CPUs, InfiniBand ports, as well as system metrics and thresholds.
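
The following is a hedged sketch of typical DBMCLI usage on a database server; the SMTP host, e-mail addresses, and core count are hypothetical, and the available attributes may vary by release:

DBMCLI> LIST DBSERVER DETAIL
DBMCLI> ALTER DBSERVER smtpServer='mail.example.com', smtpFromAddr='dbserver1@example.com', smtpToAddr='dba@example.com', notificationMethod='mail', notificationPolicy='critical,warning,clear'
DBMCLI> ALTER DBSERVER pendingCoreCount=24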

See Also:

Oracle Exadata Database Machine Maintenance Guide for additional information about DBMCLI

A.7.12 SQL Operators for JSON and XML

Oracle Exadata Storage Server Software supports offload of many SQL operators for predicate evaluation. Offload of the following SQL operators is now supported by Oracle Exadata Storage Server Software:

  • JSON Operators

    • JSON_VALUE

    • JSON_EXISTS

    • JSON_QUERY

    • IS JSON

    • IS NOT JSON

  • XML Operators

    • XMLExists

    • XMLCast(XMLQuery())

Minimum software: Oracle Database release 12.1.0.2 for offload of JSON. Oracle Database release 12.1.0.2 BP1 for XML operator offload.

A.7.13 I/O Resource Management for Flash

I/O Resource Management (IORM) now manages flash drive I/Os in addition to disk drive I/Os to control I/O contention between databases, pluggable databases, and consumer groups. Because it is very rare for Oracle Exadata environments to be limited by OLTP I/Os, IORM automatically prioritizes OLTP flash I/Os over smart scan flash I/Os, ensuring fast OLTP response times with little cost to smart scan throughput.

Minimum software: Exadata cell software release 12.1.2.1.0 running Oracle Database 11g or Oracle Database 12c releases

Minimum hardware: Exadata releases X2-*

A.7.14 Flash Cache Space Resource Management

Flash cache is a shared resource. Flash Cache Space Resource Management allows users to specify the minimum and maximum sizes a database can use in the flash cache using interdatabase IORM plans. Flash Cache Space Resource Management also allows users to specify the minimum and maximum sizes a pluggable database can use in the flash cache using database resource plans.

Minimum software: Exadata cell software release 12.1.2.1.0 running Oracle Database 11g or Oracle Database 12c Release 1 (12.1) release 12.1.0.2

Minimum hardware: Exadata releases X2-*

A.7.15 I/O Resource Management Profiles

IORM interdatabase plans now support profiles, which reduce the management of interdatabase plans for environments with many databases. Previously, a storage administrator had to specify resources for every database in the interdatabase plan, and the plan needed updates each time a new database was created. IORM profiles greatly reduce this management. The storage administrator can now create profile directives that define different profile types based on performance requirements. Next, the administrator maps new and existing databases to one of the defined profiles in the interdatabase plan using the database parameter DB_PERFORMANCE_PROFILE. Each database inherits all its attributes from the specified profile directive automatically.
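
A hedged sketch of this flow follows: the administrator defines profile directives on each cell and then maps a database to a profile. The profile names are hypothetical, and DB_PERFORMANCE_PROFILE is a static parameter that takes effect after an instance restart.

CellCLI> ALTER IORMPLAN dbplan=((name=gold, share=20, type=profile), (name=silver, share=10, type=profile), (name=bronze, share=5, type=profile))

SQL> ALTER SYSTEM SET DB_PERFORMANCE_PROFILE='gold' SCOPE=SPFILE SID='*';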

Minimum software: Exadata cell software release 12.1.2.1.0 running Oracle Database 12c Release 1 (12.1) release 12.1.0.2 Exadata Bundle Patch 4.

A.7.16 Write Back Flash Cache on Extreme Flash Cells

On Extreme Flash cells, flash cache runs in write back mode by default, and takes 5 percent of the flash space. Flash cache on Extreme Flash cells is not used as a block cache because user grid disks are already created on flash and therefore caching is not needed. However, flash cache is still useful for the following advanced operations:

  • Columnar caching caches Exadata Hybrid Columnar Compression (EHCC) table data on flash cache in a pure columnar layout on an Extreme Flash cell.

  • Write I/O latency capping cancels write I/O operations to a temporarily stalled flash, and redirects the write to be logged in the write back flash cache of another healthy flash device on an Extreme Flash cell

  • Fast data file creation persists the metadata about the blocks in the write back flash cache, eliminating the actual formatting writes to user grid disks, on an Extreme Flash cell.

Administrators can choose to configure flash cache in write through mode on Extreme Flash cells. Columnar caching works in write through flash cache mode, but write I/O latency capping and fast data file creation require write back flash cache to be enabled.

A.7.17 Secure Erase for 1.6 TB Flash Drives in Extreme Flash and High Capacity Systems

With this release, Oracle Exadata Storage Server Software supports secure erase for 1.6 TB flash drives in the Extreme Flash and High Capacity systems. The 1.6 TB flash drives take approximately 5.5 hours to securely erase.

See Also:

"Exadata Secure Erase"

A.7.18 Increased Exadata Cell Connection Limit

Oracle Exadata X5-2 and X4-2 cells can now support up to 120,000 simultaneous connections originating from one or more database servers that are using active-active bonding. This implies that at most 120,000 processes can simultaneously remain connected to a cell and perform I/O operations.

A.7.19 Support for SNMP v3

Oracle Exadata Database Machine database and storage servers support SNMP v3 for sending alerts. SNMP v3 provides authentication and encryption for alerts sent from the servers to administrators and Auto Service Request.

A.7.20 Federal Information Processing Standards (FIPS) 140-2 Compliant Smart Scan

The U.S. Federal Information Processing Standard (FIPS) 140-2 specifies security requirements for cryptographic modules. To support customers with FIPS 140-2 requirements, Oracle Exadata version 12.1.2.1.0 can be configured to use FIPS 140-2 validated cryptographic modules. These modules provide cryptographic services such as Oracle Database password hashing and verification, network encryption (SSL/TLS and native encryption), as well as data at rest encryption (Transparent Data Encryption).

When Transparent Data Encryption is used and Oracle Database is configured for FIPS 140 mode, Oracle Exadata Smart Scan offloads will automatically leverage the same FIPS 140 validated modules for encryption and decryption operations of encrypted columns and encrypted tablespaces.

In Oracle Database release 12.1.0.2.0, the database parameter, DBFIPS_140, provides the ability to turn on and off the FIPS 140 cryptographic processing mode inside the Oracle Database and Exadata Storage Server.

In Oracle Database release 11.2.0.4.0, the underscore parameter _use_fips_mode provides the ability to turn on or off the FIPS 140 cryptographic processing in Oracle Database and Exadata Storage Server.

For example, using DBFIPS_140:

ALTER SYSTEM SET DBFIPS_140 = TRUE;

For example in the parameter file:

DBFIPS_140=TRUE

The following hardware components are now FIPS compliant with the firmware updates in the specified releases:

  • Oracle Server X5-2 and later systems are designed to be FIPS 140–2 compliant

  • Oracle Sun Server X4-8 with ILOM release 3.2.4

  • Sun Server X4-2 and X4-2L with SW1.2.0 and ILOM release 3.2.4.20/22

  • Sun Server X3-2 and X3-2L with SW1.4.0 and ILOM release 3.2.4.26/28

  • Sun Server X2-2 with SW1.8.0 and ILOM release 3.2.7.30.a

  • Cisco Catalyst 4948E-F Ethernet Switch

FIPS compliance for V1, X2-* and database node X3-8 generations of Exadata Database Machine Servers is not planned.

Minimum software: Oracle Database release 12.1.0.2.0 BP3, Oracle Database release 11.2.0.4 with MES Bundle on Top of Quarterly Database Patch For Exadata (APR2014 - 11.2.0.4.6), Oracle Exadata Storage Server Software release 12.1.2.1.0, ILOM 3.2.4.

See Also:

Oracle Database Security Guide for additional information about FIPS

A.7.21 Oracle Exadata Virtual Machines

Consolidated environments can now use Oracle Virtual Machine (Oracle VM) on X5-2, X4-2, X3-2, and X2-2 database servers to deliver higher levels of isolation between workloads. Virtual machine isolation is desirable for workloads that cannot be trusted to restrict their security, CPU, or memory usage in a shared environment. Examples are hosted or cloud environments, cross department consolidation, test and development environments, and non-database or third party applications running on a database machine. Oracle VM can also be used to consolidate workloads that require different versions of clusterware, for example SAP applications that require specific clusterware patches and versions.

The higher isolation provided by virtual machines comes at the cost of increased resource usage, management, and patching because a separate operating system, clusterware, and database install is needed for each virtual machine. Therefore it is desirable to blend Oracle VM with database native consolidation by consolidating multiple trusted databases within a virtual machine. Oracle Resource Manager can be used to control CPU, memory, and I/O usage for the databases within a virtual machine. The Oracle Multitenant option can be used to provide the highest level of consolidation and agility for consolidated Oracle databases.

Exadata Virtual Machines use high speed InfiniBand networking with Single Root I/O Virtualization (SR-IOV) to ensure that performance within a virtual machine is similar to Exadata's famous raw hardware performance. Exadata Smart Scans greatly decrease virtualization overhead compared to other platforms by dramatically reducing message traffic to virtual machines. Exadata Virtual Machines can dynamically expand or shrink CPUs and memory based on the workload requirement of the applications running in that virtual machine.

Virtual machines on Exadata are considered Trusted Partitions, and therefore software can be licensed at the virtual machine level instead of the physical processor level. Without Trusted Partitions, database options and other Oracle software must be licensed at a server or cluster level even though all databases running on that server or cluster may not require a particular option.

A.8 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.1)

The following feature is new for Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.1):

The following features are new for Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.1) and Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1):

The preceding release 11.2.3.3.1 features are described in "What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1)."

A.8.1 Additional SQL Operators and Conditions for LOB and CLOB

Oracle Exadata Storage Server Software supports offload of many SQL operators and conditions for predicate evaluation. Offload of the following SQL operators and conditions is now supported by Oracle Exadata Storage Server Software:

  • LOB and CLOB Conditions

    • LIKE

    • REGEXP_LIKE

Smart Scan evaluates the LOB operators and conditions only when a LOB is inlined (stored in the table row). In addition, Smart Scan handles Secure File compression. Using Secure File compression helps reduce the size of LOBs so that they can be inlined.

Minimum software: Oracle Database release 12.1.0.2 for offload of LOB/CLOB

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for information about inline LOBs

A.9 What's New in Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.0)

The following are new for Oracle Exadata Database Machine 12c Release 1 (12.1.1.1.0):

A.9.1 Support for Oracle Database Releases 12c Release 1 (12.1) and 11g Release 2 (11.2)

Oracle Exadata Storage Server Software 12c Release 1 (12.1) supports Oracle Database releases 12c Release 1 (12.1) and 11g Release 2 (11.2) running on a single Oracle Exadata Database Machine. The database servers get full performance, such as smart scans, fast file creation, and fast incremental backup, from all Exadata Storage Servers running Oracle Exadata Storage Server Software release 12c Release 1 (12.1).

Oracle Exadata Storage Server Software 12c Release 1 (12.1) supports multiple database releases by running a separate cell offload server for each major database release, so that all offload operations are fully supported. Offload requests coming from Oracle Database 11g Release 2 (11.2) are handled automatically by a release 11g offload server, and offload requests coming from an Oracle Database 12c Release 1 (12.1) database are handled automatically by a release 12c offload server.

Exadata Storage Server 12c Release 1 (12.1) can support the following releases of Oracle Database:

Database Release | Minimum Required Release
11.2.0.2 | Bundle patch 22
11.2.0.3 | Bundle patch 20
11.2.0.4 | Current release
12.1.0.1 | Current release

A.9.2 IORM Support for Container Databases and Pluggable Databases

Oracle Database 12c Release 1 (12.1) supports a multitenant architecture. In a multitenant architecture, a container is a collection of schemas, objects, and related structures in a multitenant container database (CDB) that appears logically to an application as a separate database. In a CDB, administrators can create one or more pluggable databases (PDBs) to run their workloads, and multiple workloads within multiple PDBs compete for shared resources. By using CDB plans and PDB plans, I/O Resource Management (IORM) provides the ability to manage I/O resource utilization among different PDBs as well as manage the workloads within each PDB.

Oracle Database 12c Release 1 (12.1) supports I/O Resource Management (IORM) prioritization to manage I/O resource utilization among different pluggable databases (PDBs) as well as the workloads within each PDB.

A.9.3 Cell to Cell Data Transfer

In earlier releases, Exadata Cells did not directly communicate to each other. Any data movement between the cells was done through the database servers. Data was read from a source cell into database server memory, and then written out to the destination cell. Starting with Oracle Exadata Storage Server Software 12c Release 1 (12.1), database server processes can offload data transfer to cells. A database server instructs the destination cell to read the data directly from the source cell, and write the data to its local storage. This reduces the amount of data transferred across the fabric by half, reducing InfiniBand bandwidth consumption, and memory requirements on the database server. Oracle Automatic Storage Management (Oracle ASM) resynchronization, resilver, and rebalance use this feature to offload data movement. This provides improved bandwidth utilization at the InfiniBand fabric level in Oracle ASM instances. No configuration is needed to utilize this feature.

Minimum software: Oracle Database 12c Release 1 (12.1) or later, and Oracle Exadata Storage Server Software 12c Release 1 (12.1) or later.

A.9.4 Desupport of HP Oracle Database Machine Hardware

Oracle Exadata Storage Server Software 12c Release 1 (12.1) is not supported on HP Oracle Database Machine hardware. Oracle continues to support HP Oracle Database Machines running Oracle Exadata Storage Server Software 11g Release 2 (11.2).

A.10 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1)

The following are new for Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1):

A.10.1 Oracle Capacity-on-Demand for Database Servers

Oracle allows users to limit the number of active cores in the database servers in order to restrict the number of required database software licenses. The reduction of processor cores is implemented during software installation using Oracle Exadata Database Machine Deployment Assistant (OEDA). The number of active cores can be increased at a later time, when more capacity is needed, but not decreased. The number of active processor cores must be the same on every socket of a database server.

Capacity-on-demand differs from Oracle Exadata Infrastructure as a Service (IaaS) as follows:

  • The number of active cores for capacity-on-demand cannot be decreased after initial installation.

  • Software licenses are only needed for the active cores when using capacity-on-demand.

Note:

Reducing the number of active cores lowers the initial software licensing cost. It does not change the hardware cost.
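
On releases that include DBMCLI (release 12.1.2.1.0 and later), the active core count can be inspected and increased with commands along these lines; this is a hedged sketch, and the target core count is hypothetical:

DBMCLI> LIST DBSERVER ATTRIBUTES coreCount, pendingCoreCount
DBMCLI> ALTER DBSERVER pendingCoreCount=24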

A.10.2 Exadata I/O Latency Capping

Disk drives or flash devices can, on rare occasions, exhibit high latencies for a small amount of time while an internal recovery operation is running. In addition, drives that are close to failing can sometimes exhibit high latencies before they fail. This feature masks these very rare latency spikes by redirecting read I/O operations to a mirror copy.

Oracle Exadata Storage Server Software automatically redirects read I/O operations to another cell when the latency of the read I/O is much longer than expected. This is done by returning a message to the database that initiated the read I/O. The database then redirects the I/O to another mirror copy of the data. Any I/Os issued to the last valid mirror copy of the data are not redirected.

Minimum software: Oracle Database 11g Release 2 (11.2) Monthly Database Patch For Exadata (June 2014 - 11.2.0.4.8). The same release is required for Oracle Grid Infrastructure.

A.10.3 Exadata Cell I/O Timeout Threshold

The I/O timeout threshold can be configured for Exadata Cells. Cell I/O that takes longer than the defined threshold is cancelled. Oracle Automatic Storage Management (Oracle ASM) redirects the I/O to another mirror copy of the data. Any write I/Os issued to the last valid mirror copy of the data are not cancelled, even if the timeout threshold is exceeded.

Setting the timeout threshold too low can negatively impact system performance. Oracle recommends reviewing the Automatic Workload Repository (AWR) reports of peak I/O loads, and setting the threshold to a value higher than the peak I/O latency, with a sufficient safety margin.
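
For example, the threshold might be set on a cell with a command along these lines; this is a hedged sketch, and the attribute name and value format are assumptions to confirm against the ALTER CELL documentation for your release:

CellCLI> ALTER CELL ioTimeoutThreshold='5s'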

Minimum software: Oracle Database 11g Release 2 (11.2) with the Monthly Database Patch for Exadata (June 2014 - 11.2.0.4.8). The same release is required for Grid Infrastructure.

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about the ALTER CELL I/O threshold timeout attribute

A.10.4 Support for New Hardware

This release includes support for the following hardware:

  • Oracle Exadata Database Machine X4-8 Full Rack

  • 4 TB high-capacity drives for Exadata Storage Server X4-2, Exadata Storage Server X3-2, and Exadata Storage Server X2-2

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3.1

A.11 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.0)

The following are new for Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.0):

A.11.1 Flash Cache Compression

Flash cache compression dynamically increases the logical capacity of the flash cache by transparently compressing user data as it is loaded into the flash cache. This allows much more data to be kept in flash, and decreases the need to access data on disk drives. The I/Os to data in flash are orders of magnitude faster than the I/Os to data on disk. The compression and decompression operations are completely transparent to the application and database, and have no performance overhead, even when running at rates of millions of I/Os per second.

Depending on the user data compressibility, Oracle Exadata Storage Server Software dynamically expands the flash cache size up to two times. Compression benefits vary based on the redundancy in the data. Tables and indexes that are uncompressed have the largest space reductions. Tables and indexes that are OLTP compressed have significant space reductions. Tables that use Hybrid Columnar Compression have minimal space reductions. Oracle Advanced Compression Option is required to enable flash cache compression.

This feature is enabled using the CellCLI ALTER CELL flashCacheCompress=true command.
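
For example, the command is run from CellCLI on each storage server; depending on the release, the flash cache may need to be dropped and re-created for the change to take effect, so follow the procedure in the user's guide.

CellCLI> ALTER CELL flashCacheCompress=true
CellCLI> LIST CELL ATTRIBUTES flashCacheCompress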

Minimum hardware: Exadata Storage Server X4-2L Servers

A.11.2 Automatic Flash Caching for Table Scan Workloads

Oracle Exadata Storage Server Software automatically caches objects read by table and partition scan workloads in flash cache based on how frequently the objects are read. The algorithm takes into account the size of the object, the frequency of access of the object, the frequency of access to data displaced in the cache by the object, and the type of scan being performed by the database. Depending on the flash cache size, and the other concurrent workloads, all or only part of the table or partition is cached. There is no risk of thrashing the flash cache by trying to cache an object that is large compared to the size of the flash cache, or by caching a table that is accessed by maintenance operations.

This new feature largely eliminates the need for manually keeping tables in flash cache except to guarantee low response times for certain objects at the expense of potentially increasing total disk I/Os. In earlier releases, database administrators had to mark a large object as KEEP to have it cached in flash cache for table scan workloads.

This feature primarily benefits table scan intensive workloads such as Data Warehouses and Data Marts. Random I/Os such as those performed for Online Transaction Processing (OLTP) continue to be cached in the flash cache the same way as in earlier releases.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

A.11.3 Fast Data File Creation

Fast data file creation more than doubles the speed at which new data files are formatted. Instead of writing the newly formatted blocks to disk or flash, the flash cache just persists the metadata about the blocks in the write back flash cache, eliminating the actual formatting writes to disks. For example, creating a 1 TB data file on Oracle Exadata Full Rack running release 11.2.3.3 takes 90 seconds when using fast data file creation. Creating the same 1TB data file on earlier releases takes 220 seconds. This feature works automatically when write back flash cache is enabled, and the correct software releases are in use.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4, or Oracle Database 12c Release 1 (12.1) release 12.1.0.1

A.11.4 Network Resource Management

Network Resource Management automatically and transparently prioritizes critical database network messages through the InfiniBand fabric ensuring fast response times for latency critical operations. Prioritization is implemented in the database, database InfiniBand adapters, Oracle Exadata Storage Server Software, Exadata storage InfiniBand adapters, and InfiniBand switches to ensure prioritization happens through the entire InfiniBand fabric.

Latency sensitive messages such as Oracle RAC Cache Fusion messages are prioritized over batch, reporting, and backup messages. Log file write operations are given the highest priority to ensure low latency for transaction processing.

This feature works in conjunction with CPU and I/O Resource Management to help ensure high and predictable performance in consolidated environments. For example, given an online transaction processing (OLTP) workload, commit latency is determined by log write latency. This feature enables log writer process (LGWR) network transfer to be prioritized over other database traffic in the same or other databases, such as backups or reporting.

This feature is enabled by default, and requires no configuration or management.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4, or Oracle Database 12c Release 1 (12.1) release 12.1.0.1, and switch firmware release 2.1.3-4

A.11.5 Active Bonding Network

Oracle Exadata Database Machine X4-2 database servers and storage servers enable active bonding support for both ports of an InfiniBand card. Active bonding provides much higher network bandwidth when compared to active-passive bonding in earlier releases because both InfiniBand ports are simultaneously used for sending network traffic.

The active bonding capability improves network bandwidth on Oracle Exadata Database Machine X4-2 because it features a new InfiniBand card that supports much higher throughput than previous InfiniBand cards. Active bonding does not improve bandwidth on earlier generation hardware because earlier InfiniBand cards were not fast enough to take advantage of the faster bandwidth provided by the latest generation server PCI bus.

Note the following about active bonding on an InfiniBand card:

  • Database servers running Oracle Linux provide the active bonding capability.

  • Oracle Clusterware requires the same interconnect name on each database server in the cluster. It is advisable to keep legacy bonding on the database servers when expanding existing Oracle Exadata Database Machine X3-2 and Oracle Exadata Database Machine X2-2 systems with Oracle Exadata Database Machine X4-2 systems.

  • Two IP addresses are required for each InfiniBand card for increased network bandwidth.

The following table provides guidelines on how to configure systems:

Operating System | Database Servers in Oracle Exadata Database Machine X4-2 | Storage Servers in Oracle Exadata Database Machine X4-2 | Database Servers in Oracle Exadata Database Machine X3-8 Full Rack with Exadata Storage Server X4-2L Servers
Oracle Linux | Active bonding | Active bonding | Legacy bonding, single port per HCA active
Oracle Solaris | IPMP, single port active | Active bonding | IPMP, single port per HCA active

Minimum hardware: Oracle Exadata Database Machine X4 generation servers

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

A.11.6 Oracle ASM Disk Group in Appliance Mode

The Oracle ASM appliance.mode attribute improves disk rebalance completion time when dropping one or more Oracle ASM disks. This means that redundancy is restored faster after a failure. The attribute is automatically enabled when creating a new disk group. Existing disk groups must explicitly set the attribute using the ALTER DISKGROUP command.
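
For an existing disk group, a minimal SQL sketch (using a hypothetical disk group named DATA) is shown below; the attribute can then be confirmed by querying V$ASM_ATTRIBUTE.

ALTER DISKGROUP data SET ATTRIBUTE 'appliance.mode' = 'TRUE';

SELECT name, value FROM v$asm_attribute WHERE name = 'appliance.mode';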

The attribute can only be enabled on disk groups that meet the following requirements:

  • The Oracle ASM disk group attribute compatible.asm is set to release 11.2.0.4, or later.

  • The cell.smart_scan_capable attribute is set to TRUE.

  • All disks in the disk group are the same type of disk, such as all hard disks or extreme flash disks.

  • All disks in the disk group are the same size.

  • All failure groups in the disk group have an equal number of disks.

  • No disk in the disk group is offline.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4 or Oracle Database 12c Release 1 (12.1) release 12.1.0.2

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about the appliance.mode attribute

A.11.7 Automatic Hard Disk Scrub and Repair

Oracle Exadata Storage Server Software automatically inspects and repairs hard disks periodically when hard disks are idle. If bad sectors are detected on a hard disk, then Oracle Exadata Storage Server Software automatically sends a request to Oracle ASM to repair the bad sectors by reading the data from another mirror copy. By default, the hard disk scrub runs every two weeks.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4 or Oracle Database 12c Release 1 (12.1) release 12.1.0.2.

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about setting the scrub interval
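
As a sketch, and assuming the ALTER CELL attribute hardDiskScrubInterval (with values such as daily, weekly, biweekly, or none), the default two-week schedule could be changed from CellCLI as follows; confirm the attribute name and valid values in the user's guide.

CellCLI> ALTER CELL hardDiskScrubInterval=weekly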

A.11.8 Drop Hard Disk for Replacement

Before replacing a normal hard disk that is not in any failure status, the Oracle Exadata Database Machine administrator must run the ALTER PHYSICALDISK DROP FOR REPLACEMENT command, and confirm its success before removing the hard disk from Exadata Cell. The command verifies that the grid disks on that hard disk can be safely taken offline from Oracle ASM without causing a disk group force dismount. If they can, then the command takes the grid disks offline in Oracle ASM, disables the hard disk, and turns on the service LED on the storage server.
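
For example, using a hypothetical physical disk name 20:2 as reported by LIST PHYSICALDISK:

CellCLI> LIST PHYSICALDISK
CellCLI> ALTER PHYSICALDISK 20:2 DROP FOR REPLACEMENT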

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about the ALTER PHYSICALDISK command

A.11.9 Drop BBU for Replacement

Before performing an online BBU (battery backup unit) replacement on an Oracle Exadata Database Machine database server or storage cell, the Oracle Exadata Database Machine administrator must run the ALTER CELL BBU DROP FOR REPLACEMENT command, and confirm the success of the command. The command changes the controller to write-through caching and ensures that no data loss can occur if power is lost while the BBU is being replaced.

Minimum hardware: Oracle Exadata Database Machine X3-2 or Oracle Exadata Database Machine X3-8 Full Rack, with disk-form-factor BBU

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

A.11.10 Oracle Exadata Database Machine Eighth Rack Configuration

Oracle Exadata Database Machine Eighth Rack configuration for storage cells can be enabled and disabled using the ALTER CELL eighthRack command. No more than 6 cell disks are created on hard disks and no more than 8 cell disks are created on flash disks when using an eighth rack configuration.
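
For example, from CellCLI on each storage cell:

CellCLI> ALTER CELL eighthRack=true
CellCLI> LIST CELL ATTRIBUTES eighthRack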

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.2.1

See Also:

Oracle Exadata Database Machine Maintenance Guide for additional information about configuring Oracle Exadata Database Machine Eighth Rack

A.11.11 Cell Alert Summary

Oracle Exadata Storage Server Software periodically sends out an e-mail summary of all open alerts on Exadata Cells. The open alerts e-mail message provides a concise summary of all open issues on a cell. The summary includes the following:

  • Cell name

  • Event time

  • Severity of the alert

  • Description of the alert

  • Information about configuring the alert summary

Alerts created since the previous summary are marked with an asterisk.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about configuring the alert summary
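
As a rough sketch, and assuming the ALTER CELL attribute alertSummaryInterval controls the summary schedule (verify the exact attribute names and value formats in the user's guide), the interval could be adjusted from CellCLI:

CellCLI> ALTER CELL alertSummaryInterval=weekly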

A.11.12 Secure Erase for Larger Drives

With this release, Oracle Exadata Storage Server Software supports secure erase for larger hard drives, and flash drives. The following are the approximate times needed to securely erase the drives using the supported algorithms:

Type of Drive | One Pass (1pass) | Three Pass (3pass) | Seven Pass (7pass)
1.2 TB drive | 1.67 hours | 5 hours | 11.67 hours
4 TB drive | 8 hours | 24 hours | 56 hours
186 GB flash drive | NA | NA | 36 minutes

See Also:

"Exadata Secure Erase"

A.11.13 Periodic ILOM Reset

The Integrated Lights-Out Manager (ILOM) hang detection module in Management Server periodically resets the ILOM as a proactive measure, to prevent the ILOM from entering an unstable state after running for a long period. The reset interval is 90 days.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3.0

A.11.14 Oracle Exawatcher Replaces Oracle OSwatcher

Starting with this release, Oracle Exawatcher replaces Oracle OSwatcher. Oracle Exawatcher has greater collection and reporting functionality than Oracle OSwatcher.

See Also:

Oracle Exadata Storage Server Software User's Guide for information about Oracle Exawatcher

A.11.15 Enhancements for Hardware and Software

The following enhancements have been added for the hardware and software:

  • Enhancements for Sun Datacenter InfiniBand Switch 36 Switches

    • Sun Datacenter InfiniBand Switch 36 switches in Oracle Exadata Database Machine are upgraded in a rolling manner using the patchmgr utility. See Oracle Exadata Database Machine Maintenance Guide for additional information.

    • Switch software release 2.1.3-4 includes the ability to automatically disable intermittent links. The InfiniBand specification requires the bit error rate on a link to be less than 1 in 10^12 bits. If a link exceeds 3546 symbol errors per day, or 144 symbol errors per hour, then the link is disabled. The InfiniBand switch software provides the autodisable command, and the patchmgr utility automatically enables this feature when the switch is upgraded to release 2.1.3-4.

    • The new switch software release 2.1.3-4 can create fat tree topologies with two switches, or with an unbalanced number of links across multiple spine switches in the fabric.

    • The amount of time taken to perform a subnet manager failover has been reduced to less than one second, even on multi-rack configurations.

  • Enhancements for Patch Application

    • The patchmgr utility provides the ability to send e-mail messages upon completion of patching, as well as the status of rolling and non-rolling patch application. See Oracle Exadata Database Machine Maintenance Guide, and the patch set for additional information.

    • Firmware upgrades on database servers for ILOM/BIOS, InfiniBand HCA, and disk controller happen automatically during component replacements on racks running Oracle Linux and Oracle Solaris.

  • Enhancements for Hardware Robustness

    • The time to recover from a bad sector on a hard disk has been reduced by a factor of 12.

    • The failure state of a hard drive or flash drive is rarely Boolean. Most devices slow down considerably before they fail. Slow and intermittent drives are detected much earlier, and failed by Oracle Exadata Storage Server Software before the drives reach predictive failure or a hard failure state.

    • If the ILOM on a storage server stops responding, then the management software can automatically reset the ILOM.

  • Support for Oracle Solaris 11.1 (SRU 9.5.1)

    This release supports Oracle Solaris 11.1 SRU 9.5.1 on the database servers.

A.12 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.2)

The following are new for Oracle Exadata Database Machine 11g Release 2 (11.2.3.2):

A.12.1 Write Back Flash Cache with Exadata Smart Flash Cache

Exadata Smart Flash Cache transparently caches frequently-accessed data to fast solid-state storage, improving query response times and throughput. Write operations serviced by flash instead of by disk are referred to as "write back flash cache." Write back flash cache allows 20 times more write I/Os per second on X3 systems, and 10 times more on X2 systems. The larger flash capacity on X3 systems means that almost all writes are serviced by flash.

An active data block can remain in write back flash cache for months or years. Blocks that have not been read recently only keep the primary copy in cache. All data is copied to disk, as needed. This provides for smart usage of the premium flash space.

If there is a problem with the flash cache, then the operations transparently fail over to the mirrored copies on flash. No user intervention is required. The data on flash is mirrored based on its allocation units. This means the amount of data written is proportional to the lost cache size, not the disk size.

See Also:

Oracle Exadata Storage Server Software User's Guide for additional information about write back flash cache and write through flash cache
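
A minimal sketch for switching a cell to write back mode follows, assuming the flashCacheMode cell attribute; the full rolling or non-rolling procedure also involves dropping and re-creating the flash cache and managing CELLSRV services, so follow the steps in the user's guide rather than this outline.

CellCLI> ALTER CELL flashCacheMode=WriteBack
CellCLI> LIST CELL ATTRIBUTES flashCacheMode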

A.12.1.1 Exadata Smart Flash Cache Persistent After Cell Restart

Exadata Smart Flash Cache is persistent through power outages, shutdown operations, cell restarts, and so on. Data in flash cache is not repopulated by reading from the disk after a cell restarts. Write operations from the server go directly to flash cache. This reduces the number of I/O operations on the disks. The caching of data on the flash disks is set by the administrator.

A.12.2 Graceful Shutdown of CELLSRV Services

If a cell or disk is offline, and an administrator tries to restart or shut down CELLSRV services, then the administrator gets a message that the cell cannot be shut down due to reduced redundancy.

A.12.3 LED Notification for Storage Server Disk Removal

When a storage server disk needs to be removed, a blue LED light is displayed on the server. The blue light makes it easier to determine which server disk needs maintenance.

A.12.4 Identification of Underperforming Disks

Underperforming disks affect the performance of all disks because work is distributed equally to all disks. For example, if a disk is performing 30% slower than other disks, then the entire system's I/O capacity will be 30% lower.

When an underperforming disk is detected, it is removed from the active configuration. Oracle Exadata Database Machine then performs a set of performance tests. If the problem with the disk is temporary and it passes the tests, then it is brought back into the configuration. If the disk does not pass the tests, then it is marked as poor performance, and an Auto Service Request (ASR) service request is opened to replace the disk. This feature applies to both hard disks and flash disks.

A.12.5 Oracle Database File System Support for Oracle Solaris

Oracle Database File System (DBFS) manages unstructured data in an Oracle database. Files in DBFS are stored in the database in SecureFiles and inherit all of its performance, scalability, security, availability and rich functionality benefits, such as compression, deduplication, encryption, text search, and XQuery.

In earlier releases, DBFS was only available for Oracle Exadata Database Machine running Linux. With this release, DBFS is also supported on Oracle Exadata Database Machine running Oracle Solaris.

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for additional information about DBFS

A.12.6 Health Factor for Predictive Failed Disk Drop

When a hard disk enters predictive failure on Exadata Cell, Oracle Exadata Storage Server Software automatically triggers an Oracle ASM rebalance to relocate data from the disk. The Oracle ASM rebalance first reads from healthy mirrors to restore redundancy. If all other mirrors are not available, then Oracle ASM rebalance reads the data from the predictively-failed disk. This diverts rebalance reads away from the predictively-failed disk when possible to ensure optimal rebalance progress while maintaining maximum data redundancy during the rebalance process.

Before the data is completely relocated to other healthy disks in the disk group, Oracle Exadata Storage Server Software notifies database instances of the poor health of the predictively-failed disk so that queries and smart scans for data on that disk will be diverted to other mirrors for better response time.

A.12.7 Hard Disk Drive Numbering in Servers

The drives in the Exadata Storage Server X3-2 Servers are numbered from left to right in each row. The drives in the bottom row are numbered 0, 1, 2, and 3. The drives in the middle row are numbered 4, 5, 6, and 7. The drives in the top row are numbered 8, 9, 10, and 11.

Figure A-2 Disk Layout in Exadata Storage Server X3-2 Servers


The drives in the Exadata Storage Server with Sun Fire X4270 M2 Servers and earlier servers were numbered from the lower left to the top, such that the drives in the leftmost column were 0, 1, and 2. The drives in the next column were 3, 4, and 5. The drives in the next column were 6, 7, and 8. The drives in the rightmost column were 9, 10, and 11.

Figure A-3 Disk Layout in Exadata Storage Server with Sun Fire X4270 M2 Servers


A.13 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.1)

The following are new for Oracle Exadata Database Machine 11g Release 2 (11.2.3.1):

A.13.1 Support for Oracle Solaris 11 (SRU2a)

This release supports Oracle Solaris 11 (SRU2a) on the database servers.

A.13.2 Linux Database Server Updates with Unbreakable Linux Network

Starting with Oracle Exadata Storage Server Software 11g Release 2 (11.2) release 11.2.3.1, the minimal pack is deprecated. The database server update procedure uses the Unbreakable Linux Network (ULN) for the distribution of updates, and the yum utility to apply the updates.

See Also:

Oracle Exadata Database Machine Maintenance Guide for information about updating the database servers

A.13.3 Oracle Enterprise Manager Cloud Control for Oracle Exadata Database Machine

Oracle Exadata Database Machine can be managed using Oracle Enterprise Manager Cloud Control. Oracle Enterprise Manager Cloud Control combines management of servers, operating systems, firmware, virtual machines, storage, and network fabrics into a single console.

A.13.4 I/O Resource Management Support for More Than 32 Databases

I/O Resource Management (IORM) now supports share-based plans, which can support up to 1024 databases, and up to 1024 directives for interdatabase plans. The share-based plans allocate resources based on shares instead of percentages. A share is a relative distribution of the I/O resources. In addition, the new default directive specifies the default value for all databases that are not explicitly named in the database plan.
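
For illustration, a share-based interdatabase plan with a default directive might look like the following on each cell; the database names sales and finance are hypothetical, and the exact syntax should be confirmed in the user's guide.

CellCLI> ALTER IORMPLAN dbplan=((name=sales, share=8), (name=finance, share=4), (name=default, share=1))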

See Also:

Oracle Exadata Storage Server Software User's Guide for information about IORM

A.13.5 Oracle Database 11g Release 2 (11.2.0.3)

The Smart Scan in Oracle Exadata Storage Server Software 11g Release 2 (11.2) release 11.2.3.n is based on the technology present in Oracle Database software 11g Release 2 (11.2) release 11.2.0.3, and is backwards compatible with the 11.2.0.n releases of the database.

A.13.6 Exadata Cell Connection Limit

Oracle Database, Oracle ASM, Oracle Clusterware and Oracle utilities perform I/O operations on Exadata Cells. In order for a process to perform I/O operations on Exadata Cell, the process must first establish a connection to the cell. Once a process is connected to Exadata Cell, it remains connected until process termination.

With this release, each Exadata Cell can support up to 60,000 simultaneous connections originating from one or more database servers. This implies that no more than 60,000 processes can simultaneously remain connected to a cell and perform I/O operations. The limit was 32,000 connections in release 11.2.2.4. Prior to release 11.2.2.4, the limit was 20,000 connections.

A.14 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.2.4)

The following is new for Oracle Exadata Database Machine 11g release 2 (11.2.2.4):

A.14.1 Oracle Exadata Smart Flash Log

The time to commit user transactions is very sensitive to the latency of log writes. In addition, many performance-critical database algorithms, such as space management and index splits, are sensitive to log write latency. Oracle Exadata Storage Server Software speeds up log writes using battery-backed DRAM cache in the disk controller. Writes to the disk controller cache are normally very fast, but they can become slow during periods of high disk I/O. Oracle Exadata Smart Flash Log takes advantage of flash memory in Exadata Storage Server to accelerate log writes.

Flash memory has very good average write latency, but it has occasional slow outliers that are one to two orders of magnitude slower than the average. Oracle Exadata Smart Flash Log performs redo writes simultaneously to both flash memory and the disk controller cache, and completes the write when the first of the two completes. This improves the user transaction response time, and increases overall database throughput for I/O intensive workloads.

Oracle Exadata Smart Flash Log only uses Exadata flash storage for temporary storage of redo log data. By default, Oracle Exadata Smart Flash Log uses 32 MB on each flash disk, for a total of 512 MB across each Exadata Cell. It is automatically configured and enabled. No additional configuration is needed.

A.15 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.2.3)

The following are new for Oracle Exadata Database Machine 11g Release 2 (11.2.2.3):

A.15.1 Oracle Solaris Operating System for Database Servers

The database servers in Oracle Exadata Database Machine are installed with both the Linux operating system and the Oracle Solaris operating system. During initial configuration, choose the operating system for your environment. After selecting an operating system, you can reclaim the disk space used by the other operating system.

A.15.2 Exadata Secure Erase

Oracle Exadata Storage Server Software includes a method to securely erase and clean physical disks before redeployment. The ERASE option overwrites the existing content on the disks with one pass, three passes or seven passes. The one pass option overwrites content with zeros. The three pass option follows recommendations from NNSA and the seven pass option follows recommendations from DOD.
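
For example, the erase can be requested through the CellCLI DROP commands before redeployment; the grid disk prefix DATA below is hypothetical, and the exact syntax and options should be confirmed in the user's guide.

CellCLI> DROP GRIDDISK ALL PREFIX=DATA ERASE=1pass
CellCLI> DROP CELLDISK ALL ERASE=3pass FORCE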

The following table shows the approximate times needed to securely erase a drive using the supported algorithms.

Type of Drive | One Pass | Three Pass | Seven Pass
600 GB drive | 1 hour | 3 hours | 7 hours
2 TB drive | 5 hours | 15 hours | 35 hours
3 TB drive | 7 hours | 21 hours | 49 hours
22.875 GB flash drive | NA | NA | 21 minutes
93 GB flash drive | NA | NA | 32 minutes

Note:

  • Oracle Exadata Storage Server Software secure data erase uses multiple over-writes of all accessible data. The over-writes use variations of data characters. This method of data erase is based on commonly known algorithms. Under rare conditions even a 7-pass erase may not remove all traces of data. For example, if a disk has internally remapped sectors, then some data may remain physically on the disk. This data will not be accessible using normal I/O interfaces.

  • Using tablespace encryption is another way to secure data.

A.15.3 Optimized Smart Scan

Oracle Exadata Storage Server Software detects resource bottlenecks on Exadata Storage Servers by monitoring CPU utilization. When a bottleneck is found, work is relocated to improve performance. Each Exadata Cell maintains the following statistics:

  • Exadata Cell CPU usage and push-back rate snapshots for the last 30 minutes.

  • Total number of 1 MB blocks for which a push-back decision was made.

  • Number of blocks that have been pushed back to the database servers.

  • The statistic Total cpu passthru output IO size in KB.

A.16 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.1.2)

The following features are new for Oracle Exadata Database Machine 11g release 2 (11.2.1.2):

A.16.1 Exadata Smart Flash Cache

Exadata Smart Flash Cache provides a caching mechanism for frequently-accessed data on each Exadata Cell. It is a write-through cache which is useful for absorbing repeated random reads, and very beneficial to online transaction processing (OLTP). It provides a mechanism to cache data in KEEP mode using database-side storage clause hints at a table or index level. The Exadata Smart Flash Cache area on flash disks is automatically created on Exadata Cells during start up.

Oracle Exadata Storage Servers are equipped with high-performance flash disks in addition to traditional rotational hard disks. These high-performance flash disks can be used to create Exadata grid disks to store frequently accessed data. That approach requires the user to do accurate space planning, and to place the most active tablespaces on the premium disks. The recommended option is to dedicate all or part of the flash disk space to Exadata Smart Flash Cache. In this case, the most frequently-accessed data on the spinning disks is automatically cached in the Exadata Smart Flash Cache area on high-performance flash disks. When the database needs to access this data, Oracle Exadata Storage Server fetches the data from Exadata Smart Flash Cache instead of getting it from slower rotational disks.

When a partition or a table is scanned by the database, Exadata Storage Server can fetch the data being scanned from the Exadata Smart Flash Cache if the object has the CELL_FLASH_CACHE attribute set. In addition to serving data from the Exadata Flash Cache, Exadata Storage Server also has the capability to fetch the object being scanned from hard disks.

The performance delivered by Exadata Storage Server is additive when it fetches scanned data from the Exadata Smart Flash Cache and hard disks. Exadata Storage Server has the ability to utilize the maximum Exadata Smart Flash Cache bandwidth and the maximum hard disk bandwidth to scan an object, and give an additive maximum bandwidth while scanning concurrently from both.

Oracle Database and Exadata Smart Flash Cache software work closely with each other. When the database sends a read or write request to Oracle Exadata Storage Server, it includes additional information in the request about whether the data is likely to be accessed again, and should be cached. For example, when writing data to a log file or to a mirrored copy, the database sends a hint to skip caching. When reading a table index, the database sends a hint to cache the data. This cooperation allows optimized usage of Exadata Smart Flash Cache space to store only the most frequently-accessed data.

Users have additional control over which database objects, such as tablespace, tables, and so on, should be cached more aggressively than others, and which ones should not be cached at all. Control is provided by the new storage clause attribute, CELL_FLASH_CACHE, which can be assigned to a database object.

For example, to pin table CALLDETAIL in Exadata Smart Flash Cache one can use the following command:

ALTER TABLE calldetail STORAGE (CELL_FLASH_CACHE KEEP)

Exadata Storage Server caches data for the CALLDETAIL table more aggressively and tries to keep this data in Exadata Smart Flash Cache longer than cached data for other tables. If the CALLDETAIL table is spread across several Oracle Exadata Storage Servers, then each one caches its part of the table in its own Exadata Smart Flash Cache. If the caches are of sufficient size, then the CALLDETAIL table is likely to be completely cached over time.

A.16.2 Hybrid Columnar Compression

Hybrid Columnar Compression offers higher compression levels for direct path loaded data. This new compression capability is recommended for data that is not updated frequently. You can specify Hybrid Columnar Compression at the partition, table, and tablespace level. You can also specify the desired level of compression, to achieve the proper trade-off between disk usage and CPU overhead. Included is a compression advisor that helps you determine the proper compression levels for your application.
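
For example, a compression level is specified with the COMPRESS FOR clause; the table and partition names below (sales, sales_archive, sales_q1_2011) are hypothetical.

CREATE TABLE sales_archive COMPRESS FOR ARCHIVE LOW AS SELECT * FROM sales;

ALTER TABLE sales MOVE PARTITION sales_q1_2011 COMPRESS FOR QUERY HIGH;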

This feature allows the database to reduce the number of I/Os to scan a table. For example, if you compress data 10 to 1, then the I/Os are reduced 10 to 1 as well. In addition, Hybrid Columnar Compression saves disk space by the same amount.

This feature also allows the database to offload Smart Scans for a column-compressed table to Exadata Cells. When a scan is done on a compressed table, Exadata Cell reads the compressed blocks from the disks for the scan. Oracle Exadata Storage Server Software then decompresses the referenced columns, does predicate evaluation of the data, and applies the filter. The cell then sends back qualifying data in an uncompressed format. Without this offload, data decompression would take place on the database host. Having Exadata Cell decompress the data results in significant CPU savings on the database host.

See Also:

Oracle Exadata Storage Server Software User's Guide for information about Hybrid Columnar Compression

A.16.3 Storage Index

Storage indexes are a very powerful capability provided in Oracle Exadata Storage Server Software that help avoid I/O operations. Oracle Exadata Storage Server Software creates and maintains a storage index in Exadata memory. The storage index keeps track of minimum and maximum values of columns per storage region for tables stored on that cell. This functionality is done transparently, and does not require any administration by the user.

When a query specifies a WHERE clause, Oracle Exadata Storage Server Software examines the storage index to determine whether rows with the specified column value do not exist in a region of disk in the cell by comparing the column value to the minimum and maximum values maintained in the storage index. If the column value is outside the minimum and maximum range, then scan I/O in that region for that query is avoided. Many SQL operations run dramatically faster because large numbers of I/O operations are automatically replaced by a few in-memory lookups. To minimize operational overhead, storage indexes are created and maintained transparently and automatically by Oracle Exadata Storage Server Software.

Storage indexes provide benefits for encrypted tablespaces. However, storage indexes do not maintain minimum and maximum values for encrypted columns.
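
The I/O avoided by storage indexes can be observed from the database; for example, the following query reports the cumulative savings, assuming the commonly exposed V$SYSSTAT statistic name shown below.

SELECT name, value
  FROM v$sysstat
 WHERE name = 'cell physical IO bytes saved by storage index';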

A.16.4 Smart Scan of Encrypted Data

Oracle Exadata Storage Server Software offloads decryption, and performs Smart Scans on encrypted tablespaces and encrypted columns. While the earlier release of Oracle Exadata Storage Server Software fully supported encrypted tablespaces and encrypted columns, it did not benefit from Exadata offload processing. For encrypted tablespaces, Oracle Exadata Storage Server Software can decrypt blocks and return the decrypted blocks to Oracle Database, or it can perform a smart scan, which returns rows and columns. When Oracle Exadata Storage Server Software performs the decryption instead of the database, there are significant CPU savings because CPU usage is offloaded to Exadata Cells.

A.16.5 Interleaved Grid Disks

Space for grid disks can be allocated in an interleaved manner. Grid disks that use this type of space allocation are referred to as interleaved grid disks. This method attempts to equalize the performance of the grid disks residing on the same cell disk rather than having the grid disks that occupy the outer tracks getting better performance at the expense of the grid disks on the inner tracks.

A cell disk is divided into two equal parts, the outer half (upper portion) and the inner half (lower portion). When a new grid disk is created, half of the grid disk space is allocated on the outer half of the cell disk, and the other half of the grid disk space is allocated on the inner half of the cell disk. The upper portion of the grid disk starts on the first available outermost offset in the outer half, depending on free or used space in the outer half. The lower portion of the grid disk starts on the first available outermost offset in the inner half.

For example, if cell disk, CD_01_cell01 is completely empty and has 100 GB of space, and a grid disk, data_CD_01_cell01, is created and sized to 50 GB on the cell disk, then the cell disk would have the following layout:

- Outer portion of data_CD_01_cell01 - 25GB
- Free space - 25GB
------------ Middle Point ------------------
- Inner portion of data_CD_01_cell01 - 25GB
- Free space - 25GB

See Also:

Oracle Exadata Storage Server Software User's Guide for information about grid disks

A.16.6 Data Mining Scoring Offload

Oracle Exadata Storage Server Software now offloads data mining model scoring. This makes a data warehouse deployed on Exadata Cells a better and more performant data analysis platform. All data mining scoring functions, such as PREDICTION_PROBABILITY, are offloaded to Exadata Cells for processing. This accelerates warehouse analysis while reducing database server CPU consumption and the I/O load between the database server and Exadata storage.
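
For example, a scoring query like the following is evaluated with PREDICTION_PROBABILITY offloaded to the cells as part of the Smart Scan; the model name churn_model and the customers table are hypothetical.

SELECT cust_id
  FROM customers
 WHERE PREDICTION_PROBABILITY(churn_model, 'Y' USING *) > 0.8;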

A.16.7 Enhanced Manageability Features

Oracle Exadata Storage Server Software now includes the following manageability features:

  • Automatic addition of replacement disk to the disk group: All the required Exadata operations to re-create the disk groups, and add the grid disks back to the original disk group are now performed automatically when a replacement disk is inserted after a physical disk failure.

  • Automatic cell restart: Grid disks are automatically changed to online when a cell recovers from a failure, or after a restart.

  • Support for OCR and voting disks on ASM disk groups: In Oracle Database 11g Release 2 (11.2), Oracle Cluster Registry (OCR) and voting disks are supported on ASM disk groups, and the iSCSI partitions are no longer needed.

  • Support for up to four dual-port InfiniBand Host Channel Adapters in the database server. This feature enables larger Oracle Exadata Database Machine X2-8 Full Rack servers to be used as database servers using Oracle Exadata Storage Server Software.

A.17 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2)

The following is new for Oracle Exadata Database Machine 11g Release 2 (11.2):

A.17.1 Expanded Content in the Guide

This release of Oracle Exadata Database Machine System Overview includes maintenance procedures, cabling information, site planning checklists, and so on. This guide is the main reference book for Oracle Exadata Database Machine.