8 New Features for Oracle Exadata System Software Release 11.x

Several new features were introduced for the various versions of Oracle Exadata System Software Release 11.x.

8.1 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.1)

The following are new for Oracle Exadata System Software 11g Release 2 (11.2.3.3.1):

8.1.1 Oracle Capacity-on-Demand for Database Servers

Oracle allows users to limit the number of active cores in the database servers in order to restrict the number of required database software licenses. The reduction of processor cores is implemented during software installation using Oracle Exadata Database Machine Deployment Assistant (OEDA). The number of active cores can be increased at a later time, when more capacity is needed, but not decreased. The number of active processor cores must be the same on every socket of a database server.
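As a rough sketch of the constraints described above, the following Python checks whether a requested change to the active core count is permissible. The function and its parameter names are illustrative only, not an Oracle API; the rules encoded are the ones stated in this section (the count can only grow, and must be the same on every socket):

```python
def validate_core_change(current_active, requested_active, cores_per_socket_max, sockets):
    """Check a capacity-on-demand core change against the rules above:
    active cores can only increase, and must be spread evenly across sockets."""
    if requested_active < current_active:
        return False  # active cores cannot be decreased after installation
    if requested_active % sockets != 0:
        return False  # the count must be the same on every socket
    if requested_active // sockets > cores_per_socket_max:
        return False  # cannot exceed the physical cores per socket
    return True
```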

Capacity-on-demand differs from Oracle Exadata Infrastructure as a Service (IaaS) as follows:

  • The number of active cores for capacity-on-demand cannot be decreased after initial installation.

  • Software licenses are only needed for the active cores when using capacity-on-demand.

Note:

Reducing the number of active cores lowers the initial software licensing cost. It does not change the hardware cost.

8.1.2 Exadata I/O Latency Capping

Disk drives or flash devices can, on rare occasions, exhibit high latencies for a short period while an internal recovery operation is running. In addition, drives that are close to failing can sometimes exhibit high latencies before they fail. This feature masks these rare latency spikes by redirecting read I/O operations to a mirror copy.

Oracle Exadata Storage Server Software automatically redirects read I/O operations to another cell when the latency of the read I/O is much longer than expected. This is done by returning a message to the database that initiated the read I/O. The database then redirects the I/O to another mirror copy of the data. Any I/Os issued to the last valid mirror copy of the data are not redirected.
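The redirect decision can be sketched as follows. The 10x latency factor is a hypothetical threshold chosen for illustration; the actual heuristic used by the software is not documented here:

```python
def should_redirect_read(observed_ms, expected_ms, is_last_valid_mirror, factor=10.0):
    """Redirect a read to another mirror when its latency is far beyond
    expectation, unless this is the last valid copy of the data."""
    if is_last_valid_mirror:
        return False  # the last valid mirror copy is always served, never redirected
    return observed_ms > expected_ms * factor
```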

Minimum software: Oracle Database 11g Release 2 (11.2) Monthly Database Patch for Exadata (June 2014 - 11.2.0.4.8). The same release is required for Oracle Grid Infrastructure.

8.1.3 Oracle Exadata Storage Server I/O Timeout Threshold

The I/O timeout threshold can be configured for Oracle Exadata Storage Servers. Storage server I/O that takes longer than the defined threshold is canceled. Oracle Automatic Storage Management (Oracle ASM) redirects the I/O to another mirror copy of the data. Any write I/Os issued to the last valid mirror copy of the data are not canceled, even if the timeout threshold is exceeded.

Setting the timeout threshold too low can negatively impact system performance. Oracle recommends reviewing the Automatic Workload Repository (AWR) reports of peak I/O loads, and setting the threshold higher than the peak I/O latency, with a sufficient safety margin.
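This recommendation can be expressed as a small helper. The safety factor is an assumed knob for illustration, not an Oracle parameter:

```python
def suggest_io_timeout(peak_latencies_ms, safety_factor=2.0):
    """Suggest a cell I/O timeout threshold above the observed AWR peak
    latency, scaled by a safety margin (hypothetical factor)."""
    peak = max(peak_latencies_ms)
    return peak * safety_factor
```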

Minimum software: Oracle Database 11g Release 2 (11.2) Monthly Database Patch for Exadata (June 2014 - 11.2.0.4.8). The same release is required for Oracle Grid Infrastructure.

See Also:

Oracle Exadata System Software User's Guide for additional information about the ALTER CELL I/O threshold timeout attribute

8.1.4 Support for New Hardware

This release includes support for the following hardware:

  • Oracle Exadata Database Machine X4-8 Full Rack

  • 4 TB high-capacity drives for Exadata Storage Server X4-2, Exadata Storage Server X3-2, and Exadata Storage Server X2-2

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3.1

8.2 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.3.0)

The following are new for Oracle Exadata System Software 11g Release 2 (11.2.3.3.0):

8.2.1 Flash Cache Compression

Flash cache compression dynamically increases the logical capacity of the flash cache by transparently compressing user data as it is loaded into the flash cache. This allows much more data to be kept in flash, and decreases the need to access data on disk drives. The I/Os to data in flash are orders of magnitude faster than the I/Os to data on disk. The compression and decompression operations are completely transparent to the application and database, and have no performance overhead, even when running at rates of millions of I/Os per second.

Depending on the user data compressibility, Oracle Exadata System Software dynamically expands the flash cache size up to two times. Compression benefits vary based on the redundancy in the data. Tables and indexes that are uncompressed have the largest space reductions. Tables and indexes that are OLTP compressed have significant space reductions. Tables that use Hybrid Columnar Compression have minimal space reductions. Oracle Advanced Compression Option is required to enable flash cache compression.

This feature is enabled using the CellCLI ALTER CELL flashCacheCompress=true command.
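Assuming the 2x expansion cap described above, the logical cache capacity for a given data compressibility can be estimated as follows (a sketch; the real sizing is managed dynamically by the software):

```python
def effective_flash_cache_gb(physical_gb, compression_ratio):
    """Estimate logical flash cache capacity under compression.
    compression_ratio = uncompressed size / compressed size; expansion
    is capped at 2x per the text above."""
    return physical_gb * min(compression_ratio, 2.0)
```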

Minimum hardware: Oracle Exadata Database Machine X3-2

8.2.2 Automatic Flash Caching for Table Scan Workloads

Oracle Exadata Storage Server Software automatically caches objects read by table and partition scan workloads in flash cache based on how frequently the objects are read. The algorithm takes into account the size of the object, the frequency of access of the object, the frequency of access to data displaced in the cache by the object, and the type of scan being performed by the database. Depending on the flash cache size, and the other concurrent workloads, all or only part of the table or partition is cached. There is no risk of thrashing the flash cache by trying to cache an object that is large compared to the size of the flash cache, or by caching a table that is accessed by maintenance operations.
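A toy version of this admission policy might look like the following. The actual algorithm weighs more factors (including scan type) than this sketch, and the comparison rule here is an assumption:

```python
def should_cache_scan(object_size, cache_size, obj_access_freq, displaced_access_freq):
    """Cache a scanned object only if it fits in the cache and is read at
    least as often as the data it would displace (illustrative rule)."""
    if object_size > cache_size:
        return False  # never thrash the cache with an object larger than it
    return obj_access_freq >= displaced_access_freq
```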

This new feature largely eliminates the need for manually keeping tables in flash cache except to guarantee low response times for certain objects at the expense of potentially increasing total disk I/Os. In earlier releases, database administrators had to mark a large object as KEEP to have it cached in flash cache for table scan workloads.

This feature primarily benefits table scan intensive workloads such as Data Warehouses and Data Marts. Random I/Os such as those performed for Online Transaction Processing (OLTP) continue to be cached in the flash cache the same way as in earlier releases.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

8.2.3 Fast Data File Creation

Fast data file creation more than doubles the speed at which new data files are formatted. Instead of writing the newly formatted blocks to disk or flash, the cell persists only the metadata about the blocks in the write back flash cache, eliminating the actual formatting writes to disk. For example, creating a 1 TB data file on Oracle Exadata Full Rack running release 11.2.3.3 takes 90 seconds when using fast data file creation. Creating the same 1 TB data file on earlier releases takes 220 seconds. This feature works automatically when write back flash cache is enabled, and the correct software releases are in use.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4, or Oracle Database 12c Release 1 (12.1) release 12.1.0.1

8.2.4 Network Resource Management

Network Resource Management automatically and transparently prioritizes critical database network messages through the InfiniBand fabric ensuring fast response times for latency critical operations. Prioritization is implemented in the database, database InfiniBand adapters, Oracle Exadata Storage Server Software, Exadata storage InfiniBand adapters, and InfiniBand switches to ensure prioritization happens through the entire InfiniBand fabric.

Latency sensitive messages such as Oracle RAC Cache Fusion messages are prioritized over batch, reporting, and backup messages. Log file write operations are given the highest priority to ensure low latency for transaction processing.

This feature works in conjunction with CPU and I/O Resource Management to help ensure high and predictable performance in consolidated environments. For example, given an online transaction processing (OLTP) workload, commit latency is determined by log write latency. This feature enables log writer process (LGWR) network transfer to be prioritized over other database traffic in the same or other databases, such as backups or reporting.

This feature is enabled by default, and requires no configuration or management.
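The prioritization can be illustrated with a simple priority queue. The message classes and numeric priority values here are assumptions for illustration; the real fabric uses InfiniBand service levels, which are not exposed in this sketch:

```python
import heapq

# Hypothetical priorities (lower = more urgent), matching the text: log writes
# highest, then Cache Fusion, with reporting and backup traffic last.
PRIORITY = {"log_write": 0, "cache_fusion": 1, "oltp_io": 2, "reporting": 3, "backup": 3}

def drain_in_priority_order(messages):
    """Dequeue messages so latency-critical traffic goes first; ties keep
    their arrival order (the enqueue index is the tiebreaker)."""
    heap = [(PRIORITY[kind], i, kind) for i, kind in enumerate(messages)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```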

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4, or Oracle Database 12c Release 1 (12.1) release 12.1.0.1, and switch firmware release 2.1.3-4

8.2.5 Active Bonding Network

Oracle Exadata Database Machine X4-2 database servers and storage servers enable active bonding support for both ports of an InfiniBand card. Active bonding provides much higher network bandwidth when compared to active passive bonding in earlier releases because both InfiniBand ports are simultaneously used for sending network traffic.

The active bonding capability improves network bandwidth on Oracle Exadata Database Machine X4-2 because it features a new InfiniBand card that supports much higher throughput than previous InfiniBand cards. Active bonding does not improve bandwidth on earlier generation hardware because earlier InfiniBand cards were not fast enough to take advantage of the faster bandwidth provided by the latest generation server PCI bus.

Note the following about active bonding on an InfiniBand card:

  • Database servers running Oracle Linux provide the active bonding capability.

  • Oracle Clusterware requires the same interconnect name on each database server in the cluster. It is advisable to keep legacy bonding on the database servers when expanding existing Oracle Exadata Database Machine X3-2 and Oracle Exadata Database Machine X2-2 systems with Oracle Exadata Database Machine X4-2 systems.

  • Two IP addresses are required for each InfiniBand card for increased network bandwidth.

The following table provides guidelines on how to configure systems:

Operating System   X4-2 Database Servers      X4-2 Storage Servers   X3-8 Full Rack Database Servers (with Exadata Storage Server X4-2L)

Oracle Linux       Active bonding             Active bonding         Legacy bonding, single port per HCA active

Oracle Solaris     IPMP, single port active   Active bonding         IPMP, single port per HCA active

Minimum hardware: Oracle Exadata Database Machine X4 generation servers

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3

8.2.6 Oracle ASM Disk Group in Appliance Mode

The Oracle ASM appliance.mode attribute improves disk rebalance completion time when dropping one or more Oracle ASM disks. This means that redundancy is restored faster after a failure. The attribute is automatically enabled when creating a new disk group. Existing disk groups must explicitly set the attribute using the ALTER DISKGROUP command.

The attribute can only be enabled on disk groups that meet the following requirements:

  • The Oracle ASM disk group attribute compatible.asm is set to release 11.2.0.4, or later.

  • The cell.smart_scan_capable attribute is set to TRUE.

  • All disks in the disk group are the same type of disk, such as all hard disks or extreme flash disks.

  • All disks in the disk group are the same size.

  • All failure groups in the disk group have an equal number of disks.

  • No disk in the disk group is offline.
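The eligibility rules above can be sketched as a single check. The function and parameter names are hypothetical, and the version comparison is simplified to a tuple compare:

```python
def appliance_mode_eligible(compatible_asm, smart_scan_capable, disk_types,
                            disk_sizes, failgroup_disk_counts, offline_disks):
    """Return True only when every requirement listed above is met."""
    if tuple(int(x) for x in compatible_asm.split(".")) < (11, 2, 0, 4):
        return False  # compatible.asm must be 11.2.0.4 or later
    if not smart_scan_capable:
        return False  # cell.smart_scan_capable must be TRUE
    if len(set(disk_types)) != 1 or len(set(disk_sizes)) != 1:
        return False  # all disks must be the same type and size
    if len(set(failgroup_disk_counts)) != 1:
        return False  # all failure groups must have an equal number of disks
    return offline_disks == 0  # no disk may be offline
```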

Minimum software: Oracle Exadata System Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4 or Oracle Database 12c Release 1 (12.1) release 12.1.0.2

See Also:

Oracle Exadata System Software User's Guide for additional information about the appliance.mode attribute

8.2.7 Automatic Hard Disk Scrub and Repair

Oracle Exadata System Software automatically inspects and repairs hard disks periodically when hard disks are idle. If bad sectors are detected on a hard disk, then Oracle Exadata System Software automatically sends a request to Oracle ASM to repair the bad sectors by reading the data from another mirror copy. By default, the hard disk scrub runs every two weeks.

Minimum software: Oracle Exadata System Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4 or Oracle Database 12c Release 1 (12.1) release 12.1.0.2.

See Also:

Oracle Exadata System Software User's Guide for additional information about setting the scrub interval

8.2.8 Drop Hard Disk for Replacement

Before replacing a normal hard disk that is not in any failure status, the Oracle Exadata Database Machine administrator must run the ALTER PHYSICALDISK DROP FOR REPLACEMENT command, and confirm its success before removing the hard disk from Oracle Exadata Storage Server. The command checks to ensure that the grid disks on that hard disk can be safely taken offline from Oracle ASM without causing a disk group force dismount. If all the grid disks can be offlined without leading to disk group force dismount, then the command puts the grid disks offline from Oracle ASM, disables the hard disk, and then turns on the service LED on the storage server.
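The safety check can be sketched as follows, assuming a simple map from each grid disk on the drive to the online status of its mirror copies elsewhere. The names are hypothetical; the real command consults Oracle ASM directly:

```python
def can_drop_for_replacement(grid_disks, mirror_status):
    """A drive may be dropped only if every grid disk on it has at least one
    other online mirror copy; otherwise taking it offline would force a
    disk group dismount. mirror_status maps grid disk -> list of booleans,
    one per mirror copy elsewhere (True = online)."""
    return all(any(mirror_status[gd]) for gd in grid_disks)
```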

Minimum software: Oracle Exadata System Software release 11.2.3.3

See Also:

Oracle Exadata System Software User's Guide for additional information about the ALTER PHYSICALDISK command

8.2.9 Drop BBU for Replacement

Before performing an online BBU (battery backup unit) replacement on an Oracle Exadata Database Machine database server or storage server, the Oracle Exadata Database Machine administrator must run the ALTER CELL BBU DROP FOR REPLACEMENT command, and confirm the success of the command. The command changes the controller to write-through caching and ensures that no data loss can occur when the BBU is replaced in case of a power loss.

Minimum hardware: Oracle Exadata Database Machine X3-2 or Oracle Exadata Database Machine X3-8 Full Rack, with disk-form-factor BBU

Minimum software: Oracle Exadata System Software release 11.2.3.3

8.2.10 Oracle Exadata Database Machine Eighth Rack Configuration

Oracle Exadata Database Machine Eighth Rack configuration for storage cells can be enabled and disabled using the ALTER CELL eighthRack command. No more than 6 cell disks are created on hard disks and no more than 8 cell disks are created on flash disks when using an eighth rack configuration.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.2.1

See Also:

Oracle Exadata Database Machine Maintenance Guide for additional information about configuring Oracle Exadata Database Machine Eighth Rack

8.2.11 Cell Alert Summary

Oracle Exadata System Software periodically sends out an e-mail summary of all open alerts on Oracle Exadata Storage Servers. The open alerts e-mail message provides a concise summary of all open issues on a cell. The summary includes the following:

  • Cell name

  • Event time

  • Severity of the alert

  • Description of the alert

  • Information about configuring the alert summary

Alerts created since the previous summary are marked with an asterisk.

Minimum software: Oracle Exadata System Software release 11.2.3.3

See Also:

Oracle Exadata System Software User's Guide for additional information about configuring the alert summary

8.2.12 Secure Erase for Larger Drives

With this release, Oracle Exadata Storage Server Software supports secure erase for larger hard drives, and flash drives. The following are the approximate times needed to securely erase the drives using the supported algorithms:

Type of Drive        One Pass (1pass)   Three Pass (3pass)   Seven Pass (7pass)

1.2 TB drive         1.67 hours         5 hours              11.67 hours

4 TB drive           8 hours            24 hours             56 hours

186 GB flash drive   NA                 NA                   36 minutes

See Also:

"Exadata Secure Erase"

8.2.13 Periodic ILOM Reset

The Integrated Lights Out Manager (ILOM) hang detection module in Management Server (MS) periodically resets the ILOM as a proactive measure. It is done in order to prevent the ILOM from entering an unstable state after running for a long period. The reset interval is 90 days.

Minimum software: Oracle Exadata System Software release 11.2.3.3.0

8.2.14 Oracle Exawatcher Replaces Oracle OSwatcher

Starting with this release, Oracle Exawatcher replaces Oracle OSwatcher. Oracle Exawatcher has greater collection and reporting functionality than Oracle OSwatcher.

See Also:

Oracle Exadata System Software User's Guide for information about Oracle Exawatcher

8.2.15 Enhancements for Hardware and Software

The following enhancements have been added for the hardware and software:

  • Enhancements for Sun Datacenter InfiniBand Switch 36 switches

    • Sun Datacenter InfiniBand Switch 36 switches in Exadata Database Machine are upgraded in a rolling manner using the patchmgr utility. See Understanding Rolling and Non-Rolling Updates for additional information.

    • Switch software release 2.1.3-4 includes the ability to automatically disable intermittent links. The InfiniBand specification stipulates that the bit error rate on a link must be less than 10⁻¹². If the number of symbol errors on a link exceeds 3546 per day, or 144 per hour, then the link is disabled. The InfiniBand switch software provides the autodisable command, and the patchmgr utility automatically enables this feature when the switch is upgraded to release 2.1.3-4.

    • The new switch software release 2.1.3-4 can create fat tree topologies with two switches, or with an unbalanced number of links across multiple spine switches in the fabric.

    • The amount of time taken to perform a subnet manager failover has been reduced to subseconds even on multi-rack configurations.

  • Enhancements for Patch Application

    • The patchmgr utility provides the ability to send e-mail messages upon completion of patching, as well as the status of rolling and non-rolling patch application. See Patchmgr syntax, and the patch set for additional information.

    • Firmware upgrades on database servers for ILOM/BIOS, InfiniBand HCA, and disk controller happen automatically during component replacements on racks running Oracle Linux and Oracle Solaris.

  • Enhancements for Hardware Robustness

    • The time to recover from a bad sector on a hard disk has been reduced by a factor of 12.

    • The failure state of a hard drive or flash drive is rarely Boolean. Most devices slow down considerably before they fail. Slow and intermittent drives are detected much earlier, and failed by Oracle Exadata System Software before the drives reach predictive failure or a hard failure state.

    • If the ILOM on a storage server stops responding, then the management software can automatically reset the ILOM.

  • Support for Oracle Solaris 11.1 (SRU 9.5.1)

    This release supports Oracle Solaris 11.1 SRU 9.5.1 on the database servers.
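The link autodisable thresholds quoted above (3546 symbol errors per day, or 144 per hour) can be checked with a small helper; this is only a sketch of the stated rule, not the switch's actual implementation:

```python
def link_should_autodisable(symbol_errors, window_hours):
    """A link is disabled when its symbol error rate exceeds 144 per hour
    or 3546 per day (both thresholds from the release notes)."""
    per_hour = symbol_errors / window_hours
    return per_hour > 144 or per_hour * 24 > 3546
```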

8.3 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.2)

The following are new for Oracle Exadata System Software 11g Release 2 (11.2.3.2):

8.3.1 Write-Back Flash Cache with Exadata Smart Flash Cache

Exadata Smart Flash Cache transparently caches frequently-accessed data to fast solid-state storage, improving query response times and throughput. Write operations serviced by flash instead of by disk are referred to as write-back flash cache. Write-back flash cache allows 20 times more write I/Os per second on X3 systems, and 10 times more on X2 systems. The larger flash capacity on X3 systems means that almost all writes are serviced by flash.

An active data block can remain in write-back flash cache for months or years. Blocks that have not been read recently only keep the primary copy in cache. All data is copied to disk, as needed. This provides for smart usage of the premium flash space.

If there is a problem with the flash cache, then the operations transparently fail over to the mirrored copies on flash. No user intervention is required. The data on flash is mirrored based on its allocation units. This means the amount of data written is proportional to the lost cache size, not the disk size.

See Also:

Oracle Exadata System Software User's Guide for additional information about write-back flash cache and write-through flash cache
8.3.1.1 Exadata Smart Flash Cache Persistent After Cell Restart

Exadata Smart Flash Cache is persistent through power outages, shutdown operations, cell restarts, and so on. Data in flash cache is not repopulated by reading from the disk after a cell restarts. Write operations from the server go directly to flash cache. This reduces the number of I/O operations on the disks. The caching of data on the flash disks is set by the administrator.

8.3.2 Graceful Shutdown of CELLSRV Services

If a cell or disk is offline and an administrator tries to restart or shut down CELLSRV services, then the administrator gets a message that the cell cannot be shut down due to reduced redundancy.

8.3.3 LED Notification for Storage Server Disk Removal

When a storage server disk needs to be removed, a blue LED light is displayed on the server. The blue light makes it easier to determine which server disk needs maintenance.

8.3.4 Identification of Underperforming Disks

Underperforming disks affect the performance of all disks because work is distributed equally to all disks. For example, if a disk is performing 30% slower than other disks, then the entire system's I/O capacity will be 30% lower.

When an underperforming disk is detected, it is removed from the active configuration. Oracle Exadata Database Machine then performs a set of performance tests. If the problem with the disk is temporary and it passes the tests, then it is brought back into the configuration. If the disk does not pass the tests, then it is marked as poor performance, and an Auto Service Request (ASR) service request is opened to replace the disk. This feature applies to both hard disks and flash disks.
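The detection-and-confinement flow can be sketched as follows. The 30%-slower figure from the example above is used as an assumed detection threshold; the real heuristic is not documented here:

```python
def triage_disk(disk_latency_ms, median_latency_ms, passes_tests, slow_factor=1.3):
    """Confine a disk that is markedly slower than its peers, then either
    reinstate it or fail it with an ASR service request, per the text."""
    if disk_latency_ms <= median_latency_ms * slow_factor:
        return "active"
    # The disk is removed from the active configuration and tested.
    return "reinstated" if passes_tests else "poor performance; ASR opened"
```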

8.3.5 Oracle Database File System Support for Oracle Solaris

Oracle Database File System (DBFS) manages unstructured data in an Oracle database. Files in DBFS are stored in the database as SecureFiles and inherit their performance, scalability, security, and availability benefits, as well as rich functionality such as compression, deduplication, encryption, text search, and XQuery.

In earlier releases, DBFS was only available for Oracle Exadata Database Machine running Linux. With this release, DBFS is also supported on Oracle Exadata Database Machine running Oracle Solaris.

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for additional information about DBFS

8.3.6 Health Factor for Predictive Failed Disk Drop

When a hard disk enters predictive failure on Oracle Exadata Storage Server, Oracle Exadata System Software automatically triggers an Oracle Automatic Storage Management (Oracle ASM) rebalance to relocate data from the disk. The Oracle ASM rebalance first reads from healthy mirrors to restore redundancy. If all other mirrors are not available, then Oracle ASM rebalance reads the data from the predictively-failed disk. This diverts rebalance reads away from the predictively-failed disk when possible to ensure optimal rebalance progress while maintaining maximum data redundancy during the rebalance process.

Before the data is completely relocated to other healthy disks in the disk group, Oracle Exadata System Software notifies database instances of the poor health of the predictively-failed disk so that queries and smart scans for data on that disk will be diverted to other mirrors for better response time.

8.3.7 Hard Disk Drive Numbering in Servers

The drives in the Exadata Storage Server X3-2 Servers are numbered from left to right in each row. The drives in the bottom row are numbered 0, 1, 2, and 3. The drives in the middle row are numbered 4, 5, 6, and 7. The drives in the top row are numbered 8, 9, 10, and 11.

Figure 8-1 Disk Layout in Exadata Storage Server X3-2 Servers


The drives in the Exadata Storage Server with Sun Fire X4270 M2 Servers and earlier servers were numbered from the lower left to the top, such that the drives in the leftmost column were 0, 1, and 2. The drives in the next column were 3, 4, and 5. The drives in the next column were 6, 7, and 8. The drives in the rightmost column were 9, 10, and 11.

Figure 8-2 Disk Layout in Exadata Storage Server with Sun Fire X4270 M2 Servers


8.4 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.3.1)

The following are new for Oracle Exadata System Software 11g Release 2 (11.2.3.1):

8.4.1 Support for Oracle Solaris 11 (SRU2a)

This release supports Oracle Solaris 11 (SRU2a) on the database servers.

8.4.2 Linux Database Server Updates with Unbreakable Linux Network

Starting with Oracle Exadata Storage Server Software 11g Release 2 (11.2) release 11.2.3.1, the minimal pack is deprecated. The database server update procedure uses the Unbreakable Linux Network (ULN) for the distribution of updates, and the yum utility to apply the updates.

See Also:

Oracle Exadata Database Machine Maintenance Guide for information about updating the database servers

8.4.3 Oracle Enterprise Manager Cloud Control for Oracle Exadata Database Machine

Oracle Exadata Database Machine can be managed using Oracle Enterprise Manager Cloud Control. Oracle Enterprise Manager Cloud Control combines management of servers, operating systems, firmware, virtual machines, storage, and network fabrics into a single console.

8.4.4 I/O Resource Management Support for More Than 32 Databases

I/O Resource Management (IORM) now supports share-based plans, which can support up to 1024 databases, and up to 1024 directives for interdatabase plans. The share-based plans allocate resources based on shares instead of percentages. A share is a relative distribution of the I/O resources. In addition, the new default directive specifies the default value for all databases that are not explicitly named in the database plan.
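Share-based allocation reduces to a simple proportion: each database receives its share count divided by the total shares of all active databases. A minimal sketch (the default directive would supply the share count for unnamed databases):

```python
def io_fraction(shares):
    """Map each database to its fraction of I/O resources under a
    share-based plan: shares are relative, not percentages."""
    total = sum(shares.values())
    return {db: s / total for db, s in shares.items()}
```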

See Also:

Oracle Exadata System Software User's Guide for information about IORM

8.4.5 Oracle Database 11g Release 2 (11.2.0.3)

The Smart Scan in Oracle Exadata System Software 11g Release 2 (11.2) release 11.2.3.n is based on the technology present in Oracle Database software 11g Release 2 (11.2) release 11.2.0.3, and is backwards compatible with the 11.2.0.n releases of the database.

8.4.6 Exadata Cell Connection Limit

Oracle Database, Oracle ASM, Oracle Clusterware, and Oracle utilities perform I/O operations on Exadata Cells. In order for a process to perform I/O operations on an Exadata Cell, the process must first establish a connection to the cell. Once connected, a process remains connected until it terminates.

With this release, each Exadata Cell can support up to 60,000 simultaneous connections originating from one or more database servers. This implies that no more than 60,000 processes can simultaneously remain connected to a cell and perform I/O operations. The limit was 32,000 connections in release 11.2.2.4. Prior to release 11.2.2.4, the limit was 20,000 connections.

8.5 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.2.4)

The following is new for Oracle Exadata System Software 11g Release 2 (11.2.2.4):

8.5.1 Oracle Exadata Smart Flash Log

The time to commit user transactions is very sensitive to the latency of log writes. In addition, many performance-critical database algorithms, such as space management and index splits, are sensitive to log write latency. Oracle Exadata Storage Server Software speeds up log writes using battery-backed DRAM cache in the disk controller. Writes to the disk controller cache are normally very fast, but they can become slow during periods of high disk I/O. Oracle Exadata Smart Flash Log takes advantage of flash memory in Exadata Storage Server to accelerate log writes.

Flash memory has very good average write latency, but it has occasional slow outliers that are one to two orders of magnitude slower than the average. Oracle Exadata Smart Flash Log performs redo writes simultaneously to both flash memory and the disk controller cache, and completes the write when the first of the two completes. This improves the user transaction response time, and increases overall database throughput for I/O intensive workloads.

Oracle Exadata Smart Flash Log only uses Exadata flash storage for temporary storage of redo log data. By default, Oracle Exadata Smart Flash Log uses 32 MB on each flash disk, for a total of 512 MB across each Exadata Cell. It is automatically configured and enabled. No additional configuration is needed.
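Because the redo write completes when the first of the two destinations acknowledges it, the observed latency is simply the minimum of the two individual latencies. A one-line illustration of that race:

```python
def redo_write_latency(flash_ms, controller_cache_ms):
    """Smart Flash Log writes redo to flash and to the disk controller
    cache simultaneously; the write completes when the first one finishes."""
    return min(flash_ms, controller_cache_ms)
```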

8.6 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.2.3)

The following are new for Oracle Exadata System Software 11g Release 2 (11.2.2.3):

8.6.1 Oracle Solaris Operating System for Database Servers

The database servers in Oracle Exadata Database Machine have the Linux operating system and Oracle Solaris operating system. During initial configuration, choose the operating system for your environment. After selecting an operating system, you can reclaim the disk space used by the other operating system.

8.6.2 Exadata Secure Erase

Oracle Exadata System Software includes a method to securely erase and clean physical disks before redeployment. The ERASE option overwrites the existing content on the disks with one pass, three passes, or seven passes. The one-pass option overwrites content with zeros. The three-pass option follows recommendations from NNSA, and the seven-pass option follows recommendations from DOD.

The following table shows the approximate times needed to securely erase a drive using the supported algorithms.

Type of Drive           One Pass   Three Pass   Seven Pass

600 GB drive            1 hour     3 hours      7 hours

2 TB drive              5 hours    15 hours     35 hours

3 TB drive              7 hours    21 hours     49 hours

22.875 GB flash drive   NA         NA           21 minutes

93 GB flash drive       NA         NA           32 minutes

Note:

  • Oracle Exadata System Software secure data erase uses multiple over-writes of all accessible data. The over-writes use variations of data characters. This method of data erase is based on commonly known algorithms. Under rare conditions even a 7-pass erase may not remove all traces of data. For example, if a disk has internally remapped sectors, then some data may remain physically on the disk. This data will not be accessible using normal I/O interfaces.

  • Using tablespace encryption is another way to secure data.

8.6.3 Optimized Smart Scan

Oracle Exadata Storage Server Software detects resource bottlenecks on Exadata Storage Servers by monitoring CPU utilization. When a bottleneck is found, work is relocated to improve performance. Each Exadata Cell maintains the following statistics:

  • Exadata Cell CPU usage and push-back rate snapshots for the last 30 minutes.

  • Total number of 1 MB blocks for which a push-back decision was made.

  • Number of blocks that have been pushed back to the database servers.

  • The statistic Total cpu passthru output IO size in KB.

8.7 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2.1.2)

The following features are new for Oracle Exadata System Software 11g Release 2 (11.2.1.2):

8.7.1 Exadata Smart Flash Cache

Exadata Smart Flash Cache provides a caching mechanism for frequently-accessed data on each Exadata Cell. It is a write-through cache which is useful for absorbing repeated random reads, and very beneficial to online transaction processing (OLTP). It provides a mechanism to cache data in KEEP mode using database-side storage clause hints at a table or index level. The Exadata Smart Flash Cache area on flash disks is automatically created on Exadata Cells during start up.

Oracle Exadata Storage Servers are equipped with high-performance flash disks in addition to traditional rotational hard disks. These high-performance flash disks can be used to create Exadata grid disks to store frequently accessed data, but that approach requires accurate space planning and placement of the most active tablespaces on the premium disks. The recommended option is to dedicate all or part of the flash disk space to Exadata Smart Flash Cache. In this case, the most frequently accessed data on the spinning disks is automatically cached in the Exadata Smart Flash Cache area on the high-performance flash disks. When the database needs this data, Oracle Exadata Storage Server fetches it from Exadata Smart Flash Cache instead of from the slower rotational disks.

When a partition or a table is scanned by the database, Exadata Storage Server can fetch the data being scanned from the Exadata Smart Flash Cache if the object has the CELL_FLASH_CACHE attribute set. In addition to serving data from the Exadata Flash Cache, Exadata Storage Server also has the capability to fetch the object being scanned from hard disks.

The performance delivered by Exadata Storage Server is additive when it fetches scanned data from both Exadata Smart Flash Cache and hard disks: it can use the maximum Exadata Smart Flash Cache bandwidth and the maximum hard disk bandwidth concurrently while scanning an object, yielding the combined maximum bandwidth.

Oracle Database and Exadata Smart Flash Cache software work closely with each other. When the database sends a read or write request to Oracle Exadata Storage Server, it includes additional information in the request about whether the data is likely to be accessed again, and should be cached. For example, when writing data to a log file or to a mirrored copy, the database sends a hint to skip caching. When reading a table index, the database sends a hint to cache the data. This cooperation allows optimized usage of Exadata Smart Flash Cache space to store only the most frequently-accessed data.
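The cooperation described above can be sketched as a simple write-through cache that honors per-request caching hints. This is an illustrative model only; the class, hint flag, and LRU eviction policy below are assumptions for the sketch, not the actual Exadata implementation.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Toy write-through cache honoring per-request caching hints (illustrative only)."""

    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk            # backing store: dict of block -> data
        self.cache = OrderedDict()  # LRU order: oldest entry first

    def write(self, block, data, cache_hint=True):
        self.disk[block] = data             # write-through: disk is always updated
        if cache_hint:                      # e.g. table data: worth caching
            self._put(block, data)
        else:                               # e.g. log or mirror write: skip caching
            self.cache.pop(block, None)

    def read(self, block, cache_hint=True):
        if block in self.cache:
            self.cache.move_to_end(block)   # refresh LRU position
            return self.cache[block], "cache"
        data = self.disk[block]
        if cache_hint:                      # e.g. index read: cache for reuse
            self._put(block, data)
        return data, "disk"

    def _put(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
```

In this model, a log write issued with `cache_hint=False` reaches disk but never occupies cache space, which is the effect of the skip-caching hint described above.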

Users have additional control over which database objects, such as tablespace, tables, and so on, should be cached more aggressively than others, and which ones should not be cached at all. Control is provided by the new storage clause attribute, CELL_FLASH_CACHE, which can be assigned to a database object.

For example, to pin table CALLDETAIL in Exadata Smart Flash Cache one can use the following command:

ALTER TABLE calldetail STORAGE (CELL_FLASH_CACHE KEEP)

Exadata Storage Server caches data for the CALLDETAIL table more aggressively and tries to keep this data in Exadata Smart Flash Cache longer than cached data for other tables. If the CALLDETAIL table is spread across several Oracle Exadata Storage Servers, then each one caches its part of the table in its own Exadata Smart Flash Cache. If the caches are of sufficient size, then the CALLDETAIL table is likely to be completely cached over time.
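In addition to KEEP, the CELL_FLASH_CACHE attribute accepts DEFAULT (normal caching) and NONE (never cache). For example, to return the CALLDETAIL table to normal caching behavior, or to exclude a table from the cache entirely (the second table name is illustrative):

```sql
-- Revert CALLDETAIL to the default caching policy
ALTER TABLE calldetail STORAGE (CELL_FLASH_CACHE DEFAULT);

-- Exclude a table from Exadata Smart Flash Cache entirely
-- (calldetail_archive is an illustrative table name)
ALTER TABLE calldetail_archive STORAGE (CELL_FLASH_CACHE NONE);
```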

8.7.2 Hybrid Columnar Compression

Exadata Hybrid Columnar Compression offers higher compression levels for direct path loaded data. This new compression capability is recommended for data that is not updated frequently. You can specify Hybrid Columnar Compression at the partition, table, and tablespace level. You can also specify the desired level of compression, to achieve the proper trade-off between disk usage and CPU overhead. Included is a compression advisor that helps you determine the proper compression levels for your application.
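For example, a compression level can be declared when the table is created, or applied to existing data with a table move. The table and column names below are illustrative; the COMPRESS FOR clauses are the Oracle Database 11g Release 2 syntax:

```sql
-- Warehouse (query) compression: balance of space savings and scan speed
CREATE TABLE sales_history (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER
) COMPRESS FOR QUERY HIGH;

-- Archive compression: maximum space savings for rarely accessed data,
-- at a higher CPU cost to decompress
ALTER TABLE sales_history MOVE COMPRESS FOR ARCHIVE LOW;
```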

This feature allows the database to reduce the number of I/Os to scan a table. For example, if you compress data 10 to 1, then the I/Os are reduced 10 to 1 as well. In addition, Hybrid Columnar Compression saves disk space by the same amount.

This feature also allows the database to offload Smart Scans for a column-compressed table to Oracle Exadata Storage Servers. When a scan is done on a compressed table, Oracle Exadata Storage Server reads the compressed blocks from the disks for the scan. Oracle Exadata System Software then decompresses the referenced columns, does predicate evaluation of the data, and applies the filter. The storage server then sends back qualifying data in an uncompressed format. Without this offload, data decompression would take place on the database server. Having Oracle Exadata Storage Server decompress the data results in significant CPU savings on the database server.

See Also:

Oracle Exadata System Software User's Guide for information about Hybrid Columnar Compression

8.7.3 Storage Index

Storage indexes are a very powerful capability provided in Oracle Exadata System Software that help avoid I/O operations. Oracle Exadata System Software creates and maintains a storage index in Exadata memory. The storage index keeps track of minimum and maximum values of columns per storage region for tables stored on that cell. This functionality is done transparently, and does not require any administration by the user.

When a query specifies a WHERE clause, Oracle Exadata System Software examines the storage index to determine whether rows with the specified column value exist in a region of disk in the cell by comparing the column value to the minimum and maximum values maintained in the storage index. If the column value is outside the minimum and maximum range, then scan I/O in that region for that query is avoided. Many SQL operations run dramatically faster because large numbers of I/O operations are automatically replaced by a few in-memory lookups. To minimize operational overhead, storage indexes are created and maintained transparently and automatically by Oracle Exadata System Software.

Storage indexes provide benefits for encrypted tablespaces. However, storage indexes do not maintain minimum and maximum values for encrypted columns.
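The pruning decision can be sketched as follows. This is a minimal model: the region structure, the equality-only predicate, and the in-memory layout are assumptions for illustration, not the actual cell implementation (which tracks per-column summaries for regions of storage transparently).

```python
class StorageRegion:
    """One storage region with a min/max summary for a single column."""
    def __init__(self, rows):
        self.rows = rows          # column values stored in this region
        self.min = min(rows)      # summaries maintained transparently by the cell
        self.max = max(rows)

def scan_equal(regions, value):
    """Return matching rows, skipping regions the summary proves cannot match."""
    matches, regions_read = [], 0
    for region in regions:
        if value < region.min or value > region.max:
            continue              # I/O avoided: value outside [min, max]
        regions_read += 1         # a real scan would issue disk I/O here
        matches.extend(r for r in region.rows if r == value)
    return matches, regions_read
```

With two regions holding values 1-100 and 101-200, a query for the value 150 reads only the second region; the first is eliminated by the in-memory min/max comparison.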

8.7.4 Smart Scan of Encrypted Data

Oracle Exadata System Software offloads decryption, and performs Smart Scans on encrypted tablespaces and encrypted columns. While the earlier release of Oracle Exadata System Software fully supported encrypted tablespaces and encrypted columns, it did not benefit from Exadata offload processing. For encrypted tablespaces, Oracle Exadata System Software can decrypt blocks and return the decrypted blocks to Oracle Database, or it can perform a smart scan, which returns rows and columns. When Oracle Exadata System Software performs the decryption instead of the database, there are significant CPU savings on the database server because the work is offloaded to Exadata Cells.

8.7.5 Interleaved Grid Disks

This feature is deprecated in Oracle Exadata System Software release 19.1.0.

Space for grid disks can be allocated in an interleaved manner. Grid disks that use this type of space allocation are referred to as interleaved grid disks. This method attempts to equalize the performance of the grid disks residing on the same cell disk rather than having the grid disks that occupy the outer tracks getting better performance at the expense of the grid disks on the inner tracks.

A cell disk is divided into two equal parts, the outer half (upper portion) and the inner half (lower portion). When a new grid disk is created, half of the grid disk space is allocated on the outer half of the cell disk, and the other half is allocated on the inner half of the cell disk. The upper portion of the grid disk starts at the first available outermost offset in the outer half, depending on the free or used space in the outer half, and the lower portion starts at the first available outermost offset in the inner half.

For example, if cell disk, CD_01_cell01 is completely empty and has 100 GB of space, and a grid disk, data_CD_01_cell01, is created and sized to 50 GB on the cell disk, then the cell disk would have the following layout:

- Outer portion of data_CD_01_cell01 - 25GB
- Free space - 25GB
------------ Middle Point ------------------
- Inner portion of data_CD_01_cell01 - 25GB
- Free space - 25GB
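The split allocation in the example above can be sketched numerically. The function below is an illustrative model with simple first-fit placement; the offsets are fractions of the cell disk, not the actual on-disk layout computed by the software.

```python
def allocate_interleaved(cell_disk_size, existing, grid_disk_size):
    """Place half of a new grid disk in the outer half of the cell disk and
    half in the inner half (illustrative model of interleaved allocation).

    `existing` lists sizes of grid disks already allocated (each split the same
    way), so the new grid disk starts at the first free outermost offset of
    each half. Returns ((outer_start, outer_end), (inner_start, inner_end)).
    """
    half = cell_disk_size / 2
    used_per_half = sum(existing) / 2      # each prior grid disk used half per half
    portion = grid_disk_size / 2
    if used_per_half + portion > half:
        raise ValueError("not enough free space in each half of the cell disk")
    outer = (used_per_half, used_per_half + portion)
    inner = (half + used_per_half, half + used_per_half + portion)
    return outer, inner
```

For the 100 GB cell disk and 50 GB grid disk in the example, `allocate_interleaved(100, [], 50)` places 25 GB at offsets 0-25 (outer half) and 25 GB at offsets 50-75 (inner half), leaving 25 GB free in each half, matching the layout shown above.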

See Also:

Oracle Exadata System Software User's Guide for information about grid disks

8.7.6 Data Mining Scoring Offload

Oracle Exadata System Software now offloads data mining model scoring, making a data warehouse deployed on Oracle Exadata Storage Servers a more performant data analysis platform. All data mining scoring functions, such as PREDICTION_PROBABILITY, are offloaded to Oracle Exadata Storage Servers for processing. This accelerates warehouse analysis while reducing database server CPU consumption and the I/O load between the Oracle Exadata Database Server and Oracle Exadata Storage Server.

8.7.7 Enhanced Manageability Features

Oracle Exadata Storage Server Software now includes the following manageability features:

  • Automatic addition of replacement disk to the disk group: All the required Exadata operations to re-create the disk groups, and add the grid disks back to the original disk group are now performed automatically when a replacement disk is inserted after a physical disk failure.

  • Automatic cell restart: Grid disks are automatically changed to online when a cell recovers from a failure, or after a restart.

  • Support for OCR and voting disks on ASM disk groups: In Oracle Database 11g Release 2 (11.2), Oracle Cluster Registry (OCR) and voting disks are supported on ASM disk groups, and the iSCSI partitions are no longer needed.

  • Support for up to four dual-port InfiniBand Host Channel Adapters in the database server. This feature enables larger Oracle Exadata Database Machine X2-8 Full Rack servers to be used as database servers using Oracle Exadata Storage Server Software.

8.8 What's New in Oracle Exadata Database Machine 11g Release 2 (11.2)

The following is new for Oracle Exadata System Software 11g Release 2 (11.2):

8.8.1 Expanded Content in the Guide

This release of Oracle Exadata Database Machine System Overview includes maintenance procedures, cabling information, site planning checklists, and so on. This guide is the main reference book for Oracle Exadata Database Machine.