1 Introduction to the Plug-in

This chapter provides a general overview of the Oracle Exadata plug-in, including supported hardware and software. The following topics are discussed:

Oracle Exadata Plug-in Features

Highlights of the Oracle Exadata plug-in release 13.3.2.0.0 include the following features:

Monitoring and Notification Features

With the Oracle Exadata plug-in and the related Systems Infrastructure and Virtual Infrastructure plug-ins, you can monitor Exadata targets through Enterprise Manager Cloud Control 13c. These plug-ins provide seamless integration with supported Exadata hardware and software so that you can receive notifications on any Exadata target. Features include:

  • Monitoring of the Exadata Database Machine as an Enterprise Manager target.

  • Monitoring of the Exadata target, including the Exadata Storage Server's I/O Resource Management feature within Enterprise Manager.

  • Support for SNMP notifications for Exadata Database Machine components.

  • Support for dashboard report creation from Enterprise Manager Cloud Control, including simplified configuration of the service dashboard.

  • Support for client network host names for compute nodes.

  • Enhanced InfiniBand network fault detection and InfiniBand schematic port state reporting.

  • Modification of Enterprise Manager monitoring agents as needed for all Exadata Database Machine components.

  • IORM for multi-tenancy database (CDB/PDB) environment:

    • CDB-level I/O Workload Summary with PDB-level details breakdown.

    • I/O Resource Management for Oracle Database 12c and above.

    • Exadata Database Machine-level physical visualization of I/O Utilization for CDB and PDB on each Exadata Storage Server.

    • Integration link to Database Resource Management UI.

  • Support for discovery of locked-down storage servers.

  • As of Enterprise Manager 13c, the Exadata plug-in leverages the enhanced hardware monitoring features of the Oracle Systems Infrastructure plug-in. You can view a photo-realistic schematic of the hardware (including the rack) and monitor faults on individual hardware components.

Hardware Support Features

You can use the Oracle Exadata plug-in to optimize the performance of a wide variety of Exadata targets, including:

  • Oracle SuperCluster, including:

    • Configurations:

      • LDOM: Control domain, IO/guest domain

      • Zone: Global, non-global

    • Discover, monitor, and manage Exadata Database Machine-related components residing on the SuperCluster engineered system.

    • See Oracle SuperCluster Support for more details.

  • Multi-Rack support:

    • Supports discovery use cases: Initial discovery, add a rack

    • Side-by-side rack schematic

  • Support for Storage Expansion Rack hardware.

  • Full partition support:

    • Logical splitting of an Exadata Database Machine Rack into multiple Database Machines.

    • Each partition is defined through a single OneCommand deployment.

    • Compute nodes are not shared between partitions.

    • Multiple partitions connected through the same InfiniBand network.

    • Compute nodes in the same partition share the same cluster.

    • Ability to specify a customized DBM name during discovery of the target.

    • Users can confirm and select individual components for each DBM.

    • Flexibility to select none, some, or all of the InfiniBand switches as part of the monitored network, including the ability to add switches after discovery.

    • Flexibility to select some or all of the cells to be shared among Exadata Database Machines.

  • Support for the increasing types of Exadata Database Machine targets. See Oracle Exadata Database Machine Supported Hardware and Software for a complete list of supported hardware.

  • InfiniBand Switch Sensor fault detection, including power supply unit sensors and fan presence sensors.

Target Discovery Features

The target discovery process is streamlined and simplified with the Oracle Exadata plug-in. Features include:

  • Automatic push of the Exadata plug-in to the agent during discovery.

  • Updated discovery prerequisite checks, including:

    • Check for critical configuration requirements.

    • Check to ensure that either the databasemachine.xml or catalog.xml file exists and is readable (a minimal sketch of such a check appears after this list).

    • Prevent discovered targets from being rediscovered.

  • Credential validation and named credential support.

  • Ability to apply a custom name to the Exadata target.

  • Support for discovery using the client access network.

  • Automated SNMP notification setup for Database Machine components.

  • Support for discovery of compute nodes with client network host names.

  • Support for discovery using the new catalog.xml file generated by the OEDA Java-based Configurator.

  • Support for discovery of locked-down storage servers.

  • Enterprise Manager Cloud Control Exadata Discovery Wizard lets you discover Exadata Database Machine targets using 13c.

  • An existing Exadata Database Machine target with 12c target types can be converted to 13c target types. For more information, see Convert Database Machine Targets with 12c Target Types to 13c.
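
As a hedged illustration of the file prerequisite check mentioned in the list above, the following Python sketch verifies that a databasemachine.xml or catalog.xml file exists and is readable. The directory is an assumption based on the catalog.xml location cited later in this chapter (under Oracle SuperCluster Known Issues); it is not the plug-in's actual implementation.

    #!/usr/bin/env python3
    # Hypothetical illustration of the discovery file prerequisite: confirm that
    # databasemachine.xml or catalog.xml exists and is readable. The directory is
    # an assumption based on the catalog.xml path cited later in this chapter.
    import os

    CANDIDATES = [
        "/opt/oracle.SupportTools/onecommand/databasemachine.xml",
        "/opt/oracle.SupportTools/onecommand/catalog.xml",
    ]

    def find_readable_config(paths=CANDIDATES):
        """Return the first existing, readable configuration file, or None."""
        for path in paths:
            if os.path.isfile(path) and os.access(path, os.R_OK):
                return path
        return None

    if __name__ == "__main__":
        config = find_readable_config()
        if config:
            print("Discovery prerequisite met:", config, "is readable.")
        else:
            print("Neither databasemachine.xml nor catalog.xml was found or is readable.")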

Note:

Exadata Database Machine targets are configured with out-of-box default thresholds for the metrics. No additional monitoring template is provided by Oracle.

Exadata Storage Server Grid Home Page and Server Home Page Features

The Exadata Storage Server Grid home page and Server home page provide the following features:

  • Provides a fine-grained performance summary for flash and hard disk.

  • Provides usage statistics to highlight flash cache and Smart Scan efficiency.

  • Provides a detailed summary of flash space usage.

  • Provides metrics for:

    • I/O time distribution by flash and hard disk.

    • IORM wait per database.

Exadata Performance Page Features

The Performance home page provides the following features:

  • Side-by-side comparison of flash and hard disk performance.

  • Performance comparison between multiple Exadata Storage Servers.

  • Performance utilization for flash and hard disk, to identify workloads reaching hardware limits.

  • Exadata Storage Server performance charts to help diagnose performance issues when I/O reaches hardware limits.

Exadata Metrics Features

Metrics reports are critical to manage your Oracle Exadata Database Machine effectively. With the metrics, you can determine where additional resources are needed, when peak usage times occur, and so forth.

  • Enhanced metric scalability in large environments, reducing timeouts by reducing the number of calls.

  • Reduced metric collection errors for the Exadata HCA metric. HCA port data collection is combined into a single call to reduce the chance of timeouts.

  • Reduced metric collection errors from the Exadata IORM Status metric. The metric was removed, and the user interface now uses the available configuration data.

Oracle Exadata Database Machine Supported Hardware and Software

The following sections describe the hardware and software supported by the Oracle Exadata plug-in:

Exadata Database Machine Configuration Support

Enterprise Manager Cloud Control 13c is supported on the following Exadata Database Machine configurations:

Note:

Unless otherwise noted, support is provided for all versions of Oracle Exadata plug-in Release 13.1.

Exadata Hardware and Software Support

For information on Enterprise Manager Plug-in requirements for supported Exadata Database Machine hardware and software, see Exadata Storage Software Versions Supported by the Oracle Enterprise Manager Exadata Plug-in (Doc ID 1626579.1).

Multi-Rack Support

Enterprise Manager supports managing multiple connected racks of Oracle Exadata Database Machine for the supported machine types listed above (see Exadata Hardware and Software Support). In addition, the following two rack types can be monitored only as part of a multi-rack configuration, because they cannot exist as standalone single racks:

  • Storage Expansion Rack

  • Compute Node Expansion Rack

Partitioned Support

The following partitioned configurations are supported:

  • Partitioned Exadata Database Machine - the logical splitting of a Database Machine Rack into multiple Database Machines. The partitioned Exadata Database Machine configuration must meet the following conditions to be fully supported by Enterprise Manager Cloud Control 13c:

    • Each partition is defined through a single OneCommand deployment.

    • Cells and compute nodes are not shared between partitions.

    • Multiple partitions are connected through the same InfiniBand network.

    • Compute nodes in the same partition share the same cluster.

    The expected behavior of a partitioned Exadata Database Machine includes:

    • The target names for the Exadata Database Machine, Exadata Grid, and InfiniBand Network will be generated automatically during discovery (for example, Database Machine dbm1.mydomain.com, Database Machine dbm1.mydomain.com_2, Database Machine dbm1.mydomain.com_3, etc.). However, users can change these target names at the last step of discovery.

    • All InfiniBand switches must be selected as part of the Exadata Database Machine target for every partition; InfiniBand switches are not added automatically to subsequent Exadata Database Machine targets of other partitions. The KVM, PDU, and Cisco switches can be individually selected for the Database Machine target of each partition.

    • Users can confirm and select individual components for each Database Machine.

Oracle SuperCluster Support

Only Oracle SuperCluster with software version 1.1, with DB Domain on Control LDOM-only environments, is supported. Earlier versions of Oracle SuperCluster can be made compatible by updating to the October 2012 QMU release. You can confirm this requirement by checking the version of the compmon package installed on the system (use either the pkg info compmon or pkg list compmon command). You must have at least the following minimum version of compmon installed (a version-check sketch follows the package FMRI):

pkg://exa-family/system/platform/exadata/compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z
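
The following Python sketch, offered purely as a hedged illustration and not as an Oracle-supplied tool, shows one way to compare the FMRI reported by pkg info compmon against the minimum version above. The parsing assumes the standard Solaris IPS FMRI format (name@version,build-branch:timestamp).

    #!/usr/bin/env python3
    # Hypothetical helper: verify that the installed compmon package meets the
    # minimum FMRI listed above. Assumes the standard Solaris IPS FMRI layout
    # pkg://publisher/name@version,build-branch:timestamp.
    import subprocess

    MINIMUM_FMRI = ("pkg://exa-family/system/platform/exadata/"
                    "compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z")

    def branch_version(fmri):
        """Extract the branch portion (after '-', before ':') as a tuple of ints."""
        version = fmri.split("@", 1)[1]                      # 0.5.11,5.11-0.1.0.11:...
        branch = version.split("-", 1)[1].split(":", 1)[0]   # 0.1.0.11
        return tuple(int(part) for part in branch.split("."))

    def installed_fmri():
        """Return the FMRI reported by 'pkg info compmon', or None if not installed."""
        output = subprocess.run(["pkg", "info", "compmon"],
                                capture_output=True, text=True).stdout
        for line in output.splitlines():
            if line.strip().startswith("FMRI:"):
                return line.split("FMRI:", 1)[1].strip()
        return None

    if __name__ == "__main__":
        fmri = installed_fmri()
        if fmri and branch_version(fmri) >= branch_version(MINIMUM_FMRI):
            print("compmon meets the minimum required version:", fmri)
        else:
            print("compmon is missing or older than the required minimum.")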

The following hardware configurations are supported:

  • Oracle SuperCluster:

    • T4-4

    • T5-8

    • M6-32

    • M7

    • M8

The following software configurations are supported:

  • LDOM

    • Control Domain

    • IO/Guest Domain

  • Zone

    • Global

    • Non-Global

The following software versions are supported:

  • Oracle SuperCluster V1.1

  • Oracle SuperCluster V1.0.1 + October QMU

Oracle SuperCluster Known Issues

The following known issues have been reported for the Oracle SuperCluster:

  • PAGE13 is empty in the /opt/oracle.SupportTools/onecommand/catalog.xml file. This issue prevents Enterprise Manager from displaying the schematic diagram on the Database Machine home page. (Bug 16719172)

    Workaround: Manually replace the PAGE13 section with the one listed below (a sketch for detecting the empty section follows the XML):

       <PAGE13>
          <RACKS>
             <RACK ID="0">
                <MACHINE TYPE="203"/>
                <ITEM ID="1">
                   <TYPE>ibs</TYPE>
                   <ULOC>1</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="2">
                   <TYPE>cell</TYPE>
                   <ULOC>2</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="3">
                   <TYPE>cell</TYPE>
                   <ULOC>4</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="4">
                   <TYPE>cell</TYPE>
                   <ULOC>6</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="5">
                   <TYPE>cell</TYPE>
                   <ULOC>8</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="6">
                   <TYPE>comp</TYPE>
                   <ULOC>10</ULOC>
                   <HEIGHT>8</HEIGHT>
                </ITEM>
                <ITEM ID="7">
                   <TYPE>ibl</TYPE>
                   <ULOC>18</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="8">
                   <TYPE>cisco</TYPE>
                   <ULOC>19</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="9">
                   <TYPE>zfs</TYPE>
                   <ULOC>20</ULOC>
                   <HEIGHT>4</HEIGHT>
                </ITEM>
                <ITEM ID="10">
                   <TYPE>ibl</TYPE>
                   <ULOC>24</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="11">
                   <TYPE>head</TYPE>
                   <ULOC>25</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="12">
                   <TYPE>head</TYPE>
                   <ULOC>26</ULOC>
                   <HEIGHT>1</HEIGHT>
                </ITEM>
                <ITEM ID="13">
                   <TYPE>comp</TYPE>
                   <ULOC>27</ULOC>
                   <HEIGHT>8</HEIGHT>
                </ITEM>
                <ITEM ID="14">
                   <TYPE>cell</TYPE>
                   <ULOC>35</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="15">
                   <TYPE>cell</TYPE>
                   <ULOC>37</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="16">
                   <TYPE>cell</TYPE>
                   <ULOC>39</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="17">
                   <TYPE>cell</TYPE>
                   <ULOC>41</ULOC>
                   <HEIGHT>2</HEIGHT>
                </ITEM>
                <ITEM ID="18">
                   <TYPE>pdu</TYPE>
                   <ULOC>0</ULOC>
                   <HEIGHT>0</HEIGHT>
                </ITEM>
                <ITEM ID="19">
                   <TYPE>pdu</TYPE>
                   <ULOC>0</ULOC>
                   <HEIGHT>0</HEIGHT>
                </ITEM>
             </RACK>
          </RACKS>
       </PAGE13>
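
    To check for the symptom before applying the workaround, the following Python sketch reports whether the PAGE13 section is missing or empty. It assumes only that catalog.xml is well-formed XML at the path cited above; it is illustrative only and not an Oracle utility.

    # Hypothetical check: report whether PAGE13 in catalog.xml is missing or empty,
    # which is the symptom described above (Bug 16719172).
    import xml.etree.ElementTree as ET

    CATALOG = "/opt/oracle.SupportTools/onecommand/catalog.xml"

    def page13_is_empty(path=CATALOG):
        root = ET.parse(path).getroot()
        # PAGE13 may be the document root or nested below it.
        page13 = root if root.tag == "PAGE13" else root.find(".//PAGE13")
        return page13 is None or len(list(page13)) == 0

    if __name__ == "__main__":
        if page13_is_empty():
            print("PAGE13 is empty; apply the workaround above.")
        else:
            print("PAGE13 is populated; the schematic diagram should render.")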
    
  • The Assert OK power sensor raises a critical alert in Enterprise Manager. (Bug 17445054)

    Note:

    This bug does not apply to X3-2 and X4-2 machines.

  • Wrong machine names in the databasemachine.xml file. When the database is installed in a local zone on an Oracle SuperCluster T5-8, the databasemachine.xml file ends up with the machine name of the global zone rather than that of the local zone in which the database is installed.

    Workaround: Manually edit the file to change the host names for the database nodes to the local zone names (an illustrative sketch follows the bug number below).

    (Bug 17582197)
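
    The following Python sketch illustrates one way to script that edit. The file path, global-zone host name, and local-zone host name are placeholders (assumptions, not values from this document); review the edited file before relying on it.

    # Hypothetical helper for the workaround above: swap the global-zone host name
    # for the local-zone host name in databasemachine.xml. All names and the path
    # are placeholders; verify the result before use.
    from pathlib import Path

    DBMACHINE_XML = Path("/opt/oracle.SupportTools/onecommand/databasemachine.xml")
    GLOBAL_ZONE_HOST = "globalzone.example.com"   # name currently in the file
    LOCAL_ZONE_HOST = "localzone.example.com"     # zone name that should be used

    def fix_hostnames(path=DBMACHINE_XML):
        original = path.read_text()
        updated = original.replace(GLOBAL_ZONE_HOST, LOCAL_ZONE_HOST)
        path.with_name(path.name + ".bak").write_text(original)   # keep a backup
        path.write_text(updated)
        return original != updated

    if __name__ == "__main__":
        changed = fix_hostnames()
        print("databasemachine.xml updated." if changed else "No matching host name found.")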

  • In Enterprise Manager, the Schematic & Resource Utilization report will display only one LDOM per server.

  • Enterprise Manager will not report hard disk predictive failures on compute nodes in an Oracle SuperCluster environment.

  • The prerequisite check script exadataDiscoveryPreCheck.pl that is bundled with Exadata plug-in 12.1.0.3.0 does not support the catalog.xml file. Download the latest exadataDiscoveryPreCheck.pl script from My Oracle Support.

  • On the Oracle SuperCluster Zone while deploying the Management Agent, the agent prerequisite check may fail with the following error:

    Note:

    The error can be ignored and you can continue with the installation of the Management Agent. (A sketch of the host name checks from the recommendation follows the error text.)

    @ During the agent install, the prereq check failed:
    @ 
    @ Performing check for CheckHostName
    @ Is the host name valid?
    @ Expected result: Should be a Valid Host Name.
    @ Actual Result: abc12345678
    @ Check complete. The overall result of this check is: Failed <<<<
    @ 
    @ Check complete: Failed <<<<
    @ Problem: The host name specified for the installation or retrieved from the
    @ system is incorrect.
    @ Recommendation: Ensure that your host name meets the following conditions:
    @ (1) Does NOT contain localhost.localdomain.
    @ (2) Does NOT contain any IP address.
    @ (3) Ensure that the /etc/hosts file has the host details in the following
    @ format.
    @ <IP address> <host.domain> <short hostname>
    @ 
    @ If you do not have the permission to edit the /etc/hosts file,
    @ then while invoking the installer pass the host name using the
    @ argument
    @ ORACLE_HOSTNAME.
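
    The three conditions in the recommendation can be checked locally before installation. The following Python sketch is illustrative only (the host name lookup and the /etc/hosts parsing are assumptions); it is not part of the Management Agent installer.

    # Illustrative check of the three host name conditions listed in the
    # recommendation above; not part of the Management Agent installer.
    import re
    import socket

    def hostname_looks_valid(hostname):
        """Conditions (1) and (2): no localhost.localdomain and not an IP address."""
        if "localhost.localdomain" in hostname:
            return False
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", hostname):
            return False
        return True

    def etc_hosts_has_entry(hostname, hosts_file="/etc/hosts"):
        """Condition (3): a line of the form '<IP address> <host.domain> <short hostname>'."""
        with open(hosts_file) as hosts:
            for line in hosts:
                fields = line.split("#", 1)[0].split()
                if len(fields) >= 3 and hostname in fields[1:]:
                    return True
        return False

    if __name__ == "__main__":
        name = socket.getfqdn()
        print("Host name:", name)
        print("Conditions (1) and (2):", "OK" if hostname_looks_valid(name) else "Failed")
        print("Condition (3), /etc/hosts entry:", "OK" if etc_hosts_has_entry(name) else "Failed")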

Supported Operating Systems

The following operating systems (on which the OMS and agent are installed) are supported by Oracle Exadata plug-in 13.3.2:

  • Management Server plug-in (all OMS-certified platforms):

    • IBM AIX on POWER Systems (64-bit)

    • HP-UX Itanium

    • Linux x86 and x86-64

    • Microsoft Windows x64 (64-bit)

    • Oracle Solaris on SPARC (64-bit)

    • Oracle Solaris on x86-64 (64-bit)

  • Agent plug-ins for Exadata and SuperCluster (Exadata plug-in + SI plug-in + VI plug-in for Exadata; Exadata plug-in + SI plug-in for SSC):

    • Linux x86-64

    • Oracle Solaris on x86-64 (64-bit)

    • Oracle Solaris on SPARC (64-bit)

Oracle Exadata Database Machine Hardware Not Supported

The following Oracle Exadata Database Machine hardware configurations are not supported for Enterprise Manager Cloud Control Exadata plug-in 13.x:

  • V1 hardware

  • V2 hardware

    Note:

    V2 machines discovered in Enterprise Manager Cloud Control 12c are still supported in 13c. However, discovery of V2 machines in Enterprise Manager Cloud Control 13c is not supported.