With the Oracle Exadata plug-in, you can monitor Exadata targets through Enterprise Manager Cloud Control 13c. The plug-in provides seamless integration with supported Exadata software so that you can receive notification on any Exadata target. Features include:
Monitoring of the Exadata Database Machine as an Enterprise Manager target.
Monitoring of the Exadata target, including the Exadata Cell, through Enterprise Manager's I/O Resource Management (IORM) feature.
Support for SNMP notification for Exadata Cells.
Support for dashboard report creation from Enterprise Manager Cloud Control, including simplified configuration of the service dashboard.
Support of client network hostnames for compute nodes.
Enhanced InfiniBand network fault detection and InfiniBand schematic port state reporting.
Modification of Enterprise Manager monitoring agents as needed for all Exadata Database Machine components.
IORM for multi-tenancy database (CDB/PDB) environment:
CDB-level I/O Workload Summary with PDB-level details breakdown.
I/O Resource Management for Oracle Database 12c.
Exadata Database Machine-level physical visualization of I/O Utilization for CDB and PDB on each Exadata Storage Server.
Integration link to Database Resource Management UI.
Support for discovery of locked-down storage servers.
You can use the Oracle Exadata plug-in to optimize the performance of a wide variety of Exadata targets, including:
Oracle SuperCluster, including:
Versions: SuperCluster V1.1, V1.0.1 + October Quarterly Maintenance Update (QMU)
Configurations:
LDOM: Control domain, IO/guest domain
Zone: Global, non-global
Discovery, monitoring, and management of Exadata Database Machine-related components residing on the SuperCluster engineered system.
See Oracle SuperCluster Support - Exadata Plug-in Release 12.1.0.4.0 and Later for more details.
Multi-Rack support:
Supports discovery use cases: initial discovery and adding a rack
Side-by-side rack schematic
Support for Storage Expansion Rack hardware.
Full partition support:
Logical splitting of an Exadata Database Machine Rack into multiple Database Machines.
Each partition is defined through a single OneCommand deployment.
Compute nodes are not shared between partitions.
Multiple partitions connected through the same InfiniBand network.
Compute nodes in same partition share the same Cluster.
Ability to specify a customized DBM name during discovery of the target.
User can confirm and select individual components for each DBM.
Flexibility to select "small-p" targets for individual partitions.
Flexibility to select some or all of the InfiniBand switches as part of the monitored network, including the ability to add switches after discovery.
Flexibility to select some or all of the Cells to be shared among Exadata Database Machines.
Support for the increasing types of Exadata Database Machine targets. See Oracle Exadata Database Machine Supported Hardware and Software for a complete list of supported hardware.
InfiniBand Switch Sensor fault detection, including power supply unit sensors and fan presence sensors.
Support for on-demand refresh of InfiniBand schematic.
Through the Oracle Enterprise Manager Cloud Control interface, you can use the Oracle Exadata plug-in to access Exadata Storage Software functionality to efficiently manage your Exadata hardware. Support includes:
Integration with Exadata Storage Software.
Support for the latest Exadata Storage Server software versions, 12.1.2.3.1 and earlier:
11.2.3.3.1 or earlier.
Plug-in releases 12.1.1.1.0 and later support Oracle Database 12c.
12.1.2.1.0.
12.1.2.1.1.
12.1.2.1.1 release and patch 2024004. See Doc ID 1959143.1 in My Oracle Support:
https://support.oracle.com/rs?type=doc&id=1959143.1
12.1.1.1.2.
12.1.1.1.2 release and patch 20699031. See Doc ID 1995937.1 in My Oracle Support:
https://support.oracle.com/rs?type=doc&id=1995937.1
12.1.2.2.0. See Doc ID 2038073.1 in My Oracle Support for details:
https://support.oracle.com/rs?type=doc&id=2038073.1
12.1.2.3.0. See Doc ID 2031447.1 in My Oracle Support for details:
https://support.oracle.com/rs?type=doc&id=2031447.1
Note:
For the latest supported configurations, refer to Doc ID 1626579.1 in My Oracle Support.

The target discovery process is streamlined and simplified with the Oracle Exadata plug-in. Features include:
Automatic push of the Exadata plug-in to the agent during discovery.
Discovery prerequisite check updates, including:
Check for critical configuration requirements.
Check to ensure that either the databasemachine.xml or catalog.xml file exists and is readable.
Prevent discovered targets from being rediscovered.
Credential validation and named credential support.
Ability to apply a custom name to the Exadata target.
Support enabled for discovery using the client access network.
Automated SNMP notification setup for Exadata Storage Servers and InfiniBand switches.
Support discovery of compute nodes with client network host names.
Support for discovery using the new catalog.xml file generated by the OEDA Java-based Configurator.
Additional credential validation and named credential support.
Support customization of Exadata Database Machine name.
Support for discovery of locked-down storage servers.
Note:
Exadata Database Machine targets are configured with OOB default thresholds for the metrics. No additional template is provided by Oracle.
The Exadata Storage Server Grid home page and Server home page provide the following features:
A fine-grained performance summary for flash and hard disk.
New usage statistics to highlight flash cache and Smart Scan efficiency.
A new, detailed summary of flash space usage.
New metrics for:
I/O time distribution by flash and hard disk.
IORM wait per database.
The Performance home page provides the following features:
Side-by-side comparison of flash and hard disk performance.
Performance comparison between multiple Exadata Storage Servers.
Performance utilization for flash and hard disk to identify workloads reaching hardware limits.
Exadata Storage Server performance charts to help diagnose performance issues when I/O reaches hardware limits.
Metrics reports are critical to manage your Oracle Exadata Database Machine effectively. With the metrics, you can determine where additional resources are needed, when peak usage times occur, and so forth.
Enhanced metric scalability in large environments, reducing timeouts by reducing the number of cellcli calls.
Reduced metric collection errors for the Exadata HCA metric: HCA port data collection is combined into a single cellcli call to reduce the chance of timeouts.
Reduced metric collection errors from the Exadata IORM Status metric: the metric was removed, and the user interface now uses the available configuration data.
Enterprise Manager Cloud Control 13c is supported on the following Exadata Database Machine configurations:
Note:
Unless otherwise noted, support is provided for all versions of Oracle Exadata plug-in Release 13.1.
The following Exadata Database Machine types are supported:
V2
Note:
V2 machines discovered in Enterprise Manager Cloud Control 12c are still supported in 13c. However, discovery of V2 machines in Enterprise Manager Cloud Control 13c is not supported.
X2
X2-2: Full rack, half rack, and quarter rack
X2-8: Full rack
X3
X3-2: Full rack, half rack, quarter rack, and eighth rack (requires release 12.1.0.3 or higher)
X3-8: Full rack
X4
X4-2
X5
X5-8
X6
X6-2
X6-8
Oracle SuperCluster, including (requires release 12.1.0.4 or higher):
Support for Oracle SuperCluster V1.1 on LDOM and Zone (Global & Non-Global)
Support for Oracle SuperCluster V1.0.1 with October QMU on LDOM and Zone
Enterprise Manager supports managing multiple connected racks of Oracle Database Machines of the supported machine types listed above (Supported Exadata Database Machine Types). In addition, the following two racks can be monitored only as part of a multi-rack configuration, because they cannot exist as standalone single racks:
Storage Expansion Rack (requires release 12.1.0.4 or higher)
Compute Node Expansion Rack (requires release 12.1.0.4 or higher)
The following partitioned configurations are supported:
Partitioned Exadata Database Machine - the logical splitting of a Database Machine Rack into multiple Database Machines. The partitioned Exadata Database Machine configuration must meet the following conditions to be fully supported by Enterprise Manager Cloud Control 13c:
Each partition is defined through a single OneCommand deployment.
Cells and compute nodes are not shared between partitions.
Multiple partitions are connected through the same InfiniBand network.
Compute nodes in same partition share the same Cluster.
The expected behavior of a partitioned Exadata Database Machine includes:
For Oracle Exadata plug-in Release 12.1.0.3.0 and later:
The target names for the Exadata Database Machine, Exadata Grid, and InfiniBand Network will be generated automatically during discovery (for example, Database Machine dbm1.mydomain.com, Database Machine dbm1.mydomain.com_2, Database Machine dbm1.mydomain.com_3, and so on). However, users can change these target names at the last step of discovery.
All InfiniBand switches in the Exadata Database Machine must be selected during discovery of the first Database Machine partition. They will be included in all subsequent Database Machine targets of the other partitions. The KVM, PDU, and Cisco switches can be individually selected for the DB Machine target of each partition.
User can confirm and select individual components for each Database Machine.
For Oracle Exadata plug-in Release 12.1.0.2.0 and earlier:
The target names for the Exadata Database Machine, Exadata Grid, and InfiniBand Network will be generated automatically during discovery (for example, Database Machine dbm1.mydomain.com, Database Machine dbm1.mydomain.com_2, Database Machine dbm1.mydomain.com_3, and so on). Users cannot specify these target names.
All shared components (such as KVM, PDU, Cisco switches, and InfiniBand switches) must be selected during discovery of the first Database Machine partition. They will be included in all subsequent Database Machine targets of the other partitions.
User can confirm and select individual components for each Database Machine.
Only Oracle SuperCluster with software Version 1.1 with DB Domain on Control LDOM-only environments is supported. Earlier versions of Oracle SuperCluster can be made compatible if you update to the October 2012 QMU release. You can confirm this requirement by checking the version of the compmon package installed on the system (using either the pkg info compmon or pkg list compmon command). You must have at least the following version of compmon installed:
pkg://exa-family/system/platform/exadata/compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z
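As an illustration (not part of the product), the compmon FMRI above can be compared against an installed version programmatically. The sketch below assumes the standard IPS FMRI layout (release,build-branch:timestamp); the helper names are hypothetical:

```python
import re

# Documented minimum compmon version (from the text above).
MINIMUM_FMRI = ("pkg://exa-family/system/platform/exadata/"
                "compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z")

def parse_fmri_version(fmri):
    """Extract (branch_tuple, timestamp) from an IPS FMRI such as
    ...compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z."""
    m = re.search(r"@[\d.]+,[\d.]+-([\d.]+):([0-9TZ]+)$", fmri)
    if not m:
        raise ValueError("unrecognized FMRI: " + fmri)
    branch = tuple(int(p) for p in m.group(1).split("."))
    # ISO-style timestamps compare correctly as strings.
    return branch, m.group(2)

def compmon_ok(installed_fmri):
    """True if the installed compmon meets the documented minimum."""
    return parse_fmri_version(installed_fmri) >= parse_fmri_version(MINIMUM_FMRI)
```

In practice the installed FMRI would come from the output of pkg info compmon or pkg list compmon; the comparison treats the branch version as dominant and falls back to the timestamp.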
The following hardware configurations are supported:
Oracle SuperCluster:
T5-8 server
T4-4 server
The following software configurations are supported:
LDOM
Control Domain
IO/Guest Domain
Zone
Global
Non-Global
The following software versions are supported:
Oracle SuperCluster V1.1
Oracle SuperCluster V1.0.1 + October QMU
The following issues have been fixed for the 12.1.0.4.0 release of the Oracle Exadata plug-in:
Exadata Storage Server monitoring: If multiple DB clusters share the same Exadata Storage Server in one Enterprise Manager management server environment, you can discover and monitor the first DB Machine target and all of its components. For additional DB Machine targets sharing the same Exadata Storage Server, the Oracle Exadata Storage Server Grid system and the Oracle Database Exadata Storage Server System now show all Exadata Storage Server members.
This issue reported in the plug-in's 12.1.0.3.0 release has been fixed in 12.1.0.4.0.
HCA port error monitoring: If the perfquery command installed on the Oracle SuperCluster is version 1.5.8 or later, a bug (ID 15919339) caused most columns of the HCA Port Errors metric in the host targets for the compute nodes to be blank. Errors occurring on the HCA ports are now reported in Enterprise Manager.
To check your version, run the following command:
$ perfquery -V
This issue reported in the plug-in's 12.1.0.3.0 release has been fixed in 12.1.0.4.0.
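To tell whether a given compute node was in the affected range before this fix, the perfquery version can be checked mechanically. A minimal sketch, assuming the version appears as the first x.y.z triple in the output of `perfquery -V` (the exact output format varies by build):

```python
import re

def perfquery_in_affected_range(version_output):
    """Return True if the reported perfquery version is 1.5.8 or later,
    the range in which bug 15919339 left the HCA Port Errors metric
    columns blank (fixed in plug-in release 12.1.0.4.0)."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        raise ValueError("no version number in: %r" % version_output)
    return tuple(int(g) for g in m.groups()) >= (1, 5, 8)
```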
PAGE13 is empty in the /opt/oracle.SupportTools/onecommand/catalog.xml file. This issue prevents Enterprise Manager from displaying the schematic diagram on the Database Machine home page. (Bug 16719172)
Workaround: Manually replace the PAGE13 section by the one listed below:
<PAGE13>
  <RACKS>
    <RACK ID="0">
      <MACHINE TYPE="203"/>
      <ITEM ID="1"> <TYPE>ibs</TYPE> <ULOC>1</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="2"> <TYPE>cell</TYPE> <ULOC>2</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="3"> <TYPE>cell</TYPE> <ULOC>4</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="4"> <TYPE>cell</TYPE> <ULOC>6</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="5"> <TYPE>cell</TYPE> <ULOC>8</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="6"> <TYPE>comp</TYPE> <ULOC>10</ULOC> <HEIGHT>8</HEIGHT> </ITEM>
      <ITEM ID="7"> <TYPE>ibl</TYPE> <ULOC>18</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="8"> <TYPE>cisco</TYPE> <ULOC>19</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="9"> <TYPE>zfs</TYPE> <ULOC>20</ULOC> <HEIGHT>4</HEIGHT> </ITEM>
      <ITEM ID="10"> <TYPE>ibl</TYPE> <ULOC>24</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="11"> <TYPE>head</TYPE> <ULOC>25</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="12"> <TYPE>head</TYPE> <ULOC>26</ULOC> <HEIGHT>1</HEIGHT> </ITEM>
      <ITEM ID="13"> <TYPE>comp</TYPE> <ULOC>27</ULOC> <HEIGHT>8</HEIGHT> </ITEM>
      <ITEM ID="14"> <TYPE>cell</TYPE> <ULOC>35</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="15"> <TYPE>cell</TYPE> <ULOC>37</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="16"> <TYPE>cell</TYPE> <ULOC>39</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="17"> <TYPE>cell</TYPE> <ULOC>41</ULOC> <HEIGHT>2</HEIGHT> </ITEM>
      <ITEM ID="18"> <TYPE>pdu</TYPE> <ULOC>0</ULOC> <HEIGHT>0</HEIGHT> </ITEM>
      <ITEM ID="19"> <TYPE>pdu</TYPE> <ULOC>0</ULOC> <HEIGHT>0</HEIGHT> </ITEM>
    </RACK>
  </RACKS>
</PAGE13>
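Before pasting a hand-edited PAGE13 section back into catalog.xml, it can be worth verifying that the section is well-formed and that no two items claim the same rack unit. The helper below is a hypothetical sketch, not an Oracle tool; it assumes the element layout shown above, where PDUs are recorded with ULOC and HEIGHT of 0:

```python
import xml.etree.ElementTree as ET

def check_page13(xml_text):
    """Parse a PAGE13 section and return the sorted list of occupied
    rack units, raising if any unit is claimed by two items."""
    root = ET.fromstring(xml_text)
    occupied = {}
    for item in root.iter("ITEM"):
        uloc = int(item.findtext("ULOC"))
        height = int(item.findtext("HEIGHT"))
        # PDUs use ULOC 0 / HEIGHT 0 and occupy no rack units.
        for unit in range(uloc, uloc + height):
            if unit in occupied:
                raise ValueError("rack unit %d claimed twice" % unit)
            occupied[unit] = item.get("ID")
    return sorted(occupied)
```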
The Assert OK power sensor raises a critical alert in Enterprise Manager. (Bug 17445054)
Note:
This bug does not apply to X3-2 and X4-2 machines.
Wrong machine names in the databasemachine.xml file: When the database is installed in a local zone on an Oracle SuperCluster T5-8, the databasemachine.xml file ends up with the machine name of the global zone rather than that of the local zone in which the database is installed. (Bug 17582197)
Workaround: Manually edit the file to change the hostnames for the database nodes to the zone names.
In Enterprise Manager, the Schematic & Resource Utilization report will display only one LDOM per server.
Enterprise Manager will not report hard disk predictive failure on compute node in an Oracle SuperCluster environment.
The prerequisite check script exadataDiscoveryPreCheck.pl that is bundled with Exadata plug-in 12.1.0.3.0 does not support the catalog.xml file. Download the latest exadataDiscoveryPreCheck.pl file from My Oracle Support as described in Download the Discovery Precheck Script.
On the Oracle SuperCluster Zone while deploying the Management Agent, the agent prerequisite check may fail with the following error:
Note:
The error can be ignored and you can continue to proceed with installation of the Management Agent.
During the agent install, the prereq check failed:

Performing check for CheckHostName
Is the host name valid?
Expected result: Should be a Valid Host Name.
Actual Result: abc12345678
Check complete. The overall result of this check is: Failed <<<<

Check complete: Failed <<<<
Problem: The host name specified for the installation or retrieved from the system is incorrect.
Recommendation: Ensure that your host name meets the following conditions:
(1) Does NOT contain localhost.localdomain.
(2) Does NOT contain any IP address.
(3) Ensure that the /etc/hosts file has the host details in the following format:
<IP address> <host.domain> <short hostname>

If you do not have the permission to edit the /etc/hosts file, then while invoking the installer pass the host name using the argument ORACLE_HOSTNAME.
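The first two recommendation points can be checked mechanically before launching the installer. A minimal sketch, assuming only the conditions quoted above; the function name is hypothetical, and the /etc/hosts format check (condition 3) is environment-specific and omitted:

```python
import re

def hostname_acceptable(hostname):
    """Mirror conditions (1) and (2) of the CheckHostName
    recommendation: the name must not contain localhost.localdomain
    and must not be a bare IP address."""
    if "localhost.localdomain" in hostname:
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", hostname):
        return False
    return True
```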
Table 1-1 shows the component versions supported by the Oracle Exadata plug-in Release 12.1:
Table 1-1 Supported Component Versions
| Component | Firmware/Software Versions | Exadata Plug-in Version Support |
|---|---|---|
| Storage Server | 12.1.2.3.1, 12.1.2.3.0, 12.1.2.2.2 or below | 13.1.0.0.0, 12.1.0.6.0. Note: EM 12c PS3 is tested with Exadata Server Software 12.1.2.2.0. SSH lockdown is not a supported configuration under 12c PS3/PS4; however, it is certified to work with 13c. |
| Storage Server | 11.2.3.3.0, 12.1.1.1.0 | 12.1.0.6.0 and later; 12.1.0.5.0 with the following patches applied (not cell related) |
| Storage Server | 11.2.3.2.1.130109 | 12.1.0.4.0 and later |
| Storage Server | 11.2.3.2.0.120713 | 12.1.0.3.0 and later |
| Storage Server | 11.2.2.3.0, 11.2.2.3.2, 11.2.2.3.5, 11.2.2.4, 11.2.3.1.0, 11.2.3.1.1 | All versions |
| InfiniBand Switch | 2.1.3-4; 2.0.6-1 (for SuperCluster) | 12.1.0.4.0 and later |
| InfiniBand Switch | 1.1.3-2, 1.3.3-2 | All versions |
| Integrated Lights Out Manager (ILOM) | 3.1 or later | 13.1 and later |
| Integrated Lights Out Manager (ILOM) | v3.0.16.15.a r73751 or later (V2); v3.0.16.10.d or later (X2); v3.0.16.20.b or later (X2-8); v3.0.16.20.b or later (X3-8); v3.1.2.10 r74387 or later (X3-2) | 12.1.0.4.0 and later |
| Integrated Lights Out Manager (ILOM) | v3.0.9.19.c r63792 | All versions |
| ILOM ipmitool | Linux: 1.8.10.3 or later; Oracle Solaris: 1.8.10.4 or later | All versions |
| Power Distribution Unit (PDU) | 1.04 or later | 13.1 and later |
| Power Distribution Unit (PDU) | 1.05, 1.06 | 12.1.0.4.0 and later |
| Power Distribution Unit (PDU) | 1.01; 1.02 (default version after reimage); 1.04 | All versions |
| Avocent MergePoint Unity KVM Switch | Application: 1.2.8.14896; Boot: 1.4.14359 | All versions |
| Cisco Switch | 15.1(2)SG2 | 12.1.0.6.0 and later |
| Cisco Switch | 12.2(31)SGA9 | All versions |
The following operating systems (on which the OMS and agent are installed) are supported by the Oracle Exadata plug-in 13.2:
Management Server plug-in (all OMS-certified platforms):
IBM AIX on POWER Systems (64-bit)
HP-UX Itanium
Linux x86 and x86-64
Microsoft Windows x64 (64-bit)
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)
Agent plug-in:
Linux x86-64
Oracle Solaris on x86-64 (64-bit)
Oracle Solaris on SPARC (64-bit)
The following Oracle Exadata Database Machine hardware configurations are not supported for Enterprise Manager Cloud Control Exadata plug-in 13.x:
V1 hardware
V2 hardware
Note:
V2 machines discovered in Enterprise Manager Cloud Control 12c are still supported in 13c. However, discovery of V2 machines in Enterprise Manager Cloud Control 13c is not supported.