6 Virtualized Exadata Database Machine

This chapter describes how to manage and monitor a virtualized Oracle Exadata Database Machine, in which database (DB) clusters run on Oracle VM (OVM) for x86 alongside Oracle Exadata Storage Servers, using Exadata plug-in release 12.1.0.6 and later.

The Exadata plug-in discovers, manages, and monitors a virtualized Exadata Database Machine in conjunction with the Virtualization Infrastructure plug-in. For details about that plug-in, see the "Direct Monitoring of Xen Based Systems" chapter of the Oracle® Enterprise Manager Cloud Administration Guide:

http://docs.oracle.com/cd/E24628_01/doc.121/e28814/direct_monitor_cs.htm#EMCLO531

The following sections describe how to discover a virtualized Exadata Database Machine and other supported targets.

Once you have completed the discovery of a virtualized Exadata Database Machine, continue with the configuration steps outlined in Post-Discovery Configuration and Verification.

6.1 Integration with Virtualization Infrastructure Plug-in

The physical server (physical Oracle Server target), Dom0 (Virtual Platform target), and DomU (virtual Oracle Server target) are discovered and monitored by the Virtualization Infrastructure (VI) plug-in.

Exadata discovery works with the VI plug-in as follows: the physical server, Dom0, and DomU are discovered by the VI plug-in before Exadata Database Machine discovery. During Exadata discovery, the discovery code looks up the existing Virtual Platform target that corresponds to the Dom0 of each compute node.

The Exadata discovery flow with VI plug-in integration includes the following checks:

  1. Check whether the Exadata Database Machine is virtualized, based on the configuration metric of the host target of the discovery agent (a minimal local check of this kind is sketched after this list).

  2. Check whether the VI plug-in is deployed. If not, you will be prompted to deploy it as described in the "Direct Monitoring of Xen Based Systems" chapter of the Oracle® Enterprise Manager Cloud Administration Guide:

    http://docs.oracle.com/cd/E24628_01/doc.121/e28814/direct_monitor_cs.htm#EMCLO531
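
For reference, here is a minimal, illustrative sketch of the first check when run locally on a compute node. This is not the plug-in's configuration-metric logic; it is a hedged example that only reads the standard Xen interfaces (/sys/hypervisor/type and /proc/xen/capabilities) to report whether a host is a Dom0, a DomU, or not virtualized.

    #!/usr/bin/env python3
    """Illustrative check (not the plug-in's own logic): is this host a Xen Dom0 or DomU?"""

    def xen_role():
        # /sys/hypervisor/type reports "xen" on hosts running under the Xen hypervisor.
        try:
            with open("/sys/hypervisor/type") as f:
                hv_type = f.read().strip()
        except OSError:
            return "not virtualized (no hypervisor interface found)"
        if hv_type != "xen":
            return "virtualized, but not Xen (%s)" % hv_type
        # In Dom0, /proc/xen/capabilities advertises the control domain ("control_d").
        try:
            with open("/proc/xen/capabilities") as f:
                caps = f.read()
        except OSError:
            caps = ""
        return "Dom0 (control domain)" if "control_d" in caps else "DomU (guest)"

    if __name__ == "__main__":
        print(xen_role())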
    

6.2 Discovering Virtualized Exadata Database Machine

With virtualized Exadata, one Exadata Database Machine target is created for each physical Database Machine instead of one Database Machine target for each DB cluster deployed through Oracle Exadata Deployment Assistant (OEDA). Compute node, Exadata Storage Server, InfiniBand switch, compute node ILOM, PDU, KVM, and Cisco switch targets are discovered by the Exadata plug-in.

Because only the compute nodes are virtualized, the physical servers, Dom0, and DomU are monitored by the Virtualization Infrastructure (VI) plug-in. The Exadata plug-in integrates with the VI plug-in for discovery and target management as discussed below.

The hardware targets in virtualized Exadata are discovered in almost the same way as physical Exadata (see Discovering an Exadata Database Machine) except as noted below:

  • The Dom0s of the compute nodes are discovered using ibnetdiscover (a sketch of this kind of fabric check follows this list). The compute node ILOM-to-compute node mapping and the VM hierarchy are obtained from the VI plug-in.

  • Exadata Storage Servers are discovered using ibnetdiscover instead of kfod. Therefore, there is no need to specify the Database Oracle Home during discovery.

  • InfiniBand switches are discovered using ibnetdiscover.

  • Compute node ILOM, PDU, and Cisco switch are discovered based on the databasemachine.xml schematic file.

  • Cluster and Database Target Discovery

    • The cluster and database target discovery is similar to the physical Exadata case (see Discovering Grid Infrastructure and RAC). The only difference is that the Enterprise Manager agents need to be deployed on the DomUs of the database cluster before the cluster, ASM, and database targets can be discovered using the DB plug-in.

  • Agent Placement

    • The primary and backup Enterprise Manager agents monitoring the Exadata hardware targets should be deployed in two dedicated DomUs that will not be suspended or shut down, ideally on different physical servers, to ensure high availability.

    • For static virtual machine configurations, the Enterprise Manager agents that are used to monitor the Database clusters can be used to monitor the Exadata hardware.
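
As a point of reference, hardware discovery relies on the ibnetdiscover utility from the standard InfiniBand diagnostics package. The following is a minimal, hedged sketch (not part of the plug-in) that shells out to ibnetdiscover from a node on the fabric and lists the node descriptions it sees, so you can confirm that the storage servers and InfiniBand switches to be discovered are visible; the output parsing assumes the usual infiniband-diags format.

    #!/usr/bin/env python3
    """Illustrative sketch: list InfiniBand node descriptions visible to ibnetdiscover.

    Assumes the infiniband-diags package is installed, the host is attached to the
    Exadata InfiniBand fabric, and the script runs with sufficient privileges.
    """
    import subprocess

    def fabric_nodes():
        # ibnetdiscover emits one record per channel adapter ("Ca") and switch ("Switch");
        # the trailing comment on each record carries a quoted node description.
        out = subprocess.run(["ibnetdiscover"], capture_output=True, text=True, check=True).stdout
        nodes = set()
        for line in out.splitlines():
            stripped = line.strip()
            if stripped.startswith(("Ca", "Switch")) and "#" in stripped:
                comment = stripped.split("#", 1)[1]
                if '"' in comment:
                    nodes.add(comment.split('"')[1])   # e.g. a cell, compute node, or switch name
        return sorted(nodes)

    if __name__ == "__main__":
        for desc in fabric_nodes():
            print(desc)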

To discover a virtualized Exadata Database Machine:

  1. From the Setup menu, select Add Target then Add Targets Manually.
  2. On the Add Targets Manually page, select Add Using Guided Process (Figure 6-1):

    Figure 6-1 Add Targets Manually



  3. In the Add Using Guided Process pop-up window, select Oracle Virtual Platform and click Add (Figure 6-2):

    Figure 6-2 Add Oracle Virtual Platform



    Note:

    This option will be available only if the Virtualization Infrastructure (VI) plug-in has been deployed.

    If the plug-in is not deployed, then the Oracle Virtual Platform option will not appear. Deploy the VI plug-in as described in the Direct Monitoring of Xen Based Systems chapter of the Oracle® Enterprise Manager Cloud Administration Guide.

  4. On the Discover Oracle Virtual Platforms page, register Oracle Virtual Platforms with Enterprise Manager. You can optionally specify target credentials and a monitoring agent for each target; if set, these override the defaults for that target.
    • Set the Default Monitoring Agent and Credentials (Figure 6-3). Click the search (magnifying glass) icon to search for the monitoring agent location. For the Credential Properties, enter root for the Username and provide the host's root password.

      Note:

      A non-root user with sudo privilege can also be used. Refer to the VI plug-in documentation for the necessary setup steps.

      Figure 6-3 Default Monitoring Agent and Credentials



    • Add a Host Name or IP address. Click Add and enter the fully qualified domain name (FQDN) or IP address for the host in the pop-up window (Figure 6-4). If you have more than one, enter each on a separate line. A quick resolution check for these entries is sketched at the end of this step.

      Figure 6-4 Discover Virtual Servers: Add Host/IP Address



      Click Add to close the pop-up window.

      Once added, the Hostname and IP address fields will update as shown in Figure 6-5:

      Figure 6-5 Enter Hostname or IP Address



    • Click Submit. A job is submitted to register the Oracle Virtual Platform, which creates the virtual targets. Approximately five (5) minutes is required to complete the job. Click Job Details to check the status, or click Close to close the window (Figure 6-6):

      Figure 6-6 Submit Oracle Virtual Platform Registration Job


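    As noted above for the host name entries, before submitting the registration job it can help to confirm that each name or address actually resolves. The following minimal, illustrative sketch (not part of Enterprise Manager) uses only the Python standard library; the host names shown are placeholders for your own Dom0 entries.

        #!/usr/bin/env python3
        """Illustrative pre-check: do the Dom0 names/IP addresses to be registered resolve?"""
        import socket

        # Hypothetical entries; replace with the names or addresses you plan to register.
        hosts = ["dm01dbadm01.example.com", "dm01dbadm02.example.com"]

        for h in hosts:
            try:
                addrs = sorted({info[4][0] for info in socket.getaddrinfo(h, None)})
                print("%-35s resolves to %s" % (h, ", ".join(addrs)))
            except socket.gaierror as err:
                print("%-35s DOES NOT resolve: %s" % (h, err))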

  5. Once the job completes, then discover the Exadata Database Machine. From the Setup menu, select Add Target then Add Targets Manually.
  6. On the Add Targets Manually page, select Add Using Guided Process (Figure 6-7):

    Figure 6-7 Add Targets Manually



  7. From the Target Types drop-down menu, select Oracle Exadata Database Machine (Figure 6-8):

    Figure 6-8 Add Using Guided Process: Add Oracle Exadata Database Machine



  8. On the Oracle Exadata Database Machine Discovery page, select the Discover a new Database Machine and its hardware components as targets option and click Discover Targets.
  9. On the Discovery Inputs page, click the search (magnifying glass) icon to search for an agent in a DomU that resides in the Database Machine to be discovered. Do not search for a virtual platform (Dom0), because you cannot install an Enterprise Manager agent in Dom0. As in the physical Exadata case, you are specifying a host on which the schematic file can be read. Usually this is the same host as the discovery agent; it is never the Dom0.

    In the Schematic Files section, select a host on which the schematic file can be read, and set the host credential that can read the schematic file (a sketch for previewing this file by hand follows this step). Click Set Credential to set the login and password (Figure 6-9) for the host you registered in the previous steps. Click OK.

    Figure 6-9 Set Credential



    Click Next.
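
    If you want to sanity-check the schematic file before continuing, the following minimal sketch walks databasemachine.xml and prints its element tree. It makes no assumption about the file's exact schema; the path shown is a common location on compute nodes but is not guaranteed for every deployment.

        #!/usr/bin/env python3
        """Illustrative sketch: dump the element tree of the databasemachine.xml schematic file."""
        import xml.etree.ElementTree as ET

        # Common location on Exadata compute nodes; adjust for your deployment.
        SCHEMATIC = "/opt/oracle.SupportTools/onecommand/databasemachine.xml"

        def dump(elem, depth=0):
            text = (elem.text or "").strip()
            print("  " * depth + elem.tag + (": " + text if text else ""))
            for child in elem:
                dump(child, depth + 1)

        if __name__ == "__main__":
            dump(ET.parse(SCHEMATIC).getroot())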

  10. On the InfiniBand Discovery page, set the nm2user credential. Select the New option and enter nm2user for the Username field and provide a password as shown in Figure 6-10:

    Figure 6-10 Set InfiniBand Credentials



    Click Next.

    Note:

    A known issue about missing CLIENTNAME and CLIENTIP information will cause an Error pop-up message. Click OK to dismiss the pop-up and to continue with the guided discovery.

  11. On the Prerequisite Check page, verify the Status is successful, then click Next.
  12. On the Components page, review the Exadata components that were discovered, then click Next.
  13. On the Monitoring Agents page, review the names, locations, and backup locations of the Monitoring Agents. Click Next.
  14. On the Agent Credential page, if the credentials are the same for all agents, then you can enter a new username and password or accept the default. Click Next.

    If the credentials are different for all agents, then select the Different for all agents option and enter usernames and passwords for all hosts. Click Next.

  15. On the Monitoring Credentials page, verify or enter the credentials for the following components:
    • Oracle Exadata Storage Server

    • InfiniBand Switch

    • ILOM

    Click Test Connection for each component to verify the connection. If the credentials are different for each component, then provide the appropriate username and password. A manual connectivity check for a storage server is sketched after this step.

    Click Next.
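
    If a Test Connection attempt fails for a storage server, the same credentials can be checked by hand: an SSH session as the monitoring user should be able to run CellCLI. The sketch below simply shells out to ssh; the host and user names are placeholders, and it assumes key-based SSH (or an SSH agent) is already configured for the account being tested.

        #!/usr/bin/env python3
        """Illustrative manual check: can the monitoring user run CellCLI on a storage server?"""
        import subprocess

        CELL = "dm01celadm01.example.com"   # hypothetical storage server host name
        USER = "celladmin"                  # cell administrator account used for monitoring

        result = subprocess.run(
            ["ssh", f"{USER}@{CELL}", "cellcli", "-e", "list cell attributes name,status"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)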

  16. On the SNMP Subscription page, enter the SNMP Community String (the default is public) for both the Oracle Exadata Storage Server and InfiniBand Switch areas. For the InfiniBand Switch, also provide the root credentials (Figure 6-11). A quick SNMP reachability check is sketched after this step. Click Next.

    Figure 6-11 Enter InfiniBand Switch Root Credential

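    The community string entered here must match what the storage servers and switches actually accept. A quick, illustrative reachability check using the net-snmp snmpget utility is sketched below; the target host name is a placeholder, and the standard sysDescr OID is queried only to confirm that SNMP v2c answers with the given community string.

        #!/usr/bin/env python3
        """Illustrative check: does a component answer SNMP v2c queries for a given community string?"""
        import subprocess

        TARGET = "dm01celadm01.example.com"   # hypothetical cell or switch host name
        COMMUNITY = "public"                  # community string to verify
        SYS_DESCR = "1.3.6.1.2.1.1.1.0"       # SNMPv2-MIB::sysDescr.0

        proc = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, TARGET, SYS_DESCR],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            print("SNMP reachable:", proc.stdout.strip())
        else:
            print("SNMP check failed:", (proc.stderr or proc.stdout).strip())
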
  17. On the Component Properties page, enter public as the SNMP community string for both the Ethernet switch and PDU components (Figure 6-12). Click Next.

    Figure 6-12 Enter 'public' for Ethernet Switch and PDUs



  18. On the Review page, review the details of the following components:
    • System Target

    • Compute Node

    • Oracle Exadata Storage Server

    • InfiniBand Switch

    • Ethernet Switch

    • Compute Node ILOM

    • KVM

    • PDU

    Click Submit to promote all targets. Allow approximately five (5) minutes for processing. Once complete, all targets are monitored by Oracle Enterprise Manager Cloud Control.

Note:

The VI plug-in regularly checks whether a VM is added or deleted. The Oracle VM instance targets are auto-promoted once they are detected by the VI plug-in. If you want to monitor the OS, ASM, and DB in the VM, you need to push the Enterprise Manager agent to the VM.

6.3 Post-Discovery Configuration

Because agents run inside DomU nodes and remotely monitor cells, ILOMs, switches, and PDUs, ensure that the primary and backup agents run in DomUs hosted by different physical servers (Dom0s). If a Dom0 fails, Enterprise Manager can then continue to monitor these targets using the agent running in a DomU hosted by a Dom0 that is still up, which preserves monitoring through a complete Dom0 outage. A quick way to verify this placement by hand is sketched below.
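
One way to double-check this placement by hand is to ask each Dom0 which guests it is currently running and confirm that the two agent DomUs do not share a Dom0. The following is a minimal, illustrative sketch only: all host and VM names are placeholders, and it assumes root SSH access to each Dom0 and the Xen xm toolstack used by Oracle VM Server for x86.

    #!/usr/bin/env python3
    """Illustrative placement check: are the primary and backup agent DomUs on different Dom0s?"""
    import subprocess

    DOM0S = ["dm01dbadm01.example.com", "dm01dbadm02.example.com"]   # physical servers (Dom0)
    AGENT_VMS = {"dm01vm01", "dm01vm05"}                             # DomUs hosting the EM agents

    placement = {}
    for dom0 in DOM0S:
        # "xm list" prints one running domain per line, with the domain name in the first column.
        out = subprocess.run(["ssh", f"root@{dom0}", "xm", "list"],
                             capture_output=True, text=True).stdout
        running = {line.split()[0] for line in out.splitlines()[1:] if line.strip()}
        for vm in AGENT_VMS & running:
            placement[vm] = dom0

    print(placement)
    if len(set(placement.values())) < len(AGENT_VMS):
        print("WARNING: the agent DomUs share a Dom0 (or were not found); "
              "monitoring is not protected against a Dom0 outage.")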

For details on how agents monitor Exadata targets remotely, refer to Post-Discovery Configuration and Verification.

6.4 Exadata Virtualized Provisioning

Provisioning provides repeatable, reliable, automated, unattended, and scheduled mass deployment of RAC clusters, including virtual machines (VMs), Oracle Database (DB), Grid Infrastructure, and ASM, on virtualized Exadata.

With the Exadata plug-in's virtualization provisioning functionality, you can perform the tasks described in the following sections.

6.4.1 Creating a Database Cluster

To create a database cluster:

  1. From the Database Machine target menu, select Provisioning, then select Create Cluster (Figure 6-13):

    Figure 6-13 Provisioning Menu



    The Exadata Provisioning Wizard will display.

  2. On the Exadata Provisioning: Cluster page, provide the information for:
    • Cluster Definition: Create a cluster name. Click Show existing clusters to display a list of database clusters already provisioned.

    • Virtual Machine Location: Select the servers on which you want to create the virtual machines. Select one or more from the Available Servers pane, then click the move button to move the selected server to the Selected Servers pane (Figure 6-14):

      Figure 6-14 Select Virtual Machine Location



    • Storage Definition: Select the Exadata Storage Servers to be used by the virtual machines. Select one or more from the Available Storage Servers pane, then click the move button to move the selected server to the Selected Storage Servers pane:

      Figure 6-15 Select Exadata Storage Servers



    Click Next.

  3. On the Credentials page, set the credentials for:
    • Compute Server: Set the credentials for the root user. From the Set Credentials drop-down, select All to apply the settings to all servers, or select one or more servers from the list and select Selected to apply the settings to only the selected servers.

      In the Set Server Credential pop-up window, select an existing named credential or select New to create a new one.

      Click Test Connection to verify that the credentials are properly set. If successful, the Test Credential Status will update to show Succeeded (Figure 6-16):

      Figure 6-16 Set Compute Server Credentials



    • Exadata Storage Server: Set the credentials for the cell administrator.

      In the Set Server Credential pop-up window, select an existing named credential or select New to create a new one.

      Click Test Connection to verify that the credentials are properly set. If successful, the Test Credential Status will update to show Succeeded (Figure 6-17):

      Figure 6-17 Set Exadata Storage Server Credentials



    Click Next to define the Virtual Machines.

  4. On the Virtual Machines (VMs) page, provide the following information:

    Note:

    When available, the Exadata Provisioning Wizard will pre-populate the fields with the appropriate defaults. You can change this information to suit your environment.

    • Virtual Machine Definition: In this region (Figure 6-18), provide the details to define the VM:

      • Virtual Machine Size: Select Small, Medium, or Large. Click Customize to adjust the number of CPU cores and available memory.

      • Root password: Create a password in the two password fields.

      • Verify the Prefix, DNS, and NTP fields.

      • Time Zone: Select the appropriate time zone from the drop-down list.

      Figure 6-18 Exadata Provisioning: Virtual Machine Definition



    • Software Locations: When available, the Exadata Provisioning Wizard will pre-populate the fields (Figure 6-19); otherwise, provide the following information:

      • Inventory Location

      • Grid Infrastructure Home

      • Database Home Location

      • Agent Installation Base Directory

      • Agent Port (optional)

      • Software Language (optional)

      Figure 6-19 Exadata Provisioning: Software Locations



    • Operating System Users and Groups: As with the Software Locations region, the Exadata Provisioning Wizard pre-populates these fields, except for the password fields (Figure 6-20). Otherwise, provide the following information:

      • User name: including the ID, password, and home directory.

      • DBA group name and ID.

      • OINSTALL group name and ID.

      Figure 6-20 Exadata Provisioning: Operating System Users and Groups



    Click Next to provide the Network details.

  5. On the Network page, specify the IP addresses, names, and domains used for the Admin, Client, and Private networks. Provide the following details:
    • Gateway and Domain Details: The domain and subnet mask details should already be supplied by the Exadata Provisioning Wizard. Enter a valid IP address for the Admin and Client gateway (Figure 6-21):

      Figure 6-21 Exadata Provisioning: Network Gateway and Domain Details



    • Virtual Machine 1 (Figure 6-22): For the first virtual machine in the cluster, the Exadata Provisioning Wizard will auto-complete the Prefix and Start ID fields. You can enter an optional Suffix. Enter a valid IP address.

      Figure 6-22 Exadata Provisioning: Virtual Machine 1



    • Generated Virtual Machine Name and IP (Figure 6-23): After you have entered the information for the first VM, click Generate to create the information for the other VMs in the cluster; you can also enter the details manually. A sketch of this generation pattern follows this step.

      Figure 6-23 Exadata Provisioning: Generated Virtual Machine Name and IP



    Once you have entered all the necessary information, click Validate IP at the top of the page to verify the IP addresses. Click Next to continue to enter the details for Grid Infrastructure and to create the initial database.
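
    For reference, the Generate button fills in the remaining virtual machines by incrementing the Start ID and the host portion of the address supplied for Virtual Machine 1. The sketch below shows that kind of sequential pattern; the naming convention and addresses are illustrative assumptions, not the wizard's exact algorithm.

        #!/usr/bin/env python3
        """Illustrative sketch of a sequential VM name/IP generation pattern (not the wizard's code)."""
        import ipaddress

        def generate(prefix, start_id, first_ip, count, suffix=""):
            base = ipaddress.ip_address(first_ip)
            return [(f"{prefix}{start_id + i:02d}{suffix}", str(base + i)) for i in range(count)]

        # Example: four admin-network entries starting from dm01vm01 / 10.0.0.21 (placeholder values).
        for name, ip in generate("dm01vm", 1, "10.0.0.21", 4):
            print(name, ip)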

  6. On the Grid Infrastructure and Initial Database page, enter the details for:
    • Grid Infrastructure (Figure 6-24): Enter the details for the Cluster (SCAN name and port) and verify the Disk Group information. The Exadata Provisioning Wizard will pre-populate the details, but you can adjust them as needed.

      Figure 6-24 Exadata Provisioning: Grid Infrastructure



    • Initial Database: Click the check box to create the initial database. Additional information for Database Identification and Administrator Credentials will be required as shown in Figure 6-25:

      Figure 6-25 Exadata Provisioning: Initial Database



      Click Advanced to expand the region for additional details for Memory Parameters, Processes, and Character Sets as shown in Figure 6-26:

      Figure 6-26 Exadata Provisioning: Initial Database Advanced Details



    Click Next to set the schedule.

  7. On the Schedule page, the Exadata Provisioning Wizard will create the Deployment Instance value. Select a schedule start and notification options:
    • Schedule: Select to initiate the creation immediately or later. If you select later, then you will be prompted to select a date and time.

    • Notification: Select the notification statuses for which you will be notified.

    Click Review to review the settings and initiate the job.

  8. On the Review page, review the selection in the summary displayed. To change any section, return to the previous page and edit the selection.

    Click Submit to begin the creation job.

6.4.2 Scaling Up a Database Cluster

To scale up a database cluster:

  1. From the Database Machine target menu, select Provisioning, then select Scale Up Cluster.

    The Exadata Provisioning Wizard will display.

  2. On the Cluster page, enter the cluster name or click the Search icon to select a cluster from the list.

    Select one or more from the Available Servers pane, then click the move button to move the selected server to the Selected Servers pane.

    Click Next to set the credentials.

  3. On the Credentials page, set the credentials for:
    • Compute Server: Set the credentials for the root user. From the Set Credentials drop-down, select All to apply the settings to all servers, or select one or more servers from the list and select Selected to apply the settings to only the selected servers.

      In the Set Server Credential pop-up window, select an existing named credential or select New to create a new one.

      Click Test Connection to verify that the credentials are properly set. If successful, the Test Credential Status will update to show Succeeded (Figure 6-27):

      Figure 6-27 Set Compute Server Credentials



    • Virtual Machines (Figure 6-28): Set the credentials for the DomU Host and Root and for the Cluster ASM and Database.

      You have the option to use preferred credentials or to override the preferred credentials.

      Click Test Connection to verify that the credentials are properly set.

      Figure 6-28 Scale Up Cluster: Virtual Machines Credentials



    Click Next.

  4. On the Virtual Machines page, click Next.
  5. On the Network page, specify the IP addresses, names, and domains used for the Admin, Client, and Private networks. Provide the following details:
    • Gateway and Domain Details: The domain and subnet mask details should already be supplied by the Exadata Provisioning Wizard. Enter a valid IP address for the Admin and Client gateway (Figure 6-29):

      Figure 6-29 Scale Up: Network Gateway and Domain Details



    • Virtual Machine 1 (Figure 6-30): For the first virtual machine in the cluster, the Exadata Provisioning Wizard will auto-complete the Prefix and Start ID fields. You can enter an optional Suffix. Enter a valid IP address.

      Figure 6-30 Scale Up: Virtual Machine 1



    • Generated Virtual Machine Name and IP (Figure 6-31): After you have entered the information for the first VM, click Generate to create the information for the other VMs in the cluster. You can always enter the details manually.

      Figure 6-31 Scale Up: Generated Virtual Machine Name and IP



    Once you have entered all the necessary information, click Validate IP at the top of the page to verify the IP addresses.

    Click Next to continue to schedule the scale up.

    Note:

    The steps for Grid Infrastructure and the Initial Database are skipped because they do not need to be set up again.

  6. On the Schedule page, the Exadata Provisioning Wizard will create the Deployment Instance value. Select a schedule start and notification options:
    • Schedule: Select to initiate the scale up immediately or later. If you select later, then you will be prompted to select a date and time.

    • Notification: Select the notification statuses for which you will be notified.

    Click Review to review the settings and initiate the scale up.

  7. On the Review page, review the selection in the summary displayed. To change any section, return to the previous page and edit the selection.

    Click Submit to begin the scale up.

6.4.3 Scaling Down a Database Cluster

Scaling down a database cluster removes virtual machines from the cluster. To scale down a database cluster:

  1. From the Database Machine target menu, select Provisioning, then select Scale Down Cluster.

    The Exadata Provisioning Wizard will display.

  2. Enter the cluster name you want to scale down or click the Search icon to select an available cluster.

    Once you have selected a cluster, you will be prompted to:

    • Select nodes to delete.

    • Verify or enter new named credentials for DomU (host, root, and Exadata Server) and for the Exadata Storage Server. Click Test Credentials to verify the credentials have been selected properly.

    Click Next to schedule the scale-down job.

  3. On the Schedule page, the Exadata Provisioning Wizard will create the Deployment Instance value. Select a schedule start and notification options:
    • Schedule: Select to initiate the scale down immediately or later. If you select later, then you will be prompted to select a date and time.

    • Notification: Select the notification statuses for which you will be notified.

    Click Review to review the settings and initiate the scale down.

  4. On the Review page, review the selection in the summary displayed. To change any section, return to the previous page and edit the selection.

    Click Submit to begin the scale down.

6.4.4 Deleting a Database Cluster

To delete an existing cluster:

  1. From the Database Machine target menu, select Provisioning, then select Delete Cluster.

    The Exadata Provisioning Wizard will display.

  2. On the Cluster page, enter the cluster name you want to delete or click the Search icon to select an available cluster.

    The page will update to show the nodes to be deleted.

    Verify or enter new named credentials for DomU (host, root, and Exadata Server) and for the Exadata Storage Server. Click Test Credentials to verify the credentials have been selected properly.

    Click Next to schedule the delete job.

  3. On the Schedule page, the Exadata Provisioning Wizard will create the Deployment Instance value. Select a schedule start and notification options:
    • Schedule: Select to initiate the delete job immediately or later. If you select later, then you will be prompted to select a date and time.

    • Notification: Select the notification statuses for which you will be notified.

    Click Review to review the settings and initiate the delete job.

  4. On the Review page, review the selection in the summary displayed. To change any section, return to the previous page and edit the selection.

    Click Submit to begin the delete job.

6.5 Viewing Virtualized Exadata Database Machine

Once discovery is complete, the Exadata plug-in shows the virtual machines monitored by Enterprise Manager Cloud Control 12c, as shown in Figure 6-32:

Figure 6-32 Virtual Machines Monitored



Note:

The schematic diagram in the Database Machine home page is based on the content of the databasemachine.xml file found during discovery. The virtual platforms (Dom0) are displayed as compute nodes in the rack in the schematic diagram.

The Database Machine Software topology diagram does not display the physical Oracle Server, virtual Oracle Server (DomU), or Virtual Platform (Dom0) targets. However, it continues to show the host targets running in the DomUs.

The Software tab for the Exadata Database Machine target shows all cluster, ASM, and database targets in the whole physical Database Machine, grouped by cluster, as shown in Figure 6-33:

Figure 6-33 Exadata Database Machine Software Tab



6.6 Resource Utilization Graphs

The following compute resource allocation graphs are available in virtualized Exadata. These graphs depend on the virtual machine hierarchy and metric data from the VI plug-in.

6.6.1 Cluster Placement

This graph (Figure 6-34) shows the placement of Oracle Clusterware clusters on physical servers in a particular Exadata Database Machine rack. Because this is a placement graph, the widths of the data series are scaled relative to the physical server that hosts the largest number of clusters.

Figure 6-34 Resource Utilization: Cluster Placement



6.6.2 Database Placement

This graph (Figure 6-35) shows the database placement on physical servers in a particular Exadata Database Machine rack for a particular DB cluster. Because this is a placement graph, the widths of the data series are scaled relative to the physical server that hosts the largest number of databases for that DB cluster.

Figure 6-35 Resource Utilization: Database Placement



6.6.3 Database CPU Utilization

This graph (Figure 6-36) shows the database CPU utilization per database per VM host for a particular DB cluster.

Figure 6-36 Resource Utilization: Database CPU Utilization



6.6.4 Database Memory Utilization

This graph (Figure 6-37) shows the database memory utilization per database per VM host for a particular DB cluster.

Figure 6-37 Resource Utilization: Database Memory Utilization

