17 Enterprise Manager High Availability

This chapter discusses best practices for installation and configuration of each Cloud Control component and covers the following topics:

17.1 Agent High Availability

The following sections discuss best practices for installation and configuration of the Management Agent.

17.1.1 Configuring the Management Agent to Automatically Start on Boot and Restart on Failure

The Management Agent is started manually. It is important that the Management Agent be started automatically when the host is booted to ensure monitoring of critical resources on the administered host. To that end, use any operating system mechanisms available to start the Management Agent automatically. For example, on UNIX systems this can be done by placing a startup script in /etc/init.d that calls the Management Agent on boot; on Windows, set the Management Agent service to start automatically.
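As a sketch of the UNIX approach, the following is a minimal /etc/init.d-style start/stop script. The AGENT_HOME path and the oracle software owner are assumptions for illustration; registering the script to run at boot (for example with chkconfig or update-rc.d) is platform-specific.

```shell
#!/bin/sh
# Minimal init-script sketch for starting the Management Agent at boot.
# AGENT_HOME and the "oracle" owner are assumptions -- adjust for your
# installation before registering this script with your boot mechanism.
AGENT_HOME=/u01/app/oracle/agent/agent_inst
case "$1" in
  start) su - oracle -c "$AGENT_HOME/bin/emctl start agent" ;;
  stop)  su - oracle -c "$AGENT_HOME/bin/emctl stop agent" ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac
```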

17.1.2 Configuring Restart for the Management Agent

Once the Management Agent is started, the watchdog process monitors the Management Agent and attempts to restart it in the event of a failure. The behavior of the watchdog is controlled by environment variables set before the Management Agent process starts. The environment variables that control this behavior follow. All testing discussed here was done with the default settings.

  • EM_MAX_RETRIES – This is the maximum number of times the watchdog will attempt to restart the Management Agent within the EM_RETRY_WINDOW. The default is to attempt restart of the Management Agent three times.

  • EM_RETRY_WINDOW - This is the time interval in seconds that is used together with the EM_MAX_RETRIES environment variable to determine whether the Management Agent is to be restarted. The default is 600 seconds.

The watchdog will not restart the Management Agent if it detects that the Management Agent has required restart more than EM_MAX_RETRIES times within the EM_RETRY_WINDOW time period.
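For example, the following sketch shows how the watchdog limits could be widened before the Management Agent starts. The values chosen and the AGENT_HOME path are illustrative assumptions, not recommendations.

```shell
# Sketch: widen the watchdog restart limits before the agent starts.
# With these (illustrative) values the watchdog allows up to 5 restarts
# within a 900-second window instead of the 3-in-600-seconds default.
export EM_MAX_RETRIES=5
export EM_RETRY_WINDOW=900
echo "watchdog limits: ${EM_MAX_RETRIES} restarts per ${EM_RETRY_WINDOW}s"
# $AGENT_HOME/bin/emctl start agent   # the started agent inherits these settings
```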

17.1.3 Installing the Management Agent Software on Redundant Storage

The Management Agent persists its configuration, intermediate state and collected information using local files in the Agent State Directory.

In the event that these files are lost or corrupted before being uploaded to the Management Repository, a loss of monitoring data and any pending alerts not yet uploaded to the Management Repository occurs.

To protect from such losses, configure the Agent State Directory on redundant storage. The Agent State Directory can be determined by entering the command '$AGENT_HOME/agent_inst/bin/emctl getemhome', or from the Agent Homepage in the Cloud Control console.

17.2 Repository High Availability

The following sections document best practices for repository configuration.

17.2.1 General Best Practice for Repository High Availability

Before installing Enterprise Manager, you should prepare the database that will be used to set up the Management Repository. Install the database using the Database Configuration Assistant (DBCA) to make sure that you inherit all Oracle installation best practices.

  • Choose Automatic Storage Management (ASM) as the underlying storage technology.

  • Enable ARCHIVELOG Mode

  • Enable Block Checksums

  • Configure the Size of Redo Log Files and Groups Appropriately

  • Use a Flash Recovery Area

  • Enable Flashback Database

  • Use Fast-Start Fault Recovery to Control Instance Recovery Time

  • Enable Database Block Checking

  • Set DISK_ASYNCH_IO

Use the MAA Advisor for additional high availability recommendations that should be applied to the Management Repository. MAA Advisor can be accessed by selecting Availability > MAA Advisor from the Homepage of the Repository Database.

See Oracle Database High Availability Best Practices for more information on these and other best practices to ensure the database that hosts the Management Repository is configured to provide required availability.

17.2.2 Configuring RAC for the Management Repository

If the Management Repository is a Real Application Clusters (RAC) database, the Management Services should be configured with the appropriate connect strings. SCAN connect strings are recommended to avoid reconfiguration of the Repository connect descriptor after nodes are added to or removed from the Repository tier. SERVICE_NAME should always be used in connect strings instead of SID_NAME.

Refer to the Oracle Database Net Services Administrator's Guide for details.

The following example shows a connect string for a Repository whose database version is lower than 11g Release 2:

(DESCRIPTION= (ADDRESS_LIST=(FAILOVER=ON) (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip.example.com)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip.example.com)(PORT=1521))) (CONNECT_DATA=(SERVICE_NAME=EMREP)))

The following example shows a connect string for a Repository whose database version is 11g Release 2 or higher:

(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=primary-cluster-scan.example.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PDB.example.com)))

The Repository connect descriptor is configured by running the emctl command from the Management Service. If you have multiple Management Services configured, this command must be run on each Management Service.

emctl config oms -store_repos_details -repos_conndesc '(DESCRIPTION= (ADDRESS_LIST=(FAILOVER=ON) (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip.example.com)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip.example.com)(PORT=1521))) (CONNECT_DATA=(SERVICE_NAME=EMREP)))' -repos_user sysman

After updating the Repository connect descriptor, run the following command from any one OMS to make the same change to the monitoring configuration used for the Management Services and Repository target:

emctl config emrep -conn_desc <repository_connect descriptor as above>

17.3 Oracle Management Service High Availability

The following sections document configuring the OMS for high availability.

OMS high availability begins with ensuring there is at least one OMS available at any given time. Depending upon your Recovery Time Objective (RTO), this can be accomplished without downtime from loss of a node in an active/active configuration by adding at least one additional OMS, or with limited downtime from loss of a node in an active/passive configuration by ensuring that the OMS can be run with the same address on a different server if the primary server fails. See Chapter 16, "High Availability Solutions" for more details on architectural options for achieving high availability.

Regardless of the manner selected to provide high availability, and the level of availability selected for initial installation, there are a number of steps that can be taken to best prepare the environment for a future move to higher levels of availability including disaster recovery. See "Best Practices for Configuring the Cloud Control OMS to be Compatible with Disaster Recovery using Alias Host Names and Storage Replication" for details on these steps.

To ensure OMS high availability, there also must be a sufficient number of OMSs to support the size and scope of the environment managed by Enterprise Manager as well as the scale and complexity of the usage of Enterprise Manager including the number of administrators and the breadth of capability employed. See EM Operational Considerations and Troubleshooting Whitepaper Master Index in My Oracle Support note 1940179.1 for more information, including understanding how to configure and monitor for availability and how to determine how many OMSs are needed based on operational experience.

Once an environment requires more than one active OMS, whether to ensure sufficient capacity for the environment or to prevent the downtime associated with failover to a passive OMS, a Server Load Balancer (SLB) is required. An SLB provides a single address through which Management Agents and administrators communicate with the set of OMS servers, monitors the OMSs to determine which are available, and routes communication to an available OMS.

It can be expensive to implement an SLB. If the environment does not need more than one OMS to handle the processing requirements, and if the minutes of downtime associated with an active/passive failover of the OMS meet RTO requirements, an SLB is not required to provide high availability. The instructions in "Configuring the Cloud Control OMS in an Active/Passive Environment for HA Failover Using Virtual Host Names" provide an example of how to configure for high availability using a virtual IP address and shared storage.

If you need to add one or more additional OMSs to support your RTO and/or the processing needs of the environment, see "Installing Additional Management Services". Once you've added additional OMS(s), see "Configuring Multiple Management Services Behind a Server Load Balancer (SLB)" for information on how to configure multiple OMSs behind a SLB.

17.3.1 Best Practices for Configuring the Cloud Control OMS to be Compatible with Disaster Recovery using Alias Host Names and Storage Replication

This section provides best practices for Cloud Control administrators who want to install the Cloud Control OMS in a manner that will ensure compatibility with Disaster Recovery using Alias Host Names and Storage Replication. This will reduce the steps required to implement a Disaster Recovery configuration should it be required at a future date. These best practices are applicable for every MAA high availability level installation. Installing even a standalone OMS in a manner that considers the needs of the highest MAA high availability level will provide the greatest flexibility and easiest migration to higher MAA high availability levels in the future.

17.3.1.1 Overview and Requirements

The following installation conditions must be met in order for a Cloud Control OMS installation to support Disaster Recovery using alias host names and storage replication:

  • The Middleware Home, OMS Instance Base, Agent Base, and Oracle Inventory directories must be installed on storage that can be replicated to the standby site.

  • The installation of the OMS must be performed in a manner that maintains an Alias Host Name that is the same for the primary and standby site hosts for the OMS. This Alias Host Name allows the software to be configured such that the same binaries and configuration can be used either on the OMS host at the primary or standby site without changes.

  • The Middleware Home, OMS Instance Base, and Agent Base must be installed using the Oracle Inventory location on the storage that can be replicated to the standby site.

  • The software owner and time zone parameters must be the same on all nodes that will host this Oracle Management Service (OMS).

  • The path to the Middleware, Instance, OMS Agent, and Oracle Inventory directories must be the same on all nodes that will host this OMS.

17.3.1.2 Create an OMS installation base directory under ORACLE_BASE

To support disaster recovery, the Middleware Home, OMS Instance Base, Agent Base, and Oracle Inventory directories must be installed on storage that can be replicated to the standby site. Each of these directories is traditionally located directly underneath ORACLE_BASE. Once an OMS is installed, its directory path cannot be changed. Transitioning an installation with each of these directories located directly underneath ORACLE_BASE to replicated storage later can add complications such as requiring the ORACLE_BASE to be relocated to replicated storage to maintain the original directory paths for the installed software, which would require any locally installed software under that path to be uninstalled and reinstalled in an alternate local storage directory.

To provide the greatest flexibility for future storage migrations, create a directory under ORACLE_BASE that will be the base directory for all OMS software, including the Middleware Home, OMS Instance Base, Agent Base, and Oracle Inventory directories. For example, if the ORACLE_BASE is /u01/app/oracle, create a new OMS installation base directory, such as /u01/app/oracle/OMS. This directory will serve as the mount point for the replicated storage. If the software is installed locally under this directory, this directory can become a single mount point to the replicated storage enabling a simple migration. When providing and reviewing directory locations while installing the OMS, ensure the Middleware Home, OMS Instance Base, Agent Base, and Oracle Inventory are installed under this directory.

17.3.1.3 Configure an Alias Host Name

To support disaster recovery, a host at the primary site and a host at the standby site must be capable of running with the same host name used in the OMS installation. This can be accomplished using an alias host name.

Configure an alias host name to use in the installation using the guidance in "Planning Host Names" in Chapter 18. Option 2 (alias host names on both sites) in that section provides the greatest flexibility and is recommended as a best practice for new installations.

To implement Option 2, specify the alias host name when installing the OMS, either by using the ORACLE_HOSTNAME=<ALIAS_HOST_NAME> parameter or by specifying the alias host name in the Host Name field in the OUI installation. For example, include the following parameter on the runInstaller command line:

ORACLE_HOSTNAME=oms1.example.com

17.3.1.4 Configure an Oracle Inventory located under OMS installation base directory

To support disaster recovery, a single OMS installation is shared by a host at the primary site and a host at the standby site using replicated storage. Only the active OMS mounts the replicated storage. Software maintenance activities may need to be performed when either the primary or standby site is the active site. As such, it is important to ensure that the Oracle Inventory containing the details of the installation is available from either location.

To prevent the need to perform manual migration activities to move the OMS installation from a local Oracle Inventory to a replicated storage Oracle Inventory, create the Oracle Inventory under the OMS installation base directory.

Use the following steps to prepare the installer to set up an inventory located under the OMS installation base directory:

  1. Create the OMS installation base directory.

  2. Create the Oracle Inventory directory under the new OMS installation base directory:

    $ cd <OMS installation base directory>

    $ mkdir oraInventory

  3. Create the oraInst.loc file. This file contains the Oracle Inventory directory path information needed by the Universal Installer.

    $ cd oraInventory

    $ vi oraInst.loc

    Enter the path information to the Oracle Inventory directory and specify the group of the software owner as the oinstall group. For example:

    inventory_loc=/u01/app/oracle/OMS/oraInventory

    inst_group=oinstall

Specify the Oracle Inventory under the OMS installation base directory when installing the OMS by providing the -invPtrLoc <oraInst.loc file with path> parameter on the runInstaller command line, for example:

-invPtrLoc /u01/app/oracle/OMS/oraInventory/oraInst.loc

The installer will create the inventory in the specified location. Use this inventory for all installation, patching, and upgrade activities for this OMS and OMS agent.
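The inventory preparation steps above can be sketched as a single script. Here mktemp stands in for the real OMS installation base directory (such as /u01/app/oracle/OMS from the example) so the sketch can be tried safely; substitute your actual base directory in practice.

```shell
# Sketch of the steps above: create the OMS installation base
# directory, the oraInventory directory beneath it, and the
# oraInst.loc file. mktemp is a safe stand-in for the real path.
OMS_BASE="$(mktemp -d)"
mkdir -p "${OMS_BASE}/oraInventory"
cat > "${OMS_BASE}/oraInventory/oraInst.loc" <<EOF
inventory_loc=${OMS_BASE}/oraInventory
inst_group=oinstall
EOF
cat "${OMS_BASE}/oraInventory/oraInst.loc"
```

The resulting oraInst.loc path is what is then passed to the installer via the inventory pointer parameter.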

17.3.1.5 Configure a Software Owner and Group that can be configured identically on all nodes

Just as the OMSs at the primary site are installed using the same software owner and group, to support disaster recovery, the software owner and group need to be configured identically on the standby site OMS hosts. Ensure that both the owner name and ID and the group name and ID selected for use at the primary site will also be available for use at the standby site.

Verification that the user and group of the software owner are configured identically on all OMS nodes can be performed using the 'id' command as in the example below:

$ id -a

uid=550(oracle) gid=50(oinstall) groups=501(dba)

17.3.1.6 Select a time zone that can be configured identically on all nodes

Just as the OMSs at the primary site are installed using the same time zone, to support disaster recovery, the time zone should be configured identically on the standby site OMS hosts. Select a time zone that can be used at both sites and ensure that the time zone is the same on all OMS hosts.

17.3.1.7 Installation and Configuration

The following are high level installation steps that reinforce the best practices listed in this section. Reference the detailed instructions in the Enterprise Manager Basic Installation Guide for details on the installation steps, including required pre-requisites and additional post installation operations.

If you are using an NFS mounted volume for the installation, please ensure that you specify rsize and wsize in your mount command to prevent running into I/O issues.

For example:

nas.example.com:/export/share1 /u01/app/oracle/OMS nfs rw,bg,rsize=32768,wsize=32768,hard,nointr,tcp,noac,vers=3,timeo=600 0 0

Note:

Review the NFS Mount Point Location Requirements in Oracle Enterprise Manager Cloud Control Basic Installation Guide for additional important NFS-related requirements.

Refer to the following steps when installing the software:

  1. Create an OMS installation base directory under ORACLE_BASE. If installing on replicated storage now, ensure that the replicated storage is mounted to this directory.

  2. Configure the Alias Host Names for all OMSs being installed on each of the OMS hosts.

  3. Configure a Software Owner and Group that will be consistently defined on all OMS hosts.

  4. Configure the time zone that will be consistently set on all OMS hosts.

  5. Follow the detailed preparation and installation instructions in Installing an Enterprise Manager System in the Enterprise Manager Basic Installation Guide, specifying the following information as part of the installation process:

    1. Ensure that the Middleware Home, OMS Instance Base, and Agent Base are located under the OMS installation base directory.

    2. Specify the inventory location file and the Alias Host Name of the OMS. These can be specified on the command line as in the following example:

      $ runInstaller -invPtrLoc /u01/app/oracle/OMS/oraInventory/oraInst.loc ORACLE_HOSTNAME=oms1.example.com

      You can also provide the ORACLE_HOSTNAME when prompted for this information from within the Enterprise Manager runInstaller UI.

  6. Continue the remainder of the installation.

17.3.2 Configuring the Cloud Control OMS in an Active/Passive Environment for HA Failover Using Virtual Host Names

This section provides a general reference for Cloud Control administrators who want to configure Enterprise Manager Cloud Control in Cold Failover Cluster (CFC) environments.

17.3.2.1 Overview and Requirements

The following conditions must be met for Cloud Control to fail over to a different host:

  • The installation must be done using a Virtual Host Name and an associated unique IP address.

  • Install on a shared disk/volume which holds the binaries and the gc_inst directory.

  • The Inventory location must failover to the surviving node.

  • The software owner and time zone parameters must be the same on all cluster member nodes that will host this Oracle Management Service (OMS).

17.3.2.2 Installation and Configuration

To override the physical host name of the cluster member with a virtual host name, software must be installed using the parameter ORACLE_HOSTNAME.

The software must be installed using the command line parameter -invPtrLoc to point to the shared inventory location file, which includes the path to the shared inventory location.

If you are using an NFS mounted volume for the installation, please ensure that you specify rsize and wsize in your mount command to prevent running into I/O issues.

For example:

nas.example.com:/export/share1 /u01/app/share1 nfs rw,bg,rsize=32768,wsize=32768,hard,nointr,tcp,noac,vers=3,timeo=600 0 0

Note:

Any reference to shared failover volumes could also be true for non-shared failover volumes which can be mounted on active hosts after failover.

17.3.2.3 Setting Up the Virtual Host Name/Virtual IP Address

You can set up the virtual host name and virtual IP address either by allowing the clusterware to set them up, or by manually setting them up yourself before installation and startup of Oracle services. The virtual host name must be static and resolvable consistently on the network. All nodes participating in the setup must resolve the virtual IP address to the same host name. Standard network tools such as nslookup and traceroute can be used to verify the host name. Validate using the following commands:

nslookup <virtual hostname>

This command returns the virtual IP address and fully qualified host name.

nslookup <virtual IP>

This command returns the virtual IP address and fully qualified host name.

Be sure to try these commands on every node of the cluster and verify that the correct information is returned.
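A small sketch of this check follows. localhost is used as a stand-in virtual host name so the sketch runs anywhere; in practice, substitute your virtual host name and run the same check on every cluster node, confirming each node resolves it to the same address.

```shell
# Sketch: resolve a (stand-in) virtual host name and print the address.
# Replace VHOST with your actual virtual host name and repeat on each
# cluster node; every node must resolve it identically.
VHOST=localhost   # stand-in; e.g. oms-vip.example.com
RESOLVED=$(getent hosts "$VHOST" | awk '{print $1; exit}')
echo "resolved ${VHOST} -> ${RESOLVED}"
```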

17.3.2.4 Setting Up Shared Storage

Storage can be managed by the clusterware that is in use or you can use any shared file system (FS) volume, such as NFS, as long as it is not an unsupported type, such as OCFS V1.

Note:

Only OCFS V1 is not supported. All other versions of OCFS are supported.

If the OHS directory is on shared storage, the LockFile directive in the httpd.conf file should be modified to point to a local disk; otherwise there is a potential for locking issues.

17.3.2.5 Setting Up the Environment

Some operating system versions require specific operating system patches be applied prior to installing 12c. The user installing and using the 12c software must also have sufficient kernel resources available. Refer to the operating system's installation guide for more details. Before you launch the installer, certain environment variables need to be verified. Each of these variables must be identically set for the account installing the software on ALL machines participating in the cluster:

  • OS variable TZ

    Time zone setting. You should unset this variable prior to installation.

  • PERL variables

    Variables such as PERL5LIB should also be unset to avoid association with an incorrect set of PERL libraries.

17.3.2.6 Synchronizing Operating System IDs

The user and group of the software owner should be defined identically on all nodes of the cluster. This can be verified using the 'id' command:

$ id -a

uid=550(oracle) gid=50(oinstall) groups=501(dba)

17.3.2.7 Setting Up Shared Inventory

Use the following steps to set up shared inventory:

  1. Create your new ORACLE_HOME directory.

  2. Create the Oracle Inventory directory under the new ORACLE_HOME:

    $ cd <shared oracle home>

    $ mkdir oraInventory

  3. Create the oraInst.loc file. This file contains the Oracle Inventory directory path information needed by the Universal Installer.

    vi oraInst.loc

    Enter the path information to the Oracle Inventory directory and specify the group of the software owner as the oinstall group. For example:

    inventory_loc=/app/oracle/share1/oraInventory

    inst_group=oinstall

17.3.2.8 Installing the Software

Refer to the following steps when installing the software:

  1. Create the shared disk location on both the nodes for the software binaries.

  2. Point to the inventory location file oraInst.loc (under ORACLE_BASE in this case), and specify the virtual host name. For example:

    $ runInstaller -invPtrLoc /app/oracle/share1/oraInst.loc ORACLE_HOSTNAME=lxdb.example.com -debug

    You can also provide the ORACLE_HOSTNAME when prompted for this information from within the Enterprise Manager runInstaller UI.

  3. Install Oracle Management Services on cluster member Host1.

  4. Continue the remainder of the installation normally.

  5. Once completed, copy the files oraInst.loc and oratab to /etc on all cluster member hosts (Host2, Host3, ...).

17.3.2.9 Starting Up Services

Ensure that you start your services in the proper order. Use the order listed below:

  1. Establish the IP address on the active node.

  2. Start the TNS listener (if it is part of the same failover group).

  3. Start the database (if it is part of the same failover group).

  4. Start Cloud Control using emctl start oms

  5. Test functionality.

In case of failover, refer to "Performing Switchover and Failover Operations".
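The startup order above can be sketched as the following command sequence for an active/passive failover. The virtual IP address, network device, listener name, and OMS home path are all illustrative assumptions, and the virtual IP step is normally handled by your clusterware; this is an environment-bound sketch, not a runnable script.

```shell
# Sketch of the failover startup order (steps 1-5); all names and
# addresses below are illustrative assumptions for a specific host.
sudo ip addr add 192.0.2.10/24 dev eth0       # 1. establish the virtual IP on the active node
lsnrctl start LISTENER                        # 2. TNS listener (if in the same failover group)
sqlplus / as sysdba <<'EOF'                   # 3. database (if in the same failover group)
startup
EOF
/u01/app/oracle/OMS/oms/bin/emctl start oms   # 4. start Cloud Control
/u01/app/oracle/OMS/oms/bin/emctl status oms  # 5. verify before testing functionality
```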

17.3.3 Installing Additional Management Services

There are two ways to install additional Management Services:

  • Using the "Add Oracle Management Service" Deployment Procedure (preferred method). For more information about using this Deployment Procedure, see the chapter on Adding Additional Oracle Management Services in the Oracle® Enterprise Manager Cloud Control Basic Installation Guide.

  • Installing Additional Oracle Management Service in Silent Mode (alternative method). For more information about silent mode installation, see the chapter on Installing Additional OMSs in Silent Mode in the Oracle® Enterprise Manager Cloud Control Advanced Installation and Configuration Guide.

17.3.4 Configuring Multiple Management Services Behind a Server Load Balancer (SLB)

The following sections discuss how to configure the OMS for high availability in an Active/Active configuration using a Server Load Balancer.

17.3.4.1 Configuring the Software Library

The Software Library location must be accessible by all active Management Services. If the Software Library is not configured during installation, it needs to be configured post-install using the Enterprise Manager console:

  1. On the Enterprise Manager home page, from the Setup menu, select Provisioning and Patching, and then select Software Library.

  2. Click the Provisioning subtab.

  3. On the Provisioning page, click the Administration subtab.

  4. In the Software Library Configuration section, click Add to set the Software Library Directory Location to a shared storage that can be accessed by any Management Service hosts.

17.3.4.2 Configuring a Load Balancer

This section describes the guidelines for setting up a Server Load Balancer (SLB) to distribute the Agent and Browser traffic to available Management Services.

Server Load Balancer Requirements

In order to configure your OMSs in an active/active configuration behind an SLB, your SLB must meet the following requirements:

  • The SLB must provide support for multiple virtual server ports.

    Depending on your configuration, you may require up to 5 ports on the SLB (Secure Upload, Agent Registration, Secure Console, Unsecure Console, BI Publisher).

  • Support for persistence.

    HTTP and HTTPS traffic between the browser and the OMS requires persistence.

  • Support for application monitoring.

    The SLB must be capable of monitoring the health of the OMSs and detecting failures, so that requests will not be routed to OMSs that are not available.

SLB configuration is a two-step process:

  1. Configure the SLB.

  2. Make requisite changes on the Management Services.

17.3.4.2.1 SLB Side Setup

Use the following table as reference for setting up the SLB with Cloud Control Management Services.

Table 17-1 Management Service Ports

Cloud Control Service        TCP Port  Monitor Name  Persistence           Pool Name      Load Balancing  Virtual Server Name  Virtual Server Port
Secure Upload                1159      mon_gcsu4900  None                  pool_gcsu4900  Round Robin     vs_gcsu4900          1159
Agent Registration           4889      mon_gcar4889  Active Cookie Insert  pool_gcar4889  Round Robin     vs_gcar4889          4889
Secure Console               7799      mon_gcsc7799  Source IP             pool_gcsc7799  Round Robin     vs_gcsc443           443
Unsecure Console (optional)  7788      mon_gcuc7788  Source IP             pool_gcuc7788  Round Robin     vs_gcuc80            80


Use the administration tools that are packaged with your SLB. A sample configuration follows. This example assumes that you have two Management Services running on host A and host B using the default ports as listed in Table 17-1.

  1. Create Pools

    A pool is a set of servers grouped together to receive traffic on a specific TCP port using a load balancing method. Each pool can have its own unique characteristic for a persistence definition and the load-balancing algorithm used.

    Table 17-2 Pools

    Pool Name                 Usage                     Members                 Persistence                                  Load Balancing
    pool_gcsu4900             Secure upload             HostA:4900, HostB:4900  None                                         Round Robin
    pool_gcar4889             Agent registration        HostA:4889, HostB:4889  Active cookie insert; expiration 60 minutes  Round Robin
    pool_gcsc7799             Secured console access    HostA:7799, HostB:7799  Source IP; expiration 60 minutes             Round Robin
    pool_gcuc7788 (optional)  Unsecured console access  HostA:7788, HostB:7788  Source IP; expiration 60 minutes             Round Robin


  2. Create Virtual Servers

    A virtual server, with its virtual IP address and port number, is the client-addressable host name or IP address through which members of a load balancing pool are made available to a client. After a virtual server receives a request, it directs the request to a member of the pool based on the chosen load balancing method.

    Table 17-3 Virtual Servers

    Virtual Server Name   Usage                    Virtual Server Port  Pool
    vs_gcsu4900           Secure upload            4900                 pool_gcsu4900
    vs_gcar4889           Agent registration       4889                 pool_gcar4889
    vs_gcsc443            Secure console access    443                  pool_gcsc7799
    vs_gcuc80 (optional)  Unsecure console access  80                   pool_gcuc7788


  3. Create Monitors

    Monitors are used to verify the operational state of pool members. Monitors verify connections and services on nodes that are members of load-balancing pools. A monitor checks the status of a service on an ongoing basis, at a set interval. If the service being checked does not respond within a specified timeout period, the load balancer automatically takes that member out of the pool and routes requests to the remaining members. When the node or service becomes available again, the monitor detects this and the member is automatically returned to the pool and able to handle traffic.

    Table 17-4 Monitors

    Monitor Name             Type   Interval  Timeout  Send String                Receive String                    Associate With
    mon_gcsu4900             https  60        181      GET /empbs/upload          Http Receiver Servlet active!     HostA:4900, HostB:4900
    mon_gcar4889             http   60        181      GET /empbs/genwallet       GenWallet Servlet activated       HostA:4889, HostB:4889
    mon_gcsc7799             https  5         16       GET /em/consoleStatus.jsp  Enterprise Manager Console is UP  HostA:7799, HostB:7799
    mon_gcuc7788 (optional)  http   5         16       GET /em/consoleStatus.jsp  Enterprise Manager Console is UP  HostA:7788, HostB:7788
    mon_gcscbip7799          https  5         16       GET /xmlpserver/services   getDocumentData                   HostA:7799, HostB:7799
    mon_gcucbip7788          https  5         16       GET /xmlpserver/services   getDocumentData                   HostA:7799, HostB:7799


    Note:

    Some Load Balancers require <CR><LF> characters to be added explicitly to the Send String using the literal "\r\n". This is vendor-specific. Refer to your SLB documentation for details.

17.3.4.2.2 Enterprise Manager Side Setup

Perform the following steps:

  1. Resecure the Oracle Management Service

    By default, the service name on the Management Service-side certificate uses the name of the Management Service host. Management Agents do not accept this certificate when they communicate with the Oracle Management Service through a load balancer. You must run the following command to regenerate the certificate on each Management Service:

    emctl secure oms
      -host slb.example.com 
      -secure_port 4900 
      -slb_port 4900
      -slb_console_port 443  
      -console
      [-lock]  [-lock_console]
    

    Output:

    Oracle Enterprise Manager Cloud Control 12c Release 4
    Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
    Securing OMS... Started.
    Enter Enterprise Manager Root (SYSMAN) Password :
    Enter Agent Registration Password :
    Securing OMS... Successful
    Restart OMS
    
  2. Resecure all Management Agents

    Management Agents installed before the SLB was set up, including the Management Agent installed with the Management Service, upload directly to the Management Service and will no longer be able to upload once the SLB is in place. Resecure these Management Agents to upload through the SLB by running the following command on each one:

    emctl secure agent -emdWalletSrcUrl https://slb.example.com:<upload port>/em
    
17.3.4.2.3 Configuring SSL on Enterprise Manager and the SLB (Release 12.1.0.2 and later)

If the SLB is configured to use Third-Party/Custom SSL certificates, you must ensure that the CA certificates are properly configured in order for the trust relationship to be maintained between the Agent, SLB, and the OMS. Specifically, the following must be carried out:

  • Import the CA certificates of the SLB into the OMS trust store.

  • Copy the Enterprise Manager CA certificates to the trust store of the SLB.

By default, Enterprise Manager uses its own self-signed certificates rather than the custom certificates. For Agents to upload information successfully to the OMS through the SLB, these custom trusted certificates must be copied or imported into the trust stores of the OMS and the Agents. The following procedures illustrate the process used to secure the 12c OMS and Agent when an SLB is configured with Third-Party/Custom SSL certificates.

Verifying the SSL Certificate used at the SLB

Perform the following steps to determine whether the SLB is using different certificates than the OMS:

  1. To check the certificate chain used by any URL, run the following command:

    <OMS_HOME>/bin/emctl secdiag openurl -url <HTTPS URL>

    To check the certificates used by the SLB URL, run the following command:

    <OMS_HOME>/bin/emctl secdiag openurl -url https://<SLB Hostname>:<HTTPS Upload port>/empbs/upload

    To check the certificates used by the OMS URL, run the following command:

    <OMS_HOME>/bin/emctl secdiag openurl -url https://<OMS Hostname>:<HTTPS Upload port>/empbs/upload

  2. If the default Enterprise Manager self-signed certificates are used in the SLB, the output of both commands will appear as follows:

    Issuer : CN=<OMS Hostname>, C=US, ST=CA, L=EnterpriseManager on <OMS Hostname>, OU=EnterpriseManager on <OMS Hostname>, O=EnterpriseManager on <OMS Hostname>

  3. If a custom or self-signed SSL certificate is used in the SLB, then the output of the command executed with the SLB name will show details such as the following:

    Issuer : CN=Entrust Certification Authority - L1C, OU="(c) 2014 Entrust, Inc.", OU=www.entrust.net/rpa is incorporated by reference, O="Entrust, Inc.", C=US

    In this example, the SLB is using the custom certificate (CN=Entrust Certification Authority - L1C, OU="(c) 2014 Entrust, Inc."), which needs to be imported as trusted certificate into the OMS.

  4. If OpenSSL is available on the OS, you can also check the value of CN by running the following command:

    $ openssl s_client -connect <HOSTNAME>:<PORT>
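    The two openssl steps (connect, then parse the presented certificate) can be combined to print only the issuer line, which makes comparing the SLB's certificate against the OMS's easier. A sketch, with the host:port argument as a placeholder:

```shell
# Print only the issuer of the certificate presented at host:port, so the
# issuer seen at the SLB can be compared with the one seen at the OMS.
issuer_of() {
  openssl s_client -connect "$1" </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer
}

# Example (placeholder hostnames):
# issuer_of slb.example.com:4900
# issuer_of omshost.example.com:4900
```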

Importing the SSL Certificate of the SLB to the Trust Store of the OMS and Agent

  1. Export the SLB certificate in base64 format to a text file named: customca.txt.

  2. Secure the OMS:

    cd <OMS_HOME>/bin

    ./emctl secure oms -host <SLB Host name> -secure_port <HTTPS Upload Port> -slb_port <SLB upload Port> -slb_console_port <SLB Console port> -console -trust_certs_loc <path to customca.txt>

    Note:

    All OMS instances behind the SLB must be secured using the emctl secure oms command.

    The CA certificate of the OMS is present in the <EM_INSTANCE_HOME>/em/EMGC_OMS1/sysman/config/b64LocalCertificate.txt file and needs to be copied to the SSL trust store of the SLB.

  3. Restart all the OMS instances:

    cd <OMS_HOME>/bin

    emctl stop oms -all

    emctl start oms

  4. Secure all the Agents pointing to this Enterprise Manager setup:

    cd <AGENT_HOME>/bin

    ./emctl secure agent -emdWalletSrcUrl <SLB Upload URL>
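    After resecuring, each agent's recorded upload URL should name the SLB rather than an individual OMS. A hypothetical check (the agent records its upload URL as REPOSITORY_URL in emd.properties; the path below follows the standard agent instance layout and may differ in your install):

```shell
# Show the upload URL a Management Agent is currently secured against.
check_upload_url() {
  grep '^REPOSITORY_URL=' "$1/sysman/config/emd.properties"
}

# Example (AGENT_INSTANCE_HOME is a placeholder for your agent instance home):
# check_upload_url "$AGENT_INSTANCE_HOME"
# A resecured agent should report the SLB, e.g.:
# REPOSITORY_URL=https://slb.example.com:4900/empbs/upload
```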