A Monitoring Prerequisites and Credentials

This appendix contains additional prerequisite and monitoring credential information for specific entities.

Host

Prerequisites

The operating system user used to install the Cloud Agent is also used as the host monitoring credential. Your hosts are automatically added as entities when a Cloud Agent is installed. However, hosts are not automatically monitored. To enable monitoring for host entities, see Download and Customize Oracle Infrastructure Monitoring JSONs.

Docker Engine / Docker Container

Docker Engine/Docker Container Configuration

You can configure a Docker Engine for monitoring in three ways:

Non-Secure Mode:

This mode doesn’t need any credential information. When the Docker Engine is configured in non-secure mode (http), you simply need the Base URL to connect to the Docker Engine.

For example, a Base URL could be: http://www.example.com:4243/. Note that the protocol is http, not https.

To check if your Docker Engine is configured in non-secure mode, view the /etc/sysconfig/docker file.  The following entries identify the Non-Secure Mode configuration:

# http - non-secure
other_args="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"
# set proxy
export HTTP_PROXY=<your proxy host>:80

You will need to provide the Docker Engine Base URL in the entity definition JSON file.

Secure Mode:

To check if your Docker Engine is configured in Secure Mode, view the /etc/sysconfig/docker file.  If configured for:
  • for 1–way SSL you will typically see an entry of the format:

    # https - secure 1-way SSL
    other_args="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock --tls --tlscert=/<certificate directory>/server-cert.pem --tlskey=/<certificate directory>/server-key.pem"
  • for 2–way SSL you will typically see an entry of the format:

    # https - secure 2-way SSL
    other_args="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock --tlsverify --tlscacert=/<certificate directory>/ca.pem --tlscert=/<certificate directory>/server-cert.pem --tlskey=/<certificate directory>/server-key.pem"

If your Docker Engine is configured in Secure Mode, then you configure the monitoring credentials based on the type of communication defined.

  • For Secure 1–way SSL you need to add the truststore certificate (CA certificate) in the cloud agent default truststore (<agent home>/sysman/config/montrust/AgentTrust.jks) using this command:

    keytool -import -alias docker01 -keystore <agent home>/sysman/config/montrust/AgentTrust.jks -file <directory of your Docker certificate>/<certificate_file_name>.cer 

    Use the password welcome. Note that <agent home> is the directory where the Cloud Agent was installed. See Managing Cloud Agents in Oracle® Cloud Deploying and Managing Oracle Management Cloud Agents.

    You will only need to provide the Docker Engine Base URL in the entity definition JSON file.

  • For Secure 2–way SSL you need to add the truststore certificate (CA certificate) and the keystore information in the agent default truststore (<agent home>/sysman/config/montrust/AgentTrust.jks).

    1. Add the truststore certificate:

      keytool -import -alias docker01 -keystore <agent home>/sysman/config/montrust/AgentTrust.jks -file <directory of your Docker certificate>/<certificate_file_name>.cer 

      Use the password welcome. Note that <agent home> is the directory where the Cloud Agent was installed.

    2. Add the keystore information:

      keytool -import -alias docker01 -keystore <agent home>/sysman/config/montrust/AgentTrust.jks -file <directory of your Docker certificate>/<certificate_file_name>.cer

      Use the password welcome.

      To add a Secure 2–way SSL Docker Engine entity you will need to create an entity definition JSON file along with a credentials JSON file. The entity definition JSON file will include your Docker Engine Base URL while the credentials file will have details about the credentials store and credentials.
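
    For orientation only, the two files follow the same general shape as the entity and credential JSON samples shown later in this appendix (for example, for Oracle Unified Directory). The entity type, property names, and credential type below are hypothetical placeholders, not the actual Docker Engine definitions:

    { "entities":[
          {
            "name":"MyDockerEngine",
            "type":"<Docker Engine entity type>",
            "credentialRefs":["DockerCreds"],
            "properties":{
                    "<Base URL property>":{"displayName":"Base URL","value":"https://www.example.com:4243/"}}
          }
    ]}

    {"credentials":[
          {
             "id":"DockerCreds","name":"Docker Engine Credentials","credType":"<credential type>",
             "properties":[{"name":"<truststore and keystore properties>", "value":"CLEAR[<value>]"}]
          }
       ]
    }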

For more information about how to create Docker certificates, see https://docs.docker.com/engine/security/https/.

Cloud Agent Configuration

If the cloud agent communicates with Oracle Management Cloud through a proxy (OMC_PROXYHOST & OMC_PROXYPORT parameters were set on the cloud agent when it was installed), Docker Engine / Docker Container discovery will fail. You’ll need to perform additional configuration steps depending on the following situations:

For a New Agent Installation

If the agent requires a proxy to communicate with Oracle Management Cloud, then use the gateway: set the proxy parameters (OMC_PROXYHOST & OMC_PROXYPORT) during gateway installation, and then set up the cloud agent (without proxy parameters) to point to the gateway.

For an Existing Agent

If the existing cloud agent has been set up to use the proxy to communicate with Oracle Management Cloud, to discover Docker Engine / Docker Container, execute the following commands on the cloud agent before performing entity discovery.

omcli setproperty agent -allow_new -name _configureProxyPerClient -value true 
omcli stop agent 
omcli start agent

XEN Virtual Platform / XEN Virtual Server

XEN Virtual Platform / XEN Virtual Server Prerequisites

To enable monitoring for XEN Virtual Platform / XEN Virtual Server, you need the root user or a user with SUDO privileges defined.

Oracle Database

Prerequisites

Setting Up Monitoring Credentials for Oracle Database

Before you can begin monitoring DB systems, you must have the necessary privileges. A SQL script (grantPrivileges.sql) is available to automate granting these privileges. This script must be run as the Oracle DB SYS user. In addition to granting privileges, the grantPrivileges.sql script can also be used to create new or update existing monitoring users with the necessary privileges. For information about this SQL script, location and usage instructions, see Creating the Oracle Database monitoring credentials for Oracle Management Cloud (Doc ID 2401597.1).
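
As an illustration only, running the script typically amounts to connecting to the database as the SYS user and executing it from SQL*Plus. The connect method and script location below are placeholders, and the exact arguments (for example, the monitoring user name) are described in the My Oracle Support note referenced above:

sqlplus / as sysdba
SQL> @/<path_to_script>/grantPrivileges.sql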

Enabling TCPS Connections

Database Side (Single Instance)
  1. Create the wallets.
    mkdir -p /scratch/aime/wallets/rwallets
    mkdir -p /scratch/aime/wallets/swallets
    mkdir -p /scratch/aime/wallets/cwallets
  2. To run the orapki commands go to the Oracle Home and run the following commands:
    cd $ORACLE_HOME/bin
    
    echo "**************** Create Root wallet *******************"
    
    ./orapki wallet create -wallet /scratch/aime/wallets/rwallets -auto_login -pwd oracle123
    
    ./orapki wallet add -wallet /scratch/aime/wallets/rwallets -dn "C=US,O=Oracle Corporation,CN=RootCA" -keysize 2048 -self_signed -validity 365 -pwd oracle123 -addext_ski -sign_alg sha256
    
    ./orapki wallet export -wallet /scratch/aime/wallets/rwallets -dn "C=US,O=Oracle Corporation,CN=RootCA" -cert /scratch/aime/wallets/rwallets/cert.pem
    
    ./orapki wallet display -wallet /scratch/aime/wallets/rwallets
    
    openssl x509 -noout -text -in /scratch/aime/wallets/rwallets/cert.pem
    
    echo "**************** Create server wallet *******************"
    
    ./orapki wallet create -wallet /scratch/aime/wallets/swallets -auto_login -pwd oracle123
    
    ./orapki wallet add -wallet /scratch/aime/wallets/swallets -trusted_cert -cert /scratch/aime/wallets/rwallets/cert.pem -pwd oracle123
    
    ./orapki wallet add -wallet /scratch/aime/wallets/swallets -dn "C=US,O=Oracle Corporation,CN=DBServer" -keysize 2048 -pwd oracle123 -addext_ski -sign_alg sha256
    
    ./orapki wallet export -wallet /scratch/aime/wallets/swallets -dn "C=US,O=Oracle Corporation,CN=DBServer" -request /scratch/aime/wallets/swallets/csr.pem
    
    ./orapki cert create -wallet /scratch/aime/wallets/rwallets -request /scratch/aime/wallets/swallets/csr.pem -cert /scratch/aime/wallets/swallets/cert.pem -validity 365 -sign_alg sha256 -serial_num $(date +%s%3N)
    
    ./orapki wallet add -wallet /scratch/aime/wallets/swallets -user_cert -cert /scratch/aime/wallets/swallets/cert.pem -pwd oracle123
    
    openssl x509 -noout -text -in /scratch/aime/wallets/swallets/cert.pem
    
    ./orapki wallet display -wallet /scratch/aime/wallets/swallets
     
    echo "**************** Create client wallet *******************"
     
    ./orapki wallet create -wallet /scratch/aime/wallets/cwallets -auto_login -pwd oracle123
    
    ./orapki wallet add -wallet /scratch/aime/wallets/cwallets -trusted_cert -cert /scratch/aime/wallets/rwallets/cert.pem -pwd oracle123
    
    ./orapki wallet add -wallet /scratch/aime/wallets/cwallets -dn "C=US,O=Oracle Corporation,CN=DBClient" -keysize 2048 -pwd oracle123 -addext_ski  -sign_alg sha256
    
    ./orapki wallet export -wallet /scratch/aime/wallets/cwallets -dn "C=US,O=Oracle Corporation,CN=DBClient" -request /scratch/aime/wallets/cwallets/csr.pem
    
    ./orapki cert create -wallet /scratch/aime/wallets/rwallets -request /scratch/aime/wallets/cwallets/csr.pem -cert /scratch/aime/wallets/cwallets/cert.pem -validity 365  -sign_alg sha256 -serial_num $(date +%s%3N)
    
    ./orapki wallet add -wallet /scratch/aime/wallets/cwallets -user_cert -cert /scratch/aime/wallets/cwallets/cert.pem -pwd oracle123
    
    openssl x509 -noout -text -in /scratch/aime/wallets/cwallets/cert.pem
    
    ./orapki wallet display -wallet /scratch/aime/wallets/cwallets
  3. Change the mode of ewallet.p12.
    chmod 666 /scratch/aime/wallets/swallets/ewallet.p12
    chmod 666 /scratch/aime/wallets/cwallets/ewallet.p12

Listener Changes

Running SI on TCPS (Single Instance)

  1. Create the Oracle Home.

  2. Create a listener using the TCP protocol (for example, a listener named LIST).

  3. Create a DB in the Oracle Home using the Listener created in Step 2. The Database and Listener might already be present.

  4. Shut down the database instance.

  5. Stop the Listener.
    ./lsnrctl stop LIST
  6. Perform the following procedure.
     Set the environment variables
    
    export WALLET_LOCATION=/net/slc05puy/scratch/dbwallets
    
    The wallet is already created and stored here. Make sure the wallet location is accessible from the current host. 
    
    export ORACLE_HOME=/scratch/aimedb/12.1.0/12.1.0.2/dbhome_1
    export ORACLE_SID=solsi
    
    Back up the listener.ora, sqlnet.ora and tnsnames.ora files.
     
    cp $ORACLE_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora.bckp
    cp $ORACLE_HOME/network/admin/sqlnet.ora $ORACLE_HOME/network/admin/sqlnet.ora.bckp
    cp $ORACLE_HOME/network/admin/tnsnames.ora $ORACLE_HOME/network/admin/tnsnames.ora.bckp
    
    If sqlnet.ora is not present, create it. 
    touch $ORACLE_HOME/network/admin/sqlnet.ora
  7. Modify the .ora files.
    listener.ora
    
    Replace all occurrences of 'TCP' with 'TCPS':
    
    sed -i 's/TCP/TCPS/' $ORACLE_HOME/network/admin/listener.ora
    
    Replace the old listener port number (43434 in this example) with '2484':
    
    sed -i 's/43434/2484/' $ORACLE_HOME/network/admin/listener.ora
    
    Before executing the above shell commands, make sure that no string other than the protocol contains "TCP". The same applies to the old listener port number.
     
    echo "SSL_CLIENT_AUTHENTICATION = TRUE" >> $ORACLE_HOME/network/admin/listener.ora;
    
    echo "WALLET_LOCATION =(SOURCE =(METHOD = FILE)(METHOD_DATA =(DIRECTORY = $WALLET_LOCATION/swallets)))" >> $ORACLE_HOME/network/admin/listener.ora;
    
    echo "SSL_VERSION = 1.2" >> $ORACLE_HOME/network/admin/listener.ora;  ** Only if TLS version has to be 1.2
     
    [SSL_VERSION = 1.2 or 1.1 or 1.0]
    
    sqlnet.ora
    
    echo "SQLNET.AUTHENTICATION_SERVICES= (BEQ, TCPS)" >> $ORACLE_HOME/network/admin/sqlnet.ora;
    echo "SSL_CLIENT_AUTHENTICATION = TRUE" >> $ORACLE_HOME/network/admin/sqlnet.ora;
    echo "WALLET_LOCATION =(SOURCE =(METHOD = FILE)(METHOD_DATA =(DIRECTORY = $WALLET_LOCATION/swallets)))" >> $ORACLE_HOME/network/admin/sqlnet.ora;
    echo "SSL_VERSION = 1.2" >> $ORACLE_HOME/network/admin/sqlnet.ora;  
    ** Only if TLS version has to be 1.2
  8. Start the listener (./lsnrctl start LIST)

  9. Start the database instance.

  10. Run ./lsnrctl status <listener name> and check that the listener is running on TCPS with port 2484 and is associated with the database. The following example output is for a listener named LTLS.
    ./lsnrctl status LTLS
    LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 06-APR-2016 13:03:54
    Copyright (c) 1991, 2014, Oracle.  All rights reserved.
    Connecting to
    (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=myhost.myco.com)(PORT=2484)))
    
    STATUS of the LISTENER
    ------------------------
    Alias                     LTLS
    Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
    Start Date                06-APR-2016 10:41:33
    Uptime                    0 days 2 hr. 22 min. 21 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    
    Listener Parameter File
    /scratch/12102tls12/product/dbhome_1/network/admin/listener.ora
    
    Listener Log File      
    /scratch/12102tls12/diag/tnslsnr/myhost/ltls/alert/log.xml
    
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=myhost.myco.com)(PORT=2484)))
    
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC2484)))
    
    Services Summary...
    Service "sitls" has 1 instance(s).
      Instance "sitls", status READY, has 1 handler(s) for this service...
    Service "sitlsXDB" has 1 instance(s).
      Instance "sitls", status READY, has 1 handler(s) for this service...
    The command completed successfully.
    You can see in the example that the database is now associated with the listener. If it is not, check whether the database local_listener parameter is set to the listener’s connect descriptor.

    alter system set local_listener='<CONNECT DESCRIPTOR FOR NEW LISTENER PORT>';

    Example: alter system set local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=strka31.myco.com)(PORT=2484)))';

    Once done, bounce the database instance. If the database still does not become associated with the listener even though the listener is up and running without issue, go to the Oracle Home and create a new database from it using DBCA. DBCA will prompt you to use the listener you just secured, which is up and running on the TCPS protocol.

TCPS Credentials

In order to establish secure communication with the Oracle Database, you must add the following TCPS Database Credential Properties to the credential JSON file when adding the Oracle Database entity (a sample sketch follows the list).

  • connectionTrustStoreLocation: Your server/trust Key Store Location. This property is used to specify the location of the trust store. A trust store is a key store that is used when making decisions about which clients and servers can be trusted. The property takes a String value that specifies a valid trust store location.

  • connectionTrustStoreType: Your server/trust Key Store Type. This property denotes the type of the trust store. It takes a String value. Any valid trust store type supported by SSL can be assigned to this property.

  • connectionTrustStorePassword: Your server/trust Key Store Password. This property is used to set the password for the trust store. The trust store password is used to check the integrity of the data in the trust store before accessing it. The property takes a String value.

  • connectionKeyStoreLocation: Your client Key Store Location. This property is used to specify the location of the key store. A key store is a database of key material that is used for various purposes, including authentication and data integrity. This property takes a String value.

  • connectionKeyStoreType: Your client Key Store Type. This property denotes the type of the key store. It takes a String value. Any valid key store type supported by SSL can be assigned to this property.

  • connectionKeyStorePassword: Your client Key Store Password. This property specifies the password of the key store. This password value is used to check the integrity of the data in the key store before accessing it. This property takes a String value.
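
The following sketch shows how these properties might be laid out in the credentials JSON file, following the credential format used elsewhere in this appendix; the id, name, and credType values are placeholders, and the wallet paths and password reuse the Agent Properties example below:

{"credentials":[
      {
         "id":"TcpsCreds","name":"TCPS Credentials","credType":"<credential type>",
         "properties":[{"name":"connectionTrustStoreLocation", "value":"CLEAR[/scratch/aime/wallets/swallets/ewallet.p12]"},
                       {"name":"connectionTrustStoreType", "value":"CLEAR[sha256]"},
                       {"name":"connectionTrustStorePassword", "value":"CLEAR[oracle123]"},
                       {"name":"connectionKeyStoreLocation", "value":"CLEAR[/scratch/aime/wallets/cwallets/ewallet.p12]"},
                       {"name":"connectionKeyStoreType", "value":"CLEAR[sha256]"},
                       {"name":"connectionKeyStorePassword", "value":"CLEAR[oracle123]"}]
      }
   ]
}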

Agent Properties
Client authority

./omcli setproperty agent -name connectionKeyStoreLocation -value /scratch/aime/wallets/cwallets/ewallet.p12
./omcli setproperty agent -name connectionKeyStoreType -value sha256
./omcli setproperty agent -name connectionKeyStorePassword -value oracle123

Server authority

./omcli setproperty agent -name connectionTrustStoreLocation -value /scratch/aime/wallets/swallets/ewallet.p12
./omcli setproperty agent -name connectionTrustStorePassword -value oracle123
./omcli setproperty agent -name connectionTrustStoreType -value sha256

Once set, bounce the Agent.
./omcli stop agent
./omcli start agent

Note:

Make sure that the above wallet is accessible at the agent location.

AWS-RDS Oracle DB

Prerequisites

See Monitor AWS - RDS Oracle DB.

Oracle Automatic Storage Management (ASM)

Credentials

Monitoring of ASM is supported through credential-based monitoring. For simplicity, use the default asmsnmp user for the ASM monitoring credentials, or any user with both SYSASM and SYSDBA roles.

Note:

For monitoring ASM, the agent should be version 1.47 or above.

Oracle NoSQL

Credentials

Monitoring of Oracle NoSQL is supported only through credential-less JMX (no credentials JSON file is needed).

MySQL Database

Prerequisites
To enable monitoring for a MySQL Database, you can create a special database user, for example, moncs, as follows:
  1. Create a user:

    CREATE USER 'moncs'@'hostname' IDENTIFIED BY 'password';
  2. Grant appropriate privileges:

    GRANT SELECT, SHOW DATABASES ON *.* TO 'moncs'@'hostname' IDENTIFIED BY 'password';
    GRANT SELECT, SHOW DATABASES ON *.* TO 'moncs'@'%' IDENTIFIED BY 'password';
  3. Flush privileges.
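
For example, step 3 is the standard MySQL statement:

    FLUSH PRIVILEGES;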

Microsoft SQL Server

Prerequisites

To enable monitoring for a Microsoft SQL Server Database, you can create a special database user as follows.

Create a user  (for example, moncs) and map the new user to the master and msdb databases. Then, give this user the following minimum privileges.

Note:

Beginning with Oracle Management Cloud 1.31, sqladmin-related privileges are no longer required.
CREATE LOGIN moncs
WITH PASSWORD = 'moncs';
GO
CREATE USER moncs FOR LOGIN moncs;
GO

Then, map the user moncs:

  1. From the Security menu, select Logins, and then select moncs.

  2. Right-click on moncs and select Properties.

  3. Select User Mapping.

  4. Map to all system and user databases:

USE master;
GRANT VIEW ANY DATABASE TO moncs;
GRANT VIEW ANY DEFINITION TO moncs;
GRANT VIEW SERVER STATE TO moncs;
GRANT SELECT ON [sys].[sysaltfiles] TO [moncs];
GRANT EXECUTE ON sp_helplogins TO moncs;
GRANT EXECUTE ON sp_readErrorLog TO moncs;
GRANT EXECUTE ON dbo.xp_regread TO moncs;

USE msdb;
GRANT SELECT ON dbo.sysjobsteps TO moncs;
GRANT SELECT ON dbo.sysjobs TO moncs;
GRANT SELECT ON dbo.sysjobhistory TO moncs;

For connecting to SQL server database with SSL encryption, do the following:

  1. Ensure the SQL server installation has the required updates for TLS 1.2 support as described in the following document.

    https://support.microsoft.com/en-in/help/3135244/tls-1-2-support-for-microsoft-sql-server

  2. Create a server certificate for the SQL server host.

    Set up the certificate as mentioned in the section “Install a certificate on a server with Microsoft Management Console (MMC)” in the following document: https://support.microsoft.com/en-in/help/316898/how-to-enable-ssl-encryption-for-an-instance-of-sql-server-by-using-mi

  3. Install the server certificate for the SQL server instance.

    Set up the SQL server instance to use the server certificate created above, as mentioned in the section “To install a certificate for a single SQL Server instance” in the following document: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/manage-certificates?view=sql-server-2016

  4. Export the root certification authority’s certificate that signed the SQL server host certificate to a file, and copy this file to the cloud agent host.

    Export the certificate as described in section “Enable encryption for a specific client” in the following document: https://support.microsoft.com/en-in/help/316898/how-to-enable-ssl-encryption-for-an-instance-of-sql-server-by-using-mi

  5. Create a trust store on the cloud agent host, and import the root certification authority’s certificate exported above.
    keytool -import -file .\ca_cert.cer -alias mytrust -keystore .\trustStore.jks -storetype jks
  6. Form the connection URL pointing to the trust store.
    jdbc:sqlserver://xxx.xxx.com:1433;encrypt=true;trustServerCertificate=false;trustStore=C:\trustStore.jks;trustStorePassword=xxxx;

MongoDB Database

Prerequisites

To enable monitoring for a MongoDB Database, you can create a special database user, for example, omc_monitor as follows:

  1. Connect to your database:

    use <your MongoDB database name>
  2. Create user:

    db.createUser(
      {
        user: "omc_monitor",
        pwd: "mongo123",
        roles: [ "read" ]
      }
    )

Oracle WebLogic Server (includes WebLogic Domain and WebLogic Cluster)

Prerequisites

To enable monitoring of an Oracle WebLogic Server (WLS), use a WebLogic user with at least the Monitor role. The user can also have the Operator or Administrator roles, which include the Monitor role.

If you have enabled the Oracle WebLogic Server with SSL, you must export the certificate from its keystore and import it into the Cloud Agent keystore. Perform the following steps:
  1. Stop the Cloud Agent.

    omcli stop agent

  2. Export the certificate from the WLS instance JMX SSL keystore to a file. For example, on a UNIX host:

    cd <agent base Directory>/agentStateDir/sysman/config/montrust

    keytool -exportcert -alias <alias of WLS SSL key> -file <Exported Cert Name>.cer -keystore <path to the WLS SSL Keystore>.keystore -storepass <WLS SSL Keystore password> -rfc

  3. Import the exported certificate into the Cloud Agent's truststore:

    keytool -import -noprompt -alias <alias agent's truststore key> -file <Exported Cert Name>.cer -keystore AgentTrust.jks -storepass <Agent truststore password, default is "welcome">

  4. Restart the Cloud Agent.

    omcli start agent

Oracle Service Bus

Prerequisites

Important: Before you can monitor Oracle Service Bus (OSB) entities in Oracle Management Cloud, you must first enable monitoring from the Oracle Service Bus Administration console.


For information, see What are Operational Settings for a Service?.

Once monitoring has been enabled from the Oracle Service Bus Administration console, you can add OSB entities to Oracle Management Cloud. When specifying an OSB entity, you use credentials of a user with at least the Monitor role. The user can also have either the Operator or Admin role.

Tomcat

Prerequisites and Credentials

Tomcat is monitored using JMX. You must configure Tomcat for JMX remote monitoring even if you are using a local agent.

Tomcat can be monitored with or without authentication. If a JMX credential is created, then it’s assumed you’re monitoring this entity with credentials.

To create a JMX credential for monitoring:

  1. Edit the environment file:

    vi $CATALINA_HOME/bin/setenv.sh

    Add:

    CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=../conf/jmxremote.password -Dcom.sun.management.jmxremote.access.file=../conf/jmxremote.access"
  2. Save the file.

  3. Change the file permission as executable:

    chmod 755 $CATALINA_HOME/bin/setenv.sh
  4. Edit the password file:

    vi $CATALINA_HOME/conf/jmxremote.password 

    Add:

    control tomcat
    admin tomcat
  5. Edit the access file:

    vi $CATALINA_HOME/conf/jmxremote.access 

    Add:

    control readonly
    admin readwrite
  6. Change the file permissions so that only the owner has access:

    chmod 600 jmxremote.access
    chmod 600 jmxremote.password
  7. Bounce the Tomcat instance:

    sh $CATALINA_HOME/bin/shutdown.sh
    sh $CATALINA_HOME/bin/startup.sh
If you have enabled Tomcat JMX with SSL, you must export the certificate from its keystore and import it into the Cloud Agent keystore. Perform the following steps:
  1. Export the certificate from the Tomcat instance JMX SSL keystore to a file. For example, on a UNIX host:

    cd <agent Base Directory>/agentStateDir/sysman/config/montrust

    keytool -exportcert -alias <alias of Tomcat JMX SSL key> -file <Exported Cert Name>.cer -keystore <path to the Tomcat JMX SSL Keystore>.keystore -storepass <Tomcat JMX SSL Keystore password> -rfc

  2. Import the exported certificate into the Cloud Agent's truststore:

    keytool -import -noprompt -alias <alias agent's truststore key> -file <Exported Cert Name>.cer -keystore AgentTrust.jks -storepass <agent truststore password, default is "welcome">

  3. Restart the agent, using the command line interface:
    omcli stop agent
    omcli start agent

Oracle Traffic Director (OTD)

Prerequisites

OTD 11

Use an OTD Administrator user.

In addition, to enable collection of metrics, you must configure and start an SNMP subagent. To start the SNMP subagent, use the OTD Admin Console or use the following command:
tadm start-snmp-subagent
--host=<otd_host>
--port=<otd port>
--user=<otd user>
--password-file=<password_file>

For more information on configuring and starting an SNMP subagent, see the Oracle Traffic Director documentation.

OTD 12

Use a WebLogic Server user with the Monitor role. The user can also have Operator or Admin roles, which include the Monitor role.

Apache HTTP Server

Apache HTTP Server Prerequisites

In this release, only Apache HTTP Server 2.4.x and 2.2 for Linux are supported.

To enable the collection of configuration metrics, note the following:
  1. The Cloud Agent should be installed on the same host as Apache HTTP Server. The Apache *.conf file(s), including the httpd.conf file, should be accessible and readable by the Cloud Agent install user.

  2. The Apache install user and the Cloud Agent install user should be a part of the same operating system group.

In order to monitor an Apache HTTP Server, you must first:

  • Enable the Apache ‘mod_status’ module.

  • Configure the /server-status location directive for the specified host and port (default or configured virtual host).

  • Turn the Extended Status ‘ON’.

  • If applicable, provide access to the configured location directive so that HTTP/HTTPS requests can be made successfully from the host where the agent is installed. (A sample configuration sketch is shown after the links below.)

For more information, see https://httpd.apache.org/docs/2.4/mod/mod_status.html and http://httpd.apache.org/docs/current/mod/core.html#location.
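
As an illustration only, the items above typically translate into an httpd.conf fragment along the following lines (Apache 2.4 syntax; the module path and the allowed client address are assumptions, so adjust them to your installation and to the host where the Cloud Agent runs):

  # Load mod_status if it is not already loaded (path varies by distribution)
  LoadModule status_module modules/mod_status.so

  # Collect extended status metrics
  ExtendedStatus On

  # Expose the status page only to the Cloud Agent host (example address)
  <Location "/server-status">
      SetHandler server-status
      Require ip 192.0.2.10
  </Location>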

For HTTPS/Secure communication between Apache HTTP Server and the cloud agent during metrics collection, you must provide an SSL certificate. To make the certificate available with the cloud agent:
  1. Append the contents of your certificate file to the existing certificate file. For example, on a UNIX host the existing certificate file is: <AGENT_BASE_DIR>/sysman/config/b64InternetCertificate.txt

    Ensure that only the following lines are appended to the b64InternetCertificate.txt file. Do not include blank lines, comments, or any other special characters.

    -----BEGIN CERTIFICATE-----
    <<<Certificate in Base64 format>>>
    -----END CERTIFICATE-----
  2. Restart the agent by running the following commands from the agent installation directory (for example, on a UNIX host, this directory is <AGENT_BASE_DIR>/agent_inst/bin).
    a) ./omcli stop agent
    b) ./omcli start agent
    

For data retrieval of memory-related metrics (supported on Unix platforms and when an entity is locally monitored), the PID file (httpd.pid) must be accessible.

If Apache is running as root or some user other than the agent process owner, access to the PID file will fail. Hence, to allow access to httpd.pid, you need to ensure that the file can be accessed without compromising Linux security. There are several ways to achieve this. One option is as follows:

As a privileged user, run the following commands:
setfacl -R -L -d -m u:<agent_user>:rx /etc/httpd/run
setfacl -R -L -m u:<agent_user>:rx /etc/httpd/run

where /etc/httpd/run is the directory containing the PID file.

Oracle HTTP Server (OHS)

Prerequisites

OHS 12: Node Manager credentials are required.

Also, the following prerequisites must be met:
  • Cred-less: OHS discovery when the standalone OHS process owner and the agent process owner are the same user. No credential file needs to be provided when running omcli add_entity during discovery.

  • Cred-based: OHS discovery when the standalone OHS process owner and the agent process owner are different users.

    Note:

    Cred-less and cred-based discovery are applicable for standalone OHS 11. For OHS 12, only cred-based discovery is supported.

  • For HTTPS/secured communication between OHS and the Cloud agent (for metric data collection), the required certificate must be available with the agent in order for the SSL handshake to be successful. To make the certificate available with the agent:
    • Append the contents of your certificate file to the file <AGENT_BASE_DIR>/sysman/config/b64InternetCertificate.txt.

    • Ensure that only the following lines are appended to the b64InternetCertificate.txt file (that is, do not include blank lines, comments, or any other special characters):
      -----BEGIN CERTIFICATE-----
      <<<Certificate in Base64 format>>>
      -----END CERTIFICATE-----
    • Restart the agent by running the following commands:
      omcli stop agent; omcli start agent

Arista Ethernet Switch

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string (which was entered during the Arista Switch configuration) along with the IP address of the agent that will be used for Arista Switch monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used. In addition, you must supply the privilege method (only DES is supported) and privilege password if privilege is used. Everything needs to be manually configured up front in the Arista Switch.

Read-only access is all that’s required for Arista Switch monitoring.

Cisco Ethernet (Catalyst) Switch

Prerequisites

To enable monitoring of the Cisco Ethernet (Catalyst) Switch, you will need to provide the SNMPv1/v2 or SNMPv3 credentials in the JSON credential file. Read-only access is sufficient for Cisco Catalyst Switch monitoring. For more information on how to configure an SNMP user for a Cisco Catalyst Switch, see http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960/software/release/12-2_55_se/configuration/guide/scg_2960/swsnmp.html#78160

Cisco Nexus Ethernet Switch

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string (which was entered during Cisco Nexus Ethernet Switch configuration) along with the IP address of the agent that will be used for Cisco Nexus Ethernet Switch monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used. In addition, the privilege method (only DES is supported) and privilege password must be supplied if privilege is used. Everything needs to be manually configured up front in the Cisco Nexus Ethernet Switch.

Read-only access is sufficient for Cisco Nexus Ethernet Switch monitoring.

Oracle Power Distribution Unit (PDU)

Prerequisites

To enable monitoring, HTTP and SNMPv1/v2c/v3 are needed. The NMS and trap tables in the PDU administration interface must be set up for proper SNMP monitoring. For more information, see the PDU vendor documentation.

Juniper Ethernet Switch

Prerequisites

To enable monitoring, HTTP and SNMPv1/v2c/v3 are needed.

If SNMPv1/v2 is used, you must provide the SNMP community string that was used earlier in the Juniper Switch configuration, along with the IP address of the agent that will be used for Juniper Switch monitoring.

If SNMPv3 is used, in addition to the SNMPv3 user, you must provide the authentication method (SHA or MD5) and authentication password if authentication is used, and the privilege method (only DES is supported) and privilege password if privilege is used. You must configure everything manually in the Juniper Switch. Read-only access is sufficient for Juniper Switch monitoring.

Oracle Infiniband Switch

Prerequisites

To enable monitoring, HTTP and SNMPv1/v2c/v3 are needed.

If SNMPv1/v2 is used, you must provide the SNMP community string that was used earlier in the IB Switch configuration, along with the IP address of the agent that will be used for IB Switch monitoring.

If SNMPv3 is used, in addition to the SNMPv3 user, you must provide the authentication method (SHA or MD5) and authentication password if authentication is used, and the privilege method (only DES is supported) and privilege password if privilege is used. You must configure everything manually in the IB Switch. Read-only access is sufficient for IB Switch monitoring.

Brocade Fibre Channel Switch

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string (entered during Brocade Fibre Channel Switch configuration), along with the IP address of the agent that will be used for Brocade Fibre Channel Switch monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password (if authentication is used), plus the privilege method (only DES is supported) and privilege password if a privilege method is used. All of this needs to be manually configured up front in the Brocade Fibre Channel Switch.

Read-only access is enough for Brocade Fibre Channel Switch monitoring.

SCOM (System Center Operations Manager)

Prerequisites

Credentials must follow the same criteria as any program which tries to obtain data from SCOM using the SCOM SDK. See How to Connect an Operations Manager SDK Client to the System Center Data Access Service.

... The account that is used for authentication must be included in an Operations Manager user-role profile ...
The OMC Cloud Agent uses the omc_scom.exe client to connect to the SCOM SDK. The Cloud agent does not bundle the required SCOM SDK libraries (due to the license type of the libraries). You must manually copy the following SCOM SDK libraries to the machine where the agent is running:
C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SDK Binaries\Microsoft.EnterpriseManagement.Runtime.dll 
C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SDK Binaries\Microsoft.EnterpriseManagement.OperationsManager.dll 
C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SDK Binaries\Microsoft.EnterpriseManagement.Core.dll

Juniper SRX Firewall

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must supply the SNMP community string (which was entered during Juniper SRX Firewall configuration) along with the IP address of the agent that will be used to monitor the Juniper SRX Firewall.

If SNMPv3 is used, you must supply the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password, if authentication is used. In addition, the privilege method (only DES is supported) and privilege password will be required, if privileges are used. Everything must be manually configured up front in the Juniper SRX Firewall.

Read-only access is sufficient for Juniper SRX Firewall monitoring.

Fujitsu Server

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during XSCF configuration along with the IP address of the agent that will be used to monitor the Fujitsu Server.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used. You must also provide the privilege method (only DES is supported) and privilege password if privileges are used. All of this must be manually configured up front in the Fujitsu server service processor.

Read-only access is adequate for the monitoring.

For more information on how to configure SNMP users in Fujitsu M10 servers, see http://www.fujitsu.com/downloads/SPARCS/manuals/en/c120-e684-11en.pdf

Intel/SPARC Computers

Credentials

Only the username and password are required to use SSH to log in to the ILOM service processor.

VMware vCenter

Prerequisites

In order for the Cloud Agent to be able to collect all the metrics for the Oracle Management Cloud VMware entities, you should:

  1. Install VMware tools on the VM host.

  2. Set the statistics level to one (1).

Credentials: username/password required to access VMware vCenter (use Administrator role).

Example:

username=Administrator@vsphere.local / password=<admin_pw>

Certificates:

You need to explicitly add the vCenter certificate to the Agent's JKS:

Example:

<jdk>/bin/keytool -importcert -file <vmware-vsphere-certificate> -alias vmware -keystore $T_WORK/agentStateDir/sysman/config/montrust/AgentTrust.jks -storepass welcome

To extract the certificate from vCenter:

openssl s_client -showcerts -connect <hostname>:443
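
To save the presented certificate to a file for the keytool import above, one common approach (the same sed filter used in the Kubernetes section later in this appendix; the output file name is an example) is:

echo -n | openssl s_client -connect <hostname>:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > vcenter.pem

The resulting file can then be supplied as <vmware-vsphere-certificate> in the keytool command shown above.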

Discovery properties:

How to retrieve VMware vCenter Server Instance UUID to be passed in at discovery time through the entity property omc_virtual_mgmt_system_id using VMware PowerCLI:

Example:
PS C:\> $vcenter = Connect-viserver vcsa-01a.corp.local -User Administrator@vsphere.local -Password admin_pw
PS C:\> $vcenter.InstanceUuid
d322b019-58d4-4d6f-9f8b-d28695a716c0

Docker Swarm

Prerequisites and Credentials

Cloud Agent Configuration

If the cloud agent communicates with Oracle Management Cloud through a proxy (OMC_PROXYHOST & OMC_PROXYPORT parameters were set on the cloud agent when it was installed), Docker Swarm discovery will fail. You’ll need to perform additional configuration steps depending on the following situations:

For a New Agent Installation

If the agent requires a proxy to communicate with Oracle Management Cloud, then use the gateway: set the proxy parameters (OMC_PROXYHOST & OMC_PROXYPORT) during gateway installation, and then set up the cloud agent (without proxy parameters) to point to the gateway.

For an Existing Agent

If the existing cloud agent has been set up to use the proxy to communicate with Oracle Management Cloud, to discover Docker Swarm, execute the following commands on the cloud agent before performing entity discovery.

omcli setproperty agent -allow_new -name _configureProxyPerClient -value true 
omcli stop agent 
omcli start agent

Credentials

There are three methods you can use to authenticate and connect to the Docker Swarm via REST APIs:

1) Non-secure

2) Secure (https): 1–way SSL mode

3) Secure (https): 2–way SSL mode

Apache SOLR

Prerequisites

Two modes are supported: standalone and SolrCloud.

Monitoring is done over REST APIs exposed by Apache SOLR.

Monitoring credentials require read access to the following URIs (a quick access check is sketched after the list):

  • /admin/collections?action=clusterstatus

  • /admin/collections?action=overseerstatus

  • /admin/info/system

  • /admin/info/threads

  • /admin/cores

  • /<core_name>/admin/mbeans
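
As a quick check that the monitoring user can read these endpoints, you can issue a request against one of the URIs listed above. This is a sketch only: the host, the default SOLR port 8983, the /solr context path, and the credentials are assumptions, and the -u option applies only to Basic Authentication setups.

  curl -u <monitoring user>:<password> "http://<solr_host>:8983/solr/admin/info/system?wt=json"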

Credentials

  1. Without Credentials:

    1. non-secure (http)

    2. secure (https)

  2. With Credentials:

    1. Client Authentication - (2–way SSL)

    2. Basic Authentication - non secure

    3. Basic Authentication - secure

    4. Basic Authentication with Client authentication

Hadoop Cluster

Prerequisites

By default, Hadoop runs in non-secure mode in which no actual authentication is required.

By configuring Hadoop to run in secure mode, each user and service needs to be authenticated by Kerberos in order to use Hadoop services.

To perform Kerberos authentication, the Cloud Agent requires the following:

  1. krb5.conf file. This file can be found at /etc/krb5.conf

  2. Username and password

The Cloud Agent can use only one krb5.conf at a time. If a single Agent needs to perform Kerberos authentication with more than one domain, these details should be defined in a single krb5.conf file.
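
As an illustration of the last point, a single krb5.conf can define more than one realm. The realm names, KDC hosts, and domain mappings below are placeholders only:

  [libdefaults]
      default_realm = EXAMPLE.COM

  [realms]
      EXAMPLE.COM = {
          kdc = kdc1.example.com
          admin_server = kdc1.example.com
      }
      CORP.EXAMPLE.COM = {
          kdc = kdc.corp.example.com
          admin_server = kdc.corp.example.com
      }

  [domain_realm]
      .example.com = EXAMPLE.COM
      .corp.example.com = CORP.EXAMPLE.COM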

Arbor TMS/CP

Prerequisites

SNMP v1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during Arbor appliance configuration along with the IP address of the Cloud Agent that will be used for appliance monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the appliance.

Read-only access is adequate for Arbor appliance monitoring.

Juniper Netscreen Firewall

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during firewall configuration along with the IP address of the Cloud Agent that will be used for Juniper firewall monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the firewall.

Read-only access is adequate for Juniper firewall monitoring.

Juniper MX Router

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during router configuration along with the IP address of the Cloud Agent which will be used for router monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the router.

Read-only access is adequate for MX router monitoring.

F5 BIG-IP LTM 

Credentials

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during F5 BIG-IP LTM configuration along with the IP address of the Cloud Agent that will be used for the LTM monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the LTM.

Read-only access is adequate for LTM monitoring.

F5 BIG-IP DNS 

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during F5 BIG-IP DNS configuration along with the IP address of the Cloud Agent that will be used for the DNS monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the DNS.

Read-only access is adequate for DNS monitoring.

ES2 Ethernet Switch

Prerequisites

SNMPv1/v2 or SNMPv3 credentials are needed for monitoring.

If SNMPv1/v2 is used, you must provide the SNMP community string that was entered during ES2 configuration along with the IP address of the Cloud Agent that will be used for appliance monitoring.

If SNMPv3 is used, you must provide the SNMPv3 user, plus the authentication method (SHA or MD5) and authentication password if authentication is used, plus the privilege method (only DES is supported) and privilege password if privilege is used. All of this needs to be manually configured beforehand in the appliance.

Read-only access is adequate for the ES2 monitoring.

Oracle Flash Storage

Prerequisites and Credentials

Oracle Flash Storage exposes monitoring data through the REST API.

Oracle Flash Storage credentials (username and password) are required to monitor Oracle Flash Storage.

Apache Cassandra DB

Prerequisites
The default settings for Cassandra make JMX accessible only from the local host. If you want to enable remote JMX connections, change the LOCAL_JMX setting in cassandra-env.sh and enable authentication and/or SSL. To do this, perform the following procedure:
  1. Open the cassandra-env.sh file for editing and update or add these lines:
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
    
    If the LOCAL_JMX setting is in your file, set it to no:
    LOCAL_JMX=no
  2. Depending on whether the JDK or JRE is installed:

    • If the JDK is installed, copy jmxremote.password.template from /jdk_install_location/jre/lib/management/ to /etc/cassandra/ and rename it to jmxremote.password:

      $ cp /jdk_install_location/jre/lib/management/jmxremote.password.template /etc/cassandra/jmxremote.password

    • If the JRE is installed, copy jmxremote.password.template from /jre_install_location/lib/management/ to /etc/cassandra/ and rename it to jmxremote.password:

      $ cp /jre_install_location/lib/management/jmxremote.password.template /etc/cassandra/jmxremote.password

  3. Change the ownership of the jmxremote.password to the user you use to run Cassandra and change permission to read-only:
    $ chown cassandra:cassandra /etc/cassandra/jmxremote.password
    $ chmod 400 /etc/cassandra/jmxremote.password
  4. Edit jmxremote.password and add the user and password for JMX-compliant utilities:
    monitorRole QED
    controlRole R&D
    cassandra cassandrapassword

    Note:

    The Cassandra user and Cassandra password shown in the above sample are examples. Specify the user and password for your environment.
  5. Add the Cassandra user with read and write permission to /jre_install_location/lib/management/jmxremote.access
    monitorRole readonly
    cassandra readwrite
    controlRole readwrite \
    create javax.management.monitor.,javax.management.timer. \
    unregister
  6. Restart Cassandra.

Oracle VM Server for SPARC (LDoms)

Prerequisites
  • The OMC Cloud Agent is deployed on the LDoms Control Domain.

  • Discovery does not require any user credentials, but you need to grant the solaris.ldoms.read RBAC authorization to the OMC Cloud Agent user (oracle in the following example):
    /usr/sbin/usermod -A solaris.ldoms.read oracle
  • Discovery properties:

    • The following command retrieves the LDoms Control Domain UUID to be supplied at discovery time through entity identifying property omc_virtual_platform_id using virtinfo:
      # virtinfo -ap | grep DOMAINUUID
      DOMAINUUID|uuid=280c9ff4-a134-48cd-cee9-a270b2aaefa0
  • Autodiscovery of LDoms-related entities:

    • Use a JSON file with details to discover the Oracle VM Server for SPARC (LDoms). Using this method, all Logical Domains (Virtual Machines) are automatically discovered and updated periodically when things change in the Oracle VM Server for SPARC (LDoms) deployment.

Coherence

Prerequisites

Coherence supports both credential and credential-less monitoring. When using a secured JMX connection, a credential input file needs to be passed. For information on configuring a Coherence cluster, see Configure a Coherence Cluster.

Oracle Unified Directory (OUD)

Prerequisites

  • OUD Gateway

  • OUD Replication

  • OUD Proxy

The LDAP username and password are used to connect to the OUD LDAP server.

OUD Credentials:

Directory Server Username and Password: The username and password that will be used by the agent to bind to the server instance. Ensure the password is in the appropriate field.

The following credential JSON sample illustrates how the properties should be entered.

{ "entities":[
      {
        "name":"OMC_OUD_Directory1",
        "type":"omc_oud_directory",
        "displayName":"OUD_directory1",
        "timezoneRegion":"PST",
        "credentialRefs":["OudCreds"],
        "properties":{
                "host_name":{"displayName":"Directory Server Host","value":"myserver.myco.com"},
                "omc_ldap_port":{"displayName":"Administration Port","value":"4444"},
                "omc_trust_all":{"displayName":"Trust ALL Server SSL certificates","value":"true"},
                "capability":{"displayName":"capability","value":"monitoring"}}
      }
]}

{"credentials":[
      {
         "id":"OudCreds","name":"OUD Credentials","credType":"MonitorCreds",
         "properties":[{"name":"authUser", "value":"CLEAR[cn=Directory Manager]"},
                       {"name":"authPasswd", "value":"CLEAR[mypassword]"}]
      }
   ]
}

Oracle Access Manager (OAM)

Prerequisites and Monitoring Credentials

The same credentials are used to discover the WebLogic Domain.

Note:

Refresh of IDM targets is now supported. To refresh any IDM domain run omcli refresh_entity agent ./idm_domain.json

where the content of idm_domain.json is:
{ "entities":[
{
"name": "Idm Domain",
"type": "omc_weblogic_domain"
}
]}

Oracle Internet Directory (OID)

Prerequisites

The same credentials are used to discover the WebLogic Domain.

Note:

Refresh of IDM targets is now supported. To refresh any IDM domain run omcli refresh_entity agent ./idm_domain.json

where the content of idm_domain.json is:
{ "entities":[
{
"name": "Idm Domain",
"type": "omc_weblogic_domain"
}
]}

Microsoft Internet Information Services (IIS)

Prerequisites

Local Monitoring: Credentials are not required. The agent user is used for monitoring.  

Remote Monitoring via WMI: Credentials are required. The credentials to be provided include the username and password used to log into the remote Windows host.  

Before you can monitor Microsoft IIS entities, you must ensure the following prerequisites have been met:

  • Remote Monitoring of IIS: If the Cloud agent and IIS are installed on different machines, then Microsoft Visual C++ needs to be installed on the Windows machine running the Cloud agent. The DLL msvcr100.dll, which is part of the Microsoft Visual C++ installation, is required.

    Local Monitoring of IIS: If the Cloud agent and IIS are installed on the same machine, Microsoft Visual C++ is not required.

  • IIS has been installed on a Windows Server. For more information about running the installation wizards from Server Manager, see Installing IIS 8.5 on Windows Server 2012 R2.

  • IIS Management Compatibility Components have been installed. To install the components:
    1. Click Start, click Control Panel, click Programs and Features, and then click Turn Windows features on or off.

    2. Follow the installation wizards and on the Select Server Roles page, select Web Server (IIS). For more information about running the installation wizards from Server Manager, see Installing IIS 8.5 on Windows Server 2012 R2.

    3. In Server Manager, expand Roles in the navigation pane and right-click Web Server (IIS), and then select Add Role Services.

    4. In the Select Role Services pane, scroll down to Web Server>Management Tools. Check the following boxes:

      • IIS 6 Management Compatibility

      • IIS 6 Metabase Compatibility

      • IIS 6 Management Console

      • IIS 6 WMI Compatibility

    5. Enable FTP Server.

  • DCOM settings and WMI namespace security settings have been enabled for a remote WMI connection.

    WMI uses DCOM to handle remote calls. DCOM settings for WMI can be configured using the DCOM Config utility (DCOMCnfg.exe) found in Administrative Tools in Control Panel. This utility exposes the settings that enable certain users to connect to the computer remotely through DCOM.

    The following procedure describes how to grant DCOM remote startup and activation permissions for certain users and groups.

    1. Click Start, click Run, type DCOMCNFG, and then click OK

    2. In the Component Services dialog box, expand Component Services, expand Computers, and then right-click My Computer and click Properties

    3. In the My Computer Properties dialog box, click the COM Security tab

    4. Under Launch and Activation Permissions, click Edit Limits

    5. In the Launch Permission dialog box, follow these steps if your name or your group does not appear in the Groups or user names list

      1. In the Launch Permission dialog box, click Add

      2. In the Select Users, Computers, or Groups dialog box, add your name and the group in the Enter the object names to select box, and then click OK

    6. In the Launch Permission dialog box, select your user and group in the Group or user names box. In the Allow column under Permissions for User, select Remote Launch and select Remote Activation, and then click OK

    The following procedure describes how to grant DCOM remote access permissions for certain users and groups.
    1. Click Start, click Run, type DCOMCNFG, and then click OK

    2. In the Component Services dialog box, expand Component Services, expand Computers, and then right-click My Computer and click Properties

    3. In the My Computer Properties dialog box, click the COM Security tab

    4. Under Access Permissions, click Edit Limits

    5. In the Access Permission dialog box, select ANONYMOUS LOGON name in the Group or user names box. In the Allow column under Permissions for User, select Remote Access, and then click OK


    Allowing Users Access to a Specific WMI Namespace

    It is possible to allow or disallow users access to a specific WMI namespace by setting the "Remote Enable" permission in the WMI Control for a namespace.

    The following procedure sets remote enable permissions for a non-administrator user:
    1. In the Control Panel, double-click Administrative Tools

    2. In the Administrative Tools window, double-click Computer Management

    3. In the Computer Management window, expand the Services and Applications tree and double-click the WMI Control

    4. Right-click the WMI Control icon and select Properties

    5. In the Security tab, select the namespace and click Security

    6. Locate the appropriate account and check Remote Enable in the Permissions list

    Firewall Settings 
    1. Click Start, click Run, type GPEDIT.MSC, and then click OK

    2. In the Group Policy dialog box, expand Administrative Templates, expand Network, expand Network Connections, and then expand Windows Firewall

    3. Select Standard Profile and double-click Windows Firewall: Allow Inbound Remote Administration Exceptions

    4. In the dialog box that appears, select Enabled and click Apply

    5. If required, repeat the previous two steps for the Domain Profile as well

Oracle Identity Manager (OIM)

Prerequisites

The same credentials are used to discover the WebLogic Domain.

Note:

Refresh of IDM targets is now supported. To refresh any IDM domain run omcli refresh_entity agent ./idm_domain.json

where the content of idm_domain.json is:
{ "entities":[
{
"name": "Idm Domain",
"type": "omc_weblogic_domain"
}
]}

Oracle Clusterware (CRS)

Prerequisite for Remote Monitoring


SSH must be set up between the machine where the Cloud agent is installed and the machine where CRS is installed. The Cloud agent connects to the remote machine where CRS is installed via SSH authentication.
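
A minimal sketch of setting up key-based SSH from the agent host to the CRS host follows; the user and host names are placeholders for illustration:

# On the Cloud agent host, as the agent install user, generate a key pair if one does not exist
ssh-keygen -t rsa

# Copy the public key to the monitoring user on the CRS host
ssh-copy-id <crs_user>@<crs_host>

# Verify that SSH now works without prompting for a password
ssh <crs_user>@<crs_host> hostname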

JBOSS

Prerequisites

Before discovering a JBOSS server or domain, you must first add the JBOSS client jar file to the Cloud agent as a plug-in. The JBOSS client jar file contains the required JMX protocols that allow the agent to collect JBOSS metrics.

The JBOSS client jar is distributed as part of the JBOSS installation. When you download the JBOSS zip file, the client jar file will be bundled with it.

Step 1: Locate the JBOSS client jar file.

From the JBOSS home directory, you will find the client jar file at the following location:

JBOSS_HOME/bin/client

In this directory, you’ll see the jboss-client.jar file. This is the file you need to copy over to the Cloud agent location.

Step 2: Copy the JBOSS client jar file to the Cloud agent installation. Copy the jboss-client.jar file to a secure location that is accessible by the Cloud agent. Typically, this is located on the same host where the agent is installed.
Step 3: Add the jboss-client.jar to the Cloud agent installation as a plug-in.

From the Cloud agent home directory, navigate to the agent state directory:

<agent_home>/sysman/config

Create a classpath file. This file tells the agent where to find the jboss-client.jar. The file naming convention is <plugin_id>.classpath.lst.

Example: If you're adding the GFM plug-in (plug-in ID is oracle.em.sgfm), the file name would be oracle.em.sgfm.classpath.lst.

Edit the classpath file and add the absolute path to the jboss-client.jar file at the end of the file.

/scratch/securelocation/jboss-client.jar

Bounce the agent. Any modifications made to the classpath file will not take effect until the agent is restarted. Once the agent has been bounced, you are ready to discover the JBOSS entity (server or domain).
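
For illustration, the commands for this step might look like the following; the staging directory and plug-in ID are examples taken from the text above, so substitute your own locations:

$ cp <JBOSS_HOME>/bin/client/jboss-client.jar /scratch/securelocation/
$ echo "/scratch/securelocation/jboss-client.jar" >> <agent_home>/sysman/config/oracle.em.sgfm.classpath.lst
$ omcli stop agent
$ omcli start agent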

Step 4: Discover the JBOSS server/domain.
  1. From the Oracle Management Cloud console, select Administration > Discovery > Add Entity. The Add Entity page displays.

  2. From the Entity Type drop-down menu, choose either JBOSS Domain or JBOSS Server. The appropriate JBOSS parameters are displayed.

  3. Enter the appropriate parameters and monitoring credentials.

  4. Click Add Entity.

About JBOSS Monitoring Credentials

Depending on whether you choose JBOSS Server or JBOSS Domain entity type, the required monitoring credentials will differ:

JBOSS Server

  • JBOSS Username: User account used by the agent for monitoring.

  • JBOSS Password: Password for the above user account.

JBOSS Domain

  • JBOSS Credentials:
    • JBOSS username and password: Credentials used by the agent for monitoring.

    • App User Name and Password: Credentials used to communicate with servers in the domain.

  • User Credential Set:
    • Alias and Password: Same as the JBOSS username and password used for the JBOSS Credentials.

      Note:

      This is needed because two different fetchlets are being used.

Kubernetes Cluster

To monitor Kubernetes, you need to set up access permissions.

Cloud Agent Configuration

If the cloud agent communicates with Oracle Management Cloud through a proxy (OMC_PROXYHOST & OMC_PROXYPORT parameters were set on the cloud agent when it was installed), Kubernetes discovery will fail. You’ll need to perform additional configuration steps depending on the following situations:

For a New Agent Installation

If the agent requires a proxy to communicate with Oracle Management Cloud, use a gateway and set the proxy parameters (OMC_PROXYHOST and OMC_PROXYPORT) during gateway installation, and then set up the cloud agent (without proxy parameters) to point to the gateway.

For an Existing Agent

If the existing cloud agent has been set up to use a proxy to communicate with Oracle Management Cloud, execute the following commands on the cloud agent before performing Kubernetes entity discovery.

omcli setproperty agent -allow_new -name _configureProxyPerClient -value true 
omcli stop agent 
omcli start agent

Master's Server Certificate

To connect to the Kubernetes API server, you need to add the Master's Server Certificate to the agent trust store.

Downloading the Certificate (Command Line)

# echo -n | openssl s_client -connect <master host>:<master port> | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <certificate name>.crt
# Example: for the Kubernetes master URL https://myhost.myco.com:6443/, execute the following
$ echo -n | openssl s_client -connect myhost.myco.com:6443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > "slcak155.crt"

Adding to the Agent Trust Store

For secure Kubernetes installations (https), it is mandatory to add the certificate of the Kubernetes master to the agent's trust store. This can be done either using OMCLI or during the discovery process. If you add the certificate to the agent trust store using omcli secure, then you don't need to fill out the certificate-related fields (Certificate, CertAlias, CertPassword) in the JSON or the UI when adding the target. Alternatively, you can add the certificate to the agent trust store during the discovery process (either through the UI or using omcli add_entity).

  • Using OMCLI
    # omcli secure add_trust_cert_to_jks -password welcome -trust_certs_loc  <path to the certificate>  -alias <alias of the certificate>
    $ omcli secure add_trust_cert_to_jks -password welcome -trust_certs_loc /agentdir/cloud_agent/agent_inst/bin/slcak155.crt -alias kube-server
  • During Discovery

    From the UI (Token Credentials, Basic Credentials, or Keystore Credentials), the certificate fields are specified in the Add Entity dialog alongside the chosen credential type.

    From the Command Line

    When running omcli add_entity, the path to the certificate, its alias, and the agent's trust store password can be specified as part of the credential JSON, as shown in the following JSON example (Token-based authentication).
    {
      "credentials": [
        {
          "id": "KubeCredsRef",
          "name": "TokenCredentialSet",
          "credType": "TokenCredential",
          "properties": [
            {
              "name": "Token",
              "value": "CLEAR[<Token>]"
            },
            {
              "name": "Certificate",
              "value": "FILE[<Absolute path of Kubernetes certificate file>]"
            },
            {
              "name": "CertAlias",
              "value": "CLEAR[<Kubernetes certificate alias>]"
            },
            {
              "name": "CertPassword",
              "value": "CLEAR[<Agent trust store password>]"
            }
          ]
        }
      ]
    }
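
    For reference, a typical discovery run then passes the entity definition JSON and this credential JSON to omcli add_entity; the file names below are placeholders, and the exact options are documented in Add Entities Using JSON Files:

    $ omcli add_entity agent kubernetes_cluster.json -credential_file kubernetes_creds.json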

Token Based Authentication

Creating a Service Account

You can reuse a service account already present in the Kubernetes installation or create a new one.

# kubectl create serviceaccount <service account name>
$ kubectl create serviceaccount omc-monitoring
To list available service accounts:
$ kubectl get serviceaccounts
NAME                   SECRETS   AGE
default                1         14d
omc-monitoring         1         1h
Getting a Token

Every service account, when created, has a secret associated with it.

$ kubectl get secrets
NAME                               TYPE                                  DATA      AGE
default-token-ggjlh                kubernetes.io/service-account-token   3         14d
omc-monitoring-token-96jpc         kubernetes.io/service-account-token   3         1h
# We are interested in the secret whose name starts with our service account name. In this example, the secret for the omc-monitoring service account is omc-monitoring-token-96jpc

The token value can be extracted by describing the secret.

# kubectl describe secrets <secret name>
$ kubectl describe secrets omc-monitoring-token-96jpc
Name:         omc-monitoring-token-96jpc
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=omc-monitoring
              kubernetes.io/service-account.uid=be1ff6c9-ed72-11e8-b9e7-0800275fc834
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...(extracted token)
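
Alternatively, the token can be extracted directly with a jsonpath query instead of reading the describe output; the secret name is the one from the example above:

$ kubectl get secret omc-monitoring-token-96jpc -o jsonpath='{.data.token}' | base64 --decode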

Access Permissions

Before the service account's token is used for monitoring, it needs the "get", "list", and "watch" privileges for the following resources:

Kubernetes Version    Resources

1.5 and 1.6    pods,nodes,namespaces,services,jobs,services/proxy,deployments.extensions,replicasets.extensions,daemonsets.extensions,statefulsets.apps,replicationcontrollers

1.7    pods,nodes,namespaces,services,jobs,services/proxy,deployments.apps,replicasets.extensions,daemonsets.extensions,statefulsets.apps,replicationcontrollers

1.8    pods,nodes,namespaces,services,jobs,services/proxy,deployments.apps,replicasets.apps,daemonsets.apps,statefulsets.apps,replicationcontrollers

1.9 to 1.11    pods,nodes,namespaces,services,jobs,services/proxy,deployments.apps,replicasets.apps,daemonsets.apps,statefulsets.apps,replicationcontrollers,nodes/proxy

To add these permissions to a service account, first create a cluster role:

# kubectl create clusterrole <cluster_role_name> --verb=get,list,watch --resource=<Resources>
$ kubectl create clusterrole  omc-monitoring-role --verb=get,list,watch --resource=pods,nodes,namespaces,services,jobs,services/proxy,deployments.apps,replicasets.apps,daemonsets.apps,statefulsets.apps,replicationcontrollers,nodes/proxy
After the cluster role is created, it must be bound to the service account.
# kubectl create clusterrolebinding <cluster_role_binding_name> --clusterrole=<cluster_role_name> --serviceaccount=default:<service_account_name>
$ kubectl create clusterrolebinding omc-monitoring-binding --clusterrole=omc-monitoring-role  --serviceaccount=default:omc-monitoring

Note:

If you want access to all resources, you can use the built-in cluster-admin cluster role, which has all privileges, and create the binding for the service account using the following command.

# kubectl create clusterrolebinding <cluster_role_binding_name> --clusterrole=cluster-admin --serviceaccount=default:<service_account_name>
$ kubectl create clusterrolebinding all-access-binding --clusterrole=cluster-admin --serviceaccount=default:all-access-account

Basic Authentication

Creating Authorization Policy

For a specific user, you must create a username and password and add them to a file named basic_auth.csv (under the /etc/kubernetes/pki directory on the master node) in the following format:
<password>,<username>,<groupname>
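
For example, an entry for a hypothetical monitoring user might look like this (choose your own password, user name, and group name):

Welcome123,omcmonitor,omcmonitoringgroup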

In a new file "abac-authz-policy.jsonl "(under directory /etc/kubernetes/pki on the master node ). For above user, you need to specify what API's the user needs access to and with what privileges, as shown below.

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "pods", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "nodes", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "services", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "namespaces", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "deployments", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "daemonsets", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "replicasets", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "jobs", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "replicationcontrollers", "apiGroup": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "statefulsets", "apiGroup": "*", "readonly": true}}

Note:

If you need to provide access to all resources for this user, add the following line to the policy file.

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"<username>", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly":true}}

Modifying the API Server Manifest

Add details of the above two files to /etc/kubernetes/manifests/kube-apiserver.yaml on the master node as follows.

- --authorization-mode=Node,RBAC,ABAC (EDIT THIS ENTRY)

- --basic-auth-file=/etc/kubernetes/pki/basic_auth.csv (NEW ENTRY)

- --authorization-policy-file=/etc/kubernetes/pki/abac-authz-policy.jsonl (NEW ENTRY)

Note:

1) When adding the above lines, use spaces for indentation (do not use tabs). 2) If you later change the policy file, copy the policies to a new file and reference the new file name as above, because the API server will not detect changes to the existing policy file.

To reflect the changes, restart the kubelet using the following command:

# Run the following command as root user
$ systemctl restart kubelet

Client Certificate Authentication

Add the following lines to /etc/kubernetes/manifests/kube-apiserver.yaml:

- --client-ca-file=/srv/kubernetes/ca.crt
- --tls-cert-file=/srv/kubernetes/server.crt
- --tls-private-key-file=/srv/kubernetes/server.key

Information on how to generate these certificates can be found at the following website:

https://kubernetes.io/docs/concepts/cluster-administration/certificates/

To create the JKS from certificates, run the following:

cat server.crt server.key > keyCert.pem
openssl pkcs12 -export -in keyCert.pem -out clientKeystore.pkcs12 -name clientKeystore -noiter -nomaciter
keytool -importkeystore -srckeystore clientKeystore.pkcs12 -srcstoretype pkcs12 -srcalias clientKeystore -destkeystore clientKeystore.jks -deststoretype jks -deststorepass <password> -destalias clientKeystore
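
To confirm the keystore was built correctly, you can list its contents; the password is the one supplied when the keystore was created:

keytool -list -keystore clientKeystore.jks -storepass <password>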

Discovery

Kubernetes can be discovered from either the Oracle Management Cloud console or from OMCLI.

OMCLI

The following sample JSON files are used to discover Kubernetes using OMCLI. See Add Entities Using JSON Files.

Without credentials (insecure):
{
  "entities": [
    {
      "name": "KUBERNETES_INSECURE",
      "type": "omc_kubernetes_cluster",
      "displayName": "KUBERNETES_INSECURE",
      "timezoneRegion": "GMT",
      "properties": {
        "host_name": {
          "displayName": "Hostname",
          "value": "myhost.myco.com"
        },
        "omc_kubernetes_master_url": {
          "displayName": "Kubernetes master URL",
          "value": "http://myhost.myco.com:80/"
        },
        "capability": {
          "displayName": "capability",
          "value": "monitoring"
        }
      }
    }
  ]
}

With credentials (secure):

{
  "entities": [
    {
      "name": "KUBERNETES_SECURE",
      "type": "omc_kubernetes_cluster",
      "displayName": "KUBERNETES_SECURE",
      "credentialRefs": [
        "KubeCredsRef"
      ],
      "timezoneRegion": "GMT",
      "properties": {
        "host_name": {
          "displayName": "Hostname",
          "value": "myhost.myco.com"
        },
        "omc_kubernetes_master_url": {
          "displayName": "Kubernetes master URL",
          "value": "https://myhost.myco.com:443/"
        },
        "capability": {
          "displayName": "capability",
          "value": "monitoring"
        }
      }
    }
  ]
}

Note:

An additional property, omc_heapster_url, can be specified in the JSON's "properties" object (shown below) to fetch metrics from Heapster. If this property is not provided, the metrics are fetched from the summary API.

{
    .....
    "properties" : {
        ....
        "omc_heapster_url" : {
            "displayName" : "Heapster URL",
            "value" : "<base url of heapster>"
        }
    }
}
Property description

omc_kubernetes_master_url (Kubernetes Master URL): Base URL of the API server on the Kubernetes master node. The URL is of the form http(s)://<hostname>:<port>

host_name (Hostname): Hostname of the Kubernetes master node.

omc_heapster_url (Heapster URL): Base URL of Heapster. This needs to be specified if the performance metrics are to be collected from Heapster. If Heapster is running inside Kubernetes as a cluster service, the base URL is of the form http(s)://<host>:<port>/api/v1/namespaces/kube-system/services/heapster/proxy, where the host and port are the same as in omc_kubernetes_master_url.
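
For example, with the master URL used earlier in this section, the Heapster URL would take the following form (host and port are illustrative):

https://myhost.myco.com:6443/api/v1/namespaces/kube-system/services/heapster/proxy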

Credential JSONs
Basic Credentials:

{
  "credentials": [
    {
      "id": "KubeCredsRef",
      "name": "UserCredentialSet",
      "credType": "AliasCredential",
      "properties": [
        {
          "name": "Alias",
          "value": "CLEAR[admin]"
        },
        {
          "name": "Password",
          "value": "CLEAR[M3ASfn0poA4tMdcO]"
        },
        {
          "name": "Certificate",
          "value": "FILE[<KUBERNETES_CERT_FILE_LOC>]"
        },
        {
          "name": "CertAlias",
          "value": "CLEAR[<KUBERNETES_CERT_ALIAS>]"
        },
        {
          "name": "CertPassword",
          "value": "CLEAR[<KUBERNETES_JKS_PASSWORD>]"
        }
      ]
    }
  ]
}
Token Credentials:

{
  "credentials": [
    {
      "id": "KubeCredsRef",
      "name": "TokenCredentialSet",
      "credType": "TokenCredential",
      "properties": [
        {
          "name": "Token",
          "value": "CLEAR[seRsr3jMfQL8lDqvSgqgjJwH65j80gzB]"
        },
        {
          "name": "Certificate",
          "value": "FILE[<KUBERNETES_CERT_FILE_LOC>]"
        },
        {
          "name": "CertAlias",
          "value": "CLEAR[<KUBERNETES_CERT_ALIAS>]"
        },
        {
          "name": "CertPassword",
          "value": "CLEAR[<KUBERNETES_JKS_PASSWORD>]"
        }
      ]
    }
  ]
}
Keystore Credentials:

{
  "credentials": [
    {
      "id": "KubeCredsRef",
      "name": "SSLKeyStoreCredentialSet",
      "credType": "StoreCredential",
      "properties": [
        {
          "name": "StoreLocation",
          "value": "CLEAR[/scratch/dritwik/view_storage/jsons/kubernetes/keystore.jks]"
        },
        {
          "name": "StoreType",
          "value": "CLEAR[JKS]"
        },
        {
          "name": "StorePassword",
          "value": "CLEAR[welcome]"
        },
        {
          "name": "Certificate",
          "value": "FILE[<KUBERNETES_CERT_FILE_LOC>]"
        },
        {
          "name": "CertAlias",
          "value": "CLEAR[<KUBERNETES_CERT_ALIAS>]"
        },
        {
          "name": "CertPassword",
          "value": "CLEAR[<KUBERNETES_JKS_PASSWORD>]"
        }
      ]
    }
  ]
}
Property description

Basic Credentials

Alias (Username): Username of the user who will discover Kubernetes.
Password (Password): Password of the user.
Certificate (Keystore Certificate): Certificate of the Kubernetes API server on the master node. If adding from the UI, specify the text inside the certificate file. With omcli, create a Java keystore, add the certificate to it, and specify the file path.
CertAlias (Certificate Alias): Alias for the certificate. This should be a unique alphanumeric string.
CertPassword (Trust Store Password): Password of the agent's trust store. This password is "welcome".

Token Credentials

Token (Token): Token of the user who will discover Kubernetes.
Certificate (Keystore Certificate): See Basic Credentials.
CertAlias (Certificate Alias): See Basic Credentials.
CertPassword (Trust Store Password): See Basic Credentials.

Keystore Credentials

StoreLocation (Store Location): Location of the client keystore. This Java keystore (JKS) file should contain the client's certificate.
StoreType (Store Type): Store type. This value is always set to "JKS".
StorePassword (Store Password): Password of the JKS file.
Certificate (Keystore Certificate): See Basic Credentials.
CertAlias (Certificate Alias): See Basic Credentials.
CertPassword (Trust Store Password): See Basic Credentials.

Oracle GoldenGate

Prerequisites

Oracle GoldenGate enables the continuous, real-time capture, routing, transformation, and delivery of transactional data across heterogeneous (Oracle, DB2, MySQL, SQL Server, Teradata) environments. The following prerequisites apply when discovering and monitoring Oracle GoldenGate environments.

Enable Monitoring

The first prerequisite is to enable monitoring in GoldenGate. Follow the steps below for your specific GoldenGate version and architecture.

Classic Architecture

If you are using GoldenGate Classic Architecture, you will need to add a parameter in the GLOBALS file to enable monitoring.

You must be running GoldenGate version 12.3.0.1.181120 at a minimum. This is a cumulative patch set for GoldenGate released on January 9, 2019.

  1. Locate the GLOBALS file in the top-level GoldenGate installation directory.
  2. Add the following line to this file and save the file:
    ENABLEMONITORING UDPPORT <port> HTTPPORT <port>
  3. Restart GoldenGate Manager.
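
For illustration, a GLOBALS entry and Manager restart might look like the following; the port numbers are arbitrary examples, so choose free ports in your environment:

ENABLEMONITORING UDPPORT 9915 HTTPPORT 9916

GGSCI> STOP MANAGER
GGSCI> START MANAGER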

Microservices Architecture

If you are using GoldenGate Microservices Architecture, enable Monitoring as part of the GoldenGate setup in the GoldenGate Configuration Assistant. Once monitoring has been enabled, the Performance Metric Server is started, which indicates that monitoring is enabled for GoldenGate.

OCI GoldenGate

If you are using OCI GoldenGate, no prerequisites are required. Monitoring is enabled by default.

Import Certificate for GoldenGate Secure Installations

If the Oracle GoldenGate setup is secure (HTTPS), the GoldenGate certificate needs to be imported into the agent manually prior to discovery. To do this, perform the following:
  1. Extract the certificate from Oracle GoldenGate.
    openssl s_client -showcerts -connect <hostname>:<service port>
  2. Add the Oracle GoldenGate certificate to the cloud agent's JKS.
    <jdk>/bin/keytool -importcert -file <goldengate-certificate> -alias goldengate -keystore <AGENT_HOME>/agent_inst/sysman/config/montrust/AgentTrust.jks -storepass welcome
  3. Bounce the cloud agent.
    omcli stop agent ; omcli start agent

Oracle VM Manager

Prerequisites

The cloud agent must be deployed on the Oracle VM Manager host.

Credentials: The username and password are required to access the Oracle VM Manager console.

Example:

username=admin / password=admin_pw

Certificates:

You need to explicitly add the Oracle VM Manager Weblogic certificate to the Agent's JKS.

How to extract certificate from Oracle VM Manager:

To export the Oracle VM Manager WebLogic certificate, log in as the root user and enter the following command:

# /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovmkeytool.sh exportca > <file_loc_for_certificate>

To import the Oracle VM Manager Weblogic certificate to the Agent Keystore, log in as an Oracle cloud agent user and enter the following command:

<AGENT_INSTANCE_HOME>/bin/omcli secure add_trust_cert_to_jks -trust_certs_loc <file_loc_for_certificate> -alias <alias_name>

Oracle JVM Runtime

Prerequisites

Monitoring Oracle JVM Runtime can be performed in the following modes:

  1. No user authentication, No SSL
  2. No user authentication, SSL
  3. User authentication, No SSL
  4. User Authentication, SSL

SSL configuration:

You will need to import your truststore certificate into the cloud agent truststore using omcli as shown in the following example:

$ omcli secure add_trust_cert_to_jks  -alias <Alias of cert to import> -trust_certs_loc <Cert file to import>
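
For instance, with a hypothetical certificate file and alias:

$ omcli secure add_trust_cert_to_jks -alias myjvmserver -trust_certs_loc /tmp/jvm_server.cer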

Microsoft Azure

Prerequisites

Before monitoring Microsoft Azure with Infrastructure Monitoring, you first need to create a Web app/API application registration for your Oracle Management Cloud account in Azure.

  1. Log in to the Microsoft Azure portal using Global Administrator credentials:
    https://portal.azure.com/#home

    Note:

    You can find Azure Active Directory, Subscriptions, etc. in the All Services menu (or you may already have them in Favorites)
  2. Under Azure services, open the Azure Active Directory service. If it isn't shown, click More services to find it.
  3. Switch to the Default Directory if not already selected.
  4. In the Manage section, click App Registrations.
  5. Click New Application Registration and fill in details:
    • Name = Name of your application, e.g. OMCAzureMonitoring
    • Supported account types = Accounts in this organizational directory only (Default Directory only - Single tenant)
    • Redirect URI (optional) = Web (leave blank; this does not affect discovery)
    • Click Register.
  6. After clicking Register, you'll see the Application ID (you can later find the Application ID in the Application Registration blade). Save the Application (client) ID and Directory (tenant) ID as you'll need them later to add the Microsoft Azure entity to Oracle Management Cloud.
  7. From the Manage section, click Certificates & secrets.
  8. In the area below Client Secrets, click +New client secret, and provide the description and expiration time frame. Click Add.
  9. The Client Secret has been created. Make note of the Client Secret value as you'll need it for discovery.

    Note:

    You won't be able to check the value after you leave the screen. You need to create a new secret if you ever forget or lose the key you've just created.
  10. Navigate to Home, then to Subscriptions.
  11. Copy the Subscription ID directly from the screen or select the desired Subscription (in either case, it needs to belong to the Azure Active Directory used above) and copy the Subscription ID from the Overview section. Save the Subscription ID as you'll need it for discovery.
  12. Grant the application principal access to Azure resources. There is no Oracle Management Cloud recommended practice to define the access policy and permissions for the user/application in the Azure Active Directory (tenant). Access depends on your security policy: you can allow monitoring access to the whole tenant, a resource group, or just individual resources. Further explanation can be found in the Azure documentation for role-based access control (RBAC).
    • The easiest approach is to grant the Monitoring Reader role to the registered app at the Subscription level:
      1. Navigate to Subscriptions.
      2. Select the desired Subscription (it needs to belong to the Azure Active Directory used above).
      3. Select Access control (IAM) and click Add. Fill in details for the following:
        • Role: Monitoring Reader
        • Assign access to: Azure AD user, group, or service principal (default value)
        • Select: Type in name of the application (e.g. OMCAzureMonitoring as used above).
      4. Click Save.
    • The process to grant access for Resource Groups or Resources is the same (instead of Subscription select the desired Resource Group or Resource).
    • More information on how to create the service principal can be found in Azure documentation.
  13. Once you've created a Web app/API application registration for your Oracle Management Cloud account in Azure, you're ready to add the Microsoft Azure entity to Oracle Management Cloud. Use the saved values (from the Azure prerequisites above) to fill in the details in the Cloud Discovery Profile and Monitoring Credentials for Azure.

Apache Kafka

Prerequisites

Import Zookeeper Jar

  • Place the Zookeeper jar from <Zookeeper Installation Home>/zookeeper-<version>.jar in an appropriate directory which is readable by the cloud agent user.

  • Add the location where the jar is placed to the cloud agent classpath file, <Agent Installation Directory>/sysman/config/oracle.em.sgfm.classpath.lst

  • Restart the cloud agent for the inclusion to take effect.

Import Kafka Client Jar

  • Place the Kafka Client jar from <Kafka Installation Home>/libs/kafka-clients-0.10.2.1.jar in an appropriate directory which is readable by the cloud agent user
  • Add the location where the jar is placed to the cloud agent classpath file, <Agent Installation Directory>/sysman/config/oracle.em.sgfm.classpath.lst (a combined sketch of these steps follows this list)
  • Restart the cloud agent for the inclusion to take effect.
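
As with the JBOSS client jar earlier in this appendix, a minimal sketch of the Zookeeper and Kafka client jar steps might look like the following; the staging directory is an example, so substitute your own paths and jar versions:

$ cp <Zookeeper Installation Home>/zookeeper-<version>.jar /scratch/securelocation/
$ cp <Kafka Installation Home>/libs/kafka-clients-0.10.2.1.jar /scratch/securelocation/
$ echo "/scratch/securelocation/zookeeper-<version>.jar" >> <Agent Installation Directory>/sysman/config/oracle.em.sgfm.classpath.lst
$ echo "/scratch/securelocation/kafka-clients-0.10.2.1.jar" >> <Agent Installation Directory>/sysman/config/oracle.em.sgfm.classpath.lst
$ omcli stop agent
$ omcli start agent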

JMX Configuration

Enable JMX on Kafka Brokers with no authentication

Add the following lines to <Kafka Installation Home>/bin/kafka-server-start.sh to enable JMX with no authentication.

  • export JMX_PORT=<enter jmx port>
  • KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<Hostname> -Dcom.sun.management.jmxremote.port=<JMX port> -Dcom.sun.management.jmxremote.rmi.port=<RMI port>"

Start the Kafka server.

Note:

For a multi-broker setup, provide a unique JMX port and a unique broker ID for each broker. The listen port must also differ if brokers are started on the same node; if a port is reused, startup fails with an "Address already in use" error.