12 Installing the Monitoring and Visualization Software

If you want to deploy your own Monitoring and Visualization software on the Kubernetes cluster, you can use the steps described in this chapter.
Production best practices for Elasticsearch and Prometheus deployments are outside the scope of this document. For detailed installation instructions, see the Elasticsearch and Prometheus vendor documentation.

This chapter includes the following topics:

Installing the Logging and Visualization Software

Elasticsearch enables you to aggregate logs from various products on your system. You can analyze the logs and use the Kibana console to visualize the data in the form of charts and graphs. Elastic recommends using API keys as a best practice for connecting to a centralized Elasticsearch deployment. For simplicity, this document illustrates the process using user names and passwords. For complete information on these products, see the manufacturer documentation at https://www.elastic.co/.

This section includes the following topics:

Kubernetes Services

The Kubernetes services are created as part of the Monitoring and Visualization deployment process.

Table 12-1 Kubernetes Services

Service Name      Type        Service Port   Mapped Port
elasticsearch     ClusterIP   9600           -
kibana-nodeport   NodePort    31800          5601
elk-nodeport      NodePort    31920          9200

Note:

The mapped port is randomly assigned at install time. The values provided in this table are examples only.

Variables Used in this Section

This section provides instructions to create a number of files. These sample files contain variables that you need to substitute with values applicable to your deployment.

Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.

Table 12-2 List of Variables

Variable          Sample Value              Description
<ELKNS>           elkns                     The name of the Elasticsearch namespace.
<ELK_OPER_VER>    2.10.0                    The version of the Elasticsearch Operator.
<ELK_VER>         8.11.0                    The version of Elasticsearch/Kibana you want to install.
<ELK_USER>        logstash_internal         The name of the user for Logstash to access Elasticsearch.
<ELK_PASSWORD>    <password>                The password for <ELK_USER>.
<ELK_K8>          31920                     The Kubernetes port used to access Elasticsearch externally.
<ELK_KIBANA_K8>   31800                     The Kubernetes port used to access Kibana externally.
<DH_USER>         username                  The user name for Docker Hub.
<DH_PWD>          mypassword                The password for Docker Hub.
<PV_SERVER>       mynfsserver.example.com   The name of the NFS server. Note: This name should be resolvable inside the Kubernetes cluster.
<ELK_SHARE>       /export/IAMPVS/elkpv      The NFS mount point for the ELK persistent volumes.

Prerequisites

The latest releases of Elasticsearch use the Elasticsearch Operator. The Elasticsearch Operator deploys an Elasticsearch cluster by using Kubernetes stateful sets. Stateful sets create dynamic persistent volumes.

Before installing Elasticsearch, ensure that you have a default Kubernetes storage class defined for your environment that allows dynamic storage. Each vendor has its own storage provisioner, but it may not be configured to provide dynamic storage allocation.

To determine what your default storage class is, use the following command:
kubectl get storageclass
NAME            PROVISIONER                          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
oci (default)   kubernetes.io/is-default-class       Delete          Immediate           false                  6d21h
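
If no storage class is marked as default, you can mark one as the default yourself. The following command is an illustrative example that marks the nfs-client storage class (created later in this section) as the default; substitute the name of the storage class you want to use:
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'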

If you do not have a storage provider that allows you to dynamically create storage, you can use an external NFS storage provider such as NFS subdir external provisioner.

For more information on storage classes, see Storage Classes.

For more information on NFS subdir external provisioner, see Kubernetes NFS Subdir External Provisioner.

For completeness, the following steps show how to install Elasticsearch and Kibana by using NFS subdir:

Creating a Filesystem for the ELK Data

Before you can deploy the NFS client, you need to create a mount point/export on your NFS storage for storing your ELK data. This mount point is used by the NFS subdir external provider.
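
For example, the following is a minimal sketch of creating and exporting such a mount point on a Linux NFS server. It assumes the <ELK_SHARE> path from Table 12-2 and an example client subnet of 10.0.0.0/24; the exact export options depend on your NFS server and security requirements:
mkdir -p /export/IAMPVS/elkpv
echo "/export/IAMPVS/elkpv  10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -a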

Installing NFS Subdir External Provisioner
  1. To install the NFS subdir external provisioner, use the following commands:
    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
        --set nfs.server=<PV_SERVER> \
        --set nfs.path=<ELK_SHARE>
  2. Issue the following command:
    kubectl get storageclass

    You should now see a new storage class called nfs-client.
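
    The output may look similar to the following; the provisioner name and ages are illustrative only:
    NAME         PROVISIONER                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   1m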

Installing Elasticsearch (ELK) Stack and Kibana

Each of the product chapters shows how to send log files to a centralized Elasticsearch stack, which includes the visualization console Kibana. The instructions help you deploy a simple ELK cluster. This is sufficient for testing. In production environments, you should obtain appropriate licenses from the vendor.

For setting up the production security, see Configure Security for the Elastic Stack.

This section includes the following topics:

Setting Up a Product Specific Work Directory

Before you begin the installation, you should have already downloaded and staged the ELK container image or should be using the Oracle Container Registry and the code repository.

See Identifying and Obtaining Software Distributions for an Enterprise Deployment. This section describes the procedure to copy the downloaded sample deployment scripts to a temporary working directory for ELK.

  1. Create a temporary working directory as the install user. The install user should have kubectl access to the Kubernetes cluster.
    mkdir <WORKDIR>
    For example:
    mkdir /workdir/ELK
  2. Change the directory to this location:
    cd /workdir/ELK

    Note:

    The same set of sample files is used by several products in this guide. To avoid having to download them each time, the files are staged in a non-product specific working directory.
Creating a Kubernetes Namespace

The Kubernetes namespace is used to contain the Elasticsearch stack.

Use the following command to create the namespace for ELK:

kubectl create namespace <ELKNS>
For example:
kubectl create namespace elkns
Creating a Kubernetes Secret for Docker Hub Images

This secret allows Kubernetes to pull images from hub.docker.com, which hosts the Elasticsearch images.

You should have an account on hub.docker.com.

Use the following command to create a Kubernetes secret for hub.docker.com:

kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="<DH_USER>" --docker-password="<DH_PWD>" --namespace=<ELKNS>
For example:
kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="username" --docker-password="mypassword" --namespace=elkns
If you are pulling the images from your own registry, then create a secret for your registry using the following command:
kubectl create secret -n <ELKNS> docker-registry <REGISTRY_SECRET_NAME> --docker-server=<REGISTRY_ADDRESS> --docker-username=<REG_USER> --docker-password=<REG_PWD>
For example:
kubectl create secret -n elkns docker-registry regcred --docker-server=iad.ocir.io/mytenancy --docker-username=mytenancy/oracleidentitycloudservice/myemail@email.com --docker-password=<password>
Installing the Elasticsearch Operator
To install the Elasticsearch Operator, perform the following steps:
  1. Use the following commands:
    helm repo add elastic https://helm.elastic.co
    helm repo update
    helm install elastic-operator elastic/eck-operator -n <ELKNS> --set image.tag=<ELK_OPER_VER>
    For example:
    helm install elastic-operator elastic/eck-operator -n elkns --set image.tag=2.11.0

    Note:

    If you are pulling the images from your own repository, the command will be as follows:
    helm install elastic-operator elastic/eck-operator -n <ELKNS> --set image.repository=container-registry.oracle.com/eck/eck-operator --set imagePullSecrets[0].name=regcred
  2. Verify the installation by using the command:
    kubectl get all -n elkns
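
    A healthy installation shows the elastic-operator stateful set and its pod in the Running state, for example (illustrative output only):
    NAME                     READY   STATUS    RESTARTS   AGE
    pod/elastic-operator-0   1/1     Running   0          2m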
Creating an Elasticsearch Cluster

To create an Elasticsearch cluster using the Elasticsearch Operator, perform the following steps:

Creating a Configuration File
To create a configuration file, complete the following steps:
  1. Create a configuration file called <WORKDIR>/ELK/elk_cluster.yaml with the following contents:
    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: elasticsearch
      namespace: <ELKNS>
    spec:
      version: <ELK_VER>
      nodeSets:
      - name: default
        count: 1
        config:
          node.store.allow_mmap: false
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            storageClassName: nfs-client

    Note:

    If you are using your own container registry, you should add the following lines above:
    spec:
      image: iad.ocir.io/mytenancy/idm/elasticsearch/elasticsearch:<ELK_VER>

      podTemplate:
        spec:
          imagePullSecrets:
          - name: regcred
  2. Ensure that the storageClassName is the name of the storage class you want to use. For example, oci or nfs-client.
Creating the Elasticsearch Cluster
To create the Elasticsearch cluster, complete the following steps:
  1. Run the following command:
    kubectl  create -f <WORKDIR>/ELK/elk_cluster.yaml
  2. Monitor the creation of the pods by using the following commands:
    kubectl get all -n elkns -o wide
    kubectl get elasticsearch -n elkns
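
    When the cluster is fully started, the elasticsearch resource reports a green health status and a Ready phase, for example (illustrative output only):
    NAME            HEALTH   NODES   VERSION   PHASE   AGE
    elasticsearch   green    1       8.11.0    Ready   5m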
Copying the Elasticsearch Certificate

Logstash requires access to the Elasticsearch CA (Certificate Authority) certificate to connect to the Elasticsearch server. A copy of the certificate is placed into a config map that is loaded into each namespace from which Logstash runs.

In a production environment, it is recommended that you use production certificates. However, if you have allowed Elasticsearch to create its own self-signed certificates, you should copy this certificate to your work directory for easy access later.

Copy the self-signed certificates to your work directory by using the following command:

kubectl cp <ELKNS>/elasticsearch-es-default-0:/usr/share/elasticsearch/config/http-certs/..data/ca.crt <WORKDIR>/ELK/elk.crt
For example:
kubectl cp elkns/elasticsearch-es-default-0:/usr/share/elasticsearch/config/http-certs/..data/ca.crt /workdir/ELK/elk.crt
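
Optionally, you can inspect the copied certificate to confirm that it is a valid CA certificate, for example:
openssl x509 -in /workdir/ELK/elk.crt -noout -subject -dates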
Elasticsearch Access Details

After the cluster starts, you will need the following information to interact with it:

Credentials
When the cluster is created, a user called elastic is created automatically. You can obtain the password for the user elastic by using the command:
kubectl get secret elasticsearch-es-elastic-user -n <ELKNS> -o go-template='{{.data.elastic | base64decode}}'
For example:
kubectl get secret elasticsearch-es-elastic-user -n elkns -o go-template='{{.data.elastic | base64decode}}'
URL

The URL for sending logs is of the following format:

https://elasticsearch-es-http.<ELKNS>.svc.cluster.local:9200/
For example:
https://elasticsearch-es-http.elkns.svc.cluster.local:9200/
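
You can verify access to this URL by using the CA certificate copied earlier and the elastic credentials. Note that the service name resolves only inside the Kubernetes cluster. The following request is a sketch only; <ELASTIC_PASSWORD> is a placeholder for the password obtained in Credentials:
curl --cacert /workdir/ELK/elk.crt -u elastic:<ELASTIC_PASSWORD> "https://elasticsearch-es-http.elkns.svc.cluster.local:9200/"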
Creating a Kibana Cluster

To create a Kibana cluster, perform the following steps:

Creating a Configuration File
Create a configuration file called <WORKDIR>/ELK/kibana.yaml with the following contents:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: <ELKNS>
spec:
  version: <ELK_VER>
  count: 1
  elasticsearchRef:
    name: elasticsearch

Note:

If you are using your own container registry, you should add the following lines above:
spec:
  image: iad.ocir.io/mytenancy/idm/kibana/kibana:<ELK_VER>

  podTemplate:
    spec:
      imagePullSecrets:
      - name: regcred
Deploying Kibana
  1. Use the following command to deploy Kibana:
    kubectl  create -f <WORKDIR>/ELK/kibana.yaml
  2. Monitor the creation of the pods by using the commands:
    kubectl get all -n elkns -o wide
    kubectl get kibana -n elkns
Creating the Kubernetes Services

If you are using an Ingress controller, it is possible to expose these services through Ingress. However, if you want to send the Ingress log files to Elasticsearch, it makes sense to keep the two independent of each other.

You should create two NodePort Services to access Elasticsearch and Kibana.

  • A NodePort Service for the external Elasticsearch interactions. For example, to send logs from sources outside the cluster and to make API calls to the ELK cluster.
  • A NodePort Service to access the Kibana console.
Creating a NodePort Service for Kibana
To enable access to the Kibana console from a browser, you must expose it outside of the cluster. You can do this by creating a NodePort Service:
  1. Create a file called <WORKDIR>/ELK/kibana_nodeport.yaml with the following contents:
    kind: Service
    apiVersion: v1
    metadata:
      name: kibana-nodeport
      namespace: <ELKNS>
    spec:
      type: NodePort
      selector:
        common.k8s.elastic.co/type: kibana
        kibana.k8s.elastic.co/name: kibana
      ports:
        - targetPort: 5601
          port: 5601
          nodePort: <ELK_KIBANA_K8>
          protocol: TCP
  2. Create the NodePort Service by using the following command:
    kubectl create -f <WORKDIR>/ELK/kibana_nodeport.yaml
    For example:
    kubectl create -f /workdir/ELK/kibana_nodeport.yaml
Creating a NodePort Service for Elasticsearch
To enable access to the Elasticsearch stack from outside the Kubernetes cluster, you must expose it by creating a NodePort Service.
  1. Create a file called <WORKDIR>/ELK/elk_nodeport.yaml with the following contents:
    kind: Service
    apiVersion: v1
    metadata:
      name: elk-nodeport
      namespace: <ELKNS>
    spec:
      type: NodePort
      selector:
        common.k8s.elastic.co/type: elasticsearch
        elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
      ports:
        - targetPort: 9200
          port: 9200
          nodePort: <ELK_K8>
          protocol: TCP 
  2. Create the NodePort Service using the following command:
    kubectl create -f <WORKDIR>/ELK/elk_nodeport.yaml
    For example:
    kubectl create -f /workdir/ELK/elk_nodeport.yaml
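
    To confirm that Elasticsearch is now reachable from outside the cluster, you can send a request to the NodePort on any worker node. The host name, port, and password below are examples only:
    curl -k -u elastic:<ELASTIC_PASSWORD> "https://k8worker1.example.com:31920/_cluster/health?pretty"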
Granting Access to Logstash

In a production deployment, you would have enabled Elasticsearch security. Security has two parts - SSL communication between Logstash and Elasticsearch, and an API key or a user name and password combination to gain access.

Creating a Role and a User for Logstash
In a production deployment, you would have enabled Elasticsearch security. Security has two parts - SSL communication between Logstash and Elasticsearch and a user/password combination to gain access. You can create user names and roles through the command-line API or the Kibana console. The instructions provided below are for the command line.

Note:

Elasticsearch recommends the use of API keys instead of user names and passwords. For more information, see https://www.elastic.co.

Oracle recommends that you create a dedicated role and user for this purpose.

The following commands require the credentials of the Elasticsearch user elastic to be base64 encoded. To encode the user elastic and its password, use the following command:
echo -n elastic:<ELASTIC PASSWORD> | base64

To obtain the password for the elastic user, see Credentials.

  1. To create a role with restricted privileges, use the following command:
    curl -k  -X  POST "https://k8worker1.example.com:31920/_security/role/logstash_writer" -H "Authorization: Basic <ENCODED USER>" -H 'Content-Type: application/json' -d'
    {
      "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
      "indices": [
        {
          "names": ["oiglogs*","oamlogs*","oudlogs*","oudsmlogs*","oirilogs*","oaalogs*],
          "privileges": ["write","create","create_index","manage","manage_ilm"]
        }
      ]
    }'
  2. To create a user with the above role, use the following command:
    curl -k  -X  POST "https://k8worker1.example.com:31920/_security/user/<ELK_USER>" -H "Authorization: Basic <ENCODED USER>" -H 'Content-Type: application/json' -d'
    {
      "password" : "<ELK_PASSWORD>",
      "roles" : [ "logstash_writer"],
      "full_name" : "Internal Logstash User"
    }'
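
    To confirm that the new user can authenticate to Elasticsearch, you can optionally call the authentication endpoint with its credentials. The host name and port are examples only:
    curl -k -u <ELK_USER>:<ELK_PASSWORD> "https://k8worker1.example.com:31920/_security/_authenticate?pretty"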
Creating an API Key for Logstash

Elasticsearch recommends that you use an API key to access Elasticsearch. To create an API key by using the command line:

  1. Run the following command:
    curl -XPOST -u 'elastic:<ELK_PWD>' -k "https://K8WORKER1.example.com:<ELK_K8>/_security/api_key" -H "kbn-xsrf: reporting" -H "Content-Type: application/json" -d'
    {
      "name": "logstash_host001",
      "role_descriptors": {
        "logstash_writer": {
          "cluster": ["monitor", "manage_ilm", "manage_index_templates","read_ilm"],
          "index": [
            {
              "names": ["logs*","oiglogs*","oamlogs*","oudlogs*","oudsmlogs*"],
              "privileges": ["write","create","create_index","manage","manage_ilm"]
            }
          ]
        }
      }
    }'
    For example:
    curl -XPOST -u 'elastic:3k2KuO7eNxI2ZeB350H71VW2' -k "https://k8worker1.example.com:31920/_security/api_key" -H "kbn-xsrf: reporting" -H "Content-Type: application/json" -d'
    {
      "name": "logstash_host001",
      "role_descriptors": {
        "logstash_writer": {
          "cluster": ["monitor", "manage_ilm", "manage_index_templates","read_ilm"],
          "index": [
            {
              "names": ["logs*","oiglogs*","oamlogs*","oudlogs*","oudsmlogs*"],
              "privileges": ["write","create","create_index","manage","manage_ilm"]
            }
          ]
        }
      }
    }'
    The output will appear as follows:
    {"id":"PyGUf4MBwDOoSXCOlS5W","name":"logstash_host001","api_key":"Cs1YDZCvTG6SviN0ejL4-Q","encoded":"UHlHVWY0TUJ3RE9vU1hDT2xTNVc6Q3MxWURaQ3ZURzZTdmlOMGVqTDQtUQ=="}
  2. Make a note of the API key. Use this value when you configure Logstash.
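
    For example, the encoded value from the output above can be passed to Elasticsearch in an Authorization header of the form "ApiKey <encoded>". The following request is a sketch only, using the example key shown above:
    curl -k -H "Authorization: ApiKey UHlHVWY0TUJ3RE9vU1hDT2xTNVc6Q3MxWURaQ3ZURzZTdmlOMGVqTDQtUQ==" "https://k8worker1.example.com:31920/_cluster/health?pretty"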
Accessing the Kibana Console
To access the Kibana console, use the following URL:
http://k8workers.example.com:31800/app/kibana
The default user is elastic. You can obtain the password using the following command:
kubectl get secret elasticsearch-es-elastic-user -n <ELKNS> -o go-template='{{.data.elastic | base64decode}}'
Creating a Kibana Index
When you add log files to Elasticsearch, you will not be able to see them until you create an index pattern. You can perform this step only after Elasticsearch has received some logs.
  1. Log in to the Kibana console.
  2. Click Stack Management.
  3. Click Data Views in the Kibana section.
  4. Click Create Data View.
  5. Enter the following information:
    • Name: Logs*
    • Timestamp: @timestamp
  6. Click Create Data View.
  7. Click Discover to view the log file entries.

Installing the Monitoring Software

Prometheus and Grafana help monitor your environment. The instructions given in this chapter are for a simple deployment by using the kube-prometheus installer. See kube-prometheus. For more information and documentation on the Prometheus product, see Prometheus.

Before starting the installation process, ensure that you use the version of Prometheus that is supported with your Kubernetes release. See the Prometheus/Kubernetes compatibility matrix.

This section includes the following topics:

Kubernetes Services

The Kubernetes services created as part of the installation are:

Service Name            Type       Service Port   Mapped Port
prometheus-nodeport     NodePort   32101          9090
grafana-nodeport        NodePort   32100          3000
alertmanager-nodeport   NodePort   32102          9093

Variables Used in this Section

This section provides instructions to create a number of files. These sample files contain variables which you need to substitute with values applicable to your deployment.

Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.

Table 12-3 List of Variables

Variable          Sample Value   Description
<PROMNS>          monitoring     The name of the Kubernetes namespace to use for the deployment.
<PROM_GRAF_K8>    32100          The Kubernetes port used to access Grafana externally.
<PROM_K8>         32101          The Kubernetes port used to access Prometheus externally.
<PROM_ALERT_K8>   32102          The Kubernetes port used to access the Alert Manager externally.

Installing Prometheus and Grafana

Each of the product chapters shows how to send monitoring data to Prometheus and Grafana. This section explains how to install the Prometheus and Grafana software.

The installation process consists of the following steps:

Setting Up a Product Specific Work Directory

This section describes the procedure to copy the downloaded sample deployment scripts to a temporary working directory for Prometheus.

  1. Create a temporary working directory as the install user. The install user should have kubectl access to the Kubernetes cluster.
    mkdir <WORKDIR>
    For example:
    mkdir /workdir/PROM
  2. Change the directory to this location:
    cd /workdir/PROM

Note:

The same set of sample files is used by several products in this guide. To avoid having to download them each time, the files are staged in a non-product specific working directory.
Downloading the Prometheus Installer
The Prometheus installer is available on GitHub as a helm repository. To download the installer:
  1. Change directory to the working directory:
    cd /workdir/PROM
  2. Run the following command:
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts 
    helm repo update
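
    To confirm that the chart is available in the local repository cache, you can optionally run:
    helm search repo prometheus-community/kube-prometheus-stack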
Creating a Kubernetes Namespace
You have to create a namespace to contain all the objects for Prometheus and Grafana.

To create a namespace, run the following command:

kubectl create namespace monitoring

The output appears as follows:

namespace/monitoring created
Creating a Helm Override File
Create a helm override file called <WORKDIR>/override_prom.yaml to determine how the deployment is created. This file will have the following contents:
alertmanager:
  service:
    nodePort: <PROM_ALERT_K8>
    type: NodePort

prometheus:
  image:
    tag: <IMAGE_VER>
  service:
    nodePort: <PROM_K8>
    type: NodePort

grafana:
  image:
    tag: <IMAGE_VER>
  service:
    nodePort: <PROM_GRAF_K8>
    type: NodePort

  adminPassword: <PROM_ADMIN_PWD>

This example uses NodePort services because Prometheus is capable of monitoring Ingress, and you do not want issues with Ingress to prevent access to Prometheus. Therefore, NodePort is used to keep Prometheus standalone.

Note:

If you want to pull the images from a repository other than docker.io, add the following entries to the top of the file:
global:
  imageRegistry: <REPOSITORY>
For example:
global:
  imageRegistry: iad.ocir.io/mytenancy/idm
Deploying Prometheus and Grafana
Use the override file created earlier to deploy the Prometheus application.
  1. Change the directory to the working directory.
    cd <WORKDIR>
  2. Run the following command:
    helm install -n <PROMNS> kube-prometheus prometheus-community/kube-prometheus-stack -f <WORKDIR>/override_prom.yaml

    This command creates the containers associated with the Prometheus and Grafana application.

    Note:

    If you are using your own repository, then in addition to the changes in the override file, you may need to add the following to your helm command:
    helm install -n <PROMNS> --set grafana.image.repository=<REPOSITORY>/grafana/grafana kube-prometheus prometheus-community/kube-prometheus-stack -f <WORKDIR>/override_prom.yaml
Validating the Installation
To ensure that the application is deployed, and the installation is complete, run the following command:
kubectl get all -n monitoring
The output appears as follows:
NAME                                                         READY   STATUS    RESTARTS   AGE
pod/alertmanager-kube-prometheus-kube-prome-alertmanager-0   2/2     Running   0          15h
pod/kube-prometheus-grafana-95944596-kcd9k                   3/3     Running   0          15h
pod/kube-prometheus-kube-prome-operator-84c5bc5876-klvrs     1/1     Running   0          15h
pod/kube-prometheus-kube-state-metrics-5f9b85478f-qtwnz      1/1     Running   0          15h
pod/kube-prometheus-prometheus-node-exporter-9h86g           1/1     Running   0          15h
pod/kube-prometheus-prometheus-node-exporter-gbkgb           1/1     Running   0          15h
pod/kube-prometheus-prometheus-node-exporter-l99sb           1/1     Running   0          15h
pod/kube-prometheus-prometheus-node-exporter-r7d77           1/1     Running   0          15h
pod/kube-prometheus-prometheus-node-exporter-rnq42           1/1     Running   0          15h
pod/prometheus-kube-prometheus-kube-prome-prometheus-0       2/2     Running   0          15h

NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                      ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   15h
service/kube-prometheus-grafana                    NodePort    10.97.137.130    <none>        80:30900/TCP                 15h
service/kube-prometheus-kube-prome-alertmanager    NodePort    10.97.153.100    <none>        9093:30903/TCP               15h
service/kube-prometheus-kube-prome-operator        ClusterIP   10.108.174.205   <none>        443/TCP                      15h
service/kube-prometheus-kube-prome-prometheus      NodePort    10.110.156.35    <none>        9090:30901/TCP               15h
service/kube-prometheus-kube-state-metrics         ClusterIP   10.96.233.108    <none>        8080/TCP                     15h
service/kube-prometheus-prometheus-node-exporter   ClusterIP   10.107.188.115   <none>        9100/TCP                     15h
service/prometheus-operated                        ClusterIP   None             <none>        9090/TCP                     15h

NAME                                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-prometheus-prometheus-node-exporter   5         5         5       5            5           <none>          15h

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-prometheus-grafana               1/1     1            1           15h
deployment.apps/kube-prometheus-kube-prome-operator   1/1     1            1           15h
deployment.apps/kube-prometheus-kube-state-metrics    1/1     1            1           15h

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-prometheus-grafana-95944596                 1         1         1       15h
replicaset.apps/kube-prometheus-kube-prome-operator-84c5bc5876   1         1         1       15h
replicaset.apps/kube-prometheus-kube-state-metrics-5f9b85478f    1         1         1       15h

NAME                                                                    READY   AGE
statefulset.apps/alertmanager-kube-prometheus-kube-prome-alertmanager   1/1     15h
statefulset.apps/prometheus-kube-prometheus-kube-prome-prometheus       1/1     15h

About Grafana Dashboards

Grafana dashboards are used to visualize information from your targets. There are different types of dashboards for different products. You should install a dashboard to monitor your Kubernetes environment.

The following dashboards are relevant to an Oracle Identity Management deployment:

Table 12-4 Dashboards Relevant to an Oracle Identity Management Deployment

Dashboard         Location                                                                             Description
Kubernetes        https://grafana.com/grafana/dashboards/10856                                         Used to monitor the Kubernetes cluster.
Nginx             https://grafana.com/grafana/dashboards/9614-nginx-ingress-controller/               Used to monitor the Ingress controller.
WebLogic          <WORKDIR>/samples/monitoring-service/config/weblogic-server-dashboard-import.json   Included in the Oracle download from GitHub. Used to monitor the WebLogic domain.
Apache            https://grafana.com/grafana/dashboards/3894-apache/                                 Several Apache dashboards are available. This is an example.
Oracle Database   https://grafana.com/grafana/dashboards/3333-oracledb/                               A sample database dashboard.

Installing a Grafana Dashboard
To install a dashboard:
  1. Download the Kubernetes Dashboard JSON file from the Grafana website. For example: https://grafana.com/grafana/dashboards/10856.
  2. Access the Grafana dashboard with the http://<K8_WORKER1>:30900 URL and log in with admin/<PROM_ADMIN_PWD>. Change your password if prompted.
  3. Click the search box at the top of the screen and select Import New Dashboard.
  4. Either drag the JSON file you downloaded in Step 1 to the Upload JSON File box or click the box and browse to the file. Click Import.
  5. When prompted, select the Prometheus data source. For example: Prometheus.
  6. Click Import. The dashboard is displayed in the Dashboards panel.
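
If you prefer to script the import instead of using the console, you can use the Grafana HTTP API. The following is a minimal sketch only; it assumes the admin credentials and NodePort used earlier, a dashboard JSON file saved as /workdir/PROM/kubernetes-dashboard.json (an example name), and a Prometheus data source named Prometheus. The data source input name (DS_PROMETHEUS here) depends on the dashboard you are importing:
curl -u admin:<PROM_ADMIN_PWD> -H "Content-Type: application/json" -X POST "http://k8worker1.example.com:30900/api/dashboards/import" -d "{\"dashboard\": $(cat /workdir/PROM/kubernetes-dashboard.json), \"overwrite\": true, \"inputs\": [{\"name\": \"DS_PROMETHEUS\", \"type\": \"datasource\", \"pluginId\": \"prometheus\", \"value\": \"Prometheus\"}]}"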