Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01

Post Install Verification

Introduction

This document verifies the installation of the CNE common services on all nodes hosting the cluster. The common services install several UI endpoints, such as Kibana, Grafana, the Prometheus server, and the Alertmanager; the steps below describe how to launch each UI endpoint and verify that the services are installed and working properly.

Prerequisites

  1. The common services have been installed on all nodes hosting the cluster.
  2. Gather the list of cluster names and the version tags of the docker images that were used during the install.
  3. All cluster nodes and service pods should be up and running (a quick check is sketched after this list).
  4. Commands are required to be run on the Management server.
  5. Any modern (HTML5-compliant) browser with network connectivity to the CNE.
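
As a quick pre-check from the Management server, node and pod status can be confirmed with kubectl. This is a minimal sketch; the occne-infra namespace is the one used by the common services in the steps below, and pod names vary per deployment.

  # Verify that all cluster nodes report a Ready status
  $ kubectl get nodes

  # Verify that all common-services pods in the occne-infra namespace are Running (or Completed)
  $ kubectl get pods --namespace occne-infra

Any pod that is not in the Running or Completed state should be investigated before continuing with the verification steps.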

Table 4-1 OCCNE Post Install Verification

Step No. Procedure Description
1. Run the following commands to get the load-balancer IP address and port number for the Kibana web interface.
# LoadBalancer ip address of the kibana service is retrieved with below command  
$ export KIBANA_LOADBALANCER_IP=$(kubectl get services occne-kibana --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
  
# LoadBalancer port number of the kibana service is retrieved with below command  
$ export KIBANA_LOADBALANCER_PORT=$(kubectl get services occne-kibana --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")
  
# Complete url for accessing kibana in external browser
$ echo http://$KIBANA_LOADBALANCER_IP:$KIBANA_LOADBALANCER_PORT
http://10.75.182.51:80

Launch the browser and navigate to http://$KIBANA_LOADBALANCER_IP:$KIBANA_LOADBALANCER_PORT (for example, http://10.75.182.51:80 in the output above).
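
Optionally, Kibana reachability can also be confirmed from the Management server without a browser. This is a sketch that assumes curl is available and that the standard Kibana status API (/api/status) is exposed on the same endpoint; an HTTP 200 response indicates Kibana is serving requests.

# Optional: confirm Kibana responds before opening the browser (expect HTTP 200)
$ curl -s -o /dev/null -w "%{http_code}\n" http://$KIBANA_LOADBALANCER_IP:$KIBANA_LOADBALANCER_PORT/api/status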

2. Using Kibana, verify that log and tracer data is stored in Elasticsearch
  1. Navigate to the "Management" tab in Kibana.
  2. Click on "Index Patterns". The two patterns listed below should be visible, which confirms that log and tracer data has been stored in Elasticsearch successfully.
    1. jaeger-*
    2. logstash-*
  3. Type logstash* in the index pattern field and wait a few seconds.
  4. Verify that the "Success" message and the index pattern "logstash-YYYY.MM.DD" appear, then click "Next step".
  5. Select "I don't want to use the Time Filter" and click "Create index pattern".
  6. Ensure the indices appear in the main viewer frame of the web page.
  7. Click on the "Discover" tab; raw log records should be visible.
  8. Repeat steps 3-6 using "jaeger*" instead of "logstash*" to ensure the tracer data is also stored in Elasticsearch.
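
  As an optional cross-check from the Management server, the indices can also be listed directly from Elasticsearch. This is a sketch only: the Elasticsearch service lookup and the endpoint placeholders below are illustrative and must be replaced with the values exposed in your deployment, and it assumes curl is available.

    # Hypothetical: locate the Elasticsearch service exposed in the occne-infra namespace
    $ kubectl get services --namespace occne-infra | grep -i elastic

    # With the service's IP address and port substituted in, list the indices and confirm
    # that logstash-* and jaeger-* entries are present
    $ curl -s "http://<ELASTICSEARCH_IP>:<ELASTICSEARCH_PORT>/_cat/indices?v" | grep -E "logstash|jaeger"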
3. Verify Elasticsearch cluster health
  1. Navigate to "Dev Tools" in Kibana
  2. Enter the command "GET _cluster/health" and click the green arrow. The status shown on the right side of the screen should be "green".
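
  The same check that is run in Dev Tools can also be performed from the command line. This is a sketch that reuses the illustrative Elasticsearch endpoint placeholders from the previous step and assumes curl is available; the "status" field in the response should be "green".

    # Equivalent of "GET _cluster/health" from Dev Tools
    $ curl -s "http://<ELASTICSEARCH_IP>:<ELASTICSEARCH_PORT>/_cluster/health?pretty"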
4. Verify the Prometheus Alertmanager is accessible
  1. Run the following commands to get the load-balancer IP address and port number for the Alertmanager web interface.
    # LoadBalancer ip address of the alertmanager service is retrieved with below command  
    $ export ALERTMANAGER_LOADBALANCER_IP=$(kubectl get services occne-prometheus-alertmanager --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
      
    # LoadBalancer port number of the alertmanager service is retrieved with below command  
    $ export ALERTMANAGER_LOADBALANCER_PORT=$(kubectl get services occne-prometheus-alertmanager --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")
      
    # Complete url for accessing alertmanager in external browser
    $ echo http://$ALERTMANAGER_LOADBALANCER_IP:$ALERTMANAGER_LOADBALANCER_PORT
    http://10.75.182.53:80
  2. Launch the browser and navigate to http://$ALERTMANAGER_LOADBALANCER_IP:$ALERTMANAGER_LOADBALANCER_PORT (for example, http://10.75.182.53:80 in the output above). Ensure the Alertmanager GUI is accessible.
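
  In addition to the browser check, reachability can be confirmed from the Management server. This is a minimal sketch assuming curl is available and that the standard Alertmanager health endpoint (/-/healthy) is enabled; an HTTP 200 response indicates the Alertmanager is serving requests.

    # Optional: confirm the Alertmanager responds (expect HTTP 200)
    $ curl -s -o /dev/null -w "%{http_code}\n" http://$ALERTMANAGER_LOADBALANCER_IP:$ALERTMANAGER_LOADBALANCER_PORT/-/healthy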
5. Verify metrics are scraped and stored in the Prometheus server
  1. Run the following commands to get the load-balancer IP address and port number for the Prometheus server web interface.
    # LoadBalancer ip address of the prometheus service is retrieved with below command
    $ export PROMETHEUS_LOADBALANCER_IP=$(kubectl get services occne-prometheus-server --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
      
    # LoadBalancer port number of the prometheus service is retrieved with below command
    $ export PROMETHEUS_LOADBALANCER_PORT=$(kubectl get services occne-prometheus-server --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")
      
    # Complete url for accessing prometheus in external browser
    $ echo http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT
    http://10.75.182.54:80
  2. Launch the browser and navigate to http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT (for example, http://10.75.182.54:80 in the output above). Ensure the Prometheus server GUI is accessible.
  3. Select the "up" option from the "insert metric at cursor" drop-down and click the "Execute" button.
  4. The entries under the Element section are the scrape endpoints, and the Value section shows each endpoint's status (1 for up, 0 for down). Ensure all scrape endpoints have a value of 1 (up and running).
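
  The same "up" check can be scripted against the Prometheus HTTP API. This is a sketch assuming curl (and optionally jq) is available on the Management server; the query API (/api/v1/query) is a standard Prometheus endpoint.

    # Query the "up" metric through the Prometheus HTTP API; every scrape endpoint should report the value "1"
    $ curl -s "http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/api/v1/query?query=up"
      
    # If jq is installed, list any endpoints that are down (value "0"); no output means all targets are up
    $ curl -s "http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/api/v1/query?query=up" | jq '.data.result[] | select(.value[1] == "0") | .metric.instance'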
6. Verify Alerts are configured
  1. Navigate to the "Alerts" tab of the Prometheus server GUI, or browse directly to http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/alerts, where <PROMETHEUS_LOADBALANCER_IP> and <PROMETHEUS_LOADBALANCER_PORT> are the values retrieved in the previous step.
  2. If the configured alerts are listed in the "Alerts" tab of the Prometheus GUI, then the alerts are configured properly.
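
  The configured alerting rules can also be listed from the Management server through the Prometheus HTTP API. This is a sketch assuming curl is available; /api/v1/rules is a standard Prometheus endpoint, and the alerts shown in the GUI should appear in its output as well.

    # List the alerting rules loaded by Prometheus
    $ curl -s "http://$PROMETHEUS_LOADBALANCER_IP:$PROMETHEUS_LOADBALANCER_PORT/api/v1/rules"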
7. Verify Grafana is accessible and change the default password for the admin user
  1. Run the following commands to get the load-balancer IP address and port number for the Grafana web interface.
    # LoadBalancer ip address of the grafana service is retrieved with below command  
    $ export GRAFANA_LOADBALANCER_IP=$(kubectl get services occne-grafana --namespace occne-infra -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
      
    # LoadBalancer port number of the grafana service is retrieved with below command  
    $ export GRAFANA_LOADBALANCER_PORT=$(kubectl get services occne-grafana --namespace occne-infra -o jsonpath="{.spec.ports[*].port}")
      
    # Complete url for accessing grafana in external browser
    $ echo http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT
    http://10.75.182.55:80
  2. Launch the browser and navigate to http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT (for example, http://10.75.182.55:80 in the output above). Ensure the Grafana GUI is accessible. The default username and password for first-time access is admin/admin.
  3. On first connection to the Grafana dashboard, a 'Change Password' screen appears. Change the password to the customer-provided credentials.

    Note: Grafana data is not persisted, so if the Grafana service restarts for any reason, the change-password screen will appear again.

  4. Grafana dashboards can be accessed after changing the default password in the above step.
  5. Click on "New dashboard".
  6. Click on "Add Query".
  7. From the "Queries to" drop-down, select "Prometheus" as the data source. The presence of the "Prometheus" entry in the "Queries to" drop-down confirms that Grafana is connected to the Prometheus time-series database.
  8. In the Query section, enter sum by(__name__)({kubernetes_namespace="occne-infra"}), then click anywhere outside the text box and wait a few seconds. Ensure a chart appears in the top section of the page. This query shows all the metrics originating from the kubernetes namespace 'occne-infra', and the number of entries in each metric over time. Any valid PromQL query can be entered in the query section, for example sum($metricnamefromlist) or sum by(kubernetes_pod_name) ($metricnamefromlist{kubernetes_namespace="occne-infra"}), where $metricnamefromlist is one of the metric names returned by the query above. For more details about PromQL, refer to the Prometheus documentation.
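
  As an optional cross-check from the Management server, Grafana's health and data-source configuration can be queried over its HTTP API. This is a sketch assuming curl is available; /api/health and /api/datasources are standard Grafana endpoints, and <admin-password> is a placeholder for the customer-provided password set in the step above.

    # Grafana health endpoint; a JSON response containing "database": "ok" indicates the service is healthy
    $ curl -s http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT/api/health
      
    # List the configured data sources; a Prometheus entry confirms the connection used by the dashboards
    $ curl -s -u admin:<admin-password> http://$GRAFANA_LOADBALANCER_IP:$GRAFANA_LOADBALANCER_PORT/api/datasources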