2 Observability Improvements - Logs using ELK Stack

This topic describes the troubleshooting procedures using the ELK Stack.

The ELK Stack is a collection of the following open-source products:
  • Elasticsearch: Elasticsearch is an open-source, full-text search and analysis engine based on the Apache Lucene search engine.
  • Logstash: Logstash is a log aggregator that collects data from various input sources, executes different transformations and enhancements, and then ships the data to various supported output destinations.
  • Kibana: Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data.

These components together are most commonly used for monitoring, troubleshooting, and securing IT environments. Logstash takes care of data collection and processing, Elasticsearch indexes and stores the data, and Kibana provides a user interface for querying the data and visualizing it.

2.1 Introduction

The ELK Stack is a collection of the following open-source products:
  • Elasticsearch
  • Logstash
  • Kibana

Elasticsearch is an open-source, full-text search and analysis engine based on the Apache Lucene search engine. Logstash is a log aggregator that collects data from various input sources, executes different transformations and enhancements, and then ships the data to various supported output destinations. Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data.

Together, these different components are most commonly used for monitoring, troubleshooting, and securing IT environments. Logstash takes care of data collection and processing, Elasticsearch indexes and stores the data, and Kibana provides a user interface for querying the data and visualizing it.

2.2 Architecture

This topic describes the architecture of the log aggregation setup using the ELK Stack.

The ELK Stack provides a comprehensive solution for handling all the required facets of log management: Logstash collects and processes the logs, Elasticsearch indexes and stores them, and Kibana provides the interface to query and visualize them.

Spring Cloud Sleuth also provides additional functionality to trace application calls by providing a way to create intermediate logging events. Therefore, the Spring Cloud Sleuth dependency must be added to the applications, as shown in the sketch below.
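
The exact dependency declaration depends on the build tool and the Spring Cloud release train in use. The following is a minimal sketch for a Maven pom.xml, assuming the dependency version is managed by a Spring Cloud BOM:

  <!-- Spring Cloud Sleuth starter; the version is assumed to be managed by the Spring Cloud BOM -->
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
  </dependency>

With this dependency in place, Sleuth adds trace and span identifiers to each log event, which the Logstash configuration later in this chapter parses into the trace and span fields.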

2.3 Setting up ELK Stack

This topic describes the systematic instructions to download, run, and access the ELK Stack.

Download ELK Stack

  1. Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch.
  2. Download Kibana from https://www.elastic.co/downloads/kibana.
  3. Download Logstash from https://www.elastic.co/downloads/logstash.

    Note:

    The default port for Elasticsearch is 9200, and the default port for Kibana is 5601.

2.3.1 Run ELK Stack

This topic describes the systematic instructions to run the ELK Stack.

Perform the following steps:
  1. Run the elasticsearch.sh file present in the folder path /scratch/software/ELK/elasticsearch-6.5.1/bin.
  2. Configure Kibana to point to the running Elasticsearch instance in the kibana.yml file.
  3. Configure Logstash using the following sections (a complete sample logstash.conf is provided in Setup and Start Logstash):
    1. Input: This configuration is used to provide the log file location for Logstash to read from.
    2. Filter: This configuration is used to control or format the read operation (line-by-line or bulk read).
    3. Output: This configuration is used to provide the running Elasticsearch instance to which the parsed data is sent for persistence.

    Figure 2-3 Logstash Configuration



2.3.1.1 Start Elasticsearch

This topic provides systematic instructions to start Elasticsearch.

  1. Navigate to Elasticsearch root folder.
  2. Use nohup to start the Elasticsearch process.
    > nohup ./bin/elasticsearch
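    Once the process is up, Elasticsearch responds on its default port (see the note above). As a quick sanity check, assuming Elasticsearch is listening on localhost:9200, the following request returns the cluster information:
    > curl http://localhost:9200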
2.3.1.2 Setup and Start Logstash

This topic provides the systematic instructions to set up and start Logstash.

  1. Create a new logstash.conf file that provides the required file parsing and integration for Elasticsearch.
    logstash.conf:
    #Point to the application logs
    input {
     file {
      type => "java"
      path => "/scratch/app/work_area/app_logs/*.log"
      codec => multiline {
       pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
       negate => "true"
       what => "previous"
      }
     }
    }
    #Provide the parsing logic to transform logs into JSON
    filter {
     #If log line contains tab character followed by 'at' then we will tag that entry as stacktrace
     if [message] =~ "\tat" {
      grok {
       match => ["message", "^(\tat)"]
       add_tag => ["stacktrace"]
      }
     }
    
     #Grokking Spring Boot's default log format
     grok {
      match => [ "message",
                 "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
                 "message",
                 "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
               ]
     }
     #Pattern matching the logback pattern
     grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+\[%{DATA:environment}\]\s+\[%{DATA:tenant}\]\s+\[%{DATA:user}\]\s+\[%{DATA:branch}\]\s+%{DATA:pid}\s+---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
     }
     #Parsing out timestamps which are in the timestamp field thanks to the previous grok sections
     date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
     }
    }
    #Ingest logs to Elasticsearch
    output {
     elasticsearch { hosts => ["localhost:9200"] }
     stdout { codec => rubydebug }
    }
  2. Start the Logstash process using the below command.
    >nohup ./bin/logstash -f logstash.conf
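    With the default settings of the elasticsearch output plugin, Logstash writes the parsed log events into daily logstash-* indices. As a quick sanity check, assuming Elasticsearch is listening on localhost:9200 as configured above, the index list can be queried:
    >curl "http://localhost:9200/_cat/indices?v"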
2.3.1.3 Setup and Start Kibana

This topic provides the systematic instructions to set up and start Kibana.

  1. Navigate to the kibana.yml available under <kibana_setup_folder>/config.
  2. Modify the file to include the below:
    #Uncomment the below line and update the IP address to your host machine IP.
    server.host: "xx.xxx.xxx.xx"
    #Provide the Elasticsearch URL. If Elasticsearch is running on the same machine, the below config can be used as is.
    elasticsearch.url: "http://localhost:9200"
  3. Start the Kibana process using the below command.
    >nohup ./bin/kibana
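    As a quick check that Kibana has started, assuming the default Kibana port of 5601 mentioned in the note above, the status endpoint can be queried:
    >curl http://localhost:5601/api/status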

2.3.2 Access Kibana

This topic describes how to access Kibana.
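
Once the ELK Stack is up, Kibana is accessed from a browser using the host configured as server.host in kibana.yml and the Kibana port, for example, http://<kibana_host>:5601 when the default port is used (<kibana_host> is a placeholder for the actual host name or IP address).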

2.3.3 Kibana Logs

This topic describes how to set up, search, and export logs in Kibana.

Setup Dynamic Log Levels in Oracle Banking Microservices Architecture Services without Restart

The plato-logging-service depends on two tables, which need to be present in the PLATO schema (JNDI name: jdbc/PLATO). The two tables are as follows:
  • PLATO_DEBUG_USERS: This table indicates whether dynamic logging is enabled for a user for a particular service. Each record holds a DEBUG_ENABLED value of Y or N for a user and a service, and plato-logger enables dynamic logging based on that value.
  • PLATO_LOGGER_PARAM_CONFIG: This table contains the key-value entries of the different parameters that can be changed at runtime for dynamic logging.

    Figure 2-7 PLATO_LOGGER_PARAM_CONFIG



    The values that can be passed are as follows:

    • LOG_PATH: Specifies the path where the log files are stored. Changing this value at runtime changes the location of the log files. If this value is not passed, the LOG_PATH value is taken by default from the -D parameter plato.service.logging.path.
    • LOG_LEVEL: Specifies the logging level at runtime, for example, INFO or ERROR. The default value can be set in logback.xml.
    • LOG_MSG_WITH_TIME: Setting this to Y appends the current date to the log file name. Setting this to N does not append the current date to the file name.

Search for Logs in Kibana

Search for logs in Kibana as described in https://www.elastic.co/guide/en/kibana/current/search.html.
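
As an illustration only, the field names produced by the grok patterns in the sample logstash.conf (such as severity and service) can be used to filter the results; for example, a Lucene query such as the following narrows the results down to errors from a single service (<service-name> is a placeholder):

  severity:ERROR AND service:<service-name>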

Export Logs for Tickets

Perform the following steps to export logs:

  1. Click Share from the top menu bar.
  2. Select the CSV Reports option.
  3. Click the Generate CSV button.