B Configuring Visualization Server

Perform the following procedure to create the LogServer (logstorage, logparser, loggui) stack.
  1. Install the following RPMs:
    • logstorage: On all nodes
    • logparser: Only on the master and data nodes
    • loggui: Only on the ingestion node
    • logstorage curator: Only on the master and data nodes
    • rsync: Only on the master and data nodes
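    The placement rules above can be sketched as a small shell helper. This is an illustration only: the RPM package names (in particular logstorage-curator) are assumptions, not confirmed names from the DSA package.

    ```shell
    # Return the RPM set for a given node role, following the placement
    # rules above. Package names are illustrative assumptions.
    packages_for_role() {
      case "$1" in
        master|data) echo "logstorage logparser logstorage-curator rsync" ;;
        ingestion)   echo "logstorage loggui" ;;
        *)           echo "unknown role: $1" >&2; return 1 ;;
      esac
    }

    packages_for_role master
    packages_for_role ingestion
    ```

    A node could then be provisioned with something like yum install $(packages_for_role master).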
  2. Update the /etc/logstorage/logstorage.yml configuration file for logstorage search:
    • cluster.name: Name of the stack
    • node.name: Hostname of the node
    • network.host: IPv4 address of the node
    • node.data: Set to true if the node is a data node; otherwise false
    • node.master: Set to true if the node is a master node; otherwise false
    • discovery.seed_hosts: IPv4 addresses of all the other nodes in the stack (master, data, and ingestion nodes)
    • cluster.initial_master_nodes: On the ingestion node, specify the IP addresses of all the master nodes
    • gateway.recover_after_nodes: Minimum number of master nodes that should be available before processing begins
    Sample “/etc/logstorage/logstorage.yml”
    # ---------------------------------- Cluster -----------------------------------
    # Use a descriptive name for your cluster:
    cluster.name: vstp
    # ------------------------------------ Node ------------------------------------
    # Use a descriptive name for the node:
    node.name: node-1
    # ----------------------------------- Paths ------------------------------------
    # Path to directory where to store the data (separate multiple locations by comma):
    path.data: /var/lib/logstorage
    # Path to log files:
    path.logs: /var/log/logstorage
    # Set the bind address to a specific IP (IPv4 or IPv6):
    network.host: 10.75.219.169
    # Set a custom port for HTTP:
    http.port: 9200
    node.master: true
    node.data: true
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    cluster.initial_master_nodes: ["node-1"]
    # Block initial recovery after a full cluster restart until N nodes are started:
    #gateway.recover_after_nodes: 3
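    The sample above is for a combined master/data node. On a data-only node, the same file would differ only in the role flags and the discovery list. The fragment below is a sketch; the bracketed addresses are placeholders to be replaced with your deployment's values.

    ```yaml
    # logstorage.yml fragment for a data-only node (placeholder addresses)
    node.name: node-2
    network.host: <data-node-IPv4>
    node.master: false
    node.data: true
    # IPv4 addresses of all the other nodes in the stack:
    discovery.seed_hosts: ["<master-IPv4>", "<ingestion-IPv4>"]
    ```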
    
  3. Update the /etc/logparser/conf.d/logparser.conf configuration file for logparser on the master and data nodes.
    input {
      file {
        mode => "read"
        path => "/var/log/dummy3/*.csv"    # path of the directory where the logs are present
        start_position => "beginning"
        codec => plain {
          charset => "ISO-8859-1"
        }
        file_completed_action => "delete"
        sincedb_path => "/dev/null"
      }
    }
    filter {
      if [message] =~ /^\s*$/ {
        drop { }
      }
      mutate {
        gsub => ["message", "\t", ""]
      }
      grok {
        match => {"message" => "%{GREEDYDATA:TIME},%{WORD:CM_NAME},%{WORD:Cat},%{WORD:OPERMODE},%{WORD:MSGTYPE},%{NOTSPACE:SESSION_ID},%{INT:CMD_CODE},%{INT:APP_ID},%{WORD:PEER_NAME},%{WORD:SUBSCRIBERTYPE},%{INT:IMSI},%{INT:MCC},%{NOTSPACE:ORIG_HOST},%{NOTSPACE:ORIG_REALM},%{NOTSPACE:DEST_HOST},%{NOTSPACE:DEST_REALM},%{INT:PLMN_ID},%{GREEDYDATA:ERRORTEXT}"}
      }
      mutate {
        remove_field => [ "message" ]
      }
      if "_grokparsefailure" in [tags] {
        drop { }
      }
    }
    output {
      logstorage {
        hosts => [ "http://A:B:C:D:9200" ]    # host IP addresses
        index => "dsa"    # index where all the data is captured; used on loggui to retrieve the logs
      }
    }
    

    Note:

    Only the path, hosts, and index fields must be updated. The rest of the details remain the same for DSA.
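    The grok pattern above defines 18 comma-separated captures, from TIME through ERRORTEXT, so every input line must carry exactly 18 fields. A quick local sanity check (the sample line below is made-up illustrative data, not real DSA output):

    ```shell
    # A made-up line in the layout the grok pattern expects:
    line='2023-01-01 10:00:00,CM1,ROUTING,ACTIVE,REQ,sess-1,272,16777251,peer1,HOME,404001012345678,404,orig.host,orig.realm,dest.host,dest.realm,40401,no error'

    # The pattern defines 18 captures, so a matching line must have 18 fields:
    echo "$line" | awk -F',' '{print NF}'
    ```

    Lines with a different field count are tagged _grokparsefailure and dropped by the filter above.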
  4. Update the following mandatory fields in /etc/loggui/loggui.yml.

    server.host is the IP address of the host.

    logstorage.hosts is the IP address of the host on which the logstorage module is running. In this architecture, logstorage and loggui run on the same instance/VM.

    logging.dest is used to redirect the loggui logs; “stdout” is the default option.
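    Putting the three fields together, a minimal loggui.yml might look like the fragment below. The bracketed addresses are placeholders, and the exact value format is an assumption; check it against the sample file in the DSA package.

    ```yaml
    # /etc/loggui/loggui.yml (placeholder values)
    server.host: "<ingestion-node-IPv4>"
    # logstorage and loggui run on the same instance in this architecture:
    logstorage.hosts: ["http://<ingestion-node-IPv4>:9200"]
    logging.dest: stdout
    ```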

  5. Follow these steps on the loggui GUI:
    1. By default, loggui runs on port 5601.
    2. Go to the loggui GUI and navigate to Management, and then loggui, and then Create index pattern.
      It displays all the existing indices where data has been generated.
    3. Click Next Step, and then select @timestamp.
    4. Click Create Index pattern.
      The index pattern is now created, and the data can be seen on the Discover tab.
    5. When the index is created, import the sample visualization first (visual_MCC_Cat.ndjson, visual_top_imsi.ndjson), and then import the sample dashboard from the DSA package.
    6. On loggui, navigate to Management, and then Saved Objects, and then Import.
  6. Logstorage curator: Curator helps clear older logs for an index pattern.
    Command:
      curator /root/curator/delete.yaml --config /root/curator/curator.yml
    Run a cron job to invoke curator periodically to clear the data.
    Run crontab -e and add the following line (the schedule below runs curator every two hours):
    0 */2 * * * /usr/bin/curator /root/curator/delete.yaml --config /root/curator/curator.yml
    Sample “/root/curator/delete.yaml” file:
    actions:
      1:
        action: delete_indices
        description: >-
          Delete indices older than 30 days (based on index name), for tomcat-
          prefixed indices. Ignore the error if the filter does not result in an
          actionable list of indices (ignore_empty_list) and exit cleanly.
        options:
          ignore_empty_list: True
          timeout_override:
          continue_if_exception: False
          disable_action: False
        filters:
        - filtertype: pattern
          kind: regex
      value: dsa    # specify the regex of the index pattern
          exclude:
        - filtertype: age
          source: creation_date
          direction: older
          unit: days
          unit_count: 30
    Sample “/root/curator/curator.yml” file:
    client:
      hosts:
        - A:B:C:D    # IP address of the system
      port: 9200
      url_prefix:
      use_ssl: False
      certificate:
      client_cert:
      client_key:
      ssl_no_validate: False
      http_auth:
      timeout: 30
      master_only: False
    logging:
      loglevel: INFO
      logfile:
      logformat: default
    blacklist: ['logstorage', 'urllib3']
    
  7. Some recommendations to increase the performance of the server:
    • Use a separate index name for each logparser.
    • The index name should be of the form visual_dsa*, for example visual_dsa1, visual_dsa2, and so on.
    • In logparser.yml, configure pipeline.workers as 32 and pipeline.batch.size as 500 (this setup has 16 vCPUs).
    • In jvm.options of logparser, increase the heap space:
      • -Xms10g
      • -Xmx10g
    • In jvm.options of logstorage, increase the heap size:
      • -Xms6g
      • -Xmx6g
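    The tuning bullets above translate to file edits like the following. The file paths assume Logstash/Elasticsearch-style layouts for logparser and logstorage; adjust them to your installation.

    ```
    # /etc/logparser/logparser.yml (16-vCPU setup)
    pipeline.workers: 32
    pipeline.batch.size: 500
    ```

    ```
    # /etc/logparser/jvm.options
    -Xms10g
    -Xmx10g

    # /etc/logstorage/jvm.options
    -Xms6g
    -Xmx6g
    ```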

    Note:

    After changing the configuration files, restart the services; otherwise, the changes will not take effect.
    • systemctl restart logparser
    • systemctl restart logstorage
    • systemctl restart loggui

    Note:

    Get all the sample configuration files from the DSA package.