B Configuring Visualization Server
Perform the following procedure to create the Elasticsearch, Logstash, and Kibana (ELK) stack.
- Install the following RPMs (a sample installation sketch follows this list):
- Elasticsearch: On all the nodes
- Logstash: Only on the master and data nodes
- Kibana: Only on the ingestion node
- Elasticsearch curator: Only on the master and data nodes
- Rsync: Only on the master and data nodes
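The exact repository setup and package versions depend on the environment; the following is a minimal sketch, assuming the Elastic yum repository is already configured on each node and that the standard package names apply:
# On all nodes
yum install -y elasticsearch
# On the master and data nodes only
yum install -y logstash elasticsearch-curator rsync
# On the ingestion node only
yum install -y kibana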
- Update the /etc/elasticsearch/elasticsearch.yml configuration file for Elasticsearch:
- cluster.name: Name of the stack
- node.name: Hostname of the node
- network.host: IPv4 address of the node
- node.data: true if it is a data node, else false
- node.master: true if it is a master node, else false
- discovery.seed_hosts: Contains the IPv4 addresses of all other nodes in the stack (master node, data node, and ingestion node)
- cluster.initial_master_nodes: On the ingestion node, specify all the master node IP addresses
- gateway.recover_after_nodes: Minimum number of master nodes that must be available before processing
Sample “/etc/elasticsearch/elasticsearch.yml”:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: vstp
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: node-1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 10.75.219.169
# Set a custom port for HTTP:
http.port: 9200
node.master: true
node.data: true
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
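After Elasticsearch is started on each node, cluster membership and health can be verified with the cluster health API; a minimal check, using the node address and port from the sample above:
curl -s "http://10.75.219.169:9200/_cluster/health?pretty"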
- Update the /etc/logstash/conf.d/logstash.conf configuration file for Logstash on the master and data nodes:
input {
  file {
    mode => "read"
    path => "/var/log/dummy3/*.csv"              # path of the directory where the logs are present
    start_position => "beginning"
    codec => plain { charset => "ISO-8859-1" }
    file_completed_action => "delete"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [message] =~ /^\s*$/ {
    drop { }
  }
  mutate {
    gsub => ["message", "\t", ""]
  }
  grok {
    match => {"message" => "%{GREEDYDATA:TIME},%{WORD:CM_NAME},%{WORD:Cat},%{WORD:OPERMODE},%{WORD:MSGTYPE},%{NOTSPACE:SESSION_ID},%{INT:CMD_CODE},%{INT:APP_ID},%{WORD:PEER_NAME},%{WORD:SUBSCRIBERTYPE},%{INT:IMSI},%{INT:MCC},%{NOTSPACE:ORIG_HOST},%{NOTSPACE:ORIG_REALM},%{NOTSPACE:DEST_HOST},%{NOTSPACE:DEST_REALM},%{INT:PLMN_ID},%{GREEDYDATA:ERRORTEXT}"}
  }
  mutate {
    remove_field => [ "message" ]
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => [ "http://A:B:C:D:9200" ]           # host IP addresses
    index => "dsa"                               # index where all the data is captured; use it on Kibana to get all the logs
  }
}
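Before restarting Logstash, the edited pipeline can be checked for syntax errors; a minimal sketch, assuming the default RPM install location of the Logstash binary:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit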
Note:
Only the path, hosts, and index fields must be updated. The rest of the details remain the same for DSA.
- Update the following mandatory fields in /etc/kibana/kibana.yml (a sample snippet follows this list):
- server.host: The IP address of the host.
- elasticsearch.hosts: The IP address of the host on which the Elasticsearch module is running. In this architecture, Elasticsearch and Kibana run on the same instance/VM.
- logging.dest: Used to redirect the Kibana log. “stdout” is the default option.
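As an illustration only, a minimal /etc/kibana/kibana.yml sketch assuming the single-VM layout described above and the example address used in the Elasticsearch sample; replace the IP address with the actual host:
server.host: "10.75.219.169"
elasticsearch.hosts: ["http://10.75.219.169:9200"]
logging.dest: stdout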
- Follow these steps on the Kibana GUI:
- By default, Kibana runs on port 5601.
- Go to the Kibana GUI and navigate to Management, and then Kibana, and then Create index pattern. It displays all the existing indices where data has been generated.
- Click Next step, and then select @timestamp.
- Click Create index pattern. The index pattern is now created and the data can be seen in the Discover tab.
- When the index pattern is created, import the sample visualizations first (visual_MCC_Cat.ndjson, visual_top_imsi.ndjson), and then import the sample dashboard from the dsa package.
- On Kibana, navigate to Management, and then Saved Objects, and then Import.
- Elasticsearch curator: Curator helps clear the older logs for an index pattern.
Command: curator /root/curator/delete.yaml --config /root/curator/curator.yml
A cron job needs to run Curator periodically to clear the data. Run crontab -e and add the following entry:
* */2 * * * /usr/bin/curator /root/curator/delete.yaml --config /root/curator/curator.yml
Sample “/root/curator/delete.yaml” file:
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for tomcat-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: dsa                 # specify the regex of the index pattern
      exclude:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 30
Sample “/root/curator/curator.yml” file:
client:
  hosts:
    - A:B:C:D                    # IP address of the system
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
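Before adding the cron entry, the delete action can be previewed without removing any indices by using Curator's dry-run mode with the same files; a minimal sketch:
/usr/bin/curator --dry-run --config /root/curator/curator.yml /root/curator/delete.yaml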
- Some recommendations to improve the performance of the server:
- Use a separate index name for each Logstash instance.
- The index name should be of the form visual_dsa*.
- Example: visual_dsa1, visual_dsa2, and so on.
- In logstash.yml, configure pipeline.workers as 32 and pipeline.batch.size as 500 (the reference setup has 16 vCPUs); see the consolidated sketch after this list.
- In the Logstash jvm.options file, increase the heap space:
- -Xms10g
- -Xmx10g
- In the Elasticsearch jvm.options file, increase the heap size:
- -Xms6g
- -Xmx6g
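A minimal sketch of these tuning entries, assuming the default RPM file locations; adjust the values to the vCPUs and memory actually available:
# /etc/logstash/logstash.yml
pipeline.workers: 32
pipeline.batch.size: 500

# /etc/logstash/jvm.options
-Xms10g
-Xmx10g

# /etc/elasticsearch/jvm.options
-Xms6g
-Xmx6g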
Note:
After changing any of the configuration files, the services must be restarted; otherwise, the configuration changes do not take effect.
- systemctl restart logstash
- systemctl restart elasticsearch
- systemctl restart kibana
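After the restarts, a quick check that all three services came back up can be done with systemd; a minimal sketch:
systemctl is-active elasticsearch logstash kibana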
Note:
Get all the sample configuration files from the dsa package.