4 Security Logging and Visualization

vSTP provides SS7 Firewall Logging support. The logging support provides a holistic view of all transactions happening on interconnects and helps in identifying possible threats.

The logging data is presented through the Kibana visualization platform, which is designed to work with Elasticsearch.

Feature Description

The vSTP Logging and Visualization feature generates and sends log messages from the vSTP MP to an external Kibana visualization server. The logs for the specified variables are displayed to users.

The Logging and Visualization functionality provides the following features:

  • Data storage: The log messages are stored with data indexing.
  • Search mechanisms: Data search and data filtering are performed through data indexing.
  • Dashboards: Information is displayed and analyzed through various dashboards.
In addition, it is important to note the following points with respect to the Logging and Visualization functionality:
  • Per MP, 10K basic GTT traffic is supported with logging
  • Per MP, 2.5K SFAPP traffic is supported with logging
  • Per site, 50K messages can be logged

Overview

vSTP Logging and Visualization generates and sends log messages from the SCCP and SFAPP servers to an external visualization server. The log messages are converted into JSON format with data enrichment for enhanced visualization. The logging functionality is divided into the following tasks:
  • SCCP/SFAPP Task: This task includes:
    • Copying all the required fields into the logging event, in the format present on vSTP
    • Sending the logging event to the logging task
  • Logging Task: This task includes:
    • Fetching data from logging event
    • Performing data transformation, filling location information and category type
    • Writing the data in a .CSV file
    The feature provides a dashboard view of logs.
  • Visualization Task: This task includes presenting the log data through dashboards. The user needs to install and configure the following modules on the Visualization server:
    • Elasticsearch
    • Filebeat
    • Kibana

Logging Rate and TPS supported per MP:

  • Per MP, 10K basic GTT traffic is supported with logging
  • Per MP, 2.5K SFAPP traffic is supported with logging
  • Per site, 50K messages can be logged

Supported Operation Codes

The following lists define the OpCodes that are supported with vSTP Logging and Visualization.


Category 1

This category includes messages that should only be received from within the same network and/or are unauthorized at interconnect level, and should not be sent between operators unless there is an explicit bilateral agreement between the operators to do so.

Following is the list of vulnerable category 1 opcodes:
  • provideRoamingNumber
  • sendParameters
  • registerSS
  • eraseSS
  • activateSS
  • deactivateSS
  • interrogateSS
  • registerPassword
  • getPassword
  • processUnstructuredSS-Data
  • sendRoutingInfo
  • sendRoutingInfoForGprs
  • sendIdentification
  • sendIMSI
  • processUnstructuredSS-Request
  • unstructuredSS-Request
  • unstructuredSS-Notify
  • anyTimeModification
  • anyTimeInterrogation
  • sendRoutingInfoForLCS
  • subscriberLocationReport

Category 2

This category includes messages that should only be received from a visiting subscriber's home network. These should normally only be received from an inbound roamer's home network.

Following is the list of vulnerable category 2 opcodes:
  • provideRoamingNumber
  • provideSubscriberInfo
  • provideSubscriberLocation
  • insertSubscriberData
  • deleteSubscriberData
  • cancelLocation
  • getPassword
  • reset
  • unstructuredSS-Request
  • unstructuredSS-Notify
  • informServiceCentre

Category 3

This category includes messages that should only be received from the subscriber’s visited network. Specifically, MAP packets that are authorized to be sent on interconnects between mobile operators.

Following is the list of vulnerable category 3 opcodes:
  • updateLocation
  • updateGprsLocation
  • sendParameters
  • registerSS
  • eraseSS
  • activateSS
  • deactivateSS
  • interrogateSS
  • registerPassword
  • processUnstructuredSS-Data
  • mo-forwardSM
  • mt-forwardSM
  • beginSubscriberActivity
  • restoreData
  • processUnstructuredSS-Request
  • purgeMS
  • sendRoutingInfoForSM
  • sendAuthenticationInfo
  • reportSmDeliveryStatus
  • NoteMM-Event

Feature Configuration

MMI Managed Objects for Security Logging and Visualization

MMI information associated with Security Logging and Visualization support is accessed from a DSR NOAM or SOAM from Main Menu, and then MMI API Guide.

Once the MMI API Guide is opened, use the application navigation to locate specific vSTP managed object information.

The following table lists the managed objects and operations supported for security logging and visualization support:

Table 4-1 Security Logging and Visualization support Managed Objects and Supported Operations

Managed Object Name Supported Operations
linksets Insert, Update, Delete
securitylogconfig Update

linksets

For this feature, the securityLogging parameter is added to the linkset MO.

The allowed values for this parameter with their interpretation are:
  • OFF: No logging is done for traffic running through the linkset.
  • ALL: All messages on the particular linkset are logged.
  • RISKY: Only risky opcode messages coming on that linkset are logged.

The example output for Display of linkset MO:

{
    "asNotification": true,
    "asls8": false,
    "cgGtmod": false,
    "configurationLevel": "32",
    "enableBroadcastException": true,
    "gttmode": "Fcd",
    "islsrsb": 1,
    "ituTransferRestricted": false,
    "l2TimerSetName": "Default",
    "l3TimerSetName": "Default",
    "linkTransactionsPerSecond": 10000,
    "linksetAccMeasOption": "No",
    "localSignalingPointName": "LSP1",
    "name": "Linkset777",
    "numberSignalingLinkAllowedThreshold": 1,
    "numberSignalingLinkProhibitedThreshold": 1,
    "randsls": "Off",
    "remoteSignalingPointName": "RSP777",
    "routingContext": 8,
    "rsls8": false,
    "securityLogging": "All",
    "slsci": false,
    "slsrsb": 1,
    "type": "M3ua"
}
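
As an illustration, logging can be turned on for an existing linkset by updating its securityLogging parameter through the MMI API. The sketch below is hypothetical: the resource URI, API version, and credentials are assumptions, so confirm them against the MMI API Guide before use.

  # Hypothetical MMI REST update; the /vstp/linksets URI, API version, and credentials are assumptions.
  # Sets RISKY logging on Linkset777 (securityLoggingFeature must already be On globally).
  curl -k -u guiadmin:<password> \
    -X PUT \
    -H "Content-Type: application/json" \
    -d '{"securityLogging": "Risky"}' \
    "https://<SOAM_VIP>/mmi/dsr/v4.0/vstp/linksets/Linkset777"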

securitylogconfig

The securitylogconfig MO manages all the attributes essential for Security Logging and Visualization support. The following table describes these parameters:

Table 4-2 securitylogconfig MO Parameters

Parameter Name Description
securityLoggingFeature This is the global parameter for the feature. Users must enable this parameter before configuring the securityLogging parameter for a linkset.

When disabled, no logging is done on any linkset. Also, the other parameters of this MO can be modified only after disabling this parameter.

Allowed values: On, Off
siteIdentifier This parameter identifies the logging site. The value entered here is included in the generated .CSV logs and can be used to identify the logging site.

Allowed values: Alphanumeric characters of maximum length 20

logMpDirPath The path on the MP where the .CSV logs are temporarily formed before they are transferred to the SOAM.

Example: /var/TKLC/db/filemgmt/securityLog

logFileTimeout The maximum time interval, in seconds, that the MP waits before opening a new .CSV log file.

Allowed Values: Integer values from 60-120

maxLogsPerFile The maximum number of messages logged in a single .CSV log file before it is closed and a new one is started.

Allowed Values: Integer values from 600000-3000000

minDiskSpaceForLogging The minimum disk space required for logging, as a percentage of the available disk space in the file management area. If the available disk space falls below the configured percentage, an alarm is raised.

Allowed Values: Integer values from 10-100

The example output for Display of securitylogconfig MO:

{
    "logFileTimeout": 90,
    "logMpDirPath": "/var/TKLC/db/filemgmt/securityLog",
    "maxLogsPerFile": 1500000,
    "minDiskSpaceForLogging": 30,
    "securityLoggingFeature": "On",
    "siteIdentifier": "ABC"
}
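
Before any linkset-level logging takes effect, the global securityLoggingFeature must be set to On. The following sketch is hypothetical: the MMI resource URI, API version, and credentials are assumptions, so verify them against the MMI API Guide.

  # Hypothetical MMI REST update for the securitylogconfig MO; URI, API version, and credentials are assumptions.
  curl -k -u guiadmin:<password> \
    -X PUT \
    -H "Content-Type: application/json" \
    -d '{"securityLoggingFeature": "On"}' \
    "https://<SOAM_VIP>/mmi/dsr/v4.0/vstp/securitylogconfig"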

GUI Configuration

The Security Logging and Visualization functionality can be configured from Active System OAM (SOAM) using the following steps:
  1. On the Active System OAM (SOAM), select VSTP > Configuration > Security Log Config.

  2. On the Security Log Config page, perform the configurations that govern the functionality of security logging in the file directory of the SOAM. For more details, refer to the Security Log Config section in the vSTP User's Guide.
  3. On the Active System OAM (SOAM), select Diameter Common > Visualization Server.

    Figure 4-1 Visualization Server Page


    The following table describes the key parameters on this page:

    Table 4-3 Visualization Server Parameter Description

    Parameter Description Allowed Values

    Task Name: Name of the task. Allowed Values: Alphanumeric characters of maximum length 32.

    Hostname: List of IPv4 addresses of the remote servers for log transfer. A maximum of 8 remote servers can be configured.

    Username: Username to access the remote servers. Allowed Values: Alphanumeric character words of maximum length 10.

    Key Exchange Status: Shows the key exchange status of the remote servers with the SO. This field cannot be edited.

    Source Directory: Name of the source directory. Allowed Values: VSTP or DSA.

    Note: The VSTP option is displayed in the dropdown when the Security Logging Feature is enabled in VSTP using the VSTP > Configuration > Security Log Config GUI page.

    Upload Frequency: Time interval at which logs are exported from the SOAM to the remote server. This field cannot be edited.

    Using this page, you can configure the IP addresses (IPv4) of the remote servers and perform the SSH key exchange between the SO and the remote servers so that the export of logs (.CSV) runs without intervention later; a command-line check of the key-based access is sketched after these steps. The remote servers (ELK) must share a common username and password combination, as the GUI screen allows a single username for all the remote servers.

    After filling in all the required details on the GUI screen and performing the SSH key exchange, the log files present in the source directory of the SOAM are moved to the destination directory of the remote server every 2 minutes.

    The page supports Insert, Edit, Delete, and SSH Key Exchange operations.

  4. This completes the logging and visualization feature configurations for vSTP.
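
The GUI performs the SSH key exchange, but the resulting key-based access can also be verified from the command line. A minimal sketch, assuming the remote ELK server IP address and username configured on the Visualization Server page (both placeholders):

  # Verify password-less SSH from the SO to a remote ELK server; a successful run prints the
  # message below without prompting for a password.
  ssh -o BatchMode=yes <elk_user>@<elk_server_ip> 'echo key-based login OK'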

ELK Installation and Configuration

This section describes the procedures to install and configure ELK (Elasticsearch, Logstash, and Kibana).

Note:

ELK is third-party software (not included as part of the DSR software) and has to be installed, configured, and maintained separately from vSTP.

ELK VM Profile Requirement

The following are the specifications for ELK VM Profile:
  • vCPU – 16
  • RAM – 32 GB
  • Disk – 60TB

ELK VM Nodes Recommendation

The following tables describe the recommended VM configurations for 10K or 50K TPS.

For 10K TPS, two VMs are recommended with the following configuration:

Table 4-4 VM Configurations

      Master Nodes   Data Nodes   Kibana   Ingestion Node   Logstash
VM1   Yes            Yes          Yes      Yes              Yes
VM2   Yes            Yes          No       No               Yes
For 50K TPS, 6 VMs are recommended with the following configuration:

Table 4-5 VM Configurations

      Master Nodes   Data Nodes   Kibana   Ingestion Node   Logstash
VM1   Yes            Yes          Yes      Yes              -
VM2   Yes            Yes          -        -                Yes
VM3   Yes            Yes          -        -                Yes
VM4   -              Yes          -        -                Yes
VM5   -              Yes          -        -                Yes
VM6   -              Yes          -        -                Yes
Elasticsearch

This section describes the installation and configuration of Elasticsearch:

Elasticsearch Installation
Perform the following steps to install Elasticsearch on the Visualization server:
  1. Create a directory to keep all visualization-related RPM(s) using the following command:
    mkdir visualization
  2. Enter the newly created directory in step 1 using the following command:
    cd visualization
  3. Download elasticsearch-7.6.2-x86_64.rpm using the following command:
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm

    Note:

    If the “wget” module is not installed in the system, install it using the “yum install wget” command.
  4. Download the published checksum of elasticsearch-7.6.2-x86_64.rpm using the following command:
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm.sha512
  5. Compare the SHA of the downloaded RPM and the published checksum using the following command:
    shasum -a 512 -c elasticsearch-7.6.2-x86_64.rpm.sha512

    Note:

    The output must be elasticsearch-{version}-x86_64.rpm: OK. Otherwise, there is an issue with the RPM, and it is recommended to download a fresh RPM.
  6. Install RPM using the following command:
    sudo rpm --install elasticsearch-7.6.2-x86_64.rpm
  7. Verify whether or not the Elasticsearch RPM is successfully installed using the following command:
    rpm -qa | grep elasticsearch
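
Optionally, Elasticsearch can be enabled to start automatically at boot; the RPM installation does not do this by itself. A minimal sketch using standard systemd commands:

  # Reload systemd units and enable Elasticsearch to start on boot.
  sudo systemctl daemon-reload
  sudo systemctl enable elasticsearch.service
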
Elasticsearch Configuration
Perform the following steps to configure Elasticsearch after it is installed:
  1. Open the elasticsearch configuration file using the following command:
    vim /etc/elasticsearch/elasticsearch.yml
  2. Update the following fields in elasticsearch.yml:
    1. [Optional] cluster.name: Any name can be given to the cluster. By default, my-application is the name of the cluster.
    2. [Optional] node.name: Any name can be given to the elasticsearch node. By default, node-x is the name of the node, where x is 1, 2, 3, ... N.
    3. [Mandatory] network.host is the IP address of the given node.
    4. [Optional] http.port is the port on which elasticsearch listens. By default, port 9200 is assigned to elasticsearch.
    5. [Mandatory] cluster.initial_master_nodes is the most important setting when starting the cluster for the first time. It is the IP address of the node that is initially selected as the master node.
    6. [Mandatory] discovery.zen.ping.unicast.hosts is the list of IP addresses of the nodes in elasticsearch.
    7. In jvm.options, increase the heap size: -Xms6g
  3. Start elasticsearch using the following command:
    systemctl start elasticsearch.service
  4. Verify that a cluster ID is assigned after starting the cluster.
    1. Open http://IP_ADDRESS_OF_NODE:9200 in the browser.
    2. Verify that a cluster_uuid is assigned to the cluster.
      The following example shows a sample elasticsearch.yml configuration:
      cluster.name: vstp
      node.name: node-1
      path.data: /var/lib/elasticsearch
      path.logs: /var/log/elasticsearch
      network.host: 10.75.xx.yy
      http.port: 9200
      cluster.initial_master_nodes: ["10.75.xx.yy"]
      discovery.zen.ping.unicast.hosts: ["10.75.xx.yy","10.75.xx.zz"]
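
Instead of a browser, the cluster state can also be checked with curl from any node. A minimal sketch, assuming the node IP address and default port from the sample configuration above:

  # Query the Elasticsearch root endpoint; the JSON response contains cluster_name and cluster_uuid.
  curl -s http://10.75.xx.yy:9200
  # Optionally check the overall cluster health.
  curl -s http://10.75.xx.yy:9200/_cluster/health?pretty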
      
Logstash

This section describes the installation and configuration of Logstash.

Logstash Installation
Perform the following steps to install Logstash on the Visualization server:
  1. Download logstash-7.4.1.rpm using the following command:
    wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.1.rpm

    Note:

    If the “wget” module is not installed in the system, install it using the “yum install wget” command.
  2. Install RPM using the following command:
    sudo rpm --install logstash-7.4.1.rpm
Logstash Configuration
Perform the following steps to configure Logstash on the Visualization server:
  1. Create a logstash config file for .CSV.
    input {
      file {
        mode => "read"
        #input file path
        path => "/var/log/input/*.csv"
        start_position => "beginning"
        file_completed_action => "delete"
        sincedb_path => "/dev/null"
      }
    }
    filter {
      csv {
          separator => ","
          columns => ["TIMESTAMP","OPC","DPC","MSGTYPE","NI","CGRI","CGTT","CGNP","CGNAI","CGPC","CGADDR","CGSSN","CDRI","CDTT","CDNP","CDNAI","CDPC","CDADDR","CDSSN","LSET","MSISDN","IMSI","Atype","Asubtype","Cat","Classification","OpCode","CGLOC","CDLOC","CGCN","CDCN","ACN","OTID","DTID","pkgtype","SMRPOA","SMRPDA","VLR"]
          skip_header => "true"
      }
    }
    output {
       elasticsearch {
         #elastic node ip and port
         hosts => "http://10.75.xx.xx:9200"
         #index name
         index => "visual_vstp1"
      }
    }
    

    Note:

    • Use a separate index name for each Logstash instance
    • The index name should be of the form: visual_vstp
  2. In logstash.yml, configure pipeline.workers as 32 and pipeline.batch.size as 500 (see the sketch after these steps).
  3. In jvm.options, increase the heap space: -Xms10g
  4. Start Logstash using the following command: systemctl start logstash
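
A minimal sketch of the tuning referred to in steps 2 and 3; the file locations assume a default RPM installation of Logstash and are not stated in this guide:

  # /etc/logstash/logstash.yml (assumed default path) - pipeline tuning from step 2
  pipeline.workers: 32
  pipeline.batch.size: 500

  # /etc/logstash/jvm.options (assumed default path) - heap setting from step 3
  # (-Xmx matching -Xms is a common practice, not mandated by this guide)
  -Xms10g
  -Xmx10g
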
Kibana

This section describes the installation and configuration of Kibana.

Kibana Installation

Perform the following steps to install Kibana on the Visualization server:

  1. Create a directory to keep all the visualization related RPM(s) using the following command:
    mkdir visualization
  2. Enter the newly created directory in step 1 using the following command:
    cd visualization
  3. Download kibana-7.4.1-x86_64.rpm using the following command:
    wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.1-x86_64.rpm

    Note:

    If the wget module is not installed in the system, install it using the yum install wget command.
  4. Install RPM using the following command:
    sudo rpm --install kibana-7.4.1-x86_64.rpm
  5. Verify whether or not the Kibana RPM is successfully installed using the following command:
    rpm -qa | grep kibana
Kibana Configuration
Perform the following steps to configure Kibana after it is installed:
  1. Open the kibana configuration file using the following command:
    vim /etc/kibana/kibana.yml
  2. Update the following fields in kibana.yml:
    1. [Mandatory] server.host is the IP address of the host.
    2. [Mandatory] elasticsearch.hosts is the IP address of the host on which the elasticsearch module is running. In this architecture, Elasticsearch, Kibana, and Filebeat run on the same instance/VM.
    3. [Mandatory] logging.dest is used to redirect the Kibana log. stdout is the default option.
      The following example shows the sample configuration:
      server.port: 5601
      server.host: "10.75.xx.yy"
      elasticsearch.hosts: ["http://10.75.xx.yy:9200"]
      logging.dest: /var/log/kibana/kibana.log

      Note:

      Before redirecting the log, verify that the /var/log/kibana directory exists. Otherwise, kibana cannot restart.
  3. Restart kibana using the following command:

    systemctl restart kibana.service

  4. Verify that kibana is successfully started using the following command:
    systemctl status kibana.service
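
In addition to systemctl, the Kibana status API can be queried to confirm that the service is responding. A minimal sketch, assuming the host IP address and default port from the sample configuration:

  # Query the Kibana status API (default port 5601); a running instance returns a JSON status document.
  curl -s http://10.75.xx.yy:5601/api/status
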
Kibana Dashboard
Perform the following steps to access Kibana Dashboard:
  1. Access the default dashboard from the browser using the following URL:
    http://IP:PORT
    where IP is the coordinating node IP in which kibana is installed and Port is 5601 (default).
  2. Click Management
  3. Click Saved Objects
  4. Click Import
  5. Browse the exported file.
  6. The dashboard displays the logging information as shown in the following illustration:

    Figure 4-2 Kibana Dashboard

Elasticsearch Curator

The Elasticsearch Curator helps to clear older logs for an index pattern.

Perform the following steps to install the curator:
  1. Install the Elasticsearch curator as described at the following link:

    https://www.elastic.co/guide/en/elasticsearch/client/curator/current/installation.html

  2. Create a CRON job to delete the indices automatically on a daily basis, as shown in the crontab example after these steps.

    crontab -e

  3. Add the following command to the job:

    /usr/bin/curator /root/curator/delete.yaml --config /root/curator/curator.yml
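
The following hypothetical crontab entry illustrates how the command from step 3 can be scheduled to run once a day; the 01:00 schedule is an assumption, so pick a time that suits the deployment:

  # Run the curator delete action daily at 01:00 (illustrative schedule).
  0 1 * * * /usr/bin/curator /root/curator/delete.yaml --config /root/curator/curator.yml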

Below is a sample of the /root/curator/delete.yaml action file:

actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for vstp-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: pattern
        kind: regex
        value: vstp*          # specify the regex of the index pattern
        exclude:
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 30

The client and logging settings below belong in the /root/curator/curator.yml configuration file referenced by the --config option:

client:
  hosts:
    - 10.75.xx.yy
  port: 9200
logging:
  loglevel: INFO
  logfile: "/root/curator/logs/actions.log"
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
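
Before scheduling the job, the action file can be exercised without deleting anything by using curator's dry-run mode, which only logs the indices that would be affected:

  # Dry run: report the indices matching the filters without deleting them.
  /usr/bin/curator --dry-run --config /root/curator/curator.yml /root/curator/delete.yaml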

Alarms and Measurement

Alarms

The following table lists the alarms specific to the Security Logging and Visualization support for vSTP:
Alarm ID Alarm Name
70437 VstpSecuLogEventQueue
70438 VstpSecuLogError
70439 VstpSecuLogFetchError
70440 VstpSecuLogRemoteServerError

For more details related to Alarms, refer to Alarms and KPIs Guidelines document.

Measurements

The following table lists the measurements specific to the Security Logging and Visualization support for vSTP:
Measurement ID Measurement Name
21977 VstpSecuLogDiscQueueFull
21978 VstpSecuLogQueuePeak
21979 VstpSecuLogQueueAvg
21980 VstpSecuLogRate
21981 VstpSecuLogRatePeak
21982 VstpSecuLogRateAvg

For more details related to measurements, refer to Measurement Reference document.

Troubleshooting

In case of error scenarios, the measurements specific to the Security Logging and Visualization feature are pegged. For information related to these alarms and measurements, see Alarms and Measurement.

Dependencies

The Security Logging and Visualization feature for vSTP has no dependency on any other vSTP operation.

Consider the following points while configuring this feature:
  • If an MP crashes and does not come up, the log files present on that MP are lost.
  • If Logstash crashes and does not come up, the log files present on that Logstash instance are lost.
  • The VM profile does not have enough space on the SOAM to store more than 30 minutes of logs at 50K site TPS. Hence, if the transfer of logs to the remote server fails, logging may stop due to low disk space (a quick disk-space check is sketched below).
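
As a simple operational check related to the last point and to the minDiskSpaceForLogging parameter, the free space in the file management area can be monitored with standard commands. A minimal sketch, assuming the example logMpDirPath used earlier in this chapter:

  # Show free space in the file management area (path taken from the logMpDirPath example).
  df -h /var/TKLC/db/filemgmt
  # List the security log files (ls -lt sorts newest first, so tail shows the oldest still pending transfer).
  ls -lt /var/TKLC/db/filemgmt/securityLog | tail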