5 Collecting Data Using CNC NF Data Collector
The following procedures provide information about how to start both the data collector modules and generate the output.
Using the NF Deployment Data Collector Module
- Ensure that you have privileges to access the system and run kubectl and Helm commands (a verification sketch follows this list).
- Prepare CNC NF Data Collector as described in Downloading and Preparing CNC NF Data Collector.
- Perform this procedure on the same machine (Bastion Host) where the NF is deployed using Helm or kubectl.
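The following is a minimal sketch showing how these prerequisites can be verified from the Bastion Host. The namespace name is an assumption and must be replaced with the namespace of your NF deployment:
# Confirm that kubectl and Helm are available and that the cluster is reachable
kubectl get nodes
helm version
# Confirm that the NF deployment is visible (replace <nf-namespace> with the actual namespace)
kubectl get pods -n <nf-namespace>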
Start the NF Logs, Metrics, Traces, Alerts Data Collector Module
Table 5-1 Start the NF Logs, Metrics, Traces, Alerts Data Collector Module
Task Sequence | Task Name | Description | Section Reference |
---|---|---|---|
1 | Pre-deployment task | Use this task to remove the residual or remaining resources of the previous deployment. | Pre-deployment Tasks of Logs, Metrics, Traces, Alerts Data Collector Module |
2 | Configuration of Exporter utility | Use this task to configure the Exporter utility and to customize the exporter-custom-values.yaml file. | Creating or Updating the YAML File for the Exporter Utility |
3 | Starting of Exporter utility | Use this task to start the Exporter utility on the same site from which the data needs to be collected. | Exporting the NF Logs, Metrics, Traces, and Alerts Data |
4 | Check the output of Exporter utility | Use this task to verify the output of the Exporter utility. | Sample Content of the Exporter Utility Custom values.yaml |
5 | Transfer the data collected by Exporter utility | Use this task to transfer the data collected by the Exporter utility. | Transferring the Data Collected by Exporter Utility |
6 | Configuration of Loader utility | Use this task to configure the Loader utility and to customize the loader-custom-values.yaml file. | Creating or Updating the YAML File for the Loader Utility |
7 | Starting of Loader utility | Use this task to start the Loader utility on the data collected from the site. | Loading the NF Logs, Metrics, Traces, Alerts Data |
8 | View NF Logs, Metrics, and Traces | Use this task to view the data collected from the site. | Viewing the NF Logs, Metrics, and Traces |
9 | Delete the logs, metrics, traces data | Use this task to delete the data after the analysis is complete. Note: This is an optional task. | Deleting the NF Logs, Traces, and Metrics |
Removing the Pending Resources from Namespace
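As a hedged illustration only (the release and namespace names below are assumptions, not values defined by the tool), residual resources from a previous data collector deployment can be checked and removed with commands similar to the following:
# List resources that may remain from a previous run (replace <namespace> as appropriate)
kubectl get jobs,pods,pvc -n <namespace>
# Remove a leftover Helm release of the data collector, if one exists
helm uninstall <previous-data-collector-release> -n <namespace>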
Creating or Updating the YAML File for the Exporter Utility
For information about parameter descriptions, refer to exporter-custom-values.yaml Parameter Description.
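As a hedged sketch (the file names follow the conventions used elsewhere in this document and may differ in your package), the custom values file can be created from the packaged template and then edited with your environment-specific values:
# Copy the packaged template and customize it for your environment
cp exporter-custom-values.yaml exporter-custom-values-<release-number>.yaml
vi exporter-custom-values-<release-number>.yaml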
Examples of Logs, Metrics, and Traces Inclusion Filters
The following examples describe how to use inclusion filters in different scenarios:
Examples of Logs Inclusion Filters
Scenario 1
{"node":"worker-1"}
{"node":"worker-2"}
For the aforementioned logs, if you want to collect logs of only the worker-2 node, then you must use the inclusion filter as follows:
LogsExporter.inclusionFilters:
- node=worker-2
Scenario 2
{"node":"worker-1","namespace":"nssf"}
{"node":"worker-2","namespace":"nssf"}
{"node":"worker-2","namespace":"pcf"}
For the aforementioned logs, if you want to collect logs of the worker-2 node and the Network Slice Selection Function (NSSF) namespace, then you must use the inclusion filter as follows:
LogsExporter.inclusionFilters:
- node=worker-2,namespace=nssf
Scenario 3
{"node":"worker-1","namespace":"nssf"}
{"node":"worker-2","namespace":"nssf"}
{"node":"worker-2","namespace":"pcf"}
{"node":"worker-2","namespace":"nrf"}
For the aforementioned logs, if you want to collect logs of the worker-2 node in the Network Slice Selection Function (NSSF) and Policy Control Function (PCF) namespaces, then you must use the inclusion filter as follows:
LogsExporter.inclusionFilters:
- node=worker-2,namespace=nssf
- node=worker-2,namespace=pcf
Examples of Metrics Inclusion Filters
Scenario 1
http_request{"node":"worker-1"}
http_request{"node":"worker-2"}
For the aforementioned metrics, if you want to collect metrics of only the worker-2 node, then you must use the inclusion filter as follows:
MetricsExporter.inclusionFilters:
- node=worker-2
Examples of Traces Inclusion Filters
Scenario 1
{ "traceID": "75e9080f1ab45425", "spanID": "75e9080f1ab45425", "operationName": "10.178.246.56:32333", "startTime": 1582020558473011, "startTimeMillis": 1582020558473, "duration": 14763, "process": { "serviceName": "ocnssf-nsgateway-ocnssf","tags": [{"key": "node", "type": "string", "value": "worker-2"}]}}
{ "traceID": "75e9080f1ab45425", "spanID": "75e9080f1ab45425", "operationName": "10.178.246.56:32333", "startTime": 1582020558473011, "startTimeMillis": 1582020558473, "duration": 14763, "process": { "serviceName": "ocnssf-nsgateway-ocnssf","tags": [{"key": "node", "type": "string", "value": "worker-1"}]}}
For the aforementioned traces, if you want to collect traces of only the worker-2 node, then you must use the inclusion filter as follows:
TracesExporter.inclusionFilters:
- node=worker-2
exporter-custom-values.yaml Parameter Description
The following table describes the parameters of the exporter-custom-values.yaml file.
Table 5-2 exporter-custom-values.yaml Parameters
Parameter | Description | Default Value | Range or Possible Value |
---|---|---|---|
global.image.repository | Specifies the name of the Docker registry that contains the cnc-nfdata-collector image. | - | - |
global.slaveNodeName | Specifies the name of the Kubernetes slave (worker) node where the data collected by the tool is stored. To obtain the name of the slave or worker node, run the kubectl get nodes command. | - | - |
global.outputPath | Creates the exported-data/ directory in the specified path. This directory can be copied from the slave node, or from the exporter-storage-pod if the global.storagePod parameter is enabled. Note: Ensure that the path provided here already exists on the Kubernetes slave specified in global.slaveNodeName. | /tmp | - |
global.capacityStorage | Specifies the estimated amount of space to be occupied by the collected data, for example, 2Gi, 200Mi, and so on. | 5Gi | 1Gi, 4Gi, 500Mi, 10Gi, and so on |
global.storagePod | Enables the storage pod mounted with a persistence volume. When the value is set to true, a storage pod is deployed and the path provided in global.outputPath is mounted inside the pod at /volume. This pod can be used to copy the collected data without logging in to the Kubernetes slave. | false | true or false |
global.elasticSearchURL | Specifies the URL of the Elasticsearch server. The FQDN of Elasticsearch can also be provided. If Elasticsearch requires authentication, it can be provided in the URL as: http://<user-name>:<password>@<elastic-search-url>:<elastic-search-port> Note: In case of OCCNE, use the elasticsearch-client FQDN. | - | - |
global.prometheusURL | Specifies the URL of the Prometheus server. The FQDN of the Prometheus server can also be provided. If the Prometheus server requires authentication, it can be provided in the URL as: http://<user-name>:<password>@<prometheus-url>:<prometheus-port> | - | - |
LogsExporter.inclusionFilters | Provides the comma-separated JSON key values that must be present in the logs to be collected. For information about logs inclusion filter keywords, refer to NF Specific Logs Inclusion Filter Keywords. Example: - vendor=oracle,application=ocnssf | - | - |
LogsExporter.exclusionFilters | Specifies the list of field key values that must not exist in the logs to be collected. Example: - audit_logs=true | - | - |
global.interval | Specifies the interval, in hours, for which the data is required. The collected data covers the last interval hours from now. Example: If the interval is set to 1, the data collector collects data for the last one hour. | 1 | - |
global.pastTimeSpot [Optional Parameter] | This parameter, along with the interval parameter, specifies the time range for which the data has to be collected. It accepts time in the UTC format. By default, the Exporter utility collects the last one hour of data; this parameter can be used to override that behavior. Example: If you want to generate the data starting from 2020-05-17T17:30:38Z and going back 2 hours in time, the collected data ranges from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z. | - | - |
MetricsExporter.step | Provides the step, in seconds, used to generate the metrics data. When the step value is set to 30, metrics data is generated for each metric at an interval of 30 seconds. | 30 | - |
MetricsExporter.inclusionFilters | Provides the comma-separated labels that must be present in the metrics to be collected. For information about metrics inclusion filter keywords, refer to NF Specific Metrics Inclusion Filter Keywords. Example: - application=ocnssf | - | - |
TracesExporter.inclusionFilters | Provides the comma-separated tags that must be present in the traces to be collected. Example: - vendor=oracle,application=ocnssf | - | - |
TracesExporter.exclusionFilters | Provides the comma-separated tags that must not be present in the traces to be collected. Example: - exclude_field=true | - | - |
Sample custom file
global:
  # Registry where the cnc-nfdata-collector image is present.
  image:
    repository: reg-1:5000
  # Host machine is the slave node on which this job is scheduled. Make sure the path is already present on the node.
  outputPath: /tmp
  # Name of the slave where fetched data can be kept.
  slaveNodeName: k8s-slave-node-1
  # Storage to be allocated to persistence
  capacityStorage: 30Gi
  # Mount slave path with pod
  storagePod: false
  # Mention the URL of elasticSearch here.
  elasticSearchURL: "http://10.75.226.21:9200"
  # Mention the URL of prometheus here.
  prometheusURL: "http://10.75.226.49"
  # Time range for which data should be fetched
  interval: "24" # IN HOURS
  # In case data other than the last few hours from now is required.
  #pastTimeSpot: "2020-05-17T15:30:38Z"
LogsExporter:
  # Enable to fetch logs data
  enabled: true
  # Provide the list of JSON key values which must exist in the logs to be fetched
  inclusionFilters: |
    - vendor=oracle,application=ocnssf
  # Provide the list of JSON key values which must not exist in the logs to be fetched
  exclusionFilters: |
    - audit_logs=true
  # Default REGEX value for this param is '^.*$' which means select all the indices.
  match: '^.*$'
MetricsExporter:
  # Enable to fetch metrics data
  enabled: true
  # Provide the list of labels which must exist in the metrics to be fetched
  inclusionFilters: |
    - application=ocnssf
  # Timestamp difference between two data points in seconds
  step: "30"
TracesExporter:
  # Enable to fetch traces data
  enabled: true
  # Provide the list of tags which must exist in the traces to be fetched
  inclusionFilters: |
    - vendor=oracle,application=ocnssf
  # Provide the list of tags which must not exist in the traces to be fetched
  exclusionFilters: |
    - exclude_field=true
  # Default REGEX value for this param is '^.*$' which means select all the indices.
  match: '^jaeger.*$'
Exporting the NF Logs, Metrics, Traces, and Alerts Data
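A minimal sketch of starting the Exporter utility with Helm is shown below; the release name, chart directory, and namespace are assumptions, not names mandated by the tool:
# Start the Exporter utility using the customized values file
helm install nf-data-exporter <exporter-chart-directory> -f exporter-custom-values-<release number>.yaml -n <namespace>
# Watch the exporter pods until the collection completes
kubectl get pods -n <namespace> -w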
Result
The Exporter utility creates the exported-data_<current-date>_<current-time> directory in the path provided in the global.outputPath parameter of the exporter-custom-values-<release number>.yaml file.
Sample Content of the Exporter Utility Custom values.yaml
Sample output:
[root@k8s-cluster-master tmp]# du -sh exported-data_2020-07-03_08:15:38/*
232K exported-data_2020-07-03_08:15:38/logs
2.2G exported-data_2020-07-03_08:15:38/metrics
88K exported-data_2020-07-03_08:15:38/traces
[root@k8s-cluster-master tmp]# ls exported-data_2020-07-03_08:15:38/logs
jaeger-service-2020-06-28.json jaeger-service-2020-07-02.settings.json jaeger-span-2020-07-02.json logstash-2020.06.28.template.json logstash-2020.07.01.settings.json logstash-2042.01.06.json
jaeger-service-2020-06-28.mapping.json jaeger-service-2020-07-02.template.json jaeger-span-2020-07-02.mapping.json logstash-2020.06.29.mapping.json logstash-2020.07.01.template.json logstash-2042.01.06.mapping.json
[root@k8s-cluster-master tmp]# ls exported-data_2020-07-03_08:15:38/metrics
metrics-dump_ThreadPoolExecutor-0_0.json metrics-dump_ThreadPoolExecutor-0_16.json metrics-dump_ThreadPoolExecutor-0_22.json metrics-dump_ThreadPoolExecutor-0_29.json
[root@k8s-cluster-master tmp]# ls exported-data_2020-07-03_08:15:38/traces
jaeger-service-2020-06-28.json jaeger-service-2020-06-30.json jaeger-service-2020-07-02.json jaeger-span-2020-06-28.json jaeger-span-2020-06-30.json
Transferring the Data Collected by Exporter Utility
If the global.storagePod parameter is enabled, the path provided in global.outputPath is mounted inside the storage pod at /volume, and this pod can be used to copy the collected data. The data must be extracted and moved to a location where it can be reviewed.
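For example, a hedged sketch of copying the data out, assuming the storage pod is named exporter-storage-pod (as referenced for global.storagePod) and runs in <namespace>:
# Copy the collected data from the storage pod without logging in to the slave node
kubectl cp <namespace>/exporter-storage-pod:/volume ./exported-data
# Alternatively, when the storage pod is not enabled, copy directly from the slave node
scp -r <user>@<slave-node>:<global.outputPath>/exported-data_<current-date>_<current-time> .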