Creating or Updating the YAML File for the Exporter Utility
For information about parameter descriptions, refer to exporter-custom-values.yaml Parameter Description.
1. In the exporter-custom-values.yaml file, update the global.image.repository parameter by providing the name of the docker registry where the cnc-nfdata-collector image is present.
2. Update the global.slaveNodeName parameter by providing the name of the kubernetes worker node that can store the data collected by this utility.
   To obtain the name of a worker node (slave node), run the kubectl get nodes command. The names of all the master and worker nodes of the kubernetes cluster are displayed in the NAME column. Provide the name of one of the worker nodes from this output, as shown in the sketch below.
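   For example, the check might look like the following; the node names and versions shown are hypothetical and depend on your cluster:

   ```
   $ kubectl get nodes
   NAME       STATUS   ROLES    AGE   VERSION
   master-1   Ready    master   90d   v1.17.4
   worker-1   Ready    <none>   90d   v1.17.4
   worker-2   Ready    <none>   90d   v1.17.4
   ```

   Here, worker-1 or worker-2 would be a valid value for global.slaveNodeName.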
3. Update the global.outputPath parameter by providing the path on the kubernetes worker node specified in global.slaveNodeName where the data is collected.
   Ensure that the path provided here already exists on the kubernetes worker node specified in global.slaveNodeName and is accessible by non-root users for read and write operations. The /tmp path is recommended because it satisfies these requirements on most kubernetes cluster nodes.
   The Exporter utility creates the exported-data_<time-stamp>/ directory in this path.
4. Update the global.capacityStorage parameter by specifying the estimated amount of space required for the collected data, for example, 2Gi or 200Mi.
   The value specified here is the size of the kubernetes persistence volume that is mounted to this utility to collect the data.
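   Putting steps 1 through 4 together, the relevant portion of exporter-custom-values.yaml might look like the following sketch. The registry, node name, and size are illustrative, and the nested layout for the dotted parameter names is an assumption based on the usual Helm values convention:

   ```
   global:
     image:
       repository: docker-registry.example.com   # registry where the cnc-nfdata-collector image is present (illustrative)
     slaveNodeName: worker-2                     # a worker node from 'kubectl get nodes'
     outputPath: /tmp                            # must already exist and be readable/writable by non-root users
     capacityStorage: 2Gi                        # estimated size of the collected data
   ```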
5. Update the global.storagePod parameter.
   Note: If you do not have access to the worker node where the data is collected, set this parameter to true. Otherwise, if you have access to the worker node, set it to false. When set to true, the system creates a storage pod and mounts it with the persistence volume that collects the data at the /volume path in the pod.
   To copy data from the storage pod, run the following kubectl cp command:

   ```
   kubectl cp exporter-storage-pod:/volume/exported-data_<time-stamp> ./ -n <namespace>
   ```

   Example:

   ```
   kubectl cp exporter-storage-pod:/volume/exported-data_2020-08-24_09:40:52 ./ -n ocnf-data-exporter
   ```
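   In YAML form, assuming the same nested layout as above:

   ```
   global:
     storagePod: true   # true when you cannot log in to the worker node; false when you can
   ```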
6. Update the global.elasticSearchURL parameter by specifying the URL for Elasticsearch.
   The FQDN of Elasticsearch can also be provided. If Elasticsearch requires authentication, it can be included in the URL as http://<user-name>:<password>@<elastic-search-url>:<elastic-search-port>.
   To find the FQDN, run kubectl get svc -n <namespace> and retrieve the Elasticsearch service name. The FQDN can then be constructed as http://<elastic-search-service-name>.<namespace>:<port>. The default Elasticsearch port is 9200. If authentication is required, ask the cluster administrator for the Elasticsearch user name and password.
   Note: Use the elasticsearch-client FQDN for Oracle Communications Cloud Native Environment (OCCNE).
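   For example, on OCCNE the value might look like the following; the occne-infra namespace is illustrative and should be replaced with the namespace reported by kubectl get svc:

   ```
   global:
     elasticSearchURL: http://elasticsearch-client.occne-infra:9200   # <service-name>.<namespace>:<port>
   ```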
7. Update the global.prometheusURL parameter by specifying the URL for the Prometheus server.
   The FQDN of the Prometheus server can also be provided. The Prometheus server URL can be provided as http://<prometheus-server-url>:<prometheus-server-port>.
   To find the FQDN, run kubectl get svc -n <namespace> and retrieve the Prometheus server service name. The FQDN can then be constructed as http://<prometheus-server-service-name>.<namespace>:<port>. The default Prometheus server port is 80. The namespace is not provided if the Prometheus server and the Exporter utility are in the same namespace.
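   For example, assuming a Prometheus service named prometheus-server in an occne-infra namespace (both illustrative):

   ```
   global:
     prometheusURL: http://prometheus-server.occne-infra:80   # omit '.occne-infra' if both are in the same namespace
   ```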
8. Update or remove the LogsExporter.inclusionFilters parameter as required.
   You must provide this parameter if you want to filter logs data. If you want to collect all the data, do not provide this parameter. Inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are treated as an AND rule, where AND and OR are logical operators. For example:

   ```
   LogsExporter.inclusionFilters:
   - node=worker-2,namespace=nssf
   - node=worker-2,namespace=pcf
   ```

   This filter is interpreted as: (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf).
   Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters. For information about logs inclusion filter keywords, refer to NF Specific Logs Inclusion Filter Keywords.
   Note: Log filters are applied to JSON fields, as sketched below.
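   As a sketch of what these filters match against, a collected log entry might carry JSON fields such as the following; the exact field set and values are hypothetical and depend on the NF and its logging pipeline:

   ```
   {
     "time": "2020-05-17T15:30:38Z",
     "node": "worker-2",
     "namespace": "nssf",
     "message": "..."
   }
   ```

   The filter node=worker-2,namespace=nssf would match this entry because both JSON fields satisfy the AND rule.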
9. Update or remove the LogsExporter.exclusionFilters parameter as required.
   You must provide this parameter only if you want to exclude some logs data. Exclusion filters accept an array of filters. Each array element is treated as an OR rule, and within each array element, comma-separated filters are treated as an AND rule, as described in Step 8, where AND and OR are logical operators. The only difference is that the logs matching the rules are excluded.
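   For example, a hypothetical exclusion filter that drops logs from one node and namespace combination, mirroring the inclusion syntax above:

   ```
   LogsExporter.exclusionFilters:
   - node=worker-1,namespace=pcf
   ```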
10. Update the global.interval parameter by specifying the interval, in hours, for which the data is required.
    The collected data covers the last interval hours counted back from now. For example, if the interval is set to 1, the data collector collects data for the last one hour.
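    In YAML form:

    ```
    global:
      interval: 1   # collect data for the last one hour
    ```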
11. Update or remove the global.pastTimeSpot parameter as required.
    This parameter, along with the interval parameter, specifies the time range for which the data has to be collected. It accepts time in the UTC format. By default, the Exporter utility collects the last one hour of data; this parameter can be used to override that default behavior. For example, if you want to generate the data from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z, set pastTimeSpot to 2020-05-17T17:30:38Z and interval to 2. The system considers the current time to be 2020-05-17T17:30:38Z and goes back 2 hours in time to collect the data ranging from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z, as in the sketch below.
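    The worked example above, expressed in YAML (nested layout assumed as before):

    ```
    global:
      pastTimeSpot: "2020-05-17T17:30:38Z"   # treated as 'now' (UTC)
      interval: 2                            # go back 2 hours: 15:30:38Z to 17:30:38Z
    ```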
12. Update or remove the MetricsExporter.inclusionFilters parameter as required.
    You must provide this parameter if you want to filter metrics data. If you want to collect all the metrics data, do not provide this parameter. Metrics inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are treated as an AND rule, where AND and OR are logical operators. For example:

    ```
    MetricsExporter.inclusionFilters:
    - node=worker-2,namespace=nssf
    - node=worker-2,namespace=pcf
    ```

    This filter is interpreted as: (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf).
    Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters. For information about metrics inclusion filter keywords, refer to NF Specific Metric Inclusion Filter Keywords.
    Note: Metrics filters are applied to metrics labels.
13. Update or remove the TracesExporter.inclusionFilters parameter as required.
    You must provide this parameter if you want to filter traces data. If you want to collect all the traces data, do not provide this parameter. Traces inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are treated as an AND rule, where AND and OR are logical operators. For example:

    ```
    TracesExporter.inclusionFilters:
    - node=worker-2,namespace=nssf
    - node=worker-2,namespace=pcf
    ```

    This filter is interpreted as: (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf).
    Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters.
    Note: Trace filters are applied to process tags.
14. Update or remove the TracesExporter.exclusionFilters parameter as required.
    You must provide this parameter only if you want to exclude some traces data. Exclusion filters accept an array of filters. Each array element is treated as an OR rule, and within each array element, comma-separated filters are treated as an AND rule, where AND and OR are logical operators. The only difference is that the traces matching the rules are excluded.
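Putting the section together, a filled-in exporter-custom-values.yaml might look like the following sketch. All values are illustrative, the nested layout for the global parameters is an assumption, and the flat dotted keys for the filter parameters follow the examples above:

```
global:
  image:
    repository: docker-registry.example.com
  slaveNodeName: worker-2
  outputPath: /tmp
  capacityStorage: 2Gi
  storagePod: false
  elasticSearchURL: http://elasticsearch-client.occne-infra:9200
  prometheusURL: http://prometheus-server.occne-infra:80
  interval: 2
  pastTimeSpot: "2020-05-17T17:30:38Z"

LogsExporter.inclusionFilters:
- node=worker-2,namespace=nssf
- node=worker-2,namespace=pcf

MetricsExporter.inclusionFilters:
- node=worker-2,namespace=nssf

TracesExporter.inclusionFilters:
- node=worker-2,namespace=nssf
```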