exporter-custom-values.yaml Parameter Description
The following table describes the parameters that can be customized in the exporter-custom-values.yaml file.
Table 5-1 exporter-custom-values.yaml parameters
Parameter | Description | Default Value | Range or Possible Value |
---|---|---|---|
global.image.repository | Specifies the name of the Docker registry that contains the cnc-nfdata-collector image. | - | - |
global.slaveNodeName | Specifies the name of the Kubernetes slave (worker) node that stores the collected data. To obtain the name of the slave or worker node, run the kubectl get nodes command. | - | - |
global.outputPath | Creates the exported-data/ directory in the specified path. The collected data can be copied from the slave node, or from the exporter-storage-pod if the global.storagePod parameter is enabled. Note: Ensure that the path provided here already exists on the Kubernetes slave node specified in global.slaveNodeName. | /tmp | - |
global.capacityStorage | Specifies the estimated amount of space to be occupied by the collected data, for example, 2Gi, 200Mi, and so on. | 5Gi | 1Gi, 4Gi, 500Mi, 10Gi, and so on |
global.storagePod | Enables the storage pod mounted with a persistent volume. When the value is set to true, a storage pod is deployed and the path provided in global.outputPath is mounted inside the pod at /volume. This pod can be used to copy the collected data without logging in to the Kubernetes slave node. | false | true/false |
global.elasticSearchURL | Specifies the URL of the Elasticsearch server. The FQDN of Elasticsearch can also be provided. If Elasticsearch requires authentication, it can be provided in the URL as http://<user-name>:<password>@<elastic-search-url>:<elastic-search-port> (see the example after this table). | - | - |
global.prometheusURL | Specifies the URL of the Prometheus server. The FQDN of the Prometheus server can also be provided. If the Prometheus server requires authentication, it can be provided in the URL in the same format as for Elasticsearch (see the example after this table). | - | - |
LogsExporter.inclusionFilters | Provides comma-separated JSON key-value pairs that must be present in the logs to be collected. Example: vendor=oracle,application=ocnssf | - | - |
LogsExporter.exclusionFilters | Specifies the list of JSON key-value pairs that must not exist in the logs to be collected. Example: audit_logs=true | - | - |
LogsExporter.timeFilterFieldName | Specifies the name of the JSON time field. | @timestamp | - |
global.interval | Specifies the interval, in hours, for which the data is required. The collected data covers the last interval hours counted back from now. Example: If the interval is set to 1, the data collector collects the data for the last one hour. | 1 | - |
global.pastTimeSpot [Optional Parameter] | This parameter, along with the interval parameter, specifies the time range for which the data has to be collected. It accepts time in the UTC format. By default, the Exporter utility collects the last one hour of data; this parameter can be used to override that behavior. Example: If pastTimeSpot is set to 2020-05-17T17:30:38Z and interval is set to 2, the utility goes back 2 hours from that time and collects the data ranging from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z (see the variant after the sample file below). | - | - |
LogsExporter.match | Scans a limited number of Elasticsearch indices by using a regex. If you are not aware of the indices, use the default regex, which scans all the indices. | '^.*$' | - |
LogsExporter.nodeTLSRejectedUnauthorized | Enables or disables certificate verification. | true | true/false |
MetricsExporter.step | Provides the step, in seconds, at which metrics data is generated. When the step value is set to 30, metrics data is generated for each metric at an interval of 30 seconds. | 30 | - |
MetricsExporter.inclusionFilters | Provides comma-separated labels that must be present in the metrics to be collected. Example: application=ocnssf | - | - |
TracesExporter.match | Scans a limited number of Elasticsearch indices by using a regex. By default, Jaeger traces are stored in indices starting with jaeger. If you are not aware of the indices, use a regex that scans all the indices. | '^jaeger.*$' | - |
TracesExporter.inclusionFilters | Provides comma-separated tags that must be present in the traces to be collected. Example: vendor=oracle,application=ocnssf | - | - |
TracesExporter.exclusionFilters | Provides comma-separated tags that must not be present in the traces to be collected. Example: exclude_field=true | - | - |
TracesExporter.nodeTLSRejectedUnauthorized | Enables or disables certificate verification. | true | true/false |
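For example, if both backends require basic authentication, the credentials can be embedded directly in the URLs as described for global.elasticSearchURL above. The host names, ports, user names, and passwords below are placeholders for illustration only, and applying the same URL format to Prometheus is an assumption based on the Elasticsearch description:

global:
  # Placeholder hosts and credentials; replace with your own values.
  elasticSearchURL: "http://elastic-user:elastic-password@elasticsearch.example.com:9200"
  # Assumes Prometheus accepts credentials in the same URL format as Elasticsearch.
  prometheusURL: "http://prom-user:prom-password@prometheus.example.com:80"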
Sample custom file
global:
  # Registry where the cnc-nfdata-collector image is present.
  image:
    repository: reg-1:5000
  # Host machine is the slave node on which this job is scheduled. Make sure the path already exists on the node.
  outputPath: /tmp
  # Name of the slave node where the fetched data is kept.
  slaveNodeName: k8s-slave-node-1
  # Storage to be allocated to the persistent volume.
  capacityStorage: 30Gi
  # Mount the slave path inside the storage pod.
  storagePod: false
  # Mention the URL of Elasticsearch here.
  elasticSearchURL: "http://10.75.226.21:9200"
  # Mention the URL of Prometheus here.
  prometheusURL: "http://10.75.226.49"
  # Time range (in hours) for which data should be fetched.
  interval: "24" # IN HOURS
  # In case data other than the last few hours from now is required.
  #pastTimeSpot: "2020-05-17T15:30:38Z"
LogsExporter:
  # Enable to fetch logs data.
  enabled: true
  # Provide the list of JSON key values which must exist in the logs to be fetched.
  inclusionFilters: |
    - vendor=oracle,application=ocnssf
  # Provide the list of JSON key values which must not exist in the logs to be fetched.
  exclusionFilters: |
    - audit_logs=true
  # Default REGEX value for this param is '^.*$', which means select all the indices.
  match: '^.*$'
MetricsExporter:
  # Enable to fetch metrics data.
  enabled: true
  # Provide the list of labels which must exist in the metrics to be fetched.
  inclusionFilters: |
    - application=ocnssf
  # Timestamp difference between two data points, in seconds.
  step: "30"
TracesExporter:
  # Enable to fetch traces data.
  enabled: true
  # Provide the list of tags which must exist in the traces to be fetched.
  inclusionFilters: |
    - vendor=oracle,application=ocnssf
  # Provide the list of tags which must not exist in the traces to be fetched.
  exclusionFilters: |
    - exclude_field=true
  # Default REGEX value for this param is '^jaeger.*$', which selects indices starting with jaeger.
  match: '^jaeger.*$'
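As a variation on the sample above, the following global fragment enables the storage pod and pins the collection window with pastTimeSpot. The time values are taken from the pastTimeSpot example in the table; the assumption, based on that example, is that pastTimeSpot marks the most recent end of the window and interval counts back from it. With storagePod set to true, the collected data can be copied from the /volume path inside the exporter-storage-pod instead of from the slave node.

global:
  # Enable the storage pod so data can be copied without slave node login.
  storagePod: true
  # Assumed semantics (from the table example): collect the 2 hours ending at
  # pastTimeSpot, that is, 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z.
  interval: "2" # IN HOURS
  pastTimeSpot: "2020-05-17T17:30:38Z"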