exporter-custom-values.yaml Parameter Description
This section describes the parameters that can be configured in the exporter-custom-values.yaml file.
Table 5-1 exporter-custom-values.yaml parameters
Parameter | Description | Default Value | Range or Possible Value |
---|---|---|---|
global.image.repository | Specifies the name of the Docker registry that contains the cnc-nfdata-collector image. | - | - |
global.slaveNodeName | Specifies the name of the k8s slave node that stores the collected data. To obtain the name of the slave or worker node, run the kubectl get nodes command. | - | - |
global.outputPath | Creates the exported-data/ directory in the specified path, which can be copied from the slave node, or from the exporter-storage-pod if the global.storagePod parameter is enabled. Note: Ensure that the path provided here already exists on the k8s slave node specified in global.slaveNodeName. | /tmp | - |
global.capacityStorage | Specifies the estimated amount of space to be occupied by the collected data, for example, 2Gi, 200Mi, and so on. | 5Gi | 1Gi, 4Gi, 500Mi, 10Gi, and so on |
global.storagePod | Enables the storage pod mounted with a persistence volume. When the value is set to true, a storage pod is deployed and the path provided in global.outputPath is mounted inside the pod at /volume. This pod can be used to copy the collected data without logging in to the k8s slave. | false | true/false |
global.elasticSearchURL | Specifies the URL for Elasticsearch. The FQDN of Elasticsearch can also be provided. If Elasticsearch requires authentication, it can be provided in the URL as: http://<user-name>:<password>@<elastic-search-url>:<elastic-search-port> | - | - |
global.prometheusURL | Specifies the URL for the Prometheus server. The FQDN of the Prometheus server can also be provided. If the Prometheus server requires authentication, it can be provided in the URL in the same format as global.elasticSearchURL. | - | - |
LogsExporter.inclusionFilters | Provides comma-separated JSON key values that must be present in the logs to be collected. Example: vendor=oracle,application=ocnssf | - | - |
LogsExporter.exclusionFilters | Specifies the list of JSON key values that must not be present in the logs to be collected. Example: audit_logs=true | - | - |
LogsExporter.timeFilterFieldName | Specifies the name of the JSON time field. | @timestamp | - |
global.interval | Specifies the interval in hours for which the data is required. The collected data covers the last interval hours from now. Example: If the interval is set to 1, the data collector collects data for the last one hour. | 1 | - |
global.pastTimeSpot [Optional Parameter] | This parameter, along with the interval parameter, specifies the time range for which the data has to be collected. It accepts time in the UTC format. By default, the Exporter utility collects the last one hour of data; this parameter can be used to override that behavior. Example: If pastTimeSpot is set to 2020-05-17T17:30:38Z and interval to 2, the utility goes back 2 hours from that time and collects the data ranging from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z. | - | - |
LogsExporter.match | Scans a limited number of Elasticsearch indices by using a regex. If you are not aware of the indices, use the default regex, which scans all the indices. | '^.*$' | - |
LogsExporter.limit | Sets the number of documents that must be generated per request. This parameter can be used to improve performance. The value is limited by the available buffer. | 1000 | - |
LogsExporter.nodeTLSRejectedUnauthorized | Enables or disables certificate verification. | true | true/false |
MetricsExporter.step | Provides the step in seconds to generate metrics data. When the step value is set to 30, metrics data is generated for each metric at an interval of 30 seconds. | 30 | - |
MetricsExporter.inclusionFilters | Provides comma-separated labels that must be present in the metrics to be collected. Example: application=ocnssf | - | - |
MetricsExporter.workerThreads | Provides the number of worker threads to generate metrics data. | - | - |
TracesExporter.match | Scans a limited number of Elasticsearch indices by using a regex. Jaeger traces are stored in indices starting with jaeger by default. If you are not aware of the indices, use the default regex, which scans all the Jaeger indices. | '^jaeger.*$' | - |
TracesExporter.inclusionFilters | Provides comma-separated tags that must be present in the traces to be collected. Example: vendor=oracle,application=ocnssf | - | - |
TracesExporter.exclusionFilters | Provides comma-separated tags that must not be present in the traces to be collected. Example: exclude_field=true | - | - |
TracesExporter.unitOfTime | Provides the unit of time to be used by the time filter. This parameter specifies the epoch timestamp unit: a 10-digit epoch is in seconds, a 13-digit epoch is in milliseconds, a 16-digit epoch is in microseconds, and a 19-digit epoch is in nanoseconds. To determine this value, count the number of digits of the startTime field. | microseconds | seconds, milliseconds, microseconds, and nanoseconds |
TracesExporter.timeFilterFieldName | Provides the name of the JSON time field. | startTime | - |
TracesExporter.limit | Sets the number of documents that must be generated per request. This parameter can be used to improve performance. The value is limited by the available buffer. | 1000 | - |
TracesExporter.nodeTLSRejectedUnauthorized | Enables or disables certificate verification. | true | true/false |
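The digit-counting rule described for TracesExporter.unitOfTime can be sketched as a small helper. This is an illustration of the rule only, not part of the Exporter utility; the function name is hypothetical.

```python
def epoch_unit(timestamp: int) -> str:
    """Infer the epoch timestamp unit from its digit count,
    following the TracesExporter.unitOfTime guidance:
    10 digits -> seconds, 13 -> milliseconds,
    16 -> microseconds, 19 -> nanoseconds."""
    digits = len(str(int(timestamp)))
    units = {10: "seconds", 13: "milliseconds",
             16: "microseconds", 19: "nanoseconds"}
    if digits not in units:
        raise ValueError(f"unexpected digit count: {digits}")
    return units[digits]

# A 16-digit Jaeger startTime value is in microseconds:
print(epoch_unit(1589729438000000))  # microseconds
```

For example, counting the digits of a trace's startTime field before setting unitOfTime avoids filtering on the wrong time scale.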
Sample custom file
global:
# Registry where the cnc-nfdata-collector image is present.
image:
repository: reg-1:5000
#The slave node on which this job is scheduled. Make sure the path already exists on the node.
outputPath: /tmp
# Name of the slave where fetched data can be kept.
slaveNodeName: remote-setup-kamal
#Storage to be allocated to the persistence volume
capacityStorage: 30Gi
#Mount slave path with pod
storagePod: false
#Mention the URL of elasticSearch here.
elasticSearchURL: "http://10.75.226.21:9200"
#Mention the URL of prometheus here.
prometheusURL: "http://10.75.226.49"
#Time range for which data should be fetched
interval: "24" # IN HOURS
#In case, data other than last few hours from now is required.
#pastTimeSpot: "2020-05-17T15:30:38Z"
LogsExporter:
# Enable to fetch logs Data
enabled: true
# provide the list of json key values which must exist in the logs to be fetched
inclusionFilters: |
- vendor=oracle,application=ocnssf
# provide the list of json key values which must not exist in the logs to be fetched
exclusionFilters: |
- audit_logs=true
# provide the name of the time json field name
timeFilterFieldName: "@timestamp"
#Default REGEX value for this param is '^.*$' which means select all the indices.
match: '^.*$'
#Maximum number of records to be transferred in a batch
limit: 1000
#Set to "false" to disable certificate verification
nodeTLSRejectedUnauthorized: "true"
MetricsExporter:
# Enable to fetch Metrics Data
enabled: true
# provide the list of labels which must exist in the metrics to be fetched
inclusionFilters: |
- application=ocnssf
# Timestamp difference between two data points in seconds
step: "30"
# Number of worker threads to fetch metrics Data
workerThreads: "32"
TracesExporter:
# Enable to fetch Traces Data
enabled: true
# provide the list of tags which must exist in the traces to be fetched
inclusionFilters: |
- vendor=oracle,application=ocnssf
# provide the list of labels which must not exist in the traces to be fetched
exclusionFilters: |
- exclude_field=true
# seconds, milliseconds, microseconds and nanoseconds.
unitOfTime: "microseconds"
# provide the name of the time json field name
timeFilterFieldName: "startTime"
#Default REGEX value for this param is '^jaeger.*$' which selects the Jaeger indices.
match: '^jaeger.*$'
#Maximum number of records to be transferred in a batch
limit: 1000
#Set to "false" to disable certificate verification
nodeTLSRejectedUnauthorized: "true"
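As an illustration of global.pastTimeSpot and global.interval working together, the sample above could be changed as follows to collect the two-hour window used in the parameter table's example (the values are illustrative only):

```yaml
global:
  # Collect 2 hours of data anchored at the given UTC time,
  # that is, 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z.
  interval: "2"
  pastTimeSpot: "2020-05-17T17:30:38Z"
```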