Creating or Updating the YAML File for the Exporter Utility

The Exporter utility YAML file is included in the CNC NF Data Collector tar package. You can either update the existing YAML file or create a new one with the configurable parameters described in this procedure. Oracle recommends that you use the existing file available with the tar package and update its parameters.

For parameter descriptions, refer to exporter-custom-values.yaml Parameter Description.

  1. In the exporter-custom-values.yaml file, update the global.image.repository parameter by providing the name of the Docker registry where the cnc-nfdata-collector image is present.
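    For example, assuming a registry reachable at ocne-registry:5000 (a placeholder value), the entry might look as follows. The example uses the dotted notation used throughout this procedure; the file shipped with the tar package may nest these keys.
    global.image.repository: ocne-registry:5000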
  2. Update the global.slaveNodeName parameter by providing the name of the Kubernetes worker node that stores the data collected by this utility.
    To obtain the name of a worker (slave) node, run the kubectl get nodes command. The names of all the master and worker nodes in the Kubernetes cluster are displayed in the NAME column. You can provide the name of any one of the worker nodes from the output.
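    For example, if kubectl get nodes lists a worker node named worker-2 (a placeholder name), the parameter might be set as follows:
    global.slaveNodeName: worker-2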
  3. Update the global.outputPath parameter by providing the path on the Kubernetes worker node, specified in global.slaveNodeName, where the collected data is stored.
    1. Ensure that the path provided here already exists on the Kubernetes worker node specified in global.slaveNodeName and that non-root users have read and write access to it. The /tmp path is recommended because it meets these requirements on most Kubernetes cluster nodes.
    The Exporter utility creates the exported-data_<time-stamp>/ directory in this path.
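    For example, using the recommended /tmp path, the parameter might look as follows:
    global.outputPath: /tmp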
  4. Update the global.capacityStorage parameter by specifying the estimated amount of space required to store the collected data, for example, 2Gi or 200Mi.
    The value specified here is the space allocated to the Kubernetes persistent volume that is mounted to this utility for collecting the data.
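    For example, to reserve approximately 2 GiB for the collected data, the parameter might be set as follows:
    global.capacityStorage: 2Gi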
  5. Update the global.storagePod parameter.

    Note:

    If you do not have access to the worker node where the data is collected, set this parameter to true. The system then creates a storage pod and mounts it with the persistent volume, which makes the collected data available at the /volume path in the pod.
    1. To copy data from the storage pod, execute the following kubectl cp command:
      kubectl cp exporter-storage-pod:/volume/exported-data_<time-stamp> ./ -n <namespace>
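    For reference, enabling the storage pod in the values file might look as follows:
    global.storagePod: true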
  6. Update the global.elasticSearchURL parameter by specifying the URL of Elasticsearch.
    The FQDN of Elasticsearch can also be provided. If Elasticsearch requires authentication, provide the credentials in the URL as http://<user-name>:<password>@<elastic-search-url>:<elastic-search-port>.

    To find the FQDN, run kubectl get svc -n <namespace> and retrieve the Elasticsearch service name. The FQDN can then be constructed as http://<elastic-search-service-name>.<namespace>:<port>. The default Elasticsearch port is 9200. If authentication is required, ask the cluster administrator for the Elasticsearch user name and password.
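    For example, if kubectl get svc shows an Elasticsearch service named elasticsearch-master in the occne-infra namespace (both placeholder names), the parameter might look as follows:
    global.elasticSearchURL: http://elasticsearch-master.occne-infra:9200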

  7. Update the global.prometheusURL parameter by specifying the URL of the Prometheus server.
    The FQDN of the Prometheus server can also be provided. The Prometheus server URL can be provided as http://<prometheus-server-url>:<prometheus-server-port>.

    To find the FQDN, run kubectl get svc -n <namespace> and retrieve the Prometheus server service name. The FQDN can then be constructed as http://<prometheus-server-service-name>.<namespace>:<port>. The default Prometheus server port is 80.
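    For example, if kubectl get svc shows a Prometheus server service named prometheus-server in the occne-infra namespace (both placeholder names), the parameter might look as follows:
    global.prometheusURL: http://prometheus-server.occne-infra:80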

  8. Update or remove the LogsExporter.inclusionFilters parameter as required.
    You must provide this parameter only if you want to filter the logs data. If you want to collect all the logs data, do not provide this parameter. Inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are combined with an AND rule, where AND and OR are logical operators.
    LogsExporter.inclusionFilters:
      - node=worker-2,namespace=nssf
      - node=worker-2,namespace=pcf
    The aforementioned filter is interpreted as:
    (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf)

    Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters.

    Note:

    Log filters are applied to JSON fields.
  9. Update or remove the LogsExporter.exclusionFilters parameter as required.
    You must provide this parameter only if you want to exclude some logs data.

    Exclusion filters accept an array of filters. Each array element is treated as an OR rule, and within each array element, filters separated by a comma (,) are combined with an AND rule, as described in Step 8, where AND and OR are logical operators. The only difference is that the logs matching the rules are excluded.
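    For example, a hypothetical filter that excludes the logs collected from the kube-system namespace on node worker-2 might look as follows:
    LogsExporter.exclusionFilters:
      - node=worker-2,namespace=kube-system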

  10. Update the global.interval parameter by specifying, in hours, the interval for which the data is required.
    The collected data covers the last interval hours, counted back from the current time. For example, if the interval is set to 1, the data collector collects data for the last one hour.
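    For example, to collect the data for the last two hours, the parameter might be set as follows:
    global.interval: 2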
  11. Update or remove the global.pastTimeSpot parameter as required.
    This parameter, along with the interval parameter, specifies the time range for which the data has to be collected.

    This parameter accepts time in the UTC format. By default, the Exporter utility collects data for the last one hour. This parameter can be used to override the default behavior. For example, if you want to generate the data from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z, set pastTimeSpot to 2020-05-17T17:30:38Z and interval to 2. The system treats 2020-05-17T17:30:38Z as the current time and goes back 2 hours to collect the data ranging from 2020-05-17T15:30:38Z to 2020-05-17T17:30:38Z.
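    For example, the settings for the scenario described above might look as follows:
    global.pastTimeSpot: 2020-05-17T17:30:38Z
    global.interval: 2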

  12. Update or remove the MetricsExporter.inclusionFilters parameter as required.
    You must provide this parameter only if you want to filter the metrics data. If you want to collect all the metrics data, do not provide this parameter. Metrics inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are combined with an AND rule, where AND and OR are logical operators.
    MetricsExporter.inclusionFilters:
      - node=worker-2,namespace=nssf
      - node=worker-2,namespace=pcf
    The aforementioned filter is interpreted as:
    (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf)

    Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters.

    Note:

    Metrics filters are applied to Metrics labels.
  13. Update or remove the TracesExporter.inclusionFilters parameter as required.
    You must provide this parameter only if you want to filter the traces data. If you want to collect all the traces data, do not provide this parameter. Traces inclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are combined with an AND rule, where AND and OR are logical operators.
    TracesExporter.inclusionFilters:
      - node=worker-2,namespace=nssf
      - node=worker-2,namespace=pcf
    The aforementioned filter is interpreted as:
    (node=worker-2 AND namespace=nssf) OR (node=worker-2 AND namespace=pcf)

    Inclusion filter examples are described in Examples of Logs, Metrics, and Traces Inclusion Filters.

    Note:

    Trace filters are applied to process tags.
  14. Update or remove the TracesExporter.exclusionFilters parameter as required.
    You must provide this parameter only if you want to exclude some traces data.

    Exclusion filters accept an array of filters. Each array element is treated as an OR rule. Within each array element, filters separated by a comma (,) are combined with an AND rule, where AND and OR are logical operators. The only difference is that the traces matching the rules are excluded.
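    For example, a hypothetical filter that excludes the traces collected from the pcf namespace on node worker-2 might look as follows:
    TracesExporter.exclusionFilters:
      - node=worker-2,namespace=pcf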