2 Logs

This chapter explains how to retrieve the logs and status information that can be used for effective troubleshooting.

2.1 Log Levels

Logs register system events along with their date and time of occurrence. They also provide important details about a chain of events that could have led to an error or problem.

A log level helps in defining the severity level of a log message. For OCNWDAF, the log level of a microservice can be set to any one of the following valid values:

  • TRACE: A log level that describes events as a step-by-step execution of the code. It can be ignored during standard operation, but may be useful during extended debugging sessions.
  • DEBUG: A log level used for events during software debugging, when more granular information is needed.
  • INFO: A standard log level indicating that something has happened, for example, that an application has entered a certain state.
  • WARN: A log level that indicates something unexpected has happened in the application: a problem, or a situation that might disturb one of the processes. It does not mean that the application has failed. Use the WARN level for situations that are unexpected but from which the code can continue to work.
  • ERROR: A log level that should be used when the application hits an issue that prevents one or more functionalities from working.

Note:

Log levels are defined in the Helm chart as parameters of the Kubernetes pod. They can be updated by changing the Kubernetes pod deployment.
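
For example, if the deployment exposes the log level as a container environment variable, a command similar to the following could apply it. The variable name LOG_LEVEL, the value key <service>.logLevel, and the resource names are illustrative placeholders and may differ in your Helm chart:

kubectl set env deployment/<deployment_name> LOG_LEVEL=WARN -n <namespace_name>

or, through a Helm upgrade:

helm upgrade <release_name> <chart> --set <service>.logLevel=WARN -n <namespace_name>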

Using this information, you can filter the logs based on your system requirements. For instance, to separate the critical information about your system from the informational log messages, set a filter in Kibana to view only messages with the WARN log level.
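
As an example, assuming the log level is indexed in a field named level (the exact field name depends on how your logging pipeline maps the log records), the Kibana search filter could be as simple as:

level: "WARN"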

The following table provides log level details that may be helpful when handling different NRF Client Service debugging issues:

Table 2-1 Log Levels

Scenario | Pod | Logs to be Searched | Log Level
Registration with NRF successful | nrf-client-service | Register completed successfully / "nfServiceStatus":"REGISTERED" | INFO
Heartbeat message log | nrf-client-service | Update completed successfully | INFO
NRF configurations reloading | nrf-client-service | NRF client config reloaded | INFO
Check for existing NF Instance entry | nrf-client-service | No registered NF instance exists | WARN
Started application | nrf-client-service | Successful application start | INFO
NRF Client Config initialized | nrf-client-service | Initialize NRF client configuration | INFO
FQDN/BASEURL/livenessProbeUrl improper | nrf-client-service | response=<503,java.net.UnknownHostException | WARN
nudr-drservice liveness probe failure | nrf-client-service | NFService liveness probe failed | WARN
Check if ports are successfully listening | nrf-client-service | Undertow started on port(s) | INFO
Registration with NRF failed | nrf-client-service | Register failed | ERROR
Deregistration with NRF successful | nrf-client-service | Deregister completed successfully | INFO
Deregistration with NRF failed | nrf-client-service | Deregister failed | ERROR
NF Profile update failed | nrf-client-service | Update failed | ERROR
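
As an illustration, the strings in Table 2-1 can be located in a pod's logs with a filter such as the following; the pod and namespace names are placeholders:

kubectl logs <nrf-client-pod-name> -n <namespace_name> | grep "Register completed successfully"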

2.2 Collecting Logs

This section describes the steps to collect logs from PODs and containers. Perform the following steps:

  1. Run the following command to get the PODs details:
    kubectl -n <namespace_name> get pods
  2. Collect the logs from the specific pods or containers:
    kubectl logs <podname> -n <namespace> -c <containername>
  3. Store the log in a file using the following command:
    kubectl logs <podname> -n <namespace> > <filename>
  4. (Optional) You can also use the following command to stream the log to a file, starting with the last <number of lines> lines of the log (see the example after these steps):
    kubectl logs <podname> -n <namespace> -f --tail <number of lines> > <filename>
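
For example, assuming a pod named ocnwdaf-nrf-client-service-abc123 in the namespace ocnwdaf-ns (both names are illustrative), the steps above could be run as follows:

kubectl -n ocnwdaf-ns get pods
kubectl logs ocnwdaf-nrf-client-service-abc123 -n ocnwdaf-ns > nrf-client.log
kubectl logs ocnwdaf-nrf-client-service-abc123 -n ocnwdaf-ns -f --tail 100 > nrf-client.log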

For more information on how to collect the logs, see Oracle Communications Cloud Native Core Data Collector Guide.

2.3 Collect Logs using Deployment Data Collector Tool

Perform this procedure to start the NF Deployment Data Collector module and generate the tarballs. If you do not specify the output storage path, the module generates the output in the directory from which it is run.

nfDataCapture.sh is a script that collects all the required logs from an NF deployment for debugging issues. The script collects the logs from all microservice pods of the specified Helm deployment, the Helm deployment details, the status and description of all the Kafka topics, the status.server properties, and the status and description of all the pods, services, and events.

Before running the script, ensure the following requirements are met:
  • Ensure that you have appropriate privileges to access the system and run kubectl and helm commands.
  • Perform this procedure on the same machine where the OCNWDAF is deployed using helm or kubectl.
  • Run the chmod +x nfDataCapture.sh command to give the tool executable permission.
  • Run the following command to start the module:
    ./nfDataCapture.sh -n|--k8Namespace=[K8 Namespace] -k|--kubectl=[KUBE_SCRIPT_NAME] -h|--helm=[HELM_SCRIPT_NAME] -s|--size=[SIZE_OF_EACH_TARBALL] -o|--toolOutputPath=[TOOL_OUTPUT_PATH] -helm3=[true|false]

Where:

  • <K8 Namespace> is the Kubernetes Namespace where OCNWDAF is deployed.
  • <KUBE_SCRIPT_NAME> is the Kube script name.
  • <HELM_SCRIPT_NAME> is the Helm script name.
  • <SIZE_OF_EACH_TARBALL> indicates the size of each tarball.
  • <TOOL_OUTPUT_PATH> is the path where the tool output is stored.

Example:

./nfDataCapture.sh --k8Namespace=ocnwdaf-ns
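
A fuller invocation, assuming the long-form options shown above accept values in the same way and using illustrative values for the size and output path, might look like:

./nfDataCapture.sh --k8Namespace=ocnwdaf-ns --kubectl=kubectl --helm=helm --size=50M --toolOutputPath=/tmp/ocnwdaf-logs -helm3=true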

Note:

  • If the size of the tarball and the output location are not specified, a tarball of the default size (10M) is generated and the output is stored in the tool's working directory.

  • Kafka detailed status collection is set to true by default. If you do not want to collect these details, specify the argument as false in the command.

  • By default, Helm 2 is used. To use Helm 3, set -helm3=true in the command.

Note:

If the database is not in the same namespace, run the script again for the namespace in which the database is deployed to capture the database-related logs.
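
For example, if the database is deployed in a separate namespace such as ocnwdaf-db-ns (an illustrative name), run:

./nfDataCapture.sh --k8Namespace=ocnwdaf-db-ns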

To verify the generated tars, run the commands:

cd <generated-tarball-name>
ls
  • The tar is split into multiple tarballs of the specified size only if the size of the generated tar (for example, ocnwdaf.debugData.2023.02.28_09.15.01.tar.gz) is greater than the SIZE_OF_EACH_TARBALL specified in the command.
  • After running the command, the tarballs are created based on the specified size, with names in the following format:

    <namespace>.debugData.<timestamp>

Example:

ocnwdaf.debugData.2023.02.28_09.15.01

The split tarballs can then be combined into a single tarball using the following command:

cat <split files*> > <combinedTarBall>.tar.gz

Example:

cat ocnwdaf.debugData.2023.02.28_09.15.01* > ocnwdaf.debugData.2023.02.28_09.15.01-combined.tar.gz
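
The combined archive can then be extracted for inspection with a standard tar command, for example:

tar -xzf ocnwdaf.debugData.2023.02.28_09.15.01-combined.tar.gz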

2.4 Understanding Logs

This section describes the logs that you need to examine to handle different OCNWDAF debugging issues.

For more information on how to collect the logs, see Oracle Communications Cloud Native Core Data Collector Guide.

Log Formats

OCNWDAF supports the following log formats:
  • Executor logs

    Format:

    <datetime> - <level> - <module>.<line> [<thread>] : <message>

    Where:

    • datetime - The date and time of the event.
    • level - The severity level of the log message.
    • module - The software component that created the message.
    • line - The line of the source code that generated the message.
    • thread - The name of the thread that is currently running.
    • message - The description of the event.
  • Controller logs

    Format:

    <datetime>  <level> <process> --- [<thread>] <loggername> : <message>

    Where:

    • datetime - The date and time of the event.
    • level - The severity level of the log message.
    • process - The name of the process that is currently running.
    • thread - The name of the thread that is currently running.
    • loggername - The source class name (often abbreviated).
    • message - The description of the event.
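
For illustration only, log lines matching these two formats might look like the following; the module, thread, process, and logger names are placeholders rather than actual OCNWDAF output:

2023-02-28 09:15:01,123 - INFO - example_module.42 [MainThread] : Successful application start
2023-02-28 09:15:01.123  INFO 1 --- [main] c.e.ExampleController : Successful application start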