10 Troubleshooting IWF

This section provides information to troubleshoot the common errors that can be encountered during the installation and upgrade of IWF.

The environment is not working as expected

Problem: The environment is not working as expected.

Solution:
  • Check the version of kubectl on the system, for example, by running 'kubectl version'.
  • Check if the following commands execute successfully:
    • $ kubectl create namespace test
    • $ kubectl delete namespace test
  • Check if the 'helm version' command works and returns the versions of the client and server.
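
The checks above can be run together as a quick preflight. The following is a minimal sketch, assuming a bash shell with kubectl and helm on the PATH; the 'test' namespace is only an example:

    # Verify the client binaries and cluster connectivity
    kubectl version
    helm version
    # Confirm that namespaces can be created and deleted in the cluster
    kubectl create namespace test && kubectl delete namespace test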

Debugging IWF Installation while Installing using Helm

Problem: The user is getting the error: failed to parse ociwf-custom-values-1.5.0.yaml: error converting YAML to JSON: yaml.

Solution:

Verify the following:
  1. The ociwf-custom-values-1.5.0.yaml file may not have been created properly.
  2. The tree structure may not have been followed.
  3. There may be tab characters in the file; YAML does not allow tabs for indentation. See the checks after this procedure.
  4. Verify that the ociwf-custom-values-1.5.0.yaml file is correct.

    Refer to the IWF Cloud Native Installation Guide.

  5. If there are no errors, the Helm installation is deployed.

    The Helm status can be checked using the following command:

    helm status <helm release name>
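
The file can also be checked before installation. The following is a minimal sketch, assuming GNU grep and Helm 3; the release name and chart path are placeholders:

    # Report any lines that contain tab characters in the custom values file
    grep -nP '\t' ociwf-custom-values-1.5.0.yaml
    # Render the chart with the custom values to surface YAML parse errors without installing
    helm template <helm release name> <chart path> -f ociwf-custom-values-1.5.0.yaml > /dev/null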

IWF is not deployed successfully

Problem: The IWF deployment is not successful.

Solution:

  1. Verify whether the IWF-specific pods are working as expected by executing the following command:

    kubectl get pods -n <ociwf_namespace>

    Check whether all the pods are up and running.

    Sample output:

    
    NAME                                        READY   STATUS    RESTARTS   AGE
    iwf-pt-mysql-dd8cf7685-tqhm7                1/1     Running   0          3m10s
    ociwf-egress-gateway-5d86d7755b-8c4rx       1/1     Running   0          3m10s
    ociwf-ingress-gateway-6d5bdcd859-mn2pz      1/1     Running   0          3m10s
    ociwf-iwf-configmgr-65957546d5-jfwwr        1/1     Running   0          3m10s
    ociwf-iwf-d2h-d6ffd8788-qk52j               1/1     Running   0          3m10s
    ociwf-iwf-diameterproxy-56479d8c7-szcwh     1/1     Running   0          3m10s
    ociwf-iwf-h2d-6cfff7754d-ppmsk              1/1     Running   0          3m10s
    ociwf-iwf-mediation-66fbf5c98f-lzn2j        1/1     Running   0          3m10s
    ociwf-iwf-mediation-test-7c5dc59ff5-hsd4p   1/1     Running   0          3m10s
    ociwf-iwf-pcfdiscovery-67966f7b6b-4rvf4     1/1     Running   2          3m10s
    ociwf-nf-mediation-8554fcd55f-wnncw         1/1     Running   0          3m10s
    ociwf-nf-mediation-test-5cd6c489fd-86gdh    1/1     Running   0          3m10s
    ociwf-pcf-diam-gateway-0                    1/1     Running   0          3m10s
  2. If the status of any pod is shown as ImagePullBackOff or ErrImagePull, then it can be due to the following:
    1. An incorrect ImageName or ImageTag is provided in ociwf-custom-values-1.5.0.yaml.

      In this case, double-check the image names and tags in ociwf-custom-values-1.5.0.yaml.

    2. The Docker registry is incorrectly configured.

      In this case, check that the Docker registry is properly configured on all master and slave nodes.

    3. The Docker image does not exist in the Docker repository.
  3. If the RESTARTS count of the pods is continuously increasing, then it can happen due to the following reasons:
    1. The MySQL primary and secondary hosts may not be configured properly in ociwf-custom-values-1.5.0.yaml.
    2. The MySQL servers may not be configured according to the pre-installation steps mentioned in the IWF Cloud Native Installation Guide.

    The commands shown after this list can help narrow down the cause of either condition.
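
To narrow down image pull failures and repeated restarts, standard Kubernetes diagnostics can be used; the pod name is a placeholder:

    # Inspect the Events section for image pull errors and restart reasons
    kubectl describe pod <pod_name> -n <ociwf_namespace>
    # View the logs of the previously crashed container of a restarting pod
    kubectl logs <pod_name> -n <ociwf_namespace> --previous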

Debugging General CNE

Problem: The environment is not working as expected

Solution:

Execute the command kubectl get events -n <ociwf_namespace> to get all the events related to a particular namespace.
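
For example, the events can be listed in chronological order, which makes the failure sequence easier to follow:

    kubectl get events -n <ociwf_namespace> --sort-by=.metadata.creationTimestamp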

Collect the IWF Logs to check the error scenarios

Problem: The error scenarios need to be checked by collecting the IWF logs or by using Kibana.

Solution:

The following commands must be executed to get the logs from the IWF-specific pods:

  1. Fetch the list of all pods by executing kubectl get pods -n <ociwf_namespace>
  2. Collect the logs from a pod and redirect them to a file by executing kubectl logs <pod_name> -n <ociwf_namespace> > <Log File>

Example:

Use the -f option to follow the log output and --tail=1 to dump only the latest log entry.
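
For instance, these options can be combined with the log command above; the pod name and namespace are placeholders:

    # Follow the live log output, starting from the most recent entry only
    kubectl logs -f --tail=1 <pod_name> -n <ociwf_namespace>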