2 Installing Cloud Native Core Policy

This chapter describes how to install Cloud Native Core Policy on a cloud native environment.

This chapter contains the following:

Pre-Installation Tasks

In this release, the Single Release Bundle provides the following deployment models:

  • Converged Deployment
  • PCF Deployment
  • CNPCRF Deployment

Prior to installing the Cloud Native Core Policy (CNC Policy), perform the following tasks:

Checking the Software Requirements

The following software must be installed before installing Cloud Native Core Policy (CNC Policy):

Note:

In this release, Cloud Native Core Policy supports Oracle Communications Cloud Native Environment (OCCNE) 1.5.
Software Version
Kubernetes v1.16.7
HELM v3.0
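Before proceeding, a quick way to confirm that the required command-line tools are present on the client machine is the sketch below. It checks presence only, so compare the installed versions against the table above manually:

```shell
# Check that the CLI tools needed for a CNC Policy installation are on PATH.
# This confirms presence only, not version compatibility.
for tool in kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="found"
  else
    status="NOT FOUND"
  fi
  echo "$tool: $status"
done
```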

Additional software that needs to be deployed as per the requirement of the services:

Software App Version Notes
alertmanager 0.20.0 Required for Alerting
elasticsearch 7.6.1 Required for Logging
elastic-curator 2.0.2 Required for Logging
elastic-exporter 1.1.2 Required for Logging
logs 2.7.0 Required for Logging
kibana 7.6.1 Required for Logging
grafana 5.0.5 Required for Metrics
prometheus 11.0.2 Required for Metrics
prometheus-node-exporter 1.9.0 Required for Metrics
metallb 0.12.0 Required for External IP
metrics-server 2.10.0 Required for Metric Server
occne-snmp-notifier 0.3.0 Required for Metric Server
tracer 0.13.3 Required for Tracing

Note:

The above software is available if the Cloud Native Core Policy (CNC Policy) is deployed in the Oracle Communications Cloud Native Environment (OCCNE). If you are deploying Cloud Native Core Policy (CNC Policy) in any other environment, this software must be installed before installing the Cloud Native Core Policy (CNC Policy). To check the installed software items, execute:
helm ls
Some systems may need to run the helm command with the admin.conf file as follows:
helm --kubeconfig admin.conf

Note:

If you are using Network Repository Function (NRF), install it before proceeding with the Cloud Native Core Policy (CNC Policy) installation.

Checking the Environment Setup

Note:

This section is applicable only when the Cloud Native Core Policy (CNC Policy) is deployed in an environment other than OCCNE.

Network access

The Kubernetes cluster hosts must have network access to:

  • Local helm repository, where the Cloud Native Core Policy (CNC Policy) helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, execute the following command:
    helm repo update

    Note:

    Some systems may need to run the helm command with the admin.conf file as follows:
    helm --kubeconfig admin.conf
  • Local docker image repository, where the Cloud Native Core Policy (CNC Policy) images are available.
    To check if the Kubernetes cluster hosts have network access to the local docker image repository, pull any image with its tag name to check connectivity by executing the following command:
    docker pull docker-repo/image-name:image-tag
    where:

    docker-repo is the IP address or host name of the repository.

    image-name is the docker image name.

    image-tag is the tag of the image used for the Cloud Native Core Policy (CNC Policy) pod.

Note:

All the kubectl and helm commands used in this guide must be executed on a system appropriate to the infrastructure and deployment. This could be a client machine, such as a VM, server, or local desktop.

Client Machine Requirements

The following are the requirements for the client machine where the deployment commands are executed:
  • It should have network access to the helm repository and docker image repository.
  • It should have network access to the Kubernetes cluster.
  • It should have the necessary environment settings to run the kubectl and docker commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • It should have the helm client installed with the push plugin. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.

Server or Space Requirements

For information on the server or space requirements, see the Oracle Communications Cloud Native Environment (OCCNE) Installation Guide.

Secret File Requirement

To enable HTTPS on the Ingress/Egress Gateway, the following certificates and PEM files must be created before creating the secret files for the keys:

  • ECDSA private Key and CA signed ECDSA Certificate (if initialAlgorithm: ES256)
  • RSA private key and CA signed RSA Certificate (if initialAlgorithm: RSA256)
  • TrustStore password file
  • KeyStore password file
  • CA signed ECDSA certificate

Installation Tasks

Downloading Cloud Native Core Policy (CNC Policy) package

To download the Cloud Native Core Policy (CNC Policy) package from MOS:
  1. Log in to My Oracle Support using your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field and select Oracle Communications Cloud Native Core Policy 1.7.0.0.0 from the Release drop-down list.
  5. Click Search. The Patch Advanced Search Results page displays a list of releases.
  6. Select the required patch from the search results. The Patch Details window opens.
  7. Click Download. The File Download window appears.
  8. Click the <p********_<release_number>_Tekelec>.zip file to download the CNC Policy package file.

Pushing the Images to Customer Docker Registry

To push the images to the customer docker registry:
  1. Untar the Cloud Native Core Policy (CNC Policy) package file to get the Cloud Native Core Policy (CNC Policy) docker image tar file.

    tar -xvzf occnp-pkg-1.7.3.tgz

    The directory consists of the following:
    • Cloud Native Core Policy (CNC Policy) Docker Images File:

      occnp-images-1.7.3.tar

    • Helm File:

      occnp-1.7.3.tgz

    • Readme txt File:

      Readme.txt

    • Checksum for Helm chart tgz file:

      occnp-1.7.3.tgz.sha256

    • Checksum for images' tgz file:

      occnp-images-1.7.3.tar.sha256
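Before loading the images, you can verify the package files against the supplied .sha256 checksum files. The helper below is an illustrative sketch: the commented calls use the file names from the listing above, and because the exact digest-file format may vary, the helper reads only the first field.

```shell
# verify_sha256 FILE CHECKSUM_FILE
# Compares FILE's SHA-256 digest against the digest stored in CHECKSUM_FILE,
# which may contain either a bare digest or "digest  filename".
verify_sha256() {
  expected=$(awk '{print $1; exit}' "$2")
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$expected" = "$actual" ]; then
    echo "$1: checksum OK"
  else
    echo "$1: checksum MISMATCH" >&2
    return 1
  fi
}

# Example usage with the package files from this step:
# verify_sha256 occnp-1.7.3.tgz occnp-1.7.3.tgz.sha256
# verify_sha256 occnp-images-1.7.3.tar occnp-images-1.7.3.tar.sha256
```

A non-zero exit status indicates a corrupted or incomplete download; re-download the package before continuing.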

  2. Load the occnp-images-1.7.3.tar file into the Docker system
    docker load --input /IMAGE_PATH/occnp-images-1.7.3.tar
  3. Verify that the image is loaded correctly by entering this command:
    docker images
    Refer to Docker Images for more information on the docker images available in Cloud Native Core Policy (CNC Policy).
  4. Create a new tag for each imported image and push the image to the customer docker registry by entering this command:
    
    docker tag occnp/app_info:1.7.3 CUSTOMER_REPO/app_info:1.7.3
    docker push CUSTOMER_REPO/app_info:1.7.3
    
    docker tag occnp/oc-policy-ds:1.7.3 CUSTOMER_REPO/oc-policy-ds:1.7.3
    docker push CUSTOMER_REPO/oc-policy-ds:1.7.3
    
    docker tag occnp/ocingress_gateway:1.7.7 CUSTOMER_REPO/ocingress_gateway:1.7.7
    docker push CUSTOMER_REPO/ocingress_gateway:1.7.7
    
    docker tag occnp/oc-pcf-sm:1.7.3 CUSTOMER_REPO/oc-pcf-sm:1.7.3
    docker push CUSTOMER_REPO/oc-pcf-sm:1.7.3
     
    docker tag occnp/oc-pcf-am:1.7.3 CUSTOMER_REPO/oc-pcf-am:1.7.3
    docker push CUSTOMER_REPO/oc-pcf-am:1.7.3
     
    docker tag occnp/oc-pcf-ue:1.7.3 CUSTOMER_REPO/oc-pcf-ue:1.7.3
    docker push CUSTOMER_REPO/oc-pcf-ue:1.7.3
    
    docker tag occnp/oc-audit:1.7.3 CUSTOMER_REPO/oc-audit:1.7.3
    docker push CUSTOMER_REPO/oc-audit:1.7.3
    
    docker tag occnp/oc-ldap-gateway:1.7.3 CUSTOMER_REPO/oc-ldap-gateway:1.7.3
    docker push CUSTOMER_REPO/oc-ldap-gateway:1.7.3
    
    docker tag occnp/oc-query:1.7.3 CUSTOMER_REPO/oc-query:1.7.3
    docker push CUSTOMER_REPO/oc-query:1.7.3
     
    docker tag occnp/oc-pre:1.7.3 CUSTOMER_REPO/oc-pre:1.7.3
    docker push CUSTOMER_REPO/oc-pre:1.7.3
    
    docker tag occnp/oc-perf-info:1.7.3 CUSTOMER_REPO/oc-perf-info:1.7.3
    docker push CUSTOMER_REPO/oc-perf-info:1.7.3
    
    docker tag occnp/oc-diam-gateway:1.7.3 CUSTOMER_REPO/oc-diam-gateway:1.7.3
    docker push CUSTOMER_REPO/oc-diam-gateway:1.7.3 
     
    docker tag occnp/oc-diam-connector:1.7.3 CUSTOMER_REPO/oc-diam-connector:1.7.3
    docker push CUSTOMER_REPO/oc-diam-connector:1.7.3
    
    docker tag occnp/oc-pcf-user:1.7.3 CUSTOMER_REPO/oc-pcf-user:1.7.3
    docker push CUSTOMER_REPO/oc-pcf-user:1.7.3
    
    docker tag occnp/oc-config-mgmt:1.7.3 CUSTOMER_REPO/oc-config-mgmt:1.7.3
    docker push CUSTOMER_REPO/oc-config-mgmt:1.7.3 
     
    docker tag occnp/oc-config-server:1.7.3 CUSTOMER_REPO/oc-config-server:1.7.3
    docker push CUSTOMER_REPO/oc-config-server:1.7.3
    
    docker tag occnp/ocegress_gateway:1.7.7 CUSTOMER_REPO/ocegress_gateway:1.7.7
    docker push CUSTOMER_REPO/ocegress_gateway:1.7.7 
     
    docker tag occnp/nrf-client:1.2.5 CUSTOMER_REPO/nrf-client:1.2.5
    docker push CUSTOMER_REPO/nrf-client:1.2.5 
    
    docker tag occnp/oc-readiness-detector:1.7.3 CUSTOMER_REPO/oc-readiness-detector:1.7.3
    docker push CUSTOMER_REPO/oc-readiness-detector:1.7.3
     
    docker tag occnp/configurationinit:1.2.0 CUSTOMER_REPO/configurationinit:1.2.0
    docker push CUSTOMER_REPO/configurationinit:1.2.0
     
    docker tag occnp/configurationupdate:1.2.0 CUSTOMER_REPO/configurationupdate:1.2.0
    docker push CUSTOMER_REPO/configurationupdate:1.2.0
    
    docker tag occnp/oc-soap-connector:1.7.3 CUSTOMER_REPO/oc-soap-connector:1.7.3
    docker push CUSTOMER_REPO/oc-soap-connector:1.7.3
    
    docker tag occnp/oc-pcrf-core:1.7.3 CUSTOMER_REPO/oc-pcrf-core:1.7.3
    docker push CUSTOMER_REPO/oc-pcrf-core:1.7.3
    
    docker tag occnp/oc-binding:1.7.3 CUSTOMER_REPO/oc-binding:1.7.3
    docker push CUSTOMER_REPO/oc-binding:1.7.3
    
     
    where:

    CUSTOMER_REPO is the docker registry address, including the port number if the registry has a port attached.

    Note:

    For OCCNE, copy the package to the bastion server and use localhost:5000 as CUSTOMER_REPO to tag the images and push them to the bastion docker registry.

    Note:

    You may need to configure the Docker certificate before running the push command to access the customer registry over HTTPS; otherwise, the docker push command may fail.
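Rather than running each tag/push pair by hand, the sequence in step 4 can be scripted. This is an illustrative sketch, not part of the product: IMAGES lists only a few of the name:tag pairs from step 4 (extend it with the rest), CUSTOMER_REPO must point to your registry, and DRY_RUN=1 prints the docker commands instead of executing them.

```shell
# Tag and push CNC Policy images to the customer registry in a loop.
CUSTOMER_REPO="${CUSTOMER_REPO:-localhost:5000}"   # your registry address
DRY_RUN="${DRY_RUN:-1}"                            # 1 = print commands only

# A few of the name:tag pairs from step 4; extend with the remaining images.
IMAGES="app_info:1.7.3 oc-policy-ds:1.7.3 ocingress_gateway:1.7.7 ocegress_gateway:1.7.7 nrf-client:1.2.5"

for img in $IMAGES; do
  src="occnp/$img"
  dst="$CUSTOMER_REPO/$img"
  if [ "$DRY_RUN" = "1" ]; then
    echo "docker tag $src $dst"
    echo "docker push $dst"
  else
    docker tag "$src" "$dst" && docker push "$dst"
  fi
done
```

Run once with DRY_RUN=1 to review the generated commands, then rerun with DRY_RUN=0 to perform the actual tag and push.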

Configuring Database, Creating Users, and Granting Permissions

Cloud Native Core Policy (CNC Policy) microservices use a MySQL database to store configuration and run-time data. The following microservices require dedicated MySQL databases created in the MySQL data tier.

  • Session Management (SM) Service - To store SM and Policy Authorization (PA) session state
  • Access and Mobility (AM) Service - To store AM session state
  • User Service - To store user information such as Policy Data (from UDR) and Policy Counter information (from CHF)
  • Config Server - To store configuration data
  • Audit Service - To store session state audit data
  • PCRF Core service - To store Gx session, Rx Session and User Profile information
  • Binding Service - To store context binding information of 4G and 5G subscribers

CNC Policy requires the database administrator to create a user in the MySQL database and grant the necessary permissions to access the databases. The MySQL user and databases must be created before installing CNC Policy.

Each microservice has a default database name assigned, as mentioned in the table below:
Service Name Default Database Name Applicable to Deployment
SM Service occnp_pcf_sm PCF (if smServiceEnable parameter is enabled in custom yaml file.)
AM Service occnp_pcf_am PCF (if amServiceEnable parameter is enabled in custom yaml file.)
User Service occnp_pcf_user PCF (mandatory)
Config Server Service occnp_config_server cnPCRF & PCF (mandatory)
Audit Service occnp_audit_service PCF (if enabled)
PCRF Core Service occnp_pcrf_core cnPCRF (if pcrfCoreEnable parameter is enabled in custom yaml file.)
Binding Service occnp_binding cnPCRF & PCF (if bindingEnable parameter is enabled in custom yaml file.)
Apart from the databases created for these microservices, create the database occnp_release (default database name). It is mandatory for both PCF and cnPCRF and is used to store and manage the release versions of all PCF and cnPCRF services during install, upgrade, and rollback.

It is recommended to use unique database names when there are multiple instances of CNC Policy deployed in the network that share the same data tier (MySQL cluster).

A simple way to create unique database names is to append the deployment name of the CNC Policy to the default name; this achieves database name uniqueness across all deployments. However, you can use any prefix or suffix to create the unique database name. For example, if the OCPCF deployment name is "site1", the SM Service database can be named "occnp_pcf_sm_site1".

Refer to the Configurable Parameters section for information on how to override the default database names with custom database names.
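As an illustration of the naming convention above, the per-deployment database names can be generated by appending a deployment name to the defaults (the site1 value here is hypothetical):

```shell
# Derive unique, per-deployment database names by appending the deployment name.
DEPLOYMENT="site1"   # hypothetical CNC Policy deployment name
for db in occnp_pcf_sm occnp_pcf_am occnp_pcf_user occnp_config_server \
          occnp_audit_service occnp_pcrf_core occnp_binding occnp_release; do
  echo "${db}_${DEPLOYMENT}"
done
```

The resulting names (for example, occnp_pcf_sm_site1) are the ones to use in the CREATE DATABASE statements and the envMysqlDatabase overrides described below.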

To configure the MySQL database for the different microservices:

  1. Log in to the server where the SSH keys are stored and from which the SQL nodes are accessible.
  2. Connect to the SQL nodes.
  3. Log in to the database as the root user.
  4. Create database for the different microservices:
    CREATE DATABASE occnp_audit_service;
    CREATE DATABASE occnp_config_server;
    CREATE DATABASE occnp_pcf_am;
    CREATE DATABASE occnp_pcf_sm;
    CREATE DATABASE occnp_pcf_user;
    CREATE DATABASE occnp_pcrf_core;
    CREATE DATABASE occnp_release;
    CREATE DATABASE occnp_binding;
    CREATE DATABASE occnp_policyds;
  5. Create an admin user and grant the necessary permissions to the user by executing the following commands:
    CREATE USER 'username'@'%' IDENTIFIED BY 'password';
    
    
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_sm.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_am.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_user.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_config_server.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_audit_service.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_release.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcrf_core.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_binding.* TO 'username'@'%';
    FLUSH PRIVILEGES;
    where:

    username and password are the username and password for the MySQL admin user.

    For example: In the example below, "occnpadminusr" is used as the username, "occnpadminpasswd" as the password, and all the permissions are granted to "occnpadminusr". The default database names of the microservices are used.
    CREATE USER 'occnpadminusr'@'%' IDENTIFIED BY 'occnpadminpasswd';
    
    
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_sm.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_am.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_user.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_config_server.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_audit_service.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_release.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcrf_core.* TO 'occnpadminusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_binding.* TO 'occnpadminusr'@'%';
    FLUSH PRIVILEGES;
  6. Create an application user and grant the necessary permissions to the user by executing the following commands:
    CREATE USER 'username'@'%' IDENTIFIED BY 'password';
    
    
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE  ON occnp_pcf_sm.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_pcf_am.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_pcf_user.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_config_server.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_audit_service.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_pcrf_core.* TO 'username'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_binding.* TO 'username'@'%';

    where:

    username and password are the username and password for the MySQL database user.

    For example: In the example below, "occnpusr" is used as the username, "occnppasswd" as the password, and the necessary permissions are granted to "occnpusr". The default database names of the microservices are used.
    CREATE USER 'occnpusr'@'%' IDENTIFIED BY 'occnppasswd';
    
    
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE  ON occnp_pcf_sm.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_pcf_am.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_pcf_user.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_config_server.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_audit_service.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_pcrf_core.* TO 'occnpusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_binding.* TO 'occnpusr'@'%';

    Note:

    The database name can be specified in the envMysqlDatabase parameter for respective services in the custom-value.yaml file.

    It is recommended to use unique database name when there are multiple instances of Cloud Native Core Policy (CNC Policy) deployed in the network and they share the same data tier (MySQL cluster).

  7. Execute the show grants for username command to confirm that the admin or application user has all the required permissions,

    where username is the admin or application user's username.

    For Example,

    show grants for occnpadminusr

    show grants for occnpusr

  8. Exit from the database and log out from the MySQL node.
  9. Create the namespace if it does not already exist by entering the command:
    kubectl create namespace release_namespace
    where:

    release_namespace is the Cloud Native Core Policy (CNC Policy) deployment namespace used by the helm command.

  10. Create a Kubernetes secret for the admin user and the application user that were created in step 5 and step 6.
    To create a Kubernetes secret for storing the database username and password for these users:
    1. Create a yaml file with the application user's username and password with the syntax shown below:
      apiVersion: v1
      kind: Secret
      metadata:
        name: occnp-db-pass
      type: Opaque
      data:
        mysql-username: b2NjbnB1c3I=
        mysql-password: b2NjbnBwYXNzd2Q=
    2. Create a yaml file with the admin user's username and password with the syntax shown below:
      apiVersion: v1
      kind: Secret
      metadata:
        name: occnp-admin-db-pass
      type: Opaque
      data:
        mysql-username: b2NjbnBhZG1pbnVzcg==
        mysql-password: b2NjbnBhZG1pbnBhc3N3ZA==

      Note:

      'name' will be used for the dbCredSecretName and privilegedDbCredSecretName parameters in the CNC Policy custom-values.yaml file.

      Note:

      The values for mysql-username and mysql-password must be base64 encoded.
    3. Execute the following commands to add the Kubernetes secrets in a namespace:
      kubectl create -f yaml_file_name1 -n release_namespace
      kubectl create -f yaml_file_name2 -n release_namespace
      where:

      release_namespace is the deployment namespace used by the helm command.

      yaml_file_name1 is the name of the yaml file created in step a.

      yaml_file_name2 is the name of the yaml file created in step b.
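The base64 values shown in the example secret files above can be produced on the command line. Use printf (or echo -n) so that a trailing newline is not included in the encoded value:

```shell
# Encode MySQL credentials for use in the Kubernetes secret yaml files.
# printf '%s' avoids encoding a trailing newline, which would corrupt the value.
printf '%s' 'occnpusr' | base64        # application user -> b2NjbnB1c3I=
printf '%s' 'occnppasswd' | base64     # application password -> b2NjbnBwYXNzd2Q=

# To check a value, decode it back:
printf '%s' 'b2NjbnB1c3I=' | base64 -d
```

Encode the admin user's credentials the same way before placing them in the second yaml file.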

Installing CNC Policy Package

To install the Cloud Native Core Policy (CNC Policy) package:

  1. Modify the custom-values.yaml file with the required input parameters. To customize the file, see Customizing Cloud Native Core Policy.

    Note:

    The values of the parameters mentioned in the custom values yaml file override the default values specified in the helm chart. If the envMysqlDatabase parameter is modified, you must modify the configDbName parameter with the same value.

    Note:

    perf-info must be configured with the proper URLs, or the pod keeps restarting. Below is an example of the URLs for a bastion server:
    perf-info:
      configmapPerformance:
        prometheus: http://occne-prometheus-server.occne-infra.svc
        jaeger: jaeger-agent.occne-infra
  2. Caution:

    Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. During this time, do not press Ctrl+C to exit the helm install command, as doing so can lead to anomalous behavior.
    1. Install CNC Policy by using Helm2:
      helm install <helm-chart> --name <release_name> --namespace <release_namespace> -f <custom_file> --atomic
            --timeout 600
    2. Install CNC Policy by using Helm3:
      helm install -f <custom_file> <release_name> <helm-chart> --namespace <release_namespace> --atomic --timeout
            10m
    where:

    helm-chart is the location of the helm chart extracted from the occnp-pkg-1.7.3.tgz file.

    release_name is the release name used by helm command.

    release_namespace is the deployment namespace used by helm command.

    custom_file is the name of the custom values yaml file (including its location).

    For example:
    helm install /home/cloud-user/occnp-1.7.3.tgz --name occnp --namespace occnp -f occnp-1.7.3-custom-values-occnp.yaml --atomic
    Refer Customizing Cloud Native Core Policy for the sample yaml file.
    Parameters in helm install command:
    • atomic: If this parameter is set, the installation process purges the chart on failure. The --wait flag is set automatically.
    • wait: If this parameter is set, the installation process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
    • timeout (optional): Specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If not specified, the default value is 300 (300 seconds) in Helm2 and 5m (5 minutes) in Helm3. If the helm install command fails at any point to create a Kubernetes object, it internally purges the release after the timeout value (default: 300s). The timeout value does not apply to the overall install; it applies to the automatic purge on installation failure.
  3. You can verify the installation while the install command is running by entering the following commands:
    watch kubectl get jobs,pods -n release_namespace
    Press Ctrl+C to exit the watch mode. Run the watch command in another terminal.
    helm status release_name -n release_namespace
  4. Check the installation status by entering this command:
    helm ls release_name
    For example:
    helm ls occnp
    The status is displayed as DEPLOYED if the deployment is successful.
    Execute the following command to get status of jobs and pods:
    kubectl get jobs,pods -n release_namespace
    For example:
    kubectl get pod -n occnp
    The status of all the pods is displayed as Running if the deployment is successful.

    Note:

If the installation is not successful, or you do not see the status as Running for all the pods, perform the troubleshooting steps given under Troubleshooting Cloud Native Core Policy (CNC Policy).