3 Installing ATS for Different Network Functions

This section describes how to install ATS for different network functions. It includes the installation procedures for BSF, NEF, NRF, and NSSF.

3.1 Installing ATS for BSF

The BSF ATS installation procedure covers two steps:

  1. Locating and downloading the ATS package for BSF.
  2. Deploying ATS and stub pods in the Kubernetes cluster.

This includes the installation of three stubs (nf1stub, nf11stub, and nf12stub), the ocdns-bind stub, the ocdiam simulator, and BSF ATS in the BSF namespace.

3.1.1 Resource Requirements

This section describes the ATS resource requirements for Binding Support Function.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs and memory:

Table 3-1 BSF - Total Number of Resources

Resource Name | Non-ASM CPU | Non-ASM Memory (GB) | ASM CPU | ASM Memory (GB)
BSF Total | 41 | 36 | 73 | 52
ATS Total | 11 | 11 | 23 | 17
cnDBTier Total | 107.1 | 175.2 | 137.1 | 190.2
Grand Total BSF ATS | 159.1 | 222.2 | 233.1 | 259.2

BSF Pods Resource Requirements Details

This section describes the resource requirements needed to deploy BSF ATS successfully.

Table 3-2 BSF Pods Resource Requirements Details

BSF Microservices | Max CPU | Max Memory (GB) | Max Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
oc-app-info | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
oc-diam-gateway | 4 | 2 | 1 | 2 | 1 | 4 | 2 | 6 | 3
alternate-route | 2 | 4 | 1 | 2 | 1 | 2 | 4 | 4 | 5
oc-config-server | 4 | 2 | 1 | 2 | 1 | 4 | 2 | 6 | 3
ocegress_gateway | 4 | 6 | 1 | 2 | 1 | 4 | 6 | 6 | 7
ocingress_gateway | 4 | 6 | 1 | 2 | 1 | 4 | 6 | 6 | 7
nrf-client-mngt | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 6 | 4
oc-audit | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 4 | 2
oc-config-mgmt | 4 | 2 | 2 | 2 | 1 | 8 | 4 | 12 | 6
oc-query | 2 | 1 | 2 | 2 | 1 | 4 | 2 | 8 | 4
oc-perf-info | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 6 | 4
bsf-management-service | 4 | 4 | 1 | 2 | 1 | 4 | 4 | 6 | 5
BSF Totals | | | | | | 41 | 36 | 73 | 52

ATS Resource Requirements Details for BSF

This section describes the ATS resource requirements needed to deploy BSF ATS successfully.

Table 3-3 ATS Resource Requirements Details

ATS Microservices | Max CPU | Max Memory (GB) | Max Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
ocstub1-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocstub2-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocstub3-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocats-bsf | 3 | 3 | 1 | 2 | 1 | 3 | 3 | 5 | 4
ocdns-bind | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
ocdiam-sim | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
ATS Totals | | | | | | 11 | 11 | 23 | 17

cnDBTier Resource Requirements Details for BSF ATS

This section describes the cnDBTier resource requirements needed to deploy BSF ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes is required.

Table 3-4 cnDBTier Resource Requirements Details

cnDBTier Microservices | Min CPU | Min Memory (GB) | Min Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
db_monitor_svc | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
db_replication_svc | 2 | 12 | 1 | 2 | 1 | 2 | 12 | 4 | 13
db_backup_manager_svc | 0.1 | 0.2 | 1 | 2 | 1 | 0.1 | 0.2 | 2.1 | 1.2
ndbappmysqld | 8 | 10 | 4 | 2 | 1 | 32 | 40 | 40 | 44
ndbmgmd | 4 | 10 | 2 | 2 | 1 | 8 | 20 | 12 | 22
ndbmtd | 10 | 18 | 4 | 2 | 1 | 40 | 72 | 48 | 76
ndbmysqld | 8 | 10 | 2 | 2 | 1 | 16 | 20 | 20 | 22
db_infra_monitor_svc | 8 | 10 | 1 | 2 | 1 | 8 | 10 | 8 | 10
cnDBTier Total | | | | | | 107.1 | 175.2 | 137.1 | 190.2

3.1.2 Downloading the ATS Package

This section provides information on how to locate and download the BSF ATS package file from My Oracle Support (MOS).

Locating and Downloading BSF ATS Package

To locate and download the ATS Image from MOS, perform the following steps:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches and Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Binding Support Function <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the BSF ATS package file.
  10. Untar the gzip file ocats-bsf-tools-24.1.0.0.0.tgz to access the following files:
    ocats-bsf-pkg-24.1.0.0.0.tgz
    ocdns-pkg-24.1.0.0.0.tgz
    ocstub-pkg-24.1.0.0.0.tgz
    ocdiam-pkg-24.1.0.0.0.tgz

    The contents included in each of these files are as follows:

    ocats-bsf-tools-24.1.0.0.0.tgz
    ├── ocats-bsf-pkg-24.1.0.0.0.tgz
    │   ├── ocats-bsf-24.1.0.tgz (Helm Charts)
    │   ├── ocats-bsf-images-24.1.0.tar (Docker Images)
    │   └── ocats-bsf-data-24.1.0.tgz (BSF ATS and Jenkins job data)
    ├── ocstub-pkg-24.1.0.0.0.tgz
    │   ├── ocstub-py-24.1.0.tgz (Helm Charts)
    │   └── ocstub-py-image-24.1.0.tar (Docker Images)
    ├── ocdns-pkg-24.1.0.0.0.tgz
    │   ├── ocdns-bind-24.1.0.tgz (Helm Charts)
    │   └── ocdns-bind-image-24.1.0.tar (Docker Images)
    └── ocdiam-pkg-24.1.0.0.0.tgz
        ├── ocdiam-sim-24.1.0.tgz (Helm Charts)
        └── ocdiam-sim-image-24.1.0.tar (Docker Images)
  11. Copy the tar file from the downloaded package to the CNE, OCI, or Kubernetes cluster where you want to deploy ATS.

3.1.3 Pushing the Images to Customer Docker Registry

This section describes the pre-deployment steps for deploying ATS and stub pods.

Preparing to Deploy ATS and Stub Pods in Kubernetes Cluster

To deploy ATS and Stub pods in a Kubernetes Cluster, perform the following steps:

  1. Run the following command to extract the tar file content:

    tar -zxvf ocats-bsf-tools-24.1.0.0.0.tgz

    The output of this command is:
    
    ocats-bsf-pkg-24.1.0.tgz
    ocstub-pkg-24.1.0.tgz
    ocdns-pkg-24.1.0.tgz
    ocdiam-pkg-24.1.0.0.0.tgz
  2. Go to the ocats-bsf-tools-24.1.0.0.0 folder and run the following command to extract the helm charts and docker images of ATS:

    tar -zxvf ocats-bsf-pkg-24.1.0.0.0.tgz

    The output of this command is:

    
    ocats-bsf-24.1.0.tgz
    ocats-bsf-images-24.1.0.tar
    ocats-bsf-data-24.1.0.tgz
  3. Run the following command in your cluster to load the ATS docker image:

    docker load --input ocats-bsf-images-24.1.0.tar

  4. Run the following commands to tag and push the ATS images:
    docker tag ocats-bsf:24.1.0 <registry>/ocats-bsf:24.1.0
    docker push <registry>/ocats-bsf:24.1.0

    Example:

    docker tag ocats-bsf:24.1.0 localhost:5000/ocats-bsf:24.1.0
    docker push localhost:5000/ocats-bsf:24.1.0
  5. Run the following command to untar the Helm charts file, ocats-bsf-24.1.0.tgz:

    tar -zxvf ocats-bsf-24.1.0.tgz

  6. Update the registry name, image name, and tag in the ocats-bsf/values.yaml file as required. For this, update the image.repository and image.tag parameters in the ocats-bsf/values.yaml file (see the illustrative snippet at the end of this section).
  7. In the ocats-bsf/values.yaml file, the atsFeatures parameter is configured to control ATS feature deliveries.
    
    atsFeatures:  ## DO NOT UPDATE this section without My Oracle Support team's support
      testCaseMapping: true               # To display Test cases on GUI along with Features
      logging: true                       # To enable feature to collect applogs in case of failure
      lightWeightPerformance: false       # The Feature is not implemented yet
      executionWithTagging: true          # To enable Feature/Scenario execution with Tag
      scenarioSelection: false            # The Feature is not implemented yet
      parallelTestCaseExecution: true     # To run ATS features parallel
      parallelFrameworkChangesIntegrated: true # To run ATS features parallel
      mergedExecution: false              # To execute ATS Regression and NewFeatures pipelines together in merged manner
      individualStageGroupSelection: false  # The Feature is not implemented yet
      parameterization: true              # When set to false, the Configuration_Type parameter on the GUI will not be available.
      atsApi: true                        # To trigger ATS using ATS API
      healthcheck: true                   # To enable or disable ATS Health Check.
      atsGuiTLSEnabled: false             # To run ATS GUI in https mode.
      atsCommunicationTLSEnabled: false   # If set to true, ATS gets the variables necessary to communicate with SUT, stubs, or other NFs with TLS enabled. It is not required in an ASM environment.

    Note:

    It is recommended to avoid altering atsFeatures flags.
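As an illustration of step 6, the image parameters in the ocats-bsf/values.yaml file typically resemble the following after the update. The exact layout can vary between chart versions, and the registry name below is only an example, so treat this as a sketch:

image:
  repository: localhost:5000/ocats-bsf   # <registry>/<image name>; example value
  tag: 24.1.0                            # tag pushed in step 4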

3.1.4 Configuring ATS

3.1.4.1 Enabling Static Port
To enable static port, in the ocats-bsf/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter. The following is a snippet of the service section in the yaml file:
service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: false
      staticNodePort: ""
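For example, with the static port enabled, the http port entry might look like the following. The nodePort value 30080 is illustrative; use any free port in your cluster's NodePort range (typically 30000-32767):

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: true
      staticNodePort: "30080"   # illustrative NodePort value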
3.1.4.2 Enabling Static API Node Port
To enable static API node port, in the ocats-bsf/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter. The following is a snippet of the service section in the yaml file:

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    api:
      port: "5001"
      staticNodePortEnabled: false
      staticNodePort: ""
3.1.4.3 Service Account Requirements
To run BSF-ATS, use the following rules to create a service account:
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
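The rules above can be bound to a service account with a Role and RoleBinding. The following manifest is a minimal sketch; the names ocats-bsf-sa, ocats-bsf-role, and ocats-bsf-rolebinding and the ocbsf namespace are illustrative and can be changed to match your naming conventions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-bsf-sa          # illustrative name
  namespace: ocbsf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-bsf-role        # illustrative name
  namespace: ocbsf
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-bsf-rolebinding # illustrative name
  namespace: ocbsf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocats-bsf-role
subjects:
- kind: ServiceAccount
  name: ocats-bsf-sa
  namespace: ocbsf

Apply the manifest with kubectl apply -f <filename> and reference the service account name in the ATS values file if your deployment expects a custom service account.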
3.1.4.4 Enabling Aspen Service Mesh

This section provides information on how to enable Aspen service mesh while deploying ATS for Binding Support Function. The configurations mentioned in this section are optional and should be performed only if ASM is required.

To enable service mesh for BSF ATS, perform the following steps:

  1. In the service section of the values.yaml file, the serviceMeshCheck parameter is set to false by default. To enable service mesh, set the value for serviceMeshCheck to true. The following is a snippet of the service section in the yaml file:
    service:
      customExtension:
        labels: {}
        annotations: {}
      type: LoadBalancer
      ports:
        https:
          port: "8443"
          staticNodePortEnabled: false
          staticNodePort: ""
        http:
          port: "8080"
          staticNodePortEnabled: false
          staticNodePort: ""
        api:
          port: "5001"
          staticNodePortEnabled: false
          staticNodePort: ""
      serviceMeshCheck: true
  2. If the ASM is not enabled on the global level for the namespace, run the following command to enable it before deploying the ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocbsf istio-injection=enabled
  3. Uncomment and add the following annotations under the lbDeployments and nonlbDeployments sections of the global section in the values.yaml file:

    traffic.sidecar.istio.io/excludeInboundPorts: "9000"

    traffic.sidecar.istio.io/excludeOutboundPorts: "9000"

    The following is a snippet from the values.yaml of BSF:

    /home/cloud-user/ocats-bsf/ocats-bsf-tools-24.1.0.0.0/ocats-bsf-pkg-24.1.0.0.0/ocats-bsf/
    vim values.yaml
     
     customExtension:
        allResources:
          labels: {}
          annotations: {
            # Enable this section for service-mesh based installation
            traffic.sidecar.istio.io/excludeInboundPorts: "9000",
            traffic.sidecar.istio.io/excludeOutboundPorts: "9000"
          }
        lbDeployments:
          labels: {}
          annotations: {
            traffic.sidecar.istio.io/excludeInboundPorts: "9000",
            traffic.sidecar.istio.io/excludeOutboundPorts: "9000"
          }
  4. If service mesh is enabled, create a destination rule for fetching metrics from Prometheus. In most deployments, Prometheus is kept outside the service mesh, so a destination rule is needed for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). You can create a destination rule using the following sample yaml file:
    kubectl apply -f - <<EOF
     
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: prometheus-dr
      namespace: ocats
    spec:
      host: oso-prometheus-server.pcf.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
    EOF
    In the destination rule:
    • name indicates the name of the destination rule.
    • namespace indicates the namespace where ATS is deployed.
    • host indicates the hostname of the Prometheus server.
  5. Update the ocbsf_custom_values_servicemesh_config_24.1.0.yaml file with the following additional configuration under the virtualService section for Egress Gateway:
    virtualService:
      - name: nrfvirtual1
        host: ocbsf-ocbsf-egress-gateway
        destinationhost: ocbsf-ocbsf-egress-gateway
        port: 8000
        exportTo: |-
          [ "." ]
        attempts: "0"

    Where the host or destination name uses the format <release_name>-<egress_svc_name>.

    You must update the host or destination name as per the deployment.

  6. Perform helm upgrade on the ocbsf-servicemesh-config release using the modified ocbsf_custom_values_servicemesh_config_24.1.0.yaml file.
    helm upgrade <helm_release_name_for_servicemesh> -n <namespace> <servicemesh_charts> -f <servicemesh-custom.yaml>
    For example,
    helm upgrade ocbsf-servicemesh-config ocbsf-servicemesh-config-24.1.0.tgz -n ocbsf -f ocbsf_custom_values_servicemesh_config_24.1.0.yaml
  7. Configure DNS for Alternate Route service. For more information, see Post-Installation Steps.
3.1.4.5 Enabling Health Check

This section describes how to enable Health Check for ATS.

To enable Health Check, in the ocats-bsf/values.yaml file, set the value of the healthcheck parameter to true and select either the Webscale or OCCNE environment by entering valid values for the following parameters.

To select the OCCNE environment, set the envtype to OCCNE and update the values of the following parameters:
  • Webscale - Update the value as false
  • envtype - T0NDTkU= (that is, envtype=$(echo -n 'OCCNE' | base64))
  • occnehostip - OCCNE Host IP address
  • occnehostusername - OCCNE Host Username
  • occnehostpassword - OCCNE Host Password

After the configurations are done, encode the parameters in Base64 and provide the values as shown in the following snippet:


atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check.
 
Webscale: false
healthchecksecretname: "healthchecksecret"
occnehostip: "MTAuMTcuMjE5LjY1"
occnehostusername: "dXNlcm5hbWU="
occnehostpassword: "KioqKg=="
envtype: "T0NDTkU="
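The values in the snippet above are Base64 encodings rather than encryption in the cryptographic sense; they can be generated and verified from any shell. For example:

# Encode a value before placing it in the values file
echo -n '10.17.219.65' | base64          # prints MTAuMTcuMjE5LjY1
# Decode an existing value to verify it
echo 'MTAuMTcuMjE5LjY1' | base64 -d      # prints 10.17.219.65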
To select the WEBSCALE environment, update the values of the following two parameters:
  • Webscale - Update the value as true
  • envtype - V0VCU0NBTEU= (that is, envtype=$(echo -n 'WEBSCALE' | base64))

After the configurations are done, encode the parameters in Base64 and provide the values as shown in the following snippet:


atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check.
 
Webscale: true
healthchecksecretname: "healthchecksecret"
occnehostip: ""
occnehostusername: ""
occnehostpassword: ""
envtype: "V0VCU0NBTEU="
webscalejumpip: "MTAuNzAuMTE3LjQy"
webscalejumpusername: "dXNlcm5hbWU="
webscalejumppassword: "KioqKg=="
webscaleprojectname: "KioqKg=="
webscalelabserverFQDN: "KioqKg=="
webscalelabserverport: "KioqKg=="
webscalelabserverusername: "KioqKg=="
webscalelabserverpassword: "KioqKg=="

Note:

Once ATS is deployed with the Health Check feature enabled or disabled, the setting cannot be changed. To change the configuration, you must reinstall ATS.
3.1.4.6 Enabling Persistent Volume

Note:

The steps provided in this section are optional and required only if Persistent Volume needs to be enabled.

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations. With this enhancement, you can decide whether to use a persistent volume based on your resource requirements. By default, the persistent volume feature is not enabled.

To enable persistent storage, perform the following steps:
  1. Create a PVC using the PersistentVolumeClaim.yaml file and associate it with the ATS pod.
    Sample PersistentVolumeClaim.yaml file:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <Enter the PVC Name>
      annotations:
    spec:
      storageClassName: <Provide the Storage Class Name>
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: <Provide the size of the PV>
    1. Set name to the PVC name.
    2. Set storageClassName to the storage class name.
    3. Set storage to the size of the persistent volume.
      Sample PVC configuration:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: bsf-pvc-24.1.0
        annotations:
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  2. Run the following command to create PVC:
    kubectl apply -f <filename> -n <namespace>

    For example:

    kubectl apply -f PersistentVolumeClaim.yaml -n ocbsf

    Output:

    persistentvolumeclaim/bsf-pvc-24.1.0 created
  3. Once the PVC is created, run the following command to verify that it is bound to the persistent volume and is available:
    kubectl get pvc -n <namespace used for pvc creation>

    For example:

    kubectl get pvc -n ocbsf

    Sample output:

    NAME              STATUS        VOLUME                                   CAPACITY  ACCESS MODES   STORAGECLASS   AGE
    bsf-pvc-24.1.0   Bound    pvc-65484045-3805-4064-9fc3-f9eeeaccc8b8      1Gi        RWO            standard      11s

    Verify that the STATUS is Bound and that the rest of the parameters, such as NAME, CAPACITY, ACCESS MODES, and STORAGECLASS, are as mentioned in the PersistentVolumeClaim.yaml file.

    Note:

    If there is an issue with the PV creation, do not proceed further with the next steps; contact your administrator to get the PV created.

  4. Enable PVC:
    1. Set the PVEnabled flag to true.
    2. Set PVClaimName to the PVC created in Step 1.
      
      deployment:
        customExtension:
          labels: {}
          annotations: {}
        PVEnabled: true
        PVClaimName: "bsf-pvc-24.1.0"
        

    Note:

    Make sure that ATS is deployed before proceeding with the next steps.
  5. Copy the <nf_main_folder> and <jenkins jobs> folders from the tar file to the ATS pod and restart the pod.
    1. Extract the tar file.
      tar -xvf ocats-bsf-data-24.1.0.tgz
    2. Run the following commands to copy the desired folder.
      kubectl cp ocats-bsf-data-24.1.0/ocbsf_tests <namespace>/<pod-name>:/var/lib/jenkins/
      kubectl cp ocats-bsf-data-24.1.0/jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins/
    3. Restart the pod.
      kubectl delete po <pod-name> -n <namespace>
  6. Once the pod is up and running, log in to the Jenkins console and configure the Discard old Builds option to set the number of Jenkins builds that must be retained in the persistent volume.

    Figure 3-1 Discarding Old Builds

    Note:

    If Discard old Builds is not configured, the Persistent Volume can get filled when there is a huge number of builds.

For more details on Persistent Volume Storage, see Persistent Volume for 5G ATS.

3.1.5 Deploying ATS and Pods

3.1.5.1 Deploying ATS in Kubernetes Cluster

Important:

This procedure is for backward porting purposes only and should not be considered the pod deployment procedure for subsequent releases.

Prerequisite: Make sure that the old PVC, which contains the old release POD data is available.

To deploy ATS, perform the following steps:

  1. Run the following command to deploy ATS using the updated helm charts:

    Note:

    Ensure that all the components, that is, ATS, stub pods, and CNC BSF, are deployed in the same namespace.

    Using Helm 3

    helm install <release_name> ocats-bsf-24.1.0.tgz --namespace <namespace_name> -f <values-yaml-file>

    For example:

    helm install ocats ocats-bsf-24.1.0.tgz --namespace ocbsf -f ocats-bsf/values.yaml
  2. Run the following command to verify ATS deployment:
    helm ls -n ocbsf
    The output of the command is as follows:
    
    NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
    ocats                   1               Mon Nov 14 14:56:11 2020        DEPLOYED        ocats-bsf-24.1.0      1.0       ocbsf
    If the deployment is successful, the status is Deployed.
3.1.5.2 Deploying Stub Pod in Kubernetes Cluster
To deploy Stub Pod in Kubernetes cluster, perform the following steps:
  1. Navigate to the ocats-bsf-tools-24.1.0.0.0 folder and run the following command:

    tar -zxvf ocstub-pkg-24.1.0.0.0.tgz
    The output of the command shows:
    • ocstub-py-24.1.0.tgz
    • ocstub-py-image-24.1.0.tar
  2. Deploy the additional stubs required to validate the session retry feature.

    You can use nf11stub or nf12stub as alternate FQDNs for nf1stub.

    1. Run the following command to load the stub image.
      docker load --input ocstub-py-image-24.1.0.tar
    2. Tag and push the image to your Docker registry using the following commands:
      
      docker tag ocstub-py:24.1.0 localhost:5000/ocstub-py:24.1.0
      docker push localhost:5000/ocstub-py:24.1.0
    3. Untar the helm charts ocstub-py-24.1.0.tgz and update the registry name, image name, and tag (if required) in the ocstub-py/values.yaml file.
    4. If required, change apiVersion to apps/v1 in ocstub-py/templates/deployment.yaml file.
      apiVersion: apps/v1
    5. Deploy the stub:
      helm install <release_name> ocstub-py --set env.NF=<NF> --set env.LOG_LEVEL=<DEBUG/INFO> --set service.name=<service_name> --set service.appendReleaseName=false --namespace=<namespace_name> -f <values-yaml-file>

      For example:

      
      helm install nf1stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false --namespace=ocbsf -f ocstub-py/values.yaml
      
      helm install nf11stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf11stub --set service.appendReleaseName=false --namespace=ocbsf -f ocstub-py/values.yaml
      
      helm install nf12stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf12stub --set service.appendReleaseName=false --namespace=ocbsf -f ocstub-py/values.yaml
    6. Run the following command to verify the stub deployment:
      helm ls -n ocbsf

      Sample output:

      [cloud-user@platform-bastion-1 ocstub-pkg-24.1.0.0.0]$ helm ls -n ocbsf
      NAME                    REVISION             UPDATED                  STATUS          CHART                   APP VERSION     NAMESPACE
      nf11stub             1               Thu Jul  29 05:55:48 2021        DEPLOYED        ocstub-py-24.1.0                 1.0      ocbsf
      nf12stub             1               Thu Jul  29 05:55:50 2021        DEPLOYED        ocstub-py-24.1.0                 1.0      ocbsf
      nf1stub              1               Thu Jul  29 05:55:47 2021        DEPLOYED        ocstub-py-24.1.0                 1.0      ocbsf
    7. Run the following command to verify the ATS and Stubs deployment status:
      helm status <release_name> -n ocbsf
    8. Run the following command to verify if all the services are installed.
      kubectl get po -n ocbsf

      Sample output:

                                                       
      [cloud-user@platform-bastion-1 ocstub-pkg-24.1.0.0.0]$ kubectl get po -n ocbsf
      NAME                                                   READY   STATUS   RESTARTS   AGE
      nf11stub-ocstub-py-7bffd6dcd7-ftm5f                   1/1     Running   0          3d23h
      nf12stub-ocstub-py-547f7cb99f-7mpll                   1/1     Running   0          3d23h
      nf1stub-ocstub-py-bdd97cb9-xjrkx                      1/1     Running   0          3d23h
3.1.5.3 Deploying DNS Stub in Kubernetes Cluster

Note:

Ensure that sufficient resource requests and limits are set for the DNS Stub. Set the resource request and limit values in the resources section of the values.yaml file as follows:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 1000m
  #  memory: 1024Mi
  # requests:
  #  cpu: 500m
  #  memory: 500Mi
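For example, to give the DNS Stub explicit requests and limits, remove the curly braces and uncomment the values. The figures below are the chart's own commented suggestions; adjust them to your environment:

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 500Mi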
To deploy DNS Stub in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-bsf-tools-24.1.0.0.0 folder and run the following command to extract the ocdns tar file content:

    tar -zxvf ocdns-pkg-24.1.0.0.0.tgz

    Sample output:

    
    [cloud-user@platform-bastion-1 ocdns-pkg-24.1.0.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocdns-bind-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocdns-bind-24.1.0.tgz
    
  2. Run the following command in your cluster to load the DNS STUB image:

    docker load --input ocdns-bind-image-24.1.0.tar

  3. Run the following commands to tag and push the DNS STUB image:
    docker tag ocdns-bind:24.1.0 localhost:5000/ocdns-bind:24.1.0
    docker push localhost:5000/ocdns-bind:24.1.0
  4. Run the following command to untar the helm charts, ocdns-bind-24.1.0.tgz.

    tar -zxvf ocdns-bind-24.1.0.tgz

  5. Update the registry name, image name, and tag (if required) in the ocdns-bind/values.yaml file. For this, open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to deploy the DNS Stub.
    Using Helm3:
    helm install ocdns ocdns-bind-24.1.0.tgz --namespace ocbsf -f ocdns-bind/values.yaml
  7. Capture the cluster name of the deployment, the namespace where the nf stubs are deployed, and the cluster IP of the DNS Stub.

    To capture the DNS Stub cluster IP:
    kubectl get svc -n ocbsf | grep dns

    Sample output:

    
    [cloud-user@platform-bastion-1 ocdns-pkg-24.1.0.0.0]$ kubectl get svc -n ocbsf | grep dns
    NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                AGE
    ocdns     ClusterIP      10.233.11.45    <none>          53/UDP,6236/TCP        19h
    To capture the cluster name:
    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep clusterName
    Sample output:
    clusterName: platform
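    If you prefer to capture the DNS Stub cluster IP directly into a shell variable, a jsonpath query can be used. The service name ocdns below matches the release name used in the deployment step:

    DNS_STUB_IP=$(kubectl get svc ocdns -n ocbsf -o jsonpath='{.spec.clusterIP}')
    echo $DNS_STUB_IP     # for example, 10.233.11.45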
3.1.5.4 Deploying ocdiam Simulator in Kubernetes Cluster
To deploy ocdiam Simulator in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-bsf-tools-24.1.0.0.0 folder and run the following command to extract the ocdiam tar file content:

    tar -zxvf ocdiam-pkg-24.1.0.0.0.tgz

    Sample output:

    [cloud-user@platform-bastion-1 ocdiam-pkg-24.1.0.0.0]$ ls -ltrh
    total 908M
    -rw-------. 1 cloud-user cloud-user 908M Mar 14 14:49 ocdiam-sim-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 3.8K Mar 14 14:49 ocdiam-sim-24.1.0.tgz
  2. Run the following command in your cluster to load the ocdiam Simulator image:
    docker load --input ocdiam-sim-image-24.1.0.tar
  3. Run the following commands to tag and push the ocdiam Simulator image:
    docker tag ocdiam-sim:24.1.0 localhost:5000/ocdiam-sim:24.1.0
    docker push localhost:5000/ocdiam-sim:24.1.0
  4. Run the following command to untar the helm charts, ocdiam-sim-24.1.0.tgz.
    tar -zxvf ocdiam-sim-24.1.0.tgz
  5. Update the registry name, image name and tag (if required) in the ocdiam-sim/values.yaml file as required. For this, open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to deploy the Diam Sim.
    Using Helm3:
    helm install ocdiam-sim ocdiam-sim --namespace ocbsf -f ocdiam-sim/values.yaml

    Sample output:

    ocdiam-sim-69968444b6-fg6ks                1/1     Running   0          5h47m

Sample of BSF namespace with BSF and ATS after installation:

[cloud-user@platform-bastion-1 ocstub-pkg-24.1.0.0.0]$ kubectl get po -n ocbsf
NAME                                                      READY   STATUS    RESTARTS   AGE
ocbsf-appinfo-6fc99ffb85-f96j2                        1/1     Running   1          3d23h
ocbsf-bsf-management-service-df6b68d75-m77dv          1/1     Running   0          3d23h
ocbsf-oc-config-79b5444f49-7pwzx                      1/1     Running   0          3d23h
ocbsf-oc-diam-connector-77f7b855f4-z2p88              1/1     Running   0          3d23h
ocbsf-oc-diam-gateway-0                               1/1     Running   0          3d23h
ocbsf-ocats-bsf-5d8689bc77-cxdvx                      1/1     Running   0          3d23h
ocbsf-ocbsf-egress-gateway-644555b965-pkxsb           1/1     Running   0          3d23h
ocbsf-ocbsf-ingress-gateway-7558b7d5d4-lfs5s          1/1     Running   4          3d23h
ocbsf-ocbsf-nrf-client-nfmanagement-d6b955b48-4pptk   1/1     Running   0          3d23h
ocbsf-ocdns-ocdns-bind-75c964648-j5fsd                1/1     Running   0          3d23h
ocbsf-ocpm-cm-service-7775c76c45-xgztj                1/1     Running   0          3d23h
ocbsf-ocpm-queryservice-646cb48c8c-d72x4              1/1     Running   0          3d23h
ocbsf-performance-69fc459ff6-frrvs                    1/1     Running   4          3d23h
ocbsfnf11stub-7bffd6dcd7-ftm5f                        1/1     Running   0          3d23h
ocbsfnf12stub-547f7cb99f-7mpll                        1/1     Running   0          3d23h
ocbsfnf1stub-bdd97cb9-xjrkx                           1/1     Running   0          3d23h
ocdiam-sim-69968444b6                                 1/1     Running   0          3d23h

3.1.6 Post-Installation Steps

This section describes the post-installation steps that users should perform after deploying ATS and stub pods.

Alternate Route Service Configurations

To edit the Alternate Route Service deployment file (ocbsf-ocbsf-alternate-route) so that it points to the DNS Stub, perform the following steps:

  1. Run the following command to get the search domain information from the dns-bind pod to enable communication between the Alternate Route service and the dns-bind service:
    kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
    The following output is displayed after running the command:

    Figure 3-2 Sample Output
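    The output is the list of DNS search domains configured in the pod. As an illustration, for a deployment in the <namespace> namespace on a cluster named <cluster_name>, it resembles:

    <namespace>.svc.<cluster_name>
    svc.<cluster_name>
    <cluster_name>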
    By default, the alternate route service points to CoreDNS, and you will see the following settings in the deployment file:

    Figure 3-3 Alternate Route Service Deployment File (screen capture showing the alternate route service pointing to CoreDNS)
  2. Run the following command to edit the deployment file and add the following content in the alternate route service to query the DNS stub:
    kubectl edit deployment ocbsf-ocbsf-alternate-route -n ocbsf
    1. Add the IP Address of the nameserver that you have recorded after installing the DNS stub (cluster IP Address of DNS Stub).
    2. Add the search information one by one which you recorded earlier.
    3. Set dnsPolicy to "None".
      dnsConfig:
        nameservers:
        - 10.233.33.169      # cluster IP of DNS Stub
        searches:
        - ocpcf.svc.occne15-ocpcf-ats
        - svc.occne15-ocpcf-ats
        - occne15-ocpcf-ats
      dnsPolicy: None
    For example:

    Figure 3-4 Example

NRF client configmap

  1. In the application-config configmap, configure the following parameters with the respective values (an illustrative way to edit the configmap follows this list):
    • primaryNrfApiRoot=nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080

      Example: primaryNrfApiRoot=nf1stub.ocats.svc:8080

    • secondaryNrfApiRoot=nf11stub.<namespace_gostubs_are_deployed_in>.svc:8080

      Example: secondaryNrfApiRoot=nf11stub.ocats.svc:8080

    • virtualNrfFqdn=nf1stub.<namespace_gostubs_are_deployed_in>.svc

      Example: virtualNrfFqdn=nf1stub.ocats.svc

    Note:

    To get all configmaps in your namespace, run the following command:

    kubectl get configmaps -n <BSF_namespace>

  2. (Optional) If persistent volume is used, follow the post-installation steps provided in the Persistent Volume for 5G ATS section.
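As an illustrative way to apply the values from step 1, the application-config configmap can be edited in place. Depending on the deployment, the NRF client pods may need a restart to pick up the change:

kubectl edit configmap application-config -n <BSF_namespace>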

3.2 Installing ATS for NEF

3.2.1 Resource Requirements

This section describes the ATS resource requirements for NEF.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs, memory, and storage for the following:
  • NEF SUT
  • cnDBTier
  • ATS

Table 3-5 NEF - Total Number of Resources

Resource Name | CPU | Memory (GB) | Storage (GB)
NEF SUT Totals | 21.6 | 21.6 | 4
cnDBTier Totals | 40 | 40 | 20
ATS Totals | 4 | 3 | 0
Grand Total NEF ATS | 65.6 | 64.6 | 24

NEF Pods Resource Requirements Details

This section describes the resource requirements needed to deploy NEF ATS successfully.

Table 3-6 NEF Pods Resource Requirements Details

Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB)
oc_nef_ccfclient_manager | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
oc_nef_monitoring_events | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
oc_nef_quality_of_service | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
oc_nef_aef_apirouter | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
oc_nef_apd_manager | 0.7 | 0.7 | 0 | 1 | 1 | 0.7 | 0.7 | 0
oc_nef_5gcagent | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
oc_nef_expiry_auditor | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
oc_capif_afmgr | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
oc_capif_apimgr | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
oc_capif_eventmgr | 0.7 | 0.7 | 0 | 2 | 1 | 0.7 | 0.7 | 0
ocingress_gateway | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
ocegress_gateway | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
nrf-client | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
oc-app-info | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
oc-perf-info | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
oc-config-server | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
ocnef-trafficinfluence | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0
nrf-client-discovery | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
oc_nef_diam_gateway | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
oc_nef_device_trigger | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
poolmanager | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
msisdnlessmosms | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
consoledataservice | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0
NEF SUT Totals | | | | | | 19.9 | 19.9 | 0

ATS Resource Requirements Details for NEF

This section describes the ATS resource requirements needed to deploy NEF ATS successfully.

Table 3-7 ATS Resource Requirements Details

Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB)
ATS Behave | 2 | 1 | 0 | 0 | 1 | 2 | 1 | 0
ocstub-nef-af | 0.5 | 1 | 0 | 0 | 1 | 0.5 | 1 | 0
ocstub-nef-nrf | 0.5 | 0.5 | 0 | 0 | 1 | 0.5 | 0.5 | 0
ocstub-nef-udm | 0.5 | 1 | 0 | 0 | 1 | 0.5 | 1 | 0
ocstub-nef-gmlc | 0.5 | 1 | 0 | 0 | 1 | 0.5 | 1 | 0
ocstub-nef-bsf | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
ocstub-nef-pcf | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
ocstub-nef-udr | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0
ocstub-diam-nef | 1 | 0.5 | 0 | 2 | 1 | 1 | 1 | 0
ATS Totals | | | | | | 8 | 8.5 | 0

cnDBTier Resource Requirements Details for NEF

This section describes the cnDBTier resource requirements needed to deploy NEF ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes is required.

Table 3-8 cnDBTier Resource Requirements

Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB)
vrt-launcher-dt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-dt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-dt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-dt-4.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-mt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-mt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-mt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-sq-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-sq-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
vrt-launcher-db-installer.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2
cnDBTier Totals | | | | | | 40 | 40 | 20

3.2.2 Downloading the ATS Package

Locating and Downloading ATS Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Network Exposure Function <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the NEF ATS package file.
  10. Extract the zip file to access all the ATS images. The <p********_<release_number>_Tekelec>.zip directory has the following files:
    ocats-nef-tools-pkg-24.1.0.0.0.tgz
    ocats-nef-tools-pkg-24.1.0.0.0-README.txt
    ocats-nef-tools-pkg-24.1.0.0.0.tgz.sha256
    ocats-nef-custom-configtemplates-24.1.0.0.0.zip
    ocats-nef-custom-configtemplates-24.1.0.0.0-README.txt

    The ocats-nef-tools-pkg-24.1.0.0.0-README.txt file has all the information required for the package.

    The ocats-nef-tools-pkg-24.1.0.0.0.tgz file has the following images and charts packaged as tar files:
    ocats-nef-tools-pkg-24.1.0.0.0.tgz
    ├── ocats-nef-pkg-24.1.0.0.0.tgz
    │   ├── ocats-nef-24.1.0.tgz (Helm Charts)
    │   ├── ocats-nef-image-24.1.0.tar (Docker Images)
    │   ├── OCATS-NEF-Readme.txt
    │   ├── ocats-nef-24.1.0.tgz.sha256
    │   ├── ocats-nef-image-24.1.0.tar.sha256
    │   ├── ats_data-24.1.0.tar (ATS test scripts and Jenkins data)
    │   └── ats_data-24.1.0.tar.sha256
    └── ocstub-nef-pkg-24.1.0.0.0.tgz
        ├── ocstub-nef-24.1.0.tgz (Helm Charts)
        ├── ocstub-nef-image-24.1.0.tar (Docker Images)
        ├── ocstub-diam-nef-image-24.1.0.tar
        ├── ocstub-diam-nef-image-24.1.0.tar.sha256
        ├── OCSTUB-NEF-Readme.txt
        ├── ocstub-nef-24.1.0.tgz.sha256
        └── ocstub-nef-image-24.1.0.tar.sha256
    In addition to the above images and charts, there is an ocats-nef-custom-configtemplates-24.1.0.0.0.zip file in the package zip file. The ocats-nef-custom-configtemplates-24.1.0.0.0-README.txt file has information about this zip file.
    ocats-nef-custom-configtemplates-24.1.0.0.0.zip
    ├── ocats-nef-custom-values.yaml (Custom values file for installation)
    ├── ocats-nef-custom-serviceaccount.yaml (Template to create custom service account)
    └── ocstub-nef-custom-values.yaml (Custom values file for stub installation)
  11. Copy the tar file to the CNE, OCI, or Kubernetes cluster where you want to deploy ATS.

3.2.3 Pushing the Images to Customer Docker Registry

Preparing to deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Verify the checksums of the tarballs against the values mentioned in the Readme.txt file.
  2. Run the following command to extract tar file content.

    tar -xvf ocats-nef-tools-pkg-24.1.0.0.0.tgz

    The output of this command is:
    ocats-nef-pkg-24.1.0.0.0.tgz
    ocstub-nef-pkg-24.1.0.0.0.tgz
  3. Run the following command to extract the helm charts and docker images of ATS.

    tar -xvf ocats-nef-pkg-24.1.0.0.0.tgz

    The output of this command is:
    ocats-nef-image-24.1.0.tar
    ocats-nef-24.1.0.tgz
    OCATS-NEF-Readme.txt

    Note:

    The OCATS-NEF-Readme.txt file has all the information required for the package.
  4. Run the following command to untar the ocstub package.

    tar -xvf ocstub-nef-pkg-24.1.0.0.0.tgz

    The output of this command is:
    ocstub-nef-image-24.1.0.tar
    ocstub-nef-24.1.0.tgz
    ocstub-diam-nef-image-24.1.0.tar
    OCSTUB-NEF-Readme.txt
  5. Run the following command to extract the content of the custom configuration templates:

    unzip ocats-nef-custom-configtemplates-24.1.0.0.0.zip

    The output of this command is:
    ocats-nef-custom-values.yaml (Custom yaml file for deployment of OCATS-NEF)
    ocats-nef-custom-serviceaccount.yaml (Custom yaml file for service account creation to help the customer if required)
    ocstub-nef-custom-values.yaml (Custom yaml file for deployment of OCSTUB-NEF)
  6. Run the following commands in your cluster to load the ATS docker image, 'ocats-nef-image-24.1.0.tar', and push it to your registry.
    $ docker load -i ocats-nef-image-24.1.0.tar
    
    $ docker tag ocats/ocats-nef:24.1.0 <local_registry>/ocats/ocats-nef:24.1.0
    
    $ docker push <local_registry>/ocats/ocats-nef:24.1.0
  7. Run the following commands in your cluster to load the Stub docker images, 'ocstub-nef-image-24.1.0.tar' and 'ocstub-diam-nef-image-24.1.0.tar', and push them to your registry.
    $ docker load -i ocstub-nef-image-24.1.0.tar
     
    $ docker tag ocats/ocstub-nef:24.1.0 <local_registry>/ocats/ocstub-nef:24.1.0
     
    $ docker push <local_registry>/ocats/ocstub-nef:24.1.0
    
    $ docker load -i ocstub-diam-nef-image-24.1.0.tar
    
    $ docker tag ocats/ocstub-diam-nef:24.1.0 <local_registry>/ocats/ocstub-diam-nef:24.1.0
    
    $ docker push <local_registry>/ocats/ocstub-diam-nef:24.1.0
  8. Update the image name and tag in the ocats-nef-custom-values.yaml and ocstub-nef-custom-values.yaml files as required. For this, open the ocats-nef-custom-values.yaml and ocstub-nef-custom-values.yaml files and update the image.repository and image.tag parameters.

3.2.4 Configuring ATS

3.2.4.1 Enabling Static Port
  1. To enable static port:

    Note:

    ATS supports static port. By default, this feature is disabled.
    • In the ocats-nef-custom-values.yaml file under the service section, set the staticNodePortEnabled parameter value to 'true' and the staticNodePort parameter value to a valid nodePort.
      service:
        customExtension:
          labels: {}
          annotations: {}
        type: LoadBalancer
        port: "8080"
        staticNodePortEnabled: true
        staticNodePort: "32385"

3.2.5 Deploying ATS, Stub and CNC Console in Kubernetes Cluster

Note:

It is important to ensure that all four components, that is, ATS, Stub, NEF, and CNC Console, are in the same namespace.

To run NEF test cases, you need seven stubs. The service name of the stubs should be ocnefsim-ocstub-svc-af, ocnefsim-ocstub-svc-nrf, ocnefsim-ocstub-svc-bsf, ocnefsim-ocstub-svc-gmlc, ocnefsim-ocstub-svc-pcf, ocnefsim-ocstub-svc-udm, and ocnefsim-ocstub-svc-udr.

ATS and Stub support Helm 2 and Helm 3 for deployment.

If the namespace does not exist, run the following command to create a namespace:

kubectl create namespace ocnef

Important:

  • It is mandatory to use the <release_name> as ocnefsim while installing stubs.
  • The ATS deployment with NEF does not support the Persistent Volume (PV) feature. Therefore, the default value of the deployment.PVEnabled parameter in the ocats-nef-custom-values.yaml must not be changed. By default, the parameter value is set to false.
Using Helm 2 for Deploying ATS:
helm install ocats-nef-24.1.0.tgz --name <release_name> --namespace <namespace_name> -f <values-yaml-file>
Example:
helm install ocats-nef-24.1.0.tgz --name ocats --namespace ocnef -f ocats-nef-custom-values.yaml

Using Helm 2 for Deploying Stubs:

helm install ocstub-nef-24.1.0.tgz --name <release_name> --namespace <namespace_name> -f <values-yaml-file>
Example:
helm install ocstub-nef-24.1.0.tgz --name ocnefsim --namespace ocnef -f ocstub-nef-custom-values.yaml

Using Helm 3 for Deploying ATS:

helm3 install <release_name> ocats-nef-24.1.0.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:

helm3 install ocats ocats-nef-24.1.0.tgz --namespace ocnef -f ocats-nef-custom-values.yaml
Using Helm 3 for Deploying Stubs:
helm3 install <release_name> ocstub-nef-24.1.0.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:
helm3 install ocnefsim ocstub-nef-24.1.0.tgz --namespace ocnef -f ocstub-nef-custom-values.yaml
3.2.5.1 Deploy and Configure CNC Console
Perform the following steps to deploy and configure the CNC Console:
  1. Install the CNC Console 24.1.0 in the same namespace where NEF ATS is installed. For further information, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.
  2. Create user and assign the necessary roles to access the NEF CNC Console GUI.

    Note:

    Following are the different roles to be assigned:
    • ADMIN
    • NEF_WRITE
    • NEF_READ
    • default-roles-cncc
3.2.5.2 Creating Console secrets

Run the following command to create the console secret that contains the IAM and Console passwords created in Deploy and Configure CNC Console.

Command:
kubectl create secret generic ocats-console-secret --from-literal=cnc_console_password=<cncc-console-password> --from-literal=cnc_iam_password=<cnc_iam_password> -n <namespace>
For example:
kubectl create secret generic ocats-console-secret --from-literal=cnc_console_password=Nefuser@1 --from-literal=cnc_iam_password=abc123 -n ocnef
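To confirm that the secret was created with both keys, the following check can be used; kubectl describe lists the key names and data sizes without revealing the values:

kubectl describe secret ocats-console-secret -n ocnef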

3.2.6 Verifying ATS Deployment

Run the following command to verify ATS deployment.

helm status <release_name> -n <namespace>

Once ATS and Stub are deployed, run the following commands to check the pod and service deployment:

To check pod deployment:


kubectl get pod -n ocnef

To check service deployment:

kubectl get service -n ocnef
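As a quick health filter, the following command lists only the pods that are not in the Running phase; apart from pods of completed jobs (which report Succeeded), an empty result indicates that the deployment came up cleanly:

kubectl get pod -n ocnef --field-selector=status.phase!=Running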

3.3 Installing ATS for NRF

3.3.1 Resource Requirements

This section describes the ATS resource requirements for NRF.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs, memory, and storage for the following:
  • NRF SUT
  • cnDBTier
  • ATS

Table 3-9 NRF - Total Number of Resources

Resource Name | CPU | Memory (Gi) | Storage (Mi)
NRF SUT Totals | 61 | 69 | 0
cnDBTier Totals | 40.5 | 50.5 | 720
ATS Totals | 7 | 6 | 0
Grand Total NRF ATS | 108.5 | 125.5 | 720

NRF Pods Resource Requirements Details

For NRF Pods resource requirements, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

ATS Resource Requirements Details for NRF

This section describes the ATS resource requirements needed to deploy NRF ATS successfully.

Table 3-10 ATS Resource Requirements Details

Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB)
ATS Behave | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 0
ATS Stub (Python) | 1 | 1 | 1 | 1 | 5 | 5 | 5 | 0
ATS Totals | | | | | | 7 | 6 | 0

cnDBTier Resource Requirements Details for NRF

This section describes the cnDBTier resource requirements needed to deploy NRF ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes is required.

Table 3-11 cnDBTier Services Resource Requirements

Service Name | Min Pod Replicas | Min CPU/Pod | Min Memory/Pod (Gi) | PVC Size (Gi) | Min Ephemeral Storage (Mi)
MGMT (ndbmgmd) | 2 | 4 | 6 | 15 | 90
DB (ndbmtd) | 4 | 5 | 5 | 4 | 90
SQL (ndbmysqld) | 2 | 4 | 5 | 8 | 90
SQL (ndbappmysqld) | 2 | 2 | 3 | 1 | 90
Monitor Service (db-monitor-svc) | 1 | 0.4 | 490 Mi | NA | 90
Backup Manager Service (db-backup-manager-svc) | 1 | 0.1 | 130 Mi | NA | 90
Replication Service - Leader | 1 | 2 | 2 | 2 | 90
Replication Service - Other | 0 | 1 | 2 | 0 | 90

3.3.2 Downloading the ATS Package

Locating and Downloading ATS Images

To locate and download the ATS image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Network Repository Function <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the NRF ATS package file.
  10. Untar the zip file to access all the ATS images. The <p********_<release_number>_Tekelec>.zip directory has the following files:
    ocats_ocnrf_csar_24_1_3_0_0.zip
    ocats_ocnrf_csar_24_1_3_0_0.zip.sha256
    ocats_ocnrf_csar_mcafee-24.1.3.0.0.log

    Note:

    The above zip file contains all the images and custom values required for the 24.1.3 release of OCATS-OCNRF.

    The ocats_ocnrf_csar_24_1_3_0_0.zip file has the following files and folders:
    ├── Definitions
    │   ├── ocats_ocnrf_ats_tests.yaml
    │   └── ocats_ocnrf.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   └── ocats-ocnrf-24.1.3.tgz
    │   ├── Licenses
    │   ├── ocats-nrf-24.1.3.tar
    │   ├── Oracle.cert
    │   ├── ocstub-py-24.1.3.tar
    │   └── Tests
    ├── ocats_ocnrf.mf
    ├── Scripts
    │   ├── ocats_ocnrf_custom_serviceaccount_24.1.3.yaml
    │   ├── ocats_ocnrf_custom_values_24.1.3.yaml
    │   └── ocats_ocnrf_tests_jenkinsjobs_24.1.3.tgz
    └── TOSCA-Metadata
        └── TOSCA.meta
  11. Copy the zip file to the Kubernetes cluster where you want to deploy ATS.

3.3.3 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Run the following command to extract the zip file content:
    unzip ocats_ocnrf_csar_24_1_3_0_0.zip
    The following docker image tar files are located in the Files folder:
    • ocats-nrf-24.1.3.tar
    • ocstub-py-24.1.3.tar
  2. Run the following commands in your cluster to load the ATS docker image, 'ocats-nrf-24.1.3.tar', and the Stub docker image, 'ocstub-py-24.1.3.tar', and push them to your registry.
    $ docker load -i ocats-nrf-24.1.3.tar
    $ docker load -i ocstub-py-24.1.3.tar
     
    $ docker tag ocats/ocats-nrf:24.1.3 <local_registry>/ocats/ocats-nrf:24.1.3
    
    $ docker tag ocats/ocstub-py:24.1.3 <local_registry>/ocats/ocstub-py:24.1.3
    
    $ docker push <local_registry>/ocats/ocats-nrf:24.1.3
    
    $ docker push <local_registry>/ocats/ocstub-py:24.1.3
  3. Create a copy of the custom values file located at Scripts/ocats_ocnrf_custom_values_24.1.3.yaml and update the image name, tag, and other parameters as per your requirements.

3.3.4 Configuring ATS

3.3.4.1 Enabling Static Port
  1. To enable static port:

    Note:

    ATS supports static port. By default, this feature is disabled.
    • In the ocats-ocnrf-custom-values.yaml file under the service section, set the staticNodePortEnabled parameter value to 'true' and the staticNodePort parameter value to a valid nodePort.
      service:
        customExtension:
          labels: {}
          annotations: {}
        type: LoadBalancer
        port: "8080"
        staticNodePortEnabled: true
        staticNodePort: "32385"
3.3.4.2 Enabling Aspen Service Mesh

To enable service mesh for ATS:

  1. To enable service mesh, set the value for serviceMeshCheck to true. The following is a snippet of the service section in the yaml file:
    service:
      customExtension:
        labels: {}
        annotations: {}
      type: LoadBalancer
      port: "8080"
      staticNodePortEnabled: true
      staticNodePort: "32385"
      serviceMeshCheck: true
  2. If the ASM is not enabled on the global level for the namespace, run the following command to enable it before deploying the ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocnrf istio-injection=enabled
  3. Add the following annotations under the lbDeployments and nonlbDeployments sections of the global section in the ocats-nrf-custom-values.yaml file for the ATS deployment:

    traffic.sidecar.istio.io/excludeInboundPorts: "8080"

    traffic.sidecar.istio.io/excludeOutboundPorts: "9090"

    For example:

       lbDeployments:
          labels: {}
          annotations:
            traffic.sidecar.istio.io/excludeInboundPorts: "8080"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090"
    
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: 
            traffic.sidecar.istio.io/excludeInboundPorts: "8090"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090"
  4. Add the following annotations in the OCNRF deployment to work with ATS in a service mesh environment:

    For example:

    oracle.com/cnc: "true"
    traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
    traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"
    

    
        lbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"
            traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"
    
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: 
            oracle.com/cnc: "true"
            traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"

Note:

If the above annotations are not provided in the NRF deployment under lbDeployments and nonlbDeployments, all the metrics- and alerts-related test cases fail.

3.3.4.3 Enabling Persistent Volume

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

To enable persistent storage:
  1. Create a PVC and associate it with the ATS pod (a minimal sketch follows this list).
  2. Set the PVEnabled flag to true.
  3. Set PVClaimName to the PVC that is created for ATS.
    
    deployment:
      customExtension:
        labels: {}
        annotations: {}
      PVEnabled: true
      PVClaimName: "ocats-nrf-24.1.3-pvc"
      
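The BSF procedure in "Enabling Persistent Volume" (section 3.1.4.6) shows a complete PVC example; a minimal sketch for the claim named above might look as follows, where the storage class and size are illustrative:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ocats-nrf-24.1.3-pvc
  spec:
    storageClassName: standard      # illustrative; use a storage class available in your cluster
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi                # illustrative size

  kubectl apply -f <filename> -n ocnrf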

For more details on Persistent Volume Storage, see Persistent Volume for 5G ATS.

3.3.4.4 Enabling NF FQDN Authentication

Note:

This procedure is applicable only if the NF FQDN Authentication feature is being tested; otherwise, proceed to the "Deploying ATS and Stub in Kubernetes Cluster" section.

You must enable this feature while deploying Service Mesh. For more information on how to enable NF FQDN Authentication feature, see Oracle Communications Cloud Native Core, Network Repository Function User Guide.

However, there is some change in the ATS deployment process, which is as follows:
  1. Use the previously unzipped file "ocats_ocnrf_custom_serviceaccount_24.1.3.yaml" to create a service account. Add the following annotation in the "ocats_ocnrf_custom_serviceaccount_24.1.3.yaml" file where the kind is ServiceAccount.

    "certificate.aspenmesh.io/customFields": '{ "SAN": { "DNS": [ "<NF-FQDN>" ] } }'

    Sample format:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ocats-custom-serviceaccount
      namespace: ocnrf
      annotations:
        "certificate.aspenmesh.io/customFields": '{ "SAN": { "DNS": [ "AMF.d5g.oracle.com" ] } }'

    Note:

    "AMF.d5g.oracle.com" is the NF FQDN that you must provide in the serviceaccount DNS field.
  2. Run the following command to create a service account:

    kubectl apply -f ocats_ocnrf_custom_serviceaccount_24.1.3.yaml

  3. Update the service account name in the ocats-ocnrf-custom-values-24.1.3.yaml file as follows:
    ocats-nrf:
      serviceAccountName: "ocats-custom-serviceaccount"

3.3.5 Deploying ATS and Stub in Kubernetes Cluster

Note:

It is important to ensure that all three components (ATS, Stub, and NRF) are in the same namespace.

ATS and Stub support Helm 3 for deployment.

If the namespace does not exist, run the following command to create one:

kubectl create namespace ocnrf

Using Helm for Deploying ATS:

helm install <release_name> ocats-ocnrf-24.1.3.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:

helm install ocats ocats-ocnrf-24.1.3.tgz --namespace ocnrf -f ocats-ocnrf-custom-values.yaml

Note:

The above helm install command deploys ATS along with the stub servers required for ATS executions: one ATS pod and five stub server pods.

3.3.6 Verifying ATS Deployment

Run the following command to verify ATS deployment.

helm status <release_name>

Once ATS and Stub are deployed, run the following commands to check the pod and service deployment.
Checking Pod Deployment:
kubectl get pod -n ocnrf
Checking Service Deployment:
kubectl get service -n ocnrf
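
Optionally, to wait until all pods in the namespace report Ready before comparing against the figures below (a convenience command, not part of the documented procedure):

kubectl wait --for=condition=Ready pods --all -n ocnrf --timeout=300s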

Figure 3-5 Checking Pod Deployment without Service Mesh

Checking Pod Deployment without Service Mesh

Figure 3-6 Checking Service Deployment without Service Mesh

Checking Service Deployment without Service Mesh

If ATS is deployed with a service mesh sidecar, ensure that both the ATS and Stub pods have two containers in the ready state, shown as "2/2":

Figure 3-7 ATS and Stub Deployed with Service Mesh


ATS and Stub Deployed with Service Mesh

Figure 3-8 ATS and Stub Deployed with Service Mesh


ATS and Stub Deployed with Service Mesh

3.3.7 Post-Installation Steps (if Persistent Volume is Used)

If persistent volume is used, follow the post-installation steps mentioned in the Persistent Volume for 5G ATS section.

3.4 Installing ATS for NSSF

3.4.1 Resource Requirements

Total Number of Resources

The total resource requirements are as follows:

Table 3-12 Total Number of Resources

| Resource | CPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|
| NSSF SUT Total | 30.2 | 22 | 4 |
| cnDBTier Total | 40 | 40 | 20 |
| ATS Total | 5 | 5 | 0 |
| Grand Total NSSF ATS | 75.2 | 67 | 24 |

Resource Details

The details of resources required to install NSSF-ATS are as follows:

Table 3-13 Resource Details

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | Replicas (regular deployment) | Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| NSSF Pods | | | | | | | | |
| ingressgateway | 4 | 4 | 0 | 2 | 1 | 4 | 4 | 0 |
| egressgateway | 4 | 4 | 0 | 2 | 1 | 4 | 4 | 0 |
| nsselection | 4 | 2 | 0 | 2 | 1 | 4 | 2 | 0 |
| nsavailability | 4 | 2 | 0 | 2 | 1 | 4 | 2 | 0 |
| nsconfig | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| nssubscription | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| nrf-client-discovery | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0 |
| nrf-client-management | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| appinfo | 0.2 | 1 | 0 | 2 | 1 | 0.2 | 1 | 0 |
| perfinfo | 0.2 | 0.5 | 0 | 1 | 1 | 0.2 | 0.5 | 0 |
| config-server | 0.2 | 0.5 | 0 | 1 | 1 | 0.2 | 0.5 | 0 |
| NSSF SUT Totals | | | | | | 22.6 | 20 | 0 |
| ATS | | | | | | | | |
| ATS Behave | 2 | 2 | 0 | 0 | 1 | 2 | 2 | 0 |
| ATS AMF Stub (Python) | 2 | 2 | 0 | 0 | 1 | 2 | 2 | 0 |
| ATS NRF Stub (Python) | 1 | 1 | 0 | 0 | 1 | 2 | 2 | 0 |
| ATS Totals | | | | | | 4 | 3 | 0 |
| cnDBTier Pods (minimum of 4 worker nodes required) | | | | | | | | |
| vrt-launcher-dt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-4.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-db-installer.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| cnDBTier Totals | | | | | | 40 | 40 | 20 |

3.4.2 Locating and Downloading ATS and Simulator Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches and Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Network Slice Selection Function <release_number> from Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the NSSF ATS package file.
  10. Unzip the file to get the ocats-nssf directory, which contains all the ATS images. The ocats-nssf directory has the following files:

    Note:

    Prerequisites:
    • The NSSF ATS Helm release name should be "ocats".
    • To run OAuth test cases for NSSF, OAuth secrets need to be generated. For more information, see the "Configuring Secrets to Enable Access Token" section in Oracle Communications Cloud Native Core, Network Slice Selection Function Installation, Upgrade, and Fault Recovery Guide.
    • NSSF needs to point to the NRF and Stub Servers.

    The required changes to the NSSF custom-values.yaml file are described later in this procedure.

    
    ocats-nssf
    ├── ocats-nssf-24.1.1.0.0-mcafee.log
    ├── ocats-nssf-custom-configtemplates-24.1.1.0.0-README.txt  - Contains all the information required for the package
    ├── ocats-nssf-custom-configtemplates-24.1.1.0.0.zip         - Contains the service account, PVC, and custom values files
    ├── ocats-nssf-tools-pkg-24.1.1.0.0-README.txt               - Contains all the information required for the package
    └── ocats-nssf-tools-pkg-24.1.1.0.0.tgz                      - Contains the following images and charts packaged as tar files
  11. Untar the ocats-nssf-tools-pkg-24.1.1.0.0.tgz tar file.
    The structure of the file is as follows:
    
    ocats-nssf-tools-pkg-24.1.1.0.0
    ├── amfstub-24.1.1.tar          - AMF Stub Server Docker image
    ├── amfstub-24.1.1.tar.sha256
    ├── ats_data-24.1.1.tar         - ATS data; untarring it creates the "ocnssf_tests" folder, which contains the ATS feature files
    ├── ats_data-24.1.1.tar.sha256
    ├── ocats-nssf-24.1.1.tar       - NSSF ATS Docker image
    ├── ocats-nssf-24.1.1.tar.sha256
    ├── ocats-nssf-24.1.1.tgz       - ATS Helm charts; untarring it creates the "ocats-nssf" charts folder
    ├── ocats-nssf-24.1.1.tgz.sha256
    └── README.md
  12. Copy the ocats-nssf-tools-pkg-24.1.1.0.0.tgz tar file to the CNE or Kubernetes cluster where you want to deploy ATS.
  13. Along with the above packages, the ocats-nssf-custom-configtemplates-24.1.1.0.0.zip file is available at the same location.

    The readme file ocats-nssf-custom-configtemplates-24.1.1.0.0-README.txt contains information about the content of this zip file.

    Content of ocats-nssf-custom-configtemplates-24.1.1.0.0.zip is as follows:
    
      nssf_ats_pvc_24.1.1.yaml                       - NSSF ATS PVC file
      ocats_nssf_custom_values_24.1.1.yaml           - NSSF ATS custom values file used while installing ATS
      ocats_ocnssf_custom_serviceaccount_24.1.1.yaml - Template to create the ATS service account
    Copy these files to the OCCNE or Kubernetes cluster where you want to deploy ATS.
    
    

3.4.3 Deploying ATS in Kubernetes Cluster

To deploy ATS in Kubernetes Cluster:

  1. Verify checksums of the tarballs mentioned in the file Readme.txt.
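    For example, assuming the .sha256 files are in the standard sha256sum format (an assumption; adjust accordingly if they contain bare digests):

    sha256sum -c ocats-nssf-24.1.1.tar.sha256 ocats-nssf-24.1.1.tgz.sha256 amfstub-24.1.1.tar.sha256 ats_data-24.1.1.tar.sha256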
  2. Run the following command to extract the tar file content, Helm charts, and Docker images of ATS:

    tar -xvzf ocats-nssf-tools-pkg-24.1.1.0.0.tgz

    The output of this command returns the following files:
    
    amfstub-24.1.1.tar
    amfstub-24.1.1.tar.sha256 
    ats_data-24.1.1.tar 
    ats_data-24.1.1.tar.sha256 
    ocats-nssf-24.1.1.tar 
    ocats-nssf-24.1.1.tar.sha256 
    ocats-nssf-24.1.1.tgz 
    ocats-nssf-24.1.1.tgz.sha256 
    Readme.txt
  3. NSSF-ATS and Stub Images Load and Push: Run the following commands in your cluster to load the ocats image and the AMF stub server image:

    Docker Commands:

    docker load -i ocats-nssf-<version>.tar
    docker load -i amfstub-<version>.tar

    Examples:

    docker load -i ocats-nssf-24.1.1.tar
    docker load -i amfstub-24.1.1.tar

    Podman Commands:

    podman load -i ocats-nssf-<version>.tar
    podman load -i amfstub-<version>.tar

    Examples:

    podman load -i ocats-nssf-24.1.1.tar
    podman load -i amfstub-24.1.1.tar
  4. Run the following commands to tag the images and push them to the ATS image registry.
    1. Run the following commands to grep the image:
      docker images | grep ocats-nssf
      docker images | grep amfstub
    2. Copy the image ID from the output of the grep command, then tag the image for your registry and push it.

      Docker Commands:

      docker tag <Image_ID> <your-registry-name/ocats-nssf:<tag>>

      docker push <your-registry-name/ocats-nssf:<tag>>

      docker tag <Image_ID> <your-registry-name/amfstub:<tag>>

      docker push <your-registry-name/amfstub:<tag>>

      Podman Commands:

      podman tag <Image_ID> <your-registry-name/ocats-nssf:<tag>>
      podman push <your-registry-name/ocats-nssf:<tag>>
      podman tag <Image_ID> <your-registry-name/amfstub:<tag>>
      podman push <your-registry-name/amfstub:<tag>>
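
      For example, assuming grep reports the image ID 1a2b3c4d5e6f (illustrative) and the registry is localhost:5000:

      docker tag 1a2b3c4d5e6f localhost:5000/ocats-nssf:24.1.1
      docker push localhost:5000/ocats-nssf:24.1.1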
  5. ATS Helm Charts: Run the following command to extract the ATS Helm charts:
    tar -xvzf ocats-nssf-24.1.1.tgz

    The above command creates "ocats-nssf" helm charts of ATS.

  6. ATS Data: Run the following command to get ATS data, which contains feature files and data:
    tar -xvf ats_data-24.1.1.tar

    The above command creates "ocnssf_tests" ATS feature files data, which needs to copied inside after the ATS installation is complete.

  7. <Optional> Go to the certificates folder inside ocats-nssf and run the following command:
    kubectl create secret generic ocnssf-secret --from-file=certificates/rsa_private_key_pkcs1.pem --from-file=certificates/trust.txt --from-file=certificates/key.txt --from-file=certificates/ocnssf.cer --from-file=certificates/caroot.cer -n ocnssf
  8. ATS Custom Values File Changes: Update the image name and tag in the ocats_nssf_custom_values_24.1.1.yaml file as required.
    1. For this, open the ocats_nssf_custom_values_24.1.1.yaml file
    2. Update the image.repository and image.tag parameters for ocats-nssf, ocats-amf, and ocats-nrf.
    3. Save and close the file after making the updates.
  9. <Optional> To enable static port:

    Note:

    ATS supports static port. By default, this feature is not available.

    In the ocats-nssf/values.yaml file, under the service section, set the value of the staticNodePortEnabled parameter to true and provide a valid nodePort value for staticNodePort.
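
    A minimal sketch of the service section with the static port enabled (the nodePort value 32080 is illustrative and must be an unused NodePort in your cluster; the exact key layout may differ in your chart version):

    service:
      staticNodePortEnabled: true
      staticNodePort: 32080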

  10. ATS Service Account Creation: In the ocats_ocnssf_custom_serviceaccount_24.1.1.yaml file, change the namespace as follows:
    sed -i "s/changeme-ocats/${namespace}/g" ocats_ocnssf_custom_serviceaccount_24.1.1.yaml
  11. Run the following command to apply the ocats_ocnssf_custom_serviceaccount_24.1.1.yaml file:

    kubectl apply -f <serviceaccount.yaml file> -n <namespace_name>

    For example:

    kubectl apply -f ocats_ocnssf_custom_serviceaccount_24.1.1.yaml -n ocnssf

  12. ATS Helm Release Name Update: If the NSSF ATS Helm release name is changed from "ocats" to any other value while installing ATS, the new name must be updated in the NSSF custom-values.yaml file. The following NSSF custom-values.yaml snippet uses the ATS Helm release name "ocats":
    #Static virtual FQDN Config
      staticVirtualFqdns:
        - name: https://abc.test.com
          alternateFqdns:
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
        - name: http://xyz.test.com
          alternateFqdns:
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
      
    nrf-client:
        # This config map is for providing inputs to NRF-Client
        configmapApplicationConfig:
          &configRef
          # Config-map to provide inputs to Nrf-Client
          # primaryNrfApiRoot - Primary NRF Hostname and Port
          # SecondaryNrfApiRoot - Secondary NRF Hostname and Port
          # retryAfterTime - Default downtime(in Duration) of an NRF detected to be unavailable.
          # nrfClientType - The NfType of the NF registering
          # nrfClientSubscribeTypes - the NFType for which the NF wants to subscribe to the NRF.
          # appProfiles - The NfProfile of the NF to be registered with NRF.
          # enableF3 - Support for 29.510 Release 15.3
          # enableF5 - Support for 29.510 Release 15.5
          # renewalTimeBeforeExpiry - Time Period(seconds) before the Subscription Validity time expires.
          # validityTime - The default validity time(days) for subscriptions.
          # enableSubscriptionAutoRenewal - Enable Renewal of Subscriptions automatically.
          # acceptAdditionalAttributes - Enable additionalAttributes as part of 29.510 Release 15.5
          # retryForCongestion - The duration(seconds) after which nrf-client should retry to a NRF server found to be congested.
          profile: |-
            [appcfg]
            primaryNrfApiRoot=ocats-nrf-stubserver.changeme-ocats:8080
            secondaryNrfApiRoot=ocats-nrf-stubserver.changeme-ocats:8080
            nrfScheme=http
    
    
      sbiRouting:
    
        sbiRoutingDefaultScheme: http
        peerConfiguration:
          - id: peer1
            host: ocats-amf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v1"
          - id: peer2
            host: ocats-amf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v2"
          - id: peer3
            host: ocats-nrf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v3"
    In the above snippet, the ATS Helm release name is "ocats". If, for example, the ATS Helm release name is changed from "ocats" to "ocatsnssf" during installation, update the NSSF custom-values.yaml file as in the following snippet:
    #Static virtual FQDN Config
      staticVirtualFqdns:
        - name: https://abc.test.com
          alternateFqdns:
            - target: ocatsnssf-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocatsnssf-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
        - name: http://xyz.test.com
          alternateFqdns:
            - target: ocatsnssf-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocatsnssf-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
      
    nrf-client:
        # This config map is for providing inputs to NRF-Client
        configmapApplicationConfig:
          &configRef
          # Config-map to provide inputs to Nrf-Client
          # primaryNrfApiRoot - Primary NRF Hostname and Port
          # SecondaryNrfApiRoot - Secondary NRF Hostname and Port
          # retryAfterTime - Default downtime(in Duration) of an NRF detected to be unavailable.
          # nrfClientType - The NfType of the NF registering
          # nrfClientSubscribeTypes - the NFType for which the NF wants to subscribe to the NRF.
          # appProfiles - The NfProfile of the NF to be registered with NRF.
          # enableF3 - Support for 29.510 Release 15.3
          # enableF5 - Support for 29.510 Release 15.5
          # renewalTimeBeforeExpiry - Time Period(seconds) before the Subscription Validity time expires.
          # validityTime - The default validity time(days) for subscriptions.
          # enableSubscriptionAutoRenewal - Enable Renewal of Subscriptions automatically.
          # acceptAdditionalAttributes - Enable additionalAttributes as part of 29.510 Release 15.5
          # retryForCongestion - The duration(seconds) after which nrf-client should retry to a NRF server found to be congested.
          profile: |-
            [appcfg]
            primaryNrfApiRoot=ocatsnssf-nrf-stubserver.changeme-ocats:8080
            secondaryNrfApiRoot=ocatsnssf-nrf-stubserver.changeme-ocats:8080
            nrfScheme=http
    
    
      sbiRouting:
    
        sbiRoutingDefaultScheme: http
        peerConfiguration:
          - id: peer1
            host: ocatsnssf-amf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v1"
          - id: peer2
            host: ocatsnssf-amf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v2"
          - id: peer3
            host: ocatsnssf-nrf-stubserver.changeme-ocats
            port: 8080
            apiPrefix: "/"
            healthApiPath: "/health/v3"
  13. Pointing NSSF to Stub Servers: Follow this step to point NSSF to the NRF-Stubserver and AMF-Stubserver in the NSSF custom values file:
    sed -i "s/changeme-ocats/${namespace}/g" $NSSF_CUSTOM_DEPLOY_FILE
    For example:
    sed -i "s/changeme-ocats/${namespace}/g" ocnssf_custom_values_24.1.1.yaml
    The NSSF custom values snippet is as follows:
     nrf-client:
        # This config map is for providing inputs to NRF-Client
        configmapApplicationConfig:
          &configRef
          # primaryNrfApiRoot - Primary NRF Hostname and Port
          # SecondaryNrfApiRoot - Secondary NRF Hostname and Port
          profile: |-
            [appcfg]
            primaryNrfApiRoot=ocats-nrf-stubserver.changeme-ocats:8080
            secondaryNrfApiRoot=ocats-nrf-stubserver.changeme-ocats:8080
    
      staticVirtualFqdns:
        - name: https://abc.test.com
          alternateFqdns:
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
        - name: http://xyz.test.com
          alternateFqdns:
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: ocats-amf-stubserver.changeme-ocats
              port: 8080
              priority: 20
  14. Deploy ATS as shown below. The NSSF ATS Helm release name should be "ocats".

    helm install <release_name> <charts> -n <namespace_name> -f <custom_values file> --version <helm-chart-version>

    For example:

    helm install ocats ocats-nssf -n ocnssf -f ocats_nssf_custom_values_24.1.1.yaml --version 24.1.1

    Running the above command creates the following three pods:
    • ocats-amf-stubserver
    • ocats-nrf-stubserver
    • ocats-nssf

  15. Run the following command to verify the ATS deployment:

    helm status <release_name>

    The following screenshot is an example of a successful ATS deployment, where STATUS: DEPLOYED indicates success.


    Verify ATS deployment

3.5 Installing ATS for Policy

The Installing ATS for Policy procedure consists of the following two steps:

  1. Locating and downloading the ATS package
  2. Deploying ATS and stub pods in Kubernetes cluster

This includes installation of nine stubs (nf1stub, nf11stub, nf12stub, nf2stub, nf21stub, nf22stub, nf3stub, nf31stub, nf32stub), ocamf stub, ocdns-bind stub, ocldap-stub, and Policy ATS in the namespace where CNC Policy is deployed.

3.5.1 Resource Requirements

This section describes the ATS resource requirements for CNC Policy.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs and memory for the following:
  • PCF SUT
  • cnDBTier
  • ATS

Table 3-14 PCF - Total Number of Resources

| Resource Name | Non-ASM CPU | Non-ASM Memory (GB) | ASM CPU | ASM Memory (GB) |
|---|---|---|---|---|
| PCF SUT Total | 219 | 197 | 293 | 244 |
| ATS Total | 26 | 28 | 54 | 42 |
| cnDBTier Total | 107.1 | 175.2 | 137.1 | 190.2 |
| Grand Total PCF ATS | 352.1 | 400.2 | 484.1 | 476.2 |

PCF Pods Resource Requirements Details

This section describes the resource requirements, which are needed to deploy Policy ATS successfully.

Table 3-15 PCF Pods Resource Requirements Details

| Policy Microservices | Max CPU | Memory (GB) | Max Replica | Non-ASM Total CPU | Non-ASM Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| oc-app-info | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| oc-bulwark | 8 | 6 | 2 | 16 | 12 | 20 | 14 | 2 | 1 |
| oc-diam-connector | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| oc-diam-gateway | 4 | 2 | 1 | 4 | 2 | 6 | 3 | 2 | 1 |
| alternate-route | 2 | 4 | 1 | 2 | 4 | 4 | 5 | 2 | 1 |
| oc-config-server | 4 | 2 | 1 | 4 | 2 | 6 | 3 | 2 | 1 |
| ocegress_gateway | 4 | 6 | 1 | 4 | 6 | 6 | 7 | 2 | 1 |
| ocingress_gateway | 5 | 6 | 1 | 5 | 6 | 7 | 7 | 2 | 1 |
| nrf-client-disc | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| nrf-client-mngt | 1 | 1 | 2 | 2 | 2 | 6 | 4 | 2 | 1 |
| oc-audit | 2 | 4 | 1 | 2 | 4 | 4 | 5 | 2 | 1 |
| oc-config-mgmt | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| oc-ldap-gateway | 4 | 2 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-policy-ds | 7 | 8 | 2 | 14 | 16 | 18 | 18 | 2 | 1 |
| oc-pre | 4 | 4 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-query | 2 | 1 | 2 | 4 | 2 | 8 | 4 | 2 | 1 |
| oc-soap-connector | 4 | 4 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-pcf-am | 8 | 8 | 2 | 16 | 16 | 20 | 18 | 2 | 1 |
| oc-pcf-sm | 7 | 10 | 2 | 14 | 20 | 18 | 22 | 2 | 1 |
| oc-pcf-ue | 8 | 6 | 2 | 16 | 12 | 20 | 14 | 2 | 1 |
| oc-pcrf-core | 8 | 8 | 2 | 16 | 16 | 0 | 18 | 2 | 1 |
| oc-perf-info | 2 | 2 | 2 | 4 | 4 | 8 | 6 | 2 | 1 |
| oc-binding | 6 | 8 | 1 | 6 | 8 | 8 | 9 | 2 | 1 |
| oc-udr-connector | 6 | 4 | 2 | 12 | 8 | 16 | 10 | 2 | 1 |
| oc-chf-connector | 6 | 4 | 2 | 12 | 8 | 16 | 10 | 2 | 1 |
| usage-mon | 5 | 4 | 2 | 10 | 8 | 14 | 10 | 2 | 1 |
| nwdaf-agent | 2 | 1 | 1 | 2 | 1 | 4 | 2 | 2 | 1 |
| notifier | 2 | 1 | 2 | 4 | 2 | 8 | 4 | 2 | 1 |
| Policy Totals | | | | 219 | 197 | 293 | 244 | | |

ATS Resource Requirements details for Policy

This section describes the ATS resource requirements, which are needed to deploy Policy ATS successfully.

Table 3-16 ATS Resource Requirements Details

| ATS Microservices | Max CPU | Max Memory (GB) | Max Replica | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| ocstub1-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub2-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub3-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub11-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub12-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub21-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub22-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub31-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub32-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocamf-stub | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ocats-policy | 4 | 6 | 1 | 4 | 6 | 6 | 7 | 2 | 1 |
| ocdns-bind | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| oc-ldap-org1 | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ocdiam-sim | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ATS Totals | | | | 26 | 28 | 54 | 42 | | |

cnDBTier Resource Requirements Details for Policy ATS

This section describes the cnDBTier resource requirements, which are needed to deploy Policy ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes are required.

Table 3-17 cnDBTier Resource Requirements Details

| cnDBTier Microservices | Min CPU | Min Memory (GB) | Min Replica | Total CPU | Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| db_monitor_svc | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| db_replication_svc | 2 | 12 | 1 | 2 | 12 | 4 | 13 | 2 | 1 |
| db_backup_manager_svc | 0.1 | 0.2 | 1 | 0.1 | 0.2 | 2.1 | 1.2 | 2 | 1 |
| ndbappmysqld | 8 | 10 | 4 | 32 | 40 | 40 | 44 | 2 | 1 |
| ndbmgmd | 4 | 10 | 2 | 8 | 20 | 12 | 22 | 2 | 1 |
| ndbmtd | 10 | 18 | 4 | 40 | 72 | 48 | 76 | 2 | 1 |
| ndbmysqld | 8 | 10 | 2 | 16 | 20 | 20 | 22 | 2 | 1 |
| db_infra_monitor_svc | 8 | 10 | 1 | 8 | 10 | 8 | 10 | | |
| DB Tier Total | | | | 107.1 | 175.2 | 137.1 | 190.2 | | |

Note:

The requirements shown in the above table for cnDBTier are the default numbers and must be changed as per the deployment requirements.

3.5.2 Downloading the ATS Package

This section provides information on how to locate and download the Policy ATS package file from My Oracle Support (MOS).

Locating and Downloading Policy ATS Package

To locate and download the ATS package from MOS, perform the following steps:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Converged Policy <release_number> using the drop-down menu of the Release field.
  6. Click Search. The list of Patch Advanced Search Results appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the CNC Policy ATS package file.
  10. Untar the gzip file ocats-policy-tools-24.1.0.0.0.tgz to access the following files:
    
    ocats-policy-pkg-24.1.0.0.0.tgz
    ocdns-pkg-24.1.0.0.0.tgz
    ocamf-pkg-24.1.0.0.0.tgz
    oc-ldap-org1-pkg-24.1.0.0.0.tgz
    ocstub-pkg-24.1.0.0.0.tgz
    ocdiam-pkg-24.1.0.0.0.tgz

    The contents included in each of these files are as follows:

    ocats-policy-pkg-24.1.0.0.0.tgz
    ├── ocats-policy-24.1.0.tgz (Helm Charts)
    ├── ocats-policy-images-24.1.0.tar (Docker Images)
    └── ocats-policy-data-24.1.0.tgz (Policy ATS and Jenkins job Data)

    ocstub-pkg-24.1.0.0.0.tgz
    ├── ocstub-py-24.1.0.tgz (Helm Charts)
    └── ocstub-py-image-24.1.0.tar (Docker Images)

    ocdns-pkg-24.1.0.0.0.tgz
    ├── ocdns-bind-24.1.0.tgz (Helm Charts)
    └── ocdns-bind-image-24.1.0.tar (Docker Images)

    ocamf-pkg-24.1.0.0.0.tgz
    ├── ocamf-stub-24.1.0.tgz (Helm Charts)
    └── ocamf-stub-image-24.1.0.tar (Docker Images)

    oc-ldap-org1-pkg-24.1.0.0.0.tgz
    ├── oc-ldap-org1-24.1.0.tgz (Helm Charts)
    └── oc-ldap-org1-image-24.1.0.tar (Docker Images)

    ocdiam-pkg-24.1.0.0.0.tgz
    ├── ocdiam-sim-24.1.0.tgz (Helm Charts)
    └── ocdiam-sim-image-24.1.0.tar (Docker Images)
  11. Copy the tar file from the downloaded package to the OCCNE, OCI, or Kubernetes cluster where you want to deploy ATS.

3.5.3 Pushing the Images to Customer Docker Registry

Preparing to deploy ATS and Stub Pods in Kubernetes Cluster

To deploy ATS and stub pods in Kubernetes Cluster, perform the following steps:

  1. Run the following command to extract the tar file content:

    tar -zxvf ocats-policy-tools-24.1.0.0.0.tgz

    The following is the output of this command:
    ocats-policy-pkg-24.1.0.0.0.tgz
    ocstub-pkg-24.1.0.0.0.tgz
    ocdns-pkg-24.1.0.0.0.tgz
    ocamf-pkg-24.1.0.0.0.tgz
    oc-ldap-org1-pkg-24.1.0.0.0.tgz
    ocdiam-pkg-24.1.0.0.0.tgz
  2. Run the following command to extract the helm charts and docker images of ATS:

    tar -zxvf ocats-policy-pkg-24.1.0.0.0.tgz

    The following is the output:
    ocats-policy-24.1.0.tgz
    ocats-policy-images-24.1.0.tar
    ocats-policy-data-24.1.0.tgz
  3. Run the following command to load the ATS docker image:

    docker load --input ocats-policy-images-24.1.0.tar

  4. Run the following commands to tag and push the ATS images:
    
    docker tag ocats-policy:24.1.0 <registry>/ocats-policy:24.1.0
    docker push <registry>/ocats-policy:24.1.0

    Example:

    
    docker tag ocats-policy:24.1.0 localhost:5000/ocats-policy:24.1.0
    docker push localhost:5000/ocats-policy:24.1.0

    Note:

    If you are using Podman instead of Docker, replace docker with podman in all the docker commands given in this document.
  5. Run the following command to untar the Helm charts:
    tar -zxvf ocats-policy-24.1.0.tgz

    Note:

    The atsFeatures section is newly introduced in values.yaml to help the Engineering team control feature deliveries over the releases.
    Do not update any of the following flags without the Engineering team's permission.
    atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
      testCaseMapping: true               # To display test cases on the GUI along with features
      logging: true                       # To enable the feature to collect application logs in case of failure
      lightWeightPerformance: false       # The feature is not implemented yet
      executionWithTagging: true          # To enable feature/scenario execution with a tag
      scenarioSelection: false            # The feature is not implemented yet
      parallelTestCaseExecution: true     # To run ATS features in parallel
      parallelFrameworkChangesIntegrated: true # To run ATS features in parallel
      mergedExecution: false              # To execute the ATS Regression and NewFeatures pipelines together in a merged manner
      individualStageGroupSelection: false  # The feature is not implemented yet
      parameterization: true              # When set to false, the Configuration_Type parameter on the GUI is not available
      atsApi: true                        # To trigger ATS using the ATS API
      healthcheck: true                   # To enable or disable ATS Health Check
      atsGuiTLSEnabled: false             # To run the ATS GUI in HTTPS mode
      atsCommunicationTLSEnabled: false   # If set to true, ATS gets the necessary variables to communicate with SUT, Stub, or other NFs with TLS enabled. It is not required in an ASM environment.
  6. Update the registry name, image name, and tag in the ocats-policy/values.yaml file as required by editing the image.repository and image.tag parameters.
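
    A minimal sketch of the relevant parameters, assuming the localhost:5000 registry from the earlier tag-and-push example (the exact key layout may differ in your chart version):

    image:
      repository: localhost:5000/ocats-policy
      tag: 24.1.0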

3.5.4 Configuring ATS

3.5.4.1 Enabling Static Port
To enable static port, in the ocats-policy/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter.
service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: false
      staticNodePort: ""
3.5.4.2 Enabling Static API Node Port

To enable static API node port, in the ocats-policy/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter under the api port to true and enter a valid nodePort value for the staticNodePort parameter.

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    api:
      port: "5001"
      staticNodePortEnabled: false
      staticNodePort: ""
3.5.4.3 Service Account Requirements
To run Policy-ATS, use the following rules to create a service account:
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
To run oc-ldap ATS, use the following rules to create a service account:
PolicyRule:
  Resources                   Non-Resource URLs  Resource Names      Verbs
  ---------                   -----------------  --------------      -----
  deployments.apps            []                 [rc1-oc-ldap-org1]  [get list watch create update patch delete]
  deployments.extensions      []                 [rc1-oc-ldap-org1]  [get list watch create update patch delete]
  podsecuritypolicies.policy  []                 [1org1-rc1]         [use]

Note:

For information about creating service account, see the Oracle Communications Cloud Native Core, Converged Policy Installation, Upgrade, and Fault Recovery Guide available on MOS.
3.5.4.4 Enabling Aspen Service Mesh

This section provides information on how to enable Aspen service mesh while deploying ATS for CNC Policy. The configurations mentioned in this section are optional and should be performed only if ASM is required.

To enable service mesh for CNC Policy ATS, perform the following steps:

  1. In the service section of the values.yaml file, the serviceMeshCheck parameter is set to false (default configuration). To enable service mesh, set the value for serviceMeshCheck to true. The following is a snippet of the service section in the yaml file:
    service:
      customExtension:
        labels: {}
        annotations: {}
      type: LoadBalancer
      ports:
        https:
          port: "8443"
          staticNodePortEnabled: false
          staticNodePort: ""
        http:
          port: "8080"
          staticNodePortEnabled: false
          staticNodePort: ""
        api:
          port: "5001"
          staticNodePortEnabled: false
          staticNodePort: ""
      serviceMeshCheck: true
  2. If the ASM is not enabled on the global level for the namespace, run the following command to enable it before deploying the ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocpcf istio-injection=enabled
  3. Uncomment and add the following annotation under the customExtension section of the global section in values.yaml file and deploy the ATS Pods:
    customExtension:
        allResources:
          labels: {}
          annotations: {
          #Enable this section for service-mesh based installation
             traffic.sidecar.istio.io/excludeInboundPorts: "9000",
             traffic.sidecar.istio.io/excludeOutboundPorts: "9000"
            }

    After making this update in the values.yaml file, make sure that all the ATS and stub pods come up with istio container.

  4. For the ServerHeader feature, perform the following configurations under envoyFilters for the NF stubs (nf1stub, nf2stub, nf3stub, and nf32stub) in the occnp-servicemesh-config-custom-values-24.1.0.yaml file:
    envoyFilters:
      - name: serverheaderfilter-nf1stub
        labelselector: "app: nf1stub"
        applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH    
      - name: serverheaderfilter-nf2stub
        labelselector: "app: nf2stub"
        applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH   
      - name: serverheaderfilter-nf3stub
        labelselector: "app: nf3stub"
        applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf32stub
        labelselector: "app: nf32stub"
        applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH 
  5. Perform helm upgrade on the occnp-servicemesh-config release using the modified occnp-servicemesh-config-custom-values-24.1.0.yaml.
    helm upgrade <helm_release_name_for_servicemesh> -n <namespace> <servicemesh_charts> -f <servicemesh-custom.yaml>
    Example:
    helm upgrade occnp-servicemesh-config occnp-servicemesh-config-24.1.0.tgz -n <namespace> -f occnp-servicemesh-config-custom-values-24.1.0.yaml
  6. Configure DNS for Alternate Route service. For more information, see Post-Installation Steps.
3.5.4.5 Enabling Persistent Volume

Note:

The steps provided in this section are optional and required only if Persistent Volume needs to be enabled.

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations. With this enhancement, you can decide whether to use persistent volume based on your resource requirements. By default, the persistent volume feature is not enabled.

To enable persistent storage, perform the following steps:
  1. Create a PVC and associate it with the ATS pod.
    1. Edit the pvc.yaml file.
    2. Set PVClaimName to the PVC created in Step 1.
      
      deployment:
        customExtension:
          labels: {}
          annotations: {}
        PVEnabled: true
        PVClaimName: "ocpcf-pvc-24.1.0"
        
    3. Set storageClassName to your storage class name.
    4. Set storage to the size of the persistent volume.
      Sample PVC:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: policy-pvc-24.1.0
        annotations:
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

      Note:

      It is recommended to suffix the pvc name with the release version to avoid confusion during the subsequent releases. For example, policy-pvc-24.1.0.

    5. Run the following command to create the PVC.
      kubectl apply -f <filename> -n <namespace>
      For example:
      kubectl apply -f PersistentVolumeClaim.yaml -n ocpcf
    6. Once the PVC is created, run the following command to verify that it is bound to the persistent volume and is available.
      kubectl get pvc -n <namespace used for pvc creation>
      Sample output:
      [cloud-user@platform-bastion-1 ocats-policy]$ kubectl get pvc -n ocpcf
      NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      policy-pvc-24.1.0   Bound    pvc-65484045-3805-4064-9fc3-f9eeeaccc8b8   1Gi        RWO            standard       11s

      Note:

      Do not proceed to the next step if there is an issue with the PV creation; contact your administrator to get the PV created.

  2. Enable PVC.
    1. Set the PVEnabled flag to true.
    2. Set PVClaimName to the PVC created in Step 1.
      PVEnabled: true
      PVClaimName: "policy-pvc-24.1.0"

    Note:

    Make sure that ATS is deployed before proceeding to the further steps.

  3. Copy the <nf_main_folder> and <jenkins jobs> folders from the tar file to the ATS pod and restart the pod.
    1. Extract the tar file.
      tar -xvf ocats-policy-data-24.1.0.tgz
    2. Run the following commands to copy the desired folder.
      kubectl cp ocats-policy-data-24.1.0/ocpcf_tests <namespace>/<pod-name>:/var/lib/jenkins/
      kubectl cp ocats-policy-data-24.1.0/jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins/
    3. Restart the pod.
      kubectl delete po <pod-name> -n <namespace>
  4. Once the pod is up and running, log in to the Jenkins console and configure the Discard old Builds option to set the number of Jenkins builds to retain in the persistent volume.

    Figure 3-9 Discarding Old Builds


    Discarding Old Builds

    Note:

    If Discard old Builds is not configured, the persistent volume can fill up when there is a huge number of builds.

For more details on Persistent Volume Storage, see Persistent Volume for 5G ATS.

3.5.4.6 Enabling Health Check

This section describes how to enable Health Check for ATS.

To enable Health Check, in the ocats-policy/values.yaml file, set the value of the healthcheck parameter to true and enter valid values to select the environment.

Webscale: false
healthchecksecretname: "healthchecksecret"
occnehostip: ""
occnehostusername: ""
occnehostpassword: ""
envtype: ""
webscalejumpip: ""
webscalejumpusername: ""
webscalejumppassword: ""
webscaleprojectname: ""
webscalelabserverFQDN: ""
webscalelabserverport: ""
webscalelabserverusername: ""
webscalelabserverpassword: ""
To select the OCCNE environment, update the values of the following parameters:
  • Webscale - Update the value as false
  • envtype - T0NDTkU= (that is, envtype=$(echo -n 'OCCNE' | base64))
  • occnehostip - OCCNE Host IP address
  • occnehostusername - OCCNE Host Username
  • occnehostpassword - OCCNE Host Password
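
All of these values are Base64 encoded. For example, the following shell commands (host and username values are illustrative) produce encodings like those in the sample configuration below:

envtype=$(echo -n 'OCCNE' | base64)                # T0NDTkU=
occnehostip=$(echo -n '10.17.219.65' | base64)     # MTAuMTcuMjE5LjY1
occnehostusername=$(echo -n 'username' | base64)   # dXNlcm5hbWU=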
The following is the sample configuration for OCCNE environment:
atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check.
 
Webscale: false
healthchecksecretname: "healthchecksecret"
occnehostip: "MTAuMTcuMjE5LjY1"
occnehostusername: "dXNlcm5hbWU="
occnehostpassword: "KioqKg=="
envtype: "T0NDTkU="
To select the WEBSCALE environment, update the values of the following two parameters:
  • Webscale - Update the value as true
  • envtype - V0VCU0NBTEU= (that is, envtype=$(echo -n 'WEBSCALE' | base64))

After the configurations are done, Base64 encode the following parameters and provide the values as shown in the following snippet:

The following is the sample configuration for WEBSCALE environment:
atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check.
 
 
Webscale: true
healthchecksecretname: "healthchecksecret"
occnehostip: ""
occnehostusername: ""
occnehostpassword: ""
envtype: "V0VCU0NBTEU="
webscalejumpip: "MTAuNzAuMTE3LjQy"
webscalejumpusername: "dXNlcm5hbWU="
webscalejumppassword: "KioqKg=="
webscaleprojectname: "KioqKg=="
webscalelabserverFQDN: "KioqKg=="
webscalelabserverport: "KioqKg=="
webscalelabserverusername: "KioqKg=="
webscalelabserverpassword: "KioqKg=="

Note:

Once ATS is deployed with the Health Check feature enabled or disabled, the setting cannot be changed. To change the configuration, you must reinstall ATS.

3.5.5 Deploying ATS and Pods

3.5.5.1 Deploying ATS in Kubernetes Cluster

To deploy ATS, perform the following steps:

  1. Run the following command using the updated helm charts.

    Note:

    Ensure that all the components (ATS, go-Stub, dns-bind, ocamf, and CNC Policy) are deployed in the same namespace.
    Using Helm 3:
    helm install <release_name> ocats-policy-24.1.0.tgz --namespace <namespace_name> -f <values-yaml-file>

    Example: helm install ocats ocats-policy-24.1.0.tgz --namespace ocpcf -f ocats-policy/values.yaml

  2. Run the following command to verify ATS deployment:

    helm ls -n ocpcf

    The sample output is as follows:
    NAME    REVISION        UPDATED                     STATUS        CHART              APP VERSION     NAMESPACE
    ocats               1         Mon November 6 14:56:11 2023      DEPLOYED    ocats-policy-24.1.0      1.0             ocpcf
    
    The status appears as DEPLOYED after the deployment is successful.
3.5.5.2 Deploying Stub Pod in Kubernetes Cluster

To deploy Stub Pod in Kubernetes cluster, perform the following steps:

  1. Go to the ocats-policy-tools-24.1.0.0.0 folder and run the following command to extract the ocstub tar file content.

    tar -zxvf ocstub-pkg-24.1.0.0.0.tgz

    The output of this command is:
    • ocstub-py-24.1.0.tgz
    • ocstub-py-image-24.1.0.tar

    Note:

    To deploy additional stubs required for session retry feature validation:
    • nf11stub, nf12stub → Alternate FQDN for nf1stub
    • nf21stub, nf22stub → Alternate FQDN for nf2stub
    • nf31stub, nf32stub → Alternate FQDN for nf3stub
  2. Run the following command in your cluster to load the STUB image.

    docker load --input ocstub-py-image-24.1.0.tar

  3. Run the following commands to tag and push the STUB image.
    docker tag ocstub-py:24.1.0 <registry>/ocstub-py:24.1.0
    docker push <registry>/ocstub-py:24.1.0
  4. Run the following command to untar the helm charts, ocstub-py-24.1.0.tgz.

    tar -zxvf ocstub-py-24.1.0.tgz

  5. Update the registry name, image name and tag (if required) in the ocstub-py/values.yaml file as required. For this, open the values.yaml file and update the image.repository and image.tag parameters.
  6. If required, change the apiVersion to apps/v1 in the ocstub-py/templates/deployment.yaml file as follows:

    apiVersion: apps/v1

  7. Deploy Stub.

    Using Helm3:

    helm install <release_name> ocstub-py --set env.NF=<NF> --set env.LOG_LEVEL=<DEBUG/INFO> --set service.name=<service_name> --set service.appendReleaseName=false --namespace=<namespace_name> -f <values-yaml-file>

    Example:

    
    helm install nf1stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf2stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf2stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf3stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf3stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf11stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf11stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf12stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf12stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf21stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf21stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf22stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf22stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf31stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf31stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install nf32stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf32stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
  8. Run the following command to verify stub deployment.

    helm ls -n ocpcf

    The sample output is as follows:
    NAME         REVISION            UPDATED                         STATUS          CHART                        APP VERSION     NAMESPACE
    nf11stub                1               Tue March 14 10:05:59 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf12stub                1               Tue March 14 10:06:00 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf1stub                 1               Tue March 14 10:05:57 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf21stub                1               Tue March 14 10:06:01 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf22stub                1               Tue March 14 10:06:02 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf2stub                 1               Tue March 14 10:05:58 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf31stub                1               Tue March 14 10:06:03 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf32stub                1               Tue March 14 10:06:11 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    nf3stub                 1               Tue March 14 10:05:59 2024        DEPLOYED        ocstub-py-24.1.0         1.0             ocpcf
    
    The status changes to DEPLOYED after the deployment is successful.

    Similarly, install all other stubs.

  9. Run the following command to check the status of the stub pods:

    kubectl get pods -n ocpcf

    The sample output is as follows:
    NAME                                         READY   STATUS    RESTARTS   AGE
    nf11stub-ocstub-py-66449ddb94-qg2j9                    1/1     Running   0          19h
    nf12stub-ocstub-py-6b8575487-l8pxv                     1/1     Running   0          19h
    nf1stub-ocstub-py-5ff485954c-prc2x                     1/1     Running   0          19h
    nf21stub-ocstub-py-56cf5b77fc-x8wkr                    1/1     Running   0          19h
    nf22stub-ocstub-py-547dfdf476-4j2sn                    1/1     Running   0          19h
    nf2stub-ocstub-py-6fb6f786d6-bc9fr                     1/1     Running   0          19h
    nf31stub-ocstub-py-c6c6d5584-5m48z                     1/1     Running   0          19h
    nf32stub-ocstub-py-848dfc7757-q797z                    1/1     Running   0          19h
    nf3stub-ocstub-py-6cb769ccd9-4fv9b                     1/1     Running   0          19h
    
A sample output of Policy namespace with Policy and ATS after installation is as follows:
NAME                                         READY   STATUS    RESTARTS   AGE
ocpcf-appinfo-6c74cccd47-zsbb2                         1/1     Running   0          155m
ocpcf-oc-binding-77fbb9b79c-jv7kd                      1/1     Running   0          155m
ocpcf-oc-diam-connector-6c6fd868bd-4zfrn               1/1     Running   0          155m
ocpcf-oc-diam-gateway-0                                1/1     Running   0          147m
ocpcf-oc-oc-stub-595bb858d4-smzj8                      1/1     Running   0          147m
ocpcf-ocats-ocats-policy-667d8cf78-b8bc8               1/1     Running   0          147m
ocpcf-occnp-alternate-route-75455c858d-f6qs8           1/1     Running   0          146m
ocpcf-occnp-alternate-route-75455c858d-sqvlg           1/1     Running   0          147m
ocpcf-occnp-chf-connector-6b8b8bfcd6-jjch6             1/1     Running   0          155m
ocpcf-occnp-config-server-77bd99f96-mpscn              1/1     Running   0          155m
ocpcf-occnp-egress-gateway-59c4b784cc-6dx4w            1/1     Running   0          16m
ocpcf-occnp-ingress-gateway-75c47c57bc-pljtc           1/1     Running   0          39m
ocpcf-occnp-nrf-client-nfdiscovery-74b854956b-s6blq    1/1     Running   0          155m
ocpcf-occnp-nrf-client-nfmanagement-76cb55b8b8-tdjkj   1/1     Running   0          49m
ocpcf-occnp-udr-connector-75ffb9db9b-7xz9v             1/1     Running   0          155m
ocpcf-ocdns-ocdns-bind-57fbcd95dc-h4dtl                1/1     Running   0          147m
ocpcf-ocpm-audit-service-5cc46665c4-j6vhh              1/1     Running   0          155m
ocpcf-ocpm-cm-service-7795bb4c6c-446rb                 1/1     Running   0          155m
ocpcf-ocpm-policyds-75cbc9fc9d-7lbl5                   1/1     Running   0          155m
ocpcf-ocpm-pre-59b94d979-jzkv4                         1/1     Running   0          155m
ocpcf-ocpm-pre-test-84d9c89dd8-fqlpg                   1/1     Running   0          155m
ocpcf-ocpm-queryservice-94895bf88-bhwcc                1/1     Running   0          155m
ocpcf-pcf-amservice-56cdbb75c9-ph7tt                   1/1     Running   0          155m
ocpcf-pcf-smservice-64b899d766-jfhjm                   1/1     Running   0          155m
ocpcf-pcf-ueservice-7c6bd7ccc9-mrnxn                   1/1     Running   0          155m
ocpcf-pcrf-core-7594dbb7f8-z95vt                       1/1     Running   0          155m
ocpcf-performance-689dd556b-7vblc                      1/1     Running   0          155m
ocpcfnf11stub-5bb6b4f95d-v6fbb                         1/1     Running   0          147m
ocpcfnf12stub-59fb974f5d-2qr42                         1/1     Running   0          147m
ocpcfnf1stub-5bdf545fcb-zgbjb                          1/1     Running   0          147m
ocpcfnf21stub-ff6db9d86-5hvj6                          1/1     Running   0          147m
ocpcfnf22stub-794456fd66-sxq8q                         1/1     Running   0          147m
ocpcfnf2stub-656755dc46-hnr8m                          1/1     Running   0          147m
ocpcfnf31stub-68c6596b6-jdsgj                          1/1     Running   0          147m
ocpcfnf32stub-f49b57d86-rklc8                          1/1     Running   0          147m
ocpcfnf3stub-6c4c697648-lj6q7                          1/1     Running   0          147m
ocpcf-ocpm-ldap-gateway-5fd489b8fd-52dqn               1/1     Running   0          147m
3.5.5.3 Deploying DNS Stub in Kubernetes Cluster

Note:

Ensure that sufficient resource requests and limits are configured for the DNS Stub. Set the resource request and limit values in the resources section of the values.yaml file as follows:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 1000m
  #  memory: 1024Mi
  # requests:
  #  cpu: 500m
  #  memory: 500Mi
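
For example, to apply the commented defaults shown above, replace the empty braces after resources: with explicit values:

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 500Mi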
To deploy DNS stub in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-24.1.0.0.0 folder and run the following command:

    tar -zxvf ocdns-pkg-24.1.0.0.0.tgz

    The output of this command is:

    [cloud-user@platform-bastion-1 ocdns-pkg-24.1.0.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocdns-bind-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocdns-bind-24.1.0.tgz
  2. Run the following command in your cluster to load the DNS Stub image:

    docker load --input ocdns-bind-image-24.1.0.tar

  3. Run the following commands to tag and push the DNS stub to the registry:
    docker tag ocdns-bind:24.1.0 localhost:5000/ocdns-bind:24.1.0
    docker push localhost:5000/ocdns-bind:24.1.0
  4. Run the following command to untar the helm charts (ocdns-bind-24.1.0.tgz):

    tar -zxvf ocdns-bind-24.1.0.tgz

  5. Update the registry name, image name and tag (if required) in the ocdns-bind/values.yaml file as required. Open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to install DNS Stub:
    [cloud-user@platform-bastion-1 ocdns-bind]$ helm install ocdns ocdns-bind-24.1.0.tgz --namespace ocpcf -f ocdns-bind/values.yaml
  7. Run the following command to capture the cluster name of the PCF deployment, the namespace where the NF stubs are deployed, and the cluster IP of the DNS Stub:
    kubectl get svc -n ocpcf | grep dns
    NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                AGE
    ocdns     ClusterIP      10.233.11.45    <none>          53/UDP,6236/TCP        19h

    Note:

    This information is required to configure DNS stub.

    Figure 3-10 Cluster Name

    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep clusterName
        clusterName: platform
3.5.5.4 Deploying AMF Stub in Kubernetes Cluster
To deploy OCAMF stub in Kubernetes cluster:
  1. Go to the ocats-policy-tools-24.1.0.0.0 folder and run the following command:

    tar -zxvf ocamf-pkg-24.1.0.0.0.tgz

    The output of this command is:
    [cloud-user@platform-bastion-1 ocamf-pkg-24.1.0.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocamf-stub-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocamf-stub-24.1.0.tgz
  2. Run the following command in your cluster to load the AMF Stub image:

    docker load --input ocamf-stub-image-24.1.0.tar

  3. Run the following command to tag and push the AMF stub to the registry:
    docker tag ocamf-stub:24.1.0 localhost:5000/ocamf-stub:24.1.0
    docker push localhost:5000/ocamf-stub:24.1.0
  4. Run the following command to untar the helm charts (ocamf-stub-24.1.0.tgz):

    tar -zxvf ocamf-stub-24.1.0.tgz

  5. Update the registry name, image name, and tag (if required) in the ocamf-stub/values.yaml file.
  6. Run the following command to install AMF Stub:

    Using Helm3:

    
    [cloud-user@platform-bastion-1 ocamf-stub]$ helm3 install ocamf2 ocamf-stub-24.1.0.tgz --set service.name=ocamf2 --namespace ocpcf -f ocamf-stub/values.yaml

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:
ocamf2-ocamf-ocamf-stub-79c8fbd6f7-qp5cl                1/1     Running   0          5h47m
3.5.5.5 Deploying LDAP Stub in Kubernetes Cluster
To deploy oc-ldap stub in the Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-24.1.0.0.0 folder and run the following command:

    tar -zxvf oc-ldap-org1-pkg-24.1.0.0.0.tgz

    The following is the output:
    [cloud-user@platform-bastion-1 oc-ldap-org1-pkg-24.1.0.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 oc-ldap-org1-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 oc-ldap-org1-24.1.0.tgz
  2. Run the following command in your cluster to load the LDAP Stub image:

    docker load --input oc-ldap-org1-image-24.1.0.tar

  3. Run the following commands to tag and push the LDAP stub image to the registry:
    docker tag oc-ldap-org1:24.1.0 localhost:5000/oc-ldap-org1:24.1.0
    docker push localhost:5000/oc-ldap-org1:24.1.0
  4. Run the following command to untar the helm charts (oc-ldap-org1-24.1.0.tgz):

    tar -zxvf oc-ldap-org1-24.1.0.tgz

  5. Update the registry name, image name, and tag (if required) in the oc-ldap-org1/values.yaml file.
  6. Run the following command to install LDAP Stub:

    Using Helm3:

    
    [cloud-user@platform-bastion-1 oc-ldap-org1]$ helm upgrade --install --namespace ocpcf --set image.repository=localhost:5000/occnp/oc-ldap-org1 oc-ldap-org1 oc-ldap-org1-24.1.0.tgz
    

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:
ocpcf-oc-ldap-org1-7b9d957bc6-ngtrl                1/1     Running   0          5h47m

Note:

The oc-ldap-org1-secret of the OC-LDAP stub is created by the Helm chart included in the ATS package.
3.5.5.6 Deploying ocdiam Simulator in Kubernetes Cluster
To deploy ocdiam Simulator in the Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-24.1.0.0.0 folder and run the following command:
    tar -zxvf ocdiam-pkg-24.1.0.0.0.tgz

    The following is the output:

    [cloud-user@platform-bastion-1 ocdiam-pkg-24.1.0.0.0]$ ls -ltrh
    total 908M
    -rw-------. 1 cloud-user cloud-user 908M Mar 14 14:49 ocdiam-sim-image-24.1.0.tar
    -rw-r--r--. 1 cloud-user cloud-user 3.8K Mar 14 14:49 ocdiam-sim-24.1.0.tgz
  2. Run the following command in your cluster to load the Diameter Simulator image:
    docker load --input ocdiam-sim-image-24.1.0.tar
  3. Run the following commands to tag and push the Diameter Simulator image to the registry:
    docker tag ocdiam-sim:24.1.0 localhost:5000/ocdiam-sim:24.1.0
    docker push localhost:5000/ocdiam-sim:24.1.0
  4. Run the following command to untar the helm charts (ocdiam-sim-24.1.0.tgz):
    tar -zxvf ocdiam-sim-24.1.0.tgz
  5. Update the registry name, image name, and tag (if required) in the ocdiam-sim/values.yaml file.
  6. Run the following command to install Diameter Simulator:

    Using Helm3:

    [cloud-user@platform-bastion-1 ocdiam-sim]$ helm3 install ocdiam-sim ocdiam-sim-24.1.0.tgz --namespace ocpcf -f ocdiam-sim/values.yaml

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:

ocdiam-sim-69968444b6-fg6ks    1/1     Running   0   5h47m
The following is a sample of the Policy namespace with Policy and ATS pods after installation:
[cloud-user@platform-bastion-1 ocstub-pkg-24.1.0.0.0]$ kubectl get po -n ocpcf
NAME                                                   READY   STATUS    RESTARTS   AGE
ocpcf-appinfo-6c74cccd47-zsbb2                         1/1     Running   0          155m
ocpcf-oc-binding-77fbb9b79c-jv7kd                      1/1     Running   0          155m
ocpcf-oc-diam-connector-6c6fd868bd-4zfrn               1/1     Running   0          155m
ocpcf-oc-diam-gateway-0                                1/1     Running   0          147m
ocamf2-ocamf-stub-595bb858d4-smzj8                     1/1     Running   0          147m
ocpcf-ocats-ocats-policy-667d8cf78-b8bc8               1/1     Running   0          147m
ocpcf-occnp-alternate-route-75455c858d-f6qs8           1/1     Running   0          146m
ocpcf-occnp-chf-connector-6b8b8bfcd6-jjch6             1/1     Running   0          155m
ocpcf-occnp-config-server-77bd99f96-mpscn              1/1     Running   0          155m
ocpcf-occnp-egress-gateway-59c4b784cc-6dx4w            1/1     Running   0          16m
ocpcf-occnp-ingress-gateway-75c47c57bc-pljtc           1/1     Running   0          39m
ocpcf-occnp-nrf-client-nfdiscovery-74b854956b-s6blq    1/1     Running   0          155m
ocpcf-occnp-nrf-client-nfmanagement-76cb55b8b8-tdjkj   1/1     Running   0          49m
ocpcf-occnp-udr-connector-75ffb9db9b-7xz9v             1/1     Running   0          155m
ocpcf-ocdns-ocdns-bind-57fbcd95dc-h4dtl                1/1     Running   0          147m
ocpcf-ocpm-audit-service-5cc46665c4-j6vhh              1/1     Running   0          155m
ocpcf-ocpm-cm-service-7795bb4c6c-446rb                 1/1     Running   0          155m
ocpcf-ocpm-policyds-75cbc9fc9d-7lbl5                   1/1     Running   0          155m
ocpcf-ocpm-pre-59b94d979-jzkv4                         1/1     Running   0          155m
ocpcf-ocpm-pre-test-84d9c89dd8-fqlpg                   1/1     Running   0          155m
ocpcf-ocpm-queryservice-94895bf88-bhwcc                1/1     Running   0          155m
ocpcf-pcf-amservice-56cdbb75c9-ph7tt                   1/1     Running   0          155m
ocpcf-pcf-smservice-64b899d766-jfhjm                   1/1     Running   0          155m
ocpcf-pcf-ueservice-7c6bd7ccc9-mrnxn                   1/1     Running   0          155m
ocpcf-pcrf-core-7594dbb7f8-z95vt                       1/1     Running   0          155m
ocpcf-performance-689dd556b-7vblc                      1/1     Running   0          155m
ocpcfnf11stub-5bb6b4f95d-v6fbb                         1/1     Running   0          147m
ocpcfnf12stub-59fb974f5d-2qr42                         1/1     Running   0          147m
ocpcfnf1stub-5bdf545fcb-zgbjb                          1/1     Running   0          147m
ocpcfnf21stub-ff6db9d86-5hvj6                          1/1     Running   0          147m
ocpcfnf22stub-794456fd66-sxq8q                         1/1     Running   0          147m
ocpcfnf2stub-656755dc46-hnr8m                          1/1     Running   0          147m
ocpcfnf31stub-68c6596b6-jdsgj                          1/1     Running   0          147m
ocpcfnf32stub-f49b57d86-rklc8                          1/1     Running   0          147m
ocpcfnf3stub-6c4c697648-lj6q7                          1/1     Running   0          147m
ocpcf-ocpm-ldap-gateway-5fd489b8fd-52dqn               1/1     Running   0          147m 
ocdiam-sim-69968444b6                                  1/1     Running   0          147m

3.5.6 Post-Installation Steps

This section describes the post-installation steps for Policy.

Alternate Route Service Configurations

To edit the Alternate Route Service deployment file (ocpcf-occnp-alternate-route) so that it points to the DNS stub, perform the following steps:

  1. Run the following command to get the search domains from the dns-bind pod, which enable communication between the Alternate Route service and the dns-bind service:
    kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
    The following output is displayed after running the command:

    Figure 3-11 Sample Output
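
    The command prints one search domain per line. For the example deployment used later in this section, the output would look like the following (your domains will differ):

    ocpcf.svc.occne15-ocpcf-ats
    svc.occne15-ocpcf-ats
    occne15-ocpcf-ats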
    By default, the alternate route service points to CoreDNS, and you will see the following settings in the deployment file:

    Figure 3-12 Alternate Route Service Deployment File
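
    In a standard Kubernetes deployment, these default settings appear as follows (a sketch; your file may include additional fields):

    dnsPolicy: ClusterFirst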
  2. Run the following command to edit the deployment file and add the following content so that the alternate route service queries the DNS stub:
    $kubectl edit deployment ocpcf-occnp-alternate-route -n ocpcf
    1. Add the IP address of the nameserver that you recorded after installing the DNS stub (the cluster IP address of the DNS stub).
    2. Add the search domains that you recorded earlier, one per line.
    3. Set dnsPolicy to "None".
      dnsConfig:
        nameservers:
        - 10.233.33.169      # cluster IP of DNS Stub
        searches:
        - ocpcf.svc.occne15-ocpcf-ats
        - svc.occne15-ocpcf-ats
        - occne15-ocpcf-ats
      dnsPolicy: None
    For example:

    Figure 3-13 Example
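
    Alternatively, the same change can be applied non-interactively with kubectl patch. The following is a minimal sketch, assuming the cluster IP and search domains recorded earlier:

    kubectl patch deployment ocpcf-occnp-alternate-route -n ocpcf --type merge -p \
      '{"spec":{"template":{"spec":{"dnsPolicy":"None","dnsConfig":{"nameservers":["10.233.33.169"],"searches":["ocpcf.svc.occne15-ocpcf-ats","svc.occne15-ocpcf-ats","occne15-ocpcf-ats"]}}}}}'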

NRF client configmap

In the *-application-config configmap (the configmap whose name ends in -application-config), configure the following parameters with the respective values:
  • primaryNrfApiRoot=nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080

    Example: primaryNrfApiRoot=nf1stub.ocats.svc:8080

  • secondaryNrfApiRoot=nf1stub.ocats.svc:8080 (remove the secondaryNrfApiRoot parameter)
  • nrfClientSubscribeTypes=UDR, CHF, NWDAF
  • supportedDataSetId=POLICY (remove the supportedDataSetId parameter)
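
The following is a minimal sketch of the resulting configmap values after these edits (the configmap name varies by release; list the configmaps with the kubectl command in the note below):

primaryNrfApiRoot=nf1stub.ocats.svc:8080
nrfClientSubscribeTypes=UDR, CHF, NWDAF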

Note:

Configure these values at the time of Policy deployment.

Note:

To get all configmaps in your namespace, execute the following command:

kubectl get configmaps -n <Policy_namespace>

Persistent Volume (Optional)

If persistent volume is used, follow the post-installation steps provided in the Persistent Volume for 5G ATS section.

3.6 Installing ATS for SCP

This section describes Automated Testing Suite (ATS) installation procedures for Service Communication Proxy (SCP) in a cloud native environment using Continuous Delivery Control Server (CDCS) or Command Line Interface (CLI) procedures. For more information about CDCS, see the following documents:
  • Oracle Communications CD Control Server Installation and Upgrade Guide
  • Oracle Communications CD Control Server User Guide

You must perform ATS installation procedures for SCP in the same sequence as outlined in the following sections.

3.6.1 Prerequisites

To run SCP test cases, the following prerequisites are required.

3.6.1.1 Software Requirements

This section lists the software that must be installed before installing ATS.

Table 3-18 Preinstalled Software

| Software   | Version                |
|------------|------------------------|
| Kubernetes | 1.28.x, 1.27.x, 1.26.x |
| Helm       | 3.13.2                 |
| Podman     | 4.4.1                  |

To check the versions of the preinstalled software in the cloud native environment, run the following commands:

kubectl version
helm version
podman version
3.6.1.2 Environment Setup Requirements

This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.

The client machine should have:
  • Helm repository configured.
  • Network access to the Helm repository and Docker image repository.
  • Network access to the Kubernetes cluster.
  • Required environment settings to run kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
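
The following quick checks can be used to verify the client machine setup (a sketch; the registry host is a placeholder):

helm repo list                         # Helm repository is configured
kubectl get nodes                      # network access to the Kubernetes cluster
kubectl auth can-i create namespaces   # privileges to create a namespace
podman login <registry>                # access to the image repository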
3.6.1.3 Resource Requirements
This section describes ATS resource requirements for SCP.

Overview - Total Number of Resources

The following table describes the total resource usage by different resource types:
  • SCP SUT
  • cnDB Tier
  • ATS

Table 3-19 SCP - Total Number of Resources

| Resource Name       | CPU | Memory (GB) | Storage (GB) |
|---------------------|-----|-------------|--------------|
| SCP SUT Totals      | 56  | 61          | 0            |
| cnDBTier Totals     | 29  | 65          | 235          |
| ATS Totals          | 100 | 106         | 4            |
| Grand Total SCP ATS | 185 | 232         | 263          |

SCP Pods Resource Requirements

This section describes the resources required to deploy SCP ATS.

Table 3-20 SCP Pods Resource Requirements

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required (ATS) - Total | Memory Required (ATS) - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| SCP Pods | | | | | | | | |
| scpc-subscription | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| scpc-notification | 4 | 4 | 0 | 1 | 1 | 4 | 4 | 0 |
| scpc-audit | 3 | 4 | 0 | 1 | 1 | 3 | 4 | 0 |
| scpc-configuration | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| scp-worker | 4 | 8 | 0 | 2 | 1 | 4 | 8 | 0 |
| scpc-alternate-resolution | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| scp-nrfproxy | 8 | 8 | 0 | 2 | 1 | 8 | 8 | 0 |
| scp-cache | 8 | 8 | 0 | 3 | 1 | 8 | 8 | 0 |
| scp-mediation | 8 | 8 | 0 | 2 | 1 | 8 | 8 | 0 |
| scp-load-manager | 8 | 8 | 0 | 2 | 1 | 8 | 8 | 0 |
| scp-oauth-nrfproxy | 8 | 8 | 0 | 2 | 1 | 8 | 8 | 0 |
| SCP SUT Totals | | | | | | 56 | 61 | 0 |

ATS Resource Requirements for SCP

This section describes the ATS resources required to deploy SCP-ATS.

Table 3-21 ATS Resource Requirements

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| ATS Behave | 6 | 12 | 4 | - | 1 | 6 | 12 | 4 |
| ATS pystub | 1 | 1 | - | - | 90 | 91 | 91 | 0 |
| DNS Stub | 1 | 1 | - | - | 1 | 1 | 1 | 0 |
| Global Egress Rate Limiting Stub | 1 | 1 | - | - | 1 | 1 | 1 | 0 |
| ATS DD Client stub | 1 | 1 | - | - | 1 | 1 | 1 | 0 |
| ATS Totals | | | | | | 100 | 106 | 4 |
3.6.1.4 Downloading the ATS Package

This section provides information about how to download the ATS package using Command Line Interface (CLI). To download ATS package using CDCS, see Oracle Communications CD Control Server User Guide.

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate login credentials.
  2. Click the Patches & Updates tab.
  3. In the Patch Search section, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Service Communication Proxy <release_number> from the Release drop-down list.
  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required ATS patch from the list.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the ocats_ocscp_csar_24_1_1_0_0.pkg file to download the CNC SCP ATS package file.

    The ocats_ocscp_csar_24_1_1_0_0.pkg package contains the following files:

    ocats_ocscp_csar_24_1_1_0_0.zip
    mcafee-gen-ats-csar-24.1.1.log
    

    Note:

    The above zip file contains all the images and custom values required for the 24.1.1 release of OCATS-OCSCP.

    Unzip the ocats_ocscp_csar_24_1_1_0_0.zip file to get the following files and folders:

    .
    |-- Definitions
    |  |-- ocats_ocscp_ats_tests.yaml
    |  |-- ocats_ocscp_cne_compatibility.yaml
    |  `-- ocats_ocscp.yaml
    |-- Files
    |  |-- ChangeLog.txt
    |  |-- Helm
    |  |  `-- ocats-ocscp-24.1.1.tgz
    |  |-- Licenses
    |  |-- ocats-ddclientstub-24.1.1.tar
    |  |-- ocats-dnsstub-24.1.1.tar (Docker Image)
    |  |-- ocats-pystub-24.1.1.tar (Docker Image)
    |  |-- ocats-scp-24.1.1.tar (Docker Image)
    |  |-- ocats-scpglbratelimitstub-24.1.1.tar (Docker Image)
    |  |-- Oracle.cert
    |  `-- Tests
    |-- ocats_ocscp_csar_24_1_1_0_0.zip
    |-- ocats_ocscp.mf
    |-- Scripts
    |  |-- ocats_ocscp_custom_serviceaccount_24.1.1.yaml (Template to create custom service account)
    |  |-- ocats_ocscp_tests_jenkinsjobs_24.1.1.tgz (ocscp_tests and Jenkins jobs folders to be copied if persistent volume is deployed)
    |  `-- ocats_ocscp_values_24.1.1.yaml (Custom values file for installation)
    `-- TOSCA-Metadata
      `-- TOSCA.meta
  10. Copy the umbrella Helm chart ocats-ocscp-24.1.1.tgz file from the Files folder to the Kubernetes cluster where you want to deploy ATS.
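
    For example (the bastion host and destination path are illustrative):

    scp Files/Helm/ocats-ocscp-24.1.1.tgz cloud-user@<bastion-host>:/home/cloud-user/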

    The following table describes ATS parameters in the ocats_ocscp_values_24.1.1.yaml file:

    Table 3-22 ATS Parameters of the YAML File

| Parameter | Default Value | Possible Values | Description |
|---|---|---|---|
| ocatsdnsstubService | true | true, false | Set to true or false as required. Setting these values to true deploys ocats-dnsstub, ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and ocats-pystubs (ausf1, udm3, and so on). |
| ocatsscpService | true | true, false | Set to true or false as required. Setting these values to true deploys ocats-dnsstub, ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and ocats-pystubs (ausf1, udm3, and so on). |
| ocatsscpglbratelimitstubService | true | true, false | Set to true or false as required. Setting these values to true deploys ocats-dnsstub, ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and ocats-pystubs (ausf1, udm3, and so on). |
| ocatsddclientstubService | true | true, false | Set to true or false as required. Setting these values to true deploys ocats-dnsstub, ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and ocats-pystubs (ausf1, udm3, and so on). |
| ausf1Stubs | true | true, false | Set to true or false as required. Setting these values to true deploys all the ocats-pystubs (ausf1, udm3, chf1, scp3, and so on). |
| sutScpIPFamiliesforATSRun | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | Specifies the IP families that ATS considers when running test cases. Note: If any value other than those specified is provided, ATS proceeds with the assumption that the deployment supports IPv4 only. |
| traffic.sidecar.istio.io/excludeOutboundPorts | 8091 | - | This annotation under lbDeployments is required for fetching metrics from soothsayer pods, which a few ATS FTs require. Do not change this port when ATS runs in an ASPEN MESH environment. |
| traffic.sidecar.istio.io/excludeInboundPorts | 8080 | - | This annotation under lbDeployments is required for fetching metrics from soothsayer pods, which a few ATS FTs require. Do not change this port when ATS runs in an ASPEN MESH environment. |
| tokenConsumptionIntervalInMillis | 50 | Count in milliseconds | Token consumption simulation parameter. |
| scpStubNfInstanceId | 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22 | nfInstanceID of SCP | NF instance ID of SCP. |
| rateDataReporterStartingOffset | 35 | Value in milliseconds | - |
| coherence.clusterName | scpstub-coherence-cluster | Local Coherence cluster name, not more than 66 characters | Local Coherence cluster name; must not exceed 66 characters. |
| coherence.clusterName.federation.remoteScpOne.fqdnOrIp | ocscp-scp-cache.scpsvc.svc.cluster.local | - | FQDN or IP of the federation configuration. |
| coherence.clusterName.federation.remoteScpOne.port | 30001 | - | Port number of the federation configuration. |
| coherence.clusterName.federation.remoteScpOne.clusterName | scp-coherence-cluster | Must be unique among all participants; not more than 66 characters | remoteScpOne Coherence cluster name. |
| serviceIpFamilyPolicy.ocatsdnsstubService, serviceIpFamilyPolicy.ocatsscpService, serviceIpFamilyPolicy.ocatsscpglbratelimitstubService, serviceIpFamilyPolicy.ocatsddclientstubService, serviceIpFamilyPolicy.ausf1Service, serviceIpFamilyPolicy.ausf2Service | SingleStack | SingleStack, PreferDualStack, RequireDualStack | IpFamilyPolicy of ocatsdnsstubService, ocatsscpService, ocatsscpglbratelimitstubService, ocatsddclientstubService, and pyStubs. Note: PreferDualStack and RequireDualStack can be used only if the setup is dual stack. |
| serviceIpFamilies.ocatsdnsstubService, serviceIpFamilies.ocatsscpService, serviceIpFamilies.ocatsscpglbratelimitstubService, serviceIpFamilies.ocatsddclientstubService, serviceIpFamilies.ausf1Service, serviceIpFamilies.ausf2Service | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | IpFamilies of ocatsdnsstubService, ocatsscpService, ocatsscpglbratelimitstubService, ocatsddclientstubService, and pyStubs. Note: If serviceIpFamilyPolicy is SingleStack, serviceIpFamilies can be [IPv4] or [IPv6]. If serviceIpFamilyPolicy is PreferDualStack or RequireDualStack, serviceIpFamilies can be [IPv4,IPv6] or [IPv6,IPv4]. |
| PVEnabled | false | true, false | Enables persistent volume. |
| PVClaimName | false | Name | Persistent volume claim name. |
| atsGuiTLSEnabled | false | true, false | Enables HTTPS in the Jenkins GUI. |
| atsCommunicationTLSEnabled | false | true, false | Enables HTTPS communication. |
| ocats-scp.image.repository | <docker-registryIP:docker-registryport>/ocats/ocats-scp | <Image repository name:port>/ocats/ocats-scp | Image repository and port of the ocats-scp image. |
| ocats-scp.image.tag | helm-tag | Value of tag to be deployed | Tag of the ocats-scp image. |
| ocats-scp.image.pullPolicy | Always | Always, IfNotPresent, Never | Image pull policy of the ocats-scp image. |
| ocats-scp.replicaCount | 1 | Positive integers | Replica count of the ocats-scp stub. |
| ocats-scp.resources.limits.cpu | 6 | CPU value that is allocated | CPU limit for the ocats-scp pod. |
| ocats-scp.resources.limits.memory | 12Gi | Memory value that is allocated (in Gi or Mi) | Memory limit for the ocats-scp pod. |
| ocats-scp.resources.requests.cpu | 6 | CPU value that is allocated (must be less than or equal to limits) | CPU request for ocats-scp. |
| ocats-scp.resources.requests.memory | 12Gi | Memory value that is allocated (in Gi or Mi) (must be less than or equal to limits) | Memory request for ocats-scp. |
| ocats-scp.service.customExtension.labels | {} | Label of node | Node labels for node allocation during deployment. |
| ocats-scp.service.customExtension.type | LoadBalancer | ClusterIP, NodePort, LoadBalancer | Service type of the ocats-scp pod. |
| ocats-scp.service.customExtension.port | 8080 | Port number | Port number of the ocats-scp service. |
| ocats-scp.service.customExtension.staticNodePortEnabled | false | true, false | Enables static node port. |
| ocats-scp.service.customExtension.staticNodePort | false | Port number | Port number of the static node port. |
| ocats-scp.service.ports.http.port | 8080 | Port number | Port number of the ocats-scp service when HTTPS is not enabled. |
| ocats-scp.service.ports.http.staticNodePortEnabled | false | true, false | Enables static node port. |
| ocats-scp.service.ports.http.staticNodePort | false | Port number | Port number of the static node port. |
| ocats-scp.service.ipFamilyPolicy | SingleStack | SingleStack, PreferDualStack, RequireDualStack | ipFamilyPolicy allocated to the ocats-scp service. This value takes whatever is defined at serviceIpFamilyPolicy.ocatsscpService under global parameters. |
| ocats-scp.service.ipFamilies | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | ipFamilies allocated to the ocats-scp service. This value takes whatever is defined at serviceIpFamilies.ocatsscpService under global parameters. |
| SELECTED_NF | SCP | NF name | ATS parameters are set with default values in the ocats_ocscp_values_24.1.1.yaml file. Update the ATS parameters with actual values based on the environment, and then deploy the OCATS_OCSCP chart using this file. The updated parameters are applied automatically during deployment, and ATS comes up with the configuration specified in this file. |
| NFNAMESPACE | scpsvc | Update the SCP namespace | - |
| CLUSTERDOMAIN | cluster.local | Cluster domain where SCP is deployed | - |
| DESTNAMESPACE | scpsvc | Test stubs namespace, same as the SCP namespace | - |
| ocats-dnsstub.image.repository | <docker-registryIP:docker-registryport>/ocats/ocats-dnsstub | <Image repository name:port>/ocats/ocats-dnsstub | Image repository and port of ocats-dnsstub. |
| ocats-dnsstub.image.tag | helm-tag | Value of tag to be deployed | Tag of the ocats-dnsstub image. |
| ocats-dnsstub.image.pullPolicy | Always | Always, IfNotPresent, Never | Image pull policy of the ocats-dnsstub image. |
| ocats-dnsstub.replicaCount | 1 | Positive integers | Replica count of ocats-dnsstub. |
| ocats-dnsstub.service.customExtension.type | ClusterIP | ClusterIP, NodePort, LoadBalancer | Service type of the ocats-dnsstub pod. |
| ocats-dnsstub.service.customExtension.port | 53 | Port | Port of ocats-dnsstub. |
| ocats-dnsstub.service.ipFamilyPolicy | SingleStack | SingleStack, PreferDualStack, RequireDualStack | ipFamilyPolicy allocated to the ocats-dnsstub service. This value takes whatever is defined at serviceIpFamilyPolicy.ocatsdnsstubService under global parameters. |
| ocats-dnsstub.service.ipFamilies | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | ipFamilies allocated to the ocats-dnsstub service. This value takes whatever is defined at serviceIpFamilies.ocatsdnsstubService under global parameters. |
| ocats-dnsstub.resources.limits.cpu | 1 | CPU value that is allocated | CPU limit for the ocats-dnsstub pod. |
| ocats-dnsstub.resources.limits.memory | 1Gi | Memory value that is allocated (in Gi or Mi) | Memory limit for the ocats-dnsstub pod. |
| ocats-dnsstub.resources.requests.cpu | 1 | CPU value that is allocated (must be less than or equal to limits) | CPU request for ocats-dnsstub. |
| ocats-dnsstub.resources.requests.memory | 1Gi | Memory value that is allocated (in Gi or Mi) (must be less than or equal to limits) | Memory request for ocats-dnsstub. |
| ocats-ddclientstub.image.repository | <docker-registryIP:docker-registryport>/ocats/ocats-ddclientstub | <Image repository name:port>/ocats/ocats-ddclientstub | Image repository and port of ocats-ddclientstub. |
| ocats-ddclientstub.image.tag | helm-tag | Value of tag to be deployed | Tag of the ocats-ddclientstub image. |
| ocats-ddclientstub.image.pullPolicy | Always | Always, IfNotPresent, Never | Image pull policy of the ocats-ddclientstub image. |
| ocats-ddclientstub.replicaCount | 1 | Positive integers | Replica count of ocats-ddclientstub. |
| ocats-ddclientstub.service.type | LoadBalancer | ClusterIP, NodePort, LoadBalancer | Service type of ocats-ddclientstub. |
| ocats-ddclientstub.ipFamilyPolicy | SingleStack | SingleStack, PreferDualStack, RequireDualStack | ipFamilyPolicy allocated to the ocatsddclientstubService service. This value takes whatever is defined at serviceIpFamilyPolicy.ocatsddclientstubService under global parameters. |
| ocats-ddclientstub.ipFamilies | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | ipFamilies allocated to the ocatsddclientstubService service. This value takes whatever is defined at serviceIpFamilies.ocatsddclientstubService under global parameters. |
| ocats-ddclientstub.resources.limits.cpu | 1 | CPU value that is allocated | CPU limit for the ocats-ddclientstub pod. |
| ocats-ddclientstub.resources.limits.memory | 1Gi | Memory value that is allocated (in Gi or Mi) | Memory limit for the ocats-ddclientstub pod. |
| ocats-ddclientstub.resources.requests.cpu | 1 | CPU value that is allocated (should be less than or equal to limits) | CPU request for ocats-ddclientstub. |
| ocats-ddclientstub.resources.requests.memory | 1Gi | Memory value that is allocated (in Gi or Mi) (should be less than or equal to limits) | Memory request for ocats-ddclientstub. |
| ocats-ddclientstub.log.level | INFO | INFO, WARN, DEBUG | Log level of the ddclientstub pod. |
| ocats-ddclientstub.kafka_broker | "kafka-broker-0.kafka-broker.ddkafkanamespace.svc.cluster.local:9092" | Kafka broker FQDN and port | Kafka broker FQDN and port for ddClientStub. |
| ocats-ddclientstub.string_topic_name | "string_topic" | String topic | ddClientStub string topic name. |
| ocats-ddclientstub.json_topic_name | "json_topic" | JSON topic | ddClientStub JSON topic name. |
| ocats-scpglbratelimitstub.image.repository | <docker-registryIP:docker-registryport>/ocats/ocats-scpglbratelimitstub | <Image repository name:port>/ocats/ocats-scpglbratelimitstub | Image repository and port of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.image.tag | helm-tag | Value of tag to be deployed | Tag of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.image.pullPolicy | Always | Always, IfNotPresent, Never | Image pull policy of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.replicaCount | 1 | Positive integers | Replica count of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.service.type | ClusterIP | ClusterIP, NodePort, LoadBalancer | Service type of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.service.ipFamilyPolicy | SingleStack | SingleStack, PreferDualStack, RequireDualStack | ipFamilyPolicy allocated to the ocatsscpglbratelimitstubService service. This value takes whatever is defined at serviceIpFamilyPolicy.ocatsscpglbratelimitstubService under global parameters. |
| ocats-scpglbratelimitstub.service.ipFamilies | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | ipFamilies allocated to the ocatsscpglbratelimitstubService service. This value takes whatever is defined at serviceIpFamilies.ocatsscpglbratelimitstubService under global parameters. |
| ocats-scpglbratelimitstub.deployment.customExtension.labels | {} | Label of node | Node labels for node allocation during deployment. |
| ocats-scpglbratelimitstub.resources.limits.cpu | 1 | CPU value that is allocated | CPU limit for ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.resources.limits.memory | 1Gi | Memory value that is allocated (in Gi or Mi) | Memory limit for ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.resources.requests.cpu | 1 | CPU value that is allocated (should be less than or equal to limits) | CPU request for ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.resources.requests.memory | 1Gi | Memory value that is allocated (in Gi or Mi) (should be less than or equal to limits) | Memory request for ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.minreplicas | 1 | Positive integer | Minimum replicas of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.maxreplicas | 1 | Positive integer | Maximum replicas of ocats-scpglbratelimitstub. |
| ocats-scpglbratelimitstub.maxPdbUnavailable | 1 | Positive integer | - |
| ocats-scpglbratelimitstub.log.level | INFO | INFO, WARN, DEBUG | Log level of ocats-scpglbratelimitstub. |
| ocats-pystub.image.repository | <docker-registryIP:docker-registryport>/ocats/ocats-pystub | <Image repository name:port>/ocats/ocats-pystub | Image repository and port of ocats-pystub. |
| ocats-pystub.image.tag | helm-tag | Value of tag to be deployed | Tag of ocats-pystub. |
| ocats-pystub.image.pullPolicy | Always | Always, IfNotPresent, Never | Image pull policy of ocats-pystub. |
| ocats-pystub.replicaCount | 1 | Positive integers | Replica count of ocats-pystub. |
| RESPONSE_FROM_HEADER | true | true, false | When true, pystub returns the pod name. |
| ocats-pystub.resources.limits.cpu | 1 | CPU value that is allocated | CPU limit for the ocats-pystub pod. |
| ocats-pystub.resources.limits.memory | 1Gi | Memory value that is allocated (in Gi or Mi) | Memory limit for the ocats-pystub pod. |
| ocats-pystub.resources.requests.cpu | 1 | CPU value that is allocated (should be less than or equal to limits) | CPU request for ocats-pystub. |
| ocats-pystub.resources.requests.memory | 1Gi | Memory value that is allocated (in Gi or Mi) (should be less than or equal to limits) | Memory request for ocats-pystub. |
| ausf1.service.name:* | ausf1svc | Service name of ausf1 | This is applicable for all the ocats-pystubs. |
| ausf1.service.type:* | ClusterIP | ClusterIP, NodePort, LoadBalancer | This is applicable for all the ocats-pystubs. |
| ausf1.service.ports.port:* | 8080 | Port number | This is applicable for all the ocats-pystubs. |
| ausf1.service.ipFamilyPolicy | SingleStack | SingleStack, PreferDualStack, RequireDualStack | ipFamilyPolicy allocated to the ausf1Service service. This value takes whatever is defined at serviceIpFamilyPolicy.ausf1Service under global parameters. |
| ausf1.service.ipFamilies | [IPv4] | [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] | ipFamilies allocated to the ausf1Service service. This value takes whatever is defined at serviceIpFamilies.ausf1Service under global parameters. |
| ausf1.deploymentName:* | ausf1 | Deployment name of ausf1 | This is applicable for all the ocats-pystubs. |
3.6.1.5 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pods in the Kubernetes Cluster:

  1. Click the file to download the CNC SCP ATS package file:
    ocats_ocscp_csar_24_1_1_0_0.pkg
    The package contains the following files:
    ocats_ocscp_csar_24_1_1_0_0.zip
    mcafee-gen-ats-csar-24.1.1.log
    

    Unzip the ocats_ocscp_csar_24_1_1_0_0.zip file to get the following files and folders:

    .
    |-- Definitions
    |  |-- ocats_ocscp_ats_tests.yaml
    |  |-- ocats_ocscp_cne_compatibility.yaml
    |  `-- ocats_ocscp.yaml
    |-- Files
    |  |-- ChangeLog.txt
    |  |-- Helm
    |  |  `-- ocats-ocscp-24.1.1.tgz
    |  |-- Licenses
    |  |-- ocats-ddclientstub-24.1.1.tar
    |  |-- ocats-dnsstub-24.1.1.tar (Docker Image)
    |  |-- ocats-pystub-24.1.1.tar (Docker Image)
    |  |-- ocats-scp-24.1.1.tar (Docker Image)
    |  |-- ocats-scpglbratelimitstub-24.1.1.tar (Docker Image)
    |  |-- Oracle.cert
    |  `-- Tests
    |-- ocats_ocscp_csar_24_1_1_0_0.zip
    |-- ocats_ocscp.mf
    |-- Scripts
    |  |-- ocats_ocscp_custom_serviceaccount_24.1.1.yaml (Template to create custom service account)
    |  |-- ocats_ocscp_tests_jenkinsjobs_24.1.1.tgz (ocscp_tests and Jenkins jobs folders to be copied if persistent volume is deployed)
    |  `-- ocats_ocscp_values_24.1.1.yaml (Custom values file for installation)
    `-- TOSCA-Metadata
      `-- TOSCA.meta
  2. Run the following commands in your cluster to load the ATS and stub Docker images; subsequent steps tag and push them to your registry:
    docker load --input ocats-scp-24.1.1.tar
    docker load --input ocats-dnsstub-24.1.1.tar
    docker load --input ocats-pystub-24.1.1.tar
    docker load --input ocats-scpglbratelimitstub-24.1.1.tar
    docker load --input ocats-ddclientstub-24.1.1.tar
    
  3. Run the following command in your cluster to load the ATS image (skip if already loaded in step 2):
    docker load --input ocats-scp-24.1.1.tar
  4. Run the following commands to push the ATS image to the registry:
    docker tag ocats/ocats-scp:24.1.1 <local_registry>/ocats/ocats-scp:24.1.1
    docker push <local_registry>/ocats/ocats-scp:24.1.1

    Where, <local_registry> indicates the registry where you can push the downloaded images.

  5. Run the following commands to push the Stub image to the registry:
    
    docker tag ocats/ocats-pystub:24.1.1 <local_registry>/ocats/ocats-pystub:24.1.1
    docker push <local_registry>/ocats/ocats-pystub:24.1.1
  6. Run the following command to push the DNS Stub Image to the registry:
    docker tag ocats/ocats-dnsstub:24.1.1 <local_registry>/ocats/ocats-dnsstub:24.1.1
    docker push <local_registry>/ocats/ocats-dnsstub:24.1.1
  7. Run the following command to push the Global Rate Limiting Stub Image to the registry:
    docker tag ocats/ocats-scpglbratelimitstub:24.1.1 <local_registry>/ocats/ocats-scpglbratelimitstub:24.1.1
    docker push <local_registry>/ocats/ocats-scpglbratelimitstub:24.1.1
  8. Run the following command to push the Data Director Stub Image to the registry:
    docker tag ocats/ocats-ddclientstub:24.1.1 <local_registry>/ocats/ocats-ddclientstub:24.1.1
    docker push <local_registry>/ocats/ocats-ddclientstub:24.1.1
  9. In the Scripts folder, extract the following content:
    ocats_ocscp_values_24.1.1.yaml
    ocats_ocscp_custom_serviceaccount_24.1.1.yaml
    ocats_ocscp_tests_jenkinsjobs_24.1.1.tgz
  10. Update the image name and tag in the ocats_ocscp_values_24.1.1.yaml file as required.
3.6.1.6 Preinstall Preparation of SCP for SCP-ATS
Complete the following steps before performing an installation:
  • When deploying default ATS with role binding, deploy ATS and test stubs in the same namespace as SCP.
  • The SCP stub for the Global Egress Rate Limiting feature must be deployed by setting the required Helm parameters as described in Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade and Fault Recovery Guide to support the Global Egress Rate Limiting test cases.
  • In the ocats_ocscp_values_24.1.1.yaml, add the following for Prometheus that is required for alert test case:
    traffic.sidecar.istio.io/excludeInboundPorts: "9090"
  • If ASM adds additional XFCC headers, the certExtractIndex and extractIndex of the xfccHeaderDecode value must be -1; otherwise, they must be 0.
  • If ASM is enabled, for fetching the metrics from Prometheus, a destination rule must be created. In most deployments, Prometheus is kept outside of the service mesh, so a destination rule is required to communicate between a TLS enabled entity (ATS) and a non-TLS entity (Prometheus). The rule can be created as follows:
    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: prometheus-dr
      namespace: ocscp
    spec:
      host: oso-prometheus-server.ocscp.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
    EOF
     
    Where,
    • name indicates the name of the destination rule.
    • namespace indicates where ATS is deployed.
    • host indicates the hostname of the Prometheus server.
  • FQDN and interPlmnFqdn must be the same for both nfServices for NRF profile NRF1.

    Sample

    # NRF profiles for the primary (priority=0) and secondary (priority=1) NRF. Note that these NRFs need to be backend DB synced.
    # For the secondary NRF profile, always set a priority lower than the first-priority NRF; currently the secondary NRF priority is set to 1.
    # If there is no secondary NRF, the secondary NRF profile can be commented out.
    # Service-level FQDNs of the NRF are from the same namespace as SCP; this is set up for the SCP ATS cases. Otherwise, NRFs can be part of other namespaces or even other Kubernetes clusters.
      nrfProfiles:
      - capacity: 10000
        locality: USEast
        nfInstanceId: 6faf1bbc-6e4a-2828-a507-a14ef8e1bc5a
        nfStatus: REGISTERED
        nfType: NRF
        priority: '0'
        # When the rel15 flag is enabled, specify the NRF region below.
    #    nfSetIdList: ["Reg1"]
        # When the rel16 flag is enabled, specify the NRF set ID below.
        nfSetIdList: ["setnrfl1.nrfset.5gc.mnc012.mcc345"]
        #Uncomment below section to configure interPlmnFqdn, plmnList or snpnList
        #NRF-Change-3
        interPlmnFqdn: nrf1.5gc.mnc213.mcc410.3gppnetwork.org 
        plmnList:
        - mcc: 410
          mnc: 213
        - mcc: 410
          mnc: 214
        #snpnList:
        #- mcc: 345
        #  mnc: 445
        #  nid: 000007ed9d5
        customInfo:
          preferredNrfForOnDemandDiscovery: true
     
             
        nfServices:
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1svc.scpsvc.svc.cluster.local
          # Sample ipEndPoints entry with all the fields; it is commented out below.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need port 8080 with an FQDN. Hence, to run ATS cases, the ipEndPoints field below is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b01
          serviceName: nnrf-nfm
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
     
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1svc.scpsvc.svc.cluster.local
          # Sample ipEndPoints entry with all the fields; it is commented out below.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need port 8080 with an FQDN. Hence, to run ATS cases, the ipEndPoints field below is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b02
          serviceName: nnrf-disc
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
             
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1.5gc.mnc213.mcc410.3gppnetwork.org
          # Sample ipEndPoints entry with all the fields; it is commented out below.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need port 8080 with an FQDN. Hence, to run ATS cases, the ipEndPoints field below is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b03
          serviceName: nnrf-oauth2
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
  • Ensure SCP is deployed with the following parameters:
    • While providing NRF information at the time of SCP deployment, make sure that stub NRF details such as nrf1svc and nrf2svc are also provided at the time of ATS deployment before running these test cases. For example, if the test stub namespace is scpsvc, SCP should have been deployed with the primary NRF as nrf1svc.scpsvc.svc.<clusterDomain> and the secondary NRF as nrf2svc.scpsvc.svc.<clusterDomain> for the NRF test cases to work.
    • Ensure the defaultTopologySource parameter is set to NRF in the ocscp_values.yaml file.
    • Ensure the preventiveAuditOnLastNFInstanceDeletion parameter is set to false in the ocscp_values.yaml file.
    • The number of replicas of all SCP microservices pods must be set to 1 during SCP deployment as ATS is enabled to perform metric validations for metrics obtained from a single pod.
    • When you deploy, make sure to define the additional NRF stubs needed for InterSCP cases as nrfr2l1svc (preferred NRF of Reg2), nrfr2l2svc (non-preferred NRF of Reg2), nrfr3l1svc (non-preferred NRF of Reg3), and nrfr3l2svc (preferred NRF of Reg3), which are provided in the default custom value file. Also, in the SCP deployment file, ensure that the namespace of all these NRFs is the same as the deployed SCP namespace. Reg1, Reg2, and Reg3 are replaced with setnrfl1.nrfset.5gc.mnc012.mcc345, setnrfr1.nrfset.5gc.mnc012.mcc345, and setnrfr2.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment.
    • Ensure the supportedNRFRegionOrSetIdList must have Reg1, Reg2, and Reg3 for Release 15 SCP deployment or setnrfl1.nrfset.5gc.mnc012.mcc345, setnrfr1.nrfset.5gc.mnc012.mcc345, and setnrfr2.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment.
    • Ensure only Loc7, Loc8, Loc9, and USEast should be part of the servingLocalities for Release 15 SCP deployment and the servingScope for Release 16 SCP deployment.
    • Recommended auditInterval is 60 seconds and guardTime is 10 seconds in the SCP deployment file.
    • Regions such as Reg2 and Reg3 are the corresponding values for a Release 15 SCP deployment, while NRF set IDs such as setnrfr1.nrfset.5gc.mnc012.mcc345 and setnrfr2.nrfset.5gc.mnc012.mcc345 are the corresponding values for a Release 16 SCP deployment. The localities of the NRFs belonging to these regions or NRF set IDs must not match the SCP servingLocalities or SCP serving scope.
    • SCP deployment file should have the attribute scpToRegisterWithNrfRegions set to Reg1 for Release 15 SCP deployment and setnrfl1.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment. For information about Release 15 and Release 16, see 3GPP TS 23.501.
    • To run CCA Validation feature tests, refer to Configuring ATS for CCA Test Cases section.
    • To enable OAuth support while deploying SCP, refer Configuring SCP to Run OAuth Test Cases in ATS section.
    • To enable alternate resolution service support while deploying SCP, refer to Configuring SCP to Run DNS SRV Test Cases in ATS section.
    • To enable mediation support while deploying SCP, refer to Configuring SCP to Run Mediation Test Cases in ATS section.
    • To enable nrfproxy support, refer to Configuring SCP to Run Model D Test Cases in ATS section.
    • To enable load manager support, refer to Configuring SCP to Run LCI Test Cases in ATS section.
    • To enable the Global Egress Rate Limiting feature for ATS environment, refer to Updating the Global Egress Rate Limiting Changes in the SCP Deployment File for ATS section.
    • By default, the ATS suite runs HTTPS test cases if the "ALL" option is selected, and SCP must be deployed with HTTPS support enabled to support the same. To enable HTTPs for ATS, refer to Enabling HTTPs for ATS, pystubs and Jenkins
      # If an ingress gateway is available, set the ingressGWAvailable flag to true
      # and provide the ingress gateway IP and port in publicSignalingIP and publicSignalingPort, respectively.

      publicSignalingPort: &publicSignalingPort 8000   # Signaling port
      publicSignalingPortHttps: &publicSignalingPortHttps 9443 # Signaling port for HTTPS
      # Uncomment the lines below when deploying with Release 16. Note that the http port for SCP should be the same as the "publicSignalingPort" of SCP mentioned above.
      scpInfo:
        scpPrefix: scpPrefix
        scpPorts:
          http: *publicSignalingPort
      # Uncomment the https key-value below to enable https for ingress connections for a Release 16 deployment. This port should be the same as the "publicSignalingPortHttps" of SCP mentioned above.
      #   https: *publicSignalingPortHttps
      # Note: If this flag is false, then by default all connections to the PNF are made using the http protocol.
      nativeEgressHttpsSupport: false

      If SCP is deployed with HTTP support only, select the single or multiple feature execution option and exclude all HTTPS test cases. Similarly, in an ASM environment where HTTPS is not enabled, manually remove the HTTPS-related test cases from the features directory on the ATS pod.

3.6.1.7 Preinstallation Preparation for SCP-ATS

Complete the following steps before performing an installation of SCP-ATS.

3.6.1.7.1 Enabling Aspen Service Mesh
To enable Aspen Service Mesh (ASM) for ATS, complete the following procedure:

Note:

By default, this feature is disabled.
  1. If ASM is not enabled on the global level for the namespace, run the following command before deploying ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace scpsvc istio-injection=enabled
  2. Add the following annotations in the lbDeployments section of the global section in the ocats_ocscp_values_24.1.1.yaml file:
    traffic.sidecar.istio.io/excludeOutboundPorts: "8091"
    traffic.sidecar.istio.io/excludeInboundPorts: "8080"

    Sample file with annotations:

    
    lbDeployments:
      labels: {}
      annotations:
        traffic.sidecar.istio.io/excludeOutboundPorts: "8091"
        traffic.sidecar.istio.io/excludeInboundPorts: "8080"
  3. Add Envoy Filter to enable the XFCC header forwarding by ASM sidecar.

    Envoy Filter for ATS:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      workloadSelector:
        labels:
          app: ocats-scp
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY
              use_remote_address: true
              xff_num_trusted_hops: 1

    Envoy filter to enable the XFCC header forwarding on the application sidecar:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      workloadSelector:
        labels:
          app.kubernetes.io/instance: ocscp
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY
              use_remote_address: true
              xff_num_trusted_hops: 1

    Envoy filter to enable server header pass through on sidecar:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: envoy.filters.network.http_connection_manager
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              server_header_transformation: PASS_THROUGH

    Note:

    • The sidecar configuration for response timeout or stream timeout should not be applied for any of the SCP microservices.
    • For virtual service CRD, when the destinationhost is any SCP microservice, do not configure the timeout value.

Updating Virtual Services

Disabling retry attempts in virtual services:

For all SCP and ATS pods, set the retry attempts in the virtual services to 0:
    retries:
      attempts: 0
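
The following is a minimal sketch of a virtual service with retries disabled; the name and host are illustrative and must match your deployment:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: scp-worker-vs
      namespace: scpsvc
    spec:
      hosts:
      - ocscp-scp-worker.scpsvc.svc.cluster.local
      http:
      - route:
        - destination:
            host: ocscp-scp-worker.scpsvc.svc.cluster.local
        retries:
          attempts: 0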
3.6.1.7.2 Enabling Persistent Volume

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

To enable persistent storage:
  1. Create a PVC and associate it with the ATS pod.
  2. Set the PVEnabled flag to true in the ocats_ocscp_values_24.1.1.yaml file.
  3. Set PVClaimName to PVC that is created for ATS.
    
    ocats-scp:
      PVEnabled: true
      PVClaimName: "ocats-scp-24.1.1-pvc"
      
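    The following is a minimal PVC sketch for step 1 (the namespace, storage class, and size are illustrative; the claim name matches the PVClaimName above):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ocats-scp-24.1.1-pvc
      namespace: scpsvc
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: standard
      resources:
        requests:
          storage: 4Gi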

Note:

In the event that Persistent Volume (PV) is enabled, ATS starts up with the parameter values specified in the ocats_ocscp_values_24.1.1.yaml file. If the ATS pod is restarted, the PV restores the configuration, ensuring that the new ATS pod has the same configuration settings as the previous pod.

For more details on Persistent Volume Storage, you can refer to Persistent Volume for 5G ATS.

3.6.2 Configuring SCP-ATS and SCP

This section provides information about updating ATS deployment configuration, enabling and disabling stubs, configuring SCP to run test cases, and so on.

Note:

Make sure to follow the steps mentioned in the Preinstall Preparation of SCP for SCP-ATS to deploy SCP. For more information on SCP deployment, refer to the Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.
3.6.2.1 Updating ATS Configuration

The following section covers updating the ATS deployment configuration and ATS input parameter configuration.

3.6.2.1.1 Updating ATS Deployment Configuration

Previously, the ATS configuration parameters for New feature and Regression jobs had to be modified manually in the ATS graphical user interface (GUI). With the introduction of the "ATS_Config" section in the ocats_ocscp_values_24.1.1.yaml file, you can now update the ATS parameter values and deploy the OCATS_OCSCP charts using the modified file. The updated parameters are applied automatically during deployment, and ATS comes up with the configuration specified in the file.

You can modify the ocats_ocscp_values_24.1.1.yaml file to update the ATS parameters for your environment. The following figure shows an example:

Figure 3-14 ATS Deployment Configuration
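
The following is a hedged sketch of such a section; the exact nesting and keys must match your ocats_ocscp_values_24.1.1.yaml, and the parameter names are taken from Table 3-22:

ATS_Config:
  SELECTED_NF: SCP
  NFNAMESPACE: scpsvc
  CLUSTERDOMAIN: cluster.local
  DESTNAMESPACE: scpsvc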

Note:

Initially, at the time of deployment, you can configure or modify the parameters in the ocats_ocscp_values_24.1.1.yaml file, or you can update them post deployment by following the process described in the Configuring New Feature Pipelines section.

3.6.2.1.2 Configuring SCP-ATS for OCI Setup

To leverage the existing Oracle Cloud infrastructure, SCP, which was previously deployed only on CNE, can now be integrated with Oracle Cloud Infrastructure (OCI).

Creating Secret for Alarms (Alerts)

When deploying ATS within the OCI environment, users are required to provide the following inputs in the form of a Kubernetes secret named ocats-oci-secret:
user="<your user ocid>"
tenancy="<your tenancy ocid>"
region="<your oci region>"
fingerprint="<fingerprint of your public key>"
key_file="<full path to your private key>"
metric_namespace="<metric_namespace under which all the metrics and alarms of SUT NF will be captured>"
nf_compartment_id="<Compartment Id of SUT NF>"
Run the following command to create the secret for the alarm or alert test cases:
kubectl create secret generic ocats-oci-secret \
  --from-literal=user='<your_user_ocid>' \
  --from-literal=tenancy='<your_tenancy_ocid>' \
  --from-literal=region='<your_oci_region>' \
  --from-literal=fingerprint='<fingerprint_of_your_oci_api_public_key>' \
  --from-literal=metric_namespace='<metric_namespace_under_which_all_the_metrics_and_alarms_of_SUT_NF_will_be_captured>' \
  --from-literal=nf_compartment_id='<compartment_id_of_SUT_NF>' \
  --from-file=key_file='<full_path_to_your_oci_api_private_key_on_the_host_machine_where_you_are_running_this_command>' \
  -n <namespace>
For example:
kubectl create secret generic ocats-oci-secret \
  --from-literal=user='ocid1.user.oc1..aaaaaaaajjxlzewn3e76aufhdjfhdkfjkl6ea3aaazgzx7cxg74ljs5an3a' \
  --from-literal=tenancy='ocid1.tenancy.oc1..aaaaaaaa5oqwziy4bngiebry6letze4hdjskjksdkdlksurhc6pojwe4wxe34a' \
  --from-literal=region='us-ashburn-1' \
  --from-literal=fingerprint='79:17:f2:89:76:d6:82:b2:13:b9:1d:9f:ff:92:28:3b' \
  --from-literal=metric_namespace='scpdemons' \
  --from-literal=nf_compartment_id='ocid1.compartment.oc1..aaaaaaaa6crdjhjkkdjkldlxbi7erwtmo3wa7jy6q6ldjjskkdnnmitot4smcczgq' \
  --from-file=key_file='/tmp/oci_api_key.pem' \
  -n scpsvc

ATS Deployment Configuration in OCI Setup

Update the values for ATS parameters and then deploy the OCATS_OCSCP charts using this modified ocats_ocscp_values_24.1.1.yaml file. To update the ATS parameters in the ocats_ocscp_values_24.1.1.yaml file, see the Updating ATS Deployment Configuration section.

Modify the scpMetricVersion parameter to "v2" in the ocats_ocscp_values_24.1.1.yaml file. For more information, see Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.

3.6.2.1.3 Updating ATS Input Parameter Configuration

This section provides information about how to modify different services in SCP and configure SCP test cases in ATS.

3.6.2.1.3.1 Enabling or Disabling Stubs
By default, all the stubs are enabled.

Note:

Deploy NRF stubs with port 8080. The NRF details of SCP should specify the ipEndPoints port as 8080 without any ipv4Address field, for example, ipEndPoints: [{"port": "8080"}].

To enable or disable the stubs or pods, set the variable to true or false, respectively. You can install the required stubs or pods during ocats-ocscp deployments.

The following sample parameters show how the stub is enabled by setting different variables to true in the ocats_ocscp_values_24.1.1.yaml file:
global:
  # ********  Sub-Section Start: Custom Extension Global Parameters ********
  #**************************************************************************
  ocatsdnsstubService: true
  ocatsscpService: true
  ocatsscpglbratelimitstubService: true
  ocatsddclientstubService: true
  ausf1Stubs: true
  ausf2Stubs: true
  ausf3Stubs: true
  ausf4Stubs: true
  ausf5Stubs: true
  ausf6Stubs: true
  ausf7Stubs: true
  ausf11Stubs: true
  ausf12Stubs: true
  ausf13Stubs: true
  ausf14Stubs: true
  ausf15Stubs: true
  ausf16Stubs: true
  ausf21Stubs: true
  ausf22Stubs: true
  ausf23Stubs: true
  chf1Stubs: true
  chf2Stubs: true
  nrf1Stubs: true
  nrf2Stubs: true
  nrfr2l1Stubs: true
  nrfr2l2Stubs: true
  nrfr3l1Stubs: true
  nrfr3l2Stubs: true
  pcf1Stubs: true
  pcf1cStubs: true
  pcf2Stubs: true
  pcf3Stubs: true
  pcf4Stubs: true
  pcf5Stubs: true
  pcf6Stubs: true
  pcf7Stubs: true
  pcf8Stubs: true
  pcf10Stubs: true
  pcf11Stubs: true
  pcf12Stubs: true
  pcf13Stubs: true
  pcf14Stubs: true
  pcf15Stubs: true
  pcf16Stubs: true
  pcf21Stubs: true
  pcf22Stubs: true
  pcf23Stubs: true
  pcf24Stubs: true
  pcf25Stubs: true
  pcf26Stubs: true
  pcf27Stubs: true
  pcf28Stubs: true
  scp1Stubs: true
  scp2Stubs: true
  scp3Stubs: true
  scp11Stubs: true
  scp12Stubs: true
  scp51Stubs: true
  scp52Stubs: true
  scp61Stubs: true
  smf1Stubs: true
  smf2Stubs: true
  smf3Stubs: true
  smf4Stubs: true
  smf5Stubs: true
  smf11Stubs: true
  udm1Stubs: true
  udm2Stubs: true
  udm3Stubs: true
  udm4Stubs: true
  udm5Stubs: true
  udm22Stubs: true
  udm23Stubs: true
  udm33Stubs: true
  udm21Stubs: true
  udm31Stubs: true
  udm32Stubs: true
  udr1Stubs: true
  udr2Stubs: true
  scp51svcxxxxStubs: true
  scp52svcxxxxStubs: true
  scp61svcxxxxStubs: true
  sepp1Stubs: true
  sepp2Stubs: true
  sepp3Stubs: true
  nef1Stubs: true
  nef2Stubs: true
  nef3Stubs: true
  nef4Stubs: true
  nef5Stubs: true
  nef6Stubs: true
  nef7Stubs: true
  nef8Stubs: true
  gen1Stubs: true
  gen2Stubs: true

Note:

Replica count of the 'scp51svcxxxx', 'scp52svcxxxx', and 'scp61svcxxxx' stubs must be set to zero.
3.6.2.1.3.1.1 Modifying IpFamilyPolicy or IpFamilies of Stubs
The deployment of all stubs must adhere to the following:
  • If IpFamilyPolicy is set to "SingleStack," then the value of IpFamilies can either be [IPv4] or [IPv6] only.
  • If IpFamilyPolicy is set as "PreferDualStack" or "RequireDualStack", then the values of IpFamilies can either be [IPv4,IPv6] or [IPv6,IPv4] only.

    Note:

    All the pyStubs should be deployed with the same combination of IpFamilyPolicy and IpFamilies.
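As an illustrative sketch of such a combination (the exact location of these parameters in the stub values file may differ from what is shown here), a dual-stack stub service could be configured as follows:

service:
  # PreferDualStack or RequireDualStack permit only [IPv4,IPv6] or [IPv6,IPv4]
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6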
3.6.2.1.3.2 Configuring ATS YAML File for Deployment

Umbrella Chart

Helm charts contain different charts that are referred to as subcharts through their dependencies section in the requirements.yaml file. When a chart is created to group related subcharts or services, such as composing a whole application or deployment, it is known as an umbrella chart.

Helm umbrella charts are created to deploy generic ATS packages in CDCS through a single values.yaml file.

Perform the following procedure to create umbrella charts and add stubs to the umbrella charts:

  1. To add stubs to the umbrella chart, do the following:
    1. The following parameters can be updated in the ocats_ocscp_values_24.1.1.yaml file:

      ocats-scp

      ocats-scp:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-scp
          tag: 24.1.1
          pullPolicy: Always
        replicaCount: 1
        resources:
          limits:
            cpu: 3
            memory: 23Gi
            #ephemeral-storage: 4Gi      
          requests:
            cpu: 3
            memory: 3Gi
      ocats-dnsstub
      ocats-dnsstub:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-dnsstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          customExtension:
            labels: {}
            annotations: {}
          type: ClusterIP
          port: 53
       
        deployment:
          customExtension:
            labels: {}
            annotations: {}
       
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi

      ocats-pystub

      ocats-pystub:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-pystub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        ausf1:
          service:
            name: ausf1svc
            type: ClusterIP
            ports:
              port: 8080
          deploymentName: ausf1
        ausf2:
          service:
            name: ausf2svc
            type: ClusterIP
            ports:
              port: 8080
          deploymentName: ausf2

      ocats-scpglbratelimitstub

      ocats-scpglbratelimitstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-scpglbratelimitstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          type: ClusterIP
        deployment:
          customExtension:
            labels: {}
            annotations: {}
        resources:
         limits:
          cpu: 1
          memory: 1Gi
         requests:
          cpu: 1
          memory: 1Gi
        minreplicas: 1
        maxreplicas: 1
        maxPdbUnavailable: 11:27
      ocats-ddclientstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-ddclientstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          type: LoadBalancer
        resources:
          limits:
            cpu: 1
            memory: 1Gi
            #ephemeral-storage: 55Mi
          requests:
            cpu: 1
            memory: 1Gi
            #ephemeral-storage: 55Mi
        log:
          level: INFO
        extraContainers: USE_GLOBAL_VALUE
        
        kafka_broker: "kafka-broker1-0.kafka-broker1.ddkafkanamespace.svc.cluster.local:9092"
        string_topic_name: "string_topic"
        json_topic_name: "json_topic" 

      ocats-dnsstub

      ocats-dnsstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-dnsstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          customExtension:
            labels: {}
            annotations: {}
          type: ClusterIP
          port: 53
      
        deployment:
          customExtension:
            labels: {}
            annotations: {}
      
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        extraContainers: USE_GLOBAL_VALUE
3.6.2.1.3.3 Dual Stack Support

Using the dual stack mechanism, applications or NFs can establish connections with pods and services in a Kubernetes cluster using IPv4 or IPv6 or both simultaneously.

With the introduction of the Dual Stack IPv6 support feature, there will be two categories of features performing the same tests. However, they are categorized differently based on the type of endpoint the stack supports (single endpoint or multiple endpoints). It is important to consider the IP endpoint support of the stack while running these tests.

For example:

  • SCP_22.3.0_BugFixes_MultipleIpEndpoint_P0.feature: Shall run on stacks supporting multiple endpoints (IPv4 and IPv6)
  • SCP_22.3.0_BugFixes_SingleIpEndpoint_P0.feature: Shall run on stacks supporting single endpoints (IPv4 or IPv6)

Feature files with "SingleIpEndpoint" in the name:

These test cases run on a single IP stack setup or a dual IP stack setup, where ipFamilies should be [IPv4] or [IPv6] in the ATS deployment file.

Feature files with "MultipleIpEndpoint" in the name:

These test cases run only on a dual IP stack setup, where ipFamilies should be [IPv4, IPv6] or [IPv6, IPv4] in the ATS deployment file.

All other feature files can run on both setups.

3.6.2.1.3.4 Enabling HTTPS for ATS, pystubs, and Jenkins

Perform the following procedure to enable HTTPS for ATS. You can skip these steps if your environment does not require HTTPS, by adhering to the Preinstall Preparation of SCP for SCP-ATS.

Ensure that you have the following files generated before you proceed with deployment:

  • Private key
  • Client certificate
  • Certificate Authority Root certificate
  1. To enable HTTPS for ATS, run the following command to create the Kubernetes secret:
    kubectl create secret generic ocats-scp-secret --from-file=rsa_private_key_pkcs1_client.pem --from-file=client.pem --from-file=caroot.pem --from-file=jenkinsserver.jks -n scpsvc

    Note:

    The names of the secret, private key, client certificate, and CA root certificate must be the same as those used in the above command.
    1. Uncomment the private key, client certificate, and CA root certificate in the ocats_scp_values.yaml as follows:

      Existing value:

      #certificates:
      #  cert_secret_name: "ocats-scp-secret"
      #  ca_cert: "caroot.pem"
      #  client_cert: "client.pem"
      #  private_key: "rsa_private_key_pkcs1_client.pem"
      #  jks_file: "jenkinsserver.jks"
      #  jks_password: "123456"  #This is the password given to the jks file while creation.

      Updated values

      certificates:
        cert_secret_name: "ocats-scp-secret"
        ca_cert: "caroot.pem"
        client_cert: "client.pem"
        private_key: "rsa_private_key_pkcs1_client.pem"
        jks_file: "jenkinsserver.jks"
        jks_password: "123456"  #This is the password given to the jks file while creation.
    2. Set the atsGuiTLSEnabled and atsCommunicationTLSEnabled parameters to true in ocats_scp_values.yaml, as shown below:

      Updated Value

      atsGuiTLSEnabled: true
      atsCommunicationTLSEnabled: true
    3. Make changes to the SERVER_EXT. Set the IP.2 value to the local setup IP as shown below:

      Generate Certificate

      cat >>$SERVER_EXT<<EOF
      authorityKeyIdentifier=keyid,issuer
      basicConstraints=CA:FALSE
      keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth, clientAuth
      subjectAltName = @alt_names

      [alt_names]
      IP.1 = 127.0.0.1
      # replace IP.2 with local setup IP when enabling the HTTPS GUI for Jenkins
      IP.2 = 10.75.226.134
      DNS.1 = *.${NAMESPACE}.${COMMON_NAME}
      DNS.2 = localhost
      EOF
    4. Deploy the ATS pod with the above changes.
  2. To enable HTTPS for pystubs, run the following command to create the Kubernetes secret:
    kubectl create secret generic ocats-pystub-secret --from-file=rsa_private_key_pkcs1_server.pem --from-file=server.pem --from-file=caroot.pem -n scpsvc

    Note:

    The names of the secret, private key, server certificate, and CA root certificate must be the same as those used in the above command.
    1. Run the following command to create role and role binding for GET access to the pystub pod:
      kubectl create role scpsvc-pystub-scp-role --verb=get --resource=secrets -n scpsvc
      kubectl create rolebinding --role=scpsvc-pystub-scp-role scpsvc-pystub-scp-rolebinding --serviceaccount=scpsvc:default -n scpsvc
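      Optionally, you can verify that the role binding grants the expected access (an illustrative check; running it requires a user with impersonation rights):

      kubectl auth can-i get secrets --as=system:serviceaccount:scpsvc:default -n scpsvc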
    2. Uncomment the private key, server certificate, and CA root certificate in the ocats_pystub_values.yaml file as follows:

      Existing value:

      
      #certificates:
      #  cert_secret: "ocats-pystub-secret"
      #  ca_cert: "caroot.pem"
      #  server_cert: "server.pem"
      #  private_key: "rsa_private_key_pkcs1_server.pem"

      Updated values in the ocats_ocscp_values_24.1.1.yaml file:

      
      certificates:
        cert_secret: "ocats-pystub-secret"
        ca_cert: "caroot.pem"
        server_cert: "server.pem"
        private_key: "rsa_private_key_pkcs1_server.pem"
    3. Deploy the pystub pods with the above changes.
3.6.2.1.3.5 Configuring SCP to Run OAuth Test Cases in ATS
By default, the ATS suite runs OAuth test cases if the "ALL" option is selected, and SCP must be deployed with OAuth support enabled to support them.
  # Enable nrfproxy-oauth service (only for Rel16)
  nrfProxyOauthService: true

If SCP is deployed without OAuth support, either the single or multiple feature execution option must be selected, excluding all OAuth test cases, or these test cases must be manually removed from the feature directory on the ATS pod, as shown in the sketch below.
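As an illustrative sketch for locating the OAuth feature files before removing them (the pod name is a placeholder; the path is the base path mentioned in the ATS Testcase Parametrization on User Input section):

kubectl exec -it <ats_pod_name> -n scpsvc -- find /var/lib/jenkins/ocscp_tests -iname '*oauth*.feature'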

3.6.2.1.3.6 Configuring ATS for CCA Test Cases
Run the CCA Validation feature tests with ATS as follows:
  • Enable HTTPS to run these feature tests.
  • Generate 9 client certificates (using the gen_certificates.sh script) before running the feature tests, and write these certificates inside the ocats-scp-secret Kubernetes secret. This script is to be executed when SCP-ATS deployments have IP families such as [IPv4] or [IPv4, IPv6].
    The following lists the sample SANs that must be used while creating certificates:
    • client2.pem
      
      [alt_names]
      
      IP = 10.75.213.1
      
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      
      URI.3 = https://10.75.213.1:443
    • client3.pem
      
      [alt_names]
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
    • client4.pem
      [alt_names]
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
    • client5.pem
      
      [alt_names]
      IP = 10.75.213.1
    • client6.pem
      
      [alt_names]
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
    • client7.pem
      
      [alt_names]
      URI.1 = https://10.75.213.1:443
    • client8.pem
      
      [alt_names]
      IP = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.2 = https://10.75.213.1:443
    • client9.pem
      
      [alt_names]
      IP.1 = 10.75.213.2
      IP.2 = 10.75.213.3
      IP.3 = 10.75.213.4
      IP.4 = 10.75.213.5
      IP.5 = 10.75.213.6
      IP.6 = 10.75.213.7
      IP.7 = 10.75.213.8
      IP.8 = 10.75.213.9
      IP.9 = 10.75.213.10
      IP.10 = 10.75.213.11
      IP.11 = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://10.75.213.10:443
    • client10.pem
      
      [alt_names]
      IP = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://10.75.213.10:443
      URI.4 = https://10.75.213.2:443
      URI.5 = https://10.75.213.3:443
      URI.6 = https://10.75.213.4:443
      URI.7 = https://10.75.213.5:443
      URI.8 = https://10.75.213.6:443
      URI.9 = https://10.75.213.7:443
      URI.10 = https://10.75.213.8:443
      URI.11 = https://10.75.213.1:443
  • Generate 9 client certificates before running the feature tests, and write these certificates inside the ocats-scp-secret Kubernetes secret. Run the gen_certificates_ipv6.sh script when SCP-ATS deployment is either [IPv6] or [IPv6,IPv4].
    The following lists the sample SANs that must be used while creating certificates:
    • client2.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      
      URI.3 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
      
    • client3.pem
      
      [alt_names]
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
    • client4.pem
      [alt_names]
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
    • client5.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
    • client6.pem
      
      [alt_names]
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
    • client7.pem
      
      [alt_names]
      URI.1 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
    • client8.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.2 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
    • client9.pem
      
      [alt_names]
      IP.1 = 2002:db8:85a3:0:0:8a2e:370:7334
      IP.2 = 2003:db8:85a3:0:0:8a2e:370:7334
      IP.3 = 2004:db8:85a3:0:0:8a2e:370:7334
      IP.4 = 2005:db8:85a3:0:0:8a2e:370:7334
      IP.5 = 2006:db8:85a3:0:0:8a2e:370:7334
      IP.6 = 2007:db8:85a3:0:0:8a2e:370:7334
      IP.7 = 2008:db8:85a3:0:0:8a2e:370:7334
      IP.8 = 2009:db8:85a3:0:0:8a2e:370:7334
      IP.9 = 2010:db8:85a3:0:0:8a2e:370:7334
      IP.10 = 2011:db8:85a3:0:0:8a2e:370:7334
      IP.11 = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://[2010:db8:85a3:0:0:8a2e:370:7334]:443
    • client10.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://[2010:db8:85a3:0:0:8a2e:370:7334]:443
      URI.4 = https://[2002:db8:85a3:0:0:8a2e:370:7334]:443
      URI.5 = https://[2003:db8:85a3:0:0:8a2e:370:7334]:443
      URI.6 = https://[2004:db8:85a3:0:0:8a2e:370:7334]:443
      URI.7 = https://[2005:db8:85a3:0:0:8a2e:370:7334]:443
      URI.8 = https://[2006:db8:85a3:0:0:8a2e:370:7334]:443
      URI.9 = https://[2007:db8:85a3:0:0:8a2e:370:7334]:443
      URI.10 = https://[2008:db8:85a3:0:0:8a2e:370:7334]:443
      URI.11 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
3.6.2.1.3.7 Updating the Global Egress Rate Limiting Changes in the SCP Deployment File for ATS

Perform the following procedure to enable the Global Egress Rate Limiting feature for ATS environment.

  1. In the SCP custom-values.yaml file, update the following:
    • federation.remoteScpOne.fqdnOrIp: FQDN of the scpglbratelimitstub pod.
    • federation.remoteScpOne.clusterName: Coherence Cluster Name of global rate limit Stub [Example: scpstub-coherence-cluster].
    • federation.remoteScpOne.nfInstanceId: NFInstanceID of global rate limit Stub [Example: 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22].
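A minimal sketch of the corresponding section in the SCP custom-values.yaml file, assuming the nesting follows the parameter paths listed above (the FQDN is a hypothetical example):

federation:
  remoteScpOne:
    fqdnOrIp: scpglbratelimitstub.scpsvc.svc.cluster.local   # FQDN of the scpglbratelimitstub pod
    clusterName: scpstub-coherence-cluster                   # Coherence cluster name of the stub
    nfInstanceId: 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22       # NFInstanceID of the stub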
3.6.2.1.3.8 Configuring SCP to Run DNS SRV Test Cases in ATS
By default, the ATS suite runs alternate resolution (DNS SRV) test cases if the "ALL" option is selected, and SCP must be deployed with alternate resolution service support enabled to support them.
# Enable DNS SRV Alternate Routing Feature
  dnsSRVAlternateRouting: true

If SCP is deployed without alternate resolution support, either the single or multiple feature execution option must be selected, excluding all DNS SRV test cases, or these test cases must be manually removed from the feature directory on the ATS pod.

3.6.2.1.3.9 Configuring SCP to Run Mediation Test Cases in ATS
By default, the ATS suite runs Mediation test cases if the "ALL" option is selected, and SCP must be deployed with Mediation support enabled to support them.
# Enable mediation service
  mediationService: true
If SCP is deployed without Mediation support, either the single or multiple feature execution option must be selected, excluding all Mediation test cases, or these test cases must be manually removed from the feature directory on the ATS pod.
3.6.2.1.3.10 Configuring SCP to Run Model D Test Cases in ATS
By default, the ATS suite runs delegated discovery (Model D) test cases if the "ALL" option is selected, and SCP must be deployed with NRF proxy support enabled to support them.
# Enable Nrf Proxy service (only for Rel16)
  nrfProxyService: true

If SCP is deployed without NRF proxy support, either the single or multiple feature execution option must be selected, excluding all Model D test cases, or these test cases must be manually removed from the feature directory on the ATS pod.

3.6.2.1.3.11 Configuring ATS for Traffic Feed Test Cases

  1. To run Traffic Feed ATS test cases, install the Kafka broker.
  2. In the ATS deployment file, update the following parameters:
    • kafka_broker: <kafka broker host>:<kafka broker port> (Example: "kafka-broker-0.kafka-broker.scpsvc.svc.cluster.local:9092")
    • string_topic_name: <topic name for string serialization> (Example: "string_topic")
    • json_topic_name: <topic name for json serialization> (Example: "json_topic")
  3. In the global.yaml file, update the following parameters:
    • global_traffic_feed_key_Serializer: <key serialization> (Example, string)
    • global_traffic_feed_value_Serializer: <value serialization> (Example, string)
    • global_traffic_feed_topic_name: <topic name for selected serialization> (Example, string_topic)
    • global_traffic_feed_bootstrap_server_host: <kafka broker host> (Example, kafka-broker1-0.kafka-broker1.scpsvc.svc.cluster.local)
    • global_traffic_feed_bootstrap_server_port: <kafka broker port> (Example, 9092)

      For more information on the global.yaml file, see the ATS Testcase Parametrization on User Input section.

    For installation of the Data Director Kafka broker, perform the following:
    • Update the ocnadd-custom-values.yaml as documented in the Data Director Installation and Upgrade Guide.
    • Disable all services apart from ocnaddkafka by marking them as false.
    • Keep ocnaddkafka as true.
    For more information, see the Oracle Communications Network Analytics Data Director Installation and Upgrade Guide.

    Note:

    Kafka broker should be deployed with 3 partitions under string_topic.
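    As an illustrative sketch (the broker address and replication factor are assumptions; adjust them to your Data Director deployment), the string_topic with 3 partitions could be created with the standard Kafka CLI:

    kafka-topics.sh --create --topic string_topic --partitions 3 --replication-factor 1 --bootstrap-server kafka-broker1-0.kafka-broker1.scpsvc.svc.cluster.local:9092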
3.6.2.1.3.12 Configuring SCP to Run LCI Test Cases in ATS
By default, the ATS suite runs load manager test cases if the "ALL" option is selected, and SCP must be deployed with load manager support enabled to support them.
 # Enable load-manager service (only for Rel16)
  loadManagerService: true

If SCP is deployed without load manager support, either the single or multiple feature execution option must be selected, excluding all load manager test cases, or these test cases must be manually removed from the feature directory on the ATS pod.

3.6.3 Deploying ATS and Stub in the Kubernetes Cluster

This section provides information about how to deploy ATS and stubs using Command Line Interface (CLI). To deploy ATS and stubs using CDCS, see Oracle Communications CD Control Server User Guide.

To deploy ATS in the Kubernetes Cluster:

Note:

Deploy ATS, SCP, and stubs in the same namespace.
  1. Ensure the ocats_ocscp_values_24.1.1.yaml file has been updated with correct repository, image tag, and parameters as per requirements.
  2. In the Files or Helm folder, find the ocats-ocscp charts for version 24.1.1, which have to be used for installation.
  3. Run the following command to deploy ATS:
    helm install ocats-ocscp-24.1.1.tgz --name <release_name> --namespace <namespace_name> -f ocats-ocscp-values-24.1.1.yaml 

    Example:

    helm install ocats-ocscp-24.1.1.tgz --name ocats-ocscp --namespace scpsvc -f ocats-ocscp-values-24.1.1.yaml

    Note:

    Update image name, tag, service name, and deployment name in ocats-pystub of the ocats_pystub_values_24.1.1.yaml file before deploying.
  4. Verify whether all stub pods are up and running in the deployed namespace as updated in the ocats_ocscp_values_24.1.1.yaml.
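    For example, a quick status check (illustrative command; replace the namespace with the one used in your deployment):

    kubectl get pods -n scpsvc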

3.6.4 Post Installation and Deployment Steps

This section describes the post-installation steps for SCP.

3.6.4.1 Verifying ATS Deployment
Run the following command to verify the ATS deployment status:
helm status <release_name>

Note:

If ATS is deployed in the service mesh environment, the Ready field for pods displays 2/2.
The following image displays that the deployment is complete because the STATUS field has changed to deployed.

Figure 3-15 Checking ATS Helm Release and ATS related Pod Status



3.6.4.2 Modifying the scpc-alternate-resolution Microservice
Perform the following procedure to modify the scpc-alternate-resolution microservice to point to the DNS Stub for the Alternate Routing based on the Domain Name System (DNS) Service Record (SRV) Records feature.
  1. Capture the cluster IP of DNS Stub service.

    By default, scpc-alternate-resolution points to CoreDNS and displays the following settings in the ocscp_values.yaml deployment file.

    Figure 3-16 CoreDNS

  2. Run the following command to edit the deployment file and add content in scpc-alternate-resolution to query the DNS Stub.

    Uncomment ocscp-alternate-resolution's dnsConfig and dnsPolicy before deployment to allow editing.

    $ kubectl edit deployment ocscp-scpc-alternate-resolution -n scpsvc

    Sample deployment file:

    dnsConfig:
      nameservers:
      - 10.96.77.54
      searches:
      - cicdscpsvc-230228133808.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    dnsPolicy: None

    Add the following content:

    • nameservers: Add the IP address that you recorded after installing the DNS Stub (cluster IP of the DNS Stub).
    • searches: Add all the search domains based on the namespace and cluster name.
      • scpsvc

        This is the namespace.

      • cluster.local

        This is the cluster name.

    • dnsPolicy: Set it to "None" if it is not already set by default.
3.6.4.3 ATS Testcase Parametrization on User Input

Parameterization is an approach to decoupling the data from the feature files in ATS so that ATS test cases can be run against some predefined configuration that contains customer-specific data and service configurations. Parametrization allows you to provide or adjust values for the input and output parameters needed for the test cases to be compatible with the SUT configuration. You can update or adjust the key-value pair values in the global.yaml and feature.yaml files for each of the feature files so that they are compatible with SUT configuration. For more information, see the Parameterization section.

Three new folders, "cust_data", "product_config", and "custom_config", are added to the base path /var/lib/jenkins/ocscp_tests inside the ATS pod. The cust_data folder is a replica of the already existing data folder, the product_config folder contains configuration files that are compatible with the default product configurations, and the custom_config folder is a replica of the product_config folder. You can update the custom folders, such as cust_data and custom_config.

Product Config folder

The product config folder contains two types of YAML files: global.yaml and <feature_name>.yaml (feature file-specific YAML).

Global File

The global.yaml file contains global variable names and their corresponding default values that can be used across all the feature files. The variable names declared in global.yaml should start with the global_Var_ prefix.

For example, <key_name>: &<variable_name> <default_value_of_variable>
#START_GLOBAL
global:
  File_Parameters:
    global_Var_SUT_apiPrefix1: &global_Var_SUT_apiPrefix1 USEast
    global_Var_SUT_ServingLocality1: &global_Var_SUT_ServingLocality1 USEast
    global_Var_localNrfSetId1: &global_Var_localNrfSetId1 setnrfl1.nrfset.5gc.mnc012.mcc345
    global_Var_stubPort: &global_Var_stubPort 8080
#END_GLOBAL
Sample:
#START_GLOBAL
global:
  File_Parameters:
    global_Var_SUT_apiPrefix1: &global_Var_SUT_apiPrefix1 USEast
    # If the following Serving Localities need to be changed, they must also be changed while deploying SCP before running ATS
    global_Var_SUT_ServingLocality1: &global_Var_SUT_ServingLocality1 USEast
    global_Var_SUT_ServingLocality2: &global_Var_SUT_ServingLocality2 Loc7
    global_Var_SUT_ServingLocality3: &global_Var_SUT_ServingLocality3 Loc8
    global_Var_SUT_ServingLocality4: &global_Var_SUT_ServingLocality4 Loc9
 
    # If the following SetIds need to be changed, they must also be changed while deploying SCP before running ATS
    global_Var_localNrfSetId1: &global_Var_localNrfSetId1 setnrfl1.nrfset.5gc.mnc012.mcc345
    global_Var_remoteNrfSetId1: &global_Var_remoteNrfSetId1 setnrfr1.nrfset.5gc.mnc012.mcc345
    global_Var_remoteNrfSetId2: &global_Var_remoteNrfSetId2 setnrfr2.nrfset.5gc.mnc012.mcc345
 
    # If stubPort has to be changed then stub has to be deployed with the same port number before running ATS
    global_Var_stubPort: &global_Var_stubPort 8080
    global_Var_stubErrorCode: &global_Var_stubErrorCode 404
  
    global_Var_udm_nfSetIdList1: &global_Var_udm_nfSetIdList1 set1.udmset.5gc.mnc012.mcc345
    global_Var_udm_nfSetIdList2: &global_Var_udm_nfSetIdList2 set2.udmset.5gc.mnc012.mcc345
    global_Var_udm_nfSetIdList3: &global_Var_udm_nfSetIdList3 set3.udmset.5gc.mnc012.mcc345
     
    global_Var_smf_nfSetIdList1: &global_Var_smf_nfSetIdList1 set1.smfset.5gc.mnc012.mcc345
    global_Var_smf_nfSetIdList2: &global_Var_smf_nfSetIdList2 set2.smfset.5gc.mnc012.mcc345
    global_Var_smf_nfSetIdList3: &global_Var_smf_nfSetIdList3 set3.smfset.5gc.mnc012.mcc345
     
    global_Var_pcf_nfSetIdList1: &global_Var_pcf_nfSetIdList1 set1.pcfset.5gc.mnc012.mcc345
    global_Var_pcf_nfSetIdList2: &global_Var_pcf_nfSetIdList2 set2.pcfset.5gc.mnc012.mcc345
    global_Var_pcf_nfSetIdList3: &global_Var_pcf_nfSetIdList3 set3.pcfset.5gc.mnc012.mcc345
     
    global_Var_udm1_nfInstanceId: &global_Var_udm1_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111111
    global_Var_udm2_nfInstanceId: &global_Var_udm2_nfInstanceId 21111111-aaaa-aaaa-aaaa-111111111111
    global_Var_udm3_nfInstanceId: &global_Var_udm3_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111122
    global_Var_smf1_nfInstanceId: &global_Var_smf1_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111111
    global_Var_smf2_nfInstanceId: &global_Var_smf2_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111122
    global_Var_smf3_nfInstanceId: &global_Var_smf3_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111133
    global_Var_smf4_nfInstanceId: &global_Var_smf4_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111144
    global_Var_smf5_nfInstanceId: &global_Var_smf5_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111155
    global_Var_pcf1_nfInstanceId: &global_Var_pcf1_nfInstanceId 1faf1bbc-6e4a-3994-a507-a14ef8e1bc5a
    global_Var_pcf2_nfInstanceId: &global_Var_pcf2_nfInstanceId 1faf1bbc-6e4a-3994-a507-a14ef8e1bc6b
    global_Var_scp51_nfInstanceId: &global_Var_scp51_nfInstanceId 2fbf1bbc-6e4b-3994-b507-b14ef8e1bc51
    global_Var_scp61_nfInstanceId: &global_Var_scp61_nfInstanceId 2fbf1bbc-6e4b-3994-b507-b14ef8e1bc61
     
    # If svc name has to be changed then stub has to be deployed with the same svc name before running ATS
    global_Var_udm1_svc_name: &global_Var_udm1_svc_name udm1svc
    global_Var_udm2_svc_name: &global_Var_udm2_svc_name udm2svc
    global_Var_udm3_svc_name: &global_Var_udm3_svc_name udm3svc
    global_Var_smf1_svc_name: &global_Var_smf1_svc_name smf1svc
    global_Var_smf2_svc_name: &global_Var_smf2_svc_name smf2svc
    global_Var_smf3_svc_name: &global_Var_smf3_svc_name smf3svc
    global_Var_smf4_svc_name: &global_Var_smf4_svc_name smf4svc
    global_Var_smf5_svc_name: &global_Var_smf5_svc_name smf5svc
    global_Var_pcf1_svc_name: &global_Var_pcf1_svc_name pcf1svc
    global_Var_pcf2_svc_name: &global_Var_pcf2_svc_name pcf2svc
    global_Var_scp51_svc_name: &global_Var_scp51_svc_name scp51svc
    global_Var_scp61_svc_name: &global_Var_scp61_svc_name scp61svc
    global_Var_nrf1_svc_name: &global_Var_nrf1_svc_name nrf1svc
#END_GLOBAL

Feature File

The <feature_name>.yaml file contains feature-specific variables that can be parameterized. For example, if the name of the feature file that needs to be parameterized is "ModelC_NF_Set.feature", then the YAML file corresponding to it will be named ModelC_NF_Set.yaml. This YAML file contains the #START_GLOBAL and #END_GLOBAL tags without any data in between, as the data is copied over from the global.yaml file to this section during test execution. The variable names in <feature_name>.yaml should have the feature_Var_ prefix.

For example: <key_name>: &<variable_name> <default_value_of_variable>

#START_GLOBAL
#END_GLOBAL

ModelC_NF_Set.feature:
  File_Parameters:
    feature_Var_udm1_Priority: &feature_Var_udm1_Priority 0
    feature_Var_smf1_Priority: &feature_Var_smf1_Priority 0
    feature_Var_traffic_rate: &feature_Var_traffic_rate 100

Scenario

The variables are referenced under the scenario tag for use in the feature file; for this, the scenario tag has to be concatenated with Scenario_ and enclosed within double quotes.

The variables defined under the scenario tag should have the sc_ prefix.

For example:- "Scenario_<Scenario_tag>":
         "Scenario_Scenario-1- <Scenario_tag>":
                     Input:
                       File_Parameters:
                           sc_http_requests_total_udm1: 100
Sample:
#START_GLOBAL
#END_GLOBAL
 
ModelC_NFSet.feature:
  File_Parameters:
    # The priorities can be changed without impacting the order of the priorities
    feature_Var_udm1_Priority: &feature_Var_udm1_Priority 0 
    feature_Var_udm2_Priority: &feature_Var_udm2_Priority 1
    feature_Var_udm3_Priority: &feature_Var_udm3_Priority 1
 
    feature_Var_smf1_Priority: &feature_Var_smf1_Priority 0
    feature_Var_smf2_Priority: &feature_Var_smf2_Priority 0
    feature_Var_smf3_Priority: &feature_Var_smf3_Priority 1
    feature_Var_smf4_Priority: &feature_Var_smf4_Priority 3
    feature_Var_smf5_Priority: &feature_Var_smf5_Priority 4
 
    feature_Var_pcf1_Priority: &feature_Var_pcf1_Priority 0
    feature_Var_pcf2_Priority: &feature_Var_pcf2_Priority 0
     
    feature_Var_supiOfPathURI: &feature_Var_supiOfPathURI imsi-100000001
     
    feature_Var_traffic_rate: &feature_Var_traffic_rate 100
     
    # The traffic for the below metrics is to be configured in the same proportion as the traffic sent
    feature_Var_scp_http_rx_req_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_req_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_req_total_cnt_alternate_route: &feature_Var_scp_http_tx_req_total_cnt_alternate_route 200
    feature_Var_http_requests_total_udm1: &feature_Var_http_requests_total_udm1 0
    feature_Var_http_requests_total_udm2: &feature_Var_http_requests_total_udm2 0
    feature_Var_http_requests_total_udm3: &feature_Var_http_requests_total_udm3 0
    feature_Var_http_requests_total_smf1: &feature_Var_http_requests_total_smf1 0
    feature_Var_http_requests_total_smf2: &feature_Var_http_requests_total_smf2 0
    feature_Var_http_requests_total_smf3: &feature_Var_http_requests_total_smf3 0
    feature_Var_http_requests_total_smf4: &feature_Var_http_requests_total_smf4 0
    feature_Var_http_requests_total_pcf1: &feature_Var_http_requests_total_pcf1 0
    feature_Var_scp_http_rx_res_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_res_total_cnt: *feature_Var_traffic_rate
 
  "Scenario_Scenario-1- Forward route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-2- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Discovery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
    
  "Scenario_Scenario-3- Load Balance initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50      # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50      # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-4- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3GPP-Sbi-Disocvery-target-NfSetid Header present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-5- Forward route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-6- Alternate route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_udm3_Priority: 30     # Priority can be changed without impacting the order of the priorities
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-7- Load Balance subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50     # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-8- Alternate route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3gpp-sbi-routing-binding Header with bl=nfset is present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
 
  "Scenario_Scenario-9- To test when Forward route for notification request UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding header bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-10- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding header,alternate route should happen on NfSet":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-11- To test Forward route for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id,3gpp-Sbi-Target-apiRoot":
    Input:
      File_Parameters:
        sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-12- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id and 3gpp-Sbi-Target-apiRoot,alternate route should happen on NfSet":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-13- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id and 3gpp-Sbi-Target-apiRoot,load balancing should happen on NfSet on NFs with similar priority":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100    # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-14- Forward route initial SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-15- Alternate route SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-16- Error response code received when SMF PduSession sm-contexts create message is sent with missing information in 3gpp-sbi-routing-binding header and 3gpp-Sbi-Target-apiRoot header is missing":
      Input:
        File_Parameters:
          sc_scp_http_tx_req_total_cnt: 0     # To be configured in the same proportion as the traffic sent
          sc_ocscp_metric_scp_generated_response_total: 100     # To be configured in the same proportion as the traffic sent
 
  "Scenario_Scenario-17-Error response code received when SMF PduSession sm-contexts create message is sent with missing information in 3gpp-Sbi-Discovery-target-nf-set-id header and 3gpp-Sbi-Target-apiRoot header":
      Input:
        File_Parameters:
          sc_targetNfSetId_Send: 1    
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-18- Alternate route SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_smf1_Priority: 2     # Priority can be changed without impacting the order of the priorities
          sc_scp_http_tx_req_total_cnt_alternate_route: 400     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf4: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-19- No Alternate route for initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Discovery-target-NfSetid Header as reroute Policy is disabled":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_scp_http_rx_res_total_cnt: 0     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-20- Forward route PCF SMPolicyControl Create SMPolicyAssociation with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are not present in response":
      Input:
        File_Parameters:
          sc_pcf1_Service_Priority: 80        # Priority can be changed without impacting the order of the priorities
          sc_http_requests_total_pcf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-21- Alternate route PCF SMPolicyControl Create SMPolicyAssociation messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_pcf1_Priority: 2     # Priority can be changed without impacting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-22- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3GPP-Sbi-Disocvery-target-NfSetid Header present and verify that only 3gpp-sbi-producer-id header is present in response since location header is present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-23- Alternate route PCF SMPolicyControl Create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_pcf1_Priority: 2     # Priority can be changed without impacting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-24- Alternate route PCF SMPolicyControl Create messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_Var_pcf1_Priority: 2     # Priority can be changed without impacting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
Updates in Feature Files
The variables defined in either global.yaml or <feature_name>.yaml are used in the feature files by enclosing the variable name in curly brackets. The steps in a feature file without parameterization are as follows:

Figure 3-17 Feature File without Parameterization



The steps in a feature file when it is parameterized:

Figure 3-18 Feature File when it is Parameterized


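Because the figures above are screenshots, the following hedged sketch illustrates the same idea in text form; the step wording and variable names are assumptions for illustration, not actual feature file content:

# Without parameterization (hard-coded values):
Then verify "http_requests_total" metric on "udm1svc" is incremented by 100

# With parameterization (values resolved from global.yaml or <feature_name>.yaml at run time):
Then verify "http_requests_total" metric on "{global_Var_udm1_svc_name}" is incremented by {sc_http_requests_total_udm1}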

Running Feature Files

The changes are made in the product config folder, so the same configuration type can be chosen from the Jenkins UI and the test case can be run. In the logs, you can see that the values of the variables are replaced with the ones provided in global.yaml or <feature_name>.yaml.

Note:

Only variables that bring in some value addition require parameterization; a change in the value of those variables should not affect the test case logic. A variable that has an effect across all the feature files should be kept in global.yaml, a variable that is specific to a feature file has to be kept in <feature_name>.yaml, and a variable that is related to a specific scenario must be kept at the scenario level.

3.6.5 Appendix

This section provides supplementary information that may be helpful for a more comprehensive understanding of installing and running SCP test cases in ATS.

3.6.5.1 Creating Custom Service Account

By default, ATS creates a service account with the following rules. If you do not want to use the default service account, manually create a service account with the following permissions, and specify the created custom service account name in the ocats_ocscp_values_24.1.1.yaml file.

To run SCP-ATS, use the following rules to create a custom service account:
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "create", "delete", "update" ,"patch"]
- apiGroups: [""]
  resources: ["pods", "services", "pod/logs"]
  verbs: ["watch", "get", "list", "create", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["watch", "get", "list", "create", "delete", "update", "patch"]
3.6.5.2 Adding New Stubs to SCP-ATS
To add new stubs related to pystub, add the deployment and service YAML files inside the template folder of the ocats-pystub charts, and update the deployment name in the stub deployment file as follows:
name: {{ .Values.ausf11.deploymentName }}

app: {{ .Values.ausf11.deploymentName }}
Update the service name, labels and selectors, and ports in the service.yaml file.
name: {{ .Values.ausf11.service.name }}

app: {{ .Values.ausf11.deploymentName }}

port: {{ .Values.ausf11.service.ports.port }}
Update the ocats_ocscp_values_24.1.1.yaml file with new information:

ausf1:
  service:
    name: ausf1svc
    type: ClusterIP
    ports:
      port: 8080

The following images represent the pystub service and deployment files for each stub:

Figure 3-19 Sample Deployment File


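Since the sample files are provided as images, the following is a hedged sketch of what a per-stub service template could look like, based on the values referenced above (the structure is assumed for illustration and is not the shipped chart):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.ausf11.service.name }}
  labels:
    app: {{ .Values.ausf11.deploymentName }}
spec:
  type: {{ .Values.ausf11.service.type }}
  selector:
    app: {{ .Values.ausf11.deploymentName }}
  ports:
    - port: {{ .Values.ausf11.service.ports.port }}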

3.7 Installing ATS for SEPP

This section describes Automated Testing Suite (ATS) installation procedures for Security Edge Protection Proxy (SEPP) in a cloud native environment using Continuous Delivery Control Server (CDCS) or Command Line Interface (CLI) procedures:

3.7.1 Resource Requirements

Total Number of Resources

The resources required to install SEPP-ATS are as follows:

Table 3-23 Total Number of Resources

| Resource | CPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|
| SEPP SUT Total | 15.1 | 24.128 | 0 |
| DB Tier Total | 40 | 40 | 20 |
| ATS Total | 4.5 | 4.5 | 1 |
| Grand Total SEPP ATS | 59.6 | 68.628 | 21 |

Resource Details

The details of resources required to install SEPP-ATS are as follows:

Table 3-24 Resource Details

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| SEPP Pods | | | | | | | | |
| n32-ingress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| n32-egress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| plmn-ingress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| plmn-egress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| pn32f-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| cn32f-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| cn32c-svc | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| pn32c-svc | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| config-mgr-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| nrf-client-nfdiscovery | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| nrf-client-nfmanagement | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| ocpm-config-server | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| appinfo | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| perfinfo | 0.1 | 0.128 | 0 | 1 | 1 | 0.1 | 0.128 | 0 |
| nf-mediation | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| alternate-route | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| coherence-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| SEPP SUT Totals | | | | | | 15.1 | 24.128 | 0 |
| ATS | | | | | | | | |
| ATS Behave | 3 | 3 | 1 (Optional) | 1 | 1 | 3 | 3 | 1 |
| ATS Stub (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Stub-2 (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Stub-3 (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Totals | | | | | | 4.5 | 4.5 | 1 |
| DB Tier Pods (minimum of 4 worker nodes required) | | | | | | | | |
| vrt-launcher-dt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-4.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-db-installer.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| DB Tier Totals | | | | | | 40 | 40 | 20 |
3.7.1.1 SEPP ATS Compatibility Matrix

The following table lists the SEPP ATS versions and their compatibility with SEPP and the ATS framework:

Table 3-25 SEPP ATS Compatibility Matrix

| SEPP ATS Release | SEPP Release | ATS Framework Version |
|---|---|---|
| 24.1.0 | 24.1.0 | 24.1.1 |
| 23.4.0 | 23.4.0 | 23.4.0 |
| 23.3.1 | 23.3.1 | 23.3.1 |
| 23.3.0 | 23.3.0 | 23.3.0 |
| 23.2.1 | 23.2.1 | 23.2.2 |
| 23.2.0 | 23.2.0 | 23.2.0 |

3.7.2 Downloading the ATS Package

Locating and Downloading ATS and Simulator Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Security Edge Protection Proxy <release_number> from Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the CNC SEPP ATS package file.
  10. Extract the zip file to access all the ATS images. The ocsepp-ats directory has the following files:
    The csar directory has the following files:
    
    ocats_ocsepp_csar_24_1_0_0_0.zip
    ocats_ocsepp_csar_24_1_0_0_0.zip.sha256

    Note:

    The above zip file contains all the images and custom values required for 24.1.0 release of OCATS-OCSEPP.
  11. The ocats_ocsepp_csar_24_1_0_0_0.zip file has the following contents:
    ├── Definitions
    │   ├── ocats_ocsepp_cne_compatibility.yaml
    │   └── ocats_ocsepp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   └── ocats-sepp-24.1.0.tgz (Helm Charts)
    │   ├── Licenses
    │   ├── ocats-sepp-24.1.0.tar (BDD client image)
    │   ├── Oracle.cert
    │   ├── seppstub-24.1.0.tar (Stub server image)
    │   └── Tests
    ├── Scripts/
    │   ├── ocats_ocsepp_tests_jenkinsjobs_24.1.0.tgz
    │   ├──      ├──jobs (For Persistent Volume)
    │   └──      ├──ocsepp_tests (For Persistent Volume)
    │   └──  ocats_ocsepp_values_24.1.0.yaml (Custom values file for installation)
    ├── ocats_ocsepp.mf 
    └── TOSCA-Metadata
        └── TOSCA.meta
  12. Copy the zip file to the Kubernetes cluster where you want to deploy ATS.

3.7.3 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Run the following command to extract the zip file content:
    unzip ocats_ocsepp_csar_24_1_0_0_0.zip
    The following docker image tar files are located in the Files folder:
    • ocats-sepp-24.1.0.tar
    • seppstub-24.1.0.tar
  2. Run the following commands in your cluster to load the ATS docker image ocats-sepp-24.1.0.tar and the Stub docker image seppstub-24.1.0.tar, and push them to your registry.
    
    $ docker load -i ocats-sepp-24.1.0.tar
    $ docker load -i seppstub-24.1.0.tar
      
    $ docker tag ocats/ocats-sepp:24.1.0 <local_registry>/ocats/ocats-sepp:24.1.0
     
    $ docker tag ocats/seppstub:24.1.0 <local_registry>/ocats/seppstub:24.1.0
     
    $ docker push <local_registry>/ocats/ocats-sepp:24.1.0
     
    $ docker push <local_registry>/ocats/seppstub:24.1.0
  3. Run the following command to extract the Helm charts, which are located in the Helm directory of the Files folder:
    tar -xvf ocats-sepp-24.1.0.tgz
    The output of this command is:
        
        ocats-sepp/ 
        ocats-sepp/Chart.yaml
        ocats-sepp/charts/
        ocats-sepp/values.yaml
  4. Create a copy of the custom values file located at Scripts/ocats_ocsepp_values_24.1.0.yaml and update the image name, tag, and other parameters as per your requirements.

3.7.4 Creating Secrets and Support for TLSv1.2 and TLSv1.3

3.7.4.1 Configuring Root Certificate

The following are the steps to configure the root certificate (caroot.cer):

Note:

  • Use the same root certificate (caroot.cer) and key that are used for creating the SEPP certificates.
  • If you have already generated the caroot.cer and cakey.pem files while deploying SEPP, skip to the Generating ATS Certificate step.
  • Both ATS and SEPP must have the same root certificate.
  1. If caroot.cer is not available for SEPP, create an ssl.conf file for SEPP using the following format:
    
    # Creation of CSEPP Certs, Fqdn should be changed
     
    #ssl.conf
    [ req ]
    default_bits = 4096
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
    [ req_distinguished_name ]
    countryName = Country Name (2 letter code)
    countryName_default = IN
    stateOrProvinceName = State or Province Name (full name)
    stateOrProvinceName_default = Karnataka
    localityName = Locality Name (eg, city)
    localityName_default = Bangalore
    organizationName = Organization Name (eg, company)
    organizationName_default = Oracle
    commonName = sepp1.inter.oracle.com
    commonName_max = 64
    commonName_default = sepp1.inter.oracle.com
    [ req_ext ]
    subjectAltName = @alt_names
    [alt_names]
    IP = 127.0.0.1
    DNS.1 = sepp1.inter.oracle.com
  2. Set the following environment variables:
    
    export PEM_PHRASE=NextGen1
    export DEPLOYMENT_NAMESPACE=sepp1
  3. Run the following command to create the required files:
    
     
    openssl req -new -keyout cakey.pem -out careq.pem -passout pass:${PEM_PHRASE} -subj "/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com"
     
    openssl x509 -signkey cakey.pem -req -days 3650 -in careq.pem -out caroot.cer -extensions v3_ca -passin pass:${PEM_PHRASE}
     
    openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
     
    openssl req -new -key rsa_private_key -out ocsepp.csr -config ssl.conf -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl x509 -CA caroot.cer -CAkey cakey.pem -CAserial serial.txt -req -in ocsepp.csr -out ocsepp.cer -days 365 -extfile ssl.conf -extensions req_ext -passin pass:${PEM_PHRASE}
     
    openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key.pem
     
    openssl pkcs8 -topk8 -in ec_private_key.pem -inform pem -out ecdsa_private_key.pem -outform pem -nocrypt
     
    openssl req -new -key ecdsa_private_key.pem -x509 -nodes -days 365 -out ecdsa_certificate_pkcs1.crt -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl req -new -key ecdsa_private_key.pem -out ecdsa_certificate.csr -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    echo NextGen1 > trust.txt
    echo NextGen1 > key.txt
    echo 1234 > serial.txt
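
    Note that the signing command above reads the serial number from serial.txt, so if it reports a missing serial file, create serial.txt first and rerun the command. To inspect the generated root certificate, the following standard check can be used:

    $ openssl x509 -in caroot.cer -noout -subject -dates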
    
    
3.7.4.2 Generating ATS Certificate

The following are the steps to configure the ATS certificate:

  1. Create and edit the ssl.conf file as follows:

    Note:

    • While trying to access the GUI with DNS, ensure that commonName_default is the same as the DNS name being used.
    • Ensure that the DNS is in the format <service_name>.<namespace>.<cluster_domain>.
    • The user can add multiple DNS entries, such as DNS.1, DNS.2, and so on.
    • The ATS_HELM_RELEASE_NAME is the release name that will be used to deploy ATS.
    1. In the alt_names section of the ssl.conf file, list the IPs through which the ATS GUI will be opened. Multiple IPs can be added as IP.1, IP.2, and so on.
    2. All stubserver service names ({ATS_HELM_RELEASE_NAME}-stubserver.{ats-namespace}, {ATS_HELM_RELEASE_NAME}-stubserver-2.{ats-namespace}, and {ATS_HELM_RELEASE_NAME}-stubserver-3.{ats-namespace}) must be in the Subject Alternative Name of the certificate.
    3. Update the bddclient service name (${ATS_HELM_RELEASE_NAME}-bddclient.${DEPLOYMENT_NAMESPACE}.svc.cluster.local) in commonName, commonName_default, and the DNS name in the alt_names section.
    Sample code:
    
    #ssl.conf
    [ req ]
    default_bits = 4096
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
     
    [ req_distinguished_name ]
    countryName = Country Name (2 letter code)
    countryName_default = IN
    stateOrProvinceName = State or Province Name (full name)
    stateOrProvinceName_default = Karnataka
    localityName = Locality Name (eg, city)
    localityName_default = Bangalore
    organizationName = Organization Name (eg, company)
    organizationName_default = Oracle
    commonName = ${ATS_HELM_RELEASE_NAME}-bddclient.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
    commonName_max = 64
    commonName_default = ${ATS_HELM_RELEASE_NAME}-bddclient.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
     
    [ req_ext ]
    subjectAltName = @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    IP.2 = 10.75.217.5
    
    #Mandatory values
    DNS.1 = ${ATS_HELM_RELEASE_NAME}-bddclient.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
    DNS.2 = ${ATS_HELM_RELEASE_NAME}-stubserver.${DEPLOYMENT_NAMESPACE}
    DNS.3 = ${ATS_HELM_RELEASE_NAME}-stubserver-2.${DEPLOYMENT_NAMESPACE}
    DNS.4 = ${ATS_HELM_RELEASE_NAME}-stubserver-3.${DEPLOYMENT_NAMESPACE}
    DNS.5 = localhost
  2. Run the following command to create a certificate signing request (CSR):
    $ openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Output:
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ...+++++
    ........+++++
    writing new private key to 'rsa_private_key_pkcs1.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KA]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    Common Name (e.g. server FQDN or YOUR name) [ocats]:
    [cloud-user@star23-bastion-1 ocats]$
  3. Run the following command to verify whether all configurations are done:
    openssl req -text -noout -verify -in ssl_rsa_certificate.csr
  4. Run the following command to sign the certificate signing request (CSR) file with the root certificate:
    $ openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA caroot.cer -CAkey cakey.pem -set_serial 04 > ssl_rsa_certificate.crt
    Output:
    
    Signature ok
    subject=C = IN, ST = KA, L = BLR, O = ORACLE, CN = sepp-ats-rel-bddclient.testns.svc.cluster.local
    Getting CA Private Key
    [cloud-user@star23-bastion-1 ocats]$

    Note:

    When the output prompts for the password, enter the password that was used to create the cakey.pem file.
  5. Verify whether the certificate is properly signed by root certificate:
    $ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt

    Output:

    ssl_rsa_certificate.crt:OK
  6. For Jenkins to support GUI access through HTTPS, a jks file has to be created. Perform the following steps to generate the jks file for the Jenkins server:
    1. Run the following command to generate the .p12 keystore file:
      $ openssl pkcs12 -inkey rsa_private_key_pkcs1.key -in ssl_rsa_certificate.crt -export -out certificate.p12
      Output:
      
      Enter Export Password:
      Verifying - Enter Export Password:

      Note:

      When the output prompts for the password, enter the password and note it down, as it is required for creating the jks file.

  7. Run the following command to convert the .p12 file to a jks format file to be used in the Jenkins server:

    Note:

    • Ensure that the password used while creating the jks file is the same as the password used while creating the .p12 file.
    • Java should be pre-installed to run the keytool utility.
    $ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS
    Output:
    
    Importing keystore ./certificate.p12 to jenkinsserver.jks...
    Enter destination keystore password:
    Re-enter new password:
    Enter source keystore password:
    Entry for alias 1 successfully imported.
    Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
    The generated file, jenkinsserver.jks, must be given to the Jenkins server.
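
    To confirm the keystore contents before handing the file to Jenkins, the standard keytool listing can be used (an optional check; it prompts for the keystore password):

    $ keytool -list -keystore jenkinsserver.jks -storetype JKS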
3.7.4.3 Creating ATS Secret and Configuring Helm chart

The following are the steps to create ATS Secret and configure the Helm chart:

Note:

The user can decide whether to use the generated CA signed certificate or self-signed certificate.
  1. Run the following command to create secret:
    kubectl create secret generic ocats-sepp-secret --from-file=jenkinsserver.jks  --from-file=ssl_rsa_certificate.crt
          --from-file=rsa_private_key_pkcs1.key --from-file=caroot.cer -n <deployment_namespace>
  2. Run the following command to verify the secret:
    $ kubectl describe secret ocats-sepp-secret -n testns
    Output:
    
    Name:         ocats-sepp-secret
    Namespace:    testns
    Labels:       <none>
    Annotations:  <none>
      
    Type:  Opaque
      
    Data
    ====
    caroot.cer:                        1147 bytes
    ssl_rsa_certificate.crt:           1424 bytes
    jenkinsserver.jks:                 2357 bytes
    rsa_private_key_pkcs1.key:         1675 bytes

Changes to the Helm charts:

The following changes must be updated in the ocats_ocsepp_values_<version>.yaml file:

  • The Helm parameter atsGuiTLSEnabled must be set to true for ATS to get the certificates and support HTTPS for the GUI. If the user does not want to open the ATS GUI in HTTPS mode, set the atsGuiTLSEnabled flag to false.
    atsGuiTLSEnabled: false
  • The Helm parameter atsCommunicationTLSEnabled must be set to true for the necessary context variables to be created, which can later be used to communicate with other services over HTTPS.
    • For non-ASM deployments, set the atsCommunicationTLSEnabled flag to true.
    • For ASM deployments, set the atsCommunicationTLSEnabled flag to false.
    atsCommunicationTLSEnabled: true # If set to true, ATS will get necessary variables to communicate with SUT, Stub, or other NFs with TLS enabled. It is not required in an ASM environment.
  • The certificates section under the bddclient and stubserver sections of the ATS custom values file must be updated as follows:
    
    certificates:
       cert_secret_name: "ocats-sepp-secret"
       a_cert: "caroot.cer"
       client_cert: "ssl_rsa_certificate.crt"
       private_key: "rsa_private_key_pkcs1.key"
        
       # This parameter is needed when atsGuiTLSEnabled is set to true. This file is necessary for the ATS GUI to be opened with the secured TLS protocol. The caroot.pem file, used during creation of the jks file, needs to be passed for Jenkins/ATS API communication.
       jks_file: "jenkinsserver.jks"
     
       jks_password: "123456" #This is the password given to the jks file while creation.
    Add the caroot certificate to the browser to access the ATS GUI. The caroot.cer certificate created above must be added to the truststore, which in this case is the browser.
The following are the steps to add the caroot certificate to the browser, either Mozilla Firefox or Chrome, to access the ATS GUI:

Note:

Future versions of these browsers may involve different menu options. For more information on importing root certificate, see the browser documentation to add a self-signed certificate to the browser as a trusted certificate.
  1. In the Chrome browser, navigate to the settings and search for certificates.
  2. Click the security option that appears next to search.
  3. Click the Manage Device Certificate option. The Keychain Access window opens.
  4. Search for the certificates tab, and drag and drop the downloaded caroot certificate into it.
  5. Find the uploaded certificate in the list, usually listed by a temporary name.
  6. Double click the certificate and expand the Trust option.
  7. In When using this certificate option, assign it to "always trust".
  8. Close the window and validate if it asks for the password.
  9. Save and restart the browser.
  1. In the Mozilla Firefox browser, navigate to the settings and search for certificates.
  2. Click the View Certificate that appears next to search. This opens a Certificate Manager window.
  3. Navigate to the Authorities section, click the Import button, and upload the caroot certificate.
  4. Click the Trust options in the pop-up window and click OK.
  5. Save and restart the browser.
3.7.4.4 Creating ATS Health Check Secret

To enable the ATS health check pipeline, the following configurations need to be updated in the ocats_ocsepp_values_<version>.yaml file:

Non OCI Environment

  1. The following parameters need to be updated in base64-encoded format: occnehostip, occnehostusername, occnehostpassword, and envtype.
  2. On installing ATS, the health check secret is created and the health check pipeline is shown in the ATS GUI. If the healthcheck parameter is set to false, the health check pipeline is not visible in the ATS GUI.

bddclient
  atsFeatures:
    healthcheck: true
 
  Webscale: false
  healthchecksecretname: "healthchecksecret"
  occnehostip: "" # $(echo -n '10.75.217.42' | base64), Where occne host ip needs to be provided
  occnehostusername: "" # $(echo -n 'cloud-user' | base64), Where occne host username needs to be provided
  occnehostpassword: "" # $(echo -n '****' | base64), Where password of host
needs to be provided
  envtype: "" # $(echo -n 'OCCNE' | base64), Where occne keyword needs to be provided

OCI Environment

For key based health check support in OCI, refer to ATS Health Check section under the ATS Framework Features.
  1. The following parameters need to be updated in base64-encoded format in the ocats_ocsepp_values_<version>.yaml file:

bddclient
  atsFeatures:
    healthcheck: true
    
  envtype: "" # $(echo -n 'OCCNE' | base64), Where occne keyword needs to be provided
 
  ociHealthCheck:
    passwordAuthenticationEnabled: false
    bastion:
      ip: "" # $(echo -n '10.75.217.42' | base64), Where the bastion host ip needs to be provided
      username: "" # $(echo -n 'cloud-user' | base64), Where the bastion host username needs to be provided
      password: ""
    operatorInstance:
      ip: "" # $(echo -n '10.75.217.42' | base64), Where the operator instance ip needs to be provided
      username: "" # $(echo -n 'cloud-user' | base64), Where the operator instance username needs to be provided
      password: ""

3.7.5 Configuring ATS

This section describes how to configure ATS for SEPP.

3.7.5.1 Enabling Aspen Service Mesh
To enable Aspen Service Mesh (ASM) for ATS, complete the following procedure:

Note:

By default, this feature is disabled.
  1. If ASM is not enabled on the global level for the namespace, run the following command before deploying ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace seppsvc istio-injection=enabled
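
    To confirm that the label is applied, run:

    kubectl get namespace seppsvc --show-labels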
  2. Add the following annotations in the BDD client section in the ocats-sepp-custom-values.yaml file to support ASM:

    To enable or disable the ASM, update the asmEnabled flag to true or false.

    asmEnabled: false
    asm:
     configMgrPort: 9092
     stubServerPort: 8080
     plmnIgwPort: 80
     n32IgwPort: 80
    
  3. Add the following value in the stub-server section in the ocats-sepp-custom-values.yaml file to support ASM:

    To enable or disable the ASM, update the asmEnabled flag to true or false.

    asmEnabled: false
  4. For asm deployment, set the atsCommunicationTLSEnabled flag to false as given below:
    atsCommunicationTLSEnabled: false
     #If set to true, ATS will get necessary variables to communicate with SUT, Stub or other NFs with TLS enabled. It is not required in ASM environment.
  5. (Optional) The user can configure the resources assigned to the Aspen Mesh (istio-proxy) sidecars in the ocats_ocsepp_values_24.1.0.yaml file as follows:
    
    asm:
      istioResources:
        limits:
          cpu: 2
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi

    Note:

    It is recommended to use the default values of the resources.

Note:

  • If SEPP is deployed with ASM enabled and the user disables ASM on the global level, the setup must be redeployed to work without ASM.
  • At present, SEPP does not support mediation in an ASM environment. If the mediation service is deployed, run the following commands to delete it before triggering the ATS suite:
    kubectl delete svc <release-name>-nf-mediation -n <namespace>
    kubectl delete deploy <release-name>-nf-mediation -n <namespace>
3.7.5.2 Enabling Static Port
To enable static port:

Note:

ATS supports static port. By default, this feature is disabled.
  • In the ocats_ocsepp_values_<version>.yaml file, under the service section, set the staticNodePortEnabled parameter value to 'true' and the staticNodePort parameter value to a valid nodePort.
    service:
        customExtension:
          labels: {}
          annotations: {}
        type: LoadBalancer
        ports:
          https:
            port: "8443"
            staticNodePortEnabled: false
            staticNodePort: ""
          http:
            port: "8080"
            staticNodePortEnabled: false
            staticNodePort: ""
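
For example, with the feature enabled, the https entry would look as follows (30443 is a placeholder; choose a free port from the cluster's nodePort range, typically 30000-32767):

          https:
            port: "8443"
            staticNodePortEnabled: true
            staticNodePort: "30443"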
3.7.5.3 Enabling Roaming Hub Mode

SEPP ATS supports two types of deployment modes:

  • SEPP
  • Roaming Hub or Hosted SEPP Mode

The RHenabled flag has been introduced to select the deployment mode.

#Flag to enable Roaming Hub mode

RHenabled: True
3.7.5.4 Enabling Hosted SEPP Mode

The Hosted SEPP Mode can be enabled as follows:

#Flag to enable Hosted SEPP mode

RHenabled: True

Customizing Error Code Variable in Hosted SEPP Mode

For handling failure scenarios in Hosted SEPP mode, the following customized error code variable has been introduced:

#Customized error code variable for Hosted SEPP

HSErrCode: "400"
3.7.5.5 Configuring Egress Rate Limiting Feature

If the Egress Rate Limiting feature is enabled in the SEPP deployment, the EgressRateLimiterFlag parameter, introduced in the ocats-sepp-custom-values.yaml file, must be set to true to run the Egress Rate Limiter test cases in ATS. If the Egress Rate Limiting feature is disabled in the SEPP deployment, ensure that the EgressRateLimiterFlag parameter is set to false.

EgressRateLimiterFlag: true/false

For more information about the feature, see the "Rate Limiting for Egress Roaming Signaling per PLMN" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide and the "Configuration Parameters" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Installation, Upgrade, and Fault Recovery Guide.

3.7.5.6 Configuring Ingress Rate Limiting Feature

If the Ingress Rate Limiting feature is enabled in the SEPP deployment, the IngressRateLimiterFlag parameter, introduced in the ocats_ocsepp_values_<version>.yaml file, must be set to true to run the Ingress Rate Limiter test cases in ATS. If the Ingress Rate Limiting feature is disabled in the SEPP deployment, ensure that the IngressRateLimiterFlag parameter is set to false.

Flag to enable or disable the Ingress Rate Limiter:

  IngressRateLimiterFlag: true/false

For more information about the feature, see the "Rate Limiting for Ingress Roaming Signaling per Remote SEPP Set" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide and the "Configuration Parameters" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Installation, Upgrade, and Fault Recovery Guide.

3.7.5.7 Configuring the Cache Refresh Timeout Value

SEPP supports Refresh ahead cache. To run the ATS cases, the cacheRefreshTimeout value must be set to 1000 (ms) so that the cache is updated automatically after every test case run when the ATS suite is triggered.

To set the cache refresh timeout value:

In the ocsepp-custom-values.yaml file, under cn32f-svc and pn32f-svc microservices, set the cacheRefreshTimeout parameter value to 1000 (ms).

configs:
 cacheRefreshTimeout: 1000 #(ms)
 cacheRefreshInitTimeout: 50000 #(ms)

Note:

If ATS is not configured, the cacheRefreshTimeout value must be 30000 (ms).

3.7.5.8 Configuring the Topology Cache Refresh Timeout Value

SEPP supports Refresh ahead cache. To run the ATS cases, the topologyhidingCacheRefreshTimeout value must be set to 1000 (ms) so that the cache is updated automatically after every test case run when the ATS suite is triggered.

In the ocsepp-custom-values.yaml file, under the cn32f-svc and pn32f-svc microservices, set the topologyhidingCacheRefreshTimeout parameter value to 1000 (ms).


topologyHiding:
  timerConfig:
     topologyhidingCacheRefreshTimeout: 1000 #(ms)
     topologyhidingCacheRefreshInitTimeout: 50000
     topologyhidingHistoryUpdateTimeout: 30000
     topologyhidingHistoryRefreshSeconds: 60
  config:
     topologyHidingStateCheck: true

3.7.5.9 Configuring the Security Counter Measure Cache Refresh Timeout

SEPP supports Refresh ahead cache. To run the ATS cases, you must set the value of the securityCacheRefreshTimeout parameter to 1000 (ms) so that the cache is automatically updated after every test case run when the ATS suite is triggered.

In the ocsepp-custom-values.yaml file, under the cn32f-svc and pn32f-svc microservices, set the securityCacheRefreshTimeout parameter value to 1000 (ms).


configs:  
 securityCacheRefreshTimeout: 1000 #(ms)  
 securityCacheRefreshInitTimeout: 50000 #(ms)

3.7.5.10 Configuring the n32cHandshakePlmnIdListValidationEnabled

SEPP supports validation of the PLMN ID List in the n32c capability exchange message, which can be turned off. To run the ATS cases, the n32cHandshakePlmnIdListValidationEnabled value must be set to true so that the test cases validating the PLMN ID List in the n32c capability exchange message run successfully when the ATS suite is triggered.

To set the n32cHandshakePlmnIdListValidationEnabled value:

In the ocsepp_custom_values_<version>.yaml file, under localProfile, set the n32cHandshakePlmnIdListValidationEnabled parameter value to true.


localProfile:
    name: "SEPP-3"
    plmnIdList: [{"mcc":"111","mnc":"100"},{"mcc":"111","mnc":"101"},{"mcc":"111","mnc":"102"},{"mcc":"111","mnc":"103"},{"mcc":"111","mnc":"104"},{"mcc":"111","mnc":"105"},{"mcc":"111","mnc":"106"},{"mcc":"111","mnc":"107"},{"mcc":"111","mnc":"108"},{"mcc":"111","mnc":"109"},{"mcc":"111","mnc":"110"},{"mcc":"111","mnc":"111"},{"mcc":"111","mnc":"112"},{"mcc":"111","mnc":"113"},{"mcc":"111","mnc":"114"},{"mcc":"111","mnc":"115"},{"mcc":"111","mnc":"116"},{"mcc":"111","mnc":"117"},{"mcc":"111","mnc":"118"},{"mcc":"111","mnc":"119"},{"mcc":"111","mnc":"120"},{"mcc":"111","mnc":"121"},{"mcc":"111","mnc":"122"},{"mcc":"111","mnc":"123"},{"mcc":"111","mnc":"124"},{"mcc":"111","mnc":"125"},{"mcc":"111","mnc":"126"},{"mcc":"111","mnc":"127"},{"mcc":"111","mnc":"128"},{"mcc":"111","mnc":"129"}]
    # Do not change this value, this will be always true
    sbiTargetApiRootSupported: true
    # Enable PLMN ID List Validation in Exchange Capability Request, Default set to true
    n32cHandshakePlmnIdListValidationEnabled: true
    # PLMN ID List Validation Type in Exchange Capability Request, can be SUBSET or STRICT only
    n32cHandshakePlmnIdListValidationType: "SUBSET"
Update the following PLMN ID list in the localProfile section for SEPP mode to run the ATS test cases:
[{"mcc":"111","mnc":"100"},{"mcc":"111","mnc":"101"},{"mcc":"111","mnc":"102"},{"mcc":"111","mnc":"103"},{"mcc":"111","mnc":"104"},{"mcc":"111","mnc":"105"},{"mcc":"111","mnc":"106"},{"mcc":"111","mnc":"107"},{"mcc":"111","mnc":"108"},{"mcc":"111","mnc":"109"},{"mcc":"111","mnc":"110"},{"mcc":"111","mnc":"111"},{"mcc":"111","mnc":"112"},{"mcc":"111","mnc":"113"},{"mcc":"111","mnc":"114"},{"mcc":"111","mnc":"115"},{"mcc":"111","mnc":"116"},{"mcc":"111","mnc":"117"},{"mcc":"111","mnc":"118"},{"mcc":"111","mnc":"119"},{"mcc":"111","mnc":"120"},{"mcc":"111","mnc":"121"},{"mcc":"111","mnc":"122"},{"mcc":"111","mnc":"123"},{"mcc":"111","mnc":"124"},{"mcc":"111","mnc":"125"},{"mcc":"111","mnc":"126"},{"mcc":"111","mnc":"127"},{"mcc":"111","mnc":"128"},{"mcc":"111","mnc":"129"}]
Update the following PLMN ID list in the localProfile section for Roaming Hub mode to run the ATS test cases:
[{"mcc":"111","mnc":"200"},{"mcc":"111","mnc":"201"},{"mcc":"111","mnc":"202"},{"mcc":"111","mnc":"203"},{"mcc":"111","mnc":"204"},{"mcc":"111","mnc":"205"},{"mcc":"111","mnc":"206"},{"mcc":"111","mnc":"207"},{"mcc":"111","mnc":"208"},{"mcc":"111","mnc":"209"},{"mcc":"111","mnc":"210"},{"mcc":"111","mnc":"211"},{"mcc":"111","mnc":"212"},{"mcc":"111","mnc":"213"},{"mcc":"111","mnc":"214"},{"mcc":"111","mnc":"215"},{"mcc":"111","mnc":"216"},{"mcc":"111","mnc":"217"},{"mcc":"111","mnc":"218"},{"mcc":"111","mnc":"219"},{"mcc":"111","mnc":"220"},{"mcc":"111","mnc":"221"},{"mcc":"111","mnc":"222"},{"mcc":"111","mnc":"223"},{"mcc":"111","mnc":"224"},{"mcc":"111","mnc":"225"},{"mcc":"111","mnc":"226"},{"mcc":"111","mnc":"227"},{"mcc":"111","mnc":"228"},{"mcc":"111","mnc":"229"},{"mcc":"111","mnc":"230"},{"mcc":"111","mnc":"231"},{"mcc":"111","mnc":"232"},{"mcc":"111","mnc":"233"},{"mcc":"111","mnc":"234"},{"mcc":"111","mnc":"235"},{"mcc":"111","mnc":"236"},{"mcc":"111","mnc":"237"},{"mcc":"111","mnc":"238"},{"mcc":"111","mnc":"239"},{"mcc":"111","mnc":"240"},{"mcc":"111","mnc":"241"},{"mcc":"111","mnc":"242"},{"mcc":"111","mnc":"243"},{"mcc":"111","mnc":"244"},{"mcc":"111","mnc":"245"},{"mcc":"111","mnc":"246"},{"mcc":"111","mnc":"247"},{"mcc":"111","mnc":"248"},{"mcc":"111","mnc":"249"},{"mcc":"111","mnc":"250"},{"mcc":"111","mnc":"251"},{"mcc":"111","mnc":"252"},{"mcc":"111","mnc":"253"},{"mcc":"111","mnc":"254"},{"mcc":"111","mnc":"255"},{"mcc":"111","mnc":"256"},{"mcc":"111","mnc":"257"},{"mcc":"111","mnc":"258"},{"mcc":"111","mnc":"259"},{"mcc":"111","mnc":"260"},{"mcc":"111","mnc":"261"},{"mcc":"111","mnc":"262"},{"mcc":"111","mnc":"263"},{"mcc":"111","mnc":"264"},{"mcc":"111","mnc":"265"},{"mcc":"111","mnc":"266"},{"mcc":"111","mnc":"267"},{"mcc":"111","mnc":"268"},{"mcc":"111","mnc":"269"},{"mcc":"111","mnc":"270"},{"mcc":"111","mnc":"271"},{"mcc":"111","mnc":"272"},{"mcc":"111","mnc":"273"},{"mcc":"111","mnc":"274"},{"mcc":"111","mnc":"275"},{"mcc":"111","mnc":"276"},{"mcc":"111","mnc":"277"},{"mcc":"111","mnc":"278"},{"mcc":"111","mnc":"279"},{"mcc":"111","mnc":"280"},{"mcc":"111","mnc":"281"},{"mcc":"111","mnc":"282"},{"mcc":"111","mnc":"283"},{"mcc":"111","mnc":"284"},{"mcc":"111","mnc":"285"},{"mcc":"111","mnc":"286"},{"mcc":"111","mnc":"287"},{"mcc":"111","mnc":"288"},{"mcc":"111","mnc":"289"},{"mcc":"111","mnc":"290"},{"mcc":"111","mnc":"291"},{"mcc":"111","mnc":"292"},{"mcc":"111","mnc":"293"},{"mcc":"111","mnc":"294"},{"mcc":"111","mnc":"295"},{"mcc":"111","mnc":"296"},{"mcc":"111","mnc":"297"},{"mcc":"111","mnc":"298"},{"mcc":"111","mnc":"299"},{"mcc":"111","mnc":"300"},{"mcc":"111","mnc":"301"},{"mcc":"111","mnc":"302"},{"mcc":"111","mnc":"303"},{"mcc":"111","mnc":"304"},{"mcc":"111","mnc":"305"},{"mcc":"111","mnc":"306"},{"mcc":"111","mnc":"307"},{"mcc":"111","mnc":"308"},{"mcc":"111","mnc":"309"},{"mcc":"111","mnc":"310"},{"mcc":"111","mnc":"311"},{"mcc":"111","mnc":"312"},{"mcc":"111","mnc":"313"},{"mcc":"111","mnc":"314"},{"mcc":"111","mnc":"315"},{"mcc":"111","mnc":"316"},{"mcc":"111","mnc":"317"},{"mcc":"111","mnc":"318"},{"mcc":"111","mnc":"319"},{"mcc":"111","mnc":"320"},{"mcc":"111","mnc":"321"},{"mcc":"111","mnc":"322"},{"mcc":"111","mnc":"323"},{"mcc":"111","mnc":"324"},{"mcc":"111","mnc":"325"},{"mcc":"111","mnc":"326"},{"mcc":"111","mnc":"327"},{"mcc":"111","mnc":"328"},{"mcc":"111","mnc":"329"},{"mcc":"111","mnc":"330"},{"mcc":"111","mnc":"331"},{"mcc":"111","mnc":"332"},{"mcc":"111","mnc":"333"},{"mcc":"111","mnc":"334"},{"mcc":"111","mnc":"335"},{"mcc":"111","mnc"
:"336"},{"mcc":"111","mnc":"337"},{"mcc":"111","mnc":"338"},{"mcc":"111","mnc":"339"},{"mcc":"111","mnc":"340"},{"mcc":"111","mnc":"341"},{"mcc":"111","mnc":"342"},{"mcc":"111","mnc":"343"},{"mcc":"111","mnc":"344"},{"mcc":"111","mnc":"345"},{"mcc":"111","mnc":"346"},{"mcc":"111","mnc":"347"},{"mcc":"111","mnc":"348"},{"mcc":"111","mnc":"349"},{"mcc":"111","mnc":"350"},{"mcc":"111","mnc":"351"},{"mcc":"111","mnc":"352"},{"mcc":"111","mnc":"353"},{"mcc":"111","mnc":"354"},{"mcc":"111","mnc":"355"},{"mcc":"111","mnc":"356"},{"mcc":"111","mnc":"357"},{"mcc":"111","mnc":"358"},{"mcc":"111","mnc":"359"},{"mcc":"111","mnc":"360"},{"mcc":"111","mnc":"361"},{"mcc":"111","mnc":"362"},{"mcc":"111","mnc":"363"},{"mcc":"111","mnc":"364"},{"mcc":"111","mnc":"365"},{"mcc":"111","mnc":"366"},{"mcc":"111","mnc":"367"},{"mcc":"111","mnc":"368"},{"mcc":"111","mnc":"369"},{"mcc":"111","mnc":"370"},{"mcc":"111","mnc":"371"},{"mcc":"111","mnc":"372"},{"mcc":"111","mnc":"373"},{"mcc":"111","mnc":"374"},{"mcc":"111","mnc":"375"},{"mcc":"111","mnc":"376"},{"mcc":"111","mnc":"377"},{"mcc":"111","mnc":"378"},{"mcc":"111","mnc":"379"},{"mcc":"111","mnc":"380"},{"mcc":"111","mnc":"381"},{"mcc":"111","mnc":"382"},{"mcc":"111","mnc":"383"},{"mcc":"111","mnc":"384"},{"mcc":"111","mnc":"385"},{"mcc":"111","mnc":"386"},{"mcc":"111","mnc":"387"},{"mcc":"111","mnc":"388"},{"mcc":"111","mnc":"389"},{"mcc":"111","mnc":"390"},{"mcc":"111","mnc":"391"},{"mcc":"111","mnc":"392"},{"mcc":"111","mnc":"393"},{"mcc":"111","mnc":"394"},{"mcc":"111","mnc":"395"},{"mcc":"111","mnc":"396"},{"mcc":"111","mnc":"397"},{"mcc":"111","mnc":"398"},{"mcc":"111","mnc":"399"},{"mcc":"111","mnc":"400"},{"mcc":"111","mnc":"401"},{"mcc":"111","mnc":"402"},{"mcc":"111","mnc":"403"},{"mcc":"111","mnc":"404"},{"mcc":"111","mnc":"405"},{"mcc":"111","mnc":"406"},{"mcc":"111","mnc":"407"},{"mcc":"111","mnc":"408"},{"mcc":"111","mnc":"409"},{"mcc":"111","mnc":"410"},{"mcc":"111","mnc":"411"},{"mcc":"111","mnc":"412"},{"mcc":"111","mnc":"413"},{"mcc":"111","mnc":"414"},{"mcc":"111","mnc":"415"},{"mcc":"111","mnc":"416"},{"mcc":"111","mnc":"417"},{"mcc":"111","mnc":"418"},{"mcc":"111","mnc":"419"},{"mcc":"111","mnc":"420"},{"mcc":"111","mnc":"421"},{"mcc":"111","mnc":"422"},{"mcc":"111","mnc":"423"},{"mcc":"111","mnc":"424"},{"mcc":"111","mnc":"425"},{"mcc":"111","mnc":"426"},{"mcc":"111","mnc":"427"},{"mcc":"111","mnc":"428"},{"mcc":"111","mnc":"429"},{"mcc":"111","mnc":"430"},{"mcc":"111","mnc":"431"},{"mcc":"111","mnc":"432"},{"mcc":"111","mnc":"433"},{"mcc":"111","mnc":"434"},{"mcc":"111","mnc":"435"},{"mcc":"111","mnc":"436"},{"mcc":"111","mnc":"437"},{"mcc":"111","mnc":"438"},{"mcc":"111","mnc":"439"},{"mcc":"111","mnc":"440"},{"mcc":"111","mnc":"441"},{"mcc":"111","mnc":"442"},{"mcc":"111","mnc":"443"},{"mcc":"111","mnc":"444"},{"mcc":"111","mnc":"445"},{"mcc":"111","mnc":"446"},{"mcc":"111","mnc":"447"},{"mcc":"111","mnc":"448"},{"mcc":"111","mnc":"449"},{"mcc":"111","mnc":"450"},{"mcc":"111","mnc":"451"},{"mcc":"111","mnc":"452"},{"mcc":"111","mnc":"453"},{"mcc":"111","mnc":"454"},{"mcc":"111","mnc":"455"},{"mcc":"111","mnc":"456"},{"mcc":"111","mnc":"457"},{"mcc":"111","mnc":"458"},{"mcc":"111","mnc":"459"},{"mcc":"111","mnc":"460"},{"mcc":"111","mnc":"461"},{"mcc":"111","mnc":"462"},{"mcc":"111","mnc":"463"},{"mcc":"111","mnc":"464"},{"mcc":"111","mnc":"465"},{"mcc":"111","mnc":"466"},{"mcc":"111","mnc":"467"},{"mcc":"111","mnc":"468"},{"mcc":"111","mnc":"469"},{"mcc":"111","mnc":"470"},{"mcc":"111","mnc":"471"},{"mcc":"111","mnc":"472"},{"mcc":"111
","mnc":"473"},{"mcc":"111","mnc":"474"},{"mcc":"111","mnc":"475"},{"mcc":"111","mnc":"476"},{"mcc":"111","mnc":"477"},{"mcc":"111","mnc":"478"},{"mcc":"111","mnc":"479"},{"mcc":"111","mnc":"480"},{"mcc":"111","mnc":"481"},{"mcc":"111","mnc":"482"},{"mcc":"111","mnc":"483"},{"mcc":"111","mnc":"484"},{"mcc":"111","mnc":"485"},{"mcc":"111","mnc":"486"},{"mcc":"111","mnc":"487"},{"mcc":"111","mnc":"488"},{"mcc":"111","mnc":"489"},{"mcc":"111","mnc":"490"},{"mcc":"111","mnc":"491"},{"mcc":"111","mnc":"492"},{"mcc":"111","mnc":"493"},{"mcc":"111","mnc":"494"},{"mcc":"111","mnc":"495"},{"mcc":"111","mnc":"496"},{"mcc":"111","mnc":"497"},{"mcc":"111","mnc":"498"},{"mcc":"111","mnc":"499"},{"mcc":"111","mnc":"500"},{"mcc":"111","mnc":"501"},{"mcc":"111","mnc":"502"},{"mcc":"111","mnc":"503"},{"mcc":"111","mnc":"504"},{"mcc":"111","mnc":"505"},{"mcc":"111","mnc":"506"},{"mcc":"111","mnc":"507"},{"mcc":"111","mnc":"508"},{"mcc":"111","mnc":"509"},{"mcc":"111","mnc":"510"},{"mcc":"111","mnc":"511"},{"mcc":"111","mnc":"512"},{"mcc":"111","mnc":"513"},{"mcc":"111","mnc":"514"},{"mcc":"111","mnc":"515"},{"mcc":"111","mnc":"516"},{"mcc":"111","mnc":"517"},{"mcc":"111","mnc":"518"},{"mcc":"111","mnc":"519"},{"mcc":"111","mnc":"520"},{"mcc":"111","mnc":"521"},{"mcc":"111","mnc":"522"},{"mcc":"111","mnc":"523"},{"mcc":"111","mnc":"524"},{"mcc":"111","mnc":"525"},{"mcc":"111","mnc":"526"},{"mcc":"111","mnc":"527"},{"mcc":"111","mnc":"528"},{"mcc":"111","mnc":"529"},{"mcc":"111","mnc":"530"},{"mcc":"111","mnc":"531"},{"mcc":"111","mnc":"532"},{"mcc":"111","mnc":"533"},{"mcc":"111","mnc":"534"},{"mcc":"111","mnc":"535"},{"mcc":"111","mnc":"536"},{"mcc":"111","mnc":"537"},{"mcc":"111","mnc":"538"},{"mcc":"111","mnc":"539"},{"mcc":"111","mnc":"540"},{"mcc":"111","mnc":"541"},{"mcc":"111","mnc":"542"},{"mcc":"111","mnc":"543"},{"mcc":"111","mnc":"544"},{"mcc":"111","mnc":"545"},{"mcc":"111","mnc":"546"},{"mcc":"111","mnc":"547"},{"mcc":"111","mnc":"548"},{"mcc":"111","mnc":"549"},{"mcc":"111","mnc":"550"},{"mcc":"111","mnc":"551"},{"mcc":"111","mnc":"552"},{"mcc":"111","mnc":"553"},{"mcc":"111","mnc":"554"},{"mcc":"111","mnc":"555"},{"mcc":"111","mnc":"556"},{"mcc":"111","mnc":"557"},{"mcc":"111","mnc":"558"},{"mcc":"111","mnc":"559"},{"mcc":"111","mnc":"560"},{"mcc":"111","mnc":"561"},{"mcc":"111","mnc":"562"},{"mcc":"111","mnc":"563"},{"mcc":"111","mnc":"564"},{"mcc":"111","mnc":"565"},{"mcc":"111","mnc":"566"},{"mcc":"111","mnc":"567"},{"mcc":"111","mnc":"568"},{"mcc":"111","mnc":"569"},{"mcc":"111","mnc":"570"},{"mcc":"111","mnc":"571"},{"mcc":"111","mnc":"572"},{"mcc":"111","mnc":"573"},{"mcc":"111","mnc":"574"},{"mcc":"111","mnc":"575"},{"mcc":"111","mnc":"576"},{"mcc":"111","mnc":"577"},{"mcc":"111","mnc":"578"},{"mcc":"111","mnc":"579"},{"mcc":"111","mnc":"580"},{"mcc":"111","mnc":"581"},{"mcc":"111","mnc":"582"},{"mcc":"111","mnc":"583"},{"mcc":"111","mnc":"584"},{"mcc":"111","mnc":"585"},{"mcc":"111","mnc":"586"},{"mcc":"111","mnc":"587"},{"mcc":"111","mnc":"588"},{"mcc":"111","mnc":"589"},{"mcc":"111","mnc":"590"},{"mcc":"111","mnc":"591"},{"mcc":"111","mnc":"592"},{"mcc":"111","mnc":"593"},{"mcc":"111","mnc":"594"},{"mcc":"111","mnc":"595"},{"mcc":"111","mnc":"596"},{"mcc":"111","mnc":"597"},{"mcc":"111","mnc":"598"},{"mcc":"111","mnc":"599"}]
3.7.5.11 Request Timeout

SEPP supports the requestTimeout parameter to avoid request timeouts of services. The requestTimeout is increased in the given services to avoid any request timeout that may happen due to the processing of the response by the stub server.

  • The user has to set requestTimeout to 2000 in the cn32f-svc, pn32f-svc, plmn-egress-gateway, and n32-egress-gateway sections of the ocsepp_custom_values_<version>.yaml file.
  • The user has to set requestTimeout to 5000 in the n32-ingress-gateway section of the ocsepp_custom_values_<version>.yaml file.
requestTimeout: 2000 #(ms)

Note:

  • If the ATS cases are failing due to request timeout, increase the request timeout of the particular service that is causing the issue. For example, set the request timeout to "5000".
  • If ATS is not configured, requestTimeout must be set to the following values:

Table 3-26 requestTimeout

Service Request Timeout (ms)
cn32f-svc 2000
pn32f-svc 1100
plmn-egress-gateway 1000
n32-egress-gateway 1500
n32-ingress-gateway 700

Updating n32-ingress-gateway config map

In the ocsepp_custom_values.yaml file, under the n32-ingress-gateway section, set the requestTimeout parameter value to 5000 (ms) to update the config map.


   routesConfig:
    - id: n32f
      #Below field is used to provide an option to enable/disable route level xfccHeaderValidation, it will override global configuration for xfccHeaderValidation.enabled
      metadata:
        requestTimeout: 5000
3.7.5.12 Idle Timeout

The idle timeout must be increased in the given services to avoid any pending request transaction from timing out due to its idle state in the request queue.

In the ocsepp_custom_values_<version>.yaml file, under the n32-ingress-gateway section, set the jettyIdleTimeout parameter value to 5000 (ms).


   #Jetty Idle Timeout Settings (ms)
  jettyIdleTimeout: 5000 #(ms)

Note:

  • If the ATS is not configured, the jettyIdleTimeout value should be 3000 (ms).
  • If the ATS cases are failing due to request timeout, increase the idle timeout of the particular service that is failing. For example, set the idle timeout to "5000".
3.7.5.13 EvictSanHeaderCacheDelay

The SEPP supports evictSanHeaderCacheDelay to update the cache on the PN32F service with the updated values of Remote SEPP and Remote SEPP set associated with the SAN header.

The user has to set the evictSanHeaderCacheDelay parameter to 100 in the pn32f-svc microservice section of the custom values file.

configs:
    evictSanHeaderCacheDelay: 100 #(ms)

Note:

If ATS is not configured, evictSanHeaderCacheDelay must be set to 50000 in the pn32f-svc section of the ocsepp_custom_values_<version>.yaml file.
3.7.5.14 Configuring the Reroute Attempts for Egress Gateway

The attempts parameter is used in alternate routing when the request has to be routed to multiple SEPPs.

Under the config manager section in the ocsepp_custom_values_<version>.yaml file, the default value for attempts is set to 0. Set attempts to 3 before running the ATS test execution.

alternateRoute:
    sbiReRoute:
      sbiRoutingErrorActionSets: [{"id": "action_0", "action": "reroute", "attempts": 3, "blacklist": {"enabled": false, "duration": 60000}}]
3.7.5.15 NRF Discovery Cache Refresh Timeout

The nrfDiscoveryCacheRefreshTimeout parameter defines the timer value after which the UDR Discovery request is triggered if the coherence map containing the UDR Profile information is empty. After this timer expires, the NRF Client initiates a UDR Discovery request towards the NRF.

In the ocsepp_custom_values_<version>.yaml file, under the pn32f-svc section, set the nrfDiscoveryCacheRefreshTimeout parameter value to 1000 (ms):
configs:
  nrfDiscoveryCacheRefreshTimeout: 1000 #(ms)

Note:

To run the Cat-3 test cases, the coherence service must be preinstalled during SEPP installation.
3.7.5.16 Configuring Alternate Routing based on the DNS SRV Record for Home Network Functions

The Alternate Routing based on the DNS SRV Record for Home Network Functions feature can be enabled as follows:

Update the following parameters in ocsepp_custom_values.yaml file to configure the DNS SRV feature:

At Global Section:
global:
  alternateRouteServiceEnable : true
At Alternate route section:
alternate-route: 
  global:
     alternateRouteServiceEnable: true
Update the Target host:
alternate-route:   #Static virtual FQDN Config
   staticVirtualFqdns:       
     - name: https://sepp.ats.test.routing.com
       alternateFqdns:
       #Below Target FQDNs needs to be updated for ATS DNS SRV scenarios
       - target: <ats-release-name>-stubserver
         port: 8443
         priority: 100
         weight: 90
       - target: <ats-release-name>-stubserver-2
         port: 8443
         priority: 100
         weight: 10
       - target: <ats-release-name>-stubserver-3
         port: 8443
         priority: 1
         weight: 90
Here, update the target parameter with the user-specific ATS release name.
Example: If the user defines the <ats-release-name> as sepp-ats-rel and <stub-server-name> as stubserver, then update the target as sepp-ats-rel-stubserver.
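
To confirm the stub service names that the target entries must match, the deployed services can be listed (a quick check):

kubectl get svc -n <namespace> | grep stubserver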

Note:

For DNS SRV and SOR features, the plmn-egress-gateway must be deployed in REST mode. In the ocsepp_custom_values.yaml file, update:
plmn-egress-gateway:
            routeConfigMode: REST

Note:

In the DNS SRV feature, rerouting to the next peer is decided on the basis of error codes or exceptions. If an exception arises that is not present in the DNSSRVCriteriaSet.json file (which may cause scenario failures), the user must add the exception to the exceptions lists at the following paths:
In the file /var/lib/jenkins/ocsepp_tests/data/DNSSRVCriteriaSet.json, update:

"exceptions": [ 
      "java.net.SocketException",
      "java.nio.channels.ClosedChannelException"
    ]
In the file /var/lib/jenkins/ocsepp_tests/cust_data/DNSSRVCriteriaSet.json, update:

"exceptions": [ 
      "java.net.SocketException",
      "java.nio.channels.ClosedChannelException"
    ]
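
These files reside inside the ATS pod. One way to edit them is through kubectl exec (a sketch, assuming an editor is available in the container; the pod name is a placeholder):

kubectl exec -it <ats-pod-name> -n <namespace> -- vi /var/lib/jenkins/ocsepp_tests/data/DNSSRVCriteriaSet.json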
3.7.5.17 Configuring Load Sharing among Multiple Remote SEPP Nodes

The load sharing among multiple Remote SEPP nodes feature can be configured as follows:
  1. Enable the following parameters in ocsepp_custom_values.yaml file to configure the load sharing feature:
    1. Enable the following flag at alternate_route section:
      alternate-route: 
        global:
           alternateRouteServiceEnable: true
    2. Replace the release name of the Target host:

       alternate-route:   #Static virtual FQDN Config
         staticVirtualFqdns:       
           - name: http://sepp.ats.loadsharing.com
             alternateFqdns:
             #Below Target FQDNs need to be updated for ATS load sharing scenarios
             - target: <ats-release-name>-stubserver
               port: 8443
               priority: 100
               weight: 90
             - target: <ats-release-name>-stubserver-2
               port: 8443
               priority: 100
               weight: 10
             - target: <ats-release-name>-stubserver-3
               port: 8443
               priority: 1
               weight: 90

    Note:

    If the user has chosen the ATS release name as sepp-ats-rel, modify target: <release-name>-stubserver with the actual release name of the ATS deployment.

    Example: sepp-ats-rel-stubserver

  2. Enable the following parameter under the NrfClient global parameters:
      alternateRouteServiceEnable: true

3.7.5.18 Enabling Persistent Volume Storage

ATS supports Persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

Note:

By default, this feature is disabled.

To enable persistent storage:

  1. Create a PVC and associate it with the ATS pod.
  2. Set the PVEnabled flag to true.
  3. Set PVClaimName to the PVC that is created for ATS.
    
    bddclient:
      PVEnabled: true
      PVClaimName: "sepp-pvc"
      

For more details on Persistent Volume Storage, refer to Persistent Volume for 5G ATS.
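
For example, a PVC can be created before installation as follows (a minimal sketch; the storage size, access mode, and namespace are assumptions to adapt to your cluster, and a storageClassName may be required):

cat <<'EOF' | kubectl apply -n <namespace> -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sepp-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc sepp-pvc -n <namespace>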

3.7.6 Deploying ATS in Kubernetes Cluster

Note:

It is important to ensure that all three components, that is, ATS, Stub, and SEPP, are in the same namespace.

If the namespace does not exist, run the following command to create a namespace:

kubectl create namespace ocsepp

Deploying ATS:

helm install <release_name> ocats-sepp-24.1.0.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:

helm install ocats ocats-sepp-24.1.0.tgz --namespace ocsepp -f ocats-sepp-custom-values.yaml

3.7.7 Verifying ATS Deployment

Run the following command to verify ATS deployment:
helm status <release_name> 
Checking Pod Deployment:

kubectl get pod -n seppsvc
Checking Service Deployment:

kubectl get service -n seppsvc

Figure 3-20 Checking Pod and Service Deployment

Figure 3-21 Checking Service Deployment

3.7.8 Post Installation Steps (If persistent volume is used)

If persistent volume is used, follow the post-installation steps as mentioned in the Persistent Volume for 5G ATS section.

3.8 Installing ATS for UDR

Before installing ATS for UDR, it is important to ensure the resource requirements are met. For information about the resource requirements, see Resource Requirements.

The UDR ATS installation procedure includes:

  1. Locating and downloading the ATS images
  2. Preparing to Deploy ATS and Stub Pods
  3. Loading UDR ATS Images
  4. Loading UDR Stub Images in the SLF-NewFeatures or SLF-Regression Pipeline
  5. Loading UDR Stub Images in the UDR-NewFeatures or UDR-Regression Pipeline
  6. Configuring ATS
  7. Deploying ATS in Kubernetes Cluster
  8. Deploying Stub Pods in Kubernetes Cluster
  9. Verifying UDR-ATS Deployment
  10. Post-Installation Steps

3.8.1 Resource Requirements

ATS resource requirements for UDR, SLF, and EIR are as follows:

Table 3-27 UDR - Total Number of Resources

Resource Name vCPUs Memory (GB) Storage (GB)
UDR SUT Totals 39 39 34
DB Tier Totals 20 41 37
ATS Totals 15 11 0
Grand Total UDR-ATS 74 91 71

Table 3-28 SLF - Total Number of Resources

Resource Name vCPUs (excluding sidecar) vCPUs (including sidecar) Memory (GB) (excluding sidecar) Memory (including sidecar) Storage (GB)
SLF-SUT Totals 83 113 83 114 64
DB Tier Totals 20 41 28 53 37
ATS Totals 8 11 6 8 0
Grand Total SLF-ATS 111 165 117 175 98

Table 3-29 EIR - Total Number of Resources

Resource Name vCPUs Memory (GB) Storage (GB)
EIR SUT Totals 25 25 19
DB Tier Totals 20 41 37
ATS Totals 7 6 0
Grand Total EIR-ATS 52 72 59

Table 3-30 UDR Operational Resource Requirements

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ocudr-ingressgateway-sig 2 2 1 GB 2 1 2 2 1
ocudr-ingressgateway-prov 2 2 1 GB 2 1 2 2 1
ocudr-nudr-drservice 1 1 1 GB 2 1 1 1 1
ocudr-nudr-dr-provservice 1 1 1 GB 2 1 1 1 1
ocudr-nudr-notify-service 1 1 1 GB 2 1 1 1 1
ocudr-oc-diam-gateway 1 1 1 GB 2 1 1 1 1
ocudr-nudr-diameterproxy 1 1 1 GB 2 1 1 1 1
ocudr-egressgateway 2 2 1 GB 1 1 2 2 1
ocudr-nudr-config 1 1 1 GB 1 1 1 1 1
ocudr-nudr-config-server 1 1 1 GB 1 1 1 1 1
ocudr-nudr-nrf-client-nfmanagement 1 1 1 GB 1 1 1 1 1
ocudr-appinfo 0.5 1 1 GB 1 1 0.5 1 1
ocudr-alternate-route 1 1 1 GB 1 1 1 1 1
ocudr-nudr-migration 1 1 1 GB 1 1 1 1 1
ocudr-nudr-bulk-import 1 1 1 GB 1 1 1 1 1
ocudr-nudr-ondemand-migration 1 1 1 GB 1 1 1 1 1
ocudr-performance 1 1 1 GB 1 1 1 1 1
ocudr-performance-prov 1 1 1 GB 1 1 1 1 1
UDR Additional Resources (Hooks/Init/Update Containers) - - - - - 6 6 6
provgw-prov-ingressgateway 2 2 0 2 1 2 2 1
provgw-provgw-service 1 1 0 2 1 1 1 1
provgw-provgw-config 1 1 0 1 1 1 1 1
provgw-provgw-config-server 1 1 0 1 1 1 1 1
provgw-prov-egressgateway 2 2 0 2 1 2 2 1
Provgw Additional Resources (Hooks/Init/Update Containers) - - - - - 5 5 5
UDR-SUT Total (UDR and ProvGw) - - - - - 39 39 34

Table 3-31 SLF Operational Resource Requirements

Microservice vCPUs Required per Pod (excluding sidecar) vCPUs required for sidecar container per pod Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total (excludes sidecar) vCPUs Required - Total (includes sidecar) Memory Required - Total (GB) (excludes sidecar) Memory Required - Total (GB) (includes sidecar) Storage PVC Required - Total (GB)
ocudr-ingressgateway-sig 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1
ocudr-ingressgateway-prov 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1
ocudr-nudr-drservice 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1
ocudr-nudr-dr-provservice 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1
ocudr-egressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1
ocudr-nudr-config 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-nudr-config-server 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-nudr-nrf-client-nfmanagement 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-appinfo 0.5 0.5 1 0.5 1 GB 1 1 0.5 1 1 1.5 1
ocudr-alternate-route 2 0.5 2 0.5 1 GB 1 1 2 2.5 2 2.5 1
ocudr-nudr-bulk-import 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-performance 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-performance-prov 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
ocudr-nudr-export-tool 2 1 2 1 3 GB 1 1 2 3 2 3 3
SLF Additional Resources (Hooks/Init/Update Containers) - - - - - - - 6 7 6 7 3
provgw-prov-ingressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1
provgw-prov-egressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1
provgw-provgw-service 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1
provgw-provgw-config 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
provgw-provgw-config-server 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
provgw-auditor-service 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1
Provgw Additional Resources (Hooks/Init/Update Containers) - - - - - - - 5 6 5 6 3
SLF-SUT Total Required (SLFs and ProvGw) - - - - - - - 83 113 83 114 64

Table 3-32 EIR Operational Resource Requirements

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ocudr-ingressgateway-sig 2 2 1 GB 2 1 2 2 1
ocudr-ingressgateway-prov 2 2 1 GB 2 1 2 2 1
ocudr-nudr-drservice 1 1 1 GB 2 1 1 1 1
ocudr-nudr-dr-provservice 1 1 1 GB 2 1 1 1 1
ocudr-egressgateway 2 2 1 GB 2 1 2 2 1
ocudr-nudr-config 1 1 1 GB 1 1 1 1 1
ocudr-nudr-config-server 1 1 1 GB 1 1 1 1 1
ocudr-nudr-nrf-client-nfmanagement 1 1 1 GB 1 1 1 1 1
ocudr-appinfo 0.5 1 1 GB 1 1 0.5 1 1
ocudr-alternate-route 2 2 1 GB 1 1 2 2 1
ocudr-nudr-bulk-import 1 1 1 GB 1 1 1 1 1
ocudr-performance 1 1 1 GB 1 1 1 1 1
ocudr-performance-prov 1 1 1 GB 1 1 1 1 1
ocudr-nudr-export-tool 2 2 3 GB 1 1 2 2 3
EIR Additional Resources (Hooks/Init/Update Containers) - - - - - 6 6 3
EIR-SUT Total Required - - - - - 25 25 19

Table 3-33 ATS Resource Requirements for UDR mode

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ATS Behave 4 4 0 - 1 4 4 0
NRF-Stub (python stub) 2 1 0 - 2 (Two separate deployments) 4 2 0
Notify-Stub (python stub) 2 1 0 - 2 (Two separate deployments) 4 2 0
diam-stub 1 1 0 - 2 (Two separate deployments) 2 2 0
fourg-stub 1 1 0 - 1 1 1 0
ATS Totals - - - - - 15 11 0

Table 3-34 ATS Resource Requirements for SLF Mode

Microservice vCPUs Required per Pod (excluding sidecar) vCPUs required for sidecar container per pod Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total (excludes sidecar) vCPUs Required - Total (includes sidecar) Memory Required - Total (GB) (excludes sidecar) Memory Required - Total (GB) (includes sidecar) Storage PVC Required - Total (GB)
ocats-udr 4 2 4 1 0 1 1 4 6 4 5 0
nrf-ocstub-python 2 .5 1 .5 0 1 2 (Two separate deployments) 4 5 2 3 0
ATS Total required - - - - - - - 8 11 6 8 0

Table 3-35 ATS Resource Requirements for EIR mode

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ocats-udr 3 4 0 1 1 3 4 0
nrf-ocstub-python (For deployments) 2 1 0 1 2 (Two separate deployments) 4 2 0
ATS Total required - - - - - 7 6 0

Note:

ATS total resource calculation includes 2 nrf-stub deployments and 1 ATS deployment.

Table 3-36 UDR DB Tier Pods

Micro service name (containers) Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node (mysqlndbcluster) 2 2 CPUs 5 GB 3*2 → 6 GB 4 CPUs, 10 GB Memory
Data node (mysqlndbcluster; db-backup-executor-svc: 100m CPU, 128 MB) 2 2 CPUs 4 GB 8*2 → 16 GB 5 CPUs, 9 GB Memory
APP SQL node (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB) 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
SQL node (Used for Replication) (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB) 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
DB Monitor Service (db-monitor-svc) 1 200m CPU 500 MB - 0.5 CPUs, 0.5 GB Memory
DB Backup Manager Service (replication-svc) 1 200m CPU 500 MB - 0.5 CPUs, 0.5 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB PVC, 20 CPUs, 41 GB Memory

Table 3-37 SLF DB Tier Pods

Micro service name (containers) Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node (mysqlndbcluster; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 8 CPUs, 12 GB Memory
Data node (mysqlndbcluster; istio-proxy: 2 CPUs, 1 GB; db-backup-executor-svc: 100m CPU, 128 MB) 2 2 CPUs 4 GB 8*2 → 16 GB 9 CPUs, 11 GB Memory
APP SQL node (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
SQL node (Used for Replication) (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
DB Monitor Service (db-monitor-svc; istio-proxy: 2 CPUs, 1 GB) 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
DB Backup Manager Service (replication-svc; istio-proxy: 2 CPUs, 1 GB) 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB PVC, 41 CPUs, 53 GB Memory

Table 3-38 EIR DB Tier Pods

Micro service name (containers) Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node (mysqlndbcluster) 2 2 CPUs 5 GB 3*2 → 6 GB 4 CPUs, 10 GB Memory
Data node (mysqlndbcluster; db-backup-executor-svc: 100m CPU, 128 MB) 2 2 CPUs 4 GB 8*2 → 16 GB 5 CPUs, 9 GB Memory
APP SQL node (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB) 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
SQL node (Used for Replication) (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB) 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
DB Monitor Service (db-monitor-svc) 1 200m CPU 500 MB - 1 CPU, 1 GB Memory
DB Backup Manager Service (replication-svc) 1 200m CPU 500 MB - 1 CPU, 1 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB PVC, 20 CPUs, 41 GB Memory

Table 3-39 SLF Performance based Resource Details

Resource Name vCPUs (excluding sidecar) vCPUs (including sidecar) Memory (GB) (excluding sidecar) Memory (GB) (including sidecar) Storage (GB)
SLF-SUT Totals 58 105 46 93 22
DB Tier Totals 40 80 40 80 20
ATS Totals 6 4 1 3 0
Grand Total SLF-ATS 104 193 94 183 42

SLF SUT details:

Microservice CPUs Required per Pod (excluding sidecar) CPUs required for sidecar container Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) CPUs Required - Total (excluding sidecar) CPUs Required - Total (including sidecar) Memory Required - Total (GB) (excluding sidecar) Memory Required - Total (GB) (including sidecar) Storage PVC Required - Total (GB)
ocudr-ingressgateway-sig 6 4 4 4 1 GB 2 2 12 20 8 16 2
ocudr-ingressgateway-prov 6 4 4 4 1 GB 2 2 12 20 8 16 2
ocudr-nudr-drservice 5 4 4 4 1 GB 2 2 10 18 8 16 2
ocudr-nudr-dr-provservice 5 4 4 4 1 GB 2 2 10 18 8 16 2
ocudr-egressgateway 1 2 1 2 1 GB 1 1 1 3 1 3 1
ocudr-nudr-config 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-nudr-config-server 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-nudr-nrf-client-nfmanagement 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-appinfo 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-alternate-route 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-performance 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-performance-prov 1 1 1 1 1 GB 1 1 1 2 1 2 1
Additional SLF Resources (Hooks/Init/Update Containers) - - - - - - - 6 12 6 12 6
SLF SUT Totals - - - - - - - 58 105 46 93 22

DB Tier details:

Micro service name (containers) Number of Pods CPU Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node (mysqlndbcluster; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 8 CPUs, 12 GB Memory
Data node (mysqlndbcluster; istio-proxy: 2 CPUs, 1 GB; db-backup-executor-svc: 100m CPU, 128 MB) 2 2 CPUs 4 GB 8*2 → 16 GB 9 CPUs, 11 GB Memory
APP SQL node (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
SQL node (Used for Replication) (mysqlndbcluster; init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB) 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
DB Monitor Service (db-monitor-svc; istio-proxy: 2 CPUs, 500 MB) 1 200m CPU 1 GB - 3 CPUs, 2 GB Memory
DB Backup Manager Service (replication-svc; istio-proxy: 2 CPUs, 1 GB) 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB PVC, 41 CPUs, 53 GB Memory

Table 3-40 ATS and stub requirement for SLF Performance

Microservice vCPUs Required per Pod (excluding sidecar) vCPUs required for sidecar container per pod Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total (excludes sidecar) vCPUs Required - Total (includes sidecar) Memory Required - Total (GB) (excludes sidecar) Memory Required - Total (GB) (includes sidecar) Storage PVC Required - Total (GB)
ocats-udr 6 2 8 2 0 1 1 6 8 8 10 0
ATS Total required - - - - - - - 6 8 8 10 0

3.8.2 Locating and Downloading ATS Image

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Unified Data Repository <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file.
  10. Unzip the file to access all the ATS images. The zip file contains the following files:
    • ocats_udr_pkg_24_1_1_0_0.tgz
    • ocats_udr_pkg_24_1_1_0_0-README.txt
    • ocats_udr_pkg_24_1_1_0_0.tgz.sha256
    • ocats-udr-custom-configtemplates-24.1.1.0.0.zip
    • ocats-udr-custom-configtemplates-24.1.1.0.0-README.txt
    • ocats_udr_pkg_24_1_1_0_0.mcafee.sw_packaging.log

    The ocats_udr_pkg_24_1_1_0_0.tgz file contains:

    ocats_udr_pkg_24_1_1_0_0.tgz
    ├── ocats-udr-ats-pkg-24.1.1.tgz
    │   ├── ocats-udr-24.1.1.tgz (Helm Charts)
    │   ├── ocats-udr-images-24.1.1.tar (Docker Images)
    │   ├── OCATS-UDR-Readme.txt
    │   ├── ocats-udr-24.1.1.tgz.sha256
    │   ├── ocats-udr-images-24.1.1.tar.sha256
    │   ├── ocats-udr-data-24.1.1.tgz
    │   └── ocats-udr-data-24.1.1.tgz.sha256
    └── ocats-udr-stub-pkg-24.1.1.tgz
        ├── ocstub-py-24.1.1.tgz (Helm Charts)
        ├── fourg-stub-24.1.1.tgz (Helm Charts)
        ├── diam-stub-24.1.1.tgz (Helm Charts)
        ├── ocstub-py-24.1.1.tar (Docker Images)
        ├── ocats-udr-fourg-stub-images-24.1.1.tar (Docker Images)
        ├── ocats-udr-diam-stub-images-24.1.1.tar (Docker Images)
        ├── OCATS-UDR-STUB-Readme.txt
        ├── ocstub-py-24.1.1.tgz.sha256
        ├── fourg-stub-24.1.1.tgz.sha256
        ├── diam-stub-24.1.1.tgz.sha256
        ├── ocstub-py-24.1.1.tar.sha256
        ├── ocats-udr-fourg-stub-images-24.1.1.tar.sha256
        └── ocats-udr-diam-stub-images-24.1.1.tar.sha256
    The ocats-udr-custom-configtemplates-24.1.1.0.0.zip file contains:
    ocats-udr-custom-configtemplates-24.1.1.0.0.zip
    ├── ocats-udr-custom-values-24.1.1.yaml (Custom values for UDR-ATS)
    ├── ocstub-py-custom-values-24.1.1.yaml (Custom values for COMMON-PYTHON-STUB)
    ├── fourg-stub-custom-values-24.1.1.yaml (Custom values for FOURG-STUB)
    └── diam-stub-custom-values-24.1.1.yaml (Custom values for DIAMETER-STUB)

    Copy the ocats_udr_pkg_24_1_1_0_0.tgz file and the ocats-udr-custom-configtemplates-24.1.1.0.0.zip file to the OCCNE, OCI, or Kubernetes cluster where you want to deploy ATS.
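
    A minimal sketch of this copy step, assuming SSH access to a bastion host of the target cluster; the host name, user, and destination path are placeholders:

    scp ocats_udr_pkg_24_1_1_0_0.tgz <user>@<bastion-host>:/home/<user>/
    scp ocats-udr-custom-configtemplates-24.1.1.0.0.zip <user>@<bastion-host>:/home/<user>/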

3.8.3 Preparing to Deploy ATS and Stub Pods

To deploy ATS and stub pods in the Kubernetes cluster:

Note:

Deploy ATS and Subscriber Location Function (SLF) in the same namespace.

3.8.4 Loading UDR ATS Image

To load UDR ATS image:

  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_24_1_1_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-24.1.1.tgz
    • ocats-udr-stub-pkg-24.1.1.tgz
  2. Run the following command to extract the helm charts and docker images of ATS.

    tar -xvf ocats-udr-ats-pkg-24.1.1.tgz

    The output of this command is:
    • ocats-udr-24.1.1.tgz
    • ocats-udr-images-24.1.1.tar
    • ocats-udr-data-24.1.1.tgz
  3. Run the following command to extract the helm charts and docker images of the stubs.

    tar -xvf ocats-udr-stub-pkg-24.1.1.tgz

    The output of this command is:
    • ocstub-py-24.1.1.tgz (Helm charts)
    • fourg-stub-24.1.1.tgz (Helm charts)
    • diam-stub-24.1.1.tgz (Helm charts)
    • ocats-udr-notify-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-fourg-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-diam-stub-images-24.1.1.tar (Docker image)
    • ocstub-py-24.1.1.tar (Docker image)

    The ocats-udr-images-24.1.1.tar file contains the docker image of ATS for UDR 24.1.1.

  4. Run the following command in your cluster to load the ATS image.
    docker load --input ocats-udr-images-24.1.1.tar 

    Note:

    For CNE 1.8.0 and above, you can use Podman instead of Docker. See the following sample Podman command:

    sudo podman load --input ocats-udr-images-24.1.1.tar

  5. Run the following commands to tag and push the ATS image to your registry.
    docker tag ocats-udr-images:24.1.1 <registry>/ocats-udr-images:24.1.1
    docker push <registry>/ocats-udr-images:24.1.1 

    In the previous command, <registry> is the name of the docker image repository.

    Note:

    For CNE 1.8.0 and above, you can use Podman instead of Docker to tag and push the image. Run the following sample Podman commands to tag and push the image:
    sudo podman tag ocats-udr-images:24.1.1 <customer repo>/<image name>:<image version>
    sudo podman push <customer repo>/<image name>:<image version>
  6. Run the following command to untar the helm charts (ocats-udr-24.1.1.tgz) and update the registry name, image name, and tag (if required) in the ocats-udr-custom-values-24.1.1.yaml file.
    tar -xvf ocats-udr-24.1.1.tgz
    Output:
    ocats-udr/Chart.yaml
    ocats-udr/values.yaml
    ocats-udr/templates/NOTES.txt
    ocats-udr/templates/_helpers.tpl
    ocats-udr/templates/deployment.yaml
    ocats-udr/templates/ingress.yaml
    ocats-udr/templates/service.yaml
    ocats-udr/templates/serviceaccount.yaml
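
    If you prefer to script the registry update called out in this step, a one-line substitution works. The registry value below (registry.example.com:5000) is illustrative only, not a documented default:

    sed -i 's|^\( *repository:\).*|\1 registry.example.com:5000/ocats-udr-images|' ocats-udr-custom-values-24.1.1.yaml
    grep 'repository:' ocats-udr-custom-values-24.1.1.yaml   # verify the change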

3.8.5 Loading UDR Stub Images in the SLF-NewFeatures, EIR-NewFeatures, SLF-Regression, or EIR-Regression Pipeline

To load the UDR stub images:

Note:

For the SLF-NewFeatures, EIR-NewFeatures, or SLF-Regression pipeline, deploy only the ocstub-py Helm chart. The ocstub-py-24.1.1.tar file contains the common-python-stub image (ocstub-py:24.1.1).
  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_24_1_1_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-24.1.1.tgz
    • ocats-udr-stub-pkg-24.1.1.tgz
  2. Run the following command to extract the stub tar file content.

    tar -xvf ocats-udr-stub-pkg-24.1.1.tgz

    The output of this command is:
    • ocstub-py-24.1.1.tgz (Helm charts)
    • fourg-stub-24.1.1.tgz (Helm charts)
    • diam-stub-24.1.1.tgz (Helm charts)
    • ocats-udr-notify-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-fourg-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-diam-stub-images-24.1.1.tar (Docker image)
    • ocstub-py-24.1.1.tar (Docker image)
  3. To load the UDR stub images (ocstub-py-24.1.1.tar) in the SLF-NewFeatures, EIR-NewFeatures, or SLF-Regression pipeline, run the following command in your cluster.
    docker load --input ocstub-py-24.1.1.tar
  4. Run the following commands to tag and push the stub image to your registry.
    docker tag ocstub-py:24.1.1 <registry>/ocstub-py:24.1.1
    
    docker push <registry>/ocstub-py:24.1.1

    In the previous command, <registry> is the name of the docker image repository.

  5. Run the following command to untar the common python stub helm charts (ocstub-py-24.1.1.tgz) to get ocstub-py charts:
    tar -xvf ocstub-py-24.1.1.tgz
    Output:
    ocstub-py/Chart.yaml
    ocstub-py/values.yaml
    ocstub-py/templates/_helpers.tpl
    ocstub-py/templates/deployment.yaml
    ocstub-py/templates/ingress.yaml
    ocstub-py/templates/service.yaml
    ocstub-py/templates/serviceaccount.yaml
    ocstub-py/README.md

3.8.6 Loading UDR Stub Images in the UDR-NewFeatures or UDR-Regression Pipeline

Note:

  • To run UDR-NewFeatures/UDR-Regression, ocstub-py, fourg-stub, and diam-stub must be deployed
  • ocstub-py-24.1.1.tar contains common-python-stub image (ocstub-py:24.1.1)
  • ocats-udr-fourg-stub-images-24.1.1.tar contains fourg-stub image (ocats-udr-fourg-stub-images:24.1.1)
  • ocats-udr-diam-stub-images-24.1.1.tar contains diam-stub image (ocats-udr-diam-stub-images:24.1.1)
To load the UDR Stub Images:
  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_24_1_1_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-24.1.1.tgz
    • ocats-udr-stub-pkg-24.1.1.tgz
  2. Run the following command to extract the stub tar file content.

    tar -xvf ocats-udr-stub-pkg-24.1.1.tgz

    The output of this command is:
    • ocstub-py-24.1.1.tgz (Helm charts)
    • fourg-stub-24.1.1.tgz (Helm charts)
    • diam-stub-24.1.1.tgz (Helm charts)
    • ocats-udr-notify-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-fourg-stub-images-24.1.1.tar (Docker image)
    • ocats-udr-diam-stub-images-24.1.1.tar (Docker image)
    • ocstub-py-24.1.1.tar (Docker image)
  3. Run the following commands in your cluster to load the stub images.
    docker load --input ocstub-py-24.1.1.tar
    docker load --input ocats-udr-fourg-stub-images-24.1.1.tar
    docker load --input ocats-udr-diam-stub-images-24.1.1.tar
  4. Run the following commands to tag and push the stub image to your registry.
    docker tag ocstub-py:24.1.1 <registry>/ocstub-py:24.1.1
    docker push <registry>/ocstub-py:24.1.1
    docker tag ocats-udr-fourg-stub-images:24.1.1 <registry>/ocats-udr-fourg-stub-images:24.1.1
    docker push <registry>/ocats-udr-fourg-stub-images:24.1.1
    docker tag ocats-udr-diam-stub-images:24.1.1 <registry>/ocats-udr-diam-stub-images:24.1.1
    docker push <registry>/ocats-udr-diam-stub-images:24.1.1
  5. Run the following command to untar all the stub charts:
    tar -xvf ocstub-py-24.1.1.tgz
    tar -xvf fourg-stub-24.1.1.tgz
    tar -xvf diam-stub-24.1.1.tgz
    Output of each helm chart is as follows:
    ocstub-py
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── serviceaccount.yaml
    │   └── service.yaml
    ├── values.yaml
    └── README.md
    fourg-stub
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml
    diam-stub
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml

3.8.7 Configuring ATS

It is important to configure the following features before deploying ATS for UDR:

Note:

  • The deployment of notify-stub, fourg-stub, and diam-stub is not applicable to SLF pipelines.
  • The service name used by each stub must be unique for successful deployment.
3.8.7.1 Configuring Docker Registry

Update the docker registry details as follows:

image:
  repository: <docker registry>:<docker port>/ocats-udr-images
3.8.7.2 Enabling Static Port
To enable static port:

Note:

ATS supports static port. By default, this feature is not enabled.
In the ocats-udr-custom-values-24.1.1.yaml file, under the service section, set the staticNodePortEnabled parameter to true and the staticNodePort parameter to a valid nodePort value.

Here is a sample configuration for enabling static port in the ocats-udr-custom-values-24.1.1.yaml file:

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  port: "8080"
  staticNodePortEnabled: true
  staticNodePort: "31083"
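
After deploying with this configuration, you can verify the assigned node port with standard kubectl; the service name filter below is illustrative:

kubectl get svc -n <namespace> | grep ocats-udr   # PORT(S) should show something like 8080:31083/TCP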
3.8.7.3 Enabling Persistent Volume
To enable persistent volume, create a PVC and associate it with the ATS pod.

Note:

To enable persistent volume, set the following parameters in the values.yaml file:
  1. Set the PVEnabled flag to 'true'.
  2. Set PVClaimName to the name of the PVC that you created for ATS.
deployment:
  customExtension:
    labels: {}
    annotations: {}
  PVEnabled: true
  PVClaimName: "ocats-udr-pvc"
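
A minimal sketch of creating such a PVC, using the claim name from the sample above; the storage class and size are placeholders that depend on your cluster:

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocats-udr-pvc            # must match PVClaimName in the values file
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard     # placeholder; use your cluster's storage class
  resources:
    requests:
      storage: 1Gi               # placeholder size
EOF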
3.8.7.4 Settings for OAuth2 Test Cases
UDR-ATS supports test cases that verify OAuth2 validation scenarios at both the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway. To run the OAuth2 related test cases on ATS:
  1. Generate four ECDSA private keys in PEM format (two each for the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway) and enter the key names in the privateKey field.
  2. Generate four public certificates using the private keys (two each for the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway) and enter the certificate names under publicKey.
    Sample Commands to Generate ECDSA Private Keys and Certificates
    openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key1.pem
    openssl pkcs8 -topk8 -in ec_private_key1.pem -inform pem -out ec_private_key_pkcs8.pem -outform pem -nocrypt
    openssl req -new -key ec_private_key_pkcs8.pem -x509 -nodes -days 365 -out 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -subj "/C=IN/ST=KA/L=BLR/O=ORACLE/OU=CGBU/CN=ocnrf-endpoint.ocnrf.svc.cluster.local"

    Note:

    • You can configure the above mentioned inputs only if OAuth2 is configured on UDR. For information on configuring OAuth2 on UDR, see the "Configuring OAuth2.0" section in the Oracle Communications Cloud Native Core Unified Data Repository User Guide.
    • ATS configures Ingress Gateway with the secret name, keyId, certificate name, and instanceid based on the inputs provided in the ocats-udr-custom-values-24.1.1.yaml file.
    • ATS supports only the ES256 algorithm for token generation in this release. Generate ECDSA private keys to test the OAuth2 feature.
  3. Update the ocats-udr-custom-values-24.1.1.yaml file with the public and private keys generated in the previous steps. A sample code snippet is as follows:
    Sample: OAuth Configuration
    deployment:
      oauthKeys:
        - keyId: '664b344e74294c8fa5d2e7dfaaaba407'
          udrSecret: 'oauthsecret1'
          privateKey: 'ec_private_key1.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd4'
          reqType: 'igw-sig'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba408'
          udrSecret: 'oauthsecret2'
          privateKey: 'ec_private_key2.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd5.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd5'
          reqType: 'igw-sig'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba409'
          udrSecret: 'oauthsecret3'
          privateKey: 'ec_private_key3.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd6.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd6'
          reqType: 'igw-prov'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba410'
          udrSecret: 'oauthsecret4'
          privateKey: 'ec_private_key4.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd7.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd7'
          reqType: 'igw-prov'
    In the above code snippet,
    • issuerId is any UUID that follows the NfInstanceId format.
    • keyId is a user-defined value.
    • reqType indicates the mapping between the secret created and the Ingress Gateway: igw-sig indicates secrets to be used for the UDR Signaling Ingress Gateway, and igw-prov indicates the UDR Provisioning Ingress Gateway.
  4. Create four secrets, containing the public certificates, in the namespace where ATS and UDR are installed, and enter the secret names in the udrSecret field, as shown in the sketch below.
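
    A minimal sketch for one of the four secrets, using the secret and certificate names from the sample configuration above; repeat for each oauthKeys entry:

    kubectl create secret generic oauthsecret1 \
      --from-file=4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt \
      -n <namespace>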
3.8.7.5 Enabling IPv6 on ATS

If you are deploying the ATS setup on an IPv6 system, enable the following flag in the ocats-udr-custom-values-24.1.1.yaml file:

deployment:
  ipv6enabled: true
3.8.7.6 Configuring ATS to Run Health-Check Pipeline
To run the SLF-Health-Check, UDR-Health-Check, or EIR-HealthCheck features, provide the following inputs in the values.yaml file if ATS is deployed on OCCNE.
deployment:
  Webscale: false
  occnehostip: <base64 encoded occne bastion ip>
  occnehostusername: <base64 encoded occne login user name>
  occnehostpassword: <base64 encoded occne login password>
If ATS is deployed on Webscale, provide the following inputs to run the SLF-Health-Check, UDR-Health-Check, or EIR-HealthCheck features:
deployment:
  Webscale: true
  #Provide Webscale Environment details with base64 encoding
  webscalejumpserverip: <base64 encoded jump server ip>
  webscalejumpserverusername: <base64 encoded jump server username>
  webscalejumpserverpassword: <base64 encoded jump server password>
  webscaleprojectname: <base64 encoded webscale project name>
  webscalelabserverFQDN: <base64 encoded lab server fqdn>
  webscalelabserverport: <base64 encoded lab server port>
  webscalelabserverusername: <base64 encoded lab server username>
  webscalelabserverpassword: <base64 encoded lab server password>
You can configure the name of the secret that ATS creates for the health-check pipeline:
healthchecksecretname: ats-healthcheck-secret

For more information, see ATS Health Check.

Note:

UDR-ATS creates a secret on Kubernetes, with the name configured in healthchecksecretname, to store the above inputs.
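
All of these inputs must be base64 encoded before being placed in the values file. A minimal sketch; the IP address and credentials below are placeholder values:

echo -n '192.0.2.10' | base64     # occnehostip
echo -n 'cloud-user' | base64     # occnehostusername
echo -n 'changeme' | base64       # occnehostpassword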
3.8.7.7 Creating Service Account
To run SLF-ATS, it is mandatory to create a service account using the following inputs:
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale","statefulsets/scale"]
  verbs: ["get","watch","list","update"]
- apiGroups: [""]
  resources: ["pods", "deployments","pods/log","configmaps","pods/exec"]
  verbs: ["get","watch","list","update","create"]
To run UDR-ATS, it is mandatory to create a service account using the following inputs:
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale","statefulsets/scale"]
  verbs: ["get","watch","list","update"]
- apiGroups: [""]
  resources: ["pods", "deployments","pods/log","configmaps","pods/exec","services"]
  verbs: ["get","watch","list","update","create","delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get","create","delete","update","list"]

Note:

For information about creating service account, see the Oracle Communications Cloud Native Core Unified Data Repository Installation and Upgrade Guide available on MOS.
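
The guide referenced in the note is authoritative; as a minimal sketch, the UDR-ATS rules above can be bound to a service account as follows (the resource names are hypothetical):

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-udr-sa              # hypothetical name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-udr-role            # hypothetical name
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale", "statefulsets/scale"]
  verbs: ["get", "watch", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "deployments", "pods/log", "configmaps", "pods/exec", "services"]
  verbs: ["get", "watch", "list", "update", "create", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "create", "delete", "update", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-udr-rolebinding     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocats-udr-role
subjects:
- kind: ServiceAccount
  name: ocats-udr-sa
  namespace: <namespace>
EOF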
3.8.7.8 Enabling Service Mesh

Note:

The UDR-NewFeatures, UDR-Regression, EIR-NewFeatures, and EIR-Regression pipelines do not support deployment on a service-mesh-enabled system.
To enable service mesh:
  1. If service mesh is not enabled at the global level for the namespace, run the following command to enable service mesh at the namespace level before deploying UDR-ATS.

    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace ocudr istio-injection=enabled

  2. Add the following annotations in the lbDeployments section under the global section of the ocats-udr-custom-values-24.1.1.yaml file:
    global:
      # ********  Sub-Section Start: Custom Extension Global Parameters ********
      #**************************************************************************
     
      customExtension:
        allResources:
          labels: {}
          annotations: {}
     
        lbServices:
          labels: {}
          annotations: {}
     
        lbDeployments:
          labels: {}
          annotations:
            traffic.sidecar.istio.io/excludeInboundPorts: "8080"
            traffic.sidecar.istio.io/excludeOutboundPorts: "443,9000,22,9090"
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: {}
  3. Use the following code snippet to create an envoy filter for both UDR and ATS:
    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <user defined name for envoy filter>
      namespace: <namespace where ATS is deployed>
    spec:
      workloadSelector:
        labels:
          app: ocats-udr
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              server_header_transformation: PASS_THROUGH
     
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <user defined name for envoy filter>
      namespace: <namespace where ATS is deployed>
    spec:
      workloadSelector:
        labels:
          app: ocats-udr
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY

    For more information about envoy filter and service mesh configuration, see the Oracle Communications Cloud Native Core Unified Data Repository Installation and Upgrade Guide.

  4. After deploying service mesh, create Peer Authentication on the pods for inter-pod communication. A sample template is as follows:
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: <ATS Peer Authentication name>
      namespace: <ATS deployment namespace>
    spec:
      selector:
        matchLabels:
          app: ocats-udr
      mtls:
        mode: PERMISSIVE
    ---
     
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: <ATS Stubs Peer Authentication name>
      namespace: <ATS Stubs deployment namespace>
    spec:
      selector:
        matchLabels:
          app: ocstub-py
      mtls:
        mode: PERMISSIVE
  5. The ATS sidecar must have at least 2 CPUs and 2 Gi memory. Configure the ATS sidecar as follows:
    deployment:
      customExtension:
        annotations:
          sidecar.istio.io/proxyCPU: "2000m"
          sidecar.istio.io/proxyCPULimit: "2000m"
          sidecar.istio.io/proxyMemory: "2Gi"
          sidecar.istio.io/proxyMemoryLimit: "2Gi"
          proxy.istio.io/config: |
            concurrency: 4
3.8.7.9 Configuring Stubs
From ATS release 24.1.0:
  • The common python stub chart and image must be used for the two NRF stubs, the Notify stub (supports TLS and non-TLS in a single deployment), and the SCP stub.
  • For SLF-ATS and EIR-ATS, deploy two different NRF deployments (primary and secondary) using the common python stub chart and image.
  • For UDR-ATS, deploy two different NRF deployments (primary and secondary), one notify stub deployment (TLS or non-TLS scenarios), and one SCP stub deployment.
3.8.7.9.1 Configuring NRF Stub

UDR-ATS, SLF-ATS, and EIR-ATS require two NRF stubs, one as the primary NRF and the other as the secondary NRF. These NRF stubs are deployed using the ocstub-py-custom-values-24.1.1.yaml file.

To configure NRF stub:

  • In the ocstub-py-custom-values-24.1.1.yaml file, provide the details of the docker registry where the images are pushed:
    image:
      repository: <docker registry>:<docker port>
  • Set env.NF to SLF, UDR, or 5G_EIR depending on the deployed NF:
    env:
      ...
      NF: SLF
    env:
      ...
      NF: UDR
    env:
      ...
      NF: 5G_EIR
  • Set the appendReleaseName parameter to false for backward compatibility and the bypass_additional_port parameter to true to avoid creating an HTTP1 port. Each deployment must have a unique service name, as in the sketch after this list:
    service:
      ...
      name: <user-defined nrf stub service name> #Example nrf-stub-service1 for primary NRF stub and nrf-stub-service2 for secondary NRF stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
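
Because both NRF stub deployments share the same custom values file, the second deployment's unique service name can be supplied at install time. A minimal sketch, assuming the chart exposes service.name as shown above and using the same Helm syntax as the deployment section later in this chapter:

helm install --name stub1 --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml ocstub-py
helm install --name stub2 --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml --set service.name=nrf-stub-service2 ocstub-py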
3.8.7.9.2 Configuring Notify Stub

UDR-ATS requires the notify-stub to be deployed in two modes using the ocstub-py-custom-values-24.1.1.yaml file:

  • HTTP mode and TLS mode: To receive TLS and non-TLS notifications from UDR
  • SCP mode: To validate SCP routing scenarios
To configure notify stub for Notify and SCP stub modes:
  • Update the docker registry where common python stub image is pushed as follows:
    image:
      repository: <docker registry>:<docker port>
  • Set env.NF to UDR when the deployed NF is UDR:
    env:
      ...
      NF: UDR
  • Set the appendReleaseName parameter to false for backward compatibility and the bypass_additional_port parameter to true to avoid creating an HTTP1 port. The common python stub deployed as the HTTP and TLS server must have a unique service name:
    service:
      ...
      name: <user-defined notify stub service name> #Example notify-stub-service for TLS/non-TLS stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
  • Configure the stub with the private key, certificates, and secret that are used by the UDR Egress Gateway for TLS notification validation:
    env:
      ...
      cert_secret_name: <TLS secret of UDR egressgateway> #E.g., ocudr-gateway-secret
      ca_cert: <CA certificate used for UDR egressgateway deployment> #E.g., caroot.cer
      client_cert: <Client certificate used for UDR egressgateway deployment> #E.g., apigatewayrsa.cer
      private_key: <Private key used for UDR egressgateway deployment> #E.g., rsa_private_key_pkcs1.pem
      ...
      CLIENT_CERT_REQ: true
  • To deploy the notify-stub as SCP, set the appendReleaseName parameter to false for backward compatibility and the bypass_additional_port parameter to true to avoid creating an HTTP1 port. The common python stub deployed as the HTTP and TLS server must have a unique service name. Leave CLIENT_CERT_REQ set to false, its default value:
    service:
      ...
      name: <user-defined scp stub service name> #Example scp-stub-service for SCP stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
3.8.7.9.3 Configuring Fourg Stub

UDR-ATS requires one fourg stub to test the migration scenarios. Configure the docker registry where the fourg stub image is pushed as follows:

image:
  repository: <docker registry>:<docker port>/ocats-udr-fourg-stub-images
3.8.7.9.4 Configuring Diameter Stub

UDR-ATS requires two diameter stubs that act as two different diameter peers.

  • diam-toola: peernode1 (seagull1a.seagull.com)
  • diam-toolb: peernode2 (seagull1b.seagull.com)

Note:

UDR-ATS uses the diam-stub-custom-values-24.1.1.yaml file for diameter stub configuration.

To configure diameter stubs:

  • Update the docker registry where the diameter stub image is pushed as follows:
    image:
      repository: <docker registry>:<docker port>/ocats-udr-diam-stub-images
  • Configure diam-toola as follows:
    deployment:
      SourceIdentity: seagull1a.seagull.com
      SourceRealm: seagulla.com
  • Configure diam-toolb as follows:
    deployment:
      SourceIdentity: seagull1b.seagull.com
      SourceRealm: seagullb.com

3.8.8 Deploying ATS in Kubernetes Cluster

You can deploy the ATS pod in the Kubernetes cluster using Helm commands.

Run the following command to deploy ATS:

helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocats-udr

Example:

helm install --name ocats-udr-24.1.1 --namespace ocudr -f ocats-udr-custom-values-24.1.1.yaml ocats-udr

3.8.8.1 Deploying ATS Pod and Stubs in Kubernetes Cluster

Allocate ATS resources as described in the Resource Requirements section to support the parallel test execution feature. For more information, see Parallel Test Execution. To enable Application Log Collection, see UDR Application Log Collection.

CPU and memory utilization depends on the number of behave commands executed at a given point in time. UDR-ATS runs six behave commands at a time, and SLF-ATS runs seven.

You can deploy stub pod in Kubernetes cluster using Helm commands.

Run the following command to deploy ATS and stubs:
helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocats-udr
Example:
helm install --name ocats-udr-24.1.1 --namespace ocudr -f ocats-udr-custom-values-24.1.1.yaml ocats-udr

SLF-NewFeatures, EIR-NewFeatures, or SLF-Regression Pipeline

Run the following command to deploy NRF-STUB:
helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocstub-py

Example:

helm install --name stub1 --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml ocstub-py
helm install --name stub2 --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml ocstub-py

Note:

To test the DNS SRV feature in SLF-Regression and EIR-NewFeatures, the NRF stub must be deployed twice, to act as the primary and secondary NRF.

UDR-NewFeatures or UDR-Regression Pipelines using Helm

Run the following commands to deploy each of the required stubs using Helm:

helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocstub-py

helm install --name <release_name> --namespace <namespace_name> -f <fourg-stub-values-yaml-file> fourg-stub

helm install --name <release_name> --namespace <namespace_name> -f <diam-tool-values-yaml-file> diam-stub

Example:


 helm install --name nrfstub --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml ocstub-py

 helm install --name notify --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml ocstub-py

 helm install --name scp --namespace ocudr -f ocstub-py-custom-values-24.1.1.yaml  ocstub-py

 helm install --name fourgstub --namespace ocudr -f fourg-stub-custom-values-24.1.1.yaml fourg-stub

 helm install --name diamtoola --namespace ocudr -f diam-tool-custom-values-24.1.1.yaml diam-stub

 helm install --name diamtoolb --namespace ocudr -f diam-tool-custom-values-24.1.1.yaml diam-stub

3.8.9 Verifying ATS Deployment

To verify ATS deployment, run the following command:

helm status <release_name>

Figure 3-22 Verifying ATS Deployment

To view the four UDRs, Provisioning Gateway, two NRF stubs, one bulk import, and ATS deployed in the SLF namespace, run the kubectl get pods -n <ns> command. The output is as follows:

Figure 3-23 Sample SLF Namespace

Following is the sample output of the command 'kubectl get pods -n <ns>'. It shows the OCUDR namespace with one UDR, one Provisioning Gateway, two diam-tool stubs, one http-server stub, one SCP stub, one bulk import, two NRF stubs, one fourg-stub, and ATS after installing UDR-ATS for UDR pipelines:

Figure 3-24 Sample UDR Namespace

Following is the sample output of the command 'kubectl get pods -n <ns>'. It shows the OCUDR namespace with one UDR, two NRF stubs, one bulk import, and ATS after installation for the EIR pipeline:

Figure 3-25 Sample EIR Namespace

If you have installed ATS with a sidecar, ensure that the ATS pod shows two containers in the READY state as "2/2". A sample output of the command 'kubectl get pods -n <ns>' for SLF pipelines is as follows:

Figure 3-26 ATS Deployed with Sidecar

3.8.10 Post Installation Steps

If Provisioning Gateway is upgraded using the helm upgrade command

Following are the post installation steps:

  1. Perform the post installation steps as described in the Cloud Native Core Provisioning Gateway Guide to change segDetails to the UDR Provisioning Ingress Gateway FQDN and port.
  2. Run the following command:
    kubectl exec -it -n <ns> <ATS pod> bash
  3. Run the following command:
    
    curl -X PUT http://<provgw helm release>-provgw-config.<ns>:5001/provgw-config/v1/udr.provgwservice.cfg/PROVGW-SERVICE -d '{"tracingEnabled": false,"soapService_udrIp": "ocudr-ingressgateway-prov.ocudr","soapService_udrSignallingIp": "ocudr-ingressgateway-sig.ocudr","retryErrorCodes": [500,503],"retryCount": 3,"retryInterval": 2}'
  4. Run the exit command.

If PV (Persistent Volume) is enabled for UDR ATS

Following are the post installation steps:

  1. Run the following command to extract the ocslf_tests (for SLF pipelines), ocudr_tests (for UDR pipelines), or oceir_tests (for EIR pipelines) folder and the jobs folder from ocats-udr-data-24.1.1.tgz.

    tar -xvf ocats-udr-data-24.1.1.tgz

  2. Run the following commands to create the certs and oauth_keys folders in the ocslf_tests folder (for SLF pipeline runs), oceir_tests folder (for EIR pipeline runs), or ocudr_tests folder (for UDR pipeline runs):
    mkdir -p ocslf_tests/certs ocslf_tests/oauth_keys
    mkdir -p oceir_tests/certs oceir_tests/oauth_keys
    mkdir -p ocudr_tests/certs ocudr_tests/oauth_keys
  3. Run the following commands to copy the ocslf_tests and jobs folders to the ATS pod only if you intend to run SLF pipelines.
    kubectl cp ocslf_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  4. Run the following commands to copy the ocudr_tests and jobs folders to the ATS pod only if you intend to run UDR pipelines.
    kubectl cp ocudr_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  5. Run the following commands to copy the oceir_tests and jobs folders to the ATS pod only if you intend to run EIR pipelines.
    kubectl cp oceir_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  6. Run the following command to restart the pod:

    kubectl delete pod <pod-name> -n <namespace>
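
    Optionally, wait for the replacement pod to become ready before continuing; the app label below matches the ocats-udr selector used in the service mesh configuration:

    kubectl wait --for=condition=ready pod -l app=ocats-udr -n <namespace> --timeout=300s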

  7. For SLF-NewFeatures and SLF-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private keys used to create the secret for TLS support on Provisioning Gateway to the ocslf_tests/certs folder as follows:

    Note:

    Provisioning Gateway must use all three files as part of TLS support.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

  8. For UDR-NewFeatures and UDR-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private key that are used to create the secret for TLS support on Provisioning Gateway to the ocudr_tests/certs folder as follows:

    Note:

    For TLS validation, use the same set of copied certificates for Provisioning Gateway Ingress Gateway, UDR Ingress Gateway, and UDR Egress Gateway.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

  9. For EIR-NewFeatures and EIR-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private key that are used to create the secret for TLS support on UDR ingressgateway-prov and ingressgateway-sig to the oceir_tests/certs folder as follows:
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_sig.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    4. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    5. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    6. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_prov.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

  10. To run OAuth2 validation scenarios on SLF-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocslf_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/oauth_keys
    

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/oauth_keys
  11. To run OAuth2 validation scenarios on UDR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocudr_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/oauth_keys
  12. To run OAuth2 validation scenarios on EIR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/oceir_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/oauth_keys

If PV is disabled:

Following are the post installation steps:

  1. For SLF-NewFeatures and SLF-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private key used to create the secret for TLS support on Provisioning Gateway to the ocslf_tests/certs folder as follows:

    Note:

    Provisioning Gateway must use all three files as part of TLS support.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

  2. For UDR-NewFeatures and UDR-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private keys used to create the secret for TLS support on Provisioning Gateway to the ocudr_tests/certs folder as follows:

    Note:

    For TLS validation, use the same set of copied certificates for Provisioning Gateway Ingress Gateway, UDR Ingress Gateway, and UDR Egress Gateway.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    Note:

    The Ingress Gateway of UDR should use the above three files as part of TLS support.
  3. For EIR-NewFeatures and EIR-Regression pipelines, copy the root certificate authority (CA), signed server certificate with root CA private key, and private keys used to create the secret for TLS support on UDR ingressgateway-prov and ingressgateway-sig to the oceir_tests/certs folder as follows:
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_sig.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    4. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    5. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    6. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_prov.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

  4. To run OAuth2 validation scenarios on SLF-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocslf_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/oauth_keys
  5. To run OAuth2 validation scenarios on UDR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocudr_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/oauth_keys
  6. To run OAuth2 validation scenarios on EIR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/oceir_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/oauth_keys