3 Executing NF Test Cases using ATS
In this chapter, you will learn to execute NF (NRF, NSSF, Policy, SCP, and SLF) test cases using ATS.
Executing NRF Test Cases using ATS
Prerequisite
To execute NRF test cases using NRF ATS 1.7.2, you need to ensure that the following prerequisites are fulfilled:
- To execute Geo-Redundancy test cases, you need to deploy two NRF 1.7.2 instances with replication enabled. These test cases are executed separately because they require two different NRFs.
- The user should create the certificates and keys (public and private) for the AccessToken microservice before deploying NRF.
- Deploy NRF 1.7.2 with default helm configurations using helm charts.
- All NRF microservices, including the AccessToken microservice, should be up and running.
- Deploy ATS using helm charts.
- The user MUST copy the public keys (RSA and ECDSA) created in the above step to the ATS pod at the /var/lib/jenkins/ocnrf_tests/public_keys location.
- Deploy the stubs using helm charts.
- For NRF ATS 1.7.2, you need to deploy two stub servers to execute the SLF and Forwarding functionality test cases. The service names of the two stub servers should be notify-stub-service and notify-stub-service02.
- Ensure the Prometheus service is up and running.
- Deploy ATS and the stubs in the same namespace as OCNRF because the default ATS deployment uses role binding. The test stubs must be deployed in the same namespace as NRF.
- The user MUST NOT initiate a job in two different pipelines at the same time.
- If the Service Mesh check is enabled, you need to create a destination rule to fetch the metrics from Prometheus. In most deployments, Prometheus is kept outside the service mesh, so a destination rule is required for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). To create the rule:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: prometheus-dr
  namespace: ocnrf
spec:
  host: oso-prometheus-server.ocnrf.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF
In the above rule:
- name indicates the name of the destination rule.
- namespace indicates the namespace where ATS is deployed.
- host indicates the hostname of the Prometheus server.
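To confirm that the destination rule was created, you can run a quick check (a minimal sketch; the rule name and namespace match the example above and may differ in your deployment):
kubectl get destinationrule prometheus-dr -n ocnrf
kubectl get destinationrule prometheus-dr -n ocnrf -o yaml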
Logging into ATS
Figure 3-1 Verifying ATS Pod

You can use the external IP of the worker node and the nodeport of the ATS service as <Worker-Node-IP>:<Node-Port-of-ATS>.
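If you are unsure of these values, the following commands are one way to look them up (a sketch assuming ATS is deployed in the ocnrf namespace and exposed as a NodePort service; adjust the namespace and service name to your deployment):
# The NodePort appears in the PORT(S) column, for example 8080:32728/TCP
kubectl get svc -n ocnrf | grep ats
# Worker node IPs appear in the INTERNAL-IP/EXTERNAL-IP columns
kubectl get nodes -o wide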
Note:
In the Verifying ATS Pod screen, slave1 is the node where ATS is deployed, 32728 is the ATS nodeport, and 10.75.225.177 is the worker node IP, highlighted in red. For more details on ATS deployment, refer to NRF ATS Installation Procedure.
Figure 3-2 ATS Login

Executing ATS
NRF-NewFeatures Pipeline
- Enter the username as 'nrfuser' and the password as 'nrfpasswd'. Click Sign in. The following screen appears.
Figure 3-3 NRF Pre-Configured Pipelines
NRF ATS has three pre-configured pipelines.
- NRF-NewFeatures: This pipeline has all the test cases, which are delivered as part of NRF ATS - 1.7.0
- NRF-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- NRF-Regression: This pipeline has all the test cases, which were delivered in NRF ATS - 1.6.1
After identifying the NRF pipelines, the user needs to perform a one-time configuration in ATS as per the NRF deployment. In this pipeline, all the new test cases related to NRF are executed. To configure its parameters:
- Click NRF-NewFeatures in the Name column. The following screen appears:
Figure 3-4 Configuring NRF-New Features
In the above screen:
- Click Configure to navigate to the screen where the configuration needs to be done.
- Click Documentation to navigate to the screen that has the documented test cases, which are part of this NRF release.
- Click the blue dots inside the Build History box to reach the success console logs of the "Sanity", "All-Without-GEO" and "All-GEO" builds respectively.
- The Stage View represents the already executed pipeline for the customer reference.
- Click Configure. User MUST wait for the page to load completely. Once the page loads completely, click the Pipeline tab as shown below:
MAKE SURE THAT THE SCREEN SHOWN BELOW LOADS COMPLETELY BEFORE YOU PERFORM ANY ACTION ON IT. ALSO, DO NOT MODIFY ANY CONFIGURATION OTHER THAN DISCUSSED BELOW.
Figure 3-5 Pipeline Option
- The Pipeline section of the configuration page appears as shown
below:
Figure 3-6 Pipeline Section
In the above screen, you can change the values of the 'Pipeline script'. The content of the pipeline script is as follows:
Figure 3-7 Pipeline Script
Note:
The user MUST NOT change any value other than those on line number 9 to line 21. You can change the parameter values marked "a" to "m" as per your requirement. The parameter details are available as comments from line number 2 to 6.
- a - Name of the NF to be tested in capital (NRF).
- b - Namespace in which the NRF is deployed.
- c - endPointIP:endPointPort value used while deploying NRF using the helm chart.
- d - Comma-separated values of the NRF1 and NRF2 ingress gateway services (ocnrf-ingressgateway.ocnrf,1.1.1.1), also known as cluster_domain. A dummy value for the NRF2 ingress gateway (1.1.1.1) is provided for reference.
- e - Comma-separated values of the NRF1 and NRF2 ingress gateway service ports (80,31000). A dummy value for the NRF2 ingress gateway port (31000) is provided for reference.
- f - Comma-separated values of the NRF1 and NRF2 configuration services (ocnrf-nrfconfiguration.ocnrf,1.1.1.1), also known as cluster_domain. A dummy value for the NRF2 configuration service (1.1.1.1) is provided for reference.
- g - Comma-separated values of the NRF1 and NRF2 configuration service ports (8080,31001). A dummy value for the NRF2 configuration microservice port (31001) is provided for reference.
- h - Name_of_stub_service.namespace (notify-stub-service.ocnrf)
- i - Port of stub service (8080)
- j - NRF_Instance ID (6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c)
- k - Name_of_Prometheus_service.namespace (occne-prometheus-server.occne-infra)
- l - Port of Prometheus service (80)
- m - Number of times the re-run of failed cases is allowed (default is 0)
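For orientation, the following is an illustrative sketch only of how these parameters could be passed to a pre-test configuration call, modeled on the NSSF pipeline script shown later in this chapter. It assumes a preTestConfig.sh helper under /var/lib/jenkins/ocnrf_tests/ and uses the example values listed above; the actual NRF pipeline script content is shown in Figure 3-7 and may differ:
# Illustrative invocation only; the flags correspond to parameters a-m above.
sh /var/lib/jenkins/ocnrf_tests/preTestConfig.sh \
  -a NRF \
  -b ocnrf \
  -c <endPointIP>:<endPointPort> \
  -d ocnrf-ingressgateway.ocnrf,1.1.1.1 \
  -e 80,31000 \
  -f ocnrf-nrfconfiguration.ocnrf,1.1.1.1 \
  -g 8080,31001 \
  -h notify-stub-service.ocnrf \
  -i 8080 \
  -j 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c \
  -k occne-prometheus-server.occne-infra \
  -l 80 \
  -m 0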
Note:
You need not change any value if:
- the OCCNE cluster is used
- NRF, ATS, and Stub are deployed in the ocnrf namespace
- no GEO-Redundancy case is executed
- Click Save after making necessary changes. The NRF-NewFeatures screen appears. Click Build with Parameters. The following screen appears:
Figure 3-8 Pipeline NRF-NewFeatures
In the above screen, you have Execute_Suite options to execute NRF test cases:
- All-Without-GEO: This is the default option. It executes all the test cases except GEO-Redundancy.
- GEO: It executes all the GEO-Redundancy cases.
In the above screen, there are three Select_Option(s), which are:
- All: This is the default option. It executes all the NRF test cases. You just need to scroll down and click Build to execute all the test cases.
- Sanity: It is recommended to execute Sanity before executing any test case. This helps to ensure that all the deployments are done properly. When you select Sanity, the following screen appears:
Figure 3-9 Build Requires Parameters - Sanity
Click Build to execute all the sanity test cases.
Note:
Sanity option is not available when Execute_Suite is set to GEO.
- Single/MultipleFeatures: This option allows you to select any number of test cases that you want to execute from the list of total test cases available for execution. After selecting the test cases, scroll down and click Build. The selected NRF test cases are executed.
- NRF Sanity - This feature file contains all the basic sanity test cases for NRF ATS to validate NRF deployment. It is advisable for users to execute these test cases before starting a complete suite.
- Discovery - These feature files are listed with a prefix as "Disc".
- Registration - These feature files are listed with a prefix as "Reg".
- NRF SLF - These feature files are listed with a prefix as "SLF".
- GEO Redundancy - These feature files are listed with a prefix as "GEO".
Figure 3-10 Sample Screen

Figure 3-11 Test Cases Result - Sanity with Service Mesh

Figure 3-12 Test Cases Result - All-Without-Geo-with Service Mesh

NRF-NewFeatures Documentation
Note:
No new test cases are added as part of the NRF-ATS 1.7.2 release.
Figure 3-13 NRF-NewFeatures Documentation

- NF_BASIC_SANITY_CASES - Lists all the sanity cases, which are useful to identify whether all the NRF functionality works fine.
- NF_DISCOVERY_CASES - Lists all the discovery microservice related cases.
- NF_GEO_REDUNDANCY_FEATURE_CASES - Lists all the Geo-Redundancy related cases.
- NF_REGISTRATION_FT_CASES - Lists all the registration related cases.
- NF_SLF_FEATURE_CASES - Lists all the SLF related cases.
Figure 3-14 Sample Feature - NF_BASIC_SANITY_CASES

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the Pipeline NRF-NewFeatures screen, click Back to NRF-NewFeatures link available on top left corner of the screen.
NRF-Regression Pipeline
This pre-configured pipeline contains all the test cases that were provided as part of NRF ATS 1.7.0. However, some test cases are updated as per the new implementation of NRF.
The configuration method and parameters are the same as for the NewFeatures pipeline. The only difference is that this pipeline does not have the Sanity and GEO options. Thus, while configuring this pipeline, you need not provide any information for NRF2.
- AccessToken - These feature files are listed with a prefix as "oAuth".
- Configuration - These feature files are listed with a prefix as "Config".
- Discovery - These feature files are listed with a prefix as "Disc".
- NRF Forwarding - These feature files are listed with a prefix as "Forwarding".
- NRF Functional - These feature files are listed with a prefix as "Feat".
- Registration - These feature files are listed with a prefix as "Reg" and "Upd". These are related to update operation of registered profiles.
- NRF SLF - These feature files are listed with a prefix as "SLF".
- Subscription - These feature files are listed with a prefix as "Subs".
Figure 3-15 NRF-Regression

Figure 3-16 NRF-Regression with Service Mesh Result

NRF-Regression Documentation
Click Documentation in the left navigation pane of the NRF-Regression pipeline to view all the test cases provided till NRF ATS 1.6.1.
- NF_CONFIGURATION_CASES - Lists the cases related to NRF configuration.
- NF_DISCOVERY_CASES - Lists all the discovery microservice related cases.
- NF_FORWARDING_FEATURE_CASES - Lists all the forwarding related cases.
- NF_FUNCTIONAL_CASES - Lists all the functional cases.
- NF_OAUTH_CASES - Lists all the accesstoken related cases.
- NF_REGISTRATION_CASES - Lists all the registration related cases.
- NF_SLF_FEATURE_CASES - Lists all the SLF related cases.
- NF_SUBSCRIPTION_CASES - Lists all subscription related cases.
Figure 3-17 NRF-Regression Documentation

The NRF-Regression test cases are divided into multiple groups based on the functionality.
- NF_CONFIGURATION_CASES - Lists all the cases related to NRF configuration.
- NF_DISCOVERY_CASES - Lists all the discovery microservice related cases.
- NF_FORWARDING_FEATURE_CASES - Lists all the forwarding related cases.
- NF_FUNCTIONAL_CASES - Lists all the functional cases.
- NF_OAUTH_CASES - Lists all the accesstoken related cases.
- NF_REGISTRATION_CASES - Lists all the registration related cases.
- NF_SLF_FEATURE_CASES - Lists all the SLF related cases.
- NF_SUBSCRIPTION_CASES - Lists all subscription related cases.
Figure 3-18 Sample Screen: NRF-Regression Documentation

Executing NSSF Test Cases using ATS
To execute NSSF test cases using NSSF ATS 1.4, you need to ensure that the following prerequisites are fulfilled.
- Before deploying NSSF, the user must create certificates/keys (public and private) for AccessToken microservice. The public keys (RSA and ECDSA) must be copied to the ATS pod at /var/lib/jenkins/ocnssf_tests/public_keys location.
- User must deploy NSSF 1.4 with default helm configurations using helm charts.
- All NSSF micro-services should be up and running including AccessToken microservice.
Logging into ATS
Figure 3-19 Verifying ATS Deployment

There are two ways to login to ATS Jenkins GUI.
- When an external load balancer (metalLB in case of OCCNE) is available and an external IP is provided to the ATS service, the user can log in to the ATS GUI using <External-IP>:8080.
- When an external IP is not provided to the ATS service, the user can open the browser and provide the external IP of the worker node and the nodeport of the ATS service to log in to the ATS GUI: <Worker-Node-IP>:<Node-Port-of-ATS>
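To determine which of the two methods applies, you can inspect the ATS service (a minimal sketch assuming ATS is deployed in the ocnssf namespace; adjust the namespace to your deployment):
kubectl get svc -n ocnssf
# If EXTERNAL-IP is populated, log in using <External-IP>:8080.
# Otherwise, use the NodePort from the PORT(S) column (for example 8080:32013/TCP)
# together with a worker node IP from:
kubectl get nodes -o wide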
Note:
In the Verifying ATS Deployment screen, ATS nodeport is highlighted in red as 32013. For more details on ATS deployment, refer to NSSF ATS Installation Procedure.
Open a browser and provide IP and port details as <Worker-Node-IP>:<NodePort-of-ATS> (As per the above example: 10.98.101.171:32013). The ATS login screen appears.
Executing ATS
- Enter the username as 'nssfuser' and password as 'nssfpasswd'.
Click Sign in.
Note:
If you want to modify your default login password, refer to Modifying Login Password.
NSSF ATS has three pre-configured pipelines:
- NSSF-New-Features: This pipeline has all the test cases that are delivered as part of NSSF ATS - 1.4.
- NSSF-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- NSSF-Regression: This pipeline has all the test cases of previous releases. As this is the first release of NSSF-ATS, this pipeline does not show any previous release test cases.
Figure 3-20 Pre-Configured Pipelines
Each one of these pipelines is explained below:
- NSSF-NewFeatures Pipeline: After identifying the NSSF pipelines, the user needs to do a one-time configuration in ATS as per their SUT deployment. In this pipeline, all the new test cases related to NSSF are executed. To configure its parameters:
- Click NSSF-NewFeatures in the Name column. The following screen appears:
Figure 3-21 NSSF-NewFeatures Pipeline
In the above screen:
- Click Configure to navigate to a screen where the configuration needs to be done.
- Click Documentation to view the documented test cases.
- Click the blue dots inside the Build History box to view the success console logs of the "All" and "Sanity" builds respectively.
- The Stage View represents already executed pipeline for the customer reference.
- Click Configure. Users MUST wait for the page to load completely. Once the page loads completely, click the Pipeline tab to reach the Pipeline configuration as shown below:
MAKE SURE THAT THE SCREEN SHOWN BELOW LOADS COMPLETELY BEFORE YOU PERFORM ANY ACTION ON IT. ALSO, DO NOT MODIFY ANY CONFIGURATION OTHER THAN DISCUSSED BELOW.
Figure 3-22 NSSF Configure
- In the above screen, the values of the 'Pipeline script' need to be changed. The content of the pipeline script is as follows:
node ('master'){
    //a = SELECTED_NF    b = NF_NAMESPACE    c = FT_ENDPOINT     d = GATEWAY_IP
    //e = GATEWAY_PORT   f = CONFIG_IP       g = CONFIG_PORT     h = STUB_IP
    //i = STUB_PORT      j = NFINSTANCEID    k = PROMETHEUS_IP   l = PROMETHEUS_PORT
    //m = RERUN_COUNT
    sh '''
        sh /var/lib/jenkins/ocnssf_tests/preTestConfig.sh \
        -a NSSF \
        -b ocnssf \
        -c ocnssf-ingressgateway.ocnssf.svc.cluster.local:80 \
        -d ocnssf-ingressgateway.ocnssf \
        -e 80 \
        -f ocnssf-nssfconfiguration.ocnssf \
        -g 8080 \
        -h notify-stub-service.ocnssf \
        -i 8080 \
        -j 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c \
        -k occne-prometheus-server.occne-infra \
        -l 80 \
        -m 2
    '''
    load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-NewFeatures"
}
Note:
The user MUST NOT change any value other than those on line number 8 to line 20. You can change only the parameters marked "a" to "m" as per your requirement.
- a - Name of the NF to be tested in capital (NSSF).
- b - Namespace in which the NSSF is deployed
- c - endPointIP:endPointPort value used while deploying the NSSF using the helm chart
- d - Name_of_NSSF_ingressgateway_service.namespace (ocnssf-ingressgateway.ocnssf) - this is also known as cluster_domain.
- e - Port of ingressgateway service (80)
- f - Name_of_NSSF_configuration_service.namespace (ocnssf-nssfconfiguration.ocnssf)
- g - Port of configuration service (8080)
- h - Name_of_stub_service.namespace (notify-stub-service.ocnssf)
- i - Port of stub service (8080)
- j - NSSF_Instance ID (6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c)
- k - Name_of_Prometheus_service.namespace (occne-prometheus-server.occne-infra)
- l - Port of Prometheus service (80)
- m - Number of times the re-run of failed case is allowed (default as 2).
Note:
You do not have to change any value if the OCCNE cluster is used and NSSF, ATS, and STUB are deployed in the ocnssf namespace.
- Click Save after making necessary changes. You are navigated back to the Pipeline NSSF-NewFeatures screen. Click Build with Parameters as shown below:
Figure 3-23 Build with Parameters
The following screen appears:
Figure 3-24 Build with Parameters Options
Executing NSSF Test Cases
- Click the Schedule a Build with parameters icon present on the NSSF-NewFeatures screen, in the extreme right column corresponding to the NSSF-NewFeatures row, as shown below:
Figure 3-25 Schedule a Build with Parameters
- The following screen appears:
Figure 3-26 Build Screen
In the above screen, there are three Select_Option(s), which are:
- All: By default, all the NSSF test cases are selected for execution. User just needs to scroll down and click Build to execute all the test cases.
- Sanity: It is recommended to execute Sanity before executing any test case. This helps to ensure that all the deployments are done properly. When you select Sanity, the following screen appears.
Figure 3-27 Select_Option(s) - Sanity
Click Build to execute all the sanity test cases.
- Single/MultipleFeature: This option allows you to select any number of test cases that you want to execute from the list of total test cases available for execution. After selecting the test cases, scroll-down and click Build. The selected NSSF test cases are executed.
The NSSF test cases are divided into NSSF Service operations as follows:
- Availability Update: These feature files are listed with a prefix as "Update".
- Configuration: These feature files are listed with a prefix as "failure".
- Registration: These feature files are listed with a prefix as "NsSelection_Registration".
- PDU Session: These feature files are listed with a prefix as "NsSelection_PDU".
- NSSF Sanity: This feature file contains all the basic sanity cases for NSSF ATS 1.6.1.
- Subscription: These feature files are listed with a prefix as "Subscribe".
NewFeatures - Documentation
To view NSSF functionalities, go to NSSF-NewFeatures pipeline and click the Documentation link in the left navigation pane. The following screen appears:
Figure 3-28 NSSF - Documentation

Each one of the documentation features is described below:
- NSSF_BASIC_SANITY_CASES - Lists all the sanity cases, which are useful to identify whether all the NSSF functionality works fine.
- NSSF_CONFIG_CASES - Lists all the test cases related to NSSF configuration.
- NSSF_BASIC_UPDATE_CASES - Lists all the test cases related to Availability Update.
- NSSF_AVAILABILITY_PATCH_AND_NEGATIVE_CASES - Lists all the test cases related to Availability Patch and other negative scenarios.
- NSSF_NsSelection_REGISTRATION_CASES - Lists all the test cases related to NsSelection registration.
- NSSF_NsSelection_PDU_CASES - Lists all the test cases related to NsSelection PDU.
- NSSF_BASIC_SUBSCRIBE_CASES - Lists all the test cases related to subscription.
Figure 3-29 NSSF_BASIC_SANITY_CASES

Figure 3-30 NSSF_BASIC_SUBSCRIBE_CASES

Figure 3-31 NSSF_NsSelection_Registration_CASES

Executing Policy Test Cases using ATS
This ATS-Policy release is a converged release comprising scenarios (test cases) from PCF, PCRF, and Converged Policy. ATS 1.2.2 is compatible with Policy 1.7.4 with TLS enabled (server side) and disabled mode, CN-PCRF, and converged policy.
To execute Policy test cases, you need to ensure that the following prerequisites are fulfilled.
Prerequisites
- Deploy OCCNP.
- Install Go-STUB for PCF and Converged Policy.
- PCF
- PCF with TLS not available: In the PCF's custom values file, check if the following parameters are configured with the respective values:
ingress-gateway:
  enableIncomingHttps: false
egress-gateway:
  enableOutgoingHttps: false
- PCF with TLS Enabled: In the PCF's custom values file, check if the following parameters are configured with the respective values:
ingress-gateway:
  enableIncomingHttps: true
egress-gateway:
  enableOutgoingHttps: true/false
You also need to ensure that PCF is deployed with corresponding certificates.
This scenario has two options:
- Client without TLS Enabled: In this case, PCF is deployed with TLS enabled without generating any certificate in the ATS pod.
- Client with TLS Security Enabled: In this case, PCF and ATS both have the required certificates. For more details, refer to the Enabling Https support for Egress and Ingress Gateway section in this topic.
- In the -application-config configmap, configure the following parameters with the respective values:
- primaryNrfApiRoot=http://nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080
  Example: primaryNrfApiRoot=http://nf1stub.ocats.svc:8080
- nrfClientSubscribeTypes=UDR,CHF
- supportedDataSetId=POLICY (comment out the supportedDataSetId)
Note:
You can configure these values at the time of Policy deployment also.
Note:
Execute the following command to get all the configmaps in your namespace:
kubectl get configmaps -n <Policy_namespace>
- CN-PCRF
Execute the following command to set the Log level to Debug in Diam-GW POD:
kubectl edit statefulset <diam-gw pod name> -n <namespace>
- Converged Policy: It is the same as PCF. You can refer to the PCF explanation given above.
- Prometheus server should be installed in cluster.
- The database cluster should be in a running state with all the required tables. Ensure that there are no previous entries in the database before executing the test cases.
- ATS should be deployed in same namespace as Policy using Helm Charts.
- User MUST NOT initiate a job in two different pipelines at the same time.
- If you enable the Service Mesh check, you need to create a destination rule for fetching the metrics from Prometheus. In most deployments, Prometheus is kept outside the service mesh, so you need a destination rule for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). You can create a destination rule as follows:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: prometheus-dr
  namespace: ocats
spec:
  host: oso-prometheus-server.pcf.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF
In the destination rule:
- name indicates the name of the destination rule.
- namespace indicates the namespace where ATS is deployed.
- host indicates the hostname of the Prometheus server.
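As with the NRF destination rule, you can verify that the rule exists before running the pipeline (names taken from the example above; adjust to your deployment):
kubectl get destinationrule prometheus-dr -n ocats
kubectl get destinationrule prometheus-dr -n ocats -o yaml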
Enabling TLS in ATS Pod
- Execute the following command to copy the caroot.cer generated during PCF deployment to the "cert" directory in the ATS pod:
kubectl cp <path_to_file>/caroot.cer <namespace>/<ATS-Pod-name>:/var/lib/jenkins/cert/ -n <namespace>
Example:
kubectl cp cert/caroot.cer ocpcf/ocpcf-ocats-pcf-56754b9568-rkj8z:/var/lib/jenkins/cert/
- Execute the following command to log in to your ATS pod:
kubectl exec -it <ATS-Pod-name> bash -n <namespace>
- Execute the following commands from the cert directory to create the private key and certificates:
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key_client -out rsa_certificate_client.crt
Figure 3-32 Command 1
Note:
You need to provide appropriate values and specify the FQDN of the PCF Ingress Gateway service, that is, <ingress-servicename>.<pcf_namespace>.svc, in the Common Name.
openssl rsa -in rsa_private_key_client -outform PEM -out rsa_private_key_pkcs1_client.pem
Figure 3-33 Command 2
openssl req -new -key rsa_private_key_client -out ocegress_client.csr -config ssl.conf
Note:
You can either use or copy the ssl.conf file, which was used while deploying PCF, to the ATS pod for this step.
Figure 3-34 Command 3
- Copy the ocegress_client.csr to the bastion. On the bastion, execute the following command to sign the CSR:
openssl x509 -CA caroot.cer -CAkey cakey.pem -CAserial serial.txt -req -in ocegress_client.csr -out ocegress_client.cer -days 365 -extfile ssl.conf -extensions req_ext
Figure 3-35 Copying ocegress_client.csr to bastion
- Copy the ocegress_client.cer from Bastion to the ATS Pod.
- Restart the ingress and egress gateway pods from the Bastion.
Logging into ATS
Before logging into ATS Jenkins GUI, it is important to get the nodeport of the service, '-ocats-Policy'. Execute the following command to get the nodeport:
kubectl get svc -n <Policy_namespace>
Example:
kubectl get svc -n ocats
Figure 3-36 Policy Nodeport

To log in to Jenkins, open the web browser and type the URL: http://<Worker-Node-IP>:<Node-Port-of-ATS>. In the above screen, 32471 is the nodeport. Example: http://10.75.225.49:32471
Note:
For more information on ATS deployment in PCF, refer to Policy ATS Installation Procedure.
Executing ATS
To execute ATS:
- Enter the username as "policyuser" and
password as
"policypasswd". Click Sign
in.
Note:
If you want to modify your default login password, refer to Modifying Login Password.
The following pre-configured pipelines appear:
- Policy-NewFeatures: This pipeline has all the test cases, which are delivered as part of Policy ATS - 1.7.4.
- Policy-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- Policy-Regression: This pipeline has all the test cases, which were delivered in Policy ATS - 1.7.0.
Figure 3-37 Pre-Configured Pipelines
The pre-configured pipelines are explained below:
Policy-New Features Pipeline
- Click Policy-NewFeatures in the Name column and then click Configure in the left navigation pane as shown below:
Figure 3-38 Policy-NewFeatures Configure
- The Policy-NewFeatures, General tab appears. Make sure that the screen loads completely.
- Scroll down to the end. The control moves from the General tab to the Pipeline tab as shown below:
Figure 3-39 Policy - Pipeline Script
In the Script area of the Pipeline section, you can change the values of the following parameters:
- b: Change this parameter to update the namespace where Policy is deployed in your bastion.
- d: Change this parameter to update the namespace where your gostubs are deployed in your bastion.
- e: Set this parameter to 'unsecure' if you intend to run ATS in TLS disabled mode. Else, set this parameter to 'secure'.
- g: Set this parameter to more than 30 secs. The default wait time for the pod is 30 secs. Every test case requires a restart of the nrf-client-management pod.
- h: Set this parameter to more than 60 secs. The default wait time to add a configured policy to the database is 60 secs.
- i: Set this parameter to more than 140 secs. The default wait time for the Nf_Notification test cases is 140 secs.
- k: Use this parameter to set the waiting time to initialize the test suite.
- l: Use this parameter to set the waiting time to get a response from the stub.
- m: Use this parameter to set the waiting time after adding a Policy configuration.
- n: Use this parameter to set the waiting time after adding a Policy.
- o: Use this parameter to set the waiting time before sending the next message.
- p: Use this parameter to set the Prometheus server IP.
- q: Use this parameter to set the Prometheus server port.
Note:
DO NOT MODIFY ANYTHING OTHER THAN THESE PARAMETER VALUES.
- Click Save after updating the parameter values. The Policy-NewFeatures Pipeline screen appears.
Note:
It is advisable to save the pipeline script in your local machine that you can refer at the time of ATS pod restart.
Executing Policy Test Cases
- Click the Build with Parameters link available in the left navigation pane of the Policy-NewFeatures Pipeline screen. The following screen appears.
Figure 3-40 Policy - Build with Parameters
In the above screen, you can select the SUT as either PCF, CN-PCRF, or Converged Policy. It also has two Select_Option(s), which are:
- All: By default, all the Policy test cases are selected for execution. You just need to scroll down and click Build to execute all the test cases.
- Single/MultipleFeatures: This option allows you to select any number of test cases that you want to execute from the list of total test cases available for execution. After selecting the test cases, scroll down and click Build. The selected Policy test cases are executed.
Figure 3-41 SUT Options
Based on your selection, related test cases appear.
Figure 3-42 Test Cases based on SUT
Go to Build → Console Output to view the test result output as shown below:
Figure 3-43 Sample: Test Result Output in Console
Figure 3-44 Sample Output of Build Status

NewFeatures - Documentation
Figure 3-45 Policy-NewFeatures Feature List

Figure 3-46 IMS_Emergency_Call_001

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the Pipeline Policy-NewFeatures screen, click Back to Policy-NewFeatures link available on top left corner of the screen.
Policy-Regression Pipeline
This pre-configured pipeline has all the test cases of previous releases. For example, as part of Release 1.7.4, this pipeline has all the test cases that were released as part of release 1.7.0.
Figure 3-47 Policy-Regression

Figure 3-48 Policy-Regression Build Output

Figure 3-49 Policy-Regression Console Output

Note:
The regression pipeline does not have any sanity option. However, you should perform all the steps as performed in the NewFeatures pipeline. Configure the pipeline script changes to provide the environment variables.
Regression - Documentation
Figure 3-50 Policy-Regression Documentation

Figure 3-51 Sample: Regression Documentation - Feature

This screen shows only those functionalities whose test cases were released in previous releases.
Executing SCP Test Cases using ATS
To execute SCP test cases, you need to ensure that the following prerequisites are fulfilled.
Prerequisites
- Deploy SCP 1.7.3 with the following custom values in the deployment file (an illustrative sketch of these values follows this list).
- As you can provide NRF information only at the time of deployment, stub NRF details like nrf1svc and nrf2svc should also be provided at the time of deployment before executing these cases. For example, if the test stub namespace is scpsvc, then SCP should have been deployed with the primary NRF as nrf1svc.scpsvc.svc.<clusterDomain> and the secondary NRF as nrf2svc.scpsvc.svc.<clusterDomain> for the NRF test cases to work.
- The NRF details of SCP should specify port 8080 in ipEndPoints. Example: ipEndPoints: [{"port": "8080"}]
- In the SCP deployment file, servingScope must have Reg1 and servingLocalities must have USEast and Loc9. In addition, the recommended auditInterval is 120 and guardTime is 10.
- For ATS execution, you should deploy SCP with the SCP-Worker replicas set to 1.
- Deploy ATS using helm charts.
- As the default ATS deployment uses role binding, it is important to deploy ATS and the test stubs in the same namespace as SCP.
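The following is a hedged sketch of how these custom values might appear in the SCP deployment file. The value names (servingScope, servingLocalities, auditInterval, guardTime, ipEndPoints) are taken from the prerequisites above, but the surrounding key nesting depends on the SCP helm chart version, so treat this only as an illustration:
# Illustrative structure only - confirm key placement against your SCP helm chart.
servingScope:
  - Reg1
servingLocalities:
  - USEast
  - Loc9
auditInterval: 120
guardTime: 10
nrfDetails:                                     # hypothetical key for the NRF information
  ipEndPoints: [{"port": "8080"}]
  primaryNrfApiRoot: nrf1svc.scpsvc.svc.<clusterDomain>
  secondaryNrfApiRoot: nrf2svc.scpsvc.svc.<clusterDomain>
scp-worker:
  replicas: 1                                   # SCP-Worker replicas set to 1 for ATS execution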
Logging into ATS
Figure 3-52 Verifying ATS Deployment

To log in to the ATS Jenkins GUI, open the browser and provide the external IP of the worker node and the nodeport of the ATS service as <Worker-Node-IP>:<Node-Port-of-ATS>. The Jenkins login screen appears.
Note:
In the Verifying ATS Deployment screen, the ATS nodeport is highlighted in red as 31745. For more details on ATS deployment, refer to SCP ATS Installation Procedure.
Executing ATS
- Enter the username as "scpuser" and password as
"scppasswd". Click Sign in. A sample screen is shown below.
Figure 3-53 Logging into ATS GUI
Note:
If you want to modify your default login password, refer to Modifying Login Password.
- The following screen appears, showing the pre-configured pipelines for SCP (3 pipelines):
- SCP-New-Features
- SCP-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- SCP-Regression
Figure 3-54 ATS SCP First Logged-In Screen
Pipeline SCP-NewFeatures
- Click SCP-NewFeatures in the Name column. The following screen appears:
Figure 3-55 SCP-NewFeatures
- Click Configure in the left navigation pane to provide the input parameters. The SCP-NewFeatures Configure - General tab appears.
Note:
MAKE SURE THAT THE SCREEN SHOWN BELOW LOADS COMPLETELY BEFORE YOU PERFORM ANY ACTION ON IT. ALSO, DO NOT MODIFY ANY CONFIGURATION OTHER THAN DISCUSSED BELOW.
- Scroll down to the end. The control moves from the General tab to the Advanced Project Options tab as shown below:
Figure 3-56 Advanced Project Options
You can modify the script pipeline parameters from "-b" to "-q" on the basis of your deployment environment and click Save. The content of the pipeline script is as follows:
Figure 3-57 SCP Pipeline Content
- -a - Selected NF
- -b - NameSpace in which SCP is Deployed
- -c - Kubernetes Cluster Domain where SCP is Deployed
- -d - Test Stubs NameSpace - must be same as SCP Namespace
- -e - Docker registry where test stub image is available
- -f - Audit Interval provided in SCP Deployment file
- -g - Guard Time provided in the SCP Deployment file
- -h - SCP-Worker microservice name as provided during deployment
- -i - SCPC-Configuration microservice name as provided during deployment
- -j - SCPC-Notification microservice name as provided during deployment
- -k - SCPC-Subscription microservice name as provided during deployment
- -l - DB Secret name as provided during deployment
- -m - Mysql Host name as provided during deployment
- -n - Test Stub Image Name with tag
- -o - Test Stub CPU requests and limit
- -p - Test Stub Memory requests and limit
- -q - re-run count
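For orientation only, the following sketch shows how such a parameterized pre-test configuration call could look, modeled on the NSSF pipeline script shown earlier in this chapter. The script path, microservice names, and values are assumptions; the actual SCP pipeline content is shown in Figure 3-57 and takes precedence:
# Illustrative invocation only; flags correspond to -a through -q above.
sh /var/lib/jenkins/ocscp_tests/preTestConfig.sh \
  -a SCP \
  -b scpsvc \
  -c cluster.local \
  -d scpsvc \
  -e <docker-registry> \
  -f 120 \
  -g 10 \
  -h <scp-worker-service-name> \
  -i <scpc-configuration-service-name> \
  -j <scpc-notification-service-name> \
  -k <scpc-subscription-service-name> \
  -l <db-secret-name> \
  -m <mysql-host> \
  -n <test-stub-image>:<tag> \
  -o <stub-cpu-request-limit> \
  -p <stub-memory-request-limit> \
  -q 0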
Note:
DO NOT MODIFY ANYTHING OTHER THAN THESE PARAMETERS.
- Click Build with Parameters. The following screen appears:
Figure 3-58 Build with Parameters Options
In the above screen, there are three Select_Option(s), which are:
- All: By default, all the SCP test cases are selected for execution. You just need to scroll down and click Build to execute all the test cases.
- Sanity: This option is NOT AVAILABLE for SCP.
- Single/MultipleFeatures: This option allows you to select any number of test cases that you want to execute from the list of total test cases available for execution. After selecting the test cases, scroll-down and click Build. The selected SCP test cases are executed.
- To check execution results and logs:
- Click the execute-tests stage of pipeline and then logs.
- Select the test execution step.
- Double-click to open the execution logs console.
Figure 3-59 SCP-NewFeatures Stage Logs
NewFeatures - Documentation
Figure 3-60 SCP-NewFeatures-Documentation

Note:
The Documentation option appears only if the New-Features pipeline is executed at least once.
Figure 3-61 Sample: SCP Functionality

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the Pipeline SCP-NewFeatures screen, click Back to SCP-NewFeatures link available on top left corner of the screen.
SCP-Regression Pipeline
Figure 3-62 SCP-Regression Pipeline

If you are executing SCP pipeline for the first time, you have to set Input Parameters before execution. Subsequent execution does not require any input unless there is a need to change any configuration.
Figure 3-63 Regression - Pipeline Script

Figure 3-64 SCP-Regression Pipeline Script

- -a - Selected NF
- -b - NameSpace in which SCP is Deployed
- -c - Kubernetes Cluster Domain where SCP is Deployed
- -d - Test Stubs NameSpace - must be same as SCP Namespace
- -e - Docker registry where test stub image is available
- -f - Audit Interval provided in the SCP Deployment file
- -g - Guard Time provided in the SCP Deployment file
- -h - SCP-Worker microservice name as provided during deployment
- -i - SCPC-Configuration microservice name as provided during deployment
- -j - SCPC-Notification microservice name as provided during deployment
- -k - SCPC-Subscription microservice name as provided during deployment
- -l - DB Secret name as provided during deployment
- -m - Mysql Host name as provided during deployment
- -n - Test Stub Image Name with tag
- -o - Test Stub CPU requests and limit
- -p - Test Stub Memory requests and limit
- -q - re-run count
Figure 3-65 SCP Regression - Build with Parameters

- All - To execute all the test cases except SCP_Audit_nnrf_disc. If SCP is deployed with nnrf-disc for Audit or Registration with NRF is not enabled, then you should not use the All option. Instead, use Single/MultipleFeatures option to select appropriate cases for execution.
- Sanity - This option is not available for SCP.
- Single/MultipleFeatures - To execute selected test cases. You can select one or more test cases and execute using this option.
Figure 3-66 SCP-Regression Build Option

Figure 3-67 SCP-Regression Stage Logs

Executing SLF Test Cases using ATS
Logging into ATS
Before logging into ATS, you need to know the nodeport of the "-ocats-slf" service. To get the nodeport detail, execute the following command:
kubectl get svc -n <slf_namespace>
Example:
kubectl get svc -n ocats
Figure 3-68 SLF Nodeport

In the above screen, 31150 is the nodeport.
- In the web browser, type http://<Worker IP>:<port obtained above> and press Enter.
Example: http://10.75.225.49:31150
The Login screen appears.
- Enter the username as 'udruser' and the password as 'udrpasswd'. Click Sign in. A screen with pre-configured pipelines for SLF appears (3 pipelines).
- SLF-New-Features: This pipeline has all the test cases, which are delivered as part of SLF ATS - 1.7.1.
- SLF-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- SLF-Regression: This pipeline has all the test cases of previous releases. As this is the first release of SLF-ATS, this pipeline does not show any previous release test cases.
Note:
If you want to modify your default login password, refer to Modifying Login Password.
Figure 3-69 SLF Pre-configured Pipelines
- Click SLF-NewFeatures. The following screen appears:
Figure 3-70 SLF-NewFeatures Configure
- Click Configure in the left navigation pane. The General tab appears. User MUST wait for the page to load completely.
- Once the page loads completely, click the Advanced Project Options tab. Scroll down to reach the Pipeline configuration as shown below:
MAKE SURE THAT THE SCREEN SHOWN BELOW LOADS COMPLETELY BEFORE YOU PERFORM ANY ACTION ON IT. ALSO, DO NOT MODIFY ANY CONFIGURATION OTHER THAN DISCUSSED BELOW.
Figure 3-71 SLF Configuration Parameters
You SHOULD NOT change any value other than those on line number 12 to line 15; only the parameters marked "a" to "g" can be changed as per user requirement. The details about these parameters are as follows:
- a - Name of the NF to be tested in capital (SLF).
- b - Namespace in which the udr is deployed.
- c - Name_of_UDR_ingressgateway_service.namespace (ocudr-ingressgateway.ocudr).
- d - Port of ingressgateway service (80).
- e - Name_of_Prometheus_service.namespace (prometheus-server.ocudr.svc.cluster.local).
- f - Port of Prometheus service (80).
- g - Number of times the re-run of failed case is allowed (default as 2).
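As a rough illustration of how these seven parameters could be passed to a pre-test configuration script (the script name and path are assumptions based on the other NF pipelines in this chapter, and the example values mirror the defaults listed above; the actual SLF pipeline content is shown in Figure 3-71):
# Illustrative only; the script path and name are assumptions.
sh /var/lib/jenkins/ocslf_tests/preTestConfig.sh \
  -a SLF \
  -b ocudr \
  -c ocudr-ingressgateway.ocudr \
  -d 80 \
  -e prometheus-server.ocudr.svc.cluster.local \
  -f 80 \
  -g 2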
- Click Save after making necessary changes. The SLF-NewFeatures screen appears.
- Click Build with Parameters. The following screen appears:
Figure 3-72 SLF Build with Parameters
In the above screen, there are two Select_Option(s), which are:
- All: By default, all the SLF test cases are selected for execution. You just need to scroll down and click Build to execute all the test cases.
- Single/MultipleFeatures: This option allows you to select any number of test cases that you want to execute from the list of total test cases available for execution. After selecting the test cases, scroll-down and click Build. The selected SLF test cases are executed.
NewFeatures-Documentation
Figure 3-73 SLF-NewFeatures Documentation Option

Figure 3-74 SLF-NewFeatures Documentation

Note:
The Documentation option appears only if the New-Features pipeline is executed at least once.
Figure 3-75 Sample: SLF Test Case Description

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the Pipeline SLF-NewFeatures screen, click Back to SLF-NewFeatures link available on top left corner of the screen.