4 Running NF Test Cases Using ATS
4.1 Running BSF Test Cases using ATS
This section describes how to run BSF test cases using ATS.
Note:
Restart the NRF-client pod of BSF for UDR and CHF discovery as part of each test case.
4.1.1 Prerequisites
- Deploy BSF 25.1.200 with default helm configurations using helm charts to run all test cases. The
ATS version must be compatible with BSF 25.1.200.
For more information on how to install BSF, see Oracle Communications Cloud Native Core, Binding Support Function Installation, Upgrade, and Fault Recovery Guide.
- Go-STUB must be installed in the same namespace where ocbsf is installed.
- Add the following to the Kubernetes namespace to grant role access:
  PolicyRule:
    Resources                 Non-Resource URLs   Resource Names   Verbs
    ---------                 -----------------   --------------   -----
    pods/log                  []                  []               [get list]
    configmaps                []                  []               [watch get list delete update create]
    pods                      []                  []               [watch get list delete update create]
    secrets                   []                  []               [watch get list delete update create]
    services                  []                  []               [watch get list delete update create]
    deployments.apps          []                  []               [watch get list update]
    replicasets.apps          []                  []               [watch get list update]
    deployments.extensions    []                  []               [watch get list update]
    replicasets.extensions    []                  []               [watch get list update]
- ATS Prometheus metrics validation works only when:
- the metrics suffixes are not configured
- installation has a single pod for each microservice in the BSF deployment
- You can customize test cases in the custom test case folders (cust_newfeatures, cust_regression and cust_performance). You can add new test cases, remove unwanted test cases and modify existing test cases. It does not impact the original product packaged test cases available in the newfeatures, regression and performance folders respectively. For more details about custom test case folders, see Custom Folder Implementation.
- Install Prometheus server in the cluster.
- Database cluster is in the running state with all required tables. Verify that there are no previous entries in the database before running test cases.
- Do not initiate a job in two different pipelines at the same time.
- For running ATS features, ensure that you update the following mandatory parameters in the ocbsf_custom_values_25.1.200.yaml file only when you are not using the minimal custom values.yaml file:
  logging:
    burst:
      rate: 500
      max: 3000
      onMismatch: DENY
    logLevel: DEBUG
Note:
Ensure that you use the latest version of the custom values file when installing BSF initially.
- To use the Controlled Shutdown feature, ensure that the enableControlledShutdown parameter is enabled on BSF during installation. If this parameter is not enabled, the test case for this feature fails.
- In the application-config configmap, configure the following parameters with the respective values:
  - primaryNrfApiRoot=nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080
    For example: primaryNrfApiRoot=nf1stub.ocats.svc:8080
  - secondaryNrfApiRoot=nf11stub.<namespace_gostubs_are_deployed_in>.svc:8080
    For example: secondaryNrfApiRoot=nf11stub.ocats.svc:8080
  - virtualNrfFqdn=nf1stub.<namespace_gostubs_are_deployed_in>.svc
    For example: virtualNrfFqdn=nf1stub.ocats.svc
  - retryAfterTime=PT30S
- To enable the ATS BSF GUI with the HTTPS protocol, configure the following parameters of the above-mentioned application-config configmap with the respective values:
  # Please edit the object below. Lines beginning with '#' will be ignored,
  # and an empty file will abort the edit. If an error occurs while saving this file will be
  # reopened with the relevant failures.
  #
  apiVersion: v1
  data:
    profile: |-
      [appcfg]
      primaryNrfApiRoot=nf1stub.ocats.svc:8443
      secondaryNrfApiRoot=nf11stub.ocats.svc:8443
      nrfScheme=https
      virtualNrfPort=8443
      virtualNrfScheme=https
- Before running the ATS test suite, restart the nrf-client-nfdiscovery and nrf-client-nfmanagement pods.
- Run the following command to get all the configmaps in your namespace:
  kubectl get configmaps -n <BSF_namespace>
- Edit the Alternate Route Service deployment to point towards the DNS Stub. Run the following command to get the searches information from the dns-bind pod, which enables communication between the Alternate Route Service and the dns-bind service:
  kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
Example:
Figure 4-1 Editing Alternate Route Service deployment pointing towards DNS Stub
By default, Alternate Route Service points to CoreDNS.
Figure 4-2 Alternate Route Service settings in deployment file
Change the deployment file to add the following content to the Alternate Route Service so that it queries the DNS stub.
kubectl edit deployment ocbsf-occnp-alternate-route -n ocbsf
- Add the nameservers IP address.
- Add all the search information.
- Set dnsPolicy to None.
  dnsConfig:
    nameservers:
    - <dns_stub_cluster_ip_address>
    searches:
    - dns-bind search
    - dns-bind search
    - dns-bind search
  dnsPolicy: None
Figure 4-3 dnsConfig
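If you prefer to apply the same change non-interactively instead of using kubectl edit, the following is a minimal sketch using kubectl patch. The deployment name and namespace are taken from the example above; the nameserver IP and search domains are placeholders that must be replaced with the values collected from the dns-bind pod.
kubectl patch deployment ocbsf-occnp-alternate-route -n ocbsf --type merge -p '
spec:
  template:
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - <dns_stub_cluster_ip_address>     # cluster IP of the DNS stub service
        searches:
        - <search_domain_from_dns-bind>     # search entries copied from the dns-bind pod
        - <search_domain_from_dns-bind>
'
After the patch, the alternate-route pods restart and resolve FQDNs through the DNS stub instead of CoreDNS.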
- If the Service Mesh check is enabled, create a destination rule to fetch the metrics from Prometheus. Because Prometheus is kept outside the service mesh, a destination rule is required for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). The rule can be created as follows:
  kubectl apply -f - <<EOF
  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: prometheus-dr
    namespace: ocats
  spec:
    host: oso-prometheus-server.ocbsf.svc.cluster.local
    trafficPolicy:
      tls:
        mode: DISABLE
  EOF
  where:
  - name: indicates the name of the destination rule.
  - namespace: indicates the namespace where ATS is deployed.
  - host: indicates the hostname of the Prometheus server.
- To run the Bsf_To_Nrf_Late_Arrival feature file, configure nfInstanceId in the Bsf_To_Nrf_Late_Arrival.yaml parameterization file. This nfInstanceId must be the same as the BSF nfInstanceId configured during BSF installation. The nfInstanceId must also be configured in the ATS UI under the pipeline configuration (BSF_NFINSTANCE_ID).
- To ensure consistent functioning of ATS with the audit service, modify the Audit deployment by reducing the value of AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC from 30000 to 1000 (that is, from 30 seconds to 1 second).
  $ kubectl get deploy -n ocbsf | grep 'audit'
  ocbsf-ocpm-audit-service   1/1   1   1   10h
  $ kubectl edit deploy ocbsf-ocpm-audit-service -n ocbsf
  The deployment opens. Scroll down until the following fields are visible:
  - name: AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC
    value: "30000"
  Update the value of AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC to 1000 if it is not already set.
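Alternatively, the same change can be applied with a single command instead of editing the deployment interactively. This is a minimal sketch assuming the deployment and namespace names shown above; the pod restarts automatically and picks up the 1-second polling interval.
kubectl set env deployment/ocbsf-ocpm-audit-service -n ocbsf \
  AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC=1000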
Application Config map changes for BSF registrations over TLS
- NRF port from 8080 to 8443
- nrfScheme to https
apiVersion: v1
data:
profile: |-
[appcfg]
primaryNrfApiRoot=nf1stub.ocats.svc:8443
secondaryNrfApiRoot=nf11stub.ocats.svc:8443
nrfScheme=https
virtualNrfPort=8443
virtualNrfScheme=https
Note:
In the application-config configmap, delete the lines that contain the supportedDataSetId or secondaryNrfApiRoot strings.
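A minimal sketch of how these configmap edits can be applied; the exact configmap and deployment names depend on the Helm release name and should be confirmed with kubectl get before editing.
kubectl get configmaps -n <BSF_namespace> | grep application-config
kubectl edit configmap <application-config configmap name> -n <BSF_namespace>
# After saving the changes, restart the NRF-client pods so that they pick up the new profile:
kubectl rollout restart deployment <nrf-client-nfdiscovery deployment> <nrf-client-nfmanagement deployment> -n <BSF_namespace>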
4.1.2 Logging into ATS
Before logging in to the ATS GUI, it is important to get the worker node external IP and the node port of the 'ocats-bsf' service.
Run the following command to get the external IP for the worker node:
Example:
kubectl get nodes -owide
ocbsf-k8s-node-1 Ready <none> 111d v1.16.7 192.168.200.26 10.75.152.111 Oracle Linux Server 7.8 4.14.35-1902.303.5.3.el7uek.x86_64 containerd://1.2.10
Run the following command to get the nodeport:
kubectl get svc -n <BSF_namespace>
Example:
kubectl get svc -n ocbsf
ocbsf-ocats-ocats-bsf LoadBalancer 10.233.53.144 10.75.225.49 8080:31944/TCP 19h
http://<Worker-Node-IP>:<Node-Port-of-ATS>
If the 'ocats-bsf' Service has an external IP available, <SVC external IP> can also be used to log in to ATS.
http://<External IP of ATS Service>:8080
http://10.75.225.49:8080
Running ATS
To run ATS test cases, perform the following steps:
- Enter the username as bsfuser and the password as bsfpasswd.
- Click Sign in.
Note:
To modify the default login password, see Modifying Login Password.
Figure 4-4 Pre-configured Pipelines
- BSF-NewFeatures: This pipeline has all the new test cases delivered for BSF 25.1.200.
- BSF-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
- BSF-HealthCheck: This pipeline checks if BSF and ATS are deployed correctly. This shows only when the user has enabled this feature at the time of installing BSF ATS.
- BSF-Regression: This pipeline has all the test cases delivered in BSF ATS - 25.1.200.
4.1.3 Running BSF_NewFeatures Pipeline
BSF_NewFeatures Pipeline
- Click BSF_NewFeatures in the Name column and then click Configure in the left navigation pane as shown below:
Figure 4-5 BSF New Feature Pipeline
- The BSF_NewFeatures General tab appears. Make sure that the screen loads completely.
- Scroll down to the end. The control moves from the General tab to the Pipeline tab as shown below:
Figure 4-6 BSF New Features Pipeline Configuration
You can change the parameter values from "a" to "x" as per your requirement. The parameter details are available as comments from line number 2 to 7. In the Script area of the Pipeline section, you can change the values of the following parameters:
- a: Name of the NF to be tested in capital letters (BSF)
- b: Change this parameter to update the namespace where BSF was deployed in your bastion.
- c: Name of Prometheus service namespace (occne-prometheus-server)
- d: Change this parameter to update the namespace where your gostubs are deployed in your bastion.
- e: Set this parameter as 'unsecure', if you intend to run ATS in TLS disabled mode. Else, set this parameter as 'secure'.
- f: Configure this parameter to set BSF_NFINSTANCE_ID.
- g: Set a value of more than 45 seconds for this parameter. The default wait time for the pod to come up is 45 seconds. Every test case requires a restart of the nrf-client-management pod.
- h: Set a value of more than 60 seconds for this parameter. The default wait time to add a configuration to the database is 60 seconds.
- i: Set this parameter to more than 140 seconds. The default wait time for Nf_Notification test cases is 140 seconds.
- k: Use this parameter to set the waiting time to initialize Test Suite.
- l: Use this parameter to set the waiting time to get response from Stub.
- m: Use this parameter to set the waiting time after adding BSF Configuration.
- n: Use this parameter to set the waiting time for Peer connection establishment.
- o: Use this parameter to set the waiting time before sending next message.
- p: Use this parameter to set Prometheus Server IP.
- q: Use this parameter to set Prometheus Server Port.
- r: Use this parameter to set the interval after which the pod status is checked when it is down.
- s: Use this parameter to set the number of retry attempts to check whether the pod is down.
- t: Use this parameter to set the interval after which the pod status is checked when it is up.
- u: Use this parameter to set the number of retry attempts to check whether the pod is up.
- v: Use this parameter to set the wait time to connect to Elasticsearch.
- w: Use this parameter to set the Elasticsearch hostname.
- x: Use this parameter to set the Elasticsearch port.
- y: Use this parameter to enable or disable stub log collection.
- z: Use this parameter to set the log collection endpoint, either Elasticsearch or Kubernetes.
- A: Use this parameter to set the timer to wait for importing service configurations.
- B: Use bulk_import_to_complete to add a custom wait time in Jenkins after bulk imports.
- C: Use this parameter to set TLS version (1.2 or 1.3). The default value is 1.2.
(Optional) To collect application logs for each failed scenario, you can configure the values for the following parameters:
- z: If you want log collection to happen through Elasticsearch, set the value of this parameter to Elasticsearch. If not, specify the value as Kubernetes.
  If you want to collect logs through Elasticsearch, you must configure the values for the following parameters:
  - v: Specifies the wait time to connect to Elasticsearch (ELK_WAIT_TIME).
  - w: Specifies the hostname of Elasticsearch (ELK_HOST). For example, occne-elastic-elasticsearch-master.occne-infra/
  - x: Specifies the port for Elasticsearch (ELK_PORT). For example, 9200.
- y: If you want to collect stub logs, set the value of this parameter to yes. If not, specify the value as no.
- Click Save after updating the parameter values. The BSF_NewFeatures Pipeline page appears.
Note:
It is recommended to save a copy of the pipeline script on your local machine so that you can refer to it while restarting the ATS pods.
Attention:
Do not modify anything other than the parameter values described in this section.
Extracting Application Logs
- Log in to the ATS pod:
  kubectl exec -it pod/ocats-bsf-6f6dfc76b5-jbgzt -n ocbsf bash
- Go to the Jenkins build directory:
  cd $JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER/
  For example: cd /var/lib/jenkins/.jenkins/jobs/BSF_Regression/builds/2
- Extract the applogs.zip file:
  unzip applogs.zip
- After successfully unzipping the file, open the applog folder to view the pod logs for failed scenarios:
  (env) [jenkins@ocats-bsf-6f6dfc76b5-jbgzt applog]$ pwd
  /var/lib/jenkins/.jenkins/jobs/BSF_Regression/builds/2/applog
  (env) [jenkins@ocats-bsf-6f6dfc76b5-jbgzt applog]$ ls -ltrh
  total 760K
  -rw-r--r--. 1 jenkins jenkins 250K Nov 19 11:08 Initial_Run-Register_BSF_With_NFSetIDList.log
  -rw-r--r--. 1 jenkins jenkins 249K Nov 19 11:08 1st_Rerun-Register_BSF_With_NFSetIDList.log
  -rw-r--r--. 1 jenkins jenkins 255K Nov 19 11:09 2nd_Rerun-Register_BSF_With_NFSetIDList.log
Running BSF Test Cases
- Click the Build with Parameters link available in the
left navigation pane of the BSF_NewFeatures Pipeline
screen. The following page appears.
Figure 4-7 BSF New Features Build with Parameters
Note:
Make sure that the value of FilterWithTags and Include_NewFeatures is selected as NO.
- If you want to collect logs for any given build, select YES from the drop-down menu of Fetch_Log_Upon_Failure.
-
Click Build and select Console Output to view the test results. The following is a sample test result output:
Figure 4-8 Sample: Test Result Output in Console
Note:
For more details on the consolidated test report, see Managing Final Summary Report, Build Color, and Application Log.
Queuing Jenkins Jobs
Using this feature, you can queue a second job even when the current job is still running. The second job can be triggered either from the same or a different pipeline.
Table 4-1 Queuing Jenkins Jobs
Concurrent Builds | New Features: Current Build | New Features: New Build | Regression: Current Build | Regression: New Build | Result
---|---|---|---|---|---
Enabled | Running | Triggered | NA | NA | New build of New Features is added to the queue.
Enabled | Running | NA | NA | Triggered | New build of Regression is added to the queue.
Disabled | NA | NA | Running | Triggered | New build of Regression is added to the queue.
Disabled | NA | Triggered | Running | NA | New build of New Features is added to the queue.
4.1.4 Running BSF_HealthCheck Pipeline
This is a pre-configured pipeline where ATS performs a test probe with SUT. It triggers helm test and provides the results in Jenkins Console Logs.
You can run the BSF_HealthCheck pipeline to check whether all BSF pods are up and running. If they are, the pipeline reports a successful status. If any pod is down for any reason, the pipeline fails.
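Under the hood, the pipeline runs the equivalent of a Helm test against the BSF release. A minimal sketch of running the same check manually, assuming the release name and namespace ocbsf used in the examples in this chapter:
helm test ocbsf -n ocbsf
The command runs the chart's test hooks and reports whether the BSF pods respond as expected.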
- Click BSF_HealthCheck in the Name column.
- Click Configure in the left navigation pane.
- When you scroll down, the General tab becomes active. Make sure that the screen loads completely.
- Continue scrolling down until the
Pipeline tab becomes active. The following is a
screen capture that shows the Pipeline script:
Figure 4-9 Helm Test Script
- In the Script area under the
Pipeline section, users may customize the values for the following
parameters:
Attention:
Do not modify values of parameters other than the ones described in this section.
- a: Change this parameter to update the helm release name where BSF is deployed in your bastion.
- b: Change this parameter to update the namespace where BSF is deployed in your bastion.
- Click Save to update the values.
Running Helm Test
To run the Helm test for BSF, click Build Now.
4.1.5 BSF_NewFeatures Documentation
Figure 4-10 BSF_NewFeatures Feature List Documentation

Click any functionality to view its test cases and scenarios of each test case. For example, when you click FEATURE - BSF_Error_Response_Enhancements, the following test description appears:
Figure 4-11 Test Cases and Scenarios of Feature - BSF_Error_Response_Enhancements

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the pipeline BSF-NewFeatures screen, click Back to BSF_NewFeatures link available on top left corner of the screen.
Test Result Analyzer
Using the Test Result Analyzer plug-in available in ATS, user can view consolidated and detailed reports. For more information, see Test Results Analyzer section.
Test Case Mapping to Features and Display Total Counts
With this feature, users can view Total Count of Features, TestCases/Scenarios and TestCase mapping to each Feature of BSF in ATS View. For more information, see Support for Test Case Mapping and Count section.
Stub Predefined_priming Support
The Stub Predefined_priming configuration in ATS enables ATS to respond with the payload message instead of the default message when the prime configuration does not match the feature-level priming or when the stub is not primed.
- The stub checks against the feature-level prime configuration.
- If a match is found in the feature-level prime, the stub replies with the payload message.
- If no match is found, the stub replies with the default response.
  stub_log: No match found in prime configuration for sending the response. Sending default - 200
  ATS log: {default_response}
- Stub checks against the feature level prime configuration.
- If a match is found in feature level prime, feature level prime is used for responding to the request.
- If a match is not found in the feature level prime, but found in the pre-configured prime, pre-configured prime is used for responding to the request.
- If a match is found in both pre-primed as well as feature level prime, feature level prime configuration is given priority and the same is used for responding to the request.
- If a match is not found in both pre-primed as well as feature level prime, stub sends a default response.
4.1.6 Running BSF Regression Pipeline
This section describes how to run test cases for Binding Support Function (BSF) Regression pipeline.
The BSF_Regression pipeline is a pre-configured pipeline where all the test cases of previous releases are available. For example, for BSF 25.1.200, this pipeline has all the test cases released till BSF 25.1.200.
- Click BSF_Regression in the Name column.
- Click Build with Parameters in the left navigation pane.
- Copy the required test cases that are available in the BSF folder and place them appropriately within the custom folder for BSF_Regression.
- Reload the page to view the test cases available in the custom Regression folder.
- Click Build.
Figure 4-12 BSF_Regression Pipeline

Note:
The Regression pipeline does not have any sanity option. However, users must perform all the steps performed in the BSF_NewFeatures pipeline. Ensure that the pipeline script is configured according to the environment variables.
4.1.7 BSF_Regression Documentation
This section describes the documentation for BSF_Regression pipeline.
To view the documentation for any of the BSF features, on the ATS home page, click BSF_Regression. Then, click Documentation in the left navigation pane.
This page shows features of only those test cases that are released in previous releases.
Figure 4-13 BSF_Regression Features Documentation

4.2 Running NRF Test Cases using ATS
4.2.1 Prerequisites
To run NRF test cases using NRF ATS 25.1.200, ensure that the following prerequisites are fulfilled.
- For running three-site georedundancy test cases, NRF 25.1.200 should be deployed on three sites with DB replication enabled. Georedundancy feature must be enabled on all the NRF sites before running these test cases.
- For running two-site georedundancy test cases, NRF 25.1.200 should be deployed on two sites with DB replication enabled. These test cases are executed separately as they require two different NRFs. They can also be executed on a three-site georedundancy setup. The Georedundancy feature must be enabled on all the NRF sites before running these test cases.
- For running NRF-Growth test cases, two standalone NRF 25.1.200 instances should be deployed. These test cases are executed separately as they require two different NRFs.
- All the ATS pipelines other than georedundancy, three-site georedundancy, and NRF Growth must be executed with only one NRF deployment (standalone NRF).
- The following NRF database tables should not have any entries:
- nrfApplicationDB.NfInstances
- nrfApplicationDB.NfStatusMonitor
- nrfApplicationDB.NfSubscriptions
- NRFs should be deployed with the default Helm and REST configurations, other than ocnrfHost and ocnrfPort.
- To run NF-FQDN-Authentication-Feature test cases, deploy NRF and NRF ATS, both with NF Authentication feature changes.
- All microservices of NRF must be up and running including Alternate route and Artisan microservice.
- For running Alerts test cases, configure NRF Alerts on the AlertManager on the Prometheus Server. For more details on configuring alerts, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
- Before deploying NRF, user should create RSA and ECDSA certificates and keys (public and private) for AccessToken microservice.
- Deploy NRF 25.1.200 with default helm configurations using helm charts to run all test cases except NF-FQDN-Authentication-Feature cases.
- Ensure that all microservices of NRF are up and running, including the Accesstoken microservice. A key with the RSA algorithm and another with the ECDSA algorithm are mandatory.
- Copy the public keys (RSA and ECDSA) to the ATS pod at the /var/lib/jenkins/ocnrf_tests/public_keys location, with the names ec_public.pem and rs_public.pem respectively for ECDSA and RSA public keys. Create a copy of rs_public.pem and save it as new_public.pem in the same location.
- Create JSON files for key and certificate details and copy them to ATS at /var/lib/jenkins/ocnrf_tests/ location. Refer to KID feature specific Prerequisite.
- Deploy ATS using helm charts.
- For NRF 25.1.200, deploy five stub servers for running Roaming, SLF, Forwarding, and SLF via SCP functionality test cases. The service name for the stub servers must be notify-stub-service, notify-stub-service02, notify-stub-service03, amf1-cluster1-net2-amf-5gc-mnc016-mcc310-3gppnetwork-org, and slf-stub-service01.
- The slf-stub-service01 stub must be deployed with service port as 80. For more information, see Installing ATS for NRF.
- Replace the existing content with the following content in the alternate-route section of the ocnrf_custom_values.yaml file. The following configuration changes are required to accommodate alternate routing test cases based on the latest configuration:
  staticVirtualFqdns:
  - name: https://abc.test.com
    alternateFqdns:
    - target: notify-stub-service03
      port: 8080
      priority: 10
      weight: 20
    - target: notify-stub-service03
      port: 8080
      priority: 20
      weight: 30
  - name: http://xyz.test.com
    alternateFqdns:
    - target: notify-stub-service03
      port: 8080
      priority: 10
      weight: 20
    - target: notify-stub-service03
      port: 8080
      priority: 20
      weight: 30
  dnsSrvEnabled: false
- Ensure Prometheus service is up and running.
- Deploy ATS and the test stubs in the same namespace as NRF, as the default ATS deployment uses role binding.
- User must not initiate a job in two different pipelines at the same time.
- User must not abort a running job, as this may lead to data corruption in the database.
- To get PVC support for retaining NRF ATS environment variables and pipeline console histories, follow the installation steps. By default, PVC support is disabled.
- If the installation is done with service mesh, NRF must be deployed with the below annotations in the lbDeployments and nonlbDeployments sections:
  oracle.com/cnc: "true"
  traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
  traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"
  Here, the first one is for connecting to Operations Services Overlay (OSO) for Alerts test cases, and the second one is for connecting to ATS for fetching the metrics from pods.
- If Service Mesh is enabled, then create a destination rule to fetch
the metrics from the Prometheus. In most of the deployments, Prometheus is kept
outside the service mesh and a destination rule is required to communicate
between TLS enabled entity (ATS) and non-TLS enabled entity (Prometheus).
To create a rule:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: prometheus-dr
  namespace: ocnrf
spec:
  host: oso-prometheus-server.ocnrf.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF
In the above rule:
- name indicates the name of the destination rule.
- namespace indicates where the ATS is deployed.
- host indicates the hostname of the Prometheus server.
- If NRF is deployed with network policy enabled, then before NRF-ATS execution a new network policy must be created to access the NRF-ATS Jenkins GUI. The following is a sample:
  kubectl apply -f nodePort.yaml -n ocnrf
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-node-port
  spec:
    podSelector:
      matchLabels:
        app: ocats-nrf
    policyTypes:
    - Ingress
    ingress:
    - ports:
      - protocol: TCP
        port: 8080
- To run with CNE HA Prometheus, update the prom_type value in the product_config/global.yaml and custom_config/global.yaml files to "cne".
Custom Folder Implementation
ATS provides custom test case folders (cust_newfeatures, cust_regression, and cust_performance) using which you can add new test cases, remove unwanted test cases, and modify existing test cases. It does not impact the original product packaged test cases available in the newfeatures, regression and performance folders. For more details, refer to Custom Folder Implementation.
Apart from cust_newfeatures and cust_regression, NRF ATS has cust_FQDNauthentication, cust_GeoRedundancy, and cust_3SiteGeo directories that contain test cases related to NF-FQDN authentication and georedundancy, respectively.
Key Identifier (KID) feature specific Prerequisite
Create the following JSON files and copy them to the ATS pod:
- set_keyDetailsList.json
- set_currenKeyId.json
- tokenSigningDetails.json
$ cat set_keyDetailsList.json
{
"tokenSigningDetails": {
"defaultK8SecretDetails": {
"k8SecretNameSpace": "ocnrf",
"k8SecretName": "ocnrfaccesstoken-secret"
},
"keyDetailsList": [{
"keyID": "EcdsaKid",
"algorithm": "ES256",
"privateKey": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "ecdsa_private_key.pem"
},
"certificate": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "ecdsa_certificate.crt"
}
},
{
"keyID": "RsaKid",
"algorithm": "RS256",
"privateKey": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "rsa_private_key.pem"
},
"certificate": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "rsa_certificate.crt"
}
},
{
"keyID": "newKey",
"algorithm": "RS256",
"privateKey": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "rsa_private_key.pem"
},
"certificate": {
"k8SecretName": "ocnrfaccesstoken-secret",
"k8SecretNameSpace": "ocnrf",
"fileName": "rsa_certificate.crt"
}
}
]
}
}
Note:
Update the following parameters in the above files:
- k8SecretName
- k8SecretNameSpace
- fileName (name of the private keys and corresponding certificates)
For more information about parameter values, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
After updating the files, copy them under the /var/lib/jenkins/ocnrf_tests/ directory.
$ cat set_currenKeyId.json
{
"tokenSigningDetails":
{"currentKeyID":"EcdsaKid"
}
}
Note:
You need not update the currentKeyID name (for example, EcdsaKid) if the same name as in keyID is used while configuring the set_currenKeyId.json file.
$ cat tokenSigningDetails.json
{
"presentCurrentKey" : {
"algorithm" : "ES256",
"privateKey" : {
"k8sSecretName" : "ocnrfaccesstoken-secret",
"fileName" : "ecdsa_private_key.pem"
},
"certificate" : {
"k8sSecretName" : "ocnrfaccesstoken-secret",
"fileName" : "ecdsa_certificate.crt"
}
},
"newCurrentKey" : {
"algorithm" : "RS256",
"privateKey" : {
"k8sSecretName" : "ocnrfaccesstoken-secret",
"fileName" : "rsa_private_key.pem"
},
"certificate" : {
"k8sSecretName" : "ocnrfaccesstoken-secret",
"fileName" : "rsa_certificate.crt"
}
}
}
Note:
Update the following parameters in the above code snippet:
- k8SecretName
- fileName (name of the private keys and corresponding certificates)
For more information about parameter values, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
$ kubectl cp tokenSigningDetails.json ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/tokenSigningDetails.json -n ocnrf
$ kubectl cp set_keyDetailsList.json ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/set_keyDetailsList.json -n ocnrf
$ kubectl cp set_currenKeyId.json ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/set_currenKeyId.json -n ocnrf
$ kubectl cp ec_public.pem ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/public_keys/ec_public.pem -n ocnrf
$ kubectl cp rs_public.pem ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/public_keys/rs_public.pem -n ocnrf
$ kubectl cp rs_public.pem ocats-ocats-nrf-58fc5dcbb9-zjsks:/var/lib/jenkins/ocnrf_tests/public_keys/new_public.pem -n ocnrf
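To confirm that the keys and JSON files landed in the expected locations inside the ATS pod, a quick check such as the following can be used (pod name as in the examples above):
kubectl exec -it ocats-ocats-nrf-58fc5dcbb9-zjsks -n ocnrf -- /bin/bash -c \
  'ls -l /var/lib/jenkins/ocnrf_tests/*.json /var/lib/jenkins/ocnrf_tests/public_keys/'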
Prerequisites to Run Testcases for the CCA-Header Feature
To run the pipeline for the CCA-Header feature, NRF must be deployed with the following changes:
- During the installation of NRF, the namespace, secret name, and the caroot.cer file name must be provided in NRF's custom values, in the ingress-gateway section, as follows:
  ingress-gateway:
    ccaHeaderValidation:
      k8SecretName: ocingress-secret
      k8NameSpace: ocnrf
      fileName: caroot.cer
- In NRF's custom values, metadata.ccaHeaderValidation.enabled should be set to true in the 'accesstoken_mapping' object from the ingress-gateway.routesConfig list.
  Note:
  If ccaHeaderValidation is enabled in NRF, only the CCA-Header pipeline should be run. The other pipelines (Newfeatures, Regression, 2SiteGeo, 3SiteGeo, or FQDN-Authentication) will not work with the feature enabled.
  ingress-gateway:
    routesConfig:
    - id: accesstoken_mapping
      uri: http://{{ template "accesstoken.service.fullname" . }}:{{ template "accesstoken.service.port" . }}
      path: /oauth2/token
      order: 4
      filters:
        controlledShutdownFilter:
          applicableShutdownStates:
          - COMPLETE_SHUTDOWN
      metadata:
        ccaHeaderValidation:
          enabled: true
- Create a secret with the caroot.cer file provided in the ocats_ocnrf_tests_jenkinsjobs_25.1.200.tgz, using the following command. This same caroot.cer file must be configured in the secret to run the CCA-Header cases.
  kubectl create secret generic <secret name> --from-file=<ca root file name> -n <namespace>
  For example: kubectl create secret generic ocingress-secret --from-file=caroot.cer -n ocnrf
- Update the secret name from step 3 under the ccaHeaderSecretName parameter in the global.yaml file in either /var/lib/jenkins/ocnrf_tests/product_config or /var/lib/jenkins/ocnrf_tests/custom_config, based on whether the pipeline will be run with the Product_Config or Custom_Config option.
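A minimal sketch of making this update from outside the pod, assuming the secret name from the earlier example, that ccaHeaderSecretName is a top-level key in global.yaml, and that the pipeline will be run with the Product_Config option (use the custom_config path for Custom_Config):
kubectl exec -it <ocats-nrf pod> -n <namespace> -- /bin/bash -c \
  'sed -i "s/^ccaHeaderSecretName:.*/ccaHeaderSecretName: ocingress-secret/" \
   /var/lib/jenkins/ocnrf_tests/product_config/global.yaml'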
Prerequisites to Run Testcases for the NRF Growth Features
To run the pipeline for the NRF Growth feature, the following prerequisites must be met.
- Install two NRFs that are not georedundant with each other.
- The pipeline configuration has to be set to the below before running
the pipeline. The values of options "u" to "x" have to be set to
0.1.
node ('master'){
  //a = SELECTED_NF b = NF_NAMESPACE c = FT_ENDPOINT d = GATEWAY_IP e = GATEWAY_PORT
  //f = CONFIG_IP g = CONFIG_PORT h = STUB_IP i = STUB_PORT j = NFINSTANCEID
  //k = PROMETHEUS_IP l = PROMETHEUS_PORT m = RERUN_COUNT n = PROMETHEUS_URI o = HELM_RELEASE_NAME
  //p = MYSQL_HOST q = PRIVILEGED_USER_SECRET_NAME r = DISCOVERY_WAIT_TIME
  //s = REPLICATION_STATUS_URI t = CLUSTER_DOMAIN u = REGISTER_WRITE_WAIT_TIME
  //v = SUBSCRIPTION_WAIT_TIME w = ACCESSTOKEN_WAIT_TIME x = REGISTER_READ_WAIT_TIME
  sh '''
  sh /var/lib/jenkins/ocnrf_tests/preTestConfig.sh \
  -a NRF \
  -b occne-cndbtierone,occne-cndbtiertwo \
  -c ocnrf-ingressgateway.ocnrf.svc.cluster.local:80,ocnrf-1-ingressgateway.ocnrf.svc.cluster.local:80 \
  -d ocnrf-ingressgateway.occne-cndbtierone,ocnrf-1-ingressgateway.occne-cndbtiertwo \
  -e 80,80 \
  -f ocnrf-nrfconfiguration.occne-cndbtierone,ocnrf-1-nrfconfiguration.occne-cndbtiertwo \
  -g 8080,8080 \
  -h notify-stub-service.occne-cndbtierone,notify-stub-service02.occne-cndbtierone \
  -i 8080,8080 \
  -j 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c,6faf1bbc-6e4a-4454-a507-a14ef8e1bc5d \
  -k oso-prom-svr.oso \
  -l 80 \
  -m 2 \
  -n /prometheus/api/ \
  -o ocnrf,ocnrf-1 \
  -p mysql-connectivity-service.occne-cndbtierone,mysql-connectivity-service.occne-cndbtiertwo \
  -q privilegeduser-secret \
  -r 0.5 \
  -s http://ocnrf-ocnrf-app-info:5906/status/category/replicationstatus,http://ocnrf-1-ocnrf-app-info:5906/status/category/replicationstatus \
  -t svc.cluster.local \
  -u 0.1 \
  -v 0.1 \
  -w 0.1 \
  -x 0.1
  '''
  if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){
    sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Merged"
  }
  else{
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Regression"
  }
}
4.2.2 Logging into ATS
Running ATS
Note:
To modify the default login password, refer to Modifying Login Password.
Figure 4-14 Verifying ATS Pod

For more information on verifying ATS deployment, see Verifying ATS Deployment.
Note:
In the Verifying ATS Pod screen, slave2 is the node where ATS is deployed, 30348 is the ATS nodeport, and 10.75.225.227 is the worker node IP, highlighted in red. For more details on ATS deployment, refer to Installing ATS for NRF.
Figure 4-15 ATS Login

- Enter the login credentials. Click
Sign in. The
following screen appears.
Figure 4-16 NRF Pre-Configured Pipelines
NRF ATS has two pre-configured pipelines.
- NRF-NewFeatures: This pipeline has all the test cases delivered as part of NRF ATS - 25.1.200.
- NRF-Regression: This pipeline has all the test cases delivered so far in the previous releases.
4.2.3 NRF-NewFeatures Pipeline
After identifying the NRF pipelines, configure ATS as a one-time activity as per the NRF deployment. This pipeline contains all the new test cases related to NRF. To configure its parameters:
- Click NRF-NewFeatures in the Name column. The
following screen appears:
Figure 4-17 Configuring NRF-NewFeatures
In the above screen:
- Click Configure to configure NRF-New Features.
- Click Documentation to view the documented test cases, which are part of this NRF release.
- Click the blue dots inside the Build History box to view the success console logs of "Sanity" and "All-NewFeatures" respectively.
- The Stage View represents the already deployed pipeline for the customer reference.
- The Test Results Analyzer is the new plugin integrated into NRF-ATS. This option can be used to display the history of all previous executions, build-wise, and provides a graphical representation of past executions.
- Click Configure. User must wait for the page to load
completely. Once the page loads completely, click the Pipeline tab:
Note:
Make sure that the following page loads completely before you perform any action on it. Also, do not modify any configuration other than what is shown below.
Figure 4-18 Pipeline Tab
The Pipeline section of the configuration page appears as follows:
Figure 4-19 Pipeline Section
In the above screenshot, change the values of the 'Pipeline script'. The content of the pipeline script is as follows:
Figure 4-20 Pipeline Script
Note:
The user must not change any other value apart from line number 15 to line 37.
You can change the parameter values from "a" to "x" as per your requirement. The parameter details are available as comments from line number 2 to 7.
- a: Name of the NF to be tested in capital letters (NRF).
- b: Namespace in which the NRF is deployed.
- c: endPointIP: endPointPort value used while deploying NRF with the help of helm chart.
- d: Comma separated values of NRF1 and NRF2 ingress gateway service (For example: ocnrf-ingressgateway.ocnrf,1.1.1.1). It is also known as cluster_domain.
- e: Comma separated values of NRF1 and NRF2 port of ingressgateway service (For example: 80,31000).
- f: Comma separated values of NRF1 and NRF2 configuration service (For example: ocnrf-nrfconfiguration.ocnrf,1.1.1.1). It is also known as cluster_domain.
- g: Comma separated values of NRF1 and NRF2 port of configuration service (For example: 8080, 31001).
- h: Name_of_stub_service.namespace (notify-stub-service.ocnrf).
- i: Port of stub service (8080).
- j: NRF_Instance ID (6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c).
- k: Name_of_Prometheus_service.namespace (occne-prometheus-server.occne-infra).
- l: Port of Prometheus service (80).
- m: Number of times the re-run of failed case is allowed (default as 2).
- n: URI of Prometheus service (/api/):
- Use "/api/" for CNE version till 1.6.0 and "/prometheus/api/" for CNE version 1.7.0 and 1.8.0.
- For, CNE 1.9.x, CNE 1.10.x, and CNE 22.1.0, cluster name is needed in the URI. The URI must be <cluster_name>/prometheus/api. For example: /bmw/prometheus/api (For more details on the Prometheus URI, check corresponding CNE version's user guide.)
- If OSO is being used, use "/api/" for OSO-1.6.1 or earlier and use "/prometheus/api/" for OSO-1.6.2 or later.
- For OSO-1.10.0, the URI must be <cluster_name>/prometheus/api. For example: /bmw/prometheus/api (For more details on the Prometheus URI, check corresponding OSO version's user guide)
- o: Helm_release_name used to deploy NRF
- p: Host name of MySQL (For example: mysql-connectivity-service)
- q: Privileged User secret name (For example: privilegeduser-secret)
- r: wait time before sending discovery request (0.5)
- s: comma-separated values of replicationStatusUri(s) of the appinfo(s).
- t: the cluster domain
- u: wait time before sending registration requests for write operations
- v: wait time before sending subscription requests
- w: wait time before sending accesstoken requests
- x: wait time before sending registration requests for read operation
Note:
- If the 2-Site-Geo cases are run, the user has to
provide values for NRF2 in corresponding environment variables in
the above script as per the deployment. A sample configuration is
given as
below:
node ('master'){
  //a = SELECTED_NF b = NF_NAMESPACE c = FT_ENDPOINT d = GATEWAY_IP e = GATEWAY_PORT
  //f = CONFIG_IP g = CONFIG_PORT h = STUB_IP i = STUB_PORT j = NFINSTANCEID
  //k = PROMETHEUS_IP l = PROMETHEUS_PORT m = RERUN_COUNT n = PROMETHEUS_URI o = HELM_RELEASE_NAME
  //p = MYSQL_HOST q = PRIVILEGED_USER_SECRET_NAME r = DISCOVERY_WAIT_TIME
  //s = REPLICATION_STATUS_URI t = CLUSTER_DOMAIN u = REGISTER_WRITE_WAIT_TIME
  //v = SUBSCRIPTION_WAIT_TIME w = ACCESSTOKEN_WAIT_TIME x = REGISTER_READ_WAIT_TIME
  sh '''
  sh /var/lib/jenkins/ocnrf_tests/preTestConfig.sh \
  -a NRF \
  -b occne-cndbtierone,occne-cndbtiertwo \
  -c ocnrf-ingressgateway.ocnrf.svc.cluster.local:80,ocnrf-1-ingressgateway.ocnrf.svc.cluster.local:80 \
  -d ocnrf-ingressgateway.occne-cndbtierone,ocnrf-1-ingressgateway.occne-cndbtiertwo \
  -e 80,80 \
  -f ocnrf-nrfconfiguration.occne-cndbtierone,ocnrf-1-nrfconfiguration.occne-cndbtiertwo \
  -g 8080,8080 \
  -h notify-stub-service.occne-cndbtierone,notify-stub-service02.occne-cndbtierone \
  -i 8080,8080 \
  -j 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c,6faf1bbc-6e4a-4454-a507-a14ef8e1bc5d \
  -k oso-prom-svr.oso \
  -l 80 \
  -m 2 \
  -n /prometheus/api/ \
  -o ocnrf,ocnrf-1 \
  -p mysql-connectivity-service \
  -q privilegeduser-secret \
  -r 0.5 \
  -s http://ocnrf-ocnrf-app-info:5906/status/category/replicationstatus,http://ocnrf-1-ocnrf-app-info:5906/status/category/replicationstatus \
  -t svc.cluster.local \
  -u 0 \
  -v 0 \
  -w 0 \
  -x 0
  '''
  if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){
    sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Merged"
  }
  else{
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Regression"
  }
}
- If 3SiteGeo and georedundancy suite in
Regression pipeline cases are run, the user has to provide values
for NRF2, NRF3 in corresponding environment variables in the above
script as per the deployment. A sample configuration is given as
below:
node ('master'){
  //a = SELECTED_NF b = NF_NAMESPACE c = FT_ENDPOINT d = GATEWAY_IP e = GATEWAY_PORT
  //f = CONFIG_IP g = CONFIG_PORT h = STUB_IP i = STUB_PORT j = NFINSTANCEID
  //k = PROMETHEUS_IP l = PROMETHEUS_PORT m = RERUN_COUNT n = PROMETHEUS_URI o = HELM_RELEASE_NAME
  //p = MYSQL_HOST q = PRIVILEGED_USER_SECRET_NAME r = DISCOVERY_WAIT_TIME
  //s = REPLICATION_STATUS_URI t = CLUSTER_DOMAIN u = REGISTER_WRITE_WAIT_TIME
  //v = SUBSCRIPTION_WAIT_TIME w = ACCESSTOKEN_WAIT_TIME x = REGISTER_READ_WAIT_TIME
  sh '''
  sh /var/lib/jenkins/ocnrf_tests/preTestConfig.sh \
  -a NRF \
  -b occne-cndbtierone,occne-cndbtiertwo,occne-cndbtierthree \
  -c ocnrf-ingressgateway.ocnrf.svc.cluster.local:80,ocnrf-1-ingressgateway.ocnrf.svc.cluster.local:80,ocnrf-2-ingressgateway.ocnrf.svc.cluster.local:80 \
  -d ocnrf-ingressgateway.occne-cndbtierone,ocnrf-1-ingressgateway.occne-cndbtiertwo,ocnrf-2-ingressgateway.occne-cndbtierthree \
  -e 80,80,80 \
  -f ocnrf-nrfconfiguration.occne-cndbtierone,ocnrf-1-nrfconfiguration.occne-cndbtiertwo,ocnrf-2-nrfconfiguration.occne-cndbtierthree \
  -g 8080,8080,8080 \
  -h notify-stub-service.occne-cndbtierone,notify-stub-service02.occne-cndbtierone,notify-stub-service03.occne-cndbtierone \
  -i 8080,8080,8080 \
  -j 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c,6faf1bbc-6e4a-4454-a507-a14ef8e1bc5d,6faf1bbc-6e4a-4454-a507-a14ef8e1bc5e \
  -k oso-prom-svr.oso \
  -l 80 \
  -m 2 \
  -n /prometheus/api/ \
  -o ocnrf,ocnrf-1,ocnrf-2 \
  -p mysql-connectivity-service \
  -q privilegeduser-secret \
  -r 0.5 \
  -s http://ocnrf-ocnrf-app-info:5906/status/category/replicationstatus,http://ocnrf-1-ocnrf-app-info:5906/status/category/replicationstatus,http://ocnrf-2-ocnrf-app-info:5906/status/category/replicationstatus \
  -t svc.cluster.local \
  -u 0 \
  -v 0 \
  -w 0 \
  -x 0
  '''
  if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){
    sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Merged"
  }
  else{
    load "/var/lib/jenkins/ocnrf_tests/jenkinsData/Jenkinsfile-Regression"
  }
}
Note:
If the user is using CNE 1.6 or OSO 1.6.1 or earlier, change the Prometheus service URI (-n) to "/api/" and the Prometheus service name and port to appropriate values. For CNE 1.7.0 or OSO 1.6.2 or later, retain the default value "/prometheus/api/". If OSO 1.10.0 is used, use the URI along with the cluster_name (for example: /bmw/prometheus/api).
- Click Save after making necessary changes. The
NRF-NewFeatures screen appears. Click the Build with Parameters link
available in the left navigation pane of the NRF-NewFeatures Pipeline
screen.
In the Pipeline screen, there are three Select_Option(s), which are:
- All: This is the default option. It runs all the NRF test cases. Scroll down and click Build to execute all the test cases.
- Sanity: It is recommended to run sanity before running any test case. It ensures all the deployments are done properly.
- Single/MultipleFeatures: This option allows you
to select any number of test cases that you want to run from the
list of total test cases available for execution.
After selecting the test cases, scroll down and click Build to run the selected NRF test cases.
- Select one of the following configuration types:
- Product_Config: On selecting this option, test cases from product folders are populated on ATS UI and product configuration is applied to them via the key-value pair and yaml files defined or present in the "Product Config" folder.
- Custom_Config: On selecting this option, test
cases from custom folders are populated on ATS UI and custom
configuration is applied to them via the key-value pair and yaml files
defined or present in the "Custom Config" folder. To use the Parameterization feature, always select the
Custom_Config option. User can copy, add, or delete the required test
cases that are available for the NRF and place them appropriately within
the custom folder for NRF-NewFeatures. Reload the page to view the test
cases available in the custom NewFeatures folder.
For more information, see Parameterized approach for SUT custom configuration.
- NRF Sanity - This feature file contains all the basic sanity cases for NRF ATS to validate the deployment is correct or not. It is advisable for user to run these cases before starting a complete suite.
- Discovery - These feature files are listed with a prefix as "Disc". All the discovery microservice related cases will be listed once this option is selected.
- NRF Functional - These feature files are listed with a prefix as "Feat". All the functional cases will be listed once this option is selected.
- Registration - These feature files are listed with a prefix as "Upd". These are related to update operation of registered profiles. All the registration cases will be listed once this option is selected.
- Subscription- These feature files are listed with a prefix as "Subs". All the subscription microservice related cases will be listed once this option is selected.
- Roaming -These feature files are listed with a prefix as "Roaming". All the roaming cases will be listed once this option is selected.
Figure 4-21 Sample Screen: NRF-ATS Full Execution

Figure 4-22 Test Cases Result - Sanity

Note:
For more details on the consolidated test report, see Managing Final Summary Report, Build Color, and Application Log.
Figure 4-23 Test Cases Result - All-NewFeatures

Parameterized approach for SUT custom configuration
- Add new test cases by adding datafiles
- Remove test cases
- Modify the parameters and their values in the key-value pair or
<feature>.yaml
files
Cust ConfigN
where N can be any number
At the time of execution, ensure that you rename the required folder to the Cust Config folder, as Jenkins always retrieves data from this folder when the user selects Custom_Config.
To use
Custom_Config, it is required to change the value of cust_folder
from data to cust_data in
global.yaml
file. In addition, you can customize the
parameters and their respective values in the global.yaml
as per the requirements.
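Before changing the value, it can help to locate the cust_folder key in the global.yaml files inside the ATS pod. A minimal sketch, assuming the pod name and the product_config/custom_config paths shown earlier in this chapter; the key can then be changed from data to cust_data with your preferred editor or sed:
kubectl exec -it <ocats-nrf pod> -n <namespace> -- /bin/bash -c \
  'grep -n "cust_folder" /var/lib/jenkins/ocnrf_tests/product_config/global.yaml /var/lib/jenkins/ocnrf_tests/custom_config/global.yaml'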
<feature>.yaml
files for parameterized
feature:
- In addition to
global.yaml
parameters, feature files may also contain parameters for which the user can update values at the time of running pipelines.
- Changing the values of parameters tagged as "Feature Specific Value" may cause failures at the time of running pipelines.
- Values for parameters tagged with #START_GLOBAL and
#END_GLOBAL tags take values from
global.yaml
.
Note:
For NRF-ATS release 25.1.200, parameterization is supported for the NRF NewFeatures pipeline only.
4.2.4 NRF-NewFeatures Documentation
To view NRF test cases, go to NRF-NewFeatures pipeline and click the Documentation link in the left navigation pane. It lists all the test cases provided as part of NRF ATS -25.1.200 along with sanity cases.
The NRF test cases are divided into multiple groups based on the functionality.
- NF_CONFIGURATION_CASES- All the cases related to NRF configuration will be listed once this option is selected.
- NF_DISCOVERY_CASES - All the discovery microservice related cases will be listed once this option is selected.
- NRF_FORWARDING_CASES - All the forwarding related cases will be listed once this option is selected.
- NF_FUNCTIONAL_CASES - All the functional cases will be listed once this option is selected.
- NF_REGISTRATION_CASES- All the registration cases will be listed once this option is selected.
- NF_SANITY_CASES- All the sanity cases will be listed once this option is selected.
- NF_SLF_CASES - All the SLF related cases will be listed once this option is selected.
Figure 4-24 NRF-NewFeatures Documentation

Figure 4-25 Sample Feature: NF_BASIC_SANITY_CASES

Based on the functionalities covered under Documentation, the Build Requires Parameters screen displays test cases. To navigate back to the Pipeline NRF-NewFeatures screen, click the Back to NRF-NewFeatures link available on top left corner of the screen.
4.2.5 NRF-Regression Pipeline
This pre-configured pipeline contains all the test cases delivered till NRF ATS 25.1.200. However, some test cases are updated as per new implementation of NRF.
The configuration method and parameters are same as the NewFeatures pipeline.
The only difference in this pipeline is that, while executing 2-Site-Geo and 3-Site-Geo test cases, the user has to provide appropriate values for NRF2 and NRF3 in the Pipeline script. In this pipeline, the user does not get the option to run Sanity.
- 2-Site-Geo - These feature files are listed with a prefix as "Geo_Redundancy".
- 3-Site-Geo- These feature files contain all the cases which are executed on 3 Site GEO NRF.
- CCA-Header- These feature files contain all cases related to CCA Header feature.
- NF-FQDN-Authentication - These feature files are listed with a prefix as "NfAuthentication".
- AccessToken - These feature files are listed with a prefix as "oAuth" and "AccessToken".
- Alerts - These feature files are listed with a prefix as "Alert".
- Configuration - These feature files are listed with a prefix as "Config" and "SystemOptions".
- Discovery - These feature files are listed with a prefix as "Disc".
- NRF Forwarding - These feature files are listed with a prefix as "Forwarding".
- NRF Functional - These feature files are listed with a prefix as "Feat".
- Key-ID - These feature files are listed with a prefix as "Kid".
- NRF State Data - These feature files are listed with a prefix as "State".
- Preferred Locality - These feature files are listed with a prefix as "Pref".
- Registration - These feature files are listed with a prefix as "HBTimerEnhancement", "Reg" and "Upd". These are related to update operation of registered profiles.
- NRF SLF - These feature files are listed with a prefix as "SLF".
- Subscription - These feature files are listed with a prefix as "Subs".
- NRF Growth - These feature files contain all cases related to NRF Growth feature.
Figure 4-26 NRF-Regression

Figure 4-27 NRF-Regression - All-default-Regression

4.2.6 NRF-Regression Documentation
Click Documentation in the left navigation pane of the NRF-Regression pipeline to view all the test cases provided till NRF ATS 25.1.200.
- NF_3SITEGEO_CASES- All the 3-Site Geo-Redundancy related cases will be listed once this option is selected.
- NF_ACCESSTOKEN_CASES - All the test cases for validating accesstoken operation will be listed once this option is selected.
- NF_ALERTS_CASES - Test case for validating alerts will be listed once this option is selected.
- NF_ALTERNATEROUTE_CASES- All the alternate route microservice related cases will be listed once this option is selected.
- NF_ARTISAN_CASES- All the NRF artisan microservice related cases will be listed once this option is selected.
- NF_CCAHEADER_CASES - Cases related to CCAHeader feature will be listed once this option is selected.
- NF_CHFINFO_CASES - All the ChfInfo enhancement cases will be listed once this option is selected.
- NF_CONFIGURATION_CASES - Cases related to NRF configuration will be listed once this option is selected.
- NF_DISCOVERY_CASES - All the discovery microservice related cases will be listed once this option is selected.
- NF_FORWARDING_CASES - All the forwarding related cases will be listed once this option is selected.
- NF_FQDNAUTHENTICATION_CASES - All the NF FQDN authentication cases will be listed once this option is selected.
- NF_FUNCTIONAL_CASES - All the functional cases will be listed once this option is selected.
- NF_GEOREDUNDANCY_CASES - All the Geo-Redundancy related cases (2-Site-GEO and 3-Site-GEO both) will be listed once this option is selected.
- NF_KID_CASES - All the K-ID feature related cases will be listed once this option is selected.
- NF_NRFGROWTH_CASES - All growth related cases will be listed once this option is selected.
- NF_NRFSTATE_DATA_CASES - All the nfStateData related cases will be listed once this option is selected.
- NF_PREFERRED_LOCALITY_CASES - All the preferred locality feature related cases will be listed once this option is selected.
- NF_REGISTRATION_CASES - All the registration related cases will be listed once this option is selected.
- NF_ROAMING_CASES - All the roaming related cases will be listed once this option is selected.
- NF_SLF_CASES - All the SLF related cases will be listed once this option is selected.
- NF_SUBSCRIPTION_CASES - All subscription related cases will be listed once this option is selected.
Figure 4-28 NRF-Regression Documentation

Figure 4-29 Sample Screen: NRF-Regression Documentation

4.3 Running NSSF Test Cases using ATS
This section describes how to run NSSF test cases using ATS.
4.3.1 Prerequisites
The prerequisites to run NSSF Test Cases using NSSF ATS 25.1.200 are:
- Deploy NSSF 25.1.200 with default helm configurations using helm charts.
- All NSSF microservices must be up and running.
- NSSF and NSSF ATS must be deployed in the same namespace.
- For NSSF ATS 25.1.200, deploy one stub server and the service name should be "amf-stubserver". It is required to run AMF-subscription Notification functionality test cases.
- For NSSF ATS 25.1.200, deploy one stub server and the service name should be "nrf-stubserver". It is required to run NRF-subscription Notification functionality test cases.
- For NSSF ATS 25.1.200, deploy one stub server and the service name should be "nrf-stubserver1". It is required to run NRF-selection based on DNS SRV.
- For NSSF ATS 25.1.200, deploy one stub server and the service name should be "nrf-stubserver2". It is required to run NRF-selection based on DNS SRV.
- For NSSF 25.1.200, deploy one DNS stub server and the service name should be "ocdns-bind". It is required to run NRF-selection based on DNS SRV.
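Before running the pipelines, it can help to confirm that all the required stub services are present in the NSSF namespace. A minimal check, assuming the service names listed above and a namespace placeholder:
kubectl get svc -n <NSSF_namespace> | egrep 'amf-stubserver|nrf-stubserver|ocdns-bind'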
4.3.2 Logging into ATS
Before logging into ATS, deploy ATS using HELM charts as shown below:
Verify ATS deployment
[opc@ocnssf-oci-phx-einstein-bastion-01 ~]$ helm status ocats
NAME: ocats
LAST DEPLOYED: Thu May 30 10:21:40 2024
NAMESPACE: cicdnssf-240530102024
STATUS: deployed
REVISION: 1
TEST SUITE: None
There are two ways to log in to ATS GUI.
- When an external load balancer (metalLB in case of OCCNE) is available and you provide an external IP to the ATS service, the user can log in to ATS GUI using <External-IP>:8080.
- When you do not provide an external IP to the ATS service, open the
browser and enter the external IP of the worker node and nodeport of the ATS
service to log in to the ATS GUI.
<Worker-Node-IP>:<Node-Port-of-ATS>
Note:
In the Verifying ATS Deployment screenshot, the ATS nodeport is highlighted in red as 32013. For more details on ATS deployment, refer to NSSF ATS Installation Procedure.
Open a browser and enter the IP Address and port details as <Worker-Node-IP>:<NodePort-of-ATS> (In the above example, the Worker-Node-IP and NodePort-of-ATS are 10.98.101.177:32013, which are shown as highlighted in the screenshot above).
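If the nodeport is not known, it can be read from the ATS service in the same way as for BSF. A minimal sketch, assuming the ATS release name ocats and the namespace shown in the helm status output above:
kubectl get svc -n cicdnssf-240530102024 | grep ocats
kubectl get nodes -o wide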
The ATS login screen appears.

- Enter the username as 'nssfuser' and password as 'nssfpasswd'.
Click Sign in. A page with preconfigured pipelines appears.
Note:
To modify the default login password, refer to Modifying Login Password.
- NSSF-NewFeatures: This pipeline has all the test cases delivered as part of NSSF ATS 25.1.200.
- NSSF-Regression: This pipeline has the test cases of all the previous releases.
4.3.3 NSSF-NewFeatures Pipeline
In this pipeline, you can configure ATS, which is a one-time activity as per System Under Test (SUT) deployment. You can also run all the new NSSF test cases using the pipeline. To configure its parameters:
- Click NSSF-NewFeatures in the Name column. The
following screen appears:
In the above screen:
- Click Configure to access the configuration screen.
- Click Documentation to view the documented test cases.
- Click blue dots inside the Build History box to view the success console logs of the "All" and "Sanity" respectively.
- The Stage View represents the already run pipeline for customer reference.
- Click Configure. Users MUST wait for the page to load
completely. Once the page loads completely, click the Pipeline tab to
reach the Pipeline configuration as shown below:
WARNING:
Make sure that the screen shown above loads completely before you perform any action on it. Also, do not modify any configuration other than that discussed below.
- You can modify script pipeline parameters from "a" to "B" on the
basis of your deployment environment and click Save. The content of the pipeline
script is as
follows:
node ('built-in'){
    //a = SELECTED_NF b = NF_NAMESPACE c = INGRESS_GATEWAY_IP d = EGRESS_GATEWAY_IP
    //e = PROMETHEUS_SVC_IP f = STUB_IP g = PROMETHEUS_PORT h = RERUN_COUNT
    //i = NF_DB j = NF_DB_SECRET k = NSCONFIG_IP l = NRF_STUB_IP
    //m = HELM_RELEASE_NAME n = ATS_RELEASE_NAME o = NSSF_INGRESS_GATEWAY_PORT
    //p = NSSF_EGW_PORT q = NSSF_CONFIG_PORT r = NSSF_SELECTION_SVC_NAME
    //s = NSSF_AUDITOR_SVC t = NSSF_AVAILABILITY_SVC_NAME u = NSSF_SUBSCRIPTION_SVC_NAME
    //v = NSSF_APP_INFO_SVC w = NSSF_PERFINFO_SVC x = NSSF_INSTANCEID
    //y = NRF_STUB_1_SVC_NAME z = NRF_STUB_2_SVC_NAME A = NSSF_NRF_CLIENT_SVC_NAME
    //B = SUPPORTED_PLMN_LIST_MCC_MNC
    withEnv([
    ]){
        sh '''
            sh /var/lib/jenkins/ocnssf_tests/preTestConfig.sh \
            -a NSSF \
            -b ocnssf \
            -c ocnssf-ingress-gateway.ocnssf \
            -d ocnssf-egress-gateway.ocnssf \
            -e occne-prometheus-server.occne-infra \
            -f amf-stubserver.ocnssf \
            -g 80 \
            -h 2 \
            -i ocnssf-nsdb.ocnssf \
            -j ocnssf-db-creds \
            -k ocnssf-nsconfig.ocnssf \
            -l nrf-stubserver.ocnssf \
            -m ocnssf \
            -n ocats \
            -o 8081 \
            -p 8080 \
            -q 8080 \
            -r ocnssf-nsselection.ocnssf \
            -s ocnssf-nsauditor.ocnssf \
            -t ocnssf-nsavailability.ocnssf \
            -u ocnssf-nssubscription.ocnssf \
            -v ocnssf-ocnssf-app-info.ocnssf \
            -w ocnssf-ocnssf-perf-info.ocnssf \
            -x 9faf1bbc-6e4a-4454-a507-aef01a101a01 \
            -y nrf-stubserver1.ocnssf \
            -z nrf-stubserver2.ocnssf \
            -A ocnssf-ocnssf-nrf-client-nfmanagement.ocnssf \
            -B 311 480 \
        '''
        if(env.Include_Regression && "${Include_Regression}" == "YES"){
            sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Merged"
        }
        else{
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-NewFeatures"
        }
    }
}
Note:
The User MUST NOT change any other value apart from these parameters.
The description of these parameters is as follows:
- a: Name of the NF to be tested in capital (NSSF).
- b: Namespace in which the NSSF is deployed (default is ocnssf)
- c: Ingress Gateway IP address (default is ocnssf-ingress-gateway.ocnssf)
- d: Egress Gateway IP address (default is ocnssf-egress-gateway.ocnssf)
- e: Prometheus service IP address (default is prometheus.cne-infra)
- f: Stub service IP address (default is ocats-amf-stubserver.ocnssf)
- g: Port of Prometheus service (default is 80)
- h: Number of times the re-run of failed case is allowed (default is 2).
- i: Database name (default is ocnssf-nsdb.ocnssf)
- j: Database secret name (default is ocnssf-db-creds)
- k: NSSF config ip address (ocnssf-nsconfig.ocnssf)
- l: NRF stub server IP address (ocats-nrf-stubserver.ocnssf)
- m: NSSF release name (ocnssf)
- n: ATS release name (ocats)
- o: NSSF Ingress Gateway Port
- p: NSSF Egress Gateway Port
- q: NSSF Config Port
- r: NSSF Selection Service Name
- s: NSSF Auditor Service Name
- t: NSSF Availability Service Name
- u: NSSF Subscription Service Name
- v: APP Info Service Name
- w: Perf Info Service Name
- x: NSSF NF Instance ID
- y: NRF stub server service name (nrf-stubserver1.ocnssf)
- z: NRF stub server service name (nrf-stubserver2.ocnssf)
- A: NRF Client Management Service Name (ocnssf-ocnssf-nrf-client-nfmanagement.ocnssf)
- B: NSSF PLMN List (311 480)
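If you are unsure of the exact service names to supply for these parameters, you can list the services in the NSSF namespace and copy the names from the output (a quick check; the namespace shown is the default used in this guide):
kubectl get svc -n ocnssf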
Note:
- Do not change any value if the OCCNE cluster is used and NSSF, ATS, and STUB are deployed in the ocnssf namespace.
- In the above image, the NSSF Helm release name is "ocnssf", the ATS Helm release name is "ocats", and the namespace is "ocnssf". If the NSSF Helm release name, the ATS Helm release name, or the namespace changes, the pipeline configuration must be updated accordingly.
For example, if the NSSF Helm release name is "ocnssfats", the ATS Helm release name is "ocatsnssf", and the namespace is "ocnssf2510", then the above pipeline configuration should be edited as shown below.
- Update the NSSF Helm release name in parameters "c d i k m r s t u v w" if it is anything other than "ocnssf".
- The NSSF Instance ID needs to be updated in the ATS Jenkins pipeline parameters if there are any changes in the NSSF custom values (CV) file. By default, the NSSF Instance ID is set to "9faf1bbc-6e4a-4454-a507-aef01a101a01". If the Instance ID is modified in the NSSF CV file, the same change must be reflected in the Jenkins pipeline parameters.
By default, the NSSF Instance ID in the NSSF custom values file is as follows:
#InstanceId of NSSF used in case of GR
nfInstanceId: &nfInstanceId "9faf1bbc-6e4a-4454-a507-aef01a101a01"
For example, if the nfInstanceId in the NSSF custom values file is modified from "9faf1bbc-6e4a-4454-a507-aef01a101a01" to "9faf1bbc-6e4a-4454-a507-aef01a101a20", the same change must be updated in the "x" Jenkins pipeline parameter as shown below:
node ('built-in'){
    //a = SELECTED_NF b = NF_NAMESPACE c = INGRESS_GATEWAY_IP d = EGRESS_GATEWAY_IP
    //e = PROMETHEUS_SVC_IP f = STUB_IP g = PROMETHEUS_PORT h = RERUN_COUNT
    //i = NF_DB j = NF_DB_SECRET k = NSCONFIG_IP l = NRF_STUB_IP
    //m = HELM_RELEASE_NAME n = ATS_RELEASE_NAME o = NSSF_INGRESS_GATEWAY_PORT
    //p = NSSF_EGW_PORT q = NSSF_CONFIG_PORT r = NSSF_SELECTION_SVC_NAME
    //s = NSSF_AUDITOR_SVC t = NSSF_AVAILABILITY_SVC_NAME u = NSSF_SUBSCRIPTION_SVC_NAME
    //v = NSSF_APP_INFO_SVC w = NSSF_PERFINFO_SVC x = NSSF_INSTANCEID
    //y = NRF_STUB_1_SVC_NAME z = NRF_STUB_2_SVC_NAME A = NSSF_NRF_CLIENT_SVC_NAME
    //B = SUPPORTED_PLMN_LIST_MCC_MNC
    withEnv([
    ]){
        sh '''
            sh /var/lib/jenkins/ocnssf_tests/preTestConfig.sh \
            -a NSSF \
            -b ocnssf2510 \
            -c ocnssfats-ingress-gateway.ocnssf2510 \
            -d ocnssfats-egress-gateway.ocnssf2510 \
            -e occne-prometheus-server.occne-infra \
            -f amf-stubserver.ocnssf2510 \
            -g 80 \
            -h 2 \
            -i ocnssf-nsdb.ocnssf \
            -j ocnssf-db-creds \
            -k ocnssfats-nsconfig.ocnssf2510 \
            -l nrf-stubserver.ocnssf2510 \
            -m ocnssfats \
            -n ocats \
            -o 8081 \
            -p 8080 \
            -q 8080 \
            -r ocnssfats-nsselection.ocnssf2510 \
            -s ocnssfats-nsauditor.ocnssf2510 \
            -t ocnssfats-nsavailability.ocnssf2510 \
            -u ocnssfats-nssubscription.ocnssf2510 \
            -v ocnssfats-ocnssf-app-info.ocnssf2510 \
            -w ocnssfats-ocnssf-perf-info.ocnssf2510 \
            -x 9faf1bbc-6e4a-4454-a507-aef01a101a20 \
            -y nrf-stubserver1.ocnssf2510 \
            -z nrf-stubserver2.ocnssf2510 \
            -A ocnssfats-ocnssf-nrf-client-nfmanagement.ocnssf2510 \
            -B 311 480 \
        '''
        if(env.Include_Regression && "${Include_Regression}" == "YES"){
            sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Merged"
        }
        else{
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-NewFeatures"
        }
    }
}
- Click Save after making necessary changes. The Pipeline NSSF-NewFeatures screen appears.
Running NSSF New Features Test Cases
To run NSSF New Features test cases:
- Go back to NSSF-NewFeatures and click Build with Parameters, available in the leftmost column of the NSSF-NewFeatures row, as shown below:
- The following screen appears:
In the above screen, there are three selection options:
- By default, Features 'ALL', Configuration_Type 'Product_Config', and Include_Regression 'NO' are selected, and all test cases are executed once you click Build, as shown above.
- Features: ALL. When you click ALL, a 'Select' dropdown option appears. After choosing 'Select', options appear to pick feature files. To run the selected feature files, click 'Build', as shown below.
- Select one of the following configuration types:
- Product_Config: On selecting this option, test cases from product folders are populated on the ATS UI and product configuration is applied to them via the key-value pair and yaml files defined or present in the "Product Config" folder.
- Custom_Config: On selecting this option, test cases from custom folders are populated on the ATS UI and custom configuration is applied to them via the key-value pair and yaml files defined or present in the "Custom Config" folder. To use the Parameterization feature, always select the Custom_Config option. You can copy, add, or delete the required test cases that are available for the NSSF and place them appropriately within the custom folder for NSSF-NewFeatures. Reload the page to view the test cases available in the custom NewFeatures folder. For more information, see Parameterized approach for SUT custom configuration.
The NSSF test cases are divided into NSSF Service operations as follows:
- NSSF_204NoContent_NsAvailability_PATCH_REMOVE- This feature file contains test cases related to PATCH or PUT request for deleting all slices.
- Virtual_Host_NRF_Resolution_By_NSSF_Using_DNSSRV- This feature file contains test cases related to DNS SRV based selection of NRF in NSSF.
If you want to run with custom configuration, select Custom_Config for this parameter; the framework then automatically points to the custom_config and cust_data folders.
A sample execution summary is as follows:
3 features passed, 0 failed, 0 skipped
25 scenarios passed, 0 failed, 0 skipped
546 steps passed, 0 failed, 0 skipped, 0 undefined
Took 36m30.275s
Parameterized approach for SUT custom configuration
Using this feature, user can make the following customizations to custom folders:
- Add new test cases by adding datafiles
- Remove test cases
- Modify the parameters and their values in the key-value pair or
<feature>
.yaml files
To run ATS test cases, you can maintain multiple versions of the Custom_Config folder by using the following naming convention:
Cust ConfigN
where N can be any number
At the time of execution, rename the required folder to the Custom Config folder, as Jenkins always retrieves data from this folder when Custom_Config is selected.
Updating Global Parameters
To use Custom_Config, it is required to change the value of cust_folder from data to cust_data in global.yaml file. In addition, you can customize the parameters and their respective values in the global.yaml as per the requirements.
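A minimal sketch of that change in global.yaml (only the key named in this guide is shown; all other keys in the file are omitted):
# global.yaml (excerpt)
cust_folder: cust_data   # default value is "data"; change to "cust_data" to use Custom_Config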
Updating Feature Parameters
Consider the following points when customizing <feature>.yaml files for parameterized feature:
- In addition to global.yaml parameters, feature files may also contain parameters for which user can update values at the time of running pipelines.
- Changing the values of parameters tagged as "Feature Specific Value" may cause failures at the time of running pipelines.
- Values for parameters tagged with #START_GLOBAL and #END_GLOBAL tags take values from global.yaml.
Note:
For NSSF-ATS release 25.1.200, parameterization is supported for the NSSF NewFeatures pipeline only.
4.3.4 NSSF-NewFeatures Documentation
To view NSSF functionalities, go to NSSF-NewFeatures pipeline latest build and click the Documentation link in the left navigation pane. The following screen appears. Click any functionality to view its test cases and scenarios of each test case as shown in the sample screenshot below:

A sample of a few documentation features is as follows:

Once the new features run completes successfully, click the build number whose logs you want to view, and then click Console Output in the left navigation pane.
Wait until the page loads completely, and then click Download to download the new features logs.

'

4.3.5 NSSF-Regression Pipeline
This pipeline contains test cases from the previous versions.
Some of the test cases are updated as per the new implementation of NSSF.
The configuration method and parameters are the same as the NewFeatures pipeline.
- Availability Update: These feature files are listed with a prefix as "Update".
- Configuration: These feature files are listed with a prefix as "failure".
- Registration: These feature files are listed with a prefix as "NsSelection_Registration".
- PDU Session: These feature files are listed with a prefix as "NsSelection_PDU".
- SuportedFeatures: This feature file has Feature Negotiation test cases.
- subscription_patch: This feature file has SUBMOD test cases.
- eanan: This feature file has EANAN test cases.
- TAI Range: This feature file has TAI Range supported NsAvailability and NsSelection test cases.
- NSSF_Sending_Notification_via_ARS_SCP: This feature file has test cases related to NRF discovery and subscription.
- NSSF Sanity: This feature file contains all the basic sanity cases for NSSF ATS 24.3.0.
- NSSF_Allowed_NSSAI_computation_RegUeConf: This feature file contains test cases related to NSSF Allowed SNSSAI Computation based on enabling and disabling of System Options PLMN based.
- NSSF_Allowed_NSSAI_computation_ueconfig_update: This feature file contains test cases related to NSSF Allowed SNSSAI Computation based on enabling and disabling of System Options PLMN based.
- NSSF_Authorization_NonTrustedAMF_NsAvailPATCH: This feature file contains test cases related to PATCH request for Update operation of NSSAI Availability in cases of Non Trusted AMFs.
- NSSF_Authorization_NonTrustedAMF_NsAvailPUT: This feature file contains test cases related to PUT request for Update operation of NSSAI Availability in cases of Non Trusted AMFs.
- AMF_TrustedListing: This feature file contains test cases related to trusted listing of AMFs and configuration of the NsAvailability PUT request.

- In the above screen:
- Click Configure to access the configuration screen.
- Click Documentation to view the documented test cases.
- Click blue dots inside the Build History box to view the success console logs of the "All" and "Sanity" respectively.
- The Stage View represents the already run pipeline for customer reference.
- Click Configure. Users MUST wait for the page to load
completely. Once the page loads completely, click the Pipeline tab to
reach the Pipeline configuration as shown below:
WARNING:
Make sure that the screen shown above loads completely before you perform any action on it. Also, do not modify any configuration other than that discussed below.
- You can modify script pipeline parameters from "a" to "B" on the
basis of your deployment environment and click Save. The content of the pipeline
script is as
follows:
node ('built-in'){
    //a = SELECTED_NF b = NF_NAMESPACE c = INGRESS_GATEWAY_IP d = EGRESS_GATEWAY_IP
    //e = PROMETHEUS_SVC_IP f = STUB_IP g = PROMETHEUS_PORT h = RERUN_COUNT
    //i = NF_DB j = NF_DB_SECRET k = NSCONFIG_IP l = NRF_STUB_IP
    //m = HELM_RELEASE_NAME n = ATS_RELEASE_NAME o = NSSF_INGRESS_GATEWAY_PORT
    //p = NSSF_EGW_PORT q = NSSF_CONFIG_PORT r = NSSF_SELECTION_SVC_NAME
    //s = NSSF_AUDITOR_SVC t = NSSF_AVAILABILITY_SVC_NAME u = NSSF_SUBSCRIPTION_SVC_NAME
    //v = NSSF_APP_INFO_SVC w = NSSF_PERFINFO_SVC x = NSSF_INSTANCEID
    //y = NRF_STUB_1_SVC_NAME z = NRF_STUB_2_SVC_NAME A = NSSF_NRF_CLIENT_SVC_NAME
    //B = SUPPORTED_PLMN_LIST_MCC_MNC
    withEnv([
    ]){
        sh '''
            sh /var/lib/jenkins/ocnssf_tests/preTestConfig.sh \
            -a NSSF \
            -b ocnssf \
            -c ocnssf-ingress-gateway.ocnssf \
            -d ocnssf-egress-gateway.ocnssf \
            -e occne-prometheus-server.occne-infra \
            -f amf-stubserver.ocnssf \
            -g 80 \
            -h 2 \
            -i ocnssf-nsdb.ocnssf \
            -j ocnssf-db-creds \
            -k ocnssf-nsconfig.ocnssf \
            -l nrf-stubserver.ocnssf \
            -m ocnssf \
            -n ocats \
            -o 8081 \
            -p 8080 \
            -q 8080 \
            -r ocnssf-nsselection.ocnssf \
            -s ocnssf-nsauditor.ocnssf \
            -t ocnssf-nsavailability.ocnssf \
            -u ocnssf-nssubscription.ocnssf \
            -v ocnssf-ocnssf-app-info.ocnssf \
            -w ocnssf-ocnssf-perf-info.ocnssf \
            -x 9faf1bbc-6e4a-4454-a507-aef01a101a01 \
            -y nrf-stubserver1.ocnssf \
            -z nrf-stubserver2.ocnssf \
            -A ocnssf-ocnssf-nrf-client-nfmanagement.ocnssf \
            -B 311 480 \
        '''
        if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){
            sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Merged"
        }
        else{
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Regression"
        }
    }
}
Note:
The User MUST NOT change any other value apart from these parameters.
The description of these parameters is as follows:
- a: Name of the NF to be tested in capital (NSSF).
- b: Namespace in which the NSSF is deployed (default is ocnssf)
- c: Ingress Gateway IP address (default is ocnssf-ingress-gateway.ocnssf)
- d: Egress Gateway IP address (default is ocnssf-egress-gateway.ocnssf)
- e: Prometheus service IP address (default is prometheus.cne-infra)
- f: Stub service IP address (default is ocats-amf-stubserver.ocnssf)
- g: Port of Prometheus service (default is 80)
- h: Number of times the re-run of failed case is allowed (default is 2).
- i: Database name (default is ocnssf-nsdb.ocnssf)
- j: Database secret name (default is ocnssf-db-creds)
- k: NSSF config ip address (ocnssf-nsconfig.ocnssf)
- l: NRF stub server IP address (ocats-nrf-stubserver.ocnssf)
- m: NSSF release name (ocnssf)
- n: ATS release name (ocats)
- o: NSSF Ingress Gateway Port
- p: NSSF Egress Gateway Port
- q: NSSF Config Port
- r: NSSF Selection Service Name
- s: NSSF Auditor Service Name
- t: NSSF Availability Service Name
- u: NSSF Subscription Service Name
- v: APP Info Service Name
- w: Perf Info Service Name
- x: NSSF NF Instance ID
- y: NRF stub server service name (nrf-stubserver1.ocnssf)
- z: NRF stub server service name (nrf-stubserver2.ocnssf)
- A: NRF Client Management Service Name (ocnssf-ocnssf-nrf-client-nfmanagement.ocnssf)
- B: NSSF PLMN List (311 480)
Note:
- Do not change any value if the OCCNE cluster is used, and NSSF, ATS, and STUB are deployed in the ocnssf namespace.
- In the above image, the NSSF Helm release name is "ocnssf", the ATS Helm release name is "ocats", and the namespace is "ocnssf". If the NSSF Helm release name, the ATS Helm release name, or the namespace changes, the pipeline configuration must be updated accordingly.
- For example, if the NSSF Helm Release Name is
"ocnssfats" and ATS Helm Release Name is "ocatsnssf", and Namespace
is "ocnssf2510", then the above Pipeline Configuration should be
edited as shown below:
- Update the NSSF Helm release name in parameters "c d i k m r s t u v w" if it is anything other than "ocnssf".
- The NSSF Instance ID needs to be updated in the ATS Jenkins pipeline parameters if there are any changes in the NSSF custom values (CV) file. By default, the NSSF Instance ID is set to "9faf1bbc-6e4a-4454-a507-aef01a101a01". If the Instance ID is modified in the NSSF CV file, the same change must be reflected in the Jenkins pipeline parameters.
By default, the NSSF Instance ID in the NSSF custom values file is as follows:
#InstanceId of NSSF used in case of GR
nfInstanceId: &nfInstanceId "9faf1bbc-6e4a-4454-a507-aef01a101a01"
For example, if the nfInstanceId in the NSSF custom values file is modified from "9faf1bbc-6e4a-4454-a507-aef01a101a01" to "9faf1bbc-6e4a-4454-a507-aef01a101a20", the same change must be updated in the "x" Jenkins pipeline parameter as shown below.
node ('built-in'){
    //a = SELECTED_NF b = NF_NAMESPACE c = INGRESS_GATEWAY_IP d = EGRESS_GATEWAY_IP
    //e = PROMETHEUS_SVC_IP f = STUB_IP g = PROMETHEUS_PORT h = RERUN_COUNT
    //i = NF_DB j = NF_DB_SECRET k = NSCONFIG_IP l = NRF_STUB_IP
    //m = HELM_RELEASE_NAME n = ATS_RELEASE_NAME o = NSSF_INGRESS_GATEWAY_PORT
    //p = NSSF_EGW_PORT q = NSSF_CONFIG_PORT r = NSSF_SELECTION_SVC_NAME
    //s = NSSF_AUDITOR_SVC t = NSSF_AVAILABILITY_SVC_NAME u = NSSF_SUBSCRIPTION_SVC_NAME
    //v = NSSF_APP_INFO_SVC w = NSSF_PERFINFO_SVC x = NSSF_INSTANCEID
    //y = NRF_STUB_1_SVC_NAME z = NRF_STUB_2_SVC_NAME A = NSSF_NRF_CLIENT_SVC_NAME
    //B = SUPPORTED_PLMN_LIST_MCC_MNC
    withEnv([
    ]){
        sh '''
            sh /var/lib/jenkins/ocnssf_tests/preTestConfig.sh \
            -a NSSF \
            -b ocnssf2510 \
            -c ocnssfats-ingress-gateway.ocnssf2510 \
            -d ocnssfats-egress-gateway.ocnssf2510 \
            -e occne-prometheus-server.occne-infra \
            -f amf-stubserver.ocnssf2510 \
            -g 80 \
            -h 2 \
            -i ocnssf-nsdb.ocnssf \
            -j ocnssf-db-creds \
            -k ocnssfats-nsconfig.ocnssf2510 \
            -l nrf-stubserver.ocnssf2510 \
            -m ocnssfats \
            -n ocats \
            -o 8081 \
            -p 8080 \
            -q 8080 \
            -r ocnssfats-nsselection.ocnssf2510 \
            -s ocnssfats-nsauditor.ocnssf2510 \
            -t ocnssfats-nsavailability.ocnssf2510 \
            -u ocnssfats-nssubscription.ocnssf2510 \
            -v ocnssfats-ocnssf-app-info.ocnssf2510 \
            -w ocnssfats-ocnssf-perf-info.ocnssf2510 \
            -x 9faf1bbc-6e4a-4454-a507-aef01a101a20 \
            -y nrf-stubserver1.ocnssf2510 \
            -z nrf-stubserver2.ocnssf2510 \
            -A ocnssfats-ocnssf-nrf-client-nfmanagement.ocnssf2510 \
            -B 311 480 \
        '''
        if(env.Include_Regression && "${Include_Regression}" == "YES"){
            sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Merged"
        }
        else{
            load "/var/lib/jenkins/ocnssf_tests/jenkinsData/Jenkinsfile-Regression"
        }
    }
}
- Click Save after making necessary changes. The Pipeline NSSF-Regression screen appears.
NSSF-Regression - Build with Parameters
- By default, features "ALL," configuration type "Product_config," and include new features "NO" will be selected. All test cases will be executed once you click on BUILD, as shown above.
- Features: ALL. Once you click on ALL, a dropdown "Select" option
will appear. After selecting the "Select" option, as shown below, options will
appear to select feature files. To run the selected feature files, click on
"Build," as shown below.
80 features passed, 1 failed, 0 skipped
812 scenarios passed, 1 failed, 0 skipped
10129 steps passed, 1 failed, 4 skipped, 0 undefined
4.3.6 NSSF-Regression Documentation
To view NSSF Regression cases, go to NSSF-Regression latest pipeline build and click the Documentation link in the left navigation pane. The following screen appears. Click any functionality to view its test cases and scenarios of each test case as shown below:

A sample of a few documentation features is as follows:

Once the regression features run completes successfully, click the build number whose logs you want to view, and then click Console Output in the left navigation pane.
Wait until the page loads completely, and then click Download to download the regression features logs.


4.4 Running Policy Test Cases using ATS
This section describes how to run CNC Policy test cases using ATS.
- Converged Policy
- PCF only for both TLS enabled (server side) and disabled mode
- PCRF only
Note:
Restart the NRF-client pod of CNC Policy for UDR and CHF discovery as part of each test case.
4.4.1 Prerequisites
PCF only deployment
To run CNC Policy test cases in PCF mode, ensure that the following prerequisites are met:
- Deploy CNC Policy 25.1.200 with default helm configurations using helm charts to run all test cases. The ATS version must be compatible with CNC Policy 25.1.200. For more information on how to install CNC Policy, see Oracle Communications Cloud Native Core Policy Installation, Upgrade and Fault Recovery Guide.
- For running ATS features for optional services such as LDAP, Notifier, and Usage Monitoring, ensure that the respective service is enabled on CNC Policy. If this service is not enabled, the test case for this feature will fail.
- For running SOAP notification service, ensure that the SOAP connector service is enabled on CNC Policy. If this service is not enabled, the test case for this feature will fail.
- For
running ATS features, ensure to update the following mandatory parameters in
ocpcf_custom_values_25.1.200.yaml file only when you are not using the minimal
custom values.yaml
file.
logging:
  burst:
    rate: 500
    max: 3000
    onMismatch: DENY
  logLevel: DEBUG
Note:
Please ensure that you use the latest version of the Custom Values file when installing Policy initially.
- For using the Controlled Shutdown feature, ensure that the enableControlledShutdown parameter is enabled on CNC Policy. If this parameter is not enabled, the test case for this feature will fail.
- Install py-stub, ocamf-stub, ocdns-bind stub, and oc-ldap stub for Converged Policy and PCF deployment modes.
- Ensure there is a single pod for each microservice in the CNC Policy deployment for ATS Prometheus metrics validation to work.
- Users can add new test cases, remove unwanted test cases, and modify existing test cases in the custom test case folders (cust_newfeatures, cust_regression, and cust_performance). It does not impact the original test cases packaged with the product under newfeatures, regression, and performance folders. For more details about custom test case folders, see Custom Folder Implementation.
- The ATS prometheus metrics validation works only
when:
- the metrics suffixes are not configured
- installation has a single pod for each microservice in the CNC Policy deployment
- For the PRE and Policy_addition timers, update the PRE deployment to reduce the value of POLLING_INTERVAL from 10s to 5s and POLLING_INTERVAL_POLICYTABLE from 30s to 10s.
Figure 4-30 PRE and Policy_addition timer
Run the following command to edit the PRE deployment and reduce the polling values:
[cloud-user@platform-bastion-1 ~]$ kubectl edit deploy ocpcf-ocpm-pre -n ocpcf
- The following deployment is displayed:
Figure 4-31 POLLING_INTERVAL and POLLING_INTERVAL_POLICYTABLE Deployment
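For reference, after the change the relevant environment entries in the PRE deployment would look similar to the following (a sketch; the variable names are taken from this guide and the values are in milliseconds):
- name: POLLING_INTERVAL
  value: "5000"
- name: POLLING_INTERVAL_POLICYTABLE
  value: "10000"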
Note:
The value of POLLING_INTERVAL can be updated to 5000 and POLLING_INTERVAL_POLICYTABLE to 10000 if they are not already updated.
For the policy_addition timer, the following changes have been made in the config.xml file for NewFeatures, Performance, and Regression.
Figure 4-32 Policy Addition Timer
For every SUT, change the value of -n from 15 to 7, as follows:
Figure 4-33 Change in SUT Value
- In the application-config configmap, configure the following parameters with the respective values:
  - Set the value of primaryNrfApiRoot to primaryNrfApiRoot=nf1stub.<namespace_pystubs_are_deployed_in>.svc:8080.
    For example: primaryNrfApiRoot=nf1stub.ocats.svc:8080
  - Remove secondaryNrfApiRoot.
    For example: secondaryNrfApiRoot=nf1stub.ocats.svc:8080
  - Set the value of nrfClientSubscribeTypes to UDR,CHF,NWDAF.
  - Remove supportedDataSetId.
    For example: supportedDataSetId=POLICY
    Note:
    These values can also be configured during Policy deployment.
    Run the following command to get all configmaps in your namespace:
    kubectl get configmaps -n <Policy_namespace>
  - Set the value of retryAfterTime to PT30S.
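To apply these changes, you can edit the configmap directly; a minimal sketch, assuming the configmap is named application-config (in your deployment the name may carry a Helm release prefix):
kubectl edit configmap application-config -n <Policy_namespace>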
- Before running Policy test cases using ATS, restart
nrf-client-nfdiscovery
andnrf-client-nfmanagement
pods. - Edit the Alternate Route Service deployment pointing towards DNS
Stub.
- Run the following command to get the searches information from the dns-bind pod to enable communication between Alternate Route and the dns-bind service:
  kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
  The following output is displayed after running the command:
Figure 4-34 Sample Output
Figure 4-35 Alternate Route Service Deployment File
- Run the following command to edit the deployment file and
add the following content in alternate service to query DNS
stub:
$kubectl edit deployment ocpcf-occnp-alternate-route -n ocpcf
- Add the IP Address of the nameserver that you have recorded after installing the DNS stub (cluster IP Address of DNS Stub).
- Add the search information one by one which you recorded earlier.
- Set dnsPolicy to "None".
  dnsConfig:
    nameservers:
      - 10.233.33.169   # cluster IP of DNS Stub
    searches:
      - ocpcf.svc.occne15-ocpcf-ats
      - svc.occne15-ocpcf-ats
      - occne15-ocpcf-ats
  dnsPolicy: None
Figure 4-36 Example
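After saving the deployment, you can verify that the restarted alternate-route pod picked up the DNS settings; a quick check, assuming the deployment name used above:
kubectl get pods -n ocpcf | grep alternate-route
kubectl exec -it <alternate-route pod> -n ocpcf -- cat /etc/resolv.conf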
PCRF only deployment
To run CNC Policy test cases in PCRF mode, ensure that the following prerequisites are met:
-
Run the following command to set the log level to Debug in Diam-GW statefulset:
kubectl edit statefulset <diam-gw statefulset name> -n <namespace>
name: LOGGING_LEVEL_APP
value: DEBUG
- Set the peer configuration using the configuration map for backend peers. Edit the default Diameter peer configuration in the diam configmap and set the responseOnly flag to true.
  kubectl edit cm oc-diam-gateway-config-peers -n <namespace>
nodes:
  - name: 'ocpcf-occnp-pcrf-core'
    type: 'pcrf'
    responseOnly: true
    host: ocpcf-occnp-pcrf-core-headless
    port: 3868
    realm: ''
    identity: ''
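After saving, you can confirm that the flag was applied; a quick check using the configmap name shown above:
kubectl get cm oc-diam-gateway-config-peers -n <namespace> -o yaml | grep -B 1 -A 1 responseOnly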
Converged mode deployment
To run CNC Policy test cases in Converged mode, ensure that the following prerequisites are also met:
- Deploy CNC Policy 25.1.200 with default helm configurations using helm charts to run all test cases. The ATS version must be compatible with CNC Policy 25.1.200. For more information on how to install CNC Policy, see Oracle Communications Cloud Native Core, Converged Policy Installation, Upgrade, and Fault Recovery Guide.
- For running ATS features for optional services such as LDAP, Notifier, and Usage Monitoring, ensure that the respective service is enabled on CNC Policy.
- Install Prometheus server in cluster.
- Database cluster should be in a running state with all the required tables. Ensure there are no previous entries in database before executing test cases.
- Deploy ATS in the same namespace as Policy using Helm Charts.
- User must not initiate a job in two different pipelines at the same time.
- The installation should have only one pod for each microservice in the CNC Policy deployment for ATS Prometheus metrics validation to work.
- If Service Mesh check is enabled, apply the following settings:
To fetch metrics from Prometheus, create a destination rule. In most deployments, Prometheus is kept outside of the service mesh. Hence, a destination rule is required for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). Create the rule as follows:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: prometheus-dr
  namespace: ocats
spec:
  host: oso-prometheus-server.pcf.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
EOF
Here:
1. name indicates the name of the destination rule.
2. namespace indicates the namespace where ATS is deployed.
3. host indicates the hostname of the Prometheus server.
- For running ATS features for Model D Indirect Communication Model,
ensure that the custom values yaml of CNC Policy has the following parameters with
the specified values:
routeConfigMode: REST
configureDefaultRoute: true
- Update the
occnp-servicemesh-config-custom-values-25.1.200.yaml
of CNC Policy with the following virtualService configuration for Egress Gateway:virtualService: # - name: nrfvirtual1 # host: ocpcf-occnp-egress-gateway # destinationhost: ocpcf-occnp-egress-gateway # port: 8000 # exportTo: |- # [ "." ] # attempts: "0"
- Restart the ocpm-ldap-gateway pod manually before running the LDAP-specific ATS test cases.
-
To ensure the consistent functioning of ATS related to the audit service, modify the audit deployment by reducing the value of AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC from 30000 to 1000 (changing from 30 seconds to 1 second).
$ kubectl get deploy -n ocpcf | grep 'audit' ocpcf-ocpm-audit-service 1/1 1 1 10h
$ kubectl edit deploy ocpcf-ocpm-audit-service -n ocpcf
The deployment opens. Scroll down until the following fields are displayed:
- name: AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC
  value: "30000"
Update the value of AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC to 1000 if it is not already set.
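To confirm the change after saving, you can inspect the deployment's environment variables; a quick check using the deployment name shown above:
kubectl get deploy ocpcf-ocpm-audit-service -n ocpcf -o yaml | grep -A 1 AUDIT_NOTIFY_SCHEDULER_POLLING_INTERVAL_MILLISEC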
Note:
In order to run stage2 or stage3 cases using the configuration_type of custom_config, replace the "data" folder with the "cust_data" folder in stage<x>.txt under the /var/lib/jenkins/ocpcf_tests/cust_data/common/stage_hooks/stage<x> directory.
Application Config map changes for Policy registrations over TLS
- NRF port from 8080 to 8443
- nrfScheme to https
apiVersion: v1
data:
profile: |-
[appcfg]
primaryNrfApiRoot=nf1stub.ocats.svc:8443
nrfScheme=https
Note:
In the config map of application-config, delete the lines which has supportedDataSetId or secondaryNrfApiRoot strings.
4.4.2 Logging into ATS
Before logging into ATS GUI, it is important to get the nodeport of the service, 'ocats-policy'.
kubectl get svc -n <Policy_namespace>
kubectl get svc -n ocpcf
ocats-ocats-pcf LoadBalancer 10.233.56.56 10.75.225.49 8080:31944/TCP 19h
http://<Worker-Node-IP>:<Node-Port-of-ATS>
If the ocats-policy Service has an external IP available, <SVC external IP> can also be used to log in to ATS.
http://<SVC external IP>:8080
Example: http://10.75.225.49:8080
Running ATS
To run ATS test cases, perform the following steps:
- Enter the username as policyuser and password as policypasswd.
- Click Sign
in.
Note:
To modify default login password, see Modifying Login Password.
Figure 4-37 Pre-configured Pipelines
- Policy-HealthCheck: This pipeline checks if CNC Policy and ATS are deployed correctly. This shows only when the user has enabled this feature at the time of installing CNC Policy ATS.
- Policy-NewFeatures: This pipeline has all the new test cases delivered for CNC Policy 25.1.200.
- Policy-Regression: This pipeline has all the test cases delivered in Policy ATS - 24.3.x.
- Policy-Performance: This pipeline is not operational as of now. It is reserved for future releases of ATS.
4.4.3 Running Policy-HealthCheck Pipeline
This is a pre-configured pipeline where ATS performs a test probe with SUT. It triggers helm test & provides the results in Jenkins Console Logs.
You can run Policy-HealthCheck pipeline to check if all CNC Policy pods are up and running. If yes, it provides the status as successful. If any pod is down due to any reason, then the pipeline fails.
- Click Policy-HealthCheck in the Name column.
- Click Configure in the left navigation pane.
- When you scroll-down, the General tab becomes active. Be sure that the screen loads completely.
- Continue scrolling down until the
Pipeline tab becomes active. The following is a
screen capture that shows the Pipeline script:
Figure 4-38 Helm Test Script
- In the Script area under the
Pipeline section, users may customize the values for the following
parameters:
Attention:
Do not modify values of parameters other than the ones described in this section.
- a: Change this parameter to update the helm release name where Policy is deployed in your bastion.
- b: Change this parameter to update the namespace where Policy is deployed in your bastion.
- Click Save to update the values.
Running Helm Test
To run CNC Policy test cases, click Build Now.
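Running the Policy-HealthCheck pipeline is equivalent to invoking the Helm test hook manually; a minimal sketch, assuming the Policy release name and namespace used elsewhere in this section:
helm test ocpcf -n ocpcf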
4.4.4 Running Policy-NewFeatures Pipeline
Policy-New Features Pipeline
This is a pre-configured pipeline where all the Policy new test cases are executed.
Note:
Configuring the parameters is a one-time activity.
- Click Policy-NewFeatures in the Name column.
- Click Configure in the left navigation pane.
- When you scroll-down, the General tab becomes active. Be sure that the screen loads completely.
- (Optional) If you want to retain a specified number of builds in the persistent volume, select the check box for Discard old Builds. It is recommended to configure this option; otherwise, the persistent volume may run out of space when the number of retained builds becomes very high.
Enter values for the following parameters:
- Days to keep builds: Specifies the number of days for which build records are stored.
- Max # of builds to keep: Specifies the number of build records to store.
- Continue scrolling down until the
Pipeline tab becomes active. The following is a
screen capture that shows the Pipeline script:
Figure 4-39 Policy New Pipeline Configuration
- In the Script area under the
Pipeline section, users may customize the values for the following
parameters:
Attention:
Do not modify values of parameters other than the ones described in this section.
- a: Name of the NF to be tested in capital (PCF).
- b: Change this parameter to update the namespace where Policy is deployed in your bastion.
- c: Name of Prometheus service namespace (occne-prometheus-server)
- d: Change this parameter to update the namespace where gostubs are deployed in your bastion.
- e: Set this parameter as 'unsecure', if you intend to run ATS in TLS disabled mode. Else, set this parameter as 'secure'.
- g: Set this parameter to more than 45 secs. The default wait time for the pod is 45 secs. Every TC requires restart of the nrf-client-management pod.
- h: Set this parameter to more than 60 secs. The default wait time to add a configured policy to the database is 60 secs.
- i: Set this parameter to more than 140 secs. The default wait time for Nf_Notification Test Cases is given as 140 secs.
- k: Use this parameter to set the waiting time to initialize Test Suite.
- l: Use this parameter to set the waiting time to get response from Stub.
- m: Use this parameter to set the waiting time after adding Policy Configuration.
- n: Use this parameter to set the waiting time after adding Policy.
- o: Use this parameter to set the waiting time before sending next message.
- p: Use this parameter to set Prometheus Server IP.
- q: Use this parameter to set Prometheus Server Port.
- r: Use this parameter to set the interval after which the POD status is checked when it is down.
- s: Use this parameter to set the number of retry attempts to check the pod down status.
- t: Use this parameter to set the interval after which the pod status is checked when it is up.
- u: Use this parameter to set the number of retry attempts to check the pod up status.
- v: Use this parameter to set Wait time to connect to Elastic Search
- w: Use this parameter to set Elastic Search HostName
- x: Use this parameter to set Elastic Search Port
- y: Use this parameter to enable or disable stub log collection.
- z: Use this parameter to set the log collection endpoint to either Elasticsearch or Kubernetes.
- A: Use this parameter to enable or disable snapshots that are created at the start and restored at the end of each test run
- B: Use this parameter to set the timer to wait for importing service configurations.
- C: Use this parameter to set PCF_API_ROOT information for Ingress gateway service name and port.
- D: Use this parameter to set bulk_import_to_complete to add custom time in Jenkins post bulk imports.
- E: Use this parameter to configure TLS_VERSION to define the supported TLS version (1.2 or 1.3).
(Optional) To collect application logs per failed scenario, you can configure the values for the following parameter:
- z: If you want log collection to happen through Elastic search, set the value for this parameter as Elasticsearch. If not, specify the value as Kubernetes.
If you want to collect logs through Elastic search, it is required to configure the values for the following parameters:
- v: Specifies the wait time to
connect to Elastic search
(
ELK_WAIT_TIME
). - w: Specifies the host name of Elastic
search (
ELK_HOST
). For example,occne-elastic-elasticsearch-master.occne-infra/
- x: Specifies the port for Elastic
search (
ELK_PORT
). For example, 9200.
- v: Specifies the wait time to
connect to Elastic search
(
- y: If you want to collect stub logs, set the value for this parameter as yes. If not, specify the value as no.
- A: To enable or disable snapshots that are created at the start and restored at the end of each test run.
- Click Save to update the values. The Policy_NewFeatures
Pipeline page appears.
Note:
It is recommended to save the pipeline script in your local machine as it is needed at the time of ATS pod restart.
Extracting Application Logs
- Log in to the ATS
pod:
kubectl exec -it pod/occnp1-ocats-ocats-policy-xxxxxxxx -n ocpcf bash
- Go to build
directory:
cd $JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER/
For example:
cd /var/lib/jenkins/.jenkins/jobs/Policy-Regression/builds/1
- Extract the
applogs.zip
file:unzip applogs.zip
- After successfully unzipping the file, open the
applog
folder to view pod logs for failed scenarios:(env) [jenkins@gapcf-ocats-ocats-policy-6df5cd84c-c6cm4 applog]$ pwd /var/lib/jenkins/.jenkins/jobs/Policy-Regression/builds/3/applog (env) [jenkins@gapcf-ocats-ocats-policy-6df5cd84c-c6cm4 applog]$ ls -ltrh total 1.4M -rw-r--r--. 1 jenkins jenkins 461K Sep 1 14:49 Initial_Run-Send_Npcf_SMPolicyControl_Create_request_message_to_PCF_and_verify_the_praInfos_structure_is_downloaded_in_the_response_message_to_SMF_also_check_requests_total_metric_incremented_in_the_PCF.log -rw-r--r--. 1 jenkins jenkins 459K Sep 1 14:49 1st_Rerun-Send_Npcf_SMPolicyControl_Create_request_message_to_PCF_and_verify_the_praInfos_structure_is_downloaded_in_the_response_message_to_SMF_also_check_requests_total_metric_incremented_in_the_PCF.log -rw-r--r--. 1 jenkins jenkins 459K Sep 1 14:49 2nd_Rerun-Send_Npcf_SMPolicyControl_Create_request_message_to_PCF_and_verify_the_praInfos_structure_is_downloaded_in_the_response_message_to_SMF_also_check_requests_total_metric_incremented_in_the_PCF.log
Running Policy Test Cases
- Click the Build with Parameters link available in the
left navigation pane of the Policy-NewFeatures
Pipeline screen. The following screen appears.
Figure 4-40 Pipeline Policy-NewFeatures
-
Select any of following valid values from the drop-down for SUT:
- PCF
- cnPCRF
- Converged Policy
-
Select any of following valid values from the drop-down for Configuration_Type:
- Product_Config: On selecting this option, test cases from product folders are populated on ATS UI and product configuration is applied to them via the key-value pair and yaml files defined or present in the "Product Config" folder.
- Custom_Config: On selecting this option, test
cases from custom folders are populated on ATS UI and custom
configuration is applied to them via the key-value pair and yaml
files defined or present in the "Custom Config" folder.
To use the parameterization feature, always select the Custom_Config option. User can copy, add, or delete the required test cases that are available for the Converged Policy, PCF or PCRF and place them appropriately within the custom folder for Policy-NewFeatures. Reload the page to view the test cases available in the custom NewFeatures folder.
For more information, see Parameterized approach for SUT custom configuration.
-
For the Select_Option field, select any of the following drop-down values:
- All: By default, all the Policy test cases are selected for execution.
- Single/MultipleFeatures: This option allows you to select any number of test cases you want to run from the list of total test cases. Select the check-box for each feature you want to run.
Based on your selection, related Test Cases appear on the page.
- To collect logs for any given build, select YES from the drop-down menu of Fetch_Log_Upon_Failure.
-
Click Build and select Console Output to view the test results. The following is a sample test result output:
Figure 4-41 Test Result Output in Console
Note:
For more details on consolidated test report, see Managing Final Summary Report, Build Color, and Application Log.
Parameterized approach for SUT custom configuration
- Add new test cases by adding datafiles
- Remove test cases
- Modify the parameters and their values in the key-value pair or <feature>.yaml files
Cust ConfigN
where N can be any number
At the time of execution, rename the required folder to the Cust Config folder, as Jenkins always retrieves data from this folder when Custom_Config is selected.
To use
Custom_Config, it is required to change the value of cust_folder
from data to cust_data in global.yaml file.
In addition, you can customize the parameters and their respective values in the
global.yaml
as per the requirements.
To use a custom diameter identity, update the value of the "diam_gw_id" parameter in the global.yaml file to the value configured for "DIAMETER_Identity" in the diam-gw statefulset, as sketched below.
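A minimal sketch of the corresponding entry (the key name is taken from this guide; the value shown is a placeholder to be replaced with your own):
# global.yaml (excerpt)
diam_gw_id: <value of DIAMETER_Identity from the diam-gw statefulset>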
Consider the following points when customizing <feature>.yaml files for the parameterized feature:
- In addition to
global.yaml
parameters, feature files may also contain parameters for which user can update values at the time of running pipelines. - Changing the values of parameters tagged as "Feature Specific Value" may cause failures at the time of running pipelines.
- Values for parameters tagged with #START_GLOBAL and
#END_GLOBAL tags take values from
global.yaml
.
Note:
For CNC Policy-ATS release 25.1.200, parameterization is supported for PCF features only.
2021-12-17 09:02:43,908 INFO LOG.featureHooks:14 | Inside before_feature
2021-12-17 09:02:43,908 INFO LOG.goldenConfig:305 | Inside goldenConfig.py
2021-12-17 09:02:43,912 ERROR LOG.goldenConfig:93 | Please check if the key metricsEnabled is valid or if it is present under correct section
2021-12-17 09:02:43,913 INFO LOG.goldenConfig:324 | Exiting goldenConfig.py
2021-12-17 09:02:43,913 INFO LOG.featureHooks:27 | Exiting before_feature
The execution continues to happen without any changes to the data files.
Config-Rollback (Snapshot Backup and Restore)
Using this feature, user can take a snapshot of initial configuration of the system, and then restore it at a later stage. On enabling this feature, a snapshot of system configurations is taken before ATS run starts and is reverted on completion of ATS job.
Queuing Jenkins Jobs
Using this feature, you can queue a second job even when current job is still running. The second job can be triggered either from the same or a different pipeline.
Table 4-2 Queuing Jenkins Jobs
Concurrent Builds | New Features Current Build | New Features New Build | Regression Current Build | Regression New Build | Result |
---|---|---|---|---|---|
Enabled | Running | Triggered | NA | NA | New-Build of New-Features is added to queue. |
Enabled | Running | NA | NA | Triggered | New-Build of Regression is added to queue. |
Disabled | NA | NA | Running | Triggered | New-Build of Regression is added to queue. |
Disabled | NA | Triggered | Running | NA | New-Build of New-Features is added to queue. |
Test Result Analyzer
Using the Test Result Analyzer plug-in, user can view consolidated and detailed reports. For more information, see Test Results Analyzer section.
Test Case Mapping to Features and Display Total Counts
With this feature, users can view the total count of features, test cases, and the test case mapping for each feature of CNC Policy on the ATS GUI. For more information, see Support for Test Case Mapping and Count.
4.4.5 Policy-NewFeatures Documentation
This section describes the documentation for Policy-NewFeatures pipeline.
To view the documentation for any of the CNC Policy features, on the ATS home page, click Policy-NewFeatures. Then, click Documentation in the left navigation pane.
Figure 4-42 Policy-NewFeatures Feature List

Figure 4-43 Feature - Consolidation_with_T3501_Expiry

Based on the features covered under Documentation, the Build Requires Parameters screen displays test cases.
To navigate back to the Policy-NewFeatures pipeline, click Back to Policy-NewFeatures link available on the top left corner of the screen.
4.4.6 Running Policy-Regression Pipeline
This section describes how to run test cases for CNC Policy Regression pipeline.
The Policy-Regression pipeline is a pre-configured pipeline where all the test cases of previous releases are available. For example, for Policy 25.1.200, this pipeline has all the test cases released up to Policy 24.3.x.
Running Policy Test Cases
- Click Policy-Regression in the Name column.
- Click the Build with Parameters link available in the
left navigation pane of the Policy-Regression
Pipeline screen. The following screen appears.
Figure 4-44 Regression Pipeline
Note:
ATS displays an empty list of test cases because it refers to the custom folder for the Policy-Regression pipeline. -
Select any of following valid values from the drop-down for SUT:
- PCF
- CN-PCRF
- Converged Policy
-
Select any of following valid values from the drop-down for Configuration_Type:
- Product_Config: On selecting this option, test cases from product folders are populated on ATS UI and product configuration is applied to them via the key-value pair and yaml files defined or present in the "Product Config" folder.
- Custom_Config: On selecting this option, test
cases from custom folders are populated on ATS UI and custom
configuration is applied to them via the key-value pair and yaml
files defined or present in the "Custom Config" folder.
To use the parameterization feature, always select the Custom_Config option. User can copy, add, or delete the required test cases that are available for the Converged Policy, PCF, or PCRF and place them appropriately within the custom folder for Policy-Regression. Reload the page to view the test cases available in the custom Regression folder.
-
For the Select_Option field, select any of the following drop-down values:
- All: By default, all the Policy test cases are selected for execution.
- Single/MultipleFeatures: This option allows you to select any number of test cases you want to run from the list of total test cases. Select the check-box for each feature you want to run.
Based on your selection, related Test Cases appear on the page.
- To collect logs for any given build, select YES from the drop-down menu of Fetch_Log_Upon_Failure.
-
Click Build and select Console Output to view the test results. The following is a sample test result output:
Figure 4-45 Policy-Regression Console Output
Note:
The Regression pipeline does not have any sanity option. However, users must perform all the steps performed in the Policy-NewFeatures pipeline. Ensure that the pipeline script is configured according to the environment variables.
Note:
For more details on consolidated test report, see Managing Final Summary Report, Build Color, and Application Log.
4.4.7 Policy-Regression Documentation
This section describes the documentation for Policy-Regression pipeline.
To view the documentation for any of the CNC Policy features, on the ATS home page, click Policy-Regression. Then, click Documentation in the left navigation pane.
Figure 4-46 Policy-Regression Feature List

Figure 4-47 FEATURE - Add_NFSetIDList

This page shows features of only those test cases that are released in previous releases.
4.5 Running SCP Test Cases using ATS
This section describes how to run SCP test cases using ATS.
4.5.1 Logging in to ATS
Logging in to ATS GUI in Non-OCI Setup
Figure 4-48 Verifying ATS Pod

Note:
In the Verifying ATS Pod image, the ATS nodeport is highlighted in red as 30345. For more information about ATS deployment, see Deploying ATS and Stub in the Kubernetes Cluster. Similarly, verify Stub deployments.
To log in to the ATS GUI, open the browser and either provide the
external IP address of the worker node and nodeport of the ATS service as
<Worker-Node-IP>:<Node-Port-of-ATS>
or provide the
IP address of the load balancer and serviceport of the ATS service as
<ATS-LB-IP>:<Service-Port-of-ATS>
.
The ATS login screen appears.
Figure 4-49 Logging in to ATS GUI

Note:
It is recommended to keep the rerun count at a minimum of 1 for running ATS test cases.
To run ATS:
- Enter the login credentials and click Sign
in.
Note:
To modify default login password, see Modifying Login Password.
The following screen appears, displaying the preconfigured pipelines for SCP.
Figure 4-50 ATS SCP First Login Screen
The different pre-configured pipelines for SCP are as follows:
- SCP-NewFeatures: This pipeline has all the test cases delivered as part of this release.
- SCP-Regression: This pipeline covers all the test cases from the previous releases.
Logging in to ATS GUI in OCI Setup
To access the ATS GUI in OCI, see the Accessing ATS GUI in OCI section.
Logging in to ATS GUI with HTTPS Enabled
Note:
For more information, see the Support for Transport Layer Security.
4.5.2 SCP-NewFeatures Pipeline
This section describes how to configure new feature pipeline parameters, run SCP test cases, and view functionalities.
4.5.2.1 Configuring New Feature Pipelines
This is a pre-configured pipeline where all the SCP test cases are run.
- Click SCP-NewFeatures in the
Name column.
The SCP-NewFeatures screen appears.
Figure 4-51 SCP-NewFeatures
According to the installation guide, the configuration parameters can be modified in the
ocats_ocscp_values.yaml
file before deployment. If you are running SCP test cases for the first time without enabling the parameters in the deployment file, then you have to set the input parameters before running any test case. There is no need to set these parameters again unless there is a change in the configuration. - In the left navigation pane, click
Configure to provide input parameters and scroll
down to the pipeline script as shown in the following image:
The General tab appears.
Note:
Ensure that the following screen loads completely before you perform any action on it. Also, do not modify any other configuration than what is specified in the subsequent steps.
The control moves from the General tab to the Pipeline tab, as shown in the following screenshot:
Figure 4-52 Pipeline Tab
- Modify the values of the pipeline script as
required, and then click Save.
Parameter "-I" must be changed as part of "Pipeline script". The content of the pipeline script is as follows:
node ('built-in'){
    //a = SELECTED_NF b = NFNAMESPACE c = CLUSTERDOMAIN d = DESTNAMESPACE
    //e = ATSREGISTRY f = AUDITINTERVAL g = GUARDTIME h = SCPSVCNAME
    //i = SCPCONFIGSVCNAME j = SCPNOTIFYSVCNAME k = SCPSUBSVCNAME l = DBSECRETNAME
    //m = MYSQLHOST n = ATSSTUBIMAGE o = ATSSTUBCPU p = ATSSTUBMEMORY q = SCPAUDITSVCNAME
    //r = RERUN_COUNT s = SCPSVCPORT t = PROMSVCNAME u = PROMSVCPORT v = STUBWAITTIME
    //w = PROMSVC_URI x = SUBVALIDITYTIME z = OPENSEARCH_WAIT_TIME A = OPENSEARCH_HOST
    //B = OPENSEARCH_PORT C = STUB_LOG_COLLECTION D = OPENSEARCH_LOGS E = SCPSVCHTTPSPORT F = SCPNRFPROXYSVCNAME
    //G = KUBERNETES_HOST H = KAFKANAMESPACE I = SCPDDCLIENTSVCNAME J = SCPGLBRATELIMITSTUBSVCNAME K = SCPCONFIGSVCPORT
    //L = SCP_METRICS_VERSION M = SCPSVCIP
    withEnv([
    ]){
        sh '''
            sh /var/lib/jenkins/ocscp_tests/preTestConfig.sh \
            -a SCP \
            -b scp-73999881-dev \
            -c cluster.local \
            -d scp-73999881-dev \
            -e cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats \
            -f 60 \
            -g 10 \
            -h cicdocscp-scp-worker \
            -i cicdocscp-scpc-configuration \
            -j cicdocscp-scpc-notification \
            -k cicdocscp-scpc-subscription \
            -l appuser-secret \
            -m mysql.scp-73999881-dev.svc.cluster.local \
            -n ocats-pystub:25.1.100-beta.35 \
            -o 0.5 \
            -p 0.5G \
            -q cicdocscp-scpc-audit \
            -r 0 \
            -s 8000 \
            -t occne-prometheus-73999881-server.scp-73999881-dev-infra \
            -u 80 \
            -v 300 \
            -w /api/ \
            -x 120 \
            -z 0 \
            -A occne-opensearch-cluster-master.scp-73999881-dev-infra \
            -B 9200 \
            -C yes \
            -D opensearch \
            -E 9443 \
            -F cicdocscp-scp-nrfproxy \
            -G kubernetes.default \
            -H scp-73999881-devkafka \
            -I ocats-ocscp-ocats-ddclientstub \
            -J ocats-ocscp-ocats-scpglbratelimitstub \
            -K 8081 \
            -L v1 \
            -M scpSignallingFqdn \
        '''
        if(env.Include_Regression && "${Include_Regression}" == "YES"){
            sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh'''
            load "/var/lib/jenkins/ocscp_tests/jenkinsData/Jenkinsfile-Merged"
        }
        else{
            load "/var/lib/jenkins/ocscp_tests/jenkinsData/Jenkinsfile-NewFeatures"
        }
    }
}
The description of these parameters is as follows:
Note:
Do not modify anything other than these parameters.
- a: Selected NF
- b: NameSpace in which SCP is deployed
- c: K8s Cluster Domain where SCP is deployed
- d: Test Stubs NameSpace - Must be same as SCP Namespace
- e: Docker registry where test stub image is available
- f: Audit Interval provided in SCP Deployment file
- g: Guard Time provided SCP Deployment file
- h: SCP-Worker microservice name as provided during deployment
- i: SCPC-Configuration microservice name as provided during deployment
- j: SCPC-Notification microservice name as provided during deployment
- k: SCPC-Subscription microservice name as provided during deployment
- l: DB Secret name as provided during deployment
- m: Mysql Host name as provided during deployment
- n: Test Stub Image Name with tag
- o: Test Stub CPU requests and limit
- p: Test Stub Memory requests and limit
- q: SCPC-Audit microservice name as provided during deployment
- r: re-run count
- s: SCPSVCPORT with which SCP is deployed
- t: PROMSVCNAME with which Prometheus server is deployed. Note: The Prometheus service name is followed by the namespace in which the Prometheus service exists. The Prometheus service may be in a different namespace than the SCP namespace.
- u: PROMSVCPORT with which Prometheus server is deployed
- v: Time to wait for stub creation. The default value of STUBWAITTIME is 240s. Note: Update STUBWAITTIME in increments or multiples of 30 seconds. The minimum allowable value for STUBWAITTIME is 30 seconds.
- w: API URI of Prometheus (/api/). For example, use /api/ or /prometheus/api/
- x: Subscription Validity Time to set by ATS in response to subscription request from SCP
- y: Application Debug Logs collection on Rerun if set to true. The default value is false
- z: Opensearch connection timeout
- A: Opensearch host/service name
- B: Opensearch port
- C: Stub log collection
- D: Provide Kubernetes or OpenSearch as the source that ATS uses for application log collection (Fetch_Log_Upon_Failure must be enabled)
- E: SCP HTTPS Signaling Port
- F: SCP NRF Proxy service name
- G: Kubernetes Host
- I: scpddclientstub name as provided during deployment
- J: scpglobalratelimitstub name as provided during deployment
- K: port of configuration pod in SCP deployment
- L: scp metrics version
- M: SCP worker FQDN or cluster IP. The default value is scpSignallingFqdn, which forwards requests to the worker using the worker's FQDN. Alternatively, an IPv4 address or an IPv6 address (enclosed in square brackets) can be provided.
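Most of these values map directly to Kubernetes objects created during the SCP and Prometheus deployments. The following commands are a minimal sketch, assuming kubectl access to the cluster, of how the microservice names, ports, and Prometheus details can be looked up before editing the script; the namespace names are placeholders, not values prescribed by this guide.
# List the SCP microservice deployments to confirm the names used for -h, -i, -j, -k, -q, -F, and -I
kubectl get deployments -n <scp-namespace>
# List the services and ports to confirm the SCP worker signaling ports used for -s and -E
kubectl get svc -n <scp-namespace>
# Confirm the Prometheus service name and port used for -t and -u (deployed in its own namespace)
kubectl get svc -n <prometheus-namespace> | grep -i prometheus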
4.5.2.2 Running SCP New Feature Pipelines
To run SCP test cases, perform the following steps:
- Click Build with Parameters link available in the left navigation pane of the Pipeline SCP-NewFeatures screen.
- Select the R16 option from the Execute_Suite drop-down menu to run Release 16 test cases for SCP.
- Select the required option from the Features
drop-down menu to run the test cases:
- All: To run all the test cases for SCP based on Release15 or Release16 selection.
- Sanity: Enabled for SCP. Sanity cases are run to do a quick sanity check on SCP SUT features.
- Select: This option allows users to select specific features and run them individually. Additionally, users can run an entire stage or group instead of running all feature files.
- Select one of the following configuration types from the
Configuration_Type drop-down menu:
- Product_Config: To run test cases from the features or regression directory. Test cases in this directory should not be modified.
- Custom_Config: To run test cases from the features or cust_regression directory. Any test case customization required should be done in this directory.
- Select an appropriate option and click
Build to perform the test.
Figure 4-53 Build
- Click Full Stage View in the left navigation pane to view the status of the run.
- Hover over values of Execute Tests, and then
click Logs to check test results and logs.
Note:
A sample consolidated test report is shown in the following image when rerun is set to 2, which is the default value. All the features passed within the first rerun; therefore, the second rerun stage was not run in this case.
Figure 4-54 Sample consolidated report
Note:
- The New Feature tests in this release have been written according to the SCP-ATS parameterization method (Golden Configuration). The individual features are not populated with direct values for the parameters but with variable names. While performing the tests, these variables are mapped to the data provided adjacent to them in the feature-specific <feature_name>.yaml and global.yaml files that can be found in the product_config directory. This method decouples data that can be customer specific from the features. For more information about parameterization (Golden Configuration), see Parameterization.
To view the detailed view of the respective group, rerun logs, and the consolidated report summary:
- Click Open Blue Ocean Homepage from the left navigation pane.
- Click the build number in the run column to view the consolidated output of all the features that are run.
- Click the Download icon to download the log files.
4.5.3 SCP-NewFeatures Documentation
- To view SCP functionalities, go to the
SCP-NewFeatures pipeline and click the
Documentation option in the left navigation pane.
It lists all the test cases provided as part of SCP ATS 25.1.201.
Note:
The Documentation option appears only if the SCP-NewFeatures pipeline test cases are run at least once.
Figure 4-55 SCP-NewFeatures Documentation
- Click any functionality to view its test cases and scenarios for
each test case.
Figure 4-56 Sample Scenario
Based on the functionalities covered under Documentation, the screen displays test cases.
- To navigate back to the Pipeline SCP-NewFeatures screen, click the Back to SCP-NewFeatures link available in the upper left corner of the screen.
4.5.4 SCP-Regression Pipeline
This section describes how to configure regression pipeline parameters, run SCP test cases, and view functionalities.
4.5.4.1 Configuring Regression Pipelines
This pre-configured pipeline has all the test cases from previous releases.
- Click SCP-Regression Pipeline:
Figure 4-57 SCP-Regression Pipeline
According to the installation guide, the configuration parameters can be modified in the ocats_ocscp_values.yaml file before deployment. If you are running SCP-Regression pipeline test cases for the first time without enabling the parameters in the deployment file, you have to set the input parameters before running the test cases. Subsequent test case runs do not require any input unless there is a need to change any configuration.
- In the left navigation pane, click Configure to
provide input parameters and scroll-down to the pipeline script as shown in the
following image:
Figure 4-58 Regression - Pipeline Script
- Modify the values of the Pipeline script as required, and
then click Save.
Parameter "-I" should be changed as part of "Pipeline script". The content of the pipeline script is as follows:
node ('built-in'){ //a = SELECTED_NF b = NFNAMESPACE c = CLUSTERDOMAIN d = DESTNAMESPACE //e = ATSREGISTRY f = AUDITINTERVAL g = GUARDTIME h = SCPSVCNAME //i = SCPCONFIGSVCNAME j = SCPNOTIFYSVCNAME k = SCPSUBSVCNAME l = DBSECRETNAME //m = MYSQLHOST n = ATSSTUBIMAGE o = ATSSTUBCPU p = ATSSTUBMEMORY q = SCPAUDITSVCNAME //r = RERUN_COUNT s = SCPSVCPORT t = PROMSVCNAME u = PROMSVCPORT v = STUBWAITTIME //w = PROMSVC_URI x = SUBVALIDITYTIME z = OPENSEARCH_WAIT_TIME A = OPENSEARCH_HOST //B = OPENSEARCH_PORT C = STUB_LOG_COLLECTION D = OPENSEARCH_LOGS E = SCPSVCHTTPSPORT F = SCPNRFPROXYSVCNAME //G = KUBERNETES_HOST H = KAFKANAMESPACE I = SCPDDCLIENTSVCNAME J = SCPGLBRATELIMITSTUBSVCNAME K = SCPCONFIGSVCPORT //L = SCP_METRICS_VERSION M = SCPSVCIP withEnv([ ]){ sh ''' sh /var/lib/jenkins/ocscp_tests/preTestConfig.sh \ -a SCP \ -b scpsvc \ -c cluster.local \ -d scpsvc \ -e cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats \ -f 60 \ -g 10 \ -h ocscp-scp-worker \ -i ocscp-scpc-configuration \ -j ocscp-scpc-notification \ -k ocscp-scpc-subscription \ -l appuser-secret \ -m mysql.default.svc.cluster.local \ -n ocats-pystub:tag \ -o 0.5 \ -p 0.5G \ -q ocscp-scpc-audit \ -r 0 \ -s 8000 \ -t occne-prometheus-server.occne-infra \ -u 80 \ -v 300 \ -w /api/ \ -x 120 \ -z 0 \ -A occne-opensearch-cluster-master.occne-infra \ -B 9200 \ -C no \ -D opensearch \ -E 9443 \ -F ocscp-scp-nrfproxy \ -G kubernetes.default \ -H scpsvc \ -I ocats-ocscp-ocats-ddclientstub \ -J ocats-ocscp-ocats-scpglbratelimitstub \ -K 8081 \ -L v1 \ -M scpSignallingFqdn \ ''' if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){ sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh''' load "/var/lib/jenkins/ocscp_tests/jenkinsData/Jenkinsfile-Merged" } else{ load "/var/lib/jenkins/ocscp_tests/jenkinsData/Jenkinsfile-Regression" } } }
The description of these parameters is as follows:
- a: Selected NF
- b: NameSpace in which SCP is deployed
- c: K8s Cluster Domain where SCP is deployed
- d: Test Stubs NameSpace - Must be same as SCP Namespace
- e: Docker registry where test stub image is available
- f: Audit Interval provided in SCP Deployment file
- g: Guard Time provided in the SCP Deployment file
- h: SCP-Worker microservice name as provided during deployment
- i: SCPC-Configuration microservice name as provided during deployment
- j: SCPC-Notification microservice name as provided during deployment
- k: SCPC-Subscription microservice name as provided during deployment
- l: DB Secret name as provided during deployment
- m: Mysql Host name as provided during deployment
- n: Test Stub Image Name with tag
- o: Test Stub CPU requests and limit
- p: Test Stub Memory requests and limit
- q: SCPC-Audit microservice name as provided during deployment
- r: re-run count
- s: SCPSVCPORT with which SCP is deployed
- t: PROMSVCNAME with which the Prometheus server is deployed. Note: The Prometheus service name is followed by the namespace in which the Prometheus service exists. The Prometheus service is in a different namespace than the SCP deployment namespace.
- u: PROMSVCPORT with which the Prometheus server is deployed
- v: Time to wait for Stub Creation. The default value of STUBWAITTIME is 240s. Note: Update STUBWAITTIME in increments or multiples of 30 seconds. The minimum allowable value for STUBWAITTIME is 30 seconds.
- w: API URI of Prometheus (/api/) - Use "/api/" or "/prometheus/api/"
- x: Subscription Validity Time to set by ATS in response to subscription request from SCP
- y: Application Debug Logs collection on Rerun if set to true. The default value is false
- z: Opensearch connection timeout
- A: Opensearch host/service name
- B: Opensearch port
- C: Stub log collection
- D: Opensearch API Path Prefix
- E: SCP HTTPS Signaling Port
- F: SCP NRF Proxy service name
- G: Kubernetes Host
- I: scpddclientstub name as provided during deployment
- J: scpglobalratelimitstub name as provided during deployment
- K: port of configuration pod in scp deployment
- L: scp metrics version
- M: scp worker FQDN or cluster IP. The default value is scpSignallingFqdn, which forwards requests to the worker using the worker FQDN. Alternatively, an IPv4 or IPv6 address (enclosed in square brackets) can also be provided.
4.5.4.2 Running Regression Pipelines
This section describes how to run test cases for SCP Regression pipeline.
- To run the pipeline, click Build with Parameters in the left navigation pane.
- Select the R16 option from the Execute_Suite drop-down menu to run Release 16 test cases for SCP.
- Select the required regression option from the
Features drop-down menu to run the test cases:
- All: To run all the test cases for SCP based on Release15 or Release16 selection. This runs all the cases except SCP_Audit_nnrf_disc. If SCP is deployed with nnrf-disc for audit, or if registration with NRF is disabled, then the All option should not be used; instead, the Single or MultipleFeatures option can be used to select appropriate cases.
- Sanity: Enabled for SCP. Sanity cases are run to do a quick sanity check on SCP SUT features. Sanity cases are included in the regression suite and are tagged with @sanity.
- Select: This option allows users to select specific features and run them individually. Additionally, users can run an entire stage or group instead of running all feature files.
- Select one of the following configuration types from the
Configuration_Type drop-down menu:
- Product_Config: To run test cases from the features or regression directory. Test cases in this directory should not be modified.
- Custom_Config: To run test cases from the features or cust_regression directory. Any test case customization required should be done in this directory.
- Select an appropriate option and click Build to run the test case.
- Click Full Stage View in the left navigation pane to view the status of the run.
- Hover over the Execute-Tests stage of
the pipeline, and then click Logs to check test case
results and logs.
To view the detailed view of the respective group, rerun logs, and the consolidated report summary:
- Click Open Blue Ocean Homepage from the left navigation pane.
- Click the build number in the run column to view the consolidated output of all the features that are run.
- Click the Download icon to download the log files.
4.5.5 SCP-Regression Documentation
- To view SCP functionalities, go to the
SCP-Regression pipeline and click the
Documentation option in the left navigation pane.
It lists all the test cases provided as part of SCP ATS 25.1.201.
Note:
The Documentation option appears only if the SCP-Regression pipeline test cases are run at least once.
In SCP 24.1.0, the feature files are renamed as per the naming convention. For more information, see Renaming of Regression Feature Files.
Figure 4-59 SCP-Regression Documentation
- Click any functionality to view its test
cases and scenarios for each test case.
Figure 4-60 Sample Scenario
Based on the functionalities covered under Documentation, the screen displays test cases.
- To navigate back to the Pipeline SCP-Regression screen, click the Back to SCP-Regression link available in the upper left corner of the screen.
4.5.5.1 Renaming of Regression Feature Files
Table 4-3 Regression Feature File
Feature Names prior to SCP 24.1.0 | New Feature Names for SCP 24.1.0 |
---|---|
Route_group_static_config_api_validation | SCP_AlternateRoutingUsingStaticConfig_API_P0 |
Alternate_Routing_Using_Static_Config_EgressRL_SMF | SCP_AlternateRoutingUsingStaticConfig_EgressRL_SMF_P0 |
Alternate_Routing_Using_Static_Configuration_Https_SMF | SCP_AlternateRoutingUsingStaticConfig_HTTPS_SMF_P0 |
Alternate_Routing_Using_Static_Configuration_Notification_CircuitBreaking. | SCP_AlternateRoutingUsingStaticConfig_NotificationMessageCircuitBreaking_P0 |
Alternate_Routing_Using_Static_Configuration_SMF | SCP_AlternateRoutingUsingStaticConfig_SMF_P0 |
Alternate_Routing_Using_Static_Config_Overload_Control | SCP_AlternateRoutingUsingStaticConfig_OverloadControl_P0 |
SCP_Audit_nnrf_disc_SMF | SCP_Audit_nnrf-disc_SMF_P0 |
SCP_Audit_nnrf_nfm_SMF | SCP_Audit_nnrf-nfm_SMF_P0 |
SCP_Audit_nnrf_nfm_SMF_user_agent | SCP_Audit_nnrfnfmSCPsUserAgentHeaderValidation_P0 |
Dns_Srv_Call_Back_Notification_CD_OD_Cases | SCP_CallBackNotification_DNSSRVbasedCBOD_P0 |
Callback_And_Notification | SCP_CallBackNotification_P0 |
SCP_CCA_Header_Validation_API_P0 | SCP_CCAvalidation_API_P0 |
CCA_validation_AUSF_P0 | SCP_CCAvalidation_AUSF_P0 |
CCA_validation_AUSF_P1 | SCP_CCAvalidation_AUSF_P1 |
CCA_validation_InterPLMN_CHF_P0 | SCP_CCAvalidation_InterPLMNRouting_CHF_P0 |
CCA_validation_InterSCP_CHF_P0 | SCP_CCAvalidation_InterSCP_CHF_P0 |
CCA_validation_ModelD_CHF_P0 | SCP_CCAvalidation_ModelD_CHF_P0 |
CCA_validation_ModelD_CHF_P1 | SCP_CCAvalidation_ModelD_CHF_P1 |
SCP_Circuit_Breaking_Config_API | SCP_CircuitBreaking_ConfigAPI_P0 |
Circuit_Breaking_PCF | SCP_CircuitBreaking_PCF_P0 |
Circuit_Breaking_SMF | SCP_CircuitBreaking_SMF_P0 |
SCP_Congestion_Control_Config_API | SCP_CongestionControl_ConfigAPI_P0 |
S_Correlation_Header_interplmn_model_C | SCP_CorrelationHeader_InterPLMNModelC_P0 |
Correlation_Headersepp_model_D_Inter_PLMN_and_Mediation | SCP_CorrelationHeader_InterPLMNRoutingBasedOnModelDandMediation_P0 |
Interscp_Correlation_Header_model_C | SCP_CorrelationHeader_InterSCP_P0 |
Correlation_Header_model_C_cases | SCP_CorrelationHeader_ModelC_P0 |
Notification_Correlation_Header_model_C | SCP_CorrelationHeader_ModelCNotificationRequest_P0 |
Correlation_Header_SCP_generated_error | SCP_CorrelationHeader_SCPGeneratedError_P0 |
DefaultNotificationCallbackInitialRouteAndDiscovery_p0 | SCP_DefaultNotificationCallback_InitialRouteAndDiscovery_P0 |
Dns_Srv_Call_Back_Notification_Https | SCP_DefaultNotificationCallbackUri_DNSSRV_HTTPS_P0 |
Dns_Srv_Call_Back_Notification | SCP_DefaultNotificationCallbackUri_DNSSRV_P0 |
DefaultNotificationCallbackUriModelC_Https_p1 | SCP_DefaultNotificationCallbackUri_ModelC_HTTPS_P1 |
DefaultNotificationCallbackUriModelC_p0 | SCP_DefaultNotificationCallbackUri_ModelC_P0 |
DefaultNotificationCallbackUriModelC_p1 | SCP_DefaultNotificationCallbackUri_ModelC_P1 |
DefaultNotificationCallbackUriModelC_CB_p0 | SCP_DefaultNotificationCallbackUri_ModelCCB_P0 |
DefaultNotificationCallbackUriModelC_SEPP_p1 | SCP_DefaultNotificationCallbackUri_ModelCInterPLMNRouting_P1 |
DefaultNotificationCallbackUriModelC_SEPP_p1 | SCP_DefaultNotificationCallbackUri_ModelCInterPLMNRouting_P1 |
Alternate_resolution_api_validation | SCP_DNSSRV_AlternateResolution_API_P0 |
Dns_Srv_Based_Routing_For_ModelC_Headers_PCF_cases | SCP_DNSSRV_ModelCHeaders_PCF_P0 |
Dns_Srv_Based_Routing_For_ModelC_Headers_UDM_cases | SCP_DNSSRV_ModelCHeaders_UDM_P0 |
Dns_Srv_Producer_Overload_Control | SCP_DNSSRV_ProducerBasedOverloadControl_P0 |
Alert_egress_congestion_based_on_producer_load_and_message_priority_CHF | SCP_EgressCongestionBasedOnProducerLoad_AlertMessagePriority_CHF_P0 |
Egress_congestion_based_on_producer_load_and_message_priority_PCF | SCP_EgressCongestionBasedOnProducerLoad_MessagePriority_PCF_P0 |
Egress_congestion_based_on_producer_load_and_message_priority_UDM | SCP_EgressCongestionBasedOnProducerLoad_MessagePriority_UDM_P0 |
SCP_Egress_Host_Preference_alternate_routing_SMF_P0_P1 | SCP_EgressHostPreference_AlternateRouting_SMF_P1 |
SCP_Egress_Host_Preference_alternate_routing_withoutHeader_AUSF_P1 | SCP_EgressHostPreference_AlternateRoutingWithoutHeader_AUSF_P1 |
SCP_Egress_Host_Preference_API_P0 | SCP_EgressHostPreference_API_P0 |
SCP_Egress_Host_Preference_CHF_P0 | SCP_EgressHostPreference_CHF_P0 |
SCP_Egress_Host_Preference_CHF_P1 | SCP_EgressHostPreference_CHF_P1 |
SCP_Egress_Host_Preference_InterPLMN_Routing_P0 | SCP_EgressHostPreference_InterPLMNRouting_P0 |
SCP_Egress_Host_Preference_InterPLMN_Routing_P1 | SCP_EgressHostPreference_InterPLMNRouting_P1 |
SCP_Egress_Host_Preference_InterPLMN_Routing_Non_ASM_P0 | SCP_EgressHostPreference_InterPLMNRoutingNonASM_P0 |
SCP_Egress_Host_Preference_InterSCP_Routing_CHF_P0 | SCP_EgressHostPreference_InterSCPRouting_CHF_P0 |
SCP_Egress_Host_Preference_InterSCP_Routing_CHF_P1 | SCP_EgressHostPreference_InterSCPRouting_CHF_P1 |
SCP_Egress_Host_Preference_InterSCP_Routing_CHF_Non_ASM_P0 | SCP_EgressHostPreference_InterSCPRoutingNonASM_CHF_P0 |
SCP_Egress_Host_Preference_ModelD_routing_CHF_NRF_P0 | SCP_EgressHostPreference_ModelDRoutingCHF_NRF_P0 |
SCP_Egress_Host_Preference_ModelD_routing_CHF_NRF_P1 | SCP_EgressHostPreference_ModelDRoutingCHF_NRF_P1 |
SCP_Egress_Host_Preference_ModelD_routing_without_apiRootHdr_CHF_NRF_P1 | SCP_EgressHostPreference_ModelDRoutingWithoutAPIRootHdrCHFandNRF_P1 |
SCP_Egress_Host_Preference_Notification_PCF | SCP_EgressHostPreference_NotificationRequest_PCF_P0 |
SCP_Egress_Host_Preference_SCP_Generated_Audit_SMF_NRF_P1 | SCP_EgressHostPreference_SCPGeneratedAuditSMF_NRF_P1 |
SCP_Egress_Host_Preference_SMF_P0_P1 | SCP_EgressHostPreference_SMF_P1 |
SCP_Egress_Host_Prefrence_SCP_Generated_Subscription_SCP_NRF_P1 | SCP_EgressHostPrefrence_SCPgeneratedSubscription_NRF_P1 |
Alert_SCPEgressTrafficRateExceededConfiguredLimit_AUSF | SCP_EgressRateLimiting_AlertTrafficRateExceededConfiguredLimit_AUSF_P0 |
Alert_SCPEgressTrafficRoutedWithoutRateLimitTreatment_AUSF | SCP_EgressRateLimiting_AlertTrafficRoutedWithoutRateLimitTreatment_AUSF_P0 |
Egress_Rate_Limiting_Config_API | SCP_EgressRateLimiting_Config_API_P0 |
EgressRateLimiting_INTERSCP | SCP_EgressRateLimiting_InterSCP_P0 |
EgressRateLimiting | SCP_EgressRateLimiting_P0 |
EgressRateLimiting_UDM_Cases | SCP_EgressRateLimiting_UDM_P0 |
RateLimitingRelease16_AUSF | SCP_EgressRateLimitingRelease16_AUSF_P0 |
SCP_EgressRateLimiting_DNSSRV.feature | SCP_EgressRateLimiting_DNSSRV_P0.feature |
Enhanced_Suspended_State_Routing_API_Validation | SCP_EnhancedSuspendedStateRouting_API_P0 |
Enhanced_Suspended_State_Routing_Alternate_Routing_Using_Static_Config_DNS_SRV | SCP_EnhancedSuspendedStateRouting_ARusingStaticConfigandDNSSRV_P0 |
Enhanced_Suspended_State_Routing_Inter_PLMN | SCP_EnhancedSuspendedStateRouting_InterPLMNRouting_P0 |
Enhanced_Suspended_State_Routing_Mode2_API_Validation_P0 | SCP_EnhancedSuspendedStateRouting_Mode2_API_P0 |
Enhanced_Suspended_State_Routing_Mode2_Alternate_Routing_Using_Static_Config_DNS_SRV_P0 | SCP_EnhancedSuspendedStateRouting_Mode2AlternateRoutingStaticConfigDNSSRV_P0 |
Enhanced_Suspended_State_Routing_Mode2_ModelC_SMF_P1 | SCP_EnhancedSuspendedStateRouting_Mode2ModelC_SMF_P1 |
Enhanced_Suspended_State_Routing_Mode2_ModelC_Inter_PLMN_P0 | SCP_EnhancedSuspendedStateRouting_Mode2ModelCInterPLMN_P0 |
Enhanced_Suspended_State_Routing_Mode2_ModelC_InterSCP_P0 | SCP_EnhancedSuspendedStateRouting_Mode2ModelCInterSCP_P0 |
Enhanced_Suspended_State_Routing_Mode2_ModelC_Overload_Congestion_SMF_P1 | SCP_EnhancedSuspendedStateRouting_Mode2ModelCOverloadCongestion_SMF_P1 |
Enhanced_Suspended_State_Routing_Mode2_ModelD_P0 | SCP_EnhancedSuspendedStateRouting_Mode2ModelD_P0 |
Enhanced_Suspended_State_Routing_Mode2_ModelD_Inter_PLMN_P0 | SCP_EnhancedSuspendedStateRouting_Mode2ModelDInterPLMNRoutin_P0 |
Enhanced_Suspended_State_Routing_Mode2_ModelD_InterSCP_P0 | SCP_EnhancedSuspendedStateRouting_Mode2ModelDInterSCP_P0 |
Enhanced_Suspended_State_Routing_ModelC_SMF | SCP_EnhancedSuspendedStateRouting_ModelC_SMF_P0 |
Enhanced_Suspended_State_Routing_Parallel_ModelC_SMF_UDM_P0 | SCP_EnhancedSuspendedStateRouting_ModelC_SMFandUDM_P0 |
Enhanced_Suspended_State_Routing_ModelC_UDM | SCP_EnhancedSuspendedStateRouting_ModelC_UDM_P0 |
Enhanced_Suspended_State_Routing_Model_C_Inter_SCP | SCP_EnhancedSuspendedStateRouting_ModelCInterSCP_P0 |
Enhanced_Suspended_State_Routing_ModelD | SCP_EnhancedSuspendedStateRouting_ModelD_P0 |
Enhanced_Suspended_State_Routing_ModelD_InterSCP | SCP_EnhancedSuspendedStateRouting_ModelDInterSCP_P0 |
Enhanced_NF_Status_Processing_SUSPENDED | SCP_EnhancedSuspendedStateRouting_SuspendedMode2_P0 |
Enhanced_NF_Status_Processing_SUSPENDED_SMF | SCP_EnhancedSuspendedStateRouting_SuspendedMode2_SMF_P0 |
Enhanced_NF_Status_Processing_UNDISCOVERABLE | SCP_EnhancedSuspendedStateRouting_UNDISCOVERABLE_P0 |
SCP_Error_Profile_Config_API_P0 | SCP_ErrorProfile_ConfigAPI_P0 |
SCPFailureHandling_default_behavior_serverheader_AUSF | SCP_FailureHandling_DefaultBehaviorServerHeader_AUSF_P0 |
SCPFailureHandling_default_behavior_serverheader | SCP_FailureHandling_DefaultBehaviorServerHeader_P0 |
SCPFailureHandling_DNSSRV_based_EnhancedServerHeaderV2_NEF_p0 | SCP_FailureHandling_DNSSRVbasedEnhancedServerHeaderV2_NEF_P0 |
SCPFailureHandling_EnhancedServerHeader | SCP_FailureHandling_EnhancedServerHeader_P0 |
SCPFailureHandling_EnhancedServerHeaderV2_API | SCP_FailureHandling_EnhancedServerHeaderV2_API_P0 |
SCPFailureHandling_EnhancedServerHeaderV2_p0 | SCP_FailureHandling_EnhancedServerHeaderV2_P0 |
SCPFailureHandling_EnhancedServerHeaderV2_p1 | SCP_FailureHandling_EnhancedServerHeaderV2_P1 |
FailureHandling_InterSCP_EnhancedServerHeader | SCP_FailureHandling_InterSCPEnhancedServerHeader.P0 |
FormUrlEncoded | SCP_FormUrlEncoded_P0 |
Global_Egress_Rate_Limiting_Config_API | SCP_GlobalEgressRateLimiting_API_P0 |
GlobalEgressRateLimiting_P0 | SCP_GlobalEgressRateLimiting_P0 |
SCP_HealthCheck_Support_API_P0 | SCP_HealthCheckSupport_API_P0 |
SCP_HealthCheck_Support_HTTPS_P0 | SCP_HealthCheckSupport_HTTPS_P0 |
SCP_HealthCheckInterSCPModelC_HTTPS_P0 | SCP_HealthCheckSupport_InterSCPModelC_HTTPS_P0 |
SCP_HealthCheckInterSCPModelC_P0 | SCP_HealthCheckSupport_InterSCPModelC_P0 |
SCP_HealthCheckInterSCPModelD_P0 | SCP_HealthCheckSupport_InterSCPModelD_P0 |
Alert_SCPIngressTrafficRateExceededConfiguredLimit | SCP_IngressRateLimiting_AlertTrafficRateExceededConfiguredLimit_P |
Alert_SCPIngressTrafficRoutedWithoutRateLimitTreatment | SCP_IngressRateLimiting_AlertTrafficRoutedWithoutRateLimitTreatment_P0 |
Ingress_Rate_Limiting_Config_API | SCP_IngressRateLimiting_API_P0 |
Ingress_Rate_Limiting_Config_API_SC1 | SCP_IngressRateLimiting_ConfigAPI_P0 |
Ingress_Rate_Limiting_Config_API_enhancement_for_nfInstanceId | SCP_IngressRateLimiting_ConfigAPIenhancementNFInstanceId_P0 |
Ingress_Rate_Limiting_enhancement_for_nfInstanceId_to_support_UserAgentHeader | SCP_IngressRateLimiting_NFInstanceIdEnhancementUserAgentHeader_P0 |
SEPP_Audit_nnrf_nfm | SCP_InterPLMNRouting_Audit_nnrf-nfm_P0 |
SCP-SEPP_ModelD_Routing | SCP_InterPLMNRouting_ModelDBased_P0 |
SEPP_Routing_NRF | SCP_InterPLMNRouting_NRF_P0 |
OutlierDetectionSepp_P0 | SCP_InterPLMNRouting_OutlierDetection_P0 |
SCP-SEPP_Routing | SCP_InterPLMNRouting_P0 |
SEPP_plmnInfo_DELETE_API | SCP_InterPLMNRouting_PLMNInfoDeleteAPI_P0 |
SEPP_plmnInfo_GET_API | SCP_InterPLMNRouting_PLMNInfoGetAPI_P0 |
SEPP_plmnInfo_PATCH_API | SCP_InterPLMNRouting_PLMNInfoPatchAPI_P0 |
SEPP_plmnInfo_PUT_API | SCP_InterPLMNRouting_PLMNInfoPutAPI_P0 |
SCP-SEPP_Routing_UDM | SCP_InterPLMNRouting_UDM_P0 |
Rel16_Inter_SCP_Audit_nnrf_nfm_p0 | SCP_InterSCP_Audit_nnrf-nfm_P0 |
InterSCP_MateInfoListRouting | SCP_InterSCP_MateInfoListRouting_P0 |
Inter_SCP_Routing_NplusK_p0 | SCP_InterSCP_NplusKRouting_P0 |
Inter_SCP_Subscription_SMF_NRF | SCP_InterSCP_SubscriptionWithForeignNRFforNfTypeSMF_P0 |
Inter_SCP_Routing | SCP_InterSCPRouting_P0 |
Jaeger_Api_Validation | SCP_Jaeger_API_P0 |
SCP_LCI_Support_Config_API | SCP_LCIsupport_API_P0 |
Scp_Lci_Support_InterPLMN | SCP_LCIsupport_InterPLMNRouting_P0 |
SCP_Lci_Support_InterSCP | SCP_LCIsupport_InterSCP_P0 |
SCP_Lci_Support_InterSCP_PCF_SCP | SCP_LCIsupport_InterSCP_PCF_P0 |
SCP_Lci_Support_InterSCP_UDM | SCP_LCIsupport_InterSCP_UDM_P0 |
SCP_Lci_Support_Miscellaneous | SCP_LCIsupport_Miscellaneous_P0 |
Scp_Lci_Support_ModelD | SCP_LCIsupport_ModelD_P0 |
Scp_Lci_Support_ModelD_notification_cases | SCP_LCIsupport_ModelDNotificationRequest_P0 |
SCP_Lci_Support | SCP_LCIsupport_P0 |
Local_N_Plus_K | SCP_LocalNplusK_P0 |
Location_hdr_update_for_host_mismatch_API_validation | Location_hdr_update_for_host_mismatch_API_validation |
App_Routing_Options_Config_API | SCP_Mediation_AppRoutingOptionsConfig_API_P0 |
SCP_Mediation_Config_API | SCP_Mediation_ConfigAPI_P0 |
SCP_Mediation_Worker_side_cases_PCF_Cases | SCP_Mediation_FunctionalCases_PCF_P0 |
SCP_Mediation_Cases_AUSF | SCP_Mediation_FunctionalCasesPart1_AUSF_P0 |
SCP_Mediation_Worker_side_cases_AUSF | SCP_Mediation_FunctionalCasesPart2_AUSF_P0 |
SCP_Mediation_Worker_side_cases_UDM_SMF | SCP_Mediation_FunctionalTestCases_UDMandSMF_P0 |
SCP_Mediation_Worker_side_cases_UDM_AUSF_SMF_Notification_cases | SCP_Mediation_FunctionalTestCasesForUDMnAUSFnSMFnNotificationRequest_P0 |
SCP_Mediation_Cases_PCF | SCP_Mediation_PCF_P0 |
Alert_SCPRoutingFailedForProducer_CHF | SCP_ModelC_AlertSCPRoutingFailedForProducer_CHF_P0 |
ModelC_allheaderRequest | SCP_ModelC_AllHeaderRequest_P0 |
PCF_AMPolicy_forwardRoute_Target_apiRoot | SCP_ModelC_AMPolicyTargetAPIRoot_PCF_P0 |
ModelC_CatchAll_Routing_Configuration_AUSF | SCP_ModelC_CatchAllRoutingConfiguration_AUSF_P0 |
ModelC_Direct_Routing_Configuration_AUSF | SCP_ModelC_DirectRoutingConfiguration_AUSF_P0 |
ModelC_ProducerOverloadControl | SCP_ModelC_EgressCongestionBasedOnProducerLoad_P0 |
Egress_Congestion_based_on_load_PCF_SMPolicy_forwardRoute_Target_apiRoot | SCP_ModelC_EgressCongestionBasedOnProducerLoad_PCF_P0 |
Https_Routing_For_ModelC_Headers | SCP_ModelC_HTTPS_P0 |
ModelC_IngressRateLimiting | SCP_ModelC_IngressRateLimiting_P0 |
Model-C_based_Inter_SCP_Routing | SCP_ModelC_InterSCProuting_P0 |
ModelC_Inter_SCP_Routing_Options_Notification_Cases | SCP_ModelC_InterSCPRoutingOptionsNotificationCases_P0 |
InterSCP_VersionSupport | SCP_ModelC_InterSCPVersionSupport_P0 |
ModelC_NFInstance_AUSF_Cases | SCP_ModelC_NFInstance_AUSF_P0 |
ModelC_NFInstance_NEF_Cases | SCP_ModelC_NFInstance_NEF_P0 |
NF_Notification | SCP_ModelC_NFprofileNotificationWithMultipleVersions_P |
ModelC_NFSet_NEF_Cases | SCP_ModelC_NFSet_NEF_P0 |
ModelC_NFSet_PCF_Cases | SCP_ModelC_NFSet_PCF_P0 |
ModelC_NFSet_SMF_Cases | SCP_ModelC_NFSet_SMF_P0 |
ModelC_ReverseLookup_CHF_Cases | SCP_ModelC_ReverseLookup_CHF_P0 |
ModelC_ReverseLookup_UDM_Cases | SCP_ModelC_ReverseLookup_UDM_P0 |
ModelC_ReverseLookup_UDR_Cases | SCP_ModelC_ReverseLookup_UDR_P0 |
ModelC_Inter_SCP_Routing_Options | SCP_ModelC_RoutingOptionsInterSCP_P0 |
AUSF_AUTH_forwardRoute_Target_apiRoot | SCP_ModelC_TargetAPIRoot_AUSF_P0 |
CHF_SLC_forwardRoute_Target_apiRoot | SCP_ModelC_TargetAPIRoot_CHF_P0 |
SMF_forwardRoute_Target_apiRoot | SCP_ModelC_TargetAPIRoot_SMF_P0 |
UDM_SDM_forwardRoute_Target_apiRoot | SCP_ModelC_TargetAPIRootSDM_UDM_P0 |
UDM_UECM_forwardRoute_Target_apiRoot | SCP_ModelC_TargetAPIRootUECM_UDM_P0 |
Version_support | SCP_ModelC_VersionSupport_P0 |
ModelD_Config_API | SCP_ModelD_API_P0 |
ModelD_enforceReqSpecificSvcDiscovery | SCP_ModelD_EnforceReqSpecificSvcDiscovery_P0 |
ModelD_based_Routing_implicit_notification_p0 | SCP_ModelD_ImplicitNotificationRouting_P0 |
ModelD_based_Routing | SCP_ModelD_Routing_P0 |
Notification_Patch_Profile_Support_SMF | SCP_NotificationRequest_PatchProfileSupport_SMF_P0 |
SCP_Oauth2_Access_Token_Granularity_Config_API_P0 | SCP_Oauth2Support_AccessTokenGranularity_API_P0 |
SCP_Oauth2_Support_Alternative_Config_ModelD_SMF_NRF_P0 | SCP_Oauth2Support_AlternativeConfigModelD_SMFandNRF_P0 |
SCP_Oauth2_Granularity_config_ModelD_SMF_NRF_P0 | SCP_Oauth2Support_GranularityConfigModelD_SMFandNRF_P0 |
SCP_Oauth2_Local-PLMN-required-config_API_P0 | SCP_Oauth2Support_LocalPLMNRequiredConfig_API_P0 |
SCP_OAuth2_support_ModelC_CHF_P0 | SCP_Oauth2Support_ModelC_CHF_P0 |
Scp_Oauth2_Support_ModelC_SMF_NRF_P0 | SCP_Oauth2Support_ModelC_SMFandNRF_P0 |
SCP_OAuth2_Support_Config_API_P0 | SCP_OAuth2Support_ConfigAPI_P0 |
SCP_Oauth2_Support_ModelC_Error_Handling_CHF_NRF_P1 | SCP_Oauth2Support_ModelCErrorHandling_CHFandNRF_P1 |
SCP_Oauth2_Support_ModelD_SMF_NRF_P0_P1 | SCP_Oauth2Support_ModelD_SMFandNRF_P1 |
SCP_Oauth2_Support_ModelD_interSCP_P0 | SCP_Oauth2Support_ModelDinterSCP_P0 |
SCP_Oauth2_NRF_Configuration_Config_API_P0 | SCP_Oauth2Support_NRFConfiguration_API_P0 |
SCP_Observability_And_Inter_Microservice_Resilience_P0 | SCP_ObservabilityAndInterMicroserviceResilience_P0 |
SCP_OutlierDetection_API | SCP_OutlierDetection_API_P0 |
OutlierDetectionInterSCP_P0 | SCP_OutlierDetection_InterSCP_P0 |
OutlierDetectionProducer_AUSF_P0 | SCP_OutlierDetection_ProducerAUSF_P0 |
PseudoHeadersModification | SCP_PseudoHeadersModification_P0 |
PseudoHeadersModificationHttps | SCP_PseudoHeadersModification_HTTPS_P0 |
NRF_Registration | SCP_RegistrationWithNRF_P0 |
SCP_Registration_With_PLMNList | SCP_RegistrationWithNRF_PLMNList_P0 |
Scp_profile_mateScpInfo_update | SCP_RegistrationWithNRF_ProfileMateSCPInfoUpdate_P0 |
Rel16_Canary_support_PCF_Cases | SCP_Rel16CanarySupport_PCF_P0 |
Rel16_Canary_support_SMF_Cases | SCP_Rel16CanarySupport_SMF_P0 |
Rel16_Canary_support_UDM_Cases | SCP_Rel16CanarySupport_UDM_P0 |
Re_routing_based_on_configurable_http_status_codes_api_validation_p0 | SCP_ReroutingBasedonConfigurableHttpStatusCodes_API_P0 |
Re_routing_based_on_configurable_http_status_codes_p0 | SCP_ReRoutingBasedOnConfigurableHttpStatusCodes_P0 |
SCP_Ro_Enhancments_For_CB_CC_SMF_P0 | SCP_RoutingOptionEnhancements_CBAndCongestionControl_SMF_P0 |
SCP_Ro_Enhancments_For_CB_CC_InterSCP_InterPLMN_SMF_PCF_P0 | SCP_RoutingOptionEnhancements_CBAndCongestionControlForInterPLMNandSCPRouting_SMFandPCF_P0 |
SCP_Ro_Enahancment_For_CC_CB_Notification_PCF_P0 | SCP_RoutingOptionEnhancements_CBAndCongestionControlForNotificationReq_PCF_P0 |
SCP_Notification_Routing_Option_Config_API | SCP_RoutingOptionForNotificationReq_API_P0 |
SCP_Routing_Option_Set_API | SCP_RoutingOptionForNotificationReq_RoutingConfigSet_API_P0 |
Routing_Rules_API_R16 | SCP_RoutingRules_R16_API_P0 |
Sbi_Message_Priority_Config_API | SCP_SBImessagePriority_ConfigAPI_P0 |
SbiMessagePriority | CP_SBImessagePriority_P0 |
SCP_Features_API | SCP_SCPFeatures_API_P0 |
SCP_Generated_Error_Codes_SMF | SCPGeneratedErrorCodes_SMF_P0 |
SCP_Generated_Error_Codes_UDM | SCP_SCPGeneratedErrorCodes_UDM_P0 |
NRF_Subscription | SCP_Subscription_SubscriptionWithNRFforNfTypeUDM_P0 |
Configurataion_Support_For_NEF_services_NEF_P1 | SCP_SupportForNEFservices_Configuration_APIs_P1 |
Support_For_NEF_services_NEF_Enhanced_NF_Status_P0 | SCP_SupportForNEFservices_EnhancedSuspendedStateRouting_P0 |
Support_For_NEF_services_NEF_SEPP_P0 | SCP_SupportForNEFservices_InterPLMNRouting_P0 |
Support_For_NEF_services_NEF_P0 | SCP_SupportForNEFservices_NEF_P0 |
Support_For_NEF_services_NEF_NRF_SCP_p0 | SCP_SupportForNEFservices_NRF_P0 |
Traffic_Feed_REST_API | SCP_TrafficFeed_API_P0 |
TrafficFeed_CB | SCP_TrafficFeed_CB_P0 |
TrafficFeed_Notification | SCP_TrafficFeed_NotificationRequest_P0 |
TrafficFeed_PCF | SCP_TrafficFeed_PCF_P0 |
TrafficFeed_UDM | SCP_TrafficFeed_UDM_P0 |
Consumer_Info_API | SCP_UserAgent_ConsumerInfo_API_P0 |
Scp_User_Agent_Info_API | SCP_UserAgent_Info_API_P0 |
Worker_Pod_Overload_Action_Policy_Config_API | SCP_WorkerPodOverLoadControl_ActionPolicy_API_P0 |
CPUOverloadCtrlByNoAction | SCP_WorkerPodOverLoadControl_CPUbasedNoAction_P0 |
Worker_Pod_Overload_Ctrl_Policy_Config_API | SCP_WorkerPodOverLoadControl_CtrlPolicy_API_P0 |
Worker_Pod_Overload_Discard_Policy_Config_API | SCP_WorkerPodOverLoadControl_DiscardPolicy_API_P0 |
Worker_Traffic_Pod_Overload_P0 | SCP_WorkerPodOverLoadControl_PendingTransaction_P0 |
Worker_Traffic_Pod_Overload_P1 | SCP_WorkerPodOverLoadControl_PendingTransaction_P1 |
API_Worker_Pod_Pending_Transaction_Overload_Config_P0.feature | SCP_WorkerPodOverLoadControl_PendingTransactionConfig_API_P0 |
SCP_1_12_0_Bug_Fixes | SCP_1_12_0_BugFixes_P0 |
SCP_1.9.0_Bug_Fixes_SMF | SCP_1.9.0_BugFixes_SMF_P0 |
SCP_1.9.0_Bug_Fixes_UDR | SCP_1.9.0_BugFixes_UDR_P0 |
SCP_22.2.0_Bug_Fixes_PCF_Cases | SCP_22.2.0_BugFixes_PCF_P0 |
SCP_22.2.0_Bug_Fixes_UDM_Cases | SCP_22.2.0_BugFixes_UDM_P0 |
SCP_22.3.0_Bug_Fixes | SCP_22.3.0_BugFixes_P0 |
SCP_22.3.1_Bug_Fixes | SCP_22.3.1_BugFixes_P0 |
22.4.0_Bug_Fixes | SCP_22.4.0_BugFixes_P0 |
22.4.0_Bug_Fixes_Mediation | SCP_22.4.0_Bug_Fixes_Mediation |
22.4.4_Bug_Fixes | SCP_22.4.4_BugFixes_PCF_P0 |
23.1.0_Bug_Fixes | SCP_23.1.0_BugFixes_P0 |
23.2.0_Bug_Fixes_HTTPS | SCP_23.2.0_BugFixes_HTTPS_P0 |
23.2.0_Bug_Fixes | SCP_23.2.0_BugFixes_P0 |
Table 4-4 ATS Regression Feature Tests
Feature Test Name before Splitting | Feature Test Name after Splitting |
---|---|
SCP_SupportForNEFservices_EnhancedSuspendedStateRouting_P0 | |
SCP_RoutingOptionEnhancments_CBAndCongestionControl_SMF_P0 | |
SCP_OutlierDetection_AUSF_P0 | |
SCP_Oauth2Support_GranularityConfigModelD_SMFandNRF_P0 | |
SCP_ModelC_NFInstance_NEF_P0 | |
SCP_LCIsupport_Miscellaneous_P0 | |
SCP_EgressHostPreference_InterSCPRoutingNonASM_CHF_P0 | |
SCP_DNSSRV_ModelCHeaders_PCF_P0 | |
SCP_AlternateRoutingUsingStaticConfig_SMF_P0 | |
SCP_22.3.0_BugFixes_P0 | |
4.6 Running SEPP Test Cases using ATS
4.6.1 Prerequisites
To run SEPP Test Cases using SEPP ATS 25.1.201, you need to ensure that the following prerequisites are fulfilled:
- Debug container should be DISABLED for all microservices during ATS runs.
- To run the ATS test cases, the user should not configure any value for the apiPrefix key in the ocsepp-custom-values.yaml file. It should be empty.
- The number of replicas of all SEPP microservices pods must be set to 1 because ATS performs metric validations only on metrics obtained from a single pod.
- Create Kubernetes secret with certificates/keys (public and private) for both PLMN and N32 gateways before deploying SEPP.
- Deploy SEPP 25.1.201 with default helm configurations using helm charts.
- For ATS execution, deploy SEPP microservices with single replica.
- Ensure all microservices of SEPP are up and running.
- Create Kubernetes secret with certificates/keys (public and private) for ATS client and stub server microservices before deploying SEPP ATS.
- Update the ocats_ocsepp_values_25.1.201.yaml file with the latest secrets and certificates.
- Deploy ATS using helm charts.
- The user must not initiate a job in two different pipelines at the same time.
- The user must not abort a running job, as this may lead to data corruption in the database.
- The user needs to edit the cn32f-svc and pn32f-svc deployment yaml. Set CACHE_REFRESH_TIMEOUT to 1000 (a sample command for these environment variable updates is shown after this list).
- The user needs to set n32cHandshakePlmnIdListValidationEnabled to true in the localProfile section of the ocsepp-custom-values.yaml file. Alternatively, the n32cHandshakePlmnIdListValidationEnabled parameter can be changed after SEPP deployment by editing the cn32c-svc and pn32c-svc deployment yaml.
- The user needs to update the plmnIdList provided in the ATS installation guide to run the ATS test cases.
- The user needs to edit cn32f-svc and pn32f-svc deployment yaml. Set "TOPOLOGY_HIDING_CACHE_REFRESH_TIMEOUT" to 1000.
- The user needs to edit cn32f-svc and pn32f-svc deployment yaml. Set "SECURITY_CACHE_REFRESH_TIMEOUT" to 1000.
- All the features are disabled by default. Ensure that the features remain disabled while configuring SEPP ATS.
- The user needs to edit the cn32f-svc, pn32f-svc, plmn-egress-gateway, and n32-egress-gateway deployment yaml. Set "REQUEST_TIMEOUT" to 2000.
- The user needs to edit n32-ingress-gateway deployment yaml. Set "REQUEST_TIMEOUT" to 5000.
- The user needs to edit configmap of n32-ingress-gateway. Set "requestTimeout" to 5000.
- The user needs to edit the n32-ingress-gateway deployment yaml. Set "IDLE_TIMEOUT" to 5000.
- The user needs to edit the pn32f-svc deployment yaml. Set "EVICT_SAN_HEADER_CACHE_DELAY" to 100.
- To run the Cat-3 test cases, the user must have pre-installed coherence-svc during SEPP installation.
- For DNS SRV and SOR, plmn-egress-gateway should be deployed in REST mode.
- To run the DNS SRV test cases, the user must have pre-installed the alternate-route service during SEPP installation.
- To run the Load Sharing among Multiple Remote SEPP Nodes test cases, the user must have pre-installed the alternate-route service during SEPP installation.
- To run the Egress Rate Limiter test cases, the user must have enabled the Egress Rate Limiter feature during SEPP deployment. Also set "EgressRateLimiterFlag" to true in ocats_ocsepp_values_25.1.201.yaml.
- To run the Ingress Rate Limiter test cases, the user must have enabled the Ingress Rate Limiter feature during SEPP deployment. Also set "IngressRateLimiterFlag" to true in ocats_ocsepp_values_25.1.201.yaml.
- In ASM mode, set the expose_tls_service parameter to false in the stubserver section of the ocats_ocsepp_values_25.1.201.yaml file.
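The deployment and configmap edits listed above can be made with kubectl edit or, as a minimal sketch assuming the default <release>-<service> naming convention, with kubectl set env; the release and namespace names below are placeholders.
# Cache refresh and request timeouts on the cn32f and pn32f deployments
kubectl set env deployment/<release>-cn32f-svc CACHE_REFRESH_TIMEOUT=1000 TOPOLOGY_HIDING_CACHE_REFRESH_TIMEOUT=1000 SECURITY_CACHE_REFRESH_TIMEOUT=1000 REQUEST_TIMEOUT=2000 -n <sepp-namespace>
kubectl set env deployment/<release>-pn32f-svc CACHE_REFRESH_TIMEOUT=1000 TOPOLOGY_HIDING_CACHE_REFRESH_TIMEOUT=1000 SECURITY_CACHE_REFRESH_TIMEOUT=1000 REQUEST_TIMEOUT=2000 EVICT_SAN_HEADER_CACHE_DELAY=100 -n <sepp-namespace>
# Request and idle timeouts on the gateway deployments
kubectl set env deployment/<release>-plmn-egress-gateway REQUEST_TIMEOUT=2000 -n <sepp-namespace>
kubectl set env deployment/<release>-n32-egress-gateway REQUEST_TIMEOUT=2000 -n <sepp-namespace>
kubectl set env deployment/<release>-n32-ingress-gateway REQUEST_TIMEOUT=5000 IDLE_TIMEOUT=5000 -n <sepp-namespace>
# The requestTimeout value in the n32-ingress-gateway configmap still has to be edited manually
kubectl edit configmap <release>-n32-ingress-gateway -n <sepp-namespace>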
Custom Folder Implementation
ATS provides custom test case folders (cust_newfeatures, cust_regression and cust_performance) using which you can add new test cases, remove unwanted test cases and modify existing test cases. It does not impact the original product packaged test cases available in the newfeatures, regression and performance folders. For more details, refer to Custom Folder Implementation.
4.6.2 Logging into ATS
Before logging into ATS, you need to ensure that ATS is deployed successfully using Helm charts as shown below:
Note:
To modify the default login password, refer to Modifying Login Password.
Figure 4-61 Pods

There are the following ways to log in to the ATS GUI.
- When an external load balancer (metalLB in case of OCCNE) is available and an external IP Address is provided to the ATS service, log in to the ATS GUI using <External-IP>:8080.
- When an external IP Address is not provided to the ATS service, open the browser and provide the external IP Address of the worker node and the nodeport of the ATS service to log in to the ATS GUI.
<Worker-Node-IP>:<Node-Port-of-ATS>
Note:
In the Verifying ATS Deployment screen, the ATS nodeport is highlighted in red as 30076. For more details on ATS deployment, refer to SEPP ATS Installation Procedure.
- When ATS is installed with TLS enabled:
- If the external load balancer IP is used in the
ssl.conf
file to open the ATS GUI then the URL will be"https://<IP>:<https_nodeport>"
- If the external load balancer IP or worker node IP is not present on the setup, the user can open the ATS GUI with the common name provided in the ssl.conf file. The user needs to do SSH port forwarding and then update the common name (CN) in the /etc/hosts file of the local machine against the localhost IP (127.0.0.1). Sample commands are shown after this list.
- ATS GUI URL format:
https://127.0.0.1:<port-forwarding-port>
https://<common name>:<port-forwarding-port>
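The following commands are a minimal sketch, assuming kubectl and ssh access from the local machine, of the port-forwarding approach described above; the pod, node, user, and port values are placeholders.
# Find the ATS service and its HTTPS nodeport
kubectl get svc -n <sepp-namespace> | grep -i ocats
# Forward a local port to the ATS HTTPS nodeport through a reachable cluster node or bastion host
ssh -L 8443:<worker-node-ip>:<https-nodeport> <user>@<bastion-or-node>
# Map the common name from ssl.conf to the loopback address in the local hosts file
echo "127.0.0.1 <common-name>" | sudo tee -a /etc/hosts
# The ATS GUI is then reachable at https://<common-name>:8443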
Open a browser and provide IP Address and port details as <Worker-Node-IP>:<NodePort-of-ATS> (As per the above example: 10.98.101.171:32013). The ATS login screen appears.
Figure 4-62 ATS Login

- Enter the username as 'seppuser' and password as 'sepppasswd'. Click Sign in. The following screen appears showing pre-configured pipelines for SEPP individually (3 Pipelines).
Figure 4-63 Pre-Configured Pipelines

- SEPP-NewFeatures: This pipeline has all the test cases that are delivered as part of SEPP ATS.
- SEPP-Regression: This pipeline has all the test cases of previous releases.
- SEPP-HealthCheck: This pipeline has the utility to run the helm test functionality.
Note:
Currently, SEPP does not have any performance pipeline.
4.6.3 SEPP NewFeatures Pipeline
- Click SEPP-NewFeatures in the Name column. The following
screen appears:
Figure 4-64 SEPP-NewFeatures Pipeline
In the above screen:
- Click Configure to configure SEPP-New Features.
- Click Documentation to view the documented test cases which are part of this SEPP release.
- Click blue dots inside Build History box to view the success console logs of the "All" and "Sanity" respectively.
- The Stage View shows the pipeline that has already been run, for customer reference.
- Click Open Blue Ocean to view the respective group, re-run logs and the consolidated report summary.
- Click Abort_Build button to gracefully stop the currently running ATS build.
- Click Configure. Users must wait for the page to load
completely. Once the page loads completely, click the Pipeline tab to
reach the Pipeline configuration as shown below:
The Pipeline section of the configuration page appears as follows:
Figure 4-65 Pipeline script

- The user must not change any value other than those on line numbers 12 to 20 and line number 26.
- The user can change only those parameters that are marked as "a" to "h", as required (a sample command for checking the service-name values is shown after these steps).
- A new parameter "p" has been introduced to customize the global wait time for the scale up and scale down of pods test cases. Pods scaling time is different on different setups. Sometimes pods take longer than usual for scaling up or scaling down operations. To handle this situation you can increase the global wait time parameter as required. Recommended value is 60.
- Recommended value for rerun count is "3".
a: Name of the NF to be tested in capital (SEPP).
b: Config manager service name and namespace
c: Plmn ingress service name and namespace
d: N32 ingress service name and namespace
e: Stub server service name and namespace
f: SEPP Instance ID configured
g: Prometheus hostname and namespace
h: Prometheus port
i: Prometheus API endpoint
j: Re run count
k: ELK wait time before logs are fetched
l: ELK hostname and namespace
m: ELK port
n: Stub log collection enable
o: Type of log collection method ( kubernetes/Elasticsearch)
p: GLOBAL WAIT TIME
Note:
Do not change any value if an OCCNE cluster is used and SEPP, ATS, and STUB are deployed in the ocsepp namespace.
- Click Save after making necessary changes. The Pipeline SEPP-NewFeatures screen appears.
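As a minimal sketch, assuming kubectl access, the service-name and namespace values expected by parameters "b" to "e" (in <service-name>.<namespace> format) and the Prometheus details for "g" and "h" can be confirmed before editing the script; the namespace names are placeholders.
# Confirm the config manager, plmn-ingress-gateway, n32-ingress-gateway, and stub server service names (parameters b to e)
kubectl get svc -n <sepp-namespace>
# Confirm the Prometheus service name and port (parameters g and h)
kubectl get svc -n <prometheus-namespace> | grep -i prometheus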
Running SEPP Test Cases
- Click the Schedule a Build with parameters icon present
on the SEPP-NewFeatures screen in the extreme right column corresponding to
SEPP-NewFeatures row. The following screen appears:
Figure 4-66 SEPP-NewFeatures
- In the above screen, there is an option Execute_Suite to select SEPP or Roaming_Hub. If you have deployed the SUT as SEPP, select the SEPP option, and if you have deployed the SUT as Roaming_Hub, select the Roaming_Hub option.
- In the above screen, there is an option Configuration_Type to select product/custom configuration.
- In the above screen, there are three Select_Option(s),
which are:
- All: This is the default option. It runs all the SEPP test cases. Scroll-down and click Build to run all the test cases.
- Sanity: It is recommended to run Sanity before running any test case. It ensures all the deployments are done properly.
- Single/Multiple Feature: This option allows you to select any number of test cases you want to run from total number of test cases available for execution. After selecting the test cases, scroll-down and click Build to run the selected SEPP test cases.
- In the above screen, when the FilterWithTags option is selected as Yes, the GUI offers the following four options. Select the required tags from the different tag lists and click Submit.
- Feature_Include_Tags: The features that contain any of the tags available in the Feature_Include_Tags field are considered for tagging.
- Feature_Exclude_Tags: The features that contain none of the tags available in the Feature_Exclude_Tags field are considered for tagging.
- Scenario_Include_Tags: The scenarios that contain any of the tags available in the Scenario_Include_Tags field are considered.
- Scenario_Exclude_Tags: The scenarios that contain none of the tags available in the Scenario_Exclude_Tags field are considered.
- Click Abort_Build button to gracefully stop the currently running ATS build.
- Click Build to execute the ATS run.
4.6.3.1 Single Click Job Creation for SEPP New Features
Perform the following steps to create the custom pipeline for the new features:
- Log in to ATS using network function specific log-in credentials.
- Click New Item in the left navigation pane of
the ATS application. The following page appears:
Figure 4-67 New Item Window
- In the Enter an item name text box, enter the job name. Example: <NF-Specific-name>-NewFeatures.
- In the Copy from text box, enter the actual job name for which you need single-click execution functionality. Example: <NF-Specific-name>-NewFeatures.
- Click OK. You are automatically redirected to edit the newly created job's configuration.
- Under the General group, deselect the This Project is Parameterised option.
- Under the Pipeline group, make the corresponding changes to remove the 'Active Choice Parameters' dependency.
- Provide the default values for the TestSuite,
SUT, Select_Option,
Configuration_Type, and other parameters, as required, on
the BuildWithParameters page.
Example: Pipeline without Active Choice Parameter Dependency
node ('built-in'){ //a = SELECTED_NF b = SEPPCONFIGSVCNAME c = CSEPPIGWNAME d = PSEPPIGWNAME //e = SEPPSTUBNAME f = NFINSTANCEID g = PROMSVCIP h = PROMSVCPORT i= PROMSVCURI j = RERUN_COUNT //k = ELK_WAIT_TIME l = ELK_HOSTNAME m = ELK_PORT n = STUB_LOGS o = LOG_TYPE p = GLOBAL_WAIT_TIME withEnv([ 'TestSuite=NewFeatures', 'Execute_Suite=SEPP', 'FilterWithTags=true,false', 'Fetch_Log_Upon_Failure=NO', 'Select_Features_Option=All', 'Configuration_Type=Product_Config' ]){ sh ''' sh /var/lib/jenkins/ocsepp_tests/preTestConfig.sh \ -a SEPP \ -b ocsepp-release-config-mgr-svc.csepp \ -c ocsepp-release-plmn-ingress-gateway.csepp \ -d ocsepp-release-n32-ingress-gateway.csepp \ -e sepp-ats-rel-stubserver.csepp \ -f 9faf1bbc-6e4a-4454-a507-aef01a101a06 \ -g prometheus.cne-infra \ -h 9090 \ -i /api/ \ -j 1 \ -k 10 \ -l occne-elastic-elasticsearch-master.occne-infra \ -m 9200 \ -n yes \ -o kubernetes \ -p 5 ''' if(env.Include_Regression && "${Include_Regression}" == "YES"){ sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh''' load "/var/lib/jenkins/ocsepp_tests/jenkinsData/Jenkinsfile-Merged" } else{ load "/var/lib/jenkins/ocsepp_tests/jenkinsData/Jenkinsfile-NewFeatures" } } }
- Click Save. The ATS application is ready to run TestSuite with 'SingleClick' using the newly created job.
- Run the custom pipeline job.
- Click Build Now on the main page for the custom pipeline job.
Note:
A value of j (RERUN_COUNT) = 3 is recommended, but it can be changed as per the operator environment.
4.6.4 NewFeatures - Documentation
To view SEPP functionalities, go to SEPP-NewFeatures pipeline and click the Documentation link in the left navigation pane. The following screen appears:
Figure 4-68 Documentation

Click any functionality to view its test cases and scenarios of each test case. A sample screen is given below:
Figure 4-69 Test cases and Scenarios

Based on the functionalities covered under the documentation, the Build Requires parameter screen displays test cases. To navigate back to the Pipeline SEPP-NewFeatures screen, click the Back to SEPP-NewFeatures link available in the upper left corner of the screen.
4.6.5 SEPP Regression Pipeline
This section describes how to run test cases for SEPP Regression pipeline.
The SEPP_Regression pipeline is a pre-configured pipeline where all the test cases of previous releases are available. For example, for SEPP 25.1.2xx, this pipeline has all the test cases released till SEPP 25.1.1xx.
- Click SEPP_Regression in the Name
column. The following screen appears:
Figure 4-70 SEPP Regression
- Click Build with Parameters in the left navigation pane.
- Click Documentation to view the documented test cases which are part of this SEPP release.
- Click blue dots in the Build History box to view the success console logs of the "All" and "Sanity" respectively.
- The Stage View shows pipeline that is already run for the customer reference.
- Click Configure to configure SEPP Regression. Users must wait for the page to load completely.
- Once the page loads completely, click the Pipeline tab to see the Pipeline configuration.
Copy the required test cases that are available in the SEPP folder and place them appropriately within the custom folder for SEPP_Regression, as shown in the sketch after this step. Reload the page to view the test cases available in the custom Regression folder.
The configuration method and parameters are the same as for the NewFeatures pipeline.
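The following is a minimal sketch of copying a feature file into the custom folder inside the ATS pod, assuming the default test directory layout under /var/lib/jenkins/ocsepp_tests; the pod name, namespace, and file name are placeholders.
# Open a shell in the SEPP ATS pod
kubectl exec -it <ocats-sepp-pod> -n <sepp-namespace> -- bash
# Copy the required feature file from the product regression folder to the custom regression folder
cp /var/lib/jenkins/ocsepp_tests/regression/<feature_file>.feature /var/lib/jenkins/ocsepp_tests/cust_regression/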
4.6.5.1 Single Click Job Creation for SEPP Regression Features
Perform the following steps to create the custom pipeline for the regression features:
- Log in to ATS using network function specific log-in credentials.
- Click New Item in the left navigation pane of
the ATS application. The following page appears:
Figure 4-71 New Item Window
- In the Enter an item name text box, enter the job name. Example: <NF-Specific-name>-NewFeatures.
- In the Copy from text box, enter the actual job name for which you need single-click execution functionality. Example: <NF-Specific-name>-NewFeatures.
- Click OK. You are automatically redirected to edit the newly created job's configuration.
- Under the General group, deselect the This Project is Parameterised option.
- Under the Pipeline group, make the corresponding changes to remove the 'Active Choice Parameters' dependency.
- Provide the default values for the TestSuite,
SUT, Select_Option,
Configuration_Type, and other parameters, as required, on
the BuildWithParameters page.
Example: Pipeline without Active Choice Parameter Dependency
node ('built-in'){ //a = SELECTED_NF b = SEPPCONFIGSVCNAME c = CSEPPIGWNAME d = PSEPPIGWNAME //e = SEPPSTUBNAME f = NFINSTANCEID g = PROMSVCIP h = PROMSVCPORT i= PROMSVCURI j = RERUN_COUNT //k = ELK_WAIT_TIME l = ELK_HOSTNAME m = ELK_PORT n = STUB_LOGS o = LOG_TYPE p = GLOBAL_WAIT_TIME withEnv([ 'TestSuite=Regression', 'Execute_Suite=SEPP', 'FilterWithTags=true,false', 'Fetch_Log_Upon_Failure=NO', 'Select_Features_Option=All', 'Configuration_Type=Custom_Config' ]){ sh ''' sh /var/lib/jenkins/ocsepp_tests/preTestConfig.sh \ -a SEPP \ -b ocsepp-release-config-mgr-svc.csepp \ -c ocsepp-release-plmn-ingress-gateway.csepp \ -d ocsepp-release-n32-ingress-gateway.csepp \ -e sepp-ats-rel-stubserver.csepp \ -f 9faf1bbc-6e4a-4454-a507-aef01a101a06 \ -g prometheus.cne-infra \ -h 9090 \ -i /api/ \ -j 1 \ -k 10 \ -l occne-elastic-elasticsearch-master.occne-infra \ -m 9200 \ -n yes \ -o kubernetes \ -p 5 ''' if(env.Include_NewFeatures && "${Include_NewFeatures}" == "YES"){ sh '''sh /var/lib/jenkins/common_scripts/merge_jenkinsfile.sh''' load "/var/lib/jenkins/ocsepp_tests/jenkinsData/Jenkinsfile-Merged" } else{ load "/var/lib/jenkins/ocsepp_tests/jenkinsData/Jenkinsfile-Regression" } } }
- Click Save. The ATS application is ready to run TestSuite with 'SingleClick' using the newly created job.
- Run the custom pipeline job.
- Click Build Now on the main page for the custom pipeline job.
Note:
A value of j (RERUN_COUNT) = 3 is recommended, but it can be changed as per the operator environment.
4.6.6 SEPP Regression Documentation
This section describes the documentation for SEPP-Regression pipeline.
To view the documentation for any of the SEPP features, on the ATS home page, click SEPP-Regression. Then, click Documentation in the left navigation pane.
This page shows features of only those test cases that are released in previous releases.
The following screen shows all the documentation features:
Figure 4-72 SEPP-Regression documentation

Click any functionality to view its test cases and scenarios of each test case.
The following screen appears if the CN32C_SUCCESS_SECURITY_CAPABILITY_TLS is selected.
Figure 4-73 CN32C_SUCCESS_SECURITY_CAPABILITY_TLS

4.6.7 Running SEPP Health Check Pipeline
- Navigate to SEPP-HealthCheck pipeline and click Configure.
- The General tab appears. The user must wait for the page to load completely.
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section
as follows:
Important:
Ensure that the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.Figure 4-74 SEPP-HealthCheck
Do not change any value other than those on line numbers 17, 18, and 19. You can change the parameters marked as "a", "b", and "c" as per user requirement. The parameter details are provided as comments on line numbers 7, 8, and 9. The parameter descriptions are as follows (a sketch of these values, based on the sample console output below, is shown after that output):
- a - helm releases [provide release names comma separated if more than one]
- b - namespace of SUT
- c - helm version supported
- Click Save. The Pipeline SEPP-HealthCheck page appears.
- Click Build Now. This triggers health check for SUT.
Started by user seppuser
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/.jenkins/workspace/SEPP-HealthCheck
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Helm-Test-Init)
[Pipeline] catchError
[Pipeline] {
[Pipeline] sh
+ set +x
export HelmTestRelease=ocsepp-release-naman
export HelmTestNamespace=sepp2
export HelmCmd=helm
[Pipeline] sh
+ export NF=SEPP
+ NF=SEPP
+ /env/bin/python3 /env/lib/python3.9/site-packages/ocnftest_lib/helmtest_with_secret.py
[Pipeline] }
[Pipeline] // catchError
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Validate-Helm-Test)
[Pipeline] catchError
[Pipeline] {
[Pipeline] echo
2022-08-29 05:14:23,045 INFO LOG.HELM-TEST:108 | helm test ocsepp-release-naman -n sepp2
2022-08-29 05:14:53,006 INFO LOG.HELM-TEST:120 |
NAME: ocsepp-release-naman
LAST DEPLOYED: Wed Aug 24 10:08:10 2022
NAMESPACE: sepp2
STATUS: deployed
REVISION: 1
TEST SUITE: ocsepp-release-naman-test
Last Started: Mon Aug 29 05:14:36 2022
Last Completed: Mon Aug 29 05:15:00 2022
Phase: Succeeded
[Pipeline] }
[Pipeline] // catchError
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
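In the sample output above, the three configurable values appear as the HelmTestRelease, HelmTestNamespace, and HelmCmd exports. The following is a minimal sketch of how the corresponding lines of the pipeline script might look; the release name and namespace are illustrative placeholders only.
# a: helm release names (comma separated if more than one)
export HelmTestRelease=<sepp-release-name>
# b: namespace of the SUT
export HelmTestNamespace=<sepp-namespace>
# c: helm version supported (helm binary to invoke)
export HelmCmd=helm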
4.7 Running UDR Test Cases using ATS
This section describes how to run UDR test cases using ATS.
4.7.1 Prerequisites
4.7.1.1 Prerequisites for SLF Pipelines
- Select any one of the following System Under Test (SUT) deployment
models, depending on your requirements.
- Model 1: To run all the scenarios (SUTSuite: 'All')
in the SLF-Regression and SLF-NewFeatures pipeline, deploy UDR with the
following configurations:
- Four UDRs like UDR1, UDR2, UDR3, and UDR4 where UDR1 and UDR2 are in segment1 and UDR3 and UDR4 in segment2.
- Deploy one Provisioning Gateway along with Provisioning Gateway Auditor service.
- Deploy two NRF-Stubs
- Deploy one ATS in same namespace as UDRs, ProvGW, NRF-Stubs
To deploy four UDRs in the same namespace as ProvGw, ATS, and NRF stubs, create four secrets such that each UDR has a unique configuration database (configdb), but UDRs in the same segment share the subscriber database (a sample command is shown after this list).
Configure the Provisioning Gateway segments so that UDR1 and UDR3 are the preferred SLFs in segment 1 and segment 2 respectively. You must also deploy the bulk import tool, the export tool, and OSO.
- Model 2: To run the SLF-Performance pipeline, deploy UDR with the resources defined in the Resource Requirements section.
The UDR must be deployed with the following replica counts:
Table 4-5 Replica Count
Service name | Setup type | CPU/Pod | Memory/Pod | Number of pods |
---|---|---|---|---|
ingressgateway-sig | UDR | 6 | 4 | 2 |
ingressgateway-prov | UDR | 6 | 4 | 2 |
nudr-drservice | UDR | 5 | 4 | 2 |
nudr-provservice | UDR | 5 | 4 | 2 |
The ATS pod must be installed with the following configuration for 1K TPS:
Table 4-6 ATS Pod Installation for 1K TPS
Service name | Setup type | CPU/Pod | Memory/Pod | Number of pods |
---|---|---|---|---|
ocats-udr | ATS | 4 | 6 | 1 |
The ATS pod must be installed with the following configuration for 2K TPS:
Table 4-7 ATS Pod Installation for 2K TPS
Service name | Setup type | CPU/Pod | Memory/Pod | Number of pods |
---|---|---|---|---|
ocats-udr | ATS | 8 | 16 | 1 |
Note:
OSO must also be deployed on the same namespace.
Note:
The above configuration is tested for both constant 1K TPS and varying TPS between 1K and 2K, for a duration of 15 minutes.
- Model 3: To run the Multi_Site option of the SUTSuite field in the SLF Regression pipeline, deploy two UDRs in two different namespaces, each with its own cnDBTier, with the database site replication feature enabled between the two. The Provisioning Gateway also needs to be deployed on each namespace. The Bulk Import tool and the export tool need to be deployed on the first namespace. You must also deploy OSO on each namespace.
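Before triggering the SLF pipelines, it can help to confirm that the selected deployment model is actually in place. The following is a minimal sketch for Model 1, assuming a hypothetical namespace name; it only lists the workloads and secrets that the pipelines expect to find.
# Hypothetical namespace; substitute the namespace where the SUT, ProvGw, stubs, and ATS are deployed.
NAMESPACE=ocudr

# All four UDRs, the Provisioning Gateway (with its Auditor service), both NRF stubs,
# the bulk import and export tools, OSO, and the ATS pod must be Running here.
kubectl get pods -n "${NAMESPACE}"

# The four database secrets created for the UDRs must also exist in the same namespace.
kubectl get secrets -n "${NAMESPACE}"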
4.7.1.1.1 UDR Custom Values File Configuration for SLF Pipelines
Note:
- Deploy UDR in the SLF mode. For more information about UDR installation, see Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
- To configure ASM with UDR (including setting istio-proxy, CPU, and memory), see the ASM Specific Configuration and UDR Configuration Parameters sections in the Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide. The UDR custom-values.yaml file is available with documentation package on MOS.
- Configure the global section of the OCUDR custom values file as
follows:
global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: <db service to which UDR should be connected> port: &dbPortNumber "3306" configdbname: &configdbname <config db name> dbname: &dbname <subscriber db name> ... createNetworkPolicy: true #only if network policy is supported in the environment. SLF pipeline to validate network policy supports only default policy dbCredSecretName: <udr secret name> ... alternateRouteServiceEnable: true alternateRouteServiceHost: <udr-helm-release-name>-alternate-route ... performanceServiceEnable: true ... udrServices: "nudr-group-id-map" ... diamGatewayEnable: false ... deploymentMode: &deploymentMode IPv6_IPv4 #If it is intended to run Dual Stack scenarios, else do not modify the default value ... appInfoBaseUrl: "<udr-helm-release-name>-nudr-app-info:5906" ... nrfClientNfManagementEnable: true # value is true for UDR1 deployment; UDR2, UDR3 and UDR4 should be deployed with false deploymentNrfClientService: envNfNamespace: <namespace where UDR is to be deployed> ... enableControlledShutdown: &enableControlledShutDown true ... serviceMeshCheck: &serviceMeshFlag true #if UDR is intended to be deployed with ASM.
Note:
- If UDR is installed with ASM, then use the annotations defined in the Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
- Database names provided in the SLF custom values files must match the names created in the corresponding database secrets (a verification sketch follows this note).
- The nfInstanceId of each SLF must be unique across all models.
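Because mismatched database names are a common cause of failed runs, it can be worth decoding the database secret and comparing its values against the custom values file. The following is a minimal sketch, assuming hypothetical secret and namespace names; the key names inside the secret depend on how the secret was created.
# Hypothetical names; substitute your UDR database secret and namespace.
NAMESPACE=ocudr
UDR_SECRET="<udr secret name>"

# Print every key of the secret with its decoded value so that the database names can be
# compared with the configdbname and dbname values in the custom values file.
kubectl get secret "${UDR_SECRET}" -n "${NAMESPACE}" \
  -o go-template='{{range $k, $v := .data}}{{$k}}: {{$v | base64decode}}{{"\n"}}{{end}}'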
- Update the config-server section as
follows:
config-server:
  fullnameOverride: <udr-helm-release-name>-config-server
- Update the ingressgateway-sig section as
follows:
ingressgateway-sig: global: istioIngressTlsSupport: ingressGateway: true #only if UDR is being deployed with ASM. ... lciHeaderConfig: enabled: true loadThreshold: 30 localLciHeaderValidity: 10000 ... svcToSvcInstanceIdMapping: - svcName: "<udr-helm-release-name>-nudr-drservice" serviceInstanceId: <serviceInstanceId-defined-in-appProfiles-for-nudr-group-id-map> ... xfccHeaderValidation: validation: enabled: &xfccValidationEnabled true #should be disabled for Model 2 and Model 3. ... serverHeaderDetails: enabled: true ... oauthValidatorEnabled: true #should be disabled for model 2 and model 3. nrfPublicKeyKubeSecret: <oauth secret name for sig> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... enableControlledShutdown: true controlledShutdownConfigMode: REST ... userAgentHeaderValidationConfigMode: REST
- Update the ingressgateway-prov section as
follows:
ingressgateway-prov: global: istioIngressTlsSupport: ingressGateway: true #only if UDR is being deployed with ASM. xfccHeaderValidation: validation: enabled: &xfccValidationEnabled false ... serverHeaderDetails: enabled: false ... oauthValidatorEnabled: true #should be disabled for model 2 and model 3. nrfPublicKeyKubeSecret: <oauth secret name for prov> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... enableControlledShutdown: true controlledShutdownConfigMode: REST ... checkContentLength: true copyContentLengthFromBody: true
- Configure targets and ports in the alternate-route.staticVirtualFqdns
section with the NRF stubs FQDNs and ports.
Note:
This is applicable only for Model 1.
alternate-route:
  staticVirtualFqdns:
    - name: http://abc.test.com
      alternateFqdns:
        - target: <stub1 fqdn>
          port: <stub1 port>
          priority: 10
        - target: <stub2 fqdn>
          port: <stub2 port>
          priority: 20
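Before starting a run, it can be useful to confirm that both NRF stub targets configured under staticVirtualFqdns are reachable from the ATS pod. The following is a minimal sketch, assuming hypothetical pod and stub values.
# Hypothetical names; substitute your namespace, ATS pod, and NRF stub targets.
NAMESPACE=ocudr
ATS_POD="<ats pod name>"

kubectl exec -n "${NAMESPACE}" "${ATS_POD}" -- curl -sv "http://<stub1 fqdn>:<stub1 port>/"
kubectl exec -n "${NAMESPACE}" "${ATS_POD}" -- curl -sv "http://<stub2 fqdn>:<stub2 port>/"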
- Update the appinfo section as
follows:
appinfo: ... dbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/local realtimeDbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/cluster/local/realtime replicationUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/replication ... prometheusUrl: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus" alertmanagerUrl: "http://<alertmanager fqdn or IP>:<prometheus port>/<domain-name>/alertmanager" tagNamespace: kubernetes_namespace # change this field only if it is being deployed with OSO. Please refer to UDR User Guide for more information. ... core_services: udr: *slf ... fullnameOverride: <udr-helm-release-name>-app-info
Run the following curl command from the ATS pod to find the domain name of OSO:
curl -i http://<OSO FQDN or IP>:<OSO PORT>/
Example:
(env) [jenkins@ocats-udr-5c55f59d66-jkzml ~]$ curl -i http://oso-svr:80/
HTTP/1.1 302 Found
Content-Type: text/html; charset=utf-8
Location: /cne-23-1/prometheus
Date: Thu, 20 Apr 2023 06:28:48 GMT
Content-Length: 48

<a href="/cne-23-1/prometheus">Found</a>.
The Prometheus URL is:
http://oso-svr:80/cne-23-1/prometheus
- Update the nrf-client section as follows:
Note:
This is applicable only for Model 1.
nrf-client:
  profile: |
    #uncomment the appProfiles entry dedicated to SLF mode (nudr-group-id-map) and comment the rest of the appProfiles.
    enableVirtualNrfResolution=true
    #virtualNrfFqdn value must be the same as alternate-route.staticVirtualFqdns.name[0] without the url scheme, as shown below
    virtualNrfFqdn=abc.test.com
  nrf-client-nfmanagement:
    ...
    replicas: 1
    ...
    enablePDBSupport: false
- Update the perf-info section as
follows:
perf-info:
  envMysqlDatabase: <configdb name used for UDR>
  ...
  configmapPerformance:
    prometheus: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus"
  log:
    level:
      perfinfo: "WARN"
Note:
The values for namespace, container name, and service name depend on the CNE or OSO version. For more information, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Update the egressgateway section as
follows:
egressgateway:
  userAgentHeaderConfigMode: REST
- Configure the following parameters in the bulk-import tool yaml file:
Note:
To test the bulk import tool, it is important to install the bulk import tool as a separate helm deployment in the same namespace as that of UDR (a minimal install sketch follows this subsection). For information about the bulk import tool installation and a sample bulk import yaml file, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide. This is applicable for Model 1 only.
- Ensure that the secret used to install the subscriber bulk import tool is the same as the one used for UDR1. The Bulk Import tool should communicate with UDR1.
bulk import deployment
global:
  mysql:
    dbServiceName: "<DB Endpoint provided for UDR deployment>"
    port: "<PORT Used provided UDR deployment>"
  ...
  dbCredSecretName: '<UDR1 Secret>'
  ...
  xmlToCsv:
    enabled: false
  ...
  customExtension:
    allResources:
      labels: {}
      annotations:
        oracle.com/cnc: "true"
  ...
  serviceMeshCheck: true #if the tool is intended to be deployed with ASM
  ...
  ocudrReleaseName: '<UDR1 Helm name>'
Note:
- To run the bulk import scenarios, disable xmlToCsv.
- Subscriber bulk import tool must always be clean installed.
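The notes above ask for the bulk import tool to be installed as a separate, clean Helm deployment in the UDR namespace. The following is a minimal sketch, assuming hypothetical release, chart, and values file names; use the chart and values file delivered with the UDR package.
# Hypothetical names; substitute your own release name, chart, and values file.
NAMESPACE=ocudr

# The tool must always be clean installed, so remove any previous release first.
helm uninstall slf-bulk-import -n "${NAMESPACE}" || true

# Install the bulk import tool as a separate Helm deployment in the UDR namespace.
helm install slf-bulk-import "<nudr-bulk-import-chart>.tgz" -n "${NAMESPACE}" -f "<bulk_import_custom_values>.yaml"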
Subscriber Export tool Configuration
Note:
To test the subscriber export tool, it is important to install the subscriber export tool as a separate helm deployment in the same namespace as that of UDR. For information about the subscriber export tool, see the Subscriber Export Tool section in Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the subscriber export tool is the same as the one used for UDR1. The subscriber export tool should communicate with UDR1.
Update the nudr-export-tool deployment section as follows
global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: "<DB Endpoint provided for UDR deployment>" port: "<PORT Used provided UDR deployment>" ... dbCredSecretName: '<UDR1 Secret>' ... sftpDetails: ... sftpExportEnabled: false ... ... serviceMeshCheck: true #if export tool is intended to be deployed with ASM ... ocudrReleaseName: '<UDR1 Helm name>' ... batch: corePoolSize: 1 maxPoolSize: 1 exportMode: SLF_EXPORT
Note:
Subscriber export tool must always be clean installed.
Other configuration details
- Configure Prometheus with UDR and Provisioning Gateway default alerts. For more information about UDR alerts, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide and for Provisioning Gateway alerts, see Oracle Communications Cloud Native Core, Provisioning Gateway Installation Guide.
- Certificates used for Provisioning Gateway TLS should be provided in the /var/lib/jenkins/ocslf_tests/certs directory in the ATS pod (a copy sketch follows this list).
- Create an Envoy Filter for both the UDR1 setup and ATS, as described in Enabling Service Mesh, for the XFCC header validation and server header scenarios if installed with ASM.
- Create Peer Authentication as described in Enabling Service Mesh and Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide, if installed with ASM.
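The certificates mentioned above can be copied into the ATS pod with kubectl cp. The following is a minimal sketch, assuming hypothetical file and pod names; only the target directory is taken from this guide.
# Hypothetical names; substitute your namespace, ATS pod, and certificate files.
NAMESPACE=ocudr
ATS_POD="<ats pod name>"

kubectl cp "<ca certificate file>" "${NAMESPACE}/${ATS_POD}:/var/lib/jenkins/ocslf_tests/certs/"
kubectl cp "<server certificate file>" "${NAMESPACE}/${ATS_POD}:/var/lib/jenkins/ocslf_tests/certs/"
kubectl cp "<private key pem file>" "${NAMESPACE}/${ATS_POD}:/var/lib/jenkins/ocslf_tests/certs/"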
4.7.1.1.2 Provisioning Gateway Custom Values File Configuration for SLF Pipelines
Note:
For more information about Provisioning Gateway installation, see Oracle Communications Cloud Native Core Provisioning Gateway Installation Guide. The ProvGW custom-values.yaml file is available with the documentation package on MOS.
- Update the global section of the ProvGW custom-values.yaml file as
follows:
global: dockerRegistry: <docker-registry>:<docker port>/provgw mysql: dbServiceName: <db service to which UDR should be connected> port: &dbPortNumber "3306" configdbname: &configdbname <ProvGw DB name> dbname: &dbname <ProvGw DB name> ... deploymentMode: &deploymentMode IPv6_IPv4 #If it is intended to run Dual Stack scenarios, else do not modify the default value ... createNetworkPolicy: true #only if network policy is supported in the environment. SLF pipeline to validate network policy supports only default policy ... serviceMeshCheck: &serviceMeshFlag true #if UDR is intended to be deployed with ASM. ... usageMode: "SLF" udr: httpsEnabled: false segDetails: - name: SEG-1 fqdnValues: <UDR1 fqdn>,<UDR2 fqdn> preferred: <UDR1 fqdn> - name: SEG-2 fqdnValues: <UDR3 fqdn>,<UDR4 fqdn> preferred: <UDR3 fqdn>
Note:
Database names provided in the Provisioning Gateway custom values files must match with the names created in the corresponding database secret. - Database monitoring fqdn and port must provided under provgw-service:
provgw-service
provgw-service: ... probes: dbStatusCheckUrl: "http://<DB Monitoring svc>:<DB monitoring port>/db-tier/status/cluster/local/realtime"
- Enable the auditor-service parameter as
follows:
auditor-service:
  enabled: true
  ...
  sftpDetails:
    secrets:
      privatekey:
        name: <private key secret created for SFTP from export tool>
      publickey:
        name: <public key secret created for SFTP from export tool>
  ...
  usePersistentVolume: true
  ...
  probes:
    dbStatusCheckUrl: "http://<DB Monitoring svc>:<DB monitoring port>/db-tier/status/cluster/local/realtime"
- Configure provgw-config as
below:
provgw-config:
  ...
  probes:
    dbStatusCheckUrl: "http://<DB Monitoring svc>:<DB monitoring port>/db-tier/status/cluster/local/realtime"
- Under the prov-ingressgateway annotations section, set the
traffic.sidecar.istio.io/excludeInboundPorts as '8443'.
Ensure the
ssl
section has secret names and namespace that are created as part of Transport Layer Security (TLS) support. Set the initssl and enableIncomingHttps parameters as 'true'.prov-ingressgateway: deployment: customExtension: labels: {} annotations: traffic.sidecar.istio.io/excludeInboundPorts: "8443" ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <ProvGw Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <ProvGw Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <ProvGw namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <ProvGw namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <ProvGw namespace> fileName: <trust txt file> ... initssl: true ... enableIncomingHttps: true
Note:
For information about Provisioning Gateway installation and ASM configuration for Provisioning Gateway, see Oracle Communications Cloud Native Core Provisioning Gateway Installation Guide. This section highlights the necessary changes that you are required to make in the Provisioning Gateway custom-values.yaml file to run ATS.
4.7.1.2 Prerequisites for UDR Pipelines
- Select any one of the following SUT deployment models depending on
your requirements.
- Model 1: Single Site Deployment
To run UDR-NewFeatures, select SUTSuite: 'All'; for UDR-Regression, select SUTSuite: 'ALL' and Features: All (Except UDR_Multisite). For this, you must deploy one UDR, one Provisioning Gateway in UDR mode, two diameter stubs, one Fourg stub, one NOTIFY stub, two NRF-stubs, one SCP stub, and one ATS in the same namespace.
- Model 2: Multiple Site Deployment: To run
UDR-Regression pipeline where SUTSuite: is 'MULTISITE', you must
deploy:
- Two UDRs in two different namespaces that have two different cnDBTier instances with database site replication enabled between the two (a replication-status check sketch follows this list).
- ATS in any one namespace.
- Namespace-1 with one UDR, two diameter stubs, one Fourg stub, one NOTIFY stub, two NRF-Stubs, one SCP stub, and one ATS.
- Namespace-2 with one UDR.
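For Model 2, replication between the two cnDBTier sites should be verified before the MULTISITE scenarios are run. The following is a minimal sketch using the db-tier replication status endpoint referenced elsewhere in this section; the service name and port are placeholders for your cnDBTier database monitoring service.
# Hypothetical values; substitute your DB monitoring service and port in each namespace.
DB_MON_SVC="<DB monitoring SVC>"
DB_MON_PORT="<DB monitoring svc port>"

# Query the replication status exposed by cnDBTier.
curl -s "http://${DB_MON_SVC}:${DB_MON_PORT}/db-tier/status/replication"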
4.7.1.2.1 UDR Custom Values File Configuration for UDR Pipelines
This section highlights only the necessary changes that you are required to do in the UDR custom-values.yaml file to run ATS.
Note:
For information about UDR installation and ASM configuration for UDR, see the Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide available on My Oracle Support.
- Update the
global section
as follows:global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: <db service to which UDR should be connected> port: &dbPortNumber "3306" configdbname: &configdbname <config db name> dbname: &dbname <subscriber db name> ... createNetworkPolicy: true #only if network policy is supported in the environment. SLF pipeline to validate network policy supports only default policy ... dbCredSecretName: <udr secret name> ... alternateRouteServiceEnable: true alternateRouteServiceHost: <udr-helm-release-name>-alternate-route performanceServiceEnable: true ... autoCreateSubscriber: false FourGPolicyConfiguration: ondemandMigration: enabled: &onDemandMigrationEnabled true ... s13InterfaceEnable: false ... deploymentMode: &deploymentMode IPv6_IPv4 #If it is intended to run Dual Stack scenarios, else do not modify the default value ... appInfoBaseUrl: "<udr-helm-release-name>-nudr-app-info:5906" ... deploymentNrfClientService: envNfNamespace: <UDR namespace> ... egressHttpsEnabled: &egressHttpsFlag true
Note:
Database names provided in the UDR custom values files must match with the names created in the corresponding database secret. - Update the
ingressgateway-sig
section as follows:ingressgateway-sig: global: ... lciHeaderConfig: enabled: true loadThreshold: 30 localLciHeaderValidity: 10000 ... svcToSvcInstanceIdMapping: - svcName: "<udr-helm-release-name>-nudr-drservice" ... xfccHeaderValidation: validation: enabled: &xfccValidationEnabled false ... service: ... ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <trust txt file> ... serverHeaderDetails: enabled: true ... initssl: true ... oauthValidatorEnabled: &oauthEnabled true ... nrfPublicKeyKubeSecret: <oauth secret name> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... ... enableIncomingHttps: true ... userAgentHeaderValidationConfigMode: REST
- Update the
ingressgateway-prov
section as follows:ingressgateway-prov: global: xfccHeaderValidation: validation: enabled: &xfccValidationEnabled false ... service: ... ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <trust txt file> ... serverHeaderDetails: enabled: false ... initssl: true ... oauthValidatorEnabled: &oauthEnabled true ... nrfPublicKeyKubeSecret: <oauth secret name> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... ... enableIncomingHttps: true ... checkContentLength: true copyContentLengthFromBody: true
- Update the
alternate-route
section as follows:
alternate-route:
  staticVirtualFqdns:
    - name: http://abc.test.com
      alternateFqdns:
        - target: <nrf stub1 fqdn>
          port: <nrf stub1 port>
          priority: 10
        - target: <nrf stub2 fqdn>
          port: <nrf stub2 port>
          priority: 20
- Update the
nrf-client
section as follows:nrf-client: configmapApplicationConfig: ... profile: ... enableVirtualNrfResolution=true #virtualNrfFqdn value must be same as that of alternate-route.staticVirtualFqdns.name[0] without the url scheme as shown below virtualNrfFqdn=abc.test.com ... nrfRetryConfig=[{"serviceRequestType":"ALL_REQUESTS","primaryNRFRetryCount":1,"nonPrimaryNRFRetryCount":1,"alternateNRFRetryCount":1,"errorReasonsForFailure":["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"],"gatewayErrorCodes":["503"],"requestTimeout":10},{"serviceRequestType":"AUTONOMOUS_NFREGISTER","primaryNRFRetryCount":1,"nonPrimaryNRFRetryCount":1,"alternateNRFRetryCount":-1,"errorReasonsForFailure":["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"],"gatewayErrorCodes":["503"],"requestTimeout":10}] ... nrf-client-nfmanagement: enablePDBSupport: false
- Update the
egressgateway
section to run SCP routing and TLS notification scenarios as follows:egressgateway: ... global: lciHeaderConfig: enabled: true localLciHeaderValidity: 1000 ... svcToSvcInstanceIdMapping: - svcName: "<udr-helm-release-name>-nudr-drservice" ... sbiRouting: peerConfiguration: - id: peer1 host: '<scp stub svc and this should match with scp stub expectedAuthorityHeader in scp stub custom values>' port: '<scp stub port>' apiPrefix: "/" peerSetConfiguration: - id: set0 httpConfiguration: - priority: 1 peerIdentifier: peer1 ... routesConfig: ... - id: scp_route ... path: /scp/** order: 2 metadata: ... sbiRoutingEnabled: true ... initssl: true enableOutgoingHttps: *egressHttpsFlag ... service: ... ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <trust txt file> ... userAgentHeaderConfigMode: REST
- Update the
perf-info
section as follows:
perf-info:
  envMysqlDatabase: <configdb name used for UDR>
  ...
  configmapPerformance:
    prometheus: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus"
  log:
    level:
      perfinfo: "WARN"
The Prometheus URL is formed as shown below:
(env) [jenkins@ocats-udr-5c55f59d66-jkzml ~]$ curl -i http://oso-svr:80/
HTTP/1.1 302 Found
Content-Type: text/html; charset=utf-8
Location: /cne-23-1/prometheus
Date: Thu, 20 Apr 2023 06:28:48 GMT
Content-Length: 48

<a href="/cne-23-1/prometheus">Found</a>.
The Prometheus URL would be:
http://oso-svr:80/cne-23-1/prometheus
Note:
The values for namespace, container name and service name depend on CNE versions. For more information, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide. - Update the
app-info
section as follows:appinfo: ... dbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/local realtimeDbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/cluster/local/realtime replicationUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/replication ... prometheusUrl: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus" alertmanagerUrl: "http://<alertmanager fqdn or IP>:<prometheus port>/<domain-name>/alertmanager" ... fullnameOverride: <udr-helm-release-name>-app-info
- Configure the following parameters in the bulk-import tool yaml file:
Note:
To test the bulk import tool, it is important to install the bulk import tool as part of UDR. For information about the Bulk Import tool installation and a sample bulk import yaml file, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the bulk import tool is the same as the one used for UDR.
Configure the bulk import deployment values.yaml file as
follows:
global:
  mysql:
    dbServiceName: "<DB Endpoint provided for UDR deployment>"
    port: "<PORT Used provided UDR deployment>"
  ...
  dbCredSecretName: '<UDR Secret>'
  ...
  batchConfig:
    asynchronous: false
  ...
  xmlToCsv:
    enabled: true
  ...
  ocudrReleaseName: '<UDR Helm name>'
- Do not change the containers name. It should be 'nudr-bulk-import' and 'nudr-xmltocsv'.
Note:
Subscriber bulk import tool must always be clean installed.
- Configure the following parameters in the ocudr_nudr_migration_tool_custom_values.yaml file:
Note:
The migration tool must be installed as a separate helm deployment. For information about the migration tool installation, see the "Migration Tool" section in Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the migration tool is the same as the one used for UDR.
Configure the
ocudr_nudr_migration_tool_custom_values.yaml
file as follows:global: mysql: dbServiceName: "<DB Endpoint provided for UDR deployment>" port: "<PORT Used provided UDR deployment>" ... dbCredSecretName: '<UDR Secret>' ... preInstall: ... config: additionalErrorLogging: ENABLED logSubscriberInfo: ENABLED ... fourG: realm: "oracle.com" identity: "tekelec.oracle.com" nodePort: 3000 nodeHost: "fourgstub-stub-service" maxConnectionRetry: 100 ... hikari: queryTimeout: "-1" ... ocudrReleaseName: '<UDR Helm name>' ... sourceEndPointURL: 'http://10.75.229.43:3868' udrServiceBaseURL: '<UDR Helm name>-ingressgateway-prov:80' ... gracefulShutdown: gracePeriod: 30s
- Configure the following parameters in the ocudr_nudr_export_tool_custom_values.yaml file:
Note:
The subscriber export tool must be installed as a separate helm deployment. For information about the subscriber export tool installation, see the "Subscriber Export Tool" section in Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the subscriber export tool is the same as the one used for UDR. Configure the
ocudr_nudr_export_tool_custom_values.yaml
file as follows:global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: "<DB Endpoint provided for UDR deployment>" port: "<PORT Used provided UDR deployment>" ... dbCredSecretName: '<UDR Secret>' ... createExportToolPVC: ... fileExtension: ".exml" ... ... sftpDetails: ... sftpExportEnabled: false ... ... ocudrReleaseName: '<UDR Helm name>' ... batch: corePoolSize: 1 maxPoolSize: 1 exportMode: EXML_EXPORT
Note:
Subscriber export tool must always be clean installed.
- Other configuration details:
- Configure Prometheus with UDR default alerts. For more information about UDR alerts, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Certificates used for UDR-TLS should be provided in the /var/lib/jenkins/ocudr_tests/certs directory in the ATS pod, as defined for the serverheaderFlag in the values.yaml file (must be true during UDR installation). For more information, see Installing ATS for UDR. A verification sketch follows this list.
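After copying the UDR-TLS certificates, their presence in the ATS pod can be confirmed as shown below. This is a minimal sketch, assuming a hypothetical pod name; only the directory path is taken from this guide.
# Hypothetical names; substitute your namespace and ATS pod.
NAMESPACE=ocudr
ATS_POD="<ats pod name>"

kubectl exec -n "${NAMESPACE}" "${ATS_POD}" -- ls -l /var/lib/jenkins/ocudr_tests/certs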
4.7.1.2.2 Provisioning Gateway Custom Values File Configuration for UDR Pipelines
Note:
For more information about Provisioning Gateway installation and ASM installation for Provisioning Gateway, see Oracle Communications Cloud Native Core Provisioning Gateway Installation Guide. The ProvGW custom-values.yaml file is available with the documentation package on MOS. This section highlights only the necessary changes that are required in the ProvGW custom-values.yaml file to run ATS.
- Update the global section of the ProvGW custom-values.yaml file as
follows:
global: dockerRegistry: <docker-registry>:<docker port>/provgw mysql: dbServiceName: <db service to which UDR should be connected> port: &dbPortNumber "3306" configdbname: &configdbname <ProvGw DB name> dbname: &dbname <ProvGw DB name> ... deploymentMode: &deploymentMode IPv6_IPv4 #If it is intended to run Dual Stack scenarios, else do not modify the default value ... createNetworkPolicy: true #only if network policy is supported in the environment. SLF pipeline to validate network policy supports only default policy ... usageMode: "UDR"
Note:
Database names provided in the Provisioning Gateway custom values files must match with the names created in the corresponding database secret. - Enable the provgw-service parameter as
follows:
provgw-service: ... soapService: udrIp: <UDR-Prov fqdn> udrSignallingIp: <UDR-Sig Fqdn> convertToSec: false secEntity: | - name: QuotaEntity elementString: usage innerElemenString: quota outerFields: - version - quota - SequenceNumber innerFields: - name - cid - time - totalVolume - inputVolume - outputVolume - serviceSpecific - nextResetTime - Type - grantedTotalVolume - grantedInputVolume - grantedOutputVolume - grantedTime - grantedServiceSpecific - QuotaState - RefInstanceId - custom1 - name: DynamicQuotaEntity elementString: definition innerElemenString: DynamicQuota outerFields: - version - DynamicQuota - SequenceNumber innerFields: - Type - name - InstanceId - Priority - InitialTime - InitialTotalVolume - InitialInputVolume - InitialOutputVolume - InitialServiceSpecific - activationdatetime - expirationdatetime - purchaseddatetime - Duration - InterimReportingInterval - custom1 - name: Subscriber elementString: subscriber outerFields: - IMSI - MSISDN - NAI - EXTID - ACCOUNTID - BillingDay - Entitlement - Tier - SequenceNumber - custom1 - custom2 - custom3 - custom4 - custom5 - custom6 - custom7 - custom8 - custom9 - custom10 - custom11 - custom12 - custom13 - custom14 - custom15 - name: StateEntity elementString: state innerElemenString: property outerFields: - version - property - SequenceNumber innerFields: - name - value - name: CustomEntity elementString: customusage innerElemenString: customent outerFields: - version - customent - SequenceNumber innerFields: - name - custcid - custtotal ... probes: dbStatusCheckUrl: "http://<DB Monitoring svc>:<DB monitoring port>/db-tier/status/cluster/local/realtime"
- Disable auditor-service and configure auditor-service section as
follows:
auditor-service:
  enabled: false
- Enable the provgw-config section as
below:
provgw-config:
  ...
  probes:
    dbStatusCheckUrl: "http://<DB Monitoring svc>:<DB monitoring port>/db-tier/status/cluster/local/realtime"
- Under the prov-ingressgateway annotations section, set the
routesConfig for provgw-service port as '62001'. Ensure the
ssl
section has secret names and namespace that are created as part of Transport Layer Security (TLS) support. Set the initssl and enableIncomingHttps parameters as 'true'.prov-ingressgateway: deployment: ... ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <trust txt file> ... initssl: true ... enableIncomingHttps: true ... routesConfig: - id: traffic_mapping_rest_group_prov uri: http://{{ .Release.Name }}-provgw-service:62001
Note:
For information about Provisioning Gateway installation and ASM configuration for Provisioning Gateway, see Oracle Communications Cloud Native Core Provisioning Gateway Installation Guide. This section highlights the necessary changes that you are required to make in the Provisioning Gateway custom-values.yaml file to run ATS.
4.7.1.3 Prerequisites for EIR Pipelines
To run EIR Pipelines, you must deploy one EIR, two NRF-stubs, one ATS, one diam-stub deployed in n5g-eir-eic mode, subscriber bulk import tool, and subscriber export tool in same namespace.
4.7.1.3.1 UDR Custom Values File Configuration for EIR Pipelines
Note:
Deploy UDR in the EIR mode. For more information about UDR installation, see Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide. This section highlights only the necessary changes that you are required to make in the UDR custom-values.yaml file to run ATS.
- Update the global section as
follows:
global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: <db service to which UDR should be connected> port: &dbPortNumber "3306" configdbname: &configdbname <config db name> dbname: &dbname <subscriber db name> ... dbCredSecretName: <udr secret name> ... alternateRouteServiceEnable: true alternateRouteServiceHost: <udr-helm-release-name>-alternate-route ... performanceServiceEnable: true ... udrServices: "n5g-eir-eic" ... diamGatewayEnable: true ... deploymentMode: &deploymentMode IPv6_IPv4 #If it is intended to run Dual Stack scenarios, else do not modify the default value ... appInfoBaseUrl: "<udr-helm-release-name>-nudr-app-info:5906" ... nrfClientNfManagementEnable: true # value is true for UDR1 deployment; UDR2, UDR3 and UDR4 should be deployed with false deploymentNrfClientService: envNfNamespace: <namespace where UDR is to be deployed>
Note:
Database names provided in the UDR custom values files must match the names created in the corresponding database secret.
follows:
config-server:
  fullnameOverride: <udr-helm-release-name>-config-server
- Update the ingressgateway-sig as
follows:
ingressgateway-sig: global: ... lciHeaderConfig: enabled: true loadThreshold: 30 localLciHeaderValidity: 10000 ... svcToSvcInstanceIdMapping: - svcName: "<udr-helm-release-name>-nudr-drservice" ... xfccHeaderValidation: validation: enabled: &xfccValidationEnabled true ... service: ... ssl: tlsVersion: TLSv1.2,TLSv1.3 privateKey: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA private key pem file> ecdsa: fileName: <ECDSA private key pem file> certificate: k8SecretName: <TLS support secret name> k8NameSpace: <UDR Namespace> rsa: fileName: <RSA certificate> ecdsa: fileName: <ECDSA certificate> caBundle: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <caBundle file name> keyStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <key txt file> trustStorePassword: k8SecretName: <TLS support secret name> k8NameSpace: <UDR namespace> fileName: <trust txt file> ... serverHeaderDetails: enabled: true ... initssl: true ... oauthValidatorEnabled: &oauthEnabled true nfType: 5G_EIR producerScope: n5g-eir-eic ... nrfPublicKeyKubeSecret: <oauth secret name> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... ... enableIncomingHttps: true ... userAgentHeaderValidationConfigMode: REST
- Update the ingressgateway-prov as
follows:
ingressgateway-prov: global: xfccHeaderValidation: validation: enabled: &xfccValidationEnabled false ... serverHeaderDetails: enabled: false ... initssl: true ... oauthValidatorEnabled: &oauthEnabled true nfType: 5G_EIR producerScope: n5g-eir-prov ... nrfPublicKeyKubeSecret: <oauth secret name> nrfPublicKeyKubeNamespace: <UDR namespace> validationType: relaxed ... ... enableIncomingHttps: true ... checkContentLength: true copyContentLengthFromBody: true
- Configure targets and ports in the alternate-route.staticVirtualFqdns
section with the NRF stubs FQDNs and
ports.
alternate-route:
  staticVirtualFqdns:
    - name: http://abc.test.com
      alternateFqdns:
        - target: <nrf stub1 fqdn>
          port: <nrf stub1 port>
          priority: 10
        - target: <nrf stub2 fqdn>
          port: <nrf stub2 port>
          priority: 20
- Update the appinfo section as
follows:
appinfo: ... dbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/local realtimeDbStatusUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/cluster/local/realtime replicationUri: http://<DB monitoring SVC>:<DB monitoring svc port>/db-tier/status/replication ... prometheusUrl: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus" alertmanagerUrl: "http://<alertmanager fqdn or IP>:<prometheus port>/<domain-name>/alertmanager" ... core_services: 5g_eir: *eir fullnameOverride: <udr-helm-release-name>-app-info
- Update the nrf-client section as
follows:
nrf-client:
  profile: |
    #uncomment the appProfiles entry dedicated to EIR mode (n5g-eir-eic) and comment the rest of the appProfiles.
    enableVirtualNrfResolution=true
    #virtualNrfFqdn value must be the same as alternate-route.staticVirtualFqdns.name[0] without the url scheme, as shown below
    virtualNrfFqdn=abc.test.com
  nrf-client-nfmanagement:
    ...
    replicas: 1
    ...
    enablePDBSupport: false
- Update the perf-info section as
follows:
perf-info:
  envMysqlDatabase: <configdb name used for UDR>
  ...
  configmapPerformance:
    prometheus: "http://<Prometheus fqdn or IP>:<prometheus port>/<domain-name>/prometheus"
  log:
    level:
      perfinfo: "WARN"
Note:
The values for namespace, container name, and service name depend on the CNE or OSO version. For more information, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Update the egressgateway section as
follows:
egressgateway:
  userAgentHeaderConfigMode: REST
- Update the nudr-diameterproxy section to verify the diameter S13 scenarios as follows:
nudr-diameterproxy:
  enabled: true
- Configure the following parameters in the bulk-import tool yaml
file:
Note:
To test the subscriber bulk import tool, it is important to install the bulk import tool as a separate helm deployment in the same namespace as that of EIR. For information about the bulk import tool installation and a sample bulk import yaml file, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the subscriber bulk import tool is the same as the one used for EIR. Configure the Bulk Import deployment values.yaml file as follows:
global:
  mysql:
    dbServiceName: "<DB Endpoint provided for UDR deployment>"
    port: "<PORT Used provided UDR deployment>"
  ...
  dbCredSecretName: '<EIR Secret>'
  ...
  batchConfig:
    asynchronous: false
  ...
  xmlToCsv:
    enabled: false
  ...
  pdbiToCsv:
    enabled: true
  ...
  ocudrReleaseName: '<EIR Helm name>'
Note:
Subscriber bulk import tool must always be clean installed.
- Configure the following parameters in the nudr-export-tool deployment yaml
file:
Note:
To test the subscriber export tool, it is important to install the subscriber export tool as a separate helm deployment in the same namespace as that of UDR. For information about the subscriber export tool installation, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
- Ensure that the secret used to install the subscriber export tool is the same as the one used for EIR. Configure the subscriber export tool deployment values.yaml file as follows:
global: dockerRegistry: <docker-registry>:<docker port>/ocudr mysql: dbServiceName: "<DB Endpoint provided for EIR deployment>" port: "<PORT Used provided UDR deployment>" ... dbCredSecretName: '<EIR Secret>' ... sftpDetails: ... sftpExportEnabled: false ... ... ocudrReleaseName: '<EIR Helm name>' ... exportMode: EIR_EXPORT
Note:
Subscriber export tool must always be clean installed.
- Other configuration details:
- Configure Prometheus with EIR default alerts. For more information about EIR alerts, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
4.7.2 UDR Golden Configuration Support
The ATS golden configuration feature facilitates the provisioning of customer-specific configuration to run test cases. UDR ATS is integrated with the ATS golden configuration feature. This feature enables you to provide your own configuration and saves the configuration after the ATS runs are completed. The feature supports two types of configuration:
- Product configuration
- Custom configuration
To Configure ATS Golden Configuration
- Modify the parameter values in the /ocslf_tests/custom_config/global.yaml and/or feature config file (/ocslf_tests/custom_config/<feature>.yaml).
- Select custom_config in the Jenkins pipeline GUI to run the test cases with the data provided in the custom_config directory.
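The golden configuration files are edited inside the ATS pod. The following is a minimal sketch, assuming the custom_config directory sits under the /var/lib/jenkins/ocslf_tests/ path used elsewhere in this section and a hypothetical pod name.
# Hypothetical names; substitute your namespace and ATS pod.
NAMESPACE=ocudr
ATS_POD="<ats pod name>"

# Edit the global parameters, then rerun the pipeline with custom_config selected.
kubectl exec -it -n "${NAMESPACE}" "${ATS_POD}" -- vi /var/lib/jenkins/ocslf_tests/custom_config/global.yaml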
ATS Golden Configuration Directory
- Parameterized features
- features/
- cust_newfeatures/
- cust_regression/Singlesite/
- features/
- Configuration files
- custom_config/
- yaml
- <feature_name>.yaml
- custom_config/
- Test data
- cust_data/
Parameterized features
New and regression feature configuration files use the same custom_config directory.
ATS Golden Configuration Feature Configuration files
- The global.yaml configuration file contains Key-Value pair parameters. These Key-Value pair parameters apply to more than one parameterized feature file.
- The Key-Value pair parameters in the <feature>.yaml
configuration file includes the following fields:
- subscriber profile data information
- cust_data payload file values
- data folder name
- other feature specific fields
- The <feature>.yaml file is divided into scenario specific sections, containing Key-Value pair parameters. Each parameter in the parameterized feature file takes values only from its own scenario File_Parameters dictionary. You can use the same parameter name in more than one scenario with different scenario specific values.
- When goldenConfig.py is running, each feature config yaml contains the global section at the top between #START_GLOBAL and #END_GLOBAL strings.
- Each SLF <feature>.yaml file allows you to make the following changes, if applicable:
- Use global.yaml anchors to copy the values to scenario-specific sections that use the YAML alias. The data folder name is taken from the global section and is used when a behave step needs to open a json payload file.
- Provisioning the subscriber profile data.
- Overwrite data_folder payload file values. The payload file values will be overwritten in the specified path. The golden configuration feature will overwrite data string with cust_data even if each path is written with the data directory.
- Provisioning custom values to feature specific fields.
Test data
The test data files include the json payload files used by each feature. A new cust_data directory, which is a copy of the original data directory, is created to keep the data directory with the product configuration unchanged, because the golden configuration feature allows you to overwrite data in these json files.
ATS Golden Configuration backup for SLF and Provisioning Gateway
The SLF and Provisioning Gateway backup process is implemented in the before_all and after_all behave hooks to allow ATS pipeline runs to implement feature specific configuration and revert them to the customer specific golden configuration at the end.
- Backup will be automatically executed for SLF and Provisioning Gateway deployments.
- If the export of the SLF or Provisioning Gateway golden configuration fails, pipeline execution is aborted and the feature tests are not performed.
- If the import of the SLF or Provisioning Gateway golden configuration fails, restore it manually by using 'slfgoldenconfig.zip' and 'provgwgoldenconfig.zip', which are located in the /var/lib/jenkins/ocslf_tests/ directory in the ATS pod (a retrieval sketch follows this list).
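The backup archives can be copied out of the ATS pod for safekeeping or manual restore. The following is a minimal sketch, assuming a hypothetical pod name; the file names and directory are taken from the list above.
# Hypothetical names; substitute your namespace and ATS pod.
NAMESPACE=ocudr
ATS_POD="<ats pod name>"

kubectl cp "${NAMESPACE}/${ATS_POD}:/var/lib/jenkins/ocslf_tests/slfgoldenconfig.zip" ./slfgoldenconfig.zip
kubectl cp "${NAMESPACE}/${ATS_POD}:/var/lib/jenkins/ocslf_tests/provgwgoldenconfig.zip" ./provgwgoldenconfig.zip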
Selective Run of Features and Scenarios
Note:
The values provided in the global.yaml file must match the SUT configurations (SLF values.yaml).
#START_GLOBAL
global:
data_folder: &data_folder cust_data
# To select an appropriate retries value take from SLF values.yaml the localLciHeaderValidity value (value in
# milliseconds) and perform the following formula: (localLciHeaderValidity/5000)+1.
# It is recommended to use 6 as the minimum retries value for igw-sig-lciHeaderRequestRetries.
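# For example, with localLciHeaderValidity set to 10000 in the SLF values.yaml (as in the
# ingressgateway-sig configuration earlier in this section), (10000/5000)+1 = 3, so the
# recommended minimum of 6 is used below.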
igw-sig-lciHeaderRequestRetries: &igw-sig-lciHeaderRequestRetries '6'
# Values provided in this global.yaml should match with the SUT configurations (SLF values.yaml).
alternateRouteServiceEnable: true #true/false
performanceServiceEnable: true #true/false
igw-sig-lciheader-enabled: true #true/false
igw-sig-oauth-enabled: true #true/false
igw-sig-userAgentHeaderValidationConfigMode: REST #HELM/REST
igw-sig-userAgentHeaderValidation-enabled: true #true/false
igw-sig-userAgentHeader-validationType: &igw-sig-userAgentHeader-validationType relaxed #relaxed/strict
igw-prov-oauth-enabled: true #true/false
nrf-client-enableVirtualNrfResolution: true #true/false
egw-userAgentHeaderConfigMode: REST #HELM/REST
egw-userAgentHeader-enabled: true #true/false
egw-addFqdnToHeader: true #true/false
egressgateway_nfFqdn: &egressgateway_nfFqdn udr.oracle.com
error_logging_enabled: true
#END_GLOBAL
List of Parameterized Features
Table 4-8 Parameterized Features for EIR
EIR Feature |
---|
Prometheus_Scenarios.feature |
EIR_XFCC.feature |
EIR_User_Agent_Header.feature |
EIR_User_AgentHeader_Nrf.feature |
EIR_TLS_Signalling.feature |
EIR_TLS_Provisioning.feature |
EIR_Subscriber_Prov.feature |
EIR_Signalling.feature |
EIR_Server_Header.feature |
EIR_Provisioning_Err.feature |
EIR_PDBI_Import.feature |
EIR_OAuth2_Signalling.feature |
EIR_OAuth2_Provisioning.feature |
EIR_MGM_API.feature |
EIR_Default_Response.feature |
EIR_CSV_Import.feature |
EIR_AccessLog.feature |
EIR_IMEISV.feature |
EIR_IMEISV_Default_Response.feature |
EIR_IMEISV_AccessLog.feature |
EIR_PDBI_IMEISV_Import.feature |
EIR_Subscriber_Export_Tool.feature |
EIR_22_4_x_Patch_PDBI_Import.feature |
EIR_Validation_Additional_Scenarios.feature |
EIR_PDBI_Synchronous_Scenarios.feature |
EIR_Subscriber_Export_Tool_New.feature |
Table 4-9 Parameterized Features for SLF
SLF Feature |
---|
SLF_BulkImport.feature |
ProvGw_Config_ImportExport.feature |
SLF_CHFGroupId.feature |
SLF_DefaultGroupId.feature |
SLF_User_Agent_Header.feature |
SLF_LCI_Header.feature |
SLF_Prov_OAuth2_Validation.feature |
SLF_User_AgentHeader_Nrf.feature |
SLF_LCI_Via_Header.feature |
SLF_Subscriber_Activity_Logging.feature |
SLF_Subscriber_Export_Tool.feature |
SLF_Singlesite_NFCS_BulkImport.feature |
SLF_Singlesite_NFCS_Nrf_Client_Scenarios.feature |
SLF_Singlesite_NFCS_Prometheus_Scenarios.feature |
SLF_Singlesite_NFCS_Scenarios.feature |
SLF_Singlesite_NFCS_Subscriber_Export_Tool.feature |
ProvGw_Auditor_Status_API.feature |
SLF_NF_Scoring_Multisite.feature |
SLF_Network_Policy_Validation.feature |
SLF_Subscriber_Export_File_Cleanup.feature |
Table 4-10 Parameterized Features for UDR
UDR Feature |
---|
UDR_LCI_Header_Via.feature |
UDR_Subscriber_Activity_Logging.feature |
UDR_VSA_PCF_Validation.feature |
UDR_PCF_ETag_Validation.feature |
UDR_PCF_ETag_Notify_Validation.feature |
UDR_AUTO_Enroll_policy_Data.feature |
UDR_22.4.x_PCF_VSA_GET_DELETE.feature |
ProvGw_SOAP_Additional_scenarios.feature |
23_1_1_ProvGw_fixes.feature |
UDR_Mgm_Attribute_Mapping.feature |
UDR_PCF_Auto_Enrollment_New.feature |
UDR_ProvGw_Prov_Headers.feature |
UDR_Quota_Migration_Policy_Sh_Soap.feature |
UDR_Network_Policy_Validation.feature |
UDR_VSA_Delete_All_Validation.feature |
UDR_Quota_Migration_Sh_interface.feature |
UDR_Quota_Migration_OnDemand_VSA.feature |
UDR_Quota_Migration_OnDemand_Sh.feature |
UDR_Quota_Migration_OnDemand_SOAP_ProvPolicy.feature |
UDR_Quota_Migration_OnDemand_NudrPolicy.feature |
UDR_Quota_Migration_OnDemand_Misc.feature |
UDR_Network_Policy_Validation.feature |
UDR_Bug_Fix_25_1_0.feature |
UDR_Default_Headers.feature |
4.7.3 UDR Application Log Collection
The application log collection feature captures SUT pod logs when a scenario fails. The logs can be captured at one of the following log levels:
- WARN
- INFO
- DEBUG
- ERROR
- TRACE
Note:
- If the application log collection feature is enabled, it will increase the execution time of the pipeline because some of the steps are executed before and after each scenario. It is recommended to enable the application log collection only if there are consistent failures and pod logs are required to further debug the issues.
- Do not provide any values for Values.k8sResource.container.prefix and Values.k8sResource.container.suffix in the UDR custom values file and do not modify debug tools container name when PCAP logs are collected.
Configuration
- ATS must be deployed with PVC enabled. For more information on PVC configuration, see Enabling Persistent Volume.
- For the PVC size, refer to ATS PVC Size Calculation.
- SUT must be deployed with debug tools to capture PCAP logs on SLF, UDR, and EIR. For more information, see Pod Security Policy, Role, and Role Binding Creation section in Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
- To enable the debug tool containers for UDR and Provisioning Gateway, edit the global section of the custom-values.yaml file of UDR and Provisioning Gateway. For more information, see Using Debug Tools in Oracle Communications Cloud Native Core, Unified Data Repository Troubleshooting Guide and Oracle Communications Cloud Native Core, Provisioning Gateway Guide.
- After the ATS and SUT are installed, enable log collection from the ATS Jenkins GUI. This option is available on the BuildWithParameters screen of each pipeline of the UDR ATS. It is currently provided only for the NewFeatures and Regression pipelines.
Figure 4-75 SLF-NewFeatures BuildWithParameters
- To enable application log collection, follow these steps:
- From the Fetch_Log_Upon_Failure drop-down list, select YES.
- From the Log_Type select the
required options, AppLog or
PcapLog or both. Follow the below steps
depending on the selection and requirement:
- ATS must be deployed with the corresponding PVC size if AppLog is selected from the Log_Type.
- SUT must be deployed with debug tools container and ATS must be deployed with corresponding PVC size if PcapLog is selected from the Log_Type or if both AppLog and PcapLog are selected.
- Application log collection provides Log_Level
values from the dropdown. The log level can be set for all the
microservices to capture the SUT log for failure. The possible values for
Log_Level are as follows:
- WARN
- INFO
- DEBUG
- ERROR
- TRACE
- Run the following command to log in to the ATS
pod:
kubectl exec -it <ATS_POD> -n <NAMESPACE> -- bash
- Select the builds path to capture AppLogs and PcapLogs
:
(env) [jenkins@slfats-ocats-udr-56f76cd668-jzj4d ~]$cd /var/lib/jenkins/.jenkins/jobs/<JOB-NAME>/builds/<BUILD-NUM>/
(env) [jenkins@slfats-ocats-udr-56f76cd668-jzj4d ~]$ls -lrt
ATS PVC Guidelines
The required PVC size depends on the following:
- ATS_Type, Pipeline_Type
- Log_Type
- Log_Level
- Number_of_pipeline_runs
- SLF-ATS:
- AppLogs only: Per failure scenario: 72MB
- PcapLogs only: Per failure scenario: 554MB
- AppLogs and PcapLogs: Per failure scenario: 626MB
- UDR-ATS:
- AppLogs only: Per failure scenario: 1MB
- PcapLogs only: Per failure scenario: 12GB
- AppLogs and PcapLogs: Per failure scenario: 13GB
- EIR-ATS:
- AppLogs only: Per failure scenario: 500KB
- PcapLogs only: Per failure scenario: 10GB
- AppLogs and PcapLogs: Per failure scenario: 11GB
ATS PVC Size Calculation
Note:
The maximum PVC size must be able to store the logs even if all the scenarios fail in all the runs.
PVC_size = Number of scenarios in Pipeline Type * PVC requirement per scenario for the given type [ATS_Type or Log_Type] * Total runs to be executed
Example 1: To capture the application logs for SLF-ATS-NewFeatures and the Log_Type is set as AppLogs only for 10 pipeline runs, the PVC for ATS is calculated as:
The given SLF-ATS-NewFeatures has 19 scenarios. The memory requirement per scenario is 72MB.
PVC_size = 19*72MB*10 = 13GB. The maximum PVC size required for ATS is 13 GB.
Example 2: To capture the application logs for EIR-ATS-Regression and the Log_Type is set as PcapLogs only for 1 pipeline run, the PVC for ATS is calculated as:
The given EIR-ATS-Regression has 242 scenarios. The memory requirement per scenario is 10GB.
PVC_size = 242*10GB*1 = 2420GB. The maximum PVC size required for ATS is 2420 GB.
4.7.4 Parallel Test Execution
Parallel test execution allows you to perform multiple logically grouped tests simultaneously on the same System Under Test (SUT) to reduce the overall execution time of ATS.
ATS currently runs all its tests in a sequential manner, which is time-consuming. With parallel test execution, tests can be run concurrently rather than sequentially or one at a time. Test cases or feature files are now separated into different folders, such as stages and groups, for concurrent test execution. Different stages, such as stage 1, stage 2, and stage 3, run the test cases in a sequential order, and each stage has its own set of groups. For more information about Parallel Test Execution, see Parallel Test Execution.
SLF ATS Stages and Groups
Table 4-11 SLF ATS Stages and Groups
Single Site Stages and Groups | Multiple Site Stages and Groups |
---|---|
UDR ATS Stages and Groups
Table 4-12 UDR ATS Stages and Groups
UDR New Feature | UDR Regression (Single Site) | UDR Regression (Multiple Site) |
---|---|---|
EIR ATS Stages and Groups
Table 4-13 EIR ATS Stages and Groups
EIR NewFeatures | EIR Regression |
---|---|
Viewing Console Logs for Specific Group
When parallel test execution feature is enabled and ATS runs are triggered, you can view the progress of each stage and group on the corresponding pipeline dashboard.
Figure 4-76 Sample SLF Regression
Figure 4-77 Sample Stage Logs
4.7.5 Logging into ATS
Before logging into ATS, it is important to know the nodeport of the "<release-name>-ocats-udr" service. To get the nodeport details, run the following command:
kubectl get svc -n <slf_namespace>
For example:
kubectl get svc -n ocudr ocats-udr-ga
Figure 4-78 UDR Nodeport
In the previous figure, 31175 is the nodeport.
To get the worker node IP addresses, run the following command:
kubectl get nodes -o wide
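The nodeport can also be read directly with a jsonpath query. The following is a minimal sketch, assuming the example service and namespace names shown above; combine the result with any worker node IP from the previous command.
# Hypothetical names; substitute your namespace and ATS service.
NAMESPACE=ocudr
ATS_SVC=ocats-udr-ga

kubectl get svc "${ATS_SVC}" -n "${NAMESPACE}" -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'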
- In the web browser, type http://<Worker IP>:nodeport
and press Enter.
Example: http://10.75.229.87:30156
The log in page appears.
- Enter the username and password. Click Sign
in. A page with pre-configured pipelines appears.
Figure 4-79 SLF, EIR, and UDR Pre-configured Pipelines
Figure 4-80 EIR ATS Pipeline
Note:
To modify the default login password, see Modifying Login Password.
- SLF-HealthCheck: This pipeline performs a test probe on the SUT: it triggers a Helm test and provides the results in the console logs.
- SLF-NewFeatures: This pipeline has all the test cases delivered as part of the SLF ATS 25.1.200 release.
- SLF-Performance: This pipeline provides a means to run 1K TPS of lightweight traffic.
- SLF-Regression: This pipeline has all the SLF test cases of previous releases.
- UDR-HealthCheck: This pipeline performs a test probe on the SUT: it triggers a Helm test and provides the results in the console logs.
- UDR-NewFeatures: This pipeline has all the test cases delivered as part of the UDR ATS 25.1.200 release.
- UDR-Regression: This pipeline has all the UDR test cases of previous releases.
- EIR-HealthCheck: This pipeline performs a test probe on the SUT: it triggers a Helm test and provides the results in the console logs.
- EIR-NewFeatures: This pipeline has all the test cases delivered as part of the EIR ATS 25.1.200 release.
- EIR-Regression: This pipeline has all the EIR test cases of previous releases.
4.7.6 Running SLF Pipelines
- Running SLF-NewFeatures Pipeline
- Running SLF Regression Pipeline - Single Site Deployment
- Running SLF Regression Pipeline - Multiple Site Deployment
- Running SLF HealthCheck Pipeline
- Running SLF-Performance Pipeline: Model 2: Single Site Deployment
- SLF NewFeatures Documentation - Single Site Scenarios
- SLF Regression Documentation - Single Site Scenarios
- SLF Regression Documentation - Multiple Site Scenarios
4.7.6.1 Running SLF-NewFeatures Pipeline
To run the test cases:
- Click SLF-NewFeatures. The following page
appears:
Figure 4-81 SLF-NewFeatures Configure
- Click Configure in the left navigation pane. The General tab appears. You must wait for the page to load completely.
- Select the Discard old builds option. This option allows you to configure the number of builds you want to retain in the persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
Figure 4-82 Discard Old Builds
- Click the Advanced Project Options tab. Scroll
down to reach the Pipeline configuration section as
follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Figure 4-83 SLF Configuration page
In the pipeline script, do not change any value other than line numbers 68 to 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in line numbers 15 to 60. The parameter descriptions are as follows:
- a - Selected NF
- b - NameSpace in which UDR is Deployed (Mandatory)
- c - NameSpace in which ProvGw is Deployed (Mandatory)
- d - Preferred UDR1 FQDN and PORT of segment 1 in the format FQDN:PORT
- e - UDR2 FQDN and PORT of segment 1 in the format FQDN:PORT
- f - Preferred UDR3 FQDN and PORT of segment 2 in the format FQDN:PORT
- g - UDR4 FQDN and PORT of segment 2 in the format FQDN:PORT
- h - ProvGw FQDN and PORT in the format FQDN:PORT
- i - prometheus_ip and port of prometheus server in the format FQDN:PORT
- j - re-run count
- k - Name and Port of Kubernetes Host server in the format FQDN:PORT
- l - Mode of Communication (Can be either IP or fqdn)
- m - Helm Name for UDR1
- n - Helm Name for UDR2
- o - Helm Name for UDR3
- p - Helm Name for UDR4
- q - Helm Name for ProvGw
- r - FQDN:PORT format for primary nrf-stub
- s - primary nrf-stub metrics port
- t - nudr-config FQDN:PORT format
- u - prometheus uri apiName and apiVersion
- v - FQDN:PORT format for secondary nrf-stub
- w - secondary nrf-stub metrics port
- x - primary nrf-stub deployment name
- y - secondary nrf-stub deployment name
- z - ProvGw HTTPS port
- A - root certificate authority
- B - Server certificate signed with root CA private key
- C - private key in .pem format
- D - Number of retries to check if pod down is successful
- E - Number of retries to check if pod up is successful
- F - Number of retries when ATS detects tcp connection failures
- G - Time interval in seconds between tcp connection retries
- H - UDR2 FQDN:PORT for multi-site scenarios
- I - UDR2 nudr-config FQDN:PORT for multi-site scenarios
- J - ProvGateway Config service host and port in the format FQDN:PORT
- K - Bulk import helm name
- L - true if CNE VERSION is 1.9 or above or OSO version is 1.10.x or above, else false.
- M - Preferred UDR1 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- N - UDR2 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- O - Preferred UDR3 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- P - UDR4 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- Q - MultiSite UDR2 PROV FQDN and PORT in the format FQDN:PORT
- R - Helm Release name of SLF export tool (slfexport)
- S - Enable Error Response Logging Validation (true)
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default is false)
Note:
Setting the rerun count, the number of retries to check pod status, and segment details:
- Rerun count can be set to any desired value by changing option 'j' in the pipeline script.
- Number of retries to check if pod is down can be set by providing option 'D' in the pipeline script.
- Number of retries to check if pod is up can be set by providing option 'E' in the pipeline script.
- If ProvGw is deployed with segment details having UDR fqdn/IP:port, provide option 'l' as "IP". If ProvGw is deployed with segment details having only UDR fqdn/IP, provide option 'l' as "fqdn". If any value other than "IP" and "fqdn" is provided, ATS assumes that the segment details in ProvGw is in "IP" mode.
- Option 'u' takes the value '/<domain name>/prometheus/api/v1' for CNE version 1.9 or above, or OSO version 1.10.x or above. If the domain name is not known, it can be found by running the following command from the ATS pod:
curl http://<prometheus fqdn>:<port>/
Example:
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/
<a href="/udr-cne/prometheus">Found</a>
In this example, the response indicates that the domain name is udr-cne. A verification sketch follows this note.
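As an optional check, you can confirm the resulting Prometheus URI from the ATS pod before saving the pipeline. This is a sketch only: it assumes the udr-cne domain discovered in the example above and uses the standard Prometheus HTTP API status endpoint.
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/udr-cne/prometheus/api/v1/status/buildinfo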
- Click Save. The SLF-NewFeatures page appears.
- Click Build with Parameters. The following page displays all the newly added features along with the total count of features, the count of scenarios in each feature, the name of each feature, and the scenarios in each feature:
Figure 4-84 SLF Build with Parameters
In the previous image:
- Select the Configuration_Type. For information about this feature, see Parameterization.
- There are two Select_Option(s), which
are:
- All: By default, it selects all the SLF test cases to run. Scroll-down and click Build to run all the test cases.
- Single/MultipleFeatures: This option allows you to select the test cases you want to run. After selecting the test cases, scroll-down and click Build.
If the Select_Option is selected as 'All', the TestCases details are mapped to each feature and if the Select_Option is selected as 'Single/MultipleFeatures', only the TestCases details corresponding to those test cases appear. For more information, see Support for Test Case Mapping and Count.
To view consolidated and detailed stack trace results in case of any failures, click Test Results Analyzer in the left navigation pane. The test results analyzer report appears. For more information, see Test Results Analyzer.
A sample consolidated test report, when rerun is set to 0, is as follows:
Figure 4-85 Consolidated Test Report - Rerun is 0

For more details on consolidated test report, see Final Summary Report, Build Color and Application Logs.
4.7.6.2 Running SLF Regression Pipeline - Single Site Deployment
Note:
Model 1 deployment is required for this pipeline. For more information, see Prerequisites for SLF Pipelines.
- Click SLF-Regression and then click Configure.
Figure 4-86 SLF-Regression Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Select the Discard old Builds. This option
allows you to configure the number of builds you want to retain in the
persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration section as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Figure 4-87 SLF Configuration page
Do not change any value other than those from line 68 to line 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in lines 15 to 60. The parameter descriptions are as follows:
- a - Selected NF
- b - NameSpace in which UDR is Deployed (Mandatory)
- c - NameSpace in which ProvGw is Deployed (Mandatory)
- d - Preferred UDR1 FQDN and PORT of segment 1 in the format FQDN:PORT
- e - UDR2 FQDN and PORT of segment 1 in the format FQDN:PORT
- f - Preferred UDR3 FQDN and PORT of segment 2 in the format FQDN:PORT
- g - UDR4 FQDN and PORT of segment 2 in the format FQDN:PORT
- h - ProvGw FQDN and PORT in the format FQDN:PORT
- i - prometheus_ip and port of prometheus server in the format FQDN:PORT
- j - re-run count
- k - Name and Port of Kubernetes Host server in the format FQDN:PORT
- l - Mode of Communication (Can be either IP or fqdn)
- m - Helm Name for UDR1
- n - Helm Name for UDR2
- o - Helm Name for UDR3
- p - Helm Name for UDR4
- q - Helm Name for ProvGw
- r - FQDN:PORT format for primary nrf-stub
- s - primary nrf-stub metrics port
- t - nudr-config FQDN:PORT format
- u - prometheus uri apiName and apiVersion
- v - FQDN:PORT format for secondary nrf-stub
- w - secondary nrf-stub metrics port
- x - primary nrf-stub deployment name
- y - secondary nrf-stub deployment name
- z - ProvGw HTTPS port
- A - root certificate authority
- B - Server certificate signed with root CA private key
- C - private key in .pem format
- D - Number of retries to check if pod down is successful
- E - Number of retries to check if pod up is successful
- F - Number of retries when ATS detects tcp connection failures
- G - Time interval in seconds between tcp connection retries
- H - UDR2 FQDN:PORT for multi-site scenarios
- I - UDR2 nudr-config FQDN:PORT for multi-site scenarios
- J - ProvGateway Config service host and port in the format FQDN:PORT
- K - Bulk import helm name
- L - true if CNE VERSION is 1.9 or above or OSO version is 1.10.x or above, else false.
- M - Preferred UDR1 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- N - UDR2 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- O - Preferred UDR3 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- P - UDR4 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- Q - MultiSite UDR2 PROV FQDN and PORT in the format FQDN:PORT
- R - Helm Release name of SLF export tool (slfexport)
- S - Enable Error Response Logging Validation (true)
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default is false)
Note:
Setting the rerun count, the number of retries to check pod status, and segment details:
- Rerun count can be set to any desired value by changing option 'j' in the pipeline script.
- Number of retries to check if pod is down can be set by providing option 'D' in the pipeline script.
- Number of retries to check if pod is up can be set by providing option 'E' in the pipeline script.
- If ProvGw is deployed with segment details having UDR fqdn/IP:port, provide option 'l' as "IP". If ProvGw is deployed with segment details having only UDR fqdn/IP, provide option 'l' as "fqdn". If any value other than "IP" and "fqdn" is provided, ATS assumes that the segment details in ProvGw is in "IP" mode.
- Option 'u' takes the value '/<domain name>/prometheus/api/v1' for CNE version 1.9 or above, or OSO version 1.10.x or above. If the domain name is not known, it can be found by running the following command from the ATS pod:
curl http://<prometheus fqdn>:<port>/
Example:
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/
<a href="/udr-cne-190/prometheus">Found</a>
In this example, the response indicates that the domain name is udr-cne-190. A query sketch follows this note.
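As an optional sanity check, a query against the Prometheus API path built from option 'u' can be run from the ATS pod. This is a sketch only: the udr-cne-190 domain is taken from the example above, and the up metric is used purely as an illustration.
curl 'http://occne-kube-prom-stack-kube-prometheus.occne-infra/udr-cne-190/prometheus/api/v1/query?query=up'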
- Click Save. The Pipeline SLF-Regression page appears.
- Click Build with Parameters.
Figure 4-88 Build with Parameters
- Select the Configuration_Type. For more information about this feature, see Parameterization.
- Select SUTSuite as 'All'. This lists all regression features and displays the total count of the features, counts of scenarios in each feature, name of each feature, and scenarios in each feature.
- Click Build.
For more details on consolidated test report, see Final Summary Report, Build Color and Application Logs.
Figure 4-89 Consolidated Test Report Rerun

4.7.6.3 Running SLF Regression Pipeline - Multiple Site Deployment
Note:
Model 3 deployment is required for this pipeline. For more information, see Prerequisites for SLF Pipelines.
- Click SLF-Regression and then click Configure.
- The General tab appears. The user must wait for the page to load completely.
- Select the Discard old Builds. This option
allows you to configure the number of builds you want to retain in the
persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration section as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 68 to line 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in lines 15 to 60. The parameter descriptions are as follows:
- a - Selected NF
- b - NameSpace in which UDR is Deployed (Mandatory)
- c - NameSpace in which ProvGw is Deployed (Mandatory)
- d - Preferred UDR1 FQDN and PORT of segment 1 in the format FQDN:PORT
- e - UDR2 FQDN and PORT of segment 1 in the format FQDN:PORT
- f - Preferred UDR3 FQDN and PORT of segment 2 in the format FQDN:PORT
- g - UDR4 FQDN and PORT of segment 2 in the format FQDN:PORT
- h - ProvGw FQDN and PORT in the format FQDN:PORT
- i - prometheus_ip and port of prometheus server in the format FQDN:PORT
- j - re-run count
- k - Name and Port of Kubernetes Host server in the format FQDN:PORT
- l - Mode of Communication (Can be either IP or fqdn)
- m - Helm Name for UDR1
- n - Helm Name for UDR2
- o - Helm Name for UDR3
- p - Helm Name for UDR4
- q - Helm Name for ProvGw
- r - FQDN:PORT format for primary nrf-stub
- s - primary nrf-stub metrics port
- t - nudr-config FQDN:PORT format
- u - prometheus uri apiName and apiVersion
- v - FQDN:PORT format for secondary nrf-stub
- w - secondary nrf-stub metrics port
- x - primary nrf-stub deployment name
- y - secondary nrf-stub deployment name
- z - ProvGw HTTPS port
- A - root certificate authority
- B - Server certificate signed with root CA private key
- C - private key in .pem format
- D - Number of retries to check if pod down is successful
- E - Number of retries to check if pod up is successful
- F - Number of retries when ATS detects tcp connection failures
- G - Time interval in seconds between tcp connection retries
- H - UDR2 FQDN:PORT for multi-site scenarios
- I - UDR2 nudr-config FQDN:PORT for multi-site scenarios
- J - ProvGateway Config service host and port in the format FQDN:PORT
- K - Bulk import helm name
- L - true if CNE VERSION is 1.9 or above or OSO version is 1.10.x or above, else false.
- M - Preferred UDR1 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- N - UDR2 PROV FQDN and PORT of segment 1 in the format FQDN:PORT
- O - Preferred UDR3 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- P - UDR4 PROV FQDN and PORT of segment 2 in the format FQDN:PORT
- Q - MultiSite UDR2 PROV FQDN and PORT in the format FQDN:PORT
- R - Helm Release name of SLF export tool (slfexport)
- S - Enable Error Response Logging Validation (true)
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default is false)
Note:
Setting the rerun count, the number of retries to check pod status, and segment details:
- Rerun count can be set to any desired value by changing option 'j' in the pipeline script.
- Number of retries to check if pod is down can be set by providing option 'D' in the pipeline script.
- Number of retries to check if pod is up can be set by providing option 'E' in the pipeline script.
- If ProvGw is deployed with segment details having UDR fqdn/IP:port, provide option 'l' as "IP". If ProvGw is deployed with segment details having only UDR fqdn/IP, provide option 'l' as "fqdn". If any value other than "IP" and "fqdn" is provided, ATS assumes that the segment details in ProvGw is in "IP" mode.
- Option 'u' takes the value '/<domain name>/prometheus/api/v1' for CNE version 1.9 or above, or OSO version 1.10.x or above. If the domain name is not known, it can be found by running the following command from the ATS pod:
curl http://<prometheus fqdn>:<port>/
Example:
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/
<a href="/udr-cne-190/prometheus">Found</a>
- Click Save. The Pipeline SLF-Regression page appears.
- Click Build with Parameters. Select the
Configuration_Type.
Note:
For information about the Configuration_Type, see Parameterization.
- Select SUTSuite as 'Multi_Site'. This lists all the regression features and displays the total count of the features, counts of scenarios in each feature, name of each feature, and scenarios in each feature.
- Click Build.
A sample consolidated test report, when rerun is set to 0, is as follows:
Figure 4-90 SLF- Regression to run Multiple Site Scenarios - Consolidated Test Report - Rerun is 0
For more details on consolidated test report, see Final Summary Report, Build Color and Application Logs.
4.7.6.4 Running SLF HealthCheck Pipeline
- Navigate to SLF-HealthCheck pipeline and
click Configure.
Figure 4-91 UDR - SLF-HealthCheck Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section
as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than line numbers 16 and 17. You can change the parameters marked "a", "b", and "c" as per your requirements. The parameter details are provided as comments in lines 7 and 9. The parameter descriptions are as follows:
- a - Helm releases (provide comma-separated release names if more than one)
- b - namespace of SUT.
- c - helm command alias that is being used on the CNE (helm or helm2 or helm3)
Figure 4-92 SLF-HealthCheck Pipeline configuration
- Click Save. The Pipeline SLF-HealthCheck page appears.
- Click Build Now. This triggers health check
for SUT.
Figure 4-93 SLF-HealthCheck Pipeline Build
Figure 4-94 Sample Output - SLF Health Check
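For reference, the health check triggered by this pipeline is equivalent in spirit to running a Helm test manually against the SUT release. The following is a minimal sketch, assuming the release name and namespace are both ocudr and that the helm alias from option 'c' is helm:
helm test ocudr -n ocudr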
4.7.6.5 Running SLF-Performance Pipeline: Model 2: Single Site Deployment
Note:
Model 2 deployment is required for this pipeline. For more information, see Prerequisites for SLF Pipelines.
- Navigate to the SLF-Performance pipeline and click Configure.
Figure 4-95 SLF Single Site Performance Model 2
- The General tab appears. The user must wait for the page to load completely.
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section
as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than line numbers 22 and 31. You can change the parameters marked "b" and "k" as per your requirements. The parameter details are provided as comments in lines 7 and 21. The parameter descriptions are as follows (a reachability check sketch follows the figure after this list):
- a - Selected NF (should always be SLF)
- b - UDR ingressgateway provisioning FQDN and port for performance testing (ocudr-ingressgateway-prov.ocudr:80)
- c - UDR ingressgateway signalling FQDN and port for performance testing (ocudr-ingressgateway-sig.ocudr:80)
- d - Prometheus Server FQDN and port (occne-prometheus-server.occne-infra:80)
- e - Prometheus URI (/prometheus/api/v1)
- f - Namespace in which UDR is deployed for performance testing (ocudr)
- g - Helm Name of UDR (ocudr)
- h - Kubernetes host FQDN (kubernetes.default)
- i - Kubernetes host port number (443)
- j - Duration for which performance needs to be tested (in seconds) (900)
- k - Whether scenario with ramping up of TPS from 1K to 2K and back, needs to be run (false)
Figure 4-96 SLF-Performance Pipeline Configuration
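Before triggering the performance run, you may want to confirm that the provisioning and signalling gateways referenced in options 'b' and 'c' are reachable from inside the cluster. The following is a sketch only: it assumes the default service names shown above, cleartext HTTP/2 on port 80, and that curl is available from the ATS pod.
curl -s -o /dev/null -w '%{http_code}\n' --http2-prior-knowledge http://ocudr-ingressgateway-prov.ocudr:80/
curl -s -o /dev/null -w '%{http_code}\n' --http2-prior-knowledge http://ocudr-ingressgateway-sig.ocudr:80/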
- Click Save. The Pipeline SLF-Performance page appears.
- Click Build Now. This triggers lightweight
traffic on UDR-SLF.
Figure 4-97 Sample Output - SLF Performance
4.7.6.6 SLF NewFeatures Documentation - Single Site Scenarios
To view the documentation of SLF-NewFeatures, click the Documentation link in the left navigation pane (present inside the build), as follows:
Figure 4-98 SLF-New Features Documentation


Table 4-14 SLF New Features
| Feature | Description |
| --- | --- |
| SLF_Subscriber_Export_File_Cleanup.feature | This feature file has a scenario to validate the cleanup of older SLF export dumps. |
Note:
The Documentation option appears only if you have run the SLF-NewFeatures pipeline test cases at least once.
Based on the functionalities covered under Documentation, the Build Requires Parameters page displays test cases. To navigate back to the Pipeline SLF-NewFeatures page, click the Back to SLF-NewFeatures link available in the top left corner of the page.
4.7.7 Running UDR Pipelines
4.7.7.1 Running UDR NewFeatures Pipeline
To run the UDR-NewFeatures pipeline test cases:
Running UDR-NewFeatures (Model 1) pipeline test cases
- Click UDR-NewFeatures. Click
Configure in the left navigation pane. The
General tab appears. The user must wait for the
page to load completely.
Figure 4-101 UDR NewFeatures Pipeline
- Once the page loads completely, select the Discard
old Builds. This option allows you to configure the number
of builds you want to retain in the persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 68 to line 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in lines 15 to 60. The parameter descriptions are as follows (a stub deployment lookup sketch follows the figure after this list):
- a - Name of the NF to be tested in capital (UDR).
- b - Namespace in which the udr is deployed.
- c - Name of signalling UDR1_ingressgateway_service.namespace and port (ocudr-ingressgateway-sig.ocudr:80).
- d - Helm release name of UDR (ocudr)
- e - FQDN and Port UDR's nudr-config in the format fqdn:port (ocudr-nudr-config.ocudr:5001)
- f - Name_of_Prometheus_service.namespace and port(occne-prometheus-server.occne-infra:80).
- g - If this option is not provided, Prometheus URI shall be /api/v1 (Default: /prometheus/api/v1) (if CNE 1.9+ /<cne-cluster-name>/prometheus/api/v1)
- h - Number of times the re-run of failed case is allowed (default as 2).
- i - Name of Kubernetes Host server and port (kubernetes.default:443)
- j - diameter endpoint fqdn and port in the form fqdn:port (ocudr-nudr-diam-gateway.ocudr:3868)
- k - fqdn or IP of http2-server in the format <ip/fqdn> (fqdn) (notify-stub-service.ocudr:80)
- l - fqdn or IP for primary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server.ocudr:8080)
- m - fqdn or IP for secondary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server2.ocudr:8080)
- n - primary stub deployment name (nrfstub-ocstub-py)
- o - secondary stub deployment name (nrfstub2-ocstub-py)
- p - HTTPS port for UDR (443)
- q - root certificate authority (CA) (caroot.cer)
- r - Server certificate signed with root CA private key (apigatewayrsa.cer)
- s - private key to .pem format (rsa_private_key_pkcs1.pem)
- t - fqdn and port for 2nd diameter tool endpoint
- u - MYSQL SERVICE ENDPOINT
- v - NOTIFY STUB deployment name
- w - Name of signaling in the format fqdn:port (ocudr-s2-ingressgateway-sig.ocudr-s2:80)
- x - fqdn and port for UDR2 nudr-config (ocudr2-nudr-config.ocudr:5001)
- y - fqdn and port for UDR2 DIAMETER_ENDPOINT (fqdn:port) (ocudr-s2-nudr-diam-gateway.ocudr-s2:3868)
- z - FQDN and port for SCP stub (scp-stub-service:8080)
- A - Deployment name for SCP stub (scpstub-notify-ocstub-py)
- B - Bulk import helm name
- C - ProvGw namespace (ocudr)
- D - ProvGw endpoint in format fqdn:port (provgw-prov-ingressgateway.ocudr:80)
- E - ProvGw endpoint https port(443)
- F - ProvGw helm name
- G - ProvGw config host in the format fqdn:port(provgw-provgw-config.ocudr:5001)
- H - true if OCCNE is 1.9.x and above, else false (true)
- I - fqdn and port for fourg stub in the form (fqdn:port)
- J - fqdn and port for 1st diameter tool endpoint
- K - fqdn and port for notify stub for TLS scenarios (tlsnotify-stub-service.ocudr:8443)
- L - Service name of UDR1 provisioning ingressgateway in the format FQDN:PORT(ocudr-ingressgateway-prov.ocudr:80)
- M - Service name of site2 UDR provisioning ingressgateway in the format FQDN:PORT(ocudr2-ingressgateway-prov.ocudr-s2:80)
- N - Diameter Stub Tool1 Deploy Name (diamtoola-diam-stub)
- O - Diameter Stub Tool2 Deploy Name (diamtoolb-diam-stub)
- P - Migration tool Helm release name (migration-tool)
- Q - Deployment name of the TLS notify stub (tlsnotifystub-ocstub-py)
- R - Enable error response logging validation (true)
- S - true (when the underlying CNE has support for CNLB Multus) to validate the Traffic segregation feature, else false
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default: false)
Figure 4-102 UDR NewFeatures Pipeline Configuration
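If you are unsure of the stub deployment names to use for options 'n', 'o', 'A', 'N', 'O', or 'Q', you can list the deployments in the UDR namespace. The following is a sketch, assuming the namespace ocudr and that the stub deployment names contain "stub":
kubectl get deployments -n ocudr | grep -i stub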
- Click Save after making necessary changes. The Pipeline UDR-NewFeatures page appears.
- Click Build with Parameters. The page
lists all the newly added features and displays the total number of features
and the scenarios in each feature, name of each feature, and scenarios in
each feature.
Figure 4-103 UDR NewFeatures Pipeline Build with Parameter
- Select the Configuration_Type. For more information, see Parameterization.
- Choose the Select_Option(s),
which are:
- All: By default, it selects all the UDR test cases to run. Scroll-down and click Build to run all the test cases.
- Single/MultipleFeatures: This option allows you to select any number of test cases you want to run from the list of total test cases available for execution.
- After selecting the test cases, scroll-down and click Build to run the selected UDR test cases.
A sample stage and group listing, with the scenarios run in each group, is as follows:
Stage:Stage_2 Group:no_igw_dr_metrics
UDR_IGW_LCI_Header.feature:81 UDR_LCI_03_UserAgent_Err
UDR_PCF_User_Agent_Header_New.feature:74 UDR_USER_AGENT_HEADER_02
UDR_PCF_User_Agent_Header_New.feature:140 UDR_USER_AGENT_HEADER_03
Running UDR-NewFeatures (Model 2) pipeline test cases
- Click UDR-NewFeatures. Click
Configure in the left navigation pane. The
General tab appears. The user must wait for the page
to load completely.
Figure 4-104 UDR NewFeatures Pipeline
- Once the page loads completely, select the Discard old
Builds. This option allows you to configure the number of builds
you want to retain in the persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 65 to line 107. You can change the parameters marked "b" to "R" as per your requirements. The parameter details are provided as comments in lines 14 to 57. The parameter descriptions are as follows:
- a - Name of the NF to be tested in capital (UDR).
- b - Namespace in which the udr is deployed.
- c - Name of signalling UDR1_ingressgateway_service.namespace and port (ocudr-ingressgateway-sig.ocudr:80).
- d - Helm release name of UDR (ocudr)
- e - FQDN and Port UDR's nudr-config in the format fqdn:port (ocudr-nudr-config.ocudr:5001)
- f - Name_of_Prometheus_service.namespace and port(occne-prometheus-server.occne-infra:80).
- g - If this option is not provided, Prometheus URI shall be /api/v1 (Default: /prometheus/api/v1) (if CNE 1.9+ /<cne-cluster-name>/prometheus/api/v1)
- h - Number of times the re-run of failed case is allowed (default as 2).
- i - Name of Kubernetes Host server and port (kubernetes.default:443)
- j - diameter endpoint fqdn and port in the form fqdn:port (ocudr-nudr-diam-gateway.ocudr:3868)
- k - fqdn or IP of http2-server in the format <ip/fqdn> (fqdn) (notify-stub-service.ocudr:80)
- l - fqdn or IP for primary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server.ocudr:8080)
- m - fqdn or IP for secondary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server2.ocudr:8080)
- n - primary stub deployment name (nrfstub-ocstub-py)
- o - secondary stub deployment name (nrfstub2-ocstub-py)
- p - HTTPS port for UDR (443)
- q - root certificate authority (CA) (caroot.cer)
- r - Server certificate signed with root CA private key (apigatewayrsa.cer)
- s - private key to .pem format (rsa_private_key_pkcs1.pem)
- t - fqdn and port for 2nd diameter tool endpoint
- u - MYSQL SERVICE ENDPOINT
- v - NOTIFY STUB deployment name
- w - Name of signaling in the format fqdn:port (ocudr-s2-ingressgateway-sig.ocudr-s2:80)
- x - fqdn and port for UDR2 nudr-config (ocudr2-nudr-config.ocudr:5001)
- y - fqdn and port for UDR2 DIAMETER_ENDPOINT (fqdn:port) (ocudr-s2-nudr-diam-gateway.ocudr-s2:3868)
- z - FQDN and port for SCP stub (scp-stub-service:8080)
- A - Deployment name for SCP stub (scpstub-notify-ocstub-py)
- B - Bulk import helm name
- C - ProvGw namespace (ocudr)
- D - ProvGw endpoint in format fqdn:port (provgw-prov-ingressgateway.ocudr:80)
- E - ProvGw endpoint https port(443)
- F - ProvGw helm name
- G - ProvGw config host in the format fqdn:port(provgw-provgw-config.ocudr:5001)
- H - true if OCCNE is 1.9.x and above, else false (true)
- I - fqdn and port for fourg stub in the form (fqdn:port)
- J - fqdn and port for 1st diameter tool endpoint
- K - fqdn and port for notify stub for TLS scenarios (tlsnotify-stub-service.ocudr:8443)
- L - Service name of UDR1 provisioning ingressgateway in the format FQDN:PORT(ocudr-ingressgateway-prov.ocudr:80)
- M - Service name of site2 UDR provisioning ingressgateway in the format FQDN:PORT(ocudr2-ingressgateway-prov.ocudr-s2:80)
- N - Diameter Stub Tool1 Deploy Name (diamtoola-diam-stub)
- O - Diameter Stub Tool2 Deploy Name (diamtoolb-diam-stub)
- P - Migration tool Helm release name (migration-tool)
- Q - Deployment name of the TLS notify stub (tlsnotifystub-ocstub-py)
- R - Enable error response logging validation (true)
Figure 4-105 UDR NewFeatures Pipeline Configuration
- Click Save after making necessary changes. The Pipeline UDR-NewFeatures page appears.
- Click Build with Parameters. The page lists
all the newly added features and displays the total number of features and the
scenarios in each feature, name of each feature, and scenarios in each
feature.
- Select the Configuration_Type. For more information, see Parameterization.
- Choose the Select_Option(s), which
are:
- All: By default, it selects all the UDR test cases to run. Scroll-down and click Build to run all the test cases.
- Single/MultipleFeatures: This option allows you to select any number of test cases you want to run from the list of total test cases available for execution.
- After selecting the test cases, scroll-down and click Build to run the selected UDR test cases.
4.7.7.2 Running UDR Regression Pipeline
To run the UDR-Regression pipeline test cases:
Running UDR Regression Pipeline: Model 1: Single Site Deployment
- Click UDR-Regression and then, click
Configure.
Figure 4-106 UDR Regression Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Select the Discard old Builds. This option
allows you to configure the number of builds you want to retain in the
persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration section as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 68 to line 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in lines 15 to 60. The parameter descriptions are as follows:
Figure 4-107 UDR Regression Pipeline Configuration
- a - Name of the NF to be tested in capital (UDR).
- b - Namespace in which the udr is deployed.
- c - Name of signalling UDR1_ingressgateway_service.namespace and port (ocudr-ingressgateway-sig.ocudr:80).
- d - Helm release name of UDR (ocudr)
- e - FQDN and Port UDR's nudr-config in the format fqdn:port (ocudr-nudr-config.ocudr:5001)
- f - Name_of_Prometheus_service.namespace and port(occne-prometheus-server.occne-infra:80).
- g - If this option is not provided, Prometheus URI shall be /api/v1 (Default: /prometheus/api/v1) (if CNE 1.9+ /<cne-cluster-name>/prometheus/api/v1)
- h - Number of times the re-run of failed case is allowed (default as 1).
- i - Name and port of Kubernetes Host server in the format FQDN:PORT(kubernetes.default:443).
- j - fqdn and port for DIAMETER_ENDPOINT in the format FQDN:PORT (ocudr-nudr-diam-gateway:3868)
- k - fqdn or IP of http2-server in the format <ip/fqdn> (fqdn) (notify-stub-service.ocudr:80)
- l - fqdn or IP for primary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server.ocudr:8080)
- m - fqdn or IP for secondary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server2.ocudr:8080)
- n - primary stub deployment name (nrfstub-ocstub-py)
- o - secondary stub deployment name (nrfstub2-ocstub-py)
- p - HTTPS port for UDR (443)
- q - root certificate authority (CA) (caroot.cer)
- r - Server certificate signed with root CA private key (apigatewayrsa.cer)
- s - private key to .pem format (rsa_private_key_pkcs1.pem)
- t - fqdn and port for 2nd diameter tool endpoint
- u - MYSQL SERVICE ENDPOINT
- v - NOTIFY STUB deployment name
- w - Name of signaling in the format fqdn:port (ocudr-s2-ingressgateway-sig.ocudr-s2:80)
- x - fqdn and port for UDR2 nudr-config (ocudr2-nudr-config.ocudr:5001)
- y - fqdn and port for UDR2 DIAMETER_ENDPOINT (fqdn:port) (ocudr-s2-nudr-diam-gateway.ocudr-s2:3868)
- z - FQDN and port for SCP stub (scp-stub-service:8080)
- A - Deployment name for SCP stub (scpstub-notify-ocstub-py)
- B - Bulk import helm name
- C - ProvGw namespace (ocudr)
- D - ProvGw endpoint in format fqdn:port (provgw-prov-ingressgateway.ocudr:80)
- E - ProvGw endpoint https port(443)
- F - ProvGw helm name
- G - ProvGw config host in the format fqdn:port(provgw-provgw-config.ocudr:5001)
- H - true if OCCNE is 1.9.x and above, else false (true)
- I - fqdn and port for fourg stub in the form (fqdn:port)
- J - fqdn and port for 1st diameter tool endpoint
- K - port for notify stub for TLS scenarios (8443)
- L - Service name of UDR1 provisioning ingressgateway in the format FQDN:PORT(ocudr-ingressgateway-prov.ocudr:80)
- M - Service name of site2 UDR provisioning ingressgateway in the format FQDN:PORT(ocudr2-ingressgateway-prov.ocudr-s2:80)
- N - Diameter Stub Tool1 Deploy Name (diamtoola-diam-stub)
- O - Diameter Stub Tool2 Deploy Name (diamtoolb-diam-stub)
- P - Migration tool helm release name (migration-tool)
- Q - Deployment name of the TLS notify stub (tlsnotifystub-ocstub-py)
- R - Enable error response logging validation (true)
- S - true (when the underlying CNE has support for CNLB Multus) to validate the Traffic segregation feature, else false
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default: false)
- Click Save. The Pipeline UDR-Regression page appears.
- Click Build with Parameters. A page appears listing all the regression features and displaying the total count of the features, the count of scenarios in each feature, the name of each feature, and the scenarios in each feature.
Figure 4-108 UDR Regression Pipeline Site Deployment
- Click Build to run the test cases.
Figure 4-109 Consolidated Test Report
Note:
- For information about Configuration_Type, see Parameterization.
- The SUTSuite option selected as 'ALL' runs all suites except the multiple site scenarios.
A sample stage and group listing, with the scenarios run in each group, is as follows:
Stage:Stage_2 Group:no_igw_dr_metrics
UDR_Error_Response_log_validation.feature:299 UDR_Error_Response_log_Ondemand_05
UDR_Suppressed_notification_to_the_initiator.feature:255 UDR_Suppressed_notification_to_the_initiator_05
Running UDR Regression Pipeline: Model 2: Multiple Site Deployment
- Click UDR-Regression and then, click
Configure.
Figure 4-110 UDR Regression Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Select the Discard old Builds. This option
allows you to configure the number of builds you want to retain in the
persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration section as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 68 to line 112. You can change the parameters marked "b" to "T" as per your requirements. The parameter details are provided as comments in lines 15 to 60. The parameter descriptions are as follows (a service lookup sketch for the site 2 endpoints follows this list):
- a - Name of the NF to be tested in capital (UDR).
- b - Namespace in which the udr is deployed.
- c - Name of signalling UDR1_ingressgateway_service.namespace and port (ocudr-ingressgateway-sig.ocudr:80).
- d - Helm release name of UDR (ocudr)
- e - FQDN and Port UDR's nudr-config in the format fqdn:port (ocudr-nudr-config.ocudr:5001)
- f - Name_of_Prometheus_service.namespace and port(occne-prometheus-server.occne-infra:80).
- g - If this option is not provided, Prometheus URI shall be /api/v1 (Default: /prometheus/api/v1) (if CNE 1.9+ /<cne-cluster-name>/prometheus/api/v1)
- h - Number of times the re-run of failed case is allowed (default as 1).
- i - Name and port of Kubernetes Host server in the format FQDN:PORT(kubernetes.default:443).
- j - fqdn and port for DIAMETER_ENDPOINT in the format FQDN:PORT (ocudr-nudr-diam-gateway:3868)
- k - fqdn or IP of http2-server in the format <ip/fqdn> (fqdn) (notify-stub-service.ocudr:80)
- l - fqdn or IP for primary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server.ocudr:8080)
- m - fqdn or IP for secondary nrf stub and port in the format <ip/fqdn>:<port> (nrf-stub-server2.ocudr:8080)
- n - primary stub deployment name (nrfstub-ocstub-py)
- o - secondary stub deployment name (nrfstub2-ocstub-py)
- p - HTTPS port for UDR (443)
- q - root certificate authority (CA) (caroot.cer)
- r - Server certificate signed with root CA private key (apigatewayrsa.cer)
- s - private key to .pem format (rsa_private_key_pkcs1.pem)
- t - fqdn and port for 2nd diameter tool endpoint
- u - MYSQL SERVICE ENDPOINT
- v - NOTIFY STUB deployment name
- w - Name of signaling in the format fqdn:port (ocudr-s2-ingressgateway-sig.ocudr-s2:80)
- x - fqdn and port for UDR2 nudr-config (ocudr2-nudr-config.ocudr:5001)
- y - fqdn and port for UDR2 DIAMETER_ENDPOINT (fqdn:port) (ocudr-s2-nudr-diam-gateway.ocudr-s2:3868)
- z - FQDN and port for SCP stub (scp-stub-service:8080)
- A - Deployment name for SCP stub (scpstub-notify-ocstub-py)
- B - Bulk import helm name
- C - ProvGw namespace (ocudr)
- D - ProvGw endpoint in format fqdn:port (provgw-prov-ingressgateway.ocudr:80)
- E - ProvGw endpoint https port(443)
- F - ProvGw helm name
- G - ProvGw config host in the format fqdn:port(provgw-provgw-config.ocudr:5001)
- H - true if OCCNE is 1.9.x and above, else false (true)
- I - fqdn and port for fourg stub in the form (fqdn:port)
- J - fqdn and port for 1st diameter tool endpoint
- K - port for notify stub for TLS scenarios (8443)
- L - Service name of UDR1 provisioning ingressgateway in the format FQDN:PORT(ocudr-ingressgateway-prov.ocudr:80)
- M - Service name of site2 UDR provisioning ingressgateway in the format FQDN:PORT(ocudr2-ingressgateway-prov.ocudr-s2:80)
- N - Diameter Stub Tool1 Deploy Name (diamtoola-diam-stub)
- O - Diameter Stub Tool2 Deploy Name (diamtoolb-diam-stub)
- P - Migration tool helm release name (migration-tool)
- Q - Deployment name of the TLS notify stub (tlsnotifystub-ocstub-py)
- R - Enable error response logging validation (true)
- S - true (when the underlying CNE has support for CNLB Multus) to validate the Traffic segregation feature, else false
- T - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default: false)
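Before saving, you can optionally confirm that the site 2 services referenced in options 'w', 'x', 'y', and 'M' exist. The following is a sketch, assuming the site 2 services run in the namespace ocudr-s2 as in the default values shown above:
kubectl get svc -n ocudr-s2 | grep -E 'ingressgateway|nudr'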
- Click Save. The Pipeline UDR-Regression page appears.
- Click Build with Parameters. A page appears listing all the regression features and displaying the total count of the features, the count of scenarios in each feature, the name of each feature, and the scenarios in each feature.
Figure 4-111 UDR Regression Pipeline Multiple Site Deployment
- Click Build to run the test cases.
Note:
- For information about Configuration_Type, see Parameterization.
- The SUTSuite option selected as 'MULTISITE' runs the multiple site scenarios only.
Figure 4-112 Consolidated Test Report
4.7.7.3 Running UDR HealthCheck Pipeline
- Navigate to UDR-HealthCheck pipeline and
click Configure.
Figure 4-113 UDR-HealthCheck Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section
as follows:
Figure 4-114 UDR-HealthCheck Configuration
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than line numbers 17 and 18. You can change the parameters marked "a", "b", and "c" as per your requirements. The parameter details are provided as comments in lines 7 and 9. The parameter descriptions are as follows (a sketch for listing the Helm releases follows this list):
- a - Helm releases (provide comma-separated release names if more than one)
- b - namespace of SUT
- c - helm command alias (helm or helm2 or helm3)
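If the release names for option 'a' are not known, they can be listed from the SUT namespace. The following is a sketch, assuming the namespace ocudr and that the helm alias from option 'c' is helm:
helm list -n ocudr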
- Click Save. The Pipeline UDR-HealthCheck page appears.
- Click Build Now. This triggers health check
for SUT.
Figure 4-115 UDR-HealthCheck Build Now
Figure 4-116 Sample Output

4.7.7.4 UDR NewFeatures Documentation
Figure 4-117 UDR-NewFeatures Documentation

UDR new features list along with its description is as follows:
Table 4-15 UDR New Features List
| Feature | Description |
| --- | --- |
| UDR_Subscriber_Policy_Data_Export.feature | This feature file validates the policy subscriber export feature in CSV format. |
| UDR_PNR_Multisite.feature | This feature file validates the PNR scenarios on the Multisite. |
| UDR_Bug_Fix_25_1_100.feature | This feature file validates the bugs raised for 25.1.100 release. |
| UDR_Default_Headers.feature | This feature file validates multiple default headers received on the ingress gateway signaling and provisioning. |
Click any feature to view its description.
4.7.8 Running EIR Pipelines
This section describes how to run EIR related pipelines. It includes:
4.7.8.1 Running EIR NewFeatures Pipeline
To run the EIR-NewFeatures pipeline test cases:
- Click EIR-NewFeatures. Click
Configure in the left navigation pane. The
General tab appears. The user must wait for the page to
load completely.
Figure 4-119 EIR NewFeatures Pipeline
- Once the page loads completely, select the Discard old
Builds. This option allows you to configure the number of builds you
want to retain in the persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
- Click the Advanced Project Options tab. Scroll-down to reach the Pipeline configuration as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 54 to line 87. You can change the parameters marked "b" to "I" as per your requirements. The parameter details are provided as comments in lines 12 to 46. The parameter descriptions are as follows (a stub service lookup sketch follows the note after this list):
- a - Selected NF.
- b - NameSpace in which UDR is Deployed.
- c - EIR signaling FQDN and PORT in the format FQDN:PORT.
- d - EIR provisioning FQDN and PORT in the format FQDN:PORT.
- e - EIR config service FQDN and PORT in the format FQDN:PORT.
- f - FQDN:PORT format for primary nrf-stub.
- g - FQDN:PORT format for secondary nrf-stub.
- h - FQDN and PORT of Kubernetes Host server in the format FQDN:PORT.
- i - Helm name of EIR.
- j - primary nrf-stub deployment name.
- k - secondary nrf-stub deployment name.
- l - primary nrf-stub metric port
- m - secondary nrf-stub metric port
- n - bulk import Helm name
- o - Number of retries to check if pod down is successful.
- p - Number of retries to check if pod up is successful.
- q - Number of retries when ATS detects tcp connection failures.
- r - Time interval in seconds between tcp connection retries.
- s - re-run count.
- t - FQDN and PORT of Prometheus server in the format FQDN:PORT.
- u - prometheus uri apiName and apiVersion.
- v - EIR HTTPS port.
- w - root certificate authority for signalling ingressgateway.
- x - Server certificate signed with root CA private key for signalling ingressgateway.
- y - private key in .pem format for signalling ingressgateway.
- z - root certificate authority for provisioning ingressgateway.
- A - Server certificate signed with root CA private key for provisioning ingressgateway.
- B - Private key in .pem format for provisioning ingressgateway.
- C - True if HA prometheus else false (default: true).
- D - Enable error response logging validation (true).
- E - fqdn and port for diameter gateway in the format FQDN:PORT (oceir-nudr-diam-gateway:3868).
- F - fqdn and port for diam-stub endpoint (diam-stub-service:3000).
- G - Diameter Stub Tool1 Deploy Name (diamtoola-diam-stub).
- H - true (when the underlying CNE has support for CNLB Multus) to validate the traffic segregation feature, else false (Default: false)
- I - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default: false)
Figure 4-120 EIR NewFeatures Pipeline Configuration
Note:
Setting the rerun count, the number of retries to check pod status, and segment details:
- Rerun count can be set to any desired value by changing option 's' in the pipeline script.
- Number of retries to check if pod is down can be set by providing option 'o' in the pipeline script.
- Number of retries to check if pod is up can be set by providing option 'p' in the pipeline script.
- If ProvGw is deployed with segment details having UDR fqdn and port, provide option 'l' as port.
- Option 'u' takes the value '/<domain name>/prometheus/api/v1' for CNE version 1.9 and above, or OSO version 1.10.x and above. If the domain name is not known, run the following command from the ATS pod to find it:
curl http://<prometheus fqdn>:<port>/
Example:
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/
<a href="/udr-cne/prometheus">Found</a>
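Before saving, you can optionally confirm that the stub services referenced in options 'f', 'g', and 'F' resolve inside the cluster. The following is a sketch, assuming the stubs are deployed in the namespace oceir (replace with the namespace from option 'b') and that their service names contain "stub":
kubectl get svc -n oceir | grep -i stub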
- Click Save after making necessary changes. The
Pipeline EIR-NewFeatures page appears.
Figure 4-121 EIR Build with Parameters
- Click Build with Parameters. The page lists all the newly added features and displays the total count of features, counts of scenarios in each feature, name of each feature, and scenarios in each feature.
- To view consolidated and detailed stack trace results in case of any
failures, click Test Results Analyzer in the left navigation pane. The test results
analyzer report appears. For more information, see Test Results Analyzer.
Figure 4-122 EIR-NewFeatures Pipeline Consolidated Automation Suite Test Report
For more details on consolidated test report, see Final Summary Report, Build Color and Application Logs.
4.7.8.2 Running EIR Regression Pipeline
- Click EIR-Regression and then, click
Configure.
Figure 4-123 EIR Regression Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Select the Discard old Builds. This option
allows you to configure the number of builds you want to retain in the
persistent volume.
Note:
It is recommended to configure this option. If there is a large number of builds, the Persistent Volume may be utilized completely.
Figure 4-124 Discard Builds
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section
as follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than those from line 48 to line 76. You can change the parameters marked "b" to "D" as per your requirements. The parameter details are provided as comments in lines 11 to 40. The parameter descriptions are as follows:
- a - Selected NF
- b - NameSpace in which UDR is Deployed
- c - EIR signaling FQDN and PORT in the format FQDN:PORT
- d - EIR provisioning FQDN and PORT in the format FQDN:PORT
- e - EIR config service FQDN and PORT in the format FQDN:PORT
- f - FQDN:PORT format for primary nrf-stub
- g - FQDN:PORT format for secondary nrf-stub
- h - FQDN and PORT of Kubernetes Host server in the format FQDN:PORT
- i - helm name of EIR
- j - primary nrf-stub deployment name
- k - secondary nrf-stub deployment name
- l - primary nrf-stub metric port
- m - secondary nrf-stub metric port
- n - bulk import Helm name
- o - Number of retries to check if pod down is successful
- p - Number of retries to check if pod up is successful
- q - Number of retries when ATS detects tcp connection failures
- r - Time interval in seconds between tcp connection retries
- s - re-run count
- t - FQDN and PORT of Prometheus server in the format FQDN:PORT
- u - prometheus uri apiName and apiVersion
- v - EIR HTTPS port
- w - root certificate authority for signalling ingressgateway
- x - Server certificate signed with root CA private key for signalling ingressgateway
- y - private key in .pem format for signalling ingressgateway
- z - root certificate authority for provisioning ingressgateway
- A - Server certificate signed with root CA private key for provisioning ingressgateway
- B - Private key in .pem format for provisioning ingressgateway
- C - True if HA prometheus else false(default: true)
- D - Enable error response logging validation (true)
- E - FQDN and port for diameter gateway in the format FQDN:PORT (oceir-nudr-diam-gateway:3868)
- F - FQDN and port for diam-stub endpoint (diam-stub-service:3000)
- G - Diameter Stub Tool1 deploy name (diamtoola-diam-stub)
- H - true (when the underlying CNE has support for CNLB Multus) to validate the traffic segregation feature, else false (Default: false)
- I - true if underlying platform supports dual stack, else false. All dual stack scenarios will be marked as skipped if false (Default: false)
Figure 4-125 EIR Regression Configuration
Note:
Setting the rerun count, the number of retries to check pod status, and segment details:
- Rerun count can be set to any desired value by changing option 's' in the pipeline script.
- Number of retries to check if pod is down can be set by providing option 'o' in the pipeline script.
- Number of retries to check if pod is up can be set by providing option 'p' in the pipeline script.
- If ProvGw is deployed with segment details having UDR fqdn/IP:port, provide option 'l' as "IP". If ProvGw is deployed with segment details having only UDR fqdn/IP, provide option 'l' as "fqdn". If any value other than "IP" and "fqdn" is provided, ATS assumes that the segment details in ProvGw is in "IP" mode.
- Option 'u' takes the value '/<domain name>/prometheus/api/v1' for CNE version 1.9 or above, or OSO version 1.10.x or above. If the domain name is not known, run the following command from the ATS pod to find it:
curl http://<prometheus fqdn>:<port>/
Example:
curl http://occne-kube-prom-stack-kube-prometheus.occne-infra/
<a href="/udr-cne/prometheus">Found</a>
- Click Save. The Pipeline EIR-Regression page appears.
- Click Build with Parameters. A page appears listing all the regression features and displaying the total count of the features, the count of scenarios in each feature, the name of each feature, and the scenarios in each feature.
Figure 4-126 EIR Regression Build with Parameters
- A sample consolidated automation suite test report, when re-run is set to 0, is as follows:
Figure 4-127 EIR-Regression Consolidated Automation Suite Test Report
4.7.8.3 Running EIR HealthCheck Pipeline
To run health check on EIR:
- Navigate to EIR-HealthCheck pipeline and click
Configure.
Figure 4-128 EIR HealthCheck Pipeline
- The General tab appears. The user must wait for the page to load completely.
- Click the Advanced Project Options tab.
Scroll-down to reach the Pipeline configuration section as
follows:
Important:
Make sure the page loads completely before you perform any action on it. Do not modify any configuration except the one discussed below.
Do not change any value other than line numbers 16 and 17. You can change the parameters marked "a", "b", and "c" as per your requirements. The parameter details are provided as comments in lines 7 and 9. The parameter descriptions are as follows (a sketch for checking the available helm alias follows the figure below):
- a - Helm releases (provide comma-separated release names if more than one)
- b - namespace of SUT.
- c - helm command alias (helm, helm2, or helm3)
Figure 4-129 EIR-HealthCheck Pipeline configuration
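To determine which helm command alias to provide for option 'c', you can check which binaries are available on the host used by ATS. The following is a sketch; the alias names helm, helm2, and helm3 are the ones listed above, adjust to your environment:
command -v helm helm2 helm3
helm version --short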
- Click Save. The Pipeline EIR-HealthCheck page appears.
- Click Build Now. This triggers health check for
SUT.
Figure 4-130 EIR-HealthCheck Build Now
4.7.8.4 EIR NewFeatures Documentation
To view the documentation of EIR-NewFeatures, click the Documentation link in the left navigation pane (present inside the build), as follows:
Figure 4-131 EIR-New Features Documentation

The page lists all the features. Click any feature to view the description of that test case.
EIR new features list along with its description is as follows:
Table 4-16 EIR New Features List
| Feature | Description |
| --- | --- |
| NA | NA |
Note:
The Documentation option appears only if you have run the EIR-NewFeatures pipeline test cases at least once.
Based on the functionalities covered under Documentation, the Build Requires Parameters page displays test cases. To navigate back to the Pipeline EIR-NewFeatures page, click the Back to EIR-NewFeatures link available in the top left corner of the page.