3 Installing ATS for Different Network Functions

This section describes how to install ATS for different network functions.

3.1 Installing ATS for BSF

The BSF ATS installation procedure covers two steps:

  1. Locating and downloading the ATS package for BSF.
  2. Deploying ATS and stub pods in a Kubernetes cluster.

This includes the installation of three stubs (nf1stub, nf11stub, and nf12stub), the ocdns-bind stub, and BSF ATS in the BSF namespace.

3.1.1 Resource Requirements

This section describes the ATS resource requirements for Binding Support Function (BSF).

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs, memory, and storage:

Table 3-1 BSF - Total Number of Resources

Resource Name | Non-ASM CPU | Non-ASM Memory (GB) | ASM CPU | ASM Memory (GB)
BSF Total | 41 | 36 | 73 | 52
ATS Total | 11 | 11 | 23 | 17
cnDBTier Total | 107.1 | 175.2 | 137.1 | 190.2
Grand Total BSF ATS | 159.1 | 222.2 | 233.1 | 259.2

BSF Pods Resource Requirements Details

This section describes the BSF pod resource requirements needed to deploy BSF ATS successfully.

Table 3-2 BSF Pods Resource Requirements Details

BSF Microservices | Max CPU | Max Memory (GB) | Max Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
oc-app-info | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
oc-diam-gateway | 4 | 2 | 1 | 2 | 1 | 4 | 2 | 6 | 3
alternate-route | 2 | 4 | 1 | 2 | 1 | 2 | 4 | 4 | 5
oc-config-server | 4 | 2 | 1 | 2 | 1 | 4 | 2 | 6 | 3
ocegress_gateway | 4 | 6 | 1 | 2 | 1 | 4 | 6 | 6 | 7
ocingress_gateway | 4 | 6 | 1 | 2 | 1 | 4 | 6 | 6 | 7
nrf-client-mngt | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 6 | 4
oc-audit | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 4 | 2
oc-config-mgmt | 4 | 2 | 2 | 2 | 1 | 8 | 4 | 12 | 6
oc-query | 2 | 1 | 2 | 2 | 1 | 4 | 2 | 8 | 4
oc-perf-info | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 6 | 4
bsf-management-service | 4 | 4 | 1 | 2 | 1 | 4 | 4 | 6 | 5
BSF Totals | - | - | - | - | - | 41 | 36 | 73 | 52

ATS Resource Requirements Details for BSF

This section describes the ATS resource requirements needed to deploy BSF ATS successfully.

Table 3-3 ATS Resource Requirements Details

ATS Microservices | Max CPU | Max Memory (GB) | Max Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
ocstub1-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocstub2-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocstub3-py | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 4 | 3
ocats-bsf | 3 | 3 | 1 | 2 | 1 | 3 | 3 | 5 | 4
ocdns-bind | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
ocdiam-sim | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
ATS Totals | - | - | - | - | - | 11 | 11 | 23 | 17

cnDBTier Resource Requirements Details for BSF ATS

This section describes the cnDBTier resource requirements needed to deploy BSF ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes are required.

Table 3-4 cnDBTier Resource Requirements Details

cnDBTier Microservices | Min CPU | Min Memory (GB) | Min Replicas | Istio ASM CPU | Istio ASM Memory (GB) | Total CPU | Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB)
db_monitor_svc | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 2
db_replication_svc | 2 | 12 | 1 | 2 | 1 | 2 | 12 | 4 | 13
db_backup_manager_svc | 0.1 | 0.2 | 1 | 2 | 1 | 0.1 | 0.2 | 2.1 | 1.2
ndbappmysqld | 8 | 10 | 4 | 2 | 1 | 32 | 40 | 40 | 44
ndbmgmd | 4 | 10 | 2 | 2 | 1 | 8 | 20 | 12 | 22
ndbmtd | 10 | 18 | 4 | 2 | 1 | 40 | 72 | 48 | 76
ndbmysqld | 8 | 10 | 2 | 2 | 1 | 16 | 20 | 20 | 22
db_infra_monitor_svc | 8 | 10 | 1 | 2 | 1 | 8 | 10 | 8 | 10
cnDBTier Total | - | - | - | - | - | 107.1 | 175.2 | 137.1 | 190.2

3.1.2 Downloading the ATS Package

This section provides information on how to locate and download the BSF ATS package file from My Oracle Support (MOS).

Locating and Downloading BSF ATS Package

To locate and download the ATS Image from MOS, perform the following steps:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches and Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Binding Support Function <release_number> from Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the BSF ATS package file.
  10. Untar the gzip file ocats-bsf-tools-25.1.200.0.0.tgz to access the following files:
    ocats-bsf-pkg-25.1.200.0.0.tgz
    ocdns-pkg-25.1.203.tgz
    ocstub-pkg-25.1.201.tgz
    ocdiam-sim-25.1.203.tgz

    The contents of each of these files are as follows:

    ocats-bsf-tools-25.1.200.0.0.tgz
    |
    |___ ocats-bsf-pkg-25.1.200.tgz
    |    |___ ocats-bsf-25.1.200.tgz (Helm Charts)
    |    |___ ocats-bsf-images-25.1.200.tar (Docker Images)
    |    |___ ocats-bsf-data-25.1.200.tgz (BSF ATS and Jenkins Job Data)
    |
    |___ ocstub-pkg-25.1.201.0.0.tgz
    |    |___ ocstub-py-25.1.201.tgz (Helm Charts)
    |    |___ ocstub-py-image-25.1.201.tar (Docker Images)
    |
    |___ ocdns-pkg-25.1.203.0.0.tgz
    |    |___ ocdns-bind-25.1.203.tgz (Helm Charts)
    |    |___ ocdns-bind-image-25.1.203.tar (Docker Images)
    |
    |___ ocdiam-pkg-25.1.203.0.0.tgz
         |___ ocdiam-sim-25.1.203.tgz (Helm Charts)
         |___ ocdiam-sim-image-25.1.203.tar (Docker Images)
  11. Copy the tar file from the downloaded package to the CNE, OCI, or Kubernetes cluster where you want to deploy ATS.
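
    For example, from the download host (a minimal sketch; the Bastion host name and destination path are illustrative):

    scp ocats-bsf-tools-25.1.200.0.0.tgz cloud-user@<bastion_host>:/home/cloud-user/ocats-bsf/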

3.1.3 Deploy ATS with TLS Enabled

Note:

  • OCATS and the Python stubs support both TLS 1.2 and TLS 1.3.
  • The DiamSim pod does not support secure calls.

Follow the steps in this section to create a Java KeyStore (JKS) file and enable the BSF ATS GUI with HTTPS during installation.

3.1.3.1 Generate JKS File for Jenkins Server

To access the Jenkins ATS GUI through HTTPS, a JKS file must be created.

Perform the following steps to generate the JKS file:

Generate the Root Certificate

  1. If a Certificate Authority (CA) signed root certificate, such as caroot.cert, and its key are already available, use those files.
  2. If a root certificate is not available, generate a self-signed root certificate. This root certificate must be added to the truststore of a browser, such as Firefox or Chrome; follow the browser-specific documentation to upload it. The root certificate is used to sign the application, or ATS, certificate.
  3. Generate a root key with the following command:
    openssl genrsa 2048 > caroot.key

    This command generates a key file named caroot.key.

  4. Generate a caroot certificate with the following command:
    openssl req -new -x509 -nodes -days 1000 -key <root_key> > <root_certificate>

    For example,

    
    [cloud-user@platform-bastion-1]$ openssl req -new -x509 -nodes -days 1000 -key caroot.key > caroot.cer
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:IN
    State or Province Name (full name) []:KA
    Locality Name (eg, city) [Default City]:BLR
    Organization Name (eg, company) [Default Company Ltd]:ORACLE
    Organizational Unit Name (eg, section) []:CGBU
    Common Name (eg, your name or your server's hostname) []:ocats
    Email Address []:
    [cloud-user@platform-bastion-1]$
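
    Optionally, you can inspect the generated root certificate to confirm its subject and validity period (an illustrative openssl check, not part of the required procedure):

    openssl x509 -text -noout -in caroot.cer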

Generate Application or Client Certificate

  1. Create an ssl.conf file.
  2. Edit the ssl.conf file. In the "[alt_names]" section, list the IPs that are used to access the ATS GUI, as shown in the following sample ssl.conf file:
    
    [ req ]
    default_bits       = 4096
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext
     
    [ req_distinguished_name ]
    countryName                 = Country Name (2 letter code)
    countryName_default         = IN
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = KN
    localityName                = Locality Name (eg, city)
    localityName_default        = BLR
    organizationName            = Organization Name (eg, company)
    organizationName_default    = ORACLE
    commonName                  = Common Name (e.g. server FQDN or YOUR name)
    commonName_max              = 64
    commonName_default          = ocats.ocbsf.svc.cluster.local
     
    [ req_ext ]
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth
    basicConstraints = critical, CA:FALSE
    subjectAltName = critical, @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    IP.2 = 10.75.217.5
    IP.3 = 10.75.217.76
    DNS.1 = localhost
    DNS.2 = ocats.ocbsf.svc.cluster.local

    Note:

    • To access the GUI with DNS, ensure that commonName_default is the same as the DNS name being used.
    • Ensure that the DNS is in this format: <service_name>.<namespace>.svc.cluster.local

      Multiple DNS entries, such as DNS.1, DNS.2, and so on, can be added.

    • To support the ATS API, the IP 127.0.0.1 must be added to the list of IPs.
  3. Create a Certificate Signing Request (CSR) with the following command:
    openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Output:
    
    [cloud-user@platform-bastion-1 ocbsf]$ openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ...+++++
    ........+++++
    writing new private key to 'rsa_private_key_pkcs1.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KA]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    Common Name (e.g. server FQDN or YOUR name) [ocbsf]:
    [cloud-user@platform-bastion-1 ocbsf]$
  4. To display all the components of the CSR file and verify the configuration, run the following command:
    openssl req -text -noout -verify -in ssl_rsa_certificate.csr
  5. Sign the CSR file with the root certificate by running the following command:
    openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA ../caroot.cer -CAkey ../caroot.key -set_serial 04 > ssl_rsa_certificate.crt

    Output:

    
    [cloud-user@platform-bastion-1 ocbsf]$ openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA ../caroot.cer -CAkey ../caroot.key -set_serial 04 > ssl_rsa_certificate.crt
    Signature ok
    subject=C = IN, ST = KA, L = BLR, O = ORACLE, CN = ocbsf
    Getting CA Private Key
    [cloud-user@platform-bastion-1 ocbsf]$
  6. Verify that the certificate is signed by the root certificate by running the following command:
    openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt

    Output:

    
    [cloud-user@platform-bastion-1 ocbsf]$ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt
    ssl_rsa_certificate.crt: OK
  7. Save the generated application certificate and root certificate.
  8. Add the caroot.cer to the browser as a trusted authority.
  9. The generated application or client certificate cannot be given directly to the Jenkins server. Hence, generate the .p12 keystore file for the client certificate with the following command:
    
    [cloud-user@platform-bastion-1 ocbsf]$ openssl pkcs12 -inkey rsa_private_key_pkcs1.key -in ssl_rsa_certificate.crt -export -out certificate.p12
    Enter Export Password:
    Verifying - Enter Export Password:
  10. In the prompt, create a password and save it for future use.
  11. Convert the .p12 keystore file into a JKS format file using the following command:
    
    [cloud-user@platform-bastion-1 ocbsf]$ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS

    Output:

    
    [cloud-user@platform-bastion-1 ocbsf]$ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS
    Importing keystore ./certificate.p12 to jenkinsserver.jks...
    Enter destination keystore password:
    Re-enter new password:
    Enter source keystore password:
    Entry for alias 1 successfully imported.
    Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
  12. In the prompt, use the same password used while creating .p12 keystore file.

    Note:

    Ensure that the .p12 keystore and JKS files have the same password.
  13. Add the generated JKS file, jenkinsserver.jks, to the Jenkins path where the Jenkins server can access it.
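
    Optionally, you can verify the keystore contents before handing the file to Jenkins (an illustrative check; you are prompted for the keystore password created earlier):

    keytool -list -keystore jenkinsserver.jks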

For more details about the ATS TLS feature, refer to Deploy ATS with TLS Enabled section.

3.1.3.2 Enable TLS on Python Stubs
Before starting the creation of the certificate, ensure that the path details of the caroot.cer file are available.
  1. Create a stub_ssl.conf file.
  2. Edit the stub_ssl.conf file. Ensure that the DNS is in the format *.<namespace>.svc.
    A sample stub_ssl.conf file:
    
    [ req ]
    default_bits       = 4096
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext
     
    [ req_distinguished_name ]
    countryName                 = Country Name (2 letter code)
    countryName_default         = IN
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = KN
    localityName                = Locality Name (eg, city)
    localityName_default        = BLR
    organizationName            = Organization Name (eg, company)
    organizationName_default    = ORACLE
    commonName                  = svc.cluster.local
     
    [ req_ext ]
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth
    basicConstraints = critical, CA:FALSE
    subjectAltName = critical, @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    DNS.1 = *.ocats.svc
  3. Create a Certificate Signing Request (CSR) for the stubs using the following command:
    openssl req -config stub_ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_stub_pkcs2.key > stub_ssl_rsa_certificate1.csr

    Output:

    
    [cloud-user@platform-bastion-1 stub_certs]$ openssl req -config stub_ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_stub_pkcs2.key > stub_ssl_rsa_certificate1.csr
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ....................+++++
    ...+++++
    writing new private key to 'rsa_private_key_stub_pkcs2.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KN]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    svc.cluster.local []:*.ocbsf.svc
  4. Sign the certificate with the CA root using the following command:
    openssl x509 -extfile stub_ssl.conf -extensions req_ext -req -in stub_ssl_rsa_certificate1.csr -days 1000 -CA ../ocbsf-caroot.cer -CAkey ../ocbsf-caroot.key -set_serial 05 > stub_ssl_rsa_certificate1.crt

    Output:

    
    [cloud-user@platform-bastion-1 stub_certs]$ openssl x509 -extfile stub_ssl.conf -extensions req_ext -req -in stub_ssl_rsa_certificate1.csr -days 1000 -CA ../ocbsf-caroot.cer -CAkey ../ocbsf-caroot.key -set_serial 05 > stub_ssl_rsa_certificate1.crt
    Signature ok
    subject=C = IN, ST = KN, L = BLR, O = ORACLE, CN = *.ocbsf.svc
    Getting CA Private Key
  5. Create a secret for the stub and associate it with the namespace using the following command:
    kubectl create secret generic ocats-stub-secret1 --from-file=stub_ssl_rsa_certificate1.crt --from-file=rsa_private_key_stub_pkcs2.key --from-file=../ocbsf-caroot.cer -n ocbsf

    Output:

    
    [cloud-user@platform-bastion-1 ocbsf]$ kubectl create secret generic ocats-stub-secret1 --from-file=stub_ssl_rsa_certificate1.crt --from-file=rsa_private_key_stub_pkcs2.key --from-file=../ocbsf-caroot.cer -n ocbsf
    secret/ocats-stub-secret1 created
  6. Update the values.yaml file of each Python stub in the specific NF namespace with the following details:
    
    NF: "<NF-Name>"
    cert_secret_name: "ocats-stub-secret"
    ca_cert: "ocbsf-caroot.cer"
    client_cert: "ocbsf-stub_ssl_rsa_certificate.crt"
    private_key: "ocbsf-rsa_private_key_stub_pkcs1.key"
    expose_tls_service: true
    CLIENT_CERT_REQ: true

    Note:

    If the Helm parameter cert_secret_name is null, then the ca_cert, client_cert, and private_key values are not considered by TLS.
  7. Ensure that the deployments of all the Python stubs installed on the setup are updated; see the sketch below.
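
    If the stubs are already deployed, one way to roll out the updated values is a Helm upgrade per stub release (a minimal sketch, assuming the stub release names and namespace used elsewhere in this chapter):

    helm upgrade nf1stub ocstub-py -n ocbsf -f ocstub-py/values.yaml
    helm upgrade nf11stub ocstub-py -n ocbsf -f ocstub-py/values.yaml
    helm upgrade nf12stub ocstub-py -n ocbsf -f ocstub-py/values.yaml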

For more details about the ATS TLS feature, refer to Support for Transport Layer Security section.

3.1.3.3 Enable ATS GUI with HTTPS

Perform the following steps to secure the server by enabling TLS.

  1. Create a Kubernetes secret using the files created above:
    kubectl create secret generic ocats-tls-secret --from-file=jenkinsserver.jks --from-file=ssl_rsa_certificate.crt --from-file=rsa_private_key_pkcs1.key --from-file=caroot.cer -n ocbsf

    Where,

    jenkinsserver.jks: This file is needed when atsGuiTLSEnabled is set to true. It is necessary to open the ATS GUI with the secured TLS protocol.

    ssl_rsa_certificate.crt: This is the client application certificate.

    rsa_private_key_pkcs1.key: This is the RSA private key.

    caroot.cer: This file, used during the creation of the JKS file, must be passed for Jenkins/ATS API communication.

    A sample of the created secret:

    
    [cloud-user@platform-bastion-1 ~]$ kubectl describe secret ocats-tls-secret -n ocbsf
    Name:         ocats-tls-secret
    Namespace:    ocbsf
    Labels:       <none>
    Annotations:  <none>
     
    Type:  Opaque
     
    Data
    ====
    caroot.cer:                        1147 bytes
    ssl_rsa_certificate.crt:           1424 bytes
    jenkinsserver.jks:                 2357 bytes
    rsa_private_key_pkcs1.key:         1675 bytes
  2. Apply the following changes in the values.yaml file:
    
    certificates:
      cert_secret_name: "ocats-tls-secret"
      ca_cert: "caroot.cer"
      client_cert: "ssl_rsa_certificate.crt"
      private_key: "rsa_private_key_pkcs1.key"
      jks_file: "jenkinsserver.jks" # This parameter is needed when atsGuiTLSEnabled is set to true. This file is necessary for the ATS GUI to be opened with the secured TLS protocol.
      jks_password: "123456"  # This is the password given to the JKS file during creation.
    Install ATS using the helm install command. Change the atsGuiTLSEnabled Helm parameter value to true for ATS to get the certificates and support HTTPS for the GUI.
  3. Upload the caroot.cer file to the browser before accessing the GUI using the HTTPS protocol.

    For more details about uploading the file to the browser, refer to the Adding a Certificate in Browser section in Enable ATS GUI with HTTPS.

  4. You can now start ATS with the HTTPS protocol. The ATS GUI URL format is https://<IP>:<port>, for example, https://10.75.217.25:30301.

    The lock symbol in the browser indicates that the server is secured, that is, TLS is enabled.
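
    As an optional, illustrative check from the command line (the IP and port are placeholders), you can ask curl to validate the server certificate against the root certificate created earlier:

    curl --cacert caroot.cer https://10.75.217.25:30301/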

3.1.4 Pushing the Images to Customer Docker Registry

This section describes the pre-deployment steps for deploying ATS and stub pods.

Preparing to deploy ATS and Stub Pods in Kubernetes Cluster

To deploy ATS and Stub pods in a Kubernetes Cluster, perform the following steps:

  1. Run the following command to extract the tar file content:

    tar -zxvf ocats-bsf-tools-25.1.200.0.0.tgz

    The output of this command is:
    
    ocats-bsf-pkg-25.1.200.tgz
    ocstub-pkg-25.1.201.tgz
    ocdns-pkg-25.1.203.tgz
    ocdiam-pkg-25.1.203.0.0.tgz
  2. Go to the ocats-bsf-tools-25.1.200.0.0 folder and run the following command to extract the helm charts and docker images of ATS:

    tar -zxvf ocats-bsf-pkg-25.1.200.0.0.tgz

    The output of this command is:

    
    ocats-bsf-25.1.200.tgz
    ocats-bsf-images-25.1.200.tar
    ocats-bsf-data-25.1.200.tgz
  3. Run the following command in your cluster to load the ATS docker image:

    docker load --input ocats-bsf-images-25.1.200.tar

  4. Run the following commands to tag and push the ATS images:
    docker tag ocats-bsf:25.1.200 <registry>/ocats-bsf:25.1.200
    docker push <registry>/ocats-bsf:25.1.200

    Example:

    docker tag ocats-bsf:25.1.200 localhost:5000/ocats-bsf:25.1.200
    docker push localhost:5000/ocats-bsf:25.1.200
  5. Run the following command to untar the Helm charts in ocats-bsf-25.1.200.tgz:
    tar -zxvf ocats-bsf-25.1.200.tgz
  6. Update the registry name, image name, and tag in the ocats-bsf/values.yaml file as required. For this, update the image.repository and image.tag parameters in the ocats-bsf/values.yaml file (see the sketch after this list).
  7. In the ocats-bsf/values.yaml file, the atsFeatures parameter is configured to control ATS feature deliveries.
    
    atsFeatures:  ## DO NOT UPDATE this section without My Oracle Support team's support
      testCaseMapping: true               # To display test cases on the GUI along with features
      logging: true                       # To enable collection of application logs in case of failure
      lightWeightPerformance: false       # This feature is not implemented yet
      executionWithTagging: true          # To enable feature/scenario execution with tags
      scenarioSelection: false            # This feature is not implemented yet
      parallelTestCaseExecution: true     # To run ATS features in parallel
      parallelFrameworkChangesIntegrated: true # To run ATS features in parallel
      mergedExecution: false              # To execute the ATS Regression and NewFeatures pipelines together in a merged manner
      individualStageGroupSelection: false  # This feature is not implemented yet
      parameterization: true              # When set to false, the Configuration_Type parameter on the GUI is not available
      atsApi: true                        # To trigger ATS using the ATS API
      healthcheck: true                   # To enable or disable ATS Health Check
      atsGuiTLSEnabled: false             # To run the ATS GUI in HTTPS mode
      atsCommunicationTLSEnabled: false   # If set to true, ATS gets the necessary variables to communicate with the SUT, stubs, or other NFs with TLS enabled. Not required in an ASM environment.

    Note:

    It is recommended not to alter the atsFeatures flags.
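
    As referenced in step 6 above, a minimal sketch of the image settings in ocats-bsf/values.yaml (assuming the chart uses the image.repository and image.tag layout named in that step; the registry value is illustrative):

    image:
      repository: localhost:5000/ocats-bsf
      tag: 25.1.200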

3.1.5 Configuring ATS

3.1.5.1 Enabling Static Port
To enable a static port, in the ocats-bsf/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter. The following is a snippet of the service section in the yaml file:
service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: false
      staticNodePort: ""
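
For example, to pin the GUI service to a fixed node port (the port value is illustrative and must lie within your cluster's NodePort range):

service:
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: true
      staticNodePort: "30301"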
3.1.5.2 Enable Static API Node Port
To enable a static API node port, in the ocats-bsf/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter. The following is a snippet of the service section in the yaml file:

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    api:
      port: "5001"
      staticNodePortEnabled: false
      staticNodePort: ""
3.1.5.3 Service Account Requirements
To run BSF-ATS, use the following rules to create a service account:
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
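
As a minimal sketch (resource names are illustrative), these rules can be wrapped in a Role and bound to a service account as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-bsf-sa
  namespace: ocbsf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-bsf-role
  namespace: ocbsf
rules: []   # paste the rules listed above here
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-bsf-rolebinding
  namespace: ocbsf
subjects:
- kind: ServiceAccount
  name: ocats-bsf-sa
  namespace: ocbsf
roleRef:
  kind: Role
  name: ocats-bsf-role
  apiGroup: rbac.authorization.k8s.io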
3.1.5.4 Enabling Aspen Service Mesh

This section provides information on how to enable Aspen service mesh while deploying ATS for Binding Support Function. The configurations mentioned in this section are optional and should be performed only if ASM is required.

To enable service mesh for BSF ATS, perform the following steps:

  1. In the service section of the values.yaml file, the serviceMeshCheck parameter is set to false by default. To enable service mesh, set the value for serviceMeshCheck to true. The following is a snippet of the service section in the yaml file:
    service:
      customExtension:
        labels: {}
        annotations: {}
      type: LoadBalancer
      ports:
        https:
          port: "8443"
          staticNodePortEnabled: false
          staticNodePort: ""
        http:
          port: "8080"
          staticNodePortEnabled: false
          staticNodePort: ""
        api:
          port: "5001"
          staticNodePortEnabled: false
          staticNodePort: ""
      serviceMeshCheck: true
  2. If ASM is not enabled at the global level for the namespace, run the following command to enable it before deploying ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocbsf istio-injection=enabled
  3. Uncomment and add the following annotations under the lbDeployments and nonlbDeployments sections of the global section in the values.yaml file as follows:

    traffic.sidecar.istio.io/excludeInboundPorts: "9000"

    traffic.sidecar.istio.io/excludeOutboundPorts: "9000"

    The following is a snippet from the values.yaml of BSF:

    /home/cloud-user/ocats-bsf/ocats-bsf-tools-25.1.200.0.0/ocats-bsf-pkg-25.1.200.0.0/ocats-bsf/
    vim values.yaml

    customExtension:
      allResources:
        labels: {}
        annotations: {
        # Enable this section for service-mesh based installation
          traffic.sidecar.istio.io/excludeInboundPorts: "9000",
          traffic.sidecar.istio.io/excludeOutboundPorts: "9000"
          }
      lbDeployments:
        labels: {}
        annotations: {
          traffic.sidecar.istio.io/excludeInboundPorts: "9000",
          traffic.sidecar.istio.io/excludeOutboundPorts: "9000"}
  4. If service mesh is enabled, create a destination rule for fetching the metrics from Prometheus. In most deployments, Prometheus is kept outside the service mesh, so a destination rule is needed for communication between the TLS-enabled entity (ATS) and the non-TLS entity (Prometheus). You can create a destination rule using the following sample yaml file:
    kubectl apply -f - <<EOF
     
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: prometheus-dr
      namespace: ocats
    spec:
      host: oso-prometheus-server.pcf.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
    EOF
    In the destination rule:
    • name indicates the name of the destination rule.
    • namespace indicates the namespace where ATS is deployed.
    • host indicates the hostname of the Prometheus server.
  5. Update the ocbsf_custom_values_servicemesh_config_25.1.200.yaml file with the following additional configuration under the virtualService section for Egress Gateway:
    virtualService:
      - name: nrfvirtual1
        host: ocbsf-ocbsf-egress-gateway
        destinationhost: ocbsf-ocbsf-egress-gateway
        port: 8000
        exportTo: |-
          [ "." ]
        attempts: "0"

    Where,

    The host or destination name uses the format <release_name>-<egress_svc_name>.

    You must update the host or destination name as per the deployment.

  6. For the ServerHeader and SessionRetry features, perform the following configurations under envoyFilters for nf1stub, nf11stub, and nf12stub in the ocbsf-servicemesh-config-custom-values-25.1.200.yaml file:

    Note:

    The occnp_custom_values_servicemesh_config yaml file and Helm chart version names differ based on the deployed BSF NF version, for example, "occnp_custom_values_servicemesh_config_24.3.0.yaml" or "occnp_custom_values_servicemesh_config_24.3.1.yaml".
    envoyFilters:
      - name: serverheaderfilter-nf1stub
        labelselector: "app: nf1stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf11stub
        labelselector: "app: nf11stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf12stub
        labelselector: "app: nf12stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
  7. Perform helm upgrade on the ocbsf-servicemesh-config release using the modified ocbsf_custom_values_servicemesh_config_25.1.200.yaml file.
    helm upgrade <helm_release_name_for_servicemesh> -n <namespace> <servicemesh_charts> -f <servicemesh-custom.yaml>
    For example,
    helm upgrade ocbsf-servicemesh-config ocbsf-servicemesh-config-25.1.200.tgz -n ocbsf -f ocbsf_custom_values_servicemesh_config_25.1.200.yaml
  8. Configure DNS for Alternate Route service. For more information, see Post-Installation Steps.
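
After these steps, sidecar injection can be spot-checked by listing the pods in the namespace (an illustrative check):

kubectl get po -n ocbsf
# With injection enabled, READY shows 2/2 for pods carrying the Istio sidecar.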
3.1.5.5 Enabling Health Check

This section describes how to enable Health Check for ATS.

To enable Health Check, in the ocats-bsf/values.yaml file, set the value of the healthcheck parameter to true and enter a valid value to select either the Webscale or OCCNE environment.

To select the OCCNE environment, set envtype to OCCNE and update the values of the following parameters:
  • Webscale - Update the value as false
  • envtype - T0NDTkU= (that is, envtype=$(echo -n 'OCCNE' | base64))
  • occnehostip - OCCNE Host IP address
  • occnehostusername - OCCNE Host Username
  • occnehostpassword - OCCNE Host Password

After the configurations are done, encrypt the parameters and provide the values as shown in the following snippet:


atsFeatures:  ## DO NOT UPDATE this section without the Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check
 
sshDetails:
  secretname: "healthchecksecret"
  envtype: "T0NDTkU="
  occnehostip: "MTAuMTcuMjE5LjY1"
  occnehostusername: "dXNlcm5hbWU"
  occnehostpassword: "KioqKg=="
To select the WEBSCALE environment, update the values of the following two parameters:
  • Webscale - Update the value as true
  • envtype - V0VCU0NBTEU= (that is, envtype=$(echo -n 'WEBSCALE' | base64))

After the configurations are done, encrypt the parameters and provide the values as shown in the following snippet:


atsFeatures:  ## DO NOT UPDATE this section without the Engineering team's permission
  healthcheck: true                   # To enable or disable ATS Health Check
 
sshDetails:
  secretname: "healthchecksecret"
  envtype: "V0VCU0NBTEU="
  webscalejumpip: "MTAuNzAuMTE3LjQy"
  webscalejumpusername: "dXNlcm5hbWU="
  webscalejumppassword: "KioqKg=="
  webscaleprojectname: "KioqKg=="
  webscalelabserverFQDN: "KioqKg=="
  webscalelabserverport: "KioqKg=="
  webscalelabserverusername: "KioqKg=="
  webscalelabserverpassword: "KioqKg=="
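
The encoded values above can be produced with base64, for example (the IP address shown is illustrative):

echo -n 'WEBSCALE' | base64        # V0VCU0NBTEU=
echo -n '10.70.117.42' | base64    # MTAuNzAuMTE3LjQy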

Note:

Once ATS is deployed with the Health Check feature enabled or disabled, the setting cannot be changed. To change the configuration, you must reinstall ATS.
3.1.5.6 Enabling Persistent Volume

Note:

The steps provided in this section are optional and required only if Persistent Volume needs to be enabled.

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations. With this enhancement, users can decide whether to use a persistent volume based on their resource requirements. By default, the persistent volume feature is not enabled.

To enable persistent storage, perform the following steps:
  1. Create a PVC using PersistentVolumeClaim.yaml file and associate the same to the ATS pod.
    Sample PersistentVolumeClaim.yaml file:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <Enter the PVC Name>
      annotations:
    spec:
      storageClassName: <Provide the Storage Class Name>
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: <Provide the size of the PV>
    1. Set name to the PVC name.
    2. Set storageClassName to the storage class name.
    3. Set storage to the size of the persistent volume.
      Sample PVC configuration:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: bsf-pvc-25.1.200
        annotations:
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  2. Run the following command to create PVC:
    kubectl apply -f <filename> -n <namespace>

    For example:

    kubectl apply -f PersistentVolumeClaim.yaml -n ocbsf

    Output:

    persistentvolumeclaim/bsf-pvc-25.1.200 created
  3. Once the PVC is created, run the following command to verify that it is bound to the namespace and is available.
    kubectl get pvc -n <namespace used for pvc creation>

    For example:

    kubectl get pvc -n ocbsf

    Sample output:

    NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    bsf-pvc-25.1.200   Bound    pvc-65484045-3805-4064-9fc3-f9eeeaccc8b8   1Gi        RWO            standard       11s

    Verify that the STATUS is Bound and that the rest of the parameters, such as NAME, CAPACITY, ACCESS MODES, and STORAGECLASS, are as mentioned in the PersistentVolumeClaim.yaml file.

    Note:

    If there is an issue with the PV creation, do not proceed with the next step; contact your administrator to get the PV created.

  4. Enable PVC:
    1. Set the PVEnabled flag to true.
    2. Set PVClaimName to the name of the PVC created in Step 1.
      
        PVEnabled: true
        PVClaimName: "bsf-pvc-25.1.200"
        

    Note:

    Make sure that ATS is deployed before proceeding to the further steps.
  5. Copy the <nf_main_folder> and <jenkins jobs> folders from the tar file to the ATS pod and restart the pod.
    1. Extract the tar file.
      tar -xvf ocats-bsf-data-25.1.200.tgz
    2. Run the following commands to copy the desired folder.
      kubectl cp ocats-bsf-data-25.1.200/ocbsf_tests <namespace>/<pod-name>:/var/lib/jenkins/
      kubectl cp ocats-bsf-data-25.1.200/jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins/
    3. Restart the pod.
      kubectl delete po <pod-name> -n <namespace>
  6. Once the pod is up and running, log in to the Jenkins console and configure the Discard old Builds option to set the number of Jenkins builds that must be retained in the persistent volume.

    Figure 3-1 Discarding Old Builds



    Note:

    If Discard old Builds is not configured, the Persistent Volume can fill up when there is a huge number of builds.

For more details on Persistent Volume Storage, see Persistent Volume for 5G ATS.

3.1.5.7 ATS-BSF API Extended Support

The ATS application programming interface (API) feature provides APIs to perform routine ATS tasks, such as starting, monitoring, and stopping the ATS suite.

By default, this feature is enabled in the values.yaml file:

  atsFeatures:
    atsApi: true

For more details about the ATS API feature, refer to ATS API section.

This ATS feature is extended to provide the ability to run individual features, scenarios, stages, or groups, or to run executions based on tags, using the API. It also allows running test cases by providing the features, scenarios, stages, groups, or tags in the curl request to the server.

For more details about the API interfaces, refer to Use the RESTful Interfaces section.

3.1.6 Deploying ATS and Pods

3.1.6.1 Deploying ATS in Kubernetes Cluster

Important:

This procedure is for backward porting purposes only and should not be considered the subsequent release pod deployment procedure.

Prerequisite: Make sure that the old PVC, which contains the old release pod data, is available.

To deploy ATS, perform the following steps:

  1. Run the following command to deploy ATS using the updated helm charts:

    Note:

    Ensure that all the components, that is, ATS, stub pods, and CNC BSF, are deployed in the same namespace.

    Using Helm

    helm install -name <release_name> ocats-bsf-25.1.200.tgz --namespace <namespace_name> -f <values-yaml-file>

    For example:

    helm install -name ocats ocats-bsf-25.1.200.tgz --namespace ocbsf -f ocats-bsf/values.yaml
  2. Run the following command to verify ATS deployment:
    helm ls -n ocbsf
    The output of the command is as follows:
    
    NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
    ocats                   1               Mon Nov 14 14:56:11 2020        DEPLOYED        ocats-bsf-25.1.200      25.1.200.0.0       ocbsf
    If the deployment is successful, the status is Deployed.
3.1.6.2 Deploying Stub Pod in Kubernetes Cluster
To deploy Stub Pod in Kubernetes cluster, perform the following steps:
  1. Navigate to ocats-bsf-tools-25.1.200.0.0 folder and run the following command:

    tar -zxvf ocstub-pkg-25.1.201.0.0.tgz
    The output of the command shows:
    • ocstub-py-25.1.201.tgz
    • ocstub-py-image-25.1.201.tar
  2. Deploy the additional stubs required to validate the session retry feature.

    You can use nf11stub or nf12stub as an alternate FQDN for nf1stub.

    1. Run the following command to load the stub image.
      docker load --input ocstub-py-image-25.1.201.tar
    2. Tag and push the image to your docker registry using the following commands:
      
      docker tag ocstub-py:25.1.201 localhost:5000/ocstub-py:25.1.201
      docker push localhost:5000/ocstub-py:25.1.201
    3. Untar the Helm charts ocstub-py-25.1.201.tgz and update the registry name, image name, and tag (if required) in the ocstub-py/values.yaml file.
      tar -zxvf ocstub-py-25.1.201.0.0.tgz

      Note:

      From 24.2.0 onwards, service port names are configurable in ocstub-py. However, as per the Istio standard, it is advisable to keep the default values as they are.

      Example:

      
      names:
        http: "http"
        h2c: "http2-h2c"
        h2: "http2-h2"
    4. If required, change apiVersion to apps/v1 in ocstub-py/templates/deployment.yaml file.
      apiVersion: apps/v1

      Note:

      If support for the Predefined_priming feature is required, perform the following steps to configure it.

    5. Copy the ocstub-py/values.yaml file to a new file named pre_priming_values.yaml.

    6. Edit the ocstub-py/pre_priming_values.yaml file.
    7. Set the value of the preConfig flag to true and replace the default configuration with the following configurations under the predefined_prime_configuration section.

      Predefined_priming configuration:

      preConfig:
        enabled: true
        predefined_prime_configuration: |+
          [
            {
              "method": "GET",
              "statuscode": "200",
              "url": "/nnrf-nfm/v1/nf-instances/fe7d992b-0541-4c7d-ab84-c6d70b1b0666",
              "data": "{\"nfInstanceId\": \"fe7d992b-0541-4c7d-ab84-c6d70b1b0666\", \"nfType\": \"BSF\", \"nfStatus\": \"REGISTERED\", \"heartBeatTimer\": 2, \"fqdn\": \"ocbsf1-2-api-gateway.bsf1-2.svc.atlantic.morrisville.us.lab.oracle.com\", \"priority\": 1, \"capacity\": 1, \"load\": 2, \"bsfInfo\": {\"ipv4AddressRanges\": [{\"start\": \"10.0.0.1\", \"end\": \"10.113.255.255\"}], \"ipv6PrefixRanges\": [{\"start\": \"2800:a00:cc03::/64\", \"end\": \"2800:a00:cc04::/64\"}]}, \"nfServices\": [{\"serviceInstanceId\": \"03063893-cf9e-4f7a-9827-111111111111\", \"serviceName\": \"nbsf-management\", \"versions\": [{\"apiVersionInUri\": \"v1\", \"apiFullVersion\": \"1.0.0\", \"expiry\": \"2019-08-03T18:66:08.871+0000\"}], \"scheme\": \"http\", \"nfServiceStatus\": \"REGISTERED\", \"fqdn\": \"ocbsf1-2-api-gateway.bsf1-2.svc.atlantic.morrisville.us.lab.oracle.com\", \"interPlmnFqdn\": null, \"ipEndPoints\": [{\"ipv4Address\": \"10.233.22.149\", \"transport\": \"TCP\", \"port\": 80}], \"apiPrefix\": null, \"allowedNfTypes\": [\"PCF\", \"AF\", \"NEF\"], \"priority\": 1, \"capacity\": 1, \"load\": 2}]}",
              "headers": "{\"Content-Type\": \"application/json\"}"
            },
            {
              "method": "PUT",
              "statuscode": "201",
              "url": "/nnrf-nfm/v1/nf-instances/fe7d992b-0541-4c7d-ab84-c6d70b1b0666",
              "data": "{\"nfInstanceId\": \"fe7d992b-0541-4c7d-ab84-c6d70b1b0666\", \"nfType\": \"BSF\", \"nfStatus\": \"REGISTERED\", \"heartBeatTimer\": 30, \"fqdn\": \"ocbsf1-2-api-gateway.bsf1-2.svc.atlantic.morrisville.us.lab.oracle.com\", \"priority\": 1, \"capacity\": 1, \"load\": 2, \"bsfInfo\": {\"ipv4AddressRanges\": [{\"start\": \"10.0.0.1\", \"end\": \"10.113.255.255\"}], \"ipv6PrefixRanges\": [{\"start\": \"2800:a00:cc03::/64\", \"end\": \"2800:a00:cc04::/64\"}]}, \"nfServices\": [{\"serviceInstanceId\": \"03063893-cf9e-4f7a-9827-111111111111\", \"serviceName\": \"nbsf-management\", \"versions\": [{\"apiVersionInUri\": \"v1\", \"apiFullVersion\": \"1.0.0\", \"expiry\": \"2019-08-03T18:66:08.871+0000\"}], \"scheme\": \"http\", \"nfServiceStatus\": \"REGISTERED\", \"fqdn\": \"ocbsf1-2-api-gateway.bsf1-2.svc.atlantic.morrisville.us.lab.oracle.com\", \"interPlmnFqdn\": null, \"ipEndPoints\": [{\"ipv4Address\": \"10.233.22.149\", \"transport\": \"TCP\", \"port\": 80}], \"apiPrefix\": null, \"allowedNfTypes\": [\"PCF\", \"AF\", \"NEF\"], \"priority\": 1, \"capacity\": 1, \"load\": 2}]}",
              "headers": "{\"Content-Type\": \"application/json\"}"
            },
            {
              "method": "PATCH",
              "statuscode": "204",
              "url": "/nnrf-nfm/v1/nf-instances/fe7d992b-0541-4c7d-ab84-c6d70b1b0666",
              "data": "{}",
              "headers": "{\"Content-Type\": \"application/json\"}"
            },
            {
              "method": "POST",
              "statuscode": "201",
              "url": "/nnrf-nfm/v1/subscriptions",
              "data": "{\"nfStatusNotificationUri\": \"http://ocbsf-ocbsf-ingress-gateway.ocpcf.svc/nnrf-client/v1/notify\", \"reqNfType\": \"BSF\", \"subscriptionId\": \"2d77e0de-15a9-11ea-8c5b-b2ca002e6839\", \"validityTime\": \"2050-12-26T09:34:30.816Z\"}",
              "headers": "{\"Content-Type\": \"application/json\"}"
           }
          ]

      Note:

      • The predefined_prime_configuration contains variables such as nfInstanceId, nfType, and fqdn in the data's content. Make sure to verify and update the variables based on the payload message that must be included in the response from the NRF on a request.
      • The default value of nfInstanceId variable is fe7d992b-0541-4c7d-ab84-c6d70b1b0666.
    8. Deploy the stub:
      helm install -name <release_name> ocstub-py --set env.NF=<NF> --set env.LOG_LEVEL=<DEBUG/INFO> --set service.name=<service_name> --set service.appendReleaseName=false --namespace=<namespace_name> -f <values-yaml-file>

      Install nf1stub and nf11stub with updated ocstub-py/pre_priming_values.yaml file.

      helm install -name nf1stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/pre_priming_values.yaml
      
      helm install -name nf11stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf11stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/pre_priming_values.yaml

      Install nf12stub with default values.yaml.

      helm install -name nf12stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf12stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/values.yaml

      If the support for Predefined_priming feature is not required, helm installation must be performed using default values.yaml file.

      helm install -name <release_name> ocstub-py --set env.NF=<NF> --set env.LOG_LEVEL=<DEBUG/INFO> --set service.name=<service_name> --set service.appendReleaseName=false --namespace=<namespace_name> -f <values-yaml-file>

      For example,

      helm install -name nf1stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/values.yaml
      
      helm install -name nf11stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf11stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/values.yaml
      
      helm install -name nf12stub ocstub-py --set env.NF=BSF --set env.LOG_LEVEL=DEBUG --set service.name=nf12stub --set service.appendReleaseName=false  --namespace=ocbsf -f ocstub-py/values.yaml
    9. Run the following command to verify the stub deployment:
      helm ls -n ocbsf

      Sample output:

      NAME                    REVISION             UPDATED                  STATUS          CHART                   APP VERSION              NAMESPACE
      nf11stub             1               Thu Jul  29 05:55:48 2024        DEPLOYED        ocstub-py-25.1.201      25.1.201.0.0             ocbsf
      nf12stub             1               Thu Jul  29 05:55:50 2024        DEPLOYED        ocstub-py-25.1.201      25.1.201.0.0             ocbsf
      nf1stub              1               Thu Jul  29 05:55:47 2024        DEPLOYED        ocstub-py-25.1.201      25.1.201.0.0             ocbsf
    10. Run the following command to verify the ATS and Stubs deployment status:
      helm status <release_name> -n ocbsf
    11. Run the following command to verify if all the services are installed.
      kubectl get po -n ocbsf

      Sample output:

      NAME                                                   READY   STATUS    RESTARTS   AGE
      nf11stub-ocstub-py-7bffd6dcd7-ftm5f                   1/1     Running   0          3d23h
      nf12stub-ocstub-py-547f7cb99f-7mpll                   1/1     Running   0          3d23h
      nf1stub-ocstub-py-bdd97cb9-xjrkx                      1/1     Running   0          3d23h
  3. Verify the changes related to stub predefined prime configuration.
    1. Run the following command to verify the status of all the config-map.
      kubectl get cm -n ocbsf

      Notice the change in the config-map count: it includes two extra config-maps for the stubs, matching the number of stubs deployed with the Predefined_priming configuration.

      For example:

      NAME                 DATA   AGE
      cm-pystub-nf1stub               1      3h35m
      cm-pystub-nf11stub              1      3h35m

Updating the Predefined_priming Configurations

To update the Predefined_priming configurations, such as the value of nfInstanceId in the URL, perform the following steps:

Note:

This procedure is applicable only when Predefined_priming configuration is enabled.
  1. Run the following command to verify the status of all the config-maps.
    kubectl get cm -n ocbsf
  2. Perform the following steps separately for nf1stub and nf11stub pods.
    1. Edit the config-map of the pod.

      To edit the config-map of nf1stub,

      kubectl edit cm cm-pystub-nf1stub -n ocbsf

      To edit the config-map of nf11stub,

      kubectl edit cm cm-pystub-nf11stub -n ocbsf
    2. Edit the configurations as required, then save and close the config-maps.
    3. Restart the nf1stub and nf11stub pods.
    4. Verify the logs of both these pods to confirm the changes.
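
One way to restart the stub pods and confirm the changes (a sketch, assuming the deployment names and the app label follow the <release>-ocstub-py pattern shown in the sample output above; the grep filter is illustrative):

kubectl rollout restart deployment nf1stub-ocstub-py -n ocbsf
kubectl rollout restart deployment nf11stub-ocstub-py -n ocbsf
kubectl logs -n ocbsf -l app=nf1stub-ocstub-py | grep -i prime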
3.1.6.3 Deploying DNS Stub in Kubernetes Cluster

Note:

Ensure that there are sufficient resource requests and limits for the DNS Stub. Set the resource request and limit values in the resources section of the values.yaml file as follows:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 1000m
  #  memory: 1024Mi
  # requests:
  #  cpu: 500m
  #  memory: 500Mi
To deploy DNS Stub in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-bsf-tools-25.1.200.0.0 folder and run the following command to extract the ocdns tar file content:

    tar -zxvf ocdns-pkg-25.1.203.0.0.tgz

    Sample output:

    
    [cloud-user@platform-bastion-1 ocdns-pkg-25.1.203.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocdns-bind-image-25.1.203.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocdns-bind-25.1.203.tgz
    
  2. Run the following command in your cluster to load the DNS STUB image:

    docker load --input ocdns-bind-image-25.1.203.tar

  3. Run the following commands to tag and push the DNS STUB image:
    docker tag ocdns-bind:25.1.203 localhost:5000/ocdns-bind:25.1.203
    docker push localhost:5000/ocdns-bind:25.1.203
  4. Run the following command to untar the helm charts, ocdns-bind-25.1.203.tgz.

    tar -zxvf ocdns-bind-25.1.203.tgz

  5. Update the registry name, image name, and tag (if required) in the ocdns-bind/values.yaml file. For this, open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to deploy the DNS Stub.
    Using Helm:
    helm install -name ocdns ocdns-bind-25.1.203.tgz --namespace ocbsf -f ocdns-bind/values.yaml
  7. Capture the cluster name of the deployment, namespace where nfstubs are deployed, and the cluster IP of DNS Stub.

    To capture the DNS Stub cluster IP:
    kubectl get svc -n ocbsf | grep dns

    Sample output:

    
    [cloud-user@platform-bastion-1 ocdns-pkg-25.1.203.0.0]$ kubectl get svc -n ocbsf | grep dns
    NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                AGE
    ocdns     ClusterIP      10.233.11.45    <none>          53/UDP,6236/TCP        19h
    To capture the cluster name:
    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep clusterName
    Sample output:
    clusterName: platform
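
To spot-check that the DNS Stub resolves the stub FQDNs, you can query it directly from a pod using its cluster IP (an illustrative check, assuming nslookup is available in the pod; the IP is taken from the sample output above):

kubectl exec -it <ats_pod_name> -n ocbsf -- nslookup nf1stub.ocbsf.svc 10.233.11.45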
3.1.6.4 Deploying ocdiam Simulator in Kubernetes Cluster
To deploy ocdiam Simulator in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-bsf-tools-25.1.200.0.0 folder and run the following command to extract the ocdiam tar file content:

    tar -zxvf ocdiam-pkg-25.1.203.0.0.tgz

    Sample output:

    [cloud-user@platform-bastion-1 ocdiam-pkg-25.1.203.0.0]$ ls -ltrh
    total 908M
    -rw-------. 1 cloud-user cloud-user 908M Mar 14 14:49 ocdiam-sim-image-25.1.203.tar
    -rw-r--r--. 1 cloud-user cloud-user 3.8K Mar 14 14:49 ocdiam-sim-25.1.203.tgz
  2. Run the following command in your cluster to load the Diameter Simulator image:
    docker load --input ocdiam-sim-image-25.1.203.tar
  3. Run the following commands to tag and push the Diameter Simulator image:
    docker tag ocdiam-sim:25.1.203 localhost:5000/ocdiam-sim:25.1.203
    docker push localhost:5000/ocdiam-sim:25.1.203
  4. Run the following command to untar the helm charts, ocdiam-sim-25.1.203.tgz.
    tar -zxvf ocdiam-sim-25.1.203.tgz
  5. Update the registry name, image name, and tag (if required) in the ocdiam-sim/values.yaml file. For this, open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to deploy the Diameter Simulator.
    Using Helm:
    helm install -name ocdiam-sim ocdiam-sim --namespace ocbsf -f ocdiam-sim/values.yaml

    Output:

    ocdiam-sim-69968444b6-fg6ks                1/1     Running   0          5h47m

Sample of BSF namespace with BSF and ATS after installation:

[cloud-user@platform-bastion-1 ocstub-pkg-25.1.201.0.0]$ kubectl get po -n ocbsf
NAME                                                      READY   STATUS    RESTARTS   AGE
ocbsf-appinfo-6fc99ffb85-f96j2                        1/1     Running   1          3d23h
ocbsf-bsf-management-service-df6b68d75-m77dv          1/1     Running   0          3d23h
ocbsf-oc-config-79b5444f49-7pwzx                      1/1     Running   0          3d23h
ocbsf-oc-diam-connector-77f7b855f4-z2p88              1/1     Running   0          3d23h
ocbsf-oc-diam-gateway-0                               1/1     Running   0          3d23h
ocbsf-ocats-bsf-5d8689bc77-cxdvx                      1/1     Running   0          3d23h
ocbsf-ocbsf-egress-gateway-644555b965-pkxsb           1/1     Running   0          3d23h
ocbsf-ocbsf-ingress-gateway-7558b7d5d4-lfs5s          1/1     Running   4          3d23h
ocbsf-ocbsf-nrf-client-nfmanagement-d6b955b48-4pptk   1/1     Running   0          3d23h
ocbsf-ocdns-ocdns-bind-75c964648-j5fsd                1/1     Running   0          3d23h
ocbsf-ocpm-cm-service-7775c76c45-xgztj                1/1     Running   0          3d23h
ocbsf-ocpm-queryservice-646cb48c8c-d72x4              1/1     Running   0          3d23h
ocbsf-performance-69fc459ff6-frrvs                    1/1     Running   4          3d23h
ocbsfnf11stub-7bffd6dcd7-ftm5f                        1/1     Running   0          3d23h
ocbsfnf12stub-547f7cb99f-7mpll                        1/1     Running   0          3d23h
ocbsfnf1stub-bdd97cb9-xjrkx                           1/1     Running   0          3d23h
ocdiam-sim-69968444b6                                 1/1     Running   0          3d23h

3.1.7 Post-Installation Steps

This section describes the post-installation steps that users must perform after deploying ATS and stub pods.

Alternate Route Service Configurations

To edit the Alternate Route Service deployment file (ocbsf-ocbsf-alternate-route) that points to the DNS Stub, perform the following steps:

  1. Run the following command to get searches information from dns-bind pod to enable communication between Alternate Route and dns-bind service:
    kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
    The following output is displayed after running the command:

    Figure 3-2 Sample Output

    By default, the alternate route service points to CoreDNS, and the following settings appear in the deployment file:

    Figure 3-3 Alternate Route Service Deployment File

  2. Run the following command to edit the deployment file and add the following content in alternate service to query DNS stub:
    kubectl edit deployment ocbsf-ocbsf-alternate-route -n ocbsf
    1. Add the IP Address of the nameserver that you have recorded after installing the DNS stub (cluster IP Address of DNS Stub).
    2. Add the search information one by one which you recorded earlier.
    3. Set dnsPolicy to "None".
      dnsConfig:
        nameservers:
        - 10.233.33.169      // cluster IP of DNS Stub
        searches:
        - ocpcf.svc.occne15-ocpcf-ats
        - svc.occne15-ocpcf-ats
        - occne15-ocpcf-ats
      dnsPolicy: None
    For example:

    Figure 3-4 Example

NRF client configmap

  1. In the application-config configmap, configure the following parameters with the respective values (a sketch of editing the configmap follows at the end of this section):
    • primaryNrfApiRoot=nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080

      Example: primaryNrfApiRoot=nf1stub.ocats.svc:8080

    • secondaryNrfApiRoot=nf11stub.<namespace_gostubs_are_deployed_in>.svc:8080

      Example: secondaryNrfApiRoot=nf11stub.ocats.svc:8080

    • virtualNrfFqdn=nf1stub.<namespace_gostubs_are_deployed_in>.svc

      Example: virtualNrfFqdn=nf1stub.ocats.svc

    Note:

    To get all configmaps in your namespace, run the following command:

    kubectl get configmaps -n <BSF_namespace>

  2. (Optional) If persistent volume is used, follow the post-installation steps provided in the Persistent Volume for 5G ATS section.
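A minimal sketch of locating and editing the configmap with kubectl (the exact configmap name depends on the release name, so the name below is illustrative):

kubectl get configmaps -n <BSF_namespace>
kubectl edit configmap <release_name>-application-config -n <BSF_namespace>
# then set, for example:
#   primaryNrfApiRoot=nf1stub.ocats.svc:8080
#   secondaryNrfApiRoot=nf11stub.ocats.svc:8080
#   virtualNrfFqdn=nf1stub.ocats.svc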

3.2 Installing ATS for NRF

3.2.1 Resource Requirements

This section describes the ATS resource requirements for NRF.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs, memory, and storage for the following:
  • NRF SUT
  • cnDBTier
  • ATS

Table 3-5 NRF - Total Number of Resources

| Resource Name | CPU | Memory (Gi) | Storage (Mi) |
|---|---|---|---|
| NRF SUT Totals | 61 | 69 | 0 |
| DBTier Totals | 40.5 | 50.5 | 720 |
| ATS Totals | 7 | 6 | 0 |
| Grand Total NRF ATS | 108.5 | 125.5 | 720 |

NRF Pods Resource Requirements Details

For NRF Pods resource requirements, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

ATS Resource Requirements details for NRF

This section describes the ATS resource requirements, which are needed to deploy NRF ATS successfully.

Table 3-6 ATS Resource Requirements Details

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| ATS Behave | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 0 |
| ATS Stub (Python) | 1 | 1 | 1 | 1 | 5 | 5 | 5 | 0 |
| ATS Totals | | | | | | 7 | 6 | 0 |

cnDBTier Resource Requirements Details for NRF

This section describes the cnDBTier resource requirements, which are needed to deploy NRF ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes are required.

Table 3-7 cnDBTier Services Resource Requirements

| Service Name | Min Pod Replica # | Min CPU/Pod | Min Memory/Pod (in Gi) | PVC Size (in Gi) | Min Ephemeral Storage (Mi) |
|---|---|---|---|---|---|
| MGMT (ndbmgmd) | 2 | 4 | 6 | 15 | 90 |
| DB (ndbmtd) | 4 | 5 | 5 | 4 | 90 |
| SQL (ndbmysqld) | 2 | 4 | 5 | 8 | 90 |
| SQL (ndbappmysqld) | 2 | 2 | 3 | 1 | 90 |
| Monitor Service (db-monitor-svc) | 1 | 0.4 | 490 (Mi) | NA | 90 |
| Backup Manager Service (db-backup-manager-svc) | 1 | 0.1 | 130 (Mi) | NA | 90 |
| Replication Service - Leader | 1 | 2 | 2 | 2 | 90 |
| Replication Service - Other | 0 | 1 | 2 | 0 | 90 |

3.2.2 Downloading the ATS Package

Locating and Downloading ATS Images

To locate and download the ATS image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Network Repository Function <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the NRF ATS package file.
  10. Unzip the file to access all the ATS images. The <p********_<release_number>_Tekelec>.zip file contains the following files:
    ocats_ocnrf_csar_25_1_200_0_0.zip
    ocats_ocnrf_csar_25_1_200_0_0.zip.sha256
    ocats_ocnrf_csar_mcafee-25.1.200.0.0.log

    Note:

    The above zip file contains all the images and custom values required for the 25.1.200 release of OCATS-NRF.

  11. The ocats_ocnrf_csar_25_1_200_0_0.zip file has the following files and folders:
    ├── Definitions
    │   ├── ocats_ocnrf_ats_tests.yaml
    │   └── ocats_ocnrf.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   └── ocats-ocnrf-25.1.200.tgz
    │   ├── Licenses
    │   ├── ocats-nrf-25.1.200.tar
    │   ├── Oracle.cert
    │   ├── ocstub-py-25.1.202.tar
    │   └── Tests
    ├── ocats_ocnrf.mf
    ├── Scripts
    │   ├── ocats_ocnrf_custom_serviceaccount_25.1.200.yaml
    │   ├── ocats_ocnrf_custom_values_25.1.200.yaml
    │   └── ocats_ocnrf_tests_jenkinsjobs_25.1.200.tgz
    └── TOSCA-Metadata
        └── TOSCA.meta
  12. Copy the zip file to Kubernetes cluster where you want to deploy ATS.

3.2.3 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Run the following command to extract the package content:
    unzip ocats_ocnrf_csar_25_1_200_0_0.zip
    The following docker image tar files are located in the Files folder:
    • ocats-nrf-25.1.200.tar
    • ocstub-py-25.1.202.tar
  2. Run the following commands in your cluster to load the ATS docker image 'ocats-nrf-25.1.200.tar' and the Stub docker image 'ocstub-py-25.1.202.tar', and push them to your registry:
    $ docker load -i ocats-nrf-25.1.200.tar
    $ docker load -i ocstub-py-25.1.202.tar
     
    $ docker tag ocats/ocats-nrf:25.1.200 <local_registry>/ocats/ocats-nrf:25.1.200 
    
    $ docker tag ocats/ocstub-py:25.1.202 <local_registry>/ocats/ocstub-py:25.1.202  
    
    $ docker push <local_registry>/ocats/ocats-nrf:25.1.200 
    
    $ docker push <local_registry>/ocats/ocstub-py:25.1.202
  3. Create a copy of the custom values file located at Scripts/ocats_ocnrf_custom_values_25.1.200.yaml and update the image names, tags, and other parameters as required (a sketch follows below).
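A minimal sketch of the image-related fields that are typically updated in the copied values file (the key paths below are illustrative; check the comments in the file itself for the authoritative structure):

ocats-nrf:
  image:
    repository: <local_registry>/ocats/ocats-nrf
    tag: 25.1.200
ocstub-py:
  image:
    repository: <local_registry>/ocats/ocstub-py
    tag: 25.1.202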

3.2.4 Configuring ATS

3.2.4.1 Enabling Static Port
  1. To enable static port:

    Note:

    ATS supports static port. By default, this feature is disabled.
    • In the ocats-ocnrf-custom-values.yaml file, under the service section, set the staticNodePortEnabled parameter value to 'true' and the staticNodePort parameter value to a valid nodePort.
      service:
        customExtension:
          labels: {}
          annotations: {}
        type: LoadBalancer
        port: "8080"
        staticNodePortEnabled: true
        staticNodePort: "32385"
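After deploying with these values, you can confirm the port assignment; a sketch (the service name filter is illustrative):

kubectl get svc -n <namespace_name> | grep ocats
# the PORT(S) column should show 8080:32385/TCP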
3.2.4.2 Enabling Aspen Service Mesh

To enable service mesh for ATS:

  1. Set the value of serviceMeshCheck to true. The following is a snippet of the relevant section in the yaml file:
    ocats-nrf: 
       serviceMeshCheck: true
  2. If the ASM is not enabled on the global level for the namespace, run the following command to enable it before deploying the ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocnrf istio-injection=enabled
  3. Add the following annotations under the lbDeployments and nonlbDeployments section of the global section in ocats-nrf-custom-values.yaml file for ATS deployment as follows:

    traffic.sidecar.istio.io/excludeInboundPorts: "8080"

    traffic.sidecar.istio.io/excludeOutboundPorts: "9090"

    For example:

        lbDeployments:
          labels: {}
          annotations:
            traffic.sidecar.istio.io/excludeInboundPorts: "8080"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090"
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: 
            traffic.sidecar.istio.io/excludeInboundPorts: "8090"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090"
  4. Add the following annotations in the NRF deployment for NRF to work with ATS in a service mesh environment:

    oracle.com/cnc: "true"
    traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
    traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"

    For example:

        lbDeployments:
          labels: {}
          annotations:
            oracle.com/cnc: "true"
            traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"
    
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: 
            oracle.com/cnc: "true"
            traffic.sidecar.istio.io/excludeInboundPorts: "9090,8095,8096,7,53"
            traffic.sidecar.istio.io/excludeOutboundPorts: "9090,8095,8096,7,53"

Note:

If the above annotations are not provided in the NRF deployment under lbDeployments and nonlbDeployments, all the metrics and alerts related test cases will fail.

3.2.4.3 Enabling Persistent Volume

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

To enable persistent storage:
  1. Create a PVC and associate it with the ATS pod (a sample manifest is provided at the end of this section).
  2. Set the PVEnabled flag to true.
  3. Set PVClaimName to the name of the PVC created for ATS.
    
    deployment:
      customExtension:
        labels: {}
        annotations: {}
      PVEnabled: true
      PVClaimName: "ocats-nrf-25.1.200-pvc"
      

For more details on Persistent Volume Storage, you can refer to Persistent Volume for 5G ATS.
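A minimal sketch of a PVC manifest for step 1, assuming the name matches the PVClaimName above (the access mode, size, and storage class are assumptions to adjust for your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocats-nrf-25.1.200-pvc
  namespace: ocnrf
spec:
  accessModes:
    - ReadWriteOnce        # assumption: a single ATS pod mounts the volume
  resources:
    requests:
      storage: 1Gi         # assumption: size as per your build-history retention needs
  storageClassName: standard   # assumption: replace with your cluster's storage class

Apply it with kubectl apply -f <pvc_file>.yaml before setting PVEnabled to true.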

3.2.4.4 Enabling NF FQDN Authentication

Note:

This procedure is applicable only if the NF FQDN Authentication feature is being tested; otherwise, proceed to the "Deploying ATS and Stub in Kubernetes Cluster" section.

You must enable this feature while deploying Service Mesh. For more information on how to enable NF FQDN Authentication feature, see Oracle Communications Cloud Native Core, Network Repository Function User Guide.

However, the ATS deployment process changes as follows:
  1. Use the previously unzipped file "ocats_ocnrf_custom_serviceaccount_25.1.200.yaml" to create a service account. Add the following annotation in the "ocats_ocnrf_custom_serviceaccount_25.1.200.yaml" file where the kind is ServiceAccount.

    "certificate.aspenmesh.io/customFields": '{ "SAN": { "DNS": [ "<NF-FQDN>" ] } }'

    Sample format:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ocats-custom-serviceaccount
      namespace: ocnrf
      annotations:
        "certificate.aspenmesh.io/customFields": '{ "SAN": { "DNS": [ "AMF.d5g.oracle.com" ] } }'

    Note:

    "AMF.d5g.oracle.com" is the NF FQDN that you must provide in the serviceaccount DNS field.
  2. Run the following command to create a service account:

    kubectl apply -f ocats_ocnrf_custom_serviceaccount_25.1.200.yaml

  3. Update the service account name in the ocats_ocnrf_custom_values_25.1.200.yaml file as follows:
    ocats-nrf:
      serviceAccountName: "ocats-custom-serviceaccount"

3.2.5 Deploying ATS and Stub in Kubernetes Cluster

Note:

It is important to ensure that all three components (ATS, Stub, and NRF) are in the same namespace.

ATS and Stub support Helm 3 for deployment.

The NRF release name has to be provided in the ocats-ocnrf-custom-values.yaml file under ocats-nrf.nrfReleaseName:

ocats-nrf:    
   nrfReleaseName: "ocnrf"

If the namespace does not exist, run the following command to create it:

kubectl create namespace ocnrf

Using Helm for Deploying ATS:

helm install <release_name> ocats-ocnrf-25.1.200.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:

helm install ocats ocats-ocnrf-25.1.200.tgz --namespace ocnrf -f ocats-ocnrf-custom-values.yaml

Note:

The above helm install command deploys ATS along with the stub servers required for ATS execution, comprising 1 ATS pod and 5 stub server pods.

3.2.6 Verifying ATS Deployment

Run the following command to verify ATS deployment.

helm status <release_name>

Once ATS and Stub are deployed, run the following commands to check the pod and service deployments.
Checking Pod Deployment:
kubectl get pod -n ocnrf
Checking Service Deployment:
kubectl get service -n ocnrf

Figure 3-5 Checking Pod Deployment without Service Mesh

Figure 3-6 Checking Service Deployment without Service Mesh

If ATS is deployed with a service mesh sidecar, ensure that both the ATS and Stub pods have two containers in the ready state, showing "2/2" as follows:

Figure 3-7 ATS and Stub Deployed with Service Mesh

Figure 3-8 ATS and Stub Deployed with Service Mesh
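For illustration only, a healthy service-mesh deployment shows each pod with two ready containers (the pod names below are placeholders, not actual chart-generated names):

$ kubectl get pod -n ocnrf
NAME                READY   STATUS    RESTARTS   AGE
<ats_pod_name>      2/2     Running   0          2h
<stub_pod_name>     2/2     Running   0          2h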

3.2.7 Post-Installation Steps (if Persistent Volume is Used)

If persistent volume is used, follow the post-installation steps mentioned in the Persistent Volume for 5G ATS section.

3.3 Installing ATS for NSSF

This section describes Automated Testing Suite (ATS) installation procedures for Network Slice Selection Function (NSSF) in a cloud native environment. You must perform ATS installation procedures for NSSF in the same sequence as outlined in the following sections.

3.3.1 Resource Requirements

Total Number of Resources

The total resource requirements are as follows:

Table 3-8 Total Number of Resources

| Resource | CPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|
| NSSF SUT Total | 30.2 | 22 | 4 |
| cnDBTier Total | 40 | 40 | 20 |
| ATS Total | 14 | 14 | 0 |
| Grand Total NSSF ATS | 79.2 | 72 | 24 |

Resource Details

The details of resources required to install NSSF-ATS are as follows:

Table 3-9 Resource Details

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | Replicas (regular deployment) | Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| NSSF Pods | | | | | | | | |
| ingressgateway | 4 | 4 | 0 | 2 | 1 | 4 | 4 | 0 |
| egressgateway | 4 | 4 | 0 | 2 | 1 | 4 | 4 | 0 |
| nsselection | 4 | 2 | 0 | 2 | 1 | 4 | 2 | 0 |
| nsavailability | 4 | 2 | 0 | 2 | 1 | 4 | 2 | 0 |
| nsconfig | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| nssubscription | 2 | 2 | 0 | 1 | 1 | 2 | 2 | 0 |
| nrf-client-discovery | 1 | 1 | 0 | 2 | 1 | 1 | 1 | 0 |
| nrf-client-management | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| appinfo | 0.2 | 1 | 0 | 2 | 1 | 0.2 | 1 | 0 |
| perfinfo | 0.2 | 0.5 | 0 | 1 | 1 | 0.2 | 0.5 | 0 |
| config-server | 0.2 | 0.5 | 0 | 1 | 1 | 0.2 | 0.5 | 0 |
| NSSF SUT Totals | | | | | | 22.6 | 20 | 0 |
| ATS | | | | | | | | |
| ATS Behave | 4 | 4 | 0 | 0 | 1 | 4 | 4 | 0 |
| ATS AMF Stub (Python) | 3 | 3 | 0 | 0 | 1 | 3 | 3 | 0 |
| ATS NRF Stub (Python) | 2 | 2 | 0 | 0 | 1 | 2 | 2 | 0 |
| ATS NRF Stub1 (Python) | 2 | 2 | 0 | 0 | 1 | 2 | 2 | 0 |
| ATS NRF Stub2 (Python) | 2 | 2 | 0 | 0 | 1 | 2 | 2 | 0 |
| OCDNS-BIND | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 |
| ATS Totals | | | | | | 14 | 14 | 0 |
| cnDBTier Pods (minimum of 4 worker nodes required) | | | | | | | | |
| vrt-launcher-dt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-4.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-db-installer.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| cnDBTier Totals | | | | | | 40 | 40 | 20 |

3.3.2 Locating and Downloading ATS and Simulator Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches and Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Network Slice Selection Function <release_number> from Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the NSSF ATS package file.
  10. Untar the zip file to get the ocats-nssf directory, which consists of all the ATS images. The ocats-nssf directory has the following files:

    ocats-nssf
    ├── ocats-nssf-custom-configtemplates-25.1.200-README.txt   - contains all the information required for the package.
    ├── ocats-nssf-custom-configtemplates-25.1.200.zip   - contains the serviceaccount, PVC file, and custom values file
    ├── ocats-nssf-tools-pkg-25.1.200-README.txt   - contains all the information required for the package.
    └── ocats-nssf-tools-pkg-25.1.200.tgz   - has the following images and charts packaged as tar files

    Note:

    Prerequisites:
    • To run oauth test cases for NSSF, oauth secrets need to be generated. For more information, see the "Configuring Secrets to Enable Access Token" section in Oracle Communications Cloud Native Core, Network Slice Selection Function Installation, Upgrade, and Fault Recovery Guide.
    1. To ensure the functionality of the Virtual_Host_NRF_Resolution_By_NSSF_Using_DNSSRV ATS feature, the following configuration must be enabled as per the engineering team's guidance:
      nrf-client:
        # This config map is for providing inputs to NRF-Client
        configmapApplicationConfig:
            enableVirtualNrfResolution=true
            virtualNrfFqdn=nrfstub.changeme-ocats.svc
            virtualNrfScheme=http
    2. The necessary changes to the NSSF custom-values.yaml file are outlined below.
      For instance, if the NSSF ATS is deployed in the "ocnssfats" namespace, the virtualNrfFqdn configuration must be updated as follows:
      nrf-client:
        # This config map is for providing inputs to NRF-Client
        configmapApplicationConfig:
            enableVirtualNrfResolution=true
            virtualNrfFqdn=nrfstub.ocnssfats.svc
            virtualNrfScheme=http
    3. If the NSSF is installed with a Helm release name different from "ocnssf" (for example, "ocnssfats"), the following parameter must be updated accordingly.
      If the Helm release name is "ocnssf", no changes are required.
      # Alternate Route Service Host Value
      # Replace ocnssf with Release Name
      alternateRouteServiceHost: ocnssf-alternate-route
    4. If the Helm release name for NSSF is "ocnssfats", the parameter must be updated as follows:
      # Alternate Route Service Host Value
      # Replace ocnssf with Release Name
      alternateRouteServiceHost: ocnssfats-alternate-route
  11. Untar the ocats-nssf-tools-pkg-25.1.200.tgz tar file.
    The structure of the file looks as given below:

    ocats-nssf-tools-pkg-25.1.200
    ├── amfstub-25.1.200.tar      - AMF Stub Server Docker image
    ├── amfstub-25.1.200.tar.sha256
    ├── ats_data-25.1.200.tar     - ATS data; after untar, the "ocnssf_tests" folder is created, which contains the ATS feature files
    ├── ats_data-25.1.200.tar.sha256
    ├── ocats-nssf-25.1.200.tar   - NSSF ATS Docker Image
    ├── ocats-nssf-25.1.200.tar.sha256
    ├── ocats-nssf-25.1.200.tgz   - ATS Helm Charts; after untar, the "ocats-nssf" charts folder is created
    ├── ocats-nssf-25.1.200.tgz.sha256
    ├── ocdns-bind-25.1.200.tar   - NSSF DNS Stub Server Docker Image
    ├── ocdns-bind-25.1.200.tar.sha256
    └── README.md
  12. Copy the ocats-nssf-tools-pkg-25.1.200.tgz tar file to the CNE or Kubernetes cluster where you want to deploy ATS.
  13. Along with the above packages, there is ocats-nssf-custom-configtemplates-25.1.200.zip at the same location.

    The readme file ocats-nssf-custom-configtemplates-25.1.200-README.txt contains information about the content of this zip file.

    Content of ocats-nssf-custom-configtemplates-25.1.200.zip is as below:

     Archive:  ocats-nssf-custom-configtemplates-25.1.200.zip
      inflating: nssf_ats_pvc_25.1.200.yaml
      inflating: ocats_nssf_custom_values_25.1.200.yaml
      inflating: ocats_ocnssf_custom_serviceaccount_25.1.200.yaml
    Copy these files to the CNE or Kubernetes cluster where you want to deploy ATS.
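For reference, a sketch of extracting the configuration templates on the cluster host with standard unzip (the archive listing above matches its output format):

unzip ocats-nssf-custom-configtemplates-25.1.200.zip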

3.3.3 Deploying ATS in Kubernetes Cluster

To deploy ATS in Kubernetes Cluster:

  1. Verify checksums of the tarballs mentioned in the file Readme.txt.
  2. Run the following commands to extract tar file content, Helm charts, and Docker images of ATS:

    tar -xvzf ocats-nssf-tools-pkg-25.1.200.tgz

    The output of this command will return the following files:
    
    ocats-nssf-tools-pkg-25.1.200
    ├── amfstub-25.1.200.tar      - AMF Stub Server Docker image
    ├── amfstub-25.1.200.tar.sha256
    ├── ats_data-25.1.200.tar     - ATS data; after untar, the "ocnssf_tests" folder is created, which contains the ATS feature files
    ├── ats_data-25.1.200.tar.sha256
    ├── ocats-nssf-25.1.200.tar   - NSSF ATS Docker Image
    ├── ocats-nssf-25.1.200.tar.sha256
    ├── ocats-nssf-25.1.200.tgz   - ATS Helm Charts; after untar, the "ocats-nssf" charts folder is created
    ├── ocats-nssf-25.1.200.tgz.sha256
    ├── ocdns-bind-25.1.200.tar   - NSSF DNS Stub Server Docker Image
    ├── ocdns-bind-25.1.200.tar.sha256
    └── README.md
    
  3. NSSF-ATS and Stub Images Load and Push: Run the following commands in your cluster to load the ocats, amfstub, and ocdns-bind images:

    Docker Commands:

    docker load -i ocats-nssf-<version>.tar
    docker load -i amfstub-<version>.tar
    docker load -i ocdns-bind-<version>.tar

    Examples:

    docker load -i ocats-nssf-25.1.200.tar
    docker load -i amfstub-25.1.200.tar
    docker load -i ocdns-bind-25.1.200.tar

    Podman Commands:

    podman load -i ocats-nssf-<version>.tar
    podman load -i amfstub-<version>.tar
    podman load -i ocdns-bind-<version>.tar

    Examples:

    podman load -i ocats-nssf-25.1.200.tar
    podman load -i amfstub-25.1.200.tar
    podman load -i ocdns-bind-25.1.200.tar
  4. Run the following commands to tag the images and push them to your registry.
    1. Run the following commands to grep the images:
      docker images | grep ocats-nssf
      docker images | grep amfstub
      docker images | grep ocdns
    2. Copy the Image ID from the output of the grep commands and tag each image with your registry name and the required tag (version number).

      Docker Commands:

      docker tag <Image_ID> <your-registry-name/ocats-nssf:<tag>>

      docker push <your-registry-name/ocats-nssf:<tag>>

      docker tag <Image_ID> <your-registry-name/amfstub:<tag>>

      docker push <your-registry-name/amfstub:<tag>>

      docker tag <Image_ID> <your-registry-name/ocdns-bind:<tag>>

      docker push <your-registry-name/ocdns-bind:<tag>>

      Podman Commands:

      podman tag <Image_ID> <your-registry-name/ocats-nssf:<tag>>
      podman push <your-registry-name/ocats-nssf:<tag>>
      podman tag <Image_ID> <your-registry-name/amfstub:<tag>>
      podman push <your-registry-name/amfstub:<tag>>
      podman tag <Image_ID> <your-registry-name/ocdns-bind:<tag>>
      podman push <your-registry-name/ocdns-bind:<tag>>
  5. ATS Helm Charts: Run the following command to extract the ATS Helm charts:
    tar -xvzf ocats-nssf-25.1.200.tgz

    The above command creates the "ocats-nssf" Helm charts folder for ATS.

  6. ATS Data: Run the following command to get the ATS data, which contains the feature files and data:
    tar -xvf ats_data-25.1.200.tar

    The above command creates the "ocnssf_tests" ATS feature files folder along with the "dat" and "jobs" folders, which are copied into the ATS pod after the ATS installation is complete.

    1. Copy the ocnssf_tests folder under NSSF ATS pod as shown below:
      
      kubectl cp ocnssf_tests <namespace>/<nssf ats podname>:/var/lib/jenkins/

      Example:

      kubectl cp ocnssf_tests cicdnssf-241204053638/ocats-nssf-8566d64cfb-cgmvc:/var/lib/jenkins/
    2. Copy the jobs folder under NSSF ATS pod as shown below:
      kubectl cp jobs <namespace>/<nssf ats podname>:/var/lib/jenkins/.jenkins/
      Example:
      kubectl cp jobs cicdnssf-241204053638/ocats-nssf-8566d64cfb-cgmvc:/var/lib/jenkins/.jenkins/
  7. <Optional> Go to the certificates folder inside ocats-nssf and run the following command:
    kubectl create secret generic ocnssf-secret --from-file=certificates/rsa_private_key_pkcs1.pem --from-file=certificates/trust.txt --from-file=certificates/key.txt --from-file=certificates/ocnssf.cer --from-file=certificates/caroot.cer -n ocnssf
  8. ATS Custom Values File Changes: Update the image name and tag in the ocats_nssf_custom_values_25.1.200.yaml file as required.
    1. For this, open the ocats_nssf_custom_values_25.1.200.yaml file.
    2. Update the image.repository and image.tag parameters for ocats-nssf, ocats-amf-stubserver, ocats-nrf-stubserver, ocats-nrf-stubserver1, ocats-nrf-stubserver2, and ocdns-bind.
    3. Save and close the file after making the updates.
  9. <Optional> To enable static port: ATS supports static port. By default, this feature is disabled.

    In the ocats-nssf/values.yaml file, under the service section, set the value of the staticNodePortEnabled parameter to 'true' and provide a valid nodePort value for staticNodePort.

  10. ATS Service Account Creation: In the ocats_ocnssf_custom_serviceaccount_25.1.200.yaml file, change the namespace as below:
    sed -i "s/changeme-ocats/${namespace}/g" ocats_ocnssf_custom_serviceaccount_25.1.200.yaml
  11. Run the following command to apply the ocats_ocnssf_custom_serviceaccount_25.1.200.yaml file:

    kubectl apply -f <serviceaccount.yaml file> -n <namespace_name>

    For example:

    kubectl apply -f ocats_ocnssf_custom_serviceaccount_25.1.200.yaml -n ocnssf

  12. Pointing NSSF to Stub Servers: Follow this step to point NSSF to the NRF-Stubserver and AMF-Stubserver in the NSSF custom values file:
    sed -i "s/changeme-ocats/${namespace}/g" $NSSF_CUSTOM_DEPLOY_FILE
    For example:
    sed -i "s/changeme-ocats/${namespace}/g" ocnssf_custom_values_25.1.200.yaml
    The NSSF custom values snippet is as follows:
     
    
    nrf-client:
      # This config map is for providing inputs to NRF-Client
      configmapApplicationConfig:
        &configRef
        profile: |-
          [appcfg]
          primaryNrfApiRoot=nrf-stubserver.changeme-ocats:8080
          secondaryNrfApiRoot=nrf-stubserver.changeme-ocats:8080
    # ARS Helm Configuration
    staticVirtualFqdns:
        - name: https://abc.test.com
          alternateFqdns:
            - target: amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: nrf-stubserver.changeme-ocats
              port: 8080
              priority: 20
        - name: http://xyz.test.com
          alternateFqdns:
            - target: amf-stubserver.changeme-ocats
              port: 8080
              priority: 10
            - target: nrf-stubserver.changeme-ocats
              port: 8080
              priority: 20
  13. Deploy ATS as shown below. The NSSF ATS Helm release name should be "ocats".

    helm install <release_name> <charts> -n <namespace_name> -f <custom_values file> --version <helm-chart-version>

    For example:

    helm install ocats ocats-nssf -n ocnssf -f ocats_nssf_custom_values_25.1.200.yaml --version 25.1.200

    Running the above command creates the following pods:

    ocats-nssf

    ocats-amf-stubserver

    ocats-nrf-stubserver

    ocats-nrf-stubserver1

    ocats-nrf-stubserver2

    ocats-ocdns-bind

    For example:
    
    $ kubectl get pods -n cicdnssf-241204053638
    NAME                                                     READY   STATUS      RESTARTS       AGE
    ocats-amf-stubserver-5c5775dcb5-n9hlw                    2/2     Running     0              3h22m
    ocats-nrf-stubserver-6bc6ff5ccc-6wkv7                    2/2     Running     0              3h22m
    ocats-nrf-stubserver1-567f55488c-7bdxk                   2/2     Running     0              3h22m
    ocats-nrf-stubserver2-697d4cffd9-c7dvr                   2/2     Running     0              3h22m
    ocats-nssf-8566d64cfb-cgmvc                              2/2     Running     0              3h22m
    ocats-ocdns-bind-967bd4bc8-hpcjn                         2/2     Running     0              3h22m
    
    $ kubectl get svc -n cicdnssf-241204053638
    NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    amf-stubserver                          LoadBalancer   10.96.27.125    <pending>     8080:31064/TCP      5h34m
    dnssim                                  ClusterIP      10.96.99.119    <none>        53/UDP,6236/TCP     5h34m
    nrf-stubserver                          LoadBalancer   10.96.192.123   <pending>     8080:30409/TCP      5h34m
    nrf-stubserver1                         LoadBalancer   10.96.235.187   <pending>     8080:31157/TCP      5h34m
    nrf-stubserver2                         LoadBalancer   10.96.77.69     <pending>     8080:31056/TCP      5h34m
    ocats-nssf                              LoadBalancer   10.96.14.30     <pending>     8080:31995/TCP      5h34m
  14. Run the following command to verify the ATS deployment:

    helm status <release_name>

    The following output is an example of a successful ATS deployment, where STATUS: deployed indicates success.

    
    $ helm status ocats -n cicdnssf-241204053638
    NAME: ocats
    LAST DEPLOYED: Wed Dec 4 05:37:54 2024
    NAMESPACE: cicdnssf-hrithik-241204053638
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

Virtual_Host_NRF_Resolution_By_NSSF_Using_DNSSRV ATS Feature

To ensure the functionality of the Virtual_Host_NRF_Resolution_By_NSSF_Using_DNSSRV ATS feature, the following configuration must be enabled:

nrf-client:
  # Configuration map for providing inputs to the NRF-Client
  configmapApplicationConfig:
    enableVirtualNrfResolution: true
    virtualNrfFqdn: nrfstub.changeme-ocats.svc
    virtualNrfScheme: http

The necessary changes to the NSSF custom-values.yaml file are outlined below:

Example Configuration for Namespace "ocnssfats"

If the NSSF ATS is deployed in the ocnssfats namespace, update the virtualNrfFqdn configuration as follows:
nrf-client:
  # Configuration map for providing inputs to the NRF-Client
  configmapApplicationConfig:
    enableVirtualNrfResolution: true
    virtualNrfFqdn: nrfstub.ocnssfats.svc
    virtualNrfScheme: http

Configuration for Different HELM Release Names

  • Default Release Name (ocnssf): No changes are required.
  • Alternate Release Name: If the HELM release name is different (e.g., ocnssfats), update the following parameter:
    # Alternate Route Service Host Value
    # Replace 'ocnssf' with the HELM release name
    alternateRouteServiceHost: ocnssfats-alternate-route
    

Steps to Integrate DNS Stub with NSSF

  1. Enable virtual FQDN resolution in the nrf-client management app profile configuration.
    Example:
    nrf-client:
        profile: |-
          [appcfg]
          primaryNrfApiRoot=nrf-stubserver.changeme-ocats:8080
          secondaryNrfApiRoot=nrf-stubserver.changeme-ocats:8080
          nrfScheme=http
          retryAfterTime=PT120S
          nrfClientType=NSSF
          nrfClientSubscribeTypes=
          appProfiles=[{"nfInstanceId":"9faf1bbc-6e4a-4454-a507-aef01a101a01","nfType":"NSSF","nfStatus":"REGISTERED","heartBeatTimer":30,"fqdn":"ocnssf-nsgateway.ocnssf.svc","priority":1,"capacity":1,"load":2,"plmnList":[{"mcc":"311","mnc":"480"}],"nfSetIdList":["setEast.nssfset.5gc.mnc480.mcc311"],"locality":"rcnltxekloc1","nfServices":[{"serviceInstanceId":"92d59bfc-e5d6-47f5-a26b-3a03facdebcc","serviceName":"nnssf-nsselection","versions":[{"expiry":null,"apiFullVersion":"1.0.0","apiVersionInUri":"v1"}],"scheme":"http","nfServiceStatus":"REGISTERED","fqdn":"ocnssf1-ingress-gateway.ocnssf.svc","interPlmnFqdn":null,"ipEndPoints":[{"ipv4Address":"10.224.45.178","transport":"TCP","port":80}],"allowedNfTypes":["AMF","NSSF"],"priority":1,"capacity":1,"load":2},{"serviceInstanceId":"d33728cd-6e21-434b-bc5a-ed69bc612377","serviceName":"nnssf-nssaiavailability","versions":[{"expiry":null,"apiFullVersion":"1.0.0","apiVersionInUri":"v1"}],"scheme":"http","nfServiceStatus":"REGISTERED","fqdn":"ocnssf2-ingress-gateway.ocnssf.svc","interPlmnFqdn":null,"ipEndPoints":[{"ipv4Address":"10.224.45.179","transport":"TCP","port":80}],"allowedNfTypes":["AMF","NSSF"],"priority":1,"capacity":1,"load":2}]}]
          enableF3=true
          enableF5=true
          renewalTimeBeforeExpiry=3600
          validityTime=30
          enableSubscriptionAutoRenewal=true
          nfHeartbeatRate=80
          acceptAdditionalAttributes=false
          retryForCongestion=5
          enableVirtualNrfResolution=true
          virtualNrfFqdn=nrfstub.changeme-ocats.svc
          virtualNrfScheme=http
  2. In the above configuration, since "enableVirtualNrfResolution=true" and "virtualNrfFqdn=nrfstub.changeme-ocats.svc" have been set, the NRF client management service attempts to contact the Egress Gateway, and the Egress Gateway attempts to connect to the Alternate Route Service for virtual FQDN resolution. In this service, static FQDN settings are configured as shown below:
    #Static virtual FQDN Config
    staticVirtualFqdns:
      - name: https://abc.test.com
        alternateFqdns:
          - target: amf-stubserver.changeme-ocats
            port: 8080
            priority: 10
          - target: nrf-stubserver.changeme-ocats
            port: 8080
            priority: 20
      - name: http://xyz.test.com
        alternateFqdns:
          - target: amf-stubserver.changeme-ocats
            port: 8080
            priority: 10
          - target: nrf-stubserver.changeme-ocats
            port: 8080
            priority: 20

    According to the static virtual FQDN configuration under ARS, there is no record for the FQDN "virtualNrfFqdn=nrfstub.changeme-ocats.svc". As a result, ARS will attempt to resolve this FQDN by contacting the default CoreDNS server (the default Kubernetes cluster DNS server). To ensure proper resolution, we need to point ARS to the DNS stub we installed by editing the ARS deployment file after deploying NSSF.

  3. Before running the ATS test suite, the nrf-client-nfdiscovery and nrf-client-nfmanagement pods have to be restarted. Here are the steps to edit the "alternate route service" deployment to point it towards the DNS Stub:
    1. Run the following command to get the list of pods:
      $ kubectl get pods -n <namespace>

      For example:

      $ kubectl get pods -n ocnssf
      Expected sample response:
      
      NAME                                                     READY   STATUS      RESTARTS      AGE
      ocats-amf-stubserver-6f57f7f57f-bk75w                    2/2     Running     0             64m
      ocats-nrf-stubserver-5f89cdb74c-649kx                    2/2     Running     0             64m
      ocats-nssf-76cf4fc678-vrl24                              2/2     Running     0             64m
      ocdns-bind-5975fc59c8-6vllx                              2/2     Running     0             41m
      ocnssf-alternate-route-5f857d94bb-dm5vl                  2/2     Running     0             40m
    2. Run following command to get searches information from dns-bind pod to enable communication between Alternate Route and dns-bind service:
      kubectl exec -it <dnsbind-pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
      For example:
      
      $ kubectl exec -it ocdns-bind-5975fc59c8-6vllx -n ocnssf -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
      
      Expected sample response:
      ocnssf.svc.cluster.local
      svc.cluster.local
      cluster.local
      gbucdsint02phx.oraclevcn.com
      snphxprshared1.gbucdsint02phx.oraclevcn.com
    3. By default, the alternate route service points to CoreDNS, and you will see the following settings in the deployment file. Edit the alternate route service deployment as below:
      $ kubectl edit deployment ocnssf-alternate-route -n <NAMESPACE>

      For example:

      $ kubectl edit deployment ocnssf-alternate-route -n ocnssf
      terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    4. Edit the deployment file to add the following content in the alternate route service to query the DNS stub:
      • Add the nameserver IP address that you recorded after installing the DNS stub (cluster IP of the DNS Stub).
      • Add all the search entries, one by one, that you recorded earlier.
      • Set dnsPolicy to "None".
      dnsConfig:
        nameservers:
        - <dns_stub_cluster_ip_address>
        searches:
        - <dns-bind search domain>
        - <dns-bind search domain>
        - <dns-bind search domain>
      dnsPolicy: None
      To find the cluster IP of the DNS stub, run:
      $ kubectl get svc -n ocnssf

      Expected sample response:

      
      NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
      dnssim                                  ClusterIP      10.96.65.56     <none>        53/UDP,6236/TCP     48m

      For example:

      terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsConfig:
        nameservers:
        - 10.96.65.56
        searches:
        - ocnssf.svc.cluster.local
        - svc.cluster.local
        - cluster.local
        - gbucdsint02phx.oraclevcn.com
        - snphxprshared1.gbucdsint02phx.oraclevcn.com
      dnsPolicy: None
      restartPolicy: Always

Verification

Use the following curl command to verify the setup:

curl -v --http2-prior-knowledge -X GET "http://ocnssf-alternate-route:80/lookup?fqdn=nf1stub.ocnssf.svc&scheme=http"
Expected sample response:
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 10.96.194.219...
* TCP_NODELAY set
* Connected to ocnssf-alternate-route (10.96.194.219) port 80 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55b19b19c6f0)
> GET /lookup?fqdn=nf1stub.ocnssf.svc&scheme=http HTTP/2
> Host: ocnssf-alternate-route
> User-Agent: curl/7.61.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 2147483647)!
< HTTP/2 200
< content-type: application/json
< date: Tue, 08 Oct 2024 05:48:02 GMT
< x-envoy-upstream-service-time: 5
<
* Connection #0 to host ocnssf-alternate-route left intact
[{"target":"ocats-amf-stubserver.ocnssf.svc","port":8080,"ttl":86400,"type":"SRV","dclass":"IN","priority":1,"weight":60},{"target":"ocats-nrf-stubserver.ocnssf.svc","port":8080,"ttl":86400,"type":"SRV","dclass":"IN","priority":20,"weight":20},{"target":"ocats-nrf-stubserver1.ocnssf.svc","port":8080,"ttl":86400,"type":"SRV","dclass":"IN","priority":30,"weight":20}][ocnrfusr@ocnssf-ocnssf-nrf-client-nfmanagement-744ff58956-tfr5g app]$

3.4 Installing ATS for Policy

The Installing ATS for Policy procedure consists of the following two steps:

  1. Locating and downloading the ATS package
  2. Deploying ATS and stub pods in Kubernetes cluster

This includes installation of nine stubs (nf1stub, nf11stub, nf12stub, nf2stub, nf21stub, nf22stub, nf3stub, nf31stub, nf32stub), ocamf stub, ocdns-bind stub, ocldap-stub, and Policy ATS in the namespace where CNC Policy is deployed.

3.4.1 Resource Requirements

This section describes the ATS resource requirements for CNC Policy.

Overview - Total Number of Resources

The following table describes the overall resource usage in terms of CPUs and memory for the following:
  • PCF SUT
  • cnDBTier
  • ATS

Table 3-10 PCF - Total Number of Resources

| Resource Name | Non-ASM CPU | Non-ASM Memory (GB) | ASM CPU | ASM Memory (GB) |
|---|---|---|---|---|
| PCF SUT Total | 219 | 197 | 293 | 244 |
| ATS Total | 26 | 28 | 54 | 42 |
| cnDBTier Total | 107.1 | 175.2 | 137.1 | 190.2 |
| Grand Total PCF ATS | 352.1 | 400.2 | 484.1 | 476.2 |

PCF Pods Resource Requirements Details

This section describes the resource requirements, which are needed to deploy Policy ATS successfully.

Table 3-11 PCF Pods Resource Requirements Details

| Policy Microservices | Max CPU | Memory (GB) | Max Replica | Non-ASM Total CPU | Non-ASM Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| oc-app-info | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| oc-bulwark | 8 | 6 | 2 | 16 | 12 | 20 | 14 | 2 | 1 |
| oc-diam-connector | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| oc-diam-gateway | 4 | 2 | 1 | 4 | 2 | 6 | 3 | 2 | 1 |
| alternate-route | 2 | 4 | 1 | 2 | 4 | 4 | 5 | 2 | 1 |
| oc-config-server | 4 | 2 | 1 | 4 | 2 | 6 | 3 | 2 | 1 |
| ocegress_gateway | 4 | 6 | 1 | 4 | 6 | 6 | 7 | 2 | 1 |
| ocingress_gateway | 5 | 6 | 1 | 5 | 6 | 7 | 7 | 2 | 1 |
| nrf-client-disc | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| nrf-client-mngt | 1 | 1 | 2 | 2 | 2 | 6 | 4 | 2 | 1 |
| oc-audit | 2 | 4 | 1 | 2 | 4 | 4 | 5 | 2 | 1 |
| oc-config-mgmt | 4 | 2 | 2 | 8 | 4 | 12 | 6 | 2 | 1 |
| oc-ldap-gateway | 4 | 2 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-policy-ds | 7 | 8 | 2 | 14 | 16 | 18 | 18 | 2 | 1 |
| oc-pre | 4 | 4 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-query | 2 | 1 | 2 | 4 | 2 | 8 | 4 | 2 | 1 |
| oc-soap-connector | 4 | 4 | 2 | 8 | 8 | 12 | 10 | 2 | 1 |
| oc-pcf-am | 8 | 8 | 2 | 16 | 16 | 20 | 18 | 2 | 1 |
| oc-pcf-sm | 7 | 10 | 2 | 14 | 20 | 18 | 22 | 2 | 1 |
| oc-pcf-ue | 8 | 6 | 2 | 16 | 12 | 20 | 24 | 2 | 1 |
| oc-pcrf-core | 8 | 8 | 2 | 16 | 16 | 0 | 18 | 2 | 1 |
| oc-perf-info | 2 | 2 | 2 | 4 | 4 | 8 | 6 | 2 | 1 |
| oc-binding | 6 | 8 | 1 | 6 | 8 | 8 | 9 | 2 | 1 |
| oc-udr-connector | 6 | 4 | 2 | 12 | 8 | 16 | 10 | 2 | 1 |
| oc-chf-connector | 6 | 4 | 2 | 12 | 8 | 16 | 10 | 2 | 1 |
| usage-mon | 5 | 4 | 2 | 10 | 8 | 14 | 10 | 2 | 1 |
| nwdaf-agent | 2 | 1 | 1 | 2 | 1 | 4 | 2 | 2 | 1 |
| notifier | 2 | 1 | 2 | 4 | 2 | 8 | 4 | 2 | 1 |
| Policy Totals | | | | 219 | 197 | 293 | 244 | | |

ATS Resource Requirements details for Policy

This section describes the ATS resource requirements, which are needed to deploy Policy ATS successfully.

Table 3-12 ATS Resource Requirements Details

| ATS Microservices | Max CPU | Max Memory (GB) | Max Replica | Non-ASM Total CPU | Non-ASM Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| ocstub1-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub2-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub3-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub11-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub12-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub21-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub22-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub31-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocstub32-py | 2 | 2 | 1 | 2 | 2 | 4 | 3 | 2 | 1 |
| ocamf-stub | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ocats-policy | 4 | 6 | 1 | 4 | 6 | 6 | 7 | 2 | 1 |
| ocdns-bind | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| oc-ldap-org1 | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ocdiam-sim | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| ATS Totals | | | | 26 | 28 | 54 | 42 | | |

cnDBTier Resource Requirements Details for Policy ATS

This section describes the cnDBTier resource requirements, which are needed to deploy Policy ATS successfully.

Note:

For cnDBTier pods, a minimum of 4 worker nodes are required.

Table 3-13 cnDBTier Resource Requirements Details

| cnDBTier Microservices | Min CPU | Min Memory (GB) | Min Replica | Total CPU | Total Memory (GB) | ASM Total CPU | ASM Total Memory (GB) | Istio ASM CPU | Istio ASM Memory (GB) |
|---|---|---|---|---|---|---|---|---|---|
| db_monitor_svc | 1 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 1 |
| db_replication_svc | 2 | 12 | 1 | 2 | 12 | 4 | 13 | 2 | 1 |
| db_backup_manager_svc | 0.1 | 0.2 | 1 | 0.1 | 0.2 | 2.1 | 1.2 | 2 | 1 |
| ndbappmysqld | 8 | 10 | 4 | 32 | 40 | 40 | 44 | 2 | 1 |
| ndbmgmd | 4 | 10 | 2 | 8 | 20 | 12 | 22 | 2 | 1 |
| ndbmtd | 10 | 18 | 4 | 40 | 72 | 48 | 76 | 2 | 1 |
| ndbmysqld | 8 | 10 | 2 | 16 | 20 | 20 | 22 | 2 | 1 |
| db_infra_monitor_svc | 8 | 10 | 1 | 8 | 10 | 8 | 10 | | |
| cnDBTier Total | | | | 107.1 | 175.2 | 137.1 | 190.2 | | |

Note:

The requirements shown in the above table for cnDBTier are the default numbers and must be changed as per the deployment requirements.

3.4.2 Downloading the ATS Package

This section provides information on how to locate and download the Policy ATS package file from My Oracle Support (MOS).

Locating and Downloading Policy ATS Package

To locate and download the ATS package from MOS, perform the following steps:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Policy <release_number> using the drop-down menu of the Release field.
  6. Click Search. The list of Patch Advanced Search Results appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the CNC Policy ATS package file.
  10. Untar the gzip file ocats-policy-tools-25.1.200.0.0.tgz to access the following files:

    ocats-policy-pkg-25.1.200.0.0.tgz
    ocdns-pkg-25.1.204.0.0.tgz
    ocamf-pkg-25.1.204.0.0.tgz
    oc-ldap-org1-pkg-25.1.204.0.0.tgz
    ocstub-pkg-25.1.201.0.0.tgz
    ocdiam-pkg-25.1.204.0.0.tgz

    The contents included in each of these files are as follows:

    ├── ocats-policy-pkg-25.1.200.0.0.tgz
    │   ├── ocats-policy-25.1.200.tgz (Helm Charts)
    │   ├── ocats-policy-image-25.1.200.tar (Docker Images)
    │   └── ocats-policy-data-25.1.200.tgz (Policy ATS and Jenkins job Data)
    ├── ocstub-pkg-25.1.201.0.0.tgz
    │   ├── ocstub-py-25.1.201.tgz (Helm Charts)
    │   └── ocstub-py-image-25.1.201.tar (Docker Images)
    ├── ocdns-pkg-25.1.204.0.0.tgz
    │   ├── ocdns-bind-25.1.204.tgz (Helm Charts)
    │   └── ocdns-bind-image-25.1.204.tar (Docker Images)
    ├── ocamf-pkg-25.1.204.0.0.tgz
    │   ├── ocamf-stub-25.1.204.tgz (Helm Charts)
    │   └── ocamf-stub-image-25.1.204.tar (Docker Images)
    ├── oc-ldap-org1-pkg-25.1.204.0.0.tgz
    │   ├── oc-ldap-org1-25.1.204.tgz (Helm Charts)
    │   └── oc-ldap-org1-image-25.1.204.tar (Docker Images)
    └── ocdiam-pkg-25.1.204.0.0.tgz
        ├── ocdiam-sim-25.1.204.tgz (Helm Charts)
        └── ocdiam-sim-image-25.1.204.tar (Docker Images)
  11. Copy the tar file from the downloaded package to CNE, OCI, or Kubernetes cluster where you want to deploy ATS.
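A sketch of extracting the downloaded packages on the cluster host (file names as listed above):

tar -xvzf ocats-policy-tools-25.1.200.0.0.tgz
tar -xvzf ocats-policy-pkg-25.1.200.0.0.tgz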

3.4.3 Deploy Policy ATS with TLS enabled

3.4.3.1 Generate JKS File for Jenkins Server

To access the ATS Jenkins GUI through HTTPS, a JKS file must be created.

Perform the following steps to generate the JKS file:

Generate the Root Certificate

  1. If the user has a Certificate Authority (CA) signed root certificate, such as caroot.cert, and its key, then the user can use those files.
  2. If a root certificate is not already available, the user can generate a self-signed root certificate. The generated root certificate needs to be added to a truststore, such as that of a browser like Firefox or Chrome. Follow the browser-specific documentation to upload the root certificate. The root certificate is used to sign the application, or ATS, certificate.
  3. Generate a root key with the following command:
    openssl genrsa 2048 > caroot.key

    This will generate a key called caroot.key

  4. Generate a caroot certificate with the following command:
    openssl req -new -x509 -nodes -days 1000 -key <root_key> > <root_certificate>

    For example,

    
    [cloud-user@platform-bastion-1]$ openssl req -new -x509 -nodes -days 1000 -key caroot.key > caroot.cer
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:IN
    State or Province Name (full name) []:KA
    Locality Name (eg, city) [Default City]:BLR
    Organization Name (eg, company) [Default Company Ltd]:ORACLE
    Organizational Unit Name (eg, section) []:CGBU
    Common Name (eg, your name or your server's hostname) []:ocats
    Email Address []:
    [cloud-user@platform-bastion-1]$
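Optionally, you can inspect the generated root certificate before using it; a sketch with standard openssl options:

openssl x509 -in caroot.cer -noout -subject -dates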

Generate Application or Client Certificate

  1. Create a ssl.conf file.
  2. Edit the ssl.conf file. In the "[alt_names]" section, list the IPs that are used to access ATS GUI as shown in the following sample ssl.conf file:
    
    [ req ]
    default_bits       = 4096
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext
     
    [ req_distinguished_name ]
    countryName                 = Country Name (2 letter code)
    countryName_default         = IN
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = KN
    localityName                = Locality Name (eg, city)
    localityName_default        = BLR
    organizationName            = Organization Name (eg, company)
    organizationName_default    = ORACLE
    commonName                  = Common Name (e.g. server FQDN or YOUR name)
    commonName_max              = 64
    commonName_default          = ocats.ocpcf.svc.cluster.local
     
    [ req_ext ]
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth
    basicConstraints = critical, CA:FALSE
    subjectAltName = critical, @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    IP.2 = 10.75.217.5
    IP.3 = 10.75.217.76
    DNS.1 = localhost
    DNS.2 = ocats.ocpcf.svc.cluster.local

    Note:

    • To access the GUI with DNS, make sure that the commonName_default is the same as the DNS name being used.
      • Ensure the DNS is in this format: <service_name>.<namespace>.svc.cluster.local

        Multiple DNSs, such as DNS.1, DNS.2, and so on, can be added.

    • To support the ATS API, it is necessary to add the IP 127.0.0.1 to the list of IPs.
  3. Create a Certificate Signing Request (CSR) with the following command:
    openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Output:
    
    [cloud-user@platform-bastion-1 ocpcf]$ openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ...+++++
    ........+++++
    writing new private key to 'rsa_private_key_pkcs1.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KA]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    Common Name (e.g. server FQDN or YOUR name) [ocpcf]:
    [cloud-user@platform-bastion-1 ocpcf]$
  4. To display all the components of the CSR file and to verify the configurations, run the following command:
    openssl req -text -noout -verify -in ssl_rsa_certificate.csr
  5. Sign the CSR file with root certificate by running the following command:
    openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA ../caroot.cer -CAkey ../caroot.key -set_serial 04 > ssl_rsa_certificate.crt

    Output:

    
    [cloud-user@platform-bastion-1 ocpcf]$ openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA ../caroot.cer -CAkey ../caroot.key -set_serial 04 > ssl_rsa_certificate.crt
    Signature ok
    subject=C = IN, ST = KA, L = BLR, O = ORACLE, CN = ocpcf
    Getting CA Private Key
    [cloud-user@platform-bastion-1 ocpcf]$
  6. Verify if the certificate is signed by the root certificate by running the following command:
    [cloud-user@platform-bastion-1 ocpcf]$ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt

    Output:

    
    [cloud-user@platform-bastion-1 ocpcf]$ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt
    ssl_rsa_certificate.crt: OK
  7. Save the generated application certificate and root certificate.
  8. Add the caroot.cer to the browser as a trusted author.
  9. The generated application/client certificates cannot be directly given to the Jenkins server. Hence, generate the .p12 keystore file for the client certificate with the following command:
    
    [cloud-user@platform-bastion-1 ocpcf]$ openssl pkcs12 -inkey rsa_private_key_pkcs1.key -in ssl_rsa_certificate.crt -export -out certificate.p12
    Enter Export Password:
    Verifying - Enter Export Password:
  10. In the prompt, create a password and save it for future use.
  11. Convert the .p12 keystore file into a JKS format file using the following command:
    
    [cloud-user@platform-bastion-1 ocpcf]$ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS

    Output:

    
    [cloud-user@platform-bastion-1 ocpcf]$ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS
    Importing keystore ./certificate.p12 to jenkinsserver.jks...
    Enter destination keystore password:
    Re-enter new password:
    Enter source keystore password:
    Entry for alias 1 successfully imported.
    Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
  12. In the prompt, use the same password used while creating .p12 keystore file.

    Note:

    Ensure that the .p12 keystore and JKS files have the same password.
  13. The generated JKS file, jenkinsserver.jks, is added to the Jenkins path, where the Jenkins server can access it.

For more details about the ATS TLS feature, refer to Deploy ATS with TLS Enabled section.

3.4.3.2 Enable TLS on Python Stubs
Before starting the creation of the certificate, the user should have the path details of the caroot.cer file.
  1. Create a stub_ssl.conf file.
  2. Edit the stub_ssl.conf file. Ensure that the DNS is in the format *.<namespace>.svc.
    A sample stub_ssl.conf file:
    
    [ req ]
    default_bits       = 4096
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext
     
    [ req_distinguished_name ]
    countryName                 = Country Name (2 letter code)
    countryName_default         = IN
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = KN
    localityName                = Locality Name (eg, city)
    localityName_default        = BLR
    organizationName            = Organization Name (eg, company)
    organizationName_default    = ORACLE
    commonName                  = svc.cluster.local
     
    [ req_ext ]
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth
    basicConstraints = critical, CA:FALSE
    subjectAltName = critical, @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    DNS.1 = *.ocats.svc
  3. Create a Certificate Signing Request (CSR) for the stubs using the following command:
    $ openssl req -config stub_ssl.conf -newkey  rsa:2048 -days 1000 -nodes -keyout rsa_private_key_stub_pkcs2.key > stub_ssl_rsa_certificate1.csr

    Output:

    $ openssl req -config stub_ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_stub_pkcs2.key > stub_ssl_rsa_certificate1.csr
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ....................+++++
    ...+++++
    writing new private key to 'rsa_private_key_stub_pkcs2.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KN]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    svc.cluster.local []:*.ocats.svc
  4. Sign the certificate with the CA root using the following command:
    openssl x509 -extfile stub_ssl.conf -extensions req_ext -req -in stub_ssl_rsa_certificate1.csr -days 1000 -CA  ../ocats-caroot.cer -CAkey ../ocats-caroot.key -set_serial 05 >  stub_ssl_rsa_certificate1.crt

    Output:

    
    $ openssl x509 -extfile stub_ssl.conf -extensions req_ext -req -in stub_ssl_rsa_certificate1.csr -days 1000 -CA ../ocats-caroot.cer -CAkey ../ocats-caroot.key -set_serial 05 > stub_ssl_rsa_certificate1.crt
    Signature ok
    subject=C = IN, ST = KN, L = BLR, O = ORACLE, CN = *.ocats.svc
    Getting CA Private Key
  5. Create a secret for the stub and associate it with the namespace using the following command:
    $ kubectl create secret generic ocats-stub-secret1 --from-file=stub_ssl_rsa_certificate1.crt --from-file=rsa_private_key_stub_pkcs2.key --from-file=../ocats-caroot.cer -n ocpcf

    Output:

    
    kubectl create secret generic ocats-stub-secret1 --from-file=stub_ssl_rsa_certificate1.crt --from-file=rsa_private_key_stub_pkcs2.key --from-file=../ocats-caroot.cer -n ocpcf
    secret/ocats-stub-secret1 created
  6. Update the values.yaml file of each Python stub in the specific NF namespace with the following details:
    
    NF: "<NF-Name>"
    cert_secret_name: "ocats-stub-secret"
    ca_cert: "ocats-caroot.cer"
    client_cert: "ocats-stub_ssl_rsa_certificate.crt"
    private_key: "ocats-rsa_private_key_stub_pkcs1.key"
    expose_tls_service: true
    CLIENT_CERT_REQ: true

    Note:

    If the Helm parameter cert_secret_name is null, then the ca_cert, client_cert, and private_key values are not considered by TLS.
  7. Ensure that you update the deployments of all the Python stubs installed on the setup, as shown in the sketch below.
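
    For example, a sketch of rolling the updated values out to one stub (the release and chart names here follow the deployment examples later in this document; adjust them to your setup):

    helm upgrade nf1stub ocstub-py -n <namespace> -f ocstub-py/values.yaml

    Repeat the upgrade for each installed Python stub.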

For more details about the ATS TLS feature, refer to Support for Transport Layer Security section.

3.4.3.3 Enable TLS on ocamf Stubs

Before creating the certificates, ensure that you have the path details of the ocats-caroot.cer and ocats-caroot.key files.

  1. In the values.yaml file, the following parameters are provided to configure TLS for the ocamf stub:
    
    tls:
      enabled: false
      version: "TLSv1.2,TLSv1.3"
      appPort: 8443
      initialAlgorithm: 'RS256'
      secretName: 'app-tls-secret'
      rsaPrivateKeyFileName: 'app-key.pem'
      ecdsaPrivateKeyFileName: 'app-ecdsa-private-key.pem'
      rsaCertificateFileName: 'app-cert.crt'
      ecdsaCertifacateFileName: 'app-ecdsa-certificate.crt'
      caBundleFileName: 'ocats-caroot.cer'
      certReloadEnabled: true
      certReloaderDelay: 15000
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_AES_128_GCM_SHA256
        - TLS_CHACHA20_POLY1305_SHA256
    1. To enable TLS for the ocamf stub, set tls.enabled to true, set tls.version to either TLSv1.2 or TLSv1.3, and configure the tls.appPort, tls.initialAlgorithm, and tls.secretName parameters.
    2. It is not mandatory to provide both the tls.rsaPrivateKeyFileName and tls.ecdsaPrivateKeyFileName files. You can provide either one or both of them.
    3. Configure the tls.cipherSuites parameter to test compatibility of different ciphers based on TLS versions.
    4. A Sample values.yaml file configured for enabling TLS on ocamf stub:
      
      tls:
        enabled: true
        version: "TLSv1.3"
        appPort: 8443
        initialAlgorithm: 'RS256'
        secretName: '${NAMESPACE}-ocamf-tls-secret'
        rsaPrivateKeyFileName: '${NAMESPACE}-ocamf-app-key.pem'
        ecdsaPrivateKeyFileName: 'app-ecdsa-private-key.pem'
        rsaCertificateFileName: '${NAMESPACE}-ocamf-app-cert.crt'
        ecdsaCertifacateFileName: 'app-ecdsa-certificate.crt'
        caBundleFileName: 'ocats-caroot.cer'
        certReloadEnabled: true
        certReloaderDelay: 15000
        cipherSuites:
          - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
          - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
          - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
          - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
          - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
          - TLS_AES_256_GCM_SHA384
          - TLS_AES_128_GCM_SHA256
          - TLS_CHACHA20_POLY1305_SHA256
      
  2. Create the certificate and private key for the ocamf application by using the following steps.
    1. Run the following command to create a private key:
      openssl genrsa -out ${NAMESPACE}-ocamf-app-key.pem 4096
    2. Run the following command to generate a certificate request for the generated private key:
      openssl req -new -sha256 -key ${NAMESPACE}-ocamf-app-key.pem -out ${NAMESPACE}-ocamf-app-csr.csr -subj "/CN=ocamf2.${NAMESPACE}.svc.cluster.local"
    3. Run the following command to sign the certificate with the CA root:
      openssl x509 -req -sha256 -days 1000 \
        -in ${NAMESPACE}-ocamf-app-csr.csr \
        -CA ocats-caroot.cer \
        -CAkey ocats-caroot.key \
        -out ${NAMESPACE}-ocamf-app-cert.pem \
        -CAcreateserial
    4. Run the following command to rename the .pem file to a .crt file:
      mv ${NAMESPACE}-ocamf-app-cert.pem ${NAMESPACE}-ocamf-app-cert.crt
  3. Create a secret for the stub and associate it with the namespace using the following command:
    kubectl create secret generic ${NAMESPACE}-ocamf-tls-secret --from-file=${NAMESPACE}-ocamf-app-key.pem --from-file=ocats-caroot.cer --from-file=${NAMESPACE}-ocamf-app-cert.crt -n ${NAMESPACE}

    Example:

    kubectl create secret generic altair-ns11-ocamf-tls-secret --from-file=altair-ns11-ocamf-app-key.pem --from-file=ocats-caroot.cer --from-file=altair-ns11-ocamf-app-cert.crt -n altair-ns11
    secret/altair-ns11-ocamf-tls-secret created
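
    To confirm that the secret contains the expected files, you can describe it (a verification sketch; the secret name follows the example above):

    kubectl describe secret altair-ns11-ocamf-tls-secret -n altair-ns11

    The Data section should list the private key, application certificate, and CA root files.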
3.4.3.4 Enable TLS on Diam-Sim

Before creating the certificates, ensure that you have the path details of the ocats-caroot.cer and ocats-caroot.key files.

  1. In the values.yaml file, the following parameters are provided to configure TLS for the diam-sim stub:
    
    tls:
      enabled: false
      secretName: 'dcli-tls-secret'
      rsaPrivateKeyFileName: 'dcli-key.pem'
      ecdsaPrivateKeyFileName: 'dcli-ecdsa-private-key.pem'
      rsaCertificateFileName: 'dcli-cert.crt'
      ecdsaCertifacateFileName: 'dcli-ecdsa-certificate.crt'
      caBundleFileName: 'ca-cert.cer'
    1. By default, the tls.enabled parameter is false. Set it to true to enable TLS on diam-sim.
    2. By default, secretName is dcli-tls-secret. You can change this name, but ensure that the name of the secret you create matches the name provided for secretName.
    3. A Sample values.yaml file configured for enabling TLS on diam-sim stub:
      
      tls:
        enabled: true
        secretName: '${NAMESPACE}-dcli-tls-secret'
        rsaPrivateKeyFileName: '${NAMESPACE}-dcli-key.pem'
        ecdsaPrivateKeyFileName: 'dcli-ecdsa-private-key.pem'
        rsaCertificateFileName: '${NAMESPACE}-dcli-cert.crt'
        ecdsaCertifacateFileName: 'dcli-ecdsa-certificate.crt'
        caBundleFileName: 'ocats-caroot.cer'
  2. Create the certificate and private key for the Diameter Simulator by using the following steps.
    1. Run the following command to create a private key:
      openssl genrsa -out ${NAMESPACE}-dcli-key.pem 4096
    2. Run the following command to generate a certificate request for the generated private key:
      openssl req -new -sha256 -key ${NAMESPACE}-dcli-key.pem -out ${NAMESPACE}-dcli-csr.csr -subj "/CN=occnp-occnp-diam-gateway.${NAMESPACE}.svc"
    3. Run the following command to sign the certificate with the CA root:
      openssl x509 -req -sha256 -days 1000 \
        -in ${NAMESPACE}-dcli-csr.csr \
        -CA ocats-caroot.cer \
        -CAkey ocats-caroot.key \
        -out ${NAMESPACE}-dcli-cert.pem \
        -CAcreateserial
    4. Run the following command to rename the .pem file to a .crt file:
      mv ${NAMESPACE}-dcli-cert.pem ${NAMESPACE}-dcli-cert.crt
  3. Create a secret for the stub and associate it with the namespace using the following command:
    kubectl create secret generic ${NAMESPACE}-dcli-tls-secret --from-file=${NAMESPACE}-dcli-key.pem --from-file=ocats-caroot.cer --from-file=${NAMESPACE}-dcli-cert.crt -n ${NAMESPACE}

    Example:

    kubectl create secret generic u-ns1-dcli-tls-secret --from-file=u-ns1-dcli-key.pem --from-file=ocats-caroot.cer --from-file=u-ns1-dcli-cert.crt -n u-ns1
    secret/u-ns1-dcli-tls-secret created
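
    Optionally, you can confirm that the signed certificate chains to the CA root before using it (standard openssl usage; file names follow the steps above):

    openssl verify -CAfile ocats-caroot.cer ${NAMESPACE}-dcli-cert.crt

    The command prints "<file>: OK" when the certificate verifies against the CA.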
3.4.3.5 Enable ATS GUI with HTTPS

Follow these steps to secure the server by enabling TLS.

  1. Create a Kubernetes secret by adding the files created above:
    kubectl create secret generic ocats-tls-secret --from-file=jenkinsserver.jks --from-file=ssl_rsa_certificate.crt --from-file=rsa_private_key_pkcs1.key --from-file=caroot.cer -n ocpcf

    Where,

    jenkinsserver.jks: This file is needed when atsGuiTLSEnabled is set to true. It is necessary to open the ATS GUI with the secured TLS protocol.

    ssl_rsa_certificate.crt: This is the client application certificate.

    rsa_private_key_pkcs1.key: This is the RSA private key.

    caroot.cer: This file, which was used during creation of the JKS file, must be passed for Jenkins/ATS API communication.

    A sample of the created secret:

    
    [cloud-user@platform-bastion-1 ~]$ kubectl describe secret ocats-tls-secret -n ocpcf
    Name:         ocats-tls-secret
    Namespace:    ocpcf
    Labels:       <none>
    Annotations:  <none>
     
    Type:  Opaque
     
    Data
    ====
    caroot.cer:                        1147 bytes
    ssl_rsa_certificate.crt:           1424 bytes
    jenkinsserver.jks:                 2357 bytes
    rsa_private_key_pkcs1.key:         1675 bytes
  2. Apply the following changes in values.yaml file.
    
    certificates:
      cert_secret_name: "ocats-tls-secret"
      ca_cert: "caroot.cer"
      client_cert: "ssl_rsa_certificate.crt"
      private_key: "rsa_private_key_pkcs1.key"
      jks_file: "jenkinsserver.jks"  # This parameter is needed when atsGuiTLSEnabled is set to true. This file is necessary for the ATS GUI to be opened with the secured TLS protocol.
      jks_password: "123456"  # This is the password given to the JKS file during creation.
    The user can install ATS using the helm install command. Change the atsGuiTLSEnabled Helm parameter value to true for ATS to pick up the certificates and support HTTPS for the GUI.
  3. Import the caroot.cer file into the browser before accessing the ATS GUI using the HTTPS protocol.

    For more details about importing the file into the browser, refer to the Adding a Certificate in Browser section in Enable ATS GUI with HTTPS.

  4. You can now start ATS with the HTTPS protocol. The format of the link to open the ATS GUI is https://<IP>:<port>, for example, https://10.75.217.25:30301.

    The lock symbol in the browser indicates that the server is secured with TLS.
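
    A quick way to confirm that TLS is in effect is to probe the GUI from the command line (a sketch using standard curl; the address follows the example above):

    curl --cacert caroot.cer -I https://10.75.217.25:30301

    A successful TLS handshake returns the HTTP response headers without certificate errors.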

3.4.4 Pushing the Images to Customer Docker Registry

Preparing to deploy ATS and Stub Pods in Kubernetes Cluster

To deploy ATS and stub pods in Kubernetes Cluster, perform the following steps:

  1. Run the following command to extract the tar file content.

    tar -zxvf ocats-policy-tools-25.1.200.0.0.tgz

    The following is the output of this command:
    ocats-policy-pkg-25.1.200.0.0.tgz
    ocstub-pkg-25.1.201.0.0.tgz
    ocdns-pkg-25.1.204.0.0.tgz
    ocamf-stub-25.1.204.0.0.tgz
    oc-ldap-org1-25.1.204.tgz
    ocdiam-pkg-25.1.204.0.0.tgz
  2. Run the following command to extract the helm charts and docker images of Policy ATS:

    tar -zxvf ocats-policy-pkg-25.1.200.0.0.tgz

    The following is the output:
    ocats-policy-25.1.200.tgz
    ocats-policy-images-25.1.200.tar
    ocats-policy-data-25.1.200.tgz
  3. Run the following command to load the ATS docker image:

    docker load --input ocats-policy-images-25.1.200.tar

  4. Run the following commands to tag and push the ATS images:
    
    docker tag ocats-policy:25.1.200 <registry>/ocats-policy:25.1.200
    docker push <registry>/ocats-policy:25.1.200

    Example:

    
    docker tag ocats-policy:25.1.200 localhost:5000/ocats-policy:25.1.200
    docker push localhost:5000/ocats-policy:25.1.200

    Note:

    If you are using Podman instead of Docker, replace docker with podman in all the docker commands given in this document.
  5. Run the following command to untar the helm charts (25.1.200):
    tar -zxvf ocats-policy-25.1.200.tgz

    Note:

    The atsFeatures section is newly introduced in values.yaml to help the Engineering team control feature deliveries over releases.
    It is not advisable to update any of the following flags without the Engineering team's permission.
    atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
      testCaseMapping: true               # To display Test cases on GUI along with Features
      logging: true                       # To enable feature to collect applogs in case of failure
      lightWeightPerformance: false       # The Feature is not implemented yet
      executionWithTagging: true          # To enable Feature/Scenario execution with Tag
      scenarioSelection: false            # The Feature is not implemented yet
      parallelTestCaseExecution: true     # To run ATS features parallel
      parallelFrameworkChangesIntegrated: true # To run ATS features parallel
      mergedExecution: false              # To execute ATS Regression and NewFeatures pipelines together in merged manner
      individualStageGroupSelection: false  # The Feature is not implemented yet
      parameterization: true              # When set to false, the Configuration_Type parameter on the GUI will not be available.
      atsApi: true                        # To trigger ATS using ATS API
      healthcheck: true                   # To enable/disable ATS Health Check.
      atsGuiTLSEnabled: false             # To run ATS GUI in https mode.
      atsCommunicationTLSEnabled: false   # If set to true, ATS will get the necessary variables to communicate with SUT, Stub, or other NFs with TLS enabled. It is not required in ASM environment.
  6. Update the registry name, image name, and tag in the ocats-policy/values.yaml file as required by editing the image.repository and image.tag parameters.
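
    For example, assuming the local registry used in the earlier tag-and-push step (the exact nesting of the image section may vary slightly by chart version):

    image:
      repository: localhost:5000/ocats-policy
      tag: 25.1.200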

3.4.5 Configuring ATS

3.4.5.1 Enabling Static Port
To enable a static port, in the ocats-policy/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter to true and enter a valid nodePort value for the staticNodePort parameter.
service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: false
      staticNodePort: ""
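For example, a sketch pinning the HTTP GUI to node port 30301 (an illustrative value that must fall within your cluster's NodePort range; the api port in the next section is configured the same way):

service:
  type: LoadBalancer
  ports:
    http:
      port: "8080"
      staticNodePortEnabled: true
      staticNodePort: "30301"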
3.4.5.2 Enabling Static API Node Port

To enable a static API node port, in the ocats-policy/values.yaml file under the service section, set the value of the staticNodePortEnabled parameter under api to true and enter a valid nodePort value for the staticNodePort parameter.

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  ports:
    api:
      port: "5001"
      staticNodePortEnabled: false
      staticNodePort: ""
3.4.5.3 Service Account Requirements
To run Policy-ATS, use the following rules to create a service account:
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
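A minimal sketch of wrapping the Policy-ATS rules above in a namespaced Role bound to a service account (the resource names here are illustrative; see the Installation Guide referenced in the note below for the authoritative procedure):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-policy-sa
  namespace: ocpcf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-policy-role
  namespace: ocpcf
rules:
- apiGroups: ["extensions", "apps"]   # both API groups, as in the rules above
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["watch", "get", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets", "configmaps"]
  verbs: ["watch", "get", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-policy-rb
  namespace: ocpcf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocats-policy-role
subjects:
- kind: ServiceAccount
  name: ocats-policy-sa
  namespace: ocpcf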
To run oc-ldap ATS, use the following rules to create a service account:
PolicyRule:
  Resources                   Non-Resource URLs  Resource Names      Verbs
  ---------                   -----------------  --------------      -----
  deployments.apps            []                 [rc1-oc-ldap-org1]  [get list watch create update patch delete]
  deployments.extensions      []                 [rc1-oc-ldap-org1]  [get list watch create update patch delete]
  podsecuritypolicies.policy  []                 [1org1-rc1]         [use]

Note:

For information about creating a service account, see the Oracle Communications Cloud Native Core, Converged Policy Installation, Upgrade, and Fault Recovery Guide available on MOS.
3.4.5.4 Enabling Aspen Service Mesh

This section provides information on how to enable Aspen service mesh while deploying ATS for CNC Policy. The configurations mentioned in this section are optional and should be performed only if ASM is required.

To enable service mesh for CNC Policy ATS, perform the following steps:

  1. If the ASM is not enabled on the global level for the namespace, run the following command before deploying the ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled
    For example:
    kubectl label --overwrite namespace ocpcf istio-injection=enabled
  2. In the service section of the values.yaml file, the ASMEnabled parameter is set to false (default configuration). To enable service mesh, set the value for ASMEnabled to true. The following is a snippet of the service section in the yaml file:
    service:
      customExtension:
        labels: {}
        annotations: {}
      type: LoadBalancer
      ports:
        https:
          port: "8443"
          staticNodePortEnabled: false
          staticNodePort: ""
        http:
          port: "8080"
          staticNodePortEnabled: false
          staticNodePort: ""
        api:
          port: "5001"
          staticNodePortEnabled: false
          staticNodePort: ""
    
      #####set ASMEnabled value to true for service-mesh based installation
      ASMEnabled: false
  3. Uncomment and add the following annotation under the customExtension section of the global section in values.yaml file and deploy the ATS Pods:
    customExtension:
        allResources:
          labels: {}
          annotations: {
          #Enable this section for service-mesh based installation
             traffic.sidecar.istio.io/excludeInboundPorts: "9000",
             traffic.sidecar.istio.io/excludeOutboundPorts: "9000"
            }

    After making this update in the values.yaml file, make sure that all the ATS and stub pods come up with the istio sidecar container.
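
    A quick sidecar check (a sketch; the namespace follows the example above):

    kubectl get po -n ocpcf

    Each ATS and stub pod should report 2/2 containers, indicating that the istio-proxy sidecar was injected alongside the application container.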

  4. For ServerHeader and SessionRetry features, the user needs to perform the following configurations under the envoyFilters for nf1stub, nf11stub, nf12stub, nf2stub, nf21stub, nf22stub, nf3stub, nf31stub, and nf32stub in the occnp-servicemesh-config-custom-values-25.1.200.yaml:

    Note:

    occnp_custom_values_servicemesh_config yaml file and helm charts version names would differ based on the deployed Policy NF version. For example, "occnp_custom_values_servicemesh_config_24.2.0.yaml" or "occnp_custom_values_servicemesh_config_24.2.1.yaml".
    envoyFilters:
      - name: serverheaderfilter-nf1stub
        labelselector: "app: nf1stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf11stub
        labelselector: "app: nf11stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf12stub
        labelselector: "app: nf12stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf2stub
        labelselector: "app: nf2stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf21stub
        labelselector: "app: nf21stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf22stub
        labelselector: "app: nf22stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf3stub
        labelselector: "app: nf3stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf31stub
        labelselector: "app: nf31stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
      - name: serverheaderfilter-nf32stub
        labelselector: "app: nf32stub-ocstub-py"
        configpatch:
          - applyTo: NETWORK_FILTER
            filtername: envoy.filters.network.http_connection_manager
            operation: MERGE
            typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            configkey: server_header_transformation
            configvalue: PASS_THROUGH
     
  5. Perform helm upgrade on the occnp-servicemesh-config release using the modified occnp-servicemesh-config-custom-values-25.1.200.yaml.
    helm upgrade <helm_release_name_for_servicemesh> -n <namespace> <servicemesh_charts> -f <servicemesh-custom.yaml>
    Example:
    helm upgrade occnp-servicemesh-config occnp-servicemesh-config-25.1.200.tgz -n <namespace> -f occnp-servicemesh-config-custom-values-25.1.200.yaml
  6. Configure DNS for Alternate Route service. For more information, see Post-Installation Steps.
3.4.5.5 Enabling Persistent Volume
ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations. With this enhancement, the user can decide whether to use persistent volume based on their resource requirements. By default, the persistent volume feature is not enabled.

Note:

The steps provided in this section are optional and required only if Persistent Volume needs to be enabled.

To enable persistent storage, perform the following steps:

  1. PVC creation is part of helm charts. To create the PVC, configure the following set of parameters in values.yaml file.

    PVEnabled: This indicates whether to enable PVC or not.

    PVClaimName: This indicates the name of the persistent volume claim.

    PVStorageClassName: This indicates the storage class name.

    PVStorage: This indicates the size of the persistent volume.

    retainPVC: This indicates whether to delete or retain the created PVC during uninstallation of ATS.

    A sample edited values.yaml file for enabling PVC:

    
    PVEnabled: true
    PVClaimName: "policy-pvc-25.1.200"
    PVStorageClassName: "standard"
    PVStorage: "5Gi"
    retainPVC: true

    Note:

    • The value of the PVClaimName parameter should be suffixed with the release version to avoid confusion during subsequent releases.
    • To use an already existing PVC, update PVClaimName with the existing name and ATS will use it. If it does not exist, a new PVC is created with the given properties.
    • If the retainPVC parameter is set to true, the PVC is retained during uninstallation of ATS. Set it to false for the default behavior, in which the PVC is deleted.
  2. Verify that the created PVC is bound to the persistent volume and available for use by running:
    kubectl get pvc -n <namespace used for pvc creation>
    kubectl get pvc -n ocpcf

    Sample Output:

    kubectl get pvc -n ocpcf
    NAME                  STATUS   VOLUME                                 CAPACITY       ACCESS MODES   STORAGECLASS   AGE
    policy-pvc-25.1.200   Bound    pvc-65484045-3805-4064-9fc3-f9eeeaccc8b8   1Gi        RWO            standard       11s
    

    Note:

    • Do not proceed to the next step if there is an issue with the PV creation; contact your administrator to get the PV created.
    • Make sure that ATS is deployed before proceeding to the further steps.
  3. Copy the <nf_main_folder> and <jenkins jobs> folders from the tar file to the ATS pod and restart the pod.
    1. Extract the tar file.
      tar -xvf ocats-policy-data-25.1.200.tgz
    2. Run the following commands to copy the desired folder.
      kubectl cp ocats-policy-data-25.1.200/ocpcf_tests <namespace>/<pod-name>:/var/lib/jenkins/
       kubectl cp ocats-policy-data-25.1.200/jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins/
    3. Restart the pod.
      kubectl delete po <pod-name> -n <namespace>
  4. Once the Pod is up and running, log in to the Jenkins console and configure the Discard old Builds option to configure the number of Jenkins builds, which must be retained in the persistent volume.

    Figure 3-9 Discarding Old Builds



    Note:

    If Discard old Builds is not configured, the Persistent Volume can get filled when there is a huge number of builds.

For more details on Persistent Volume Storage, see Persistent Volume for 5G ATS.

3.4.5.6 Enabling Health Check

This section describes how to enable Health Check for ATS.

To enable Health Check, in the ocats-policy/values.yaml file, set the value of the healthcheck parameter to true and enter valid values to select the environment.

sshDetails:
  secretname: "healthchecksecret"
  envtype: ""
  occnehostip: ""
  occnehostusername: ""
  occnehostpassword: ""
  webscalejumpserverip: ""
  webscalejumpserverusername: ""
  webscalejumpserverpassword: ""
  webscaleprojectname: ""
  webscalelabserverFQDN: ""
  webscalelabserverport: ""
  webscalelabserverusername: ""
  webscalelabserverpassword: ""
  ociHealthCheck:
    passwordAuthenticationEnabled: false
    bastion:
      ip: ""
      username: ""
      password: ""
    operatorInstance:
      ip: ""
      username: ""
      password: ""
To select the OCCNE environment, update the values of the following parameters:
  • Webscale - Update the value as false
  • envtype - T0NDTkU= (that is, envtype=$(echo -n 'OCCNE' | base64))
  • occnehostip - OCCNE Host IP address
  • occnehostusername - OCCNE Host Username
  • occnehostpassword - OCCNE Host Password
The following is the sample configuration for OCCNE environment:
atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # TO enable/disable ATS Health Check.
 
sshDetails:
  secretname: "healthchecksecret"
  envtype: "T0NDTkU="
  occnehostip: "MTAuMTcuMjE5LjY1"
  occnehostusername: "dXNlcm5hbWU="
  occnehostpassword: "KioqKg=="
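
Each value is Base64 encoded before it is placed in values.yaml, for example (illustrative inputs; the first two match the sample above):

echo -n 'OCCNE' | base64            # T0NDTkU=
echo -n '10.17.219.65' | base64     # MTAuMTcuMjE5LjY1
echo -n '<username>' | base64
echo -n '<password>' | base64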
To select the WEBSCALE environment, update the values of the following two parameters:
  • Webscale - Update the value as true
  • envtype - V0VCU0NBTEU= (that is, envtype=$(echo -n 'WEBSCALE' | base64))

After the configurations are done, encode the following parameters in Base64 and provide the values as shown in the following snippet:

The following is the sample configuration for WEBSCALE environment:
atsFeatures:  ## DO NOT UPDATE this section without Engineering team's permission
  healthcheck: true                   # TO enable/disable ATS Health Check.
 
sshDetails:
  secretname: "healthchecksecret"
  envtype: "V0VCU0NBTEU="
  webscalejumpserverip: "MTAuNzAuMTE3LjQy"
  webscalejumpserverusername: "dXNlcm5hbWU="
  webscalejumpserverpassword: "KioqKg=="
  webscaleprojectname: "KioqKg=="
  webscalelabserverFQDN: "KioqKg=="
  webscalelabserverport: "KioqKg=="
  webscalelabserverusername: "KioqKg=="
  webscalelabserverpassword: "KioqKg=="

Note:

Once ATS is deployed with the Health Check feature enabled or disabled, the setting cannot be changed. To change the configuration, you must reinstall ATS.
3.4.5.7 ATS-Policy API Extended Support

The ATS application programming interface (API) feature provides APIs to perform routine ATS tasks, such as starting, monitoring, and stopping the ATS suite.

By default, this feature is enabled in the values.yaml file.

  atsFeatures:
    atsApi: true

For more details about the ATS API feature, refer to ATS API section.

This ATS feature is extended to provide the ability to run individual features, scenarios, stages, or groups, or to run test cases based on tags, using the API. It also allows running test cases by providing the features, scenarios, stages, groups, or tags in the curl request to the server.

For more details about the API interfaces, refer to Use the RESTful Interfaces section.

3.4.6 Deploying ATS and Pods

3.4.6.1 Deploying ATS in Kubernetes Cluster

To deploy ATS, perform the following steps:

  1. Run the following command using the updated helm charts.

    Note:

    Ensure that all the components (ATS, go-Stub, dns-bind, ocamf, and CNC Policy) are deployed in the same namespace.

    Using Helm

    helm install -name <release_name> ocats-policy-25.1.200.tgz --namespace <namespace_name> -f <values-yaml-file>

    Example:

    helm install -name ocats ocats-policy-25.1.200.tgz --namespace ocpcf -f ocats-policy/values.yaml
  2. Run the following command to verify ATS deployment:

    helm ls -n ocpcf

    The sample output is as follows:
    
    NAME    REVISION        UPDATED                     STATUS        CHART              APP VERSION     NAMESPACE
    ocats      1         Mon November 6 14:56:11 2023      DEPLOYED    ocats-policy-25.1.200      1.0             ocpcf
    
    The status appears as DEPLOYED after the deployment is successful.
3.4.6.2 Deploying Stub Pod in Kubernetes Cluster

To deploy Stub Pod in Kubernetes cluster, perform the following steps:

  1. Navigate to the ocats-policy-tools-25.1.200.0.0 folder and run the following command to extract the ocstub tar file content.
    tar -zxvf ocstub-pkg-25.1.201.0.0.tgz
    The output of this command is:
    • ocstub-py-25.1.201.tgz
    • ocstub-py-image-25.1.201.tar

    Note:

    To deploy additional stubs required for session retry feature validation:
    • nf11stub, nf12stub → Alternate FQDN for nf1stub
    • nf21stub, nf22stub → Alternate FQDN for nf2stub
    • nf31stub, nf32stub → Alternate FQDN for nf3stub
  2. Run the following command in your cluster to load the STUB image:
    docker load --input ocstub-py-image-25.1.201.tar
  3. Run the following commands to tag and push the STUB image:
    docker tag ocstub-py:25.1.201 <registry>/ocstub-py:25.1.201
    docker push <registry>/ocstub-py:25.1.201
  4. Run the following command to untar the helm charts, ocstub-py-25.1.201.tgz:
    tar -zxvf ocstub-py-25.1.201.tgz
  5. Update the registry name, image name and tag (if required) in the ocstub-py/values.yaml file as required. For this, open the values.yaml file and update the image.repository and image.tag parameters.

    Note:

    From 24.2.0 onwards, service port names are configurable in ocstub-py. However, as per the Istio standard, it is advisable to keep the default values as they are.

    Example:

    
    names:
      http: "http"
      h2c: "http2-h2c"
      h2: "http2-h2"
  6. If required, change the apiVersion to apps/v1 in the ocstub-py/templates/deployment.yaml file as follows:

    apiVersion: apps/v1

  7. If support for the Predefined priming feature is required, perform the following steps to configure Predefined_priming:
    1. Copy ocstub-py/values.yaml file to a new file with name pre_priming_values.yaml.
    2. Edit the ocstub-py/pre_priming_values.yaml file.
    3. Set the value of the preConfig flag to true and replace the default configuration under the predefined_prime_configuration section with the following configurations.

      Sample Predefined_priming configuration:

      
      enabled: true
      predefined_prime_configuration: |+
        [
          {
            "method": "GET",
            "statuscode": "200",
            "url": "/nnrf-nfm/v1/nf-instances/${nfInstanceId}",
            "data": "{\"nfInstanceId\": \"${nfInstanceId}\"}",
            "headers": "{\"Content-Type\":\"application/json\"}"
          },
          {
            "method": "PUT",
            "statuscode": "201",
            "url": "/nnrf-nfm/v1/nf-instances/${nfInstanceId}",
            "data": "{\"nfInstanceId\": \"${nfInstanceId}\", \"nfType\": \"PCF\", \"nfStatus\": \"REGISTERED\", \"heartBeatTimer\": 300, \"capacity\": 100, \"load\": 0, \"pcfInfo\": {\"dnnList\": [\"dnn1\", \"dnn2\"], \"supiRanges\": [{\"start\": \"12123444444\", \"end\": \"232332323323232\"}]}, \"nfServices\": [{\"serviceInstanceId\": \"03063893-cf9e-4f7a-9827-067f6fa9dd01\", \"serviceName\": \"npcf-am-policy-control\", \"versions\": [{\"apiVersionInUri\": \"v1\", \"apiFullVersion\": \"1.R15.1.0\", \"expiry\": \"2020-03-03T18:55:08.871+0000\"}], \"scheme\": \"http\", \"nfServiceStatus\": \"REGISTERED\", \"fqdn\": \"ocpcf-pcf-ingress-gateway.ocpcf.svc\", \"allowedNfTypes\": [\"AMF\", \"NEF\"], \"capacity\": 100, \"load\": 0}, {\"serviceInstanceId\": \"03063893-cf9e-4f7a-9827-067f6fa9dd02\", \"serviceName\": \"npcf-smpolicycontrol\", \"versions\": [{\"apiVersionInUri\": \"v1\", \"apiFullVersion\": \"1.R15.1.0\", \"expiry\": \"2019-08-03T18:55:08.871+0000\"}], \"scheme\": \"http\", \"nfServiceStatus\": \"REGISTERED\", \"fqdn\": \"ocpcf-api-gateway.occ-demo-pcf.svc\", \"allowedNfTypes\": [\"SMF\", \"NEF\", \"AF\"], \"capacity\": 100, \"load\": 0}]}",
            "headers": "{\"Content-Type\":\"application/json\"}"
          },
          {
            "method": "PATCH",
            "statuscode": "204",
            "url": "/nnrf-nfm/v1/nf-instances/${nfInstanceId}",
            "data": "{}",
            "headers": "{\"Content-Type\":\"application/json\"}"
          },
          {
            "method": "POST",
            "statuscode": "201",
            "url": "/nnrf-nfm/v1/subscriptions",
            "data": "{\"nfStatusNotificationUri\": \"http://ocpcf-pcf-ingress-gateway.ocpcf.svc/nnrf-client/v1/notify\", \"reqNfType\": \"PCF\", \"subscriptionId\": \"2d77e0de-15a9-11ea-8c5b-b2ca002e6839\", \"validityTime\": \"2050-12-26T09:34:30.816Z\"}",
            "headers": "{\"Content-Type\": \"application/json\"}"
          }
        ]
      

      Note:

      • The predefined_prime_configuration contains variables such as nfInstanceId, nfType, and fqdn in the data's content. Ensure that you verify and update the variables based on the payload message that must be included in the response from the NRF on a request.
      • The default value of nfInstanceId variable is fe7d992b-0541-4c7d-ab84-c6d70b1b0123.
  8. Deploy Stub.
    
    helm install -name <release_name> ocstub-py --set env.NF=<NF> --set env.LOG_LEVEL=<DEBUG/INFO> --set service.name=<service_name> --set service.appendReleaseName=false --namespace=<namespace_name> -f <values-yaml-file>
    Install nf1stub and nf11stub with updated ocstub-py/pre_priming_values.yaml file.

    Note:

    If support for the Predefined_priming feature is not required, perform the helm installation using the default values.yaml file.
    helm install -name nf1stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/pre_priming_values.yaml

    Install other stubs such as nf12stub, nf11stub and others with default values.yaml file.

    Example:

    
    helm install -name nf1stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf1stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf2stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf2stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf3stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf3stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf11stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf11stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf12stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf12stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf21stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf21stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf22stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf22stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf31stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf31stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
    helm install -name nf32stub ocstub-py --set env.NF=PCF --set env.LOG_LEVEL=DEBUG --set service.name=nf32stub --set service.appendReleaseName=false  --namespace=ocpcf -f ocstub-py/values.yaml
  9. Run the following command to verify stub deployment.

    helm ls -n ocpcf

    The sample output is as follows:
    NAME         REVISION            UPDATED                         STATUS          CHART                        APP VERSION     NAMESPACE
    nf11stub                1               Tue March 14 10:05:59 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf12stub                1               Tue March 14 10:06:00 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf1stub                 1               Tue March 14 10:05:57 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf21stub                1               Tue March 14 10:06:01 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf22stub                1               Tue March 14 10:06:02 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf2stub                 1               Tue March 14 10:05:58 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf31stub                1               Tue March 14 10:06:03 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf32stub                1               Tue March 14 10:06:11 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    nf3stub                 1               Tue March 14 10:05:59 2024        DEPLOYED        ocstub-py-25.1.201         1.0             ocpcf
    
    The status changes to DEPLOYED after the deployment is successful.

    Similarly, install all other stubs.

  10. Run the following command to check the status of Stub deployment.

    helm status <release_name> -n ocpcf

    The sample output is as follows:
    NAME                                         READY   STATUS    RESTARTS   AGE
    nf11stub-ocstub-py-66449ddb94-qg2j9                    1/1     Running   0          19h
    nf12stub-ocstub-py-6b8575487-l8pxv                     1/1     Running   0          19h
    nf1stub-ocstub-py-5ff485954c-prc2x                     1/1     Running   0          19h
    nf21stub-ocstub-py-56cf5b77fc-x8wkr                    1/1     Running   0          19h
    nf22stub-ocstub-py-547dfdf476-4j2sn                    1/1     Running   0          19h
    nf2stub-ocstub-py-6fb6f786d6-bc9fr                     1/1     Running   0          19h
    nf31stub-ocstub-py-c6c6d5584-5m48z                     1/1     Running   0          19h
    nf32stub-ocstub-py-848dfc7757-q797z                    1/1     Running   0          19h
    nf3stub-ocstub-py-6cb769ccd9-4fv9b                     1/1     Running   0          19h
    
  11. Run the following command to verify if all the services are installed.
    kubectl get po -n ocpcf
    A sample output of Policy namespace with Policy and ATS after installation is as follows:
    NAME                                         READY   STATUS    RESTARTS   AGE
    ocpcf-appinfo-6c74cccd47-zsbb2                         1/1     Running   0          155m
    ocpcf-oc-binding-77fbb9b79c-jv7kd                      1/1     Running   0          155m
    ocpcf-oc-diam-connector-6c6fd868bd-4zfrn               1/1     Running   0          155m
    ocpcf-oc-diam-gateway-0                                1/1     Running   0          147m
    ocpcf-oc-oc-stub-595bb858d4-smzj8                      1/1     Running   0          147m
    ocpcf-ocats-ocats-policy-667d8cf78-b8bc8               1/1     Running   0          147m
    ocpcf-occnp-alternate-route-75455c858d-f6qs8           1/1     Running   0          146m
    ocpcf-occnp-alternate-route-75455c858d-sqvlg           1/1     Running   0          147m
    ocpcf-occnp-chf-connector-6b8b8bfcd6-jjch6             1/1     Running   0          155m
    ocpcf-occnp-config-server-77bd99f96-mpscn              1/1     Running   0          155m
    ocpcf-occnp-egress-gateway-59c4b784cc-6dx4w            1/1     Running   0          16m
    ocpcf-occnp-ingress-gateway-75c47c57bc-pljtc           1/1     Running   0          39m
    ocpcf-occnp-nrf-client-nfdiscovery-74b854956b-s6blq    1/1     Running   0          155m
    ocpcf-occnp-nrf-client-nfmanagement-76cb55b8b8-tdjkj   1/1     Running   0          49m
    ocpcf-occnp-udr-connector-75ffb9db9b-7xz9v             1/1     Running   0          155m
    ocpcf-ocdns-ocdns-bind-57fbcd95dc-h4dtl                1/1     Running   0          147m
    ocpcf-ocpm-audit-service-5cc46665c4-j6vhh              1/1     Running   0          155m
    ocpcf-ocpm-cm-service-7795bb4c6c-446rb                 1/1     Running   0          155m
    ocpcf-ocpm-policyds-75cbc9fc9d-7lbl5                   1/1     Running   0          155m
    ocpcf-ocpm-pre-59b94d979-jzkv4                         1/1     Running   0          155m
    ocpcf-ocpm-pre-test-84d9c89dd8-fqlpg                   1/1     Running   0          155m
    ocpcf-ocpm-queryservice-94895bf88-bhwcc                1/1     Running   0          155m
    ocpcf-pcf-amservice-56cdbb75c9-ph7tt                   1/1     Running   0          155m
    ocpcf-pcf-smservice-64b899d766-jfhjm                   1/1     Running   0          155m
    ocpcf-pcf-ueservice-7c6bd7ccc9-mrnxn                   1/1     Running   0          155m
    ocpcf-pcrf-core-7594dbb7f8-z95vt                       1/1     Running   0          155m
    ocpcf-performance-689dd556b-7vblc                      1/1     Running   0          155m
    ocpcfnf11stub-5bb6b4f95d-v6fbb                         1/1     Running   0          147m
    ocpcfnf12stub-59fb974f5d-2qr42                         1/1     Running   0          147m
    ocpcfnf1stub-5bdf545fcb-zgbjb                          1/1     Running   0          147m
    ocpcfnf21stub-ff6db9d86-5hvj6                          1/1     Running   0          147m
    ocpcfnf22stub-794456fd66-sxq8q                         1/1     Running   0          147m
    ocpcfnf2stub-656755dc46-hnr8m                          1/1     Running   0          147m
    ocpcfnf31stub-68c6596b6-jdsgj                          1/1     Running   0          147m
    ocpcfnf32stub-f49b57d86-rklc8                          1/1     Running   0          147m
    ocpcfnf3stub-6c4c697648-lj6q7                          1/1     Running   0          147m
    ocpcf-ocpm-ldap-gateway-5fd489b8fd-52dqn               1/1     Running   0          147m
    
  12. Verify the changes related to the stub predefined prime configurations. Run the following command to verify the status of all the config-maps.
    kubectl get cm -n ocpcf

    Notice the change in the config-map count. It includes two extra config-maps for the stubs; the number of extra config-maps is the same as the number of stubs deployed with predefined priming.

    For example:

    
    NAME                               DATA   AGE
    cm-pystub-nf1stub                   1      3h35m
    cm-pystub-nf11stub                  1      3h35m

Updating the Predefined priming configurations

To update the Predefined priming configurations, such as the value of nfInstanceId in the URL:

Note:

This procedure is applicable only when Predefined priming configuration is enabled.
  1. Run the following command to verify the status of all the config-maps.
    kubectl get cm -n ocpcf
  2. Perform the following steps separately for nf1stub and nf11stub pods.
    1. Edit the config-map of the pod. To edit the config-map of nf1stub, run the command:
      kubectl edit cm cm-pystub-nf1stub -n ocpcf
      To edit the config-map of nf11stub, run the command:
      kubectl edit cm cm-pystub-nf11stub -n ocpcf
  3. Edit the configurations as required, save and close the config-maps.
  4. Restart the nf1stub and nf11stub pods, as shown in the sketch after this list.
  5. Verify the logs of both these pods to confirm the changes.
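
A sketch of the restart and verification (pod names are illustrative; substitute the actual pod names from kubectl get po):

kubectl delete po <nf1stub-pod> <nf11stub-pod> -n ocpcf
kubectl logs -f <new-nf1stub-pod> -n ocpcf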
3.4.6.3 Deploying DNS Stub in Kubernetes Cluster

Note:

Ensure that sufficient resource requests and limits are configured for the DNS Stub. Set the resource request and limit values in the resources section of the values.yaml file as follows:

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 1000m
  #  memory: 1024Mi
  # requests:
  #  cpu: 500m
  #  memory: 500Mi
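
For example, uncommenting the block with its suggested values yields the following (size the values for your environment):

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 500Mi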
To deploy DNS stub in Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-25.1.200.0.0 folder and run the following command:

    tar -zxvf ocdns-pkg-25.1.204.0.0.tgz

    The output of this command is:

    [cloud-user@platform-bastion-1 ocdns-pkg-25.1.204.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocdns-bind-image-25.1.204.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocdns-bind-25.1.204.tgz
  2. Run the following command in your cluster to load the DNS Stub image:

    docker load --input ocdns-bind-image-25.1.204.tar

  3. Run the following commands to tag and push the DNS stub to the registry:
    docker tag ocdns-bind:25.1.204 localhost:5000/ocdns-bind:25.1.204
    docker push localhost:5000/ocdns-bind:25.1.204
  4. Run the following command to untar the helm charts (ocdns-bind-25.1.204.tgz):

    tar -zxvf ocdns-bind-25.1.204.tgz

  5. Update the registry name, image name and tag (if required) in the ocdns-bind/values.yaml file as required. Open the values.yaml file and update the image.repository and image.tag parameters.
  6. Run the following command to install DNS Stub:
    Using Helm:
    [cloud-user@platform-bastion-1 ocdns-bind]$ helm install -name ocdns ocdns-bind-25.1.204.tgz --namespace ocpcf -f ocdns-bind/values.yaml
  7. Run the following commands to capture the cluster IP of the DNS Stub, the namespace where the nfstubs are deployed, and the cluster name of the PCF deployment.
    kubectl get svc -n ocpcf | grep dns
    NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                AGE
    ocdns     ClusterIP      10.233.11.45    <none>          53/UDP,6236/TCP        19h

    Note:

    This information is required to configure DNS stub.

    Figure 3-10 Cluster Name

    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep clusterName
        clusterName: platform
3.4.6.4 Deploying AMF Stub in Kubernetes Cluster
To deploy OCAMF stub in Kubernetes cluster:
  1. Go to the ocats-policy-tools-25.1.200.0.0 folder and run the following command:

    tar -zxvf ocamf-pkg-25.1.204.0.0.tgz

    The output of this command is:
    [cloud-user@platform-bastion-1 ocamf-pkg-25.1.204.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 ocamf-stub-image-25.1.204.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 ocamf-stub-25.1.204.tgz
  2. Run the following command in your cluster to load the AMF Stub image:

    docker load --input ocamf-stub-image-25.1.204.tar

  3. Run the following commands to tag and push the AMF stub image to the registry:
    docker tag ocamf-stub:25.1.204 localhost:5000/ocamf-stub:25.1.204
    docker push localhost:5000/ocamf-stub:25.1.204
  4. Run the following command to untar the helm charts (ocamf-stub-25.1.204.tgz):

    tar -zxvf ocamf-stub-25.1.204.tgz

  5. Update the registry name, image name, and tag (if required) in the ocamf-stub/values.yaml file.
  6. Run the following command to install AMF Stub:

    Using Helm:

    
    [cloud-user@platform-bastion-1 ocamf-stub]$ helm install -name ocamf2 ocamf-stub-25.1.204.tgz --set service.name=ocamf2 --namespace ocpcf -f ocamf-stub/values.yaml

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:
ocamf2-ocamf-ocamf-stub-79c8fbd6f7-qp5cl                1/1     Running   0          5h47m

Note:

Do not deploy ocamf-stub when TLS communication is enabled. ATS features for AM are temporarily disabled until ocamf-stub supports TLS communication. Even if the AM-related ATS features are populated on the ATS GUI, they are not executed.

3.4.6.5 Deploying LDAP Stub in Kubernetes Cluster
To deploy oc-ldap stub in the Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-25.1.200.0.0 folder and run the following command:

    tar -zxvf oc-ldap-org1-pkg-25.1.204.0.0.tgz

    The following is the output:
    [cloud-user@platform-bastion-1 oc-ldap-org1-pkg-25.1.204.0.0]$ ls -ltrh
    total 211M
    -rw-------. 1 cloud-user cloud-user 211M Mar 14 14:49 oc-ldap-org1-image-25.1.204.tar
    -rw-r--r--. 1 cloud-user cloud-user 2.9K Mar 14 14:49 oc-ldap-org1-25.1.204.tgz
  2. Run the following command in your cluster to load the LDAP Stub image:

    docker load --input oc-ldap-org1-image-25.1.204.tar

  3. Run the following commands to tag and push the LDAP stub image to the registry:
    docker tag oc-ldap-org1:25.1.204 localhost:5000/oc-ldap-org1:25.1.204
    docker push localhost:5000/oc-ldap-org1:25.1.204
  4. Run the following command to untar the helm charts (oc-ldap-org1-25.1.204.tgz):

    tar -zxvf oc-ldap-org1-25.1.204.tgz

  5. Update the registry name, image name, and tag (if required) in the oc-ldap-org1/values.yaml file.
  6. Run the following command to install the LDAP Stub:

    Using Helm:

    
    [cloud-user@platform-bastion-1 oc-ldap-org1]$ helm upgrade --install --namespace ocpcf --set image.repository=localhost:5000/occnp/oc-ldap-org1 oc-ldap-org1 oc-ldap-org1-25.1.204.tgz
    

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:
ocpcf-oc-ldap-org1-7b9d957bc6-ngtrl                1/1     Running   0          5h47m

Note:

The oc-ldap-org1-secret of the OC-LDAP Stub is created by the Helm chart that comes with the ATS package.
3.4.6.6 Deploying ocdiam Simulator in Kubernetes Cluster
To deploy ocdiam Simulator in the Kubernetes cluster, perform the following steps:
  1. Go to the ocats-policy-tools-25.1.200.0.0 folder and run the following command:
    tar -zxvf ocdiam-pkg-25.1.204.0.0.tgz

    The following is the output:

    [cloud-user@platform-bastion-1 ocdiam-pkg-25.1.204.0.0]$ ls -ltrh
    total 908M
    -rw-------. 1 cloud-user cloud-user 908M Mar 14 14:49 ocdiam-sim-image-25.1.204.tar
    -rw-r--r--. 1 cloud-user cloud-user 3.8K Mar 14 14:49 ocdiam-sim-25.1.204.tgz
  2. Run the following command in your cluster to load the Diameter Simulator image:
    docker load --input ocdiam-sim-image-25.1.204.tar
  3. Run the following commands to tag and push the Diameter Simulator image to the registry:
    docker tag ocdiam-sim:25.1.204 localhost:5000/ocdiam-sim:25.1.204
    docker push localhost:5000/ocdiam-sim:25.1.204
  4. Run the following command to untar the helm charts (ocdiam-sim-25.1.204.tgz):
    tar -zxvf ocdiam-sim-25.1.204.tgz
  5. Update the registry name, image name, and tag (if required) in the ocdiam-sim/values.yaml file.
  6. Run the following command to install Diameter Simulator:

    Using Helm:

    [cloud-user@platform-bastion-1 ocdiam-sim]$ helm install -name ocdiam-sim ocdiam-sim --namespace ocpcf -f ocdiam-sim/values.yaml

The status changes to RUNNING after the deployment is successful.

The following is a sample output for a successful deployment:

ocdiam-sim-69968444b6-fg6ks    1/1     Running   0   5h47m
Sample of Policy namespace with Policy and ATS after installation:
[cloud-user@platform-bastion-1 ocstub-pkg-25.1.201.0.0]$ kubectl get po -n ocpcf
NAME                                                   READY   STATUS    RESTARTS   AGE
ocpcf-appinfo-6c74cccd47-zsbb2                         1/1     Running   0          155m
ocpcf-oc-binding-77fbb9b79c-jv7kd                      1/1     Running   0          155m
ocpcf-oc-diam-connector-6c6fd868bd-4zfrn               1/1     Running   0          155m
ocpcf-oc-diam-gateway-0                                1/1     Running   0          147m
ocamf2-ocamf-stub-595bb858d4-smzj8                     1/1     Running   0          147m
ocpcf-ocats-ocats-policy-667d8cf78-b8bc8               1/1     Running   0          147m
ocpcf-occnp-alternate-route-75455c858d-f6qs8           1/1     Running   0          146m
ocpcf-occnp-chf-connector-6b8b8bfcd6-jjch6             1/1     Running   0          155m
ocpcf-occnp-config-server-77bd99f96-mpscn              1/1     Running   0          155m
ocpcf-occnp-egress-gateway-59c4b784cc-6dx4w            1/1     Running   0          16m
ocpcf-occnp-ingress-gateway-75c47c57bc-pljtc           1/1     Running   0          39m
ocpcf-occnp-nrf-client-nfdiscovery-74b854956b-s6blq    1/1     Running   0          155m
ocpcf-occnp-nrf-client-nfmanagement-76cb55b8b8-tdjkj   1/1     Running   0          49m
ocpcf-occnp-udr-connector-75ffb9db9b-7xz9v             1/1     Running   0          155m
ocpcf-ocdns-ocdns-bind-57fbcd95dc-h4dtl                1/1     Running   0          147m
ocpcf-ocpm-audit-service-5cc46665c4-j6vhh              1/1     Running   0          155m
ocpcf-ocpm-cm-service-7795bb4c6c-446rb                 1/1     Running   0          155m
ocpcf-ocpm-policyds-75cbc9fc9d-7lbl5                   1/1     Running   0          155m
ocpcf-ocpm-pre-59b94d979-jzkv4                         1/1     Running   0          155m
ocpcf-ocpm-pre-test-84d9c89dd8-fqlpg                   1/1     Running   0          155m
ocpcf-ocpm-queryservice-94895bf88-bhwcc                1/1     Running   0          155m
ocpcf-pcf-amservice-56cdbb75c9-ph7tt                   1/1     Running   0          155m
ocpcf-pcf-smservice-64b899d766-jfhjm                   1/1     Running   0          155m
ocpcf-pcf-ueservice-7c6bd7ccc9-mrnxn                   1/1     Running   0          155m
ocpcf-pcrf-core-7594dbb7f8-z95vt                       1/1     Running   0          155m
ocpcf-performance-689dd556b-7vblc                      1/1     Running   0          155m
ocpcfnf11stub-5bb6b4f95d-v6fbb                         1/1     Running   0          147m
ocpcfnf12stub-59fb974f5d-2qr42                         1/1     Running   0          147m
ocpcfnf1stub-5bdf545fcb-zgbjb                          1/1     Running   0          147m
ocpcfnf21stub-ff6db9d86-5hvj6                          1/1     Running   0          147m
ocpcfnf22stub-794456fd66-sxq8q                         1/1     Running   0          147m
ocpcfnf2stub-656755dc46-hnr8m                          1/1     Running   0          147m
ocpcfnf31stub-68c6596b6-jdsgj                          1/1     Running   0          147m
ocpcfnf32stub-f49b57d86-rklc8                          1/1     Running   0          147m
ocpcfnf3stub-6c4c697648-lj6q7                          1/1     Running   0          147m
ocpcf-ocpm-ldap-gateway-5fd489b8fd-52dqn               1/1     Running   0          147m 
ocdiam-sim-69968444b6-fg6ks                            1/1     Running   0          147m

3.4.7 Post-Installation Steps

This section describes the post-installation steps for Policy.

Alternate Route Service Configurations

To edit the Alternate Route Service deployment file (ocpcf-occnp-alternate-route) that points to DNS Stub, perform the following steps:

  1. Run the following command to get searches information from dns-bind pod to enable communication between Alternate Route and dns-bind service:
    kubectl exec -it <dns-bind pod> -n <NAMESPACE> -- /bin/bash -c 'cat /etc/resolv.conf' | grep search | tr ' ' '\n' | grep -v 'search'
    The following output is displayed after running the command:

    Figure 3-11 Sample Output
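    The search entries depend on your cluster and namespace; for the deployment shown in this guide, the output would resemble the following (illustrative):

    ocpcf.svc.occne15-ocpcf-ats
    svc.occne15-ocpcf-ats
    occne15-ocpcf-ats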
    By default, the Alternate Route service points to CoreDNS, and the deployment file contains the following settings:

    Figure 3-12 Alternate Route Service Deployment File (screen capture showing the Alternate Route service pointing to CoreDNS)
  2. Run the following command to edit the deployment file, then add the following content to the Alternate Route service so that it queries the DNS stub:
    kubectl edit deployment ocpcf-occnp-alternate-route -n ocpcf
    1. Add the IP Address of the nameserver that you have recorded after installing the DNS stub (cluster IP Address of DNS Stub).
    2. Add the search information one by one which you recorded earlier.
    3. Set dnsPolicy to "None".
      dnsConfig:
        nameservers:
        - 10.233.33.169      # cluster IP of DNS Stub
        searches:
        - ocpcf.svc.occne15-ocpcf-ats
        - svc.occne15-ocpcf-ats
        - occne15-ocpcf-ats
      dnsPolicy: None
    For example:

    Figure 3-13 Example
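    After saving the deployment, the Alternate Route pod restarts with the new DNS settings. To confirm, the pod's resolv.conf can be checked (the pod name is illustrative):

    kubectl exec -it <alternate-route-pod> -n ocpcf -- cat /etc/resolv.conf

    The nameserver entry should now show the cluster IP of the DNS Stub instead of the CoreDNS IP.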

NRF client configmap

In the -application-config configmap, configure the following parameters with the respective values:
  • primaryNrfApiRoot=nf1stub.<namespace_gostubs_are_deployed_in>.svc:8080

    Example: primaryNrfApiRoot=nf1stub.ocats.svc:8080

  • secondaryNrfApiRoot=nf1stub.ocats.svc:8080 (remove the secondaryNrfApiRoot parameter)
  • nrfClientSubscribeTypes=UDR, CHF, NWDAF
  • supportedDataSetId=POLICY (remove the supportedDataSetId parameter)

Note:

Configure these values at the time of Policy deployment.

Note:

To get all configmaps in your namespace, execute the following command:

kubectl get configmaps -n <Policy_namespace>
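For example, to open the application configmap for editing (the release prefix is environment-specific and shown here as a placeholder):

kubectl edit configmap <release_name>-application-config -n <Policy_namespace>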

Persistent Volume (Optional)

If persistent volume is used, follow the post-installation steps provided in the Persistent Volume for 5G ATS section.

3.5 Installing ATS for SCP

This section describes Automated Testing Suite (ATS) installation procedures for Service Communication Proxy (SCP) in a cloud native environment.

You must perform ATS installation procedures for SCP in the same sequence as outlined in the following sections.

3.5.1 Prerequisites

To run SCP test cases, the following prerequisites are required.

3.5.1.1 Software Requirements

This section lists the software that must be installed before installing ATS.

Table 3-14 Preinstalled Software

Software Version
Kubernetes 1.32.x, 1.31.x, 1.30.x
Helm 3.17.1
Podman 4.9.4

To check the versions of the preinstalled software in the cloud native environment, run the following commands:

kubectl version
helm version
podman version
3.5.1.2 Environment Setup Requirements

This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.

The client machine should have:
  • Helm repository configured.
  • Network access to the Helm repository and Docker image repository.
  • Network access to the Kubernetes cluster.
  • Required environment settings to run kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
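
A minimal sketch of verifying this setup from the client machine, assuming a Helm repository named ocats-repo at an internal URL (both hypothetical):

# Verify access to the Kubernetes cluster and namespace-creation privileges
kubectl auth can-i create namespaces

# Configure and verify the Helm repository (name and URL are examples)
helm repo add ocats-repo https://helm.example.com/charts
helm repo update

# Confirm the push plugin is present
helm plugin list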
3.5.1.3 Resource Requirements
This section describes ATS resource requirements for SCP.

Overview - Total Number of Resources

The following table describes the total resource usage by different resource types:
  • SCP SUT
  • cnDB Tier
  • ATS

Table 3-15 SCP - Total Number of Resources

Resource Name CPU Memory (GB) Storage (GB)
SCP SUT Totals 61 66 0
cnDBTier Totals 40 40 20
ATS Totals 100 106 4
Grand Total SCP ATS 201 212 24

SCP Pods Resource Requirements

This section describes the resources required to deploy SCP ATS.

Table 3-16 SCP Pods Resource Requirements for a Non ASM Setup

Microservice CPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) # Replicas (ATS deployment) CPUs Required (ATS) - Total Memory Required (ATS) - Total (GB) Storage PVC Required - Total (GB)
SCP Pods
scpc-subscription 2 2 0 1 2 2 0
scpc-notification 4 4 0 1 4 4 0
scpc-audit 3 4 0 1 3 4 0
scpc-configuration 2 2 0 1 2 2 0
scp-worker 8 12 0 1 8 12 0
scpc-alternate-resolution 2 2 0 1 2 2 0
scp-nrfproxy 8 8 0 1 8 8 0
scp-cache 8 8 0 1 8 8 0
scp-mediation 8 8 0 1 8 8 0
scp-load-manager 8 8 0 1 8 8 0
scp-oauth-nrfproxy 8 8 0 1 8 8 0
SCP SUT Totals 61 66 0

Table 3-17 SCP Pods Resource Requirements for an ASM Setup

Microservice CPUs Required per Pod vCPUs(Sidecar) Memory Required per Pod (GB) Memory(required by sidecar) Storage PVC Required per Pod (GB) # Replicas (ATS deployment) CPUs Required (ATS) - Total Memory Required (ATS) - Total (GB) Storage PVC Required - Total (GB)
SCP Pods
scpc-subscription 2 2 2 1 0 1 4 3 0
scpc-notification 4 2 4 1 0 1 6 5 0
scpc-audit 3 2 4 1 0 1 5 5 0
scpc-configuration 2 2 2 1 0 1 4 3 0
scp-worker 8 2 12 1 0 1 10 13 0
scpc-alternate-resolution 2 2 2 1 0 1 4 3 0
scp-nrfproxy 8 2 8 1 0 1 10 9 0
scp-cache 8 2 8 1 0 1 10 9 0
scp-mediation 8 2 8 1 0 1 10 9 0
scp-load-manager 8 2 8 1 0 1 10 9 0
scp-oauth-nrfproxy 8 2 8 1 0 1 10 9 0
SCP SUT Totals 83 77 0

ATS Resource Requirements for SCP

This section describes the ATS resources required to deploy SCP-ATS.

Table 3-18 ATS Resource Requirements for a Non ASM Setup

Microservice CPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) # Replicas (ATS deployment) CPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ATS Behave 6 12 4 1 6 12 4
ATS pystub 1 1 - 91 91 91 0
DNS Stub 1 1 - 1 1 1 0
Global Egress Rate Limiting Stub 1 1 - 1 1 1 0
ATS DD Client stub 1 1 - 1 1 1 0
ATS Totals 100 106 4

Table 3-19 ATS Resource Requirements for an ASM Setup

Microservice CPUs Required per Pod vCPUs(required by sidecar) Memory Required per Pod (GB) Memory(required by sidecar) Storage PVC Required per Pod (GB) # Replicas (ATS deployment) CPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ATS Behave 6 2 12 1 4 1 8 13 4
ATS pystub 1 2 1 1 - 91 273 182 0
DNS Stub 1 2 1 1 - 1 3 2 0
Global Egress Rate Limiting Stub 1 2 1 1 - 1 3 2 0
ATS DD Client stub 1 2 1 1 - 1 3 2 0
ATS Totals 290 201 4
3.5.1.4 Downloading the ATS Package

This section provides information about how to download the ATS package.

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate login credentials.
  2. Click the Patches & Updates tab.
  3. In the Patch Search section, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Service Communication Proxy <release_number> from the Release drop-down list.
  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required ATS patch from the list.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the ocats_ocscp_csar_25_1_201_0_0.pkg file to download the CNC SCP ATS package file.

    The ocats_ocscp_csar_25_1_201_0_0.pkg package contains the following files:

    ocats_ocscp_csar_25_1_201_0_0.zip
    mcafee-gen-ats-csar-25.1.201.log
    

    Note:

    The above zip file contains all the images and custom values required for the 25.1.201 release of OCATS-OCSCP.

    Unzip the ocats_ocscp_csar_25_1_201_0_0.zip file to get the following files and folders:

    .
    |-- Definitions
    |  |-- ocats_ocscp_ats_tests.yaml
    |  |-- ocats_ocscp_cne_compatibility.yaml
    |  `-- ocats_ocscp.yaml
    |-- Files
    |  |-- ChangeLog.txt
    |  |-- Helm
    |  |  `-- ocats-ocscp-25.1.201.tgz
    |  |-- Licenses
    |  |-- ocats-ddclientstub-25.1.201.tar
    |  |-- ocats-dnsstub-25.1.201.tar (Docker Image)
    |  |-- ocats-pystub-25.1.201.tar (Docker Image)
    |  |-- ocats-scp-25.1.201.tar (Docker Image)
    |  |-- ocats-scpglbratelimitstub-25.1.201.tar (Docker Image)
    |  |-- Oracle.cert
    |  `-- Tests
    |-- ocats_ocscp_csar_25_1_201_0_0.zip
    |-- ocats_ocscp.mf
    |-- Scripts
    |  |-- ocats_ocscp_custom_serviceaccount_25.1.201.yaml (Template to create custom service account)
    |  |-- ocats_ocscp_tests_jenkinsjobs_25.1.201.tgz (ocscp_tests and Jenkins jobs folders to be copied if persistent volume is deployed)
    |  `-- ocats_ocscp_values_25.1.201.yaml (Custom values file for installation)
    `-- TOSCA-Metadata
      `-- TOSCA.meta
  10. Copy the umbrella Helm chart ocats-ocscp-25.1.201.tgz file from the Files folder to Kubernetes cluster where you want to deploy ATS.
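    For example, the chart can be copied to the bastion host of the target cluster with scp (host and destination path are illustrative):

    scp Files/Helm/ocats-ocscp-25.1.201.tgz cloud-user@platform-bastion-1:/home/cloud-user/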

    The following table describes ATS parameters in the ocats_ocscp_values_25.1.201.yaml file:

    Table 3-20 ATS Parameters of the YAML File

    Parameter Default Value Possible Values Description
    ocatsdnsstubService true true, false Set it to true or false depending on the requirement.

    Setting these values to true deploys ocats-dnsstub, the ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and the ocats-pystubs (ausf1, udm3, and so on).

    ocatsscpService true true, false Set it to true or false depending on the requirement.

    Setting these values to true deploys ocats-dnsstub, the ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and the ocats-pystubs (ausf1, udm3, and so on).

    ocatsscpglbratelimitstubService true true, false Set it to true or false depending on the requirement.

    Setting these values to true deploys ocats-dnsstub, the ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and the ocats-pystubs (ausf1, udm3, and so on).

    ocatsddclientstubService true true, false Set it to true or false depending on the requirement.

    Setting these values to true deploys ocats-dnsstub, the ocats-scp stub, ocats-scpglbratelimitstub, ocats-ddclientstub, and the ocats-pystubs (ausf1, udm3, and so on).

    ausf1Stubs true true, false Set it to true or false depending on the requirement.

    Setting this value to true deploys all the ocats-pystubs (ausf1, udm3, chf1, scp3, and so on).

    sutScpIPFamiliesforATSRun [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] The parameter is used to specify the IP families that ATS will consider when running test cases.

    Note: If any value other than those specified is provided, ATS will proceed with the assumption that the deployment supports IPv4 only.

    traffic.sidecar.istio.io/excludeOutboundPorts 8091 - This annotation under lbDeployments is required for fetching metrics from soothsayer pods, which are needed for a few functional tests in ATS. When ATS runs in an ASPEN MESH environment, do not change this port.
    traffic.sidecar.istio.io/excludeInboundPorts 8080 - This annotation under lbDeployments is required for fetching metrics from soothsayer pods, which are needed for a few functional tests in ATS. When ATS runs in an ASPEN MESH environment, do not change this port.
    tokenConsumptionIntervalInMillis 50 Count in milliseconds Token consumption simulation parameter.
    scpStubNfInstanceId 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22 nfInstanceID of SCP NF Instance ID of SCP.
    rateDataReporterStartingOffset 35 Value in milliseconds -
    coherence.clusterName scpstub-coherence-cluster Local Coherence Cluster name, not more than 66 characters Local Coherence Cluster name; the size must not exceed 66 characters.
    coherence.clusterName.federation.remoteScpOne.fqdnOrIp ocscp-scp-cache.scpsvc.svc.cluster.local - FQDN or IP of federation configuration.
    coherence.clusterName.federation.remoteScpOne.port 30001 - Port number of federation configuration.
    coherence.clusterName.federation.remoteScpOne.clusterName scp-coherence-cluster Ensure that the cluster name is unique among all participants. Size not more than 66 characters. remoteScpOne Coherence Cluster name.

    serviceIpFamilyPolicy.ocatsdnsstubService

    serviceIpFamilyPolicy.ocatsscpService

    serviceIpFamilyPolicy.ocatsscpglbratelimitstubService

    serviceIpFamilyPolicy.ocatsddclientstubService

    serviceIpFamilyPolicy.stubService

    SingleStack SingleStack, PreferDualStack, RequireDualStack IpFamilyPolicy of ocatsdnsstubService, ocatsscpService, ocatsscpglbratelimitstubService, ocatsddclientstubService and pyStubs.

    Note: PreferDualStack and RequireDualStack values can only be used if the setup is dual stack.

    serviceIpFamilies.ocatsdnsstubService

    serviceIpFamilies.ocatsscpService

    serviceIpFamilies.ocatsscpglbratelimitstubService

    serviceIpFamilies.ocatsddclientstubService

    serviceIpFamilies.stubService

    [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] IpFamilies of ocatsdnsstubService, ocatsscpService, ocatsscpglbratelimitstubService, ocatsddclientstubService and pyStubs.

    Note: If serviceIpFamilyPolicy is SingleStack, then serviceIpFamilies can be [IPv4] or [IPv6]. If serviceIpFamilyPolicy is PreferDualStack or RequireDualStack, then serviceIpFamilies can be [IPv4,IPv6] or [IPv6, IPv4].

    PVEnabled false true, false Enabling persistent volume.
    PVClaimName false Name Persistent volume claim name.
    atsGuiTLSEnabled false true, false Enabling Https in Jenkins GUI.
    atsCommunicationTLSEnabled false true, false Enabling Https communication for ATS and stubs.
    ocats-scp.image.repository <docker-registryIP:docker-registryport>/ocats/ocats-scp <Image repository name:port>/ocats/ocats-scp Image repository and port of ocats scp image.
    ocats-scp.image.tag helm-tag Value of tag to be deployed Tag of ocats-scp image.
    ocats-scp.image.pullPolicy Always Always, IfNotPresent, Never Image pull policy of ocats-scp image.
    ocats-scp.replicaCount 1 Positive integers Replica count of ocats-scp stub.
    ocats-scp.resources.limits.cpu 6 CPU value that is allocated Limit to CPU allocated to ocats-scp pod.
    ocats-scp.resources.limits.memory 12Gi Memory value that is allocated (in Gi or Mi) Limit to memory allocated to ocats-scp pod.
    ocats-scp.resources.requests.cpu 6 CPU value that is allocated (must be less than or equal to limits) Request of CPU allocation for ocats-scp.
    ocats-scp.resources.requests.memory 12Gi Memory value that is allocated (in Gi or Mi) (must be less than or equal to limits) Request of memory allocation for ocats-scp.
    ocats-scp.service.customExtension.labels {} Label of node Node labels for node allocation during deployment.
    ocats-scp.service.customExtension.type LoadBalancer ClusterIP, NodePort, LoadBalancer Service type of ocats-scp pod.
    ocats-scp.service.customExtension.port 8080 Port number Port number of ocats-scp service.
    ocats-scp.service.customExtension.staticNodePortEnabled false true, false Enabling of static node port.
    ocats-scp.service.customExtension.staticNodePort false Port number Port number of static node port.
    ocats-scp.service.ports.http.port 8080 Port number Port number of the ocats-scp service if https is not enabled.
    ocats-scp.service.ports.http.staticNodePortEnabled false true, false Enabling of static node port.
    ocats-scp.service.ports.http.staticNodePort false Port number Port number of static node port.
    ocats-scp.service.ipFamilyPolicy SingleStack SingleStack, PreferDualStack, RequireDualStack

    ipFamilyPolicy to be allocated to the ocats-scp service.

    This value is taken from the value defined at serviceIpFamilyPolicy.ocatsscpService under the global parameters.

    ocats-scp.service.ipFamilies [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4]

    ipFamilies to be allocated to the ocats-scp service.

    This value is taken from the value defined at serviceIpFamilies.ocatsscpService under the global parameters.

    SELECTED_NF SCP NF name ATS parameters are set with default values in the ocats_ocscp_values_25.1.201.yaml file. Update the ATS parameters with the actual values based on the environment, and then deploy the OCATS_OCSCP chart using this ocats_ocscp_values_25.1.201.yaml file. The updated ATS parameters are automatically applied during the deployment process, and ATS comes up with the configuration specified in this file.
    NFNAMESPACE scpsvc Update the SCP namespace -
    CLUSTERDOMAIN cluster.local Cluster Domain where SCP is Deployed -
    DESTNAMESPACE scpsvc Test Stubs NameSpace same as SCP Namespace -
    ocats-dnsstub.image.repository <docker-registryIP:docker-registryport>/ocats/ocats-dnsstub <Image repository name:port>/ocats/ocats-dnsstub Image repository and port of ocats-dnsstub.
    ocats-dnsstub.image.tag helm-tag Value of tag to be deployed Tag of ocats-dnsstub image.
    ocats-dnsstub.image.pullPolicy Always Always, IfNotPresent, Never Image pull policy of ocats-dnsstub image.
    ocats-dnsstub.replicaCount 1 Positive integers Replica count of ocats-dnsstub.
    ocats-dnsstub.service.customExtension.type ClusterIP ClusterIP, NodePort, LoadBalancer Service type of ocats-dnsstub pod.
    ocats-dnsstub.service.customExtension.port 53 Port Port of ocats-dnsstub.
    ocats-dnsstub.service.ipFamilyPolicy SingleStack SingleStack, PreferDualStack, RequireDualStack

    ipFamilyPolicy to be allocated to the ocats-dnsstub service.

    This value is taken from the value defined at serviceIpFamilyPolicy.ocatsdnsstubService under the global parameters.

    ocats-dnsstub.service.ipFamilies [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4]

    ipFamilies to be allocated to the ocats-dnsstub service.

    This value is taken from the value defined at serviceIpFamilies.ocatsdnsstubService under the global parameters.

    ocats-dnsstub.resources.limits.cpu 1 CPU value that is allocated Limit to CPU allocated to ocats-dnsstub pod.
    ocats-dnsstub.resources.limits.memory 1Gi Memory value that is allocated (in Gi or Mi) Limit to memory allocated to ocats-dnsstub pod.
    ocats-dnsstub.resources.requests.cpu 1 CPU value that is allocated (must be less than or equal to limits) Request of CPU allocation for ocats-dnsstub.
    ocats-dnsstub.resources.requests.memory 1Gi Memory value that is allocated (in Gi or Mi) (must be less than or equal to limits) Request of memory allocation for ocats-dnsstub.
    ocats-ddclientstub.image.repository <docker-registryIP:docker-registryport>/ocats/ocats-ddclientstub <Image repository name:port>/ocats/ocats-ddclientstub Image repository and port of ocats-ddclientstub.
    ocats-ddclientstub.image.tag helm-tag Value of tag to be deployed Tag of ocats-ddclientstub image.
    ocats-ddclientstub.image.pullPolicy Always Always, IfNotPresent, Never Image pull policy of ocats-ddclientstub image.
    ocats-ddclientstub.replicaCount 1 Positive integers Replica count of ocats-ddclientstub.
    ocats-ddclientstub.service.type LoadBalancer ClusterIP, NodePort, LoadBalancer Service type of ocats-ddclientstub.
    ocats-ddclientstub.ipFamilyPolicy SingleStack SingleStack, PreferDualStack, RequireDualStack

    ipFamilyPolicy to be allocated to the ocatsddclientstubService service.

    This value is taken from the value defined at serviceIpFamilyPolicy.ocatsddclientstubService under the global parameters.

    ocats-ddclientstub.ipFamilies [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] ipFamilies to be allocated to the ocatsddclientstubService service.

    This value is taken from the value defined at serviceIpFamilies.ocatsddclientstubService under the global parameters.

    ocats-ddclientstub.resources.limits.cpu 1 CPU value that is allocated Limit to CPU allocated to ocats-ddclientstub pod.
    ocats-ddclientstub.resources.limits.memory 1Gi Memory value that is allocated (in Gi or Mi) Limit to memory allocated to ocats-ddclientstub pod.
    ocats-ddclientstub.resources.requests.cpu 1 CPU value that is allocated (should be less than or equal to limits) Request of CPU allocation for ocats-ddclientstub.
    ocats-ddclientstub.resources.requests.memory 1Gi Memory value that is allocated (in Gi or Mi) (should be less than or equal to limits) Request of memory allocation for ocats-ddclientstub.
    ocats-ddclientstub.log.level INFO INFO, WARN, DEBUG Log level of ddclientstub pod.
    ocats-ddclientstub.kafka_broker "kafka-broker-0.kafka-broker.ddkafkanamespace.svc.cluster.local:9092" Kafka broker fqdn and port Kafka broker fqdn or port for ddClientStub.
    ocats-ddclientstub.string_topic_name "string_topic" String topic ddClientStub string topic name.
    ocats-ddclientstub.json_topic_name "json_topic" Json topic ddClientStub json topic name.
    ocats-scpglbratelimitstub.image.repository <docker-registry IP:docker-registry port>/ocats/ocats-scpglbratelimitstub <Image repository name:port>/ocats/ocats-scpglbratelimitstub Image repository and port of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.image.tag helm-tag Value of tag to be deployed Tag of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.image.pullPolicy Always Always, IfNotPresent, Never Image pull policy of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.replicaCount 1 Positive integers Replica count of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.service.type ClusterIP ClusterIP, NodePort, LoadBalancer Service type of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.service.ipFamilyPolicy SingleStack SingleStack, PreferDualStack, RequireDualStack

    ipFamilyPolicy to be allocated to the ocatsscpglbratelimitstubService service.

    This value is taken from the value defined at serviceIpFamilyPolicy.ocatsscpglbratelimitstubService under the global parameters.

    ocats-scpglbratelimitstub.service.ipFamilies [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4]

    ipFamilies to be allocated to the ocatsscpglbratelimitstubService service.

    This value is taken from the value defined at serviceIpFamilies.ocatsscpglbratelimitstubService under the global parameters.

    ocats-scpglbratelimitstub.deployment.customExtension.labels {} Label of node Node labels for node allocation during deployment.
    ocats-scpglbratelimitstub.resources.limits.cpu 1 CPU value that is allocated Limit to CPU allocated to ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.resources.limits.memory 1Gi Memory value that is allocated (in Gi or Mi) Limit to memory allocated to ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.resources.requests.cpu 1 CPU value that is allocated (should be less than or equal to limits) Request of CPU allocation for ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.resources.requests.memory 1Gi Memory value that is allocated (in Gi or Mi) (should be less than or equal to limits) Request of memory allocation for ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.minreplicas 1 Positive integer Minimum replicas of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.maxreplicas 1 Positive integer Maximum replicas of ocats-scpglbratelimitstub.
    ocats-scpglbratelimitstub.maxPdbUnavailable 1 Positive integer -
    ocats-scpglbratelimitstub.log.level INFO INFO, WARN, DEBUG Log level of ocats-scpglbratelimitstub.
    ocats-pystub.image.repository <docker-registry IP:docker-registry port>/ocats/ocats-pystub <Image repository name:port>/ocats/ocats-pystub Image repository and port of ocats-pystub.
    ocats-pystub.image.tag helm-tag Value of tag to be deployed Tag of ocats-pystub.
    ocats-pystub.image.pullPolicy Always Always, IfNotPresent, Never Image pull policy of ocats-pystub.
    ocats-pystub.replicaCount 1 Positive integers Replica count of ocats-pystub.
    ocats-pystub.service.type ClusterIP ClusterIP, NodePort, LoadBalancer Service type of ocats-pystub.
    ocats-pystub.service.ports.https.port 8443 Port number Indicates the port number to allocate for HTTPS to ocats-pystub.
    ocats-pystub.service.ports.http.port 8080 Port number Indicates the port number to allocate for HTTP to ocats-pystub.
    ocats-pystub.service.ipFamilyPolicy SingleStack SingleStack, PreferDualStack, RequireDualStack Indicates the ipFamilyPolicy allocated to pystub services. This value is taken from the value defined at serviceIpFamilyPolicy.stubService under the global parameters.
    ocats-pystub.service.ipFamilies [IPv4] [IPv4], [IPv6], [IPv4,IPv6], [IPv6,IPv4] Indicates the ipFamilies allocated to pystub services. This value is taken from the value defined at serviceIpFamilies.stubService under the global parameters.
    RESPONSE_FROM_HEADER true true, false When set to true, pystub returns the pod name.
    ocats-pystub.resources.limits.cpu 1 CPU value that is allocated Indicates the CPU allocation limit for ocats-pystub.
    ocats-pystub.resources.limits.memory 1Gi Memory value allocated (in Gi or Mi) Indicates the memory allocation limit for ocats-pystub.
    ocats-pystub.resources.requests.cpu 1 CPU value allocated (should be less than or equal to limits) Indicates the CPU allocation request for ocats-pystub.
    ocats-pystub.resources.requests.memory 1Gi Memory value allocated (in Gi or Mi) (should be less than or equal to limits) Indicates the memory allocation request for ocats-pystub.
    ausf1.service.name:* ausf1svc Service name of ausf1 Indicates the service name allocated to ausf1.
    ausf1.deploymentName:* ausf1 Deployment name of ausf1 Indicates the deployment name allocated to ausf1.

    Note:

    "*" indicates that all NF stubs will follow the same pattern.
3.5.1.5 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pods in the Kubernetes Cluster:

  1. Download the ocats_ocscp_csar_25_1_201_0_0.pkg package and unzip the ocats_ocscp_csar_25_1_201_0_0.zip file to obtain the images, Helm chart, and scripts, as described in Downloading the ATS Package.
  2. Run the following commands in your cluster to load the stub Docker images from the Files folder:
    docker load --input ocats-dnsstub-25.1.201.tar
    docker load --input ocats-pystub-25.1.201.tar
    docker load --input ocats-scpglbratelimitstub-25.1.201.tar
    docker load --input ocats-ddclientstub-25.1.201.tar
    
  3. Run the following command in your cluster to load the ATS image:
    docker load --input ocats-scp-25.1.201.tar
  4. Run the following commands to push the ATS image to the registry:
    docker tag ocats/ocats-scp:25.1.201 <local_registry>/ocats/ocats-scp:25.1.201
    docker push <local_registry>/ocats/ocats-scp:25.1.201

    Where, <local_registry> indicates the registry where you can push the downloaded images.

  5. Run the following commands to push the Stub image to the registry:
    
    docker tag ocats/ocats-pystub:25.1.201 <local_registry>/ocats/ocats-pystub:25.1.201
    docker push <local_registry>/ocats/ocats-pystub:25.1.201
  6. Run the following command to push the DNS Stub Image to the registry:
    docker tag ocats/ocats-dnsstub:25.1.201 <local_registry>/ocats/ocats-dnsstub:25.1.201
    docker push <local_registry>/ocats/ocats-dnsstub:25.1.201
  7. Run the following command to push the Global Rate Limiting Stub Image to the registry:
    docker tag ocats/ocats-scpglbratelimitstub:25.1.201 <local_registry>/ocats/ocats-scpglbratelimitstub:25.1.201
    docker push <local_registry>/ocats/ocats-scpglbratelimitstub:25.1.201
  8. Run the following command to push the Data Director Stub Image to the registry:
    docker tag ocats/ocats-ddclientstub:25.1.201 <local_registry>/ocats/ocats-ddclientstub:25.1.201
    docker push <local_registry>/ocats/ocats-ddclientstub:25.1.201
  9. In the Scripts folder, extract the following content:
    ocats_ocscp_values_25.1.201.yaml
    ocats_ocscp_custom_serviceaccount_25.1.201.yaml
    ocats_ocscp_tests_jenkinsjobs_25.1.201.tgz
  10. Update the image name and tag in the ocats_ocscp_values_25.1.201.yaml file as required.
3.5.1.6 Preinstall Preparation of SCP for SCP-ATS
Complete the following steps before performing an installation:
  • When deploying default ATS with role binding, deploy ATS and test stubs in the same namespace as SCP.
  • The SCP stub for the Global Egress Rate Limiting feature must be deployed by setting the required Helm parameters as described in Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide to support the Global Egress Rate Limiting test cases.
  • In the ocats_ocscp_values_25.1.201.yaml, add the following for Prometheus that is required for alert test case:
    traffic.sidecar.istio.io/excludeInboundPorts: "9090"
  • If ASM adds additional XFCC headers, set the certExtractIndex and extractIndex of the xfccHeaderDecode value to -1; otherwise, set them to 0.
  • If ASM is enabled, for fetching the metrics from Prometheus, a destination rule must be created. In most deployments, Prometheus is kept outside of the service mesh, so a destination rule is required to communicate between a TLS enabled entity (ATS) and a non-TLS entity (Prometheus). The rule can be created as follows:
    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: prometheus-dr
      namespace: ocscp
    spec:
      host: oso-prometheus-server.ocscp.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
    EOF
     
    Where,
    • name indicates the name of the destination rule.
    • namespace indicates where ATS is deployed.
    • host indicates the hostname of the Prometheus server.
  • FQDN and interPlmnFqdn must be the same for both nfServices for NRF profile NRF1.

    Sample

    # NRF profiles for primary (priority=0) and secondary (priority=1) NRF. Note that these NRFs need to be backend DB synced.
    # For the secondary NRF profile, always make its priority lower than the primary NRF; currently the secondary NRF priority is set to 1.
    # In case of no secondary NRF, the secondary NRF profile can be commented out.
    # Service-level FQDNs of NRF are from the same namespace as that of SCP; this is done for SCP ATS cases. Otherwise, NRFs can be part of other namespaces or even other k8s clusters.
      nrfProfiles:
      - capacity: 10000
        locality: USEast
        nfInstanceId: 6faf1bbc-6e4a-2828-a507-a14ef8e1bc5a
        nfStatus: REGISTERED
        nfType: NRF
        priority: '0'
        # With the rel15 flag enabled, specify the NRF region mentioned below.
    #    nfSetIdList: ["Reg1"]
        # With the rel16 flag enabled, specify the NRF set ID mentioned below.
        nfSetIdList: ["setnrfl1.nrfset.5gc.mnc012.mcc345"]
        #Uncomment below section to configure interPlmnFqdn, plmnList or snpnList
        #NRF-Change-3
        interPlmnFqdn: nrf1.5gc.mnc213.mcc410.3gppnetwork.org 
        plmnList:
        - mcc: 410
          mnc: 213
        - mcc: 410
          mnc: 214
        #snpnList:
        #- mcc: 345
        #  mnc: 445
        #  nid: 000007ed9d5
        customInfo:
          preferredNrfForOnDemandDiscovery: true
     
             
        nfServices:
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1svc.scpsvc.svc.cluster.local
          # Sample IPEndpoint with all the fields and it is commented out.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need 8080 port with FQDN. Hence, in order to run ATS cases, below "ipEndPoints" field is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b01
          serviceName: nnrf-nfm
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
     
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1svc.scpsvc.svc.cluster.local
          # Sample IPEndpoint with all the fields and it is commented out.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need 8080 port with FQDN. Hence, in order to run ATS cases, below "ipEndPoints" field is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b02
          serviceName: nnrf-disc
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
             
        - capacity: 5000
          #apiPrefix: USEast
          #NRF-Change-4
          fqdn: nrf1svc.scpsvc.svc.cluster.local
          interPlmnFqdn: nrf1.5gc.mnc213.mcc410.3gppnetwork.org
          # Sample IPEndpoint with all the fields and it is commented out.
          #NRF-Change-5
          #ipEndPoints: [{"ipv4Address": "NRF-IP", "port": "NRF-PORT"}]
          #ipEndPoints: [{"ipv4Address": "10.75.213.56", "port": "31014"}]
          # ATS test cases need 8080 port with FQDN. Hence, in order to run ATS cases, below "ipEndPoints" field is left uncommented.
          #NRF-Change-6
          ipEndPoints: [{"port": "8080"}]
          load: 0
          nfServiceStatus: REGISTERED
          scheme: http
          serviceInstanceId: fe137ab7-740a-46ee-aa5c-951806d77b03
          serviceName: nnrf-oauth2
          priority: 0
          versions:
          - apiFullVersion: 1.0.0
            apiVersionInUri: v1
  • To run SCP ATS, ensure that dnsSrvSchemeConfig.defaultScheme is set to 'https' in the SCP values.yaml file.
  • Ensure SCP is deployed with the following parameters:
    • While providing NRF information at the time of SCP deployment, ensure that stub NRF details such as nrf1svc and nrf2svc are also provided at the time of ATS deployment before running these test cases. For example, if the test stub namespace is scpsvc, then SCP should have been deployed with the primary NRF as nrf1svc.scpsvc.svc.<clusterDomain> and the secondary NRF as nrf2svc.scpsvc.svc.<clusterDomain> for the NRF test cases to work.
    • Ensure the defaultTopologySource parameter is set to NRF in the ocscp_values.yaml file.
    • Ensure the preventiveAuditOnLastNFInstanceDeletion parameter is set to false in the ocscp_values.yaml file.
    • The number of replicas of all SCP microservices pods must be set to 1 during SCP deployment as ATS is enabled to perform metric validations for metrics obtained from a single pod.
    • When you deploy, make sure to define the additional NRF stubs needed for InterSCP cases as nrfr2l1svc (preferred NRF of Reg2), nrfr2l2svc (non-preferred NRF of Reg2), nrfr3l1svc (non-preferred NRF of Reg3), and nrfr3l2svc (preferred NRF of Reg3), which are provided in the default custom value file. Also, in the SCP deployment file, ensure that the namespace of all these NRFs is the same as the deployed SCP namespace. Reg1, Reg2, and Reg3 are replaced with setnrfl1.nrfset.5gc.mnc012.mcc345, setnrfr1.nrfset.5gc.mnc012.mcc345, and setnrfr2.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment.
    • Ensure that the supportedNRFRegionOrSetIdList has Reg1, Reg2, and Reg3 for Release 15 SCP deployment, or setnrfl1.nrfset.5gc.mnc012.mcc345, setnrfr1.nrfset.5gc.mnc012.mcc345, and setnrfr2.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment.
    • Ensure that only Loc7, Loc8, Loc9, and USEast are part of the servingLocalities for Release 15 SCP deployment and the servingScope for Release 16 SCP deployment.
    • Recommended auditInterval is 60 seconds and guardTime is 10 seconds in the SCP deployment file.
    • Regions such as Reg2 and Reg3 are the corresponding values for Release 15 SCP deployment, while NRF set IDs such as setnrfr1.nrfset.5gc.mnc012.mcc345 and setnrfr2.nrfset.5gc.mnc012.mcc345 are the corresponding values for Release 16 SCP deployment. The localities of the NRFs belonging to these regions or NRF set IDs must not match the SCP servingLocalities or SCP serving scope.
    • SCP deployment file should have the attribute scpToRegisterWithNrfRegions set to Reg1 for Release 15 SCP deployment and setnrfl1.nrfset.5gc.mnc012.mcc345 for Release 16 SCP deployment. For information about Release 15 and Release 16, see 3GPP TS 23.501.
    • To run CCA Validation feature tests, refer to Configuring ATS for CCA Test Cases section.
    • To enable OAuth support while deploying SCP, refer to the Configuring SCP to Run OAuth Test Cases in ATS section.
    • To enable alternate resolution service support while deploying SCP, refer to Configuring SCP to Run DNS SRV Test Cases in ATS section.
    • To enable mediation support while deploying SCP, refer to Configuring SCP to Run Mediation Test Cases in ATS section.
    • To enable nrfproxy support, refer to Configuring SCP to Run Model D Test Cases in ATS section.
    • To enable load manager support, refer to Configuring SCP to Run LCI Test Cases in ATS section.
    • To enable the Global Egress Rate Limiting feature for ATS environment, refer to Updating the Global Egress Rate Limiting Changes in the SCP Deployment File for ATS section.
    • By default, the ATS suite runs HTTPS test cases if the "ALL" option is selected, and SCP must be deployed with HTTPS support enabled to support them. To enable HTTPS for ATS, refer to Enabling HTTPs for ATS, pystubs and Jenkins.
      # If ingress gateway is available, then set the ingressGWAvailable flag to true
      # and provide the ingress gateway IP and port in publicSignalingIP and publicSignalingPort respectively.

      publicSignalingPort: &publicSignalingPort 8000   # Signaling port
      publicSignalingPortHttps: &publicSignalingPortHttps 9443 # Signaling port for HTTPS
      # Uncomment the lines below when deploying with Release 16. Note that the HTTP port for SCP should be the same as the "publicSignalingPort" of SCP mentioned above.
      scpInfo:
        scpPrefix: scpPrefix
        scpPorts:
          http: *publicSignalingPort
          # Uncomment the https key-value below to enable HTTPS for ingress connections for a rel16 deployment. This port should be the same as the "publicSignalingPortHttps" of SCP mentioned above.
          # https: *publicSignalingPortHttps
      # Note: If this flag is false, then by default all connections to PNF are made using the HTTP protocol.
      nativeEgressHttpsSupport: false

      If SCP is deployed with HTTP support only, select the single or multiple feature execution option and exclude all HTTPS test cases. Similarly, in an ASM environment where HTTPS is not enabled, manually remove the HTTPS-related test cases from the features directory on the ATS pod.

3.5.1.7 Preinstallation Preparation for SCP-ATS

Complete the following steps before performing an installation of SCP-ATS.

3.5.1.7.1 Enabling Aspen Service Mesh
To enable Aspen Service Mesh (ASM) for ATS, complete the following procedure:

Note:

By default, this feature is disabled.
  1. If ASM is not enabled on the global level for the namespace, run the following command before deploying ATS:
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace scpsvc istio-injection=enabled
  2. Add the following annotations in the lbDeployments section of the global section in the ocats_ocscp_values_25.1.201.yaml file:
    traffic.sidecar.istio.io/excludeOutboundPorts: "8091"
    traffic.sidecar.istio.io/excludeInboundPorts: "8080"

    Sample file with annotations:

    
    lbDeployments:
      labels: {}
      annotations:
        traffic.sidecar.istio.io/excludeOutboundPorts: "8091"
        traffic.sidecar.istio.io/excludeInboundPorts: "8080"
  3. Add Envoy Filter to enable the XFCC header forwarding by ASM sidecar.

    Envoy Filter for ATS:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      workloadSelector:
        labels:
          app: ocats-scp
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY
              use_remote_address: true
              xff_num_trusted_hops: 1

    Envoy filter to enable the XFCC header forwarding on the application sidecar:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      workloadSelector:
        labels:
          app.kubernetes.io/instance: ocscp
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY
              use_remote_address: true
              xff_num_trusted_hops: 1

    Envoy filter to enable server header pass through on sidecar:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <name>
      namespace: <namespace>
    spec:
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: envoy.filters.network.http_connection_manager
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              server_header_transformation: PASS_THROUGH

Updating Virtual Services

Note:

  • The sidecar configuration for response timeout or stream timeout should not be applied for any of the SCP microservices.
  • For the virtual service CRD:
    • When the destination host is any SCP microservice, do not configure the timeout value.
    • SCP producer NFs as destination hosts can be configured with a timeout value of more than 50 seconds, or the timeout should not be configured.

Disabling retry attempts in virtual services:

For all SCP and ATS pods in the virtual services, the retry attempt should be set to 0.
    retries:
      attempts: 0
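
The following is a minimal sketch of a complete virtual service with retries disabled; the name, namespace, and destination host are examples:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: scp-worker-vs
  namespace: scpsvc
spec:
  hosts:
  - scp-worker
  http:
  - route:
    - destination:
        host: scp-worker
    retries:
      attempts: 0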
3.5.1.7.2 Enabling Persistent Volume

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

To enable persistent storage:
  1. Create a PVC and associate it with the ATS pod (a minimal PVC sketch follows this list).
  2. Set the PVEnabled flag to true in the ocats_ocscp_values_25.1.201.yaml file.
  3. Set PVClaimName to PVC that is created for ATS.
    
    ocats-scp:
      PVEnabled: true
      PVClaimName: "ocats-scp-25.1.201-pvc"
      
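For step 1, the following is a minimal PVC sketch; the storage class and namespace are examples, and the 4Gi size matches the ATS Behave PVC requirement in Table 3-18:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocats-scp-25.1.201-pvc
  namespace: scpsvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 4Gi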

Note:

In the event that Persistent Volume (PV) is enabled, ATS starts up with the parameter values specified in the ocats_ocscp_values_25.1.201.yaml file. If the ATS pod is restarted, the PV restores the configuration, ensuring that the new ATS pod will have the same configuration settings as the previous pod.

For more details on Persistent Volume Storage, you can refer to Persistent Volume for 5G ATS.

3.5.2 Configuring SCP-ATS and SCP

This section provides information about updating ATS deployment configuration, enabling and disabling stubs, configuring SCP to run test cases, and so on.

Note:

Make sure to follow the steps mentioned in the Preinstall Preparation of SCP for SCP-ATS to deploy SCP. For more information on SCP deployment, refer to the Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.
3.5.2.1 Updating ATS Configuration

The following section covers updating the ATS deployment configuration and ATS input parameter configuration.

3.5.2.1.1 Updating ATS Deployment Configuration

Previously, manual modification of the ATS configuration parameters for New feature and Regression jobs was required in the ATS graphical user interface (GUI). Now, with the introduction of a section "ATS_Config" in the ocats_ocscp_values_25.1.201.yaml file, you can update the values for ATS parameters and then deploy the OCATS_OCSCP charts using this modified ocats_ocscp_values_25.1.201.yaml file. The updated ATS parameters are automatically applied during the deployment process, and the ATS will come up with the configuration as mentioned in the ocats_ocscp_values_25.1.201.yaml file.

The ocats_ocscp_values_25.1.201.yaml file can be modified to update the ATS parameters according to their environment. To illustrate this, here's an example of how you can update the ATS parameters in the ocats_ocscp_values_25.1.201.yaml file:

Figure 3-14 ATS Deployment Configuration

Note:

Initially, at the time of deployment, you can configure or modify the parameters in the ocats_ocscp_values_25.1.201.yaml file. You can also update or modify the parameters post deployment by following the process described in the Configuring New Feature Pipelines section.

3.5.2.1.2 Configuring SCP-ATS for OCI Setup

To leverage the existing infrastructure of Oracle Cloud, SCP, which was previously deployed only on CNE, can now be integrated on Oracle Cloud Infrastructure (OCI).

Creating Secret for Alarm(Alerts)

When deploying ATS within the OCI environment, users are required to provide the following inputs in the form of a Kubernetes secret named ocats-oci-secret:
user="<your user ocid>"
tenancy="<your tenancy ocid>"
region="<your oci region>"
fingerprint="<fingerprint of your public key>"
key_file="<full path to your private key>"
metric_namespace="<metric_namespace under which all the metrics and alarms of SUT NF will be captured>"
nf_compartment_id="<Compartment Id of SUT NF>"
Run the following command to create the secret for alarm or alert related test cases:
kubectl create secret generic ocats-oci-secret \
  --from-literal=user='<your_user_ocid>' \
  --from-literal=tenancy='<your_tenancy_ocid>' \
  --from-literal=region='<your_oci_region>' \
  --from-literal=fingerprint='<fingerprint_of_your_oci_api_public_key>' \
  --from-literal=metric_namespace='<metric_namespace under which all the metrics and alarms of SUT NF will be captured>' \
  --from-literal=nf_compartment_id='<Compartment Id of SUT NF>' \
  --from-file=key_file='<full_path_to_your_oci_api_private_key on_the_host_machine_where_you_are_running_this_command>' \
  -n <namespace>
For example:
kubectl create secret generic ocats-oci-secret \
  --from-literal=user='ocid1.user.oc1..aaaaaaaajjxlzewn3e76aufhdjfhdkfjkl6ea3aaazgzx7cxg74ljs5an3a' \
  --from-literal=tenancy='ocid1.tenancy.oc1..aaaaaaaa5oqwziy4bngiebry6letze4hdjskjksdkdlksurhc6pojwe4wxe34a' \
  --from-literal=region='us-ashburn-1' \
  --from-literal=fingerprint='79:17:f2:89:76:d6:82:b2:13:b9:1d:9f:ff:92:28:3b' \
  --from-literal=metric_namespace='scpdemons' \
  --from-literal=nf_compartment_id='ocid1.compartment.oc1..aaaaaaaa6crdjhjkkdjkldlxbi7erwtmo3wa7jy6q6ldjjskkdnnmitot4smcczgq' \
  --from-file=key_file='/tmp/oci_api_key.pem' \
  -n scpsvc

ATS Deployment Configuration in OCI Setup

Update the values for ATS parameters and then deploy the OCATS_OCSCP charts using this modified ocats_ocscp_values_25.1.201.yaml file. To update the ATS parameters in the ocats_ocscp_values_25.1.201.yaml file, see the Updating ATS Deployment Configuration section.

Modify the scpMetricVersion parameter to "v2" in the ocats_ocscp_values_25.1.201.yaml file. For more information, see Oracle Communications Cloud Native Core, Service Communication Proxy Installation, Upgrade, and Fault Recovery Guide.
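
For example, in the ocats_ocscp_values_25.1.201.yaml file, the setting would resemble the following (the exact nesting within the values file may differ):

scpMetricVersion: "v2"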

3.5.2.1.3 Updating ATS Input Parameter Configuration

This section provides information about how to modify different services in SCP and configure SCP test cases in ATS.

3.5.2.1.3.1 Enabling or Disabling Stubs
By default, all the stubs are enabled.

Note:

Deploy NRF stubs with port 8080. The NRF details of SCP should specify the ipEndPoints port as 8080 without any ipv4Address field, for example, ipEndPoints: [{"port": "8080"}].

To enable or disable the stubs or pods, set the variable to true or false, respectively. You can install the required stubs or pods during ocats-ocscp deployments.

The following sample parameters show how the stub is enabled by setting different variables to true in the ocats_ocscp_values_25.1.201.yaml file:
global:
  # ********  Sub-Section Start: Custom Extension Global Parameters ********
  #**************************************************************************
  ocatsdnsstubService: true
  ocatsscpService: true
  ocatsscpglbratelimitstubService: true
  ocatsddclientstubService: true
  ausf1Stubs: true
  ausf2Stubs: true
  ausf3Stubs: true
  ausf4Stubs: true
  ausf5Stubs: true
  ausf6Stubs: true
  ausf7Stubs: true
  ausf11Stubs: true
  ausf12Stubs: true
  ausf13Stubs: true
  ausf14Stubs: true
  ausf15Stubs: true
  ausf16Stubs: true
  ausf21Stubs: true
  ausf22Stubs: true
  ausf23Stubs: true
  chf1Stubs: true
  chf2Stubs: true
  nrf1Stubs: true
  nrf2Stubs: true
  nrfr2l1Stubs: true
  nrfr2l2Stubs: true
  nrfr3l1Stubs: true
  nrfr3l2Stubs: true
  pcf1Stubs: true
  pcf1cStubs: true
  pcf2Stubs: true
  pcf3Stubs: true
  pcf4Stubs: true
  pcf5Stubs: true
  pcf6Stubs: true
  pcf7Stubs: true
  pcf8Stubs: true
  pcf10Stubs: true
  pcf11Stubs: true
  pcf12Stubs: true
  pcf13Stubs: true
  pcf14Stubs: true
  pcf15Stubs: true
  pcf16Stubs: true
  pcf21Stubs: true
  pcf22Stubs: true
  pcf23Stubs: true
  pcf24Stubs: true
  pcf25Stubs: true
  pcf26Stubs: true
  pcf27Stubs: true
  pcf28Stubs: true
  scp1Stubs: true
  scp2Stubs: true
  scp3Stubs: true
  scp11Stubs : true
  scp12Stubs: true
  scp51Stubs: true
  scp52Stubs: true
  scp61Stubs: true
  smf1Stubs: true
  smf2Stubs: true
  smf3Stubs: true
  smf4Stubs: true
  smf5Stubs: true
  smf11Stubs: true
  udm1Stubs: true
  udm2Stubs: true
  udm3Stubs: true
  udm4Stubs: true
  udm5Stubs: true
  udm22Stubs: true
  udm23Stubs: true
  udm33Stubs: true
  udm21Stubs: true
  udm31Stubs: true
  udm32Stubs: true
  udr1Stubs: true
  udr2Stubs: true
  scp51svcxxxxStubs: true
  scp52svcxxxxStubs: true
  scp61svcxxxxStubs: true
  sepp1Stubs: true
  sepp2Stubs: true
  sepp3Stubs: true
  nef1Stubs: true
  nef2Stubs: true
  nef3Stubs: true
  nef4Stubs: true
  nef5Stubs: true
  nef6Stubs: true
  nef7Stubs: true
  nef8Stubs: true
  gen1Stubs: true
  gen2Stubs: true

Note:

Replica count of the 'scp51svcxxxx', 'scp52svcxxxx', and 'scp61svcxxxx' stubs must be set to zero.
3.5.2.1.3.1.1 Modifying IpFamilyPolicy or IpFamilies of Stubs
The deployment of all stubs must adhere to the following:
  • If IpFamilyPolicy is set to "SingleStack," then the value of IpFamilies can either be [IPv4] or [IPv6] only.
  • If IpFamilyPolicy is set as "PreferDualStack" or "RequireDualStack", then the values of IpFamilies can either be [IPv4,IPv6] or [IPv6,IPv4] only.

    Note:

    All the pyStubs should be deployed with the same combination of IpFamilyPolicy and IpFamilies.
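
For example, to deploy all the pyStubs as dual stack, the global parameters could be set as follows (this sketch assumes the dotted parameter names in Table 3-20 map to nested YAML keys):

global:
  serviceIpFamilyPolicy:
    stubService: PreferDualStack
  serviceIpFamilies:
    stubService: [IPv4,IPv6]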
3.5.2.1.3.2 Configuring ATS YAML File for Deployment

Umbrella Chart

Helm charts can contain other charts, referred to as subcharts, through the dependencies section in the requirements.yaml file. When a chart is created to group related subcharts or services, such as composing a whole application or deployment, it is known as an umbrella chart.
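
As an illustration, the dependencies section of a requirements.yaml for such an umbrella chart might look like the following; the names and repository paths are examples:

dependencies:
- name: ocats-scp
  version: 25.1.201
  repository: "file://charts/ocats-scp"
- name: ocats-pystub
  version: 25.1.201
  repository: "file://charts/ocats-pystub"
- name: ocats-dnsstub
  version: 25.1.201
  repository: "file://charts/ocats-dnsstub"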

Perform the following procedure to create umbrella charts and add stubs to the umbrella charts:

  1. To add stubs to the umbrella chart, do the following:
    1. The following parameters can be updated in the ocats_ocscp_values_25.1.201.yaml file:

      ocats-scp

      ocats-scp:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-scp
          tag: 25.1.201
          pullPolicy: Always
        replicaCount: 1
        resources:
          limits:
            cpu: 3
            memory: 3Gi
            #ephemeral-storage: 4Gi      
          requests:
            cpu: 3
            memory: 3Gi
      ocats-dnsstub
      ocats-dnsstub:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-dnsstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          customExtension:
            labels: {}
            annotations: {}
          type: ClusterIP
          port: 53
       
        deployment:
          customExtension:
            labels: {}
            annotations: {}
       
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi

      ocats-pystub

      ocats-pystub:
        image:
          repository: cgbu-ocscp-dev-docker.dockerhub-phx.oci.oraclecorp.com/ocats/ocats-pystub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        ausf1:
          service:
            name: ausf1svc
            type: ClusterIP
            ports:
              port: 8080
          deploymentName: ausf1
        ausf2:
          service:
            name: ausf2svc
            type: ClusterIP
            ports:
              port: 8080
          deploymentName: ausf2

      ocats-scpglbratelimitstub

      ocats-scpglbratelimitstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-scpglbratelimitstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          type: ClusterIP
        deployment:
          customExtension:
            labels: {}
            annotations: {}
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        minreplicas: 1
        maxreplicas: 1
        maxPdbUnavailable: 1

      ocats-ddclientstub

      ocats-ddclientstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-ddclientstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          type: LoadBalancer
        resources:
          limits:
            cpu: 1
            memory: 1Gi
            #ephemeral-storage: 55Mi
          requests:
            cpu: 1
            memory: 1Gi
            #ephemeral-storage: 55Mi
        log:
          level: INFO
        extraContainers: USE_GLOBAL_VALUE
        
        kafka_broker: "kafka-broker1-0.kafka-broker1.ddkafkanamespace.svc.cluster.local:9092"
        string_topic_name: "string_topic"
        json_topic_name: "json_topic" 

      ocats-dnsstub

      ocats-dnsstub:
        image:
          repository: <docker-registry IP:docker-registry port>/ocats/ocats-dnsstub
          tag: helm-tag
          pullPolicy: Always
        replicaCount: 1
        service:
          customExtension:
            labels: {}
            annotations: {}
          type: ClusterIP
          port: 53
      
        deployment:
          customExtension:
            labels: {}
            annotations: {}
      
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        extraContainers: USE_GLOBAL_VALUE
3.5.2.1.3.3 Dual Stack Support

Using the dual stack mechanism, applications or NFs can establish connections with pods and services in a Kubernetes cluster using IPv4 or IPv6 or both simultaneously.

With the introduction of the Dual Stack IPv6 support feature, there will be two categories of features performing the same tests. However, they are categorized differently based on the type of endpoint the stack supports (single endpoint or multiple endpoints). It is important to consider the IP endpoint support of the stack while running these tests.

For example:

  • SCP_22.3.0_BugFixes_MultipleIpEndpoint_P0.feature: Shall run on stacks supporting multiple endpoints (IPv4 and IPv6)
  • SCP_22.3.0_BugFixes_SingleIpEndpoint_P0.feature: Shall run on stacks supporting single endpoints (IPv4 or IPv6)

Feature files with "SingleIpEndpoint" in the name:

These test cases run on a single IP stack setup or a dual IP stack setup, where ipFamilies should be [IPv4] or [IPv6] in the ATS deployment file.

Feature files with "MultipleIpEndpoint" in the name:

These test cases run only on a dual IP stack setup, where ipFamilies should be [IPv4, IPv6] or [IPv6, IPv4] in the ATS deployment file.

All other feature files can run on both setups.

3.5.2.1.3.4 Enabling HTTPS for ATS, pystubs, and Jenkins

Perform the following procedure to enable HTTPS for ATS. You can skip these steps if your environment does not require HTTPS, provided you adhere to the Preinstall Preparation of SCP for SCP-ATS.

Ensure that you have the following files generated before you proceed with deployment:

  • Private key
  • Client certificate
  • Certificate Authority Root certificate
  1. To enable HTTPS for ATS, run the following command to create the Kubernetes secret:
    kubectl create secret generic ocats-scp-secret \
      --from-file=rsa_private_key_pkcs1_client.pem --from-file=client.pem \
      --from-file=caroot.pem --from-file=jenkinsserver.jks -n scpsvc

    Note:

The names of the secret, private key, client certificate, and CA root certificate must be the same as those used in the above command.
    1. Make changes to the SERVER_EXT file used for certificate generation. Set the IP.2 value to the local setup IP, as shown below:

      Generate Certificate

      cat >>$SERVER_EXT<<EOF
      authorityKeyIdentifier=keyid,issuer
      basicConstraints=CA:FALSE
      keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth, clientAuth
      subjectAltName = @alt_names

      [alt_names]
      IP.1 = 127.0.0.1
      # replace IP.2 with the local setup IP when enabling the HTTPS GUI for Jenkins
      IP.2 = 10.75.226.134
      DNS.1 = *.${NAMESPACE}.${COMMON_NAME}
      DNS.2 = localhost
      EOF
    2. Set the atsGuiTLSEnabled and atsCommunicationTLSEnabled parameters to true in ocats_ocscp_values.yaml, as shown below:

      Updated Value

      atsGuiTLSEnabled: true
      atsCommunicationTLSEnabled: true
    3. Deploy the ATS pod with the above changes.
  2. To enable HTTPS for pystubs, run the following command to create the Kubernetes secret:
    kubectl create secret generic ocats-pystub-secret \
      --from-file=rsa_private_key_pkcs1_server.pem --from-file=server.pem \
      --from-file=caroot.pem -n scpsvc

    Note:

The names of the secret, private key, client certificate, and CA root certificate must be the same as those used in the above command.
    1. Run the following command to create role and role binding for GET access to the pystub pod:
      kubectl create role scpsvc-pystub-scp-role --verb=get --resource=secrets -n scpsvc
      kubectl create rolebinding --role=scpsvc-pystub-scp-role scpsvc-pystub-scp-rolebinding --serviceaccount=scpsvc:default -n scpsvc
    2. Deploy the pystub pods with the above changes.

      In the ocats_ocscp_values_25.1.201.yaml file, set the value of atsCommunicationTLSEnabled to true.
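After creating the secrets, you can optionally confirm that they exist and contain the expected files, for example:

kubectl get secret ocats-scp-secret ocats-pystub-secret -n scpsvc
kubectl describe secret ocats-pystub-secret -n scpsvc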

3.5.2.1.3.5 Configuring SCP to Run OAuth Test Cases in ATS
By default, the ATS suite runs OAuth test cases if the "ALL" option is selected, and SCP must be deployed with OAuth support enabled to support them.
  # Enable nrfproxy-oauth service (only for Rel16)
  nrfProxyOauthService: true

If SCP is deployed without OAuth support, select either the single or multiple feature execution option, excluding all OAuth test cases, or remove these test cases manually from the feature directory on the ATS pod.
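Before selecting the "ALL" option, you can check how SCP was deployed by inspecting its Helm values (Helm 3 syntax shown here; the release and namespace names are placeholders):

helm get values <scp_release_name> -n <namespace_name> --all | grep nrfProxyOauthService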

3.5.2.1.3.6 Configuring ATS for CCA Test Cases
Run CCA Validation feature tests with ATS.
  • Enable HTTPS to run these feature tests.
  • Generate 9 client certificates (using the gen_certificates.sh script) before running the feature tests, and write these certificates into the ocats-scp-secret Kubernetes secret. Run this script when the SCP-ATS deployment has IP families such as [IPv4] or [IPv4, IPv6].
    The following lists the sample SANs that must be used while creating certificates:
    • client2.pem
      
      [alt_names]
      IP = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://10.75.213.1:443
    • client3.pem
      
      [alt_names]
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
    • client4.pem
      [alt_names]
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
    • client5.pem
      
      [alt_names]
      IP = 10.75.213.1
    • client6.pem
      
      [alt_names]
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
    • client7.pem
      
      [alt_names]
      URI.1 = https://10.75.213.1:443
    • client8.pem
      
      [alt_names]
      IP = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.2 = https://10.75.213.1:443
    • client9.pem
      
      [alt_names]
      IP.1 = 10.75.213.2
      IP.2 = 10.75.213.3
      IP.3 = 10.75.213.4
      IP.4 = 10.75.213.5
      IP.5 = 10.75.213.6
      IP.6 = 10.75.213.7
      IP.7 = 10.75.213.8
      IP.8 = 10.75.213.9
      IP.9 = 10.75.213.10
      IP.10 = 10.75.213.11
      IP.11 = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://10.75.213.10:443
    • client10.pem
      
      [alt_names]
      IP = 10.75.213.1
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://10.75.213.10:443
      URI.4 = https://10.75.213.2:443
      URI.5 = https://10.75.213.3:443
      URI.6 = https://10.75.213.4:443
      URI.7 = https://10.75.213.5:443
      URI.8 = https://10.75.213.6:443
      URI.9 = https://10.75.213.7:443
      URI.10 = https://10.75.213.8:443
      URI.11 = https://10.75.213.1:443
  • Generate 9 client certificates before running the feature tests, and write these certificates into the ocats-scp-secret Kubernetes secret. Run the gen_certificates_ipv6.sh script when the SCP-ATS deployment has IP families [IPv6] or [IPv6,IPv4].
    The following lists the sample SANs that must be used while creating certificates:
    • client2.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
    • client3.pem
      
      [alt_names]
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
    • client4.pem
      [alt_names]
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
    • client5.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
    • client6.pem
      
      [alt_names]
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
    • client7.pem
      
      [alt_names]
URI.1 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
    • client8.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.2 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
    • client9.pem
      
      [alt_names]
      IP.1 = 2002:db8:85a3:0:0:8a2e:370:7334
      IP.2 = 2003:db8:85a3:0:0:8a2e:370:7334
      IP.3 = 2004:db8:85a3:0:0:8a2e:370:7334
      IP.4 = 2005:db8:85a3:0:0:8a2e:370:7334
      IP.5 = 2006:db8:85a3:0:0:8a2e:370:7334
      IP.6 = 2007:db8:85a3:0:0:8a2e:370:7334
      IP.7 = 2008:db8:85a3:0:0:8a2e:370:7334
      IP.8 = 2009:db8:85a3:0:0:8a2e:370:7334
      IP.9 = 2010:db8:85a3:0:0:8a2e:370:7334
      IP.10 = 2011:db8:85a3:0:0:8a2e:370:7334
      IP.11 = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://[2010:db8:85a3:0:0:8a2e:370:7334]:443
    • client10.pem
      
      [alt_names]
      IP = 2001:db8:85a3:0:0:8a2e:370:7334
      DNS.1 = chf1svc.${NAMESPACE}.${COMMON_NAME}
      URI.1 = urn:uuid:11111111-aaaa-aaaa-aaaa-111111111177
      URI.2 = https://chf1svc.${NAMESPACE}.${COMMON_NAME}:443
      URI.3 = https://[2010:db8:85a3:0:0:8a2e:370:7334]:443
      URI.4 = https://[2002:db8:85a3:0:0:8a2e:370:7334]:443
      URI.5 = https://[2003:db8:85a3:0:0:8a2e:370:7334]:443
      URI.6 = https://[2004:db8:85a3:0:0:8a2e:370:7334]:443
      URI.7 = https://[2005:db8:85a3:0:0:8a2e:370:7334]:443
      URI.8 = https://[2006:db8:85a3:0:0:8a2e:370:7334]:443
      URI.9 = https://[2007:db8:85a3:0:0:8a2e:370:7334]:443
      URI.10 = https://[2008:db8:85a3:0:0:8a2e:370:7334]:443
      URI.11 = https://[2001:db8:85a3:0:0:8a2e:370:7334]:443
3.5.2.1.3.7 Updating the Global Egress Rate Limiting Changes in the SCP Deployment File for ATS

Perform the following procedure to enable the Global Egress Rate Limiting feature for the ATS environment.

  1. In the SCP custom-values.yaml file, update the following parameters (see the sample excerpt after this list):
    • federation.remoteScpOne.fqdnOrIp: FQDN of the scpglbratelimitstub pod.
    • federation.remoteScpOne.clusterName: Coherence Cluster Name of global rate limit Stub [Example: scpstub-coherence-cluster].
    • federation.remoteScpOne.nfInstanceId: NFInstanceID of global rate limit Stub [Example: 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22].
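For illustration, the resulting block in the SCP custom-values.yaml file may look like the following sketch. The stub FQDN is an example placeholder; clusterName and nfInstanceId reuse the sample values given above:

federation:
  remoteScpOne:
    fqdnOrIp: <scpglbratelimitstub_service_name>.<namespace>.svc.cluster.local
    clusterName: scpstub-coherence-cluster
    nfInstanceId: 2faf1bbc-6e4a-4454-a507-a14ef8e1bc22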
3.5.2.1.3.8 Configuring SCP to Run DNS SRV Test Cases in ATS
By default, the ATS suite runs alternate resolution (DNS SRV) test cases if the "ALL" option is selected, and SCP must be deployed with alternate resolution service support enabled to support them.
# Enable DNS SRV Alternate Routing Feature
  dnsSRVAlternateRouting: true

If SCP is deployed without alternate resolution support, select either the single or multiple feature execution option, excluding all DNS SRV test cases, or remove these test cases manually from the feature directory on the ATS pod.

3.5.2.1.3.9 Configuring SCP to Run Mediation Test Cases in ATS
By default, the ATS suite runs Mediation test cases if the "ALL" option is selected, and SCP must be deployed with Mediation support enabled to support them.
# Enable mediation service
  mediationService: true
If SCP is deployed without Mediation support, select either the single or multiple feature execution option, excluding all Mediation test cases, or remove these test cases manually from the feature directory on the ATS pod.
3.5.2.1.3.10 Configuring SCP to Run Model D Test Cases in ATS
By default, the ATS suite runs delegated discovery (Model D) test cases if the "ALL" option is selected, and SCP must be deployed with nrfproxy support enabled to support them.
# Enable Nrf Proxy service (only for Rel16)
  nrfProxyService: true

If SCP is deployed without nrfproxy support, select either the single or multiple feature execution option, excluding all Model D test cases, or remove these test cases manually from the feature directory on the ATS pod.

3.5.2.1.3.11 Configuring ATS for Traffic Feed Test Cases
  1. To run Traffic Feed ATS test cases, install the Kafka broker.
  2. In the ATS deployment file, update the following parameters:
    • kafka_broker: <kafka broker host>:<kafka broker port> (Example, “kafka-broker-0.kafka-broker.scpsvc.svc.cluster.local:9092”)
    • string_topic_name: <topic name for string serialization> (Example, “string_topic”)
    • json_topic_name: <topic name for json serialization> (Example, “json_topic”)
  3. In the global.yaml file, update the following parameters:
    • global_traffic_feed_key_Serializer: <key serialization> (Example, string)
    • global_traffic_feed_value_Serializer: <value serialization> (Example, string)
    • global_traffic_feed_topic_name: <topic name for selected serialization> (Example, string_topic)
    • global_traffic_feed_bootstrap_server_host: <kafka broker host> (Example, kafka-broker1-0.kafka-broker1.scpsvc.svc.cluster.local)
    • global_traffic_feed_bootstrap_server_port: <kafka broker port> (Example, 9092)

      For more information on global.yaml file, see ATS Testcase Parametrization on User Input.

    For installation of the Data Director Kafka broker, perform the following:
    • Update the ocnadd-custom-values.yaml as documented in the Data Director Installation and Upgrade Guide.
    • Disable all services apart from ocnaddkafka by marking them as false.
    • Keep ocnaddkafka as true.
    For more information, see Oracle Communications Network Analytics Data Director Installation and Upgrade Guide.

    Note:

    The Kafka broker should be deployed with 3 partitions under string_topic, as shown in the sketch after this note.
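As a sketch, assuming the standard Kafka CLI is available in the broker pod, the topic can be created with 3 partitions as follows (the broker address reuses the example above; the replication factor is an assumption):

kafka-topics.sh --create --topic string_topic --partitions 3 --replication-factor 1 \
  --bootstrap-server kafka-broker1-0.kafka-broker1.scpsvc.svc.cluster.local:9092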
3.5.2.1.3.12 Configuring SCP to Run LCI Test Cases in ATS
By default, the ATS suite runs load manager test cases if the "ALL" option is selected, and SCP must be deployed with load manager support enabled to support the same.
 # Enable load-manager service (only for Rel16)
  loadManagerService: true

If SCP is deployed without load manager support, select either the single or multiple feature execution option, excluding all load manager test cases, or remove these test cases manually from the feature directory on the ATS pod.

3.5.3 Deploying ATS and Stub in the Kubernetes Cluster

This section provides information about how to deploy ATS and stubs.

To deploy ATS in the Kubernetes Cluster:

Note:

Deploy ATS, SCP, and stubs in the same namespace.
  1. Ensure that the ocats_ocscp_values_25.1.201.yaml file is updated with the correct repository, image tag, and parameters as per your requirements.
  2. In the Files or Helm folder, find the ocats-ocscp charts for version 25.1.201, which are to be used for installation.
  3. Run the following command to deploy ATS:
    helm install ocats-ocscp-25.1.201.tgz --name <release_name> --namespace <namespace_name> -f ocats_ocscp_values_25.1.201.yaml

    Example:

    helm install ocats-ocscp-25.1.201.tgz --name ocats-ocscp --namespace scpsvc -f ocats_ocscp_values_25.1.201.yaml

    Note:

    Update image name, tag, service name, and deployment name in ocats-pystub of the ocats_pystub_values_25.1.201.yaml file before deploying.
  4. Verify that all stub pods are up and running in the deployed namespace, as updated in the ocats_ocscp_values_25.1.201.yaml file (see the example after this procedure).
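For example, you can list the ATS and stub pods and confirm that they are in the Running state (pod name patterns vary with the release name):

kubectl get pods -n <namespace_name> | grep -E 'ocats|stub'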

3.5.4 Post Installation and Deployment Steps

This section describes the post-installation steps for SCP.

3.5.4.1 Verifying ATS Deployment
Run the following command to verify the ATS deployment status:
helm status <release_name>

Note:

If ATS is deployed in the service mesh environment, the Ready field for pods displays 2/2.
The following image displays that the deployment is complete because the STATUS field has changed to deployed.

Figure 3-15 Checking ATS Helm Release and ATS related Pod Status



3.5.4.2 Modifying the scpc-alternate-resolution Microservice
Perform the following procedure to modify the scpc-alternate-resolution microservice to point to the DNS Stub for the Alternate Routing based on Domain Name System (DNS) Service (SRV) Records feature.
  1. Capture the cluster IP of the DNS Stub service (see the sample command after this procedure).

    By default, scpc-alternate-resolution points to CoreDNS and displays the following settings in the ocscp_values.yaml deployment file.

    Figure 3-16 CoreDNS

  2. Run the following command to edit the deployment file and add content in scpc-alternate-resolution to query the DNS Stub.

    Uncomment scpc-alternate-resolution's dnsConfig and dnsPolicy before deployment to allow editing.

    $ kubectl edit deployment ocscp-scpc-alternate-resolution -n scpsvc

    Sample deployment file:

    dnsConfig:
      nameservers:
      - 10.96.77.54
      searches:
      - cicdscpsvc-230228133808.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    dnsPolicy: None

    Add the following content:

    • nameservers: Add the IP address that you recorded after installing the DNS Stub (cluster IP of the DNS Stub).
    • searches: Add all the search domains, based on the namespace and cluster name:
      • scpsvc

        This is the namespace.

      • cluster.local

        This is the cluster name.

    • dnsPolicy: Set it to "None" if it is not already set by default.
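The cluster IP required in step 1 can be captured as follows (the DNS Stub service name is an example; use the name from your deployment):

kubectl get svc ocats-dnsstub -n scpsvc -o jsonpath='{.spec.clusterIP}'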
3.5.4.3 ATS Testcase Parametrization on User Input

Parameterization is an approach to decoupling the data from the feature files in ATS so that ATS test cases can be run against some predefined configuration that contains customer-specific data and service configurations. Parametrization allows you to provide or adjust values for the input and output parameters needed for the test cases to be compatible with the SUT configuration. You can update or adjust the key-value pair values in the global.yaml and feature.yaml files for each of the feature files so that they are compatible with SUT configuration. For more information, see the Parameterization section.

Three new folders, cust_data, product_config, and custom_config, are added to the base path /var/lib/jenkins/ocscp_tests inside the ATS pod. The cust_data folder is a replica of the already existing data folder, the product_config folder contains configuration files that are compatible with the default product configuration, and the custom_config folder is a replica of the product_config folder. You can update the custom folders, such as cust_data and custom_config.

Product Config folder

The product_config folder contains two types of YAML files: global.yaml and <feature_name>.yaml (feature file-specific YAML).

Global File

The global.yaml file contains global variable names and their corresponding default values that can be used across all the feature files. The variable names declared in global.yaml should start with the global_Var_ prefix.

For example, <key_name>: &<variable_name> <default_value_of_variable>
#START_GLOBAL
global:
  File_Parameters:
    global_Var_SUT_apiPrefix1: &global_Var_SUT_apiPrefix1 USEast
    global_Var_SUT_ServingLocality1: &global_Var_SUT_ServingLocality1 USEast
    global_Var_localNrfSetId1: &global_Var_localNrfSetId1 setnrfl1.nrfset.5gc.mnc012.mcc345
    global_Var_stubPort: &global_Var_stubPort 8080
#END_GLOBAL
Sample:
#START_GLOBAL
global:
  File_Parameters:
    global_Var_SUT_apiPrefix1: &global_Var_SUT_apiPrefix1 USEast
    # If the following Serving Localities need to be changed, they must also be changed while deploying SCP before running ATS
    global_Var_SUT_ServingLocality1: &global_Var_SUT_ServingLocality1 USEast
    global_Var_SUT_ServingLocality2: &global_Var_SUT_ServingLocality2 Loc7
    global_Var_SUT_ServingLocality3: &global_Var_SUT_ServingLocality3 Loc8
    global_Var_SUT_ServingLocality4: &global_Var_SUT_ServingLocality4 Loc9
 
    # If the following SetIds need to be changed, they must also be changed while deploying SCP before running ATS
    global_Var_localNrfSetId1: &global_Var_localNrfSetId1 setnrfl1.nrfset.5gc.mnc012.mcc345
    global_Var_remoteNrfSetId1: &global_Var_remoteNrfSetId1 setnrfr1.nrfset.5gc.mnc012.mcc345
    global_Var_remoteNrfSetId2: &global_Var_remoteNrfSetId2 setnrfr2.nrfset.5gc.mnc012.mcc345
 
    # If stubPort is changed, the stub must be deployed with the same port number before running ATS
    global_Var_stubPort: &global_Var_stubPort 8080
    global_Var_stubErrorCode: &global_Var_stubErrorCode 404
  
    global_Var_udm_nfSetIdList1: &global_Var_udm_nfSetIdList1 set1.udmset.5gc.mnc012.mcc345
    global_Var_udm_nfSetIdList2: &global_Var_udm_nfSetIdList2 set2.udmset.5gc.mnc012.mcc345
    global_Var_udm_nfSetIdList3: &global_Var_udm_nfSetIdList3 set3.udmset.5gc.mnc012.mcc345
     
    global_Var_smf_nfSetIdList1: &global_Var_smf_nfSetIdList1 set1.smfset.5gc.mnc012.mcc345
    global_Var_smf_nfSetIdList2: &global_Var_smf_nfSetIdList2 set2.smfset.5gc.mnc012.mcc345
    global_Var_smf_nfSetIdList3: &global_Var_smf_nfSetIdList3 set3.smfset.5gc.mnc012.mcc345
     
    global_Var_pcf_nfSetIdList1: &global_Var_pcf_nfSetIdList1 set1.pcfset.5gc.mnc012.mcc345
    global_Var_pcf_nfSetIdList2: &global_Var_pcf_nfSetIdList2 set2.pcfset.5gc.mnc012.mcc345
    global_Var_pcf_nfSetIdList3: &global_Var_pcf_nfSetIdList3 set3.pcfset.5gc.mnc012.mcc345
     
    global_Var_udm1_nfInstanceId: &global_Var_udm1_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111111
    global_Var_udm2_nfInstanceId: &global_Var_udm2_nfInstanceId 21111111-aaaa-aaaa-aaaa-111111111111
    global_Var_udm3_nfInstanceId: &global_Var_udm3_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111122
    global_Var_smf1_nfInstanceId: &global_Var_smf1_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111111
    global_Var_smf2_nfInstanceId: &global_Var_smf2_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111122
    global_Var_smf3_nfInstanceId: &global_Var_smf3_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111133
    global_Var_smf4_nfInstanceId: &global_Var_smf4_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111144
    global_Var_smf5_nfInstanceId: &global_Var_smf5_nfInstanceId 11111111-aaaa-aaaa-aaaa-111111111155
    global_Var_pcf1_nfInstanceId: &global_Var_pcf1_nfInstanceId 1faf1bbc-6e4a-3994-a507-a14ef8e1bc5a
    global_Var_pcf2_nfInstanceId: &global_Var_pcf2_nfInstanceId 1faf1bbc-6e4a-3994-a507-a14ef8e1bc6b
    global_Var_scp51_nfInstanceId: &global_Var_scp51_nfInstanceId 2fbf1bbc-6e4b-3994-b507-b14ef8e1bc51
    global_Var_scp61_nfInstanceId: &global_Var_scp61_nfInstanceId 2fbf1bbc-6e4b-3994-b507-b14ef8e1bc61
     
    # If a svc name is changed, the stub must be deployed with the same svc name before running ATS
    global_Var_udm1_svc_name: &global_Var_udm1_svc_name udm1svc
    global_Var_udm2_svc_name: &global_Var_udm2_svc_name udm2svc
    global_Var_udm3_svc_name: &global_Var_udm3_svc_name udm3svc
    global_Var_smf1_svc_name: &global_Var_smf1_svc_name smf1svc
    global_Var_smf2_svc_name: &global_Var_smf2_svc_name smf2svc
    global_Var_smf3_svc_name: &global_Var_smf3_svc_name smf3svc
    global_Var_smf4_svc_name: &global_Var_smf4_svc_name smf4svc
    global_Var_smf5_svc_name: &global_Var_smf5_svc_name smf5svc
    global_Var_pcf1_svc_name: &global_Var_pcf1_svc_name pcf1svc
    global_Var_pcf2_svc_name: &global_Var_pcf2_svc_name pcf2svc
    global_Var_scp51_svc_name: &global_Var_scp51_svc_name scp51svc
    global_Var_scp61_svc_name: &global_Var_scp61_svc_name scp61svc
    global_Var_nrf1_svc_name: &global_Var_nrf1_svc_name nrf1svc
#END_GLOBAL

Feature File

The <Feature_name>.yaml file contains feature-specific variables that can be parameterized. For example, if the name of the feature file that needs to be parameterized is ModelC_NF_Set.feature, then the corresponding YAML file is named ModelC_NF_Set.yaml. This YAML file contains the #START_GLOBAL and #END_GLOBAL tags without any data in between, as the data is copied over from the global.yaml file to this section during test execution. The variable names in <feature_name>.yaml should have the feature_Var_ prefix.

For example: <key_name>: &<variable_name> <default_value_of_variable>

#START_GLOBAL
#END_GLOBAL

ModelC_NF_Set.feature:
  File_Parameters:
    feature_Var_udm1_Priority: &feature_Var_udm1_Priority 0
    feature_Var_smf1_Priority: &feature_Var_smf1_Priority 0
    feature_Var_traffic_rate: &feature_Var_traffic_rate 100

Scenario

The variables are referenced under the scenario tag to be used in the feature file; for this, the scenario tag has to be concatenated with Scenario_ and enclosed within double quotes.

The variables defined under the scenario tag should have the sc_ prefix.

For example: "Scenario_<Scenario_tag>":

"Scenario_Scenario-1- <Scenario_tag>":
  Input:
    File_Parameters:
      sc_http_requests_total_udm1: 100
Sample:
#START_GLOBAL
#END_GLOBAL
 
ModelC_NFSet.feature:
  File_Parameters:
    # The priorities can be changed without affecting the order of the priorities
    feature_Var_udm1_Priority: &feature_Var_udm1_Priority 0 
    feature_Var_udm2_Priority: &feature_Var_udm2_Priority 1
    feature_Var_udm3_Priority: &feature_Var_udm3_Priority 1
 
    feature_Var_smf1_Priority: &feature_Var_smf1_Priority 0
    feature_Var_smf2_Priority: &feature_Var_smf2_Priority 0
    feature_Var_smf3_Priority: &feature_Var_smf3_Priority 1
    feature_Var_smf4_Priority: &feature_Var_smf4_Priority 3
    feature_Var_smf5_Priority: &feature_Var_smf5_Priority 4
 
    feature_Var_pcf1_Priority: &feature_Var_pcf1_Priority 0
    feature_Var_pcf2_Priority: &feature_Var_pcf2_Priority 0
     
    feature_Var_supiOfPathURI: &feature_Var_supiOfPathURI imsi-100000001
     
    feature_Var_traffic_rate: &feature_Var_traffic_rate 100
     
    # The traffic for the below metrics is to be configured in the same proportion as the traffic sent
    feature_Var_scp_http_rx_req_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_req_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_req_total_cnt_alternate_route: &feature_Var_scp_http_tx_req_total_cnt_alternate_route 200
    feature_Var_http_requests_total_udm1: &feature_Var_http_requests_total_udm1 0
    feature_Var_http_requests_total_udm2: &feature_Var_http_requests_total_udm2 0
    feature_Var_http_requests_total_udm3: &feature_Var_http_requests_total_udm3 0
    feature_Var_http_requests_total_smf1: &feature_Var_http_requests_total_smf1 0
    feature_Var_http_requests_total_smf2: &feature_Var_http_requests_total_smf2 0
    feature_Var_http_requests_total_smf3: &feature_Var_http_requests_total_smf3 0
    feature_Var_http_requests_total_smf4: &feature_Var_http_requests_total_smf4 0
    feature_Var_http_requests_total_pcf1: &feature_Var_http_requests_total_pcf1 0
    feature_Var_scp_http_rx_res_total_cnt: *feature_Var_traffic_rate
    feature_Var_scp_http_tx_res_total_cnt: *feature_Var_traffic_rate
 
  "Scenario_Scenario-1- Forward route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-2- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Discovery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
    
  "Scenario_Scenario-3- Load Balance initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50      # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50      # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-4- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3GPP-Sbi-Disocvery-target-NfSetid Header present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-5- Forward route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-6- Alternate route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_udm3_Priority: 30     # Priority can be changed without affecting the order of the priorities
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-7- Load Balance subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header with bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50      # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50      # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-8- Alternate route subsequent UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3gpp-sbi-routing-binding Header with bl=nfset is present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
 
  "Scenario_Scenario-9- To test when Forward route for notification request UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding header bl=nfset":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-10- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding header,alternate route should happen on NfSet":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-11- To test Forward route for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id,3gpp-Sbi-Target-apiRoot":
    Input:
      File_Parameters:
        sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-12- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id and 3gpp-Sbi-Target-apiRoot,alternate route should happen on NfSet":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 100     # To be configured in the same proportion as the traffic sent
        
  "Scenario_Scenario-13- To test when Forward route fails for notification request UECM AMF Registration messages with 3gpp-Sbi-Discovery-target-nf-set-id and 3gpp-Sbi-Target-apiRoot,load balancing should happen on NfSet on NFs with similar priority":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm2: 50      # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_udm3: 50      # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-14- Forward route initial SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-15- Alternate route SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-16- Error response code received when SMF PduSession sm-contexts create message is sent with missing information in 3gpp-sbi-routing-binding header and 3gpp-Sbi-Target-apiRoot header is missing":
      Input:
        File_Parameters:
          sc_scp_http_tx_req_total_cnt: 0     # To be configured in the same proportion as the traffic sent
          sc_ocscp_metric_scp_generated_response_total: 100     # To be configured in the same proportion as the traffic sent
 
  "Scenario_Scenario-17-Error response code received when SMF PduSession sm-contexts create message is sent with missing information in 3gpp-Sbi-Discovery-target-nf-set-id header and 3gpp-Sbi-Target-apiRoot header":
      Input:
        File_Parameters:
          sc_targetNfSetId_Send: 1    
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-18- Alternate route SMF PduSession sm-contexts create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header":
      Input:
        File_Parameters:
          sc_smf1_Priority: 2     # Priority can be changed without affecting the order of the priorities
          sc_scp_http_tx_req_total_cnt_alternate_route: 400     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf1: 100     # To be configured in the same proportion as the traffic sent
          sc_http_requests_total_smf4: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-19- No Alternate route for initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Discovery-target-NfSetid Header as reroute Policy is disabled":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
          sc_scp_http_rx_res_total_cnt: 0     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-20- Forward route PCF SMPolicyControl Create SMPolicyAssociation with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are not present in response":
      Input:
        File_Parameters:
          sc_pcf1_Service_Priority: 80     # Priority can be changed without affecting the order of the priorities
          sc_http_requests_total_pcf1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-21- Alternate route PCF SMPolicyControl Create SMPolicyAssociation messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_pcf1_Priority: 2     # Priority can be changed without affecting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-22- Alternate route initial UECM AMF Registration messages with 3gpp-Sbi-Target-apiRoot missing and 3GPP-Sbi-Disocvery-target-NfSetid Header present and verify that only 3gpp-sbi-producer-id header is present in response since location header is present":
      Input:
        File_Parameters:
          sc_http_requests_total_udm1: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-23- Alternate route PCF SMPolicyControl Create messages with 3gpp-Sbi-Target-apiRoot and 3GPP-Sbi-Disocvery-target-NfSetid Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_pcf1_Priority: 2     # Priority can be changed without affecting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
         
  "Scenario_Scenario-24- Alternate route PCF SMPolicyControl Create messages with 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-routing-binding Header and verify that 3gpp-Sbi-Target-apiRoot and 3gpp-sbi-producer-id headers are present in response":
      Input:
        File_Parameters:
          sc_Var_pcf1_Priority: 2     # Priority can be changed without affecting the order of the priorities
          sc_http_requests_total_pcf2: 100     # To be configured in the same proportion as the traffic sent
Updates in Feature Files

The variables defined in either global.yaml or <feature_name>.yaml are used in the feature files by enclosing the variable name in curly brackets. The steps in a feature file without parameterization are as follows:

Figure 3-17 Feature File without Parameterization



The steps in a feature file when it is parameterized:

Figure 3-18 Feature File when it is Parameterized


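For illustration, a step that hard-codes a value, such as:

Then verify that metric "http_requests_total" for udm1 is 100

becomes the following once parameterized. The step wording here is hypothetical; the variable name follows the scenario-level sc_ convention described above:

Then verify that metric "http_requests_total" for udm1 is {sc_http_requests_total_udm1}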

Running Feature Files

Because the changes are made in the product config folder, the same configuration type can be chosen from the Jenkins GUI to run the test cases. The logs show that the values of the variables are replaced with the ones provided in global.yaml or <feature_name>.yaml.

Note:

Only variables that add some value require parameterization; a change in the value of those variables must not affect the test case logic. A variable that has an effect across all the feature files should be kept in global.yaml, a variable that is specific to a feature file must be kept in <feature_name>.yaml, and a variable that is related to a specific scenario must be kept at the scenario level.

3.5.4.4 ATS Tagging Support for SCP

The ATS Tagging Support feature allows you to select features and scenarios based on specific tags. There are two types of tags, Feature tag and Scenario tag. For more information about this feature, see ATS Tagging Support. The following tags are supported by SCP:

  • @non_asm: This tag is for all test cases that should not run on an ASM setup.
  • @https: This tag is for all https test cases.
  • @multiple_ip_endpoint: This tag is for all test cases that should run only on multiple IP endpoint setups, such as IPv4- or IPv6-preferred dual stack setups.
  • @single_ip_endpoint: This tag is for all test cases that should run only on single IP endpoint setups, such as IPv4 or IPv6.
  • @alert: This tag is for all test cases with alert scenarios.
  • @api: This tag is for all test cases with API scenarios.
  • @sanity: This tag is for all the SCP sanity test cases.

Tags are available at the Feature and Scenario levels. You can create tags by adding "@" as a prefix to the feature or scenario name.

Examples:
  • @SCP_InterSCP_PCF_HTTPS_P0 @https
  • @S1_To_test_alert_for_SCP_Observability @alert

3.5.5 Appendix

This section provides supplementary information that may be helpful for a more comprehensive understanding of installing and running SCP test cases in ATS.

3.5.5.1 Creating Custom Service Account

By default, ATS creates a service account with the rules below. If you do not want to use the default service account, manually create a service account with the following permissions, and specify the custom service account name in the ocats_ocscp_values_25.1.201.yaml file.

To run SCP-ATS, use the following rules to create a custom service account (a complete manifest sketch follows the rules):
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "create", "delete", "update" ,"patch"]
- apiGroups: [""]
  resources: ["pods", "services", "pods/log"]
  verbs: ["watch", "get", "list", "create", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["watch", "get", "list", "create", "delete", "update", "patch"]
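A complete manifest sketch using these rules is shown below. The ServiceAccount, Role, and RoleBinding names are examples; replace scpsvc with your namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-custom-sa        # example name; set it in ocats_ocscp_values_25.1.201.yaml
  namespace: scpsvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-custom-role
  namespace: scpsvc
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["watch", "get", "list", "create", "delete", "update", "patch"]
- apiGroups: [""]
  resources: ["pods", "services", "pods/log"]
  verbs: ["watch", "get", "list", "create", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["watch", "get", "list", "create", "delete", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-custom-rolebinding
  namespace: scpsvc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocats-custom-role
subjects:
- kind: ServiceAccount
  name: ocats-custom-sa
  namespace: scpsvc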
3.5.5.2 Adding New Stubs to SCP-ATS
For adding new stubs related to pystub, add the deployment and service YAML files inside the template folder of the ocats-pystub charts, and update the deployment name in the stub deployment file as follows:
name: {{ .Values.ausf11.deploymentName }}

app: {{ .Values.ausf11.deploymentName }}
Update the service name, labels and selectors, and ports in the service.yaml file:
name: {{ .Values.ausf11.service.name }}

app: {{ .Values.ausf11.deploymentName }}

port: {{ .Values.ausf11.service.ports.port }}
Update the ocats_ocscp_values_25.1.201.yaml file with the new stub information (a service.yaml sketch follows below):

ausf11:
  service:
    name: ausf11svc
    type: ClusterIP
    ports:
      port: 8080
  deploymentName: ausf11
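For reference, a minimal service.yaml template that wires these values together might look like the following sketch (the exact layout is an assumption; align it with the existing ocats-pystub templates):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.ausf11.service.name }}
  labels:
    app: {{ .Values.ausf11.deploymentName }}
spec:
  type: {{ .Values.ausf11.service.type }}
  ports:
    - port: {{ .Values.ausf11.service.ports.port }}
  selector:
    app: {{ .Values.ausf11.deploymentName }}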

The following image shows sample pystub service and deployment files for each stub:

Figure 3-19 Sample Deployment File



3.6 Installing ATS for SEPP

This section describes Automated Testing Suite (ATS) installation procedures for Security Edge Protection Proxy (SEPP) in a cloud native environment.

3.6.1 Resource Requirements

Total Number of Resources

The resources required to install SEPP-ATS are as follows:

Table 3-21 Total Number of Resources

| Resource | CPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|
| SEPP SUT Total | 15.1 | 24.128 | 0 |
| cnDBTier Total | 40 | 40 | 20 |
| ATS Total | 4.5 | 4.5 | 1 |
| Grand Total SEPP ATS | 59.6 | 68.628 | 21 |

Resource Details

The details of resources required to install SEPP-ATS are as follows:

Table 3-22 Resource Details

| Microservice | CPUs Required per Pod | Memory Required per Pod (GB) | Storage PVC Required per Pod (GB) | # Replicas (regular deployment) | # Replicas (ATS deployment) | CPUs Required - Total | Memory Required - Total (GB) | Storage PVC Required - Total (GB) |
|---|---|---|---|---|---|---|---|---|
| SEPP Pods |  |  |  |  |  |  |  |  |
| n32-ingress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| n32-egress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| plmn-ingress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| plmn-egress-gateway | 1.5 | 2 | 0 | 1 | 1 | 1.5 | 2 | 0 |
| pn32f-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| cn32f-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| cn32c-svc | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| pn32c-svc | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| config-mgr-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| nrf-client-nfdiscovery | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| nrf-client-nfmanagement | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| ocpm-config-server | 0.5 | 1 | 0 | 1 | 1 | 0.5 | 1 | 0 |
| appinfo | 0.5 | 2 | 0 | 1 | 1 | 0.5 | 2 | 0 |
| perfinfo | 0.1 | 0.128 | 0 | 1 | 1 | 0.1 | 0.128 | 0 |
| nf-mediation | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| alternate-route | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 |
| coherence-svc | 1 | 2 | 0 | 1 | 1 | 1 | 2 | 0 |
| SEPP SUT Totals |  |  |  |  |  | 15.1 | 25.128 | 0 |
| ATS |  |  |  |  |  |  |  |  |
| ATS Behave | 3 | 3 | 1 (Optional) | 1 | 1 | 3 | 3 | 1 |
| ATS Stub (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Stub-2 (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Stub-3 (Python) | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 0.5 | 0 |
| ATS Totals |  |  |  |  |  | 4.5 | 4.5 | 1 |
| DB Tier Pods (minimum of 4 worker nodes required) |  |  |  |  |  |  |  |  |
| vrt-launcher-dt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-dt-4.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-mt-3.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-1.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-sq-2.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| vrt-launcher-db-installer.cluster.local | 4 | 4 | 2 | 2 | 1 | 4 | 4 | 2 |
| DB Tier Totals |  |  |  |  |  | 40 | 40 | 20 |
3.6.1.1 SEPP ATS Compatibility Matrix

The following table lists the SEPP ATS versions and their compatibility with the SEPP releases and the ATS framework:

Table 3-23 SEPP ATS Compatibility Matrix

| SEPP ATS Release | SEPP Release | ATS Framework Version |
|---|---|---|
| 25.1.201 | 25.1.200 | 25.1.202 |
| 25.1.200 | 25.1.200 | 25.1.202 |
| 25.1.101 | 25.1.101 | 25.1.101 |
| 25.1.100 | 25.1.100 | 25.1.100 |
| 24.3.0 | 24.3.0 | 24.3.0 |

3.6.2 Downloading the ATS Package

Locating and Downloading ATS and Simulator Images

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab to locate the patch.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Security Edge Protection Proxy <release_number> from Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the search results. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file to download the CNC SEPP ATS package file.
  10. Unzip the file to access all the ATS images. The ocsepp-ats directory contains the csar directory, which has the following files:
    
    ocats_ocsepp_csar_25_1_201_0_0.zip
    ocats_ocsepp_csar_25_1_201_0_0.zip.sha256

    Note:

    The above zip file contains all the images and custom values required for 25.1.201 release of OCATS-OCSEPP.
  11. The ocats_ocsepp_csar_25_1_201_0_0.zip file has the following structure:
    ├── Definitions
    │   ├── ocats_ocsepp_cne_compatibility.yaml
    │   └── ocats_ocsepp.yaml
    ├── Files
    │   ├── ChangeLog.txt
    │   ├── Helm
    │   │   └── ocats-sepp-25.1.201.tgz (Helm Charts)
    │   ├── Licenses
    │   ├── ocats-sepp-25.1.201.tar (BDD client image)
    │   ├── Oracle.cert
    │   ├── seppstub-25.1.201.tar (Stub server image)
    │   └── Tests
    ├── Scripts/
    │   ├── ocats_ocsepp_tests_jenkinsjobs_25.1.201.tgz
    │   │   ├── jobs (For Persistent Volume)
    │   │   └── ocsepp_tests (For Persistent Volume)
    │   └── ocats_ocsepp_values_25.1.201.yaml (Custom values file for installation)
    ├── ocats_ocsepp.mf 
    └── TOSCA-Metadata
        └── TOSCA.meta
  12. Copy the zip file to the Kubernetes cluster where you want to deploy ATS.

3.6.3 Pushing the Images to Customer Docker Registry

Preparing to Deploy ATS and Stub Pod in Kubernetes Cluster

To deploy ATS and Stub Pod in Kubernetes Cluster:

  1. Run the following command to extract the package content:
    unzip ocats_ocsepp_csar_25_1_201_0_0.zip
    The following Docker image tar files are located in the Files folder:
    • ocats-sepp-25.1.201.tar
    • seppstub-25.1.201.tar
  2. Run the following commands in your cluster to load the ATS Docker image ocats-sepp-25.1.201.tar and the Stub Docker image seppstub-25.1.201.tar, and push them to your registry:
    
    $ docker load -i ocats-sepp-25.1.201.tar
    $ docker load -i seppstub-25.1.201.tar
      
    $ docker tag ocats/ocats-sepp:25.1.201 <local_registry>/ocats/ocats-sepp:25.1.201
     
    $ docker tag ocats/seppstub:25.1.201 <local_registry>/ocats/seppstub:25.1.201
     
    $ docker push <local_registry>/ocats/ocats-sepp:25.1.201
     
    $ docker push <local_registry>/ocats/seppstub:25.1.201
  3. Run the following command to extract the Helm charts, which are located in the Helm directory of the Files folder:
    tar -xvf ocats-sepp-25.1.201.tgz
    The output of this command is:
        
        ocats-sepp/ 
        ocats-sepp/Chart.yaml
        ocats-sepp/charts/
        ocats-sepp/values.yaml
  4. Create a copy of the custom values file located at Scripts/ocats_ocsepp_values_25.1.201.yaml and update the image name, tag, and other parameters as required (see the illustrative excerpt after this procedure).
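An illustrative excerpt of such an update is shown below. The exact key paths are an assumption; use the structure present in your copy of the file, together with the registry tags pushed in the previous step:

ocats:
  image:
    repository: <local_registry>/ocats/ocats-sepp
    tag: 25.1.201
stubserver:
  image:
    repository: <local_registry>/ocats/seppstub
    tag: 25.1.201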

3.6.4 Creating Secrets and Supporting TLS 1.2 and TLS 1.3

Note:

If SEPP ATS is deployed in ASM, the following procedures are not required.
3.6.4.1 Configuring Root Certificate

The following are the steps to configure the root certificate (caroot.cer):

Note:

  • Use the same root certificate (caroot.cer) and key that were used for creating the SEPP certificates.
  • If you have already generated the caroot.cer and cakey.pem files while deploying SEPP, skip to the Generating ATS Certificate step.
  • Both ATS and SEPP must have the same root certificate.
  1. In case caroot.cer is not available for SEPP, create the ssl.conf file for SEPP using the following format:
    
    # Creation of CSEPP Certs, Fqdn should be changed
     
    #ssl.conf
    [ req ]
    default_bits = 4096
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
    [ req_distinguished_name ]
    countryName = Country Name (2 letter code)
    countryName_default = IN
    stateOrProvinceName = State or Province Name (full name)
    stateOrProvinceName_default = Karnataka
    localityName = Locality Name (eg, city)
    localityName_default = Bangalore
    organizationName = Organization Name (eg, company)
    organizationName_default = Oracle
    commonName = sepp1.inter.oracle.com
    commonName_max = 64
    commonName_default = sepp1.inter.oracle.com
    [ req_ext ]
    subjectAltName = @alt_names
    [alt_names]
    IP = 127.0.0.1
    DNS.1 = sepp1.inter.oracle.com
  2. Set the following environment variables:
    
    export PEM_PHRASE=NextGen1
    export DEPLOYMENT_NAMESPACE=sepp1
  3. Run the following command to create the required files:
    
     
    openssl req -new -keyout cakey.pem -out careq.pem -passout pass:${PEM_PHRASE} -subj "/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com"
     
    openssl x509 -signkey cakey.pem -req -days 3650 -in careq.pem -out caroot.cer -extensions v3_ca -passin pass:${PEM_PHRASE}
     
    openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
     
    openssl req -new -key rsa_private_key -out ocsepp.csr -config ssl.conf -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl x509 -CA caroot.cer -CAkey cakey.pem -CAserial serial.txt -req -in ocsepp.csr -out ocsepp.cer -days 365 -extfile ssl.conf -extensions req_ext -passin pass:${PEM_PHRASE}
     
    openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key.pem
     
    openssl pkcs8 -topk8 -in ec_private_key.pem -inform pem -out ecdsa_private_key.pem -outform pem -nocrypt
     
    openssl req -new -key ecdsa_private_key.pem -x509 -nodes -days 365 -out ecdsa_certificate_pkcs1.crt -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    openssl req -new -key ecdsa_private_key.pem -out ecdsa_certificate.csr -subj '/C=IN/ST=Karnataka/L=Bangalore/O=Oracle/CN=sepp1.inter.oracle.com/emailAddress=xyz@oracle.com'
     
    echo NextGen1 > trust.txt
    echo NextGen1 > key.txt
    echo 1234 > serial.txt
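You can optionally inspect the generated root certificate before proceeding, for example:

openssl x509 -in caroot.cer -noout -subject -dates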
    
    
3.6.4.2 Generating ATS Certificate

The following are the steps to configure the ATS certificate:

  1. Create and edit the ssl.conf file as follows:

    Note:

    • While trying to access the GUI with DNS, ensure that commonName_default is the same as the DNS name being used.
    • Ensure that the DNS is in the format <service_name>.<namespace>.<cluster_domain>.
    • You can add multiple DNS entries, such as DNS.1, DNS.2, and so on.
    • ATS_HELM_RELEASE_NAME is the release name that will be used to deploy ATS.
    1. In the alt_names section of the ssl.conf file, list the IPs through which the ATS GUI will be opened. You can add multiple IPs, such as IP.1, IP.2, and so on.
    2. All stubserver service names ({ATS_HELM_RELEASE_NAME}-stubserver.{ats-namespace}, {ATS_HELM_RELEASE_NAME}-stubserver-2.{ats-namespace}, and {ATS_HELM_RELEASE_NAME}-stubserver-3.{ats-namespace}) must be present in the Subject Alternative Name of the certificate.
    3. Update the ocats service name (${ATS_HELM_RELEASE_NAME}-ocats.${DEPLOYMENT_NAMESPACE}.svc.cluster.local) in commonName, commonName_default, and the DNS name in the alt_names section.
    Sample code:
    
    #ssl.conf
    [ req ]
    default_bits = 4096
    distinguished_name = req_distinguished_name
    req_extensions = req_ext
     
    [ req_distinguished_name ]
    countryName = Country Name (2 letter code)
    countryName_default = IN
    stateOrProvinceName = State or Province Name (full name)
    stateOrProvinceName_default = Karnataka
    localityName = Locality Name (eg, city)
    localityName_default = Bangalore
    organizationName = Organization Name (eg, company)
    organizationName_default = Oracle
    commonName = ${ATS_HELM_RELEASE_NAME}-ocats.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
    commonName_max = 64
    commonName_default = ${ATS_HELM_RELEASE_NAME}-ocats.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
     
    [ req_ext ]
    subjectAltName = @alt_names
     
    [alt_names]
    IP.1 = 127.0.0.1
    IP.2 = 10.75.217.5
    
    # Mandatory values
    DNS.1 = ${ATS_HELM_RELEASE_NAME}-ocats.${DEPLOYMENT_NAMESPACE}.svc.cluster.local
    DNS.2 = ${ATS_HELM_RELEASE_NAME}-stubserver.${DEPLOYMENT_NAMESPACE}
    DNS.3 = ${ATS_HELM_RELEASE_NAME}-stubserver-2.${DEPLOYMENT_NAMESPACE}
    DNS.4 = ${ATS_HELM_RELEASE_NAME}-stubserver-3.${DEPLOYMENT_NAMESPACE}
    DNS.5 = localhost
  2. Run the following command to create a certificate signing request or csr:
    $ openssl req -config ssl.conf -newkey rsa:2048 -days 1000 -nodes -keyout rsa_private_key_pkcs1.key > ssl_rsa_certificate.csr
    Output:
    Ignoring -days; not generating a certificate
    Generating a RSA private key
    ...+++++
    ........+++++
    writing new private key to 'rsa_private_key_pkcs1.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [IN]:
    State or Province Name (full name) [KA]:
    Locality Name (eg, city) [BLR]:
    Organization Name (eg, company) [ORACLE]:
    Common Name (e.g. server FQDN or YOUR name) [ocats]:
    [cloud-user@star23-bastion-1 ocats]$
  3. Run the following command to verify whether all configurations are done:
    openssl req -text -noout -verify -in ssl_rsa_certificate.csr
  4. Run the following command to sign the CSR file with the root certificate:
    $ openssl x509 -extfile ssl.conf -extensions req_ext -req -in ssl_rsa_certificate.csr -days 1000 -CA caroot.cer -CAkey cakey.pem -set_serial 04 > ssl_rsa_certificate.crt
    Output:
    
    Signature ok
    subject=C = IN, ST = KA, L = BLR, O = ORACLE, CN = sepp-ats-rel-ocats.testns.svc.cluster.local
    Getting CA Private Key
    [cloud-user@star23-bastion-1 ocats]$

    Note:

    When the output prompts for a password, enter the password that was used to create the cakey.pem file.
  5. Verify whether the certificate is properly signed by root certificate:
    $ openssl verify -CAfile caroot.cer ssl_rsa_certificate.crt

    Output:

    ssl_rsa_certificate.crt:OK
  6. For Jenkins to support GUI access through HTTPS, a JKS file must be created. Perform the following steps to generate the JKS file for the Jenkins server:
    1. Run the following command to generate the .p12 keystore file:
      $ openssl pkcs12 -inkey rsa_private_key_pkcs1.key -in ssl_rsa_certificate.crt -export -out certificate.p12
      Output:
      
      Enter Export Password:
      Verifying - Enter Export Password:

      Note:

      When the output prompts for the password, enter the password and note it down, as it is required for creating the jks file.

  7. Run the following command to convert the .p12 file into a JKS file to be used by the Jenkins server:

    Note:

    • Ensure that the same password used for creating the .p12 file is also used when creating the JKS file.
    • Java must be preinstalled to run the keytool utility.
    $ keytool -importkeystore -srckeystore ./certificate.p12 -srcstoretype pkcs12 -destkeystore jenkinsserver.jks -deststoretype JKS
    Output:
    
    Importing keystore ./certificate.p12 to jenkinsserver.jks...
    Enter destination keystore password:
    Re-enter new password:
    Enter source keystore password:
    Entry for alias 1 successfully imported.
    Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
    The generated file, jenkinsserver.jks, must be provided to the Jenkins server. The keystore contents can be checked as shown in the sketch below.
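The ssl.conf sample in step 1 uses shell-style placeholders (${ATS_HELM_RELEASE_NAME} and ${DEPLOYMENT_NAMESPACE}) that openssl does not expand on its own. The following is a minimal sketch for expanding them before running the openssl commands; it assumes the gettext envsubst utility is available, that the placeholder version of the file is saved under the hypothetical name ssl.conf.template, and uses example values for the release name and namespace:

    # Export the values that the template expects (example values).
    export ATS_HELM_RELEASE_NAME=sepp-ats-rel
    export DEPLOYMENT_NAMESPACE=testns

    # Expand the ${...} placeholders into the ssl.conf that openssl reads.
    envsubst < ssl.conf.template > ssl.conf

Similarly, after step 7, the contents of the generated keystore can be spot-checked with the keytool utility; alias 1 reported by the listing is the alias shown in the import output above:

    # List the entries in the Jenkins keystore; enter the keystore password when prompted.
    keytool -list -v -keystore jenkinsserver.jks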
3.6.4.3 Creating ATS Secret and Configuring Helm chart

The following are the steps to create ATS Secret and configure the Helm chart:

Note:

The user can decide whether to use the generated CA signed certificate or self-signed certificate.
  1. Run the following command to create secret:
    kubectl create secret generic ocats-sepp-secret --from-file=jenkinsserver.jks --from-file=ssl_rsa_certificate.crt --from-file=rsa_private_key_pkcs1.key --from-file=caroot.cer --from-literal=jks_password=${JKS_PASS} -n <deployment_namespace>

    Note:

    Use the same password for JKS_PASS that was used during the creation of the jenkinsserver.jks file.
  2. Run the following command to verify the secret:
    $ kubectl describe secret ocats-sepp-secret -n testns
    Output:
    
    Name:         ocats-sepp-secret
    Namespace:    testns
    Labels:       <none>
    Annotations:  <none>
      
    Type:  Opaque
      
    Data
    ====
    caroot.cer:                        1147 bytes
    ssl_rsa_certificate.crt:           1424 bytes
    jenkinsserver.jks:                 2357 bytes
    rsa_private_key_pkcs1.key:         1675 bytes
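To spot-check an individual entry of the secret, for example that the stored JKS password decodes to the expected value, the following one-liner can be used (a sketch; testns is the namespace from the example above):

    # Decode the jks_password entry of the secret.
    kubectl get secret ocats-sepp-secret -n testns -o jsonpath='{.data.jks_password}' | base64 -d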

Changes to the Helm charts:

The following changes must be updated in the ocats_ocsepp_values_<version>.yaml file:

  • The Helm parameter atsGuiTLSEnabled must be set to true for ATS to get the certificates and support HTTPS for the GUI. If the user does not want to open the ATS GUI in HTTPS mode, set the atsGuiTLSEnabled flag to false.
    atsGuiTLSEnabled: false
  • The Helm parameter atsCommunicationTLSEnabled must be set to true for the necessary context variables to be created, which can later be used to communicate with other services over HTTPS.
    • For non-ASM deployments, the atsCommunicationTLSEnabled flag should be set to true.
    • For ASM deployments, the atsCommunicationTLSEnabled flag should be set to false.
    atsCommunicationTLSEnabled: true # If set to true, ATS will get the necessary variables to communicate with the SUT, Stub, or other NFs with TLS enabled. It is not required in an ASM environment.
  • The certificates section of the ocats and stubserver section of the ats custom values file must be updated as follows:
    
    certificates:
       cert_secret_name: "ocats-sepp-secret"
       ca_cert: "caroot.cer"
       client_cert: "ssl_rsa_certificate.crt"
       private_key: "rsa_private_key_pkcs1.key"

       # This parameter is needed when atsGuiTLSEnabled is set to true. This file is necessary for the ATS GUI to be opened with the secured TLS protocol. The caroot file used during the creation of the JKS file must be passed for Jenkins/ATS API communication.
       jks_file: "jenkinsserver.jks"
     
       
    To access the ATS GUI, add the caroot certificate to the browser: the caroot.cer certificate created earlier must be added to the truststore, which in this case is the browser.
The following are the steps to add the caroot certificate in the browser, for either Chrome or Mozilla Firefox:

Note:

Future versions of these browsers may involve different menu options. For more information on importing root certificate, see the browser documentation to add a self-signed certificate to the browser as a trusted certificate.
For Chrome:
  1. In the Chrome browser, navigate to the settings and search for certificates.
  2. Click the security option that appears next to the search.
  3. Click the Manage Device Certificate option. The Keychain Access window opens.
  4. Search the certificate tab, and drag and drop the downloaded caroot certificate.
  5. Find the uploaded certificate in the list, usually listed under a temporary name.
  6. Double-click the certificate and expand the Trust option.
  7. In the When using this certificate option, set it to "Always Trust".
  8. Close the window and enter the password if prompted.
  9. Save and restart the browser.

For Mozilla Firefox:
  1. In the Mozilla Firefox browser, navigate to the settings and search for certificates.
  2. Click View Certificates next to the search. The Certificate Manager window opens.
  3. Navigate to the Authorities section, click the Import button, and upload the caroot certificate.
  4. Select the Trust options in the pop-up window and click OK.
  5. Save and restart the browser.

3.6.5 Creating ATS Health Check Secret

To enable the ATS health check pipeline, the following configurations need to be updated in the ocats_ocsepp_values_<version>.yaml file:

Non OCI Environment

  1. The following parameters must be updated in base64-encoded format: occnehostip, occnehostusername, occnehostpassword, and envtype.
  2. On installing ATS, the health check secret is created and the health check pipeline is shown in the ATS GUI. If the healthcheck parameter is set to false, the health check pipeline is not visible in the ATS GUI.
ats-ocats:
  atsFeatures:
    healthcheck: true

  sshDetails:
    secretname: "healthchecksecret"
    occnehostip: "" # $(echo -n '10.75.217.42' | base64), where the OCCNE host IP must be provided
    occnehostusername: "" # $(echo -n 'cloud-user' | base64), where the OCCNE host username must be provided
    occnehostpassword: "" # $(echo -n '****' | base64), where the host password must be provided
    envtype: "" # $(echo -n 'OCCNE' | base64), where the OCCNE keyword must be provided
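The base64-encoded values expected above can be produced as follows; the IP address, username, and password shown are placeholders:

    echo -n '10.75.217.42' | base64     # occnehostip
    echo -n 'cloud-user' | base64       # occnehostusername
    echo -n '<host-password>' | base64  # occnehostpassword (replace with the actual password)
    echo -n 'OCCNE' | base64            # envtype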

OCI Environment

For key-based health check support in OCI, refer to the ATS Health Check section under ATS Framework Features.
  1. The following parameters must be updated in base64-encoded format in the ocats_ocsepp_values_<version>.yaml file:
ats-ocats:
  atsFeatures:
    healthcheck: true
    envtype: "" # $(echo -n 'OCCNE' | base64), where the OCCNE keyword must be provided

  sshDetails:
    ociHealthCheck:
      passwordAuthenticationEnabled: false
      bastion:
        ip: "" # $(echo -n '10.75.217.42' | base64), where the bastion host IP must be provided
        username: "" # $(echo -n 'cloud-user' | base64), where the bastion host username must be provided
        password: ""
      operatorInstance:
        ip: "" # $(echo -n '10.75.217.42' | base64), where the operator instance IP must be provided
        username: "" # $(echo -n 'cloud-user' | base64), where the operator instance username must be provided
        password: ""

3.6.6 Configuring ATS

This section describes how to configure ATS for SEPP.

3.6.6.1 Enabling Aspen Service Mesh
To enable Aspen Service Mesh (ASM) for ATS, complete the following procedure:

Note:

By default, this feature is disabled.
  1. If ASM is not enabled at the global level for the namespace, run the following command before deploying ATS (a verification sketch is provided after this procedure):
    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace seppsvc istio-injection=enabled
  2. Add the following annotations in the BDD client section in the ocats-sepp-custom-values.yaml file to support ASM:

    To enable or disable the ASM, update the ASMEnabled flag to true or false.

    ASMEnabled: false
    asm:
      configMgrPort: 9090 # config manager service port
      pn32fPort: 9090 # pn32f service port
      stubServerPort: 8080 # stub server service port
      plmnIgwPort: 80 # plmn ingress gateway service port
      n32IgwPort: 80 # n32 ingress gateway service port
  3. Add the following value in the stub-server section in the ocats-sepp-custom-values.yaml file to support ASM:

    To enable or disable the ASM, update the ASMEnabled flag to true or false.

    ASMEnabled: false
  4. For ASM deployment, set the atsCommunicationTLSEnabled flag to false as given below:
    atsCommunicationTLSEnabled: false # If set to true, ATS will get the necessary variables to communicate with the SUT, Stub, or other NFs with TLS enabled. It is not required in an ASM environment.
  5. (Optional) The user can configure the resources assigned to the Aspen Mesh (istio-proxy) sidecars in the ocats_ocsepp_values_<version>.yaml file as follows:
    
    asm:
      istioResources:
        limits:
          cpu: 2
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi

    Note:

    It is recommended to use the default values of the resources.
  6. In the ocats_ocsepp_values_<version>.yaml file, set the expose_tls_service parameter in the stubserver section to false in ASM mode to deploy the stub server in HTTP mode.
    
    stubserver:
      service:
        expose_tls_service: false
  7. Update the following parameters in the ocats_ocsepp_values_<version>.yaml file to empty values, as secret creation is not required when operating in ASM mode.
    
    ocats:
      certificates:
        cert_secret_name: ""
        ca_cert: ""
        client_cert: ""
        private_key: ""
        jks_file: "" # This parameter is needed when atsGuiTLSEnabled is set to true. This file is necessary for the ATS GUI to be opened with the secured TLS protocol.

    In the stubserver section:
    stubserver:
      env:
        cert_secret_name: ""
        ca_cert: ""
        client_cert: ""
        private_key: ""

Note:

  • If the SEPP is deployed with ASM enabled and the user disables ASM at the global level, then the setup must be redeployed to work without ASM.
  • Mediation is supported in ASM from release 25.1.1xx onwards.
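To confirm that the istio-injection label from step 1 of the procedure above took effect and that sidecars are injected, a quick check is sketched below, assuming the seppsvc namespace from the example:

    # Verify the namespace label.
    kubectl get namespace seppsvc --show-labels

    # After (re)deploying ATS, each pod should list an additional istio-proxy container.
    kubectl get pods -n seppsvc -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'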
3.6.6.2 ATS API

The application programming interface (API) feature provides APIs to perform routine ATS tasks, such as starting, monitoring, and stopping the ATS suite.

By default, this feature is enabled in ATS framework.

For more details about the ATS API feature, refer to ATS API feature section.

3.6.6.3 Enabling Static Port
To enable static port:

Note:

ATS supports static port. By default, this feature is disabled.
  • In the ocats_ocsepp_values_<version>.yaml file, under the service section, set the staticNodePortEnabled parameter value to 'true' and the staticNodePort parameter value to a valid nodePort.
    
    ocats:
      service:
          customExtension:
            labels: {}
            annotations: {}
          type: LoadBalancer
          ports:
            https:
              port: "8443"
              staticNodePortEnabled: false
              staticNodePort: ""
            http:
              port: "8080"
              staticNodePortEnabled: false
              staticNodePort: ""
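The same values can also be supplied at install time instead of editing the file. The following is a sketch that assumes the value paths shown in the snippet above and a hypothetical nodePort of 30443, which must fall within the cluster's NodePort range:

    helm install ocats ocats-sepp-<version>.tgz --namespace ocsepp \
      -f ocats-sepp-custom-values.yaml \
      --set ocats.service.ports.https.staticNodePortEnabled=true \
      --set ocats.service.ports.https.staticNodePort="30443"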
3.6.6.4 Enabling Roaming Hub Mode

SEPP ATS supports two deployment modes:

  • SEPP
  • Roaming Hub or Hosted SEPP Mode

The RHenabled flag selects the deployment mode.

#Flag to enable Roaming Hub mode

RHenabled: True
3.6.6.5 Enabling Hosted SEPP Mode

The Hosted SEPP Mode can be enabled as follows:

#Flag to enable Hosted SEPP mode

RHenabled: True

Customizing Error Code Variable in Hosted SEPP Mode

For handling failure scenarios in Hosted SEPP mode, the following customized error code variable has been introduced:

#Customized error code variable for Hosted SEPP

HSErrCode: "400"
3.6.6.6 Configuring Egress Rate Limiting Feature

If the Egress Rate Limiting feature is enabled in the SEPP deployment, the EgressRateLimiterFlag parameter, introduced in the ocats-sepp-custom-values.yaml file, must be set to true to run the Egress Rate Limiter test cases in ATS. If the Egress Rate Limiting feature is disabled in the SEPP deployment, ensure that the EgressRateLimiterFlag parameter is set to false.

EgressRateLimiterFlag: true/false

For more information about the feature, see the "Rate Limiting for Egress Roaming Signaling per PLMN" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide and the "Configuration Parameters" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Installation, Upgrade, and Fault Recovery Guide.

3.6.6.7 Configuring Ingress Rate Limiting Feature

If the Ingress Rate Limiting feature is enabled in the SEPP deployment, the IngressRateLimiterFlag parameter, introduced in the ocats_ocsepp_values_<version>.yaml file, must be set to true to run the Ingress Rate Limiter test cases in ATS. If the Ingress Rate Limiting feature is disabled in the SEPP deployment, ensure that the IngressRateLimiterFlag parameter is set to false.

Flag to enable or disable Ingress Rate Limiter:

  IngressRateLimiterFlag: true/false

For more information about the feature, see the "Rate Limiting for Ingress Roaming Signaling per Remote SEPP Set" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy User Guide and the "Configuration Parameters" section in Oracle Communications Cloud Native Core, Security Edge Protection Proxy Installation, Upgrade, and Fault Recovery Guide.

3.6.6.8 Configuring Single Service Account

Note:

If the user does not want to configure a single service account, skip the following steps.
  1. To configure a single service account across all microservices, update the following parameter in the ocats_ocsepp_values_<version>.yaml file with the service account name that was created while deploying SEPP.
    
    ocats:
        serviceAccountName: "" # Update service account name
  2. In the stubserver section, update the following:

    Note:

    Here, update the same service account name that was used while deploying SEPP.

    
    stubserver:
      serviceAccount:
        create: false
        name: ""
3.6.6.9 Configuring the Cache Refresh Timeout Value

SEPP supports refresh-ahead cache. To run the ATS cases, the cacheRefreshTimeout value must be set to 1000 (ms) so that the cache is updated automatically after every test case when the ATS suite is triggered.

To set the cache refresh timeout value:

In the ocsepp-custom-values.yaml file, under cn32f-svc and pn32f-svc microservices, set the cacheRefreshTimeout parameter value to 1000 (ms).

configs:
 cacheRefreshTimeout: 1000 #(ms)
 cacheRefreshInitTimeout: 50000 #(ms)

Note:

If ATS is not configured, the cacheRefreshTimeout value must be 30000 (ms).

3.6.6.10 Configuring the Topology Cache Refresh Timeout Value

SEPP supports refresh-ahead cache. To run the ATS cases, the topologyhidingCacheRefreshTimeout value must be set to 1000 (ms) so that the cache is updated automatically after every test case when the ATS suite is triggered.

In the ocsepp-custom-values.yaml file, under the cn32f-svc and pn32f-svc microservices, set the topologyhidingCacheRefreshTimeout parameter value to 1000 (ms).


topologyHiding:
  timerConfig:
     topologyhidingCacheRefreshTimeout: 1000 # (ms) Set to 1000 for ATS; the default is 30000
     topologyhidingCacheRefreshInitTimeout: 50000
     topologyhidingHistoryUpdateTimeout: 30000
     topologyhidingHistoryRefreshSeconds: 60
  config:
     topologyHidingStateCheck: true

3.6.6.11 Configuring the Security Counter Measure Cache Refresh Timeout

SEPP supports refresh-ahead cache. To run the ATS cases, you must set the value of the securityCacheRefreshTimeout parameter to 1000 (ms) so that the cache is automatically updated after every test case when the ATS suite is triggered.

In the ocsepp-custom-values.yaml file, under cn32f-svc and pn32f-svc microservices, set the securityCacheRefreshTimeout parameter value to 1000 (ms).


configs:  
 securityCacheRefreshTimeout: 1000 #(ms)  
 securityCacheRefreshInitTimeout: 50000 #(ms)

3.6.6.12 Configuring n32cHandshakePlmnIdListValidationEnabled

SEPP supports validation of the PLMN ID List in the n32c capability exchange message, which can be turned off. To run the ATS cases, the n32cHandshakePlmnIdListValidationEnabled value must be set to true so that the test cases validating the PLMN ID List in the n32c capability exchange message run successfully when the ATS suite is triggered.

To set the n32cHandshakePlmnIdListValidationEnabled value, do the following:

In the ocsepp_custom_values_<version>.yaml file, under localProfile, set the n32cHandshakePlmnIdListValidationEnabled parameter value to true.


localProfile:
    name: "SEPP-3"
    plmnIdList: [{"mcc":"111","mnc":"100"},{"mcc":"111","mnc":"101"},{"mcc":"111","mnc":"102"},{"mcc":"111","mnc":"103"},{"mcc":"111","mnc":"104"},{"mcc":"111","mnc":"105"},{"mcc":"111","mnc":"106"},{"mcc":"111","mnc":"107"},{"mcc":"111","mnc":"108"},{"mcc":"111","mnc":"109"},{"mcc":"111","mnc":"110"},{"mcc":"111","mnc":"111"},{"mcc":"111","mnc":"112"},{"mcc":"111","mnc":"113"},{"mcc":"111","mnc":"114"},{"mcc":"111","mnc":"115"},{"mcc":"111","mnc":"116"},{"mcc":"111","mnc":"117"},{"mcc":"111","mnc":"118"},{"mcc":"111","mnc":"119"},{"mcc":"111","mnc":"120"},{"mcc":"111","mnc":"121"},{"mcc":"111","mnc":"122"},{"mcc":"111","mnc":"123"},{"mcc":"111","mnc":"124"},{"mcc":"111","mnc":"125"},{"mcc":"111","mnc":"126"},{"mcc":"111","mnc":"127"},{"mcc":"111","mnc":"128"},{"mcc":"111","mnc":"129"}]
    # Do not change this value, this will be always true
    sbiTargetApiRootSupported: true
    # Enable PLMN ID List Validation in Exchange Capability Request, Default set to true
    n32cHandshakePlmnIdListValidationEnabled: true
    # PLMN ID List Validation Type in Exchange Capability Request, can be SUBSET or STRICT only
    n32cHandshakePlmnIdListValidationType: "SUBSET"
Update the following PLMN ID list in the localProfile section for Roaming Hub mode to run the ATS test cases:
[{"mcc":"111","mnc":"200"},{"mcc":"111","mnc":"201"},{"mcc":"111","mnc":"202"},{"mcc":"111","mnc":"203"},{"mcc":"111","mnc":"204"},{"mcc":"111","mnc":"205"},{"mcc":"111","mnc":"206"},{"mcc":"111","mnc":"207"},{"mcc":"111","mnc":"208"},{"mcc":"111","mnc":"209"},{"mcc":"111","mnc":"210"},{"mcc":"111","mnc":"211"},{"mcc":"111","mnc":"212"},{"mcc":"111","mnc":"213"},{"mcc":"111","mnc":"214"},{"mcc":"111","mnc":"215"},{"mcc":"111","mnc":"216"},{"mcc":"111","mnc":"217"},{"mcc":"111","mnc":"218"},{"mcc":"111","mnc":"219"},{"mcc":"111","mnc":"220"},{"mcc":"111","mnc":"221"},{"mcc":"111","mnc":"222"},{"mcc":"111","mnc":"223"},{"mcc":"111","mnc":"224"},{"mcc":"111","mnc":"225"},{"mcc":"111","mnc":"226"},{"mcc":"111","mnc":"227"},{"mcc":"111","mnc":"228"},{"mcc":"111","mnc":"229"},{"mcc":"111","mnc":"230"},{"mcc":"111","mnc":"231"},{"mcc":"111","mnc":"232"},{"mcc":"111","mnc":"233"},{"mcc":"111","mnc":"234"},{"mcc":"111","mnc":"235"},{"mcc":"111","mnc":"236"},{"mcc":"111","mnc":"237"},{"mcc":"111","mnc":"238"},{"mcc":"111","mnc":"239"},{"mcc":"111","mnc":"240"},{"mcc":"111","mnc":"241"},{"mcc":"111","mnc":"242"},{"mcc":"111","mnc":"243"},{"mcc":"111","mnc":"244"},{"mcc":"111","mnc":"245"},{"mcc":"111","mnc":"246"},{"mcc":"111","mnc":"247"},{"mcc":"111","mnc":"248"},{"mcc":"111","mnc":"249"},{"mcc":"111","mnc":"250"},{"mcc":"111","mnc":"251"},{"mcc":"111","mnc":"252"},{"mcc":"111","mnc":"253"},{"mcc":"111","mnc":"254"},{"mcc":"111","mnc":"255"},{"mcc":"111","mnc":"256"},{"mcc":"111","mnc":"257"},{"mcc":"111","mnc":"258"},{"mcc":"111","mnc":"259"},{"mcc":"111","mnc":"260"},{"mcc":"111","mnc":"261"},{"mcc":"111","mnc":"262"},{"mcc":"111","mnc":"263"},{"mcc":"111","mnc":"264"},{"mcc":"111","mnc":"265"},{"mcc":"111","mnc":"266"},{"mcc":"111","mnc":"267"},{"mcc":"111","mnc":"268"},{"mcc":"111","mnc":"269"},{"mcc":"111","mnc":"270"},{"mcc":"111","mnc":"271"},{"mcc":"111","mnc":"272"},{"mcc":"111","mnc":"273"},{"mcc":"111","mnc":"274"},{"mcc":"111","mnc":"275"},{"mcc":"111","mnc":"276"},{"mcc":"111","mnc":"277"},{"mcc":"111","mnc":"278"},{"mcc":"111","mnc":"279"},{"mcc":"111","mnc":"280"},{"mcc":"111","mnc":"281"},{"mcc":"111","mnc":"282"},{"mcc":"111","mnc":"283"},{"mcc":"111","mnc":"284"},{"mcc":"111","mnc":"285"},{"mcc":"111","mnc":"286"},{"mcc":"111","mnc":"287"},{"mcc":"111","mnc":"288"},{"mcc":"111","mnc":"289"},{"mcc":"111","mnc":"290"},{"mcc":"111","mnc":"291"},{"mcc":"111","mnc":"292"},{"mcc":"111","mnc":"293"},{"mcc":"111","mnc":"294"},{"mcc":"111","mnc":"295"},{"mcc":"111","mnc":"296"},{"mcc":"111","mnc":"297"},{"mcc":"111","mnc":"298"},{"mcc":"111","mnc":"299"},{"mcc":"111","mnc":"300"},{"mcc":"111","mnc":"301"},{"mcc":"111","mnc":"302"},{"mcc":"111","mnc":"303"},{"mcc":"111","mnc":"304"},{"mcc":"111","mnc":"305"},{"mcc":"111","mnc":"306"},{"mcc":"111","mnc":"307"},{"mcc":"111","mnc":"308"},{"mcc":"111","mnc":"309"},{"mcc":"111","mnc":"310"},{"mcc":"111","mnc":"311"},{"mcc":"111","mnc":"312"},{"mcc":"111","mnc":"313"},{"mcc":"111","mnc":"314"},{"mcc":"111","mnc":"315"},{"mcc":"111","mnc":"316"},{"mcc":"111","mnc":"317"},{"mcc":"111","mnc":"318"},{"mcc":"111","mnc":"319"},{"mcc":"111","mnc":"320"},{"mcc":"111","mnc":"321"},{"mcc":"111","mnc":"322"},{"mcc":"111","mnc":"323"},{"mcc":"111","mnc":"324"},{"mcc":"111","mnc":"325"},{"mcc":"111","mnc":"326"},{"mcc":"111","mnc":"327"},{"mcc":"111","mnc":"328"},{"mcc":"111","mnc":"329"},{"mcc":"111","mnc":"330"},{"mcc":"111","mnc":"331"},{"mcc":"111","mnc":"332"},{"mcc":"111","mnc":"333"},{"mcc":"111","mnc":"334"},{"mcc":"111","mnc":"335"},{"mcc":"111","mnc"
:"336"},{"mcc":"111","mnc":"337"},{"mcc":"111","mnc":"338"},{"mcc":"111","mnc":"339"},{"mcc":"111","mnc":"340"},{"mcc":"111","mnc":"341"},{"mcc":"111","mnc":"342"},{"mcc":"111","mnc":"343"},{"mcc":"111","mnc":"344"},{"mcc":"111","mnc":"345"},{"mcc":"111","mnc":"346"},{"mcc":"111","mnc":"347"},{"mcc":"111","mnc":"348"},{"mcc":"111","mnc":"349"},{"mcc":"111","mnc":"350"},{"mcc":"111","mnc":"351"},{"mcc":"111","mnc":"352"},{"mcc":"111","mnc":"353"},{"mcc":"111","mnc":"354"},{"mcc":"111","mnc":"355"},{"mcc":"111","mnc":"356"},{"mcc":"111","mnc":"357"},{"mcc":"111","mnc":"358"},{"mcc":"111","mnc":"359"},{"mcc":"111","mnc":"360"},{"mcc":"111","mnc":"361"},{"mcc":"111","mnc":"362"},{"mcc":"111","mnc":"363"},{"mcc":"111","mnc":"364"},{"mcc":"111","mnc":"365"},{"mcc":"111","mnc":"366"},{"mcc":"111","mnc":"367"},{"mcc":"111","mnc":"368"},{"mcc":"111","mnc":"369"},{"mcc":"111","mnc":"370"},{"mcc":"111","mnc":"371"},{"mcc":"111","mnc":"372"},{"mcc":"111","mnc":"373"},{"mcc":"111","mnc":"374"},{"mcc":"111","mnc":"375"},{"mcc":"111","mnc":"376"},{"mcc":"111","mnc":"377"},{"mcc":"111","mnc":"378"},{"mcc":"111","mnc":"379"},{"mcc":"111","mnc":"380"},{"mcc":"111","mnc":"381"},{"mcc":"111","mnc":"382"},{"mcc":"111","mnc":"383"},{"mcc":"111","mnc":"384"},{"mcc":"111","mnc":"385"},{"mcc":"111","mnc":"386"},{"mcc":"111","mnc":"387"},{"mcc":"111","mnc":"388"},{"mcc":"111","mnc":"389"},{"mcc":"111","mnc":"390"},{"mcc":"111","mnc":"391"},{"mcc":"111","mnc":"392"},{"mcc":"111","mnc":"393"},{"mcc":"111","mnc":"394"},{"mcc":"111","mnc":"395"},{"mcc":"111","mnc":"396"},{"mcc":"111","mnc":"397"},{"mcc":"111","mnc":"398"},{"mcc":"111","mnc":"399"},{"mcc":"111","mnc":"400"},{"mcc":"111","mnc":"401"},{"mcc":"111","mnc":"402"},{"mcc":"111","mnc":"403"},{"mcc":"111","mnc":"404"},{"mcc":"111","mnc":"405"},{"mcc":"111","mnc":"406"},{"mcc":"111","mnc":"407"},{"mcc":"111","mnc":"408"},{"mcc":"111","mnc":"409"},{"mcc":"111","mnc":"410"},{"mcc":"111","mnc":"411"},{"mcc":"111","mnc":"412"},{"mcc":"111","mnc":"413"},{"mcc":"111","mnc":"414"},{"mcc":"111","mnc":"415"},{"mcc":"111","mnc":"416"},{"mcc":"111","mnc":"417"},{"mcc":"111","mnc":"418"},{"mcc":"111","mnc":"419"},{"mcc":"111","mnc":"420"},{"mcc":"111","mnc":"421"},{"mcc":"111","mnc":"422"},{"mcc":"111","mnc":"423"},{"mcc":"111","mnc":"424"},{"mcc":"111","mnc":"425"},{"mcc":"111","mnc":"426"},{"mcc":"111","mnc":"427"},{"mcc":"111","mnc":"428"},{"mcc":"111","mnc":"429"},{"mcc":"111","mnc":"430"},{"mcc":"111","mnc":"431"},{"mcc":"111","mnc":"432"},{"mcc":"111","mnc":"433"},{"mcc":"111","mnc":"434"},{"mcc":"111","mnc":"435"},{"mcc":"111","mnc":"436"},{"mcc":"111","mnc":"437"},{"mcc":"111","mnc":"438"},{"mcc":"111","mnc":"439"},{"mcc":"111","mnc":"440"},{"mcc":"111","mnc":"441"},{"mcc":"111","mnc":"442"},{"mcc":"111","mnc":"443"},{"mcc":"111","mnc":"444"},{"mcc":"111","mnc":"445"},{"mcc":"111","mnc":"446"},{"mcc":"111","mnc":"447"},{"mcc":"111","mnc":"448"},{"mcc":"111","mnc":"449"},{"mcc":"111","mnc":"450"},{"mcc":"111","mnc":"451"},{"mcc":"111","mnc":"452"},{"mcc":"111","mnc":"453"},{"mcc":"111","mnc":"454"},{"mcc":"111","mnc":"455"},{"mcc":"111","mnc":"456"},{"mcc":"111","mnc":"457"},{"mcc":"111","mnc":"458"},{"mcc":"111","mnc":"459"},{"mcc":"111","mnc":"460"},{"mcc":"111","mnc":"461"},{"mcc":"111","mnc":"462"},{"mcc":"111","mnc":"463"},{"mcc":"111","mnc":"464"},{"mcc":"111","mnc":"465"},{"mcc":"111","mnc":"466"},{"mcc":"111","mnc":"467"},{"mcc":"111","mnc":"468"},{"mcc":"111","mnc":"469"},{"mcc":"111","mnc":"470"},{"mcc":"111","mnc":"471"},{"mcc":"111","mnc":"472"},{"mcc":"111
","mnc":"473"},{"mcc":"111","mnc":"474"},{"mcc":"111","mnc":"475"},{"mcc":"111","mnc":"476"},{"mcc":"111","mnc":"477"},{"mcc":"111","mnc":"478"},{"mcc":"111","mnc":"479"},{"mcc":"111","mnc":"480"},{"mcc":"111","mnc":"481"},{"mcc":"111","mnc":"482"},{"mcc":"111","mnc":"483"},{"mcc":"111","mnc":"484"},{"mcc":"111","mnc":"485"},{"mcc":"111","mnc":"486"},{"mcc":"111","mnc":"487"},{"mcc":"111","mnc":"488"},{"mcc":"111","mnc":"489"},{"mcc":"111","mnc":"490"},{"mcc":"111","mnc":"491"},{"mcc":"111","mnc":"492"},{"mcc":"111","mnc":"493"},{"mcc":"111","mnc":"494"},{"mcc":"111","mnc":"495"},{"mcc":"111","mnc":"496"},{"mcc":"111","mnc":"497"},{"mcc":"111","mnc":"498"},{"mcc":"111","mnc":"499"},{"mcc":"111","mnc":"500"},{"mcc":"111","mnc":"501"},{"mcc":"111","mnc":"502"},{"mcc":"111","mnc":"503"},{"mcc":"111","mnc":"504"},{"mcc":"111","mnc":"505"},{"mcc":"111","mnc":"506"},{"mcc":"111","mnc":"507"},{"mcc":"111","mnc":"508"},{"mcc":"111","mnc":"509"},{"mcc":"111","mnc":"510"},{"mcc":"111","mnc":"511"},{"mcc":"111","mnc":"512"},{"mcc":"111","mnc":"513"},{"mcc":"111","mnc":"514"},{"mcc":"111","mnc":"515"},{"mcc":"111","mnc":"516"},{"mcc":"111","mnc":"517"},{"mcc":"111","mnc":"518"},{"mcc":"111","mnc":"519"},{"mcc":"111","mnc":"520"},{"mcc":"111","mnc":"521"},{"mcc":"111","mnc":"522"},{"mcc":"111","mnc":"523"},{"mcc":"111","mnc":"524"},{"mcc":"111","mnc":"525"},{"mcc":"111","mnc":"526"},{"mcc":"111","mnc":"527"},{"mcc":"111","mnc":"528"},{"mcc":"111","mnc":"529"},{"mcc":"111","mnc":"530"},{"mcc":"111","mnc":"531"},{"mcc":"111","mnc":"532"},{"mcc":"111","mnc":"533"},{"mcc":"111","mnc":"534"},{"mcc":"111","mnc":"535"},{"mcc":"111","mnc":"536"},{"mcc":"111","mnc":"537"},{"mcc":"111","mnc":"538"},{"mcc":"111","mnc":"539"},{"mcc":"111","mnc":"540"},{"mcc":"111","mnc":"541"},{"mcc":"111","mnc":"542"},{"mcc":"111","mnc":"543"},{"mcc":"111","mnc":"544"},{"mcc":"111","mnc":"545"},{"mcc":"111","mnc":"546"},{"mcc":"111","mnc":"547"},{"mcc":"111","mnc":"548"},{"mcc":"111","mnc":"549"},{"mcc":"111","mnc":"550"},{"mcc":"111","mnc":"551"},{"mcc":"111","mnc":"552"},{"mcc":"111","mnc":"553"},{"mcc":"111","mnc":"554"},{"mcc":"111","mnc":"555"},{"mcc":"111","mnc":"556"},{"mcc":"111","mnc":"557"},{"mcc":"111","mnc":"558"},{"mcc":"111","mnc":"559"},{"mcc":"111","mnc":"560"},{"mcc":"111","mnc":"561"},{"mcc":"111","mnc":"562"},{"mcc":"111","mnc":"563"},{"mcc":"111","mnc":"564"},{"mcc":"111","mnc":"565"},{"mcc":"111","mnc":"566"},{"mcc":"111","mnc":"567"},{"mcc":"111","mnc":"568"},{"mcc":"111","mnc":"569"},{"mcc":"111","mnc":"570"},{"mcc":"111","mnc":"571"},{"mcc":"111","mnc":"572"},{"mcc":"111","mnc":"573"},{"mcc":"111","mnc":"574"},{"mcc":"111","mnc":"575"},{"mcc":"111","mnc":"576"},{"mcc":"111","mnc":"577"},{"mcc":"111","mnc":"578"},{"mcc":"111","mnc":"579"},{"mcc":"111","mnc":"580"},{"mcc":"111","mnc":"581"},{"mcc":"111","mnc":"582"},{"mcc":"111","mnc":"583"},{"mcc":"111","mnc":"584"},{"mcc":"111","mnc":"585"},{"mcc":"111","mnc":"586"},{"mcc":"111","mnc":"587"},{"mcc":"111","mnc":"588"},{"mcc":"111","mnc":"589"},{"mcc":"111","mnc":"590"},{"mcc":"111","mnc":"591"},{"mcc":"111","mnc":"592"},{"mcc":"111","mnc":"593"},{"mcc":"111","mnc":"594"},{"mcc":"111","mnc":"595"},{"mcc":"111","mnc":"596"},{"mcc":"111","mnc":"597"},{"mcc":"111","mnc":"598"},{"mcc":"111","mnc":"599"}]
3.6.6.13 Configuring Request Timeout

SEPP supports the requestTimeout parameter to avoid request timeouts of services. The requestTimeout is increased in the given services to avoid any request timeout that may occur while the stub server processes the response.

  • Set requestTimeout to 5000 in the cn32f-svc, pn32f-svc, plmn-egress-gateway, n32-egress-gateway, n32-ingress-gateway, and plmn-ingress-gateway sections of the ocsepp_custom_values_<version>.yaml file.
requestTimeout: 5000 #(ms)

Note:

  • If the ATS cases are failing due to request timeouts, increase the request timeout of the particular service that is causing the issue, for example, set the request timeout to "5000".
  • If ATS is not configured, requestTimeout must be set to the following values:

Table 3-24 requestTimeout

Service Request Timeout (ms)
cn32f-svc 2000
pn32f-svc 1100
plmn-egress-gateway 1000
n32-egress-gateway 1500
n32-ingress-gateway 700

Updating n32-ingress-gateway config map

In the ocsepp_custom_values.yaml file, under the n32-ingress-gateway section, set the requestTimeout parameter value to 5000 (ms) to update the config map.


   routesConfig:
    - id: n32f
      # The field below provides an option to enable or disable route-level xfccHeaderValidation; it overrides the global xfccHeaderValidation.enabled configuration.
      metadata:
        requestTimeout: 5000 # Set to 5000 (ms) for ATS

Updating plmn-ingress-gateway config map

In the ocsepp_custom_values.yaml file, under the plmn-ingress-gateway section, set the requestTimeout parameter value to 5000 (ms) to update the config map.


   routesConfig:
    - id: cn32f
      # The field below provides an option to enable or disable route-level xfccHeaderValidation; it overrides the global xfccHeaderValidation.enabled configuration.
      metadata:
        requestTimeout: 5000 # Set to 5000 (ms) for ATS
3.6.6.14 Idle Timeout

The idle timeout must be increased in the given services to prevent any pending request transaction from timing out due to its idle state in the request queue.

In the ocsepp_custom_values_<version>.yaml file, under the n32-ingress-gateway section, set the jettyIdleTimeout parameter value to 5000 (ms).


   #Jetty Idle Timeout Settings (ms)
  jettyIdleTimeout: 5000 #(ms)

Note:

  • If ATS is not configured, the jettyIdleTimeout value should be 3000 (ms).
  • If the ATS cases are failing due to request timeouts, increase the idle timeout of the particular service that is failing, for example, set the idle timeout to "5000".
3.6.6.15 Configuring Polling Interval in N32 Egress Gateway
The pollingInterval parameter overrides the polling interval in n32-egress-gateway. As a result of continuous configuration of Remote SEPPs while running ATS, the peer, peer set, and route configurations are constantly updated in common_configuration for n32-egress-gateway. To minimize route-related failures, the cache of updated peers, peer sets, and routes must be refreshed in the n32-egress-gateway. For this, the polling interval must be updated to 2000 ms in the ocsepp_custom_values_<version>.yaml file.

n32-egress-gateway: 
   commonCfgServer:   
      pollingInterval: 2000
3.6.6.16 EvictSanHeaderCacheDelay

The SEPP supports evictSanHeaderCacheDelay to update the cache on the PN32F service with the updated values of Remote SEPP and Remote SEPP set associated with the SAN header.

The user has to set the evictSanHeaderCacheDelay parameter to 100 in the pn32f-svc microservice section of the custom values file.

configs:
    evictSanHeaderCacheDelay: 100 #(ms)

Note:

If ATS is not configured, evictSanHeaderCacheDelay must be set to 50000 in the pn32f-svc section of the ocsepp_custom_values_<version>.yaml file.
3.6.6.17 Configuring the Reroute Attempts for Egress Gateway

SEPP allows configuring the number of reroute attempts for alternate routing. The attempts parameter can be configured based on the requirement.

Under the config manager section in the ocsepp_custom_values_<version>.yaml file, configure the attempts parameter before running the ATS test execution. The default value of attempts is 0; set it to 3 before running the ATS test execution.

alternateRoute:
    sbiReRoute:
      sbiRoutingErrorActionSets: [{"id": "action_0", "action": "reroute", "attempts": 3, "blacklist": {"enabled": false, "duration": 60000}}]
3.6.6.18 Configuring Alternate Routing based on the DNS SRV Record for Home Network Functions

The Alternate Routing based on the DNS SRV Record for Home Network Functions feature can be enabled as follows:

Update the following parameters in ocsepp_custom_values.yaml file to configure the DNS SRV feature:

At Global Section:
global:
  alternateRouteServiceEnable: true
At Alternate route section:
alternate-route: 
  global:
     alternateRouteServiceEnable: true
Update the Target host:
alternate-route:   #Static virtual FQDN Config
   staticVirtualFqdns:       
     - name: https://sepp.ats.test.routing.com
       alternateFqdns:
       #Below Target FQDNs needs to be updated for ATS DNS SRV scenarios
       - target: <ats-release-name>-stubserver
         port: 8443
         priority: 100
         weight: 90
       - target: <ats-release-name>-stubserver-2
         port: 8443
         priority: 100
         weight: 10
       - target: <ats-release-name>-stubserver-3
         port: 8443
         priority: 1
         weight: 90
Here, update the target parameter with the user-specific ATS release name.
Example: If the user defines the <ats-release-name> as sepp-ats-rel and <stub-server-name> as stubserver, then update the target as sepp-ats-rel-stubserver.

Note:

For the DNS SRV and SOR features, the plmn-egress-gateway must be deployed in REST mode. In the ocsepp_custom_values.yaml file, update:
plmn-egress-gateway:
  routeConfigMode: REST

Note:

In the DNS SRV feature, rerouting to the next peer is decided on the basis of error codes or exceptions. If an exception arises that is not present in the DNSSRVCriteriaSet.json file (which may cause scenario failures), the user must add the exception to the exceptions list at the following paths (one way of editing the files is sketched after the snippets below):
In the file /var/lib/jenkins/ocsepp_tests/data/DNSSRVCriteriaSet.json, update

"exceptions": [ 
      "java.net.SocketException",
      "java.nio.channels.ClosedChannelException"
    ]
In the file /var/lib/jenkins/ocsepp_tests/cust_data/DNSSRVCriteriaSet.json, update

"exceptions": [ 
      "java.net.SocketException",
      "java.nio.channels.ClosedChannelException"
    ]
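Because these JSON files reside inside the running ATS pod, one way to apply the change is sketched below; the pod name is a placeholder, and the availability of vi in the container is an assumption:

    # Open the criteria file inside the ATS pod and add the missing exception classes.
    kubectl exec -it <ats-pod-name> -n <deployment_namespace> -- \
      vi /var/lib/jenkins/ocsepp_tests/data/DNSSRVCriteriaSet.json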
3.6.6.19 Configuring Load Sharing among Multiple Remote SEPP Nodes

The load sharing among multiple Remote SEPP nodes feature can be configured as follows:
  1. Enable the following parameters in ocsepp_custom_values.yaml file to configure the load sharing feature:
    1. Enable the following flag in the alternate-route section:
      alternate-route: 
        global:
           alternateRouteServiceEnable: true
    2. Replace the release name of the Target host:

       alternate-route:   #Static virtual FQDN Config
         staticVirtualFqdns:       
           - name: http://sepp.ats.loadsharing.com
             alternateFqdns:
             #Below Target FQDNs needs to be updated for ATS DNS SRV scenarios
             - target: <ats-release-name>-stubserver
               port: 8443
               priority: 100
               weight: 90
             - target: <ats-release-name>-stubserver-2
               port: 8443
               priority: 100
               weight: 10
             - target: <ats-release-name>-stubserver-3
               port: 8443
               priority: 1
               weight: 90

    Note:

    If the user has chosen the ATS release name as sepp-ats-rel, modify target: <release-name>-stubserver with the actual release name of the ATS deployment. Example: sepp-ats-rel-stubserver.

  2. Enable the following parameter in the NrfClient global parameters section:
      alternateRouteServiceEnable: true

3.6.6.20 Configuring Dual Stack
Using the dual stack mechanism, applications or NFs can establish connections with pods and services in a Kubernetes cluster using IPv4 or IPv6 or both simultaneously. Dual stack provides:
  • a coexistence strategy that allows hosts to reach IPv4 and IPv6 simultaneously.
  • IPv4 and IPv6 allocation to the Kubernetes clusters during cluster creation. This allocation applies to all Kubernetes resources unless explicitly specified during cluster creation.

    To support dual stack functionality in the ATS Helm charts, the following updates are required in the ocats_ocsepp_values_<version>.yaml file:
    global:
      #Deployment mode for ats and stub in dual stack support. Possible values : IPv4, IPv6, IPv4_IPv6, IPv6_IPv4, ClusterPreferred
      deploymentMode: &deploymentMode IPv4
    
    
The following are the possible values based on the cluster type:

Single Stack cluster: IPv4, IPv6
Dual Stack cluster: IPv4, IPv6, IPv4_IPv6, IPv6_IPv4, ClusterPreferred
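As an illustration, the mode could also be overridden at install time. The sketch below assumes a dual stack cluster and the global.deploymentMode value path shown above; note that the values file defines deploymentMode as a YAML anchor (&deploymentMode), so aliases that reference the anchor may not pick up a --set override, and editing the file is the safer route:

    helm install ocats ocats-sepp-<version>.tgz --namespace ocsepp \
      -f ocats_ocsepp_values_<version>.yaml \
      --set global.deploymentMode=IPv4_IPv6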
3.6.6.21 Enabling Persistent Volume Storage

ATS supports persistent storage to retain ATS historical build execution data, test cases, and one-time environment variable configurations.

Note:

By default, this feature is disabled.

To enable persistent storage:

  1. Create a PVC and associate it with the ATS pod (a PVC creation sketch is provided at the end of this section).
  2. Set the PVEnabled flag to true.
  3. Set PVClaimName to the PVC that is created for ATS.
    
    ocats:
      PVEnabled: true
      PVClaimName: "sepp-pvc"
      PVStorageClassName: "standard"
      PVStorage: "1Gi"
      RetainPVC: false
      

For more details on Persistent Volume Storage, refer to the Persistent Volume for 5G ATS section.
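A minimal PVC sketch for step 1 is given below, assuming the sepp-pvc name, the standard storage class, and the 1Gi size from the snippet above; the ReadWriteOnce access mode is an assumption:

kubectl apply -n <deployment_namespace> -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sepp-pvc
spec:
  accessModes:
    - ReadWriteOnce   # assumption; align with the cluster's storage capabilities
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF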

3.6.6.22 ATS Tagging Support

The ATS tagging support feature assists in running the feature files after filtering features and scenarios based on tags. Instead of manually navigating through several feature files, the user can save time by using this feature.

For more details about tagging support feature, see ATS Tagging Support section.

By default, this feature is enabled in the ATS framework. To enable or disable the feature explicitly, set the executionWithTagging flag to true or false in the ocats_ocsepp_values_<version>.yaml file.


ocats:
  atsFeatures:
    executionWithTagging: true

Feature Level Tags in SEPP Mode

Table 3-25 Feature Level Tags in SEPP Mode

Sl.No. Tag Selected Feature Files
1 MEDIATION All the mediation feature files will be selected.
2 MULTIPART All multipart feature files will be selected.
3 ORIGINATING_NETWORK_ID All originating network id feature files will be selected.
4 ERL All Egress rate limiting feature files will be selected.
5 TOPOLOGY All topology feature files will be selected.
6 PREVIOUS_LOCATION_CHECK All previous location check feature files will be selected.
7 MESSAGE_VALIDATION All message validation feature files will be selected.
8 PROACTIVE_STATUS_UPDATE All proactive status update feature files will be selected.
9 ALTERNATE_ROUTING All alternate routing feature files will be selected.
10 SOR All SOR feature files will be selected.
11 Handshake All the feature files which are written for testing handshake functionality will be selected.
12 PORT_SEGREGATION All port segregation feature files will be selected.
13 DNS_SRV All DNS SRV files will be selected.
14 NETWORK_ID_VALIDATION All network id validation feature files will be selected.
15 SERVICE_API_VALIDATION All service api validation feature files will be selected.
16 IRL All Ingress rate limiting feature files will be selected.
17 API_PREFIX All API Prefix feature files will be selected.
18 SI All system integration feature files will be selected.
19 HEADER_VALIDATION All header validation feature files will be selected.
20 TIME_LOCATION_CHECK All Time Location check feature files will be selected.
21 VERBOSITY_VALIDATION All the feature files which will have verbosity validation scenarios will be selected.
22 ASSERTED_PLMN_ID_METRIC_VALIDATION SEPP dashboard support for detecting vulnerable messages.

Feature Level Tags in Roaming Hub Mode

Table 3-26 Feature Level Tags in Roaming Hub Mode

Sl.No. Tag Selected Feature Files
1 MEDIATION All the mediation feature files will be selected.
2 TOPOLOGY All topology feature files will be selected.
3 MESSAGE_VALIDATION All message validation feature files will be selected.
4 SI All system integration feature files will be selected.
5 SERVICE_API_VALIDATION All service API validation feature files will be selected.
6 NETWORK_ID_VALIDATION All network id validation feature files will be selected.
7 Handshake All the feature files which are written for testing handshake functionality will be selected.
8 DNS_SRV All DNS SRV feature files will be selected.
9 VERBOSITY_VALIDATION All the feature files which will have verbosity validation scenarios will be selected.
10 ERL All egress rate limiting feature files will be selected.
11 PROACTIVE_STATUS_UPDATE All proactive status update feature files will be selected.
12 ALTERNATE_ROUTING All alternate routing feature files will be selected.
13 HOSTEDSEPP_VALIDATION All the feature files will be selected which will have Hosted SEPP validation scenarios.
14 PORT_SEGREGATION All Port Segregation feature files will be selected.

Scenario Level Tags in SEPP and Roaming Hub Mode

Table 3-27 Scenario Level Tags in SEPP and Roaming Hub Mode

Sl.No. Tag Selected Scenarios
1 sanity All the scenarios will be selected which are part of sanity testing.

3.6.7 Deploying ATS in Kubernetes Cluster

Note:

It is important to ensure that all three components (ATS, Stub, and SEPP) are in the same namespace.

If the namespace does not exist, run the following command to create it:

kubectl create namespace ocsepp

Deploying ATS:

helm install <release_name> ocats-sepp-<version>.tgz --namespace <namespace_name> -f <values-yaml-file>
Example:

helm install ocats ocats-sepp-<version>.tgz --namespace ocsepp -f ocats-sepp-custom-values.yaml

3.6.8 Verifying ATS Deployment

Run the following command to verify ATS deployment:
helm status <release_name> 
Checking Pod Deployment:

kubectl get pod -n seppsvc
Checking Service Deployment:

kubectl get service -n seppsvc

Figure 3-20 Checking Pod and Service Deployment


Note:

bddclient name is now updated to ocats.
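If the LoadBalancer IP is not yet reachable, the ATS GUI can be accessed temporarily through a port-forward; a sketch, assuming the <release_name>-ocats service naming convention used earlier and the HTTPS port 8443 from the values file:

    kubectl port-forward svc/<release_name>-ocats 8443:8443 -n seppsvc
    # Then open https://localhost:8443 in the browser.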

3.6.9 Post Installation Steps (If persistent volume is used)

If persistent volume is used, follow the post-installation steps as mentioned in the Persistent Volume for 5G ATS section.

3.7 Installing ATS for UDR

Before installing ATS for UDR, it is important to ensure the resource requirements are met. For information about the resource requirements, see Resource Requirements.

3.7.1 Resource Requirements

ATS resource requirements for UDR, SLF, and EIR are as follows:

Table 3-28 UDR - Total Number of Resources

Resource Name vCPUs Memory (GB) Storage (GB)
UDR SUT Totals 38 38 Ephemeral Storage: 33; PVC Storage: 30
DB Tier Totals 20 41 Ephemeral Storage: 0; PVC Storage: 37
ATS Totals 17 12 Ephemeral Storage: 0; PVC Storage: 160
Grand Total UDR-ATS 75 91 Ephemeral Storage: 33; PVC Storage: 227

Table 3-29 SLF - Total Number of Resources

Resource Name vCPUs (excluding sidecar) vCPUs (including sidecar) Memory (GB) (excluding sidecar) Memory (GB) (including sidecar) Storage (GB)
SLF-SUT Totals 76 103 78 105 Ephemeral Storage: 56; PVC Storage: 25
DB Tier Totals 20 41 28 53 Ephemeral Storage: 0; PVC Storage: 37
ATS Totals 8 11 6 8 Ephemeral Storage: 0; PVC Storage: 160
Grand Total SLF-ATS 104 155 112 166 Ephemeral Storage: 56; PVC Storage: 222

Table 3-30 EIR - Total Number of Resources

Resource Name vCPUs Memory (GB) Storage (GB)
EIR SUT Totals 26 26 Ephemeral Storage: 20; PVC Storage: 30
DB Tier Totals 20 41 Ephemeral Storage: 0; PVC Storage: 37
ATS Totals 8 7 Ephemeral Storage: 0; PVC Storage: 160
Grand Total EIR-ATS 54 74 Ephemeral Storage: 20; PVC Storage: 227

Table 3-31 UDR Operational Resource Requirements

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Ephemeral Storage Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Ephemeral Storage Required - Total (GB) PVC Storage (GB)
ocudr-ingressgateway-sig 2 2 1 GB 2 1 2 2 1 NA
ocudr-ingressgateway-prov 2 2 1 GB 2 1 2 2 1 NA
ocudr-nudr-drservice 1 1 1 GB 2 1 1 1 1 NA
ocudr-nudr-dr-provservice 1 1 1 GB 2 1 1 1 1 NA
ocudr-nudr-notify-service 1 1 1 GB 2 1 1 1 1 NA
ocudr-oc-diam-gateway 1 1 1 GB 2 1 1 1 1 NA
ocudr-nudr-diameterproxy 1 1 1 GB 2 1 1 1 1 NA
ocudr-egressgateway 2 2 1 GB 1 1 2 2 1 NA
ocudr-nudr-config 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-config-server 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-nrf-client-nfmanagement 1 1 1 GB 1 1 1 1 1 NA
ocudr-appinfo 0.5 1 1 GB 1 1 0.5 1 1 NA
ocudr-alternate-route 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-migration 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-bulk-import 1 1 1 GB 1 1 1 1 1 20
ocudr-nudr-ondemand-migration 1 1 1 GB 1 1 1 1 1 NA
ocudr-performance 1 1 1 GB 1 1 1 1 1 NA
ocudr-export-tool 1 1 1 GB 1 1 1 1 1 10
UDR Additional Resources (Hooks/Init/Update Containers) - - - - - 6 6 6 -
provgw-prov-ingressgateway 2 2 0 2 1 2 2 1 NA
provgw-provgw-service 1 1 0 2 1 1 1 1 NA
provgw-provgw-config 1 1 0 1 1 1 1 1 NA
provgw-prov-egressgateway 2 2 0 2 1 2 2 1 NA
Provgw Additional Resources (Hooks/Init/Update Containers) - - - - - 5 5 5 NA
UDR-SUT Total (UDR and ProvGw) - - - - - 38 38 33 30

Table 3-32 SLF Operational Resource Requirements

Microservice vCPUs Required per Pod (excluding sidecar) vCPUs required for sidecar container per pod Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Ephemeral Storage Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total (excludes sidecar) vCPUs Required - Total (includes sidecar) Memory Required - Total (GB) (excludes sidecar) Memory Required - Total (GB) (includes sidecar) Ephemeral Storage Required - Total (GB) PVC Required Total (GB)
ocudr-ingressgateway-sig 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1 NA
ocudr-ingressgateway-prov 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1 NA
ocudr-nudr-drservice 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1 NA
ocudr-nudr-dr-provservice 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1 NA
ocudr-egressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1 NA
ocudr-nudr-config 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
ocudr-nudr-config-server 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
ocudr-nudr-nrf-client-nfmanagement 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
ocudr-appinfo 0.5 0.5 1 0.5 1 GB 1 1 0.5 1 1 1.5 1 NA
ocudr-alternate-route 2 0.5 2 0.5 1 GB 1 1 2 2.5 2 2.5 1 NA
ocudr-nudr-bulk-import 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 10Gi
ocudr-performance 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
ocudr-nudr-export-tool 2 1 2 1 3 GB 1 1 2 3 2 3 3 10Gi
SLF Additional Resources (Hooks/Init/Update Containers) - - - - - - - 6 7 6 7 3 NA
provgw-prov-ingressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1 NA
provgw-prov-egressgateway 2 0.5 2 0.5 1 GB 2 1 2 2.5 2 2.5 1 NA
provgw-provgw-service 1 0.5 1 0.5 1 GB 2 1 1 1.5 1 1.5 1 NA
provgw-provgw-config 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
provgw-provgw-config-server 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 NA
provgw-auditor-service 1 0.5 1 0.5 1 GB 1 1 1 1.5 1 1.5 1 5Gi
Provgw Additional Resources (Hooks/Init/Update Containers) - - - - - - - 5 6 5 6 3 NA
SLF-SUT Total Required (SLFs and ProvGw) - - - - - - - 76 103 78 105 56 25

Table 3-33 EIR Operational Resource Requirements

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Ephemeral Storage Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Ephemeral Storage Required - Total (GB) PVC Required Total (GB)
ocudr-ingressgateway-sig 2 2 1 GB 2 1 2 2 1 NA
ocudr-ingressgateway-prov 2 2 1 GB 2 1 2 2 1 NA
ocudr-nudr-drservice 1 1 1 GB 2 1 1 1 1 NA
ocudr-nudr-dr-provservice 1 1 1 GB 2 1 1 1 1 NA
ocudr-oc-diam-gateway 1 1 1 GB 2 1 1 1 1 NA
ocudr-nudr-diameterproxy 1 1 1 GB 2 1 1 1 1 NA
ocudr-egressgateway 2 2 1 GB 2 1 2 2 1 NA
ocudr-nudr-config 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-config-server 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-nrf-client-nfmanagement 1 1 1 GB 1 1 1 1 1 NA
ocudr-appinfo 0.5 1 1 GB 1 1 0.5 1 1 NA
ocudr-alternate-route 2 2 1 GB 1 1 2 2 1 NA
ocudr-nudr-bulk-import 1 1 1 GB 1 1 1 1 1 20
ocudr-performance 1 1 1 GB 1 1 1 1 1 NA
ocudr-nudr-export-tool 2 2 3 GB 1 1 2 2 3 10
EIR Additional Resources (Hooks/Init/Update Containers) - - - - - 6 6 3 NA
EIR-SUT Total Required - - - - - 26 26 20 30

Table 3-34 ATS Resource Requirements for UDR mode

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ATS Behave 4 4 0 - 1 4 4 160
NRF-Stub (python stub) 2 1 0 - 2 (Two separate deployments) 4 2 0
Notify-Stub (python stub) 2 1 0 - 3 (Three separate deployments) 6 3 0
diam-stub 1 1 0 - 2 (Two separate deployments) 2 2 0
fourg-stub 1 1 0 - 1 1 1 0
ATS Totals - - - - - 17 12 160

Table 3-35 ATS Resource Requirements for SLF Mode

Microservice vCPUs Required per Pod (excluding sidecar) vCPUs required for sidecar container per pod Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Ephemeral Storage Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total (excludes sidecar) vCPUs Required - Total (includes sidecar) Memory Required - Total (GB) (excludes sidecar) Memory Required - Total (GB) (includes sidecar) Storage PVC Required - Total (GB)
ocats-udr 4 2 4 1 0 1 1 4 6 4 5 160
nrf-ocstub-python 2 .5 1 .5 0 1 2 (Two separate deployments) 4 5 2 3 0
ATS Total required - - - - - - - 8 11 6 8 160

Table 3-36 ATS Resource Requirements for EIR mode

Microservice vCPUs Required per Pod Memory Required per Pod (GB) Storage PVC Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) vCPUs Required - Total Memory Required - Total (GB) Storage PVC Required - Total (GB)
ocats-udr 3 4 0 1 1 3 4 160
nrf-ocstub-python (For deployments) 2 1 0 1 2 (Two separate deployments) 4 2 0
diam-stub 1 1 0 - 1 1 1 0
ATS Total required - - - - - 7 6 160

Note:

ATS total resource calculation includes 2 nrf-stub deployments, 1 diam-stub deployment, and 1 ATS deployment.

Table 3-37 UDR cnDBTier Pods

Micro service name Container name Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 4 CPUs, 10 GB Memory
Data node mysqlndbcluster 2 2 CPUs 4 GB 8*2 → 16 GB 5 CPUs, 9 GB Memory
  (db-backup-executor-svc: 100m CPU, 128 MB)
APP SQL node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
  (init-sidecar: 100m CPU, 256 MB)
SQL node (Used for Replication) mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
  (init-sidecar: 100m CPU, 256 MB)
DB Monitor Service db-monitor-svc 1 200m CPU 500 MB - 0.5 CPUs, 0.5 GB Memory
DB Backup Manager Service replication-svc 1 200m CPU 500 MB - 0.5 CPUs, 0.5 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB 20 CPUs, 41 GB Memory

Table 3-38 SLF cnDBTier Pods

Micro service name Container name Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 8 CPUs, 12 GB Memory
  (istio-proxy: 2 CPUs, 1 GB)
Data node mysqlndbcluster 2 2 CPUs 4 GB 8*2 → 16 GB 9 CPUs, 11 GB Memory
  (istio-proxy: 2 CPUs, 1 GB; db-backup-executor-svc: 100m CPU, 128 MB)
APP SQL node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
  (init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB)
SQL node (Used for Replication) mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
  (init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB)
DB Monitor Service db-monitor-svc 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
  (istio-proxy: 2 CPUs, 1 GB)
DB Backup Manager Service replication-svc 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
  (istio-proxy: 2 CPUs, 1 GB)
Total - - - - 34+3 (Replication Svc) → 37 GB 41 CPUs, 53 GB Memory

Table 3-39 EIR cnDBTier Pods

Micro service name Container name Number of Pods vCPUs Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 4 CPUs, 10 GB Memory
Data node mysqlndbcluster 2 2 CPUs 4 GB 8*2 → 16 GB 5 CPUs, 9 GB Memory
  (db-backup-executor-svc: 100m CPU, 128 MB)
APP SQL node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
  (init-sidecar: 100m CPU, 256 MB)
SQL node (Used for Replication) mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 5 CPUs, 10.5 GB Memory
  (init-sidecar: 100m CPU, 256 MB)
DB Monitor Service db-monitor-svc 1 200m CPU 500 MB - 1 CPU, 1 GB Memory
DB Backup Manager Service replication-svc 1 200m CPU 500 MB - 1 CPU, 1 GB Memory
Total - - - - 34+3 (Replication Svc) → 37 GB 20 CPUs, 41 GB Memory

Table 3-40 SLF Performance based Resource Details

- vCPUs (excluding sidecar) vCPUs (including sidecar) Memory (GB) (excluding sidecar) Memory (GB) (including sidecar) Ephemeral Storage (GB) PVC Storage (GB)
SLF-SUT Totals 59 105 47 93 21 0
DB Tier Totals 40 80 40 80 0 37
ATS Totals 6 4 1 3 0 0
Grand Total SLF-ATS 105 193 95 183 21 37
Microservice CPUs Required per Pod (excluding sidecar) CPUs required for sidecar container Memory Required per Pod (GB) (excluding sidecar) Memory required for sidecar container per pod (GB) Ephemeral Storage Required per Pod (GB) Replicas (regular deployment) Replicas (ATS deployment) CPUs Required - Total (excluding sidecar) CPUs Required - Total (including sidecar) Memory Required - Total (GB) (excluding sidecar) Memory Required - Total (GB) (including sidecar) Ephemeral Storage Required - Total (GB)
ocudr-ingressgateway-sig 6 4 4 4 1 GB 2 2 12 20 8 16 2
ocudr-ingressgateway-prov 6 4 4 4 1 GB 2 2 12 20 8 16 2
ocudr-nudr-drservice 5 4 4 4 1 GB 2 2 10 18 8 16 2
ocudr-nudr-dr-provservice 5 4 4 4 1 GB 2 2 10 18 8 16 2
ocudr-egressgateway 1 2 1 2 1 GB 1 1 1 3 1 3 1
ocudr-nudr-config 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-nudr-config-server 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-nudr-nrf-client-nfmanagement 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-appinfo 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-alternate-route 1 1 1 1 1 GB 1 1 1 2 1 2 1
ocudr-performance 1 1 1 1 1 GB 1 1 1 2 1 2 1
Additional SLF Resources (Hooks/Init/Update Containers) - - - - - - - 6 12 6 12 6
SLF SUT Totals - - - - - - - 59 105 47 93 21
Microservice Container Number of Pods CPU Requirement Per Pod Memory Requirement Per Pod PVC Requirement Total Resources
Management node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 8 CPUs, 12 GB Memory
  (istio-proxy: 2 CPUs, 1 GB)
Data node mysqlndbcluster 2 2 CPUs 4 GB 8*2 → 16 GB 9 CPUs, 11 GB Memory
  (istio-proxy: 2 CPUs, 1 GB; db-backup-executor-svc: 100m CPU, 128 MB)
APP SQL node mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
  (init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB)
SQL node (Used for Replication) mysqlndbcluster 2 2 CPUs 5 GB 3*2 → 6 GB 9 CPUs, 13 GB Memory
  (init-sidecar: 100m CPU, 256 MB; istio-proxy: 2 CPUs, 1 GB)
DB Monitor Service db-monitor-svc 1 200m CPU 1 GB - 3 CPUs, 2 GB Memory
  (istio-proxy: 2 CPUs, 500 MB)
DB Backup Manager Service replication-svc 1 200m CPU 500 MB - 3 CPUs, 2 GB Memory
  (istio-proxy: 2 CPUs, 1 GB)
Total - - - - 34+3 (Replication Svc) → 37 GB 41 CPUs, 53 GB Memory

Table 3-41 ATS and stub requirement for SLF Performance

| Microservice | vCPUs Required per Pod (excluding sidecar) | vCPUs required for sidecar container per pod | Memory Required per Pod (GB) (excluding sidecar) | Memory required for sidecar container per pod (GB) | Storage PVC Required per Pod (GB) | Replicas (regular deployment) | Replicas (ATS deployment) | vCPUs Required - Total (excluding sidecar) | vCPUs Required - Total (including sidecar) | Memory Required - Total (GB) (excluding sidecar) | Memory Required - Total (GB) (including sidecar) | Storage PVC Required - Total (GB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ocats-udr | 6 | 2 | 8 | 2 | 0 | 1 | 1 | 6 | 8 | 8 | 10 | 0 |
| ATS Total required | - | - | - | - | - | - | - | 6 | 8 | 8 | 10 | 0 |

3.7.2 Locating and Downloading ATS Image

To locate and download the ATS Image from MOS:

  1. Log in to My Oracle Support using the appropriate credentials.
  2. Select the Patches & Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced).
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field.
  5. Select Oracle Communications Cloud Native Core Unified Data Repository <release_number> from the Release drop-down.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required ATS patch from the list. The Patch Details window appears.
  8. Click Download. The File Download window appears.
  9. Click the <p********_<release_number>_Tekelec>.zip file.
  10. Unzip the file to access all the ATS images. The zip file contains the following files:
    • ocats_udr_pkg_25_1_200_0_0.tgz
    • ocats_udr_pkg_25_1_200_0_0-README.txt
    • ocats_udr_pkg_25_1_200_0_0.tgz.sha256
    • ocats-udr-custom-configtemplates-25.1.200.0.0.zip
    • ocats-udr-custom-configtemplates-25.1.200.0.0-README.txt
    • ocats_udr_pkg_25_1_200_0_0.mcafee.sw_packaging.log

    The ocats_udr_pkg_25_1_200_0_0.tgz file contains:

    ocats_udr_pkg_25_1_200_0_0.tgz
    ├── ocats-udr-ats-pkg-25.1.200.tgz
    │   ├── ocats-udr-25.1.200.tgz (Helm charts)
    │   ├── ocats-udr-images-25.1.200.tar (Docker images)
    │   ├── OCATS-UDR-Readme.txt
    │   ├── ocats-udr-25.1.200.tgz.sha256
    │   ├── ocats-udr-images-25.1.200.tar.sha256
    │   ├── ocats-udr-data-25.1.200.tgz
    │   └── ocats-udr-data-25.1.200.tgz.sha256
    └── ocats-udr-stub-pkg-25.1.200.tgz
        ├── ocstub-py-25.1.202.tgz (Helm charts)
        ├── fourg-stub-25.1.200.tgz (Helm charts)
        ├── diam-stub-25.1.200.tgz (Helm charts)
        ├── ocstub-py-25.1.202.tar (Docker images)
        ├── ocats-udr-fourg-stub-images-25.1.200.tar (Docker images)
        ├── ocats-udr-diam-stub-images-25.1.200.tar (Docker images)
        ├── OCATS-UDR-STUB-Readme.txt
        ├── ocstub-py-25.1.202.tgz.sha256
        ├── fourg-stub-25.1.200.tgz.sha256
        ├── diam-stub-25.1.200.tgz.sha256
        ├── ocstub-py-25.1.202.tar.sha256
        ├── ocats-udr-fourg-stub-images-25.1.200.tar.sha256
        └── ocats-udr-diam-stub-images-25.1.200.tar.sha256
    The ocats-udr-custom-configtemplates-25.1.200.0.0.zip file contains:
    ocats-udr-custom-configtemplates-25.1.200.0.0.zip
    ├── ocats-udr-custom-values-25.1.200.yaml (Custom values for UDR-ATS)
    ├── ocstub-py-custom-values-25.1.202.yaml (Custom values for COMMON-PYTHON-STUB)
    ├── fourg-stub-custom-values-25.1.200.yaml (Custom values for FOURG-STUB)
    └── diam-stub-custom-values-25.1.200.yaml (Custom values for DIAMETER-STUB)

    Copy the ocats_udr_pkg_25_1_200_0_0.tgz file and the ocats-udr-custom-configtemplates-25.1.200.0.0.zip file to the OCI or Kubernetes cluster where you want to deploy ATS.
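    Optionally, verify the integrity of the downloaded package against the bundled checksum file before copying it. A minimal sketch, assuming the .tgz and its .sha256 file are in the current directory and the .sha256 file uses the standard <hash> <filename> format:

    sha256sum -c ocats_udr_pkg_25_1_200_0_0.tgz.sha256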

3.7.3 Preparing to Deploy ATS and Stub Pods

To deploy ATS and stub pods in the Kubernetes cluster:

Note:

Deploy ATS and Subscriber Location Function (SLF) in the same namespace.
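For example, if SLF runs in the ocudr namespace (the namespace used in the examples later in this section), create it once, if it does not already exist, and deploy both SLF and ATS into it:

kubectl create namespace ocudr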

3.7.4 Loading UDR ATS Image

To load UDR ATS image:

  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_25_1_200_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-25.1.200.tgz
    • ocats-udr-stub-pkg-25.1.200.tgz
  2. Run the following command to extract the helm charts and docker images of ATS.

    tar -xvf ocats-udr-ats-pkg-25.1.200.tgz

    The output of this command is:
    • ocats-udr-25.1.200.tgz
    • ocats-udr-images-25.1.200.tar
    • ocats-udr-data-25.1.200.tgz
  3. Run the following command to extract the helm charts and docker images of the stubs.

    tar -xvf ocats-udr-stub-pkg-25.1.200.tgz

    The output of this command is:
    • ocstub-py-25.1.202.tgz (Helm charts)
    • fourg-stub-25.1.200.tgz (Helm charts)
    • diam-stub-25.1.200.tgz (Helm charts)
    • ocats-udr-notify-stub-images-25.1.200.tar (Docker image)
    • ocats-udr-fourg-stub-images-25.1.200.tar (Docker image)
    • ocats-udr-diam-stub-images-25.1.200.tar (Docker image)
    • ocstub-py-25.1.202.tar (Docker image)

    The ocats-udr-images-25.1.200.tar file contains the Docker image of ATS for UDR 25.1.200.

  4. Run the following command in your cluster to load the ATS image.
    docker load --input ocats-udr-images-25.1.200.tar 

    Note:

    For CNE 1.8.0 and above, you can use Podman instead of Docker. See the following sample Podman command:

    sudo podman load --input ocats-udr-images-25.1.200.tar

  5. Run the following commands to tag and push the ATS image to your registry.
    docker tag ocats-udr-images:25.1.200 <registry>/ocats-udr-images:25.1.200
    docker push <registry>/ocats-udr-images:25.1.200 

    In the previous command, <registry> is the name of the Docker image repository.

    Note:

    For CNE 1.8.0 and above, you can use Podman instead of Docker to tag and push the image. Run the following sample Podman commands:
    sudo podman tag ocats-udr-images:25.1.200 <customer repo>/<image name>:<image version>
    sudo podman push <customer repo>/<image name>:<image version>
  6. Run the following command to untar the helm charts (ocats-udr-25.1.200.tgz) and update the registry name, image name, and tag (if required) in the ocats-udr-custom-values-25.1.200.yaml file.
    tar -xvf ocats-udr-25.1.200.tgz
    Output:
    ocats-udr
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   ├── serviceaccount.yaml
    │   └── service.yaml
    └── values.yaml

3.7.5 Loading UDR Stub Images in the SLF-NewFeatures or SLF-Regression Pipeline

To load the UDR stub images:

Note:

For the SLF-NewFeatures or SLF-Regression pipeline, deploy only the ocstub-py Helm chart. The ocstub-py-25.1.202.tar file contains the common-python-stub image (ocstub-py:25.1.202).
  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_25_1_200_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-25.1.200.tgz
    • ocats-udr-stub-pkg-25.1.200.tgz
  2. Run the following command to extract the stub tar file content.

    tar -xvf ocats-udr-stub-pkg-25.1.200.tgz

    The output of this command is:
    • ocstub-py-25.1.202.tgz (Helm charts)
    • fourg-stub-25.1.200.tgz (Helm charts)
    • diam-stub-25.1.200.tgz (Helm charts)
    • ocats-udr-notify-stub-images-25.1.200.tar (Docker image)
    • ocats-udr-fourg-stub-images-25.1.200.tar(Docker Images)
    • ocats-udr-diam-stub-images-25.1.200.tar(Docker Images)
    • ocstub-py-25.1.202.tar (Docker image)
  3. To load the UDR stub images (ocstub-py-25.1.202.tar) in the SLF-NewFeatures and SLF-Regression pipeline, run the following command in your cluster.
    docker load --input ocstub-py-25.1.202.tar
  4. Run the following commands to tag and push the stub image to your registry.
    docker tag ocstub-py:25.1.202 <registry>/ocstub-py:25.1.202
    
    docker push <registry>/ocstub-py:25.1.202

    In the previous command, <registry> is the name of docker image repository.

  5. Run the following command to untar the common python stub helm charts (ocstub-py-25.1.202.tgz) to get ocstub-py charts:
    tar -xvf ocstub-py-25.1.202.tgz
    Output:
    ocstub-py
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── serviceaccount.yaml
    │   └── service.yaml
    ├── values.yaml
    └── README.md

3.7.6 Loading UDR Stub Images in the UDR-NewFeatures or UDR-Regression Pipeline

Note:

  • To run UDR-NewFeatures/UDR-Regression, ocstub-py, fourg-stub, and diam-stub must be deployed
  • ocstub-py-25.1.202.tar contains common-python-stub image (ocstub-py:25.1.202)
  • ocats-udr-fourg-stub-images-25.1.200.tar contains fourg-stub image (ocats-udr-fourg-stub-images:25.1.200)
  • ocats-udr-diam-stub-images-25.1.200.tar contains diam-stub image (ocats-udr-diam-stub-images:25.1.200)
To load the UDR Stub Images:
  1. Run the following command to extract the tar file content.

    tar -xvf ocats_udr_pkg_25_1_200_0_0.tgz

    The output of this command is:
    • ocats-udr-ats-pkg-25.1.200.tgz
    • ocats-udr-stub-pkg-25.1.200.tgz
  2. Run the following command to extract the stub tar file content.

    tar -xvf ocats-udr-stub-pkg-25.1.200.tgz

    The output of this command is:
    • ocstub-py-25.1.202.tgz (Helm charts)
    • fourg-stub-25.1.200.tgz (Helm charts)
    • diam-stub-25.1.200.tgz (Helm charts)
    • ocats-udr-notify-stub-images-25.1.200.tar (Docker image)
    • ocats-udr-fourg-stub-images-25.1.200.tar (Docker image)
    • ocats-udr-diam-stub-images-25.1.200.tar (Docker image)
    • ocstub-py-25.1.202.tar (Docker image)
  3. Run the following commands in your cluster to load the stub images.
    docker load --input ocstub-py-25.1.202.tar
    docker load --input ocats-udr-fourg-stub-images-25.1.200.tar
    docker load --input ocats-udr-diam-stub-images-25.1.200.tar
  4. Run the following commands to tag and push the stub image to your registry.
    docker tag ocstub-py:25.1.202 <registry>/ocstub-py:25.1.202
    docker push <registry>/ocstub-py:25.1.202
    docker tag ocats-udr-fourg-stub-images:25.1.200 <registry>/ocats-udr-fourg-stub-images:25.1.200
    docker push <registry>/ocats-udr-fourg-stub-images:25.1.200
    docker tag ocats-udr-diam-stub-images:25.1.200 <registry>/ocats-udr-diam-stub-images:25.1.200
    docker push <registry>/ocats-udr-diam-stub-images:25.1.200
  5. Run the following command to untar all the stub charts:
    tar -xvf ocstub-py-25.1.202.tgz
    tar -xvf fourg-stub-25.1.200.tgz
    tar -xvf diam-stub-25.1.200.tgz
    Output of each helm chart is as follows:
    ocstub-py
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── serviceaccount.yaml
    │   └── service.yaml
    ├── values.yaml
    └── README.md
    fourg-stub
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml
    diam-stub
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml

3.7.7 Loading EIR Stub Images in the EIR-NewFeatures or EIR-Regression Pipeline

Note:

  • To run EIR-NewFeatures/EIR-Regression, ocstub-py and diam-stub must be deployed
  • ocstub-py-25.1.202.tar contains common-python-stub image (ocstub-py:25.1.202)
  • ocats-udr-diam-stub-images-25.1.200.tar contains diam-stub image (ocats-udr-diam-stub-images:25.1.200)
To load the EIR Stub Images:
  1. Run the following commands in your cluster to load the stub images.
    docker load --input ocstub-py-25.1.202.tar
    docker load --input ocats-udr-diam-stub-images-25.1.200.tar
  2. Run the following commands to tag and push the stub image to your registry.
    docker tag ocstub-py:25.1.202 <registry>/ocstub-py:25.1.202
    docker push <registry>/ocstub-py:25.1.202
    docker tag ocats-udr-diam-stub-images:25.1.200 <registry>/ocats-udr-diam-stub-images:25.1.200
    docker push <registry>/ocats-udr-diam-stub-images:25.1.200
  3. Run the following command to untar all the stub charts:
    tar -xvf ocstub-py-25.1.202.tgz
    tar -xvf diam-stub-25.1.200.tgz
    Output of each helm chart is as follows:
    ocstub-py
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── serviceaccount.yaml
    │   └── service.yaml
    ├── values.yaml
    └── README.md
    diam-stub
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   └── service.yaml
    └── values.yaml

3.7.8 Configuring ATS

It is important to configure the following features before deploying ATS for UDR:

Note:

  • The deployment of notify-stub, fourg-stub, and diam-stub is not applicable to SLF pipelines.
  • The service name used by each of the stubs must be unique for successful deployment.
3.7.8.1 Configuring Docker Registry

Update the docker registry in the ocats-udr-custom-values-25.1.200.yaml file as follows:

image:
  repository: <docker registry>:<docker port>/ocats-udr-images
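For example, with a hypothetical registry host and port filled in (registry.example.com:5000 is a placeholder, not a real endpoint):

image:
  repository: registry.example.com:5000/ocats-udr-images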
3.7.8.2 Enabling Static Port
To enable static port:

Note:

ATS supports static port. By default, this feature is not enabled.
In the ocats-udr-custom-values-25.1.200.yaml file, under the service section, set the staticNodePortEnabled parameter to true and the staticNodePort parameter to a valid nodePort value.

Here is a sample configuration for enabling static port in the ocats-udr-custom-values-25.1.200.yaml file:

service:
  customExtension:
    labels: {}
    annotations: {}
  type: LoadBalancer
  port: "8080"
  staticNodePortEnabled: true
  staticNodePort: "31083"
3.7.8.3 Enabling Persistent Volume
To enable persistent volume, create a PVC and associate it with the ATS pod.

Note:

To enable persistent volume, set the following parameters in the values.yaml file:
  1. Set the PVEnabled flag to true.
  2. Set PVClaimName to the name of the PVC that you created for ATS.
deployment:
  customExtension:
    labels: {}
    annotations: {}
  PVEnabled: true
  PVClaimName: "ocats-udr-pvc"
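The following is a minimal PVC sketch. The claim name matches the sample PVClaimName above; the namespace, storage class, and size are assumptions that you must adapt to your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocats-udr-pvc
  namespace: ocudr               # namespace where ATS is deployed
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard     # assumption: replace with your storage class
  resources:
    requests:
      storage: 1Gi               # assumption: size as per your test data needs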
3.7.8.4 Settings for OAuth2 Test Cases
UDR-ATS supports test cases that verify OAuth2 validation scenarios at both the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway. To run the OAuth2 related test cases on ATS:
  1. Generate four ECDSA private keys in PEM format (two each for the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway) and enter the key names in the privateKey field.
  2. Generate four public certificates using the private keys (two each for the UDR Provisioning Ingress Gateway and the UDR Signaling Ingress Gateway) and enter the certificate names under publicKey.
    Sample Commands to Generate ECDSA Private Keys and Certificates
    openssl ecparam -genkey -name prime256v1 -noout -out ec_private_key1.pem
    openssl pkcs8 -topk8 -in ec_private_key1.pem -inform pem -out ec_private_key_pkcs8.pem -outform pem -nocrypt
    openssl req -new -key ec_private_key_pkcs8.pem -x509 -nodes -days 365 -out 4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -subj "/C=IN/ST=KA/L=BLR/O=ORACLE/OU=CGBU/CN=ocnrf-endpoint.ocnrf.svc.cluster.local"

    Note:

    • You can configure the above mentioned inputs only if OAuth2 is configured on UDR. For information on configuring OAuth2 on UDR, see the "Configuring OAuth2.0" section in the Oracle Communications Cloud Native Core Unified Data Repository User Guide.
    • ATS configures Ingress Gateway with the secret name, keyId, certificate name, and instanceid based on the inputs provided in the ocats-udr-custom-values-25.1.200.yaml file.
    • ATS supports only the ES256 algorithm to generate tokens for this release. You must generate ECDSA private keys to test the OAuth2 feature.
  3. Update the ocats-udr-custom-values-25.1.200.yaml file with the public and private keys generated in the previous steps. A sample code snippet is as follows:
    Sample: OAuth Configuration
    deployment:
      oauthKeys:
        - keyId: '664b344e74294c8fa5d2e7dfaaaba407'
          udrSecret: 'oauthsecret1'
          privateKey: 'ec_private_key1.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd4'
          reqType: 'igw-sig'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba408'
          udrSecret: 'oauthsecret2'
          privateKey: 'ec_private_key2.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd5.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd5'
          reqType: 'igw-sig'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba409'
          udrSecret: 'oauthsecret3'
          privateKey: 'ec_private_key3.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd6.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd6'
          reqType: 'igw-prov'
        - keyId: '664b344e74294c8fa5d2e7dfaaaba410'
          udrSecret: 'oauthsecret4'
          privateKey: 'ec_private_key4.pem'
          publicKey: '4bc0c762-0212-416a-bd94-b7f1fb348bd7.crt'
          issuerId: '4bc0c762-0212-416a-bd94-b7f1fb348bd7'
          reqType: 'igw-prov'
    In the above code snippet:
    • issuerId is any UUID that follows the NfInstanceId format.
    • keyId is a user-defined value.
    • reqType indicates the mapping between the created secret and the Ingress Gateway: igw-sig indicates secrets to be used for the UDR Signaling Ingress Gateway, and igw-prov indicates the UDR Provisioning Ingress Gateway.
  4. Create four secrets, each containing a public certificate, in the namespace where ATS and UDR are installed, and enter the secret names in the udrSecret field.
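    A minimal sketch for creating one such secret, assuming the certificate file name from the sample above and the ocudr namespace; the exact key layout expected inside the secret depends on your UDR OAuth2 configuration:

    kubectl create secret generic oauthsecret1 --from-file=4bc0c762-0212-416a-bd94-b7f1fb348bd4.crt -n ocudr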
3.7.8.5 Enabling IPv6 on ATS

If you are deploying the ATS setup on an IPv6 system, then enable the following flag in the ocats-udr-custom-values.yaml file:
deployment:
  ipv6enabled: true
3.7.8.6 Configuring ATS to Run Health-Check Pipeline
To run the SLF-Health-Check, UDR-Health-Check, or EIR-HealthCheck features, provide the following inputs in the values.yaml file if ATS is being deployed on OCCNE.
deployment:
  Webscale: false
  occnehostip: <base64 encoded occne bastion ip>
  occnehostusername: <base64 encoded occne login user name>
  occnehostpassword: <base64 encoded occne login password>
If ATS is being deployed on Webscale, provide the following inputs instead to run the SLF-Health-Check, UDR-Health-Check, or EIR-HealthCheck features.
deployment:
  Webscale: true
  #Provide Webscale Environment details with base64 encoding
  webscalejumpserverip: <base64 encoded jump server ip>
  webscalejumpserverusername: <base64 encoded jump server username>
  webscalejumpserverpassword: <base64 encoded jump server password>
  webscaleprojectname: <base64 encoded webscale project name>
  webscalelabserverFQDN: <base64 encoded lab server fqdn>
  webscalelabserverport: <base64 encoded lab server port>
  webscalelabserverusername: <base64 encoded lab server username>
  webscalelabserverpassword: <base64 encoded lab server password>
You can configure the name of the secret that ATS creates for the health check pipeline:
healthchecksecretname: ats-healthcheck-secret
If CNE supports the dual stack feature and UDR, SLF, EIR, and ProvGw are deployed in IPv6_IPv4 mode, then you can enable the dual stack feature on ATS by setting the deploymentMode parameter to IPv6_IPv4. This enables the ATS service to have both IPv6 and IPv4 addresses, with the IPv6 address preferred.
deploymentMode: IPv6_IPv4

For more information, see Health Check.

Note:

UDR-ATS creates a secret with the name 'healthcheck-secret' on Kubernetes to store the above inputs.
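The OCCNE and Webscale values above must be base64 encoded before they are entered in the values file. A quick sketch, using a hypothetical bastion IP:

echo -n '10.75.226.15' | base64   # -n avoids encoding a trailing newline
# MTAuNzUuMjI2LjE1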
3.7.8.7 Creating Service Account
To run SLF-ATS and EIR-ATS, it is mandatory to create a service account using the following inputs:
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale","statefulsets/scale"]
  verbs: ["get","watch","list","update"]
- apiGroups: [""]
  resources: ["pods", "deployments","pods/log","configmaps","pods/exec"]
  verbs: ["get","watch","list","update","create"]
To run UDR-ATS, it is mandatory to create a service account using the following inputs:
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale","statefulsets/scale"]
  verbs: ["get","watch","list","update"]
- apiGroups: [""]
  resources: ["pods", "deployments","pods/log","configmaps","pods/exec","services"]
  verbs: ["get","watch","list","update","create","delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get","create","delete","update","list"]

Note:

For information about creating service account, see the Oracle Communications Cloud Native Core Unified Data Repository Installation and Upgrade Guide available on MOS.
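The following is a minimal sketch of how these rules might be wired together; the resource names (ocats-udr-sa, ocats-udr-role, ocats-udr-rolebinding) and the ocudr namespace are hypothetical, and the rules shown are the UDR-ATS set from above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ocats-udr-sa
  namespace: ocudr
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocats-udr-role
  namespace: ocudr
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "deployments/scale", "statefulsets/scale"]
  verbs: ["get", "watch", "list", "update"]
- apiGroups: [""]
  resources: ["pods", "deployments", "pods/log", "configmaps", "pods/exec", "services"]
  verbs: ["get", "watch", "list", "update", "create", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "create", "delete", "update", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocats-udr-rolebinding
  namespace: ocudr
subjects:
- kind: ServiceAccount
  name: ocats-udr-sa
  namespace: ocudr
roleRef:
  kind: Role
  name: ocats-udr-role
  apiGroup: rbac.authorization.k8s.io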
3.7.8.8 Enabling Service Mesh

Note:

The UDR-NewFeatures, UDR-Regression, EIR-NewFeatures, and EIR-Regression pipelines do not support deployment on service mesh enabled system.
To enable service mesh:
  1. If service mesh is not enabled at the global level for the namespace, then run the following command to enable service mesh at the namespace level before deploying UDR-ATS.

    kubectl label --overwrite namespace <namespace_name> istio-injection=enabled

    Example:

    kubectl label --overwrite namespace ocudr istio-injection=enabled

  2. Add the following annotations in the lbDeployments section under the global section of the ocats-udr-custom-values-25.1.200.yaml file:
    global:
      # ********  Sub-Section Start: Custom Extension Global Parameters ********
      #**************************************************************************
     
      customExtension:
        allResources:
          labels: {}
          annotations: {}
     
        lbServices:
          labels: {}
          annotations: {}
     
        lbDeployments:
          labels: {}
          annotations:
            traffic.sidecar.istio.io/excludeInboundPorts: "8080"
            traffic.sidecar.istio.io/excludeOutboundPorts: "443,9000,22,9090"
     
        nonlbServices:
          labels: {}
          annotations: {}
     
        nonlbDeployments:
          labels: {}
          annotations: {}
  3. Use the following code snippet to create an envoy filter for both UDR and ATS:
    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <user defined name for envoy filter>
      namespace: <namespace where ATS is deployed>
    spec:
      workloadSelector:
        labels:
          app: ocats-udr
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              server_header_transformation: PASS_THROUGH
     
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: <user defined name for envoy filter>
      namespace: <namespace where ATS is deployed>
    spec:
      workloadSelector:
        labels:
          app: ocats-udr
      configPatches:
      - applyTo: NETWORK_FILTER
        match:
          listener:
            filterChain:
              filter:
                name: "envoy.http_connection_manager"
        patch:
          operation: MERGE
          value:
            typed_config:
              '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
              forward_client_cert_details: ALWAYS_FORWARD_ONLY

    For more information about envoy filter and service mesh configuration, see the Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.

  4. After deploying service mesh, create Peer Authentication on the pods for inter pod communication. A sample template is as follows:
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: <ATS Peer Authentication name>
      namespace: <ATS deployment namespace>
    spec:
      selector:
        matchLabels:
          app: ocats-udr
      mtls:
        mode: PERMISSIVE
    ---
     
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: <ATS Stubs Peer Authentication name>
      namespace: <ATS Stubs deployment namespace>
    spec:
      selector:
        matchLabels:
          app: ocstub-py
      mtls:
        mode: PERMISSIVE
  5. The ATS sidecar must have at least 2 CPUs and 2 Gi of memory. Configure the ATS sidecar as follows:
    deployment:
      customExtension:
        annotations:
          sidecar.istio.io/proxyCPU: "2000m"
          sidecar.istio.io/proxyCPULimit: "2000m"
          sidecar.istio.io/proxyMemory: "2Gi"
          sidecar.istio.io/proxyMemoryLimit: "2Gi"
          proxy.istio.io/config: |
            concurrency: 4
3.7.8.9 Configuring Stubs
From ATS release 24.2.0:
  • The common python stub chart and image must be used for two NRF-stubs, Notify-stub, TLS notify stub (TLS notify must be deployed separately), and SCP stub.
  • Two different deployments of NRF (primary and secondary) using common python stub chart and image must be deployed for SLF-ATS and EIR-ATS.
  • Two different deployments of NRF (primary and secondary), one deployment for notify stub (non-TLS scenarios), one deployment for tls-notify-stub (TLS scenarios) and one deployment for SCP stub must be deployed for UDR-ATS.
3.7.8.9.1 Configuring NRF Stub

UDR-ATS, SLF-ATS, and EIR-ATS require two NRF stubs, one as the primary NRF and the other as the secondary NRF. These NRF stubs are deployed using ocstub-py-custom-values-25.1.202.yaml.

To configure NRF stub:

  • Provide the details of the docker registry, where the images are pushed, in the ocstub-py-custom-values-25.1.202.yaml file as follows:
    image:
      repository: <docker registry>:<docker port>
  • Set the env.NF parameter to SLF, UDR, or 5G_EIR depending upon the deployed NF:
    env:
      ...
      NF: SLF
    env:
      ...
      NF: UDR
    env:
      ...
      NF: 5G_EIR
  • Set the appendReleaseName parameter to false for backward compatibility and the bypass_additional_port parameter to true to avoid creating an HTTP1 port. Each deployment must have a unique service name, as in the consolidated sketch after this list:
    service:
      ...
      name: <user-defined nrf stub service name> #Example nrf-stub-service1 for primary NRF stub and nrf-stub-service2 for secondary NRF stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
  • If CNE supports the dual stack feature and UDR, SLF, EIR, and ProvGw are deployed in IPv6_IPv4 mode, then to run the dual stack scenarios on ATS, you must deploy the NRF stub by setting the deploymentMode parameter to IPv6_IPv4. This enables the NRF stub service to have both IPv6 and IPv4 addresses, with the IPv6 address preferred.
    deploymentMode: IPv6_IPv4
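Putting these settings together, the following is a consolidated sketch of a primary NRF stub values file; the registry host and port are placeholders, and all other parameters in the file keep their defaults:

image:
  repository: registry.example.com:5000   # placeholder <docker registry>:<docker port>
env:
  NF: SLF                                 # or UDR / 5G_EIR, per the deployed NF
service:
  name: nrf-stub-service1                 # use nrf-stub-service2 for the secondary NRF stub
  appendReleaseName: false
  bypass_additional_port: true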
3.7.8.9.2 Configuring Notify Stub

UDR-ATS requires notify-stub to be deployed in three modes using ocstub-py-custom-values-25.1.202.yaml:

  • HTTP mode: To receive non-TLS notifications from UDR
  • HTTPS mode: To receive TLS notifications from UDR
  • SCP mode: To validate SCP routing scenarios
To configure notify stub for Notify (TLS and non-TLS) and SCP stub modes:
  • Update the docker registry where common python stub image is pushed as follows:
    image:
      repository: <docker registry>:<docker port>
  • Set the env.NF parameter to UDR:
    env:
      ...
      NF: UDR
  • If CNE supports the dual stack feature and UDR, SLF, EIR, and ProvGw are deployed in IPv6_IPv4 mode, then to run the dual stack scenarios on ATS, you must deploy the Notify stub by setting the deploymentMode parameter to IPv6_IPv4. This enables the Notify stub service to have both IPv6 and IPv4 addresses, with the IPv6 address preferred.
    deploymentMode: IPv6_IPv4
  • Set the appendReleaseName parameter to false for backward compatibility, the bypass_additional_port parameter to true to avoid creating an HTTP1 port, and the expose_tls_service parameter to false to disable the TLS mode of the stub deployment. The common python stub deployments for HTTP and TLS must each have a unique service name:
    service:
      ...
      name: <user-defined notify stub service name> #Example notify-stub-service for non-TLS stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
      expose_tls_service: false
  • Set the appendReleaseName parameter to false for backward compatibility and the bypass_additional_port parameter to true to avoid creating an HTTP1 port. The common python stub deployed as a TLS server must have a unique service name:
    service:
      ...
      name: <user-defined TLS notify stub service name> #Example tlsnotify-stub-service for TLS stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
  • Configure the stub with the private key, certificates, and secret that are used by the UDR Egress Gateway for TLS notification validation, and set expose_tls_service to true:
    env:
      ...
      cert_secret_name: <TLS secret of UDR egressgateway> #E.g., ocudr-gateway-secret
      ca_cert: <CA certificate used for UDR egressgateway deployment> #E.g., caroot.cer
      client_cert: <Client certificate used for UDR egressgateway deployment> #E.g., apigatewayrsa.cer
      private_key: <Private key used for UDR egressgateway deployment> #E.g., rsa_private_key_pkcs1.pem
      ...
      CLIENT_CERT_REQ: true
      APACHE_KEEP_ALIVE_TIMEOUT: 1000
    ...
    service:
     ...
      expose_tls_service: true
  • To deploy notify-stub as SCP, set the appendReleaseName parameter to false for backward compatibility, the bypass_additional_port parameter to true to avoid creating an HTTP1 port, and expose_tls_service to false. The common python stub deployed as the SCP stub must have a unique service name:
    service:
      ...
      name: <user-defined scp stub service name> #Example scp-stub-service for SCP stub
      appendReleaseName: false #this is to avoid adding release name at the beginning of the service name thereby supporting backward compatibility
      ...
      bypass_additional_port: true #to avoid creating HTTP1 port.
      expose_tls_service: false
3.7.8.9.3 Configuring Fourg Stub
UDR-ATS requires one fourg stub to test the migration scenarios. Configure the docker registry where the fourg stub image is pushed as follows:
image:
  repository: <docker registry>:<docker port>/ocats-udr-fourg-stub-images
If CNE supports the dual stack feature and UDR, SLF, EIR, and ProvGw are deployed in IPv6_IPv4 mode, then to run the dual stack scenarios on ATS, you must deploy the fourg stub by setting the deploymentMode parameter to IPv6_IPv4. This enables the fourg stub service to have both IPv6 and IPv4 addresses, with the IPv6 address preferred.
deploymentMode: IPv6_IPv4
3.7.8.9.4 Configuring Diameter Stub
UDR-ATS requires two diameter stubs as two different diameter peers.
  • diam-toola: peernode1 (seagull1a.seagull.com)
  • diam-toolb: peernode2 (seagull1b.seagull.com)

Note:

UDR-ATS uses the diam-stub-custom-values-25.1.200.yaml file for diameter stub configuration.
EIR-ATS requires one diam-stub to be deployed as a single diameter peer.
  • diam-toola → peernode1 (seagull1a.seagull.com)

Note:

For EIR-ATS, follow only the instructions for configuring diam-toola. The diam-toolb section applies only to UDR-ATS.

To configure diameter stubs:

  • Update the docker registry where diameter stub image is pushed as follows:
    image:
      repository: <docker registry>:<docker port>/ocats-udr-diam-stub-images
  • If CNE supports the dual stack feature and UDR, SLF, EIR, and ProvGw are deployed in IPv6_IPv4 mode, then to run the dual stack scenarios on ATS, you must deploy the diam stub by setting the deploymentMode parameter to IPv6_IPv4. This enables the diam stub service to have both IPv6 and IPv4 addresses, with the IPv6 address preferred.
    deploymentMode: IPv6_IPv4
  • Configure diam-toola as follows:
    deployment:
      SourceIdentity: seagull1a.seagull.com
      SourceRealm: seagulla.com
      mode: nudr-dr
  • Configure diam-toolb as follows:
    deployment:
      SourceIdentity: seagull1b.seagull.com
      SourceRealm: seagullb.com
      mode: nudr-dr
  • Configure diam-tool for EIR-ATS as follows:
    deployment:
      SourceIdentity: seagull1a.seagull.com
      SourceRealm: seagulla.com
      mode: n5g-eir-eic

3.7.9 Deploying ATS in Kubernetes Cluster

You can deploy ATS Pod in Kubernetes cluster using Helm commands.

Run the following command to deploy ATS.

helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocats-udr

Example:

helm install --name ocats-udr-25.1.200 --namespace ocudr -f ocats-udr-custom-values-25.1.200.yaml ocats-udr

3.7.9.1 Deploying ATS Pod and Stubs in Kubernetes Cluster

Allocate ATS resources as described in the Resource Requirements section to support the parallel test execution feature. For more information on parallel test execution, see Parallel Test Execution. To enable Application Log Collection, see UDR Application Log Collection.

The CPU and memory utilization depends on the number of behave commands executed at a given point in time. UDR-ATS runs six behave commands at a time, and SLF-ATS runs seven behave commands at a time.

You can deploy stub pod in Kubernetes cluster using Helm commands.

Run the following command to deploy ATS and stubs:
helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocats-udr
Example:
 helm install --name ocats-udr-25.1.200 --namespace ocudr -f ocats-udr-custom-values-25.1.200.yaml ocats-udr

SLF-NewFeatures, EIR-NewFeatures, SLF-Regression, or EIR-Regression Pipeline

Run the following command to deploy NRF-STUB:
helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocstub-py

Example:

helm install --name stub1 --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml ocstub-py
helm install --name stub2 --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml ocstub-py

Note:

To test the DNS SRV feature in SLF-Regression and EIR-NewFeatures, the NRF stub must be deployed twice, to act as the primary and secondary NRF.

UDR-NewFeatures or UDR-Regression Pipelines using Helm

Run the following commands to deploy each of the required stubs using Helm:

helm install --name <release_name> --namespace <namespace_name> -f <values-yaml-file> ocstub-py

helm install --name <release_name> --namespace <namespace_name> -f <fourg-stub-values-yaml-file> fourg-stub

helm install --name <release_name> --namespace <namespace_name> -f <diam-tool-values-yaml-file> diam-stub

Example:


 helm install --name nrfstub --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml ocstub-py

 helm install --name notify --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml ocstub-py
 
 helm install --name tlsnotify --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml ocstub-py

 helm install --name scp --namespace ocudr -f ocstub-py-custom-values-25.1.202.yaml  ocstub-py

 helm install --name fourgstub --namespace ocudr -f fourg-stub-custom-values-25.1.200.yaml fourg-stub

 helm install --name diamtoola --namespace ocudr -f diam-tool-custom-values-25.1.200.yaml diam-stub

 helm install --name diamtoolb --namespace ocudr -f diam-tool-custom-values-25.1.200.yaml diam-stub

3.7.10 Verifying ATS Deployment

To verify ATS deployment, run the following command:

helm status <release_name>
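For example, using the release name from the deployment example in the previous section:

helm status ocats-udr-25.1.200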

Figure 3-21 Sample Output of Verifying ATS Deployment



To view the four UDRs, Provisioning Gateway, two NRF stubs, one bulk import, and ATS deployed in the SLF namespace, run the kubectl get pods -n <ns> command. The output is as follows:

Figure 3-22 Sample SLF Namespace



The following is the sample output of the kubectl get pods -n <ns> command. It shows the OCUDR namespace with one UDR, one Provisioning Gateway, two diam-tool stubs, one http-server stub, one SCP stub, one bulk import, two NRF stubs, one fourg-stub, and ATS after installing UDR-ATS for the UDR pipelines:

Figure 3-23 Sample UDR Namespace



The following is the sample output of the kubectl get pods -n <ns> command. It shows the OCUDR namespace with one UDR, two NRF stubs, one diam-stub, one bulk import, and ATS after installation for the EIR pipeline:

Figure 3-24 Sample EIR Namespace



If you have installed ATS with a sidecar, ensure that the ATS pod shows two containers in the READY state as "2/2". A sample output of the kubectl get pods -n <ns> command for SLF pipelines is as follows:

Figure 3-25 Sample ATS Deployed with Sidecar



3.7.11 Post Installation Steps

If Provisioning Gateway is upgraded using the helm upgrade command

Following are the post installation steps:

  1. Perform the post installation steps as described in the Cloud Native Core Provisioning Gateway Guide to change segDetails to the UDR Provisioning Ingress Gateway FQDN and port.
  2. Run the following command:
    kubectl exec -it -n <ns> <ATS pod> bash
  3. Run the following command:
    
    curl -X PUT http://<provgw helm release>-provgw-config.<ns>:5001/provgw-config/v1/udr.provgwservice.cfg/PROVGW-SERVICE -d '{"tracingEnabled": false,"soapService_udrIp": "ocudr-ingressgateway-prov.ocudr","soapService_udrSignallingIp": "ocudr-ingressgateway-sig.ocudr","retryErrorCodes": [500,503],"retryCount": 3,"retryInterval": 2}'
  4. Run the exit command.

If PV (Persistent Volume) is enabled for UDR ATS

Following are the post installation steps:

  1. Run the following command to extract the ocslf_tests (for SLF pipelines), ocudr_tests (for UDR pipelines), or oceir_tests (for EIR pipelines) folder and the jobs folder from ocats-udr-data-25.1.200.tgz.

    tar -xvf ocats-udr-data-25.1.200.tgz

  2. Run the following commands to create the certs and oauth_keys folders in the ocslf_tests folder (for SLF pipeline runs), the oceir_tests folder (for EIR pipeline runs), or the ocudr_tests folder (for UDR pipeline runs):
    mkdir -p ocslf_tests/certs ocslf_tests/oauth_keys
    mkdir -p oceir_tests/certs oceir_tests/oauth_keys
    mkdir -p ocudr_tests/certs ocudr_tests/oauth_keys
  3. Run the following commands to copy the ocslf_tests and jobs folder to the ATS pod only if it is intended to run SLF Pipelines.
    kubectl cp ocslf_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  4. Run the following commands to copy the ocudr_tests and jobs folder to the ATS pod only if it is intended to run UDR Pipelines.
    kubectl cp ocudr_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  5. Run the following commands to copy the oceir_tests and jobs folder to the ATS pod only if it is intended to run EIR Pipelines.
    kubectl cp oceir_tests <namespace>/<pod-name>:/var/lib/jenkins
    kubectl cp jobs <namespace>/<pod-name>:/var/lib/jenkins/.jenkins
  6. Run the following command to restart the pod:

    kubectl delete pod <pod-name> -n <namespace>

  7. For SLF-NewFeatures and SLF-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private keys used to create the secret for TLS support on Provisioning Gateway to the ocslf_tests/certs folder as follows:

    Note:

    Provisioning Gateway must use all three files as part of TLS support.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

  8. For UDR-NewFeatures and UDR-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private key that are used to create the secret for TLS support on Provisioning Gateway to the ocudr_tests/certs folder as follows:

    Note:

    For TLS validation, use the same set of copied certificates for Provisioning Gateway Ingress Gateway, UDR Ingress Gateway, and UDR Egress Gateway.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

  9. For EIR-NewFeatures and EIR-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private key that are used to create the secret for TLS support on UDR ingressgateway-prov and ingressgateway-sig to the oceir_tests/certs folder as follows:
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_sig.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    4. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    5. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    6. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_prov.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

  10. To run OAuth2 validation scenarios on SLF-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocslf_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/oauth_keys
  11. To run OAuth2 validation scenarios on UDR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocudr_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/oauth_keys
  12. To run OAuth2 validation scenarios on EIR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/oceir_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/oauth_keys

If PV is disabled:

Following are the post installation steps:

  1. For SLF-NewFeatures and SLF-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private key used to create the secret for TLS support on Provisioning Gateway to the ocslf_tests/certs folder as follows:

    Note:

    Provisioning Gateway must use all three files as part of TLS support.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/certs

  2. For UDR-NewFeatures and UDR-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private keys used to create the secret for TLS support on Provisioning Gateway to the ocudr_tests/certs folder as follows:

    Note:

    For TLS validation, use the same set of copied certificates for Provisioning Gateway Ingress Gateway, UDR Ingress Gateway, and UDR Egress Gateway.
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp caroot.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp apigatewayrsa.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/certs

    Note:

    The Ingress Gateway of UDR should use the above three files as part of TLS support.
  3. For EIR-NewFeatures and EIR-Regression pipelines, copy the root certificate authority (CA), the signed server certificate with root CA private key, and the private keys used to create the secret for TLS support on UDR ingressgateway-prov and ingressgateway-sig to the oceir_tests/certs folder as follows:
    1. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    2. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_sig.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    3. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_sig.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    4. kubectl cp <root CA> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp caroot_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    5. kubectl cp <server certificate> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp apigatewayrsa_prov.cer ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

    6. kubectl cp <private key> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/certs

      Example: kubectl cp rsa_private_key_pkcs1_prov.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/certs

  4. To run OAuth2 validation scenarios on SLF-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocslf_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocslf_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocslf_tests/oauth_keys
  5. To run OAuth2 validation scenarios on UDR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/ocudr_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/ocudr_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/ocudr_tests/oauth_keys
  6. To run OAuth2 validation scenarios on EIR-Regression, copy each private key in PEM format to the oauth_keys folder in the /var/lib/jenkins/oceir_tests path.
    kubectl cp <private key pem file> <namespace>/<pod-name>:/var/lib/jenkins/oceir_tests/oauth_keys

    Example:

    kubectl cp ec_private_key1.pem ocudr/ats-ocats-udr-696df7c84d-2qc7h:/var/lib/jenkins/oceir_tests/oauth_keys