4 OCNWDAF Benchmarking Testing

This chapter describes various testing scenarios and the results obtained by running performance tests on Oracle Communications Networks Data Analytics Function (OCNWDAF).

A series of scripts is created to simulate the entire flow of CAP4C model execution and to extract the performance metrics of the models created. The scripts perform the following tasks (see the sketch after this list):

  • Insert synthetic data into the database.
  • Call the Model Controller API to train the models and perform the prediction tasks.
  • Retrieve the metrics from Jaeger.
  • Delete the synthetic data from the database.
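
The end-to-end flow can be pictured with the following minimal Python sketch. It is illustrative only: the helper functions, table name, endpoint path, payload, and Jaeger service name are assumptions, not part of the shipped scripts; the hosts reuse the values shown in the examples later in this chapter.

    # Illustrative sketch of the benchmarking flow; helpers, table, endpoint, payload, and service name are assumptions.
    import requests

    MODEL_CONTROLLER_URL = "http://cap4c-model-controller.ocnwdaf-ns:8080"   # host reused from the example below
    JAEGER_URL = "http://occne-tracer-jaeger-query.occne-infra"              # host reused from the example below

    def insert_synthetic_data(cursor):
        # Hypothetical table and columns; the real script drives its inserts from .csv files.
        cursor.execute("INSERT INTO ue_mobility_synthetic (ue_id, location) VALUES (%s, %s)", ("ue-1", "cell-42"))

    def delete_synthetic_data(cursor):
        cursor.execute("DELETE FROM ue_mobility_synthetic")

    def run_benchmark_flow(cursor):
        insert_synthetic_data(cursor)
        # Train the models and create the execution tasks (hypothetical endpoint and payload).
        requests.post(f"{MODEL_CONTROLLER_URL}/models", json={"source": "synthetic"}).raise_for_status()
        # Retrieve the spans recorded for the model runs from the Jaeger query API.
        traces = requests.get(f"{JAEGER_URL}/api/traces",
                              params={"service": "cap4c-model-executor", "limit": 100}).json()
        delete_synthetic_data(cursor)
        return traces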

The following table lists the expected performance metrics (per model) after the CAP4C model execution flow completes:

Table 4-1 Expected Performance Metrics

Metric Name | Value
UE Mobility models trained | 6
UE Mobility model trained max time | 275202.415 ms
UE Mobility model trained avg time | 56976.9326 ms
UE Mobility model trained min time | 2902.839 ms
NF Load models trained | 5
NF Load model trained max time | 9606.666 ms
NF Load model trained avg time | 8852.0124 ms
NF Load model trained min time | 8272.789 ms
Abnormal behavior models trained | 8
Abnormal behavior model trained max time | 8297.173 ms
Abnormal behavior model trained avg time | 7504.53 ms
Abnormal behavior model trained min time | 7094.006 ms
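
The max, avg, and min times in the table are straightforward aggregates over the per-model training durations reported by Jaeger. A minimal sketch of that aggregation, using made-up sample values rather than the measured data:

    # Aggregate per-model-type training durations (in ms); the sample values below are placeholders.
    durations_ms = {
        "UE Mobility": [2902.839, 41000.0, 275202.415],
        "NF Load": [8272.789, 8676.582, 9606.666],
    }

    for model_type, samples in durations_ms.items():
        print(f"{model_type} models trained: {len(samples)}")
        print(f"  max time: {max(samples):.3f} ms")
        print(f"  avg time: {sum(samples) / len(samples):.4f} ms")
        print(f"  min time: {min(samples):.3f} ms")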

Script Execution

The following two scripts are created to simulate the CAP4C model execution and to extract the performance metrics of the models:
  • SQL script to insert data and call the Model Controller API.
  • Script to extract Jaeger Metrics and delete synthetic data from the database.

A docker image containing the scripts is provided in the images folder of the OCNWDAF installer. Deploy the image in the Kubernetes cluster where all the OCNWDAF services are running.

  • To deploy and run the image, create a values.yml file with the following content:
    global:
      projectName: nwdaf-cap4c-initial-setup-script
      imageName: nwdaf-cap4c/nwdaf-cap4c-initial-setup-script
      imageVersion: 23.1.0.0.0
    config:
      env:
        APPLICATION_NAME: nwdaf-cap4c-initial-setup-script
        APPLICATION_HOME: /app
    deploy:
      probes:
        readiness:
          enabled: false
        liveness:
          enabled: false
  • Run the following command:
    $ helm install <deploy_name> <helm_chart> -f <path_values_file>/values.yml -n <namespace_name>

    For example:

    $ helm install nwdaf-cap4c-initial-setup-script https://artifacthub-phx.oci.oraclecorp.com/artifactory/ocnwdaf-helm/nwdaf-cap4c-deployment-template-23.1.0.tgz -f <path_values_file>/values.yml -n <namespace_name>
    
  • A pod will be running inside the Kubernetes cluster. Identify the pod and note down its name for reference in subsequent steps. Run the following command:
    $ kubectl get pods -n <namespace_name>

    For example:

    NAME                                                        READY   STATUS    RESTARTS   AGE
    nwdaf-cap4c-initial-setup-script-deploy-64b8fbcd9-2vqf9     1/1     Running   0          55s
  • Run the following command to access the container:
    $ kubectl exec -it -n <namespace_name> <pod_name> -- bash
  • Once inside the container, navigate to the path /app/performance. The scripts to be run are located in this path. Follow the steps below to run the scripts:

    • SQL script to insert data and call the Model Controller API: This script inserts the synthetic data into the database. Once all the tables are populated, a process calls the Model Controller API; the models are generated and execution tasks are created to use the models. Run the script (see the sketch after the sample output):
      $ python3 performance/integration_script.py -h <host> -p <path> -t <type> -n <number_tests>

      Table 4-2 Parameters

      Parameter | Description | Default Value
      -h | Host name or address of the Model Controller | localhost
      -p | Path of the .csv files used to build the requests to the Model Controller | ../integration/data/
      -t | File extension of the data files read to generate the payloads | CSV
      -n | Number of test flows that the script runs; the default is the recommended number of tests per pod | 10

      Note:

      If the parameters are not set, the script uses the default values.

      Example:

      $ python3 integration_script.py -h http://cap4c-model-controller.ocnwdaf-ns:8080 -p /app/performance/integration/data/ -t CSV -n 10

      Sample output:

      Figure 4-1 Sample Output
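
      As a rough illustration of what this step does, the following sketch reads the .csv files and posts one request per row to the Model Controller, using the parameters from Table 4-2. The argument handling, the /models/train endpoint, and the payload shape are assumptions; this is not the shipped script.

        # Illustrative sketch of the insert-and-train step; endpoint path and payload shape are assumptions.
        import argparse, csv, glob, os
        import requests

        def build_payloads(data_path, ext):
            """Yield one payload per row of every <ext> file found under data_path."""
            for file_name in glob.glob(os.path.join(data_path, f"*.{ext.lower()}")):
                with open(file_name, newline="") as f:
                    for row in csv.DictReader(f):
                        yield row

        def main():
            parser = argparse.ArgumentParser(add_help=False)           # -h is reused for the host, so disable built-in help
            parser.add_argument("-h", default="localhost")              # Model Controller host
            parser.add_argument("-p", default="../integration/data/")   # path of the .csv files
            parser.add_argument("-t", default="CSV")                    # file extension
            parser.add_argument("-n", type=int, default=10)             # number of test flows
            args = parser.parse_args()

            for _ in range(args.n):
                for payload in build_payloads(args.p, args.t):
                    # Hypothetical training endpoint; the real script issues its own Model Controller calls.
                    requests.post(f"{args.h}/models/train", json=payload).raise_for_status()

        if __name__ == "__main__":
            main()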

    • Script to extract Jaeger metrics and delete synthetic data from the database: This script extracts the metrics from Jaeger and then deletes the synthetic data from the database. Run the script (see the sketch after the sample output):
      $ python3 performance/report_script.py -h <host> -p <port> -m <options> -u <prefix> -y <types>

      Table 4-3 Parameters

      Parameter | Description | Default Value
      -h | Jaeger host name or address | localhost
      -p | Jaeger UI port | 16686
      -m | Available options: ABNORMAL_BEHAVIOUR, NF_LOAD | all
      -u | URL prefix, if needed, such as '/blurr7/jaeger' | NA
      -y | Type of execution: TRAIN, EXECUTION | all

      Note:

      If the parameters are not set, the script uses the default values.

      Example:

      $ python report_script.py -t occne-tracer-jaeger-query.occne-infra -p 80 -u /blurr7/jaeger

      Sample output:

      Figure 4-2 Sample Output

      Figure 4-3 Sample Output
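
      As a rough illustration of the metric extraction, the sketch below pulls traces from the Jaeger query API and returns the span durations in milliseconds. The service and operation names and the use of the /api/traces endpoint are assumptions about how the metrics can be retrieved, not a description of the shipped script; the durations can then be aggregated into max/avg/min values as in the earlier sketch.

        # Illustrative sketch: fetch traces from the Jaeger query API; service and operation names are assumptions.
        import requests

        JAEGER_URL = "http://occne-tracer-jaeger-query.occne-infra:80/blurr7/jaeger"  # host, port, and prefix from the example

        def training_durations_ms(service="cap4c-model-executor", operation="TRAIN", limit=100):
            """Return the duration (ms) of every span whose operation name contains the given keyword."""
            resp = requests.get(f"{JAEGER_URL}/api/traces", params={"service": service, "limit": limit})
            resp.raise_for_status()
            durations = []
            for trace in resp.json().get("data", []):
                for span in trace.get("spans", []):
                    if operation in span.get("operationName", ""):
                        durations.append(span["duration"] / 1000.0)   # Jaeger reports span durations in microseconds
            return durations

        if __name__ == "__main__":
            samples = training_durations_ms()
            print(f"spans matched: {len(samples)}")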

ATS NWDAF PerfGo Performance Results

The following are the results of the performance tests run on ATS NWDAF using the PerfGo tool:

  • Create Subscription
  • Update Subscription
  • Delete Subscription
Create Subscription

Table 4-4 Create Subscription

Request Base | Total | Success | Failure | Rate | Latency | Average Rate | Average Latency
ue-mobility-create.create-ue-mob-statistics.[CallFlow] | 622 | 417 | 194 | 16.0 | 734.3 ms | 9.6 | 1030.6 ms
ue-mobility-create.create-ue-mob-predictions.[CallFlow] | 637 | 430 | 196 | 11.0 | 749.6 ms | 9.8 | 996.0 ms
abnormal-behaviour-create.create-ab-predictions.[CallFlow] | 781 | 39 | 731 | 7.0 | 1145.5 ms | 12.1 | 806.0 ms
abnormal-behaviour-create.create-ab-statistics.[CallFlow] | 748 | 34 | 703 | 5.0 | 1201.1 ms | 11.6 | 830.4 ms
slice-load-level-create.create-sll-statistics.[CallFlow] | 295 | 32 | 252 | 13.0 | 860.6 ms | 4.6 | 2256.8 ms
slice-load-level-create.create-sll-predictions.[CallFlow] | 304 | 27 | 266 | 14.0 | 842.8 ms | 4.7 | 2177.7 ms
nf-load-create.create-nf-load-predictions.[CallFlow] | 691 | 247 | 434 | 9.0 | 813.7 ms | 10.6 | 911.7 ms
nf-load-create.create-nf-load-statistics.[CallFlow] | 705 | 260 | 434 | 11.0 | 805.9 ms | 10.9 | 892.0 ms
Update Subscription

Table 4-5 Update Subscription

Request Base | Total | Success | Failure | Rate | Latency | Average Rate | Average Latency
ue-mobility-update.update-ue-mob.[CallFlow] | 80 | 80 | 0 | 3.0 | 953.4 ms | 1.4 | 870.7 ms
abnormal-behaviour-update.update-ab.[CallFlow] | 80 | 80 | 0 | 3.0 | 577.1 ms | 1.5 | 555.8 ms
slice-load-level-update.update-sll.[CallFlow] | 70 | 70 | 0 | 2.0 | 4119.4 ms | 1.1 | 1441.4 ms
nf-load-update.update-nf-load.[CallFlow] | 73 | 73 | 0 | 2.0 | 610.3 ms | 1.3 | 552.4 ms
Delete Subscription

Table 4-6 Delete Subscription

Request Base | Total | Success | Failure | Rate | Latency | Average Rate | Average Latency
ue-mobility-delete.delete-ue-mob.[CallFlow] | 80 | 80 | 0 | 6.0 | 799.4 ms | 1.4 | 611.2 ms
abnormal-behaviour-delete.delete-ab.[CallFlow] | 80 | 80 | 0 | 2.7 | 577.1 ms | 1.4 | 591.0 ms
slice-load-level-delete.delete-sll.[CallFlow] | 65 | 65 | 0 | 2.0 | 777.3 ms | 1.2 | 890.6 ms
nf-load-delete.delete-nf-load.[CallFlow] | 59 | 59 | 0 | 2.0 | 474.6 ms | 1.0 | 536.1 ms