3 SEPP Benchmark Testing

This section provides information about the SEPP test cases run in different scenarios.

3.1 Test Scenario 1: SEPP 40K MPS, 36 Hrs Run with Features Enabled with 50ms Delay at Server End

This test scenario describes the performance and capacity of SEPP and provides the benchmarking results for a 40K MPS, 36-hour run with the following SEPP features enabled and a 50ms delay at the server end:
  • Topology Hiding
  • Security Counter Measure features:
    • Cat-0 SBI Message Schema Validation Feature
    • Cat-1 NRF Service API Query Parameters Validation
    • Cat-1 Service API Validation Feature
    • Cat-2 Network ID Validation Feature
    • Cat-3 Previous Location Check Feature
    • Cat-3 Time Check for Roaming Subscribers
  • Overload Control
  • 5G SBI Message Mediation Support
  • Steering of Roaming (SOR) Feature
  • Global Rate Limiting on Ingress Gateway of SEPP
  • Alternate Routing and Load Sharing based on the DNS SRV Record for Home Network Functions
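
The last feature relies on DNS SRV semantics: lower-priority records are preferred, and targets of equal priority share load in proportion to their weights. The sketch below illustrates that selection logic in generic RFC 2782 style; it is not SEPP's alternate-routing implementation, and the sample records and function names are ours.

```python
import random

# Illustrative SRV-based target selection (RFC 2782 semantics): pick from the
# lowest-priority group, choosing within it by weight. Generic sketch only;
# the sample records below are hypothetical.
records = [
    # (priority, weight, target) -- lower priority is preferred
    (10, 60, "sepp1.home.example"),
    (10, 40, "sepp2.home.example"),
    (20, 100, "sepp-backup.home.example"),
]

def pick_target(srv_records):
    best = min(p for p, _, _ in srv_records)
    group = [(w, t) for p, w, t in srv_records if p == best]
    total = sum(w for w, _ in group)
    r = random.uniform(0, total)
    for w, t in group:
        r -= w
        if r <= 0:
            return t
    return group[-1][1]

# Load sharing: ~60% of picks go to sepp1, ~40% to sepp2. The priority-20
# record is only used (alternate routing) once the priority-10 targets are gone.
print(pick_target(records))
```

With these records, the backup target is never selected while the priority-10 group is present, which is the "alternate routing" half of the feature; the weighted split within the group is the "load sharing" half.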

Note:

ASM is enabled in this test scenario.

3.1.1 Test Case and Setup Details

The following are the test case and setup details:

Traffic Model Details

Table 3-1 Transactions Per Second (TPS)

Total TPS | Site 1 | Site 2
40K MPS | 20K MPS | 20K MPS
Setup Details

Table 3-2 Setup Details

Setup Details | Values
Active User | NA for SEPP
Execution Time | 36 Hrs
Environment | vCNE
Cluster | Hardhead1
cnDBTier | 25.1.200
cSEPP | 25.1.200
pSEPP | 25.1.200
CNC Console | 25.1.200
Setup Configuration
  • Both SEPPs are deployed on Model-B.
  • cnDBTier is deployed on both sites.
List of SEPP Features enabled
  • Topology Hiding
  • Security Counter Measure features:
    • Cat-0 SBI Message Schema Validation Feature
    • Cat-1 NRF Service API Query Parameters Validation
    • Cat-1 Service API Validation Feature
    • Cat-2 Network ID Validation Feature
    • Cat-3 Previous Location Check Feature
    • Cat-3 Time Check for Roaming Subscribers
  • Overload Control
  • 5G SBI Message Mediation Support
  • Steering of Roaming (SOR) Feature
  • Global Rate Limiting on Ingress Gateway of SEPP
  • Alternate Routing and Load Sharing based on the DNS SRV Record for Home Network Functions

Resource Footprint

Table 3-3 Resource Footprint

Microservice/Container | Container Count | CPU per Container (Limit) | CPU per Container (Request) | Memory per Container (Limit) | Memory per Container (Request)
Site1-ocsepp-alternate-route/alternate-route | 2 | 2 | 2 | 4Gi | 4Gi
Site1-ocsepp-appinfo/appinfo | 2 | 1 | 1 | 2Gi | 1Gi
Site1-ocsepp-cn32c-svc/cn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-cn32f-svc/cn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site1-ocsepp-coherence-svc/coherence-svc | 1 | 1 | 1 | 2Gi | 2Gi
Site1-ocsepp-config-mgr-svc/config-mgr-svc | 1 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-n32-egress-gateway/n32-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-n32-ingress-gateway/n32-ingress-gateway | 7 | 6 | 6 | 5Gi | 5Gi
ocsepp-nf-mediation/nf-mediation | 2 | 8 | 8 | 8Gi | 8Gi
Site1-ocsepp-ocpm-config/config-server | 2 | 1 | 1 | 1Gi | 1Gi
Site1-ocsepp-performance/perf-info | 2 | 2 | 2 | 4Gi | 200Mi
Site1-ocsepp-plmn-egress-gateway/plmn-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-plmn-ingress-gateway/plmn-ingress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-pn32c-svc/pn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-pn32f-svc/pn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site1-ocsepp-sepp-nrf-client-nfdiscovery/nrf-client-nfdiscovery | 2 | 1 | 1 | 2Gi | 2Gi
Site1-ocsepp-sepp-nrf-client-nfmanagement/nrf-client-nfmanagement | 1 | 1 | 1 | 1Gi | 1Gi
Site2-ocsepp-alternate-route/alternate-route | 1 | 2 | 2 | 4Gi | 4Gi
Site2-ocsepp-appinfo/appinfo | 2 | 1 | 1 | 2Gi | 1Gi
Site2-ocsepp-cn32c-svc/cn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-cn32f-svc/cn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site2-ocsepp-coherence-svc/coherence-svc | 1 | 1 | 1 | 2Gi | 2Gi
Site2-ocsepp-config-mgr-svc/config-mgr-svc | 1 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-n32-egress-gateway/n32-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-n32-ingress-gateway/n32-ingress-gateway | 7 | 6 | 6 | 5Gi | 5Gi
Site2-ocsepp-ocpm-config/config-server | 2 | 1 | 1 | 1Gi | 1Gi
Site2-ocsepp-performance/perf-info | 2 | 2 | 2 | 4Gi | 200Mi
Site2-ocsepp-plmn-egress-gateway/plmn-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-plmn-ingress-gateway/plmn-ingress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-pn32c-svc/pn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-pn32f-svc/pn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site2-ocsepp-sepp-nrf-client-nfdiscovery/nrf-client-nfdiscovery | 2 | 1 | 1 | 2Gi | 2Gi
Site2-ocsepp-sepp-nrf-client-nfmanagement/nrf-client-nfmanagement | 1 | 1 | 1 | 1Gi | 1Gi
Site1-mysql-cluster-db-backup-manager-svc/db-backup-manager-svc | 1 | 100m | 100m | 128Mi | 128Mi
Site1-mysql-cluster-db-monitor-svc/db-monitor-svc | 1 | 4 | 4 | 4Gi | 4Gi
Site1-ndbappmysqld/mysqlndbcluster | 2 | 8 | 8 | 10Gi | 10Gi
Site1-ndbappmysqld/init-sidecar | 2 | 100m | 100m | 256Mi | 256Mi
Site1-ndbmgmd/mysqlndbcluster | 2 | 4 | 4 | 10Gi | 8Gi
Site1-ndbmgmd/db-infra-monitor-svc | 2 | 100m | 100m | 256Mi | 256Mi
Site1-ndbmtd/mysqlndbcluster | 4 | 10 | 10 | 18Gi | 16Gi
Site1-ndbmtd/db-backup-executor-svc | 4 | 100m | 100m | 256Mi | 256Mi
Site1-ndbmtd/db-infra-monitor-svc | 4 | 100m | 100m | 256Mi | 256Mi
Site2-mysql-cluster-db-backup-manager-svc/db-backup-manager-svc | 1 | 100m | 100m | 128Mi | 128Mi
Site2-mysql-cluster-db-monitor-svc/db-monitor-svc | 1 | 4 | 4 | 4Gi | 4Gi
Site2-ndbappmysqld/mysqlndbcluster | 2 | 8 | 8 | 10Gi | 10Gi
Site2-ndbappmysqld/init-sidecar | 2 | 100m | 100m | 256Mi | 256Mi
Site2-ndbmgmd/mysqlndbcluster | 2 | 4 | 4 | 10Gi | 8Gi
Site2-ndbmgmd/db-infra-monitor-svc | 2 | 100m | 100m | 256Mi | 256Mi
Site2-ndbmtd/mysqlndbcluster | 4 | 10 | 10 | 18Gi | 16Gi
Site2-ndbmtd/db-backup-executor-svc | 4 | 100m | 100m | 256Mi | 256Mi
Site2-ndbmtd/db-infra-monitor-svc | 4 | 100m | 100m | 256Mi | 256Mi
Site1-hello-world2/hello-world2 | 2 | NA | NA | NA | NA
Site1-occne-alertmanager-snmp-notifier/alertmanager-snmp-notifier | 1 | NA | NA | NA | NA
Site1-occne-bastion-controller/bastion-controller | 1 | 200m | 10m | 256Mi | 128Mi
Site1-occne-kube-prom-stack-grafana/grafana-sc-dashboard | 1 | NA | NA | NA | NA
Site1-occne-kube-prom-stack-grafana/grafana-sc-datasources | 1 | NA | NA | NA | NA
Site1-occne-kube-prom-stack-grafana/grafana | 1 | 500m | 500m | 500Mi | 500Mi
Site1-occne-kube-prom-stack-kube-operator/kube-prometheus-stack | 1 | 200m | 100m | 200Mi | 100Mi
Site1-occne-kube-prom-stack-kube-state-metrics/kube-state-metrics | 1 | 20m | 20m | 500Mi | 32Mi
Site1-occne-metrics-server/metrics-server | 1 | 100m | 100m | 200Mi | 200Mi
Site1-occne-promxy/promxy | 1 | 100m | 100m | 512Mi | 512Mi
Site1-occne-promxy-apigw-nginx/nginx | 2 | 2 | 1 | 1536Mi | 1Gi
Site1-occne-tracer-jaeger-collector/occne-tracer-jaeger-collector | 1 | 1250m | 500m | 1Gi | 512Mi
Site1-occne-tracer-jaeger-query/occne-tracer-jaeger-query | 1 | 500m | 256m | 512Mi | 128Mi
Site1-occne-tracer-jaeger-query/occne-tracer-jaeger-agent-sidecar | 1 | NA | NA | NA | NA
Site1-alertmanager-occne-kube-prom-stack-kube-alertmanager/alertmanager | 2 | 20m | 20m | 64Mi | 64Mi
Site1-alertmanager-occne-kube-prom-stack-kube-alertmanager/config-reloader | 2 | 200m | 200m | 50Mi | 50Mi
Site1-occne-opensearch-cluster-client/opensearch | 3 | 1 | 1 | 2Gi | 2Gi
Site1-occne-opensearch-cluster-data/opensearch | 5 | 2 | 1 | 32Gi | 16Gi
Site1-occne-opensearch-cluster-master/opensearch | 3 | 2 | 1 | 32Gi | 16Gi
Site1-prometheus-occne-kube-prom-stack-kube-prometheus/prometheus | 2 | 12 | 12 | 55Gi | 55Gi
Site1-prometheus-occne-kube-prom-stack-kube-prometheus/config-reloader | 2 | 200m | 200m | 50Mi | 50Mi

Note:

  • Mi: mebibytes
  • Gi: gibibytes
  • m: millicores
  • CPU values without a unit are in cores.
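
As a rough capacity check, a per-site footprint can be derived from the table by multiplying each container count by its per-container request. The sketch below is illustrative only, using three sample Site 1 rows; the helper functions and row tuples are ours, not part of the benchmark tooling.

```python
# Illustrative capacity check: total requested CPU/memory for a set of
# containers = container count x per-container request (Table 3-3 style).

def cpu_cores(value: str) -> float:
    """Parse a CPU quantity: '100m' -> 0.1 cores, '5' -> 5.0 cores."""
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

def mem_gib(value: str) -> float:
    """Parse a memory quantity: '8Gi' -> 8.0 GiB, '256Mi' -> 0.25 GiB."""
    if value.endswith("Gi"):
        return float(value[:-2])
    if value.endswith("Mi"):
        return float(value[:-2]) / 1024
    raise ValueError(f"unsupported unit: {value}")

# (container, count, CPU request, memory request) -- sample Site 1 rows
rows = [
    ("cn32f-svc",           7, "5",    "8Gi"),
    ("n32-ingress-gateway", 7, "6",    "5Gi"),
    ("init-sidecar",        2, "100m", "256Mi"),
]

total_cpu = sum(count * cpu_cores(cpu) for _, count, cpu, _ in rows)
total_mem = sum(count * mem_gib(mem) for _, count, _, mem in rows)
print(f"{total_cpu} cores, {total_mem} GiB")  # 77.2 cores, 91.5 GiB
```

Extending the row list to the full table gives the total request footprint a cluster must be able to schedule for each site.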

3.1.2 Traffic and Latency

The following tables describe the traffic and latency details:

Traffic Details

Table 3-4 Traffic Details

Request Rate (TPS) | Site-1 | Site-2
PLMN-IGW-requests-rate | 10290.83 | 10305.44
CN32F-requests-rate | 10148.51 | 10237.15
N32-IGW-requests-rate | 10237.13 | 10148.64
N32-EGW-requests-rate | 10148.65 | 10237.16
PN32F-requests-rate | 10236.35 | 10147.94
PLMN-EGW-requests-rate | 10537.70 | 10445.02
Total TPS | 10266.5 | 10253.5
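
The Total TPS row appears to be the arithmetic mean of the six per-leg request rates rather than their sum (each transaction traverses every leg once, so the mean approximates the end-to-end transaction rate). This can be verified from the tabulated values:

```python
# Sanity check on Table 3-4: "Total TPS" matches the arithmetic mean of the
# six per-leg request rates, not their sum.
site1 = [10290.83, 10148.51, 10237.13, 10148.65, 10236.35, 10537.70]
site2 = [10305.44, 10237.15, 10148.64, 10237.16, 10147.94, 10445.02]

mean1 = sum(site1) / len(site1)
mean2 = sum(site2) / len(site2)
print(round(mean1, 1), round(mean2, 1))  # 10266.5 10253.6 (table: 10266.5, 10253.5)
```

The Site-2 value differs from the table only in the last decimal place, consistent with rounding or truncation when the table was produced.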

Latency Details

Table 3-5 Latency Details

NF Service | Site-1 Latency (ms) | Site-2 Latency (ms)
IGW(s) | 0.17 | 0.24
EGW(s) | 0.25 | 0.18
cn32f(s) | 0.13 | 0.10
pn32f(s) | 0.03 | 0.03

3.1.3 Results

  • csepp call success rate: 99.522%

    Note:

    A 0.48% drop was intentionally introduced by enabling Global Rate Limiting on the PLMN Ingress Gateway. The actual success rate is 99.998%.
  • psepp call success rate: 99.677%

    Note:

    A 0.33% drop was intentionally introduced by enabling Global Rate Limiting on the PLMN Ingress Gateway. The actual success rate is 99.999%.
  • csepp_Avg_Latency_rate: 107.03ms
  • psepp_Avg_Latency_rate: 101.61ms
  • No pod restarts are observed.
  • Perfgo is deployed on the Hardhead1 cluster with 15 servers and 4 clients on each side.
  • The run is performed with a 50ms server delay.
  • Features enabled:
    • Topology Hiding
    • Security Counter Measure features:
      • Cat-0 SBI Message Schema Validation Feature
      • Cat-1 NRF Service API Query Parameters Validation
      • Cat-1 Service API Validation Feature
      • Cat-2 Network ID Validation Feature
      • Cat-3 Previous Location Check Feature
      • Cat-3 Time Check for Roaming Subscribers
    • Overload Control
    • 5G SBI Message Mediation Support
    • Steering of Roaming (SOR) Feature
    • Global Rate Limiting on Ingress Gateway of SEPP
    • Alternate Routing and Load Sharing based on the DNS SRV Record for Home Network Functions
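
The intentional drop reported above comes from Global Rate Limiting on the PLMN Ingress Gateway. As background, ingress rate limiters are commonly built on a token bucket; the following is a generic sketch of that mechanism, not SEPP's actual implementation, and the rate and burst values are illustrative.

```python
import time

# Generic token-bucket limiter -- the classic mechanism behind ingress rate
# limiting. Requests arriving faster than the configured rate are rejected,
# which is how an intentional drop like the one above is produced.
class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate              # tokens (requests) refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one request if a token is available, else reject it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. one site's 20K MPS budget with a small burst allowance (values are ours)
bucket = TokenBucket(rate=20000, burst=100)
accepted = sum(bucket.allow() for _ in range(200))
print(accepted)  # the 100-token burst, plus whatever refills during the loop
```

Traffic that exceeds the sustained rate drains the bucket and is rejected, which is why a small, controlled fraction of calls fails even though the system itself is healthy.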

3.2 Test Scenario 2: SEPP 40K MPS, 24 Hrs Run with ASM Enabled and without any Feature Enabled with 50ms Delay at Server End

This test scenario describes the performance and capacity of SEPP and provides the benchmarking results for a 40K MPS, 24-hour run with ASM enabled, no features enabled, and a 50ms delay at the server end.

Note:

ASM is enabled in this test scenario.

3.2.1 Test Case and Setup Details

The following are the test case and setup details:

Traffic Model Details

Table 3-6 Transactions Per Second (TPS)

Total TPS | Site 1 | Site 2
40K MPS | 20K MPS | 20K MPS
Setup Details

Table 3-7 Setup Details

Setup Details | Values
Active User | NA for SEPP
Execution Time | 24 Hrs
Environment | vCNE
Cluster | Hardhead1
cnDBTier | 25.1.200
cSEPP | 25.1.200
pSEPP | 25.1.200
CNC Console | 25.1.200
Setup Configuration
  • Both SEPPs are deployed on Hardhead1.
  • cnDBTier is deployed on both sites.
List of SEPP Features enabled | None. This test case is performed on a vanilla SEPP deployment.

Resource Footprint

Table 3-8 Resource Footprint

Microservice/Container | Replicas | CPU (Limit) | CPU (Request) | Memory (Limit) | Memory (Request)
Site2-ocsepp-alternate-route/alternate-route | 2 | 2 | 2 | 4Gi | 4Gi
Site2-ocsepp-appinfo/appinfo | 2 | 1 | 1 | 2Gi | 1Gi
Site2-ocsepp-cn32c-svc/cn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-cn32f-svc/cn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site2-ocsepp-coherence-svc/coherence-svc | 1 | 1 | 1 | 2Gi | 2Gi
Site2-ocsepp-config-mgr-svc/config-mgr-svc | 1 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-n32-egress-gateway/n32-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-n32-ingress-gateway/n32-ingress-gateway | 7 | 6 | 6 | 5Gi | 5Gi
Site2-ocsepp-nf-mediation/nf-mediation | 2 | 8 | 8 | 8Gi | 8Gi
Site2-ocsepp-ocpm-config/config-server | 2 | 1 | 1 | 1Gi | 1Gi
Site2-ocsepp-performance/perf-info | 2 | 2 | 2 | 4Gi | 200Mi
Site2-ocsepp-plmn-egress-gateway/plmn-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-plmn-ingress-gateway/plmn-ingress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site2-ocsepp-pn32c-svc/pn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site2-ocsepp-pn32f-svc/pn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site2-ocsepp-sepp-nrf-client-nfdiscovery/nrf-client-nfdiscovery | 2 | 1 | 1 | 2Gi | 2Gi
Site2-ocsepp-sepp-nrf-client-nfmanagement/nrf-client-nfmanagement | 1 | 1 | 1 | 1Gi | 1Gi
Site1-ocsepp-alternate-route/alternate-route | 2 | 2 | 2 | 4Gi | 4Gi
Site1-ocsepp-appinfo/appinfo | 2 | 1 | 1 | 2Gi | 1Gi
Site1-ocsepp-cn32c-svc/cn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-cn32f-svc/cn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site1-ocsepp-coherence-svc/coherence-svc | 1 | 1 | 1 | 2Gi | 2Gi
Site1-ocsepp-config-mgr-svc/config-mgr-svc | 1 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-n32-egress-gateway/n32-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-n32-ingress-gateway/n32-ingress-gateway | 7 | 6 | 6 | 5Gi | 5Gi
Site1-ocsepp-nf-mediation/nf-mediation | 2 | 8 | 8 | 8Gi | 8Gi
Site1-ocsepp-ocpm-config/config-server | 2 | 1 | 1 | 1Gi | 1Gi
Site1-ocsepp-performance/perf-info | 2 | 2 | 2 | 4Gi | 200Mi
Site1-ocsepp-plmn-egress-gateway/plmn-egress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-plmn-ingress-gateway/plmn-ingress-gateway | 7 | 5 | 5 | 5Gi | 5Gi
Site1-ocsepp-pn32c-svc/pn32c-svc | 2 | 2 | 2 | 2Gi | 2Gi
Site1-ocsepp-pn32f-svc/pn32f-svc | 7 | 5 | 5 | 8Gi | 8Gi
Site1-ocsepp-sepp-nrf-client-nfdiscovery/nrf-client-nfdiscovery | 2 | 1 | 1 | 2Gi | 2Gi
Site1-ocsepp-sepp-nrf-client-nfmanagement/nrf-client-nfmanagement | 1 | 1 | 1 | 1Gi | 1Gi

Note:

  • Mi: mebibytes
  • Gi: gibibytes
  • m: millicores
  • CPU values without a unit are in cores.

3.2.2 Traffic and Latency

The following tables describe the traffic and latency details:

Traffic Details

Table 3-9 Traffic Details

Request Rate (TPS) | Site-1 | Site-2
PLMN-IGW-requests-rate | 10289.46 | 10289.58
CN32F-requests-rate | 10289.45 | 10289.59
N32-IGW-requests-rate | 10289.61 | 10289.47
N32-EGW-requests-rate | 10289.45 | 10289.57
PN32F-requests-rate | 10289.61 | 10289.42
PLMN-EGW-requests-rate | 10289.60 | 10289.46
Total TPS | 10289.5 | 10289.5

Latency Details

Table 3-10 Latency Details

NF Service | Site-1 Latency (ms) | Site-2 Latency (ms)
IGW(s) | 0.06 | 0.06
EGW(s) | 0.07 | 0.07
cn32f(s) | 0.04 | 0.04
pn32f(s) | 0.03 | 0.03

3.2.3 Results

  • csepp call success rate: 100%
  • psepp call success rate: 100%
  • csepp_Avg_Latency_rate: 86.165 ms
  • psepp_Avg_Latency_rate: 85.670 ms
  • No pod restarts are observed.
  • Perfgo is deployed on the Hardhead1 cluster with 7 servers and 8 clients on each side.
  • The run is performed with a 50ms server delay.
  • Features enabled: NA (vanilla run)
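
Comparing this vanilla run with Test Scenario 1 gives a rough estimate of the latency cost of the enabled feature set, since both runs use the same 40K MPS load and 50ms server delay. The figures below are taken directly from the two Results sections:

```python
# Rough per-message latency cost of the Scenario 1 feature set, derived from
# the reported average latencies of the two runs.
with_features = {"csepp": 107.03, "psepp": 101.61}   # ms, Test Scenario 1
vanilla = {"csepp": 86.165, "psepp": 85.670}         # ms, Test Scenario 2

overhead_ms = {nf: round(with_features[nf] - vanilla[nf], 3) for nf in vanilla}
print(overhead_ms)  # {'csepp': 20.865, 'psepp': 15.94}
```

Note that the Scenario 1 averages also include queuing introduced by the intentional rate-limit drops, so these deltas are an upper bound on pure feature-processing overhead rather than an exact measurement.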