3 Benchmark Testing
This section describes the environment used for benchmarking Oracle Communications Cloud Native Core, Network Slice Selection Function (NSSF).
Default values and recommendations for any required third-party software or resource are available from the respective vendors. Benchmarking should be performed with the settings described in this section; operators may choose different values.
Benchmark testing is performance testing with fine-tuning applied to improve NSSF performance. It is performed in the CNE environment.
3.1 Test Scenario-1: NSSF Performance with 10K TPS
3.1.1 Overview
To qualify a test run, consider the following elements:
- CPU and Memory utilization
- Ingress and Egress traffic rate
- Success rate
- Message request and response processing time
- Infrastructure resource requirements and utilization
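The qualification criteria above can be expressed as a simple pass/fail check. This is an illustrative sketch only: the metric names and threshold values below are assumptions, not Oracle-documented limits.

```python
# Sketch: qualify a benchmark run against illustrative thresholds.
# The metric keys and threshold values are assumptions for illustration;
# they are not Oracle-documented limits.

def qualify_run(metrics: dict) -> list:
    """Return a list of failed checks (an empty list means the run qualifies)."""
    failures = []
    if metrics["cpu_utilization_pct"] > 80:        # assumed CPU ceiling
        failures.append("CPU utilization too high")
    if metrics["success_rate_pct"] < 99.99:        # assumed success-rate target
        failures.append("Success rate below target")
    if metrics["avg_latency_ms"] > 50:             # assumed latency budget
        failures.append("Average latency above budget")
    return failures

run = {"cpu_utilization_pct": 62, "success_rate_pct": 99.999, "avg_latency_ms": 9.8}
print(qualify_run(run))  # [] -- the run qualifies
```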
Note:
The performance and capacity of the NSSF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.
3.1.2 NSSF Features
Table 3-1 NSSF Features
NSSF Features | Status for benchmarking |
---|---|
Auto-Population of Configuration Based on NsAvailability Update | ENABLED |
Handover from EPS to 5G | ENABLED |
Subscription Modification Feature | ENABLED |
Optimized NSSAI Availability Data Encoding and TAI Range | ENABLED |
Support Indirect Communication | ENABLED |
Multiple PLMN Support | ENABLED |
Integration with ASM | ENABLED |
Supports Compression Using Accept-Encoding or Content-Encoding gzip | ENABLED |
OAuth Access Token Based Authorization with K-id | ENABLED |
Protection from Distributed Denial-of-Service (DDoS) Attack through Rate Limiting | ENABLED |
Overload control | ENABLED |
Note:
Apart from these features being enabled, all other configurations are set to their default values.
3.1.3 Software Test Constraints
Table 3-2 Software Test Constraints
Test Constraint | Details |
---|---|
NSSF Version | 24.1.0 |
Sidecar ENABLED/DISABLED | Enabled |
TLS ENABLED/DISABLED | Enabled |
3.1.4 NSSF Call-Mix
NsSelection
Table 3-3 NsSelection
Get Request Type | Traffic % | TPS |
---|---|---|
Initial Registration | 10% | 1000 |
UE Config update | 5% | 500 |
PDU establishment | 80% | 8000 |
EPS to 5G HO | 5% | 500 |
NsAvailability
NsAvailability traffic is purely transactional and runs at only 2 TPS.
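The per-request-type TPS values in Table 3-3 follow directly from the total NsSelection rate of 10,000 TPS and the call-mix percentages; the sketch below reproduces them.

```python
# Sketch: derive the per-request-type TPS targets in Table 3-3 from the
# total NsSelection rate (10,000 TPS) and the call-mix percentages.

TOTAL_TPS = 10_000
CALL_MIX = {
    "Initial Registration": 0.10,
    "UE Config update": 0.05,
    "PDU establishment": 0.80,
    "EPS to 5G HO": 0.05,
}

tps = {name: round(TOTAL_TPS * share) for name, share in CALL_MIX.items()}
for name, rate in tps.items():
    print(f"{name}: {rate} TPS")
```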
3.1.5 Test Observations
The test scenarios in this section are based on the combination of NSConfig and NSSubscription microservices of NSSF.
3.1.5.1 General Observations
The following table provides observation data from the performance test, which can be used as a benchmark when increasing the traffic rate.
Table 3-4 General Observations
Parameter | Values |
---|---|
Test Duration | 48 hours |
TPS Achieved | 10K |
3.1.5.2 Resource Utilization
The following tables describe resource utilization for the CNE common applications, the NSSF microservices, the ASM sidecars, and the cnDBTier services.
CNE Common Applications
Table 3-5 CNE Common Applications Resource Utilization
Container Name | Count | Total CPU (m) | Total Memory (Mi) |
---|---|---|---|
alertmanager | 2 | 6 | 98 |
bastion-controller | 1 | 17 | 62 |
config-reloader | 4 | 1 | 71 |
controller | 1 | 1 | 67 |
dashboards | 1 | 1 | 140 |
fluent-bit | 22 | 32 | 5747 |
grafana | 1 | 129 | 111 |
grafana-sc-dashboard | 1 | 1 | 60 |
grafana-sc-datasources | 1 | 1 | 62 |
kube-prometheus-stack | 1 | 3 | 121 |
kube-state-metrics | 1 | 4 | 67 |
metrics-server | 1 | 21 | 96 |
nginx | 1 | 0 | 5 |
node-exporter | 24 | 1104 | 1791 |
occne-tracer-jaeger-agent | 7 | 7 | 112 |
occne-tracer-jaeger-agent-sidecar | 1 | 1 | 23 |
occne-tracer-jaeger-collector | 1 | 2 | 81 |
occne-tracer-jaeger-query | 1 | 1 | 19 |
opensearch | 9 | 468 | 35990 |
prometheus | 2 | 1211 | 18665 |
promxy | 1 | 96 | 72 |
snmp-notifier | 1 | 1 | 22 |
speaker | 20 | 188 | 1477 |
NSSF Services
Table 3-6 NSSF Services Resource Utilization
Service | Replica | Total CPU (m) | Total Memory (Mi) |
---|---|---|---|
<helm-release-name>-alternate-route | 1 | 2 | 369 |
<helm-release-name>-appinfo | 1 | 16 | 235 |
<helm-release-name>-config-server | 1 | 4 | 363 |
<helm-release-name>-egress-gateway | 2 | 7 | 1285 |
<helm-release-name>-ingress-gateway | 5 | 10426 | 11875 |
<helm-release-name>-nrf-client-nfdiscovery | 2 | 6 | 988 |
<helm-release-name>-nrf-client-nfmanagement | 2 | 11 | 984 |
<helm-release-name>-nsauditor | 1 | 1 | 413 |
<helm-release-name>-nsavailability | 2 | 14 | 1399 |
<helm-release-name>-nsconfig | 1 | 7 | 521 |
<helm-release-name>-nsselection | 6 | 11839 | 6607 |
<helm-release-name>-nssubscription | 1 | 2 | 464 |
<helm-release-name>-perf-info | 1 | 4 | 114 |
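The CPU and memory figures in Table 3-6 are totals across all replicas of each service. Average per-replica load, which is useful when sizing pod resource requests, can be derived by dividing by the replica count, as in this sketch for the two busiest services.

```python
# Sketch: average per-replica CPU from the totals in Table 3-6.
# Values are (replica count, total CPU in millicores) as observed in the test.
services = {
    "ingress-gateway": (5, 10426),
    "nsselection": (6, 11839),
}

per_replica = {}
for name, (replicas, total_m) in services.items():
    per_replica[name] = round(total_m / replicas)
    print(f"{name}: ~{per_replica[name]}m per replica")
```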
ASM Sidecar
Table 3-7 ASM Sidecar
Service | Total CPU (m) | Total Memory (Mi) |
---|---|---|
<helm-release-name>-alternate-route | 4 | 160 |
<helm-release-name>-appinfo | 3 | 162 |
<helm-release-name>-egress | 9 | 340 |
<helm-release-name>-ingress-gateway | 5703 | 891 |
<helm-release-name>-nsauditor | 5 | 161 |
<helm-release-name>-nsavailability | 16 | 330 |
<helm-release-name>-nsconfig | 8 | 166 |
<helm-release-name>-nsselection | 4476 | 1011 |
<helm-release-name>-nssubscription | 6 | 163 |
<helm-release-name>-nfdiscovery | 10 | 345 |
<helm-release-name>-nfmanagement | 11 | 320 |
<helm-release-name>-nsconfig-server | 6 | 163 |
<helm-release-name>-perf-info | 4 | 160 |
cnDBTier Services
The following table provides observed values of cnDBTier services.
Name | CPU (cores) | Memory (bytes) |
---|---|---|
ndbappmysqld | 13662m | 10069Mi |
ndbmysqld | 71m | 1805Mi |
ndbmgmd | 33m | 435Mi |
ndbmtd | 18926m | 63053Mi |
mysql | 20m | 1234Mi |
Total | 32712m | 76596Mi |
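The Total row in the cnDBTier table is the column-wise sum of the per-service rows; the sketch below reproduces it from the listed values.

```python
# Sketch: reproduce the Total row of the cnDBTier table from its per-service rows.
cpu_m = {"ndbappmysqld": 13662, "ndbmysqld": 71, "ndbmgmd": 33,
         "ndbmtd": 18926, "mysql": 20}          # millicores
mem_mi = {"ndbappmysqld": 10069, "ndbmysqld": 1805, "ndbmgmd": 435,
          "ndbmtd": 63053, "mysql": 1234}       # mebibytes

total_cpu = sum(cpu_m.values())
total_mem = sum(mem_mi.values())
print(f"Total: {total_cpu}m, {total_mem}Mi")  # Total: 32712m, 76596Mi
```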
3.1.5.3 Latency Observations
The following tables provide the observed latency values:
Table 3-8 Latency Parameters
Latency Parameter | Details (Avg) |
---|---|
Turnaround time at INGRESS Simulator NsSelection | 9.8ms |
Turnaround time at INGRESS Simulator NsAvailability | 44.5ms |
Table 3-9 NsSelection Ingress Gateway Latency
Traffic percentile | min | max | avg |
---|---|---|---|
50% | 7.37 ms | 11.1 ms | 8.56 ms |
90% | 17.1 ms | 36.4 ms | 20.7 ms |
95% | 22.7 ms | 53.6 ms | 29.3 ms |
99% | 42.7 ms | 107 ms | 56.1 ms |
Table 3-10 NsSelection Application Latency
Traffic percentile | min | max | avg |
---|---|---|---|
50% | 9.39 ms | 17.2 ms | 10.9 ms |
90% | 25.0 ms | 64.9 ms | 32.9 ms |
95% | 32.8 ms | 90.8 ms | 43.4 ms |
99% | 50.8 ms | 137 ms | 70.7 ms |
Table 3-11 NsAvailability Ingress Gateway Latency
Traffic percentile | min | max | avg |
---|---|---|---|
50% | 20.2 ms | 35.7 ms | 31.9 ms |
90% | 34.2 ms | 67.9 ms | 37.6 ms |
95% | 34.9 ms | 69.9 ms | 39.2 ms |
99% | 35.5 ms | 74.6 ms | 40.8 ms |
Table 3-12 NsAvailability Application Latency
Traffic percentile | min | max | avg |
---|---|---|---|
50% | 39.9 ms | 36.5 ms | 31.2 ms |
90% | 54.7 ms | 18.3 ms | 36.5 ms |
95% | 56.7 ms | 38.3 ms | 38.3 ms |
99% | 56.7 ms | 38.3 ms | 38.3 ms |
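Percentile figures such as those in Tables 3-9 through 3-12 are derived from raw per-request latency samples. The sketch below shows one common way to do this (the nearest-rank method); the sample data is illustrative, not from the benchmark run.

```python
# Sketch: compute latency percentiles from raw samples using the
# nearest-rank method. The sample list is illustrative only.
import math

def percentile(samples, p):
    """Smallest sample value such that at least p% of samples are <= it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

latencies_ms = [7.4, 8.1, 8.9, 9.2, 10.5, 12.3, 17.8, 21.0, 29.5, 56.0]
for p in (50, 90, 95, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```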