3 Benchmark Testing
This section describes the environment used for benchmarking Oracle Communications Cloud Native Core, Network Slice Selection Function (NSSF).
The default values or recommendations for any required software or resource are available from the third-party vendors. Benchmarking should be performed with the settings described in this section. Operators may choose different values.
Benchmark testing is performance testing performed in the CNE environment, with fine-tuning applied to improve NSSF performance.
3.1 Test Scenario-1: NSSF Performance with 80K TPS
3.1.1 Overview
The following elements are considered when qualifying the test run:
- CPU and Memory utilization
- Ingress and Egress traffic rate
- Success rate
- Message request and response processing time
- Infrastructure resource requirements and utilization
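The qualification criteria above can be checked programmatically. The following is a minimal sketch, assuming per-request latencies (in milliseconds) and HTTP status codes are collected by the test tool; the function name and structure are illustrative and not part of any NSSF tooling:

```python
from statistics import quantiles

def summarize_run(latencies_ms, status_codes):
    """Compute success rate and latency percentiles for a test run."""
    # Success rate: fraction of 2xx responses.
    ok = sum(1 for code in status_codes if 200 <= code < 300)
    success_pct = 100.0 * ok / len(status_codes)
    # quantiles(n=100) returns 99 cut points; index k-1 is the k-th percentile.
    cuts = quantiles(latencies_ms, n=100)
    return {
        "success_pct": success_pct,
        "p50": cuts[49],
        "p90": cuts[89],
        "p95": cuts[94],
        "p99": cuts[98],
    }
```

The percentile rows in the latency tables later in this section correspond to the p50/p90/p95/p99 values such a summary would produce.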
Note:
The performance and capacity of the NSSF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.
3.1.2 NSSF Features
Table 3-1 NSSF Features
| NSSF Features | Status for benchmarking |
|---|---|
| Enhanced Computation of AllowedNSSAI in NSSF | ENABLED |
| Auto-Population of Configuration Based on NsAvailability Update | ENABLED |
| Support Indirect Communication | ENABLED |
| Multiple PLMN Support | ENABLED |
| Integration with ASM | ENABLED |
| Supports Compression Using Accept-Encoding or Content-Encoding gzip | ENABLED |
| OAuth Access Token Based Authorization with KID | ENABLED |
| Overload control | ENABLED |
Note:
Apart from the features enabled above, all other configurations use default values.
3.1.3 Software Test Constraints
Table 3-2 Software Test Constraints
| Test Constraint | Details |
|---|---|
| NSSF Version | 25.2.200 |
| cnDBTier | 25.2.2xx |
| Sidecar enabled/disabled | Enabled |
| TLS enabled/disabled (NSSF & cnDBTier) | Disabled |
3.1.4 NSSF Call-Mix
Table 3-3 NSSF Call-Mix
| Parameter | Value |
|---|---|
| Number of SNSSAIs in Initial Registration (both Subscribed and Requested) | 4 |
| Number of SNSSAIs per TAI in Availability PUT requests | 10 |
| Number of TAIs per Availability PUT request | 500 |
| Number of AMF sets | 13 |
| Number of AMFs per AMF set | 4 |
| Total number of TAIs (Note: There is no overlap of TAIs among the AMF sets.) | 6500 |
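The total TAI count in the table is consistent with the other call-mix figures. A quick arithmetic check, assuming (per the no-overlap note) that each of the 13 AMF sets covers a disjoint block of 500 TAIs, i.e. one Availability PUT request's worth:

```python
amf_sets = 13
tais_per_amf_set = 500  # one Availability PUT request per AMF set (assumption)
total_tais = amf_sets * tais_per_amf_set
assert total_tais == 6500  # matches "Total number of TAIs" in Table 3-3
```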
Call Flow
Table 3-4 Call Flow
| Service | Scenario | TPS percentage |
|---|---|---|
| NsAvailability | NsAvailability Update | 1 TPS |
| NsSelection | Initial Registration | 99% |
| NsSelection | UE Configuration Update | 1% |
3.1.5 Test Observations
The NsSelection performance test evaluates the NSSF’s capability to process service selection requests under realistic traffic conditions. The traffic profile is predominantly composed of Initial Registration requests, which account for 99% of the load (79,200 TPS), reflecting peak UE attach scenarios observed in live networks. The remaining 1% of traffic (800 TPS) consists of UE Configuration Update requests, representing control-plane updates triggered after initial registration.
This distribution mirrors real-world NsSelection behavior, where the NSSF is primarily utilized during subscriber registration events, with minimal activity related to configuration updates.
In contrast, the NsAvailability test exercises pure provisioning traffic and is designed to validate the NSSF’s responsiveness to availability updates received from other network functions. This traffic is intentionally limited to a low rate of 2 TPS, as it does not reflect a high-volume operational scenario: provisioning messages occur only occasionally and usually carry larger payloads.
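The per-service rates quoted above follow directly from the 80K TPS target and the traffic split; a minimal check:

```python
total_tps = 80_000
initial_registration_tps = round(total_tps * 0.99)  # 99% of the load
ue_config_update_tps = round(total_tps * 0.01)      # remaining 1%
print(initial_registration_tps, ue_config_update_tps)  # 79200 800
```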
3.1.5.1 General Observations
The following table provides observation data from the performance test, which can be used as a baseline for benchmark testing at higher traffic rates.
Table 3-5 General Observations
| Parameter | Values |
|---|---|
| Test Duration | 5 days |
| TPS Achieved | 80K |
3.1.5.2 Resource Utilization
NSSF services and sidecar
The following tables provide the observed resource utilization of the NSSF services and their ASM sidecars:
Table 3-6 NSSF Services
| Service (Container) | Replica Count | Total CPU | Total Memory |
|---|---|---|---|
| <helm-release name>- alternate-route | 2 | 6m | 1396Mi |
| <helm-release name>- appinfo | 1 | 33m | 276Mi |
| <helm-release name>- egress-gateway | 2 | 8m | 1636Mi |
| <helm-release name>- ingress-gateway | 36 | 113821m | 133339Mi |
| <helm-release name>- nsauditor | 1 | 2m | 449Mi |
| <helm-release name>- nsavailability | 2 | 92m | 2280Mi |
| <helm-release name>- nsconfig | 1 | 19m | 652Mi |
| <helm-release name>- nsselection | 8 | 31863m | 11831Mi |
| <helm-release name>- nssubscription | 1 | 4m | 462Mi |
| <helm-release name>- nrf-client-nfmanagement | 2 | 9m | 1129Mi |
| <helm-release name>- config-server | 1 | 1m | 394Mi |
| <helm-release name>- perf-info | 1 | 35m | 142Mi |
Table 3-7 ASM Sidecar
| Service (Container) | Replica Count | Total CPU | Total Memory |
|---|---|---|---|
| <helm-release name>- alternate-route | 2 | 47m | 452Mi |
| <helm-release name>- appinfo | 1 | 14m | 217Mi |
| <helm-release name>- egress-gateway | 2 | 30m | 442Mi |
| <helm-release name>- ingress-gateway | 36 | 54274m | 9565Mi |
| <helm-release name>- nsauditor | 1 | 31m | 220Mi |
| <helm-release name>- nsavailability | 2 | 35m | 469Mi |
| <helm-release name>- nsconfig | 1 | 18m | 240Mi |
| <helm-release name>- nsselection | 8 | 26617m | 2561Mi |
| <helm-release name>- nssubscription | 1 | 20m | 219Mi |
| <helm-release name>- nrf-client-nfmanagement | 2 | 7m | 458Mi |
| <helm-release name>- config-server | 1 | 17m | 218Mi |
| <helm-release name>- perf-info | 1 | 22m | 224Mi |
cnDBTier services and sidecar
The following tables provide the observed resource utilization of the cnDBTier services and their sidecars:
Table 3-8 cnDBTier services
| Pod | Container | Replica Count | Total CPU | Total Memory |
|---|---|---|---|---|
| mysql-cluster-db-backup-manager-svc- | db-backup-manager-svc | 1 | 4m | 96Mi |
| mysql-cluster-db-monitor-svc | db-monitor-svc | 1 | 71m | 668Mi |
| mysql-cluster-one-three-replication-svc | one-three-replication-svc | 1 | 7m | 253Mi |
| mysql-cluster-one-two-replication-svc | db-infra-monitor-svc | 1 | 1m | 52Mi |
| mysql-cluster-one-two-replication-svc | one-two-replication-svc | 1 | 7m | 297Mi |
| ndbappmysqld | db-infra-monitor-svc | 4 | 4m | 221Mi |
| ndbappmysqld | init-sidecar | 4 | 8m | 4Mi |
| ndbappmysqld | mysqlndbcluster | 4 | 201m | 3000Mi |
| ndbmgmd | db-infra-monitor-svc | 2 | 2m | 102Mi |
| ndbmgmd | mysqlndbcluster | 2 | 11m | 70Mi |
| ndbmtd | db-backup-executor-svc | 4 | 17m | 214Mi |
| ndbmtd | db-infra-monitor-svc | 4 | 47m | 18677Mi |
| ndbmysqld | db-infra-monitor-svc | 4 | 10m | 262Mi |
| ndbmysqld | init-sidecar | 4 | 8m | 5Mi |
| ndbmysqld | mysqlndbcluster | 4 | 87m | 3366Mi |
Table 3-9 cnDBTier Sidecar
| Services | Container | Replica Count | Total CPU | Total Memory (Mi) |
|---|---|---|---|---|
| mysql-cluster-site1-site2-replication | istio-proxy | 1 | 3m | 211Mi |
| mysql-cluster-site1-site3-replication | istio-proxy | 1 | 3m | 235Mi |
| ndbmysqld | istio-proxy | 4 | 13m | 1009Mi |
3.1.5.3 Latency Observations
The following table provides the observed latency values:
Table 3-10 Latency Parameters
| Latency Parameter | Details (average) |
|---|---|
| NsSelection Success % | 100 |
| Ingress Gateway latency for NsSelection (99th percentile, average) | 5.46 ms |
| NsAvailability Success % | 100 |
| Ingress Gateway latency for NsAvailability (99th percentile, average) | 123 ms |
| Ingress average success rate | 100 |
| Result | Pass |
Table 3-11 Ingress Gateway Latency for NsSelection
| Percentile | min | max | average |
|---|---|---|---|
| 50% | 1.94 ms | 2.60 ms | 2.02 ms |
| 90% | 4.43 ms | 4.72 ms | 4.46 ms |
| 95% | 4.74 ms | 4.99 ms | 4.76 ms |
| 99% | 4.99 ms | 9.58 ms | 5.46 ms |
Table 3-12 Ingress Gateway Latency for NsAvailability
| Percentile | min | max | average |
|---|---|---|---|
| 50% | 54.7 ms | 64.2 ms | 59.1 ms |
| 90% | 82.8 ms | 96.1 ms | 90.3 ms |
| 95% | 91.4 ms | 122 ms | 95.9 ms |
| 99% | 98.3 ms | 191 ms | 123 ms |
Table 3-13 NsSelection Application Latency
| Percentile | min | max | average |
|---|---|---|---|
| 50% | 5.00 ms | 5.00 ms | 5.00 ms |
| 90% | 9.00 ms | 9.00 ms | 9.00 ms |
| 95% | 9.50 ms | 9.50 ms | 9.50 ms |
| 99% | 9.90 ms | 9.90 ms | 9.90 ms |
Table 3-14 NsAvailability Application Latency
| Percentile | max | average |
|---|---|---|
| 50% | 34.0 ms | 32.4 ms |
| 90% | 74.6 ms | 65.4 ms |
| 95% | 88.5 ms | 78.8 ms |
| 99% | 151 ms | 96.6 ms |