3 UDR Benchmark Testing
This chapter describes UDR, SLF, and EIR test scenarios.
3.1 Test Scenario 1: SLF Call Deployment Model
This section provides information about SLF call deployment model test scenarios.
3.1.1 SLF Call Model: 24.2K TPS for Performance-Medium Resource Profile for SLF Lookup
This test scenario describes the performance and capacity of the SLF functionality offered by UDR and provides benchmarking results for various deployment sizes.
The following features are enabled:
- LCI and OCI handling
- User Agent in Egress Traffic
- Subscriber Activity Logging, enabled and configured with 100 keys
- Overload and Rate Limiting
UDR (SLF) is benchmarked for compute and storage resources under the following conditions:
- Signaling (SLF Lookup): 24.2K TPS
- Provisioning: 1260 TPS
- Total Subscribers: 37M
- Profile Size: 450 bytes
- Average HTTP Provisioning Request Packet Size: 350 bytes
- Average HTTP Provisioning Response Packet Size: 250 bytes
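A back-of-the-envelope sizing sketch based on these conditions; only the subscriber count and profile size come from this document, while the replica count and storage overhead factor are illustrative assumptions:

```python
# Rough storage footprint implied by the SLF benchmark conditions above.
subscribers = 37_000_000          # Total Subscribers: 37M
profile_bytes = 450               # Profile Size: 450 bytes
ndb_replicas = 2                  # assumption: typical cnDBTier NoOfReplicas
overhead = 1.3                    # assumption: index and NDB storage overhead

raw_gb = subscribers * profile_bytes / 1024**3
stored_gb = raw_gb * ndb_replicas * overhead

print(f"raw profile data : {raw_gb:.1f} GB")     # ~15.5 GB
print(f"estimated in-DB  : {stored_gb:.1f} GB")  # ~40 GB across data nodes
```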
The following table describes the traffic model details:
Table 3-1 Traffic Model Details
Request Type | Details | Provisioning % | TPS |
---|---|---|---|
Lookup (24.2K) | SLF Lookup GET Requests | - | 24.2K |
Provisioning (1.26K using ProvGw, one site) | CREATE | 10% | 126 |
 | DELETE | 10% | 126 |
 | UPDATE | 40% | 504 |
 | GET | 40% | 504 |
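The per-operation provisioning rates in Table 3-1 follow directly from the 1.26K total and the percentage mix, as this sketch shows:

```python
# Derive per-operation provisioning TPS from the Table 3-1 mix.
total_prov_tps = 1260  # 1.26K provisioning TPS through ProvGw (one site)
mix = {"CREATE": 0.10, "DELETE": 0.10, "UPDATE": 0.40, "GET": 0.40}

for op, share in mix.items():
    print(f"{op:<6} {share:>4.0%} -> {total_prov_tps * share:>5.0f} TPS")
# CREATE 10% -> 126, DELETE 10% -> 126, UPDATE 40% -> 504, GET 40% -> 504
```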
Note:
- To run this model, one UDR site is brought down, and 24.2K lookup traffic and 1.26K provisioning traffic are run from one site.
- The values provided are for a single-site deployment.
The following table describes the testcase parameters and their values:
Table 3-2 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
UDR Version Tag | 24.1.0 |
Target TPS | 24.2K Lookup + 1.26K Provisioning |
Traffic Profile | SLF 24.2K Profile |
Notification Rate | OFF |
UDR Response Timeout | 5s |
Client Timeout | 30s |
Signaling Requests Latency Recorded on Client | 10ms |
Provisioning Requests Latency Recorded on Client | 30ms |
The following table describes the consolidated resource requirement:
Table 3-3 Consolidated Resource Requirement
Resource | CPU | Memory (GB) |
---|---|---|
cnDBTier | 69 | 252 |
SLF | 184 | 116 |
ProvGw | 32 | 30 |
Buffer | 50 | 50 |
Total | 335 | 448 |
Note:
All values are inclusive of ASM sidecar.
Note:
- The same resources and usage are applicable for cnDBTier Site 2.
- For cnDBTier, you must use ocudr_slf_37msub_dbtier and ocudr_udr_10msub_dbtier custom value files for SLF and UDR respectively. For more information, see Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
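The consolidated totals in Table 3-3 are plain column sums, which can be checked as follows:

```python
# Verify the Table 3-3 column sums (CPU count, memory in GB), ASM sidecars included.
rows = {"cnDBTier": (69, 252), "SLF": (184, 116), "ProvGw": (32, 30), "Buffer": (50, 50)}

total_cpu = sum(cpu for cpu, _ in rows.values())
total_mem = sum(mem for _, mem in rows.values())
assert (total_cpu, total_mem) == (335, 448)
print(total_cpu, total_mem)  # 335 448
```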
Table 3-4 cnDBTier Resources and Usage
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
Management node (ndbmgmd) | mysqlndbcluster | 2 | 2 CPUs | 9 GB | 6 CPUs, 26 GB | Minimal resources are used. | |
 | istio-proxy | | 1 CPU | 4 GB | | | |
Data node (ndbmtd) | mysqlndbcluster | 4 | 4 CPUs | 33 GB | 28 CPUs, 156 GB | 3 CPU/pod | 20 GB/pod |
 | istio-proxy | | 2 CPUs | 4 GB | | | |
 | db-backup-executor-svc | | 1 CPU | 2 GB | | | |
APP SQL node (ndbappmysqld) | mysqlndbcluster | 5 | 4 CPUs | 2 GB | 14 CPUs, 12 GB | 4 CPU/pod | 1 GB/pod |
 | istio-proxy | | 3 CPUs | 4 GB | | | |
SQL node (ndbmysqld, used for replication) | mysqlndbcluster | 2 | 4 CPUs | 16 GB | 13 CPUs, 41 GB | Minimal resources are used. | |
 | istio-proxy | | 2 CPUs | 4 GB | | | |
 | init-sidecar | | 100m CPU | 256 MB | | | |
DB Monitor Service (db-monitor-svc) | db-monitor-svc | 1 | 200m CPU | 500 MB | 3 CPUs, 2 GB | Minimal resources are used. | |
 | istio-proxy | | 1 CPU | 1 GB | | | |
DB Backup Manager Service (backup-manager-svc) | backup-manager-svc | 1 | 100m CPU | 128 MB | 2 CPUs, 2 GB | Minimal resources are used. | |
 | istio-proxy | | 1 CPU | 1 GB | | | |
Replication Service (db-replication-svc) | db-replication-svc | 1 | 2 CPUs | 12 GB | 3 CPUs, 13 GB | Minimal resources are used. | |
 | istio-proxy | | 200m CPU | 500 MB | | | |
Table 3-5 SLF Resources and Usage
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage | CPU Utilization (HPA) |
---|---|---|---|---|---|---|---|---|
Ingress-gateway-sig | ingressgateway-sig | 8 | 6 CPUs | 4 GB | 80 CPUs, 40 GB | 3 CPU/pod | 2 GB/pod | 49% |
 | istio-proxy | | 4 CPUs | 1 GB | | 2.1 CPU/pod | 350 MB/pod | |
Ingress-gateway-prov | ingressgateway-prov | 2 | 4 CPUs | 4 GB | 12 CPUs, 10 GB | 0.9 CPU/pod | 1.4 GB/pod | 24% |
 | istio-proxy | | 2 CPUs | 1 GB | | 0.65 CPU/pod | 300 MB/pod | |
Nudr-dr-service | nudr-drservice | 6 | 6 CPUs | 4 GB | 54 CPUs, 30 GB | 3.1 CPU/pod | 1.7 GB/pod | 55% |
 | istio-proxy | | 3 CPUs | 1 GB | | 2 CPU/pod | 325 MB/pod | |
Nudr-dr-provservice | nudr-dr-provservice | 2 | 4 CPUs | 4 GB | 12 CPUs, 10 GB | 0.9 CPU/pod | 1.5 GB/pod | 22% |
 | istio-proxy | | 2 CPUs | 1 GB | | 0.5 CPU/pod | 300 MB/pod | |
Nudr-nrf-client-nfmanagement | nrf-client-nfmanagement | 2 | 1 CPU | 1 GB | 4 CPUs, 4 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
Nudr-egress-gateway | egressgateway | 2 | 1 CPU | 1 GB | 4 CPUs, 4 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
Nudr-config | nudr-config | 1 | 2 CPUs | 2 GB | 3 CPUs, 3 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
Nudr-config-server | nudr-config-server | 1 | 2 CPUs | 2 GB | 3 CPUs, 3 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
alternate-route | alternate-route | 2 | 1 CPU | 1 GB | 4 CPUs, 4 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
app-info | app-info | 2 | 1 CPU | 1 GB | 4 CPUs, 4 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
perf-info | perf-info | 2 | 1 CPU | 1 GB | 4 CPUs, 4 GB | Minimal resources are used. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
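The CPU Utilization (HPA) column tracks the application container's CPU usage relative to its per-pod allocation. A sketch of that relationship, assuming HPA is driven by the container CPU request (an assumption, but it matches the reported figures to within a few percent):

```python
# Approximate the HPA CPU utilization column from usage / allocation.
services = {
    # name: (CPU used per pod, CPU allocated per pod, reported HPA %)
    "ingressgateway-sig": (3.0, 6, 49),
    "nudr-drservice": (3.1, 6, 55),
    "ingressgateway-prov": (0.9, 4, 24),
}
for name, (used, allocated, reported) in services.items():
    print(f"{name:<22} computed {used / allocated:>4.0%}  reported {reported}%")
```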
Note:
The same resources and usage apply to Site 2.
The following table describes the Provisioning Gateway resources and their utilization (provisioning latency: 30 ms):
Table 3-6 Provisioning Gateway Resources and their Utilization
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage | CPU Utilization (HPA) |
---|---|---|---|---|---|---|---|---|
provgw-ingress-gateway | ingressgateway | 2 | 2 CPUs | 2 GB | 6 CPUs, 6 GB | 0.7 CPU/pod | 1.6 GB/pod | 45% |
 | istio-proxy | | 1 CPU | 1 GB | | 0.5 CPU/pod | 300 MB/pod | |
provgw-egress-gateway | egressgateway | 2 | 3 CPUs | 2 GB | 6 CPUs, 6 GB | 0.8 CPU/pod | 1 GB/pod | 51% |
 | istio-proxy | | 1 CPU | 1 GB | | 0.6 CPU/pod | 300 MB/pod | |
provgw-service | provgw-service | 2 | 3 CPUs | 2 GB | 8 CPUs, 6 GB | 1 CPU/pod | 1.2 GB/pod | 40% |
 | istio-proxy | | 1 CPU | 1 GB | | 0.6 CPU/pod | 300 MB/pod | |
provgw-config | provgw-config | 2 | 2 CPUs | 2 GB | 6 CPUs, 6 GB | Minimal resources are used; utilization data is not captured. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
provgw-config-server | provgw-config-server | 2 | 2 CPUs | 2 GB | 6 CPUs, 6 GB | Minimal resources are used; utilization data is not captured. | | |
 | istio-proxy | | 1 CPU | 1 GB | | | | |
Table 3-7 Result and Observation
Parameter | Values |
---|---|
Test Duration | 17hr |
TPS Achieved | 24.2k SLF Lookup + 1.26k Provisioning |
Success Rate | 100% |
Average UDR processing time (Request and Response) | 40ms |
3.2 Test Scenario 2: EIR Deployment Model
Performance Requirement: 300K subscriber database size with 10K EIR lookup TPS.
This test scenario describes the performance and capacity of the EIR functionality offered by UDR and provides benchmarking results for various deployment sizes.
The following features are enabled:
- TLS
- OAuth2.0
- Default Response set to EQUIPMENT_UNKNOWN
- Header validations such as XFCC, server header, and user agent header
EIR is benchmarked for compute and storage resources under the following conditions:
- Signaling (EIR Look Up): 10K TPS
- Total Subscribers: 300K
- Profile Size: 130 bytes
- Average HTTP Provisioning Request Packet Size: NA
- Average HTTP Provisioning Response Packet Size: NA
Figure 3-1 EIR Deployment Model

The following table describes the benchmarking parameters and their values:
Table 3-8 Traffic Model Details
Request Type | Details | TPS |
---|---|---|
Lookup (10K) | EIR Equipment Identity Check (EIC) | 10K |
The following table describes the testcase parameters and their values:
Table 3-9 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
UDR Version Tag | 22.3.0 |
Target TPS | 10k Lookup |
Traffic Profile | 10k EIR EIC |
Notification Rate | OFF |
EIR Response Timeout | 5s |
Client Timeout | 10s |
Signaling Requests Latency Recorded on Client | NA |
Provisioning Requests Latency Recorded on Client | NA |
Table 3-10 Consolidated Resource Requirement
Resource | CPUs | Memory |
---|---|---|
EIR | 32 | 30 GB |
cnDBTier | 177 | 616 GB |
Total | 209 | 646 GB |
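Taking the two component rows at face value, the consolidated total is their column-wise sum:

```python
# Sum the Table 3-10 component rows (CPU count, memory in GB).
components = {"EIR": (32, 30), "cnDBTier": (177, 616)}

total_cpu = sum(cpu for cpu, _ in components.values())
total_mem_gb = sum(mem for _, mem in components.values())
print(total_cpu, total_mem_gb)  # 209 CPUs, 646 GB
```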
Table 3-11 cnDBTier Resources and their Utilization
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
Management node | mysqlndbcluster | 2 | 4 CPUs | 4 GB | 4 CPUs, 8 GB | 0.013 CPU/pod | 0.031 GB/pod |
Data node | mysqlndbcluster | 4 | 16 CPUs | 32 GB | 64 CPUs, 128 GB | 1 CPU/pod | 15.5 GB/pod |
APP SQL node | mysqlndbcluster | 3 | 16 CPUs | 32 GB | 48 CPUs, 96 GB | 4.1 CPU/pod | 0.8 GB/pod |
SQL node (used for replication) | mysqlndbcluster | 2 | 2 CPUs | 4 GB | 4 CPUs, 8 GB | 0.02 CPU/pod | 0.6 GB/pod |
DB Monitor Service | db-monitor-svc | 1 | 500m CPU | 500 MB | 1 CPU, 1 GB | Minimal resources are used; utilization is not captured. | |
DB Backup Manager Service | replication-svc | 1 | 250m CPU | 320 MB | 1 CPU, 1 GB | Minimal resources are used; utilization is not captured. | |
Table 3-12 EIR Resources and their Utilization (Lookup Latency: 16.9 ms), without Aspen Service Mesh (ASM) Enabled
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
Ingress-gateway-sig | ingressgateway-sig | 5 | 6 CPUs | 4 GB | 30 CPUs, 20 GB | 3 CPU/pod | 2 GB/pod |
Ingress-gateway-prov | ingressgateway-prov | 2 | 6 CPUs | 4 GB | 12 CPUs, 8 GB | 0.07 CPU/pod | 0.9 GB/pod |
Nudr-dr-service | nudr-drservice | 6 | 4 CPUs | 4 GB | 24 CPUs, 24 GB | 3 CPU/pod | 1.2 GB/pod |
Nudr-dr-provservice | nudr-dr-provservice | 2 | 4 CPUs | 4 GB | 8 CPUs, 8 GB | 0.02 CPU/pod | 0.5 GB/pod |
Nudr-egress-gateway | egressgateway | 1 | 2 CPUs | 2 GB | 2 CPUs, 2 GB | 0.04 CPU/pod | 0.4 GB/pod |
Nudr-config | nudr-config | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | Minimal resources are used; utilization is not captured. | |
Nudr-config-server | nudr-config-server | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | Minimal resources are used; utilization is not captured. | |
The following table provides observation data from the performance test, which can be used as a benchmark for scaling up EIR performance:
Table 3-13 Result and Observation
Parameter | Values |
---|---|
Test Duration | 8hr |
TPS Achieved | 10k |
Success Rate | 100% |
Average EIR processing time (Request and Response) | 16.9ms |
3.3 Test Scenario 3: SOAP and Diameter Deployment Model
Performance Requirement: 2K SOAP provisioning TPS through ProvGw with the Medium profile, plus 25K Diameter TPS with the Large profile.
The following features are enabled:
- TLS
- OAuth2.0
- Header validations such as XFCC, server header, and user agent header
UDR is benchmarked for compute and storage resources under the following conditions:
- Signaling (Diameter Sh): 25K TPS
- Provisioning: 2K TPS
- Total Subscribers: 1M to 10M range for Diameter Sh; 1M for SOAP/XML
- Profile Size: 2.2KB
- Average HTTP Provisioning Request Packet Size: NA
- Average HTTP Provisioning Response Packet Size: NA
Figure 3-2 SOAP and Diameter Deployment Model

The following table describes the benchmarking parameters and their values:
Table 3-14 Traffic Model Details
Request Type | Details | TPS |
---|---|---|
Diameter Sh Traffic | Sh Traffic | 25K |
Provisioning (2K using Provgw) | SOAP Traffic | 2K |
Table 3-15 SOAP Traffic Model
Request Type | SOAP Traffic % |
---|---|
GET | 33% |
DELETE | 11% |
POST | 11% |
PUT | 45% |
Table 3-16 Diameter Traffic Model
Request Type | Diameter Traffic % |
---|---|
SNR (Subscribe-Notifications-Request) | 25% |
PUR (Profile-Update-Request) | 50% |
UDR (User-Data-Request) | 25% |
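Combining Tables 3-14 and 3-16 gives the per-message Diameter rates; a quick sketch:

```python
# Split the 25K Diameter Sh TPS across the Table 3-16 message mix.
total_sh_tps = 25_000
mix = {"SNR": 0.25, "PUR": 0.50, "UDR": 0.25}  # Sh message shares from Table 3-16

for msg, share in mix.items():
    print(f"{msg}: {total_sh_tps * share:,.0f} TPS")
# SNR: 6,250  PUR: 12,500  UDR: 6,250
```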
The following table describes the testcase parameters and their values:
Table 3-17 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
UDR Version Tag | 22.2.0 |
Target TPS | 25K + 2K |
Traffic Profile | 25K Sh + 2K SOAP |
Notification Rate | OFF |
UDR Response Timeout | 5s |
Client timeout | 10s |
Signaling Requests Latency Recorded on Client | NA |
Provisioning Requests Latency Recorded on Client | NA |
Note:
PNR scenarios are not tested because a server stub is not used.
Table 3-18 cnDBTier Resources and their Utilization
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
Management node | mysqlndbcluster | 3 | 4 CPUs | 10 GB | 12 CPUs, 30 GB | 0.2 CPU/pod | 0.2 GB/pod |
Data node | mysqlndbcluster | 4 | 15 CPUs | 98 GB | 64 CPUs, 408 GB | 5.8 CPU/pod | 92 GB/pod |
 | db-backup-executor-svc | | 100m CPU | 128 MB | | NA | NA |
APP SQL node | mysqlndbcluster | 4 | 16 CPUs | 16 GB | 64 CPUs, 64 GB | 9.5 CPU/pod | 8.8 GB/pod |
SQL node (used for replication) | mysqlndbcluster | 4 | 8 CPUs | 16 GB | 49 CPUs, 81 GB | Utilization data is not available; because of resource constraints, these pods are not used. | |
DB Monitor Service | db-monitor-svc | 1 | 200m CPU | 500 MB | 3 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
DB Backup Manager Service | replication-svc | 1 | 200m CPU | 500 MB | 3 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
cnDBTier Usage
The kubectl top pods and kubectl get hpa outputs for cnDBTier show the following:
- Data memory usage: 72GB (5.164GB used)
- DB Reads per second: 52k
- DB Writes per second: 24k
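Relating the observed cnDBTier operation rates to the offered load suggests roughly two DB reads and one DB write per incoming request; this ratio is interpretive, not a figure from the test run:

```python
# Relate observed cnDBTier op rates to the offered load (interpretive sketch).
db_reads_per_sec = 52_000
db_writes_per_sec = 24_000
offered_tps = 25_000 + 2_000  # 25K Diameter Sh + 2K SOAP provisioning

print(f"~{db_reads_per_sec / offered_tps:.1f} DB reads per request")   # ~1.9
print(f"~{db_writes_per_sec / offered_tps:.1f} DB writes per request") # ~0.9
```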
Table 3-19 UDR Resources and their Utilization (Request Latency: 40 ms)
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
nudr-diameterproxy | nudr-diameterproxy | 19 | 2.5 CPUs | 4 GB | 47.5 CPUs, 76 GB | 1.75 CPU/pod | 1 GB/pod |
nudr-diam-gateway | nudr-diam-gateway | 3 | 6 CPUs | 4 GB | 18 CPUs, 12 GB | 2.5 CPU/pod | 2 GB/pod |
Ingress-gateway-sig | ingressgateway-sig | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | Minimal resources are used; utilization is not captured. | |
Ingress-gateway-prov | ingressgateway-prov | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | 1 CPU/pod | 1 GB/pod |
Nudr-dr-service | nudr-drservice | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | Minimal resources are used; utilization is not captured. | |
Nudr-dr-provservice | nudr-dr-provservice | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | 1.4 CPU/pod | 1 GB/pod |
Nudr-nrf-client-nfmanagement | nrf-client-nfmanagement | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
Nudr-egress-gateway | egressgateway | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | Minimal resources are used; utilization is not captured. | |
Nudr-config | nudr-config | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
Nudr-config-server | nudr-config-server | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
alternate-route | alternate-route | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
app-info | app-info | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
perf-info | perf-info | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
The following table describes the Provisioning Gateway resources and their utilization:
Table 3-20 Provisioning Gateway Resources and their Utilization (Provisioning Request Latency: 40 ms)
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage |
---|---|---|---|---|---|---|---|
provgw-ingress-gateway | ingressgateway | 3 | 2 CPUs | 2 GB | 6 CPUs, 6 GB | 1.3 CPU/pod | 1 GB/pod |
provgw-egress-gateway | egressgateway | 2 | 2 CPUs | 2 GB | 4 CPUs, 4 GB | 0.9 CPU/pod | 700 MB/pod |
provgw-service | provgw-service | 4 | 2.5 CPUs | 3 GB | 10 CPUs, 12 GB | 1.75 CPU/pod | 1 GB/pod |
provgw-config | provgw-config | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
provgw-config-server | provgw-config-server | 2 | 1 CPU | 1 GB | 2 CPUs, 2 GB | Minimal resources are used; utilization is not captured. | |
Table 3-21 cnUDR and ProvGw Resources Calculation
Resources | cnUDR: Core Services at 70% Usage (Nudr-diamgw, Nudr-diamproxy, Nudr-ingressgateway-prov, and Nudr-dr-prov) | cnUDR: Other Microservices | cnUDR: Total | ProvGw: Core Services at 70% Usage (ProvGw-ingressgateway, ProvGw-provgw-service, and ProvGw-egressgateway) | ProvGw: Other Microservices | ProvGw: Total |
---|---|---|---|---|---|---|
CPU | 73.5 | 24 | 97.5 | 20 | 4 | 24 |
Memory in GB | 96 | 24 | 120 | 22 | 4 | 26 |
Disk Volume (Ephemeral storage) in GB | 26 | 16 | 42 | 9 | 4 | 13 |
Table 3-22 cnDBTier Resources Calculation
Resources | SQL Nodes (at actual usage) | SQL Nodes (overhead/buffer resources at 20%) | Data Nodes (at actual usage) | Data Nodes (overhead/buffer resources at 10%) | MGM Nodes and Other Resources (default) | Total |
---|---|---|---|---|---|---|
CPU | 76 | 16 | 23.2 | 5 | 18 | 138.5 |
Memory in GB | 70.4 | 14 | 368 | 36 | 34 | 522 |
Disk Volume (Ephemeral storage) in GB | 8 | NA | 960 (ndbdisksize = 240 × 4) | NA | 20 | 988 |
Table 3-23 Total Resources Calculation
Resources | Total |
---|---|
CPU | 260 |
Memory in GB | 668 |
Disk Volume (Ephemeral storage) in GB | 104 |
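The CPU and memory totals in Table 3-23 are the sums of the cnUDR, ProvGw, and cnDBTier subtotals from Tables 3-21 and 3-22; a sketch of that arithmetic (disk volume is tabulated separately):

```python
# Reproduce the Table 3-23 CPU and memory totals from Tables 3-21 and 3-22.
cnudr_cpu, cnudr_mem = 73.5 + 24, 96 + 24    # core services at 70% usage + other microservices
provgw_cpu, provgw_mem = 20 + 4, 22 + 4
cndb_cpu = 76 + 16 + 23.2 + 5 + 18           # SQL + 20% buffer, data + 10% buffer, MGM/default
cndb_mem = 70.4 + 14 + 368 + 36 + 34

print(round(cnudr_cpu + provgw_cpu + cndb_cpu))  # 260 CPUs
print(round(cnudr_mem + provgw_mem + cndb_mem))  # 668 GB
```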
The following table provides observation data from the performance test, which can be used as a benchmark for scaling up UDR performance:
Table 3-24 Result and Observation
Parameter | Values |
---|---|
Test Duration | 18hr |
TPS Achieved | 25K Diameter + 2K SOAP Provisioning |
Success Rate | 100% |
Average UDR processing time (Request and Response) | 40ms |
3.4 Test Scenario 4: Policy Data Traffic Deployment Model
This section provides information about policy data traffic deployment model test scenarios.
3.4.1 Policy Data Large Profile 10K Mix Traffic with 3K Notifications
The following features are enabled:
- TLS
- OAuth2.0
- Header validations such as XFCC, server header, and user agent header
You can perform benchmark tests on UDR for compute and storage resources by considering the following conditions:
- Signaling: 10K TPS (includes subscriptions)
- Provisioning: NA
- Total Subscribers: 4M
- Profile Size: NA
- Average HTTP Provisioning Request Packet Size: NA
- Average HTTP Provisioning Response Packet Size: NA
Figure 3-3 Policy Data Traffic Deployment Model

The following table describes the benchmarking parameters and their values:
Table 3-25 Traffic Model Details
Request Type | Details | TPS / Mix % |
---|---|---|
Notifications (3K) | Notification requests triggered from UDR | 3K |
Signaling (10K mix traffic) | GET | 18% |
 | PUT | 37% |
 | PATCH | 15% |
 | DELETE | 15% |
 | POST Subscription | 7.5% |
 | DELETE Subscription | 7.5% |
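The signaling mix in Table 3-25 translates into per-operation rates as follows; notifications are an additional 3K TPS on top of this mix:

```python
# Per-operation signaling TPS implied by the Table 3-25 mix (10K total).
total_tps = 10_000
mix = {"GET": 0.18, "PUT": 0.37, "PATCH": 0.15, "DELETE": 0.15,
       "POST Subscription": 0.075, "DELETE Subscription": 0.075}

assert abs(sum(mix.values()) - 1.0) < 1e-9   # shares cover 100% of traffic
for op, share in mix.items():
    print(f"{op:<20} {total_tps * share:>6.0f} TPS")
# Notifications add a further 3K TPS triggered from UDR (egress).
```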
Table 3-26 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
UDR Version Tag | 1.15.0 |
Target TPS | 10k Signaling TPS + 3k Notifications |
Notification Rate | 3k |
UDR Response Timeout | 45ms |
Client Timeout | 10s |
Signaling Requests Latency Recorded on Client | NA |
Provisioning Requests Latency Recorded on Client | NA |
Table 3-27 Average Deployment Size to Achieve Higher TPS
Subscriber Profile (number of subscribers in DB) | Microservice Name | TPS Rate | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Average CPU Used | Average Memory Used | Latency on httpGo Tool |
---|---|---|---|---|---|---|---|---|
4M | ocudr-ingress-gateway | 10000 | 5 | 6 | 5Gi | 2.8 | 1352Mi | 45ms |
 | nudr-dr-service | 10000 | 10 | 5 | 4Gi | 4.15 | 1512Mi | NA |
 | nudr-notify-service | 3.0K | 7 | 4 | 4Gi | 4.5 | 1050Mi | NA |
 | ocudr-egress-gateway | 3.0K | 4 | 4 | 3Gi | 2.7 | 991Mi | NA |
Table 3-28 cnDBTier Pod Details
VM Name | vCPU | RAM | Storage (GB) |
---|---|---|---|
ndbmysqld-0 | 16 | 64 GB | 90 |
ndbmysqld-1 | 16 | 64 GB | 90 |
ndbmysqld-2 | 16 | 64 GB | 90 |
ndbmysqld-3 | 16 | 64 GB | 90 |
ndbmgmd-0 | 8 | 8 GB | 50 |
ndbmtd-0 | 16 | 64 GB | 190 |
ndbmtd-1 | 16 | 64 GB | 190 |
ndbmtd-2 | 16 | 64 GB | 190 |
ndbmtd-3 | 16 | 64 GB | 190 |
Table 3-29 TPS Rate
Parameter | Value |
---|---|
TPS of DB writes | 3.91K x 4 |
TPS of DB reads | 15.4K x 4 |
Total DB reads | 29.42 Mil x 4 |
Total DB writes | 12.31 Mil x 4 |
The following table provides observation data from the performance test, which can be used as a benchmark for scaling up UDR performance:
Table 3-30 Result and Observation
Parameter | Values |
---|---|
Test Duration | 4h |
TPS Achieved | 10k Signaling (includes subscriptions) + 3k Notifications |
Success rate | 100% |
Average UDR processing time (Request and Response) | 45ms |
3.4.2 Policy Data: 10K TPS Signaling Traffic
For this test scenario, the Entity Tag (ETag) feature is enabled.
You can perform benchmark tests on UDR for compute and storage resources by considering the following conditions:
- Signaling: 10K TPS
- Provisioning: NA
- Total Subscribers: 1M
- Profile Size: 2.5KB
- Average HTTP Provisioning Request Packet Size: NA
- Average HTTP Provisioning Response Packet Size: NA
The following table describes the benchmarking parameters and their values:
Table 3-31 Traffic Model Details
Request Type | Details | TPS |
---|---|---|
N36 traffic (100%) 10K TPS for sm-data | GET | 2500 |
PUT | 2500 | |
PATCH | 2500 | |
DELETE | 2500 |
Note:
Provisioning and egress traffic are not included in this model.
Table 3-32 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
UDR Version Tag | 23.4.0 |
Target TPS | 10k Signaling |
Notification Rate | NA |
UDR Response Timeout | 5s |
Client Timeout | 30s |
Signaling Requests Latency Recorded on Client | NA |
Provisioning Requests Latency Recorded on Client | NA |
Table 3-33 Consolidated Resource Requirement
Resource | CPU | Memory (GB) |
---|---|---|
cnDBTier | 50 | 434 |
UDR | 73 | 55 |
Buffer | 50 | 50 |
Total | 173 | 539 |
Note:
For cnDBTier, you must use the ocudr_slf_37msub_dbtier and ocudr_udr_10msub_dbtier custom value files for SLF and UDR respectively. For more information, see Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
Table 3-34 cnDBTier Resources and their Utilization
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage | CPU Utilization |
---|---|---|---|---|---|---|---|---|
Management node (ndbmgmd) | mysqlndbcluster | 2 | 2 CPUs | 9 GB | 4 CPUs, 18 GB | Minimal resources are used. | | |
Data node (ndbmtd) | mysqlndbcluster | 4 | 4 CPUs | 93 GB | 16 CPUs, 372 GB | 1.5 CPU/pod | 77.5 GB/pod | 37% |
APP SQL node (ndbappmysqld) | mysqlndbcluster | 5 | 4 CPUs | 2 GB | 20 CPUs, 10 GB | 3.5 CPU/pod | 700 MB/pod | 82% |
SQL node (ndbmysqld, used for replication) | mysqlndbcluster | 2 | 4 CPUs | 16 GB | 8 CPUs, 32 GB | Minimal resources are used. | | |
DB Monitor Service | db-monitor-svc | 1 | 200m CPU | 500 MB | 1 CPU, 500 MB | Minimal resources are used. | | |
DB Backup Manager Service | backup-manager-svc | 1 | 100m CPU | 128 MB | 1 CPU, 128 MB | Minimal resources are used. | | |
The following cnDBTier custom values are used for this test:

```yaml
# Values for cnDBTier configuration files (cnf)
ndbconfigurations:
  mgm:
    HeartbeatIntervalMgmdMgmd: 2000
    TotalSendBufferMemory: 16M
    startNodeId: 49
  ndb:
    MaxNoOfAttributes: 5000
    MaxNoOfOrderedIndexes: 1024
    NoOfFragmentLogParts: 4
    MaxNoOfExecutionThreads: 4
    StopOnError: 0
    MaxNoOfTables: 1024
    NoOfFragmentLogFiles: 64
  api:
    user: mysql
    max_connections: 4096
    all_row_changes_to_bin_log: 1
    binlog_expire_logs_seconds: '86400'
    auto_increment_increment: 2
    auto_increment_offset: 1
    wait_timeout: 600
    interactive_timeout: 600
additionalndbconfigurations:
  mgm: {}
  ndb:
    __TransactionErrorLogLevel: '0x0000'
    TotalSendBufferMemory: '32M'
    CompressedLCP: true
    TransactionDeadlockDetectionTimeout: 1200
    HeartbeatIntervalDbDb: 500
    ConnectCheckIntervalDelay: 0
    LockPagesInMainMemory: 0
    MaxNoOfConcurrentOperations: 128K
    MaxNoOfConcurrentTransactions: 65536
    MaxNoOfUniqueHashIndexes: 16K
    FragmentLogFileSize: 128M
    ODirect: false
    RedoBuffer: 1024M
    SchedulerExecutionTimer: 50
    SchedulerSpinTimer: 0
    TimeBetweenEpochs: 100
    TimeBetweenGlobalCheckpoints: 2000
    TimeBetweenLocalCheckpoints: 6
    TimeBetweenEpochsTimeout: 4000
    TimeBetweenGlobalCheckpointsTimeout: 60000
    # By default, LcpScanProgressTimeout is configured; overwrite
    # LcpScanProgressTimeout with the required value.
    # LcpScanProgressTimeout: 180
    RedoOverCommitLimit: 60
    RedoOverCommitCounter: 3
    StartPartitionedTimeout: '1800000'
    CompressedBackup: 'true'
    MaxBufferedEpochBytes: '26214400'
    MaxBufferedEpochs: '100'
  api:
    TotalSendBufferMemory: '32M'
    DefaultOperationRedoProblemAction: 'ABORT'
  mysqld:
    max_connect_errors: '4294967295'
    ndb_applier_allow_skip_epoch: 0
    ndb_batch_size: '2000000'
    ndb_blob_write_batch_bytes: '2000000'
    replica_allow_batching: 'ON'
    max_allowed_packet: '134217728'
    ndb_log_update_minimal: 1
    replica_parallel_workers: 0
    binlog_transaction_compression: 'ON'
    binlog_transaction_compression_level_zstd: '3'
    ndb_report_thresh_binlog_epoch_slip: 50
    ndb_eventbuffer_max_alloc: 0
    ndb_allow_copying_alter_table: 'ON'
    ndb_clear_apply_status: 'OFF'
  tcp:
    SendBufferMemory: '2M'
    ReceiveBufferMemory: '2M'
    TCP_SND_BUF_SIZE: '0'
    TCP_RCV_BUF_SIZE: '0'
# Specific MySQL cluster node values needed in different charts
mgm:
  ndbdisksize: 15Gi
ndb:
  ndbdisksize: 132Gi
  ndbbackupdisksize: 164Gi
  datamemory: 69G
  KeepAliveSendIntervalMs: 60000
  use_separate_backup_disk: true
  restoreparallelism: 128
api:
  ndbdisksize: 12.6Gi
  startNodeId: 56
  startEmptyApiSlotNodeId: 222
  numOfEmptyApiSlots: 4
  ndb_extra_logging: 99
  general_log: 'OFF'
ndbapp:
  ndbdisksize: 2Gi
  ndb_cluster_connection_pool: 1
  ndb_cluster_connection_pool_base_nodeid: 100
  startNodeId: 70
```
Table 3-35 UDR Resources and their Utilization (Average Latency: 10 ms)
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources | CPU Usage | Memory Usage | CPU Utilization (HPA) |
---|---|---|---|---|---|---|---|---|
Ingress-gateway-sig | ingressgateway-sig | 4 | 6 CPUs | 4 GB | 24 CPUs, 16 GB | 2.9 CPU/pod | 1.4 GB/pod | 49% |
Ingress-gateway-prov | ingressgateway-prov | 2 | 4 CPUs | 4 GB | 8 CPUs, 8 GB | Minimal resources are used. | | |
Nudr-dr-service | nudr-drservice | 5 | 6 CPUs | 4 GB | 30 CPUs, 20 GB | 4.5 CPU/pod | 2 GB/pod | 74% |
Nudr-dr-provservice | nudr-dr-provservice | 2 | 4 CPUs | 4 GB | 8 CPUs, 8 GB | Minimal resources are used. | | |
Nudr-egress-gateway | egressgateway | 1 | 1 CPU | 1 GB | 1 CPU, 1 GB | Minimal resources are used. | | |
Nudr-config | nudr-config | 1 | 1 CPU | 1 GB | 1 CPU, 1 GB | Minimal resources are used. | | |
Nudr-config-server | nudr-config-server | 1 | 1 CPU | 1 GB | 1 CPU, 1 GB | Minimal resources are used. | | |
The following sample policy data (sm-data) subscriber profile is used for this test:

```json
{
"sm-data": {
"umData": {
"mk1": {
"scopes": {
"11-abc123": {
"dnn": [
"dnn1"
],
"snssai": {
"sd": "abc123",
"sst": 11
}
}
},
"limitId": "mk1",
"umLevel": "SERVICE_LEVEL",
"resetTime": "2018-01-02T08:17:14.090Z",
"allowedUsage": {
"duration": 9000,
"totalVolume": 8888,
"uplinkVolume": 6666,
"downlinkVolume": 7777
}
}
},
"umDataLimits": {
"mk1": {
"scopes": {
"11-abc123": {
"dnn": [
"dnn1"
],
"snssai": {
"sd": "abc123",
"sst": 11
}
}
},
"endDate": "2018-11-05T08:17:14.090Z",
"limitId": "mk1",
"umLevel": "SESSION_LEVEL",
"startDate": "2018-09-05T08:17:14.090Z",
"usageLimit": {
"duration": 6000,
"totalVolume": 9000,
"uplinkVolume": 5000,
"downlinkVolume": 4000
},
"resetPeriod": {
"period": "YEARLY"
}
}
},
"smPolicySnssaiData": {
"11-abc123": {
"snssai": {
"sd": "abc123",
"sst": 11
},
"smPolicyDnnData": {
"dnn1": {
"dnn": "dnn1",
"bdtRefIds": {
"xyz": "bdtRefIds",
"abc": "xyz"
},
"gbrDl": "7788 Kbps",
"gbrUl": "5566 Kbps",
"online": true,
"chfInfo": {
"primaryChfAddress": "1.1.1.1",
"secondaryChfAddress": "2.2.2.2"
},
"offline": true,
"praInfos": {
"p1": {
"praId": "p1",
"trackingAreaList": [{
"plmnId": {
"mcc": "976",
"mnc": "32"
},
"tac": "5CB6"
},
{
"plmnId": {
"mcc": "977",
"mnc": "33"
},
"tac": "5CB7"
}
],
"ecgiList": [{
"plmnId": {
"mcc": "976",
"mnc": "32"
},
"eutraCellId": "92FFdBE"
},
{
"plmnId": {
"mcc": "977",
"mnc": "33"
},
"eutraCellId": "8F868C4"
}
],
"ncgiList": [{
"plmnId": {
"mcc": "976",
"mnc": "32"
},
"nrCellId": "b2fB6fE9D"
},
{
"plmnId": {
"mcc": "977",
"mnc": "33"
},
"nrCellId": "5d1B4127b"
}
],
"globalRanNodeIdList": [{
"plmnId": {
"mcc": "965",
"mnc": "235"
},
"n3IwfId": "fFf0f2AFbFa16CEfE7"
},
{
"plmnId": {
"mcc": "967",
"mnc": "238"
},
"gNbId": {
"bitLength": 25,
"gNbValue": "1A8F1D"
}
}
]
}
},
"ipv4Index": 0,
"ipv6Index": 0,
"subscCats": [
"cat1",
"cat2"
],
"adcSupport": true,
"mpsPriority": true,
"allowedServices": [
"ser1",
"ser2"
],
"mpsPriorityLevel": 2,
"imsSignallingPrio": true,
"refUmDataLimitIds": {
"mk1": {
"monkey": [
"monkey1"
],
"limitId": "mk1"
}
},
"subscSpendingLimits": true
}
}
}
}
}
}
```
Table 3-36 Result and Observation
Parameter | Values |
---|---|
Test Duration | 48h |
TPS Achieved | 10K Signaling |
Success rate | 100% |
Average UDR processing time (Request and Response) | 10ms |
3.4.3 Policy Data: Performance 17.2K N36 and 6.56K Notifications
You can perform benchmark tests on UDR for compute and storage resources by considering the following conditions:
- Signaling: 17.2K TPS
- Provisioning: NA
- Total Subscribers: 10M
The following table describes the benchmarking parameters and their values:
Table 3-37 Traffic Model Details
Request Type | Details | TPS |
---|---|---|
N36 traffic (100%), 17.2K TPS for sm-data and subs-to-notify | subs-to-notify POST | 4.57K (26%) |
 | sm-data GET | 4.63K (27%) |
 | subs-to-notify DELETE | 1.39K (8%) |
 | sm-data PATCH | 6.56K (39%) |
Notifications | POST operation (egress) | 6.56K |
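A quick cross-check of the Table 3-37 mix, confirming that the per-operation rates sum to the 17.2K target and the shares to 100%; note that the 6.56K egress notification rate equals the sm-data PATCH rate:

```python
# Cross-check the Table 3-37 N36 mix against the 17.2K signaling target.
ops = {  # operation: (TPS, share)
    "subs-to-notify POST":   (4_570, 0.26),
    "sm-data GET":           (4_630, 0.27),
    "subs-to-notify DELETE": (1_390, 0.08),
    "sm-data PATCH":         (6_560, 0.39),
}
total = sum(tps for tps, _ in ops.values())
share_sum = sum(share for _, share in ops.values())

print(f"signaling total: {total} TPS (~17.2K)")  # 17150
print(f"mix shares sum to {share_sum:.0%}")      # 100%
```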
Note:
Provisioning traffic is not included in this model.
Table 3-38 Testcase Parameters
Input Parameter Details | Configuration Values |
---|---|
Target TPS | 17.2K Signaling |
Notification Rate | 6.56K |
UDR Response Timeout | 2.7s |
Client Timeout | 3s |
Signaling Requests Latency Recorded on Client | 150ms |
Provisioning Requests Latency Recorded on Client | 150ms |
Table 3-39 Consolidated Resource Requirement
Resource | CPU | Memory |
---|---|---|
cnDBTier | 84 CPUs | 451 GB |
UDR | 229 CPUs | 177 GB |
Buffer | 50 CPUs | 50 GB |
Total | 363 CPUs | 678 GB |
Note:
For cnDBTier, you must use the ocudr_udr_10msub17.2K_TPS_dbtier_24.1.0_custom_values_24.1.0 file. For more information, see Oracle Communications Cloud Native Core, Unified Data Repository Installation, Upgrade, and Fault Recovery Guide.
Table 3-40 cnDBTier Resources and their Utilization
Microservice Name | Container Name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources |
---|---|---|---|---|---|
Management node (ndbmgmd) | mysqlndbcluster | 2 | 2 CPUs | 9 GB | 4 CPUs, 18 GB |
Data node (ndbmtd) | mysqlndbcluster | 4 | 4 CPUs | 93 GB | 16 CPUs, 372 GB |
APP SQL node (ndbappmysqld) | mysqlndbcluster | 9 | 6 CPUs | 3 GB | 54 CPUs, 27 GB |
SQL node (ndbmysqld, used for replication) | mysqlndbcluster | 2 | 4 CPUs | 16 GB | 8 CPUs, 32 GB |
DB Monitor Service | db-monitor-svc | 1 | 200m CPU | 500 MB | 1 CPU, 500 MB |
DB Backup Manager Service | backup-manager-svc | 1 | 100m CPU | 128 MB | 1 CPU, 128 MB |
Table 3-41 UDR Resources and their Utilization
Micro service name | Container name | Number of Pods | CPU Allocation Per Pod | Memory Allocation Per Pod | Total Resources |
---|---|---|---|---|---|
Ingress-gateway-sig | ingressgateway-sig | 9 | 6 CPUs | 4 GB |
54 CPUs 36 GB |
Ingress-gateway-prov | ingressgateway-prov | 2 | 4 CPUs | 4 GB |
8 CPUs 8 GB |
Nudr-dr-service | nudr-drservice | 17 | 6 CPUs | 4 GB |
102 CPUs 68 GB |
Nudr-dr-provservice | nudr-dr-provservice | 2 | 4 CPUs | 4 GB |
8 CPUs 8 GB |
Nudr-notify-service | nudr-notify-service | 7 | 4 CPUs | 4 GB |
28 CPUs 28 GB |
Nudr-egress-gateway | egressgateway | 4 | 4 CPUs | 4 GB |
16 CPUs 16 GB |
Nudr-config | nudr-config | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
Nudr-config-server | nudr-config-server | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
Alternate-route | alternate-route | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
Nudr-nrf-client-nfmanagement-service | nrf-client-nfmanagement | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
App-info | app-info | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
Perf-info | perf-info | 2 | 1 CPU | 1 GB |
2 CPUs 2 GB |
Nudr-dbcr-auditor-service | nudr-dbcr-auditor-service | 1 | 1 CPU | 1 GB |
1 CPU 1 GB |
Table 3-42 Result and Observation
Parameter | Values |
---|---|
Test Duration | 2h |
TPS Achieved | 17.2K Signaling |
Success rate | 100% |
Average UDR processing time (Request and Response) | 150ms |