Performance Test Configuration
This chapter describes the specific system configuration of the Oracle Communications Unified Assurance environment used for performance testing.
This information serves as a reference point for interpreting the results and understanding how your own configuration might influence the performance of your Unified Assurance deployment. Because the lab environment was set up for repeatable testing, it may not completely replicate a production environment.
Unified Assurance Configuration
Performance testing was run using microservices with the following multi-server, non-redundant deployment architecture:
- Presentation layer: Single internal presentation server
- Database layer:
  - Event database server
  - Graph database server
  - Metric database server
  - Three clustered Historical database servers
- Five clustered collection servers
Hardware Configuration
The following table describes the basic hardware details for the servers used in the performance testing.
| Servers | Cores | Memory (GB) | Boot Volume (GB) | Hard Drive Used at Start (GB) |
|---|---|---|---|---|
| Internal presentation server | 16 | 136 | 500 | 94 |
| Event database server | 16 | 136 | 1000 | 36 |
| Graph database server | 16 | 64 | 500 | 31 |
| Historical database server 1 | 16 | 64 | 800 | 36 |
| Historical database server 2 | 16 | 64 | 800 | 36 |
| Historical database server 3 | 16 | 64 | 800 | 36 |
| Metric database server | 16 | 64 | 800 | 33 |
| Collection server 1 | 16 | 64 | 300 | 50 |
| Collection server 2 | 16 | 64 | 300 | 50 |
| Collection server 3 | 16 | 64 | 300 | 51 |
| Collection server 4 | 16 | 64 | 300 | 52 |
| Collection server 5 | 16 | 64 | 300 | 51 |
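For a quick sense of the total lab footprint, the per-server figures in the table can be aggregated. The following is a minimal sketch, with server counts and values copied from the table above:

```python
# Aggregate the lab hardware footprint from the table above.
# Each tuple is (count, cores, memory GB, boot volume GB) per server role.
servers = [
    (1, 16, 136, 500),   # internal presentation server
    (1, 16, 136, 1000),  # event database server
    (1, 16, 64, 500),    # graph database server
    (3, 16, 64, 800),    # historical database servers
    (1, 16, 64, 800),    # metric database server
    (5, 16, 64, 300),    # collection servers
]
total_cores = sum(n * cores for n, cores, _, _ in servers)
total_memory = sum(n * mem for n, _, mem, _ in servers)
total_disk = sum(n * disk for n, _, _, disk in servers)
print(total_cores, total_memory, total_disk)  # → 192 912 6700
```

In total, the test environment comprised 12 servers with 192 cores, 912 GB of memory, and 6700 GB of boot volume capacity.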
The server CPUs had the following specifications:
- OCI Shape: VM.Standard.E5.Flex
- Processor: AMD EPYC 9J14 (AMD E4)
- Base frequency: 2.4 GHz; maximum boost frequency: 3.7 GHz
- Threads per core: 2
All servers had FIPS mode and SELinux enabled.
Microservice Configuration
Microservices were configured with pods as follows:
- Trap Collector: 1 pod
- FCOM Processor: 16 pods
- Event Sink: 2 pods
- Flow Collector: Deployed only for the Netflow Throughput test; tested with 1 pod and 3 pods
All microservices had the log level set to ERROR.
The following versions were used:
- Pulsar: 4.0.5.9
- Java: 21.0.10
Tuning Parameters
The following values were used in the /etc/sysctl.conf file:
fs.file-max=6526474
vm.swappiness=10
vm.dirty_ratio=60
vm.dirty_background_ratio=10
vm.max_map_count=65530
net.ipv4.ip_forward=0
net.ipv6.conf.all.forwarding=0
net.ipv4.tcp_synack_retries=5
net.ipv4.tcp_rfc1337=0
net.ipv4.tcp_fin_timeout=15
net.ipv4.tcp_keepalive_time=7200
net.ipv4.tcp_keepalive_probes=9
net.ipv4.tcp_keepalive_intvl=75
net.core.rmem_default=2097152
net.core.rmem_max=2097152
net.core.wmem_default=212992
net.core.wmem_max=212992
net.core.somaxconn=16384
net.core.netdev_max_backlog=250000
net.core.optmem_max=81920
net.ipv4.tcp_mem=762645 1016862 1525290
net.ipv4.udp_mem=1525293 2033725 3050586
net.ipv4.tcp_rmem=4096 131072 6291456
net.ipv4.udp_rmem_min=4096
net.ipv4.tcp_wmem=4096 16384 4194304
net.ipv4.udp_wmem_min=4096
net.ipv4.tcp_max_tw_buckets=1440000
net.ipv4.neigh.default.gc_thresh1=512
net.ipv4.neigh.default.gc_thresh2=1024
net.ipv4.neigh.default.gc_thresh3=2048
net.ipv4.tcp_low_latency=0
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
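These kernel parameters are applied by placing them in /etc/sysctl.conf and loading them with sysctl -p. As a sketch, a small helper (hypothetical, not part of the product) can parse a sysctl.conf-style fragment so the lab values above can be compared against a running system's `sysctl -n` output:

```python
def parse_sysctl(text: str) -> dict[str, str]:
    """Parse key=value lines from a sysctl.conf-style fragment."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

# A few of the lab values listed above.
fragment = """
vm.swappiness=10
net.core.somaxconn=16384
net.ipv4.tcp_rmem=4096 131072 6291456
"""
settings = parse_sysctl(fragment)
print(settings["net.core.somaxconn"])  # → 16384
```

Multi-value parameters such as net.ipv4.tcp_rmem keep their space-separated form as a single string, matching how sysctl reports them.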