2 Deployment Environment

This section provides information about the cloud native platform used for SCP benchmarking.

2.1 Deployed Components

Deployment Platform

Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) 24.1.0 and CNE on Bare Metal 24.1.0 are used to perform the benchmark tests.

Observability Services

The following table lists the services that are part of CNE and are used for fetching SCP metrics.

Table 2-1 Observability Services

Service Name | Version
Oracle OpenSearch | 2.3.0
Oracle OpenSearch Dashboard | 2.3.0
Fluentd | 1.16.2
Prometheus | 2.51.1
Grafana | 9.5.3
Jaeger | 1.52.0
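
For reference, SCP metrics can be retrieved from the Prometheus service listed above through its HTTP query API. The following Python sketch shows a minimal instant query; the Prometheus URL and the metric name used here are placeholders, not values defined in this guide, and must be replaced with the values applicable to the deployment.

# Minimal sketch: run an instant PromQL query against the CNE Prometheus service.
# The URL and metric name below are placeholders, not taken from this guide.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"   # placeholder; use the CNE Prometheus service URL
QUERY = "sum(rate(ocscp_example_requests_total[5m]))"        # placeholder SCP metric name

def query_prometheus(prom_url: str, promql: str) -> list:
    """Run an instant PromQL query and return the result vector."""
    resp = requests.get(f"{prom_url}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {body}")
    return body["data"]["result"]

if __name__ == "__main__":
    for sample in query_prometheus(PROMETHEUS_URL, QUERY):
        print(sample["metric"], sample["value"])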

Cloud Native Orchestrator

Kubernetes 1.28.6 is used to manage application pods across the cluster.
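
For reference, the following Python sketch uses the Kubernetes client to confirm cluster access and list SCP pods before a benchmark run. Kubeconfig-based access and the "scp" namespace name are assumptions for illustration and must be adapted to the deployment.

# Minimal sketch: verify cluster access and SCP pod status before a benchmark run.
# Assumes a local kubeconfig and the "kubernetes" Python client; the "scp" namespace is a placeholder.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

# Count the nodes registered in the cluster.
nodes = v1.list_node().items
print(f"{len(nodes)} nodes in the cluster")

# List SCP pods and their phase in the (placeholder) "scp" namespace.
for pod in v1.list_namespaced_pod(namespace="scp").items:
    print(pod.metadata.name, pod.status.phase)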

cnDBTier

cnDBTier 25.1.100 is used to perform benchmark tests.

For more information about the software mentioned above, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

2.2 Deployment Resources

The performance and capacity of SCP can vary based on the chosen environment and how SCP is deployed. This section provides information about CNE and cnDBTier resources used to perform benchmark tests.

2.2.1 Cluster Details

The following table provides information about the types and number of servers used in the test environment:

Table 2-2 Test Bed 1 - CNE on Bare Metal

Nodes | Type | Count
Primary Nodes | HP Gen10 RMS | 3
Worker Nodes | HP Gen10 Blades | 29
Worker Nodes | HP Gen8 Blades | 7
Top of Rack Switch | Cisco Nexus9000 93180YC-EX | 2
Enclosure Switch | HP 6120 | 2

The following table provides information about the number of pods and the resources required by each CNE common service.

Table 2-3 CNE Common Services Observability Resources

Service Name | Number of Pods | RAM Request/Limit | vCPU Request/Limit | PVC Size Recommendation
Prometheus Server | 2 | 50Gi/50Gi | 12/12 | 150GB to 200GB
Prometheus-pushgateway | 1 | 32Mi/32Mi | 10m/10m | NA
Alert Manager | 2 | 164Mi/164Mi | 40m/40m | NA
Fluentd | 1 per Worker Node | 200Mi/500Mi | 100m/100m | NA
Prom-node-exporter | 1 per Worker Node | 512Mi/512Mi | 800m/800m | NA
MetalLB speaker | 1 per Worker Node | 100Mi/100Mi | 100m/100m | NA
OpenSearch Data | 3/3 | 32Gi/32Gi (JVM 16) | 2/2 | 300GB
OpenSearch Master | 3/3 | 16Gi/16Gi (JVM 8) | 1/1 | 300GB
ISM Policy | 3/3 | 128Mi/128Mi | 100m/100m | NA
OpenSearch Client | 1 | 128Mi/128Mi | 100m/100m | NA
Grafana | 1 | 500Mi/500Mi | 500m/500m | NA
Kibana | 1 | 500Mi/1Gi | 100m/1 | NA
kube-state-metrics | 1 | 200Mi/200Mi | 50m/50m | NA
jaeger-agent | 1 per Worker Node | 128Mi/512Mi | 256m/500m | NA
jaeger-collector | 1 | 512Mi/1Gi | 500m/1250m | NA
jaeger-query | 1 | 128Mi/512Mi | 256m/500m | NA
rook-ceph-osd | 1 for each raw disk available to the OS on each Worker Node | 1Gi/8Gi | 500m/1 | NA
rook-ceph-mgr | 1 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-mon | 3 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-operator | 1 | 2Gi/2Gi | 100m/500m | NA
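
For reference, the following Python sketch estimates the fixed observability overhead on each worker node using the request values from Table 2-3 for the services that run one pod per worker node. The worker node count used in the example corresponds to Test Bed 1 (29 HP Gen10 and 7 HP Gen8 blades).

# Minimal sketch: per-worker-node observability overhead, using the request values in Table 2-3
# for the services deployed as one pod per worker node.
PER_NODE_SERVICES = {
    # service: (cpu_request_millicores, ram_request_mib)
    "fluentd":            (100, 200),
    "prom-node-exporter": (800, 512),
    "metallb-speaker":    (100, 100),
    "jaeger-agent":       (256, 128),
}

def per_node_overhead(services: dict) -> tuple:
    """Sum the CPU (millicores) and RAM (MiB) requests of the per-node services."""
    cpu_m = sum(cpu for cpu, _ in services.values())
    ram_mib = sum(ram for _, ram in services.values())
    return cpu_m, ram_mib

if __name__ == "__main__":
    cpu_m, ram_mib = per_node_overhead(PER_NODE_SERVICES)
    workers = 36  # Test Bed 1 worker count: 29 Gen10 + 7 Gen8 blades
    print(f"Per worker node: {cpu_m}m CPU, {ram_mib}Mi RAM requested")
    print(f"Across {workers} workers: {cpu_m * workers / 1000:.1f} CPU, {ram_mib * workers / 1024:.1f}Gi RAM")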

Table 2-4 Test Bed 2 - VMware Tanzu

Nodes | Type | Count
Primary Nodes | VM (8 CPU and 64 GB Memory) | 3
Worker Nodes | VM (32 CPU and 128 GB Memory) | 51
Underlying Hardware | Cisco Nexus9000 93180YC-EX | 19

Table 2-5 Test Bed 3 - CNE on Bare Metal

Nodes | Type | Count
Primary Nodes | X9 Server and NVMe | 3
Worker Nodes | X9 Server and NVMe | 17

Table 2-6 Test Bed 4 - CNE on Bare Metal

Nodes | Type | Count
Primary Nodes | ORACLE SERVER X8-2 | 3
Worker Nodes | ORACLE SERVER X8-2 | 45
Top of Rack Switch | Cisco 93108TC-EX | 2

The following table provides information about the number of pods and the resources required by each CNE common service.

Table 2-7 CNE Common Services Observability Resources

Service Name | Number of Pods | RAM Request/Limit | vCPU Request/Limit | PVC Size Recommendation
Prometheus Server | 2 | 50Gi/50Gi | 16/16 | 150GB to 800GB
Prometheus-pushgateway | 1 | 2Gi/3Gi | 2/4 | NA
Alert Manager | 2 | 164Mi/164Mi | 40m/40m | NA
Fluentd | 1 per Worker Node | 200Mi/500Mi | 100m/100m | NA
Prom-node-exporter | 1 per Worker Node | 512Mi/512Mi | 800m/800m | NA
MetalLB speaker | 1 per Worker Node | 100Mi/100Mi | 100m/100m | NA
OpenSearch Data | 3/3 | 164Gi/100Mi | 1/8 | 300GB
OpenSearch Master | 3/3 | 16Gi/16Gi (JVM 8) | 1/1 | 300GB
ISM Policy | 3/3 | 128Mi/128Mi | 100m/100m | NA
OpenSearch Client | 1 | 128Mi/128Mi | 100m/100m | NA
Grafana | 1 | 500Mi/500Mi | 500m/500m | NA
Kibana | 1 | 500Mi/1Gi | 100m/1 | NA
kube-state-metrics | 1 | 200Mi/200Mi | 50m/50m | NA
jaeger-agent | 1 per Worker Node | 128Mi/512Mi | 256m/500m | NA
jaeger-collector | 1 | 512Mi/1Gi | 500m/1250m | NA
jaeger-query | 1 | 128Mi/512Mi | 256m/500m | NA
rook-ceph-osd | 1 for each raw disk available to the OS on each Worker Node | 1Gi/8Gi | 500m/1 | NA
rook-ceph-mgr | 1 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-mon | 3 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-operator | 1 | 2Gi/2Gi | 100m/500m | NA

2.2.2 cnDBTier Resources

The following tables describe the resources required by cnDBTier pods to perform SCP benchmark tests.

Table 2-8 Test Bed 1 - cnDBTier Resources

cnDBTier Pods | Replica | vCPU (Request/Limit) | RAM in GB (Request/Limit) | ASM Sidecar (vCPU / RAM in GB) | cnDBTier Sidecar RAM in GB (Request/Limit) | Storage (PVC in GB / Count) | Ephemeral Storage (Request in M / Limit in G)
SQL - Replication (ndbmysqld) StatefulSet | 2 | 2/3 | 2/4 | 1/1 | 1/1 | 30/1 | 90/1
MGMT (ndbmgmd) StatefulSet | 3 | 2/3 | 2/4 | 1/1 | NA/NA | 30/1 | 90/1
DB (ndbmtd) StatefulSet | 4 | 3/4 | 4/4 | 1/1 | 1/1 | 30/2 | 90/1
db-backup-manager-svc | 1 | 1/1 | 1/1 | 1/1 | NA/NA | NA/NA | 90/1
db-replication-svc | 1 | 1/2 | 1/2 | 1/1 | NA/NA | NA/NA | 90/1
db-monitor-svc | 1 | 1/2 | 1/2 | 1/1 | NA/NA | NA/NA | 90/1
db-connectivity-service | 0 | 0/0 | 0/0 | 0/0 | 0/0 | NA/NA | NA/NA
SQL - Access (ndbappmysqld) StatefulSet | 2 | 3/4 | 4/4 | 1/1 | NA/NA | 20/2 | 90/1
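
For reference, the following Python sketch totals the vCPU and RAM requests for the cnDBTier pods in Table 2-8 (replica count multiplied by the per-pod request). Sidecar and storage resources are omitted for brevity.

# Minimal sketch: total vCPU and RAM requests for the cnDBTier pods in Table 2-8.
CNDBTIER_PODS = {
    # pod: (replicas, vcpu_request, ram_request_gb)
    "ndbmysqld":             (2, 2, 2),
    "ndbmgmd":               (3, 2, 2),
    "ndbmtd":                (4, 3, 4),
    "db-backup-manager-svc": (1, 1, 1),
    "db-replication-svc":    (1, 1, 1),
    "db-monitor-svc":        (1, 1, 1),
    "ndbappmysqld":          (2, 3, 4),
}

total_vcpu = sum(replicas * cpu for replicas, cpu, _ in CNDBTIER_PODS.values())
total_ram_gb = sum(replicas * ram for replicas, _, ram in CNDBTIER_PODS.values())
print(f"cnDBTier requests: {total_vcpu} vCPU, {total_ram_gb} GB RAM")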

Table 2-9 Test Bed 2 - cnDBTier Resources

cnDBTier Pods | Replica | vCPU (Request/Limit) | RAM in GB (Request/Limit)
SQL - Replication (ndbmysqld) | 2 | 4/4 | 10/10
MGMT (ndbmgmd) StatefulSet | 3 | 2/2 | 7/7
DB (ndbmtd) StatefulSet | 4 | 3/3 | 7/7
SQL - Access (ndbappmysqld) | 2 | 4/4 | 8/8
db-backup-manager-svc | 1 | 0.1/0.1 | 0.128/0.128
db-monitor-svc | 1 | 0.2/0.2 | 0.5/0.5

2.2.3 SCP Resources

The following table provides information about resource requirements to perform SCP benchmark tests:

Table 2-10 SCP Resources

Microservice Name | vCPU/Pod (Min/Max) | Memory/Pod in Gi (Min/Max)
Helm test | 3/3 | 3/3
Helm Hook | 3/3 | 3/3
scpc-subscription | 1/1 | 1/1
scpc-notification | 4/4 | 4/4
scpc-audit | 3/3 | 4/4
scpc-configuration | 2/2 | 2/2
scp-cache | 8/8 | 8/8
scp-loadmanager | 8/8 | 8/8
scp-nrfproxy | 8/8 | 8/8
scp-worker (Profile 1) | 4/4 | 8/8
scp-worker (Profile 2) | 8/8 | 12/12
scp-worker (Profile 3) | 12/12 | 16/16
scp-mediation | 8/8 | 8/8
scp-nrfproxy-oauth | 8/8 | 8/8
scpc-alternate-resolution | 2/2 | 2/2
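
For reference, the scp-worker entries in Table 2-10 define three resource profiles. The following Python sketch maps each profile to a Kubernetes-style requests and limits block; selecting a profile programmatically is illustrative only, as these values are typically set in the SCP Helm chart's custom values file.

# Minimal sketch: map the scp-worker profiles in Table 2-10 to Kubernetes-style resource blocks.
SCP_WORKER_PROFILES = {
    # profile: (vcpu_per_pod, memory_gi_per_pod); Min and Max are equal in Table 2-10
    "Profile 1": (4, 8),
    "Profile 2": (8, 12),
    "Profile 3": (12, 16),
}

def worker_resources(profile: str) -> dict:
    """Return a requests/limits block for the given scp-worker profile."""
    vcpu, mem_gi = SCP_WORKER_PROFILES[profile]
    return {
        "requests": {"cpu": str(vcpu), "memory": f"{mem_gi}Gi"},
        "limits":   {"cpu": str(vcpu), "memory": f"{mem_gi}Gi"},
    }

if __name__ == "__main__":
    print(worker_resources("Profile 2"))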