2 Deployment Environment

This section provides information about the cloud native platform used for SCP benchmarking.

2.1 Deployed Components

Deployment Platform

Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) 24.2.0 and CNE on BareMetal 24.1.0 can be used to perform the benchmark tests.

Observability Services

The following table lists services that are part of CNE and used for fetching SCP metrics.

Table 2-1 Observability Services

Service Name | Version
Fluentd | 1.16.2
Grafana | 1.26.1
Jaeger | 1.52.0
Kibana | 7.9.3
Oracle OpenSearch | 2.3.0
Oracle OpenSearch Dashboard | 2.3.0
Prometheus | 1.7.0
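
As an illustration of how these services are used to fetch SCP metrics, the following minimal Python sketch runs an instant query against the Prometheus HTTP API. The Prometheus endpoint and the metric name are assumptions made for this example and must be replaced with the values used in your deployment.

# Minimal sketch: pull an SCP rate metric from the CNE Prometheus HTTP API.
# The endpoint URL and the metric name below are placeholders (assumptions),
# not values defined by this guide.
import requests

PROMETHEUS_URL = "http://occne-prometheus-server.occne-infra:80"   # assumed service endpoint
QUERY = 'sum(rate(ocscp_metric_http_rx_req_total[5m]))'            # hypothetical SCP ingress rate query

def query_prometheus(expr: str) -> float:
    """Run an instant query and return the first sample value, or 0.0 if empty."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    print(f"Observed SCP ingress rate: {query_prometheus(QUERY):.1f} requests/second")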

Cloud Native Orchestrator

Kubernetes 1.28.6 is used to manage application pods across the cluster.

cnDBTier

cnDBTier 25.1.200 is used to perform benchmark tests.

For more information about the above-mentioned software, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

2.2 Deployment Resources

The performance and capacity of SCP can vary based on the chosen environment and how SCP is deployed. This section provides information about CNE and cnDBTier resources used to perform benchmark tests.

2.2.1 Cluster Details

The following table provides information about the types of servers and the number of servers used in the test environment:

Table 2-2 Test Bed 1 - CNE on BareMetal

Nodes | Type | Count
Primary Nodes | HP Gen10 RMS | 3
Worker Nodes | HP Gen10 Blades | 29
Worker Nodes | HP Gen8 Blades | 7
Top of Rack Switch | Cisco Nexus9000 93180YC-EX | 2
Enclosure Switch | HP 6120 | 2

The following table provides information about the number of pods required by each CNE service.

Table 2-3 CNE Common Services Observability Resources

Service Name | Number of Pods | RAM Request/Limit | vCPU Request/Limit | PVC Size Recommendation
Prometheus Server | 2 | 50Gi/50Gi | 12/12 | 150GB to 200GB
Prometheus-pushgateway | 1 | 32Mi/32Mi | 10m/10m | NA
Alert Manager | 2 | 164Mi/164Mi | 40m/40m | NA
Fluentd | 1 per Worker Node | 200Mi/500Mi | 100m/100m | NA
Prom-node-exporter | 1 per Worker Node | 512Mi/512Mi | 800m/800m | NA
MetalLB speaker | 1 per Worker Node | 100Mi/100Mi | 100m/100m | NA
OpenSearch Data | 3/3 | 32Gi/32Gi (JVM 16) | 2/2 | 300GB
OpenSearch Master | 3/3 | 16Gi/16Gi (JVM 8) | 1/1 | 300GB
ISM Policy | 3/3 | 128Mi/128Mi | 100m/100m | NA
OpenSearch Client | 1 | 128Mi/128Mi | 100m/100m | NA
Grafana | 1 | 500Mi/500Mi | 500m/500m | NA
Kibana | 1 | 500Mi/1Gi | 100m/1 | NA
kube-state-metrics | 1 | 200Mi/200Mi | 50m/50m | NA
jaeger-agent | 1 per Worker Node | 128Mi/512Mi | 256m/500m | NA
jaeger-collector | 1 | 512Mi/1Gi | 500m/1250m | NA
jaeger-query | 1 | 128Mi/512Mi | 256m/500m | NA
rook-ceph-osd | 1 for each raw disk available to the OS on all Worker Nodes | 1Gi/8Gi | 500m/1 | NA
rook-ceph-mgr | 1 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-mon | 3 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-operator | 1 | 2Gi/2Gi | 100m/500m | NA
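
To put the table into context, the following Python sketch adds up the CPU and memory requests it lists for a cluster of the size described in Table 2-2. The worker node count and the exclusion of rook-ceph-osd (whose pod count depends on the number of raw disks) are assumptions made for this rough estimate.

# Rough sizing sketch based on the request values in Table 2-3.
WORKER_NODES = 36  # 29 HP Gen10 blades + 7 HP Gen8 blades (Table 2-2)

# Fixed-count services: (name, pods, CPU request per pod in cores, memory request per pod in GiB).
# Pod counts shown as "3/3" in the table are read as 3 pods; Mi values are rounded to GiB.
fixed = [
    ("Prometheus Server",      2, 12.0,  50.0),
    ("Prometheus-pushgateway", 1, 0.01,  0.03),
    ("Alert Manager",          2, 0.04,  0.16),
    ("OpenSearch Data",        3, 2.0,   32.0),
    ("OpenSearch Master",      3, 1.0,   16.0),
    ("ISM Policy",             3, 0.1,   0.125),
    ("OpenSearch Client",      1, 0.1,   0.125),
    ("Grafana",                1, 0.5,   0.5),
    ("Kibana",                 1, 0.1,   0.5),
    ("kube-state-metrics",     1, 0.05,  0.2),
    ("jaeger-collector",       1, 0.5,   0.5),
    ("jaeger-query",           1, 0.256, 0.125),
    ("rook-ceph-mgr",          1, 0.5,   1.0),
    ("rook-ceph-mon",          3, 0.5,   1.0),
    ("rook-ceph-operator",     1, 0.1,   2.0),
]

# Services that run once per worker node: (name, CPU request in cores, memory request in GiB).
per_node = [
    ("Fluentd",            0.1,   0.2),
    ("Prom-node-exporter", 0.8,   0.5),
    ("MetalLB speaker",    0.1,   0.1),
    ("jaeger-agent",       0.256, 0.125),
]

total_cpu = sum(p * c for _, p, c, _ in fixed) + WORKER_NODES * sum(c for _, c, _ in per_node)
total_mem = sum(p * m for _, p, _, m in fixed) + WORKER_NODES * sum(m for _, _, m in per_node)
print(f"Approximate observability request footprint: {total_cpu:.1f} vCPU, {total_mem:.1f} GiB")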

Table 2-4 Test Bed 2 - VMware Tanzu

Nodes | Type | Count
Primary Nodes | VM (8 CPU and 64 GB Memory) | 3
Worker Nodes | VM (32 CPU and 128 GB Memory) | 51
Underlying Hardware | Cisco Nexus9000 93180YC-EX | 19

Table 2-5 Test Bed 3 - CNE on BareMetal

Nodes | Type | Count
Primary Nodes | X9 Server and NVME | 3
Worker Nodes | X9 Server and NVME | 17

Table 2-6 Test Bed 4 - CNE on BareMetal

Nodes | Type | Count
Primary Nodes | ORACLE SERVER X8-2 | 3
Worker Nodes | ORACLE SERVER X8-2 | 45
Top of Rack Switch | Cisco 93108TC-EX | 2

Table 2-7 Test Bed 5 - vCNE on OpenStack

Nodes | Type | Count
Master Nodes | ORACLE SERVER X8-2 | 3
Worker Nodes | ORACLE SERVER X8-2 | 42
Top of Rack Switch | Cisco 93108TC-EX | 2

Table 2-8 Test Bed 5 - LBVM Resources used for vCNE

Resources | Values
Number of LBVMs | 4
RAM | 16 GB
vCPU | 4
Disk Size | 40 GB

Note: There are two pairs of LBVMs in this setup, with 2 LBVMs in each pair. For testing purposes, only one LBVM pair was used, with one LBVM operating in active mode and the other in standby mode.

The following table provides information about the number of pods required by each CNE service.

Table 2-9 CNE Common Services Observability Resources

Service Name | Number of Pods | RAM Request/Limit | vCPU Request/Limit | PVC Size Recommendation
Prometheus Server | 2 | 50Gi/50Gi | 12/12 | 150GB to 200GB
Alert Manager | 2 | 64Mi/64Mi | 40m/40m | NA
Fluentd | 1 per Worker Node | 4Gi/4Gi | 400m/500m | NA
Prom-node-exporter | 1 per Worker Node | 512Mi/512Mi | 800m/800m | NA
Grafana | 1 | 2Gi/2Gi | 2000m/2000m | NA
jaeger-agent | 1 per Worker Node | 128Mi/512Mi | 256m/500m | NA
jaeger-collector | 1 | 512Mi/1Gi | 500m/1250m | NA
jaeger-query | 1 | 128Mi/512Mi | 256m/500m | NA
rook-ceph-osd | 1 for each raw disk available to the OS on all Worker Nodes | 1Gi/8Gi | 500m/1 | NA
rook-ceph-mgr | 1 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-mon | 3 | 1Gi/1Gi | 500m/500m | NA
rook-ceph-operator | 1 | 2Gi/2Gi | 100m/500m | NA

2.2.2 cnDBTier Resources

The following tables provide information about the cnDBTier resources required to perform SCP benchmark tests for both non-ASM and ASM setups:

Table 2-10 cnDBTier Resources (Non-ASM)

Service Name | CPU/Pod (Min/Max) | Memory/Pod in GB (Min/Max) | PVC Size in GB (PVC 1/PVC 2) | Ephemeral Storage in MB (Min/Max) | Sidecar CPU/Pod (Min/Max) | Sidecar Memory/Pod in GB (Min/Max) | Sidecar Ephemeral Storage in MB (Min/Max)
MGMT (ndbmgmd) | 2/2 | 4/5 | 14/NA | 90/1000 | 0.2/0.2 | 0.256/0.256 | 90/1000
DB (ndbmtd) | 2/2 | 8/8 | 15/8 | 90/1000 | 1.2/1.2 | 2.256/2.256 | 180/3000
SQL - Replication (ndbmysqld) | 4/4 | 10/10 | 25/NA | 90/1000 | 0.3/0.3 | 0.512/0.512 | 180/2000
SQL - Access (ndbappmysqld) | 4/4 | 8/8 | 20/NA | 90/1000 | 0.3/0.3 | 0.512/0.512 | 180/2000
Monitor Service (db-monitor-svc) | 4/4 | 4/4 | 0/NA | 90/1000 | 0/0 | 0/0 | 0/0
db-connectivity-service | 0/0 | 0/0 | 0/NA | 0/0 | NA/NA | NA/NA | NA/NA
Replication Service - Leader (db-replication-svc) | 2/2 | 12/12 | 190/NA | 90/1000 | 0.2/0.2 | 0.5/0.5 | 90/1000
Replication Service - Other (db-replication-svc) | 0.6/1 | 1/2 | NA/NA | 90/1000 | 0.2/0.2 | 0.5/0.5 | NA/NA
Backup Manager Service (db-backup-manager-svc) | 1/1 | 1/1 | 0/NA | 90/1000 | 0/0 | 0/0 | 0/0

Table 2-11 cnDBTier Resources (ASM)

Service Name | CPU/Pod (Min/Max) | Memory/Pod in GB (Min/Max) | PVC Size in GB (PVC 1/PVC 2) | Ephemeral Storage in MB (Min/Max) | Sidecar CPU/Pod (Min/Max) | Sidecar Memory/Pod in GB (Min/Max) | Sidecar Ephemeral Storage in MB (Min/Max)
MGMT (ndbmgmd) | 2/2 | 4/5 | 14/NA | 90/1000 | 1.2/1.2 | 1.256/1.256 | 90/1000
DB (ndbmtd) | 2/2 | 8/8 | 15/8 | 90/1000 | 2.2/2.2 | 3.256/3.256 | 180/3000
SQL - Replication (ndbmysqld) | 4/4 | 10/10 | 25/NA | 90/1000 | 2.3/2.3 | 2.512/2.512 | 180/2000
SQL - Access (ndbappmysqld) | 4/4 | 8/8 | 20/NA | 90/1000 | 2.3/2.3 | 2.512/2.512 | 180/2000
Monitor Service (db-monitor-svc) | 4/4 | 4/4 | 0/NA | 90/1000 | 1/1 | 1/1 | 0/0
db-connectivity-service | 0/0 | 0/0 | 0/NA | 0/0 | NA/NA | NA/NA | NA/NA
Replication Service - Leader (db-replication-svc) | 2/2 | 12/12 | 190/NA | 90/1000 | 1.2/1.2 | 1.5/1.5 | 90/1000
Replication Service - Other (db-replication-svc) | 0.6/1 | 1/2 | NA/NA | 90/1000 | 1.2/1.2 | 1.5/1.5 | NA/NA
Backup Manager Service (db-backup-manager-svc) | 1/1 | 1/1 | 0/NA | 90/1000 | 1/1 | 1/1 | 0/0
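
As an illustration of how a row from these tables translates into pod resource settings, the following Python sketch builds Kubernetes-style requests and limits for the DB (ndbmtd) pod and its sidecar from Table 2-10, assuming that the Min and Max columns map to resource requests and limits respectively. The container names and the decimal unit suffixes are assumptions; actual values are configured through the cnDBTier Helm chart.

# Illustrative sketch only: expressing the Table 2-10 DB (ndbmtd) row as
# Kubernetes container resource requests/limits. Min -> requests, Max -> limits
# is an assumption; unit suffixes follow the table's GB/MB columns.
import json

ndbmtd_resources = {
    "requests": {"cpu": "2", "memory": "8G", "ephemeral-storage": "90M"},
    "limits":   {"cpu": "2", "memory": "8G", "ephemeral-storage": "1000M"},
}
ndbmtd_sidecar_resources = {
    "requests": {"cpu": "1.2", "memory": "2.256G", "ephemeral-storage": "180M"},
    "limits":   {"cpu": "1.2", "memory": "2.256G", "ephemeral-storage": "3000M"},
}

# Print the stanza in JSON, which can be pasted into a manifest or Helm override file.
print(json.dumps({"ndbmtd": ndbmtd_resources, "sidecar": ndbmtd_sidecar_resources}, indent=2))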

2.2.3 SCP Resources

The following table provides information about resource requirements to perform SCP benchmark tests:

Table 2-12 SCP Resources

Microservice Name | vCPU/Pod (Min/Max) | Memory/Pod in Gi (Min/Max)
Helm test | 3/3 | 3/3
Helm Hook | 3/3 | 3/3
scpc-subscription | 1/1 | 1/1
scpc-notification | 4/4 | 4/4
scpc-audit | 3/3 | 4/4
scpc-configuration | 2/2 | 2/2
scp-cache | 8/8 | 8/8
scp-loadmanager | 8/8 | 8/8
scp-nrfproxy | 8/8 | 8/8
scp-worker (Profile 1) | 4/4 | 8/8
scp-worker (Profile 2) | 8/8 | 12/12
scp-worker (Profile 3) | 12/12 | 16/16
scp-mediation | 8/8 | 8/8
scp-nrfproxy-oauth | 8/8 | 8/8
scpc-alternate-resolution | 2/2 | 2/2
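
As a rough illustration of how this table can be used for dimensioning, the following Python sketch estimates the total vCPU and memory footprint of an SCP deployment from the per-pod maximums above. The replica counts are hypothetical examples; actual counts depend on the traffic profile and are not part of this table.

# Back-of-the-envelope sketch using the per-pod maximums from Table 2-12.
per_pod_max = {  # microservice: (max vCPU/pod, max Gi/pod)
    "scpc-subscription": (1, 1),
    "scpc-notification": (4, 4),
    "scpc-audit": (3, 4),
    "scpc-configuration": (2, 2),
    "scp-cache": (8, 8),
    "scp-loadmanager": (8, 8),
    "scp-nrfproxy": (8, 8),
    "scp-worker (Profile 2)": (8, 12),
    "scp-mediation": (8, 8),
    "scpc-alternate-resolution": (2, 2),
}

# Hypothetical replica counts: two pods of each service and a larger scp-worker fleet.
replicas = {name: 2 for name in per_pod_max}
replicas["scp-worker (Profile 2)"] = 16

total_cpu = sum(per_pod_max[n][0] * r for n, r in replicas.items())
total_mem = sum(per_pod_max[n][1] * r for n, r in replicas.items())
print(f"Estimated SCP footprint: {total_cpu} vCPU, {total_mem} Gi memory")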