2 Deployment Environment

This section provides information about the NF deployment platform, such as CNE, the services used for fetching counters or metrics, and the software requirements for NEF benchmark testing.

2.1 Deployed Components

Deployment Platform

Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) 24.2.0 and Bare Metal CNE 24.2.0 can be used for performing benchmark tests for NEF deployment.

Observability Services

The following table lists services that are part of CNE and used for fetching NEF metrics.

Table 2-1 Observability Services

Service Name Version
OpenSearch 2.11.0
OpenSearch Dashboard 2.11.0
logs 3.1.0
Kyverno 1.9
Fluentd 3.1.0
Prometheus 2.51.1
prometheus-kube-state-metrics 2.9.2
prometheus-node-exporter 1.5.0
Grafana 9.5.3
Jaeger 1.52.0
MetalLB 0.14.4
metrics-server 0.6.1
tracer 1.21.0
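
As an illustration of how these services can be used to fetch NEF metrics, the following minimal Python sketch queries the Prometheus HTTP API for per-pod CPU usage. The Prometheus service address and the "ocnef" namespace label are assumptions for illustration only and must be adjusted to match the actual deployment.

    # Illustrative sketch: query the CNE Prometheus server for per-pod CPU usage.
    # PROMETHEUS_URL and the "ocnef" namespace label are assumed values.
    import requests

    PROMETHEUS_URL = "http://prometheus-server.occne-infra:80"  # assumed service address

    # container_cpu_usage_seconds_total is a standard cAdvisor metric scraped by
    # the Prometheus service listed in Table 2-1.
    query = 'sum(rate(container_cpu_usage_seconds_total{namespace="ocnef"}[5m])) by (pod)'

    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()

    for result in resp.json()["data"]["result"]:
        print(result["metric"].get("pod", "unknown"), result["value"][1])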

Cloud Native Orchestrator

Kubernetes 1.29.x is used to manage application pods across the cluster.

cnDBTier

cnDBTier 24.2.0 is used to perform benchmark tests.

For more information about the above-mentioned software, see Oracle Communications Cloud Native Core, Network Exposure Function Installation, Upgrade, and Fault Recovery Guide.

2.2 Deployment Resources

The performance and capacity of NEF can vary based on the chosen environment and how NEF is deployed. This section provides information about CNE and cnDBTier resources used to perform benchmark tests.

2.2.1 CNE Cluster Details

The following table provides information about the types of servers and the number of servers used in Bare Metal CNE clusters.

Table 2-2 Bare Metal CNE

Nodes Type Count
Worker Nodes HP ProLiant BL460c Gen8 12
Primary Nodes HP ProLiant BL460c Gen8 3

2.2.1.1 CNE Common Services Observability Resources

The following table provides information about the number of pods and the resources required by each CNE common service.

Table 2-3 CNE Common Services Observability Resources

Service Name No. of Pods RAM Request/Limit vCPU Request/Limit PVC Size - Recommendation
Prometheus Server 2 4Gi/4Gi 2/2 8Gi
Alert Manager 2 64Mi/64Mi 20m/20m NA
Fluentd 1 per Worker Node 512Mi/1Gi 100m/200m NA
Prom-node-exporter 1 per Worker Node 512Mi/512Mi 800m/800m NA
MetalLB speaker 1 per Worker Node 100Mi/100Mi 100m/100m NA
ES Data 3 16Gi/16Gi 1/1 10Gi
ES Master 3 2Gi/2Gi 1/1 30Gi
ES Curator 1 128Mi/128Mi 100m/100m NA
ES-exporter 1 128Mi/128Mi 100m/100m NA
Grafana 1 128Mi/128Mi 100m/100m NA
Kibana 1 500Mi/1Gi 100m/1 NA
kube-state-metrics 1 32Mi/100Mi 20m/20m NA
jaeger-agent 12 128Mi/512Mi 256m/500m NA
jaeger-collector 1 512Mi/1Gi 500m/1250m NA
jaeger-query 1 128Mi/512Mi 256m/500m NA
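
Several of the services in Table 2-3 run one pod per worker node, so their total footprint scales with the cluster size given in Table 2-2. The following illustrative Python snippet totals the per-node CPU and RAM requests for the 12 worker nodes used in these tests; it is only a rough estimate derived from the request values in the table, not a sizing tool.

    # Rough footprint estimate for the per-worker-node observability services in
    # Table 2-3, scaled to the 12 worker nodes of Table 2-2 (illustrative only).
    WORKER_NODES = 12

    # (service, cpu_request_millicores, ram_request_MiB), one pod per worker node
    per_node_services = [
        ("Fluentd", 100, 512),
        ("Prom-node-exporter", 800, 512),
        ("MetalLB speaker", 100, 100),
        ("jaeger-agent", 256, 128),
    ]

    total_cpu_m = sum(cpu for _, cpu, _ in per_node_services) * WORKER_NODES
    total_ram_mi = sum(ram for _, _, ram in per_node_services) * WORKER_NODES

    print(f"Per-node observability services request ~{total_cpu_m / 1000:.1f} vCPU "
          f"and ~{total_ram_mi / 1024:.1f} Gi across {WORKER_NODES} worker nodes")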

2.2.2 cnDBTier Resources

The following table describes resources required by cnDBTier 24.2.0 pods to perform NEF benchmark tests.

Table 2-4 cnDBTier Resources

DB Tier Pods Replica vCPU Request vCPU Limit RAM Request RAM Limit Storage PVC
ndbappmysqld-n 2 8 8 10Gi 10Gi 20Gi
MGMT (ndbmgmd-n) StatefulSet 2 4 4 8Gi 10Gi 15Gi
DB (ndbmtd-n) StatefulSet 4 10 10 16Gi 18Gi 60Gi
SQL (ndbmysqld-n) StatefulSet 2 8 8 10Gi 10Gi 256Gi
nef-db-cluster-db-backup-manager-svc 1 100m 100m 128Mi 128Mi NA
nef-db-cluster-db-monitor-svc 1 1 2 500Mi 500Mi NA
mysql-cluster-site1-site2-replication-svc 1 2 2 12G 12G NA

2.2.3 NEF Resources

The following table provides information about resource requirements to perform NEF benchmark tests.

Table 2-5 NEF Resources

Microservice Name CPU Request and Limit per POD (A) Memory Request and Limit per POD (B) Scaling Criteria CPU Usage % Default POD Count (C) Maximum POD Count with Scaling (D) Maximum CPU Total for Default PODs (A*C) Maximum Memory Total for Default PODs (B*C) Maximum CPU Total for all PODs (with full scaling and surge) (A*D) Maximum Memory Total for all PODs (with full scaling and surge) (B*D)
5GC Agent 4 4Gi 60 5 5 20 20Gi 20 20Gi
5GC Egress Gateway 4 4Gi 60 5 5 20 20Gi 20 20Gi
5GC Ingress Gateway 4 4Gi 60 5 5 20 20Gi 20 20Gi
Common Config Hook 1 1Gi 70 1 1 1 1Gi 1 1Gi
APD Manager 4 4Gi 80 5 5 20 20Gi 20 20Gi
API Router 4 4Gi 80 5 5 20 20Gi 20 20Gi
App-Info 1 1Gi 70 5 5 5 5Gi 5 5Gi
CCF Client 2 2Gi 60 5 5 10 10Gi 10 10Gi
Config-Server 1 1Gi 70 5 5 5 5Gi 5 5Gi
Diameter Gateway 4 4Gi No HPA support 5 5 20 20Gi 20 20Gi
Expiry Auditor 4 4Gi 60 5 5 20 20Gi 20 20Gi
External Egress Gateway 4 4Gi 80 5 5 20 20Gi 20 20Gi
External Ingress Gateway 4 4Gi 80 5 5 20 20Gi 20 20Gi
ME Service 4 4Gi 70 5 5 20 20Gi 20 20Gi
NRF Client 1 1Gi 70 5 5 5 5Gi 5 5Gi
Perf-Info 1 1Gi 70 5 5 5 5Gi 5 5Gi
QoS Service 4 4Gi 70 5 5 20 20Gi 20 20Gi
Traffic Influence 4 4Gi 70 5 5 20 20Gi 20 20Gi
Device Trigger 1 1Gi 70 1 1 1 1Gi 1 1Gi
Pool Manager 4 4Gi 70 1 1 4 4Gi 4 4Gi
MSISDNless MO SMS 4 4Gi 70 1 12 4 4Gi 4 4Gi
Console Data Service 4 4Gi 70 1 12 4 4Gi 4 4Gi
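
The totals in the last four columns are simple products of the per-pod values and the pod counts. The following Python sketch reproduces the calculation for the 5GC Ingress Gateway row as a worked example; the function name is used only for illustration.

    # Worked example of the derived columns in Table 2-5.
    # A = CPU per pod, B = memory per pod (Gi), C = default pod count,
    # D = maximum pod count with scaling.
    def resource_totals(cpu_per_pod, mem_gi_per_pod, default_pods, max_pods):
        return {
            "cpu_default_total": cpu_per_pod * default_pods,         # A*C
            "mem_default_total_gi": mem_gi_per_pod * default_pods,   # B*C
            "cpu_max_total": cpu_per_pod * max_pods,                  # A*D
            "mem_max_total_gi": mem_gi_per_pod * max_pods,            # B*D
        }

    # 5GC Ingress Gateway row: A=4 vCPU, B=4 Gi, C=5, D=5
    print(resource_totals(4, 4, 5, 5))
    # {'cpu_default_total': 20, 'mem_default_total_gi': 20,
    #  'cpu_max_total': 20, 'mem_max_total_gi': 20}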

Note:

Horizontal Pod Autoscaling (HPA) is not supported for Diameter Gateway; the number of pods must be configured at the time of installation or upgrade.