2 Deployment Environment

This section provides information about the cloud native platform requirements for deploying Oracle Communications Cloud Native Core, Network Slice Selection Function (NSSF).

Note:

The performance and capacity of the NSSF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.

2.1 Deployed Components

This section provides details about the deployed components.

2.1.1 Hardware Details

This section describes the hardware details.

Table 2-1 CNE Cluster Details

Nodes Server Model Count
Master Nodes ORACLE SERVER X8-2 3
Worker Nodes ORACLE SERVER X8-2 40

Note:

The CNE clusters used for performance benchmarking were deployed in a shared model.

2.1.2 System Software

This section describes the system software details.

Table 2-2 System Software

System Software Details
Operating System (+ Kernel Version) Oracle Linux Server 9.5 (kernel 5.15.0-306.177.4.el9uek.x86_64)
Hypervisor Bare Metal Server
CNE 25.1.2xx
OSO 25.2.2xx
Kubernetes 1.32.x
ASM 1.14.6
Podman 5.2.2

2.1.3 Observability Services

This section describes the required observability services.

Table 2-3 Observability Services

Software Version
AlertManager 0.28.0
Calico 3.30.3
cinder-csi-plugin 1.33.0
containerd 2.1.4
CoreDNS 1.12.0
Fluentd 1.17.1
Grafana 7.5.17
Jaeger 1.72.0
Kyverno 1.15.0
MetalLB 0.15.2
metrics-server 0.7.2
Multus 4.2.1-thick
OpenSearch 2.19.1
OpenSearch Dashboard 2.19.1
Prometheus 3.6.0
prometheus-kube-state-metric 2.17.0
prometheus-node-exporter 1.10.2
Prometheus Operator 0.85.0
rook 1.17.7
snmp-notifier 2.0.0
Velero 1.16.2

Note:

The CNE clusters used for performance benchmarking were deployed in a shared model; hence, not all the components listed here are utilized by NSSF.

2.2 Resource Profile

This section describes the resource profile for common applications, NSSF microservices, and cnDBTier.

2.2.1 CNE Common Applications

The CPU and RAM resources that each common service provided by CNE consumes are constrained, so that they do not consume excess resources that could be used by applications. Each service is given an initial CPU and RAM allocation when it is deployed and is allowed to grow to a specified upper limit of each resource while it continues to run. For services where little growth is expected, or where increasing the CPU/RAM underneath a running application might cause an unacceptable service disruption, the initial allocation and upper limit are set to the same value. The resource requests and limits are given below:
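In Kubernetes terms, setting the initial allocation and upper limit to the same value means the container's resource request equals its limit. The following is a minimal illustrative sketch of such a specification; the values mirror the occne-kube-prom-stack-grafana row in Table 2-4, and the pod and image names are placeholders, not actual CNE manifests:

```yaml
# Illustrative sketch only: shows a container whose resource request
# (initial allocation) equals its limit (upper bound), as described above.
# Values taken from the occne-kube-prom-stack-grafana row in Table 2-4.
apiVersion: v1
kind: Pod
metadata:
  name: grafana-example        # placeholder name
spec:
  containers:
    - name: grafana
      image: grafana/grafana   # placeholder image
      resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 500m            # equal to the request: no growth permitted
          memory: 500Mi
```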

The following table lists the resource profile for CNE common applications:

Table 2-4 CNE Common Applications

Service Container Min CPU Max CPU Min Memory Max Memory
occne-cert-exporter-cert-manager cert-exporter 100m 200m 128Mi 256Mi
occne-fluentd-opensearch fluentd 100m 500m 1Gi 1Gi
occne-kube-prom-stack-prometheus-node-exporter node-exporter 800m 800m 512Mi 512Mi
occne-bastion-controller bastion-controller 10m 200m 128Mi 256Mi
occne-kube-prom-stack-grafana grafana 500m 500m 500Mi 500Mi
occne-kube-prom-stack-kube-operator kube-prometheus-stack 100m 200m 100Mi 200Mi
occne-kube-prom-stack-kube-state-metrics kube-state-metrics 20m 40m 32Mi 256Mi
occne-metrics-server metrics-server 100m 100m 200Mi 200Mi
occne-opensearch-dashboards dashboards 100m 100m 512Mi 512Mi
occne-promxy promxy 100m 100m 512Mi 512Mi
occne-promxy-apigw-nginx nginx 1 2 1Gi 1536Mi
occne-tracer-jaeger-collector occne-tracer-jaeger-collector 500m 1250m 512Mi 1Gi
occne-tracer-jaeger-query occne-tracer-jaeger-query 256m 500m 128Mi 512Mi
alertmanager-occne-kube-prom-stack-kube-alertmanager alertmanager 20m 20m 64Mi 64Mi
alertmanager-occne-kube-prom-stack-kube-alertmanager config-reloader 100m 100m 50Mi 50Mi
occne-opensearch-cluster-client opensearch 1 1 2Gi 2Gi
occne-opensearch-cluster-data opensearch 200m 200m 10Gi 10Gi
occne-opensearch-cluster-master opensearch 1 1 2Gi 2Gi
prometheus-occne-kube-prom-stack-kube-prometheus prometheus 12 12 55Gi 55Gi
prometheus-occne-kube-prom-stack-kube-prometheus config-reloader 100m 100m 50Mi 50Mi

Note:

The overall common services resource usage varies on each worker node. The common services listed above are distributed evenly across all worker nodes in the CNE Kubernetes cluster.

2.2.2 Application Microservices

Resources need to be adjusted or tuned based on performance requirements or ongoing benchmark testing.

This section lists the resource requirements to install and run NSSF.

NSSF Services

The following table lists the resource requirements for NSSF services:

Table 2-5 NSSF Services

Service Replicas Min CPU Max CPU Min Memory Max Memory Min Ephemeral Storage Max Ephemeral Storage
<helm-release-name>-alternate-route 2 2 2 4Gi 4Gi 78Mi 1Gi
<helm-release-name>-appinfo 1 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-egress-gateway 2 4 4 4Gi 4Gi 78Mi 1Gi
<helm-release-name>-ingress-gateway 36 6 6 6Gi 6Gi 78Mi 1Gi
<helm-release-name>-nsauditor 1 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-nsavailability 2 4 4 4Gi 4Gi 78Mi 1Gi
<helm-release-name>-nsconfig 1 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-nsselection 8 8 8 8Gi 8Gi 78Mi 1Gi
<helm-release-name>-nssubscription 1 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-<helm-release-name>-nrf-client-nfmanagement 2 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-config-server 1 2 2 2Gi 2Gi 78Mi 1Gi
<helm-release-name>-perf-info 1 2 2 2Gi 2Gi 78Mi 6Gi
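Resource profiles such as the one above are typically applied through the Helm custom values file at deployment or upgrade time. The fragment below is a hypothetical sketch only: the key names (nsselection, minReplicas, maxReplicas, resources) are illustrative and do not reflect the actual NSSF chart schema, which is defined in the Installation, Upgrade, and Fault Recovery Guide. The values mirror the nsselection row in Table 2-5:

```yaml
# Hypothetical Helm values fragment. Key names are illustrative, not the
# actual NSSF chart schema; consult the installation guide for real keys.
nsselection:
  minReplicas: 8
  maxReplicas: 8
  resources:
    requests:
      cpu: 8
      memory: 8Gi
      ephemeral-storage: 78Mi
    limits:
      cpu: 8
      memory: 8Gi
      ephemeral-storage: 1Gi
```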

ASM Sidecar

NSSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication. If ASM sidecar injection is enabled during NSSF deployment or upgrade, this container is injected into each NSSF pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist for the lifetime of the pod or deployment. For more information about installing ASM, see Configuring NSSF to Support Aspen Service Mesh in Oracle Communications Cloud Native Core, Network Slice Selection Function Installation, Upgrade, and Fault Recovery Guide.
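Because ASM is Istio-based, sidecar injection is commonly controlled through the standard Istio namespace label, sketched below. This is an assumption for illustration; the exact labels and annotations for your ASM version are given in the installation guide:

```yaml
# Illustrative sketch using standard Istio injection controls (ASM is
# Istio-based). Verify the exact mechanism in the NSSF installation guide.
apiVersion: v1
kind: Namespace
metadata:
  name: ocnssf                     # deployment namespace (placeholder)
  labels:
    istio-injection: enabled       # inject the sidecar into pods in this namespace
```

Individual pods can typically be excluded from injection with the standard `sidecar.istio.io/inject: "false"` pod annotation, which corresponds to the "selected pods" option mentioned above.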

Table 2-6 ASM Sidecar

Service Replicas Min CPU Max CPU Min Memory Max Memory
<helm-release-name>-alternate-route 2 2 2 2Gi 2Gi
<helm-release-name>-appinfo 1 2 2 2Gi 2Gi
<helm-release-name>-egress-gateway 2 2 2 2Gi 2Gi
<helm-release-name>-ingress-gateway 36 4 4 2Gi 2Gi
<helm-release-name>-nsauditor 1 2 2 2Gi 2Gi
<helm-release-name>-nsavailability 2 2 2 2Gi 2Gi
<helm-release-name>-nsconfig 1 2 2 2Gi 2Gi
<helm-release-name>-nsselection 8 12 12 6Gi 6Gi
<helm-release-name>-nssubscription 1 2 2 2Gi 2Gi
<helm-release-name>-<helm-release-name>-nrf-client-nfmanagement 2 2 2 2Gi 2Gi
<helm-release-name>-config-server 1 2 2 2Gi 2Gi
<helm-release-name>-perf-info 1 2 2 2Gi 2Gi

Note:

<helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnssf", then the nsselection microservice name is "ocnssf-nsselection".

2.2.3 cnDBTier Resource Profile

The following table describes the resources required by cnDBTier pods to run NSSF benchmark tests.

cnDBTier Services

Table 2-7 cnDBTier Services

Pod Container Replicas Min CPU Max CPU Min Memory Max Memory Min Ephemeral Storage Max Ephemeral Storage PVC
mysql-cluster-db-backup-manager-svc db-backup-manager-svc 1 2 2 1Gi 1Gi 90Mi 1Gi -
mysql-cluster-db-monitor-svc db-monitor-svc 1 4 4 4Gi 4Gi 90Mi 1Gi -
mysql-cluster-site1-site2-replication-svc site1-site2-replication-svc 1 2 2 12Gi 12Gi 90Mi 1Gi 44Gi
mysql-cluster-site1-site2-replication-svc db-infra-monitor-svc 1 200m 200m 256Mi 256Mi 90Mi 1Gi -
mysql-cluster-site1-site3-replication-svc site1-site3-replication-svc 1 2 2 2Gi 2Gi 90Mi 1Gi -
ndbappmysqld mysqlndbcluster 2 8 8 3Gi 3Gi 90Mi 1Gi 4Gi
ndbappmysqld db-infra-monitor-svc 2 200m 200m 256Mi 256Mi 90Mi 1Gi -
ndbappmysqld init-sidecar 2 100m 100m 256Mi 256Mi 90Mi 1Gi -
ndbmgmd mysqlndbcluster 2 4 4 10Gi 10Gi 90Mi 1Gi 16Gi
ndbmgmd db-infra-monitor-svc 2 200m 200m 256Mi 256Mi 90Mi 1Gi -
ndbmtd mysqlndbcluster 4 5 5 24Gi 24Gi 90Mi 1Gi 50Gi
ndbmtd db-backup-executor-svc 4 2 2 2Gi 2Gi 90Mi 1Gi 50Gi
ndbmtd db-infra-monitor-svc 4 200m 200m 256Mi 256Mi 90Mi 1Gi -
ndbmysqld mysqlndbcluster 4 5 5 20Gi 20Gi 90Mi 1Gi 256Gi
ndbmysqld init-sidecar 4 100m 100m 256Mi 256Mi 90Mi 1Gi -
ndbmysqld db-infra-monitor-svc 4 200m 200m 256Mi 256Mi 90Mi 1Gi -

Sidecar Resources

Table 2-8 Sidecar Resources

Pod Container Replicas Min CPU Max CPU Min Memory Max Memory Min Ephemeral Storage Max Ephemeral Storage
mysql-cluster-site1-site2-replication-svc istio-proxy 1 2 2 1Gi 1Gi - -
mysql-cluster-site1-site3-replication-svc istio-proxy 1 2 2 1Gi 1Gi - -
ndbmysqld istio-proxy 4 2 2 1Gi 1Gi - -