2 Prerequisites

Before installing and configuring NEF, ensure that the following prerequisites are met.

The NEF installation must be performed in the following order:
  1. Installing CAPIF
  2. Installing NEF

2.1 Software Requirements

This section lists the software that must be installed before installing NEF:

Table 2-1 Preinstalled Software

Software Version
Kubernetes 1.29.1, 1.28.6, 1.27.x
Helm 3.13.2
Podman 3.3.1, 3.2.3, 2.2.1

Note:

  • NEF 24.2.x supports CNE 24.1.x, 23.4.x, and 23.3.x.
  • NEF 24.2.x supports OKE managed clusters on OCI.
To check the versions of the preinstalled software in the cloud native environment, run the following commands:
kubectl version
helm version
podman version

The following software is available when NEF is deployed in CNE. If you are deploying NEF in any other cloud native environment, install this additional software before installing NEF.

To check the installed software, run the following command:
helm ls -A
The list of additional software items, along with the supported versions and usage, is provided in the following table:

Table 2-2 Additional Software

Software App Version Required For
OpenSearch 2.11.0 Logging
OpenSearch Dashboard 2.11.0 Logging
logs 3.1.0 Logging
Kyverno 1.9.0 Logging
Grafana 9.5.3 Metrics and KPIs
Prometheus 2.51.1 Metrics and Alerts
Prometheus Operator 0.72.0 Metrics
MetalLB 0.14.4 External IP
metrics-server 0.6.1 Metrics
tracer 1.21.0 Tracing
Jaeger 1.52.0 Tracing
snmp-notifier 1.4.0 Alerts

Note:

On OCI, the software mentioned above is not required because the OCI Observability and Management service is used for logging, metrics, alerts, and KPIs. For more information, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.

2.2 Environment Setup Requirements

This section describes the environment setup requirements for installing NEF.

2.2.1 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:
  • Local helm repository – It contains the CAPIF and NEF helm charts.

    To check if the Kubernetes cluster hosts can access the local helm repository, run the following command:

    helm repo update
  • Local docker image repository – It contains the CAPIF and NEF docker images.

    To check if the Kubernetes cluster hosts can access the local docker image repository, pull any image with an image-tag, using the following commands:

    docker pull <Docker-repo>/<image-name>:<image-tag>
    podman pull <Podman-repo>/<image-name>:<image-tag>

    where:

    <Docker-repo> is the IP address or host name of the Docker repository.

    <Podman-repo> is the IP address or host name of the Podman repository.

    <image-name> is the Docker image name.

    <image-tag> is the tag assigned to the Docker image used for the CAPIF and NEF pods.

For example:
docker pull CUSTOMER_REPO/ocnef_monitoring_events:24.2.2
podman pull docker-repo/ocnef_monitoring_events:24.2.2

Note:

Run the kubectl and helm commands on a system appropriate to the deployment infrastructure, for example, a client machine such as a VM, a server, or a local desktop.

2.2.2 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.

The client machine should have:
  • the Helm repository configured.
  • network access to the Helm repository and the Docker image repository.
  • network access to the Kubernetes cluster.
  • the environment settings required to run the kubectl, docker, and podman commands, including privileges to create a namespace in the Kubernetes cluster.
  • the Helm client installed with the push plugin. Configure the environment so that the helm install command deploys the software in the Kubernetes cluster.
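The checklist above can be sketched as a small shell script. This is only an illustration of the checks, not part of the product; the helm subcommands used here are standard Helm CLI commands, and the output must be interpreted for your environment:

```shell
#!/bin/sh
# Sketch: verify client machine prerequisites before deploying NEF.
# The tool list comes from the requirements above; nothing here is
# NEF-specific, so adapt it to your environment.
CHECKED=0
for tool in kubectl helm docker podman; do
  CHECKED=$((CHECKED + 1))
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found"
  fi
done

# List configured Helm repositories and installed plugins; the push plugin
# should appear in the plugin list if it is installed. Skipped when helm
# itself is absent.
if command -v helm >/dev/null 2>&1; then
  helm repo list 2>/dev/null || echo "no Helm repositories configured"
  helm plugin list 2>/dev/null
fi
```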

2.2.3 Server or Space Requirement

For information about server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.2.4 CNE Requirement

This section is applicable only if you are installing NEF on Cloud Native Environment (CNE).

NEF supports CNE 24.1.x, 23.4.x, 23.3.x, and 23.2.x.

To check the CNE version, run the following command:
echo $OCCNE_VERSION

Note:

For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

2.2.5 OCI Requirements

NEF can be deployed in OCI. While deploying NEF in OCI, you must use the Operator instance or VM instead of the Bastion Host.

For more information about OCI Adaptor, see Oracle Communications Cloud Native Core, OCI Adaptor Deployment Guide.

2.2.6 cnDBTier Requirement

NEF supports cnDBTier 24.1.x, 23.4.x, 23.3.x, and 23.2.x in a virtual CNE (vCNE) environment. For containerized CNE, cnDBTier must be up and running. For more information about the installation procedure, see the Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

To install NEF or CAPIF with the recommended cnDBTier resources, install cnDBTier using the ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml file provided in the ocnef_custom_configTemplates_24.2.2 file. For information about downloading the ocnef_custom_configTemplates_24.2.2 file, see Customizing NEF or Customizing CAPIF.

Note:

No customizations are required in the ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml file for installing NEF 24.2.2.

2.2.6.1 cnDBTier Customization Parameters

Table 2-3 cnDBTier Customization Parameters

Parameter Name Recommended Value Added/Modified in Release
db-monitor-svc.resources.limits.cpu 4 24.2.2
db-monitor-svc.resources.limits.memory 2Gi 24.2.2
db-monitor-svc.resources.limits.ephemeral-storage 1Gi 24.2.2
db-monitor-svc.resources.requests.cpu 4 24.2.2
db-monitor-svc.resources.requests.memory 2Gi 24.2.2
db-monitor-svc.resources.requests.ephemeral-storage 1Gi 24.2.2
ndb.resources.limits.cpu 6 24.2.2
ndb.resources.limits.memory 18Gi 24.2.2
ndb.resources.limits.ephemeral-storage 1Gi 24.2.2
ndb.resources.requests.cpu 6 24.2.2
ndb.resources.requests.memory 16Gi 24.2.2
ndb.resources.requests.ephemeral-storage 90Mi 24.2.2
additionalndbconfigurations.mysqld.ndb_allow_copying_alter_table OFF 24.2.2
api.resources.limits.cpu 4 24.2.2
api.resources.limits.memory 10Gi 24.2.2
api.resources.limits.ephemeral-storage 1Gi 24.2.2
api.resources.requests.cpu 4 24.2.2
api.resources.requests.memory 10Gi 24.2.2
api.resources.requests.ephemeral-storage 90Mi 24.2.2
api.ndbapp.resources.limits.cpu 5 24.2.2
api.ndbapp.resources.limits.memory 10Gi 24.2.2
api.ndbapp.resources.limits.ephemeral-storage 1Gi 24.2.2
api.ndbapp.resources.requests.cpu 5 24.2.2
api.ndbapp.resources.requests.memory 10Gi 24.2.2
api.ndbapp.resources.requests.ephemeral-storage 90Mi 24.2.2
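The parameter paths in Table 2-3 imply the following YAML structure. This is only a sketch reconstructed from the paths above, not a complete values file; the template shipped in ocnef_custom_configTemplates_24.2.2 is the authoritative layout.

```yaml
# Sketch derived from the parameter paths in Table 2-3; not a complete
# cnDBTier values file.
db-monitor-svc:
  resources:
    limits:
      cpu: 4
      memory: 2Gi
      ephemeral-storage: 1Gi
    requests:
      cpu: 4
      memory: 2Gi
      ephemeral-storage: 1Gi
ndb:
  resources:
    limits:
      cpu: 6
      memory: 18Gi
      ephemeral-storage: 1Gi
    requests:
      cpu: 6
      memory: 16Gi
      ephemeral-storage: 90Mi
additionalndbconfigurations:
  mysqld:
    ndb_allow_copying_alter_table: "OFF"   # quoted so YAML does not read OFF as a boolean
api:
  resources:
    limits:
      cpu: 4
      memory: 10Gi
      ephemeral-storage: 1Gi
    requests:
      cpu: 4
      memory: 10Gi
      ephemeral-storage: 90Mi
  ndbapp:
    resources:
      limits:
        cpu: 5
        memory: 10Gi
        ephemeral-storage: 1Gi
      requests:
        cpu: 5
        memory: 10Gi
        ephemeral-storage: 90Mi
```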

2.3 Resource Requirements

This section lists the resource requirements to install and run NEF and CAPIF.

Note:

The performance and capacity of the NEF system may vary based on the call model, feature or interface configuration, and underlying CNE and hardware environment.

2.3.1 Services

2.3.1.1 NEF Services
The following table lists the resource requirements for NEF services:

Table 2-4 NEF Services

Service Name  Default Pod Count  Maximum Pod Count with Scaling  CPU Min  CPU Max  Memory Min  Memory Max
APD Manager 2 2 4 4 4Gi 4Gi
API Router 2 2 4 4 4Gi 4Gi
External Egress Gateway 2 5 4 4 4Gi 4Gi
External Ingress Gateway 2 5 4 4 4Gi 4Gi
5GC Egress Gateway 2 5 4 4 4Gi 4Gi
5GC Ingress Gateway 2 5 4 4 4Gi 4Gi
ME Service 2 12 4 4 4Gi 4Gi
QoS Service 2 12 4 4 4Gi 4Gi
5GC Agent 2 12 4 4 4Gi 4Gi
CCF Client 2 3 2 2 2Gi 2Gi
Expiry Auditor 2 3 4 4 4Gi 4Gi
Perf-Info 1 1 1 1 1Gi 1Gi
App-Info 1 1 1 1 1Gi 1Gi
Config-Server 1 1 1 1 1Gi 1Gi
NRF Client 1 1 1 1 1Gi 1Gi
Traffic Influence 2 12 4 4 4Gi 4Gi
Diameter Gateway 2 2 4 4 4Gi 4Gi
Device Trigger 1 1 1 1 1Gi 1Gi
Pool Manager 1 1 4 4 4Gi 4Gi
MSISDNless MO SMS 1 12 4 4 4Gi 4Gi
Console Data Service 1 12 4 4 4Gi 4Gi
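As an illustration, a row of the table above maps to Kubernetes resource requests and limits and replica bounds as follows. The key names in this fragment are hypothetical, shown only to explain how the columns are consumed; the actual NEF Helm chart parameters are documented in the customization guide.

```yaml
# Illustrative only: key names are hypothetical, not the actual NEF chart
# parameters. Values taken from the ME Service row of Table 2-4.
meservice:
  minReplicas: 2        # Default pod count
  maxReplicas: 12       # Maximum pod count with scaling
  resources:
    requests:
      cpu: 4            # CPU Min
      memory: 4Gi       # Memory Min
    limits:
      cpu: 4            # CPU Max
      memory: 4Gi       # Memory Max
```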
2.3.1.2 CAPIF Services
The following table lists the resource requirements for CAPIF services:

Table 2-5 CAPIF Services

Service Name  Default Pod Count  Maximum Pod Count with Scaling  CPU Min  CPU Max  Memory Min  Memory Max
API Manager 2 3 4 4 4Gi 4Gi
AF Manager 2 3 4 4 4Gi 4Gi
Event Manager 2 2 4 4 4Gi 4Gi
External Ingress Gateway 2 5 4 4 4Gi 4Gi
External Egress Gateway 2 5 2 2 4Gi 4Gi
Network Ingress Gateway 2 5 4 4 4Gi 4Gi
Network Egress Gateway 2 5 4 4 4Gi 4Gi

2.3.2 Debug Tool Container

The Debug Tool provides third-party troubleshooting tools for debugging runtime issues in a lab environment. If Debug Tool Container injection is enabled during NEF deployment or upgrade, this container is injected into each NEF pod (or into selected pods, depending on the option chosen during deployment or upgrade). These containers persist as long as the pod or deployment exists. For more information about the Debug Tool, see Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.

Table 2-6 NEF - Debug Tool Container

Service Name  CPU Min  CPU Max  Memory Min  Memory Max  Pod Min  Pod Max  Ephemeral Storage Min  Ephemeral Storage Max
APD Manager 0.5 0.5 4Gi 4Gi 2 2 512Mi 512Mi
API Router 0.5 0.5 4Gi 4Gi 2 2 512Mi 512Mi
External Egress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
External Ingress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
5GC Egress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
5GC Ingress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
ME Service 0.5 0.5 4Gi 4Gi 2 12 512Mi 512Mi
QoS Service 0.5 0.5 4Gi 4Gi 2 12 512Mi 512Mi
5GC Agent 0.5 0.5 4Gi 4Gi 2 12 512Mi 512Mi
CCF Client 0.5 0.5 4Gi 4Gi 2 3 512Mi 512Mi
Expiry Auditor 0.5 0.5 4Gi 4Gi 2 3 512Mi 512Mi
Perf-Info 0.5 0.5 4Gi 4Gi 1 1 512Mi 512Mi
App-Info 0.5 0.5 4Gi 4Gi 1 1 512Mi 512Mi
Config-Server 0.5 0.5 4Gi 4Gi 1 1 512Mi 512Mi
NRF Client 0.5 0.5 4Gi 4Gi 1 1 512Mi 512Mi
Traffic Influence 0.5 0.5 4Gi 4Gi 2 12 512Mi 512Mi
Diameter Gateway 0.5 0.5 4Gi 4Gi 2 2 512Mi 512Mi
Device Trigger 0.5 0.5 1Gi 1Gi 1 1 512Mi 512Mi
Pool Manager 1 1 1Gi 1Gi 2 12 512Mi 512Mi
Console Data Service 1 1 1Gi 1Gi 1 12 512Mi 512Mi
MSISDNLess MO SMS 1 1 1Gi 1Gi 1 12 512Mi 512Mi

Note:

The debug container resources are optional. If you are using the debug container, the CPU Request and Limit for Debug Container and the Memory Request and Limit for Debug Container values must be added to the minimum and maximum resources for NEF services.
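For example, the per-pod totals for the ME Service with the debug container injected can be worked out from the ME Service rows of Tables 2-4 and 2-6:

```shell
# Worked example: per-pod CPU and memory limits for ME Service when the
# debug tool container is injected (values from Tables 2-4 and 2-6).
SERVICE_CPU=4      # ME Service CPU max (Table 2-4)
DEBUG_CPU=0.5      # Debug container CPU max (Table 2-6)
SERVICE_MEM_GI=4   # ME Service memory max in Gi (Table 2-4)
DEBUG_MEM_GI=4     # Debug container memory max in Gi (Table 2-6)

TOTAL_CPU=$(awk "BEGIN { print $SERVICE_CPU + $DEBUG_CPU }")
TOTAL_MEM_GI=$((SERVICE_MEM_GI + DEBUG_MEM_GI))

echo "ME Service pod total: ${TOTAL_CPU} vCPU, ${TOTAL_MEM_GI}Gi memory"
# prints: ME Service pod total: 4.5 vCPU, 8Gi memory
```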

Table 2-7 CAPIF - Debug Tool Container

Service Name  CPU Min  CPU Max  Memory Min  Memory Max  Default Pod Count  Maximum Pod Count with Scaling  Ephemeral Storage Min  Ephemeral Storage Max
API Manager 0.5 0.5 4Gi 4Gi 2 3 512Mi 512Mi
AF Manager 0.5 0.5 4Gi 4Gi 2 3 512Mi 512Mi
Event Manager 0.5 0.5 4Gi 4Gi 2 2 512Mi 512Mi
External Ingress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
External Egress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
Network Ingress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi
Network Egress Gateway 0.5 0.5 4Gi 4Gi 2 5 512Mi 512Mi

Note:

The debug container resources are optional. If you are using the debug container, the CPU Request and Limit for Debug Container and the Memory Request and Limit for Debug Container values must be added to the minimum and maximum resources for CAPIF services.

2.3.3 Upgrade

2.3.3.1 NEF Upgrade
The following table lists the resource requirements for NEF upgrade:

Table 2-8 NEF Upgrade

Service Name  Default Pod Count  Maximum Pod Count with Scaling  CPU Min  CPU Max  Memory Min  Memory Max
APD Manager 2 2 4 4 4Gi 4Gi
API Router 2 2 4 4 4Gi 4Gi
External Egress Gateway 2 5 4 4 4Gi 4Gi
External Ingress Gateway 2 5 4 4 4Gi 4Gi
5GC Egress Gateway 2 5 4 4 4Gi 4Gi
5GC Ingress Gateway 2 5 4 4 4Gi 4Gi
ME Service 2 12 4 4 4Gi 4Gi
QoS Service 2 12 4 4 4Gi 4Gi
5GC Agent 2 12 4 4 4Gi 4Gi
CCF Client 2 3 2 2 2Gi 2Gi
Expiry Auditor 2 3 4 4 4Gi 4Gi
Perf-Info 1 1 1 1 1Gi 1Gi
App-Info 1 1 1 1 1Gi 1Gi
Config-Server 1 1 1 1 1Gi 1Gi
NRF Client 1 1 1 1 1Gi 1Gi
Traffic Influence 2 12 4 4 4Gi 4Gi
Diameter Gateway 2 2 4 4 4Gi 4Gi
Device Trigger 1 1 1 1 1Gi 1Gi
Pool Manager 1 1 4 4 4Gi 4Gi
MSISDNless MO SMS 1 12 4 4 4Gi 4Gi
Console Data Service 1 12 4 4 4Gi 4Gi
2.3.3.2 CAPIF Upgrade
The following table lists the resource requirements for CAPIF upgrade:

Table 2-9 CAPIF Upgrade

Service Name  Default Pod Count  Maximum Pod Count with Scaling  CPU Min  CPU Max  Memory Min  Memory Max
API Manager 2 3 4 4 4Gi 4Gi
AF Manager 2 3 4 4 4Gi 4Gi
Event Manager 2 2 4 4 4Gi 4Gi
External Ingress Gateway 2 5 4 4 4Gi 4Gi
External Egress Gateway 2 5 2 2 4Gi 4Gi
Network Ingress Gateway 2 5 4 4 4Gi 4Gi
Network Egress Gateway 2 5 4 4 4Gi 4Gi

2.3.4 Common Services Container

The following table lists the resource requirements for the common services containers.

Table 2-10 Common Services Container

Container Name CPU Request and Limit Per Container Memory (GB) Request and Limit Per Container Kubernetes Init Container (Job)
init-service 1 1 Y
update-service 1 1 N
common_config_hook 1 1 N
  • Init Container service: Ingress or Egress Gateway services use this container to obtain the CAPIF private key or certificate and the CA root certificate for TLS during startup.
  • Update Container service: Ingress or Egress Gateway services use this container service to periodically refresh the CAPIF private key or certificate and the CA root certificate for TLS.

2.3.5 Hooks

The following table lists the resource requirements for the hooks of the Monitoring Events microservice.

Table 2-11 Hooks for Monitoring Events Microservice

Hook Name CPU Memory
<helm-release-name>-monitoringevents-pre-install 1 1Gi
<helm-release-name>-monitoringevents-post-install 1 1Gi
<helm-release-name>-monitoringevents-pre-upgrade 1 1Gi
<helm-release-name>-monitoringevents-post-upgrade 1 1Gi
<helm-release-name>-monitoringevents-pre-rollback 1 1Gi
<helm-release-name>-monitoringevents-post-rollback 1 1Gi
<helm-release-name>-monitoringevents-pre-delete 1 1Gi
<helm-release-name>-monitoringevents-post-delete 1 1Gi
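The naming pattern in the table can be illustrated with a short loop that expands the <helm-release-name> placeholder; "ocnef" is used only as an example release name:

```shell
# Sketch: expand the <helm-release-name> placeholder from Table 2-11 into
# concrete hook job names. "ocnef" is an example release name only.
RELEASE="ocnef"
HOOKS=""
for phase in pre-install post-install pre-upgrade post-upgrade \
             pre-rollback post-rollback pre-delete post-delete; do
  name="${RELEASE}-monitoringevents-${phase}"
  HOOKS="$HOOKS $name"
  echo "$name"
done
```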

Note:

  • <helm-release-name> is the Helm release name. For example, if the Helm release name is "ocnef", the Monitoring Events microservice name is "ocnef-monitoringevents".
  • The above table lists the hooks for the monitoringevents microservice. Similar hooks apply to each of the following microservices:
    • NEF microservices:
      • apdmanager
      • ocnef-expiry-auditor
      • nrfclient
      • aef-apirouter
      • app-info
      • perf-info
      • config-server
      • monitoringevents
      • qualityofservice
      • trafficinfluence
      • ocnef-ccfclient
      • devicetrigger
      • poolmanager
      • fivegcagent
      • ingress-gateway
      • egress-gateway
      • ocnef-diam-gateway
      • msisdnless_mo_sms
      • console_data_service
    • CAPIF microservices:
      • apimgr
      • afmgr
      • eventmgr
      • ingress-gateway
      • egress-gateway
      • console_data_service