5 NEF Installation and Upgrade
This chapter describes how to install, customize, and uninstall Network Exposure Function (NEF) on Cloud Native Environment (CNE).
5.1 Installing NEF
Note:
NEF supports fresh installation, and it can also be upgraded from 23.3.x and 23.4.x. For information about the prerequisites for installing NEF, see the Prerequisites chapter. To know how to upgrade NEF, see the Upgrading NEF section.
5.1.1 Preinstallation
Before installing NEF, perform the tasks described in this section.
5.1.1.1 Verifying and Creating NEF Namespace
Note:
This is a mandatory procedure. Run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
Run the following command to verify whether the required namespace exists:
$ kubectl get namespaces
Note:
This is an optional step. Skip this step if the required namespace already exists.
$ kubectl create namespace <required namespace>
For example:
$ kubectl create namespace ocnef-namespace
Naming Convention for Namespaces
The namespace must:
- start and end with an alphanumeric character.
- contain 63 characters or less.
- contain only alphanumeric characters or '-'.
Note:
It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
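The naming rules above can be checked locally before creating the namespace. This is an illustrative sketch, not part of the product tooling; it assumes the Kubernetes RFC 1123 label rules (lowercase letters, digits, and '-'), which are slightly stricter than "alphanumeric":

```shell
# Sketch: validate a candidate namespace name before creating it.
valid_ns() {
  # 1-63 characters, starting and ending with a lowercase alphanumeric character
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' || return 1
  # Reject the reserved kube- prefix.
  case "$1" in kube-*) return 1 ;; esac
}

valid_ns "ocnef-namespace" && echo "ocnef-namespace: ok"
valid_ns "kube-test" || echo "kube-test: rejected (reserved prefix)"
```

Running the check before `kubectl create namespace` avoids a failed API call for an invalid name.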
5.1.1.2 Creating Service Account, Role, and RoleBinding
This section describes the procedure to create the service account, role, and role binding resources. The secrets can be in the same namespace where NEF is deployed (recommended), or you can choose different namespaces for different secrets. If all the secrets are in the same namespace as NEF, you can bind the Kubernetes Role to the given ServiceAccount. Otherwise, you must bind the ClusterRole to the given ServiceAccount.
- Create an NEF resource file:
vi <ocnef-resource-file>
For example:
vi ocnef-resource-template.yaml
- Update the ocnef-resource-template.yaml file with release-specific information:
Note:
Update <helm-release> and <namespace> with the NEF Helm release name and the NEF namespace, respectively.
## Sample template start
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-ocnef-serviceaccount
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-ocnef-role
  namespace: <namespace>
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <helm-release>-ocnef-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-ocnef-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: <namespace>
## Sample template end
- Run the following command to create service account, role, and role
binding:
kubectl -n <ocnef-namespace> create -f ocnef-resource-template.yaml
For example:
kubectl -n ocnef create -f ocnef-resource-template.yaml
- Update the serviceAccountName parameter in the oc-nef-24.2.2-custom-values.yaml file with the value set in the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see the Global Parameters section.
Note:
The service account name configured in this section must be used as the value for the serviceAccountName parameter during customization using the NEF custom values YAML file. For more information about the parameter, see Global Parameters.
5.1.1.3 Configuring Database, Creating Users, and Granting Permissions
NEF microservices use MySQL database to store the configuration and run time data.
Note:
While performing a fresh installation, if the NEF release is already deployed, purge the deployment and remove the databases and users used for the previous deployment. For the uninstallation procedure, see Uninstalling NEF.
NEF Users
There are two types of NEF database users with a different set of permissions:
- NEF privileged user: This user has a complete set of permissions. This user can create or delete the database and perform create, alter, or drop operations on the database tables for performing installation, upgrade, rollback, and delete operations.
- NEF application user: This user has a limited set of permissions and is used by the NEF application during service operations handling. This user can insert, update, get, and remove records. This user cannot create, alter, or drop the database or the tables.
Note:
If cnDBTier 24.2.2 is used during installation, set the ndb_allow_copying_alter_table parameter to 'ON' in the ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml file before installing NEF. After NEF installation, set the parameter back to its default value 'OFF'.
5.1.1.3.1 Single Site
Note:
New MySQL users along with their privileges must be added manually on each SQL node of the cnDBTier namespace for the NEF site.
- Log in to the machine where the SSH keys are stored. The machine must have permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the database as a root user.
- Create the NEF Release
database:
CREATE DATABASE <ocnef_release_database_name>;
For example:
CREATE DATABASE ocnef_releaseDb;
Note:
In case of a georedundant deployment, each NEF site must have a different Release database name.
- Create the NEF Service
database:
CREATE DATABASE <ocnef_service_database_name>;
For example:
CREATE DATABASE ocnef_db;
Note:
In case of a georedundant deployment, each NEF site must have the same Service database name.
- Create a privileged user and grant all the necessary permissions to the user.
- Run the following command to create privileged
user:
CREATE USER '<ocnef privileged username>'@'%' IDENTIFIED BY '<ocnef privileged user password>';
Note:
If MySQL is 8.0, run the following command to create a user:
CREATE USER IF NOT EXISTS '<ocnef privileged username>'@'%' IDENTIFIED WITH mysql_native_password BY '<ocnef privileged user password>';
<ocnef privileged username> is the username and <ocnef privileged user password> is the password for MySQL privileged user.
- Run the following command to grant the necessary permissions
to the privileged
user:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER , REFERENCES ON *.* TO '<ocnef privileged username>'@'%';
For example:
In the following example, "ocnefprivilegedusr" is used as the username and "ocnefprivilegedpasswd" as the password. All the permissions are granted to the privileged user, that is, ocnefprivilegedusr.
CREATE USER 'ocnefprivilegedusr'@'%' IDENTIFIED BY 'ocnefprivilegedpasswd';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER, REFERENCES ON *.* TO 'ocnefprivilegedusr'@'%';
- Create an application user and grant all the necessary
permissions.
- Run the following command to create application
user:
CREATE USER '<ocnef application username>'@'%' IDENTIFIED BY '<ocnef application user password>';
where:
<ocnef application username> is the username and <ocnef application user password> is the password for MySQL database user.
- Run the following command to grant the necessary
permissions to the application
user:
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER , REFERENCES ON *.* TO '<ocnef application username>'@'%';
In the following example, "ocnefusr" is used as username and "ocnefpasswd" is used as its password. All the necessary permissions are granted to the application user, that is, ocnefusr. Here, default database names of microservices are used.
CREATE USER 'ocnefusr'@'%' IDENTIFIED BY 'ocnefpasswd';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, NDB_STORED_USER , REFERENCES ON *.* TO 'ocnefusr'@'%';
Note:
The database name is specified in the dbName parameter for NEF services in the oc-nef-24.2.2-custom-values.yaml file.
- To confirm that the privileged or application user has all the
permissions, run the following
command:
show grants for username;
where, username is the privileged or application user's username.
For example:
show grants for ocnefprivilegedusr;
show grants for ocnefusr;
- Exit from database and log out from MySQL node.
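The grant check in the last step can also be run non-interactively from the shell. A sketch with a placeholder SQL-node host; `mysql -p` prompts for the root password:

```shell
# Placeholder connection details for an SQL node; adjust to your cluster.
SQL_HOST="127.0.0.1"
APP_USER="ocnefusr"      # example application user from the steps above
QUERY="SHOW GRANTS FOR '${APP_USER}'@'%';"

if command -v mysql >/dev/null 2>&1; then
  # -e runs the statement and exits, avoiding an interactive session.
  mysql -h "$SQL_HOST" -u root -p -e "$QUERY"
fi
```

The same pattern works for the privileged user by changing APP_USER.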
5.1.1.3.2 Multisite
Note:
- Perform the steps in Single Site to create the database on each site.
- NEF supports only a two-site georedundant deployment.
For information on the Database Configurations parameters, refer to Global Parameters section.
5.1.1.4 Configuring Secret for Enabling Access Token Validation
This section describes the procedure to configure Kubernetes secrets which will contain public certificate of CAPIF.
Note:
The public certificate is generated when installing CAPIF.
- Run the following command to create the Kubernetes secret:
$ kubectl create secret generic <certificate-secret> --from-file=<capif public certificate> -n <Namespace of OCNEF deployment>
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
Example:
$ kubectl create secret generic certificate-secret --from-file=cert.cer -n ocnef
- Verify the secret creation with the following
command:
$ kubectl describe secret <certificate-secret> -n <Namespace of NEF deployment>
Example:
$ kubectl describe secret certificate-secret -n ocnef
5.1.1.5 Configuring Kubernetes Secret for Accessing NEF Database
This section describes the procedure to configure Kubernetes secrets for accessing the databases created in the previous section.
5.1.1.5.1 Creating and Updating Kubernetes Secret for Accessing NEF Privileged Database User
This section explains the steps to create kubernetes secrets for accessing NEF database and privileged user details created by database administrator in the above section.
- Run the following command to create kubernetes secret:
$ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<OCNEF Privileged User Name> --from-literal=dbPassword=<Password for OCNEF Privileged User> --from-literal=mysql-username=<OCNEF Privileged User Name> --from-literal=mysql-password=<Password for OCNEF Privileged User> -n <Namespace of OCNEF deployment>
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
Example:
$ kubectl create secret generic privilegeduser-secret --from-literal=dbUsername=ocnefPrivilegedUsr --from-literal=dbPassword=ocnefPrivilegedPasswd --from-literal=mysql-username=ocnefPrivilegedUsr --from-literal=mysql-password=ocnefPrivilegedPasswd -n ocnef
- Verify the secret creation with the following command:
$ kubectl describe secret <privileged user secret name> -n <Namespace of OCNEF deployment>
Example:
$ kubectl describe secret privilegeduser-secret -n ocnef
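Beyond `kubectl describe`, the stored values can be spot-checked by decoding one key from the secret. A sketch using the example names from this section:

```shell
SECRET="privilegeduser-secret"
NAMESPACE="ocnef"               # namespace of the NEF deployment
KEY_PATH='{.data.dbUsername}'   # any of the four keys created above

if command -v kubectl >/dev/null 2>&1; then
  # Secret data is base64-encoded; decode to confirm the stored username.
  kubectl get secret "$SECRET" -n "$NAMESPACE" -o jsonpath="$KEY_PATH" | base64 -d
  echo
fi
```

The decoded value should match the username passed at creation time.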
5.1.1.5.2 Creating and Updating Kubernetes Secret for Accessing NEF Application Database User
This section explains the steps to create secrets for accessing and configuring the application database user created in the above section.
- Run the following command to create kubernetes secret:
$ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<OCNEF APPLICATION User Name> --from-literal=dbPassword=<Password for OCNEF APPLICATION User> --from-literal=mysql-username=<OCNEF APPLICATION User Name> --from-literal=mysql-password=<Password for OCNEF APPLICATION User> -n <Namespace of OCNEF deployment>
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secret in the future.
Example:
$ kubectl create secret generic appuser-secret --from-literal=dbUsername=ocnefusr --from-literal=dbPassword=ocnefpasswd --from-literal=mysql-username=ocnefusr --from-literal=mysql-password=ocnefpasswd -n ocnef
- Verify the secret creation with the following command:
$ kubectl describe secret <appuser-secret name> -n <Namespace of OCNEF deployment>
Example:
$ kubectl describe secret appuser-secret -n ocnef
5.1.1.5.3 Creating and Updating Kubernetes Secret for Storing Security Certificates for NEF
To create a kubernetes secret for storing security certificates for NEF Exposure Gateway (EG), perform the following steps:
- Run the following command to create the Kubernetes secret:
$ kubectl create secret generic <ocnef-secret-name> --from-file=rsa_private_key_pkcs1.pem --from-file=trust.txt --from-file=key.txt --from-file=ocegress.cer --from-file=caroot.cer -n <namespace>
where,
<ocnef-secret-name> can be ext-expgw-secret or fivegc-service-secret, as required
rsa_private_key_pkcs1.pem is the RSA private key
trust.txt is the trust store password
key.txt is the key store password
caroot.cer is the CA certificate chain for the trust store
ocegress.cer is the signed server certificate
5.1.1.6 Configuring Secrets for Enabling HTTPS
Note:
The passwords for TrustStore and KeyStore are stored in respective password files as mentioned in this section.
Create the following files:
- ECDSA private key and CA-signed certificate of NEF, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of NEF, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
Note:
It is at the discretion of the user to create the private keys and certificates; creating them is not in the scope of NEF. This section lists only samples to create key pairs and certificates.
Update Secrets
This section explains how to update the secret with updated details.
- Copy the exact command used during the creation of the secret in the above section.
- Update the command by appending "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of NEF deployment>".
Example of the command syntax:
$ kubectl create secret generic <ocnef-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCNEF deployment> | kubectl replace -f - -n <Namespace of OCNEF deployment>
Example:
The names used below are the same as provided in the oc-nef-24.2.2-custom-values.yaml file in the NEF deployment:
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnef | kubectl replace -f - -n ocnef
- Run the updated command.
After successful secret update, the following message is displayed:
secret/<ocingress-secret> replaced
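The update pattern above can be wrapped in a short script. A sketch only: the file list must match the one used at creation time, and newer kubectl releases require the explicit form `--dry-run=client` instead of the bare `--dry-run` shown in the example.

```shell
SECRET="ocingress-secret"
NAMESPACE="ocnef"   # namespace of the NEF deployment

if command -v kubectl >/dev/null 2>&1; then
  # Regenerate the secret manifest from the (updated) files and replace it
  # in place without deleting the secret first.
  kubectl create secret generic "$SECRET" \
    --from-file=ssl_truststore.txt \
    --from-file=ssl_keystore.txt \
    --dry-run=client -o yaml -n "$NAMESPACE" | kubectl replace -f - -n "$NAMESPACE"
fi
```

On success, kubectl prints the "secret/... replaced" confirmation shown above.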
5.1.1.7 Configuring Secrets to Enable Access Token
This section explains how to configure a secret for enabling access token.
5.1.1.7.1 Generating Private Keys and Certificates
Create Private Keys and Certificates
- Generate RSA private key by running
the following command:
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout rsa_private_key -out rsa_certificate.crt
- Convert the private key to .pem format with the following command:
openssl rsa -in rsa_private_key -outform PEM -out rsa_private_key_pkcs1.pem
- Generate cert out of private key, by running the
following command:
openssl req -new -key rsa_private_key -out tmp.csr -config ssl.conf
Note:
The ssl.conf file can be used to configure default entries along with SAN details for your certificate.
ssl.conf syntax:
#ssl.conf
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = IN
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Karnataka
localityName = Locality Name (eg, city)
localityName_default = Bangalore
organizationName = Organization Name (eg, company)
organizationName_default = Oracle
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_max = 64
commonName_default = localhost
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
IP = 127.0.0.1
DNS.1 = localhost
- Create a Root CA by running the following commands:
openssl req -new -keyout cakey.pem -out careq.pem
openssl x509 -signkey cakey.pem -req -days 3650 -in careq.pem -out caroot.cer -extensions v3_ca
echo 1234 > serial.txt
- Sign the server certificate with the root CA private key, by
running the following command:
openssl x509 -CA caroot.cer -CAkey cakey.pem -CAserial serial.txt -req -in tmp.csr -out tmp.cer -days 365 -extfile ssl.conf -extensions req_ext
Note:
The ssl.conf file must be reused, as the SAN contents are not copied from the CSR when signing.
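After signing, the result can be checked to confirm it chains back to the root CA. A quick verification using the file names from the steps above:

```shell
# File names produced by the preceding openssl steps.
CA_FILE="caroot.cer"
CERT_FILE="tmp.cer"

if command -v openssl >/dev/null 2>&1 && [ -f "$CA_FILE" ] && [ -f "$CERT_FILE" ]; then
  # Prints "tmp.cer: OK" when the certificate verifies against the root CA.
  openssl verify -CAfile "$CA_FILE" "$CERT_FILE"
fi
```

If verification fails, re-check the serial file and the signing step before importing anything into the keystore.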
Import key and cert inside keystore
- Create keystore with default rsa
keys, by running the following command:
keytool -genkey -keyalg RSA -alias rsa -keystore keystore.jks
- Delete the default keys from the keystore, by running the following command:
keytool -delete -alias rsa -keystore keystore.jks
Check that the keystore is empty, with zero entries, by running:
keytool -list -v -keystore keystore.jks
- Combine the private key and certificate in pkcs12 format to import into the keystore, by running the following command:
openssl pkcs12 -export -out signedCert.pkcs12 -inkey rsa_private_key -in sample.cer
Note:
The certificate must be signed with the root CA private key.
- Import the pkcs12 file to the keystore, by running the following command:
keytool -v -importkeystore -srckeystore signedCert.pkcs12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
5.1.1.8 Configuring Network Policies for NEF
Perform the following installation and upgrade steps for deploying network policies for NEF deployment.
Note:
- Ports mentioned in policies are container ports and not the exposed service ports.
- Connections that were created before installing network policy and still persist are not impacted by the new network policy. Only the new connections would be impacted.
- If you are using ATS suite along with network policies, it is required to install the NEF, CAPIF, and ATS in the same namespace.
- If the traffic is blocked or unblocked between the pods even after applying network policies, check if any existing policy is impacting the same pod or set of pods that might alter the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding network policies.
- While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
Configuring Network Policies
Following are the various operations that can be performed for network policies:
5.1.1.8.1 Installing Network Policy
Prerequisite
Note:
For a fresh installation, it is recommended to install network policies before installing NEF. However, if NEF is already installed, you can still install the network policies.
To install network policy:
- Open the
ocnef-network-policy-custom-values-24.2.2.yaml
file provided in the release package zip file. For downloading the file, see Installation Package Download.
- The custom values file is provided with the default security policies.
If required, update the
ocnef-network-policy-custom-values-24.2.2.yaml
file as described in the Configurable Parameters of NEF section.
- Run the following command to install the network policy:
helm install <helm-release-name> ocnef-network-policy/ -n <namespace> -f <custom-value-file>
Sample command:
helm install ocnef-network-policy ocnef-network-policy/ -n ocnef -f ocnef-network-policy-custom-values-24.2.2.yaml
Where,
- helm-release-name: ocnef-network-policy Helm release name
- custom-value-file: ocnef-network-policy custom values file
- namespace: must be the NEF namespace
5.1.1.8.2 Upgrading Network Policy
- Modify the
ocnef-network-policy-custom-values-24.2.2.yaml
file to add new network policies or update the existing policies.
- Run the following command to upgrade the network policy:
helm upgrade <helm-release-name> ocnef-network-policy/ -n <namespace> -f <custom-value-file>
Sample command:
helm upgrade ocnef-network-policy ocnef-network-policy/ -n ocnef -f ocnef-network-policy-custom-values-24.2.2.yaml
Where,
- helm-release-name: ocnef-network-policy Helm release name
- custom-value-file: ocnef-network-policy custom values file
- namespace: must be the NEF namespace
5.1.1.8.3 Verifying Network Policies
Run the following command to verify that the network policies are deployed:
kubectl get networkpolicies -n <namespace>
Sample command:
kubectl get networkpolicies -n ocnef
Where,
- namespace: NEF namespace
5.1.1.8.4 Uninstalling Network Policy
Run the following command to uninstall the network policies:
helm uninstall <helm-release-name> -n <namespace>
Sample command:
helm uninstall ocnef-network-policy -n ocnef
5.1.1.8.5 Configuration Parameters for Network Policies
Table 5-1 Configuration Parameters for Network Policy
Parameter | Description | Details
---|---|---
networkPolicies | The networkPolicies parameter is of array type. Each element of this array must have the standard object's metadata and the specification of the desired behavior for this NetworkPolicy. | This is an optional parameter. The network policy Helm chart creates policies for each entry. Note: Specify policies compatible with apiVersion networking.k8s.io/v1.
For more information about this functionality, see Network Policies in the Oracle Communications Cloud Native Core, Network Exposure Function User Guide.
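As an illustration of the networkPolicies array format described in Table 5-1, the following writes one minimal, hypothetical entry to a local file (the policy name and selectors are invented for illustration; the real defaults ship in the custom values file from the release package):

```shell
# Write a minimal, hypothetical networkPolicies entry for illustration only.
cat > sample-network-policy-values.yaml <<'EOF'
networkPolicies:
  - metadata:
      name: allow-same-namespace-ingress   # hypothetical policy name
    spec:
      podSelector: {}          # applies to all pods in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # allow traffic from pods in the same namespace
EOF
```

Each array element carries its own metadata and a spec compatible with apiVersion networking.k8s.io/v1, matching the table above.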
5.1.2 Installation Tasks
Note:
Before installing NEF, you must complete the Prerequisites and Preinstallation Tasks. In a georedundant deployment, perform the steps explained in this section on all georedundant sites.
5.1.2.1 Pushing the Images to Customer Docker Registry
NEF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
Table 5-2 Docker Images for NEF
Service Name | Docker Image Name | Image Tag |
---|---|---|
Expiry Auditor Service | oc_nef_expiry_auditor | 24.2.2 |
API Router | oc_nef_aef_apirouter | 24.2.2 |
CCF Client | oc_nef_ccfclient_manager | 24.2.2 |
Monitoring Events | oc_nef_monitoring_events | 24.2.2 |
Quality of Service | oc_nef_quality_of_service | 24.2.2 |
Traffic Influence | oc_nef_traffic_influence | 24.2.2 |
Diameter Gateway | oc_nef_diam_gateway | 24.2.2 |
Nrf Client Service | nrf-client | 24.2.8 |
5GC Agent | oc_nef_5gcagent | 24.2.2 |
APD Manager | oc_nef_apd_manager | 24.2.2 |
Configuration Update Service | configurationupdate | 24.2.15 |
Configuration INIT Service | configurationinit | 24.2.15 |
Ingress Gateway | ocingress_gateway | 24.2.15 |
Egress Gateway | ocegress_gateway | 24.2.15 |
Common Config Hook | common_config_hook | 24.2.15 |
Application Performance Service | oc-perf-info | 24.2.15 |
Application Info Service | oc-app-info | 24.2.15 |
Config-Server Service | oc-config-server | 24.2.15 |
Debug Tools Service | ocdebug-tools | 24.2.6 |
NF Test Service | nf_test | 24.2.5 |
Device Trigger | oc_nef_device_trigger | 24.2.2 |
Pool Manager | oc_nef_pool_manager | 24.2.2 |
Camara | oc_camara_atf | 24.2.2 |
Console Data Service | oc_nef_console_data_service | 24.2.2 |
MSISDNLess MO SMS | oc_nef_msisdnless_mo_sms | 24.2.2 |
Pushing Docker Images
Prerequisite: Download and untar the NEF Package ZIP file. For more information about downloading the package, see Installation Package Download.
To push the images to the registry:
- Unzip the release package to the location where you want to install NEF. The
package is as follows:
ocnef-pkg-24.2.2.0.0.tgz
- Untar the NEF package zip file to get NEF image tar
file:
tar -xvzf ocnef-pkg-24.2.2.0.0.tgz
The directory consists of the following:
- ocnef-24.2.2.tgz: Helm chart
- ocnef-24.2.2.tgz.sha256: Checksum for the Helm chart tgz file
- ocnef-images-24.2.2.tar: NEF images file
- ocnef-images-24.2.2.tar.sha256: Checksum for the images tar file
- ocnef-network-policy-24.2.2.tgz: Helm chart for network policy
- ocnef-network-policy-24.2.2.tgz.sha256: Checksum for the network policy Helm chart tgz file
- Run one of the following commands to load the
ocnef-images-24.2.2.tar
file:
docker load --input /IMAGE_PATH/ocnef-images-24.2.2.tar
podman load --input /IMAGE_PATH/ocnef-images-24.2.2.tar
- Run one of the following commands to verify the images are
loaded:
docker images
podman images
Note:
Verify the list of images shown in the output against the list in the above table. If the lists do not match, reload the image tar file.
- Run one of the following commands to tag the images to the
registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the images to the
registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
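The tag and push steps above can be combined into one loop over the image names in Table 5-2. A sketch with a placeholder registry path; extend the image list to cover the full table:

```shell
DOCKER_REPO="registry.example.com/ocnef"   # placeholder; use your registry path
TAG="24.2.2"
# Subset of image names from Table 5-2; extend the list as needed.
IMAGES="oc_nef_expiry_auditor oc_nef_aef_apirouter oc_nef_monitoring_events"

if command -v docker >/dev/null 2>&1; then
  for img in $IMAGES; do
    docker tag "${img}:${TAG}" "${DOCKER_REPO}/${img}:${TAG}"
    docker push "${DOCKER_REPO}/${img}:${TAG}"
  done
fi
```

Note that some images in the table carry different tags (for example, the gateway images); handle those entries separately or map tags per image.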
5.1.2.2 Pushing the NEF Images to OCI Registry
NEF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
Table 5-3 Docker Images for NEF
Service Name | Docker Image Name | Image Tag |
---|---|---|
Expiry Auditor Service | oc_nef_expiry_auditor | 24.2.2 |
API Router | oc_nef_aef_apirouter | 24.2.2 |
CCF Client | oc_nef_ccfclient_manager | 24.2.2 |
Monitoring Events | oc_nef_monitoring_events | 24.2.2 |
Quality of Service | oc_nef_quality_of_service | 24.2.2 |
Traffic Influence | oc_nef_traffic_influence | 24.2.2 |
Diameter Gateway | oc_nef_diam_gateway | 24.2.2 |
Nrf Client Service | nrf-client | 24.2.2 |
5GC Agent | oc_nef_5gcagent | 24.2.2 |
APD Manager | oc_nef_apd_manager | 24.2.2 |
Configuration Update Service | configurationupdate | 24.2.15 |
Configuration INIT Service | configurationinit | 24.2.15 |
Ingress Gateway | ocingress_gateway | 24.2.15 |
Egress Gateway | ocegress_gateway | 24.2.15 |
Common Config Hook | common_config_hook | 24.2.15 |
Application Performance Service | oc-perf-info | 24.2.2 |
Application Info Service | oc-app-info | 24.2.2 |
Config-Server Service | oc-config-server | 24.2.2 |
Debug Tools Service | ocdebug-tools | 24.2.2 |
NF Test Service | nf_test | 24.2.2 |
Device Trigger | oc_nef_device_trigger | 24.2.2 |
Pool Manager | oc_nef_pool_manager | 24.2.2 |
Camara | oc_camara_atf | 24.2.2 |
Console Data Service | oc_nef_console_data_service | 24.2.2 |
MSISDNLess MO SMS | oc_nef_msisdnless_mo_sms | 24.2.2 |
Pushing Docker Images
Prerequisite: Download and untar the NEF Package ZIP file. For more information about downloading the package, see Installation Package Download.
To push the images to the registry:
- Unzip the release package to the location where you want to install
NEF. The package is as follows:
ocnef-pkg-24.2.2.0.0.tgz
- Untar the NEF package zip file to get NEF image tar
file:
tar -xvzf ocnef-pkg-24.2.2.0.0.tgz
The directory consists of the following:
- ocnef-24.2.2.tgz: Helm chart
- ocnef-24.2.2.tgz.sha256: Checksum for the Helm chart tgz file
- ocnef-images-24.2.2.tar: NEF images file
- ocnef-images-24.2.2.tar.sha256: Checksum for the images tar file
- ocnef-network-policy-24.2.2.tgz: Helm chart for network policy
- ocnef-network-policy-24.2.2.tgz.sha256: Checksum for the network policy Helm chart tgz file
- Run one of the following commands to load the
ocnef-images-24.2.2.tar
file:
docker load --input /IMAGE_PATH/ocnef-images-24.2.2.tar
podman load --input /IMAGE_PATH/ocnef-images-24.2.2.tar
- Run one of the following commands to verify the images are
loaded:
docker images
podman images
Note:
Verify the list of images shown in the output against the list in the above table. If the lists do not match, reload the image tar file.
- Run one of the following commands to tag the images to the
registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
- Run one of the following commands to push the images to the
registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
5.1.2.3 Installing NEF Package
- Run the following command to access the extracted
package:
cd ocnef-<release_number>
For example:
cd ocnef-24.2.2.0.0
- Customize the
ocnef-24.2.2-custom-values-ocnef.yaml
file with the required deployment parameters. See the Customizing NEF chapter to customize the file. For more information about predeployment parameter configurations, see Preinstallation.
- Run the following command to install
NEF:
helm install -f <custom-file> <release_name> <helm-chart> --namespace <release_namespace> --timeout 10m
For example:
helm install -f ocnef-24.2.2-custom-values-ocnef.yaml ocnef /home/cloud-user/ocnef-24.2.2.tgz --namespace ocnef
where:
- helm_chart is the location of the Helm chart extracted from the package file.
- ocnef-24.2.2.tgz is the helm chart.
- release_name is the
release name used by helm command.
Note:
- The release_name must not exceed the 63-character limit.
- In case of a georedundant setup, it is mandatory to use a unique release_name for each NEF instance.
- release_namespace is the deployment namespace used by helm command.
- custom-file is the name of the custom values yaml file (including location).
Note:
- You can verify the installation while running the
install command by entering this command on a separate
terminal:
watch kubectl get jobs,pods -n release_namespace
- The DB hooks start creating the NEF database tables,
once the
helm install
command is run.
Following are the optional parameters that can be used in the helm install command:
- atomic: If this parameter is set, installation
process purges chart on failure. The
--wait
flag will be set automatically.
- wait: If this parameter is set, installation
process will wait until all pods, PVCs, Services, and minimum number
of pods of a deployment, StatefulSet, or ReplicaSet are in a ready
state before marking the release as successful. It will wait for as
long as
--timeout
.
- timeout duration: If not specified, default value
will be 300 seconds in Helm. It specifies the time to wait for any
individual Kubernetes operation (like Jobs for hooks). If the
helm install
command fails at any point to create a Kubernetes object, it will internally call the purge to delete after the timeout value. Here, the timeout value is not for overall installation, but for automatic purge on installation failure.
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press Ctrl+C to come out of the command, as it may lead to some anomalous behavior.
- Press Ctrl+C to exit watch mode, if the watch command is running on another terminal. Run the following command to check the status:
helm status release_name -n release_namespace
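Combining the optional flags described above, an installation that cleans itself up on failure might look like this (file paths and names are taken from the earlier example and are placeholders for your environment):

```shell
VALUES="ocnef-24.2.2-custom-values-ocnef.yaml"   # custom values file
CHART="/home/cloud-user/ocnef-24.2.2.tgz"        # extracted Helm chart
RELEASE="ocnef"
NAMESPACE="ocnef"

if command -v helm >/dev/null 2>&1; then
  # --atomic purges the release on failure and implies --wait;
  # --timeout bounds each individual Kubernetes operation.
  helm install -f "$VALUES" "$RELEASE" "$CHART" \
    --namespace "$NAMESPACE" --atomic --timeout 10m
fi
```

With --atomic set, a failed installation leaves no partial release behind, so a retry does not require a manual purge.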
5.1.3 Postinstallation Tasks
This section explains the postinstallation tasks for NEF.
5.1.3.1 Verifying NEF Installation
- Run the following
command:
helm status <helm-release> -n <namespace>
Where,
- <helm-release> is the Helm release name of NEF.
- <namespace> is the namespace of the NEF deployment.
For example:
helm status ocnef -n ocnef
In the output, if STATUS is showing as deployed, then the installation is successful.
Sample output:
NAME: ocnef
LAST DEPLOYED: Fri Sep 18 10:08:03 2020
NAMESPACE: ocnef
STATUS: deployed
REVISION: 1
- Run the following command to verify if the pods are up and
active:
kubectl get jobs,pods -n <Namespace>
Where,
- <Namespace> is the namespace where NEF is deployed.
For example:
kubectl get pod -n ocnef
In the output, the STATUS column of all the pods must be Running, and the READY column of all the pods must be n/n, where n is the number of containers in the pod.
- Run the following command to verify if the services are deployed and active:
kubectl get services -n <Namespace>
For example:kubectl get services -n ocnef_namespace
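The READY and STATUS checks above can be scripted. The following sketch filters pod listings with awk; the sample output and pod names are hypothetical, and in practice you would pipe the real output of kubectl get pods -n <Namespace> into the same filter.

```shell
# Sketch: verify every pod reports READY n/n and STATUS Running.
# The pod listing below is a hardcoded sample for illustration.
pods='NAME                 READY   STATUS    RESTARTS   AGE
ocnef-apirouter-0    2/2     Running   0          5m
ocnef-nrfclient-0    1/1     Running   0          5m'

result=$(echo "$pods" | awk 'NR > 1 {
    split($2, r, "/")                      # READY column, e.g. "2/2"
    if (r[1] != r[2] || $3 != "Running") bad = 1
} END { print (bad ? "NOT READY" : "ALL READY") }')
echo "$result"
```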
Note:
If the installation is unsuccessful or the STATUS of all the pods is not in the Running state, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.
5.1.3.2 Performing Helm Test
This section describes how to perform a sanity check for the NEF installation through the Helm test. The pods to be checked are based on the namespace and label selector configured for the Helm test configurations.
Note:
The Helm test expects all of the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice comes with Active/Standby mode for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you can ignore a Helm test failure for the NRF-Client-Management pod.
- Complete the Helm test configurations under the "Helm Test Global Parameters" section of the oc-nef-24.2.2-custom-values.yaml file. For more information on Helm test parameters, see Global Parameters.
nfName: ocnef
image:
  name: nf_test
  tag: 24.2.2
  registry: cgbu-cnc-comsvc-release-docker.dockerhub-phx.oci.oraclecorp.com/cgbu-ocudr-nftest
config:
  logLevel: WARN
  timeout: 120 # Beyond this duration, the Helm test is considered a failure
resources:
  - horizontalpodautoscalers/v1
  - deployments/v1
  - configmaps/v1
  - prometheusrules/v1
  - serviceaccounts/v1
  - poddisruptionbudgets/v1
  - roles/v1
  - statefulsets/v1
  - persistentvolumeclaims/v1
  - services/v1
  - rolebindings/v1
complianceEnable: true
- Run the following command to perform the Helm test:
helm test <release_name> -n <namespace>
where:
<release_name> is the release name.
<namespace> is the deployment namespace where NEF is installed.
For example:
helm test ocnef -n ocnef
Sample output:
NAME: ocnef
LAST DEPLOYED: Fri Nov 12 10:08:03 2020
NAMESPACE: ocnef
STATUS: deployed
REVISION: 1
TEST SUITE: ocnef-test
Last Started: Fri Nov 12 10:41:25 2020
Last Completed: Fri Nov 12 10:41:34 2020
Phase: Succeeded
If the Helm test fails, see Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.
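The Phase field in the sample output indicates the test result. A minimal sketch for checking it follows; the status text is hardcoded here, whereas in practice you would capture the output of the helm test command.

```shell
# Sketch: read the Helm test phase from captured output.
# The text below is a hardcoded sample for illustration.
test_out='TEST SUITE: ocnef-test
Last Started: Fri Nov 12 10:41:25 2020
Last Completed: Fri Nov 12 10:41:34 2020
Phase: Succeeded'

if echo "$test_out" | grep -q '^Phase: Succeeded'; then
    verdict="passed"
else
    verdict="failed"
fi
echo "Helm test $verdict"
```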
5.2 Upgrading NEF
Note:
- In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
- Before starting with the NEF upgrade, ensure that CAPIF upgrade procedure is successfully complete. For information about the procedure to upgrade CAPIF, see Upgrade Tasks.
- To avoid any duplicate subscription for a Monitoring Event (ME) request, NEF validates the subscription request for any duplicate entries based on AFID, MSISDN, External ID, and MonitoringType parameters. NEF performs this validation only before and after the upgrade to NEF 24.2.2 and not during the upgrade process.
- If cnDBTier 24.2.2 is used during the upgrade, set the ndb_allow_copying_alter_table parameter to 'ON' in the ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml file before upgrading NEF. After the NEF upgrade, set the parameter back to its default value 'OFF'.
- Before upgrading, perform a sanity check using the Helm test. See the Performing Helm Test section for the Helm test procedure.
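The cnDBTier change described above might look like the following fragment of the ocnef_dbtier_24.2.2_custom_values_24.2.2.yaml file. The surrounding key path shown here is an assumption for illustration only; consult the cnDBTier documentation for the exact location of the parameter.

```yaml
# Hypothetical fragment: the key path is an assumption, only the
# parameter name and value come from the upgrade prerequisites above.
ndbconfigurations:
  ndb:
    ndb_allow_copying_alter_table: 'ON'   # revert to the default 'OFF' after the NEF upgrade
```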
5.2.1 Supported Upgrade Paths
The following table lists the supported upgrade paths for NEF.
Table 5-4 Supported Upgrade Paths
| Source NEF release | Target NEF release |
| --- | --- |
| 24.1.x | 24.2.2 |
| 23.4.x | 24.2.2 |
Note:
NEF must be upgraded before upgrading cnDBTier.
5.2.2 Upgrade Tasks
This section provides information about the sequence of tasks to be performed for upgrading an existing NEF deployment.
Note:
- It is recommended to perform NEF upgrade in a specific order. For more information about the upgrade order, see Oracle Communications Cloud Native Core, Solution Upgrade Guide.
- Install or upgrade the network policies, if applicable. For more information, see Configuring Network Policies for NEF.
When you attempt to upgrade an existing NEF deployment, the running set of containers and pods is replaced with the new set of containers and pods. However, if there is no change in the pod configuration, the running set of containers and pods is not replaced.
Note:
It is advisable to create a backup of the file before changing any configuration. To configure the parameters, see Customizing NEF.
helm upgrade <release> <chart> -f oc-nef-24.2.2-custom-values.yaml -f OCCAPIF_API_Invoker_Mapping.yaml
Note:
To avoid any duplicate subscription for a Monitoring Event (ME) request, NEF validates the subscription request for any duplicate entries based on the AFID, MSISDN, External ID, and MonitoringType parameters. NEF performs this validation only before and after the upgrade to NEF and not during the upgrade process.
Table 5-5 Parameters and Definitions during NEF Upgrade
| Parameters | Definitions |
| --- | --- |
| <chart> | The name of the chart, of the form <repository/ocnef>. For example: reg-1/ocnef or cne-repo/ocnef |
| <release> | Can be found in the output of the helm list command |
| maxsurge | |
| maxUnavailability | |
- Check the history of the Helm deployment:
helm history <helm_release>
- Roll back to the required revision:
helm rollback <release_name> <revision_number>
Note:
Perform a sanity check using the Helm test. See the Performing Helm Test section for the Helm test procedure.
5.3 Rolling Back NEF
Note:
- In a georedundant deployment, perform the steps explained in this section on all georedundant sites separately.
- If the operator chooses to roll back to a previous release, or if an issue occurs during deployment, and the Converged SCEF-NEF feature is enabled, then the subscriptions on the 4G side (HSS/MME) might not get cleaned up. To avoid this, the operator should perform the following tasks:
  - Before the rollback, trigger unsubscribe for all subscriptions created after the upgrade.
  - After the rollback, create new subscriptions for the deleted subscriptions.
  New subscription is not applicable for the EPC network.
- For upgrade or rollback, Helm provides a default timeout of 5 minutes. On a slow server, the operation might take more time. In such scenarios, the upgrade status will be pending-upgrade or pending-rollback. To avoid this, add the --timeout flag according to your network speed.
Example
Upgrade: helm upgrade ocnef oc-nef -f experiment.yaml --timeout=45m -n mv --dry-run
Rollback: helm rollback ocnef 1 -n mv --timeout=45m
- To avoid any duplicate subscription for a Monitoring Event (ME) request, NEF validates the subscription request for any duplicate entries based on the AFID, MSISDN, External ID, and MonitoringType parameters. NEF does not perform this validation if NEF is rolled back from 24.2.2 to any previous release. The validation resumes when NEF is upgraded back to 24.2.2.
5.3.1 Supported Rollback Paths
Table 5-6 Supported Rollback Paths
| Source Release | Target Release |
| --- | --- |
| 24.2.2 | 24.1.x |
| 24.2.2 | 23.4.x |
Note:
If the georedundancy feature was disabled before upgrading to 23.2.x, then rolling back to a previous version automatically disables this feature. However, the database still has records of the NfInstances and NfSubscriptions from the mated sites. For more information, contact My Oracle Support.
5.3.2 Rollback Tasks
- Run the following command to check the revision you must roll back
to:
$ helm history <release_name> -n <release_namespace>
- Run the command to rollback to the required
revision:
$ helm rollback <release_name> <revision_number> -n <release_namespace>
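When reading the helm history output from step 1, you typically roll back to the most recent superseded revision. The following sketch extracts that revision number from a hardcoded sample of the history output; the revision numbers and chart names are hypothetical, and in practice you would pipe the real command output into the same filter.

```shell
# Sketch: select the most recent superseded revision from sample
# `helm history` output.
history='REVISION  UPDATED                   STATUS      CHART         DESCRIPTION
1         Fri Sep 18 10:08:03 2020  superseded  ocnef-24.1.0  Install complete
2         Fri Nov 12 10:41:25 2020  deployed    ocnef-24.2.2  Upgrade complete'

target=$(echo "$history" | awk '/superseded/ { rev = $1 } END { print rev }')
echo "Roll back to revision $target"
# Then run: helm rollback <release_name> "$target" -n <release_namespace>
```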
Note:
- No configuration should be performed during the rollback.
- Do not exit from the helm rollback command manually. After running the helm rollback command, it takes some time (depending on the number of pods to roll back) to roll back all of the services. In the meantime, you must not press Ctrl+C to come out of the helm rollback command, as it may lead to anomalous behavior.
- Ensure that no NEF pod is in the failed state.
- If the rollback is not successful, perform the troubleshooting steps mentioned in Oracle Communications Cloud Native Core, Network Exposure Function Troubleshooting Guide.