Deploying Network Exposure Function
Note:
The Network Exposure Function requires a MySQL database to store configuration and runtime data.
To deploy the NEF:
- Download the file, ocnef-pkg-1.2.0.0.0.tgz.
- Untar ocnef-pkg-1.2.0.0.0.tgz.
- Untarring the package displays the following files:
  ocnef-pkg-1.2.0.0.0.tgz
  |_ _ _ _ _ _ ocnef-1.2.0.tgz (helm chart)
  |_ _ _ _ _ _ ocnef-images-1.2.0.tar (docker images)
  |_ _ _ _ _ _ Readme.txt (contains cksum and md5sum of the tarballs)
- Check the checksums of the tarballs against the values mentioned in the Readme.txt file.
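  The preceding untar and checksum steps can be performed with standard utilities; the following is a minimal sketch using the file names from the package listing above:
  # Extract the package, then compare the tarball checksums with the values in Readme.txt.
  tar -xvzf ocnef-pkg-1.2.0.0.0.tgz
  cat Readme.txt
  cksum ocnef-1.2.0.tgz ocnef-images-1.2.0.tar
  md5sum ocnef-1.2.0.tgz ocnef-images-1.2.0.tar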
- After you load the tarballs as docker images, re-tag them, if required, according to your specific repository.
- Run the following commands to load ocnef-images-1.2.0.tar into docker and push the imported docker images to your docker image registry:
  docker load --input /<IMAGE_PATH>/ocnef-images-1.2.0.tar
  docker tag ocpcf/nef_asqos:1.2.0 <customer repo>/nef_asqos:1.2.0
  docker push <customer repo>/nef_asqos:<IMAGE_TAG>
  Repeat the tag and push commands for ALL imported docker images listed in Table 3-1 (a scripted version is sketched after Table 3-2).
Note:
You may need to configure a docker certificate to access the customer docker image repository over HTTPS. Configure the certificate before running the docker push command, or the command may fail.
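As an aside, one common way to make the docker daemon trust a private registry that uses a self-signed or internal CA is to place the CA certificate under /etc/docker/certs.d/<registry>:<port>/. The registry address and certificate file below are assumptions for illustration only; substitute your own values:
  # Assumed registry address (reg-1:5000) and CA file (ca.crt); replace with your values.
  sudo mkdir -p /etc/docker/certs.d/reg-1:5000
  sudo cp ca.crt /etc/docker/certs.d/reg-1:5000/ca.crt
  # Push again once the certificate is in place.
  docker push reg-1:5000/nef_asqos:<IMAGE_TAG>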
Table 3-1 provides the details of the docker image file names:
Table 3-1 Docker Images
Service Name          Docker Image Name        Service Category
NEF CM Service        ocnef-nefcm              GUI
NEF Asqos Service     ocnef-nefasqos           NEF
NEF ME Service        ocnef-nefme              NEF
NEF Capif Service     ocnef-nefcapif           NEF
NEF TI Service        ocnef-nefti              NEF
NEF Config Service    ocnef-nefconfigserver    GUI
NEF BDT Service       ocnef-nefbdt             NEF
Table 3-2 provides information about the modules:
Table 3-2 Module Descriptions
Module Name    Description
GUI            GUI services for NEF
NEF            Defines NEF services
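The per-image tag and push step above can also be scripted. The loop below is a minimal sketch, not part of the official procedure: the customer repository value is a placeholder, and the source image names are assumed to follow the docker image names in Table 3-1 with an assumed ocnef/ prefix (the single-image example above uses ocpcf/nef_asqos), so confirm the actual names with "docker images" after the docker load step:
  # Tag and push every NEF image listed in Table 3-1 (source names are assumptions; verify with "docker images").
  CUSTOMER_REPO=<customer repo>
  IMAGE_TAG=1.2.0
  for image in ocnef-nefcm ocnef-nefasqos ocnef-nefme ocnef-nefcapif \
               ocnef-nefti ocnef-nefconfigserver ocnef-nefbdt; do
    docker tag ocnef/${image}:1.2.0 ${CUSTOMER_REPO}/${image}:${IMAGE_TAG}
    docker push ${CUSTOMER_REPO}/${image}:${IMAGE_TAG}
  done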
- Execute the following command:
Note:
You must run the following command from within the helm chart folder, because the last line of the command, ./<HELM_CHART_NAME_WITH_EXTENSION>, specifies that the helm chart path is the current working path. To run the command on another server, copy the helm chart file to that server first.
helm install --namespace=nefsvc --name=ocnef \
--set global.envMysqlPrimaryHost=<MYSQL_PRIMARY_HOST>,global.envMysqlSecondaryHost=<MYSQL_SECONDARY_HOST> \
--set global.envMysqlUser=nefusr,global.envMysqlPassword=nefpasswd \
--set global.envJaegerAgentHost=<JAEGER_SERVICE>.<JAEGER_SERVICE_NAMESPACE>,global.envJaegerAgentPort=<JAEGER_SERVICE_PORT> \
--set global.imageTag=<IMAGE_TAG>,global.dockerRegistry=<CUSTOMER_REPO> \
--set nef-ti.deploymentNefTiService.envUdmBaseUrl=<UDM_BASE_URI>,nef-ti.deploymentNefTiService.envUdrBaseUrl=<UDR_BASE_URI> \
--set nef-ti.deploymentNefTiService.envBsfBaseUrl=<BSF_BASE_URI>,nef-ti.deploymentNefTiService.envPcfBaseUrl=<PCF_BASE_URI> \
--set nef-bdt.deploymentNefBdtService.envPcfPort=<PCF_PORT>,nef-bdt.deploymentNefBdtService.envPcfHost=<PCF_HOST> \
--set nef-bdt.deploymentNefBdtService.envUdmHost=<UDM_HOST>,nef-bdt.deploymentNefBdtService.envUdmPort=<UDM_PORT> \
./<HELM_CHART_NAME_WITH_EXTENSION>
Note:
If the Kubernetes cluster has a Jaeger service running, set global.envJaegerAgentHost to "<JAEGER_SERVICE>.<JAEGER_SERVICE_NAMESPACE>" and global.envJaegerAgentPort to "<JAEGER_SERVICE_PORT>". Otherwise, do not set these variables.
Table 3-3 provides details of each variable:
Table 3-3 Variables Description

<MYSQL_PRIMARY_HOST>, <MYSQL_SECONDARY_HOST>
  Description: MySQL primary and secondary host name or IP address.

<JAEGER_SERVICE>.<JAEGER_SERVICE_NAMESPACE>
  Description: Jaeger service name and Jaeger service namespace; these can be found in the same Kubernetes cluster using the "helm list" command.
  Notes: Follow the format <JAEGER_AGENT_SERVICE_NAME>.<JAEGER_NAMESPACE>, for example occne-tracer-jaeger-agent.occne-infra, where the Jaeger agent service name under the Jaeger deployment is occne-tracer-jaeger-agent.

<JAEGER_SERVICE_PORT>
  Description: Port on which the Jaeger service is running.

<IMAGE_TAG>
  Description: The image tag used in the customer docker registry. It is recommended to use the same image tag when pushing the docker images to the registry.
  Notes: If you followed the above steps to push the docker images to the customer docker registry, the <IMAGE_TAG> value should be 1.2.0. Each service deployment yaml file uses global.imageTag as the image tag to fetch the related docker image, per the helm chart design. With the release tar file, the global image tag for all services is 1.2.0.

<CUSTOMER_REPO>
  Description: The docker registry address on the customer side, along with the port number if the registry has a port attached.
  Notes: If the registry has a port value, add the port. For example, reg-1:5000.

<UDM_BASE_URI>
  Description: Base URL for UDM.
  Notes: Follow the format: "http://{hostname or IP address}:{port}/nudm-sdm/v2/"

<UDR_BASE_URI>
  Description: Base URL for UDR.
  Notes: Follow the format: "http://{hostname or IP address}:{port}/nudr-dr/v1/"

<BSF_BASE_URI>
  Description: Base URL for BSF.
  Notes: Follow the format: "http://{hostname or IP address}:{port}/nbsf-management/v1/"

<PCF_BASE_URI>
  Description: Base URL for PCF.
  Notes: Follow the format: "http://{hostname or IP address}:{port}/npcf-policyauthorization/v1/"

<PCF_HOST>, <PCF_PORT>
  Description: Hostname or IP address and port of PCF.

<UDM_HOST>, <UDM_PORT>
  Description: Hostname or IP address and port of UDM.

Kubernetes provides the following two deployment types.
Table 3-4 Service Deployment Service Types

ClusterIP
  Exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster. This is the default ServiceType.

NodePort
  Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is created automatically. You can contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.

Most of the NEF services are deployed using NodePort.
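As an illustrative follow-up, not part of the official procedure, the deployment and the NodePort assignments can be checked with standard helm and kubectl commands; the release name (ocnef) and namespace (nefsvc) below are the values used in the helm install command above:
  # Check the release, the pods, and the service types/ports in the nefsvc namespace.
  helm status ocnef
  kubectl get pods --namespace=nefsvc
  kubectl get services --namespace=nefsvc
  # A service of type NodePort is then reachable from outside the cluster at <NodeIP>:<NodePort>.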