2 Installing Cloud Native Policy Control Function
This chapter describes how to install Cloud Native Policy Control Function (PCF) on a cloud native environment.
Pre-Installation Tasks
Prior to installing the PCF, perform the following tasks:
Checking the Software Requirements
The following software must be installed before installing Policy Control Function (PCF):
Software | Version |
---|---|
Kubernetes | v1.15.3 |
HELM | v2.14.3 |
Additional software that must be deployed, depending on the services required:
Software | App Version | Notes |
---|---|---|
alertmanager | 0.18.0 | Required for Tracing |
elasticsearch | 7.4.0 | Required for Logging |
elastic-curator | 2.0.2 | Required for Logging |
elastic-exporter | 1.1.2 | Required for Logging |
logs | 2.7.0 | Required for Logging |
kibana | 7.4.0 | Required for Logging |
grafana | 3.8.4 | Required for Metrics |
prometheus | 9.2.0 | Required for Metrics |
prometheus-node-exporter | 1.6.0 | Required for Metrics |
metallb | 0.8.4 | Required for External |
metrics-server | 2.5.1 | Required for Metric Server |
occne-snmp-notifier | 0.3.0 | Required for Metric Server |
tracer | 0.13.3 | Required for Tracing |
Note:
The above software is available if the PCF is deployed in the Oracle Communications Cloud Native Environment (OCCNE). If you are deploying PCF in any other environment, the above software must be installed before installing the PCF. To check the installed software items, execute:
helm ls
Some systems may need to use the helm command with the admin.conf file as follows:
helm --kubeconfig admin.conf
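For example, a quick way to confirm that the charts listed above are present (release and chart names may vary per environment; the filter below is only illustrative):
helm ls
helm ls | grep -E 'prometheus|grafana|elasticsearch|kibana|metallb'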
Note:
If you are using Network Repository Function (NRF), install it before proceeding with the PCF installation.
Checking the Environment Setup
Note:
This section is applicable only when the Policy Control Function (PCF) is deployed in an environment other than OCCNE.
Network access
The Kubernetes cluster hosts must have network access to:
- Local helm repository, where the PCF helm charts are available.
To check if the Kubernetes cluster hosts have network access to the local helm repository, execute the following command:
helm repo update
Note:
Some systems may need to use the helm command with the admin.conf file as follows:
helm --kubeconfig admin.conf
- Local docker image repository, where the PCF images are available.
To check if the Kubernetes cluster hosts have network access to the local docker image repository, pull any image with its tag name to check connectivity by executing the following command:
docker pull docker-repo/image-name:image-tag
where:
docker-repo is the IP address or host name of the repository.
image-name is the docker image name.
image-tag is the tag of the image used for the PCF pod.
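For example, assuming a local registry reachable at 10.0.0.5:5000 (a hypothetical address) and the PCF SM service image, the connectivity check could look like this:
docker pull 10.0.0.5:5000/pcf_smservice:1.6.1
A successful pull confirms that the registry is reachable; the test image can be removed afterwards with docker rmi if it is not needed.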
Note:
All the kubectl and helm related commands that are used in this guide must be executed on a system, depending on the infrastructure or deployment. It could be a client machine, such as a VM, server, or local desktop.
Client Machine Requirements
Following are the client machine requirements where the deployment commands are executed:
- It should have network access to the helm repository and docker image repository.
- It should have network access to the Kubernetes cluster.
- It should have the necessary environment settings to run kubectl commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
- It should have the helm client installed with the push plugin. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.
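A minimal sketch of commands that can be used to verify these client machine requirements (assuming kubectl and the helm client are already on the PATH):
kubectl version                        # confirms kubectl is configured and the cluster is reachable
kubectl auth can-i create namespace    # confirms privileges to create a namespace
helm version                           # confirms the helm client is installed
helm plugin list                       # the push plugin should appear in this list
helm repo list                         # confirms the helm repository is configured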
Server or Space Requirements
For information on the server or space requirements, see the Oracle Communications Cloud Native Environment (OCCNE) Installation Guide.
Secret File Requirement
For enabling HTTPS on the Ingress/Egress gateway, the following certificates and pem files must be created before creating the secret files for keys:
- ECDSA private Key and CA signed ECDSA Certificate (if initialAlgorithm: ES256)
- RSA private key and CA signed RSA Certificate (if initialAlgorithm: RSA256)
- TrustStore password file
- KeyStore password file
- CA signed ECDSA certificate
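A minimal sketch of how these files could be created, assuming openssl is available; the file names, secret name, and namespace (ocpcf) below are illustrative and not mandated by this guide:
openssl ecparam -name prime256v1 -genkey -noout -out ecdsa_private_key.pem   # ECDSA private key (for ES256)
openssl genrsa -out rsa_private_key.pem 2048                                 # RSA private key
echo "<truststore-password>" > trust.txt                                     # TrustStore password file
echo "<keystore-password>" > key.txt                                         # KeyStore password file
After obtaining the CA signed certificates, the files can be bundled into a kubernetes secret, for example:
kubectl create secret generic ocpcf-gateway-secret -n ocpcf --from-file=ecdsa_private_key.pem --from-file=rsa_private_key.pem --from-file=trust.txt --from-file=key.txt --from-file=ca_signed_ecdsa_certificate.crt --from-file=ca_signed_rsa_certificate.crt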
Installation Tasks
Downloading PCF package
- Log in to My Oracle Support with your credentials.
- Select the Patches and Updates tab to locate the patch.
- In the Patch Search window, click Product or Family (Advanced).
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select Oracle Communications Cloud Native Core Policy Control Function 1.6.1.0.0 from the Release drop-down.
- Click Search. The Patch Advanced Search Results displays a list of releases.
- Click the required patch from the search results. A window opens; click Download.
- Click the zip file to download the package. The package is named as follows:
ReleaseName-pkg-Releasenumber.tgz
where:
ReleaseName is a name which is used to track this installation instance.
Releasenumber is the release number.
For example, ocpcf-pkg-1.6.1.0.0.tgz
Pushing the Images to Customer Docker Registry
- Untar the PCF package file to get the PCF docker image tar file:
tar -xvzf ReleaseName-pkg-Releasenumber.tgz
The directory consists of the following:
- PCF Docker Images File:
ocpcf-images-1.6.1.tar
- Helm File:
ocpcf-1.6.1.tgz
- Readme txt File:
Readme.txt (contains cksum and md5sum of the tarballs; these can be verified as shown after this list)
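The checksums in Readme.txt can be verified against the extracted files, for example (file names as listed above):
cksum ocpcf-images-1.6.1.tar ocpcf-1.6.1.tgz
md5sum ocpcf-images-1.6.1.tar ocpcf-1.6.1.tgz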
- Load the ocpcf-images-1.6.1.tar file into the Docker system:
docker load --input /IMAGE_PATH/ocpcf-images-1.6.1.tar
- Verify that the image is loaded correctly by entering this command:
docker images
Refer to Appendix A for more information on docker images available in PCF.
- Create a new tag for each imported image and push the image to the customer docker registry by entering these commands:
docker tag ocpcf/pcf_smservice:1.6.1 CUSTOMER_REPO/pcf_smservice:1.6.1
docker push CUSTOMER_REPO/pcf_smservice:1.6.1
docker tag ocpcf/pcf_ueservice:1.6.1 CUSTOMER_REPO/pcf_ueservice:1.6.1
docker push CUSTOMER_REPO/pcf_ueservice:1.6.1
docker tag ocpcf/pcf-amservice:1.6.1 CUSTOMER_REPO/pcf-amservice:1.6.1
docker push CUSTOMER_REPO/pcf-amservice:1.6.1
docker tag ocpcf/pcf_userservice:1.6.1 CUSTOMER_REPO/pcf_userservice:1.6.1
docker push CUSTOMER_REPO/pcf_userservice:1.6.1
docker tag ocpcf/ocpm_pre:1.6.1 CUSTOMER_REPO/ocpm_pre:1.6.1
docker push CUSTOMER_REPO/ocpm_pre:1.6.1
docker tag ocpcf/diam-connector:1.6.1 CUSTOMER_REPO/diam-connector:1.6.1
docker push CUSTOMER_REPO/diam-connector:1.6.1
docker tag ocpcf/diam-gateway:1.6.1 CUSTOMER_REPO/diam-gateway:1.6.1
docker push CUSTOMER_REPO/diam-gateway:1.6.1
docker tag ocpcf/ocpm_config_server:1.6.1 CUSTOMER_REPO/ocpm_config_server:1.6.1
docker push CUSTOMER_REPO/ocpm_config_server:1.6.1
docker tag ocpcf/ocpm_cm_service:1.6.1 CUSTOMER_REPO/ocpm_cm_service:1.6.1
docker push CUSTOMER_REPO/ocpm_cm_service:1.6.1
docker tag ocpcf/nrf-client:1.2.0 CUSTOMER_REPO/nrf-client:1.2.0
docker push CUSTOMER_REPO/nrf-client:1.2.0
docker tag ocpcf/ocpm_queryservice:1.6.1 CUSTOMER_REPO/ocpm_queryservice:1.6.1
docker push CUSTOMER_REPO/ocpm_queryservice:1.6.1
docker tag ocpcf/audit_service:1.6.1 CUSTOMER_REPO/audit_service:1.6.1
docker push CUSTOMER_REPO/audit_service:1.6.1
docker tag ocpcf/readiness-detector:1.6.1 CUSTOMER_REPO/readiness-detector:1.6.1
docker push CUSTOMER_REPO/readiness-detector:1.6.1
docker tag ocpcf/perf_info:1.6.1 CUSTOMER_REPO/perf_info:1.6.1
docker push CUSTOMER_REPO/perf_info:1.6.1
docker tag ocpcf/app_info:1.6.1 CUSTOMER_REPO/app_info:1.6.1
docker push CUSTOMER_REPO/app_info:1.6.1
docker tag ocpcf/policyds:1.6.1 CUSTOMER_REPO/policyds:1.6.1
docker push CUSTOMER_REPO/policyds:1.6.1
docker tag ocpcf/ldap-gateway:1.6.1 CUSTOMER_REPO/ldap-gateway:1.6.1
docker push CUSTOMER_REPO/ldap-gateway:1.6.1
docker tag ocpcf/ocingress_gateway:1.6.3 CUSTOMER_REPO/ocingress_gateway:1.6.3
docker push CUSTOMER_REPO/ocingress_gateway:1.6.3
docker tag ocpcf/ocegress_gateway:1.6.3 CUSTOMER_REPO/ocegress_gateway:1.6.3
docker push CUSTOMER_REPO/ocegress_gateway:1.6.3
docker tag ocpcf/configurationinit:1.1.1 CUSTOMER_REPO/configurationinit:1.1.1
docker push CUSTOMER_REPO/configurationinit:1.1.1
docker tag ocpcf/configurationupdate:1.1.1 CUSTOMER_REPO/configurationupdate:1.1.1
docker push CUSTOMER_REPO/configurationupdate:1.1.1
where:
CUSTOMER_REPO is the docker registry address, including the port number if the registry has a port attached.
Note:
For OCCNE, copy the package to the bastion server and use localhost:5000 as CUSTOMER_REPO to tag the images and push them to the bastion docker registry.
Note:
You may need to configure the Docker certificate before the push command to access the customer registry via HTTPS; otherwise, the docker push command may fail.
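If preferred, the tag and push commands above can be scripted. A minimal sketch, assuming a bash shell and that every loaded PCF image follows the ocpcf/<name>:<tag> pattern shown above; the CUSTOMER_REPO value is illustrative:
CUSTOMER_REPO=localhost:5000
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^ocpcf/'); do
  target="$CUSTOMER_REPO/${img#ocpcf/}"   # strip the ocpcf/ prefix and retarget to the customer registry
  docker tag "$img" "$target"
  docker push "$target"
done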
Installing the PCF Package
To install the PCF package:
- Log in to the server where the ssh keys are stored and the SQL nodes are accessible.
- Connect to the SQL nodes.
- Log in to the database as a root user.
- Create an admin user and grant all the necessary permissions to the user by executing the following commands:
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_smservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_amservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_userservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_config_server.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON oc5g_audit_service.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_release.* TO 'username'@'%';
FLUSH PRIVILEGES;
where:
username and password are the username and password for the MySQL admin user.
Note:
The admin user can be used by helm hooks to perform DDL and DML operations during install, upgrade, rollback, or delete operations.
For example: In the example below, "pcfadminusr" is used as the username, "pcfadminpasswd" is used as the password, and the necessary permissions are granted to "pcfadminusr". In this example, the default database names of the microservices are used.
CREATE USER 'pcfadminusr'@'%' IDENTIFIED BY 'pcfadminpasswd';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_smservice.* TO 'pcfadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_amservice.* TO 'pcfadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_userservice.* TO 'pcfadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_config_server.* TO 'pcfadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON oc5g_audit_service.* TO 'pcfadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON pcf_release.* TO 'pcfadminusr'@'%';
FLUSH PRIVILEGES;
- Create an application user and grant all the necessary permissions to the user by executing the following commands:
CREATE USER 'username'@'%' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON pcf_smservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON pcf_amservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON pcf_userservice.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocpm_config_server.* TO 'username'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON oc5g_audit_service.* TO 'username'@'%';
where:
username and password are the username and password for the MySQL database user.
For example: In the example below, "pcfusr" is used as the username, "pcfpasswd" is used as the password, and the necessary permissions are granted to "pcfusr". In this example, the default database names of the microservices are used.
CREATE USER 'pcfusr'@'%' IDENTIFIED BY 'pcfpasswd';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON pcf_smservice.* TO 'pcfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON pcf_amservice.* TO 'pcfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON pcf_userservice.* TO 'pcfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocpm_config_server.* TO 'pcfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON oc5g_audit_service.* TO 'pcfusr'@'%';
Note:
The database name can be specified in the envMysqlDatabase parameter for the respective services in the custom-value.yaml file. It is recommended to use a unique database name when there are multiple instances of PCF deployed in the network and they share the same data tier (MySQL cluster).
- Execute the command show grants for username to confirm that the admin user has all the permissions (see the example after this step).
- Exit from the database and log out from the MySQL node.
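For example, while still connected to the database and using the example admin user from the previous steps:
SHOW GRANTS FOR 'pcfadminusr'@'%';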
- Create the namespace, if it does not already exist, by entering the command:
kubectl create namespace release_namespace
where:
release_namespace is the CNPCF deployment namespace used by the helm command.
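For example, with ocpcf as the release namespace (an illustrative name that matches the examples later in this guide):
kubectl create namespace ocpcf
kubectl get namespace ocpcf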
- Create a kubernetes secret for an admin user and an application user.
To create a kubernetes secret for storing database username and password for these users:
- Create a yaml file with the application user's username and password with the syntax shown below:
apiVersion: v1
kind: Secret
metadata:
  name: pcf-db-pass
type: Opaque
data:
  mysql-username: <base64 encoded mysql username>
  mysql-password: <base64 encoded mysql password>
- Create a yaml file with the admin user's username and password with the syntax shown below:
apiVersion: v1
kind: Secret
metadata:
  name: pcf-admin-db-pass
type: Opaque
data:
  mysql-username: <base64 encoded mysql username>
  mysql-password: <base64 encoded mysql password>
Note:
The values for mysql-username and mysql-password should be base64 encoded (see the example after this step).
- Execute the following commands to add the kubernetes secrets in a namespace:
kubectl create -f yaml_file_name1 -n release_namespace
kubectl create -f yaml_file_name2 -n release_namespace
where:
release_namespace is the deployment namespace used by the helm command.
yaml_file_name1 is a name of the yaml file that is created in step 1.
yaml_file_name2 is a name of the yaml file that is created in step 2.
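The base64 encoded values for the yaml files can be generated as shown below (example application user credentials from the earlier step; -n avoids encoding a trailing newline). The file names and namespace are illustrative:
echo -n 'pcfusr' | base64
echo -n 'pcfpasswd' | base64
kubectl create -f pcf-db-secret.yaml -n ocpcf
kubectl create -f pcf-admin-db-secret.yaml -n ocpcf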
- Create the customized ocpcf-custom-values-1.6.1.yaml file with the required input parameters. To customize the file, see Customizing Policy Control Function.
Note:
The values of the parameters mentioned in the custom values yaml file override the default values specified in the helm chart. If the envMysqlDatabase parameter is modified, then you should modify the configDbName parameter with the same value.
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, you must not press "Ctrl+C" to come out of the helm install command, as it leads to some anomalous behavior.
- Install PCF by using Helm2:
helm install <helm-chart> --name <release_name> --namespace <release_namespace> -f <custom_file> --atomic --timeout 600
- Install PCF by using Helm3:
helm install -f <custom_file> <release_name> <helm-chart> --namespace <release_namespace> --atomic --timeout 10m
where:
helm-chart is the location of the helm chart extracted from the ocpcf-pkg-1.6.1.tgz file.
release_name is the release name used by helm command.
release_namespace is the deployment namespace used by helm command.
custom_file is the name of the custom values yaml file (including its location).
For example:
helm install /home/cloud-user/pcf-1.6.1.tgz --name ocpcf --namespace ocpcf -f ocpcf-custom-values-1.6.1.yaml --atomic
Refer to Customizing Policy Control Function for the sample yaml file.
Parameters in the helm install command:
- atomic: If this parameter is set, the installation process purges the chart on failure. The --wait flag will be set automatically.
- wait: If this parameter is set, the installation process will wait until all Pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout.
- timeout duration (optional): If not specified, the default value is 300 (300 seconds) in Helm2 and 5m (5 minutes) in Helm3. It specifies the time to wait for any individual kubernetes operation (like Jobs for hooks); the default value is 5m0s. If the helm install command fails at any point to create a kubernetes object, it internally calls the purge to delete it after the timeout value (default: 300s). Here, the timeout value is not for the overall install, but for the automatic purge on installation failure.
- You can verify the installation while running the install command by entering these commands:
watch kubectl get jobs,pods -n release_namespace
helm status release_name -n release_namespace
Press "Ctrl+C" to exit watch mode.
- Check the installation status by entering this command:
helm ls release_name
For example:
helm ls ocpcf
You will see the status as DEPLOYED if the deployment has been done successfully.
Execute the following command to get the status of jobs and pods:
kubectl get jobs,pods -n release_namespace
For example:
kubectl get pod -n ocpcf
You will see the status as Running for all the pods if the deployment has been done successfully.