2 Installing OCNRF

This section describes the prerequisites and installation procedure for the OCNRF.

Note:

If you want to configure OCNRF to support Aspen Service Mesh (ASM), refer to Configuring OCNRF to support ASM.

Prerequisites

Following are the prerequisites to install and configure OCNRF:

OCNRF Software

The OCNRF software includes:

  • OCNRF Helm charts
  • OCNRF docker images

The following software must be installed before installing OCNRF:

Table 2-1 Pre-installed Software

Software Version
Kubernetes v1.18.4
HELM v2.14.3 and v3.2

The following are the common services that need to be deployed as per the requirement:

Table 2-2 Common Services

Software Chart Version Required For
elasticsearch 7.6.1 Logging Area
elastic-curator 5.5.4 Logging Area
elastic-exporter 1.1.0 Logging Area
elastic-master 7.6.1 Logging Area
logs 3.0.0 Logging Area
kibana 7.6.1 Logging Area
grafana 7.0.4 Metrics Area
prometheus 2.16.0 Metrics Area
prometheus-kube-state-metrics 1.9.5 Metrics Area
prometheus-node-exporter 0.18.1 Metrics Area
metallb 0.9.3 External IP
metrics-server 2.10.0 Metric Server
tracer 1.14.0 Tracing Area

Note:

If any of the above services are required and the corresponding software is not already installed in CNE, install it before proceeding.
To check the installed software items, execute:
helm ls

Some systems may need to run the helm command with the admin.conf file, for example:

helm --kubeconfig admin.conf

Network access

The Kubernetes cluster hosts must have network access to:

  • Local docker image repository where the OCNRF images are available.
    To check if the Kubernetes cluster hosts have network access to the local docker image repository, pull any image with its tag name by executing:
    docker pull <docker-repo>/<image-name>:<image-tag>


  • Local helm repository where the OCNRF helm charts are available.
    To check if the Kubernetes cluster hosts have network access to the local helm repository, execute:
    helm repo update

    Note:

    Some systems may need to run the helm command with the admin.conf file, for example:

    helm --kubeconfig admin.conf

Note:

All the kubectl and helm related commands that are used in this document must be executed on a system depending on the infrastructure of the deployment. It could be a client machine such as a VM, server, local desktop, and so on.

Client machine requirement

The client machine must meet the following minimum requirements:
  • Network access to the helm repository and docker image repository.
  • Helm repository must be configured on the client.
  • Network access to the Kubernetes cluster.
  • Necessary environment settings to run the kubectl commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • Helm client must be installed. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.
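The requirements above can be partially checked from the client machine itself. A minimal sketch (assuming a POSIX shell; the tool list is the one this guide relies on) that reports which CLIs are on the PATH:

```shell
#!/bin/sh
# Report which prerequisite CLIs are present on this client machine.
# Tool names are the ones this guide relies on (kubectl, helm, docker).
for tool in kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done | tee tools_report.txt
```

Network access to the repositories and the cluster still has to be verified separately, for example with helm repo update and kubectl get nodes.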

Server or Space Requirements

For information on the server or space requirements, see the Oracle Communications Cloud Native Environment (OCCNE) Installation Guide.

Secret file requirement

For HTTPS and Access Token, the following certificates and pem files must be created before creating the secret files for Keys and MySQL:

  1. ECDSA private key and CA-signed ECDSA certificate (if initialAlgorithm: ES256)
  2. RSA private key and CA-signed RSA certificate (if initialAlgorithm: RS256)
  3. TrustStore password file
  4. KeyStore password file
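The four inputs above can be generated with standard OpenSSL commands. A minimal sketch for a lab setup; the certificates here are self-signed for illustration only (production certificates must be CA signed), and all file names and passwords are illustrative placeholders:

```shell
#!/bin/sh
# Lab-only sketch: self-signed certificates. Production requires CA-signed ones.

# 1. ECDSA private key and certificate (if initialAlgorithm: ES256)
openssl ecparam -name prime256v1 -genkey -noout -out ecdsa_private_key.pem
openssl req -new -x509 -key ecdsa_private_key.pem -subj "/CN=ocnrf" \
  -days 365 -out ecdsa_certificate.crt

# 2. RSA private key and certificate (if initialAlgorithm: RS256)
openssl genrsa -out rsa_private_key.pem 2048
openssl req -new -x509 -key rsa_private_key.pem -subj "/CN=ocnrf" \
  -days 365 -out rsa_certificate.crt

# 3. TrustStore password file and 4. KeyStore password file
echo "exampleTrustStorePassword" > truststore_password.txt
echo "exampleKeyStorePassword" > keystore_password.txt
```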

ServiceAccount requirement

The operator must create a service account and bind it with a Role that grants at least the get, watch, and list permissions on the required resources.

serviceAccountName is a mandatory parameter. The Kubernetes Secret resource is used to provide the following:
  • MYSQL DB Details to micro-services.

  • NRF's Private Key, NRF's Certificate and CA Certificate Details to Ingress/Egress Gateway for TLS.

  • NRF's Private and NRF's Public Keys to nfAccessToken micro-service for Digitally Signing AccessTokenClaims.

  • Producer/Consumer NF's Service/Endpoint details for routing messages from/to Egress/Ingress Gateway.

The Secret(s) can be in the same namespace where OCNRF is deployed (recommended), or the operator can choose different namespaces for different Secret(s). If all the Secret(s) are in the same namespace as OCNRF, a Kubernetes Role can be bound to the given ServiceAccount. Otherwise, a ClusterRole needs to be bound to the given ServiceAccount. The Role/ClusterRole needs to be created with resources (services, configmaps, pods, secrets, endpoints) and verbs (get, watch, list). Refer to Creating Service Account, Role and Role bindings for more details.
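For the cross-namespace case described above, the namespaced Role and RoleBinding are replaced by a ClusterRole and ClusterRoleBinding. A sketch mirroring the sample template in Creating Service Account, Role and Role bindings (the placeholder names follow that template and are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <helm-release>-ocnrf-clusterrole
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <helm-release>-ocnrf-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <helm-release>-ocnrf-clusterrole
subjects:
- kind: ServiceAccount
  name: <helm-release>-ocnrf-serviceaccount
  namespace: <namespace>
```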

DB Tier Requirement

DB Tier must be up and running. In case of geo-redundant deployments, replication between the geo-redundant DB Tiers must be configured. Refer to the DB Tier section in the OCCNE Installation Guide.

Installation Sequence

This section explains the tasks to be performed for installing OCNRF.

OCNRF pre-deployment configuration

Following are the pre-deployment configuration procedures:

  1. Creating OCNRF namespace

    Note:

    This is a mandatory procedure; execute it before proceeding any further. The namespace created or verified in this procedure is an input for the subsequent procedures.
  2. Creating Service Account, Role and Role bindings

    Note:

    This procedure is a sample. If the service account with role and role bindings is already configured, or if the user has an in-house procedure to create the service account, skip this procedure. If the deployment is with ASM, refer to Configuring OCNRF to support ASM for details and skip this procedure.
  3. Configuring MySql database and user
  4. Configuring Kubernetes Secret for Accessing OCNRF Database
  5. Configuring secrets for enabling HTTPS
  6. Configuring Secret for Enabling AccessToken Service
Creating OCNRF namespace

This section explains how the user can verify whether the required namespace is available in the system.

Procedure

  1. Verify whether the required namespace already exists in the system:
    $ kubectl get namespaces
  2. In the output of the above command, check whether the required namespace is available. If it is not available, create the namespace using the following command:

    Note:

    This is an optional step. If the required namespace already exists, skip this step.
    $ kubectl create namespace <required namespace>
    For example:
    $ kubectl create namespace ocnrf
Creating Service Account, Role and Role bindings

This section explains how the user can create the service account and the required role and role binding resources. The Secret(s) can be in the same namespace where OCNRF is deployed (recommended), or the operator can choose different namespaces for different Secret(s). If all the Secret(s) are in the same namespace as OCNRF, a Kubernetes Role can be bound to the given ServiceAccount. Otherwise, a ClusterRole needs to be bound to the given ServiceAccount.

A sample template for the resources is given below. Add the sample template content to a resource input yaml file.

Example file name: ocnrf-resource-template.yaml

Example command for creating the resources

kubectl -n <ocnrf-namespace> create -f ocnrf-resource-template.yaml

Sample template to create the resources

Note:

Replace the <helm-release> and <namespace> placeholders with the planned OCNRF helm release name and the OCNRF namespace, respectively.
## Sample template start#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-ocnrf-serviceaccount
  namespace: <namespace>
---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-ocnrf-role
  namespace: <namespace>
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - watch
  - list
---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <helm-release>-ocnrf-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-ocnrf-role
subjects:
- kind: ServiceAccount
  name:  <helm-release>-ocnrf-serviceaccount
  namespace: <namespace>
## Sample template end#
Configuring MySql database and user

Procedure for Geo-Redundant OCNRF sites

This section explains how the database administrator can create the databases and users for the OCNRF network function.

Note:

  1. The procedure differs for geo-redundant OCNRF sites and a standalone OCNRF site.
  2. Before executing the procedure below for geo-redundant sites, ensure that the DB Tiers for the geo-redundant sites are already up and the replication channels are enabled.
  3. When performing a fresh installation, if an OCNRF release is already deployed, purge the deployment and remove the databases and users used for the previous deployment. Refer to Uninstalling OCNRF for the uninstallation procedure.
  1. Log in to the machine where the ssh keys are stored and which has permission to access the SQL nodes of the NDB cluster.
  2. Connect to the SQL nodes.
  3. Log in to the MySQL prompt as root, or as a user that has permission to create users with the conditions mentioned below. For example: mysql -h 127.0.0.1 -uroot -p

    Note:

    This command may vary from system to system (path of the MySQL binary, root user, and root password). After executing this command, the user needs to enter the password for the user specified in the command.
  4. Check if the OCNRF database user already exists. If the user does not exist, create a database user.
    The steps below cover the creation of two types of OCNRF database users. Different users have different sets of permissions.
    1. OCNRF privileged user: This user has the complete set of permissions. This user can perform create, alter, and drop operations on tables in order to perform install, upgrade, rollback, or delete operations.
    2. OCNRF application user: This user has a reduced set of permissions and is used by the OCNRF application during service operations handling. This user can insert, update, get, and remove records. This user cannot create, alter, or drop the database or its tables.
    $ SELECT User FROM mysql.user;
    If the user already exists, move to the next step. Otherwise, create the OCNRF users as follows:
    • Create new ocnrf privileged user:

      $ CREATE USER '<OCNRF Privileged-User Name>'@'%' IDENTIFIED BY '<OCNRF Privileged-User Password>';

      Example:

      $ CREATE USER 'nrfPrivilegedUsr'@'%' IDENTIFIED BY 'nrfPrivilegedPasswd';
    • Create new ocnrf application user:
      $ CREATE USER '<OCNRF APPLICATION User Name>'@'%' IDENTIFIED BY '<OCNRF APPLICATION User Password>';

      Example:

      $ CREATE USER 'nrfApplicationUsr'@'%' IDENTIFIED BY 'nrfApplicationPasswd';

    Note:

    Both users must be created on all the SQL Nodes on all the sites.
  5. Check if the OCNRF database already exists. If the database does not exist, create the databases for the OCNRF network function:
    Execute the following command to check if database exists:
    $ show databases;

    If the database already exists, move to the next step. Otherwise, perform the following steps.

    For OCNRF application, two types of databases are required:
    1. OCNRF application database: This database consists of tables used by the application to perform the functionality of the NRF network function.
    2. OCNRF network database: This database consists of tables used by OCNRF to store network-wide details such as system details and database backups.
    1. Create database for OCNRF application:
      $ CREATE DATABASE IF NOT EXISTS <OCNRF Application Database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS nrfApplicationDB CHARACTER SET utf8;
    2. Create database for OCNRF network database:
      $ CREATE DATABASE IF NOT EXISTS <OCNRF network database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS nrfNetworkDB CHARACTER SET utf8;

      Note:

      The OCNRF application and network databases must be created on any one SQL node of any one OCNRF site.
    3. Grant permission to the OCNRF privileged user on the OCNRF application database:

      Note:

      This step must be executed on all the SQL nodes on all the OCNRF Geo-Redundant sites.
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF Application Database>.* TO '<OCNRF Privileged-User Name>'@'%';
      Example:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfPrivilegedUsr'@'%';
    4. Grant permission to OCNRF privileged user on OCNRF network database:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF network database>.* TO '<OCNRF Privileged-User Name>'@'%';
      Example:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfNetworkDB.* TO 'nrfPrivilegedUsr'@'%';
    5. Grant permission to OCNRF application user on OCNRF application database:
      $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF Application Database>.* TO '<OCNRF APPLICATION User Name>'@'%';
      Example:
      $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfApplicationUsr'@'%';
    6. Grant read permission to OCNRF application user for replication_info:
      $ GRANT SELECT ON replication_info.* TO '<OCNRF APPLICATION User Name>'@'%';
      Example:
      $ GRANT SELECT ON replication_info.* TO 'nrfApplicationUsr'@'%';
    7. Apply the grants using following command:
      FLUSH PRIVILEGES;
  6. Execute the command show grants for <username>; to confirm that the users have all the required permissions.
  7. Exit the database and log out from the MySQL nodes.
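For review, the user, database, and grant statements from the steps above can be collected into a single SQL file before running them on the SQL nodes. A sketch using the example names from this procedure; note that the CREATE USER and GRANT statements must be run on all SQL nodes of all sites, while the CREATE DATABASE statements are run on only one SQL node of one site:

```shell
#!/bin/sh
# Collect the example SQL from this procedure into one reviewable file.
# Names (users, passwords, databases) are the guide's example values.
cat > ocnrf_db_setup.sql <<'EOF'
CREATE USER 'nrfPrivilegedUsr'@'%' IDENTIFIED BY 'nrfPrivilegedPasswd';
CREATE USER 'nrfApplicationUsr'@'%' IDENTIFIED BY 'nrfApplicationPasswd';
CREATE DATABASE IF NOT EXISTS nrfApplicationDB CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS nrfNetworkDB CHARACTER SET utf8;
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfPrivilegedUsr'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfNetworkDB.* TO 'nrfPrivilegedUsr'@'%';
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfApplicationUsr'@'%';
GRANT SELECT ON replication_info.* TO 'nrfApplicationUsr'@'%';
FLUSH PRIVILEGES;
EOF
echo "wrote ocnrf_db_setup.sql"
```

The file can then be fed to the MySQL prompt on an SQL node, for example with mysql -h 127.0.0.1 -uroot -p < ocnrf_db_setup.sql.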

Procedure for standalone OCNRF site

  1. Log in to the machine where the ssh keys are stored and which has permission to access the SQL nodes of the NDB cluster.
  2. Connect to the SQL nodes.
  3. Log in to the MySQL prompt as root, or as a user that has permission to create users with the conditions mentioned below. For example: mysql -h 127.0.0.1 -uroot -p

    Note:

    This command may vary from system to system (path of the mysql binary, root user, and root password). After executing this command, the user needs to enter the password for the user specified in the command.
  4. Check if the OCNRF network function user already exists. If the user does not exist, create an OCNRF network function user.
    The steps below cover the creation of two types of OCNRF users. Different users have different sets of permissions.
    1. OCNRF privileged user: This user has the complete set of permissions. This user can perform create, alter, and drop operations on tables in order to perform install, upgrade, rollback, or delete operations.
    2. OCNRF application user: This user has a reduced set of permissions and is used by the OCNRF application during service operations handling. This user can insert, update, get, and remove records. This user cannot create, alter, or drop the database or its tables.
    $ SELECT User FROM mysql.user;
    If the user already exists, move to the next step. Otherwise, create the following OCNRF users:
    • Create new OCNRF application user:
      $ CREATE USER '<OCNRF APPLICATION User Name>'@'%' IDENTIFIED BY '<OCNRF APPLICATION Password>';

      Example:

      $ CREATE USER 'nrfApplicationUsr'@'%' IDENTIFIED BY 'nrfApplicationPasswd';
    • Create new OCNRF privileged user:
      $ CREATE USER '<OCNRF Privileged-User Name>'@'%' IDENTIFIED BY '<OCNRF Privileged-User Password>';

      Example:

      $ CREATE USER 'nrfPrivilegedUsr'@'%' IDENTIFIED BY 'nrfPrivilegedPasswd';

    Note:

    Both users must be created on all the SQL nodes on all the sites.
  5. Check if the OCNRF network function databases already exist. If they do not exist, create the databases for the OCNRF network function:
    Execute the following command to check if database exists:
    $ show databases;

    Check whether the required database is already in the list. If the database already exists, move to the next step. Otherwise, perform the following steps.

    For OCNRF application, two types of databases are required:
    1. OCNRF application database: This database consists of tables used by the application to perform the functionality of the NRF network function.
    2. OCNRF network database: This database consists of tables used by OCNRF to store OCNRF network-wide details such as system details and database backups.
    1. Create database for OCNRF application:
      $ CREATE DATABASE IF NOT EXISTS <OCNRF Application Database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS nrfApplicationDB CHARACTER SET utf8;
    2. Create database for OCNRF network:
      $ CREATE DATABASE IF NOT EXISTS <OCNRF network database> CHARACTER SET utf8;
      Example:
      $ CREATE DATABASE IF NOT EXISTS nrfNetworkDB CHARACTER SET utf8;
  6. Grant permissions to users on the databases:

    Note:

    This step must be executed on all the SQL nodes on each OCNRF standalone site.
    1. Grant permission to OCNRF privileged user on OCNRF application database:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF Application Database>.* TO '<OCNRF Privileged-User Name>'@'%';
      Example:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfPrivilegedUsr'@'%';
    2. Grant permission to OCNRF privileged user on OCNRF network database:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF network database>.* TO '<OCNRF Privileged-User Name>'@'%';
      Example:
      $ GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON nrfNetworkDB.* TO 'nrfPrivilegedUsr'@'%';
    3. Grant permission to OCNRF application user on OCNRF application database:
      $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE ON <OCNRF Application Database>.* TO '<OCNRF APPLICATION User Name>'@'%'; 
      Example:
      $ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, EXECUTE ON nrfApplicationDB.* TO 'nrfApplicationUsr'@'%';
    4. Grant read permission to OCNRF application user for replication_info:
      $ GRANT SELECT ON replication_info.* TO '<OCNRF APPLICATION User Name>'@'%';
      Example:
      $ GRANT SELECT ON replication_info.* TO 'nrfApplicationUsr'@'%';
  7. Apply the grants using following command:
    FLUSH PRIVILEGES;
  8. Exit the MySQL prompt and the SQL nodes.
Configuring Kubernetes Secret for Accessing OCNRF Database

This section explains the steps to configure Kubernetes secrets for accessing the OCNRF database created in the above section. This procedure must be executed before deploying OCNRF.

Kubernetes Secret Creation for OCNRF Privileged Database User

This section explains the steps to create Kubernetes secrets for the OCNRF database and the privileged user details created by the database administrator in the above section. This procedure must be executed before deploying OCNRF.

Create the Kubernetes secret for the privileged user as follows:
  1. Create the Kubernetes secret for MySQL:
    $ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<OCNRF Privileged Mysql database username> --from-literal=dbPassword=<OCNRF Privileged Mysql User database password> --from-literal=appDbName=<OCNRF Mysql database name> --from-literal=networkScopedDbName=<OCNRF Mysql Network database name> -n <Namespace of OCNRF deployment>

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command is used for updates in future.
    Example:
    $ kubectl create secret generic privilegeduser-secret --from-literal=dbUsername=nrfPrivilegedUsr --from-literal=dbPassword=nrfPrivilegedPasswd --from-literal=appDbName=nrfApplicationDB --from-literal=networkScopedDbName=nrfNetworkDB -n ocnrf
  2. Verify the secret created using the above command:
    $ kubectl describe secret <privileged user secret name> -n <Namespace of OCNRF deployment>
    Example:
    $ kubectl describe secret privilegeduser-secret -n ocnrf
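The imperative command above has a declarative equivalent, which some operators prefer to keep under version control. A sketch using the example values from this section (stringData accepts plain-text values, which Kubernetes base64-encodes when storing the Secret):

```yaml
# Declarative sketch of the same secret; values are the guide's examples.
apiVersion: v1
kind: Secret
metadata:
  name: privilegeduser-secret
  namespace: ocnrf
type: Opaque
stringData:
  dbUsername: nrfPrivilegedUsr
  dbPassword: nrfPrivilegedPasswd
  appDbName: nrfApplicationDB
  networkScopedDbName: nrfNetworkDB
```

Applying this manifest with kubectl apply -f achieves the same result as the create secret command above.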
Kubernetes Secret Update for OCNRF Privileged Database User
This section describes the steps to update the secrets. Update the Kubernetes secret for the privileged user as follows:
  1. Copy the exact command used in Kubernetes Secret Creation for OCNRF Privileged Database User section during creation of secret:
    $ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<OCNRF Privileged Mysql database username> --from-literal=dbPassword=<OCNRF Privileged Mysql database password> --from-literal=appDbName=<OCNRF Mysql database name> --from-literal=networkScopedDbName=<OCNRF Mysql Network database name> -n <Namespace of OCNRF deployment>
  2. Update the same command with the string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCNRF deployment>". After the update, the command will be as follows:
    $ kubectl create secret generic <privileged user secret name> --from-literal=dbUsername=<OCNRF Privileged Mysql database username> --from-literal=dbPassword=<OCNRF Privileged Mysql database password> --from-literal=appDbName=<OCNRF Mysql database name> --from-literal=networkScopedDbName=<OCNRF Mysql Network database name> --dry-run -o yaml -n <Namespace of OCNRF deployment> | kubectl replace -f - -n <Namespace of OCNRF deployment>
  3. Execute the updated command. The following message is displayed:
    secret/<privileged user secret name> replaced
Kubernetes Secret Creation for OCNRF Application Database User

This section explains the steps to create secrets for accessing and configuring the application database user created in the above section. This procedure must be executed before deploying OCNRF.

Create the Kubernetes secret for the OCNRF application database user as follows:
  1. Create the Kubernetes secret for the OCNRF application database user:
    $ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<OCNRF APPLICATION User Name> --from-literal=dbPassword=<Password for OCNRF APPLICATION User> --from-literal=appDbName=<OCNRF Application Database> -n <Namespace of OCNRF deployment>

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in future.
    Example:
    $ kubectl create secret generic appuser-secret --from-literal=dbUsername=nrfApplicationUsr --from-literal=dbPassword=nrfApplicationPasswd --from-literal=appDbName=nrfApplicationDB -n ocnrf 
  2. Verify the secret creation:
    $ kubectl describe secret <appuser-secret name> -n <Namespace of OCNRF deployment>
    Example:
    $ kubectl describe secret appuser-secret -n ocnrf
Kubernetes Secret Update for OCNRF Application Database User
This section explains how to update the Kubernetes secret.
  1. Copy the exact command used in the above section during the creation of the secret:
    $ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<OCNRF APPLICATION User Name> --from-literal=dbPassword=<Password for OCNRF APPLICATION User> --from-literal=appDbName=<OCNRF Application Database> -n <Namespace of OCNRF deployment>
  2. Update the same command with the string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCNRF deployment>". After the update, the command will be as follows:
    $ kubectl create secret generic <appuser-secret name> --from-literal=dbUsername=<OCNRF APPLICATION User Name> --from-literal=dbPassword=<Password for OCNRF APPLICATION User> --from-literal=appDbName=<OCNRF Application Database> --dry-run -o yaml -n <Namespace of OCNRF deployment> | kubectl replace -f - -n <Namespace of OCNRF deployment>
  3. Execute the updated command. The following message is displayed:
    secret/<appuser-secret name> replaced
Configuring secrets for enabling HTTPS

Creation of secrets for enabling HTTPS in OCNRF Ingress gateway

This section explains the steps to configure secrets for enabling HTTPS in the ingress and egress gateways. This procedure must be executed before enabling HTTPS in the OCNRF Ingress/Egress gateway.

Note:

The passwords for TrustStore and KeyStore are stored in the respective password files mentioned below.
To create the Kubernetes secret for HTTPS, the following files are required:
  • ECDSA private key and CA signed certificate of OCNRF (if initialAlgorithm is ES256)
  • RSA private key and CA signed certificate of OCNRF (if initialAlgorithm is RS256)
  • TrustStore password file
  • KeyStore password file

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user/operator.

  1. Execute the following command to create secret:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCNRF deployment>

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in future.
    Example: The names used below are the same as provided in the custom_values.yaml in the OCNRF deployment.
    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnrf
  2. Verify the secret created using the following command:
    $ kubectl describe secret <ocingress-secret-name> -n <Namespace of OCNRF deployment>
    Example:
    $ kubectl describe secret ocingress-secret -n ocnrf

Update the secrets for enabling HTTPS in OCNRF Ingress gateway

This section explains how to update the secret with updated details.

  1. Copy the exact command used in the above section during the creation of the secret.
  2. Update the same command with the string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCNRF deployment>".
  3. The create secret command will look as follows:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCNRF deployment> | kubectl replace -f - -n <Namespace of OCNRF deployment>

    Example:

    The names used below are the same as provided in the custom_values.yaml in the OCNRF deployment:
    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf
  4. Execute the updated command.
  5. After successful secret update, the following message is displayed:
    secret/<ocingress-secret> replaced

Creation of secrets for enabling HTTPS in OCNRF Egress gateway

This section explains the steps to create the secret for HTTPS related details. This procedure must be executed before enabling HTTPS in the OCNRF Egress gateway.

Note:

The passwords for TrustStore and KeyStore are stored in the respective password files mentioned below.

To create the Kubernetes secret for HTTPS, the following files are required:

  • ECDSA private key and CA signed certificate of OCNRF (if initialAlgorithm is ES256)
  • RSA private key and CA signed certificate of OCNRF (if initialAlgorithm is RS256)
  • TrustStore password file
  • KeyStore password file

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user/operator.
  1. Execute the following command to create secret.
    $ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem>  --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCNRF deployment>

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command will be used for updates in future.

    Example: The names used below are the same as provided in the custom_values.yaml in the OCNRF deployment.

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocnrf
  2. Verify the secret created using the following command:
    $ kubectl describe secret <ocegress-secret-name> -n <Namespace of OCNRF deployment>

    Example:

    $ kubectl describe secret ocegress-secret -n ocnrf

Update the secrets for enabling HTTPS in OCNRF Egress gateway

This section explains how to update the secret with updated details.

  1. Copy the exact command used in the above section during the creation of the secret.
  2. Update the same command with the string "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCNRF deployment>".
  3. The create secret command will look as follows:
    $ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCNRF deployment> | kubectl replace -f - -n <Namespace of OCNRF deployment>

    Example:

    The names used below are the same as provided in the custom_values.yaml in the OCNRF deployment:
    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf
  4. Execute the updated command.
  5. After successful secret update, the following message is displayed:
    secret/<ocegress-secret> replaced
Configuring Secret for Enabling AccessToken Service

Access Token secret creation

This section explains the steps to create the secret for the AccessToken service of OCNRF. This procedure must be executed before enabling Access Token in OCNRF.

Note:

The password for KeyStore is stored in the respective password file mentioned below.

To create the Kubernetes secret for AccessToken, the following files are required:

  • ECDSA private key and CA signed certificate of OCNRF (if initialAlgorithm is ES256)
  • RSA private key and CA signed certificate of OCNRF (if initialAlgorithm is RS256)
  • KeyStore password file: This file contains the password that is used to protect the private keys/certificates that are loaded into the application's in-memory KeyStore.

    For example: echo qwerpoiu > keystore_password.txt

    where qwerpoiu is the password and keystore_password.txt is the target file that is provided as input to the AccessToken secret.

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user/operator.
  1. Execute the following command to create the secret. The names used below are the same as provided in the custom_values.yaml in the OCNRF deployment:
    kubectl create secret generic <ocnrfaccesstoken-secret> --from-file=<ecdsa_private_key.pem> --from-file=<rsa_private_key.pem> --from-file=<keystore_password.txt> --from-file=<rsa_certificate.crt> --from-file=<ecdsa_certificate.crt> -n <Namespace of OCNRF deployment>

    Note:

    Note down the command used during the creation of the Kubernetes secret; this command is used for future updates.
    Example:
    $ kubectl create secret generic ocnrfaccesstoken-secret --from-file=ecdsa_private_key.pem --from-file=rsa_private_key.pem --from-file=keystore_password.txt --from-file=rsa_certificate.crt --from-file=ecdsa_certificate.crt -n ocnrf
  2. Execute the following command to verify that the secret was created:
    $ kubectl describe secret <ocnrfaccesstoken-secret-name> -n <Namespace of OCNRF deployment>

    Example:

     $ kubectl describe secret ocnrfaccesstoken-secret -n ocnrf

Access Token secret update

This section explains how to update the AccessToken secret with new details.
  1. Copy the exact command used during secret creation in the section above.
  2. Append the string "--dry-run -o yaml" and pipe the output to "kubectl replace -f - -n <Namespace of OCNRF deployment>".
  3. The updated command looks like:
    kubectl create secret generic <ocnrfaccesstoken-secret> --from-file=<ecdsa_private_key.pem> --from-file=<rsa_private_key.pem> --from-file=<keystore_password.txt> --from-file=<rsa_certificate.crt> --from-file=<ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCNRF deployment> | kubectl replace -f - -n <Namespace of OCNRF deployment>

    Example:

    The names used below are the same as provided in custom_values.yaml in the OCNRF deployment:
    $ kubectl create secret generic ocnrfaccesstoken-secret --from-file=ecdsa_private_key.pem --from-file=rsa_private_key.pem --from-file=keystore_password.txt --from-file=rsa_certificate.crt --from-file=ecdsa_certificate.crt --dry-run -o yaml -n ocnrf | kubectl replace -f - -n ocnrf
  4. Execute the updated command.
  5. After successful secret update, the following message is displayed:
    secret/<ocnrfaccesstoken-secret> replaced
OCNRF Access Token Service Usage Details

OCNRF implements the Nnrf_AccessToken service (used for OAuth2 authorization) with the "Client Credentials" authorization grant. It exposes a "Token Endpoint" where NF Service Consumers can request the Access Token Request service.

The Nnrf_AccessToken service operation is defined as follows:
  • Access Token Request (i.e. Nnrf_AccessToken_Get)

Note:

This procedure is specific to the OCNRF Access Token service operation. OCNRF general configurations, database, and database-specific secret creation are not part of this procedure.

Procedure to use OCNRF Access Token Service Operation

This procedure provides the step-by-step details needed to use the 3GPP-defined Access Token service operation supported by OCNRF.
  1. Create OCNRF private key and public certificate
    This step explains the need to create the OCNRF private keys and public certificates. The private keys are used by OCNRF to sign the generated Access Tokens and must be available only to OCNRF. The public certificates are used by producer NFs to validate the access tokens generated by OCNRF, so the public certificates must be available to the producer network functions. OCNRF supports two types of signing algorithms, and different keys and certificates must be generated for each:
    • ES256: ECDSA digital signature with SHA-256 hash algorithm
    • RS256: RSA digital signature with SHA-256 hash algorithm
    Files for either or both algorithms can be generated, depending on which hash algorithms are used. The algorithm configured in OCNRF determines which key is used to sign the Access Token.

    Note:

    The creation of private keys, certificates, and passwords is at the discretion of the user/operator.
    Sample keys and certificates:

    After execution of this step, the OCNRF private keys and public certificates are available (the generated files depend on the algorithms chosen by the operator/user).

    For example:

    ES256 based keys and certificates:
    • ecdsa_private_key.pem

    • ecdsa_certificate.crt

    RS256 based keys and certificates:
    • rsa_private_key.pem

    • rsa_certificate.crt
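As the note above says, key and certificate creation is left to the operator; the following is only a lab-oriented sketch that produces files with the example names listed above. It uses self-signed certificates and a placeholder CN (ocnrf.example.com, an assumption, not a value from this guide); production deployments use CA-signed certificates from the operator's PKI.

```shell
# Lab-only sketch: generate ES256 (P-256) and RS256 (RSA-2048) key material
# with self-signed certificates matching the example file names above.

# ES256: ECDSA P-256 private key and self-signed certificate
openssl ecparam -name prime256v1 -genkey -noout -out ecdsa_private_key.pem
openssl req -new -x509 -key ecdsa_private_key.pem -out ecdsa_certificate.crt \
  -days 365 -subj "/CN=ocnrf.example.com"

# RS256: RSA-2048 private key and self-signed certificate
openssl genrsa -out rsa_private_key.pem 2048
openssl req -new -x509 -key rsa_private_key.pem -out rsa_certificate.crt \
  -days 365 -subj "/CN=ocnrf.example.com"

# Sanity check: confirm each certificate carries the expected key algorithm
openssl x509 -in ecdsa_certificate.crt -noout -text | grep -q 'id-ecPublicKey'
openssl x509 -in rsa_certificate.crt  -noout -text | grep -q 'rsaEncryption'
```

The resulting files are the inputs to the Kubernetes secret created in the later steps.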

  2. Create a password to protect the generated keys and certificates inside the OCNRF container

    This step explains how to create the password that is used to protect the generated keys and certificates inside the OCNRF container.

    Sample step to create:
     echo qwerpoiu > keystore_password.txt
    where, qwerpoiu is the password and keystore_password.txt is the target password file

    Note:

    This file is provided in Kubernetes secret.

    After execution of this step, the password file is available.

    For example: keystore_password.txt

  3. Namespace creation for Secrets

    This step explains the need to create the Kubernetes namespace in which the Kubernetes secrets for the OCNRF private keys, OCNRF public certificate, and keystore password are created. Refer to the Creating OCNRF Namespace section.

    Note:

    • The same or different namespaces can be used for the OCNRF private keys, OCNRF public certificate, and keystore password.
    • The namespace(s) must have RBAC resources defined with the required privileges.
    • The namespace can be the same as the OCNRF namespace.
    • After this step, a namespace is available in which the required secrets can be created in the next steps.
  4. Secret creation for OCNRF private keys, OCNRF public certificate, and keystore password
    This step explains the commands to create the Kubernetes secret(s) in which the OCNRF private keys, OCNRF public certificate, and keystore password are kept safely. Refer to the Configuring Kubernetes Secret for Accessing OCNRF Database section.

    Note:

    A single secret can be created for the OCNRF private keys, OCNRF public certificate, and keystore password; the sample command in the steps creates such a single secret. If a separate secret is needed for each entity, the same command can be reused per entity.
  5. Configure OCNRF custom_values.yaml with outcome details of Steps 1 to 4

    This step explains how to customize the OCNRF custom_values.yaml to use the OCNRF private keys, OCNRF public certificate, keystore password file, secrets, and secret namespace. Refer to the Configuring Secret for Enabling AccessToken Service section.

    Key Attributes in OCNRF custom_values.yaml:
    • nfaccesstoken.oauth.nrfInstanceId - OCNRF's NF Instance ID, used when signing the AccessTokenClaim.
    • nfaccesstoken.oauth.initialAlgorithm - The default signing algorithm used by the Access Token microservice.
    • NF Access Token OCNRF Private Key Details
      1. k8SecretName - Kubernetes secret name for the OCNRF Access Token private key
      2. k8NameSpace - Namespace of the OCNRF Access Token private key secret
      3. rsa.filename - File name of the OCNRF Access Token private key for the RSA algorithm
      4. ecdsa.filename - File name of the OCNRF Access Token private key for the ECDSA algorithm
    • NF Access Token OCNRF Public Certificate Details
      1. k8SecretName - Kubernetes secret name for the OCNRF Access Token public certificate
      2. k8NameSpace - Namespace of the OCNRF Access Token public certificate secret
      3. rsa.filename - File name of the OCNRF Access Token public certificate for the RSA algorithm
      4. ecdsa.filename - File name of the OCNRF Access Token public certificate for the ECDSA algorithm
    • NF Access Token Key Store Password Details
      1. k8SecretName - Kubernetes secret name for the OCNRF Access Token KeyStore password
      2. k8NameSpace - Namespace of the OCNRF Access Token KeyStore password secret
      3. filename - KeyStore password file name
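As an illustrative sketch only, the attributes above might be combined in custom_values.yaml roughly as follows. The grouping keys (privateKey, publicKey, keyStorePassword) and the exact nesting are assumptions for illustration; take the authoritative attribute paths from the Customizing OCNRF chapter. All values are placeholders reusing the example names from this section.

```yaml
nfaccesstoken:
  oauth:
    nrfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c   # placeholder NF Instance ID
    initialAlgorithm: ES256                               # or RS256
    privateKey:                                           # hypothetical grouping key
      k8SecretName: ocnrfaccesstoken-secret
      k8NameSpace: ocnrf
      rsa:
        filename: rsa_private_key.pem
      ecdsa:
        filename: ecdsa_private_key.pem
    publicKey:                                            # hypothetical grouping key
      k8SecretName: ocnrfaccesstoken-secret
      k8NameSpace: ocnrf
      rsa:
        filename: rsa_certificate.crt
      ecdsa:
        filename: ecdsa_certificate.crt
    keyStorePassword:                                     # hypothetical grouping key
      k8SecretName: ocnrfaccesstoken-secret
      k8NameSpace: ocnrf
      filename: keystore_password.txt
```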

Installation Tasks

This section describes the tasks that the user must perform to install OCNRF.

Download OCNRF package
Following is the procedure to download the release package from My Oracle Support (MOS):
  1. Login to MOS using the appropriate login credentials.
  2. Select Product & Updates tab.
  3. In Patch Search console, select Product or Family (Advanced) tab.
  4. Enter Oracle Communications Cloud Native Core - 5G in Product field and select the product from the Product drop-down.
  5. Select Oracle Communications Cloud Native Core Network Repository Function <release_number> in Release field.
  6. Click Search. The Patch Advanced Search Results list appears.
  7. Select the required patch from the list. The Patch Details window appears.
  8. Click on Download. File Download window appears.
  9. Click on the <p********_<release_number>_Tekelec>.zip file.
  10. Extract the release package zip file to download the network function patch to the system where network function must be installed.
Configuring OCNRF to support ASM

OCNRF leverages the Istio/Envoy based service mesh (Aspen Service Mesh) for all internal and external communication. The service mesh integration provides inter-NF communication and allows the API gateway to work together with the service mesh. It supports the services by deploying a sidecar proxy in the environment to intercept all network communication between microservices.

Supported ASM version: 1.5.7-am3

For ASM installation and configuration details, refer to the official Aspen Service Mesh website.

Pre-deployment configurations

This section explains the pre-deployment configuration procedure to install OCNRF with ASM support.

Follow the procedure as mentioned below:

  1. Steps for creating OCNRF namespace
    1. Verify whether the required namespace already exists in the system:
      $ kubectl get namespaces
    2. In the output of the above command, check whether the required namespace is available. If it is not, create the namespace using the following command:
      $ kubectl create namespace <ocnrf namespace>
      Example:
      $ kubectl create namespace ocnrf
  2. Steps to set the connectivity to database (DB) service
    1. For VM based DB deployment
      1. Create a Headless service for DB connectivity in OCNRF namespace:
        $ kubectl apply -f db-connectivity.yaml
        Sample db-connectivity.yaml file:
        # db-connectivity.yaml
        apiVersion: v1
        kind: Endpoints
        metadata:
          name: ocnrf-db-connectivity-service-headless
          namespace: <db-namespace>
        subsets:
        - addresses:
          - ip: <10.75.203.49> # IP Endpoint of DB service.
          ports:
          - port: 3306
            protocol: TCP
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: ocnrf-db-connectivity-service-headless
          namespace: <db-namespace>
        spec:
          clusterIP: None
          ports:
          - port: 3306
            protocol: TCP
            targetPort: 3306
          sessionAffinity: None
          type: ClusterIP
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: ocnrf-db-connectivity-service
          namespace: <ocnrf-namespace>
        spec:
          externalName: ocnrf-db-connectivity-service-headless.<db-namespace>.svc.<domain>
          sessionAffinity: None
          type: ExternalName
      2. Create ServiceEntry and DestinationRule for DB connectivity service:
        $ kubectl apply -f db-se-dr.yaml
        Sample db-se-dr.yaml file:
        apiVersion: networking.istio.io/v1alpha3
        kind: ServiceEntry
        metadata:
          name: ocnrf-db-external-se
          namespace: <ocnrf-namespace>
        spec:
          exportTo:
          - "."
          hosts:
          - ocnrf-db-connectivity-service-headless.<db-namespace>.svc.<domain>
          ports:
          - number: 3306
            name: mysql
            protocol: MySQL
          location: MESH_EXTERNAL
          resolution: NONE
        ---
        apiVersion: networking.istio.io/v1alpha3
        kind: DestinationRule
        metadata:
          name: ocnrf-db-external-dr
          namespace: <ocnrf-namespace>
        spec:
          exportTo:
          - "."
          host: ocnrf-db-connectivity-service-headless.<db-namespace>.svc.<domain>
          trafficPolicy:
            tls:
              mode: DISABLE
    2. For KubeVirt based DB deployment
      1. A DB connectivity headless service is not required for a KubeVirt based deployment because the DB service may be exposed as a Kubernetes service. OCNRF can use the Kubernetes service FQDN to connect to the DB service.
        Create a DestinationRule with DB FQDN to disable mTLS:
        $ kubectl apply -f db-dr.yaml
        Sample db-dr.yaml file:
        apiVersion: networking.istio.io/v1alpha3
        kind: DestinationRule
        metadata:
          name: ocnrf-db-service-dr
          namespace: <ocnrf-namespace>
        spec:
          exportTo:
          - "."
          host: <db-service-fqdn>.<db-namespace>.svc.<domain>
          trafficPolicy:
            tls:
              mode: DISABLE
  3. Configure access to Kubernetes API service
    Create a service entry in pod networking so that pods can access the Kubernetes API server:
    $ kubectl apply -f kube-api-se.yaml
    Sample kube-api-se.yaml file:
    # kube-api-se.yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kube-api-server
      namespace: <ocnrf-namespace>
    spec:
      hosts:
      - kubernetes.default.svc.<domain>
      exportTo:
      - "."
      addresses:
      - <10.96.0.1> # cluster IP of kubernetes api server
      location: MESH_INTERNAL
      ports:
      - number: 443
        name: https
        protocol: HTTPS
      resolution: NONE
Deploying OCNRF with ASM
  1. Namespace label for auto sidecar injection
    Label the OCNRF namespace for automatic sidecar injection so that sidecars are automatically added to all pods spawned in the OCNRF namespace:
    $ kubectl label ns <ocnrf-namespace> istio-injection=enabled
  2. Creating Service Account, Role and Role bindings
    Create a Service Account for OCNRF and a Role with the appropriate security policies for the sidecar proxies to work. Refer to the sample sa-role-rolebinding.yaml file:
    $ kubectl apply -f sa-role-rolebinding.yaml
    Sample sa-role-rolebinding.yaml file:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: "ocnrf-service-account"
      namespace: "ocnrf"
      labels:
        app.kubernetes.io/component: internal
      annotations:
        sidecar.istio.io/inject: "false"
        "certificate.aspenmesh.io/customFields": '{ "SAN": { "DNS": [ "ocnrf.3gpp.oracle.com" ] } }'
    ---
     
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: "ocnrf-role"
      namespace: "ocnrf"
      labels:
        app.kubernetes.io/component: internal
      annotations:
        sidecar.istio.io/inject: "false"
    rules:
    - apiGroups:
      - "" # "" indicates the core API group
      resources:
      - services
      - configmaps
      - pods
      - secrets
      - endpoints
      verbs:
      - get
      - watch
      - list
    ---
     
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: "ocnrf-rolebinding"
      namespace: "ocnrf"
      labels:
        app.kubernetes.io/component: internal
      annotations:
        sidecar.istio.io/inject: "false"
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: "ocnrf-role"
    subjects:
    - kind: ServiceAccount
      name: "ocnrf-service-account"
      namespace: "ocnrf"
  3. Update ocnrf-custom-values-1.8.0.yaml with the required annotations
    Update the custom values with the following annotations:
    1. Update the global section with the following attributes:
      global:
        customExtension:
          allResources:
             labels: {}
             annotations: {}
          lbDeployments:
            annotations:
              oracle.com/cnc: "true"
          nonlbDeployments:
            annotations:
              oracle.com/cnc: "true"
    2. Update the service account name with the configured OCNRF service account value:
      serviceAccountName: <"ocnrf-release-1-8-0-ocnrf-serviceaccount">
    3. Update the MySQL primary database host with the value matching the DB service configuration done in the section above:
      mysql:
           primary:
             # Primary DB Connection Service IP or Hostname
             host: "ocnrf-db-connectivity-service"
    4. Update the global ingress-gateway section with the attributes below:
      To use the NF authentication using TLS certificate feature, set the 'enabled' attribute to true.
        xfccHeaderValidation:
          extract:
            enabled: false
    5. Update the ingress-gateway section with the attributes below:
      Enable the Service Mesh flag in ingress-gateway:
      ingress-gateway:
        # Mandatory: Set this flag to "true" if OCNRF is deployed where a Service Mesh is present
        serviceMeshCheck: true
      Change Ingress-Gateway Service Type to ClusterIP:
      global:
          # Service Type
          type: ClusterIP
    6. Update the NRF configuration microservice section with the attributes below:
      nrfconfiguration:
        service:
          # Service Type
          type: ClusterIP
    7. Update the NF access token microservice section with the attributes below:
      nfaccesstoken:
        deployment:
          customExtension:
            labels: {}
            annotations:
              traffic.sidecar.istio.io/excludeOutboundIPRanges: <Kubernetes API Server IP Address in CIDR format>
      
  4. Install OCNRF using the updated ocnrf-custom-values-1.8.0.yaml. Refer to the OCNRF Installation section for details.
    Sample output for pods:
    NAME                                     READY  STATUS  RESTARTS   AGE
    ocnrf-appinfo-69c54fff6c-59wsm            2/2  Running   0        2d13h
    ocnrf-egressgateway-79448858b5-dqsbp      2/2  Running   0        2d13h
    ocnrf-ingressgateway-5bb8784498-slvvd     2/2  Running   0        2d13h
    ocnrf-nfaccesstoken-77bb954fc7-448t4      3/3  Running   0        2d13h
    ocnrf-nfdiscovery-5df8755ff4-grtnn        2/2  Running   0        2d13h
    ocnrf-nfregistration-7895d8799c-z6rpf     2/2  Running   0        2d13h
    ocnrf-nfsubscription-69769fd586-dgjfp     2/2  Running   0        2d13h
    ocnrf-nrfauditor-56487f956b-gqrrp         2/2  Running   0        2d13h
    ocnrf-nrfconfiguration-6c6c9d466b-ptxqm   2/2  Running   0        2d13h
Post-deployment configuration
This section explains the post-deployment configurations to install OCNRF with support for ASM.
  1. Enable Inter-NF communication

    For every new NF participating in call flows where OCNRF acts as the client, a DestinationRule and a ServiceEntry need to be created in the OCNRF namespace to enable communication.

    Following are the inter-NF communications involving OCNRF:
    • OCNRF to SLF/UDR communication
    • OCNRF to other NRF communication (Forwarding)
    • OCNRF to different NFs Notification Servers
    $ kubectl apply -f new-nf-se-dr.yaml
    Sample new-nf-se-dr.yaml file:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: <unique DR name for NF>
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      host: <NF-public-FQDN>
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: <unique SE name for NF>
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - <NF-public-FQDN>
      ports:
      - number: <NF-public-port>
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
    A sample resource for the UDR/SLF service is provided below:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: udr1-dr
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      host: s24e65f98-bay190-rack38-udr-11.oracle-ocudr.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: udr-se
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "s24e65f98-bay190-rack38-udr-11.oracle-ocudr.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE

    Note:

    The above procedure needs to be executed for all forwarding NRFs and SLF/UDRs.
    For each Network Function notification URI that NFs send to OCNRF during subscription creation and that is not part of the Service Mesh registry, a DestinationRule and a ServiceEntry need to be created in the OCNRF namespace to enable communication.
    $ kubectl apply -f notification-uri-se-dr.yaml
    Example:
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ocpcf-callback-dr
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      host: ocpcf-notifications-processor-03.oracle-ocpcf.cnc.us-east.oracle.com
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/cert-chain.pem
          privateKey: /etc/certs/key.pem
          caCertificates: /etc/certs/root-cert.pem
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: ocpcf-callback-se
      namespace: <ocnrf-namespace>
    spec:
      exportTo:
      - .
      hosts:
      - "ocpcf-notifications-processor-03.oracle-ocpcf.cnc.us-east.oracle.com"
      ports:
      - number: 16016
        name: http2
        protocol: HTTP2
      location: MESH_EXTERNAL
      resolution: NONE
  2. OSO deployment

    No additional steps are required. Refer to OSO Installation Guide for more information.

    Note:

    If OSO is deployed in the same namespace as OCNRF, make sure all OSO deployments have the following annotation to skip sidecar injection, as OSO currently does not support the ASM sidecar proxy.
    sidecar.istio.io/inject: "\"false\""
OCNRF Installation
This section describes how to install OCNRF on the cloud native environment.
  1. Unzip the release package file on the system where you want to install the network function. The OCNRF package is named as follows:

    ReleaseName-pkg-Releasenumber.tgz

    where:

    ReleaseName is a name which is used to track this installation instance.

    Releasenumber is the release number.

    For example, ocnrf-pkg-1.8.0.0.0.tgz
  2. Untar the OCNRF package file to get OCNRF docker image tar file:
    tar -xvzf ReleaseName-pkg-Releasenumber.tgz
  3. Load the ocnrf-images-<release_number>.tar file into the Docker system:
    docker load --input /IMAGE_PATH/ocnrf-images-<release_number>.tar
  4. Verify that the image is loaded correctly by entering this command:
    docker images 
  5. Execute the following commands to push the docker images to docker registry:
    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
    docker push <docker-repo>/<image-name>:<image-tag> 
  6. Untar the helm files:
    tar -xvzf ocnrf-<release_number>.tgz
  7. Create the customized ocnrf-custom-values-1.8.0.yaml file with the required input parameters. To customize the file, refer to the Customizing OCNRF chapter.
  8. Go to the extracted OCNRF package directory:
    cd ocnrf-<release_number>
  9. Install OCNRF by executing the following command:
    1. In case of helm2, execute the following command:
      helm install ocnrf/ --name <helm-release> --namespace <k8s namespace> -f <ocnrf_customized_values.yaml>
      

      Example: helm install ocnrf/ --name ocnrf --namespace ocnrf -f ocnrf-custom-values-1.8.0.yaml

    2. In case of helm3, execute the following command:
      helm3 install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>

    Caution:

    This command may appear to hang for a while because, from the OCNRF 1.8.0 release onwards, Kubernetes jobs are executed by Install/Upgrade/Rollback Helm hooks. The Helm deployment is shown as DONE only after all the applicable hooks are executed.

    timeout duration (optional): If not specified, the default value is 300 (300 seconds) in Helm2 and 5m (5 minutes) in Helm3. The --timeout option specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls a purge to delete the release after the timeout value (default: 300s). The timeout value is not for the overall install; it applies to the automatic purge on installation failure.

    To verify the deployment status, open a new terminal and execute the following command:

    Command: $ watch kubectl get pods -n <k8s namespace>

    The pod status is updated at regular intervals. Once the helm install command exits with its status, you can stop watching the status of the Kubernetes pods.

    Note:

    In case helm purge does not clean up the deployment and Kubernetes objects completely, follow the Cleaning OCNRF deployment section.
  10. Execute the following command to check the status:
    1. For helm2:

      helm status <helm-release>

      For example: helm status ocnrf

    2. For helm3:
      helm3 status <helm-release>  -n <helm-release>

      Example: helm3 status ocnrf -n ocnrf

  11. Execute the following command to check status of the services:

    kubectl -n <k8s namespace> get services

    For example:

    kubectl -n ocnrf get services

    Note: If an external load balancer is used, EXTERNAL-IP is assigned to <helm release name>-ingressgateway. In the example below, ocnrf is the helm release name.
    NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)            AGE
    ocnrf-egressgateway    ClusterIP     10.233.1.61    <none>       8080/TCP,5701/TCP  30h
    ocnrf-ingressgateway   LoadBalancer  10.233.52.194  <pending>    80:31776/TCP       30h
    ocnrf-nfaccesstoken    ClusterIP     10.233.53.115  <none>       8080/TCP           30h
    ocnrf-nfdiscovery      ClusterIP     10.233.21.28   <none>       8080/TCP           30h
    ocnrf-nfregistration   ClusterIP     10.233.4.140   <none>       8080/TCP           30h
    ocnrf-nfsubscription   ClusterIP     10.233.44.98   <none>       8080/TCP           30h
    ocnrf-nrfauditor       ClusterIP     10.233.1.71    <none>       8080/TCP           30h
    ocnrf-nrfconfiguration LoadBalancer  10.233.40.230  <pending>    8080:30076/TCP     30h
    ocnrf-ocnrf-app-info   ClusterIP     10.104.113.86  <none>       5906/TCP           30h
  12. Execute the following command to check status of the pods:

    kubectl get pods -n <k8s namespace>

    The Status column of all the pods should be 'Running'.

    The Ready column of all the pods should be n/n, where n is the number of containers in the pod.

    For example:

    kubectl get pods -n ocnrf
    NAME                                     READY  STATUS   RESTARTS  AGE
    ocnrf-egressgateway-d6567bbdb-9jrsx      2/2    Running  0         30h
    ocnrf-egressgateway-d6567bbdb-ntn2v      2/2    Running  0         30h
    ocnrf-ingressgateway-754d645984-h9vzq    2/2    Running  0         30h
    ocnrf-ingressgateway-754d645984-njz4w    2/2    Running  0         30h
    ocnrf-nfaccesstoken-59fb96494c-k8w9p     2/2    Running  0         30h
    ocnrf-nfaccesstoken-49fb96494c-k8w9q     2/2    Running  0         30h
    ocnrf-nfdiscovery-84965d4fb9-rjxg2       1/1    Running  0         30h
    ocnrf-nfdiscovery-94965d4fb9-rjxg3       1/1    Running  0         30h
    ocnrf-nfregistration-64f4d8f5d5-6q92j    1/1    Running  0         30h
    ocnrf-nfregistration-44f4d8f5d5-6q92i    1/1    Running  0         30h
    ocnrf-nfsubscription-5b6db965b9-gcvpf    1/1    Running  0         30h
    ocnrf-nfsubscription-4b6db965b9-gcvpe    1/1    Running  0         30h
    ocnrf-nrfauditor-67b676dd87-xktbm        1/1    Running  0         30h
    ocnrf-nrfconfiguration-678fddc5f5-c5htj  1/1    Running  0         30h
    ocnrf-appinfo-8b7879cdb-jds4r            1/1    Running  0         30h
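Checking the Ready column by eye scales poorly on larger deployments. As a small helper sketch (not part of the OCNRF package), the output of `kubectl get pods -n <k8s namespace> --no-headers` can be filtered down to only those pods whose Ready count is not n/n:

```shell
# not_ready: print the names of pods whose READY column is not n/n.
# On a live cluster, pipe `kubectl get pods -n <namespace> --no-headers` into it.
not_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 }'
}

# Offline demonstration with sample kubectl output:
printf '%s\n' \
  'ocnrf-nfregistration-64f4d8f5d5-6q92j 1/1 Running 0 30h' \
  'ocnrf-nfaccesstoken-59fb96494c-k8w9p  1/2 Running 0 30h' | not_ready
# prints: ocnrf-nfaccesstoken-59fb96494c-k8w9p
```

An empty result means every pod has all of its containers ready.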