2 Installing BSF

This chapter provides information about installing Oracle Communications Cloud Native Core, Binding Support Function (BSF) in a cloud native environment.

Note:

  • BSF supports fresh installation and can also be upgraded from 24.2.x and 24.1.x. For more information on how to upgrade BSF, see Upgrading BSF.

2.1 Prerequisites

Before installing and configuring BSF, ensure that the following prerequisites are met.

2.1.1 Software Requirements

This section lists the software that must be installed before installing BSF:

Table 2-1 Preinstalled Software

Software    Versions
Kubernetes  1.30.x, 1.29.x, 1.28.x
Helm        3.14.2
Podman      4.9.4

Note:

CNE 24.3.x, 24.2.x, and 24.1.x versions can be used to install BSF 24.3.0.
To check the currently installed CNE, Helm, Kubernetes, and Podman versions, run the following commands:
echo $OCCNE_VERSION
helm version
kubectl version
podman version

Note:

This guide covers the installation instructions for BSF when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed Container Runtime Environment. For more information, see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade and Fault Recovery Guide.

The following software is available if BSF is deployed in CNE. If you are deploying BSF in any other cloud native environment, this additional software must be installed before installing BSF.

To check the installed software, run the following command:

helm ls -A

Table 2-2 Additional Software

Software                      Version  Purpose
AlertManager                  0.27.0   Alerts Manager
Calico                        3.27.3   Security Solution
cert-manager                  1.12.4   Secrets Manager
Containerd                    1.7.16   Container Runtime Manager
Fluentd - OpenSearch          1.16.2   Logging
Grafana                       9.5.3    Metrics
HAProxy                       3.0.2    Load Balancer
Istio                         1.18.2   Service Mesh
Jaeger                        1.60.0   Tracing
Kyverno                       1.12.5   Policy Management
MetalLB                       0.14.4   External IP
Oracle OpenSearch             2.11.0   Logging
Oracle OpenSearch Dashboard   2.11.0   Logging
Prometheus                    2.52.0   Metrics
Prometheus Operator           0.76.0   Metrics
Velero                        1.12.0   Backup and Restore
elastic-curator               5.5.4    Logging
elastic-exporter              1.1.0    Logging
elastic-master                7.9.3    Logging
Logs                          3.1.0    Logging
prometheus-kube-state-metric  1.9.7    Metrics
prometheus-node-exporter      1.0.1    Metrics
metrics-server                0.3.6    Metric Server
occne-snmp-notifier           1.2.1    Alerting (SNMP traps)
tracer                        1.22.0   Tracing

Important:

If you are using NRF with BSF, install it before proceeding with the BSF installation. BSF 24.3.0 supports NRF 24.3.x.

2.1.2 Environment Setup Requirements

This section describes the environment setup requirements for installing BSF.

2.1.2.1 Client Machine Requirement

This section describes the requirements for the client machine, that is, the machine used to run deployment commands.

The client machine must have:
  • the Helm repository configured on the client.
  • network access to the Helm repository and Docker image repository.
  • network access to the Kubernetes cluster.
  • the required environment settings to run the kubectl, podman, and docker commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
  • the Helm client installed with the push plugin (see the sketch after this list). The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster.
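
If a push plugin is not already installed, one common option is the ChartMuseum cm-push plugin. The following is an illustrative sketch only; the plugin choice and internet access from the client are assumptions, not requirements of this guide:

helm plugin install https://github.com/chartmuseum/helm-push
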
2.1.2.2 Network Access Requirement

The Kubernetes cluster hosts must have network access to the following repositories:

  • Local Helm Repository: It contains the BSF Helm charts.
    To check if the Kubernetes cluster hosts have network access to the local Helm repository, run the following command (see also the chart lookup sketch after this list):
    helm repo update
  • Local Docker Image Repository: It contains the BSF Docker images.
    To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image tag using one of the following commands:
    docker pull <docker-repo>/<image-name>:<image-tag>
    podman pull <docker-repo>/<image-name>:<image-tag>
    Where:
    • <docker-repo> is the IP address or host name of the Docker or Podman repository.
    • <image-name> is the Docker image name.
    • <image-tag> is the tag assigned to the Docker image used for the BSF pod.

    For Example:

    docker pull CUSTOMER_REPO/oc-app-info:24.3.4

    For CNE 1.8.0 and later versions, use the following command:

    podman pull <docker-repo>/<image-name>:<image-tag>

    For Example:

    podman pull CUSTOMER_REPO/oc-app-info:24.3.4
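
    To additionally confirm that the BSF charts are visible from the client, a chart search can be run. The repository and chart names below are illustrative assumptions:

    helm search repo <helm-repo-name>/ocbsf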

Note:

Run the kubectl and Helm commands on a system based on the deployment or infrastructure. For instance, you can run these commands on a client machine such as a VM, server, or local desktop.

2.1.2.3 Server or Space Requirement

For information about server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade and Fault Recovery Guide.

2.1.2.4 CNE Requirement

This section is applicable only if you are installing BSF on Cloud Native Environment (CNE). BSF supports CNE 24.3.x, 24.2.x, and 24.1.x.

To check the CNE version, run the following command:

echo $OCCNE_VERSION

For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade, and Fault Recovery Guide.

2.1.2.5 cnDBTier Requirement

BSF supports cnDBTier 24.3.x, 24.2.x, and 24.1.x. cnDBTier must be configured and running before installing BSF. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.

2.1.2.6 OSO Requirement

BSF supports Operations Services Overlay (OSO) 24.3.x, 24.2.x, and 24.1.x for common operation services (Prometheus and components such as Alertmanager, Pushgateway) on a Kubernetes cluster, which does not have these common services. For more information on installation procedure, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation, Upgrade, and Fault Recovery Guide.

2.1.2.7 CNC Console Requirements

BSF supports CNC Console (CNCC) 24.3.x.

For more information about CNCC, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Cloud Native Configuration Console User Guide.

2.1.2.8 OCCM Requirements

BSF supports OCCM 24.3.x. To support automated certificate lifecycle management, BSF integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in BSF, see the Support for Automated Certificate Lifecycle Management section in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

For more information about OCCM, see the following guides:

  • Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
  • Oracle Communications Cloud Native Core, Certificate Manager User Guide

2.1.3 Resource Requirement

This section lists the resource requirements to install and run BSF.

Note:

The performance and capacity of the BSF system may vary based on the call model, Feature/Interface configuration, and underlying CNE and hardware environment.
2.1.3.1 BSF Services

The following table lists the resource requirements for BSF services:

Table 2-3 BSF Services

Service                 CPU (Min/Max)  Memory in Gi (Min/Max)  Replicas (Min/Max)  Count  Ephemeral Storage (Min/Max), if enabled
bsf-management-service  3 / 4          1 / 4                   2 / 8               1      78.1Mi / 4Gi
To modify the default replica values for any BSF microservice, add the following parameters, along with CPU and memory, under the required service group in the ocbsf_custom_values_24.3.0.yaml file:

minReplicas: 1
maxReplicas: 1
For example, to update the default values for Ingress gateway or Egress gateway, add the parameters under ingress-gateway or egress-gateway group:

ingress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1
egress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1

Note:

It is recommended not to alter the standard resources mentioned above. Increasing or decreasing the CPU or memory can result in unpredictable behavior of the pods. Contact the My Oracle Support (MOS) team for the Min Replicas and Max Replicas count values.
2.1.3.2 Upgrade

The following is the resource requirement for upgrading BSF.

Table 2-4 Upgrade

Service                   CPU (Min/Max)  Memory in Gi (Min/Max)  Replicas (Max)
Alternate Route Service   1 / 2          1 / 2                   1
bsf-management-service    1 / 2          1 / 2                   1
Diameter Gateway          1 / 2          1 / 2                   1
Egress Gateway            1 / 2          1 / 2                   1
Ingress Gateway           1 / 2          1 / 2                   1
NRF Client NF Management  1 / 2          1 / 2                   1
Perf-Info                 1 / 2          1 / 2                   1
App-info                  1 / 2          1 / 2                   1
Query Service             1 / 2          1 / 2                   1
CM Service                1 / 2          1 / 2                   1
Config Server             1 / 2          1 / 2                   1
Audit Service             1 / 2          1 / 2                   1
2.1.3.3 Common Services Container

Table 2-5 Common Services Container

Service                   CPU (Min/Max)  Memory in Gi (Min/Max)  Replicas (Min/Max)  Count  Ephemeral Storage (Min/Max), if enabled
Alternate Route Service   1 / 2          2 / 4                   2 / 5               1      78.1Mi / 4Gi
Diameter Gateway          3 / 4          0.5 / 2                 2 / see note        1      78.1Mi / 2Gi
Egress Gateway            3 / 4          4 / 6                   2 / 5               2      78.1Mi / 6Gi
Ingress Gateway           3 / 4          4 / 6                   2 / 5               2      78.1Mi / 6Gi
NRF Client NF Management  1 / 1          1 / 1                   NA / NA             2      78.1Mi / 1Gi
Perf-Info                 3 / 4          0.5 / 1                 NA / NA             1      78.1Mi / 1Gi
App-info                  1 / 1          0.5 / 1                 1 / 2               1      78.1Mi / 1Gi
Query Service             1 / 2          1 / 1                   1 / 2               1      78.1Mi / 1Gi
CM Service                2 / 4          0.5 / 2                 NA / NA             2      78.1Mi / 2Gi
Config Server             2 / 4          0.5 / 2                 1 / 2               1      78.1Mi / 2Gi
Audit Service             1 / 2          1 / 1                   2 / 8               1      78.1Mi / 1Gi

Note:

The maximum replica count per service (for example, for Diameter Gateway) should be set based on the required TPS and other dimensioning factors.

You must take upgrade resources into account during dimensioning. Default upgrade resource requirements are 25% above the maximum replica count, rounded up to the next integer, that is, additional pods = ceil(0.25 x max replicas). For example, if a service has a maximum replica count of 8, the 25% upgrade overhead results in additional resources equivalent to 2 pods. If the maximum replica count is 1, one additional pod is required (0.25 rounds up to 1).

2.2 Installation Sequence

This section describes preinstallation, installation, and postinstallation tasks for BSF.

2.2.1 Preinstallation Tasks

Before installing BSF, perform the tasks described in this section.

2.2.1.1 Verifying and Creating Namespace

This section explains how to verify and create a namespace in the system.

Note:

This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.

To verify and create a namespace:

  1. Run the following command to verify if the required namespace already exists in the system:
    kubectl get namespaces

    In the output of the above command, if the namespace exists, continue with Creating Service Account, Role, and RoleBinding.

  2. If the required namespace is unavailable, create the namespace using the following command:
    kubectl create namespace <required namespace>

    Where, <required namespace> is the name of the namespace.

    For example:

    kubectl create namespace ocbsf

    Sample output:

    namespace/ocbsf created

Naming Convention for Namespaces

The namespace must meet the following requirements. The namespace should:
  • start and end with an alphanumeric character
  • contain 63 characters or less
  • contain only alphanumeric characters or '-'

Note:

It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
2.2.1.2 Creating Service Account, Role, and RoleBinding

This section is optional. It describes how to manually create a service account, role, and rolebinding resources.

Note:

The secret(s) should exist in the same namespace where BSF is deployed. This helps to bind the Kubernetes role with the given service account.

To create a service account, role, and rolebinding:
  1. Create a Global Service Account.

    Create a YAML file bsf-sample-serviceaccount-template.yaml using the following sample code:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <helm-release>-serviceaccount
      namespace: <namespace>

    Where,

    <helm-release> is a name provided to identify the Helm deployment.

    <namespace> is a name provided to identify the Kubernetes namespace of BSF. All the BSF microservices are deployed in this Kubernetes namespace.

  2. Define role permissions using roles for the BSF namespace.

    Create a YAML file bsf-sample-role-template.yaml using the following sample code:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: <helm-release>-role
      namespace: <namespace>
    rules:
       - apiGroups:
           - ""
         resources:
           - services
           - configmaps
           - pods
           - secrets
           - endpoints
           - nodes
           - events
           - persistentvolumeclaims
         verbs:
           - get
           - list
           - watch
       - apiGroups:
           - apps
         resources:
           - deployments
           - statefulsets
         verbs:
           - get
           - watch
           - list
       - apiGroups:
           - autoscaling
         resources:
           - horizontalpodautoscalers
         verbs:
           - get
           - watch
           - list
  3. To bind the role defined in the bsf-sample-role-template.yaml file with the service account, create a bsf-sample-rolebinding-template.yaml file using the following sample code:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: <helm-release>-rolebinding
      namespace: <namespace>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: <helm-release>-role
    subjects:
    - kind: ServiceAccount                                
      name:  <helm-release>-serviceaccount
      namespace: <namespace>

    Note:

    If you are installing BSF 22.1.0 using CNE 22.2.0 or later versions, change the apiVersion from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1.

  4. Run the following commands to create resources:

    kubectl -n <namespace> create -f bsf-sample-serviceaccount-template.yaml;
    kubectl -n <namespace> create -f bsf-sample-role-template.yaml;
    kubectl -n <namespace> create -f bsf-sample-rolebinding-template.yaml
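
    Optionally, confirm that the resources were created. This check is a sketch only; the resource names follow the templates above:

    kubectl -n <namespace> get serviceaccount,role,rolebinding | grep <helm-release>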
    

Note:

Once the global service account is added, you must set globalServiceAccountName in the ocbsf_custom_values_24.3.0.yaml file. Otherwise, the installation may fail while creating and deleting the Custom Resource Definition (CRD).
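
A minimal sketch of the corresponding setting, assuming globalServiceAccountName sits under the global section of the custom values file:

global:
  globalServiceAccountName: <helm-release>-serviceaccount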

Note:

PodSecurityPolicy kind is required for Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.


2.2.1.3 Configuring cnDBTier

With cnDBTier, BSF facilitates automatic user creation through its pre-install hook. However, ensure that there is a privileged user on the NDB cluster with privileges similar to the root user. You must have the necessary permissions to allow connections from remote hosts.

Single Site Deployment

Perform the following steps on each of the SQL nodes.
  1. Log in to MySQL on each of the SQL nodes of cnDBTier and verify that the privileged user allows remote connections:

    mysql> SELECT host FROM mysql.user WHERE User='<privileged username>';
    +------+
    | host |
    +------+
    | %    |
    +------+
    1 row in set (0.00 sec)
  2. If you do not see '%' in the output of the above query, modify this field to allow remote connections for the privileged user.

    mysql> UPDATE mysql.user SET host='%' WHERE User='<privileged username>';
    Query OK, 0 rows affected (0.00 sec)
    Rows matched: 1  Changed: 0  Warnings: 0
    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.06 sec)

    Note:

    Perform this step on each SQL node.

Multisite Deployment

To configure cnDBTier in case of multisite deployment:
  1. Update the mysqld configuration in the cnDBTier custom-values.yaml file before installing or upgrading BSF.

    global:
      ndbconfigurations:
        api:
          auto_increment_increment: 3
          auto_increment_offset: 1

    Note:

    • Set the auto_increment_increment parameter to the number of sites. For example, if the number of sites is 2, set its value to 2; if the number of sites is 3, set its value to 3.
    • Set the auto_increment_offset parameter to the site ID. For example, the site ID for Site 1 is 1, for Site 2 is 2, for Site 3 is 3, and so on. An illustration follows this procedure.
  2. If a fresh installation or an upgrade of BSF on cnDBTier is not planned, run the following command to edit the mysqldconfig configmap on all the cnDBTier sites:

    kubectl edit configmap mysqldconfig -n <db-site-namespace>

    For example:

    kubectl edit configmap mysqldconfig -n site1

    Note:

    Update the auto_increment_increment and auto_increment_offset values as mentioned in the previous step for all sites.
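
For illustration, in a three-site deployment, the mysqld configuration for Site 2 carries the same increment but its own offset. The values below are an example only, not mandated settings:

    global:
      ndbconfigurations:
        api:
          auto_increment_increment: 3
          auto_increment_offset: 2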

2.2.1.4 Configuring Multiple Site Deployment

In a multiple site deployment of BSF, there is a single subscriber database shared by all sites, and a separate configuration database for each site, as each site maintains its own configuration. To have different configuration databases and the same subscriber database, you need to create the secrets accordingly. For more information about creating secrets, see Configuring Kubernetes Secret for Accessing Database.

To configure multiple site deployment:
  1. Configure nfInstanceId under the global section of the ocbsf_custom_values_24.3.0.yaml file differently for each BSF site deployed.

    Note:

    Ensure that the nfInstanceId configuration in the global section is the same as that in the appProfile section of nrf-client.

    global: 
      # Unique ID to register to NRF, Should be configured differently on multi site deployments for each BSF
      nfInstanceId: &nfInsId 5a7bd676-ceeb-44bb-95e0-f6a55a328b03
      
    nrf-client:
      configmapApplicationConfig:
        profile: |-
          appProfiles=[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocbsf-ingressgateway.mybsf.svc.cluster.local","nfType":"BSF","allowedNfTypes":["NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocbsf-ingressgateway.mybsf.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nbsf-group-id-map","priority":10,"capacity":500}],"udrInfo":{"groupId":"bsf-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.bsfset.5gc.mnc012.mcc345"]}]
    
  2. Configure fullnameOverride under the config-server section to <helm-release-name>-config-server. It should be different for each site deployed.

    config-server:
      fullnameOverride: ocudr1-config-server
  3. Configure fullnameOverride under the appinfo section to <helm-release-name>-app-info. It should be different for each site deployed.

    appinfo:
      fullnameOverride: ocudr1-app-info
  4. For cnDBTier configurations in multiple site deployment, see Configuring cnDBTier.
2.2.1.5 Creating Service Account, Role, and Role Binding for Helm Test

This section describes the procedure to create service account, role, and role binding resources for Helm Test.

Important:

The steps described in this section are optional, and users may skip them in any of the following scenarios:
  • The user wants service accounts to be created automatically at the time of deploying BSF.
  • A global service account with associated role and role bindings is already configured, or the user has an in-house procedure to create service accounts.

Creating Global Service Account

To create the global service account, create a YAML file bsf-sample-helmtestserviceaccount-template.yaml using the following sample code:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-helmtestserviceaccount
  namespace: <namespace>

Where,

<helm-release> is a name provided to identify the helm deployment.

<namespace> is a name provided to identify the Kubernetes namespace of BSF. All the BSF microservices are deployed in this Kubernetes namespace.

Define Role Permissions

To define permissions using roles for the BSF namespace, create a YAML file bsf-sample-helmtestrole-template.yaml using the following sample code:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-helmtest-role
  namespace: <namespace>
rules:
   - apiGroups:
       - ""
     resources:
       - pods
       - persistentvolumeclaims
       - services
       - endpoints
       - configmaps
       - events
       - secrets
       - serviceaccounts
     verbs:
       - list
       - get
       - watch
   - apiGroups:
       - apps
     resources:
       - deployments
       - statefulsets
     verbs:
       - get
       - watch
       - list
   - apiGroups:
       - autoscaling
     resources:
       - horizontalpodautoscalers
     verbs:
       - get
       - watch
       - list
   - apiGroups:
       - policy
     resources:
       - poddisruptionbudgets
     verbs:
       - get
       - watch
       - list
   - apiGroups:
       - rbac.authorization.k8s.io
     resources:
       - roles
       - rolebindings
     verbs:
       - get
       - watch
       - list

Creating Helm Test Role Binding Template

To bind the role defined in the bsf-sample-helmtestrole-template.yaml file with the service account, create a bsf-sample-helmtestrolebinding-template.yaml file using the following sample code:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: <helm-release>-helmtest-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-helmtest-role
subjects:
- kind: ServiceAccount                                
  name:  <helm-release>-helmtestserviceaccount
  namespace: <namespace>

Creating resources

Run the following commands to create resources:

kubectl -n <namespace> create -f bsf-sample-helmtestserviceaccount-template.yaml;
kubectl -n <namespace> create -f bsf-sample-helmtestrole-template.yaml;
kubectl -n <namespace> create -f bsf-sample-helmtestrolebinding-template.yaml

Note:

Once the global service account is added, you must set helmTestServiceAccountName in the ocbsf_custom_values_24.3.0.yaml file. Otherwise, the installation may fail while creating and deleting the Custom Resource Definition (CRD).
2.2.1.6 Configuring Database, Creating Users, and Granting Permissions

This section explains how database administrators can create users and database in a single and multisite deployment.

BSF has five databases (Provisional, State, Release, Leaderpod, and NRF Client) and two users (Application and Privileged).

Note:

  • Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is already up and replication channels are enabled.
  • While performing a fresh installation, if BSF release is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling BSF.

BSF Databases

For BSF applications, five types of databases are required:

  1. Provisional Database: Provisional Database contains configuration information. The same configuration must be done on each site by the operator. Both Privileged User and Application User have access to this database. In case of georedundant deployments, each site must have a unique Provisional Database. BSF sites can access only the information in their unique Provisional Database.
    For example:
    • For Site 1: ocbsf_config_server_site1
    • For Site 2: ocbsf_config_server_site2
    • For Site 3: ocbsf_config_server_site3
  2. State Database: This database maintains the running state of BSF sites and has information of subscriptions, pending notification triggers, and availability data. It is replicated and the same configuration is maintained by all BSF georedundant sites. Both Privileged User and Application User have access to this database.
  3. Release Database: This database maintains release version state, and it is used during upgrade and rollback scenarios. Only Privileged User has access to this database.
  4. Leaderpod Database: This database is used to store leader and follower information when a Pod Disruption Budget (PDB) is enabled for microservices that require a single pod to be up across all instances. The configuration of this database must be done on each site. In case of georedundant deployments, each site must have a unique Leaderpod database.
    For example:
    • For Site 1: ocbsf_leaderPodDb_site1
    • For Site 2: ocbsf_leaderPodDb_site2
    • For Site 3: ocbsf_leaderPodDb_site3

    Note:

    This database is used only when nrf-client-nfmanagement.enablePDBSupport is set to true in the ocbsf_custom_values_24.3.0.yaml file (see the snippet after this list).
  5. NRF Client Database: This database is used to support NRF Client features. Only Privileged User has access to this database and it is used only when the caching feature is enabled. In case of georedundant deployments, each site must have a unique NRF Client database and its configuration must be done on each site.

    For example:

    • For Site 1: ocbsf_nrf_client_site1
    • For Site 2: ocbsf_nrf_client_site2
    • For Site 3: ocbsf_nrf_client_site3
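
A minimal sketch of the corresponding setting in the custom values file, assuming the parameter path named in the Leaderpod Database note above:

    nrf-client-nfmanagement:
      enablePDBSupport: true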

BSF Users

There are two types of BSF database users with different set of permissions:

  1. Privileged User: This user has a complete set of permissions. This user can perform create, alter, or drop operations on tables to perform install, upgrade, rollback, or delete operations.

    Note:

    In examples given in this document, Privileged User's username is 'bsfprivilegedusr' and password is 'bsfprivilegedpasswd'.
  2. Application User: This user has a limited set of permissions and is used by BSF application to handle service operations. This user can insert, update, get, or remove the records. This user will not be able to create, alter, or drop the database or tables.

    Note:

    In examples given in this document, Application User's username is 'bsfusr' and password is 'bsfpasswd'.

Default Databases

BSF microservices use the MySQL database to store the configuration and run time data.

Before deploying BSF, make sure that the MySQL user and databases are created.

Each microservice has a default database assigned to it.

The following table lists the default database names and applicable deployment modes for various databases that need to be configured while deploying BSF.

Table 2-6 Default Database Names for BSF Microservices

Service Name   Default Database Name                Database Type
Config Server  ocbsf_config_server                  Provisional
CM Service     ocbsf_commonconfig, ocbsf_cmservice  Provisional
BSF Service    ocpm_bsf                             State
Audit Service  ocbsf_audit_service                  Provisional
NRF Client     ocbsf_nrf_client, ocbsf_leaderPodDb  Provisional

In addition, create the ocbsf_release (default name) database to store and manipulate the release versions of BSF services during the install, upgrade, and rollback procedure. This database name is specified in the releaseDbName parameter in the ocbsf_custom_values_24.3.0.yaml file.

2.2.1.6.1 Single Site

This section explains how a database administrator can create the databases and users for a single site deployment.

Configuring Database

Perform the following steps to configure MySQL database for different microservices:
  1. Log in to the server where the SSH keys are stored and have permission to access the SQL nodes of NDB cluster.
  2. Connect to the SQL nodes.
  3. Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per conditions explained in the next step.

    Example:

    mysql -h 127.0.0.1 -uroot -p

    Note:

    This command varies between systems depending on the path of the MySQL binary and the credentials of the root user. After running this command, enter the password for the user specified in the command.

  4. Run the following command to check if both the BSF users already exist:

    SELECT User FROM mysql.user;
    If the users already exist, go to the next step. Otherwise, create the required new user or users by following the steps below:
    • Run the following command to create a new Privileged User:
      CREATE USER '<BSF Privileged-User Name>'@'%' IDENTIFIED BY '<BSF Privileged-User Password>';

      Example:

      CREATE USER 'bsfprivilegedusr'@'%' IDENTIFIED BY 'bsfprivilegedpasswd';
    • Run the following command to create a new Application User:
      CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<APPLICATION Password>';

      Example:

      CREATE USER 'bsfusr'@'%' IDENTIFIED BY 'bsfpasswd';
  5. Run the following command to check whether any of the BSF databases already exists:

    show databases;
    1. If any of the previously configured databases are already present, remove them. Otherwise, skip this step.

      Run the following command to remove a preconfigured BSF database:

      DROP DATABASE if exists <DB Name>;

      Example:

      DROP DATABASE if exists ocbsf_audit_service;
    2. Run the following command to create new BSF database if it does not exist, or after dropping an existing database:

      CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;

      For example:

      CREATE DATABASE IF NOT EXISTS ocbsf_config_server CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_release CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_cmservice CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_audit_service CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocpm_bsf CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb CHARACTER SET utf8;
      

      Note:

      Ensure that the database names you use while creating the databases are the same as those used in the global parameters of the ocbsf_custom_values_24.3.0.yaml file.

      The following is an example of the BSF database names configured in the ocbsf_custom_values_24.3.0.yaml file:
      
      global:
        releaseDbName: &releaseDbName 'ocbsf_release'
        nrfClientDbName: 'ocbsf_nrf_client'
      bsf-management-service:
        envMysqlDatabase: 'ocpm_bsf'
      config-server:
        envMysqlDatabase: *configServerDB
      cm-service:
        envMysqlDatabase: ocbsf_cmservice
      nrf-client-nfmanagement:
        dbConfig:
          leaderPodDbName: ocbsf_leaderPodDb
      audit-service:
        envMysqlDatabase: ocbsf_audit_service
        

      BSF follows database best practices by keeping the idle connection timeout of client applications lower than the idle connection timeout of the database server.

      The default idle connection timeout value of BSF applications is 540 seconds (9 minutes). This value must remain unchanged.

Creating Users and Granting Permissions

Note:

Creating the database beforehand is optional if the grant is scoped to all databases, that is, when the database name is not specified in the GRANT command (see the example below).
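
For illustration, a grant scoped to all databases omits the database name. The following sketch reuses the privileged-user name from this document's examples; the privilege list is an example only and should match your security policy:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON *.* TO 'bsfprivilegedusr'@'%';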

To create users and grant permissions, perform the following steps:
  1. Run the following set of commands to grant all the necessary permissions:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<BSF Privileged-User Name>`@`%`;

    In the following example, "bsfprivilegedusr" is used as username, "bsfprivilegedpasswd" is used as password. Here, all permissions are being granted to "bsfprivilegedusr".

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_release.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_bsf.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON ocbsf_nrf_client.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb.* TO 'bsfprivilegedusr'@'%';
    FLUSH PRIVILEGES;
  2. Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
    GRANT NDB_STORED_USER ON *.* TO 'bsfprivilegedusr'@'%';
  3. Grant all the necessary permissions by running the following set of commands.

    Note:

    The database name is specified in the envMysqlDatabase parameter for respective services in the ocbsf_custom_values_24.3.0.yaml file.

    It is recommended to use a unique database name when there are multiple instances of BSF deployed in the network as they share the same data tier (MySQL cluster).

    To grant permissions:

    GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';

    In the following example, "bsfusr" is used as username, "bsfpasswd" is used as password. Here, all permissions are being granted to "bsfusr".

    GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_config_server.* TO 'bsfusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_release.* TO 'bsfusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ocpm_bsf.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_commonconfig.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_audit_service.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_nrf_client.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_cmservice.* TO 'bsfusr'@'%';
    FLUSH PRIVILEGES;
  4. Run the following command to grant NDB_STORED_USER permissions to the Application User:
    GRANT NDB_STORED_USER ON *.* TO 'bsfusr'@'%';
  5. Run the following commands to verify that the privileged or application users have all the required permissions:

    show grants for username;

    where username is the name of the privileged or application user.

    Example:

    show grants for bsfprivilegedusr;
    show grants for bsfusr;
  6. Run the following command to flush privileges:

    FLUSH PRIVILEGES;
  7. Exit from the database and logout from the MySQL node.

2.2.1.6.2 Multisite

This section explains how a database administrator can create the databases and users for a multisite deployment.

For BSF georedundant deployments, the database names listed below must be unique for each site. For the remaining databases, the database name must be the same across all the sites.

It is recommended to use unique database names when multiple instances of BSF use and share a single cnDBTier (MySQL cluster) in the network. To maintain unique database names for all the NF instances in the network, a good practice is to add the deployment name of the BSF instance as a prefix or suffix to the database name. However, you can use any prefix or suffix to create the unique database name. For example, if the BSF deployment nfInstance value is "site1" then the BSF Configuration service database can be named as "ocbsf_config_server_site1".

Note:

Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.

Table 2-7 BSF Unique Databases names for two site and three site deployment

Two Site Database Names    Three Site Database Names
ocbsf_config_server_site1  ocbsf_config_server_site1
ocbsf_config_server_site2  ocbsf_config_server_site2
                           ocbsf_config_server_site3
ocbsf_cmservice_site1      ocbsf_cmservice_site1
ocbsf_cmservice_site2      ocbsf_cmservice_site2
                           ocbsf_cmservice_site3
ocbsf_commonconfig_site1   ocbsf_commonconfig_site1
ocbsf_commonconfig_site2   ocbsf_commonconfig_site2
                           ocbsf_commonconfig_site3
ocbsf_leaderPodDb_site1    ocbsf_leaderPodDb_site1
ocbsf_leaderPodDb_site2    ocbsf_leaderPodDb_site2
                           ocbsf_leaderPodDb_site3
ocbsf_overload_site1       ocbsf_overload_site1
ocbsf_overload_site2       ocbsf_overload_site2
                           ocbsf_overload_site3
ocbsf_audit_service_site1  ocbsf_audit_service_site1
ocbsf_audit_service_site2  ocbsf_audit_service_site2
                           ocbsf_audit_service_site3
ocbsf_nrf_client_site1     ocbsf_nrf_client_site1
ocbsf_nrf_client_site2     ocbsf_nrf_client_site2
                           ocbsf_nrf_client_site3

Configuring Database

Perform the following steps to configure MySQL database for different microservices:
  1. Log in to the server where the SSH keys are stored and have permission to access the SQL nodes of NDB cluster.

  2. Connect to the SQL nodes.

  3. Log in to the database either as a root user or as a user who has the permission to create users as per conditions explained in the next step.

    Example:

    mysql -h 127.0.0.1 -uroot -p

    Note:

    This command varies between systems depending on the path of the MySQL binary and the credentials of the root user. After running this command, enter the password for the user specified in the command.
  4. Run the following command to check if both the BSF users already exist:
    SELECT User FROM mysql.user;

    If the users already exist, go to the next step. Otherwise, create the respective new user or users by following the steps below:

    • Run the following command to create a new Privileged User:
      CREATE USER '<BSF Privileged-User Name>'@'%' IDENTIFIED BY '<BSF Privileged-User Password>';

      Example:

      CREATE USER 'bsfprivilegedusr'@'%' IDENTIFIED BY 'bsfprivilegedpasswd';
    • Run the following command to create a new Application User:
      CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<APPLICATION Password>';

      Example:

      CREATE USER 'bsfusr'@'%' IDENTIFIED BY 'bsfpasswd';

    Note:

    You must create both the users on all the SQL nodes for all georedundant sites.
  5. Run the following command to check whether any of the BSF databases already exists:

    show databases;
    1. If any of the previously configured databases are already present, remove them. Otherwise, skip this step.

      Caution:

      In case you have georedundant sites configured, removal of the database from any one of the SQL nodes of any cluster will remove the database from all georedundant sites.

      Run the following command to remove a preconfigured BSF database:

      DROP DATABASE if exists <DB Name>;

      Example:

      DROP DATABASE if exists ocbsf_audit_service;
    2. Run the following command to create new BSF database if it does not exist, or after dropping an existing database:

      CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;

      For example, the following is a sample illustration of creating all the databases required for BSF installation on site1.

      CREATE DATABASE IF NOT EXISTS ocbsf_config_server_site1 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig_site1 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_cmservice_site1 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_audit_service_site1 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client_site1 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb_site1 CHARACTER SET utf8;
      

      For example, the following is a sample illustration of creating all the databases required for BSF installation on site2.

      CREATE DATABASE IF NOT EXISTS ocbsf_config_server_site2 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig_site2 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_cmservice_site2 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_audit_service_site2 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client_site2 CHARACTER SET utf8;
      CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb_site2 CHARACTER SET utf8;
      

      Note:

      Ensure that the database names you use while creating the databases are the same as those used in the global parameters of the ocbsf_custom_values_24.3.0.yaml file.

      BSF follows database best practices by keeping the idle connection timeout of client applications lower than the idle connection timeout of the database server.

      The default idle connection timeout value of BSF applications is 540 seconds (9 minutes). This value must remain unchanged.

Granting Permissions to Users on the Database

Note:

  • Run this step on all the SQL nodes for each BSF standalone site in a georedundant deployment.
  • Creating the database beforehand is optional if the grant is scoped to all databases, that is, when the database name is not specified in the GRANT command.
  1. Run the following command to grant Privileged User permission on all BSF Databases:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>_<site_name>.* TO `<BSF Privileged-User Name>`@`%`;

    In the following example, "bsfprivilegedusr" is used as username, "bsfprivilegedpasswd" is used as password. Here, all permissions are being granted to "bsfprivilegedusr".

    Example for site1:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server_site1.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig_site1.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service_site1.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice_site1.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON ocbsf_nrf_client_site1.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb_site1.* TO 'bsfprivilegedusr'@'%';
    FLUSH PRIVILEGES;

    Example for site2:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server_site2.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig_site2.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service_site2.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice_site2.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON ocbsf_nrf_client_site2.* TO 'bsfprivilegedusr'@'%';
    GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb_site2.* TO 'bsfprivilegedusr'@'%';
    FLUSH PRIVILEGES;
  2. Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
    GRANT NDB_STORED_USER ON *.* TO 'bsfprivilegedusr'@'%';
  3. Run the following command to grant Application User permission on all BSF Databases:

    GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>_<site_name>.* TO '<Application User Name>'@'%'; 

    In the following example, "bsfusr" is used as username, "bsfpasswd" is used as password. Here, all permissions are being granted to "bsfusr".

    For example in BSF site1:

    GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_config_server_site1.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_commonconfig_site1.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_audit_service_site1.* TO 'bsfusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON ocbsf_nrf_client_site1.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_cmservice_site1.* TO 'bsfusr'@'%';
    FLUSH PRIVILEGES;

    Example for site2:

    GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_config_server_site2.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_commonconfig_site2.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_audit_service_site2.* TO 'bsfusr'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON ocbsf_nrf_client_site2.* TO 'bsfusr'@'%';
    GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_cmservice_site2.* TO 'bsfusr'@'%';
    FLUSH PRIVILEGES;
  4. Run the following command to grant NDB_STORED_USER permissions to the Application User:
    GRANT NDB_STORED_USER ON *.* TO 'bsfusr'@'%';
  5. Run the following command to verify that the privileged or application users have all the required permissions:

    show grants for username;

    where username is the name of the privileged or application user.

    For example:
    show grants for bsfprivilegedusr;
    show grants for bsfusr;
  6. Run the following command to flush privileges:

    FLUSH PRIVILEGES;
  7. Exit from MySQL prompt and SQL nodes.
2.2.1.7 Configuring Kubernetes Secret for Accessing Database

This section explains how to configure Kubernetes secrets for accessing BSF database.

2.2.1.7.1 Creating and Updating Secret for Privileged Database User

This section explains how to create and update Kubernetes secret for Privileged User to access the database.

  1. Run the following command to create Kubernetes secret:
    kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged Mysql database username> --from-literal=mysql-password=<Privileged Mysql User database password> -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the Privileged User.

    <Privileged MySQL database username> is the username of the Privileged User.

    <Privileged MySQL User database password> is the password of the Privileged User.

    <Namespace> is the namespace of BSF deployment.

    Note:

    Note down the command used during the creation of Kubernetes secret. This command is used for updating the secrets in future.

    For example:

    kubectl create secret generic ocbsf-privileged-db-pass --from-literal=mysql-username=bsfprivilegedusr --from-literal=mysql-password=bsfprivilegedpasswd -n ocbsf
  2. Run the following command to verify the secret created:
    kubectl describe secret <Privileged User secret name> -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the database.

    <Namespace> is the namespace of BSF deployment.

    For example:
    kubectl describe secret ocbsf-privileged-db-pass -n ocbsf

    Sample output:

    Name:  ocbsf-privileged-db-pass
    Namespace:    ocbsf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  17 bytes
    
  3. Update the command used in step 1 by appending the string "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of BSF deployment>". The updated command is as follows:
    kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>

    Where,

    <Privileged User secret name> is the secret name of the Privileged User.

    <Privileged MySQL database username> is the username of the Privileged User.

    <Privileged MySQL User database password> is the password of the Privileged User.

    <Namespace> is the namespace of BSF deployment.

  4. Run the updated command. The following message is displayed:

    secret/<Privileged User secret name> replaced

    Where,

    <Privileged User secret name> is the updated secret name of the Privileged User.

2.2.1.7.2 Creating and Updating Secret for Application Database User

This section explains how to create and update Kubernetes secret for application user to access the database.

  1. Run the following command to create Kubernetes secret:
    kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL Database Username> --from-literal=mysql-password=<Application MySQL User database password> -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the Application User.

    <Application MySQL database username> is the username of the Application User.

    <Application MySQL User database password> is the password of the Application User.

    <Namespace> is the namespace of BSF deployment.

    Note:

    Note down the command used during the creation of Kubernetes secret. This command is used for updating the secrets in future.

    For example:

    kubectl create secret generic ocbsf-db-pass --from-literal=mysql-username=bsfusr --from-literal=mysql-password=bsfpasswd -n ocbsf
  2. Run the following command to verify the secret created:
    kubectl describe secret <Application User secret name> -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the database.

    <Namespace> is the namespace of BSF deployment.

    For example:
    kubectl describe secret ocbsf-db-pass -n ocbsf

    Sample output:

    Name:  ocbsf-db-pass
    Namespace:    ocbsf
    Labels:       <none>
    Annotations:  <none>
    
    Type:  Opaque
    
    Data
    ====
    mysql-password:  10 bytes
    mysql-username:  17 bytes
    
  3. Update the command used in step 1 by appending the string "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of BSF deployment>". The updated command is as follows:
    kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>

    Where,

    <Application User secret name> is the secret name of the Application User.

    <Application MySQL database username> is the username of the Application User.

    <Application MySQL User database password> is the password of the Application User.

    <Namespace> is the namespace of BSF deployment.

  4. Run the updated command. The following message appears:

    secret/<Application User secret name> replaced

    Where,

    <Application User secret name> is the updated secret name of the Application User.

2.2.1.7.3 Creating Secret for Support of TLS in Diameter Gateway
This section explains how to create Kubernetes secret to store private key, public key, and trust chain certificates to support TLS in Diameter Gateway.
  1. Run the following command to create Kubernetes secret:
    $ kubectl create secret generic <TLS_SECRET_NAME> --from-file=<TLS_RSA_PRIVATE_KEY_FILENAME/TLS_ECDSA_PRIVATE_KEY_FILENAME> --from-file=<TLS_CA_BUNDLE_FILENAME> --from-file=<TLS_RSA_CERTIFICATE_FILENAME/TLS_ECDSA_CERTIFICATE_FILENAME> -n <Namespace of BSF deployment>

    For example:

    kubectl create secret generic dgw-tls-secret --from-file=dgw-key.pem --from-file=ca-cert.cer --from-file=dgw-cert.crt -n vega-ns6
    

    Where,

    dgw-key.pem is the private key of diam-gateway (generated using either RSA or ECDSA).

    dgw-cert.crt is the public key certificate of diam-gateway (generated using either RSA or ECDSA).

    ca-cert.cer is the trust chain certificate file, either an Intermediate CA or a Root CA.

    dgw-tls-secret is the default name of the secret.
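
    To verify the secret, the same check used for other secrets in this chapter applies:

    kubectl describe secret dgw-tls-secret -n <Namespace of BSF deployment>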

2.2.1.8 Configuring Secrets for Enabling HTTPS

This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Ingress Gateway.

This step is optional. It is required only when SSL settings need to be enabled on Ingress Gateway and Egress Gateway microservices of BSF.

2.2.1.8.1 Configuring HTTPS at Ingress Gateway

This section explains the steps to configure secrets for enabling HTTPS in Ingress Gateway. This procedure must be performed before deploying CNC BSF.

Note:

The passwords for TrustStore and KeyStore are stored in the respective password files mentioned below.
To create the Kubernetes secret for HTTPS, the following files are required:
  • ECDSA private key and CA-signed certificate of OCBSF, if initialAlgorithm is ES256
  • RSA private key and CA-signed certificate of OCBSF, if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • Trust chain certificate file, either an Intermediate CA or a Root CA

Note:

The creation process for private keys, certificates, and passwords is at the discretion of the user or operator; an illustrative sketch follows.
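
For illustration only, a self-signed RSA key and certificate can be generated with OpenSSL as follows. The file names match the examples in this section, while the subject CN and key size are assumptions; production deployments use CA-signed certificates per the operator's PKI:

# Generate an RSA private key (output PEM format depends on the OpenSSL version)
openssl genrsa -out rsa_private_key_pkcs1.pem 2048
# Generate a self-signed certificate from the key (for testing only)
openssl req -new -x509 -key rsa_private_key_pkcs1.pem -out ssl_rsa_certificate.crt -days 365 -subj "/CN=bsf.example.com"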
2.2.1.8.1.1 Creating Secrets for Enabling HTTPS in Ingress Gateway

This section provides the steps to create secrets for enabling HTTPS in ingress gateway. Perform this procedure before deploying BSF.

  1. Run the following command to create secret:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCBSF deployment>

    Note:

    Note down the command used during the creation of the secret. Use the command for updating the secrets in future.
    Example: The names used below are the same as those provided in the ocbsf_custom_values_24.3.0.yaml file in the BSF deployment.
    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocbsf

    Note:

    It is recommended to use the same secret name as mentioned in the example. If you change <ocingress-secret-name>, update the k8SecretName parameter under the ingressgateway attributes section in the ocbsf_custom_values_24.3.0.yaml file.
  2. Run the following command to verify the secret created:
    $ kubectl describe secret <ocingress-secret-name> -n <Namespace of OCBSF deployment>
    Example:
    $ kubectl describe secret ocingress-secret -n ocbsf
2.2.1.8.1.2 Updating Secrets for Enabling HTTPS in Ingress Gateway

This section explains how to update the secrets.

  1. Copy the exact command used during the creation of the secret.
  2. Update the same command by appending the string "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of OCBSF deployment>".
    The updated command is as follows:
    $ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCBSF deployment> | kubectl replace -f - -n <Namespace of OCBSF deployment>

    Example:

    $ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocbsf | kubectl replace -f - -n ocbsf

    Note:

    The names used in the aforementioned command must be the same as the names provided in the ocbsf_custom_values_24.3.0.yaml file of the ocbsf-24.3.0 BSF deployment.
  3. Run the updated command.
    After the secret update is complete, the following message appears:
    secret/<ocingress-secret> replaced
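
Note:

On kubectl 1.18 and later, the boolean --dry-run flag is deprecated in favor of an explicit value. If your kubectl version rejects the bare flag, use --dry-run=client, which is equivalent. For example:
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run=client -o yaml -n ocbsf | kubectl replace -f - -n ocbsf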
2.2.1.8.1.3 Enabling HTTPS at Ingress Gateway
This step is optional. It is required only when SSL settings need to be enabled on the Ingress Gateway microservice of OCBSF.
  1. Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the ocbsf_custom_values_24.3.0.yaml file. For more information about the enableIncomingHttps parameter, see the global parameters section of the ocbsf_custom_values_24.3.0.yaml file.
  2. Configure the following details in the ssl section under ingressgateway attributes, in case you have changed the attributes while creating secret:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
    
    ingress-gateway:
      # ---- HTTPS Configuration - BEGIN ----
      enableIncomingHttps: false
    
      service:
        ssl:
          privateKey:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            rsa:
              fileName: rsa_private_key_pkcs1.pem
          certificate:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            rsa:
              fileName: ocegress.cer
          caBundle:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: caroot.cer
          keyStorePassword:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: key.txt
          trustStorePassword:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: trust.txt
  3. Save the ocbsf_custom_values_24.3.0.yaml file.
2.2.1.8.2 Configuring HTTPS at Egress Gateway

This section explains the steps to configure secrets for enabling HTTPS in Egress Gateway. This procedure must be performed before deploying OCBSF.

Note:

The passwords for TrustStore and KeyStore are stored in the respective password files listed below.
To create the Kubernetes secret for HTTPS, the following files are required:
  • ECDSA private key and CA-signed certificate of OCBSF, if initialAlgorithm is ES256
  • RSA private key and CA-signed certificate of OCBSF, if initialAlgorithm is RS256
  • TrustStore password file
  • KeyStore password file
  • Trust Chain Certificate file, either an Intermediate CA or Root CA

Note:

The creation of private keys, certificates, and passwords is at the discretion of the user or operator.
2.2.1.8.2.1 Creating Secrets for Enabling HTTPS in Egress Gateway

This section describes how to create the secret for HTTPS-related details. Perform this procedure before enabling HTTPS in OCBSF Egress Gateway.

  1. Run the following command to create the secret:
    $ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem>  --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCBSF deployment>

    Note:

    Note down the command used to create the secret. The same command is used when updating the secret in the future.

    Example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocbsf

    Note:

    It is recommended to use the same secret name as mentioned in the example. If you change <ocegress-secret-name>, update the k8SecretName parameter under the egressgateway attributes section in the ocbsf_custom_values_24.3.0.yaml file.
  2. Run the following command to verify that the secret is created:
    $ kubectl describe secret <ocegress-secret-name> -n <Namespace of OCBSF deployment>

    Example:

    $ kubectl describe secret ocegress-secret -n ocbsf
2.2.1.8.2.2 Updating Secrets for Enabling HTTPS in Egress Gateway

This section explains how to update the secret.

  1. Copy the exact command used to create the secret in the previous section.
  2. Update the command by appending the string "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of OCBSF deployment>".
    The updated command is as follows:
    kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCBSF deployment> | kubectl replace -f - -n <Namespace of OCBSF deployment>

    Example:

    $ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocbsf | kubectl replace -f - -n ocbsf

    Note:

    The names used in the aforementioned command must be the same as the names provided in the ocbsf_custom_values_24.3.0.yaml file in the OCBSF deployment.
  3. Run the updated command.
    After the secret update is complete, the following message appears:
    secret/<ocegress-secret> replaced
2.2.1.8.2.3 Enabling HTTPS at Egress Gateway
This step is optional. It is required only when SSL settings need to be enabled on the Egress Gateway microservice of OCBSF.
  1. Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the ocbsf_custom_values_24.3.0.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway section.
  2. Configure the following details in the ssl section under egressgateway attributes, in case you have changed the attributes while creating secret:
    • Kubernetes namespace
    • Kubernetes secret name holding the certificate details
    • Certificate information
      
    egress-gateway:
      #Enabling it for egress https requests
      enableOutgoingHttps: false
    
      service:
        ssl:
          privateKey:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            rsa:
              fileName: rsa_private_key_pkcs1.pem
            ecdsa:
              fileName: ssl_ecdsa_private_key.pem
          certificate:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            rsa:
              fileName: ocegress.cer
            ecdsa:
              fileName: ssl_ecdsa_certificate.crt
          caBundle:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: caroot.cer
          keyStorePassword:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: key.txt
          trustStorePassword:
            k8SecretName: ocbsf-gateway-secret
            k8NameSpace: ocbsf
            fileName: trust.txt
  3. Save the ocbsf_custom_values_24.3.0.yaml file.
2.2.1.9 Configuring Secret for Enabling Access Token Validation

This section explains how to configure a secret for enabling access token validation.

2.2.1.9.1 Generating KeyPairs for NRF Instances

Important:

Creating the private keys and certificates is at the user's discretion and is not in the scope of BSF. This section provides only sample commands to create KeyPairs.
Using the Openssl tool, you can generate KeyPairs for each of the NRF instances. The commands to generate the KeyPairs are as follows:

Note:

Here, it is assumed that there are only two NRF instances with the following instance IDs:
  • NRF Instance 1: 664b344e-7429-4c8f-a5d2-e7dfaaaba407
  • NRF Instance 2: 601aed2c-e314-46a7-a3e6-f18ca02faacc

Example Command to generate KeyPair for NRF Instance 1

Generate a 2048-bit RSA private key
 
openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)

openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output public key portion in PEM format (so Java can read it)

openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
 
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
 
Output the ECDSA private key in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file (ES256 requires the P-256/prime256v1 curve)
 
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
 
#Replace the placeholder "{nrfInstanceId}" with NRF Instance 1's UUID while running the command.
Example:
openssl req -x509 -new -out 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
The output is a set of Private Key and NRF Certificate similar to the following:

NRF1 (Private key: ecdsa_private_key.key, NRF Public Certificate: 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt)
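
Optionally, you can inspect the generated certificate and key with openssl before distributing them; this is an illustrative check, not part of the required procedure:

openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -noout -subject -dates
openssl ec -in ecdsa_private_key.key -noout -text

The first command prints the certificate subject and validity period; the second prints the key parameters, including the curve. Repeat the same checks for the NRF Instance 2 artifacts generated below.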

Example Command to generate KeyPair for NRF Instance 2

Generate a 2048-bit RSA private key
 
openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)

openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output public key portion in PEM format (so Java can read it)

openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
 
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
 
Output the ECDSA private key in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file
 
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
 
#Replace the placeholder "{nrfInstanceId}" with NRF Instance 2's UUID while running the command.
Example:
openssl req -x509 -new -out 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
The output is a set of Private Key and NRF Certificate similar to the following:

NRF2 (Private key: ecdsa_private_key.key, NRF Public Certificate: 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt)

2.2.1.9.2 Enabling and Configuring Access Token

To enable access token validation, configure both Helm-based and REST-based configurations on Ingress Gateway.

Configuration using Helm:

For Helm-based configuration, perform the following steps:

  1. Create a Namespace for Secrets. The namespace is used as an input to create Kubernetes secret for private keys and public certificates. Create a namespace using the following command:
    kubectl create namespace <required namespace>

    Where,

    <required namespace> is the name of the namespace.

    For example, the following command creates the namespace, ocbsf:

    kubectl create namespace ocbsf
  2. Create Kubernetes Secret for NRF Public Key. To create a secret using the Public keys of the NRF instances, run the following command:
    kubectl create secret generic <secret-name> --from-file=<filename.crt> -n <Namespace>

    Where,

    <secret-name> is the secret name.

    <Namespace> is the BSF namespace.

    <filename.crt> is the public key certificate; the secret can hold any number of certificates.

    For example:

    kubectl create secret generic nrfpublickeysecret --from-file=./664b344e-7429-4c8fa5d2-e7dfaaaba407_ES256.crt --from-file=./601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -n ocbsf

    Note:

    In the above command:
    • nrfpublickeysecret is the secret name
    • ocbsf is the namespace
    • the .crt files are the public key certificates (a verification command follows this procedure)
  3. Enable access token validation using Helm configuration by setting the Ingress Gateway parameter oauthValidatorEnabled to true.

    Further, configure the secret and namespace on Ingress Gateway in the OAUTH CONFIGURATION section of the custom-value.yaml file.

    The following is a sample Helm configuration. For more information on parameters and their supported values, see OAUTH Configuration.
    # ----OAUTH CONFIGURATION - BEGIN ----
    oauthValidatorEnabled: true
    nfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11
    allowedClockSkewSeconds: 0
    nrfPublicKeyKubeSecret: 'nrfpublickeysecret'  # secret name
    nrfPublicKeyKubeNamespace: 'ocbsf'            # namespace of BSF
    validationType: strict
    producerPlmnMNC: 123
    producerPlmnMCC: 456
    nfType: BSF
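
Before proceeding to the REST-based configuration, you can optionally confirm that the secret created in step 2 contains all the NRF certificates:

kubectl describe secret nrfpublickeysecret -n ocbsf

The Data section of the output lists one .crt entry per NRF instance.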

Configuration using REST API:

After the Helm configuration, send REST requests to Ingress Gateway to use the configured public key certificates. REST-based configuration lets you distinguish between the certificates configured for different NRFs and use these certificates to validate the token received from a specific NRF.

Following are the three OAuth validation modes:
  • INSTANCEID_ONLY (Default): Ingress Gateway validates the access token based on public keys indexed with the NRF Instance ID in the issuer field.
  • KID_ONLY: Ingress Gateway validates the access token based on public keys indexed with the Key-ID only.
  • KID_PREFERRED: Ingress Gateway validates the access token based on public keys indexed with the Key-ID. If the Key-ID is not found in the access token, Ingress Gateway attempts token validation using public keys indexed with the NRF Instance ID in the issuer field.

Table 2-8 Configuring oauth Validator

URI: /{nfType}/nf-common-component/v1/{serviceName}/oauthvalidatorconfiguration
For example: http://10.75.152.236:8000/bsf/nf-common-component/v1/igw/oAuthValidatorConfiguration
Operation: PUT
Sample JSON schema:

"oAuthValidatorConfiguration": {
   "type": "object",
   "description": "Validator configurations for oAuth",
   "properties": {
      "keyIdList": {
         "type": ["array","null"],
         "uniqueItems": true, 
         "maxItems": 150,
         "description": "Array containing KID based configuration",
         "items": {
            "anyOf": [
               {
                  "type": "object",
                  "properties": {
                     "keyId":{"type": "string",  "minLength": 1, "maxLength": 36,"pattern": "[a-zA-Z0-9]"},
                     "kSecretName": {"type": "string"},
                     "certName": {"type": "string"},
                     "certAlgorithm": {"type": "string"}
                  },
                  "required": [
                     "keyId",
                     "kSecretName",
                     "certName",
                     "certAlgorithm"
                   ]
               }
            ]
         }
      },
      "instanceIdList": {
         "type": ["array","null"],
         "uniqueItems": true,
         "maxItems": 150,
         "description": "Array containing Instance Id based configuration",
         "items": {
            "anyOf": [
               {
                  "type": "object",
                  "properties": {
                     "instanceId": {"type": "string"},
                     "kSecretName": {"type": "string"},
                     "certName": {"type": "string"},
                     "certAlgorithm": {"type": "string"}
                  },
                  "required": [
                     "instanceId",
                     "kSecretName",
                     "certName",
                     "certAlgorithm"
                   ]
               }
            ]
         }
      },
      "oauthValidationMode": {
         "type": "string",
         "enum": [
            "KID_ONLY",
            "INSTANCEID_ONLY",
            "KID_PREFERRED"
         ],
         "description": "Mode of validation"
        }
    }
}
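
The following is an illustrative request that conforms to the above schema. The keyId value is hypothetical; the secret and certificate names reuse the examples from this section:

curl -X PUT "http://10.75.152.236:8000/bsf/nf-common-component/v1/igw/oAuthValidatorConfiguration" -H "Content-Type: application/json" -d '{
  "keyIdList": [
    {
      "keyId": "nrf2kid1",
      "kSecretName": "nrfpublickeysecret",
      "certName": "601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt",
      "certAlgorithm": "ES256"
    }
  ],
  "instanceIdList": [
    {
      "instanceId": "664b344e-7429-4c8f-a5d2-e7dfaaaba407",
      "kSecretName": "nrfpublickeysecret",
      "certName": "664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt",
      "certAlgorithm": "ES256"
    }
  ],
  "oauthValidationMode": "KID_PREFERRED"
}'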

Validating OAuth Token

The following curl command sends a request to create PCF bindings with a valid OAuth header:

curl -X POST --http2-prior-knowledge -i "http://10.75.233.75:32564/nbsf-management/v1/pcfBindings" -H "Content-Type: application/json" -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJraWQiOiI2MDFhZWQyYy1lMzE0LTQ2YTctYTNlNi1mMThjYTAyZmFheHgiLCJhbGciOiJFUzI1NiJ9.eyJpc3MiOiI2NjRiMzQ0ZS03NDI5LTRjOGYtYTVkMi1lN2RmYWFhYmE0MDciLCJzdWIiOiJmZTdkOTkyYi0wNTQxLTRjN2QtYWI4NC1jNmQ3MGIxYjAxYjEiLCJhdWQiOiJTTUYiLCJzY29wZSI6Im5zbWYtcGR1c2Vzc2lvbiIsImV4cCI6MTYxNzM1NzkzN30.oGAYtR3FnD33xOCmtUPKBEA5RMTNvkfDqaK46ZEnnZvgN5Cyfgvlr85Zzdpo2lNISADBgDumD_m5xHJF8baNJQ' -d '{
  "supi": "imsi-310410000000015",
  "gpsi": "5084943708",
  "ipv4Addr": "10.10.10.10",
  "dnn": "internet",
  "pcfFqdn": "pcf-smservice.oracle.com",
  "pcfDiamHost": "pcf-smservice.oracle.com",
  "pcfDiamRealm": "oracle.com",
  "snssai": {
    "sst": 11,
    "sd": "abc123"
  }
}'
2.2.1.10 Configuring BSF to Support Aspen Service Mesh

BSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external Transport Layer Security (TLS) communication. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh. The integration supports the services by deploying a sidecar proxy in each pod to intercept all network communications between microservices.

Supported ASM versions: 1.11.x and 1.14.x

For ASM installation and configuration, see the official Aspen Service Mesh documentation.

The Aspen Service Mesh (ASM) configurations are classified into:

  • Control Plane: Involves adding labels or annotations to inject a sidecar.
  • Data Plane: Helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as API version changes between releases. Data plane configuration is done manually, depending on each NF's requirements and the ASM deployment.

Data Plane Configuration

The Data Plane configuration consists of the following Custom Resource Definitions (CRDs):

  • Service Entry (SE)
  • Destination Rule (DR)
  • Envoy Filter (EF)
  • Peer Authentication (PA)
  • Authorization Policy (AP)
  • Virtual Service (VS)
  • requestAuthentication

Note:

Use Helm charts to add or delete the CRDs that you may require, for example, when ASM upgrades change how features are configured across releases.

The Data Plane configuration is applicable in the following scenarios:

  • NF to NF Communication: During NF to NF communication where a sidecar is injected on both NFs, you need an SE and a DR to communicate with the other NF; otherwise, the sidecar rejects the communication. Every egress communication of the NF must have an SE and DR entry configured.

    Note:

    For out-of-cluster communication, you must configure core DNS with the producer NF endpoint to enable access.
  • Kube-api-server: A few NF flows may require access to the Kubernetes API server, which the ASM proxy (with mTLS enabled) may block. As per the F5 recommendation, the NF must add an SE for the Kubernetes API server in its namespace.
  • Envoy Filters: When sidecars rewrite a header with its default value, the headers from back-end services are lost. Envoy Filters pass the headers from back-end services through unchanged, so they can be used as is.

ASM Configuration File

A sample ocbsf_custom_values_servicemesh_config_24.3.0.yaml file is available in the Custom_Templates folder. To download the file, see Customizing BSF.

2.2.1.10.1 Predeployment Configurations

This section explains the pre-deployment configuration procedure to install Cloud Native Core Binding Support Function (BSF) with ASM support.

Step 1 - Creating BSF Namespace

Apply Istio sidecar injection to the BSF namespace (create the namespace first if it does not already exist) by using the following command:
kubectl label --overwrite namespace <required namespace> istio-injection=enabled
Example:
kubectl label --overwrite namespace ocbsf istio-injection=enabled
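
You can optionally confirm that the label is applied:
kubectl get namespace ocbsf --show-labels
The output shows istio-injection=enabled in the LABELS column.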

Step 2 - The operator must have special capabilities at the service account level to start the pre-install init container.

Example of some special capabilities:

readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
allowedCapabilities:
- NET_ADMIN
- NET_RAW
runAsUser:
  rule: RunAsAny
2.2.1.10.2 Deploying BSF With ASM

Customize the ocbsf_custom_values_servicemesh_config_24.3.0.yaml file

To customize the ocbsf_custom_values_servicemesh_config_24.3.0.yaml file, uncomment and modify the parameters as per your requirements.

A sample ocbsf_custom_values_servicemesh_config_24.3.0.yaml file is available in the Custom_Templates folder. To download the file, see Customizing BSF.

Note:

When BSF is deployed with ASM and cnDBTier is also installed in the same namespace or cluster, you can skip installing the service entries and destination rules.
To update Service Entries, make the required changes using the following sample template:
#serviceEntries:
#  - hosts: |-
#      [ "mysql-connectivity-service.<cndbtiernamespace>.svc.<clustername>" ]
#    exportTo: |-
#      [ "." ]
#    location: MESH_EXTERNAL
#    ports:
#    - number: 3306
#      name: mysql
#      protocol: MySQL
#    name: ocbsf-to-mysql-external-se-test
#  - hosts: |-
#      [ "*.cluster-bsfnrf" ]
#    exportTo: |-
#      [ "." ]
#    location: MESH_EXTERNAL
#    ports:
#    - number: 8090
#      name: http2-8090
#      protocol: TCP
#    - number: 80	
#      name: HTTP2-80
#      protocol: TCP
#    name: ocbsf-to-other-nf-se-test
#  - hosts: |-
#      [ "kubernetes.default.svc.<clustername>" ]
#    exportTo: |-
#      [ "." ]
#    location: MESH_INTERNAL
#    addresses: |-
#      [ "192.168.200.36" ]
#    ports:
#    - number: 443
#      name: https
#      protocol: HTTPS
#    name: nf-to-kube-api-server
To customize Destination Rule, make the required changes using the following sample template:

# destinationRules:
#  - host: "*.<clustername>"
#    mode: DISABLE
#    name: ocbsf-to-other-nf-dr-test
#    sbitimers: true
#    tcpConnectTimeout: "750ms"
#    tcpKeepAliveProbes: 3
#    tcpKeepAliveTime: "1500ms"
#    tcpKeepAliveInterval: "1s"
#  - host: mysql-connectivity-service.<cnDBTiernamespace>.svc.cluster.local
#    mode: DISABLE
#    name: mysql-occne
#    sbitimers: false
For customizing envoyFilters according to the Istio version installed on the Bastion server, use any of the following templates:

For Istio version 1.11.x and 1.14.x


#envoyFilters_v_19x_111x:
# - name: set-xfcc-bsf
#   labelselector: "app.kubernetes.io/instance: ocbsf"
#   applyTo: NETWORK_FILTER
#   filtername: envoy.filters.network.http_connection_manager
#   operation: MERGE
#   typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
#   configkey: forward_client_cert_details
#   configvalue: ALWAYS_FORWARD_ONLY
# - name: serverheaderfilter
#   labelselector: "app.kubernetes.io/instance: ocbsf"
#   applyTo: NETWORK_FILTER
#   filtername: envoy.filters.network.http_connection_manager
#   operation: MERGE
#   typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
#   configkey: server_header_transformation
#   configvalue: PASS_THROUGH
# - name: custom-http-stream
#   labelselector: "app.kubernetes.io/instance: ocbsf"
#   applyTo: NETWORK_FILTER
#   filtername: envoy.filters.network.http_connection_manager
#   operation: MERGE
#   typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
#   configkey: server_header_transformation
#   configvalue: PASS_THROUGH
#   stream_idle_timeout: "6000ms"
#   max_stream_duration: "7000ms"
#   patchContext: SIDECAR_OUTBOUND
#   networkFilter_listener_port: 8000
# - name: custom-tcpsocket-timeout
#   labelselector: "app.kubernetes.io/instance: ocbsf"
#   applyTo: FILTER_CHAIN
#   patchContext: SIDECAR_INBOUND
#   operation: MERGE
#   transport_socket_connect_timeout: "750ms"
#   filterChain_listener_port: 8000
# - name: custom-http-route
#   labelselector: "app.kubernetes.io/instance: ocbsf"
#   applyTo: HTTP_ROUTE
#   patchContext: SIDECAR_OUTBOUND
#   operation: MERGE
#   route_idle_timeout: "6000ms"
#   route_max_stream_duration: "7000ms"
#   httpRoute_routeConfiguration_port: 8000
#   vhostname: "bsf-ocbsf-policy-ds.ocbsf.svc.cluster:8000"

For Istio version 1.11.x and 1.14.x

Note:

Istio 1.11.x and 1.14.x support the same template for envoyFilters configurations.

envoyFilters_v_19x_111x:
  - name: xfccfilter
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: forward_client_cert_details
        configvalue: ALWAYS_FORWARD_ONLY
  - name: serverheaderfilter
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
  - name: custom-http-stream
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
        stream_idle_timeout: "6000ms"
        max_stream_duration: "7000ms"
        patchContext: SIDECAR_OUTBOUND
        networkFilter_listener_port: 8000
  - name: custom-tcpsocket-timeout
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: FILTER_CHAIN
        patchContext: SIDECAR_INBOUND
        operation: MERGE
        transport_socket_connect_timeout: "750ms"
        filterChain_listener_port: 8000
  - name: custom-http-route
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: HTTP_ROUTE
        patchContext: SIDECAR_OUTBOUND
        operation: MERGE
        route_idle_timeout: "6000ms"
        route_max_stream_duration: "7000ms"
        httpRoute_routeConfiguration_port: 8000
        vhostname: "ocbsf.svc.cluster:8000"
  - name: logicaldnscluster
    labelselector: "app.kubernetes.io/instance: ocbsf"
    configpatch:
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamf-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamd-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS

Note:

The parameter vhostname is mandatory when applyTo is HTTP_ROUTE.

Note:

Depending on the Istio version, update the correct value of envoy filters in the following line:
{{- range .Values.envoyFilters_v_19x_111x }}
For customizing PeerAuthentication, make the required changes using the following sample template:
#peerAuthentication:
#   - name: default
#     tlsmode: PERMISSIVE
#   - name: cm-service
#     labelselector: "app.kubernetes.io/name: cm-service"
#     tlsmode: PERMISSIVE
#   - name: ingress
#     labelselector: "app.kubernetes.io/name: ocbsf-ingress-gateway"
#     tlsmode: PERMISSIVE
#   - name: diam-gw
#     labelselector: "app.kubernetes.io/name: diam-gateway"
#     tlsmode: PERMISSIVE

Istio Authorization Policy enables access control on workloads in the mesh. Authorization policy supports CUSTOM, DENY and ALLOW actions for access control. When CUSTOM, DENY and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action.

For more details on Istio Authorization Policy, see Istio / Authorization Policy.

To customize the Authorization Policy, make the required changes using the following sample template:

#authorizationPolicies:
#- name: allow-all-provisioning-on-ingressgateway-ap
#  labelselector: "app.kubernetes.io/name: ingressgateway"
#  action: "ALLOW"
#  hosts:
#    - "*"
#  paths:
#    - "/nudr-dr-prov/*"
#    - "/nudr-dr-mgm/*"
#    - "/nudr-group-id-map-prov/*"
#    - "/slf-group-prov/*"
#- name: allow-all-sbi-on-ingressgateway-ap
#  labelselector: "app.kubernetes.io/name: ingressgateway"
#  action: "ALLOW"
#  hosts:
#    - "*"
#  paths:
#    - "/nbsf-policyauthorization/*"
#  xfccvalues:
#    - "*DNS=nrf1.site1.com"
#    - "*DNS=nrf2.site2.com"
#    - "*DNS=scp1.site1.com"
#    - "*DNS=scp1.site2.com"
#    - "*DNS=scp1.site3.com"

A VirtualService is required to configure the retry attempts for the destination host. For instance, for error response code 503, the default behavior of Istio is to retry two times. To configure a different number of retry attempts, use a VirtualService.

In the following example, the number of retry attempts is set to 0:
#virtualService:
#  - name: scp1site1vs
#    host: "scp1.site1.com"
#    destinationhost: "scp1.site1.com"
#    port: 8000
#    exportTo: |-
#      [ "." ]
#    attempts: "0"
#    timeout: 7s
#  - name: scp1site2vs
#    host: "scp1.site2.com"
#    destinationhost: "scp1.site2.com"
#    port: 8000
#    exportTo: |-
#      [ "." ]
#    retryon: 5xx
#    attempts: "1"
#    timeout: 7s
where the host or destination name uses the format <release_name>-<egress_svc_name>.
To get the <egress_svc_name>, run the following command:
kubectl get svc -n <namespace>

Note:

For 5xx response codes, set the value of retry attempts to 1.
Request Authentication is used to configure JWT tokens for OAuth validation. Network functions authenticate the OAuth token sent by consumer network functions by using the public key of the NRF signing certificate, with the service mesh validating the token. Using the following sample format, you can configure requestAuthentication as per your system requirements:
requestAuthentication:
#  - name: jwttokenwithjson
#    labelselector: httpbin
#    issuer: "jwtissue"
#    jwks: |-
#     '{
#       "keys": [{
#       "kid": "1",
#       "kty": "EC",
#       "crv": "P-256",
#       "x": "Qrl5t1-Apuj8uRI2o_BP9loqvaBnyM4OPTPAD_peDe4",
#       "y": "Y7vNMKGNAtlteMV-KJIaG-0UlCVRGFHtUVI8ZoXIzRY"
#      }]
#     }'
#  - name: jwttoken
#    labelselector: httpbin
#    issuer: "jwtissue"
#    jwksUri: https://example.com/.well-known/jwks.json

Note:

For requestAuthentication, use either jwks or jwksUri.

Run the following command to create the Custom Resource Definitions (CRDs):

helm install ocbsf-servicemesh-config ocbsf-servicemesh-config-24.3.0.tgz  -n ocbsf -f ocbsf_custom_values_servicemesh_config_24.3.0.yaml
2.2.1.10.3 Post-deployment Configurations
To verify that the CRDs are created, run the following command:
kubectl get se,dr,peerauthentication,envoyfilter,vs,authorizationpolicy,requestauthentication -n ocbsf
The following is a sample output:
NAME                                                                   HOSTS                                              LOCATION        RESOLUTION   AGE
serviceentry.networking.istio.io/nf-to-kube-api-server                 ["kubernetes.default.svc.vega"]                    MESH_INTERNAL   NONE         17h
serviceentry.networking.istio.io/vega-ns1a-to-mysql-external-se-test   ["mysql-connectivity-service.vega-ns1.svc.vega"]   MESH_EXTERNAL   NONE         17h
serviceentry.networking.istio.io/vega-ns1a-to-other-nf-se-test         ["*.vega"]                                         MESH_EXTERNAL   NONE         17h

NAME                                                                HOST                                                    AGE
destinationrule.networking.istio.io/jaeger-dr                       occne-tracer-jaeger-query.occne-infra                   17h
destinationrule.networking.istio.io/mysql-occne                     mysql-connectivity-service.vega-ns1.svc.cluster.local   17h
destinationrule.networking.istio.io/prometheus-dr                   occne-prometheus-server.occne-infra                     17h
destinationrule.networking.istio.io/vega-ns1a-to-other-nf-dr-test   *.vega                                                  17h

NAME                                                MODE         AGE
peerauthentication.security.istio.io/cm-service     PERMISSIVE   17h
peerauthentication.security.istio.io/default        PERMISSIVE   17h
peerauthentication.security.istio.io/diam-gw        PERMISSIVE   17h
peerauthentication.security.istio.io/ingress        PERMISSIVE   17h
peerauthentication.security.istio.io/ocats-policy   PERMISSIVE   17h

NAME                                                         AGE
envoyfilter.networking.istio.io/ocats-policy-xfcc            17h
envoyfilter.networking.istio.io/serverheaderfilter           17h
envoyfilter.networking.istio.io/serverheaderfilter-nf1stub   17h
envoyfilter.networking.istio.io/serverheaderfilter-nf2stub   17h
envoyfilter.networking.istio.io/set-xfcc-bsf                 17h

NAME                                             GATEWAYS   HOSTS                                AGE
virtualservice.networking.istio.io/nrfvirtual1              ["vega-ns1a-occnp-egress-gateway"]   17h

Then, perform the steps described in Installing BSF Package.

2.2.1.10.4 Deleting Service Mesh

This section describes the steps to delete the Aspen Service Mesh (ASM) configuration for an ASM-based BSF.

  1. Disable ASM.

    kubectl label --overwrite namespace <namespace> istio-injection=disabled

    where,

    <namespace> is the deployment namespace used by the helm command. For example:

    kubectl label --overwrite namespace ocbsf istio-injection=disabled

  2. Delete all the pods in the namespace.

    kubectl delete pods --all -n <namespace>

  3. Delete ASM.

    helm delete <helm-release-name> -n <namespace-name>

    where, <helm-release-name> is the release name used by the helm install command. This release name must be the same as the release name used for the service mesh configuration.

    <namespace-name> is the deployment namespace used by the helm command.

    Example:

    helm delete ocbsf-servicemesh-config -n ocbsf

  4. Verify ASM deletion.

    kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocbsf
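
    If the deletion is successful, the command reports that no resources are found, similar to the following:

    No resources found in ocbsf namespace.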

2.2.1.11 Configuring Network Policies

Network Policies allow you to define ingress and egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. The rules select pods based on Kubernetes labels in the application. Network Policies enforce access restrictions for all applicable data flows, except communication from the Kubernetes node to pods for invoking container probes.

Note:

Configuring Network Policies is optional. Configure them based on your security requirements.

For more information on Network Policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.

Note:

  • If traffic is unexpectedly blocked or allowed between pods even after applying Network Policies, check whether an existing policy applies to the same pod or set of pods and alters the overall cumulative behavior.
  • If the default ports of services such as Prometheus, the database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding Network Policies.

Configuring Network Policies

Network Policies are enforced by the Container Network Interface (CNI) plugin used for cluster networking.

Note:

For any deployment, ensure that the CNI plugin in use supports Network Policy.

Following are the various operations that can be performed for Network Policies:

2.2.1.11.1 Installing Network Policies

Prerequisite

Network Policies are implemented by using the network plug-in. To use Network Policies, you must be using a networking solution that supports Network Policy.

Note:

For a fresh installation, it is recommended to install Network Policies before installing BSF. However, if BSF is already installed, you can still install the Network Policies.

To install Network Policies:

  1. Open the ocbsf-network-policy-custom-values.yaml file provided in the release package zip file.

    For downloading the file, see Downloading BSF package and Pushing the Images to Customer Docker Registry.

  2. The file is provided with the default Network Policies. If required, update the ocbsf-network-policy-custom-values.yaml file. For more information on the parameters, see Configuration Parameters for Network Policies.

    Note:

    • To run ATS, uncomment the following policies from ocbsf-network-policy-custom-values.yaml:
      • allow-egress-for-ats
      • allow-ingress-to-ats
      • allow-egress-to-ats-pods-from-bsf-pods
      • allow-ingress-from-ats-pods-to-bsf-pods
    • To connect with CNC Console, update the following parameter in the allow-ingress-from-console Network Policy in the ocbsf-network-policy-custom-values.yaml:

      kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
    • In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label on the Prometheus pod (see the label lookup command after this procedure).
  3. Run the following command to install the Network Policies:
    helm install <helm-release-name> ocbsf-network-policy/ -n <namespace> -f <custom-value-file>
    where:
    • <helm-release-name> is the ocbsf-network-policy Helm release name.
    • <custom-value-file> is the ocbsf-network-policy-custom-values.yaml file.
    • <namespace> is the OCBSF namespace.

    For example:

    helm install ocbsf-network-policy ocbsf-network-policy/ -n ocbsf -f ocbsf-network-policy-custom-values.yaml

Note:

  • Connections that were created before the Network Policies were installed and that still persist are not impacted; only new connections are impacted.

  • If you are using the ATS suite along with Network Policies, BSF and ATS must be installed in the same namespace.

  • It is highly recommended to run ATS after deploying Network Policies to detect any missing or invalid rules that can impact signaling flows.
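
To determine the label values referenced by the allow-ingress-prometheus policy, you can list the Prometheus pod labels. The following example assumes Prometheus runs in the occne-infra namespace, as in the CNE examples in this document:

kubectl get pods -n occne-infra --show-labels | grep prometheus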

2.2.1.11.2 Upgrading Network Policies

To add, delete, or update Network Policies:

  1. Modify the ocbsf-network-policy-custom-values.yaml file to update, add, or delete the Network Policy.
  2. Run the following command to upgrade the Network Policies:
    helm upgrade <helm-release-name> ocbsf-network-policy/ -n <namespace> -f <custom-value-file>
    where:
    • <helm-release-name> is the ocbsf-network-policy Helm release name.
    • <custom-value-file> is the ocbsf-network-policy-custom-values.yaml file.
    • <namespace> is the OCBSF namespace.

    For example:

    helm upgrade ocbsf-network-policy ocbsf-network-policy/ -n ocbsf -f ocbsf-network-policy-custom-values.yaml
2.2.1.11.3 Verifying Network Policies

Run the following command to verify that the Network Policies are deployed successfully:

kubectl get networkpolicies -n <namespace>

For example:

kubectl get networkpolicies -n ocbsf
Where,
  • namespace: BSF namespace.
2.2.1.11.4 Uninstalling Network Policies

Run the following command to uninstall network policies:

helm uninstall <helm-release-name> -n <namespace>

For example:

helm uninstall ocbsf-network-policy -n ocbsf

Note:

While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.

2.2.1.11.5 Configuration Parameters for Network Policies

Table 2-9 Supported Kubernetes Resource for Configuring Network Policies

Parameter Description Details
apiVersion

This is a mandatory parameter.

Specifies the Kubernetes version for access control.

Note: This is the supported api version for network policy. This is a read-only parameter.

Data Type: string

Default Value: networking.k8s.io/v1
kind

This is a mandatory parameter.

Represents the REST resource this object represents.

Note: This is a read-only parameter.

Data Type: string

Default Value: NetworkPolicy

Table 2-10 Supported Parameters for Configuring Network Policies

Parameter Description Details
metadata.name

This is a mandatory parameter.

Specifies a unique name for Network Policies.

Data Type: String

Default Value: {{ .metadata.name }}

spec.{}

This is a mandatory parameter.

This consists of all the information needed to define a particular network policy in the given namespace.

Note: Policy supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies".

Default Value: NA

For more information, see Network Policies in Oracle Communications Cloud Native Core, Binding Support Function User Guide.

2.2.2 Installation Tasks

This section explains how to install BSF.

Note:

  • Before installing BSF, you must complete Prerequisites and Preinstallation Tasks.

  • In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.

2.2.2.1 Downloading BSF package
To download the BSF package from My Oracle Support (MOS), perform the following steps:
  1. Log in to My Oracle Support with your credentials.
  2. Select the Patches and Updates tab.
  3. In the Patch Search window, click Product or Family (Advanced) option.
  4. Enter Oracle Communications Cloud Native Core - 5G in the Product field, and select the Product from the drop-down list.
  5. From the Release drop-down list, select "Oracle Communications Cloud Native Core, Converged Policy <release_number>".

    Where, <release_number> indicates the required release number of Policy.

  6. Click Search.

    The Patch Advanced Search Results list appears.

  7. Select the required patch from the results.

    The Patch Details window appears.

  8. Click Download.

    The File Download window appears.

  9. Click the <p********_<release_number>_Tekelec>.zip file to download the BSF release package.
2.2.2.2 Pushing the Images to Customer Docker Registry

The BSF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.

Table 2-11 Docker Images for BSF

Service Name Docker Image Name Image Tag
Alternate Route Service alternate_route 24.3.3
BSF Management oc-bsf-management 24.3.0
Application Info Service oc-app-info 24.3.4
Common Configuration Hook common_config_hook 24.3.3
Config Server oc-config-server 24.3.4
Configuration Management Server oc-config-mgmt 24.3.4
Debug Tool ocdebug-tools 24.3.1
Diameter Connector oc-diam-connector 24.3.4
Diameter Gateway oc-diam-gateway 24.3.4
Egress Gateway ocegress_gateway 24.3.3
NF Test nf_test 24.3.2
Ingress Gateway ocingress_gateway 24.3.3
Ingress Gateway/Egress Gateway init configuration configurationinit 24.3.3
Ingress Gateway/Egress Gateway update configuration configurationupdate 24.3.3
NRF Client Service nrf-client 24.3.2
Performance Monitoring Service oc-perf-info 24.3.4
Query Service oc-query 24.3.4
Session State Audit oc-audit 24.3.4

Pushing images

To push the images to the registry:
  1. Run the following command to untar the BSF package file to get the BSF docker image tar file:
    tar -xvzf <ReleaseName>-pkg-<Releasenumber>.tgz

    Example: tar -xvzf ocbsf-pkg-24.3.0.0.0.tgz

    The directory consists of the following:
    • BSF Docker Images File:

      ocbsf-images-24.3.0.tar

    • Helm File:

      ocbsf-24.3.0.tgz

    • Readme txt File:

      Readme.txt

    • Checksum for Helm chart tgz file:

      ocbsf-24.3.0.tgz.sha256

    • Checksum for Helm chart for Service Mesh tgz file:

      ocbsf-servicemesh-config-24.3.0.tgz.sha256

    • Checksum for images' tgz file:

      ocbsf-images-24.3.0.tar.sha256

  2. Run one of the following commands to load the ocbsf-images-<release_number>.tar file:
    docker load --input /IMAGE_PATH/ocbsf-images-24.3.0.tar
    where IMAGE_PATH points to the location where ocbsf-images-24.3.0.tar is stored.
    For CNE 1.8.0 and later versions, use the following command:
    podman load --input /IMAGE_PATH/ocbsf-images-24.3.0.tar
  3. Run one of the following commands to verify that the images are loaded:

    docker images

    podman images

    Verify that the list of images shown in the output matches the list of images in Table 2-11. If the lists do not match, reload the image tar file.

    For more information on docker images available in BSF, see Docker Images.
  4. Run one of the following commands to tag the images to the registry:

    docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>

    podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>

  5. Run one of the following commands to push the images to the registry:

    docker push <docker-repo>/<image-name>:<image-tag>

    podman push <docker-repo>/<image-name>:<image-tag>

    Note:

    It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the push command may fail.

Example for OCCNE 1.8.0 and later versions

podman tag docker.io/ocbsf/oc-config-mgmt:24.3.4 occne-repo-host/oc-config-mgmt:24.3.4
podman push occne-repo-host/oc-config-mgmt:24.3.4

podman tag docker.io/ocbsf/nf_test:24.3.2 occne-repo-host/nf_test:24.3.2
podman push occne-repo-host/nf_test:24.3.2

podman tag docker.io/ocbsf/alternate_route:24.3.3 occne-repo-host/alternate_route:24.3.3
podman push occne-repo-host/alternate_route:24.3.3

podman tag docker.io/ocbsf/oc-config-server:24.3.4 occne-repo-host/oc-config-server:24.3.4
podman push occne-repo-host/oc-config-server:24.3.4

podman tag docker.io/ocbsf/configurationupdate:24.3.3 occne-repo-host/configurationupdate:24.3.3
podman push occne-repo-host/configurationupdate:24.3.3

podman tag docker.io/ocbsf/oc-app-info:24.3.4 occne-repo-host/oc-app-info:24.3.4
podman push occne-repo-host/oc-app-info:24.3.4

podman tag docker.io/ocbsf/ocingress_gateway:24.3.3 occne-repo-host/ocingress_gateway:24.3.3
podman push occne-repo-host/ocingress_gateway:24.3.3

podman tag docker.io/ocbsf/ocegress_gateway:24.3.3 occne-repo-host/ocegress_gateway:24.3.3
podman push occne-repo-host/ocegress_gateway:24.3.3

podman tag docker.io/ocbsf/oc-diam-gateway:24.3.4 occne-repo-host/oc-diam-gateway:24.3.4
podman push occne-repo-host/oc-diam-gateway:24.3.4

podman tag docker.io/ocbsf/oc-bsf-management:24.3.0 occne-repo-host/oc-bsf-management:24.3.0
podman push occne-repo-host/oc-bsf-management:24.3.0

podman tag docker.io/ocbsf/oc-query:24.3.4 occne-repo-host/oc-query:24.3.4
podman push occne-repo-host/oc-query:24.3.4

podman tag docker.io/ocbsf/oc-perf-info:24.3.4 occne-repo-host/oc-perf-info:24.3.4
podman push occne-repo-host/oc-perf-info:24.3.4

podman tag docker.io/ocbsf/nrf-client:24.3.2 occne-repo-host/nrf-client:24.3.2
podman push occne-repo-host/nrf-client:24.3.2

podman tag docker.io/ocbsf/configurationinit:24.3.3 occne-repo-host/configurationinit:24.3.3
podman push occne-repo-host/configurationinit:24.3.3

podman tag docker.io/ocbsf/common_config_hook:24.3.3 occne-repo-host/common_config_hook:24.3.3
podman push occne-repo-host/common_config_hook:24.3.3

podman tag docker.io/ocbsf/ocdebug-tools:24.3.1 occne-repo-host/ocdebug-tools:24.3.1
podman push occne-repo-host/ocdebug-tools:24.3.1

podman tag docker.io/ocbsf/oc-audit:24.3.4 occne-repo-host/oc-audit:24.3.4
podman push occne-repo-host/oc-audit:24.3.4
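
Rather than tagging and pushing each image individually, you can loop over the image list. The following is a convenience sketch covering the images from Table 2-11; verify the names and tags against the table before running it:

for img in alternate_route:24.3.3 oc-bsf-management:24.3.0 oc-app-info:24.3.4 \
    common_config_hook:24.3.3 oc-config-server:24.3.4 oc-config-mgmt:24.3.4 \
    ocdebug-tools:24.3.1 oc-diam-connector:24.3.4 oc-diam-gateway:24.3.4 \
    ocegress_gateway:24.3.3 nf_test:24.3.2 ocingress_gateway:24.3.3 \
    configurationinit:24.3.3 configurationupdate:24.3.3 nrf-client:24.3.2 \
    oc-perf-info:24.3.4 oc-query:24.3.4 oc-audit:24.3.4; do
  podman tag docker.io/ocbsf/${img} occne-repo-host/${img}
  podman push occne-repo-host/${img}
done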
2.2.2.3 Installing BSF Package

This section describes how to install BSF package.

To install BSF package, perform the following steps:

  1. Unzip the release package to the location where you want to install BSF. The package file name is of the form <ReleaseName>-pkg-<Releasenumber>.tgz:

    where:

    ReleaseName is a name that is used to track this installation instance.

    Releasenumber is the release number. For example, ocbsf-pkg-24.3.0.0.0.tgz.

    Navigate to the directory where the package is extracted.

  2. Customize the ocbsf_custom_values_24.3.0.yaml file. For more information, see Customizing BSF.

    Note:

    The values of the parameters mentioned in the ocbsf_custom_values_24.3.0.yaml file overrides the default values specified in the Helm chart. If the envMysqlDatabase parameter is modified, you must modify the configDbName parameter with the same value.

    Note:

    The URLs for perf-info must use the correct syntax; otherwise, the perf-info pod keeps restarting. The following is a URL example for the Bastion server when BSF is deployed on the OCCNE platform. On any other PaaS platform, update the URLs according to the Prometheus and Jaeger query deployment.
    # Values provided must match the Kubernetes environment.
    perf-info:
      configmapPerformance:
        prometheus: http://occne-prometheus-server.occne-infra.svc/clustername/prometheus
        jaeger: jaeger-agent.occne-infra
        jaeger_query_url: http://jaeger-query.occne-infra/clustername/jaeger

    At least three configuration items must be present in the config map for perf-info; otherwise, perf-info does not work. If Jaeger is not enabled, the jaeger and jaeger_query_url parameters can be omitted.

  3. Install BSF using Helm:

    helm install <release-name> -f <custom_file> <helm-chart> --namespace <release-namespace> --atomic --timeout 10m
    where:
    • helm-chart is the location of the Helm chart extracted from the ocbsf-pkg-24.3.0.0.0.tgz file.
    • release_name is the release name used by the Helm command. The maximum allowed length is 63 characters.
    • release_namespace is the deployment namespace used by the helm command.
    • custom_file is the name of the custom values yaml file (including its location).

    Note:

    To monitor the installation while the install command runs, run the following command in a separate window:
    watch kubectl get jobs,pods -n <release_namespace>

    For example:

    helm install ocbsf /home/cloud-user/bsf-24.3.0.0.0.tgz --namespace ocbsf -f ocbsf_custom_values_24.3.0.yaml --atomic
    Parameters in the helm install command:
    • atomic: If this parameter is set, the installation process purges the Helm chart on failure. The wait flag is set automatically.
    • wait: If this parameter is set, the installation process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
    • timeout duration (optional): This parameter specifies the wait time for individual Kubernetes operations, such as Jobs for hooks. The default value in Helm is 300s (seconds). If the helm install command fails to create a Kubernetes object, it internally purges the release after reaching the timeout value.

      Note:

      Timeout value is not for the overall install but for automatic purge on installation failure.

    Caution:

    When you run the install command, do not exit from the helm install command manually. After running the helm install command, installing all the services may take some time. During this time, do not press "Ctrl+C" to exit the helm install command, as it leads to anomalous behavior.

    Note:

    In a georedundant deployment, if you want to add or remove a site, refer to <Appendix B and Appendix C>.

    Note:

    The following warnings must be ignored when installing BSF on CNE 24.3.0, 24.2.0, and 24.1.0:
    helm install <release-name> -f <custom.yaml> <tgz-file> -n <namespace>
    W0301 07:39:30.096125 1718193 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[1]
    W0301 07:39:34.033420 1718193 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[2]
    NAME: <release-name>
    LAST DEPLOYED: <Date-Time>
    NAMESPACE: <namespace>
    STATUS: deployed
    REVISION: <N>
  4. Press "Ctrl+C" to exit watch mode. Make sure to run the watch command on a different terminal.

    For Helm 2:

    helm status <helm-release> -n <namespace>

2.2.3 Postinstallation Tasks

This section explains the postinstallation tasks for BSF.

2.2.3.1 Verifying BSF Installation
To verify if BSF is installed:
  1. Run the following command to verify the installation status:

    For Helm:

    helm status <helm-release> -n <namespace>

    For example:

    helm status ocbsf -n ocbsf

    Status should be DEPLOYED.

  2. Run the following command to verify if the pods are up and active:

    kubectl get pods -n <release_namespace>

    For example:

    kubectl get pod -n ocbsf

    You should see the status as Running and Ready for all the pods.

  3. Run the following command to verify if the services are deployed and active:

    kubectl get services -n <release_namespace>

    For example:

    kubectl get services -n ocbsf
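
    To quickly spot pods that are not yet in the Running state, you can optionally filter the listing:

    kubectl get pods -n ocbsf --field-selector=status.phase!=Running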

Note:

If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps mentioned in Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.
2.2.3.2 Performing Helm Test

This section describes how to perform a sanity check for the BSF installation through the Helm test.

Helm test validates successful installation of BSF and determines whether the NF is ready to take traffic. The pods that are checked are based on the namespace and label selector configured for the Helm test configurations.

Note:

  • Helm test can be performed only on Helm 3.

  • If nrf-client-nfmanagement.enablePDBSupport is set to true in the custom-values.yaml file, the Helm test fails. This is expected behavior: in the Active/Standby mode, the leader pod (nrf-client-management) is in the ready state while the follower pod is not, which causes the Helm test to fail.

Before running Helm test, complete the Helm test configurations.

To perform the helm test, run the following command:
helm test <helm-release_name> -n <namespace>

where:

helm-release-name is the release name.

namespace is the deployment namespace where BSF is installed.

Example:
helm test ocbsf -n ocbsf
Sample output:
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test running
Pod ocbsf-helm-test-test succeeded
NAME: ocbsf-helm-test
LAST DEPLOYED: Thu May 19 12:22:20 2022
NAMESPACE: ocbsf-helm-test
STATUS: deployed
REVISION: 1
TEST SUITE:     ocbsf-helm-test-test
Last Started:   Thu May 19 12:24:23 2022
Last Completed: Thu May 19 12:24:35 2022
Phase:          Succeeded
If the Helm test failed, run the following command to view the logs:
helm test <release_name> -n <namespace> --logs

Note:

  • Helm test expects all the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice uses an Active/Standby model for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you may ignore a Helm test failure for the NRF-Client-Management pod.

  • If Helm test fails, for details on troubleshooting the installation, see Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.

2.2.3.3 Backing Up Important Files

Take a backup of the following files, which are required during fault recovery:

  • updated ocbsf_custom_values_24.3.0.yaml file
  • updated ocbsf_custom_values_servicemesh_config_24.3.0.yaml file
  • updated Helm charts
  • secrets, certificates, and keys used during the installation
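
One way to collect these artifacts is to archive the values files and export the secrets together. The following is an illustrative sketch; adjust the file paths and secret names to your deployment:

tar -czf ocbsf-backup.tar.gz ocbsf_custom_values_24.3.0.yaml ocbsf_custom_values_servicemesh_config_24.3.0.yaml
kubectl get secret ocingress-secret ocegress-secret nrfpublickeysecret -n ocbsf -o yaml > ocbsf-secrets-backup.yaml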