2 Installing BSF
This chapter provides information about installing Oracle Communications Cloud Native Core, Binding Support Function (BSF) in a cloud native environment.
Note:
BSF supports fresh installation, and it can also be upgraded from 23.4.x and 23.2.x. For more information on how to upgrade BSF, see Upgrading BSF.
2.1 Prerequisites
Before installing and configuring BSF, ensure that the following prerequisites are met.
2.1.1 Software Requirements
This section lists the software that must be installed before installing BSF:
Table 2-1 Preinstalled Software
Software | Version |
---|---|
Kubernetes | 1.27.x, 1.25.x |
Helm | 3.12.3 |
Podman | 4.4.1 |
Note:
CNE 23.4.x or 23.2.x versions can be used to install BSF 23.4.6. To check the installed CNE and software versions, run the following commands:
echo $OCCNE_VERSION
helm version
kubectl version
podman version
Note:
This guide covers the installation instructions for BSF when Podman is the container platform and Helm is the package manager. For non-CNE deployments, the operator can use commands based on their deployed container runtime environment; see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade and Fault Recovery Guide.
The following software is available if BSF is deployed in CNE. If you are deploying BSF in any other cloud native environment, this additional software must be installed before installing BSF.
To check the installed software, run the following command:
helm ls -A
Table 2-2 Additional Software
Software | App Version | Required for |
---|---|---|
OpenSearch | 2.3.0 | Logging |
OpenSearch Dashboard | 2.3.0 | Logging |
Fluentd OpenSearch | 1.16.2 | Logging |
Velero | 1.12.0 | Logging |
Kyverno | 1.9.0 | Logging |
elastic-curator | 5.5.4 | Logging |
elastic-exporter | 1.1.0 | Logging |
elastic-master | 7.9.3 | Logging |
Logs | 3.1.0 | Logging |
Grafana | 9.5.3 | Metrics |
Prometheus | 2.44.0 | Metrics |
prometheus-kube-state-metrics | 1.9.7 | Metrics |
prometheus-node-exporter | 1.0.1 | Metrics |
MetalLB | 0.13.11 | External IP |
metrics-server | 0.3.6 | Metric Server |
occne-snmp-notifier | 1.2.1 | Metric Server |
tracer | 1.22.0 | Tracing |
Jaeger | 1.45.0 | Tracing |
Istio | 1.18.2 | Service Mesh |
cert-manager | 1.12.4 | Secrets Manager |
Calico | 3.25.2 | Security Solution |
containerd | 1.7.5 | Container Runtime Manager |
Note:
CNE 23.4.x replaces Elasticsearch and Kibana with OpenSearch and OpenSearch Dashboard, respectively. All components of Elasticsearch are uninstalled in CNE 23.4.x, including the data nodes. Until then, you can access data stored in Elasticsearch. For more information about OpenSearch, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
Important:
If you are using NRF with BSF, install it before proceeding with the BSF installation. BSF 23.4.6 supports NRF 23.4.x.
2.1.2 Environment Setup Requirements
This section describes the environment setup requirements required for installing BSF.
2.1.2.1 Client Machine Requirement
This section describes the requirements for client machine, that is, the machine used by the user to run deployment commands.
- Helm repository configured on the client.
- Network access to the Helm repository and Docker image repository.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, podman, and docker commands. The environment should have privileges to create a namespace in the Kubernetes cluster.
- Helm client installed with the push plugin. The environment should be configured so that the helm install command deploys the software in the Kubernetes cluster. (A quick verification sketch follows this list.)
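The following commands offer a quick, optional sanity check of the client machine prerequisites listed above. These are standard Helm and kubectl invocations; the namespace-creation check assumes a kubectl version that supports the auth can-i subcommand:
helm version
helm plugin list
kubectl cluster-info
kubectl auth can-i create namespace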
2.1.2.2 Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm Repository: It contains the BSF Helm charts.
To check if the Kubernetes cluster hosts have network access to the local Helm repository, run the following command:
helm repo update
- Local Docker Image Repository: It contains the BSF Docker images.
To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag using the following command:
docker pull <docker-repo>/<image-name>:<image-tag>
Where:
<docker-repo> is the IP address or host name of the Docker repository.
<podman-repo> is the IP address or host name of the Podman repository.
<image-name> is the Docker image name.
<image-tag> is the tag assigned to the Docker image used for the BSF pod.
For Example:
docker pull CUSTOMER_REPO/oc-app-info:23.4.14
For CNE 1.8.0 and later versions, use the following command:
podman pull <docker-repo>/<image-name>:<image-tag>
For Example:
podman pull CUSTOMER_REPO/oc-app-info:23.4.14
Note:
Run the kubectl and Helm commands on a system based on the deployment or infrastructure. For instance, you can run these commands on a client machine such as a VM, server, or local desktop.
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade and Fault Recovery Guide.
2.1.2.4 CNE Requirement
This section is applicable only if you are installing BSF on Cloud Native Environment (CNE).
BSF supports CNE 23.4.x and 23.2.x.
To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade and Fault Recovery Guide.
2.1.2.5 cnDBTier Requirement
BSF supports cnDBTier 23.4.x and 23.2.x. cnDBTier must be configured and running before installing BSF. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
2.1.2.6 OSO Requirement
BSF supports Operations Services Overlay (OSO) 23.4.x and 23.2.x for common operation services (Prometheus and components such as Alertmanager, Pushgateway) on a Kubernetes cluster, which does not have these common services. For more information on installation procedure, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation, Upgrade and Fault Recovery Guide.
2.1.3 Resource Requirement
This section lists the resource requirements to install and run BSF.
Note:
The performance and capacity of the BSF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.
2.1.3.1 BSF Services
The following table lists resource requirement for BSF Services:
Table 2-3 BSF Services
Service | CPU (Min) | CPU (Max) | Memory GB (Min) | Memory GB (Max) | Replicas (Min) | Replicas (Max) | Count | Ephemeral Storage (Min, If Enabled) | Ephemeral Storage (Max, If Enabled)
---|---|---|---|---|---|---|---|---|---
bsf-management-service | 3 | 4 | 1 | 4 | 2 | 8 | 1 | 78.1Mi | 4Gi
The minimum and maximum replica counts are set per service in the ocbsf_custom_values_23.4.6.yaml file:
minReplicas: 1
maxReplicas: 1
Following is a sample resource configuration for the ingress-gateway or egress-gateway group:
ingress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1
egress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1
Note:
It is recommended to avoid altering the above-mentioned standard resources. Either increasing or decreasing the CPU or memory will result in unpredictable behavior of the pods. Contact the My Oracle Support (MOS) team for the Min Replicas and Max Replicas count values.
2.1.3.2 Upgrade
Following is the resource requirement for upgrading BSF.
Table 2-4 Upgrade
Service | CPU (Min) | CPU (Max) | Memory GB (Min) | Memory GB (Max) | Replicas (Max)
---|---|---|---|---|---
Alternate Route Service | 1 | 2 | 1 | 2 | 1 |
bsf-management-service | 1 | 2 | 1 | 2 | 1 |
Diameter Gateway | 1 | 2 | 1 | 2 | 1 |
Egress Gateway | 1 | 2 | 1 | 2 | 1 |
Ingress Gateway | 1 | 2 | 1 | 2 | 1 |
NRF Client NF Management | 1 | 2 | 1 | 2 | 1 |
Perf-Info | 1 | 2 | 1 | 2 | 1 |
App-info | 1 | 2 | 1 | 2 | 1 |
Query Service | 1 | 2 | 1 | 2 | 1 |
CM Service | 1 | 2 | 1 | 2 | 1 |
Config Server | 1 | 2 | 1 | 2 | 1 |
Audit Service | 1 | 2 | 1 | 2 | 1 |
2.1.3.3 Common Services Container
Table 2-5 Common Services Container
Service | CPU (Min) | CPU (Max) | Memory GB (Min) | Memory GB (Max) | Replicas (Min) | Replicas (Max) | Count | Ephemeral Storage (Min, If Enabled) | Ephemeral Storage (Max, If Enabled)
---|---|---|---|---|---|---|---|---|---
Alternate Route Service | 1 | 2 | 2 | 4 | 2 | 5 | 1 | 78.1Mi | 4Gi
Diameter Gateway | 3 | 4 | 0.5 | 2 | 2 | | 1 | 78.1Mi | 2Gi
Egress Gateway | 3 | 4 | 4 | 6 | 2 | 5 | 2 | 78.1Mi | 6Gi
Ingress Gateway | 3 | 4 | 4 | 6 | 2 | 5 | 2 | 78.1Mi | 6Gi
NRF Client NF Management | 1 | 1 | 1 | 1 | NA | NA | 2 | 78.1Mi | 1Gi
Perf-Info | 3 | 4 | 0.5 | 1 | NA | NA | 1 | 78.1Mi | 1Gi
App-info | 1 | 1 | 0.5 | 1 | 1 | 2 | 1 | 78.1Mi | 1Gi
Query Service | 1 | 2 | 1 | 1 | 1 | 2 | 1 | 78.1Mi | 1Gi
CM Service | 2 | 4 | 0.5 | 2 | NA | NA | 2 | 78.1Mi | 2Gi
Config Server | 2 | 4 | 0.5 | 2 | 1 | 2 | 1 | 78.1Mi | 2Gi
Audit Service | 1 | 2 | 1 | 1 | 2 | 8 | 1 | 78.1Mi | 1Gi
Note:
Max replica per service should be set based on the required TPS and other dimensioning factors. You must take upgrade resources into account during dimensioning. Default upgrade resource requirements are 25% above max replica, rounded up to the next integer. For example, if a service has a max replica count of 8, upgrade resources of 25% result in additional resources equivalent to 2 pods. If max replica is 1, one additional pod is required (rounding 0.25 up to 1).
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for BSF.
2.2.1 Preinstallation Tasks
Before installing BSF, perform the tasks described in this section.
2.2.1.1 Verifying and Creating Namespace
This section explains how to verify and create a namespace in the system.
Note:
This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
To verify and create a namespace:
- Run the following command to verify whether the required namespace already exists in the system:
kubectl get namespaces
In the output of the above command, if the namespace exists, continue with Creating Service Account, Role, and RoleBinding.
- If the required namespace is unavailable, create
the namespace using the following command:
kubectl create namespace <required namespace>
Where,
<required namespace> is the name of the namespace.
For example:
kubectl create namespace ocbsf
Sample output:
namespace/ocbsf created
Naming Convention for Namespaces
The namespace should:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace, as this prefix is reserved for Kubernetes system namespaces.
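As an optional sanity check, the naming rules above can be tested in the shell before creating the namespace. This is a minimal sketch assuming a bash shell; the value of ns is illustrative (note that Kubernetes additionally requires lowercase characters):
ns=ocbsf
# 63 characters or less; starts and ends alphanumeric; only alphanumerics or '-'; not the reserved kube- prefix
if [ ${#ns} -le 63 ] && [[ "$ns" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]] && [[ "$ns" != kube-* ]]; then
  kubectl create namespace "$ns"
else
  echo "invalid namespace name: $ns"
fi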
2.2.1.2 Creating Service Account, Role, and RoleBinding
This section is optional and it describes how to manually create a service account, Role, and RoleBinding resources.
Note:
The secret(s) should exist in the same namespace where BSF is deployed. This helps to bind the Kubernetes role with the given service account.
-
Create a Global Service Account.
Create a YAML file bsf-sample-serviceaccount-template.yaml using the following sample code:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-serviceaccount
  namespace: <namespace>
Where,
<helm-release> is a name provided to identify the Helm deployment.
<namespace> is a name provided to identify the Kubernetes namespace of BSF. All the BSF microservices are deployed in this Kubernetes namespace.
-
Define role permissions using roles for the BSF namespace.
Create a YAML file bsf-sample-role-template.yaml using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-role
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - nodes
  - events
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
-
To bind the role defined in the bsf-sample-role-template.yaml file with the service account, create a bsf-sample-rolebinding-template.yaml file using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: <helm-release>-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-role
subjects:
- kind: ServiceAccount
  name: <helm-release>-serviceaccount
  namespace: <namespace>
Note:
If you are installing BSF 22.1.0 using CNE 22.2.0 or later versions, change the apiVersion from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1.
. -
Run the following commands to create resources:
kubectl -n <namespace> create -f bsf-sample-serviceaccount-template.yaml; kubectl -n <namespace> create -f bsf-sample-role-template.yaml; kubectl -n <namespace> create -f bsf-sample-rolebinding-template.yaml
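To confirm that the three resources were created as expected, they can be listed with standard kubectl commands; this optional check assumes the resource names used in the templates above:
kubectl -n <namespace> get serviceaccount <helm-release>-serviceaccount
kubectl -n <namespace> get role <helm-release>-role
kubectl -n <namespace> get rolebinding <helm-release>-rolebinding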
Note:
Once the global service account is added, you must add globalServiceAccountName in the ocbsf_custom_values_23.4.6.yaml file. Otherwise, installation may fail as a result of creating and deleting Custom Resource Definitions (CRD).
Note:
The PodSecurityPolicy kind is required for the Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.
2.2.1.3 Configuring cnDBTier
With cnDBTier, BSF facilitates automatic user creation with its pre-install hook. However, ensure that there is a privileged user on the NDB cluster with privileges similar to the root user. You must have the necessary permissions to allow connections from remote hosts.
Single Site Deployment
-
Log in to MySQL on each of the API nodes of cnDBTier to verify this:
mysql> SELECT host FROM mysql.user WHERE User='<privileged username>';
+------+
| host |
+------+
| %    |
+------+
1 row in set (0.00 sec)
-
If you do not see '%' in the output of the above query, modify this field to allow remote connections to root:
mysql> UPDATE mysql.user SET host='%' WHERE User='<privileged username>';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.06 sec)
Note:
Perform this step on each SQL node.
Multisite Deployment
-
Update the mysqld configuration in the cnDBTier custom-values.yaml file before installing or upgrading BSF:
global:
  ndbconfigurations:
    api:
      auto_increment_increment: 3
      auto_increment_offset: 1
Note:
- Set the auto_increment_increment parameter to the number of sites. For example, if the number of sites is 2, set its value to 2; if the number of sites is 3, set its value to 3.
- Set the auto_increment_offset parameter to the site ID. For example, the site ID for Site 1 is 1, for Site 2 is 2, for Site 3 is 3, and so on.
-
If a fresh installation or upgrade of BSF on cnDBTier is not planned, run the following command to edit the mysqldconfig configmap on all the cnDBTier sites:
kubectl edit configmap mysqldconfig -n <db-site-namespace>
For example:
kubectl edit configmap mysqldconfig -n site1
Note:
Update the auto_increment_increment and auto_increment_offset values as mentioned in the previous step for all sites.
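Once the change is applied, the effective values can be verified from any SQL node with a standard MySQL query; this optional check should show auto_increment_increment equal to the number of sites and auto_increment_offset equal to the site ID:
mysql> SHOW VARIABLES LIKE 'auto_increment%';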
2.2.1.4 Configuring Multiple Site Deployment
In case of a multiple site deployment of BSF, there is only one subscriber database, which is used by each site, and a different configuration database for each site, as each site has its own configuration. To have different configuration databases and the same subscriber database, you need to create secrets accordingly. For more information about creating secrets, see Configuring Kubernetes Secret for Accessing Database.
-
Configure nfInstanceId under the global section of the ocbsf_custom_values_23.4.6.yaml file differently for each BSF site deployed.
Note:
Ensure that the nfInstanceId configuration in the global section is the same as that in the appProfile section of nrf-client.
global:
  # Unique ID to register to NRF. Should be configured differently on multi site deployments for each BSF
  nfInstanceId: &nfInsId 5a7bd676-ceeb-44bb-95e0-f6a55a328b03
nrf-client:
  configmapApplicationConfig:
    profile: |-
      appProfiles=[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocbsf-ingressgateway.mybsf.svc.cluster.local","nfType":"BSF","allowedNfTypes":["NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocbsf-ingressgateway.mybsf.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nbsf-group-id-map","priority":10,"capacity":500}],"udrInfo":{"groupId":"bsf-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.bsfset.5gc.mnc012.mcc345"]}]
-
Configure fullnameOverride under the config-server section to <helm-release-name>-config-server. It should be different for each site deployed.
config-server:
  fullnameOverride: ocudr1-config-server
-
Configure fullnameOverride under the appinfo section to <helm-release-name>-app-info. It should be different for each site deployed. A consolidated sketch of these per-site overrides follows this list.
appinfo:
  fullnameOverride: ocudr1-app-info
- For cnDBTier configurations in multiple site deployment, see Configuring cnDBTier.
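Taken together, a minimal per-site override block might look as follows. This is an illustrative sketch: the ocbsf1 prefix and the UUID are placeholder values that must differ on each site:
global:
  nfInstanceId: &nfInsId 5a7bd676-ceeb-44bb-95e0-f6a55a328b03   # unique per site
config-server:
  fullnameOverride: ocbsf1-config-server                        # unique per site
appinfo:
  fullnameOverride: ocbsf1-app-info                             # unique per site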
2.2.1.5 Creating Service Account, Role, and Role Binding for Helm Test
This section describes the procedure to create service account, role, and role binding resources for Helm Test.
Important:
The steps described in this section are optional, and users may skip them in any of the following scenarios:
- If the user wants service accounts to be created automatically at the time of deploying BSF.
- If a global service account with associated role and role bindings is already configured, or the user has an in-house procedure to create service accounts.
Creating Global Service Account
To create the global service account, create a YAML file bsf-sample-helmtestserviceaccount-template.yaml using the following sample code:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-helmTestServiceAccountName
  namespace: <namespace>
Where,
<helm-release> is a name provided to identify the Helm deployment.
<namespace> is a name provided to identify the Kubernetes namespace of BSF. All the BSF microservices are deployed in this Kubernetes namespace.
Define Role Permissions
To define permissions using roles for the BSF namespace, create a YAML file bsf-sample-role-template.yaml using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-role
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - persistentvolumeclaims
  - services
  - endpoints
  - configmaps
  - events
  - secrets
  - serviceaccounts
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
Creating Role Binding Template
To bind the role defined in the bsf-sample-role-template.yaml file with the service account, create a bsf-sample-rolebinding-template.yaml file using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: <helm-release>-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-role
subjects:
- kind: ServiceAccount
  name: <helm-release>-helmtestserviceaccount
  namespace: <namespace>
Creating Resources
Run the following commands to create resources:
kubectl -n <namespace> create -f bsf-sample-helmtestserviceaccount-template.yaml;
kubectl -n <namespace> create -f bsf-sample-role-template.yaml;
kubectl -n <namespace> create -f bsf-sample-rolebinding-template.yaml
Note:
Once the global service account is added, users must add helmTestServiceAccountName in the ocbsf_custom_values_23.4.6.yaml file. Otherwise, installation may fail as a result of creating and deleting Custom Resource Definitions (CRD).
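Before running helm test, the effective permissions of the service account can be spot-checked with kubectl's built-in authorization query; this optional sketch assumes the account name used in the templates above:
kubectl -n <namespace> auth can-i list pods --as=system:serviceaccount:<namespace>:<helm-release>-helmtestserviceaccount
kubectl -n <namespace> auth can-i get poddisruptionbudgets --as=system:serviceaccount:<namespace>:<helm-release>-helmtestserviceaccount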
2.2.1.6 Configuring Database, Creating Users, and Granting Permissions
This section explains how database administrators can create users and database in a single and multisite deployment.
BSF has five databases (Provisional, State, Release, Leaderpod, and NRF Client) and two users (Application and Privileged).
Note:
- Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is already up and replication channels are enabled.
- While performing a fresh installation, if BSF release is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling BSF.
BSF Databases
For BSF applications, five types of databases are required:
- Provisional Database: Provisional Database contains
configuration information. The same configuration must be done on each site by
the operator. Both Privileged User and Application User have access to this
database. In case of georedundant deployments, each site must have a unique
Provisional Database. BSF sites can access only the information in their unique
Provisional Database.
For example:
- For Site 1: ocbsf_config_server_site1
- For Site 2: ocbsf_config_server_site2
- For Site 3: ocbsf_config_server_site3
- State Database: This database maintains the running state of BSF sites and has information of subscriptions, pending notification triggers, and availability data. It is replicated and the same configuration is maintained by all BSF georedundant sites. Both Privileged User and Application User have access to this database.
- Release Database: This database maintains release version state, and it is used during upgrade and rollback scenarios. Only Privileged User has access to this database.
- Leaderpod Database: This database is used to store leader
and follower if PDB is enabled for microservices that require a single pod to be
up in all the instances. The configuration of this database must be done on each
site. In case of georedundant deployments, each site must have a unique
Leaderpod database.
For example:
- For Site 1: ocbsf_leaderPodDb_site1
- For Site 2: ocbsf_leaderPodDb_site2
- For Site 3: ocbsf_leaderPodDb_site3
Note:
This database is used only when nrf-client-nfmanagement.enablePDBSupport is set to true in the ocbsf_custom_values_23.4.6.yaml file.
. -
NRF Client Database: This database is used to support NRF Client features. Only Privileged User has access to this database and it is used only when the caching feature is enabled. In case of georedundant deployments, each site must have a unique NRF Client database and its configuration must be done on each site.
For example:
- For Site 1: ocbsf_nrf_client_site1
- For Site 2: ocbsf_nrf_client_site2
- For Site 3: ocbsf_nrf_client_site3
BSF Users
There are two types of BSF database users with different sets of permissions:
- Privileged User: This user has a complete set of
permissions. This user can perform create, alter, or drop operations on tables
to perform install, upgrade, rollback, or delete operations.
Note:
In the examples given in this document, the Privileged User's username is 'bsfprivilegedusr' and password is 'bsfprivilegedpasswd'.
'. - Application User: This user has a limited set of permissions
and is used by BSF application to handle service operations. This user can
insert, update, get, or remove the records. This user will not be able to
create, alter, or drop the database or tables.
Note:
In the examples given in this document, the Application User's username is 'bsfusr' and password is 'bsfpasswd'.
Default Databases
BSF microservices use the MySQL database to store the configuration and run time data.
Before deploying BSF, make sure that the MySQL user and databases are created.
Each microservice has a default database assigned to it.
The following table lists the default database names and applicable deployment modes for various databases that need to be configured while deploying BSF.
Table 2-6 Default Database Names for BSF Microservices
Service Name | Default Database Name | Database Type
---|---|---
Config Server | ocbsf_config_server | Provisional
CM Service | ocbsf_cmservice | Provisional
BSF Service | ocpm_bsf | State
Audit Service | ocbsf_audit_service | Provisional
NRF Client | ocbsf_nrf_client | Provisional
In addition, create the ocbsf_release (default name) database to store and manipulate the release versions of BSF services during the install, upgrade, and rollback procedures. This database name is specified in the releaseDbName parameter in the ocbsf_custom_values_23.4.6.yaml file.
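As an optional check before proceeding, the following standard MySQL queries list which of the default databases already exist on an SQL node; the ocbsf% and ocpm% patterns are assumptions that match the default names above:
mysql> SHOW DATABASES LIKE 'ocbsf%';
mysql> SHOW DATABASES LIKE 'ocpm%';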
2.2.1.6.1 Single Site
This section explains how a database administrator can create database and users for a single site deployment.
Configuring Database
- Log in to the server where the SSH keys are stored and have permission to access the SQL nodes of NDB cluster.
- Connect to the SQL nodes.
- Log in to the MySQL prompt using root permission, or log in as a
user who has the permission to create users as per conditions explained in the
next step.
Example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command may vary from system to system depending on the path for the MySQL binary, the root user, and the root password. After running this command, enter the password specific to the user mentioned in the command.
-
Run the following command to check if both the BSF users already exist:
$ SELECT User FROM mysql.user;
If the users already exist, go to the next step. Else, create the respective new user or users as follows:
- Run the following command to create a new Privileged User:
$ CREATE USER '<BSF Privileged-User Name>'@'%' IDENTIFIED BY '<BSF Privileged-User Password>';
Example:
CREATE USER 'bsfprivilegedusr'@'%' IDENTIFIED BY 'bsfprivilegedpasswd';
- Run the following command to create a new Application User:
$ CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<Application User Password>';
Example:
CREATE USER 'bsfusr'@'%' IDENTIFIED BY 'bsfpasswd';
-
Run the following command to check whether any of the BSF databases already exists:
$ show databases;
- If any of the previously configured databases is already present, remove it. Otherwise, skip this step.
Run the following command to remove a preconfigured BSF database:
$ DROP DATABASE IF EXISTS <DB Name>;
Example:
DROP DATABASE IF EXISTS ocbsf_audit_service;
-
Run the following command to create a new BSF database if it does not exist, or after dropping an existing database:
$ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;
For example:
CREATE DATABASE IF NOT EXISTS ocbsf_config_server CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_release CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_cmservice CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_audit_service CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocpm_bsf CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb CHARACTER SET utf8;
Note:
Ensure that you use the same database names while creating databases that you have used in the global parameters of the ocbsf_custom_values_23.4.6.yaml file. Following is an example of the BSF database names configured in the ocbsf_custom_values_23.4.6.yaml file:
global:
  releaseDbName: &releaseDbName 'ocbsf_release'
  nrfClientDbName: 'ocbsf_nrf_client'
bsf-management-service:
  envMysqlDatabase: 'ocpm_bsf'
config-server:
  envMysqlDatabase: *configServerDB
cm-service:
  envMysqlDatabase: occnp_cmservice
nrf-client-nfmanagement:
  dbConfig:
    leaderPodDbName: ocbsf_leaderPodDb
audit-service:
  envMysqlDatabase: ocbsf_audit_service
BSF follows the best database practices by keeping the idle connection timeout for client applications lower than the idle connection timeout of the database server. The default idle connection timeout value of BSF applications is 540 seconds (9 minutes). This value remains unchanged.
Creating Users and Granting Permissions
Note:
Creation of a database is optional if the grant is scoped to all databases, that is, the database name is not mentioned in the grant command.
-
Run the following set of commands to grant all the necessary permissions:
$ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO `<BSF Privileged-User Name>`@`%`;
In the following example, "bsfprivilegedusr" is used as the username and "bsfprivilegedpasswd" as the password. Here, all permissions are granted to "bsfprivilegedusr".
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_release.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_bsf.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client`.* TO `bsfprivilegedusr`@`%`;
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb.* TO 'bsfprivilegedusr'@'%';
FLUSH PRIVILEGES;
- Run the following command to grant
NDB_STORED_USER permissions to the Privileged User:
GRANT NDB_STORED_USER ON *.* TO 'bsfprivilegedusr'@'%';
-
Grant all the necessary permissions by running the following set of commands.
Note:
The database name is specified in the envMysqlDatabase parameter for the respective services in the ocbsf_custom_values_23.4.6.yaml file. It is recommended to use a unique database name when there are multiple instances of BSF deployed in the network, as they share the same data tier (MySQL cluster).
To grant permissions:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';
In the following example, "bsfusr" is used as the username and "bsfpasswd" as the password. Here, all permissions are granted to "bsfusr".
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_release.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_bsf.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client`.* TO `bsfusr`@`%`;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfusr'@'%';
- Run the following command to grant
NDB_STORED_USER permissions to the Application User:
GRANT NDB_STORED_USER ON *.* TO 'bsfusr'@'%';
-
Run the following commands to verify that the privileged or application users have all the required permissions:
show grants for username;
where username is the name of the privileged or application user.
Example:
show grants for bsfprivilegedusr;
show grants for bsfusr;
-
Run the following command to flush privileges:
FLUSH PRIVILEGES;
-
Exit from the database and logout from the MySQL node.
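As a final optional check, confirm that the Application User can connect and see its databases from a host with access to the SQL nodes; the address 127.0.0.1 is a placeholder for your SQL node:
mysql -h 127.0.0.1 -u bsfusr -p -e "SHOW DATABASES;"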
2.2.1.6.2 Multisite
This section explains how a database administrator can create the databases and users for a multisite deployment.
For a BSF georedundant deployment, the listed database names must be unique for each site. For the remaining databases, the database name must be the same across all the sites.
It is recommended to use unique database names when multiple instances of BSF use and share a single cnDBTier (MySQL cluster) in the network. To maintain unique database names for all the NF instances in the network, a good practice is to add the deployment name of the BSF instance as a prefix or suffix to the database name. However, you can use any prefix or suffix to create a unique database name. For example, if the BSF deployment nfInstance value is "site1", then the BSF service database can be named "ocpm_bsf_site1".
Note:
Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
Table 2-7 Example: BSF Unique Database Names for Two-Site and Three-Site Deployments
Two Site Database Names | Three Site Database Names
---|---
ocbsf_config_server_site1, ocbsf_config_server_site2 | ocbsf_config_server_site1, ocbsf_config_server_site2, ocbsf_config_server_site3
ocbsf_cmservice_site1, ocbsf_cmservice_site2 | ocbsf_cmservice_site1, ocbsf_cmservice_site2, ocbsf_cmservice_site3
ocbsf_commonconfig_site1, ocbsf_commonconfig_site2 | ocbsf_commonconfig_site1, ocbsf_commonconfig_site2, ocbsf_commonconfig_site3
ocbsf_leaderPodDb_site1, ocbsf_leaderPodDb_site2 | ocbsf_leaderPodDb_site1, ocbsf_leaderPodDb_site2, ocbsf_leaderPodDb_site3
ocbsf_overload_site1, ocbsf_overload_site2 | ocbsf_overload_site1, ocbsf_overload_site2, ocbsf_overload_site3
ocbsf_audit_service_site1, ocbsf_audit_service_site2 | ocbsf_audit_service_site1, ocbsf_audit_service_site2, ocbsf_audit_service_site3
ocbsf_nrf_client_site1, ocbsf_nrf_client_site2 | ocbsf_nrf_client_site1, ocbsf_nrf_client_site2, ocbsf_nrf_client_site3
Configuring Database
-
Log in to the server where the SSH keys are stored and have permission to access the SQL nodes of NDB cluster.
-
Connect to the SQL nodes.
-
Log in to the database either as a root user or as a user who has the permission to create users as per conditions explained in the next step.
Example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command may vary from system to system depending on the path for the MySQL binary, the root user, and the root password. After running this command, enter the password specific to the user mentioned in the command.
- Run the following command to check if both the BSF users already exist:
$ SELECT User FROM mysql.user;
If the users already exist, go to the next step. Otherwise, create the respective new user or users as follows:
- Run the following command to create a new Privileged User:
$ CREATE USER '<BSF Privileged-User Name>'@'%' IDENTIFIED BY '<BSF Privileged-User Password>';
Example:
CREATE USER 'bsfprivilegedusr'@'%' IDENTIFIED BY 'bsfprivilegedpasswd';
- Run the following command to create a new Application User:
$ CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<Application User Password>';
Example:
CREATE USER 'bsfusr'@'%' IDENTIFIED BY 'bsfpasswd';
Note:
You must create both the users on all the SQL nodes for all georedundant sites.
-
Run the following command to check whether any of the BSF databases already exists:
$ show databases;
- If any of the previously configured databases is already present, remove it. Otherwise, skip this step.
Caution:
If you have georedundant sites configured, removing the database from any one of the SQL nodes of any cluster removes the database from all georedundant sites.
Run the following command to remove a preconfigured BSF database:
$ DROP DATABASE IF EXISTS <DB Name>;
Example:
DROP DATABASE IF EXISTS ocbsf_audit_service;
-
Run the following command to create a new BSF database if it does not exist, or after dropping an existing database:
$ CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;
For example, the following statements create all the databases required for a BSF installation in site1:
CREATE DATABASE IF NOT EXISTS ocbsf_config_server_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_release_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_cmservice_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_audit_service_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocpm_bsf_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb_site1 CHARACTER SET utf8;
For example, the following statements create all the databases required for a BSF installation in site2:
CREATE DATABASE IF NOT EXISTS ocbsf_config_server_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_release_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_commonconfig_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_cmservice_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_audit_service_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocpm_bsf_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_nrf_client_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS ocbsf_leaderPodDb_site2 CHARACTER SET utf8;
Note:
Ensure that you use the same database names while creating databases that you have used in the global parameters of the ocbsf_custom_values_23.4.6.yaml files.
BSF follows the best database practices by keeping the idle connection timeout for client applications lower than the idle connection timeout of the database server. The default idle connection timeout value of BSF applications is 540 seconds (9 minutes). This value remains unchanged.
Granting Permissions to Users on the Database
Note:
- Run this step on all the SQL nodes for each BSF standalone site in a georedundant deployment.
- Creation of a database is optional if the grant is scoped to all databases, that is, the database name is not mentioned in the grant command.
-
Run the following command to grant the Privileged User permissions on all BSF databases:
$ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>_<site_name>.* TO `<BSF Privileged-User Name>`@`%`;
In the following example, "bsfprivilegedusr" is used as the username and "bsfprivilegedpasswd" as the password. Here, all permissions are granted to "bsfprivilegedusr".
Example for site1:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_release_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_bsf_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice_site1.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client_site1`.* TO `bsfprivilegedusr`@`%`;
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb_site1.* TO 'bsfprivilegedusr'@'%';
FLUSH PRIVILEGES;
Example for site2:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_config_server_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_release_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocpm_bsf_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_commonconfig_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_audit_service_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON ocbsf_cmservice_site2.* TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'bsfprivilegedusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client_site2`.* TO `bsfprivilegedusr`@`%`;
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON ocbsf_leaderPodDb_site2.* TO 'bsfprivilegedusr'@'%';
FLUSH PRIVILEGES;
- Run the following command to grant NDB_STORED_USER permissions to
the Privileged User:
GRANT NDB_STORED_USER ON *.* TO 'bsfprivilegedusr'@'%';
-
Run the following command to grant the Application User permissions on all BSF databases:
$ GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>_<site_name>.* TO '<Application User Name>'@'%';
In the following example, "bsfusr" is used as the username and "bsfpasswd" as the password. Here, all permissions are granted to "bsfusr".
Example for site1:
GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_config_server_site1.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_release_site1.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocpm_bsf_site1.* TO 'bsfusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_commonconfig_site1.* TO 'bsfusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_audit_service_site1.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client_site1`.* TO `bsfusr`@`%`;
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_cmservice_site1.* TO 'bsfusr'@'%';
FLUSH PRIVILEGES;
Example for site2:
GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_config_server_site2.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocbsf_release_site2.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON ocpm_bsf_site2.* TO 'bsfusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_commonconfig_site2.* TO 'bsfusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_audit_service_site2.* TO 'bsfusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `ocbsf_nrf_client_site2`.* TO `bsfusr`@`%`;
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON ocbsf_cmservice_site2.* TO 'bsfusr'@'%';
FLUSH PRIVILEGES;
- Run the following command to grant NDB_STORED_USER permissions to the
Application User:
GRANT NDB_STORED_USER ON *.* TO 'bsfusr'@'%';
-
Run the following command to verify that the privileged or application users have all the required permissions:
show grants for username;
where username is the name of the privileged or application user.
For example:
show grants for bsfprivilegedusr;
show grants for bsfusr;
-
Run the following command to flush privileges:
FLUSH PRIVILEGES;
- Exit from MySQL prompt and SQL nodes.
2.2.1.7 Configuring Kubernetes Secret for Accessing Database
This section explains how to configure Kubernetes secrets for accessing BSF database.
2.2.1.7.1 Creating and Updating Secret for Privileged Database User
This section explains how to create and update Kubernetes secret for Privileged User to access the database.
- Run the following command to create the Kubernetes secret:
kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL User database password> -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Privileged MySQL database username> is the username of the Privileged User.
<Privileged MySQL User database password> is the password of the Privileged User.
<Namespace> is the namespace of the BSF deployment.
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secrets in the future.
For example:
kubectl create secret generic ocbsf-privileged-db-pass --from-literal=mysql-username=bsfprivilegedusr --from-literal=mysql-password=bsfprivilegedpasswd -n ocbsf
- Run the following command to verify the created secret:
$ kubectl describe secret <Privileged User secret name> -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Namespace> is the namespace of the BSF deployment.
For example:
kubectl describe secret ocbsf-privileged-db-pass -n ocbsf
Sample output:
Name:         ocbsf-privileged-db-pass
Namespace:    ocbsf
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
mysql-password:  10 bytes
mysql-username:  17 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of BSF deployment>". After the update, the command is as follows:
$ kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Privileged MySQL database username> is the username of the Privileged User.
<Privileged MySQL User database password> is the password of the Privileged User.
<Namespace> is the namespace of the BSF deployment.
- Run the updated command. The following message is displayed:
secret/<Privileged User secret name> replaced
Where,
<Privileged User secret name> is the updated secret name of the Privileged User.
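To double-check the stored values after an update, the secret keys can be decoded with standard kubectl output formatting; this optional sketch assumes the secret name used in the example above:
kubectl get secret ocbsf-privileged-db-pass -n ocbsf -o jsonpath='{.data.mysql-username}' | base64 -d
kubectl get secret ocbsf-privileged-db-pass -n ocbsf -o jsonpath='{.data.mysql-password}' | base64 -d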
2.2.1.7.2 Creating and Updating Secret for Application Database User
This section explains how to create and update Kubernetes secret for application user to access the database.
- Run the following command to create the Kubernetes secret:
$ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL User database password> -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Application MySQL database username> is the username of the Application User.
<Application MySQL User database password> is the password of the Application User.
<Namespace> is the namespace of the BSF deployment.
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secrets in the future.
For example:
kubectl create secret generic ocbsf-db-pass --from-literal=mysql-username=bsfusr --from-literal=mysql-password=bsfpasswd -n ocbsf
- Run the following command to verify the created secret:
$ kubectl describe secret <Application User secret name> -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Namespace> is the namespace of the BSF deployment.
For example:
kubectl describe secret ocbsf-db-pass -n ocbsf
Sample output:
Name:         ocbsf-db-pass
Namespace:    ocbsf
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
mysql-password:  10 bytes
mysql-username:  17 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of BSF deployment>". After the update, the command is as follows:
$ kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL User database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Application MySQL database username> is the username of the Application User.
<Application MySQL User database password> is the password of the Application User.
<Namespace> is the namespace of the BSF deployment.
- Run the updated command. The following message appears:
secret/<Application User secret name> replaced
Where,
<Application User secret name> is the updated secret name of the Application User.
2.2.1.8 Configuring Secrets for Enabling HTTPS
This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Ingress Gateway.
This step is optional. It is required only when SSL settings need to be enabled on Ingress Gateway and Egress Gateway microservices of BSF.
2.2.1.8.1 Configuring HTTPS at Ingress Gateway
This section explains the steps to configure secrets for enabling HTTPS in Ingress Gateway. This procedure must be performed before deploying CNC BSF.
Note:
The passwords for TrustStore and KeyStore are stored in the respective password files mentioned below.
- ECDSA private key and CA signed certificate of OCBSF, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of OCBSF, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
Note:
The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
2.2.1.8.1.1 Creating Secrets for Enabling HTTPS in Ingress Gateway
This section provides the steps to create secrets for enabling HTTPS in ingress gateway. Perform this procedure before deploying BSF.
- Run the following command to create the secret:
$ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCBSF deployment>
Note:
Note down the command used during the creation of the secret. Use the command for updating the secrets in the future.
Example: The names used below are the same as provided in the ocbsf_custom_values_23.4.6.yaml file in the BSF deployment.
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocbsf
Note:
It is recommended to use the same secret name as mentioned in the example. If you change <ocingress-secret-name>, then update the k8SecretName parameter under the ingressgateway attributes section in the ocbsf_custom_values_23.4.6.yaml file.
- Run the following command to verify the created secret:
$ kubectl describe secret <ocingress-secret-name> -n <Namespace of OCBSF deployment>
Example:
$ kubectl describe secret ocingress-secret -n ocbsf
2.2.1.8.1.2 Updating Secrets for Enabling HTTPS in Ingress Gateway
This section explains how to update the secrets.
- Copy the exact command used during the creation of the secret.
- Update the same command with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace of OCBSF deployment>". The updated command is as follows:
$ kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCBSF deployment> | kubectl replace -f - -n <Namespace of OCBSF deployment>
Example:
$ kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocbsf | kubectl replace -f - -n ocbsf
Note:
The names used in the aforementioned command must be the same as the names provided in the ocbsf_custom_values_23.4.6.yaml file in the BSF deployment.
- Run the updated command. After the secret update is complete, the following message appears:
secret/<ocingress-secret> replaced
2.2.1.8.1.3 Enabling HTTPS at Ingress Gateway
- Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the ocbsf_custom_values_23.4.6.yaml file. For more information about the enableIncomingHttps parameter, see the global parameters section of the ocbsf_custom_values_23.4.6.yaml file.
- Configure the following details in the ssl section under ingressgateway attributes, in case you have changed the attributes while creating the secret:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
ingress-gateway:
  # ---- HTTPS Configuration - BEGIN ----
  enableIncomingHttps: false
  service:
    ssl:
      privateKey:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        rsa:
          fileName: rsa_private_key_pkcs1.pem
      certificate:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        rsa:
          fileName: ocegress.cer
      caBundle:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: key.txt
      trustStorePassword:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: trust.txt
- Save the ocbsf_custom_values_23.4.6.yaml file.
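After the deployment is updated with enableIncomingHttps set to true, the TLS handshake can be verified from any client with network access to the gateway; the host and port below are placeholders for your Ingress Gateway service address:
openssl s_client -connect <ingress-gateway-host>:<https-port> -showcerts </dev/null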
2.2.1.8.2 Configuring HTTPS at Egress Gateway
This section explains the steps to configure secrets for enabling HTTPS in Egress Gateway. This procedure must be performed before deploying OCBSF.
Note:
The passwords for TrustStore and KeyStore are stored in the respective password files mentioned below.
- ECDSA private key and CA signed certificate of OCBSF, if initialAlgorithm is ES256
- RSA private key and CA signed certificate of OCBSF, if initialAlgorithm is RS256
- TrustStore password file
- KeyStore password file
Note:
The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
2.2.1.8.2.1 Creating Secrets for Enabling HTTPS in Egress Gateway
This section provides information about how to create a secret with HTTPS-related details. Perform this procedure before enabling HTTPS in OCBSF Egress Gateway.
- Run the following command to create the secret:
$ kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCBSF deployment>
Note:
Note down the command used during the creation of the secret. Use the command for updating the secrets in the future.
Example:
$ kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n ocbsf
Note:
It is recommended to use the same secret name as mentioned in the example. If you change <ocegress-secret-name>, then update the k8SecretName parameter under the egressgateway attributes section in the ocbsf_custom_values_23.4.6.yaml file.
file. - Run the following command to verify the secret created:
$ kubectl describe secret <ocegress-secret-name> -n <Namespace of OCBSF deployment>
Example:
$ kubectl describe secret ocegress-secret -n ocbsf
2.2.1.8.2.2 Updating Secrets for Enabling HTTPS in Egress Gateway
This section explains how to update the secret with related details.
- Copy the exact command used while creating the secret in the previous section.
- Update the command by adding the string "--dry-run -o yaml" and piping the output to "kubectl replace -f - -n <Namespace of OCBSF deployment>".
The updated command is as follows:
kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace of OCBSF Egress Gateway secret> | kubectl replace -f - -n <Namespace of OCBSF deployment>
Example:
$ kubectl create secret generic egress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n ocbsf | kubectl replace -f - -n ocbsf
Note:
The names used in the aforementioned command must be the same as the names provided in the ocbsf_custom_values_23.4.6.yaml file in the OCBSF deployment. - Run the updated command.
After the secret update is complete, the following message appears:
secret/<ocegress-secret> replaced
2.2.1.8.2.3 Enabling HTTPS at Egress Gateway
- Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the ocbsf_custom_values_23.4.6.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway section.
- Configure the following details in the ssl section under egressgateway attributes, in case you have changed the attributes while creating the secret:
  - Kubernetes namespace
  - Kubernetes secret name holding the certificate details
  - Certificate information
egress-gateway:
  # Enabling it for egress https requests
  enableOutgoingHttps: false
  service:
    ssl:
      privateKey:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        rsa:
          fileName: rsa_private_key_pkcs1.pem
        ecdsa:
          fileName: ssl_ecdsa_private_key.pem
      certificate:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        rsa:
          fileName: ocegress.cer
        ecdsa:
          fileName: ssl_ecdsa_certificate.crt
      caBundle:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: key.txt
      trustStorePassword:
        k8SecretName: ocbsf-gateway-secret
        k8NameSpace: ocbsf
        fileName: trust.txt
- Save the ocbsf_custom_values_23.4.6.yaml file.
2.2.1.9 Configuring Secret for Enabling Access Token Validation
This section explains how to configure a secret for enabling access token service.
2.2.1.9.1 Generating KeyPairs for NRF Instances
Important:
It is at the user's discretion to create the private keys and certificates; it is not in the scope of BSF. This section lists only samples to create KeyPairs.
Note:
Here, it is assumed that there are only two NRF instances with the following instance IDs:
- NRF Instance 2: 601aed2c-e314-46a7-a3e6-f18ca02faacc
Example Command to generate KeyPair for NRF Instance 1
Generate a 2048-bit RSA private key
$ openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)
$ openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output the public key portion in PEM format (so Java can read it)
$ openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
Output the ECDSA private key portion in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name secp521r1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
#Replace the placeholder "{nrfInstanceId}" with NRF Instance 1's UUID while running the command.
Example:
$ openssl req -x509 -new -out 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -newkey ec:<(openssl ecparam -name secp521r1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
NRF1 (Private key: ecdsa_private_key.key, NRF Public Certificate: 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt)
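Before loading the certificate into a Kubernetes secret, you can verify that it carries the expected CN and subjectAltName from reqs.conf. For example:
# Print the subject, validity dates, and SAN of the NRF Instance 1 certificate.
openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -noout -subject -dates
openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -noout -text \
  | grep -A1 'Subject Alternative Name'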
Example Command to generate KeyPair for NRF Instance 2
Generate a 2048-bit RSA private key
$ openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)
$ openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output the public key portion in PEM format (so Java can read it)
$ openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
Output the ECDSA private key portion in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
#Replace the placeholder "{nrfInstanceId}" with NRF Instance 2's UUID while running the command.
Example:
$ openssl req -x509 -new -out 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
NRF2 (Private key: ecdsa_private_key.key, NRF Public Certificate: 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt)
2.2.1.9.2 Enabling and Configuring Access Token
To enable access token validation, configure both Helm-based and REST-based configurations on Ingress Gateway.
Creating Namespace for Secrets
nrfPublicKeyKubeSecret: nrfpublickeysecret
#The name of the secret must be the same as given in the Helm configuration of CNC BSF.
nrfPublicKeyKubeNamespace: ocbsf
#The secret must be created in the same namespace as configured here.
The namespace is used as an input to create Kubernetes secret for private keys and public certificates.
Creating Kubernetes Secret for NRF Public Key
kubectl create secret generic nrfpublickeysecret --from-file=./664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt --from-file=./601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -n ocbsf
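As with the gateway secrets, you can verify that both NRF public certificates were loaded as data keys in the secret. For example:
# Both {nrfInstanceId}_ES256.crt files should appear under the Data section.
kubectl describe secret nrfpublickeysecret -n ocbsf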
Configuring Helm to Enable Access Token
# ----OAUTH CONFIGURATION - BEGIN ----
oauthValidatorEnabled: true
nfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11
allowedClockSkewSeconds: 0
nrfPublicKeyKubeSecret: nrfpublickeysecret # Secret name
nrfPublicKeyKubeNamespace: ocbsf # Namespace of BSF
validationType: strict
producerPlmnMNC: 123
producerPlmnMCC: 456
producerScope: nsmf-pdusession,nsmf-event-exposure
nfType: PCF
For more information on parameters and their supported values, see OAUTH Configuration.
Updating configuration in database
This URI can be used to update or add the OAuth configuration that is used for validating the token sent in requests to the Ingress Gateway. By default, instanceIdList and keyIdList are null and the validation mode is INSTANCEID_ONLY. This configuration is applicable only when the OAuth feature is enabled through the Helm chart.
Table 2-8 Configuring oauth Validator
URI | |
Example: | |
Operations Supported | PUT |
Sample JSON | |
Validating OAuth Token
The following curl command sends a request to create PCF Bindings with a valid OAuth header:
curl -X POST --http2-prior-knowledge -i "http://10.75.233.75:32564/nbsf-management/v1/pcfBindings" -H "Content-Type: application/json" -H Authorization:'Bearer eyJ0eXAiOiJKV1QiLCJraWQiOiI2MDFhZWQyYy1lMzE0LTQ2YTctYTNlNi1mMThjYTAyZmFheHgiLCJhbGciOiJFUzI1NiJ9.eyJpc3MiOiI2NjRiMzQ0ZS03NDI5LTRjOGYtYTVkMi1lN2RmYWFhYmE0MDciLCJzdWIiOiJmZTdkOTkyYi0wNTQxLTRjN2QtYWI4NC1jNmQ3MGIxYjAxYjEiLCJhdWQiOiJTTUYiLCJzY29wZSI6Im5zbWYtcGR1c2Vzc2lvbiIsImV4cCI6MTYxNzM1NzkzN30.oGAYtR3FnD33xOCmtUPKBEA5RMTNvkfDqaK46ZEnnZvgN5Cyfgvlr85Zzdpo2lNISADBgDumD_m5xHJF8baNJQ' -d '{
"supi": "imsi-310410000000015",
"gpsi": "5084943708",
"ipv4Addr": "10.10.10.10",
"dnn": "internet",
"pcfFqdn": "pcf-smservice.oracle.com",
"pcfDiamHost": "pcf-smservice.oracle.com",
"pcfDiamRealm": "oracle.com",
"snssai": {
"sst": 11,
"sd": "abc123"
}
}'
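If a request is rejected with an access token error, it can help to decode the token locally and confirm that the kid in the header and the iss claim match one of the configured NRF instance IDs. A minimal sketch; the decode_jwt helper is illustrative and not part of BSF:
# JWTs are base64url encoded; map the URL-safe alphabet back to standard
# base64 and restore padding before decoding the header and payload.
decode_jwt() {
  local part s
  for part in 1 2; do
    s=$(printf '%s' "$1" | cut -d '.' -f "$part" | tr '_-' '/+')
    while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
    printf '%s' "$s" | base64 -d
    echo
  done
}
# Substitute the full token for the truncated placeholder below.
decode_jwt 'eyJ0eXAiOiJKV1QiLCJraWQiOi...'   # prints the JWT header and payload as JSON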
2.2.1.10 Configuring BSF to Support Aspen Service Mesh
BSF leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external Transport Layer Security (TLS) communication. The service mesh integration enables inter-NF communication and allows the API gateway to work with the service mesh. The service mesh integration supports the services by deploying a special sidecar proxy in each pod to intercept all network communication between microservices.
Supported ASM versions: 1.9.1 and 1.11.x
For ASM installation and configuration details, see the official Aspen Service Mesh website.
The Aspen Service Mesh (ASM) configurations are classified into:
- Control Plane: It involves adding labels or annotations to inject a sidecar.
- Data Plane: It helps in traffic management, such as handling NF call flows, by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as API version changes between releases. Data Plane configuration is done manually depending on each NF requirement and ASM deployment.
Data Plane Configuration
The Data Plane configuration consists of the following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
- Peer Authentication (PA)
- Authorization Policy (AP)
- Virtual Service (VS)
- requestAuthentication
Note:
Use Helm charts to add or delete the CRDs that you may require due to ASM upgrades to configure features across different releases.
The Data Plane configuration is applicable in the following scenarios:
- NF to NF Communication: During NF to NF communication where the sidecar is injected on both NFs, you need an SE and a DR to communicate with the other NF; otherwise, the sidecar rejects the communication. All egress communications of NFs must have an entry for SE and DR, and the same must be configured.
Note:
For out-of-cluster communication, you must configure the core DNS with the producer NF endpoint to enable access. - Kube-api-server: A few NF flows may require access to the Kubernetes API server. The ASM proxy (mTLS enabled) may block this. As per the F5 recommendation, the NF must add an SE for the Kubernetes API server in its namespace.
- Envoy Filters: When sidecars rewrite a header with its default value, the headers from back-end services are lost. To overcome this situation, Envoy Filters help pass the headers from back-end services as is.
ASM Configuration File
A sample ocbsf_custom_values_servicemesh_config_23.4.6.yaml file is available in the Custom_Templates folder. For downloading the file, see Customizing BSF.
2.2.1.10.1 Predeployment Configurations
This section explains the pre-deployment configuration procedure to install Cloud Native Core Binding Support Function (BSF) with ASM support.
Step 1 - Creating BSF Namespace
$ kubectl label --overwrite namespace <required namespace> istio-injection=enabled
Example:
$ kubectl label --overwrite namespace ocbsf istio-injection=enabled
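To confirm that the label was applied, run:
# The namespace should now list istio-injection=enabled among its labels.
kubectl get namespace ocbsf --show-labels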
Step 2 - The operator should have special capabilities at the service account level to start the pre-install init container.
Example of some special capabilities:
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
allowedCapabilities:
- NET_ADMIN
- NET_RAW
runAsUser:
rule: RunAsAny
2.2.1.10.2 Deploying BSF With ASM
Customize the ocbsf_custom_values_servicemesh_config_23.4.6.yaml file
To customize the ocbsf_custom_values_servicemesh_config_23.4.6.yaml file, uncomment and modify the parameters as per your requirements.
A sample ocbsf_custom_values_servicemesh_config_23.4.6.yaml file is available in the Custom_Templates folder. For downloading the file, see Customizing BSF.
Note:
When BSF is deployed with ASM and cnDBTier is also installed in the same namespace or cluster, you can skip installing service entries and destination rules.
#serviceEntries:
# - hosts: |-
# [ "mysql-connectivity-service.cndbtier.svc.cluster-bsfnrf" ]
# exportTo: |-
# [ "." ]
# location: MESH_EXTERNAL
# ports:
# - number: 3306
# name: mysql
# protocol: MySQL
# name: ocbsf-to-mysql-external-se-test
# - hosts: |-
# [ "*.cluster-bsfnrf" ]
# exportTo: |-
# [ "." ]
# location: MESH_EXTERNAL
# ports:
# - number: 8090
# name: http2-8090
# protocol: TCP
# - number: 80
# name: http2-80
# protocol: TCP
# name: ocbsf-to-other-nf-se-test
# - hosts: |-
# [ "kubernetes.default.svc.cluster-bsfnrf" ]
# exportTo: |-
# [ "." ]
# location: MESH_INTERNAL
# addresses: |-
# [ "192.168.200.36" ]
# ports:
# - number: 443
# name: https
# protocol: HTTPS
# name: nf-to-kube-api-server
#destinationRules:
# - host: "*.cluster-bsfnrf"
# mode: DISABLE
# name: ocbsf-to-other-nf-dr-test
# sbitimers: true
# tcpConnectTimeout: "750ms"
# tcpKeepAliveProbes: 3
# tcpKeepAliveTime: "1500ms"
# tcpKeepAliveInterval: "1s"
# - host: mysql-connectivity-service.ocbsf-db.svc.cluster.local
# mode: DISABLE
# name: mysql-occne
# sbitimers: false
For Istio version 1.6.x
#envoyFilters_v_16x:
# - name: set-xfcc-bsf
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v2.HttpConnectionManager
# configkey: forward_client_cert_details
# configvalue: ALWAYS_FORWARD_ONLY
# - name: serverheaderfilter
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v2.HttpConnectionManager
# configkey: server_header_transformation
# configvalue: PASS_THROUGH
For Istio version 1.9.x and 1.11.x
#envoyFilters_v_19x_111x:
# - name: set-xfcc-bsf
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
# configkey: forward_client_cert_details
# configvalue: ALWAYS_FORWARD_ONLY
# - name: serverheaderfilter
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
# configkey: server_header_transformation
# configvalue: PASS_THROUGH
# - name: custom-http-stream
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
# configkey: server_header_transformation
# configvalue: PASS_THROUGH
# stream_idle_timeout: "6000ms"
# max_stream_duration: "7000ms"
# patchContext: SIDECAR_OUTBOUND
# networkFilter_listener_port: 8000
# - name: custom-tcpsocket-timeout
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: FILTER_CHAIN
# patchContext: SIDECAR_INBOUND
# operation: MERGE
# transport_socket_connect_timeout: "750ms"
# filterChain_listener_port: 8000
# - name: custom-http-route
# labelselector: "app.kubernetes.io/instance: ocbsf"
# applyTo: HTTP_ROUTE
# patchContext: SIDECAR_OUTBOUND
# operation: MERGE
# route_idle_timeout: "6000ms"
# route_max_stream_duration: "7000ms"
# httpRoute_routeConfiguration_port: 8000
# vhostname: "bsf-ocbsf-policy-ds.ocbsf.svc.cluster:8000"
For Istio version 1.9.x and 1.11.x
Note:
Istio 1.9.x and 1.11.x support the same template for envoyFilters configurations.
envoyFilters_v_19x_111x:
- name: xfccfilter
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: forward_client_cert_details
configvalue: ALWAYS_FORWARD_ONLY
- name: serverheaderfilter
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: server_header_transformation
configvalue: PASS_THROUGH
- name: custom-http-stream
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: server_header_transformation
configvalue: PASS_THROUGH
stream_idle_timeout: "6000ms"
max_stream_duration: "7000ms"
patchContext: SIDECAR_OUTBOUND
networkFilter_listener_port: 8000
- name: custom-tcpsocket-timeout
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: FILTER_CHAIN
patchContext: SIDECAR_INBOUND
operation: MERGE
transport_socket_connect_timeout: "750ms"
filterChain_listener_port: 8000
- name: custom-http-route
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: HTTP_ROUTE
patchContext: SIDECAR_OUTBOUND
operation: MERGE
route_idle_timeout: "6000ms"
route_max_stream_duration: "7000ms"
httpRoute_routeConfiguration_port: 8000
vhostname: "ocbsf.svc.cluster:8000"
- name: logicaldnscluster
labelselector: "app.kubernetes.io/instance: ocbsf"
configpatch:
- applyTo: CLUSTER
clusterservice: rchltxekvzwcamf-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
operation: MERGE
logicaldns: LOGICAL_DNS
- applyTo: CLUSTER
clusterservice: rchltxekvzwcamd-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
operation: MERGE
logicaldns: LOGICAL_DNS
Note:
The parameter vhostname is mandatory when applyTo is HTTP_ROUTE.
Note:
Depending on the Istio version, update the correct value of the envoy filters in the following line:
{{- range .Values.envoyFilters_v_19x_111x }}
#peerAuthentication:
# - name: default
# tlsmode: PERMISSIVE
# - name: cm-service
# labelselector: "app.kubernetes.io/name: cm-service"
# tlsmode: PERMISSIVE
# - name: ingress
# labelselector: "app.kubernetes.io/name: ocbsf-ingress-gateway"
# tlsmode: PERMISSIVE
# - name: diam-gw
# labelselector: "app.kubernetes.io/name: diam-gateway"
# tlsmode: PERMISSIVE
Istio Authorization Policy enables access control on workloads in the mesh. Authorization policy supports CUSTOM, DENY and ALLOW actions for access control. When CUSTOM, DENY and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action.
For more details on Istio Authorization Policy, see Istio / Authorization Policy.
To customize the Authorization Policy, make the required changes using the following sample template:
#authorizationPolicies:
#- name: allow-all-provisioning-on-ingressgateway-ap
# labelselector: "app.kubernetes.io/name: ingressgateway"
# action: "ALLOW"
# hosts:
# - "*"
# paths:
# - "/nudr-dr-prov/*"
# - "/nudr-dr-mgm/*"
# - "/nudr-group-id-map-prov/*"
# - "/slf-group-prov/*"
#- name: allow-all-sbi-on-ingressgateway-ap
# labelselector: "app.kubernetes.io/name: ingressgateway"
# action: "ALLOW"
# hosts:
# - "*"
# paths:
# - "/nbsf-policyauthorization/*"
# xfccvalues:
# - "*DNS=nrf1.site1.com"
# - "*DNS=nrf2.site2.com"
# - "*DNS=scp1.site1.com"
# - "*DNS=scp1.site2.com"
# - "*DNS=scp1.site3.com
A VirtualService is required to configure the retry attempts for the destination host. For instance, for error response code 503, the default behavior of Istio is to retry two times. However, if you want to configure the number of retry attempts, you can do so using virtualService.
#virtualService:
# - name: scp1site1vs
# host: "scp1.site1.com"
# destinationhost: "scp1.site1.com"
# port: 8000
# exportTo: |-
# [ "." ]
# attempts: "0"
# timeout: 7s
# - name: scp1site2vs
# host: "scp1.site2.com"
# destinationhost: "scp1.site2.com"
# port: 8000
# exportTo: |-
# [ "." ]
# retryon: 5xx
# attempts: "1"
# timeout: 7s
where the host or destination name uses the format <release_name>-<egress_svc_name>. To find the service name, run:
kubectl get svc -n <namespace>
Note:
For 5xx response codes, set the value of retry attempts to 1.
#requestAuthentication:
# - name: jwttokenwithjson
# labelselector: httpbin
# issuer: "jwtissue"
# jwks: |-
# '{
# "keys": [{
# "kid": "1",
# "kty": "EC",
# "crv": "P-256",
# "x": "Qrl5t1-Apuj8uRI2o_BP9loqvaBnyM4OPTPAD_peDe4",
# "y": "Y7vNMKGNAtlteMV-KJIaG-0UlCVRGFHtUVI8ZoXIzRY"
# }]
# }'
# - name: jwttoken
# labelselector: httpbin
# issuer: "jwtissue"
# jwksUri: https://example.com/.well-known/jwks.json
Note:
For requestAuthentication, use either jwks or jwksUri.
Run the following command to create the Custom Resource Definitions (CRDs):
helm install ocbsf-servicemesh-config ocbsf-servicemesh-config-23.4.6.tgz -n ocbsf -f ocbsf_custom_values_servicemesh_config_23.4.6.yaml
2.2.1.10.3 Post-deployment Configurations
Run the following command to verify that the CRDs are created:
kubectl get se,dr,peerauthentication,envoyfilter,vs,authorizationpolicy,requestauthentication -n ocbsf
NAME HOSTS LOCATION RESOLUTION AGE
serviceentry.networking.istio.io/nf-to-kube-api-server ["kubernetes.default.svc.vega"] MESH_INTERNAL NONE 17h
serviceentry.networking.istio.io/vega-ns1a-to-mysql-external-se-test ["mysql-connectivity-service.vega-ns1.svc.vega"] MESH_EXTERNAL NONE 17h
serviceentry.networking.istio.io/vega-ns1a-to-other-nf-se-test ["*.vega"] MESH_EXTERNAL NONE 17h
NAME HOST AGE
destinationrule.networking.istio.io/jaeger-dr occne-tracer-jaeger-query.occne-infra 17h
destinationrule.networking.istio.io/mysql-occne mysql-connectivity-service.vega-ns1.svc.cluster.local 17h
destinationrule.networking.istio.io/prometheus-dr occne-prometheus-server.occne-infra 17h
destinationrule.networking.istio.io/vega-ns1a-to-other-nf-dr-test *.vega 17h
NAME MODE AGE
peerauthentication.security.istio.io/cm-service PERMISSIVE 17h
peerauthentication.security.istio.io/default PERMISSIVE 17h
peerauthentication.security.istio.io/diam-gw PERMISSIVE 17h
peerauthentication.security.istio.io/ingress PERMISSIVE 17h
peerauthentication.security.istio.io/ocats-policy PERMISSIVE 17h
NAME AGE
envoyfilter.networking.istio.io/ocats-policy-xfcc 17h
envoyfilter.networking.istio.io/serverheaderfilter 17h
envoyfilter.networking.istio.io/serverheaderfilter-nf1stub 17h
envoyfilter.networking.istio.io/serverheaderfilter-nf2stub 17h
envoyfilter.networking.istio.io/set-xfcc-bsf 17h
NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/nrfvirtual1 ["vega-ns1a-occnp-egress-gateway"] 17h
Then, perform the steps described in Installing BSF Package.
2.2.1.10.4 Deleting Service Mesh
This section describes the steps to delete the Aspen Service Mesh (ASM) configuration for an ASM-based BSF deployment.
-
Disable ASM.
kubectl label --overwrite namespace <namespace-name> istio-injection=disabled
Example:
kubectl label --overwrite namespace ocbsf istio-injection=disabled
where, namespace-name is the deployment namespace used by the helm command.
-
Delete all the pods in the namespace.
kubectl delete pods --all -n <namespace>
-
Delete ASM.
helm delete <helm-release-name> -n <namespace-name>
where, helm-release-name is the release name used by the helm install command. This release name must be the same as the release name used for the service mesh configuration.
namespace-name is the deployment namespace used by the helm command.
Example:
helm delete ocbsf-servicemesh-config -n ocbsf
-
Verify ASM deletion.
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocbsf
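If the deletion succeeded, the command returns no custom resources; the expected output is similar to the following:
No resources found in ocbsf namespace.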
2.2.1.11 Configuring Network Policies
Network Policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. The Network Policies enforce access restrictions for all applicable data flows except communication from the Kubernetes node to pods for invoking container probes.
Note:
Configuring Network Policy is optional. Based on the security requirements, Network Policy can be configured.
For more information on Network Policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Note:
- If traffic is unexpectedly blocked or allowed between pods even after applying Network Policies, check whether any existing policy impacts the same pod or set of pods and alters the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding Network Policies.
Configuring Network Policies
Network Policies are enforced by the Container Network Interface (CNI) plugin used for cluster networking.
Note:
For any deployment, ensure that the CNI plugin in use supports Network Policy.
Following are the various operations that can be performed for Network Policies:
2.2.1.11.1 Installing Network Policies
Prerequisite
Network Policies are implemented by using the network plug-in. To use Network Policies, you must be using a networking solution that supports Network Policy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing BSF. However, if BSF is already installed, you can still install the Network Policies.
To install Network Policies:
- Open the ocbsf-network-policy-custom-values.yaml file provided in the release package zip file. For downloading the file, see Downloading BSF package and Pushing the Images to Customer Docker Registry.
- The file is provided with the default Network Policies. If required, update the ocbsf-network-policy-custom-values.yaml file. For more information on the parameters, see Configuration Parameters for Network Policies.
Note:
- To run ATS, uncomment the following policies in ocbsf-network-policy-custom-values.yaml:
- allow-egress-for-ats
- allow-ingress-to-ats
- allow-egress-to-ats-pods-from-bsf-pods
- allow-ingress-from-ats-pods-to-bsf-pods
- To connect with CNC Console, update the following parameter in the allow-ingress-from-console Network Policy in the ocbsf-network-policy-custom-values.yaml file:
kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
- In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value must match the label on the Prometheus pod.
- Run the following command to install the Network Policies:
helm install <helm-release-name> ocbsf-network-policy/ -n <namespace> -f <custom-value-file>
where:
- <helm-release-name> is the ocbsf-network-policy Helm release name.
- <custom-value-file> is the ocbsf-network-policy-custom-values.yaml file.
- <namespace> is the OCBSF namespace.
For example:
helm install ocbsf-network-policy ocbsf-network-policy/ -n ocbsf -f ocbsf-network-policy-custom-values.yaml
Note:
-
Connections that were created before installing the Network Policies and that still persist are not impacted by the new Network Policies. Only new connections are impacted.
-
If you are using the ATS suite along with Network Policies, BSF and ATS must be installed in the same namespace.
-
It is highly recommended to run ATS after deploying Network Policies to detect any missing or invalid rules that can impact signaling flows.
2.2.1.11.2 Upgrading Network Policies
To add, delete, or update Network Policies:
- Modify the ocbsf-network-policy-custom-values.yaml file to update, add, or delete the Network Policy.
- Run the following command to upgrade the Network Policies:
helm upgrade <helm-release-name> ocbsf-network-policy/ -n <namespace> -f <custom-value-file>
where:
- <helm-release-name> is the ocbsf-network-policy Helm release name.
- <custom-value-file> is the ocbsf-network-policy-custom-values.yaml file.
- <namespace> is the OCBSF namespace.
For example:
helm upgrade ocbsf-network-policy ocbsf-network-policy/ -n ocbsf -f ocbsf-network-policy-custom-values.yaml
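Optionally, you can confirm that the upgrade created a new Helm revision. For example:
# Each successful upgrade appears as a new revision with the status "deployed".
helm history ocbsf-network-policy -n ocbsf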
2.2.1.11.3 Verifying Network Policies
Run the following command to verify if the Network Policies are deployed successfully:
kubectl get networkpolicy -n <namespace>
For example:
kubectl get networkpolicy -n ocbsf
- namespace: the namespace where BSF is deployed.
2.2.1.11.4 Uninstalling Network Policies
Run the following command to uninstall network policies:
helm uninstall <helm-release-name> -n <namespace>
For example:
helm uninstall ocbsf-network-policy -n ocbsf
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.11.5 Configuration Parameters for Network Policies
Table 2-9 Supported Kubernetes Resource for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | Data Type: string Default Value: networking.k8s.io/v1 |
kind | This is a mandatory parameter. Represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: string Default Value: NetworkPolicy |
Table 2-10 Supported Parameters for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
metadata.name | This is a mandatory parameter. Specifies a unique name for the Network Policy. | Data Type: string Default Value: {{ .metadata.name }} |
spec.{} | This is a mandatory parameter. It consists of all the information needed to define a particular network policy in the given namespace. Note: The policy supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies". | Default Value: NA |
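The following minimal sketch shows how these parameters fit together in a policy definition; the pod label and port are hypothetical, and the actual policies are provided in the ocbsf-network-policy-custom-values.yaml file. The --dry-run=client flag validates the object without creating it:
# Validate a sample NetworkPolicy without applying it to the cluster.
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: networking.k8s.io/v1          # supported API version (read-only)
kind: NetworkPolicy                       # REST resource type (read-only)
metadata:
  name: allow-ingress-example             # metadata.name: unique policy name
spec:                                     # spec.{}: the full policy definition
  podSelector:
    matchLabels:
      app.kubernetes.io/name: example-service   # hypothetical pod label
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080                      # hypothetical port
EOF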
For more information, see Network Policies in Oracle Communications Cloud Native Core, Binding Support Function User Guide.
2.2.2 Installation Tasks
This section explains how to install BSF.
Note:
-
Before installing BSF, you must complete Prerequisites and Preinstallation Tasks.
-
In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
2.2.2.1 Downloading BSF package
- Log in to My Oracle Support with your credentials.
- Select the Patches and Updates tab.
- In the Patch Search window, click Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field, and select the Product from the drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud Native Core, Binding Support Function <release_number>".
Where, <release_number> indicates the required release number of BSF.
- Click Search.
The Patch Advanced Search Results list appears.
- Select the required patch from the results.
The Patch Details window appears.
- Click Download.
The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file to download the BSF release package.
2.2.2.2 Pushing the Images to Customer Docker Registry
BSF deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
Table 2-11 Docker Images for BSF
Service Name | Docker Image Name | Image Tag |
---|---|---|
Alternate Route Service | alternate_route | 23.4.10 |
BSF Management | oc-bsf-management | 23.4.6 |
Application Info Service | oc-app-info | 23.4.14 |
Common Configuration Hook | common_config_hook | 23.4.10 |
Config Server | oc-config-server | 23.4.14 |
Configuration Management Server | oc-config-mgmt | 23.4.14 |
Debug Tool | ocdebug-tools | 23.4.3 |
Diameter Connector | oc-diam-connector | 23.4.14 |
Diameter Gateway | oc-diam-gateway | 23.4.15 |
Egress Gateway | ocegress_gateway | 23.4.10 |
NF Test | nf_test | 23.4.3 |
Ingress Gateway | ocingress_gateway | 23.4.10 |
Ingress Gateway/Egress Gateway init configuration | configurationinit | 23.4.10 |
Ingress Gateway/Egress Gateway update configuration | configurationupdate | 23.4.10 |
NRF Client Service | nrf-client | 23.4.7 |
Performance Monitoring Service | oc-perf-info | 23.4.14 |
Query Service | oc-query | 23.4.14 |
Session State Audit | oc-audit | 23.4.14 |
Pushing images
- Run the following command to untar the BSF package file and get the BSF docker image tar file:
tar -xvzf <ReleaseName>-pkg-<Releasenumber>.tgz
Example:
tar -xvzf ocbsf-pkg-23.4.6.0.0.tgz
The directory consists of the following:
- BSF Docker Images File: ocbsf-images-23.4.6.tar
- Helm File: ocbsf-23.4.6.tgz
- Readme txt File: Readme.txt
- Checksum for Helm chart tgz file: ocbsf-23.4.6.tgz.sha256
- Checksum for Helm chart for Service Mesh tgz file: ocbsf-servicemesh-config-23.4.6.tgz.sha256
- Checksum for images' tar file: ocbsf-images-23.4.6.tar.sha256
- Run one of the following commands to load the ocbsf-images-<release_number>.tar file:
docker load --input /IMAGE_PATH/ocbsf-images-23.4.6.tar
For CNE 1.8.0 and later versions, use the following command:
podman load --input /IMAGE_PATH/ocbsf-images-23.4.6.tar
where IMAGE_PATH points to the location where ocbsf-images-23.4.6.tar is stored.
-
Run one of the following commands to verify that the images are loaded:
docker images
podman images
Verify the list of images shown in the output against the list of images in Table 2-11. If the lists do not match, reload the image tar file.
For more information on docker images available in BSF, see Docker Images. -
Run one of the following commands to tag the images to the registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
-
Run one of the following commands to push the images to the registry:
docker push <docker-repo>/<image-name>:<image-tag>
podman push <docker-repo>/<image-name>:<image-tag>
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the push command may fail.
Example for OCCNE 1.8.0 and later versions
podman tag docker.io/ocbsf/oc-config-mgmt:23.4.14 occne-repo-host/oc-config-mgmt:23.4.14
podman push occne-repo-host/oc-config-mgmt:23.4.14
podman tag docker.io/ocbsf/nf_test:23.4.3 occne-repo-host/nf_test:23.4.3
podman push occne-repo-host/nf_test:23.4.3
podman tag docker.io/ocbsf/alternate_route:23.4.10 occne-repo-host/alternate_route:23.4.10
podman push occne-repo-host/alternate_route:23.4.10
podman tag docker.io/ocbsf/oc-config-server:23.4.14 occne-repo-host/oc-config-server:23.4.14
podman push occne-repo-host/oc-config-server:23.4.14
podman tag docker.io/ocbsf/configurationupdate:23.4.10 occne-repo-host/configurationupdate:23.4.10
podman push occne-repo-host/configurationupdate:23.4.10
podman tag docker.io/ocbsf/oc-app-info:23.4.14 occne-repo-host/oc-app-info:23.4.14
podman push occne-repo-host/oc-app-info:23.4.14
podman tag docker.io/ocbsf/ocingress_gateway:23.4.10 occne-repo-host/ocingress_gateway:23.4.10
podman push occne-repo-host/ocingress_gateway:23.4.10
podman tag docker.io/ocbsf/ocegress_gateway:23.4.10 occne-repo-host/ocegress_gateway:23.4.10
podman push occne-repo-host/ocegress_gateway:23.4.10
podman tag docker.io/ocbsf/oc-diam-gateway:23.4.15 occne-repo-host/oc-diam-gateway:23.4.15
podman push occne-repo-host/oc-diam-gateway:23.4.15
podman tag docker.io/ocbsf/oc-bsf-management:23.4.6 occne-repo-host/oc-bsf-management:23.4.6
podman push occne-repo-host/oc-bsf-management:23.4.6
podman tag docker.io/ocbsf/oc-query:23.4.14 occne-repo-host/oc-query:23.4.14
podman push occne-repo-host/oc-query:23.4.14
podman tag docker.io/ocbsf/oc-perf-info:23.4.14 occne-repo-host/oc-perf-info:23.4.14
podman push occne-repo-host/oc-perf-info:23.4.14
podman tag docker.io/ocbsf/nrf-client:23.4.7 occne-repo-host/nrf-client:23.4.7
podman push occne-repo-host/nrf-client:23.4.7
podman tag docker.io/ocbsf/configurationinit:23.4.10 occne-repo-host/configurationinit:23.4.10
podman push occne-repo-host/configurationinit:23.4.10
podman tag docker.io/ocbsf/common_config_hook:23.4.10 occne-repo-host/common_config_hook:23.4.10
podman push occne-repo-host/common_config_hook:23.4.10
podman tag docker.io/ocbsf/ocdebug-tools:23.4.3 occne-repo-host/ocdebug-tools:23.4.3
podman push occne-repo-host/ocdebug-tools:23.4.3
podman tag docker.io/ocbsf/oc-audit:23.4.14 occne-repo-host/oc-audit:23.4.14
podman push occne-repo-host/oc-audit:23.4.14
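Instead of running each tag-and-push pair by hand, the same sequence can be scripted. A convenience sketch, not part of the official procedure; adjust the registry name and image list to your deployment:
# Tag and push every BSF image listed in the example above in one loop.
REGISTRY=occne-repo-host
for img in oc-config-mgmt:23.4.14 nf_test:23.4.3 alternate_route:23.4.10 \
           oc-config-server:23.4.14 configurationupdate:23.4.10 oc-app-info:23.4.14 \
           ocingress_gateway:23.4.10 ocegress_gateway:23.4.10 oc-diam-gateway:23.4.15 \
           oc-bsf-management:23.4.6 oc-query:23.4.14 oc-perf-info:23.4.14 \
           nrf-client:23.4.7 configurationinit:23.4.10 common_config_hook:23.4.10 \
           ocdebug-tools:23.4.3 oc-audit:23.4.14; do
  podman tag "docker.io/ocbsf/${img}" "${REGISTRY}/${img}"
  podman push "${REGISTRY}/${img}"
done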
2.2.2.3 Installing BSF Package
This section describes how to install BSF package.
To install BSF package, perform the following steps:
- Unzip the release package to the location where you want to install BSF. The package file name is ReleaseName_pkg_Releasenumber.tgz, where:
  - ReleaseName is a name that is used to track this installation instance.
  - Releasenumber is the release number.
For example, ocbsf_pkg_23.4.6_0_0.tgz.
Run the following command to access the extracted package:
- Customize the ocbsf_custom_values_23.4.6.yaml file. For more information, see Customizing BSF.
Note:
The values of the parameters mentioned in the ocbsf_custom_values_23.4.6.yaml file override the default values specified in the Helm chart. If the envMysqlDatabase parameter is modified, you must modify the configDbName parameter with the same value.
Note:
The perf-info URL must use the correct syntax; otherwise, the perf-info pod keeps restarting. The following is a URL example for the bastion server when BSF is deployed on the OCCNE platform. On any other PaaS platform, update the URL according to the Prometheus and Jaeger query deployment.
# Values provided must match the Kubernetes environment.
perf-info:
  configmapPerformance:
    prometheus: http://occne-prometheus-server.occne-infra.svc/clustername/prometheus
    jaeger: jaeger-agent.occne-infra
    jaeger_query_url: http://jaeger-query.occne-infra/clustername/jaeger
At least three configuration items must be present in the config map for perf-info; otherwise, perf-info will not work. If Jaeger is not enabled, the jaeger and jaeger_query_url parameters can be omitted.
-
Install BSF using Helm:
helm install <release-name> <helm-chart> --namespace <release-namespace> -f <custom_file> --atomic --timeout 600
where:
- helm-chart is the location of the Helm chart extracted from the ocbsf-pkg-23.4.6.0.0.tgz file.
- release_name is the release name used by the helm command. The maximum allowed length is 63 characters.
- release_namespace is the deployment namespace used by the helm command.
- custom_file is the name of the custom values yaml file (including its location).
Note:
To verify the installation while running the install command, run the following command in a separate window:
watch kubectl get jobs,pods -n release_namespace
For example:
helm install ocbsf /home/cloud-user/bsf-23.4.6.0.0.tgz --namespace ocbsf -f ocbsf-custom-values-23.4.6.yaml --atomic
Parameters in the helm install command:
- atomic: If this parameter is set, the installation process purges the Helm chart on failure. The wait flag is set automatically.
- wait: If this parameter is set, the installation process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
- timeout duration (optional): This parameter specifies the wait time for individual Kubernetes operations, such as Jobs for hooks. The default value is 300s (in seconds) in Helm. If the helm install command fails to create a Kubernetes object, it internally calls the purge to delete it after reaching the default timeout value.
Note:
Timeout value is not for the overall install but for automatic purge on installation failure.
Caution:
When you run the install command, make sure that you do not exit from the helm install command manually. After running the helm install command, installing all the services may take some time. In the meantime, do not press "Ctrl+C" to exit from the helm install command, as it leads to anomalous behavior.
Note:
In a georedundant deployment, if you want to add or remove a site, refer to <Appendix B and Appendix C>. -
Press "Ctrl+C" to exit the watch mode. Make sure to run the watch command on a different terminal.
For Helm 2:
helm status <helm-release> -n <namespace>
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for BSF.
2.2.3.1 Verifying BSF Installation
-
Run the following command to verify the installation status:
For Helm:
helm status <helm-release> -n <namespace>
For example:
helm status ocbsf -n ocbsf
Status should be DEPLOYED.
-
Run the following command to verify if the pods are up and active:
kubectl get pods -n <release_namespace>
For example:
kubectl get pod -n ocbsf
You should see the status as Running and Ready for all the pods.
-
Run the following command to verify if the services are deployed and active:
kubectl get services -n <release_namespace>
For example:
kubectl get services -n ocbsf
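To quickly spot pods that are not yet up, you can filter on the pod phase. A minimal sketch:
# Lists only pods whose phase is not Running; an empty result means all pods are up.
kubectl get pods -n ocbsf --field-selector=status.phase!=Running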
Note:
If the installation is not successful or you do not see the status as Running for all the pods, perform the troubleshooting steps mentioned in Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.
2.2.3.2 Performing Helm Test
This section describes how to perform a sanity check of the BSF installation through Helm test.
Helm test is a feature that validates successful installation of BSF and determines whether the NF is ready to take traffic. The pods that are checked are based on the namespace and label selector configured for the Helm test configurations.
Note:
-
Helm test can be performed only on Helm3.
-
If nrf-client-nfmanagement.enablePDBSupport is set to true in the custom-values.yaml file, Helm test fails. This is expected behavior: because the mode is active/standby, the leader pod (nrf-client-management) will be in the ready state but the follower will not, which leads to failure in the Helm test.
Before running Helm test, complete the Helm test configurations. Then, run the following command:
helm test <helm-release-name> -n <namespace>
where:
- helm-release-name is the release name.
- namespace is the deployment namespace where BSF is installed.
For example:
helm test ocbsf -n ocbsf
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test pending
Pod ocbsf-helm-test-test running
Pod ocbsf-helm-test-test succeeded
NAME: ocbsf-helm-test
LAST DEPLOYED: Thu May 19 12:22:20 2022
NAMESPACE: ocbsf-helm-test
STATUS: deployed
REVISION: 1
TEST SUITE: ocbsf-helm-test-test
Last Started: Thu May 19 12:24:23 2022
Last Completed: Thu May 19 12:24:35 2022
Phase: Succeeded
To view the logs of the Helm test, run the following command:
helm test <release_name> -n <namespace> --logs
Note:
- Helm test expects all of the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice comes with an Active/Standby model for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you may ignore a Helm test failure for the NRF-Client-Management pod.
-
If Helm test fails, for details on troubleshooting the installation, see Oracle Communications Cloud Native Core, Binding Support Function Troubleshooting Guide.