2 Setting Up Prerequisite Software

You can perform prerequisite tasks, such as installing Kubernetes and Helm, before deploying Oracle Communications Network Bridge on your cloud native environment.

Topics in this document:

  • Network Bridge Prerequisite Tasks

  • Creating a Kubernetes Cluster

  • Installing Docker

  • Installing Helm

  • Installing MySQL NDB Operator

  • Installing an Ingress Controller

  • Installing Jaeger

  • Installing Kubernetes Metrics Server

  • Installing Prometheus Operator

  • Installing Grafana

Caution:

Oracle does not provide support for any prerequisite third-party software installation or configuration. The customer must handle any installation or configuration issues related to non-Oracle prerequisite software.

Network Bridge Prerequisite Tasks

As part of preparing your environment for Network Bridge cloud native, you choose, install, and set up external applications and services in ways that are best suited for your cloud native environment. The following shows the high-level prerequisite tasks:

  1. Ensure you have downloaded the latest supported software that is compatible with Network Bridge cloud native. See "Network Bridge Cloud Native Software Compatibility" in CCS Compatibility Matrix.

  2. Create a Kubernetes cluster.

  3. Install a container runtime supported by Kubernetes, such as Docker, Podman, or containerd.

  4. Install Helm.

  5. Install MySQL NDB Operator.

  6. Install an ingress controller.

  7. If you plan to trace the flow of API calls through Network Bridge, install and configure Jaeger.

    For more information about tracing, see "Tracing the Flow of API Calls".

  8. If you plan to autoscale your pods using Kubernetes Horizontal Pod Autoscaler:

    • Install and configure Kubernetes Metrics Server.

    • Install and configure a service mesh, such as Istio.

    For more information about autoscaling, see "Setting up Autoscaling of Network Bridge Pods".

  9. If you plan to monitor Network Bridge operations:

    • Install and configure Prometheus Operator.

    • Install and configure Grafana.

    For more information, see "Monitoring Network Bridge Processes".

Prepare your environment with these technologies installed, configured, and tuned for performance, networking, security, and high availability. Make sure backup nodes are available in case of system failure in any of the cluster's active nodes.

Creating a Kubernetes Cluster

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery. When you deploy Kubernetes, you get a cluster of machines called nodes. A reliable cluster must have multiple worker nodes spread over separate physical infrastructure, and a very reliable cluster must have multiple primary nodes spread over separate physical infrastructure.

Figure 2-1 illustrates the Kubernetes cluster and the components that it interacts with.

Figure 2-1 Overview of the Kubernetes Cluster



Set up a Kubernetes cluster for your Network Bridge cloud native deployment, securing access to the cluster and its objects with the help of service accounts and proper authentication and authorization modules. Also, set up the following in your cluster:

  • Volumes: Volumes are directories accessible to the containers in a pod and provide a way to share data. The Network Bridge cloud native deployment package uses persistent volumes for sharing data in and out of containers but does not enforce any particular type. You can choose from the volume type options available in Kubernetes.

  • A networking model: Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own IP address, so you do not need to explicitly create a link between pods or map container ports to host ports. Several implementations are available that meet the fundamental requirements of Kubernetes’ networking model. Choose the networking model depending on the cluster requirement.
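As an illustration of the volume choice, a persistent volume claim that shares data across Network Bridge containers could look like the following sketch. The claim name, storage class, and size are placeholders, not values mandated by Network Bridge:

```yaml
# Hypothetical PersistentVolumeClaim; the name, storageClassName, and
# size are examples only -- choose values that match your environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nb-shared-data
spec:
  accessModes:
    - ReadWriteMany        # lets pods on different nodes mount the volume
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```

Whether ReadWriteMany is available depends on the storage provisioner you choose for the cluster.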

Typically, you don't run or monitor Kubernetes workloads from the cluster nodes themselves; worker node resources are reserved for running the workloads. Multiple cluster users (manual and automated) require a point from which to access and operate the cluster, for example, with kubectl commands or the Kubernetes APIs. For this purpose, set aside a separate host or set of hosts, and restrict operational and administrative access to the Kubernetes cluster to these hosts. To reduce cluster exposure and promote the traceability of actions, give specific users named accounts on these hosts.
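For example, restricted cluster access for operators could be granted through a service account bound to a read-only role; all of the names below are hypothetical:

```yaml
# Hypothetical ServiceAccount and RoleBinding granting read-only access
# to one namespace; every name here is an example.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nb-operator-user
  namespace: nb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nb-operator-user-view
  namespace: nb
subjects:
  - kind: ServiceAccount
    name: nb-operator-user
    namespace: nb
roleRef:
  kind: ClusterRole
  name: view               # built-in read-only aggregate role
  apiGroup: rbac.authorization.k8s.io
```

Binding to the built-in view ClusterRole at namespace scope keeps such accounts from modifying cluster objects.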

Typically, the Continuous Delivery pipeline automation deploys directly on a set of operations hosts or leverages runners deployed on operations hosts. These hosts must run Linux, with the interactive-use packages installed to support tools such as bash, wget, curl, hostname, sed, awk, cut, and grep. An example is the Oracle Linux 7.6 image on Oracle Cloud Infrastructure.

In addition, you need the appropriate tools to connect to your overall environment, including the Kubernetes cluster. For instance, for a Container Engine for Kubernetes (OKE) cluster, you must install and configure the Oracle Cloud Infrastructure Command Line Interface.

Additional integrations may include LDAP for users to log in to these hosts, appropriate NFS mounts for home directories, security lists, firewall configuration for access to the overall environment, and so on.

For more information about Kubernetes, see "Kubernetes Concepts" in the Kubernetes documentation.

Installing Docker

Use the Docker platform to containerize CCS products. Install Docker Engine to use the prebuilt images from the Network Bridge cloud native deployment package.

You can use Docker Engine or any container runtime that supports the Open Container Initiative if it supports the Kubernetes version specified in "Network Bridge Cloud Native Software Compatibility" in CCS Compatibility Matrix.

Installing Helm

Helm is a package manager that helps you install and maintain software on a Kubernetes system. In Helm, a package is called a chart, consisting of YAML files and templates rendered into Kubernetes manifest files. The Network Bridge cloud native deployment package includes Helm charts that help create Kubernetes objects, such as ConfigMaps, Secrets, controller sets, and pods, with a single command.

The Network Bridge package also includes a values.yaml file, which contains the default configuration for a cloud native deployment. You can change the Network Bridge configuration by creating an override-values.yaml file and modifying the keys and values you want to change. The settings in this file will override the default values when you deploy Network Bridge or update your Network Bridge release.
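For instance, an override-values.yaml file might look like the following sketch. The keys shown are illustrative assumptions, so check the values.yaml file shipped with your release for the actual key names:

```yaml
# Hypothetical override-values.yaml; the key names are examples only --
# see the values.yaml in your deployment package for the real keys.
mediation:
  replicaCount: 3          # run three mediation pods instead of the default
imageRepository: "registry.example.com/ccs"
```

You would then pass the file at deployment time with the -f (or --values) flag of helm install or helm upgrade, which merges these keys over the chart defaults.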

Helm leverages kubeconfig for users running the helm command to access the Kubernetes cluster. By default, this is $HOME/.kube/config. Helm inherits the permissions set up for this access into the cluster. If you configure role-based access control (RBAC), ensure you grant sufficient cluster permissions to users running Helm.

To install Helm, see the Helm installation documentation at: https://helm.sh/docs/intro/install/.

Installing MySQL NDB Operator

Network Bridge components use MySQL NDB Operator to store session information. Ensure that you install MySQL NDB Operator before deploying Network Bridge.

Note:

If you attempt to deploy Network Bridge before installing MySQL NDB Operator, you will receive an error message similar to the following:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "ccs-ndb" namespace: "" from "": no matches for kind "NdbCluster" in version "mysql.oracle.com/v1"
ensure CRDs are installed first

To deploy MySQL NDB Operator on your Network Bridge cloud native environment, run the following command:

helm install --repo https://mysql.github.io/mysql-ndb-operator/ ndb-operator ndb-operator -n ndb-operator --create-namespace \
   --set image=container-registry.oracle.com/mysql/commercial-ndb-operator:8.0.32

Note that Kubernetes namespace names must be lowercase.

If successful, you should see something similar to this:

NAME: ndb-operator
LAST DEPLOYED: Fri Oct 28 03:42:38 2023
NAMESPACE: ndb-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
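For reference, an NdbCluster resource (the kind named in the error message above) has roughly the following shape. The spec field names are assumed from the operator's published examples and may differ between operator versions, so verify them against the CRD installed in your cluster:

```yaml
# Sketch of an NdbCluster resource; spec field names may vary by
# operator version -- confirm against the installed CRD.
apiVersion: mysql.oracle.com/v1
kind: NdbCluster
metadata:
  name: example-ndb
spec:
  redundancyLevel: 2       # replicas per data node group
  dataNode:
    nodeCount: 2
  mysqlNode:
    nodeCount: 2
```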

For more information, see the MySQL NDB Operator README on the GitHub website: https://github.com/mysql/mysql-ndb-operator/blob/main/README.md.

Installing an Ingress Controller

An ingress controller exposes Network Bridge services outside the Kubernetes cluster and allows clients to communicate with Network Bridge. Ingress controllers monitor ingress objects and act on the configuration embedded in those objects to expose Network Bridge HTTP and T3 services to the external network.

Adding an external load balancer provides highly reliable single-point access to the services exposed by the Kubernetes cluster. In this case, the ingress controller exposes the services on behalf of the Network Bridge cloud native instance. Using a load balancer removes the need to expose Kubernetes node IPs to the larger user base, insulates users from changes (in terms of nodes appearing or being decommissioned) to the Kubernetes cluster, and enforces access policies.

Add an ingress controller, such as NGINX, Istio, or Traefik, to your Network Bridge cloud native system. Ensure that it provides:

  • Path-based routing to services in the Kubernetes cluster.

  • TLS enabled between the client and the load balancer to secure communications outside of the Kubernetes cluster.

After you install an ingress controller, you must define the rules for directing requests for the following 5G service endpoints to the mediation service and mediation port:

  • /nchf-convergedcharging/v3/*

  • /npcf-smpolicycontrol/v1/*

The following shows example rules for mapping requests to the mediation service and the mediation port (8080). Although this example is for NGINX, you can use any ingress controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-oc-ccs
  namespace: ingressNameSpace
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /nchf-convergedcharging/v3
        pathType: Prefix
        backend:
          service:
            name: mediation
            port:
              number: 8080
      - path: /npcf-smpolicycontrol/v1
        pathType: Prefix
        backend:
          service:
            name: mediation
            port:
              number: 8080
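To meet the TLS requirement listed earlier, the same ingress spec can carry a tls section such as the following sketch. The host name and Secret name are placeholders:

```yaml
# Hypothetical TLS fragment for the ingress spec; the referenced Secret
# must be of type kubernetes.io/tls and hold the certificate and key.
spec:
  tls:
    - hosts:
        - ccs.example.com
      secretName: ccs-tls-cert
```

With this in place, the ingress controller terminates TLS at the load balancer boundary, securing traffic between clients and the cluster.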

Installing Jaeger

The Jaeger tracing tool helps you trace the flow of messages through Network Bridge components, making it easier to troubleshoot issues.

To install Jaeger, see the Jaeger documentation at: https://www.jaegertracing.io/docs/latest/.

Installing Kubernetes Metrics Server

Metrics Server collects resource metrics from kubelets and exposes them through the Metrics API. These metrics are used by Kubernetes Horizontal Pod Autoscaler to automatically scale the number of Network Bridge pods based on their CPU and memory usage.
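As a sketch of how Horizontal Pod Autoscaler consumes these metrics, the following manifest scales a hypothetical mediation deployment on CPU utilization. The target name and thresholds are examples, not Network Bridge defaults:

```yaml
# Hypothetical HorizontalPodAutoscaler; target name, replica bounds,
# and the utilization threshold are examples only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mediation-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mediation
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add pods above 75% average CPU
```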

To install Metrics Server, see the Kubernetes Metrics Server documentation at: https://kubernetes-sigs.github.io/metrics-server/.

Installing Prometheus Operator

Prometheus Operator is an open-source toolkit that scrapes metric data from Network Bridge and then stores it in a time-series database. You use it to monitor the operation of Network Bridge processes. See "Monitoring Network Bridge Processes" for more information.
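With Prometheus Operator installed, scraping is typically configured through a ServiceMonitor resource. The following sketch assumes a service labeled app: mediation that exposes a named metrics port; adjust both to match how your Network Bridge services are actually labeled:

```yaml
# Hypothetical ServiceMonitor; the label selector and port name depend
# on your deployment and are examples only.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nb-metrics
spec:
  selector:
    matchLabels:
      app: mediation
  endpoints:
    - port: metrics        # named service port exposing Prometheus metrics
      interval: 30s
```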

To install Prometheus Operator, see the prometheus-operator GitHub website at: https://github.com/prometheus-operator/prometheus-operator.

Installing Grafana

Grafana is an open-source tool for viewing metric data that is stored by Prometheus. You can use the Grafana dashboards shipped with Network Bridge to view Network Bridge performance data.
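Before the shipped dashboards can display data, Grafana needs a datasource that points at Prometheus. The following provisioning-file sketch assumes the prometheus-operated service that Prometheus Operator creates, running in a monitoring namespace; adjust the URL to your environment:

```yaml
# Hypothetical Grafana datasource provisioning file; the service name
# and namespace in the URL are examples.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.monitoring.svc:9090
    isDefault: true
```

Grafana reads such files from its provisioning/datasources directory (or the equivalent ConfigMap when Grafana runs in the cluster).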

To install Grafana, see the Grafana documentation at: https://grafana.com/docs/grafana/latest/.