5 Maintenance Procedures

This section provides details about the OSO maintenance procedures.

5.1 Postinstallation CNE Configuration

5.1.1 Changing Metrics Storage Allocation

The following procedure describes how to increase or decrease the amount of persistent storage allocated to Prometheus for metrics storage.

Prerequisites

The user must calculate the revised amount of persistent storage required for metrics.
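
The revised size can be estimated from the retention period and the ingestion rate. The sketch below uses the common rule of thumb of roughly one to two bytes per stored sample after compression; the retention and rate values are illustrative, not taken from this document:

```shell
# Rough estimate of Prometheus disk usage:
#   needed_bytes ~ retention_seconds * ingested_samples_per_second * bytes_per_sample
# bytes_per_sample=2 is a conservative rule of thumb (illustrative assumption).
retention_seconds=$((15 * 24 * 3600))   # 15-day retention (example)
samples_per_second=10000                # observed ingestion rate (example)
bytes_per_sample=2
needed_bytes=$((retention_seconds * samples_per_second * bytes_per_sample))
echo "Estimated storage: $((needed_bytes / 1024 / 1024 / 1024)) GiB"
```

Add headroom on top of the estimate before choosing the new PVC size, since compaction and WAL replay need temporary space.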

Procedure
  1. A Prometheus resource is used to configure all Prometheus instances running in OSO. Run the following command to identify the Prometheus resource:
    $ kubectl get prometheus -n <namespace>
  2. Identify the storage class used by the Prometheus PVC, then enable volume resizing by setting the value of allowVolumeExpansion to true:
    $ kubectl -n <namespace> get sc
    $ kubectl patch sc <storage-class-name> -p '{"allowVolumeExpansion": true}'
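
For reference, volume expansion is controlled by the allowVolumeExpansion field on the StorageClass object. The fragment below shows where the field sits; the name and provisioner are examples, not taken from this document:

```yaml
# Illustrative StorageClass with volume expansion enabled
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard            # example name
provisioner: example.com/csi-driver   # example provisioner
allowVolumeExpansion: true
```
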
  3. Run the following command to scale the Prometheus pod down.
    $ kubectl -n <namespace> scale deploy oso-prom-svr --replicas 0
  4. Run the following command to change the pvc size of Prometheus pods:
    $ kubectl -n <namespace> edit pvc oso-prom-svr

    Note:

    You will be placed in a vi editor session that contains the full configuration of the OSO Prometheus pvc. Scroll down to the line that contains the "spec.resources.requests.storage" key, then update the value to the desired increased PV size. The file must look similar to the following example:
    spec:
      resources:
        requests:
          storage: 10Gi

    Type ":wq" to exit the editor session and save the changes.
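
In current Kubernetes versions, the requested size of a PersistentVolumeClaim is set under spec.resources.requests.storage (the capacity shown under status is updated by the cluster, not edited by hand). For orientation, a minimal excerpt of the relevant portion, with 10Gi as an example value:

```yaml
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
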

  5. Run the following command to verify that the pvc size change was applied:
    $ kubectl get pv | grep oso-prom-svr

    Note:

    Wait until the new desired size (for example, 10Gi) is reflected in the output.
  6. Once the PV size is updated to the new desired size, run the following command to scale the Prometheus deployment back up:
    $ kubectl -n <namespace> scale deploy oso-prom-svr --replicas 1

    Note:

    If the deployment originally ran more than one replica (for example, 2), restore that original replica count instead so that all the pods are scaled back up.
  7. Run the following command to verify that the Prometheus pods are up and running:
    $ kubectl get pods -n <namespace> | grep oso-prom-svr

5.2 Managing 5G NFs

This section describes procedures to manage 5G NFs in CNE OSO.

5.2.1 Updating Alert Rules for an NF

This section describes the procedure to add or update the alerting rules for any Cloud Native Core (CNC) 5G Network Functions (NF) in OSO Prometheus GUI.

Prerequisites

  • Each NF must provide its own separate alert rules file.
  • For OSO Prometheus: a valid OSO release must be installed, and an alert file describing all the NF alert rules in the old format is required.

Add or Update Alert Rules

Perform the following steps to add alert rules in OSO Prometheus GUI:

  1. Take a backup of the current configuration map of OSO Prometheus:
    $ kubectl get configmaps <OSO-prometheus-configmap-name> -o yaml -n <namespace> > /tmp/tempPrometheusConfig.yaml
    Where,
    • <OSO-prometheus-configmap-name> is the name of the OSO Prometheus configuration map.
    • <namespace> is the OSO namespace.
  2. Check and add the NF Alert file name inside the Prometheus configuration map.

    <nf-alertsname> varies from NF to NF, and can be retrieved from each individual NF alert rules file.

    For example, in the following screenshot, "alertscndbtier" is the nf-alertsname for cnDBTier.

    Figure 5-1 OSO Alert file



    After retrieving the nf-alertsname, run the following commands:
    
    $ sed -i '/etc\/config\/<nf-alertsname>/d' /tmp/tempPrometheusConfig.yaml
    $ sed -i '/rule_files:/a\    \- /etc/config/<nf-alertsname>' /tmp/tempPrometheusConfig.yaml
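
The effect of these two sed commands can be checked offline against a throwaway copy of the exported configuration. The sketch below uses alertscndbtier as the nf-alertsname and a minimal fabricated config, both purely illustrative:

```shell
# Build a minimal stand-in for the exported Prometheus config.
cat > /tmp/demoPrometheusConfig.yaml <<'EOF'
data:
  prometheus.yml: |
    rule_files:
    - /etc/config/recording_rules.yml
    - /etc/config/alertscndbtier
EOF

# Delete any stale entry for the NF alert file, then re-insert it
# directly after the rule_files: key (same edits as the step above).
sed -i '/etc\/config\/alertscndbtier/d' /tmp/demoPrometheusConfig.yaml
sed -i '/rule_files:/a\    \- /etc/config/alertscndbtier' /tmp/demoPrometheusConfig.yaml

cat /tmp/demoPrometheusConfig.yaml
```

Running the delete first makes the pair of commands idempotent: the alert file path ends up listed exactly once no matter how many times the edit is repeated.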
  3. Update the configuration map with the updated file.
    $ kubectl -n <namespace> replace configmap <OSO-prometheus-configmap-name> -f
        /tmp/tempPrometheusConfig.yaml
  4. Patch the NF alert rules into the OSO Prometheus configuration map by specifying the alert rules file path:
    $ kubectl patch configmap <OSO-prometheus-configmap-name> -n <namespace> --type merge --patch "$(cat ./NF_alertrules.yaml)"
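
The patch file in this step is supplied with each NF. For orientation only, a fabricated example of its general shape; all names, expressions, and thresholds below are invented for illustration:

```yaml
data:
  alertscndbtier: |
    groups:
    - name: cnDBTierAlerts
      rules:
      - alert: CnDBTierDown
        expr: up{job="cndbtier"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "cnDBTier target is down"
```

Because the patch is a merge, only the named data key is replaced; the rest of the configuration map is left untouched.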

5.3 Prometheus Vertical Scaling

This section describes the procedure for vertical scaling of Prometheus.

To scale Prometheus deployments, follow these steps:
  1. Get the list of deployments and identify the OSO Prometheus deployment with the suffix prom-svr:
    # To list all the deployments in the OSO namespace
    $ kubectl -n <OSO_namespace> get deployments
    # To filter the deployment name by its suffix
    $ kubectl -n <OSO_namespace> get deployments | grep prom-svr
  2. Edit the OSO deployment using the following command:

    Note:

    This will open a vi editor with the deployment's YAML definition.
    $ kubectl -n <OSO_namespace> edit deployment <oso_deployment_name>-prom-svr
  3. Find the resources section for the prom-svr container in the edit mode of deployment, and edit the amount of resources as per the requirements.
    name: prom-svr
    ports:
    ... # ports definitions
    readinessProbe:
    ... # readiness probe definition
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
      requests:
        cpu: "2"
        memory: 4Gi
  4. Save and quit the editor after making the required changes to the CPU and memory values in the YAML file. In case of any errors while editing, the editor reopens and an error message appears at the top of the YAML file as a comment.

Note:

If any of these objects have two containers each, you will find two resources sections. For more information about how to assign resources, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/.
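
As a non-interactive alternative to the edit session in the steps above, the same resources change can be expressed as a strategic-merge patch file. The file name and resource values below are illustrative:

```yaml
# prom-svr-resources-patch.yaml (illustrative values)
spec:
  template:
    spec:
      containers:
      - name: prom-svr
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: "2"
            memory: 4Gi
```

It can then be applied with kubectl -n <OSO_namespace> patch deployment <oso_deployment_name>-prom-svr --patch-file prom-svr-resources-patch.yaml, which triggers the same rolling restart as saving the interactive edit.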