2 Installing Policy
Note:
Policy supports fresh installation, and it can also be upgraded from CNC Policy 24.2.x and 24.1.x. For more information on how to upgrade Policy, see Upgrading Policy.
Deployment Models
- Converged Policy – Unified policy solution that supports both PCF and PCRF functionalities. To enable only the PCRF functionality, enable PCRF and its related services, and disable the PCF services.
- PCF only - Independent deployment for PCF and its microservices.
2.1 Prerequisites
Before installing and configuring Policy, ensure that the following prerequisites are met:
2.1.1 Software Requirements
Table 2-1 Preinstalled Software
Software | Versions |
---|---|
Kubernetes | 1.30.x, 1.29.x, 1.28.x |
Helm | 3.14.2 |
Podman | 4.9.4 |
Note:
CNE 24.3.x, 24.2.x, and 24.1.x versions can be used to install Policy 24.3.0.
To check the versions of the preinstalled software, run the following commands:
kubectl version
helm version
podman version
docker version
The following software items are available if Policy is deployed on CNE. If you are deploying Policy in any other cloud native environment, install these additional software items before installing Policy.
To check the installed software, run the following command:
helm ls -A
The list of additional software items, along with the supported versions and usage, is provided in the following table:
Table 2-2 Additional Software
Software | Version | Purpose |
---|---|---|
AlertManager | 0.27.0 | Alerts Manager |
Calico | 3.27.3 | Security Solution |
cert-manager | 1.12.4 | Certificate Manager |
Containerd | 1.7.16 | Container Runtime Manager |
Fluentd - OpenSearch | 1.16.2 | Logging |
Grafana | 9.5.3 | Metrics |
HAProxy | 3.0.2 | Load Balancer |
Istio | 1.18.2 | Service Mesh |
Jaeger | 1.60.0 | Tracing |
Kyverno | 1.12.5 | Policy Management |
MetalLB | 0.14.4 | External IP |
Oracle OpenSearch | 2.11.0 | Logging |
Oracle OpenSearch Dashboard | 2.11.0 | Logging |
Prometheus | 2.52.0 | Metrics |
Prometheus Operator | 0.76.0 | Metrics |
Velero | 1.12.0 | Backup and Restore |
elastic-curator | 5.5.4 | Logging |
elastic-exporter | 1.1.0 | Logging |
elastic-master | 7.9.3 | Logging |
Logs | 3.1.0 | Logging |
prometheus-kube-state-metric | 1.9.7 | Metrics |
prometheus-node-exporter | 1.0.1 | Metrics |
metrics-server | 0.3.6 | Metric Server |
occne-snmp-notifier | 1.2.1 | SNMP Notifications for Alerts |
tracer | 1.22.0 | Tracing |
Important:
If you are using NRF with Policy, install it before proceeding with the Policy installation. Policy 24.3.0 supports NRF 24.3.x.
2.1.2 Environment Setup Requirements
This section describes the environment setup requirements for installing Policy.
2.1.2.1 Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository: It contains the Policy Helm charts.
To check if the Kubernetes cluster hosts can access the local helm repository, run the following command:
helm repo update
- Local Docker image repository: It contains the Policy Docker images.
To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image-tag, using either of the following commands:
docker pull <docker-repo>/<image-name>:<image-tag>
podman pull <podman-repo>/<image-name>:<image-tag>
Where:
<docker-repo> is the IP address or host name of the Docker repository.
<podman-repo> is the IP address or host name of the Podman repository.
<image-name> is the Docker image name.
<image-tag> is the tag assigned to the Docker image used for the Policy pod.
For example:
docker pull CUSTOMER_REPO/oc-app-info:24.3.5
podman pull occne-repo-host:5000/occnp/oc-app-info:24.3.5
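In addition to helm repo update, you can confirm that the Policy charts are actually visible from the configured repository. A minimal check, assuming the chart name contains occnp as used throughout this guide:
helm search repo occnp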
2.1.2.2 Client Machine Requirement
This section describes the requirements for the client machine, that is, the machine used to run the deployment commands.
The client machine must have:
- network access to the Helm repository and Docker image repository
- the Helm repository configured
- network access to the Kubernetes cluster
- the environment settings required to run the kubectl, podman, and docker commands. The environment must have privileges to create a namespace in the Kubernetes cluster.
- the Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command deploys the software in the Kubernetes cluster.
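The following read-only commands can be used to confirm these requirements on the client machine. This is a sketch; it verifies access and privileges without changing any cluster state:
kubectl cluster-info                  # network access to the Kubernetes cluster
helm repo list                        # Helm repository configured
helm plugin list                      # push plugin installed
kubectl auth can-i create namespace   # privilege to create a namespace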
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see the Oracle Communications Cloud Native Core, Cloud Native Environment (CNE) Installation, Upgrade, and Fault Recovery Guide.
2.1.2.4 CNE Requirement
This section is applicable only if you are installing Policy on Oracle Communications Cloud Native Core, Cloud Native Environment. Policy supports CNE 24.3.x, 24.2.x, and 24.1.x.
To check the CNE version, run the following command:
echo $OCCNE_VERSION
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.5 cnDBTier Requirement
Policy supports cnDBTier 24.3.x, 24.2.x, and 24.1.x. cnDBTier must be configured and running before installing Policy. For more information about the cnDBTier installation procedure, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
Note:
These parameters must be updated in the cnDBTier custom_values.yaml before the installation and upgrade.
Table 2-3 Modified/Additional cnDBTier Parameters
Parameter | Modified/Additional | Old value | Current Value |
---|---|---|---|
ndb_batch_size | Modified | 0.03G | 2G |
TimeBetweenEpochs | Modified | 200 | 100 |
NoOfFragmentLogFiles | Modified | 128 | 50 |
FragmentLogFileSize | Modified | 16M | 256M |
RedoBuffer | Modified | 32M | 1024M |
ndbmtd pods CPU | Modified | 3/3 | 8/8 |
ndb_report_thresh_binlog_epoch_slip | Additional | NA | 50 |
ndb_eventbuffer_max_alloc | Additional | NA | 19G |
ndb_log_update_minimal | Additional | NA | 1 |
replicationskiperrors | Modified | enable: false | enable: true |
replica_skip_errors | Modified | '1007,1008,1050,1051,1022,1296,13119' | '1007,1008, 1022, 1050,1051,1054,1060,1061,1068,1094,1146, 1296,13119' |
Important:
These parameters may need further tuning based on the call model and deployments. For any further support, you must consult My Oracle Support (https://support.oracle.com).
2.1.2.6 OSO Requirement
Policy supports Operations Services Overlay (OSO) 24.3.x, 24.2.x, and 24.1.x for common operation services (Prometheus and components such as alertmanager, pushgateway) on a Kubernetes cluster, which does not have these common services. For more information about OSO installation, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation, Upgrade, and Fault Recovery Guide.
2.1.2.7 CNC Console Requirements
Policy supports CNC Console (CNCC) 24.3.x.
For more information about CNCC, see Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide and Oracle Communications Cloud Native Configuration Console User Guide.
2.1.2.8 OCCM Requirements
Policy supports OCCM 24.3.x. To support automated certificate lifecycle management, Policy integrates with Oracle Communications Cloud Native Core, Certificate Management (OCCM) in compliance with 3GPP security recommendations. For more information about OCCM in Policy, see the Support for Automated Certificate Lifecycle Management section in Oracle Communications Cloud Native Core, Converged Policy User Guide.
For more information about OCCM, see the following guides:
- Oracle Communications Cloud Native Core, Certificate Manager Installation, Upgrade, and Fault Recovery Guide
- Oracle Communications Cloud Native Core, Certificate Manager User Guide
2.1.2.9 OCNADD Requirements
Policy supports Oracle Communications Network Analytics Data Director (OCNADD) <version> to store the metadata of the messages copied at Ingress Gateway and Egress Gateway. Storing these messages is required to support SBI monitoring.
For more information about copying the messages at Ingress and Egress gateways, see Message Feed for SBI Monitoring section in Oracle Communications Cloud Native Core, Converged Policy User Guide.
For details on Oracle Communications Network Analytics Data Director (OCNADD), see Oracle Communications Network Analytics Data Director User Guide.
2.1.3 Resource Requirements
This section lists the resource requirements to install and run Policy.
Note:
The performance and capacity of the Policy system may vary based on the call model, feature and interface configuration, and the underlying CNE and hardware environment, including but not limited to the complexity of deployed policies, policy table size, and the use of object expressions and custom JSON in policy design.
2.1.3.1 Policy Services
The following table lists the resource requirements for Policy services:
Table 2-4 Policy Services:
Service Name | CPU Min | CPU Max | Memory Min (Gi) | Memory Max (Gi) | Replica Count | Replica Min | Replica Max | Ephemeral-Storage Min | Ephemeral-Storage Max |
---|---|---|---|---|---|---|---|---|---|
App-Info | 2 | 2 | 4 | 4 | 1 | 2 | 5 | 80Mi | 1Gi |
Audit Service | 2 | 2 | 4 | 4 | 1 | 2 | 8 | 80Mi | 1Gi |
CM Service | 2 | 4 | 0.5 | 2 | 2 | NA | NA | 80Mi | 1Gi |
Config Server | 4 | 4 | 0.5 | 2 | 1 | 1 | 2 | 80Mi | 1Gi |
NRF Client NF Discovery | 4 | 4 | 2 | 2 | 2 | 2 | 5 | 80Mi | 1Gi |
NRF Client NF Management | 1 | 1 | 1 | 1 | 2 | NA | NA | 80Mi | 1Gi |
Perf-Info | 1 | 2 | 1 | 2 | 2 | NA | NA | 80Mi | 1Gi |
PRE-Test | 1 | 1 | 0.5 | 2 | 1 | 1 | 8 | 80Mi | 1Gi |
Query Service | 1 | 2 | 1 | 1 | 1 | 1 | 2 | 80Mi | 1Gi |
Soap Connector | 2 | 4 | 4 | 4 | 2 | 2 | 8 | 80Mi | 1Gi |
NWDAF Agent | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 80Mi | 1Gi |
Alternate Route Service | 2 | 2 | 4 | 4 | 1 | 2 | 5 | 80Mi | 1Gi |
Binding Service | 6 | 6 | 8 | 8 | 1 | 2 | 8 | 80Mi | 1Gi |
Bulwark Service | 8 | 8 | 6 | 6 | 2 | 2 | 8 | 80Mi | 1Gi |
Egress Gateway | 4 | 4 | 6 | 6 | 2 | 2 | 5 | 80Mi | 1Gi |
Ingress Gateway | 5 | 5 | 6 | 6 | 2 | 2 | 5 | 80Mi | 1Gi |
LDAP Gateway | 3 | 4 | 2 | 4 | 1 | 2 | 4 | 80Mi | 1Gi |
Policy Data Source (PDS) | 7 | 7 | 8 | 8 | 1 | 2 | 8 | 80Mi | 1Gi |
Notifier Service | 1 | 2 | 1 | 1 | 2 | 2 | 8 | 80Mi | 1Gi |
Usage Monitoring | 4 | 5 | 3 | 4 | 2 | 2 | 4 | 80Mi | 1Gi |
Diameter Connector | 4 | 4 | 1 | 2 | 1 | 2 | 8 | 80Mi | 1Gi |
Diameter Gateway | 4 | 4 | 1 | 2 | 1 | 2 | Footnote 1 | 80Mi | 1Gi |
PCRF-Core | 8 | 8 | 8 | 8 | 2 | 2 | | 80Mi | 1Gi |
AM Service | 8 | 8 | 8 | 8 | 1 | 2 | | 80Mi | 1Gi |
SM Service | 7 | 7 | 10 | 10 | 2 | 2 | | 80Mi | 1Gi |
UE Service | 8 | 8 | 6 | 6 | 2 | 2 | | 80Mi | 1Gi |
UDR-Connector | 6 | 6 | 4 | 4 | 2 | 2 | | 80Mi | 1Gi |
CHF-Connector | 6 | 6 | 4 | 4 | 2 | 2 | | 80Mi | 1Gi |
Footnote 1
The maximum replica count per service should be set based on the required TPS and other dimensioning factors.
Upgrade resources should be taken into account during dimensioning. The default upgrade resource requirement is 25% above the maximum replica count, rounded up to the next integer. For example, if a service has a maximum replica count of 8, the 25% upgrade overhead results in additional resources equivalent to 2 pods.
Updating CPU and Memory for Microservices
To update the CPU and memory of a microservice, modify the resource details and the minReplicas and maxReplicas parameters in the occnp_custom_values_24.3.0.yaml file. The following sample shows these settings under the ingress-gateway and egress-gateway groups:
ingress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1
egress-gateway:
  #Resource details
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 1
      memory: 2Gi
    target:
      averageCpuUtil: 80
  minReplicas: 1
  maxReplicas: 1
Note:
It is recommended to avoid altering the above mentioned standard resources. Either increasing or decreasing the CPU or memory can result in unpredictable behavior of the pods. Contact My Oracle Support (MOS) for the Min Replicas and Max Replicas count values.
By default, the ephemeral storage resource is enabled during Policy deployment with the request set to 80Mi and the limit set to 1Gi. These values can be updated by modifying the custom-values.yaml file using the global Helm variables logStorage and crictlStorage:
logStorage: 70 #default calculated value 70
crictlStorage: 3 #default calculated value 1
To change the ephemeral storage limit value of a service, ensure that at least one of the global Helm variables logStorage and crictlStorage is non-zero, and modify the ephemeral storage limit of the service in custom-values.yaml.
To change the ephemeral storage request value of services, ensure that at least one of the global Helm variables logStorage and crictlStorage is non-zero; 110% of their sum determines the ephemeral storage request of all the services in custom-values.yaml.
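For illustration, the relevant settings might look as follows in the custom-values.yaml file. This is a sketch: the global variable names come from this section, while the per-service nesting under resources is an assumption to be verified against the delivered file.
global:
  logStorage: 70     # contributes to the calculated ephemeral storage request
  crictlStorage: 3   # keep at least one of the two variables non-zero
ingress-gateway:
  resources:
    limits:
      ephemeral-storage: 1Gi   # per-service ephemeral storage limit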
Note:
In certain scenarios, data collection, such as a full heap dump, requires additional ephemeral storage limit values. In such cases, the ephemeral storage resources must be modified in the pod's deployment.
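One way to raise the limit directly in a pod's deployment is a JSON patch such as the following sketch, where the deployment name occnp-sm-service, the container index 0, and the 4Gi value are hypothetical placeholders:
kubectl -n occnp patch deployment occnp-sm-service --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/ephemeral-storage", "value": "4Gi"}]'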
2.1.3.2 Upgrade
The following upgrade resources are required for each microservice listed in the following table:
Table 2-5 Upgrade
Service Name | CPU Min | CPU Max | Memory Min (Gi) | Memory Max (Gi) | Ephemeral Storage Min | Ephemeral Storage Max | Replica Count |
---|---|---|---|---|---|---|---|
AM Service | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
Audit Service | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
Binding Service | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
Config Server | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
PCRF-Core | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
Policy Data Source (PDS) | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
SM Service | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
UE Service | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
UDR-Connector | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
CHF-Connector | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
Usage Monitoring | 1 | 2 | 1 | 2 | 200Mi | 2Gi | 1 |
2.2 Installation Sequence
This section describes preinstallation, installation, and postinstallation tasks for Policy.
2.2.1 Preinstallation Tasks
Before installing Policy, perform the tasks described in this section.
2.2.1.1 Downloading Policy package
- Log in to My Oracle Support with your credentials.
- Select the Patches and Updates tab.
- In the Patch Search window, click the Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field, and select the Product from the drop-down list.
- From the Release drop-down list, select "Oracle Communications Cloud Native Core, Converged Policy <release_number>".
Where, <release_number> indicates the required release number of Policy.
- Click Search.
The Patch Advanced Search Results list appears.
- Select the required patch from the results.
The Patch Details window appears.
- Click Download.
The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file to download the Policy release package.
2.2.1.2 Pushing the Images to Customer Docker Registry
The Policy deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes. The communication between the pods of Policy services is preconfigured in the Helm charts.
Table 2-6 Docker Images for Policy
Service Name | Image Name | Image Tag |
---|---|---|
Alternate Route Service | alternate_route | 24.3.3 |
AM Service | oc-pcf-am | 24.3.0 |
Application Info Service | oc-app-info | 24.3.5 |
Binding Service | oc-binding | 24.3.0 |
Bulwark Service | oc-bulwark | 24.3.0 |
CM Service | oc-config-mgmt | 24.3.5 |
CM Service | common_config_hook | 24.3.3 |
Config Server Service | oc-config-server | 24.3.5 |
Debug Tool | ocdebug-tools | 24.3.1 |
Diameter Connector | oc-diam-connector | 24.3.5 |
Diameter Gateway | oc-diam-gateway | 24.3.5 |
Egress Gateway | ocegress_gateway | 24.3.3 |
NF Test | nf_test | 24.3.2 |
Notifier Service | oc-notifier | 24.3.0 |
Ingress Gateway | ocingress_gateway | 24.3.3 |
Ingress Gateway/Egress Gateway init configuration | configurationinit | 24.3.3 |
Ingress Gateway/Egress Gateway update configuration | configurationupdate | 24.3.3 |
LDAP Gateway Service | oc-ldap-gateway | 24.3.0 |
Nrf Client Service | nrf-client | 24.3.2 |
NWDAF Agent | oc-nwdaf-agent | 24.3.0 |
PCRF Core Service | oc-pcrf-core | 24.3.0 |
Performance Monitoring Service | oc-perf-info | 24.3.5 |
PolicyDS Service | oc-policy-ds | 24.3.0 |
Policy Runtime Service | oc-pre | 24.3.0 |
Query Service | oc-query | 24.3.5 |
Session State Audit | oc-audit | 24.3.5 |
SM Service | oc-pcf-sm | 24.3.0 |
Soap Connector | oc-soap-connector | 24.3.0 |
UE Service | oc-pcf-ue | 24.3.0 |
Usage Monitoring | oc-usage-mon | 24.3.0 |
User Service | oc-pcf-user | 24.3.0 |
Pushing Images
- Unzip the release package to the location where you want to install Policy:
tar -xvzf occnp-pkg-24.3.0.0.0.tgz
The directory consists of the following:
- occnp-images-24.3.0.tar: Policy image file
- occnp-24.3.0.tgz: Helm chart file
- Readme.txt: Readme text file
- occnp-24.3.0.tgz.sha256: Checksum for the Helm chart tgz file
- occnp-servicemesh-config-24.3.0.tgz.sha256: Checksum for the Service Mesh Helm chart tgz file
- occnp-images-24.3.0.tar.sha256: Checksum for the images tar file
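Optionally, verify the integrity of the extracted files against the bundled checksums. This sketch assumes the .sha256 files follow the standard sha256sum format; if a file contains only a raw hash, compare it manually against the output of sha256sum <file>:
sha256sum -c occnp-24.3.0.tgz.sha256
sha256sum -c occnp-images-24.3.0.tar.sha256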
Run one of the following commands to load the occnp-images-24.3.0.tar file:
docker load --input /IMAGE_PATH/occnp-images-24.3.0.tar
podman load --input /IMAGE_PATH/occnp-images-24.3.0.tar
- To verify if the image is loaded correctly, run one of the
following commands:
docker images
podman images
Verify that the list of images shown in the output matches the list of images in Table 2-6. If the list does not match, reload the image tar file.
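To narrow the output to the Policy images only, a simple filter can be used, for example:
podman images | grep occnp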
- Create a new tag for each imported image and push the image to the customer registry by running the following commands:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
docker push <docker-repo>/<image-name>:<image-tag>
podman tag <image-name>:<image-tag> <podman-repo>/<image-name>:<image-tag>
podman push <podman-repo>/<image-name>:<image-tag>
Where,
<image-name> is the image name.
<image-tag> is the image release number.
<docker-repo> is the Docker registry address, including the port number if the registry has a port attached. This is a repository to store the images.
<podman-repo> is the Podman registry address, including the port number if the registry has a port attached. This is a repository to store the images.
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry over HTTPS; otherwise, the docker push command may fail.
Example for CNE 1.8 and later
podman tag docker.io/occnp/oc-app-info:24.3.5 occne-repo-host:5000/occnp/oc-app-info:24.3.5
podman push occne-repo-host:5000/occnp/oc-app-info:24.3.5
podman tag docker.io/occnp/nf_test:24.3.2 occne-repo-host:5000/occnp/nf_test:24.3.2
podman push occne-repo-host:5000/occnp/nf_test:24.3.2
podman tag docker.io/occnp/oc-policy-ds:24.3.0 occne-repo-host:5000/occnp/oc-policy-ds:24.3.0
podman push occne-repo-host:5000/occnp/oc-policy-ds:24.3.0
podman tag docker.io/occnp/alternate_route:24.3.3 occne-repo-host:5000/occnp/alternate_route:24.3.3
podman push occne-repo-host:5000/occnp/alternate_route:24.3.3
podman tag docker.io/occnp/ocingress_gateway:24.3.3 occne-repo-host:5000/occnp/ocingress_gateway:24.3.3
podman push occne-repo-host:5000/occnp/ocingress_gateway:24.3.3
podman tag docker.io/occnp/oc-pcf-sm:24.3.0 occne-repo-host:5000/occnp/oc-pcf-sm:24.3.0
podman push occne-repo-host:5000/occnp/oc-pcf-sm:24.3.0
podman tag docker.io/occnp/oc-pcf-am:24.3.0 occne-repo-host:5000/occnp/oc-pcf-am:24.3.0
podman push occne-repo-host:5000/occnp/oc-pcf-am:24.3.0
podman tag docker.io/occnp/oc-pcf-ue:24.3.0 occne-repo-host:5000/occnp/oc-pcf-ue:24.3.0
podman push occne-repo-host:5000/occnp/oc-pcf-ue:24.3.0
podman tag docker.io/occnp/oc-audit:24.3.5 occne-repo-host:5000/occnp/oc-audit:24.3.5
podman push occne-repo-host:5000/occnp/oc-audit:24.3.5
podman tag docker.io/occnp/oc-ldap-gateway:24.3.0 occne-repo-host:5000/occnp/oc-ldap-gateway:24.3.0
podman push occne-repo-host:5000/occnp/oc-ldap-gateway:24.3.0
podman tag docker.io/occnp/oc-query:24.3.5 occne-repo-host:5000/occnp/oc-query:24.3.5
podman push occne-repo-host:5000/occnp/oc-query:24.3.5
podman tag docker.io/occnp/oc-pre:24.3.0 occne-repo-host:5000/occnp/oc-pre:24.3.0
podman push occne-repo-host:5000/occnp/oc-pre:24.3.0
podman tag docker.io/occnp/oc-perf-info:24.3.5 occne-repo-host:5000/occnp/oc-perf-info:24.3.5
podman push occne-repo-host:5000/occnp/oc-perf-info:24.3.5
podman tag docker.io/occnp/oc-diam-gateway:24.3.5 occne-repo-host:5000/occnp/oc-diam-gateway:24.3.5
podman push occne-repo-host:5000/occnp/oc-diam-gateway:24.3.5
podman tag docker.io/occnp/oc-diam-connector:24.3.5 occne-repo-host:5000/occnp/oc-diam-connector:24.3.5
podman push occne-repo-host:5000/occnp/oc-diam-connector:24.3.5
podman tag docker.io/occnp/oc-pcf-user:24.3.0 occne-repo-host:5000/occnp/oc-pcf-user:24.3.0
podman push occne-repo-host:5000/occnp/oc-pcf-user:24.3.0
podman tag docker.io/occnp/ocdebug-tools:24.3.1 occne-repo-host:5000/occnp/ocdebug-tools:24.3.1
podman push occne-repo-host:5000/occnp/ocdebug-tools:24.3.1
podman tag docker.io/occnp/oc-config-mgmt:24.3.5 occne-repo-host:5000/occnp/oc-config-mgmt:24.3.5
podman push occne-repo-host:5000/occnp/oc-config-mgmt:24.3.5
podman tag docker.io/occnp/oc-config-server:24.3.5 occne-repo-host:5000/occnp/oc-config-server:24.3.5
podman push occne-repo-host:5000/occnp/oc-config-server:24.3.5
podman tag docker.io/occnp/ocegress_gateway:24.3.3 occne-repo-host:5000/occnp/ocegress_gateway:24.3.3
podman push occne-repo-host:5000/occnp/ocegress_gateway:24.3.3
podman tag docker.io/occnp/nrf-client:24.3.2 occne-repo-host:5000/occnp/nrf-client:24.3.2
podman push occne-repo-host:5000/occnp/nrf-client:24.3.2
podman tag docker.io/occnp/common_config_hook:24.3.3 occne-repo-host:5000/occnp/common_config_hook:24.3.3
podman push occne-repo-host:5000/occnp/common_config_hook:24.3.3
podman tag docker.io/occnp/configurationinit:24.3.3 occne-repo-host:5000/occnp/configurationinit:24.3.3
podman push occne-repo-host:5000/occnp/configurationinit:24.3.3
podman tag docker.io/occnp/configurationupdate:24.3.3 occne-repo-host:5000/occnp/configurationupdate:24.3.3
podman push occne-repo-host:5000/occnp/configurationupdate:24.3.3
podman tag docker.io/occnp/oc-soap-connector:24.3.0 occne-repo-host:5000/occnp/oc-soap-connector:24.3.0
podman push occne-repo-host:5000/occnp/oc-soap-connector:24.3.0
podman tag docker.io/occnp/oc-pcrf-core:24.3.0 occne-repo-host:5000/occnp/oc-pcrf-core:24.3.0
podman push occne-repo-host:5000/occnp/oc-pcrf-core:24.3.0
podman tag docker.io/occnp/oc-binding:24.3.0 occne-repo-host:5000/occnp/oc-binding:24.3.0
podman push occne-repo-host:5000/occnp/oc-binding:24.3.0
podman tag docker.io/occnp/oc-bulwark:24.3.0 occne-repo-host:5000/occnp/oc-bulwark:24.3.0
podman push occne-repo-host:5000/occnp/oc-bulwark:24.3.0
podman tag docker.io/occnp/oc-notifier:24.3.0 occne-repo-host:5000/occnp/oc-notifier:24.3.0
podman push occne-repo-host:5000/occnp/oc-notifier:24.3.0
podman tag docker.io/occnp/oc-usage-mon:24.3.0 occne-repo-host:5000/occnp/oc-usage-mon:24.3.0
podman push occne-repo-host:5000/occnp/oc-usage-mon:24.3.0
podman tag docker.io/occnp/oc-nwdaf-agent:24.3.0 occne-repo-host:5000/occnp/oc-nwdaf-agent:24.3.0
podman push occne-repo-host:5000/occnp/oc-nwdaf-agent:24.3.0
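Rather than running each tag and push pair by hand, the same sequence can be scripted. A minimal sketch, assuming the image list is maintained to match Table 2-6 and the registry is occne-repo-host:5000:
#!/bin/bash
# Tag and push every Policy image to the customer registry.
REGISTRY=occne-repo-host:5000
IMAGES="oc-app-info:24.3.5 nf_test:24.3.2 oc-policy-ds:24.3.0"   # extend with the remaining images from Table 2-6
for IMAGE in ${IMAGES}; do
  podman tag docker.io/occnp/${IMAGE} ${REGISTRY}/occnp/${IMAGE}
  podman push ${REGISTRY}/occnp/${IMAGE}
done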
2.2.1.3 Verifying and Creating Namespace
This section explains how to verify or create a new namespace in the system.
Note:
This is a mandatory procedure; run it before proceeding further with the installation. The namespace created or verified in this procedure is an input for the subsequent procedures.
-
Run the following command to verify if the required namespace already exists in the system:
kubectl get namespaces
In the output of the above command, if the namespace exists, continue with "Creating Service Account, Role and RoleBinding".
-
If the required namespace is not available, create a namespace using the following command:
kubectl create namespace <required namespace>
Where,
<required namespace> is the name of the namespace.
For example, the following command creates the namespace,
occnp
:kubectl create namespace occnp
Sample output:
namespace/occnp created
- Update the global.nameSpace parameter in the occnp_custom_values_24.3.0.yaml file with the namespace created in step 2. Here is a sample configuration snippet from the occnp_custom_values_24.3.0.yaml file:
global:
  #NameSpace where secret is deployed
  nameSpace: occnp
Naming Convention for Namespace
The namespace must:
- start and end with an alphanumeric character
- contain 63 characters or less
- contain only alphanumeric characters or '-'
Note:
It is recommended to avoid using the prefix kube- when creating a namespace. This prefix is reserved for Kubernetes system namespaces.
2.2.1.4 Creating Service Account, Role and RoleBinding
This section is optional and describes how to manually create service account, role, and rolebinding resources. It is required only when the customer needs to create a role, rolebinding, and service account manually before installing Policy.
Note:
The secret(s) should exist in the same namespace where Policy is being deployed. This helps to bind the Kubernetes role with the given service account.
Create Service Account, Role and RoleBinding
- Run the following command to define the global service account by
creating a Policy service account resource file:
vi <occnp-resource-file>
Example:
vi occnp-resource-template.yaml
- Update the occnp-resource-template.yaml with release specific information:
Note:
Update <helm-release> and <namespace> with the respective Policy Helm release name and Policy namespace.
A sample template to update the occnp-resource-template.yaml file is given below:
## Sample template start#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-serviceaccount
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-role
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - nodes
  - events
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: <helm-release>-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-role
subjects:
- kind: ServiceAccount
  name: <helm-release>-serviceaccount
  namespace: <namespace>
## Sample template end#
Where,
<helm-release> is a name provided by the user to identify the Policy Helm deployment.
<namespace> is a name provided by the user to identify the Kubernetes namespace of Policy. All the Policy microservices are deployed in this Kubernetes namespace.
Note:
If you are installing Policy 22.1.0 using CNE 22.2.0 or later versions, change the apiVersion of kind: RoleBinding from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1.
. - Run the following command to create service account, role, and
rolebinding:
kubectl -n <namespace> create -f <occnp-resource-file>
Example:
kubectl -n occnp create -f occnp-resource-template.yaml
- Update the serviceAccountName parameter in the occnp_custom_values_24.3.0.yaml file with the value given in the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see "Configuration for Mandatory Parameters".
Note:
PodSecurityPolicy
kind is required for Pod Security Policy service account. For more information, see Oracle Communications Cloud Native Core, Converged Policy Troubleshooting Guide.
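After creating the resources, a quick read-only check such as the following can confirm they exist; the occnp namespace and the <helm-release> placeholder follow the examples above and must be substituted with your values:
kubectl -n occnp get serviceaccount,role,rolebinding | grep <helm-release>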
2.2.1.5 Creating Service Account, Role and Role Binding for Helm Test
This section describes the procedure to create service account, role, and role binding resources for Helm Test.
Important:
The steps described in this section are optional and users may skip them in any of the following scenarios:
- If the user wants service accounts to be created automatically at the time of deploying CNC Policy.
- If a global service account with associated role and role bindings is already configured, or the user has an in-house procedure to create service accounts.
Create Service Account
To create the global service account, create a YAML file (occnp-sample-helmtestserviceaccount-template.yaml) using the following sample code:
apiVersion: v1
kind: ServiceAccount
metadata:
name: <helm-release>-helmtestserviceaccount
namespace: <namespace>
Where,
<helm-release> is a name provided by the user to identify the Helm deployment.
<namespace> is a name provided by the user to identify the Kubernetes namespace of Policy. All the Policy microservices are deployed in this Kubernetes namespace.
Define Role Permissions
To define permissions using roles for the Policy namespace, create a YAML file (occnp-sample-role-template.yaml) using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-role
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - persistentvolumeclaims
  - services
  - endpoints
  - configmaps
  - events
  - secrets
  - serviceaccounts
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
Creating Role Binding Template
To bind the above role with the service account, you must create a role binding. To do so, create a YAML file (occnp-sample-rolebinding-template.yaml) using the following sample code:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: <helm-release>-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-role
subjects:
- kind: ServiceAccount
  name: <helm-release>-helmtestserviceaccount
  namespace: <namespace>
Create resources
Run the following commands to create resources:
kubectl -n <namespace> create -f occnp-sample-helmtestserviceaccount-template.yaml;
kubectl -n <namespace> create -f occnp-sample-role-template.yaml;
kubectl -n <namespace> create -f occnp-sample-rolebinding-template.yaml
Note:
Once the global service account is added, users must add global.helmTestServiceAccountName in the custom-values.yaml file. Otherwise, installation can fail as a result of creating and deleting Custom Resource Definitions (CRDs).
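For illustration, the parameter might be set as follows in the custom-values.yaml file. This is a sketch: the account name assumes the sample template above with a Helm release named occnp.
global:
  helmTestServiceAccountName: occnp-helmtestserviceaccount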
2.2.1.6 Configuring Database, Creating Users, and Granting Permissions
This section explains how database administrators can create users and database in a single and multisite deployment.
Policy has five databases (Provisional, State, Release, Leaderpod, and NRF Client Database) and two users (Application and Privileged).
Note:
- Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
- While performing a fresh installation, if Policy is already deployed, purge the deployment and remove the database and users that were used for the previous deployment. For uninstallation procedure, see the Uninstalling Policy section.
Policy Databases
For Policy applications, five types of databases are required:
- Provisional Database: The Provisional Database contains configuration information. The same configuration must be done on each site by the operator. Both the Privileged User and the Application User have access to this database. In case of georedundant deployments, each site must have a unique Provisional Database. Policy sites can access only the information in their unique Provisional Database.
For example:
For example:
- For Site 1: occnp_config_server_site1
- For Site 2: occnp_config_server_site2
- For Site 3: occnp_config_server_site3
- State Database: This database maintains the running state of Policy sites and has information of subscriptions, pending notification triggers, and availability data. It is replicated and the same configuration is maintained by all Policy georedundant sites. Both Privileged User and Application User have access to this database.
- Release Database: This database maintains release version state, and it is used during upgrade and rollback scenarios. Only Privileged User has access to this database.
- Leaderpod Database: This database stores the leader and follower information when PDB is enabled for microservices that require a single pod to be up across all instances. The configuration of this database must be done on each site. In case of georedundant deployments, each site must have a unique Leaderpod database.
For example:
- For Site 1: occnp_leaderPodDb_site1
- For Site 2: occnp_leaderPodDb_site2
- For Site 3: occnp_leaderPodDb_site3
Note:
This database is used only when nrf-client-nfmanagement.enablePDBSupport is set to true in the occnp_custom_values_24.3.0.yaml file. For more information, see NRF Client Configuration.
- NRF Client Database: This database is used to store discovery cache tables, and it also supports NRF Client features. Only the Privileged User has access to this database, and it is used only when the caching feature is enabled. In case of georedundant deployments, each site must have a unique NRF Client database, and its configuration must be done on each site.
For example:
- For Site 1: occnp_nrf_client_site1
- For Site 2: occnp_nrf_client_site2
- For Site 3: occnp_nrf_client_site3
Policy Users
There are two types of Policy database users with different set of permissions:
- Privileged User: This user has a complete set of permissions. This user can perform create, alter, or drop operations on tables to perform install, upgrade, rollback, or delete operations.
Note:
In examples given in this document, the Privileged User's username is 'occnpadminusr' and password is 'occnpadminpasswd'.
- Application User: This user has a limited set of permissions and is used by the Policy application to handle service operations. This user can create, insert, update, get, or remove records. This user cannot alter or drop the database or tables.
Note:
In examples given in this document, the Application User's username is 'occnpusr' and password is 'occnppasswd'.
Table 2-7 Policy Default Database Names
Service Name | Default Database Name | Database Type | Applicable for |
---|---|---|---|
SM Service | occnp_pcf_sm | State | Converged Policy and PCF |
AM Service | occnp_pcf_am | State | Converged Policy and PCF |
PCRF Core Service | occnp_pcrf_core | State | Converged Policy and PCRF |
Binding Service | occnp_binding | State | Converged Policy and PCF |
PDS Service | occnp_policyds | State | Converged Policy and PCF |
UE Service | occnp_pcf_ue | State | Converged Policy and PCF |
CM Service | occnp_commonconfig | Provisional | Converged Policy, PCF, and PCRF |
Config Server Service | occnp_config_server | Provisional | Converged Policy, PCRF, and PCF |
Audit Service | occnp_audit_service | Provisional | Converged Policy and PCF |
Usage Monitoring | occnp_usagemon | Provisional | Converged Policy, PCF, and PCRF |
NRF Client | | Provisional | Converged Policy and PCF |
NWDAF Agent | occnp_pcf_nwdaf_agent | Provisional | Converged Policy and PCF |
Perf Info Service | occnp_overload | Leaderpod | Converged Policy, PCRF, and PCF |
Release | occnp_release | Release | |
2.2.1.6.1 Single Site
- Log in to the machine where the SSH keys are stored and which has permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per the conditions explained in the next step.
Example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command may vary from system to system depending on the path of the MySQL binary and the root user and password. After running this command, enter the password specific to the user mentioned in the command.
- Run the following command to check if both the Policy users already exist:
SELECT User FROM mysql.user;
If the users already exist, go to the next step. Otherwise, create the respective new user or users by following the steps below:
- Run the following command to create a new Privileged User:
CREATE USER '<Policy Privileged-User Name>'@'%' IDENTIFIED BY '<Policy Privileged-User Password>';
Example:
CREATE USER 'occnpadminusr'@'%' IDENTIFIED BY 'occnpadminpasswd';
- Run the following command to create a new Application User:
CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<APPLICATION Password>';
Example:
CREATE USER 'occnpusr'@'%' IDENTIFIED BY 'occnppasswd';
- Run the following command to create a new Privileged User:
- Run the following command to check whether any of the Policy databases already exist:
show databases;
- If any of the previously configured databases are already present, remove them. Otherwise, skip this step.
Run the following command to remove a preconfigured Policy database:
DROP DATABASE if exists <DB Name>;
Example:
DROP DATABASE if exists occnp_audit_service;
- Run the following command to create a new Policy database if it does not exist, or after dropping an existing database:
CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;
The following sample creates all the databases required for a Policy installation:
CREATE DATABASE IF NOT EXISTS occnp_policyds CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_audit_service CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_config_server CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_am CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_sm CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcrf_core CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_release CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_binding CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_ue CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_commonconfig CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_cmservice CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_usagemon CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_overload CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_nwdaf_agent CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_leaderPodDb CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_nrf_client CHARACTER SET utf8;
Note:
Ensure that the database names you use while creating the databases match the names configured in the global parameters of the occnp_custom_values_24.3.0.yaml file. The following example shows how the Policy database names are configured in the occnp_custom_values_24.3.0.yaml file:
global:
  releaseDbName: occnp_release
  nrfClientDbName: occnp_nrf_client
policyds:
  envMysqlDatabase: occnp_policyds
audit-service:
  envMysqlDatabase: occnp_audit_service
config-server:
  envMysqlDatabase: occnp_config_server
am-service:
  envMysqlDatabase: occnp_pcf_am
sm-service:
  envMysqlDatabase: occnp_pcf_sm
pcrf-core:
  envMysqlDatabase: occnp_pcrf_core
binding:
  envMysqlDatabase: occnp_binding
ue-service:
  envMysqlDatabase: occnp_pcf_ue
cm-service:
  envMysqlDatabase: occnp_cmservice
usage-mon:
  envMysqlDatabase: occnp_usagemon
nwdaf-agent:
  envMysqlDatabase: occnp_pcf_nwdaf_agent
nrf-client-nfmanagement:
  dbConfig:
    leaderPodDbName: occnp_leaderPodDb
- If any of the previously configured database is already present, remove
them. Otherwise, skip this step.
- Grant permissions to users on the database:
Note:
Creation of the database is optional if the grant is scoped to all databases, that is, the database name is not mentioned in the grant command.
- Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
GRANT NDB_STORED_USER ON *.* TO 'occnpadminusr'@'%';
- Run the following commands to grant the Privileged User permissions on all Policy databases:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO '<Policy Privileged-User Name>'@'%';
For example:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_sm.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_am.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_config_server.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_audit_service.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_release.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcrf_core.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_binding.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_policyds.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_ue.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_cmservice.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_commonconfig.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_overload.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_nrf_client.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_nwdaf_agent.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_usagemon.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON mysql.ndb_replication TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON occnp_leaderPodDb.* TO 'occnpadminusr'@'%';
- Run the following command to grant NDB_STORED_USER permissions to the
Application User:
GRANT NDB_STORED_USER ON *.* TO 'occnpusr'@'%';
- Run the following commands to grant the Application User permissions on all Policy databases:
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';
For example:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_pcf_sm.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_pcf_am.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_config_server.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_audit_service.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_pcrf_core.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_binding.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_policyds.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_pcf_ue.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_commonconfig.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_cmservice.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_usagemon.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_pcf_nwdaf_agent.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_overload.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_nrf_client.* TO 'occnpusr'@'%';
- Run the following command to grant NDB_STORED_USER permissions to the
Privileged User:
- Run the following command to verify that the privileged or application users have all the required permissions:
show grants for username;
where username is the name of the privileged or application user.
Example:
show grants for occnpadminusr;
show grants for occnpusr;
- Run the following command to flush privileges:
FLUSH PRIVILEGES;
- Exit from MySQL prompt and SQL nodes.
2.2.1.6.2 Multisite
This section explains how database administrator can create the databases and users for a multisite deployment.
For a Policy georedundant deployment, the database names listed below must be unique for each site. For the remaining databases, the database name must be the same across all the sites.
Note:
Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is up and replication channels are enabled.
Table 2-8 Policy Unique Database names for two site and three site deployment
Two Site Database Names | Three Site Database Names |
---|---|
occnp_config_server_site1, occnp_config_server_site2 | occnp_config_server_site1, occnp_config_server_site2, occnp_config_server_site3 |
occnp_cmservice_site1, occnp_cmservice_site2 | occnp_cmservice_site1, occnp_cmservice_site2, occnp_cmservice_site3 |
occnp_commonconfig_site1, occnp_commonconfig_site2 | occnp_commonconfig_site1, occnp_commonconfig_site2, occnp_commonconfig_site3 |
occnp_leaderPodDb_site1, occnp_leaderPodDb_site2 | occnp_leaderPodDb_site1, occnp_leaderPodDb_site2, occnp_leaderPodDb_site3 |
occnp_overload_site1, occnp_overload_site2 | occnp_overload_site1, occnp_overload_site2, occnp_overload_site3 |
occnp_audit_service_site1, occnp_audit_service_site2 | occnp_audit_service_site1, occnp_audit_service_site2, occnp_audit_service_site3 |
occnp_pcf_nwdaf_agent_site1, occnp_pcf_nwdaf_agent_site2 | occnp_pcf_nwdaf_agent_site1, occnp_pcf_nwdaf_agent_site2, occnp_pcf_nwdaf_agent_site3 |
occnp_nrf_client_site1, occnp_nrf_client_site2 | occnp_nrf_client_site1, occnp_nrf_client_site2, occnp_nrf_client_site3 |
- Log in to the machine where the SSH keys are stored and which has permission to access the SQL nodes of the NDB cluster.
- Connect to the SQL nodes.
- Log in to the MySQL prompt using root permission, or log in as a user who has the permission to create users as per the conditions explained in the next step.
Example:
mysql -h 127.0.0.1 -uroot -p
Note:
This command may vary from system to system depending on the path of the MySQL binary and the root user and password. After running this command, enter the password specific to the user mentioned in the command.
- Run the following command to check if both the Policy users already exist:
SELECT User FROM mysql.user;
If the users already exist, go to the next step. Otherwise, create the respective new user or users by following the steps below:
- Run the following command to create a new Privileged User:
CREATE USER '<Policy Privileged-User Name>'@'%' IDENTIFIED BY '<Policy Privileged-User Password>';
Example:
CREATE USER 'occnpadminusr'@'%' IDENTIFIED BY 'occnpadminpasswd';
- Run the following command to create a new Application User:
CREATE USER '<Application User Name>'@'%' IDENTIFIED BY '<APPLICATION Password>';
Example:
CREATE USER 'occnpusr'@'%' IDENTIFIED BY 'occnppasswd';
Note:
You must create both the users on all the SQL nodes for all georedundant sites. - Run the following command to create a new Privileged User:
- Run the following command to check whether any of the Policy databases already exist:
show databases;
- If any of the previously configured databases are already present, remove them. Otherwise, skip this step.
Caution:
If you have georedundant sites configured, removing the database from any one of the SQL nodes of any cluster removes the database from all georedundant sites.
Run the following command to remove a preconfigured Policy database:
DROP DATABASE if exists <DB Name>;
Example:
DROP DATABASE if exists occnp_audit_service;
- Run the following command to create a new Policy database if it does not exist, or after dropping an existing database:
CREATE DATABASE IF NOT EXISTS <DB Name> CHARACTER SET utf8;
The following sample creates all the databases required for a Policy installation in site1:
CREATE DATABASE IF NOT EXISTS occnp_config_server_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_cmservice_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_commonconfig_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_leaderPodDb_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_overload_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_audit_service_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_nwdaf_agent_site1 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_policyds CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_am CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_sm CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcrf_core CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_release CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_binding CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_ue CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_usagemon CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_nrf_client_site1 CHARACTER SET utf8;
The following sample creates all the databases required for a Policy installation in site2:
CREATE DATABASE IF NOT EXISTS occnp_config_server_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_cmservice_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_commonconfig_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_leaderPodDb_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_overload_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_audit_service_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_nwdaf_agent_site2 CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_policyds CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_am CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_sm CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcrf_core CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_release CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_binding CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_pcf_ue CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_usagemon CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS occnp_nrf_client_site2 CHARACTER SET utf8;
Note:
Ensure that the database names you use while creating the databases match the names configured in the global parameters of the occnp_custom_values_24.3.0.yaml files. The following example shows how the Policy database names are configured in the occnp_custom_values_24.3.0.yaml files for site1 and site2:
global:
  nrfClientDbName: occnp_nrf_client_site1
audit-service:
  envMysqlDatabase: occnp_audit_service_site1
config-server:
  envMysqlDatabase: occnp_config_server_site1
cm-service:
  envMysqlDatabase: occnp_cmservice_site1
nwdaf-agent:
  envMysqlDatabase: occnp_pcf_nwdaf_agent_site1
nrf-client-nfmanagement:
  dbConfig:
    leaderPodDbName: occnp_leaderPodDb_site1
global:
  nrfClientDbName: occnp_nrf_client_site2
audit-service:
  envMysqlDatabase: occnp_audit_service_site2
config-server:
  envMysqlDatabase: occnp_config_server_site2
cm-service:
  envMysqlDatabase: occnp_cmservice_site2
nwdaf-agent:
  envMysqlDatabase: occnp_pcf_nwdaf_agent_site2
nrf-client-nfmanagement:
  dbConfig:
    leaderPodDbName: occnp_leaderPodDb_site2
- If any of the previously configured database is already present, remove
them. Otherwise, skip this step.
- Grant permissions to users on the database:
Note:
- Run this step on all the SQL nodes for each Policy standalone site in a georedundant deployment.
- Creation of the database is optional if the grant is scoped to all databases, that is, the database name is not mentioned in the grant command.
- Run the following command to grant NDB_STORED_USER permissions to the Privileged User:
GRANT NDB_STORED_USER ON *.* TO 'occnpadminusr'@'%';
- Run the following commands to grant the Privileged User permissions on all Policy databases:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON <DB Name>.* TO '<Policy Privileged-User Name>'@'%';
For example, for site1:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_config_server_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_audit_service_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_cmservice_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_commonconfig_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_overload_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_nwdaf_agent_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_nrf_client_site1.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON occnp_leaderPodDb_site1.* TO 'occnpadminusr'@'%';
For example, for site2:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_config_server_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_audit_service_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_cmservice_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_commonconfig_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_overload_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_pcf_nwdaf_agent_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, REFERENCES, INDEX ON occnp_nrf_client_site2.* TO 'occnpadminusr'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE ON occnp_leaderPodDb_site2.* TO 'occnpadminusr'@'%';
- Run the following command to grant NDB_STORED_USER permissions to the Application User:
GRANT NDB_STORED_USER ON *.* TO 'occnpusr'@'%';
- Run the following commands to grant the Application User permissions on all Policy databases:
GRANT SELECT, INSERT, LOCK TABLES, DELETE, UPDATE, REFERENCES, EXECUTE ON <DB Name>.* TO '<Application User Name>'@'%';
For example, in Policy site1:
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_cmservice_site1.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_config_server_site1.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_audit_service_site1.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_commonconfig_site1.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_pcf_nwdaf_agent_site1.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_overload_site1.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_nrf_client_site1.* TO 'occnpusr'@'%';
For example, in Policy site2:
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_cmservice_site2.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_config_server_site2.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON occnp_audit_service_site2.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_commonconfig_site2.* TO 'occnpusr'@'%';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON occnp_pcf_nwdaf_agent_site2.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_overload_site2.* TO 'occnpusr'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ON occnp_nrf_client_site2.* TO 'occnpusr'@'%';
- Run the following command to verify that the privileged or application users have all the required permissions:
show grants for username;
where username is the name of the privileged or application user.
Example:
show grants for occnpadminusr;
show grants for occnpusr;
- Run the following command to flush privileges:
FLUSH PRIVILEGES;
- Exit from MySQL prompt and SQL nodes.
2.2.1.7 Configuring Kubernetes Secret for Accessing Database
This section explains how to configure Kubernetes secrets for accessing the Policy database.
2.2.1.7.1 Creating and Updating Secret for Privileged Database User
This section explains how to create and update Kubernetes secret for Privileged User to access the database.
- Run the following command to create a Kubernetes secret:
kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL database password> -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Privileged MySQL database username> is the username of the Privileged User.
<Privileged MySQL database password> is the password of the Privileged User.
<Namespace> is the namespace of the Policy deployment.
Note:
Note down the command used during the creation of the Kubernetes secret. This command is used for updating the secrets in the future.
For example:
kubectl create secret generic occnp-privileged-db-pass --from-literal=mysql-username=occnpadminusr --from-literal=mysql-password=occnpadminpasswd -n occnp
- Run the following command to verify the secret created:
kubectl describe secret <Privileged User secret name> -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Namespace> is the namespace of Policy deployment.
For example:
kubectl describe secret occnp-privileged-db-pass -n occnp
Sample output:
Name:         occnp-privileged-db-pass
Namespace:    occnp
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
mysql-password:  10 bytes
mysql-username:  17 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace>". After the update, the command is as follows:
kubectl create secret generic <Privileged User secret name> --from-literal=mysql-username=<Privileged MySQL database username> --from-literal=mysql-password=<Privileged MySQL database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
Where,
<Privileged User secret name> is the secret name of the Privileged User.
<Privileged MySQL database username> is the username of the Privileged User.
<Privileged MySQL database password> is the password of the Privileged User.
<Namespace> is the namespace of Policy deployment.
- Run the updated command. The following message is displayed:
secret/<Privileged User secret name> replaced
Where,
<Privileged User secret name>
is the updated secret name of the Privileged User.
2.2.1.7.2 Creating and Updating Secret for Application Database User
This section explains how to create and update Kubernetes secret for application user to access the database.
- Run the following command to create Kubernetes secret:
kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL database password> -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Application MySQL database username> is the username of the Application User.
<Application MySQL database password> is the password of the Application User.
<Namespace> is the namespace of Policy deployment.
Note:
Note down the command used during the creation of Kubernetes secret. This command is used for updating the secrets in future.
For example:
kubectl create secret generic occnp-db-pass --from-literal=mysql-username=occnpusr --from-literal=mysql-password=occnppasswd -n occnp
- Run the following command to verify the secret created:
kubectl describe secret <Application User secret name> -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Namespace> is the namespace of Policy deployment.
For example:
kubectl describe secret occnp-db-pass -n occnp
Sample output:
Name:         occnp-db-pass
Namespace:    occnp
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
mysql-password:  10 bytes
mysql-username:  17 bytes
- Update the command used in step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace>". After the update, the command is as follows:
kubectl create secret generic <Application User secret name> --from-literal=mysql-username=<Application MySQL database username> --from-literal=mysql-password=<Application MySQL database password> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
Where,
<Application User secret name> is the secret name of the Application User.
<Application MySQL database username> is the username of the Application User.
<Application MySQL database password> is the password of the Application User.
<Namespace> is the namespace of Policy deployment.
- Run the updated command. The following message is displayed:
secret/<Application User secret name> replaced
Where,
<Application User secret name>
is the updated secret name of the Application User.
2.2.1.7.3 Creating Secret for Support of TLS in Diameter Gateway
- Run the following command to create Kubernetes
secret:
kubectl create secret generic <TLS_SECRET_NAME> --from-file=<TLS_RSA_PRIVATE_KEY_FILENAME/TLS_ECDSA_PRIVATE_KEY_FILENAME> --from-file=<TLS_CA_BUNDLE_FILENAME> --from-file=<TLS_RSA_CERTIFICATE_FILENAME/TLS_ECDSA_CERTIFICATE_FILENAME> -n <Namespace of OCCNP deployment>
For example:
kubectl create secret generic dgw-tls-secret --from-file=dgw-key.pem --from-file=ca-cert.cer --from-file=dgw-cert.crt -n vega-ns6
Where,
dgw-key.pem is the private key of Diameter Gateway (generated using either RSA or ECDSA).
dgw-cert.crt is the public key certificate of Diameter Gateway (generated using either RSA or ECDSA).
ca-cert.cer is the trust chain certificate file, either an Intermediate CA or a Root CA.
dgw-tls-secret is the default name of the secret.
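You can verify that the secret contains all three files before referencing it in the Diameter Gateway configuration. This is an illustrative check following the same pattern used for the database secrets; the secret name and namespace are taken from the example above:
kubectl describe secret dgw-tls-secret -n vega-ns6
The Data section of the output should list dgw-key.pem, dgw-cert.crt, and ca-cert.cer.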
2.2.1.8 Enabling MySQL based DB Compression
To enable MySQL based DB compression, set the following parameters in the custom values file:
mySqlDbCompressionEnabled: 'true'
mySqlDbCompressionScheme: '1'
Note:
Data compression must be activated only after all sites are upgraded to 22.4.5. Rollback is not possible once data compression is activated.
For more information on DB compression configurations, see PCRF-Core.
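The following is a minimal sketch of how these parameters might appear in the custom values file. The parent key shown here is an assumption for illustration only; confirm the exact location against the PCRF-Core section of your occnp_custom_values_24.3.0.yaml file:
# Illustrative placement; the parent key is an assumption.
pcrf-core:
  mySqlDbCompressionEnabled: 'true'
  mySqlDbCompressionScheme: '1'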
2.2.1.9 Configuring Secrets for Enabling HTTPS
This section explains the steps to create and update the Kubernetes secrets and enable HTTPS at the Ingress and Egress Gateways.
2.2.1.9.1 Managing HTTPS at Ingress Gateway
This section explains the steps to configure secrets for enabling HTTPS in Ingress Gateway. This procedure must be performed before deploying Policy.
Creating and Updating Secrets at Ingress Gateway
Note:
The passwords for TrustStore and KeyStore are stored in respective password files. The process to create private keys, certificates, and passwords is at the discretion of the user or operator. To create the Kubernetes secret for HTTPS, the following files are required:
- PCF Private Key and Certificate (generated using either RSA or ECDSA)
- Trust Chain Certificate file, either an Intermediate CA or Root CA
- TrustStore password file
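If these artifacts are not already available from your PKI process, the following is a minimal, illustrative sketch of producing them with openssl. The subject name, key size, and validity period are assumptions, and a production deployment would typically use a CA-signed certificate rather than the self-signed one shown here:
# Generate an RSA private key and a self-signed certificate (illustrative subject).
openssl genrsa -out rsa_private_key_pkcs1.pem 2048
openssl req -new -x509 -key rsa_private_key_pkcs1.pem -out ssl_rsa_certificate.crt -days 365 -subj "/CN=pcf.example.com"
# Create the KeyStore and TrustStore password files with passwords of your choice.
echo "<keystore-password>" > ssl_keystore.txt
echo "<truststore-password>" > ssl_truststore.txt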
- Run the following command to create secret:
kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCCNP deployment>
Where,
<ocingress-secret-name> is the secret name for Ingress Gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<rsa_private_key_pkcs1.pem> is the RSA private key.
<ssl_truststore.txt> is the SSL TrustStore file.
<ssl_keystore.txt> is the SSL KeyStore file.
<caroot.cer> is the CA root file.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of Policy deployment.
Note:
Note down the command used during the creation of the secret. Use the command for updating the secrets in future.
For example:
kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n occnp
Note:
It is recommended to use the same secret name as mentioned in the example. In case you change <ocingress-secret-name>, update the k8SecretName parameter under the ingressgateway attributes section in the occnp_custom_values_24.3.0.yaml file.
- Run the following command to verify the details of the secret created:
kubectl describe secret <ocingress-secret-name> -n <Namespace of OCCNP deployment>
Where,
<ocingress-secret-name> is the secret name for Ingress Gateway.
<Namespace> is the namespace of Policy deployment.
For example:
kubectl describe secret ocingress-secret -n occnp
- Update the command used in Step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace>". After the update, the command is as follows:
kubectl create secret generic <ocingress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<rsa_private_key_pkcs1.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<caroot.cer> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
For example:
kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n occnp | kubectl replace -f - -n occnp
Note:
The names used in the aforementioned command must be the same as the names provided in the occnp_custom_values_24.3.0.yaml file of the Policy deployment.
- Run the updated command.
- After the secret update is complete, the following message
appears:
secret/<ocingress-secret> replaced
Enabling HTTPS at Ingress Gateway
This step is required only when SSL settings need to be enabled on the Ingress Gateway microservice of Policy.
- Enable the enableIncomingHttps parameter under the Ingress Gateway Global Parameters section in the occnp_custom_values_24.3.0.yaml file. For more information about the enableIncomingHttps parameter, see the global parameters section of the occnp_custom_values_24.3.0.yaml file.
- Configure the following details in the ssl section under ingressgateway attributes, in case you have changed the attributes while creating the secret:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
ingress-gateway:
  # ---- HTTPS Configuration - BEGIN ----
  enableIncomingHttps: true
  service:
    ssl:
      privateKey:
        k8SecretName: occnp-gateway-secret
        k8NameSpace: occnp
        rsa:
          fileName: rsa_private_key_pkcs1.pem
      certificate:
        k8SecretName: occnp-gateway-secret
        k8NameSpace: occnp
        rsa:
          fileName: ocegress.cer
      caBundle:
        k8SecretName: occnp-gateway-secret
        k8NameSpace: occnp
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: occnp-gateway-secret
        k8NameSpace: occnp
        fileName: key.txt
      trustStorePassword:
        k8SecretName: occnp-gateway-secret
        k8NameSpace: occnp
        fileName: trust.txt
- Save the
occnp_custom_values_24.3.0.yaml
file.
2.2.1.9.2 Managing HTTPS at Egress Gateway
This section explains the steps to create and update the Kubernetes secret and enable HTTPS at Egress Gateway.
Creating and Updating Secrets at Egress Gateway
Note:
The passwords for TrustStore and KeyStore are stored in respective password files. The process to create private keys, certificates, and passwords is at the discretion of the user or operator. To create the Kubernetes secret for HTTPS, the following files are required:
- PCF Private Key and Certificate (generated using either RSA or ECDSA)
- Trust Chain Certificate file, either an Intermediate CA or Root CA
- TrustStore password file
- Run the following command to create secret:
kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> -n <Namespace of OCCNP deployment>
Where,
<ocegress-secret-name> is the secret name for Egress Gateway.
<ssl_ecdsa_private_key.pem> is the ECDSA private key.
<ssl_rsa_private_key.pem> is the RSA private key.
<ssl_truststore.txt> is the SSL TrustStore file.
<ssl_keystore.txt> is the SSL KeyStore file.
<ssl_cabundle.crt> is the CA bundle file.
<ssl_rsa_certificate.crt> is the SSL RSA certificate.
<ssl_ecdsa_certificate.crt> is the SSL ECDSA certificate.
<Namespace> is the namespace of Policy deployment.
Note:
Note down the command used during the creation of the secret. Use the command for updating the secrets in future.
For example:
kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt -n occnp
Note:
It is recommended to use the same secret name as mentioned in the example. In case you change <ocegress-secret-name>, update the k8SecretName parameter under the egressgateway attributes section in the occnp_custom_values_24.3.0.yaml file.
- Run the following command to verify the details of the secret created:
kubectl describe secret <ocegress-secret-name> -n <Namespace of OCCNP deployment>
Where,
<ocegress-secret-name> is the secret name for Egress Gateway.
<Namespace> is the namespace of Policy deployment.
For example:
kubectl describe secret ocegress-secret -n occnp
- Update the command used in Step 1 with the strings "--dry-run -o yaml" and "kubectl replace -f - -n <Namespace>". After the update, the command is as follows:
kubectl create secret generic <ocegress-secret-name> --from-file=<ssl_ecdsa_private_key.pem> --from-file=<ssl_rsa_private_key.pem> --from-file=<ssl_truststore.txt> --from-file=<ssl_keystore.txt> --from-file=<ssl_cabundle.crt> --from-file=<ssl_rsa_certificate.crt> --from-file=<ssl_ecdsa_certificate.crt> --dry-run -o yaml -n <Namespace> | kubectl replace -f - -n <Namespace>
For example:
kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --dry-run -o yaml -n occnp | kubectl replace -f - -n occnp
Note:
The names used in the aforementioned command must be the same as the names provided in the occnp_custom_values_24.3.0.yaml file of the Policy deployment.
- Run the updated command.
- After the secret update is complete, the following message
appears:
secret/<ocegress-secret> replaced
Enabling HTTPS at Egress Gateway
This step is required only when SSL settings need to be enabled on the Egress Gateway microservice of Policy.
- Enable the enableOutgoingHttps parameter under the egressgateway attributes section in the occnp_custom_values_24.3.0.yaml file. For more information about the enableOutgoingHttps parameter, see the Egress Gateway section.
- Configure the following details in the ssl section under egressgateway attributes, in case you have changed the attributes while creating the secret:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
egress-gateway:
  # Enabling it for egress https requests
  enableOutgoingHttps: true
  service:
    ssl:
      privateKey:
        k8SecretName: ocpcf-gateway-secret
        k8NameSpace: ocpcf
        rsa:
          fileName: rsa_private_key_pkcs1.pem
        ecdsa:
          fileName: ssl_ecdsa_private_key.pem
      certificate:
        k8SecretName: ocpcf-gateway-secret
        k8NameSpace: ocpcf
        rsa:
          fileName: ocegress.cer
        ecdsa:
          fileName: ssl_ecdsa_certificate.crt
      caBundle:
        k8SecretName: ocpcf-gateway-secret
        k8NameSpace: ocpcf
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: ocpcf-gateway-secret
        k8NameSpace: ocpcf
        fileName: key.txt
      trustStorePassword:
        k8SecretName: ocpcf-gateway-secret
        k8NameSpace: ocpcf
        fileName: trust.txt
- Save the
occnp_custom_values_24.3.0.yaml
file.
2.2.1.10 Configuring Secrets to Enable Access Token
This section explains how to configure a secret for enabling access token.
Generating KeyPairs for NRF Instances
Important:
It is at the discretion of the user to create the private keys and certificates; this is not in the scope of Policy. This section provides only sample commands to create KeyPairs.
Note:
Here, it is assumed that there are only two NRF instances with the following instance IDs:
- NRF Instance 2: 601aed2c-e314-46a7-a3e6-f18ca02faacc
Example Command to generate KeyPair for NRF Instance 1
Generate a 2048-bit RSA private key
openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)
openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output public key portion in PEM format (so Java can read it)
openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
Output the ECDSA private key portion in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name secp521r1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
# Replace the placeholder "{nrfInstanceId}" with NRF Instance 1's UUID while running the command.
For example:
$ openssl req -x509 -new -out 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -newkey ec:<(openssl ecparam -name secp521r1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
NRF1 (Private key: ecdsa_private_key.key, NRF Public Certificate: 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt)
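Optionally, you can inspect the generated certificate to confirm the subject and SAN before loading it into the Kubernetes secret. This is an illustrative check using standard openssl options:
openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -noout -text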
Example Command to generate KeyPair for NRF Instance 2
Generate a 2048-bit RSA private key
openssl genrsa -out private_key.pem 2048
Convert the private key to PKCS#8 format (so Java can read it)
openssl pkcs8 -topk8 -inform PEM -outform PEM -in private_key.pem -out private_key_pkcs.der -nocrypt
Output public key portion in PEM format (so Java can read it)
openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
Create reqs.conf and place the required content for NRF certificate
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = IN
ST = BLR
L = TempleTerrace
O = Personal
CN = nnrf-001.tmtrflaa.5gc.tmp.com
[v3_req]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, dataEncipherment
subjectAltName = DNS:nnrf-001.tmtrflaa.5gc.tmp.com
#subjectAltName = URI:UUID:6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
#subjectAltName = otherName:UTF8:NRF
Output the ECDSA private key portion in PEM format and the corresponding NRF certificate in the {nrfInstanceId}_ES256.crt file
openssl req -x509 -new -out {nrfInstanceId}_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
# Replace the placeholder "{nrfInstanceId}" with NRF Instance 2's UUID while running the command.
For example:
$ openssl req -x509 -new -out 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -newkey ec:<(openssl ecparam -name prime256v1) -nodes -sha256 -keyout ecdsa_private_key.key -config reqs.conf
NRF2 (Private key: ecdsa_private_key.key, NRF Public Certificate: 601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt)
Enabling and Configuring Access Token
To enable access token validation, configure both Helm-based and REST-based configurations on Ingress Gateway.
Configuration using Helm:
For Helm-based configuration, perform the following steps:- Create a Namespace for Secrets. The namespace is used as an input
to create Kubernetes secret for private keys and public certificates. Create a
namespace using the following command:
kubectl create namespace <required namespace>
Where,
<required namespace> is the name of the namespace.
For example, the following command creates the namespace,
ocpcf
:kubectl create namespace ocpcf
- Create Kubernetes Secret for NRF Public Key. To create a secret
using the Public keys of the NRF instances, run the following
command:
kubectl create secret generic <secret-name> --from-file=<filename.crt> -n <Namespace>
Where,
<secret-name> is the secret name.
<Namespace> is the PCF namespace.
<filename.crt> is the public key certificate and we can have any number of certificates in the secret.
For example:
kubectl create secret generic nrfpublickeysecret --from-file=./664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt --from-file=./601aed2c-e314-46a7-a3e6-f18ca02faacc_ES256.crt -n ocpcf
Note:
In the above command:
- nrfpublickeysecret is the secret name.
- ocpcf is the namespace.
- The .crt files are the public key certificates; the secret can contain any number of certificates.
- Enable access token validation through Helm configuration by setting the Ingress Gateway parameter oauthValidatorEnabled to true. Further, configure the secret and namespace on Ingress Gateway in the OAUTH CONFIGURATION section of the occnp_custom_values_24.3.0.yaml file. The following is a sample Helm configuration. For more information on parameters and their supported values, see OAUTH Configuration.
# ----OAUTH CONFIGURATION - BEGIN ----
oauthValidatorEnabled: false
nfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11
allowedClockSkewSeconds: 0
nrfPublicKeyKubeSecret: 'nrfpublickeysecret'
nrfPublicKeyKubeNamespace: 'ocpcf'
validationType: relaxed
producerPlmnMNC: 123
producerPlmnMCC: 456
producerScope: nsmf-pdusession,nsmf-event-exposure
nfType: PCF
# ----OAUTH CONFIGURATION - END ----
Verifying oAuth Token
The following curl command sends a request to create an SM Policy with a valid OAuth header:
curl --http2-prior-knowledge http://10.75.153.75:30545/npcf-smpolicycontrol/v1/sm-policies -X POST -H 'Content-Type: application/json' -H Authorization:'Bearer eyJ0eXAiOiJKV1QiLCJraWQiOiI2MDFhZWQyYy1lMzE0LTQ2YTctYTNlNi1mMThjYTAyZmFheHgiLCJhbGciOiJFUzI1NiJ9.eyJpc3MiOiI2NjRiMzQ0ZS03NDI5LTRjOGYtYTVkMi1lN2RmYWFhYmE0MDciLCJzdWIiOiJmZTdkOTkyYi0wNTQxLTRjN2QtYWI4NC1jNmQ3MGIxYjAxYjEiLCJhdWQiOiJTTUYiLCJzY29wZSI6Im5zbWYtcGR1c2Vzc2lvbiIsImV4cCI6MTYxNzM1NzkzN30.oGAYtR3FnD33xOCmtUPKBEA5RMTNvkfDqaK46ZEnnZvgN5Cyfgvlr85Zzdpo2lNISADBgDumD_m5xHJF8baNJQ' -d '{"3gppPsDataOffStatus":true,"accNetChId":{"accNetChaIdValue":"01020304","sessionChScope":true},"accessType":"3GPP_ACCESS","dnn":"dnn1","gpsi":"msisdn-81000000002","ipv4Address":"192.168.10.10","ipv6AddressPrefix":"2800:a00:cc01::/64","notificationUri":"http://nf1stub.ocats.svc:8080/smf/notify","offline":true,"online":false,"pduSessionId":1,"pduSessionType":"IPV4","pei":"990000862471854","ratType":"NR","servingNetwork":{"mcc":"450","mnc":"08"},"sliceInfo":{"sd":"abc123","sst":11},"smPoliciesUpdateNotificationUrl":"npcf-smpolicycontrol/v1/sm-policies/{ueId}/notify","subsSessAmbr":{"downlink":"1000000 Kbps","uplink":"10000 Kbps"},"supi":"imsi-450081000000001","chargEntityAddr":{"anChargIpv4Addr":"11.111.10.10"},"InterGrpIds":"group1","subsDefQos":{"5qi":23,"arp":{"priorityLevel":3,"preemptCap":"NOT_PREEMPT","preemptVuln":"NOT_PREEMPTABLE"},"priorityLevel":34},"numOfPackFilter":33,"chargingCharacteristics":"CHARGEING","refQosIndication":true,"qosFlowUsage":"IMS_SIG","suppFeat":"","traceReq":{"traceRef":"23322-ae34a2","traceDepth":"MINIMUM","neTypeList":"32","eventList":"23","collectionEntityIpv4Addr":"12.33.22.11","collectionEntityIpv6Addr":"2001:db8:85a3::37:7334","interfaceList":"e2"},"ueTimeZone":"+08:00","userLocationInfo":{"nrLocation":{"ncgi":{"nrCellId":"51234a243","plmnId":{"mcc":"450","mnc":"08"}},"tai":{"plmnId":{"mcc":"450","mnc":"08"},"tac":"1801"}},"eutraLocation":{"tai":{"plmnId":{"mnc":"08","mcc":"450"},"tac":"1801"},"ecgi":{"plmnId":{"mnc":"08","mcc":"450"},"eutraCellId":"23458da"},"ageOfLocationInformation":233,"ueLocationTimestamp":"2019-03-13T06:44:14.34Z","geographicalInformation":"AAD1234567890123","geodeticInformation":"AAD1234567890123BCEF","globalNgenbId":{"plmnId":{"mnc":"08","mcc":"450"},"n3IwfId":"n3iwfid"}},"n3gaLocation":{"n3gppTai":{"plmnId":{"mnc":"08","mcc":"450"},"tac":"1801"},"n3IwfId":"234","ueIpv4Addr":"11.1.100.1","ueIpv6Addr":"2001:db8:85a3::370:7334","portNumber":30023}}}'
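When troubleshooting token validation failures, it can help to decode the token header and confirm that the kid matches one of the NRF certificate file names stored in the secret. The following is a minimal sketch; the TOKEN variable is a placeholder for the Bearer token shown above, and padding may need to be appended manually because JWT segments are base64url-encoded:
# Placeholder variable holding the access token value.
TOKEN="<access-token>"
# Decode the first dot-separated segment (the JWT header) to inspect "kid" and "alg".
echo "$TOKEN" | cut -d '.' -f1 | base64 -d 2>/dev/null; echo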
2.2.1.11 Configuring Policy to support Aspen Service Mesh
Policy leverages the Platform Service Mesh (for example, Aspen Service Mesh) for all internal and external TLS communication by deploying a sidecar proxy in each pod to intercept all network communication between microservices. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh.
Supported ASM versions: 1.11.x and 1.14.6.
For ASM installation and configuration details, see the official Aspen Service Mesh documentation.
The Aspen Service Mesh (ASM) configurations are categorized as follows:
- Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are part of the NF Helm chart.
- Data Plane: It helps in traffic management, such as handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes such as apiVersion changes between versions. This configuration is done manually using the occnp_custom_values_servicemesh_config_24.3.0.yaml file.
Configuring ASM Data Plane
Data Plane configuration consists of the following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
- Peer Authentication (PA)
- Authorization Policy (AP)
- Virtual Service (VS)
- RequestAuthentication
Note:
Use the occnp_custom_values_servicemesh_config_24.3.0.yaml Helm chart to add or delete the CRDs that you may require due to ASM upgrades, in order to configure features across different releases.
The Data Plane configuration is applicable in the following scenarios:
- NF to NF Communication: During NF to NF communication, where the sidecar is injected on both NFs, each NF requires SE and DR entries to communicate with the corresponding SE and DR of the other NF. Otherwise, the sidecar rejects the communication. All egress communications of NFs must have a configured entry for SE and DR.
Note:
Configure the core DNS with the producer NF endpoint to enable sidecar access for establishing communication between clusters.
- Kube-api-server: A few NFs require access to the Kubernetes API server, which the ASM proxy (mTLS enabled) may block. As per F5 recommendation, the NF needs to add an SE for the Kubernetes API server in its own namespace.
- Envoy Filters: Sidecars rewrite headers with their own default values, so the headers from backend services are lost. Envoy Filters are required to pass the headers from backend services through unchanged.
The Custom Resources (CR) are customized in the following scenarios:
- Service Entry: Enables adding additional entries into Sidecar's internal service registry, so that auto-discovered services in the mesh can access or route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints).
- Destination Rule: Defines policies that apply to traffic intended for service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
- Envoy Filters: Provides a mechanism to customize the Envoy configuration generated by Istio Pilot. Use Envoy Filter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, and so on.
- Peer Authentication: Used for service-to-service authentication to verify the client making the connection.
- Virtual Service: Defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for the traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset or version of it) defined in the registry.
- Request Authentication: Used for end-user authentication to verify the credential attached to the request.
- Policy Authorization: Enables access control on workloads in the mesh. Policy Authorization supports CUSTOM, DENY, and ALLOW actions for access control. When CUSTOM, DENY, and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action. For more details on Istio Authorization Policy, see Istio / Authorization Policy.
Service Mesh Configuration File
A sample occnp_custom_values_servicemesh_config_24.3.0.yaml
is available in
Custom_Templates file. For downloading the file, see Customizing Policy.
Note:
To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects requests, as vDBTier does not support sidecars.
Table 2-9 Supported Fields in CRD
CRD | Supported Fields
---|---
Service Entry | 
Destination Rule | 
Envoy Filters | 
Peer Authentication | 
Virtual Service | 
Request Authentication | 
Policy Authorization | 
2.2.1.11.1 Predeployment Configurations
This section explains the predeployment configuration procedure to install Policy with Service Mesh support.
Creating Policy namespace:
- Verify whether the required namespace already exists in the system:
kubectl get namespaces
- In the output of the above command, check if the required namespace is available. If it is not available, create the namespace using the following command:
kubectl create namespace <namespace>
Where,
<namespace>
is the Policy namespace.For example:
kubectl create namespace occnp
2.2.1.11.2 Installing Service Mesh Configuration Charts
Perform the following steps to configure Service Mesh CRs using the Service Mesh Configuration chart:
- Download the occnp_custom_values_servicemesh_config_24.3.0.yaml file available in the Custom_Templates directory. For downloading the file, see Customizing Policy.
Note:
When Policy is deployed with ASM and cnDBTier is also installed in the same namespace or cluster, you can skip installing service entries and destination rules.
- Configure the occnp_custom_values_servicemesh_config_24.3.0.yaml file as follows: Modify only the "SERVICE-MESH Custom Resource Configuration" section for configuring the CRs as needed. For example, to add or modify a ServiceEntry CR, the required attributes and their values must be configured under the "serviceEntries:" section of "SERVICE-MESH Custom Resource Configuration". You can also comment out the CRs that you do not need.
- For updating Service Entries, make the required changes using the
following sample template:
serviceEntries:
  - hosts: |-
      [ "mysql-connectivity-service.<cndbtiernamespace>.svc.<clustername>" ]
    exportTo: |-
      [ "." ]
    location: MESH_EXTERNAL
    ports:
      - number: 3306
        name: mysql
        protocol: MySQL
    name: ocpcf-to-mysql-external-se-test
  - hosts: |-
      ".*<clustername>"
    exportTo: |-
      [ "." ]
    location: MESH_EXTERNAL
    ports:
      - number: 8090
        name: http2-8090
        protocol: TCP
      - number: 80
        name: HTTP2-80
        protocol: TCP
    name: ocpcf-to-other-nf-se-test
  - hosts: |-
      [ "kubernetes.default.svc.<clustername>" ]
    exportTo: |-
      [ "." ]
    location: MESH_INTERNAL
    addresses: |-
      [ "192.168.200.36" ]
    ports:
      - number: 443
        name: https
        protocol: HTTPS
    name: nf-to-kube-api-server
- For
customizing Destination Rule, make the required changes using the following
sample
template:
# destinationRules:
# - host: "*.<clustername>"
#   mode: DISABLE
#   name: ocpcf-to-other-nf-dr-test
#   sbitimers: true
#   tcpConnectTimeout: "750ms"
#   tcpKeepAliveProbes: 3
#   tcpKeepAliveTime: "1500ms"
#   tcpKeepAliveInterval: "1s"
# - host: mysql-connectivity-service.<clustername>.svc.cluster.local
#   mode: DISABLE
#   name: mysql-occne
#   sbitimers: false
-
For customizing envoyFilters according to the Istio version installed on the Bastion server, use any of the following templates:
For Istio version 1.11.x and 1.14.x
Note:
Istio 1.11.x and 1.14.x support the same template for envoyFilters configurations.
envoyFilters_v_19x_111x:
  - name: set-xfcc-pcf
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: forward_client_cert_details
        configvalue: ALWAYS_FORWARD_ONLY
  - name: serverheaderfilter
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
  - name: custom-http-stream
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
        stream_idle_timeout: "6000ms"
        max_stream_duration: "7000ms"
        patchContext: SIDECAR_OUTBOUND
        networkFilter_listener_port: 8000
  - name: custom-tcpsocket-timeout
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: FILTER_CHAIN
        patchContext: SIDECAR_INBOUND
        operation: MERGE
        transport_socket_connect_timeout: "750ms"
        filterChain_listener_port: 8000
  - name: custom-http-route
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: HTTP_ROUTE
        patchContext: SIDECAR_OUTBOUND
        operation: MERGE
        route_idle_timeout: "6000ms"
        route_max_stream_duration: "7000ms"
        httpRoute_routeConfiguration_port: 8000
        vhostname: "ocpcf.svc.cluster:8000"
  - name: logicaldnscluster
    labelselector: "app.kubernetes.io/instance: ocpcf"
    configpatch:
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamf-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamd-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS
Note:
The parameter vhostname is mandatory when applyTo is HTTP_ROUTE.
Note:
Depending on the Istio version, update the correct value of envoy filters in the following line:
{{- range .Values.envoyFilters_v_19x_111x }}
-
For customizing PeerAuthentication, make the required changes using the following sample template:
peerAuthentication:
  - name: default
    tlsmode: PERMISSIVE
  - name: cm-service
    labelselector: "app.kubernetes.io/name: cm-service"
    tlsmode: PERMISSIVE
  - name: ingress
    labelselector: "app.kubernetes.io/name: occnp-ingress-gateway"
    tlsmode: PERMISSIVE
  - name: diam-gw
    labelselector: "app.kubernetes.io/name: diam-gateway"
    tlsmode: PERMISSIVE
-
To customize the Authorization Policy, make the required changes using the following sample template:
#authorizationPolicies:
#- name: allow-all-provisioning-on-ingressgateway-ap
#  labelselector: "app.kubernetes.io/name: ingressgateway"
#  action: "ALLOW"
#  hosts:
#  - "*"
#  paths:
#  - "/nudr-dr-prov/*"
#  - "/nudr-dr-mgm/*"
#  - "/nudr-group-id-map-prov/*"
#  - "/slf-group-prov/*"
#- name: allow-all-sbi-on-ingressgateway-ap
#  labelselector: "app.kubernetes.io/name: ingressgateway"
#  action: "ALLOW"
#  hosts:
#  - "*"
#  paths:
#  - "/npcf-smpolicycontrol/*"
#  - "/npcf-policyauthorization/*"
#  xfccvalues:
#  - "*DNS=nrf1.site1.com"
#  - "*DNS=nrf2.site2.com"
#  - "*DNS=scp1.site1.com"
#  - "*DNS=scp1.site2.com"
#  - "*DNS=scp1.site3.com"
- VirtualService is required to configure the retry attempts for the destination host. For instance, for error response code 503, the default behavior of Istio is to retry two times. However, if the user wants to configure the number of retry attempts, it can be done using virtualService. To customize the VirtualService, make the required changes using the following sample template. In the following example, the number of retry attempts is set to 0:
#virtualService:
#  - name: scp1site1vs
#    host: "scp1.site1.com"
#    destinationhost: "scp1.site1.com"
#    port: 8000
#    exportTo: |-
#      [ "." ]
#    attempts: "0"
#    timeout: 7s
#  - name: scp1site2vs
#    host: "scp1.site2.com"
#    destinationhost: "scp1.site2.com"
#    port: 8000
#    exportTo: |-
#      [ "." ]
#    retryon: 5xx
#    attempts: "1"
#    timeout: 7s
Where, the host or destinationhost value uses the format <release_name>-<egress_svc_name>.
To get the <egress_svc_name>, run the following command:
kubectl get svc -n <namespace>
For 5xx response codes, set the value of retry attempts to 1, as shown in the following sample:
#  - name: nrfvirtual2
#    host: ocpcf-occnp-egress-gateway
#    destinationhost: ocpcf-occnp-egress-gateway
#    port: 8000
#    exportTo: |-
#      [ "." ]
#    retryon: 5xx
#    attempts: "1"
- Request Authentication is used to configure JWT tokens for OAuth validation. Network functions authenticate the OAuth token sent by consumer network functions by using the public key of the NRF signing certificate, with the service mesh performing the token authentication. To customize the Request Authentication, make the required changes using the following sample template:
requestAuthentication:
#  - name: jwttokenwithjson
#    labelselector: httpbin
#    issuer: "jwtissue"
#    jwks: |-
#      '{
#        "keys": [{
#          "kid": "1",
#          "kty": "EC",
#          "crv": "P-256",
#          "x": "Qrl5t1-Apuj8uRI2o_BP9loqvaBnyM4OPTPAD_peDe4",
#          "y": "Y7vNMKGNAtlteMV-KJIaG-0UlCVRGFHtUVI8ZoXIzRY"
#        }]
#      }'
#  - name: jwttoken
#    labelselector: httpbin
#    issuer: "jwtissue"
#    jwksUri: https://example.com/.well-known/jwks.json
Note:
For requestAuthentication, use either jwks or jwksUri.
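The EC public key material configured in jwks must correspond to the NRF signing certificate generated earlier. The following is an illustrative way (using bash process substitution) to dump the public key and curve parameters from that certificate so they can be transcribed into JWKS form; the certificate file name is taken from the KeyPair example for NRF Instance 1:
# Print the EC public key embedded in the NRF certificate.
openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -pubkey -noout
# Show the curve name and raw public key coordinates.
openssl ec -pubin -in <(openssl x509 -in 664b344e-7429-4c8f-a5d2-e7dfaaaba407_ES256.crt -pubkey -noout) -text -noout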
- Install the Service Mesh Configuration chart as follows: Run the following Helm install command on the namespace where you want to apply the changes:
helm install <helm-release-name> <charts> --namespace <namespace-name> -f <custom-values.yaml-filename>
For example:
helm install occnp-servicemesh-config occnp-servicemesh-config-24.3.0.tgz -n ocpcf -f occnp_custom_values_servicemesh_config_24.3.0.yaml
2.2.1.11.3 Deploying Policy with ASM
- Create a namespace label for auto sidecar injection to automatically add the sidecars in all the pods spawned in the Policy namespace:
kubectl label --overwrite namespace <Namespace> istio-injection=enabled
Where,
<Namespace>
is the Policy namespace.For example:
kubectl label --overwrite namespace ocpcf istio-injection=enabled
- The operator should have special capabilities at the service account level to start the pre-install init container. Examples of such special capabilities:
readOnlyRootFilesystem: false
allowPrivilegeEscalation: true
allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
runAsUser:
  rule: RunAsAny
- Customize the
occnp_custom_values_servicemesh_config_24.3.0.yaml
file for ServiceEntries, DestinationRule, EnvoyFilters, PeerAuthentication, Virtual Service, and Request Authentication.
- Install Policy using the updated occnp_custom_values_servicemesh_config_24.3.0.yaml file.
2.2.1.11.4 Postdeployment ASM configuration
This section explains the postdeployment configurations.
Run the following command to verify the CRs applied in the namespace:
kubectl get <CRD-Name> -n <Namespace>
For example,
kubectl get se,dr,peerauthentication,envoyfilter,vs,authorizationpolicy,requestauthentication -n ocpcf
Sample output for pods:
NAME HOSTS LOCATION RESOLUTION AGE
serviceentry.networking.istio.io/nf-to-kube-api-server ["kubernetes.default.svc.vega"] MESH_INTERNAL NONE 17h
serviceentry.networking.istio.io/vega-ns1a-to-mysql-external-se-test ["mysql-connectivity-service.vega-ns1.svc.vega"] MESH_EXTERNAL NONE 17h
serviceentry.networking.istio.io/vega-ns1a-to-other-nf-se-test ["*.vega"] MESH_EXTERNAL NONE 17h

NAME HOST AGE
destinationrule.networking.istio.io/jaeger-dr occne-tracer-jaeger-query.occne-infra 17h
destinationrule.networking.istio.io/mysql-occne mysql-connectivity-service.vega-ns1.svc.cluster.local 17h
destinationrule.networking.istio.io/prometheus-dr occne-prometheus-server.occne-infra 17h
destinationrule.networking.istio.io/vega-ns1a-to-other-nf-dr-test *.vega 17h

NAME MODE AGE
peerauthentication.security.istio.io/cm-service PERMISSIVE 17h
peerauthentication.security.istio.io/default PERMISSIVE 17h
peerauthentication.security.istio.io/diam-gw PERMISSIVE 17h
peerauthentication.security.istio.io/ingress PERMISSIVE 17h
peerauthentication.security.istio.io/ocats-policy PERMISSIVE 17h

NAME AGE
envoyfilter.networking.istio.io/ocats-policy-xfcc 17h
envoyfilter.networking.istio.io/serverheaderfilter 17h
envoyfilter.networking.istio.io/serverheaderfilter-nf1stub 17h
envoyfilter.networking.istio.io/serverheaderfilter-nf2stub 17h
envoyfilter.networking.istio.io/set-xfcc-pcf 17h

NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/nrfvirtual1 ["vega-ns1a-occnp-egress-gateway"] 17h
Then, perform the steps described in Installing CNC Policy Package.
2.2.1.11.5 Disable ASM
This section describes the steps to disable and delete ASM.
To disable ASM, run the following command:
kubectl label --overwrite namespace ocpcf istio-injection=disabled
Where,
namespace is the deployment namespace used by the helm command.
To check which namespaces have injection enabled or disabled, run the following command:
kubectl get namespace -L istio-injection
If you want to uninstall ASM, disable ASM and then perform the following steps:
- Delete all the pods in the namespace:
kubectl delete pods --all -n <namespace>
- To delete ASM, run the following command:
helm delete <helm-release-name> -n <namespace-name>
where,
<helm-release-name> is the release name used by the helm command. This release name must be the same as the release name used for Service Mesh.
<namespace-name> is the deployment namespace used by the helm command.
For example:
helm delete occnp-servicemesh-config -n ocpcf
- To verify if ASM is disabled, run the following command:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocpcf
2.2.1.12 Anti-affinity Approach to Assign Pods to Nodes
Policy uses the anti-affinity approach to constrain a Pod to run only on the desired set of nodes. Using this approach, you can constrain Pods against labels on other Pods: it allows you to control which nodes your Pods can be scheduled on, based on the labels of Pods already running on that node. The following is a sample anti-affinity configuration:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app.kubernetes.io/name"
operator: In
values:
- {{ template "chart.fullname" .}}
topologyKey: "kubernetes.io/hostname"
- preferredDuringSchedulingIgnoredDuringExecution: Specifies that the scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.
- weight: For each instance of the preferredDuringSchedulingIgnoredDuringExecution affinity type, you can specify a weight between 1 and 100; in the preceding snippet, the weight is 100.
- matchExpressions: The attributes under matchExpressions define the rules for constraining a Pod. Based on the preceding snippet, the scheduler avoids placing a Pod that has the key app.kubernetes.io/name and the value chart.fullname on a worker node that already runs a Pod with the same key and value, as long as other worker nodes are available with a different value for the label kubernetes.io/hostname and no such Pod.
- topologyKey: The key for the node label used to specify the domain. For the anti-affinity approach to work effectively, every node in the cluster must have an appropriate label matching topologyKey. If the label is missing on any of the nodes, the system may exhibit unintended behavior.
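Because the sample uses kubernetes.io/hostname as the topologyKey, you can confirm that every node carries this label before relying on the rule. This is an optional, illustrative check:
# List all nodes along with the value of the kubernetes.io/hostname label.
kubectl get nodes -L kubernetes.io/hostname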
2.2.1.13 Configuring Network Policies
Network Policies allow you to define ingress or egress rules based on Kubernetes resources such as Pod, Namespace, IP, and Port. These rules are selected based on Kubernetes labels in the application. These Network Policies enforce access restrictions for all the applicable data flows except communication from Kubernetes node to pod for invoking container probe.
Note:
Configuring Network Policy is optional. Based on the security requirements, Network Policy can be configured.
For more information on Network Policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Note:
- If traffic between the pods remains blocked or unblocked contrary to the applied Network Policies, check whether any existing policy impacts the same pod or set of pods and alters the overall cumulative behavior.
- If the default ports of services such as Prometheus, Database, or Jaeger are changed, or if the Ingress or Egress Gateway names are overridden, update them in the corresponding Network Policies.
Configuring Network Policies
Network Policies support Container Network Interface (CNI) plugins for cluster networking.
Note:
For any deployment with CNI, ensure that Network Policy is supported.
Following are the various operations that can be performed for Network Policies:
2.2.1.13.1 Installing Network Policies
Prerequisite
Network Policies are implemented by using the network plug-in. To use Network Policies, you must be using a networking solution that supports Network Policy.
Note:
For a fresh installation, it is recommended to install Network Policies before installing Policy. However, if Policy is already installed, you can still install the Network Policies.
To install Network Policies:
- Open the occnp-network-policy-custom-values.yaml file provided in the release package zip file. For downloading the file, see Downloading Policy package and Pushing the Images to Customer Docker Registry.
- The file is provided with the default Network Policies. If required, update the occnp-network-policy-custom-values.yaml file. For more information on the parameters, see Configuration Parameters for Network Policies.
Note:
- To run ATS, uncomment the following policies from occnp-network-policy-custom-values.yaml:
- allow-egress-for-ats
- allow-ingress-to-ats
- To connect with CNC Console, update the following parameter in the allow-ingress-from-console network policy in the occnp-network-policy-custom-values.yaml:
kubernetes.io/metadata.name: <namespace in which CNCC is deployed>
- In the allow-ingress-prometheus policy, the kubernetes.io/metadata.name parameter must contain the value for the namespace where Prometheus is deployed, and the app.kubernetes.io/name parameter value should match the label from the Prometheus pod.
- Run the following command to install the Network Policies:
helm install <helm-release-name> occnp-network-policy/ -n <namespace> -f <custom-value-file>
where:
- <helm-release-name> is the occnp-network-policy Helm release name.
- <custom-value-file> is the occnp-network-policy custom values file.
- <namespace> is the OCCNP namespace.
For example:
helm install occnp-network-policy occnp-network-policy/ -n ocpcf -f occnp-network-policy-custom-values.yaml
Note:
- Connections that were created before installing the Network Policy and still persist are not impacted by the new Network Policy. Only new connections are impacted.
- If you are using the ATS suite along with Network Policies, install Policy and ATS in the same namespace.
- It is highly recommended to run ATS after deploying Network Policies to detect any missing or invalid rules that can impact signaling flows.
2.2.1.13.2 Upgrading Network Policies
To add, delete, or update Network Policies:
- Modify the occnp-network-policy-custom-values.yaml file to update, add, or delete the Network Policy.
- Run the following command to upgrade the Network Policies:
helm upgrade <helm-release-name> occnp-network-policy/ -n <namespace> -f <custom-value-file>
where:
- <helm-release-name> is the occnp-network-policy Helm release name.
- <custom-value-file> is the occnp-network-policy custom values file.
- <namespace> is the OCCNP namespace.
For example:
helm upgrade occnp-network-policy occnp-network-policy/ -n occnp -f occnp-network-policy-custom-values.yaml
2.2.1.13.3 Verifying Network Policies
Run the following command to verify whether the Network Policies are deployed successfully:
kubectl get networkpolicy -n <namespace>
For example:
kubectl get networkpolicy -n occnp
where:
- <namespace> is the Policy namespace.
2.2.1.13.4 Uninstalling Network Policies
Run the following command to uninstall network policies:
helm uninstall <helm-release-name> -n <namespace>
For example:
helm uninstall occnp-network-policy -n occnp
Note:
While using the debug container, it is recommended to uninstall the network policies or update them as required to establish the connections.
2.2.1.13.5 Configuration Parameters for Network Policies
Table 2-10 Supported Kubernetes Resource for Configuring Network Policies

Parameter | Description | Details
---|---|---
apiVersion | This is a mandatory parameter. Specifies the Kubernetes API version for access control. Note: This is the supported API version for network policy. This is a read-only parameter. | Data Type: string. Default Value: networking.k8s.io/v1
kind | This is a mandatory parameter. Represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: string. Default Value: NetworkPolicy

Table 2-11 Supported Parameters for Configuring Network Policies

Parameter | Description | Details
---|---|---
metadata.name | This is a mandatory parameter. Specifies a unique name for the Network Policy. | Data Type: string. Default Value: {{ .metadata.name }}
spec.{} | This is a mandatory parameter. It consists of all the information needed to define a particular network policy in the given namespace. Note: Policy supports the spec parameters defined in "Supported Kubernetes Resource for Configuring Network Policies". | Default Value: NA
For more information, see Network Policies in Oracle Communications Cloud Native Core, Converged Policy User Guide.
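As a shape reference for the parameters described above, the following is a minimal sketch of a NetworkPolicy resource. The policy name, selectors, and namespace label are illustrative assumptions, not the defaults shipped in occnp-network-policy-custom-values.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-prometheus   # illustrative name
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: occne-infra   # assumed Prometheus namespace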
2.2.1.14 Configuring Traffic Segregation
This section provides information on how to configure Traffic Segregation in Policy. For the description of the "Traffic Segregation" feature, see the "Traffic Segregation" section in the "CNC Policy Features" chapter of Oracle Communications Cloud Native Core, Converged Policy User Guide.
Various networks can be created at the time of CNE cluster installation. The following items can be customized at the time of the cluster installation using the cnlb.ini file provided as part of CNE installation:
- Number of network pools
- Number of Egress IPs
- Number of Service IPs/Ingress IPs
- External IPs/subnet
For more information, see Oracle Communications Cloud Native Core, Cloud Native Environment User Guide.
Note:
- The network attachments are deployed as a part of cluster installation only.
- The network attachment name should be unique for all the pods.
- The destination (egress) subnet addresses are known beforehand and defined under the egress_dest variable in the cnlb.ini file to generate Network Attachment Definitions.
Note:
When Policy is deployed with the cnLB feature enabled, the TYPE field for the applicable Policy services remains "LoadBalancer" and the EXTERNAL-IP field remains in pending state. This has no impact on the overall cnLB functionality in the Policy application.
Configuration at Ingress Gateway
Add the required annotations under the ingress-gateway.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file:
ingress-gateway:
deployment:
customExtension:
annotations: {
# Enable this section for service-mesh based installation
# traffic.sidecar.istio.io/excludeOutboundPorts: "9000,8095,8096",
# traffic.sidecar.istio.io/excludeInboundPorts: "9000,8095,8096"
}
Annotation for a single interface
k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>,
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<external IP>","cnlbPort":"<port number>"}]'
Here,
- k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
- oracle.com.cnc/cnlb: Defines the service IP and port configurations that the deployment employs for ingress load balancing.
Where,
- cnlbIp is the front-end IP utilized by the application.
- cnlbPort is the front-end port used in conjunction with the CNLB IP for load balancing.
- backendPortName is the backend port name of the container that needs load balancing, retrievable from the deployment or pod spec of the application.
Note:
If TLS is enabled for Ingress Gateway, use backendPortName as igw-https.
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8,
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-http", "cnlbIp": "10.123.155.16","cnlbPort":"80"}]'
Annotation for two or multiple interfaces
k8s.v1.cni.cncf.io/networks: default/<network interface1>@<network interface1>, default/<network interface2>@<network interface2>,
oracle.com.cnc/cnlb: '[{"backendPortName": "<igw port name>", "cnlbIp": "<network interface1>/<external IP1>, <network interface2>/<external IP2>","cnlbPort":"<port number>"}]',
oracle.com.cnc/ingressMultiNetwork: "true"
Sample annotation for two or multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-int8@nf-sig1-int8,default/nf-sig2-int9@nf-sig2-int9,
oracle.com.cnc/cnlb: '[{"backendPortName": "igw-http", "cnlbIp": "nf-sig1-int8/10.123.155.16,nf-sig2-int9/10.123.155.30","cnlbPort":"80"}]',
oracle.com.cnc/ingressMultiNetwork: "true"
Sample annotation for multiport:
k8s.v1.cni.cncf.io/networks: default/nf-oam-int5@nf-oam-int5,
oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128","cnlbPort": "80"},
{"backendPortName": "admin", "cnlbIp": "10.75.180.128", "cnlbPort":"16687"}]'
In the above example, each item in the list refers to a different backend port name with the same CNLB IP, but the ports for the front end are distinct.
The corresponding container ports in the pod specification are as follows:
ports:
- containerPort: 16686
name: query
protocol: TCP
- containerPort: 16687
name: admin
protocol: TCP
Configuration at Egress Gateway
Add the required annotations under the egress-gateway.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file:
egress-gateway:
deployment:
customExtension:
annotations: {
# Enable this section for service-mesh based installation
# traffic.sidecar.istio.io/excludeOutboundPorts: "9000,8095,8096",
# traffic.sidecar.istio.io/excludeInboundPorts: "9000,8095,8096"
}
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
Configuration at Diameter Gateway
Diameter Gateway uses the ingress-egress type of NAD to enable traffic flow in both ingress and egress directions. Add the required annotations under the diameter-gateway.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file:
diameter-gateway:
deployment:
customExtension:
annotations: {
# Enable this section for service-mesh based installation
# traffic.sidecar.istio.io/excludeOutboundPorts: "9000,5801",
# traffic.sidecar.istio.io/excludeInboundPorts: "9000,5801"
}
k8s.v1.cni.cncf.io/networks: default/<network interface>@<network interface>,
oracle.com.cnc/cnlb: '[{"backendPortName":"diam-signaling", "cnlbIp": "<externalIP>","cnlbPort":"<port number>"}]'
Here,
- k8s.v1.cni.cncf.io/networks: Contains all the network attachment information the pod uses for network segregation.
- oracle.com.cnc/cnlb: Defines the service IP and port configurations that the deployment employs for Diameter Gateway ingress load balancing.
- cnlbIp: The front-end IP utilized by the application.
- cnlbPort: The front-end port used in conjunction with the CNLB IP for load balancing.
- backendPortName: The backend port name of the container that needs load balancing, retrievable from the deployment or pod spec of the application.
Note:
If TLS is enabled for Diameter Gateway, use backendPortName as tls-signaling.
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig1-ie1@nf-sig1-ie1,
oracle.com.cnc/cnlb: '[{"backendPortName":"diam-signaling","cnlbIp":"10.123.155.17","cnlbPort":"3868"}]'
Annotation for two or multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/<network interface1>@<network interface1>, default/<networkinterface2>@<network interface2>,
oracle.com.cnc/cnlb: '[{"backendPortName":"diam-signaling","cnlbIp": "<networkinterface1>/<external IP1>,<networkinterface2>/<externalIP2>","cnlbPort":"<portnumber>"}]',
oracle.com.cnc/ingressMultiNetwork: "true"
Sample annotation for two or multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-sig3-ie1@nf-sig3-ie1,default/nf-sig4-ie1@nf-sig4-ie1,
oracle.com.cnc/cnlb: '[{"backendPortName":"diam-signaling","cnlbIp":"nf-sig3-ie1/10.123.155.16,nf-sig4-ie1/10.123.155.30","cnlbPort":"3868"}]',
oracle.com.cnc/ingressMultiNetwork: "true"
Sample annotation for multiport:
k8s.v1.cni.cncf.io/networks: default/nf-sig3-ie1@nf-sig3-ie1,
oracle.com.cnc/cnlb: '[{"backendPortName": "query", "cnlbIp": "10.75.180.128","cnlbPort": "3868"},
{"backendPortName": "admin", "cnlbIp": "10.75.180.128","cnlbPort":"16687"}]'
In the above sample, each item in the list refers to a different backend port name with the same CNLB IP, but the front-end ports are distinct.
ports:
- containerPort: 16686
name: query
protocol: TCP
- containerPort: 16687
name: admin
protocol: TCP
Configuration at LDAP Gateway
Add the CNLB annotations under the ldap-gateway.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file.
ldap-gateway:
deployment:
customExtension:
annotations: {
}
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
Configuration at PRE Service
Add the CNLB annotations under the pre-service.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file.
pre-service:
deployment:
customExtension:
annotations: {
}
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
Configuration at Notifier
Add the CNLB annotations under the notifier.deployment.customExtension.annotations parameter of the occnp_custom_values_24.3.0_occnp.yaml file.
notifier:
deployment:
customExtension:
annotations: {
}
Sample annotation for a single interface:
k8s.v1.cni.cncf.io/networks: default/nf-sig-egr1@nf-sig-egr1
Sample annotation for multiple interfaces:
k8s.v1.cni.cncf.io/networks: default/nf-oam-egr1@nf-oam-egr1,default/nf-sig-egr1@nf-sig-egr1
For more information about the above-mentioned annotations, see "Configuring Cloud Native Load Balancer (CNLB)" in Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.2.2 Installation Tasks
This section provides the procedure to install Policy.
Note:
- Before installing Policy, you must complete the prerequisites and preinstallation tasks.
- In a georedundant deployment, perform the steps explained in this section on all the georedundant sites.
- In a georedundant Policy deployment, while adding a new Policy site, ensure that its version is the same as the version of the existing Policy sites, as shown below.
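For example, the Policy version deployed on each site can be checked with helm ls before adding the new site. This is a sketch, assuming the occnp release namespace used elsewhere in this guide:
helm ls -n occnp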
2.2.2.1 Installing Policy Package
This section describes the procedure to install Policy package.
To install the Policy package:
- Run the following command to access the extracted Policy package.
cd occnp-<release_number>
- Customize the occnp_custom_values_occnp_24.3.0.yaml or occnp_custom_values_pcf_24.3.0.yaml file (depending on the type of deployment model) with the required deployment parameters. See the Customizing Policy chapter to customize the file.
Note:
- The parameter values mentioned in the custom-values.yaml file override the default values specified in the Helm chart. If the envMysqlDatabase parameter is modified, you must modify the configDbName parameter with the same value.
- The perf-info URLs must be in the correct syntax; otherwise, the perf-info service keeps restarting. The following is a URL example for the bastion server when Policy is deployed on the OCCNE platform. On any other PaaS platform, the URLs should be updated according to the Prometheus and Jaeger query deployments.
# Values provided must match the Kubernetes environment.
perf-info:
  configmapPerformance:
    prometheus: http://occne-prometheus-server.occne-infra.svc/clustername/prometheus
    jaeger: jaeger-agent.occne-infra
    jaeger_query_url: http://jaeger-query.occne-infra/clustername/jaeger
- At least three configuration items must be present in the config map for perf-info; otherwise, perf-info will not work. If Jaeger is not enabled, the jaeger and jaeger_query_url parameters can be omitted. (A quick post-install check is shown after this note.)
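If the perf-info URLs are incorrect, the typical symptom after installation is a perf-info pod stuck in restarts. The following is a sketch of a quick check, assuming the occnp namespace:
kubectl get pods -n occnp | grep perf-info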
- Run the following helm install command:
  - Install Policy using Helm:
helm install -f <custom_file> <release_name> <helm-chart> --namespace <release_namespace> --atomic --timeout 10m
For example:
helm install -f occnp_custom_values_24.3.0.yaml occnp /home/cloud-user/occnp-24.3.0.tgz --namespace occnp --atomic
Where:
- helm_chart is the location of the Helm chart extracted from the occnp-24.3.0.tgz file.
- release_name is the release name used by the helm command.
Note: release_name must not exceed 63 characters.
- release_namespace is the deployment namespace used by the helm command.
- custom_file is the name of the custom values yaml file (including its location).
Optional parameters that can be used in the helm install command:
- --atomic: If this parameter is set, the installation process purges the chart on failure. The --wait flag is set automatically.
- --wait: If this parameter is set, the installation process waits until all pods, PVCs, Services, and the minimum number of pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It waits for as long as --timeout.
- --timeout duration (optional): If not specified, the default value is 300 seconds in Helm. It specifies the time to wait for any individual Kubernetes operation (such as Jobs for hooks). If the helm install command fails at any point to create a Kubernetes object, it internally calls the purge to delete the release after the timeout value elapses. Here, the timeout value is not for the overall installation, but for the automatic purge on installation failure. (See the example after this list.)
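For illustration, the optional flags can be combined with the earlier example as follows. This is a sketch; the 15m timeout is an arbitrary value chosen for the example:
helm install -f occnp_custom_values_24.3.0.yaml occnp /home/cloud-user/occnp-24.3.0.tgz --namespace occnp --atomic --timeout 15m
Because --atomic sets --wait automatically, a failed installation is purged once the timeout elapses.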
Caution:
Do not exit from the helm install command manually. After running the helm install command, it takes some time to install all the services. In the meantime, do not press "Ctrl+C" to exit the helm install command, as it may lead to anomalous behavior.
Note:
You can verify the installation while the install command is running by entering the following command on a separate terminal:
watch kubectl get jobs,pods -n <release_namespace>
Note:
The following warnings must be ignored for Policy installation on CNE 24.1.0, 24.2.0, and 24.3.0:
helm install <release-name> -f <custom.yaml> <tgz-file> -n <namespace>
W0311 11:38:44.824154  554744 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
W0311 11:38:45.528363  554744 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[2]
W0311 11:38:45.684949  554744 warnings.go:70] spec.template.spec.containers[0].ports[4]: duplicate port definition with spec.template.spec.containers[0].ports[2]
W0311 11:38:47.682599  554744 warnings.go:70] spec.template.spec.containers[0].ports[3]: duplicate port definition with spec.template.spec.containers[0].ports[1]
W0909 12:21:54.735046 2509474 warnings.go:70] spec.template.spec.containers[0].env[32]: hides previous definition of "PRRO_JDBC_SERVERS", which may be dropped when using apply.
NAME: <release-name>
LAST DEPLOYED: <Date-Time>
NAMESPACE: <namespace>
STATUS: deployed
REVISION: <N>
- Press "Ctrl+C" to exit watch mode. We should run the
watch
command on another terminal. Run the following command to check the status:For Helm:helm status release_name
2.2.3 Postinstallation Tasks
This section explains the postinstallation tasks for Policy.
2.2.3.1 Verifying Policy Installation
To verify the installation:
- Run the following command to verify the installation status:
helm status <helm-release> -n <namespace>
Where,
<helm-release> is the Helm release name of Policy.
For example:
helm status occnp -n occnp
In the output, if STATUS is showing as deployed, then the deployment is successful.
- Run the following command to verify if the pods are up and
active:
kubectl get jobs,pods -n <namespace>
For example:
kubectl get pod -n occnp
In the output, the STATUS column of all the pods must be Running, and the READY column of all the pods must be n/n, where n is the number of containers in the pod.
- Run the following command to verify if the services are deployed and
active:
kubectl get services -n <namespace>
For example:
kubectl get services -n occnp
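To focus on the services that expose external IP addresses, the output can be filtered. This is a sketch, assuming the occnp namespace:
kubectl get services -n occnp | grep LoadBalancer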
2.2.3.2 Performing Helm Test
This section describes how to perform a sanity check of the Policy installation using Helm test. The pods to be checked are determined by the namespace and label selector configured for the Helm test.
Note:
- Helm test can be performed only with Helm 3.
- If nrf-client-nfmanagement.enablePDBSupport is set to true in the custom-values.yaml file, the Helm test fails. This is expected behavior: because the service runs in active-standby mode, the leader pod (nrf-client-management) is in the ready state while the follower is not, which causes the Helm test to fail.
Before running the Helm test, complete the Helm test configurations under the Helm Test Global Parameters section in the custom-values.yaml file. For more information on Helm test parameters, see Global Parameters.
Run the following command to perform the Helm test:
helm test <helm-release_name> -n <namespace>
where:
helm-release_name is the release name.
namespace is the deployment namespace where Policy is installed.
For example:
helm test occnp -n occnp
Sample output:
Pod occnp-helm-test-test pending
Pod occnp-helm-test-test pending
Pod occnp-helm-test-test pending
Pod occnp-helm-test-test running
Pod occnp-helm-test-test succeeded
NAME: occnp-helm-test
LAST DEPLOYED: Thu May 19 12:22:20 2022
NAMESPACE: occnp-helm-test
STATUS: deployed
REVISION: 1
TEST SUITE: occnp-helm-test-test
Last Started: Thu May 19 12:24:23 2022
Last Completed: Thu May 19 12:24:35 2022
Phase: Succeeded
To view the Helm test logs, run the following command:
helm test <release_name> -n <namespace> --logs
Note:
- Helm test expects all the pods of a given microservice to be in the READY state for a successful result. However, the NRF Client Management microservice uses an Active/Standby model for multi-pod support in the current release. When multi-pod support for the NRF Client Management service is enabled, you can ignore a Helm test failure for the nrf-client-management pod.
- If the Helm test fails, see Oracle Communications Cloud Native Core, Converged Policy Troubleshooting Guide.