2 Installing UDR
This chapter provides information about installing Oracle Communications Cloud Native Core, Unified Data Repository (UDR) in a cloud native environment.
Note:
UDR supports fresh installation and can also be upgraded from 23.1.x, 22.4.x, and 22.3.x. For more information on how to upgrade UDR, see Upgrading Unified Data Repository.
2.1 Prerequisites
Before installing and configuring UDR, ensure that the following prerequisites are met:
2.1.1 Software Requirements
This section lists the software that must be installed before installing UDR.
Table 2-1 Preinstalled Software
Software | Versions |
---|---|
Kubernetes | 1.27.5, 1.26.x, 1.25.x, and 1.24.x |
Helm | 3.12.3, 3.12.0, 3.10.3, and 3.8.2 |
Podman | 4.2.0, 4.0.2, and 3.4.2 |
To check the versions of the required software, run the following commands:
kubectl version
helm version
podman version
helm3 ls -A
The list of additional software items, along with the supported versions and usage, is provided in the following table:
Table 2-2 Additional Software
Software | Version | Required for |
---|---|---|
OpenSearch Dashboard | 2.3.0 | Logging |
OpenSearch | 2.3.0 | Logging |
containerd | 1.7.5 | Container Lifecycle Management |
Fluentd OpenSearch | 1.16.2 | Logging |
MetalLB | 0.13.11 | Load balancer |
Prometheus | 2.44.0 | Metrics and Alerts |
Grafana | 9.5.3 | Metrics and KPIs |
Jaeger | 1.45.0 | Tracing |
Istio | 1.18.2 | Service mesh |
Kyverno | 1.9.0 | Policy management |
cert-manager | 1.12.4 | Certificates management |
Velero | 1.12.0 | Migration, backup and restore for Kubernetes cluster |
2.1.2 Environment Setup Requirements
This section describes the environment setup required for installing UDR.
2.1.2.1 Client Machine Requirements
This section describes the requirements for the client machine, that is, the machine used by the user to run deployment commands.
- Helm repository configured.
- Network access to the Helm repository and Docker image repository.
- Network access to the Kubernetes cluster.
- Required environment settings to run the kubectl, docker, and podman commands. The environment should have privileges to create a namespace in the Kubernetes cluster (see the example checks after this list).
- Helm client installed with the push plugin. Configure the environment in such a manner that the helm install command can deploy the software in the Kubernetes cluster.
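The following checks are an illustrative sketch only (not part of the official procedure) and assume a Bash shell on the client machine; they confirm the namespace privilege and the Helm push plugin called out in the list above.
# Confirm privileges to create a namespace in the target cluster
kubectl auth can-i create namespaces
# Confirm that the Helm push plugin is installed
helm plugin list | grep -i push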
2.1.2.2 Network Access Requirement
The Kubernetes cluster hosts must have network access to the following repositories:
- Local Helm repository: It contains the UDR Helm charts. To check if the Kubernetes cluster hosts can access the local Helm repository, run the following command:
helm repo update
- Local Docker image repository: It contains the UDR Docker images. To check if the Kubernetes cluster hosts can access the local Docker image repository, pull any image with an image tag using the following command:
podman pull <Podman-repo>/<image-name>:<image-tag>
Where:
<Podman-repo> is the IP address or host name of the Podman repository.
<image-name> is the Docker image name.
<image-tag> is the tag assigned to the Docker image used for the UDR pod.
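As an illustrative combined check of both repositories above (the registry host, port, and search keyword are placeholders, not values from this guide), run the following from a cluster host:
# Refresh the Helm repository index and confirm the UDR chart is visible
helm repo update
helm search repo ocudr
# Pull a sample UDR image from the local image registry
podman pull registry.example.com:5000/ocudr/nudr_config:23.4.2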
Note:
Run the kubectl and helm commands on a system based on the deployment infrastructure. For instance, they can be run on a client machine such as a VM, server, local desktop, and so on.
2.1.2.3 Server or Space Requirement
For information about server or space requirements, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.4 Access to OpenStack Environment
The user should have access to an existing OpenStack environment, including the OpenStack desktop. This environment is configured with appropriate resource flavors and network resources that allow its users to allocate resources to the virtual machines.
2.1.2.5 Availability of a Public Key
The user must have a public key for logging into the Bootstrap Host. This key should be placed into the customer OpenStack environment using the Import Key tab on the Launch Instance→Key Pair dialog box or through the Compute→Access and Security.
2.1.2.6 UDR Software
The UDR software package consists of the following:
- Helm charts that reflect the UDR software version, shared as a zipped tar file.
- Docker images of the microservices, shared as a tar file.
Tools Package
It consists of deployment template yaml files for the nudr-migration and nudr-bulk-import services.
2.1.2.7 CNE Requirement
This section is applicable only if you are installing UDR on Cloud Native Environment (CNE).
UDR supports CNE 23.2.x, 23.3.x, and 23.4.x. To check the CNE version, run the following command:
echo $OCCNE_VERSION
Note:
If cnDBTier 23.4.x is used during installation, set the ndb_allow_copying_alter_table parameter to 'ON' in the ocudr_udr_10msub_dbtier_23.4.1_custom_values_23.4.1 or ocudr_slf_37msub_dbtier_23.4.1_custom_values_23.4.1 files before installing UDR or SLF.
For more information about CNE, see Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.
2.1.2.8 cnDBTier Requirement
UDR supports cnDBTier 23.2.x, 23.3.x, and 23.4.x. cnDBTier must be configured and running before installing UDR. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
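A minimal pre-check, assuming cnDBTier was installed with Helm into its own namespace (the namespace below is a placeholder), is to confirm that the cnDBTier release and pods are up before starting the UDR installation:
# Confirm the cnDBTier Helm release is deployed
helm -n <cndbtier-namespace> ls
# Confirm the cnDBTier pods (management, data, and SQL nodes) are Running
kubectl -n <cndbtier-namespace> get pods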
2.1.2.9 OSO Requirement
UDR supports Operations Services Overlay (OSO) 23.4.x, 23.3.x, and 23.2.x, which provides common operations services (Prometheus and components such as Alertmanager and Pushgateway) on a Kubernetes cluster that does not have these common services. For more information on the installation procedure, see Oracle Communications Cloud Native Core, Operations Services Overlay Installation and Upgrade Guide.
2.1.3 Resource Requirements
Note:
The performance and capacity of the UDR system may vary based on the call model, feature or interface configuration, and underlying CNE and hardware environment, including but not limited to the size of the JSON payload, operation type, and traffic model.
Table 2-3 Resource Profile for UDR Deployment
Resource Requirement | Minimum | Minimum (Including side car requirement) | Default | Default (Including side car requirement) | Performance-25K Diameter Only | Performance-10K Policy | Performance-25K SH and 2K SOAP |
---|---|---|---|---|---|---|---|
CPU | 26 CPUs | 48 CPUs | 64 CPUs | 102 CPUs | 71 CPUs | 136 CPUs | 122 CPUs |
Memory | 26 GB | 48 GB | 64 GB | 102 GB | 86 GB | 118 GB | 134 GB |
Ephemeral Storage | 21 GB | 21 GB | 38 GB | 38 GB | 23 GB | 38 GB | 53 GB |
Individual Microservice Resource Requirements | Minimum Resource Requirements for UDR | Minimum Resource Requirements for UDR | Default Resource Requirements for UDR | Default Resource Requirements for UDR | Performance 25K Diameter | 10K Performance Policy | Performance 25K SH and 2K SOAP |
For information about resources required for UDR hooks and UDR tools, see Resource Requirements for UDR Hooks and Resource Requirements for UDR Tools.
Table 2-4 Resource Profile for SLF Deployment
Resource Requirement (Includes side car requirement) | Minimum | Minimum (Including side car requirement) | Default | Default (Including side car requirement) | Performance-5.5K SLF | Performance-36K SLF | Performance-12.1K SLF |
---|---|---|---|---|---|---|---|
CPU | 22 CPUs | 40 CPUs | 47 CPUs | 79 CPUs | 149 CPUs | 265 CPUs | 117 CPUs |
Memory | 18 GB | 36 GB | 47 GB | 78 GB | 133 GB | 161 GB | 84 GB |
Ephemeral Storage | 18 GB | 18 GB | 30 GB | 30 GB | 32 GB | 42 GB | 27 GB |
Individual Microservice Resource Requirements | Minimum Resource Requirements for SLF | Minimum Resource Requirements for SLF | Default Resource Requirements for SLF | Default Resource Requirements for SLF | Performance 5.5K for SLF | Performance 36K for SLF | Performance 12.1K for SLF |
Table 2-5 Resource Profile for EIR Deployment
Resource Requirement (Includes side car requirement) | Minimum | Minimum (Including side car requirement) | Default | Default (Including side car requirement) | Performance-10K EIR Lookup |
---|---|---|---|---|---|
CPU | 22 CPUs | 40 CPUs | 47 CPUs | 80 CPUs | 104 CPUs |
Memory | 18 GB | 36 GB | 47 GB | 79 GB | 90 GB |
Ephemeral Storage | 18 GB | 18 GB | 30 GB | 30 GB | 37 GB |
Individual Microservice Resource Requirements | Minimum Resource Requirements for EIR | Minimum Resource Requirements for EIR | Default Resource Requirements for EIR | Default Resource Requirements for EIR | Performance 10K for EIR |
2.1.3.1 Default Resource Requirements for UDR
Table 2-6 Default Resource Requirements: UDR with Aspen Service Mesh (ASM) Enabled
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU | Total Memory | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-drservice | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dr-provservice | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-sig | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-prov | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-diameterproxy | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
diam-gateway | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-ondemandmigration | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-notify-service | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 GB | 2 | 2 CPU | 2 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 GB | 2 | 2 CPU | 2 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 2 | 2 CPU | 2 GB | 2 GB |
nudr-dbcr-auditor-service | 2 CPUs | 2 GB | 1 GB | 1 | 2 CPU | 2 GB | 1 GB |
Total | - | - | - | - | CPU | Memory | Ephemeral Storage |
Total resources (Considering Main service containers) | - | - | - | - | 56 CPUs | 56 GB | 31 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 8 CPUs | 8 GB | 7 GB |
Total Resources | - | - | - | - | 64 CPUs | 64 GB | 38 GB |
Total side car resources (Applicable for ASM setups) | - | - | - | - | 38 CPUs | 38 GB | NA |
Grand Total for UDR with side car requirement | - | - | - | - | 102 CPUs | 102 GB | 38 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.2 Minimum Resource Requirements for UDR
Table 2-7 Minimum Resource Requirements: UDR with Aspen Service Mesh (ASM) Enabled
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per minimum deployment) | Total CPU | Total Memory | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-drservice | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-dr-provservice | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-config | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-config-server | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
ingressgateway-sig | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
ingressgateway-prov | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
egressgateway | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-diameterproxy | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
diam-gateway | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-ondemandmigration | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-notify-service | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
alternate-route | 2 CPU | 1 GB | 1 GB | 1 | 2 CPU | 2 GB | 1 GB |
perf-info | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
app-info | 0.5 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
nudr-dbcr-auditor-service | 1 CPUs | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
Total | - | - | - | - | CPU | Memory | Ephemeral Storage |
Total UDR resources (Considering default values) | - | - | - | - | 20 CPUs | 20 GB | 16 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 6 CPUs | 6 GB | 5 GB |
Total Resources | - | - | - | - | 26 CPUs | 26 GB | 21 GB |
Total side car resources (Applicable for ASM setups) | - | - | - | - | 22 CPUs | 22 GB | NA |
Grand Total for UDR with side car requirement | - | - | - | - | 48 CPUs | 48 GB | 21 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.3 Performance 25K Diameter
The resources required to deploy diameter based microservices are:
Table 2-8 Resources for Diameter Based Microservices
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage per pod | Replicas | Total CPU | Total Memory | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-diam-gateway | 6 CPUs | 4 GB | 1 GB | 2 | 12 CPUs | 8 GB | 2 GB |
nudr-diameterproxy | 3 CPUs | 4 GB | 1 GB | 19 | 57 CPUs | 76 GB | 19 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 2 CPUs | 2 GB | 2 GB |
Total Resources | - | - | - | - | 71 CPUs | 86 GB | 23 GB |
2.1.3.4 10K Performance Policy
The resources required to deploy 10K Performance Policy are:
Table 2-9 Resource Requirement for 10K Performance Policy
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage per pod | Replicas | Total CPU | Total Memory | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
ingress-gateway | 6 CPUs | 5 GB | 1 GB | 5 | 30 CPUs | 25 GB | 5 GB |
nudr-drservice | 5 CPUs | 4 GB | 1 GB | 10 | 50 CPUs | 40 GB | 10 GB |
nudr-notify-service | 4 CPUs | 4 GB | 1 GB | 7 | 28 CPUs | 25 GB | 7 GB |
egress-gateway | 4 CPUs | 4 GB | 1 GB | 4 | 16 CPUs | 16 GB | 4 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 12 CPUs | 12 GB | 12 GB |
Total Resources | - | - | - | - | 136 CPUs | 118 GB | 38 GB |
2.1.3.5 Performance 25K SH and 2K SOAP
Table 2-10 Performance 25K SH and 2K SOAP
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage per pod | Replicas | Total CPU | Total Memory | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-diam-gateway | 6 CPUs | 4 GB | 1 GB | 3 | 18 CPUs | 12 GB | 3 GB |
nudr-diameterproxy | 3 CPUs | 4 GB | 1 GB | 19 | 57 CPUs | 76 GB | 19 GB |
ingressgateway-sig | 2 CPU | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-prov | 2 CPU | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
egressgateway | 2 CPU | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-drservice | 2 CPU | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dr-provservice | 2 CPU | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
nudr-config-server | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
alternate-route | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
app-info | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPUs | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 14 CPUs | 14 GB | 8 GB |
Total Resources | - | - | - | - | 122 CPUs | 134 GB | 53 GB |
2.1.3.6 Default Resource Requirements for SLF
Table 2-11 Default Resource Requirements: SLF with ASM enabled
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-drservice | 2 CPUs | 2 GB | 1 CPUs | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-dr-provservice | 2 CPUs | 2 GB | 1 CPUs | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-sig | 2 CPUs | 2 GB | 1 CPUs | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
ingressgateway-prov | 2 CPUs | 2 GB | 1 CPUs | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 1 CPUs | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
Total | - | - | - | - | - | - | CPU | Memory | Ephemeral Storage |
SLF resource Requirement | - | - | - | - | - | - | 39 CPUs | 39 GB | 23 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | - | - | 8 CPUs | 8 GB | 7 GB |
SLF Total Resources | - | - | - | - | - | - | 47 CPUs | 47 GB | 30 GB |
Total side car resources | - | - | - | - | - | - | 32 CPUs | 31 GB | NA |
Total Resources | - | - | - | - | - | - | 79 CPUs | 78 GB | 30 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.7 Minimum Resource Requirements for SLF
Table 2-12 Minimum Resource Requirements: SLF with ASM Enabled
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-drservice | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-dr-provservice | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-config | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-config-server | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
ingressgateway-sig | 2 CPUs | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 2 GB | 1 GB |
ingressgateway-prov | 2 CPUs | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 2 GB | 1 GB |
egressgateway | 2 CPUs | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 2 GB | 1 GB |
alternate-route | 2 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 2 GB | 1 GB |
perf-info | 1 CPU | 1GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
app-info | 0.5 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
Total | - | - | - | - | - | - | CPU | Memory | Ephemeral Storage |
SLF resource Requirement | - | - | - | - | - | - | 16 CPUs | 12 GB | 12 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | - | - | 6 CPUs | 6 GB | 6 GB |
SLF Total Resources | - | - | - | - | - | - | 22 CPUs | 18 GB | 18 GB |
Total side car resources | - | - | - | - | - | - | 18 CPUs | 18 GB | NA |
Total Resources | - | - | - | - | - | - | 40 CPUs | 36 GB | 18 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.8 Performance 5.5K for SLF
Note:
This deployment can support 5.7K SLF traffic (200 Provisioning + 5500 SLF Lookup). The subscriber size on cnDBTier used for testing is 12M.
Table 2-13 Resource Requirements - Performance
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-drservice | 5 CPUs | 4 GB | 4 CPUs | 4 GB | 1 GB | 4 | 36 CPUs | 32 GB | 4 GB |
nudr-dr-provservice | 5 CPUs | 4 GB | 4 CPUs | 4 GB | 1 GB | 2 | 18 CPUs | 16 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-sig | 6 CPUs | 4 GB | 4 CPUs | 4 GB | 1 GB | 2 | 20 CPUs | 16 GB | 2 GB |
ingressgateway-prov | 6 CPUs | 4 GB | 4 CPUs | 4 GB | 1 GB | 2 | 20 CPUs | 16 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 2 CPUs | 1 GB | 1 GB | 2 | 8 CPUs | 6 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
Total | - | - | - | - | - | - | CPU | Memory | Ephemeral Storage |
SLF resource Requirement | - | - | - | - | - | - | 77 CPUs | 63 GB | 25 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | - | - | 7 CPUs | 7 GB | 7 GB |
SLF Total Resources | - | - | - | - | - | - | 84 CPUs | 70 GB | 32 GB |
Total side car resources | - | - | - | - | - | - | 65 CPUs | 63 GB | NA |
Total Resources | - | - | - | - | - | - | 149 CPUs | 133 GB | 32 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.9 Performance 36K for SLF
Table 2-14 Resource Requirements - Performance
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-drservice | 7 CPUs | 4 GB | 3 CPUs | 1 GB | 1 GB | 10 | 100 CPUs | 50 GB | 10 GB |
nudr-dr-provservice | 4 CPUs | 4 GB | 3 CPUs | 1 GB | 1 GB | 2 | 14 CPUs | 10 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-sig | 6 CPUs | 4 GB | 3 CPUs | 1 GB | 1 GB | 10 | 90 CPUs | 50 GB | 10 GB |
ingressgateway-prov | 6 CPUs | 4 GB | 3 CPUs | 1 GB | 1 GB | 2 | 18 CPUs | 10 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 2 CPUs | 1 GB | 1 GB | 2 | 8 CPUs | 6 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
Total | - | - | - | - | - | - | CPU | Memory | Ephemeral Storage |
SLF resource Requirement | - | - | - | - | - | - | 173 CPUs | 119 GB | 39 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | - | - | 3 CPUs | 3 GB | 3 GB |
SLF Total Resources | - | - | - | - | - | - | 176 CPUs | 122 GB | 42 GB |
Total side car resources | - | - | - | - | - | - | 89 CPUs | 39 GB | NA |
Total Resources | - | - | - | - | - | - | 265 CPUs | 161 GB | 42 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.10 Performance 12.1K for SLF
Table 2-15 Resource Requirements - Performance
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-drservice | 6 CPUs | 4 GB | 2 CPUs | 1 GB | 1 GB | 3 | 24 CPUs | 15 GB | 3 GB |
nudr-dr-provservice | 4 CPUs | 4 GB | 1 CPU | 1 GB | 1 GB | 2 | 12 CPUs | 10 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 3 GB | 1 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 1 | 3 CPUs | 3 GB | 1 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-sig | 6 CPUs | 4 GB | 4 CPUs | 1 GB | 1 GB | 4 | 40 CPUs | 20 GB | 4 GB |
ingressgateway-prov | 4 CPUs | 4 GB | 2 CPUs | 1 GB | 1 GB | 2 | 12 CPUs | 10 GB | 2 GB |
egressgateway | 1 CPUs | 1 GB | 1 CPUs | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 CPU | 1 GB | 1 GB | 2 | 6 CPUs | 6 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
app-info | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 2 GB | 1 GB |
Total | - | - | - | - | - | - | CPU | Memory | Ephemeral Storage |
SLF resource Requirement | - | - | - | - | - | - | 73 CPUs | 57 GB | 24 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | - | - | 3 CPUs | 3 GB | 3 GB |
SLF Total Resources | - | - | - | - | - | - | 76 CPUs | 60 GB | 27 GB |
Total side car resources | - | - | - | - | - | - | 41 CPUs | 24 GB | NA |
Total Resources | - | - | - | - | - | - | 117 CPUs | 84 GB | 27 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.11 Default Resource Requirements for EIR
Table 2-16 Default Resource Requirements: EIR
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU | Total Memory for the Pod | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-drservice | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-dr-provservice | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
ingressgateway-sig | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
ingressgateway-prov | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
Total | - | - | - | - | CPU | Memory | Ephemeral Storage |
EIR resource Requirement | - | - | - | - | 39 CPUs | 39 GB | 23 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 8 CPUs | 8 GB | 7 GB |
EIR Total Resources | - | - | - | - | 47 CPUs | 47 GB | 30 GB |
Total side car resources, if installed with ASM | - | - | - | - | 33 CPUs | 32 GB | NA |
Total Resources | - | - | - | - | 80 CPUs | 79 GB | 30 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.12 Minimum Resource Requirements for EIR
Table 2-17 Minimum Resource Requirements: EIR
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per minimum deployment) | Total CPU | Total Memory for the Pod | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-drservice | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-dr-provservice | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-config | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-config-server | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
ingressgateway-sig | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 1 GB | 1 GB |
ingressgateway-prov | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 1 GB | 1 GB |
egressgateway | 2 CPUs | 1 GB | 1 GB | 1 | 2 CPUs | 1 GB | 1 GB |
alternate-route | 2 CPU | 1 GB | 1 GB | 1 | 2 CPUs | 1 GB | 1 GB |
perf-info | 1 CPU | 1GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
app-info | 0.5 CPU | 1 GB | 1 GB | 1 | 1 CPUs | 1 GB | 1 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
Total | - | - | - | - | CPU | Memory | Ephemeral Storage |
EIR resource Requirement | - | - | - | - | 16 CPUs | 12 GB | 12 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 6 CPUs | 6 GB | 6 GB |
EIR Total Resources | - | - | - | - | 22 CPUs | 18 GB | 18 GB |
Total side car resources, if installed with ASM | - | - | - | - | 18 CPUs | 18 GB | NA |
Total Resources | - | - | - | - | 40 CPUs | 36 GB | 18 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.13 Performance 10K for EIR
Note:
This deployment can support 10K EIR EIC traffic. The subscriber size on cnDBTier used for testing is 300K.
Table 2-18 Resource Requirements - Performance 10K for EIR
Microservice Name | CPU Per Pod | Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU | Total Memory for the Pod | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|
nudr-drservice | 4 CPUs | 4 GB | 1 GB | 6 | 24 CPUs | 24 GB | 6 GB |
nudr-dr-provservice | 4 CPUs | 4 GB | 1 GB | 2 | 8 CPUs | 8 GB | 2 GB |
nudr-config | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-config-server | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
nudr-nrf-client-nfmanagement | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
ingressgateway-sig | 6 CPUs | 4 GB | 1 GB | 5 | 30 CPUs | 20 GB | 5 GB |
ingressgateway-prov | 6 CPUs | 4 GB | 1 GB | 2 | 12 CPUs | 8 GB | 2 GB |
egressgateway | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
alternate-route | 2 CPUs | 2 GB | 1 GB | 2 | 4 CPUs | 4 GB | 2 GB |
perf-info | 1 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
app-info | 0.5 CPU | 1 GB | 1 GB | 2 | 2 CPUs | 2 GB | 2 GB |
nudr-dbcr-auditor-service | 1 CPU | 1 GB | 1 GB | 1 | 1 CPU | 1 GB | 1 GB |
Total | - | - | - | - | CPU | Memory | Ephemeral Storage |
EIR resource Requirement | - | - | - | - | 97 CPUs | 83 GB | 30 GB |
Additional Resources (Hooks/Init/Update Containers) | - | - | - | - | 7 CPUs | 7 GB | 7 GB |
Total Resources | - | - | - | - | 104 CPUs | 90 GB | 37 GB |
Note:
If the debug tool is enabled, then the additional resource requirements are 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.1.3.14 Resource Requirements for UDR Hooks
Table 2-19 Install, Upgrade, Rollback, Delete or Uninstall, and Test Hooks
Hook Name | CPU Requirement | Memory Requirement | Side car CPU Requirement | Side car Memory Requirement | Ephemeral Storage Requirement |
---|---|---|---|---|---|
nudr-config-server-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-post-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-config-server-post-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-post-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
ingressgateway-pre-upgrade-oauth-validator | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
egressgateway-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
alternate-route-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
perf-info-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
app-info-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-pre-install | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-pre-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-pre-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-pre-delete | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-post-upgrade | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
nudr-nrf-client-nfmanagement-post-rollback | 1 CPU | 1 GB | 1 CPU | 1 GB | 1 GB |
test | 1 CPU | 1 GB | NA | NA | 1 GB |
Note:
- These hooks are applicable for both SLF and UDR deployments with ASM. However, these hooks are executed on demand, for example, during install, uninstall, upgrade, rollback, or helm-test cases.
- All the hooks in the above table except test are cleared once the job is completed.
2.1.3.15 Resource Requirements for UDR Tools
The resources required to install the UDR tools are as follows:
Table 2-20 UDR Tools with ASM Enabled
Microservice Name | CPU Per Pod | Memory Per Pod | Sidecar container CPU Per Pod | Sidecar container Memory Per Pod | Ephemeral Storage Per Pod | Replicas (As per default deployment) | Total CPU (Micro service + Sidecar Containers) | Total Memory for the Pod (Micro service + Sidecar Containers) | Total Ephemeral Storage |
---|---|---|---|---|---|---|---|---|---|
nudr-bulk-import | 4 CPUs | 6 GB | 2 CPUs | 1 GB | 1 GB | 1 | 6 CPUs | 7 GB | 1 GB |
nudr-migration | 4 CPUs | 4 GB | 2 CPUs | 1 GB | 1 GB | 1 | 6 CPUs | 5 GB | 1 GB |
nudr-export-tool | 4 CPUs | 3 GB | 2 CPUs | 1 GB | 1 GB | 1 | 6 CPUs | 5 GB | 1 GB |
Note:
If the debug tool is enabled, then you need additional resources: 1 CPU, 2 GB RAM, and 4 GB ephemeral storage for each pod.
2.2 Installation Sequence
This section describes UDR preinstallation, installation, and postinstallation tasks.
2.2.1 PreInstallation Tasks
Before installing UDR, perform the tasks described in this section:
2.2.1.1 Downloading Unified Data Repository Package
To download the UDR package from My Oracle Support (MOS), perform the following steps:
- Log in to My Oracle Support using your login credentials.
- Select the Patches and Updates tab.
- In the Patch Search console, select Product or Family (Advanced) option.
- Enter Oracle Communications Cloud Native Core - 5G in the Product field and select the product from the Product drop-down list.
- From the Release drop-down list, select Oracle Communications Cloud Native Core Unified Data Repository <release_number>, where <release_number> indicates the required release number of OCUDR.
- Click Search. The Patch Advanced Search Results list appears.
- Select the required patch from the search results. The Patch Details window appears.
- Click Download. The File Download window appears.
- Click the <p********_<release_number>_Tekelec>.zip file to download the package.
2.2.1.2 Pushing the Images to Customer Docker Registry
The UDR deployment package includes ready-to-use images and Helm charts to orchestrate containers in Kubernetes.
- Unzip the release package to the location where you want to install UDR. The package is named as follows: ReleaseName_pkg_Releasenumber.tgz.
- Untar the UDR package file to get the UDR image tar file:
tar -xvf ocudr_pkg_23_4_2_0_0.tgz
This command results in the ocudr_pkg_23_4_2_0_0.tgz directory. The directory consists of the following:
- ocudr_pkg_23_4_2_0_0.tgz
- ocudr-23.4.2.tgz (helm chart)
- ocudr-23.4.2.tgz.sha256 (checksum)
- nudr-bulk-import-23.4.2.tgz (helm chart for bulk import tool)
- nudr-bulk-import-23.4.2.tgz.sha256 (checksum)
- nudr-export-tool-23.4.2.tgz (helm chart for export tool)
- nudr-export-tool-23.4.2.tgz.sha256 (checksum)
- nudr-migration-tool-23.4.2.tgz (helm chart for migration tool)
- nudr-migration-tool-23.4.2.tgz.sha256 (checksum)
- ocudr-servicemesh-config-23.4.2.tgz (service mesh helm chart)
- ocudr-servicemesh-config-23.4.2.tgz.sha256 (checksum)
- ocudr-images-23.4.2.tar (docker images)
- ocudr-images-23.4.2.tar.sha256 (checksum)
- ocudr_custom_configtemplates_23.4.2.zip
- ocudr_custom_configtemplates_23.4.2.zip.sha256
- Readme.txt (Contains cksum and md5sum of tarballs)
- Verify the package content from the Readme.txt file.
- Run one of the following commands to load the ocudr-images-23.4.2.tar file into the Docker or Podman system:
docker load --input /root/ocudr-images-23.4.2.tar
sudo podman load --input /root/ocudr-images-23.4.2.tar
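Relating to the verification step above, the following sketch (assuming the tarballs are in the current directory) computes the same cksum and md5sum values that are listed in Readme.txt for comparison:
# Compute checksums locally and compare them with the values in Readme.txt
cksum ocudr-23.4.2.tgz
md5sum ocudr-images-23.4.2.tar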
- To verify if the image is loaded correctly, run the following command:
docker images | grep ocudr
Note:
If required, re-tag and push the images according to their repository.
The following table lists the Docker images for UDR:
Table 2-21 UDR Images
Service Name | Image | Image Version |
---|---|---|
nudr-drservice | ocudr/nudr_datarepository_service | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-dr-provservice | ocudr/nudr_datarepository_service | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-notify-service | ocudr/nudr_notify_service | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nrf-client | ocudr/nrf-client | 23.4.5 |
 | ocudr/debug-tools | 23.4.2 |
ingressgateway | ocudr/ocingress_gateway | 23.4.7 |
 | ocudr/configurationinit | 23.4.7 |
 | ocudr/configurationupdate | 23.4.7 |
 | ocudr/debug-tools | 23.4.2 |
egressgateway | ocudr/ocegress_gateway | 23.4.7 |
 | ocudr/configurationinit | 23.4.7 |
 | ocudr/configurationupdate | 23.4.7 |
 | ocudr/debug-tools | 23.4.2 |
nudr-config | ocudr/nudr_config | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-config-server | ocudr/ocpm_config_server | 23.4.9 |
 | ocudr/debug-tools | 23.4.2 |
nudr-diameterproxy | ocudr/nudr_diameterproxy | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
diam-gateway | ocudr/nudr_diameterproxy | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-ondemand-migration | ocudr/nudr_ondemand_migration | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
alternate-route | ocudr/alternate_route | 23.4.7 |
 | ocudr/debug-tools | 23.4.2 |
nudr-perf-info | ocudr/perf-info | 23.4.9 |
 | ocudr/debug-tools | 23.4.2 |
nudr-app-info | ocudr/app-info | 23.4.9 |
 | ocudr/debug-tools | 23.4.2 |
nudr-bulk-import | ocudr/nudr-bulk-import | 23.4.2 |
 | ocudr/nudr-xmltocsv | 23.4.2 |
 | ocudr/nudr-pdbitocsv | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-export-tool | ocudr/nudr-export-tool | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-migration | ocudr/nudr_pre_migration_hook | 23.4.2 |
 | ocudr/nudr_migration | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
nudr-dbcr-auditor-service | ocudr/nudr_dbcr_auditor_service | 23.4.2 |
 | ocudr/debug-tools | 23.4.2 |
ingressgateway/egressgateway/alternate-route/perf-info/app-info hooks | ocudr/common_config_hook | 23.4.7 |
nudr-pre-install-hook/nudr-pre-upgrade-hook/nudr-post-upgrade-hook/nudr-post-rollback-hook/nudr-post-install-hook | ocudr/nudr_common_hooks | 23.4.2 |
nudr-bulk-import-pre-upgrade-hook | ocudr/nudr_bulk_import_pre_upgrade_hook | 23.4.2 |
nudr-bulk-import-post-upgrade-hook | ocudr/nudr_bulk_import_post_upgrade_hook | 23.4.2 |
nudr-export-tool-pre-upgrade-hook | ocudr/nudr_export_tool_pre_upgrade_hook | 23.4.2 |
nudr-export-tool-post-upgrade-hook | ocudr/nudr_export_tool_post_upgrade_hook | 23.4.2 |
test | ocudr/nf_test | 23.4.2 |
Note:
The nudr-notify-service and nudr-diameterproxy microservices are not required for SLF deployment, and the nudr-notify-service, nudr-diameterproxy, and nudr-ondemand-migration microservices are not required for EIR deployment. Disable these services by setting the 'enabled' value to 'false' under the nudr-diameterproxy, nudr-notify-service, and nudr-ondemand-migration sections of the custom values file. For more information, see User Configurable Parameter.
- Run one of the following commands to tag the images to the registry:
docker tag <image-name>:<image-tag> <docker-repo>/<image-name>:<image-tag>
sudo podman tag <image-name>:<image-tag> <customer repo>/<image-name>:<image-tag>
Sample Tag Commands:
# podman tag ocudr/nudr_datarepository_service:23.4.2 <customer repo>/nudr_datarepository_service:23.4.2
# podman tag ocudr/nrf-client:23.4.5 <customer repo>/nrf-client:23.4.5
# podman tag ocudr/nudr_common_hooks:23.4.2 <customer repo>/nudr_common_hooks:23.4.2
# podman tag ocudr/nf_test:23.4.2 <customer repo>/nf_test:23.4.2
# podman tag ocudr/nudr_bulk_import:23.4.2 <customer repo>/nudr_bulk_import:23.4.2
# podman tag ocudr/nudr_bulk_import_pre_install_hook:23.4.2 <customer repo>/nudr_bulk_import_pre_install_hook:23.4.2
# podman tag ocudr/nudr_bulk_import_post_upgrade_hook:23.4.2 <customer repo>/nudr_bulk_import_post_upgrade_hook:23.4.2
# podman tag ocudr/nudr_export_tool_pre_install_hook:23.4.2 <customer repo>/nudr_export_tool_pre_install_hook:23.4.2
# podman tag ocudr/nudr_export_tool_post_upgrade_hook:23.4.2 <customer repo>/nudr_export_tool_post_upgrade_hook:23.4.2
# podman tag ocudr/nudr_export_tool:23.4.2 <customer repo>/nudr_export_tool:23.4.2
# podman tag ocudr/nudr_migration:23.4.2 <customer repo>/nudr_migration:23.4.2
# podman tag ocudr/nudr_pre_migration_hook:23.4.2 <customer repo>/nudr_pre_migration_hook:23.4.2
# podman tag ocudr/nudr_config:23.4.2 <customer repo>/nudr_config:23.4.2
# podman tag ocudr/ocpm_config_server:23.4.9 <customer repo>/ocpm_config_server:23.4.9
# podman tag ocudr/ocingress_gateway:23.4.7 <customer repo>/ocingress_gateway:23.4.7
# podman tag ocudr/ocegress_gateway:23.4.7 <customer repo>/ocegress_gateway:23.4.7
# podman tag ocudr/configurationinit:23.4.7 <customer repo>/configurationinit:23.4.7
# podman tag ocudr/configurationupdate:23.4.7 <customer repo>/configurationupdate:23.4.7
# podman tag ocudr/debug-tools:23.4.2 <customer repo>/debug-tools:23.4.2
# podman tag ocudr/alternate_route:23.4.7 <customer repo>/alternate_route:23.4.7
# podman tag ocudr/common_config_hook:23.4.7 <customer repo>/common_config_hook:23.4.7
# podman tag ocudr/perf-info:23.4.9 <customer repo>/perf-info:23.4.9
# podman tag ocudr/app-info:23.4.9 <customer repo>/app-info:23.4.9
# podman tag ocudr/nudr_dbcr_auditor_service:23.4.2 <customer repo>/nudr_dbcr_auditor_service:23.4.2
The following microservices are applicable only in UDR mode:
# podman tag ocudr/nudr_notify_service:23.4.2 <customer repo>/nudr_notify_service:23.4.2
# podman tag ocudr/nudr_diameterproxy:23.4.2 <customer repo>/nudr_diameterproxy:23.4.2
# podman tag ocudr/nudr_diam_gateway:23.4.2 <customer repo>/nudr_diam_gateway:23.4.2
# podman tag ocudr/nudr_ondemand_migration:23.4.2 <customer repo>/nudr_ondemand_migration:23.4.2
Note:
- <customer_repo> is the local Docker registry on the cluster, including the port number if the registry has a port attached to it.
- For OCCNE, copy the package to the Bastion server and use localhost:5000 as CUSTOMER_REPO to tag the images and push them to the Bastion Docker registry.
- You may need to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
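One common approach to the certificate note above, shown only as an illustrative sketch (the registry host, port, and CA file are placeholders, and the directory layout is the standard Docker and Podman convention rather than a value from this guide), is to install the registry CA certificate on the client host:
# Trust a private registry CA for Podman
sudo mkdir -p /etc/containers/certs.d/registry.example.com:5000
sudo cp ca.crt /etc/containers/certs.d/registry.example.com:5000/ca.crt
# Equivalent location for Docker
sudo mkdir -p /etc/docker/certs.d/registry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt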
- Run one of the following commands to push the images to the registry:
docker push <docker-repo>/<image-name>:<image-tag>
# sudo podman push <customer repo>/<image name>:<image version>
Note:
Beginning from OC-CNE 1.9, Docker is replaced by Podman. Run the following sample Podman command to push the Docker image:
Sample Push Commands:
# podman push <customer repo>/readiness_check:23.4.2
# podman push <customer repo>/nudr_datarepository_service:23.4.2
# podman push <customer repo>/nrf-client:23.4.5
# podman push <customer repo>/nudr_common_hooks:23.4.2
# podman push <customer repo>/nf_test:23.4.2
# podman push <customer repo>/nudr_bulk_import:23.4.2
# podman push <customer repo>/nudr_bulk_import_pre_install_hook:23.4.2
# podman push <customer repo>/nudr_bulk_import_post_upgrade_hook:23.4.2
# podman push <customer repo>/nudr_export_tool_pre_install_hook:23.4.2
# podman push <customer repo>/nudr_export_tool_post_upgrade_hook:23.4.2
# podman push <customer repo>/nudr_export_tool:23.4.2
# podman push <customer repo>/nudr_migration:23.4.2
# podman push <customer repo>/nudr_pre_migration_hook:23.4.2
# podman push <customer repo>/nudr_config:23.4.2
# podman push <customer repo>/ocpm_config_server:23.4.9
# podman push <customer repo>/ocingress_gateway:23.4.7
# podman push <customer repo>/ocegress_gateway:23.4.7
# podman push <customer repo>/configurationinit:23.4.7
# podman push <customer repo>/configurationupdate:23.4.7
# podman push <customer repo>/debug-tools:23.4.2
# podman push <customer repo>/alternate_route:23.4.7
# podman push <customer repo>/common_config_hook:23.4.7
# podman push <customer repo>/perf-info:23.4.9
# podman push <customer repo>/app-info:23.4.9
# podman push <customer repo>/nudr_dbcr_auditor_service:23.4.2
The following microservices are applicable only in UDR mode:
# podman push <customer repo>/nudr_notify_service:23.4.2
# podman push <customer repo>/nudr_diameterproxy:23.4.2
# podman push <customer repo>/nudr_diam_gateway:23.4.2
# podman push <customer repo>/nudr_ondemand_migration:23.4.2
- Run the following command to untar the Helm files:
tar -xvzf ocudr-23.4.2.tgz
- To download the ocudr_custom_configtemplates_23.4.2.zip, click the file available on My Oracle Support (MOS).
- Unzip the ocudr_custom_configtemplates_23.4.2.zip file to get the following files:
- ocudr_custom_values_23.4.2.yaml - Custom values for UDR or SLF used during helm installation.
- ocudr_nudr_bulk_import_custom_values_23.4.2.yaml - Custom value file for UDR bulk import tool.
- ocudr_nudr_export_tool_custom_values_23.4.2.yaml - Custom value file for UDR export tool.
- ocudr_nudr_migration_tool_custom_values_23.4.2.yaml - Custom value file for UDR migration tool.
- ocudr_servicemesh_config_custom_values_23.4.0.yaml - Custom values for UDR service mesh charts used during helm installation.
- commonConfig_rollback.py - Rollback scripts for Database Schema rollback.
- RollBack_Schema_15.3.py - Rollback scripts for Schema rollback to 15.3 version specification.
- ocslf_alerts_23.4.0.yaml - SLF Alerts template for CNE 1.8 and below. Applicable for OSO 1.6.x and 22.3.x versions.
- ocslf_alerts_haprom_23.4.0.yaml - SLF Alerts CRD for CNE 1.9 and above. Applicable only for CNE deployments
- ocslf_dashboard_23.4.0.json - SLF KPI and metrics representation template that should be loaded on Grafana.
- ocslf_dashboard_haprom_23.4.0.json - SLF KPI and metrics representation template that should be loaded on Grafana (CNE 1.9 and Above).
- ocslf_kibanaconfig_23.4.0.json - SLF Kibana Template for logging that should be loaded on Kibana.
- ocudr_alerts_23.4.0.yaml - UDR Alerts CRD, Applicable only for CNE deployments.
- ocudr_dashboard_23.4.0.json - UDR KPI and metrics representation template that should be loaded on Grafana.
- oceir_alerts_23.4.0.yaml - EIR Alerts CRD, Applicable only for CNE deployments.
- oceir_dashboard_23.4.0.json - EIR KPI and metrics representation template that should be loaded on Grafana.
- ocslf_mib_23.4.0.mib - SLF top level mib file, where the Objects and their datatypes are defined.
- ocslf_mib_tc_23.4.0.mib - SLF mib file, where the Objects and their data types are defined.
- ocudr_mib_23.4.0.mib - OCUDR top level mib file, where the Objects and their data types are defined.
- ocudr_mib_tc_23.4.0.mib - OCUDR mib file, where the Objects and their data types are defined.
- oceir_mib_23.4.0.mib - EIR top level mib file, where the Objects and their datatypes are defined.
- oceir_mib_tc_23.4.0.mib - EIR mib file, where the Objects and their data types are defined.
- toplevel.mib - Top level MIB file, where the Objects and their data types are defined.
- ocudr_nudr_config_api_23.4.0.yaml - Open API spec file.
- ocudr_mgm_api_23.4.0.yaml - Open API Spec for MGM APIs
- /db_conflict_resolution_sqlfiles/EIR_mode_ndb_replication_insert_UPGRADE.sql
- /db_conflict_resolution_sqlfiles/SLF_mode_ndb_replication_insert_UPGRADE.sql
- /db_conflict_resolution_sqlfiles/ALL_mode_ndb_replication_insert.sql
- /db_conflict_resolution_sqlfiles/EIR_mode_ndb_replication_insert.sql
- /db_conflict_resolution_sqlfiles/SLF_mode_ndb_replication_insert.sql
- /db_conflict_resolution_sqlfiles/ALL_mode_ndb_replication_insert_UPGRADE.sql
Note:
It is recommended to configure the Docker certificate before running the push command to access the customer registry through HTTPS; otherwise, the docker push command may fail.
2.2.1.3 Verifying and Creating Namespace
Note:
This is a mandatory step, and must be performed before proceeding any further. The namespace created or verified in this step is used as an input for further steps in the installation procedure.
To verify and create a namespace:
- Run the following command to verify if the required namespace already exists in the system:
kubectl get namespace
If the namespace exists in the output of the above command, continue with the Naming Convention for Namespaces section.
- If the required namespace is not available, create the namespace using the following command:
kubectl create namespace <required namespace>
- Update the global.nameSpace parameter in the ocudr-23.4.x-custom-values.yaml file with the namespace created in the previous step.
For example, the following kubectl command creates the namespace ocudr:
kubectl create namespace ocudr
global:
  # NameSpace where secret is deployed
  nameSpace: ocudr
Naming Convention for Namespaces
The namespace should:
- start and end with an alphanumeric character.
- contain 63 characters or less.
- contain only alphanumeric characters or '-'.
Note:
It is recommended to avoid using the prefix kube- when creating a namespace. The prefix is reserved for Kubernetes system namespaces.
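An illustrative check of the convention (the names shown are examples, not values from this guide):
# Compliant: alphanumeric characters and '-', starting and ending with an alphanumeric character
kubectl create namespace ocudr-site1
# Non-compliant examples: "kube-udr" (reserved kube- prefix) and "ocudr_prod" (underscore)
# would violate the convention above.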
2.2.1.4 Creating Service Account, Role, and RoleBinding
This section is optional and describes how to manually create a service account, role, and rolebinding. It is required only when the customer needs to create a role, rolebinding, and service account manually before installing UDR.
Note:
The secret(s) should exist in the same namespace where UDR is being deployed. This helps to bind the Kubernetes role with the given service account.
Creating Service Account, Role, and RoleBinding
- Run the following command to create a UDR resource file:
vi <ocudr-resource-file>
Example:
vi ocudr-sample-resource-template.yaml
- Update the ocudr-sample-resource-template.yaml file with release-specific information.
Note:
Make sure to update <helm-release> and <namespace> in the following template with the respective UDR helm release name and UDR namespace.
A sample template for the ocudr-sample-resource-template.yaml file is given below:
#
# Sample template start
#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <helm-release>-serviceaccount
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <helm-release>-role
  namespace: <namespace>
rules:
- apiGroups:
  - "" # "" indicates the core API group
  resources:
  - services
  - configmaps
  - pods
  - secrets
  - endpoints
  - persistentvolumeclaims
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - watch
  - list
  - update
- apiGroups:
  - monitoring.coreos.com
  resources:
  - prometheusrules
  verbs:
  - get
  - watch
  - list
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <helm-release>-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <helm-release>-role
subjects:
- kind: ServiceAccount
  name: <helm-release>-serviceaccount
  namespace: <namespace>
#
# Sample template end
#
- Run the following command to create the service account, role, and role binding:
kubectl -n <ocudr-namespace> create -f ocudr-sample-resource-template.yaml
Example:
kubectl -n ocudr create -f ocudr-sample-resource-template.yaml
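As an optional, illustrative verification (the ocudr namespace and the ocudr-rel release name are placeholders), confirm that the three resources exist:
# List the service account, role, and role binding created from the template
kubectl -n ocudr get serviceaccount,role,rolebinding | grep ocudr-rel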
- Update the serviceAccountName parameter in the ocudr-custom-values-23.4.x.yaml file with the value set in the name field under kind: ServiceAccount. For more information about the serviceAccountName parameter, see Global Configurable Parameters.
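A minimal sketch of that custom values update, assuming the service account from the sample template is named ocudr-rel-serviceaccount and that serviceAccountName sits under the global section (verify the exact path in your ocudr-custom-values file):
global:
  # Service account created in the previous step
  serviceAccountName: ocudr-rel-serviceaccount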
2.2.1.5 Pod Security Policy, Role, and Role Binding Creation
The debug tool container provides the following tools:
- tcpdump
- ip
- netstat
- curl
- ping
- dig
Running Debug tool containers from CNE version 23.2.0
- You must update Kyverno policy to allow running of debug tool containers from CNE version 23.2.0 onwards.
- Exclude the UDR deployment namespace by editing the cluster policy disallow-capabilities. For more information, see UDR Compatibility with Kubernetes, CNE and Kyverno Policies.
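An illustrative way to apply that exclusion, assuming Kyverno's default disallow-capabilities ClusterPolicy is present (the exact exclude structure depends on the CNE release):
# Open the Kyverno cluster policy and add the UDR namespace to its exclude list
kubectl edit clusterpolicy disallow-capabilities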
Running Debug tool containers for previous CNE versions:
This section describes how to create a Pod Security Policy (PSP), role, and role binding resources. These resources are used to define the Role Based Access Control (RBAC) rules for debug container running as part of each pod.
This step is optional. It is required only when the debug tool is enabled and the customer wants to create a role, role binding, and service account for it manually.
This role binding should be associated with the service account that you have created in Creating Service Account, Role, and RoleBinding section.
- Add the following sample template to the sample resource input yaml file (ocudr-debug-tool-rbac.yaml):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ocudr-debug-tool-psp
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <ocudr-helm-release-name>-debug-tool-role
  namespace: <ocudr-namespace>
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - ocudr-debug-tool-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <ocudr-helm-release-name>-debug-tool-rolebinding
  namespace: <ocudr-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <ocudr-helm-release-name>-debug-tool-role
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: <helm-release>-serviceaccount
  namespace: <ocudr-namespace>
- Run the following command to create the resources:
kubectl -n <ocudr-namespace> create -f ocudr-debug-tool-rbac.yaml
The debug container uses readOnlyRootFileSystem, allowPrivilegeEscalation, and allowedCapabilities parameters as part of PSP. Other parameters are mandatory for PSP creation and you can customize them as per the CNE environment. It is recommended to use the default values used in the same template.
Table 2-22 PSP Parameters
Parameter | Description |
---|---|
apiVersion | It defines the versioned schema of this representation of an object. |
kind | It is a string value representing the REST resource this object represents. |
metadata | Standard object's metadata. |
metadata.name | Name must be unique within a namespace. |
spec | spec defines the policy enforced. |
spec.allowPrivilegeEscalation | This parameter controls whether a container is allowed to escalate privileges, that is, whether the allowPrivilegeEscalation security context of a container can be set to true. |
spec.allowedCapabilities | This parameter provides a list of capabilities that are allowed to be added to a container. |
spec.fsGroup | It controls the supplemental group applied to some volumes. RunAsAny allows any fsGroup ID to be specified. |
spec.runAsUser | It controls the user ID with which the containers are run. RunAsAny allows any runAsUser to be specified. |
spec.seLinux | In this parameter, RunAsAny allows any seLinuxOptions to be specified. |
spec.supplementalGroups | It controls the group IDs that containers can add. RunAsAny allows any supplementalGroups to be specified. |
spec.volumes | This parameter provides a list of allowed volume types. The allowable values correspond to the volume sources that are defined when creating a volume. |
Table 2-23 Role Parameters
Parameter | Description |
---|---|
apiVersion | It defines the versioned schema of this representation of an object. |
kind | It is a string value representing the REST resource this object represents. |
metadata | Standard object's metadata. |
metadata.name | Name must be unique within a namespace. |
metadata.namespace | Namespace defines the space within which each name must be unique. |
rules | Rules holds all the PolicyRules for this Role. |
apiGroups | APIGroups is the name of the APIGroup that contains the resources. |
rules.resources | Resources is a list of resources this rule applies to. |
rules.verbs | Verbs is a list of verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. |
rules.resourceNames | ResourceNames is an optional list of names to which the rule applies. |
Table 2-24 Role Binding Parameters
Parameter | Description |
---|---|
apiVersion | It defines the versioned schema of this representation of an object. |
kind | It is a string value representing the REST resource this object represents. |
metadata | Standard object's metadata. |
metadata.name | Name must be unique within a namespace. |
metadata.namespace | Namespace defines the space within which each name must be unique. |
roleRef | RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. |
roleRef.apiGroup | APIGroup is the group for the resource being referenced. |
roleRef.kind | Kind is the type of resource being referenced. |
roleRef.name | Name is the name of the resource being referenced. |
subjects | Subjects holds references to the objects the role applies to. |
subjects.kind | Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". |
subjects.apiGroup | APIGroup holds the API group of the referenced subject. |
subjects.name | Name of the object being referenced. |
2.2.1.6 Configuring cnDBTier
With cnDBTier, UDR facilitates automatic user creation with its pre-install hook. However, ensure that there is a privileged user on the NDB cluster with privileges similar to the root user. This user should have the necessary permissions to allow connections from remote hosts.
For cnDBTier version 23.4.x, enable the ndb_allow_copying_alter_table flag in the cnDBTier configurations during installation.
Single Site Deployment
- Log in to MySQL on each of the API nodes of cnDBTier and verify that the privileged user allows remote connections:
mysql> select host from mysql.user where User='<privileged username>';
+------+
| host |
+------+
| %    |
+------+
1 row in set (0.00 sec)
- If you do not see '%' in the output of the above query, modify this field to allow remote connections for the privileged user:
mysql> update mysql.user set host='%' where User='<privileged username>';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> flush privileges;
Query OK, 0 rows affected (0.06 sec)
Note:
Perform this step on each SQL node.
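Optionally, the privileged user's grants can also be reviewed from the same MySQL prompt; this is a standard MySQL statement using the same <privileged username> placeholder:
mysql> SHOW GRANTS FOR '<privileged username>'@'%';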
Multiple Site Deployment
To configure cnDBTier in case of multiple site deployment:
- Update the mysqld configuration in the cnDBTier custom values file before installing or upgrading UDR or SLF.
global:
  ndbconfigurations:
    api:
      auto_increment_increment: 3
      auto_increment_offset: 1
Note:
- Set the auto_increment_increment parameter to the number of sites. For example, if the number of sites is 2, set its value to 2; if the number of sites is 3, set its value to 3.
- Set the auto_increment_offset parameter to the site ID. For example, the site ID for Site 1 is 1, for Site 2 is 2, for Site 3 is 3, and so on.
- If a fresh installation or upgrade of UDR or SLF on cnDBTier is not planned, run the following command to edit the existing mysqldconfig configmap on all the cnDBTier sites:
kubectl edit configmap mysqldconfig -n <db-site-namespace>
For Example:
kubectl edit configmap mysqldconfig -n site1
Note:
Update the auto_increment_increment and auto_increment_offset values as mentioned in the previous step for all sites.
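To confirm that the change is present in the configmap of a site, you can read it back; the example below assumes the site1 namespace used in the previous example:
kubectl get configmap mysqldconfig -n site1 -o yaml | grep auto_increment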
2.2.1.7 Configuring Multiple Site Deployment
In case of a multiple site deployment of EIR or SLF, there is only one subscriber database, which is shared by all sites, and a separate configuration database for each site, because each site has its own configuration. To have different configuration databases and the same subscriber database, you need to create secrets accordingly. For more information about creating secrets, see Creating Kubernetes Secret - DBName, Username, Password, and Encryption Key.
- Configure nfInstanceId under the global section of the ocudr-custom-values-23.4.2.yaml file differently for each UDR, SLF, or EIR site deployed.
Note:
Ensure the nfInstanceId configuration in the global section is the same as that in the appProfile section of nrf-client.
global:
  # Unique ID to register to NRF, Should be configured differently on multi site deployments for each SLF/UDR
  nfInstanceId: &nfInsId 5a7bd676-ceeb-44bb-95e0-f6a55a328b03
nrf-client:
  configmapApplicationConfig:
    profile: |-
      appProfiles=[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nudr-group-id-map","priority":10,"capacity":500}],"udrInfo":{"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}]
- Configure fullnameOverride under the config-server section to <helm-release-name>-config-server. It should be different for each site deployed.
config-server:
  fullnameOverride: ocudr1-config-server
- Configure fullnameOverride under the appinfo section to <helm-release-name>-app-info. It should be different for each site deployed.
appinfo:
  fullnameOverride: ocudr1-app-info
- For cnDBTier configurations in multiple site deployment, see Configuring cnDBTier.
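After deploying each site, an optional way to confirm that the per-site values differ is to read them back from Helm, where <helm-release-name> and <namespace> are the per-site release and namespace:
helm get values <helm-release-name> -n <namespace> | grep -E "nfInstanceId|fullnameOverride"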
2.2.1.8 Creating Kubernetes Secret - DBName, Username, Password, and Encryption Key
UDR Database
- Subscriber Database: Subscriber database includes all subscriber data such as SLF, EIR, PCF, UDM, and NEF data.
- Configuration database: Configuration database includes configuration content for all microservices.
UDR Users
Note:
- Before running the procedure for georedundant sites, ensure that the cnDBTier for georedundant sites is already up and replication channels are enabled.
- While performing a fresh installation, if UDR is already deployed, purge the deployment, and remove the database and users that were used for the previous deployment. For uninstallation procedure, see Uninstalling UDR.
- Privileged User: This user must be granted remote access and a complete set of permissions to create, alter, and drop the udrdb and udrconfigdb databases across the SQL nodes. If necessary, the user should have the required permissions to create the application user.
- Application User: The user provided in the secret can be created by UDR if global.preInstall.createUser=true in the custom-values.yaml file. If the application user is created manually, then the user must be granted remote access and a minimum set of permissions on the udrdb and udrconfigdb databases to insert, update, get, and remove records across the SQL nodes.
Single Site Deployment
To create a secret in case of a single site deployment, run the following command:
kubectl create secret generic ocudr-secrets --from-literal=dbname=<dbname> --from-literal=configdbname=<configdbname> --from-literal=privilegedUsername=<privilegedUsername> --from-literal=privilegedPassword=<privilegedPassword> --from-literal=dsusername=<udruserName> --from-literal=dspassword=<udruserPassword> --from-literal=encryptionKey='My secret passphrase' -n <ocudr-namespace>
Example
kubectl create secret generic ocudr-secrets --from-literal=dbname=udrdb --from-literal=configdbname=udrconfigdb --from-literal=privilegedUsername=privUsr --from-literal=privilegedPassword=privUsrPasswd --from-literal=dsusername=udruser --from-literal=dspassword=udrpasswd --from-literal=encryptionKey='My secret passphrase' -n myudr
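To confirm that the secret was created with all the expected keys, you can describe it; the example assumes the namespace myudr used in the example above:
kubectl describe secret ocudr-secrets -n myudr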
Multiple Site Deployment
In case of a multiple site deployment, configure the configdbname differently for each site. Each site has its own configuration content, which is stored separately in its respective configuration database. Multiple UDRs under the same segment also have different configuration databases.
To create a secret in case of two site deployment, run the following commands:
- For site
1:
kubectl create secret generic ocudr-secrets --from-literal=dbname=<dbname> --from-literal=configdbname=<configdbname1> --from-literal=privilegedUsername=<privilegedUsername> --from-literal=privilegedPassword=<privilegedPassword> --from-literal=dsusername=<udruserName> --from-literal=dspassword=<udruserPassword> --from-literal=encryptionKey='My secret passphrase' -n <ocudr1-namespace>
Example:
kubectl create secret generic ocudr-secrets --from-literal=dbname=udrdb --from-literal=configdbname=udrconfigdb1 --from-literal=privilegedUsername=privUsr --from-literal=privilegedPassword=privUsrPasswd --from-literal=dsusername=udruser --from-literal=dspassword=udrpasswd --from-literal=encryptionKey='My secret passphrase' -n myudr
- For site
2:
kubectl create secret generic ocudr-secrets --from-literal=dbname=<dbname> --from-literal=configdbname=<configdbname2> --from-literal=privilegedUsername=<privilegedUsername> --from-literal=privilegedPassword=<privilegedPassword> --from-literal=dsusername=<udruserName> --from-literal=dspassword=<udruserPassword> --from-literal=encryptionKey='My secret passphrase' -n <ocudr2-namespace>
Example:
kubectl create secret generic ocudr-secrets --from-literal=dbname=udrdb --from-literal=configdbname=udrconfigdb2 --from-literal=privilegedUsername=privUsr --from-literal=privilegedPassword=privUsrPasswd --from-literal=dsusername=udruser --from-literal=dspassword=udrpasswd --from-literal=encryptionKey='My secret passphrase' -n myudr
2.2.1.9 Enabling HTTP1.1
Note:
- This configuration is required for CNC Console to perform provisioning on UDR or SLF deployment.
- CNC Console communicates only with the ingressgateway-prov service for provisioning. Ingressgateway-prov communicates over the configured ingressgateway-prov.global.publicHttp1SignalingPort port (port 81 by default), which is an HTTP1.1-specific port. This configuration does not need to be enabled for ingressgateway-sig.
- Enable the enableIncomingHttp1 parameter under the ingressgateway section in the ocudr-custom-values file.
- Configure the port to be used for HTTP1 using publicHttp1SignalingPort under the global section in the ocudr-custom-values file.
- To enable HTTPS on the HTTP1 port, enable the enableTLSIncomingHttp1 configuration under the ingressgateway section in the ocudr-custom-values file.
Note:
There are additional configurations to enable HTTPS on Ingress Gateway. For more information, see Enabling HTTPS on Ingress Gateway and Egress Gateway.
Sample template
ingressgateway-prov:
  global:
    .....
    .....
    # port on which UDR's API-Gateway service is exposed
    # If httpsEnabled is false, this Port would be HTTP/2.0 Port (unsecured)
    # If httpsEnabled is true, this Port would be HTTPS/2.0 Port (secured SSL)
    publicHttpSignalingPort: 80
    publicHttpsSignalingPort: 443
    # HTTP1 Port
    publicHttp1SignalingPort: 81
  ......
  ......
  # Enable HTTP1
  enableIncomingHttp1: true
  # Enable Secure HTTP1
  enableTLSIncomingHttp1: true
- You must configure routes in the cncc-core-custom-values file so that CNC Console connects to the HTTP1.1 port on the ingressgateway for the provisioning routes configured. For this, check the routesConfig section as follows:
CNCC Core routesConfig
ingress-gateway:
  routesConfig:
    ......
    ......
    # uri should be configured with UDR/SLF ingressgateway fqdn with HTTP1 port. (Example: ocudr-ingressgateway.ocudr:81)
    # 81 used above should be same as publicHttp1SignalingPort configured in ocudr-custom-values.yaml file
    - id: udr_ingress
      uri: http://<FQDN>:<PORT>
      path: /nudr-dr-prov/**,/nudr-dr-mgm/**,/nudr-group-id-map-prov/**,/slf-group-prov/**
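As an optional post-deployment check, an HTTP/1.1 request can be sent to the provisioning port to confirm that it accepts HTTP1.1 traffic; the FQDN, port, and resource path below are placeholders based on the route configuration above, not fixed values:
curl -v --http1.1 "http://<FQDN>:<publicHttp1SignalingPort>/nudr-dr-prov/<resource-path>"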
2.2.1.10 Enabling HTTPS on Ingress Gateway and Egress Gateway
This step is optional. It is required only when SSL settings need to be enabled on Ingress Gateway and Egress Gateway microservices of UDR.
Note:
For more information about the ssl section configuration, see UDR Configuration Parameters.
Important:
If the certificates are not available, then create them following the instructions given in the Creating Private Keys and Certificates for Ingress Gateway section of this guide.
2.2.1.10.1 Enabling HTTPS on Ingress Gateway
To enable HTTPS on Ingress Gateway:
- Enable the initssl and enableIncomingHttps parameters in the ingressgateway-sig and ingressgateway-prov sections of the ocudr-custom-values.yaml file.
- Configure the following details in the ssl section of ingressgateway-sig and ingressgateway-prov in the ocudr-custom-values.yaml file:
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
# Configure this section to support TLS with ingress gateway
ssl:
  # TLS version used
  tlsVersion: TLSv1.2

  # Secret Details for certificates
  privateKey:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    rsa:
      fileName: rsa_private_key_pkcs1.pem
    ecdsa:
      fileName: ecdsa_private_key_pkcs8.pem
  certificate:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    rsa:
      fileName: apigatewayrsa.cer
    ecdsa:
      fileName: apigatewayecdsa.cer
  caBundle:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: caroot.cer
  keyStorePassword:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: key.txt
  trustStorePassword:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: trust.txt
  initialAlgorithm: RS256
Note:
If there are different sets of ssl certificates for the signaling and the provisioning interface, use two different secrets for ingressgateway-sig and ingressgateway-prov.
2.2.1.10.2 Enabling HTTPS on Egress Gateway
To enable HTTPS on Egress Gateway:
- Enable the outgoingHttps parameter under the global section of the ocudr-custom-values.yaml file.
- Configure the following details in the ssl section of egressgateway
in the ocudr-custom-values.yaml file.
- Kubernetes namespace
- Kubernetes secret name holding the certificate details
- Certificate information
# Configure this section to support TLS with egress gateway
ssl:
  # TLS version used
  tlsVersion: TLSv1.2
  initialAlgorithm: RS256

  # Secret Details for certificates
  privateKey:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    rsa:
      fileName: rsa_private_key_pkcs1.pem
    ecdsa:
      fileName: ecdsa_private_key_pkcs8.pem
  certificate:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    rsa:
      fileName: apigatewayrsa.cer
    ecdsa:
      fileName: apigatewayecdsa.cer
  caBundle:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: caroot.cer
  keyStorePassword:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: key.txt
  trustStorePassword:
    k8SecretName: ocudr-gateway-secret
    k8NameSpace: ocudr
    fileName: trust.txt
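Before enabling HTTPS, it can be useful to confirm that the secret referenced in the ssl sections exists and contains all of the listed files; the secret name and namespace below are taken from the sample above and may differ in your deployment. After deployment, a TLS handshake on the signaling port can be checked with openssl, where <FQDN> and <port> are placeholders:
kubectl describe secret ocudr-gateway-secret -n ocudr
openssl s_client -connect <FQDN>:<port> -tls1_2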
2.2.1.11 Creating Kubernetes Secret - Keys and Certificates for OAuth2 Support
This section describes how to create a Kubernetes secret to store keys and certificates for OAuth2 support. This step is optional. It is required only when you want to enable OAuth2 validation on the UDR Ingress Gateway microservice.
Creating a Secret to Store Keys and Certificates for OAuth2
- NRF creates access tokens using the following private keys:
  - ECDSA private key (Example: ecdsa_private_key_pkcs8.pem)
  - RSA private key (Example: rsa_private_key_pkcs1.pem)
- To validate these access tokens, create a secret and configure the certificates fetched from NRF into the ocudr-ingress-gateway. The certificate naming format should be: <nrfInstanceId>_<AlgorithmUsed>.crt (Example: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c_ES256.crt).
- To create a secret:
- Log in to the Bastion Host or a server that has kubectl installed.
- Run the following command to create a namespace for the secret:
kubectl create namespace ocudr
- Run the following command to create a Kubernetes secret for NF access token validation:
kubectl create secret generic oauthsecret --from-file=6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c_ES256.crt -n ocudr
Note:
The file names in the above command are the same as in Step 1.
- Run the following command to verify whether the secret is created successfully:
kubectl describe secret oauthsecret -n ocudr
Updating an OAuth Token
- Log in to the Bastion Host or a server that has kubectl installed.
- To update the secret with new or updated details:
- Run the following command to delete the secret:
kubectl delete secret oauthsecret -n ocudr
- Update the certificates required from NRF.
- Run the following command to recreate the secret with
updated details:
kubectl create secret generic oauthsecret --from-file=0263663c-f5c2-4d1b-9170-f7b1a9116337_ES256.crt -n ocudr
2.2.1.12 Creating Persistent Volume Claim
This step is optional and is required only when the bulk import tool from UDR 1.12.0 or an earlier version is deployed. It is not required for newer versions.
Note:
The bulk import Persistent Volume Claim (PVC) and xmltocsv container PVC are created automatically using Helm.
- Use the following sample template for previous release versions and save it as a <file-name>.yaml file.
PersistentVolumeClaim Bulk Import Service
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bulkimportpersistentclaim
spec:
  storageClassName: <Please Provide your StorageClass Name>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
- Run the following command to create a PVC.
kubectl create -f <file-name>.yaml
- Run the following command to verify whether a PVC is created or
not.
kubectl get pvc -n <namespace>
Figure 2-1 Verifying PVC Creation
- Run the following command to verify whether a Persistent Volume (PV)
is created and bound to the PVC volume.
kubectl get pv
Figure 2-2 Verifying PV Creation
For more information about Bulk Import Provisioning, see Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
2.2.1.13 UDR Compatibility with Kubernetes, CNE and Kyverno Policies
Perform the following steps before starting the ASM deployment to avoid ASM deployment failures due to Kubernetes version 1.25, CNE version 23.4.x, and Kyverno policies.
kubectl patch clusterpolicy disallow-capabilities --type "json" -p '[{"op":"add","path":"/spec/rules/0/exclude/any/0/resources/namespaces/-","value":"testns"}]'
Note:
Replace testns with the UDR deployment namespace.
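To confirm that the namespace was appended to the exclusion list of the cluster policy, you can read the policy back; this is an optional check and assumes Kyverno is installed:
kubectl get clusterpolicy disallow-capabilities -o yaml | grep -A 5 namespaces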
2.2.1.14 Configuring UDR to Support Aspen Service Mesh
UDR leverages the Platform Service Mesh (for example, Aspen Service Mesh (ASM)) for all internal and external TLS communication by deploying a special sidecar proxy in each pod to intercept all network communications between microservices. The service mesh integration provides inter-NF communication and allows the API gateway to work with the service mesh.
The Service Mesh configurations are classified into:
- Control Plane: It involves adding labels or annotations to inject sidecar. The control plane configurations are already part of NF Helm chart.
- Data Plane: It helps in traffic management like handling NF call flows by adding Service Entries (SE), Destination Rules (DR), Envoy Filters (EF), and other resource changes like apiVersion change between versions. This is done manually depending on the NF requirement and Service Mesh deployment.
Data Plane Configuration
Data Plane configuration consists of following Custom Resource Definitions (CRDs):
- Service Entry (SE)
- Destination Rule (DR)
- Envoy Filter (EF)
- Peer Authentication
- Virtual Service (VS)
- Request Authentication (RA)
- Policy Authorization (PA)
Note:
Use Helm charts to add or delete CRDs that you may require due to Service Mesh upgrades to configure features across different releases.
The Data Plane configuration is applicable in the following scenarios:
- NF to NF Communication: During NF to NF communication where the sidecar is injected on both NFs, you need an SE and DR to communicate with the other NF; otherwise, the sidecar rejects the communication. All egress communications of NFs must have an entry for SE and DR, and the same needs to be configured.
Note:
For out of cluster communication, you must configure the core DNS with the producer NF endpoint to enable access.
- Kube-api-server: There are a few NF flows that require access to the Kubernetes API server. The Service Mesh proxy (mTLS enabled) may block this. As per the F5 recommendation, the NF needs to add an SE for the Kubernetes API server for its own namespace.
- Envoy Filters: Sidecars rewrite headers with their own default values, and as a result the headers from backend services are lost. Envoy Filters help pass the headers from backend services through unchanged.
- Peer Authentication: This template can be used to change the default mTLS mode on the deployment. It allows values such as STRICT, PERMISSIVE, and DISABLE.
- Virtual Service: If the target host returns a 503 error response in the first attempt, then by default Istio tries to access the target host two more times. To configure the number of retry attempts, use a virtual service. Based on this configured number, Istio tries to access the target host.
Note:
If there is a requirement to configure multiple destination hosts, then it is recommended to use a separate virtual service for each destination host instead of one common virtual service.
- Request Authentication: This template is used to configure JWT tokens for the OAuth use case. The NF needs to authenticate the OAuth token sent by consumer NFs. This is done by using the public key of the NRF signing certificate and telling the service mesh to authenticate the token. UDR and SLF do not use this feature.
- Policy Authorization: Istio Authorization Policy enables access control on workloads in the mesh. Authorization policy supports CUSTOM, DENY and ALLOW actions for access control. When CUSTOM, DENY and ALLOW actions are used for a workload at the same time, the CUSTOM action is evaluated first, then the DENY action, and finally the ALLOW action is evaluated.
You need CRDs for the following UDR and SLF use cases:
- UDR to NRF communication (applicable for UDR and SLF)
- UDR to Notifications (applicable to UDR only)
- Kube-api-server (applicable to UDR and SLF)
- Envoy Filter for Serverheader and XFCC (applicable for UDR and SLF)
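Once the corresponding Data Plane resources are applied for these use cases, they can be listed in the UDR or SLF namespace; this is only a generic verification sketch, with <ocudr-namespace> as a placeholder:
kubectl get serviceentries,destinationrules,envoyfilters,virtualservices,peerauthentications -n <ocudr-namespace>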
2.2.1.14.1 UDR Service Mesh Values.yaml File
Note:
To connect to vDBTier, create an SE and DR for the MySQL connectivity service if the database is in a different cluster. Otherwise, the sidecar rejects the request as vDBTier does not support sidecars.
serviceEntries:
# Required to communicate to k8 api server
- hosts: |-
[ "kubernetes.default.svc.cluster.local",
"kubernetes.default.svc.cluster.local1" ] # kubernetes cluster name
exportTo: |-
[ "." ] # refers to the current namespace
location: MESH_INTERNAL
addresses: |-
[ "198.223.0.1",
"198.223.0.2" ]
ports:
- number: 443
name: https
protocol: HTTPS
name: kube-api-server
# Add Service Entries for the producer NF's. Provide the public fqdn host list along with the corresponding service entry related settings
- hosts: |-
[ "nrf123.oracle.com" ] # pubic fqdn of destination NF (NRF)
exportTo: |-
[ "." ]
location: MESH_EXTERNAL
ports:
- number: 8081
name: http2-8081
protocol: TCP
- number: 8082
name: http2-8082
protocol: TCP
resolution: NONE # should be NONE/DNS - if we are using wildchars in fqdn it must be NONE
name: nrf # unique name in service entry and destination rule to identify destination NF
# Below entry is required to send out notifications from UDR
#- hosts: |-
# [ "notifyurl.oracle.com" ]
# location: MESH_EXTERNAL
# portno: 80
# portname: http2
# portprotocol: HTTP2
# resolution: NONE
# name: notify
# namespace: ocudr
destinationRules:
- host: "nrf123.oracle.com"
mode: DISABLE
name: ocudr-to-other-nf-dr-test
sbitimers: true
tcpConnectTimeout: "750ms"
tcpKeepAliveProbes: 3
tcpKeepAliveTime: "1500ms"
tcpKeepAliveInterval: "1s"
# - host: "notifyurl.oracle.com"
# mode: DISABLE
# name: notify
# sbitimers: false
# Add Envoy Filters for server header and XFCC filters w.r.t to asm version 1.6.x
#envoyFilters_v_16x:
# - name: xfccfilter
# labelselector: "app.kubernetes.io/instance: ocudr"
# configpatch:
# - applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
# configkey: forward_client_cert_details
# configvalue: ALWAYS_FORWARD_ONLY
# - name: serverheaderfilter
# labelselector: "app.kubernetes.io/instance: ocudr"
# configpatch:
# - applyTo: NETWORK_FILTER
# filtername: envoy.filters.network.http_connection_manager
# operation: MERGE
# typeconfig: type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
# configkey: server_header_transformation
# configvalue: PASS_THROUGH
# Add Envoy Filters for server header and XFCC filters w.r.t to asm version 1.9.x & 1.11.x
envoyFilters_v_19x_111x:
- name: xfccfilter
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: forward_client_cert_details
configvalue: ALWAYS_FORWARD_ONLY
- name: serverheaderfilter
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: server_header_transformation
configvalue: PASS_THROUGH
- name: custom-http-stream
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: NETWORK_FILTER
filtername: envoy.filters.network.http_connection_manager
operation: MERGE
typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
configkey: server_header_transformation
configvalue: PASS_THROUGH
stream_idle_timeout: "6000ms"
max_stream_duration: "7000ms"
patchContext: SIDECAR_OUTBOUND
networkFilter_listener_port: 8000
- name: custom-tcpsocket-timeout
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: FILTER_CHAIN
patchContext: SIDECAR_INBOUND
operation: MERGE
transport_socket_connect_timeout: "750ms"
filterChain_listener_port: 8000
- name: custom-http-route
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: HTTP_ROUTE
patchContext: SIDECAR_OUTBOUND
operation: MERGE
route_idle_timeout: "6000ms"
route_max_stream_duration: "7000ms"
httpRoute_routeConfiguration_port: 8000
vhostname: "ocudr.svc.cluster:8000"
- name: logicaldnscluster
labelselector: "app.kubernetes.io/instance: ocudr"
configpatch:
- applyTo: CLUSTER
clusterservice: rchltxekvzwcamf-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
operation: MERGE
logicaldns: LOGICAL_DNS
- applyTo: CLUSTER
clusterservice: rchltxekvzwcamd-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
operation: MERGE
logicaldns: LOGICAL_DNS
- name: slfexactbalancesample
labelselector: "app.kubernetes.io/name: ingressgateway-prov"
configpatch:
- applyTo: LISTENER
patchContext: SIDECAR_OUTBOUND
listener_port: 5001
operation: MERGE
exactbalance: true
#PeerAuthentication is required to allow the incoming traffic with tls mode
peerAuthentication:
- name: allservices
labelselector: "app.kubernetes.io/part-of: ocudr"
tlsmode: PERMISSIVE
#
# VirtualService is required to configure the retry attempts for the destination host.
# By configuring this retries attempts will override the istio default behaviour for retries for example for 503 response istio reties for 2 times.
# If we want mutiple virtualServices to configure, we can add entries like below.
virtualService:
- name: nrfvirtual1
host: ocudr-ocudr-egress-gateway
destinationhost: ocudr-ocudr-egress-gateway
port: 8000
exportTo: |-
[ "." ]
attempts: "0"
timeout: 7s
- name: nrfvirtual2
host: ocudr-egress-gateway
destinationhost: ocudr-egress-gateway
port: 8000
exportTo: |-
[ "." ]
retryon: 5xx
attempts: "1"
timeout: 7s
#
# RequestAuthentication is required to configure ouath secrets
# Note: Only one of jwks and jwksUri should be used as per istio document
#requestAuthentication:
# - name: jwttokenwithjson
# labelselector: "app: httpbin"
# issuer: "jwtissue"
# jwks: |-
# '{
# "keys": [{
# "kid": "1",
# "kty": "EC",
# "crv": "P-256",
# "x": "Qrl5t1-Apuj8uRI2o_BP9loqvaBnyM4OPTPAD_peDe4",
# "y": "Y7vNMKGNAtlteMV-KJIaG-0UlCVRGFHtUVI8ZoXIzRY"
# }]
# }'
# - name: jwttoken
# labelselector: "app: httpbin"
# issuer: "jwtissue"
# jwksUri: https://example.com/.well-known/jwks.json
#authorizationPolicies:
#- name: allow-all-provisioning-on-ingressgateway-ap
# labelselector: "app.kubernetes.io/name: ingressgateway"
# action: "ALLOW"
# hosts:
# - "*"
# paths:
# - "/nudr-dr-prov/*"
# - "/nudr-dr-mgm/*"
# - "/nudr-group-id-map-prov/*"
# - "/slf-group-prov/*"
#- name: allow-all-sbi-on-ingressgateway-ap
# labelselector: "app.kubernetes.io/name: ingressgateway"
# action: "ALLOW"
# hosts:
# - "*"
# paths:
# - "/nudr-group-id-map/v1/*"
# xfccvalues:
# - "*DNS=<dns-url-1>"
# - "*DNS=<dns-url-2>"
# Template for Service Entry,Destination Rules and Envoy Filters
# The above range of host list will be used here in the below templates to create CRD's
servicemeshResourcesTemplate: |
{{- range .Values.serviceEntries }}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
hosts: {{ .hosts }}
exportTo: {{ .exportTo }}
{{ if .addresses }}
addresses: {{ .addresses }}
{{ end }}
ports:
{{- range .ports }}
- number: {{ .number }}
name: {{ .name }}
protocol: {{ .protocol }}
{{- end }}
location: {{ .location | default "MESH_EXTERNAL" }}
resolution: {{ .resolution | default "NONE" }}
---
{{- end -}}
{{- range .Values.destinationRules }}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
host: "{{ .host }}"
trafficPolicy:
tls:
mode: {{ .mode | default "MUTUAL" }}
{{ if ne .mode "DISABLE" }}
clientCertificate: /etc/certs/cert-chain.pem
privateKey: /etc/certs/key.pem
caCertificates: /etc/certs/root-cert.pem
{{ end }}
{{- if .sbitimers }}
connectionPool:
tcp:
connectTimeout: {{ .tcpConnectTimeout | default "750ms" }}
tcpKeepalive:
probes: {{ .tcpKeepAliveProbes | default 3 }}
time: {{ .tcpKeepAliveTime | default "1500ms" }}
interval: {{ .tcpKeepAliveInterval | default "1s" }}
{{- end }}
---
{{- end -}}
{{- range .Values.envoyFilters_v_16x }}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
workloadSelector:
labels:
{{ .labelselector }}
configPatches:
{{- range .configpatch }}
- applyTo: {{ .applyTo | default "NETWORK_FILTER" }}
match:
{{- if .patchContext }}
context: {{ .patchContext }}
{{- end }}
{{- if eq .applyTo "CLUSTER" }}
cluster:
service: {{ .clusterservice }}
{{- end }}
{{- if eq .applyTo "HTTP_ROUTE" }}
routeConfiguration:
{{- if .httpRoute_routeConfiguration_port }}
portNumber: {{ .httpRoute_routeConfiguration_port }}
{{- end }}
vhost:
name: {{ required ( printf "Error in %s envoyFilter. vhostname is required for routeConfiguration when applyTo is HTTP_ROUTE." .name ) .vhostname }}
{{- else if eq .applyTo "FILTER_CHAIN" }}
listener:
{{- if .filterChain_listener_port }}
portNumber: {{ .filterChain_listener_port }}
{{- end }}
{{- else if eq .applyTo "NETWORK_FILTER" }}
listener:
{{- if .filtername }}
filterChain:
filter:
name: {{ .filtername | default "envoy.http_connection_manager" }}
{{- end }}
{{- if .networkFilter_listener_port }}
portNumber: {{ .networkFilter_listener_port }}
{{- end }}
{{- end }}
patch:
operation: {{ .operation | default "MERGE" }}
value:
{{- if .typeconfig }}
typed_config:
'@type': {{ .typeconfig }}
{{ .configkey }}: {{ .configvalue }}
{{- if .stream_idle_timeout }}
stream_idle_timeout: {{ .stream_idle_timeout }}
{{- end }}
{{- if .max_stream_duration }}
common_http_protocol_options:
max_stream_duration: {{ .max_stream_duration }}
{{- end }}
{{- end }}
{{- if .transport_socket_connect_timeout }}
transport_socket_connect_timeout: {{ .transport_socket_connect_timeout }}
{{- end }}
{{- if or .route_idle_timeout .route_max_stream_duration }}
route:
{{- if .route_idle_timeout }}
idle_timeout: {{ .route_idle_timeout }}
{{- end }}
{{- if .route_max_stream_duration }}
max_stream_duration:
max_stream_duration: {{ .route_max_stream_duration }}
{{- end }}
{{- end }}
{{- if .logicaldns }}
type: {{ .logicaldns }}
{{- end }}
{{- end -}}
---
{{- end -}}
{{- range .Values.envoyFilters_v_19x_111x }}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
workloadSelector:
labels:
{{ .labelselector }}
configPatches:
{{- range .configpatch }}
- applyTo: {{ .applyTo | default "NETWORK_FILTER" }}
match:
{{- if .patchContext }}
context: {{ .patchContext }}
{{- end }}
{{- if eq .applyTo "CLUSTER" }}
cluster:
service: {{ .clusterservice }}
{{- end }}
{{- if eq .applyTo "HTTP_ROUTE" }}
routeConfiguration:
{{- if .httpRoute_routeConfiguration_port }}
portNumber: {{ .httpRoute_routeConfiguration_port }}
{{- end }}
vhost:
name: {{ required ( printf "Error in %s envoyFilter. vhostname is required for routeConfiguration when applyTo is HTTP_ROUTE." .name ) .vhostname }}
{{- else if eq .applyTo "FILTER_CHAIN" }}
listener:
{{- if .filterChain_listener_port }}
portNumber: {{ .filterChain_listener_port }}
{{- end }}
{{- else if eq .applyTo "LISTENER" }}
listener:
{{- if .listener_port }}
portNumber: {{ .listener_port }}
{{- end }}
{{- else if eq .applyTo "NETWORK_FILTER" }}
listener:
{{- if .filtername }}
filterChain:
filter:
name: {{ .filtername | default "envoy.http_connection_manager" }}
{{- end }}
{{- if .networkFilter_listener_port }}
portNumber: {{ .networkFilter_listener_port }}
{{- end }}
{{- end }}
patch:
operation: {{ .operation | default "MERGE" }}
value:
{{- if .typeconfig }}
typed_config:
'@type': {{ .typeconfig }}
{{ .configkey }}: {{ .configvalue }}
{{- if .stream_idle_timeout }}
stream_idle_timeout: {{ .stream_idle_timeout }}
{{- end }}
{{- if .max_stream_duration }}
common_http_protocol_options:
max_stream_duration: {{ .max_stream_duration }}
{{- end }}
{{- end }}
{{- if .transport_socket_connect_timeout }}
transport_socket_connect_timeout: {{ .transport_socket_connect_timeout }}
{{- end }}
{{- if or .route_idle_timeout .route_max_stream_duration }}
route:
{{- if .route_idle_timeout }}
idle_timeout: {{ .route_idle_timeout }}
{{- end }}
{{- if .route_max_stream_duration }}
max_stream_duration:
max_stream_duration: {{ .route_max_stream_duration }}
{{- end }}
{{- end }}
{{- if .logicaldns }}
type: {{ .logicaldns }}
{{- end }}
{{- if .exactbalance }}
connection_balance_config:
exact_balance: {}
{{- end }}
{{- end }}
---
{{- end -}}
{{- range .Values.peerAuthentication }}
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
{{ if .labelselector }}
selector:
matchLabels:
{{ .labelselector }}
{{ end }}
mtls:
mode: {{ .tlsmode }}
---
{{- end -}}
{{- range .Values.virtualService }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
hosts:
- {{ .host }}
exportTo: {{ .exportTo }}
http:
- route:
- destination:
host: {{ .destinationhost }}
port:
number: {{ .port }}
{{- if .timeout }}
timeout: {{ .timeout }}
{{- end }}
{{ if or .attempts .retryon }}
retries:
{{ if .attempts }}
attempts: {{ .attempts }}
{{ end }}
{{ if .retryon }}
retryOn: {{ .retryon }}
{{ end }}
{{ end }}
---
{{- end -}}
{{- range .Values.requestAuthentication }}
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
selector:
matchLabels:
{{ .labelselector }}
jwtRules:
- issuer: {{ .issuer }}
{{ if .jwksUri }}
jwksUri: {{ .jwksUri }}
{{ end }}
{{ if .jwks }}
jwks: {{ .jwks }}
{{ end }}
forwardOriginalToken: true
---
{{- end -}}
{{- range .Values.authorizationPolicies }}
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: {{ .name }}
namespace: {{ .namespace | default $.Release.Namespace }}
spec:
action: {{ .action }}
rules:
- to:
- operation:
{{- if .hosts }}
hosts:
{{- range .hosts }}
- {{ . | quote -}}
{{- end }}
{{- end }}
{{- if .paths }}
paths:
{{- range .paths }}
- {{ . | quote -}}
{{- end }}
{{- end }}
{{- if .xfccvalues }}
when:
- key: request.headers[X-Forwarded-Client-Cert]
values:
{{- range .xfccvalues }}
- {{ . | quote -}}
{{- end }}
{{- end }}
selector:
matchLabels:
{{ .labelselector }}
---
{{- end -}}
Supported Fields in Custom Resource Definitions
Table 2-25 Supported Fields in Custom Resource Definitions
Custom Resource Definition | Fields |
---|---|
Service Entry | hosts |
exportTo | |
addresses | |
ports.name | |
ports.number | |
ports.protocol | |
resolution | |
Destination Rule | host |
mode | |
sbitimers | |
tcpConnectTimeout | |
tcpKeepAliveProbes | |
tcpKeepAliveTime | |
tcpKeepAliveInterval | |
Envoy Filter | labelselector |
applyTo | |
filtername | |
operation | |
typeconfig | |
configkey | |
configvalue | |
stream_idle_timeout | |
max_stream_duration | |
patchContext | |
networkFilter_listener_port | |
transport_socket_connect_timeout | |
filterChain_listener_port | |
route_idle_timeout | |
route_max_stream_duration | |
httpRoute_routeConfiguration_port | |
vhostname | |
cluster.service | |
type | |
listener_port | |
exactbalance | |
Peer Authentication | labelselector |
tlsmode | |
Virtual Service | host |
destinationhost | |
port | |
exportTo | |
retryon | |
attempts | |
timeout | |
Request Authentication | labelselector |
issuer | |
jwks/jwksUri | |
Policy Authorization | labelselector |
action | |
hosts | |
paths | |
xfccvalues |
2.2.1.14.2 Installation of ASM Configuration Charts
- Download the ASM charts tar file ocudr-servicemesh-config-23.4.0.tgz available with the CSAR files.
- Configure the custom-values.yaml file as follows:
- Modify only the data section for all the Custom Resource Definitions (CRDs). You can comment out the CRDs that you do not need. For example, in the Service Entry CRD, modify the default values for all the fields with the values that you want to apply. This is applicable to all the supported CRDs.
- The key and value specified for each CRD are substituted into the template section. For example, the following sample data shows how to configure envoy filters using different filters and ports:
envoyFilters_v_19x_111x:
  - name: xfccfilter
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: forward_client_cert_details
        configvalue: ALWAYS_FORWARD_ONLY
  - name: serverheaderfilter
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
  - name: custom-http-stream
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: NETWORK_FILTER
        filtername: envoy.filters.network.http_connection_manager
        operation: MERGE
        typeconfig: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        configkey: server_header_transformation
        configvalue: PASS_THROUGH
        stream_idle_timeout: "6000ms"
        max_stream_duration: "7000ms"
        patchContext: SIDECAR_OUTBOUND
        networkFilter_listener_port: 8000
  - name: custom-tcpsocket-timeout
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: FILTER_CHAIN
        patchContext: SIDECAR_INBOUND
        operation: MERGE
        transport_socket_connect_timeout: "750ms"
        filterChain_listener_port: 8000
  - name: custom-http-route
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: HTTP_ROUTE
        patchContext: SIDECAR_OUTBOUND
        operation: MERGE
        route_idle_timeout: "6000ms"
        route_max_stream_duration: "7000ms"
        httpRoute_routeConfiguration_port: 8000
        vhostname: "ocudr.svc.cluster:8000"
  - name: logicaldnscluster
    labelselector: "app.kubernetes.io/instance: ocudr"
    configpatch:
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamf-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS
      - applyTo: CLUSTER
        clusterservice: rchltxekvzwcamd-y-ec-x-002.amf.5gc.mnc480.mcc311.3gppnetwork.org
        operation: MERGE
        logicaldns: LOGICAL_DNS
  - name: slfexactbalancesample
    labelselector: "app.kubernetes.io/name: ingressgateway-prov"
    configpatch:
      - applyTo: LISTENER
        patchContext: SIDECAR_OUTBOUND
        listener_port: 5001
        operation: MERGE
        exactbalance: true
- Install the ASM configuration charts as below:
- Run the following Helm install command in the namespace to which you want to apply the changes, where <chart-path> is the downloaded ASM configuration charts file:
helm install <Release-Name> <chart-path> -n <Namespace>
- Run the below command to verify if all CRDs are
installed:
kubectl get <CRD-Name> -n <Namespace>
- Run the following Helm upgrade command to apply changes made to the custom-values.yaml file:
helm upgrade <Release-Name> <chart-path> -n <Namespace>
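For illustration only, the following shows what these commands could look like when the release name matches the chart name and the deployment namespace is ocudr; adjust the names to your environment:
helm install ocudr-servicemesh-config ocudr-servicemesh-config-23.4.0.tgz -n ocudr
kubectl get se,dr,envoyfilter -n ocudr
helm upgrade ocudr-servicemesh-config ocudr-servicemesh-config-23.4.0.tgz -n ocudr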
2.2.1.14.3 Verification of Charts
Prerequisite: A system with service mesh deployed and a few ASM configurations, such as service entries and a destination rule for UDR or SLF communication to NRF (Example: nrf1.abc.com).
- Unzip the attached zip file which contains ocudr asm charts.
- Use the ocudr-asm-config’s values.yaml file to add a new SE and DR with nrf1.abc.com and nrf2.abc.com.
- Modify the asmResourceTemplate as needed.
- Run the helm3 upgrade command with the updated values.yaml file.
- Use the
kubectl get se
andkubectl get dr
commands to verify the updates and ensure service mesh applies these new changes.
For Inter-Cluster NF Communication Service Mesh verification, perform the following steps:
- For the source NF configuration, set the destination rule mode to 'DISABLE' and, in the service entry, provide the host (which is configured on DNS) and load balancer port details.
destinationRules:
  - host: nrf124.oracle.com
    mode: DISABLE
    name: nrf
- For destination NF, configure the peerAuthentication for the namespace with mTLS
mode as 'PERMISSIVE'. A sample template is as follows:
Peer Authentication:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: allservice
  namespace: ocudr
spec:
  mtls:
    mode: PERMISSIVE
- After performing the above steps, verify the inter-cluster communication between the two NFs.
2.2.1.14.4 ASM Specific Configuration
To configure ASM
- Add the following annotation under the global section:
# ******** Sub-Section Start: Custom Extension Global Parameters ********
#**************************************************************************
global:
  customExtension:
    allResources:
      labels: {}
      annotations:
        oracle.com/cnc: "true"
    lbServices:
      labels: {}
      annotations: {}
    lbDeployments:
      labels: {}
      annotations: {}
    nonlbServices:
      labels: {}
      annotations: {}
    nonlbDeployments:
      labels: {}
      annotations: {}
# ******** Sub-Section End: Custom Extensions Global Parameters ********
#**************************************************************************
- Enable the following flag under the global section in the custom values file:
global:
  # Servicemesh related configurations
  # Enable when deployed with ASM
  serviceMeshCheck: true
- Enable the following flag under the ingressgateway section:
ingressgateway:
  global:
    # In case of ASPEN Service Mesh enabled, to support clear text traffic from outside of the cluster below flag needs to be true.
    istioIngressTlsSupport:
      ingressGateway: true
- Change the Ingress Gateway Service Type to ClusterIP under the ingressgateway section:
ingressgateway:
  global:
    # Service Type
    type: ClusterIP
- Exclude actuator ports from the Aspen mesh to avoid traffic passing through the sidecar. These ports should be the same as the actuator ports (used for readiness and liveness checks) for the Ingress Gateway and UDR microservices.
The default actuator port (service.port.management) used for UDR microservices is 9000, and for the ingress and egress gateways it is 9090 (ingressgateway.ports.actuatorPort). If the default ports are not changed, use the annotation as given below. Also, exclude the management port of nudr-nrf-client-service from the nudr-nrf-client-service sidecar.
nudr-nrf-client-service:
  deployment:
    customExtension:
      labels: {}
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "9000"
Note:
Exclude the management port of nudr-nrf-client-service from the nudr-nrf-client-service sidecar.
- This step is conditional. If any port is listed with the excludeInboundPorts and excludeOutboundPorts annotations for the ingress gateway, egress gateway, and alternate-route services in the custom-values.yaml file, then you must also exclude the coherence ports along with the ports added. The following are the ports used for coherence by default:
traffic.sidecar.istio.io/excludeInboundPorts: 8095, 8096, 9090, 7, 53
traffic.sidecar.istio.io/excludeOutboundPorts: 8095, 8096, 9090, 7, 53
- Create a destination rule and service entry to enable the MySQL connectivity service to establish a connection between UDR/SLF and the NDB cluster, which is outside ASM. A sample template is given below:
Creating a Service for External MySQL Instance
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-connectivity-service-headless
  namespace: <ocudr-namespace>
subsets:
  - addresses:
      - ip: <sql-node1-ip>
      - ip: <sql-node2-ip>
    ports:
      - port: 3306
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-connectivity-service-headless
  namespace: <ocudr-namespace>
spec:
  clusterIP: None
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-connectivity-service
  namespace: <ocudr-namespace>
spec:
  externalName: mysql-connectivity-service-headless.<ocudr-namespace>.svc.cluster.local
  sessionAffinity: None
  type: ExternalName
Creating Service Entry and Destination Rule for External Database Instance
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-external-se
  namespace: <ocudr-namespace>
spec:
  hosts:
    - mysql-connectivity-service-headless.<ocudr-namespace>.svc.cluster.local
  ports:
    - number: 3306
      name: mysql
      protocol: MySQL
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mysql-external-dr
  namespace: <ocudr-namespace>
spec:
  host: mysql-connectivity-service-headless.<ocudr-namespace>.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Note:
Do not use this template if you are using cnDBTier with ASM as the backend for UDR/SLF 1.9.0 or above.
- If you enable the nudr-ondemand-migration service, then a policy for 4G UDR connectivity is required, as it is outside ASM.
A sample template to create an external Kubernetes service for 4G UDR along with service entry and destination rule is given below:
External Kubernetes Service for 4G UDR
apiVersion: v1
kind: Endpoints
metadata:
  name: sourceudr-service-headless
  namespace: <ocudr-namespace>
subsets:
  - addresses:
      - ip: <4gudr-destination-ip>
    ports:
      - port: <4gudr-destination-port>
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: sourceudr-service-headless
  namespace: <ocudr-namespace>
spec:
  clusterIP: None
  ports:
    - port: <4gudr-destination-port>
      protocol: TCP
      targetPort: <4gudr-destination-port>
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: sourceudr-service
  namespace: <ocudr-namespace>
spec:
  externalName: sourceudr-service-headless.<ocudr-namespace>.svc.cluster.local
  sessionAffinity: None
  type: ExternalName
Service Entry and Destination Rule (for the service created in the previous template)
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: sourceudr-external-se
  namespace: <ocudr-namespace>
spec:
  hosts:
    - sourceudr-service-headless.<ocudr-namespace>.svc.cluster.local
  ports:
    - number: <4gudr-destination-port>
      name: sourceudr
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sourceudr-external-dr
  namespace: <ocudr-namespace>
spec:
  host: sourceudr-service-headless.<ocudr-namespace>.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
- If you are using the nudr-migration tool with an ASM setup, it is mandatory to create a service entry and a destination rule for the 4G UDR IP along with the port used, which is outside ASM. Use the same IP address given in the host IP details of the nudr-migration template. A sample template is given below:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: sourceudr-external-se
  namespace: <ocudr-namespace>
spec:
  hosts:
    - sourceudr-service-headless.<ocudr-namespace>.svc.cluster.local
  addresses:
    - 10.196.7.251
  ports:
    - number: 3867
      name: firstconnectionport
    - number: 3866
      name: secondconnectionport
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: sourceudr-external-dr
  namespace: <ocudr-namespace>
spec:
  host: sourceudr-service-headless.<ocudr-namespace>.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
- To run XFCC and Server header related ATS test cases on SLF or UDR with ASM, envoy filters are used to forward the header passed by the client. A sample envoy filters template is given below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: <server-header envoyfilter name>
  namespace: <NF deployment namespace>
spec:
  workloadSelector:
    labels:
      app.kubernetes.io/instance: <helm release name>
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: MERGE
        value:
          typed_config:
            '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
            server_header_transformation: PASS_THROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: <xfcc envoyfilter name>
  namespace: <NF deployment namespace>
spec:
  workloadSelector:
    labels:
      app.kubernetes.io/instance: <helm release name>
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: MERGE
        value:
          typed_config:
            '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
            forward_client_cert_details: ALWAYS_FORWARD_ONLY
- For inter-pod communication on UDR or SLF deployed with ASM, create Peer Authentication on the pods. A sample template is given below:
Peer Authentication
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: <Peer Authentication name>
  namespace: <UDR/SLF deployment namespace>
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: ocudr
  mtls:
    mode: PERMISSIVE
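After applying the ASM-specific configuration and redeploying, a quick way to confirm that sidecar injection is in effect and that the gateway services use ClusterIP is to list the pods and services in the deployment namespace; this is a generic check, not a mandatory step:
kubectl get pods -n <ocudr-namespace>    # each pod should show the additional istio-proxy container (for example, 2/2 READY)
kubectl get svc -n <ocudr-namespace>     # the ingress gateway services should show TYPE ClusterIP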
2.2.1.14.5 Deleting ASM
This section describes the steps to delete ASM.
helm delete <helm-release-name> -n <namespace>
Where:
<helm-release-name> is the release name used by the Helm command. This release name must be the same as the release name used for the ServiceMesh configuration charts.
<namespace> is the deployment namespace used by the Helm command.
For example:
helm delete ocudr-servicemesh-config -n ocudr
To disable Istio sidecar injection on the namespace, run:
kubectl label --overwrite namespace ocudr istio-injection=disabled
To verify that the service mesh resources are removed from the namespace, run:
kubectl get se,dr,peerauthentication,envoyfilter,vs -n ocudr
2.2.1.15 Configuring udrServices and consumerNF Flags
- udrService: This flag indicates the UDR mode. It can be All, nudr-dr, or nudr-group-id-map (slf). On the basis of the udrService value, NF profiles register with NRF. If the udrService value is 'All' or 'nudr-dr', it is filtered further on the basis of the consumerNF flag.
- consumerNF: This flag indicates consumer network functions for UDR. It is a comma separated list of possible consumer services like PCF, UDM, NEF, or any combination of these services.
To configure the udrService and consumerNF flags, use the following information.
- If the udrService is All, then on the basis of udrMode and consumerNF type, set the
value of allowedNFTypes in appProfile.
Note:
NRF needs to be added in allowedNFType.
The default value of the consumerNF type in this mode is PCF, UDM, NEF. The details are:
Table 2-26 Allowed NF Type Values When udrService is All
Consumer NF | Value of Allowed NF Type (a field in App PROFILE) | Notes |
---|---|---|
PCF | "allowedNfTypes":["PCF","NRF"] | Only PCF related entries are loaded in udrdb. |
UDM | "allowedNfTypes":["UDM","NRF"] | Only UDM related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
NEF | "allowedNfTypes":["NEF","NRF"] | Only NEF related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
PCF, UDM | "allowedNfTypes":["PCF","NRF","UDM"] | Only PCF and UDM related entries are loaded in udrdb. |
PCF, NEF | "allowedNfTypes":["PCF","NRF","NEF"] | Only PCF and NEF related entries are loaded in udrdb. |
UDM, NEF | "allowedNfTypes":["UDM","NRF","NEF"] | Only UDM and NEF related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
PCF, UDM, NEF (default) | "allowedNfTypes":["PCF","UDM","NRF","NEF"] | PCF, UDM, and NEF related entries are loaded in udrdb. |
The default app profile request is:
[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["PCF","UDM","NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["PCF","UDM"],"serviceInstanceId":"ae870316-384d-458a-bd45-025c9e748976","serviceName":"nudr-dr","priority":10,"capacity":500},{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nudr-group-id-map","priority":10,"capacity":500}],"udrInfo":{"supportedDataSets":["POLICY","SUBSCRIPTION"],"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersi
- If the udrService is nudr-dr, then on the basis of udrMode and consumerNF type, set
the value of allowedNFTypes in appProfile. The default value of consumerNF is
PCF.
Table 2-27 Allowed NF Type Values When udrService is nudr-dr
Consumer NF | Value of Allowed NF Type (a field in NF PROFILE) | Notes |
---|---|---|
PCF (default) | "allowedNfTypes":["PCF"] | Only PCF related entries are loaded in udrdb. |
UDM | "allowedNfTypes":["UDM"] | Only UDM related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
NEF | "allowedNfTypes":["NEF"] | Only NEF related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
PCF, UDM | "allowedNfTypes":["PCF","UDM"] | Only PCF and UDM related entries are loaded in udrdb. |
PCF, NEF | "allowedNfTypes":["PCF","NEF"] | Only PCF and NEF related entries are loaded in udrdb. |
UDM, NEF | "allowedNfTypes":["UDM","NEF"] | Only UDM and NEF related entries are loaded in udrdb. Make sure diameter proxy and diameter gateway microservices are disabled. |
PCF, UDM, NEF | "allowedNfTypes":["PCF","UDM","NEF"] | PCF, UDM, and NEF related entries are loaded in udrdb. |
The default appProfile request is:
nudr-dr
[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["PCF","UDM","NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["PCF","UDM"],"serviceInstanceId":"ae870316-384d-458a-bd45-025c9e748976","serviceName":"nudr-dr","priority":10,"capacity":500}],"udrInfo":{"supportedDataSets":["POLICY","SUBSCRIPTION"],"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}]
- If the udrService is nudr-group-id-map, then set the allowedNfTypes in the appProfile as NRF. The default appProfile request is:
[{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nudr-group-id-map","priority":10,"capacity":500}],"udrInfo":{"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}]
2.2.1.16 Configuring eirServices and consumerNF Flags
- eirService: This flag indicates EIR mode. Set the parameter global.udrServices to n5g-eir-eic to indicate EIR mode; n5g-eir-eic is the service used to register EIR with NRF.
- consumerNF: This flag indicates the consumer network functions for EIR.
To configure the eirService and consumerNF flags, use the following information (a sample values snippet is shown at the end of this section).
- If the eirService is n5g-eir-eic, then based on the eirMode and consumerNF type, set the value of allowedNfTypes in the appProfile. The default value of consumerNF is AMF.
Note:
NRF needs to be added in allowedNfTypes.
Table 2-28 Allowed NF Type Values When eirService is n5g-eir-eic
Consumer NF | Value of Allowed NF Type (a field in App Profile) | Notes |
---|---|---|
AMF | "allowedNfTypes":["AMF"] | Only AMF related entries are loaded. |
The default app profile request is:
[{ "nfInstanceId": "5a7bd676-ceeb-44bb-95e0-f6a55a328b03", "nfStatus": "REGISTERED", "fqdn": "ocudr-ingressgateway-sig.myudr.svc.cluster.local", "nfType": "5G_EIR", "allowedNfTypes": ["AMF"], "plmnList": [{ "mnc": "14", "mcc": "310" }], "priority": 10, "capacity": 500, "load": 0, "locality": "bangalore", "nfServices": [{ "load": 0, "scheme": "http", "versions": [{ "apiFullVersion": "2.1.0.alpha-3", "apiVersionInUri": "v1" }], "fqdn": "ocudr-ingressgateway-sig.myudr.svc.cluster.local", "ipEndPoints": [{ "port": "80", "ipv4Address": "10.0.0.0", "transport": "TCP" }], "nfServiceStatus": "REGISTERED", "allowedNfTypes": ["AMF"], "serviceInstanceId": "ae870316-384d-458a-bd45-025c9e748976", "serviceName": "n5g-eir-eic", "priority": 10, "capacity": 500 }], "heartBeatTimer": 90, "nfServicePersistence": false, "nfProfileChangesSupportInd": false, "nfSetIdList": ["setxyz.udrset.5gc.mnc012.mcc345"] }]
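Similarly, a minimal values sketch for EIR mode, under the same assumptions about key names and placement as the earlier snippet:
global:
  udrServices: n5g-eir-eic  # EIR mode; registers the n5g-eir-eic service with NRF
  consumerNF: "AMF"         # default consumer NF for EIR; key name assumed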
2.2.1.17 Configuring PodDisruptionBudget Kubernetes Resource
Note:
The performance and capacity of the UDR system may vary based on the call model, feature or interface configuration, and underlying CNE and hardware environment, including but not limited to the size of the JSON payload, operation type, and traffic model.
Note:
The value of the maxUnavailable parameter is an absolute number.
The following tables list the PodDisruptionBudget values for each deployment mode; an illustrative manifest follows the tables.
Table 2-29 PodDisruptionBudget for SLF Deployment
Microservice | PDB Value Specified (maxUnavailable) | Notes |
---|---|---|
ingressgateway | 1 | NA |
egressgateway | 1 | NA |
nudr-drservice | 1 | NA |
nudr-dr-provservice | 1 | NA |
nudr-config | 1 | NA |
config-server | 1 | NA |
alternate-route | 1 | NA |
nrf-client | 1 | NA |
app-info | 1 | NA |
perf-info | 1 | NA |
perf-info-prov | 1 | NA |
Table 2-30 PodDisruptionBudget for UDR Deployment
Microservice | PDB Value Specified (maxUnavailable) | Notes |
---|---|---|
ingressgateway | 1 | NA |
egressgateway | 1 | NA |
nudr-drservice | 1 | NA |
nudr-dr-provservice | 1 | NA |
nudr-notify-service | 1 | NA |
nudr-diam-gateway | 1 | NA |
nudr-diameterproxy | 1 | NA |
nudr-ondemand-migration | 1 | NA |
nudr-config | 1 | NA |
config-server | 1 | NA |
alternate-route | 1 | NA |
nrf-client | 1 | NA |
app-info | 1 | NA |
perf-info | 1 | NA |
perf-info-prov | 1 | NA |
Table 2-31 PodDisruptionBudget for EIR Deployment
Microservice | PDB Value Specified (maxUnavailable) | Notes |
---|---|---|
ingressgateway | 1 | NA |
egressgateway | 1 | NA |
nudr-drservice | 1 | NA |
nudr-dr-provservice | 1 | NA |
nudr-config | 1 | NA |
config-server | 1 | NA |
alternate-route | 1 | NA |
nrf-client | 1 | NA |
app-info | 1 | NA |
perf-info | 1 | NA |
perf-info-prov | 1 | NA |
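For reference, a maxUnavailable value of 1 corresponds to a standard Kubernetes PodDisruptionBudget object similar to the following illustrative sketch. The actual objects are rendered by the ocudr Helm charts; the object name and selector label below are hypothetical:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nudr-drservice-pdb                    # hypothetical name; the charts generate the real one
spec:
  maxUnavailable: 1                           # absolute number, as noted above
  selector:
    matchLabels:
      app.kubernetes.io/name: nudr-drservice  # hypothetical selector label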
2.2.1.18 Configuring Network Policies
Overview
Note:
Configuring network policies is an optional step. Based on the security requirements, network policies may or may not be configured.
For more information on network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/.
Configuring Network Policies
Following are the various operations that can be performed for network policies after UDR deployment.
Installing Network Policies
The network policies are part of the ocudr Helm package and can be modified through the ocudr_custom_values.yaml file. The entire specification section in the ocudr_custom_values.yaml file is taken as input for the network policies. You can provide a name for each policy to differentiate the policies when they are deployed. The template file for the network policies is shown below, followed by the default policy list and an example of how to apply changes.
#Template for various network Policy
{{- if .Values.global.createNetworkPolicy }}
{{- range .Values.global.networkPolicy }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: {{ .metadata.name }}
spec:
{{- .spec | toYaml | nindent 2 }}
---
{{- end -}}
{{- end }}
- metadata:
name: deny-ingress-all-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Ingress
- metadata:
name: allow-ingress-sbi-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: ingressgateway-sig
policyTypes:
- Ingress
ingress:
- {}
- metadata:
name: allow-ingress-prov-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: ingressgateway-prov
policyTypes:
- Ingress
ingress:
- {}
- metadata:
name: allow-ingress-diamgateway-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: nudr-diam-gateway
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 3868
- metadata:
name: deny-egress-all-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
- metadata:
name: allow-egress-through-egress-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: egressgateway
policyTypes:
- Egress
egress:
- {}
- metadata:
name: allow-egress-through-perf-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: perf-info
policyTypes:
- Egress
egress:
- {}
- metadata:
name: allow-egress-through-appinfo-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: appinfo
policyTypes:
- Egress
egress:
- {}
- metadata:
name: allow-egress-through-diamgateway-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: nudr-diam-gateway
policyTypes:
- Egress
egress:
- {}
# Allow ingress for Prometheus Metrics Scrapping
- metadata:
name: allow-ingress-prometheus-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 9090 # Prometheus Port
- protocol: TCP
port: 9000
- protocol: TCP
port: 9001
- protocol: TCP
port: 9002
- metadata:
name: allow-ingress-service-mesh-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 15021
- protocol: TCP
port: 15012
- protocol: TCP
port: 15000
- protocol: TCP
port: 15020
# Allow egress to DB SQL and Monitoring Ports
- metadata:
name: allow-mysql-db-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 3306 # DB SQL Port
- protocol: TCP
port: 8080 # DB Monitoring Port
# Allow egress to K8s API Port
- metadata:
name: allow-egress-k8s
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 6443 # K8s API Server Port
# Allow egress to Jaeger Agent
- metadata:
name: allow-jaeger-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 4318 # Jaeger Agent Port
# Allow egress to DNS
- metadata:
name: allow-egress-dns-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 53 # DNS TCP Port
- protocol: UDP
port: 53 # DNS UDP Port
- metadata:
name: allow-egress-service-mesh-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 15021
- protocol: TCP
port: 15012
- protocol: TCP
port: 15000
- protocol: TCP
port: 15020
# Allow ingress from UDR Pods
- metadata:
name: allow-ingress-from-udr-pods
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
# Allow egress to UDR Pods
- metadata:
name: allow-egress-to-udr-pods
spec:
podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app.kubernetes.io/part-of: ocudr
# Allow ingress from Console
- metadata:
name: allow-ingress-from-console-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: nudr-config
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector: {}
podSelector:
matchLabels:
app.kubernetes.io/part-of: cncc-core
ports:
- protocol: TCP
port: 5001
- metadata:
name: allow-ingress-from-atspod-config-ocudr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: nudr-config
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: ocats-udr
ports:
- protocol: TCP
port: 5001
- metadata:
name: allow-ingress-from-console-udr
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: nudr-config
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: <namespace of console>
podSelector:
matchLabels:
app.kubernetes.io/part-of: cncc-core
ports:
- protocol: TCP
port: 5001
#Uncomment line no 76 to 88 If sftp is enabled and to transfer out the files to remote server for export tool
#- metadata:
# name: sftp-export-network-policy
# namespace: ocudr # should be deployed ns
# spec:
# podSelector:
# matchLabels:
# app.kubernetes.io/name: nudr-export-tool
# policyTypes:
# - Egress
# egress:
# - to:
# - ipBlock:
# cidr: 10.121.29.66/32 #Give the remote server ip here
#Uncomment line no 91 to 117 If sftp is enabled and to transfer out the files to remote server for import tool
#- metadata:
# name: sftp-import-egress-network-policy
# namespace: ocudr # should be deployed ns
# spec:
# podSelector:
# matchLabels:
# app.kubernetes.io/name: nudr-bulk-import
# policyTypes:
# - Egress
# egress:
# - to:
# - ipBlock:
# cidr: 10.75.229.137/32 #Give the remote server ip here
#- metadata:
# name: sftp-import-ingress-network-policy
# namespace: ocudr # should be deployed ns
# spec:
# podSelector:
# matchLabels:
# app.kubernetes.io/name: nudr-bulk-import
# policyTypes:
# - Ingress
# ingress:
# - from:
# - ipBlock:
# cidr: 10.75.229.137/32 #Give the remote server ip here
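After editing the networkPolicy entries in the custom values file, the updated policies can be applied by upgrading the Helm release. The following is a minimal sketch with hypothetical release, repository, and namespace names; refer to the upgrade procedure in this guide for the authoritative command:
# Apply modified network policies by upgrading the existing release (hypothetical names)
helm upgrade ocudr-site1 ocudr-repo/ocudr -n myudr -f ocudr_custom_values.yaml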
2.2.1.18.1 Verifying Network Policies
Run the following command to verify that the network policies are deployed:
kubectl get networkpolicy -n <namespace>
Sample output:
NAME                                      POD-SELECTOR                                   AGE
allow-egress-dns-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-egress-dns-udr app.kubernetes.io/part-of=ocudr 7d19h
allow-egress-k8s app.kubernetes.io/part-of=ocudr 7d19h
allow-egress-k8s-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-egress-to-provgw-pods app.kubernetes.io/part-of=provgw 4d3h
allow-egress-to-udr-pods app.kubernetes.io/part-of=ocudr 7d19h
allow-ingress-from-console-provgw app.kubernetes.io/name=provgw-config 4d3h
allow-ingress-from-console-udr app.kubernetes.io/name=nudr-config 7d19h
allow-ingress-from-provgw-pods-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-ingress-from-udr-pods app.kubernetes.io/part-of=ocudr 7d19h
allow-ingress-prometheus-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-ingress-prometheus-udr app.kubernetes.io/part-of=ocudr 7d19h
allow-ingress-prov-udr app.kubernetes.io/name=ingressgateway-prov 7d19h
allow-ingress-provgw app.kubernetes.io/name=prov-ingressgateway 4d3h
allow-ingress-sbi-udr app.kubernetes.io/name=ingressgateway-sig 7d19h
allow-jaeger-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-jaeger-udr app.kubernetes.io/part-of=ocudr 7d19h
allow-mysql-db-provgw app.kubernetes.io/part-of=provgw 4d3h
allow-mysql-db-udr app.kubernetes.io/part-of=ocudr 7d19h
deny-egress-all-except-egw-provgw app.kubernetes.io/name notin (egressgateway) 4d3h
deny-egress-all-except-egw-udr app.kubernetes.io/name notin (egressgateway) 7d19h
deny-ingress-all-provgw app.kubernetes.io/part-of=provgw 4d3h
deny-ingress-all-udr app.kubernetes.io/part-of=ocudr 7d19h
Table 2-32 Supported Kubernetes Resource for Configuring Network Policies
Parameter | Description | Details |
---|---|---|
apiVersion | This indicates the Kubernetes API version for access control. Note: This is the supported API version for network policies. This is a read-only parameter. | Data Type: string. Default Value: networking.k8s.io/v1 |
kind | This represents the REST resource this object represents. Note: This is a read-only parameter. | Data Type: NetworkPolicy. Default Value: NetworkPolicy |
metadata.name | This indicates a unique name for the network policy. | NA |
spec.{} | This consists of all the information needed to define a particular network policy in the given namespace. | Default Value: NA. This is a mandatory parameter. |
For more information about this functionality, see Network Policies in the Oracle Communications Cloud Native Core, Unified Data Repository User Guide.
2.2.1.19 Manual Creation of UDR Database and MySQL User
- Perform the following steps to create the UDR databases required for the deployment:
  - Run the following command to log in to one of the ndbappmysqld node pods:
    kubectl exec -it ndbappmysqld-0 -n <db-namespace> -- bash
  - Once logged in to the pod, run the following command with root credentials to log in to the MySQL terminal:
    mysql -h127.0.0.1 -uroot -p<rootPassword>
  - Run the following commands to create the UDR subscriber and configuration databases:
    CREATE DATABASE IF NOT EXISTS <udrdbname> CHARACTER SET utf8;
    CREATE DATABASE IF NOT EXISTS <udrconfigdbname> CHARACTER SET utf8;
    Note:
    In case of a multiple-site deployment, you can create the configuration databases for all sites using similar CREATE statements. This can be performed on one of the ndbappmysqld pods of any one site.
- Perform the following steps to create the MySQL user required for accessing the UDR databases. These instructions must be run on all ndbappmysqld and ndbmysqld nodes (a consolidated worked example follows these steps):
  - Run the following command to log in to one of the SQL node pods:
    kubectl exec -it <ndbappmysqld-x or ndbmysqld-x pod name> -n <db-namespace> -- bash
  - Once logged in to the pod, run the following command with root credentials to log in to the MySQL terminal:
    mysql -h127.0.0.1 -uroot -p<rootPassword>
  - Run the following commands to create the MySQL user and provide the required GRANTS for the user:
    CREATE USER IF NOT EXISTS '<udruser>'@'%' IDENTIFIED BY '<udrpasswd>';
    GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON <udrdbname>.* TO '<udruser>'@'%';
    GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON <udrconfigdbname>.* TO '<udruser>'@'%';
    FLUSH PRIVILEGES;
    Note:
    In case of a multiple-site deployment, use the GRANT command to provide access to all UDR configuration databases created in Step 1.
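The following is a consolidated worked example of the database and user creation, using hypothetical names (udrdb, udrconfigdb, udruser) and a placeholder password; substitute the values used at your site:
-- Run once on one ndbappmysqld pod:
CREATE DATABASE IF NOT EXISTS udrdb CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS udrconfigdb CHARACTER SET utf8;

-- Run on every ndbappmysqld and ndbmysqld SQL node:
CREATE USER IF NOT EXISTS 'udruser'@'%' IDENTIFIED BY '<udrpasswd>';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON udrdb.* TO 'udruser'@'%';
GRANT SELECT, INSERT, CREATE, ALTER, DROP, LOCK TABLES, CREATE TEMPORARY TABLES, DELETE, UPDATE, EXECUTE, REFERENCES ON udrconfigdb.* TO 'udruser'@'%';
FLUSH PRIVILEGES;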
2.2.2 Installation Tasks
2.2.2.1 Installing UDR Package
To install the UDR package, perform the following steps:
- Run the following command to access the extracted
package:
cd ocudr-<release_number>
- (Optional) Customize the ocudr-custom-values-23.4.2.yaml file with the required input parameters. To customize the file, see UDR Configuration Parameters.
- Run the following command to install UDR:
helm install <helm chart> [--version <OCUDR version>] --name <release> --namespace <k8s namespace> -f <ocudr-custom-values-23.4.2.yaml>
In the above command:
- <helm chart> is the name of the chart, which is of the form <helm repo>/ocudr.
- <OCUDR version> is the software version (Helm chart version) of OCUDR. This is optional. If omitted, the default is the latest version available in the Helm repository.
- <release> is a user-defined release name that identifies the Helm deployment. All pod names, service names, and deployment names are prefixed with this release name. This name must be different for each UDR or SLF site in case of multisite deployments.
- <k8s namespace> is a name of the user's choice that identifies the Kubernetes namespace of the Unified Data Repository. All the Unified Data Repository microservices are deployed in this Kubernetes namespace.
- <ocudr-custom-values-23.4.2.yaml> is the customized ocudr-custom-values-23.4.2.yaml file. The ocudr-custom-values-23.4.2.yaml file is part of the customer documentation. Download the file and modify it as per the user site.
Note:
If Helm 3 is used, run the following command to install UDR (a worked example follows the caution below):
helm install <release> --namespace <k8s namespace> -f <ocudr-custom-values-23.4.2.yaml> <helm chart> [--version <OCUDR version>]
Caution:
Do not exit the helm install command manually. After you run the helm install command, it takes some time to install all the services. Do not press "Ctrl+C" to exit the helm install command while it is in progress, as this leads to anomalous behavior.
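For illustration, a Helm 3 install command with hypothetical values (release name ocudr-site1, namespace myudr, repository ocudr-repo); the optional --version flag can be added as shown in step 3:
helm install ocudr-site1 --namespace myudr -f ocudr-custom-values-23.4.2.yaml ocudr-repo/ocudr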
Preinstall Hook Failure Warning
If the installation fails at the pre-install hook and the hook pod goes to the error state with the error log "message":"Access denied for user 'root'@'127.0.0.6' (using password: YES)", you must manually create the UDR databases and MySQL user along with the GRANTS. For more information, see Manual Creation of UDR Database and MySQL User. When you perform these manual steps, disable the createUser configuration under the global.preInstall section in the ocudr custom values file before retrying the Helm installation.
2.2.3 Post Installation Task
This section explains the postinstallation tasks for UDR.
2.2.3.1 Verifying Installation
- Run the following command to check the Helm release status:
  helm ls --filter <release-name> -n <release-namespace>
  Where,
  <release-name> is the Helm release name of UDR.
  <release-namespace> is the namespace of the UDR deployment.
  In the output, if STATUS is showing as deployed, then the deployment is successful.
- Run the following command to verify if the pods are up and active:
  kubectl get jobs,pods -n <release-namespace>
  Where,
  <release-namespace> is the namespace where UDR is deployed.
  Example:
  kubectl get pod -n ocudr
  In the output, the STATUS column of all the pods must be Running and the READY column of all the pods must be n/n, where n is the number of containers in the pod.
- Run the following command to verify if the services are deployed and active:
  kubectl get services -n <release-namespace>
  Example:
  kubectl get services -n ocudr
If the installation is unsuccessful or the STATUS of all the pods is not in the Running state, perform the troubleshooting steps provided in the Oracle Communications Cloud Native Core, Unified Data Repository Troubleshooting Guide.
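Optionally, pods that are not yet healthy can be listed with a single command. The following is a sketch that assumes UDR is deployed in the ocudr namespace; pods created by completed jobs are excluded through the Succeeded phase filter:
# List any pods that are neither Running nor Succeeded (hypothetical namespace ocudr)
kubectl get pods -n ocudr --field-selector=status.phase!=Running,status.phase!=Succeeded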
2.2.3.2 Performing Helm Test
Helm Test - Overview
Note:
Helm test can be performed only on Helm 3.
Helm Test is a feature that validates successful installation of UDR along with the readiness of all its pods. It also checks that all the PVCs under the release namespace and configured label selector are in the Bound state.
Configuring Helm Test
- Complete the Helm test configurations under the Global section of the custom-values.yaml file:
  global:
    # Helm test related configurations
    test:
      nfName: ocudr
      image:
        name: ocudr/nf_test
        tag: 23.4.2
      config:
        logLevel: WARN
        timeout: 240
      resources:
        - horizontalpodautoscalers/v1
        - deployments/v1
        - configmaps/v1
        - prometheusrules/v1
        - serviceaccounts/v1
        - poddisruptionbudgets/v1
        - roles/v1
        - statefulsets/v1
        - persistentvolumeclaims/v1
        - services/v1
        - rolebindings/v1
      # used to enable the helm test logging feature
      complianceEnable: true
For more details about the configurable parameters, see UDR Configuration Parameters.
- Ensure the app.kubernetes.io/instance: {{ .Release.Name }} label is part of all microservice deployments. The Helm Test feature uses this label selector internally, along with the Helm release namespace, to select the pods and PVCs for verification.
  Note:
  If the above label is not present, add it to the labels section so that the helm test can work on the specific Helm release.
- Run the following helm test command (see the example after these notes):
  helm test <helm_release_name> -n <k8s namespace>
  After the helm test job completes, check the output to confirm whether the test job was successful.
  Note:
  - The pods and PVCs to be verified are fetched based on the namespace and label selector. If the list is empty, the Helm test is considered successful. If the Helm test fails with errors, see the Oracle Communications Cloud Native Core, Unified Data Repository Troubleshooting Guide.
  - When the deployment has two replicas enabled for the nrf-client-nfmanagement microservice, the helm test fails stating that one of the nrf-client-nfmanagement pods is not ready. In this case, the behavior is expected and the helm test failure can be ignored.
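For illustration, a helm test invocation with hypothetical release and namespace names:
# Hypothetical release name and namespace
helm test ocudr-site1 -n myudr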