3 Customizing and Configuring Unified Data Repository
This section provides information on customizing and configuring Unified Data Repository.
Customizing Unified Data Repository
You can customize the Unified Data Repository deployment by overriding the default values of various configurable parameters.
For example, the ocudr-custom-values.yaml file shown in this section customizes the MySQL host. You can prepare this file manually to override the default values of the configurable parameters.
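A minimal override file needs to list only the parameters that differ from the defaults; Helm merges it over the chart's built-in values. A sketch (the registry hostname below is illustrative):

```yaml
# ocudr-custom-values.yaml (minimal sketch; the registry hostname is illustrative)
global:
  dockerRegistry: my-registry.example.com:5000
  mysql:
    port: "3306"
```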
Note:
All the configurable parameters are described in the Configuring User Parameters section.
# Copyright 2019 (C), Oracle and/or its affiliates. All rights reserved.
global:
dockerRegistry: ocudr-registry.us.oracle.com:5000
mysql:
dbServiceName: "mysql-connectivity-service.occne-infra" #This is a read only parameter. Use the default value.
port: "3306"
udrTracing:
enable: false
host: "occne-tracer-jaeger-collector.occne-infra"
port: 14268
dbenc:
shavalue: 256
serviceAccountName:
egress:
enabled: true
# Configurations for Config-Server
configServerEnable: true
initContainerEnable: false
dbCredSecretName: 'ocudr-secrets'
releaseDbName: 'udr_release'
configServerFullNameOverride: nudr-config-server
# Configuration to decide the Service the deployment will provide
udrServices: "nudr-group-id-map"
# Enable to register with NRF for UDSF service
udsfEnable: false
# Helm test related configurations
test:
nfName: ocudr
image:
name: ocudr/nf_test
tag: 1.7.1
config:
logLevel: WARN
timeout: 120
# Pre Hook Install configurations
preInstall:
image:
name: ocudr/nudr_prehook
tag: 1.7.1
config:
logLevel: WARN
# Resources for Hooks
hookJobResources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
#**************************************************************************
# ******** Sub-Section Start: Custom Extension Global Parameters ********
#**************************************************************************
customExtension:
allResources:
labels: {}
annotations:
sidecar.istio.io/inject: "\"false\""
lbServices:
labels: {}
annotations: {}
lbDeployments:
labels: {}
annotations:
sidecar.istio.io/inject: "\"true\""
oracle.com/cnc: "\"true\""
nonlbServices:
labels: {}
annotations: {}
nonlbDeployments:
labels: {}
annotations:
sidecar.istio.io/inject: "\"true\""
oracle.com/cnc: "\"true\""
# ******** Sub-Section End: Custom Extension Global Parameters ********
#**************************************************************************
# ******** Sub-Section Start: Prefix/Suffix Global Parameters ************
#**************************************************************************
k8sResource:
container:
prefix:
suffix:
# ******** Sub-Section End: Prefix/Suffix Global Parameters *************
#**************************************************************************
nudr-drservice:
# nameOverride: "nudr-drservice"
image:
name: ocudr/nudr_datarepository_service
tag: 1.7.1
pullPolicy: Always
service:
http2enabled: "true"
type: ClusterIP
port:
http: 5001
https: 5002
management: 9000
customExtension:
labels: {}
annotations: {}
tracingEnabled: false
notify:
port:
http: 5001
https: 5002
deployment:
replicaCount: 2
customExtension:
labels: {}
annotations: {}
logging:
level:
root: "WARN"
subscriber:
autocreate: "true"
validate:
smdata: "false"
vsaLevel: "smpolicy"
resources:
limits:
cpu: 4
memory: 4Gi
requests:
cpu: 4
memory: 4Gi
target:
averageCpuUtil: 80
hikari:
poolsize: "25"
minReplicas: 2
maxReplicas: 8
nudr-notify-service:
# nameOverride: "nudr-notify-service"
enabled: false
image:
name: ocudr/nudr_notify_service
tag: 1.7.1
pullPolicy: Always
service:
http2enabled: "true"
type: ClusterIP
port:
http: 5001
https: 5002
management: 9000
customExtension:
labels: {}
annotations: {}
tracingEnabled: false
deployment:
replicaCount: 2
customExtension:
labels: {}
annotations: {}
notification:
retrycount: "3"
retryinterval: "5"
retryerrorcodes: "400,429,500,503"
hikari:
poolsize: "10"
logging:
level:
root: "WARN"
resources:
limits:
cpu: 3
memory: 3Gi
requests:
cpu: 3
memory: 3Gi
target:
averageCpuUtil: 80
minReplicas: 2
maxReplicas: 4
# for egress port
http:
proxy:
port: 8080
nudr-config:
# nameOverride: "nudr-configuration-service"
enabled: true
image:
name: ocudr/nudr_config
tag: 1.7.1
pullPolicy: Always
service:
http2enabled: "true"
type: ClusterIP
port:
http: 5001
https: 5002
management: 9000
customExtension:
labels: {}
annotations: {}
deployment:
replicaCount: 1
customExtension:
labels: {}
annotations: {}
logging:
level:
root: "WARN"
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 2
memory: 2Gi
target:
averageCpuUtil: 80
minReplicas: 1
maxReplicas: 1
config-server:
enabled: true
global:
nfName: nudr
imageServiceDetector: ocudr/readiness-detector:latest
envJaegerAgentHost: ''
envJaegerAgentPort: 6831
replicas: 1
envLoggingLevelApp: WARN
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 2
memory: 512Mi
service:
type: ClusterIP
fullnameOverride: udr-config-server
installedChartVersion: ''
nudr-nrf-client-service:
# nameOverride: "nudr-nrf-client-service"
enabled: true
host:
baseurl: "http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances"
proxy:
ssl: "false"
logging:
level:
root: "WARN"
image:
name: ocudr/nudr_nrf_client_service
tag: 1.7.1
pullPolicy: Always
heartBeatTimer: "90"
udrGroupId: "udr-1"
capacityMultiplier: "500"
supirange: "[{\"start\": \"10000000000\", \"end\": \"20000000000\"}]"
priority: "10"
udrMasterIpv4: "10.0.0.0"
gpsirange: "[{\"start\": \"10000000000\", \"end\": \"20000000000\"}]"
#endpointLabelSelector : "ocudr-ingressgateway"
plmnvalues: "[{\"mnc\": \"14\", \"mcc\": \"310\"}]"
scheme: "http"
livenessProbeMaxRetry: 5
# this is for egress port
http:
proxy:
host:
port: 8080
# The two configurations below change based on the site's Kubernetes name resolution settings. Also note that they change with the namespace used for the UDR installation.
#livenessProbeUrl: "http://nudr-notify-service.myudr.svc.cluster.local:9000/actuator/health,http://nudr-drservice.myudr.svc.cluster.local:9000/actuator/health"
fqdn: "ocudr-ingressgateway.myudr.svc.cluster.local"
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 1
memory: 2Gi
service:
customExtension:
labels: {}
annotations: {}
deployment:
customExtension:
labels: {}
annotations:
traffic.sidecar.istio.io/excludeOutboundPorts: "\"9000,9090\"" #Should be configured with the management ports used for the UDR microservices and the actuator port used for IGW/EGW
ingressgateway:
global:
# Docker registry name
# dockerRegistry: reg-1:5000
# Specify type of service - Possible values are :- ClusterIP, NodePort, LoadBalancer and ExternalName
type: ClusterIP
# Enable or disable IP Address allocation from Metallb Pool
metalLbIpAllocationEnabled: true
# Address Pool Annotation for Metallb
metalLbIpAllocationAnnotation: "metallb.universe.tf/address-pool: signaling"
# Set to true if a constant node port needs to be assigned when the service type is LoadBalancer or NodePort
staticNodePortEnabled: false
# port on which UDR's API-Gateway service is exposed
# If httpsEnabled is false, this Port would be HTTP/2.0 Port (unsecured)
# If httpsEnabled is true, this Port would be HTTPS/2.0 Port (secured SSL)
publicHttpSignalingPort: 80
publicHttpsSignallingPort: 443
image:
# image name
name: ocudr/ocingress_gateway
# tag name of image
tag: 1.7.7
# Pull Policy - Possible Values are:- Always, IfNotPresent, Never
pullPolicy: Always
initContainersImage:
# init Containers image name
name: ocudr/configurationinit
# tag name of init Container image
tag: 1.2.0
# Pull Policy - Possible Values are:- Always, IfNotPresent, Never
pullPolicy: Always
updateContainersImage:
# update Containers image name
name: ocudr/configurationupdate
# tag name of update Container image
tag: 1.2.0
# Pull Policy - Possible Values are:- Always, IfNotPresent, Never
pullPolicy: Always
deployment:
customExtension:
labels: {}
annotations: {}
service:
customExtension:
labels: {}
annotations: {}
ssl:
tlsVersion: TLSv1.2
privateKey:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
rsa:
fileName: rsa_private_key_pkcs1.pem
ecdsa:
fileName: ecdsa_private_key_pkcs8.pem
certificate:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
rsa:
fileName: apigatewayrsa.cer
ecdsa:
fileName: apigatewayecdsa.cer
caBundle:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: caroot.cer
keyStorePassword:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: key.txt
trustStorePassword:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: trust.txt
initialAlgorithm: RSA256
cncc:
enabled: false
enablehttp1: true
# Resource details
resources:
limits:
cpu: 5
memory: 4Gi
initServiceCpu: 1
initServiceMemory: 1Gi
updateServiceCpu: 1
updateServiceMemory: 1Gi
requests:
cpu: 5
memory: 4Gi
initServiceCpu: 1
initServiceMemory: 1Gi
updateServiceCpu: 1
updateServiceMemory: 1Gi
target:
averageCpuUtil: 80
log:
level:
root: WARN
ingress: INFO
oauth: INFO
# enable jaeger tracing
jaegerTracingEnabled: false
openTracing :
jaeger:
udpSender:
# udpsender host
host: "occne-tracer-jaeger-agent.occne-infra"
# udpsender port
port: 6831
probabilisticSampler: 0.5
# Number of Pods must always be available, even during a disruption.
minAvailable: 2
# Min replicas to scale to maintain an average CPU utilization
minReplicas: 2
# Max replicas to scale to maintain an average CPU utilization
maxReplicas: 5
# label to override name of api-gateway micro-service name
#fullnameOverride: ocudr-endpoint
# To Initialize SSL related infrastructure in init/update container
initssl: false
# Cipher suites to be enabled on server side
ciphersuites:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_DHE_RSA_WITH_AES_256_CCM
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
#OAUTH CONFIGURATION
oauthValidatorEnabled: false
nfType: SMF
nfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11
producerScope: nsmf-pdusession,nsmf-event-exposure
allowedClockSkewSeconds: 0
nrfPublicKeyKubeSecret: nrfpublickeysecret
nrfPublicKeyKubeNamespace: ingress
validationType: strict
producerPlmnMNC: 123
producerPlmnMCC: 346
#Server Configuration for http and https support
#Server side http support
enableIncomingHttp: true
#Server side https support
enableIncomingHttps: false
#Client side https support
enableOutgoingHttps: false
maxRequestsQueuedPerDestination: 5000
maxConnectionsPerIp: 10
#Service Mesh (Istio) to take care of load-balancing
serviceMeshCheck: true
# configuring routes
routesConfig:
- id: traffic_mapping_http
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudr-dr/**
order: 1
- id: traffic_mapping_http_prov
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudr-dr-prov/**
order: 2
- id: traffic_mapping_http_mgmt
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudr-dr-mgm/**
order: 3
- id: traffic_mapping_http_udsf
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudsf-dr/**
order: 4
- id: traffic_mapping_http_group
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudr-group-id-map/**
order: 5
- id: traffic_mapping_http_group_prov
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /nudr-group-id-map-prov/**
order: 6
- id: traffic_mapping_http_slf_group_prov
uri: http://{{ .Release.Name }}-nudr-drservice:5001
path: /slf-group-prov/**
order: 7
egressgateway:
enabled: true
#fullnameOverride : 'ocudr-egress-gateway'
nfType: UDR
#global:
# dockerRegistry: reg-1:5000
deploymentEgressGateway:
image: ocudr/ocegress_gateway
imageTag: 1.7.7
pullPolicy: Always
initContainersImage:
# init Containers image name
name: configurationinit
# tag name of init Container image
tag: 1.2.0
# Pull Policy - Possible Values are:- Always, IfNotPresent, Never
pullPolicy: Always
updateContainersImage:
# update Containers image name
name: configurationupdate
# tag name of update Container image
tag: 1.2.0
# Pull Policy - Possible Values are:- Always, IfNotPresent, Never
pullPolicy: Always
# enable jaeger tracing
jaegerTracingEnabled: false
openTracing :
jaeger:
udpSender:
# udpsender host
host: "occne-tracer-jaeger-agent.occne-infra"
# udpsender port
port: 6831
probabilisticSampler: 0.5
# ---- Oauth Configuration - BEGIN ----
oauthClientEnabled: false
nrfAuthority: 10.75.224.7:8085
nfInstanceId: fe7d992b-0541-4c7d-ab84-c6d70b1b01b1
consumerPlmnMNC: 345
consumerPlmnMCC: 567
# ---- Oauth Configuration - END ----
minReplicas: 1
maxReplicas: 4
minAvailable: 1
# ---- HTTPS Configuration - BEGIN ----
initssl: false
enableOutgoingHttps: false
# Resource details
resources:
limits:
cpu: 3
memory: 4Gi
initServiceCpu: 1
initServiceMemory: 1Gi
updateServiceCpu: 1
updateServiceMemory: 1Gi
requests:
cpu: 3
memory: 4Gi
initServiceCpu: 1
initServiceMemory: 1Gi
updateServiceCpu: 1
updateServiceMemory: 1Gi
target:
averageCpuUtil: 80
deployment:
customExtension:
labels: {}
annotations: {}
service:
type: ClusterIP
customExtension:
labels: {}
annotations: {}
ssl:
tlsVersion: TLSv1.2
initialAlgorithm: RSA256
privateKey:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
rsa:
fileName: rsa_private_key_pkcs1.pem
ecdsa:
fileName: ecdsa_private_key_pkcs8.pem
certificate:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
rsa:
fileName: apigatewayrsa.cer
ecdsa:
fileName: apigatewayecdsa.cer
caBundle:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: caroot.cer
keyStorePassword:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: key.txt
trustStorePassword:
k8SecretName: ocudr-gateway-secret
k8NameSpace: ocudr
fileName: trust.txt
# ---- HTTPS Configuration - END ----
#Enable this if loadbalancing is to be done by egress instead of K8s
K8ServiceCheck: false
#Set the root log level
log:
level:
root: WARN
egress: INFO
oauth: INFO
nudr-diameterproxy:
enabled: false
image:
name: ocudr/nudr_diameterproxy
tag: 1.7.1
pullPolicy: Always
service:
http2enabled: "true"
type: ClusterIP
diameter:
type: LoadBalancer
port:
http: 5001
https: 5002
management: 9000
diameter: 6000
customExtension:
labels: {}
annotations: {}
deployment:
replicaCount: 2
customExtension:
labels: {}
annotations: {}
logging:
level:
root: "WARN"
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 3
memory: 4Gi
target:
averageCpuUtil: 80
minReplicas: 2
maxReplicas: 4
drservice:
port:
http: 5001
https: 5002
diameter:
realm: "oracle.com"
identity: "nudr.oracle.com"
strictParsing: false #strict parse message and AVP
IO:
threadCount: 0 # should not go beyond 2*CPU
queueSize: 0 # range [2048-8192] should be power of 2
messageBuffer:
threadCount: 0 # should not go beyond 2*CPU
queueSize: 0 # range [1024-4096] and default 1024/Low, 2048/Medium, 4096/High. should be power of 2
peer:
setting: |
reconnectDelay: 3
responseTimeout: 4
connectionTimeOut: 3
watchdogInterval: 6
transport: 'TCP'
reconnectLimit: 50
nodes: |
- name: 'seagull'
responseOnly: false
namespace: 'seagull1'
host: '10.75.185.158'
domain: 'svc.cluster.local'
port: 4096
realm: 'seagull1.com'
identity: 'seagull1a.seagull1.com'
clientNodes: |
- identity: 'seagull1a.seagull1.com'
realm: 'seagull1.com'
- identity: 'seagull1.com'
realm: 'seagull1.com'
Configuring User Parameters
The UDR microservices provide configuration options that you can set in the deployment values.yaml file.
Note:
The default value of some of the settings may change.
Note:
- NAME: is the release name used in helm install command
- NAMESPACE: is the namespace used in helm install command
- K8S_DOMAIN: is the default kubernetes domain (svc.cluster.local)
Default Helm Release Name: ocudr
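For example, with NAME=ocudr and NAMESPACE=myudr, the name-resolution-dependent defaults in the values file expand as follows:

```yaml
# <NAME>-ingressgateway.<NAMESPACE>.svc.<K8S_DOMAIN>
fqdn: "ocudr-ingressgateway.myudr.svc.cluster.local"
```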
Global Configuration: These parameters apply globally, across all the resources that the OCUDR helm chart creates.
The following table provides the parameters for global configurations.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
dockerRegistry | Docker registry from where the images will be pulled | ocudr-registry.us.oracle.com:5000 | Not applicable | |
mysql.dbServiceName | DB service to connect | mysql-connectivity-service.occne-infra | Not applicable | This is a CNE service used for db connection. Default name used on CNE is the same as configured. |
mysql.port | Port for DB Service Connection | 3306 | Not applicable | |
udrTracing.enable | Flag to enable udr tracing on Jaeger | false | true/false | |
udrTracing.host | Jaeger Service Name installed in CNE | occne-tracer-jaeger-collector.occne-infra | Not applicable | |
udrTracing.port | Jaeger Service Port installed in CNE | 14268 | Not applicable | |
dbenc.shavalue | Encryption Key size | 256 | 256 or 512 | |
serviceAccountName | Service account name | null | Not Applicable | The service account, role, and role bindings required for deployment must be created prior to installation. Use the created service account name here. |
egress.enabled | Flag to enable outgoing traffic through egress gateway | true | true/false | |
configServerEnable | Flag to enable config-server | true | true/false | |
initContainerEnable | Flag to disable the init container for config-server. It is not required because the pre-install hooks create the DB tables and also verify connectivity. | false | true/false | |
dbCredSecretName | DB Credential Secret Name | ocudr-secrets | Not Applicable | |
releaseDbName | Release Db Name | udr_release | Not Applicable | |
configServerFullNameOverride | Config Server Full Name Override | nudr-config-server | Not Applicable | |
udrServices | Services supported on the UDR deployment. This config decides the schema execution on the udrdb, which is done by the nudr-preinstall hook pod. | nudr-group-id-map | All/nudr-dr/nudr-group-id-map | This release is specifically for SLF, so the default value is nudr-group-id-map |
udsfEnable | Flag to enable UDSF services on the deployment | false | true/false | |
test.nfName | NF name on which the helm test is performed. For UDR, the default value is ocudr. Used as a suffix in the container name. | ocudr | Not applicable | |
test.image.name | Image name for the helm test container image | ocudr/nf_test | Not Applicable | |
test.image.tag | Image version tag for helm test | 1.7.1 | Not Applicable | |
test.config.logLevel | Log level for the helm test pod | WARN | Possible Values - WARN, INFO, DEBUG | |
test.config.timeout | Timeout value for the helm test operation. If exceeded, the helm test is considered a failure. | 120 | Range: 1-300 Unit: seconds | |
preInstall.image.name | Image name for the nudr-prehook pod, which takes care of DB and table creation for the UDR deployment. | ocudr/nudr_prehook | Not Applicable | |
preInstall.image.tag | Image version for the nudr-prehook pod image | 1.7.1 | Not Applicable | |
preInstall.config.logLevel | Log level for the pre-install hook pod | WARN | Possible Values - WARN, INFO, DEBUG | |
hookJobResources.limits.cpu | CPU limit for the Kubernetes hook/job pods created as part of UDR installation. Also applicable to the helm test job. | 2 | Not Applicable | |
hookJobResources.limits.memory | Memory limit for the Kubernetes hook/job pods created as part of UDR installation. Also applicable to the helm test job. | 2Gi | Not Applicable | |
hookJobResources.requests.cpu | CPU requests for the Kubernetes hook/job pods created as part of UDR installation. Also applicable to the helm test job. | 1 | Not Applicable | The cpu to be allocated for hooks during deployment |
hookJobResources.requests.memory | Memory requests for the Kubernetes hook/job pods created as part of UDR installation. Also applicable to the helm test job. | 1Gi | Not Applicable | The memory to be allocated for hooks during deployment |
customExtension.allResources.labels | Custom Labels that needs to be added to all the OCUDR kubernetes resources | null | Not Applicable | This can be used to add custom label(s) to all k8s resources that will be created by OCUDR helm chart. |
customExtension.allResources.annotations | Custom Annotations that need to be added to all the OCUDR kubernetes resources | null | Not Applicable. Note: ASM-related annotations need to be added under the ASM Specific Configuration section. | This can be used to add custom annotation(s) to all k8s resources that will be created by the OCUDR helm chart. |
customExtension.lbServices.labels | Custom Labels that need to be added to OCUDR Services that are considered as Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Load Balancer Type Services that will be created by the OCUDR helm chart. |
customExtension.lbServices.annotations | Custom Annotations that need to be added to OCUDR Services that are considered as Load Balancer type | null | Not Applicable | This can be used to add custom annotation(s) to all Load Balancer Type Services that will be created by the OCUDR helm chart. |
customExtension.lbDeployments.labels | Custom Labels that need to be added to OCUDR Deployments that are associated with a Service of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type. |
customExtension.lbDeployments.annotations | Custom Annotations that need to be added to OCUDR Deployments that are associated with a Service of Load Balancer type | null | Not Applicable. Note: ASM-related annotations need to be added under the ASM Specific Configuration section. | This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type. |
customExtension.nonlbServices.labels | Custom Labels that need to be added to OCUDR Services that are not of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all non-Load Balancer Type Services that will be created by the OCUDR helm chart. |
customExtension.nonlbServices.annotations | Custom Annotations that need to be added to OCUDR Services that are not of Load Balancer type | null | Not Applicable | This can be used to add custom annotation(s) to all non-Load Balancer Type Services that will be created by the OCUDR helm chart. |
customExtension.nonlbDeployments.labels | Custom Labels that need to be added to OCUDR Deployments that are associated with a Service that is not of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service that is not of Load Balancer type. |
customExtension.nonlbDeployments.annotations | Custom Annotations that need to be added to OCUDR Deployments that are associated with a Service that is not of Load Balancer type | null | Not Applicable. Note: ASM-related annotations need to be added under the ASM Specific Configuration section. | This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service that is not of Load Balancer type. |
k8sResource.container.prefix | Value that will be prefixed to all the container names of OCUDR. | null | Not Applicable | This value will be used as a prefix for all the container names of OCUDR. |
k8sResource.container.suffix | Value that will be suffixed to all the container names of OCUDR. | null | Not Applicable | This value will be used as a suffix for all the container names of OCUDR. |
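As a sketch of how the custom-extension parameters above fit together, the following override adds a hypothetical label to every resource and applies the Metallb address-pool annotation (from the ingress gateway defaults) to Load Balancer Services:

```yaml
global:
  customExtension:
    allResources:
      labels:
        environment: lab                              # hypothetical label
    lbServices:
      annotations:
        metallb.universe.tf/address-pool: signaling   # as used by the ingress gateway
```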
The following table provides the parameters for the nudr-drservice microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
image.name | Docker Image name | ocudr/nudr_datarepository_service | Not applicable | |
image.tag | Tag of Image | 1.7.1 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
subscriber.autocreate | Flag to enable auto creation of subscriber | true | true/false | This flag enables auto creation of a subscriber when data is created for a nonexistent subscriber. |
validate.smdata | Flag to enable correlation feature for smdata | false | true/false | This flag will control the correlation feature for smdata. This flag must be false if using v16.2.0 for PCF data. |
logging.level.root | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | Log level of the nudr-drservice pod |
deployment.replicaCount | Replicas of nudr-drservice pod | 2 | Not applicable | Number of nudr-drservice pods to be maintained by replica set created with deployment |
minReplicas | Minimum Replicas | 2 | Not applicable | Minimum number of pods |
maxReplicas | Maximum Replicas | 8 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | The kubernetes service type for exposing the UDR deployment. Note: It is suggested to always keep this set to ClusterIP (the default value). |
service.port.http | HTTP port | 5001 | Not applicable | The http port to be used in nudr-drservice service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-drservice service |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-drservice service |
resources.requests.cpu | CPU allotment for the nudr-drservice pod | 4 | Not applicable | The cpu to be allocated for the nudr-drservice pod during deployment |
resources.requests.memory | Memory allotment for the nudr-drservice pod | 4Gi | Not applicable | The memory to be allocated for the nudr-drservice pod during deployment |
resources.limits.cpu | CPU allotment limit | 4 | Not applicable | |
resources.limits.memory | Memory allotment limit | 4Gi | Not applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
notify.port.http | HTTP port on which notify service is running | 5001 | Not applicable | |
notify.port.https | HTTPS port on which notify service is running | 5002 | Not applicable | |
hikari.poolsize | Mysql Connection pool size | 25 | Not applicable | The hikari pool connection size to be created at start up |
vsaLevel | The data level at which the VSA holding the 4G policy data is added. | smpolicy | Not applicable | |
tracingEnabled | Flag to enable/disable jaeger tracing for nudr-drservice | false | true/false | |
service.customExtension.labels | Custom Labels that needs to be added to nudr-drservice specific Service. | null | Not Applicable | This can be used to add custom label(s) to nudr-drservice Service. |
service.customExtension.annotations | Custom Annotations that needs to be added to nudr-drservice specific Services. | null | Not Applicable | This can be used to add custom annotation(s) to nudr-drservice Service. |
deployment.customExtension.labels | Custom Labels that needs to be added to nudr-drservice specific deployment. | null | Not Applicable | This can be used to add custom label(s) to nudr-drservice Deployment. |
deployment.customExtension.annotations | Custom Annotations that needs to be added to nudr-drservice specific deployment. | null | Not Applicable | This can be used to add custom annotation(s) to nudr-drservice deployment. |
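Combining a few of the parameters above, a sketch of a nudr-drservice override (the values shown are illustrative, not recommendations):

```yaml
nudr-drservice:
  deployment:
    replicaCount: 3     # illustrative; default is 2
  logging:
    level:
      root: "INFO"      # illustrative; default is "WARN"
  hikari:
    poolsize: "25"
  minReplicas: 3
  maxReplicas: 8
```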
The following table provides the parameters for the nudr-notify-service microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag for enabling or disabling nudr-notify-service | false | true/false | For SLF deployments, this microservice must be disabled. |
image.name | Docker Image name | ocudr/nudr_notify_service | Not applicable | |
image.tag | Tag of Image | 1.7.1 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
notification.retrycount | Number of notification attempts | 3 | Range: 1 - 10 | Number of notification attempts to be made in case of notification failures. Whether a retry is done depends on the notification.retryerrorcodes configuration. |
notification.retryinterval | Retry interval for notifications | 5 | Range: 1 - 60 Unit: Seconds | The retry interval, in seconds, for notifications in case of failure. Whether a retry is done depends on the notification.retryerrorcodes configuration. |
notification.retryerrorcodes | Notification failure codes eligible for retry | "400,429,500,503" | Valid HTTP status codes, comma separated | Comma-separated error codes. Failures with these error codes are eligible for notification retries. |
hikari.poolsize | Mysql Connection pool size | 10 | Not applicable | The hikari pool connection size to be created at start up |
tracingEnabled | Flag to enable/disable jaeger tracing for nudr-notify-service | false | true/false | |
http.proxy.port | Port to connect to egress gateway | 8080 | Not applicable | |
logging.level.root | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | Log level of the notify service pod |
deployment.replicaCount | Replicas of nudr-notify-service pod | 2 | Not applicable | Number of nudr-notify-service pods to be maintained by replica set created with deployment |
minReplicas | Minimum Replicas | 2 | Not applicable | Minimum number of pods |
maxReplicas | Maximum Replicas | 4 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag | true | true/false | This is a read only parameter. Do not change this value |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | The kubernetes service type for exposing the UDR deployment. Note: It is suggested to always keep this set to ClusterIP (the default value). |
service.port.http | HTTP port | 5001 | Not applicable | The http port to be used in notify service to receive signals from nudr-notify-service pod. |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used in notify service to receive signals from nudr-notify-service pod. |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for notify service. |
resources.requests.cpu | Cpu Allotment for nudr-notify-service pod | 3 | Not applicable | The cpu to be allocated for notify service pod during deployment |
resources.requests.memory | Memory allotment for nudr-notify-service pod | 3Gi | Not applicable | The memory to be allocated for nudr-notify-service pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 3Gi | Not applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
service.customExtension.labels | Custom Labels that need to be added to the nudr-notify-service specific Service. | null | Not Applicable | This can be used to add custom label(s) to the nudr-notify-service Service. |
service.customExtension.annotations | Custom Annotations that needs to be added to nudr-notify-service specific services. | null | Not Applicable | This can be used to add custom annotation(s) to nudr-notify-service Service. |
deployment.customExtension.labels | Custom Labels that needs to be added to nudr-notify-service specific deployment. | null | Not Applicable | This can be used to add custom label(s) to nudr-notify-service deployment. |
deployment.customExtension.annotations | Custom Annotations that needs to be added to nudr-notify-service specific deployment. | null | Not Applicable | This can be used to add custom annotation(s) to nudr-notify-service deployment. |
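For a non-SLF deployment that requires notifications, a sketch enabling the notify service and tuning its retry behavior (the values shown are illustrative):

```yaml
nudr-notify-service:
  enabled: true                      # keep false for SLF deployments
  notification:
    retrycount: "5"                  # illustrative; default is "3"
    retryinterval: "10"              # seconds; illustrative; default is "5"
    retryerrorcodes: "429,500,503"   # illustrative subset of the default list
```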
The following table provides the parameters for the nudr-nrf-client-service microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | flag for enabling or disabling nudr-nrf-client-service | true | true/false | |
host.baseurl | NRF URL for registration | http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances | Not applicable | URL used by UDR to connect and register with NRF |
host.proxy | Proxy setting | NULL | nrfClient.host | Proxy setting, if required, to connect to NRF |
ssl | SSL flag | false | true/false | SSL flag to enable SSL on the UDR NRF client pod |
logging.level.root | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | Log level of the UDR NRF client pod |
image.name | Docker Image name | ocudr/nudr_nrf_client_service | Not applicable | |
image.tag | Tag of Image | 1.7.1 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
heartBeatTimer | Heart beat timer | 90 | Unit: Seconds | |
udrGroupId | Group ID of UDR | udr-1 | Not applicable | |
capacityMultiplier | Capacity of UDR | 500 | Not applicable | Capacity multiplier of UDR based on number of UDR pods running |
supirange | SUPI range supported by UDR | [{"start": "10000000000", "end": "20000000000"}] | Valid start and end SUPI range | |
priority | Priority | 10 | Not applicable | Priority to be sent in the registration request |
fqdn | UDR FQDN | ocudr-ingressgateway.myudr.svc.cluster.local | Not Applicable | FQDN to be used for registering with NRF so that other NFs can connect to UDR. Note: Be cautious in updating this value; consider the Helm release name, the namespace used for the UDR deployment, and the name resolution settings in Kubernetes. |
gpsirange | GPSI range supported by UDR | [{"start": "10000000000", "end": "20000000000"}] | Valid start and end GPSI range | |
livenessProbeMaxRetry | Maximum retries on liveness probe failure | 5 | Not applicable | Change this based on how many times the liveness probe should be retried on failure |
udrMasterIpv4 | Master IP of the node on which UDR is deployed | 10.0.0.0 | Valid IPv4 address of the deployment master | udrMasterIpv4 is used to send the IPv4 address to NRF during registration. |
plmnvalues | PLMN value range supported | [{"mnc": "14", "mcc": "310"}] | Any supported PLMN value range | PLMN values are sent to NRF during registration from UDR. |
scheme | Scheme supported by UDR | http | Can be changed to https | Scheme sent to NRF during registration |
resources.requests.cpu | CPU allotment for nudr-nrf-client-service pod | 1 | Not applicable | The CPU to be allocated for the NRF client service pod during deployment |
resources.requests.memory | Memory allotment for nudr-nrf-client-service pod | 2Gi | Not applicable | The memory to be allocated for the NRF client service pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 1 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable | |
http.proxy.port | Port to connect egress gateway | 8080 | Not applicable | |
service.customExtension.labels | Custom labels that need to be added to the nudr-nrf-client specific service. | null | Not Applicable | This can be used to add custom label(s) to the nudr-nrf-client service. |
service.customExtension.annotations | Custom annotations that need to be added to the nudr-nrf-client specific services. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-nrf-client service. |
deployment.customExtension.labels | Custom labels that need to be added to the nudr-nrf-client specific deployment. | null | Not Applicable | This can be used to add custom label(s) to the nudr-nrf-client deployment. |
deployment.customExtension.annotations | Custom annotations that need to be added to the nudr-nrf-client specific deployment. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-nrf-client deployment. Note: ASM-related annotations are to be added under the ASM Specific Configuration section. |
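The registration-related parameters above determine what UDR sends to NRF. A sketch of how they might appear in ocudr-custom-values.yaml follows; the `nudr-nrf-client-service` section name and the exact quoting of the range values are assumptions, and all values are the documented defaults:

```yaml
# Hypothetical excerpt; the section name and value quoting are assumptions.
nudr-nrf-client-service:
  enabled: true
  host:
    baseurl: "http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances"
  heartBeatTimer: 90          # seconds
  udrGroupId: "udr-1"
  capacityMultiplier: 500     # capacity multiplier based on number of UDR pods
  supirange: [{"start": "10000000000", "end": "20000000000"}]
  gpsirange: [{"start": "10000000000", "end": "20000000000"}]
  plmnvalues: [{"mnc": "14", "mcc": "310"}]
  scheme: http                # scheme sent to NRF during registration
  fqdn: ocudr-ingressgateway.myudr.svc.cluster.local
  udrMasterIpv4: 10.0.0.0     # replace with the actual master IP of the deployment
```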
The following table provides the parameters for the nudr-config microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
deployment.customExtension.annotations | Custom annotations that need to be added to the nudr-config specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-config Deployment. |
deployment.customExtension.labels | Custom labels that need to be added to the nudr-config specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the nudr-config Deployment. |
deployment.replicaCount | Replicas of nudr-config pod | 1 | Not applicable | Number of nudr-config pods to be maintained by replica set created with deployment |
image.name | Docker Image name | ocudr/nudr_config | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
image.tag | Tag of Image | 1.7.1 | Not applicable | |
logging.level.root | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | Log level of the nudr-config pod |
maxReplicas | Maximum Replicas | 1 | Not applicable | Maximum number of pods |
minReplicas | Minimum Replicas | 1 | Not applicable | Minimum number of pods |
resources.limits.cpu | Cpu allotment limitation | 2 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable | |
resources.requests.cpu | CPU allotment for nudr-config pod | 2 | Not applicable | The CPU to be allocated for the nudr-config pod during deployment |
resources.requests.memory | Memory allotment for nudr-config pod | 2Gi | Not applicable | The memory to be allocated for the nudr-config pod during deployment |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
service.customExtension.annotations | Custom annotations that need to be added to the nudr-config specific Services. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-config Service. |
service.customExtension.labels | Custom labels that need to be added to the nudr-config specific Service. | null | Not applicable | This can be used to add custom label(s) to the nudr-config Service. |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
service.port.http | HTTP port | 5001 | Not applicable | The http port to be used in nudr-config service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-config service |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-config service |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: Suggested to be set to ClusterIP (the default value) always. |
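A minimal nudr-config override might look like the sketch below; the top-level `nudr-config` key is an assumption about the chart's section name, and the values are the documented defaults:

```yaml
# Hypothetical excerpt; the section name nudr-config is an assumption.
nudr-config:
  deployment:
    replicaCount: 1
  service:
    type: ClusterIP     # suggested to keep the default
    http2enabled: true
    port:
      http: 5001
      https: 5002
      management: 9000  # actuator management port
  logging:
    level:
      root: WARN
```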
The following table provides the parameters for the nudr-config-server microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
envLoggingLevelApp | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | Log level of the nudr-config-server pod |
replicas | Replicas of nudr-config-server pod | 1 | Not applicable | Number of nudr-config-server pods to be maintained by replica set created with deployment |
resources.requests.cpu | CPU allotment for nudr-config-server pod | 2 | Not applicable | The CPU to be allocated for the nudr-config-server pod during deployment |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: Suggested to be set to ClusterIP (the default value) always. |
resources.requests.memory | Memory allotment for nudr-config-server pod | | Not applicable | The memory to be allocated for the nudr-config-server pod during deployment |
enabled | Flag to enable/disable nudr-config-server service | true | true/false | |
global.nfName | NF name used as part of the config server service name. | nudr | Not applicable | |
global.imageServiceDetector | Image Service Detector for config-server init container | ocudr/readiness-detector:latest | Not Applicable | |
global.envJaegerAgentHost | Host FQDN for Jaeger agent service for config-server tracing | ' ' | Not Applicable | |
global.envJaegerAgentPort | Port for Connection to Jaeger agent for config-server tracing | 6831 | Valid Port | |
resources.limits.cpu | Cpu allotment limitation | 2 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable |
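A sketch of the corresponding nudr-config-server overrides; the `nudr-config-server` section name is an assumption, and the values are the documented defaults (the memory request default is not stated in this table, so it is omitted here):

```yaml
# Hypothetical excerpt; the section name nudr-config-server is an assumption.
nudr-config-server:
  enabled: true
  replicas: 1
  envLoggingLevelApp: WARN
  resources:
    requests:
      cpu: 2
    limits:
      cpu: 2
      memory: 2Gi
```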
The following table provides the parameters for the ocudr-ingressgateway microservice (API Gateway).
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
global.type | ocudr-ingressgateway service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | |
global.metalLbIpAllocationEnabled | Enable or disable Address Pool for Metallb | true | true/false | |
global.metalLbIpAllocationAnnotation | Address Pool for Metallb | metallb.universe.tf/address-pool: signaling | Not applicable | |
global.staticNodePortEnabled | If a static node port needs to be set, set this flag to true and provide a value for staticNodePort | false | true/false | |
global.publicHttpSignalingPort | Port on which ingressgateway listens for incoming HTTP requests. | 80 | Valid Port | |
global.publicHttpsSignallingPort | Port on which ingressgateway listens for incoming HTTPS requests. | 443 | Valid Port | |
image.name | Docker image name | ocudr/ocingress_gateway | Not applicable | |
image.tag | Image version tag | 1.7.7 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
initContainersImage.name | Docker Image name | ocudr/configurationinit | Not applicable | |
initContainersImage.tag | Image version tag | 1.2.0 | Not applicable | |
initContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
updateContainersImage.name | Docker Image name | ocudr/configurationupdate | Not applicable | |
updateContainersImage.tag | Image version tag | 1.2.0 | Not applicable | |
updateContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
service.ssl.tlsVersion | TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.ssl.privateKey.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.privateKey.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.privateKey.rsa.fileName | rsa private key stored in the secret | rsa_private_key_pkcs1.pem | Not applicable | |
service.ssl.privateKey.ecdsa.fileName | ecdsa private key stored in the secret | ecdsa_private_key_pkcs8.pem | Not applicable | |
service.ssl.certificate.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.certificate.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.certificate.rsa.fileName | rsa certificate stored in the secret | apigatewayrsa.cer | Not applicable | |
service.ssl.certificate.ecdsa.fileName | ecdsa certificate stored in the secret | apigatewayecdsa.cer | Not applicable | |
service.ssl.caBundle.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.caBundle.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.caBundle.fileName | ca Bundle stored in the secret | caroot.cer | Not applicable | |
service.ssl.keyStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.keyStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.keyStorePassword.fileName | keyStore password stored in the secret | key.txt | Not applicable | |
service.ssl.trustStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.trustStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | Not applicable | |
service.initialAlgorithm | Algorithm to be used. ES256 can also be used, but the corresponding certificates must be used. | RSA256 | RSA256/ES256 | |
resources.limits.cpu | Cpu allotment limitation | 5 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | |
resources.limits.initServiceCpu | Maximum amount of CPU that Kubernetes will allow the ingress-gateway init container to use. | 1 | Not Applicable | |
resources.limits.initServiceMemory | Memory Limit for ingress-gateway init container | 1Gi | Not Applicable | |
resources.limits.updateServiceCpu | Maximum amount of CPU that Kubernetes will allow the ingress-gateway update container to use. | 1 | Not Applicable | |
resources.limits.updateServiceMemory | Memory Limit for ingress-gateway update container | 1Gi | Not Applicable | |
resources.requests.cpu | Cpu allotment for ocudr-endpoint pod | 5 | Not Applicable | |
resources.requests.memory | Memory allotment for ocudr-endpoint pod | 4Gi | Not Applicable | |
resources.requests.initServiceCpu | The amount of CPU that the system guarantees for the ingress-gateway init container; Kubernetes uses this value to decide on which node to place the pod | | Not Applicable | |
resources.requests.initServiceMemory | The amount of memory that the system guarantees for the ingress-gateway init container; Kubernetes uses this value to decide on which node to place the pod | | Not Applicable | |
resources.requests.updateServiceCpu | The amount of CPU that the system guarantees for the ingress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceMemory | The amount of memory that the system guarantees for the ingress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | |
minAvailable | Number of pods always running | 2 | Not Applicable | |
minReplicas | Min replicas to scale to maintain an average CPU utilization | 2 | Not applicable | |
maxReplicas | Max replicas to scale to maintain an average CPU utilization | 5 | Not applicable | |
log.level.root | Logs to be shown on ocudr-endpoint pod | WARN | valid level | |
log.level.ingress | Logs to be shown on ocudr-ingressgateway pod for ingress related flows | INFO | valid level | |
log.level.oauth | Logs to be shown on ocudr-ingressgateway pod for oauth related flows | INFO | valid level | |
initssl | To Initialize SSL related infrastructure in init/update container | false | Not Applicable | |
jaegerTracingEnabled | Enable/Disable Jaeger Tracing | false | true/false | |
openTracing.jaeger.udpSender.host | Jaeger agent service FQDN | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | |
openTracing.jaeger.udpSender.port | Jaeger agent service UDP port | 6831 | Valid Port | |
openTracing.jaeger.probabilisticSampler | Probabilistic sampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | The sampler makes a random sampling decision with the given probability. For example, if the value set is 0.1, approximately 1 in 10 traces is sampled |
| Supported cipher suites for SSL | | Not applicable | |
oauthValidatorEnabled | OAuth validator configuration | false | true/false | |
enableIncomingHttp | Enables accepting HTTP requests | true | true/false | |
enableIncomingHttps | Enables accepting HTTPS requests | false | true/false | |
enableOutgoingHttps | Enables sending HTTPS requests | false | true/false | |
maxRequestsQueuedPerDestination | Queue Size at the ocudr-endpoint pod | 5000 | Not Applicable | |
maxConnectionsPerIp | Connections from endpoint to other microServices | 10 | Not Applicable | |
serviceMeshCheck | If false, load balancing is handled by the Ingress gateway; if true, it is handled by the service mesh | true | true/false | |
routesConfig | Routes configured to connect to the different microservices of UDR | | Not Applicable | |
service.customExtension.labels | Custom labels that need to be added to the ingressgateway specific service. | null | Not Applicable | This can be used to add custom label(s) to the ingressgateway service. |
service.customExtension.annotations | Custom annotations that need to be added to the ingressgateway specific services. | null | Not Applicable | This can be used to add custom annotation(s) to the ingressgateway service. |
deployment.customExtension.labels | Custom labels that need to be added to the ingressgateway specific deployment. | null | Not Applicable | This can be used to add custom label(s) to the ingressgateway deployment. |
deployment.customExtension.annotations | Custom annotations that need to be added to the ingressgateway specific deployment. | null | Not Applicable | This can be used to add custom annotation(s) to the ingressgateway deployment. |
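The SSL parameters above all point at a Kubernetes secret. The sketch below shows how an HTTPS-enabled ingress gateway configuration might be assembled in ocudr-custom-values.yaml; the top-level `ocudr-ingressgateway` key is an assumption about the chart's section name, and initssl/enableIncomingHttps are shown as true for illustration (both default to false):

```yaml
# Hypothetical excerpt; the section name ocudr-ingressgateway is an assumption.
ocudr-ingressgateway:
  global:
    publicHttpSignalingPort: 80
    publicHttpsSignallingPort: 443
  initssl: true              # default false; initializes SSL in the init/update containers
  enableIncomingHttps: true  # default false; accept HTTPS requests
  service:
    ssl:
      tlsVersion: TLSv1.2
      privateKey:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa:
          fileName: rsa_private_key_pkcs1.pem
      certificate:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa:
          fileName: apigatewayrsa.cer
```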
The following table provides the parameters for the ocudr-egressgateway microservice (API Gateway).
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Configuration flag to enable/disable egress gateway | true | true/false | |
image.name | Docker image name | ocudr/ocegress_gateway | Not applicable | |
image.tag | Image version tag | 1.7.7 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
initContainersImage.name | Docker Image name | ocudr/configurationinit | Not applicable | |
initContainersImage.tag | Image version tag | 1.2.0 | Not applicable | |
initContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
updateContainersImage.name | Docker Image name | ocudr/configurationupdate | Not applicable | |
updateContainersImage.tag | Image version tag | 1.2.0 | Not applicable | |
updateContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | |
resources.limits.initServiceCpu | Maximum amount of CPU that Kubernetes will allow the egress-gateway init container to use. | 1 | Not applicable | |
resources.limits.initServiceMemory | Memory Limit for egress-gateway init container | 1Gi | Not applicable | |
resources.limits.updateServiceCpu | Maximum amount of CPU that Kubernetes will allow the egress-gateway update container to use. | 1 | Not applicable | |
resources.limits.updateServiceMemory | Memory Limit for egress-gateway update container | 1Gi | Not applicable | |
resources.requests.cpu | Cpu allotment for ocudr-egressgateway pod | 3 | Not applicable | |
resources.requests.memory | Memory allotment for ocudr-egressgateway pod | 4Gi | Not applicable | |
resources.requests.initServiceCpu | The amount of CPU that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide on which node to place the pod | | Not Applicable | |
resources.requests.initServiceMemory | The amount of memory that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide on which node to place the pod | | Not Applicable | |
resources.requests.updateServiceCpu | The amount of CPU that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceMemory | The amount of memory that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not applicable | |
service.ssl.tlsVersion | TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.initialAlgorithm | Algorithm to be used. ES256 can also be used, but the corresponding certificates must be used. | RSA256 | RSA256/ES256 | |
service.ssl.privateKey.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.privateKey.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.privateKey.rsa.fileName | rsa private key stored in the secret | rsa_private_key_pkcs1.pem | Not applicable | |
service.ssl.privateKey.ecdsa.fileName | ecdsa private key stored in the secret | ecdsa_private_key_pkcs8.pem | Not applicable | |
service.ssl.certificate.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.certificate.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.certificate.rsa.fileName | rsa certificate stored in the secret | apigatewayrsa.cer | Not applicable | |
service.ssl.certificate.ecdsa.fileName | ecdsa certificate stored in the secret | apigatewayecdsa.cer | Not applicable | |
service.ssl.caBundle.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.caBundle.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.caBundle.fileName | ca Bundle stored in the secret | caroot.cer | Not applicable | |
service.ssl.keyStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.keyStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.keyStorePassword.fileName | keyStore password stored in the secret | key.txt | Not applicable | |
service.ssl.trustStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.trustStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | Not applicable | |
minAvailable | Number of pods always running | 1 | Not Applicable | |
minReplicas | Min replicas to scale to maintain an average CPU utilization | 1 | Not applicable | |
maxReplicas | Max replicas to scale to maintain an average CPU utilization | 4 | Not applicable | |
log.level.root | Logs to be shown on ocudr-egressgateway pod | WARN | valid level | |
log.level.egress | Logs to be shown on ocudr-egressgateway pod for egress related flows | INFO | valid level | |
log.level.oauth | Logs to be shown on ocudr-egressgateway pod for oauth related flows | INFO | valid level | |
fullnameOverride | Name to be used for deployment | ocudr-egressgateway | Not applicable | This config is commented by default. |
initssl | To Initialize SSL related infrastructure in init/update container | false | Not Applicable | |
jaegerTracingEnabled | Enable/Disable Jaeger Tracing | false | true/false | |
openTracing.jaeger.udpSender.host | Jaeger agent service FQDN | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | |
openTracing.jaeger.udpSender.port | Jaeger agent service UDP port | 6831 | Valid Port | |
openTracing.jaeger.probabilisticSampler | Probabilistic sampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | The sampler makes a random sampling decision with the given probability. For example, if the value set is 0.1, approximately 1 in 10 traces is sampled. |
enableOutgoingHttps | Enables sending HTTPS requests | false | true/false | |
oauthClientEnabled | Enable if OAuth is required | false | true/false | Enable based on OAuth configuration |
nrfAuthority | NRF Authority configuration | 10.75.224.7:8085 | Not Applicable | |
nfInstanceId | NF Instance ID | fe7d992b-0541-4c7d-ab84-c6d70b1b01b1 | Not Applicable | |
consumerPlmnMNC | Consumer PLMN MNC | 345 | Not Applicable | |
consumerPlmnMCC | Consumer PLMN MCC | 567 | Not Applicable | |
k8sServiceCheck | Enable this if loadbalancing is to be done by egress instead of K8s | false | true/false | |
service.customExtension.labels | Custom labels that need to be added to the egressgateway specific Service. | null | Not applicable | This can be used to add custom label(s) to the egressgateway Service. |
service.customExtension.annotations | Custom annotations that need to be added to the egressgateway specific Services. | null | Not applicable | This can be used to add custom annotation(s) to the egressgateway Service. |
deployment.customExtension.labels | Custom labels that need to be added to the egressgateway specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the egressgateway Deployment. |
deployment.customExtension.annotations | Custom annotations that need to be added to the egressgateway specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the egressgateway Deployment. |
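When OAuth toward NRF is required, several of the egress gateway parameters above are set together. A sketch follows; the top-level `ocudr-egressgateway` key is an assumption about the chart's section name, and all values are the documented defaults except oauthClientEnabled, which is shown enabled for illustration:

```yaml
# Hypothetical excerpt; the section name ocudr-egressgateway is an assumption.
ocudr-egressgateway:
  enabled: true
  enableOutgoingHttps: false
  oauthClientEnabled: true   # default is false; enabled here for illustration
  nrfAuthority: "10.75.224.7:8085"
  nfInstanceId: "fe7d992b-0541-4c7d-ab84-c6d70b1b01b1"
  consumerPlmnMNC: 345
  consumerPlmnMCC: 567
```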
The following table provides the parameters for the nudr-diameterproxy microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable the service. | false | true/false | Used to enable or disable the service. |
image.name | Docker Image name | ocudr/nudr_diameterproxy | Not applicable | |
image.tag | Tag of Image | 1.7.1 | Not applicable | |
image.pullPolicy | This setting indicates whether the image needs to be pulled | Always | Possible Values - Always, IfNotPresent, Never | |
logging.level.root | Log Level | WARN | Possible Values - WARN, INFO, DEBUG | The log level of the nudr-diameterproxy pod |
deployment.replicaCount | Replicas of the nudr-diameterproxy pod | 2 | Not applicable | Number of nudr-diameterproxy pods to be maintained by the replica set created with the deployment |
minReplicas | min replicas of nudr-diameterproxy | 2 | Not applicable | Minimum number of pods |
maxReplicas | max replicas of nudr-diameterproxy | 4 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP, NodePort, LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: Suggested to be set to ClusterIP (the default value) always. |
service.diameter.type | Diameter service type | LoadBalancer | Possible Values - ClusterIP, NodePort, LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Diameter traffic goes via the diameter-endpoint, not via the ingress-gateway. |
service.port.http | HTTP port | 5001 | Not applicable | The HTTP port to be used in nudr-diameterproxy service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-diameterproxy service |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-diameterproxy service |
service.port.diameter | Diameter port | 6000 | Not applicable | The diameter port to be used for nudr-diameterproxy service |
resources.requests.cpu | Cpu Allotment for nudr-diameterproxy pod | 3 | Not applicable | The CPU to be allocated for nudr-diameterproxy pod during deployment |
resources.requests.memory | Memory allotment for nudr-diameterproxy pod | | Not applicable | The memory to be allocated for the nudr-diameterproxy pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | The CPU to be max allocated for nudr-diameterproxy pod |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | The memory to be max allocated for nudr-diameterproxy pod |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
drservice.port.http | HTTP port on which dr service is running | 5001 | Not Applicable | dr-service port is required in diameterproxy application |
drservice.port.https | HTTPS port on which dr service is running | 5002 | Not Applicable | dr-service port is required in diameterproxy application |
diameter.realm | Realm of the nudr-diameterproxy microservice | oracle.com | String value | Host realm of the diameterproxy |
diameter.identity | FQDN of the diameterproxy in Diameter messages | nudr.oracle.com | String value | Identity of the diameterproxy |
diameter.strictParsing | Strict parsing of Diameter AVPs and messages | false | true/false | |
diameter.IO.threadCount | Number of threads for IO operations | 0 | 0 to 2*CPU | Number of threads to handle IO operations in the diameterproxy pod. If threadCount is 0, the application chooses the thread count based on the pod profile size. |
diameter.IO.queueSize | Queue size for IO | 0 | 2048 to 8192 | The count should be a power of 2. If queueSize is 0, the application chooses the queue size based on the pod profile size. |
diameter.messageBuffer.threadCount | Number of threads to process messages | 0 | 0 to 2*CPU | Number of threads to handle messages in the diameterproxy pod. If threadCount is 0, the application chooses the thread count based on the pod profile size. |
diameter.peer.setting | Diameter peer settings | reconnectDelay: 3, responseTimeout: 4, connectionTimeOut: 3, watchdogInterval: 6, transport: 'TCP', reconnectLimit: 50 | Not Applicable | |
diameter.peer.nodes | Diameter server peer nodes list | - name: 'seagull', responseOnly: false, namespace: 'seagull1', host: '10.75.185.158', domain: 'svc.cluster.local', port: 4096, realm: 'seagull1.com', identity: 'seagull1a.seagull1.com' | Not applicable | The Diameter server peer node information. It should be a YAML list; the default values are a template showing how to add peer nodes. |
diameter.peer.clientNodes | Diameter client peers | - identity: 'seagull1a.seagull1.com', realm: 'seagull1.com'; - identity: 'seagull1.com', realm: 'seagull1.com' | Not applicable | The Diameter client node information. It should be a YAML list; the default values are a template showing how to add peer nodes. |
service.customExtension.labels | Custom labels that need to be added to the nudr-diameterproxy specific Service. | null | Not applicable | This can be used to add custom label(s) to the nudr-diameterproxy Service. |
service.customExtension.annotations | Custom annotations that need to be added to the nudr-diameterproxy specific Services. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-diameterproxy Service. |
deployment.customExtension.labels | Custom labels that need to be added to the nudr-diameterproxy specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the nudr-diameterproxy Deployment. |
deployment.customExtension.annotations | Custom annotations that need to be added to the nudr-diameterproxy specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-diameterproxy Deployment. |
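The diameter.peer.nodes and diameter.peer.clientNodes defaults above are templates, and both must be YAML lists. The sketch below shows a nudr-diameterproxy override using those template values; the top-level `nudr-diameterproxy` key is an assumption about the chart's section name:

```yaml
# Hypothetical excerpt; the section name nudr-diameterproxy is an assumption.
nudr-diameterproxy:
  enabled: true
  diameter:
    realm: oracle.com          # host realm of the diameterproxy
    identity: nudr.oracle.com  # FQDN used in Diameter messages
    peer:
      nodes:                   # server peer nodes: must be a YAML list
        - name: 'seagull'
          responseOnly: false
          namespace: 'seagull1'
          host: '10.75.185.158'
          domain: 'svc.cluster.local'
          port: 4096
          realm: 'seagull1.com'
          identity: 'seagull1a.seagull1.com'
      clientNodes:             # client peers: must be a YAML list
        - identity: 'seagull1a.seagull1.com'
          realm: 'seagull1.com'
        - identity: 'seagull1.com'
          realm: 'seagull1.com'
```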