3 Customizing and Configuring Unified Data Repository

This section provides information on customizing and configuring Unified Data Repository.

Customizing Unified Data Repository

You can customize the Unified Data Repository deployment by overriding the default values of various configurable parameters.

In the ocudr-custom-values.yaml file configuration shown below, the MySQL host is customized as an example.

You can prepare the ocudr-custom-values.yaml file to customize the parameters.
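A Helm custom values file only needs the keys you are changing; everything else falls back to the chart defaults. The following is a minimal sketch of such an override file (the registry host, chart reference, and namespace are placeholders, not product defaults):

# Minimal ocudr-custom-values.yaml sketch: override only the registry and MySQL connectivity
global:
  dockerRegistry: my-registry.example.com:5000              # placeholder registry host
  mysql:
    dbServiceName: "mysql-connectivity-service.occne-infra" # read-only parameter; keep the default
    port: "3306"
# The file is then applied at install time, for example:
# helm install ocudr <udr-chart> --namespace <namespace> -f ocudr-custom-values.yaml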

The following is an example of the Unified Data Repository customization file.

Note:

All the configurable parameters are described in the Configuring User Parameters section.

# Copyright 2019 (C), Oracle and/or its affiliates. All rights reserved.

 

global:

  dockerRegistry: ocudr-registry.us.oracle.com:5000

 

  # MYSQL Connectivity Configurations

  mysql:

    dbServiceName: "mysql-connectivity-service.occne-infra"  #This is a read only parameter. Use the default value.

    port: "3306"

 

  # Jaeger tracing Configurations

  udrTracing:

    enable: false

    host: "occne-tracer-jaeger-collector.occne-infra"

    port: 14268

 

  dbenc:

    shavalue: 256

 

  # Configure customer created service accounts

  serviceAccountName:

 

  # Configuration to enable UDR egress traffic through EGW

  egress:

    enabled: true

 

  # Config server related configurations

  configServerEnable: true

  initContainerEnable: false

  dbCredSecretName: 'ocudr-secrets'

  configServerFullNameOverride: nudr-config-server

 

  # Configuration that decides which services the deployment provides

  udrServices: "All"

 

  # Enable to register with NRF for UDSF service

  udsfEnable: false

 

  # port on which UDR's API-Gateway service is exposed

  # If httpsEnabled is false, this Port would be HTTP/2.0 Port (unsecured)

  # If httpsEnabled is true, this Port would be HTTPS/2.0 Port (secured SSL)

  publicHttpSignalingPort: 80

  publicHttpsSignallingPort: 443

 

  # Nf Instance ID for UDR, same will be registered with NRF

  nfInstanceId: 5a7bd676-ceeb-44bb-95e0-f6a55a328b03

 

  # Helm test hook related configurations

  test:

    nfName: ocudr

    image:

      name: ocudr/nf_test

      tag: 1.8.0

    config:

      logLevel: WARN

      timeout: 120      #Beyond this duration, the helm test is considered a failure

 

  # Pre Install Hook configurations. Used for DB Creation

  preInstall:

    image:

      name: ocudr/nudr_prehook

      tag: 1.8.0

    config:

      logLevel: WARN

 

  # Pre Upgrade Hook configurations. Used for DB Schema Upgrade

  preUpgrade:

    image:

      name: ocudr/nudr_pre_upgrade_hook

      tag: 1.8.0

    config:

      logLevel: WARN

 

  # Resource allocation for all UDR hooks

  hookJobResources:

    limits:

      cpu: 2

      memory: 2Gi

    requests:

      cpu: 1

      memory: 1Gi

 

  #**************************************************************************

 

  # ********  Sub-Section Start: Custom Extension Global Parameters ********

  #**************************************************************************

 

  customExtension:

    # Applicable for all resources created as part of helm installation

    allResources:

      labels: {}

      annotations: {}

 

    # Applicable for all load balancer type services

    lbServices:

      labels: {}

      annotations: {}

 

    # Applicable for all load balancer type deployments

    lbDeployments:

      labels: {}

      annotations: {}

 

    # Applicable for all non load balancer type services

    nonlbServices:

      labels: {}

      annotations: {}

 

    # Applicable for all non load balancer type deployments

    nonlbDeployments:

      labels: {}

      annotations: {}

 

  # ********  Sub-Section End: Custom Extension Global Parameters *********

  #**************************************************************************

 

  # ********  Sub-Section Start: Prefix/Suffix Global Parameters ************

  #**************************************************************************

 

  k8sResource:

    container:

      prefix:

      suffix:

 

  # ********  Sub-Section End: Prefix/Suffix Global Parameters *************

  #**************************************************************************

 

# nudr-drservice microservice configurations

nudr-drservice:

#  nameOverride: "nudr-drservice"

#  Image Details

  image:

    name: ocudr/nudr_datarepository_service

    tag: 1.8.0

    pullPolicy: Always

 

  service:

    # Enable http2 server

    http2enabled: "true"

    # k8s Service type

    type: ClusterIP

    # Ports used in dr service. Applicable for both container and service ports.

    port:

      http: 5001

      https: 5002

      management: 9000

    # Microservice specific annotation for exposed service

    customExtension:

      labels: {}

      annotations: {}

  # Flag to enable/disable dr service tracing

  tracingEnabled: false

 

  # nudr-notify service ports used. Should be same as the ports configured under nudr-notify-service section

  notify:

    port:

      http: 5001

      https: 5002

 

  deployment:

    # Replica count for deployment

    replicaCount: 2

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  # Logging level

  logging:

    level:

      root: "WARN"

 

  # Flag to enable/disable autocreation of subscriber when a PUT operation is done on a new UEID

  subscriber:

    autocreate: "true"

 

  # Flag to validate smdata

  validate:

    smdata: "false"

 

  # Decides where the vsaLevel parameter will be placed in the data

  vsaLevel: "smpolicy" # sample values {"smpolicy" or "nssai" or "dnn"}

  vsaBillingDay: 0

 

  # Resource specification for nudr-drservice container

  resources:

    limits:

      cpu: 4

      memory: 4Gi

    requests:

      cpu: 4

      memory: 4Gi

 

    # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

    target:

      averageCpuUtil: 80

 

  # MYSQL connection pool size

  hikari:

    poolsize: "25"

 

  # Minimum replica count to be maintained by HPA. Suggested to keep the same as deployment.replicaCount

  minReplicas: 2

  # Maximum replicas that can be scaled by HPA

  maxReplicas: 8

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 70

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 10

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 70

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 10

 

# nudr-notify-service microservice configurations

nudr-notify-service:

#  nameOverride: "nudr-notify-service"

#  Enable/Disable nudr-notify-service deployment

  enabled: true

 

  # Image Details

  image:

    name: ocudr/nudr_notify_service

    tag: 1.8.0

    pullPolicy: Always

 

  service:

    # Enable http2 server

    http2enabled: "true"

    # k8s Service type

    type: ClusterIP

    # Ports used in notify service. Applicable for both container and service ports.

    port:

      http: 5001

      https: 5002

      management: 9000

    # Microservice specific annotation for exposed service

    customExtension:

      labels: {}

      annotations: {}

 

  # Flag to enable/disable notify service tracing

  tracingEnabled: false

 

  deployment:

    # Replica count for deployment

    replicaCount: 2

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  notification:

    # Retry count for failed notifications

    retrycount: "3"

    # Interval for each retry attempt

    retryinterval: "5"

    # Error codes for which notification will be retried

    retryerrorcodes: "400,429,500,503"

 

  # MYSQL connection pool size

  hikari:

    poolsize: "10"

 

  # Logging level

  logging:

    level:

      root: "WARN"

 

  # Resource specification for nudr-notify-service container

  resources:

    limits:

      cpu: 3

      memory: 3Gi

    requests:

      cpu: 3

      memory: 3Gi

 

    # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

    target:

      averageCpuUtil: 80

 

  # Minimum replica count to be maintained by HPA. Suggested to keep the same as deployment.replicaCount

  minReplicas: 2

  # Maximum replicas that can be scaled by HPA

  maxReplicas: 4

 

  # Egress Gateway port to be used for connection

  http:

    proxy:

      port: 8080

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 80

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 5

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 80

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 20

 

nudr-config:

#  nameOverride: "nudr-configuration-service"

#  Enable/Disable nudr-config deployment

  enabled: true

 

  # Image Details

  image:

    name: ocudr/nudr_config

    tag: 1.8.0

    pullPolicy: Always

 

  service:

    # Enable http2 server

    http2enabled: "true"

    # k8s Service type

    type: ClusterIP

    #Ports used in nudr-config service. Applicable for both container and service ports.

    port:

      http: 5001

      https: 5002

      management: 9000

    # Microservice specific annotation for exposed service

    customExtension:

      labels: {}

      annotations: {}

 

  deployment:

    # Replica count for deployment

    replicaCount: 1

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  # Logging level

  logging:

    level:

      root: "WARN"

 

  # Resource specification for nudr-config container

  resources:

    limits:

      cpu: 2

      memory: 2Gi

    requests:

      cpu: 2

      memory: 2Gi

 

    # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

    target:

      averageCpuUtil: 80

 

  # Minimum replica count to be maintained by HPA. Suggested to keep the same as deployment.replicaCount

  minReplicas: 1

  # Maximum replicas that can be scaled by HPA

  maxReplicas: 1

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 30

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 5

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 40

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 40

 

# config-server related configurations

config-server:

  # Enable/Disable config-server deployment

  enabled: true

 

  global:

    nfName: nudr

    # Init service image to be used if global.initContainerEnable is set to true

    imageServiceDetector: ocudr/readiness-detector:1.7.1

    # Jaeger configurations for Config-server tracing

    envJaegerAgentHost: ''

    envJaegerAgentPort: 6831

 

  replicas: 1

  envLoggingLevelApp: WARN

 

  # Resource specification for config-server container

  resources:

    limits:

      cpu: 2

      memory: 2Gi

    requests:

      cpu: 2

      memory: 512Mi

 

  service:

    # k8s Service type

    type: ClusterIP

    port: 0

    # Microservice specific annotation for exposed service

    customExtension:

      labels: {}

      annotations: {}

 

  deployment:

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  fullnameOverride: udr-config-server

  installedChartVersion: ''

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 20

    # Number of seconds after which the probe times out

    timeoutSeconds: 3

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 10

    # Minimum consecutive successes for the probe to be considered successful after having failed

    successThreshold: 1

    # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

    failureThreshold: 3

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 60

    # Number of seconds after which the probe times out

    timeoutSeconds: 3

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 15

    # Minimum consecutive successes for the probe to be considered successful after having failed

    successThreshold: 1

    # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

    failureThreshold: 3

 

# nudr-nrf-client-service related configurations

nudr-nrf-client-service:

#  nameOverride: "nudr-nrf-client-service"

#  Enable/Disable nudr-nrf-client-service deployment

  enabled: true

  # NRF ingressgateway details along with registration url and proxy config if any

  host:

    baseurl: "http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances"

    proxy:

  # Enable SSL for nrf client service

  ssl: "false"

 

  # Logging level config

  logging:

    level:

      root: "WARN"

 

  # Image details

  image:

    name: ocudr/nudr_nrf_client_service

    tag: 1.8.0

    pullPolicy: Always

 

  # Heart beat timer for Update NF Profile requests to NRF

  heartBeatTimer: "90"

  # UDR group id sent in NF Profile

  udrGroupId: "udr-1"

  #  Capacity multiplier of UDR based on number of dr service UDR pods running

  capacityMultiplier: "500"

  # Supported SUPI range registered with NRF

  supirange: "[{\"start\": \"10000000000\", \"end\": \"20000000000\"}]"

  # Priority parameter in Nf Profile

  priority: "10"

  # IPV4 address of UDR used in registration

  udrMasterIpv4: "10.0.0.0"

  # Supported GPSI range registered with NRF

  gpsirange: "[{\"start\": \"10000000000\", \"end\": \"20000000000\"}]"

  #endpointLabelSelector : "ocudr-ingressgateway"

  # Supported plmn values for the UDR

  plmnvalues: "[{\"mnc\": \"14\", \"mcc\": \"310\"}]"

  # Client scheme used for all egress messages

  scheme: "http"

  # Liveness check retry attempts on failure

  livenessProbeMaxRetry: 5

  # this is for egress port

  http:

    proxy:

      host:

      port: 8080

  # The below two configurations change based on the site's k8s name resolution settings; also note the namespace used for the UDR installation

  #livenessProbeUrl: "http://nudr-notify-service.myudr.svc.cluster.local:9000/actuator/health,http://nudr-drservice.myudr.svc.cluster.local:9000/actuator/health"

  fqdn: "ocudr-ingressgateway.myudr.svc.cluster.local"

 

  # Resource specification for nudr-nrf-client-service container

  resources:

    limits:

      cpu: 1

      memory: 2Gi

    requests:

      cpu: 1

      memory: 2Gi

 

  service:

    customExtension:

      labels: {}

      annotations: {}

 

  deployment:

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

ingressgateway:

 global:

   # Docker registry name

   # dockerRegistry: reg-1:5000

 

   # Specify type of service. Possible values: ClusterIP, NodePort, LoadBalancer, ExternalName

   type: LoadBalancer

 

   # Enable or disable IP Address allocation from Metallb Pool

   metalLbIpAllocationEnabled: true

 

   # Address Pool Annotation for Metallb

   metalLbIpAllocationAnnotation: "metallb.universe.tf/address-pool: signaling"

 

   # If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort

   staticNodePortEnabled: false

 

   # When ASPEN Service Mesh is enabled, set the below flag to true to support clear text traffic from outside the cluster.

   istioIngressTlsSupport:

     ingressGateway: false

 

 image:

   # image name

   name: ocudr/ocingress_gateway

   # tag name of image

   tag: 1.8.1

   # Pull Policy. Possible values: Always, IfNotPresent, Never

   pullPolicy: Always

 

 initContainersImage:

   # init containers image name

   name: ocudr/configurationinit

   # tag name of init Container image

   tag: 1.4.0

   # Pull Policy. Possible values: Always, IfNotPresent, Never

   pullPolicy: Always

 

 updateContainersImage:

   # update Containers image name

   name: ocudr/configurationupdate

   # tag name of update Container image

   tag: 1.4.0

   # Pull Policy. Possible values: Always, IfNotPresent, Never

   pullPolicy: Always

 

 deployment:

   # Microservice specific annotation for deployment

   customExtension:

      labels: {}

      annotations: {}

 

 service:

   # Microservice specific annotation for service exposed

   customExtension:

      labels: {}

      annotations: {}

   # Configure this section to support TLS with ingress gateway

   ssl:

     # TLS version used

     tlsVersion: TLSv1.2

 

     # Secret Details for certificates

     privateKey:

       k8SecretName: ocudr-gateway-secret

       k8NameSpace: ocudr

       rsa:

         fileName: rsa_private_key_pkcs1.pem

       ecdsa:

         fileName: ecdsa_private_key_pkcs8.pem

 

     certificate:

       k8SecretName: ocudr-gateway-secret

       k8NameSpace: ocudr

       rsa:

         fileName: apigatewayrsa.cer

       ecdsa:

         fileName: apigatewayecdsa.cer

 

     caBundle:

       k8SecretName: ocudr-gateway-secret

       k8NameSpace: ocudr

       fileName: caroot.cer

 

     keyStorePassword:

       k8SecretName: ocudr-gateway-secret

       k8NameSpace: ocudr

       fileName: key.txt

 

     trustStorePassword:

       k8SecretName: ocudr-gateway-secret

       k8NameSpace: ocudr

       fileName: trust.txt

 

     initialAlgorithm: RSA256

 

 # The default values in this section can be retained. Used to support HTTP1.1 towards ingressgateway

 cncc:

   enabled: false

   enablehttp1: true

 

 # Resource details for IGW, init and also update containers

 resources:

   limits:

     cpu: 5

     memory: 4Gi

     initServiceCpu: 1

     initServiceMemory: 1Gi

     updateServiceCpu: 1

     updateServiceMemory: 1Gi

   requests:

     cpu: 5

     memory: 4Gi

     initServiceCpu: 1

     initServiceMemory: 1Gi

     updateServiceCpu: 1

     updateServiceMemory: 1Gi

   # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

   target:

     averageCpuUtil: 80

 

 # Logging level

 log:

   level:

     root: WARN

     ingress: INFO

     oauth: INFO

 

 # enable jaeger tracing

 jaegerTracingEnabled: false

 

 openTracing :

   jaeger:

     udpSender:

       # udpsender host

       host: "occne-tracer-jaeger-agent.occne-infra"

       # udpsender port

       port: 6831

     probabilisticSampler: 0.5

 

 # Number of pods that must always be available, even during a disruption.

 minAvailable: 2

 # Min replicas to scale to maintain an average CPU utilization

 minReplicas: 2

 # Max replicas to scale to maintain an average CPU utilization

 maxReplicas: 5

 

 # Do not change any values in this section. If pods are delayed in coming up and the

 # probe is killing them, consider tuning these parameters.

 readinessProbe:

   # tells the kubelet how long to wait before performing the first probe (in seconds)

   initialDelaySeconds: 30

   # Number of seconds after which the probe times out

   timeoutSeconds: 3

   # specifies how often, in seconds, the kubelet performs the readiness probe

   periodSeconds: 10

   # Minimum consecutive successes for the probe to be considered successful after having failed

   successThreshold: 1

   # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

   failureThreshold: 3

 

 # Do not change any values in this section. If pods are delayed in coming up and the

 # probe is killing them, consider tuning these parameters.

 livenessProbe:

   # tells the kubelet how long to wait before performing the first probe (in seconds)

   initialDelaySeconds: 30

   # Number of seconds after which the probe times out

   timeoutSeconds: 3

   # specifies how often, in seconds, the kubelet performs the liveness probe

   periodSeconds: 15

   # Minimum consecutive successes for the probe to be considered successful after having failed

   successThreshold: 1

   # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

   failureThreshold: 3

 

 # label to override the name of the api-gateway microservice

 #fullnameOverride: ocudr-endpoint

 

 # To Initialize SSL related infrastructure in init/update container

 initssl: false

 

 # Cipher suites to be enabled on server side

 ciphersuites:

   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

   - TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

   - TLS_DHE_RSA_WITH_AES_256_CCM

   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

 

 #OAUTH CONFIGURATION

 oauthValidatorEnabled: false

 nfType: UDR

 #Moved to global section

 #nfInstanceId: 5a7bd676-ceeb-44bb-95e0-f6a55a328b03

 producerScope: nudr-dr,nudr-group-id-map

 allowedClockSkewSeconds: 0

 nrfPublicKeyKubeSecret: oauthsecret

 nrfPublicKeyKubeNamespace: ocudr

 validationType: strict

 producerPlmnMNC: 14

 producerPlmnMCC: 310

 

 #Server Configuration for http and https support

 #Server side http support

 enableIncomingHttp: true

 #Server side https support

 enableIncomingHttps: false

 #Client side https support

 enableOutgoingHttps: false

 

 maxRequestsQueuedPerDestination: 5000

 maxConnectionsPerIp: 10

 

 #Service Mesh (Istio) to take care of load-balancing

 serviceMeshCheck: false

 # configuring routes

 routesConfig:

 - id: traffic_mapping_http

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudr-dr/**

   order: 1

 - id: traffic_mapping_http_prov

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudr-dr-prov/**

   order: 2

 - id: traffic_mapping_http_mgmt

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudr-dr-mgm/**

   order: 3

 - id: traffic_mapping_http_udsf

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudsf-dr/**

   order: 4

 - id: traffic_mapping_http_group

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudr-group-id-map/**

   order: 5

 - id: traffic_mapping_http_group_prov

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /nudr-group-id-map-prov/**

   order: 6

 - id: traffic_mapping_http_slf_group_prov

   uri: http://{{ .Release.Name }}-nudr-drservice:5001

   path: /slf-group-prov/**

   order: 7

 

egressgateway:

  enabled: true

  #fullnameOverride : 'ocudr-egress-gateway'

  nfType: UDR

 

  #global:

  #  dockerRegistry: reg-1:5000

 

  deploymentEgressGateway:

    image: ocudr/ocegress_gateway

    imageTag: 1.8.1

    pullPolicy: Always

 

  initContainersImage:

    # init containers image name

    name: configurationinit

    # tag name of init Container image

    tag: 1.4.0

    # Pull Policy. Possible values: Always, IfNotPresent, Never

    pullPolicy: Always

 

  updateContainersImage:

    # update Containers image name

    name: configurationupdate

    # tag name of update Container image

    tag: 1.4.0

    # Pull Policy. Possible values: Always, IfNotPresent, Never

    pullPolicy: Always

 

  # enable jaeger tracing

  jaegerTracingEnabled: false

 

  openTracing :

    jaeger:

      udpSender:

        # udpsender host

        host: "occne-tracer-jaeger-agent.occne-infra"

        # udpsender port

        port: 6831

      probabilisticSampler: 0.5

 

  # ---- Oauth Configuration - BEGIN ----

  oauthClient:

    enabled: false

    dnsSrvEnabled: false

    httpsEnabled: false

    virtualFqdn: localhost:port

    staticNrfList:

      - localhost:port

    nfType: UDR

    #Moved to global section

    #nfInstanceId: 5a7bd676-ceeb-44bb-95e0-f6a55a328b03

    consumerPlmnMNC: 14

    consumerPlmnMCC: 310

    maxRetry: 2

    apiPrefix: ""

    errorCodeSeries: 4XX

    retryAfter: 5000

  # ---- Oauth Configuration - END ----

 

  #jetty client configuration

  maxConcurrentPushedStreams: 1000

  maxRequestsQueuedPerDestination: 1024

  #maxConnectionsPerDestination: 4

  maxConnectionsPerIp: 4

  connectionTimeout: 10000 #(ms)

  requestTimeout: 1000 #(ms)

  jettyIdleTimeout: 0 #(ms,<=0 -> to make timeout infinite)

 

  minReplicas: 1

  maxReplicas: 4

  minAvailable: 1

 

  # ---- HTTPS Configuration - BEGIN ----

  initssl: false

  enableOutgoingHttps: false

 

  # Resource details for EGW, init and update container

  resources:

    limits:

      cpu: 3

      memory: 4Gi

      initServiceCpu: 1

      initServiceMemory: 1Gi

      updateServiceCpu: 1

      updateServiceMemory: 1Gi

    requests:

      cpu: 3

      memory: 4Gi

      initServiceCpu: 1

      initServiceMemory: 1Gi

      updateServiceCpu: 1

      updateServiceMemory: 1Gi

    # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

    target:

      averageCpuUtil: 80

 

  deployment:

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  service:

    type: ClusterIP

    # Microservice specific annotation for service

    customExtension:

      labels: {}

      annotations: {}

 

    # This section needs to be configured to support TLS on the egress gateway

    ssl:

      tlsVersion: TLSv1.2

      initialAlgorithm: RSA256

 

      # Secret related info for certificates

      privateKey:

        k8SecretName: ocudr-gateway-secret

        k8NameSpace: ocudr

        rsa:

          fileName: rsa_private_key_pkcs1.pem

        ecdsa:

          fileName: ecdsa_private_key_pkcs8.pem

 

      certificate:

        k8SecretName: ocudr-gateway-secret

        k8NameSpace: ocudr

        rsa:

          fileName: apigatewayrsa.cer

        ecdsa:

          fileName: apigatewayecdsa.cer

 

      caBundle:

        k8SecretName: ocudr-gateway-secret

        k8NameSpace: ocudr

        fileName: caroot.cer

 

      keyStorePassword:

        k8SecretName: ocudr-gateway-secret

        k8NameSpace: ocudr

        fileName: key.txt

 

      trustStorePassword:

        k8SecretName: ocudr-gateway-secret

        k8NameSpace: ocudr

        fileName: trust.txt

  # ---- HTTPS Configuration - END ----

 

  #Enable this if loadbalancing is to be done by egress instead of K8s

  K8ServiceCheck: false

 

  #Set the root log level

  log:

    level:

      root: WARN

      egress: INFO

      oauth: INFO

 

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 30

    # Number of seconds after which the probe times out

    timeoutSeconds: 3

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 10

    # Minimum consecutive successes for the probe to be considered successful after having failed

    successThreshold: 1

    # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

    failureThreshold: 3

 

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 30

    # Number of seconds after which the probe times out

    timeoutSeconds: 3

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 15

    # Minimum consecutive successes for the probe to be considered successful after having failed

    successThreshold: 1

    # When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

    failureThreshold: 3

 

nudr-diameterproxy:

  #  Enable/Disable nudr-diameterproxy deployment

  enabled: true

 

  # Image Details

  image:

    name: ocudr/nudr_diameterproxy

    tag: 1.8.0

    pullPolicy: Always

 

  service:

    # Enable http2 rest server

    http2enabled: "true"

    # K8s service type

    type: ClusterIP

    # K8s service type for Diameter endpoint

    diameter:

      type: LoadBalancer

 

    # Ports used in diameterproxy service. Applicable for both container and service ports.

    port:

      http: 5001

      https: 5002

      management: 9000

      diameter: 6000

    # Microservice specific annotation for exposed service

    customExtension:

      labels: {}

      annotations: {}

 

  deployment:

    # Replica count for deployment

    replicaCount: 2

    # Microservice specific annotation for deployment

    customExtension:

      labels: {}

      annotations: {}

 

  # Logging level

  logging:

    level:

      root: "WARN"

 

  # Resource specification for nudr-diameterproxy container

  resources:

    limits:

      cpu: 3

      memory: 4Gi

    requests:

      cpu: 3

      memory: 4Gi

    # When CPU utilization goes beyond this limit, a new pod is scaled up by the HPA

    target:

      averageCpuUtil: 80

 

  # Minimum replica count to be maintained by HPA. Suggested to keep the same as deployment.replicaCount

  minReplicas: 2

  # Maximum replicas that can be scaled by HPA

  maxReplicas: 4

 

  # nudr-drservice port details. Should be the same as the ports configured under the nudr-drservice section

  drservice:

    port:

      http: 5001

      https: 5002

 

  diameter:

    # Host realm of diameterproxy

    realm: "oracle.com"

    # Host identity of diameterproxy

    identity: "nudr.oracle.com"

    IO:

      # Number of threads for IO operation

      threadCount: 0      # should not go beyond 2*CPU

      # Queue size for IO

      queueSize: 0        # range [2048-8192] should be power of 2

    messageBuffer:

      # Number of threads for processing the message

      threadCount: 0      # should not go beyond 2*CPU

      # Queue Size for message processing

      queueSize: 0        # range [1024-4096] and default 1024/Low, 2048/Medium, 4096/High. should be power of 2

    # Diameter peer settings; parameter details:

    # reconnectDelay: delay before reconnecting to a diameter peer (in seconds)

    # responseTimeout: total turnaround time to process diameter messages (in seconds)

    # connectionTimeOut: TCP connection timeout (in seconds)

    # watchdogInterval: interval between DWR/DWA watchdog messages (in seconds)

    # transport: transport layer protocol

    # reconnectLimit: number of reconnect attempts if a diameter peer is down

    peer:

      setting: |

         reconnectDelay: 3

         responseTimeout: 4

         connectionTimeOut: 3

         watchdogInterval: 6

         transport: 'TCP'

         reconnectLimit: 50

      # Diameter server peer node information

      # The below information should be a YAML list

      nodes: |

       - name: 'seagull'

         responseOnly: false

         namespace: 'seagull1'

         host: '10.75.185.158'

         domain: 'svc.cluster.local'

         port: 4096

         realm: 'seagull1.com'

         identity: 'seagull1a.seagull1.com'

      # Diameter client node information

      # The below information should be a YAML list

      clientNodes: |

       - identity: 'seagull1a.seagull1.com'

         realm: 'seagull1.com'

       - identity: 'seagull1.com'

         realm: 'seagull1.com'

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  readinessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 80

    # specifies how often, in seconds, the kubelet performs the readiness probe

    periodSeconds: 5

 

  # Do not change any values in this section. If pods are delayed in coming up and the

  # probe is killing them, consider tuning these parameters.

  livenessProbe:

    # tells the kubelet how long to wait before performing the first probe (in seconds)

    initialDelaySeconds: 80

    # specifies how often, in seconds, the kubelet performs the liveness probe

    periodSeconds: 20

Configuring User Parameters

The UDR microservices have configuration options that you can set through the deployment values.yaml file.

Note:

The default value of some of the settings may change.

Note:

  • NAME: the release name used in the helm install command
  • NAMESPACE: the namespace used in the helm install command
  • K8S_DOMAIN: the default kubernetes domain (svc.cluster.local)

Default Helm release name: ocudr
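To illustrate how these placeholders compose, the default fqdn registered by nudr-nrf-client-service in the example file above follows the pattern <NAME>-ingressgateway.<NAMESPACE>.<K8S_DOMAIN>. The sketch below assumes release name ocudr and namespace myudr:

nudr-nrf-client-service:
  # <NAME>-ingressgateway.<NAMESPACE>.<K8S_DOMAIN>
  fqdn: "ocudr-ingressgateway.myudr.svc.cluster.local"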

Global Configuration: These parameters are defined under the global section of the customization file and apply across all OCUDR microservices.

Following table provides the parameters for global configurations.

Parameter Description Default value Range or Possible Values (If applicable) Notes
dockerRegistry Docker registry from where the images will be pulled ocudr-registry.us.oracle.com:5000 Not applicable  
mysql.dbServiceName DB service to connect mysql-connectivity-service.occne-infra Not applicable This is a CNE service used for db connection. Default name used on CNE is the same as configured.
mysql.port Port for DB Service Connection 3306 Not applicable  
udrTracing.enable Flag to enable udr tracing on Jaeger false true/false  
udrTracing.host Jaeger Service Name installed in CNE occne-tracer-jaeger-collector.occne-infra Not applicable  
udrTracing.port Jaeger Service Port installed in CNE 14268 Not applicable  
dbenc.shavalue Encryption Key size 256 256 or 512  
serviceAccountName Service account name null Not Applicable The serviceaccount, role and rolebindings required for deployment should be done prior installation. Use the created serviceaccountname here.
egress.enabled Flag to enable outgoing traffic through egress gateway true true/false  
configServerEnable Flag to enable config-server true true/false  
initContainerEnable Flag to disable init container for config-server. This is not required because the pre-install hooks take care of DB table creation and connectivity verification false true/false  
dbCredSecretName DB Credential Secret Name ocudr-secrets Not Applicable  
configServerFullNameOverride Config Server Full Name Override nudr-config-server Not Applicable  
udrServices Services supported on the UDR deployment. This config decides the schema execution on the udrdb, which is done by the nudr-preinstall hook pod. All All/nudr-dr/nudr-group-id-map For SLF, set udrServices to nudr-group-id-map.
udsfEnable Flag to enable UDSF services on the deployment false true/false  
publicHttpSignalingPort Port on which ingressgateway listens for incoming http requests. 80 Valid Port  
publicHttpsSignallingPort Port on which ingressgateway listens for incoming https requests. 443 Valid Port  
nfInstanceId Nf Instance ID for UDR (same is registered with NRF) 5a7bd676-ceeb-44bb-95e0-f6a55a328b03 Valid uuid A valid UUID is a 128-bit unique number that helps to identify information in computer systems.
test.nfName NF name for which the helm test is performed; used as a suffix in the test container name ocudr Not applicable  
test.image.name Image name for the helm test container image ocudr/nf_test Not Applicable  
test.image.tag Image version tag for helm test 1.8.0 Not Applicable  
test.config.logLevel Log level for helm test pod WARN WARN/INFO/DEBUG  
test.config.timeout Timeout value for the helm test operation; if exceeded, the helm test is considered a failure 120 Range: 1-300, Unit: Seconds  
preInstall.image.name Image name for the nudr-prehook pod, which takes care of DB and table creation for the UDR deployment ocudr/nudr_prehook Not Applicable  
preInstall.image.tag Image version for nudr-prehook pod image 1.8.0 Not Applicable  
preInstall.config.logLevel Log level for pre-install hook pod WARN WARN/INFO/DEBUG  
hookJobResources.limits.cpu CPU limit for the kubernetes hook/job pods created as part of UDR installation; applicable for the helm test job as well 2 Not Applicable  
hookJobResources.limits.memory Memory limit for the kubernetes hook/job pods created as part of UDR installation; applicable for the helm test job as well 2Gi Not Applicable  
hookJobResources.requests.cpu CPU requests for the kubernetes hook/job pods created as part of UDR installation; applicable for the helm test job as well 1 Not Applicable The cpu to be allocated for hooks during deployment
hookJobResources.requests.memory Memory requests for the kubernetes hook/job pods created as part of UDR installation; applicable for the helm test job as well 1Gi Not Applicable The memory to be allocated for hooks during deployment
customExtension.allResources.labels Custom labels to be added to all the OCUDR kubernetes resources null Not Applicable This can be used to add custom label(s) to all k8s resources that will be created by the OCUDR helm chart.
customExtension.allResources.annotations Custom annotations to be added to all the OCUDR kubernetes resources null Not Applicable This can be used to add custom annotation(s) to all k8s resources that will be created by the OCUDR helm chart. Note: ASM related annotations need to be added under the ASM Specific Configuration section.
customExtension.lbServices.labels Custom labels to be added to OCUDR Services that are of Load Balancer type null Not Applicable This can be used to add custom label(s) to all Load Balancer type Services that will be created by the OCUDR helm chart.
customExtension.lbServices.annotations Custom annotations to be added to OCUDR Services that are of Load Balancer type null Not Applicable This can be used to add custom annotation(s) to all Load Balancer type Services that will be created by the OCUDR helm chart.
customExtension.lbDeployments.labels Custom labels to be added to OCUDR Deployments that are associated with a Service of Load Balancer type null Not Applicable This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type.
customExtension.lbDeployments.annotations Custom annotations to be added to OCUDR Deployments that are associated with a Service of Load Balancer type null Not Applicable This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type. Note: ASM related annotations need to be added under the ASM Specific Configuration section.
customExtension.nonlbServices.labels Custom labels to be added to OCUDR Services that are not of Load Balancer type null Not Applicable This can be used to add custom label(s) to all non-Load Balancer type Services that will be created by the OCUDR helm chart.
customExtension.nonlbServices.annotations Custom annotations to be added to OCUDR Services that are not of Load Balancer type null Not Applicable This can be used to add custom annotation(s) to all non-Load Balancer type Services that will be created by the OCUDR helm chart.
customExtension.nonlbDeployments.labels Custom labels to be added to OCUDR Deployments that are associated with a Service not of Load Balancer type null Not Applicable This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service not of Load Balancer type.
customExtension.nonlbDeployments.annotations Custom annotations to be added to OCUDR Deployments that are associated with a Service not of Load Balancer type null Not Applicable This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service not of Load Balancer type. Note: ASM related annotations need to be added under the ASM Specific Configuration section.
k8sResource.container.prefix Value that will be prefixed to all the container names of OCUDR. null Not Applicable This value is used as a prefix for all the container names of OCUDR.
k8sResource.container.suffix Value that will be suffixed to all the container names of OCUDR. null Not Applicable This value is used as a suffix for all the container names of OCUDR.
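As an example of the customExtension and k8sResource parameters above, the following sketch adds a label to every OCUDR resource, an annotation to Load Balancer type Services, and a container name prefix (the label value, annotation, and prefix are illustrative assumptions, not defaults):

global:
  customExtension:
    allResources:
      labels:
        environment: lab                              # illustrative label
    lbServices:
      annotations:
        metallb.universe.tf/address-pool: signaling   # mirrors the MetalLB annotation used by ingressgateway
  k8sResource:
    container:
      prefix: site1                                   # illustrative container name prefix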

Following table provides the parameters for nudr-drservice micro service.

Parameter Description Default value Range or Possible Values (If applicable) Notes
image.name Docker Image name ocudr/nudr_datarepository_service Not applicable  
image.tag Tag of Image 1.8.0 Not applicable  
image.pullPolicy This setting signifies whether the image needs to be pulled or not Always Always/IfNotPresent/Never  
subscriber.autocreate Flag to enable auto creation of subscriber true true/false This flag enables auto creation of a subscriber when creating data for a non-existent subscriber.
validate.smdata Flag to enable correlation feature for smdata false true/false This flag controls the correlation feature for smdata. This flag must be false if using v16.2.0 for PCF data.
logging.level.root Log Level WARN WARN/INFO/DEBUG Log level of the nudr-drservice pod
deployment.replicaCount Replicas of nudr-drservice pod 2 Not applicable Number of nudr-drservice pods to be maintained by replica set created with deployment
minReplicas Minimum Replicas 2 Not applicable Minimum number of pods
maxReplicas Maximum Replicas 8 Not applicable Maximum number of pods
service.http2enabled Enabled HTTP2 support flag for rest server true true/false Enable/Disable HTTP2 support for rest server
service.type UDR service type ClusterIP ClusterIP/NodePort/LoadBalancer The kubernetes service type for exposing the UDR deployment. Note: Suggested to be set as ClusterIP (default value) always.

service.port.http HTTP port 5001 Not applicable The http port to be used in nudr-drservice service
service.port.https HTTPS port 5002 Not applicable The https port to be used for nudr-drservice service
service.port.management Management port 9000 Not applicable The actuator management port to be used for nudr-drservice service
resources.requests.cpu Cpu allotment for nudr-drservice pod 4 Not applicable The cpu to be allocated for nudr-drservice pod during deployment
resources.requests.memory Memory allotment for nudr-drservice pod 4Gi Not applicable The memory to be allocated for nudr-drservice pod during deployment
resources.limits.cpu Cpu allotment limitation 4 Not applicable  
resources.limits.memory Memory allotment limitation 4Gi Not applicable  
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not Applicable CPU utilization limit for creating HPA
notify.port.http HTTP port on which notify service is running 5001 Not applicable  
notify.port.https HTTPS port on which notify service is running 5002 Not applicable  
hikari.poolsize Mysql Connection pool size 25 Not applicable The hikari pool connection size to be created at start up
vsaLevel The data level at which the VSA holding the 4G Policy data is placed smpolicy smpolicy/nssai/dnn  
vsaBillingDay The Billing day value 0 Not applicable  
tracingEnabled Flag to enable/disable jaeger tracing for nudr-drservice false true/false  
service.customExtension.labels Custom Labels that needs to be added to nudr-drservice specific Service. null Not Applicable This can be used to add custom label(s) to nudr-drservice Service.
service.customExtension.annotations Custom Annotations that needs to be added to nudr-drservice specific Services. null Not Applicable This can be used to add custom annotation(s) to nudr-drservice Service.
deployment.customExtension.labels Custom Labels that needs to be added to nudr-drservice specific deployment. null Not Applicable This can be used to add custom label(s) to nudr-drservice Deployment.
deployment.customExtension.annotations Custom Annotations that needs to be added to nudr-drservice specific deployment. null Not Applicable This can be used to add custom annotation(s) to nudr-drservice deployment.
readinessProbe.initialDelaySeconds Wait time before the kubelet performs the first readiness probe. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 70 Not Applicable Unit: Seconds
readinessProbe.periodSeconds Time interval between readiness probe checks. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 10 Not Applicable Unit: Seconds
livenessProbe.initialDelaySeconds Wait time before the kubelet performs the first liveness probe. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 70 Not Applicable Unit: Seconds
livenessProbe.periodSeconds Time interval between liveness probe checks. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 10 Not Applicable Unit: Seconds

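Tying the scaling parameters together, the following is a hedged sketch of a nudr-drservice override that raises the replica count while keeping minReplicas aligned with deployment.replicaCount, as the table recommends (the replica values are illustrative):

nudr-drservice:
  deployment:
    replicaCount: 3      # illustrative; default is 2
  minReplicas: 3         # keep equal to deployment.replicaCount
  maxReplicas: 8
  resources:
    target:
      averageCpuUtil: 80 # HPA adds pods when average CPU utilization exceeds this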
Following table provides the parameters for nudr-notify-service micro service.

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled flag for enabling or disabling nudr-notify-service true true or false For SLF deployment, this micro service must be disabled.
image.name Docker Image name ocudr/nudr_notify_service Not applicable  
image.tag Tag of Image 1.8.0 Not applicable  
image.pullPolicy This setting signifies whether the image needs to be pulled or not Always Always/IfNotPresent/Never  
notification.retrycount Number of notification retry attempts 3 Range: 1-10 Number of notification attempts made in case of notification failures. Whether a retry is done depends on the notification.retryerrorcodes configuration.
notification.retryinterval Interval between notification retry attempts 5 Range: 1-60, Unit: Seconds The retry interval for failed notifications, in seconds. Whether a retry is done depends on the notification.retryerrorcodes configuration.
notification.retryerrorcodes Notification failures eligible for retry "400,429,500,503" Valid HTTP status codes, comma-separated Comma-separated error codes should be given. These error codes are eligible for retry notifications in case of failures.
hikari.poolsize Mysql Connection pool size 10 Not applicable The hikari pool connection size to be created at start up
tracingEnabled Flag to enable/disable jaeger tracing for nudr-notify-service false true/false  
http.proxy.port Port to connect to egress gateway 8080 Not applicable  
logging.level.root Log Level WARN WARN/INFO/DEBUG Log level of the notify service pod
deployment.replicaCount Replicas of nudr-notify-service pod 2 Not applicable Number of nudr-notify-service pods to be maintained by replica set created with deployment
minReplicas Minimum Replicas 2 Not applicable Minimum number of pods
maxReplicas Maximum Replicas 4 Not applicable Maximum number of pods
service.http2enabled Enabled HTTP2 support flag true true/false This is a read only parameter. Do not change this value
service.type UDR service type ClusterIP ClusterIP/NodePort/LoadBalancer The kubernetes service type for exposing the UDR deployment. Note: Suggested to be set as ClusterIP (default value) always.

service.port.http HTTP port 5001 Not applicable The http port to be used in notify service to receive signals from nudr-notify-service pod.
service.port.https HTTPS port 5002 Not applicable The https port to be used in notify service to receive signals from nudr-notify-service pod.
service.port.management Management port 9000 Not applicable The actuator management port to be used for notify service.
resources.requests.cpu Cpu Allotment for nudr-notify-service pod 3 Not applicable The cpu to be allocated for notify service pod during deployment
resources.requests.memory Memory allotment for nudr-notify-service pod 3Gi Not applicable The memory to be allocated for nudr-notify-service pod during deployment
resources.limits.cpu Cpu allotment limitation 3 Not applicable  
resources.limits.memory Memory allotment limitation 3Gi Not applicable  
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not Applicable CPU utilization limit for creating HPA
service.customExtension.labels Custom Labels that needs to be added to nudr-notify-service specific service. null Not Applicable This can be used to add custom label(s) to nudr-notify-service Service.
service.customExtension.annotations Custom Annotations that needs to be added to nudr-notify-service specific services. null Not Applicable This can be used to add custom annotation(s) to nudr-notify-service Service.
deployment.customExtension.labels Custom Labels that needs to be added to nudr-notify-service specific deployment. null Not Applicable This can be used to add custom label(s) to nudr-notify-service deployment.
deployment.customExtension.annotations Custom Annotations that needs to be added to nudr-notify-service specific deployment. null Not Applicable This can be used to add custom annotation(s) to nudr-notify-service deployment.
readinessProbe.initialDelaySeconds Wait time before the kubelet performs the first readiness probe. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 80 Not Applicable Unit: Seconds
readinessProbe.periodSeconds Time interval between readiness probe checks. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 5 Not Applicable Unit: Seconds
livenessProbe.initialDelaySeconds Wait time before the kubelet performs the first liveness probe. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 80 Not Applicable Unit: Seconds
livenessProbe.periodSeconds Time interval between liveness probe checks. Note: Do not change this value; if pods are delayed in coming up and the probe is killing the pod, consider tuning this parameter. 20 Not Applicable Unit: Seconds

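For example, a sketch of a notification retry override that retries failed notifications five times at a ten-second interval, only for 5xx responses (the values are illustrative; the defaults are listed in the table above, and note that these parameters are quoted strings in the values file):

nudr-notify-service:
  notification:
    retrycount: "5"              # illustrative; default is "3"
    retryinterval: "10"          # seconds; default is "5"
    retryerrorcodes: "500,503"   # only these HTTP status codes trigger a retry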
Following table provides the parameters for nudr-nrf-client-service micro service.

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled flag for enabling or disabling nudr-nrf-client-service true true/false  
host.baseurl NRF URL for registration http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances Not applicable URL used by UDR to connect to and register with the NRF
host.proxy Proxy Setting NULL nrfClient.host Proxy setting if required to connect to NRF
ssl SSL flag false true/false SSL flag to enable SSL with udr nrf client pod
logging.level.root Log Level WARN WARN/INFO/DEBUG Log level of the UDR nrf client pod
image.name Docker Image name ocudr/nudr_nrf_client_service Not applicable  
image.tag Tag of Image 1.8.0 Not applicable  
image.pullPolicy This setting signifies whether the image needs to be pulled or not Always Always/IfNotPresent/Never  
heartBeatTimer Heart beat timer 90 Unit: Seconds  
udrGroupId Group ID of UDR udr-1 Not applicable  
capacityMultiplier Capacity of UDR 500 Not applicable Capacity multiplier of UDR based on number of UDR pods running
supirange Supi Range supported with UDR [{\"start\": \"10000000000\", \"end\": \"20000000000\"}] Valid start and end supi range  
priority Priority 10 Not applicable Priority sent in the NF Profile in the registration request
fqdn UDR FQDN ocudr-ingressgateway.myudr.svc.cluster.local Not Applicable FQDN used when registering with the NRF so that other NFs can connect to UDR. Note: Be cautious when updating this value; consider the helm release name, the namespace used for the UDR deployment, and the name resolution settings in k8s.

gpsirange Gpsi Range supported with UDR [{\"start\": \"10000000000\", \"end\": \"20000000000\"}] Valid start and end gpsi range  
livenessProbeMaxRetry Maximum number of retries after a liveness probe failure 5 Not applicable Set this based on how many retries you want when the liveness check fails
udrMasterIpv4 IPv4 address of the deployed UDR 10.0.0.0 Not applicable Change this to the master IP of the deployment; udrMasterIpv4 is the ipv4 address sent to the NRF during registration.
plmnvalues PLMN values supported by the UDR [{\"mnc\": \"14\", \"mcc\": \"310\"}] Valid PLMN values PLMN values are sent to the NRF during registration from UDR.
scheme Scheme that UDR supports http http/https The scheme sent to the NRF during registration
resources.requests.cpu Cpu allotment for nudr-nrf-client-service pod 1 Not applicable The cpu to be allocated for nrf client service pod during deployment
resources.requests.memory Memory allotment for nudr-nrf-client-service pod 2Gi Not applicable The memory to be allocated for nrf client service pod during deployment
resources.limits.cpu Cpu allotment limitation 1 Not applicable  
resources.limits.memory Memory allotment limitation 2Gi Not applicable  
http.proxy.port Port to connect egress gateway 8080 Not applicable  
service.customExtension.labels Custom Labels that needs to be added to nudr-nrf-client specific service. null Not Applicable This can be used to add custom label(s) to nudr-nrf-client service.
service.customExtension.annotations Custom Annotations that needs to be added to nudr-nrf-client specific services. null Not Applicable This can be used to add custom annotation(s) to nudr-nrf-client service.
deployment.customExtension.labels Custom Labels that needs to be added to nudr-nrf-client specific deployment. null Not Applicable This can be used to add custom label(s) to nudr-nrf-client deployment.
deployment.customExtension.annotations Custom Annotations that needs to be added to nudr-nrf-client specific deployment. null Not Applicable This can be used to add custom annotation(s) to the nudr-nrf-client deployment. Note: ASM related annotations are to be added under the ASM Specific Configuration section.
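As a worked example of the registration parameters, the following sketch registers a second UDR group with a different SUPI/GPSI range and PLMN (all values are illustrative; note the JSON lists are passed as escaped strings, exactly as in the defaults):

nudr-nrf-client-service:
  udrGroupId: "udr-2"
  supirange: "[{\"start\": \"30000000000\", \"end\": \"40000000000\"}]"
  gpsirange: "[{\"start\": \"30000000000\", \"end\": \"40000000000\"}]"
  plmnvalues: "[{\"mnc\": \"01\", \"mcc\": \"208\"}]"
  udrMasterIpv4: "10.1.2.3"      # illustrative; set to the deployment's master IP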

Following table provides the parameters for nudr-config micro service.

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled flag for enabling or disabling nudr-config service true true/false  
logging.level.root Log Level WARN WARN/INFO/DEBUG Log level of the nudr-config pod
service.http2enabled Flag to enable HTTP2 support for the REST server true true/false Enables or disables HTTP2 support for the REST server
image.name Docker Image name ocudr/nudr_config Not applicable  
service.customExtension.labels Custom labels that need to be added to the nudr-config specific Service. null Not applicable This can be used to add custom label(s) to the nudr-config Service.
service.customExtension.annotations Custom annotations that need to be added to the nudr-config specific Service. null Not applicable This can be used to add custom annotation(s) to the nudr-config Service.
deployment.customExtension.labels Custom labels that need to be added to the nudr-config specific Deployment. null Not applicable This can be used to add custom label(s) to the nudr-config Deployment.
deployment.customExtension.annotations Custom annotations that need to be added to the nudr-config specific Deployment. null Not applicable This can be used to add custom annotation(s) to the nudr-config Deployment.
service.type UDR service type ClusterIP

Possible Values -

ClusterIP

NodePort

LoadBalancer

The Kubernetes service type for exposing the UDR deployment

Note: It is recommended to always keep this set to ClusterIP (the default value)

image.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
service.port.management Management port 9000 Not applicable The actuator management port to be used for nudr-config service
service.port.https HTTPS port 5002 Not applicable The https port to be used for nudr-config service
service.port.http HTTP port 5001 Not applicable The http port to be used in nudr-config service
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not Applicable CPU utilization limit for creating HPA
resources.requests.memory Memory allotment for nudr-config pod 2Gi Not applicable The memory to be allocated for the nudr-config pod during deployment
resources.limits.memory Memory allotment limitation 2Gi Not applicable  
resources.requests.cpu CPU allotment for nudr-config pod 2 Not applicable The CPU to be allocated for the nudr-config pod during deployment
resources.limits.cpu Cpu allotment limitation 2 Not applicable  
image.tag Tag of Image 1.8.0 Not applicable  
deployment.replicaCount Replicas of nudr-config pod 1 Not applicable Number of nudr-config pods to be maintained by replica set created with deployment
minReplicas Minimum Replicas 1 Not applicable Minimum number of pods
maxReplicas Maximum Replicas 1 Not applicable Maximum number of pods
readinessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first readiness probe

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

30

Not Applicable

Unit: Seconds

 
readinessProbe.periodSeconds

Time interval between readiness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

5

Not Applicable

Unit: Seconds

 
livenessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first liveness probe.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

40

Not Applicable

Unit: Seconds

 
livenessProbe.periodSeconds

Time interval between liveness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

10

Not Applicable

Unit: Seconds

 

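Putting the main nudr-config parameters together, a fragment of the ocudr-custom-values.yaml file might look as follows. This is a sketch using the documented defaults; the block name nudr-config is an assumption, so verify it against your delivered values file.

# Illustrative sketch only; block name assumed, values are the documented defaults
nudr-config:
  enabled: true
  logging:
    level:
      root: WARN                # WARN, INFO, or DEBUG
  service:
    type: ClusterIP             # Keep the default ClusterIP unless there is a specific need
    http2enabled: true
    port:
      http: 5001
      https: 5002
      management: 9000          # Actuator management port
  deployment:
    replicaCount: 1
  resources:
    requests:
      cpu: 2
      memory: 2Gi
    limits:
      cpu: 2
      memory: 2Gi
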
The following table provides the parameters for the nudr-config-server microservice.

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled Flag to enable/disable nudr-config-server service true true/false  
global.nfName NF name used as part of the config server service name. nudr Not applicable
global.imageServiceDetector Image Service Detector for config-server init container ocudr/readiness-detector:1.7.1 Not Applicable  
global.envJaegerAgentHost Host FQDN for Jaeger agent service for config-server tracing ' ' Not Applicable  
global.envJaegerAgentPort Port for Connection to Jaeger agent for config-server tracing 6831 Valid Port  
envLoggingLevelApp Log Level WARN

Possible Values -

WARN

INFO

DEBUG

Log level of the nudr-config-server pod
replicas Replicas of nudr-config-server pod 1 Not applicable Number of nudr-config-server pods to be maintained by replica set created with deployment
service.type UDR service type ClusterIP

Possible Values -

ClusterIP

NodePort

LoadBalancer

The Kubernetes service type for exposing the UDR deployment

Note: It is recommended to always keep this set to ClusterIP (the default value)

resources.requests.cpu CPU allotment for nudr-config-server pod 2 Not applicable The CPU to be allocated for the nudr-config-server pod during deployment
resources.requests.memory Memory allotment for nudr-config-server pod 512Mi Not applicable The memory to be allocated for the nudr-config-server pod during deployment
resources.limits.cpu Cpu allotment limitation 2 Not applicable  
resources.limits.memory Memory allotment limitation 2Gi Not applicable  
readinessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first readiness probe

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

70

Not Applicable

Unit: Seconds

 
readinessProbe.periodSeconds

Time interval between readiness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

10

Not Applicable

Unit: Seconds

 
readinessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
readinessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.

1 Not Applicable  
readinessProbe.failureThreshold

When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first liveness probe.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

60

Not Applicable

Unit: Seconds

 
livenessProbe.periodSeconds

Time interval between liveness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

15

Not Applicable

Unit: Seconds

 
livenessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.

1 Not Applicable  
livenessProbe.failureThreshold

When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

Note: Do not change this default value.

3 Not Applicable  

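If a slow-starting nudr-config-server pod is being restarted by its probes, the probe timers above are the ones to tune. The following sketch shows them with their documented defaults; the block name nudr-config-server is an assumption, so verify it against your delivered values file.

# Illustrative sketch only; block name assumed, values are the documented defaults
nudr-config-server:
  readinessProbe:
    initialDelaySeconds: 70     # Increase if the pod needs longer before its first readiness check
    periodSeconds: 10
    timeoutSeconds: 3
    successThreshold: 1
    failureThreshold: 3
  livenessProbe:
    initialDelaySeconds: 60     # Increase if the pod is killed before it finishes starting
    periodSeconds: 15
    timeoutSeconds: 3
    successThreshold: 1
    failureThreshold: 3
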
The following table provides the parameters for the nudr-diameterproxy microservice.

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled Flag to enable or disable the nudr-diameterproxy service true Not applicable Used to enable or disable the service.
image.name Docker Image name ocudr/nudr_diameterproxy Not applicable  
image.tag Tag of Image 1.8.0 Not applicable  
image.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
logging.level.root Log Level WARN

Possible Values -

WARN

INFO

DEBUG

The log level of the nudr-diameterproxy server pod
deployment.replicaCount Replicas of the nudr-diameterproxy pod 2 Not applicable Number of nudr-diameterproxy pods to be maintained by the replica set created with the deployment
minReplicas Minimum replicas of nudr-diameterproxy 2 Not applicable Minimum number of pods
maxReplicas Maximum replicas of nudr-diameterproxy 4 Not applicable Maximum number of pods
service.http2enabled Flag to enable HTTP2 support for the REST server true true/false Enables or disables HTTP2 support for the REST server
service.type UDR service type ClusterIP

Possible Values-

ClusterIP

NodePort

LoadBalancer

The Kubernetes service type for exposing UDR deployment

Note: It is recommended to always keep this set to ClusterIP (the default value)

service.diameter.type Diameter service type LoadBalancer

Possible Values-

ClusterIP

NodePort

LoadBalancer

The Kubernetes service type for exposing the Diameter endpoint. Diameter traffic goes through the diameter-endpoint service, not through the ingress gateway
service.port.http HTTP port 5001 Not applicable The HTTP port to be used in nudr-diameterproxy service
service.port.https HTTPS port 5002 Not applicable The https port to be used for nudr-diameterproxy service
service.port.management Management port 9000 Not applicable The actuator management port to be used for nudr-diameterproxy service
service.port.diameter Diameter port 6000 Not applicable The diameter port to be used for nudr-diameterproxy service
resources.requests.cpu Cpu Allotment for nudr-diameterproxy pod 3 Not applicable The CPU to be allocated for nudr-diameterproxy pod during deployment
resources.requests.memory Memory allotment for nudr-diameterproxy pod 4Gi Not applicable The memory to be allocated for the nudr-diameterproxy pod during deployment
resources.limits.cpu Cpu allotment limitation 3 Not applicable The maximum CPU to be allocated for the nudr-diameterproxy pod
resources.limits.memory Memory allotment limitation 4Gi Not applicable The maximum memory to be allocated for the nudr-diameterproxy pod
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not Applicable CPU utilization limit for creating HPA
drservice.port.http HTTP port on which the dr service is running 5001 Not Applicable The dr-service port is required by the diameterproxy application
drservice.port.https HTTPS port on which the dr service is running 5002 Not Applicable The dr-service port is required by the diameterproxy application
diameter.realm Realm of the diameterproxy microservice oracle.com String value Host realm of the diameterproxy
diameter.identity FQDN of the diameterproxy in Diameter messages nudr.oracle.com String value Identity of the diameterproxy
diameter.strictParsing Strict parsing of Diameter AVPs and messages false Not Applicable Enables strict parsing
diameter.IO.threadCount Number of threads for IO operations 0 0 to 2 * CPU

Number of threads to handle IO operations in the diameterproxy pod

If threadCount is 0, the application chooses the thread count based on the pod profile size

diameter.IO.queueSize Queue size for IO 0 2048 to 8192

The value should be a power of 2

If queueSize is 0, the application chooses the queue size based on the pod profile size

diameter.messageBuffer.threadCount Number of threads for processing messages 0 0 to 2 * CPU

Number of threads to handle messages in the diameterproxy pod

If threadCount is 0, the application chooses the thread count based on the pod profile size

diameter.peer.setting Diameter peer setting

reconnectDelay: 3

responseTimeout: 4

connectionTimeOut: 3

watchdogInterval: 6

transport: 'TCP'

reconnectLimit: 50

Not Applicable
  1. Delay between Diameter reconnect attempts (in seconds).
  2. Total turnaround time for processing Diameter messages (in seconds).
  3. TCP connection timeout (in seconds).
  4. Interval between DWR/DWA watchdog messages (in seconds).
  5. Transport layer protocol.
  6. Number of reconnect attempts when a Diameter peer is down.
diameter.peer.nodes diameter server peer nodes list

- name: 'seagull'

responseOnly: false

namespace: 'seagull1'

host: '10.75.185.158'

domain: 'svc.cluster.local'

port: 4096

realm: 'seagull1.com'

identity: 'seagull1a.seagull1.com'

Not applicable

The Diameter server peer node information

* Must be a YAML list

* The default values are a template showing how to add peer nodes

diameter.peer.clientNodes diameter client peers

- identity: 'seagull1a.seagull1.com'

realm: 'seagull1.com'

- identity: 'seagull1.com'

realm: 'seagull1.com'

Not applicable

The Diameter client node information

* Must be a YAML list

* The default values are a template showing how to add peer nodes

service.customExtension.labels Custom labels that need to be added to the nudr-diameterproxy specific Service. null Not applicable This can be used to add custom label(s) to the nudr-diameterproxy Service.
service.customExtension.annotations Custom annotations that need to be added to the nudr-diameterproxy specific Service. null Not applicable This can be used to add custom annotation(s) to the nudr-diameterproxy Service.
deployment.customExtension.labels Custom labels that need to be added to the nudr-diameterproxy specific Deployment. null Not applicable This can be used to add custom label(s) to the nudr-diameterproxy Deployment.
deployment.customExtension.annotations Custom annotations that need to be added to the nudr-diameterproxy specific Deployment. null Not applicable This can be used to add custom annotation(s) to the nudr-diameterproxy Deployment.
readinessProbe.initialDelaySeconds Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter. 80

Not Applicable

Unit: Seconds

 
readinessProbe.periodSeconds Time interval between readiness probe checks. Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter. 5

Not Applicable

Unit: Seconds

 
livenessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first liveness probe.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

80

Not Applicable

Unit: Seconds

 
livenessProbe.periodSeconds

Time interval between liveness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

20

Not Applicable

Unit: Seconds

 

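As an illustration of the Diameter peer parameters above, a nudr-diameterproxy fragment of the ocudr-custom-values.yaml file might look as follows. The block name is an assumption, and the peer entries simply reproduce the documented template values; replace them with your own peers.

# Illustrative sketch only; block name assumed, peer entries are the documented template
nudr-diameterproxy:
  diameter:
    realm: 'oracle.com'              # Host realm of the diameterproxy
    identity: 'nudr.oracle.com'      # FQDN used in Diameter messages
    peer:
      setting:
        reconnectDelay: 3            # Seconds between reconnect attempts
        responseTimeout: 4           # Seconds allowed for processing a Diameter message
        connectionTimeOut: 3         # TCP connection timeout, in seconds
        watchdogInterval: 6          # DWR/DWA interval, in seconds
        transport: 'TCP'
        reconnectLimit: 50           # Reconnect attempts while a peer is down
      nodes:                         # Server peers; must be a YAML list
        - name: 'seagull'
          responseOnly: false
          namespace: 'seagull1'
          host: '10.75.185.158'
          domain: 'svc.cluster.local'
          port: 4096
          realm: 'seagull1.com'
          identity: 'seagull1a.seagull1.com'
      clientNodes:                   # Client peers; must be a YAML list
        - identity: 'seagull1a.seagull1.com'
          realm: 'seagull1.com'
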
The following table provides the parameters for the ocudr-ingressgateway microservice (API Gateway).

Parameter Description Default value Range or Possible Values (If applicable) Notes
global.type ocudr-ingressgateway service type LoadBalancer

Possible Values -

ClusterIP

NodePort

LoadBalancer

 
global.metalLbIpAllocationEnabled Enable or disable Address Pool for Metallb true true/false  
global.metalLbIpAllocationAnnotation Address Pool for Metallb metallb.universe.tf/address-pool: signaling Not applicable  
global.staticNodePortEnabled If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort false Not applicable  
global.istioIngressTlsSupport.ingressGateway When enabled, supports clear-text traffic from outside the cluster when Service Mesh is enabled. false true/false
image.name Docker image name ocudr/ocingress_gateway Not applicable  
image.tag Image version tag 1.8.1 Not applicable  
image.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
initContainersImage.name Docker Image name ocudr/configurationinit Not applicable  
initContainersImage.tag Image version tag 1.4.0 Not applicable  
initContainersImage.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
updateContainersImage.name Docker Image name ocudr/configurationupdate Not applicable  
updateContainersImage.tag Image version tag 1.4.0 Not applicable  
updateContainersImage.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
service.ssl.tlsVersion TLS version to be used TLSv1.2 Valid TLS version These are fixed service parameters
service.ssl.privateKey.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.privateKey.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.privateKey.rsa.fileName rsa private key stored in the secret rsa_private_key_pkcs1.pem Not applicable  
service.ssl.privateKey.ecdsa.fileName ecdsa private key stored in the secret ecdsa_private_key_pkcs8.pem Not applicable  
service.ssl.certificate.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.certificate.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.certificate.rsa.fileName rsa certificate stored in the secret apigatewayrsa.cer Not applicable  
service.ssl.certificate.ecdsa.fileName ecdsa certificate stored in the secret apigatewayecdsa.cer Not applicable  
service.ssl.caBundle.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.caBundle.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.caBundle.fileName ca Bundle stored in the secret caroot.cer Not applicable  
service.ssl.keyStorePassword.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.keyStorePassword.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.keyStorePassword.fileName keyStore password stored in the secret key.txt Not applicable  
service.ssl.trustStorePassword.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.trustStorePassword.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.trustStorePassword.fileName trustStore password stored in the secret trust.txt Not applicable  
service.initialAlgorithm

Algorithm to be used

ES256 can also be used, but corresponding certificates need to be used.

RSA256 RSA256/ES256  
resources.limits.cpu Cpu allotment limitation 5 Not applicable  
resources.limits.memory Memory allotment limitation 4Gi Not applicable  
resources.limits.initServiceCpu Maximum amount of CPU that Kubernetes will allow the ingress-gateway init container to use. 1 Not Applicable  
resources.limits.initServiceMemory Memory Limit for ingress-gateway init container 1Gi Not Applicable  
resources.limits.updateServiceCpu Maximum amount of CPU that Kubernetes will allow the ingress-gateway update container to use. 1 Not Applicable  
resources.limits.updateServiceMemory Memory Limit for ingress-gateway update container 1Gi Not Applicable  
resources.requests.cpu Cpu allotment for ocudr-endpoint pod 5 Not Applicable  
resources.requests.memory Memory allotment for ocudr-endpoint pod 4Gi Not Applicable  
resources.requests.initServiceCpu The amount of CPU that the system guarantees for the ingress-gateway init container, and Kubernetes uses this value to decide on which node to place the pod.   Not Applicable  
resources.requests.initServiceMemory The amount of memory that the system will guarantee for the ingress-gateway init container, and Kubernetes will use this value to decide on which node to place the pod   Not Applicable  
resources.requests.updateServiceCpu The amount of CPU that the system will guarantee for the ingress-gateway update container, and Kubernetes will use this value to decide on which node to place the pod.   Not Applicable  
resources.requests.updateServiceMemory The amount of memory that the system will guarantee for the ingress-gateway update container, and Kubernetes will use this value to decide on which node to place the pod.   Not Applicable  
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not Applicable  
minAvailable Number of pods always running 2 Not Applicable  
minReplicas Min replicas to scale to maintain an average CPU utilization 2 Not applicable  
maxReplicas Max replicas to scale to maintain an average CPU utilization 5 Not applicable  
log.level.root Logs to be shown on ocudr-endpoint pod WARN valid level  
log.level.ingress Logs to be shown on ocudr-ingressgateway pod for ingress related flows INFO valid level  
log.level.oauth Logs to be shown on ocudr-ingressgateway pod for oauth related flows INFO valid level  
initssl To Initialize SSL related infrastructure in init/update container false Not Applicable  
jaegerTracingEnabled Enable/Disable Jaeger Tracing false true/false  
openTracing.jaeger.udpSender.host Jaeger agent service FQDN occne-tracer-jaeger-agent.occne-infra Valid FQDN  
openTracing.jaeger.udpSender.port Jaeger agent service UDP port 6831 Valid Port  
openTracing.jaeger.probabilisticSampler Probabilistic Sampler on Jaeger 0.5 Range: 0.0 - 1.0 Sampler makes a random sampling decision with the probability of sampling. For example, if the value set is 0.1, approximately 1 in 10 traces will be sampled
  Supported cipher suites for ssl
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 - TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 - TLS_DHE_RSA_WITH_AES_256_CCM
 - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
 - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Not applicable  
oauthValidatorEnabled Flag to enable OAuth access token validation false Not Applicable
nfType NFType of service producer UDR Not Applicable Mandatory when oauthValidatorEnabled is true
producerScope Comma-separated list of services hosted by the service producer. nudr-dr,nudr-group-id-map Valid service list Mandatory when oauthValidatorEnabled is true
allowedClockSkewSeconds Set this value if the clock on the parsing NF (producer) is not perfectly in sync with the clock on the NF (consumer) that created the JWT. 0 Unit: Seconds Mandatory when oauthValidatorEnabled is true
nrfPublicKeyKubeSecret Name of the secret which stores the public key(s) of NRF. oauthsecret Not Applicable Mandatory when oauthValidatorEnabled is true
nrfPublicKeyKubeNamespace Namespace of the NRF public key secret ocudr Not Applicable Mandatory when oauthValidatorEnabled is true
validationType Values can be "strict" or "relaxed". "strict" means that incoming requests without an "Authorization" (access token) header are rejected. "relaxed" means that if an incoming request contains an "Authorization" header, it is validated; if it does not, validation is skipped. strict strict/relaxed Mandatory when oauthValidatorEnabled is true
producerPlmnMNC MNC of service producer 14 Valid MNC  
producerPlmnMCC MCC of service producer 310 Valid MCC  
enableIncomingHttp Flag to enable accepting of http requests true Not Applicable
enableIncomingHttps Flag to enable accepting of https requests false true or false
enableOutgoingHttps Flag to enable sending of https requests false true or false
maxRequestsQueuedPerDestination Queue Size at the ocudr-endpoint pod 5000 Not Applicable  
maxConnectionsPerIp Connections from endpoint to other microServices 10 Not Applicable  
serviceMeshCheck If false, load balancing is handled by the ingress gateway; if true, it is handled by the service mesh false true/false
routesConfig Routes configured to connect to different micro services of UDR
- id: traffic_mapping_http
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudr-dr/**
  order: 1
- id: traffic_mapping_http_prov
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudr-dr-prov/**
  order: 2
- id: traffic_mapping_http_mgmt
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudr-dr-mgm/**
  order: 3
- id: traffic_mapping_http_udsf
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudsf-dr/**
  order: 4
- id: traffic_mapping_http_group
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudr-group-id-map/**
  order: 5
- id: traffic_mapping_http_group_prov
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /nudr-group-id-map-prov/**
  order: 6
- id: traffic_mapping_http_slf_group_prov
  uri: http://{{ .Release.Name }}-nudr-drservice:5001
  path: /slf-group-prov/**
  order: 7
Not Applicable
service.customExtension.labels Custom labels that need to be added to the ingressgateway specific service. null Not Applicable This can be used to add custom label(s) to the ingressgateway service.
service.customExtension.annotations Custom annotations that need to be added to the ingressgateway specific service. null Not Applicable This can be used to add custom annotation(s) to the ingressgateway service.
deployment.customExtension.labels Custom labels that need to be added to the ingressgateway specific deployment. null Not Applicable This can be used to add custom label(s) to the ingressgateway deployment.
deployment.customExtension.annotations Custom annotations that need to be added to the ingressgateway specific deployment. null Not Applicable This can be used to add custom annotation(s) to the ingressgateway deployment.
readinessProbe.initialDelaySeconds Configurable wait time before the kubelet performs the first readiness probe

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.
30

Not Applicable

Unit: Seconds

 
readinessProbe.periodSeconds

Time interval between readiness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

10

Not Applicable

Unit: Seconds

 
readinessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.
1 Not Applicable  
readinessProbe.failureThreshold

When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first liveness probe.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

30

Not Applicable

Unit: Seconds

 
livenessProbe.periodSeconds

Time interval between liveness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

15

Not Applicable

Unit: Seconds

 
livenessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.

1 Not Applicable  
livenessProbe.failureThreshold

When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

Note: Do not change this default value.

3 Not Applicable  

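To terminate HTTPS at the ingress gateway, several of the parameters above are set together. The following sketch uses the documented defaults for the secret and file names; the block name ingressgateway and the decision to enable HTTPS are assumptions for this example.

# Illustrative sketch only; block name assumed, secret and file names are the documented defaults
ingressgateway:
  initssl: true                      # Initialize SSL infrastructure in the init/update containers
  enableIncomingHttps: true          # Assumed enabled for this example; default is false
  service:
    ssl:
      tlsVersion: TLSv1.2
      privateKey:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa:
          fileName: rsa_private_key_pkcs1.pem
      certificate:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa:
          fileName: apigatewayrsa.cer
      caBundle:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        fileName: caroot.cer
      keyStorePassword:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        fileName: key.txt
      trustStorePassword:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        fileName: trust.txt
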
The following table provides the parameters for the ocudr-egressgateway microservice (API Gateway).

Parameter Description Default value Range or Possible Values (If applicable) Notes
enabled Configuration flag to enable/disable egress gateway true true/false  
image.name Docker image name ocudr/ocegress_gateway Not applicable  
image.tag Image version tag 1.8.1 Not applicable  
image.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
initContainersImage.name Docker Image name ocudr/configurationinit Not applicable  
initContainersImage.tag Image version tag 1.4.0 Not applicable  
initContainersImage.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
updateContainersImage.name Docker Image name ocudr/configurationupdate Not applicable  
updateContainersImage.tag Image version tag 1.4.0 Not applicable  
updateContainersImage.pullPolicy Indicates whether the image should be pulled Always

Possible Values -

Always

IfNotPresent

Never

 
resources.limits.cpu Cpu allotment limitation 3 Not applicable  
resources.limits.memory Memory allotment limitation 4Gi Not applicable  
resources.limits.initServiceCpu Maximum amount of CPU that Kubernetes will allow the egress-gateway init container to use. 1 Not applicable  
resources.limits.initServiceMemory Memory Limit for egress-gateway init container 1Gi Not applicable  
resources.limits.updateServiceCpu Maximum amount of CPU that Kubernetes will allow the egress-gateway update container to use. 1 Not applicable  
resources.limits.updateServiceMemory Memory Limit for egress-gateway update container 1Gi Not applicable  
resources.requests.cpu Cpu allotment for ocudr-egressgateway pod 3 Not applicable  
resources.requests.memory Memory allotment for ocudr-egressgateway pod 4Gi Not applicable
resources.requests.initServiceCpu The amount of CPU that the system will guarantee for the egress-gateway init container, and Kubernetes will use this value to decide on which node to place the pod   Not Applicable  
resources.requests.initServiceMemory The amount of memory that the system will guarantee for the egress-gateway init container, and Kubernetes will use this value to decide on which node to place the pod   Not Applicable  
resources.requests.updateServiceCpu The amount of CPU that the system will guarantee for the egress-gateway update container, and Kubernetes will use this value to decide on which node to place the pod.   Not Applicable  
resources.requests.updateServiceMemory The amount of memory that the system will guarantee for the egress-gateway update container, and Kubernetes will use this value to decide on which node to place the pod.   Not Applicable  
resources.target.averageCpuUtil CPU utilization limit for autoscaling 80 Not applicable  
service.ssl.tlsVersion TLS version to be used TLSv1.2 Valid TLS version These are fixed service parameters
service.initialAlgorithm

Algorithm to be used

ES256 can also be used, but corresponding certificates need to be used.

RSA256 RSA256/ES256  
service.ssl.privateKey.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.privateKey.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.privateKey.rsa.fileName rsa private key stored in the secret rsa_private_key_pkcs1.pem Not applicable  
service.ssl.privateKey.ecdsa.fileName ecdsa private key stored in the secret ecdsa_private_key_pkcs8.pem Not applicable  
service.ssl.certificate.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.certificate.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.certificate.rsa.fileName rsa certificate stored in the secret apigatewayrsa.cer Not applicable  
service.ssl.certificate.ecdsa.fileName ecdsa certificate stored in the secret apigatewayecdsa.cer Not applicable  
service.ssl.caBundle.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.caBundle.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.caBundle.fileName ca Bundle stored in the secret caroot.cer Not applicable  
service.ssl.keyStorePassword.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.keyStorePassword.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.keyStorePassword.fileName keyStore password stored in the secret key.txt Not applicable  
service.ssl.trustStorePassword.k8SecretName name of the secret which stores keys and certificates ocudr-gateway-secret Not applicable  
service.ssl.trustStorePassword.k8NameSpace namespace in which secret is created ocudr Not applicable  
service.ssl.trustStorePassword.fileName trustStore password stored in the secret trust.txt Not applicable  
minAvailable Number of pods always running 1 Not Applicable  
minReplicas Min replicas to scale to maintain an average CPU utilization 1 Not applicable  
maxReplicas Max replicas to scale to maintain an average CPU utilization 4 Not applicable  
log.level.root Logs to be shown on ocudr-egressgateway pod WARN valid level  
log.level.egress Logs to be shown on ocudr-egressgateway pod for egress related flows INFO valid level  
log.level.oauth Logs to be shown on ocudr-egressgateway pod for oauth related flows INFO valid level  
fullnameOverride Name to be used for deployment ocudr-egressgateway Not applicable This config is commented by default.
initssl To Initialize SSL related infrastructure in init/update container false Not Applicable  
jaegerTracingEnabled Enable/Disable Jaeger Tracing false true/false  
openTracing.jaeger.udpSender.host Jaeger agent service FQDN occne-tracer-jaeger-agent.occne-infra Valid FQDN  
openTracing.jaeger.udpSender.port Jaeger agent service UDP port 6831 Valid Port  
openTracing.jaeger.probabilisticSampler Probabilistic Sampler on Jaeger 0.5 Range: 0.0 - 1.0 Sampler makes a random sampling decision with the probability of sampling. For example, if the value set is 0.1, approximately 1 in 10 traces will be sampled.
enableOutgoingHttps Flag to enable sending of https requests false true or false
oauthClient.enabled Enable if oauth is required false true or false Enable based on Oauth configuration
oauthClient.dnsSrvEnabled DNS SRV Enabled for oAuth false true/false  
oauthClient.httpsEnabled Determines whether HTTPS support is enabled, which decides the OAuth request scheme and the search query parameter in the DNS-SRV request false true/false
oauthClient.virtualFqdn virtualFqdn value to be populated and sent in the DNS-SRV query. localhost:port   Mandatory if oauthClient.dnsSrvEnabled is true
oauthClient.staticNrfList List of static NRFs - localhost:port   Mandatory if oauthClient.enabled is true
oauthClient.nfType NFType of service consumer. UDR Not Applicable Mandatory if oauthClient.enabled is true
oauthClient.consumerPlmnMNC MNC of service Consumer. 14 Valid MNC  
oauthClient.consumerPlmnMCC MCC of service Consumer. 310 Valid MCC  
oauthClient.maxRetry Maximum number of retries to be performed towards other NRF FQDNs when the first contacted NRF returns a failure response matching the configured errorCodeSeries. 2 Valid Number Mandatory if oauthClient.enabled is true
oauthClient.apiPrefix apiPrefix to be appended in the OAuth request flow. "" Valid String Mandatory if oauthClient.enabled is true
oauthClient.errorCodeSeries Determines the fallback condition to another NRF in case of a failure response from the currently contacted NRF. 4XX Valid series Mandatory if oauthClient.enabled is true and a different error code series is required
oauthClient.retryAfter RetryAfter value in milliseconds to be set for a particular NRF FQDN when the error matches the configured errorCodeSeries. 5000 Unit: Milliseconds Mandatory if oauthClient.enabled is true
maxConcurrentPushedStreams Jetty client configuration 1000 Valid Number  
maxRequestsQueuedPerDestination Jetty client configuration 1024 Valid Number  
maxConnectionsPerIp Max Connections allowed per Ip 4 Valid Number  
connectionTimeout Connection timeout in milliseconds 10000

Unit: Milliseconds

 
requestTimeout Request timeout in milliseconds 1000

Unit: Milliseconds

 
jettyIdleTimeout Jetty idle timeout in milliseconds 0

Unit: Milliseconds

A value of 0 or less makes the timeout infinite

 
k8sServiceCheck Enable this if load balancing is to be done by the egress gateway instead of Kubernetes false true/false
service.customExtension.labels Custom labels that need to be added to the egressgateway specific Service. null Not applicable This can be used to add custom label(s) to the egressgateway Service.
service.customExtension.annotations Custom annotations that need to be added to the egressgateway specific Service. null Not applicable This can be used to add custom annotation(s) to the egressgateway Service.
deployment.customExtension.labels Custom labels that need to be added to the egressgateway specific Deployment. null Not applicable This can be used to add custom label(s) to the egressgateway Deployment.
deployment.customExtension.annotations Custom annotations that need to be added to the egressgateway specific Deployment. null Not applicable This can be used to add custom annotation(s) to the egressgateway deployment.
readinessProbe.initialDelaySeconds Configurable wait time before the kubelet performs the first readiness probe

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.
30

Not Applicable

Unit: Seconds

 
readinessProbe.periodSeconds

Time interval between readiness probe checks.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

10

Not Applicable

Unit: Seconds

 
readinessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
readinessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.

1 Not Applicable  
readinessProbe.failureThreshold

When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.initialDelaySeconds

Configurable wait time before the kubelet performs the first liveness probe.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

30

Not Applicable

Unit: Seconds

 
livenessProbe.periodSeconds

Time interval for every liveness probe check.

Note: Do not change this value. If the pod is slow to come up and the probe is killing it, consider tuning this parameter.

15

Not Applicable

Unit: Seconds

 
livenessProbe.timeoutSeconds

Number of seconds after which the probe times out

Note: Do not change this default value.

3 Not Applicable  
livenessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed

Note: Do not change this default value.

1 Not Applicable  
livenessProbe.failureThreshold When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Note: Do not change this default value. 3 Not Applicable
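
When OAuth access tokens must be obtained from NRF, the oauthClient parameters above work together. The following sketch shows a minimal combination; the block name egressgateway is an assumption, oauthClient.enabled is turned on only for this example (its default is false), and the staticNrfList entry must be replaced with reachable NRF addresses.

# Illustrative sketch only; block name assumed, oauthClient enabled only for this example
egressgateway:
  oauthClient:
    enabled: true                    # Default is false; enable only if OAuth is required
    dnsSrvEnabled: false
    httpsEnabled: false
    staticNrfList:
      - localhost:port               # Replace with actual NRF FQDN:port entries
    nfType: UDR
    consumerPlmnMNC: 14
    consumerPlmnMCC: 310
    maxRetry: 2                      # Retries towards other NRFs on matching failures
    apiPrefix: ""
    errorCodeSeries: 4XX             # Fall back to the next NRF on these error codes
    retryAfter: 5000                 # Milliseconds before retrying a failed NRF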