2 Configuring User Parameters
The UDR microservices provide configuration options that can be set through the deployment values.yaml file.
Note: The default value of some of the settings may change.
Note:
- NAME: is the release name used in helm install command
- NAMESPACE: is the namespace used in helm install command
- K8S_DOMAIN: is the default kubernetes domain (svc.cluster.local)
Default Helm Release Name: ocudr
Global Configuration: These parameters are defined under the global section of values.yaml and apply across the entire OCUDR deployment.
The following table provides the parameters for global configuration.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
dockerRegistry | Docker registry from where the images will be pulled | ocudr-registry.us.oracle.com:5000 | Not applicable | |
mysql.dbServiceName | DB service to connect | mysql-connectivity-service.occne-infra | Not applicable | This is a CNE service used for db connection. Default name used on CNE is the same as configured. |
mysql.port | Port for DB Service Connection | 3306 | Not applicable | |
udrTracing.enable | Flag to enable udr tracing on Jaeger | false | true/false | |
udrTracing.host | Jaeger Service Name installed in CNE | occne-tracer-jaeger-collector.occne-infra | Not applicable | |
udrTracing.port | Jaeger Service Port installed in CNE | 14268 | Not applicable | |
dbenc.shavalue | Encryption Key size | 256 | 256 or 512 | |
serviceAccountName | Service account name | null | Not Applicable | The service account, role, and role bindings required for deployment should be created prior to installation. Use the created service account name here. |
egress.enabled | Flag to enable outgoing traffic through egress gateway | true | true/false | |
configServerEnable | Flag to enable config-server | true | true/false | |
initContainerEnable | Flag to enable the init container for config-server. This is not required because the pre-install hooks handle DB table creation and also verify connectivity. | false | true/false | |
dbCredSecretName | DB Credential Secret Name | ocudr-secrets | Not Applicable | |
configServerFullNameOverride | Config Server Full Name Override | nudr-config-server | Not Applicable | |
udrServices | Services supported on the UDR deployment. This configuration decides the schema execution on the udrdb, which is performed by the nudr-preinstall hook pod. | All | All/nudr-dr/nudr-group-id-map | For SLF, set udrServices to nudr-group-id-map. |
udsfEnable | Flag to enable UDSF services on the deployment | false | true/false | |
publicHttpSignalingPort | Port on which ingressgateway listens for incoming http requests. | 80 | Valid Port | |
publicHttpsSignallingPort | Port on which ingressgateway listens for incoming https requests. | 443 | Valid Port | |
nfInstanceId | NF Instance ID for UDR (the same ID is registered with NRF) | 5a7bd676-ceeb-44bb-95e0-f6a55a328b03 | Valid UUID | A valid UUID is a 128-bit unique number that helps identify information in computer systems. |
test.nfName | NF name on which the helm test is performed. This value is used as a suffix in the test container name. | ocudr | Not applicable | |
test.image.name | Image name for the helm test container image | ocudr/nf_test | Not Applicable | |
test.image.tag | Image version tag for helm test | 1.8.0 | Not Applicable | |
test.config.logLevel | Log level for helm test pod | WARN | Possible Values - WARN/INFO/DEBUG | |
test.config.timeout | Timeout value for the helm test operation. If exceeded, the helm test is considered failed. | 120 | Range: 1-300 Unit: Seconds | |
preinstall.image.name | Image name for the nudr-prehook pod, which handles DB and table creation for the UDR deployment. | ocudr/prehook | Not Applicable | |
preinstall.image.tag | Image version for nudr-prehook pod image | 1.8.0 | Not Applicable | |
preinstall.config.logLevel | Log level for preinstall hook pod | WARN | Possible Values - WARN/INFO/DEBUG | |
hookJobResources.limits.cpu | CPU limit for the Kubernetes hook/job pods created as part of UDR installation. Applicable for the helm test job as well. | 2 | Not Applicable | |
hookJobResources.limits.memory | Memory limit for the Kubernetes hook/job pods created as part of UDR installation. Applicable for the helm test job as well. | 2Gi | Not Applicable | |
hookJobResources.requests.cpu | CPU requests for the Kubernetes hook/job pods created as part of UDR installation. Applicable for the helm test job as well. | 1 | Not Applicable | The CPU to be allocated for hooks during deployment |
hookJobResources.requests.memory | Memory requests for the Kubernetes hook/job pods created as part of UDR installation. Applicable for the helm test job as well. | 1Gi | Not Applicable | The memory to be allocated for hooks during deployment |
customExtension.allResources.labels | Custom Labels that need to be added to all OCUDR Kubernetes resources | null | Not Applicable | This can be used to add custom label(s) to all Kubernetes resources created by the OCUDR helm chart. |
customExtension.allResources.annotations | Custom Annotations that need to be added to all OCUDR Kubernetes resources | null | Not Applicable Note: ASM-related annotations need to be added under the ASM Specific Configuration section | This can be used to add custom annotation(s) to all Kubernetes resources created by the OCUDR helm chart. |
customExtension.lbServices.labels | Custom Labels that need to be added to OCUDR Services of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Load Balancer type Services created by the OCUDR helm chart. |
customExtension.lbServices.annotations | Custom Annotations that need to be added to OCUDR Services of Load Balancer type | null | Not Applicable | This can be used to add custom annotation(s) to all Load Balancer type Services created by the OCUDR helm chart. |
customExtension.lbDeployments.labels | Custom Labels that need to be added to OCUDR Deployments associated with a Service of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type. |
customExtension.lbDeployments.annotations | Custom Annotations that need to be added to OCUDR Deployments associated with a Service of Load Balancer type | null | Not Applicable Note: ASM-related annotations need to be added under the ASM Specific Configuration section | This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service of Load Balancer type. |
customExtension.nonlbServices.labels | Custom Labels that need to be added to OCUDR Services that are not of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all non-Load Balancer type Services created by the OCUDR helm chart. |
customExtension.nonlbServices.annotations | Custom Annotations that need to be added to OCUDR Services that are not of Load Balancer type | null | Not Applicable | This can be used to add custom annotation(s) to all non-Load Balancer type Services created by the OCUDR helm chart. |
customExtension.nonlbDeployments.labels | Custom Labels that need to be added to OCUDR Deployments associated with a Service that is not of Load Balancer type | null | Not Applicable | This can be used to add custom label(s) to all Deployments created by the OCUDR helm chart that are associated with a Service that is not of Load Balancer type. |
customExtension.nonlbDeployments.annotations | Custom Annotations that need to be added to OCUDR Deployments associated with a Service that is not of Load Balancer type | null | Not Applicable Note: ASM-related annotations need to be added under the ASM Specific Configuration section | This can be used to add custom annotation(s) to all Deployments created by the OCUDR helm chart that are associated with a Service that is not of Load Balancer type. |
k8sResource.container.prefix | Value to be prefixed to all OCUDR container names. | null | Not Applicable | This value is used as a prefix for all OCUDR container names. |
k8sResource.container.suffix | Value to be suffixed to all OCUDR container names. | null | Not Applicable | This value is used as a suffix for all OCUDR container names. |
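For reference, the global parameters map onto the global block of the deployment values.yaml. The following is a minimal sketch, assuming the key nesting follows the parameter paths in the table above; values shown are the documented defaults and should be adjusted per site.

```yaml
# Minimal sketch of the global section of values.yaml.
# Key nesting is assumed to follow the parameter paths in the table above.
global:
  dockerRegistry: ocudr-registry.us.oracle.com:5000
  mysql:
    dbServiceName: mysql-connectivity-service.occne-infra
    port: 3306
  udrTracing:
    enable: false
    host: occne-tracer-jaeger-collector.occne-infra
    port: 14268
  udrServices: All                 # set to nudr-group-id-map for SLF
  publicHttpSignalingPort: 80
  publicHttpsSignallingPort: 443
  nfInstanceId: 5a7bd676-ceeb-44bb-95e0-f6a55a328b03
  customExtension:
    allResources:
      labels: {}                   # default null; custom labels for all resources
      annotations: {}              # default null; ASM annotations go under ASM Specific Configuration
```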
The following table provides the parameters for the nudr-drservice microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
image.name | Docker Image name | ocudr/nudr_datarepository_service | Not applicable | |
image.tag | Tag of Image | 1.8.0 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
subscriber.autocreate | Flag to enable auto creation of subscriber | true | true/false | This flag enables auto creation of a subscriber when creating data for a non-existent subscriber. |
validate.smdata | Flag to enable correlation feature for smdata | false | true/false | This flag controls the correlation feature for smdata. This flag must be false if using v16.2.0 for PCF data. |
logging.level.root | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | Log level of the nudr-drservice pod |
deployment.replicaCount | Replicas of nudr-drservice pod | 2 | Not applicable | Number of nudr-drservice pods to be maintained by replica set created with deployment |
minReplicas | Minimum Replicas | 2 | Not applicable | Minimum number of pods |
maxReplicas | Maximum Replicas | 8 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: It is suggested to keep the default value (ClusterIP). |
service.port.http | HTTP port | 5001 | Not applicable | The http port to be used in nudr-drservice service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-drservice service |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-drservice service |
resources.requests.cpu | Cpu Allotment for nudr-drservice pod | 3 | Not applicable | The cpu to be allocated for nudr-drservice pod during deployment |
resources.requests.memory | Memory allotment for nudr-drservice pod | 4Gi | Not applicable | The memory to be allocated for nudr-drservice pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
notify.port.http | HTTP port on which notify service is running | 5001 | Not applicable | |
notify.port.https | HTTPS port on which notify service is running | 5002 | Not applicable | |
hikari.poolsize | Mysql Connection pool size | 25 | Not applicable | The hikari pool connection size to be created at start up |
vsaLevel | The data level at which the VSA holding the 4G Policy data is added. | smpolicy | Not applicable | |
vsaBillingDay | The Billing day value | 0 | Not applicable | |
tracingEnabled | Flag to enable/disable jaeger tracing for nudr-drservice | false | true/false | |
service.customExtension.labels | Custom Labels that need to be added to the nudr-drservice specific Service. | null | Not Applicable | This can be used to add custom label(s) to the nudr-drservice Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the nudr-drservice specific Service. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-drservice Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the nudr-drservice specific Deployment. | null | Not Applicable | This can be used to add custom label(s) to the nudr-drservice Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the nudr-drservice specific Deployment. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-drservice Deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 70 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 70 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
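For reference, a minimal values.yaml override for this microservice might look like the sketch below; the top-level section name (nudr-drservice) is an assumption based on the service name, and the nesting follows the parameter paths in the table.

```yaml
# Sketch of a nudr-drservice override; section name is assumed.
nudr-drservice:
  image:
    name: ocudr/nudr_datarepository_service
    tag: 1.8.0
    pullPolicy: Always
  logging:
    level:
      root: WARN                   # WARN/INFO/DEBUG
  deployment:
    replicaCount: 2
  minReplicas: 2
  maxReplicas: 8
  service:
    type: ClusterIP
    port:
      http: 5001
      https: 5002
      management: 9000
  resources:
    requests: {cpu: 3, memory: 4Gi}
    limits: {cpu: 3, memory: 4Gi}
    target: {averageCpuUtil: 80}   # HPA scaling trigger
  hikari:
    poolsize: 25
```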
The following table provides the parameters for the nudr-notify-service microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable or disable nudr-notify-service | true | true/false | For SLF deployment, this microservice must be disabled. |
image.name | Docker Image name | ocudr/nudr_notify_service | Not applicable | |
image.tag | Tag of Image | 1.8.0 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
notification.retrycount | Number of notification attempts | 3 | Range: 1 - 10 | Number of notification attempts to be made in case of notification failures. Whether a retry is performed is based on the notification.retryerrorcodes configuration. |
notification.retryinterval | Retry interval between notification attempts | 5 | Range: 1 - 60 Unit: Seconds | The retry interval for notifications in case of failure, in seconds. Whether a retry is performed is based on the notification.retryerrorcodes configuration. |
notification.retryerrorcodes | Notification failures eligible for retry | "400,429,500,503" | Valid HTTP status codes, comma-separated | Comma-separated error codes should be given. These error codes are eligible for retry notifications in case of failures. |
hikari.poolsize | Mysql Connection pool size | 10 | Not applicable | The hikari pool connection size to be created at start up |
tracingEnabled | Flag to enable/disable jaeger tracing for nudr-notify-service | false | true/false | |
http.proxy.port | Port to connect to egress gateway | 8080 | Not applicable | |
logging.level.root | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | Log level of the notify service pod |
deployment.replicaCount | Replicas of nudr-notify-service pod | 2 | Not applicable | Number of nudr-notify-service pods to be maintained by replica set created with deployment |
minReplicas | Minimum Replicas | 2 | Not applicable | Minimum number of pods |
maxReplicas | Maximum Replicas | 4 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag | true | true/false | This is a read only parameter. Do not change this value |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: It is suggested to keep the default value (ClusterIP). |
service.port.http | HTTP port | 5001 | Not applicable | The HTTP port to be used by the nudr-notify-service. |
service.port.https | HTTPS port | 5002 | Not applicable | The HTTPS port to be used by the nudr-notify-service. |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for notify service. |
resources.requests.cpu | Cpu Allotment for nudr-notify-service pod | 3 | Not applicable | The cpu to be allocated for notify service pod during deployment |
resources.requests.memory | Memory allotment for nudr-notify-service pod | 3Gi | Not applicable | The memory to be allocated for nudr-notify-service pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 3Gi | Not applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
service.customExtension.labels | Custom Labels that need to be added to the nudr-notify-service specific Service. | null | Not Applicable | This can be used to add custom label(s) to the nudr-notify-service Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the nudr-notify-service specific Service. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-notify-service Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the nudr-notify-service specific Deployment. | null | Not Applicable | This can be used to add custom label(s) to the nudr-notify-service Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the nudr-notify-service specific Deployment. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-notify-service Deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 80 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 5 | Not Applicable Unit: Seconds | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 80 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 20 | Not Applicable Unit: Seconds | |
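The retry behavior described above can be tuned together, as in the sketch below; the top-level section name (nudr-notify-service) is an assumption based on the service name.

```yaml
# Sketch of notification retry tuning; section name is assumed.
nudr-notify-service:
  enabled: true                        # set to false for SLF deployments
  notification:
    retrycount: 3                      # attempts on failure (1-10)
    retryinterval: 5                   # seconds between attempts (1-60)
    retryerrorcodes: "400,429,500,503" # only these HTTP codes trigger a retry
  hikari:
    poolsize: 10
```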
The following table provides the parameters for the nudr-nrf-client-service microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable or disable nudr-nrf-client-service | true | true/false | |
host.baseurl | NRF URL for registration | http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances | Not applicable | URL used by UDR to connect to and register with NRF |
host.proxy | Proxy Setting | NULL | nrfClient.host | Proxy setting, if required, to connect to NRF |
ssl | SSL flag | false | true/false | SSL flag to enable SSL on the UDR NRF client pod |
logging.level.root | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | Log level of the UDR NRF client pod |
image.name | Docker Image name | ocudr/nudr_nrf_client_service | Not applicable | |
image.tag | Tag of Image | 1.8.0 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
heartBeatTimer | Heart beat timer | 90 | Unit: Seconds | |
udrGroupId | Group ID of UDR | udr-1 | Not applicable | |
capacityMultiplier | Capacity of UDR | 500 | Not applicable | Capacity multiplier of UDR based on number of UDR pods running |
supirange | SUPI range supported by UDR | [{"start": "10000000000", "end": "20000000000"}] | Valid start and end SUPI range | |
priority | Priority | 10 | Valid number | Priority to be sent in the registration request |
fqdn | UDR FQDN | ocudr-ingressgateway.myudr.svc.cluster.local | Not Applicable | FQDN to be used for registering with NRF so that other NFs can connect to UDR. Note: Be cautious when updating this value. Consider the helm release name, the namespace used for the UDR deployment, and the name resolution settings in Kubernetes. |
gpsirange | GPSI range supported by UDR | [{"start": "10000000000", "end": "20000000000"}] | Valid start and end GPSI range | |
livenessProbeMaxRetry | Maximum number of retries on liveness probe failure | 5 | Valid number | Change this based on how many times you want to retry if the liveness probe fails |
udrMasterIpv4 | Master IP of the cluster where UDR is deployed | 10.0.0.0 | Valid IPv4 address of the deployment's master node | udrMasterIpv4 is used to send the IPv4 address to NRF during registration. |
plmnvalues | PLMN value ranges supported by UDR | [{"mnc": "14", "mcc": "310"}] | Valid PLMN values for the supported range | PLMN values are sent to NRF during registration from UDR. |
scheme | Scheme supported by UDR | http | http/https | Scheme sent to NRF during registration |
resources.requests.cpu | CPU allotment for nudr-nrf-client-service pod | 1 | Not applicable | The CPU to be allocated for the NRF client service pod during deployment |
resources.requests.memory | Memory allotment for nudr-nrf-client-service pod | 2Gi | Not applicable | The memory to be allocated for the NRF client service pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 1 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable | |
http.proxy.port | Port to connect to egress gateway | 8080 | Not applicable | |
service.customExtension.labels | Custom Labels that need to be added to the nudr-nrf-client specific Service. | null | Not Applicable | This can be used to add custom label(s) to the nudr-nrf-client Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the nudr-nrf-client specific Service. | null | Not Applicable | This can be used to add custom annotation(s) to the nudr-nrf-client Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the nudr-nrf-client specific Deployment. | null | Not Applicable | This can be used to add custom label(s) to the nudr-nrf-client Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the nudr-nrf-client specific Deployment. | null | Not Applicable Note: ASM-related annotations need to be added under the ASM Specific Configuration section | This can be used to add custom annotation(s) to the nudr-nrf-client Deployment. |
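The registration-related parameters above are typically set together; the sketch below shows the shape, with the section name (nudr-nrf-client-service) assumed from the service name and the range values quoted as strings, matching the escaped defaults in the table.

```yaml
# Sketch of NRF registration settings; section name is assumed.
nudr-nrf-client-service:
  enabled: true
  host:
    baseurl: http://ocnrf-ingressgateway.mynrf.svc.cluster.local/nnrf-nfm/v1/nf-instances
  heartBeatTimer: 90                 # seconds
  udrGroupId: udr-1
  capacityMultiplier: 500
  supirange: '[{"start": "10000000000", "end": "20000000000"}]'
  gpsirange: '[{"start": "10000000000", "end": "20000000000"}]'
  plmnvalues: '[{"mnc": "14", "mcc": "310"}]'
  fqdn: ocudr-ingressgateway.myudr.svc.cluster.local
  udrMasterIpv4: 10.0.0.0            # replace with the deployment's master IP
  scheme: http
```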
The following table provides the parameters for the nudr-config microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable or disable the nudr-config service | true | true/false | |
logging.level.root | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | Log level of the nudr-config pod |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
image.name | Docker Image name | ocudr/nudr_config | Not applicable | |
service.customExtension.labels | Custom Labels that need to be added to the nudr-config specific Service. | null | Not applicable | This can be used to add custom label(s) to the nudr-config Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the nudr-config specific Service. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-config Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the nudr-config specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the nudr-config Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the nudr-config specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-config Deployment. |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: It is suggested to keep the default value (ClusterIP). |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-config service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-config service |
service.port.http | HTTP port | 5001 | Not applicable | The http port to be used in nudr-config service |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
resources.requests.memory | Memory allotment for nudr-config pod | 2Gi | Not applicable | The memory to be allocated for the nudr-config pod during deployment |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable | |
resources.requests.cpu | CPU allotment for nudr-config pod | 2 | Not applicable | The CPU to be allocated for the nudr-config pod during deployment |
resources.limits.cpu | CPU allotment limitation | 2 | Not applicable | |
image.tag | Tag of Image | 1.8.0 | Not applicable | |
deployment.replicaCount | Replicas of nudr-config pod | 1 | Not applicable | Number of nudr-config pods to be maintained by replica set created with deployment |
minReplicas | Minimum Replicas | 1 | Not applicable | Minimum number of pods |
maxReplicas | Maximum Replicas | 1 | Not applicable | Maximum number of pods |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 30 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 5 | Not Applicable Unit: Seconds | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 40 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
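A minimal override for this microservice might look like the sketch below; the top-level section name (nudr-config) is assumed from the service name.

```yaml
# Sketch of nudr-config overrides; section name is assumed.
nudr-config:
  enabled: true
  logging:
    level:
      root: WARN
  service:
    type: ClusterIP
    port: {http: 5001, https: 5002, management: 9000}
  deployment:
    replicaCount: 1        # min and max replicas also default to 1
```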
The following table provides the parameters for the nudr-config-server microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable/disable nudr-config-server service | true | true/false | |
global.nfName | NF name used with the config server service name. | nudr | Not applicable | |
global.imageServiceDetector | Image Service Detector for config-server init container | ocudr/readiness-detector:1.7.1 | Not Applicable | |
global.envJaegerAgentHost | Host FQDN for Jaeger agent service for config-server tracing | ' ' | Not Applicable | |
global.envJaegerAgentPort | Port for Connection to Jaeger agent for config-server tracing | 6831 | Valid Port | |
envLoggingLevelApp | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | Log level of the nudr-config-server pod |
replicas | Replicas of nudr-config-server pod | 1 | Not applicable | Number of nudr-config-server pods to be maintained by replica set created with deployment |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: It is suggested to keep the default value (ClusterIP). |
resources.requests.cpu | CPU allotment for nudr-config-server pod | 2 | Not applicable | The CPU to be allocated for the nudr-config-server pod during deployment |
resources.requests.memory | Memory allotment for nudr-config-server pod | 512Mi | Not applicable | The memory to be allocated for the nudr-config-server pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 2 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 2Gi | Not applicable | |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 70 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
readinessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 60 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 15 | Not Applicable Unit: Seconds | |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
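A minimal override for this microservice might look like the sketch below; the top-level section name (nudr-config-server) is assumed from the service name.

```yaml
# Sketch of nudr-config-server overrides; section name is assumed.
nudr-config-server:
  enabled: true
  envLoggingLevelApp: WARN
  replicas: 1
  service:
    type: ClusterIP
  resources:
    requests: {cpu: 2, memory: 512Mi}
    limits: {cpu: 2, memory: 2Gi}
```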
The following table provides the parameters for the nudr-diameterproxy microservice.
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable the service. | true | true/false | Used to enable or disable the service. |
image.name | Docker Image name | ocudr/nudr_diameterproxy | Not applicable | |
image.tag | Tag of Image | 1.8.0 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
logging.level.root | Log Level | WARN | Possible Values - WARN/INFO/DEBUG | The log level of the nudr-diameterproxy server pod |
deployment.replicaCount | Replicas of the nudr-diameterproxy pod | 2 | Not applicable | Number of nudr-diameterproxy pods to be maintained by the replica set created with the deployment |
minReplicas | min replicas of nudr-diameterproxy | 2 | Not applicable | Minimum number of pods |
maxReplicas | max replicas of nudr-diameterproxy | 4 | Not applicable | Maximum number of pods |
service.http2enabled | Enabled HTTP2 support flag for rest server | true | true/false | Enable/Disable HTTP2 support for rest server |
service.type | UDR service type | ClusterIP | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Note: It is suggested to keep the default value (ClusterIP). |
service.diameter.type | Diameter service type | LoadBalancer | Possible Values - ClusterIP/NodePort/LoadBalancer | The Kubernetes service type for exposing the UDR deployment. Diameter traffic goes via the diameter-endpoint, not via the ingress-gateway. |
service.port.http | HTTP port | 5001 | Not applicable | The HTTP port to be used in nudr-diameterproxy service |
service.port.https | HTTPS port | 5002 | Not applicable | The https port to be used for nudr-diameterproxy service |
service.port.management | Management port | 9000 | Not applicable | The actuator management port to be used for nudr-diameterproxy service |
service.port.diameter | Diameter port | 6000 | Not applicable | The diameter port to be used for nudr-diameterproxy service |
resources.requests.cpu | Cpu Allotment for nudr-diameterproxy pod | 3 | Not applicable | The CPU to be allocated for nudr-diameterproxy pod during deployment |
resources.requests.memory | Memory allotment for nudr-diameterproxy pod | | Not applicable | The memory to be allocated for the nudr-diameterproxy pod during deployment |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | The CPU to be max allocated for nudr-diameterproxy pod |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | The memory to be max allocated for nudr-diameterproxy pod |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | CPU utilization limit for creating HPA |
drservice.port.http | HTTP port on which dr service is running | 5001 | Not Applicable | dr-service port is required in diameterproxy application |
drservice.port.https | HTTPS port on which dr service is running | 5002 | Not Applicable | dr-service port is required in diameterproxy application |
diameter.realm | Realm of the diameterproxy microservice | oracle.com | String value | Host realm of diameterproxy |
diameter.identity | FQDN of the diameterproxy in diameter messages | nudr.oracle.com | String value | identity of the diameterproxy |
diameter.strictParsing | Strict parsing of Diameter AVP and Messages | false | Not Applicable | strict parsing |
diameter.IO.threadCount | Number of threads for IO operations | 0 | 0 to 2*CPU | Number of threads to handle IO operations in the diameterproxy pod. If threadCount is 0, the application chooses the thread count based on the pod profile size. |
diameter.IO.queueSize | Queue size for IO | 0 | 2048 to 8192 | The count should be a power of 2. If queueSize is 0, the application chooses the queue size based on the pod profile size. |
diameter.messageBuffer.threadCount | Number of threads to process messages | 0 | 0 to 2*CPU | Number of threads to handle messages in the diameterproxy pod. If threadCount is 0, the application chooses the thread count based on the pod profile size. |
diameter.peer.setting | Diameter peer setting | reconnectDelay: 3, responseTimeout: 4, connectionTimeOut: 3, watchdogInterval: 6, transport: 'TCP', reconnectLimit: 50 | Not Applicable | |
diameter.peer.nodes | Diameter server peer nodes list | - name: 'seagull', responseOnly: false, namespace: 'seagull1', host: '10.75.185.158', domain: 'svc.cluster.local', port: 4096, realm: 'seagull1.com', identity: 'seagull1a.seagull1.com' | Not applicable | The diameter server peer node information. It must be a YAML list; the default values are a template showing how to add peer nodes (see the values.yaml sketch after this table). |
diameter.peer.clientNodes | Diameter client peers | - identity: 'seagull1a.seagull1.com', realm: 'seagull1.com'; - identity: 'seagull1.com', realm: 'seagull1.com' | Not applicable | The diameter client node information. It must be a YAML list; the default values are a template showing how to add peer nodes (see the values.yaml sketch after this table). |
service.customExtension.labels | Custom Labels that need to be added to the nudr-diameterproxy specific Service. | null | Not applicable | This can be used to add custom label(s) to the nudr-diameterproxy Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the nudr-diameterproxy specific Service. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-diameterproxy Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the nudr-diameterproxy specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the nudr-diameterproxy Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the nudr-diameterproxy specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the nudr-diameterproxy Deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 80 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 5 | Not Applicable Unit: Seconds | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 80 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 20 | Not Applicable Unit: Seconds | |
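Because diameter.peer.nodes and diameter.peer.clientNodes must be YAML lists, the sketch below expands the template defaults from the table into full YAML; the top-level section name (nudr-diameterproxy) is assumed from the service name.

```yaml
# Sketch of the diameter peer configuration; section name is assumed.
# Peer entries mirror the template defaults from the table above.
nudr-diameterproxy:
  diameter:
    realm: oracle.com
    identity: nudr.oracle.com
    peer:
      setting:
        reconnectDelay: 3
        responseTimeout: 4
        connectionTimeOut: 3
        watchdogInterval: 6
        transport: 'TCP'
        reconnectLimit: 50
      nodes:                       # must be a YAML list
        - name: 'seagull'
          responseOnly: false
          namespace: 'seagull1'
          host: '10.75.185.158'
          domain: 'svc.cluster.local'
          port: 4096
          realm: 'seagull1.com'
          identity: 'seagull1a.seagull1.com'
      clientNodes:                 # must be a YAML list
        - identity: 'seagull1a.seagull1.com'
          realm: 'seagull1.com'
        - identity: 'seagull1.com'
          realm: 'seagull1.com'
```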
The following table provides the parameters for the ocudr-ingressgateway microservice (API Gateway).
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
global.type | ocudr-ingressgateway service type | LoadBalancer | Possible Values - ClusterIP/NodePort/LoadBalancer | |
global.metalLbIpAllocationEnabled | Enable or disable Address Pool for Metallb | true | true/false | |
global.metalLbIpAllocationAnnotation | Address Pool for Metallb | metallb.universe.tf/address-pool: signaling | Not applicable | |
global.staticNodePortEnabled | If Static node port needs to be set, then set staticNodePortEnabled flag to true and provide value for staticNodePort | false | Not applicable | |
global.istioIngressTlsSupport.ingressGateway | When enabled, supports clear-text traffic from outside the cluster in case Service Mesh is enabled. | false | true/false | |
image.name | Docker image name | ocudr/ocingress_gateway | Not applicable | |
image.tag | Image version tag | 1.8.1 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
initContainersImage.name | Docker Image name | ocudr/configurationinit | Not applicable | |
initContainersImage.tag | Image version tag | 1.4.0 | Not applicable | |
initContainersImage.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
updateContainersImage.name | Docker Image name | ocudr/configurationupdate | Not applicable | |
updateContainersImage.tag | Image version tag | 1.4.0 | Not applicable | |
updateContainersImage.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
service.ssl.tlsVersion | Configuration to take TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.ssl.privateKey.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.privateKey.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.privateKey.rsa.fileName | rsa private key stored in the secret | rsa_private_key_pkcs1.pem | Not applicable | |
service.ssl.privateKey.ecdsa.fileName | ecdsa private key stored in the secret | ecdsa_private_key_pkcs8.pem | Not applicable | |
service.ssl.certificate.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.certificate.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.certificate.rsa.fileName | rsa certificate stored in the secret | apigatewayrsa.cer | Not applicable | |
service.ssl.certificate.ecdsa.fileName | ecdsa certificate stored in the secret | apigatewayecdsa.cer | Not applicable | |
service.ssl.caBundle.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.caBundle.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.caBundle.fileName | ca Bundle stored in the secret | caroot.cer | Not applicable | |
service.ssl.keyStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.keyStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.keyStorePassword.fileName | keyStore password stored in the secret | key.txt | Not applicable | |
service.ssl.trustStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.trustStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | Not applicable | |
service.initialAlgorithm | Algorithm to be used. ES256 can also be used, but the corresponding certificates need to be used. | RSA256 | RSA256/ES256 | |
resources.limits.cpu | Cpu allotment limitation | 5 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | |
resources.limits.initServiceCpu | Maximum amount of CPU that Kubernetes will allow the ingress-gateway init container to use. | 1 | Not Applicable | |
resources.limits.initServiceMemory | Memory Limit for ingress-gateway init container | 1Gi | Not Applicable | |
resources.limits.updateServiceCpu | Maximum amount of CPU that Kubernetes will allow the ingress-gateway update container to use. | 1 | Not Applicable | |
resources.limits.updateServiceMemory | Memory Limit for ingress-gateway update container | 1Gi | Not Applicable | |
resources.requests.cpu | Cpu allotment for ocudr-endpoint pod | 5 | Not Applicable | |
resources.requests.memory | Memory allotment for ocudr-endpoint pod | 4Gi | Not Applicable | |
resources.requests.initServiceCpu | The amount of CPU that the system guarantees for the ingress-gateway init container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.initServiceMemory | The amount of memory that the system guarantees for the ingress-gateway init container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceCpu | The amount of CPU that the system guarantees for the ingress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceMemory | The amount of memory that the system guarantees for the ingress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not Applicable | |
minAvailable | Number of pods always running | 2 | Not Applicable | |
minReplicas | Min replicas to scale to maintain an average CPU utilization | 2 | Not applicable | |
maxReplicas | Max replicas to scale to maintain an average CPU utilization | 5 | Not applicable | |
log.level.root | Logs to be shown on ocudr-endpoint pod | WARN | valid level | |
log.level.ingress | Logs to be shown on ocudr-ingressgateway pod for ingress related flows | INFO | valid level | |
log.level.oauth | Logs to be shown on ocudr-ingressgateway pod for oauth related flows | INFO | valid level | |
initssl | To Initialize SSL related infrastructure in init/update container | false | Not Applicable | |
jaegerTracingEnabled | Enable/Disable Jaeger Tracing | false | true/false | |
openTracing.jaeger.udpSender.host | Jaeger agent service FQDN | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | |
openTracing.jaeger.udpSender.port | Jaeger agent service UDP port | 6831 | Valid Port | |
openTracing.jaeger.probabilisticSampler | Probabilistic Sampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | Sampler makes a random sampling decision with the probability of sampling. For example, if the value set is 0.1, approximately 1 in 10 traces will be sampled. |
| Supported cipher suites for SSL | | Not applicable | |
oauthValidatorEnabled | OAUTH Configuration | false | Not Applicable | |
nfType | NF type of the service producer | UDR | Not Applicable | Mandatory when oauthValidatorEnabled is true |
producerScope | Comma-separated list of services hosted by the service producer. | nudr-dr,nudr-group-id-map | Valid service list | Mandatory when oauthValidatorEnabled is true |
allowedClockSkewSeconds | Set this value if the clock on the parsing NF (producer) is not perfectly in sync with the clock on the NF (consumer) that created the JWT. | 0 | Unit: Seconds | Mandatory when oauthValidatorEnabled is true |
nrfPublicKeyKubeSecret | Name of the secret which stores the public key(s) of NRF. | oauthsecret | Not Applicable | Mandatory when oauthValidatorEnabled is true |
nrfPublicKeyKubeNamespace | Namespace of the NRF public key Secret | ocudr | Not Applicable | Mandatory when oauthValidatorEnabled is true |
validationType | Values can be "strict" or "relaxed". "strict" means that incoming requests without an "Authorization" (access token) header are rejected. "relaxed" means that if an incoming request contains an "Authorization" header, it is validated; if it does not, validation is skipped. | strict | strict/relaxed | Mandatory when oauthValidatorEnabled is true |
producerPlmnMNC | MNC of service producer | 14 | Valid MNC | |
producerPlmnMCC | MCC of service producer | 310 | Valid MCC | |
enableIncomingHttp | Enable accepting HTTP requests | true | Not Applicable | |
enableIncomingHttps | Enable accepting HTTPS requests | false | true or false | |
enableOutgoingHttps | Enable sending HTTPS requests | false | true or false | |
maxRequestsQueuedPerDestination | Queue Size at the ocudr-endpoint pod | 5000 | Not Applicable | |
maxConnectionsPerIp | Connections from endpoint to other microServices | 10 | Not Applicable | |
serviceMeshCheck | When false, load balancing is handled by the ingress gateway; when true, it is handled by the service mesh | false | true/false | |
routesConfig | Routes configured to connect to the different microservices of UDR | | Not Applicable | |
service.customExtension.labels | Custom Labels that need to be added to the ingressgateway specific Service. | null | Not Applicable | This can be used to add custom label(s) to the ingressgateway Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the ingressgateway specific Service. | null | Not Applicable | This can be used to add custom annotation(s) to the ingressgateway Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the ingressgateway specific Deployment. | null | Not Applicable | This can be used to add custom label(s) to the ingressgateway Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the ingressgateway specific Deployment. | null | Not Applicable | This can be used to add custom annotation(s) to the ingressgateway Deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 30 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
readinessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 30 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 15 | Not Applicable Unit: Seconds | |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
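The SSL-related parameters above all point at files stored in a Kubernetes secret; the sketch below shows how they fit together, with the top-level section name (ingressgateway) assumed from the chart structure.

```yaml
# Sketch of ingress gateway TLS references; section name is assumed.
# All file names refer to entries inside the named Kubernetes secret.
ingressgateway:
  global:
    type: LoadBalancer
  initssl: false
  enableIncomingHttp: true
  enableIncomingHttps: false
  service:
    ssl:
      tlsVersion: TLSv1.2
      privateKey:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa: {fileName: rsa_private_key_pkcs1.pem}
        ecdsa: {fileName: ecdsa_private_key_pkcs8.pem}
      certificate:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        rsa: {fileName: apigatewayrsa.cer}
        ecdsa: {fileName: apigatewayecdsa.cer}
      caBundle:
        k8SecretName: ocudr-gateway-secret
        k8NameSpace: ocudr
        fileName: caroot.cer
```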
The following table provides the parameters for the ocudr-egressgateway microservice (API Gateway).
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Configuration flag to enable/disable egress gateway | true | true/false | |
image.name | Docker image name | ocudr/ocegress_gateway | Not applicable | |
image.tag | Image version tag | 1.8.1 | Not applicable | |
image.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
initContainersImage.name | Docker Image name | ocudr/configurationinit | Not applicable | |
initContainersImage.tag | Image version tag | 1.4.0 | Not applicable | |
initContainersImage.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
updateContainersImage.name | Docker Image name | ocudr/configurationupdate | Not applicable | |
updateContainersImage.tag | Image version tag | 1.4.0 | Not applicable | |
updateContainersImage.pullPolicy | This setting specifies whether the image needs to be pulled | Always | Possible Values - Always/IfNotPresent/Never | |
resources.limits.cpu | Cpu allotment limitation | 3 | Not applicable | |
resources.limits.memory | Memory allotment limitation | 4Gi | Not applicable | |
resources.limits.initServiceCpu | Maximum amount of CPU that Kubernetes will allow the egress-gateway init container to use. | 1 | Not applicable | |
resources.limits.initServiceMemory | Memory Limit for egress-gateway init container | 1Gi | Not applicable | |
resources.limits.updateServiceCpu | Maximum amount of CPU that Kubernetes will allow the egress-gateway update container to use. | 1 | Not applicable | |
resources.limits.updateServiceMemory | Memory Limit for egress-gateway update container | 1Gi | Not applicable | |
resources.requests.cpu | Cpu allotment for ocudr-egressgateway pod | 3 | Not applicable | |
resources.requests.memory | Memory allotment for ocudr-egressgateway pod | 4Gi | Not applicable | |
resources.requests.initServiceCpu | The amount of CPU that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.initServiceMemory | The amount of memory that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceCpu | The amount of CPU that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.requests.updateServiceMemory | The amount of memory that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide on which node to place the pod. | | Not Applicable | |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling | 80 | Not applicable | |
service.ssl.tlsVersion | Configuration to take TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.initialAlgorithm | Algorithm to be used. ES256 can also be used, but the corresponding certificates need to be used. | RSA256 | RSA256/ES256 | |
service.ssl.privateKey.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.privateKey.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.privateKey.rsa.fileName | rsa private key stored in the secret | rsa_private_key_pkcs1.pem | Not applicable | |
service.ssl.privateKey.ecdsa.fileName | ecdsa private key stored in the secret | ecdsa_private_key_pkcs8.pem | Not applicable | |
service.ssl.certificate.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.certificate.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.certificate.rsa.fileName | rsa certificate stored in the secret | apigatewayrsa.cer | Not applicable | |
service.ssl.certificate.ecdsa.fileName | ecdsa certificate stored in the secret | apigatewayecdsa.cer | Not applicable | |
service.ssl.caBundle.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.caBundle.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.caBundle.fileName | ca Bundle stored in the secret | caroot.cer | Not applicable | |
service.ssl.keyStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.keyStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.keyStorePassword.fileName | keyStore password stored in the secret | key.txt | Not applicable | |
service.ssl.trustStorePassword.k8SecretName | name of the secret which stores keys and certificates | ocudr-gateway-secret | Not applicable | |
service.ssl.trustStorePassword.k8NameSpace | namespace in which secret is created | ocudr | Not applicable | |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | Not applicable | |
minAvailable | Number of pods always running | 1 | Not Applicable | |
minReplicas | Min replicas to scale to maintain an average CPU utilization | 1 | Not applicable | |
maxReplicas | Max replicas to scale to maintain an average CPU utilization | 4 | Not applicable | |
log.level.root | Logs to be shown on ocudr-egressgateway pod | WARN | valid level | |
log.level.egress | Logs to be shown on ocudr-egressgateway pod for egress related flows | INFO | valid level | |
log.level.oauth | Logs to be shown on ocudr-egressgateway pod for oauth related flows | INFO | valid level | |
fullnameOverride | Name to be used for deployment | ocudr-egressgateway | Not applicable | This config is commented by default. |
initssl | To Initialize SSL related infrastructure in init/update container | false | Not Applicable | |
jaegerTracingEnabled | Enable/Disable Jaeger Tracing | false | true/false | |
openTracing.jaeger.udpSender.host | Jaeger agent service FQDN | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | |
openTracing.jaeger.udpSender.port | Jaeger agent service UDP port | 6831 | Valid Port | |
openTracing.jaeger.probabilisticSampler | Probabilistic Sampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | Sampler makes a random sampling decision with the probability of sampling. For example, if the value set is 0.1, approximately 1 in 10 traces will be sampled. |
enableOutgoingHttps | Enabling for sending https requests | false | true or false | |
oauthClient.enabled | Enable if oauth is required | false | true or false | Enable based on Oauth configuration |
oauthClient.dnsSrvEnabled | DNS SRV Enabled for oAuth | false | true/false | |
oauthClient.httpsEnabled | Determines whether HTTPS support is enabled, which decides the OAuth request scheme and the search query parameter in the DNS-SRV request | false | true/false | |
oauthClient.virtualFqdn | virtualFqdn value to be populated and sent in the DNS-SRV query. | localhost:port | Mandatory if oauthClient.dnsSrvEnabled is true | |
oauthClient.staticNrfList | List of static NRFs | - localhost:port | Mandatory if oauthClient.enabled is true | |
oauthClient.nfType | NFType of service consumer. | UDR | Not Applicable | Mandatory if oauthClient.enabled is true |
oauthClient.consumerPlmnMNC | MNC of service Consumer. | 14 | Valid MNC | |
oauthClient.consumerPlmnMCC | MCC of service Consumer. | 310 | Valid MCC | |
oauthClient.maxRetry | Maximum number of retries to be performed towards other NRF FQDNs in case of a failure response from the first contacted NRF, based on the configured errorCodeSeries. | 2 | Valid Number | Mandatory if oauthClient.enabled is true |
oauthClient.apiPrefix | apiPrefix that needs to be appended in the Oauth request flow. | "" | Valid String | Mandatory if oauthClient.enabled is true |
oauthClient.errorCodeSeries | Determines the fallback condition to other NRF in case of failure response from currently contacted NRF. | 4XX | Valid series | Mandatory if oauthClient.enabled is true and requires different error code series |
oauthClient.retryAfter | RetryAfter value in milliseconds that needs to be set for a particular NRF Fqdn, if the error matched the configured errorCodeSeries. | 5000 | Unit: Milliseconds | Mandatory if oauthClient.enabled is true |
maxConcurrentPushedStreams | Jetty client configuration | 1000 | Valid Number | |
maxRequestsQueuedPerDestination | Jetty client configuration | 1024 | Valid Number | |
maxConnectionsPerIp | Max Connections allowed per Ip | 4 | Valid Number | |
connectionTimeout | Connection timeout in milliseconds | 10000 | Unit: Milliseconds | |
requestTimeout | Request timeout in milliseconds | 1000 | Unit: Milliseconds | |
jettyIdleTimeout | Jetty idle timeout in milliseconds | 0 | Unit: Milliseconds (<=0 makes the timeout infinite) | |
k8sServiceCheck | Enable this if load balancing is to be done by the egress gateway instead of Kubernetes | false | true/false | |
service.customExtension.labels | Custom Labels that need to be added to the egressgateway specific Service. | null | Not applicable | This can be used to add custom label(s) to the egressgateway Service. |
service.customExtension.annotations | Custom Annotations that need to be added to the egressgateway specific Service. | null | Not applicable | This can be used to add custom annotation(s) to the egressgateway Service. |
deployment.customExtension.labels | Custom Labels that need to be added to the egressgateway specific Deployment. | null | Not applicable | This can be used to add custom label(s) to the egressgateway Deployment. |
deployment.customExtension.annotations | Custom Annotations that need to be added to the egressgateway specific Deployment. | null | Not applicable | This can be used to add custom annotation(s) to the egressgateway Deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 30 | Not Applicable Unit: Seconds | |
readinessProbe.periodSeconds | Time interval for every readiness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 10 | Not Applicable Unit: Seconds | |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
readinessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 30 | Not Applicable Unit: Seconds | |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If you see delays in the pod coming up and the probe is killing the pod, consider tuning these parameters. | 15 | Not Applicable Unit: Seconds | |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out. Note: Do not change this default value. | 3 | Not Applicable | |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Note: Do not change this default value. | 1 | Not Applicable | |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up. Note: Do not change this default value. | 3 | Not Applicable | |
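The OAuth client parameters above are only consulted when oauthClient.enabled is true; the sketch below shows them together, with the top-level section name (egressgateway) assumed from the chart structure and the NRF endpoint left as the documented placeholder.

```yaml
# Sketch of egress gateway OAuth client settings; section name is assumed.
egressgateway:
  enabled: true
  oauthClient:
    enabled: false                 # set true if OAuth is required
    dnsSrvEnabled: false
    httpsEnabled: false
    staticNrfList:
      - localhost:port             # placeholder; use a real NRF FQDN:port
    nfType: UDR
    consumerPlmnMNC: 14
    consumerPlmnMCC: 310
    maxRetry: 2
    errorCodeSeries: 4XX
    retryAfter: 5000               # milliseconds
```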