3 Customizing UDR
This chapter provides information about customizing Oracle Communications Cloud Native Core, Unified Data Repository (UDR) deployment in a cloud native environment.
A sample custom-values.yaml file is available in the Custom_Templates file.
For more information on how to download the package, see Downloading Unified Data Repository Package.
3.1 UDR Configuration Parameters
This section includes information about the configuration parameters of UDR.
Note:
Mandatory parameters must be configured before the UDR deployment.
3.1.1 Global Configurable Parameters
The global configurable parameters apply across the entire UDR deployment and are defined in the global section of the custom-values.yaml file. Several of these values are YAML anchor references (prefixed with &) that are reused by microservice-level and hook-level parameters; retain the anchors unless noted otherwise.
The following table provides global configurable parameters:
Table 3-1 Global Configurable Parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
dockerRegistry | Docker registry from where the images are pulled | ocudr-registry.us.oracle.com/ocudr | NA | NA |
mysql.dbServiceName | Name of the database service | mysql-connectivity-service.occne-infra | NA | It is the connectivity service exposed by the cnDBTier deployment. |
mysql.port | Port exposed by database service | 3306 | NA | NA |
mysql.configdbname | Database name for configdb | udrconfigdb | NA | NA |
mysql.dbname | Database name for udrDB | udrdb | NA | NA |
mysql.dbUNameLiteral | Database user name literal | dsusername (Note: This value must not be modified) | NA | NA |
mysql.dbPwdLiteral | Database password literal | dspassword (Note: This value must not be modified) | NA | NA |
mysql.dbEngine | Database engine name | NDBCLUSTER (Note: This value must not be modified) | NA | NA |
nrfClientDbName | Database name for nrf-client service | configdbname (Note: This value must not be modified) | NA | NA |
udrTracing.enable | Flag to enable udr tracing on Jaeger | false | true/false | NA |
udrTracing.host | Jaeger service name installed on the cluster | occne-tracer-jaeger-agent.occne-infra | NA | NA |
udrTracing.port | Jaeger service port installed on the cluster | 14268 | NA | NA |
dbenc.shavalue | Encryption Key size | 256 | 256 or 512 | NA |
serviceAccountName | Name of the service account | null | NA | The service account, role, and role bindings required for deployment should be created prior to UDR installation. For more information, see Creating Service Account. |
configServerEnable | Flag to enable config-server | &configServerEnabled true | true/false | NA |
dbCredSecretName | Name of the secret created for storing database credentials | ocudr-secrets | NA | NA |
configServerFullNameOverride | Name of the Config-server to be used in deployment | nudr-config-server | NA | NA |
configServicePort | HTTP port for nudr-config microservice | &configServiceHttpPort 5001 | Valid Port | Keep the reference unchanged |
createNetworkPolicy | Flag to enable Network Policy | false | true/false | NA |
alternatRouteServiceEnable | Flag to enable alternate route service | false | true/false | NA |
appinfoServiceEnable | Flag to enable appinfo service. | true | true/false | NA |
vendor | Logging parameter picked up by Kibana | Oracle | NA | NA |
app_name | Logging parameter picked up by Kibana | alternate-route | NA | NA |
performanceServiceEnable | Flag to enable performance info service | false | true/false | NA |
servicePorts.perfInfoHttp | HTTP port exposed on performance info service | 5905 | valid port | NA |
envJaegerQueryUrl | Jaeger URL to be used for perf-info service | http://occne-tracer-jaeger-query.occne-infra/ | valid URL | NA |
overloadManagerEnabled | Flag to enable the overload manager. When this flag is set to true, the readiness check microservice provisions the overload configurations to the database. | false | true/false | NA |
sbiCorrelationInfoEnable | If the flag is enabled, then the 3gpp-Sbi-Correlation-Info header is added in the dr-service responses and notification requests in Egress Gateway. | false | true/false | NA |
ephemeralStorage.requests.containersLogStorage | Log storage for all the containers used for ephemeral storage allocation request. | &containersLogStorageRequestsRef 50 | Unit: MB | Do not remove reference |
ephemeralStorage.requests.containersCrictlStorage | Crictl storage for all the containers used for ephemeral storage allocation request. | &containersCrictlStorageRequestsRef 2 | Unit: MB | Do not remove reference |
ephemeralStorage.limits.containersLogStorage | Log storage for all the containers used for ephemeral storage allocation. | &containersLogStorageLimitsRef 1000 | Unit: MB | Do not remove reference |
ephemeralStorage.limits.containersCrictlStorage | Crictl storage for all the containers used for ephemeral storage allocation limits. | &containersCrictlStorageLimitsRef 2 | Unit: MB | Do not remove reference |
udrServices | Services supported on UDR deployment. This configuration decides the schema execution on the UDR database that is done by the nudr-preinstall hook pod | All | All/nudr-dr/nudr-group-id-map/n5g-eir-eic | For SLF, set the udrServices value as nudr-group-id-map. For EIR, set the udrServices value as n5g-eir-eic. |
consumerNF | Network functions that use the UDR. It is a comma-separated list of network functions. | If udrServices is set to All, the default value is PCF,UDM,NEF. If udrServices is set to nudr-dr, the default value is PCF. | PCF/UDM/NEF (any combination of these three) | NA |
udrorAll | Services that need to be monitored by app info in "All" or "dr" mode | &allordr nudr-drservice,nudr-dr-provservice,nudr-notify-service,nudr-diameterproxy,egressgateway,ingressgateway,nudr-config,nudr-config-server,alternate-route,nudr-ondemand-migration | NA | NA |
slf | Services that need to be monitored by app info in "nudr-group-id-map" mode | &slf nudr-drservice,nudr-dr-provservice,egressgateway,ingressgateway,nudr-config,nudr-config-server,alternate-route | NA | NA |
udsfEnable | Flag to enable UDSF services | true | true/false | NA |
nfInstanceId | NF Instance ID of UDR (the same is registered with NRF) | 5a7bd676-ceeb-44bb-95e0-f6a55a328b03 | Valid UUID | The reference definition remains unchanged. A valid Universally Unique Identifier (UUID) is a 128-bit unique number that helps NFs to identify NRF. |
autoCreateSubscriber | Flag to enable or disable the auto creation of the subscriber when the PUT operation is performed on a new UEID (Unique Entity Identifier). | true | true/false | NA |
addDefaultBillingDay | ||||
diamGatewayEnable | Flag to enable or disable diamGateway service. | true | true/false | NA |
diamGatewayRealm | Diameter gateway service realm | oracle.com | Valid realm | NA |
diamGatewayIdentity | Diameter gateway service identity | nudr.oracle.com | Valid identity | NA |
FourGPolicyConfiguration.vsaLevel | The data level where VSA (Vendor Specific Attribute) data holding the 4G policy content is added. | smpolicy | smpolicy/nssai/dnn | NA |
FourGPolicyConfiguration.vsaDefaultBillingDay | The Billing day value on 4G policy data | 1 | NA | NA |
FourGPolicyConfiguration.snssai | The snssai value configurable on 4G policy data | 2-FFFFFF | NA | NA |
FourGPolicyConfiguration.dnn | The dnn value configurable on 4G policy data | dnn1 | NA | NA |
FourGPolicyConfiguration.ondemandMigration.enabled | Flag to enable on-demand migration service | &onDemandMigrationEnabled false | true/false | Keep the reference variable unchanged |
FourGPolicyConfiguration.ondemandMigration.diamRealm | ondemand migration tool realm | oracle.com | valid realm | NA |
FourGPolicyConfiguration.ondemandMigration.diamIdentity | ondemand migration tool identity | ondemand-migration.nudr.oracle.com | valid identity or FQDN | Use migration keyword in the diameter identity |
FourGPolicyConfiguration.ondemandMigration.deleteSubscriberOnSource | Enable or disable 4G Subscriber deletion | false | true or false | NA |
FourGPolicyConfiguration.ondemandMigration.restEndpoint | 4G Rest endpoint for deletion of subscriber | http://source-end-rest-endpoint | Valid Endpoint | NA |
FourGPolicyConfiguration.sourceUdr.realm | Source diameter server realm | tekelec1.com | valid diameter realm | NA |
FourGPolicyConfiguration.sourceUdr.identity | Source diameter server identity | local1.tekelec1.com | valid diameter identity | NA |
FourGPolicyConfiguration.sourceUdr.hostIp | Source IP that refers to the 4G UDR | NA | Valid host IP or FQDN | NA |
FourGPolicyConfiguration.sourceUdr.hostPort | Source Port, refers to 4G UDR | 3868 | Valid diameter port | NA |
serviceMeshCheck | Flag to check side car container readiness when deployed with Aspen Service Mesh. It is enabled when deployed with Aspen Service Mesh. | &serviceMeshFlag false | true/false | Keep the reference variable unchanged |
istioSidecarQuitUrl | Quit URL configurable for side car | &istioQuitUrl "http://127.0.0.1:15020/quitquitquit" | Valid URL | Keep the reference variable unchanged |
istioSidecarReadyUrl | Readiness URL configurable for side car | &istioReadyUrl "http://127.0.0.1:15000/ready" | Valid URL | The port used should be the admin port configured for the istio-proxy container. Keep the reference variable unchanged |
test.nfName | NF name on which the helm test is performed. For UDR, the default value is UDR and is used in container name as suffix | UDR | NA | NA |
test.image.name | Image name for helm test container image | nf_test | NA | NA |
test.image.tag | Image version tag for helm test | 23.4.2 | NA | NA |
test.config.logLevel | Log level for helm test pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
test.config.timeout | Timeout value for the helm test operation. If the timeout is exceeded, the helm test is considered a failure | 240 | Range: 1-300, Unit: seconds | NA |
test.resources | Kubernetes resources for which the API version information needs to be fetched | horizontalpodautoscalers/v1 | NA | NA |
test.complianceEnable | Indicates whether or not the Kubernetes logging feature is enabled | true | true or false | NA |
preinstall.image.name | Image name for the nudr-prehook pod which takes care of database and table creation for UDR deployment | nudr_common_hooks | NA | NA |
preinstall.image.tag | Image version for nudr-pre-install-hook pod image | 23.4.2 | NA | NA |
preinstall.config.logLevel | Log level for pre-install hook pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
preinstall.createUser | Flag to enable user creation on SQL nodes as part of the pre-install hook. Set it to false when installed with vDBTier | true | true/false | NA |
preinstall.serviceMeshCheck | Flag to check side car container readiness when deployed with Aspen Service Mesh. Set this parameter to false if the side car is not enabled only for this hook | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in the global section reference |
preinstall.istioSidecarQuitUrl | Quit URL configurable for side car. Change its value only if the URL for this hook is different from the one configured in the global section | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference. |
preinstall.istioSidecarReadyUrl | Readiness URL configurable for side car. Change its value only if the URL for this hook is different from the one configured in the global section | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
preUpgrade.image.name | Image name of nudr-pre-upgrade pod that takes care of database schema updates on UDR datastore | nudr_common_hooks | NA | NA |
preUpgrade.image.tag | Image version of nudr-pre-upgrade pod image | 23.4.2 | NA | NA |
preUpgrade.config.logLevel | Log level of nudr-pre-upgrade hook pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
preUpgrade.serviceMeshCheck | Flag to check side car container readiness when deployed with Aspen Service Mesh. Set this parameter to false if the side car is not enabled only for this hook | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in the global section reference |
preUpgrade.istioSidecarQuitUrl | Quit URL configurable for side car. Update this parameter only if the URL of this hook is different from the one configured in the global section | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
preUpgrade.istioSidecarReadyUrl | Readiness URL configurable for side car. Update this parameter only if the URL of this hook is different from the one configured in the global section | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
preUpgrade.upgradeFailed | Flag to enable deletion of the configuration database | false | true/false | Change to true if the previous upgrade failed and you want to upgrade again. |
postInstall.image.name | Image name for the nudr-post-upgrade pod, which takes care of schema updates on the UDR datastore | nudr_common_hooks | NA | NA |
postInstall.image.tag | Image version for nudr-post-upgrade pod image | 23.4.2 | NA | NA |
postInstall.config.logLevel | Log level for nudr-post-upgrade hook pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
postInstall.serviceMeshCheck | Flag to check the side car container readiness when deployed with Aspen Service Mesh. For more information, refer to serviceMeshCheck under the global section. | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postInstall.istioSidecarQuitUrl | Quit URL configurable for sidecar. For more information, refer to istioSidecarQuitUrl under the global section. Change only if the URL for this hook is different from the one configured in the global section. | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postInstall.istioSidecarReadyUrl | Readiness URL configurable for sidecar. For more information, refer to istioSidecarReadyUrl under the global section. Change only if the URL for this hook is different from the one configured in the global section. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postInstall.serviceToCheck | Services to check before adding default configuration for overload control | nudr-config-server,nudr-config | NA | NA |
postInstall.serviceCheckTimeout | Service check timeout for the serviceToCheck configuration above | 100 | NA | NA |
postUpgrade.image.name | Image name for the nudr-post-upgrade pod, which takes care of schema updates on the UDR datastore. | nudr_common_hooks | NA | NA |
postUpgrade.image.tag | Image version for nudr-post-upgrade pod image | 23.4.2 | NA | NA |
postUpgrade.config.logLevel | Log level for nudr-post-upgrade hook pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
postUpgrade.serviceMeshCheck | Flag to check side car container readiness when deployed with Aspen Service Mesh. Change to false if the side car is disabled only for this hook | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postUpgrade.istioSidecarQuitUrl | Quit URL configurable for side car. Change only if the URL for this hook is different from the one configured in the global section. | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postUpgrade.istioSidecarReadyUrl | Readiness URL configurable for side car. Change only if the URL for this hook is different from the one configured in the global section. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference. |
postRollback.image.name | Image name for the nudr-post-upgrade pod, which takes care of schema updates on the UDR datastore. | nudr_common_hooks | Not Applicable | NA |
postRollback.image.tag | Image version for nudr-post-upgrade pod image | 23.4.2 | Not Applicable | NA |
postRollback.config.logLevel | Log level for nudr-post-upgrade hook pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
postRollback.serviceMeshCheck | Flag to check side car container readiness when deployed with Aspen Service Mesh. Refer to serviceMeshCheck under the global section. Change to false if the side car is disabled only for this hook | *serviceMeshFlag | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
postRollback.istioSidecarQuitUrl | Quit URL configurable for side car. Refer to istioSidecarQuitUrl under the global section. Change only if the URL for this hook is different from the one configured in the global section | *istioQuitUrl | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
postRollback.istioSidecarReadyUrl | Readiness URL configurable for side car. Refer to istioSidecarReadyUrl under the global section. Change only if the URL for this hook is different from the one configured in the global section | *istioReadyUrl | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
nrfClientNfDiscoveryEnable | Flag to enable NRF client discovery pod. Note: Do not enable this pod. | false | true/false | NA |
nrfClientNfManagementEnable | Flag to enable NRF client management pod | true | true/false | NA |
deploymentNrfClientService.envNfNamespace | Namespace of the UDR deployment | myudr | Not Applicable | NA |
deploymentNrfClientService.envNfType | Name of nfType | udr | Not Applicable | NA |
deploymentNrfClientService.envConsumeSvcName | The services to be monitored by perf-info to report the load and capacity of configured NFServices, in the format shown in the default value | 'nudr-drservice:nudr-drservice' | Not Applicable | NA |
initContainerEnable | Flag to enable init container. The value of this parameter should be set to false when UDR is deployed with Aspen Service Mesh. | true | true/false | NA |
nfName | Set to prefix of the name of microservices | nudr | Not Applicable | NA |
applicationName | Application Name | ocudr | Not Applicable | NA |
egressGatewayFullnameOverride | Egress gateway Host. It should be used as {{ ReleaseName }}-egressGatewayFullnameOverride | egressgateway | Not Applicable | NA |
egressGatewayPort | Egress gateway Port | 8080 | Valid Port | NA |
hookJobResources.limits.cpu | CPU limit for pods created in Kubernetes hooks or jobs created as part of UDR installation | 1 | NA | Applicable to helm test job as well |
hookJobResources.limits.memory | Memory limit for pods created in Kubernetes hooks or jobs created as part of UDR installation | 1Gi | NA | Applicable for helm test job as well |
hookJobResources.limits.ephemeral-storage | Ephemeral storage limits for hook pods | 1024Mi | Unit MB | NA |
hookJobResources.requests.cpu | CPU requests for pods created in Kubernetes hooks or jobs created as part of UDR installation | 0.5 | NA | CPU allocated for hooks during deployment. It is applicable for helm test job as well. |
hookJobResources.requests.memory | Memory requests for pods created in Kubernetes hooks or jobs created as part of UDR installation | 0.5Gi | NA | The memory to be allocated for hooks during deployment. It is applicable for helm test job as well. |
hookJobResources.requests.ephemeral-storage | Ephemeral storage request for hook pods | 72Mi | Unit MB | NA |
tlsVersionDowngrade | Downgrade the TLS version to TLS 1.2 while running the microservices. Retain the default value unless you face any issue with the SSL version on microservices | false | true/false | NA |
customExtension.allResources.labels | Custom Labels added to all the UDR Kubernetes resources | null | NA | This can be used to add custom label(s) to all Kubernetes resources that UDR helm chart creates. |
customExtension.allResources.annotations | Custom Annotations added to all the UDR Kubernetes resources | null | NA | This can be used to add custom annotation(s) to all Kubernetes resources that the UDR helm chart creates. ASM related annotations need to be added under the ASM Specific Configuration section. |
customExtension.lbServices.labels | Custom Labels added to UDR Services that are of Load Balancer type | null | NA | This can be used to add custom label(s) to all Load Balancer type services UDR helm chart creates. |
customExtension.lbServices.annotations | Custom Annotations added to UDR Services that are of Load Balancer type | null | NA | This can be used to add custom annotation(s) to all Load Balancer type services that UDR helm chart creates. |
customExtension.lbDeployments.labels | Custom Labels added to UDR Deployment that is associated to a service of Load Balancer type | null | NA | This can be used to add custom label(s) to all Deployments that UDR helm chart creates which are associated to a service which is of Load Balancer type. |
customExtension.lbDeployments.annotations | Custom Annotations added to UDR Deployments that are associated with a Service of Load Balancer type | null | NA | ASM related annotations need to be added under the ASM Specific Configuration section. This can be used to add custom annotation(s) to all Deployments that the UDR helm chart creates which are associated with a service of Load Balancer type. |
customExtension.nonlbServices.labels | Custom Labels added to UDR Services that are not of Load Balancer type | null | NA | This can be used to add custom label(s) to all non-Load Balancer type services that the UDR helm chart creates. |
customExtension.nonlbServices.annotations | Custom Annotations added to UDR Services that are not of Load Balancer type | null | NA | This can be used to add custom annotation(s) to all non-Load Balancer type services that the UDR helm chart creates. |
customExtension.nonlbDeployments.labels | Custom Labels added to UDR Deployments that are associated with a Service which is not of Load Balancer type | null | NA | This can be used to add custom label(s) to all Deployments created by the UDR helm chart which are associated with a service that is not of Load Balancer type. |
customExtension.nonlbDeployments.annotations | Custom Annotations added to UDR Deployments that are associated with a Service which is not of Load Balancer type | null | NA | ASM related annotations need to be added under the ASM Specific Configuration section. This can be used to add custom annotation(s) to all Deployments created by the UDR helm chart which are associated with a service that is not of Load Balancer type. |
k8sResource.container.prefix | Value that is prefixed to all the container names of UDR | null | NA | NA |
k8sResource.container.suffix | Value that is suffixed to all the container names of UDR | null | NA | NA |
extraContainers | Flag to enable addition of the container configuration under extraContainersTpl to all the deployments under the UDR umbrella. This parameter is used for the debug container template | DISABLED | Allowed Values: ENABLED/DISABLED | This configuration is dependent on the extraContainers configuration under each microservice. This parameter is not used for hooks. |
extraContainerTpl | The template for extraContainer is defined in this configuration. For now, it defines the debugContainer template. It is applicable to all microservices under the UDR umbrella | | NA | This is not used for hooks. |
extraContainersVolumesTpl | Debug container volume template | | NA | NA |
tolerationsSetting | Flag to enable the toleration setting at the global level. If it is set to ENABLED and the local toleration setting at each microservice level is set to USE_GLOBAL_VALUE, the same can be used in the microservice. | DISABLED | Allowed Values: ENABLED/DISABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the global level. If it is set to ENABLED and the local nodeSelection setting at each microservice level is set to USE_GLOBAL_VALUE, the same can be used in the microservice. | DISABLED | Allowed Values: ENABLED/DISABLED | NA |
tolerations | When global.tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion at the microservice level is set to v1. It does not depend on the nodeSelection flag; once configured, it can be used for all microservices. If it is configured with a different value at the microservice level, that value is used for that particular microservice. | ' ' | Valid key of a label for a particular node. | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion at the microservice level is set to v1. It does not depend on the nodeSelection flag; once configured, it can be used for all microservices. If it is configured with a different value at the microservice level, that value is used for that particular microservice. | ' ' | Valid value for the above key of a label for a particular node. | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion at the microservice level is set to v2. Uncomment and use this configuration when required; else, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod. | NA |
hooksCommonConfig.tolerations | When global.tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
hooksCommonConfig.helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1/v2 | NA |
hooksCommonConfig.nodeSelector.nodeKey | NodeSelector key configuration for UDR hooks, excluding the common service hooks. This configuration is used when hooksCommonConfig.helmBasedConfigurationNodeSelectorApiVersion is set to v1. | ' ' | Valid key of a label for a particular node. | NA |
hooksCommonConfig.nodeSelector.nodeValue | NodeSelector value configuration for UDR hooks, excluding the common service hooks. This configuration is used when hooksCommonConfig.helmBasedConfigurationNodeSelectorApiVersion is set to v1. | ' ' | Valid value for the above key of a label for a particular node. | NA |
hooksCommonConfig.nodeSelector | NodeSelector configuration when hooksCommonConfig.helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; else, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod. | NA |
subscriberActivityEnabled | Flag to enable or disable the subscriber activity logging feature | false | true or false | NA |
autoEnrolOnSignalling | Flag to enable or disable auto enrolment of the subscriber when PUT operation is executed on UsageMonitoringInformation or PATCH operation on SM Data. | true | true or false | NA |
etagEnabled | Flag to enable or disable Etag support for SM Data | false | true or false | NA |
enableControlledShutdown | Flag to enable or disable operational state changes. | false | true or false | NA |
k8sResources.hpat.supportedVersions | Supported versions for horizontal pod autoscalers. This is updated as per the latest CNE version, which has Kubernetes version 1.25. For older CNE versions, change the value to autoscaling/v1 | autoscaling/v2 | NA | NA |
k8sResources.pdb.supportedVersions | Supported versions for pod disruption budget | policy/v1 | NA | NA |
multipleIgwEnabled | Flag to include multiple criteria, such as TPS and signaling connection, in NF Scoring calculation | true | true or false | NA |
quotaVersion | This value is used to configure the quota entity version in VSA data | 3 | NA | NA |
DynamicQuotaVersion | This value is used to configure the dynamic quota entity version in VSA data | 3 | NA | NA |
dbConflictResolutionEnabled | The configuration enables conflict resolution for UDR subscriber database when installed on a multiple site setup. | false | true or false | NA |
suppressNotificationEnabled | Flag to enable or disable the suppress notification feature. | true | true or false | NA |
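To make the table concrete, the global parameters above map onto the global section of the custom-values.yaml file. The following fragment is an illustrative sketch assembled only from the default values listed in Table 3-1; the exact key nesting is an assumption, so verify it against the sample custom-values.yaml shipped in the Custom_Templates file before use.

```yaml
# Illustrative sketch only: the key nesting is assumed, verify against the
# sample custom-values.yaml available in the Custom_Templates file.
global:
  dockerRegistry: ocudr-registry.us.oracle.com/ocudr
  mysql:
    dbServiceName: mysql-connectivity-service.occne-infra
    port: 3306
    dbname: udrdb
  udrServices: All            # All / nudr-dr / nudr-group-id-map / n5g-eir-eic
  udrTracing:
    enable: false
    host: occne-tracer-jaeger-agent.occne-infra
    port: 14268
  # YAML anchors such as &serviceMeshFlag are reused by the hook sections
  # (preinstall, preUpgrade, and so on) through *serviceMeshFlag aliases;
  # keep the anchors unchanged as noted in the table.
  serviceMeshCheck: &serviceMeshFlag false
  istioSidecarQuitUrl: &istioQuitUrl "http://127.0.0.1:15020/quitquitquit"
```

Overriding a value here propagates to every microservice that references the corresponding anchor, which is why the Notes column repeatedly warns against removing the reference variables.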
3.1.2 nudr-drservice Microservice Parameters
The following table provides the parameters for the nudr-drservice microservice.
Table 3-2 nudr-drservice microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
image.name | Docker Image name | nudr_datarepository_service | NA | NA |
image.tag | Tag or version of the Image | 23.4.2 | NA | NA |
image.pullPolicy | Indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
logging.level.root | Log level of the nudr-drservice pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
deployment.replicaCount | Number of nudr-drservice pods to be maintained by replica set created with deployment | 2 | NA | NA |
minReplicas | Minimum number of pods | 2 | NA | NA |
maxReplicas | Maximum number of pods | 8 | NA | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 1 | NA | This parameter is used for PodDisruptionBudget k8s resource |
service.type | Kubernetes service type for exposing the UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is recommended to always set the default value as ClusterIP. |
service.port.http | The http port to be used in nudr-drservice service | 5001 | NA | NA |
service.port.https | The https port to be used for nudr-drservice service | 5002 | NA | NA |
service.port.management | The actuator management port to be used for nudr-drservice service | 9000 | NA | NA |
resources.requests.cpu | Number of CPUs allocated for nudr-drservice pod during deployment | 2 | NA | NA |
resources.requests.memory | The memory to be allocated for nudr-drservice pod during deployment | 2Gi | NA | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request | *containersLogStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request | *containersCrictlStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.cpu | CPU limit allotted | 2 | NA | NA |
resources.limits.memory | Memory limit allotted | 2Gi | NA | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit | *containersLogStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit | *containersCrictlStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling to create HPA | 80 | NA | NA |
notify.port.http | HTTP port on which notify service is running | 5001 | NA | NA |
notify.port.https | HTTPS port on which notify service is running | 5002 | NA | NA |
serviceToCheck | Essential services on which init container performs readiness check | nudr-config-server,nudr-config | Comma separated microservice names | NA |
hikari.poolsize | MySQL connection pool size. The hikari connection pool size to be created at startup | 100 | NA | NA |
hikari.connectionTimeout | MySQL connection timeout | 1000 | Unit: Milliseconds | NA |
hikari.minimumIdleConnections | Minimum idle MySQL connections maintained | 2 | Valid number, less than the configured pool size | NA |
hikari.idleTimeout | Idle MySQL connection timeout | 4000 | Unit: Milliseconds | NA |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, no timeout is set for database queries. | -1 | Unit: Seconds | NA |
internalTracingEnabled | Flag to enable Jaeger tracing for nudr-drservice pod | false | true/false | NA |
maxConcurrentPushedStreams | Maximum number of concurrent requests that can be pushed per destination | 6000 | NA | NA |
maxRequestsQueuedPerDestination | Maximum number of requests that can be queued per destination | 5000 | NA | NA |
maxConnectionsPerDestination | Maximum connection allowed per destination | 10 | NA | NA |
maxConnectionsPerIp | Maximum connection allowed per IP | 10 | NA | NA |
requestTimeout | Client request timeout | 5000 | Unit: Milliseconds | NA |
connectionTimeout | Client connection timeout | 10000 | Unit: Milliseconds | NA |
idleTimeout | Client connection idle timeout | 0 | Unit: Milliseconds | NA |
dnsRefreshDelay | Server DNS refresh delay | 120000 | Unit: Milliseconds | NA |
pingDelay | Server ping delay | 60 | Unit: Milliseconds | NA |
connectionFailureThreshold | Client connection failure threshold | 10 | NA | NA |
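The HTTP client and server tuning keys above can be set together; the flat sketch below shows the documented defaults (placement within the microservice section is assumed, not confirmed by the chart):

```yaml
# Assumed placement of the HTTP client/server tuning keys
maxConcurrentPushedStreams: 6000
maxRequestsQueuedPerDestination: 5000
maxConnectionsPerDestination: 10
maxConnectionsPerIp: 10
requestTimeout: 5000      # milliseconds
connectionTimeout: 10000  # milliseconds
idleTimeout: 0            # milliseconds
dnsRefreshDelay: 120000   # milliseconds
pingDelay: 60
connectionFailureThreshold: 10
```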
jettyserviceMeshCheck | Flag to enable or disable jetty service mesh check for ASM deployment | false | true or false | NA |
service.customExtension.labels | This can be used to add custom label(s) to nudr-drservice Service | null | NA | NA |
service.customExtension.annotations | This can be used to add custom annotation(s) to nudr-drservice Service | null | NA | NA |
deployment.customExtension.labels | This can be used to add custom label(s) to nudr-drservice Deployment | null | NA | NA |
deployment.customExtension.annotations | This can be used to add custom annotation(s) to nudr-drservice deployment | null | NA | NA |
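The customExtension parameters accept key-value maps. A minimal sketch with illustrative label and annotation names (the names themselves are placeholders, not chart defaults):

```yaml
service:
  customExtension:
    labels:
      team: core-db              # illustrative label
    annotations:
      example.com/owner: udr     # illustrative annotation
deployment:
  customExtension:
    labels: {}
    annotations: {}
```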
startupProbe.failureThreshold | Configurable number of times the startup probe is retried on failure. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 40 | Unit: Seconds | NA |
startupProbe.periodSeconds | Time interval between startup probe checks. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 5 | Unit: Seconds | NA |
extraContainers | This configuration decides extra container support at the service level. The default value defers to what is configured at the global level. | USE_GLOBAL_VALUE | NA | NA |
serviceMeshCheck | Flag to check service mesh. Set to false when side car is not included for this service | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in global section |
istioSidecarReadyUrl | Readiness URL configurable for the side car. Change this URL only if it is different from the side car container URL in this microservice. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in global section |
defaultGroupIdEnabled | Flag to enable or disable the Default Group ID feature | False | True or False | NA |
defaultSLFGroupName | The default group name provisioned. If this is changed through CNC Console, the default group must be reprovisioned with the modified group name. If the change is made through Helm values and a helm upgrade is performed, the default group is created with the updated name. | DefaultGrp | Valid group name | NA |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the globally configured value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
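When tolerationsSetting is ENABLED, the tolerations list follows the standard Kubernetes toleration schema. A sketch with an illustrative taint key and value (not chart defaults):

```yaml
tolerationsSetting: ENABLED
tolerations:
  - key: "node-role.example.com/udr"   # illustrative taint key
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```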
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key for a label for a particular node | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key for a label for a particular node | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; else, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
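The two API versions express the same node constraint in different shapes. A sketch with an illustrative label key and value (not chart defaults):

```yaml
# v1 style: single key/value pair
helmBasedConfigurationNodeSelectorApiVersion: v1
nodeSelector:
  nodeKey: "topology.kubernetes.io/zone"   # illustrative label key
  nodeValue: "zone-a"                      # illustrative label value

# v2 style: standard Kubernetes nodeSelector map
# helmBasedConfigurationNodeSelectorApiVersion: v2
# nodeSelector:
#   topology.kubernetes.io/zone: "zone-a"
```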
subscriptionDataSubscriptionsOnly | NA | False | NA | NA |
onDemandMigrationEnabled | Flag to enable or disable the on demand migration service. | False | True or False | NA |
convertUeidToStandard | Enable to convert ueId key to standard format while provisioning or updating profile. | false | True or False | NA |
eirConfigurations.defaultResponse | Configuration to set the EIC (Equipment Identity Check) response. When there is a PEI match found on EIR but the query parameters SUPI or GPSI do not match, the default value is the EQUIPMENT_UNKNOWN response. This can be updated to WHITELISTED, GREYLISTED, or BLACKLISTED using this configuration. | EQUIPMENT_UNKNOWN | GREYLISTED/BLACKLISTED/WHITELISTED/EQUIPMENT_UNKNOWN | NA |
eirConfigurations.accessLogEnabled | Flag to enable or disable the access Log feature | true | true/false | NA |
onDemandMigrationSigEnabled | Flag to enable or disable on-demand migration for provisioning interface | false | true or false | NA |
eirConfigurations.defaultResponseIMEINotFound | Configuration to set the EIC (Equipment Identity Check) response when the queried PEI is not found on EIR. The default value is the EQUIPMENT_UNKNOWN response. This can be updated to WHITELISTED, GREYLISTED, or BLACKLISTED using this configuration. | EQUIPMENT_UNKNOWN | GREYLISTED/BLACKLISTED/WHITELISTED/EQUIPMENT_UNKNOWN | NA |
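The eirConfigurations block with the documented defaults, as a sketch (key nesting assumed from the parameter names):

```yaml
eirConfigurations:
  defaultResponse: EQUIPMENT_UNKNOWN              # PEI matches, but SUPI/GPSI do not
  defaultResponseIMEINotFound: EQUIPMENT_UNKNOWN  # PEI not found on EIR
  accessLogEnabled: true
```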
notificationThreadPool.corePoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 100 | NA | NA |
notificationThreadPool.maxPoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 1500 | NA | NA |
notificationThreadPool.maxQueueCapacity | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 10000 | NA | NA |
3.1.3 nudr-dr-provservice Microservice Parameters
The following table provides the parameters for the nudr-dr-provservice microservice.
Table 3-3 nudr-dr-provservice microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
image.name | Docker Image name | nudr_datarepository_service | NA | NA |
image.tag | Tag of Image | 23.4.1 | NA | NA |
image.pullPolicy | This setting determines whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
logging.level.root | Log level of the nudr-dr-provservice pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
deployment.replicaCount | Number of nudr-dr-provservice pods to be maintained by the replica set created with the deployment | 2 | NA | NA |
minReplicas | Minimum number of pods | 2 | NA | NA |
maxReplicas | Maximum number of pods | 8 | NA | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 1 | NA | This parameter is used for PodDisruptionBudget Kubernetes resource |
service.type | The Kubernetes service type for exposing the UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is suggested to keep the default value (ClusterIP) |
service.port.http | The HTTP port to be used in the nudr-dr-provservice service | 5001 | NA | NA |
service.port.https | The HTTPS port to be used for the nudr-dr-provservice service | 5002 | NA | NA |
service.port.management | The actuator management port to be used for the nudr-dr-provservice service | 9000 | NA | NA |
resources.requests.cpu | The CPU to be allocated for the nudr-dr-provservice pod during deployment | 2 | NA | NA |
resources.requests.memory | The memory to be allocated for the nudr-dr-provservice pod during deployment | 2Gi | NA | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request | *containersLogStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request | *containersCrictlStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.cpu | CPU limit allotted | 2 | NA | NA |
resources.limits.memory | Memory limit allotted | 2Gi | NA | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit | *containersLogStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit | *containersCrictlStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.target.averageCpuUtil | CPU utilization limit for creating HPA | 80 | NA | NA |
notify.port.http | HTTP port on which notify service is running | 5001 | NA | NA |
notify.port.https | HTTPS port on which notify service is running | 5002 | NA | NA |
serviceToCheck | Essential services on which init container will perform readiness checks. | nudr-config-server,nudr-config | Comma separated microservice names | NA |
hikari.poolsize | The hikari pool connection size to be created at start up | 100 | NA | NA |
hikari.connectionTimeout | MYSQL connection timeout | 1000 | Unit: Milliseconds | NA |
hikari.minimumIdleConnections | Minimum idle MYSQL connections maintained | 2 | Valid number, less than the configured poolsize | NA |
hikari.idleTimeout | Idle MYSQL connections timeout | 4000 | Unit: Milliseconds | NA |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, there will be no timeout set for database queries. | -1 | Unit: Seconds | NA |
internalTracingEnabled | Flag to enable or disable Jaeger tracing for nudr-dr-provservice | false | true/false | NA |
maxConcurrentPushedStreams | Maximum number of concurrent requests that can be pushed per destination | 6000 | NA | NA |
maxRequestsQueuedPerDestination | Maximum number of requests that can be queued per destination | 5000 | NA | NA |
maxConnectionsPerDestination | Maximum connection allowed per destination | 10 | NA | NA |
maxConnectionsPerIp | Maximum connection allowed per IP | 10 | NA | NA |
requestTimeout | Client request timeout | 5000 | Unit: Milliseconds | NA |
connectionTimeout | Client connection timeout | 10000 | Unit: Milliseconds | NA |
idleTimeout | Client connection idle timeout | 0 | Unit: Milliseconds | NA |
dnsRefreshDelay | Server DNS refresh delay | 120000 | Unit: Milliseconds | NA |
pingDelay | Server ping delay | 60 | Unit: Milliseconds | NA |
connectionFailureThreshold | Client connection failure threshold | 10 | NA | NA |
jettyserviceMeshCheck | Flag to enable or disable jetty service mesh check for ASM deployment | false | true/false | NA |
service.customExtension.labels | This can be used to add custom label(s) to the nudr-dr-provservice Service. | null | NA | NA |
service.customExtension.annotations | This can be used to add custom annotation(s) to the nudr-dr-provservice Service. | null | NA | NA |
deployment.customExtension.labels | This can be used to add custom label(s) to the nudr-dr-provservice Deployment. | null | NA | NA |
deployment.customExtension.annotations | This can be used to add custom annotation(s) to the nudr-dr-provservice Deployment. | null | NA | NA |
startupProbe.failureThreshold | Configurable number of times the startup probe is retried on failure. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 40 | Unit: Seconds | NA |
startupProbe.periodSeconds | Time interval between startup probe checks. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 5 | Unit: Seconds | NA |
extraContainers | This configuration decides whether the extra container is used at this microservice level. The default value defers to what is configured at the global level. | USE_GLOBAL_VALUE | NA | NA |
serviceMeshCheck | Enable when deployed in serviceMesh, refer to the serviceMeshCheck in global section. It is set as false when side car is not included for this service | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the side car; refer to istioSidecarReadyUrl in the global section. Change this value only if the URL is different for the side car container in this microservice. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in global section reference |
onDemandMigrationEnabled | Flag to enable or disable the on demand migration service | False | True or False | NA |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the globally configured value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
tolerations | When tolerationsSetting is ENABLED, tolerations are configured here. | [] | NA | NA |
helmBasedConfigurationNodeSelectorApiVersion | Indicates the Node Selector API version setting. | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | Indicates the NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all the microservices. | ' ' | Valid key for a label for a particular node | NA |
nodeSelector.nodeValue | Indicates the NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all the microservices. | ' ' | Valid value for the above key for a label for a particular node | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; else, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
convertUeidToStandard | Enable to convert ueId key to standard format while provisioning or updating profile. | false | true/false | NA |
onDemandMigrationProvEnabled | Flag to enable or disable on-demand migration for provisioning interface | false | true or false | NA |
notificationThreadPool.corePoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 100 | NA | NA |
notificationThreadPool.maxPoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 1500 | NA | NA |
notificationThreadPool.maxQueueCapacity | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 10000 | NA | NA |
3.1.4 nudr-notify-service Microservice Parameters
The following table provides the parameters for the nudr-notify-service microservice.
Table 3-4 nudr-notify-service microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable nudr-notify-service | true | true or false | For SLF deployment, this microservice must not be enabled. |
image.name | Docker Image name | nudr_notify_service | NA | NA |
image.tag | Tag or version of Image | 23.4.2 | NA | NA |
image.pullPolicy | A value that defines whether to pull the image or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
notification.retrycount | Number of notification attempts in case of notification failures. | 0 | Range: 1 - 10 | Retry is based on notification.retryerrorcodes configuration |
notification.retryinterval | The retry interval for notifications in case of failure | 5 | Range: 1 - 60; Unit: Seconds | NA |
notification.retryerrorcodes | Error codes eligible for retry notifications in case of failures | "400,429,500,503" | Valid HTTP status codes comma separated | Give comma separated error codes |
notification.retryNotificationExpiry | Expiry time for retry notifications | 86400 | NA | NA |
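The notification retry parameters above can be sketched in custom-values.yaml as follows (key nesting assumed from the parameter names; retrycount of 3 is an illustrative value within the documented range, not the default):

```yaml
notification:
  retrycount: 3                       # retries per failed notification (range 1 - 10)
  retryinterval: 5                    # seconds between retries
  retryerrorcodes: "400,429,500,503"  # comma-separated HTTP codes eligible for retry
  retryNotificationExpiry: 86400      # expiry time for retry notifications
```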
hikari.poolsize | MySQL connection pool size. The hikari connection pool size to be created at start up. | 10 | NA | NA |
hikari.connectionTimeout | MYSQL connection timeout | 60000 | Unit: Milliseconds | NA |
hikari.minimumIdleConnections | Minimum idle MYSQL connections maintained | 2 | Valid number, less than the configured poolsize | NA |
hikari.idleTimeout | Idle MYSQL connections timeout | 4000 | Unit: Milliseconds | NA |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, there will be no timeout set for database queries. | -1 | Unit: Seconds | NA |
serviceToCheck | Essential services on which init container performs readiness check | nudr-drservice | Comma separated microservice names | NA |
internalTracingEnabled | Flag to enable Jaeger tracing for nudr-notify-service | false | true/false | NA |
maxConcurrentPushedStreams | Maximum number of concurrent requests that can be pushed per destination | 6000 | NA | NA |
maxRequestsQueuedPerDestination | Maximum number of requests that can be queued per destination | 5000 | NA | NA |
maxConnectionsPerDestination | Maximum connection allowed per destination | 10 | NA | NA |
maxConnectionsPerIp | Maximum connection allowed per IP | 10 | NA | NA |
requestTimeout | Client request timeout | 5000 | Unit: Milliseconds | NA |
connectionTimeout | Client connection timeout | 10000 | Unit: Milliseconds | NA |
idleTimeout | Client connection idle timeout | 0 | Unit: Milliseconds | NA |
dnsRefreshDelay | Server DNS refresh delay | 120000 | Unit: Milliseconds | NA |
pingDelay | Server ping delay | 60 | Unit: Milliseconds | NA |
connectionFailureThreshold | Client connection failure threshold | 10 | NA | NA |
jettyserviceMeshCheck | Flag to enable or disable jetty service mesh check for ASM deployment | false | true or false | NA |
logging.level.root | Log level of the notify service pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
deployment.replicaCount | Number of nudr-notify-service pods to be maintained by replica set created with deployment | 2 | NA | NA |
minReplicas | Minimum number of pods | 2 | NA | NA |
maxReplicas | Maximum number of pods | 4 | NA | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 1 | NA | This parameter is used for PodDisruptionBudget Kubernetes resource |
http.proxy.port | Port for connecting to Egress Gateway service | *egressport | NA | Do not change the reference |
service.type | Kubernetes service type for exposing the UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is suggested to keep the default value (ClusterIP) |
service.port.http | The http port to be used in notify service to receive signals from nudr-notify-service pod | 5001 | NA | NA |
service.port.https | The https port to be used in notify service to receive signals from nudr-notify-service pod | 5002 | NA | NA |
service.port.management | The actuator management port to be used for notify service | 9000 | NA | NA |
resources.requests.cpu | Number of CPUs allocated for notify service pod during deployment | 2 | NA | NA |
resources.requests.memory | Memory allocated for nudr-notify-service pod during deployment | 2Gi | NA | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request | *containersLogStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request | *containersCrictlStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.cpu | Number of CPUs allocated during deployment | 2 | NA | NA |
resources.limits.memory | Memory allotment limitation | 2Gi | NA | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit | *containersLogStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit | *containersCrictlStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling (creating HPA) | 80 | NA | NA |
service.customExtension.labels | Custom Labels added to nudr-notify-service specific service | null | NA | NA |
service.customExtension.annotations | This can be used to add custom annotation(s) to nudr-notify-service | null | NA | NA |
deployment.customExtension.labels | This can be used to add custom label(s) to nudr-notify-service deployment | null | NA | NA |
deployment.customExtension.annotations | This can be used to add custom annotation(s) to nudr-notify-service deployment | null | NA | NA |
startupProbe.failureThreshold | Configurable number of times the startup probe is retried on failure. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 40 | Unit: Seconds | NA |
startupProbe.periodSeconds | Time interval between startup probe checks. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune these parameters. | 5 | Unit: Seconds | NA |
extraContainers | This configuration decides extra container support at the service level. The default value defers to what is configured at the global level. | USE_GLOBAL_VALUE | NA | NA |
serviceMeshCheck | Enable when deployed in serviceMesh. Set it to false when side car is not included for this service | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the side car. Change only if the URL is different for the side car container in this microservice. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in global section reference |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the globally configured value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key for a label for a particular node | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key for a label for a particular node | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; else, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
egressThreadPool.corePoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 100 | NA | NA |
egressThreadPool.maxPoolSize | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 1500 | NA | NA |
egressThreadPool.maxQueueCapacity | Thread pool configuration for notification processing. Note: This is an engineering configuration; you must not change the default value. | 10000 | NA | NA |
3.1.5 nrf-client-service Microservice Parameters
The configmapApplicationConfig.profile.appProfiles parameter is set based on the UDR mode. The default retry and health check configurations are:
nrfRetryConfig=[{"serviceRequestType":"ALL_REQUESTS","primaryNRFRetryCount":1,"nonPrimaryNRFRetryCount":1,"alternateNRFRetryCount":1,"errorReasonsForFailure":["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"],"gatewayErrorCodes":["503"],"requestTimeout":10},{"serviceRequestType":"AUTONOMOUS_NFREGISTER","primaryNRFRetryCount":1,"nonPrimaryNRFRetryCount":1,"alternateNRFRetryCount":-1,"errorReasonsForFailure":["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"],"gatewayErrorCodes":["503"],"requestTimeout":10}]
healthCheckConfig={"healthCheckCount":-1,"healthCheckInterval":5,"requestTimeout":10,"errorReasonsForFailure":["503","504","500","SocketTimeoutException","IOException"],"gatewayErrorCodes":[]}
Note:
The InstanceId passed in the profile should be the same as that mentioned in the global section.
Table 3-5 appProfiles Details
UDSDF ENABLED | MODE | PROFILE |
---|---|---|
FALSE | ALL | [{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["PCF","UDM","NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["PCF","UDM"],"serviceInstanceId":"ae870316-384d-458a-bd45-025c9e748976","serviceName":"nudr-dr","priority":10,"capacity":500},{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nudr-group-id-map","priority":10,"capacity":500}],"udrInfo":{"supportedDataSets":["POLICY","SUBSCRIPTION"],"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}] |
FALSE | NUDR-DR | [{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","nfType":"UDR","allowedNfTypes":["PCF","UDM","NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["PCF","UDM"],"serviceInstanceId":"ae870316-384d-458a-bd45-025c9e748976","serviceName":"nudr-dr","priority":10,"capacity":500}],"udrInfo":{"supportedDataSets":["POLICY","SUBSCRIPTION"],"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}] |
FALSE | NUDR-GROUP-ID-MAP | [{"nfInstanceId":"5a7bd676-ceeb-44bb-95e0-f6a55a328b03","nfStatus":"REGISTERED","fqdn":"udr1-ingressgateway.atspatch-1111","nfType":"UDR","allowedNfTypes":["NRF"],"plmnList":[{"mnc":"14","mcc":"310"}],"priority":10,"capacity":500,"load":0,"locality":"bangalore","nfServices":[{"load":0,"scheme":"http","versions":[{"apiFullVersion":"2.1.0.alpha-3","apiVersionInUri":"v1"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","ipEndPoints":[{"port":"80","ipv4Address":"10.0.0.0","transport":"TCP"}],"nfServiceStatus":"REGISTERED","allowedNfTypes":["NRF"],"serviceInstanceId":"547d42af-628a-4d5d-a8bd-38c4ba672682","serviceName":"nudr-group-id-map","priority":10,"capacity":500}],"udrInfo":{"groupId":"udr-1","externalGroupIdentifiersRanges":[{"start":"10000000000","end":"20000000000"}],"supiRanges":[{"start":"10000000000","end":"20000000000"}],"gpsiRanges":[{"start":"10000000000","end":"20000000000"}]},"heartBeatTimer":90,"nfServicePersistence":false,"nfProfileChangesSupportInd":false,"nfSetIdList":["setxyz.udrset.5gc.mnc012.mcc345"]}] |
FALSE | N5G-EIR-EIC (EIR Mode) | [{"nfInstanceId": "5a7bd676-ceeb-44bb-95e0-f6a55a328b03", "nfStatus": "REGISTERED", "fqdn": "ocudr-ingressgateway.myudr.svc.cluster.local", "nfType": "5G_EIR", "allowedNfTypes": ["AMF"], "plmnList": [{ "mnc": "14", "mcc": "310" }], "priority": 10, "capacity": 500, "load": 0, "locality": "bangalore", "nfServices": [{ "load": 0, "scheme": "http", "versions": [{ "apiFullVersion": "2.1.0.alpha-3", "apiVersionInUri": "v1" }], "fqdn": "ocudr-ingressgateway.myudr.svc.cluster.local", "ipEndPoints": [{ "port": "80", "ipv4Address": "10.0.0.0", "transport": "TCP" }], "nfServiceStatus": "REGISTERED", "allowedNfTypes": ["AMF"], "serviceInstanceId": "ae870316-384d-458a-bd45-025c9e748976", "serviceName": "n5g-eir-eic", "priority": 10, "capacity": 500 }], "heartBeatTimer": 90, "nfServicePersistence": false, "nfProfileChangesSupportInd": false, "nfSetIdList": ["setxyz.udrset.5gc.mnc012.mcc345"]}] |
TRUE | NA | [{"nfInstanceId":"abae35e5-cc45-4016-8dd4-89598e5311b9","nfType":"UDSF","nfStatus":"REGISTERED","heartBeatTimer":60,"plmnList":[{"mnc":"14","mcc":"310"}],"fqdn":"ocudr-ingressgateway.myudr.svc.cluster.local","allowedNfTypes":["PCF","UDM"],"capacity":1000,"load":0,"priority":10,"udsfInfo":{"groupId":"udr-1","supiRanges":[{"start":"10000000000","end":"20000000000"}]},"nfServicePersistence":false,"nfServices":[{"serviceInstanceId":"b066742a-c270-4c91-b59c-23afe81fa715","serviceName":"nudsf-dr","nfServiceStatus":"REGISTERED","priority":10,"capacity":1000,"scheme":"http","versions":[{"apiVersionInUri":"v1","apiFullVersion":"1.1.0.0"}]}],"nfProfileChangesSupportInd":false}] |
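The registered NfProfiles above are plain JSON, so they can be inspected programmatically, for example to check which SUPI ranges a UDR instance serves. A minimal sketch (the profile below is a trimmed copy of the NUDR-DR sample above, keeping only the fields used here):

```python
import json

# Trimmed copy of the NUDR-DR sample NfProfile (only the fields used below)
PROFILE_JSON = """
[{"nfInstanceId": "5a7bd676-ceeb-44bb-95e0-f6a55a328b03",
  "nfType": "UDR",
  "nfStatus": "REGISTERED",
  "heartBeatTimer": 90,
  "udrInfo": {"groupId": "udr-1",
              "supiRanges": [{"start": "10000000000", "end": "20000000000"}]}}]
"""

def supi_in_range(profile: dict, supi: str) -> bool:
    """Check whether a SUPI falls inside any supiRange of the profile.
    Numeric comparison avoids surprises with bounds of differing lengths."""
    for r in profile.get("udrInfo", {}).get("supiRanges", []):
        if int(r["start"]) <= int(supi) <= int(r["end"]):
            return True
    return False

profiles = json.loads(PROFILE_JSON)
for p in profiles:
    print(p["nfType"], p["nfStatus"], supi_in_range(p, "15000000000"))
# Prints: UDR REGISTERED True
```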
Other nrf-client-service configuration parameters are as follows:
Table 3-6 nrf-client-service parameters
Parameter | Description | Default Value | M/O/C | Ranges or Possible Values |
---|---|---|---|---|
configmapApplicationConfig.profile.primaryNrfApiRoot | Primary NRF Hostname and Port | nrfDeployName-nrf-1-nrf-simulator.nrfDeployNamespace-nrf-1.svc:5808 | M | NA |
configmapApplicationConfig.profile.secondaryNrfApiRoot | Secondary NRF Hostname and Port | - | NA | NA |
configmapApplicationConfig.profile.nrfScheme | Scheme of the primary and secondary NRF (http or https) | http | NA | http, https |
configmapApplicationConfig.profile.retryAfterTime | Default downtime (as an ISO 8601 duration) of an NRF detected to be unavailable | PT120S | NA | NA |
configmapApplicationConfig.profile.nrfClientType | The NfType of the registering NF | NRF | NA | NA |
configmapApplicationConfig.profile.nrfClientSubscribeTypes | The NfType for which the NF wants to subscribe to the NRF | PCF | NA | NA |
configmapApplicationConfig.profile.appProfiles | The NfProfile of the NF to be registered with NRF | [{}] | NA | NA |
configmapApplicationConfig.profile.enableF3 | Support for 29.510 Release 15.3 | true | NA | NA |
configmapApplicationConfig.profile.enableF5 | Support for 29.510 Release 15.5 | true | NA | NA |
configmapApplicationConfig.profile.registrationRetryInterval | Retry Interval after a failed autonomous registration request | 5000 | NA | 1.4.0 |
configmapApplicationConfig.profile.subscriptionRetryInterval | Retry Interval after a failed autonomous subscription request | 5000 | NA | 1.4.0 |
configmapApplicationConfig.profile.discoveryRetryInterval | Retry Interval after a failed autonomous discovery request | 5000 | NA | 1.4.0 |
configmapApplicationConfig.profile.renewalTimeBeforeExpiry | Time Period(seconds) before the Subscription Validity time expires | 3600 | NA | NA |
configmapApplicationConfig.profile.validityTime | The default validity time(days) for subscriptions | 30 | NA | NA |
configmapApplicationConfig.profile.enableSubscriptionAutoRenewal | Enable Renewal of Subscriptions automatically | true | NA | NA |
configmapApplicationConfig.profile.nfHeartbeatRate | This value specifies the rate at which the NF shall heartbeat with the NRF, configured as a percentage (1-100). For example, if the heartBeatTimer is 60s, the NF heartbeats every nfHeartBeatRate * 60/100 seconds. | - | NA | NA |
configmapApplicationConfig.profile.acceptAdditionalAttributes | Enable additionalAttributes as part of 29.510 Release 15.5 | false | NA | NA |
configmapApplicationConfig.profile.retryForCongestion | The duration (seconds) after which nrf-client should retry an NRF server found to be congested | 5 | NA | NA |
configmapApplicationConfig.profile.supportedDataSetId | The data-set value to be used in queryParams for NFs autonomous/on-demand discovery | - | NA | NA |
configmapApplicationConfig.profile.enableVirtualNrfResolution | Enable virtual NRF session retry by Alternate routing service | false | NA | NA |
configmapApplicationConfig.profile.virtualNrfFqdn | Virtual NRF FQDN used to query the static list of routes | nrf.oracle.com | NA | NA |
configmapApplicationConfig.profile.virtualNrfScheme | Scheme of the virtual NRF (http or https) | http | NA | NA |
configmapApplicationConfig.profile.virtualNrfPort | - | - | NA | 1.4.0 |
configmapApplicationConfig.profile.useAlternateScpOnAlternateRouting | Enable the use of SCP on the alternate routing service | - | NA | NA |
configmapApplicationConfig.profile.subscriberNotificationRetry | Number of health status notification retries to a subscriber | 2 | NA | 1.4.0 |
configmapApplicationConfig.profile.timeoutUrlPaths | Timeout configuration for non-3GPP URL resource path(s) | - | NA | NA |
configmapApplicationConfig.profile.timerProfileJson | Timer Profile JSON Configuration | - | NA | NA |
configmapApplicationConfig.profile.requestTimeoutGracePeriod | The grace period for which nrf-client waits for a response from the NRF. This value is added to the value configured in configmapApplicationConfig.profile.requestTimeout. Unit: seconds. From Release 1.6.x, both the 2s (seconds) and 50ms (milliseconds) formats are supported. | 2 | NA | 1.4.1 |
configmapApplicationConfig.profile.nrfRetryConfig | Configurations required for the NRF retry mechanism (NRF Alternate Route Retry feature) | - | NA | 1.4.0 |
configmapApplicationConfig.profile.serviceRequestType | Type of service operation for which the configuration is applicable (NRF Alternate Route Retry feature) | ALL_REQUESTS | NA | 1.4.0 |
configmapApplicationConfig.profile.primaryNRFRetryCount | Number of retries for the primary instance (NRF Alternate Route Retry feature) | 1 | NA | 1.4.0 |
configmapApplicationConfig.profile.nonPrimaryNRFRetryCount | Number of retries for a non-primary instance (NRF Alternate Route Retry feature) | 1 | NA | 1.4.0 |
configmapApplicationConfig.profile.alternateNRFRetryCount | Number of retry attempts (NRF Alternate Route Retry feature) | 1 | NA | 1.4.0 |
configmapApplicationConfig.profile.errorReasonsForFailure | httpStatusCode or error conditions for which retry shall be attempted (NRF Alternate Route Retry feature) | ["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"] | NA | 1.4.1 |
configmapApplicationConfig.profile.requestTimeout | Timeout period (seconds) within which no response is received from the NRF (NRF Alternate Route Retry feature) | 10 | NA | 1.4.0 |
configmapApplicationConfig.profile.gatewayErrorCodes | httpStatusCode sent by the gateway for which retry shall be attempted (NRF Alternate Route Retry feature) | ["503"] | NA | 1.4.0 |
configmapApplicationConfig.profile.healthCheckConfig | Configurations required for the health check of NRFs (NRF Alternate Route Retry feature) | - | NA | 1.4.0 |
configmapApplicationConfig.profile.healthCheckCount | Consecutive success/failure operation count (default -1 means disabled) (NRF Alternate Route Retry feature) | 1 | NA | 1.4.0 |
configmapApplicationConfig.profile.healthCheckInterval | Interval duration in seconds to perform the health check (NRF Alternate Route Retry feature) | 5 | NA | 1.4.0 |
configmapApplicationConfig.profile.requestTimeout | Timeout period (seconds) within which no response is received from the NRF (NRF Alternate Route Retry feature) | 10 | NA | 1.4.0 |
configmapApplicationConfig.profile.errorReasonsForFailure | httpStatusCode or error conditions for which the NRF is considered unhealthy (NRF Alternate Route Retry feature) | ["503","504","500","SocketTimeoutException","JsonProcessingException","UnknownHostException","NoRouteToHostException","IOException"] | NA | 1.4.1 |
configmapApplicationConfig.profile.gatewayErrorCodes | httpStatusCode sent by the gateway for which the NRF shall be considered unhealthy (NRF Alternate Route Retry feature) | [] | NA | 1.4.0 |
configmapApplicationConfig.profile.enableNrfRetry | Enable NRF retry | - | NA | 1.4.1 |
configmapApplicationConfig.profile.maxNrfRetries | Number of retry attempts | - | NA | 1.4.1 |
configmapApplicationConfig.profile.enableNrfAlternateRouting | Enable the NRF alternate routing service | - | NA | 1.4.1 |
configmapApplicationConfig.profile.alternateRoutingErrorCodes | Set alternate routing error codes | - | NA | 1.4.1 |
nrf-client-nfdiscovery | Deployment-specific configuration for the NRF Client discovery microservice (the parameters below apply to this section) | - | NA | NA |
configmapApplicationConfig | A reference to the nrf-client.configMapApplication | *configRef | NA | NA |
nfProfileConfigMode | Configuration mode for NfProfile | REST | NA | REST, HELM |
image | NRF Client Microservice image name | nrf-client | NA | NA |
imageTag | NRF Client Microservice image tag | 23.4.5 | NA | NA |
envJaegerSamplerParam | Jaeger sampler parameter | 1 | NA | NA |
envJaegerSamplerType | Jaeger sampler type | ratelimiting | NA | NA |
envJaegerServiceName | Jaeger service name for tracing | nrf-client-nfdiscovery | NA | NA |
cpuRequest | CPU request | 2 | NA | NA |
cpuLimit | CPU limit | 2 | NA | NA |
memoryRequest | Memory request | 1Gi | NA | NA |
memoryLimit | Memory limit | 1Gi | NA | NA |
minReplicas | Min replicas to scale to maintain an average CPU utilization | 2 | NA | NA |
maxReplicas | Max replicas to scale to maintain an average CPU utilization | 5 | NA | NA |
averageCpuUtil | Average CPU utilization target for autoscaling | 80 | NA | NA |
type | Kubernetes service type | ClusterIP | NA | NA |
cacheDiscoveryResults | Set to true if the discovery results should be cached. | true | NA | NA |
envDiscoveryServicePort | Discovery Service Port used for subscribing to management Service | 5910 | NA | NA |
envManagementServicePort | Management Service Port used to send subscriptions to the Management Service | 5910 | NA | NA |
commonCfgClient.enabled | Flag to enable/disable dynamic logging using common configuration service | false | NA | NA |
commonCfgServer.configServerSvcName | The NF configuration service name or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. Do not comment this line if you want to deploy both config-server and APIGW in the same namespace simultaneously; otherwise, comment this line and use 'host'. | 'common-config-server' | NA | NA |
commonCfgServer.host | The host name of the NF configuration service or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 'common-config-server' | NA | NA |
commonCfgServer.port | The port of the NF configuration service or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 80 | NA | NA |
commonCfgServer.pollingInterval | The interval (in ms) at which the discovery service shall poll the configuration service to check for updates. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 5000 | NA | NA |
dbConfig.dbHost | The database host to which the Helm hooks shall connect to load the JSON schema and default configurations. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 10.75.225.4 | NA | NA |
dbConfig.dbPort | The database port to which the Helm hooks shall connect to load the JSON schema and default configurations. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 3306 | NA | NA |
dbConfig.secretName | The database secret. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 'nrfclient-mysql' | NA | NA |
dbConfig.dbName | The database name used to store the common configuration. This attribute shall be configured only if commonCfgClient.enabled is set to true. | ocpm_config_server | NA | NA |
dbConfig.dbUNameLiteral | The database user name literal that shall be used as per the configured secrets. This attribute shall be configured only if commonCfgClient.enabled is set to true. | mysql-username | NA | NA |
dbConfig.dbPwdLiteral | The database password literal that shall be used as per the configured secrets. This attribute shall be configured only if commonCfgClient.enabled is set to true. | mysql-password | NA | NA |
logging.level.root | Default system log level | WARN | NA | NA |
logging.level.nrfclient | Default application log level | WARN | NA | NA |
serviceMeshCheck | This attribute enables service mesh when used with an ASM deployment. | false | NA | NA |
istioSidecarQuitUrl | The sidecar (Istio quit URL) when deployed in a service mesh. This value is considered only when serviceMeshCheck is true. Default: http://127.0.0.1:15000/quitquitquit | "" | NA | NA |
istioSidecarReadyUrl | The sidecar (Istio ready URL) when deployed in a service mesh. This value is considered only when serviceMeshCheck is true. Default: http://127.0.0.1:15000/ready | "" | NA | NA |
metricPrefix | A prefix that shall be added to all metric names. By default, this contains the value configured in the global section metricPrefix. | "oc" | NA | NA |
metricSuffix | A suffix that shall be added to all metric names. By default, this contains the value configured in the global section metricSuffix. | "" | NA | NA |
cacheServicePortStart | Coherence Port for communication | 8095 | NA | NA |
cacheServicePortEnd | Coherence Port for communication | 8096 | NA | NA |
nrf-client-nfmanagement | Deployment-specific configuration for the NRF Client management microservice (the parameters below apply to this section) | - | NA | NA |
configmapApplicationConfig | A reference to the nrf-client.configMapApplication | *configRef | NA | NA |
nfProfileConfigMode | Configuration mode for NfProfile | REST | NA | REST, HELM |
image | NRF Client management microservice image name | nrf-client | NA | NA |
imageTag | NRF Client Microservice image tag | 23.4.5 | NA | NA |
envJaegerSamplerParam | Jaeger sampler parameter | '1' | NA | NA |
envJaegerSamplerType | Jaeger sampler type | ratelimiting | NA | NA |
envJaegerServiceName | Jaeger service name for tracing | nrf-client-nfmanagement | NA | NA |
replicas | Number of nrf-client-nfmanagement pods | 1 | NA | NA |
cpuRequest | CPU Request | 1 | NA | NA |
cpuLimit | CPU Limit | 1 | NA | NA |
memoryRequest | Memory Request | 1Gi | NA | NA |
memoryLimit | Memory Limit | 1Gi | NA | NA |
type | Kubernetes service type | ClusterIP | NA | NA |
commonCfgClient.enabled | Flag to enable/disable dynamic logging using common configuration service | false | NA | NA |
commonCfgServer.configServerSvcName | The NF configuration service name or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. Do not comment this line if you want to deploy both config-server and APIGW in the same namespace simultaneously; otherwise, comment this line and use 'host'. | 'common-config-server' | NA | NA |
commonCfgServer.host | The host name of the NF configuration service or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 'common-config-server' | NA | NA |
commonCfgServer.port | The port of the NF configuration service or the common config service. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 80 | NA | NA |
commonCfgServer.pollingInterval | The interval (in ms) at which the discovery service shall poll the configuration service to check for updates. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 5000 | NA | NA |
dbConfig.dbHost | The database host to which the Helm hooks shall connect to load the JSON schema and default configurations. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 10.75.225.4 | NA | NA |
dbConfig.dbPort | The database port to which the Helm hooks shall connect to load the JSON schema and default configurations. This attribute shall be configured only if commonCfgClient.enabled is set to true. | 3306 | NA | NA |
dbConfig.secretName | The database secret. This attribute shall be configured only if commonCfgClient.enabled is set to true. | nrfclient-mysql | NA | NA |
dbConfig.dbName | The database name used to store the common configuration. This attribute shall be configured only if commonCfgClient.enabled is set to true. | ocpm_config_server | NA | NA |
dbConfig.dbUNameLiteral | The database user name literal that shall be used as per the configured secrets. This attribute shall be configured only if commonCfgClient.enabled is set to true. | mysql-username | NA | NA |
dbConfig.dbPwdLiteral | The database password literal that shall be used as per the configured secrets. This attribute shall be configured only if commonCfgClient.enabled is set to true. | mysql-password | NA | NA |
logging.level.root | Default system log level | WARN | NA | NA |
logging.level.nrfclient | Default application log level | WARN | NA | NA |
serviceMeshCheck | This attribute enables service mesh when used with an ASM deployment. | false | NA | NA |
istioSidecarQuitUrl | The sidecar (Istio quit URL) when deployed in a service mesh. This value is considered only when serviceMeshCheck is true. Default: http://127.0.0.1:15000/quitquitquit | "" | NA | NA |
istioSidecarReadyUrl | The sidecar (Istio ready URL) when deployed in a service mesh. This value is considered only when serviceMeshCheck is true. Default: http://127.0.0.1:15000/ready | "" | NA | NA |
metricPrefix | A prefix that shall be added to all metric names. By default, this contains the value configured in the global section metricPrefix. | "oc" | NA | NA |
metricSuffix | A suffix that shall be added to all metric names. By default, this contains the value configured in the global section metricSuffix. | "" | NA | NA |
cacheServicePortStart | Coherence Port for communication | 8095 | NA | NA |
cacheServicePortEnd | Coherence Port for communication | 8096 | NA | NA |
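Taken together, the nrf-client parameters above map onto a custom-values.yaml structure. The fragment below is a hypothetical sketch: the parameter names and defaults come from Table 3-6, but the top-level section keys and exact nesting are assumptions and must be verified against the sample custom-values.yaml file shipped in Custom_Templates.

```yaml
# Hypothetical custom-values.yaml fragment -- verify nesting against the
# sample file in Custom_Templates before use.
nrf-client:
  configmapApplicationConfig:
    profile:
      nrfScheme: http
      retryAfterTime: PT120S              # downtime of an unavailable NRF
      registrationRetryInterval: 5000     # ms
      subscriptionRetryInterval: 5000     # ms
      renewalTimeBeforeExpiry: 3600       # seconds before subscription expiry
      validityTime: 30                    # days
      enableSubscriptionAutoRenewal: true
nrf-client-nfdiscovery:
  minReplicas: 2
  maxReplicas: 5
  cacheDiscoveryResults: true
nrf-client-nfmanagement:
  replicas: 1
```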
3.1.6 nudr-config Microservice Parameters
The following table provides the parameters for the nudr-config microservice.
Table 3-7 nudr-config microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable nudr-config service | *configServerEnabled | NA | Do not change this parameter value |
logging.level.root | Log level of the nudr-config pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
image.name | Docker Image name | nudr_config | NA | NA |
service.customExtension.labels | Use this to add custom label(s) to nudr-config service | null | NA | NA |
service.customExtension.annotations | Use this to add custom annotation(s) to nudr-config service | null | NA | NA |
deployment.customExtension.labels | Use this to add custom label(s) to nudr-config deployment | null | NA | NA |
deployment.customExtension.annotations | Use this to add custom annotation(s) to nudr-config deployment | null | NA | NA |
hikari.poolsize | Configuration of the MySQL connection pool size | 25 | NA | NA |
hikari.connectionTimeout | MySQL connection timeout | 1000 | Unit: Milliseconds | NA |
hikari.minimumIdleConnections | Minimum idle MySQL connections maintained | 2 | Valid number, not greater than the configured poolsize | NA |
hikari.idleTimeout | Idle MySQL connection timeout | 4000 | Unit: Milliseconds | NA |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, there will be no timeout set for database queries. | -1 | Unit: Seconds | NA |
service.type | Kubernetes service type for exposing the UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is recommended to keep the default value (ClusterIP) |
image.pullPolicy | This setting indicates whether the image needs to be pulled | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
service.port.management | Actuator management port to be used for nudr-config service | 9000 | NA | NA |
service.port.https | The https port to be used for nudr-config service | 5002 | NA | NA |
service.port.http | The http port to be used in nudr-config service | 5001 | NA | NA |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling (creating HPA) | 80 | NA | NA |
resources.requests.memory | Memory allocated for nudr-config pod during deployment | 2Gi | NA | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request | *containersLogStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request | *containersCrictlStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.memory | Memory allotment limit | 2Gi | NA | NA |
resources.requests.cpu | CPUs allocated for nudr-config pod during deployment | 2 | NA | NA |
resources.limits.cpu | CPU allotment limitation | 2 | NA | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit | *containersLogStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit | *containersCrictlStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
image.tag | Tag or version of Image | 23.4.2 | NA | NA |
deployment.replicaCount | Number of nudr-config pods to be maintained by replica set created with deployment | 2 | NA | NA |
minReplicas | Minimum number of pods | 2 | NA | NA |
maxReplicas | Maximum number of pods | 2 | NA | NA |
startupProbe.failureThreshold | The configurable number of times the startup probe is retried on failure. Note: Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune these parameters. | 40 | NA | NA |
startupProbe.periodSeconds | The time interval between startup probe checks. Note: Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune these parameters. | 5 | Unit: Seconds | NA |
extraContainers | This configuration decides service-level extraContainer support. The default value depends on what is configured at the global level. | USE_GLOBAL_VALUE | NA | NA |
serviceMeshCheck | Enable when deployed in a service mesh. Set it to false when the sidecar is not included for this service. | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar. Change only if the URL is different for the sidecar container in this microservice. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global nodeSelection configured value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label for a particular node | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key of a label for a particular node | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; otherwise, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
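As an illustration of the scheduling-related parameters at the end of Table 3-7, the following hypothetical custom-values.yaml fragment shows service-level tolerations with a v2-style nodeSelector. The top-level key nudr-config, the taint key/value, and the node label are placeholders; verify the actual structure against the sample file in Custom_Templates.

```yaml
# Hypothetical sketch -- placeholder keys and labels, not a verified layout.
nudr-config:
  tolerationsSetting: ENABLED
  tolerations:
    - key: "dedicated"        # placeholder taint key
      operator: "Equal"
      value: "udr"            # placeholder taint value
      effect: "NoSchedule"
  # v2-style nodeSelector block (used when the API version below is v2)
  helmBasedConfigurationNodeSelectorApiVersion: v2
  nodeSelection: ENABLED
  nodeSelector:
    kubernetes.io/os: linux   # placeholder node label
```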
3.1.7 nudr-config-server Microservice Parameters
The following table provides the parameters for the nudr-config-server microservice.
Table 3-8 nudr-config-server microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable nudr-config-server service | *configServerEnabled | NA | Do not change this default value |
image | Image name of config-server | ocpm_config_server | NA | NA |
imageTag | Tag or version of Image | 23.4.9 | NA | NA |
pullPolicy | It helps decide the policy for the image pull | IfNotPresent | Allowed Values: IfNotPresent, Never, Always | NA |
global.nfName | NF name used to add with config server service name | nudr | NA | NA |
global.envJaegerAgentHost | Host FQDN for Jaeger agent service for config-server tracing | ' ' | NA | NA |
global.envJaegerAgentPort | Port to connect to Jaeger agent for config-server tracing | 6831 | Valid Port | NA |
global.serviceMeshEnabled | Enable when deployed in serviceMesh. Set this parameter to false when side car is not included for this service | *serviceMeshFlag | NA | Do not change this value unless it is different from the value configured in global section reference |
global.istioSidecarQuitUrl | Quit URL configurable for the sidecar. Change only if the URL is different for the sidecar container in this microservice. | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
global.istioSidecarReadyUrl | Readiness URL configurable for the sidecar. Change this parameter value only if the URL is different for the sidecar container in this microservice. | *istioReadyUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
global.logStorage | Log storage for ephemeral storage request | *containersLogStorageRequestsRef | Unit MB | Do not change reference unless a different value needs to be configured |
global.crictlStorage | Crictl storage for ephemeral storage request | *containersCrictlStorageRequestsRef | Unit MB | Do not change reference unless a different value needs to be configured |
envLoggingLevelApp | Log level of the nudr-config-server pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
replicas | Number of nudr-config-server pods to be maintained by replica set created with deployment | 1 | NA | NA |
minAvailable | Number of pods that must be available, even during a disruption | 0 | NA | This parameter is used for PodDisruptionBudget Kubernetes resource |
service.type | Kubernetes service type for exposing the UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is recommended to keep the default value (ClusterIP) |
resources.requests.cpu | Number of CPUs allocated for nudr-config-server pod during deployment | 2 | NA | NA |
resources.requests.memory | Memory allocated for nudr-config-server pod during deployment | 512Mi | NA | NA |
resources.limits.cpu | CPU allotment limitation | 2 | NA | NA |
resources.limits.memory | Memory allotment limitation | 2Gi | NA | NA |
resources.limits.ephemeralStorage | Ephemeral storage allocation limits | 1024Mi | Unit MB | NA |
service.customExtension.labels | Use this parameter to add custom label(s) to nudr-config-server service | null | NA | NA |
service.customExtension.annotations | Use this parameter to add custom annotation(s) to nudr-config-server service | null | NA | NA |
deployment.customExtension.labels | Use this parameter to add custom label(s) to nudr-config-server deployment | null | NA | NA |
deployment.customExtension.annotations | Use this parameter to add custom annotation(s) to nudr-config-server deployment | null | NA | NA |
fullnameOverride | Full name to be used for the configuration server service | udr-config-server | NA | This configuration must be updated to <Helm-release-name>-config-server if multiple ocudr/ocslf deployments are done within the same Kubernetes namespace. Example: ocudr1-config-server |
installedChartVersion | Chart version to be read by hooks | '' | Valid version | NA |
readinessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first readiness probe | 70 | Unit: Seconds | Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, tune these parameters |
readinessProbe.periodSeconds | Time interval between readiness probe checks | 10 | Unit: Seconds | Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, tune these parameters |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this default value |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this default value |
readinessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this default value |
livenessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first liveness probe | 60 | Unit: Seconds | Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, tune these parameters |
livenessProbe.periodSeconds | Time interval between liveness probe checks | 15 | Unit: Seconds | Do not change this value. If there is any delay in the pod coming up and the probe is killing the pod, tune these parameters |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this default value |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this default value |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this default value |
extraContainers | This configuration decides service-level extraContainer support. The default value depends on what is configured at the global level. | USE_GLOBAL_VALUE | NA | NA |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global nodeSelection configured value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelectorEnabled | Flag to enable v1 nodeSelector | false | true/false | NA |
nodeSelectorKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label for a particular node | NA |
nodeSelectorValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key of a label for a particular node | NA |
nodeSelector | NodeSelector configuration when
helmBasedConfigurationNodeSelectorApiVersion is set to v2.
Uncomment and use this configuration when required. Else keep this commented. |
{} | Valid key value pair matching a node for nodeSelection by a pod | NA |
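Taken together, the toleration and node selection flags above can be overridden per service in the custom-values.yaml file. The following sketch is illustrative only: the taint key udr-node, its value udr, and the node label zone: antarctica are hypothetical placeholders, and the exact nesting must be verified against the sample custom-values.yaml shipped in the Custom_Templates file.

```yaml
# Illustrative per-service override (hypothetical taint and label names)
tolerationsSetting: ENABLED            # ignore global.tolerations, use the list below
tolerations:
  - key: "udr-node"                    # hypothetical taint key
    operator: "Equal"
    value: "udr"
    effect: "NoSchedule"
nodeSelection: ENABLED
helmBasedConfigurationNodeSelectorApiVersion: v2
nodeSelector:
  zone: antarctica                     # hypothetical node label
```

With helmBasedConfigurationNodeSelectorApiVersion set to v1 instead, nodeSelectorEnabled, nodeSelectorKey, and nodeSelectorValue would be used in place of the nodeSelector map.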
3.1.8 nudr-diameterproxy Microservice Parameters
The following table lists the parameters for the nudr-diameterproxy microservice.
Table 3-9 nudr-diameterproxy microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable nudr-diameterproxy service | true | NA | NA |
image.name | Name of the Docker Image | nudr_diameterproxy | NA | NA |
image.tag | Tag or version of Image | 23.4.2 | NA | NA |
image.pullPolicy | This setting indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
logging.level.root | Log level of the nudr-diameterproxy server pod | WARN | Possible Values: WARN, INFO, DEBUG | NA |
deployment.replicaCount | Number of nudr-diameterproxy pods to be maintained by the replica set created with the deployment | 2 | NA | NA |
serviceToCheck | Essential services on which init container may perform readiness checks | nudr-config-server | Comma separated microservice names | NA |
minReplicas | Minimum number of pods of nudr-diameterproxy | 2 | NA | NA |
maxReplicas | Maximum number of pods of nudr-diameterproxy | 4 | NA | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 1 | NA | This parameter is used for PodDisruptionBudget k8s resource |
hikari.poolsize | Hikari pool connection size to be created at start up | 20 | NA | NA |
hikari.connectionTimeout | MySQL connection timeout | 1000 | Unit: Milliseconds | NA |
hikari.minimumIdleConnections | Minimum number of idle MySQL connections maintained | 2 | Valid number, less than the configured poolsize | NA |
hikari.idleTimeout | Idle MySQL connection timeout | 4000 | Unit: Milliseconds | NA |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, there will be no timeout set for database queries. | -1 | Unit: Seconds | NA |
maxConcurrentPushedStreams | Maximum number of concurrent requests that can be pushed per destination | 6000 | NA | NA |
maxRequestsQueuedPerDestination | Maximum number of requests that can be queued per destination | 5000 | NA | NA |
maxConnectionsPerDestination | Maximum connection allowed per destination | 10 | NA | NA |
maxConnectionsPerIp | Maximum connection allowed per IP | 10 | NA | NA |
requestTimeout | Client request timeout | 5000 | Unit: Milliseconds | NA |
connectionTimeout | Client connection timeout | 10000 | Unit: Milliseconds | NA |
idleTimeout | Client connection idle timeout | 0 | Unit: Milliseconds | NA |
dnsRefreshDelay | Server DNS refresh delay | 120000 | Unit: Milliseconds | NA |
pingDelay | Server ping delay | 60 | Unit: Milliseconds | NA |
connectionFailureThreshold | Client connection failure threshold | 10 | NA | NA |
jettyserviceMeshCheck | Flag to enable or disable jetty service mesh check for ASM deployment | false | true or false | NA |
service.type | Kubernetes service type for exposing UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | It is suggested to always set this to ClusterIP (the default value) |
service.port.http | The HTTP port to be used in nudr-diameterproxy service | 5001 | NA | NA |
service.port.https | The HTTPS port to be used for nudr-diameterproxy service | 5002 | NA | NA |
service.port.management | The actuator management port to be used for nudr-diameterproxy service | 9000 | NA | NA |
service.port.diameter | The diameter port to be used for nudr-diameterproxy service | 6000 | NA | NA |
resources.requests.cpu | The number of CPUs allocated for nudr-diameterproxy pod during deployment | 2 | NA | NA |
resources.requests.memory | The memory to be allocated for nudr-diameterproxy pod during deployment | 2Gi | NA | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request | *containersLogStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request | *containersCrictlStorageRequestsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.cpu | Maximum number of CPUs allocated for nudr-diameterproxy pod | 2 | NA | NA |
resources.limits.memory | Maximum memory to be allocated for nudr-diameterproxy pod | 2Gi | NA | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit | *containersLogStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit | *containersCrictlStorageLimitsRef | Unit: MB | Change the reference to a different value if allocation needs to be different for this container |
resources.target.averageCpuUtil | CPU utilization limit for auto scaling (creating HPA) | 80 | NA | NA |
drservice.port.http | HTTP port on which dr-service is running | 5001 | NA | NA |
drservice.port.https | HTTPS port on which dr-service is running | 5002 | NA | dr-service port is required in diameterproxy application |
diameter.realm | Host realm of diameterproxy | oracle.com | String value | NA |
diameter.identity | FQDN (identity) of the diameterproxy in diameter messages | nudr.oracle.com | String value | NA |
diameter.strictParsing | Enable strict parsing of Diameter AVP and Messages | false | NA | NA |
diameter.IO.threadCount | Number of threads to handle IO operations in diameterproxy pod | 0 | 0 to 2* CPU | If the threadcount is 0 then application chooses the threadCount based on pod profile size |
diameter.IO.queueSize | Queue size for IO | 0 | 2048 to 8192 | Count must be a power of 2. If the queueSize is 0, the application chooses the queueSize based on the pod profile size |
diameter.messageBuffer.threadCount | Number of threads to handle messages in diameterproxy pod | 0 | 0 to 2* CPU | If the threadcount is 0 then application chooses the threadCount based on pod profile size |
service.customExtension.labels | Use this to add custom label(s) to nudr-diameterproxy service | null | NA | NA |
service.customExtension.annotations | Use this to add custom annotation(s) to nudr-diameterproxy service | null | NA | NA |
deployment.customExtension.labels | Use this to add custom label(s) to nudr-diameterproxy deployment | null | NA | NA |
deployment.customExtension.annotations | Use this to add custom annotation(s) to nudr-diameterproxy deployment | null | NA | NA |
startupProbe.failureThreshold | Configurable number of times the startup probe is retried on failure. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. | 40 | Valid number | NA |
startupProbe.periodSeconds | Time interval between startup probe checks. Note: Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. | 5 | Unit: Seconds | NA |
extraContainers | Decides extraContainer support at the service level. The default value defers to what is configured at the global level. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
serviceMeshCheck | Flag to enable serviceMesh. Set this to false when a sidecar is not included for this service | *serviceMeshFlag | NA | Do not change this value unless it must differ from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar. Change this value only if the URL is different for the sidecar container in this microservice | *istioReadyUrl | NA | Do not change this value unless it must differ from the value configured in the global section reference |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured in global.tolerations is used. If set to ENABLED, the tolerations configuration in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured at the global level is used. If set to ENABLED, the nodeSelector configuration in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | Valid list of Kubernetes tolerations | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label on a particular node | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key of a label on a particular node | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; otherwise, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
convertFieldsUnderSubscriberEntityToUpperCase | Flag to convert the fields in the subscriber entity to uppercase | false | true or false | NA |
onDemandMigrationDiameterEnabled | Flag to enable or disable on-demand migration for diameter interface. | false | true or false | NA |
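For reference, the key tuning parameters in Table 3-9 map onto a custom-values.yaml fragment along these lines. This is a sketch built from the defaults listed above and assumes a top-level nudr-diameterproxy key; verify the exact nesting against the sample custom-values.yaml in the Custom_Templates file.

```yaml
nudr-diameterproxy:
  enabled: true
  minReplicas: 2                   # HPA lower bound
  maxReplicas: 4                   # HPA upper bound
  hikari:
    poolsize: 20                   # connections created at startup
    connectionTimeout: 1000        # milliseconds
    minimumIdleConnections: 2      # keep below poolsize
    idleTimeout: 4000              # milliseconds
  diameter:
    realm: oracle.com              # host realm in diameter messages
    identity: nudr.oracle.com      # FQDN used as diameter identity
    strictParsing: false
  service:
    type: ClusterIP
    port:
      http: 5001
      https: 5002
      diameter: 6000
```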
3.1.9 appinfo microservice parameters
The following table lists the configurable parameters for the appinfo microservice.
Table 3-10 appinfo microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
replicaCount | Indicates the number of pods, which needs to be created as part of deployment. | 2 | Valid number | NA |
image | Name of the docker image of app info service | app-info | NA | NA |
imageTag | Tag of the image of app info service | 23.4.9 | NA | NA |
pullPolicy | Indicates whether the image needs to be pulled or not. | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
dbHookImage.name | Image name of dbHook | common_config_hook | Not Applicable | NA |
dbHookImage.tag | Image tag name of dbHook | 23.4.9 | Not Applicable | NA |
dbHookImage.pullPolicy | Indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
service.type | Defines the service type used for UDR deployment | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | NA |
global.logStorage | Log storage for ephemeral storage request | *containersLogStorageRequestsRef | Unit: MB | Do not change the reference unless a different value needs to be configured |
global.crictlStorage | Crictl storage for ephemeral storage request | *containersCrictlStorageRequestsRef | Unit: MB | Do not change the reference unless a different value needs to be configured |
resources.limits.cpu | CPU Limit for app info pod | 0.5 | NA | NA |
resources.limits.memory | Memory Limit for app info pod | 1Gi | NA | NA |
resources.limits.ephemeralStorage | Ephemeral storage allocation limits | 1024Mi | Unit MB | NA |
resources.requests.cpu | Requested CPU usage for app info pod to come up | 0.5 | NA | NA |
resources.requests.memory | Requested memory usage for app info pod to come up | 1Gi | NA | NA |
nodeSelector | Node selector to pin the pod to a particular worker node | {} | NA | NA |
tolerations | Pod tolerations | [] | NA | NA |
commonCfgClient.enabled | Set it to true if persistent configuration needs to be enabled. | true | true or false | NA |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name. Example: nudr-config | NA |
commonCfgServer.host | Host name of common configuration server to which client tries to poll for configuration updates. This value is picked up if commonCfgServer.configServerSvcName is not available. | 10.75.224.123 | Valid host details | NA |
commonCfgServer.port | Port of common configuration server | 5001 | Valid Port | NA |
commonCfgServer.pollingInterval | This is the interval between two subsequent polling requests from configuration client to server | 5000 | Valid period | NA |
appinfo.debug | Specifies the log level of the appinfo service. When this parameter is set to true, the appinfo log level is DEBUG; otherwise, the log level is INFO or WARN. | false | Possible Values: true or false | NA |
log.level.appinfo | Identifies the log level of appinfo | INFO | Possible Values: WARN, DEBUG, INFO | NA |
infraServices | Infrastructure services. If using OCCNE 1.4, or if you do not want to monitor infra services such as the db-monitor service, set this parameter to an empty array. If infraServices is not set, appinfo by default monitors the status of db-monitor-svc and db-replication-svc. | [] | NA | NA |
core_services | List of microservices monitored by appinfo depending on the UDR mode. It refers to *allordr, *slf, or *eir from the Global Configurable Parameters section. | *allordr | Possible Values: *allordr, *slf, or 5g_eir: *eir | NA |
istioSidecarQuitUrl | Defines the URL that is used for quitting service mesh side car. This URL is used to quit the istio side car after successful completion of hook job. | *istioQuitUrl | Not Applicable | Do not change this value unless the URL is different for the side car container in this microservice. |
istioSidecarReadyUrl | Defines the URL that is used for checking the service mesh sidecar status and start application when the status is ready. | *istioReadyUrl | Not Applicable | Do not change this value unless the URL is different from the sidecar container in this microservice. |
serviceMeshCheck | Set this parameter as true, when side car is deployed in serviceMesh and as false when side car is not included for this service. | *serviceMeshFlag | Not Applicable | Do not change this value unless it is different from the value configured in the global section |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured in global.tolerations is used. If set to ENABLED, the tolerations configuration in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured at the global level is used. If set to ENABLED, the nodeSelector configuration in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | Valid list of Kubernetes tolerations | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v2 | Allowed Values: v1, v2 | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. For v1, you can configure a key: value pair. | {} | Valid key-value pair matching a node for nodeSelection by a pod. v1 Example: zone: Antarctica | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 50% | Not applicable | NA |
watchMySQL | Monitors the MySQL connectivity service | false | true or false | NA |
dbStatusUri | URI of the MySQL connector database monitor service. The URL must be changed according to the setup | http://occne-db-monitor-svc.occne-infra:8080/db-tier/status/local | NA | NA |
scrapeInterval | Time interval, in seconds, at which the application tries to connect to the database. | 5 | NA | NA |
replicationStatusCheck | When enabled, the replication status is monitored | false | true or false | NA |
realtimeDbStatusUri | URL used to monitor the real-time database status | http://occne-db-monitor-svc.occne-infra:8080/db-tier/status/cluster/local/realtime | NA | NA |
replicationUri | URL used to monitor the replication status from the application to the database in the site | http://occne-db-monitor-svc.occne-infra:8080/db-tier/status/replication | NA | NA |
replicationInterval | Interval, in seconds, to scrape and monitor the replication status from the application to the database | 30 | NA | NA |
prometheusUrl | Prometheus service URL. The URL must be changed according to the setup. | http://occne-kube-prom-stack-kube-prometheus.occne-infra:80/cne-23-1/prometheus | NA | NA |
alertmanagerUrl | Alert manager service URL. The URL must be changed according to the setup. | http://occne-prometheus-alertmanager.occne-infra:80 | NA | NA |
3.1.10 nudr-perf-info Microservice Parameters
The following table lists the parameters for the nudr-perf-info microservice.
Table 3-11 nudr-perf-info microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) |
---|---|---|---|
replicaCount | Indicates the number of pods that need to be created as part of the deployment. | 2 | Valid number |
image | Name of the docker image of perf info service | perf-info | NA |
imageTag | Tag of the image of perf info service | 23.4.9 | NA |
imagePullPolicy | Indicates whether the image needs to be pulled or not | Always | Possible Values: Always, IfNotPresent, Never |
dbHookImage.name | Image name of dbHook | common_config_hook | Not Applicable |
dbHookImage.tag | Image tag name of dbHook | 23.4.9 | Not Applicable |
dbHookImage.pullPolicy | Pull policy of the image | IfNotPresent | Possible Values: Always, IfNotPresent, Never |
service.type | Kubernetes service type | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer |
global.logStorage | Log storage for ephemeral storage request | *containersLogStorageRequestsRef | Unit: MB. Do not change the reference unless a different value needs to be configured |
global.crictlStorage | Crictl storage for ephemeral storage request | *containersCrictlStorageRequestsRef | Unit: MB. Do not change the reference unless a different value needs to be configured |
resources.limits.cpu | CPU Limit | 1 | NA |
resources.limits.memory | Memory Limit | 1Gi | NA |
resources.limits.ephemeralStorage | Ephemeral Storage allocation limits | 1024Mi | Unit MB |
resources.requests.cpu | CPU Requested | 1 | NA |
resources.requests.memory | Memory Requested | 1Gi | NA |
nodeSelector | Node selector to pin the pod to a particular worker node | {} | NA |
tolerations | Pod Toleration | [] | NA |
affinity | Pod Affinity configurations | {} | NA |
commonCfgClient.enabled | Set it to true if persistent configuration needs to be enabled. | true | true/false |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name |
commonCfgServer.host | Host name of Common configuration server to which client tries to poll for configuration updates. This value is picked up if commonCfgServer.configServerSvcName is not available | 10.75.224.123 | Valid host details |
commonCfgServer.port | Port of Common Configuration server | 5001 | Valid Port |
commonCfgServer.pollingInterval | This is the interval between two subsequent polling requests from config client to server | 5000 | Valid period |
commonServiceName | This is the common service name that is currently requesting for configuration updates from server | alt-route | Not Applicable |
ingress.enabled | Ingress flag control | false | true/false |
configmapPerformance.prometheus | Prometheus server k8s service URL Information | http://occne-prometheus-server.occne-infra:80 | Not Applicable |
configmapPerformance.jaeger | Jaeger Agent K8s service URL Information | occne-tracer-jaeger-agent.occne-infra | Not Applicable |
configmapPerformance.jaeger_query_url | Jaeger Query k8s service URL Information | http://occne-tracer-jaeger-query.occne-infra | Not Applicable |
overloadManager.perfInfoScrapeInterval | Overload Manager scrape interval | 30 | Unit: Seconds |
log.level.perfinfo | Log level for the perf-info service | WARN | Possible Values: WARN, INFO, DEBUG |
istioSidecarQuitUrl | Quit URL configurable for the sidecar, referred from the global section istioSidecarQuitUrl. Change this only if the URL is different for the sidecar container in this microservice | *istioQuitUrl | Not Applicable. Note: Do not change this value unless it must differ from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar, referred from the global section istioSidecarReadyUrl. Change this only if the URL is different for the sidecar container in this microservice | *istioReadyUrl | Not Applicable. Note: Do not change this value unless it must differ from the value configured in the global section reference |
serviceMeshCheck | Enable when deployed in serviceMesh, referred from the serviceMeshCheck in the global section. Set to false when a sidecar is not included for this service | *serviceMeshFlag | Not Applicable. Note: Do not change this value unless it must differ from the value configured in the global section reference |
overloadManager.enabled | Flag to enable the overload manager. | *overloadEnabled | Not Applicable. Do not change this value unless it must differ from the value configured in the global section. |
overloadManager.ajacentLevelDuration | Duration based on which overload threshold level transitions happen. | 40 | Do not change this value. |
overloadManager.ingressGatewaySvcName | Default service name for Ingress Gateway | ingressgateway | - |
overloadManager.ingressGatewayHost | This parameter can be left blank as svcName is configured. | - | Do not change this value. |
overloadManager.ingressGatewayPort | Default port for Ingress Gateway. | 80 | Default port for ingressgateway. |
overloadManager.ingressGatewayScrapeInterval | Interval at which the overload manager sends requests to Ingress Gateway. By default, it is 1 second. | 1 | Can be changed to a different value. |
overloadManager.ingressGatewayFailureRateLength | Length of the window over which the failure rate time series is formed. | 60 | Can be changed to a different value. |
overloadManager.perfInfoScrapeInterval | Interval at which perf-info collects CPU and memory metrics from itself. | 10 | Can be changed to a different value. |
overloadManager.nfType | NF Type value should be UDR | UDR | This value should not be changed. |
tagNamespace | Dimension name used for metrics. | namespace | Value for CNE 1.8 or OSO: {tagNamespace: kubernetes_namespace}. Value for CNE 1.9 and above: {tagNamespace: namespace} |
tagContainerName | Dimension name used for metrics. | container | Value for CNE 1.8 or OSO: {tagContainerName: container_name}. Value for CNE 1.9 and above: {tagContainerName: container} |
tagServiceName | Dimension name used for metrics. | service | Value for CNE 1.8 or OSO: {tagServiceName: kubernetes_name}. Value for CNE 1.9 and above: {tagServiceName: service} |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured in global.tolerations is used. If set to ENABLED, the tolerations configuration in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the value configured at the global level is used. If set to ENABLED, the nodeSelector configuration in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | Valid list of Kubernetes tolerations |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 |
nodeSelectorEnabled | Flag to enable v1 nodeSelector | false | true/false |
nodeSelectorKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label on a particular node |
nodeSelectorValue | NodeSelector value configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value for the above key of a label on a particular node |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; otherwise, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod |
envMysqlDatabase | Database name in which the leader election table needs to be created | | NA |
envLeaderElectionTableName | Table name to be used for the leader election table | | NA |
maxUnavailable | Number of replicas that can go down during a disruption | 50% | NA |
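The overload manager parameters in Table 3-11 can be sketched as the following custom-values.yaml fragment. This is illustrative only and assumes a nudr-perf-info top-level key; verify the nesting against the sample custom-values.yaml in the Custom_Templates file.

```yaml
# Sketch of perf-info overload manager settings, built from the
# defaults in Table 3-11.
nudr-perf-info:
  overloadManager:
    ingressGatewaySvcName: ingressgateway
    ingressGatewayPort: 80
    ingressGatewayScrapeInterval: 1     # seconds between requests to Ingress Gateway
    ingressGatewayFailureRateLength: 60 # failure-rate time series window
    perfInfoScrapeInterval: 10          # seconds between CPU/memory samples
    nfType: UDR                         # must remain UDR
```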
3.1.11 diam-gateway Parameters
The following table lists the configurable parameters for the diam-gateway microservice.
Table 3-12 diam-gateway microservice parameters
Parameter | Description | Default Value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
global.servicePorts.diamGatewayHttp | HTTP Service Port exposed | 8000 | Valid Port | NA |
global.servicePorts.diamGatewayDiameter | Diameter Service Port | 3868 | Valid Port | NA |
global.servicePorts.configServerHttp | Config server connection Service port | 5807 | Valid Port | NA |
global.servicePorts.bindingHttp | Binding Service Port | 8080 | Valid Port | NA |
global.servicePorts.diamGatewayDistCache | Diameter Cache Service port | 5801 | Valid Port | NA |
global.servicePorts.dbConnStatusHttp | Database connection check Service port | 8000 | Valid Port | NA |
global.containerPorts.monitoringHttp | Monitoring Container Port | 9000 | Valid Port | NA |
global.containerPorts.diamGatewayHttp | HTTP Container Port | 8000 | Valid Port | NA |
global.containerPorts.diamGatewayDiameter | Diameter Connection Container Port | 3868 | Valid Port | NA |
global.containerPorts.diamGatewayDistCache | Diameter Cache Container Port | 5801 | Valid Port | NA |
global.containerPorts.configContainerSignalingHttp | Signaling Container Port | 8100 | Valid Port | NA |
global.containerPorts.configContainerMonitoringHttp | Configuration Monitoring Container Port | 8101 | Valid Port | NA |
image | Docker image name | diam-gateway | not applicable | NA |
imageTag | Docker image tag | 23.4.2 | not applicable | not applicable |
replicas | replicas of diam-gateway | 2 | not applicable | not applicable |
maxUnavailable | Number of replicas that can go down during a disruption | 1 | not applicable | This parameter is used for PodDisruptionBudget Kubernetes resource |
envLoggingLevelApp | diam-gateway log level | INFO | DEBUG, INFO, WARN, ERROR | not applicable |
envDiameterPort | diam-gateway diameter service port | 3868 | valid port number | not applicable |
envDiameterRealm | diameter server realm | *diamGatewayRealm | not applicable | not applicable |
envDiameterIdentity | diameter server identity | *diamGatewayIdentity | not applicable | not applicable |
envDiameterIOThreadCount | Number of threads for IO operations | 0 | 0 to 2* CPU | not applicable |
envDiameterIOQueueSize | Queue size for IO | 0 | 2048 to 8192 | not applicable |
envDiameterMsgBufferThreadCount | Number of threads to process messages | 0 | 0 to 2* CPU | not applicable |
envDiameterMsgBufferQueueSize | Queue size for processing messages | 0 | 1024-4096 | not applicable |
envDiameterValidationStrictParsing | Strict parsing of diameter AVPs and messages | false | not applicable | strict parsing |
envDiameterHostIp | Host IP value in the capability exchange message | "" | not applicable | If the server needs to validate the host IP address, add all Kubernetes node names and IP addresses in the format: nodename=ipaddress |
enableConfigService | config server support for diam-gateway | *configServerEnabled | true/false | not applicable |
resources.limits.cpu | max CPU allotment for diam-gateway pod | 2 | not applicable | not applicable |
resources.limits.memory | Maximum memory allotment for diam-gateway pod | 2 | not applicable | not applicable |
resources.limits.ephemeral-storage | Ephemeral storage allocation limits | 1024Mi | Unit: MB | not applicable |
resources.requests.cpu | CPU allocated for diam-gateway pod during deployment | 2 | not applicable | not applicable |
resources.requests.memory | Memory allocated for diam-gateway pod during deployment | 2 | not applicable | not applicable |
resources.requests.ephemeral-storage | Ephemeral storage allocation request | 72Mi | Unit MB | not applicable |
service.type | Kubernetes diam-gateway service type for exposing UDR deployment | LoadBalancer | ClusterIP, NodePort, LoadBalancer | It is recommended to always keep LoadBalancer (the default value) |
service.customExtension.labels | Custom labels that need to be added to the nudr-diam-gateway specific service. | null | NA | This can be used to add custom label(s) to the nudr-diam-gateway service. |
service.customExtension.annotations | Custom annotations that need to be added to the nudr-diam-gateway specific service. | null | NA | This can be used to add custom annotation(s) to the nudr-diam-gateway service. |
service.externalTrafficPolicy | Restricts the LBVM to use only the IP of the local worker node, on which the diameter gateway is deployed, for routing traffic. | Cluster | Cluster/Local | NA |
deployment.customExtension.labels | Custom labels that need to be added to the nudr-diam-gateway specific deployment. | null | NA | This can be used to add custom label(s) to the nudr-diam-gateway deployment. |
deployment.customExtension.annotations | Custom annotations that need to be added to the nudr-diam-gateway specific deployment. | null | NA | This can be used to add custom annotation(s) to the nudr-diam-gateway deployment. |
fullnameOverride | pod-name in deployment | diam-gateway | not applicable | not applicable |
peerConfig.setting | Diameter peer setting | reconnectDelay: 3, responseTimeout: 4, connectionTimeOut: 3, watchdogInterval: 6, transport: 'TCP', reconnectLimit: 50 | Not Applicable | NA |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe | 80 | Unit: Seconds | Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. |
readinessProbe.periodSeconds | Time interval between readiness probe checks. | 5 | Unit: Seconds | Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe. | 80 | Unit: Seconds | Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. |
livenessProbe.periodSeconds | Time interval between liveness probe checks. | 20 | Unit: Seconds | Do not change this value unless there is a delay in the pod coming up and the probe is killing the pod; in that case, tune this parameter. |
internalTracingEnabled | Flag to enable/disable jaeger tracing for nudr-ondemand-migration-service | false | true/false | NA |
podCongestion.enabled | Flag to enable the diameter-gateway pod congestion feature | false | true/false | NA |
notificationRetryCode.diameter | PNR retry error code configurations | 3002,3004,3005,3006,4003,5012 | Valid diameter error code | |
notificationRetryCode.http | PNR retry error code configurations | *notificationRetryErrorCode | NA | Do not change this reference. |
peerConfig.clientPeers | Diameter client peer node information | null | Information should be a YAML list | NA |
peerConfig.serverPeers | Diameter server peer node information | null | Information should be a YAML list | NA |
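Taken together, the peer settings and peer lists above map onto a values fragment along these lines. This is an illustrative sketch only: the `setting` values are the documented defaults, but the fields inside each peer entry (`identity`, `host`, `port`) and all names/addresses are hypothetical placeholders, not a confirmed schema.

```yaml
# Illustrative nudr-diam-gateway values fragment (not a complete values.yaml).
# Peer entry fields and identities below are hypothetical placeholders.
peerConfig:
  setting:
    reconnectDelay: 3        # documented default, seconds
    responseTimeout: 4
    connectionTimeOut: 3
    watchdogInterval: 6
    transport: 'TCP'
    reconnectLimit: 50
  clientPeers:               # must be a YAML list of client peer nodes
    - identity: client1.example.com   # hypothetical
      host: 10.0.0.1                  # hypothetical
  serverPeers:               # must be a YAML list of server peer nodes
    - identity: server1.example.com   # hypothetical
      host: 10.0.0.2                  # hypothetical
      port: 3868                      # standard Diameter port
```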
extraContainers | This configuration decides, at the microservice level, whether the extra container is used or not. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED, DISABLED | The default value makes the behavior dependent on what is configured at the global level. |
serviceMeshCheck | Enable when deployed in a service mesh; refers to serviceMeshCheck in the global section. Set to false when a sidecar is not included for this service. | *serviceMeshFlag | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar; refers to istioSidecarReadyUrl in the global section. Change only if the URL is different for the sidecar container in this microservice. | *istioReadyUrl | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
tolerationsSetting | Flag to enable the tolerations setting at the microservice level or the global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED, DISABLED | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or the global level. If set to USE_GLOBAL_VALUE, the globally configured nodeSelector value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED, DISABLED | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | Valid Kubernetes tolerations list | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level; used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. This configuration does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label on a particular node. | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the microservice level; used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. This configuration does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value paired with the above key of a label on a particular node. | NA |
nodeSelector | NodeSelector configuration used when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; otherwise keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod | NA |
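As an illustration, the toleration and node-selection flags above combine as in this sketch for the v2 apiVersion. The taint key/value and node label shown are hypothetical placeholders, not values from this product's charts.

```yaml
# Illustrative fragment: microservice-level tolerations and v2 nodeSelector.
# Taint and label keys/values below are hypothetical placeholders.
tolerationsSetting: ENABLED
tolerations:
  - key: "dedicated"          # hypothetical taint key
    operator: "Equal"
    value: "udr"              # hypothetical taint value
    effect: "NoSchedule"
nodeSelection: ENABLED
helmBasedConfigurationNodeSelectorApiVersion: v2
nodeSelector:
  kubernetes.io/os: linux     # example node label
```

With tolerationsSetting (or nodeSelection) left at USE_GLOBAL_VALUE, the corresponding list here is ignored and the global-section configuration applies instead.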
gracefulShutdown.gracePeriod | The graceful shutdown period used for Diameter Gateway service termination | 30s | Unit: Seconds | NA |
3.1.12 ocudr-ingressgateway-sig Microservice Parameters
The following table provides parameters for the ocudr-ingressgateway microservice (API Gateway).
Table 3-13 ocudr-ingressgateway microservice (API Gateway) parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
global.type | ocudr-ingressgateway service type | LoadBalancer | Possible Values: ClusterIP, NodePort, LoadBalancer | NA |
global.metalLbIpAllocationEnabled | Flag to enable Address Pool for Metallb | true | true/false | NA |
global.metalLbIpAllocationAnnotation | Address Pool for Metallb | metallb.universe.tf/address-pool: signaling | NA | NA |
global.staticNodePortEnabled | Flag to enable static node port | false | true/false | NA |
global.istioIngressTlsSupport.ingressGateway | Flag to enable clear text traffic from outside the cluster when Service Mesh is enabled | false | true/false | NA |
global.xfccHeaderValidation.validation.enabled | Flag to determine if incoming xfcc header needs to be validated | &xfccValidationEnabled false | true/false | NA |
global.xfccHeaderValidation.validation.nfList | List of configured NF FQDNs against which the matchField parameter present in the XFCC header of an incoming request is validated | nfList: - scp1.com - scp2.com | Valid Network Function list, available as part of the XFCC header | NA |
global.xfccHeaderValidation.validation.matchCerts | The number of certificates that need to be validated, starting from the rightmost entry in the XFCC header | -1 | (Value = -1: validation is performed against all entries; value = positive number: validation starts from the rightmost entry backwards) | NA |
global.xfccHeaderValidation.validation.matchField | Field in XFCC header against which the configured nfList FQDN validation needs to be performed | Subject.CN | Valid XFCC header field | NA |
global.xfccHeaderValidation.validation.errorCodeOnValidationFailure | Field determines the errorCode to be sent in response when XFCC header validation fails at Ingress Gateway | 500 | Valid HTTP error code | NA |
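The XFCC validation parameters above can be assembled under `global.xfccHeaderValidation` roughly as follows. This is a sketch using the documented defaults (including the `&xfccValidationEnabled` YAML anchor and the example `scp1.com`/`scp2.com` FQDNs); the exact nesting is inferred from the dotted parameter names.

```yaml
# Illustrative fragment: XFCC header validation (documented defaults).
global:
  xfccHeaderValidation:
    validation:
      enabled: &xfccValidationEnabled false
      nfList:
        - scp1.com
        - scp2.com
      matchCerts: -1          # -1: validate against all entries in the header
      matchField: Subject.CN  # header field compared against nfList
      errorCodeOnValidationFailure: 500
```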
global.publicHttpsSignalingPort | HTTPS signaling port | 443 | Valid port | Mandatory |
openTracing.jaeger.enableb3Propagation | Flag to send b3 zipkin headers instead of uber-trace-id | false | true/false | Mandatory when jaegerTracingEnabled is true |
dbConfig.backupDbName | Configure this when the backup table should have a separate schema | NA | NA | Optional |
isIpv6Enabled | Set this flag to true when deployed in an IPv6-enabled setup | NA | true/false | Optional |
pod-protection.enabled | Flag to enable or disable the podProtection feature | false | true/false | Mandatory |
pod-protection.monitoringInterval | Interval at which the state of the resource is checked and updated | 100 | Unit: Milliseconds | Mandatory |
pod-protection.congestionControl.enabled | Flag to enable or disable the congestion control mechanism of pod protection | false | true/false | Mandatory |
pod-protection.congestionControl.stateChangeSampleCount | The number of times the pod has to remain in the same state in order to transition to a different state | 10 | NA | Mandatory |
pod-protection.congestionControl.actionSamplingPeriod | The time-sensitive action (isSchedulerTimeSensitiveAction) is executed only at the specified interval (actionSamplingPeriod * monitoringInterval) | 3 | NA | Mandatory |
pod-protection.congestionControl.states[i].name | Any meaningful name for the congestion state | Normal at index 0, Doc at index 1, Congested at index 2 | NA | Mandatory |
pod-protection.congestionControl.states[i].weight | Each state is identified by its weight. The weight defines the criticality of the state; the lower the weight, the lower the criticality | 0 at index 0, 1 at index 1, 2 at index 2 | NA | Mandatory |
pod-protection.congestionControl.states[i].entryAction[j].action | The action that has to be executed on entering the state | MaxConcurrentStreamsUpdate, AcceptIncomingConnections | NA | Mandatory |
pod-protection.congestionControl.states[i].entryAction[j].args | The argument corresponding to the action | NA | NA | Mandatory |
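Putting the pod-protection parameters together, a values fragment could look like the sketch below. The top-level values and state names/weights are the documented defaults; the `args` values and the particular action-to-state pairing are hypothetical placeholders, since the tables list only the possible action names.

```yaml
# Illustrative pod-protection fragment built from the parameters above.
# entryAction args and the action-to-state pairing are hypothetical.
pod-protection:
  enabled: false
  monitoringInterval: 100            # ms between resource-state checks
  congestionControl:
    enabled: false
    stateChangeSampleCount: 10       # samples required before a state transition
    actionSamplingPeriod: 3          # action runs every 3 * monitoringInterval
    states:
      - name: Normal
        weight: 0                    # lowest criticality
        entryAction:
          - action: AcceptIncomingConnections
            args: true               # hypothetical argument
      - name: Congested
        weight: 2                    # highest criticality
        entryAction:
          - action: MaxConcurrentStreamsUpdate
            args: 10                 # hypothetical argument
```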
global.xfccHeaderValidation.validation.peerList | Supports both virtual hosts and static FQDNs. Backward compatible with nfList (old configuration). | NA | NA | NA |
global.xfccHeaderValidation.validation.errorDescriptionOnValidationFailure | Configurable error description to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.detail section | Validation of SCP failed | NA | NA |
global.xfccHeaderValidation.validation.errorCauseOnValidationFailure | Configurable error cause to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.cause section | XFCC Validation Failed | NA | NA |
global.xfccHeaderValidation.validation.errorTitleOnValidationFailure | Configurable error title to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.title section. | Internal Server Error | NA | NA |
global.xfccHeaderValidation.validation.retryAfter | Retry-After value in seconds populated in the Retry-After header and sent in the response when XFCC header validation fails at Ingress Gateway and the value configured for errorCodeOnValidationFailure lies in the 3xx series | "" | Retry-After value in seconds | NA |
global.xfccHeaderValidation.validation.redirectUrl | Redirect URL string populated in the LOCATION header and sent in the response when XFCC header validation fails at Ingress Gateway and the value configured for errorCodeOnValidationFailure lies in the 3xx series | "" | Valid redirect URL | NA |
xfcc.xfccHeaderValidation.validation.errorTrigger | Configurable exception or error type and details for an error scenario in Ingress Gateway for XFCC | | NA | NA |
image.name | Docker image name | ocingress_gateway | NA | NA |
image.tag | Tag or version of Image | 23.4.7 | NA | NA |
image.pullPolicy | This setting indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
initContainersImage.name | Docker Image name | configurationinit | NA | NA |
initContainersImage.tag | Tag or version of Image | 23.4.7 | NA | NA |
initContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
updateContainersImage.name | Docker Image name | configurationupdate | NA | NA |
updateContainersImage.tag | Tag or version of Image | 23.4.7 | NA | NA |
updateContainersImage.pullPolicy | This setting indicates whether the image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
dbHookImage.name | Image name of dbHook | common_config_hook | NA | NA |
dbHookImage.tag | Image tag name of dbHook | 23.4.7 | NA | NA |
dbHookImage.pullPolicy | Indicates the pull policy of the image | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
service.ssl.tlsVersion | TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.ssl.privateKey.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.privateKey.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.privateKey.rsa.fileName | RSA private key stored in the secret | rsa_private_key_pkcs1.pem | NA | NA |
service.ssl.privateKey.ecdsa.fileName | ECDSA private key stored in the secret | ecdsa_private_key_pkcs8.pem | NA | NA |
service.ssl.certificate.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.certificate.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.certificate.rsa.fileName | RSA certificate stored in the secret | apigatewayrsa.cer | NA | NA |
service.ssl.certificate.ecdsa.fileName | ECDSA certificate stored in the secret | apigatewayecdsa.cer | NA | NA |
service.ssl.caBundle.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.caBundle.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.caBundle.fileName | Name of caBundle file stored in the secret | caroot.cer | NA | NA |
service.ssl.keyStorePassword.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.keyStorePassword.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.keyStorePassword.fileName | Keystore password stored in the secret | key.txt | NA | NA |
service.ssl.trustStorePassword.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.trustStorePassword.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | NA | NA |
service.initialAlgorithm | Algorithm used for SSL verification when a request lands on the secure port of Ingress Gateway. ES256 can also be used, provided its corresponding certificates are configured. | RS256 | RS256/ES256 | NA |
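The `service.ssl` rows above can be read as one nested block. The following sketch assembles them using only the documented default secret names, namespaces, and file names; the nesting is inferred from the dotted parameter names.

```yaml
# Illustrative service.ssl fragment (documented defaults only).
service:
  ssl:
    tlsVersion: TLSv1.2
    privateKey:
      k8SecretName: ocudr-gateway-secret
      k8NameSpace: ocudr
      rsa:
        fileName: rsa_private_key_pkcs1.pem
      ecdsa:
        fileName: ecdsa_private_key_pkcs8.pem
    certificate:
      k8SecretName: ocudr-gateway-secret
      k8NameSpace: ocudr
      rsa:
        fileName: apigatewayrsa.cer
      ecdsa:
        fileName: apigatewayecdsa.cer
    caBundle:
      k8SecretName: ocudr-gateway-secret
      k8NameSpace: ocudr
      fileName: caroot.cer
    keyStorePassword:
      k8SecretName: ocudr-gateway-secret
      k8NameSpace: ocudr
      fileName: key.txt
    trustStorePassword:
      k8SecretName: ocudr-gateway-secret
      k8NameSpace: ocudr
      fileName: trust.txt
  initialAlgorithm: RS256     # use ES256 only with matching certificates
```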
http1.client.keepAlive | HTTP/1.1 setting used when enablehttp1 is set to true. If keepAlive is set to false, the connection is closed after one request. | false | true/false | NA |
http1.client.useConnectionPool | HTTP/1.1 setting used when enablehttp1 is set to true. If useConnectionPool is set to false, a new connection is created instead of reusing active connections. | false | true/false | NA |
resources.limits.cpu | CPU allotment limitation | 2 | NA | NA |
resources.limits.memory | Memory allotment limitation | 2Gi | NA | NA |
resources.limits.initServiceCpu | Maximum number of CPUs that Kubernetes allows the ingress-gateway init container to use | 1 | NA | NA |
resources.limits.initServiceMemory | Memory limit for ingress-gateway init container | 1Gi | NA | NA |
resources.limits.updateServiceCpu | Maximum number of CPUs that Kubernetes allows the ingress-gateway update container to use | 1 | NA | NA |
resources.limits.updateServiceMemory | Memory limit for ingress-gateway update container | 1Gi | NA | NA |
resources.limits.commonHooksCpu | CPU Limit of database Hook Container | 1 | NA | NA |
resources.limits.commonHooksMemory | Memory limit of database Hook Container | 1Gi | NA | NA |
resources.requests.cpu | Number of CPUs allocated for ocudr-endpoint pod | 2 | NA | NA |
resources.requests.memory | Memory allocated for ocudr-endpoint pod | 2Gi | NA | NA |
resources.requests.initServiceCpu | Number of CPUs that the system guarantees for ingress-gateway init container, and Kubernetes uses this value to decide the node to place the pod | 1 | NA | NA |
resources.requests.initServiceMemory | Memory allotment that the system guarantees for ingress-gateway init container, and Kubernetes uses this value to decide the node to place the pod | 1Gi | NA | NA |
resources.requests.updateServiceCpu | Number of CPUs that the system guarantees for ingress-gateway update container, and Kubernetes uses this value to decide the node to place the pod | 1 | NA | NA |
resources.requests.updateServiceMemory | Memory allotment that the system guarantees for ingress-gateway update container, and Kubernetes uses this value to decide the node to place the pod | 1Gi | NA | NA |
resources.requests.commonHooksCpu | DB Hook container CPU request; Kubernetes uses this value to decide on which node to place the pod. | 1 | NA | NA |
resources.requests.commonHooksMemory | DB Hook container memory request; Kubernetes uses this value to decide on which node to place the pod. | 1Gi | NA | NA |
resources.target.averageCpuUtil | CPU utilization limit for auto scaling | 80 | NA | NA |
maxUnavailable | Maximum number of pods that can be unavailable during a voluntary disruption | 1 | NA | NA |
minReplicas | Minimum replicas to scale to maintain an average CPU utilization | 2 | NA | NA |
maxReplicas | Maximum replicas to scale to maintain an average CPU utilization | 5 | NA | NA |
log.level.root | Log level shown on ocudr-endpoint pod | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.ingress | Log level shown on ocudr-ingressgateway pod for ingress related flows | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.oauth | Log level shown on ocudr-ingressgateway pod for oauth related flows | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.updateContainer | Log level for Update container | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.configclient | Log level for Config Client | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.hook | Log level for hook | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.cncc.security | Log level for cncc logs | WARN | Possible Values: WARN INFO DEBUG | NA |
log.level.traceIdGenerationEnabled | Flag to enable TraceId Generation | true | true/false | NA |
initssl | Flag to initialize SSL related infrastructure in init/update container | false | NA | NA |
jaegerTracingEnabled | Flag to enable Jaeger Tracing | false | true/false | NA |
openTracing.jaeger.udpSender.host | FQDN of Jaeger agent service | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | NA |
openTracing.jaeger.udpSender.port | UDP port of Jaeger agent service | 6831 | Valid Port | NA |
openTracing.jaeger.probabilisticSampler | Sampler makes a random sampling decision with the probability of sampling. This parameter indicates the probabilistic Sampler on Jaeger. | 0.5 | Range: 0.0 - 1.0 | For example, if the value set is 0.1, approximately 1 in 10 traces are sampled. |
ciphersuites | Supported cipher suites for SSL | | NA | NA |
oauthValidatorEnabled | Flag to enable OAUTH validator | false | NA | NA |
nfType | NF type of the service producer | UDR | NA | Mandatory when oauthValidatorEnabled is true |
producerScope | Comma separated list of services hosted by service producer | nudr-dr,nudr-group-id-map | Valid service list | Mandatory when oauthValidatorEnabled is true |
allowedClockSkewSeconds | Set this value if clock on the parsing NF (producer) is not perfectly in sync with the clock on the consumer NF that created the JWT | 0 | Unit: Seconds | Mandatory when oauthValidatorEnabled is true |
enableInstanceIdConfigHook | Flag to enable a preUpgrade hook when persistent config is enabled. It populates the InstanceId configuration based on the content of nrfPublicKeyKubeSecret; if an instance id configuration existed in the previous release, that one is picked up. | true | true/false | NA |
nrfPublicKeyKubeSecret | Name of the secret that stores public key(s) of NRF | oauthsecret | NA | Mandatory when oauthValidatorEnabled is true |
nrfPublicKeyKubeNamespace | Namespace of the NRF public key secret | ocudr | NA | Mandatory when oauthValidatorEnabled is true |
validationType | It can be either "strict" or "relaxed". Strict means that incoming requests without "Authorization"(Access Token) header are rejected. Relaxed means that if incoming request contains "Authorization" header, it is validated. If incoming request does not contain "Authorization" header, validation is ignored | strict | strict/relaxed | Mandatory when oauthValidatorEnabled is true |
producerPlmnMNC | MNC of service producer | 14 | Valid MNC | NA |
producerPlmnMCC | MCC of service producer | 310 | Valid MCC | NA |
errorCodeOnValidationFailure | Error code to return on oauth validation failure | 401 | Valid http error code | NA |
errorCodeOnTokenAbsence | Error code to return when oauth token is not present | 400 | Valid http error code | NA |
oauthErrorConfig.errorTitle | Error title information to return in error response Problem details | InvalidOAuthAccessToken | NA | NA |
oauthErrorConfig.errorDescription | Error description to return in error response Problem details | UNAUTHORIZED | NA | NA |
oauthErrorConfig.errorCause | Error cause to return in response | "" | NA | NA |
oauthErrorConfig.redirectUrl | Redirect URL configuration | null | NA | NA |
oauthErrorConfig.retryAfter | Value of retryAfter to be configured | null | NA | NA |
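The OAuth access-token validation rows above correspond to a flat set of values. The sketch below uses the documented defaults, except that `oauthValidatorEnabled` is shown as `true` purely for illustration (its shipped default is `false`); with it enabled, the remaining parameters in this block become mandatory.

```yaml
# Illustrative OAuth validation fragment (documented defaults, with the
# validator flipped on for illustration -- the shipped default is false).
oauthValidatorEnabled: true
nfType: UDR
producerScope: nudr-dr,nudr-group-id-map
allowedClockSkewSeconds: 0
nrfPublicKeyKubeSecret: oauthsecret      # secret holding NRF public key(s)
nrfPublicKeyKubeNamespace: ocudr
validationType: strict                   # strict: reject requests lacking Authorization
producerPlmnMNC: 14
producerPlmnMCC: 310
errorCodeOnValidationFailure: 401
errorCodeOnTokenAbsence: 400
```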
enableIncomingHttp | Flag to enable HTTP requests | true | NA | NA |
enableIncomingHttps | Flag to enable HTTPS requests | false | true or false | NA |
enableOutgoingHttps | Flag to enable outgoing HTTPS requests | false | true or false | NA |
maxConcurrentPushedStreams | Jetty client settings for maximum concurrent streams | 6000 | Valid Number | NA |
maxConnectionsPerDestination | Jetty client settings for maximum connections per destination | 50 | Valid Number | NA |
maxRequestsQueuedPerDestination | Queue Size at the ocudr-endpoint pod | 5000 | Valid Number | NA |
maxConnectionsPerIp | Number of connections from endpoint to other microservices | 10 | Valid Number | NA |
connectionTimeout | Connection timeout limit configured on ingress gateway client towards the backend | 10000 | Unit: Milliseconds | NA |
requestTimeout | Request timeout limit configured on ingress gateway client towards the backend | 2700 | Unit: Milliseconds | NA |
serviceMeshCheck | Flag to check service mesh | *serviceMeshFlag | true/false | NA |
istioSidecarQuitUrl | Quit URL configured for side car | *istioQuitUrl | Valid url | NA |
istioSidecarReadyUrl | Readiness URL configured for the sidecar | *istioReadyUrl | Valid URL. Note: The port used should be the admin port configured for the istio-proxy container. | NA |
routesConfig | Routes configured to connect to different microservices of UDR | | NA | NA |
service.customExtension.labels | Use this to add custom label(s) to ingressgateway service | null | NA | NA |
service.customExtension.annotations | Use this to add custom annotation(s) to ingress gateway service | null | NA | NA |
deployment.customExtension.labels | Use to add custom label(s) to ingress gateway deployment | null | NA | NA |
deployment.customExtension.annotations | Use to add custom annotation(s) to ingress gateway deployment | null | NA | NA |
readinessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first readiness probe | 30 | NA. Unit: Seconds | Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune this parameter. |
readinessProbe.periodSeconds | Time interval for every readiness probe check | 10 | NA. Unit: Seconds | Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune this parameter. |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this default value |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this default value |
readinessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes indicates failureThreshold time before giving up | 3 | NA | Do not change this default value |
livenessProbe.initialDelaySeconds | Configurable wait time before the kubelet performs the first liveness probe | 30 | NA. Unit: Seconds | Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune this parameter. |
livenessProbe.periodSeconds | Time interval for every liveness probe check | 15 | NA. Unit: Seconds | Do not change this value. If there is a delay in the pod coming up and the probe is killing the pod, tune this parameter. |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this value |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this value |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this value |
extraContainers | This configuration decides service-level extraContainer support. The default value makes the behavior dependent on what is configured at the global level. | USE_GLOBAL_VALUE | Allowed Values: USE_GLOBAL_VALUE, ENABLED, DISABLED | NA |
commonCfgClient.enabled | Flag to enable persistent configuration | true | true/false | NA |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name | NA |
commonCfgServer.host | Host name of common configuration server to which client tries to poll for configuration updates | 10.75.224.123 | Valid host details | This value is picked up if commonCfgServer.configServerSvcName is not available |
commonCfgServer.port | Port of Common Configuration server | 5001 | Valid port | NA |
commonCfgServer.pollingInterval | Interval between two subsequent polling requests from configuration client to server | 5000 | Valid period | NA |
commonServiceName | Common service name that is currently requesting for configuration updates from server | igw | NA | NA |
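The persistent-configuration client rows above assemble into the following sketch, using only the documented defaults. Note that `commonCfgServer.host` is a fallback used only when `configServerSvcName` is not available.

```yaml
# Illustrative persistent-configuration client fragment (documented defaults).
commonCfgClient:
  enabled: true
commonCfgServer:
  configServerSvcName: nudr-config   # preferred; host/port are the fallback
  host: 10.75.224.123                # used only if the service name is unavailable
  port: 5001
  pollingInterval: 5000              # ms between polls for configuration updates
commonServiceName: igw               # service requesting configuration updates
```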
userAgentHeaderValidationConfigMode | This flag is used to accept the user-agent configurations from HELM or REST | HELM | HELM/REST | NA |
userAgentHeaderValidation.enabled | This flag is used to enable the user-agent feature | false | true/false | NA |
userAgentHeaderValidation.consumerNfTypes | This parameter is used to configure the NF types (for example, PCF, NRF, UDM) that use UDR services | EMPTY | All NF types defined in the 3GPP specification, for example PCF,NRF,NEF,UDM | NA |
restoreBackupOnInstall | Flag to enable restore when data is picked up from backup table during installation of ingress gateway | false | true/false | NA |
dropTablesOnUpgrade | This flag when enabled drops the common configuration tables on Helm upgrade if there is no data in the tables | false | true/false | NA |
autoRedirect | Flag to enable redirection in Ingress Gateway. If enabled, Ingress Gateway redirects to URL present in location header when the response code is 301, 302, 303, 307, or 308 | true | true/false | NA |
serverHeaderConfigMode.serverHeaderDetails.enabled | This flag is to enable or disable server header. | false | true/false | NA |
global.logStorage | Ephemeral storage configuration for log storage | 0 | NA | Optional |
global.crictlStorage | Ephemeral storage configuration for crictl storage | 0 | NA | Optional |
podProtectionConfigMode | Determines whether the pod protection configuration is REST based or Helm based. | NA | HELM or REST | NA |
nfSpecificConfig | This configuration enables or disables the NF-specific default value configuration; the list of configuration files is defined in the default configuration list. | NA | | NA |
ephemeralStorageLimit | Ephemeral storage limit | 0 | NA | Optional |
global.publicHttp1SignalingPort | HTTP1 Signaling port | - | NA | NA |
global.staticHttp1NodePort | HTTP1 Node Port | - | NA | NA |
global.containerHttp1Port | Container Http1 port | - | NA | NA |
global.enableIncomingHttp1 | global flag to enable or disable HTTP1 | - | NA | NA |
global.enableTLSIncomingHttp1 | global flag to enable or disable HTTP1 in TLS or NON TLS mode | - | NA | NA |
userAgentHeaderValidationConfigMode | Mode of configuration; can be either REST or HELM | HELM | HELM/REST | Mandatory |
userAgentHeaderValidation.enabled | Flag to enable or disable the feature | false | true/false | Optional |
userAgentHeaderValidation.validationType | Type of validation applied when the User-Agent header is not present in the incoming request | NA | NA | Optional |
userAgentHeaderValidation.consumerNfTypes | List of valid consumer NF types as per the 3GPP specification | NA | NA | Optional |
global.lciHeaderConfig.enabled | Flag to enable or disable LCI headers reporting | false | true or false | NA |
global.lciHeaderConfig.loadThreshold | If current load level is beyond the previously computed load level + loadThreshold, LCI headers are reported again. | 40 | NA | NA |
global.lciHeaderConfig.localLciHeaderValidity | Validity period of the LCI headers reported to the consumer NF. LCI headers are reported again if the previously reported headers have expired. | 1000 (value in milliseconds) | NA | NA |
global.lciHeaderConfig.producerSvcIdHeader | Header name that holds the producer service identity. | | NA | NA |
global.ociHeaderConfig.enabled | Flag to enable or disable the OCI headers reporting | false | true or false | NA |
global.ociHeaderConfig.producerSvcIdHeader | Header name, which holds the producer svc identity. | custom_producer_svc_header | NA | NA |
global.ociHeaderConfig.validityPeriod | Validity period of the OCI headers reported to the consumer NF. OCI headers are reported again if the previously reported headers have expired. | 5000 (value in milliseconds) | NA | NA |
global.ociHeaderConfig.overloadConfigRange.minor | Range to identify the minor overload condition | "[0-10]" | NA | NA |
global.ociHeaderConfig.overloadConfigRange.major | Range to identify the major overload condition | "[10-70]" | NA | NA |
global.ociHeaderConfig.overloadConfigRange.critical | Range to identify the critical overload condition | "[70-100]" | NA | NA |
global.ociHeaderConfig.reductionMetrics.minor | Reduction metric to be reported for the minor overload condition | 5 (Possible values 1 to 9 both inclusive) | NA | NA |
global.ociHeaderConfig.reductionMetrics.major | Reduction metric to be reported for the major overload condition | 10 (Possible values 5 to 15 both inclusive) | NA | NA |
global.ociHeaderConfig.reductionMetrics.critical | Reduction metric to be reported for the critical overload condition | 30 (Possible values 10 to 50 both inclusive) | NA | NA |
global.nfInstanceId | NF instance Id of the producer NF | 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11 | NA | NA |
global.svcToSvcInstanceIdMapping.svcName | Back-end service name, which should match the producerSvcIdHeader value and perf info reported service name for the LCI or OCI headers reporting. | nf-registration | NA | NA |
global.svcToSvcInstanceIdMapping.serviceInstanceId | Back-end service instance id to be included in the LCI or OCI headers | fe7d992b-0541-4c7d-ab84-c6d70b1b01b1 | NA | NA |
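The LCI/OCI header-reporting parameters above can be sketched as one fragment using the documented defaults (both features ship disabled). Note one assumption: `svcToSvcInstanceIdMapping` is shown here as a list of `svcName`/`serviceInstanceId` entries, a shape inferred from the dotted parameter names rather than confirmed by the tables.

```yaml
# Illustrative LCI/OCI header-reporting fragment (documented defaults;
# reporting is disabled by default). List shape of svcToSvcInstanceIdMapping
# is an assumption.
global:
  lciHeaderConfig:
    enabled: false
    loadThreshold: 40                # re-report when load moves beyond last + 40
    localLciHeaderValidity: 1000     # ms
  ociHeaderConfig:
    enabled: false
    producerSvcIdHeader: custom_producer_svc_header
    validityPeriod: 5000             # ms
    overloadConfigRange:
      minor: "[0-10]"
      major: "[10-70]"
      critical: "[70-100]"
    reductionMetrics:
      minor: 5                       # allowed 1-9
      major: 10                      # allowed 5-15
      critical: 30                   # allowed 10-50
  nfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11
  svcToSvcInstanceIdMapping:
    - svcName: nf-registration
      serviceInstanceId: fe7d992b-0541-4c7d-ab84-c6d70b1b01b1
```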
global.perfInfoConfig.pollingInterval | Configurable interval at which the load information is polled from the perf-info service at the gateway | 5000 | NA | NA |
global.perfInfoConfig.serviceName | Perf-Info service name | NA | NA | NA |
global.perfInfoConfig.host | Perf-Info service Host IP | NA | NA | NA |
global.perfInfoConfig.port | Perf-Info service port | NA | NA | NA |
global.perfInfoConfig.perfInfoRequestMap | Perf-Info service request endpoint | | NA | NA |
global.http1PortName | The port name for http1 port exposed on ingressgateway | http | NA | NA |
3.1.13 ocudr-ingressgateway-prov Microservice Parameters
Table 3-14 ocudr-ingressgateway-prov microservice (API Gateway) parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
image.tag | Tag or version of Image | 23.4.7 | Not Applicable | NA |
commonCfgClient.enabled | Flag to enable common configuration client | true | true/false | NA |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name | NA |
commonCfgServer.host | Host name of Common configuration server to which client tries to poll for configuration updates. This value is picked up if commonCfgServer.configServerSvcName is not available | 10.75.224.123 | Valid host details | NA |
commonCfgServer.pollingInterval | Time interval between two subsequent polling requests from configuration client to server | 5000 | Valid period | NA |
commonCfgServer.port | Port of Common Configuration server | 5001 | Valid Port | NA |
commonServiceName | This is the common service name that is currently requesting for configuration updates from server | igw | Not Applicable | NA |
connectionTimeout | Connection timeout limit configured on ingress gateway client towards the backend | 10000 | Unit: Milliseconds | NA |
global.xfccHeaderValidation.validation.errorCauseOnValidationFailure | Configurable error cause to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.cause section | XFCC Validation Failed | NA | NA |
global.xfccHeaderValidation.validation.errorCodeOnValidationFailure | Field determines the errorCode to be sent in response when XFCC header validation fails at Ingress Gateway | 500 | Valid http error code | NA |
global.xfccHeaderValidation.validation.errorDescriptionOnValidationFailure | Configurable error description to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.detail section | Validation of SCP failed | NA | NA |
global.xfccHeaderValidation.validation.errorTitleOnValidationFailure | Configurable error title to be returned when XFCC header validation fails at Ingress Gateway. This error is populated in ProblemDetails response in ProblemDetails.title section. | Internal Server Error | NA | NA |
global.xfccHeaderValidation.validation.matchCerts | The number of certificates that need to be validated starting from the right most entry in the XFCC header | -1 | (Value = -1 validation to be performed against all entries, value = +ve number, then validate from starting from the right most entry backwards) | NA |
global.xfccHeaderValidation.validation.matchField | Field in XFCC header against which the configured nfList FQDN validation needs to be performed | Subject.CN | Valid XFCC header field | NA |
global.xfccHeaderValidation.validation.nfList | List of configured NF FQDNs against which the matchField parameter present in the XFCC header of the incoming request is validated | nfList | Valid URL list received in Subject.CN if the matchField is Subject.CN | NA |
global.xfccHeaderValidation.validation.redirectUrl | Redirect-Url string populated in LOCATION header and sent in response when XFCC header validation fails at Ingress Gateway and the value configured for errorCodeOnValidationFailure lies in 3xx series | "" | Valid redirect url | NA |
global.xfccHeaderValidation.validation.retryAfter | Retry-after value in seconds populated in Retry-After header and sent in response, when XFCC header validation fails at Ingress Gateway and the value configured for errorCodeOnValidationFailure lies in 3xx series | "" | Retry-After value in seconds | NA |
istioSidecarQuitUrl | Quit URL configured for side car | *istioQuitUrl | Not Applicable | This value is not to be changed unless it is different from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configured for side car | *istioReadyUrl | Not Applicable | This value is not to be changed unless it is different from the value configured in the global section reference |
maxConcurrentPushedStreams | Maximum number of concurrent pushed streams for Jetty client configuration | 6000 | Valid Number | NA |
maxConnectionsPerDestination | Max connections allowed per destination from Ingress Gateway | 50 | Valid Number | NA |
maxConnectionsPerIp | Maximum number of connections allowed from endpoint to other microservices | 10 | Valid Number | NA |
maxRequestsQueuedPerDestination | Queue Size at the ocudr-ingressgateway pod | 5000 | Valid Number | NA |
nfType | The NFType of the service producer. Mandatory when oauthValidatorEnabled is true | UDR | Not Applicable | NA |
nrfPublicKeyKubeNamespace | Namespace of the NRF publicKey secret. Mandatory when oauthValidatorEnabled is true | ocudr | Not Applicable | NA |
producerScope | Comma separated list of services hosted by the service producer. Mandatory when oauthValidatorEnabled is true | nudr-dr-prov,nudr-group-id-map-prov,slf-group-prov | Valid service list | NA |
requestTimeout | Request timeout limit configured on ingress gateway client towards the backend | 2700 | Unit: Milliseconds | NA |
routesConfig | Routes configured to connect to different microservices of UDR | routesConfig | Not Applicable | NA |
serviceMeshCheck | Flag to enable serviceMesh | *serviceMeshFlag | Not Applicable | This value is not to be changed unless it is different from the value configured in the global section reference |
global.lciHeaderConfig.loadThreshold | If current load level is beyond the previously computed load level + loadThreshold, LCI headers are reported again. | 40 | NA | NA |
global.lciHeaderConfig.localLciHeaderValidity | Validity period of the LCI headers reported to the consumer NF. LCI headers are reported again if the previously reported headers have expired. | 1000 (value in milliseconds) | NA | NA |
global.lciHeaderConfig.producerSvcIdHeader | Header name, which holds the producer svc identity. | | NA | NA |
global.ociHeaderConfig.enabled | Flag to enable or disable the OCI headers reporting | false | true or false | NA |
global.ociHeaderConfig.producerSvcIdHeader | Header name, which holds the producer svc identity. | custom_producer_svc_header | NA | NA |
global.ociHeaderConfig.validityPeriod | Validity period of the OCI headers reported to the consumer NF. OCI headers are reported again if the previously reported headers have expired. | 5000 (value in milliseconds) | NA | NA |
global.ociHeaderConfig.overloadConfigRange.minor | Range to identify the minor overload condition | "[0-10]" | NA | NA |
global.ociHeaderConfig.overloadConfigRange.major | Range to identify the major overload condition | "[10-70]" | NA | NA |
global.ociHeaderConfig.overloadConfigRange.critical | Range to identify the critical overload condition | "[70-100]" | NA | NA |
global.ociHeaderConfig.reductionMetrics.minor | Reduction metric to be reported for the minor overload condition | 5 (Possible values 1 to 9 both inclusive) | NA | NA |
global.ociHeaderConfig.reductionMetrics.major | Reduction metric to be reported for the major overload condition | 10 (Possible values 5 to 15 both inclusive) | NA | NA |
global.ociHeaderConfig.reductionMetrics.critical | Reduction metric to be reported for the critical overload condition | 30 (Possible values 10 to 50 both inclusive) | NA | NA |
global.nfInstanceId | NF instance Id of the producer NF | 6faf1bbc-6e4a-4454-a507-a14ef8e1bc11 | NA | NA |
global.svcToSvcInstanceIdMapping.svcName | Back-end service name, which should match the producerSvcIdHeader value and perf info reported service name for the LCI or OCI headers reporting. | nf-registration | NA | NA |
global.svcToSvcInstanceIdMapping.serviceInstanceId | Back-end service instance id to be included in the LCI or OCI headers | fe7d992b-0541-4c7d-ab84-c6d70b1b01b1 | NA | NA |
global.perfInfoConfig.pollingInterval | Configurable interval at which the load information is polled from the perf-info service at GW | 5000 | NA | NA |
global.perfInfoConfig.serviceName | Perf-Info service name | NA | NA | NA |
global.perfInfoConfig.host | Perf-Info service Host IP | NA | NA | NA |
global.perfInfoConfig.port | Perf-Info service port | NA | NA | NA |
global.perfInfoConfig.perfInfoRequestMap | Perf-Info service request endpoint | NA | NA | NA |
global.lciHeaderConfig.enabled | Flag to enable or disable the LCI headers reporting | false | true or false | NA |
global.http1PortName | The port name for http1 port exposed on ingressgateway | http | NA | NA |
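The XFCC validation and LCI/OCI header parameters in Table 3-14 are typically set together under the global section of custom-values.yaml. The following is a hedged sketch; the nfList FQDNs and the enabled flags are illustrative assumptions, not shipped defaults:

```yaml
# Illustrative custom-values.yaml fragment for ocudr-ingressgateway-prov
global:
  xfccHeaderValidation:
    validation:
      matchField: Subject.CN
      matchCerts: -1                   # -1 = validate against all XFCC entries
      nfList:
        - scp1.example.com             # placeholder NF FQDNs
        - scp2.example.com
      errorCodeOnValidationFailure: 500
      errorTitleOnValidationFailure: "Internal Server Error"
      errorCauseOnValidationFailure: "XFCC Validation Failed"
      errorDescriptionOnValidationFailure: "Validation of SCP failed"
  lciHeaderConfig:
    enabled: true                      # default is false
    loadThreshold: 40
    localLciHeaderValidity: 1000       # ms
  ociHeaderConfig:
    enabled: true                      # default is false
    validityPeriod: 5000               # ms
    overloadConfigRange:
      minor: "[0-10]"
      major: "[10-70]"
      critical: "[70-100]"
    reductionMetrics:
      minor: 5
      major: 10
      critical: 30
```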
3.1.14 ocudr-egressgateway Microservice Parameters
The following table lists the parameters for the ocudr-egressgateway microservice (API Gateway).
Table 3-15 ocudr-egressgateway microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
enabled | Flag to enable egress gateway | true | true/false | NA |
image.name | Docker image name | ocegress_gateway | NA | NA |
image.tag | Tag or version of image | 23.4.7 | NA | NA |
image.pullPolicy | This setting indicates if image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
initContainersImage.name | Docker image name | configurationinit | NA | NA |
initContainersImage.tag | Tag or version of Image | 23.4.7 | NA | NA |
initContainersImage.pullPolicy | This setting indicates if image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
updateContainersImage.name | Docker Image name | configurationupdate | NA | NA |
updateContainersImage.tag | Tag or version of Image | 23.4.7 | NA | NA |
updateContainersImage.pullPolicy | This setting indicates if image needs to be pulled or not | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
dbHookImage.name | Image name of dbHook | common_config_hook | NA | NA |
dbHookImage.tag | Tag or version of dbHook Image | 23.4.7 | NA | NA |
dbHookImage.pullPolicy | Pull policy of image | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
resources.limits.cpu | CPU allotment limitation | 2 | NA | NA |
resources.limits.memory | Memory allotment limitation | 2Gi | NA | NA |
resources.limits.initServiceCpu | Maximum number of CPUs that Kubernetes allows the egress-gateway init container to use | 1 | NA | NA |
resources.limits.initServiceMemory | Memory limit for egress-gateway init container | 1Gi | NA | NA |
resources.limits.updateServiceCpu | Maximum number of CPUs that Kubernetes allows the egress-gateway update container to use | 1 | NA | NA |
resources.limits.updateServiceMemory | Memory limit for egress-gateway update container | 1Gi | NA | NA |
resources.limits.commonHooksCpu | CPU limit for database Hook container | 1 | NA | NA |
resources.limits.commonHooksMemory | Memory limit for database Hook container | 1Gi | NA | NA |
resources.requests.cpu | CPU allotment for ocudr-egressgateway pod | 2 | NA | NA |
resources.requests.memory | Memory allotment for ocudr-egressgateway pod | 2Gi | NA | NA |
resources.requests.initServiceCpu | Number of CPUs that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide the node on which to place the pod | 1 | NA | NA |
resources.requests.initServiceMemory | Memory that the system guarantees for the egress-gateway init container; Kubernetes uses this value to decide the node on which to place the pod | 1Gi | NA | NA |
resources.requests.updateServiceCpu | Number of CPUs that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide the node on which to place the pod | 1 | NA | NA |
resources.requests.updateServiceMemory | Memory that the system guarantees for the egress-gateway update container; Kubernetes uses this value to decide the node on which to place the pod | 1Gi | NA | NA |
resources.requests.commonHooksCpu | CPU request for the database Hook container; Kubernetes uses this value to decide the node on which to place the pod | 1 | NA | NA |
resources.requests.commonHooksMemory | Memory request for the database Hook container; Kubernetes uses this value to decide the node on which to place the pod | 1Gi | NA | NA |
resources.target.averageCpuUtil | CPU utilization limit for auto scaling | 80 | NA | NA |
service.ssl.tlsVersion | TLS version to be used | TLSv1.2 | Valid TLS version | These are service fixed parameters |
service.initialAlgorithm | Algorithm to be used. ES256 can also be used, provided the corresponding certificates are configured | RS256 | RS256/ES256 | NA |
service.ssl.privateKey.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.privateKey.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.privateKey.rsa.fileName | RSA private key stored in the secret | rsa_private_key_pkcs1.pem | NA | NA |
service.ssl.privateKey.ecdsa.fileName | ECDSA private key stored in the secret | ecdsa_private_key_pkcs8.pem | NA | NA |
service.ssl.certificate.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.certificate.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.certificate.rsa.fileName | RSA certificate stored in the secret | apigatewayrsa.cer | NA | NA |
service.ssl.certificate.ecdsa.fileName | ECDSA certificate stored in the secret | apigatewayecdsa.cer | NA | NA |
service.ssl.caBundle.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.caBundle.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.caBundle.fileName | caBundle file name stored in the secret | caroot.cer | NA | NA |
service.ssl.keyStorePassword.k8SecretName | Name of the secret that stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.keyStorePassword.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.keyStorePassword.fileName | keyStore password file name stored in the secret | key.txt | NA | NA |
service.ssl.trustStorePassword.k8SecretName | Name of the secret which stores keys and certificates | ocudr-gateway-secret | NA | NA |
service.ssl.trustStorePassword.k8NameSpace | Namespace in which secret is created | ocudr | NA | NA |
service.ssl.trustStorePassword.fileName | trustStore password stored in the secret | trust.txt | NA | NA |
maxUnavailable | Maximum number of pods that can be unavailable at a time | 23.4.7 | NA | NA |
minReplicas | Minimum number of replicas to scale to maintain an average CPU utilization | 1 | NA | NA |
maxReplicas | Maximum number of replicas to scale to maintain an average CPU utilization | 4 | NA | NA |
log.level.root | Log level to be shown on ocudr-egressgateway pod | WARN | valid level | NA |
log.level.egress | Log level to be shown on ocudr-egressgateway pod for egress related flows | INFO | valid level | NA |
log.level.oauth | Log level to be shown on ocudr-egressgateway pod for oauth related flows | INFO | valid level | NA |
log.level.updateContainer | Log level for update container | INFO | valid level | NA |
log.level.configclient | Log level for configuration client | INFO | valid level | NA |
log.level.hook | Log level for hook | INFO | valid level | NA |
fullnameOverride | Name to be used for deployment | ocudr-egressgateway | NA | This configuration is commented by default. |
initssl | Flag to initialize SSL related infrastructure in init or update container | false | NA | NA |
jaegerTracingEnabled | Flag to enable Jaeger Tracing | false | true/false | NA |
openTracing.jaeger.udpSender.host | FQDN of Jaeger agent service | occne-tracer-jaeger-agent.occne-infra | Valid FQDN | NA |
openTracing.jaeger.udpSender.port | UDP port of Jaeger agent service | 6831 | Valid Port | NA |
openTracing.jaeger.probabilisticSampler | probabilisticSampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | Sampler makes a random sampling decision with the probability of sampling. For example if the value set is 0.1, approximately 1 in 10 traces are sampled. |
enableOutgoingHttps | Flag to enable outgoing HTTPS requests | false | true or false | NA |
istioSidecarQuitUrl | QuitUrl configurable for side car | *istioQuitUrl | Valid url | NA |
istioSidecarReadyUrl | Readiness url configurable for side car | *istioReadyUrl | Valid url. Note: The port used should be the admin port configured for the istio proxy container. | NA |
serviceMeshCheck | Flag to enable serviceMesh | *serviceMeshFlag | true/false | NA |
httpsTargetOnly | Global parameter that is taken into consideration if the route-based (under routeConfig section) httpsTargetOnly parameter is not available. true: Select SbiRouting instances from the https list only. false: Run existing logic as per the provided scheme. | false | true/false | Values of httpsTargetOnly must be enclosed in double quotes |
httpRuriOnly | Global parameter that is taken into consideration if the route-based (under routeConfig section) httpRuriOnly parameter is not available. true: Change the scheme of the RURI to http. false: Keep the scheme as is. | false | true/false | Values of httpRuriOnly must be enclosed in double quotes |
sbiRouting.sbiRoutingDefaultScheme | Default scheme applicable when 3gpp-sbi-target-apiroot header is missing | HTTPS | http/https | NA |
sbiRouting.sbiRerouteEnabled | Set this flag to true if re-routing to multiple SCP instances is to be enabled. | false | true/false | NA |
sbiRouting.peerConfiguration[0].id | Peer identifier for the peer | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerConfiguration[0].host | First SbiRouting instance HTTP IP/FQDN | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerConfiguration[0].port | First SbiRouting instance Port | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerConfiguration[0].apiPrefix | First SbiRouting instance apiPrefix. Change this value to corresponding prefix if "/" is not expected to be provided along. Applicable only for SCP with TLS enabled. | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerConfiguration[0].virtualHost | HTTP virtual FQDN of the peer. | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerSetConfiguration[0].id | This is the peer set id that contains the list of http and https instances | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerSetConfiguration[0].httpConfiguration[0].priority | Priority of the http instance to which the request needs to be forwarded. The lower the priority value, the higher the preference | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerSetConfiguration[0].httpConfiguration[0].peerIdentifier | This denotes the peer id that is present in the list of peers configured with unique ids | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerSetConfiguration[0].httpsConfiguration[0].priority | Priority of the https instance to which the request needs to be forwarded. The lower the priority value, the higher the preference | Value to be configured based on SCP details | NA | NA |
sbiRouting.peerSetConfiguration[0].httpsConfiguration[0].peerIdentifier | This denotes the peer id that is present in the list of peers configured with unique ids | Value to be configured based on SCP details | NA | NA |
headlessServiceEnabled | Enabling this will make the service type default to ClusterIP | false | true/false | NA |
oauthClient.enabled | Flag to enable oauth client | false | true or false | Enable based on Oauth configuration |
routesConfig | Valid routes to be configured for UDR egress traffic | | Valid routes | The scpSetId to be configured based on the setId used for SCP route. These sets are configured under the SCP section. |
dnsSrv.host | Host of DNS Alternate Route Service | 127.0.0.1 | Valid host IP | NA |
dnsSrv.alternateRouteSvcName | Service name of the Alternate Route Service. If the service name is provided, this parameter is used to integrate Egress Gateway with the alternate route service. If an IP or FQDN is to be provided instead, leave this parameter blank and update dnsSrv.host accordingly. If this parameter is populated, dnsSrv.host can be ignored. | alternate-route | Valid service name of alternate route service | NA |
dnsSrv.port | Port of DNS Alternate Route Service | 80 | Valid port | NA |
dnsSrv.scheme | Scheme of request to be sent to alternate route service. By default it is HTTP | HTTP | HTTP/HTTPS | NA |
dnsSrv.errorCodeOnDNSResolutionFailure | Configurable error code used in case of DNS resolution failure | 425 | Valid HTTP error code | NA |
dnsSrv.errorDescriptionOnDNSResolutionFailure | Error description for DNS resolution failure. Populated in ProblemDetails response in ProblemDetails.detail section. | "" | NA | NA |
dnsSrv.errorCauseOnDNSResolutionFailure | Error cause for DNS resolution failure. Populated in ProblemDetails response in ProblemDetails.cause section. | "" | NA | NA |
dnsSrv.errorTitleOnDNSResolutionFailure | Error title for DNS resolution failure. Populated in ProblemDetails response in ProblemDetails.title section. | "" | NA | NA |
oauthClient.dnsSrvEnabled | Flag to enable DNS SRV for oAuth | false | true/false | NA |
oauthClient.httpsEnabled | Flag to enable HTTPS support, which is a deciding factor for oauth request scheme and search query parameter in dns-srv request | false | true/false | NA |
oauthClient.virtualFqdn | virtualFqdn value that needs to be populated and sent in dns-srv query | localhost:port | NA | Mandatory if oauthClient.dnsSrvEnabled is true |
oauthClient.staticNrfList | List of static NRFs | - localhost:port | NA | Mandatory if oauthClient.enabled is true |
oauthClient.nfType | NF type of service consumer | UDR | NA | Mandatory if oauthClient.enabled is true |
oauthClient.consumerPlmnMNC | MNC of service consumer | 14 | Valid MNC | NA |
oauthClient.consumerPlmnMCC | MCC of service consumer | 310 | Valid MCC | NA |
oauthClient.maxNonPrimaryNrfs | Maximum number of non-primary NRF instances to query based on retryErrorCodeSeries configured (from the list of non-primary NRF instances available) if a failure response is received from primary NRF | 2 | NA | Mandatory if oauthClient.enabled is true |
oauthClient.apiPrefix | apiPrefix that needs to be appended in the Oauth request flow | "" | Valid String | Mandatory if oauthClient.enabled is true |
oauthClient.retryErrorCodeSeriesForSameNrf | Determines the fallback condition to other non-primary NRF instances: if the attempts for the current NRF are exhausted and the last received response from NRF matches retryErrorCodeSeries from errorSetId (4XX, 5XX) and errorCodes, fall back. If the configured attempts for the current NRF are not exhausted and the previously received error matches retryErrorCodeSeries from errorSetId (4XX, 5XX) and errorCodes, retry the same NRF again. errorCodes can be configured with a specific set of error codes for a particular errorSetId, or as "-1", in which case the entire range of error codes is considered for that errorSetId | | Valid http errors | NA |
oauthClient.retryErrorCodeSeriesForNextNrf | Error code series that determines fallback to the next available NRF instance | | Valid HTTP errors | NA |
oauthClient.retryExceptionListForSameNrf | List of available exceptions for the oauth client while sending AccessToken requests to NRF. When an exception matches one of the exceptions in retryExceptionList, it tries the next available NRF instance based on priority instead of retrying the same NRF. | | Valid exception | NA |
oauthClient.retryExceptionListForNextNrf | List of exceptions on which the oauth client falls back to the next available NRF instance | | Valid Exception | NA |
oauthClient.connectionTimeout | Determines the connection timeout in milliseconds for jetty client in case of subscription requests to NRF-Client Management Service and Access Token requests to NRF | 1000 | Unit: milliseconds | Mandatory if oauthClient.enabled is true |
oauthClient.requestTimeout | Determines the request timeout in milliseconds for jetty client for access token requests to NRF | 1000 | Unit: milliseconds | Mandatory if oauthClient.enabled is true |
oauthClient.attemptsForPrimaryNRFInstance | Determines the number of attempts available to query primary NRF | 1 | NA | Mandatory if oauthClient.enabled is true |
oauthClient.attemptsForNonPrimaryNRFInstance | Determines the number of attempts available to query non-primary NRF | 1 | NA | Mandatory if oauthClient.enabled is true |
oauthClient.defaultNRFInstance | Default NRF instance configuration. It is used to query access token from default NRF if access token is not successfully fetched from any of the available NRF instances | localhost:port | NA | NA |
oauthClient.defaultErrorCode | Default error code to be sent in response to the requester NF when an error response from NRF is received for an AccessToken request. | 503 | Valid Error Code | Mandatory if oauthClient.enabled is true |
oauthClient.nrfClientConfig.serviceName | The service name of NRF-Client Management Service used for constructing the Subscription URL and sending out the subscription request. When the serviceName field is empty, nrfClientConfig.host is used as the host in the NRF-Client management service subscription request URL | ocnf-client-nfmanagement | NA | NA |
oauthClient.nrfClientConfig.host | The host configuration for NRF-Client management service for sending subscription requests when service name is not configured | 10.75.224.123 | Valid host ip/fqdn | NA |
oauthClient.nrfClientConfig.port | The port configuration for NRF-Client management service for sending subscription requests | 8080 | Valid Port | NA |
oauthClient.nrfClientConfig.nrfClientRequestMap | The request mapping URL for sending subscription requests from Egress-Gateway to NRF-Client management service | /v1/nrf-client/subscriptions/nrfRouteList | NA | NA |
maxConcurrentPushedStreams | Maximum number of concurrent pushed streams for Jetty client configuration | 6000 | Valid Number | NA |
maxConnectionsPerDestination | Max connections allowed per destination from Egress Gateway | 10 | Valid Number | NA |
maxRequestsQueuedPerDestination | Queue size at the ocudr-egressgateway pod | 5000 | Valid Number | NA |
maxConnectionsPerIp | Maximum number of connections allowed from endpoint to other microservices | 10 | Valid Number | NA |
connectionTimeout | Connection timeout configured on the egress gateway client towards the backend | 10000 | Unit: Milliseconds | NA |
requestTimeout | Request timeout configured on the egress gateway client towards the backend | 5000 | Unit: Milliseconds | NA |
jettyIdleTimeout | Idle timeout allowed for Jetty in milliseconds | 0 | Unit: Milliseconds (a value <= 0 makes the timeout infinite) | NA |
k8sServiceCheck | Flag to enable load balancing at Egress Gateway instead of relying on Kubernetes | false | true/false | NA |
service.customExtension.labels | Custom labels that needs to be added to egress gateway specific service | null | NA | This can be used to add custom label(s) to egressgateway Service. |
service.customExtension.annotations | Custom Annotations that needs to be added to egress gateway specific services | null | NA | This can be used to add custom annotation(s) to egressgateway Service. |
deployment.customExtension.labels | Custom labels that needs to be added to egress gateway specific deployment | null | NA | This can be used to add custom label(s) to egressgateway Deployment. |
deployment.customExtension.annotations | Custom Annotations that needs to be added to egress gateway specific deployment | null | NA | This can be used to add custom annotation(s) to egressgateway deployment. |
readinessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first readiness probe | 30 | Unit: Seconds | Do not change this value. If pods are delayed in coming up and the probe is killing the pod, consider tuning these parameters |
readinessProbe.periodSeconds | Time interval for every readiness probe check | 10 | Unit: Seconds | Do not change this value. If pods are delayed in coming up and the probe is killing the pod, consider tuning these parameters |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this default value |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after being failed | 1 | NA | Do not change this default value |
readinessProbe.failureThreshold | When a pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | NA |
livenessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first liveness probe | 30 | Unit: Seconds | Do not change this value. If pods are delayed in coming up and the probe is killing the pod, consider tuning these parameters |
livenessProbe.periodSeconds | Time interval for every liveness probe check. Note: Do not change this value. If pods are delayed in coming up and the probe is killing the pod, consider tuning these parameters. | 15 | Unit: Seconds | NA |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | NA | Do not change this default value |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after being failed | 1 | NA | Do not change this default value |
livenessProbe.failureThreshold | When a Pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this default value |
extraContainers | This configuration decides service-level extraContainer support. The default value makes it depend on what is configured at the global level | USE_GLOBAL_VALUE | Allowed Values: | NA |
commonCfgClient.enabled | Flag to enable common configuration client | true | true/false | NA |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name | NA |
commonCfgServer.host | Host name of Common configuration server to which client tries to poll for configuration updates. This value is picked up if commonCfgServer.configServerSvcName is not available | 10.75.224.123 | Valid host details | NA |
commonCfgServer.port | Port number of Common Configuration server | 5001 | Valid port | NA |
commonCfgServer.pollingInterval | Time interval between two subsequent polling requests from configuration client to server | 5000 | Valid period | NA |
commonServiceName | The common service name that is currently requesting for configuration updates from server | egw | NA | NA |
restoreBackupOnInstall | This flag, when enabled, picks up data from the backup table during installation of Egress Gateway | false | true/false | NA |
dropTablesOnUpgrade | This flag, when enabled, drops the common configuration tables on Helm upgrade if there is no data in the tables | false | true/false | NA |
httpRuriOnly | This flag is enabled when the alternate route schema is set to HTTPS in nudr-nrf-client section | false | true/false | NA |
autoRedirect | Flag to enable redirection in Egress Gateway. If enabled, Egress Gateway redirects to the URL present in location header when the response code is 301, 302, 303, 307, or 308. | false | true/false | NA |
global.configurableErrorCodes.errorScenarios[i].exceptionType | Configurable exception or error type for an error scenario in Ingress Gateway. The only supported values are ConnectionTimeout, RequestTimeout, UnknownHostException, ConnectException, RejectedExecutionException, SSLHandshakeException, InternalError, NotFoundException, and ClosedChannelException | Yes (If ConfigurableErrorCodes is enabled and as per need) | NA | NA |
routeConfigMode | Mode of route configuration for sbiRouting. Possible values are HELM, REST | No | Helm | NA |
configureDefaultRoute | Flag to configure default route. Configure this flag when routeConfigMode is configured as REST. | No | True | NA |
dbConfig.backupDbName | Configure this parameter when the backup table should use a separate schema | No | NA | NA |
componentValidator | Custom validator configuration needed by hooks to populate values in the database | No | "com.oracle.common.egw.EgressCustomValidator" | NA |
dependentValidators | Comma-separated list of dependent validators. For example: "com.oracle.common.oauth.OauthCustomValidator, com.oracle.common.igw.IngressCustomValidator" | No | "" | NA |
isIpv6Enabled | Set this flag to true when deployed in an IPv6 enabled setup | No | false | NA |
routesConfig[0].header | Provide the header name and regex (header predicate) in route to match header in the incoming request. | No | NA | NA |
routesConfig[0].filterName2.exceptions | The exception types on which the re-route needs to be attempted. | Yes (If sbiRerouteEnabled is true) | NA | NA |
deploymentEgressGateway.imageTag | Image Tag name of egress gateway | No | 1.14.0 | NA |
global.logStorage | Ephemeral storage configuration for log storage | No | 0 | NA |
global.crictlStorage | Ephemeral storage configuration for crictl storage | No | 0 | NA |
nfSpecificConfig | This configuration is to enable or disable the NF specific default value configuration; a list of configuration files is defined in the default configuration list. | No | | NA |
honorDnsRecordttl | This configuration determines whether to honor the TTL values while resolving FQDNs. | No | honorDnsRecordttl: false | NA |
unusedDestinationCleanup | This configuration is used to enable scheduler which will remove unused FQDN after configured idle time. | No | unusedDestinationCleanup: false | NA |
unusedDestinationCleanupAfter | This configuration is used to decide when to clean up unused destination. | No | unusedDestinationCleanupAfter: 20000 #(ms) | NA |
unusedDestinationCleanupScheduler | This configuration is the time interval when the scheduler should run. | No | unusedDestinationCleanupScheduler: 600000 #(ms) | NA |
ephemeralStorageLimit | Ephemeral storage limit | No | 0 | NA |
cncc.enablehttp1 | This flag is used to enable HTTP/1.1 traffic on Ingress Gateway. Enable it only when you need to receive HTTP/1.1 traffic on Ingress Gateway. | false | true/false | NA |
userAgentHeaderValidationConfigMode | This flag is used to accept user-agent configurations from HELM or REST | HELM | HELM/REST | NA |
userAgentHeaderValidation.enabled | This flag is used to enable the user-agent feature | false | true/false | NA |
userAgentHeaderValidation.consumerNfTypes | Use this parameter to configure the NF types that consume UDR, for example, PCF, NRF, or UDM. | Empty | All NF types defined in the 3GPP specification. Example: PCF,NRF,NEF,UDM | NA |
serverHeader.autoBlacklistProxy.enabled | To enable blacklisting in server header feature | No | NA | default is false |
serverHeader.autoBlacklistProxy.errorCodeList | List of error codes to check for blacklisting | No | NA | NA |
serverHeader.autoBlacklistProxy.blacklistingPeriod | Time to blacklist a peer in milliseconds | No | NA | NA |
userAgentHeaderConfigMode | Parameter to configure the mode of configuration | No | HELM | NA |
userAgentHeader.enabled | Flag to enable or disable the feature | No | False | NA |
userAgentHeader.nfType | Consumer NF type | No | empty string | NA |
userAgentHeader.nfInstanceId | Instance ID of the consumer NF type | No | empty string | NA |
userAgentHeader.addfqdnToHeader | Flag to indicate whether the user-agent header should be generated with the FQDN | No | false | NA |
userAgentHeader.overwriteHeader | Flag to indicate whether the user-agent header should be overwritten if already present in the request | No | false | NA |
userAgentHeader.nfFqdn | FQDN of the NF | No | empty string | NA |
routesConfig[0].filterName2.withServerHeaderSupport.retries | Number of reroutes to be attempted to alternate SCP instances if the request matches this route's path in the server header scenario | No | NA | NA |
routesConfig[0].filterName2.withServerHeaderSupport.methods | The methods for which a reroute must be attempted in the server header scenario | No | NA | NA |
routesConfig[0].filterName2.withServerHeaderSupport.statuses | The response error codes for which a reroute must be attempted in the server header scenario | No | NA | NA |
routesConfig[0].filterName2.withServerHeaderSupport.exceptions | The exception types for which a reroute must be attempted in the server header scenario | No | NA | NA |
routesConfig[0].metadata.serverHeaderEnabled | To enable or disable the server header feature | No | NA | The default is false |
routesConfig[0].metadata.serverHeaderNFtypes | List of NF types for server header | No | NA | NA |
global.lciHeaderConfig.enabled | Flag to enable LCI Header | False | True or False | NA |
global.lciHeaderConfig.loadThreshold | Egress Gateway decides whether or not to send the header in the next response using this threshold value | 40 | NA | NA |
global.lciHeaderConfig.localLciHeaderValidity | Interval at which the LCI header is sent in the response. For example, if set to 60000 ms, the header is sent every 60 seconds. | 1000 | NA | NA |
global.lciHeaderConfig.consumerSvcIdHeader | Using this header, EGW identifies the destination address details and based on that, it decides whether or not to send the lci header | ConsumerHeader | NA | NA |
global.lciHeaderConfig.producerSvcIdHeader | This header is configured in the same way in which the header from the response of back-end service (nudr-drservice) is received | UDRHeader | NA | NA |
global.ociHeaderConfig.enabled | Flag to enable OCI Header | False | True or False | NA |
global.ociHeaderConfig.consumerSvcIdHeader | Using this header, Egress Gateway identifies the destination address details and based on that decides whether or not to send the OCI header | | NA | NA |
global.ociHeaderConfig.producerSvcIdHeader | This header is configured in the same way in which the header from the response of back-end service (nudr-drservice) is received. | UDRHeader | NA | NA |
global.ociHeaderConfig.validityPeriod | Interval at which the OCI header is sent in the response. For example, if set to 30000 ms, the header is sent every 30 seconds. | 5000 | NA | NA |
global.ociHeaderConfig.overloadConfigRange | The back-end service load level is determined based on this configuration range | minor: "[0-80]" major: "[80-85]" critical: "[90-100]" | NA | NA |
global.ociHeaderConfig.reductionMetrics | Indicates the percentage of traffic that the consumer is informed to drop. | minor: 5 #(Possible values 1 to 9, both inclusive) major: 15 #(Possible values 5 to 15, both inclusive) critical: 25 #(Possible values 10 to 50, both inclusive) | NA | NA |
global.nfInstanceId | Indicates the global value being used at the NF level. | *nfInsId | NA | NA |
global.svcToSvcInstanceIdMapping.svcName | Indicates the back-end service name, which is configured. This value should match with the service in which perf-info reports the load. | ocudr-nudr-drservice | NA | NA |
global.svcToSvcInstanceIdMapping.serviceInstanceId | This value is unique for the service and it needs to be configured with the same value in which NFProfile (for nudr-drservice) for the svc level is sent. | ae870316-384d-458a-bd45-025c9e748976 | NA | NA |
global.perfInfoConfig.pollingInterval | Configurable interval at which load information is polled from the perf-info service at the Gateway. | NA | NA | NA |
global.perfInfoConfig.serviceName | Perf-Info service name | NA | NA | NA |
global.perfInfoConfig.host | Perf-Info service Host IP | NA | NA | NA |
global.perfInfoConfig.port | Perf-Info service port | NA | NA | NA |
global.perfInfoConfig.perfInfoRequestMap | Perf-Info service request endpoint | NA | NA | NA |
deDupeResponseHeader | Used for handling duplicate values in the response headers. Multiple values can be provided separated by a space. For example: content-type nettylatency requestmethod, RETAIN_LAST | False | True or False | NA |
oauthEnabled | To enable or disable oauth at the route level | False | True or False | NA |
http1.enableOutgoingHTTP1 | Flag to enable or disable the outgoing http1.1 traffic for Egress Gateway | False | True or False | NA |
sbiroutingerrorcriteriasets.id | Unique id for sbiRoutingErrorCriteriaSet | NA | NA | NA |
sbiroutingerrorcriteriasets.method | Methods for which a reroute or retry is triggered | NA | NA | NA |
sbiroutingerrorcriteriasets.response.statuses.statusSeries | HTTP status series for which a reroute or retry is triggered when an error response is received from downstream | NA | NA | NA |
sbiroutingerrorcriteriasets.response.statuses.status | Specific HTTP status that belongs to statusSeries for which a reroute or retry has to be triggered. To enable a retry or reroute for all the HTTP status belonging to statusSeries, configure as -1 | NA | NA | NA |
sbiroutingerrorcriteriasets.response.exceptions | Specific exceptions for which a reroute or retry is triggered | NA | NA | NA |
sbiroutingerrorcriteriasets.response.headersMatchingScript | Comma-separated string values in the following format: first token: headerCheck (hard-coded value); second to (n-1)th tokens: header names to be validated; nth token: regular expression for the header validation. Note: The final result is an aggregated OR of the individual header checks. | NA | NA | NA |
sbiroutingerroractionsets.id | Unique Id for sbiRoutingErrorActionSet | NA | NA | NA |
sbiroutingerroractionsets.action | Action that needs to be taken when a specific criteria set is matched. Currently, only 2 values are supported: reroute/retry | NA | NA | NA |
sbiroutingerroractionsets.attempts | Maximum number of retries to either same or different peer in case of error or failures from the back-end. | NA | NA | NA |
sbiroutingerroractionsets.blackList.enabled | Flag to enable the peer blacklist feature using the server headers received in the response. | NA | NA | NA |
sbiroutingerroractionsets.blackList.duration | The duration for which the peer is blacklisted and no traffic is routed to that peer for this period. | NA | NA | NA |
dnsRefreshDelay | DNS refresh delay | 10000 (Unit: Milliseconds) | NA | NA |
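To make the SBI routing error criteria and action parameters above concrete, the following is a minimal, illustrative `custom-values.yaml` fragment for the Egress Gateway section. The parameter names are taken from the table; the `id` values and the exact nesting are placeholders and assumptions, so verify against the sample file shipped in Custom_Templates before use.

```yaml
# Illustrative Egress Gateway fragment of custom-values.yaml (sketch only;
# verify nesting against the shipped sample file).
routeConfigMode: HELM            # HELM or REST
isIpv6Enabled: false
sbiroutingerrorcriteriasets:
  - id: criteria_0               # placeholder id, referenced by an action set
    method:
      - GET
      - POST
    response:
      statuses:
        - statusSeries: 5xx
          status: -1             # -1 = retry/reroute for all statuses in the series
      exceptions:
        - java.net.ConnectException
sbiroutingerroractionsets:
  - id: action_0                 # placeholder id
    action: reroute              # only reroute or retry are supported
    attempts: 2                  # maximum retries to the same or a different peer
    blackList:
      enabled: false             # peer blacklisting using server headers
      duration: 60000            # blacklist period in ms
```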
3.1.15 alternate-route Microservice Parameters
The following table lists the parameters for the alternate-route microservice.
Table 3-16 alternate-route microservice parameters
Parameter | Description | Default value | Range or Possible Values (If applicable) | Notes |
---|---|---|---|---|
extraContainers | This configuration decides service-level extraContainers support. The default value makes it dependent on what is configured at the global level. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
replicaCount | Indicates the number of pods that need to be created as part of deployment | 2 | Any valid number from 1 to 4 | NA |
deploymentDnsSrv.image | Name of the Docker image of the Alternate Route application | alternate_route | NA | NA |
deploymentDnsSrv.tag | Tag or version of the image of the Alternate Route application | 23.4.7 | NA | NA |
deploymentDnsSrv.pullPolicy | Indicates whether the image needs to be pulled | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
dbHookImage.name | Image name of dbHook | common_config_hook | Not Applicable | NA |
dbHookImage.tag | Tag or version of dbHook image | 23.4.7 | Not Applicable | NA |
dbHookImage.pullPolicy | Pull policy of the image | IfNotPresent | Possible Values: Always, IfNotPresent, Never | NA |
service.type | Type of Kubernetes service | ClusterIP | Possible Values: ClusterIP, NodePort, LoadBalancer | NA |
service.customExtension.labels | Custom labels that need to be added to the alternate-route Kubernetes service | null | NA | NA |
service.customExtension.annotations | Custom annotations that need to be added to the alternate-route Kubernetes service | null | NA | NA |
deployment.customExtension.labels | Custom labels that need to be added to the alternate-route Kubernetes deployment | null | NA | NA |
deployment.customExtension.annotations | Custom annotations that need to be added to the alternate-route Kubernetes deployment | null | NA | NA |
minReplicas | Minimum number of replicas to scale to maintain an average CPU utilization | 1 | NA | NA |
maxReplicas | Maximum number of replicas to scale to maintain an average CPU utilization | 5 | NA | NA |
ports.servicePort | Port number of Kubernetes service | 80 | Valid Port | NA |
ports.containerPort | Container port number that represents a network port in a single container | 8004 | Valid Port | NA |
ports.actuatorPort | Actuator Port number | 9090 | Valid Port | NA |
log.level.root | Log level for root logs | WARN | Possible Values: WARN, INFO, DEBUG | NA |
log.level.altroute | Log level for alternate route logs | WARN | Possible Values: WARN, INFO, DEBUG | NA |
log.level.configclient | Log level for configuration client logs | WARN | Possible Values: WARN, INFO, DEBUG | NA |
coherence.port | Coherence port | No | 8000 | NA |
coherence.messagingPort1 | Port used by Coherence for communication | No | 8095 | NA |
coherence.messagingPort2 | Port used by Coherence for communication | No | 8096 | NA |
log.level.hook | Log level for hook logs | warn | Possible Values: WARN, INFO, DEBUG | NA |
staticVirtualFqdns[0].name | Name of the virtual FQDN | NA | Valid name, for example: http://abc.test.com | NA |
staticVirtualFqdns[0].alternateFqdns[0].target | Name of the alternate FQDN mapped to the above virtual FQDN | NA | Target FQDN, for example: abc.test.com | NA |
staticVirtualFqdns[0].alternateFqdns[0].port | Port number of the alternate FQDN | NA | Valid Port | NA |
staticVirtualFqdns[0].alternateFqdns[0].priority | Priority number of the alternate FQDN | NA | Priority Number | NA |
dnsSrvEnabled | Flag to enable DNS-SRV query to coreDNS Server | true | true/false | NA |
dnsSrvFqdnSetting.enabled | Flag to enable the usage of custom pattern for FQDN while triggering DNS-SRV query | true | true/false | NA |
dnsSrvFqdnSetting.pattern | Custom pattern used for the FQDN while triggering the DNS-SRV query | "_{scheme}._tcp.{fqdn}." | NA | NA |
asynTaskExecutor.corePoolSize | This is the number of threads available in the "registration" thread pool which performs registration asynchronously | 10 | Valid number | NA |
asynTaskExecutor.maxPoolSize | This is the limit for maximum number of threads in the "registration" thread pool | 10 | Valid number | NA |
asynTaskExecutor.queueCapacity | This is the limit for number of requests that get queued for the availability of thread from "registration" thread pool | 100 | Valid number | NA |
refreshTaskExecutor.corePoolSize | This is the number of threads available in the "auto-refresh" thread pool that performs "auto-refresh" scheduled task | 10 | Valid number | NA |
refreshTaskExecutor.maxPoolSize | This is the limit for maximum number of threads in the "auto-refresh" thread pool | 10 | Valid number | NA |
refreshTaskExecutor.queueCapacity | Limit for number of requests that get queued for the availability of thread from "auto-refresh" thread pool | 20 | Valid number | NA |
cleanupTaskExecutor.corePoolSize | Number of threads available in the "auto-cleanup" thread pool which performs "auto-cleanup" scheduled task | 3 | Valid number | NA |
cleanupTaskExecutor.maxPoolSize | Limit for maximum number of threads in the "auto-cleanup" thread pool | 3 | Valid number | NA |
cleanupTaskExecutor.queueCapacity | Limit for number of requests that get queued for the availability of thread from "auto-cleanup" thread pool | 5 | Valid number | NA |
refreshScheduler.enabled | Flag to enable the "auto-refresh" scheduled task | true | true/false | NA |
refreshScheduler.interval | Use this to trigger the refresh scheduler as per the time interval configured in minutes | 60 | Unit: Minutes | NA |
refreshScheduler.auditorShuffle | Flag to enable auditor (auto-refresh) functionality to randomly rotate among available non-leader pods | false | true/false | NA |
refreshScheduler.throttling.burstLimit | Limit for number of DNS-SRV queries that can be sent in single loop while doing auto-refresh. It should always be greater than zero | 50 | Valid Number | NA |
refreshScheduler.throttling.burstInterval | Time to wait (in seconds) before triggering the next burst of DNS-SRV queries while doing auto-refresh. This should always be greater than zero | 5 | Unit: Seconds | NA |
refreshScheduler.throttling.threadPoolSize | Number of threads in ThreadPoolExecutors that schedules the auto refresh bursts as per burstInterval | 5 | Valid Number | NA |
cleanupScheduler.enabled | Flag to enable the "auto-cleanup" scheduled task | true | true/false | NA |
cleanupScheduler.interval | Use to trigger "cleanup-scheduler" as per time interval configured in minutes | 100 | Unit: Minutes | NA |
cleanupScheduler.lastUsedInterval | This time interval in minutes is used for selection of records for cleanup | 200 | Unit: Minutes | NA |
readinessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first readiness probe | 30 | Unit: Seconds | Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | Unit: Seconds | Do not change this default value. |
readinessProbe.periodSeconds | Time interval for every readiness probe check | 10 | Unit: Seconds | Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this default value. |
readinessProbe.failureThreshold | When a pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this default value |
livenessProbe.initialDelaySeconds | Configurable wait time before the Kubelet performs the first liveness probe | 30 | Unit: Seconds | Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 | Unit: Seconds | Do not change this default value |
livenessProbe.periodSeconds | Time interval for every liveness probe check | 15 | Unit: Seconds | Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | NA | Do not change this default value. |
livenessProbe.failureThreshold | When a pod starts and the probe fails, Kubernetes tries failureThreshold times before giving up | 3 | NA | Do not change this default value |
resources.limits.cpu | CPU Limit | 2 | NA | NA |
resources.limits.memory | Memory Limit | 2Gi | NA | NA |
resources.limits.commonHooksCpu | CPU limit of database Hook container | 1 | Not applicable | NA |
resources.limits.commonHooksMemory | Memory limit of database Hook container | 1Gi | Not applicable | NA |
resources.requests.cpu | Number of CPUs requested | 1 | NA | NA |
resources.requests.memory | Memory Requested | 2Gi | NA | NA |
resources.requests.commonHooksCpu | Number of CPU request for database Hook container. Kubernetes uses this value to decide the node to place the pod | 1 | Not Applicable | NA |
resources.requests.commonHooksMemory | Memory request for database Hook container. Kubernetes uses this value to decide the node to place the pod | 1Gi | Not Applicable | NA |
target.averageCpuUtil | CPU utilization threshold (in percentage) of the pod at which the autoscaler triggers scaling | 80 | Unit: Percent | NA |
jaegerTracingEnabled | Flag to enable Jaeger tracing | false | true/false | NA |
openTracing.jaeger.udpSender.host | Jaeger Host | occne-tracer-jaeger-agent.occne-infra | Valid host details | NA |
openTracing.jaeger.udpSender.port | Jaeger Port number | 6831 | Valid Port | NA |
openTracing.probablisticSampler | Probabilistic Sampler on Jaeger | 0.5 | Range: 0.0 - 1.0 | Sampler makes a random sampling decision with the probability of sampling. For example: If the value set is 0.1, approximately 1 in 10 traces will be sampled. |
istioSidecarQuitUrl | Quit URL configurable for the sidecar. Change only if the URL is different for the sidecar container in this microservice. | *istioQuitUrl | NA | Do not change this value unless it is different from the value configured in the global section reference |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar. Change only if the URL is different for the sidecar container in this microservice. | *istioReadyUrl | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
serviceMeshCheck | Flag to enable service mesh. Set this to false when the sidecar is not included for this service. | *serviceMeshFlag | Not Applicable | Do not change this value unless it is different from the value configured in the global section reference |
commonCfgClient.enabled | Flag to enable persistent configuration | true | true/false | NA |
commonCfgServer.configServerSvcName | Service name of common configuration service to which the client tries to poll for configuration updates | nudr-config | Valid server service name | NA |
commonCfgServer.host | Host name of Common configuration server to which client tries to poll for configuration updates. This value is picked up if commonCfgServer.configServerSvcName is not available | 10.75.224.123 | Valid host details | NA |
commonCfgServer.port | Port of Common Configuration server | 5001 | Valid Port | NA |
commonCfgServer.pollingInterval | Interval between two subsequent polling requests from configuration client to server | 5000 | Valid period | NA |
commonServiceName | This is the common service name that is currently requesting for configuration updates from server | alt-route | Not Applicable | NA |
restoreBackupOnInstall | This flag when enabled picks up the data from the backup table during installation of ingress gateway | false | true/false | NA |
dropTablesOnUpgrade | This flag when enabled drops the common configuration tables on Helm upgrade if there is no data in the tables | false | true/false | NA |
global.metricPrefix | NF level metric prefix that is added to all the metrics | No | empty string | NA |
global.metricSuffix | NF level metric suffix that is added to all the metrics | No | empty string | NA |
metricPrefix | Service level metric prefix that is added to all the metrics | No | empty string | NA |
metricSuffix | Service level metric suffix that is added to all the metrics | No | empty string | NA |
minAvailable | Number of pods that must always be available, even during a disruption. | Yes | 0 | This parameter is used for the PodDisruptionBudget Kubernetes resource |
dbConfig.backupDbName | Configure when your backup table should have separate schema | No | NA | NA |
global.logStorage | Ephemeral storage configuration for log storage | No | 0 | NA |
global.crictlStorage | Ephemeral storage configuration for crictl storage | No | 0 | NA |
ephemeralStorageLimit | Ephemeral storage limit | No | 0 | NA |
tolerationsSetting | Flag to enable the toleration setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the tolerations configuration below in the same section is used. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
nodeSelection | Flag to enable the nodeSelection setting at the microservice level or global level. If set to USE_GLOBAL_VALUE, the globally configured nodeSelector value is used. If set to ENABLED, the nodeSelector configuration below in the same section is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, DISABLED, USE_GLOBAL_VALUE | NA |
tolerations | When tolerationsSetting is ENABLED, configure tolerations here. | [] | NA | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v1 | Allowed Values: v1, v2 | NA |
nodeSelector.nodeKey | NodeSelector key configuration at the microservice level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid key of a label for a particular node | NA |
nodeSelector.nodeValue | NodeSelector value configuration at the global level. This configuration is used when helmBasedConfigurationNodeSelectorApiVersion is set to v1. It does not depend on the nodeSelection flag; once configured, it is used for all microservices. | ' ' | Valid value paired with the above key of a label for a particular node. | NA |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. Uncomment and use this configuration when required; otherwise, keep it commented. | {} | Valid key-value pair matching a node for nodeSelection by a pod. | NA |
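As an illustration of how the alternate-route parameters above fit together, the following is a minimal, hedged `custom-values.yaml` fragment. The top-level `alternate-route` key, the virtual FQDN names, and the port/priority values are placeholders and assumptions; the other keys and defaults mirror Table 3-16. Verify the actual section name and nesting against the sample file shipped in Custom_Templates.

```yaml
# Illustrative alternate-route fragment of custom-values.yaml (sketch only).
alternate-route:                         # section name is an assumption
  replicaCount: 2
  dnsSrvEnabled: true                    # enable DNS-SRV queries to coreDNS
  dnsSrvFqdnSetting:
    enabled: true
    pattern: "_{scheme}._tcp.{fqdn}."    # custom FQDN pattern for SRV queries
  staticVirtualFqdns:
    - name: http://abc.test.com          # virtual FQDN (placeholder)
      alternateFqdns:
        - target: abc.test.com           # real FQDN mapped to the virtual one
          port: 8080                     # placeholder port
          priority: 10                   # lower number = higher priority
  refreshScheduler:
    enabled: true
    interval: 60                         # minutes between auto-refresh runs
    throttling:
      burstLimit: 50                     # SRV queries per burst (must be > 0)
      burstInterval: 5                   # seconds between bursts (must be > 0)
```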
3.1.16 Database Resolution Auditor Service
The following table lists the user parameters for the database resolution auditor service.
Table 3-17 Database Resolution Auditor Service
Parameter | Description | Default value | Range or Possible Values (If applicable) |
---|---|---|---|
enabled | Flag to enable or disable the database resolution auditor service | false | true or false |
image.name | Docker image name | nudr_dbcr_auditor_service | NA |
image.tag | Docker image tag | 23.4.2 | NA |
replicas | Number of replicas of nudr_dbcr_auditor_service | 1 | NA |
service.type | nudr_dbcr_auditor_service service type | ClusterIP | The Kubernetes service type for exposing the UDR deployment: ClusterIP, NodePort, LoadBalancer |
minReplicas | Minimum number of pods. | 1 | NA |
maxReplicas | Maximum number of pods. | 1 | NA |
maxUnavailable | Number of replicas that can go down during a disruption. This parameter is used for the PodDisruptionBudget Kubernetes resource. | 1 | NA |
service.port.http | The HTTP port to be used in nudr_dbcr_auditor_service | 5001 | NA |
service.port.https | The HTTPS port to be used in nudr_dbcr_auditor_service | 5002 | NA |
service.port.management | The actuator management port to be used for nudr_dbcr_auditor_service | 9000 | NA |
deployment.replicaCount | Number of nudr_dbcr_auditor_service pods to be maintained by the replica set created with the deployment. | 1 | NA |
hikari.poolsize | The Hikari MySQL connection pool size to be created at startup. | 10 | NA |
hikari.connectionTimeout | MYSQL connection timeout | 1000 | Unit: Milliseconds |
hikari.minimumIdleConnections | Minimum idle MYSQL connections maintained | 5 | Valid number is less than poolsize configured |
hikari.idleTimeout | Idle MYSQL connections timeout | 4000 | Unit: Milliseconds |
hikari.queryTimeout | Query timeout for a single database query. If set to -1, there will be no timeout set for database queries. | -1 | Unit: Seconds |
logging.level.root | Log level of the nudr_dbcr_auditor_service pod. | WARN | Possible Values: WARN, INFO, DEBUG |
resources.requests.cpu | The CPU to be allocated for the nudr_dbcr_auditor_service pod during deployment. | 2 | NA |
resources.requests.memory | The memory to be allocated for the nudr_dbcr_auditor_service pod during deployment. | 2Gi | NA |
resources.requests.logStorage | Log storage for ephemeral storage allocation request. | 50 | Unit: MB |
resources.requests.crictlStorage | Crictl storage for ephemeral storage allocation request. | 2 | Unit: MB |
resources.limits.cpu | CPU allotment limitation. | 2 | NA |
resources.limits.memory | Memory allotment limitation. | 2Gi | NA |
resources.limits.logStorage | Log storage for ephemeral storage allocation limit. | 1000 | Unit: MB |
resources.limits.crictlStorage | Crictl storage for ephemeral storage allocation limit. | 2 | Unit: MB |
resources.target.averageCpuUtil | CPU utilization limit for autoscaling (used for creating the HPA). | 80 | NA |
startupProbe.failureThreshold | Configurable number of times the startup probe is retried on failure. Note: Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters. | 40 | Unit: Seconds |
startupProbe.periodSeconds | Time interval for every startup probe check. Note: Do not change this value. If the pod is delayed in coming up and the probe is killing the pod, tune these parameters. | 5 | Unit: Seconds |
extraContainers | Configuration to enable the debug tool container | DISABLED | DISABLED or ENABLED |
serviceMeshCheck | Enable when deployed in a service mesh; refer to serviceMeshCheck in the global section. Set to false when the sidecar is not included for this service. | *serviceMeshFlag | NA. Note: Do not change this value unless it is different from the value configured in the global section reference. |
istioSidecarReadyUrl | Readiness URL configurable for the sidecar; refer to istioSidecarReadyUrl in the global section. Change only if the URL is different for the sidecar container in this microservice. | *istioReadyUrl | NA. Note: Do not change this value unless it is different from the value configured in the global section reference. |
tolerationsSetting | Flag to enable the microservice-level or global-level toleration setting. If set to USE_GLOBAL_VALUE, the global.tolerations configured value is used. If set to ENABLED, the service-specific tolerations setting is used. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, USE_GLOBAL_VALUE, DISABLED |
nodeSelection | Flag to enable the microservice-level or global-level nodeSelection setting. If set to USE_GLOBAL_VALUE, the globally configured nodeSelector value is used. If set to ENABLED, the service-specific nodeSelector setting is used. This is applicable for the v2 apiVersion nodeSelector configuration. | USE_GLOBAL_VALUE | Allowed Values: ENABLED, USE_GLOBAL_VALUE, DISABLED |
tolerations | Configure the tolerations when tolerationsSetting is ENABLED. | [] | NA |
helmBasedConfigurationNodeSelectorApiVersion | Node Selector API version setting | v2 | Allowed Values: v1, v2 |
nodeSelector | NodeSelector configuration when helmBasedConfigurationNodeSelectorApiVersion is set to v2. For v1, you can configure a key:value pair. | {} | Valid key-value pair matching a node for nodeSelection by a pod. v1 Example: zone: Antarctica |
dbConflictResolution.auditException | The list of exception tables to be monitored. | The following value can be used as the default if deployed in EIR mode: ID1TODATA$EX,ID2TODATA$EX,ID3TODATA$EX,ID4TODATA$EX,ID5TODATA$EX,ID6TODATA$EX,ID7TODATA$EX,PROFILE_DATA$EX. If installed in SLF, UDR, or All modes, change the value to: ID1TODATA$EX,ID2TODATA$EX,ID3TODATA$EX,ID4TODATA$EX,ID5TODATA$EX,PROFILE_DATA$EX | NA |
dbConflictResolution.deleteException | The list of exception tables to be cleared without auditing. |
All mode: GROUP_ID_MAP$EX,SLF_GROUP_NAME$EX,PCF_DATA$EX,SUBSCRIPTION$EX,NEF_DATA$EX,EXPOSURE_DATA$EX,NEF_SUBSCRIPTION$EX,AUTHENTICATION_DATA$EX,UDM_AUTH_ENC$EX,CONTEXT_DATA_1$EX,CONTEXT_DATA_2$EX,UDM_DATA1$EX,UDMSUB$EX,UDM_DATA2$EX,UE_UPDATE_CONFIRMATION_DATA$EX,UE_SPECIFIC_DATA_1$EX,UE_SPECIFIC_DATA_2$EX,UDM_SHARED_DATA$EX,PROVISIONED_DATA_1$EX,UDM_SUBSCRIPTION$EX,PCF_AMDATA$EX,PCF_UEPOLICYSET$EX You can configure the following values based on deployment mode:
|
NA |
dbConflictResolution.deleteConsumer | The list of consumer data tables to be audited. |
All mode: PROFILE_DATA,GROUP_ID_MAP,PCF_DATA,NEF_DATA,EXPOSURE_DATA,AUTHENTICATION_DATA,UDM_AUTH_ENC,CONTEXT_DATA_1,CONTEXT_DATA_2,UDM_DATA1 UDMSUB,UDM_DATA2,UE_UPDATE_CONFIRMATION_DATA,UE_SPECIFIC_DATA_1,UE_SPECIFIC_DATA_2,PROVISIONED_DATA_1,EIR_DATA,PCF_AMDATA,PCF_UEPOLICYSET You can configure the following values based on deployment mode:
|
NA |
dbConflictResolution.auditIdxtodata | The list of data id tables to be audited. |
EIR mode: ID1TODATA,ID2TODATA,ID3TODATA,ID4TODATA,ID5TODATA,ID6TODATA,ID7TODATA You can configure the following values based on deployment mode: All/SLF/UDR modes: ID1TODATA,ID2TODATA,ID3TODATA,ID4TODATA,ID5TODATA |
NA |
mateSitesIgwIPList | Comma-separated list of mate site Ingress Gateway provisioning IPs or FQDNs with port. For example, consider a three-site setup installed in the myudr1, myudr2, and myudr3 namespaces with release name ocudr. For site1 (installed in the myudr1 namespace), the configuration must be the same as the default value. For site2 (installed in the myudr2 namespace), the value must list the site1 and site3 endpoints: http://ocudr-ingressgateway-prov.myudr1:80,http://ocudr-ingressgateway-prov.myudr3:80. For site3 (installed in the myudr3 namespace), the value must list the site1 and site2 endpoints: http://ocudr-ingressgateway-prov.myudr1:80,http://ocudr-ingressgateway-prov.myudr2:80 | http://ocudr-ingressgateway-prov.myudr2:80,http://ocudr-ingressgateway-prov.myudr3:80 | Valid comma-separated IPs with port or FQDNs with port. |
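To tie together the database resolution auditor parameters described in Table 3-17, the following is a minimal, illustrative `custom-values.yaml` fragment for site1 of a three-site setup (namespaces myudr1/myudr2/myudr3, release name ocudr, as in the example above). The top-level section key is an assumption; verify the actual section name and nesting against the sample file shipped in Custom_Templates.

```yaml
# Illustrative database resolution auditor fragment (sketch only).
nudr-dbcr-auditor-service:             # section name is an assumption
  enabled: true                        # auditor is disabled by default
  hikari:
    poolsize: 10                       # MySQL connection pool size at startup
    connectionTimeout: 1000            # ms
    minimumIdleConnections: 5          # must be less than poolsize
    idleTimeout: 4000                  # ms
    queryTimeout: -1                   # -1 = no per-query timeout
  # Site1 lists the Ingress Gateway provisioning endpoints of its mate sites:
  mateSitesIgwIPList: http://ocudr-ingressgateway-prov.myudr2:80,http://ocudr-ingressgateway-prov.myudr3:80
```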