4 NRF Features

This section explains the NRF features.

Note:

The performance and capacity of the NRF system may vary based on the call model, feature or interface configuration, and the underlying CNE and hardware environment.

4.1 Ingress Gateway Pod Protection Using Rate Limiting

The Ingress Gateway microservice manages all incoming traffic from Network Functions (NFs). The Ingress Gateway microservice may become vulnerable due to overload conditions during unexpected traffic spikes, uneven traffic distribution, or bursts caused by network fluctuations. To ensure system stability and prevent potential network outages, a protection mechanism for the Ingress Gateway microservice is necessary.

With the implementation of this feature, a rate-limiting mechanism is applied to the Ingress Gateway pods. This mechanism allows you to set the maximum number of requests that a pod can process. When the request rate exceeds this threshold, the pods take action to protect themselves: they either reject the additional requests with a custom error code or allow them, depending on the configuration. Each Ingress Gateway pod has its own Pod Protection Policer, which is applied individually.

The Ingress Gateway's configuration for this feature includes:

  • Fill Rate: Sets the maximum number of requests a pod can handle in a defined time interval. It is also possible to allocate a specific percentage of the fill rate to various path or method combinations.
  • Error Code Profiles: Allows requests to be rejected with a predefined error code profile.
  • Congestion Configuration: Allows the congestion levels to be configured based on CPU utilization.
  • Denied Request Actions: Defines the action to be taken when the fill rate is exceeded. The options are 'CONTINUE', which allows requests to be processed, or 'REJECT', which rejects requests with a specified error code.

    Ingress Gateway rejects all requests that exceed the fill rate unless a specific 'CONTINUE' action is defined in the denied request actions.

The fill rate represents the maximum number of requests a pod can handle when traffic is uniformly distributed over a one-second interval. In bursty traffic scenarios where the traffic is not uniformly distributed within that interval, traffic failures may occur even if the average traffic received is within the fill rate.

The congestion level is checked at a fixed interval (congestionconfig.refreshInterval), and the action is decided based on the congestion level in that interval. The CPU levels might toggle between these intervals, causing the congestion levels to switch frequently. To know the detected congestion level, see the metric oc_ingressgateway_congestion_level_bucket_total.
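For example, assuming Prometheus scrapes the Ingress Gateway metrics and is reachable at a placeholder host, the detected congestion level can be inspected through the Prometheus HTTP query API:

$ # <prometheus-host> is a placeholder for your Prometheus endpoint.
$ curl -s "http://<prometheus-host>:9090/api/v1/query" \
    --data-urlencode "query=oc_ingressgateway_congestion_level_bucket_total"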

The following table describes NRF's configuration for different congestion levels for Ingress Gateway pods:

Table 4-1 Congestion Levels of Pods

Level Value | Abatement Threshold | Onset Threshold | Level Name
1 | 58 | 65 | Normal
2 | 70 | 75 | Danger of Congestion
3 | 80 | 85 | Congested

Level 1 (Normal):

  • The default congestion level of a pod is "0".
  • The pod moves from Normal to the default level when the CPU is less than the abatement threshold of Normal (58).
  • The pod moves from the default level to the Normal state when the CPU exceeds the onset threshold of Normal (65).
  • The pod moves from Danger of Congestion to the Normal state when the CPU is less than the abatement threshold of Danger of Congestion (70).

Level 2 (Danger of Congestion):

  • The pod moves from Danger of Congestion to Normal when the CPU is less than the abatement threshold of Danger of Congestion (70).
  • The pod moves from Normal to the Danger of Congestion state when the CPU exceeds the onset threshold of Danger of Congestion (75).
  • The pod moves from Congested to Danger of Congestion when the CPU is less than the abatement threshold of Congested (80).

Level 3 (Congested):

  • The pod moves from Congested to Danger of Congestion when the CPU is less than the abatement threshold of Congested (80).
  • The pod moves from Danger of Congestion to the Congested state when the CPU exceeds the onset threshold of Congested (85).

Note:

  • If the current level is at L3 (Congested with onset value of 85) and the CPU value is 66, the level would be marked as L1 (Normal). The abatement value is considered only when the switching is between the adjacent levels (L2 (Danger of Congestion) to L1 (Normal) and L3 (Congested) to L2 (Danger of Congestion)).
  • When NRF rejects the requests, it is expected that the consumer NFs reroute to alternate NRFs.
  • Ingress Gateway microservice rate limiting applies only to signaling messages and does not impact internal messages, such as requests from perf-info to the Ingress Gateway microservice and message copy requests.
  • Ingress Gateway microservice performs route validation before applying any other gateway service filters. Consequently, requests that do not match a valid route in routesConfig in Helm are not subject to rate limiting. For more details on routesConfig, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

The following diagram explains the Ingress Gateway Pod Protection Using Rate Limiting mechanism:

Figure 4-1 Ingress Gateway Pod Protection Using Rate Limiting



  1. Consumer NFs send traffic to the Ingress Gateway microservice.
  2. Ingress Gateway pod checks whether the Pod Protection Using Rate Limiting feature is enabled.
    1. If the feature is enabled, Ingress Gateway compares the incoming traffic rate with the fill rate defined for the rate limiting feature.
      1. If the incoming request rate is below the fill rate, all the incoming traffic is processed.
      2. If the incoming request rate exceeds the fill rate, requests up to the fill rate are processed. The requests beyond the fill rate are processed or rejected based on the congestion level of the pod and the denied request actions.
Consider the following examples:
  • If fillRate is configured as 1000:
    • If the incoming Transactions Per Second (TPS) is 1100, congestionLevel is Normal, and the action is configured as CONTINUE, all 1100 requests are processed.
    • If the incoming TPS is 1300, congestionLevel is Danger of Congestion, and the action is configured as CONTINUE, all 1300 requests are processed.
    • If the incoming TPS is 1500, congestionLevel is Congested, and the action is configured as REJECT, 1000 requests are processed successfully and the remaining 500 requests are rejected with the configured error code.
  • If two routes are configured, id1 with 30 percent and id2 with 70 percent, and fillRate is configured as 1000:
    • If the incoming TPS is 1100, congestionLevel is Normal, and the action is configured as CONTINUE, all 1100 requests are processed.
    • If the incoming TPS is 1300, congestionLevel is Danger of Congestion, and the action is configured as CONTINUE, all requests are processed successfully.
    • If the incoming TPS is 1500, congestionLevel is Congested, and the action is configured as REJECT, 300 requests from route id1 and 700 requests from route id2 are processed, and the remaining 500 requests are rejected.

Managing the feature

Prerequisites

Note:

It is recommended to enable the Ingress Gateway Pod Protection Using Rate Limiting feature instead of the Ingress Gateway Pod Protection feature, which will be deprecated in an upcoming release. You must disable the Ingress Gateway Pod Protection feature before enabling this feature. For more information about disabling the existing pod protection feature, see Ingress Gateway Pod Protection.

Enable

  • Configure using REST API: Perform the following configurations for this feature (see the example after this list):
    • Configure the Error code profile using the {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeprofiles API.

      Configure the ingressgateway.errorCodeProfiles in Helm and perform a Helm upgrade. This ingressgateway.errorCodeProfiles value is used when a request is rejected at the Ingress Gateway.

    • Configure the congestion level of the pods based on CPU using the {apiRoot}/nrf/nf-common-component/v1/igw/congestionConfig API.

      For more information about these APIs, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • Configure using CNC Console:
    • Configure the Error code Profiles, as described in Error Code Profile.

      Configure the ingressgateway.errorCodeProfiles in Helm and perform a Helm upgrade. This ingressgateway.errorCodeProfiles value is used when a request is rejected at the Ingress Gateway.

    • Create or update the Congestion Level Configuration, as in Congestion Level Configuration.
  • Enable using REST API: Configure the attributes in the {apiRoot}/nrf/nf-common-component/v1/igw/podProtectionByRateLimiting API to enable the feature.

    For more information about this API, see "Pod Protection By Rate Limiting" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • Enable using CNC Console: Configure the parameters as mentioned in Pod Protection By Rate Limiting.
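The following is a minimal configuration sketch using curl. The endpoint paths are those listed above; the request bodies are illustrative assumptions modeled on the configuration options described in this section, not the authoritative schema. See Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for the exact payloads.

$ # Configure congestion levels based on CPU utilization (body omitted;
$ # see the REST Specification Guide for the schema):
$ curl -X PUT "{apiRoot}/nrf/nf-common-component/v1/igw/congestionConfig" \
    -H "Content-Type: application/json" -d '{ ... }'
$ # Enable the feature. The attribute names below (enabled, fillRate) are
$ # assumptions based on the feature description in this section:
$ curl -X PUT "{apiRoot}/nrf/nf-common-component/v1/igw/podProtectionByRateLimiting" \
    -H "Content-Type: application/json" -d '{"enabled": true, "fillRate": 1000}'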

Observability

Metrics

The following metrics are added for this feature:

  • oc_ingressgateway_http_request_ratelimit_values_total
  • oc_ingressgateway_http_request_ratelimit_reject_chain_duration_histogram_seconds
  • oc_ingressgateway_http_request_ratelimit_reject_chain_length_histogram
  • oc_ingressgateway_http_request_ratelimit_denied_count_total
  • oc_ingressgateway_congestion_cpu_state
  • oc_ingressgateway_congestion_system_state
  • oc_ingressgateway_system_state_duration_percentage
  • oc_ingressgateway_congestion_level_total
  • oc_ingressgateway_congestion_level_bucket_total
  • oc_ingressgateway_congestion_cpu_percentage_bucket

For more information about the metrics, see Ingress Gateway Metrics.

Alerts

Alerts for this feature are added in the Ingress Gateway Pod Protection Using Rate Limiting section.

KPIs

The following KPIs are added for this feature:

  • Allowed Request Rate Per Route Id
  • Total Rejections Chain Length
  • Discard Request Action Traffic Rate
  • Pod Congestion Level

For more information about KPIs, see Ingress Gateway Pod Protection Using Rate Limiting.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.2 NF Profile Size Limit

The Nnrf_NFManagement service allows an NF to register, update, or deregister its profile in the NRF. The NF Profile consists of general parameters of the NF Instance and the parameters of the different NF service instances exposed by the NF Instance. NRF uses the registered profiles for service operations such as NfListRetrieval, NfProfileRetrieval, NfDiscover, and NFAccessToken. When the size of a registered NF Profile is large, the performance of these service operations degrades because larger profiles take longer to process, which can cause higher resource utilization and latency.

This feature allows you to specify the maximum NF Profile size that can be registered with NRF. The NF Profile size is evaluated during the NFRegister or NFUpdate service operation; if the size is within the configured maximum limit, the service operation is allowed. If the profile size exceeds the configured limit, the service operation is rejected.


NRF behavior for each of the service operations is defined as follows:
  • NFRegister or NFUpdate (PUT):
    • NRF receives the NF Profile from the NF and evaluates the size of the NF Profile against the configured maximum allowed size.
    • The NFRegister or NFUpdate service operation succeeds if the NF Profile size is below the configured size.
    • If the size of the NF Profile has breached the configured size, the NFRegister or NFUpdate service operation fails with a configurable error response.
  • NFUpdate (Partial Update):

    NRF receives the update that the NF wants to apply to its NF Profile registered at NRF. NRF evaluates the size of the NF Profile after applying the updates to the registered profile.

    If the size of the NF Profile is below the configured size, the NFUpdate service operation succeeds.

    If the size of the NF Profile has breached the configured size, the NFUpdate service operation fails with a configurable error response.

  • NFHeartBeat:

    The NF Profile size is not evaluated for the NFHeartbeat service operation. NFs can continue to send heartbeats even if their profile size has breached the threshold. This can occur only if the larger profile was registered before the feature was enabled.

The ocnrf_nfProfile_size_limit metric shows the size of the nfProfile registered at NRF. The size of the nfProfile is calculated as per the payload received while registering or updating the nfProfile.

Calculating NF Profile Size

To calculate the size of the nfProfiles payload before registering or updating it, use the following steps:
  1. Save the profile payload in a text file without spaces.
  2. Verify the size of the nfProfile payload using Linux tools like wc.
For example, consider the following payload:

{"nfInstanceId":"ae137ab7-740a-46ee-aa5c-951806d77b0d","nfType":"BSF","nfStatus":"REGISTERED","heartBeatTimer":30,"plmnList":[{"mcc":"310","mnc":"14"}],"sNssais":[{"sd":"4eaaab","sst":124},{"sd":"daaac8","sst":54},{"sd":"f4aaa6","sst":73}],"nsiList":["slice-1","slice-2"],"fqdn":"BSF.oracle.com","interPlmnFqdn":"BSF.oracle.com","ipv4Addresses":["192.168.2.100","192.168.3.100","192.168.2.110","192.168.3.110"],"ipv6Addresses":["2001:0db8:85a3:0000:0000:8a2e:0370:7334"],"capacity":2000,"load":0,"locality":"US East","bsfInfo":{"ipv4AddressRanges":[{"start":"192.168.0.0","end":"192.168.0.100"}],"ipv6PrefixRanges":[{"start":"2001:db8:8513::/48","end":"2001:db8:8513::/96"}],"dnnList":["abc","DnN-OrAcLe-32","ggsn-cluster-A.mnc012.mcc345.gprs","lemon"]},"nfServiceList":{"ae137ab7-740a-46ee-aa5c-951806d77b0d":{"serviceInstanceId":"ae137ab7-740a-46ee-aa5c-951806d77b0d","serviceName":"nbsf-management","versions":[{"apiVersionInUri":"v1","apiFullVersion":"1.0.0","expiry":"2018-12-03T18:55:08.871Z"}],"scheme":"http","nfServiceStatus":"REGISTERED","fqdn":"BSF.oracle.com","interPlmnFqdn":"BSF.oracle.com","apiPrefix":"","capacity":500,"load":0,"supportedFeatures":"80000000"}}}
For the above nfProfile, the size is calculated as 1177 bytes.

$ cat bsf.txt |wc -c 
1177 
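If jq is available, the same measurement can be performed in a single step. This sketch compacts the JSON (removing insignificant whitespace) and strips the trailing newline emitted by jq so that it is not counted:

$ jq -c . bsf.txt | tr -d '\n' | wc -c
1177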

Note:

  • The {apiRoot}/nrf-configuration/v1/nfManagementOptions API allows you to configure the maximum profile size per nfType. If it is not configured for a specific nfType, the evaluation is performed using "ALL_NF_TYPES". This is a mandatory configuration.
  • The calculated total size of the NF Profile may vary due to updates made by the NRF. The size is determined based on the profile after the NRF has applied its modifications. For instance, the NRF may update certain attributes, such as expiry, which follow a timestamp format.

Managing the feature

Enable

  • Enable Using REST API: Set the value of featureStatus to ENABLED in the {apiRoot}/nrf-configuration/v1/nfManagementOptions URI (see the example after this list). For more details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable Using CNC Console: Set the value of Feature Status to ENABLED in the NF Management Options.
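A minimal REST sketch using curl is shown below. The featureStatus attribute is documented above; the resource may carry additional mandatory attributes, such as the per-nfType maximum profile size, so consult the REST Specification Guide for the complete payload:

$ curl -X PUT "{apiRoot}/nrf-configuration/v1/nfManagementOptions" \
    -H "Content-Type: application/json" -d '{"featureStatus": "ENABLED"}'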

Observability

Metrics

The ocnrf_nfProfile_size_limit_breached metric is added in the NRF NF Metrics section.

Alerts

There are no alerts introduced for this feature.

KPIs

The "NF Profile Size Limit Breached" KPI is added in the NRF Service KPIs section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.3 Egress Gateway Pod Throttling

The Egress Gateway microservice handles all outgoing traffic from NRF to other NFs. In the event of unexpected high traffic, uneven traffic distribution, or traffic bursts due to network fluctuations, the Egress Gateway pods may get overloaded, which may further impact system latency and performance. The Egress Gateway microservice must be protected from these conditions to prevent a high impact on the outgoing traffic.

With the implementation of this feature, each Egress Gateway pod monitors its incoming traffic, and if the traffic exceeds the defined capacity, the excess traffic is rejected. The traffic capacity is applied at each pod and is applicable to all incoming requests, irrespective of the message type.

Egress Gateway Pod Throttling Mechanism

The following image describes the pod throttling mechanism when the requests received from NRF microservices, such as NFDiscovery, NFRegistration, NFSubscription, and NFAccessToken, exceed the defined capacity of the Egress Gateway microservice.

Figure 4-2 Egress Gateway Pod Throttling Mechanism



  1. NRF microservice sends the requests to the Egress Gateway microservice.
  2. Egress Gateway microservice evaluates the number of requests received from the other NRF microservices:
    1. If the number of incoming requests is 2500 and the maximum defined limit is 2700, the Egress Gateway microservice processes all 2500 requests.
    2. If the number of incoming requests is 3000 and the maximum defined limit is 2700, the Egress Gateway microservice processes only 2700 requests and rejects the additional 300 requests with response code 429 to the NRF microservice.

Note: When the NRF Discovery microservice receives an overload response from the Egress Gateway microservice, it does not retry or reroute the request. An error response is generated based on the specific feature configurations, such as NRF Forwarding, Subscriber Location Function, and Roaming.

Managing the Egress Gateway Pod Throttling

Enable

This feature can be enabled or disabled at the time of NRF deployment using Helm parameters.

Perform the following configuration for this feature using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set the value of egressgateway.podLevelMessageThrottling.enabled to true under the Egress Gateway Microservice parameters section (see the example after these steps).

    Note:

    • This feature is enabled by default.
    • This feature also uses the following parameters which are read-only:
      • egressgateway.podLevelMessageThrottling.duration
      • egressgateway.podLevelMessageThrottling.requestLimit
      • global.egwGeneratedErrorCheckDuringOverLoad
      • global.egwResponseCodesDuringOverLoad
      • global.noRetryDuringEgwOverload

    For more information about the above parameters, see the "Egress Gateway Microservice" and "Global Parameters" sections in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Run Helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
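For example, you can verify the flag in the values file and then apply it. The release name, chart reference, and namespace below are placeholders, and the indentation of the excerpt is illustrative:

$ grep -A1 "podLevelMessageThrottling" ocnrf_custom_values_25.1.200.yaml
    podLevelMessageThrottling:
      enabled: true
$ helm upgrade <release-name> <ocnrf-chart> -n <namespace> \
    -f ocnrf_custom_values_25.1.200.yaml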

Observability

Metrics

The following metrics are added in the Egress Gateway Metrics section:

  • oc_egressgateway_podlevel_throttling_allowed_total
  • oc_egressgateway_podlevel_throttling_discarded_total

The route_id dimension is added in NRF Metrics.

Alerts

Alerts for this feature are added in the Egress Gateway Pod Throttling section. For more information about alerts, see the NRF Alerts section.

KPIs

The Egress Gateway Pod Throttling KPI is added in the Egress Gateway Pod Throttling section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alert persists, perform the following:

1. Collect the logs and troubleshooting information: For more information on how to collect logs and troubleshooting information, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.4 Traffic Segregation

This feature provides an option for traffic segregation at NRF based on traffic types. Within a Kubernetes cluster, traffic segregation can divide applications or workloads into distinct sections such as Operations Administration and Maintenance (OAM), Service Based Interface (SBI), Kubernetes control traffic, and so on. The Multus Container Network Interface (CNI) plugin for Kubernetes enables attaching multiple network interfaces to pods to help segregate traffic to/from NRF edge microservices like Ingress and Egress Gateway microservices.

This feature addresses the challenge of logically separating IP traffic of different traffic types, which are typically handled through a single network (Kubernetes overlay). The new functionality ensures that critical networks are not cross-connected or sharing the same routes, thereby preventing network congestion.

The feature requires configuration of separate networks, Network Attachment Definitions (NADs), and the Cloud Native Load Balancer (CNLB). These configurations are crucial for facilitating Ingress and Egress Gateway traffic segregation and for optimizing load distribution in NRF.

Note:

The Traffic Segregation feature is only available in NRF if OCCNE is installed with CNLB.

Cloud Native Load Balancer (CNLB)

CNE provides CNLB for managing the ingress and egress network as an alternative to the existing Load Balancing of Virtual Machine (LBVM), lb-controller, and egress-controller solutions. You can enable or disable this feature only during a fresh CNE installation. When this feature is enabled, CNE automatically uses CNLB to control ingress traffic. To manage the egress traffic, you must preconfigure the egress network details in the cnlb.ini file before installing CNE.

For more information about enabling and configuring CNLB, see Oracle Communications Cloud Native Core, Cloud Native Environment User Guide, and Oracle Communications Cloud Native Core, Cloud Native Environment Installation, Upgrade, and Fault Recovery Guide.

Network Attachment Definitions for CNLB

A Network Attachment Definition (NAD) is a resource used to set up a network attachment, in this case, a secondary network interface to a pod. NRF supports two types of CNLB NADs (see the verification example after this list):

  1. Ingress Network Attachment Definitions

    Ingress NADs are used to handle inbound traffic only. This traffic enters the CNLB application through an external interface service IP address and is routed internally using interfaces within CNLB networks.

    Naming Convention: nf-<service_network_name>-int

  2. Egress Only Network Attachment Definitions

    Egress Only NADs enable outbound traffic only. An NRF Egress Gateway pod can initiate traffic and route it through a CNLB application, translating the source IP address to an external egress IP address. An egress NAD contains network information to create interfaces for NRF Egress Gateway pods and routes to external subnets.

    • Prerequisites:
      • Ingress NADs are already created for the desired internal networks.
      • Destination (egress) subnet addresses are known beforehand and defined under the cnlb.ini file's egress_dest variable to generate NADs.
      • The use of an Egress NAD on a deployment can be combined with Ingress NADs to route traffic through specific CNLB apps.
    • Naming Convention: nf-<service_network_name>-egr
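Before referencing the NADs from NRF, you can confirm that they exist in the cluster. The following is a sketch that uses the short name net-attach-def of the Multus NetworkAttachmentDefinition CRD; the namespace is a placeholder:

$ kubectl get net-attach-def -n <nrf-namespace>
$ # Expected entries follow the naming conventions described above, for
$ # example, nf-<service_network_name>-int and nf-<service_network_name>-egr.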

Ingress Traffic Segregation

The following image describes the traffic segregation at Ingress Gateway.

Figure 4-3 Ingress Traffic Segregation



Here AMF-1 and AMF-2 send messages to the Ingress Gateway of the NRF. Internal traffic between NRF Backend and the Ingress Gateway travels through the default interface, eth0, which is connected to the pods. The signaling traffic between AMF-1 and NRF, as well as between AMF-2 and NRF, travels through a new interface, veth1, created using the Multus plugin. The Ingress service IPs and ports will be set up in Cloud Native Load Balancers (CNLBs), which will direct traffic to the Ingress Gateway. This gateway will then handle the requests and route them to the NRF.

Egress Traffic Segregation

The following image describes the traffic segregation at Egress Gateway.

Figure 4-4 Egress Traffic Segregation



Here NRF is connected to SLF and another NRF, which are on different PLMNs and different networks. Internal traffic between NRF Backend and the NRF Egress Gateway passes through the default interface, eth0, which is connected to the pods. The signaling traffic between NRF and SLF, as well as between NRF and NRF-2, travels through a new interface, veth1, created using the Multus plugin. Egress IPs will be defined at the CNLBs, which will perform Source Network Address Translation (SNAT) on the IP addresses in requests sent to SLF and NRF-2.

Managing Traffic Segregation

Enable

This feature is disabled by default. To enable this feature, you need to configure the network attachment annotations in the ocnrf_custom_values_25.1.200.yaml file.

Configuration

For more information about Traffic Segregation configuration, see the "Configuring Traffic Segregation" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

There are no Metrics, KPIs, or Alerts available for this feature.

Maintain

To resolve any alerts at the system or application level, see NRF Alerts section. If the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.5 Support for Server Header

NRF handles various requests from consumer Network Functions (NFs) and other network entities over the HTTP protocol. On receiving these requests, NRF validates and processes them before responding. When NRF sends an error response, it includes the source of the error to help troubleshoot and take corrective measures.

This feature adds support for the server header in NRF responses, which contains information about the origin of an error response.

With this feature, NRF adds or passes the server header in the error responses as described in the following scenarios:

  • When NRF acts as a standalone server.
  • In NRF Forwarding scenarios, NRF propagates the server header received from peer NRF to the Consumer NF without any changes.
  • In NRF Roaming scenarios, NRF propagates the server header received from SEPP or peer NRFs to the Consumer NF without any changes.

This enhancement improves NRF’s error-handling scenarios by determining the originator of the error response for better troubleshooting.

When the server header feature is enabled, NRF adds the server header while generating the error responses.

The server header format is as follows:

<NRF>-<NrfInstanceId>

Where,

  • <NRF> is the NF type.
  • <NrfInstanceId> is the unique identifier of the NRF instance.

For example: NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c

The following image illustrates the NRF behavior while processing the error responses:

Figure 4-5 Server Header Details



  1. Consumer or Client NF sends requests to NRF.
  2. NRF receives the requests from the Consumer or Client NF and validates them for further processing.
  3. If NRF sends an error response while processing an incoming request, it adds the server header {NRF-<NrfInstanceId>} to the error response and sends it back to the Consumer or Client NF, as shown in the example after these steps.
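For example, the server header can be observed directly in an error response. This is a sketch: the host, port, URI, and instance ID are illustrative, and the exact status code depends on the request:

$ curl -i "http://<nrf-ingress-host>:<port>/nnrf-nfm/v1/nf-instances/<unknown-nfInstanceId>"
HTTP/2 404
server: NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
...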

Managing Support for Server Header

Enable

This feature is disabled by default. This feature can be enabled at global level or at per-route level.
  • Enable using REST API:
    • Global: Set the value of the enabled attribute to true in the {apiRoot}/nrf/nf-common-component/v1/igw/serverheaderdetails configuration API (see the example after this list). For more information about the API, see the "Server Header Details" section in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Route level: Set the value of the enabled attribute to true in the {apiRoot}/nrf/nf-common-component/v1/igw/routesconfiguration configuration API. For more information about the API, see "Routes Configuration" section in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console:
    • Global: Switch Enabled on the Server Header Details page to enable the feature. For more information about enabling the feature using CNC Console, see Server Header Details.
    • Route level: Switch Enabled on the Routes Configuration page to enable the feature. For more information about enabling the feature using CNC Console, see Routes Configuration.
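The following is a minimal REST sketch for the global enablement using curl. The body shows only the enabled attribute discussed above; the resource may carry additional attributes, so consult the REST Specification Guide for the complete payload:

$ curl -X PUT "{apiRoot}/nrf/nf-common-component/v1/igw/serverheaderdetails" \
    -H "Content-Type: application/json" -d '{"enabled": true}'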

For more information about server header propagation in service-mesh deployment, see Adding Filters for Server Header Propagation.

Configure

Perform the configurations in the following sequence to configure this feature using REST API or CNC Console:

  1. Configure {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeserieslist to update the errorcodeserieslist, which lists the configurable exceptions or errors for error scenarios in the Ingress Gateway.
  2. Perform the global configuration using the {apiRoot}/nrf/nf-common-component/v1/igw/serverheaderdetails API.
  3. (Optional) Perform the route-level configuration based on the id attribute using the {apiRoot}/nrf/nf-common-component/v1/igw/routesconfiguration API.

    Note:

    • If the feature is not configured at route level, the global configuration is used.
    • Route-level configuration will take precedence over the global configuration.

Observability

Metrics

There are no metrics added for this feature.

Alerts

There are no alerts added or updated for this feature.

KPIs

There are no KPIs related to this feature.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.6 Support for TLS

NRF uses Hypertext Transfer Protocol Secure (HTTPS) to establish secured connections with Consumer NFs and Producer NFs. These connections are encrypted using Transport Layer Security (TLS). TLS comprises the following components:
  • Handshake Protocol: Exchanges the security parameters of a connection. Handshake messages are supplied to the TLS record layer.
  • Record Protocol: Receives the messages to be transmitted, fragments the data into multiple blocks, secures the records, and then transmits the result. Received data is delivered to higher-level peers.

From Release 24.2.0 onwards, NRF supports TLS 1.3 for all Consumer NFs, Producer NFs, Data Director, SBI Interfaces, and interfaces where TLS 1.2 was supported. TLS 1.2 will continue to be supported.

TLS Handshake

This section describes the differences between TLS 1.2 and TLS 1.3 and the advantages of TLS 1.3 over TLS 1.2 and earlier versions.

Figure 4-6 TLS Handshake



TLS 1.2

Step 1: The connection or handshake starts when the client sends a “client hello” message to the server. This message consists of cryptographic information such as supported protocols and supported cipher suites. It also contains a random value or random byte string.

Step 2: To respond to the “client hello” message, the server sends the “server hello” message. This message contains the CipherSuite that the server has selected from the options provided by the client. The server also sends its certificate along with the session ID and another random value.

Step 3: The client verifies the certificate sent by the server. When the verification is complete, the client sends a secret byte string encrypted using the public key of the server certificate.

Step 4: When the server receives the secret, both the client and server generate a master key along with session keys (ephemeral keys). These session keys are used to symmetrically encrypt the data.

Step 5: The client sends an “HTTP Request” message to the server to enable the server to switch to symmetric encryption using the session keys.

Step 6: To respond to the client’s “HTTP Request” message, the server does the same and switches its security state to symmetric encryption. The server concludes the handshake by sending an HTTP response.

Step 7: The client-server handshake is completed in two round-trips.

TLS 1.3

Step 1: The connection or handshake starts when the client sends a “client hello” message to the server. The client sends the list of supported cipher suites. The client also sends its key share for that particular key agreement protocol.

Step 2: To respond to the “client hello” message, the server sends the key agreement protocol that it has chosen. The “Server Hello” message comprises the server key share, server certificate, and the “Server Finished” message.

Step 3: The client verifies the server certificate, generates keys as it has the key share of the server, and sends the “Client Finished” message along with an HTTP request.

Step 4: The server completes the handshake by sending an HTTP response.

The following digital signature algorithms are supported in TLS handshake:

Table 4-2 Digital Signature Algorithms

Algorithm | Key Size (Bits) | Elliptic Curve (EC)
RS256 (RSA) | 2048 | NA
RS256 (RSA) | 4096 (recommended) | NA
ES256 (ECDSA) | NA | SECP384r1 (recommended)

Comparison Between TLS 1.2 and TLS 1.3

The following table provides a comparison of TLS 1.2 and TLS 1.3:

Table 4-3 Comparison of TLS 1.2 and TLS 1.3

Feature: TLS Handshake
  • TLS 1.2: The initial handshake is carried out in clear text. A typical handshake involves the exchange of 5 to 7 packets.
  • TLS 1.3: The initial handshake is carried out along with the key share. A typical handshake involves the exchange of up to 3 packets.

Feature: Cipher Suites
  • TLS 1.2: Less secure cipher suites, using SHA-256 and SHA-384 hashing:
    • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
    • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • TLS 1.3: More secure cipher suites. Apart from all the ciphers supported for TLS 1.2, the following additional ciphers are supported only for TLS 1.3:
    • TLS_AES_128_GCM_SHA256
    • TLS_AES_256_GCM_SHA384
    • TLS_CHACHA20_POLY1305_SHA256

Feature: Round-Trip Time (RTT)
  • TLS 1.2: High RTT during the TLS handshake.
  • TLS 1.3: Low RTT during the TLS handshake.

Feature: Perfect Forward Secrecy (PFS)
  • TLS 1.2: Does not support PFS.
  • TLS 1.3: Supports PFS. PFS ensures that each session key is completely independent of long-term private keys, which are keys used for an extended period to decrypt encrypted data.

Feature: Privacy
  • TLS 1.2: Less secure, as the ciphers used are weak.
  • TLS 1.3: More secure, as the ciphers used are strong.

Feature: Performance
  • TLS 1.2: High latency and a less responsive connection.
  • TLS 1.3: Low latency and a more responsive connection.

Advantages of TLS 1.3

TLS 1.3 handshake offers the following improvements over earlier versions:

  • All handshake messages after the ServerHello are encrypted.
  • The handshake is more efficient, requiring fewer round trips than TLS 1.2, and uses faster cryptographic algorithms.
  • Security is better than TLS 1.2, as known vulnerabilities in the handshake process are addressed.
  • Data compression is removed.

TLS Version Used

When NRF is acting as a client or a server, the client and server can support different TLS versions. The following table shows which TLS version is used for various combinations of TLS versions supported by the client and the server.

Table 4-4 TLS Version Used

Client Support | Server Support | TLS Version Used
TLS 1.2, TLS 1.3 | TLS 1.2, TLS 1.3 | TLS 1.3
TLS 1.3 | TLS 1.3 | TLS 1.3
TLS 1.3 | TLS 1.2, TLS 1.3 | TLS 1.3
TLS 1.2, TLS 1.3 | TLS 1.3 | TLS 1.3
TLS 1.2 | TLS 1.2, TLS 1.3 | TLS 1.2
TLS 1.2, TLS 1.3 | TLS 1.2 | TLS 1.2
TLS 1.3 | TLS 1.2 | None. NRF sends an error message. For more information about the error message, see the "Troubleshooting TLS Version Compatibilities" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
TLS 1.2 | TLS 1.3 | None. NRF sends an error message. For more information about the error message, see the "Troubleshooting TLS Version Compatibilities" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

Note:

  • If Egress Gateway is deployed with both TLS versions (TLS 1.2 and TLS 1.3), then Egress Gateway, as the client, sends both TLS versions in the client hello message during the handshake, and the server decides which version to use.
  • If Ingress Gateway is deployed with both TLS versions (TLS 1.2 and TLS 1.3), then Ingress Gateway, as the server, uses the TLS version received from the client and confirms it in the server hello message during the handshake.
  • This feature does not work in ASM deployment.

Managing Support for TLS 1.2 and TLS 1.3

Enable

This feature can be enabled or disabled at the time of NRF deployment using the following Helm Parameters:

  • enableIncomingHttps - This flag enables or disables HTTPS/2.0 (secured TLS) in the Ingress Gateway. If the value is set to false, NRF does not accept any HTTPS/2.0 (secured) traffic. If the value is set to true, NRF accepts HTTPS/2.0 (secured) traffic.

    Note: Do not change the &enableIncomingHttpsRef reference variable.

    For more information on enabling this flag, see the "Enabling HTTPS at Ingress Gateway" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  • enableOutgoingHttps - This flag enables or disables HTTPS/2.0 (secured TLS) in the Egress Gateway. If the value is set to false, NRF does not send HTTPS/2.0 (secured) traffic. If the value is set to true, NRF sends HTTPS/2.0 (secured) traffic.

    For more information on enabling this flag, see the "Enabling HTTPS at Egress Gateway" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

You can configure this feature using Helm parameters.

The following parameters in the Ingress Gateway and Egress Gateway microservices must be customized to support TLS 1.2 or TLS 1.3.

  1. Generate HTTPS certificates for both the Ingress and Egress Gateways. Ensure that the certificates are correctly configured for secure communication. After generating the certificates, create a Kubernetes secret for each Gateway (Ingress and Egress). Then, configure these secrets to be used by the respective Gateways. For more information about HTTPS configuration, generating certificates, and creating secrets, see the "Configuring Secrets for Enabling HTTPS" section in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  2. Open the ocnrf_custom_values_25.1.200.yaml file.
  3. Configure the following parameters under Ingress Gateway and Egress Gateway parameters section.
    • Parameters required to support TLS 1.2:
      • service.ssl.tlsVersion indicates the TLS version.
      • cipherSuites indicates supported cipher suites.
      • allowedCipherSuites indicates allowed cipher suites.
      • messageCopy.security.tlsVersion indicates the TLS version for establishing communication between Kafka and NF when security is enabled.
    • Parameters required to support TLS 1.3:
      • service.ssl.tlsVersion indicates the TLS version.
      • cipherSuites indicates the supported cipher suites.
      • allowedCipherSuites indicates the allowed cipher suites.
      • messageCopy.security.tlsVersion indicates the TLS version for establishing communication between Kafka and NF when security is enabled.
      • clientDisabledExtension is used to disable the extension sent by messages originated by clients during the TLS handshake with the server.
      • serverDisabledExtension is used to disable the extension sent by messages originated by servers during the TLS handshake with the client.
      • tlsNamedGroups is used to provide a list of values sent in the supported_groups extension. These are comma-separated values.
      • clientSignatureSchemes is used to provide a list of values sent in the signature_algorithms extension.

    For more information about configuring the values of the above-mentioned parameters, see the "Ingress Gateway Microservice" and "Egress Gateway Microservice" sections in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  4. Save the file.
  5. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. Run Helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
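After the gateways are deployed with the desired TLS configuration, the negotiated version can be verified from a test client. The following is a sketch using openssl s_client; the host and port are placeholders:

$ openssl s_client -connect <ingress-gateway-host>:<https-port> -tls1_3 \
    </dev/null 2>/dev/null | grep -E "Protocol|Cipher"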

Note:

  • NRF does not reorder cipher suites by priority. To select cipher suites based on priority, you must list the cipher suites in decreasing order of priority.
  • NRF does not reorder supported groups by priority. To select a supported group based on priority, you must list the supported group values in decreasing order of priority.
  • If you want to provide values for the signature_algorithms extension using the clientSignatureSchemes parameter, the following comma-separated values must be provided to deploy the services:
    • rsa_pkcs1_sha512
    • rsa_pkcs1_sha384
    • rsa_pkcs1_sha256
  • The mandatory extensions as listed in RFC 8446 cannot be disabled using the clientDisabledExtension attribute on the client or using the serverDisabledExtension attribute on the server side. The following is the list of the extensions that cannot be disabled:
    • supported_versions
    • key_share
    • supported_groups
    • signature_algorithms
    • pre_shared_key
Observe

Metrics

The following metrics are added in the NRF Gateways Metrics section:

  • oc_ingressgateway_incoming_tls_connections
  • oc_egressgateway_outgoing_tls_connections
  • security_cert_x509_expiration_seconds

For more information about metrics, see NRF Metrics section.

Alerts

Alerts for this feature are added in the NRF Alerts section.

Note:

An alert is raised for every certificate that is about to expire within the configured time frame. For example, NRF supports both RSA and ECDSA, so two certificates are configured. If only the RSA certificate is about to expire in 6 months, only one alert is raised; if both certificates are about to expire, two alerts are raised. Alerts can be differentiated using the "serialNumber" tag.

For example, serialNumber=4661 is used for RSA and serialNumber=4662 is used for ECDSA.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alert persists, perform the following:

  1. Collect the logs and troubleshooting information: For more information on how to collect logs and troubleshooting information, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.7 Error Messages Enhancement

NRF sends error messages when there is an error scenario while performing any of the service operations. In order to identify the cause of these errors and fix them quickly, NRF supports the following enhancements:

4.7.1 Error Response Enhancement for NRF

NRF receives message requests from Consumer NFs and may send error responses while processing them. The error response generated by NRF includes the error description in the detail attribute of ProblemDetails. This description is used to identify the issue and troubleshoot it.

The sample ProblemDetails before the error details enhancement is as follows:

{
 "title": "Not Found",
 "status": 404,
 "detail": "NRF Forwarding Error Occurred",
 "cause": "UNSPECIFIED_MSG_FAILURE"
}

With the enhanced error response mechanism, NRF sends additional information, such as the server FQDN, NF service name, vendor NF name, and application error ID, in the detail attribute of ProblemDetails. This enhancement provides more information about the error and helps troubleshoot it.

The new format of the ProblemDetails is as follows:

{
 "title": "<title of Problem Details>",
 "status": "<status code of Problem Details>",
 "detail": "<Server-FQDN>:<NF Service>:<Application specific problem detail>:<VendorNF>-<App-Error-Id>",
 "cause": "<cause value of Problem Details>"
}

Example:

{
 "title": "Not Found",
 "status": 404,
 "detail": "NRF-d5g.oracle.com: Nnrf_NFManagement: NRF Forwarding Error Occurred: ONRF-REG-NFPRFWD-E0299; 404; NRF-d5g.oracle.com: Nnrf_NFManagement: Could not find Profile records for NfInstanceId=8cfc6828-bd5d-4a3a-93d4-6bdd848d6bab: ONRF-REG-NFPR-E1004",
 "cause": "UNSPECIFIED_MSG_FAILURE"
}

The updated format of the detail attribute is sent only in error responses generated by the NRF application. In all other scenarios (such as responses generated by the underlying stack), the detail attribute does not provide the information in the updated format.

Note:

All responses sent by the NRF application towards the peer follow this format. Internal microservice responses and OAM responses may or may not follow this format.

The following table describes the detail attribute of the ProblemDetails parameter:

Table 4-5 Detail Attributes of the ProblemDetails

Attribute Description
<Server-FQDN> Indicates the NRF FQDN. It is obtained from the nfFqdn Helm parameter.

Sample Value: NRF-d5g.oracle.com

<NF Service> Indicates the service name. This service name can be 3GPP defined NRF service name or an internal one.

Possible Values:

  • Nnrf_NFManagement: This is used by nfregistration, nfsubscription, and nrfauditor microservices.
  • Nnrf_NFDiscovery: This is used by nfdiscovery microservice.
  • Nnrf_AccessToken: This is used by nfaccesstoken microservice.
  • Nnrf_Internal_CDS: This is used by nrfcachedata microservice.
  • Nnrf_Internal_Artisan: This is used by nrfauditor microservice.
  • Nnrf_Internal_Config: This is used by nrfconfiguration microservice for NRF specific configurations.
  • ingressgateway: This is used by ingressgateway.
  • egressgateway: This is used by egressgateway.
  • alternateroute: This is used by alternate-route.
  • commonconfig: This is used by nrfconfiguration microservice for NRF common configurations such as ingressgateway, egressgateway.
<Application specific problem detail>

Indicates a short description of the error.

Sample Value: NRF Forwarding Error occurred

<VendorNF>

Indicates the Oracle NF Type.

For NRF, this value is always ONRF.

Here, O is the prefixed value. NF type is obtained from the nfType Helm parameter.
<App-Error-Id>

Indicates the application specific error Id. This Id is a unique error Id which includes the Microservice ID, Category, and the Error Code.

The format of <App-Error-Id> is as follows: <Microservice ID>-<Category>-<Error Code>

Where,
  • <Microservice ID>: Indicates the NRF microservice ID. For more information about NRF microservice IDs, see Table 4-6.

    Sample Value: REG is the Microservice ID for nfregistration.

  • <Category>: Indicates a unique grouping of errors. The category is optional in the application error ID. For detailed information about the available categories, see Table 4-7.

    Sample Value: NFPRFWD for NFProfileRetrieval service operation during forwarding.

  • <Error Code>: Indicates the error code in the Exxxx format.

    Sample Value: E0299

Sample Value:
  • With category: REG-NFPRFWD-E0299

    where, REG is the Microservice ID, NFPRFWD is the Category, and E0299 is the Error Code.

  • Without category: ONRF-IGW-E002

    where, ONRF is the vendor NF, IGW is the Microservice ID, and E002 is the Error Code.

The following table describes the list of available microservices in NRF.

Table 4-6 Microservice IDs List

Microservice ID Microservice Name
REG nfregistration
SUB nfsubscription
DIS nfdiscovery
ACC nfaccesstoken
CDS nrfcachedata
AUD nrfauditor
CFG nrfconfiguration
ART nrfartisan
IGW ingressgateway
EGW egressgateway
ARS alternate-route

The following table describes the list of available categories in NRF.

Table 4-7 List of Categories

Category Id Category Description
ACAUTHN Authentication during Access Token service operation.
ACAUTHZ Authorization during Access Token service operation.
ACCOPT Access Token configuration.
ACFWD Access Token service operation during forwarding.
ACROAM Access Token service operation during roaming.
ACTOK Access Token service operation.
AUTHOPT Authentication options configuration.
CNCCDTN CNCC Controller configuration.
CSHTOPT Control shutdown options configuration.
DEREG NFDeregister service operation.
DISAUTH NFDiscover service operation during authorization.
DISC NFDiscover service operation.
DISCOPT Discovery options configuration.
DISCSLF SLF for NFDiscover.
DISCFWD NFDiscover service operation during forwarding.
DISROAM NFDiscover service operation during roaming.
FULLUPD NFUpdate with Full Replacement (PUT) service operation.
FWDOPT Forwarding options configuration.
GENOPT General options configuration.
GROPT GeoRedundancy options configuration.
GTSUB Get subscription.
GWTHFOR Growth configuration for forwarding.
GWTHOPT Growth options configuration.
HB NFHeartbeat service operation.
HBAUD Heart beat for NRF Auditor service operation.
HBRMAUD Remote heart beat for NRF Auditor.
INVSROP UnSupported HTTP Method for service operation.
LOGOPT Logging options configuration.
MGMTOPT Management options configuration.
NFDTLS Fetching NF State data
NFINSTS Get remote discovery instance and remote instance data.
NFLR NFListRetrieval service operation.
NFPR NFProfileRetrieval service operation.
NFPRAUD NRF Auditor service operation for NF profiles.
NFPRFWD NFProfileRetrieval service operation during forwarding.
NPTRDRG NRF Artisan for notification event to DNS NAPTR on deleting AMF profile.
NPTROPT NAPTR options configuration.
NPTRREC Status API for DNS NAPTR record.
NPTRREG NRF Artisan for NAPTR registration.
NPTRUPD NRF Artisan for DNS NAPTR update.
NRFCFG Fetching NRF Configuration.
NRFOPER NRF operation state configuration.
PARTUPD NFUpdate with Partial Replacement (PATCH) service operation.
POPROPT Pod protection options configuration.
REGN NFRegister service operation.
RELDVER NRF Auditor service operation for reloading site and network version info.
ROAMOPT Roaming options configuration.
RTNPTR Manual cache update of DNS NAPTR records.
SCRNOPT Screening options configuration.
SCRNRUL Screening rules configuration.
SLFOPT SLF options configuration.
SNOTIF NFStatusNotify service operation.
SUBAUD NRF Auditor service operation for subscription.
SUBDTLS Retrieve subscription details based on query attributes for configuring the non-signaling API.
SUBFWD NFStatusSubscribe service operation during forwarding.
SUBRAUD NRF Auditor service operation for remote subscription.
SUBROAM NFStatusSubscribe service operation during roaming.
SUBSCR NFStatusSubscribe service operation.
UNKSROP UnSupported media type.
UNSFWD NFStatusUnsubscribe service operation during forwarding.
UNSROAM NFStatusUnsubscribe service operation during roaming.
UNSUBSC NFStatusUnsubscribe service operation.
UPDSUBS NFStatusSubscribe update (PATCH) service operation.
UPSFWD NFStatusSubscribe update (PATCH) service operation during forwarding.
UPSROAM NFStatusSubscribe update (PATCH) service operation during roaming.

The error response codes are available in the Error Response Code Details section.

Managing the Error Response Enhancements for NRF

This section explains the procedure to enable and configure the feature.

Enable

This feature is an enhancement to the existing detail attribute in the ProblemDetails and hence there is no specific flag to enable or disable this feature.

Configure

You can configure the feature using Helm.

Helm

Perform the following configuration for this feature using the Helm:
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Configure the following parameters under global parameters:
    • global.nfFqdn: Indicates the NRF FQDN.
    • global.nfType: Indicates the NF type.
    • global.maxDetailsLength: Determines the maximum length of the error string compiled by NRF in the detail field.
  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Run Helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
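After updating the file, you can confirm the configured values before the install or upgrade. The output below is illustrative; nfFqdn reuses the sample FQDN from Table 4-5:

$ grep -E "nfFqdn|nfType|maxDetailsLength" ocnrf_custom_values_25.1.200.yaml
  nfFqdn: NRF-d5g.oracle.com
  nfType: NRF
  maxDetailsLength: <maximum length of the detail string>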

Observability

Metrics

There are no new metrics added for this feature.

Alerts

There are no alerts added or updated for this feature.

KPIs

There are no new KPIs related to this feature.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.7.2 Error Log Messages Enhancement

NRF uses logs to record system events along with their date and time of occurrence. Logs also provide important details about the chain of events that could have led to an error or problem. These details are used to identify the source of the error and troubleshoot it. NRF supports various log levels that can be set for a microservice with any of the following values:

  • TRACE
  • DEBUG
  • INFO
  • WARN
  • ERROR

For more information on details of these log levels, see the “Log Levels” section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

With this feature, NRF adds additional information to the existing "ERROR" log messages. This additional information provides more details about the error, which can help identify the problem details, the entity that generated the error, and subscriber information.

The Error Log Messages Enhancement feature adds the following additional attributes to the existing "ERROR" logs with appropriate values during the failure scenarios:

  • errorStatus
  • errorTitle
  • errorDetails
  • errorCause
  • sender
  • receiver
  • subscriberId

For more information about the new attributes, see Table 4-8.

Sample enhanced log details for the NRF Subscription Microservice:

{
  "instant": {
    "epochSecond": 1717077558,
    "nanoOfSecond": 256932930
  },
  "thread": "boundedElastic-8",
  "level": "ERROR",
  "loggerName": "com.oracle.cgbu.cne.nrf.routes.SubscriptionHandler",
  "message": "Response sent: 500 Subscription global limit breached for uri : http://ocnrf-ingressgateway.nrf1-ns/nnrf-nfm/v1/subscriptions with the problem cause : INSUFFICIENT_RESOURCES and problem details : NRF-d5g.oracle.com: Nnrf_NFManagement: Subscription global limit breached: ONRF-SUB-SUBSCR-E2003",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId": 39547,
  "threadPriority": 5,
  "messageTimestamp": "2024-05-30T13:59:18.256+0000",
  "configuredLevel": "WARN",
  "subsystem": "NfSubscribe",
  "processId": "1",
  "nrfTxId": "nrf-tx-2103278974",
  "ocLogId": "1717077558225_77_ocnrf-ingressgateway-854464d548-426bz:1717077558237_110_ocnrf-nfsubscription-766d45f5c7-t4xp8",
  "xRequestId": "",
  "numberOfRetriesAttempted": "",
  "hostname": "ocnrf-nfsubscription-766d45f5c7-t4xp8",
  "errorStatus": "500",
  "errorTitle": "Subscription global limit breached",
  "errorDetails": "NRF-d5g.oracle.com: Nnrf_NFManagement: Subscription global limit breached: ONRF-SUB-SUBSCR-E2003",
  "errorCause": "INSUFFICIENT_RESOURCES",
  "sender": "NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "subscriberId": "imsi-345012123123126"
}

In the above example, the following new attributes are added to the error logs.


  "errorStatus": "500",
  "errorTitle": "Subscription global limit breached",
  "errorDetails": "NRF-d5g.oracle.com: Nnrf_NFManagement: Subscription global limit breached: ONRF-SUB-SUBSCR-E2003",
  "errorCause": "INSUFFICIENT_RESOURCES",
  "sender": "NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "subscriberId": "imsi-345012123123126"

The additionalErrorLogging feature flag is added to the logging API for the supported microservices. When the feature flag is enabled, it adds the errorStatus, errorTitle, errorDetails, errorCause, sender, and receiver attributes to the ERROR logs.

Additionally, the logSubscriberInfo flag is added to the logging API for the supported microservices. When additionalErrorLogging and logSubscriberInfo are enabled, the subscriber information is also added to the ERROR logs.

When the logSubscriberInfo flag is enabled and the subscriberId information is available, NRF adds the subscriberId value to the log messages. If logSubscriberInfo is disabled and the subscriberId information is available, NRF logs the subscriberId value as XXXX.
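For example, when logSubscriberInfo is disabled, the same error log carries the masked value (illustrative excerpt):

  "errorCause": "INSUFFICIENT_RESOURCES",
  "sender": "NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "subscriberId": "XXXX"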

Note:

The subscriberId value can be any of the following:

  • 3GPP-Sbi-Correlation-Info header present in the request header received at NRF.
  • 3GPP-Sbi-Correlation-Info header present in the response header received from the peer in the case of NRF acting as a client.
  • Either SUPI or GPSI values or both SUPI and GPSI values.

For more information about the subscriberId attribute, see Table 4-8.

Example with both SUPI and GPSI values:

{
  "instant": {
    "epochSecond": 1717489820,
    "nanoOfSecond": 998846286
  },
  "thread": "@608f310a-49",
  "level": "ERROR",
  "loggerName": "com.oracle.cgbu.cne.nrf.service.NFDiscoveryService",
  "message": "Response sent: 503 null for uri : /nnrf-disc/v1/nf-instances?{target-nf-type=[UDR], requester-nf-type=[UDM], supi=[imsi-10000005], gpsi=[msisdn-77195225555]} with the problem cause : null and problem details : NRF-d5g.oracle.com: Nnrf_NFDiscovery: SLF Lookup Error Occurred: ONRF-DIS-DISCSLF-E0499; 503; null; 503; null",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId": 49,
  "threadPriority": 5,
  "messageTimestamp": "2024-06-04T08:30:20.998+0000",
  "configuredLevel": "WARN",
  "processId": "1",
  "nrfTxId": "nrf-tx-2050587612",
  "ocLogId": "1717489820750_308_ocnrf-ingressgateway-54594dfd87-9fq2k:1717489820763_73_ocnrf-nfdiscovery-744ff58c64-5cdh9",
  "serviceOperation": "NFDiscover",
  "xRequestId": "",
  "requesterNfType": "UDM",
  "targetNfType": "UDR",
  "discoveryQuery": "/nnrf-disc/v1/nf-instances?{target-nf-type=[UDR], requester-nf-type=[UDM], supi=[imsi-10000005], gpsi=[msisdn-77195225555]}",
  "hostname": "ocnrf-nfdiscovery-744ff58c64-5cdh9",
  "subsystem": "discoveryNfInstances",
  "errorStatus": "503",
  "errorDetails": "NRF-d5g.oracle.com: Nnrf_NFDiscovery: SLF Lookup Error Occurred: ONRF-DIS-DISCSLF-E0499; 503; null; 503; null",
  "sender": "NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "subscriberId": "imsi-10000005;msisdn-77195225555"
}

Note:

The log with updated attributes is printed only for the error responses generated by the NRF application. For all other scenarios (such as error responses generated by the underlying stack), the log with updated attributes is not printed.

Mapping Error Log Attributes

The following table explains the mapping of the new attributes with the existing attributes of ProblemDetails, subscriberId, sender, and receiver in the error logs.

Table 4-8 Mapping Error Log Attributes

The table has four value columns:

  (a) Value when NRF is acting as a Standalone Server (Example: NFRegister, NFUpdate, NFListRetrieval, NFProfileRetrieval service operations, and so on)
  (b) Value when NRF is acting as a Standalone Client (Example: NfStatusNotify service operation)
  (c) Value when NRF is acting as both Server and Client (Example: NRF Forwarding, Roaming, SLF), as Server
  (d) Value when NRF is acting as both Server and Client (Example: NRF Forwarding, Roaming, SLF), as Client

  • errorStatus: (a), (c): status sent by NRF in ProblemDetails of the HTTP response. (b), (d): status received by NRF in ProblemDetails of the HTTP response.
  • errorTitle: (a), (c): title sent by NRF in ProblemDetails of the HTTP response. (b), (d): title received by NRF in ProblemDetails of the HTTP response.
  • errorDetails: (a), (c): detail sent by NRF in ProblemDetails of the HTTP response. (b), (d): detail received by NRF in ProblemDetails of the HTTP response.
  • errorCause: (a), (c): cause sent by NRF in ProblemDetails of the HTTP response. (b), (d): cause received by NRF in ProblemDetails of the HTTP response.
  • sender: (a), (c): NRF-<NRF Instance Id>. (b), (d): Server header value as received in the error response.
  • receiver: (a), (c): Not applicable. (b), (d): NRF-<NRF Instance Id>.
  • subscriberId:
    For (a), the priority is considered as follows:
      1. the value of the 3GPP-Sbi-Correlation-Info header, if present in the request.
      2. else, only in the case of an NFDiscover service operation request, the values derived from the SUPI/GPSI query parameters, if present.
    For (b), (c), and (d), the priority is considered as follows:
      1. the value of the 3GPP-Sbi-Correlation-Info header, if present in the response received when NRF is acting as a client.
      2. else, the value of the 3GPP-Sbi-Correlation-Info header, if present in the request.
      3. else, only in the case of an NFDiscover service operation request, the values derived from the SUPI/GPSI query parameters, if present.

Note:

If any of the newly added attributes does not have a value, that attribute is not present in the log.

Access Token Error Mappings

When an error is generated for the Access Token service operation, the failure response can be an instance of ProblemDetails or AccessTokenErr. For ProblemDetails, the mappings are as described in Table 4-8. For AccessTokenErr, the mapping is as follows:

Table 4-9 Access Token Err

New Attributes in Logs AccessTokenErr
errorStatus 400
errorTitle Bad Request
errorDetails error_description
errorCause error

Sample AccessTokenErr message


{
    "error": "unauthorized_client",
    "error_description": "NRF-d5g.oracle.com: Nnrf_AccessToken: NfType in oAuth request is different from the registered profile: ONRF-ACC-ACTOK-E4004",
    "error_uri": None
}

Sample Error Log Messages for Access Token Error

{
  "instant": {
    "epochSecond": 1717078943,
    "nanoOfSecond": 335260003
  },
  "thread": "XNIO-1 task-2",
  "level": "ERROR",
  "loggerName": "com.oracle.cgbu.cne.nrf.rest.NFAccessTokenController",
  "message": "Response sent: 400 Bad Request for uri : http://ocnrf-ingressgateway.nrf1-ns/oauth2/token with the problem cause : Bad Request and problem details : Bad Request",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId": 71,
  "threadPriority": 5,
  "messageTimestamp": "2024-05-30T14:22:23.335+0000",
  "configuredLevel": "WARN",
  "subsystem": "accessToken",
  "processId": "1",
  "nrfTxId": "nrf-tx-1682094534",
  "ocLogId": "1717078942906_77_ocnrf-ingressgateway-854464d548-426bz:1717078942947_71_ocnrf-nfaccesstoken-79b55d9b45-qm4gn",
  "xRequestId": "",
  "hostname": "ocnrf-nfaccesstoken-79b55d9b45-qm4gn",
  "errorStatus": "400",
  "errorTitle": "Bad Request",
  "errorDetails": "NRF-d5g.oracle.com: Nnrf_AccessToken: NfType in oAuth request is different from the registered profile: ONRF-ACC-ACTOK-E4004",
  "errorCause": "unauthorized_client",
  "sender": "NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "subscriberId": "imsi-345012123123129"
}

Managing the Error Log Messages Enhancements

This section explains the procedure to enable and configure the feature.

Enable

You can enable the Error Log Messages Enhancements feature using the REST API or Console:

  • Enable Using REST API: Set the value of the feature flags additionalErrorLogging and logSubscriberInfo as ENABLED for each of the supported microservices as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. A sample curl request is shown after this list.

    Sample API

    {
        "appLogLevel": "WARN",
        "packageLogLevel": [
            {
                "packageName": "root",
                "logLevelForPackage": "WARN"
            }
        ],
        "additionalErrorLogging": "ENABLED",
        "logSubscriberInfo": "ENABLED"
    }
    • {apiRoot}/nrf-configuration/v1/nfAccessToken/logging
    • {apiRoot}/nrf-configuration/v1/nfDiscovery/logging
    • {apiRoot}/nrf-configuration/v1/nfRegistration/logging
    • {apiRoot}/nrf-configuration/v1/nfSubscription/logging
    • {apiRoot}/nrf-configuration/v1/nrfArtisan/logging
    • {apiRoot}/nrf-configuration/v1/nrfAuditor/logging
    • {apiRoot}/nrf-configuration/v1/nrfConfiguration/logging
    • {apiRoot}/nrf-configuration/v1/nrfCacheData/logging
  • Enable Using CNC Console: Set the value of the feature flags Additional Error Logging and Log Subscriber Info as ENABLED for each of the supported microservices in Logging Level Options. For more information, see Logging Level Options.
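As an illustration, the following curl request applies the sample payload to the nfDiscovery logging resource. The PUT method and the {apiRoot} value are assumptions; verify them against Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:

curl -X PUT 'http://{apiRoot}/nrf-configuration/v1/nfDiscovery/logging' \
  -H 'Content-Type: application/json' \
  -d '{"appLogLevel":"WARN","packageLogLevel":[{"packageName":"root","logLevelForPackage":"WARN"}],"additionalErrorLogging":"ENABLED","logSubscriberInfo":"ENABLED"}'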
Configure

You can configure the feature using Helm.

Perform the following steps to configure this feature using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Configure the following existing parameters (a sample snippet follows these steps):
    • global.nrfInstanceId parameter to indicate the NRF NfInstanceId.
    • global.nfType parameter to indicate the NF type.
  3. Save the file.
  4. Install NRF. For more information about the installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Run Helm upgrade, if you are enabling this feature after NRF deployment. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
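An illustrative snippet of these global parameters (the instance ID shown is a placeholder UUID, not a value to copy):

sample global parameters
global:
  # NfInstanceId of this NRF (placeholder UUID)
  nrfInstanceId: 6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c
  # NF type of this NRF
  nfType: NRF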

Observability

Metrics

There are no new metrics added for this feature.

Alerts

There are no alerts added or updated for this feature.

KPIs

There are no new KPIs related to this feature.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for the resolution steps.

In case the alerts still persist, perform the following:

1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.8 Limiting Number of NFProfiles in NFDiscover Response

When an NFDiscover service operation request is received, NRF returns NFProfiles based on the query attributes. As per 3GPP TS 29.510, the following two attributes play a key role in limiting the number of NFProfiles returned in the response:
  • limit: Maximum number of NFProfiles to be returned in the response.

    Minimum: 1

  • max-payload-size: Maximum payload size of the response, expressed in kilo octets. When present, the NRF limits the number of NFProfiles returned in the response so as to not to exceed the maximum payload size indicated in the request.

    Default: 124 kilo octets

    Maximum: 2000 (that is, 2 mega octets)

limit Attribute

While returning the NFProfiles in the NFDiscover service operation response, NRF limits the number of NFProfiles that can be returned after processing the NFDiscover service operation request based on the value of the limit attribute.

In case the limit attribute is not present in the NFDiscover search query, then NRF limits the number of NFProfiles based on the value of the profilesCountInDiscoveryResponse attribute in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions API.

max-payload-size Attribute

While returning the NFProfiles in the NFDiscover service operation response, NRF limits the number of NFProfiles so that their size is within the value of the max-payload-size attribute, excluding the size of the following 3GPP-defined attributes of the NFDiscover service operation response:
  • validityPeriod (Mandatory)
  • nfInstances (Mandatory)
  • preferredSearch (Conditional)
  • nrfSupportedFeatures (Conditional)

If this attribute is not present in the NFDiscover search query, NRF considers the default value (124 kilo octets) as defined in 3GPP TS 29.510.
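For example, the following discovery request caps the response at 3 NFProfiles and 200 kilo octets. The NF types shown are illustrative:

GET /nnrf-disc/v1/nf-instances?target-nf-type=UDM&requester-nf-type=AMF&limit=3&max-payload-size=200

If neither parameter is present, NRF applies the profilesCountInDiscoveryResponse configuration and the 124 kilo octets default described above.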

4.9 Support for cnDBTier APIs in CNC Console

Earlier, NRF fetched the status of cnDBTier using a CLI-based mechanism.

With the implementation of this feature, cnDBTier APIs are integrated into the CNC Console. NRF users can view the specific cnDBTier APIs directly on the CNC Console as mentioned in the following tables:

Table 4-10 cnDBTier APIs

Console Parameter cnDBTier API name Description
cnDBTier Backup Status http://<base-uri>/ocdbtier/backup/status

This is a read-only API.

This API displays the current backup status of the cnDBTier. It displays the following:
  • current system time
  • current cnDBTier backup status
  • next scheduled backup time
cnDBTier Health http://<base-uri>/{unique_prefix}/ocdbtier/health-info This API lists the health info of cnDBTier pods.
Backup Manager Health Status http://<base-uri>/{unique_prefix}/ocdbtier/health-info/backup-mgr-svc/status

This is a read-only API.

This API displays the health status of the backup manager service. It checks the following:
  • if the backup manager service is up or not
  • if the service can connect to database or not
Monitor Health Status http://<base-uri>/{unique_prefix}/ocdbtier/health-info/monitor-svc/status

This is a read-only API.

This API displays the health status of the monitor service. It checks the following:
  • if the monitor service is up or not
  • if the service can connect to database or not
  • if the metrics are fetched or not (the metrics are fetched when the service is up and vice versa)
NDB Health Status http://<base-uri>/{unique_prefix}/ocdbtier/health-info/ndb-svc/status

This is a read-only API.

This API displays the health status of the NDB service pods (data pods, SQL pods, app-mysql pods, and mgmt pods). It checks the following:
  • if the pod is connected to the PVC or not
  • if the pod status is up or not

Note: PVC Health Status attribute is set to NA when some of the database pods are not connected to the PVC.

Replication Health Status http://<base-uri>/{unique_prefix}/ocdbtier/health-info/replication-svc/status

This is a read-only API.

This API displays the health status of the replication service. It checks the following:
  • if the replication service is up or not
  • if the replication service can connect to database or not
cnDBTier Version http://<base-uri>/ocdbtier/version

This is a read-only API.

This API displays the cnDBTier version.
Backup List http://<base-uri>/ocdbtier/list/backups

This is a read-only API.

This API displays the details of completed backups along with backup ID, backup creation timestamp, and backup size.
Database Statistics Report http://<base-uri>/ocdbtier/statistics/report/dbinfo

This is a read-only API.

This API displays the number of available databases.
Georeplication Status dbconfig-geoReplicationStatus This API displays the georeplication status.
Real Time Overall Replication Status http://<base-uri>/ocdbtier/status/replication/realtime

This is a read-only API.

This API displays the overall replication status in multisite deployments. For example, in a four-site deployment, it provides the replication status between the following site pairs: site1-site2, site1-site3, and site1-site4. The same applies to the other sites, such as site2, site3, and site4.
Site Specific Real Time Replication Status http://<base-uri>/ocdbtier/status/replication/realtime/{remoteSiteName}

This is a read-only API.

This API displays the site-specific replication status.
Replication HeartBeat Status http://<base-uri>/ocdbtier/heartbeat/status

This is a read-only API.

This API displays the connectivity status between the local site and the remote site name to which NRF is connected.
Local Cluster Status http://<base-uri>/ocdbtier/status/cluster/local/realtime

This is a read-only API.

This API displays the status of the local cluster.
On Demand Backup http://<base-uri>/ocdbtier/on-demand/backup/initiate

This API is used to initiate a new backup.

This API displays the status of initiated on-demand backups and helps to create a new backup.
Update Cluster As Failed http://<base-uri>/ocdbtier/markcluster/failed This API is used to mark a disrupted cnDBTier cluster as failed.
Start Georeplication Recovery http://<base-uri>/ocdbtier/faultrecovery/start This API is used to start the georeplication recovery process.
Georeplication Recovery Status http://<base-uri>/ocdbtier/faultrecovery/status

This is a read-only API.

This API is used to monitor the recovery status of georeplication for both FAILED and ACTIVE cnDBTier sites.

For more information about the above-mentioned cnDBTier APIs, see Oracle Communications Cloud Native Core, cnDBTier User Guide.
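For example, the read-only backup status API from Table 4-10 can be queried with a plain GET request. The base URI is deployment specific:

curl -X GET 'http://<base-uri>/ocdbtier/backup/status'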
Managing cnDBTier APIs in CNC Console

Enable

The CNC console section for accessing the cnDBTier APIs is available in NRF if the cnDBTier is configured as an instance during the CNC Console deployment. For more information about integrating cnDBTier APIs in CNC Console, see the "NF Single Cluster Configuration With cnDBTier Menu Enabled" section in Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

There is no option to disable this feature.

Configure

cnDBTier APIs can be accessed or configured using the CNC Console. For more information, see cnDBTier APIs in the CNC Console.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.10 Enhanced NRF Set Based Deployment (NRF Growth)

Earlier, NRF supported the deployment of two segments of NRF in a network. When the network capacity is expanded to support increased traffic, a single NRF set cannot be scaled beyond a certain capacity.

With the implementation of this feature, NRF supports the deployment of multiple NRF sets in the network instead of scaling the capacity of each NRF instance. Each segment in this network can have a single georedundant set, and each set can have up to four georedundant NRFs. The georedundant NRFs in the set share the state data with each other.

Figure 4-7 NRF Segments


NRF Segments

A dedicated cnDBTier instance is deployed for each NRF that is georeplicated within the specific NRF set. Each NRF in the set synchronizes with its most preferred NRFs in the other sets within the segment to retrieve the state data of that set. Thus, every NRF has the complete segment-level view of all the sets in a specific segment.

The state data in each NRF comprises:

  • local data
  • georeplicated data of the specific set
  • remote data from the other sets.

Figure 4-8 State Data in NRF


State Data in NRF

The Cache Data Service (CDS), which is a multi-pod deployment, builds and maintains the state data of the local NRF set and the remote NRF sets in the in-memory cache. Each pod of CDS maintains its cache independently. CDS maintains the segment-level view of the state data. This segment-level view is not pushed to cnDBTier. For more information about CDS, see NRF Architecture.

The cache of local NRF set data:

  • is built or updated by querying the cnDBTier periodically
  • uses the last known information about the NFs when CDS is not updated due to a database error

The cache of remote set data:

  • is built or updated by synchronizing the state data of remote NRF sets periodically.

    Note:

    The periodicity at which the remote NRF set data is synchronized with the CDS in-memory cache is 2 seconds. Remote NRF set data synchronization is not performed for each request because it occurs over the WAN network, which may cause higher latency as it also involves retries.
  • uses the last known information about the NFs in case the remote NRF set sync fails.

Figure 4-9 Cache Data Service


Cache Data Service

During pod initialization, the CDS service marks itself as available only after the local and remote set state data is loaded in the in-memory cache. CDS is only used for reading local and remote state data. The NRF microservices continue to write state data directly to the cnDBTier.

Note:

CDS tries to fetch the state data from all the NRFs in the remote sets. If all the NRFs in any specific remote set are unavailable, CDS comes up with the state data of the remaining NRF sets (if any) and the local data.

The following NRF core microservices read data from CDS for various service operation requests as explained below. The write operations are directed to cnDBTier.

  • NF Registration Microservice (nfregistration): The registration microservice queries the CDS for service operations such as NFListRetrieval and NFProfileRetrieval.
  • NF Subscription Microservice (nfsubscription): The subscription microservice queries the CDS for service operations such as NFStatusSubscribe, NfStatusSubscribe(Patch), NfStatusUnsubscribe, NfStatusNotify.
  • NRF Auditor Microservice (nrfauditor): The auditor microservice queries the CDS to retrieve the total number of NFs per NfType in the segment.
  • NF Access Token Microservice (nfaccesstoken): The AccessToken microservice queries the CDS for processing the access token service operations.
  • NRF Configuration Microservice (nrfconfiguration): The configuration microservice queries CDS for processing the state data API requests.
  • NF Discovery Microservice (nfdiscovery): The discovery microservice queries the CDS periodically to update its local cache.

For more information about the microservices, see the Impacted Service Operations section.

Note:

The remaining NRF microservices do not query CDS.

This feature interacts with some of the existing NRF features. For more information, see the Interaction with Existing Features section.

Managing NRF Growth feature

Enable

Prerequisites

A minimum of two NRF sets are required to enable the feature.

Steps to Enable

  1. Upgrade all existing sites to NRF 24.1.0 or a later version. Ensure that all the sites are on the same version.
  2. Configure a unique nfSetId for each NRF set using the /nrf-configuration/v1/nrfGrowth/featureOptions API (an illustrative request is sketched after Figure 4-10). Ensure that the same nfSetId is configured for all the NRFs in the set.

    For example: Consider a case of three NRF instances in a set whose nfSetId is set1. Set1 has NRF11, NRF12, and NRF13. Configure set1 as the nfSetId for NRF11, NRF12, and NRF13. See Figure 4-10 for more details.

  3. Install new NRF sets or upgrade existing NRF sets to NRF 24.1.0 or a later version.

    For example: Consider the new NRF set as Set2 with three NRF instances: NRF21, NRF22, and NRF23. See Figure 4-10 for more details.

  4. Using the following state data API, retrieve the nf-instances and subscription details in each NRF of each set. Save the output for validation later.
    1. Use /nrf-state-data/v1/nf-details to retrieve the nf-instances details.
    2. Use /nrf-state-data/v1/subscription-details to retrieve the subscription details.

    For more information about the query parameters, see the REST Based NRF State Data Retrieval section.

  5. Configure the nfSetId of the new NRF set by setting the nfSetId attribute using the /nrf-configuration/v1/nrfGrowth/featureOptions API. Ensure that the same nfSetId is configured for all the NRFs in the set.

    For example: The nfSetId of Set2 is set2. Configure set2 as the nfSetId for NRF21, NRF22, and NRF23. See Figure 4-10 for more details.

  6. Load the NRF Growth feature specific alerts in both the sets. For more information about the alert configuration, see the NRF Alert Configuration section.
  7. Configure the NRFs of the remote NRF sets in each NRF by setting the attribute nrfHostConfigList using the API /nrf-configuration/v1/nrfGrowth/featureOptions.

    Note:

    • Once configured, the NRFs will start syncing with the remote NRF sets. The remote set data will not be used for service operations until the feature is enabled.
    • One of the entries in the staticNrfConfigList attribute must contain the details of the local NRF set.

    For example: Configure the host details of the NRFs in Set2 as the remote NRF sets in each NRF of Set1. For instance, configure the nrfHostConfigList attribute in NRF11 of Set1 with the remote NRF host details of NRF21, NRF22, and NRF23. Similarly, configure each NRF in Set1 and Set2.

  8. Ensure that the following alerts are not present in any of the sets in the network:
    1. OcnrfRemoteSetNrfSyncFailed
    2. OcnrfSyncFailureFromAllNrfsOfAllRemoteSets
    3. OcnrfSyncFailureFromAllNrfsOfAnyRemoteSet

      If present, wait for 30 seconds to 1 minute and retry till the alerts are cleared. If the alerts are not cleared, see alerts for resolution steps.

  9. Use the state data API to validate the nf-instances and subscriptions as below to ensure that the remote syncing is successful and all the NFs and subscriptions are synced.
    1. Use /nrf-state-data/v1/nf-details?testSegmentData=true to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?testSegmentData=true to validate the subscriptions.

    Here, the testSegmentData query parameter is used to validate the state data of the segment before enabling the NRF Growth feature.

    Note:

    Ensure that the state data of the NRFs in Set1 is available in the NRFs of Set2. In case you are upgrading the existing set, then ensure that the state data of the NRFs in Set2 is available in the NRFs of Set1.

    For more information about the query parameters, see the REST Based NRF State Data Retrieval section.

  10. Enable NRF Growth Forwarding feature, if forwarding is applicable.

    Note:

    When NRF Growth feature is enabled, the existing forwarding options will not be used.
  11. Enable the NRF Growth feature in all the NRFs using the /nrf-configuration/v1/nrfGrowth/featureOptions API.
  12. Ensure that the following alerts are not present in any of the sets in the network:
    1. OcnrfRemoteSetNrfSyncFailed
    2. OcnrfSyncFailureFromAllNrfsOfAllRemoteSets
    3. OcnrfSyncFailureFromAllNrfsOfAnyRemoteSet
    4. OcnrfDatabaseFallbackUsed

    If present, wait for 30 seconds to 1 minute and retry till the alerts are cleared. If the alerts are not cleared, see alerts for resolution steps.

  13. After the above configurations are complete, all service operations consider complete segment data for processing the request.

    You can migrate the NFs from Set1 to Set2 or vice-versa. For more information about the steps to migrate NFs, see the Migration of NFs section.

Figure 4-10 NRF Growth nfSetId Configuration


NRF Growth nfSetId Configuration
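As an illustration, the nfSetId for an NRF in Set1 can be configured with a request such as the following. The PUT method and the {apiRoot} value are assumptions, and the exact schema of the featureOptions resource, including the nrfHostConfigList attribute, is described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:

curl -X PUT 'http://{apiRoot}/nrf-configuration/v1/nrfGrowth/featureOptions' \
  -H 'Content-Type: application/json' \
  -d '{"nfSetId": "set1"}'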

Migration of NFs

  1. Make sure that the NF subscription list is available.
  2. Use the following state data APIs at the remote NRF set to validate that the NRF has a record of this NF as remote data (sample queries are shown after Figure 4-11). If the record is not present, wait for 5 to 10 seconds and retry the API until the nfInstanceId is present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<the nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id=<subscription ID of the subscription> to validate the subscriptions.
  3. Unsubscribe the subscriptions of the NF from the NRF.
    1. NF sends NfStatusUnsubscribe request to NRF.
    2. NRF sends NfStatusUnsubscribe response to NF.
  4. Deregister the NF from the NRF.
    1. NF sends NfDeregister request to NRF.
    2. NRF sends NfDeregister response to NF.
  5. Use the following state data APIs at the target NRF to validate that the NRF does not have the NF record. If the record is present, wait for 5 to 10 seconds and retry the API until the nfInstanceId is not present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<the nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id= <subscription ID of the subscription> to validate the subscriptions.
  6. Register the NF with the NRF at the target NRF set.
    1. NF sends NfRegister request to NRF.
    2. NRF sends NfRegister response to NF.
  7. Create the new subscription as required.
    1. NF sends NfStatusSubscribe request to NRF.
    2. NRF sends NfStatusSubscribe response to NF.
  8. Use the following state data API at the target NRF to validate that the NF is registered and the subscription is created. The NF record is stored as local registered NF. If not present, wait for 5-10 seconds and retry the API till the nfInstanceId is present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<the nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id= <subscription ID of the subscription> to validate the subscription.

Figure 4-11 Migration of NF from one set to another


Migration of NF from one set to another
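The validation calls in the steps above can be issued as plain GET requests. The {apiRoot} value is deployment specific, and the IDs shown are placeholders:

curl -X GET 'http://{apiRoot}/nrf-state-data/v1/nf-details?nf-instance-id=<nfInstanceId of the NF>'
curl -X GET 'http://{apiRoot}/nrf-state-data/v1/subscription-details?subscription-id=<subscription ID of the subscription>'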

Configure

Configure the NRF Growth feature using REST API or CNC Console:
  • Configure Forwarding Options for NRF Growth feature using REST API: Perform the configurations as described in the "Forwarding Options for NRF Growth" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Forwarding Options for NRF Growth feature using CNC Console: Perform the configurations as described in the Forwarding Options section.
  • Configure NRF Growth feature using REST API: Perform the feature configurations as described in the "NRF Growth Options" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Growth feature using CNC Console: Perform the feature configurations as described in NRF Growth Options.

Note:

The forwarding rules for nfDiscover service requests are based on the following 3GPP discovery request parameters:

  • target-nf-type (Mandatory Parameter)
  • service-names (Optional Parameter)

The forwarding rules for access token service requests are based on the following 3GPP access token request parameter:

  • scope (Mandatory Parameter)

Observe

Metrics

The following metrics are added for the NRF Growth feature.
  • ocnrf_cds_rx_requests_total
  • ocnrf_cds_tx_responses_total
  • ocnrf_cds_round_trip_time_seconds
  • ocnrf_query_remote_cds_requests_total
  • ocnrf_query_remote_cds_responses_total
  • ocnrf_query_remote_cds_round_trip_time_seconds
  • ocnrf_query_remote_cds_message_size_bytes
  • ocnrf_cache_fallback_total
  • ocnrf_db_fallback_total
  • ocnrf_query_cds_requests_total
  • ocnrf_query_cds_responses_total
  • ocnrf_query_cds_round_trip_time_seconds
  • ocnrf_dbmetrics_total
  • ocnrf_nf_registered_count
  • ocnrf_cache_sync_count_total
  • ocnrf_remote_set_unavailable_total
  • ocnrf_all_remote_sets_unavailable_total

For more information on the above metrics, see the NRF Cache Data Metrics section.

KPIs

Following are the NRF Growth feature specific KPIs:
  • Cache Sync
  • Total Number of CDS Requests
  • Total Number of CDS Responses
  • Total Number of CDS Requests per Service Operation
  • Total Number of CDS Responses per Service Operation
  • CDS Latency 50%
  • CDS Latency 90%
  • CDS Latency 95%
  • CDS Latency 99%
  • Total Number of CDS Requests per Request Type
  • Total Number of CDS Responses per Request Type
  • Total Number of Remote CDS Requests
  • Total Number of Remote CDS Responses
  • Remote CDS Query Latency 50%
  • Remote CDS Query Latency 90%
  • Remote CDS Query Latency 95%
  • Remote CDS Query Latency 99%
  • Database Fallback
  • CDS Cache Sync

For more information on the above KPIs, see the NRF Growth Specific KPIs section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.10.1 Impacted Service Operations

This section explains the change in the functionality of the service operation when multiple NRF sets are deployed.

nfRegistration Service Operation

NFRegister, NFUpdate, or NFDeregister

The NF registers and heartbeats with the Primary NRF. When the nfRegistration service receives an NFRegister, NFUpdate, or NFDeregister request, it processes the request and, if successful, creates, updates, or deletes the nf-instances records in the cnDBTier. The nf-instances data is made available to and used by the service operations in the remote sets as well.

The nfRegistration service does not query the Cache Data Service (CDS) for these service operations. It directly updates or saves the data in the local cnDBTier.

NFProfileRetrieval

When the nfRegistration service receives the NFProfileRetrieval request, it queries the CDS to fetch the NFProfile.

The CDS provides the NFProfile of the local NRF set by querying the cnDBTier. The CDS queries the remote NRF sets periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the NFProfile registered at the remote NRF sets by querying the in-memory cache.

The response contains the matching NFProfile either from the local NRF set or from the remote NRF set. The NFProfileRetrieval response is created by the nfRegistration service with the nf-instances provided by the CDS.

The responses from the nfRegistration service vary based on the following conditions:

  • If the NFProfile is not present in the response from the CDS, a response with status code 404 NOT FOUND is sent to the consumer NF.
  • If the CDS is unreachable or a non-2xx status code is received, the registration service relies on the cnDBTier to obtain the nf-instances local data and fulfill the service operation. The nfRegistration service fetches data from the cnDBTier and only the matching NFProfile from the local NRF set data is sent.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides state data of the local NRF set received from cnDBTier and the last known state data of the remote set from its in-memory cache.

NFListRetrieval

When the nfRegistration service receives an NFListRetrieval request, it queries the CDS to get the list of nfInstanceIDs.

The CDS provides the nf-instances of the local NRF set by querying the cnDBTier. The CDS queries the remote set periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the nf-instances of the remote NRF sets by querying the in-memory cache.

The response contains the matching nf-instances from the local NRF set and the remote NRF set. The NFListRetrieval response is created by the nfRegistration service with the nf-instances provided by the CDS.

The responses from the nfRegistration service vary based on the following conditions:

  • If there are no matching nf-instances in either of the sets, an empty response is sent.
  • If the CDS is unreachable or a non-2xx status code is received, the registration service falls back to the cnDBTier to get the nf-instances local data and fulfill the service operation. The nfRegistration service fetches data from the cnDBTier and only the matching nf-instances from the local NRF set data is sent.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides state data of the local NRF set received from cnDBTier and the last known state data of the remote set from its in-memory cache.

nfSubscription Service Operation

The NFs continue to subscribe for Producer NFs and update their subscriptions using the NRFs of the same georedundant set. The nfSubscription service is responsible for creating and maintaining the subscriptions in NRF. It also takes care of triggering NfStatusNotify to the consumer NFs for their subscriptions.

NFStatusSubscribe

When the nfSubscription service receives the NfStatusSubscribe request to create a subscription, the nfSubscription service checks the following constraints before creating the subscription:

  • duplicate subscription based on the allowDuplicateSubscriptions parameter configuration. This check is performed across all the sets in the segment. For more information about the configuration of this parameter, see NF Management Options. For more information about the functionality of this parameter for different values configured, see Use Cases for Allow Duplicate Subscriptions.
  • subscription limit conditions based on the subscriptionLimit parameter configuration. This check is applicable for all the subscriptions at the local NRF set. For more information about the Subscription Limit feature, see the Subscription Limit section.

If the constraints are passed, the nfSubscription service creates the subscription and saves it to the cnDBTier. The nfSubscription service queries the CDS to retrieve the subscriptions that are already created. The CDS provides the subscriptions at the local NRF set by querying the cnDBTier. The CDS queries the remote set periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the subscriptions of the remote NRF sets by querying the in-memory cache.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service creates subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NfStatusSubscribe(Patch)

When the nfSubscription service receives the NfStatusSubscribe (Patch) request to update a subscription, the nfSubscription service checks the following constraints before updating the subscription:

  • checks if the subscription exists. This check is performed across all the sets in the segment.
  • subscription limit conditions based on the subscriptionLimit parameter configuration. This check is applicable for all the subscriptions at the local NRF set.

If the constraints are passed, the nfSubscription service updates the subscription and saves it to the cnDBTier.

The subscription microservice queries the CDS to retrieve the subscriptions that are created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching subscriptions created at the remote NRF sets by querying the in-memory cache.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service updates subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NfStatusUnsubscribe

When the nfSubscription service receives the NfStatusUnsubscribe request to delete a subscription, the nfSubscription service checks if the subscription exists. This check is performed across all the sets in the segment.

The subscription microservice queries the CDS to retrieve the matching subscriptions that are created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching subscriptions created at the remote NRF sets by querying the in-memory cache.

If the subscription is present, the nfSubscription service deletes the subscription from the cnDBTier and responds with the 200 OK status code. Subscriptions created at one NRF set cannot be deleted at any other NRF set; such requests are rejected. If the subscription is not present, a response with the 404 status code is sent.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service unsubscribes the subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NfStatusNotify

The nfSubscription service triggers notifications towards the consumer NFs that are subscribed for nf-instances status events when the conditions specified in the subscription are met. This check is performed across all the sets in the segment.

The subscription service queries the CDS to retrieve the subscriptions matching the nf-instances event that is created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS retrieves the subscriptions matching the nf-instances event at the remote NRF sets by querying the in-memory cache.

The nfSubscription service triggers the notification event to the consumer NFs for all the subscriptions received from the CDS.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service triggers the notification for the subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NFDiscover

When the NFDiscover service receives a discovery request, it looks for the Producer NFProfile maintained in the in-memory cache of the discovery service. The discovery service cache is periodically updated by querying the CDS.

CDS updates the local cache of the discovery microservice with the latest local NRF set data.

Additionally, if the NRF Growth feature is enabled, CDS updates the local cache of the discovery microservice with the latest local NRF set data and the cached remote NRF set data.

In case the CDS is not reachable or a non-2xx status code is received, the discovery service relies on the cnDBTier to get local NRF set data and update its in-memory cache for the local NRF set state data. If the NRF Growth feature is enabled, the last known data of the remote NRF set is used for the discovery service operation.

NfAccessToken

When the nfAccessToken service receives an access token request with targetNfInstanceId, it queries the CDS to validate whether the target NF or the requester NF, or both, are registered.

The nfAccessToken service queries the CDS to retrieve the targetNfInstanceId and requesterNfInstanceId that are created at the local NRF set. The CDS provides the corresponding NF instances created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching NF instances created at the remote NRF sets by querying the in-memory cache.

Note:

The requesterNfInstanceId is validated as part of the Access Token Request Authorization feature.

If the CDS is unreachable or a non-2xx status code is received, the nfAccessToken service falls back to the cnDBTier to get the NF instance data of the local NRF set and fulfill the service operation.

4.10.2 Interaction with Existing Features

With the deployment of multiple sets of NRFs, the functionality of the following features is impacted.

NRF Forwarding Feature

When the growth feature is enabled, service requests can be forwarded to an NRF in another segment. The service requests are forwarded only if the requested Producer NfInstances are not registered in any of the NRF sets in the segment.

For more information about configuring the forwarding options, see the Enhanced NRF Set Based Deployment (NRF Growth) section.

Dynamic SLF Feature

When dynamic SLF is configured and growth feature is enabled, the dynamic selection of the SLF profiles is performed across local and remote NRF sets. In a given NRF set, the preferredSLFLocality attribute can be utilized to prefer SLFs of the local NRF set over SLFs of remote NRF set for sending SLF query. For more information about the dynamic SLF feature, see the Subscriber Location Function section.

Subscription Limit

When the growth feature is enabled, the subscription count is evaluated within each set.

Note: Ensure that the total number of subscriptions across all the sets in the segment does not exceed the intended segment-wide limit. Configure the globalMaxLimit value of each set accordingly.

For example, consider there are two sets deployed in the segment, the total subscription that is supported in the segment is 1000, and the globalMaxLimit of set1 is 300 and set2 is 700. In this case, NRF validates that the subscription count of set1 does not cross 300 and set2 does not cross 700.

For more information about the Subscription Limit feature, see the Subscription Limit section.

REST-Based NRF State Data Retrieval

The REST-based NRF state data retrieval feature provides non-signaling APIs to access NRF state data. It allows the operator to access the NRF state data to understand and debug failures. NRF exposes the following APIs to fetch the NfProfiles and NfSubscriptions across the local and remote NRF sets. When the growth feature is enabled, the API behavior is as follows (sample requests are shown after this list):

  1. {apiRoot}/nrf-state-data/v1/nf-details

    The API returns the NfProfiles registered at the NRF and its mated sites. The response includes the NfProfiles registered at the local site, and its mated sites if the replication channel between the given site and the mate site is UP.

    When the growth feature is disabled, the remote profiles can still be fetched by setting the query parameter ?testSegmentData=true.

    Refer to the Enable section for the usage of this query parameter.

    The NfProfiles are filtered based on the query parameters mentioned in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The filtered NFProfiles are included in the response.

  2. {apiRoot}/nrf-state-data/v1/subscription-details

    The API returns the NfSubscriptions created at the NRF and its mated sites. The response includes the NfSubscriptions created at the local site, and its mated sites if the replication channel between the given site and the mate site is UP.

    When the growth feature is disabled, the remote subscriptions can still be fetched by setting the query parameter ?testSegmentData=true.

    Refer to the Enable section for the usage of this query parameter.

    The NfSubscriptions are filtered based on the query parameters mentioned in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The filtered NfSubscriptions are included in the response.
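For example, before enabling the growth feature, the segment-level state data can be validated with the following GET requests. The {apiRoot} value is deployment specific:

curl -X GET 'http://{apiRoot}/nrf-state-data/v1/nf-details?testSegmentData=true'
curl -X GET 'http://{apiRoot}/nrf-state-data/v1/subscription-details?testSegmentData=true'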

Kubernetes Probes

Startup Probe

The Cache Data Service startup probe proceeds when the following conditions are met:

  1. Connectivity to the cnDBTier is successful.
  2. The NRF State and Configuration tables are present, and configuration data is present in the cnDBTier.

Readiness Probe

The Cache Data Service readiness probe reports the container as ready only if the following conditions are met:

  1. Connectivity to the cnDBTier is successful.
  2. The NRF State and Configuration tables are present, and configuration data is present in the cnDBTier.
  3. One successful attempt to load the local NRF set state data into the local cache. The state data is the NfInstances and NFSubscriptions. If no NfInstances or NfSubscriptions are present in the cnDBTier, the cache remains empty and the readiness probe is considered a success. This check is done only once when the pod comes up for the first time. Subsequent readiness probes do not check this condition.
  4. If the growth feature is enabled, there will be one successful attempt to load the state data from all the remote NRF sets that are configured. If no NfInstances or NfSubscriptions are received from the remote NRF Set, that request is considered a success. If the remote set NRFs are not reachable, then after all the reattempts and reroutes, the request is considered a success.

Liveness Probe

The liveness probe monitors the health of the critical threads running in the Cache Data Service. If any of the threads is detected to be in a deadlock state or not running, the liveness probe fails.

For more information about Kubernetes probes, see the Kubernetes Probes section.

4.11 Support for Automated Certificate Lifecycle Management

In NRF 23.3.x and earlier, X.509 and Transport Layer Security (TLS) certificates were managed manually. When multiple instances of NRF were deployed in a 5G network, certificate management, such as certificate creation, renewal, removal, and so on, became tedious and error-prone.

Starting with NRF 23.4.x, you can integrate NRF with Oracle Communications Cloud Native Core, Certificate Management (OCCM) to automate certificate lifecycle management. OCCM manages TLS certificates stored in Kubernetes secrets by integrating with a Certificate Authority (CA) using the Certificate Management Protocol Version 2 (CMPv2). OCCM obtains and signs TLS certificates within the NRF namespace. For more information about OCCM, see Oracle Communications Cloud Native Core, Certificate Management User Guide.

Figure 4-12 Support for OCCM


Support for OCCM

The above diagram indicates that OCCM writes the keys and certificates to the Kubernetes secrets, and NRF reads them to establish TLS connections with other NFs.

OCCM can automatically manage the following TLS certificates:

  • 5G Service Based Architecture (SBA) client TLS certificates
  • 5G SBA server TLS certificates
  • Message Feed TLS certificates

This feature enables NRF to monitor, create, recreate, and renew TLS certificates using OCCM, based on their validity. For information about enabling HTTPS, see "Configuring Secrets for Enabling HTTPS" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Install Guide Considerations
  • Upgrade: When NRF is deployed with OCCM, follow the specific upgrade procedure. For information about the upgrade strategy, see "Upgrade Strategy" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  • Rollback: For more information on migrating the secrets from NRF to OCCM and removal of Kubernetes secrets from the yaml file, see "Postupgrade Task" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure

There are no additional configuration changes required at NRF.

Observe
Metrics

This feature uses the existing metrics:

  • oc_egressgateway_connection_failure_total
  • oc_ingressgateway_connection_failure_total

For more information, see NRF Gateways Metrics.

Maintain

If you encounter any OCCM-specific alerts, see the "OCCM Alerts" section in Oracle Communications Cloud Native Core, Certificate Management User Guide.

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.12 Egress Gateway Route Configuration for Different Deployments

The Egress Gateway route configuration for mTLS and non-TLS based deployments is as follows:

Table 4-11 Egress Gateway Route Configuration for Static Peers

HTTPS Outgoing Traffic (TLS):
  • Peer under PeerSetConfiguration: httpsConfiguration
  • RoutesConfiguration.httpsTargetOnly: true
  • RoutesConfiguration.httpRuriOnly: false
  • Outgoing 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
  • Outgoing Scheme: https
  • Outgoing Authority Header: Peer FQDN (Example: SCP/SEPP FQDN)

HTTP Outgoing Traffic (non-TLS):
  • Peer under PeerSetConfiguration: httpsConfiguration
  • RoutesConfiguration.httpsTargetOnly: true
  • RoutesConfiguration.httpRuriOnly: true
  • Outgoing 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
  • Outgoing Scheme: http
  • Outgoing Authority Header: Peer FQDN (Example: SCP/SEPP FQDN)

Table 4-12 Egress Gateway Route Configuration for virtualHost Peers

HTTPS Outgoing Traffic (TLS):
  • Peer under PeerSetConfiguration: httpsConfiguration
  • RoutesConfiguration.httpsTargetOnly: true
  • RoutesConfiguration.httpRuriOnly: false
  • Outgoing 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
  • Outgoing Scheme: https
  • Outgoing Authority Header: Resolved Peer FQDN (Example: Resolved SCP/SEPP FQDN)

HTTP Outgoing Traffic (non-TLS):
  • Peer under PeerSetConfiguration: httpConfiguration
  • RoutesConfiguration.httpsTargetOnly: false
  • RoutesConfiguration.httpRuriOnly: true
  • Outgoing 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
  • Outgoing Scheme: http
  • Outgoing Authority Header: Resolved Peer FQDN (Example: Resolved SCP/SEPP FQDN)

Note:

  • For service-mesh based deployment, NRF Egress Gateway application container sends HTTP outgoing traffic only. The sidecar container is responsible for sending out the traffic as HTTPS traffic. Hence, for service-mesh based deployment, perform the configuration as per HTTP Outgoing Traffic (non-TLS).
Configuration for HTTPS Outgoing Traffic
For HTTPS Outgoing Traffic, perform the following configuration at NRF Egress Gateway:
  1. Update the PeerConfiguration as follows:
    
    sample PeerConfiguration.json
    [
      {
        "id": "peer1",
        "host": "scp-stub-service01",
        "port": "8080",
        "apiPrefix": "/",
        "healthApiPath":"/{scpApiRoot}/{apiVersion}/status"
      }
    ]
  2. Update the PeerSetConfiguration as follows:
     
    sample PeerSetConfiguration.json
    [
        {
            "id":"set0",
            "httpsConfiguration":[
            {
            "priority": 1,
            "peerIdentifier": "peer1"
            }]
        }
    ]
  3. Update the RoutesConfiguration as follows:
    
    sample RoutesConfiguration.json
    {
          "id":"egress_scp_proxy2",
          "uri":"http://localhost:32069/",
          "order":3,
          "metadata":{
             "httpsTargetOnly":true,
             "httpRuriOnly":false,
             "sbiRoutingEnabled":true
          },
          "predicates":[
             {
                "args":{
                   "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                },
                "name":"Path"
             }
          ],
          "filters":[
             {
                "name":"SbiRouting",
                "args":{
                   "peerSetIdentifier":"set0",
                   "customPeerSelectorEnabled":false
                }
             }
          ]
       }
Configuration for HTTP Outgoing Traffic
For HTTP Outgoing Traffic, perform the following configuration at NRF Egress Gateway:
  1. Update the PeerConfiguration as follows:
    
    sample PeerConfiguration.json
    [
      {
        "id": "peer1",
        "host": "scp-stub-service01",
        "port": "8080",
        "apiPrefix": "/",
        "healthApiPath":"/{scpApiRoot}/{apiVersion}/status"
      }
    ]
  2. Update the PeerSetConfiguration as follows:
     
    sample PeerSetConfiguration.json
    [
        {
            "id":"set0",
            "httpsConfiguration":[
            {
            "priority": 1,
            "peerIdentifier": "peer1"
            }]
        }
    ]
  3. Update the RoutesConfiguration as follows:
    
    sample RoutesConfiguration.json
    {
          "id":"egress_scp_proxy2",
          "uri":"http://localhost:32069/",
          "order":3,
          "metadata":{
             "httpsTargetOnly":true,
             "httpRuriOnly":true,
             "sbiRoutingEnabled":true
          },
          "predicates":[
             {
                "args":{
                   "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                },
                "name":"Path"
             }
          ],
          "filters":[
             {
                "name":"SbiRouting",
                "args":{
                   "peerSetIdentifier":"set0",
                   "customPeerSelectorEnabled":false
                }
             }
          ]
       }

4.13 Routing Egress Messages through SCP

NRF allows routing of the following Egress messages through SCP or directly, with an option to configure each Egress request type independently:
  • SLF requests
  • Notification requests
  • NRF Forwarding requests
  • Roaming requests
The above requests are routed through SCP based on the configuration in the Egress Gateway. For more information about the routes configuration, see the "Routes Configuration" section in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

NRF supports routing of the Egress requests through SCP configured with FQDN and virtual FQDN as peer instances.

Alternate route service is used for the resolution of virtual FQDNs using DNS-SRV. Egress Gateway uses the virtual FQDN of peer instances to query the Alternate Route Service, which returns the list of alternate FQDNs, each of which has a priority assigned. The Egress Gateway selects the peer instances based on the priority value.
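
For illustration, a DNS-SRV record set behind a hypothetical virtual FQDN might look as follows (all names, ports, and weights are assumptions; in DNS SRV records, a lower priority value indicates a more preferred target):

_http._tcp.scp-vfqdn.example.com. 300 IN SRV 10 50 8080 scp1.example.com.
_http._tcp.scp-vfqdn.example.com. 300 IN SRV 20 50 8080 scp2.example.com.

In this sketch, the Alternate Route Service returns scp1.example.com and scp2.example.com with their priorities, and the Egress Gateway prefers scp1.example.com.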

Figure 4-13 Routing Egress Messages through SCP

Managing Routing Egress Messages through SCP
Enable
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set enableNrfArtisanService to true to enable the Artisan microservice.

    Note:

    This parameter must be enabled when the dynamic SLF feature is configured.
  3. Set alternateRouteServiceEnable to true to enable the alternate route service. For more information on enabling the alternate route service, see the "Global Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide. A minimal sketch of both flags follows these steps.

    Note:

    This parameter must be enabled when alternate route service through DNS-SRV is required.
  4. Save the file.
  5. Run helm install. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. If you are enabling this parameter after NRF deployment, run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
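
For reference, a minimal sketch of both flags in the ocnrf_custom_values_25.1.200.yaml file is shown below (the placement under global is an assumption based on the "Global Parameters" reference above; see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide for the authoritative structure):

global:
  # Enables the Artisan microservice (required when the dynamic SLF feature is configured)
  enableNrfArtisanService: true
  # Enables the alternate route service (required for DNS-SRV resolution of virtual FQDNs)
  alternateRouteServiceEnable: true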

Configure

Configure using REST API: Perform the following Egress Gateway configuration as described in the "Egress Gateway Configuration" section of the Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • Create or update the peerconfiguration with SCP FQDN details.
  • Create or update the peersetconfiguration to assign these peers.
  • Create or update the sbiroutingerrorcriteriasets.
  • Create or update the sbiroutingerroractionsets.
  • Create or update the routesconfiguration to provide the SCP details. For more information about the routes configuration specific to the above-mentioned requests, see SLF Requests, Notifications Requests, NRF Forwarding Requests, and Roaming Requests.
  • If all the requests must be routed through SCP, then create or update the routesconfiguration as follows:
    [
      {
        "id": "egress_scp_proxy1",
        "uri": "http://localhost:32068/",
        "order": 0,
        "metadata": {
          "httpsTargetOnly": true,
          "httpRuriOnly": true,
          "sbiRoutingEnabled": true
        },
        "predicates": [
          {
            "args": {
              "pattern": "/**"
            },
            "name": "Path"
          }
        ],
        "filters": [
          {
            "name": "SbiRouting",
            "args": {
              "peerSetIdentifier": "set0",
              "customPeerSelectorEnabled": false,
              "errorHandling": [
                {
                  "errorCriteriaSet": "criteria_1",
                  "actionSet": "action_1",
                  "priority": 1
                },
                {
                  "errorCriteriaSet": "criteria_0",
                  "actionSet": "action_0",
                  "priority": 2
                }
              ]
            }
          }
        ]
      },
      {
        "id": "default_route",
        "uri": "egress://request.uri",
        "order": 100,
        "filters": [
          {
            "name": "DefaultRouteRetry"
          }
        ],
        "predicates": [
          {
            "args": {
              "pattern": "/**"
            },
            "name": "Path"
          }
        ]
      }
    ]

    For more information about the routes configuration on different deployments, see Egress Gateway Route Configuration for Different Deployments.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:
  • Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  • Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.13.1 SLF Requests

NRF supports routing of SLF queries through SCP towards SLF or UDR.

Managing SLF Requests through SCP

Configure

You can configure the SLF requests through SCP using the Egress Gateway configurations either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
          {
              "id":"egress_scp_proxy1",
              "uri":"http://localhost:32068/",
              "order":0,
              "metadata":{
                  "httpsTargetOnly":true,
                  "httpRuriOnly":false,
                  "sbiRoutingEnabled":true
              },
              "predicates":[
                  {
                      "args":{
                          "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                      },
                      "name":"Path"
                  }
              ],
              "filters":[
                  {
                      "name":"SbiRouting",
                      "args":{
                          "peerSetIdentifier":"set0",
                          "customPeerSelectorEnabled":true,
                          "errorHandling":[
                              {
                                  "errorCriteriaSet":"criteria_1",
                                  "actionSet":"action_1",
                                  "priority":1
                              },
                              {
                                  "errorCriteriaSet":"criteria_0",
                                  "actionSet":"action_0",
                                  "priority":2
                              }
                          ]
                      }
                  }
              ]
          },
       {
      	"id": "default_route",
      	"uri": "egress://request.uri",
      	"order": 100,
      	"predicates": [{
      		"args": {
      			"pattern": "/**"
      		},
      		"name": "Path"
      	}]
       }
      ]
  • Configure using Console: Perform the corresponding feature configurations using the CNC Console.

Note:

To disable the routing of SLF queries through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade.

{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

4.13.2 Notifications Requests

NRF allows you to route the notification messages through SCP based on the configuration in the Egress Gateway.

Managing Notifications Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the notifications through SCP using the Egress Gateway configurations either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
        {
          "id": "egress_scp_proxy2",
          "uri": "http://localhost:32068/",
          "order": 20,
          "filters": [
            {
              "args": {
                "errorHandling": [
                  {
                    "priority": 1,
                    "actionSet": "action_1",
                    "errorCriteriaSet": "criteria_1"
                  },
                  {
                    "priority": 2,
                    "actionSet": "action_0",
                    "errorCriteriaSet": "criteria_0"
                  }
                ],
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": false
              },
              "name": "SbiRouting"
            }
          ],
          "metadata": {
            "httpRuriOnly": false,
            "httpsTargetOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "header": "3gpp-Sbi-Callback",
                "regexp": "Nnrf_NFManagement_NFStatusNotify"
              },
              "name": "Header"
            }
          ]
        },
        {
          "id": "default_route",
          "uri": "egress://request.uri",
          "order": 100,
          "predicates": [
            {
              "args": {
                "pattern": "/**"
              },
              "name": "Path"
            }
          ]
        }
      ]
  • Configure using Console: Perform the corresponding feature configurations using the CNC Console.

Note:

To disable the routing of notifications through SCP, remove the SbiRouting filter for SCP from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and add the default_route configuration. The following is a sample default_route configuration:
{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

4.13.3 NRF Forwarding Requests

NRF allows you to route the NRF-NRF forwarding messages through SCP based on the configuration in the Egress Gateway.

Managing NRF Forwarding Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the forwarding through SCP using the Egress Gateway configurations either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
        {
          "id": "egress_scp_proxy1",
          "uri": "http://localhost:32068/",
          "order": 0,
          "filters": [
            {
              "args": {
                "errorHandling": [
                  {
                    "priority": 1,
                    "actionSet": "action_1",
                    "errorCriteriaSet": "criteria_1"
                  },
                  {
                    "priority": 2,
                    "actionSet": "action_0",
                    "errorCriteriaSet": "criteria_0"
                  }
                ],
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": false
              },
              "name": "SbiRouting"
            },
            {
              "args": {
                "name": "OC-NRF-Forwarding"
              },
              "name": "RemoveRequestHeader"
            }
          ],
          "metadata": {
            "httpRuriOnly": false,
            "httpsTargetOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "header": "OC-NRF-Forwarding",
                "regexp": ".*"
              },
              "name": "Header"
            }
          ]
        },
        {
      	"id": "default_route",
      	"uri": "egress://request.uri",
      	"order": 100,
             "filters": [
               {
                 "name": "DefaultRouteRetry"
                }
             ],
      	"predicates": [{
      		"args": {
      			"pattern": "/**"
      		},
      		"name": "Path"
      	}]
      }
      ]
  • Configure using Console: Perform the corresponding feature configurations using the CNC Console.

Note:

To disable the routing of NRF-NRF forwarding messages through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default after a fresh installation or an upgrade. The following is a sample default_route configuration:
{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
       "filters": [
         {
           "name": "DefaultRouteRetry"
          }
       ],
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

4.13.4 Roaming Requests

NRF allows you to route the roaming messages through SCP based on the configuration in the Egress Gateway.

Managing Roaming Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the roaming through SCP using the Egress Gateway configurations either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
        {
          "id": "egress_scp_proxy1",
          "uri": "http://localhost:32068/",
          "order": 0,
          "filters": [
            {
              "args": {
                "errorHandling": [
                  {
                    "priority": 1,
                    "actionSet": "action_1",
                    "errorCriteriaSet": "criteria_1"
                  },
                  {
                    "priority": 2,
                    "actionSet": "action_0",
                    "errorCriteriaSet": "criteria_0"
                  }
                ],
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": false
              },
              "name": "SbiRouting"
            },
            {
              "args": {
                "name": "OC-MCCMNC"
              },
              "name": "RemoveRequestHeader"
            }],
          "metadata": {
            "httpRuriOnly": false,
            "httpsTargetOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "header": "OC-MCCMNC",
                "regexp": "310014"
              },
              "name": "Header"
            }
          ]
        },
        {
      	"id": "default_route",
      	"uri": "egress://request.uri",
      	"order": 100,
             "filters": [
               {
                 "name": "DefaultRouteRetry"
                }
             ],
      	"predicates": [{
      		"args": {
      			"pattern": "/**"
      		},
      		"name": "Path"
      	}]
         }
      ]
  • Configure using Console: Perform the corresponding feature configurations using the CNC Console.

Note:

To disable the routing of roaming messages through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default after a fresh installation or an upgrade. The following is a sample default_route configuration:
{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
       "filters": [
         {
           "name": "DefaultRouteRetry"
          }
       ],
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

4.14 Ingress Gateway Pod Protection

Ingress Gateway handles all the incoming traffic towards NRF. It may undergo overload conditions due to uneven distribution of traffic, network fluctuations leading to traffic bursts, or unexpected high traffic volume.

This feature protects the Ingress Gateway container from getting overloaded due to uneven traffic distribution, traffic bursts, and congestion. It protects the container from entering an overload condition, mitigates the impact if it does, and facilitates the actions necessary for recovery.

The pod protection is performed based on the CPU usage and the pending message count of the container, as explained in Congestion State Parameters. These congestion parameters are measured against the states described in Ingress Gateway Pod States to detect the overload condition.

In a service mesh based deployment, all incoming connections to the pod terminate at the sidecar container, and the sidecar container creates a new connection toward the application container. These incoming connections from the peer are managed by the sidecar and are outside the purview of the application container.

Hence, when the Ingress Gateway container reaches the DoC or Congested level in a service mesh based deployment, the Ingress Gateway container can only stop accepting new connections from the sidecar container. In this state, the Ingress Gateway container also reduces the concurrency of the existing connections between the sidecar container and the Ingress Gateway container. Any new request received over a new connection may get accepted or rejected based on the sidecar connection management.

In a non service mesh based deployment, all incoming connections to the pod terminate at the Ingress Gateway container. Hence, when the Ingress Gateway container reaches the DoC or Congested level, it stops accepting new connections. In this state, the Ingress Gateway container also reduces the concurrency of the existing connections between the peer and the Ingress Gateway container. Any new request received over a new connection results in a request timeout at the peer.

Congestion State Parameters

As part of the Pod Protection feature, every Ingress Gateway microservice pod monitors its congestion state. The following congestion parameters are used to monitor the pod state:

  • CPU

    The congestion state is monitored based on CPU usage to determine the congestion level. The CPU usage is monitored using the Kubernetes cgroup (cpuacct.usage) and it is measured in nanoseconds.

    It is monitored periodically, calculated using the following formula, and then compared against the configured CPU thresholds to determine the congestion state; a sketch of this computation follows the list below. For more information about the parameters and the proposed threshold values, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    Figure 4-14 CPU Measurement



    Where,

    • CurrentCpuUsage is the counter reading at the current periodic cycle.
    • LastCpuUsage is the counter reading at the previous periodic cycle.
    • CurrentTime is the current time snapshot.
    • LastSampleTime is the previous periodic cycle time snapshot.
    • CPUs is the total number of CPUs for a given pod.
  • Pending Message Count: The pending message count is the number of requests received by the Ingress Gateway pod from other NFs for which responses are yet to be sent. This includes all the requests triggered towards the Ingress Gateway pod. The pending message count is monitored periodically and compared against the configured thresholds to determine the congestion state.
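
Based on the variable definitions in the list above, the CPU computation shown in Figure 4-14 is equivalent to the following sketch (a reconstruction from those definitions, assuming the counter readings and time snapshots are both in nanoseconds):

CPU usage = (CurrentCpuUsage - LastCpuUsage) / ((CurrentTime - LastSampleTime) * CPUs)

The result is the fraction of the pod's total CPU capacity consumed during the sampling interval, which is then compared against the configured CPU thresholds.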

Ingress Gateway Pod States

The following states are used to detect overload conditions, protect the pod from entering an unstable condition, and take the necessary actions to recover from it.

Figure 4-15 Pod Protection State Transition



Note:

The transition can occur between any two states. The thresholds for these congestion parameters are preconfigured and must not be changed.
  • Congested State: This is the upper bound state where the pod is congested, that is, one or more congestion parameters are above the configured thresholds for the Congested state. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The pod can transition to the Congested state either from the Normal state or the DoC state. When the pod reaches this state, the following actions are performed:
    • new incoming HTTP2 connection requests are not accepted.
    • the pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is decremented based on the value configured in the decrementBy parameter, and the interval is configured in the decrementSamplingPeriod parameter (see the worked example after this list).
  • Danger of Congestion (DoC) State: This is the intermediate state where the pod is approaching a congested state, that is, one or more congestion parameters are above the configured thresholds for the DoC state. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    When the pod reaches this state, the following actions are performed:
    • any new incoming HTTP2 connection requests are not accepted.
    • if the pod is transitioning from the Normal state to the DoC state, the pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is decremented based on the value configured in the decrementBy parameter, and the interval is configured in the decrementSamplingPeriod parameter.
    • if the pod is transitioning from the Congested state to the DoC state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is incremented based on the value configured in the incrementBy parameter, and the interval is configured in the incrementSamplingPeriod parameter.
  • Normal State: This is the lower bound state where the CPU usage is below the configured thresholds for the DoC and Congested states. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    When the pod reaches this state, the following actions are performed:
    • the pod continues accepting new incoming HTTP2 connection requests.
    • if the pod is transitioning from the Congested or DoC state to the Normal state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is incremented based on the value configured in the incrementBy parameter, and the interval is configured in the incrementSamplingPeriod parameter.
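
As a worked example with hypothetical values: suppose the current advertised stream limit is 100, the maxConcurrentStreamsPerCon target for the Congested state is 10, decrementBy is 10, and decrementSamplingPeriod is 1 second. The pod then advertises SETTINGS_MAX_CONCURRENT_STREAMS values of 90, 80, and so on in successive SETTINGS frames, reaching the target of 10 after 9 seconds. The increment path on recovery works the same way in reverse, using incrementBy and incrementSamplingPeriod.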

To avoid toggling between these states due to traffic patterns, the pod must satisfy the conditions of the target state for a given period before transitioning to that state. For example, if the pod is transitioning from the DoC state to the Congested state, it must satisfy the threshold parameters of the Congested state for a given period before moving to the Congested state.

The following configurations define the period for which the pod must remain in a particular state:
  • stateChangeSampleCount
  • monitoringInterval

The formula for calculating the period is: (stateChangeSampleCount * monitoringInterval)
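
For example, with hypothetical values of stateChangeSampleCount = 5 and monitoringInterval = 200 ms, the pod must satisfy the conditions of the target state for 5 * 200 ms = 1 second before the transition occurs.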

For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

Managing Ingress Gateway Pod Protection

This section explains the procedure to enable and configure the feature.

Enable

You can enable the Pod Protection feature using the REST API or Console:
  • Enable using REST API: Perform the following configurations
    1. Use the API path as {apiRoot}/nf-common-component/v1/igw/podprotection.
    2. Set podprotection.enabled to true.
    3. Set podProtection.congestionControl.enabled to true.
    4. Run the API using the PUT method with the proposed values given in the REST API (a minimal payload sketch follows this procedure).

      For more information about the configuration using REST API, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

      Note:

      The proposed values are engineering configured values and must not be changed.
  • Enable using Console: Perform the following configurations in Pod Protection:
    1. Switch on Enabled to enable the feature.
    2. Switch on Enabled under the Congestion Control section.
    3. Click Save to save the changes.
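
As a minimal sketch, the payload for the podprotection API with only the enable flags from the steps above might look as follows (the full resource also contains the congestion threshold attributes documented in the REST Specification Guide, which are omitted here):

{
  "enabled": true,
  "congestionControl": {
    "enabled": true
  }
}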

Configure

You can configure the feature using the Ingress Gateway configurations either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Ingress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using Console: Perform the feature configurations as described in Pod Protection.

Observe

Metrics

The following metrics are added in the NRF Gateways Metrics section:
  • oc_ingressgateway_pod_congestion_state
  • oc_ingressgateway_pod_resource_stress
  • oc_ingressgateway_pod_resource_state
  • oc_ingressgateway_incoming_pod_connections_rejected_total

KPIs

The feature-specific KPIs are added in the Ingress Gateway Pod Protection section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.15 Network Slice Specific Metrics

The 5G Network slices are identified by Network Slice Instances (NSIs) and Single Network Slice Selection Assistance Information (SNSSAI).

A Network Function (NF) can have multiple NSIs and SNSSAIs listed under them to support multiple slices. NRF supports measuring the number of requests and responses for various service operations per network slice. This measurement is performed using the metrics mentioned in Observe.

Observe

Metrics

The following metrics measure the requests and responses:
  • ocnrf_nfDiscover_rx_requests_perSnssai_total
  • ocnrf_nfDiscover_tx_success_response_perSnssai_total
  • ocnrf_nfDiscover_tx_empty_response_perSnssai_total
  • ocnrf_nfDiscover_tx_failure_response_perSnssai_total
  • ocnrf_nfDiscover_rx_requests_perNsi_total
  • ocnrf_nfDiscover_tx_success_response_perNsi_total
  • ocnrf_nfDiscover_tx_empty_response_perNsi_total
  • ocnrf_nfDiscover_tx_failure_response_perNsi_total
  • ocnrf_nfDiscover_tx_forwarded_requests_perSnssai_total
  • ocnrf_nfDiscover_rx_success_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_rx_empty_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_rx_failure_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_tx_forwarded_requests_perNsi_total
  • ocnrf_nfDiscover_rx_success_forwarded_responses_perNsi_total
  • ocnrf_nfDiscover_rx_empty_forwarded_responses_perNsi_total
  • ocnrf_nfDiscover_rx_failure_forwarded_responses_perNsi_total
  • ocnrf_nfRegister_requests_perSnssai_total
  • ocnrf_nfRegister_success_responses_perSnssai_total
  • ocnrf_nfRegister_failure_responses_perSnssai_total
  • ocnrf_nfRegister_requests_perNsi_total
  • ocnrf_nfRegister_success_responses_perNsi_total
  • ocnrf_nfRegister_failure_responses_perNsi_total
  • ocnrf_nfUpdate_requests_perSnssai_total
  • ocnrf_nfUpdate_success_responses_perSnssai_total
  • ocnrf_nfUpdate_failure_responses_perSnssai_total
  • ocnrf_nfUpdate_requests_perNsi_total
  • ocnrf_nfUpdate_success_responses_perNsi_total
  • ocnrf_nfUpdate_failure_responses_perNsi_total
  • ocnrf_nfDeregister_requests_perSnssai_total
  • ocnrf_nfDeregister_success_responses_perSnssai_total
  • ocnrf_nfDeregister_failure_responses_perSnssai_total
  • ocnrf_nfDeregister_requests_perNsi_total
  • ocnrf_nfDeregister_success_responses_perNsi_total
  • ocnrf_nfDeregister_failure_responses_perNsi_total
  • ocnrf_nfHeartBeat_requests_perSnssai_total
  • ocnrf_nfHeartBeat_success_responses_perSnssai_total
  • ocnrf_nfHeartBeat_failure_responses_perSnssai_total
  • ocnrf_nfHeartBeat_requests_perNsi_total
  • ocnrf_nfHeartBeat_success_responses_perNsi_total
  • ocnrf_nfHeartBeat_failure_responses_perNsi_total

For more information about the metrics, see the Network Slice Specific Metrics section.

KPIs

The feature specific KPIs are added in the Network Slice Specific KPIs section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.16 CCA Header Validation in NRF for Access Token Service Operation

Client Credentials Assertion (CCA) is a token signed by the Consumer NF. It enables NRF to authenticate the Consumer NF that includes the signed token in the Access Token service request. The CCA header contains the Consumer NF's NfInstanceId, which NRF checks against the certificate. The CCA also includes a timestamp that serves as the basis for restricting its lifetime.

The Consumer NF sends the 3gpp-Sbi-Client-Credentials header containing the CCA in the HTTP request, and NRF performs the CCA validation. The CCA header validation is a JWT-based validation at NRF, where the Consumer NF sends a JWT token as part of the header in the Access Token request. The JWT token has an X5c certificate and other NF-specific information that is validated against the configuration values defined in NRF. The signature of the JWT token is validated against the CA root certificate configured at NRF.

Figure 4-16 Client Credentials JWT Token



Table 4-13 JOSE header

  • typ (Data type: String, Presence: M, Cardinality: 1): The "typ" (type) Header Parameter is used to declare the media type of the JWS. Default value: JWT.
  • alg (Data type: String, Presence: M, Cardinality: 1): The "alg" (algorithm) Header Parameter is used to secure the JWS. Supported algorithm types: RSA/ECDSA.
  • X5c (Data type: Array, Presence: M, Cardinality: 1): The "X5c" (X.509 certificate) Header Parameter contains the X.509 public key certificate corresponding to the key used to digitally sign the JWT.

Table 4-14 JWT claims

  • sub (Data type: NfInstanceId, Presence: M, Cardinality: 1): This IE contains the NF instance ID of the NF service consumer, corresponding to the standard "Subject" claim.
  • iat (Data type: integer, Presence: M, Cardinality: 1): This IE indicates the time at which the JWT was issued, corresponding to the standard "Issued At" claim. This claim may be used to determine the age of the JWT.
  • exp (Data type: integer, Presence: M, Cardinality: 1): This IE contains the expiration time after which the client credentials assertion is considered to be expired, corresponding to the standard "Expiration Time" claim.
  • aud (Data type: array(NFType), Presence: M, Cardinality: 1..N): This IE contains the NF type of NRF for which the claim is applicable, corresponding to the standard "Audience" claim.

The digitally signed client credentials assertion is then encoded as a string using JWS compact serialization.
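
For illustration, a decoded client credentials assertion might look as follows; all values are hypothetical, the x5c entry is truncated, and the fields correspond to the attributes in Table 4-13 and Table 4-14:

JOSE header:
{
  "typ": "JWT",
  "alg": "RS256",
  "x5c": ["MIICxjCC..."]
}

JWT claims:
{
  "sub": "6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
  "iat": 1700000000,
  "exp": 1700000300,
  "aud": ["NRF"]
}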

If the validation is successful, the Access Token request is processed further.

If the CCA header validation fails, NRF rejects the Access Token request with "403 Forbidden" and the cause attribute set to "CCA_VERIFICATION_FAILURE".

Managing CCA Header Validation in NRF for Access Token Service Operation

  1. Create Ingress Gateway Secret. For more information about configuring secrets, see the "Configuring Secret to Enable CCA Header" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  2. Configure the secret containing the CA root bundle and enable CCA header feature for AccessToken either using REST or Console:
    • Using REST API: Perform the feature configurations for CCA Header Validation in Ingress Gateway as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Using Console: Perform the feature configurations for CCA Header Validation in Ingress Gateway as described in CCA Header.
  3. Enable the feature using Helm, as follows:
    1. Customize the ocnrf_custom_values_25.1.200.yaml file.
    2. To enable this feature in Helm, set metadata.ccaHeaderValidation.enabled to true for the accesstoken_mapping ID under routesConfig of the Ingress Gateway microservice.
      
      metadata:
        ccaHeaderValidation:
          enabled: true

      Note:

      This feature can be enabled only by using Helm.
    3. Save the file.
    4. If you are enabling this feature after NRF deployment, run helm upgrade. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

Metrics

The following are the CCA Header Validation in NRF for Access Token Service Operation feature-specific metrics in the NRF Gateways Metrics section:
  • oc_ingressgateway_cca_header_request_total
  • oc_ingressgateway_cca_header_response_total
  • oc_ingressgateway_cca_certificate_info

KPIs

There are no KPIs for this feature.

Alerts

For the CCA Header Validation in NRF for Access Token Service Operation feature-specific alerts, see the Feature Specific Alerts section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.17 Monitoring the Availability of SCP Using SCP Health APIs

NRF determines the availability and reachability status of all SCPs irrespective of the configuration types.

This feature is an enhancement to the existing SBI routing functionality. The Egress Gateway microservice interacts with SCP on their health API endpoints using the HTTP2 OPTIONS method. It monitors the health of configured SCP peers to ensure that the traffic is routed directly to the healthy peers. This enhancement avoids routing or rerouting towards unhealthy peers, thereby minimizing latency.

The Egress Gateway microservice maintains the health status of all available and unavailable SCPs. It keeps the health status of SCPs up to date through periodic monitoring and uses this data to route egress traffic to the most preferred healthy SCP.

Figure 4-17 SCP Selection Mechanism



Once the peerconfiguration, peersetconfiguration, routesconfiguration, and peermonitoringconfiguration parameters are configured at the Egress Gateway microservice, all SCPs (after Alternate Route Service (ARS) resolution, if any vFQDN is configured) are initially marked as healthy. The peers attached to the associated peerset are scheduled for health API checks, and their health status is updated continuously.
During installation, the peermonitoringconfiguration parameter is set to false by default. This feature is an add-on to the existing SBI Routing feature and is activated only if the sbirouteconfig feature is enabled. To enable this feature, perform the following (a hypothetical enabling payload is sketched after this list):
  • configure peerconfiguration with healthApiPath as /{scpApiRoot}/{apiVersion}/status
  • configure peersetconfiguration
  • configure routesconfiguration
  • configure sbiroutingerrorcriteriasets
  • configure sbiroutingerroractionsets
  • enable peermonitoringconfiguration
If the SBI Routing feature is enabled before upgrading, the healthApiPath in peerconfiguration should be attached manually to the existing configured peers. If the operator tries to enable peermonitoringconfiguration and the targeted peers do not have the healthApiPath, an appropriate error response is sent.
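
As a hypothetical sketch, enabling peer monitoring through the peermonitoringconfiguration API might look as follows (the host and port are placeholders, and any attributes beyond enabled are omitted; see the REST Specification Guide for the exact schema):

curl -v -X PUT "http://<host>:<port>/nrf/nf-common-component/v1/egw/peermonitoringconfiguration" -H "Content-Type: application/json" -d '{"enabled": true}'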

Managing Monitoring the Availability of SCP Using SCP Health APIs

This section explains the procedure to enable and configure the feature.

Configure

You can configure SCP availability monitoring using the REST API or the CNC Console:
  • Configure using REST API: Perform the following feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration to use the above peer set.
    • Enable the feature using the {apiRoot}/nrf/nf-common-component/v1/egw/peermonitoringconfiguration API.

    Note:

    Health Monitoring of the peer will start only after the feature is enabled and the corresponding peerset is used in sbirouteconfig.
  • Configure using Console: Perform the corresponding feature configurations using the CNC Console.

    Note:

    Health Monitoring of the peer will start only after the feature is enabled and the corresponding peerset is used in sbirouteconfig.

Observe

Metrics
The following metrics are added in the NRF Gateways Metrics section:
  • oc_egressgateway_peer_health_status
  • oc_egressgateway_peer_health_ping_request_total
  • oc_egressgateway_peer_health_ping_response_total
  • oc_egressgateway_peer_health_status_transitions_total
  • oc_egressgateway_peer_count
  • oc_egressgateway_peer_available_count
Alerts
The feature-specific alerts are added in the NRF Alerts section.
KPIs

The feature-specific KPIs are added in the SCP Health Status section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.18 Controlled Shutdown of NRF

NRF supports the controlled shutdown feature to isolate NRF from the current network at a particular site. This isolation helps to perform any maintenance activities or recovery procedures as required, without uninstalling NRF at that site. During this time, the operations of the Ingress Gateway, Egress Gateway, and NrfAuditor microservices are paused. These services read the operational state from the database periodically. The operational state of NRF can be changed at any time using the REST API or CNC Console.

The two operational states defined for NRF are as follows:

  • NORMAL
  • COMPLETE_SHUTDOWN

Note:

  • Ensure that the database is up and that the NRF services can connect and communicate with the database before changing the operational state.
  • In either state, if the database goes down, the back-end pods go to the NOT_READY state, but the Ingress Gateway microservice continues to be in the last known operational state. When the Controlled Shutdown feature is enabled and the database is not available, all incoming messages are rejected with a Service Unavailable response if the operational state is "NORMAL", or with a configurable error code if the operational state is "COMPLETE_SHUTDOWN". However, operators cannot view the operational state in CNC Console due to the database unavailability.

If the controlled shutdown operational state is NORMAL, then NRF processes the message as normal.

If the controlled shutdown operational state is COMPLETE_SHUTDOWN, then NRF rejects all incoming requests with a configurable error code.

Figure 4-18 Operational State changes



The following behavior changes occur when the operational state changes:

From NORMAL to COMPLETE_SHUTDOWN

  • The Ingress Gateway microservice rejects, with a configurable error code, all new requests towards the services that have been configured for controlled shutdown. In this case, all inflight transactions are handled gracefully. The controlled shutdown applies to the following services:
    • nfregistration
    • nfdiscovery
    • nfsubscription
    • nfaccesstoken
    The error codes are configured using the controlledshutdownerrormapping and errorcodeprofiles APIs (a hypothetical profile sketch follows this list).
  • The NrfAuditor microservice pauses all its audit procedures; hence, no notifications are generated. The OcnrfAuditOperationsPaused alert is raised to indicate that the audit is paused.
  • The Egress Gateway microservice handles all the inflight requests gracefully. Since the Ingress Gateway and NrfAuditor microservices are in a COMPLETE_SHUTDOWN operational state, there would be no requests from the backend services.

    Note:

    No specific configuration for controlled shutdown needs to be applied on the routes for the Egress Gateway microservice.
  • The NrfConfiguration microservice continues to process any configuration requests in the COMPLETE_SHUTDOWN state.
  • The OcnrfOperationalStateCompleteShutdown alert is raised to indicate that the operational state of NRF is COMPLETE_SHUTDOWN.
  • The operational state change is recorded and can be viewed using the controlledShutdownOptions REST API. Additionally, the history of operational state changes can be viewed using the operationalStateHistory REST API.
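
As a hypothetical sketch, an error code profile referenced by controlledshutdownerrormapping might look as follows (the attribute names and values are assumptions based on the common errorcodeprofiles pattern; see the REST Specification Guide for the exact schema):

{
  "name": "error_complete_shutdown",
  "errorCode": 503,
  "errorCause": "NRF_COMPLETE_SHUTDOWN",
  "errorTitle": "Service Unavailable",
  "errorDescription": "NRF is in the COMPLETE_SHUTDOWN operational state"
}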

From COMPLETE_SHUTDOWN to NORMAL

  • The Ingress Gateway microservice resumes processing all incoming requests.
  • The Egress Gateway microservice resumes processing all outgoing requests.
  • The NrfAuditor pod waits for a preconfigured waiting period before resuming its audit procedures, to allow the NFs to move back to this NRF after it has changed to the NORMAL operational state. Once the waiting period has expired, the audit procedures resume and the OcnrfAuditOperationsPaused alert is cleared. The waiting period is defined as follows (a worked example follows this list):
    (defaultHbTimer * nfHeartBeatMissAllowed) + maxReplicationLatency + (replicationLatencyThreshold * latencyMultiplier)
    Where,
    • defaultHbTimer - defaultHbTimer configured in nfManagementOptions.nfHeartBeatTimers where the nfType is "ALL_NF_TYPE". For more information about the parameter, see "NF Management Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • nfHeartBeatMissAllowed - nfHeartBeatMissAllowed configured in nfManagementOptions.nfHeartBeatTimers where the nfType is "ALL_NF_TYPE". For more information about the parameter, see "NF Management Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • maxReplicationLatency - The maximum replication latency (in seconds) detected across all the sites. This parameter supports a dynamic value.
    • replicationLatencyThreshold - The replicationLatencyThreshold configured in geoRedundancyOptions. For more information about the parameter, see "Georedundancy Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • latencyMultiplier - A preconfigured fixed multiplier, set to 3.
  • OcnrfOperationalStateCompleteShutdown alert is cleared to indicate that the operational state of NRF is NORMAL.
  • The operational state change is recorded and can be viewed using the controlledShutdownOptions REST API. Additionally, the history of operational state changes can be viewed using the operationalStateHistory REST API.
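
As a worked example with hypothetical values: if defaultHbTimer is 60 seconds, nfHeartBeatMissAllowed is 3, maxReplicationLatency is 5 seconds, and replicationLatencyThreshold is 5 seconds, the waiting period is (60 * 3) + 5 + (5 * 3) = 200 seconds.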

NRF Behavior Post Fault Recovery

During the database restore procedure, the NFProfile and subscription data is restored along with the configuration data, and it may not reflect the latest state at that moment. In this state, the NrfAuditor microservice may act upon NFProfiles or NF subscriptions that are not yet up to date, while the NFs are still in the process of moving back to the NRF that is now available. If the audit procedure runs in this state, NRF suspends those NFs and sends out notifications to the consumer NFs. The same applies to NfSubscriptions, where the subscriptions may get deleted due to an older lastUpdatedTimestamp in the backup data.

To avoid this problem, the NrfAuditor microservice waits for a waiting period before resuming the auditing of NFProfiles and subscriptions when it moves from the NotReady state to the Ready state. An alert ("OcnrfAuditOperationsPaused") is raised to indicate that the audit processes have paused; see the NRF Alerts section. Once the waiting period has elapsed, the audit processes resume and the alert is cleared.

For information about the computation of the waiting period, see Controlled Shutdown of NRF.

Note:

The NrfAuditor pod goes to the NotReady state whenever it loses connectivity with the database. During temporary connectivity fluctuations, the NrfAuditor pod may transition between the Ready and NotReady states, causing the waiting (cool off) period to kick in for every NotReady to Ready transition. To avoid such short and frequent transitions triggering it, the NrfAuditor microservice applies the waiting period only when the pod has been in the NotReady state for more than 5 seconds.

Managing Controlled Shutdown of NRF

Prerequisites
The following parameters are required in the ocnrf_custom_values_25.1.200.yaml file to configure the feature.
  • global.enableControlledShutdown is the flag to enable the feature. The default value is true. To disable the feature, set the value of the flag to false.

    Note that the following configuration is required for the feature to work.

  • The controlled shutdown filter under routesConfig.

    Note:

    The route configuration must not be modified.
For more information about the global.enableControlledShutdown attribute, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure

You can configure the Controlled Shutdown of NRF feature using the REST API or Console:

  • REST API: Perform the following feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide (a request sketch follows this list).
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeprofiles
    • {apiRoot}/nrf/nf-common-component/v1/igw/controlledshutdownerrormapping
    • {apiRoot}/nrf-configuration/v1/controlledShutdownOptions
    • {apiRoot}/nrf-configuration/v1/operationalStateHistory
  • CNC Console: Perform the corresponding configurations using the CNC Console.
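
As a sketch, changing the operational state through the controlledShutdownOptions API might look as follows (the host, port, and payload shape are assumptions; see the REST Specification Guide for the exact schema):

curl -v -X PUT "http://<host>:<port>/nrf-configuration/v1/controlledShutdownOptions" -H "Content-Type: application/json" -d '{"operationalState": "COMPLETE_SHUTDOWN"}'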

Observe

Metrics
The following are the controlled shutdown feature-specific metrics:
  • ocnrf_operational_state
  • ocnrf_audit_status

For more information about the metrics for the controlled shutdown of NRF, see the NRF Metrics section.

Alerts
For the controlled shutdown feature-specific alerts, see the Controlled Shutdown of NRF Feature section.

KPIs
The following are the controlled shutdown feature-specific KPIs:
  • Operational State {{ pod }}
  • NRF Audit status

For more information about the KPIs in NRF, see the Controlled Shutdown of NRF section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.19 User-Agent Header for Outgoing Requests

NRF supports the addition of the User-Agent header for outgoing NFStatusNotify and SLF query messages. NRF adds the User-Agent header with the configured value to these outgoing messages.

In addition, NRF propagates the User-Agent header received in all forwarding and roaming requests.
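
For illustration, assuming the SBI convention of "<NF type>-<NF instance ID> <FQDN>" for the header value, the configured header might appear in an outgoing NFStatusNotify request as follows (the instance ID and FQDN are hypothetical):

User-Agent: NRF-6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c nrf1.oracle.com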

Managing User-Agent Header for Outgoing Requests

Configure

You can configure the User-Agent header value using the REST API or CNC Console.

  • Configure using REST API: Provide the value for ocnrfUserAgentHeader in generalOptions configuration API. For more information about this API, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Provide the value for OCNRF User-Agent Header on the General Options Page. For more information about the field, see General Options page.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.20 Support for Kubernetes Resource

4.20.1 Node Selector

The Node Selector feature allows the Kubernetes scheduler to determine the type of nodes in which the NRF pods are scheduled, depending on the predefined node labels or constraints.

Node selector is a basic form of cluster node selection constraint. It allows you to define the node labels (constraints) in the form of key-value pairs. When the nodeSelector feature is used, Kubernetes assigns the pods to only the nodes that match with the node labels you specify.

To see all the default labels assigned to a Kubernetes node, run the following command:

kubectl describe node <node_name>

Where,

<node_name> is the name of Kubernetes node.

For example:

kubectl describe node pollux-k8s-node-1

Sample output:
Name:               pollux-k8s-node-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    kubernetes.io/hostname=pollux-k8s-node-1
                    kubernetes.io/os=linux
                    topology.kubernetes.io/region=RegionOne
                    topology.kubernetes.io/zone=nova

Managing NRF Node Selector Feature

Enable

You can enable the Node Selector feature using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set global.nodeSelection to ENABLED to enable the node selection.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

You can configure the node selector parameters using Helm.
  • Configure the following parameters in the global, NRF microservices and Gateways sections:
    • nodeSelection
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelector.nodeKey
    • nodeSelector.nodeValue
  • Configure the following parameters in the appinfo section:
    • nodeSelection
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelector
  • Configure the following parameters in the perfinfo section:
    • nodeSelectorEnabled
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelectorKey
    • nodeSelectorValue

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
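A minimal sketch of the corresponding fragment in the custom values file, assuming a hypothetical node label nodetype=nrf and API version v1; the actual values depend on the labels present on your cluster nodes:

    global:
      nodeSelection: ENABLED
      helmBasedConfigurationNodeSelectorApiVersion: v1
      nodeSelector:
        nodeKey: nodetype
        nodeValue: nrf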

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.20.2 Kubernetes Probes

One of the key features that Kubernetes provides is high availability, which is achieved through its smallest deployable unit, the Pod. The health checks of these Pods are performed by Kubernetes Probes.

There are three types of probes:
  • Liveness Probe: Indicates if the container is operating. If the container is operating, no action is taken. If not, the kubelet kills and restarts the container.
  • Readiness Probe: Indicates whether the application running in the container is ready to accept requests. If the application is ready, services matching the pod are allowed to send traffic to it. If not, the endpoints controller removes the pod from all matching Kubernetes Services.
  • Startup Probe: Indicates whether the application running in the container has started. If the application is started, other probes start functioning. If not, the kubelet kills and restarts the container.
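For reference, a typical probe definition in a Kubernetes pod specification has the following shape; the endpoint path, port, and timing values here are illustrative, and the parameter names exposed by the NRF Helm chart are listed in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide:

    livenessProbe:
      httpGet:
        path: /health/liveness
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3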

Managing Kubernetes Probes Feature

Configure

You can configure the Kubernetes Probes feature using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Configure the probes for each microservice. For more information about configuration parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are configuring these parameters after NRF deployment, upgrade NRF. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.20.3 Pod Disruption Budget

PodDisruptionBudget (PDB) is a Kubernetes resource that allows you to achieve high availability of scalable application services when the cluster administrators perform voluntary disruptions to manage the cluster nodes.

PDB restricts the number of pods that are down simultaneously from voluntary disruptions. Defining PDB is helpful to keep the services running undisrupted when a pod is deleted accidentally or deliberately. PDB can be defined for highly available and scalable NRF services.

It allows safe eviction of pods when a Kubernetes node is drained to perform maintenance on the node. It uses the maxPdbUnavailable parameter specified in the Helm chart to determine the maximum number of pods that can be unavailable during a voluntary disruption.
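A minimal sketch of the PodDisruptionBudget object that such a configuration produces, assuming maxPdbUnavailable is set to 1 and using a hypothetical nfregistration deployment label:

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: ocnrf-nfregistration-pdb
    spec:
      maxUnavailable: 1
      selector:
        matchLabels:
          app: ocnrf-nfregistration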

Managing Pod Disruption Budget

Enable

This feature is enabled automatically if you are deploying NRF with Release 16.

Configure

You can configure this feature using Helm. For information about configuring PDB, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

There are no specific metrics or alerts required for the PDB functionality.

4.20.4 Network Policies

NetworkPolicies are an application-centric construct that allows you to specify how a pod is allowed to communicate with various network entities. They create pod-level rules that control communication between the cluster's pods and services and determine which pods and services can access one another inside the cluster.

Previously, NRF had the privilege to communicate with other namespaces, and pods of one namespace could communicate with others without any restriction. Now, namespace-level isolation is provided for the NRF pods, and a limited scope of communication is allowed between NRF and pods outside the cluster. The network policies enforce access restrictions for all the applicable data flows except the communication from a Kubernetes node to a pod for invoking container probes.

Managing Support for Network Policies

Enable

To use this feature, network policies need to be applied to the namespace in which NRF is deployed.
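For illustration, a namespace-level isolation policy generally takes the following form; the policy name and namespace are placeholders, and the actual policies delivered with NRF are described in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace
      namespace: ocnrf
    spec:
      podSelector: {}
      ingress:
        - from:
            - podSelector: {}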

Configure

You can configure this feature using Helm. For information about configuring network policies, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

There are no specific metrics or alerts required for the Network Policy feature.

4.20.5 Tolerations

Taints and tolerations are Kubernetes mechanisms that ensure pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When one or more taints are applied to a node, the node does not accept any pods that do not tolerate the taints.

When a taint is assigned by Kubernetes configurations, it repels all the pods except those that have a matching toleration for that taint. Tolerations are applied to pods to allow the pods to schedule onto nodes with matching taints. Tolerations allow scheduling but do not guarantee it. For more information, see the Kubernetes documentation.

Managing Kubernetes Tolerations

Enable

To enable tolerations, perform the following:
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set global.tolerationsSetting to ENABLED to enable toleration.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

You can configure the tolerations parameters using Helm.
  1. Configure global.tolerations as per the requirement based on the taints on the node. For more information about the parameters, see the Tolerations table.
  2. By default global.tolerations is applied to all the microservices. In case you want to modify the tolerations for a specific microservice, configure tolerationsSetting under the specific microservice.
For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Table 4-15 Tolerations

Parameter Description
key It is the name of the key.
value It is a value for the configured key.
effect

Indicates the taint effect applied to the node.

The effect is defined by one of the following:
  • NoSchedule:

    • New pods that do not match the taint are not scheduled onto that node.

    • Existing pods on the node remain.

  • PreferNoSchedule:

    • New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to.

    • Existing pods on the node remain.

  • NoExecute:

    • New pods that do not match the taint cannot be scheduled onto that node.

    • Existing pods on the node that do not have a matching toleration are removed.

operator

Indicates the criteria to match the tolerations with the taint configuration.

The value can be:

  • Equal: The key, value, and effect parameters must match with the taint configuration.
  • Exists: The key and effect parameters must match the taint configuration. The value parameter can be blank.
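A minimal sketch of a tolerations entry in the custom values file, assuming a node tainted with dedicated=nrf:NoSchedule; the key and value are placeholders for your own taint configuration:

    global:
      tolerationsSetting: ENABLED
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "nrf"
          effect: "NoSchedule"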

Observe

There are no specific metrics or alerts required for this feature.

4.20.6 Dual Stack

According to RFC 4213, dual stack supports both versions of the Internet Protocol (IPv4 and IPv6).

Dual stack provides:
  • a coexistence strategy that allows hosts to reach IPv4 and IPv6 services simultaneously.
  • IPv4 and IPv6 allocation to the Kubernetes clusters during cluster creation. This allocation applies to all the Kubernetes resources unless explicitly specified during the cluster creation.

Using the dual stack mechanism, NRF communicates within services or deployments in a Kubernetes cluster using IPv4, IPv6, or both simultaneously, depending on the configured deployment mode. Previously, NRF supported only dual stack with IPv4 preferred deployment on a platform configured as dual stack with IPv4 preferred.

From 25.1.100, NRF also supports dual stack with IPv6 preferred deployment on a platform configured as dual stack with IPv4 preferred. This can be achieved by configuring the ipFamilyPolicy and ipFamilies based on the deployment mode for all the NRF microservices. For more information about the attributes, see Table 4-17.

In a 5G network, for a dual stack based deployment, NRF microservices can be grouped as follows:

  • Edge Microservices: The edge microservice is the Ingress Gateway microservice, which handles the incoming requests from other NFs. NRF supports dual stack for the Ingress Gateway microservice. The Ingress Gateway microservice can accept any IP address type (IPv4 or IPv6) from other NFs.

Note:

Ingress Gateway microservice establishes connection with NRF backend microservices on a single stack with IPv6 preferred.
  • Backend Microservices: All other microservices except the Ingress Gateway microservice are considered backend microservices. NRF supports single stack IPv4 or IPv6 for backend microservices. All the backend microservices interact with each other using single stack.
  • Egress Gateway Microservice: The Egress Gateway microservice provides options for sending outgoing requests using IPv4 or IPv6. The preferred routing mode can be configured using the global.egressRoutingMode Helm parameter. For more information about this parameter, see the "Customizing NRF" section in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Note:

The IP type defined in Dynamic SLF feature (preferredRoutingParameter.routingParameter) should match with the IP type configured in Egress Gateway microservice dual stack preference.

IP Address Allocation to Pods

IP address allocation to pods depends on the IP address preference which is set in the Kubernetes cluster. Pods do not have the privilege to choose an IP address. For example, if the Kubernetes cluster has IPv4 preferred configuration, both IPv4 and IPv6 are allocated to the pod, but the primary address is IPv4.

Example of a pod with primary address as IPv4 deployed in an IPv4 preferred infrastructure:

Status: Running
IP: 10.xxx.xxx.xxx
IPs:
 IP: 10.xxx.xxx.xxx
 IP: fd00::1:1xxx:9xxx:bxxx:fxxx

Similarly, if the Kubernetes cluster has IPv6 preferred configuration, both IPv4 and IPv6 addresses are allocated to the pod, but the primary IP address is IPv6.

Example of a pod with primary address as IPv6 deployed in an IPv6 preferred infrastructure:


Status: Running
IP: fd00::1:1xxx:9xxx:bxxx:fxxx
IPs:
 IP: fd00::1:1xxx:9xxx:bxxx:fxxx
 IP: 10.xxx.xxx.xxx

IP Address Allocation to Services

IP address allocation to all the NRF services depends on the NRF deployment mode, as defined in the Supported Deployment Mode table (Table 4-16). When the Helm parameters are configured appropriately, NRF automatically configures the IP Family Policy and IP Families attributes for the NRF services.

Table 4-16 Supported Deployment Mode

The Edge Deployment Mode, Backend Deployment Mode, and Egress Routing Mode columns are the required configurations in Helm.

Supported Mode | Edge Deployment Mode (global.edgeDeploymentMode) | Backend Deployment Mode (global.backEndDeploymentMode) | Egress Routing Mode (global.egressRoutingMode)
IPv4 Only | IPv4 | IPv4 | IPv4
IPv6 Only | IPv6 | IPv6 | IPv6
IPv4 over IPv6 | IPv4_IPv6 | IPv6 | any
IPv6 over IPv4 | IPv6_IPv4 | IPv6 | any
ClusterPreferred | ClusterPreferred | ClusterPreferred | None

For more information about the parameters, see the "Customizing NRF" section in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

You can customize the IP address allocation to services based on the above Helm parameters. Services route the traffic to the destination endpoints based on this configuration. If the Helm parameter is set to IPv4, then IPv4 is allocated to services, and services use IPv4 pod IPs to send the traffic to endpoints.

The IP Family Policy attribute, IP Families attribute, Pod IP, and Service IP indicate the Ingress Gateway microservice, whereas the Endpoints belong to the backend microservices.

IP address allocation for the edge microservice (Ingress Gateway Microservice) is configured using the global.edgeDeploymentMode Helm parameter.

IP address allocation for all the backend microservices is performed based on the value configured in the global.backEndDeploymentMode Helm parameter.

IP selection control and connections for the Egress Gateway microservice or the outgoing traffic is handled by the global.egressRoutingMode Helm parameter.

For more information about the above parameters, see the "Customizing NRF" section in the Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

The following table describes how IP address allocation, ipFamilyPolicy, and ipFamilies vary based on the above parameter configuration for services:

Table 4-17 IP Address Allocation

Infrastructure Preference | Backend Deployment Mode Helm Parameter | ipFamilyPolicy Attribute | ipFamilies Attribute | Pod IP | Service IP | Endpoints
IPv4 Preferred | IPv4 | SingleStack | IPv4 | IPv4, IPv6 | IPv4 | IPv4
IPv6 Preferred | IPv4 | SingleStack | IPv4 | IPv6, IPv4 | IPv4 | IPv4
IPv4 Preferred | IPv6 | SingleStack | IPv6 | IPv4, IPv6 | IPv6 | IPv6
IPv6 Preferred | IPv6 | SingleStack | IPv6 | IPv6, IPv4 | IPv6 | IPv6
IPv4 Preferred | IPv4_IPv6 (IPv4 Preferred) | RequireDualStack | IPv4 Preferred | IPv4, IPv6 | IPv4, IPv6 | IPv4
IPv6 Preferred | IPv4_IPv6 (IPv4 Preferred) | RequireDualStack | IPv4 Preferred | IPv6, IPv4 | IPv4, IPv6 | IPv4
IPv4 Preferred | IPv6_IPv4 (IPv6 Preferred) | RequireDualStack | IPv6 Preferred | IPv4, IPv6 | IPv6, IPv4 | IPv6
IPv6 Preferred | IPv6_IPv4 (IPv6 Preferred) | RequireDualStack | IPv6 Preferred | IPv6, IPv4 | IPv6, IPv4 | IPv6

Note:

By default, NRF sets global.edgeDeploymentMode and global.backEndDeploymentMode to ClusterPreferred. This helps NRF microservices fall back to single stack support.

Upgrade Impact

In a dual stack environment, all the NRF backend services are configured as single stack. Due to a Kubernetes limitation, NRF upgrade is not supported when there is a mismatch between the ipFamilies configured on the NRF backend microservices and the ipFamilies implied by the deployment mode configuration. In this case, a fresh installation is recommended to change the preferred ipFamilies. However, this is not applicable to the edge microservice (Ingress Gateway microservice), which can extend its list of ipFamilies by setting it as dual stack according to the cluster preference.

Managing Dual Stack

Enable

The dual stack support is a core functionality of NRF. You do not need to enable or disable this feature.

Configure

This section lists the configuration details for this feature.

Helm

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set global.edgeDeploymentMode to ClusterPreferred to indicate the edge deployment mode.
  3. Set global.backEndDeploymentMode to ClusterPreferred to indicate the backend deployment mode.
  4. Set global.egressRoutingMode to None to control the IP address selection and connection for the Egress Gateway microservice. For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Save the file.
  6. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
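A minimal sketch of the corresponding fragment in the custom values file, reflecting the default deployment modes described above:

    global:
      edgeDeploymentMode: ClusterPreferred
      backEndDeploymentMode: ClusterPreferred
      egressRoutingMode: None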

Observe

Metrics

The following metrics are added in the Ingress Gateway Metrics section:

  • oc_ingressgateway_incoming_ip_type
  • oc_ingressgateway_outgoing_ip_type

The following metrics are added in the Egress Gateway Metrics section:

  • oc_egressgateway_incoming_ip_type
  • oc_egressgateway_outgoing_ip_type
  • oc_egressgateway_dualstack_ip_rejected_total

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.21 Pod Protection Support for NRF Subscription Microservice

The NRF subscription microservice is responsible for the following service operations:

  • NfStatusSubscribe
  • NfStatusUnsubscribe
  • NfStatusNotify

Of the above service operations, NfStatusSubscribe and NfStatusUnsubscribe requests are received by the subscription pod through the Ingress Gateway service. The registration (nfregistration) and auditor (nrfauditor) pods trigger notification events towards the subscription pod, which in turn triggers the NfStatusNotify requests to the consumer NFs.

The subscription pods currently run the risk of entering a congested condition. A congested condition is defined as a state where the pod's resource utilization (CPU and pending message count) is higher than the expected thresholds. This can result in higher latency, pod restarts, or traffic loss.

This situation may occur due to:

  • an increase in notification events received from the registration and auditor pods.
  • an increase in NfStatusSubscribe or NfStatusUnsubscribe events.
  • a large number of NfStatusNotify events being triggered.

The Overload Control feature through the Ingress Gateway only ensures that the traffic flowing through the Ingress Gateway to the subscription pods is regulated. For more information about the Overload Control feature, see Overload Control.

However, in the NRF traffic call model, this traffic constitutes only 1% of the traffic to this service while 99% of the traffic is because of the NfStatusNotify events. Hence, the overload feature alone does not prevent the NRF subscription microservice pod from going into the congested state. The Pod Protection feature is introduced as a solution to address the overload situation by protecting the pod independently and continuing to provide service.

Note:

Horizontal POD Autoscaling (HPA) at subscription microservice and pod protection mechanism at subscription microservice are independent features with different trigger conditions. While HPA considers microservice load, pod protection mechanism works only on the POD load. Therefore, their order of triggering cannot be predicted.

NRF monitors the congestion state for the subscription microservice pod. The congestion state is monitored by the following parameters:
  • CPU: The CPU consumption of the Subscription microservice container is used to determine the congestion level. The CPU usage is monitored using the Kubernetes cgroup (cpuacct.usage) and it is measured in nanoseconds.

    It is monitored periodically and calculated using the following formula and then compared against the configured CPU thresholds to determine the congestion state.

    Figure 4-19 CPU Measurement

    CPU usage = (CurrentCpuUsage - LastCpuUsage) / ((CurrentTime - LastSampleTime) * CPUs)

    Where,

    • CurrentCpuUsage is the counter reading at current periodic cycle.
    • LastCpuUsage is the counter reading at previous periodic cycle.
    • CurrentTime is the current time snapshot.
    • LastSampleTime is the previous periodic cycle time snapshot.
    • CPUs is the total number of CPUs for a given pod.
  • Pending Message Count: The pending message count is the number of requests that the subscription pod has received but not yet responded to. This includes all the requests triggered towards the subscription pods from the registration, auditor, and Ingress Gateway microservices. The processing of the requests may include DB operations, forwarding the request to another NRF, and triggering notifications. The pending message count is monitored periodically and then compared against the configured thresholds to determine the congestion state.

Subscription Microservice Load States

The following are the different overload states used to detect overload conditions, protect the pod from entering an overload condition, and take the necessary actions to recover from overload.

Figure 4-20 Pod Protection State Transition



Note:

The transition can occur between any states based on the congestion parameters. These congestion parameters are preconfigured and cannot be changed.
  • Congested State: This is the upper bound state where the pod is congested. This means one or more congestion parameters is above the configured thresholds for the congested state. For more information about the configuration using CNC Console, see Pod Protection Options. The pod can be transitioned to the Congested State either from the Normal State or the DoC state.
    When the pod reaches this state, the following actions are performed:
    • new incoming HTTP2 connection requests are not accepted.
    • the pod gradually decrements the number of concurrent streams by updating SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are decremented based on the value configured in decrementBy parameter. And, the regular interval is configured in the decrementSamplingPeriod parameter.

    Alerts are raised when any of the subscription pods go into this state. For more information about alerts, see NRF Alerts.

  • Danger of Congestion (DoC) State: This is the intermediate state where the pod is approaching a congested state. This means one or more congestion parameters, CPU or Pending Message Count, is above the configured thresholds for the DoC state. For more information about the configuration using CNC Console, see Pod Protection Options.
    • When the pod reaches this state, the following actions are performed:
      • any new incoming HTTP2 connection requests are not accepted.
      • if the pod is transitioning from the Normal State to the DoC state, the pod gradually decrements the number of concurrent streams by updating SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are decremented based on the value configured in decrementBy parameter. And, the regular interval is configured in the decrementSamplingPeriod parameter.
      • if the pod is transitioning from the Congested State to the DoC state, the pod gradually increments the number of concurrent streams by updating SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are incremented based on the value configured in incrementBy parameter. And, the regular interval is configured in the incrementSamplingPeriod parameter.
  • Normal State: This is the lower bound state where all the congestion parameters for the pod are below the configured thresholds for DoC and Congested states. For more information about the configuration using CNC Console, see Pod Protection Options.
    When the pod reaches this state, the following actions are performed:
    • the pod will continue accepting new incoming HTTP2 connection requests.
    • in case the pod is transitioning from the Congested or DoC state to Normal state, the pod gradually increments the number of concurrent streams by updating SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are incremented based on the value configured in incrementBy parameter. And, the regular interval is configured in the incrementSamplingPeriod parameter.

To avoid toggling between these states due to traffic patterns, the pod must remain in a particular state for a given period before transitioning to another state. The following configurations define this period:
  • stateChangeSampleCount
  • monitoringInterval

Formula for calculating the period is: (stateChangeSampleCount * monitoringInterval)

When the subscription pods are in the DoC or Congested state, the client pods (registration, auditor, and Ingress Gateway) are forced to send less traffic towards the subscription pod. The subscription pod receives traffic in the following scenarios:

  • The registration pod sends a notification trigger request to the subscription pods when:
    • an NF registers itself with NRF.
    • an NF updates its registered profiles.
    • an NF deregisters itself from NRF.
  • The auditor pod sends a notification trigger request to the subscription pods when:
    • the auditor marks an NF as SUSPENDED.
    • the auditor deregisters an NF when the NF is in the SUSPENDED state for a configurable amount of time.
  • The subscription pod receives the following requests through the Ingress Gateway service when:
    • the consumer NF creates a new subscription in NRF using NFStatusSubscribe request.
    • the consumer NF updates an existing Subscription in NRF using NFStatusSubscribe request.
    • the consumer NF unsubscribes using NFStatusUnsubscribe request.

    If the registration and auditor pods are unable to trigger a notification request to the subscription pods because they are in the DoC or Congested state, it is possible that the consumer NFs are not notified about the particular change in the NfProfile. However, this is mitigated as the consumer NFs are expected to perform rediscovery of the producer NFs whenever the discovery validity time expires. This ensures that their producer NF data is refreshed, despite the dropped NfStatusNotify message.

    The registration pod does not reject any of the incoming requests, such as NfRegister, NfUpdate, NfDeregister, and NfHeartbeat, even if it is unable to trigger a notification to the subscription pods for the same reason.

    It is expected that the requests through the Ingress Gateway to the congested subscription pods may time out or get rejected.

Managing Subscription Microservice Pod Protection

This section explains the procedure to enable and configure the feature.

Enable

You can enable the Pod Protection feature using the CNC Console or REST API.

  • Enable using REST API: Set podProtectionOptions.enabled to true and podProtectionOptions.congestionControl.enabled to true in Pod Protection Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Enabled to true for Pod Protection and Enabled to true for Congestion Control in Pod Protection Options page. For more information about enabling the feature using CNC Console, see the Pod Protection Options section.
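For illustration, enabling the feature through the REST API could use a payload of the following form; the attribute names follow the options described above, while the exact API path and the full schema are defined in the Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    {
      "podProtectionOptions": {
        "enabled": true,
        "congestionControl": {
          "enabled": true
        }
      }
    }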

Observe

Metrics
Following metrics are added in the Pod Protection Metrics section:
  • ocnrf_pod_congestion_state
  • ocnrf_pod_cpu_congestion_state
  • ocnrf_pod_pending_message_count_congestion_state
  • ocnrf_incoming_connections
  • ocnrf_max_concurrent_streams
  • ocnrf_pod_cpu_usage
  • ocnrf_pod_pending_message_count
  • ocnrf_pod_incoming_connection_rejected_total
  • ocnrf_nfNotification_trigger_total
KPIs

The feature-specific KPIs are added in the Subscription Pod Protection section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.22 Pre and Post Install/Upgrade Validations

This feature applies validation checks on the infrastructure, application, databases, and their related tables before and after the installation, upgrade, or fault recovery.

When NRF is deployed, there can be inconsistencies and unexpected results if the required NRF tables or the site-specific configurations are not available. This can happen due to network issues, system issues, human errors, or race conditions. The feature aims to detect any inconsistency in the system state early and report it. The validations are done during the preinstallation, postinstallation, preupgrade, and postupgrade phases. The infrastructure validations are performed as part of the preinstallation and preupgrade validations.

Note:

Validation of database and tables during rollback procedures is not supported.

The following diagram depicts the flow where the validations occur:

Figure 4-21 Pre and Post Install/Upgrade Validations



Database and its Related Tables Validation

Each microservice is responsible for validating the database and the tables that it creates and manages.

The following tables are validated by each microservice:
  • NrfConfiguration Service validates the following tables in the NRF Application database:
    • NrfSystemOptions
    • NfScreening
    • SiteIdToNrfInstanceIdMapping
    • NrfEventTransactions
  • NfRegistration Service validates the following tables in the NRF Application database:
    • NfInstances
    • NfStatusMonitor
  • NfSubscription service validates the following table in the NRF Application database:
    • NfSubscriptions
  • NrfAuditor service validates the following table in NRF Leader Election database:
    • NrfAuditorLeaderPod
  • NrfConfiguration Service validates the following tables in the NRF Network Database:
    • NfScreening_backup
    • NrfSystemOptions_backup
    • SiteIdToNrfInstanceIdMapping_backup

Note:

  • The common configuration database and its tables are currently not validated in the preinstallation, fault recovery, and postinstallation validation.
  • The schema of the ReleaseConfig table is not currently validated in the preinstallation and postinstallation validation.

Managing Pre and Post Install/Upgrade Validations Feature

Enable

You can enable the feature using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set global.appValidate.preValidateEnabled to true to validate the database tables during preinstallation and preupgrade.
  3. Set global.appValidate.postValidateEnabled to true to validate the database tables during postinstallation and postupgrade.
  4. Set global.appValidate.infraValidateEnabled to true to enable the infrastructure validation. Configure the following appInfo attributes:
    1. Configure replicationUri with the database monitoring service FQDN and port as per the deployment. The URI must be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/status/replication/realtime".
    2. Configure dbTierVersionUri with the database monitoring service FQDN and port to retrieve the cnDBTier version. The URI must be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/version".
    3. Configure alertmanagerUrl with the Alertmanager service FQDN and port to retrieve the system alerts. The URI must be provided as "http://<alert manager service name>:<alert manager service port>/cluster/alertmanager".
  5. Set global.appValidate.faultRecoveryMode to true to install NRF in fault recovery mode.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  6. Save the file.
  7. Install or upgrade NRF. For more information about the procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
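A minimal sketch of the corresponding fragment in the custom values file; the flags reflect the steps above, with faultRecoveryMode shown as false for a standard installation:

    global:
      appValidate:
        preValidateEnabled: true
        postValidateEnabled: true
        infraValidateEnabled: true
        faultRecoveryMode: false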

4.22.1 NRF Infrastructure Validation

The infrastructure validations are performed as part of the preinstallation and preupgrade validations. The validation is done by the preinstallation and preupgrade hooks of the app-info service. The infrastructure validation is disabled by default.

Note:

  • It is highly recommended to enable infrastructure validation and perform the required configuration to detect incompatibilities early. The infrastructure validation will be enabled by default in future releases.
  • Infrastructure validation is not supported during rollback procedures.

This validation is enabled by setting global.appValidate.infraValidateEnabled parameter to true in the ocnrf_custom_values_25.1.200.yaml file. The following checks are performed as part of the infrastructure validation before installing or upgrading NRF:

  • Validate the minimum viable path from the previous NRF version to the current NRF version for an upgrade. This is validated based on the value configured in the global.minViablePath parameter. If this minimum viable path is not supported, NRF upgrade will not proceed.
  • Validate that the installed Kubernetes version is compatible with the target NRF version being installed or upgraded. This is validated based on the value configured in the global.minKubernetesVersion parameter. If the Kubernetes version is not compatible, NRF installation or upgrade will not proceed.
  • Validate that the installed cnDBTier version is compatible with the target NRF version being installed or upgraded. This is validated based on the value configured in the global.minDbTierVersion parameter. If the cnDBTier version is not compatible, NRF installation or upgrade will not proceed.
  • Verify the replication status with the connected peer. This is validated based on the value configured in the appinfo.defaultReplicationStatusOnError parameter. If the replication is disabled or failed for all peers, NRF installation or upgrade will not proceed.
  • Verify that there are no critical alerts raised in the system. If critical alerts are found, installation or upgrade will not proceed.

4.22.2 NRF Preinstallation Validation

The following sections describe the preinstallation validation performed when NRF is installed for fresh installation mode and fault recovery mode.

4.22.2.1 For Fresh Installation

The preinstallation validation is performed as part of the preinstallation hooks of each NRF microservice. It is the first set of actions performed. For more information about the list of microservices that perform validation, see NRF Microservice Table Validation.

This validation is configured by setting global.appValidate.preValidateEnabled parameter to true in the ocnrf_custom_values_25.1.200.yaml. The following checks are performed as part of the preinstallation validation:
  • Validate the presence of the database. If not present, the installation will not proceed. The database is expected to be present, before proceeding with installation.
  • Validate that the ReleaseConfig table does not have release information about the site being installed. If present, the installation will not proceed, as it would not be a fresh installation.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If all the required tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed. The tables are expected to be present when multisite georedundant NRF is deployed.

4.22.2.2 For Fault Recovery

NRF can be installed in fault recovery mode by setting the global.appValidate.faultRecoveryMode parameter to true in the ocnrf_custom_values_25.1.200.yaml file. When NRF is deployed in the fault recovery mode, with a database backup in place, the following checks are performed as part of the preinstallation validation:
  • Validate the presence of the database. If not present, the installation will not proceed. The database is expected to be present, before proceeding with installation.
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the installation will not proceed as the table is expected to contain information from the database backup.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If the tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, installation will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

Note:

It is recommended to set this flag back to false after the installation is completed in fault recovery mode. This can be done while performing future upgrades.

In case preinstallation fails, the error reason is logged in the preinstallation hook job of the particular microservice.

4.22.3 NRF Postinstallation Validations

The following sections describe the postinstallation validation performed when NRF is installed in fresh installation mode and fault recovery mode.

4.22.3.1 For Fresh Installation


The postinstallation validation is done as part of the postinstallation hooks of each NRF microservice and is the last set of actions performed. For more information about the list of microservices that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.postValidateEnabled parameter to true. The following checks are performed as part of the postinstallation validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the installation will not proceed.
  • Validate the presence of the database. If not, the installation will not proceed.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If all the required tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed. The tables are expected to be present when multisite georedundant NRF is deployed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, installation will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.


In case postinstallation fails, the error reason will be logged in the postinstallation hook job of the particular microservice.

4.22.3.2 For Fault Recovery

When NRF is installed in the fault recovery mode, using a previously backed up database, actions similar to those mentioned in NRF in Fault Recovery Mode are performed.

Note:

It is recommended to set this flag back to false after the installation is complete in fault recovery mode. This can be done while performing future upgrades.

4.22.4 NRF Preupgrade Validation

The preupgrade validation is done as part of the preupgrade hooks of each NRF microservice and is the first set of actions performed.

For more information about the list of microservices that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.preValidateEnabled parameter to true in the ocnrf_custom_values_25.1.200.yaml file. The following checks are performed as part of the preupgrade validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the upgrade will not proceed.
  • Validate the presence of the database required to proceed with the upgrade. If not present, the upgrade will not proceed.
  • Validate the presence of the required tables required to proceed with the upgrade. Each microservice validates the tables that it is responsible for. If all the tables are not present or partially present, the upgrade will not proceed.
  • Validate the table schema against the schema expected as per the release version. If validation fails, upgrade will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, upgrade will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

The tables are expected to be present when multisite georedundant NRF is deployed.

4.22.5 NRF Postupgrade Validation

The postupgrade validation is done as part of the postupgrade hooks of each NRF microservice and is the last set of actions performed.

For more information about the list of services that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.postValidateEnabled parameter to true in the ocnrf_custom_values_25.1.200.yaml file. The following checks are performed as part of the postupgrade validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the upgrade will not proceed.
  • Validate the presence of the database required to proceed with the upgrade. If not present, the upgrade will not proceed.
  • Validate the presence of the required tables required to proceed with the upgrade. Each microservice validates the tables that it is responsible for. If all the tables are not present or partially present, the upgrade will not proceed.
  • Validate the table schema against the schema expected as per the release version. If validation fails, upgrade will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, upgrade will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

The tables are expected to be present when multisite georedundant NRF is deployed.

4.23 Ignore Unknown Attribute in NFDiscover Search Query

NRF by default rejects any nfDiscover request (400 Bad Request) with query attributes that are not supported. As part of this feature, instead of rejecting the request, NRF allows processing the nfDiscover request by ignoring the unsupported and unknown search query attributes. The list of query attributes that should be ignored while processing the nfDiscover request is configured using Helm.

Enable

You can enable this feature using Helm:
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. List the search query attributes in searchQueryIgnoreList under the nfdiscovery section. For more information about the parameter, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  3. Save the file.
  4. Run helm install. For more information about the installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are setting this parameter after NRF deployment, run the Helm upgrade. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
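A minimal sketch of the corresponding Helm configuration; the attribute names in the list are placeholders for the query attributes to be ignored:

    nfdiscovery:
      searchQueryIgnoreList:
        - example-attribute-1
        - example-attribute-2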

4.24 NF Authentication using TLS Certificate

This feature supports authentication of the Network Function before it accesses the NRF services. If authentication fails, NRF rejects the service operation requests. In this feature, NRF validates attributes from the TLS certificate against defined attributes.

4.24.1 XFCC Header Validation

HTTPS support is a minimum requirement for 5G NFs as defined in 3GPP TS 33.501. This feature enables extending identity validation from the Transport layer to the Application layer and provides a mechanism to validate the NF FQDN presence in Transport Layer Security (TLS) certificate as added by the Service Mesh against the NF Profile FQDN present in the request.

NRF provides configurations to dynamically enable or disable the feature. To enable the feature on Ingress Gateway in NRF deployment, see xfccHeaderValidation attribute in User Configurable Section of Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide for more details.

Note:

  • This feature is disabled by default. The feature needs to be enabled at API Gateway and NRF. At NRF, the feature can be enabled or disabled using the following configuration.
  • Once this feature is enabled, all NFs must re-register with the FQDN in the NF Profile, or send an NFUpdate with the FQDN. For Subscription Service Operations, Network Functions must register with NRF. The NFs that subscribed before enabling the feature must register with NRF for further service operations.

Managing NF Authentication using TLS Certificate

Enable

To enable the feature on Ingress Gateway in NRF deployment:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set the enabled parameter to true under XFCC header validation/extraction in the Ingress Gateway Global Parameters section:
    xfccHeaderValidation:
      extract:
        enabled: true
  3. Save the file.
  4. Run helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

Configure the NF Authentication using the TLS Certificate feature using REST API or CNC Console:
  • Configure NF Authentication using TLS Certificate using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    Refer to attributes under nfAuthenticationOptions in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for more details.

  • Configure NF Authentication using TLS Certificate using CNC Console: Perform the feature configurations as described in NF Authentication Options.

Observe

For more information on NF Authentication using TLS certificate feature metrics and KPIs, see NRF Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.24.2 TLS SNI Header Validation

Oracle Communications Cloud Native Core, Network Repository Function (NRF) supports Server Name Indication (SNI) when acting as a client and sending the TLS handshake message. The FQDN in the SNI identifies the server for which the TLS connection is needed. When the same server supports multiple services, the SNI identifies which service is to be used for the TLS connection. As a part of the TLS handshake initiation process, the SNI is populated in the client handshake sent by Egress Gateway depending on the routing type. There are two routing options:
  • Direct Routing – Egress Gateway adds the target server's FQDN in the SNI. If the target server has only an IP address, the SNI header is not populated in the outgoing TLS requests.
  • Indirect Routing – Egress Gateway uses the FQDN of the selected peer (for example, the host of the SCP or SEPP) to populate the SNI header. If the selected peer has only an IP address, the SNI header is not populated in the outgoing TLS requests.

WARNING:

This feature should be enabled only for non-servicemesh-based deployments.

Note:

When the feature is enabled, SNI header is populated at Egress Gateway only when an FQDN is available.

Managing TLS SNI Header Validation

Enable

To enable the TLS SNI header feature in the NRF deployment:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set egress-gateway.sniHeader.enabled to true to enable TLS SNI header validation.
  3. Save the file.
  4. Run helm install. For more information about the installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are setting this parameter after NRF deployment, run the Helm upgrade. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
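A minimal sketch of the corresponding setting in the custom values file:

    egress-gateway:
      sniHeader:
        enabled: true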

Observe

For more information on the TLS SNI header validation feature metric, oc_egressgateway_sni_error_total, see the NRF Metrics section.

Maintain

In case the SNI header is not sent in the TLS 1.2 handshake, perform the following:

  1. Collect the logs: Check the logs in the following scenarios:
    • To ensure that the SNI header is added and the feature is running as expected, look up the SNI feature is enabled log statement at the debug level.
    • To identify when the Peer rejected the client handshake due to invalid SNI sent by Egress Gateway, look up the Unrecognized server name indication log statement at the debug level.
    • To verify if both service mesh and SNI feature are enabled, look up the Service mesh is enabled. As a result, SNI will not be populated even though it is enabled log statement at the warn level.

      For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.25 Subscriber Location Function

The Subscriber Location Function (SLF) feature of NRF allows you to select the network function based on the Subscription Permanent Identifier (SUPI) and Generic Public Subscription Identifier (GPSI) subscriber identities. The SLF feature supports Authentication Server Function (AUSF), Unified Data Repository (UDR), Unified Data Management (UDM), Policy Control Function (PCF), and Charging Function (CHF) nfTypes for the discovery query.

The discovery of producer network functions is performed as follows:
  • NRF checks whether the SLF feature is enabled. If the feature is enabled:

    • NRF checks whether slfLookupConfig contains details for the target-nf-type in the NFDiscover query:
      • In case any of the NFDiscover search query attributes are present in the configured skipSLFLookupParameters, the SLF lookup is not performed.

        For example, if the search query attribute is group-id-list and the same attribute is configured in skipSLFLookupParameters, then NRF does not perform the SLF lookup; instead, NRF uses the group-id-list present in the search query while processing the NFDiscover service operation.

      • In case the configured skipSLFLookupParameters attribute does not match any of the NFDiscover search query attributes, the mandatory parameter (either SUPI or GPSI) required for the SLF lookup must be present in the NFDiscover search query.
      • In case none of the mandatory attributes (SUPI or GPSI) are present in NFDiscover search query and if NFDiscover search query attribute is present in configured exceptionListForMissingMandatoryParameter attribute, then NRF does not perform SLF lookup and processes the NFDiscover service operation without rejecting the NFDiscover search query.
      • In case both the mandatory attributes (SUPI or GPSI) are present in NFDiscover search query and if NFDiscover search query attribute is present in configured exceptionListForMissingMandatoryParameter attribute, NRF performs SLF lookup and drops the SUPI or GPSI and processes the NFDiscover service operation.
      • In case none of the mandatory attributes (SUPI or GPSI) are present in the NFDiscover search query and if NFDiscover search query attribute is not present in configured exceptionListForMissingMandatoryParameter attribute then NRF rejects the discovery query.
      • In case both of the mandatory attributes (SUPI or GPSI) are present in the NFDiscover search query, the configured preferredSubscriberIdType is used to decide which mandatory attribute is used to perform the SLF query, ignoring the other attribute.
  • NRF finds the NF Group Id by sending an SLF query (that is, the Nudr_GroupIDmap service operation) with the received Subscriber Identity (such as SUPI or GPSI).
  • NRF generates the NFDiscover service response using the NFGroupId received in the SLF response and other parameters. The received Subscriber Identifier is not used during NF Producer selection.
  • The accessTokenCacheEnabled flag enables caching of the OAuth2 token for SLF communication at NRF.

    Note:

    • The accessTokenCacheEnabled flag is functional only when the OAuth2 token is required for SLF communication.
    • Operators must enable the accessTokenCacheEnabled flag only after NRF deployment is successfully upgraded to 25.1.200.

NRF supports direct routing or dynamic routing of SLF queries.

Managing Subscriber Location Function Feature

Enable
You can enable the SLF feature using the REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in SLF Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the SLF Options page. For more information about enabling the feature using CNC Console, see SLF Options.

Configure

You can configure the SLF feature using the REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in SLF Options.

Note:

  • At least one of the attributes fqdn, ipv4Address, or ipv6Address must be included by the registered UDR.
  • For routing, the NfService-level attributes are considered first. If none of the attributes are present at the NfService level, the NfProfile-level attributes are considered; attributes are not mixed, with some taken from NfService and some from NfProfile.

    However, with the Support for Configurable Port and Routing Parameter Selection feature, the routing selection parameter can be picked from either NfService or NfProfile in a mixed mode, as per availability.

  • Of the three attributes, the endpoint is selected in the following order:
    • ipv4Address
    • ipv6Address
    • fqdn

    However, with the Support for Configurable Port and Routing Parameter Selection feature, the order of the attributes can be configured.

  • For cases where ports are not present (for example, ipv4Address and ipv6Address in NfProfile, FQDN from NfService or NfProfile, or ipEndpoints with no port in NfService), the scheme configured in the NfService is used to determine the port.
    • If NfService.scheme is set to http, port 80 is used.
    • If NfService.scheme is set to https, port 443 is used.

    However, with the Support for Configurable Port and Routing Parameter Selection feature, the port selection can be configured.

Observe

Metrics

The following are the SLF feature-specific metrics:
  • ocnrf_nfDiscover_ForSLF_rx_requests_total
  • ocnrf_nfDiscover_ForSLF_tx_responses_total
  • ocnrf_SLF_tx_requests_total
  • ocnrf_SLF_rx_responses_total
  • ocnrf_slf_jetty_latency_seconds
  • ocnrf_nfDiscover_SLFlookup_skipped_total
  • ocnrf_nfDiscover_continue_mandatoryAttributes_missing_total
  • ocnrf_max_slf_attempts_exhausted_total

For more information about SLF metrics, see NRF SLF Metrics section.

Alerts

For the SLF feature-specific alerts, see the NRF Alerts section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.25.1 Static Selection of SLF

NRF supports static selection of SLF to perform direct routing of discovery queries. These queries are sent to the Unified Data Repository (UDR) or SLF (that is, the Nudr_GroupIDmap service) to retrieve the corresponding NFGroupId. In this configuration, the SLF or UDR is selected based on the slfHostConfig configuration, and NRF establishes direct communication with the selected SLF or UDR. For SLF queries through Service Communication Proxy (SCP), see the SLF Requests section.

SLF host configuration attribute (slfHostConfig) allows the user to configure the details of SLF or UDR network functions.

The slfHostConfig configuration consists of attributes such as apiVersion, scheme, FQDN, port, and priority. NRF allows the configuration of more than two hosts. The host with the highest priority is considered the Primary Host, and the host with the second highest priority is considered the Secondary Host.

Note:

  • Refer to 3GPP TS 29.510 (Release 15.5) for the definition and allowed range of the slfHostConfig attributes (apiVersion, scheme, FQDN, port, priority, and so on).
  • Apart from the priority attribute, no other attribute plays any role in primary or secondary host selection.
  • Apart from the primary and secondary hosts, other configured hosts (if any) are not used during any message processing.
  • When more than one host is configured with the highest priority, two of them are picked randomly as the primary and secondary hosts.
The SLF request is first sent to the primary SLF. In case of an error from the primary SLF, the request is sent to the secondary SLF based on the following configurations:
  1. rerouteOnResponseHttpStatusCodes: This attribute is used to determine whether an SLF retry must be performed to an alternate SLF based on the response code received from the SLF. The alternate SLF is picked from the SLF Host Config if slfConfigMode is set to STATIC_SLF_CONFIG_MODE, or from the slfDiscoveredCandidateList if slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE. For primary and secondary SLF details, see the slfHostConfig attribute.
  2. maximumHopCount: This configuration determines the maximum number of hops (SLF or NRF) that NRF can forward in a given service request. This configuration is useful during NRF Forwarding and SLF feature interaction.

Enable the feature in Static mode

The default configuration for slfConfigMode is STATIC_SLF_CONFIG_MODE. Perform the following steps to enable the SLF feature in static mode:

  1. Configure slfHostConfig and slfLookupConfig parameters. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set featureStatus to ENABLED.
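
A minimal sketch of the resulting slfOptions configuration is shown below, assuming the {apiRoot}/nrf-configuration/v1/slfOptions API; the host values and exact body shape are illustrative, and the authoritative attribute definitions are in the REST Specification Guide:

   {
     "featureStatus": "ENABLED",
     "slfConfigMode": "STATIC_SLF_CONFIG_MODE",
     "slfLookupConfig": [{
       "nfType": "UDR"
     }],
     "slfHostConfig": [{
       "fqdn": "slf1.example.com",
       "scheme": "https",
       "port": 443,
       "apiVersion": "v1",
       "priority": 1
     }]
   }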

4.25.2 Dynamic Selection of SLF

NRF supports dynamic selection of SLF based on the registered SLF or UDR profiles. This configuration is defined based on the SLF Configuration Mode attribute (slfConfigMode). This attribute allows the user to decide whether the SLF lookup can be performed based on preconfigured slfHostConfig configuration or use the SLFs registered with NRF.

The dynamic selection of producer network functions is as follows.

NRF checks if the SLF feature is enabled or not. When the feature is enabled:

  • If slfConfigMode attribute is set to STATIC_SLF_CONFIG_MODE, the SLF lookup is performed based on preconfigured slfHostConfig as described in Static Selection of SLF.
  • To perform SLF lookup based on the SLFs registered with NRF:
    • Set populateSlfCandidateList to true:
      ("populateSlfCandidateList": true)
    • Wait until at least one SLF candidate entry is listed under slfDiscoveredCandidateList. Perform a GET operation to check whether any SLF candidate is listed in the output.
    • Change slfConfigMode to DISCOVERED_SLF_CONFIG_MODE:
      {
        "featureStatus": "ENABLED",
        "slfLookupConfig": [{
          "nfType": "UDR"
        }],
        "slfConfigMode": "DISCOVERED_SLF_CONFIG_MODE"
      }

    Note:

    Setting populateSlfCandidateList to true and slfConfigMode to DISCOVERED_SLF_CONFIG_MODE must be performed in two separate requests.
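
    For illustration, a minimal sketch of the two separate requests toward the slfOptions configuration API is shown below; the {apiRoot}/nrf-configuration/v1/slfOptions path is used elsewhere in this document, and whether partial bodies are accepted depends on the API semantics in the REST Specification Guide:

      Request 1: Trigger the population of slfDiscoveredCandidateList
      Resource URI: {apiRoot}/nrf-configuration/v1/slfOptions
      {
        "populateSlfCandidateList": true
      }

      Request 2: After slfDiscoveredCandidateList has at least one candidate
      Resource URI: {apiRoot}/nrf-configuration/v1/slfOptions
      {
        "featureStatus": "ENABLED",
        "slfLookupConfig": [{
          "nfType": "UDR"
        }],
        "slfConfigMode": "DISCOVERED_SLF_CONFIG_MODE"
      }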

Note:

Upgrade NRF to enable NrfArtisan service, if not enabled during installation. For more information on enabling Artisan Microservice, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

The slfHostConfig must be configured before or while setting slfConfigMode to STATIC_SLF_CONFIG_MODE when featureStatus is ENABLED.

Once featureStatus is ENABLED and slfConfigMode is STATIC_SLF_CONFIG_MODE, slfHostConfig cannot be empty.

If slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE, slfHostConfig is not considered for the discovery query.

slfConfigMode can be set to DISCOVERED_SLF_CONFIG_MODE only if at least one slfCandidate is present in the slfDiscoveredCandidateList. To trigger the population of slfDiscoveredCandidateList, set populateSlfCandidateList to true.

The nrfArtisan service populates the slfDiscoveredCandidateList by sending an nfDiscovery query to the nfDiscovery service to fetch the registered SLF or UDR NfProfiles. The discovery query used is:
/nnrf-disc/v1/nf-instances?target-nf-type=UDR&requester-nf-type=NRF&service-names=nudr-group-id-map&preferred-locality=<slfOptions.preferredSLFLocality>&limit=0
The nfDiscovery service fetches the registered SLF or UDR NfProfiles and applies the filtering criteria based on the discovery query parameters to get the relevant set of SLF or UDR NfProfiles. The UDR or SLF profiles are sorted and prioritized before they are added to the discovery response. All the features that are available for discovery service operations, if enabled, are applied while processing the discovery query; for example, the forwarding, empty list, and extended preferred locality features. The discovery response is sent to the nrfArtisan service, which then populates the slfDiscoveredCandidateList.

Note:

If the nrfArtisan service does not receive a success response from the discovery service, the slfDiscoveredCandidateList retains the SLF producers that were received from the discovery service in the last successful response.

The order of the UDR or SLF profiles in the slfDiscoveredCandidateList is decided by the sorting algorithms in the nfDiscovery service. The same order is used when NRF sends the SLF query to the SLF.

Refer to the Preferred Locality Feature Set section for details on how the NfProfiles are sorted based on preferred-locality and extended-preferred-locality.

Prior to NRF 22.2.x, Egress Gateway returned a 503 response for timeout exceptions (for example, no response received for an SLF query, or no response received for a query sent to a forwarding NRF).

From 22.3.x onwards, Egress Gateway returns 408 response codes for timeout exceptions (REQUEST_TIMEOUT and CONNECTION_TIMEOUT), to keep the behavior in line with the HTTP standard. Hence, after upgrading to 22.3.x or performing a fresh installation of 22.3.x, if a retry must be performed for the timeout scenario, the 408 error code must be explicitly configured under the SLF reroute option. Following is the configuration for SLF options:
{
  "rerouteOnResponseHttpStatusCodes": {
    "pattern": "^[3,5][0-9]{2}$|408$"
  }
}
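
With this pattern, a reroute is attempted for any 3xx or 5xx response code, and for the 408 responses that Egress Gateway returns for timeout exceptions.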
If the SLF request fails toward all SLFs due to non-2xx error responses (except 404) or request timeouts, and the error codes are configured under "rerouteOnResponseHttpStatusCodes", then:
  • if forwarding is enabled and the message is a candidate for forwarding, NRF forwards the request to the other segment NRF.
  • if forwarding is disabled, NRF rejects the discovery query with the error code configured under "SLF_Not_Reachable".
If the SLF request fails due to a 404 error response and 404 is configured under "rerouteOnResponseHttpStatusCodes", then:
  • NRF retries to the secondary or tertiary SLFs.
  • forwarding is not performed in this case (explicit code was added to provide this functionality as per the SLF requirement).

Enable the feature in Dynamic mode

Prerequisite

The Artisan microservice (NrfArtisan) must be ENABLED. For more information about enabling the NrfArtisan service, see the "Global Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configuring slfConfigMode to Dynamic

Perform the following steps to enable the SLF feature using registered SLF profiles:

  1. Configure slfLookupConfig parameter. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set populateSlfCandidateList to true. This triggers the population of slfDiscoveredCandidateList.
  3. Perform a GET on slfOptions to check whether slfDiscoveredCandidateList has at least one candidate. If present, go to step 4. If not, wait until slfDiscoveredCandidateList is populated with SLF profiles.
  4. Set slfConfigMode to DISCOVERED_SLF_CONFIG_MODE and featureStatus to ENABLED.

Moving from Dynamic to Static

Perform the following to switch from dynamic to static configuration:

  1. Configure slfHostConfig parameter. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set slfConfigMode to STATIC_SLF_CONFIG_MODE, provided slfHostConfig and slfLookupConfig are already configured.
  3. Set populateSlfCandidateList to false.

Moving from Static to Dynamic

Perform the following to switch from static to dynamic configuration:

  1. Upgrade NRF to enable the NrfArtisan service, if it was not enabled previously. Set enableNrfArtisanService under global attributes to true in the ocnrf_custom_values_25.1.200.yaml file.
  2. Set populateSlfCandidateList to true. This triggers the population of slfDiscoveredCandidateList.
  3. Perform a GET on slfOptions to check whether slfDiscoveredCandidateList has at least one candidate. If present, go to step 4. If not, wait until slfDiscoveredCandidateList is populated with SLF profiles.
  4. Set slfConfigMode to DISCOVERED_SLF_CONFIG_MODE and featureStatus to ENABLED.

    Note:

    If slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE, slfHostConfig is not considered for discovery query.

4.25.3 Support for Configurable Port and Routing Parameter Selection

NRF provides a configurable option to select a port, when one is not explicitly configured, either from the IpEndpoint or using the scheme attribute of NfService. This configuration is used for port selection when the routing attribute is ipv4Addresses or ipv6Addresses of NfProfile, or the FQDN of NfService or NfProfile.

Additionally, NRF allows configuring the preferred routing attribute when more than one of the following attributes is present in the NfProfile or NfService (a configuration sketch follows the Configure list below):

  • IPv4 (ipv4Addresses from IpEndpoint of NfService if present, or ipv4Addresses of NfProfile)
  • IPv6 (ipv6Addresses from IpEndpoint of NfService if present, or ipv6Addresses of NfProfile)
  • FQDN (fqdn of NfService if present, or fqdn of NfProfile)

Managing Configurable Port and Routing Parameter Selection Feature

Configure

You can configure the following parameters using the REST API or CNC Console:
  • Configure using REST API: Perform the preferredPortFromIPEndpoint and preferredRoutingParameter feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in SLF Options.
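
As a hedged illustration, such a configuration under slfOptions might look like the following; the attribute value formats (in particular, whether the routing preference is an ordered list) are assumptions, and the authoritative definitions are in the REST Specification Guide:

   {
     "preferredPortFromIPEndpoint": true,
     "preferredRoutingParameter": ["IPV4", "IPV6", "FQDN"]
   }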

4.25.4 Rerouting SLF Requests Using Alternate SCP and Alternate SLF

NRF supports routing SLF requests through SCP as per the configuration in the Egress Gateway SBI routing configuration. SLFs are selected based on Static or Dynamic SLF feature configuration.

In case an error response is received from SLF, NRF reroutes the SLF request to an alternate SLF. However, in this case, the same SCP that was used in the previous route is always used. It is possible that SCP rerouted the SLF request to an alternate SLF in the first attempt and it failed; if the same SCP is used in the subsequent attempt, there is a possibility that this reroute might also fail. Also, NRF does not consider the number of SLFs that have already been attempted by SCP.

With the implementation of this feature, NRF allows you to configure a maximum number of SLF attempts or an alternate SCP route to enhance the routing strategy and minimize the number of reroutes. When an error response is received, the subsequent reroutes to SLF are through an alternate SCP and SLF path, improving resilience and service continuity.

NRF discovery microservice sends the Nudr_GroupIDmap request to SLF through SCP based on the Egress Gateway routes configuration.

This feature allows you to configure the maximum number of SLF attempts or an alternate SCP route to process the subsequent attempts.

This can be achieved by:
  • Limiting number of SLF attempts: This is performed by configuring the number of SLF attempts in the maxSLFattempts parameter. For more information, see Maximum SLF Attempts.
  • Choosing an alternate SCP: This is performed by enabling the useAlternateScp parameter. For more information, see Use Alternate SCP.

Note:

If the Egress Gateway is down, then no reroute is performed and an error response is sent to the producer NF.

Maximum SLF Attempts

In a deployment where routing is through SCP, when the discovery service receives a non-2xx response, NRF extracts the server header to deduce the number of SLFs that have been attempted.

For example:

  • If the server header is Server: UDR-UDR1-Instance-Id UDR-UDR2-Instance-Id, the number of attempts is considered as 2.
  • If the server header is Server: UDR-UDR1-Instance-Id UDR-UDR2-Instance-Id SCP-SCP1-Instance-Id, the number of attempts is considered as 2.
  • If the server header is Server: envoy, the number of attempts is considered as 1.
  • If the server header is Server: UDR-UDR1-Instance-Id, the number of attempts is considered as 1.
  • If the server header is Server: SCP-SCP1-Instance-Id, the number of attempts is considered as 1.
  • If there is no server header received or server header is empty, the number of attempts is considered as 1.
  • If the request has timed out, the number of attempts is considered as 1.
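In effect, the number of attempts is deduced from the count of UDR instance identifiers in the server header; when no UDR identifier can be extracted (an envoy or SCP-only header, an empty or missing header, or a timeout), a single attempt is assumed.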
The number of attempts to SLF is validated against the value configured for maxSLFattempts parameter, as follows:
  • If the maxSLFattempts is not exhausted, discovery selects the next SLF to reroute the request.
  • If the maxSLFattempts is exhausted and there is no successful response, discovery sends the error response. If there is only one SLF, NRF does not reroute to the already attempted SLF and sends the error response.
  • If the maxSLFattempts is exhausted and forwarding is enabled, NRF routes the discovery request to another NRF based on maximumHopCount configuration.

    The maximumHopCount provides the total number of SLF attempts and NRF forwarding attempts performed. NRF attempts all available SLFs and then, if hop count remains, forwards the request to another NRF. When the maxSLFattempts parameter is configured, the operator can choose the maximum number of SLF attempts before the request is forwarded.

For example:

In a deployment with 3 SLFs and 4 NRFs:

  • If maximumHopCount is set to 5 and maxSLFattempts is set to 0, the number of attempts to SLF is 3 and the number of attempts to NRF is 2.
  • If maximumHopCount is set to 5 and maxSLFattempts is set to 2, the following combinations are possible:
    • 1 attempt to SLF and 4 attempts to NRF.
    • 2 attempts to SLF and 3 attempts to NRF.

Note:

The value of maxSLFattempts parameter must be less than or equal to the value configured for maximumHopCount parameter in {apiRoot}/nrf-configuration/v1/generalOptions API. For more information about the maximumHopCount parameter, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

For more information about server header propagation in service-mesh deployment, see Adding Filters for Server Header Propagation.

Use Alternate SCP

In a routing using SCP deployment, when the discovery service receives a non-2xx response, NRF checks the useAlternateScp parameter value.

If the useAlternateScp parameter is set to true, NRF chooses an alternate SCP, if available, in a round-robin manner to reroute the request. If there is only one SCP instance, NRF reroutes the request using the same SCP, irrespective of the useAlternateScp configuration.

If all the available SCPs are tried and attempts are left, NRF revolves back to the first SCP and repeats the SCP selection based on round-robin.

Note:

  • When useAlternateScp is set to true, alternate SCP selection is not guaranteed if the SCP Health APIs feature is enabled. Hence, it is recommended to disable the SCP Health APIs feature when useAlternateScp is set to true. For more information about the SCP Health APIs feature, see the Monitoring the Availability of SCP Using SCP Health APIs section.
  • When useAlternateScp is set to true, customPeerSelectorEnabled parameter in {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration must also be set to true.

Managing the Feature

This section explains the procedure to enable and configure the feature.

Configure

You can configure the feature either using REST API or Console:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Configure the value of the maxSLFattempts parameter to be greater than 0 to limit the number of SLF attempts. The number of attempts is calculated based on the server header received in the response.
    • (Optional) Enable useAlternateScp in {apiRoot}/nrf-configuration/v1/slfOptions to allow alternate routing using SCP.
    • Create or update the alternate SCP peer details in {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Update {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as shown in the following sample:

      Note:

      When maxSLFattempts is greater than 0 or useAlternateScp is set to true, don't configure errorHandling under routesconfiguration, as NRF discovery microservice will handle the reroute.

      Sample body:

      [
        {
          "id": "egress_scp_proxy1",
          "uri": "http://localhost:32068/",
          "order": 3,
          "metadata": {
            "httpsTargetOnly": true,
            "httpRuriOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "pattern": "/nudr-group-id-map/v1/nf-group-ids"
              },
              "name": "Path"
            }
          ],
          "filters": [
            {
              "name": "SbiRouting",
              "args": {
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": true
              }
            }
          ]
        },
        {
          "id": "default_route",
          "uri": "egress://request.uri",
          "order": 100,
          "filters": [
            {
              "name": "DefaultRouteRetry"
            }
          ],
          "predicates": [
            {
              "args": {
                "pattern": "/**"
              },
              "name": "Path"
            }
          ]
        }
      ]
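
      In this sample, the egress_scp_proxy1 route matches the Nudr_GroupIDmap path and routes matching requests through SCP using the SbiRouting filter, with customPeerSelectorEnabled set to true as required when useAlternateScp is enabled; the default_route entry handles all other traffic.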
  • Configure using Console: Perform the following feature configurations as described:
    • Configure the value of the Maximum SLF Attempts parameter to be greater than 0 in SLF Options to limit the number of SLF attempts.
    • (Optional) Enable Use Alternate SCP in the SLF Options to allow alternate routing using SCP.
    • Create or update Peer Configuration.
    • Create or update the Peer Set Configuration to assign these peers.
    • Update the Routes Configuration as mentioned above.

      Note:

      When Maximum SLF Attempts is greater than 0 or Use Alternate SCP is set to true, don't configure Error Handling under Routes Configuration, as NRF discovery microservice will handle the reroute.

Observe

Metrics

  • The slfFqdn dimension is added to the dimension list of the relevant SLF metrics.
  • The description of the ocnrf_max_slf_attempts_exhausted_total metric is updated in Table 6-147.

Alerts

Updated the description of the OcnrfMaxSlfAttemptsExhausted alert.

KPIs

There are no KPIs for this feature.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.25.5 Discovery Parameter Value Based Skip SLF Lookup

Currently, NRF does not have an option to skip SLF lookup based on specific values of the discovery query parameter.

With the implementation of this feature, when NRF receives a discovery request, it provides an option to skip the Subscriber Location Function (SLF) lookup if a certain discovery query parameter, configured under skipSLFLookupParameters, is present in the discovery request. In this case, the SLF lookup is skipped regardless of the parameter's value.

This feature provides an option to configure a set of allowed values for the discovery query parameter. If the parameter is present in the discovery request and its value matches the configured values, the SLF lookup will be skipped.

NRF processes the discovery request and performs skip SLF lookup as follows:

  • If the value of the enableValueBasedSkipSLFLookup parameter is set to true, NRF validates the discovery query parameters and their values received in the discovery request against the values configured in the valueBasedSkipSLFLookupParams parameter to determine whether the SLF lookup is to be skipped. The valueBasedSkipSLFLookupParams parameter comprises the parameterName and parameterValue attributes.

    "valueBasedSkipSLFLookupParams": [{"parameterName": "dnn", "parameterValue": "abc*"}, {"parameterName": "group-id-list", "parameterValue": ".*"}]

    Where,

    • parameterName - indicates a valid discovery query parameter supported by NRF for which the SLF lookup is to be skipped.

      Note:

      In 25.1.100, this feature supports only dnn-based skipSlfLookup.
    • parameterValue - indicates the value for the corresponding discovery parameter name for which the SLF lookup is skipped. This value can be configured as a regular expression.
      • If the parameterValue is configured as .*, it indicates all values. NRF skips the SLF lookup for any value received for the discovery parameter.
      • If the parameterValue is a regular expression, NRF matches this value against the discovery request.
        • If the value matches the expression, NRF skips the SLF lookup for that value and proceeds with discovery processing.
        • If the expression does not match the value, NRF performs the SLF lookup. NRF does not fall back to the skip SLF lookup feature, even if skipSLFLookupParameters is configured.

        Note:

        Pattern matching is performed as per the Java Platform, Standard Edition 8 regular expression rules. For more information about regular expressions, see https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html.
  • If the value of enableValueBasedSkipSLFLookup is set as false, NRF falls back to the existing skip SLF lookup feature based on the list of discovery parameters configured in the skipSLFLookupParameters parameter.

The following table describes the functionality of the feature based on the sample values configured in valueBasedSkipSLFLookupParams and the incoming discovery query parameters.

Table 4-18 Examples for Discovery Parameter Value Based Skip SLF Lookup

Value of enableValueBasedSkipSLFLookup | parameterName and parameterValue configured in valueBasedSkipSLFLookupParams | Query parameter name and value received in the incoming discovery request | Result
true | {"parameterName":"dnn","parameterValue":".*"} | dnn=dnn-c4, dnn=dnn-1, dnn=DnNvAl123 | SLF lookup is skipped for all the values (dnn-c4, dnn-1, and DnNvAl123).
true | {"parameterName":"dnn","parameterValue":"dnn*"} | dnn=dnn-c4, dnn=dnn-1, dnn=DnvAl123, dnn=DnaaAl123 | SLF lookup is skipped for the incoming requests with the values dnn-c4 and dnn-1, but SLF lookup is performed for the incoming requests with the values DnvAl123 and DnaaAl123.
true | {"parameterName":"dnn","parameterValue":"^dnn-c4$"} | dnn=dnn-c4, dnn=adnn-c4, dnn=dnn-1, dnn=dnnAl123, dnn=DnaaAl123 | SLF lookup is skipped for the incoming request with the value dnn-c4, and SLF lookup is performed for the incoming requests with the values adnn-c4, dnn-1, dnnAl123, and DnaaAl123.
true | {"parameterName":"dnn","parameterValue":"^prov[a-g\d]{1,3}\w{3}$"} | dnn=prova5aaa, dnn=provaa15aaa, dnn=prova5a | SLF lookup is skipped for the incoming request with the value prova5aaa, and SLF lookup is performed for the incoming requests with the values provaa15aaa and prova5a.
true | {"parameterName":"dnn","parameterValue":"^province1\.mnc\d{3}\.mcc\d{3}\.gprs$"} | dnn=province1.mnc014.mcc310.gprs, dnn=province3.mnc012.mcc3150.gprs, dnn=province1.mnc0141.mcc001.gprs, dnn=province1.mnc001.mcc003.gprs | SLF lookup is skipped for the incoming requests with the values province1.mnc014.mcc310.gprs and province1.mnc001.mcc003.gprs, and SLF lookup is performed for the incoming requests with the values province3.mnc012.mcc3150.gprs and province1.mnc0141.mcc001.gprs.
true | {"parameterName":"dnn","parameterValue":".+\.mnc012\.mcc345\.gprs"} | dnn=prov.mnc012.mcc345.gprs, dnn=prov2.mnc011.mcc234.gprs, dnn=prov2.mnc001.mcc233.gprs | SLF lookup is skipped for the incoming request with the value prov.mnc012.mcc345.gprs, and SLF lookup is performed for the incoming requests with the values prov2.mnc011.mcc234.gprs and prov2.mnc001.mcc233.gprs.
false | {"parameterName":"dnn","parameterValue":"dnn *"} | dnn=dnn-c4, dnn=dnn-1, dnn=DnvAl123, dnn=DnaaAl123 | Value-based skip SLF lookup is not performed as the feature is disabled. NRF falls back to the existing skip SLF lookup feature (skipSLFLookupParameters), if it is configured.

Managing Discovery Parameter Value Based Skip SLF Lookup

Configure

You can configure the following parameters using the REST API or CNC Console:
  • Configure using REST API: Configure the values for valueBasedSkipSLFLookupParams as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Configure the values for Value Based Skip SLF Lookup Parameter List as described in SLF Options.

Note:

After upgrading NRF to 25.1.100, the valueBasedSkipSLFLookupParams attribute is empty by default. Populate the values for the valueBasedSkipSLFLookupParams attribute and then enable the enableValueBasedSkipSLFLookup parameter. The feature cannot be enabled when valueBasedSkipSLFLookupParams is empty.

Enable

You can enable Discovery Parameter Value Based Skip SLF Lookup feature using the REST API or CNC Console.
  • Enable using REST API: Set enableValueBasedSkipSLFLookup to true in SLF Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Enable Value Based Skip SLF Lookup to true on the SLF Options page. For more information about enabling the feature using CNC Console, see SLF Options.
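
For illustration, a minimal sketch of the configure-then-enable sequence against the slfOptions configuration API is shown below; the API path and body shapes are assumptions, and the exact definitions are in the REST Specification Guide:

  Step 1: Configure the lookup parameters
  Resource URI: {apiRoot}/nrf-configuration/v1/slfOptions
  {
    "valueBasedSkipSLFLookupParams": [
      {"parameterName": "dnn", "parameterValue": "^dnn-c4$"}
    ]
  }

  Step 2: Enable the feature
  Resource URI: {apiRoot}/nrf-configuration/v1/slfOptions
  {
    "enableValueBasedSkipSLFLookup": true
  }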

Observe

Metrics

The existing ocnrf_nfDiscover_SLFlookup_skipped_total metric is enhanced with a new dimension SkipSLFLookupValue for this feature.

For more information about SLF metrics, see NRF SLF Metrics section.

Alerts

There are no alerts added or updated for this feature.

KPIs

There are no KPIs related to this feature.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see the "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.26 NRF Forwarding

NRF Forwarding feature forwards the service operation messages to another NRF, if NRF is not able to fulfill the required service operation.

Note:

Only the service operations explained below, under the specific cases or scenarios described, are eligible for forwarding.

A consumer NF instance can perform the following:

  • Subscribe to changes of NF instances registered in an NRF with which it does not directly interact. The NF subscription message is forwarded by an intermediate NRF to another NRF.
  • Retrieve the NF profile of NF instances registered in an NRF with which it does not directly interact. The NF profile retrieval message is forwarded by an intermediate NRF to another NRF.
  • Discover the NF profile of NF instances registered in an NRF with which it does not directly interact. The NF discover message is forwarded by an intermediate NRF to another NRF.
  • Request an OAuth 2.0 access token for NF instances registered in an NRF with which it does not directly interact. The OAuth 2.0 access token service request is forwarded by an intermediate NRF to another NRF (which may issue the token).

NRF also enables users to define the forwarding criteria for NFDiscover and AccessToken service requests from NRF to an intermediate NRF. The user can configure the preferred NF Type, the NF Services, or both for which forwarding is applicable. This provides the flexibility to regulate the traffic between the NRFs. To enable the Forwarding Rules feature configuration, see the Enable Forwarding Rules section.

Managing NRF Forwarding Feature

Enable

The Forwarding feature is a core functionality of NRF. You need not enable or disable this feature.

If you are enabling NRF Growth feature, then perform the configuration as described in Configure Forwarding Options for NRF Growth.

Enable Forwarding Rules

You can enable the Forwarding Rules feature using REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in Forwarding Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Forwarding Options page. For more information about enabling the feature using CNC Console, see Forwarding Options.

Note:

  • Before enabling forwardingRulesFeatureConfig, ensure that the corresponding configurations are completed.
  • The featureStatus flag under forwardingRulesFeatureConfig can be set to ENABLED only if the discoveryStatus or accessTokenStatus attribute is already ENABLED.
  • Once the featureStatus flag under forwardingRulesFeatureConfig is ENABLED, the discoveryStatus and accessTokenStatus attributes cannot both be DISABLED.

Configure

Configure the NRF Forwarding feature using REST API or CNC Console:
  • Configure NRF Forwarding feature using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Forwarding feature using CNC Console: Perform the feature configurations as described in Forwarding Options.

Note:

The forwarding rules for nfDiscover service requests are based on following 3GPP discovery request parameters:

  • target-nf-type (Mandatory Parameter)
  • service-names (Optional Parameter)

The forwarding rules for access token service requests are based on the following 3GPP access token request parameter:

  • scope (Mandatory Parameter)

Configure Forwarding Options for NRF Growth

Configure the NRF Forwarding Options for NRF Growth using REST API or CNC Console:
  • Configure NRF Forwarding Options for NRF Growth feature using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Forwarding Options for NRF Growth feature using CNC Console: Perform the feature configurations as described in Forwarding Options.


NRF Host configuration attribute (nrfHostConfig) allows the user to configure the details of another NRF to which the service operation messages are forwarded.

Note:

For the NRF-NRF Forwarding feature to work, the nrfHostConfig attribute must be configured in both NRFs with each other's details.

For example, if NRF1 forwards requests towards NRF2, the nrfHostConfig configuration attribute of NRF1 must contain NRF2 details, and the nrfHostConfig configuration attribute of NRF2 must contain NRF1 details. These configurations are used while handling different service operations of NRF.

The nrfHostConfig configuration consists of attributes such as apiVersion, scheme, FQDN, port, and priority. NRF allows configuring a maximum of four hosts. The host with the highest priority is considered the Primary host.
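
A minimal sketch of an nrfHostConfig entry is shown below; the values are illustrative, and the authoritative attribute definitions are in the REST Specification Guide:

  "nrfHostConfig": [{
    "fqdn": "nrf2.example.com",
    "scheme": "https",
    "port": 443,
    "apiVersion": "v1",
    "priority": 1
  }]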

Note:

  • Refer to 3GPP TS 29.510 (Release 15.5) for the definition and allowed range of the NfHostConfig attributes (apiVersion, scheme, FQDN, port, priority, and so on).
  • Apart from the priority attribute, no other attribute plays any role in host selection.
  • Apart from the Primary host, three more hosts (if configured) can be used as alternate NRFs.
  • When more than one host is configured with the same priority, the order of NRF host selection among them is random.
In the NRF Forwarding feature, a request is first forwarded to Primary NRF. In case of error, the request is forwarded to alternate NRFs based on the following configurations:
  • nrfRerouteOnResponseHttpStatusCodes: This configuration is used to determine whether the service operation message can be forwarded to an alternate NRF. After receiving a response from the primary NRF, if the response status code matches this configuration, NRF reroutes the request to the alternate NRF. Refer to the nrfHostConfig attribute for host NRF details.

    Prior to NRF 22.2.x, Egress Gateway returned a 503 response for timeout exceptions (for example, no response received for an SLF query, or no response received for a query sent to a forwarding NRF).

    From 22.3.x onwards, Egress Gateway returns 408 response codes for timeout exceptions (REQUEST_TIMEOUT and CONNECTION_TIMEOUT), to keep the behavior in line with the HTTP standard. Hence, after upgrading to 22.3.x or performing a fresh installation of 22.3.x, if a retry must be performed for the timeout scenario, the 408 error code must be explicitly configured under the forwarding reroute option. Following is the configuration for forwarding options:
    {
      "nrfRerouteOnResponseHttpStatusCodes": {
        "pattern": "^[3,5][0-9]{2}$|408$"
      }
    }
  • maximumHopCount: This configuration determines the maximum number of hops (SLF or NRF) that NRF can forward in a given service request. This configuration is useful during NRF Forwarding and SLF feature interaction.

Observe

The following are the NRF Forwarding feature-specific metrics:
  • ocnrf_forward_accessToken_tx_requests_total
  • ocnrf_forward_accessToken_rx_responses_total
  • ocnrf_forward_nfProfileRetrieval_tx_requests_total
  • ocnrf_forward_nfProfileRetrieval_rx_responses_total
  • ocnrf_forward_nfStatusSubscribe_tx_requests_total
  • ocnrf_forward_nfStatusSubscribe_rx_responses_total
  • ocnrf_forward_nfDiscover_tx_requests_total
  • ocnrf_forward_nfDiscover_rx_responses_total
  • ocnrf_forwarding_jetty_latency_seconds
  • ocnrf_forward_nfDiscover_barred_total
  • ocnrf_forward_accessToken_barred_total
  • ocnrf_forward_nfStatusSubscribe_barred_total
  • ocnrf_forward_profileRetrieval_barred_total

For more information on NRF Forwarding feature metrics and KPIs, see NRF Forwarding Metrics, and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.27 NRF Georedundancy

NRF supports georedundancy to ensure high availability and redundancy. It offers two-site, three-site, or four-site georedundancy to ensure service availability when one of the NRF sites is down. When NRF is deployed as a georedundant site, all the sites work in an active state, and the same data is available at all the sites.

The NFs send service requests to their primary NRF. When the primary NRF site is unavailable, the NFs redirect the service requests to the alternate site NRF. In this case, NFs get the same information from the alternate NRF, and each site maintains and uses its own set of configurations.

The NRF's state data is replicated between the georedundant sites using cnDBTier's replication service.

Following are the prerequisites for georedundancy:

  • Each site configures the remote NRF sites that it is georedundant with.
  • Once the georedundancy feature is enabled on a site, it cannot be disabled.
  • Georedundant sites must be time synchronized.
  • Georedundant sites must be reachable from NFs or Peers on all the sites.
  • NFs are required to configure primary and alternate NRFs so that when one site is down, the alternate NRF can provide the required service operations.
  • At any given instance, NFs communicate with only one NRF, that is, NFs register services and maintain heartbeats with only one NRF. The data is replicated across the georedundant NRFs, thereby allowing the NFs to seamlessly move between the NRFs in case of failure.
  • Ensure that the site is isolated before performing georeplication.

Managing NRF Georedundancy Feature

Prerequisites
  1. cnDBTier is installed and configured for each site, and the DB replication channels between the sites are up. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  2. Configure MySQL Database, Users, and Secrets. For the configuration procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Preconfigurations for Georedundancy Feature
The following preconfigurations must be performed before enabling the georedundancy feature.
  • Preconfigurations using Helm:
    1. Before performing the upgrade, update the Database Monitor service host and port in the following attributes under the appInfo parameter section, as per the deployment:
        # Indicates whether to monitor the database status
        watchMySQL: true
        # URI used by the appinfo service to retrieve the replication channel status from cnDBTier.
        # The service name used is the db-monitor service. Format:
        # "http://<db monitor service name>:<db monitor service port>/db-tier/status/replication/realtime"
        replicationUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/replication/realtime"
        # URI used by the appinfo service to retrieve the database status from cnDBTier (for future use).
        # Configure this URI correctly to avoid continuous error logs in appinfo. Format:
        # "http://<db monitor service name>:<db monitor service port>/db-tier/status/local"
        dbStatusUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/local"
        # URI used by the appinfo service to retrieve the realtime database status from cnDBTier (for future use).
        # Configure this URI correctly to avoid continuous error logs in appinfo.
        realtimeDbStatusUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/cluster/local/realtime"
        # URI used by the appinfo service to retrieve the cnDBTier version (supported from cnDBTier 22.4.0). Format:
        # "http://<db monitor service name>:<db monitor service port>/db-tier/version"
        dbTierVersionUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/version"
        # URI used by the appinfo service to retrieve alerts from the alert manager of the CNE. Format:
        # "http://<alert manager service name>:<alert manager service port>/<cluster name>/alertmanager"
        alertmanagerUrl: "http://occne-prom-alertmanager.occne-infra:80/cluster/alertmanager"
    2. Run helm upgrade to apply the above configuration.
    3. Verify whether the appinfo service is able to fetch the database replication status by querying the following API:
      Resource Method: GET
      Resource URI: <appinfo-svc>:<appinfo-port>/status/category/replicationstatus

      Where,

      <appinfo-svc> is the appinfo service name.

      <appinfo-port> is the port of appinfo service.

      Sample response:
      [
        {
          "localSiteName": "nrf1",
          "remoteSiteName": "nrf2",
          "replicationStatus": "UP",
          "secondsBehindRemote": 0,
          "replicationGroupDelay": [
            {
              "replchannel_group_id": "1",
              "secondsBehindRemote": 0
            }
          ]
        }
      ]
      

      If a similar response is not received, appinfo is not querying the Database Monitor service correctly; recheck the configuration.

    4. This step must be performed only after verifying that appinfo is able to fetch the status correctly. If the above configuration is not correct, NRF services will receive incorrect information about the database replication channel status and hence will assume that the replication channel status is DOWN.

      Configure the NRF microservice to query appinfo for the replication status using the Georedundancy Options API or CNC Console:

      Resource Method: PUT
      Resource URI: <configuration-svc>:<configuration-port>/nrf-configuration/v1/geoRedundancyOptions
      {
        "replicationStatusUri": "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus"
      }
      

      Where,

      <configuration-svc> is the nrfconfiguration service name.

      <configuration-port> is the port of nrfconfiguration service.

      <appinfo-svc> is the appinfo service name.

      <appinfo-port> is the port of appinfo service.

      Perform the configurations in CNC Console as described in Georedundancy Options.

    5. Verify that the OcnrfDbReplicationStatusInactive and OcnrfReplicationStatusMonitoringInactive alerts are not raised on the alert dashboard.
  • Preconfigurations using REST API:
    1. Deploy NRF as per the installation procedure provided in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
    2. Enable the Georedundancy feature using the CNC Console or REST API:
      Following is a sample configuration at Site Chicago, which is georedundant with Site Atlantic (siteName: Atlantic, nrfInstanceId: 723da493-528f-4bed-871a-2376295c0020) and Site Pacific (siteName: Pacific, nrfInstanceId: cfa780dc-c8ed-11eb-b8bc-0242ac130003):
      # Resource URI: {apiRoot}/nrf-configuration/v1/geoRedundancyOptions
      # Method: PUT
      # Content Type: application/json
      {
        "useRemoteDataWhenReplDown": false,
        "featureStatus": "ENABLED",
        "monitorNrfServiceStatusInterval": "5s",
        "monitorDBReplicationStatusInterval": "5s",
        "replicationDownTimeTolerance": "10s",
        "replicationUpTimeTolerance": "10s",
        "replicationLatencyThreshold": "20s",
        "replicationStatusUri": "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus",
        "siteNameToNrfInstanceIdMappingList": [
          {"siteName": "atlantic", "nrfInstanceId": "723da493-528f-4bed-871a-2376295c0020"},
          {"siteName": "pacific", "nrfInstanceId": "cfa780dc-c8ed-11eb-b8bc-0242ac130003"}
        ]
      }

    NRF considers NfProfiles registered across georedundant sites for all its service operations. However, if NRF detects that the replication status with its mated sites is down, it considers only the NfProfiles registered at the local site. If NfProfiles registered across georedundant sites must be considered for certain service operations (for example, service operations that result in a DB update), set the useRemoteDataWhenReplDown attribute to true.

    This attribute is applicable only for the NfHeartBeat, NfUpdate (Patch), NfDeregister, NfStatusSubscribe (Patch), and NfStatusUnSubscribe service operations. It ensures that during an NRF site down scenario, if the NF moves to its mated NRFs, the NF need not reregister or resubscribe. However, if this attribute is set to false and the NF switches to the mated site, the NF receives a 404 response and:
    • for NfUpdate (Patch), NfDeregister, and NfHeartBeat operations, the NF is expected to register again with the NRF using the NfRegister service operation.
    • for NfStatusSubscribe (Patch) and NfStatusUnSubscribe, the NF is expected to subscribe again with the NRF using the NfStatusSubscribe service operation.

    Note:

    This attribute is not applicable for NfDiscovery, NfAccessToken, NfProfileRetrieval, and NfListRetrieval service operations.
  • Preconfigurations using CNC Console:
    1. Configure Replication Status Uri to "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus".
    2. Configure the Site Name To NRF InstanceId Mapping List group with the NRF Instance Id and the corresponding database Site Name of the remote site(s) with which the given NRF is georedundant.

    Note:

    Configure these mandatory attributes before enabling the Georedundancy feature. If these attributes are not configured during deployment or using the CNC Console postdeployment, georedundancy cannot be enabled, and NRF at the site acts as a standalone NRF.
Enable

After performing the above-mentioned preconfigurations, enable the NRF georedundancy feature using the REST API or CNC Console.

  • Enable using REST API: Set featureStatus to ENABLED in Georedundancy Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Georedundancy Options page. For more information about enabling the feature using CNC Console, see Georedundancy Options.

Configure

You can configure the georedundancy feature using REST API or CNC Console:
  • Configure Georedundancy using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Georedundancy using CNC Console: Perform the feature configurations as described in Georedundancy Options.

Fetching the Database Replication Channel Status Using appinfo Microservice

NRF microservices require the database replication channel status for various NRF service operations. The Database Monitor service exposes APIs that provide the database replication channel status. The following NRF microservices query this API:
  • Nfregistration
  • Nfsubscription
  • NfDiscover
  • nrfAuditor
  • NfAccessToken
  • NrfConfiguration
  • NrfArtisan

Currently, all pods of the above services periodically query the Database Monitor service over REST API and maintain the status in memory. When NRF is scaled to handle a high traffic rate, the number of pods querying the Database Monitor service becomes very high, but the Database Monitor service is not designed to handle such a high traffic rate.

Observe

Following are the georedundancy feature specific metrics:
  • ocnrf_dbreplication_status
  • ocnrf_dbreplication_down_time_seconds
  • ocnrf_nf_switch_over_total
  • ocnrf_nfSubscriptions_switch_over_total
  • ocnrf_stale_nf_deleted_total
  • ocnrf_stale_nfSubscriptions_deleted_total
  • ocnrf_reported_dbreplication_status

For more information on georedundancy metrics and KPIs, see Georedundancy Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persists, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.27.1 NRF Last Known Response

When an NRF detects that the replication channel with its georedundant NRF(s) is down, it stops considering the NFs registered in the georedundant NRF(s) for any service operations. Due to unavailability of the replication channel, the latest status of the NFs registered in the georedundant NRF(s) will not get replicated. In this state the last replicated NfProfiles may be stale, hence NRF does not consider these profiles for any service operations.

If the replication channels go down during platform maintenance, network issues, or for any other reason, it is acceptable for NRF to use the last known NfProfiles of remote NRFs for its service operations, given that the NFs do not usually register, deregister, or change their profiles very often.

As part of this feature, overrideReplicationCheck Helm configurable parameter is provided to control the behavior when replication channels are down. The parameter is applicable for the following NRF microservices only:
  • nfregistration
  • nfaccesstoken
  • nfdiscovery
  • nrfartisan
  • nrfcachedata

Note:

overrideReplicationCheck is a global parameter and a reference variable. The value of this parameter applies to the above-mentioned microservices.

Enable

You can enable Last Known Response using Helm.

Note:

Perform the following steps only after the Georedundancy feature is enabled as described in Georedundancy Options.
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set global.overrideReplicationCheck to true to enable the Last Known Response feature.

    Note:

    global.overrideReplicationCheck is a reference variable.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
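
For reference, the corresponding snippet in the ocnrf_custom_values_25.1.200.yaml file is sketched below (surrounding attributes omitted):

  global:
    # Reference variable; its value applies to the nfregistration, nfaccesstoken,
    # nfdiscovery, nrfartisan, and nrfcachedata microservices
    overrideReplicationCheck: true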

NRF Behavior When the Feature is Disabled

The following scenarios explain the behavior of each NRF microservice when this feature is disabled:

  • Replication is Up
    • NF Register Microservice:
      • NfRegister, NfUpdate, and NfDeregister Processing: The service operations will be successful, irrespective of the site at which the NF was originally registered.
      • NfListRetrieval and NfProfileRetrieval Processing:

        The NRF will consider:
        • The NFs registered and heartbeating at the local site.
        • The NFs registered and heartbeating at the remote sites, which are replicated from its georedundant mate NRFs.
    • NF Discover and NF AccessToken Microservice:

      The NRF will consider:
      • The NFs registered and heartbeating at the local site.
      • The NFs registered and heartbeating at the remote sites, which are replicated from its georedundant mate NRFs.
    • NrfArtisan Microservice: The SLFs registered at both the local site and remote sites are used for discovering UDRs to perform the SLF query, when slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE.
    • NrfAuditor Microservice:
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, it is marked as SUSPENDED. The NF is thereafter not included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change are notified about the change.

      • When NRF discovers that any of its subscriptions has crossed the validity period, it deletes the subscription.
      • NRF will also audit the NfProfiles and subscriptions of the remote site NRFs. If NRF discovers that any of the registered profiles at the remote site has missed heartbeats, it is marked as PROBABLE_SUSPENDED (an internal state). The NF is thereafter not included in the nfDiscovery or nfAccessToken response. The NF is eventually marked as SUSPENDED if it continues to miss its heartbeat for a configurable period. The consumer NFs that have subscribed for the change are not notified. When NRF receives a profile retrieval request for an NfProfile that is in the PROBABLE_SUSPENDED state, it returns the profile in the response with the status set to UNDISCOVERABLE.
      • When NRF discovers that any of the subscriptions at the remote site has crossed the validity period, it marks the subscription as SUSPENDED.

      Note:

      The global.overrideReplicationCheck attribute is not configurable for the NrfAuditor microservice. Refer to the behavior for NrfAuditor when the replication channel status is UP or DOWN.
  • Replication is Down
    • NF Register Microservice:
      • NfRegister, NfUpdate, and NfDeregister Processing:
        • The NfProfiles will be processed if the NF is registered at the local site.
        • If the NF is registered at the remote site, then the NfProfile will be processed only if geoRedundancyOptions.useRemoteDataWhenReplDown is true. Otherwise, the request will be rejected with 404 Not Found.
      • NfListRetrieval and NfProfileRetrieval Processing:
        • The NRF will consider the NFs registered and heartbeating with the local site.
        • The NRF will not consider the NFs that are registered with remote NRF site with which the replication is down.
    • NF Discover and NF AccessToken Microservice:

      The NRF will consider:
      • The NFs registered and heartbeating at the local site.
      • NRF will not consider the NFs registered and heartbeating at the remote sites with which the replication is down.
    • NrfArtisan Microservice:
      • When slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE, NRF will consider:
        • The SLFs registered and heartbeating at the local site.
        • NRF will not consider the SLFs registered with remote site NRFs with which the replication is down.
    • NrfAuditor Microservice:

      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, the profile is marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The Consumer NFs that have subscribed for the change shall not be notified of the change.

        When NRF discovers any of its subscriptions have crossed the validity period, it shall mark the Subscription as SUSPENDED (internal state).

      • NRF will not audit the remote site NRFs NfProfiles and Subscriptions.

NRF Behavior When the Feature is Enabled and Replication is Up or Down

  • NF Register Microservice:
    • NfRegister, NfUpdate, and NfDeregister Processing:
      • The profiles will be processed for the NFs registered at the local site or at the remote sites, irrespective of the value of the geoRedundancyOptions.useRemoteDataWhenReplDown parameter.
    • NfListRetrieval and NfProfileRetrieval Processing:

      The NRF will consider:
      • The NFs registered and heartbeating at the local site.
      • The NFs registered and heartbeating at the remote sites (last known status), which are replicated from its Georedundant mate NRFs.
  • NF Discover and NF AccessToken Microservice:

    The NRF will consider:
    • The NFs registered and heartbeating at the local site.
    • The NFs registered and heartbeating at the remote sites (last known status), which are replicated from its Georedundant mate NRFs.
  • NrfArtisan Microservice:

    The SLFs registered at both local site and remote sites (last known status) are used for discovering UDRs for performing SLF query, when slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE.

  • NrfAuditor Microservice:
    • If the replication channel status is up,
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, the profile is marked as SUSPENDED. The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The Consumer NFs that have subscribed for the change are notified about the change.

        When NRF discovers any of its subscriptions have crossed the validity period, it deletes the subscription.

      • NRF will also audit the remote site NRFs' NfProfiles and Subscriptions. If NRF discovers that any of the registered profiles at the remote site has missed heartbeats, the profile is marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The Consumer NFs that have subscribed for the change are not notified.
      • When NRF discovers any of the subscriptions at the remote site have crossed the validity period, it marks the subscription as SUSPENDED.
    • If the replication channel status is down,
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, the profile is marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The Consumer NFs that have subscribed for the change shall not be notified of the change.

      • When NRF discovers any of its subscriptions have crossed the validity period, it shall mark the Subscription as SUSPENDED (internal state).
      • NRF will not audit the remote site NRFs NfProfiles and Subscriptions.

      Note:

      The global.overrideReplicationCheck attribute is not configurable for the NrfAuditor microservice. Refer to the behavior for NrfAuditor when the replication channel status is UP or DOWN.

4.28 NF Heartbeat Enhancement

This feature allows the operator to configure the minimum, maximum, and default heartbeat timers, and the maximum number of consecutive heartbeats that the NF is allowed to skip. Further, these values can be customized per NF type.

According to 3GPP TS 29.510, every NF registered with NRF keeps its operative status alive by sending NF heartbeat requests periodically. The NF can optionally send the heartbeatTimer value when it registers its NFProfile or when it wants to update its registered NFProfile.

NRF may modify the value of the heartbeatTimer based on its configuration and return the new value to the NF on successful registration. The NF will thereafter use the heartbeatTimer as received in the registration response as its heartbeat interval.

In case the configuration changes at the NRF for the heartbeatTimer, the changed value must be communicated to the NF in the response of the next periodic NF heartbeat request or when it next sends an NFUpdate request to the NRF.

NRF monitors the operative status of all the NFs registered with it, and when it detects that an NF has missed updating its NFProfile or sending a heartbeat for the heartbeat interval, NRF must mark the NFProfile as SUSPENDED. The NFProfile and its services may no longer be discoverable by the other NFs through the NfDiscovery service. The NRF notifies the subscribed NFs of the change in status of the NFProfile.
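The shape of a per-NF-type heartbeat configuration could resemble the following hedged sketch. The attribute names below are hypothetical illustrations only; the authoritative attribute names and API path are defined in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:

    {
      "nfType": "AMF",
      "defaultHeartBeatTimer": 60,
      "minHeartBeatTimer": 30,
      "maxHeartBeatTimer": 120,
      "maxMissedHeartBeats": 3
    }

With such values, an AMF that registers without proposing a heartbeatTimer would be assigned 60 seconds, a proposed value would be clamped to the 30 to 120 second range, and the NF would be marked SUSPENDED after three consecutive missed heartbeats.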

Managing NF Heartbeat Enhancement

Enable

The NF heartbeat is a core functionality of NRF. You do not need to enable or disable this feature.

Configure

Configure the NF Heartbeat Enhancement using REST API or CNC Console:
  • Configure NF Heartbeat using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NF Heartbeat using CNC Console: Perform the feature configurations as described in NF Management Options.

Observe

For more information on heartbeat metrics and KPIs, see NRF Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.29 Service Mesh for Intra-NF Communication

Oracle NRF leverages the Istio or Envoy service mesh (Aspen Service Mesh) for all internal and external communication. The service mesh integration provides inter-NF communication and allows the API gateway to work along with the service mesh. The service mesh integration supports the services by deploying a special sidecar proxy in the environment to intercept all network communication between microservices. For more information on configuring ASM, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

4.30 NF Screening

The incoming service requests from 5G Network Functions (NFs) must be screened before allowing access to Nnrf_NfManagement service operations to ensure security.

The NF Screening feature screens the incoming service requests based on certain attributes in the NfProfile against a set of screening rules configured at NRF. NRF processes the incoming service requests, and allows them to invoke Nnrf_NfManagement service operations, only if the screening is successful.

NRF supports the following NF screening rule list types. For more information about the screening rules applicable to different attributes in NfProfile, see Table 4-21.

Table 4-19 Screening Rules List Type

Screening Rule List Description
"NF_FQDN" Screening List type for NF FQDN. This screening rule type is applicable for fqdn of a NfProfile in NF_Register and NF_Update service operations.
"NF_IP_ENDPOINT" Screening list type for IP Endpoint. This screening rule type is applicable for ipv4address, ipv6address attributes at NfProfile level and ipEndPoint attribute at nfServices level for NF_Register and NF_Update service operations.
"CALLBACK_URI" Screening list type for callback URIs in NF Service and nfStatusNotificationUri in SubscriptionData. This is also applicable for nfStatusNotificationUri attribute of SubscriptionData for NFStatusSubscribe service operation. This screening rule type is applicable for defaultNotificationSubscription attribute at nfServices level for NF_Register and NF_Update service operations.
"PLMN_ID" Screening list type for PLMN ID. This screening rule type is applicable for plmnList attribute at NfProfile level for NF_Register and NF_Update service operations.
"NF_TYPE_REGISTER" Screening list type for allowed NF Types to register. NRF supports 3GPP TS 29510 Release 15 and specific Release 16 NF Types. For more information on the supported NF Types list, see "Supported NF Types" section. This screening rule type is applicable for nfTypeList attribute at NfProfile level for NF_Register and NF_Update service operations.

When a service request is received, NRF performs screening of the service request for each of the above-mentioned screening rule list types, as follows:

  • Checks if the global screening option is Enabled or Disabled.

    You can configure the nfScreeningOptions.featureStatus parameter as "ENABLED" using REST API or CNC Console.

    Note:

    By default, the NF Screening feature is disabled globally.
  • If it is Enabled, NRF checks if the attribute in the NfProfile is configured under BLOCKLIST or ALLOWLIST. For more information about these attributes, see Table 4-20.

    You can configure nfScreeningType parameter as BLOCKLIST or ALLOWLIST for the specific screening rule configuration. For more information about the screening rules applicable to different attributes in NfProfile, see Table 4-21.

    Table 4-20 NF Screening Type

    NfScreeningType Description
    BLOCKLIST If the attribute is configured as Blocklist and the attribute in the request matches the configured value, the service request is not processed further. For example, if nfFqdn is configured as Blocklist, then a service request with that fqdn present in the NfProfile is not processed further.
    ALLOWLIST If the attribute is configured as Allowlist and the attribute in the request matches the configured value, then the service request is processed further. For example, if nfIpEndPointList is configured as Allowlist, then a service request with those ipv4Addresses present in the NfProfile is processed further.
  • Based on the nfScreeningType parameter configuration, NRF checks the screening rules at the per-NF-type and global screening rules data levels. Depending on the configuration, the service request is processed. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
The following table describes the screening rules applicable to different attributes in NfProfile.

Table 4-21 Screening rules applicable to NfProfile attributes

Management Service Operation Screening Rules List Attribute in NfProfile Attribute in REST
NF_Subscribe CALLBACK_URI SubscriptionData.nfStatusNotificationUri nfCallBackUriList
NF_Register, NF_Update CALLBACK_URI NfService.defaultNotificationSubscriptions nfCallBackUriList
NF_Register, NF_Update NF_FQDN NfProfile.fqdn nfFqdn
NF_Register, NF_Update NF_IP_ENDPOINT NfProfile.ipv4Addresses, NfProfile.ipv6Addresses, NfService.ipEndPoints nfIpEndPointList
NF_Register, NF_Update PLMN_ID NfProfile.plmnList plmnList
NF_Register, NF_Update NF_TYPE_REGISTER NfProfile.nfType nfTypeList

Managing NF Screening Feature

Enable

You can enable the NF Screening feature using the CNC Console or REST API.

  • Enable using REST API: Set featureStatus to ENABLED in NF Screening Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the NF Screening Options page. For more information about enabling the feature using CNC Console, see NF Screening Options.

Configure

You can configure the NF Screening feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in NF Screening Options.
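As a hedged illustration of the blocklist behavior described in this section: only featureStatus, nfScreeningType, and the screening rule list types are taken from this document, while the payload grouping and the nfFqdnList attribute name are assumptions; see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for the authoritative schema:

    {
      "featureStatus": "ENABLED",
      "nfFqdn": {
        "nfScreeningType": "BLOCKLIST",
        "nfFqdnList": ["nf1.untrusted.operator.com"]
      }
    }

With such a configuration, an NF_Register or NF_Update request whose NfProfile fqdn matches an entry in the blocklist is not processed further.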

Observe

Following are the NF Screening feature-specific metrics:
  • ocnrf_nfScreening_nfFqdn_requestFailed_total
  • ocnrf_nfScreening_nfFqdn_requestRejected_total
  • ocnrf_nfScreening_nfIpEndPoint_requestFailed_total
  • ocnrf_nfScreening_nfIpEndPoint_requestRejected_total
  • ocnrf_nfScreening_callbackUri_requestFailed_total
  • ocnrf_nfScreening_callbackUri_requestRejected_total
  • ocnrf_nfScreening_plmnId_requestFailed_total
  • ocnrf_nfScreening_plmnId_requestRejected_total
  • ocnrf_nfScreening_nfTypeRegister_requestFailed_total
  • ocnrf_nfScreening_nfTypeRegister_requestRejected_total
  • ocnrf_nfScreening_notApplied_InternalError_total

For more information about NF Screening metrics and KPIs, see NF Screening Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.31 REST Based NRF State Data Retrieval

The REST based NRF state data retrieval feature provides options for non-signaling APIs to access NRF state data. It helps the operator access the NRF state data to understand and debug issues in the event of failures.

This feature provides various queries that help retrieve the data as per the requirement.

Managing REST Based NRF State Data Retrieval Feature

Configure

You can configure the REST Based NRF State Data Retrieval feature using REST API. For more information on state data retrieval, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.32 Key-ID for Access Token

The Key-ID (kid) feature adds the "kid" header in the Access Token Response generated by NRF. As per RFC 7515 Section 4.1.4, the Key-ID (kid) indicates which key was used to secure the JSON Web Signature (JWS).

Note:

You must perform OCNRF Access Token Service Operation configuration before configuring Key-ID for Access Token.

Each NRF and producer NF can have multiple keys, with each algorithm indexed by a configured "kid". NRF REST based configuration provides options to configure different key-ids and the corresponding NRF private keys and public certificates, along with the corresponding OAuth signing algorithms. One of the configured key-ids can be set as the current Key-ID. While generating the OAuth access token, NRF uses the keys, algorithm, and certificates corresponding to the current Key-ID.

NRF configuration provides the "addkeyIDInAccessToken" attribute, which indicates whether to add the Key-ID header in the Access Token Response. If the value is true, the currentKeyID value is added in the "kid" header of the AccessToken response. If the value is false, the "kid" header is not added to the AccessToken response.
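For example, if addkeyIDInAccessToken is true and the current Key-ID is "ocnrf-key-1" (an illustrative value), the JOSE header of the signed access token carries the kid parameter alongside the signing algorithm, as defined in RFC 7515:

    {
      "alg": "ES256",
      "typ": "JWT",
      "kid": "ocnrf-key-1"
    }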

See Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for more information on how to check AccessToken Signing Key Status.

Managing Key-ID for Access Token Feature

Enable

You can enable the Key-ID for Access Token feature using the CNC Console or REST API.

Enable the Key-ID for Access Token feature using REST API as follows:
  1. Use the API path as {apiRoot}/nrf-configuration/v1/nfAccessTokenOptions.
  2. Content type must be application/json.
  3. Run the PUT REST method using the following JSON:
    {
      "addkeyIDInAccessToken": true
    }
Enable the Key-ID for Access Token feature using CNC Console as follows:
  1. From the left navigation menu, navigate to NRF and then select NF Access Token Options. The NF Access Token Options page is displayed.
  2. Click Edit from the top right side to edit or update NF Access Token Options parameter. The page is enabled for modification.
  3. Under the Token Signing Details section, select Add KeyID in AccessToken as True from the drop-down menu.
  4. Click Save to save the NF Access Token Options.

Configure

You can configure the Key-ID for Access Token Feature using REST API or CNC Console:
  • Configure Key-ID for Access Token using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Key-ID for Access Token using CNC Console: Perform the feature configurations as described in NF Access Token Options.

Observe

Metrics

For more information on Key-ID for Access Token metrics, see the NF Access token Metrics section.

KPIs

For more information on Key-ID for Access Token KPIs, see the NRF KPIs section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.33 NRF Access Token Service Usage Details

NRF implements the Nnrf_AccessToken service (used for OAuth2 authorization), along with the "Client Credentials" authorization grant. It exposes a "Token Endpoint" where NF service consumers can request access tokens through the Access Token Request service operation.

The Nnrf_AccessToken service operation is defined as follows:
  • Access Token Request (Nnrf_AccessToken_Get)

Note:

This procedure is specific to the NRF Access Token service operation. NRF general configurations, database, and database-specific secret creation are not part of this procedure.

Procedure to use NRF Access Token Service Operation

This procedure provides step-by-step details to use 3GPP defined access token service operation supported by NRF.
  1. Create NRF private key and public certificate

    This step explains the need to create the NRF private keys and public certificates.

    Private keys are used by NRF to sign the generated Access Token. They are available only with NRF.

    Public certificates are used by producer NFs to validate the access token generated by NRF. So, public certificates are available with producer network functions. NRF does not need the public certificate while signing the access token.

    The expiry time of the certificate is required to set the appropriate validity time in the AccessTokenClaim.

    Note:

    For more details about the validity time of AccessTokenClaim, see "oauthTokenExpiryTime".
    Two types of signing algorithms are supported by NRF. For both types, different keys and certificates must be generated:
    • ES256: ECDSA digital signature with SHA-256 hash algorithm
    • RS256: RSA digital signature with SHA-256 hash algorithm
    Either or both of the algorithm-specific files can be generated, depending upon the usage of hash algorithms. Depending upon the NRF REST based configuration, the corresponding keys and certificates are used to sign the Access Token.

    Note:

    The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
    Sample keys and certificates:

    After running this step, the private keys and public certificates of NRF are created (the generated files depend upon the algorithms chosen by the operator or user). There can be multiple such pairs of private keys and public certificates, which will eventually be configured in NRF with different KeyIds.

    For example:

    ES256 based keys and certificates:
    • ecdsa_private_key.pem

    • ecdsa_certificate.crt

    RS256 based keys and certificates:
    • rsa_private_key.pem

    • rsa_certificate.crt

    Note:

    • Only unencrypted keys and certificates are supported.
    • For RSA, the supported versions are PKCS1 and PKCS8.
    • For ECDSA, the supported version is PKCS8.
  2. Namespace creation for Secrets

    This step explains the need to create Kubernetes namespace in which Kubernetes secrets are created for NRF private keys and public certificates. For creating namespace, see the "Verifying and Creating NRF Namespace" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • Different namespaces or the same namespace can be used for NRF private keys and public certificates.
    • It can be the same namespace as for NRF deployment.
    • Appropriate RBAC permission needs to be associated with the ServiceAccount, if the namespace is other than NRF's namespace.
  3. Secret creation for NRF private keys and public certificates
    This step explains commands to create the Kubernetes secret(s) in which NRF private keys and public certificates can be kept safely. For configuring secrets, see the "Configuring Secret for Enabling Access Token Service" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    The same secret that already exists for NRF private keys can be reused.

    A sample command to create or update the secret is provided in the "Configuring Kubernetes Secret for Accessing NRF Database" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide. If multiple secrets need to be created or updated, one for each entity, the same section can be followed.

  4. Perform NRF REST based configuration with outcome details of Steps 1 to 3

    This step explains the NRF REST based configuration to use the NRF private keys, public certificates, secret(s), and secret namespace(s).

    NRF REST based configuration provides options to configure different key-ids and corresponding NRF private keys and public certificates, along with the corresponding OAuth signing algorithms. One of the configured key-ids can be set as the current key-id.

    While generating the oauth access token, NRF uses the keys, algorithm, and certificates corresponding to the current key-id.

    For more information on NF Access Token options configuration using REST APIs, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
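    A hedged sketch of what such a key-id configuration could look like follows. Except for currentKeyID (described earlier in this document) and the sample file names from Step 1, every attribute name here is a hypothetical illustration; see the REST Specification Guide for the actual schema:

    {
      "currentKeyID": "ocnrf-key-1",
      "keyIdList": [
        {
          "keyID": "ocnrf-key-1",
          "algorithm": "ES256",
          "privateKeySecretName": "ocnrf-signing-secret",
          "privateKeyFileName": "ecdsa_private_key.pem",
          "certificateSecretName": "ocnrf-signing-secret",
          "certificateFileName": "ecdsa_certificate.crt",
          "secretNameSpace": "ocnrf"
        }
      ]
    }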

4.34 Access Token Request Authorization

NRF follows and supports the 3GPP TS 29.510 based verification of Access Token Authorization requests for a specific NF producer, based on the allowed NFType and PLMN present in the NFProfiles. As an extension to this requirement, screening of Access Token requests based on NFType is included.

NRF plays a major role as an OAuth2.0 Authorization server in 5G Service based architecture. When an NF service Consumer needs to access the services of an NF producer of a particular NFType and NFInstanceId, it obtains an OAuth2 access token from the NRF. NRF performs the required authorization, and if the authorization is successful, a token is issued with the requested claims. NRF provides an option to the user to specify the authorization of the Producer-Consumer NF Types along with the producer NF's services.

The operator can configure the mapping of the Requester NFType, Target NFType, and the allowed services of the Target NF. The received Access Token request is validated against this configuration, and NRF processes the request further only if the authorization is successful. Allowed Services can be configured as a single wildcard '*', which denotes that all the Target NF's services are allowed for the consumer NF. The operator can also configure the HTTP status code and error description used in the Error Response sent by NRF when the Access Token request is rejected.

Note:

When the Access Token Authorization feature is enabled, it is expected that the requester and the target NFs are registered in NRF for the validation. So, if the targetNfType is not specifically mentioned in the request, the targetNfType is extracted from the registered profile in the database using the targetNfInstanceId. Similarly, if the requesterNFInstanceId is present in the request, the requesterNfType is extracted from the registered profile in the database.

The Access Token configurable attribute "logicalOperatorForScope" is used while authorizing the services in the Access Token Request's scope against the allowed services in the configuration. If logicalOperatorForScope is set to "OR", at least one of the services in the scope must be present in the allowed services. If it is set to "AND", all the services in the scope must be present in the allowed services.

The authFeatureConfig attribute under nfAccessTokenOptions provides the support required to use NRF Access Token Request Authorization Feature. For more details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

When the authFeatureConfig attribute is ENABLED, the nfType validation is performed as follows:
  • targetNfType and requesterNfType are matched with the nfTypes used for accessToken generation. This configuration overrides the nfType validation against the allowedNfTypes in nfProfile or nfServices.
  • If the above-mentioned validation is not met, then:
    • requesterNfType from the accessToken request is validated against the allowedNfTypes present in the producer NF's nfServices, if present.
    • requesterNfType from the accessToken request is validated against the allowedNfTypes from the producer nfProfile, if allowedNfTypes is not present in the nfServices.
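Putting these attributes together, a hedged sketch of a possible nfAccessTokenOptions fragment follows. The authFeatureConfig and logicalOperatorForScope attribute names come from this section; the rule list structure, the attribute names inside it, the service name, and the error details are illustrative assumptions:

    {
      "authFeatureConfig": "ENABLED",
      "logicalOperatorForScope": "OR",
      "accessTokenAuthRules": [
        {
          "requesterNfType": "SMF",
          "targetNfType": "PCF",
          "allowedServices": ["npcf-smpolicycontrol"],
          "httpStatusCode": 403,
          "errorDescription": "Requester NF not authorized for target services"
        },
        {
          "requesterNfType": "AMF",
          "targetNfType": "UDM",
          "allowedServices": ["*"]
        }
      ]
    }

With logicalOperatorForScope set to "OR", an SMF requesting a scope that includes npcf-smpolicycontrol is authorized; with "AND", every service in the requested scope must be present in the allowed services.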

Managing Access Token Request Authorization Feature

Enable

You can enable the Access Token Request Authorization Feature using the REST API or CNC Console.

  • Enable using REST API: Set featureStatus to ENABLED in NF AccessToken Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the NF Access Token Options page. For more information about enabling the feature using CNC Console, see NF Access Token Options.

Configure

You can configure the Access Token Request Authorization feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    With nfAccessTokenOptions API, authFeatureConfig attribute provides the support required to use NRF Access Token Request Authorization Feature. For more details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • Configure using CNC Console: Perform the feature configurations as described in NF Access Token Options.

Observe

Following are the Access Token Request Authorization feature-specific metrics:
  • ocnrf_accessToken_rx_requests_total
  • ocnrf_accessToken_tx_responses_total

For more information on Access Token Request Authorization metrics and KPIs, see NF Access token Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.35 Preferred Locality Feature Set

The Preferred Locality Feature Set comprises the following features:
  • Preferred Locality
  • Extended Preferred Locality with Location or Location Sets
  • Limiting the Number of Producers Based on NF Set Ids
For more information about these features and their processing, see the following sections.

4.35.1 Preferred Locality

When the consumer NF sends the discovery query along with an additional attribute "preferred-locality", the Preferred Locality feature is applied. By default, such discovery queries are processed as follows:
  1. NRF searches and collects all the NFs profiles matching the search criteria sent in the discovery query except the "preferred-locality" attribute.
  2. NF profiles collected in the above step are arranged as per "preferred-locality". Here, the NFs matching the "preferred-locality" are arranged in increasing order of their priority, and the NF profiles that do not match the "preferred-locality" are then placed in increasing order of their post-processed priority value. The post-processed priority value is obtained by adding the highest NF priority among the NFs matching the "preferred-locality" to the priority value of the non-preferred NF profile, with an additional increment offset value of 1.

    For example: If the highest NF priority value of "preferred-locality" is 10 and the priority value of non-preferred NF profile is 5, then after post processing the priority value of non-preferred NF profile will be 16 (where, 10 is the highest priority value of NF profile with "preferred-locality", 5 is the priority of non-preferred NF and 1 is the offset value).

    For a service-name based discovery query, if there is a single service in the NF profile after discovery processing, the service level priority is updated.

    For a service-name based discovery query, if there are multiple services with the same service name in the NF profile after discovery processing, the service level priority is updated along with the priority value update at the NF profile level.

Managing Preferred Locality feature

Enable

Enabling the Preferred Locality feature: This feature is enabled by default as per 3GPP TS 29.510. There is no option to enable or disable it explicitly.

Observe

There are no specific metrics and alerts for this feature.

For the entire list of NRF metrics and alerts, see NRF Metrics and NRF Alerts section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.35.2 Extended Preferred Locality

The limitation with the default "Preferred Locality" feature is that it does not provide a mechanism to select matching producers from more than one preferred locality, because the "preferred-locality" attribute allows only one location to be provided in the discovery query.

To overcome this limitation, the "Extended Preferred Locality" feature is implemented, where a preferred locality triplet, that is, a collection of primary, secondary, and tertiary preferred localities, is associated with the given "preferred-locality" and "target-nf-type". The "Primary Preferred Locality", "Secondary Preferred Locality", and "Tertiary Preferred Locality" in the "Preferred Locality Triplet" can each be configured with a single location or a location set. That is, the "Location" attribute under "Target Preferred Locations" can be an individual "Location" or a "Location Set Name" configured under the "Location Sets". You can configure a maximum of 255 location sets, and each location set can have up to 10 locations.

Managing Extended Preferred Locality and Limiting the Number of Producers Based on NF Set Ids

Enable

To enable the features:

  • Enable using REST API: Set featureStatus to ENABLED in NF Discovery Options configuration API. For more information about the API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the NF Discovery Options page. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Configure

You can configure the Extended Preferred Locality feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in NF Discovery Options.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.
4.35.2.1 Discovery Processing for Extended Preferred Locality Feature

The discovery query is processed with respect to the Extended Preferred Locality feature as follows:

  1. NRF filters and collects all the NF profiles matching the search criteria sent in the discovery request, except the "preferred-locality" query parameter.
  2. NFProfiles filtered in the above step are then arranged by matching the locality attribute as per the configuration for "Primary Preferred Locality", "Secondary Preferred Locality", and "Tertiary Preferred Locality" in the "Preferred Locality Triplet". The NF profiles that do not match the "preferred-locality" query parameter are kept in a different group.
  3. NFProfiles in each "Preferred Locality Triplet" will be arranged based on their priority. For NFProfiles having the same priority, load is used for arranging the NFProfiles. If the load values are also the same, then the NFProfiles are arranged in a random order.
  4. Additionally, the priorities of NFProfiles falling in the groups of secondary, tertiary, and remaining non-matching locations are updated using post-processed priority values, in a manner similar to that defined in the "Preferred Locality" feature.

    Note:

    When the "Extended Preferred Locality" feature is enabled and in case, if any of the required configurations are not matched with the respective discovery query parameters, NRF will apply the "Preferred Locality" feature.
  5. Locations or a set of locations can be defined in "Preferred Locality Triplet" for the discovery search query.
4.35.2.2 Configuring Extended Preferred Locality

According to 3GPP TS 29.510 V15.5.0, consumer NFs can send discovery query with preferred-locality, along with requester-nf-type, target-nf-type, and nf-services.

Note:

Only "preferred-locality", "target-nf-type", and "nf-services" attributes are considered for this feature, other attributes in the search query are not applicable.
  1. Configure the location types as per the network deployment. Location types can be configured using locationTypes attribute.

    For Example: Category-S, Category-N, and so on.

    Sample location types attribute value:

    "locationTypes": ["Category-S", "Category-N", "Category-x"]

    Note:

    • A maximum of 25 location types can be configured.
    • locationTypes can have a minimum of 3 characters and a maximum of 36 characters. They can contain only alphanumeric characters and the special characters '-' and '_'. They cannot start or end with a special character.
    • Duplicate location types cannot be configured in this attribute.
    • locationTypes are case-sensitive; for example, Category-x and Category-X are different.
  2. Configure NF types (along with different services) under the locationTypeMapping attribute. This attribute allows the operator to create a mapping between different nfType (along with nfServices) and the locationTypes (location types are already configured in Step 1 above).

    For Example: PCF is the NF type, am_policy, bdt_policy are NF services and Category-x is the location type.

    Sample output:
    
    "locationTypeMapping": [{
          "nfType": "PCF",
          "nfServices": ["am_policy", "bdt_policy"],
          "locationType": "Category-x"
        }],

    Note:

    • Configure nfType attribute to map it with the "target-nf-type" attribute in incoming discovery query.
    • nfServices attribute of this configuration is used when "service-names" query parameter is present in discovery query. Different nfServices along with nfType can be mapped with different location types.
    • If nfServices is not required to be mapped with any particular location type, then configure the value of nfServices as '*'. This indicates that the locationType maps to all NF services of that "target-nf-type".
    • The '*' value record for "target-nf-type" is used when no configuration is found corresponding to the service-names attribute of the discovery query.
    • If the "service-names" attribute is unavailable in the discovery query, then the '*' value is used for locationType selection.
    • The "service-names" of the discovery search query can be a subset of the configured nfServices attribute.
      For Example:
      • If sm_policy is the only service-names present in discovery query and configured value for nfServices are sm_policy, bdt_policy. Then this configuration is used for locationType selection.
      • If am_policy, nudr_policy are the service-names present in discovery query and configured value for nfServices are am_policy, nudr_policy, xyz_policy. Then this configuration is used for locationType selection.
    • The same nfServices cannot be mapped to different location types.

      For example:

      If one locationType, say Category-N, is already mapped to nfServices with the value 'sm_policy' and nfType 'PCF', then sm_policy (individually or within a group of NF services) cannot be mapped to another location type for 'PCF'.

    • If the service-names attribute of the search query has multiple services that are mapped to different locationTypes in the configured nfServices attribute (individually or as groups of NF services), then the '*' value record for the target-nf-type is used for location type selection.
    • A maximum of 100 locationTypeMapping values can be configured.
    Sample locationTypeMapping attribute value
    
    "locationTypeMapping": [{
          "nfType": "PCF",
          "nfServices": ["am_policy", "bdt_policy"],
          "locationType": "Category-x"
        },
        {
          "nfType": "PCF",
          "nfServices": ["sm_policy"],
          "locationType": "Category-N"
        },
        {
           "nfType": "PCF",
           "nfServices": ["*"],
           "locationType": "Category-S"
        },
        {
           "nfType": "AMF",
           "nfServices": ["*"],
           "locationType": "Category-S"
         }
       ]
  3. Configure the preferredLocationDetails corresponding to the selected locationType and the preferred-locality. The preferredLocation attribute (mapped from the preferred-locality derived from the discovery search query), together with the locationType selected from the locationTypeMapping attribute, maps to a preferredLocationDetails entry.

    Note:

    • A maximum of 650 preferredLocationDetails values can be configured.
    • Different priorities can be configured for preferred locations.
    • The preferredLocation attribute for this configuration is mapped to the preferred-locality (derived from the discovery search query).
    • The targetLocationType attribute for this configuration is mapped to the locationType selected from the locationTypeMapping attribute.
    • Corresponding to the preferredLocation and targetLocationType attributes, targetPreferredLocations can be configured. targetPreferredLocations are defined by the operator.
    • The targetPreferredLocations attribute can have up to 3 target preferred locations.
    • A priority is assigned to each targetPreferredLocations entry. Different targetPreferredLocations cannot have the same priority.
    • targetPreferredLocations can be the same as the preferredLocation attribute value.
    • In case the location attribute in targetPreferredLocations is a set of locations, then configure the locationSets attribute:
      • A maximum of 255 locationSets can be configured.
      • The length of the location name can be in the range of 5 to 100 characters.
      • A maximum of 10 locations can be configured per location set.
    Sample preferredLocationDetails attribute value
    "preferredLocationDetails": [{
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-x",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "Azusa"
                    }, {
                        "priority": 2,
                        "location": "Vista"
                    }, {
                        "priority": 3,
                        "location": "Ohio"
                    }]
                },
                {
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-y",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "RKL"
                    }, {
                        "priority": 2,
                        "location": "CSP"
                    }, {
                        "priority": 3,
                        "location": "West-Region-Edge-Set01"
                    }]
                }
            ],
            "locationSets": [                                   
                {
                    "locationSetName" : "West-Region-Edge-Set01",
                    "locations": ["LA", "SFO"]                        
                }
            ]
4.35.2.3 Limiting the Number of Producers Based on NF Set Ids

"Limiting the Number of Producers Based on NF Set Ids" feature is a supplement to the "Extended Preferred Locality" feature.

In the Extended Preferred Locality feature, if a large number of producers match in each location (primary, secondary, tertiary), NRF potentially returns all the matching producer NFs. For example, if each location has 10 matching producers, NRF ends up sending 30 producers in the discovery response. Returning all 30 producers makes the message too long, which does not use the network resources efficiently.

To enhance the producer selection, NRF supports limiting the number of producers by using nfSetIdList as follows:
  • After NF profile selection is done based on the extended preferred locality logic, only a limited number of NF profiles are selected from the first matching location, based on the configuration attribute "Maximum number of NF Profiles from First Matching Location". The first matching location is the location where the first set of matching producers is found from the Preferred Locality Triplet.
  • From the remaining locations, only those producers are shortlisted whose nfSetIdList matches the NF Set Ids of the first matching location's producers (after the limiting feature is applied), as these producers can be used as alternate producers during failover scenarios.

NRF falls back from the "Limiting the Number of Producers Based on NF Set Ids" feature to the "Extended Preferred Locality" feature in the following scenarios:

  • If the value of the Maximum number of NF Profiles from First Matching Location attribute is configured as 0.
  • If any one of the NF profiles from the first matching location, after limiting the profiles, does not have the nfSetIdList attribute (that is, the profiles are registered without NF Set Ids).

Note:

  • It is recommended to enable this feature only if NFs are registered with the nfSetIdList attribute.
  • In upgrade scenarios, if an extended preferred locality is configured, then for each preferred locality attribute the value of Maximum number of NF Profiles from First Matching Location attribute will become 0, which disables the feature by default.

Managing Limiting the Number of Producers Based on NF Set Ids

Enable

To enable the feature:

  • Enable using REST API: Enabling Limiting the Number of Producers Based on NF Set Ids feature: Set the value of the maxNFProfilesFromFirstMatchLoc field to a value greater than 0 in {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions configuration API. For more information about the API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Enabling Limiting the Number of Producers Based on NF Set Ids feature: Set the value of Maximum number of NF Profiles from First Matching Location attribute to a number greater than 0 under Preferred Location Details on the NF Discovery Options page. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Note:

The Extended Preferred Locality feature should be enabled prior to enabling the Limiting the Number of Producers Based on NF Set Ids feature.

Configure

You can configure the Limiting the Number of Producers Based on NF Set Ids feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions URI as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in NF Discovery Options.

Observe

The following metrics are added for the Limiting the Number of Producers Based on NF Set Ids feature:
  • ocnrf_nfDiscover_limiting_profile_count_for_nfSet_total
  • ocnrf_nfDiscover_limiting_profiles_not_applied_for_nfSet_total

For more information on metrics, see NRF Metrics section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.
4.35.2.3.1 Discovery Processing for Limiting Number of Producers Based on NF Set Ids

The discovery query is processed with respect to the Limiting the Number of Producers Based on NF Set Ids feature as follows:

  1. Perform the same steps with the discovery processing as mentioned in Discovery Processing for Extended Preferred Locality Feature.
  2. When the attribute Maximum number of NF Profiles from First Matching Location is configured to a value greater than 0, NRF allows the consumer NF to select the top "n" producers from the first preferred locality. From the remaining preferred localities, only those producers are selected whose nfSetIdList matches that of the top "n" producers selected from the first preferred locality, as illustrated in the example after these steps.
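    For example (values assumed for illustration): if Maximum number of NF Profiles from First Matching Location is set to 2 and the first matching location contains producers P1 (nfSetIdList ["set-a"]), P2 (["set-b"]), and P3 (["set-c"]) in priority order, NRF keeps only P1 and P2 from that location. From the remaining preferred localities, only the producers whose nfSetIdList contains "set-a" or "set-b" are retained as failover alternates; a producer advertising only "set-c" is not returned.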
4.35.2.3.2 Configuring Limiting Number of Producers Based on NF Set Ids

This section describes configuring the Limiting Number of Producers Based on NF Set Ids feature.

  1. Perform the same steps with the configuration as mentioned in Configuring Extended Preferred Locality.
  2. Configure the maxNFProfilesFromFirstMatchLoc attribute in the preferredLocationDetails to limit the number of producers based on NF Set Ids from the first matching location in a preferred locality triplet. If the value of maxNFProfilesFromFirstMatchLoc attribute is greater than 0, then the feature is enabled.
    Sample output:"maxNFProfilesFromFirstMatchLoc":0, indicates feature is disabled.
    "preferredLocationDetails": [{
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-x",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "Azusa"
                    }, {
                        "priority": 2,
                        "location": "Vista"
                    }, {
                        "priority": 3,
                        "location": "Ohio"
                    }]
                },
                {
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-y",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "RKL"
                    }, {
                        "priority": 2,
                        "location": "CSP"
                    }, {
                        "priority": 3,
                        "location": "West-Region-Edge-Set01"
                    }]
                }
            ],
            "locationSets": [                                   
                {
                    "locationSetName" : "West-Region-Edge-Set01",
                    "locations": ["LA", "SFO"]                        
                }
            ]

4.35.3 NFService Priority Update

This feature is an extension of the existing functionality of the Preferred Locality Feature Set. For more details, see Preferred Locality and Extended Preferred Locality.

With this feature, NRF now updates the NFService level priority along with the NFProfile level priority while processing the discovery query. NRF updates the following:

  • NFProfile level priority considering the lowest NFProfile level priority of NFProfiles
  • NFService level priority considering the lowest NFService level priority of all NFServices

Impact on Preferred Locality feature

For the Preferred Locality feature, there can be two groups of NFProfiles: one group contains NFProfiles whose locality attribute matches the preferred-locality query parameter, and another group contains the rest of the NFProfiles that do not match the preferred-locality query parameter.

With this NF Service priority feature, NFProfile level priority is updated for the second group using the Lowest NFProfile Level Priority value of the first group. Similarly, NFService level priority is updated for the second group using the Lowest NFService Level Priority value among all the NFServices present in the first group.

Priority Value Calculation:

  • NFProfile Level Priority = Actual NFProfile Level Priority Value + Lowest NFProfile Level Priority of the first group + Offset Value (that is, 1)
  • NFService Level Priority = Actual NFService Level Priority Value + Lowest NFService Level Priority among all the NFServices present in the first group + Offset Value (that is, 1)

Impact on Extended Preferred Locality feature

For the Extended Preferred Locality feature, there can be multiple groups of NFProfiles depending on the Extended Preferred Locality feature configuration. The first group contains NFProfiles with the locality attribute matching the first configured matching location or location set, and subsequent groups can be present depending upon the configured locations and the matching NFProfiles.

With NF Service priority feature, NFProfile level priority is updated for the second group onwards using the Lowest NFProfile Level Priority of the previous group. Similarly, NFService level priority is updated for the second group onwards using the Lowest NFService Level Priority among all the NFServices of its previous group.

Priority Value Calculation:

  • NFProfile Level Priority = Actual NFProfile Level Priority Value + Lowest NFProfile Level Priority of the previous group + Offset Value (that is, 1)
  • NFService Level Priority = Actual NFService Level Priority Value + Lowest NFService Level Priority among all the NFServices present in the previous group + Offset Value (that is, 1)
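As a worked illustration with assumed values: if the lowest NFProfile level priority in the previous group is 4, an NFProfile in the current group with an actual priority of 2 is updated to 2 + 4 + 1 = 7. Similarly, if the lowest NFService level priority among all the NFServices of the previous group is 6, an NFService with an actual priority of 3 is updated to 3 + 6 + 1 = 10.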

If priority or load is not present in an NFService or NFProfile, then the values are considered as follows:

  • For NFProfile level, if priority or load values are not present, then the configured defaultPriority and defaultLoad values are considered respectively. Similarly, for NFService level, if priority or load values are not present, then the NFProfile level priority or load values are considered respectively. However, if priority or load values are not present in NFProfile as well, then the configured defaultPriority and defaultLoad values are considered respectively.

    For more information about these parameters, see General Options section.

  • If priority or load attributes are not present in the registered NFProfile or its NFServices, then the final calculated value is set based on the configured defaultPriorityAssignment and defaultLoadAssignment flags, respectively.

    For more information about these parameters, see General Options section.

This feature can be enabled or disabled using the servicePriorityUpdateFeatureStatus flag in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions API or Service Priority Update Feature Status flag in the NF Discovery Options page in the CNC Console.

Managing NFService Priority Update

Enable

To enable the features, see below:

  • Enable using REST API: Set the value of the servicePriorityUpdateFeatureStatus to ENABLED in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions configuration API. For more information about the API, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set the value of the Service Priority Update Feature Status to ENABLED on the NF Discovery Options page. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Configure

You can configure the NFService Priority Update feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
Sample configuration:

   "profilesCountInDiscoveryResponse": 3,
   "discoveryResultLoadThreshold": 0,
   "servicePriorityUpdateFeatureStatus": "ENABLED",
   "discoveryValidityPeriodCfg": ...

  • Configure using CNC Console: Perform the feature configurations as described in NF Discovery Options.

Observe

Metrics

There are no new metrics added to this feature.

Alerts

There are no alerts added or updated for this feature.

KPIs

There are no new KPIs related to this feature.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for the resolution steps.

If the alerts persist, perform the following:

1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.35.3.1 Discovery Processing in NFService Priority Update with preferred-locality
NRF receives the discovery request with the preferred-locality query parameter and checks the status of the servicePriorityUpdateFeatureStatus feature flag. If this feature flag is enabled, NRF performs the following steps:
  1. Filter out the NF Profiles matching the preferred-locality attribute of NFDiscover service operation query.
    • With the Preferred Locality feature, the NF Profiles can be filtered into two groups:
      • the matching location group
      • the non-matching location group
    • With the Extended Preferred Locality feature enabled, the NF Profiles can be filtered into four groups:
      • the first matching location group
      • the second matching location group
      • the third matching location group
      • the non-matching location group
  2. Order the NF Profiles based on the priority and load values at the NFService and/or NFProfile level, depending on whether the Extended Preferred Locality feature is enabled or disabled.
  3. Modify and set the priority at both the NFService and NFProfile levels, based on the offset calculation logic. For more information about the priority value calculation and offset calculation logic, see the priority value calculation in NFService Priority Update.
  4. NRF continues with the rest of the discovery flow.

If the feature is disabled, NRF continues with the existing flow, skipping the above steps.
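
The offset calculation in step 3 can be illustrated with a minimal sketch, assuming a simplified profile structure; this is not NRF source code, and the reading of "lowest priority" as the least preferred (numerically largest) value is an assumption of the sketch:

    import java.util.List;

    public class PriorityOffsetSketch {

        /** Simplified stand-in for an NFProfile carrying only a priority value. */
        static class Profile {
            int priority;
            Profile(int priority) { this.priority = priority; }
        }

        /**
         * Applies the group-wise offset from the second group onwards:
         * newPriority = actualPriority + lowestPriorityOfPreviousGroup + 1.
         */
        static void applyOffsets(List<List<Profile>> groups) {
            for (int g = 1; g < groups.size(); g++) {
                // Least preferred (numerically largest) priority value present in
                // the previous group, after that group has itself been updated.
                int lowestPrev = groups.get(g - 1).stream()
                        .mapToInt(p -> p.priority)
                        .max()
                        .orElse(0);
                for (Profile p : groups.get(g)) {
                    p.priority += lowestPrev + 1; // offset value = 1
                }
            }
        }
    }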

4.35.3.2 Configuring NFService Priority Update

If the value of servicePriorityUpdateFeatureStatus is ENABLED, NRF updates the NFService level priority along with the NFProfile level priority. NRF updates the NFProfile level priority considering the lowest NFProfile level priority of the NFProfiles, and the NFService level priority considering the lowest NFService level priority of the NFServices, in both cases based on the applicable preferred locality feature.

By default, the value of this servicePriorityUpdateFeatureStatus is DISABLED.

Enable this feature in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions API.

"servicePriorityUpdateFeatureStatus": "ENABLED"

4.36 Roaming Support

NRF supports the 3GPP defined inter-PLMN routing for NRF specific service operations such as NFDiscover, AccessToken, NFStatusSubscribe, NFStatusUnSubscribe, and NFStatusNotify. To serve 5G subscribers roaming in a non-home network, also known as a visited or serving network, consumer network functions in the visited or serving network need to access the NF Profiles located in the home network of the 5G subscribers.

In this process, the consumer NFs send the NRF specific service operations towards the NRF in the visited or serving network. The visited or serving NRF then routes these service operations towards the home NRF through the SEPPs in the visited or serving and home networks. NFDiscover, AccessToken, NFStatusSubscribe, and NFStatusUnSubscribe service operations are routed from the visited or serving NRF to the home NRF. The NFStatusNotify service operation is initiated by the home network NRF towards the consumer NFs residing in the visited or serving network for inter-PLMN specific subscriptions.

Note:

XFCC specific validations are not supported for inter-PLMN service operations.

3GPP specific attributes defined for different service operations play a deciding role during inter-PLMN message routing.

There are two important terms used in the roaming mechanism:
  • vNRF: the visited or serving NRF when subscribers are roaming in a non-home network.
  • hNRF: the NRF in the home network of the subscribers.

Table 4-22 3GPP Attributes

Attribute Name Service Operation Details
requester-plmn-list NFDiscover

If the requester-plmn-list matches with the NRF PLMN list, it means that the NRF will function as vNRF.

If the requester-plmn-list does not match with the NRF PLMN list, it means that the NRF will function as hNRF.

target-plmn-list NFDiscover

When an NRF is vNRF, the target-plmn-list becomes a mandatory attribute to decide the target PLMN.

If the NRF is hNRF, this value is optional, but if it is present, the value must match the hNRF PLMN.

requesterPlmnList AccessToken

If the requesterPlmnList matches with the NRF PLMN list, it means that the NRF will function as vNRF.

If the requesterPlmnList does not match with the NRF PLMN list, it means that the NRF will function as hNRF.

requesterPlmn AccessToken

If the requesterPlmn matches with the NRF PLMN list, it means that the NRF will function as vNRF.

If the requesterPlmn does not match with the NRF PLMN list, it means that the NRF will function as hNRF.

If both the requesterPlmnList and requesterPlmn attributes are present, the combined PLMN values are used.

targetPlmn AccessToken

When an NRF is vNRF, the targetPlmn is considered to decide the target PLMN.

If the NRF is hNRF, this value must match the hNRF PLMN.

nfType AccessToken

When an NRF is hNRF, this value is used for the NRF NfAccessToken Authorization feature.

If this value is not present, the User-Agent header, which carries the 3GPP defined nfType, is used. If this header is also not present, see the userAgentMandatory attribute for more details.

reqPlmnList NFStatusSubscribe

If the reqPlmnList matches with the NRF PLMN list, it means that the NRF will function as vNRF.

If the reqPlmnList does not match with the NRF PLMN list, it means that the NRF will function as hNRF.

nfStatusNotificationURI NFStatusSubscribe

If the reqPlmnList attribute is not present in the subscription data, NRF checks whether nfStatusNotificationURI is in the inter-PLMN format, that is, 5gc.mnc(\d\d\d).mcc(\d\d\d).3gppnetwork.org.

Sample: 5gc.mnc310.mcc314.3gppnetwork.org

If nfStatusNotificationURI is in this format, it is used to determine the role of the NRF.

However, if reqPlmnList is present, this attribute must still be in the inter-PLMN format. This helps when the hNRF generates the notification towards the visited or serving network.

plmnId NFStatusSubscribe

When an NRF is vNRF, then the plmnId is considered to decide the target PLMN.

If the NRF is hNRF, this value is optional, but if it is present, the value must match the hNRF PLMN.

subscriptionId NFStatusSubscribe, NFStatusUnSubscribe

The subscriptionId also plays an important role in roaming cases. A subscription ID generated by the hNRF is prefixed with the 'roam' keyword. This helps to identify the inter-PLMN request for the subsequent service operations NFStatusSubscribe (Update) and NFStatusUnSubscribe.
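
The two checks described above, the inter-PLMN format of the notification URI and the 'roam' prefix of the subscriptionId, can be sketched as follows; this is illustrative only, and the exact prefix delimiter used by NRF is an assumption:

    import java.util.regex.Pattern;

    public class RoamingChecksSketch {

        // Inter-PLMN FQDN format from the table above:
        // 5gc.mnc(\d\d\d).mcc(\d\d\d).3gppnetwork.org
        private static final Pattern INTER_PLMN_FQDN =
                Pattern.compile("5gc\\.mnc(\\d{3})\\.mcc(\\d{3})\\.3gppnetwork\\.org");

        /** True if the notification URI host is in the inter-PLMN format. */
        static boolean isInterPlmnHost(String host) {
            return INTER_PLMN_FQDN.matcher(host).matches();
        }

        /** True if the subscriptionId was generated by the hNRF for a roaming case. */
        static boolean isRoamingSubscription(String subscriptionId) {
            return subscriptionId != null && subscriptionId.startsWith("roam");
        }

        public static void main(String[] args) {
            System.out.println(isInterPlmnHost("5gc.mnc310.mcc314.3gppnetwork.org")); // true
            System.out.println(isRoamingSubscription("roam-12345"));                  // true (hypothetical id)
        }
    }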

Enable

You can enable the Roaming Options feature using REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in Roaming Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Roaming Options page. For more information about enabling the feature using CNC Console, see Roaming Options.

Note:

Before enabling the featureStatus attribute, ensure that the corresponding configurations are completed.

Configure

You can configure the Roaming Options using REST API or CNC Console:
  • Configure NRF Roaming Options using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration. Update host with SEPP host.
      curl -v -X PUT "http://10.75.226.126:30747/nrf/nf-common-component/v1/egw/peerconfiguration" -H  "Content-Type: application/json"  -d @peer.json
       
      peer.json sample:-
      [
        {
          "id": "peer1",
          "host": "sepp-stub-service",
          "port": "8080",
          "apiPrefix": "/",
        }
      ]
      
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      sample header.json:-
      [
          {
              "id":"egress_sepp_proxy1",
              "uri": "http://localhost:32068/",
              "order": 0,
              "sbiRoutingConfiguration": {
                  "enabled": true,
                  "peerSetIdentifier": "set0"
              },
              "httpRuriOnly": true,
              "httpsTargetOnly": true,
              "predicates": [{
                  "args": {
                      "header": "OC-MCCMNC",
                      "regexp": "310014"
                  },
                  "name": "Header"
              }],
              "filters": [{
                  "name": "SbiRouting"
              }, {
                  "args": {
                      "retries": "3",
                      "methods": "GET, POST, PUT, DELETE, PATCH",
                      "statuses": "BAD_REQUEST, INTERNAL_SERVER_ERROR, BAD_GATEWAY, NOT_FOUND, GATEWAY_TIMEOUT",
                      "exceptions": "java.util.concurrent.TimeoutException,java.net.ConnectException,java.net.UnknownHostException"
                  },
                  "name": "SBIReroute"
              },{
                  "args": {
                      "name": "OC-MCCMNC"
                  },
                  "name": "RemoveRequestHeader"
              }],
              "metadata": {
              }
          },
          {"id":"default_route","uri":"egress://request.uri","order":100,"predicates":[{"args":{"pattern":"/**"},"name":"Path"}]}
      ]

      For more information about the routes configuration on different deployments, see Egress Gateway Route Configuration for Different Deployments.

  • Configure NRF Roaming Options using CNC Console: Perform the feature configurations as described in Roaming Options.

Observe

The following are the NRF Roaming Options specific metrics:
  • ocnrf_roaming_nfStatusSubscribe_rx_requests_total
  • ocnrf_roaming_nfStatusSubscribe_tx_responses_total
  • ocnrf_roaming_nfStatusSubscribe_tx_requests_total
  • ocnrf_roaming_nfStatusSubscribe_rx_responses_total
  • ocnrf_roaming_nfStatusUnSubscribe_rx_requests_total
  • ocnrf_roaming_nfStatusUnSubscribe_tx_responses_total
  • ocnrf_roaming_nfStatusUnSubscribe_tx_requests_total
  • ocnrf_roaming_nfStatusUnSubscribe_rx_responses_total
  • ocnrf_roaming_nfDiscover_rx_requests_total
  • ocnrf_roaming_nfDiscover_tx_responses_total
  • ocnrf_roaming_nfDiscover_tx_requests_total
  • ocnrf_roaming_nfDiscover_rx_responses_total
  • ocnrf_roaming_accessToken_rx_requests_total
  • ocnrf_roaming_accessToken_tx_responses_total
  • ocnrf_roaming_accessToken_tx_requests_total
  • ocnrf_roaming_accessToken_rx_responses_total
  • ocnrf_roaming_nfStatusNotify_tx_requests_total
  • ocnrf_roaming_nfStatusNotify_rx_responses_total
  • ocnrf_roaming_jetty_latency_seconds

For more information on NRF Roaming Options metrics and KPIs, see the Roaming Support Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.37 EmptyList in Discovery Response

NRF checks the NFStatus of the target-nf-type in the NRF state data. It identifies whether the current status of the matching NFProfile is SUSPENDED.

If the NFStatus of all matching profiles is SUSPENDED, then NRF modifies the NFStatus of these profiles as "REGISTERED". The modified NFStatus is sent in the discovery response with a shorter validity period so that a call can be established. Upon expiry of the validityPeriod, the Requester NF must rediscover the target NF.

Note:

Change in the NFStatus of target-nf-type to REGISTERED is not stored in the database.

If forwarding is ENABLED for a discovery request, NRF forwards the request to identify whether there are any matching profiles in another region:
  • If matching profiles are found, then that profile is sent in the discovery response.
  • If the NFStatus of all the matching profiles is SUSPENDED, then these profiles are sent in the discovery response as REGISTERED with a shorter validity period.

Note:

Once the profile moves from SUSPENDED state to NOT AVAILABLE state (after 7 days), NRF will start choosing the NFProfile containing backupInfoAmfFailure instead of the NFProfile containing backupInfoAmfRemoval.
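
A minimal sketch of the response-side override described above, assuming a simplified profile structure; the names are illustrative, not NRF source code:

    import java.util.List;

    public class EmptyListSketch {

        /** Simplified stand-in for a matching NFProfile in a discovery response. */
        static class ProfileView {
            String nfStatus; // for example, "REGISTERED" or "SUSPENDED"
        }

        /**
         * If every matching profile is SUSPENDED, report them as REGISTERED
         * and return a shorter validity period. The change applies only to
         * the response copy; the stored NFStatus is not modified.
         */
        static int applyEmptyListOverride(List<ProfileView> matches,
                                          int normalValiditySec,
                                          int shortValiditySec) {
            boolean allSuspended = !matches.isEmpty() && matches.stream()
                    .allMatch(p -> "SUSPENDED".equals(p.nfStatus));
            if (!allSuspended) {
                return normalValiditySec;
            }
            matches.forEach(p -> p.nfStatus = "REGISTERED");
            return shortValiditySec;
        }
    }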

Table 4-23 Discovering NF Profiles for AMF with Guami

The columns of the table are defined as follows:

  • Profile A: the profile that matches the guami attribute:

    {
      "guami": {
        "plmnId": {
          "mcc": "594",
          "mnc": "75"
        },
        "amfId": "947d18"
      }
    }

  • Profile B: the profile that matches the backupInfoAmfFailure attribute:

    {
      "backupInfoAmfFailure": [
        {
          "plmnId": {
            "mcc": "594",
            "mnc": "75"
          },
          "amfId": "947d18"
        }
      ]
    }

  • Profile C: the profile that matches the backupInfoAmfRemoval attribute:

    {
      "backupInfoAmfRemoval": [
        {
          "plmnId": {
            "mcc": "594",
            "mnc": "75"
          },
          "amfId": "947d18"
        }
      ]
    }

Scenario 1
  • Profile A: REGISTERED
  • Profile B: NA
  • Profile C: NA
  • Expected Discovery Response: Profile A
  • Notes: Profile A is registered and matches the guami.

Scenario 2
  • Profile A: SUSPENDED or PROBABLE_SUSPENDED
  • Profile B: REGISTERED
  • Profile C: NA
  • Expected Discovery Response: Profile B
  • Notes: Profile B acts as the alternative to the SUSPENDED Profile A.

Scenario 3
  • Profile A: SUSPENDED (and NOT AVAILABLE) or PROBABLE_SUSPENDED
  • Profile B: SUSPENDED, PROBABLE_SUSPENDED, UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Profile C: REGISTERED
  • Expected Discovery Response: Profile C
  • Notes: There is no alternate REGISTERED profile matching the backupInfoAmfFailure for the SUSPENDED profile. Hence, the alternate for the NOT AVAILABLE profiles is Profile C.

Scenario 4
  • Profile A: SUSPENDED (and NOT AVAILABLE) or PROBABLE_SUSPENDED
  • Profile B: SUSPENDED, PROBABLE_SUSPENDED, UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Profile C: SUSPENDED, PROBABLE_SUSPENDED, UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Expected Discovery Response: Profile A
  • Notes: No alternate REGISTERED profiles for the SUSPENDED and NOT AVAILABLE profiles are present. Hence, the empty response feature is applied and Profile A is selected.

Scenario 5
  • Profile A: UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Profile B: NA
  • Profile C: REGISTERED
  • Expected Discovery Response: Profile C
  • Notes: Profile C is the alternate to the NOT AVAILABLE profiles.

Scenario 6
  • Profile A: UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Profile B: NA
  • Profile C: SUSPENDED or PROBABLE_SUSPENDED
  • Expected Discovery Response: Profile C
  • Notes: There are no REGISTERED profiles or alternates for them. The empty response feature is applied on Profile C.

Scenario 7
  • Profile A: UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Profile B: NA
  • Profile C: UNDISCOVERABLE, DEREGISTERED, or NOT AVAILABLE
  • Expected Discovery Response: No profile
  • Notes: There are no alternate REGISTERED profiles for the SUSPENDED or NOT AVAILABLE profiles. The EmptyList feature cannot be applied to Profile A or Profile C as they are not in the SUSPENDED state. Hence, no profile is sent in the discovery response.

Scenario 8
  • Profile A: a few SUSPENDED; a few DEREGISTERED or UNDISCOVERABLE
  • Profile B: REGISTERED
  • Profile C: SUSPENDED, PROBABLE_SUSPENDED, DEREGISTERED, NOT AVAILABLE, or UNDISCOVERABLE
  • Expected Discovery Response: Profile B
  • Notes: Profile B acts as the alternative to the SUSPENDED Profile A.

Scenario 9
  • Profile A: a few SUSPENDED; a few DEREGISTERED or UNDISCOVERABLE
  • Profile B: SUSPENDED, PROBABLE_SUSPENDED, DEREGISTERED, NOT AVAILABLE, or UNDISCOVERABLE
  • Profile C: REGISTERED
  • Expected Discovery Response: Profile C
  • Notes: There is no alternate REGISTERED profile for the SUSPENDED profiles. Profile C is the REGISTERED alternate for the NOT AVAILABLE profiles. Hence, Profile C is considered.

Scenario 10
  • Profile A: a few SUSPENDED; a few DEREGISTERED or UNDISCOVERABLE
  • Profile B: SUSPENDED or PROBABLE_SUSPENDED
  • Profile C: SUSPENDED, PROBABLE_SUSPENDED, DEREGISTERED, NOT AVAILABLE, or UNDISCOVERABLE
  • Expected Discovery Response: Profile A
  • Notes: There are no REGISTERED alternates for any state. Hence, the empty response feature is applied on Profile A.

Managing EmptyList Feature

Enable
You can enable the EmptyList feature using the REST API or CNC Console.
  • Enable using REST API: Set emptyListFeatureStatus to ENABLED in nfDiscovery Options configuration API. Also, set featureStatus to ENABLED to configure EmptyList for a particular nfType. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED for Empty Discovery Response. Also, set Feature Status to ENABLED to configure EmptyList for a particular nfType. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Observe

Metrics

The ocnrf_nfDiscover_emptyList_total metric is added for the EmptyList feature.

Alert

The OcnrfNFDiscoveryEmptyListObservedNotification alert is added for the EmptyList feature.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.38 Overload Control

Overload occurs when 100% of the planned capacity is exhausted. It can be due to uneven distribution of traffic towards a given NRF service instance, network fluctuations leading to traffic bursts, or unexpected high traffic volume at any given point of time.

The Overload Control feature protects the system from getting overloaded due to unexpected surges in traffic, traffic failures, or high latency, and maintains the overall health of NRF. The system must not only detect overload conditions but also protect against them. Further, it must take the necessary actions to recover from overload and prevent the system from entering an overload condition. Using the Ingress Gateway and Perf-Info microservices, NRF provides options to control network congestion and handle overload scenarios.

NRF provides the following means for overload management:

  • It provides a predefined set of threshold load levels.
  • It monitors various overload indicators such as CPU utilization, pending message count, and failure message count.
  • If the threshold load level is breached for any overload indicator, it enforces load shedding to protect the service.

The overload indicators for each service are monitored by the Overload Manager module in the Perf-Info service. It performs overload calculations based on the following indicators:

  • CPU Utilization: Perf-Info monitors the CPU utilization of each service by querying Prometheus for the metric 'cgroup_cpu_nanoseconds'.
  • Pending Message Count: Ingress Gateway microservice monitors and maintains the number of messages pending to be processed by Ingress Gateway for each service. Perf-Info queries the Ingress Gateway microservice periodically to get the pending message count.
  • Failure Message Count: Ingress Gateway microservice monitors the number of requests returning failure responses for each service. Perf-Info queries Ingress Gateway microservice periodically to get the failure message count.

Note:

NRF has deprecated memory-based overload control as it is already covered by pending message count overload control.

The overload level is configured for the following NRF microservices:

  • Registration
  • Discovery
  • AccessToken

Overload control is triggered if the threshold for one or more indicators is reached.

Perf-Info calculates the overload levels based on the values reported for each overload indicator and the configured thresholds for that indicator. If one or more overload indicators have breached their thresholds, the highest overload level is considered. For example, if the CPU utilization of the discovery microservice has crossed the L2 threshold and its pending message count has crossed the L3 threshold, then the overload level is L3. The overload level is communicated to Ingress Gateway periodically to enforce load shedding. A configurable sampling interval is available as ocPolicyMapping.samplingPeriod, based on which Ingress Gateway calculates the rate per service in the current sampling period and applies the appropriate discard policies and actions in the subsequent sampling period.
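
As a simple illustration of this selection logic (not NRF source code):

    public class OverloadLevelSketch {

        /**
         * The effective overload level of a service is the highest level
         * breached among its indicators.
         */
        static int effectiveOverloadLevel(int cpuLevel,
                                          int pendingMsgLevel,
                                          int failureMsgLevel) {
            return Math.max(cpuLevel, Math.max(pendingMsgLevel, failureMsgLevel));
        }

        public static void main(String[] args) {
            // CPU at L2, pending message count at L3, failure count normal (0):
            System.out.println(effectiveOverloadLevel(2, 3, 0)); // prints 3, that is, L3
        }
    }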

The Ingress Gateway Overload Controller monitors all the service requests received against the overload control configuration. If the feature is enabled, Ingress Gateway computes the percentage of the service requests that should be discarded by using the rate of incoming requests per service for a configurable sampling period.

When Perf-Info reports that the service is in an overload state, Ingress Gateway matches the service requests to the configured service name and the configured discard policy for a particular service, and takes the appropriate action.

Note:

When the percentage-based overload control discard policy is enabled, the number of requests to be dropped in the current sampling period is computed based on the configured discard percentage and the rate of requests outgoing from Ingress Gateway in the previous sampling period for the service. Once this count is computed, Ingress Gateway does not drop all the new traffic to meet the discard count. Instead, it decides randomly whether each request is to be discarded. If the random function returns true, the request is discarded in the current sampling period with the discard action "RejectWithErrorCode". Within a sampling period, the requests are therefore discarded in a distributed way. The percentage dropped is not precisely the configured percentage because the number of requests to be dropped in the current sampling period is calculated based on the number of requests sent to the NRF discovery microservice in the previous sampling period, and not based on the total requests received at Ingress Gateway.
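
The distributed, random discard decision described in the note can be sketched as follows; this is illustrative only, and the gateway's internal details are not shown:

    import java.util.concurrent.ThreadLocalRandom;

    public class PercentageDiscardSketch {

        /**
         * Decides per request whether to discard it, so that roughly the
         * configured percentage of requests is rejected across the sampling
         * period instead of a contiguous burst being dropped.
         */
        static boolean shouldDiscard(double discardPercentage) {
            return ThreadLocalRandom.current().nextDouble(100.0) < discardPercentage;
        }
    }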

Pod Level Traffic Rejections

Currently, NRF uses percentage-based discard for overload control. NRF rejects the traffic (configured percentage) with an error code when overload thresholds are breached. Depending on the number of requests to be rejected, a token is fetched from a cache shared across all Ingress Gateway pods.

This approach has certain accuracy limitations, particularly in environments with:

  • Higher number of minimum token request values
  • Large number of Ingress Gateway pods
  • Uneven traffic distribution across Ingress Gateway pods
  • Low Transactions Per Second (TPS) traffic pattern

From Release 25.1.200 onwards, NRF rejects the incoming requests at pod level for percentage-based overload control by removing the dependency on cache-based coordination across pods. When the overload control level is breached, the number of requests to be rejected is calculated based on the requests received at each Ingress Gateway pod. This ensures a more accurate and consistent request rejection even in scenarios with low Transactions Per Second (TPS) and uneven traffic distribution.

Managing Overload Control Feature

Enable
You can enable Overload Control and Pod Level Traffic Rejections feature using the following Helm configuration:
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set the value of the global.performanceServiceEnable parameter to true in the ocnrf_custom_values_25.1.200.yaml file.
  3. Set the value of the perf-info.overloadManager.enabled parameter to true in the ocnrf_custom_values_25.1.200.yaml file.
  4. Set the value of the ingressgateway.overloadControlLocalDiscardEnable parameter to true in the ocnrf_custom_values_25.1.200.yaml file to enable pod level traffic rejections.
  5. Configure the Prometheus URI in perf-info.configmapPerformance.prometheus in ocnrf_custom_values_25.1.200.yaml file.
  6. Save the ocnrf_custom_values_25.1.200.yaml file.
  7. Ensure that the autoscaling and apps apiGroups are configured under service account. For more information on configuration, see "Creating Service Account, Role and Role Binding Resources" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  8. Run helm upgrade, if you are enabling this feature after NRF deployment. For more information on upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure
  • Configure using REST API: The Overload Control feature related configurations are performed at Ingress Gateway and Perf-Info.
    The following REST APIs must be configured for this feature:
    • {apiRoot}/nrf/nf-common-component/v1/perfinfo/overloadLevelThreshold
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeprofiles
    • {apiRoot}/nrf/nf-common-component/v1/igw/ocdiscardpolicies
    • {apiRoot}/nrf/nf-common-component/v1/igw/ocpolicymapping
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeserieslist
    • {apiRoot}/nrf/nf-common-component/v1/igw/routesconfiguration
    For more information about APIs, see "Common Services REST APIs" section in the Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide
  • Configure using CNC Console: Perform the following configurations:

Disable

  1. Use {apiRoot}/nrf/nf-common-component/v1/igw/ocpolicymapping API and set enabled to false as follows:
    {
        "enabled": false,
        "mappings": [
            {
                "svcName": "ocnrf-nfdiscovery",
                "policyName": "nfdiscoveryPolicy"
            },
            {
                "svcName": "ocnrf-nfaccesstoken",
                "policyName": "nfaccesstokenPolicy"
            },
            {
                "svcName": "ocnrf-nfregistration",
                "policyName": "nfregistrationPolicy"
            }
        ],
        "samplingPeriod": 200
    }
  2. Open the ocnrf_custom_values_25.1.200.yaml file.
  3. Set perf-info.overloadManager.enabled parameter to false in ocnrf_custom_values_25.1.200.yaml file.
  4. (Optional) Set global.performanceServiceEnable parameter to false in ocnrf_custom_values_25.1.200.yaml file.

    Note:

    Perform this step only if you want to disable complete Perf-Info service.
  5. Save the ocnrf_custom_values_25.1.200.yaml file.
  6. Run helm upgrade, if you are disabling this feature after NRF deployment. For more information on upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

Metrics
The following metrics are added for the Overload Control feature:
  • load_level
  • service_resource_stress
  • service_resource_overload_level

For more information about the metrics, see Overload Control.

KPIs

There are no KPIs for this feature.

For more information on the entire list of KPIs, see NRF KPIs.

Identifying Kubernetes Tag for Overload Control

The following procedure explains how to identify the tags for different Kubernetes versions:
  1. Log in to Prometheus.
  2. Enter cgroup_cpu_nanoseconds query in Prometheus as shown:

    Figure 4-22 Prometheus Search

  3. In the response, search for the tags that contain the values for name of the container, NRF deployment namespace, and name of the NRF services:

    Figure 4-23 Tag Search

  4. Use the tags from the response to configure the following parameters under Perf-Info microservice:
    • tagNamespace
    • tagContainerName
    • tagServiceName

    For example, in the Figure 4-23 image, namespace is the tag that contains the value for the NRF deployment namespace. You need to set namespace as the value for the tagNamespace parameter in the Perf-Info microservice. For more information on the parameters, see the Perf-Info section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.39 DNS NAPTR Update

In 5G core networks, AMFs are added or removed dynamically for scalability or planned maintenance. NRF supports updating the Name Authority Pointer (NAPTR) record in the Domain Name System (DNS) during Access and Mobility Management Function (AMF) registration, update, and deregistration. The AMFs available within an AMF set are provisioned within NAPTR records in the DNS.

Note:

  • NRF considers the priority of the peer only for peer selection.
  • This feature can be used in an NRF deployment where service mesh is NOT present.

Enable

You must enable the Artisan microservice for DNS NAPTR Update as follows:
  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set enableNrfArtisanService to true to enable Artisan microservice.
  3. Save the file.
  4. Run Helm install. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are enabling this parameter after NRF deployment, run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. Set the DNS configuration as described in the "DNS NAPTR Configuration in Alternate Route Service" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  7. After enabling Artisan microservice, enable DNS NAPTR Update Options using the REST API or CNC Console:
    • Enable using REST API: Set featureStatus to ENABLED in dnsNAPTRUpdateOptions configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Enable using CNC Console: Set Feature Status to ENABLED on the DNS NAPTR Update Options page. For more information about enabling the feature using CNC Console, see DNS NAPTR Update Options.

Configure

You can configure DNS NAPTR Update options feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in DNS NAPTR Update Options.

Observe

Metrics
The following metrics are added in the DNS NAPTR Update Metrics section:
  • ocnrf_dns_naptr_tx_requests_total
  • ocnrf_dns_naptr_rx_responses_total
  • ocnrf_dns_naptr_audit_tx_requests_total
  • ocnrf_dns_naptr_audit_rx_responses_total
  • ocnrf_dns_naptr_failure_rx_responses
  • ocnrf_dns_naptr_round_trip_time_seconds
  • ocnrf_dns_naptr_nfRegistration_tx_requests_total
  • ocnrf_dns_naptr_nfRegistration_rx_responses_total
  • ocnrf_dns_naptr_nrfAuditor_tx_requests_total
  • ocnrf_dns_naptr_nrfAuditor_rx_responses_total
  • ocnrf_dns_naptr_trigger_rx_requests_total
  • ocnrf_dns_naptr_trigger_tx_responses_total
  • oc_alternate_route_upstream_dns_request_timeout_total
Alert
Following alerts are added for the DNS NAPTR Update feature:

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.40 Notification Retry

NRF supports a notification retry mechanism for the following failure scenarios while sending the NfStatusNotify message. Based on the configured Notification Retry Profile, NRF attempts to resend the NfStatusNotify message on failures until the retry attempts are exhausted.

Following are the scenarios where retry can be performed:
  • 4xx, 5xx Response Codes from notification callback server of the NF
  • Connection Timeout between NRF Egress Gateway and notification callback server of the NF
  • Request Timeout at NRF Egress Gateway
When notification retry is enabled, the NRF Subscription microservice sends the NfStatusNotify message to the Egress Gateway. Upon failure of the request, the Egress Gateway retries towards the notification callback server of the NF depending on the configuration. The retries happen based on the response codes, exception list, or timeout configuration in nfManagementOptions.

Note:

This is applicable only when direct routing is used for routing notification messages.

Managing Notification Retry Feature

Enable

You can enable the Notification Retry feature using the CNC Console or REST API.

  • Enable using REST API: Set requestRetryDetails.featureStatus to ENABLED in NF Management Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED under Notification Retry section on the NF Management Options page. For more information about enabling the feature using CNC Console, see the NF Management Options section.

Configuring DefaultRouteRetry in Egress Gateway

After an NRF upgrade from 23.3.x to 23.4.0, to enable the notification retry feature, the DefaultRouteRetry filter must be added to the Egress Gateway default route configuration.

Note:

  • The value of nfsubscription.jetty.request.idleTimeout parameter must be greater than total request timeout value (totalRequestTimeout = ((retryCount+1) * requestTimeout) + 1000) if notification retry feature is enabled. For more information about the parameter, see the "NF Subscription Microservice (nfsubscription)" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  • If retry attempts are made due to request timeouts, the backend services may time out before all the attempts are completed. To avoid this, the backend requestTimeout attributes must be configured considering the total number of Egress reroutes and the Egress Gateway requestTimeout. The discovery requestTimeout values are configured using the nfdiscovery.jetty.request.timeout parameter.
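
For illustration (the numbers are hypothetical): with retryCount = 2 and requestTimeout = 3000 ms, totalRequestTimeout = ((2 + 1) * 3000) + 1000 = 10000 ms, so nfsubscription.jetty.request.idleTimeout must be set to a value greater than 10000 ms.
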
Following is the sample command to configure DefaultRouteRetry:
curl -X PUT "http://ocnrf-nrfconfiguration:8080/nrf/nf-common-component/v1/egw/routesconfiguration" -H 'Content-Type:application/json' -d 
'[{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"filters": [{
		"name": "DefaultRouteRetry"
	}],
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}]'

In this command, the hostname and port of ocnrf-nrfconfiguration reflect the details used to access the NRF configuration microservice.

Note:

To disable the notification retry, remove the DefaultRouteRetry filter from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade. Following is the sample default route configuration:
{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

Configure

You can configure Notification Retry feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in the NF Management Options section.

Observe

Metrics
The following dimensions are added for the ocnrf_nfStatusNotify_rx_responses_total metric in the NRF NF Metrics section:
  • NotificationHostPort
  • NumberOfRetriesAttempted

Alert

Following alerts are added for the Notification Retry feature:

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.41 NRF Message Feed

NRF supports copying both request and response HTTP messages routed through the Ingress and Egress Gateways to a Data Director.

Data Director receives the messages from Gateways with a correlation-id and feeds the data securely to an external monitoring system.

The correlation-id in the messages is used as a unique identifier for every transaction. If the request does not have the correlation-id, the Gateway generates the correlation-id and then passes it to Data Director.

Upon receiving an Ingress Gateway request containing a correlation-id, the NRF microservices include this correlation-id in the following Egress Gateway requests that may be generated by NRF based on the Ingress Gateway transaction:

  • Inter-PLMN
  • SLF
  • Forwarding
  • Notification

The communication between the Gateways and the Data Director is encrypted using TLS. Also, the Gateways authenticate themselves to the Data Director using the Simple Authentication and Security Layer (SASL). For more information on configuring SASL, see Configuring SASL.

Note:

  • NRF does not copy the following messages to Data Director:
    • NRF Discovery for SLF (Artisan to Discovery query for filling SLF Candidate List)
    • NRF Getting AccessToken for SLF (Egress Gateway Microservice to AccessToken for fetching Token for SLF request)
  • This feature can be used in an NRF deployment where service mesh is NOT present.

Additional Metadata Sent from NRF to Data Director

From release 25.1.200 onwards, along with the existing attributes, the following additional attributes about the message are also copied as part of the metadata list:

  • source-ip
  • destination-ip
  • source-port
  • destination-port
  • pod-instance-id

Figure 4-24 Sending Metadata from NRF to Data Director



Request Messages:

  • RxRequest: NRF receives a request sent by an external entity. Such a request is handled first by the Ingress Gateway microservice, which creates a message with additional metadata, communicates it to Data Director, and forwards the request to the specific NRF backend microservice.
  • TxRequest: NRF transmits a request to communicate with an external entity. Such a request is sent by an NRF backend microservice to the Egress Gateway microservice, which creates a message with additional metadata, communicates it to Data Director, and then forwards the request to the specific address.

Response Messages:

  • RxResponse: NRF receives a response from an external entity. Such a response is handled by the Egress Gateway microservice, which creates a message with additional metadata, communicates it to Data Director, and then forwards the response.
  • TxResponse: NRF transmits a response to an external entity. Such a response is sent by an NRF microservice to the Ingress Gateway microservice, which creates a message with additional metadata, communicates it to Data Director, and then sends the response back to the external entity that initiated the communication.

The following table defines the attributes of the message copy feed template:

Table 4-24 OracleNfFeed

Attribute Data Type Description
version String NRF Producer data model version.
metadata-list MetaData The metadata added by NRF is used by Data Director for filtering, aggregation, and routing.
header-list Http2Header The http2 message header.
5g-sbi-message Http2Body The http2 message body.

Supported formats:

  • JSON
  • gzip data (content-encoding:gzip)

The following are the descriptions of the attributes that are copied to the Data Director as part of metadata:

Table 4-25 Attributes Copied to Data Director

Attribute name Data type Description Source of data
correlation-id String

This is a mandatory parameter.

This is a unique identifier for the message (both request and response). This field in the metadata is populated as follows:
  • If the x-request-id header is present in the message, the correlation-id is filled with the x-request-id header value.
  • If this header is not present, the following processing is done:
    • At Ingress Gateway microservice: If the x-request-id header is not present when the message is received, Ingress Gateway generates a UUID and sets correlation-id to this UUID. The message is copied towards Data Director with the correlation-id in the metadata but without the x-request-id header. After copying the message to Data Director, Ingress Gateway adds the x-request-id header to the incoming request. For the response coming from the NRF backend microservice, the Ingress Gateway microservice picks the same UUID as set in the request from its context and sets it in the metadata. For the response, it does not matter whether the x-request-id header is present, as the Ingress Gateway microservice already has it in its request context, which is reused while processing the corresponding response.
    • At Egress Gateway microservice: After receiving a request from the NRF backend microservice, if the x-request-id header is not present, the Egress Gateway microservice first generates a UUID, sets the correlation-id metadata to this UUID, and also adds the x-request-id header to the request message. It then copies the message to Data Director and sends it out on the wire. In the response received from the outside NF, the Egress Gateway microservice reuses the UUID it generated during the request and sets it in the correlation-id of the response metadata. The Egress Gateway microservice need not check or add the x-request-id header in the response.
x-request-id header or autogenerate.
consumer-id String

This is an optional parameter.

5G NF Instance ID of the NF originated the received message.

For Ingress Gateway microservice, this is peer-nf-id

For Egress Gateway microservice, this is self-nf-id

For Notifications at Egress Gateway, this is peer-nf-id

peer-nf-id for Ingress Gateway is the NF-Instance ID extracted from the user-agent header if available in the request.

User-Agent: <NF Type>-<NF Instance ID> <NF FQDN>

self-nf-id will be extracted from Helm configurations (nfInstanceId).

producer-id String

This is an optional parameter.

5G NF Instance ID of the next destination NF.

self-nf-id will be extracted from Helm configurations (nfInstanceId).

Note: Egress Gateway doesn't populate producer-id.

consumer-fqdn String

This is an optional parameter.

It is the FQDN of the originating NF.

For Ingress Gateway, this is peer-nf-fqdn.

For Egress Gateway, this is self-nf-fqdn.

For Notifications at Egress Gateway, this is peer-nf-fqdn.

peer-nf-fqdn for Ingress Gateway is the NF-FQDN extracted from user-agent header.

User-Agent: <NF Type>-<NF Instance ID> <NF FQDN>

self-nf-fqdn will be extracted from Helm configurations (nfFqdn).

peer-nf-fqdn for Egress Gateway is extracted from 3gpp-sbi-target-apiroot header.

producer-fqdn String

This is an optional parameter.

FQDN of the destination NF

For Egress Gateway, this is peer-nf-fqdn

For Ingress Gateway, this is self-nf-fqdn

For Notifications at Egress Gateway, this is self-nf-fqdn

peer-nf-fqdn for Egress Gateway is extracted from the 3gpp-sbi-target-apiroot header.

self-nf-fqdn will be extracted from Helm configurations (nfFqdn).

source-ip String

This is an optional parameter.

The IP address of the source NF that has sent the message.

NA
destination-ip String

This is an optional parameter.

The IP address of the target NF to which the message has to be sent.

NA
source-port String

This is an optional parameter.

The port of the source NF that receives response from the target NF. TCP port of the NF on the source IP address.

NA
destination-port String

This is an optional parameter.

The port of the target NF that sends response to the source NF. TCP port of the NF on the destination IP address.

NA
timestamp String

This is a mandatory parameter.

Identifies the timestamp when the message arrives at the producer. The timestamp format is a long value in nanoseconds.

The time at which the message is created with nanoseconds precision.

Example: 1656499195571109210 nanoseconds

message-direction String

This is a mandatory parameter.

The direction of the message. This can be either incoming or outgoing.

Parameter to indicate whether a message is ingress to or egress from the NF. It is indicated by the traffic feed trigger point name:

  • RxRequest (Ingress Request)
  • TxRequest (Egress Request)
  • RxResponse (Egress Response)
  • TxResponse (Ingress Response)
feed-source FeedSource

This is a mandatory parameter.

Information about the NF details.

The details are given in Table 4-26.

Table 4-26 FeedSource

Attribute name Data type Description Source of data
nf-type String

This is a mandatory parameter.

Identifies a type of producer NF.

The information is taken from the Helm configuration parameter global.nfTypeMsgCpy. If the NFs have not configured this, unknown is attached as the value.
nf-instance-id String

This is an optional Parameter.

Identifies a producer NF Instance.

The information is taken from the Helm configuration parameter global.nfTypeMsgCpy.
nf-fqdn String

This is an optional Parameter.

Identifies a producer NF FQDN.

The information is taken from the Helm configuration parameter global.nfFqdn.
pod-instance-id String

This is an optional parameter.

Identifies a producer NF's pod. This is the unique identifier or name assigned to the pod.

NA

Sample JSON file with message feed details for TxResponse transaction:


{
    "version": "1.0.0",
    "header-list": {
        ":status": "201",
        "date": "Tue, 22 Apr 2025 01:56:52 GMT",
        "location": "http://nrf-18199363-ingressgateway:80/nnrf-nfm/v1/nf-instances/17380efd-6b77-4c9a-8ed9-f820371d842b",
        "content-type": "application/json",
        "nettylatency": "1745287012718",
        "requestmethod": "PUT"
    },
    "metadata-list": {
        "correlation-id": "7967c76c-15fb-4c3c-b324-c9f408698930",
        "producer-id": "6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
        "producer-fqdn": "NRF-d5g.oracle.com",
        "timestamp": "1745287012820818431",
        "message-direction": "TxResponse",
        "source-ip": "xxx.xxx.xxx.xxx",
        "source-port": "8081",
        "destination-ip": "xxx.xxx.xxx.xxx",
        "destination-port": "51244",
        "feed-source": {
            "nf-type": "NRF",
            "nf-instance-id": "6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c",
            "nf-fqdn": "NRF-d5g.oracle.com",
            "pod-instance-id": "nrf-18199363-ingressgateway-55bdcb779c-clvwg"
        }
    },
    "5g-sbi-message": {
        "nfInstanceId": "17380efd-6b77-4c9a-8ed9-f820371d842b",
        "nfType": "AMF",
        "nfStatus": "REGISTERED",
        "heartBeatTimer": 30,
        "plmnList": [
            {
                "mcc": "310",
                "mnc": "14"
            }
        ],
        "sNssais": [
            {
                "sd": "4ebaaa",
                "sst": 124
            },
            {
                "sd": "dc8aaa",
                "sst": 54
            },
            {
                "sd": "f46aaa",
                "sst": 73
            }
        ],
        "nsiList": [
            "slice-1",
            "slice-2"
        ],
        "fqdn": "AMF.d5g.oracle.com",
        "interPlmnFqdn": "AMF-d5g.oracle.com",
        "ipv4Addresses": [
            "xxx.xxx.xxx.xxx",
            "xxx.xxx.xxx.xxx",
            "xxx.xxx.xxx.xxx",

            "xxx.xxx.xxx.xxx"
        ],
        "ipv6Addresses": [
            "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
        ],
        "capacity": 2000,
        "load": 10,
        "locality": "US East",
        "amfInfo": {
            "amfSetId": "1ab",
            "amfRegionId": "23",
            "guamiList": [
                {
                    "plmnId": {
                        "mcc": "594",
                        "mnc": "75"
                    },
                    "amfId": "947d18"
                },
                {
                    "plmnId": {
                        "mcc": "602",
                        "mnc": "42"
                    },
                    "amfId": "5f3259"
                },
                {
                    "plmnId": {
                        "mcc": "135",
                        "mnc": "19"
                    },
                    "amfId": "f37f1a"
                },
                {
                    "plmnId": {
                        "mcc": "817",
                        "mnc": "80"
                    },
                    "amfId": "f7a026"
                }
            ],
            "taiList": [
                {
                    "plmnId": {
                        "mcc": "641",
                        "mnc": "72"
                    },
                    "tac": "ccc0"
                },
                {
                    "plmnId": {
                        "mcc": "594",
                        "mnc": "45"
                    },
                    "tac": "77db"
                }
            ]
        },
        "nfServices": [
            {
                "serviceInstanceId": "fe137ab7-740a-46ee-aa5c-951806d77b0d",
                "serviceName": "namf-mt",
                "versions": [
                    {
                        "apiVersionInUri": "v1",
                        "apiFullVersion": "1.0.0",
                        "expiry": "2018-12-03T18:55:08.871Z"
                    }
                ],
                "scheme": "http",
                "nfServiceStatus": "REGISTERED",
                "fqdn": "AMF.d5g.oracle.com",
                "interPlmnFqdn": "AMF-d5g.oracle.com",
                "apiPrefix": "",
                "defaultNotificationSubscriptions": [
                    {
                        "notificationType": "LOCATION_NOTIFICATION",
                        "callbackUri": "http://somehost.oracle.com/callback-uri"
                    },
                    {
                        "notificationType": "N1_MESSAGES",
                        "callbackUri": "http://somehost.oracle.com/callback-uri",
                        "n1MessageClass": "SM"
                    },
                    {
                        "notificationType": "N2_INFORMATION",
                        "callbackUri": "http://somehost.oracle.com/callback-uri",
                        "n2InformationClass": "NRPPa"
                    }
                ],
                "allowedPlmns": [
                    {
                        "mcc": "904",
                        "mnc": "47"
                    },
                    {
                        "mcc": "743",
                        "mnc": "47"
                    },
                    {
                        "mcc": "222",
                        "mnc": "23"
                    },
                    {
                        "mcc": "521",
                        "mnc": "11"
                    }
                ],
                "allowedNfTypes": [
                    "NRF",
                    "AMF",
                    "AUSF",
                    "BSF",
                    "UDM",
                    "UDR",
                    "PCF"
                ],
                "allowedNfDomains": [
                    "oracle.com",
                    "xyz.com"
                ],
                "allowedNssais": [
                    {
                        "sd": "dbaaaa",
                        "sst": 14
                    },
                    {
                        "sd": "3caaa9",
                        "sst": 153
                    },
                    {
                        "sd": "5faaad",
                        "sst": 132
                    }
                ],
                "capacity": 500,
                "load": 0,
                "supportedFeatures": "80000000"
            }
        ]
    }
}

Managing Data

The messages are managed according to the type of content.

The following table describes the behavior of different content type:

Table 4-27 Managing Data

Content-Type Content-Encoding Behavior
  • application/json
  • application/problem+json
  • application/json-patch+json
  • application/3gppHal+json
NA The payload is converted into a JSON object and attached to the message copy feed payload as a JSON node.

Writing Messages of the Same Transaction in the Same Kafka Partition

From release 25.1.200 onwards, while sending messages to Data Director, NRF supports copying the request and response messages of the same transaction to the same Kafka partition. This reduces the latency in processing the data of a transaction. This feature uses the correlation-id (unique identifier) metadata as the message key to correlate the messages of a transaction.

The following diagram represents the enhancement of selecting the same Kafka partition for the messages of the same transaction type:

Figure 4-25 Same Kafka Partition for Writing the Messages of the Same Transaction Type



Data Director uses the correlation-id received from the Gateways to group the messages belonging to the same transaction and feeds all these messages to the same Kafka partition instead of multiple partitions.
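
A minimal producer-side sketch of this keying behavior, using the standard Apache Kafka client; the bootstrap address and topic name are hypothetical:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedFeedProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "dd-kafka:9092"); // hypothetical address
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String correlationId = "7967c76c-15fb-4c3c-b324-c9f408698930";
                String feedJson = "{ ... }"; // the OracleNfFeed message body
                // With the correlation-id as the record key, Kafka's default
                // partitioner hashes all messages of one transaction to the
                // same partition.
                producer.send(new ProducerRecord<>("nf-message-feed", correlationId, feedJson));
            }
        }
    }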

Managing NRF Message Feed Feature

Enable

You can enable the NRF Message Feed feature and the same Kafka partition enhancement using Helm:

  1. Open the ocnrf_custom_values_25.1.200.yaml file.
  2. Set ingressgateway.messageCopy.enabled to true to enable message copying at the Ingress Gateway. For more information on enabling the parameter, see the "Ingress Gateway Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  3. Set egressgateway.messageCopy.enabled to true to enable message copying at the Egress Gateway. For more information on enabling the parameter, see the "Egress Gateway Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  4. Set ingressgateway.messageCopy.keybasedKafkaProducer to true to enable copying messages of the given transaction to the same Kafka partition in Ingress Gateway. For more information on enabling the parameter, see the "Ingress Gateway Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Set egressgateway.messageCopy.keybasedKafkaProducer to true to enable copying messages of the given transaction to the same Kafka partition in Egress Gateway. For more information on enabling the parameter, see the "Egress Gateway Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. Save the file.
  7. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  8. If you are enabling this parameter after NRF deployment, upgrade NRF. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    Enabling the Message Feed feature on both the Ingress and Egress Gateways may affect pod capacity, depending on your usage model and configuration.
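A minimal sketch of the relevant Helm values, assuming the parameter hierarchy implied by the steps above (surrounding keys in your ocnrf_custom_values_25.1.200.yaml file may differ):

    # Message Feed: copy messages at both gateways and key them by
    # correlation-id so that one transaction maps to one Kafka partition.
    ingressgateway:
      messageCopy:
        enabled: true
        keybasedKafkaProducer: true
    egressgateway:
      messageCopy:
        enabled: true
        keybasedKafkaProducer: true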

Configure

There is no configuration to be performed using the REST API or CNC Console.

Configuring SASL

  1. Generate your SSL Certificates.

    Note:

    The creation of private keys, certificates, and passwords is at the discretion of the user or operator.
  2. Before copying the certificates into the secret, add the DD Root certificates contents into the CA certificate (caroot.cer) generated for NRF as follows:

    Note:

    Make sure you add 8 hyphens ("--------") between the two certificates.
    -----BEGIN CERTIFICATE-----
    <existing caroot-certificate content>
    -----END CERTIFICATE-----
    --------
    -----BEGIN CERTIFICATE-----
    <DD caroot-certificate content>
    -----END CERTIFICATE-----
  3. Create secrets for both the Ingress and Egress Gateways for authentication with Data Director. To create a secret, store the password in a text file and use that file to create the secret. Run the following commands to create the secrets:
    kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=sasl.txt -n <namespace>
    
    kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=sasl.txt -n <namespace>
    For more information on creating secrets, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  4. Configure the SSL section as described in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Configure the message copy feature as described in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. Configure the SASL_SSL port in the kafka.bootstrapAddress attribute, as shown in the sample after these steps.
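A minimal sketch of the bootstrap address setting, assuming a single broker; the host name and port 9093 are placeholders for the SASL_SSL listener exposed by your Data Director deployment:

    kafka:
      bootstrapAddress: <dd-kafka-broker>:9093   # SASL_SSL listener port (illustrative)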

Observe

Metrics
The following metrics are added in the NRF Gateways Metrics section; an example query follows the list:
  • oc_ingressgateway_msgcopy_requests_total
  • oc_ingressgateway_msgcopy_responses_total
  • oc_ingressgateway_dd_unreachable
  • oc_egressgateway_msgcopy_requests_total
  • oc_egressgateway_msgcopy_responses_total
  • oc_egressgateway_dd_unreachable
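Assuming the metrics are scraped by Prometheus, an illustrative query such as the following reports the per-second rate of request copies sent from the Ingress Gateway over the last five minutes:

    rate(oc_ingressgateway_msgcopy_requests_total[5m])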
Alert

Maintain

If you encounter alerts at the system or application level, see the NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.42 Subscription Limit

In the 5G architecture, Network Functions (NFs) use the NFStatusSubscribe service operation to subscribe for notifications about changes in a producer NF profile in the network. These subscriptions are managed by the NRF Management microservice and maintained in the database for a defined period. When a producer NF profile changes, the NRF subscription microservice triggers a notification to the consumer NFs.

If the number of subscriptions created is very high, it may trigger a huge number of notification requests, which might overload the NRF subscription microservice.

NRF restricts the number of allowed subscriptions to avoid overload conditions at the NRF subscription microservice. NRF regulates the maximum number of allowed subscriptions in the database using a configurable Global Subscription Limit. In the case of georedundant NRF, the limit is applied across all the mated sites.

Note:

The subscription limit must be configured with the same value across all georedundant sites.

NRF evaluates the global subscription limit every time an NF tries to create a new subscription or update an existing one. If the limit is breached, new subscriptions are not created or renewed, and the subscription requests are rejected with the configured error code. When the global subscription limit is breached, or when it approaches the breach threshold, specific alerts are raised based on the threshold level. An illustrative rejection follows.
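The following is a hypothetical illustration of a rejected subscription request: the status code is whatever error code you configure, and the cause string here merely mirrors the RejectionReason dimension listed under Observe:

    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
        "title": "Forbidden",
        "status": 403,
        "detail": "Global subscription limit exceeded",
        "cause": "SUBSCRIPTION_LIMIT_EXCEEDED"
    }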

Managing NRF Subscription Limit Feature

Prerequisites

Following are the prerequisites to enable the feature:

  1. All the georedundant sites must be upgraded to NRF 25.1.200 release.
  2. Replication link between all the georedundant sites must be up. The OcnrfDbReplicationStatusInactive alert indicates if the replication link is inactive. If this alert is raised in any of the sites, wait till it is cleared.
  3. Wait until all the subscription records are migrated. The OcnrfSubscriptionMigrationInProgressWarn and OcnrfSubscriptionMigrationInProgressCritical alerts indicate that the migration is still in progress. If either alert is raised in any of the sites, wait till it is cleared.

Enable

You can enable the Subscription Limit feature using the CNC Console or REST API.
  • Enable using REST API: Set subscriptionLimit.featureStatus to ENABLED in the NF Management Options configuration API. For more information about the API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. An illustrative request follows this list.
  • Enable using CNC Console: Set Feature Status to ENABLED under Subscription Limit section on the NF Management Options page. For more information about enabling the feature using CNC Console, see the NF Management Options section.
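A minimal sketch of the REST request; the host, port, and resource path are placeholders, and the exact API path and payload structure must be taken from the REST Specification Guide:

    curl -X PUT 'http://<nrf-host>:<port>/<nf-management-options-path>' \
      -H 'Content-Type: application/json' \
      -d '{"subscriptionLimit": {"featureStatus": "ENABLED"}}'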

Configure

You can configure Subscription Limit feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in the NF Management Options section.

Observe

Metrics
The following dimension is added in the NRF Metrics section:
  • The RejectionReason=SubscriptionLimitExceeded dimension is added for the ocnrf_nfStatusSubscribe_tx_responses_total metric.
The following metrics are added for the Subscription Limit feature:
  • ocnrf_nfset_active_subscriptions
  • ocnrf_nfset_limit_level
  • ocnrf_subscription_migration_status

KPIs

The following KPIs are added for the Subscription Limit feature:

Maintain

If you encounter alerts at the system or application level, see the NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.43 Automated Test Suite Support

NRF provides Automated Test Suite (ATS) for validating its functionalities. ATS runs NRF test cases using an automated testing tool and compares the actual results with the expected or predicted results, without any user intervention. For more information on installing and configuring ATS, see Oracle Communications Cloud Native Core, Automated Test Suite User Guide.