4 NRF Features

This section explains the NRF features.

Note:

The performance and capacity of the NRF system may vary based on the call model, Feature or Interface configuration, and underlying CNE and hardware environment.

4.1 Limiting Number of NFProfiles in NFDiscover Response

When an NFDiscover service operation request is received, NRF returns NFProfiles based on the query attributes. As per 3GPP TS 29.510, the following two attributes play a key role in limiting the number of NFProfiles returned in the response:
  • limit: Maximum number of NFProfiles to be returned in the response.

    Minimum: 1

  • max-payload-size: Maximum payload size of the response, expressed in kilo octets. When present, the NRF limits the number of NFProfiles returned in the response so as not to exceed the maximum payload size indicated in the request.

    Default: 124 kilo octets

    Maximum: 2000 (that is, 2 mega octets)

limit Attribute

While returning the NFProfiles in the NFDiscover service operation response, NRF limits the number of NFProfiles returned to the value of the limit attribute in the NFDiscover service operation request.

If the limit attribute is not present in the NFDiscover search query, NRF limits the number of NFProfiles based on the value of the profilesCountInDiscoveryResponse attribute in the {apiRoot}/nrf-configuration/v1/nfDiscoveryOptions API.
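For reference, a sketch of inspecting this option over REST; the host, port, GET semantics, and returned value are illustrative assumptions, and other attributes of the nfDiscoveryOptions resource are omitted:

curl -v -X GET "http://<nrf-host>:<port>/nrf-configuration/v1/nfDiscoveryOptions"

sample response snippet:
{
  "profilesCountInDiscoveryResponse": 5
}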

max-payload-size Attribute

While returning the NFProfiles in the NFDiscover service operation response, NRF limits the number of NFProfiles so that the response size stays within the value of the max-payload-size attribute, excluding the size of the following 3GPP-defined attributes of the NFDiscover service operation response:
  • validityPeriod (Mandatory)
  • nfInstances (Mandatory)
  • preferredSearch (Conditional)
  • nrfSupportedFeatures (Conditional)

If this attribute is not present in the NFDiscover search query, NRF uses the default value (124 kilo octets) defined in 3GPP TS 29.510.
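For illustration, a discovery search query carrying both attributes may look as follows; the host, NF types, and values are examples only:

curl -v -X GET "http://<nrf-host>:<port>/nnrf-disc/v1/nf-instances?target-nf-type=UDM&requester-nf-type=AMF&limit=3&max-payload-size=200"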

4.2 nrfSupportedFeatures

As per 3GPP TS 29.510, the nrfSupportedFeatures attribute indicates the features supported by NRF for the NFDiscovery service. This attribute is included in the discovery response if at least one feature is supported by NRF. From release 23.4.0, the value of this attribute is 172 (hexadecimal, 101110010 in binary). Reading the bits from right to left (the least significant bit corresponds to Feature Number 1), it indicates that NRF supports Feature Numbers 2, 5, 6, 7, and 9 as per the following table:

Table 4-1 Features of nrfSupportedFeatures attribute used by Nnrf_NFDiscovery service

  • Feature Number 1, Complex-Query (Optional): Support of Complex Query expression (see clause 6.2.3.2.3.1).
  • Feature Number 2, Query-Params-Ext1 (Optional): Support of the following query parameters: limit, max-payload-size, required-features, pdu-session-types.
  • Feature Number 3, Query-Param-Analytics (Optional): Support of the query parameters for Analytics identifier: event-id-list, nwdaf-event-list.
  • Feature Number 4, MAPDU (Optional): This feature indicates whether the NRF supports selection of UPF with Access Traffic Steering, Switching and Splitting (ATSSS) capability.
  • Feature Number 5, Query-Params-Ext2 (Optional): Support of the following query parameters: requester-nf-instance-id, upf-ue-ip-addr-ind, pfd-data, target-snpn, af-ee-data, w-agf-info, tngf-info, twif-info, target-nf-set-id, target-nf-service-set-id, preferred-tai, nef-id, preferred-nf-instances, notification-type, serving-scope, internal-group-identity, preferred-api-versions, v2x-support-ind, redundant-gtpu, redundant-transport, lmf-id, an-node-type, rat-type, ipups, scp-domain-list, address-domain, ipv4-addr, ipv6-prefix, served-nf-set-id, remote-plmn-id, data-forwarding, preferred-full-plmn, requester-snpn-list, max-payload-size-ext.
  • Feature Number 6, Service-Map (Mandatory): This feature indicates whether it is supported to identify the list of NF Service Instances as a map (that is, the "nfServiceList" attribute of NFProfile is supported).
  • Feature Number 7, Query-Params-Ext3 (Optional): Support of the following query parameters: ims-private-identity, ims-public-identity, msisdn, requester-plmn-specific-snssai-list, n1-msg-class, n2-info-class.
  • Feature Number 8, Query-Params-Ext4 (Optional): Support of the following query parameters: realm-id, storage-id.
  • Feature Number 9, Query-Param-vSmf-Capability (Optional): Support of the query parameter for V-SMF capability: vsmf-support-ind.

The value of nrfSupportedFeatures attribute in the discovery response varies based on the value of the requester-features attribute sent by the Consumer NF. The requester-features attribute indicates the features that are supported by the Consumer NF.

The value of the nrfSupportedFeatures attribute is calculated based on the following scenarios:

  • Value of nrfSupportedFeatures attribute when there are common supported features between the Consumer NF and NRF

    If the requester-features attribute has the value 12 (hexadecimal) in the discovery search query, the corresponding binary value is 00010010. It indicates that bits 2 and 5 are set, that is, the Consumer NF supports Feature Numbers 2 and 5. As per the current support at NRF, both features are also supported by NRF, so the common supported features between the Consumer NF and NRF are 2 and 5. Hence, the value of the nrfSupportedFeatures attribute in the discovery response is 12.

  • Value of nrfSupportedFeatures attribute when there are partial common supported features between the Consumer NF and NRF

    If the requester-features attribute has the value 6 (hexadecimal) in the discovery search query, the corresponding binary value is 0110. It indicates that bits 2 and 3 are set and the Consumer NF supports Feature Numbers 2 and 3. As per the current support at NRF, the only common supported feature between the Consumer NF and NRF is 2. Hence, the value of the nrfSupportedFeatures attribute in the discovery response is 2.

  • Value of nrfSupportedFeatures attribute when there are no common supported features between the Consumer NF and NRF

    If the requester-features attribute has the value 8 (hexadecimal) in the discovery search query, the corresponding binary value is 1000. It indicates that only bit 4 is set, that is, the Consumer NF supports Feature Number 4, which NRF does not support. Therefore, there are no common features between the Consumer NF and NRF, and the value of the nrfSupportedFeatures attribute in the discovery response is 0.

  • Value of nrfSupportedFeatures attribute is 172

    When the requester-features attribute is not present in the discovery search query of the Consumer NF, the value of the nrfSupportedFeatures attribute in the discovery response is 172.
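For example, the first scenario above corresponds to a search query and response of the following form; the host, NF types, and the content of nfInstances are illustrative and nfInstances is truncated for brevity:

curl -v -X GET "http://<nrf-host>:<port>/nnrf-disc/v1/nf-instances?target-nf-type=UDM&requester-nf-type=AMF&requester-features=12"

sample response snippet:
{
  "validityPeriod": 3600,
  "nfInstances": [],
  "nrfSupportedFeatures": "12"
}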

4.3 Support for cnDBTier APIs in CNC Console

Earlier, NRF fetched the status of cnDBTier using a CLI-based mechanism.

With the implementation of this feature, cnDBTier APIs are integrated into the CNC Console. NRF users can now view the specific cnDBTier APIs such as checking the available backup lists, cnDBTier version, database statistics, heartbeat status, local cnDBTier cluster status, georeplication status, and initiating on-demand backup on the CNC Console.

The following cnDBTier APIs can be viewed directly on the CNC Console:

  • Backup List (List completed backups of the current site): This API displays the details of completed backups along with backup ID, creation timestamp, and backup size.
  • cnDBTier Version (cnDBTier version): This API displays the cnDBTier version.
  • Database Statistics Report (Database Statistic Reports): This API displays the number of available databases.
  • Geo Replication Status:
    • Real Time Overall Replication Status (Real time overall replication status): This API displays the overall replication status in multisite deployments. For example, in a four-site deployment, it provides the replication status between the following sites: site1-site2, site1-site3, and site1-site4. The same applies to the other sites, that is, site2, site3, and site4.
    • Site Specific Real Time Replication Status (Site specific real time replication status): This API displays the site-specific replication status.
  • HeartBeat Status (Overall HeartBeat Status): This API displays the connectivity status between the local site and the remote site name to which NRF is connected.
  • Local Cluster Status (Real time local cluster status): This API displays the status of the local cluster.
  • On-Demand Backup (On-demand Backup): This API displays the status of initiated on-demand backups and helps to create a new backup.

    For more information about the above-mentioned cnDBTier APIs, see Oracle Communications Cloud Native Core, cnDBTier User Guide.

Managing cnDBTier APIs in CNC Console

Enable

This feature is available by default, when cnDBTier is configured as an instance during the CNC Console deployment. For more information about integrating cnDBTier APIs in CNC Console, see the "NF Single Cluster Configuration With cnDBTier Menu Enabled" section in the Oracle Communications Cloud Native Configuration Console Installation, Upgrade, and Fault Recovery Guide.

There is no option to disable this feature.

Configure

You can view the cnDBTier APIs in the CNC Console. For more information, see cnDBTier APIs in the CNC Console.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.4 Enhanced NRF Set Based Deployment (NRF Growth)

Earlier, NRF supported the deployment of two segments of NRF in a network. When the network capacity is expanded to support increased traffic, a single NRF set cannot be scaled beyond a certain capacity.

With the implementation of this feature, NRF supports the deployment of multiple NRF sets in the network instead of scaling the capacity of each NRF instance. Each segment in this network can have a single georedundant set, and each set can have up to four georedundant NRFs. The georedundant NRFs in the set share the state data with each other.

Figure 4-1 NRF Segments



A dedicated cnDBTier instance is deployed for each NRF that is georeplicated within the specific NRF set. Each NRF in the set synchronizes with its most preferred NRFs in the other sets within the segment to retrieve the state data of that set. Thus, every NRF has the complete segment-level view of all the sets in a specific segment.

The state data in each NRF comprises:

  • local data
  • georeplicated data of the specific set
  • remote data from the other sets.

Figure 4-2 State Data in NRF



The Cache Data Service (CDS), which is a multi-pod deployment, builds and maintains the state data of the local NRF set and the remote NRF sets in the in-memory cache. Each pod of CDS maintains its cache independently. CDS maintains the segment-level view of the state data. This segment-level view is not pushed to cnDBTier. For more information about CDS, see NRF Architecture.

The cache of local NRF set data:

  • is built or updated by querying the cnDBTier periodically
  • uses the last known information about the NFs when CDS is not updated due to a database error

The cache of remote set data:

  • is built or updated by synchronizing the state data of remote NRF sets periodically.

    Note:

    The periodicity at which the remote NRF set data is synchronized with the CDS in-memory cache is 2 seconds. Remote NRF set data synchronization is not performed for each request because it is performed over the WAN network, which may cause higher latency as it may also involve retries.
  • uses the last known information about the NFs if the remote NRF set synchronization fails.

Figure 4-3 Cache Data Service



During pod initialization, the CDS service marks itself as available only after the local and remote set state data is loaded in the in-memory cache. CDS is only used for reading local and remote state data. The NRF microservices continue to write state data directly to the cnDBTier.

Note:

CDS tries to fetch the state data from all the NRFs in the remote sets. If all the NRFs in any specific remote set are unavailable, CDS comes up with the state data of the remaining NRF sets (if any) and the local data.

The following NRF core microservices read data from CDS for various service operation requests as explained below. The write operations are directed to cnDBTier.

  • NF Registration Microservice (nfregistration): The registration microservice queries the CDS for service operations such as NFListRetrieval and NFProfileRetrieval.
  • NF Subscription Microservice (nfsubscription): The subscription microservice queries the CDS for service operations such as NFStatusSubscribe, NfStatusSubscribe(Patch), NfStatusUnsubscribe, NfStatusNotify.
  • NRF Auditor Microservice (nrfauditor): The auditor microservice queries the CDS to retrieve the total number of NFs per NfType in the segment.
  • NF Access Token Microservice (nfaccesstoken): The AccessToken microservice queries the CDS for processing the access token service operations.
  • NRF Configuration Microservice (nrfconfiguration): The configuration microservice queries CDS for processing the state data API requests.
  • NF Discovery Microservice (nfdiscovery): The discovery microservice queries the CDS periodically to update its local cache.

For more information about the microservices, see the Impacted Service Operations section.

Note:

The remaining NRF microservices do not query CDS.

This feature interacts with some of the existing NRF features. For more information, see the Interaction with Existing Features section.

Managing NRF Growth feature

Enable

Installation and Upgrade Impact

  • Upon 24.1.0 installation or upgrade, the CDS microservice pod is deployed by default. The NRF core microservices query the CDS for state data information. In case CDS is not available, the NRF core microservices fall back to the cnDBTier for service operation.
  • Release 23.4.x microservice pods retrieve the state data from the cnDBTier for processing the service operations.
  • Release 24.1.x microservice pods query CDS to retrieve the state data for processing the service operations.
  • In case of in-service upgrade:
    • CDS updates its in-memory cache with state data. The readiness probes of the CDS are configured to succeed only after at least one cache update attempt is performed. The cache is updated with the local NRF set data from the cnDBTier and if the NRF Growth feature is enabled the cache is updated with remote NRF set data.
    • During the above-mentioned upgrade scenario, until the CDS pod is available, the previous and new release pods of other microservices will query the old release pod of the CDS for the state data. This will ensure there are no in-service traffic failures during the upgrade.


Prerequisites

A minimum of two NRF sets are required to enable the feature.

Steps to Enable

  1. Upgrade all existing sites to NRF 24.1.0 version or above. Ensure that all the sites are on the same version.
  2. Configure a unique nfSetId for each NRF set using the /nrf-configuration/v1/nrfGrowth/featureOptions API (a sample request is shown after these steps). Ensure that the same nfSetId is configured for all the NRFs in the set.

    For example: Consider a case of three NRF instances in a set and the nfSetId is set1. Set1 has NRF11, NRF12, and NRF13. Configure set1 as nfSetId for NRF11, NRF12, and NRF13. See Figure 4-4 for more details.

  3. Install new NRF set(s) or upgrade an existing NRF set(s) to the NRF 24.1.0 version or above.

    For example: Consider the new NRF set as Set2 with three NRF instances, NRF21, NRF22, and NRF23. See Figure 4-4 for more details.

  4. Using the following state data APIs, retrieve the nf-instances and subscription details in each NRF of each set. Save the output for later validation.
    1. Use /nrf-state-data/v1/nf-details to retrieve the nf-instances details.
    2. Use /nrf-state-data/v1/subscription-details to retrieve the subscription details.

    For more information about the query parameters, see the REST Based NRF State Data Retrieval section.

  5. Configure the nfSetId of the new NRF set by setting the attribute nfSetId using the API /nrf-configuration/v1/nrfGrowth/featureOptions. Ensure that the same nfSetId is configured for all the NRFs in the set.

    For example: The nfSetId of Set2 is set2. Configure set2 as the nfSetId for NRF21, NRF22, and NRF23. See Figure 4-4 for more details.

  6. Load the NRF Growth feature specific alerts in both the sets. For more information about the alert configuration, see the NRF Alert Configuration section.
  7. Configure the NRFs of the remote NRF sets in each NRF by setting the attribute nrfHostConfigList using the API /nrf-configuration/v1/nrfGrowth/featureOptions.

    Note:

    Once configured, the NRFs will start syncing with the remote NRF sets. The remote set data will not be used for service operations until the feature is enabled.

    For example: Configure the host details of the NRFs in Set2 as the remote NRF set in each NRF of Set1. For instance, configure the nrfHostConfigList attribute in NRF11 of Set1 with the remote NRF host details of NRF21, NRF22, and NRF23. Similarly, configure each NRF in Set1 and Set2.

  8. Ensure that the following alerts are not present in any of the sets in the network:
    1. OcnrfRemoteSetNrfSyncFailed
    2. OcnrfSyncFailureFromAllNrfsOfAllRemoteSets
    3. OcnrfSyncFailureFromAllNrfsOfAnyRemoteSet

      If present, wait for 30 seconds to 1 minute and retry until the alerts are cleared. If the alerts are not cleared, see the NRF Alerts section for resolution steps.

  9. Use the state data APIs to validate the nf-instances and subscriptions as below to ensure that the remote syncing is successful and all the NFs and subscriptions are synced.
    1. Use /nrf-state-data/v1/nf-details?testSegmentData=true to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?testSegmentData=true to validate the subscriptions.

    Where testSegmentData is a query parameter used to validate the state data of the segment before enabling the NRF Growth feature.

    Note:

    Ensure that the state data of the NRFs in Set1 is available in the NRFs of Set2. In case you are upgrading the existing set, then ensure that the state data of the NRFs in Set2 is available in the NRFs of Set1.

    For more information about the query parameters, see the REST Based NRF State Data Retrieval section.

  10. Enable the NRF Growth feature in all the NRFs using the API /nrf-configuration/v1/nrfGrowth/featureOptions.
  11. Ensure that the following alerts are not present in any of the sets in the network:
    1. OcnrfRemoteSetNrfSyncFailed
    2. OcnrfSyncFailureFromAllNrfsOfAllRemoteSets
    3. OcnrfSyncFailureFromAllNrfsOfAnyRemoteSet
    4. OcnrfDatabaseFallbackUsed

    If present, wait for 30 seconds to 1 minute and retry until the alerts are cleared. If the alerts are not cleared, see the NRF Alerts section for resolution steps.

  12. After the above configurations are complete, all service operations consider complete segment data for processing the request.

    You can migrate the NFs from Set1 to Set2 or vice-versa. For more information about the steps to migrate NFs, see the Migration of NFs section.
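A sketch of the featureOptions configuration request referenced in the steps above. The host and port are placeholders; nfSetId and nrfHostConfigList are the documented attribute names, while the fields inside nrfHostConfigList are hypothetical. See the "NRF Growth Options" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for the exact schema:

curl -v -X PUT "http://<nrf-host>:<port>/nrf-configuration/v1/nrfGrowth/featureOptions" -H "Content-Type: application/json" -d @nrfGrowthOptions.json

sample nrfGrowthOptions.json (illustrative):
{
    "nfSetId": "set1",
    "nrfHostConfigList": [
        {
            "host": "nrf21.example.com",
            "port": 8080
        }
    ]
}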

Figure 4-4 NRF Growth nfSetId Configuration



Migration of NFs

  1. Make sure that the NF subscription list is available.
  2. Use the following state data APIs at the remote NRF set to validate that the NRF has a record of this NF as remote NF details (see the sample requests after these steps). If the record is not present, wait for 5-10 seconds and retry the API until the nfInstanceId is present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id=<subscription ID of the subscription> to validate the subscriptions.
  3. Unsubscribe the subscriptions of the NF from the NRF.
    1. NF sends NfStatusUnsubscribe request to NRF.
    2. NRF sends NfStatusUnsubscribe response to NF.
  4. Deregister the NF from the NRF.
    1. NF sends NfDeregister request to NRF.
    2. NRF sends NfDeregister response to NF.
  5. Use the following state data APIs at the target NRF to validate that the NRF does not have the NF record. If present, wait for 5-10 seconds and retry the API until the nfInstanceId is not present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id=<subscription ID of the subscription> to validate the subscriptions.
  6. Register the NF with the NRF at the target NRF set.
    1. NF sends NfRegister request to NRF.
    2. NRF sends NfRegister response to NF.
  7. Create the new subscription as required.
    1. NF sends NfStatusSubscribe request to NRF.
    2. NRF sends NfStatusSubscribe response to NF.
  8. Use the following state data APIs at the target NRF to validate that the NF is registered and the subscription is created. The NF record is stored as a local registered NF. If not present, wait for 5-10 seconds and retry the API until the nfInstanceId is present in the response.
    1. Use /nrf-state-data/v1/nf-details?nf-instance-id=<nfInstanceId of the NF> to validate the nf-instances.
    2. Use /nrf-state-data/v1/subscription-details?subscription-id=<subscription ID of the subscription> to validate the subscription.
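For illustration, the validation calls in the steps above take the following form; the host, port, and identifiers are examples:

curl -v -X GET "http://<nrf-host>:<port>/nrf-state-data/v1/nf-details?nf-instance-id=6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c"
curl -v -X GET "http://<nrf-host>:<port>/nrf-state-data/v1/subscription-details?subscription-id=<subscriptionId>"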

Figure 4-5 Migration of NF from one set to another



Configure

Configure the NRF Growth feature using REST API or CNC Console:
  • Configure NRF Growth feature using REST API: Perform the feature configurations as described in the "NRF Growth Options" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Growth feature using CNC Console: Perform the feature configurations as described in NRF Growth Options.
  • Configure Forwarding Options for NRF Growth feature using REST API: Perform the configurations as described in the "Forwarding Options for NRF Growth" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Forwarding Options for NRF Growth feature using CNC Console: Perform the configurations as described in the Forwarding Options section.

Note:

The forwarding rules for nfDiscover service requests are based on the following 3GPP discovery request parameters:

  • target-nf-type (Mandatory Parameter)
  • service-names (Optional Parameter)

The forwarding rules for access token service requests are based on the following 3GPP access token request parameter:

  • scope (Mandatory Parameter)

Observe

Metrics

The following metrics are added for the NRF Growth feature.
  • ocnrf_cds_rx_requests_total
  • ocnrf_cds_tx_responses_total
  • ocnrf_cds_round_trip_time_seconds
  • ocnrf_query_remote_cds_requests_total
  • ocnrf_query_remote_cds_responses_total
  • ocnrf_query_remote_cds_round_trip_time_seconds
  • ocnrf_query_remote_cds_message_size_bytes
  • ocnrf_cache_fallback_total
  • ocnrf_db_fallback_total
  • ocnrf_query_cds_requests_total
  • ocnrf_query_cds_responses_total
  • ocnrf_query_cds_round_trip_time_seconds
  • ocnrf_dbmetrics_total
  • ocnrf_nf_registered_count
  • ocnrf_cache_sync_count_total
  • ocnrf_remote_set_unavailable_total
  • ocnrf_all_remote_sets_unavailable_total

For more information on the above metrics, see the NRF Cache Data Metrics section.

KPIs

The following are the NRF Growth feature-specific KPIs:
  • Cache Sync
  • Total Number of CDS Requests
  • Total Number of CDS Responses
  • Total Number of CDS Requests per Service Operation
  • Total Number of CDS Responses per Service Operation
  • CDS Latency 50%
  • CDS Latency 90%
  • CDS Latency 95%
  • CDS Latency 99%
  • Total Number of CDS Requests per Request Type
  • Total Number of CDS Responses per Request Type
  • Total Number of Remote CDS Requests
  • Total Number of Remote CDS Responses
  • Remote CDS Query Latency 50%
  • Remote CDS Query Latency 90%
  • Remote CDS Query Latency 95%
  • Remote CDS Query Latency 99%
  • Database Fallback
  • CDS Cache Sync

For more information on the above KPIs, see the NRF Growth Specific KPIs section.

Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.4.1 Impacted Service Operations

This section explains the changes in the functionality of the service operations when multiple NRF sets are deployed.

nfRegistration Service Operation

NFRegister, NFUpdate, or NFDeregister

The NF registers and heartbeats to the Primary NRF. When the nfRegistration service receives an NFRegister, NFUpdate, or NFDeregister request, it processes the request and, if successful, creates, updates, or deletes the nf-instances records in the cnDBTier. The nf-instances data is made available to, and used by, the service operations in the remote set as well.

The nfRegistration service does not query the Cache Data Service (CDS) for these service operations. It directly updates or saves the data in the local cnDBTier.

NFProfileRetrieval

When the nfRegistration service receives the NFProfileRetrieval request, it queries the CDS to fetch the NFProfile.

The CDS provides the NFProfile of the local NRF set by querying the cnDBTier. The CDS queries the remote NRF sets periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the NFProfile registered at the remote NRF sets by querying the in-memory cache.

The response contains the matching NFProfile either from the local NRF set or from the remote NRF set. The NFProfileRetrieval response is created by the nfRegistration service with the nf-instances provided by the CDS.

The responses from the nfRegistration service vary based on the following conditions:

  • If the NFProfile is not present in the response from the CDS, a response with status code 404 NOT FOUND is sent to the consumer NF.
  • If the CDS is unreachable or a non-2xx status code is received, the registration service relies on the cnDBTier to obtain the nf-instances local data and fulfill the service operation. The nfRegistration service fetches data from the cnDBTier and only the matching NFProfile from the local NRF set data is sent.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides state data of the local NRF set received from cnDBTier and the last known state data of the remote set from its in-memory cache.

NFListRetrieval

When the nfRegistration service receives an NFListRetrieval request, it queries the CDS to get the list of nfInstanceIDs.

The CDS provides the nf-instances of the local NRF set by querying the cnDBTier. The CDS queries the remote set periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the nf-instances of the remote NRF sets by querying the in-memory cache.

The response contains the matching nf-instances from the local NRF set and the remote NRF set. The NFListRetrieval response is created by the nfRegistration service with the nf-instances provided by the CDS.

The responses from the nfRegistration service vary based on the following conditions:

  • If there are no matching nf-instances in either of the sets, an empty response is sent.
  • If the CDS is unreachable or a non-2xx status code is received, the registration service falls back to the cnDBTier to get the nf-instances local data and fulfill the service operation. The nfRegistration service fetches data from the cnDBTier and only the matching nf-instances from the local NRF set data is sent.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides state data of the local NRF set received from cnDBTier and the last known state data of the remote set from its in-memory cache.

nfSubscription Service Operation

The NFs continue to subscribe for Producer NFs and update their subscriptions using the NRFs of the same georedundant set. The nfSubscription service is responsible for creating and maintaining the subscriptions in NRF. It also takes care of triggering NfStatusNotify to the consumer NFs for their subscriptions.

NFStatusSubscribe

When the nfSubscription service receives the NfStatusSubscribe request to create a subscription, the nfSubscription service checks the following constraints before creating the subscription:

  • duplicate subscription based on the allowDuplicateSubscriptions parameter configuration. This check is performed across all the sets in the segment. For more information about the allowDuplicateSubscriptions parameter, see Oracle Communications Cloud Native Core, Network Repository Function User Guide.
  • subscription limit conditions based on the subscriptionLimit parameter configuration. This check is applicable for all the subscriptions at the local NRF set. For more information about the Subscription Limit feature, see the "Subscription Limit" section in Oracle Communications Cloud Native Core, Network Repository Function User Guide.

If the constraint checks pass, the nfSubscription service creates the subscription and saves it to the cnDBTier. To check for existing subscriptions, the nfSubscription service queries the CDS. The CDS provides the subscriptions at the local NRF set by querying the cnDBTier. The CDS queries the remote set periodically and caches the data in the in-memory cache. If the growth feature is enabled, the CDS provides the subscriptions of the remote NRF sets by querying the in-memory cache.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service creates subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.
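As a sketch, an NfStatusSubscribe request as defined in 3GPP TS 29.510 is a POST to the subscriptions collection; the host, notification URI, and condition below are examples:

curl -v -X POST "http://<nrf-host>:<port>/nnrf-nfm/v1/subscriptions" -H "Content-Type: application/json" -d @subscription.json

sample subscription.json:
{
    "nfStatusNotificationUri": "http://consumer.example.com/nf-status-notify",
    "subscrCond": {
        "nfType": "UDM"
    }
}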

NfStatusSubscribe(Patch)

When the nfSubscription service receives the NfStatusSubscribe (Patch) request to update a subscription, the nfSubscription service checks the following constraints before updating the subscription:

  • checks if the subscription exists. This check is performed across all the sets in the segment.
  • subscription limit conditions based on the subscriptionLimit parameter configuration. This check is applicable for all the subscriptions at the local NRF set.

If the constraint checks pass, the nfSubscription service updates the subscription and saves it to the cnDBTier.

The subscription microservice queries the CDS to retrieve the subscriptions that are created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching subscriptions created at the remote NRF sets by querying the in-memory cache.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service updates subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NfStatusUnsubscribe

When the nfSubscription service receives the NfStatusUnsubscribe request to delete a subscription, the nfSubscription service checks if the subscription exists. This check is performed across all the sets in the segment.

The subscription microservice queries the CDS to retrieve the matching subscriptions that are created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching subscriptions created at the remote NRF sets by querying the in-memory cache.

If the subscription is present, the nfSubscription service deletes the subscription from the cnDBTier and responds with the 200 OK status code. Subscriptions created at one NRF set cannot be deleted at any other NRF set; such requests are rejected. If the subscription is not present, a response with a 404 status code is sent.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service unsubscribes the subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NfStatusNotify

The nfSubscription service triggers notifications towards the consumer NFs that are subscribed to nf-instances status events when the conditions specified in the subscription are met. This check is performed across all the sets in the segment.

The subscription service queries the CDS to retrieve the subscriptions matching the nf-instances event that is created at the local NRF set. The CDS provides the corresponding subscriptions created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS retrieves the subscriptions matching the nf-instances event at the remote NRF sets by querying the in-memory cache.

The nfSubscription service triggers the notification event to the consumer NFs for all the subscriptions received from the CDS.

The responses from the nfSubscription service vary based on the following conditions:

  • If the CDS is unreachable or a non-2xx status code is received, the nfSubscription service relies on the cnDBTier to get the subscription in local NRF and fulfill the service operation. The nfSubscription service triggers the notification for the subscriptions only in the local NRF.
  • If the NRF Growth feature is enabled and the remote NRF set is not reachable, CDS provides subscription details of the local NRF set, and the last known information of the remote NRF set is provided.

NFDiscover

When the NFDiscover service receives a discovery request, it looks for Producer NFProfile maintained at the in-memory cache in the discovery service. The discovery service cache is periodically updated by querying the CDS.

The CDS responds with the latest local NRF set data, which the discovery microservice uses to update its local cache.

Additionally, if the NRF Growth feature is enabled, the CDS response also includes the cached remote NRF set data.

In case the CDS is not reachable or a non-2xx status code is received, the discovery service relies on the cnDBTier to get the local NRF set data and updates its in-memory cache with the local NRF set state data. If the NRF Growth feature is enabled, the last known data of the remote NRF set is used for the discovery service operation.

NfAccessToken

When the nfAccessToken service receives an access token request with targetNfInstanceId, it queries the CDS to validate whether the target NF and/or requester NF is registered.

The nfAccessToken service queries the CDS to retrieve the targetNfInstanceId and requesterNfInstanceId that are created at the local NRF set. The CDS provides the corresponding NF instances created at the local NRF set by querying the cnDBTier. If the NRF Growth feature is enabled, the CDS provides the matching NF instances created at the remote NRF sets by querying the in-memory cache.

Note:

The requesterNfInstanceId is validated as part of the Access Token Request Authorization feature.

If the CDS is unreachable or a non-2xx status code is received, the nfAccessToken service falls back to the cnDBTier to get the NfInstanceId from the local NRF set data and fulfill the service operation.
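For reference, an access token request as defined in 3GPP TS 29.510 is a form-encoded POST to the token endpoint; the host, identifiers, and scope below are examples:

curl -v -X POST "http://<nrf-host>:<port>/oauth2/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=client_credentials&nfInstanceId=6faf1bbc-6e4a-4454-a507-a14ef8e1bc5c&targetNfInstanceId=54804518-4191-46b3-955c-ac631f953ed8&scope=nudm-sdm"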

4.4.2 Interaction with Existing Features

With the deployment of multiple sets of NRFs, the functionality of the following features is impacted.

NRF Forwarding Feature

When the growth feature is enabled, service requests can be forwarded to an NRF in another segment. The service requests are forwarded only if the requested Producer NfInstances are not registered in any of the NRF sets in the segment.

For more information about configuring the NRF Forwarding Options for NRF Growth using REST API or CNC Console:
  • Configure NRF Forwarding Options for NRF Growth feature using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Forwarding Options for NRF Growth feature using CNC Console: Perform the feature configurations as described in Forwarding Options.
Dynamic SLF Feature

When dynamic SLF is configured and growth feature is enabled, the dynamic selection of the SLF profiles is performed across local and remote NRF sets. In a given NRF set, the preferredSLFLocality attribute can be utilized to prefer SLFs of the local NRF set over SLFs of remote NRF set for sending SLF query. For more information about the dynamic SLF feature, see the Subscriber Location Function section.

Subscription Limit

When the growth feature is enabled, the subscription count is evaluated within each set.

Note:

Ensure that the total number of subscriptions in all the sets in the segment does not exceed the overall supported limit. Configure the globalMaxLimit value of each set accordingly.

For example, consider two sets deployed in the segment, where the total number of subscriptions supported in the segment is 1000, the globalMaxLimit of set1 is 300, and the globalMaxLimit of set2 is 700. In this case, NRF validates that the subscription count of set1 does not cross 300 and that of set2 does not cross 700.

For more information about the Subscription Limit feature, see the Subscription Limit section.

REST-Based NRF State Data Retrieval

The REST-based NRF state data retrieval feature provides non-signaling APIs to access NRF state data. It allows the operator to access the NRF state data to understand and debug failures. NRF exposes the following APIs to fetch the NfProfiles and NfSubscriptions at NRF across the local and remote NRF sets. When the growth feature is enabled, the API behavior is as follows:

  1. {apiRoot}/nrf-state-data/v1/nf-details

    The API returns the NfProfiles registered at the NRF and its mated sites. The response includes the NfProfiles registered at the local site, and its mated sites if the replication channel between the given site and the mate site is UP.

    When the growth feature is disabled, the remote profiles can still be fetched by setting the query parameter ?testSegmentData=true.

    Refer to the Enable section for the usage of this query parameter.

    The NfProfiles are filtered based on the query parameters mentioned in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The filtered NFProfiles are included in the response.

  2. {apiRoot}/nrf-state-data/v1/subscription-details

    The API returns the NfSubscriptions created at the NRF and its mated sites. The response includes the NfSubscriptions created at the local site, and its mated sites if the replication channel between the given site and the mate site is UP.

    When the growth feature is disabled, the remote subscriptions can still be fetched by setting the query parameter ?testSegmentData=true.

    Refer to the Enable section for the usage of this query parameter.

    The NfSubscriptions are filtered based on the query parameters mentioned in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The filtered NfSubscriptions are included in the response.
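For illustration, fetching the segment-wide view before enabling the feature may look as follows; the host and port are examples:

curl -v -X GET "http://<nrf-host>:<port>/nrf-state-data/v1/nf-details?testSegmentData=true"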

Kubernetes Probes

Startup Probe

The Cache Data Service startup probe succeeds when the following conditions are met:

  1. Connectivity to the cnDBTier is successful.
  2. The NRF State and Configuration tables are present, and configuration data is present in the cnDBTier.

Readiness Probe

The Cache Data Service readiness probe marks the container as ready only if the following conditions are met:

  1. Connectivity to the cnDBTier is successful.
  2. The NRF State and Configuration tables are present, and configuration data is present in the cnDBTier.
  3. One successful attempt to load the local NRF set state data into the local cache. The state data comprises the NfInstances and NfSubscriptions. If no NfInstances or NfSubscriptions are present in the cnDBTier, the cache remains empty and the attempt is considered a success. This check is done only once when the pod comes up for the first time. Subsequent readiness probes do not check this condition.
  4. If the growth feature is enabled, there will be one successful attempt to load the state data from all the remote NRF sets that are configured. If no NfInstances or NfSubscriptions are received from the remote NRF Set, that request is considered a success. If the remote set NRFs are not reachable, then after all the reattempts and reroutes, the request is considered a success.

Liveness Probe

The liveness probe monitors the health of the critical threads running in the Cache Data Service. If any of the threads is detected to be in a deadlock or non-running state, the liveness probe fails.

For more information about Kubernetes probes, see the Kubernetes Probes section.

4.5 Support for Automated Certificate Lifecycle Management

In NRF 23.3.x and earlier, X.509 and Transport Layer Security (TLS) certificates were managed manually. When multiple instances of NRF were deployed in a 5G network, certificate management, such as certificate creation, renewal, removal, and so on, became tedious and error-prone.

Starting with NRF 23.4.x, you can integrate NRF with Oracle Communications Cloud Native Core, Certificate Management (OCCM) to support automation of certificate lifecycle management. OCCM manages TLS certificates stored in Kubernetes secrets by integrating with a Certificate Authority (CA) using the Certificate Management Protocol Version 2 (CMPv2). OCCM obtains and signs TLS certificates within the NRF namespace. For more information about OCCM, see Oracle Communications Cloud Native Core, Certificate Management User Guide.

Figure 4-6 Support for OCCM



The above diagram indicates that OCCM writes the keys and certificates to the Kubernetes secrets, and NRF reads them to establish a TLS connection with other NFs.

OCCM can automatically manage the following TLS certificates:

  • 5G Service Based Architecture (SBA) client TLS certificates
  • 5G SBA server TLS certificates
  • Message Feed TLS certificates

This feature enables NRF to monitor, create, recreate, and renew TLS certificates using OCCM, based on their validity. For information about enabling HTTPS, see "Configuring Secrets for Enabling HTTPS" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Install Guide Considerations
  • Upgrade: When NRF is deployed with OCCM, follow the specific upgrade procedure. For information about the upgrade strategy, see "Upgrade Strategy" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  • Rollback: For more information on migrating the secrets from NRF to OCCM and removal of Kubernetes secrets from the yaml file, see "Postupgrade Task" in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure

There are no additional configuration changes required at NRF.

Observe
Metrics

This feature uses the existing metrics:

  • oc_egressgateway_connection_failure_total
  • oc_ingressgateway_connection_failure_total

For more information, see NRF Gateways Metrics.

Maintain

If you encounter any OCCM-specific alerts, see the "OCCM Alerts" section in Oracle Communications Cloud Native Core, Certificate Management User Guide.

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on collecting logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.6 Egress Gateway Route Configuration for Different Deployments

The Egress Gateway route configuration for mTLS and non-TLS based deployments is as follows:

Table 4-2 Egress Gateway Route Configuration for Static Peers

  • HTTPS Outgoing Traffic (TLS):
    • Peer under PeerSetConfiguration: httpsConfiguration
    • RoutesConfiguration.httpsTargetOnly: true
    • RoutesConfiguration.httpRuriOnly: false
    • 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
    • Scheme: https
    • Authority Header: Peer FQDN (Example: SCP/SEPP FQDN)
  • HTTP Outgoing Traffic (non-TLS):
    • Peer under PeerSetConfiguration: httpsConfiguration
    • RoutesConfiguration.httpsTargetOnly: true
    • RoutesConfiguration.httpRuriOnly: true
    • 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
    • Scheme: http
    • Authority Header: Peer FQDN (Example: SCP/SEPP FQDN)

Table 4-3 Egress Gateway Route Configuration for virtualHost Peers

  • HTTPS Outgoing Traffic (TLS):
    • Peer under PeerSetConfiguration: httpsConfiguration
    • RoutesConfiguration.httpsTargetOnly: true
    • RoutesConfiguration.httpRuriOnly: false
    • 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
    • Scheme: https
    • Authority Header: Resolved Peer FQDN (Example: Resolved SCP/SEPP FQDN)
  • HTTP Outgoing Traffic (non-TLS):
    • Peer under PeerSetConfiguration: httpConfiguration
    • RoutesConfiguration.httpsTargetOnly: false
    • RoutesConfiguration.httpRuriOnly: true
    • 3gpp-Sbi-Target-apiRoot: Target FQDN (Example: <SLF FQDN>/<NotificationServer>)
    • Scheme: http
    • Authority Header: Resolved Peer FQDN (Example: Resolved SCP/SEPP FQDN)

Note:

  • For service-mesh based deployment, NRF Egress Gateway application container sends HTTP outgoing traffic only. The sidecar container is responsible for sending out the traffic as HTTPS traffic. Hence, for service-mesh based deployment, perform the configuration as per HTTP Outgoing Traffic (non-TLS).
Configuration for HTTPS Outgoing Traffic
For HTTPS Outgoing Traffic, perform the following configuration at NRF Egress Gateway:
  1. Update the PeerConfiguration as follows:
    
    sample PeerConfiguration.json
    [
      {
        "id": "peer1",
        "host": "scp-stub-service01",
        "port": "8080",
        "apiPrefix": "/",
        "healthApiPath":"/{scpApiRoot}/{apiVersion}/status"
      }
    ]
  2. Update the PeerSetConfiguration as follows:
     
    sample peerset.json
    [
        {
            "id":"set0",
            "httpsConfiguration":[
            {
            "priority": 1,
            "peerIdentifier": "peer1"
            }]
        }
    ]
  3. Update the RoutesConfiguration as follows:
    
    sample RoutesConfiguration.json
    {
          "id":"egress_scp_proxy2",
          "uri":"http://localhost:32069/",
          "order":3,
          "metadata":{
             "httpsTargetOnly":true,
             "httpRuriOnly":false,
             "sbiRoutingEnabled":true
          },
          "predicates":[
             {
                "args":{
                   "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                },
                "name":"Path"
             }
          ],
          "filters":[
             {
                "name":"SbiRouting",
                "args":{
                   "peerSetIdentifier":"set0",
                   "customPeerSelectorEnabled":false
                }
             }
          ]
       }
Configuration for HTTP Outgoing Traffic
For HTTP Outgoing Traffic, perform the following configuration at NRF Egress Gateway:
  1. Update the PeerConfiguration as follows:
    
    sample PeerConfiguration.json
    [
      {
        "id": "peer1",
        "host": "scp-stub-service01",
        "port": "8080",
        "apiPrefix": "/",
        "healthApiPath":"/{scpApiRoot}/{apiVersion}/status"
      }
    ]
  2. Update the PeerSetConfiguration as follows:
     
    sample PeerSetConfiguration.json
    [
        {
            "id":"set0",
            "httpsConfiguration":[
            {
            "priority": 1,
            "peerIdentifier": "peer1"
            }]
        }
    ]
  3. Update the RoutesConfiguration as follows:
    
    sample RoutesConfiguration.json
    {
          "id":"egress_scp_proxy2",
          "uri":"http://localhost:32069/",
          "order":3,
          "metadata":{
             "httpsTargetOnly":true,
             "httpRuriOnly":true,
             "sbiRoutingEnabled":true
          },
          "predicates":[
             {
                "args":{
                   "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                },
                "name":"Path"
             }
          ],
          "filters":[
             {
                "name":"SbiRouting",
                "args":{
                   "peerSetIdentifier":"set0",
                   "customPeerSelectorEnabled":false
                }
             }
          ]
       }

4.7 Routing Egress Messages through SCP

NRF allows routing of the following Egress messages through SCP or directly, with an option to configure each Egress request type independently:
  • SLF requests
  • Notification requests
  • NRF Forwarding requests
  • Roaming requests
The above requests are routed through SCP based on the configuration in the Egress Gateway. For more information about the routes configuration, see the "Routes Configuration" section in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

NRF supports routing of the Egress requests through SCP configured with FQDN and virtual FQDN as peer instances.

The Alternate Route Service is used for the resolution of virtual FQDNs using DNS-SRV. Egress Gateway uses the virtual FQDN of peer instances to query the Alternate Route Service and get the list of alternate FQDNs, each of which has an assigned priority. The Egress Gateway selects the peer instances based on the priority value.

Figure 4-7 Routing Egress Messages through SCP

Managing the feature
Enable
  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set enableNrfArtisanService to true to enable the Artisan microservice.

    Note:

    This parameter must be enabled when the dynamic SLF feature is configured.
  3. Set alternateRouteServiceEnable to true to enable alternate route service. For more information on enabling alternate route service, see the "Global Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    This parameter must be enabled when alternate route service through DNS-SRV is required.
  4. Save the file.
  5. Run helm install. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. If you are enabling this parameter after NRF deployment, run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
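As a sketch, assuming a Helm release name, chart reference, and namespace, the upgrade command takes the following form:

helm upgrade <release-name> <ocnrf-helm-chart> -f ocnrf_custom_values_23.4.6.yaml -n <namespace>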

Configure

Configure using REST API: Perform the following Egress Gateway configuration as described in "Egress Gateway Configuration" section of the Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • create or update the peerconfiguration with SCP FQDN details.
  • create or update the peersetconfiguration to assign these peers.
  • create or update the sbiroutingerrorcriteriasets.
  • create or update the sbiroutingerroractionsets.
  • create or update the routesconfiguration to provide the SCP details. For more information about the routes configuration specific to the above-mentioned requests, see the following sections.
  • If all the requests must be routed through SCP, then create or update the routesconfiguration as follows:
    [
      {
        "id": "egress_scp_proxy1",
        "uri": "http://localhost:32068/",
        "order": 0,
        "metadata": {
          "httpsTargetOnly": true,
          "httpRuriOnly": true,
          "sbiRoutingEnabled": true
        },
        "predicates": [
          {
            "args": {
              "pattern": "/**"
            },
            "name": "Path"
          }
        ],
        "filters": [
          {
            "name": "SbiRouting",
            "args": {
              "peerSetIdentifier": "set0",
              "customPeerSelectorEnabled": false,
              "errorHandling": [
                {
                  "errorCriteriaSet": "criteria_1",
                  "actionSet": "action_1",
                  "priority": 1
                },
                {
                  "errorCriteriaSet": "criteria_0",
                  "actionSet": "action_0",
                  "priority": 2
                }
              ]
            }
          }
        ]
      },
      {
        "id": "default_route",
        "uri": "egress://request.uri",
        "order": 100,
        "filters": [
          {
            "name": "DefaultRouteRetry"
          }
        ],
        "predicates": [
          {
            "args": {
              "pattern": "/**"
            },
            "name": "Path"
          }
        ]
      }
    ]
Maintain

If you encounter alerts at system or application levels, see the NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:
  • Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  • Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.7.1 SLF Requests

NRF supports routing of SLF queries through SCP towards SLF or UDR.

Managing SLF Requests through SCP

Configure

You can configure the SLF requests through SCP using the Egress Gateway configurations in the REST API.

Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
  • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
  • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
    curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
     
    sample header.json:-
    [
        {
            "id":"egress_scp_proxy1",
            "uri":"http://localhost:32068/",
            "order":0,
            "metadata":{
                "httpsTargetOnly":true,
                "httpRuriOnly":false,
                "sbiRoutingEnabled":true
            },
            "predicates":[
                {
                    "args":{
                        "pattern":"/nudr-group-id-map/v1/nf-group-ids"
                    },
                    "name":"Path"
                }
            ],
            "filters":[
                {
                    "name":"SbiRouting",
                    "args":{
                        "peerSetIdentifier":"set0",
                        "customPeerSelectorEnabled":true,
                        "errorHandling":[
                            {
                                "errorCriteriaSet":"criteria_1",
                                "actionSet":"action_1",
                                "priority":1
                            },
                            {
                                "errorCriteriaSet":"criteria_0",
                                "actionSet":"action_0",
                                "priority":2
                            }
                        ]
                    }
                }
            ]
        },
        {
            "id": "default_route",
            "uri": "egress://request.uri",
            "order": 100,
            "predicates": [
                {
                    "args": {
                        "pattern": "/**"
                    },
                    "name": "Path"
                }
            ]
        }
    ]

Note:

To disable the routing of SLF queries through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade.

{
  "id": "default_route",
  "uri": "egress://request.uri",
  "order": 100,
  "predicates": [
    {
      "args": {
        "pattern": "/**"
      },
      "name": "Path"
    }
  ]
}

4.7.2 Notifications Requests

NRF allows you to route the notification messages through SCP based on the configuration in the Egress Gateway.

Managing Notifications Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the notifications through SCP using the Egress Gateway configurations in the REST API.

Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
  • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
  • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
  • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
    curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
     
    Sample header.json:
    [
        {
            "id": "egress_scp_proxy2",
            "uri": "http://localhost:32068/",
            "order": 20,
            "filters": [
                {
                    "args": {
                        "errorHandling": [
                            {
                                "priority": 1,
                                "actionSet": "action_1",
                                "errorCriteriaSet": "criteria_1"
                            },
                            {
                                "priority": 2,
                                "actionSet": "action_0",
                                "errorCriteriaSet": "criteria_0"
                            }
                        ],
                        "peerSetIdentifier": "set0",
                        "customPeerSelectorEnabled": false
                    },
                    "name": "SbiRouting"
                }
            ],
            "metadata": {
                "httpRuriOnly": false,
                "httpsTargetOnly": true,
                "sbiRoutingEnabled": true
            },
            "predicates": [
                {
                    "args": {
                        "header": "3gpp-Sbi-Callback",
                        "regexp": "Nnrf_NFManagement_NFStatusNotify"
                    },
                    "name": "Header"
                }
            ]
        },
        {
            "id": "default_route",
            "uri": "egress://request.uri",
            "order": 100,
            "predicates": [
                {
                    "args": {
                        "pattern": "/**"
                    },
                    "name": "Path"
                }
            ]
        }
    ]

Note:

To disable the routing of notifications through SCP, remove the SbiRouting filter for SCP from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and add the default_route configuration. The following is a sample default_route configuration:
{
  "id": "default_route",
  "uri": "egress://request.uri",
  "order": 100,
  "predicates": [
    {
      "args": {
        "pattern": "/**"
      },
      "name": "Path"
    }
  ]
}

4.7.3 NRF Forwarding Requests

NRF allows you to route the NRF-NRF forwarding messages through SCP based on the configuration in the Egress Gateway.

Managing NRF Forwarding Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the forwarding through SCP using the Egress Gateway configurations in the REST API:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
        {
          "id": "egress_scp_proxy1",
          "uri": "http://localhost:32068/",
          "order": 0,
          "filters": [
            {
              "args": {
                "errorHandling": [
                  {
                    "priority": 1,
                    "actionSet": "action_1",
                    "errorCriteriaSet": "criteria_1"
                  },
                  {
                    "priority": 2,
                    "actionSet": "action_0",
                    "errorCriteriaSet": "criteria_0"
                  }
                ],
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": false
              },
              "name": "SbiRouting"
            },
            {
              "args": {
                "name": "OC-NRF-Forwarding"
              },
              "name": "RemoveRequestHeader"
            }
          ],
          "metadata": {
            "httpRuriOnly": false,
            "httpsTargetOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "header": "OC-NRF-Forwarding",
                "regexp": ".*"
              },
              "name": "Header"
            }
          ]
        },
        {
          "id": "default_route",
          "uri": "egress://request.uri",
          "order": 100,
          "filters": [
            {
              "name": "DefaultRouteRetry"
            }
          ],
          "predicates": [
            {
              "args": {
                "pattern": "/**"
              },
              "name": "Path"
            }
          ]
        }
      ]

Note:

To disable the routing of NRF-NRF forwarding messages through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade. The following is a sample default_route configuration:
{
  "id": "default_route",
  "uri": "egress://request.uri",
  "order": 100,
  "filters": [
    {
      "name": "DefaultRouteRetry"
    }
  ],
  "predicates": [
    {
      "args": {
        "pattern": "/**"
      },
      "name": "Path"
    }
  ]
}

4.7.4 Roaming Requests

NRF allows you to route the roaming messages through SCP based on the configuration in the Egress Gateway.

Managing Roaming Requests through SCP

This section explains the procedure to configure the feature.

Configure

You can configure the roaming through SCP using the Egress Gateway configurations in the REST API:
  • Configure using REST API: Perform the following feature configurations as described in the "Egress Gateway Configuration" section of Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration.
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      Sample header.json:
      [
        {
          "id": "egress_scp_proxy1",
          "uri": "http://localhost:32068/",
          "order": 0,
          "filters": [
            {
              "args": {
                "errorHandling": [
                  {
                    "priority": 1,
                    "actionSet": "action_1",
                    "errorCriteriaSet": "criteria_1"
                  },
                  {
                    "priority": 2,
                    "actionSet": "action_0",
                    "errorCriteriaSet": "criteria_0"
                  }
                ],
                "peerSetIdentifier": "set0",
                "customPeerSelectorEnabled": false
              },
              "name": "SbiRouting"
            },
            {
              "args": {
                "name": "OC-MCCMNC"
              },
              "name": "RemoveRequestHeader"
            }
          ],
          "metadata": {
            "httpRuriOnly": false,
            "httpsTargetOnly": true,
            "sbiRoutingEnabled": true
          },
          "predicates": [
            {
              "args": {
                "header": "OC-MCCMNC",
                "regexp": "310014"
              },
              "name": "Header"
            }
          ]
        },
        {
          "id": "default_route",
          "uri": "egress://request.uri",
          "order": 100,
          "filters": [
            {
              "name": "DefaultRouteRetry"
            }
          ],
          "predicates": [
            {
              "args": {
                "pattern": "/**"
              },
              "name": "Path"
            }
          ]
        }
      ]

Note:

To disable the routing of roaming messages through SCP, remove the Egress Gateway configuration from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade. The following is a sample default_route configuration:
{
  "id": "default_route",
  "uri": "egress://request.uri",
  "order": 100,
  "filters": [
    {
      "name": "DefaultRouteRetry"
    }
  ],
  "predicates": [
    {
      "args": {
        "pattern": "/**"
      },
      "name": "Path"
    }
  ]
}

4.8 Support for vsmf-support-ind Attribute in NF Discover Service Operation

NRF supports vsmf-support-ind attribute in the NF Discover service operation query as per the 3GPP standards. For more information about the attribute, see NRF Compliance Matrix.

The discovery query is processed as follows:

  • If the vsmf-support-ind attribute is present in the NFDiscover query and the value of targetNfType is other than SMF, NRF does not reject the discovery query. However, no NF profiles are returned in this case.
  • The presence of the requester-features attribute, or its value, does not have any impact on the processing of vsmf-support-ind in the discovery query.
  • When an SMF profile contains the SMFInfoList attribute and one of the SMFInfo elements in it does not have vsmf-support-ind, NRF considers the V-SMF capability support of that SMF profile as not specified.
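
The following is an illustrative NFDiscover query that uses the vsmf-support-ind attribute; the apiRoot and the requester NF type are hypothetical values used only for this example:

GET {apiRoot}/nnrf-disc/v1/nf-instances?target-nf-type=SMF&requester-nf-type=AMF&vsmf-support-ind=true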

4.9 Ingress Gateway Pod Protection

Ingress Gateway handles all the incoming traffic towards NRF. It may undergo overload conditions due to uneven distribution of traffic, network fluctuations leading to traffic bursts, or unexpected high traffic volume.

This feature protects the Ingress Gateway pods from getting overloaded due to uneven traffic distribution, traffic bursts, and congestion. It ensures the protection and mitigation of pods from entering an overload condition, while also facilitating necessary actions for recovery.

The pod protection is performed based on the CPU usage and the pending message count, as explained in the Congestion State Parameters. These congestion parameters are measured against the various states mentioned in the Ingress Gateway Pod States to detect the overload condition.

Note:

Horizontal Pod Autoscaling (HPA) at the Ingress Gateway microservice and the pod protection mechanism at the Ingress Gateway microservice are independent features with different trigger conditions. While HPA considers the microservice load, the pod protection mechanism works only on the pod load. Therefore, their order of triggering cannot be predicted.

In a service mesh based deployment, all incoming connections to the pod get terminated at the sidecar container, then the sidecar container creates a new connection toward the application container. These incoming connections from the peer are managed by the sidecar and outside the purview of the application container.

Hence, in a service mesh based deployment, when the Ingress Gateway container reaches the DoC or Congested level, the Ingress Gateway container can only stop accepting new connections from the sidecar container. Also, in this state, the Ingress Gateway container reduces the concurrency of the existing connections between the sidecar container and the Ingress Gateway container. Any new request received over a new connection may get accepted or rejected based on the sidecar connection management.

In a non-service mesh based deployment, all incoming connections to the pod get terminated at the Ingress Gateway container. Hence, when the Ingress Gateway container reaches the DoC or Congested level, the Ingress Gateway container stops accepting new connections. Also, in this state, the Ingress Gateway container reduces the concurrency of the existing connections between the peer and the Ingress Gateway container. Any new request received over a new connection results in a request timeout at the peer.

Congestion State Parameters

As part of the Pod Protection feature, every Ingress Gateway microservice pod monitors its congestion state. Following are the congestion parameters to monitor the pod state:

  • CPU

    The congestion state is monitored based on CPU usage to determine the congestion level. The CPU usage is monitored using the Kubernetes cgroup (cpuacct.usage) and it is measured in nanoseconds.

    It is monitored periodically and calculated using the following formula and then compared against the configured CPU thresholds to determine the congestion state. For more information about the parameters and the proposed threshold values, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    Figure 4-8 CPU Measurement

    CPU usage = (CurrentCpuUsage - LastCpuUsage) / ((CurrentTime - LastSampleTime) * CPUs)

    Where,

    • CurrentCpuUsage is the counter reading at current periodic cycle.
    • LastCpuUsage is the counter reading at previous periodic cycle.

    • CurrentTime is the current time snapshot.
    • LastSampleTime is the previous periodic cycle time snapshot.
    • CPUs is the total number of CPUs for a given pod.
  • Pending Message Count: The pending message count is the number of requests that are received by the Ingress Gateway pod from other NFs and yet to send the response. This includes all the requests triggered towards the Ingress Gateway pod. The pending message count is monitored periodically and compared against the configured thresholds to determine the congestion state.

Ingress Gateway Pod States

The following are the various states used to detect overload conditions. These states protect the pod from entering an unstable condition and help to take the necessary actions to recover from it.

Figure 4-9 Pod Protection State Transition

Note:

The transition can occur between any states. The thresholds for these congestion parameters are preconfigured and must not be changed.
  • Congested State: This is the upper bound state where the pod is congested. This means one or more congestion parameters are above the configured thresholds for the congested state. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide. The pod can be transitioned to the Congested state either from the Normal State or the DoC state. When the pod reaches this state, the following actions are performed:
    • New incoming HTTP2 connection requests are not accepted.
    • The pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is decremented by the value configured in the decrementBy parameter, and the interval is configured in the decrementSamplingPeriod parameter.
  • Danger of Congestion (DoC) State: This is the intermediate state where the pod is approaching a congested state. This means that one or more congestion parameters are above the configured thresholds for the DoC state. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    When the pod reaches this state, the following actions are performed:
    • Any new incoming HTTP2 connection requests are not accepted.
    • If the pod is transitioning from the Normal state to the DoC state, the pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is decremented by the value configured in the decrementBy parameter, and the interval is configured in the decrementSamplingPeriod parameter.
    • If the pod is transitioning from the Congested state to the DoC state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is incremented by the value configured in the incrementBy parameter, and the interval is configured in the incrementSamplingPeriod parameter.
  • Normal State: This is the lower bound state where the CPU usage is below the configured thresholds for DoC and Congested states. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    When the pod reaches this state, the following actions are performed:
    • The pod continues accepting new incoming HTTP2 connection requests.
    • If the pod is transitioning from the Congested or DoC state to the Normal state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The number of concurrent streams is incremented by the value configured in the incrementBy parameter, and the interval is configured in the incrementSamplingPeriod parameter.

To avoid toggling between these states due to traffic patterns, the pod must satisfy the conditions of the target state for a given period before transitioning to it. For example, if the pod is transitioning from the DoC state to the Congested state, it must satisfy the threshold parameters of the Congested state for a given period before moving to the Congested state.

The following configurations define the period for which the pod must remain in a particular state:
  • stateChangeSampleCount
  • monitoringInterval

Formula for calculating the period is: (stateChangeSampleCount * monitoringInterval)
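
For example, with hypothetical values of stateChangeSampleCount set to 5 and monitoringInterval set to 200 milliseconds, the pod must satisfy the conditions of the target state for 5 * 200 = 1000 milliseconds (1 second) before the transition occurs.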

For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

Managing Ingress Gateway Pod Protection

This section explains the procedure to enable and configure the feature.

Enable

You can enable the Pod Protection feature using the REST API.

  1. Use the API path {apiRoot}/nf-common-component/v1/igw/podprotection.
  2. Set podProtection.enabled to true.
  3. Set podProtection.congestionControl.enabled to true.
  4. Run the API using the PUT method with the proposed values given in the REST API.

    For more information about the configuration using REST API, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    Note:

    The proposed values are engineering configured values and must not be changed.
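
    The following is a minimal sketch of the call described in the steps above; the host is a placeholder and the payload shape is an illustrative assumption derived from the attribute names in steps 2 and 3 (the authoritative schema is in the REST Specification Guide):

    curl -v -X PUT "http://<apiRoot>/nf-common-component/v1/igw/podprotection" \
      -H "Content-Type: application/json" \
      -d '{
            "podProtection": {
              "enabled": true,
              "congestionControl": {
                "enabled": true
              }
            }
          }'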

Observe

Metrics

Following metrics are added in the NRF Gateways Metrics section:
  • oc_ingressgateway_pod_congestion_state
  • oc_ingressgateway_pod_resource_stress
  • oc_ingressgateway_pod_resource_state
  • oc_ingressgateway_incoming_pod_connections_rejected_total

KPIs

Added the feature specific KPIs in the Ingress Gateway Pod Protection section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.10 Network Slice Specific Metrics

The 5G Network slices are identified by Network Slice Instances (NSIs) and Single Network Slice Selection Assistance Information (SNSSAI).

A Network Function (NF) can have multiple NSIs and SNSSAIs listed under it to support multiple slices. NRF supports measuring the number of requests and responses for various service operations per network slice. This measurement is performed using the metrics mentioned in Observe.

Observe

Metrics

Following are the metrics to measure the requests and responses:
  • ocnrf_nfDiscover_rx_requests_perSnssai_total
  • ocnrf_nfDiscover_tx_success_response_perSnssai_total
  • ocnrf_nfDiscover_tx_empty_response_perSnssai_total
  • ocnrf_nfDiscover_tx_failure_response_perSnssai_total
  • ocnrf_nfDiscover_rx_requests_perNsi_total
  • ocnrf_nfDiscover_tx_success_response_perNsi_total
  • ocnrf_nfDiscover_tx_empty_response_perNsi_total
  • ocnrf_nfDiscover_tx_failure_response_perNsi_total
  • ocnrf_nfDiscover_tx_forwarded_requests_perSnssai_total
  • ocnrf_nfDiscover_rx_success_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_rx_empty_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_rx_failure_forwarded_responses_perSnssai_total
  • ocnrf_nfDiscover_tx_forwarded_requests_perNsi_total
  • ocnrf_nfDiscover_rx_success_forwarded_responses_perNsi_total
  • ocnrf_nfDiscover_rx_empty_forwarded_responses_perNsi_total
  • ocnrf_nfDiscover_rx_failure_forwarded_responses_perNsi_total
  • ocnrf_nfRegister_requests_perSnssai_total
  • ocnrf_nfRegister_success_responses_perSnssai_total
  • ocnrf_nfRegister_failure_responses_perSnssai_total
  • ocnrf_nfRegister_requests_perNsi_total
  • ocnrf_nfRegister_success_responses_perNsi_total
  • ocnrf_nfRegister_failure_responses_perNsi_total
  • ocnrf_nfUpdate_requests_perSnssai_total
  • ocnrf_nfUpdate_success_responses_perSnssai_total
  • ocnrf_nfUpdate_failure_responses_perSnssai_total
  • ocnrf_nfUpdate_requests_perNsi_total
  • ocnrf_nfUpdate_success_responses_perNsi_total
  • ocnrf_nfUpdate_failure_responses_perNsi_total
  • ocnrf_nfDeregister_requests_perSnssai_total
  • ocnrf_nfDeregister_success_responses_perSnssai_total
  • ocnrf_nfDeregister_failure_responses_perSnssai_total
  • ocnrf_nfDeregister_requests_perNsi_total
  • ocnrf_nfDeregister_success_responses_perNsi_total
  • ocnrf_nfDeregister_failure_responses_perNsi_total
  • ocnrf_nfHeartBeat_requests_perSnssai_total
  • ocnrf_nfHeartBeat_success_responses_perSnssai_total
  • ocnrf_nfHeartBeat_failure_responses_perSnssai_total
  • ocnrf_nfHeartBeat_requests_perNsi_total
  • ocnrf_nfHeartBeat_success_responses_perNsi_total
  • ocnrf_nfHeartBeat_failure_responses_perNsi_total

For more information about the metrics, see Network Slice Specific Metrics section.

KPIs

The feature specific KPIs are added in the Network Slice Specific KPIs section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.11 CCA Header Validation in NRF for Access Token Service Operation

Client Credentials Assertion (CCA) is a token signed by the Consumer NF. It enables NRF to authenticate the Consumer NF, which includes the signed token in the Access Token service request. The CCA header contains the Consumer NF's NfInstanceId, which NRF checks against the certificate. The CCA also includes a timestamp as the basis for restricting its lifetime.

The Consumer NF sends the 3gpp-Sbi-Client-Credentials header containing the CCA in the HTTP request, and NRF performs the CCA validation. The CCA header validation is a JWT-based validation at NRF, where the Consumer NF sends a JWT token as part of the header to NRF in the Access Token request. The JWT token has an X5c certificate and other NF-specific information that is validated against the configuration values defined in NRF. The signature of the JWT token is validated against the CA root certificate configured at NRF.

Figure 4-10 Client Credentials JWT Token

Table 4-4 JOSE header

Attribute name Data type P Cardinality Description
typ String M 1 The "typ" (type) Header Parameter is used to declare the media type of the JWS. Default value: JWT
alg String M 1 The "alg" (algorithm) Header Parameter is used to secure the JWS. Supported Algorithm types: RSA/ECDSA.
X5c Array M 1 The "X5c" (X.509 certificate) Header Parameter contains the X.509 public key certificate corresponding to the key used to digitally sign the JWT.

Table 4-5 JWT claims

Attribute name Data type P Cardinality Description
sub NfInstanceId M 1 This IE contains the NF instance ID of the NF service consumer, corresponding to the standard "Subject" claim.
iat integer M 1 This IE indicates the time at which the JWT was issued, corresponding to the standard "Issued At" claim. This claim may be used to determine the age of the JWT.
exp integer M 1 This IE contains the expiration time after which the client credentials assertion is considered to be expired, corresponding to the standard "Expiration Time" claim.
aud array(NFType) M 1..N This IE contains the NF type of NRF, for which the claim is applicable, corresponding to the standard "Audience" claim.

The digitally signed client credentials assertion is further converted to the JWS compact serialization encoding as a string.

If the validation is successful, the Access Token request is processed further.

If CCA header validation fails, the Access Token request is rejected by NRF with "403 Forbidden" with the cause attribute set to "CCA_VERIFICATION_FAILURE".
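
The following is an illustrative decoded CCA, shown only to make the structure in Table 4-4 and Table 4-5 concrete; the algorithm, instance ID, timestamps, and audience are hypothetical values, and the certificate content is truncated:

JOSE header:
{
  "typ": "JWT",
  "alg": "ES256",
  "x5c": ["MIIC..."]
}

JWT claims:
{
  "sub": "5a7bd676-ceeb-44bb-95e0-f6a55a328b03",
  "iat": 1700000000,
  "exp": 1700000600,
  "aud": ["NRF"]
}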

Managing CCA Header validation in NRF for Access Token Service Operation

  1. Create Ingress Gateway Secret. For more information about configuring secrets, see section “Configuring Secret to Enable CCA Header” in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  2. Configure the secret containing the CA root bundle and enable CCA header feature for AccessToken.
    1. Using REST API: Perform the feature configurations for CCA Header Validation in Ingress Gateway as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide
    2. Using Helm:
      1. Customize the ocnrf_custom_values_23.4.6.yaml file.
      2. To enable this feature in Helm, set the metadata.ccaHeaderValidation.enabled to true for accesstoken_mapping id under routesConfig of Ingress Gateway microservice.
        
        metadata:
          ccaHeaderValidation:
            enabled: true

        Note:

        This feature can be enabled only by using Helm.
      3. Save the file.
      4. Run Helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

Metrics

Following are the CCA Header Validation in NRF for Access Token Service Operation feature specific metrics in NRF Gateways Metrics section:
  • oc_ingressgateway_cca_header_request_total
  • oc_ingressgateway_cca_header_response_total
  • oc_ingressgateway_cca_certificate_info

KPIs

There are no KPIs for this feature.

Alerts

The CCA Header Validation in NRF for Access Token Service Operation feature specific alerts are listed in the Application level alerts section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.12 Monitoring the Availability of SCP Using SCP Health APIs

NRF determines the availability and reachability status of all SCPs irrespective of the configuration types.

This feature is an enhancement to the existing SBI routing functionality. The Egress Gateway microservice interacts with SCPs on their health API endpoints using the HTTP2 OPTIONS method. It monitors the health of the configured SCP peers to ensure that the traffic is routed directly to healthy peers. This enhancement avoids routing or rerouting towards unhealthy peers, thus minimizing latency.

The Egress Gateway microservice maintains the health status of all available and unavailable SCPs. It keeps the health status of SCPs up to date through periodic monitoring and uses this data to route egress traffic to the most preferred healthy SCP.

Figure 4-11 SCP Selection Mechanism

Once the peerconfiguration, peersetconfiguration, routesconfiguration, and peermonitoringconfiguration parameters are configured at the Egress Gateway microservice, all SCPs (after Alternate Route Service (ARS) resolution, if any vFQDN is configured) are initially marked as healthy. The peers attached to the associated peerset are scheduled to run health API checks and update the health status continuously.

During installation, the peermonitoringconfiguration parameter is set to false by default because this feature is an add-on to the existing SBI Routing feature and is activated only if the sbirouteconfig feature is enabled. To enable this feature, perform the following:
  • configure peerconfiguration with healthApiPath as /{scpApiRoot}/{apiVersion}/status
  • configure peersetconfiguration
  • configure routesconfiguration
  • configure sbiroutingerrorcriteriasets
  • configure sbiroutingerroractionsets
  • enable peermonitoringconfiguration

If the SBI Routing feature is enabled before upgrading, the healthApiPath in peerconfiguration must be attached manually to the existing configured peers. If the operator tries to enable peermonitoringconfiguration and the targeted peers do not have the healthApiPath, an appropriate error response is sent.

Managing Monitoring the Availability of SCP Using SCP Health APIs

This section explains the procedure to enable and configure the feature.

Configure

You can configure the Monitoring the Availability of SCP Using SCP Health APIs feature using the REST API.

Configure using REST API: Perform the following feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide:
  • Create or update the peerconfiguration with the health status endpoint details.
  • Create or update the peersetconfiguration to assign these peers.
  • Create or update the sbiroutingerrorcriteriasets.
  • Create or update the sbiroutingerroractionsets.
  • Create or update the routesconfiguration to use the above peerset.
  • Enable the feature using the peermonitoringconfiguration.

Note:

Health Monitoring of the peer will start only after the feature is enabled and the corresponding peerset is used in sbirouteconfig.
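
The following is a minimal sketch of enabling the feature through the REST API; the host is a placeholder and the payload is an illustrative assumption showing only the enable flag (the complete set of peermonitoringconfiguration attributes is in the REST Specification Guide):

curl -v -X PUT "http://<apiRoot>/nrf/nf-common-component/v1/egw/peermonitoringconfiguration" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}'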

Observe

Metrics
Following metrics are added in the NRF Gateways Metrics section:
  • oc_egressgateway_peer_health_status
  • oc_egressgateway_peer_health_ping_request_total
  • oc_egressgateway_peer_health_ping_response_total
  • oc_egressgateway_peer_health_status_transitions_total
  • oc_egressgateway_peer_count
  • oc_egressgateway_peer_available_count
Alerts
The feature specific alerts are added in the NRF Alerts section.
KPIs

Added the feature specific KPIs in the SCP Health Status section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.13 Controlled Shutdown of NRF

NRF supports the controlled shutdown feature to isolate NRF from the current network at a particular site. This isolation helps to perform any maintenance activities or recovery procedures as required without uninstalling NRF at the particular site. During this time, the operations of the Ingress Gateway, Egress Gateway, and NrfAuditor microservices are paused. These services read the operational state from the database periodically. The operational state of NRF can be changed using the REST API or CNC Console at any time.

The two operational states defined for NRF are as follows:

  • NORMAL
  • COMPLETE_SHUTDOWN

Note:

  • Ensure that the database is up and that the NRF services can connect and communicate with the database to change the operational state.
  • In either state, if the database goes down, the back-end pods go to the NOT_READY state, but the Ingress Gateway microservice continues to be in the last known operational state. When the Controlled Shutdown feature is enabled and the database is not available, all the incoming messages are rejected with Service Unavailable messages if the operational state is "NORMAL", or with a configurable error code if the operational state is "COMPLETE_SHUTDOWN". However, the operators cannot see the operational state in CNC Console due to database unavailability.

If the controlled shutdown operational state is NORMAL, then NRF processes the message as normal.

If the controlled shutdown operational state is COMPLETE_SHUTDOWN, then NRF rejects all incoming requests with a configurable error code.

Figure 4-12 Operational State changes

The following behavior changes occur when the operational state changes:

From NORMAL to COMPLETE_SHUTDOWN
  • The Ingress Gateway microservice rejects all new requests towards the services that have been configured for controlled shutdown with a configurable error code. In this case, all the inflight transactions will be gracefully handled. The controlled shutdown applies to the following services:
    • nfregistration
    • nfdiscovery
    • nfsubscription
    • nfaccesstoken
    The error codes are configured using controlledshutdownerrormapping and errorcodeprofiles APIs.
  • The NrfAuditor microservice pauses all its audit procedures, hence no notifications are generated. OcnrfAuditOperationsPaused alert is raised to indicate that the audit is paused.
  • The Egress Gateway microservice handles all the inflight requests gracefully. Since the Ingress Gateway and NrfAuditor microservices are in a COMPLETE_SHUTDOWN operational state, there would be no requests from the backend services.

    Note:

    No specific configuration for controlled shutdown needs to be applied on the routes for the Egress Gateway microservice.
  • The NrfConfiguration microservice continues to process any configuration requests in the COMPLETE_SHUTDOWN state.
  • OcnrfOperationalStateCompleteShutdown alert is raised to indicate the operational state of NRF is COMPLETE_SHUTDOWN.
  • The operational state change is recorded and can be viewed using the controlledShutdownOptions REST API. Additionally, the history of operational state changes can be viewed using the operationalStateHistory REST API.
From COMPLETE_SHUTDOWN to NORMAL
  • The Ingress Gateway microservice resumes processing all incoming requests.
  • The Egress Gateway microservice resumes processing all outgoing requests.
  • The NrfAuditor pod waits for a preconfigured waiting period before resuming its audit procedures to allow the NFs to move back to this NRF after the NRF has changed to the NORMAL operational state. Once the waiting period has expired, the audit procedures resume, and the OcnrfAuditOperationsPaused alert is cleared. The waiting period is defined as
    (defaultHbTimer * nfHeartBeatMissAllowed) + maxReplicationLatency(s) + replicationLatencyThreshold * latencyMultiplier
    Where,
    • defaultHbTimer - defaultHbTimer configured in nfManagementOptions.nfHeartBeatTimers where the nfType is "ALL_NF_TYPE". For more information about the parameter, see "NF Management Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • nfHeartBeatMissAllowed - nfHeartBeatMissAllowed configured in nfManagementOptions.nfHeartBeatTimers where the nfType is "ALL_NF_TYPE". For more information about the parameter, see "NF Management Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • maxReplicationLatency - The maximum replication latency detected across all the sites. This parameter supports dynamic value.
    • replicationLatencyThreshold - The replicationLatencyThreshold configured in geoRedundancyOptions. For more information about the parameter, see "Georedundancy Options" in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • latencyMultiplier - A preconfigured fixed offset value set as 3.
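    For example, with hypothetical values of defaultHbTimer set to 60 seconds, nfHeartBeatMissAllowed set to 3, maxReplicationLatency of 5 seconds, and replicationLatencyThreshold of 2 seconds, the waiting period is (60 * 3) + 5 + (2 * 3) = 191 seconds.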
  • OcnrfOperationalStateCompleteShutdown alert is cleared to indicate that the operational state of NRF is NORMAL.
  • The operational state change is recorded and can be viewed using the controlledShutdownOptions REST API. Additionally, the history of operational state changes can be viewed using the operationalStateHistory REST API.

NRF Behavior Post Fault Recovery

During the database restore procedure, along with the configuration data, the NFProfile and subscription data also get restored, and this data may not be the latest state data at that moment. In this state, the NrfAuditor microservice may act upon NFProfiles or NFSubscriptions that are not yet up-to-date, while the NFs are in the process of moving to the current NRF, which is now available. If the audit procedure is performed in this state, NRF suspends those NFs and sends out notifications to the consumer NFs. The same is applicable for NfSubscriptions, where the subscriptions may get deleted due to an older lastUpdatedTimestamp in the backup data.

To avoid this problem, the NrfAuditor microservice waits for a waiting period before resuming the auditing of NFProfiles and subscriptions as soon as it comes to the Ready state from the NotReady state. The "OcnrfAuditOperationsPaused" alert is raised to indicate that the audit processes are paused; see the NRF Alerts section. Once the waiting period has elapsed, the audit processes resume and the alert is cleared.

To know about the computation of the waiting period, see Controlled Shutdown of NRF.

Note:

The NrfAuditor pod goes to the NotReady state whenever it loses connectivity with the database. During temporary connectivity fluctuations, the NrfAuditor pod may transition between the Ready and NotReady states, causing the cool-off period to kick in for every NotReady to Ready transition. To avoid such short and frequent transitions, the NrfAuditor microservice applies the waiting period only when the pod is in the NotReady state for more than 5 seconds.

Managing Controlled Shutdown of NRF

Prerequisites
The following parameters are required in the ocnrf_custom_values_23.4.6.yaml file to configure the feature.
  • global.enableControlledShutdown is the flag to enable the feature. The default value is true. To disable the feature, set the value of the flag to false.

    Note that the following configurations are also required for the feature to work.

  • controlled shutdown filter under routesConfig.

    Note:

    The route configuration must not be modified.
For more information about the global.enableControlledShutdown attribute, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure

You can configure the Controlled Shutdown of NRF feature using the REST API or Console:

  • REST API: Perform the following feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide (a sketch of changing the operational state follows this list).
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeprofiles
    • {apiRoot}/nrf/nf-common-component/v1/igw/controlledshutdownerrormapping
    • {apiRoot}/nrf-configuration/v1/controlledShutdownOptions
    • {apiRoot}/nrf-configuration/v1/operationalStateHistory
  • CNC Console:

    You can change only the Operational State and view the Operational State History as described in the Controlled Shutdown section.
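
The following is a minimal sketch of changing the operational state through the controlledShutdownOptions API, as referenced in the REST API item above; the host is a placeholder and the payload is an illustrative assumption (the authoritative schema is in the REST Specification Guide):

curl -v -X PUT "http://<apiRoot>/nrf-configuration/v1/controlledShutdownOptions" \
  -H "Content-Type: application/json" \
  -d '{"operationalState": "COMPLETE_SHUTDOWN"}'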

Observe

Metrics
Following are the controlled shutdown feature specific metrics:
  • ocnrf_operational_state
  • ocnrf_audit_status

For more information about the metrics for the controlled shutdown of NRF, see the NRF Metrics section.

Alerts
The controlled shutdown feature specific alerts are listed in the NRF Alerts section.

KPIs
Following are the controlled shutdown feature specific KPIs:
  • Operational State {{ pod }}
  • NRF Audit status

For more information about the KPIs in NRF, see the Controlled Shutdown of NRF section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.14 User-Agent Header for Outgoing Requests

NRF supports the addition of the User-Agent header for outgoing NFStatusNotify and SLF query messages. NRF adds the User-Agent header with the configured value to these outgoing messages.

In addition, NRF propagates the User-Agent header received in all forwarding and roaming requests.
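
As a purely illustrative example with a hypothetical configured value, an outgoing NFStatusNotify or SLF query request might carry a header such as the following; the instance ID and FQDN are made-up values:

User-Agent: NRF-54804518-4191-46b3-955c-ac631f953ed8 nrf1.oracle.com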

Managing User-Agent Header for Outgoing Requests

Configure

You can configure User-Agent header value feature using the REST API or CNC Console.

  • Configure using REST API: Provide the value for ocnrfUserAgentHeader in generalOptions configuration API. For more information about this API, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Provide the value for OCNRF User-Agent Header on the General Options Page. For more information about the field, see General Options page.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.15 Support for Kubernetes Resource

4.15.1 Node Selector

The Node Selector feature allows the Kubernetes scheduler to determine the type of nodes in which the NRF pods are scheduled, depending on the predefined node labels or constraints.

Node selector is a basic form of cluster node selection constraint. It allows you to define the node labels (constraints) in the form of key-value pairs. When the nodeSelector feature is used, Kubernetes assigns the pods to only the nodes that match with the node labels you specify.

To see all the default labels assigned to a Kubernetes node, run the following command:

kubectl describe node <node_name>

Where,

<node_name> is the name of Kubernetes node.

For example:

kubectl describe node pollux-k8s-node-1

Sample output:
Name:               pollux-k8s-node-1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    kubernetes.io/hostname=pollux-k8s-node-1
                    kubernetes.io/os=linux
                    topology.kubernetes.io/region=RegionOne
                    topology.kubernetes.io/zone=nova

Managing NRF Node Selector Feature

Enable

You can enable the Node Selector feature using Helm:

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set global.nodeSelection to ENABLED to enable the node selection.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

You can configure the node selector parameters using Helm, as shown in the sketch after the following list.
  • Configure the following parameters in the global, NRF microservices and Gateways sections:
    • nodeSelection
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelector.nodeKey
    • nodeSelector.nodeValue
  • Configure the following parameters in the appinfo section:
    • nodeSelection
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelector
  • Configure the following parameters in the perfinfo section:
    • nodeSelectorEnabled
    • helmBasedConfigurationNodeSelectorApiVersion
    • nodeSelectorKey
    • nodeSelectorValue

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
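
The following is a minimal sketch of the corresponding Helm values in the global section; the label key and value are hypothetical and must match the labels on your Kubernetes nodes:

global:
  nodeSelection: ENABLED
  nodeSelector:
    nodeKey: kubernetes.io/os
    nodeValue: linux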

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.15.2 Kubernetes Probes

One of the key features that Kubernetes provides is high availability. This is achieved by the smallest deployment unit, called a Pod. The health check of these Pods is performed by Kubernetes probes.

There are three types of probes:
  • Liveness Probe: Indicates if the container is operating. If the container is operating, no action is taken. If not, the kubelet kills and restarts the container.
  • Readiness Probe: Indicates whether the application running in the container is ready to accept requests. If the application is ready, services matching the pod are allowed to send traffic to it. If not, the endpoints controller removes the pod from all matching Kubernetes Services.
  • Startup Probe: Indicates whether the application running in the container has started. If the application is started, other probes start functioning. If not, the kubelet kills and restarts the container.
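
The following is a minimal Kubernetes-style sketch of how the three probes are typically expressed for a container; the endpoint paths, port, and timing values are hypothetical, and the NRF-specific probe parameters are described in the Installation, Upgrade, and Fault Recovery Guide:

livenessProbe:
  httpGet:
    path: /health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /health/startup
    port: 8080
  failureThreshold: 30
  periodSeconds: 10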

Managing Kubernetes Probes Feature

Configure

You can configure the Kubernetes Probes feature using Helm:

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Configure the probes for each microservice. For more information about configuration parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are configuring these parameters after NRF deployment, upgrade NRF. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persists, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.15.3 Pod Disruption Budget

PodDisruptionBudget (PDB) is a Kubernetes resource that allows you to achieve high availability of scalable application services when the cluster administrators perform voluntary disruptions to manage the cluster nodes.

PDB restricts the number of pods that are down simultaneously from voluntary disruptions. Defining PDB is helpful to keep the services running undisrupted when a pod is deleted accidentally or deliberately. PDB can be defined for highly available and scalable NRF services.

It allows safe eviction of pods when a Kubernetes node is drained to perform maintenance on the node. It uses the maxPdbUnavailable parameter specified in the Helm chart to determine the maximum number of pods that can be unavailable during a voluntary disruption.
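
The following is a minimal sketch of the resulting Kubernetes resource, assuming the maxPdbUnavailable Helm parameter populates the maxUnavailable field; the name and selector label are hypothetical:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ocnrf-nfregistration-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: ocnrf-nfregistration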

Managing Pod Disruption Budget

Enable

This feature is enabled automatically if you are deploying NRF with Release 16.

Configure

You can configure this feature using Helm. For information about configuring PDB, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

There are no specific metrics or alerts required for the PDB functionality.

4.15.4 Network Policies

NetworkPolicies are an application-centric construct that allows you to specify how a pod is allowed to communicate with various network entities. To control communication between the cluster's pods and services, and to determine which pods and services can access one another inside the cluster, network policies create pod-level rules.

Previously, NRF had the privilege to communicate with other namespaces, and pods of one namespace could communicate with pods of other namespaces without any restriction. Now, namespace-level isolation is provided for the NRF pods, and a limited scope of communication is allowed between NRF and pods outside the cluster. The network policies enforce access restrictions for all the applicable data flows, except communication from the Kubernetes node to the pod for invoking container probes.
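
The following is a minimal sketch of one such pod-level rule; the namespace, labels, and port are hypothetical, and the policies shipped with NRF are described in the Installation, Upgrade, and Fault Recovery Guide:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-nrf
  namespace: ocnrf
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ocnrf-ingressgateway
      ports:
        - protocol: TCP
          port: 8080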

Managing Support for Network Policies

Enable

To use this feature, network policies need to be applied to the namespace in which NRF is deployed.

Configure

You can configure this feature using Helm. For information about configuring network policies, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

There are no specific metrics or alerts required for the Network Policy feature.

4.15.5 Tolerations

Taints and tolerations are Kubernetes mechanisms that allow you to ensure that pods are not placed on inappropriate nodes. Taints are added to nodes, while tolerations are defined in the pod specification. When one or more taints are applied to a node, the node does not accept any pods that do not tolerate the taints.

When a taint is assigned through Kubernetes configurations, it repels all the pods except those that have a matching toleration for that taint. Tolerations are applied to pods to allow them to be scheduled onto nodes with matching taints. Tolerations allow scheduling but do not guarantee it. For more information, see the Kubernetes documentation.

Managing Kubernetes Tolerations

Enable

To enable the tolerations, perform the following:
  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set global.tolerationsSetting to ENABLED to enable toleration.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

You can configure the tolerations parameters using Helm.
  1. Configure global.tolerations as per the requirement based on the taints on the node. For more information about the parameters, see the Tolerations table.
  2. By default global.tolerations is applied to all the microservices. In case you want to modify the tolerations for a specific microservice, configure tolerationsSetting under the specific microservice.
For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Table 4-6 Tolerations

Parameter Description
key It is the name of the key.
value It is a value for the configured key.
effect

Indicates the taint effect applied for the node.

The effect is defined by one of the following:
  • NoSchedule:

    • New pods that do not match the taint are not scheduled onto that node.

    • Existing pods on the node remain.

  • PreferNoSchedule:

    • New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to.

    • Existing pods on the node remain.

  • NoExecute:

    • New pods that do not match the taint cannot be scheduled onto that node.

    • Existing pods on the node that do not have a matching toleration are removed.

operator

Indicates the criteria to match the tolerations with the taint configuration.

The value can be:

  • Equal: The key, value, and effect parameters must match with the taint configuration.
  • Exists: The key and effect parameters must match the taint configuration. The value parameter can be blank.
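
The following is a minimal sketch of the corresponding Helm values based on Table 4-6; the key, value, and effect are hypothetical and must match the taints configured on your nodes:

global:
  tolerationsSetting: ENABLED
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "ocnrf"
      effect: "NoSchedule"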

Observe

There are no specific metrics or alerts required for this feature.

4.15.6 Dual Stack

Using the dual stack mechanism, applications or NFs can establish connections with pods and services in a Kubernetes cluster using IPv4 or IPv6 or both simultaneously.

Dual stack provides:
  • coexistence strategy that allows hosts to reach IPv4 and IPv6 simultaneously.
  • IPv4 and IPv6 allocation to the Kubernetes clusters during cluster creation. This allocation is applicable for all the Kubernetes resources unless explicitly specified during the cluster creation.

NRF application supports single stack IPv4 or single stack IPv6 on CNE that supports dual stack networking.

4.16 Pod Protection Support for NRF Subscription Microservice

The NRF subscription microservice is responsible for the following service operations:

  • NfStatusSubscribe
  • NfStatusUnsubscribe
  • NfStatusNotify

Of the above service operations, NfStatusSubscribe and NfStatusUnsubscribe requests are received by the subscription pod through the Ingress Gateway service. The registration (nfregistration) and auditor (nrfauditor) pods trigger notification events towards the subscription pod, which in turn triggers the NfStatusNotify requests to the consumer NFs.

The subscription pod is at risk of entering a congested condition. A congested condition is defined as a state where the pod's resource utilization (CPU and pending message count) is higher than the expected thresholds. This can result in higher latency, pod restarts, or traffic loss.

This situation may occur due to:

  • an increase in notification events received from the registration and auditor pods.
  • an increase in NfStatusSubscribe or NfStatusUnsubscribe events.
  • a large number of NfStatusNotify events being triggered.

The Overload Control feature through Ingress Gateway only ensures that the traffic routed through the Ingress Gateway to the subscription pods is regulated. For more information about the Overload Control feature, see Overload Control Based on Percentage Discards.

However, in the NRF traffic call model, this traffic constitutes only 1% of the traffic to this service while 99% of the traffic is because of the NfStatusNotify events. Hence, the overload feature alone does not prevent the NRF subscription microservice pod from going into the congested state. The Pod Protection feature is introduced as a solution to address the overload situation by protecting the pod independently and continuing to provide service.

Note:

Horizontal Pod Autoscaling (HPA) at the subscription microservice and the pod protection mechanism at the subscription microservice are independent features with different trigger conditions. While HPA considers the microservice load, the pod protection mechanism works only on the pod load. Therefore, their order of triggering cannot be predicted.
NRF monitors the congestion state for the subscription microservice pod. The congestion state is monitored by the following parameters:
  • CPU: The CPU consumption of the Subscription microservice container is used to determine the congestion level. The CPU usage is monitored using the Kubernetes cgroup (cpuacct.usage) and it is measured in nanoseconds.

    It is monitored periodically and calculated using the following formula and then compared against the configured CPU thresholds to determine the congestion state.

    Figure 4-13 CPU Measurement

    CpuUsage = (CurrentCpuUsage - LastCpuUsage) / ((CurrentTime - LastSampleTime) * CPUs)

    Where,

    • CurrentCpuUsage is the counter reading at current periodic cycle.
    • LastCpuUsage is the counter reading at previous periodic cycle.
    • CurrentTime is the current time snapshot.
    • LastSampleTime is the previous periodic cycle time snapshot.
    • CPUs is the total number of CPUs for a given pod.
  • Pending Message Count: The pending message count is the number of requests that are received by the subscription pod and for which a response is yet to be sent. This includes all the requests triggered towards the Subscription pods from the registration, auditor, and Ingress Gateway microservices. The processing of the requests may include DB operations, forwarding the request to another NRF, and triggering notifications. The pending message count is monitored periodically and then compared against the configured thresholds to determine the congestion state.

Subscription Microservice Load States

The following overload states are used to detect overload conditions, protect the pod from entering an overload condition, and take the necessary actions to recover from overload.

Figure 4-14 Pod Protection State Transition

Note:

The transition can occur between any states based on the congestion parameters. These congestion parameters are preconfigured and cannot be changed.
  • Congested State: This is the upper bound state where the pod is congested. This means one or more congestion parameters is above the configured thresholds for the congested state. For more information about the configuration using CNC Console, see Pod Protection Options. The pod can be transitioned to the Congested State either from the Normal State or the DoC state.
    When pod reaches this state, the following actions are performed:
    • new incoming HTTP2 connection requests are not accepted.
    • the pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are decremented based on the value configured in the decrementBy parameter, and the regular interval is configured in the decrementSamplingPeriod parameter.

    Alerts are raised when any of the subscription pods go into this state. For more information about alerts, see NRF Alerts.

  • Danger of Congestion (DoC) State: This is the intermediate state where the pod is approaching a congested state. This means one or more congestion parameters, CPU or Pending Message Count, is above the configured thresholds for the DoC state. For more information about the configuration using CNC Console, see Pod Protection Options.
    When pod reaches this state, the following actions are performed:
    • any new incoming HTTP2 connection requests are not accepted.
    • if the pod is transitioning from the Normal State to the DoC state, the pod gradually decrements the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are decremented based on the value configured in the decrementBy parameter, and the regular interval is configured in the decrementSamplingPeriod parameter.
    • if the pod is transitioning from the Congested State to the DoC state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are incremented based on the value configured in the incrementBy parameter, and the regular interval is configured in the incrementSamplingPeriod parameter.
  • Normal State: This is the lower bound state where all the congestion parameters for the pod are below the configured thresholds for DoC and Congested states. For more information about the configuration using CNC Console, see Pod Protection Options.
    When pod reaches this state, the following actions are performed:
    • the pod will continue accepting new incoming HTTP2 connection requests.
    • in case the pod is transitioning from the Congested or DoC state to the Normal state, the pod gradually increments the number of concurrent streams by updating the SETTINGS_MAX_CONCURRENT_STREAMS parameter in a SETTINGS frame to the configured maxConcurrentStreamsPerCon value at a regular interval. The concurrent streams are incremented based on the value configured in the incrementBy parameter, and the regular interval is configured in the incrementSamplingPeriod parameter.
To avoid toggling between these states due to traffic patterns, the pod must remain in a particular state for a given period before transitioning to another state. The following configurations define the period for which the pod has to remain in a particular state:
  • stateChangeSampleCount
  • monitoringInterval

Formula for calculating the period is: (stateChangeSampleCount * monitoringInterval)
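
For example, with a hypothetical stateChangeSampleCount of 3 and a monitoringInterval of 5 seconds, the pod must remain in its current state for 15 seconds (3 * 5) before it is allowed to transition to another state.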

When the subscription pods are in the DoC or Congested state, the client pods (registration, auditor, and Ingress Gateway) are forced to send less traffic towards the subscription pod. The subscription pod receives traffic in the following scenarios:

  • The registration pod sends a notification trigger request to the subscription pods when:
    • an NF registers itself with NRF.
    • an NF updates its registered profiles.
    • an NF deregisters itself from NRF.
  • The auditor pod sends a notification trigger request to the subscription pods when:
    • the auditor marks an NF as SUSPENDED.
    • the auditor deregisters an NF when the NF is in SUSPENDED state for a configurable amount of time.
  • The subscription pod receives the below requests through the Ingress Gateway service when:
    • the consumer NF creates a new subscription in NRF using NFStatusSubscribe request.
    • the consumer NF updates an existing Subscription in NRF using NFStatusSubscribe request.
    • the consumer NF unsubscribes using NFStatusUnsubscribe request.

    If the registration and auditor pods are unable to trigger a notification request to the subscription pods because the subscription pods are in the DoC or Congested State, it is possible that the consumer NFs will not be notified about the particular change in the producer NfProfile. However, this is mitigated as the consumer NFs are expected to perform rediscovery of the producer NFs whenever the discovery validity time expires. This ensures that their producer NF data is refreshed, despite the dropped NfStatusNotify message.

    The registration pod will not reject any of the incoming requests, such as NfRegister, NfUpdate, NfDeregister, and NfHeartbeat, even if it is unable to trigger a notification to the subscription pods for the same reason.

    It is expected that the requests through the Ingress Gateway to the congested subscription pods may time out or get rejected.

Managing Subscription Microservice Pod Protection

This section explains the procedure to enable and configure the feature.

Enable

You can enable the Pod Protection feature using the CNC Console or REST API.

  • Enable using REST API: Set podProtectionOptions.enabled to true and podProtectionOptions.congestionControl.enabled to true in Pod Protection Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Enabled to true for Pod Protection and Enabled to true for Congestion Control in Pod Protection Options page. For more information about enabling the feature using CNC Console, see the Pod Protection Options section.
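
The following is a minimal REST payload sketch based on the attribute names above; the exact API path and the complete schema are described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "podProtectionOptions": {
    "enabled": true,
    "congestionControl": {
      "enabled": true
    }
  }
}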

Observe

Metrics
The following metrics are added in the Pod Protection Metrics section:
  • ocnrf_pod_congestion_state
  • ocnrf_pod_cpu_congestion_state
  • ocnrf_pod_pending_message_count_congestion_state
  • ocnrf_incoming_connections
  • ocnrf_max_concurrent_streams
  • ocnrf_pod_cpu_usage
  • ocnrf_pod_pending_message_count
  • ocnrf_pod_incoming_connection_rejected_total
  • ocnrf_nfNotification_trigger_total
KPIs

Added the feature specific KPIs in the Subscription Pod Protection section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.17 Pre and Post Install/Upgrade Validations

This feature applies validation checks on the infrastructure, application, databases, and its related tables before and after the installation, upgrade, or fault recovery.

When NRF is deployed, there can be inconsistencies and unexpected results if the required NRF tables or the site-specific configurations are not available. This can happen due to network issues, system issues, human errors, or race conditions. The feature aims to detect any inconsistency in the system state early and report it. The validations are done during the preinstallation, postinstallation, preupgrade, and postupgrade. The infrastructure validations are performed as part of the preinstallation and preupgrade validations.

Note:

Validation of database and tables during rollback procedures is not supported.

The following diagram depicts the flow in which the validations occur:

Figure 4-15 Pre and Post Install/Upgrade Validations

Database and its Related Tables Validation

Each microservice is responsible for validating the database and the tables that it creates and manages.

The following tables are validated by each microservice:
  • NrfConfiguration Service validates the following tables in the NRF Application database:
    • NrfSystemOptions
    • NfScreening
    • SiteIdToNrfInstanceIdMapping
    • NrfEventTransactions
  • NfRegistration Service validates the following tables in the NRF Application database:
    • NfInstances
    • NfStatusMonitor
  • NfSubscription service validates the following table in the NRF Application database:
    • NfSubscriptions
  • NrfAuditor service validates the following table in NRF Leader Election database:
    • NrfAuditorLeaderPod
  • NrfConfiguration Service validates the following tables in the NRF Network Database:
    • NfScreening_backup
    • NrfSystemOptions_backup
    • SiteIdToNrfInstanceIdMapping_backup

Note:

  • The common configuration database and its tables are currently not validated in the preinstallation, fault recovery, and postinstallation validation.
  • The schema of the ReleaseConfig table is not currently validated in the preinstallation and postinstallation validation.

Managing Pre and Post Install/Upgrade Validations Feature

Enable

You can enable the feature using Helm:

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set global.appValidate.preValidateEnabled to true to validate the database tables during preinstallation and preupgrade.
  3. Set global.appValidate.postValidateEnabled to true to validate the database tables during postinstallation and postupgrade.
  4. Set global.appValidate.infraValidateEnabled to true to enable infrastructure validation. Configure the following appInfo attributes:
    1. Configure replicationUri with the database monitoring service FQDN and port as per the deployment. The URI must be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/status/replication/realtime".
    2. Configure dbTierVersionUri with the database monitoring service FQDN and port to retrieve the cnDBTier version. The URI must be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/version".
    3. Configure alertmanagerUrl with the Alertmanager service FQDN and port to retrieve the cluster alerts. The URL must be provided as "http://<alert manager service name>:<alert manager service port>/cluster/alertmanager".
  5. Set global.appValidate.faultRecoveryMode to true to install NRF in fault recovery mode.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  6. Save the file.
  7. Install or upgrade NRF. For more information about the procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
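
The following is an illustrative snippet of the ocnrf_custom_values_23.4.6.yaml file for the steps above. The placement of the appInfo attributes under the appinfo section is an assumption for illustration; substitute the placeholder service names and ports with the values from your deployment.

global:
  appValidate:
    preValidateEnabled: true
    postValidateEnabled: true
    infraValidateEnabled: true
    faultRecoveryMode: false
appinfo:
  replicationUri: "http://<db monitor service name>:<db monitor service port>/db-tier/status/replication/realtime"
  dbTierVersionUri: "http://<db monitor service name>:<db monitor service port>/db-tier/version"
  alertmanagerUrl: "http://<alert manager service name>:<alert manager service port>/cluster/alertmanager"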

4.17.1 NRF Infrastructure Validation

The infrastructure validations are performed as part of the preinstallation and preupgrade validations. The validation is done by the preinstallation and preupgrade hooks of the app-info service. The infrastructure validation is disabled by default.

Note:

  • It is highly recommended to enable infrastructure validation and perform the required configuration to detect incompatibilities early. Infrastructure validation will be enabled by default in future releases.
  • Infrastructure validation is not supported during rollback procedures.

This validation is enabled by setting global.appValidate.infraValidateEnabled parameter to true in the ocnrf_custom_values_23.4.6.yaml file. The following checks are performed as part of the infrastructure validation before installing or upgrading NRF:

  • Validate the minimum viable path from the previous NRF version to the current NRF version for an upgrade. This is validated based on the value configured in the global.minViablePath parameter. If this minimum viable path is not supported, NRF upgrade will not proceed.
  • Validate that the installed Kubernetes version is compatible with the target NRF version being installed or upgraded. This is validated based on the value configured in the global.minKubernetesVersion parameter. If the Kubernetes version is not compatible, NRF installation or upgrade will not proceed.
  • Validate that the installed cnDBTier version is compatible with the target NRF version being installed or upgraded. This is validated based on the value configured in the global.minDbTierVersion parameter. If the cnDBTier version is not compatible, NRF installation or upgrade will not proceed.
  • Verify the replication status with the connected peer. This is validated based on the value configured in the appinfo.defaultReplicationStatusOnError parameter. If the replication is disabled or failed for all peers, NRF installation or upgrade will not proceed.
  • Verify that there are no critical alerts raised in the system. If critical alerts are found, installation or upgrade will not proceed.
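
The checks above are driven by the following global and appinfo parameters. This is an illustrative sketch with placeholder values; the compatible versions for a given release are listed in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

global:
  minViablePath: "<minimum supported source NRF version>"
  minKubernetesVersion: "<minimum compatible Kubernetes version>"
  minDbTierVersion: "<minimum compatible cnDBTier version>"
appinfo:
  defaultReplicationStatusOnError: <true or false>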

4.17.2 NRF Preinstallation Validation

The following sections describe the preinstallation validation performed when NRF is installed for fresh installation mode and fault recovery mode.

4.17.2.1 For Fresh Installation

The preinstallation validation is performed as part of the preinstallation hooks of each NRF microservice. It is the first set of actions performed. For more information about the list of microservices that perform validation, see NRF Microservice Table Validation.

This validation is configured by setting global.appValidate.preValidateEnabled parameter to true in the ocnrf_custom_values_23.4.6.yaml. The following checks are performed as part of the preinstallation validation:
  • Validate the presence of the database. If not present, the installation will not proceed. The database is expected to be present, before proceeding with installation.
  • Validate that the ReleaseConfig table does not have release information about the site being installed. If such information is present, the installation will not proceed, as it would not be a fresh installation.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If all the required tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed. The tables are expected to be present when multisite georedundant NRF is deployed.
4.17.2.2 For Fault Recovery

NRF can be installed in fault recovery mode by setting the global.appValidate.faultRecoveryMode parameter to true in the ocnrf_custom_values_23.4.6.yaml file. When NRF is deployed in the fault recovery mode, with a database backup in place, the following checks are performed as part of the preinstallation validation:
  • Validate the presence of the database. If not present, the installation will not proceed. The database is expected to be present, before proceeding with installation.
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the installation will not proceed as the table is expected to contain information from the database backup.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If the tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, installation will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

Note:

After the installation is completed in fault recovery mode, it is recommended to set this flag back to false. This can be done while performing future upgrades.

In case preinstallation fails, the error reason is logged in the preinstallation hook job of the particular microservice.

4.17.3 NRF Postinstallation Validations

The following sections describe the postinstallation validation performed when NRF is installed in fresh installation mode and fault recovery mode.

4.17.3.1 For Fresh Installation

The postinstallation validation is done as part of the postinstallation hooks of each NRF microservice and is the last set of actions performed. For more information about the list of microservices that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.postValidateEnabled parameter to true. The following checks are performed as part of the postinstallation validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the installation will not proceed.
  • Validate the presence of the database. If not present, the installation will not proceed.
  • Validate the presence of the required tables. Each microservice validates the tables that it is responsible for. If all the required tables are present, then the table schema is validated against the schema expected as per the release version. If the tables are partially present or if the schema validation fails, installation will not proceed. The tables are expected to be present when multisite georedundant NRF is deployed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, installation will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

In case postinstallation fails, the error reason will be logged in the postinstallation hook job of the particular microservice.

4.17.3.2 For Fault Recovery
When NRF is installed in the fault recovery mode using a previously backed up database, similar actions as mentioned in NRF in Fault Recovery Mode are performed.

Note:

After the installation is complete in fault recovery mode, it is recommended to set this flag back to false. This can be done while performing future upgrades.

4.17.4 NRF Preupgrade Validation

The preupgrade validation is done as part of the preupgrade hooks of each NRF microservice and is the first set of actions performed.

For more information about the list of microservices that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.preValidateEnabled parameter to true in the ocnrf_custom_values_23.4.6.yaml file. The following checks are performed as part of the preupgrade validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the upgrade will not proceed.
  • Validate the presence of the database required to proceed with the upgrade. If not present, the upgrade will not proceed.
  • Validate the presence of the tables required to proceed with the upgrade. Each microservice validates the tables that it is responsible for. If the tables are not present or are only partially present, the upgrade will not proceed.
  • Validate the table schema against the schema expected as per the release version. If validation fails, upgrade will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, upgrade will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

The tables are expected to be present when multisite georedundant NRF is deployed.

4.17.5 NRF Postupgrade Validation

The postupgrade validation is done as part of the postupgrade hooks of each NRF microservice and is the last set of actions performed.

For more information about the list of services that perform validation, see NRF Microservice Validation.

This validation is configured by setting global.appValidate.postValidateEnabled parameter to true in the ocnrf_custom_values_23.4.6.yaml file. The following checks are performed as part of the postupgrade validation:
  • Validate if the ReleaseConfig table contains release information about the site. If not present, the upgrade will not proceed.
  • Validate the presence of the database required to proceed with the upgrade. If not present, the upgrade will not proceed.
  • Validate the presence of the tables required to proceed with the upgrade. Each microservice validates the tables that it is responsible for. If the tables are not present or are only partially present, the upgrade will not proceed.
  • Validate the table schema against the schema expected as per the release version. If validation fails, upgrade will not proceed.
  • Validate the configType in the NrfSystemOptions table against the configTypes expected as per the releaseVersion of the specific site. If validation fails, upgrade will not proceed.
  • Validate the nfScreeningRulesListType in the NfScreening table against the nfScreeningRulesListType expected as per the releaseVersion of the specific site.

The tables are expected to be present when multisite georedundant NRF is deployed.

4.18 Ignore Unknown Attribute in NFDiscover Search Query

By default, NRF rejects with 400 Bad Request any NFDiscover request that contains unsupported query attributes. With this feature, instead of rejecting the request, NRF processes the NFDiscover request by ignoring the unsupported and unknown search query attributes. The list of query attributes to be ignored while processing the NFDiscover request is configured using Helm.

Enable

You can enable this feature using Helm:
  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. List the search query attributes in searchQueryIgnoreList under the nfdiscovery section. For more information about the parameter, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  3. Save the file.
  4. Run helm install. For more information about the installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are setting this parameter after NRF deployment, run the Helm upgrade. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
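
The following is an illustrative snippet for the ocnrf_custom_values_23.4.6.yaml file. The listed entries are hypothetical examples of query attributes to ignore; the exact structure of the parameter is described in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

nfdiscovery:
  searchQueryIgnoreList:
    - <unsupported-query-attribute-1>
    - <unsupported-query-attribute-2>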

4.19 NF Authentication using TLS Certificate

This feature supports authentication of the Network Function before it accesses the NRF services. In case authentication fails, NRF rejects the service operation requests. In this feature, NRF validates some attributes from the TLS certificate against defined attributes.

4.19.1 XFCC Header Validation

HTTPS support is a minimum requirement for 5G NFs as defined in 3GPP TS 33.501. This feature enables extending identity validation from the Transport layer to the Application layer and provides a mechanism to validate the NF FQDN presence in Transport Layer Security (TLS) certificate as added by the Service Mesh against the NF Profile FQDN present in the request.

NRF provides configurations to dynamically enable or disable the feature. To enable the feature on Ingress Gateway in the NRF deployment, see the xfccHeaderValidation attribute in the User Configurable Section of Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Note:

  • This feature is disabled by default. The feature needs to be enabled at API Gateway and NRF. At NRF, the feature enabling or disabling can be performed using the following configuration.
  • Once this feature is enabled, all NFs must re-register with FQDN in the NF Profile, or NFs can send NFUpdate with FQDN. For Subscription Service Operations, Network Functions must register with NRF. The NFs that subscribed before enabling the feature must register with NRF for further service operations.

Managing NF Authentication using TLS Certificate

Enable

To enable the feature on Ingress Gateway in NRF deployment:

  1. Customize the ocnrf_custom_values_23.4.6.yaml file.
  2. Set the enabled parameter to true under XFCC header validation/extraction in the Ingress Gateway Global Parameters section:
    xfccHeaderValidation:
      extract:
        enabled: true
  3. Save the file.
  4. Run helm upgrade, if you are enabling this feature after NRF deployment. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configure

Configure the NF Authentication using the TLS Certificate feature using REST API or CNC Console:
  • Configure NF Authentication using TLS Certificate using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    Refer to attributes under nfAuthenticationOptions in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for more details.

  • Configure NF Authentication using TLS Certificate using CNC Console: Perform the feature configurations as described in NF Authentication Options.

Observe

For more information on NF Authentication using TLS certificate feature metrics and KPIs, see NRF Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.19.2 TLS SNI Header Validation

Oracle Communications Cloud Native Core, Network Repository Function (NRF) supports Server Name Identification (SNI) when acting as a client and sending TLS handshake message. The FQDN in SNI is used to identify the server for which the TLS connection is needed. When the same server is supporting multiple services, the SNI identifies which service is to be used for the TLS connection. As a part of the TLS handshake initiation process, SNI is populated in the client handshake sent by Egress Gateway depending on the routing type. There are two routing options:
  • Direct Routing – Egress Gateway adds the target server's FQDN in the SNI. If the target server has only an IP address, the SNI header is not populated in the outgoing TLS requests.
  • Indirect Routing – Egress Gateway uses the FQDN of the selected peer (for example, the host of the SCP or SEPP) for populating the SNI header. If the selected peer has only an IP address, the SNI header is not populated in the outgoing TLS requests.

WARNING:

This feature should be enabled only for non-servicemesh-based deployments.

Note:

When the feature is enabled, SNI header is populated at Egress Gateway only when an FQDN is available.

Managing TLS SNI Header Validation

Enable

To enable the TLS SNI header feature in the NRF deployment:

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set egress-gateway.sniHeader.enabled to true to enable TLS SNI header validation.
  3. Save the file.
  4. Run helm install. For more information about the installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. If you are setting this parameter after NRF deployment, run the Helm upgrade. For more information about the upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
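
The following is a minimal sketch of the corresponding setting in the ocnrf_custom_values_23.4.6.yaml file, assuming the nesting implied by the parameter path in step 2:

egress-gateway:
  sniHeader:
    enabled: true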

Observe

For more information on the TLS SNI header validation feature metric, oc_egressgateway_sni_error_total, see the NRF Metrics section.

Maintain

In case the SNI header is not sent in the TLS 1.2 handshake, perform the following:

  1. Collect the logs: The logs are collected during the following scenarios:
    • To ensure that the SNI header is added and the feature is running as expected, look up the "SNI feature is enabled" log statement at the debug level.
    • To identify when the peer rejected the client handshake due to an invalid SNI sent by Egress Gateway, look up the "Unrecognized server name indication" log statement at the debug level.
    • To verify if both service mesh and SNI feature are enabled, look up the "Service mesh is enabled. As a result, SNI will not be populated even though it is enabled" log statement at the warn level.

      For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.

  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.20 Subscriber Location Function

The Subscriber Location Function (SLF) feature of NRF allows you to select the network function based on the Subscription Permanent Identifier (SUPI) and Generic Public Subscription Identifier (GPSI) subscriber identities. The SLF feature supports Authentication Server Function (AUSF), Unified Data Repository (UDR), Unified Data Management (UDM), and Charging Function (CHF) nfTypes for the discovery query.

The discovery of producer network functions is performed as follows:
  • NRF checks if the SLF feature is enabled or not. If the feature is enabled:

    • Checks whether slfLookupConfig contains details for the target-nf-type in the NFDiscover query (see the configuration sketch after this list):
      • In case any of the NFDiscover search query attributes are present in the configured skipSLFLookupParameters, then the SLF lookup is not performed.

        For example, if the search query attribute is group-id-list and the same attribute is configured in skipSLFLookupParameters, then NRF does not perform the SLF lookup; instead, NRF uses the group-id-list present in the search query while processing the NFDiscover service operation.

      • In case the configured skipSLFLookupParameters attribute does not match any of the NFDiscover search query attributes, then the mandatory parameter (either SUPI or GPSI) required for SLF lookup must be present in the NFDiscover search query.
      • In case none of the mandatory attributes (SUPI or GPSI) are present in the NFDiscover search query and the NFDiscover search query attribute is present in the configured exceptionListForMissingMandatoryParameter attribute, then NRF processes the NFDiscover service operation without rejecting the NFDiscover search query.
      • In case both of the mandatory attributes (SUPI and GPSI) are present in the NFDiscover search query, then the configured preferredSubscriberIdType is used to decide which of the mandatory attributes is used to perform the SLF query, ignoring the other attribute.
  • Finds the NF Group Id by sending an SLF query (that is, the Nudr_GroupIDmap service operation) with the received Subscriber Identity (such as SUPI or GPSI).
  • Generates the NFDiscover service response using the NFGroupId received in the SLF response and other parameters. The received Subscriber Identifier is not used during NF Producer selection.
  • The accessTokenCacheEnabled flag enables caching of the oAuth2 token for SLF communication at NRF.

    Note:

    • accessTokenCacheEnabled flag is functional only when the oAuth2 token is required for SLF communication that is controlled by the accessTokenCacheEnabled parameter.
    • Operators must enable the accessTokenCacheEnabled flag only after NRF deployment is successfully upgraded to 23.4.6.
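
The following is an illustrative sketch of an slfLookupConfig entry with a skip list, based on the attribute names above. The exact attribute layout, in particular the placement of skipSLFLookupParameters, is an assumption for illustration; see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide for the exact schema.

{
  "slfLookupConfig": [{
    "nfType": "UDM",
    "skipSLFLookupParameters": ["group-id-list"]
  }]
}

With this sketch, an NFDiscover query with target-nf-type UDM that already carries group-id-list would be processed without an SLF lookup.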

Managing Subscriber Location Function Feature

Enable
You can enable the SLF feature using the REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in SLF Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the SLF Options page. For more information about enabling the feature using CNC Console, see SLF Options.

Configure

You can configure the SLF feature using the REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in SLF Options.

Note:

  • At least one of the attributes fqdn, ipv4Address, or ipv6Address must be included by the registered UDR.
  • For routing, the NfService level attributes are considered first. If none of the attributes are present at the NfService level, then the NfProfile level attributes are considered. Attributes are not used in a mixed mode where some of the attributes are from NfService and some from NfProfile.

    However, with Support for Configurable Port and Routing Parameter Selection feature, routing selection parameter can be picked either from NfService or NfProfile in a mix mode as per the availability. For more information, see Support for Configurable Port and Routing Parameter Selection section.

  • Of the three attributes, the endpoint is selected based on the configured preferred routing parameter. For more information, see the Support for Configurable Port and Routing Parameter Selection section.
  • For the cases where ports are not present (for example, ipv4Address and ipv6Address in NfProfile, FQDN from NfService/NfProfile, or ipEndpoints with no port in NfService), the scheme configured in the NfService is used to determine the port.

Observe

The following are the SLF feature-specific metrics:
  • ocnrf_nfDiscover_ForSLF_rx_requests_total
  • ocnrf_nfDiscover_ForSLF_tx_responses_total
  • ocnrf_SLF_tx_requests_total
  • ocnrf_SLF_rx_responses_total
  • ocnrf_slf_jetty_latency_seconds
  • ocnrf_nfDiscover_SLFlookup_skipped_total
  • ocnrf_nfDiscover_continue_mandatoryAttributes_missing_total
  • ocnrf_max_slf_attempts_exhausted_total

For more information on SLF metrics and KPIs, see NRF SLF Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.20.1 Static Selection of SLF

NRF supports static selection of SLF to perform direct routing of discovery queries. These queries are sent to Unified Data Repository (UDR) or SLF (that is, the Nudr_GroupIDmap service) to retrieve the corresponding NFGroupId. In this configuration, NRF selects the SLF or UDR based on the slfHostConfig configuration and establishes direct communication with the selected SLF or UDR. For SLF query through Service Communication Proxy (SCP), see the SLF Requests section.

SLF host configuration attribute (slfHostConfig) allows the user to configure the details of SLF or UDR network functions.

The slfHostConfig configuration consists of attributes such as apiVersion, scheme, fqdn, port, and priority. NRF allows the configuration of more than two hosts. The host with the highest priority is considered the Primary Host. The host with the second highest priority is considered the Secondary Host.

Note:

  • Refer to 3GPP TS 29.510 (release 15.5) for definition and allowed range for slfHostConfig attribute (apiVersion, scheme, FQDN, port, priority, and so on).
  • Apart from the priority attribute, no other attribute plays any role in primary or secondary host selection.
  • Apart from the primary or secondary host, other configured hosts (if any) are not used during any message processing.
  • When more than one host is configured with the highest priority, two of them are picked as the primary and secondary hosts randomly.

The SLF request is first sent to the primary SLF. In case of an error from the primary SLF, the request is sent to the secondary SLF based on the following configurations:
  1. rerouteOnResponseHttpStatusCodes: This attribute is used to determine if SLF retry must be performed to an alternate SLF based on the response code received from the SLF. The alternate SLF is picked from the SLF Host Config, if slfConfigMode is set to STATIC_SLF_CONFIG_MODE or from the slfDiscoveredCandidateList if slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE. For primary and secondary SLF details, see slfHostConfig attribute.
  2. maximumHopCount: This configuration determines the maximum number of hops (SLF or NRF) that NRF can forward in a given service request. This configuration is useful during NRF Forwarding and SLF feature interaction.

Enable the feature in Static mode

The default configuration for slfConfigMode is STATIC_SLF_CONFIG_MODE. Perform the following steps to enable the SLF feature in static mode:

  1. Configure slfHostConfig and slfLookupConfig parameters. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set featureStatus to ENABLED.
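
The following is a minimal REST payload sketch for static mode. The host entries are placeholders for illustration; the full attribute set and allowed values are described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "featureStatus": "ENABLED",
  "slfConfigMode": "STATIC_SLF_CONFIG_MODE",
  "slfLookupConfig": [{
    "nfType": "UDR"
  }],
  "slfHostConfig": [{
    "fqdn": "<primary slf fqdn>",
    "scheme": "https",
    "port": 443,
    "priority": 1,
    "apiVersion": "v1"
  }, {
    "fqdn": "<secondary slf fqdn>",
    "scheme": "https",
    "port": 443,
    "priority": 2,
    "apiVersion": "v1"
  }]
}

In this sketch, the host with priority 1 acts as the Primary Host and the host with priority 2 acts as the Secondary Host, as described above.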

4.20.2 Dynamic Selection of SLF

NRF supports dynamic selection of SLF based on the registered SLF or UDR profiles. This configuration is defined based on the SLF Configuration Mode attribute (slfConfigMode). This attribute allows the user to decide whether the SLF lookup is performed based on the preconfigured slfHostConfig configuration or using the SLFs registered with NRF.

The dynamic selection of producer network functions is as follows.

NRF checks if the SLF feature is enabled or not. When the feature is enabled:

  • If slfConfigMode attribute is set to STATIC_SLF_CONFIG_MODE, the SLF lookup is performed based on preconfigured slfHostConfig as described in Static Selection of SLF.
  • To perform SLF lookup based on the SLFs registered with NRF:
    • Set populateSlfCandidateList to true:
      ("populateSlfCandidateList": true)
    • Wait until at least one entry of SLF candidate is listed under slfDiscoveredCandidateList. Perform a GET operation to check if any SLF candidate is listed in the output.
    • Change slfConfigMode to DISCOVERED_SLF_CONFIG_MODE:
      {
        "featureStatus": "ENABLED",
         "slfLookupConfig": [{
              "nfType": "UDR"
      }],
        "slfConfigMode": "DISCOVERED_SLF_CONFIG_MODE"
      }

    Note:

    Setting populateSlfCandidateList to true and slfConfigMode to DISCOVERED_SLF_CONFIG_MODE must be performed in two separate requests.

Note:

Upgrade NRF to enable NrfArtisan service, if not enabled during installation. For more information on enabling Artisan Microservice, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

The slfHostConfig must be configured before or while setting the slfConfigMode as STATIC_SLF_CONFIG_MODE when featureStatus is ENABLED.

Once the featureStatus is ENABLED, and the slfConfigMode is STATIC_SLF_CONFIG_MODE, the slfHostConfig cannot be empty.

If slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE, slfHostConfig is not considered for discovery query.

The slfConfigMode can be set to DISCOVERED_SLF_CONFIG_MODE only if there is at least one slfCandidate present in the slfDiscoveredCandidateList. To trigger the population of slfDiscoveredCandidateList, set populateSlfCandidateList to true.

The nrfArtisan service populates the slfDiscoveredCandidateList by sending an nfDiscovery query to the nfDiscovery service to fetch the registered SLF or UDR NfProfiles. The discovery query used is:
/nnrf-disc/v1/nf-instances?target-nf-type=UDR&requester-nf-type=NRF&service-names=nudr-group-id-map&preferred-locality=<slfOptions.preferredSLFLocality>&limit=0

The nfDiscovery service fetches the registered SLF or UDR NfProfiles and applies the filtering criteria based on the discovery query parameters to get the relevant set of SLF or UDR NfProfiles. The UDR or SLF profiles are sorted and prioritized before being added to the discovery response. All the features that are available for discovery service operations, if enabled, are applied while processing the discovery query, for example, forwarding, empty list, and extended preferred locality. The discovery response is sent to the nrfArtisan service, which then populates the slfDiscoveredCandidateList.

The order of the UDR or SLF profiles in the slfDiscoveredCandidateList is decided based on the sorting algorithms in the nfDiscovery service. The same order is considered when NRF sends the SLF query to the SLF.

Refer to the Preferred Locality Feature Set for details on how the NfProfiles are sorted based on preferred-locality and extended-preferred-locality.

Prior to NRF 22.2.x, Egress Gateway returned a 503 response for timeout exceptions (for example, no response received for the SLF query, or no response received for a query sent to the forwarding NRF).

From 22.3.x onwards, Egress Gateway returns a 408 response code for timeout exceptions (REQUEST_TIMEOUT and CONNECTION_TIMEOUT) to keep the behavior in line with the HTTP standard. Hence, after upgrading to 22.3.x or a fresh install of 22.3.x, if a retry needs to be performed for the timeout scenario, the 408 error code must be explicitly configured under the SLF reroute option. The following is the configuration for SLF options:
{
  "rerouteOnResponseHttpStatusCodes": {
    "pattern": "^[3,5][0-9]{2}$|408$"
  }
}
If the SLF request fails with a non-2xx error response (except a 404 response) or times out towards all SLFs, and the error codes are configured under rerouteOnResponseHttpStatusCodes, then:
  • if forwarding is enabled and the message is a candidate for forwarding, NRF forwards the request to the other segment NRF.
  • if forwarding is disabled, NRF rejects the discovery query with the error code configured under SLF_Not_Reachable.
If the SLF request fails with a 404 error response and 404 is configured under rerouteOnResponseHttpStatusCodes, then:
  • NRF retries towards the secondary or tertiary SLFs.
  • forwarding is not performed in this case (explicit code was added for this functionality as per the SLF requirement).

Enable the feature in Dynamic mode

Prerequisite

The Artisan Microservice (NrfArtisan) must be ENABLED. For more information on enabling the NrfArtisan service, see the "Global Parameters" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Configuring slfConfigMode to Dynamic

Perform the following steps to enable the SLF feature using registered SLF profiles:

  1. Configure slfLookupConfig parameter. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set populateSlfDiscoveredCandidateList to true. This will trigger the population of slfDiscoveredCandidateList.
  3. Perform a GET on the slfOptions to see if slfDiscoveredCandidateList has at least one candidate. If present, go to step 4. If not, wait till the slfDiscoveredCandidateList is populated with SLF profiles.
  4. Set slfConfigMode to DISCOVERED_SLF_CONFIG_MODE and featureStatus to ENABLED.

Moving from Dynamic to Static

Perform the following to switch from dynamic to static configuration:

  1. Configure slfHostConfig parameter. For more information on the parameters, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  2. Set slfConfigMode to STATIC_SLF_CONFIG_MODE, provided slfHostConfig and slfLookupConfig are already present.
  3. Set populateSlfDiscoveredCandidateList to false.

Moving from Static to Dynamic

Perform the following to switch from static to dynamic configuration:

  1. Upgrade NRF to enable NrfArtisan service, if not enabled previously. Configure enableNrfArtisanService under global attributes to true in the ocnrf_custom_values_23.4.6.yaml file.
  2. Set populateSlfDiscoveredCandidateList to true. This triggers the population of slfDiscoveredCandidateList.
  3. Perform a GET on the slfOptions to see if slfDiscoveredCandidateList has at least one candidate. If present, go to step 4. If not, wait till the slfDiscoveredCandidateList is populated with SLF profiles.
  4. Set slfConfigMode to DISCOVERED_SLF_CONFIG_MODE and featureStatus to ENABLED.

    Note:

    If slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE, slfHostConfig is not considered for discovery query.

4.20.3 Support for Configurable Port and Routing Parameter Selection

NRF provides a configurable option to select a port, when one is not explicitly configured, either from the IpEndpoint or from the scheme attribute of NfService. This port selection configuration is used when the routing attribute is ipv4Addresses/ipv6Addresses of NfProfile or the FQDN of NfService/NfProfile.

Additionally, NRF allows configuring the preferred routing attribute in case more than one of the following attributes is present in the NfProfile or NfService:

  • IPv4 (ipv4Addresses from IpEndpoint of NfService if present, or ipv4Addresses of NfProfile)
  • IPv6 (ipv6Addresses from IpEndpoint of NfService if present, or ipv6Addresses of NfProfile)
  • FQDN (fqdn of NfService if present, or fqdn of NfProfile)

Managing Configurable Port and Routing Parameter Selection Feature

Configure

You can configure the following parameters using the REST API or CNC Console:
  • Configure using REST API: Perform the preferredPortFromIPEndpoint and preferredRoutingParameter feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide (an illustrative sketch follows this list).
  • Configure using CNC Console: Perform the feature configurations as described in SLF Options.
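
For illustration only, a configuration enabling both options might look as follows. The attribute names preferredPortFromIPEndpoint and preferredRoutingParameter are taken from this section, while the API path, the boolean semantics, and the value format (for example, "IPV4") are assumptions; verify them against Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

# Resource URI: {apiRoot}/nrf-configuration/v1/... (see the REST Specification Guide for the exact path)
# Method: PUT
# Content Type: application/json
{
  "preferredPortFromIPEndpoint": true,
  "preferredRoutingParameter": "IPV4"
}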

4.21 NRF Forwarding

The NRF Forwarding feature forwards service operation messages to another NRF when NRF is not able to fulfill the required service operation.

Note:

Only the service operations explained below, in the specific cases or scenarios described, are eligible for forwarding.

A consumer NF instance can perform the following:

  • Subscribe to changes of NF instances registered in an NRF with which it does not directly interact. The NF subscription message is forwarded by an intermediate NRF to another NRF.
  • Retrieve the NF profile of NF instances registered in an NRF with which it does not directly interact. The NF profile retrieval message is forwarded by an intermediate NRF to another NRF.
  • Discover the NF profile of NF instances registered in an NRF with which it does not directly interact. The NF discover message is forwarded by an intermediate NRF to another NRF.
  • Request an OAuth 2.0 access token for NF instances registered in an NRF with which it does not directly interact. The OAuth 2.0 access token service request is forwarded by an intermediate NRF to the NRF (which may issue the token).

NRF also enables users to define the forwarding criteria for NFDiscover and AccessToken service requests from NRF to an intermediate NRF. The user can configure the preferred NF type, the NF services, or both for which forwarding is applicable. This provides the flexibility to regulate the traffic between the NRFs. For enabling the Forwarding Rules feature configuration, see the Enable Forwarding Rules section.

Managing NRF Forwarding Feature

Enable

The Forwarding feature is a core functionality of NRF. You need not enable or disable this feature.

Enable Forwarding Rules

You can enable the Forwarding Rules feature using REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in Forwarding Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Forwarding Options page. For more information about enabling the feature using CNC Console, see Forwarding Options.

Note:

  • Before enabling forwardingRulesFeatureConfig, ensure that the corresponding configurations are completed (a sketch follows this note).
  • The featureStatus flag under forwardingRulesFeatureConfig can be ENABLED only if the discoveryStatus or accessTokenStatus attribute is already ENABLED.
  • Once the featureStatus flag under forwardingRulesFeatureConfig is ENABLED, the discoveryStatus and accessTokenStatus attributes cannot be DISABLED.
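
For illustration, a forwarding rules configuration consistent with the attribute names in the note above might be sketched as follows; the enclosing structure, and the placement of discoveryStatus and accessTokenStatus relative to forwardingRulesFeatureConfig, are assumptions, so confirm the exact schema in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "forwardingRulesFeatureConfig": {
    "featureStatus": "ENABLED",
    "discoveryStatus": "ENABLED",
    "accessTokenStatus": "ENABLED"
  }
}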

Configure

Configure the NRF Forwarding feature using REST API or CNC Console:
  • Configure NRF Forwarding feature using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure NRF Forwarding feature using CNC Console: Perform the feature configurations as described in Forwarding Options.

Note:

The forwarding rules for nfDiscover service requests are based on the following 3GPP discovery request parameters:

  • target-nf-type (Mandatory Parameter)
  • service-names (Optional Parameter)

The forwarding rules for access token service requests are based on the following 3GPP access token request parameter:

  • scope (Mandatory Parameter)

NRF Host configuration attribute (nrfHostConfig) allows the user to configure the details of another NRF to which the service operation messages are forwarded.

Note:

For the NRF-NRF Forwarding feature to work, the nrfHostConfig attribute must be configured in both NRFs with each other's details.

For example: if NRF1 forwards requests towards NRF2, then the nrfHostConfig configuration attribute of NRF1 shall have the NRF2 details, and similarly the nrfHostConfig configuration attribute of NRF2 shall have the NRF1 details. These configurations are used while handling different service operations of NRF.

The nrfHostConfig configuration consists of attributes such as apiVersion, scheme, FQDN, port, priority, and so on. NRF allows configuring a maximum of four hosts. The host with the highest priority is considered the Primary host (see the sketch after the following note).

Note:

  • Refer to 3GPP TS 29.510 (Release 15.5) for the definition and allowed range of the nrfHostConfig attributes (apiVersion, scheme, FQDN, port, priority, and so on).
  • Apart from the priority attribute, no other attribute plays any role in host selection.
  • Apart from the Primary host, up to three more hosts (if configured) can be used as alternate NRFs.
  • When more than one host is configured with the same priority, the order of NRF host selection among them is random.
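
For example, a two-host configuration could be sketched as follows, using the attributes named above. The payload structure and the FQDN values are illustrative assumptions, and whether a lower or higher numeric value denotes higher priority must be verified in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "nrfHostConfig": [
    {
      "apiVersion": "v1",
      "scheme": "https",
      "fqdn": "nrf2-primary.example.com",
      "port": 8443,
      "priority": 1
    },
    {
      "apiVersion": "v1",
      "scheme": "https",
      "fqdn": "nrf2-alternate.example.com",
      "port": 8443,
      "priority": 2
    }
  ]
}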
In the NRF Forwarding feature, a request is first forwarded to Primary NRF. In case of error, the request is forwarded to alternate NRFs based on the following configurations:
  • nrfRerouteOnResponseHttpStatusCodes: This configuration determines whether the service operation message can be forwarded to an alternate NRF. After receiving a response from the primary NRF, if the response status code matches this configuration, NRF reroutes the request to the alternate NRF. Refer to the nrfHostConfig attribute for host NRF details.

    Prior to NRF 22.2.x, Egress Gateway returned a 503 response for timeout exceptions (for example, no response received for an SLF query or no response received for a query sent to the forwarding NRF).

    From 22.3.x onwards, Egress Gateway returns a 408 response code for timeout exceptions (REQUEST_TIMEOUT and CONNECTION_TIMEOUT), in line with the HTTP standard. Hence, after an upgrade to 22.3.x or a fresh installation of 22.3.x, if a retry must be performed for timeout scenarios, the 408 error code must be explicitly configured under the forwarding reroute option. Following is the configuration for forwarding options:
    {
      "nrfRerouteOnResponseHttpStatusCodes": {
        "pattern": "^[3,5][0-9]{2}$|408$"
      }
    }
  • maximumHopCount: This configuration determines the maximum number of hops (SLF or NRF) through which a given service request can be forwarded. This configuration is useful during NRF Forwarding and SLF feature interaction.

Observe

Following are the NRF Forwarding feature specific metrics filters:
  • ocnrf_forward_accessToken_tx_requests_total
  • ocnrf_forward_accessToken_rx_responses_total
  • ocnrf_forward_nfProfileRetrieval_tx_requests_total
  • ocnrf_forward_nfProfileRetrieval_rx_responses_total
  • ocnrf_forward_nfStatusSubscribe_tx_requests_total
  • ocnrf_forward_nfStatusSubscribe_rx_responses_total
  • ocnrf_forward_nfDiscover_tx_requests_total
  • ocnrf_forward_nfDiscover_rx_responses_total
  • ocnrf_forwarding_jetty_latency_seconds
  • ocnrf_forward_nfDiscover_barred_total
  • ocnrf_forward_accessToken_barred_total
  • ocnrf_forward_nfStatusSubscribe_barred_total
  • ocnrf_forward_profileRetrieval_barred_total

For more information on NRF Forwarding feature metrics and KPIs, see the NRF Forwarding Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see "Collecting Logs" section in Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.22 NRF Georedundancy

NRF supports georedundancy to ensure high availability and redundancy. It offers two-site, three-site, or four-site georedundancy to ensure service availability when one of the NRF sites is down. When NRF is deployed as a georedundant site, all the sites work in an active state and the same data is available at all the sites.

The NFs send service requests to their primary NRF. When the primary NRF site is unavailable, the NFs redirect the service requests to the alternate site NRF. In this case, NFs get the same information from the alternate NRF, and each site maintains and uses its own set of configurations.

The NRF's state data gets replicated between the georedundant sites using DBTier's replication service.

Following are the prerequisites for georedundancy:

  • Each site configures the remote NRF sites that it is georedundant with.
  • Once the georedundancy feature is enabled on a site, it cannot be disabled.
  • Georedundant sites must be time synchronized.
  • Georedundant sites must be reachable from NFs or Peers on all the sites.
  • NFs are required to configure primary and alternate NRFs so that when one site is down, the alternate NRF can provide the required service operations.
  • At any given instance, NFs communicate with only one NRF, that is, NFs register services and maintain heartbeats with only one NRF. The data is replicated across the georedundant NRFs, thereby allowing the NFs to seamlessly move between the NRFs in case of failure.

Managing NRF Georedundancy Feature

Prerequisites
  1. cnDBTier is installed and configured for each site, and the DB replication channels between the sites are up. For more information about cnDBTier installation, see Oracle Communications Cloud Native Core, cnDBTier Installation, Upgrade, and Fault Recovery Guide.
  2. Configure MySQL Database, Users, and Secrets. For the configuration procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Preconfigurations for Georedundancy Feature
The following are the preconfigurations that must be performed before enabling the georedundancy feature.
  • Preconfigurations using Helm:
    1. Before performing the upgrade, update the Database Monitor service host and port in the following attributes, as per the deployment, in the appInfo parameter section.
        #Indicates whether to monitor the database status
        watchMySQL: true
        #The URI used by the appinfo service to retrieve the replication channel status from the CN DB Tier. The service name used is the db-monitor service. The URI should be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/status/replication/realtime"
        replicationUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/replication/realtime"
        #The URI used by the appinfo service to retrieve the database status from the CN DB Tier. This is for future usage. The service name used is the db-monitor service. It is recommended to configure this URI correctly to avoid continuous error logs in appinfo. The URI should be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/status/local"
        dbStatusUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/local"
        #The URI used by the appinfo service to retrieve the realtime database status from the CN DB Tier. This is for future usage. It is recommended to configure this URI correctly to avoid continuous error logs in appinfo. The service name used is the db-monitor service. The URI should be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/status/cluster/local/realtime"
        realtimeDbStatusUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/status/cluster/local/realtime"
        #The URI used by the appinfo service to retrieve the dbTier version from the CN DB Tier. (This URL is supported from CN DBTier 22.4.0.) The service name used is the db-monitor service. The URI should be provided as "http://<db monitor service name>:<db monitor service port>/db-tier/version"
        dbTierVersionUri: "http://mysql-cluster-db-monitor-svc.occne-infra:8080/db-tier/version"
        #The URI used by the appinfo service to retrieve alerts from the alert manager. The service name used is the alert manager service name of the CNE. The URI should be provided as "http://<alert manager service name>:<alert manager service port>/<cluster name>/alertmanager"
        alertmanagerUrl: "http://occne-prom-alertmanager.occne-infra:80/cluster/alertmanager"
    2. Run helm upgrade to apply the above configuration.
    3. Verify that the appinfo service is able to fetch the database replication status by querying the following API:
      Resource Method: GET
      Resource URI: <appinfo-svc>:<appinfo-port>/status/category/replicationstatus

      Where,

      <appinfo-svc> is the appinfo service name.

      <appinfo-port> is the port of appinfo service.

      Sample response:
      [
        {
          "localSiteName": "nrf1",
          "remoteSiteName": "nrf2",
          "replicationStatus": "UP",
          "secondsBehindRemote": 0,
          "replicationGroupDelay": [
            {
              "replchannel_group_id": "1",
              "secondsBehindRemote": 0
            }
          ]
        }
      ]
      

      If the sample response is not received, appinfo is not querying the Database Monitor service correctly; recheck the configuration.

    4. This step must be performed only after verifying that appinfo is able to fetch the status correctly. If the above configuration is not correct, NRF services will receive incorrect information about the database replication channel status and will therefore assume that the replication channel status is DOWN.

      Configure the NRF microservice to query appinfo for the replication status using the Georedundancy Options API or CNC Console:

      Resource Method: PUT
      Resource URI: <configuration-svc>:<configuration-port>/nrf-configuration/v1/geoRedundancyOptions
      {
         "replicationStatusUri": "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus"
      }
      

      Where,

      <configuration-svc> is the nrfconfiguration service name.

      <configuration-port> is the port of nrfconfiguration service.

      <appinfo-svc> is the appinfo service name.

      <appinfo-port> is the port of appinfo service.

      Perform the configurations in CNC Console as described in Georedundancy Options.

    5. Verify that the OcnrfDbReplicationStatusInactive and OcnrfReplicationStatusMonitoringInactive alerts are not raised on the alert dashboard.
  • Preconfigurations using REST API:
    1. Deploy NRF as per the installation procedure provided in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
    2. Enable the Georedundancy feature using the CNC Console or REST API:
      Following is a sample configuration at Site Chicago, which is georedundant with Sites Atlantic (siteName: Atlantic, NrfInstanceId: 723da493-528f-4bed-871a-2376295c0020) and Pacific (siteName: Pacific, NrfInstanceId: cfa780dc-c8ed-11eb-b8bc-0242ac130003):
      # Resource URI:  {apiRoot}/nrf-configuration/v1/geoRedundancyOptions
      # Method: GET
      # Content Type: application/json
      {
        "useRemoteDataWhenReplDown": false,
        "featureStatus": "ENABLED",
        "monitorNrfServiceStatusInterval": "5s",
        "monitorDBReplicationStatusInterval": "5s",
        "replicationDownTimeTolerance": "10s",
        "replicationUpTimeTolerance": "10s",
        "replicationLatencyThreshold": "20s",
        "replicationStatusUri": "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus",
        "siteNameToNrfInstanceIdMappingList": [
          {"siteName": "atlantic", "nrfInstanceId": "723da493-528f-4bed-871a-2376295c0020"},
          {"siteName": "pacific", "nrfInstanceId": "cfa780dc-c8ed-11eb-b8bc-0242ac130003"}
        ]
      }

    NRF considers NfProfiles, registered across georedundant sites, for all its service operations. However, if NRF detects that the replication status with its mated sites is down, it considers only the NfProfiles registered at the local site. If NfProfiles registered across georedundant sites must be considered for certain service operations (for example, service operations that result in a DB update), set the useRemoteDataWhenReplDown attribute to true.

    This attribute is applicable only for the NfHeartBeat, NfUpdate (Patch), NfDeregister, NfStatusSubscribe (Patch), and NfStatusUnSubscribe service operations. It ensures that during an NRF site down scenario, if the NF moves to its mated NRFs, the NF need not reregister or resubscribe. However, if this attribute is set to false and the NF switches to the mated site, the NF receives a 404 response and:
    • for NfUpdate (Patch), NfDeregister, and NfHeartBeat operations, the NF is expected to register again with the NRF using the NfRegister service operation.
    • for NfStatusSubscribe (Patch) and NfStatusUnSubscribe, the NF is expected to subscribe again with the NRF using the NfStatusSubscribe service operation.

    Note:

    This attribute is not applicable for NfDiscovery, NfAccessToken, NfProfileRetrieval, and NfListRetrieval service operations.
  • Preconfigurations using CNC Console:
    1. Configure Replication Status Uri to "http://<appinfo-svc>:<appinfo-port>/status/category/replicationstatus"
    2. Configure Site Name To NRF InstanceId Mapping List group with the NRF Instance Id and the corresponding database Site Name of the remote site(s) with which the given NRF is georedundant.

    Note:

    Configure these mandatory attributes before enabling the Georedundancy feature. If these attributes are not configured during deployment or using the CNC Console postdeployment, georedundancy cannot be enabled, and NRF at the site acts as a standalone NRF.
Enable

After performing the above mentioned preconfigurations, enable the NRF georedundancy feature using the REST API or CNC Console.

  • Enable using REST API: Set featureStatus to ENABLED in Georedundancy Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Georedundancy Options page. For more information about enabling the feature using CNC Console, see Georedundancy Options.

Configure

You can configure the georedundancy feature using REST API or CNC Console:
  • Configure Georedundancy using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Georedundancy using CNC Console: Perform the feature configurations as described in Georedundancy Options.

Fetching the Database Replication Channel Status Using appinfo Microservice

NRF microservices require the database replication channel status for various NRF service operations. The Database Monitor service exposes APIs that provide the database replication channel status. The following NRF microservices query this API:
  • Nfregistration
  • Nfsubscription
  • NfDiscover
  • nrfAuditor
  • NfAccessToken
  • NrfConfiguration
  • NrfArtisan

Currently, all pods of the above services periodically query the Database Monitor service over REST API and maintain the status in memory. When NRF is scaled to handle a high traffic rate, the number of pods querying the Database Monitor service becomes very high, but the Database Monitor service is not designed to handle a high traffic rate.

Observe

Following are the georedundancy feature specific metrics:
  • ocnrf_dbreplication_status
  • ocnrf_dbreplication_down_time_seconds
  • ocnrf_nf_switch_over_total
  • ocnrf_nfSubscriptions_switch_over_total
  • ocnrf_stale_nf_deleted_total
  • ocnrf_stale_nfSubscriptions_deleted_total
  • ocnrf_reported_dbreplication_status

For more information on georedundancy metrics and KPIs, see Georedundancy Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.22.1 NRF Last Known Response

When an NRF detects that the replication channel with its georedundant NRF(s) is down, it stops considering the NFs registered in the georedundant NRF(s) for any service operations. Due to the unavailability of the replication channel, the latest status of the NFs registered in the georedundant NRF(s) does not get replicated. In this state, the last replicated NfProfiles may be stale; hence, NRF does not consider these profiles for any service operations.

If the replication channels go down during platform maintenance, network issues, or for any other reason, it is acceptable for NRF to use the last known NfProfiles of remote NRFs for its service operations, given that NFs usually do not register, deregister, or change their profiles very often.

As part of this feature, the overrideReplicationCheck Helm configurable parameter is provided to control the behavior when replication channels are down. The parameter is applicable only for the following NRF services:
  • nfregistration
  • nfaccesstoken
  • nfdiscovery
  • nrfartisan

Enable

You can enable Last Known Response using Helm. The following steps should be performed only after the Georedundancy feature is enabled as described in Georedundancy Options.

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set <service_name>.global.overrideReplicationCheck to true to control the NRF response when replication is down (a sample values fragment follows this procedure).

    Where, <service_name> is one of the NRF services listed above.

    For more information about the parameters, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

  3. Save the file.
  4. Run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
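
A minimal sketch of the relevant fragment of the ocnrf_custom_values_23.4.6.yaml file, assuming the per-service layout implied by step 2 (repeat the setting for each of the services listed above); verify the exact layout in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

# Illustrative fragment only; the exact layout is described in the
# Installation, Upgrade, and Fault Recovery Guide.
nfregistration:
  global:
    overrideReplicationCheck: true
nfdiscovery:
  global:
    overrideReplicationCheck: true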

NRF Behavior When the Feature is Disabled

Following are the different scenarios that explain the behavior of each NRF microservice when this feature is disabled:

  • Replication is Up
    • NF Register Microservice
      • NfRegister, NfUpdate, and NfDeregister Processing: The service operations will be successful, irrespective of which site the NF was originally registered at.
      • NfListRetrieval and NfProfileRetrieval Processing

        The NRF will consider:
        • The NFs registered and heartbeating at the local site
        • The NFs registered and heartbeating at the remote sites, which are replicated from its georedundant mate NRFs
    • NF Discover and NF AccessToken Microservice

      The NRF will consider:
      • The NFs registered and heartbeating at the local site
      • The NFs registered and heartbeating at the remote sites, which are replicated from its georedundant mate NRFs
    • NrfArtisan Microservice: The SLFs registered at both the local site and remote sites are used for discovering UDRs for performing the SLF query, when slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE.
    • NrfAuditor Microservice
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, it will be marked as SUSPENDED. The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change are notified about the change.

      • When NRF discovers that any of its subscriptions has crossed the validity period, it deletes the subscription.
      • NRF will also audit the remote site NRFs' NfProfiles and subscriptions. If NRF discovers that any of the registered profiles at the remote site has missed heartbeats, it will be marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The NF will eventually be marked as SUSPENDED if it continues to miss its heartbeats for a configurable period. The consumer NFs that have subscribed for the change are not notified.
      • When NRF discovers that any of the subscriptions at the remote site has crossed the validity period, it marks the subscription as SUSPENDED.
  • Replication is Down
    • NF Register Microservice
      • NfRegister, NfUpdate, and NfDeregister Processing
        • The NfProfiles will be processed if the NF is registered at the local site.
        • If the NF is registered at the remote site, the NfProfile will be processed only if geoRedundancyOptions.useRemoteDataWhenReplDown is true. Otherwise, the request will be rejected with 404 Not Found.
      • NfListRetrieval and NfProfileRetrieval Processing
        • The NRF will consider the NFs registered and heartbeating with the local site.
        • The NRF will not consider the NFs that are registered with remote NRF site with which the replication is down.
    • NF Discover and NF AccessToken Microservice

      The NRF will consider:
      • The NFs registered and heartbeating at the local site.
      • NRF will not consider the NFs registered and heartbeating at the remote sites with which the replication is down.
    • NrfArtisan Microservice
      • When slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE:
        • NRF will consider the SLFs registered and heartbeating at the local site.
        • NRF will not consider the SLFs registered with remote site NRFs with which the replication is down.
    • NrfAuditor Microservice

      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, it will be marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change shall not be notified of the change.

        When NRF discovers that any of its subscriptions has crossed the validity period, it shall mark the subscription as SUSPENDED (internal state).

      • NRF will not audit the remote site NRFs' NfProfiles and subscriptions.

NRF Behavior When the Feature is Enabled (Replication is Up or Down)

  • NF Register Microservice
    • NfRegister, NfUpdate, and NfDeregister Processing
      • The profiles will be processed for the NFs registered at the local site or at the remote sites, irrespective of the value of the geoRedundancyOptions.useRemoteDataWhenReplDown parameter.
    • NfListRetrieval and NfProfileRetrieval Processing

      The NRF will consider:
      • The NFs registered and heartbeating at the local site
      • The NFs registered and heartbeating at the remote sites (last known status), which are replicated from its georedundant mate NRFs
  • NF Discover and NF AccessToken Microservice

    The NRF will consider:
    • The NFs registered and heartbeating at the local site
    • The NFs registered and heartbeating at the remote sites (last known status), which are replicated from its georedundant mate NRFs
  • NrfArtisan Microservice

    The SLFs registered at both the local site and remote sites (last known status) are used for discovering UDRs for performing the SLF query, when slfConfigMode is set to DISCOVERED_SLF_CONFIG_MODE.

  • NrfAuditor Microservice
    • If the replication channel status is up,
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, it will be marked as SUSPENDED. The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change are notified about the change.

        When NRF discovers that any of its subscriptions has crossed the validity period, it deletes the subscription.

      • NRF will also audit the remote site NRFs' NfProfiles and subscriptions. If NRF discovers that any of the registered profiles at the remote site has missed heartbeats, it will be marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change are not notified.
      • When NRF discovers that any of the subscriptions at the remote site has crossed the validity period, it marks the subscription as SUSPENDED.
    • If the replication channel status is down,
      • NRF will audit the NfProfiles and subscriptions of the local site.

        When NRF discovers that any of its registered profiles has missed heartbeats, it will be marked as PROBABLE_SUSPENDED (internal state). The NF will thereafter not be included in the nfDiscovery or nfAccessToken response. The consumer NFs that have subscribed for the change shall not be notified of the change.

      • When NRF discovers that any of its subscriptions has crossed the validity period, it shall mark the subscription as SUSPENDED (internal state).
      • NRF will not audit the remote site NRFs' NfProfiles and subscriptions.

4.23 NF Heartbeat Enhancement

This feature allows the operator to configure the minimum, maximum, and default heartbeat timers, and the maximum number of consecutive heartbeats that an NF is allowed to skip. Further, these values can be customized per NF type.

According to 3GPP TS 29.510, every NF registered with NRF keeps its operative status alive by sending NF heartbeat requests periodically. The NF can optionally send the heartbeatTimer value when it registers its NFProfile or when it updates its registered NFProfile.

NRF may modify the value of the heartbeatTimer based on its configuration and return the new value to the NF on successful registration. The NF will thereafter use the heartbeatTimer as received in the registration response as its heartbeat interval.

If the heartbeatTimer configuration changes at the NRF, the changed value must be communicated to the NF in the response to the next periodic NF heartbeat request, or when the NF next sends an NFUpdate request to the NRF.

NRF monitors the operative status of all the NFs registered with it. When it detects that an NF has missed updating its NFProfile or sending a heartbeat within the heartbeat interval, NRF must mark the NFProfile as SUSPENDED. The NFProfile and its services may then no longer be discoverable by the other NFs through the NfDiscovery service. NRF notifies the subscribed NFs of the change in the status of the NFProfile.

Managing NF Heartbeat Enhancement

Enable

The NF heartbeat is a core functionality of NRF. You do not need to enable or disable this feature.

Configure

Configure the NF Heartbeat Enhancement using REST API or CNC Console:
  • Configure NF Heartbeat using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide (an illustrative sketch follows this list).
  • Configure NF Heartbeat using CNC Console: Perform the feature configurations as described in NF Management Options.
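
As a purely illustrative sketch, a configuration carrying such timers might take the following shape; all attribute names, the value units (assumed to be seconds), and the payload structure here are hypothetical, so use the exact names from Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide or the NF Management Options page.

{
  "defaultHeartBeatTimer": 60,
  "minimumHeartBeatTimer": 30,
  "maximumHeartBeatTimer": 120,
  "maximumConsecutiveHeartBeatsSkipped": 3,
  "nfTypeSpecificHeartBeatTimers": [
    {
      "nfType": "AMF",
      "defaultHeartBeatTimer": 45
    }
  ]
}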

Observe

For more information on heartbeat metrics and KPIs, see NRF Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.24 Service mesh for intra-NF communication

Oracle NRF leverages the Istio or Envoy service mesh (Aspen Service Mesh) for all internal and external communication. The service mesh integration provides inter-NF communication and allows the API gateway to work together with the service mesh. The service mesh supports the services by deploying a sidecar proxy in the environment to intercept all network communication between microservices. For more information on configuring ASM, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

4.25 NF Screening

The incoming service requests from 5G Network Functions (NFs) must be screened before allowing access to the Nnrf_NfManagement service operations to ensure security.

The NF Screening feature screens the incoming service requests based on certain attributes in the NfProfile against a set of screening rules configured at NRF. NRF processes an incoming service request and allows it to invoke Nnrf_NfManagement service operations only if the screening is successful.

NRF supports the following NF screening rules list types. For more information about the screening rules applicable to different attributes in NfProfile, see Table 4-9.

Table 4-7 Screening Rules List Type

Screening Rule List Description
"NF_FQDN" Screening List type for NF FQDN. This screening rule type is applicable for fqdn of a NfProfile in NF_Register and NF_Update service operations.
"NF_IP_ENDPOINT" Screening list type for IP Endpoint. This screening rule type is applicable for ipv4address, ipv6address attributes at NfProfile level and ipEndPoint attribute at nfServices level for NF_Register and NF_Update service operations.
"CALLBACK_URI" Screening list type for callback URIs in NF Service and nfStatusNotificationUri in SubscriptionData. This is also applicable for nfStatusNotificationUri attribute of SubscriptionData for NFStatusSubscribe service operation. This screening rule type is applicable for defaultNotificationSubscription attribute at nfServices level for NF_Register and NF_Update service operations.
"PLMN_ID" Screening list type for PLMN ID. This screening rule type is applicable for plmnList attribute at NfProfile level for NF_Register and NF_Update service operations.
"NF_TYPE_REGISTER" Screening list type for allowed NF Types to register. NRF supports 3GPP TS 29510 Release 15 and specific Release 16 NF Types. For more information on the supported NF Types list, see "Supported NF Types" section. This screening rule type is applicable for nfTypeList attribute at NfProfile level for NF_Register and NF_Update service operations.

When a service request is received, NRF performs screening of the service request against each of the above-mentioned screening rules list types, as follows:

  • Checks if the global screening option is Enabled or Disabled.

    You can configure the nfScreeningOptions.featureStatus parameter as "ENABLED" using REST or Console.

    Note:

    By default, the NF Screening feature is disabled globally.
  • If it is Enabled, NRF checks if the attribute in the NfProfile is configured under Blacklist or Whitelist. For more information about these attributes, see Table 4-8.

    You can configure nfScreeningType parameter as Blacklist or Whitelist for the specific screening rule configuration. For more information about the screening rules applicable to different attributes in NfProfile, see Table 4-9.

    Table 4-8 NF Screening Type

    NfScreeningType Description
    Blacklist If the attribute is configured as Blacklist and the attribute in the request matches the configured value, the service request is not processed further. For example, if nfFqdn is configured as Blacklist, a service request whose NfProfile contains a matching fqdn is not processed further.
    Whitelist If the attribute is configured as Whitelist and the attribute in the request matches the configured value, the service request is processed further. For example, if nfIpEndPointList is configured as Whitelist, a service request whose NfProfile contains matching ipv4Addresses is processed further.
  • Based on the nfScreeningType parameter configuration, NRF checks the screening rules at the per-NfType and global screening rules data levels. Depending on the configuration, the service request is processed. For more information about the configuration, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
The following table describes the screening rules applicable to different attributes in NfProfile.

Table 4-9 Screening rules applicable to NfProfile attributes

Management Service Operation Screening Rules List Attribute in NfProfile Attribute in REST
NF_Subscribe CALLBACK_URI SubscriptionData.nfStatusNotificationUri nfCallBackUriList
NF_Register, NF_Update CALLBACK_URI NfService.defaultNotificationSubscriptions nfCallBackUriList
NF_Register, NF_Update NF_FQDN NfProfile.fqdn nfFqdn
NF_Register, NF_Update NF_IP_ENDPOINT
  • NfProfile.ipv4Addresses
  • NfProfile.ipv6Addresses
  • NfService.ipEndPoints
nfIpEndPointList
NF_Register, NF_Update PLMN_ID NfProfile.plmnList plmnList
NF_Register, NF_Update NF_TYPE_REGISTER NfProfile.nfType nfTypeList

Managing NF Screening Feature

Enable

You can enable the NF Screening feature using the CNC Console or REST API.

  • Enable using REST API: Set featureStatus to ENABLED in NF Screening Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the NF Screening Options page. For more information about enabling the feature using CNC Console, see NF Screening Options.

Configure

You can configure the NF Screening feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide (an illustrative sketch follows this list).
  • Configure using CNC Console: Perform the feature configurations as described in NF Screening Options.
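
To make the configuration concrete, the following sketch combines attributes named in this section (featureStatus, nfScreeningType, and an NF_FQDN rule); the enclosing structure, the screeningList attribute name, and the FQDN value are illustrative assumptions to be checked against Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "featureStatus": "ENABLED",
  "nfFqdn": {
    "nfScreeningType": "Blacklist",
    "screeningList": ["blocked-nf.example.com"]
  }
}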

Observe

Following are the NF Screening feature specific metrics:
  • ocnrf_nfScreening_nfFqdn_requestFailed_total
  • ocnrf_nfScreening_nfFqdn_requestRejected_total
  • ocnrf_nfScreening_nfIpEndPoint_requestFailed_total
  • ocnrf_nfScreening_nfIpEndPoint_requestRejected_total
  • ocnrf_nfScreening_callbackUri_requestFailed_total
  • ocnrf_nfScreening_callbackUri_requestRejected_total
  • ocnrf_nfScreening_plmnId_requestFailed_total
  • ocnrf_nfScreening_plmnId_requestRejected_total
  • ocnrf_nfScreening_nfTypeRegister_requestFailed_total
  • ocnrf_nfScreening_nfTypeRegister_requestRejected_total
  • ocnrf_nfScreening_notApplied_InternalError_total

For more information about NF Screening metrics and KPIs, see NF Screening Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts persist, perform the following:

  1. Collect the logs: For more information about how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.26 REST Based NRF State Data Retrieval

The REST based NRF state data retrieval feature provides Non-Signaling APIs to access NRF state data. It helps the operator access the NRF state data to understand and debug failures.

This feature provides various queries to retrieve the data as per the requirement.

Managing REST Based NRF State Data Retrieval Feature

Configure

You can configure the REST Based NRF State Data Retrieval feature using REST API. For more information on state data retrieval, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.27 NRF Access Token Service Usage Details

NRF implements the Nnrf_AccessToken service (used for OAuth2 authorization) with the "Client Credentials" authorization grant. It exposes a "Token Endpoint" where NF service consumers can request the Access Token service.

The Nnrf_AccessToken service operation is defined as follows:
  • Access Token Request (Nnrf_AccessToken_Get)

Note:

This procedure is specific to the NRF Access Token service operation. NRF general configurations, database, and database-specific secret creation are not part of this procedure.

Procedure to use NRF Access Token Service Operation

This procedure provides step-by-step details to use 3GPP defined access token service operation supported by NRF.
  1. Create NRF private key and public certificate

    This step explains the need to create the NRF private keys and public certificates.

    Private keys are used by NRF to sign the generated access token. They are available only with NRF.

    Public certificates are used by producer NFs to validate the access token generated by NRF. Therefore, public certificates are available with the producer network functions. NRF does not need the public certificate while signing the access token.

    The expiry time of the certificate is required to set appropriate validity time in the AccessTokenClaim.

    Note:

    For more details about the validity time of AccessTokenClaim, see "oauthTokenExpiryTime".
    Two types of signing algorithms are supported by NRF. Different keys and certificates are required to be generated for each type:
    • ES256: ECDSA digital signature with SHA-256 hash algorithm
    • RS256: RSA digital signature with SHA-256 hash algorithm
    Either one or both types of keys and certificates can be generated, depending on which hash algorithms are used. Based on the NRF REST-based configuration, the corresponding keys and certificates are used to sign the access token.

    Note:

    The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
    Sample keys and certificates:

    After running this step, the private keys and public certificates of NRF are created (the generated files depend upon the algorithms chosen by the operator or user). There can be multiple such pairs of private keys and public certificates, which will eventually be configured in NRF with different KeyIds.

    For example:

    ES256 based keys and certificates:
    • ecdsa_private_key.pem

    • ecdsa_certificate.crt

    RS256 based keys and certificates:
    • rsa_private_key.pem

    • rsa_certificate.crt

    Note:

    • Only unencrypted keys and certificates are supported.
    • For RSA, the supported versions are PKCS1 and PKCS8.
    • For ECDSA, the supported version is PKCS8.
  2. Namespace creation for Secrets

    This step explains the need to create Kubernetes namespace in which Kubernetes secrets are created for NRF private keys and public certificates. For creating namespace, see the "Verifying and Creating NRF Namespace" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    • Different namespaces or the same namespace can be used for NRF private keys and public certificates.
    • It can be the same namespace as for NRF deployment.
    • Appropriate RBAC permission needs to be associated with the ServiceAccount, if the namespace is other than NRF's namespace.
  3. Secret creation for NRF private keys and public certificates
    This step explains commands to create the Kubernetes secret(s) in which NRF private keys and public certificates can be kept safely. For configuring secrets, see the "Configuring Secret for Enabling Access Token Service" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    The same secret that already exists for the NRF private keys can be used.

    A sample command to create or update the secret is provided in the "Configuring Kubernetes Secret for Accessing NRF Database" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide. If there is a need to create or update multiple secrets, one for each entity, the same section can be followed.

  4. Perform NRF REST based configuration with outcome details of Steps 1 to 3

    This step explains the NRF REST based configuration to use the NRF private keys, public certificates, secret(s), and secret namespace(s).

    NRF REST based configuration provides options to configure different key-ids and the corresponding NRF private keys and public certificates, along with the corresponding OAuth signing algorithms. One of the configured key-ids can be set as the current key-id.

    While generating the oauth access token, NRF uses the keys, algorithm, and certificates corresponding to the current key-id.

    For more information on NF Access Token options configuration using REST APIs, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

4.28 Key-ID for Access Token

The Key-ID (kid) feature adds a "kid" header to the access token response generated by NRF. As per RFC 7515 Section 4.1.4, the Key-ID (kid) indicates which key was used to secure the JSON Web Signature (JWS).

Note:

You must perform OCNRF Access Token Service Operation configuration before configuring Key-ID for Access Token.

Each NRF and producer NF can have multiple keys, with the algorithm indexed by the configured "kid". NRF REST based configuration provides options to configure different key-ids and the corresponding NRF private keys and public certificates, along with the corresponding OAuth signing algorithms. One of the configured key-ids can be set as the current Key-ID. While generating the OAuth access token, NRF uses the keys, algorithm, and certificates corresponding to the current Key-ID.

NRF configuration provides the "addkeyIDInAccessToken" attribute, which controls whether the Key-ID is added as a header in the access token response. If the value is true, the currentKeyID value is added in the "kid" header of the AccessToken response. If the value is false, the "kid" header is not added to the AccessToken response.

For more information on how to check the AccessToken Signing Key Status, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

Managing Key-ID for Access Token Feature

Enable

You can enable the Key-ID for Access Token feature using the CNC Console or REST API.

Enable the Key-ID for Access Token feature using REST API as follows:
  1. Use the API path as {apiRoot}/nrf-configuration/v1/nfAccessTokenOptions.
  2. Content type must be application/json.
  3. Run the PUT REST method using the following JSON:
    {
       "addkeyIDInAccessToken": true
    }
Enable the Key-ID for Access Token feature using CNC Console as follows:
  1. From the left navigation menu, navigate to NRF and then select NF Access Token Options. The NF Access Token Options page is displayed.
  2. Click Edit from the top right side to edit or update NF Access Token Options parameter. The page is enabled for modification.
  3. Under the Token Signing Details section, set Add KeyID in AccessToken to True from the drop-down menu.
  4. Click Save to save the NF Access Token Options.

Configure

You can configure the Key-ID for Access Token Feature using REST API or CNC Console:
  • Configure Key-ID for Access Token using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure Key-ID for Access Token using CNC Console: Perform the feature configurations as described in NF Access Token Options.

Observe

For more information on Key-ID for Access Token metrics and KPIs, see NF Access token Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.29 Access Token Request Authorization

NRF follows and supports the 3GPP TS 29.510 based verification of Access Token Authorization requests for a specific NF producer based on the allowed NFType and PLMN present in the NFProfiles. An extension to this requirement is to include screening of Access Token requests based on NFType.

NRF plays a major role as an OAuth2.0 Authorization server in 5G Service based architecture. When an NF service Consumer needs to access the services of an NF producer of a particular NFType and NFInstanceId, it obtains an OAuth2 access token from the NRF. NRF performs the required authorization, and if the authorization is successful, a token is issued with the requested claims. NRF provides an option to the user to specify the authorization of the Producer-Consumer NF Types along with the producer NF's services.

The operator can configure the mapping of the Requester NFType, Target NFType, and the allowed services of the Target NF. A received Access Token request is validated against this configuration, and the request is processed further only if the authorization is successful. Allowed Services can be configured as a single wildcard '*', which denotes that all the Target NF's services are allowed for the consumer NF. The operator can also configure the HTTP status code and error description that are used in the error response sent by NRF when the Access Token request is rejected.

Note:

When the Access Token Authorization feature is enabled, the requester and the target NFs are expected to be registered in NRF for the validation. So, if the targetNfType is not specifically mentioned in the request, the targetNfType is extracted from the registered profile in the database using the targetNfInstanceId. Similarly, if the requesterNfInstanceId is present in the request, the requesterNfType is extracted from the registered profile.

The Access Token configurable attribute "logicalOperatorForScope" is used while authorizing the services in the Access Token Request's scope against the allowed services in the configuration. If logicalOperatorForScope is set to "OR", at least one of the services in the scope must be present in the allowed services. If it is set to "AND", all the services in the scope must be present in the allowed services.

The authFeatureConfig attribute under nfAccessTokenOptions provides the support required to use NRF Access Token Request Authorization Feature. For more details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
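
For illustration, an authorization rule consistent with this description might be sketched as follows. The attributes authFeatureConfig and logicalOperatorForScope are named in this section, while the rule-list structure and its field names are hypothetical; check the exact schema in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

{
  "authFeatureConfig": {
    "featureStatus": "ENABLED",
    "logicalOperatorForScope": "OR",
    "authorizationRules": [
      {
        "requesterNfType": "SMF",
        "targetNfType": "UDM",
        "allowedServices": ["nudm-sdm", "nudm-uecm"]
      }
    ]
  }
}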

When the authFeatureConfig attribute is ENABLED, the nfType validation is performed as follows:
  • targetNfType and requesterNfType are matched with the nfTypes used for access token generation. This configuration overrides the nfType validation against the allowedNfTypes in nfProfile or nfServices.
  • If the above mentioned validation is not met, then:
    • requesterNfType from the accessToken request is validated against the allowedNfTypes present in the producer NF's nfServices, if present.
    • requesterNfType from the accessToken request is validated against the allowedNfTypes from the producer nfProfile, if allowedNfTypes is not present in the nfServices.

Managing Access Token Request Authorization Feature

Enable

You can enable the Access Token Request Authorization Feature using the REST API or CNC Console.

  • Enable using REST API: Set featureStatus to ENABLED in NF AccessToken Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the NF Access Token Options page. For more information about enabling the feature using CNC Console, see NF Access Token Options.

Configure

You can configure the Access Token Request Authorization feature using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

    With nfAccessTokenOptions API, authFeatureConfig attribute provides the support required to use NRF Access Token Request Authorization Feature. For more details, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.

  • Configure using CNC Console: Perform the feature configurations as described in NF Access Token Options.

Observe

Following are the Access Token Request Authorization feature specific metrics:
  • ocnrf_accessToken_rx_requests_total
  • ocnrf_accessToken_tx_responses_total

For more information on Access Token Request Authorization metrics and KPIs, see NF Access token Metrics and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

In case the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.30 Preferred Locality Feature Set

The Preferred Locality Feature Set comprises the following features:
  • Preferred Locality
  • Extended Preferred Locality with Location or Location Sets
  • Limiting the Number of Producers Based on NF Set Ids
For more information about these features and their processing, see the following sections.

4.30.1 Preferred Locality

When the consumer NF sends the discovery query with the additional attribute "preferred-locality", the Preferred Locality feature is applied. By default, such discovery queries are processed as follows:
  1. NRF searches and collects all the NFs profiles matching the search criteria sent in the discovery query except the "preferred-locality" attribute.
  2. NF profiles collected in the above step are arranged as per the "preferred-locality". The NFs matching the "preferred-locality" are arranged in increasing order of their priority, and the NF profiles that do not match the "preferred-locality" are then placed in increasing order of their post-processed priority value. Here, the post-processed priority value is computed by adding the highest NF priority among the NFs matching the "preferred-locality" to the priority value of the non-preferred NF profile, plus an additional increment offset of 1.

    For example: if the highest NF priority value among the NFs matching the "preferred-locality" is 10 and the priority value of a non-preferred NF profile is 5, then after post-processing the priority value of the non-preferred NF profile is 16 (where 10 is the highest priority value of the NF profiles matching the "preferred-locality", 5 is the priority of the non-preferred NF, and 1 is the offset value).

    For a service-name based discovery query, if there is a single service in the NF profile after discovery processing, the service level priority is updated.

    For a service-name based discovery query, if there are multiple services with the same service name in the NF profile after discovery processing, the service level priority is updated along with the priority value at the NF profile level.

Managing Preferred Locality feature

Enable

Enabling the Preferred Locality feature: This feature is enabled by default as per 3GPP TS 29.510. There is no option to enable or disable it explicitly.

Observe

There are no specific metrics and alerts for this feature.

For the entire list of NRF metrics and alerts, see NRF Metrics and NRF Alerts section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.30.2 Extended Preferred Locality

The limitation of the default "Preferred Locality" feature is that it does not provide a mechanism to select matching producers from more than one "preferred-locality", because the "preferred-locality" attribute allows only one location to be provided in the discovery query.

To overcome this limitation, the "Extended Preferred Locality" feature is implemented, where a preferred locality triplet, that is, a collection of primary, secondary, and tertiary preferred localities, is associated with the given "preferred-locality" and "target-nf-type". The "Primary Preferred Locality", "Secondary Preferred Locality", and "Tertiary Preferred Locality" in the "Preferred Locality Triplet" can be configured with a single location or a location set. That is, the "Location" attribute under "Target Preferred Locations" can be an individual "Location" or a "Location Set Name" configured under the "Location Sets". You can configure a maximum of 255 location sets, and each location set can have up to 10 locations.

Limiting the Number of Producers Based on NF Set Ids

"Limiting the Number of Producers Based on NF Set Ids" feature is a supplement to the "Extended Preferred Locality" feature.

In the Extended Preferred Locality feature, if many producers match in each location (primary, secondary, tertiary), NRF potentially returns all the matching producer NFs. For example, if each location has 10 matching producers, NRF ends up sending 30 producers in the discovery response. Returning all 30 producers makes the message very long and does not use the network resources efficiently.

To enhance the producer selection, NRF supports limiting the number of producers by using nfSetIdList as follows (see the sketch after the note below):
  • After the NF profile selection is done based on the extended preferred locality logic, only a limited number of NF profiles are selected from the first matching location, based on the configuration attribute "Maximum number of NF Profiles from First Matching Location". The first matching location is the location in the Preferred Locality Triplet where the first set of matching producers is found.
  • From the remaining locations, only those producers are shortlisted whose nfSetIdList matches the NF Set Ids of the first matching location's producers (after the limiting is applied), because these producers can be used as alternate producers during failover scenarios.

In the following scenarios, NRF falls back from the "Limiting the Number of Producers Based on NF Set Ids" feature to the "Extended Preferred Locality" feature:

  • If the value of the Maximum number of NF Profiles from First Matching Location attribute is configured as 0.
  • If, after limiting the profiles, any of the NF profiles from the first matching location does not have the nfSetIdList attribute (that is, the profiles are registered without NF Set Ids).

Note:

  • It is recommended to enable this feature only if NFs are registered with the nfSetIdList attribute.
  • In upgrade scenarios, if an extended preferred locality is configured, then for each preferred locality entry the value of the Maximum number of NF Profiles from First Matching Location attribute becomes 0, which disables the feature by default.
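
A minimal sketch of the limiting and fallback logic described above (illustrative Python; the data layout and function name are assumptions, not NRF source code):

def limit_by_nf_set_ids(locations, max_from_first):
    """locations: ordered lists of NF profiles (primary, secondary,
    tertiary, ...) already arranged by the extended preferred locality
    logic. Returns the limited profile list, or None to signal fallback
    to plain Extended Preferred Locality."""
    if max_from_first == 0:
        return None  # attribute configured as 0: feature disabled, fall back

    # The first matching location is the first one with any producers.
    idx = next((i for i, loc in enumerate(locations) if loc), None)
    if idx is None:
        return []  # no matching producers anywhere

    selected = locations[idx][:max_from_first]
    if any(not p.get("nfSetIdList") for p in selected):
        return None  # a profile registered without NF Set Ids: fall back

    allowed = {s for p in selected for s in p["nfSetIdList"]}

    # From the remaining locations, keep only alternate producers that
    # share an NF Set Id with the selected producers (failover candidates).
    rest = [p for loc in locations[idx + 1:] for p in loc
            if allowed & set(p.get("nfSetIdList") or [])]
    return selected + rest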

Managing Extended Preferred Locality and Limiting the Number of Producers Based on NF Set Ids

Enable

To enable the features:

  • Enable using REST API:
    • Enabling Extended Preferred Locality feature: Set featureStatus to ENABLED in NF Discovery Options configuration API. For more information about the API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Enabling Limiting the Number of Producers Based on NF Set Ids feature: Set the value of the maxNFProfilesFromFirstMatchLoc field to a value greater than 0 in NF Discovery Options configuration API. For more information about the API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console:
    • Enabling Extended Preferred Locality feature: Set Feature Status to ENABLED on the NF Discovery Options page. For more information about enabling the feature using CNC Console, see NF Discovery Options.
    • Enabling Limiting the Number of Producers Based on NF Set Ids feature: Set the value of Maximum number of NF Profiles from First Matching Location attribute to a number greater than 0 under Preferred Location Details on the NF Discovery Options page. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Note:

The Extended Preferred Locality feature must be enabled before enabling the Limiting the Number of Producers Based on NF Set Ids feature.

Configure

You can configure the Extended Preferred Locality and Limiting the Number of Producers Based on NF Set Ids features using REST API or CNC Console:
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in NF Discovery Options.

Observe

The following metrics are added for the Limiting the Number of Producers Based on NF Set Ids feature:
  • ocnrf_nfDiscover_limiting_profile_count_for_nfSet_total
  • ocnrf_nfDiscover_limiting_profiles_not_applied_for_nfSet_total

For more information on metrics, see NRF Metrics section.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.30.2.1 Discovery Processing with Preferred Locality Feature Set

A discovery query is processed with respect to the Preferred Locality Feature Set as follows:

  1. NRF searches and collects all the NF profiles matching the search criteria sent in the discovery query, except the "preferred-locality" attribute.
  2. The NFs collected in the above step are then arranged as per the "Primary Preferred Locality", "Secondary Preferred Locality", and "Tertiary Preferred Locality" in the "Preferred Locality Triplet", followed by the NFs that are identified in step 1 but do not match any of the configured locations.
  3. The NFs in each "Preferred Locality Triplet" are arranged based on their priority. For NFs having the same priority, the load is used for tie-breaking. If the loads are also the same, the NFs are arranged randomly (see the sketch after this list).
  4. Additionally, the priorities of NFs falling in the secondary, tertiary, and remaining locations are also updated based on the same principle of post-processed priority as defined in the "Preferred Locality" feature.

    Note:

    "Preferred Locality" feature is enabled by default.

    When "Extended Preferred Locality" feature is enabled using the feature flag, and in case, the required configuration to match the received "preferred-locality" and "target-nf- type" is not found, NRF will fallback to the "Preferred Locality" feature.

  5. Locations or a set of locations can be defined in the "Preferred Locality Triplet" for the discovery search query. You can configure a maximum of 255 location sets, and each location set can have up to 10 locations.
  6. When the attribute Maximum number of NF Profiles from First Matching Location is configured to a value greater than 0, NRF allows the consumer NF to select the top "n" producers from the first matching preferred locality. From the remaining preferred localities, only those producers are selected that have an nfSetIdList matching the top "n" producers selected from the first matching preferred locality.
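
The ordering within each location described in step 3 can be sketched as follows (illustrative Python; the profile fields are assumptions, not NRF source code):

import random

def arrange_within_location(profiles):
    """Ascending priority; equal priorities broken by ascending load;
    profiles with equal priority and load are ordered randomly."""
    return sorted(profiles,
                  key=lambda p: (p["priority"], p["load"], random.random()))
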
4.30.2.2 Configuring Preferred Locality Feature Set

According to 3GPP TS 29.510 V15.5.0, consumer NFs can send discovery query with preferred-locality, along with requester-nf-type and target-nf-type.

Note:

Only "preferred-locality" and "target-nf- type" attributes are considered for this feature, other attributes in the search query are not applicable.
  1. Configure the location types as per the network deployment. Location types can be configured using locationTypes attribute.

    For Example: Category-S, Category-N, and so on.

    Sample location types attribute value:

    "locationTypes": ["Category-S", "Category-N", "Category-x"]

    Note:

    • A maximum of 25 location types can be configured.
    • A location type can have a minimum of 3 and a maximum of 36 characters. It can contain only alphanumeric characters and the special characters '-' and '_'. It cannot start or end with a special character.
    • Duplicate location types cannot be configured in this attribute.
    • locationTypes are case-sensitive, that is, Category-x and Category-X are different.
  2. Configure NF types (along with different services) under the locationTypeMapping attribute. This attribute allows the operator to create a mapping between different nfType (along with nfServices) values and the locationTypes (location types are already configured in step 1 above).

    For Example: PCF is the NF type, am_policy, bdt_policy are NF services and Category-x is the location type.

    Sample output:
    
    "locationTypeMapping": [{
          "nfType": "PCF",
          "nfServices": ["am_policy", "bdt_policy"],
          "locationType": "Category-x"
        }],

    Note:

    • Configure the nfType attribute to map it with the "target-nf-type" attribute in the incoming discovery query.
    • The nfServices attribute of this configuration is used when the "service-names" query parameter is present in the discovery query. Different nfServices along with nfType can be mapped to different location types.
    • If nfServices is not required to be mapped with any particular location type, configure the value of nfServices as '*'. This indicates that the locationType maps to all NF services of that "target-nf-type".
    • The '*' value record for "target-nf-type" is used when no configuration is found corresponding to the service-names attribute of the discovery query.
    • If the "service-names" attribute is unavailable in the discovery query, the '*' value is used for locationType selection.
    • The "service-names" of the discovery search query can be a subset of the configured nfServices attribute.
      For Example:
      • If sm_policy is the only service name present in the discovery query and the configured values for nfServices are sm_policy and bdt_policy, then this configuration is used for locationType selection.
      • If am_policy and nudr_policy are the service names present in the discovery query and the configured values for nfServices are am_policy, nudr_policy, and xyz_policy, then this configuration is used for locationType selection.
    • The same nfServices cannot be mapped to different location types.

      For example:

      If one locationType, say Category-N, is already mapped to nfServices with the value 'sm_policy' for nfType 'PCF', then sm_policy (individually or within a group of NF services) cannot be mapped to another location type for 'PCF'.

    • If the service-names attribute of the search query has multiple NF services, but the configured nfServices entries (individually or in groups) map them to different locationTypes, then the '*' value record for target-nf-type is used for location type selection (see the sketch after this procedure).
    • Maximum 100 locationTypeMapping values can be configured.
    Sample locationTypeMapping attribute value
    
    "locationTypeMapping": [{
          "nfType": "PCF",
          "nfServices": ["am_policy", "bdt_policy"],
          "locationType": "Category-x"
        },
        {
          "nfType": "PCF",
          "nfServices": ["sm_policy"],
          "locationType": "Category-N"
        },
        {
           "nfType": "PCF",
           "nfServices": ["*"],
           "locationType": "Category-S"
        },
        {
           "nfType": "AMF",
           "nfServices": ["*"],
           "locationType": "Category-S"
         }
       ]
  3. Configure the preferredLocationDetails corresponding to the selected locationType and the preferred-locality. The preferredLocation attribute (mapped from the preferred-locality derived from the discovery search query), together with the locationType selected from the locationTypeMapping attribute, maps to a preferredLocationDetails entry.

    Note:

    • A maximum of 650 preferredLocationDetails values can be configured.
    • Different priorities can be configured for the preferred locations.
    • The preferredLocation attribute of this configuration is mapped to the preferred-locality (derived from the discovery search query).
    • The targetLocationType attribute of this configuration is mapped to the locationType selected from the locationTypeMapping attribute.
    • Corresponding to the preferredLocation and targetLocationType attributes, targetPreferredLocations can be configured. targetPreferredLocations are defined by the operator.
    • The targetPreferredLocations attribute can have up to 3 target preferred locations.
    • A priority is assigned to each targetPreferredLocations entry. Different targetPreferredLocations entries cannot have the same priority.
    • A targetPreferredLocations value can be the same as the preferredLocation attribute value.
    • If the location attribute in targetPreferredLocations is a set of locations, configure the locationSets attribute:
      • A maximum of 255 locationSets can be configured.
      • The length of a location name can be in the range of 5 to 100 characters.
      • A maximum of 10 locations can be configured in each location set.
    Sample preferredLocationDetails attribute value
    "preferredLocationDetails": [{
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-x",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "Azusa"
                    }, {
                        "priority": 2,
                        "location": "Vista"
                    }, {
                        "priority": 3,
                        "location": "Ohio"
                    }]
                },
                {
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-y",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "RKL"
                    }, {
                        "priority": 2,
                        "location": "CSP"
                    }, {
                        "priority": 3,
                        "location": "West-Region-Edge-Set01"
                    }]
                }
            ],
            "locationSets": [                                   
                {
                    "locationSetName" : "West-Region-Edge-Set01",
                    "locations": ["LA", "SFO"]                        
                }
            ]
  4. Configure the maxNFProfilesFromFirstMatchLoc attribute in the preferredLocationDetails to limit the number of producers based on NF Set Ids from the first matching location in a preferred locality triplet. If the value of maxNFProfilesFromFirstMatchLoc attribute is greater than 0, then the feature is enabled.

    Sample output:

    "maxNFProfilesFromFirstMatchLoc":0, indicates feature is disabled.

    "preferredLocationDetails": [{
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-x",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "Azusa"
                    }, {
                        "priority": 2,
                        "location": "Vista"
                    }, {
                        "priority": 3,
                        "location": "Ohio"
                    }]
                },
                {
                    "preferredLocation": "Azusa",
                    "targetLocationType": "Category-y",
                    "maxNFProfilesFromFirstMatchLoc":0,
                    "targetPreferredLocations": [{
                        "priority": 1,
                        "location": "RKL"
                    }, {
                        "priority": 2,
                        "location": "CSP"
                    }, {
                        "priority": 3,
                        "location": "West-Region-Edge-Set01"
                    }]
                }
            ],
            "locationSets": [                                   
                {
                    "locationSetName" : "West-Region-Edge-Set01",
                    "locations": ["LA", "SFO"]                        
                }
            ]
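
The locationType selection rules from step 2 can be summarized in the following minimal sketch (illustrative Python; the function name and data layout are assumptions, not NRF source code):

def select_location_type(mappings, target_nf_type, service_names=None):
    """Pick the locationType for a discovery query from the configured
    locationTypeMapping entries."""
    candidates = [m for m in mappings if m["nfType"] == target_nf_type]
    if service_names:
        requested = set(service_names)
        for m in candidates:
            # The query's service-names may be a subset of the configured
            # nfServices for the mapping to apply.
            if m["nfServices"] != ["*"] and requested <= set(m["nfServices"]):
                return m["locationType"]
    # No service-names in the query, or no single record covering all the
    # requested services: use the '*' record for the target-nf-type, if any.
    for m in candidates:
        if m["nfServices"] == ["*"]:
            return m["locationType"]
    return None  # no configuration found: fall back to Preferred Locality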

4.31 Roaming Support

NRF supports the 3GPP-defined inter-PLMN routing for NRF-specific service operations such as NFDiscover, AccessToken, NFStatusSubscribe, NFStatusUnSubscribe, and NFStatusNotify. To serve 5G subscribers roaming in a non-home network, also known as a visited or serving network, consumer network functions in the visited or serving network need to access the NF profiles located in the home network of the 5G subscribers.

In this process, the consumer NFs send the NRF-specific service operations towards the NRF in the visited or serving network. The visited or serving NRF then routes these service operations towards the home NRF through the SEPPs in the visited or serving and home networks. NFDiscover, AccessToken, NFStatusSubscribe, and NFStatusUnSubscribe service operations are routed from the visited or serving NRF to the home NRF. The NFStatusNotify service operation is initiated by the home network NRF towards consumer NFs residing in the visited or serving network for inter-PLMN specific subscriptions.

Note:

XFCC specific validations are not supported for inter-PLMN service operations.

3GPP-specific attributes defined for the different service operations play a deciding role during inter-PLMN message routing. The role decision is illustrated in the sketch after Table 4-10.

There are two important terms used in the roaming mechanism:
  • vNRF - the visited or serving NRF when subscribers are roaming in a non-home network.
  • hNRF - the NRF in the home network of the subscribers.

Table 4-10 3GPP Attributes

Attribute Name Service Operation Details
requester-plmn-list NFDiscover

If the requester-plmn-list matches the NRF PLMN list, the NRF functions as the vNRF.

If the requester-plmn-list does not match the NRF PLMN list, the NRF functions as the hNRF.

target-plmn-list NFDiscover

When an NRF is vNRF, the target-plmn-list becomes a mandatory attribute to decide the target PLMN.

In case the NRF is the hNRF, this value is optional, but if it is present, the value must match the hNRF PLMN.

requesterPlmnList AccessToken

If the requesterPlmnList matches the NRF PLMN list, the NRF functions as the vNRF.

If the requesterPlmnList does not match the NRF PLMN list, the NRF functions as the hNRF.

requesterPlmn AccessToken

If the requesterPlmn matches the NRF PLMN list, the NRF functions as the vNRF.

If the requesterPlmn does not match the NRF PLMN list, the NRF functions as the hNRF.

If both the requesterPlmnList and requesterPlmn attributes are present, the combined PLMN values are used.

targetPlmn AccessToken

When an NRF is vNRF, the targetPlmn is considered to decide the target PLMN.

In case the NRF is hNRF, this value must match with the hNRF PLMN.

nfType AccessToken

When an NRF is the hNRF, this value is used for the NRF NfAccessToken Authorization feature.

If this value is not present, the User-Agent header is used, which carries the 3GPP-defined nfType. If this header is also not present, see the userAgentMandatory attribute for more details.

reqPlmnList NFStatusSubscribe

If the reqPlmnList matches the NRF PLMN list, the NRF functions as the vNRF.

If the reqPlmnList does not match the NRF PLMN list, the NRF functions as the hNRF.

nfStatusNotificationURI NFStatusSubscribe

If the reqPlmnList attribute is not present in the subscription data, NRF checks whether nfStatusNotificationURI is in the inter-PLMN format, that is, the 5gc.mnc(\d\d\d).mcc(\d\d\d).3gppnetwork.org format.

Sample: 5gc.mnc310.mcc314.3gppnetwork.org

If nfStatusNotificationURI is in this format, it is used to determine the role of the NRF.

However, if reqPlmnList is present, this attribute still has to be in the inter-PLMN format. This helps when the hNRF generates the notification towards the visited or serving network.

plmnId NFStatusSubscribe

When an NRF is the vNRF, the plmnId is considered to decide the target PLMN.

In case the NRF is the hNRF, this value is optional, but if it is present, the value must match the hNRF PLMN.

subscriptionId

NFStatusSubscribe,

NFStatusUnSubscribe

The subscriptionId also plays an important role in roaming cases. A subscription ID generated by the hNRF is prefixed with the 'roam' keyword. This helps to identify the inter-PLMN request for the subsequent service operations NFStatusSubscribe (Update) and NFStatusUnSubscribe.
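
The role decision described in Table 4-10 can be sketched as follows (illustrative Python; the PLMN representation, function name, and the fallback for intra-PLMN URIs are assumptions, not NRF source code):

import re

INTER_PLMN_FQDN = re.compile(r"5gc\.mnc(\d{3})\.mcc(\d{3})\.3gppnetwork\.org")

def nrf_role(nrf_plmn_list, requester_plmn_list=None, notification_uri=None):
    """Decide whether this NRF acts as vNRF or hNRF for a request."""
    if requester_plmn_list:
        # Any requester PLMN matching the NRF's own PLMN list => vNRF.
        if any(plmn in nrf_plmn_list for plmn in requester_plmn_list):
            return "vNRF"
        return "hNRF"
    # Requester PLMN list absent: derive the role from the notification
    # URI when it is in the inter-PLMN format.
    match = INTER_PLMN_FQDN.search(notification_uri or "")
    if match:
        uri_plmn = {"mnc": match.group(1), "mcc": match.group(2)}
        return "vNRF" if uri_plmn in nrf_plmn_list else "hNRF"
    return None  # not an inter-PLMN request

# Example (hypothetical PLMN representation):
# nrf_role([{"mcc": "314", "mnc": "310"}],
#          notification_uri="https://cb.5gc.mnc310.mcc314.3gppnetwork.org/notify")
# returns "vNRF".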

Enable

You can enable the Roaming Options feature using REST API or CNC Console.
  • Enable using REST API: Set featureStatus to ENABLED in Roaming Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED on the Roaming Options page. For more information about enabling the feature using CNC Console, see Roaming Options.

Note:

Before enabling the featureStatus attribute, ensure that the corresponding configurations are completed.

Configure

You can configure the Roaming Options using REST API or CNC Console:
  • Configure NRF Roaming Options using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/peerconfiguration. Update host with SEPP host.
      curl -v -X PUT "http://10.75.226.126:30747/nrf/nf-common-component/v1/egw/peerconfiguration" -H  "Content-Type: application/json"  -d @peer.json
       
      peer.json sample:
      [
        {
          "id": "peer1",
          "host": "sepp-stub-service",
          "port": "8080",
          "apiPrefix": "/"
        }
      ]
      
    • Create or update the {apiRoot}/nrf/nf-common-component/v1/egw/peersetconfiguration to assign these peers.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerrorcriteriasets.
    • Create or update {apiRoot}/nrf/nf-common-component/v1/egw/sbiroutingerroractionsets.
    • Update the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration as mentioned below:
      curl -v -X PUT "http://10.75.226.126:32247/nrf/nf-common-component/v1/egw/routesconfiguration" -H  "Content-Type: application/json"  -d @header.json
       
      sample header.json:
      [
          {
              "id":"egress_sepp_proxy1",
              "uri": "http://localhost:32068/",
              "order": 0,
              "sbiRoutingConfiguration": {
                  "enabled": true,
                  "peerSetIdentifier": "set0"
              },
              "httpRuriOnly": true,
              "httpsTargetOnly": true,
              "predicates": [{
                  "args": {
                      "header": "OC-MCCMNC",
                      "regexp": "310014"
                  },
                  "name": "Header"
              }],
              "filters": [{
                  "name": "SbiRouting"
              }, {
                  "args": {
                      "retries": "3",
                      "methods": "GET, POST, PUT, DELETE, PATCH",
                      "statuses": "BAD_REQUEST, INTERNAL_SERVER_ERROR, BAD_GATEWAY, NOT_FOUND, GATEWAY_TIMEOUT",
                      "exceptions": "java.util.concurrent.TimeoutException,java.net.ConnectException,java.net.UnknownHostException"
                  },
                  "name": "SBIReroute"
              },{
                  "args": {
                      "name": "OC-MCCMNC"
                  },
                  "name": "RemoveRequestHeader"
              }],
              "metadata": {
              }
          },
          {"id":"default_route","uri":"egress://request.uri","order":100,"predicates":[{"args":{"pattern":"/**"},"name":"Path"}]}
      ]
  • Configure NRF Roaming Options using CNC Console: Perform the feature configurations as described in Roaming Options.

Observe

Following are the NRF Roaming Options specific metrics filters:
  • ocnrf_roaming_nfStatusSubscribe_rx_requests_total
  • ocnrf_roaming_nfStatusSubscribe_tx_responses_total
  • ocnrf_roaming_nfStatusSubscribe_tx_requests_total
  • ocnrf_roaming_nfStatusSubscribe_rx_responses_total
  • ocnrf_roaming_nfStatusUnSubscribe_rx_requests_total
  • ocnrf_roaming_nfStatusUnSubscribe_tx_responses_total
  • ocnrf_roaming_nfStatusUnSubscribe_tx_requests_total
  • ocnrf_roaming_nfStatusUnSubscribe_rx_responses_total
  • ocnrf_roaming_nfDiscover_rx_requests_total
  • ocnrf_roaming_nfDiscover_tx_responses_total
  • ocnrf_roaming_nfDiscover_tx_requests_total
  • ocnrf_roaming_nfDiscover_rx_responses_total
  • ocnrf_roaming_accessToken_rx_requests_total
  • ocnrf_roaming_accessToken_tx_responses_total
  • ocnrf_roaming_accessToken_tx_requests_total
  • ocnrf_roaming_accessToken_rx_responses_total
  • ocnrf_roaming_nfStatusNotify_tx_requests_total
  • ocnrf_roaming_nfStatusNotify_rx_responses_total
  • ocnrf_roaming_jetty_latency_seconds

For more information on NRF Roaming Options metrics and KPIs, see Roaming Support Metrics, and NRF KPIs sections.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.32 EmptyList in Discovery Response

NRF checks the NFStatus of the target-nf-type in the NRF state data. It identifies whether the current status of the matching NFProfile is SUSPENDED.

If the NFStatus of all matching profiles is SUSPENDED, NRF modifies the NFStatus of these profiles to "REGISTERED". The modified NFStatus is sent in the discovery response with a shorter validity period so that a call can be established. Upon expiry of the validityPeriod, the requester NF must rediscover the target NF.

Note:

The change in the NFStatus of the target-nf-type to REGISTERED is not stored in the database.

In case forwarding is ENABLED for a discovery request, NRF forwards the request to identify whether there are any matching profiles in another region (see the sketch after this list):
  • If matching profiles are found, those profiles are sent in the discovery response.
  • If the NFStatus of all the matching profiles is in the SUSPENDED state, these profiles are sent in the discovery response as REGISTERED with a shorter validity period.
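
A minimal sketch of this behavior (illustrative Python; the profile fields, function name, and validity handling are assumptions, not NRF source code):

def apply_empty_list(matching_profiles, short_validity, normal_validity):
    """If every matching profile is SUSPENDED, present them as REGISTERED
    with a shorter validity period; the change is never written back to
    the database, and the requester must rediscover on expiry."""
    if matching_profiles and all(p["nfStatus"] == "SUSPENDED"
                                 for p in matching_profiles):
        modified = [dict(p, nfStatus="REGISTERED") for p in matching_profiles]
        return modified, short_validity
    return matching_profiles, normal_validity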

Managing EmptyList Feature

Enable
You can enable the EmptyList feature using the REST API or CNC Console.
  • Enable using REST API: Set emptyListFeatureStatus to ENABLED in nfDiscovery Options configuration API. Also, set featureStatus to ENABLED to configure EmptyList for a particular nfType. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED for Empty Discovery Response. Also, set Feature Status to ENABLED to configure EmptyList for a particular nfType. For more information about enabling the feature using CNC Console, see NF Discovery Options.

Observe

Metrics

ocnrf_nfDiscover_emptyList_total metric is added for the EmptyList feature.

Alert

OcnrfNFDiscoveryEmptyListObservedNotification alert is added for the EmptyList feature.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.33 Overload Control Based on Percentage Discards

The Overload Control feature protects the system from overload and maintains the overall health of NRF. The system detects overload conditions, protects itself, mitigates and avoids entering an overload condition, and takes the necessary actions to recover from overload.

NRF provides the following means for overload management:

  • Predefined threshold load levels
  • Tracking the number of pending messages
  • Tracking CPU and memory usage
  • Enforcing load shedding during various overload levels
Perf-Info performs overload calculations based on the following indicators:
  • CPU Utilization
  • Memory Utilization
  • Pending Message Count
  • Failure Count

The overload level is configured for the following NRF microservices:

  • Registration
  • Discovery
  • AccessToken

The Overload Manager module in Perf-Info is configured or updated with the threshold values for the services. A configurable sampling interval is available as ocPolicyMapping.samplingPeriod, based on which Ingress Gateway calculates the rate per service in the current sampling period and applies the appropriate discard policies and actions in the subsequent sampling period.

Overload Manager triggers the Rate Calculator to start calculating the rate of incoming requests per service per sampling period. At the end of every sampling period, the Overload Manager filter in Ingress Gateway receives a notification event per service with the calculated rates. It applies the appropriate configured discard policy for a particular service based on the rate of requests.

Ingress Gateway calculates the number of requests to be dropped in the current sampling period based on the configured percentage discard.

The overload thresholds for each service are evaluated based on four metrics, namely cpu, svc_failure_count, svc_pending_count, and memory. Overload control is triggered if the threshold for any one metric is reached.
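
The threshold evaluation and the percentage discard calculation can be sketched as follows (illustrative Python; the function names are assumptions, not NRF source code):

def overload_triggered(metrics, thresholds):
    """Overload control is triggered when ANY one of the four indicators
    crosses its threshold."""
    return any(metrics[name] >= thresholds[name]
               for name in ("cpu", "memory",
                            "svc_pending_count", "svc_failure_count"))

def requests_to_discard(rate_previous_period, discard_percentage):
    """Requests dropped in the current sampling period, from the rate
    calculated over the previous period and the configured percentage."""
    return int(rate_previous_period * discard_percentage / 100)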

Managing Overload Control Feature

Enable
You can enable the Overload Control feature using the following Helm configuration:
  1. Open the ocnrf-custom-values-23.4.6.yaml file.
  2. Set global.performanceServiceEnable parameter to true in ocnrf-custom-values-23.4.6.yaml file.
  3. Set perf-info.overloadManager.enabled parameter to true in ocnrf-custom-values-23.4.6.yaml file.
  4. Configure the Prometheus URI in perf-info.configmapPerformance.prometheus in ocnrf-custom-values-23.4.6.yaml file.
  5. Save the ocnrf-custom-values-23.4.6.yaml file.
  6. Ensure that the autoscaling and apps apiGroups are configured under service account. For more information on configuration, see "Creating Service Account, Role and Role Binding Resources" section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  7. Run helm upgrade, if you are enabling this feature after NRF deployment. For more information on upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
Configure
  • Configure using REST API: The Overload Control feature related configurations are performed at Ingress Gateway and Perf-Info.
    The following REST APIs must be configured for this feature:
    • {apiRoot}/nrf/nf-common-component/v1/perfinfo/overloadLevelThreshold
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeprofiles
    • {apiRoot}/nrf/nf-common-component/v1/igw/ocdiscardpolicies
    • {apiRoot}/nrf/nf-common-component/v1/igw/ocpolicymapping
    • {apiRoot}/nrf/nf-common-component/v1/igw/errorcodeserieslist
    • {apiRoot}/nrf/nf-common-component/v1/igw/routesconfiguration
    For more information about the APIs, see the "Common Services REST APIs" section in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: There are no CNC Console configurations for this feature.

Disable

  1. Use {apiRoot}/nrf/nf-common-component/v1/igw/ocpolicymapping API and set enabled to false as follows:
    {
        "enabled": false,
        "mappings": [
            {
                "svcName": "ocnrf-nfdiscovery",
                "policyName": "nfdiscoveryPolicy"
            },
            {
                "svcName": "ocnrf-nfaccesstoken",
                "policyName": "nfaccesstokenPolicy"
            },
            {
                "svcName": "ocnrf-nfregistration",
                "policyName": "nfregistrationPolicy"
            }
        ],
        "samplingPeriod": 6000
    }
  2. Open the ocnrf-custom-values-23.4.6.yaml file.
  3. Set perf-info.overloadManager.enabled parameter to false in ocnrf-custom-values-23.4.6.yaml file.
  4. (Optional) Set global.performanceServiceEnable parameter to false in ocnrf-custom-values-23.4.6.yaml file.

    Note:

    Perform this step only if you want to disable complete Perf-Info service.
  5. Save the ocnrf-custom-values-23.4.6.yaml file.
  6. Run helm upgrade, if you are disabling this feature after NRF deployment. For more information on upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Observe

Metrics

No new metrics are added for the Overload Control feature.

Identifying Kubernetes Tag for Overload Control

The following procedure explains how to identify the tags for different Kubernetes versions:
  1. Log in to Prometheus.
  2. Enter the cgroup_cpu_nanoseconds query in Prometheus, as shown in Figure 4-16.

    Figure 4-16 Prometheus Search
  3. In the response, search for the tags that contain the values for the container name, the NRF deployment namespace, and the NRF service names, as shown in Figure 4-17.

    Figure 4-17 Tag Search
  4. Use the tags from the response to configure the following parameters under Perf-Info microservice:
    • tagNamespace
    • tagContainerName
    • tagServiceName

    For example, in Figure 4-17, namespace is the tag that contains the value for the NRF deployment namespace. You need to set namespace as the value for the tagNamespace parameter in the Perf-Info microservice. For more information on the parameters, see the Perf-Info section in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.34 DNS NAPTR Update

In 5G core networks, AMFs are added or removed dynamically for scalability or planned maintenance. NRF supports updating the Name Authority Pointer (NAPTR) records in the Domain Name System (DNS) during Access and Mobility Management Function (AMF) registration, update, and deregistration. The AMFs available within an AMF Set are provisioned as NAPTR records in the DNS.

Enable

You must enable Artisan microservice for DNS NAPTR Update as follows:
  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set enableNrfArtisanService to true to enable Artisan microservice.
  3. Save the file.
  4. Run helm install. For more information about installation procedure, see Oracle Communications Cloud Native Core Network Repository Function Installation and Upgrade Guide.
  5. If you are enabling this parameter after NRF deployment, run helm upgrade. For more information about upgrade procedure, see Oracle Communications Cloud Native Core Network Repository Function Installation and Upgrade Guide.
  6. Set the DNS configuration as described in the "DNS NAPTR Configuration in Alternate Route Service" section of Oracle Communications Cloud Native Core Network Repository Function REST Specification Guide.
  7. After enabling Artisan microservice, enable DNS NAPTR Update Options using the REST API or CNC Console:
    • Enable using REST API: Set featureStatus to ENABLED in dnsNAPTRUpdateOptions configuration API. For more information about API path, see Oracle Communications Cloud Native Core Network Repository Function REST Specification Guide.
    • Enable using CNC Console: Set Feature Status to ENABLED on the DNS NAPTR Update Options page. For more information about enabling the feature using CNC Console, see DNS NAPTR Update Options.

Configure

You can configure the DNS NAPTR Update Options feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in DNS NAPTR Update Options.

Observe

Metrics
Following metrics are added in the DNS NAPTR Update Metrics section:
  • ocnrf_dns_naptr_tx_requests_total
  • ocnrf_dns_naptr_rx_responses_total
  • ocnrf_dns_naptr_audit_tx_requests_total
  • ocnrf_dns_naptr_audit_rx_responses_total
  • ocnrf_dns_naptr_failure_rx_responses
  • ocnrf_dns_naptr_round_trip_time_seconds
  • ocnrf_dns_naptr_nfRegistration_tx_requests_total
  • ocnrf_dns_naptr_nfRegistration_rx_responses_total
  • ocnrf_dns_naptr_nrfAuditor_tx_requests_total
  • ocnrf_dns_naptr_nrfAuditor_rx_responses_total
  • ocnrf_dns_naptr_trigger_rx_requests_total
  • ocnrf_dns_naptr_trigger_tx_responses_total
  • oc_alternate_route_upstream_dns_request_timeout_total
Alert
Following alerts are added for the DNS NAPTR Update feature:

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.35 Notification Retry

NRF supports a notification retry mechanism for the following failure scenarios while sending the NfStatusNotify message. Based on the configured Notification Retry Profile, NRF attempts to resend the NfStatusNotify message on failures until the retry attempts are exhausted.

Following are the scenarios where retry can be performed:
  • 4xx, 5xx Response Codes from notification callback server of the NF
  • Connection Timeout between NRF Egress Gateway and notification callback server of the NF
  • Request Timeout at NRF Egress Gateway
When notification retry is enabled, the NRF Subscription microservice sends the NfStatusNotify message to the Egress Gateway. Upon failure of the request, the Egress Gateway retries towards the notification callback server of the NF depending on the configuration. The retries happen based on the response codes, the exception list, or the timeout configuration in nfManagementOptions, as illustrated in the sketch below.
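
A minimal sketch of the retry behavior (illustrative Python; the function names, retry budget handling, and backoff are assumptions, not NRF or Egress Gateway source code):

import time

def send_with_retries(send, message, max_retries, backoff_seconds=0.0):
    """Deliver an NfStatusNotify message, retrying on 4xx/5xx responses
    and on connection or request timeouts until the budget is exhausted."""
    for _attempt in range(max_retries + 1):
        try:
            status = send(message)  # stands in for the Egress Gateway call
            if status < 400:
                return status        # delivered (2xx/3xx)
        except TimeoutError:
            pass                     # connection or request timeout: retry
        if backoff_seconds:
            time.sleep(backoff_seconds)
    return None                      # retry attempts exhausted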

Note:

This is applicable only when direct routing is used for routing notification messages.

Managing Notification Retry Feature

Enable

You can enable the Notification Retry feature using the CNC Console or REST API.

  • Enable using REST API: Set requestRetryDetails.featureStatus to ENABLED in NF Management Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED under Notification Retry section on the NF Management Options page. For more information about enabling the feature using CNC Console, see the NF Management Options section.

Configuring DefaultRouteRetry in Egress Gateway

After NRF is upgraded from 23.3.x to 23.4.0, to enable the Notification Retry feature, the DefaultRouteRetry filter must be added to the Egress Gateway default route configuration.

Following is the sample command to configure DefaultRouteRetry:
curl -X PUT "http://ocnrf-nrfconfiguration:8080/nrf/nf-common-component/v1/egw/routesconfiguration" -H 'Content-Type:application/json' -d 
'[{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"filters": [{
		"name": "DefaultRouteRetry"
	}],
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}]'

Update the hostname and port of ocnrf-nrfconfiguration in the command to reflect the details used to access the NRF configuration service in your deployment.

Note:

To disable the notification retry, remove the DefaultRouteRetry filter from the {apiRoot}/nrf/nf-common-component/v1/egw/routesconfiguration, and retain the default_route configuration. This default_route configuration is available by default during fresh installation or upgrade. Following is the sample default route configuration:
{
	"id": "default_route",
	"uri": "egress://request.uri",
	"order": 100,
	"predicates": [{
		"args": {
			"pattern": "/**"
		},
		"name": "Path"
	}]
}

Configure

You can configure Notification Retry feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in the NF Management Options section.

Observe

Metrics
Following dimensions are added for ocnrf_nfStatusNotify_rx_responses_total metric in the NRF NF Metrics section:
  • NotificationHostPort
  • NumberOfRetriesAttempted

Alert

Following alerts are added for the Notification Retry feature:

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information about how to raise a service request.

4.36 NRF Message Feed

NRF supports copying both the request and response HTTP messages routed through the Ingress and Egress Gateways to a Data Director.

Data Director receives the messages from Gateways with a correlation-id and feeds the data securely to an external monitoring system.

The correlation-id in the messages is used as a unique identifier for every transaction. If the request does not have a correlation-id, the Gateway generates one and then passes it to the Data Director.

Upon receiving an Ingress Gateway request containing a correlation-id, the NRF microservices include this correlation-id in the following Egress Gateway requests that may be generated by NRF based on the Ingress Gateway transaction (see the sketch after this list):

  • Inter-PLMN
  • SLF
  • Forwarding
  • Notification
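
A minimal sketch of the correlation-id handling (illustrative Python; the header name and function are assumptions, not Gateway source code):

import uuid

def ensure_correlation_id(headers):
    """Return the transaction's unique identifier, generating one when
    the incoming request does not carry it (header name is illustrative)."""
    if "correlation-id" not in headers:
        headers["correlation-id"] = str(uuid.uuid4())
    return headers["correlation-id"]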

The communication between the Gateways and the Data Director is encrypted using TLS. Also, the Gateways authenticate themselves to the Data Director using the Simple Authentication and Security Layer (SASL). For more information on configuring SASL, see Configuring SASL.

Managing NRF Message Feed Feature

Enable

You can enable the Message Feed feature using Helm:

  1. Open the ocnrf_custom_values_23.4.6.yaml file.
  2. Set ingress-gateway.message-copy.enabled to true to enable message copying at the Ingress Gateway.
  3. Set egress-gateway.message-copy.enabled to true to enable message copying at the Egress Gateway.
  4. Save the file.
  5. Install NRF. For more information about installation procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. If you are enabling this parameter after NRF deployment, upgrade NRF. For more information about upgrade procedure, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.

    Note:

    If you enable Message Feed feature at Ingress Gateway and Egress Gateway, approximately 33% pod capacity is impacted.

Configure

There are no configurations to be performed using the REST API or CNC Console.

Configuring SASL

  1. Generate your SSL Certificates.

    Note:

    The creation process for private keys, certificates, and passwords is at the discretion of the user or operator.
  2. Before copying the certificates into the secret, add the DD Root certificates contents into the CA certificate (caroot.cer) generated for NRF as follows:

    Note:

    Make sure you add 8 hyphens ("-") between the two certificates.
    -----BEGIN CERTIFICATE-----
    <existing caroot-certificate content>
    -----END CERTIFICATE-----
    --------
    -----BEGIN CERTIFICATE-----
    <DD caroot-certificate content>
    -----END CERTIFICATE-----
  3. Create secrets for both the Ingress and Egress Gateways for authentication with the Data Director. To create a secret, store the password in a text file and use the same file to create the secret. Run the following commands to create the secrets:
    kubectl create secret generic ocingress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=rsa_private_key_pkcs1.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=caroot.cer --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=sasl.txt -n <namespace>
    
    kubectl create secret generic ocegress-secret --from-file=ssl_ecdsa_private_key.pem --from-file=ssl_rsa_private_key.pem --from-file=ssl_truststore.txt --from-file=ssl_keystore.txt --from-file=ssl_cabundle.crt --from-file=ssl_rsa_certificate.crt --from-file=ssl_ecdsa_certificate.crt --from-file=sasl.txt -n <namespace>
    For more information on creating secrets, see Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  4. Configure the SSL section as described in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  5. Configure the message copy feature as described in Oracle Communications Cloud Native Core, Network Repository Function Installation, Upgrade, and Fault Recovery Guide.
  6. Configure the SASL_SSL port in the kafka.bootstrapAddress attribute.

Observe

Metrics
Following metrics are added in the NRF Gateways Metrics section:
  • oc_ingressgateway_msgcopy_requests_total
  • oc_ingressgateway_msgcopy_response_total
  • oc_ingressgateway_dd_unreachable
  • oc_egressgateway_msgcopy_requests_total
  • oc_egressgateway_msgcopy_response_total
  • oc_egressgateway_dd_unreachable
Alert

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.37 Subscription Limit

In the 5G architecture, Network Functions (NFs) subscribe to be notified of a change in a producer NF profile in the network using the NFStatusSubscribe service operation. This subscription is managed by the NRF Management microservice and maintained in the database for a particular period. When there is a change in the producer NF profile, the NRF Subscription microservice triggers a notification to the consumer NFs.

If the number of subscriptions created is very high, it may trigger a huge number of notification requests. This might lead to an overload of the NRF Subscription microservice.

NRF restricts the number of allowed subscriptions to avoid an overload condition at the NRF Subscription microservice. NRF regulates the maximum number of allowed subscriptions in the database using a configurable Global Subscription Limit. In case of a georedundant NRF, the limit is applied across all the mated sites.

Note:

The subscription limit must be configured with the same value across all georedundant sites.

NRF evaluates the global subscription limit condition every time an NF tries to create a new subscription or update an existing one. If the limit is breached, new subscriptions are not created or renewed, and the subscription requests are rejected with the configured error code (see the sketch below). When the global subscription limit is breached, or is approaching the breach threshold, specific alerts are raised based on the threshold level.
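
A minimal sketch of the limit evaluation (illustrative Python; the function name and the rejection helper are assumptions, not NRF source code):

def subscription_allowed(active_count, global_limit):
    """Evaluated on every subscription create or renewal. In a
    georedundant deployment, active_count spans all mated sites."""
    return active_count < global_limit

# On a breach, NRF rejects the request with the configured error code
# (error_response and configured_error_code are hypothetical names):
# if not subscription_allowed(count, limit):
#     return error_response(configured_error_code)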

Managing NRF Subscription Limit Feature

Prerequisites

Following are the prerequisites to enable the feature:

  1. All the georedundant sites must be upgraded to the NRF 23.4.6 release.
  2. The replication link between all the georedundant sites must be up. The OcnrfDbReplicationStatusInactive alert indicates that the replication link is inactive. If this alert is raised in any of the sites, wait until it is cleared.
  3. Wait until all the subscription records are migrated. The OcnrfSubscriptionMigrationInProgressWarn and OcnrfSubscriptionMigrationInProgressCritical alerts indicate whether the migration is complete. If either alert is raised in any of the sites, wait until it is cleared.

Enable

You can enable the Subscription Limit feature using the CNC Console or REST API.
  • Enable using REST API: Set subscriptionLimit.featureStatus to ENABLED in NF Management Options configuration API. For more information about API path, see Oracle Communications Cloud Native Core Network Repository Function REST Specification Guide.
  • Enable using CNC Console: Set Feature Status to ENABLED under Subscription Limit section on the NF Management Options page. For more information about enabling the feature using CNC Console, see the NF Management Options section.

Configure

You can configure Subscription Limit feature using the REST API or CNC Console.
  • Configure using REST API: Perform the feature configurations as described in Oracle Communications Cloud Native Core, Network Repository Function REST Specification Guide.
  • Configure using CNC Console: Perform the feature configurations as described in the NF Management Options section.

Observe

Metrics
Following dimension is added in the NRF Metrics section:
  • RejectionReason=SubscriptionLimitExceeded dimension is added for ocnrf_nfStatusSubscribe_tx_responses_total metric.
Following metrics are added for the Subscription Limit feature:
  • ocnrf_nfset_active_subscriptions
  • ocnrf_nfset_limit_level
  • ocnrf_subscription_migration_status

KPIs

Following KPIs are added for the Subscription Limit feature:

Maintain

If you encounter alerts at system or application levels, see NRF Alerts section for resolution steps.

If the alerts still persist, perform the following:

  1. Collect the logs: For more information on how to collect logs, see Oracle Communications Cloud Native Core, Network Repository Function Troubleshooting Guide.
  2. Raise a service request: See My Oracle Support for more information on how to raise a service request.

4.38 Automated Test Suite Support

NRF provides Automated Test Suite (ATS) for validating the functionalities. ATS allows you to run NRF test cases using an automated testing tool, and then compares the actual results with the expected or predicted results. In this process, there is no intervention from the user. For more information on installing and configuring ATS, see Oracle Communications Cloud Native Core, Automated Test Suite User Guide.