4 Upgrading OCNADD
This section provides information on how to upgrade an existing OCNADD deployment. The section describes the upgrade order for source NFs, CNC Console, cnDBTier, and the upgrade impact on the source NFs.
4.1 Migrating OCNADD to New Architecture
This section provides information on how to migrate an existing OCNADD deployment. The section describes the migration order for source NFs, CNC Console, cnDBTier, and the impact on the source NFs.
4.1.1 Migration Overview
The following steps outline the migration process for transitioning an existing OCNADD deployment to the new architecture.
- The migration will follow a Blue-Green deployment approach, where both the old and new architectures coexist.
- A new deployment of the current release will be installed alongside the existing OCNADD deployment, using the same profile to ensure equivalent throughput.
- Once the new deployment installation is verified, configurations will be migrated from the existing deployment to the new one via the OCNADD migration job.
- After successful configuration migration, traffic from source NFs will be routed to the new deployment.
- Once the migration is completed and traffic is streaming via the new deployment, normal traffic throughput can be resumed.
- After finalizing the migration, the existing deployment can be scaled down to release the resources.
- After monitoring the new deployment for a sufficient period (typically several days or a week), the existing deployment can be uninstalled. After this point, it is not possible to route traffic via the existing deployment.
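For illustration, the coexistence phase can be sanity-checked by listing both deployments side by side. A minimal sketch, using the example namespaces from the configuration migration section (dd-mgmt-old for the existing management group, ocnadd-mgmt for the new one):
# Both management groups should be Running while the deployments coexist
kubectl get pods -n dd-mgmt-old
kubectl get pods -n ocnadd-mgmt
# Compare the Helm releases installed in each namespace
helm list -n dd-mgmt-old
helm list -n ocnadd-mgmt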
Note:
- It is assumed that migration is performed at approximately 20% of the currently running traffic rate.
- The traffic flow between the NFs and the OCNADD Kafka may degrade only when the traffic is being switched from the existing deployment to the new deployment.
- The traffic flow between OCNADD consumer adapters and third-party consumers may degrade only during the switchover period.
- Alarms from the source release will not be migrated into the target release during this migration procedure.
4.1.2 Impact on Resource Requirement
The Blue-Green deployment approach requires additional resources during the migration period, as both the old and new deployments will coexist temporarily. However, this approach minimizes the impact on end-to-end traffic flow, with traffic disruption limited to the switchover period. Also, if the migration fails after routing traffic to the new deployment, this approach allows for a quick rollback to the existing deployment with minimal impact. The following resources will be increased during the migration:
- vCPU
- Memory (additional memory is required if RAM drive storage mode is enabled for the Relay Agent or Mediation Kafka cluster; for Relay Agent Kafka, RAM drive storage is enabled by default in the target release)
- Disk storage PVCs for KRaft controllers deployed in the Relay Agent Kafka cluster
- DB Resources – additional DB resources are required to support the database creation for management group services (configuration_schema, alarm_schema, and healthdb_schema) in the new deployment.
Note:
During the migration process, consider the following additional requirements:
- Generate new SSL and TLS certificates for the new deployment and ensure that they are signed by the same CA used in the source release. Users can generate a single certificate per group, or individual certificates for each service (see the sketch after this list).
- Allocate external IPs for ingress connections (Ingress adapter and Relay Agent Kafka brokers).
- Update the CNCC console by adding a new instance for the new deployment. If CNCC instance limits are reached, remove the existing deployment instance before adding the new one.
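The exact certificate procedure is deployment-specific; the following is only a minimal openssl sketch of signing a service certificate with the same CA that signed the source release certificates. All file names and the subject CN are placeholders:
# Generate a key and CSR for a service in the new deployment (names are placeholders)
openssl genrsa -out service.key 2048
openssl req -new -key service.key -subj "/CN=ocnadd-service" -out service.csr
# Sign the CSR with the SAME CA key and certificate used in the source release
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out service.crt
# Verify that the new certificate chains to the shared CA
openssl verify -CAfile ca.crt service.crt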
Note:
If the user plans to operate at the maximum throughput supported in the target release after migration, the CPU and memory resources required during the migration (while traffic runs at the reduced migration rate) will be less than the total resources required to support maximum throughput.
4.1.3 Supported Migration Paths
The following table lists the supported migration paths for OCNADD:
Table 4-1 Supported Migration Path
| Source Release | Target Release |
|---|---|
| 25.2.100 | 25.2.200 |
| 25.1.200 | 25.2.200 |
4.1.4 Preparing for Migration
- Fetch the images and charts of the target release as described in Pre-Installation Tasks.
- Keep a backup of the ocnadd-custom-values.yaml file and the extracted chart folder ocnadd of the source release before starting the migration procedure.
- Take a manual backup of OCNADD before starting the migration procedure. See Performing OCNADD Manual Backup for details.
- If external access for Kafka brokers is enabled, ensure that you have sufficient IPs in your setup to allocate for the new deployment.
- While performing the migration, align the custom values YAML files of the target release with the ocnadd/values.yaml file of the source (older) release. Do not enable any new feature during the migration. The parent or sub-chart values.yaml files must not be changed while performing the migration unless explicitly specified in this document. At least the following features must be aligned from the source release to the target release:
- CNLB configurations for OCNADD ingress and egress interfaces
- ACL configurations
- Client ACLs (for more details on creating client ACLs, refer to the section Create Client ACLs in the Oracle Communications Network Analytics Data Director User Guide)
- IP Family configurations
- IntraTLS and mTLS configurations
Note:
- In the target release, the global.ssl.mTLS configuration in the ocnadd-common-custom-values-25.2.200.yaml file determines whether security is enabled in OCNADD. When set to true, security is enabled; otherwise, it is disabled. The default value for this setting is true.
- If intraTLS and mTLS are set to false in the existing deployment, then set mTLS to false in the target release.
- If intraTLS is set to true and mTLS is set to false in the existing deployment, then set mTLS to true in the target release.
- If intraTLS and mTLS are set to true in the existing deployment, then set mTLS to true in the target release.
- The database names for the Configuration Service, Health Service, and Alarm Service must be modified in the ocnadd-common-custom-values-25.2.200.yaml file before the migration, and must differ from the names used in the source release:
  global.cluster.database.configuration_db: configuration_schema # --> keep a different name for the configuration DB than the source release
  global.cluster.database.alarm_db: alarm_schema # --> keep a different name for the alarm DB than the source release
  global.cluster.database.health_db: healthdb_schema # --> keep a different name for the health DB than the source release
  For global.cluster.database.storageadapter_db, the name should be aligned with the source release.
- Disable the Network Policies before the migration. The network policies can be enabled again after the migration. Refer to the section Network Policy in the Oracle Communications Network Analytics Suite Security Guide for more details.
- If Druid is enabled in OCNADD for the source release, then Druid configurations must be enabled in the target release, and all required secrets must be created in the management namespace of the target release. For more details on creating secrets, refer to the section Druid Cluster Integration with OCNADD Site in the Oracle Communications Network Analytics Data Director User Guide.
- If the Export feature is enabled in OCNADD for the source release, then export functionality must be enabled in the target release, and secrets for SFTP credentials must be created in the management namespace of the target release. For more details on creating secrets for SFTP credentials, refer to Steps to create SFTP credential for SFTP server in the Oracle Communications Network Analytics Data Director User Guide.
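To spot-check that the security, ACL, and IP family settings are aligned, the relevant keys of the source and target values files can be diffed. A minimal sketch using yq v4; the file names are examples, and the key paths in the source release file are assumed to match its chart layout:
# Compare the TLS/mTLS settings between the source and target custom values
diff <(yq '.global.ssl' ocnadd-custom-values-old.yaml) \
     <(yq '.global.ssl' ocnadd-common-custom-values-25.2.200.yaml)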
4.1.5 Migration Task
4.1.5.1 Choosing the OCNADD Deployment Model
To determine the most suitable deployment model for your use case, refer to the section OCNADD Deployment Models. Prior to initiating the migration process, ensure that the Management Group for both the existing and new deployments is located within the same cluster, as this is a prerequisite for configuration migration. The deployment for the Relay Agent and Mediation Group can be either co-located or distributed across multiple clusters, depending on the selected deployment model.
4.1.5.2 Migration Deployment Considerations (Optional)
Users can deploy Kafka instances and create topic partitions according to the resource profile selected for the target release during the migration. The number of aggregation service instances and adapter instances can be scaled up as needed to accommodate increased throughput. For information on the resource requirements for each profile, refer to the Oracle Communications Network Analytics Data Director Benchmarking Guide.
The following example is provided for reference and should not be used as a basis for sizing or configuring any actual deployment:
Assume the user is deploying the 1500K MPS profile in the target release, and the source NF is SCP.
Profile running in old release: 500K MPS
Profile opted to deploy in new release: 1500K MPS
Example resource profile required for 1500K MPS:
Relay Agent Kafka broker instances: 20
Number of SCP instances: 57
SCP topic partition: 342
Mediation Kafka broker instances: 20
Number of TCP feed instances: 59
MAIN topic partition: 354
When the migration is being performed, resources for the target release should be configured as follows:
Relay Agent Kafka replica: 20
SCP topic partition: 342
Number of SCP instances: 20 ## -----> as per 500K MPS profile
Mediation Kafka replica: 20
MAIN topic partition: 354
Number of TCP feed instances: 28 ## -----> as per 500K MPS profile
4.1.5.3 Installing and Verifying the OCNADD Deployment
Install the OCNADD package by referring to the section Installing OCNADD Package. While installing OCNADD in the new architecture, it is mandatory to update the worker group name in the custom values of both Relay Agent (ocnadd-relayagent-custom-values-25.2.200.yaml) and Mediation (ocnadd-mediation-custom-values-25.2.200.yaml) with the worker group namespace name of the source release.
global.ocnaddrelayagent.cluster.workergroupName: wg1 # Update with the namespace name of the worker group in source release
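For example, the key can be updated in place with yq v4 (a sketch only; wg1 is a placeholder, and the corresponding key path in the Mediation values file is an assumption to be verified against your chart):
# Set the worker group name in the Relay Agent custom values (in-place edit)
yq -i '.global.ocnaddrelayagent.cluster.workergroupName = "wg1"' ocnadd-relayagent-custom-values-25.2.200.yaml
# Assumed analogous key for the Mediation custom values; verify the exact path
yq -i '.global.ocnaddmediation.cluster.workergroupName = "wg1"' ocnadd-mediation-custom-values-25.2.200.yaml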
Additionally, refer to the section OCNADD UI Configurations Changes for Dashboard Metrics and update the relay agent and mediation groups with the correct worker group name to enable Dashboard metrics in the UI.
Once the installation is completed, verify the installation by referring to the section Verifying OCNADD Installation.
Create Kafka topics and configure topic partitions according to the selected resource profile. For detailed instructions on creating Kafka topics, refer to the Creating OCNADD Kafka Topics section.
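For illustration only (the authoritative procedure is in Creating OCNADD Kafka Topics), a topic with the partition count from the 1500K MPS example above could be created with the standard Kafka CLI. The topic name, bootstrap address, and replication factor are placeholders:
# Create the SCP topic with the partition count from the selected profile
kafka-topics.sh --create --topic SCP --partitions 342 --replication-factor 2 \
  --bootstrap-server <kafka-broker>:9092
# Confirm the resulting partition layout
kafka-topics.sh --describe --topic SCP --bootstrap-server <kafka-broker>:9092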
4.1.5.4 Migrating Configurations
In this step, all configurations are migrated from the existing deployment to the newly deployed setup via the OCNADD migration job. The job migrates the following OCNADD configurations:
- Standard Feeds
- Ingress Feeds
- Kafka Feeds
- Filter configurations
- Global L3L4 mapping configurations
- OCNADD Metadata configurations
- Correlation and extended storage configuration
- Export configurations
Run the following command in the management group namespace of the target release (25.2.200) to trigger the migration job and start the configuration migration:
helm upgrade dd-mgmt -f ocnadd-common-custom-values-25.2.200.yaml -f ocnadd-management-custom-values-25.2.200-mgmt-group.yaml --namespace <target_release_management_namespace> --set global.ocnaddmanagement.migration.enable=true --set global.ocnaddmanagement.migration.sourceNamespace=<source_release_management_namespace> ocnadd
For example, if the source release management namespace is dd-mgmt-old and the target release management namespace is ocnadd-mgmt:
helm upgrade dd-mgmt -f ocnadd-common-custom-values-25.2.200.yaml -f ocnadd-management-custom-values-25.2.200-mgmt-group.yaml --namespace ocnadd-mgmt --set global.ocnaddmanagement.migration.enable=true --set global.ocnaddmanagement.migration.sourceNamespace=dd-mgmt-old ocnadd
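To follow the job after triggering it, a minimal sketch (ocnaddmigration is the job name shown in the verification steps below; the namespace matches the example above):
# Wait for the migration job to reach the Complete condition (10-minute timeout)
kubectl wait --for=condition=complete job/ocnaddmigration -n ocnadd-mgmt --timeout=600s
# Then inspect the migration report in the job logs
kubectl logs -n ocnadd-mgmt job/ocnaddmigration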
4.1.5.5 Verify Configuration Migration
Once the migration job is completed, the associated pod for that job is marked Completed. The job generates a report in its logs listing the configurations identified in the source release that were successfully migrated into the target release. To access the report, run the following command:
kubectl logs -n <target_release_namespace> <migration job podname>
Example:
kubectl logs -n ocnadd-mgmt ocnaddmigration-hsnr9
=========================== EXPORT ============================
Feature | Configurations
------------------------------------+-------------------------
ExportConfigurations | Available
======================= dd-old:cluster-1 ========================
Feature | Configurations
------------------------------------+-------------------------
IngressAdapterConfigurations | Available
Filters | Available
L3L4Mapping | Available
OCNADDMetadata | Available
KafkaFeeds | Available
Configurations | Available
CorrelationConfigurations | Available
OCL 2025-10-25 10:20:52.251 [main] INFO c.o.c.c.o.m.s.MigrationService -
####### FEATURE: EXPORTCONFIGURATIONS #######
Config Name | Status
--------------------------------+------------
CSV-CONFIG | SUCCESS
=========== Worker Group: dd-old:cluster-1 ===========
####### FEATURE: INGRESSADAPTERCONFIGURATIONS #######
Config Name | Status
--------------------------------+------------
ingress-config | SUCCESS
####### FEATURE: FILTERS #######
Config Name | Status
--------------------------------+------------
Filter Configurations | SUCCESS
####### FEATURE: L3L4MAPPING #######
Config Name | Status
--------------------------------+------------
Global Configuration | SUCCESS
####### FEATURE: OCNADDMETADATA #######
Config Name | Status
--------------------------------+------------
OCNADD MetaData Configuration | SUCCESS
####### FEATURE: KAFKAFEEDS #######
Config Name | Status
--------------------------------+------------
kafkafeed-config | SUCCESS
####### FEATURE: CONFIGURATIONS #######
Config Name | Status
--------------------------------+------------
standard-feed-config | SUCCESS
####### FEATURE: CORRELATIONCONFIGURATIONS #######
Config Name | Status
--------------------------------+------------
kafkafeed-config | SUCCESS
Verify that all the feeds and other configurations are created in the target release. If the job fails due to an error or if all the configurations are not migrated successfully, delete the job manually and re-run the same command provided in the Migrating Configurations section to trigger the execution again.
To get all the jobs in the target release management namespace:
kubectl get jobs.batch -n <target_release_management_namespace>
To delete the job in the target release management namespace:
kubectl delete jobs.batch -n <target_release_management_namespace> ocnaddmigration
4.1.5.6 Configuring OCNADD GUI
Configure the OCNADD UI if not already done by referring to the section Installing OCNADD GUI.
4.1.5.7 Traffic Migration
Once the configurations are migrated successfully, perform the following steps:
- Take a backup of the bootstrap IPs configured in the source NFs for the old deployment. This enables you to restore traffic to the previous deployment if the migration is unsuccessful.
- Update the bootstrap server in NFs with the Relay Agent Kafka broker IPs/FQDN to migrate traffic to the Relay Agent Kafka cluster in the new deployment.
- If NFs are deployed in the same cluster and use the FQDN as the Kafka bootstrap to connect to OCNADD, then all the FQDNs must be updated as:
  *.kafka-broker-headless.<ns>.svc.<domain>
  where <ns> is the namespace where the Relay Agent Kafka cluster is deployed. The asterisk (*) indicates the different broker names (kafka-broker-0, kafka-broker-1, and so on).
- If NFs are deployed in a different cluster and use IP addresses as the Kafka bootstrap to connect to OCNADD, then the IP addresses in the NFs must be updated with the new IP addresses assigned to the Relay Agent Kafka brokers of the target release. Only port 9094 on the Kafka broker is supported for establishing connectivity in this access mode.
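Before updating the NFs, reachability of the new brokers can be spot-checked with the standard Kafka CLI. A minimal sketch assuming IP-based external access on port 9094; the broker address is a placeholder, and a TLS client properties file is needed when security is enabled:
# Query broker API versions to confirm the new Relay Agent broker is reachable
kafka-broker-api-versions.sh --bootstrap-server <relay-agent-broker-ip>:9094 \
  --command-config client-ssl.properties  # TLS client config, if security is enabled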
After the bootstrap server is updated and traffic is redirected to the Relay Agent Kafka cluster, verify the stability of the traffic in the target release by referring to the section Verifying Traffic Migration.
4.1.5.8 Finalizing Migration
Once the migration is complete and traffic is successfully streaming through the Relay Agent Kafka cluster, normal traffic throughput can be restored based on the deployment profile selected by the user. After resuming normal throughput, verify the stability of traffic using the Verifying Traffic Migration section.
Note:
Before scaling down the deployments for the Management and Worker groups in the old release, ensure that there is no consumer lag in the Kafka cluster of the old deployment for any worker group.
- If consumer lag is present, the existing deployment must remain active until the lag is fully cleared.
- If the lag has accumulated due to a connection failure between feeds and a third-party application, ensure that the connectivity issue is resolved and all lag is cleared before proceeding.
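Consumer lag on the old Kafka cluster can be checked with the standard Kafka CLI before scaling anything down. A minimal sketch; the broker address is a placeholder, and a --command-config file is needed when TLS is enabled:
# Describe all consumer groups on the OLD Kafka cluster; the LAG column must be 0
kafka-consumer-groups.sh --bootstrap-server <old-kafka-broker>:9092 --describe --all-groups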
- Scaling down all worker group resources (Source Release)
  - Scale down Deployments
    Run the following command for every deployment in the source worker group namespace:
    kubectl scale deploy <deployment_name> -n <source_release_worker_group_namespace> --replicas 0
  - Scale down StatefulSets
    Run the following command for every StatefulSet (sts) in the source worker group namespace:
    kubectl scale sts <sts_name> -n <source_release_worker_group_namespace> --replicas 0
- Scaling down management group resources (Source Release)
  Run the following command for every deployment in the source management group namespace:
  kubectl scale deploy <deployment_name> -n <source_release_management_group_namespace> --replicas 0
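As a convenience, every Deployment and StatefulSet in a source namespace can be scaled down in one pass. A minimal sketch; run it once per source namespace:
NS=<source_release_worker_group_namespace>
# Scale every Deployment and StatefulSet in the namespace to zero replicas
kubectl get deploy,sts -n "$NS" -o name | xargs -r -I{} kubectl scale {} -n "$NS" --replicas 0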
4.1.5.9 Verifying Traffic Migration
To confirm that traffic is running stably after migration, verify the following key points:
- Pod stability: Ensure that no pods are restarting or entering crash loops after beginning to receive traffic.
- Throughput metrics: Monitor both Ingress and Egress throughput to confirm that the expected Messages Per Second (MPS) rate has been achieved.
- Resource utilization: Check Pod CPU and Memory consumption to ensure that utilization remains within acceptable limits and no resource bottlenecks exist.
- Kafka consumer performance: Verify Kafka Consumer Lag to ensure that consumers are processing messages at the required rate.
- End-to-end latency: Measure OCNADD end-to-end latency to assess overall traffic processing performance.
For detailed steps on collecting the necessary data, refer to the section “Troubleshooting Traffic Stability in OCNADD” in the Oracle Communications Network Analytics Data Director Troubleshooting Guide.
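A minimal spot-check sketch for pod stability and resource utilization (kubectl top requires metrics-server; the namespace is a placeholder):
# Surface pods with the highest restart counts (crash loops sort to the bottom)
kubectl get pods -n <target_release_worker_group_namespace> --sort-by='.status.containerStatuses[0].restartCount'
# Check CPU and memory consumption against the profile limits
kubectl top pods -n <target_release_worker_group_namespace>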
If any critical anomalies are observed in the metrics or command outputs that significantly impact traffic in the new architecture, the user may redirect traffic back to the existing OCNADD deployment to avoid service disruption. To route traffic back to the old deployment, follow the steps below:
- Reconfigure the bootstrap IPs in the source NF with the bootstrap IPs of the old OCNADD deployment.
- Verify traffic on the old deployment using throughput metrics.
During this time, the user can troubleshoot issues in the new deployment. For guidance on resolving post-migration issues, refer to the Oracle Communications Network Analytics Data Director Troubleshooting Guide.
4.1.6 Post Migration Task
Caution:
Performing this task is irreversible and will prevent you from routing traffic back to the existing deployment. Proceed only when you are completely satisfied with the new deployment and have no intention of reverting to the previous version.
After this procedure is performed, if the user needs to revert to the previous release, then the user will have to perform a Fault Recovery on the older release.
After the migration is completed, monitor the new deployment for a sufficient period, typically several days to a week. Once satisfied with the stability of the new deployment, uninstall the existing deployment using the following steps:
- Uninstall the worker groups one after another using the following command:
  helm uninstall <worker-group-release-name> --namespace <worker-group-namespace>
  Example:
  helm uninstall ocnadd-wg1 --namespace dd-worker-group1
- Clean up the Kafka configuration for all the worker groups:
  - To list the secrets in the namespace, run:
    kubectl get secrets -n <worker-group-namespace>
  - To delete all the secrets related to Kafka, run:
    kubectl delete secret --all -n <worker-group-namespace>
  - To delete the configmap used for Kafka, run:
    kubectl delete configmap --all -n <worker-group-namespace>
  - To delete PVCs used for Kafka:
    - Run the following command to list the PVCs used in the namespace:
      kubectl get pvc -n <worker-group-namespace>
    - Run the following command to delete the PVCs used by the brokers and zookeepers:
      kubectl delete pvc --all -n <worker-group-namespace>
- Delete all the worker group namespaces using the below command (this step is only needed if there is more than one worker group):
  kubectl delete namespace <worker-group-namespace>
- Uninstall the management group using the following command:
  helm uninstall <management-release-name> --namespace <management-group-namespace>
  Example:
  helm uninstall ocnadd-mgmt --namespace dd-mgmt-group
- Clean up the database:
  - Log in to the MySQL client on the SQL node with the OCNADD user and password:
    mysql -h <IP_address of SQL Node> -u <ocnadduser> -p
  - To clean up the configuration, alarm, and health databases, run:
    DROP DATABASE <dbname>;
  - To remove MySQL users while uninstalling OCNADD, run:
    SELECT user FROM mysql.user;
    DROP USER 'ocnaddappuser'@'%';
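Before dropping anything, the schemas and users belonging to the old deployment can be listed non-interactively so the names can be verified against the source release values file. A minimal sketch; host and user are placeholders:
# List all databases and MySQL users before cleanup (verify names first)
mysql -h <IP_address_of_SQL_Node> -u <ocnadduser> -p -e "SHOW DATABASES; SELECT user, host FROM mysql.user;"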
4.2 Post Upgrade Task
Note:
This step is required only when the OCCM is used to manage the certificates in the source and target releases and the user wants to update the Loadbalancer IPs for a service in the target release. For step-by-step details, refer to the section Adding or Updating Load Balancer IPs in SAN When OCCM is Used.
4.2.1 Druid Cluster Integration with OCNADD Site
Note:
In the previous release(s), extended storage was available only using the cnDBTier database; migration from cnDBTier-based extended storage to Druid-based extended storage is not supported. If the user wants to move from cnDBTier-based extended storage to Druid-based extended storage, the user must remove the correlation, export, and trace configurations before integrating the Druid-based extended storage. After the Druid storage has been integrated with the OCNADD site, the user can create the correlation, export, and trace configurations again.
This feature is introduced as part of extended storage in the Data Director. To enable it, refer to the Druid Cluster Integration with OCNADD section in the Oracle Communications Network Analytics Data Director User Guide. It is recommended to enable the feature after the release upgrade is completed. Extended storage using the cnDBTier database is available by default if Druid cluster integration is not enabled.
4.2.2 vCollector Integration for Diameter Feed
This release provides integration with vCollector. The vCollector acquires Diameter traffic from vDSR using port mirroring. It is deployed as a virtual machine outside the OCNADD cluster and delivers the acquired Diameter traffic to the Data Director over the Kafka interface. The vCollector is configured and managed by the Data Director OAM services. This feature is introduced as part of the Diameter feed capabilities in the Data Director. To enable the integration with vCollector, refer to the vCollector Integration with Data Director section in the Oracle Communications Network Analytics Data Director Diameter User Guide. It is recommended to enable the feature after the release installation is completed.