6 Kafka Cluster Management Procedures

The following sections describe the procedures to manage the Kafka cluster.

6.1 Creating Topics for OCNADD

  • Create topics (MAIN, SCP, SEPP, PCF, BSF, and NRF) using the configuration service before starting data ingestion.
  • For more details on topics and partitions, see the Oracle Communications Network Analytics Data Director Benchmarking Guide Section "OCNADD resource requirement". The guidelines on the replicationFactor and retentionMs are also available in the planning guide.
  • The segmentMs property determines the maximum time a Kafka log segment can remain active before being rolled over, regardless of its size. This ensures that old data is periodically cleaned up or compacted by the retention mechanism. For low-traffic topics, a smaller segmentMs value is recommended to facilitate timely data retention. In contrast, for high-traffic topics, the retentionMs property takes precedence, making segmentMs less impactful.

To create a topic, invoke the API endpoint described below:

  • For Relay Agent Kafka Cluster: <MgmtGWIP:MgmtGW Port>.<mgmt-ns>/ocnadd-configuration/v2/topic?relayAgentGroup=<relayAgentGroupName>

    Where:

    <relayAgentGroupName> = <siteName>:<workerGroupName>:<relayAgentNamespace>:<relayAgentClusterName>

  • For Mediation Kafka Cluster: <MgmtGWIP:MgmtGW Port>.<mgmt-ns>/ocnadd-configuration/v2/topic?mediationGroup=<mediationGroup>

    Where:

    <mediationGroup> = <siteName>:<workerGroupName>:<mediationNamespace>:<mediationClusterName>

Example Usage:

If the Management Gateway has a load balancer IP, use the Load Balancer IP in the commands; otherwise, use the Management Gateway service name. The examples below use the Management Gateway service name.

Exec into the Management Gateway pod and execute the following commands:

1) To create the SCP topic in the Relay Agent Kafka cluster:

curl -k --location "http://ocnaddmanagementgateway.ddmgmt:12889/ocnadd-configuration/v2/topic?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster" \
-H "Content-Type: application/json" \
-d '{"topicName":"<topicname>","partitions":3,"replicationFactor":2,"retentionMs":300000,"segmentMs":300000}'

If secure communication for OCNADD is enabled (mTLS: true), use:

curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
"https://ocnaddmanagementgateway.ddmgmt:12889/ocnadd-configuration/v2/topic?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster" \
-H "Content-Type: application/json" \
-d '{"topicName":"<topicname>","partitions":3,"replicationFactor":2,"retentionMs":300000,"segmentMs":300000}'

2) To create the MAIN topic in the Mediation Kafka cluster:

curl -k --location "http://ocnaddmanagementgateway.ddmgmt:12889/ocnadd-configuration/v2/topic?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster" \
-H "Content-Type: application/json" \
-d '{"topicName":"<topicname>","partitions":3,"replicationFactor":2,"retentionMs":300000,"segmentMs":300000}'

If secure communication for OCNADD is enabled (mTLS: true), use:

curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
"https://ocnaddmanagementgateway.ddmgmt:12889/ocnadd-configuration/v2/topic?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster" \
-H "Content-Type: application/json" \
-d '{"topicName":"<topicname>","partitions":3,"replicationFactor":2,"retentionMs":300000,"segmentMs":300000}'

JSON for creating the topic:

{
    "topicName": "<topicname>",
    "partitions": 3,
    "replicationFactor": 2,
    "retentionMs": 300000,
    "segmentMs": 300000
}
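
As an illustration of the segmentMs guidance above, a low-traffic topic can be created with a smaller segmentMs so that its segments roll over and become eligible for retention sooner. The values below are placeholders, not sizing recommendations:

{
    "topicName": "<low-traffic-topicname>",
    "partitions": 3,
    "replicationFactor": 2,
    "retentionMs": 300000,
    "segmentMs": 60000
}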

6.2 Kafka Cluster Capacity Expansion

Kafka cluster capacity expansion may be needed when the traffic throughput requirement for the customer deployment changes significantly. The standard approach is to first scale the brokers, then increase the topic partitions, and finally execute a single, comprehensive partition reassignment to distribute all partitions (existing and new) evenly across the larger cluster.

The capacity expansion can be carried out in the following three-step process:

  1. Adding more brokers to the existing Kafka Cluster
  2. Increasing the partitions in the existing topics, for example, SCP, SEPP, MAIN
  3. Partition reassignment for the topics

The procedures should be performed in the described sequence as needed in the Relay Agent or Mediation group's Kafka cluster.

  • Wait for each action to finish successfully before proceeding.
  • Monitor logs for errors, especially during reassignment and partition increase steps.
  • Adjust hostnames, broker lists, and file paths as needed for your environment.

6.3 Adding Kafka Brokers to Existing Kafka Cluster

The number of Kafka brokers in the cluster should be increased based on the benchmarking guide recommendations for the desired throughput.

Refer to the Oracle Communications Network Analytics Data Director Benchmarking Guide for the Kafka broker resource profile, the number of brokers, and the number of partitions for the Source and MAIN topics.

Perform the following steps to increase the brokers in the existing Kafka Cluster in Relay Agent or Mediation group:

  1. Update the custom values for the Relay Agent or Mediation group to increase the Kafka broker replicas:
    # Update the Helm chart to increase the number of Kafka brokers to, say, 11
    
    For Relay Agent: Update the ocnadd-relayagent-custom-values.yaml for the current release as indicated below
    global.ocnaddrelayagent.kafka.kafkareplicas: 4 ======================> 11 increase this value to the desired value
    
    For Mediation group: Update the ocnadd-mediation-custom-values.yaml for the current release as indicated below
    global.ocnaddmediation.kafka.kafkareplicas: 4 ======================> 11 increase this value to the desired value
    
    # Save the custom values in the corresponding helm charts
  2. Scale the Kafka brokers: Scale the Kafka StatefulSet to the required replica count (for example, 11).
    
    For Relay Agent:
    # kubectl scale sts -n dd-relay kafka-broker --replicas 11
    
    For Mediation group:
    # kubectl scale sts -n dd-med kafka-broker --replicas 11
  3. Verify that new Kafka Brokers are spawned:
    Verify the Kafka broker pods in the corresponding group.
    
    For Relay Agent: Check the Kafka broker pods using the command below and confirm that their count matches the configured replicas. The brokers should be in the Running state (an optional count check is shown after this list).
    # kubectl get po -n dd-relay 
    
    For Mediation group: Check the Kafka broker pods using the command below and confirm that their count matches the configured replicas. The brokers should be in the Running state.
    # kubectl get po -n dd-med
  4. Continue to the next procedure to increase the partitions for the topics.
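
Optionally, before continuing, the broker count can be checked directly. This is a minimal sketch that assumes the pod names contain "kafka-broker", as in the examples above:

# For Relay Agent: the printed count should match the configured replicas (for example, 11)
kubectl get po -n dd-relay | grep kafka-broker | grep -c Running

# For Mediation group:
kubectl get po -n dd-med | grep kafka-broker | grep -c Running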

6.4 Adding Partitions to an Existing Topic

The topics in OCNADD should already be created with the necessary partitions for the supported MPS during deployment. In case of a traffic load increase beyond the current supported MPS, it may be necessary to increase the number of partitions in existing topics. The procedure below can be executed to increase the number of partitions in the corresponding topic.

Caution:

The number of partitions cannot be decreased using this procedure. If partition reduction is required, the entire topic must be recreated with the desired number of partitions, which will result in complete data loss of the concerned topic.

Steps to add partitions to an existing topic:

  1. Log in to the bastion host and exec into the OCNADD Management Gateway service pod in the deployed OCNADD management group namespace:
    kubectl exec -it <mgmtgw-pod-name> -n <namespace> -- bash
    

    Example:

    kubectl exec -ti -n dd-mgmt ocnaddmanagementgateway-xxxxxx -- bash
    
  2. Determine the OCNADD component (Relay Agent or Mediation) to which the topic belongs and where partitions need to be increased.
  3. Describe the corresponding topic using the following command, providing the Relay Agent or Mediation group name:
    • For Relay Agent:
      curl -k --location "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/SCP?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster"
      
    • If secure communication (intraTlsEnabled: true and mTLS: true) is enabled:
      curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/SCP?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster"
      
    • For Mediation:
      curl -k --location "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/MAIN?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster"
      
    • If secure communication is enabled:
      curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/MAIN?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster"
      

    The above command will list the topic details such as number of partitions, replication factor, retentionMs, and so on.

  4. Add or increase partitions in the topic by executing the following command, providing the Relay Agent or Mediation group name:
    • For Relay Agent:
      curl -v -k --location --request PUT "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster" \
      --header 'Content-Type: application/json' \
      --data-raw '{ "topicName": "SCP", "partitions": "84" }'
      
    • If secure communication is enabled:
      curl -v -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      --request PUT "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster" \
      --header 'Content-Type: application/json' \
      --data-raw '{ "topicName": "SCP", "partitions": "84" }'
      
    • For Mediation:
      curl -v -k --location --request PUT "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster" \
      --header 'Content-Type: application/json' \
      --data-raw '{ "topicName": "MAIN", "partitions": "270" }'
      
    • If secure communication is enabled:
      curl -v -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      --request PUT "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster" \
      --header 'Content-Type: application/json' \
      --data-raw '{ "topicName": "MAIN", "partitions": "270" }'
      
  5. Verify that the partitions have been added to the topic by executing the following command, providing the Relay Agent or Mediation group name:
    • For Relay Agent:
      curl -k --location "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/SCP?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster"
      
    • If secure communication is enabled:
      curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/SCP?relayAgentGroup=BLR:ddworker1:dd-relay-ns:dd-relay-cluster"
      
    • For Mediation:
      curl -k --location "http://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/MAIN?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster"
      
    • If secure communication is enabled:
      curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/serverKeyStore.p12:$OCNADD_SERVER_KS_PASSWORD \
      "https://ocnaddmanagementgateway.dd-mgmt:12889/ocnadd-configuration/v2/topic/MAIN?mediationGroup=BLR:ddworker1:dd-mediation-ns:dd-mediation-cluster"
      
  6. Exit from the pod (container).
  7. Continue to the next procedure to reassign the partitions for the topics.

6.5 Partitions Reassignment in Kafka Cluster

The procedure for partition reassignment is divided into the following major steps:

  1. Identify the topics for partition reassignment
  2. Generate the reassignment plan
  3. Execute the reassignment plan
  4. Verify the partition reassignment

1. Prepare the File Listing All Topics for Reassignment

For Relay Agent:

Create topics-to-move-relay.json

{
  "topics": [
    {"topic": "SCP"},
    {"topic": "SEPP"}
  ],
  "version": 1
}

For Mediation:

Create topics-to-move-med.json

{
  "topics": [
    {"topic": "MAIN"}
  ],
  "version": 1
}

Copy the created JSON file to the appropriate Kafka broker pod:

  • For Relay Agent:
kubectl cp topics-to-move-relay.json <dd-relay-namespace>/<kafka-pod-name>:/home/ocnadd/topics.json
  • For Mediation:
kubectl cp topics-to-move-med.json <dd-med-namespace>/<kafka-pod-name>:/home/ocnadd/topics.json
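
For example, assuming the namespaces and the kafka-broker StatefulSet pod naming used earlier in this document (adjust to your deployment):

kubectl cp topics-to-move-relay.json dd-relay/kafka-broker-0:/home/ocnadd/topics.json
kubectl cp topics-to-move-med.json dd-med/kafka-broker-0:/home/ocnadd/topics.json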

2. Generate Reassignment Plan: Generate the proposed plan to distribute all partitions across the new broker list in the corresponding group.

  • For Relay Agent:
# Exec into Kafka Broker (use Relay Agent namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --topics-to-move-json-file /home/ocnadd/topics.json --broker-list "1001,...,1011" --generate

Save the generated "Proposed partition reassignment configuration" as reassignment.json and copy it to the Kafka broker pod:

kubectl cp reassignment.json <dd-relay-namespace>/<kafka-pod-name>:/home/ocnadd/reassignment.json
  • For Mediation:
# Exec into Kafka Broker (use Mediation group namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --topics-to-move-json-file /home/ocnadd/topics.json --broker-list "1001,...,1011" --generate

Save and copy the reassignment plan:

kubectl cp reassignment.json <dd-mediation-namespace>/<kafka-pod-name>:/home/ocnadd/reassignment.json
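
The saved reassignment.json follows Kafka's standard reassignment format. A minimal illustration with placeholder broker IDs is shown below; the actual content is produced by the --generate step:

{
  "version": 1,
  "partitions": [
    {"topic": "SCP", "partition": 0, "replicas": [1001, 1005], "log_dirs": ["any", "any"]},
    {"topic": "SCP", "partition": 1, "replicas": [1002, 1006], "log_dirs": ["any", "any"]}
  ]
}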

3. Run Reassignment Plan

Start the partition movement with throttling limits.

  • For Relay Agent:
# Exec into Kafka Broker (use Relay Agent namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --reassignment-json-file /home/ocnadd/reassignment.json --execute --throttle 50000000 --replica-alter-log-dirs-throttle 100000000
  • For Mediation:
# Exec into Kafka Broker (use Mediation group namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --reassignment-json-file /home/ocnadd/reassignment.json --execute --throttle 50000000 --replica-alter-log-dirs-throttle 100000000

4. Verify Partition Reassignment

  • For Relay Agent:
# Exec into Kafka Broker (use Relay Agent namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --reassignment-json-file /home/ocnadd/reassignment.json --verify
  • For Mediation:
# Exec into Kafka Broker (use Mediation group namespace)
unset JMX_PORT
cd kafka/bin
./kafka-reassign-partitions.sh --bootstrap-server kafka-broker:9092 --reassignment-json-file /home/ocnadd/reassignment.json --verify
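
Optionally, once the verification reports that the reassignment is complete, the resulting partition distribution can be inspected with the standard Kafka topics tool. A sketch using the same bootstrap server and the SCP topic as an example:

# Exec into Kafka Broker (use the corresponding namespace)
unset JMX_PORT
cd kafka/bin
./kafka-topics.sh --bootstrap-server kafka-broker:9092 --describe --topic SCP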

6.6 Kafka Cluster External Access

Prerequisites

  • The external IPs for the Kafka Brokers must be updated in the SAN entries during the generation of TLS certificates. Refer to the section Certificate and Secret Generation in Oracle Communications Network Analytics Security Guide.

External Access with OCCNE LBVM

The same procedure applies for both Relay Agent and Mediation group.

  1. Update the custom values for the corresponding group (Relay Agent or Mediation group)

For Relay Agent:

  • Edit ocnadd-relayagent-custom-values.yaml for the current release and update the following parameters:
ocnaddrelayagent.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.enabled: false  # =======> change to true
ocnaddrelayagent.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.autoDiscovery: false # =======> change to true
  • If static IPs are to be used as LoadBalancer IPs for Kafka Broker, update:
ocnaddrelayagent.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.setstaticLoadBalancerIps: false # =======> set to true
ocnaddrelayagent.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.LoadBalancerIPList: [] # =======> set to comma-separated IP list, e.g., [10.121.45.50,10.121.45.51,10.121.45.52]
  • Save the file.

For Mediation Group:

  • Edit ocnadd-mediation-custom-values.yaml for the current release and update the following parameters:
ocnaddmediation.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.enabled: false  # =======> change to true
ocnaddmediation.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.autoDiscovery: false # =======> change to true
  • If static IPs are to be used as LoadBalancer IPs for Kafka Broker, update:
ocnaddmediation.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.setstaticLoadBalancerIps: false # =======> set to true
ocnaddmediation.ocnaddkafka.ocnadd.kafkaBroker.externalAccess.LoadBalancerIPList: [] # =======> set to comma-separated IP list, e.g., [10.121.45.50,10.121.45.51,10.121.45.52]
  • Save the file.

Note:

If this is done during installation, continue with the installation steps from the Oracle Communications Network Analytics Data Director Installation, Upgrade, and Fault Recovery Guide; otherwise, proceed with the steps below.

Perform Helm Upgrade of the Corresponding Group

For Relay Agent:

Example:

helm upgrade ocnadd-relay -f ocnadd-common-custom-values.yaml -f ocnadd-relayagent-custom-values-relayagent-group.yaml --namespace ocnadd-relay ocnadd

For Mediation Group:

Example:

helm upgrade ocnadd-med -f ocnadd-common-custom-values.yaml -f ocnadd-mediation-custom-values-med-group.yaml --namespace ocnadd-med ocnadd

  • Verify that all Kafka broker pods are in the running state in the corresponding group.
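
After the upgrade, external access can also be checked by listing the services in the corresponding namespace; the LoadBalancer services for the brokers should report external IPs in the EXTERNAL-IP column. The exact external service names depend on the chart, so the filter below is only a sketch:

# For Relay Agent (namespace as used in the example above)
kubectl get svc -n ocnadd-relay | grep kafka

# For Mediation group
kubectl get svc -n ocnadd-med | grep kafka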

External Access with OCCNE CNLB

6.7 Enabling Kafka Log Retention Policy

In Kafka, the log retention strategy determines how long the data is kept in the broker's logs before it is purged to free up storage space.

There are two main approaches for log retention in Kafka:
  • Time-based retention:

    Once the logs reach the specified age, they are considered eligible for deletion, and the broker will start a background task to remove the log segments that are older than the retention time. The time-based retention policy applies to all logs in a topic, including both the active logs that are being written and the inactive logs that have already been compacted.

    The retention time is usually set using the "log.retention.hours" or "log.retention.minutes" configuration.

    Parameters used:

    log.retention.minutes=5

    The default value for "log.retention.minutes" is 5 minutes.

  • Size-based retention:

    Once the logs reach the specified size threshold, the broker will start a background task to remove the oldest log segments to ensure that the total log size remains below the specified limit. The size-based retention policy applies to all logs in a topic, including both the active logs that are being written and the inactive logs that have already been compacted.

    By default, these parameters are not available in the OCNADD Helm chart. The parameters are customizable and can be added in 'ocnadd/charts/ocnaddkafka/templates/scripts-config.yaml'; a Helm upgrade must then be performed to apply them. A Kafka broker restart is expected.

Parameters for size-based retention:

For size-based retention, add the following parameters to the 'ocnadd/charts/ocnaddkafka/templates/scripts-config.yaml' file and perform a Helm upgrade to apply them.

#The maximum size of a single log file
log.segment.bytes=1073741824

#The maximum size of the log before deleting it
log.retention.bytes=32212254720

#Enable the log cleanup process to run on the server
log.cleaner.enable=true

#The default cleanup policy; old segments are discarded when their retention time or size limit has been reached.
log.cleanup.policy=delete

#The interval at which log segments are checked to see if they can be deleted
log.retention.check.interval.ms=1000

#The amount of time to sleep when there are no logs to clean
log.cleaner.backoff.ms=1000

#The number of background threads to use for log cleaning
log.cleaner.threads=5

Calculate "log.retention.bytes":

The log retention size can be calculated as below.

Example: For 80% threshold of the PVC claim size,

"log.retention.bytes" will be calculated as: (pvc(in bytes) / TotalPartition) * threshold/100

Here TotalPartition will be the sum of partitions of all the topics. If any topic has replication factor 2, then the number of partitions will be twice the number of partitions in that topic. See Oracle Communications Network Analytics Data Director Benchmarking Guide section "OCNADD resource requirement" to determine the number of partitions per topic.
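
A worked example with hypothetical values: for a 20 Gi PVC (21474836480 bytes), a TotalPartition count of 14 (including replicas), and an 80% threshold:

log.retention.bytes = (21474836480 / 14) * 80/100 ≈ 1227133513 bytes (approximately 1.14 GiB)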

Note:

  • It's important to choose an appropriate time-based retention and size-based retention policy that balances the need for retaining data for downstream processing against the need to free up disk space. If the retention size or retention time is set too high, it may result in large amounts of disk space being consumed, and if it's set too low, important data may be lost.
  • Kafka also allows for a combination of both time-based and size-based retention by setting both "log.retention.hours" or "log.retention.minutes" and "log.retention.bytes" configurations, in which the broker will retain logs for the shorter of the two.
  • The "log.segment.bytes" is used to control the size of log segments in a topic's partition and is usually set to a relatively high value (for example, 1 GB) to reduce the number of segment files and minimize the overhead of file management. It is recommended to set this value lower based on smaller PVC size.
  • The above procedures should be applied for all the Kafka Broker clusters corresponding to each of the worker groups.

6.8 Expanding Kafka Storage

With the increase in throughput requirements in OCNADD, the storage allocation in Kafka should also be increased. If storage was previously allocated for a lower throughput, it should be increased to meet the new throughput requirements.

The procedure should be applied to all the available Kafka Broker clusters corresponding to each of the worker groups in the Centralized deployment mode.

Caution:

  1. It is recommended to perform the PVC expansion before a release upgrade. For example, if the Data Director release is running on 25.2.1xx (source release) and is planned to be upgraded to 25.2.2xx, and the PVC storage needs to be expanded because of the higher throughput supported in the target release, then the PVC expansion must be done in the source release using the procedures below.
  2. It is not possible to roll back Kafka to the previous release if the Kafka PVC size was increased after the upgrade. If a rollback is still required, follow the DR procedures for Kafka; this may result in data loss if the data has not already been consumed.

To increase the size of an existing PVC:
  1. Check the StorageClass to which your PVC is attached.

    Note:

    • PVC storage size can only be increased. It cannot be decreased.
    • It is mandatory to keep the same PVC storage size for all the Kafka brokers.
    1. Run the following command to get the list of storage classes:
      kubectl get sc
      Sample output:
      NAME                 PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      occne-esdata-sc      rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   125d
      occne-esmaster-sc    rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   125d
      occne-metrics-sc     rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   125d
      standard (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   125d
    2. Run the following command to describe a storage class:
      kubectl describe sc <storage_class_name>
      For example:
      kubectl describe sc standard
      The PVC should belong to the StorageClass which has "AllowVolumeExpansion" set as True:
      AllowVolumeExpansion: True
  2. Run the following command to list all the available PVCs:

    For Relay Agent:

    kubectl get pvc -n dd-relay
    For Mediation Group:
    kubectl get pvc -n dd-med
    Sample output:
     NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
     backup-mysql-pvc                            Bound    pvc-6e0c8366-fdaa-488e-a09d-f976e74f025b   20Gi       RWO            standard       8d
     kraft-broker-security-kraft-controller-0    Bound    pvc-4f3c310c-1173-4d3d-824a-ddd2640ab028   5Gi        RWO            standard       8d
     kraft-broker-security-kraft-controller-1    Bound    pvc-70fffae2-4f2a-4ecc-8f74-abfa003a8c1e   5Gi        RWO            standard       8d
     kraft-broker-security-kraft-controller-2    Bound    pvc-e1b0b96d-e000-4e5c-a01c-d41cc8f3cfa4   5Gi        RWO            standard       8d
     kafka-volume-kafka-broker-0                 Bound    pvc-48845cf6-1708-400a-808e-5b9b3cda7242   20Gi       RWO            standard       8d
     kafka-volume-kafka-broker-1                 Bound    pvc-8acbde89-b223-4015-a170-ad6417b08be7   20Gi       RWO            standard       8d
     kafka-volume-kafka-broker-2                 Bound    pvc-7b444cba-d1ab-496b-8b60-7c7599ef8754   20Gi       RWO            standard       8d
     kafka-volume-kafka-broker-3                 Bound    pvc-e1584794-8e62-4f6a-8eff-c0c86e0864a1   20Gi       RWO            standard       8d
     ocnadd-cache-volume-ocnaddcache-0           Bound    pvc-5bf8ce9c-7ea6-4cf3-b51e-6ca94104dc5b   1Gi        RWO            standard       8d
     ocnadd-cache-volume-ocnaddcache-1           Bound    pvc-ddb0b53c-ec69-4e07-ac44-3ab98bee1d4c   1Gi        RWO            standard       8d
  3. List all the available PVCs using the below command and note the current capacity of the Kafka PVCs:
    kubectl get pvc -n <namespace>
  4. Run the following command to edit the required PVC and update the storage size:

    For Relay Agent:

    kubectl edit pvc <pvc_name> -n <relayagent-namespace>
    For Mediation Group:
    kubectl edit pvc <pvc_name> -n <mediation-namespace>

    For example:

    For Relay Agent:

    kubectl edit pvc kafka-volume-kafka-broker-0 -n dd-relay
    For Mediation Group:
    kubectl edit pvc kafka-volume-kafka-broker-0 -n dd-med
    Sample output:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: <pvc_size>Gi  

    Increase the <pvc_size> for all available Kafka broker pods created during deployment. Steps 3 and 4 should be repeated for all Kafka PVCs for each broker in every available group.

    The size of the next Kafka broker PVC should be increased only after the increased size of the current Kafka broker PVC is reflected in the output of step 3.

  5. To understand the storage requirement for your Kafka broker pods based on the supported throughput, see the "Kafka PVC-Storage Requirements" section in the Oracle Communications Network Analytics Data Director Benchmarking Guide.
  6. Run the following command to delete the StatefulSet:
    kubectl delete statefulset --cascade=orphan <statefulset_name> -n <namespace>
    For Relay Agent:
    kubectl delete statefulset --cascade=orphan kafka-broker -n dd-relay
    For Mediation Group:
    kubectl delete statefulset --cascade=orphan kafka-broker -n dd-med
  7. Update the PVC size in ocnadd-relayagent-custom-values.yaml or ocnadd-mediation-custom-values.yaml (ocnaddkafka.ocnadd.kafkaBroker.pvcClaimSize) to the same value that was configured in step 4.
    For Relay Agent:
    ocnaddrelayagent.ocnaddkafka:
       ocnadd:
           ################
           # kafka-broker #
           ################
           kafkaBroker:
               name: kafka-broker
               replicas: 4
               pvcClaimSize: <pvc_size>Gi
    
    For Mediation Group:
    ocnaddmediation.ocnaddkafka:
       ocnadd:
           ################
           # kafka-broker #
           ################
           kafkaBroker:
               name: kafka-broker
               replicas: 4
               pvcClaimSize: <pvc_size>Gi
    
  8. Perform the Helm upgrade to recreate the Kafka broker StatefulSet. Perform this step for the corresponding group (Relay Agent or Mediation), using the charts and custom values of the group in which the Kafka broker pvcClaimSize is being increased.
    For Relay Agent:
    helm upgrade <relayagent-release-name> -f ocnadd-common-custom-values.yaml -f <relayagent-custom-values> --namespace <ocnadd-relayagent-namespace> <helm_chart> 
    
    For Mediation Group:
    helm upgrade <mediation-release-name> -f ocnadd-common-custom-values.yaml -f <mediation-custom-values> --namespace <ocnadd-med-namespace> <helm_chart>
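
After the upgrade completes, the Kafka broker StatefulSet and the expanded PVCs can be checked. A quick sketch using the names from the earlier examples (adjust the namespace for the Mediation group):

# Verify that the kafka-broker StatefulSet has been recreated with the expected replicas
kubectl get sts kafka-broker -n dd-relay

# Verify that the Kafka PVCs report the new capacity
kubectl get pvc -n dd-relay | grep kafka-volume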