8 Enable or Disable RAM Storage in Kafka Cluster
This chapter outlines the procedures to enable or disable RAM storage in a Kafka cluster, depending on performance and reliability requirements.
8.1 Enable RAM Storage in Kafka Cluster
RAM storage is used to support higher throughput in the Kafka cluster. Throughput is normally restricted by the IOPS of the underlying disks. When the disk bandwidth is not performant, the Kafka cluster delivers restricted throughput and higher latency. This can be overcome if the Kafka cluster stores messages in memory using RAM storage. However, because the storage is in memory (RAM), higher message retention is not possible. Note also that with RAM-based storage, messages may be lost if a broker goes down, as the messages are not persisted to disk.
This procedure is required to support RAM storage in the Kafka cluster. It should be executed on all the applicable worker groups where RAM storage is needed. Perform the following steps:
- Enable RAM storage in the custom values file: Update the custom values file of the corresponding worker group or default worker group, for example, ocnadd-custom-values-wg1.yaml.
  Note: Take a backup of the custom values file of the worker group so that the parameters can be restored from the backup in case RAM-based storage is disabled later.
- Update the following parameters in the custom values file ocnadd-custom-values-wg1.yaml of the worker group in the ocnaddkafka section (an example configuration fragment is provided after this procedure):
  ocnaddkafka.ocnadd.kafkaBroker.kafkaProperties.ramDriveStorage: false ==================> set it to true
  - If the Kafka broker requires 48Gi of memory and data retention of the topic requires 200Gi, then the total memory required is 248Gi. The memory value for a single topic can be calculated using:
    MPS * Retention Period * RF * Average Message Size
    Consider the memory with respect to the number of topics being planned, for example, SCP, NRF, SEPP, and MAIN. A worked example is provided after this procedure.
  ocnaddkafka.ocnadd.kafkaBroker.resource.requests.memory: 48Gi ==================> set it to the appropriate value
  ocnaddkafka.ocnadd.kafkaBroker.resource.limit.memory: 48Gi ==================> set it to the appropriate value
  ocnaddkafka.ocnadd.kafkaBroker.kafkaProperties.offsetsTopicReplicationFactor: 3 ==================> set it to 2
- Update the following parameter under the consumeradapter section in ocnaddadmin (also shown in the example fragment after this procedure):
  consumeradapter.env.ADAPTER_KAFKA_PROCESSING_GUARANTEE: exactly_once_v2 ===============> change it to "at_least_once"
  Note: When using "at_least_once", there may be duplicate messages in case of consumer rebalancing or broker restarts.
Note:
- Skip the further steps if this is done as part of a fresh installation of the OCNADD cluster.
- The steps below are additionally needed when the Kafka cluster is to be migrated from CEPH-based storage to RAM storage.
- Uninstall the worker group, for example, dd-worker-group1, where RAM-based storage is being enabled.
  Perform steps 1–4 from the section “Uninstalling OCNADD in Centralized Deployment Mode” in the Oracle Communications Network Analytics Data Director Install, Upgrade, and Fault Recovery Guide.
  Note: During uninstallation of the default worker group, the backup PVC is not deleted during Kafka broker/Kraft-controller PVC deletion. The “delete all PVC” command should be modified appropriately to delete only the required Kafka broker/Kraft-controller PVCs.
- Re-install the same worker group, for example, dd-worker-group1, where RAM-based storage is being enabled.
  - Ensure that the custom values file containing the changes for enabling RAM storage is used (refer to Step 1).
  - Perform the steps mentioned in the section “Installing Worker Group” in the Oracle Communications Network Analytics Data Director Install, Upgrade, and Fault Recovery Guide.
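As an illustration of the per-topic memory formula above (all figures in this example are assumed, not recommendations): at 25,000 MPS, a retention period of 600 seconds, a replication factor (RF) of 2, and an average message size of 3,000 bytes, a single topic requires approximately 25,000 * 600 * 2 * 3,000 bytes ≈ 90 GB (about 84 Gi) of retention memory. Repeat the calculation for each planned topic (for example, SCP, NRF, SEPP, and MAIN), add the Kafka broker's own memory requirement, and use the total for the requests.memory and limit.memory values.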
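For reference, the fragment below is a minimal sketch of how the relevant portions of the custom values files might look after the changes in this procedure. It assumes that the dotted parameter paths used above map directly onto YAML nesting; the surrounding structure, the placement of the consumeradapter section, and the 248Gi sizing are illustrative and must be adapted to the actual deployment.

ocnaddkafka:
  ocnadd:
    kafkaBroker:
      kafkaProperties:
        ramDriveStorage: true                    # topic data is kept in memory instead of on disk
        offsetsTopicReplicationFactor: 2         # reduced from 3 as described above
      resource:
        requests:
          memory: 248Gi                          # example sizing: 48Gi broker + 200Gi in-memory retention
        limit:
          memory: 248Gi                          # keep the limit aligned with the request

consumeradapter:                                 # consumeradapter section (in ocnaddadmin)
  env:
    ADAPTER_KAFKA_PROCESSING_GUARANTEE: at_least_once   # changed from exactly_once_v2; duplicates are
                                                         # possible on consumer rebalancing or broker restarts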
8.2 Disable RAM Storage in Kafka Cluster
This procedure is required to support persistent storage in the Kafka cluster. It should be executed on all the applicable worker groups where persistent storage is needed. Perform the following steps:
- Disable RAM storage in the custom values file: Update the custom values file of the corresponding worker group or default worker group, for example, ocnadd-custom-values-wg1.yaml.
  Note: Ensure that the corresponding parameters are restored from the backup of the worker group custom values file taken during the Enable RAM Storage procedure.
- Update the following parameters in the custom values file ocnadd-custom-values-wg1.yaml of the worker group in the ocnaddkafka section (an example configuration fragment is provided after this procedure):
  ocnaddkafka.ocnadd.kafkaBroker.kafkaProperties.ramDriveStorage: true ==================> set it to false
  ocnaddkafka.ocnadd.kafkaBroker.resource.requests.memory: <248Gi> ==================> set it to 48Gi (the previously configured value)
  ocnaddkafka.ocnadd.kafkaBroker.resource.limit.memory: <248Gi> ==================> set it to 48Gi (the previously configured value)
  ocnaddkafka.ocnadd.kafkaBroker.pvcClaimSize: <previous value> ==================> set it to the previously configured value
- Update to the values below when higher throughput with lower latency is needed (see the alternative fragments after this procedure).
  Note: This can result in lower message reliability in case a Kafka broker goes down.
  offsetsTopicReplicationFactor: 1
  transactionStateLogReplicationFactor: 1
- Alternatively, update to the values below when higher message reliability is required (RF > 1).
  Note: This can result in lower throughput and higher latency if the Kafka cluster disk IOPS and cluster network bandwidth are less performant.
  offsetsTopicReplicationFactor: 2
  transactionStateLogReplicationFactor: 2
- Update the following parameter under the consumeradapter section in ocnaddadmin:
  consumeradapter.env.ADAPTER_KAFKA_PROCESSING_GUARANTEE: at_least_once ===============> change it to "exactly_once_v2"
- Uninstall the worker group, for example, dd-worker-group1, where RAM-based storage is being disabled.
  Perform steps 1–4 from the section “Uninstalling OCNADD in Centralized Deployment Mode” in the Oracle Communications Network Analytics Data Director Install, Upgrade, and Fault Recovery Guide.
  Note: During uninstallation of the default worker group, the backup PVC is not deleted during Kafka broker/Kraft-controller PVC deletion. The “delete all PVC” command should be modified appropriately to delete only the required Kafka broker/Kraft-controller PVCs.
- Re-install the same worker group, for example, dd-worker-group1, where RAM-based storage is being disabled.
  - Ensure that the custom values file containing the changes for disabling RAM storage is used (refer to Step 1).
  - Perform the steps mentioned in the section “Installing Worker Group” in the Oracle Communications Network Analytics Data Director Install, Upgrade, and Fault Recovery Guide.
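For reference, the fragment below is a minimal sketch of how the relevant portions of the custom values files might look after RAM storage has been disabled. It assumes that the dotted parameter paths used above map directly onto YAML nesting; the memory and PVC sizes shown are illustrative and must be replaced with the values from the backup taken before RAM storage was enabled.

ocnaddkafka:
  ocnadd:
    kafkaBroker:
      kafkaProperties:
        ramDriveStorage: false                   # topic data is persisted to disk again
      pvcClaimSize: 200Gi                        # restore the previously configured PVC size (illustrative)
      resource:
        requests:
          memory: 48Gi                           # restore the previously configured broker memory
        limit:
          memory: 48Gi

consumeradapter:                                 # consumeradapter section (in ocnaddadmin)
  env:
    ADAPTER_KAFKA_PROCESSING_GUARANTEE: exactly_once_v2   # restored from at_least_once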
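The replication factor choice described above can likewise be expressed as one of the following two alternative fragments. Placing these keys under ocnaddkafka.ocnadd.kafkaBroker.kafkaProperties is an assumption based on the offsetsTopicReplicationFactor path used earlier in this chapter; apply only one of the two variants.

# Variant 1: favour throughput and latency (lower reliability if a broker goes down)
ocnaddkafka:
  ocnadd:
    kafkaBroker:
      kafkaProperties:
        offsetsTopicReplicationFactor: 1
        transactionStateLogReplicationFactor: 1

# Variant 2: favour message reliability (RF > 1); may reduce throughput and increase latency
# if the disk IOPS or cluster network bandwidth are less performant
ocnaddkafka:
  ocnadd:
    kafkaBroker:
      kafkaProperties:
        offsetsTopicReplicationFactor: 2
        transactionStateLogReplicationFactor: 2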