13 Enable Kafka Feed Configuration Support
This chapter lists the prerequisites for both NF producers and third-party consumer applications to communicate securely with the Data Director Kafka cluster. Additionally, the section lists the necessary configuration settings to be applied at the Kafka broker.
Certain prerequisites must be fulfilled before the external Kafka feed can operate effectively for consumer applications. Some of these prerequisites can impact communication with producer clients, particularly if any client ACL (Access Control List) rules are configured in Kafka. In that case, Kafka authenticates and authorizes each client, potentially disrupting existing clients that are not already using SASL_SSL or SSL (mTLS) connections. It is advisable to follow the recommendations outlined in the Oracle Communications Network Analytics Security Guide to ensure a seamless transition.
13.1 Prerequisites for NF Producers
This applies to all Oracle NF producers (SCP, NRF, and SEPP).
Ensure that the producers use either SASL_SSL or SSL (mTLS) to establish a connection with the Data Director Kafka cluster. The Data Director exposes SASL_SSL on port 9094 and SSL on port 9093.
- SASL_SSL:
- Bootstrap Server List: KAFKABROKER_0_LB_IP:9094, KAFKABROKER_1_LB_IP:9094, KAFKABROKER_2_LB_IP:9094 or KAFKABROKER_0_FQDN:9094, KAFKABROKER_1_FQDN:9094, KAFKABROKER_2_FQDN:9094
- security.protocol: SASL_SSL
- sasl.mechanism: PLAIN
- Verify and update JAAS configuration - The Java Authentication and Authorization Service (JAAS) user used in the producer configuration must be available in the Data Director Kafka broker's JAAS configuration. It is recommended that each of the NFs configures its own JAAS user. Make a list of NF producer SASL users and follow the steps in Update Kafka Broker Configuration for updating the Data Director Broker configuration.
- Create Client ACL with configured JAAS user on the Data Director. See Creating Client ACL with SASL username.
- SSL:
- Bootstrap Server List: KAFKABROKER_0_LB_IP:9093, KAFKABROKER_1_LB_IP:9093, KAFKABROKER_2_LB_IP:9093 or KAFKABROKER_0_FQDN:9093, KAFKABROKER_1_FQDN:9093, KAFKABROKER_2_FQDN:9093
- security.protocol: SSL
- Ensure that SSL parameters are set correctly and that SSL certificates are generated using the same CA root as the Data Director. Refer to the section "Certificate and Secret Generation" in Oracle Communications Network Analytics Security Guide for recommendations.
- Create Client ACL on the Data Director with CN name in the client SSL certificate. See Creating Client ACL with CN Name from SSL Client Certificate.
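The producer-side settings above can be sketched programmatically. The following is a minimal, illustrative Python helper, not part of the product; the broker names and the "scpuser"/"scp" credentials are placeholder assumptions matching the examples used later in this chapter:

```python
# Illustrative sketch: assembles the minimal Kafka client properties for the
# two connection modes the Data Director exposes (SASL_SSL on 9094, SSL on 9093).
# Broker names and credentials below are placeholders, not product defaults.

def connection_properties(mode, brokers, sasl_user=None, sasl_password=None):
    """Return Kafka client properties for SASL_SSL (port 9094) or SSL (port 9093)."""
    port = {"SASL_SSL": 9094, "SSL": 9093}[mode]
    props = {
        "bootstrap.servers": ",".join(f"{b}:{port}" for b in brokers),
        "security.protocol": mode,
    }
    if mode == "SASL_SSL":
        # SASL users must also exist in the broker's JAAS configuration.
        props["sasl.mechanism"] = "PLAIN"
        props["sasl.jaas.config"] = (
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            f'username="{sasl_user}" password="{sasl_password}";'
        )
    return props

props = connection_properties(
    "SASL_SSL",
    ["KAFKABROKER_0_FQDN", "KAFKABROKER_1_FQDN", "KAFKABROKER_2_FQDN"],
    sasl_user="scpuser", sasl_password="scp",
)
```

For the SSL (mTLS) mode, the truststore/keystore settings described above must be added alongside the returned properties.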
Note:
If any of the prerequisites are not met, the external Kafka feed must not be created, as it can disrupt the traffic between NF producers and the Data Director and may impact other feeds (HTTP2/Synthetic).
13.2 Prerequisites for External Consumers
The external consumer application requires user-based authorization. However, the Data Director does not offer a Kafka user creation interface, either within the UI or through backend services. Consequently, the user must already exist in the Kafka JAAS configuration, which cannot be updated without restarting the Kafka broker. For this reason, the Data Director does not enforce user verification during Kafka Feed creation; instead, it mandates that the user be created before initiating Kafka Feed setup using the UI.
For enhanced security, it is essential to create the user for the external consumer application in Kafka's SCRAM configuration as well. The following steps provide a detailed explanation of this process.
- Create the ACL user in the Kafka JAAS and SCRAM configuration, and use the same ACL user name while creating the Kafka Feed from the UI. Without this, the external consumer application will not be able to consume from Kafka.
To create the ACL user:
- Update Kafka Broker JAAS configuration for the external feed user, refer to the section Updating JAAS Configuration with Users.
- Update the Kafka SCRAM configuration with the external feed user, refer to the section Updating SCRAM Configuration with Users. This step must be done only after the JAAS configuration is updated in Kafka Broker.
- The external Kafka consumer application must support SASL_SSL to communicate with Data Director's Kafka server.
- The external Kafka consumer application must create SSL certificates to communicate with Data Director's Kafka server. See the "Certificate and Secret Generation" section in Oracle Communications Network Analytics Data Director Security Guide for recommendations.
13.3 Updating OCNADD Configuration
After ensuring that the prerequisites are met, perform the following procedure to enable external Kafka Feed support:
- Update OCNADD intra TLS Configuration
See "Internal TLS Communication" section in Oracle Communications Network Analytics Security Guide.
- Update Kafka Broker Configuration
- Update the ocnadd-custom-values-23.3.0.yaml file as shown below:

    ssl:
      intraTlsEnabled: true          ## Default is false, change it to true
    acl:
      kafkaClientAuth: required      ## Default is none, change it to required
      aclNotAllowed: false           ## Default is true, change it to false

- Add the ACL users for Oracle NF producers or external consumer applications inside the kafka_server_jaas.conf file (<chart-path>/helm-charts/charts/ocnaddkafka/config). For more information, see Updating JAAS Configuration with Users.
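The three value changes can be expressed as a small patch. The following is a minimal Python sketch (not an Oracle-provided tool), assuming the custom-values file has been loaded into a plain dict whose key paths mirror the YAML keys:

```python
# Illustrative sketch: the three ocnadd-custom-values settings that enable
# external Kafka feed support, applied as a dict patch.

def enable_external_feed(values):
    """Flip the TLS/ACL flags required for external Kafka feed support."""
    values.setdefault("ssl", {})["intraTlsEnabled"] = True   # default: false
    acl = values.setdefault("acl", {})
    acl["kafkaClientAuth"] = "required"                      # default: none
    acl["aclNotAllowed"] = False                             # default: true
    return values

values = enable_external_feed({"ssl": {"intraTlsEnabled": False}})
```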
- Perform helm upgrade

    During the Helm upgrade process, generic ACLs for the NF Kafka producer clients are established. This safeguards the flow of data from Oracle NF producers, ensuring uninterrupted operation even when external Kafka feeds are created with access control mechanisms in place.
- Run the following command to upgrade:

    helm upgrade <release_name> <chart_name> -f ocnadd-custom-values.yaml -n <namespace-name> --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true

    For example:

    helm upgrade ocnadd ocnadd -f ocnadd-custom-values-23.3.0.yaml -n ocnadd-deploy --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true

    Note:
    In Step 2, if intraTlsEnabled was already enabled, use the below command:

    helm upgrade <release_name> <chart_name> -f ocnadd-custom-values.yaml -n <namespace-name> --set global.acl.genericAclAllowed=true

    For example:

    helm upgrade ocnadd ocnadd -f ocnadd-custom-values-23.3.0.yaml -n ocnadd-deploy --set global.acl.genericAclAllowed=true

- Verify that all pods are in the Running state after the helm upgrade.
- Update the Kafka NF Producer Client ACLs

    The NF producer client ACLs need to be updated from the generic ACLs to the specific producer client ACLs.
- See Creating Client ACLs for creating the specific NF producer client ACLs.
- See Deleting Generic Producer Client ACLs for deleting the generic NF producer client ACLs.
- Add the external consumer application users in SCRAM
- Create the external Kafka Feed using OCNADD UI

    Continue by initiating the creation of an external Kafka Feed through the OCNADD UI. Once you have configured the external Kafka Feed within the UI, proceed to configure the external Kafka consumer application, ensuring the inclusion of the following essential Kafka consumer configuration details:
- Set the following in the Consumer properties:

    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="extuser1" password="extuser1";
    ssl.truststore.location=<truststore location>
    ssl.truststore.password=<truststore pass>
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore pass>
    ssl.key.password=<keystore pass>

- The bootstrap server list should be set up using the information provided in the response of feed creation on the UI. It should be similar to the below:

    Bootstrap Server List: KAFKABROKER-0_LB_IP:9094, KAFKABROKER-1_LB_IP:9094, KAFKABROKER-2_LB_IP:9094

- Topic Name: Extract this from the UI after successfully creating the feed.
- Number of Partitions: Retrieve this from the UI after successfully creating the feed.
- Consumer Group Name: Obtain this from the UI after successfully creating the feed; it should match the Kafka feed name.
- ACL User Name: Utilize the same ACL username as provided
during feed creation, for example,
extuser1.
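The consumer configuration details above can be assembled into a properties file programmatically. The following is a minimal, illustrative Python sketch; "extuser1", the store paths, and "changeit" are placeholders to be replaced with the real feed values:

```python
# Illustrative sketch: renders the external consumer's properties file from
# the values returned by the UI at feed creation. All arguments below are
# placeholders, not product defaults.

def consumer_properties(bootstrap, user, password, truststore, keystore, store_pass):
    lines = [
        f"bootstrap.servers={bootstrap}",
        "security.protocol=SASL_SSL",
        "sasl.mechanism=SCRAM-SHA-256",
        ("sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule "
         f'required username="{user}" password="{password}";'),
        f"ssl.truststore.location={truststore}",
        f"ssl.truststore.password={store_pass}",
        f"ssl.keystore.location={keystore}",
        f"ssl.keystore.password={store_pass}",
        f"ssl.key.password={store_pass}",
    ]
    return "\n".join(lines)

text = consumer_properties(
    "KAFKABROKER-0_LB_IP:9094,KAFKABROKER-1_LB_IP:9094,KAFKABROKER-2_LB_IP:9094",
    "extuser1", "extuser1", "/path/trustStore.p12", "/path/keyStore.p12", "changeit")
```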
13.4 Updating JAAS Configuration with Users
Updating JAAS Config File
- Edit <chart-path>/helm-charts/charts/ocnaddkafka/config/kafka_server_jaas.conf.
- Update the user JAAS configuration as described below:
Note:
The below example uses username: ocnadd and password: ocnadd.

    =================Existing File Content===================
    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="ocnadd"
        password="ocnadd"
        user_ocnadd="ocnadd";
    };
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="ocnadd"
        password="ocnadd";
    };

    =================Updated File Content===================
    # After Adding ACL User as extuser1 (password: extuser1) inside SCRAM Login Module for external application1
    # After Adding ACL User as extuser2 (password: extuser2) inside SCRAM Login Module for external application2
    # After Adding ACL User as scpuser (password: scp) inside PLAIN Login Module for Oracle NF SCP, assuming that SCP has configured "scpuser" as SASL user in its producer configuration
    # After Adding ACL User as nrfuser (password: nrf) inside PLAIN Login Module for Oracle NF NRF, assuming that NRF has configured "nrfuser" as SASL user in its producer configuration
    # After Adding ACL User as seppuser (password: sepp) inside PLAIN Login Module for Oracle NF SEPP, assuming that SEPP has configured "seppuser" as SASL user in its producer configuration
    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="ocnadd"
        password="ocnadd"
        user_ocnadd="ocnadd"
        user_scpuser="scp"
        user_nrfuser="nrf"
        user_seppuser="sepp";
        org.apache.kafka.common.security.scram.ScramLoginModule required
        user_extuser1="extuser1"
        user_extuser2="extuser2";
    };
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="ocnadd"
        password="ocnadd";
    };
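Since the broker must be restarted for JAAS changes to take effect, it is worth checking the file before the restart. The following is a minimal, illustrative Python sketch that lists the user_* entries in a JAAS file; the abbreviated sample content is adapted from the example above:

```python
# Illustrative sketch: extracts user_<name>="<password>" entries from a
# kafka_server_jaas.conf so a new user can be verified before restarting
# the broker. The sample text is abbreviated, not a full JAAS file.
import re

def jaas_users(conf_text):
    """Return {user: password} for every user_* entry in the JAAS file."""
    return dict(re.findall(r'user_(\w+)="([^"]*)"', conf_text))

sample = '''
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="ocnadd" password="ocnadd"
    user_ocnadd="ocnadd"
    user_scpuser="scp";
    org.apache.kafka.common.security.scram.ScramLoginModule required
    user_extuser1="extuser1";
};
'''
users = jaas_users(sample)
```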
13.5 Updating SCRAM Configuration with Users
- Access the Kafka Pod from the OCNADD deployment. For example, kafka-broker-0:

    kubectl exec -it kafka-broker-0 -n <namespace> -- bash

- Extract the SSL parameters from the Kafka broker environment by running the following command:

    env | grep -i pass

- Use the truststore and keystore passwords from the above command output to create the admin.properties file as below:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ocnadd" password="ocnadd";
    ssl.truststore.location=/var/securityfiles/keystore/trustStore.p12
    ssl.truststore.password=<truststore pass>
    ssl.keystore.location=/var/securityfiles/keystore/keyStore.p12
    ssl.keystore.password=<keystore pass>
    ssl.key.password=<keystore pass>

- Copy admin.properties to any of the Kafka broker containers. For example, kafka-broker-0:

    kubectl cp admin.properties <name-space>/kafka-broker-0:/home/ocnadd/

- Create the SCRAM user configuration for the external consumer application users. Follow the steps below:

    - Access the Kafka Pod from the OCNADD deployment. For example, kafka-broker-0:

        kubectl exec -it kafka-broker-0 -n <namespace> -- bash

    - Go to the Kafka bin folder (cd /home/ocnadd/kafka/bin) and run the below commands:

        ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser1],SCRAM-SHA-512=[password=extuser1]' --entity-type users --entity-name extuser1 --command-config ../../admin.properties
        ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser2],SCRAM-SHA-512=[password=extuser2]' --entity-type users --entity-name extuser2 --command-config ../../admin.properties
    Note:
    extuser1 and extuser2 have already been added in the JAAS config on the Kafka Server.
- To verify if the users are created in SCRAM, run the below command:

    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --describe --entity-type users --command-config ../../admin.properties

    Sample output:

    SCRAM credential configs for user-principal 'extuser1' are SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096
    SCRAM credential configs for user-principal 'extuser2' are SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096
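The --add-config argument in the commands above follows a fixed pattern per user. The following is a minimal, illustrative Python sketch that builds it; the iteration count of 8192 matches the example commands:

```python
# Illustrative sketch: builds the --add-config value passed to kafka-configs.sh
# when registering a SCRAM user, mirroring the commands shown above.

def scram_add_config(password, iterations=8192):
    """Return the SCRAM credential spec for one user."""
    return (f"SCRAM-SHA-256=[iterations={iterations},password={password}],"
            f"SCRAM-SHA-512=[password={password}]")

arg = scram_add_config("extuser1")
```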
13.6 Creating Client ACLs
This section describes the steps to create client ACLs (Access Control Lists) using different methods.
13.6.1 Creating Client ACL with SASL username
Follow the steps below:
- Access any pod within the OCNADD deployment, for instance, kafka-broker-0. Use the following command:

    kubectl exec -it kafka-broker-0 -n <namespace> -- bash

- Run the command below to create an ACL with SASL username:

    curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{
        "principal": "<aclUser>",
        "resourceName": "<topic_name>",   # Provide the topic name
        "aclOperation": "WRITE"
    }'

    Examples:

    - Create an ACL for the 'scpuser' to permit WRITE access on the SCP topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "scpuser", "resourceName": "SCP", "aclOperation": "WRITE" }'

    - Create an ACL for the 'nrfuser' to grant WRITE access on the NRF topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "nrfuser", "resourceName": "NRF", "aclOperation": "WRITE" }'

    - Create an ACL for the 'seppuser' allowing WRITE access on the SEPP topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "seppuser", "resourceName": "SEPP", "aclOperation": "WRITE" }'
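The request body is the same JSON shape in every example. The following is a minimal, illustrative Python sketch that constructs it for the configuration service's /v1/client-acl endpoint; it only builds the payload and does not perform the HTTP call:

```python
# Illustrative sketch: constructs the JSON body sent to the
# /v1/client-acl endpoint, mirroring the curl examples above.
import json

def client_acl_payload(principal, topic, operation="WRITE"):
    """Return the client-ACL request body as a JSON string."""
    return json.dumps({
        "principal": principal,      # SASL user or certificate CN
        "resourceName": topic,       # topic name, e.g. SCP, NRF, SEPP
        "aclOperation": operation,
    })

payload = client_acl_payload("scpuser", "SCP")
```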
13.6.2 Creating Client ACL with CN Name from SSL Client Certificate
Follow the steps below:
- Access any pod within the OCNADD deployment, such as 'kafka-broker-0', using the following command:

    kubectl exec -it kafka-broker-0 -n <namespace> -- bash

- Run the command below to create a client ACL with the CN Name from the SSL client certificate:

    curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{
        "principal": "<aclUser>",
        "resourceName": "<topic_name>",   # Provide the topic name
        "aclOperation": "WRITE"
    }'

    Here, <aclUser> is the CN Name configured in the client SSL certificate. Extract the CN Name from the SCP, NRF, and SEPP client SSL certificates used between NF and Kafka communication. The commands below that create the aclUser are examples; the actual CN Name may differ in the client SSL certificate.

    Examples:

    - Create an ACL for the CN name 'scp-worker', allowing WRITE access on the SCP topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "scp-worker", "resourceName": "SCP", "aclOperation": "WRITE" }'

    - Create an ACL for the CN name 'nrf-gw', granting WRITE access on the NRF topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "nrf-gw", "resourceName": "NRF", "aclOperation": "WRITE" }'

    - Create an ACL for the CN name 'sepp-gw', permitting WRITE access on the SEPP topic:

        curl -k --location --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "sepp-gw", "resourceName": "SEPP", "aclOperation": "WRITE" }'
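Extracting the CN from the client certificate's subject can be done with a small helper. The following is a minimal, illustrative Python sketch; the subject strings are made-up examples, and in practice the real subject can be read with: openssl x509 -noout -subject -in client.crt

```python
# Illustrative sketch: pulls the CN out of a certificate subject string so it
# can be used as the ACL principal. Subject formats vary (comma- or
# slash-separated), so both are handled.
import re

def common_name(subject):
    """Return the CN component of an X.509 subject string, or None."""
    m = re.search(r'CN\s*=\s*([^,/]+)', subject)
    return m.group(1).strip() if m else None

cn = common_name("C=US, O=Example, CN=scp-worker")
```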
13.7 Deleting Generic Producer Client ACLs
To delete generic producer client ACLs, follow the steps given below:
- Access any pod within the OCNADD deployment, like 'kafka-broker-0', using this command:

    kubectl exec -it kafka-broker-0 -n <namespace> -- bash

- Run the following commands:

    curl -k --location --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{
        "principal": "*",
        "resourceName": "<topic_name>",   # Provide the topic name
        "aclOperation": "WRITE"
    }'

    Note:
    Make sure to replace <topic_name> with the actual topic name as required, as shown in the examples below.

    Examples:

    - Delete the ACL for the SCP topic:

        curl -k --location --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "SCP", "aclOperation": "WRITE" }'

    - Delete the ACL for the NRF topic:

        curl -k --location --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "NRF", "aclOperation": "WRITE" }'

    - Delete the ACL for the SEPP topic:

        curl -k --location --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v1/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "SEPP", "aclOperation": "WRITE" }'