11 Enable or Disable Kafka Feed Configuration Support
This section explains how to enable or disable Kafka Feed configuration in OCNADD.
11.1 Enable Kafka Feed Configuration Support
This section lists the prerequisites for both NF producers and third-party consumer applications to communicate securely with the Data Director Kafka cluster. Additionally, it lists the necessary configuration settings to be applied at the Kafka broker.
Certain prerequisites must be fulfilled before the external Kafka feed can operate effectively for consumer applications. Notably, some of these prerequisites have the potential to impact communication with producer clients, particularly if any client ACL (Access Control List) rules are configured in Kafka. In such cases, Kafka will authenticate and authorize each client, potentially causing disruption for existing clients if they are not already utilizing SASL_SSL or SSL (mTLS) connections. It is advisable to follow recommendations outlined in the Oracle Communications Network Analytics Security Guide to ensure a seamless transition.
Note:
The following procedure must be executed on the worker group for which the Kafka feed configuration support is being enabled.
11.1.1 Prerequisites for NF Producers
This applies to all Oracle NF producers (SCP, NRF, PCF, BSF, and SEPP).
Ensure that the producers use either SASL_SSL or SSL (mTLS) to establish a connection with the Data Director Kafka cluster. The Data Director exposes SASL_SSL on port 9094 and SSL on port 9093.
- SASL_SSL:
- Bootstrap Server List: KAFKABROKER_0_LB_IP:9094, KAFKABROKER_1_LB_IP:9094, KAFKABROKER_2_LB_IP:9094 or KAFKABROKER_0_FQDN:9094, KAFKABROKER_1_FQDN:9094, KAFKABROKER_2_FQDN:9094
- security.protocol: SASL_SSL
- sasl.mechanism: PLAIN
- Verify and update JAAS configuration - The Java Authentication and Authorization Service (JAAS) user used in the producer configuration must be available in the Data Director Kafka broker's JAAS configuration. It is recommended that each of the NFs configures its own JAAS user. Make a list of NF producer SASL users and follow the steps in Update Kafka Broker Configuration for updating the Data Director Broker configuration.
- Create Client ACL with configured JAAS user on the Data Director. See Creating Client ACL with SASL username.
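Taken together, the SASL_SSL settings above correspond to a producer configuration along the following lines. This is an illustrative sketch only: the user name "scpuser", its password, and the truststore path are assumed placeholders, and must match the JAAS user actually added to the Data Director broker configuration.

```properties
# Hypothetical producer.properties for an NF producer connecting over SASL_SSL (port 9094)
bootstrap.servers=KAFKABROKER_0_LB_IP:9094,KAFKABROKER_1_LB_IP:9094,KAFKABROKER_2_LB_IP:9094
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# This user must exist in the Data Director Kafka broker's JAAS configuration
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="scpuser" password="scp";
# Truststore generated from the same CA root as the Data Director
ssl.truststore.location=/path/to/trustStore.p12
ssl.truststore.password=<truststore pass>
```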
- SSL:
- Bootstrap Server List: KAFKABROKER_0_LB_IP:9093, KAFKABROKER_1_LB_IP:9093, KAFKABROKER_2_LB_IP:9093 or KAFKABROKER_0_FQDN:9093, KAFKABROKER_1_FQDN:9093, KAFKABROKER_2_FQDN:9093
- security.protocol: SSL
- Ensure that SSL parameters are set correctly and that SSL certificates are generated using the same CA root as the Data Director. Refer to the section "Certificate and Secret Generation" in Oracle Communications Network Analytics Security Guide for recommendations.
- Create Client ACL on the Data Director with CN name in the client SSL certificate. See Creating Client ACL with CN Name from SSL Client Certificate.
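Similarly, the SSL (mTLS) settings can be sketched as a producer configuration. Again this is illustrative: the keystore and truststore paths and passwords are placeholders, and the client certificate must be signed by the same CA root as the Data Director, because its CN becomes the ACL principal.

```properties
# Hypothetical producer.properties for an NF producer connecting over SSL/mTLS (port 9093)
bootstrap.servers=KAFKABROKER_0_LB_IP:9093,KAFKABROKER_1_LB_IP:9093,KAFKABROKER_2_LB_IP:9093
security.protocol=SSL
# Truststore generated from the same CA root as the Data Director
ssl.truststore.location=/path/to/trustStore.p12
ssl.truststore.password=<truststore pass>
# Client keystore; the certificate CN is used as the principal in the client ACL
ssl.keystore.location=/path/to/keyStore.p12
ssl.keystore.password=<keystore pass>
ssl.key.password=<keystore pass>
```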
Note:
If any of these prerequisites are not met, the external Kafka feed must not be created, as it can disrupt the traffic between the NF producers and the Data Director and may impact other feeds (HTTP2/Synthetic).
11.1.2 Prerequisites for External Consumers
The external consumer application requires user-based authorization. However, the Data Director does not offer a Kafka user creation interface, either within the UI or through backend services. Consequently, the user must already exist in the Kafka JAAS configuration, which cannot be updated without restarting the Kafka broker. This is why the Data Director does not enforce user verification during Kafka Feed creation, and instead mandates user creation before initiating Kafka Feed setup using the UI.
For enhanced security, it is essential to create the user for the external consumer application in Kafka's SCRAM configuration as well. The following steps provide a detailed explanation of this process.
- Create the ACL user in the Kafka JAAS and SCRAM configurations, and use the same ACL user name while creating the Kafka Feed from the UI. Without this, the external consumer application will not be able to consume from Kafka.
To create ACL user:
- Update Kafka Broker JAAS configuration for the external feed user, refer to the section Updating JAAS Configuration with Users.
- Update the Kafka SCRAM configuration with the external feed user, refer to the section Updating SCRAM Configuration with Users. This step must be done only after the JAAS configuration is updated in Kafka Broker.
- The external Kafka consumer application must support SASL_SSL to communicate with Data Director's Kafka server.
- The external Kafka consumer application must create SSL certificates to communicate with Data Director's Kafka server. See the "Certificate and Secret Generation" section in Oracle Communications Network Analytics Data Director Security Guide for recommendations.
11.1.3 Updating OCNADD Configuration
After ensuring that the prerequisites are met, perform the following procedure to enable external Kafka Feed support:
Non-Centralized Deployment Mode Configuration Update
Note:
This section is applicable to the Non-Centralized deployment mode.
- Update OCNADD intra TLS Configuration
See "Internal TLS Communication" section in Oracle Communications Network Analytics Security Guide.
- Update Kafka Broker Configuration
- Update the ocnadd-custom-values-25.1.200.yaml file as shown below:

    global:
      ssl:
        intraTlsEnabled: true
      acl:
        kafkaClientAuth: none    ## ---> update it to 'required'
        aclNotAllowed: true      ## ---> update it to 'false'
    ------------
    global:
      env:
        admin:
          OCNADD_UPGRADE_WG_NS: ocnadd-deploy  ## ---> update it with the ocnadd namespace, for example, ocnadd-deploy
- Add the ACL users for Oracle NF producers or external consumer applications inside the kafka_server_jaas.conf (<chart-path>/helm-charts/charts/ocnaddkafka/config). For more information, see Updating JAAS Configuration with Users.
- Perform helm upgrade. The helm upgrade creates the generic NF Kafka producer client ACLs so that the traffic from the Oracle NF producers is not disrupted in the case of any external Kafka feeds being created with access control.
- Run the following command to upgrade:

    helm upgrade <release_name> -f ocnadd-custom-values.yaml -n <namespace-name> <chart_name> --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true

  For example:

    helm upgrade ocnadd -f ocnadd-custom-values-25.1.200.yaml -n ocnadd-deploy ocnadd --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true
- Verify that all PODs are in running state after the helm upgrade.
- Update the Kafka NF Producer Client ACLs. The NF producer client ACLs need to be updated from the generic ACLs to the specific producer client ACLs.
- See Creating Client ACLs for creating the specific NF producer client ACLs.
- See Deleting Generic Producer Client ACLs for deleting the generic NF producer client ACLs.
- Add the external consumer application users in SCRAM. See Updating SCRAM Configuration with Users.
- Create the external Kafka Feed using the OCNADD UI. Once you have configured the external Kafka Feed within the UI, configure the external Kafka consumer application, ensuring the inclusion of the following essential Kafka consumer configuration details:
- Set the following in the Consumer properties:

    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="extuser1" password="extuser1";
    ssl.truststore.location=<truststore location>
    ssl.truststore.password=<truststore pass>
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore pass>
    ssl.key.password=<keystore pass>
Note:
If the Kafka Feed should be enabled on a worker group other than the default group, follow the section "Centralized Deployment Mode Configuration Update".
- The bootstrap server list should be set up using the information provided in the response of feed creation on the UI. It should be similar to the following:

    Bootstrap Server List: KAFKABROKER-0_LB_IP:9094, KAFKABROKER-1_LB_IP:9094, KAFKABROKER-2_LB_IP:9094
- Topic Name: Extract this from the UI after successfully creating the feed.
- Number of Partitions: Retrieve this from the UI after successfully creating the feed.
- Consumer Group Name: Obtain this from the UI after successfully creating the feed; it should match the Kafka feed name.
- ACL User Name: Utilize the same ACL username as provided during feed creation, for example, extuser1.
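As a quick end-to-end check, the feed details above can be plugged into Kafka's stock console consumer. The sketch below only assembles the command for review; TEST_FEED and the consumer.properties path are assumed placeholders to be replaced with the actual values returned by the UI after feed creation.

```shell
# Placeholders: substitute the values from the feed-creation response in the UI.
BOOTSTRAP="KAFKABROKER-0_LB_IP:9094,KAFKABROKER-1_LB_IP:9094,KAFKABROKER-2_LB_IP:9094"
TOPIC="TEST_FEED"   # assumed topic name; copy it from the UI after feed creation
GROUP="TEST_FEED"   # consumer group name; matches the Kafka feed name

# consumer.properties holds the SASL_SSL/SCRAM settings shown in the step above.
CMD="./kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP --topic $TOPIC --group $GROUP --consumer.config consumer.properties"
echo "$CMD"
```

Running the printed command from the Kafka bin directory should start consuming from the feed topic, provided the ACL user and certificates are set up correctly.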
Centralized Deployment Mode Configuration Update
This section is applicable to the following scenarios:
- Centralized deployment with the default group (possible only when the DD was upgraded from earlier supported releases to the 23.4.x release in Centralized deployment mode)
- Centralized deployment with one or more worker groups
- Update OCNADD intra TLS Configuration
See "Internal TLS Communication" section in Oracle Communications Network Analytics Security Guide.
Note:
If intra-TLS is enabled on any of the DD deployments, it must be enabled for all the available worker groups along with the management groups.
- Update the corresponding management group ocnadd-custom-values-25.1.200.yaml file as shown below:

    global:
      ssl:
        intraTlsEnabled: true
      acl:
        kafkaClientAuth: none    ## ---> update it to 'required'
        aclNotAllowed: true      ## ---> update it to 'false'
    ------------
    global:
      env:
        admin:
          OCNADD_UPGRADE_WG_NS: ocnadd-deploy-wg1,ocnadd-deploy-wg2  ## ---> update it with one or more namespaces of the worker groups or the default worker group
- Perform helm upgrade:
- Centralized deployment with the default group. Run the following command to upgrade:

    helm upgrade <mgmt-release_name> -f ocnadd-mgmt-custom-values.yaml -n <ocnadd-namespace> <helm_chart> --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true

  For example:

    helm upgrade ocnadd-mgmt -f ocnadd-custom-values-25.1.200.yaml -n ocnadd-deploy ocnadd --set global.acl.genericAclAllowed=true,global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true
- Centralized deployment with one or more worker groups. Run the following command to upgrade:

    helm upgrade <release_name> -f ocnadd-mgmt-custom-values.yaml -n <mgmt-group-namespace-name> <mgmt_helm_chart> --set global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true

  For example:

    helm upgrade ocnadd-mgmt -f ocnadd-custom-values-mgmt-group.yaml -n dd-mgmt-group ocnadd_mgmt --set global.env.admin.OCNADD_ADAPTER_UPGRADE_ENABLE=true,global.env.admin.OCNADD_CORR_UPGRADE_ENABLE=true
- Verify that all PODs are in running state after the helm upgrade in the respective namespaces.
- Update Kafka Broker Configuration
- Update the corresponding worker group custom values file ocnadd-custom-values.yaml as shown below:

    global:
      ssl:
        intraTlsEnabled: true
      acl:
        kafkaClientAuth: none    ## ---> update it to 'required'
        aclNotAllowed: true      ## ---> update it to 'false'
- Add the ACL users for Oracle NF producers or external consumer applications inside the kafka_server_jaas.conf (<chart-path>/helm-charts/charts/ocnaddkafka/config) of the worker group helm charts on which the Kafka Feeds should be enabled. For more information, see Updating JAAS Configuration with Users.
- Perform helm upgrade:
The helm upgrade creates the generic NF Kafka producer client ACLs so that the traffic from the Oracle NF producers is not disrupted in the case of any external Kafka feeds being created with access control.
- Use the below command to upgrade:

    helm upgrade <worker-group-release-name> -f ocnadd-custom-values-<wg1-group>.yaml --namespace <worker-group1-namespace> <helm_chart> --set global.acl.genericAclAllowed=true

  For example, for worker group1:

    helm upgrade ocnadd-wg1 -f ocnadd-custom-values-wg1-group.yaml --namespace dd-worker-group1 ocnadd_wg1 --set global.acl.genericAclAllowed=true
- Verify that all PODs are in running state after the helm upgrade.
- Update the Kafka NF Producer Client ACLs. The NF producer client ACLs need to be updated from the generic ACLs to the specific producer client ACLs.
- See Creating Client ACLs for creating the specific NF producer client ACLs.
- See Deleting Generic Producer Client ACLs for deleting the generic NF producer client ACLs.
- Add the external consumer application users in SCRAM. See Updating SCRAM Configuration with Users.
- Create the external Kafka Feed using the OCNADD UI. Once you have configured the external Kafka Feed within the UI, configure the external Kafka consumer application, ensuring the inclusion of the following essential Kafka consumer configuration details:
- Set the following in the Consumer properties:

    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="extuser1" password="extuser1";
    ssl.truststore.location=<truststore location>
    ssl.truststore.password=<truststore pass>
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore pass>
    ssl.key.password=<keystore pass>
- The bootstrap server list should be set up using the information provided in the response of feed creation on the UI. It should be similar to the following:

    Bootstrap Server List: KAFKABROKER-0_LB_IP:9094, KAFKABROKER-1_LB_IP:9094, KAFKABROKER-2_LB_IP:9094
- Topic Name: Extract this from the UI after successfully creating the feed.
- Number of Partitions: Retrieve this from the UI after successfully creating the feed.
- Consumer Group Name: Obtain this from the UI after successfully creating the feed; it should match the Kafka feed name.
- ACL User Name: Utilize the same ACL username as provided during feed creation, for example, extuser1.
11.1.4 Updating JAAS Configuration with Users
Perform this on the worker group in which the Kafka feed should be enabled:
Updating JAAS Config File
- Edit <chart-path-worker-group>/helm-charts/charts/ocnaddkafka/config/kafka_server_jaas.conf.
- Update the user JAAS configuration as described below:
Note:
The below example uses username: ocnadd and password: ocnadd.

    =================Existing File Content===================
    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="ocnadd"
        password="ocnadd"
        user_ocnadd="ocnadd";
    };
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="ocnadd"
        password="ocnadd";
    };

    =================Updated File Content===================
    # After Adding ACL User as extuser1 (password: extuser1) inside SCRAM Login Module for external application1
    # After Adding ACL User as extuser2 (password: extuser2) inside SCRAM Login Module for external application2
    # After Adding ACL User as scpuser (password: scp) inside PLAIN Login Module for Oracle NF SCP, assuming that SCP has configured "scpuser" as SASL user in its producer configuration
    # After Adding ACL User as nrfuser (password: nrf) inside PLAIN Login Module for Oracle NF NRF, assuming that NRF has configured "nrfuser" as SASL user in its producer configuration
    # After Adding ACL User as seppuser (password: sepp) inside PLAIN Login Module for Oracle NF SEPP, assuming that SEPP has configured "seppuser" as SASL user in its producer configuration
    # After Adding ACL User as bsfuser (password: bsf) inside PLAIN Login Module for Oracle NF BSF, assuming that BSF has configured "bsfuser" as SASL user in its producer configuration
    # After Adding ACL User as pcfuser (password: pcf) inside PLAIN Login Module for Oracle NF PCF, assuming that PCF has configured "pcfuser" as SASL user in its producer configuration
    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="ocnadd"
        password="ocnadd"
        user_ocnadd="ocnadd"
        user_scpuser="scp"
        user_nrfuser="nrf"
        user_seppuser="sepp"
        user_bsfuser="bsf"
        user_pcfuser="pcf";
        org.apache.kafka.common.security.scram.ScramLoginModule required
        user_extuser1="extuser1"
        user_extuser2="extuser2";
    };
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="ocnadd"
        password="ocnadd";
    };
11.1.5 Updating SCRAM Configuration with Users
Perform the below steps on the worker group whose Kafka cluster the SCRAM user configuration should be added to.
- Access the Kafka Pod from the OCNADD deployment. For example, kafka-broker-0:

    kubectl exec -it kafka-broker-0 -n <namespace> -- bash
- Extract the SSL parameters from the Kafka broker environment by running the following command:

    env | grep -i pass
- Use the truststore and keystore passwords from the above command output to create the admin.properties file as below:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ocnadd" password="ocnadd";
    ssl.truststore.location=/var/securityfiles/keystore/trustStore.p12
    ssl.truststore.password=<truststore pass>
    ssl.keystore.location=/var/securityfiles/keystore/keyStore.p12
    ssl.keystore.password=<keystore pass>
    ssl.key.password=<keystore pass>
- Copy admin.properties to any of the Kafka broker containers. For example, kafka-broker-0:

    kubectl cp admin.properties <worker-group-namespace>/kafka-broker-0:/home/ocnadd/
- Create the SCRAM User configuration for the external consumer application users by running the below commands from inside the Kafka Broker container, for example, kafka-broker-0:
  The following commands are for Kafka cluster deployment in ZooKeeper mode. This mode is deprecated and will be removed in future releases.

    cd /home/ocnadd/kafka/bin
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser1],SCRAM-SHA-512=[password=extuser1]' --entity-type users --entity-name extuser1 --command-config ../../admin.properties
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser2],SCRAM-SHA-512=[password=extuser2]' --entity-type users --entity-name extuser2 --command-config ../../admin.properties

  For the Kafka cluster deployment with KRaft controller mode:

    cd /home/ocnadd/kafka/bin
    # for user "extuser1"
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-512=[password=extuser1]' --entity-type users --entity-name extuser1 --command-config ../../admin.properties
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser1]' --entity-type users --entity-name extuser1 --command-config ../../admin.properties
    # for user "extuser2"
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-512=[password=extuser2]' --entity-type users --entity-name extuser2 --command-config ../../admin.properties
    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=extuser2]' --entity-type users --entity-name extuser2 --command-config ../../admin.properties
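Since the per-user kafka-configs.sh invocations above differ only in the user name, they can be generated with a small loop when many feed users are needed. This is an illustrative helper, not part of the product tooling; it assumes (as in the examples above) that each user's SCRAM password equals its user name, and it only prints the commands for review before they are run inside the broker container.

```shell
# Generate KRaft-mode SCRAM credential commands for a list of ACL users.
USERS="extuser1 extuser2"

for u in $USERS; do
  CMD512="./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-512=[password=$u]' --entity-type users --entity-name $u --command-config ../../admin.properties"
  CMD256="./kafka-configs.sh --bootstrap-server kafka-broker:9094 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=$u]' --entity-type users --entity-name $u --command-config ../../admin.properties"
  echo "$CMD512"
  echo "$CMD256"
done
```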
Note:
extuser1 and extuser2 have already been configured in the JAAS file on the Kafka server.
- To verify that the users are created in SCRAM, run the below command:

    ./kafka-configs.sh --bootstrap-server kafka-broker:9094 --describe --entity-type users --command-config ../../admin.properties

  Sample output:

    SCRAM credential configs for user-principal 'extuser1' are SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096
    SCRAM credential configs for user-principal 'extuser2' are SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096
11.1.6 Creating Client ACLs
This section describes the steps to create client ACLs (Access Control Lists) using different methods.
11.1.6.1 Creating Client ACL with SASL username
Log in to any container and execute the following curl command in the worker group in which the client ACLs should be created:
- Access any pod within the OCNADD deployment, for instance, kafka-broker-0. Use the following command:

    kubectl exec -it kafka-broker-0 -n <worker-group-namespace> -- bash
- Run the command to create an ACL with SASL username. Provide the name of the <workerGroup> in the below command:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "<aclUser>", "resourceName": "<topic_name>", # topic name has to be provided "aclOperation": "WRITE" }'
Examples:
- Create an ACL for the 'scpuser' to permit WRITE access on the SCP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "scpuser", "resourceName": "SCP", "aclOperation": "WRITE" }'
- Create an ACL for the 'nrfuser' to grant WRITE access on the NRF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "nrfuser", "resourceName": "NRF", "aclOperation": "WRITE" }'
- Create an ACL for the 'seppuser' allowing WRITE access on the SEPP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "seppuser", "resourceName": "SEPP", "aclOperation": "WRITE" }'
- Create an ACL for the 'bsfuser' allowing WRITE access on the BSF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "bsfuser", "resourceName": "BSF", "aclOperation": "WRITE" }'
- Create an ACL for the 'pcfuser' allowing WRITE access on the PCF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "pcfuser", "resourceName": "PCF", "aclOperation": "WRITE" }'
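The five example calls above differ only in the principal and topic, so the request body can be produced by a small helper function. This sketch is illustrative only; it prints the JSON payload for review instead of invoking the configuration service.

```shell
# Build the client-ACL request body used by the curl examples above.
acl_payload() {
  # $1 = principal (SASL user), $2 = topic name
  printf '{ "principal": "%s", "resourceName": "%s", "aclOperation": "WRITE" }' "$1" "$2"
}

# Example: payload for the 'scpuser' principal on the SCP topic.
PAYLOAD=$(acl_payload scpuser SCP)
echo "$PAYLOAD"
```

The printed payload can be passed as the --data-raw argument of the curl calls shown above.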
11.1.6.2 Creating Client ACL with CN Name from SSL Client Certificate
Log in to any container and run the following curl commands in the worker group in which the client ACLs should be created.
- Access any pod within the OCNADD deployment, such as 'kafka-broker-0', using the following command:

    kubectl exec -it kafka-broker-0 -n <worker-group-namespace> -- bash
- Run the command below to create a client ACL with the CN Name from the SSL client certificate:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "<aclUser>", "resourceName": "<topic_name>", # provide topic name "aclOperation": "WRITE" }'

  Here, <aclUser> is the CN Name configured in the client SSL certificate. Extract the CN Name from the SCP, NRF, BSF, PCF, and SEPP client SSL certificates used for NF-to-Kafka communication. The commands below that create the aclUser are examples; the actual CN Name may differ in the client SSL certificate.
Examples:
- Create an ACL for the CN name 'scp-worker', allowing WRITE access on the SCP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "scp-worker", "resourceName": "SCP", "aclOperation": "WRITE" }'
- Create an ACL for the CN name 'nrf-gw', granting WRITE access on the NRF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "nrf-gw", "resourceName": "NRF", "aclOperation": "WRITE" }'
- Create an ACL for the CN name 'sepp-gw', permitting WRITE access on the SEPP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "sepp-gw", "resourceName": "SEPP", "aclOperation": "WRITE" }'
- Create an ACL for the CN name 'bsf-gw', permitting WRITE access on the BSF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "bsf-gw", "resourceName": "BSF", "aclOperation": "WRITE" }'
- Create an ACL for the CN name 'pcf-gw', permitting WRITE access on the PCF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request POST 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "pcf-gw", "resourceName": "PCF", "aclOperation": "WRITE" }'
11.1.7 Deleting Generic Producer Client ACLs
Log in to any of the Data Director worker group pod containers and run the following curl commands in the worker group from which the generic producer client ACLs should be deleted:
- Access any pod within the OCNADD deployment, like 'kafka-broker-0', using this command:

    kubectl exec -it kafka-broker-0 -n <worker-group-namespace> -- bash
- Run the below command and provide the name of the <workerGroup>:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "<topic_name>", # provide topic name "aclOperation": "WRITE" }'
Note:
Make sure to replace <topic_name> with the actual topic name as required, as shown in the examples below.
Examples:
- Delete ACL for the SCP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "SCP", "aclOperation": "WRITE" }'
- Delete ACL for the NRF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "NRF", "aclOperation": "WRITE" }'
- Delete ACL for the SEPP topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "SEPP", "aclOperation": "WRITE" }'
- Delete ACL for the BSF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "BSF", "aclOperation": "WRITE" }'
- Delete ACL for the PCF topic:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "*", "resourceName": "PCF", "aclOperation": "WRITE" }'
11.2 Disabling Kafka Feed Configuration Support
This section describes the steps to be taken when external Kafka feeds are no longer needed within the Data Director deployment.
For the external Kafka Feed, TLS and access control settings are essential on the Kafka server. However, if external Kafka Feed support becomes unnecessary, access control within Kafka should be disabled.
The steps in this procedure should only be followed on the worker group in which the Kafka feed support is required to be disabled.
Note:
- If rolling back to a version without Kafka feed is supported, it is mandatory to delete the Producer Client ACLs and Kafka Feeds before initiating the rollback. Follow steps 1 and 3 for deleting the feeds and ACLs.
- When reverting to a version where Kafka feeds were supported and configured, there is no requirement to delete Kafka feeds and producer client ACLs.
- If it is not possible to delete the ACLs and feeds before the rollback, contact Oracle Support through My Oracle Support (MOS).
- Delete all the Kafka feeds using the UI. See Deleting Kafka Feed section.
Note:
Make sure to delete the producer client ACLs and the generic ACLs if they were not deleted previously; otherwise, continue with the next steps.
- Perform helm upgrade by following the steps below:
Note:
This step should be performed on the worker group where the feed support is to be disabled; this could be the default group or any other worker group. For the default group, use the corresponding charts and custom values.
- Helm Upgrade for disabling ACL support
  Edit the ocnadd-custom-values-25.1.200.yaml file and make the following updates in the global section:

    global:
      ssl:
        intraTlsEnabled: true     ## If intra-TLS connections are required, keep this as 'true'; else make it 'false'
      acl:
        kafkaClientAuth: required ## Update the kafkaClientAuth to none
        aclNotAllowed: false      ## Update the aclNotAllowed to true
- To upgrade, run the below command:

    helm upgrade <worker-group-release-name> -f ocnadd-custom-values-<wg1-group>.yaml --namespace <worker-group1-namespace> <helm_chart> --set global.acl.genericAclAllowed=true

  For example:

    helm upgrade ocnadd-wg1 -f ocnadd-custom-values-wg1-group.yaml --namespace dd-worker-group1 ocnadd_wg1 --set global.acl.genericAclAllowed=true
- After the helm upgrade, ensure that all pods are in a running state.
- Remove all the specific producer client ACLs from the worker group where the Kafka feed support should be disabled:
- Access any pod within the OCNADD deployment, for example, 'kafka-broker-0', using this command:

    kubectl exec -it kafka-broker-0 -n <worker-group-namespace> -- bash
- Run the command below and provide the name of the <workerGroup>:

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "<aclUser>", "resourceName": "<topic_name>", # provide topic name "aclOperation": "WRITE" }'
Examples:
- Delete ACL for the SCP topic, assuming the SCP producer's <aclUser> name is 'scpuser':

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "scpuser", "resourceName": "SCP", "aclOperation": "WRITE" }'
- Delete ACL for the NRF topic, assuming the NRF producer ACL user name is 'nrfuser':

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "nrfuser", "resourceName": "NRF", "aclOperation": "WRITE" }'
- Delete ACL for the SEPP topic, assuming the SEPP producer ACL user name is 'seppuser':

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "seppuser", "resourceName": "SEPP", "aclOperation": "WRITE" }'
- Delete ACL for the BSF topic, assuming the BSF producer ACL user name is 'bsfuser':

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "bsfuser", "resourceName": "BSF", "aclOperation": "WRITE" }'
- Delete ACL for the PCF topic, assuming the PCF producer ACL user name is 'pcfuser':

    curl -k --location --cert-type P12 --cert /var/securityfiles/keystore/clientKeyStore.p12:$KEYSTORE_PASS --request DELETE 'https://ocnaddconfiguration:12590/ocnadd-configuration/v2/<workerGroup>/client-acl' --header 'Content-Type: application/json' --data-raw '{ "principal": "pcfuser", "resourceName": "PCF", "aclOperation": "WRITE" }'
- Access any pod within the OCNADD deployment, for example,
'kafka-broker-0', using this
command:
Note:
Delete the generic ACLs if not already deleted; see the Deleting Generic Producer Client ACLs section.