11 Installing BRM REST Services Manager

Learn how to use the Oracle Communications Billing and Revenue Management (BRM) REST Services Manager package to install both the BRM REST Services Manager API and the BRM REST Services Manager SDK on your system.

Topics in this document:

  • Downloading the BRM REST Services Manager Package
  • Installing BRM REST Services Manager
  • BRM REST Services Manager Postinstallation Tasks
  • Customizing REST Services with the Mapper Framework

For more information about the BRM REST Services Manager API and SDK, see REST Services Manager API for Billing and Revenue Management.

Downloading the BRM REST Services Manager Package

You can download and install the BRM REST Services Manager software from one of the following locations:

Search for and download the Oracle Communications Billing and Revenue Management 15.2.x.y.0 software, where x refers to the maintenance release and y refers to the patch set release of BRM that you are installing.

The Zip archive includes the BRM REST Services Manager installer: BRM_REST_Services_Manager-15.2.x.y.0.jar

Installing BRM REST Services Manager

The BRM REST Services Manager package installs both the BRM REST Services Manager API and the BRM REST Services Manager SDK on your system.

You can install BRM REST Services Manager in the following modes:

Installing BRM REST Services Manager in GUI Mode

To install BRM REST Services Manager in GUI mode:

  1. Go to the temp_dir directory and run one of these commands:

    • To start the GUI installer:

      Java_home/bin/java -jar jarFile

      where:

      • Java_home is the directory in which you installed the latest compatible Java version.

      • jarFile is the BRM installer file. For example:

        BRM_REST_Services_Manager-15.2.0.0.0.jar
    • To start the GUI installer and install BRM REST Services Manager using the oraInventory directory in a different location:

      Java_home/bin/java -jar jarFile -invPtrLoc FilePath/oraInst.loc

      where FilePath is the path to the directory in which the oraInst.loc file is located.

    • To start the GUI installer and create a silent installer response file during the installation:

      Java_home/bin/java -jar jarFile -record -destinationFile path

      where path is the absolute path to the response file.

    Tip:

    You can run the following command to get more details about the options.

    Java_home/bin/java -jar jarFile -help
  2. In the Welcome window, click Next.

    Note:

    If the oraInst.loc file is corrupt or not present, the Installation Inventory window appears next. Otherwise, the Installation Location window appears.

  3. (Optional) In the Installation Inventory window, enter the details listed in Table 11-1 and then click Next.

    Table 11-1 Installation Inventory

    Field Description

    Inventory Directory

    The full path of the inventory directory.

    The default oraInventory directory location is specified in the /etc/oraInst.loc file (Linux).

    Operating System Group

    The name of the operating system group that has write permission to the inventory directory.

  4. In the Installation Location window, enter the full path or browse to the directory in which you want to install BRM REST Services Manager and then click Next.

  5. In the Feature Sets Selection window, select the components to install, deselect any components that you do not want to install, and then click Next.

    Note:

    You cannot deselect a component if it is required to install any of the selected components.

    If you are installing an optional component for the first time after an upgrade, and you see an error similar to the following, see "Problem: An Error Occurred When Selecting an Optional Component to Install".

    The distribution Billing Revenue Management 15.2.0.0.0 contains incompatible features with the following:
    Billing Revenue Management 15.0.1.0.0 [CORE 15.2.0.0.0->CORE 15.0.1.0.0 (CORE 15.2.0.0.0->[CORE 15.0.1.0.0])]
  6. In the Server Details window, enter the details listed in Table 11-2 for the server on which you want to deploy BRM REST Services Manager and then click Next.

    Table 11-2 Server Details

    Field Description

    Host Name

    The host name of the computer on which the server is configured.

    Port Number

    The port number available on the host.

    SSL Port Number

    The SSL port number on which you want to install BRM REST Services Manager.

    KeyStore Location

    The path of the client-side KeyStore file you generated.

    KeyStore Password

    The password used for accessing the KeyStore file.

  7. In the Base URL Details window, enter the base URL to include in the responses to BRM REST Services Manager requests and then click Next.

  8. In the BRM Connection Details window, enter the details listed in Table 11-3 for connecting to the BRM server and then click Next.

    Table 11-3 BRM Connection Details

    Field Description

    User Name

    The user name for connecting to BRM.

    Password

    The password of the BRM user.

    Host Name

    The host name of the computer on which the primary BRM Connection Manager (CM) or CM Master Process (CMMP) is running.

    Port Number

    The TCP port number of the CM or CMMP on the host computer. The default value is 11960.

    Service Type

    The BRM service type. The default value is /service/pcm_client.

    Service POID Id

    The POID of the BRM service. The default value is 1.

    Use SSL?

    Do one of the following:

    • If you have not enabled SSL for BRM, deselect this check box.

    • If you have enabled SSL for BRM, select this check box.

    Wallet Password

    The password for the BRM wallet.

    Confirm Wallet Password

    Enter the password for the BRM wallet again.

  9. In the Security Details window, do one of the following and then click Next.

    • If you want to configure BRM REST Services Manager securely, select Yes.

    • If you want to configure BRM REST Services Manager in a test installation mode, select No and go to step 11.

  10. In the Identity Provider Details window, enter the details listed in Table 11-4 and then click Next.

    Table 11-4 Identity Provider Details

    Field Description

    Identity URL

    The base URL of your Identity Provider (IdP) server.

    Scope Audience

    The primary audience registered for the application, which is appended to scopes.

    Audience

    The name of the Oracle Access Manager OAuth server.

    Client Id

    The client ID of the application.

    Client Secret

    The client secret of the application.
  11. In the Installation Summary window, review the selections you made in the preceding windows and then click Install.

  12. The Installation Progress window appears. When the installation completes, click Next.

    Note:

    After the installation begins, you cannot stop or cancel the installation.
  13. The Installation Complete window appears. Make note of the BRM REST Services Manager base URL. You use this URL to access BRM REST Services Manager.

    Note:

    You can find the BRM REST Services Manager logs in the REST_home/logs directory, where REST_home is the BRM REST Services Manager installation directory.
  14. Click Finish to complete the installation.

    The BRM REST Services Manager installer exits.

  15. (For Security Enabled with Identity Provider) If your IdP requires an introspection endpoint to validate the JWT, uncomment the introspect-endpoint-uri entry in your application.yaml file and set it to the introspection endpoint.

    Note:

    For example, if you are using Oracle Access Management as your IdP, you need to uncomment and set the introspect-endpoint-uri entry.

  16. (For Security Enabled with Oracle Access Management Only) If you have not yet migrated to the latest versions of Oracle Access Management, where roles and groups are included in JSON Web Tokens (JWTs), do the following:

    Note:

    This step is necessary when your JWTs do not adhere to the MicroProfile JWT RBAC v2.1 specification and lack a "groups" claim; it enables fetching user or client groups and roles. If your JWTs conform to the specification, skip this step.

    1. In your application.yaml file, uncomment the following oam-role-mapper entries.

      - oam-role-mapper:
          oud:
            host-name:
            admin-user-name:
            http-port:
            https-port:
            users-base-dn:
            groups-base-dn:
            msgType: urn:ietf:params:rest:schemas:oracle:oud:1.0:SearchRequest
            filter: (&(objectclass=*)(uniqueMember=cn=__USER_NAME__,__USER_BASE_DN__))
    2. Update the entries for your environment:

      • host-name: The Oracle Unified Directory host name.

      • admin-user-name: The administration user name for the Oracle Unified Directory.

      • http-port: The Oracle Unified Directory HTTP port.

      • https-port: The Oracle Unified Directory HTTPS port.

      • users-base-dn: The Oracle Unified Directory user domain name.

      • groups-base-dn: The Oracle Unified Directory group domain name.

      • msgType: The message type based on the schema used to search roles in the Oracle Unified Directory.

      • filter: The filter based on the user attribute.

      Note:

      The admin-password key is the administration password for the Oracle Unified Directory. Do not set this entry directly. Instead, write the password to the wallet as OAM_OUD_ROOTUSERPASS by entering the following command:

      java -cp ".:oraclepkijarLocation:cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value
  17. (For Security Enabled with Oracle Identity Cloud Service Only) If IDCS is not configured to include groups or roles in JSON Web Tokens (JWTs), do the following:

    Note:

    This step is necessary when your JWTs do not adhere to the MicroProfile JWT RBAC v2.1 specification and lack a "groups" claim; it enables fetching user or client groups and roles. If your JWTs conform to the specification, skip this step.

    1. In your application.yaml file, uncomment the following idcs-role-mapper entries:

      - idcs-role-mapper:
          multitenant: false
          oidc-config:
            client-id:
            identity-uri:
          cache-config:
            cache-timeout-millis: 10000
    2. Update the entries for your environment:

      • client-id: The client ID generated by the Identity Server, used to validate the token.

      • identity-uri: The URI of the Identity Server, used as the base URL to retrieve metadata from the Identity Server.

      Note:

      The client-secret key is for the client secret generated by the Identity Server, used to authenticate the application when requesting a JWT based on a code. Do not set this entry directly. Instead, write the client secret password to the wallet as CLIENT_SECRET. To store the client secret password in the wallet, enter the following command:

      java -cp ".:oraclepkijarLocation:cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value
  18. After installation completes, run the REST_home/scripts/start-brm-rsm.sh script to bring up BRM REST Services Manager.

Installing BRM REST Services Manager in Silent Mode

The silent installation uses a response file in which you have set installation information. To obtain the response file, run the GUI installer for the first installation. The GUI installer generates a response file that contains key-value pairs based on the values that you specify during the GUI installation. You can then copy and edit the response file to create additional response files for installing BRM REST Services Manager on other machines.

Creating a Response File

To create a response file:

  1. Create a copy of the response file that was generated during the GUI installation. See "Installing BRM REST Services Manager in GUI Mode" for more information.

    Note:

    A response file was created only if you ran the GUI installer with this command:

    Java_home/bin/java -jar jarFile -record -destinationFile path
  2. Open the response file in a text editor.

  3. Manually add all passwords to the response file. The GUI installer does not store the passwords that you provide during installation in the response file.

  4. Modify the key-value information for the parameters you want in your installation.

    Note:

    The Installer treats incorrect context, format, and type values in a response file as if no value were specified.

  5. Save and close the response file.

Installing BRM REST Services Manager in Silent Mode

To install BRM REST Services Manager in silent mode:

  1. Create a response file. See "Creating a Response File".

  2. Copy the response file you created to the machine on which you run the silent installation.

  3. On the machine on which you run the silent installation, go to the temp_dir directory and run the following command:

    Java_home/bin/java -jar jarFile -debug -invPtrLoc Inventory_home/oraInventory/oraInst.loc -responseFile path -silent

    where:

    • Java_home is the directory in which you installed the latest supported Java version.

    • jarFile is the BRM installer file. For example:

      BRM_REST_Services_Manager-15.2.0.0.0.jar
    • path is the absolute path to the response file.

    For example:

    Java_home/bin/java -jar BRM_REST_Services_Manager-15.2.0.0.0.jar -debug -invPtrLoc Inventory_home/oraInventory/oraInst.loc -responseFile /tmp/BRM_REST_Services_Manager.rsp -silent

    The installation runs silently in the background.

    Tip:

    You can run the following command to get more details about the options.

    Java_home/bin/java -jar jarFile -help
  4. After installation completes, run the REST_home/scripts/start-brm-rsm.sh script to start BRM REST Services Manager.

  5. Open the BRM REST Services Manager URL (https://hostname:port/) in a browser.

    Note:

    You can find the port number in the REST_home/scripts/application.yaml file. The HTTPS port is in the server.port entry, and the HTTP port is in the server.sockets.plain.port entry.
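For reference, the two entries live in the server block of application.yaml. The fragment below is an illustrative sketch only; the port values are placeholders, not shipped defaults:

```yaml
# Illustrative fragment of application.yaml (placeholder port values).
server:
  port: 8443            # HTTPS port (server.port)
  sockets:
    plain:
      port: 8080        # HTTP port (server.sockets.plain.port)
```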

BRM REST Services Manager Postinstallation Tasks

After you install BRM REST Services Manager, perform these postinstallation tasks:

  1. Setting Up a Custom TrustStore

  2. Configuring Policies for API Authorization

  3. Excluding the BRM REST Services Manager SDK

  4. Updating BRM Connection Details

  5. Updating the Base URL

  6. Modifying BRM REST Services Manager Properties

  7. Setting Up Zipkin Tracing in BRM REST Services Manager

  8. Connecting BRM REST Services Manager to Oracle Analytics Publisher

  9. Configuring BRM REST Services Manager for High Availability

  10. Allowing the Use of Deprecated Ciphers

  11. Configuring and Adding Custom Mapper Files

  12. Customizing REST Services with the Mapper Framework

  13. Configuring the REST Services Manager Notification Service

Setting Up a Custom TrustStore

If you want to use a custom TrustStore instead of the Java TrustStore, do the following:

  1. Stop BRM REST Services Manager by running the REST_home/scripts/stop-brm-rsm.sh script.

  2. In the REST_home/scripts/brm_rsm.properties file, configure the path to the TrustStore file in the TRUST_STORE_FILE_PATH key.

  3. Store the TrustStore passphrase in the TRUST_STORE_PASSPHRASE configuration entry in the client wallet by running the following command:

    java -cp ".:oraclepkijarLocation:cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value

    where:

    • clientWalletLocation is the full path to the client wallet.

    • value is the TrustStore passphrase in Base64-encoded format.

  4. Start BRM REST Services Manager by running the REST_home/scripts/start-brm-rsm.sh script.
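
Because the wallet value must be the Base64-encoded passphrase, you can produce it with standard tools before running the ConfigEditor command. This is a sketch; "changeit" is a placeholder passphrase, not a value from this document:

```shell
# Base64-encode a TrustStore passphrase for storage in the client wallet.
passphrase="changeit"                         # placeholder value
encoded=$(printf '%s' "$passphrase" | base64)
echo "$encoded"                               # prints Y2hhbmdlaXQ=
```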

Configuring Policies for API Authorization

To configure the policies for API authorization:

  1. Stop BRM REST Services Manager by running the REST_home/scripts/stop-brm-rsm.sh script.

  2. Define the API authorization rules in a policy file.

    You can use the sample authorization policy file (REST_home/scripts/authorization-policy.yaml) as a template for defining API authorization rules.

  3. For any new BRM REST Services Manager API endpoints, ensure that appropriate policy statements are added to the file. This is essential for enforcing proper authorization and access restrictions for each new API.

  4. Start BRM REST Services Manager by running the REST_home/scripts/start-brm-rsm.sh script.

Excluding the BRM REST Services Manager SDK

The BRM REST Services Manager package installs both the BRM REST Services Manager API and the BRM REST Services Manager SDK on your system.

If you do not want to include the BRM REST Services Manager SDK in your deployment, remove the EXTENSION_LIB_PATH entry from the REST_home/scripts/brm_rsm.properties file.
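
The removal can be scripted with sed. The sketch below runs against a placeholder properties file; the HTTP_PORT entry and the path value are invented for illustration:

```shell
# Build a placeholder brm_rsm.properties containing an EXTENSION_LIB_PATH entry.
props=brm_rsm.properties
printf 'HTTP_PORT=8080\nEXTENSION_LIB_PATH=/opt/rsm/ext\n' > "$props"

# Delete the EXTENSION_LIB_PATH line to exclude the SDK from the deployment.
sed -i '/^EXTENSION_LIB_PATH=/d' "$props"

cat "$props"
```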

Updating BRM Connection Details

By default, the BRM REST Services Manager installer stores sensitive information such as passwords in the Oracle wallet, and BRM REST Services Manager retrieves the passwords from the Oracle wallet. However, you can update this sensitive information after the installation.

To update the BRM connection details:

  1. Stop BRM REST Services Manager using this command:

    REST_home/scripts/stop-brm-rsm.sh
  2. Navigate to the REST_home/wallet directory, where REST_home is the BRM REST Services Manager installation directory.

  3. Run the following command to update the BRM connection details:

    java -cp ".:oraclepkijarLocation:cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value
    where:
    • clientWalletLocation is the path to the client wallet.
    • configEntry is the configuration entry in the client wallet.
    • value is the appropriate value for the respective entry in the client wallet.
  4. Start BRM REST Services Manager using this command:

    REST_home/scripts/start-brm-rsm.sh

Updating the Base URL

After installation, you can update the base URL that is returned in the responses to BRM REST Services Manager requests. To do so, edit the REST_home/scripts/application.yaml file.

Modifying BRM REST Services Manager Properties

To modify the BRM REST Services Manager properties:

  1. Stop BRM REST Services Manager using this command:

    REST_home/scripts/stop-brm-rsm.sh
  2. Open the REST_home/scripts/brm_rsm.properties file in a text editor.

  3. Modify the properties as needed. For example, you can change the port number and the log level.

  4. Save the brm_rsm.properties file.

  5. Start BRM REST Services Manager using this command:

    REST_home/scripts/start-brm-rsm.sh

Setting Up Zipkin Tracing in BRM REST Services Manager

You can trace the flow of API calls made to BRM REST Services Manager by using Zipkin, which is an open-source tracing system. For more information, see the Zipkin website: https://zipkin.io/.

To set up Zipkin tracing for BRM REST Services Manager:

  1. Install Zipkin. See the Zipkin Quickstart documentation: https://zipkin.io/pages/quickstart.html.

  2. Stop BRM REST Services Manager using this command:

    REST_home/scripts/stop-brm-rsm.sh
  3. In your application.yaml file, update the following entries, as necessary:

    otel:
       traces:
          exporter: zipkin
       sdk:
          disabled: false
       service:
          name: brm-rest-service-manager
       exporter:
          zipkin:
             endpoint: ${traces.protocol}://${traces.host}:${traces.port}/api/v2/spans
  4. Start BRM REST Services Manager using this command:

    REST_home/scripts/start-brm-rsm.sh

Afterward, you can start tracing the flow of API calls to BRM REST Services Manager by using the Zipkin UI or Zipkin API.

Connecting BRM REST Services Manager to Oracle Analytics Publisher

To enable BRM REST Services Manager to retrieve PDF invoices generated by Oracle Analytics Publisher:

  1. Update the following Oracle Analytics Publisher connection details in the BRM REST Services Manager wallet:

    • BIP_USERID: Set this to the Oracle Analytics Publisher user that has web access.

    • BIP_PASSWORD: Set this to the password for the Oracle Analytics Publisher user.

    • BIP_URL: Set this to the URL for accessing the Oracle Analytics Publisher instance. For example: http://hostname:port/xmlpserver/services/PublicReportService_v11.

  2. To store the connection details in the wallet, enter the following command:

    java -cp ".:oraclepkijarLocation:cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value

Configuring BRM REST Services Manager for High Availability

You can optionally configure BRM REST Services Manager for high availability.

Because BRM REST Services Manager is a relatively simple API service, you can support high availability by using the open-source load balancer of your choice, such as Apache HTTP Server or HAProxy. The load balancer automatically routes requests to whichever server is available, which enables greater scalability by sharing the traffic among servers, and also lets processing continue if an instance is down.

The basic steps for configuring BRM REST Services Manager for high availability are:

  1. Install and configure BRM REST Services Manager. You can install multiple instances on the same server by using a different port or install different instances on different servers.

  2. Install a load balancer. You can install the load balancer on the same server as one of your BRM REST Services Manager instances, or a different server with access to the BRM REST Services Manager servers.

  3. Add the details of the BRM REST Services Manager servers to the load balancer, using a proxy to pass traffic to the group of servers.
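
As an illustration, a minimal HAProxy configuration for two instances might look like the following. All host names, ports, and server names are placeholders, and Apache HTTP Server or another load balancer would be configured analogously:

    # Hypothetical HAProxy fragment balancing two BRM REST Services Manager instances.
    frontend rsm_frontend
        bind *:8080
        default_backend rsm_servers

    backend rsm_servers
        balance roundrobin
        server rsm1 rsmhost1.example.com:8080 check
        server rsm2 rsmhost2.example.com:8080 check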

Allowing the Use of Deprecated Ciphers

According to security best practices, BRM REST Services Manager by default restricts less-secure ciphers from being used. These restricted ciphers are listed in REST_home/scripts/security.properties. You can change this configuration if you are willing to accept the risk of allowing less-secure ciphers.

To allow individual restricted ciphers:

  1. Stop BRM REST Services Manager using this command:

    REST_home/scripts/stop-brm-rsm.sh
  2. Back up the REST_home/scripts/security.properties file.

  3. Open the REST_home/scripts/security.properties file in a text editor.

  4. Remove the names of any ciphers that you want to allow.

  5. Save the security.properties file.

  6. Start BRM REST Services Manager using this command:

    REST_home/scripts/start-brm-rsm.sh
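
The edit in step 4 can be scripted. The sketch below uses an illustrative properties file and entry, not the shipped security.properties contents:

```shell
# Create an illustrative exclusion list (placeholder content, not the real file).
printf 'jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1\n' > security.properties

# Remove one entry (TLSv1.1) so that protocol is allowed again.
sed -i 's/, TLSv1\.1//' security.properties

cat security.properties
```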

To remove all cipher restrictions:

  1. Stop BRM REST Services Manager using this command:

    REST_home/scripts/stop-brm-rsm.sh
  2. Open the REST_home/scripts/brm_rsm.properties file in a text editor.

  3. Set the value of the OVERRIDE_SECURITY_PROPERTIES property to false.

  4. Save the brm_rsm.properties file.

  5. Start BRM REST Services Manager using this command:

    REST_home/scripts/start-brm-rsm.sh

Customizing REST Services with the Mapper Framework

Learn how to customize REST service payload mapping for Oracle Communications REST Services Manager using the Mapper Framework in a cloud native environment. You can use the mapper file to control how REST API payloads are transformed to and from internal BRM data structures (FLIST). By customizing the mapper file, you can extend or override out-of-the-box mappings, adapt to business requirements, and enable advanced field transformations.

Introduction to the Mapper File

The mapper file defines the transformation logic for each REST API endpoint and the event framework. It maps fields in incoming JSON requests to BRM FLIST fields and maps FLIST response fields back to outgoing JSON. Customizing the mapper file provides flexibility without requiring changes to underlying service code.

You can use the mapper file to:
  • Implement custom REST API payload logic.
  • Extend or override default field mappings.
  • Support multiple API versions and endpoints.
  • Add custom business rules, data validation, and type conversions.
Sample Events Configuration:
config:
  paths:
    event:
      v5:
        ProductCreateEvent:
          mapper-specification-keys:
            default: Product
            Event:
              execute:
                - "mapper.brm.v5.tmf637.productCreateEvent.execute"
              request:
                - "mapper.brm.v5.tmf637.productCreateEvent.request"
              response:
                - "mapper.brm.v5.tmf637.productCreateEvent.response"
            Product:
              request:
                - "mapper.brm.v5.tmf637.post.request.product"
                - "mapper.brm.v5.tmf637.post.request.extendedProduct"
              response:
                - "mapper.brm.v5.tmf637.post.response.product"
                - "mapper.brm.v5.tmf637.post.response.extendedProduct"

where:

  • ProductCreateEvent defines the type of event.
  • mapper-specification-keys specifies logical profiles or mapping variants.
  • default is used when no explicit profile is provided.
  • execute, request, and response designate the mapping configuration files to be applied for execution opcode, request transformation, and response transformation respectively.
Understanding the Mapper File Structure

Mapper files are written in YAML format and contain these key sections:
  • request: Defines how incoming JSON fields are mapped to BRM FLIST fields.
  • response: Defines how FLIST response fields are mapped to outgoing JSON.
  • execute: Contains BRM opcode and opcode flag information.

Note:

Mapper files do not contain a top-level validation section. Use optional on a per-field basis as needed.

Examples and Supported Field Attributes

The order in which mapper keys are defined determines the order of overrides: if a key is defined last, it will override any common keys present earlier. You must ensure that your custom mapper keys are referenced correctly in your configuration (such as in application.yaml or through deployment values).

Example configuration snippet:
config:
  paths:
    apiSpecifications:
      brm/productInventory/v5/product:
        post:
          mapper-specification-keys:
            default: Product
            Product:
              execute:
                - "mapper.brm.v5.tmf637.post.execute.product"
              request:
                - "mapper.brm.v5.tmf637.post.request.product"
                - "mapper.brm.custom.v5.tmf637.post.request.product"
              response:
                - "mapper.brm.v5.tmf637.post.response.product"
                - "mapper.brm.custom.v5.tmf637.post.response.product"
where:
  • config is the top-level configuration object containing all Mapper Framework settings.
  • paths is the section that defines REST API endpoint path specifications.
  • apiSpecifications is the mapping of API specifications to their endpoints.
  • brm/productInventory/v5/product is the specific API resource path being mapped.
  • post is the HTTP POST method being specified for mapping.
  • mapper-specification-keys is the list specifying which mappers to use for the endpoint and method.
  • default is the fallback mapping key used when no specific schema-type match is found in the request payload.
  • Product is the schema type for the request and response mapping obtained from request payload.
  • request is the listing of request-mapping keys or files to handle incoming API payload mapping.
  • response is the listing of response-mapping keys or files to handle returned API payload mapping.

Example 1: Simple Mapping

request:
  product:
    Product:
      field: PIN_FLD_PRODUCT
      optional: false
      dataFields:
        id:
          field: PIN_FLD_POID
          fieldValue: ""
          optional: false
response:
  product:
    PIN_FLD_PRODUCT:
      field: product
      type: array
      dataFields:
        PIN_FLD_POID:
          field: id
          type: string

where:

  • request is the top-level section defining how incoming JSON is mapped to BRM fields.
  • Product is the logical identifier/object being mapped in the request or response.
  • field is the target field within the payload or mapping output/input location.
  • dataFields is the grouping of child fields requiring mapping under the main object.
  • id is the attribute or subfield (such as a product identifier) being mapped.
  • PIN_FLD_POID is the BRM internal field for Portal Object Identifier (the mapped field in BRM).

Example 2: Array and Substructure Mapping

request:
  product:
    orderItems:
      field: PIN_FLD_ORDER_ITEMS
      optional: false
      arrayIndex: 0
      dataFields:
        price:
          field: PIN_FLD_PRICE
          fieldValue: ""
          optional: true
where:
  • orderItems is an array field within the product object, representing line items in the order.
  • PIN_FLD_ORDER_ITEMS is the BRM array field for order items.
  • arrayIndex specifies the array position used for mapping (for example, 0 is the first element).
  • price is the field representing the price for each order item.
  • PIN_FLD_PRICE is the BRM field for price, the mapping destination for the price data.

Example 3: Dynamic FieldValue Calculation

fieldValue: "${(getVal(context[3], 'PIN_FLD_BASE_AMOUNT') + getVal(context[3], 'PIN_FLD_FEE')) * getVal(context[2], 'PIN_FLD_MULTIPLIER') - getVal(context[5], 'PIN_FLD_DEDUCTION')}"
where:
  • fieldValue is the value assigned to the mapped field; in this case, it uses a Jakarta Expression Language (EL) expression.
  • getVal is a Jakarta EL function that retrieves the value of the given field from the specified context level.
  • context is an array representing the hierarchical path through the payload or FLIST object at different levels; context[n] refers to a specific depth in the hierarchy.
  • PIN_FLD_BASE_AMOUNT, PIN_FLD_FEE, PIN_FLD_MULTIPLIER, and PIN_FLD_DEDUCTION are the fields within the corresponding context objects used in the calculation formula.
  • PIN_FLD_PRICE is the BRM field for price, where the calculated value would typically be stored.
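
To make the evaluation concrete, suppose the referenced context fields carry the illustrative values below; the expression then resolves as sketched here in shell arithmetic (all values are invented for the example):

```shell
# Illustrative values for the fields referenced by the EL expression.
base_amount=100   # PIN_FLD_BASE_AMOUNT at context[3]
fee=20            # PIN_FLD_FEE at context[3]
multiplier=2      # PIN_FLD_MULTIPLIER at context[2]
deduction=30      # PIN_FLD_DEDUCTION at context[5]

# (base + fee) * multiplier - deduction, mirroring the fieldValue formula.
value=$(( (base_amount + fee) * multiplier - deduction ))
echo "$value"     # prints 210
```
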
Mapper Features

The Mapper Framework allows you to map and transform REST API payloads to and from BRM data structures. You can use these features to handle complex mapping scenarios.

API-Based Mapping

You can define mapping rules for each REST API endpoint and HTTP method, tailoring the mapping logic to each business operation. This enables you to ensure each API’s request and response format properly aligns with the required BRM data structure. For example, you can configure separate mappings for /productInventory/v5/product for POST and GET methods, ensuring the request and response structures match each API’s requirements.

Example:

mapper:
    brm:
        v5:
            tmf637:
                post:
                    execute:
                        product:
                            brmOpcode: PRODUCT_INVENTORY_POST
                            opcodeFlag: 0
                    request:
                    response:
                get:
                    execute:
                        product:
                            brmOpcode: PRODUCT_INVENTORY_GET
                            opcodeFlag: 0
                    request:
                    response:

Versioning Support

You can define separate mappings for different versions of an API. The mapping that corresponds to the version of the invoked API is then applied.

Example:

mapper:
    brm:
        v5:
            tmf637:
                productCreateEvent:
                productDeleteEvent:
                productAttributeValueChangeEvent:
                productStateChangeEvent:
                delete:
                post:
                patch:
                get:
        v6:
            tmf637:
                productCreateEvent:
                productDeleteEvent:
                productAttributeValueChangeEvent:
                productStateChangeEvent:
                delete:
                post:
                patch:
                get:

Opcode, Schema, and Flags

Each mapping can specify a BRM opcode. The opcode determines which internal BRM operation runs for a request. You can also attach schema information to control FLIST structure validation, and apply flags to influence how the opcode is executed.

Note:

The brmOpcode attribute supports Jakarta Expression Language (EL), so you can also use a Jakarta EL expression to evaluate the opcode at runtime.

Example:

{
    "operation": "edit",
    "@type": "Product",
    "suspendedBatchObjs": [
        {
            "suspendedBatchObj": "0.0.0.1+-product+181321"
        },
        {
            "suspendedBatchObj": "0.0.0.1+-product+129321"
        },
        {
            "suspendedBatchObj": "0.0.0.1+-product+101321"
        }
    ]
}
Sample Expression:
brmOpcode: "${getVal(context[0], 'operation') == 'edit' ? 'SUSPENSE_SEARCH_EDIT' : (getVal(context[0], 'operation') == 'recycle' ? 'SUSPENSE_SEARCH_RECYCLE' : (getVal(context[0], 'operation') == 'writeoff' ? 'SUSPENSE_SEARCH_WRITE_OFF' : 'SUSPENSE_SEARCH_DELETE'))}" 

In this case, the brmOpcode is determined depending on the value of the operation field in the request payload.

Request and Response Mapping

Easily map REST API JSON fields to corresponding BRM FList fields for both requests and responses. You can handle simple or complex (nested, array) structures. Each mapping file contains a request block and a response block.

  • In the request block, you define how fields in the incoming JSON map to BRM FLIST fields.

  • In the response block, you map BRM FLIST fields back to JSON for the API output.

Mappings can be simple (one field to one field), or complex, including arrays and nested structures.

Example:

request:
  productName:
    field: PIN_FLD_NAME
    optional: false
response:
  PIN_FLD_NAME:
    field: productName
    type: string

Field Enumeration

Field enumeration is a way to restrict a field’s value to a specific set of allowed options. When you define an enumeration for a field, only the listed values are considered valid, and any other input is rejected during validation. This ensures data consistency between API payloads and BRM, and helps prevent errors from invalid or unexpected values. In the following example, string values from the request are converted to the corresponding values accepted by BRM. For example, Credit Card is converted to 10001 when the FList is populated. This applies to both requests and responses.

Example (Enumeration):
payType:
  field: PIN_FLD_PAY_TYPE
  fieldValue: "${{'Credit Card': 10001, 'Cash': 10011, 'Check': 10012}[getMapKey(context[0], 'payType')]}"

Validation Rules for Map

For the validation key in a request, you can use the following rules:
  • To specify the type of a field, use one of the supported types: string, int, double, boolean, object, array, long, date.
    validation: "type:[string]"
  • To specify a date format (the format must be readable by Java):
    validation: "format:[yyyy-MM-dd]"
  • For enumerated values, define a list of permissible values:
    validation: "enum:[value1,value2,value3]"
  • To restrict the range of any numerical field:
    validation: "range:[min=1,max=1000]"
  • To use regex pattern matching on a field:
    validation: "pattern:[^[a-zA-Z0-9]+$]"
  • To use multiple validation rules at once, provide them as a comma-separated list:
    validation: "range:[min=1000,max=5000],pattern:[^\\d+$],enum:[3000,2000,2048]"

Note:

Enclose all rule values within square brackets [ ], as shown in the examples above.
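
Putting these rules together, a field mapping might look like the following sketch (the amount field and its constraints are hypothetical):

```yaml
request:
  amount:
    field: PIN_FLD_AMOUNT
    # Hypothetical constraints: the value must be an integer between 1 and 1000
    validation: "type:[int],range:[min=1,max=1000]"
    optional: false
```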

Mapper Expressions (Jakarta EL)

You can use Jakarta Expression Language (EL) within field and fieldValue to evaluate values at runtime in your mapping files. These expressions use context objects to refer to fields at different levels of the JSON or FLIST structure.

You can use these built-in expressions inside the ${...} blocks in your YAML mapping files to reference fields, perform lookups, and create URLs during mapping.

Mapper Expression Functions
The Mapper Framework supports three primary mapper expression functions by default:
  • getVal: Use getVal to extract the value of a specific key from a JSON object or FList. In request mappings, getVal(context[index], 'key') retrieves a field from the JSON payload. In response mappings, it fetches a value from the FList object.

    fieldValue: "${getVal(context[0], 'accountId')}"
  • getMapKey: Use getMapKey for field enumeration and value mapping. It looks up the appropriate BRM code or value based on a key from the context and an inline map definition.

    fieldValue: "${{'Credit Card': 10001, 'Cash': 10011, 'Check':10012}[getMapKey(context[0], 'payType')]}"
  • generateResourceUrl: Use generateResourceUrl to generate an href or URL reference for a BRM resource. This is commonly used to set the href field in API responses.

    href:
      fieldValue: "${generateResourceUrl('paymentMethods/v4/paymentMethod/')}"

Object Support (Request and Response)

The Object parameter supports two types of objects for both request and response processing:
  • context is a list that stores data based on hierarchy. You must specify the index to indicate which level of the object you are referring to. If you do not specify an index, the context at the same level as the expression is used.

    • On the request side, context contains JsonNode objects representing the request payload.

    • On the response side, context contains FList objects representing the response payload.

  • rootContext stores the complete payload during processing and you can use it directly without specifying an index.
    • On the request side, rootContext is a JsonNode object that holds the entire request payload.

    • On the response side, rootContext is an FList object that holds the entire response payload.

When you process a bundled product, the system creates a new hierarchy. rootContext allows you to access data outside the current context. Once processing is complete, it restores the original hierarchy. You may use context[index], context, or rootContext as the Object.
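
As an illustrative sketch (the field names are hypothetical), a mapping nested deep within a bundled product could still read the top-level id of the request payload through rootContext:

```yaml
parentId:
  field: PIN_FLD_POID
  # rootContext always refers to the entire payload, regardless of nesting depth
  fieldValue: "${getVal(rootContext, 'id')}"
```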

Understanding the Hierarchy

Consider this request mapping:

productPrice:
  field: PIN_FLD_PRODUCT_PRICE
  validation: ""
  optional: true
  dataFields:
    description:
      field: ""
      fieldValue: "ProductPrice"
      validation: ""
      optional: true
    priceAlteration:
      field: PIN_FLD_VALUES
      validation: ""
      optional: true
      dataFields:
        _attype:
          field: ""
          fieldValue: "PriceAlteration"
          validation: ""
          optional: true
        price:
          field: ""
          validation: ""
          optional: true
          dataFields:
            dutyFreeAmount:
              field: PIN_FLD_PRICING_INFO
              validation: ""
              optional: true
              dataFields:
                unit:
                  field: PIN_FLD_UNIT_STR
                  validation: ""
                  optional: true
                value:
                  field: PIN_FLD_AMOUNT
                  validation: ""
                  optional: true
            taxIncludedAmount:
              field: ""
              fieldValue: ""
              validation: ""
              dataFields:
                unit:
                  field: PIN_FLD_UNIT_STR
                  validation: ""
                  optional: true
                value:
                  field: PIN_FLD_AMOUNT
                  validation: ""
                  optional: true
In this mapping:
  • context[0] holds the entire payload.

  • At any dataFields of dutyFreeAmount, the hierarchy objects are:
    • context[0] is the entire JSON payload as JsonNode

    • context[1] is the productPrice as JsonNode

    • context[2] is the price as JsonNode

    • context[3] is the dutyFreeAmount as JsonNode

  • For dataFields of taxIncludedAmount, the hierarchy objects are:

    • context[0] is the entire JSON payload as JsonNode

    • context[1] is the productPrice as JsonNode

    • context[2] is the price as JsonNode

    • context[3] is the taxIncludedAmount as JsonNode

The response side maintains a similar hierarchy structure.
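
For example, within the dataFields of dutyFreeAmount above, a hypothetical expression could reference its own hierarchy level explicitly through context[3]:

```yaml
value:
  field: PIN_FLD_AMOUNT
  # context[3] is the dutyFreeAmount object at this point in the hierarchy
  fieldValue: "${getVal(context[3], 'value')}"
```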

Removing Inherited or Obsolete Fields

You can override or delete fields inherited from a base mapping by specifying action: delete for that field in your custom mapper file. Example:
obsoleteField:
  action: delete

This removes the inherited mapping for obsoleteField in your endpoint configuration.

Advanced Mapping Scenarios

You can use the features listed above for solutions to common mapping scenarios:

Handling Data Fields
You map deeply nested structures using the dataFields attribute. Example:
account:
  field: PIN_FLD_ACCOUNT
  dataFields:
    firstName:
      field: PIN_FLD_FIRST_NAME
    lastName:
      field: PIN_FLD_LAST_NAME

This maps account.firstName and account.lastName in the payload to the appropriate BRM fields.

If a base field is nested, you can omit the dataFields keyword from a substruct or array field in the base mapper file. The base schema is then used, and any nested field that is encountered is treated as an instance of the same object. For example:
product:
    field: PIN_FLD_PRODUCTS
    fieldValue: ""
    optional: false
    validation: ""

In this case, the schema applicable for the product field is assumed to be the same as that used for the base request, and is a nested field of the same schema. This is applicable for both the request and the response.

Assigning Default Values
You provide a default value with fieldValue. If the source data is missing, REST Services Manager uses this value. Example:
quantity:
  field: PIN_FLD_QUANTITY
  fieldValue: "1"
  optional: true
If the API payload omits quantity, the field will default to 1.

Note:

If a Jakarta EL expression is provided in the fieldValue attribute, it is evaluated at runtime. A plain string value is treated as a default value.
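
The two cases can be contrasted in a short sketch (the discount and currency fields are hypothetical):

```yaml
# Evaluated as a Jakarta EL expression at runtime:
discount:
  field: PIN_FLD_DISCOUNT
  fieldValue: "${getVal(context[0], 'base') * 0.10}"

# Treated as a literal default when the payload omits the field:
currency:
  field: PIN_FLD_CURRENCY
  fieldValue: "USD"
  optional: true
```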

Changing BRM Field Values
You use fieldValue or mapper expressions (Jakarta EL) to set or transform BRM field values based on input, context, or calculations. Example:
price:
  field: PIN_FLD_PRICE
  fieldValue: "${getVal(context[1], 'base') * 1.07}"

This multiplies the base price by 1.07 (such as to apply tax).

Mapping Arrays and Array Indexes
You use arrayIndex to place or retrieve data at a specific index in a BRM array field. Example:
aliases:
  field: PIN_FLD_ALIAS_LIST
  arrayIndex: 0
  dataFields:
    name:
      field: PIN_FLD_NAME

This places the name value into the first element of the BRM alias list.

Handling Empty or Optional Fields
You use optional: true to indicate that a field may be absent in the request or response. If the field exists but is empty, REST Services Manager omits it from the BRM FLIST unless additional validation prevents this. Example:
nickname:
  field: PIN_FLD_NICKNAME
  optional: true

Handling href Fields

You generate URL links (href) for resources in the response using static values or by running helper functions, allowing you to provide links back to created or referenced resources. Example:
href:
  field:
  fieldValue: "${generateResourceUrl('productInventory/v5/product/')}"
  type: string
Class Reference for Response Payloads:
To define fields such as @type and @referredType, which are not present anywhere in the FList but are required in the JSON payload to reference classes:
_attype:
    field: ""
    type: string
    fieldValue: "Product"
where,
  • type is a string value.
  • fieldValue is the class name to populate into response payload.
  • _attype is the defined key, where _at is used instead of @ to prevent YAML syntax issues.

Note:

The classes mentioned above are for reference only; the actual data is taken from the request payload.

Specifying Data Types in Response
You control data type in the response using the type key—for instance, mapping an integer to a string or a date. The supported key types are:
  • date

  • string

  • float

  • double

  • integer

  • boolean

  • json

Example:
PIN_FLD_START_T:
  field: startDate
  type: date
Using Dot Notation
You reference nested fields in request or response mappings using dot notation, such as parentField.childField. Example:
"contact.email":
  field: PIN_FLD_CONTACT_INFO.PIN_FLD_EMAIL
Handling Query Parameters, Path Parameters, and Storable Classes

You access HTTP query or path parameters by including their mapped names as top-level fields in your mapping. For advanced scenarios requiring additional object types or extended models (storable class), use custom handlers and reference them in your mapping definitions.

Example (Query Parameter):
queryParameter:
    field: PIN_FLD_ARGS
    dataFields:
         id:
             field: PIN_FLD_IS_BUNDLE
             fieldValue: ""
             validation: ""
Example (Path Parameter):
pathParameter:
  field: PIN_FLD_ARGS
  dataFields:
    id:
      field: PIN_FLD_POID
Example (StorableClass):
paymentMethod:
  storableClass: "/payinfo"

Note:

Query parameters, path parameters, and storable classes are applicable only to GET requests.

Mapping Child Data from Unmapped Parent Objects

Sometimes the JSON payload contains an object or array that has no corresponding FList field to map to, but the data inside that object or array is required for FList creation. You can handle this as follows:
price:
  field: PIN_FLD_PRICING_INFO
  validation: ""
  optional: true
  dataFields:
    _attype:
      field: ""
      fieldValue: "Price"
      validation: ""
      optional: true
    dutyFreeAmount:
      field: ""
      validation: ""
      optional: true
      dataFields:
        unit:
          field: PIN_FLD_UNIT_STR
          validation: ""
          optional: true
        value:
          field: PIN_FLD_AMOUNT
          validation: ""
          optional: true
Sample JSON:
{
  "price": {
    "@type": "Price",
    "dutyFreeAmount": {
      "unit": "USD",
      "value": 29.99
    }
  }
}
Sample FList:
0 PIN_FLD_PRICING_INFO    SUBSTRUCT [0] allocated 20, used 2
1     PIN_FLD_AMOUNT        DECIMAL [0] 29.99
1     PIN_FLD_UNIT_STR      STR [0] "USD"

Splitting and Mapping Composite JSON Fields to Multiple BRM Fields

There can be a case where you need to map a single JSON entry to multiple BRM fields using multiple Jakarta EL expressions. In this case, you can pass comma-separated lists of fields and values, as shown in the following example.
contact:
  field: PIN_FLD_NAMEINFO
  fieldValue: ""
  validation: ""
  dataFields:
    contactName:
      field: PIN_FLD_FIRST_NAME, PIN_FLD_MIDDLE_NAME, PIN_FLD_LAST_NAME
      fieldValue: "${getVal(context, 'contactName').split(' ')[1]!=null ? getVal(context, 'contactName').split(' ')[0]: null}, ${getVal(context, 'contactName').split(' ')[2]!=null ? getVal(context, 'contactName').split(' ')[1] : null}, ${getVal(context, 'contactName').split(' ')[2] != null ? getVal(context, 'contactName').split(' ')[2] : (getVal(context, 'contactName').split(' ')[1] != null  ? getVal(context, 'contactName').split(' ')[1] : getVal(context, 'contactName').split(' ')[0])}"
      validation: ""

JSON PATCH Features

When you prepare a JSON Patch, you form a JSONPath expression and include it in the JSON Patch payload. The payload is an array of operations. Each object in the array contains these attributes:

  • op: Specifies the operation. Possible values are:
    • replace: Replaces the value at the specified path with a new value.
    • add: Adds a value at the path.
    • remove: Removes the attribute at the path.
  • path: Contains the JSONPath expression for the operation.
  • value: Specifies the value for the "replace" and "add" operations.
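
A hypothetical JSON Patch payload combining these attributes might look like the following (the paths and values are illustrative):

```json
[
  { "op": "replace", "path": "$.status", "value": "active" },
  { "op": "add", "path": "$.productOffering.id", "value": "PO-100" },
  { "op": "remove", "path": "$.description" }
]
```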

JSONPath Expressions

Table 11-5 lists the supported JSONPath expressions and describes how each one is used to access or modify elements within a JSON payload.

Table 11-5 Rules for Writing JSONPath Expressions

Expression Description

$.

The expression starts with $. to refer to the root of the payload.

$.status

Refers to the status key in the root object.

$.productOffering.id

Refers to the id property inside the productOffering object.

$.productCharacteristic[?(@.name=='ServiceAlias' && @.id=='235' && @.valueType=='array')]

Selects elements from the productCharacteristic array that match all specified conditions. Use conditional statements inside square brackets to filter elements.

?(@.name=='ServiceAlias' && @.id=='235' && @.valueType=='array')

This is a conditional subexpression. It starts with a ? and uses @. before JSON keys. Only == and && operators are supported in JSON Patch. Enclose the condition in parentheses.

$.productCharacteristic[?(@.name=='ServiceAlias' && @.id=='235' && @.valueType=='array')].value

Accesses the value attribute of the selected productCharacteristic element.
Best Practices for Customizing Mapper Files
  • Validate the YAML syntax before deployment to prevent runtime errors.
  • Use ConfigMaps for mapping files up to the 1 MiB size limit in Kubernetes. Use the SDK for larger files.
  • Always increment restartVersion for new mappings. After changing the restartVersion, run the helm upgrade command to force a restart.
  • Review REST Services Manager pod logs for troubleshooting.
  • Use versioning and modular mappings for backward compatibility and gradual migration.
  • Ensure that key ordering in the configuration yields the intended override results.

Configuring and Adding Custom Mapper Files

By default, BRM REST Services Manager uses predefined mapper files to translate between JSON payload objects and BRM opcode flists for the latest versions of account and inventory management. However, you can customize this mapping by creating and adding your own mapper files. This allows you to extend or override the default mappings to meet your specific integration needs.

To extend or override the mapping between BRM REST Services Manager JSON objects and BRM opcode flists:
  1. Create YAML files containing your custom mapping definitions. You can review sample files in the REST_home/referenceMappers directory for guidance.

  2. Copy your custom YAML files to REST_home/extendedMappers.

  3. Update the BRM REST Services Manager configuration so your custom YAML files are loaded at runtime. Open the REST_home/scripts/application.yaml file and provide the paths to your custom mapper files under the request, response, or execute keys, as appropriate:

    config:
      paths:
        event:
          v5:
            ProductCreateEvent:
              mapper-specification-keys:
                default: Product
                Event:
                  execute:
                    - "mapper.brm.v5.tmf637.productCreateEvent.execute"
                  request:
                    - "mapper.brm.v5.tmf637.productCreateEvent.request"
                  response:
                    - "mapper.brm.v5.tmf637.productCreateEvent.response"
                    - "mapper.custom.v5.tmf637.productCreateEvent.response"
                Product:
                  request:
                    - "mapper.brm.v5.tmf637.post.request.product"
                    - "mapper.brm.v5.tmf637.post.request.extendedProduct"
                    - "mapper.custom.v5.tmf637.post.request.product"
                  response:
                    - "mapper.brm.v5.tmf637.post.response.product"
                    - "mapper.brm.v5.tmf637.post.response.extendedProduct"
                    - "mapper.custom.v5.tmf637.post.response.product"
    where,
    • ProductCreateEvent defines the type of event.
    • mapper-specification-keys specifies logical profiles or mapping variants.
    • default is used when no explicit profile is provided.
    • execute, request, and response designate the mapping configuration files to be applied for execution opcode, request transformation, and response transformation respectively.

      Note:

      For more information on custom mapper keys, see "Adding BRM REST Services Manager Keys" in BRM Cloud Native Deployment Guide.

  4. Restart BRM REST Services Manager to load your changes. Run:

    REST_home/scripts/stop_brm_rsm.sh
    REST_home/scripts/start_brm_rsm.sh

Configuring the REST Services Manager Notification Service

Configure the REST Services Manager Notification Service and validate your deployment. See the relevant topics for more information.

Overview and Prerequisites for REST Services Manager Notification Service
Before you begin configuration, ensure that your environment satisfies all prerequisites for the REST Services Manager Notification Service. This section lists the required Kafka infrastructure, Oracle database wallet setup, and configuration file requirements.

Note:

To receive event notifications, uncomment or add the required configurations as shown in this document.

Kafka Cluster and Topic Requirements
  • Provision a Kafka cluster with sufficient brokers to meet your throughput and high availability requirements.
  • Define all topics required for the REST Services Manager Notification Service, including source, sink, retry, and dead-letter queue (DLQ) topics.
  • Ensure topics are created with the desired number of partitions and that appropriate access control lists (ACLs) are applied.
  • Record the hostnames and port numbers of all Kafka brokers.
  • Periodically monitor the Kafka server logs to verify that connections are established.
Oracle Database Wallet Setup for Sensitive Values
  • Configure an Oracle database wallet to securely store sensitive credentials, such as SSL passphrases, keystore/truststore files, and database schema credentials.
  • Populating the Oracle database wallet with security certificates depends on the database listener and instance being preconfigured for SSL/TLS. Ensure that the database encryption layer is active before attempting to map keystore and truststore locations within the wallet.
  • Store the necessary wallet files and keystore/truststore files for secure connection to the database in a location secured with appropriate file permissions.
  • Do not place passwords or secrets in plain configuration files.

Tip:

Maintain separate folders for keystores, truststores, and the Oracle database wallet.

Note:

Review the default REST_home/scripts/application.yaml supplied with the deployment package and familiarize yourself with the configuration block structure.

REST Services Manager Notification Service Configuration Process

This section outlines the end-to-end workflow for configuring and setting up the Notification Service. The steps include extracting delivery packages, editing configuration files, and securely applying the SSL/SASL parameters needed for Kafka broker connectivity.

To configure and deploy Kafka consumers, Oracle database connections, and Kafka producers:

  1. Open the REST_home/scripts/application.yaml file for editing.

  2. Uncomment the sections relevant to your deployment:
    • Specify the active connection profiles.
    • Specify the active consumer and producer profiles.
  3. Populate connection parameters for Kafka and any required databases.

  4. Apply the appropriate SSL/SASL parameters to secure broker access.

  5. Store secrets in the wallet using a key-value format. Use the corresponding keys within your configuration to retrieve the passwords at runtime.
    java -cp ".;oraclepkijarLocation;cetjarLocation" com.portal.cet.ConfigEditor -setconf -wallet clientWalletLocation -parameter configEntry -value value
  6. Save the configuration and verify all folder paths referenced in REST_home/scripts/application.yaml exist and are accessible by the REST Services Manager process.

  7. Start the REST Services Manager application to start the notification services.

To apply broker SSL/SASL parameters and store passphrases securely:
  • Enter SSL configurations (keystore, truststore locations and types, passwords) in REST_home/scripts/application.yaml, using wallet references.
  • Do not write plain credentials in configuration files. Reference them via wallet lookup.

Setting Up Connection Profiles

Configure the Kafka and database connection profiles by mapping related parameters and integrating with your wallet-based secrets. Guidance covers property definitions and secure reference of credential values in the configuration.

Kafka Connection Profile
  • Set the security protocol, such as SSL or SASL_SSL, under the properties tag.
  • Assign locations and types for keystore and truststore files.
  • Specify credentials via wallet or environment variable references.

Example block for Kafka connection profiles in the REST_home/scripts/application.yaml file:

connection-profiles:
     - id: "cp1"
       profile-type: "kafka"
       host: "host"
       port: port
       username: ""
       password-key: "CP1_PASSWORD_WALLET_KEY"
       truststore-type: PKCS12
       truststore-location: "/home/truststore.p12"
       truststore-password-key: "CP1_TRUSTSTORE_WALLET_KEY"
       keystore-type: PKCS12
       keystore-location: "/home/keystore.p12"
       keystore-password-key: "CP1_KEYSTORE_WALLET_KEY"
       properties:
         auto.offset.reset: latest
         enable.auto.commit: false
         session.timeout.ms: 60000
         heartbeat.interval.ms: 10000
         connections.max.idle.ms: 3600000
         max:
           poll:
             records: 10
             interval.ms: 600000
           partition.fetch.bytes: 1048576
         acks: all
         retries: 3
         batch.size: 16384
         linger.ms: 1
         buffer.memory: 33554432
         compression.type: none

where,

  • id is the unique identifier for the connection profile, referenced by consumers or producers to select this profile.

  • profile-type is the type of connection; set to "kafka" for Kafka connections.

  • host is the address of the Kafka broker to connect to.

  • port is the network port used to connect to the Kafka broker.

  • username is the authentication username, if required for SASL or similar authentication mechanisms (often left empty for SSL-only setups).

  • password-key is the key referencing the password (e.g., stored in the Wallet) for authenticating this profile.

  • truststore-type is the type of the truststore file (usually PKCS12 or JKS) used for SSL/TLS connections.

  • truststore-location is the path to the truststore file containing trusted certificate authorities.

  • truststore-password-key is the key to retrieve the truststore password (e.g., stored in the Wallet), ensuring secure access.

  • keystore-type is the type of the keystore file (usually PKCS12 or JKS) holding client SSL certificates.

  • keystore-location is the path to the keystore file containing the client’s SSL certificate and private key.

  • keystore-password-key is the key to retrieve the keystore password (e.g., stored in the Wallet), ensuring secure access.

  • properties is an object containing additional Kafka client properties (such as offsets, timeouts, and batching). These are standard Kafka client properties, so the keys and values must follow the standard Kafka documentation.
    • auto.offset.reset determines where to start consuming if no offset is committed ("latest" starts from the latest message).

    • enable.auto.commit specifies if the consumer should auto-commit offsets; set this to false for manual management. (Always set this to false in REST Services Manager.)

    • session.timeout.ms sets the timeout in milliseconds to detect consumer failures.

    • heartbeat.interval.ms sets how often to send heartbeats to the Kafka broker.

    • connections.max.idle.ms sets the maximum idle time for a connection before closing.

    • max.poll.records sets the maximum number of records returned in a single poll.

    • max.poll.interval.ms sets the maximum interval between poll calls before the broker considers the consumer failed.

    • max.partition.fetch.bytes controls the maximum bytes per partition fetched in a request.

    • acks defines the number of acknowledgments the producer requires for a request (e.g., "all" for full commit).

    • retries is the maximum number of retry attempts for failed produce requests.

    • batch.size sets producer batch memory size in bytes.

    • linger.ms delays sending records to allow batching (in milliseconds).

    • buffer.memory is the total producer buffer memory in bytes.

    • compression.type sets the compression algorithm to use ("none", "gzip", etc.).

Example block for database connection profiles in the REST_home/scripts/application.yaml file:

     - id: "cp2"
       profile-type: "database"
       host: "host"
       port: port
       truststore-type: PKCS12
       truststore-location: "/home/truststore.p12"
       truststore-password-key: "CP2_TRUSTSTORE_WALLET_KEY"
       keystore-type: PKCS12
       keystore-location: "/home/keystore.p12"
       keystore-password-key: "CP2_KEYSTORE_WALLET_KEY"
       properties:
         service-name: pindb.example.com

where,

  • service-name is the specific name of the database service (or SID) to which the connection should be established.

Setting Up Consumers

Example block for Kafka consumers in the REST_home/scripts/application.yaml file:

consumers: 
     - connection-profile-id: "cp1"
       properties:
         group.id: cgroup-brmrsm-productInventory
       configuration:
         max-retries: 3
         retry-delay-interval-ms: 1000
         record-fetch-timeout-ms: 1000
         brm-connection-retry-interval-ms: 30000
         strict-ordering: false
         buffer-strategy: 
       topic-event-config:
         source-topic: productInventory
         sink-topic: productInventoryRetry
         key-state-topic: keyStateTopic
         consumer-thread-count: 1
         worker-thread-count: 2
         downstream-consumers:
           - connection-profile-id: "cp1"
             properties:
               group.id: cgroup-brmrsm-productInventory
             configuration:
               max-retries: 3
               retry-delay-interval-ms: 1000
                record-fetch-timeout-ms: 1000
                brm-connection-retry-interval-ms: 30000
             topic-event-config:
               source-topic: productInventoryRetry
               sink-topic: productInventoryDLQ
               consumer-thread-count: 1
               worker-thread-count: 2
               events:
                 - ProductCreateEvent
                 - ProductAttributeValueChangeEvent
                 - ProductStateChangeEvent
                 - ProductDeleteEvent
         events:
           - ProductCreateEvent
           - ProductAttributeValueChangeEvent
           - ProductStateChangeEvent
           - ProductDeleteEvent
     - connection-profile-id: "cp1"
       properties:
         group.id: cgroup-brmrsm-partyAccountInventory
       configuration:
         max-retries: 1
         retry-delay-interval-ms: 1000
         record-fetch-timeout-ms: 1000
         brm-connection-retry-interval-ms: 30000
         strict-ordering: false
         buffer-strategy: 
       topic-event-config:
         source-topic: partyAccountInventory
         sink-topic: partyAccountInventoryRetry
         key-state-topic: keyStateTopic
         consumer-thread-count: 1
         worker-thread-count: 2
         downstream-consumers:
           - connection-profile-id: "cp1"
             properties:
               group.id: cgroup-brmrsm-partyAccountInventory
             configuration:
               max-retries: 1
               retry-delay-interval-ms: 1000
               record-fetch-timeout-ms: 1000
               brm-connection-retry-interval-ms: 30000
             topic-event-config:
               source-topic: partyAccountInventoryRetry
               sink-topic: partyAccountInventoryDLQ
               key-state-topic:
               consumer-thread-count: 1
               worker-thread-count: 2
               events:
                 - PartyAccountCreateEvent
                 - PartyAccountAttributeValueChangeEvent
                 - PartyAccountStateChangeEvent
                 - PartyAccountDeleteEvent
         events:
           - PartyAccountCreateEvent
           - PartyAccountAttributeValueChangeEvent
           - PartyAccountStateChangeEvent
           - PartyAccountDeleteEvent
     - connection-profile-id: "cp1"
       properties:
         group.id: cgroup-brmrsm-billingAccountInventory
       configuration:
         max-retries: 1
         retry-delay-interval-ms: 1000
         record-fetch-timeout-ms: 1000
         brm-connection-retry-interval-ms: 30000
         strict-ordering: false
         buffer-strategy: 
       topic-event-config:
         source-topic: billingAccountInventory
         sink-topic: billingAccountInventoryRetry
         key-state-topic: keyStateTopic
         consumer-thread-count: 1
         worker-thread-count: 2
         downstream-consumers:
           - connection-profile-id: "cp1"
             properties:
               group.id: cgroup-brmrsm-billingAccountInventory
             configuration:
               max-retries: 1
               retry-delay-interval-ms: 1000
               record-fetch-timeout-ms: 1000
               brm-connection-retry-interval-ms: 30000
             topic-event-config:
               source-topic: billingAccountInventoryRetry
               sink-topic: billingAccountInventoryDLQ
               key-state-topic:
               consumer-thread-count: 1
               worker-thread-count: 2
               events:
                 - BillingAccountCreateEvent
                 - BillingAccountAttributeValueChangeEvent
                 - BillingAccountStateChangeEvent
                 - BillingAccountDeleteEvent
         events:
           - BillingAccountCreateEvent
           - BillingAccountAttributeValueChangeEvent
           - BillingAccountStateChangeEvent
           - BillingAccountDeleteEvent

where:

  • connection-profile-id is the identifier referencing the Kafka connection profile used by the consumer.
  • group.id is the unique consumer group identifier for coordinating message consumption and offset management.
  • max-retries is the maximum number of retry attempts for processing an event before routing it to the retry or DLQ topic.
  • retry-delay-interval-ms is the delay in milliseconds between retry attempts after a processing failure.
  • record-fetch-timeout-ms is the maximum time in milliseconds that the consumer waits for records during a fetch operation.
  • brm-connection-retry-interval-ms is the interval in milliseconds to wait between attempts to reconnect to BRM services.
  • strict-ordering is a flag indicating whether strict ordering for message keys is enforced (true or false).
  • buffer-strategy is the method used for buffering events within the consumer; implementation-specific.
  • source-topic is the Kafka topic from which the consumer reads events.
  • sink-topic is the Kafka topic to which failed events are sent for retries.
  • key-state-topic is an optional topic used to track and manage state related to message keys.
  • consumer-thread-count is the number of threads allocated for polling messages from the source topic.
  • worker-thread-count is the number of threads used for processing polled events.
  • downstream-consumers is a list of additional consumer stages for retry or DLQ handling, each with its configuration.
  • events is the list of event types that the consumer instance is configured to process (e.g., ProductCreateEvent, PartyAccountDeleteEvent).

Note:

It is recommended to use a different group.id for each individual consumer. However, consumers in the same downstream consumer chain can share the same group.id.
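The group.id recommendation in the note above can be checked mechanically. The following Python sketch is illustrative only and is not part of the product; it assumes the consumers section of application.yaml has already been parsed into Python dictionaries. It verifies that unrelated top-level consumers use distinct group IDs, while allowing a consumer and its own downstream retry chain to share one:

```python
def collect_chain_group_ids(consumer):
    """Return the set of group.ids used by a consumer and its downstream chain."""
    ids = {consumer["properties"]["group.id"]}
    for downstream in consumer.get("topic-event-config", {}).get("downstream-consumers", []):
        ids |= collect_chain_group_ids(downstream)
    return ids

def check_group_ids(consumers):
    """Report group.ids that are reused across unrelated top-level consumers."""
    seen = {}  # group.id -> index of the top-level consumer that first used it
    conflicts = set()
    for idx, consumer in enumerate(consumers):
        for gid in collect_chain_group_ids(consumer):
            if gid in seen and seen[gid] != idx:
                conflicts.add(gid)
            seen.setdefault(gid, idx)
    return sorted(conflicts)

# Two top-level consumers accidentally sharing a group.id:
consumers = [
    {"properties": {"group.id": "cgroup-brmrsm-productInventory"},
     "topic-event-config": {"downstream-consumers": [
         {"properties": {"group.id": "cgroup-brmrsm-productInventory"}}]}},
    {"properties": {"group.id": "cgroup-brmrsm-productInventory"}},
]
print(check_group_ids(consumers))  # ['cgroup-brmrsm-productInventory']
```

The downstream consumer sharing its parent's group.id is allowed, so only the clash between the two independent top-level consumers is reported.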

Following is an example block for a database queue consumer in the REST_home/scripts/application.yaml file:

     - connection-profile-id: "cp2"
       properties:
         username: ""
         password-key: "CP2_SCHEMA_WALLET_KEY"
         queue-name: OPEN_API_QUEUE
       wallet-location: "home/db_wallet"
       configuration:
         worker-thread-count: 1

where:

  • connection-profile-id is the identifier referencing the database connection profile to be used by this configuration.
  • username is the login name used for authenticating the connection to the database or queue.
  • password-key is the key used to securely retrieve the authentication password (such as from a Kubernetes Secret or Wallet).
  • queue-name is the name of the queue from which this consumer will read messages (e.g., for queue-based processing).
  • wallet-location is the name of the wallet or credential store where sensitive authentication information is managed.
  • worker-thread-count is the number of threads assigned for concurrent processing of messages from the queue.
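Note that the example above stores only a password-key; the secret itself is resolved from the wallet at wallet-location and never appears in the file. A small illustrative check (hypothetical helper, not part of the product) that a parsed profile follows this pattern:

```python
def uses_wallet_credentials(profile):
    """True if the profile references a wallet key instead of an inline password."""
    props = profile.get("properties", {})
    return (
        "password" not in props            # no plain-text secret in the file
        and bool(props.get("password-key"))  # wallet lookup key is present
        and bool(profile.get("wallet-location"))
    )

profile = {
    "connection-profile-id": "cp2",
    "properties": {"username": "", "password-key": "CP2_SCHEMA_WALLET_KEY",
                   "queue-name": "OPEN_API_QUEUE"},
    "wallet-location": "home/db_wallet",
    "configuration": {"worker-thread-count": 1},
}
print(uses_wallet_credentials(profile))  # True
```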

Setting up Producers

Note:

Configure producers to receive the TMF events for supported API requests and supported TMF consumer events.

Following is an example block for producers in the REST_home/scripts/application.yaml file:

 producer:
     connection-profile-id: "cp1"
     properties:
     event-topics-config:
       - event-name: "ProductCreateEvent"
         topic-names: [ "productCreateTopic", "productCreateTopic2" ]
       - event-name: "ProductDeleteEvent"
          topic-names: [ "productDeleteTopic" ]
       - event-name: "ProductAttributeValueChangeEvent"
         topic-names: [ "productAttributeValueTopic" ]
       - event-name: "ProductStateChangeEvent"
         topic-names: [ "productStateChangeTopic" ]
       - event-name: "PartyAccountCreateEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "PartyAccountAttributeValueChangeEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "PartyAccountStateChangeEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "PartyAccountDeleteEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "BillingAccountCreateEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "BillingAccountAttributeValueChangeEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "BillingAccountStateChangeEvent"
         topic-names: [ "partyAccountNotificationTopic" ]
       - event-name: "BillingAccountDeleteEvent"
         topic-names: [ "partyAccountNotificationTopic" ]

where:

  • connection-profile-id is the reference to the id of a defined connection profile (such as "cp1") that the producer will use for publishing events.

  • properties is a collection of Kafka producer configuration parameters (such as acks, retries, batch size, etc.), set to control behavior of the producer and optimize performance, reliability, and security. These properties will override the connection-profile properties if they are configured.

  • event-topics-config is a list that maps each event name to one or more Kafka topics to which the event will be published.

  • event-name is the identifier for a specific business event type that will be produced and routed by this producer (e.g., "ProductCreateEvent").

  • topic-names is the list of one or more Kafka topics to which messages for the corresponding event-name are published.
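The event-topics-config mapping means a single event can fan out to several topics (in the example, ProductCreateEvent is published to two). A minimal Python sketch of that routing logic, illustrative only, with names mirroring the example configuration:

```python
def build_routing_table(event_topics_config):
    """Map each event-name to its list of target topics."""
    return {entry["event-name"]: entry["topic-names"] for entry in event_topics_config}

def topics_for(routing, event_name):
    """Return the topics an event is published to; empty list if unmapped."""
    return routing.get(event_name, [])

config = [
    {"event-name": "ProductCreateEvent",
     "topic-names": ["productCreateTopic", "productCreateTopic2"]},
    {"event-name": "PartyAccountDeleteEvent",
     "topic-names": ["partyAccountNotificationTopic"]},
]
routing = build_routing_table(config)
print(topics_for(routing, "ProductCreateEvent"))  # ['productCreateTopic', 'productCreateTopic2']
print(topics_for(routing, "UnknownEvent"))        # []
```

An event with no entry in the table simply resolves to no topics, so unmapped event types are not published.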

Completing Post-Installation Tasks

After deployment, validate the service using logs and functional checks. Customize event handling with mapper overrides if required and follow the correct procedure for restarting the REST Services Manager Notification Service after any configuration change.

To validate deployment and check logs:
  1. Inspect the log directory for startup messages.

  2. Confirm the service connects to Kafka brokers and joins the expected consumer group(s).

  3. Verify correct topic subscriptions and successful offset commits.
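The topic-subscription check in step 3 can be partly automated once you have the list of topics the broker reports (how you obtain that list, for example through your Kafka admin tooling, is environment-specific). The following sketch, under the assumption that the consumer configuration has been parsed into Python dictionaries, compares the topics the configuration requires against the broker's list:

```python
def expected_topics(consumers):
    """Collect every source, sink, and key-state topic named in the consumer config."""
    topics = set()
    def visit(consumer):
        cfg = consumer.get("topic-event-config", {})
        for key in ("source-topic", "sink-topic", "key-state-topic"):
            if cfg.get(key):
                topics.add(cfg[key])
        for downstream in cfg.get("downstream-consumers", []):
            visit(downstream)
    for consumer in consumers:
        visit(consumer)
    return topics

def missing_topics(consumers, broker_topics):
    """Topics the configuration needs but the broker does not have."""
    return sorted(expected_topics(consumers) - set(broker_topics))

consumers = [{"topic-event-config": {
    "source-topic": "productInventory",
    "sink-topic": "productInventoryRetry",
    "key-state-topic": "keyStateTopic",
    "downstream-consumers": [{"topic-event-config": {
        "source-topic": "productInventoryRetry",
        "sink-topic": "productInventoryDLQ"}}],
}}]
print(missing_topics(consumers, ["productInventory", "keyStateTopic"]))
# ['productInventoryDLQ', 'productInventoryRetry']
```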

To customize using customMapperDirectory:
  • To override default event mappers, place custom mapping files in the customMapperDirectory defined in application.yaml. See "Configuring and Adding Custom Mapper Files" for more information.

  • Restart the REST Services Manager Notification Service after adding or modifying custom mapper files to apply changes.

To restart REST Services Manager:
  • After any configuration or credential change, restart the REST Services Manager Notification Service to load the updated settings and wallet entries.

  • Use the supplied service management commands or scripts for safe restart procedures. Stop the BRM REST Services Manager by running the REST_home/scripts/stop-brm-rsm.sh script. Start the BRM REST Services Manager by running the REST_home/scripts/start-brm-rsm.sh script.

Performing Verification and Sanity Checks

Conduct a series of checks to ensure all topics, partitions, security settings, and service connections are correct. This section includes procedures for performing functional connectivity testing and verifying end-to-end message flow.

Perform checks to validate the REST Services Manager Notification Service deployment:
  • Confirm all topics, partitions, and ACLs are present in Kafka.

  • Ensure consumers join the correct consumer group and receive partition assignments.

  • Check that the service successfully establishes SSL or SASL connections to brokers.

  • Run test messages through the pipeline to validate end-to-end message flow.

  • Use built-in scripts to verify wallet entry references and keystore integrity.

Security Best Practices for REST Services Manager Notification Service

Adhere to recommended security practices for storage and handling of credentials. This section summarizes wallet usage, credential rotation, file permissions, and compliance alignment.

To maintain a secure deployment:
  • Store all credentials, passphrases, and secrets in the Oracle wallet. Never use plain text or version control for secret values.

  • Limit file permissions on wallet and keystore files to only necessary service accounts.

  • Rotate credentials and certificates periodically, update the wallet, and restart services after changes.

  • Do not disable SSL endpoint identification unless explicitly required and with a formal risk assessment.

  • Ensure your deployment always aligns with Oracle security and compliance policies.

Troubleshooting SSL Connection Errors

BRM REST Services Manager can use SSL connections for invoking OAuth Token Introspection Endpoint and Oracle Unified Directory REST APIs. Table 11-6 describes the possible SSL connection errors and solutions.

Table 11-6 SSL Connection Errors and Solutions

Error: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching hostname found

Solution: Ensure that the Common Name (CN) in the SSL certificate is the fully qualified domain name of the server where the endpoints are hosted.

Error: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Solution: Ensure that the SSL certificate of the endpoint is imported into the Java KeyStore.
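The first error occurs when the hostname the client connects with does not match any name in the server certificate. The simplified matcher below illustrates only the basic rule (real Java JSSE hostname verification also consults subjectAltName entries and applies stricter wildcard restrictions); it shows why connecting with a short hostname fails when the certificate's CN is the fully qualified domain name:

```python
def matches_certificate_name(hostname, cert_name):
    """Simplified check: exact match, or a single left-most wildcard label."""
    host_labels = hostname.lower().split(".")
    cert_labels = cert_name.lower().split(".")
    if len(host_labels) != len(cert_labels):
        return False
    if cert_labels[0] == "*":
        # Wildcard covers exactly one label: *.example.com matches a.example.com
        return host_labels[1:] == cert_labels[1:]
    return host_labels == cert_labels

print(matches_certificate_name("brm.example.com", "brm.example.com"))  # True
print(matches_certificate_name("brm.example.com", "*.example.com"))    # True
print(matches_certificate_name("brm", "brm.example.com"))              # False
```

In practice, the fix is on the certificate side, as the table states: issue the certificate with the fully qualified domain name that clients actually use.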