7 REST API Reference for AI and ML

Learn about the REST APIs required for creating the Artificial Intelligence (AI) and Machine Learning (ML) workflow and how to use them for configuring different services.

Topics in this document:

  • About the REST APIs

  • All REST Endpoints

  • Quick Start

  • Use cURL

  • Authenticate

  • Send Requests

  • Swagger URL

  • Reference

About the REST APIs

The REST API for AI and ML services allows you to create data sources, train models on the configured data sources, and make predictions based on a customer’s usage pattern. You use these APIs for applying and configuring different services.

All REST Endpoints

This chapter documents the following endpoints for using AI and ML services: /data/collect, /data/fetch, /data/cache, /utility/train, and /recommend/predict.

Quick Start

Set up your environment and then use the REST API for AI and ML to make your first API call by performing these tasks.

Prerequisites

Table 7-1 lists the prerequisites for setting up your environment and making API calls.

Table 7-1 Prerequisites

Prerequisite More Information

Install Oracle Monetization Suite applications

This refers to the Oracle Monetization Suite application with which you want to integrate the AI services.

For more information, see the installation guide for the respective product.

Install cURL

See "Use cURL".

Send a Request

After you set up your REST API, you can send a request to ensure that your connection works.

Use cURL

The examples within this document use cURL to demonstrate how to access the REST API for AI and ML services in Oracle Monetization Suite.

Task 1: Install cURL

To connect securely to the server, you must install a version of cURL that supports SSL and provide an SSL certificate authority (CA) certificate file or bundle to authenticate against the CA certificate store, such as Verisign.

The following procedure demonstrates how to install cURL on a Windows 64-bit system:

  1. In your browser, navigate to the cURL Releases and Downloads page at https://curl.se/download.html.

  2. Locate the version of the cURL software that corresponds to your operating system, click the link to download the ZIP file, and then install the software.

  3. Go to the cURL CA Certs page at https://curl.se/docs/caextract.html and then download the ca-bundle.pem SSL CA certificate bundle to the folder in which you installed cURL.

Task 2: Set Environment Variable for cURL

In a command window, set the cURL environment variable, CURL_CA_BUNDLE, to the location of your local CA certificate bundle. For example:

C:\curl> set CURL_CA_BUNDLE=ca-bundle.pem

For information about CA certificate verification using cURL, see https://curl.se/docs/sslcerts.html.

Task 3: Start cURL

Start cURL and specify one or more of the command-line options defined in the following table:

Table 7-2 cURL Options and Descriptions

cURL Option Description

-d @filename.json

--data @filename.json

Identifies the file on the local machine that contains the request body in JSON format. Alternatively, you can pass the request body inline with -d '{"id":5,"status":"OK"}'.

-F @filename.json

--form @filename.json

Identifies form data, in JSON format, on the local machine.

-H

--header

Defines one or more of the following:

  • Content type of the request document

  • Hostname and port number of your Oracle Services Communications Proxy (SCP) Authority Server

-i

--include

Displays header information in the response.

-X method

--request method

Indicates the type of request method (for example, GET or POST).
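As an illustration of how these cURL options correspond to the parts of an HTTP request, the following Python sketch builds an equivalent request with the standard library. The host, port, and body values are placeholders, not values defined by this API:

```python
import json
import urllib.request

# Placeholder URL; substitute your own host and port.
url = "http://hostname:port/data/collect"          # the request URL passed to cURL
body = json.dumps({"@type": "CollectRequest"})     # -d @filename.json (request body in JSON)

req = urllib.request.Request(
    url,
    data=body.encode("utf-8"),
    method="POST",                                 # -X method / --request method
    headers={
        "Content-Type": "application/json",        # -H / --header
        "Accept": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; -i / --include corresponds
# to reading the headers of the returned response object.
```

Note that urllib normalizes header names, so the stored key is "Content-type".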

Authenticate

The API for AI and ML services uses OAuth 2.0 access tokens to authenticate requests from clients.

Before you can send requests to REST API services, you must acquire a valid OAuth access token. Then, your clients must pass the token in the header of every request sent to an API service (applicable only for Oracle Identity Cloud Service).

You can use either Oracle Identity Cloud Service or Oracle Access Management to set up authentication for your client requests. For more information, see "BRM REST Services Manager Security" in BRM Security Guide.

Send Requests

Use these guidelines when sending requests using the REST API for AI and ML services.

URL Structure

Here is the URL structure for the requests:

apiRoot/resourcePath

where:

  • apiRoot is the URL for accessing the HTTP Gateway server, in the form http://hostname:port or https://hostname:port.

  • resourcePath is the path to the endpoint.

For example, the URL for fetching the data is:

http://hostname:port/data/fetch

where:

  • hostname is the host name of the application server.

  • port is the port for the application server.
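The URL pieces above can be assembled programmatically. This is a minimal sketch; the host name and port are illustrative placeholders:

```python
def endpoint_url(api_root: str, resource_path: str) -> str:
    """Join apiRoot and resourcePath into a full request URL."""
    return api_root.rstrip("/") + "/" + resource_path.lstrip("/")

# Example with a placeholder host and port:
url = endpoint_url("http://myhost.example.com:8080", "/data/fetch")
# url == "http://myhost.example.com:8080/data/fetch"
```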

Supported Methods

You can perform various operations on a resource using standard HTTP/HTTPS method requests.

Table 7-3 shows the supported methods.

Table 7-3 Supported Methods

HTTP Method Description

POST

Create the resource.

Media Types

The REST API for AI and ML services supports the following media type:

  • application/json

Supported Headers

The REST API for AI and ML services supports the following headers that may be passed in the header section of the HTTP/HTTPS request or response.

Table 7-4 Supported Headers

Header Description Example

Content-Type

The media type of the body of the request. This is required for POST and PUT requests.

Content-Type: application/json

Authorization

The type of authorization. The REST API for AI and ML services uses the OAuth 2.0 bearer token framework.

Authorization: Bearer YOUR_TOKEN

where YOUR_TOKEN is your client application's OAuth 2.0 access token.

Accept

The media types that the client wants to receive in the response.

Accept: application/json

Host

The domain name (and optionally the port number) of the server to which the request is being sent.

Host: api.example.com
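The headers in Table 7-4 can be built as a dictionary before being passed to your HTTP client. A minimal sketch; the token and host values are placeholders:

```python
def request_headers(token: str) -> dict:
    """Build the supported request headers for a POST request."""
    return {
        "Content-Type": "application/json",   # media type of the request body
        "Authorization": f"Bearer {token}",   # OAuth 2.0 bearer token
        "Accept": "application/json",         # media type expected in the response
        "Host": "api.example.com",            # illustrative domain name
    }

headers = request_headers("YOUR_TOKEN")
```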

Swagger URL

If you use Swagger, you can make calls to the API for AI and ML services at this URL:

apiRoot/openapi/ui/index.html

Reference

This section provides additional reference information to help you work with the REST API for AI and ML services.

Status Codes

When you call any of the REST API operations for AI and ML services, the response returns one of the standard HTTP/HTTPS status codes defined in the following table.

Table 7-5 Status Codes

HTTP Status Code Description

200 OK

The request was completed successfully.

400 Bad Request

The request could not be processed because it contains missing or invalid information (such as a validation error on a parameter or a missing required value).

401 Unauthorized

The request is not authorized. The authentication credentials must be added or validated.

403 Forbidden

The user is authenticated but does not have authorization to perform this request.

404 Not Found

The request includes a resource URI that does not exist.

405 Method Not Allowed

The HTTP method specified in the request (POST) is not supported for this request URI.

409 Conflict

The request can't be completed due to a conflict with the current state of the target resource.

500 Internal Server Error

The server encountered an unexpected condition that prevented it from fulfilling the request.

501 Not Implemented

The server either does not recognize the request method or is unable to process the request.

503 Service Unavailable

The server is not available at the moment because it is either overloaded or experiencing downtime.

Create the Data Set

Method: POST

Path: /data/collect

Description: Creates the data set. It collects data from your data source, such as the BRM database or your personal database, and stores it in an object or file.

This shows the cURL command for sending a Create the Data Set request:

curl -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_TOKEN" -H "Accept: application/json" -X POST -d @file.json 'http://hostname:port/data/collect'

where:

  • YOUR_TOKEN is your client application's OAuth 2.0 access token.

  • file.json is the JSON file that specifies the details of the data to collect.

  • hostname is the host name of the application server.

  • port is the port for the application server.

Request Body

The following shows the fields in the request body along with some sample data.

{
    "@type" : "CollectRequest",
    "data_source_type": [
        "oracledb"
    ],
    "tech_choice": ("spark" / "dataflow" / "datascience"),

    # Set this to true if you need to append the training data. Set it to false to fetch complete data without appending.
    "delta_data": ("true" / "false"),
 
    "storage_details": [
        {
            "@type": "StorageDetails",
            "path": "/mnt/data_services/",
            "output_file_type": ("json" / "csv" / "pickle"),
            "storage_type": ("pvc" / "object_storage")
        }
    ],
    "query_builder": [
        {
            "@type": "QueryBuilder",
            "queries": [
                {
                    "@type": "QueryList",
                    "name": "account_data",
                    "query": "select a.poid_id0 as acct_cd, a.currency as crncy_cd, b.country as cntry_name, b.state as state_name, b.city as city_name from account_t a left join account_nameinfo_t b on b.obj_id0 = a.poid_id0 where b.country <> 'null'"
                },
                {
                    "@type": "QueryList",
                    "name": "msisdn_data",
                    "query": "select s.account_obj_id0, s.poid_type, a.name as msisdn from service_t s left join service_alias_list_t a on a.obj_id0 = s.poid_id0 where a.rec_id = 0"
                },
                {
                    "@type": "QueryList",
                    "name": "balance_group_data",
                    "query": "select d.poid_id0 as bal_grp_cd, d.ACCOUNT_OBJ_ID0 as acct_cd, b.rec_id2 as acct_bal_typ_cd, b.VALID_TO as bal_expry_dt, e.VALID_FROM as bal_begin_dt, e.CURRENT_BAL as bal_amt, e.GRANTED_BAL as orgnl_bkt_amt, e.GRANTOR_OBJ_ID0 as prod_sbrp_cd, e.GRANTOR_OBJ_TYPE as prod_sbrp_typ_cd from bal_grp_t d left join bal_grp_sub_bals_t e on e.obj_id0 = d.poid_id0 inner join (select rec_id2, max(valid_to) as valid_to from bal_grp_sub_bals_t group by rec_id2) b on e.rec_id2 = b.rec_id2 and e.valid_to = b.valid_to"
                },
                {
                    "@type": "QueryList",
                    "name": "service_data",
                    "query": "select ACCOUNT_OBJ_ID0 as acct_cd, BAL_GRP_OBJ_ID0 as bal_grp_cd, poid_id0 as ser_srvc_cd, poid_type as ser_srvc_typ_cd, name as  ser_srvc_name from service_t where poid_type not like '%/pcm_client' and poid_type not like '%/admin_client'"
                },
                {
                    "@type": "QueryList",
                    "name": "purchased_product_data",
                    "query": "select ACCOUNT_OBJ_ID0 as acct_cd, CYCLE_START_T as cycl_strt_dt, CYCLE_END_T as cycl_end_dt, PLAN_OBJ_ID0 as prod_ofr_cd, PRODUCT_OBJ_ID0 as prod_spec_cd, POID_ID0 as prod_sbrp_cd, PURCHASE_START_T as prod_sbrp_strt_dt, PURCHASE_END_T as prod_sbrp_end_dt, SERVICE_OBJ_ID0 as ser_srvc_cd, DEAL_OBJ_ID0 as deal_cd from purchased_product_t"
                },
                {
                    "@type": "QueryList",
                    "name": "product_data",
                    "query": "select poid_id0 as prod_spec_cd, name as shrt_name, permitted from product_t"
                },
                {
                    "@type": "QueryList",
                    "name": "deal_data",
                    "query": "select d.poid_id0 as deal_cd, d.permitted, d.name, p.PRODUCT_OBJ_ID0 as prod_spec_cd from deal_t d left join deal_products_t p on p.obj_id0 = d.poid_id0"
                },
                {
                    "@type": "QueryList",
                    "name": "plan_data",
                    "query": "select distinct p.POID_ID0 as prod_ofr_cd, p.NAME as prod_ofr_name, d.PERMITTED as srvc_typ_cd, pd.DEAL_OBJ_ID0 as deal_cd from PLAN_T p left join PLAN_SERVICES_DEALS_T pd on pd.OBJ_ID0 = p.POID_ID0 left join DEAL_T d on d.POID_ID0 = pd.DEAL_OBJ_ID0"
                },
                {
                    "@type": "QueryList",
                    "name": "rate_data",
                    "query": "select r.POID_ID0 as prod_ofr_price_cd, r.RATE_PLAN_OBJ_ID0 as prod_ofr_price_typ_cd, rp.PRODUCT_OBJ_ID0 as prod_spec_cd, rp.EVENT_TYPE as evt_typ_cd, rb.ELEMENT_ID as elmnt_cd, rb.SCALED_AMOUNT as scaled_amt from rate_t r left join rate_plan_t rp on rp.POID_ID0 = r.RATE_PLAN_OBJ_ID0 left join rate_bal_impacts_t rb on rb.obj_id0 = r.POID_ID0"
                },
                {
                    "@type": "QueryList",
                    "name": "event_data",
                    "query": "select i.ACCOUNT_OBJ_ID0 as acct_cd, i.OFFERING_OBJ_ID0 as prod_ofr_cd, i.PRODUCT_OBJ_ID0 as prod_spec_cd, i.RESOURCE_ID as acct_bal_typ_cd, e.SERVICE_OBJ_TYPE as service_type, sum(e.net_quantity) as usage from event_t e left join event_bal_impacts_t i on i.OBJ_ID0 = e.POID_ID0 where i.OFFERING_OBJ_ID0 <> 0 group by i.ACCOUNT_OBJ_ID0, i.OFFERING_OBJ_ID0, i.PRODUCT_OBJ_ID0, i.RESOURCE_ID, e.SERVICE_OBJ_TYPE"
                },
                {
                    "@type": "QueryList",
                    "name": "deal_data_2",
                    "query": "select i.ACCOUNT_OBJ_ID0 as acct_cd, i.OFFERING_OBJ_ID0 as prod_ofr_cd, p.DEAL_OBJ_ID0 as deal_cd from event_bal_impacts_t i left join event_t e on i.OBJ_ID0 = e.POID_ID0 left join purchased_product_t p on p.POID_ID0 = i.OFFERING_OBJ_ID0 where e.POID_TYPE = '/event/billing/product/fee/cycle/cycle_forward_monthly' and i.OFFERING_OBJ_ID0 <> 0"
                },
                {
                    "@type": "QueryList",
                    "name": "over_usage_data",
                    "query": "select i.ACCOUNT_OBJ_ID0 as acct_cd, i.OFFERING_OBJ_ID0 as prod_ofr_cd, i.PRODUCT_OBJ_ID0 as prod_spec_cd, EXTRACT(MONTH FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)) as MONTH, EXTRACT(YEAR FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)) as YEAR, e.SERVICE_OBJ_ID0 as srvc_cd, sum(i.AMOUNT) as amt, sum (e.NET_QUANTITY) / ABS(NULLIF(rb.SCALED_AMOUNT, 0)) as USAGE_OVER_GRANT from event_bal_impacts_t i left join event_t e on i.OBJ_ID0 = e.POID_ID0 left join rate_plan_t rp on rp.PRODUCT_OBJ_ID0 = i.PRODUCT_OBJ_ID0 left join rate_t r on rp.POID_ID0 = r.RATE_PLAN_OBJ_ID0 left join rate_bal_impacts_t rb on rb.obj_id0 = r.poid_id0 where rp.event_type = '/event/billing/product/fee/cycle/cycle_forward_monthly' and e.POID_TYPE like '/event/session%' and e.service_obj_type <> '/service/pcm_client' and i.OFFERING_OBJ_ID0 <> 0 group by i.ACCOUNT_OBJ_ID0, i.OFFERING_OBJ_ID0, e.SERVICE_OBJ_ID0, i.PRODUCT_OBJ_ID0, rb.SCALED_AMOUNT, EXTRACT(MONTH FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)), EXTRACT(YEAR FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400))"
                },
                {
                    "@type": "QueryList",
                    "name": "usage_details",
                    "query": "select i.ACCOUNT_OBJ_ID0 as acct_cd, i.OFFERING_OBJ_ID0 as prod_ofr_cd, i.PRODUCT_OBJ_ID0 as prod_spec_cd, i.RESOURCE_ID as acct_bal_typ_cd, EXTRACT(MONTH FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)) as MONTH, EXTRACT(YEAR FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)) as YEAR, e.SERVICE_OBJ_ID0 as srvc_cd, substr(e.SERVICE_OBJ_TYPE, instr (e.SERVICE_OBJ_TYPE, '/', -1) + 1) as ser_srvc_name, sum (e.NET_QUANTITY) / ABS(NULLIF(rb.SCALED_AMOUNT, 0)) as USAGE from event_bal_impacts_t i left join event_t e on i.OBJ_ID0 = e.POID_ID0 left join rate_plan_t rp on rp.PRODUCT_OBJ_ID0 = i.PRODUCT_OBJ_ID0 left join rate_t r on rp.POID_ID0 = r.RATE_PLAN_OBJ_ID0 left join rate_bal_impacts_t rb on rb.obj_id0 = r.poid_id0 where rp.event_type = '/event/billing/product/fee/cycle/cycle_forward_monthly' and e.POID_TYPE like '/event/session%' and i.RESOURCE_ID <> 840 and e.service_obj_type <> '/service/pcm_client' and i.OFFERING_OBJ_ID0 <> 0 group by i.ACCOUNT_OBJ_ID0, i.OFFERING_OBJ_ID0, e.SERVICE_OBJ_ID0, i.PRODUCT_OBJ_ID0, rb.SCALED_AMOUNT, e.SERVICE_OBJ_TYPE, i.RESOURCE_ID, EXTRACT(MONTH FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400)), EXTRACT(YEAR FROM (TO_DATE('01/01/1970', 'dd/mm/yyyy') + e.CREATED_T/86400))"
                }
            ],
            # 'join_condition' is an optional input that can be used to join the data from two queries
            "join_condition": [
                {
                    "@type": "JoinConditions",
                    "name": "joined_data",
                    "join": [
                        "service_data",
                        "purchased_product_data"
                    ]
                },
                {
                    "@type": "JoinConditions",
                    "name": "left_join_1",
                    "join": [
                        "event_data",
                        "purchased_product_data"
                    ],
                    "join_type": "left",
                    "join_column": ["acct_cd_1", "acct_cd"]
                }
            ]
        }
    ]
}

Response Body

If successful, the response code 200 is returned with a response body. For example:

{
    "@type": "CollectResponse",
    "status": "Data Fetch Successful"
}
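A CollectRequest body like the one above can be generated programmatically before being saved as file.json for the cURL command. This sketch uses placeholder storage values and includes only one abbreviated query:

```python
import json

collect_request = {
    "@type": "CollectRequest",
    "data_source_type": ["oracledb"],
    "tech_choice": "spark",
    "delta_data": "false",                  # fetch complete data without appending
    "storage_details": [{
        "@type": "StorageDetails",
        "path": "/mnt/data_services/",      # placeholder path
        "output_file_type": "json",
        "storage_type": "pvc",
    }],
    "query_builder": [{
        "@type": "QueryBuilder",
        "queries": [{
            "@type": "QueryList",
            "name": "account_data",
            # Abbreviated form of the account_data query shown above.
            "query": "select a.poid_id0 as acct_cd, a.currency as crncy_cd from account_t a",
        }],
    }],
}

payload = json.dumps(collect_request, indent=4)  # write this string to file.json
```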

Send Specified Data to a Client

Method: POST

Path: /data/fetch

Description: Retrieves the specified data from your data set and sends it to the client.

This shows the cURL command for sending a Send Specified Data to Client request:

curl -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_TOKEN" -H "Accept: application/json" -X POST -d @file.json 'http://hostname:port/data/fetch'

where:

  • YOUR_TOKEN is your client application's OAuth 2.0 access token.

  • file.json is the JSON file that specifies the details of the data to fetch.

  • hostname is the host name of the application server.

  • port is the port for the application server.

Request Body

The following shows the fields in the request body along with some sample data.

{
  "@type": "FetchRequest",
  "required_data": [
    "account_data",
    "over_usage_data",
    "deal_data",
    "purchased_product_data"
  ],

  # 'additional_data' is an optional field. It can be used in a custom data filter script to filter the required information from cached data.
  # If 'sample' is set to 'true', the sample_filter_data.py script is used. If it is set to 'false', the script configured in the Helm chart
  # (helm_charts/values.yaml - dataFetchProcessor.filterFile) is used to filter the data.

  "additional_data": {
    "account_id": "2479678",
    "sample": ("true" / "false")
  }
}

Response Body

If successful, the response code 200 is returned with a response body. For example:

{
    "@type": "FetchResponse",
    "data": {
        "data": [(Data Requested)],
        "status": ("No data is stored currently" / "Successful")
    }
}
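Because the FetchResponse wraps the rows and a status string together, a client typically checks the status before using the data. The following sketch operates on a hypothetical parsed response shaped like the example above:

```python
def extract_rows(fetch_response: dict) -> list:
    """Return the requested rows, or an empty list if no data is stored."""
    inner = fetch_response.get("data", {})
    if inner.get("status") != "Successful":
        return []
    return inner.get("data", [])

# Hypothetical response, shaped like the FetchResponse example above:
resp = {
    "@type": "FetchResponse",
    "data": {"data": [{"acct_cd": "2479678"}], "status": "Successful"},
}
rows = extract_rows(resp)
```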

Create a Cache Object

Method: POST

Path: /data/cache

Description: Caches the specified data and uses it when required.

This shows the cURL command for sending a Create a Cache Object request:

curl -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_TOKEN" -H "Accept: application/json" -X POST -d @file.json 'http://hostname:port/data/cache'

where:

  • YOUR_TOKEN is your client application's OAuth 2.0 access token.

  • file.json is the JSON file that specifies the details of the data to cache.

  • hostname is the host name of the application server.

  • port is the port for the application server.

Request Body

The following shows the fields in the request body along with some sample data.

{
  "@type": "CacheRequest",
  "required_data": [
    "account_data",
    "msisdn_data",
    "balance_group_data",
    "service_data",
    "purchased_product_data",
    "product_data",
    "deal_data",
    "plan_data",
    "rate_data",
    "event_data",
    "deal_data_2",
    "over_usage_data",
    "usage_details"
  ],
  "storage_type": ("pvc" / "object_storage"),

  # If the 'storage_type' is 'object_storage', then 'additional_data' must have 'bucket' and 'namespace' while using Dataflow.
  # If the 'storage_type' is 'pvc', then 'additional_data' must have 'path', which points to the 'metadata.json' file.

  "additional_data": {
    "bucket": "sample-for-dataflow",
    "namespace": "namespace"
  }
}

Response Body

If successful, the response code 200 is returned with a response body. For example:

{
    "@type": "CacheResponse",
    "status": "Data Caching Successful"
}
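The comments in the request body above describe two shapes for 'additional_data', depending on 'storage_type'. A sketch of that branching; the bucket, namespace, and path values are placeholders:

```python
def cache_additional_data(storage_type: str) -> dict:
    """Pick the additional_data shape that matches the storage_type."""
    if storage_type == "object_storage":
        # Dataflow needs the bucket and namespace of the object store.
        return {"bucket": "sample-for-dataflow", "namespace": "namespace"}
    if storage_type == "pvc":
        # PVC storage needs the path to the metadata.json file (placeholder path).
        return {"path": "/mnt/data_services/metadata.json"}
    raise ValueError(f"unsupported storage_type: {storage_type}")

cache_request = {
    "@type": "CacheRequest",
    "required_data": ["account_data", "msisdn_data"],
    "storage_type": "object_storage",
    "additional_data": cache_additional_data("object_storage"),
}
```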

Train the Model

Method: POST

Path: /utility/train

Description: Trains the model based on the data acquired by the data service and the set configurations.

This shows the cURL command for sending a Train the Model request:

curl -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_TOKEN" -H "Accept: application/json" -X POST -d @file.json 'http://hostname:port/utility/train'

where:

  • YOUR_TOKEN is your client application's OAuth 2.0 access token.

  • file.json is the JSON file that specifies the training configuration.

  • hostname is the host name of the application server.

  • port is the port for the application server.

Request Body

The following shows the fields in the request body along with some sample data.

{
    "@type": "TrainRequest",
    "algorithm": [
        {
            "@type": "Algorithms",
            "algo": "knn",
            "model_dir": "model/", # Accepts only alphanumeric and underscore (_)
            "model_name": "recommend_knn",
            "hyper_parameters": {
                "k_value": 5
            }
        },
        {
            "@type": "Algorithms",
            "algo": "cosine",
            "model_dir": "model/", # Accepts only alphanumeric and underscore (_)
            "model_name": "recommend_cosine",
            "hyper_parameters": {
                "k_value": 5
            }
        },
        {
            "@type": "Algorithms",
            "algo": "dnn",
            "model_dir": "model/", # Accepts only alphanumeric and underscore (_)
            "model_name": "dnn",
            "hyper_parameters": {
                "test_size": 0.2, # Fraction of data to be used for validation. Optimal range is between 0.1 and 0.3
                "epochs": 150, # Number of epochs that the training should run
                "batch_size": 16,
                "loss": "categorical_crossentropy", # Loss function to be used in training (mean_squared_error / categorical_crossentropy / sparse_categorical_crossentropy / categorical_hinge / and so on). See the Keras loss functions at https://www.tensorflow.org/api_docs/python/tf/keras/losses
                "optimizer": "rmsprop", # Optimizer used to reduce loss (adam / rmsprop / adagrad / and so on). See the Keras optimizer functions at https://www.tensorflow.org/api_docs/python/tf/keras/optimizers
                "learning_rate": 0.001, # Ideal range is between 0.0001 and 0.1
                "final_activation": "softmax" # Activation function to be used in the final layer (softmax / relu / linear / tanh / sigmoid / and so on). See https://www.tensorflow.org/api_docs/python/tf/keras/activations
            }
        }
    ],

    # Set 'sample' to 'true' to use the default preprocessing script. If it is set to 'false',
    # then the script configured in the Helm chart (helm_charts/values.yaml - trainUtilityProcessor.preprocessScript) is used.

    "sample": true,

    "storage_details": [
        {
            "@type": "StorageDetails",
            "storage_type": "object_storage", # Possible values are 'object_storage' or 'pvc'

            # "required_data" values are the unique names given to queries during data collect

            "required_data": [ 
                "account_data",
                "balance_group_data",
                "service_data",
                "purchased_product_data",
                "product_data",
                "deal_data",
                "plan_data",
                "rate_data",
                "event_data",
                "deal_data_2",
                "over_usage_data",
                "usage_details"
            ], 

           # If the 'storage_type' is 'object_storage', then 'additional_details' must have 'bucket' and 'namespace' while using Dataflow. For Data Science, 'additional_details' must have 'tech_choice' set to 'datascience'.
           # If the 'storage_type' is 'pvc', then 'additional_details' must have 'path', 'file_type' (json/csv/pickle), and 'model_stage' (Staging/Production/Archived).
            
            "additional_details": {
                "bucket": "sample-for-dataflow",
                "namespace": "adcdefghijk"
            }
        }
    ]
}

Response Body

If successful, the response code 200 is returned with a response body. For example:

{
    "@type": "TrainResponse",
    "status": "Training Complete / Exception Stack trace"
}
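Each entry in the algorithm array pairs an algorithm name with its hyperparameters. The following sketch builds the dnn entry from the values shown in the request body above; the directory and model names are placeholders:

```python
def dnn_algorithm(model_dir: str, model_name: str) -> dict:
    """Build a TrainRequest algorithm entry for the dnn algorithm."""
    return {
        "@type": "Algorithms",
        "algo": "dnn",
        "model_dir": model_dir,        # accepts only alphanumeric and underscore
        "model_name": model_name,
        "hyper_parameters": {
            "test_size": 0.2,          # fraction of data used for validation (0.1-0.3)
            "epochs": 150,
            "batch_size": 16,
            "loss": "categorical_crossentropy",
            "optimizer": "rmsprop",
            "learning_rate": 0.001,
            "final_activation": "softmax",
        },
    }

train_request = {
    "@type": "TrainRequest",
    "algorithm": [dnn_algorithm("model/", "dnn")],
    "sample": True,                    # use the default preprocessing script
}
```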

Create a Prediction

Method: POST

Path: /recommend/predict

Description: Creates a prediction for an account using the specified algorithm.

This shows the cURL command for sending a Create a Prediction request:

curl -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_TOKEN" -H "Accept: application/json" -X POST -d @file.json 'http://hostname:port/recommend/predict'

where:

  • YOUR_TOKEN is your client application's OAuth 2.0 access token.

  • file.json is the JSON file that specifies the prediction details.

  • hostname is the host name of the application server.

  • port is the port for the application server.

Request Body

The following shows the fields in the request body along with some sample data.

{
  "@type": "RecommendRequest",

  # "msisdn": "8928292909" can also be used instead of 'account_id'
  "account_id": "2824533",
  
  # 'algorithms' can be 'dnn', 'knn', 'cosine' based on requirement
  "algorithms": ["knn", "dnn"]
}

Response Body

If successful, the response code 200 is returned with a response body. For example:

{
    "@type": "RecommendResponse",
    "@baseType": "string",
    "@schemaLocation": "string",
    "status": "Success",
    "recommendations": [
        {
            "@type": "RecommendationDetails",
            "algorithm": "knn",
            "account_id": "2824533",
            "present_deal": "Deal 10 - Data Storm",
            "top_results": [
                {
                    "@type": "PredictDetails",
                    "prediction": "Deal 12 - Data Mini",
                    "probability_or_distance": 0.0
                },
				{
                    "@type": "PredictDetails",
                    "prediction": "Deal 4 - SMS Special",
                    "probability_or_distance": 0.0
                },
                {
                    "@type": "PredictDetails",
                    "prediction": "Deal 3 - CO for Voice 40 ",
                    "probability_or_distance": 0.0
                }
            ]
        },        
        {
            "@type": "RecommendationDetails",
            "algorithm": "dnn",
            "account_id": "2824533",
            "present_deal": "Deal 10 - Data Storm",
            "top_results": [
                {
                    "@type": "PredictDetails",
                    "prediction": "Deal 1 - Data 30",
                    "probability_or_distance": 0.11470177
                },
                {
                    "@type": "PredictDetails",
                    "prediction": "Deal 11 - Data Blaze",
                    "probability_or_distance": 0.11470167
                },
                {
                    "@type": "PredictDetails",
                    "prediction": "Deal 10 - Data Storm",
                    "probability_or_distance": 0.11470162
                }
            ]
        }
    ]
}

Note:

You can configure the response body based on your requirements.
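When several algorithms run, a client often wants just the top-ranked prediction per algorithm. The following sketch operates on a response shaped like the RecommendResponse example above; the deal names are illustrative:

```python
def top_predictions(recommend_response: dict) -> dict:
    """Map each algorithm to its first (top-ranked) prediction."""
    return {
        rec["algorithm"]: rec["top_results"][0]["prediction"]
        for rec in recommend_response.get("recommendations", [])
    }

# Trimmed-down response shaped like the example above:
resp = {
    "@type": "RecommendResponse",
    "status": "Success",
    "recommendations": [
        {"@type": "RecommendationDetails", "algorithm": "knn",
         "top_results": [{"@type": "PredictDetails",
                          "prediction": "Deal 12 - Data Mini",
                          "probability_or_distance": 0.0}]},
        {"@type": "RecommendationDetails", "algorithm": "dnn",
         "top_results": [{"@type": "PredictDetails",
                          "prediction": "Deal 1 - Data 30",
                          "probability_or_distance": 0.11470177}]},
    ],
}
best = top_predictions(resp)
```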