Next Best Offer models
Algorithm name: Next best offer
Applies to: B2B, B2C
The Next best offer algorithm recommends the most relevant offers or products for every customer based on their profile, behavioral, and transactional history.
Also commonly known as: Next best product recommendation
What is the Next Best Offer algorithm?
The Next best offer algorithm is a ready-to-use data science model that provides customers the ability to choose from top recommendations on offers tied to different products or services. The model uses customers' profile data, behavioral data, and purchase information to generate personalized recommendations. It also adds product popularity and product newness-based recommendations to the mix.
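To make the blend of signals concrete, here is a toy sketch of combining a personalized affinity signal with popularity and newness signals into a single ranked list. Everything in it (signal names, weights, sample data) is hypothetical and is not Unity's actual implementation:

```python
# Toy illustration of blending personalization, popularity, and newness.
# Weights and signal values are made up for this example.

def blend_scores(affinity, popularity, newness,
                 w_affinity=0.6, w_popularity=0.25, w_newness=0.15):
    """Combine per-offer signals (each normalized to 0..1) into one score."""
    return (w_affinity * affinity
            + w_popularity * popularity
            + w_newness * newness)

def rank_offers(signals, top_n=5):
    """Return the top-N offer IDs by blended score, best first."""
    scored = {offer: blend_scores(*s) for offer, s in signals.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

# Hypothetical per-offer signals: (affinity, popularity, newness)
signals = {
    "OFFER_A": (0.9, 0.2, 0.1),
    "OFFER_B": (0.3, 0.9, 0.8),
    "OFFER_C": (0.1, 0.1, 0.9),
}
print(rank_offers(signals, top_n=2))  # → ['OFFER_A', 'OFFER_B']
```

In this sketch a strong personal affinity (OFFER_A) outranks a popular, new offer (OFFER_B); the real model learns such trade-offs from the data rather than using fixed weights.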
Parameters of the model
To create and configure the Next best offer model, the following parameters must be defined:
- Algorithm: Choose the Next best offer algorithm in Unity.
- Offer catalog: Choose the product or offer catalog from which the products/offers are recommended.
- Top N offers: Choose the number of offers to be recommended for every customer. The default value is 5, and you can populate up to 20 recommendations.
- Queries: The queries you select generate the dataset used for both model training and scoring.
- Inputs: Inputs are attributes from the Oracle Unity data model that the model uses during training and scoring.
- Outputs: These are the data objects and attributes from the Unity data model that store the model's output values. You can customize the default output mappings if needed.
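The parameter rules above can be expressed as a small validation sketch. Unity enforces these constraints in its own UI; the function and field names here are hypothetical:

```python
# Hypothetical helper mirroring the parameter rules in this section.
# Unity enforces these in its configuration UI; names here are made up.

DEFAULT_TOP_N = 5   # default number of recommendations per the docs
MAX_TOP_N = 20      # upper limit per the docs

def validate_nbo_params(params):
    """Return a normalized copy of the parameters, or raise on bad input."""
    if not params.get("offer_catalog"):
        raise ValueError("An offer catalog must be selected")
    top_n = params.get("top_n_offers", DEFAULT_TOP_N)
    if not 1 <= top_n <= MAX_TOP_N:
        raise ValueError(f"Top N offers must be between 1 and {MAX_TOP_N}")
    return {**params, "top_n_offers": top_n}

# Omitting top_n_offers falls back to the documented default of 5.
print(validate_nbo_params({"offer_catalog": "SummerOffers"}))
```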
Model inputs
The Next best offer model uses the following data. For the model to run successfully, review the sections on key input considerations, key data guidelines, and best practices.
| Attribute from Unity data object | Attribute name in the query | Unity data object | Description of the attribute | Expected data type | Must have? |
|---|---|---|---|---|---|
| ID | MasterCustomerID | MasterCustomer | Customer identifier | String | Y |
| Country | Country | MasterCustomer | Helps assess regional similarities in customer preferences | String | |
| Age | Age | MasterCustomer | Helps assess age-based similarities in customer preferences | Int | |
| Gender | Gender | MasterCustomer | Helps assess gender-based similarities in customer preferences | String | |
| SourceID | SourceID | Customer | Unique identifier for the source | String | Y |
| EventTS | EventTS | Event | Timestamp of when an event occurred | Timestamp | Y |
| Type | EventType | Event | Engagement event type | String | Y |
| SubType | EventSubType | Event | Engagement event subtype | String | |
| URL | EventURL | Event | Page URL associated with the event | String | |
| Medium | Medium | Event | Engagement channel through which an event was captured | String | |
| ProductID | ProductID | Event | Product identifier associated with the engagement event | String | Y |
| Type | ProductType | Product | Type of product associated with the event | String | |
| SourceCategoryID | CategoryID | Category | Promotion category identifier | String | Y |
| Type | CategoryType | Category | Type of category (catalog): offer for NBO and action for NBA | String | |
| Name | CategoryName | Category | Promotion category name | String | Y |
| ID | Category_ID | Category | Promotion category identifier | String | Y |
| SourceID | Category_SourceID | Category | SourceID from the Category object | String | Y |
| ID | PromotionID | Promotion | Unique identifier for the promotion | String | Y |
| Name | PromotionName | Promotion | Promotion name | String | |
Key considerations on model inputs
- Data schema checks: If an attribute is unavailable, assign a constant default value across all records. This allows the model to validate the input schema. It does not impact the model outcomes because the attribute is excluded during feature importance analysis due to its lack of variance.
- Schema consistency: Ensure that the attributes used in the query exactly match the specified 'Attribute name in the query' (in the above table) to avoid schema mismatches during model execution.
- Non-mandatory attributes: These attributes, when available, can enhance the model's learning and improve its predictive performance. If they are missing, the model still leverages the other available attributes to make predictions.
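The first consideration (filling an unavailable attribute with one constant) can be sketched as follows; the attribute name, default value, and records are hypothetical:

```python
# Sketch of the 'constant default' guideline: fill an unavailable attribute
# with a single constant so schema validation passes. The constant has no
# variance, so feature importance analysis excludes it from learning.

def fill_missing_attribute(records, attribute, default="UNKNOWN"):
    """Return copies of the records with the attribute always present."""
    return [{**r, attribute: r.get(attribute) or default} for r in records]

rows = [{"MasterCustomerID": "MC-1", "Gender": "F"},
        {"MasterCustomerID": "MC-2"}]  # Gender unavailable for this record
print(fill_missing_attribute(rows, "Gender"))
```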
Key input data guidelines for Next Best Offer models
- Catalog requirements: Ensure the promotion catalog (category) and promotion offers/actions are ingested before model training (refer to the documentation on ingesting catalog data for the model).
- Model extensibility: The same model can be used to recommend the best products (instead of offers), as NBO inherently identifies the most relevant products at a customer level. To enable this use case, map ProductIDs to the 'PromotionID' attribute.
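The extensibility note amounts to feeding ProductIDs through the 'PromotionID' attribute. A minimal sketch, with all field names besides ProductID/PromotionID invented for illustration:

```python
# Sketch of the extensibility guideline: to recommend products instead of
# offers, copy each row's ProductID into the PromotionID attribute that the
# NBO model consumes. Sample data are fabricated.

def products_as_promotions(rows):
    """Map ProductID into PromotionID so NBO recommends products."""
    return [{**row, "PromotionID": row["ProductID"]} for row in rows]

events = [{"MasterCustomerID": "MC-1", "ProductID": "P-42"},
          {"MasterCustomerID": "MC-2", "ProductID": "P-7"}]
print(products_as_promotions(events))
```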
Best practices
- Use case first approach: Begin with a specific use case to determine the data required for solving the problem effectively.
- Cross-team alignment: Review model data requirements with both business and data teams to ensure alignment.
- Contextual parameters: Choose model parameters (such as the lookback window) based on the business/use case context.
- Query validation: Test both the training and scoring queries to validate that they return data, and inspect sample records resulting from the query.
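The query validation practice can be sketched as a small check that a query returned data and that a few sample records are available for inspection. The query name and rows are hypothetical:

```python
# Sketch of the 'query validation' best practice: fail fast on an empty
# result and surface a few sample records. Sample data are fabricated.

def validate_query_result(rows, query_name, sample_size=5):
    """Raise if the query returned nothing; otherwise return a sample."""
    if not rows:
        raise ValueError(f"{query_name} returned no data; "
                         "check filters and the lookback window")
    return rows[:sample_size]

training_rows = [{"MasterCustomerID": f"MC-{i}"} for i in range(100)]
sample = validate_query_result(training_rows, "training query")
print(sample)  # inspect the first few records by eye
```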
Model outputs
Output values are stored in the MasterCustomerRecommendation data object. You can review the ranks and scores for each recommended promotion/product in the 'Rank' and 'Score' attributes.
| Attribute | Description | Data type |
|---|---|---|
| MasterCustomerID | Master customer identifier | STRING |
| PromotionID | Promotion identifier: the offer, action, or product recommended | STRING |
| SourcePromotionID | Promotion identifier (source): the offer, action, or product recommended | STRING |
| SourceMasterCustomerRecommendationID | Unique identifier for the recommendation object | STRING |
| SourceMasterCustomerID | Master customer identifier (source) | STRING |
| SourceID | Unique identifier for the source | STRING |
| Score | Recommendation score (the higher the score, the stronger the affinity) | FLOAT |
| Rank | Rank of the promotions based on 'Score' | INTEGER |
Create and use a Next Best Offer model
To create and use a Next Best Offer model, you will need to do the following:
- Before creating the data science model, create the catalog of offers that will be used when the model runs. Learn more about Creating Next best offers.
- After creating the catalog of offers, follow the steps for Creating Next Best Offer models.
- After creating the model, follow the steps for Running training and scoring jobs.
After the model runs and creates output values, you can do the following:
- Access Next Best Offer data to review the values in the output attributes.
- Learn more about Using data science data for the specific needs of your organization.
- If needed, review the data science model details.