Types of Algorithms Available in WMS

Oracle WMS currently supports a variety of machine learning algorithms, each designed to handle specific types of prediction problems. While each algorithm requires different parameters and follows a unique training methodology, many of them can be applied across multiple supported metrics.

These algorithms form the foundation for predictive capabilities such as forecasting Order Cycle Time, Waiting Time, Processing Time, and powering features like Intelligent Cycle Counting and Market Basket Analysis.

Random Forest

Random Forest is a widely used algorithm for solving both regression and classification problems (see glossary for definitions). It operates by building multiple decision trees and combining their outputs for improved accuracy and robustness.

Key Parameters:

  • n_estimators

    The number of decision trees in the model. More trees can increase accuracy but also add processing time.
    • Default: 100
    • Allowed Range: 1–200
  • max_depth

    Specifies how deep each decision tree can grow. Deeper trees can capture more complexity in the data but also risk overfitting.
    • Allowed Range: 1–20
    • Note: No default value is set. Entering a value outside the range will trigger an error.
Training Tip: Run several iterations to train and evaluate the model. This helps fine-tune parameters and improves overall accuracy by capturing variations in results.
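The two parameters above can be sketched with scikit-learn, used here purely as an illustration of how `n_estimators` and `max_depth` shape a Random Forest; the synthetic dataset and estimator choice are assumptions, not the WMS training pipeline itself.

```python
# Illustrative sketch only: scikit-learn standing in for the WMS-side
# Random Forest. Dataset is synthetic, not real warehouse data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for a metric such as Order Cycle Time.
X, y = make_regression(n_samples=500, n_features=8, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators=100 mirrors the documented default; max_depth kept
# inside the documented 1-20 range.
model = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```

Raising `n_estimators` beyond 100 here would slow the fit with diminishing returns, while removing the `max_depth` cap lets each tree grow until its leaves are pure, which is where overfitting risk appears.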

Feed Forward Neural Networks

Feed Forward Neural Networks are inspired by the human brain and excel at tasks that involve pattern recognition, such as identifying images, text, or sounds. In WMS, they are typically used for classification and clustering tasks.

Key Parameters:

  • hidden_layer_sizes

    Determines the architecture of the network: the number of hidden layers and the number of neurons (nodes) in each.

  • max_iter

    Indicates the number of training iterations, or epochs.

Epoch: One complete pass through the entire training dataset.
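As a rough sketch of how these two parameters fit together, the snippet below uses scikit-learn's `MLPClassifier` as a stand-in for the WMS network; the two-layer architecture, the synthetic data, and the epoch cap are all illustrative assumptions.

```python
# Illustrative sketch only: a small feed forward network via scikit-learn,
# not the WMS implementation. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic classification data standing in for a WMS metric.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# hidden_layer_sizes=(32, 16): two hidden layers with 32 and 16 neurons.
# max_iter caps the number of training epochs (full passes over the data).
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print(f"Training accuracy: {clf.score(X, y):.3f}")
```

Each tuple entry in `hidden_layer_sizes` adds a layer, so deeper architectures are expressed by longer tuples; if the network has not converged when `max_iter` epochs are exhausted, training simply stops there.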

Gradient Boosting

Gradient Boosting is a powerful and flexible algorithm that can be used for both classification and regression. It builds models in a sequential way, where each new model corrects errors made by the previous one.

Key Parameters:

  • max_iter

    The number of boosting stages or epochs to run.

  • max_depth

    Controls the depth of individual decision trees (same behavior as in Random Forest).

  • learning_rate

    Determines how quickly the model adapts to the data.
    • A high learning rate may overshoot the optimal solution.
    • A low learning rate may take longer but offers more control and stability.

Note: Results will vary based on the model and parameters given.