Using Model Evaluation Metrics for Machine Learning

Evaluating a model is a core part of building an effective machine learning model. Several evaluation metrics and techniques, such as the confusion matrix, cross-validation, and the AUC-ROC curve, help analyze different aspects of model performance.

Use the aif.show_model_performance Python API to view the different evaluation metrics.

The following are the inputs (example calls are shown after the list):

- performance_metric: Specify the name of the performance metric from the following options:
  - Prediction Deciles
  - Confusion Matrix
  - Prediction Density
  - ROC Curve
  - F1 Curve
  - Kappa Curve
  - Precision-Recall Curve
  - Ranking Power Tests

- cut_off_method: This parameter is applicable if the performance metric is the Confusion Matrix. The following are the options:
  - F1: Returns the confusion matrix using the maximum F1 score as the cut-off.
  - KS: Returns the confusion matrix using the maximum KS statistic as the cut-off.
  - KA: Returns the confusion matrix using the maximum Kappa statistic as the cut-off.

- test_statistic: This parameter is applicable if the performance metric is Ranking Power Tests. The following are the options:
  - KS Test
  - Rank Order Test
  - Lorenz Curve
  - Lift Curve

- model: This parameter is applicable if the performance metric is Ranking Power Tests. Specify any model ID from the AUC Summary. If this parameter is not specified, the best model from the AUC Summary is chosen by default.

- ranking_power_tests_as_matrix: This parameter is applicable if the performance metric is Ranking Power Tests and can be True or False. The default value is False.
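
For example, the following sketch shows two typical calls. It assumes the aif module can be imported directly and that the option values are passed as keyword arguments using the exact strings listed above; adjust this to match your environment.

import aif

# Plot the ROC curve for the trained model.
aif.show_model_performance(performance_metric="ROC Curve")

# Show the confusion matrix, using the maximum KS statistic as the cut-off.
aif.show_model_performance(
    performance_metric="Confusion Matrix",
    cut_off_method="KS",
)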


The API returns the requested performance plots and/or matrix as output.
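
The following sketch shows a Ranking Power Tests call under the same assumptions as above; the model ID "M1" is hypothetical, and real IDs come from the AUC Summary.

import aif

# Run the KS Test for a specific model from the AUC Summary and return
# the results as a matrix instead of a plot.
aif.show_model_performance(
    performance_metric="Ranking Power Tests",
    test_statistic="KS Test",
    model="M1",  # hypothetical model ID taken from the AUC Summary
    ranking_power_tests_as_matrix=True,
)

If model is not specified, the best model from the AUC Summary is used; with ranking_power_tests_as_matrix left at its default of False, the results are returned as plots.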