Fairness

You can calculate fairness metrics for a dataset by choosing a target variable and one or more protected features. The module provides metrics dedicated to assessing whether the model predictions and/or the true labels in the data comply with a particular fairness criterion. For example, the statistical parity metric, also known as demographic parity, measures how much a protected group's outcome rate differs from that of the rest of the population. Such fairness metrics quantify differences in outcomes or error rates across demographic groups or protected attributes in the data, so they are to be minimized to reduce discrepancies in model predictions with respect to specific groups of people. Traditional classification metrics such as accuracy, on the other hand, are to be maximized.
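To make the metric concrete, the following is a minimal sketch of the statistical parity difference computed directly from data, assuming a pandas DataFrame with hypothetical columns approved (the target variable) and gender (the protected feature). It illustrates the metric itself, not the module's implementation.

import pandas as pd

# A protected group's positive-outcome rate minus that of the rest of the
# population. A value of 0 means parity; the farther from 0, the larger the
# discrepancy in outcomes.
def statistical_parity_difference(df, target, protected, group):
    in_group = df[protected] == group
    return df.loc[in_group, target].mean() - df.loc[~in_group, target].mean()

# Hypothetical toy dataset: loan approvals by gender.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   0],
})

# F approval rate is 1/3 versus 3/3 for everyone else, so the result is -0.667.
print(statistical_parity_difference(df, "approved", "gender", "F"))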

The following three functions can be performed in MMG.
  • Measure Fairness Metrics of Dataset
  • Bias Mitigation of Models
  • Privacy Estimation of Models

For example: In the model pipeline, a new widget can be added to calculate the fairness metrics of a trained model, or an advanced option can be added to the model training widget to calculate the fairness metric of the model after training, as in the sketch below.
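As a rough sketch of what such a widget could compute, the same statistical parity difference can be taken over a trained model's predictions rather than the true labels. The snippet below assumes scikit-learn and synthetic data; the attribute name sex and the data itself are hypothetical.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic training data in which the outcome is correlated with a
# hypothetical protected attribute "sex".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sex = rng.choice(["F", "M"], size=200)
y = (X[:, 0] + 0.8 * (sex == "M") + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Fairness of the trained model: compare predicted-positive rates per group.
preds = pd.DataFrame({"sex": sex, "predicted": model.predict(X)})
rates = preds.groupby("sex")["predicted"].mean()
print(rates["F"] - rates["M"])  # statistical parity difference of predictions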

To calculate the dataset fairness, perform the following steps:
  1. In the Fairness screen, select the Target Variable from the drop-down list.
  2. In the Protected features field, select the features for which you want to assess the fairness of the dataset.

    Figure 8-29 Fairness screen

    The metric and its calculated values are displayed. Click the Help icon for more details on how fairness is calculated.
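To illustrate what the screen reports, the sketch below computes one statistical parity value per selected protected feature, here taken as the gap between the best- and worst-treated group for each feature. The data and the column names (approved, gender, age_band) are hypothetical.

import pandas as pd

# Hypothetical dataset with one target variable and two protected features.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "F"],
    "age_band": ["<40", "<40", "40+", "40+", "<40", "40+", "40+", "<40"],
    "approved": [1, 1, 0, 1, 1, 0, 1, 0],
})

target = "approved"                 # step 1: select the target variable
protected = ["gender", "age_band"]  # step 2: select the protected features

# One metric value per protected feature, as shown on the Fairness screen.
for feature in protected:
    rates = df.groupby(feature)[target].mean()
    print(f"{feature}: statistical parity difference = {rates.max() - rates.min():.3f}")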