public static class TextClassificationModelMetrics.Builder extends Object
| Constructor and Description |
| --- |
| `Builder()` |
| Modifier and Type | Method and Description |
| --- | --- |
| `TextClassificationModelMetrics.Builder` | `accuracy(Float accuracy)`: The fraction of the labels that were correctly recognised. |
| `TextClassificationModelMetrics` | `build()` |
| `TextClassificationModelMetrics.Builder` | `copy(TextClassificationModelMetrics model)` |
| `TextClassificationModelMetrics.Builder` | `macroF1(Float macroF1)`: The macro-averaged F1 score, a measure of the model’s accuracy on a dataset. |
| `TextClassificationModelMetrics.Builder` | `macroPrecision(Float macroPrecision)`: Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives). |
| `TextClassificationModelMetrics.Builder` | `macroRecall(Float macroRecall)`: Measures the model’s ability to predict actual positive classes. |
| `TextClassificationModelMetrics.Builder` | `microF1(Float microF1)`: The micro-averaged F1 score, a measure of the model’s accuracy on a dataset. |
| `TextClassificationModelMetrics.Builder` | `microPrecision(Float microPrecision)`: Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives). |
| `TextClassificationModelMetrics.Builder` | `microRecall(Float microRecall)`: Measures the model’s ability to predict actual positive classes. |
| `TextClassificationModelMetrics.Builder` | `weightedF1(Float weightedF1)`: The support-weighted F1 score, a measure of the model’s accuracy on a dataset. |
| `TextClassificationModelMetrics.Builder` | `weightedPrecision(Float weightedPrecision)`: Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives). |
| `TextClassificationModelMetrics.Builder` | `weightedRecall(Float weightedRecall)`: Measures the model’s ability to predict actual positive classes. |
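The micro-, macro-, and weighted-averaged variants above differ only in how per-class counts are aggregated. A minimal, self-contained sketch of the three averaging schemes for precision (the class name and per-class counts are invented for illustration; this is not SDK code):

```java
// Illustration only: micro vs. macro vs. weighted averaging of precision.
// The per-class counts below are invented example data for two classes.
public class AveragingDemo {
    static final int[] TP = {8, 2}; // true positives per class
    static final int[] FP = {2, 3}; // false positives per class
    static final int[] FN = {3, 2}; // false negatives per class

    // Micro-averaging pools the raw counts across all classes first.
    static float microPrecision() {
        int tp = 0, fp = 0;
        for (int i = 0; i < TP.length; i++) { tp += TP[i]; fp += FP[i]; }
        return (float) tp / (tp + fp);
    }

    // Macro-averaging computes precision per class, then takes the plain mean.
    static float macroPrecision() {
        float sum = 0f;
        for (int i = 0; i < TP.length; i++) {
            sum += (float) TP[i] / (TP[i] + FP[i]);
        }
        return sum / TP.length;
    }

    // Weighted-averaging is the per-class mean weighted by class support (TP + FN).
    static float weightedPrecision() {
        float sum = 0f;
        int total = 0;
        for (int i = 0; i < TP.length; i++) {
            int support = TP[i] + FN[i];
            sum += support * (float) TP[i] / (TP[i] + FP[i]);
            total += support;
        }
        return sum / total;
    }

    public static void main(String[] args) {
        System.out.println("micro    = " + microPrecision());    // 10 / 15
        System.out.println("macro    = " + macroPrecision());    // (0.8 + 0.4) / 2
        System.out.println("weighted = " + weightedPrecision()); // (11*0.8 + 4*0.4) / 15
    }
}
```

The same three aggregation schemes apply to recall and F1; micro-averaging favors frequent classes, while macro-averaging treats every class equally.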
public TextClassificationModelMetrics.Builder accuracy(Float accuracy)

The fraction of the labels that were correctly recognised.

Parameters:
accuracy - the value to set

public TextClassificationModelMetrics.Builder microF1(Float microF1)

The micro-averaged F1 score, a measure of the model’s accuracy on a dataset.

Parameters:
microF1 - the value to set

public TextClassificationModelMetrics.Builder microPrecision(Float microPrecision)

Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).

Parameters:
microPrecision - the value to set

public TextClassificationModelMetrics.Builder microRecall(Float microRecall)

Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and everything that was actually tagged positive, revealing how many of the actual positive classes the model identified.

Parameters:
microRecall - the value to set

public TextClassificationModelMetrics.Builder macroF1(Float macroF1)

The macro-averaged F1 score, a measure of the model’s accuracy on a dataset.

Parameters:
macroF1 - the value to set

public TextClassificationModelMetrics.Builder macroPrecision(Float macroPrecision)

Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).

Parameters:
macroPrecision - the value to set

public TextClassificationModelMetrics.Builder macroRecall(Float macroRecall)

Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and everything that was actually tagged positive, revealing how many of the actual positive classes the model identified.

Parameters:
macroRecall - the value to set

public TextClassificationModelMetrics.Builder weightedF1(Float weightedF1)

The support-weighted F1 score, a measure of the model’s accuracy on a dataset.

Parameters:
weightedF1 - the value to set

public TextClassificationModelMetrics.Builder weightedPrecision(Float weightedPrecision)

Precision is the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).

Parameters:
weightedPrecision - the value to set

public TextClassificationModelMetrics.Builder weightedRecall(Float weightedRecall)

Measures the model’s ability to predict actual positive classes. It is the ratio between the predicted true positives and everything that was actually tagged positive, revealing how many of the actual positive classes the model identified.

Parameters:
weightedRecall - the value to set

public TextClassificationModelMetrics build()

public TextClassificationModelMetrics.Builder copy(TextClassificationModelMetrics model)
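The methods above follow the standard fluent-builder pattern: each setter returns the builder itself, build() produces the metrics object, and copy(model) seeds the builder from an existing instance so individual fields can be overridden. A stand-in sketch of that pattern (these stub classes are illustrative, not the real SDK types):

```java
// Stand-in types illustrating the fluent build/copy pattern; not the real SDK classes.
public class BuilderPatternDemo {
    static final class Metrics {
        final Float accuracy;
        final Float macroF1;
        Metrics(Float accuracy, Float macroF1) {
            this.accuracy = accuracy;
            this.macroF1 = macroF1;
        }
    }

    static final class Builder {
        private Float accuracy;
        private Float macroF1;

        // Each setter returns this, so calls can be chained fluently.
        Builder accuracy(Float v) { this.accuracy = v; return this; }
        Builder macroF1(Float v)  { this.macroF1 = v; return this; }

        // copy() seeds the builder from an existing instance; individual
        // fields can then be overridden before build().
        Builder copy(Metrics m) { return accuracy(m.accuracy).macroF1(m.macroF1); }

        Metrics build() { return new Metrics(accuracy, macroF1); }
    }

    public static void main(String[] args) {
        Metrics m = new Builder().accuracy(0.91f).macroF1(0.88f).build();
        // Clone m but override one field.
        Metrics updated = new Builder().copy(m).accuracy(0.95f).build();
        System.out.println(updated.accuracy + " / " + updated.macroF1);
    }
}
```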
Copyright © 2016–2024. All rights reserved.