Package com.oracle.bmc.ailanguage.model
Class TextClassificationModelMetrics.Builder
java.lang.Object
  com.oracle.bmc.ailanguage.model.TextClassificationModelMetrics.Builder

Enclosing class:
TextClassificationModelMetrics
public static class TextClassificationModelMetrics.Builder extends Object
-
-
Constructor Summary

Constructor      Description
Builder()
-
Method Summary

Modifier and Type                        Method                                        Description
TextClassificationModelMetrics.Builder   accuracy(Float accuracy)                      The fraction of the labels that were correctly recognised.
TextClassificationModelMetrics           build()
TextClassificationModelMetrics.Builder   copy(TextClassificationModelMetrics model)
TextClassificationModelMetrics.Builder   macroF1(Float macroF1)                        F1-score is a measure of a model's accuracy on a dataset.
TextClassificationModelMetrics.Builder   macroPrecision(Float macroPrecision)          Precision refers to the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).
TextClassificationModelMetrics.Builder   macroRecall(Float macroRecall)                Measures the model's ability to predict actual positive classes.
TextClassificationModelMetrics.Builder   microF1(Float microF1)                        F1-score is a measure of a model's accuracy on a dataset.
TextClassificationModelMetrics.Builder   microPrecision(Float microPrecision)          Precision refers to the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).
TextClassificationModelMetrics.Builder   microRecall(Float microRecall)                Measures the model's ability to predict actual positive classes.
TextClassificationModelMetrics.Builder   weightedF1(Float weightedF1)                  F1-score is a measure of a model's accuracy on a dataset.
TextClassificationModelMetrics.Builder   weightedPrecision(Float weightedPrecision)    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., true positives plus false positives).
TextClassificationModelMetrics.Builder   weightedRecall(Float weightedRecall)          Measures the model's ability to predict actual positive classes.
-
-
-
Method Detail
-
accuracy
public TextClassificationModelMetrics.Builder accuracy(Float accuracy)
The fraction of the labels that were correctly recognised.

Parameters:
accuracy - the value to set
Returns:
this builder
-
microF1
public TextClassificationModelMetrics.Builder microF1(Float microF1)
F1-score is a measure of a model's accuracy on a dataset.

Parameters:
microF1 - the value to set
Returns:
this builder
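As a point of reference for the micro, macro, and weighted F1 variants below, the F1-score itself is the harmonic mean of precision and recall. The helper below is an illustrative sketch, not part of the SDK:

```java
// Illustrative (not part of the SDK): F1 is the harmonic mean of
// precision and recall, so it is high only when both are high.
public class F1Score {
    public static float f1(float precision, float recall) {
        if (precision + recall == 0f) {
            return 0f; // avoid division by zero when both metrics are 0
        }
        return 2f * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // Precision 0.75 and recall 0.25: harmonic mean = 0.375,
        // well below the arithmetic mean of 0.5.
        System.out.println(f1(0.75f, 0.25f));
    }
}
```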
-
microPrecision
public TextClassificationModelMetrics.Builder microPrecision(Float microPrecision)
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
microPrecision - the value to set
Returns:
this builder
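Micro-averaging pools the true-positive and false-positive counts across all classes before dividing. The sketch below is illustrative only and not part of the SDK:

```java
// Illustrative (not part of the SDK): micro-averaged precision sums
// per-class true/false positive counts first, then divides once, so
// large classes dominate the result.
public class MicroMetrics {
    // tp[i] and fp[i] are the true/false positive counts for class i.
    public static float microPrecision(int[] tp, int[] fp) {
        int tpSum = 0, fpSum = 0;
        for (int i = 0; i < tp.length; i++) {
            tpSum += tp[i];
            fpSum += fp[i];
        }
        return (float) tpSum / (tpSum + fpSum);
    }

    public static void main(String[] args) {
        // Two classes: (5 TP, 1 FP) and (3 TP, 1 FP) -> 8 / 10 = 0.8
        System.out.println(microPrecision(new int[]{5, 3}, new int[]{1, 1}));
    }
}
```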
-
microRecall
public TextClassificationModelMetrics.Builder microRecall(Float microRecall)
Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and the instances that were actually tagged positive. The recall metric reveals how many of the actual positive classes were correctly predicted.

Parameters:
microRecall - the value to set
Returns:
this builder
-
macroF1
public TextClassificationModelMetrics.Builder macroF1(Float macroF1)
F1-score is a measure of a model's accuracy on a dataset.

Parameters:
macroF1 - the value to set
Returns:
this builder
-
macroPrecision
public TextClassificationModelMetrics.Builder macroPrecision(Float macroPrecision)
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
macroPrecision - the value to set
Returns:
this builder
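In contrast to micro-averaging, macro-averaging computes precision per class first and then takes the unweighted mean, so small classes count as much as large ones. Again an illustrative sketch, not part of the SDK:

```java
// Illustrative (not part of the SDK): macro-averaged precision is the
// unweighted mean of per-class precisions.
public class MacroMetrics {
    // tp[i] and fp[i] are the true/false positive counts for class i.
    public static float macroPrecision(int[] tp, int[] fp) {
        float sum = 0f;
        for (int i = 0; i < tp.length; i++) {
            sum += (float) tp[i] / (tp[i] + fp[i]);
        }
        return sum / tp.length;
    }

    public static void main(String[] args) {
        // Class A: 3/4 = 0.75, class B: 1/4 = 0.25 -> unweighted mean = 0.5
        System.out.println(macroPrecision(new int[]{3, 1}, new int[]{1, 3}));
    }
}
```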
-
macroRecall
public TextClassificationModelMetrics.Builder macroRecall(Float macroRecall)
Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and the instances that were actually tagged positive. The recall metric reveals how many of the actual positive classes were correctly predicted.

Parameters:
macroRecall - the value to set
Returns:
this builder
-
weightedF1
public TextClassificationModelMetrics.Builder weightedF1(Float weightedF1)
F1-score is a measure of a model's accuracy on a dataset.

Parameters:
weightedF1 - the value to set
Returns:
this builder
-
weightedPrecision
public TextClassificationModelMetrics.Builder weightedPrecision(Float weightedPrecision)
Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives).

Parameters:
weightedPrecision - the value to set
Returns:
this builder
-
weightedRecall
public TextClassificationModelMetrics.Builder weightedRecall(Float weightedRecall)
Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and the instances that were actually tagged positive. The recall metric reveals how many of the actual positive classes were correctly predicted.

Parameters:
weightedRecall - the value to set
Returns:
this builder
-
build
public TextClassificationModelMetrics build()
-
copy
public TextClassificationModelMetrics.Builder copy(TextClassificationModelMetrics model)
-
-
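The builder is typically used fluently: construct it, chain the setter methods documented above, and call build(). A minimal usage sketch, assuming the OCI Java SDK is on the classpath; the metric values shown are sample numbers only:

```java
import com.oracle.bmc.ailanguage.model.TextClassificationModelMetrics;

public class MetricsExample {
    public static void main(String[] args) {
        // Chain the documented setters on the Builder, then call build().
        TextClassificationModelMetrics metrics =
                new TextClassificationModelMetrics.Builder()
                        .accuracy(0.92f)
                        .microF1(0.91f)
                        .microPrecision(0.90f)
                        .microRecall(0.93f)
                        .macroF1(0.88f)
                        .build();
        System.out.println(metrics);
    }
}
```

copy(TextClassificationModelMetrics model) can instead seed the builder from an existing instance before overriding individual fields.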