
Namespace NamedEntityRecognitionModelMetrics

Model-level named entity recognition metrics.
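Taken together, the properties documented below describe the following shape. This is a sketch for orientation only; the authoritative interface is the one generated in the SDK (assumed here to be the oci-ailanguage package).

// Sketch of the shape documented on this page; not the generated source.
interface NamedEntityRecognitionModelMetrics {
  macroF1: number;
  macroPrecision: number;
  macroRecall: number;
  microF1: number;
  microPrecision: number;
  microRecall: number;
  weightedF1?: number;
  weightedPrecision?: number;
  weightedRecall?: number;
}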

Properties

macroF1

macroF1: number

Macro-averaged F1-score, a measure of the model's accuracy on the dataset. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.
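For reference, the F1-score is the harmonic mean of precision and recall. A minimal sketch; the f1Score helper is illustrative and not part of the SDK.

function f1Score(precision: number, recall: number): number {
  // F1 is the harmonic mean of precision and recall; return 0 when both are 0.
  return precision + recall === 0
    ? 0
    : (2 * precision * recall) / (precision + recall);
}

// Example: precision 0.8 and recall 0.6 give an F1 of about 0.686.
console.log(f1Score(0.8, 0.6));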

macroPrecision

macroPrecision: number

Macro-averaged precision. Precision is the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

macroRecall

macroRecall: number

Macro-averaged recall. Recall measures the model's ability to find the actual positive classes: it is the ratio of correctly predicted true positives to everything that was actually tagged, and so reveals how many of the actual classes the model predicted correctly. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.
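The macro* and micro* properties differ only in how per-entity-type counts are aggregated: macro metrics average the per-type scores, while micro metrics pool the raw counts before computing a single score. That is the standard convention and is assumed here; the types and functions below are illustrative, not part of the SDK.

// Hypothetical per-entity-type counts; not an SDK type.
interface EntityTypeCounts {
  truePositives: number;
  falsePositives: number;
  falseNegatives: number;
}

// Macro averaging: compute the metric per entity type, then average the scores.
function macroPrecision(perType: EntityTypeCounts[]): number {
  const scores = perType.map(c =>
    c.truePositives + c.falsePositives === 0
      ? 0
      : c.truePositives / (c.truePositives + c.falsePositives)
  );
  return scores.reduce((sum, s) => sum + s, 0) / perType.length;
}

// Micro averaging: pool the raw counts across entity types, then compute once.
function microRecall(perType: EntityTypeCounts[]): number {
  const tp = perType.reduce((sum, c) => sum + c.truePositives, 0);
  const fn = perType.reduce((sum, c) => sum + c.falseNegatives, 0);
  return tp + fn === 0 ? 0 : tp / (tp + fn);
}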

microF1

microF1: number

Micro-averaged F1-score, a measure of the model's accuracy on the dataset. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

microPrecision

microPrecision: number

Micro-averaged precision. Precision is the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

microRecall

microRecall: number

Micro-averaged recall. Recall measures the model's ability to find the actual positive classes: it is the ratio of correctly predicted true positives to everything that was actually tagged, and so reveals how many of the actual classes the model predicted correctly. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

Optional weightedF1

weightedF1: undefined | number

Weighted-average F1-score, a measure of the model's accuracy on the dataset. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

Optional weightedPrecision

weightedPrecision: undefined | number

Weighted-average precision. Precision is the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.

Optional weightedRecall

weightedRecall: undefined | number

Weighted-average recall. Recall measures the model's ability to find the actual positive classes: it is the ratio of correctly predicted true positives to everything that was actually tagged, and so reveals how many of the actual classes the model predicted correctly. Note: Numbers greater than Number.MAX_SAFE_INTEGER will result in rounding issues.
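The weighted* properties are optional and, by the usual convention (assumed here, not confirmed on this page), weight each entity type's score by its support, i.e. the number of labelled instances of that type. Illustrative sketch, not part of the SDK.

// Hypothetical per-entity-type score paired with its support.
interface EntityTypeScore {
  score: number;   // e.g. the F1, precision, or recall for one entity type
  support: number; // number of labelled instances of that entity type
}

// Weighted averaging: weight each entity type's score by its support.
function weightedAverage(perType: EntityTypeScore[]): number {
  const totalSupport = perType.reduce((sum, t) => sum + t.support, 0);
  if (totalSupport === 0) return 0;
  return perType.reduce((sum, t) => sum + t.score * t.support, 0) / totalSupport;
}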

Functions

getDeserializedJsonObj

getJsonObj
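Both functions follow the usual pattern of generated OCI TypeScript SDK model namespaces: getJsonObj maps a typed model object to a plain JSON-serializable object, and getDeserializedJsonObj maps a raw response payload back to the typed shape. The usage below is a sketch under those assumptions; the package name oci-ailanguage, the access path through models, and the metric values are assumptions, not taken from this page.

import * as aiLanguage from "oci-ailanguage";

// Stand-in metrics value for illustration; not real evaluation data.
const metrics: aiLanguage.models.NamedEntityRecognitionModelMetrics = {
  macroF1: 0.91,
  macroPrecision: 0.9,
  macroRecall: 0.92,
  microF1: 0.93,
  microPrecision: 0.94,
  microRecall: 0.92,
};

// Assumed generated helper: converts the typed model into a plain JSON object.
const json = aiLanguage.models.NamedEntityRecognitionModelMetrics.getJsonObj(metrics);
console.log(JSON.stringify(json));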