Get Multi-Label Model Metrics

Returns the metrics, such as the f1 score, accuracy, and confusion matrix, for a model that has a modelType of image-multi-label. Together, these metrics give you a picture of the model's accuracy and how well it will perform. This call returns the metrics for the last epoch in the training used to create the model. To see the metrics for each epoch, see Get Multi-Label Model Learning Curve.

Multi-label models are available in Einstein Vision API version 2.0 and later.

The call that you make to get model metrics always has the same format, but the response varies depending on the type of model for which you retrieve metrics.
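As a rough illustration, here's how such a call might look from Python using the requests library. This is a minimal sketch, not the official example: the endpoint path, token variable, and model ID are assumptions that you should replace with the values from your own Einstein Vision setup.

```python
import requests

# Assumed endpoint pattern for retrieving model metrics; confirm the exact
# URL against the Einstein Vision API reference for your org.
BASE_URL = "https://api.einstein.ai/v2/vision/models"
ACCESS_TOKEN = "<YOUR_ACCESS_TOKEN>"   # placeholder
MODEL_ID = "<YOUR_MODEL_ID>"           # placeholder

response = requests.get(
    f"{BASE_URL}/{MODEL_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

metrics = response.json()
print(metrics["object"])        # "metrics"
print(metrics["metricsData"])   # per-label metrics described below
```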

Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| createdAt | date | Date and time that the model was created. | 2.0 |
| id | string | ID of the model. Contains letters and numbers. | 2.0 |
| language | string | Model language inherited from the dataset language. Default is N/A. | 2.0 |
| metricsData | object | Model metrics values. | 2.0 |
| object | string | Object returned; in this case, metrics. | 2.0 |
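To make the nesting concrete, here's an illustrative sketch of the overall response shape as a Python literal. The values are placeholders rather than real output; the contents of metricsData are described in the next table.

```python
# Placeholder values only -- shows the nesting, not actual API output.
example_response = {
    "createdAt": "<TIMESTAMP>",
    "id": "<MODEL_ID>",
    "language": "N/A",
    "metricsData": {
        # Per-label metrics; see the metricsData Response Body table below.
    },
    "object": "metrics",
}
```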

metricsData Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| confusionMatrices | object | Contains a key for each label in the model. Each label maps to a two-element array, and each element of that array is a two-element integer array that holds the confusion matrix values: the correct and incorrect classifications for the label based on testing done during the training process. Use this field to build a binary confusion matrix for each label in the model. | 2.0 |
| f1 | array | Array of float arrays that contains the weighted average of precision and recall for each label in the dataset. Each value corresponds to the label at the same index in the labels array; for example, the f1 score in the first array corresponds to the first label in the labels array. | 2.0 |
| labels | array | Array of strings that contains the dataset labels. These labels correspond to the values in the f1 and confusionMatrices arrays. | 2.0 |
| testAccuracies | array | Array of floats that specify the accuracy of the test data for each label. By default, 10% of your dataset is set aside and isn't used during training. That 10% is then sent to the model for prediction, and how often the model predicts correctly is reported for each label in the testAccuracies array. | 2.0 |
| trainingAccuracies | array | Array of floats that specify the accuracy of the training data for each label. The 90% of your dataset that remains after the test set is set aside is sent to the model for prediction, and how often the model predicts correctly is reported for each label in the trainingAccuracies array. | 2.0 |
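Because the f1, testAccuracies, and trainingAccuracies arrays are positional while confusionMatrices is keyed by label, it can help to regroup everything per label. Here's a minimal sketch that assumes the response has already been parsed into a dict shaped as the tables above describe; the function name and grouping are illustrative, not part of the API.

```python
def metrics_by_label(metrics_data):
    """Regroup the metricsData fields so each label has its own entry.

    Assumes metrics_data matches the shape described above: labels, f1,
    testAccuracies, and trainingAccuracies are parallel arrays, and
    confusionMatrices is keyed by label name.
    """
    per_label = {}
    for i, label in enumerate(metrics_data["labels"]):
        per_label[label] = {
            "f1": metrics_data["f1"][i],
            "testAccuracy": metrics_data["testAccuracies"][i],
            "trainingAccuracy": metrics_data["trainingAccuracies"][i],
            "confusionMatrix": metrics_data["confusionMatrices"][label],
        }
    return per_label
```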

Use the confusionMatrices array to build a binary confusion matrix for each label in a model. Here's what the confusion matrices for the first three labels in the example results might look like.

|                 | tennis-ball | not tennis-ball |
|-----------------|-------------|-----------------|
| tennis-ball     | 43          | 0               |
| not tennis-ball | 0           | 12              |

|                  | baseball-bat | not baseball-bat |
|------------------|--------------|------------------|
| baseball-bat     | 44           | 2                |
| not baseball-bat | 0            | 9                |

|                  | tennis-court | not tennis-court |
|------------------|--------------|------------------|
| tennis-court     | 41           | 2                |
| not tennis-court | 0            | 12               |
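If you want a single per-label accuracy figure from one of these 2x2 matrices, you can sum the diagonal (the correct classifications) and divide by the total. A quick sketch, assuming the correct counts sit on the diagonal as in the tables above:

```python
def label_accuracy(matrix):
    """Accuracy for one label from its 2x2 binary confusion matrix.

    Assumes correct classifications are on the diagonal, as in the
    example tables above.
    """
    correct = matrix[0][0] + matrix[1][1]
    total = sum(sum(row) for row in matrix)
    return correct / total

# baseball-bat example from the tables above: (44 + 9) / 55 ≈ 0.96
print(label_accuracy([[44, 2], [0, 9]]))
```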
