Get Image Model Metrics

Returns the metrics for a model that has a modelType of image, such as the f1 score, accuracy, and confusion matrix. The combination of these metrics gives you a picture of model accuracy and how well the model will perform. This call returns the metrics for the last epoch in the training used to create the model. To see the metrics for each epoch, see Get Image Model Learning Curve.

The call that you make to get model metrics always has the same format, but the response varies depending on the type of model for which you retrieve metrics.
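As a minimal sketch, the request can be built in Python. The endpoint URL, model ID, and token below are placeholder assumptions based on typical Einstein Vision REST conventions; substitute the values from your own setup.

```python
import urllib.request


def build_metrics_request(model_id: str, token: str) -> urllib.request.Request:
    """Build (but don't send) the GET request for a model's metrics."""
    # The host and path here are assumptions; check your API reference.
    url = f"https://api.einstein.ai/v2/vision/models/{model_id}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})


req = build_metrics_request("YOUR_MODEL_ID", "YOUR_TOKEN")
print(req.full_url)
```

Sending the request with a valid token returns the JSON response body described below.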

Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| createdAt | date | Date and time that the model was created. | 1.0 |
| id | string | ID of the model. Contains letters and numbers. | 1.0 |
| language | string | Model language inherited from the dataset language. Default is N/A. | 2.0 |
| metricsData | object | Model metrics values. | 1.0 |
| object | string | Object returned; in this case, metrics. | 1.0 |

metricsData Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| confusionMatrix | array | Array of integers that contains the correct and incorrect classifications for each label in the dataset, based on testing done during the training process. | 1.0 |
| f1 | array | Array of floats that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array is found in the labels array; for example, the first f1 score in the f1 array corresponds to the first label in the labels array. | 1.0 |
| labels | array | Array of strings that contains the dataset labels. These labels correspond to the values in the f1 array and the confusionMatrix array. | 1.0 |
| testAccuracy | float | Accuracy of the test data. By default, 10% of your dataset is set aside and isn't used during training. This 10% is then sent to the model for prediction, and the percentage of correct predictions is reported as testAccuracy. | 1.0 |
| trainingAccuracy | float | Accuracy of the training data. The 90% of your dataset that remains after the test set is set aside is sent to the model for prediction, and the percentage of correct predictions is reported as trainingAccuracy. | 1.0 |
| trainingLoss | float | Summary of the errors made in predictions using the training and validation data. The lower the value, the more accurate the model. | 1.0 |
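To illustrate how these fields fit together, here's a sketch that parses an example response and pairs each f1 score with its label by index. The JSON values below are illustrative placeholders, not real API output.

```python
import json

# Illustrative response body modeled on the fields above (values are made up).
payload = json.loads("""
{
  "id": "EXAMPLE_MODEL_ID",
  "object": "metrics",
  "createdAt": "2017-02-21T20:46:12.000+0000",
  "language": "N/A",
  "metricsData": {
    "labels": ["beach", "mountain"],
    "f1": [0.9090909090909091, 0.9411764705882353],
    "confusionMatrix": [[5, 0], [1, 8]],
    "testAccuracy": 0.9285714285714286,
    "trainingAccuracy": 0.9830508474576272,
    "trainingLoss": 0.0572
  }
}
""")

# Pair each label with its f1 score by index, as described above.
m = payload["metricsData"]
for label, score in zip(m["labels"], m["f1"]):
    print(f"{label}: f1 = {score:.3f}")
```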

Use the labels array and the confusionMatrix array to build the confusion matrix for a model. The labels in the array become the matrix rows and columns. Here's what the confusion matrix for the example results looks like.

|  | beach | mountain |
|---|---|---|
| beach | 5 | 0 |
| mountain | 1 | 8 |
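Correct classifications lie on the matrix diagonal, so you can derive the overall test accuracy from the matrix yourself. This sketch assumes the confusionMatrix values are returned row by row, matching the layout above.

```python
# Labels and matrix values taken from the example matrix above.
labels = ["beach", "mountain"]
confusion_matrix = [[5, 0], [1, 8]]

# Correct predictions are on the diagonal; the total is the sum of all cells.
correct = sum(confusion_matrix[i][i] for i in range(len(labels)))
total = sum(sum(row) for row in confusion_matrix)
print(f"accuracy = {correct / total:.4f}")
```

Here 13 of 14 test predictions are correct, which matches a testAccuracy of about 0.9286.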
