Returns the metrics for a model that has a `modelType` of `image-multi-label`, such as the f1 score, accuracy, and confusion matrix. The combination of these metrics gives you a picture of model accuracy and how well the model will perform. This call returns the metrics for the last epoch in the training used to create the model. To see the metrics for each epoch, see Get Multi-Label Model Learning Curve.
Multi-label models are available in Einstein Vision API version 2.0 and later.
The call that you make to get model metrics is always the same format, but the response varies depending on the type of model for which you retrieve metrics.
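As a sketch of that call format: the request below targets the Einstein Vision v2 models endpoint and builds (but doesn't send) an authenticated GET request. The `<MODEL_ID>` and `<TOKEN>` values are placeholders you supply; the endpoint and headers are assumed from Einstein Vision API conventions.

```python
# Build the metrics request; MODEL_ID and TOKEN are placeholders.
from urllib.request import Request

MODEL_ID = "<MODEL_ID>"
TOKEN = "<TOKEN>"

request = Request(
    f"https://api.einstein.ai/v2/vision/models/{MODEL_ID}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Cache-Control": "no-cache",
    },
    method="GET",
)
# Passing this request to urllib.request.urlopen would return the JSON
# metrics body described in the tables that follow.
```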
Response Body
Name | Type | Description | Available Version |
---|---|---|---|
| `createdAt` | date | Date and time that the model was created. | 2.0 |
| `id` | string | ID of the model. Contains letters and numbers. | 2.0 |
| `language` | string | Model language inherited from the dataset language. Default is `en_US`. | 2.0 |
| `metricsData` | object | Model metrics values. | 2.0 |
| `object` | string | Object returned; in this case, `metrics`. | 2.0 |
`metricsData` Response Body
Name | Type | Description | Available Version |
---|---|---|---|
| `confusionMatrices` | object | Confusion matrix data for each label in the dataset. Use this field to build a binary confusion matrix for each label in the model. | 2.0 |
| `f1` | array | Array of float arrays that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array can be found in the `labels` array. | 2.0 |
| `labels` | array | Array of strings that contains the dataset labels. These labels correspond, by index, to the values in the `f1`, `testAccuracy`, and `trainingAccuracy` arrays. | 2.0 |
| `testAccuracy` | array | Array of floats that specify the accuracy of the test data for each label. By default, 10% of your initial dataset is set aside and isn't used during training to create the model. This 10% is then sent to the model for prediction, and how often the model predicts correctly is reported per label, in the order of the `labels` array. | 2.0 |
| `trainingAccuracy` | array | Array of floats that specify the accuracy of the training data for each label. By default, the 90% of your dataset that remains after the test set is set aside is sent to the model for prediction, and how often the model predicts correctly is reported per label, in the order of the `labels` array. | 2.0 |
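Because the per-label arrays share the index order of the labels array, pairing them up is a simple zip by index. A minimal sketch, with invented sample values for illustration:

```python
# Sketch: pair each label with its per-label metrics from a metricsData
# payload. Field names follow the table above; the numeric values here
# are invented sample data, not real API output.
metrics_data = {
    "labels": ["tennis-ball", "baseball-bat", "tennis-court"],
    "testAccuracy": [1.0, 0.96, 0.95],
    "trainingAccuracy": [0.99, 0.98, 0.97],
}

# Index i in each array refers to the same label.
per_label = {
    label: {
        "testAccuracy": metrics_data["testAccuracy"][i],
        "trainingAccuracy": metrics_data["trainingAccuracy"][i],
    }
    for i, label in enumerate(metrics_data["labels"])
}

print(per_label["tennis-ball"])
# → {'testAccuracy': 1.0, 'trainingAccuracy': 0.99}
```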
Use the `confusionMatrices` array to build a binary confusion matrix for each label in a model. Here's what the confusion matrices for the first three labels in the example results might look like.
| | tennis-ball | not tennis-ball |
|---|---|---|
| tennis-ball | 43 | 0 |
| not tennis-ball | 0 | 12 |

| | baseball-bat | not baseball-bat |
|---|---|---|
| baseball-bat | 44 | 2 |
| not baseball-bat | 0 | 9 |

| | tennis-court | not tennis-court |
|---|---|---|
| tennis-court | 41 | 2 |
| not tennis-court | 0 | 12 |
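The tables above can be rendered from the metrics response. This sketch assumes each entry in `confusionMatrices` reduces to a 2x2 integer array per label, laid out as `[[TP, FN], [FP, TN]]` with rows as actual and columns as predicted; that shape is an assumption for illustration, and the values match the example tables.

```python
# Sketch: render a binary confusion matrix per label.
# Assumed shape: one 2x2 array per label, [[TP, FN], [FP, TN]];
# values taken from the example tables above.
labels = ["tennis-ball", "baseball-bat", "tennis-court"]
confusion_matrices = [
    [[43, 0], [0, 12]],
    [[44, 2], [0, 9]],
    [[41, 2], [0, 12]],
]

def print_matrix(label, m):
    """Print one binary confusion matrix in the row/column layout above."""
    header = ["", label, f"not {label}"]
    rows = [
        [label, str(m[0][0]), str(m[0][1])],
        [f"not {label}", str(m[1][0]), str(m[1][1])],
    ]
    for row in [header] + rows:
        print(" | ".join(f"{cell:>16}" for cell in row))

for label, m in zip(labels, confusion_matrices):
    print_matrix(label, m)
    print()
```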