Einstein Platform Services

Release Notes

Find out what's new, changed, or deprecated in the Einstein Vision and Language APIs.

Release Notes Consolidated

The Einstein Vision and Language release notes are now in one place. For the latest release notes, see the Salesforce Release Notes. At the top of the page, you can select the release.

May 6, 2020

NEW

Detect products on retail shelves with an optimized algorithm. Improve your retail execution scenarios that identify products on shelves. Now you can use an optimized algorithm to create a model with detection accuracy (mAP) that’s better for a retail use case. And it still has the same functionality as a model created using the standard detection algorithm.

How: To use the retail execution algorithm, first create a dataset that has a type of image-detection. Then when you train the dataset to create a model, you specify an algorithm of retail-execution. The cURL command is as follows.

curl -X POST -H "Authorization: <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Alpine Retail Model" -F "datasetId=<DATASET_ID>" -F "algorithm=retail-execution" https://api.einstein.ai/v2/vision/train

These Einstein Vision calls take the algorithm parameter.

  • Train a model—POST /v2/vision/train
  • Retrain a model—POST /v2/vision/retrain
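The cURL call above can be assembled programmatically before sending. This sketch only builds the endpoint, headers, and multipart form fields from the example (the function name and the helper structure are illustrative; you would POST the result with an HTTP client of your choice):

```python
def build_retail_train_request(token, dataset_id, name="Alpine Retail Model"):
    """Assemble the pieces of the retail-execution train call shown in
    the cURL example above. TOKEN and DATASET_ID are placeholders."""
    return {
        "url": "https://api.einstein.ai/v2/vision/train",
        "headers": {"Authorization": token, "Cache-Control": "no-cache"},
        "fields": {
            "name": name,
            "datasetId": dataset_id,
            # Use retail-execution instead of the standard detection algorithm.
            "algorithm": "retail-execution",
        },
    }

req = build_retail_train_request("<TOKEN>", "<DATASET_ID>")
print(req["fields"]["algorithm"])  # retail-execution
```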

May 4, 2020

NEW

Detect text in an image with Einstein OCR (Generally Available). Get optical character recognition (OCR) models that detect alphanumeric text in an image with Einstein OCR. You access the models from a single REST API endpoint. Each model has specific use cases, such as business card scanning, product lookup, and digitizing documents and tables.

How: When you call the API, you send in an image, and the JSON response contains various elements based on the value of the task parameter. Here’s what a cURL call to the OCR endpoint looks like.

curl -X POST -H "Authorization: Bearer <TOKEN>" -F sampleLocation="https://www.publicdomainpictures.net/pictures/240000/velka/emergency-evacuation-route-signpost.jpg" -F task="text" -F modelId="OCRModel" https://api.einstein.ai/v2/vision/ocr

The response JSON returns the text and coordinates of a bounding box (in pixels) for that text.

{
  "task": "text",
  "probabilities": [
    {
      "probability": 0.99937266,
      "label": "ROUTE",
      "boundingBox": {
        "minX": 582,
        "minY": 685,
        "maxX": 1151,
        "maxY": 815
      }
    },
    {
      "probability": 0.99471515,
      "label": "EMERGENCY",
      "boundingBox": {
        "minX": 361,
        "minY": 208,
        "maxX": 1383,
        "maxY": 346
      }
    },
    {
      "probability": 0.99469215,
      "label": "EVACUATION",
      "boundingBox": {
        "minX": 331,
        "minY": 438,
        "maxX": 1401,
        "maxY": 570
      }
    }
  ],
  "object": "predictresponse"
}
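Because each detection carries pixel coordinates, you can recover the top-to-bottom order of the words on the sign by sorting on minY. A minimal sketch using a condensed copy of the response above (the `reading_order` helper is illustrative, not part of the API):

```python
# Condensed copy of the OCR response above: label plus bounding box in pixels.
ocr_response = {
    "task": "text",
    "probabilities": [
        {"probability": 0.99937266, "label": "ROUTE",
         "boundingBox": {"minX": 582, "minY": 685, "maxX": 1151, "maxY": 815}},
        {"probability": 0.99471515, "label": "EMERGENCY",
         "boundingBox": {"minX": 361, "minY": 208, "maxX": 1383, "maxY": 346}},
        {"probability": 0.99469215, "label": "EVACUATION",
         "boundingBox": {"minX": 331, "minY": 438, "maxX": 1401, "maxY": 570}},
    ],
    "object": "predictresponse",
}

def reading_order(resp):
    """Sort detections top to bottom by the bounding box's minY,
    recovering the on-sign order of the words."""
    boxes = sorted(resp["probabilities"], key=lambda p: p["boundingBox"]["minY"])
    return [p["label"] for p in boxes]

print(reading_order(ocr_response))  # ['EMERGENCY', 'EVACUATION', 'ROUTE']
```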

NEW

Einstein Intent now supports multiple languages. Einstein Intent datasets and models now support these languages: English (US), English (UK), French, German, Italian, Portuguese, Spanish, Chinese (Simplified) (beta), Chinese (Traditional) (beta), Japanese (beta). You specify the language when you create an intent dataset. When you train that dataset, the model inherits the language of the dataset.

How: There are two new API parameters that enable multilanguage support: language and algorithm. When you create the dataset, you specify the language in the language parameter. When you train the dataset to create a model, you pass in the algorithm parameter with a value of multilingual-intent or multilingual-intent-ood (to create a model that handles out-of-domain predictions). These calls take the language parameter.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
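The two-step flow above can be sketched as plain request descriptions. Only the `language` and `algorithm` parameters and the two URLs come from this note; the `path` and `type` field names and the dataset/model names are illustrative assumptions:

```python
def multilingual_intent_calls(dataset_url, language="ja", ood=False):
    """Describe the two calls: create the dataset with a language,
    then train it with a multilingual algorithm."""
    upload = {
        "url": "https://api.einstein.ai/v2/language/datasets/upload",
        "fields": {"path": dataset_url, "type": "text-intent",
                   "language": language},
    }
    # multilingual-intent-ood additionally handles out-of-domain predictions.
    algorithm = "multilingual-intent-ood" if ood else "multilingual-intent"
    train = {
        "url": "https://api.einstein.ai/v2/language/train",
        "fields": {"name": "Intent Model", "datasetId": "<DATASET_ID>",
                   "algorithm": algorithm},
    }
    return upload, train
```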

Note
As a beta feature, Chinese (Simplified), Chinese (Traditional), and Japanese language support is a preview and isn’t part of the “Services” under your master subscription agreement with Salesforce. Use this feature at your sole discretion, and make your purchase decisions only on the basis of generally available products and features. Salesforce doesn’t guarantee general availability of this feature within any particular time frame or at all, and we can discontinue it at any time. This feature is for evaluation purposes only, not for production use. It’s offered as is and isn’t supported, and Salesforce has no liability for any harm or damage arising out of or in connection with it. All restrictions, Salesforce reservation of rights, obligations concerning the Services, and terms for related Non-Salesforce Applications and Content apply equally to your use of this feature.

NEW

Create Einstein Intent models that support out-of-domain text. Einstein Intent lets you create a model that handles predictions for unexpected, out-of-domain text. Out-of-domain text is text that doesn’t fall into any of the labels in the model.

How: When you train an intent dataset, pass the algorithm parameter with a value of multilingual-intent-ood. To see how the algorithm works, let’s say you have a case routing model with five labels: Billing, Order Change, Password Help, Sales Opportunity, and Shipping Info. The following text comes in for prediction: “What is the weather in Los Angeles?” If the model was created using the standard algorithm, the response looks like this JSON.

{
  "probabilities": [
    {
      "label": "Sales Opportunity",
      "probability": 0.9987062
    },
    {
      "label": "Shipping Info",
      "probability": 0.0008089547
    },
    {
      "label": "Order Change",
      "probability": 0.00046194126
    },
    {
      "label": "Billing",
      "probability": 0.000021637188
    },
    {
      "label": "Password Help",
      "probability": 0.0000012197639
    }
  ],
  "object": "predictresponse"
}

The text sent for prediction clearly doesn’t fall into any of the labels. The model isn’t designed to handle predictions that don’t match one of the labels, so the model returns the labels with the best probability. If you create the model with the multilingual-intent-ood algorithm, and you send the same text for prediction, the response returns an empty probabilities array.

{
  "probabilities": [ ],
  "object": "predictresponse"
}

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
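In client code, the empty probabilities array is the signal to route out-of-domain text elsewhere. A minimal sketch, using the two sample responses above (the `route_intent` helper and fallback behavior are illustrative):

```python
def route_intent(prediction):
    """Return the top label from a prediction response, or None when a
    multilingual-intent-ood model returns an empty probabilities array
    (out-of-domain text)."""
    probs = prediction.get("probabilities", [])
    if not probs:
        return None  # out-of-domain: send to a human or a fallback queue
    return max(probs, key=lambda p: p["probability"])["label"]

print(route_intent({"probabilities": [], "object": "predictresponse"}))  # None
```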

December 12, 2019

NEW

Get more detailed error messages for Einstein Object Detection training API calls. When you train an object detection dataset and the training process encounters an error, the API now returns more descriptive error messages. In most cases, the error message specifies the issue that caused the error and how to fix it.

How: The improved errors are returned for these API endpoints when the dataset type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain

November 21, 2019

NEW

Added elements in Language API model metrics response. New elements returned in the model metrics let you better understand the performance of your model. The response JSON for an Einstein Language API call that returns model metrics information contains three new elements: the macroF1 field, the precision array, and the recall array.

When: This change applies to all language models created after September 30, 2019. If you want to see these changes for models created before that date, retrain the dataset and create a new model.

How: The new field and arrays appear in the response for these calls when the model type is text-intent or text-sentiment.

  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get model learning curve—GET /v2/language/models/<MODEL_ID>/lc
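One way to read the new elements is to pair each label with its per-class scores. This sketch assumes the macroF1 field and the precision and recall arrays sit in a metricsData object alongside the existing labels and f1 arrays; the sample numbers are made up:

```python
# Made-up excerpt of a model metrics response, for illustration only.
sample_metrics = {
    "metricsData": {
        "labels": ["Billing", "Shipping Info"],
        "precision": [0.91, 0.88],
        "recall": [0.89, 0.90],
        "f1": [0.90, 0.89],
        "macroF1": 0.895,
    }
}

def summarize_metrics(resp):
    """Return macroF1 plus (label, precision, recall, f1) rows."""
    d = resp["metricsData"]
    rows = list(zip(d["labels"], d["precision"], d["recall"], d["f1"]))
    return d["macroF1"], rows
```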

October 14, 2019

CHANGED

Text datasets can contain up to 3 million words. The maximum number of words in a text dataset is now 3 million. A text dataset is a dataset that has a type of text-intent or text-sentiment.

How: You receive an error from the following calls when you train a text dataset that has more than 3 million words:

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

To avoid this error, make sure that when you create a dataset or add examples to a dataset, it contains fewer than 3 million words across all examples. For best results, we recommend that each example is around 100 words.
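You can check the word count locally before uploading. A sketch for the two-column (text, label) CSV format, assuming whitespace-delimited words are counted the same way the service counts them:

```python
import csv
import io

WORD_LIMIT = 3_000_000  # maximum words across all examples in a text dataset

def dataset_word_count(csv_text):
    """Count words in the text column (first column) of a
    text-intent or text-sentiment CSV."""
    total = 0
    for row in csv.reader(io.StringIO(csv_text)):
        if row:
            total += len(row[0].split())
    return total

sample = "what is my order status,Shipping Info\nreset my password,Password Help\n"
print(dataset_word_count(sample))  # 8
```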

October 4, 2019

CHANGED

Object detection max image size increased. We increased the maximum size of an image you can add to an object detection dataset from 1 MB to 5 MB.

How: The new maximum image size applies to these calls when the dataset type is image-detection.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create an example—POST /v2/vision/datasets/<DATASET_ID>/examples
  • Create a feedback example—POST /v2/vision/feedback
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback
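A quick client-side check against the new limit (the helper name is illustrative; in practice you would feed it `os.path.getsize(path)` for each image before adding it to the dataset):

```python
MAX_IMAGE_BYTES = 5 * 1024 * 1024  # new 5 MB limit, up from 1 MB

def fits_detection_limit(size_bytes):
    """True if an image is small enough to add to an
    image-detection dataset."""
    return size_bytes <= MAX_IMAGE_BYTES
```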

July 15, 2019

NEW

Intent API response JSON contains a new algorithm field. The response JSON for an Einstein Intent API call that returns model information now contains the algorithm field. The default return value is intent.

How: The algorithm field appears in the response for these calls when the dataset type or model type is text-intent.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

July 3, 2019

NEW

API response JSON contains a new language field. The response JSON for an Einstein Vision API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Vision datasets and models, the return value is N/A.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models

NEW

API response JSON contains a new language field. The response JSON for an Einstein Language API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Language datasets and models, the return value is en_US.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

NEW

Object Detection API response JSON contains a new algorithm field. The response JSON for an Einstein Vision API call that returns object detection model information now contains the algorithm field. The default return value is object-detection.

How: The algorithm field appears in the response for these calls when the dataset type or model type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models
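Reading the new metadata fields out of a get-model response can be sketched as below (the helper is illustrative; the sample values come from the defaults stated in these notes, where Vision models report language "N/A" and object detection models report algorithm "object-detection"):

```python
def describe_model(model_info):
    """Pull the newer language and algorithm fields from a
    get-model response, if present."""
    return (model_info.get("language"), model_info.get("algorithm"))

print(describe_model({"language": "N/A", "algorithm": "object-detection"}))
# ('N/A', 'object-detection')
```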

CHANGED

Einstein Language default language is now en_US. The default language changed to en_US from ENGLISH.

How: The language field now contains the value en_US in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

Release Notes Archive

For previous years' release notes, see Release Notes Archive.
