You access the models from a single REST API endpoint. Each model targets a specific use case, such as business card scanning, product lookup, and digitizing documents and tables. The API accepts these types of images:
- Business cards
- Images that contain unformatted data
- Images that contain data in tables
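One way to picture the call is a small helper that assembles the request fields before sending them. This is a minimal sketch only: the field names (`sampleLocation`, `task`) and the task values (`text`, `table`, `contact`) are illustrative assumptions, not confirmed by this page, so check the API reference for the exact names.

```python
# Maps assumed task values to the image types listed above.
TASKS = {
    "text": "Images that contain unformatted data",
    "table": "Images that contain data in tables",
    "contact": "Business cards",
}

def build_ocr_request(image_url, task="text"):
    """Return the form fields for a single OCR prediction call.

    Field names here are hypothetical; the real API may differ.
    """
    if task not in TASKS:
        raise ValueError(f"unknown task: {task!r}")
    return {
        "sampleLocation": image_url,  # URL of the image to scan
        "task": task,                 # selects the model / use case
    }

fields = build_ocr_request("https://example.com/card.png", task="contact")
print(fields["task"])  # contact
```

You would pass these fields as the body of the HTTP request; the point of the sketch is that one endpoint serves every use case, with the `task` value selecting the model.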
When you call the API, you send in an image, and the JSON response contains various elements based on the value of the `task` parameter:
- String of alphanumeric characters that the model predicts.
- Confidence (probability) that the detected bounding box contains text.
- XY coordinates for the location of the character string within the image (also called a bounding box).
- For tabular data, the table row and column in which the text is located.
- For business cards, the entity type of the detected text such as ORG, PERSON, and so on.
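The elements above can be read out of the response with a few lines of parsing. The response shape below is an assumption for illustration (field names like `probabilities`, `boundingBox`, and `attributes.tag` are not taken from this page); adapt it to the actual schema.

```python
import json

# A hypothetical response carrying the elements listed above:
# predicted string, confidence, bounding box, and entity type.
sample = json.loads("""
{
  "probabilities": [
    {
      "label": "Acme Corp",
      "probability": 0.94,
      "boundingBox": {"minX": 12, "minY": 30, "maxX": 180, "maxY": 52},
      "attributes": {"tag": "ORG"}
    }
  ]
}
""")

for item in sample["probabilities"]:
    box = item["boundingBox"]
    print(item["label"],                  # character string the model predicts
          item["probability"],            # confidence for the bounding box
          (box["minX"], box["minY"],
           box["maxX"], box["maxY"]),     # XY location within the image
          item["attributes"].get("tag"))  # entity type, e.g. ORG (business cards)
```

For tabular data, the per-item attributes would instead carry the row and column indexes of the cell containing the text.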
Here’s a sample image. The orange boxes and text indicate the detected text.
Currently, Einstein OCR supports only English.