{"_id":"59de6223666d650024f78fab","category":"59de6223666d650024f78f9c","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":["59aa1edec250f8000f836ae5","5ab96ce7dc4a30002c589e67"],"next":{"pages":[],"description":""},"createdAt":"2016-09-15T21:28:44.503Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"Artificial Intelligence (AI) is already part of our lives. Whenever you pick up your smartphone, you’re already seeing what AI can do for you, from tailored recommendations to relevant search results. With Einstein Vision, developers can harness the power of image recognition to build AI-powered apps fast. All without a data science degree!\n\nEinstein Vision is part of the Einstein Platform Services technologies, and you can use it to AI-enable your apps. Leverage pre-trained classifiers, or train your own custom classifiers to solve a vast array of specialized image-recognition use cases. Developers can bring the power of image recognition to CRM and third-party applications so that end users across sales, service, and marketing can discover new insights about their customers and predict outcomes that lead to smarter decisions.\n\nEinstein Vision includes these APIs:\n\n- Einstein Image Classification—Enables developers to train deep learning models to recognize and classify images at scale.\n\n- Einstein Object Detection—Enables developers to train models to recognize and count multiple distinct objects within an image, providing granular details like the size and location of each object.\n\n<sub>Rights of ALBERT EINSTEIN are used with permission of The Hebrew University of Jerusalem. Represented exclusively by Greenlight.</sub>\n[block:callout]\n{\n  \"type\": \"danger\",\n  \"body\": \"There's a new API endpoint in town! You can now reference the Einstein Platform Services APIs by using this endpoint: **https://<span></span>api.einstein.ai**. The old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.\"\n}\n[/block]","excerpt":"","slug":"introduction-to-the-einstein-predictive-vision-service","type":"basic","title":"Introduction to Salesforce Einstein Vision","__v":1,"childrenPages":[]}

#Introduction to Salesforce Einstein Vision#


Artificial Intelligence (AI) is already part of our lives. Whenever you pick up your smartphone, you’re seeing what AI can do for you, from tailored recommendations to relevant search results. With Einstein Vision, developers can harness the power of image recognition to build AI-powered apps fast. All without a data science degree!

Einstein Vision is part of the Einstein Platform Services technologies, and you can use it to AI-enable your apps. Leverage pre-trained classifiers, or train your own custom classifiers to solve a vast array of specialized image-recognition use cases. Developers can bring the power of image recognition to CRM and third-party applications so that end users across sales, service, and marketing can discover new insights about their customers and predict outcomes that lead to smarter decisions.

Einstein Vision includes these APIs:

- Einstein Image Classification—Enables developers to train deep learning models to recognize and classify images at scale.
- Einstein Object Detection—Enables developers to train models to recognize and count multiple distinct objects within an image, providing granular details like the size and location of each object.

<sub>Rights of ALBERT EINSTEIN are used with permission of The Hebrew University of Jerusalem. Represented exclusively by Greenlight.</sub>

> **Important:** There's a new API endpoint in town! You can now reference the Einstein Platform Services APIs by using this endpoint: **https://api.einstein.ai**. The old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.
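To see what this looks like in practice, here's a minimal sketch that sends an image to the prediction resource on the new endpoint. It assumes the pre-trained `GeneralImageClassifier` model and a placeholder image URL; replace `<TOKEN>` with a valid JWT token (the quick starts later in these docs show how to generate one).

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=https://example.com/photos/beach.jpg" -F "modelId=GeneralImageClassifier" https://api.einstein.ai/v2/vision/predict
```

The response is a list of labels with probabilities, similar in shape to the prediction responses shown in the quick starts.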
{"_id":"59de6223666d650024f78fac","category":"59de6223666d650024f78f9c","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-15T21:40:16.560Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"code":"{}","name":"","status":200,"language":"json"},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"Einstein Vision enables you to tap into the power of AI and train deep learning models to recognize and classify images at scale. You can use pre-trained classifiers or train your own custom classifiers to solve unique use cases.\n\nFor example, Salesforce Social Studio integrates with this service to expand a marketer’s view beyond just keyword listening. You can “visually listen” to detect attributes about an image, such as detecting your brand logo or that of your competitor in a customer’s photo. You can use these attributes to learn more about your customers' lifestyles and preferences.\n \nImages contain contextual clues about all aspects of your business, including your customers’ preferences, your inventory levels, and the quality of your products. You can use these clues to enrich what you know about your sales, service, and marketing efforts to gain new insights about your customers and take action. The possibilities are limitless with applications that include:\n\n- Visual search—Expand the ways that your customers can discover your products and increase sales.\n - Provide customers with visual filters to find products that best match their preferences while browsing online.\n - Allow customers to take photos of your products to discover where they can make purchases online or in-store.\n\n\n- Brand detection—Monitor your brand across all your channels to increase your marketing reach and preserve brand integrity.\n - Better understand customer preferences and lifestyle through their social media images.\n -  Monitor user-generated images through communities and review boards to improve products and quality of service.\n - Evaluate banner advertisement exposure during broadcast events to drive higher ROI.\n\n\n- Product identification—Increase the ways that you can identify your products to streamline sales processes and customer service.\n - Identify product issues before sending out a field technician to increase case resolution time.\n - Discover which products are out of stock or misplaced to streamline inventory restocking.\n - Measure retail shelf-share to optimize product mix and represent top-selling products among competitors.\n\n\n#Deep Learning in a Nutshell#\nDeep learning is a branch of machine learning, so let’s first define that term. Machine learning is a type of AI that provides computers with the ability to learn without being explicitly programmed. Machine learning algorithms can tell you something interesting about a set of data without writing custom code specific to a problem. Instead, you feed data to generic algorithms, and these algorithms build their own logic as it relates to the patterns within the data.\n\nIn deep learning, you create and train a neural network in a specific way. A neural network is a set of algorithms designed to recognize patterns. In deep learning, the neural network has multiple layers. 
At the top layer, the network trains on a specific set of features and then sends that information to the next layer. The network takes that information, combines it with other features and passes it to the next layer, and so on. \n\nDeep learning has increased in popularity because it has proven to outperform other methodologies for machine learning. Due to the advancement of distributed compute resources and businesses generating an influx of image, text, and voice data, deep learning can deliver insights that weren’t previously possible.","excerpt":"","slug":"what-is-the-predictive-vision-service","type":"basic","title":"What is Einstein Vision?","__v":0,"childrenPages":[]}

#What is Einstein Vision?#


Einstein Vision enables you to tap into the power of AI and train deep learning models to recognize and classify images at scale. You can use pre-trained classifiers or train your own custom classifiers to solve unique use cases.

For example, Salesforce Social Studio integrates with this service to expand a marketer’s view beyond just keyword listening. You can “visually listen” to detect attributes about an image, such as detecting your brand logo or that of your competitor in a customer’s photo. You can use these attributes to learn more about your customers' lifestyles and preferences.

Images contain contextual clues about all aspects of your business, including your customers’ preferences, your inventory levels, and the quality of your products. You can use these clues to enrich what you know about your sales, service, and marketing efforts to gain new insights about your customers and take action. The possibilities are limitless with applications that include:

- Visual search—Expand the ways that your customers can discover your products and increase sales.
  - Provide customers with visual filters to find products that best match their preferences while browsing online.
  - Allow customers to take photos of your products to discover where they can make purchases online or in-store.

- Brand detection—Monitor your brand across all your channels to increase your marketing reach and preserve brand integrity.
  - Better understand customer preferences and lifestyle through their social media images.
  - Monitor user-generated images through communities and review boards to improve products and quality of service.
  - Evaluate banner advertisement exposure during broadcast events to drive higher ROI.

- Product identification—Increase the ways that you can identify your products to streamline sales processes and customer service.
  - Identify product issues before sending out a field technician to increase case resolution time.
  - Discover which products are out of stock or misplaced to streamline inventory restocking.
  - Measure retail shelf-share to optimize product mix and represent top-selling products among competitors.

#Deep Learning in a Nutshell#
Deep learning is a branch of machine learning, so let’s first define that term. Machine learning is a type of AI that provides computers with the ability to learn without being explicitly programmed. Machine learning algorithms can tell you something interesting about a set of data without writing custom code specific to a problem. Instead, you feed data to generic algorithms, and these algorithms build their own logic as it relates to the patterns within the data.

In deep learning, you create and train a neural network in a specific way. A neural network is a set of algorithms designed to recognize patterns. In deep learning, the neural network has multiple layers. At the top layer, the network trains on a specific set of features and then sends that information to the next layer. The network takes that information, combines it with other features, and passes it to the next layer, and so on.

Deep learning has increased in popularity because it has proven to outperform other methodologies for machine learning. Due to the advancement of distributed compute resources and businesses generating an influx of image, text, and voice data, deep learning can deliver insights that weren’t previously possible.
{"_id":"59de6223666d650024f78fad","category":"59de6223666d650024f78f9c","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":["580664694ea93f3700b5f1ab"],"next":{"pages":[],"description":""},"createdAt":"2016-09-15T21:49:11.143Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"code":"{}","name":"","status":200,"language":"json"},{"language":"json","code":"{}","name":"","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"We’re now in the world of AI and deep learning, and this space has lots of new terms to become familiar with. Understanding these terms and how they relate to each other makes it easier to work with Einstein Vision.\n\n- **Dataset**—The training data, which consists of inputs and outputs. Training the dataset creates the model used to make predictions. For an image recognition problem, the image examples you provide train the model on the desired output labels that you want the model to predict. For example, in the Create a Custom Classifier [Scenario](doc:scenario), you create a model named Beach and Mountain Model from a binary training dataset consisting of two labels: Beaches (images of beach scenes) and Mountains (images of mountain scenes). A non-binary dataset contains three or more labels.\n\n- **Label**—A group of similar data inputs in a dataset that your model is trained to recognize. A label references the output name you want your model to predict. For example, for our Beach and Mountain model, the training data contains images of beaches and that label is  “Beaches.” Images of mountains have a label of “Mountains.” The food classifier, which is trained from a multi-label dataset, contains labels like chocolate cake, pasta, macaroons, and so on.\n\n- **Model**—A machine learning construct used to solve a classification problem. Developers design a classification model by creating a dataset and then defining labels and providing positive examples of inputs that belong to these labels. When you train the dataset, the system then determines the commonalities and differences between the various labels to generalize the characteristics that define each label. The model predicts which class a new input falls into based on the predefined classes specified in your training dataset.\n \n- **Training**—The process through which a model is created and learns the classification rules based on a given set of training inputs (dataset).\n\n- **Prediction**—The results that the model returns as to how closely the input matches data in the dataset.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/1801077-0089e22-vision_terms_graphic.png\",\n        \"0089e22-vision_terms_graphic.png\",\n        2000,\n        800,\n        \"#f3f3f3\"\n      ]\n    }\n  ]\n}\n[/block]","excerpt":"","slug":"predictive-vision-service-terminology","type":"basic","title":"Einstein Vision Terminology","__v":0,"childrenPages":[]}

#Einstein Vision Terminology#


We’re now in the world of AI and deep learning, and this space has lots of new terms to become familiar with. Understanding these terms and how they relate to each other makes it easier to work with Einstein Vision.

- **Dataset**—The training data, which consists of inputs and outputs. Training the dataset creates the model used to make predictions. For an image recognition problem, the image examples you provide train the model on the desired output labels that you want the model to predict. For example, in the Create a Custom Classifier [Scenario](doc:scenario), you create a model named Beach and Mountain Model from a binary training dataset consisting of two labels: Beaches (images of beach scenes) and Mountains (images of mountain scenes). A non-binary dataset contains three or more labels.

- **Label**—A group of similar data inputs in a dataset that your model is trained to recognize. A label references the output name you want your model to predict. For example, for our Beach and Mountain Model, the training data contains images of beaches, and that label is “Beaches.” Images of mountains have a label of “Mountains.” The food classifier, which is trained from a multi-label dataset, contains labels like chocolate cake, pasta, macaroons, and so on.

- **Model**—A machine learning construct used to solve a classification problem. Developers design a classification model by creating a dataset and then defining labels and providing positive examples of inputs that belong to these labels. When you train the dataset, the system determines the commonalities and differences between the various labels to generalize the characteristics that define each label. The model predicts which class a new input falls into based on the predefined classes specified in your training dataset.

- **Training**—The process through which a model is created and learns the classification rules based on a given set of training inputs (dataset).

- **Prediction**—The results that the model returns as to how closely the input matches data in the dataset.

![Einstein Vision terminology](https://files.readme.io/1801077-0089e22-vision_terms_graphic.png)
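To make these terms concrete, here's a minimal sketch of how they map to API calls for the Beach and Mountain example. It assumes the Vision endpoints follow the same upload/train/predict pattern as the Language quick start later in these docs; the `mountainvsbeach.zip` URL is a hypothetical training archive with one folder of example images per label.

```
# Dataset: upload labeled example images (one folder per label).
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Content-Type: multipart/form-data" -F "path=https://example.com/mountainvsbeach.zip" https://api.einstein.ai/v2/vision/datasets/upload

# Training: train the dataset to create the Beach and Mountain Model.
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain Model" -F "datasetId=<DATASET_ID>" https://api.einstein.ai/v2/vision/train

# Prediction: ask the trained model which label a new image falls into.
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Content-Type: multipart/form-data" -F "sampleLocation=https://example.com/test-photo.jpg" -F "modelId=<MODEL_ID>" https://api.einstein.ai/v2/vision/predict
```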
{"_id":"59de6225666d650024f78fd7","category":"59de6223666d650024f78fa9","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-06-05T20:29:37.281Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"Einstein Language includes two APIs that you can use to unlock powerful insights within text.\n\n- Einstein Sentiment—Classify the sentiment of text into positive, negative, and neutral classes to understand the feeling behind text. You can use the Einstein Sentiment API to analyze emails, social media, and text from chat to: \n\n - Identify the sentiment of a prospect’s emails to trend a lead or opportunity up or down.\n - Provide proactive service by helping dissatisfied customers first or extending promotional offers to satisfied customers.\n - Use trending sentiment to identify product deficiencies and measure overall satisfaction or dissatisfaction with your products.\n - Monitor the perception of your brand across social media channels, identify brand evangelists, and monitor customer satisfaction.\n\n\n You can create your own custom model or use our pre-built sentiment model. See [Use the Pre-Built Sentiment Model](doc:use-pre-built-models-sentiment).\n\n- Einstein Intent—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish. Leverage the Einstein Intent API to analyze text from emails, chats, or web forms to:\n\n - Determine what products prospects are interested in and send customer inquiries to the appropriate sales person.\n - Identify topics across unstructured text in emails, meeting notes, or account notes to summarize key points.\n - Route service cases to the correct agents or departments, or provide self-service options.\n - Understand customer posts to provide personalized self-service in your communities.\n\nCurrently, Einstein Language supports only English.\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/3342fe7-intent_and_sentiment_flow.png\",\n        \"intent_and_sentiment_flow.png\",\n        1310,\n        319,\n        \"#ebeff0\"\n      ]\n    }\n  ]\n}\n[/block]","excerpt":"Create natural language processing models to classify the intent of text or to classify text as positive, negative, and neutral. Use the Einstein Language APIs to build natural language processing into your apps.","slug":"intro-to-einstein-language","type":"basic","title":"Introduction to Salesforce Einstein Language","__v":0,"childrenPages":[]}

#Introduction to Salesforce Einstein Language#

Create natural language processing models to classify the intent of text or to classify text as positive, negative, and neutral. Use the Einstein Language APIs to build natural language processing into your apps.

Einstein Language includes two APIs that you can use to unlock powerful insights within text.

- Einstein Sentiment—Classify the sentiment of text into positive, negative, and neutral classes to understand the feeling behind text. You can use the Einstein Sentiment API to analyze emails, social media, and text from chat to:
  - Identify the sentiment of a prospect’s emails to trend a lead or opportunity up or down.
  - Provide proactive service by helping dissatisfied customers first or extending promotional offers to satisfied customers.
  - Use trending sentiment to identify product deficiencies and measure overall satisfaction or dissatisfaction with your products.
  - Monitor the perception of your brand across social media channels, identify brand evangelists, and monitor customer satisfaction.

  You can create your own custom model or use our pre-built sentiment model. See [Use the Pre-Built Sentiment Model](doc:use-pre-built-models-sentiment).

- Einstein Intent—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish. Leverage the Einstein Intent API to analyze text from emails, chats, or web forms to:
  - Determine what products prospects are interested in and send customer inquiries to the appropriate sales person.
  - Identify topics across unstructured text in emails, meeting notes, or account notes to summarize key points.
  - Route service cases to the correct agents or departments, or provide self-service options.
  - Understand customer posts to provide personalized self-service in your communities.

Currently, Einstein Language supports only English.

![Intent and sentiment flow](https://files.readme.io/3342fe7-intent_and_sentiment_flow.png)
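As a quick illustration, here's a minimal sketch that sends one line of text to the pre-built sentiment model. It assumes the pre-built model ID is `CommunitySentiment` (see the linked pre-built model topic for the authoritative ID) and that you've already generated a JWT token.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=CommunitySentiment" -F "document=The agent resolved my issue quickly. Great service!" https://api.einstein.ai/v2/language/sentiment
```

The response lists the positive, negative, and neutral classes with a probability for each.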
{"_id":"59de6225666d650024f78fd8","category":"59de6223666d650024f78fa9","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-06-22T17:52:04.412Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"name":"","status":400,"language":"json","code":"{}"}]},"auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"##Prerequisites##\n\n- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded as part of that process. This file contains your private key.\n\n- **Install cURL**—We’ll be using the cURL command line tool throughout the following steps. This tool is installed by default on Linux and OSX. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html)\n\n- **Get a Token**—The Einstein Platform Services APIs use OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.einstein.ai/token) to upload your key file and generate a JWT token. For step-by-step instructions, see [Set Up Authorization](doc:set-up-auth).\n\n\n##Step 1: Define Your Classes and Gather Data##\n\nIn this step, you define the labels that you want the model to output when text is sent into the model for prediction. Then you gather text data for each of those labels, and that text is used to create a model.\n\nThis is typically the most time-consuming part of of the process. To make it easier for you to go through these steps, we provide a case routing [.csv file](http://einstein.ai/text/case_routing_intent.csv) that you can use. \n\nThe labels in the case routing dataset define the intent behind the text. The intent can then be used to route that case to the right department. Those labels are:\n\n- `Billing`\n- `Order Change`\n- `Password Help`\n- `Sales Opportunity`\n- `Shipping Info`\n\n##Step 2: Create the Dataset##\n\nIn this step, you use the data you gathered to create a dataset. In the following command, replace `<TOKEN>` with your JWT token and run the command. 
This command:\n\n- Creates a dataset called `case_routing_intent` from the specified .csv file by accessing the file via a URL\n- Creates five labels as specified in the .csv file\n- Creates 150 examples\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"path=http://einstein.ai/text/case_routing_intent.csv\\\" -F \\\"type=text-intent\\\"  https://api.einstein.ai/v2/language/datasets/upload\",\n      \"language\": \"curl\",\n      \"name\": null\n    }\n  ]\n}\n[/block]\nThis call is asynchronous, so the response looks like this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1004804,\\n  \\\"name\\\": \\\"case_routing_intent.csv\\\",\\n  \\\"createdAt\\\": \\\"2017-06-22T19:31:58.000+0000.\\\",\\n  \\\"updatedAt\\\": \\\"2017-06-22T19:31:58.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": []\\n  },\\n  \\\"totalExamples\\\": 0,\\n  \\\"available\\\": false,\\n  \\\"statusMsg\\\": \\\"UPLOADING\\\",\\n  \\\"type\\\": \\\"text-intent\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nTo verify that the data has been loaded, make a call to get the dataset. Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with ID of the dataset you created.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/language/datasets/<DATASET_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe results look something like this JSON. You know the dataset is ready when `available` is `true` and `statusMsg` is `SUCCEEDED`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1004804,\\n  \\\"name\\\": \\\"case_routing_intent.csv\\\",\\n  \\\"createdAt\\\": \\\"2017-06-22T19:31:58.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-06-22T19:31:59.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": [\\n      {\\n        \\\"id\\\": 23649,\\n        \\\"datasetId\\\": 1004804,\\n        \\\"name\\\": \\\"Order Change\\\",\\n        \\\"numExamples\\\": 26\\n      },\\n      {\\n        \\\"id\\\": 23650,\\n        \\\"datasetId\\\": 1004804,\\n        \\\"name\\\": \\\"Sales Opportunity\\\",\\n        \\\"numExamples\\\": 44\\n      },\\n      {\\n        \\\"id\\\": 23651,\\n        \\\"datasetId\\\": 1004804,\\n        \\\"name\\\": \\\"Billing\\\",\\n        \\\"numExamples\\\": 24\\n      },\\n      {\\n        \\\"id\\\": 23652,\\n        \\\"datasetId\\\": 1004804,\\n        \\\"name\\\": \\\"Shipping Info\\\",\\n        \\\"numExamples\\\": 30\\n      },\\n      {\\n        \\\"id\\\": 23653,\\n        \\\"datasetId\\\": 1004804,\\n        \\\"name\\\": \\\"Password Help\\\",\\n        \\\"numExamples\\\": 26\\n      }\\n    ]\\n  },\\n  \\\"totalExamples\\\": 150,\\n  \\\"totalLabels\\\": 5,\\n  \\\"available\\\": true,\\n  \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n  \\\"type\\\": \\\"text-intent\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Step 3: Train the Dataset to Create the Model##\n\nUse this cURL command to train the dataset and create a model. 
Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with ID of the dataset you created\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=Case Routing Model\\\" -F \\\"datasetId=<DATASET_ID>\\\" https://api.einstein.ai/v2/language/train\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response looks like this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1004804,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Case Routing Model\\\",\\n  \\\"status\\\": \\\"QUEUED\\\",\\n  \\\"progress\\\": 0,\\n  \\\"createdAt\\\": \\\"2017-06-22T19:39:38.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-06-22T19:39:38.000+0000\\\",\\n  \\\"learningRate\\\": 0,\\n  \\\"epochs\\\": 0,\\n  \\\"queuePosition\\\": 1,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"5SXGNLCCOFGTMNMQYEOTAGBPVU\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": null,\\n  \\\"modelType\\\": \\\"text-intent\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nUse the `modelId` to make this call and get the training status. Replace`<TOKEN>` with your JWT token and `MODEL_ID` with the ID of the model you created.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/language/train/<MODEL_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe training status response looks like this JSON. The `status` of `SUCCEEDED` means that the model is ready for predictions.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1004804,\\n  \\\"datasetVersionId\\\": 2473,\\n  \\\"name\\\": \\\"Case Routing Model\\\",\\n  \\\"status\\\": \\\"SUCCEEDED\\\",\\n  \\\"progress\\\": 1,\\n  \\\"createdAt\\\": \\\"2017-06-22T19:39:38.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-06-22T19:43:39.000+0000\\\",\\n  \\\"learningRate\\\": 0,\\n  \\\"epochs\\\": 300,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"5SXGNLCCOFGTMNMQYEOTAGBPVU\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": {\\n    \\\"labels\\\": 5,\\n    \\\"examples\\\": 150,\\n    \\\"totalTime\\\": \\\"00:03:54:159\\\",\\n    \\\"trainingTime\\\": \\\"00:03:53:150\\\",\\n    \\\"earlyStopping\\\": true,\\n    \\\"lastEpochDone\\\": 267,\\n    \\\"modelSaveTime\\\": \\\"00:00:01:561\\\",\\n    \\\"testSplitSize\\\": 11,\\n    \\\"trainSplitSize\\\": 139,\\n    \\\"datasetLoadTime\\\": \\\"00:00:01:008\\\"\\n  },\\n  \\\"modelType\\\": \\\"text-intent\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Step 4: Send Text in for Prediction##\n\nNow your model is ready to go! To test it out, send some text in for prediction. This cURL call takes the `modelId` of the model from which you want to return a prediction and the text string to analyze. Replace `<TOKEN>` with your JWT token and `<MODEL_ID>` with the ID of your model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=<MODEL_ID>\\\" -F \\\"document=how is my package being shipped?\\\"  https://api.einstein.ai/v2/language/intent\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response looks like this JSON. 
The model predicts that the text indicates that the user has a comment or question about shipping, so the model returns `Shipping Info` as the top probability. Your app can then use this information to route the case to the right department or agent.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Shipping Info\\\",\\n      \\\"probability\\\": 0.82365495\\n    },\\n    {\\n      \\\"label\\\": \\\"Sales Opportunity\\\",\\n      \\\"probability\\\": 0.12523715\\n    },\\n    {\\n      \\\"label\\\": \\\"Billing\\\",\\n      \\\"probability\\\": 0.0487557\\n    },\\n    {\\n      \\\"label\\\": \\\"Order Change\\\",\\n      \\\"probability\\\": 0.0021365683\\n    },\\n    {\\n      \\\"label\\\": \\\"Password Help\\\",\\n      \\\"probability\\\": 0.0002156619\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"This quick start shows you how to use the Einstein Intent API to create a model to route support cases. You can then use this model to analyze text and infer what the user wants to accomplish.","slug":"intent-quick-start-custom-classifier","type":"basic","title":"Einstein Intent Quick Start","__v":0,"childrenPages":[]}

#Einstein Intent Quick Start#

This quick start shows you how to use the Einstein Intent API to create a model to route support cases. You can then use this model to analyze text and infer what the user wants to accomplish.

##Prerequisites##

- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded as part of that process. This file contains your private key.

- **Install cURL**—We use the cURL command-line tool throughout the following steps. This tool is installed by default on Linux and macOS. If you don't already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html).

- **Get a Token**—The Einstein Platform Services APIs use the OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.einstein.ai/token) to upload your key file and generate a JWT token. For step-by-step instructions, see [Set Up Authorization](doc:set-up-auth).

##Step 1: Define Your Classes and Gather Data##

In this step, you define the labels that you want the model to output when text is sent into the model for prediction. Then you gather text data for each of those labels, and that text is used to create a model.

This is typically the most time-consuming part of the process. To make it easier for you to go through these steps, we provide a case routing [.csv file](http://einstein.ai/text/case_routing_intent.csv) that you can use.

The labels in the case routing dataset define the intent behind the text. The intent can then be used to route that case to the right department. Those labels are:

- `Billing`
- `Order Change`
- `Password Help`
- `Sales Opportunity`
- `Shipping Info`

##Step 2: Create the Dataset##

In this step, you use the data you gathered to create a dataset. In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:

- Creates a dataset called `case_routing_intent` from the specified .csv file by accessing the file via a URL
- Creates five labels as specified in the .csv file
- Creates 150 examples

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://einstein.ai/text/case_routing_intent.csv" -F "type=text-intent" https://api.einstein.ai/v2/language/datasets/upload
```

This call is asynchronous, so the response looks like this JSON.

```
{
  "id": 1004804,
  "name": "case_routing_intent.csv",
  "createdAt": "2017-06-22T19:31:58.000+0000",
  "updatedAt": "2017-06-22T19:31:58.000+0000",
  "labelSummary": {
    "labels": []
  },
  "totalExamples": 0,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "text-intent",
  "object": "dataset"
}
```

To verify that the data has been loaded, make a call to get the dataset. Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with the ID of the dataset you created.

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/language/datasets/<DATASET_ID>
```

The results look something like this JSON. You know the dataset is ready when `available` is `true` and `statusMsg` is `SUCCEEDED`.

```
{
  "id": 1004804,
  "name": "case_routing_intent.csv",
  "createdAt": "2017-06-22T19:31:58.000+0000",
  "updatedAt": "2017-06-22T19:31:59.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 23649,
        "datasetId": 1004804,
        "name": "Order Change",
        "numExamples": 26
      },
      {
        "id": 23650,
        "datasetId": 1004804,
        "name": "Sales Opportunity",
        "numExamples": 44
      },
      {
        "id": 23651,
        "datasetId": 1004804,
        "name": "Billing",
        "numExamples": 24
      },
      {
        "id": 23652,
        "datasetId": 1004804,
        "name": "Shipping Info",
        "numExamples": 30
      },
      {
        "id": 23653,
        "datasetId": 1004804,
        "name": "Password Help",
        "numExamples": 26
      }
    ]
  },
  "totalExamples": 150,
  "totalLabels": 5,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "text-intent",
  "object": "dataset"
}
```
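If you prefer to upload the .csv file from your local machine rather than referencing it by URL, the upload endpoint also accepts file contents. This is a sketch under the assumption that the `data` form field takes a local multipart file; it expects `case_routing_intent.csv` in your current directory.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@case_routing_intent.csv" -F "type=text-intent" https://api.einstein.ai/v2/language/datasets/upload
```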
[block:code] { "codes": [ { "code": "{\n \"id\": 1004804,\n \"name\": \"case_routing_intent.csv\",\n \"createdAt\": \"2017-06-22T19:31:58.000+0000\",\n \"updatedAt\": \"2017-06-22T19:31:59.000+0000\",\n \"labelSummary\": {\n \"labels\": [\n {\n \"id\": 23649,\n \"datasetId\": 1004804,\n \"name\": \"Order Change\",\n \"numExamples\": 26\n },\n {\n \"id\": 23650,\n \"datasetId\": 1004804,\n \"name\": \"Sales Opportunity\",\n \"numExamples\": 44\n },\n {\n \"id\": 23651,\n \"datasetId\": 1004804,\n \"name\": \"Billing\",\n \"numExamples\": 24\n },\n {\n \"id\": 23652,\n \"datasetId\": 1004804,\n \"name\": \"Shipping Info\",\n \"numExamples\": 30\n },\n {\n \"id\": 23653,\n \"datasetId\": 1004804,\n \"name\": \"Password Help\",\n \"numExamples\": 26\n }\n ]\n },\n \"totalExamples\": 150,\n \"totalLabels\": 5,\n \"available\": true,\n \"statusMsg\": \"SUCCEEDED\",\n \"type\": \"text-intent\",\n \"object\": \"dataset\"\n}", "language": "json" } ] } [/block] ##Step 3: Train the Dataset to Create the Model## Use this cURL command to train the dataset and create a model. Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with ID of the dataset you created [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Case Routing Model\" -F \"datasetId=<DATASET_ID>\" https://api.einstein.ai/v2/language/train", "language": "curl" } ] } [/block] The response looks like this JSON. [block:code] { "codes": [ { "code": "{\n \"datasetId\": 1004804,\n \"datasetVersionId\": 0,\n \"name\": \"Case Routing Model\",\n \"status\": \"QUEUED\",\n \"progress\": 0,\n \"createdAt\": \"2017-06-22T19:39:38.000+0000\",\n \"updatedAt\": \"2017-06-22T19:39:38.000+0000\",\n \"learningRate\": 0,\n \"epochs\": 0,\n \"queuePosition\": 1,\n \"object\": \"training\",\n \"modelId\": \"5SXGNLCCOFGTMNMQYEOTAGBPVU\",\n \"trainParams\": null,\n \"trainStats\": null,\n \"modelType\": \"text-intent\"\n}", "language": "json" } ] } [/block] Use the `modelId` to make this call and get the training status. Replace`<TOKEN>` with your JWT token and `MODEL_ID` with the ID of the model you created. [block:code] { "codes": [ { "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/language/train/<MODEL_ID>", "language": "curl" } ] } [/block] The training status response looks like this JSON. The `status` of `SUCCEEDED` means that the model is ready for predictions. [block:code] { "codes": [ { "code": "{\n \"datasetId\": 1004804,\n \"datasetVersionId\": 2473,\n \"name\": \"Case Routing Model\",\n \"status\": \"SUCCEEDED\",\n \"progress\": 1,\n \"createdAt\": \"2017-06-22T19:39:38.000+0000\",\n \"updatedAt\": \"2017-06-22T19:43:39.000+0000\",\n \"learningRate\": 0,\n \"epochs\": 300,\n \"object\": \"training\",\n \"modelId\": \"5SXGNLCCOFGTMNMQYEOTAGBPVU\",\n \"trainParams\": null,\n \"trainStats\": {\n \"labels\": 5,\n \"examples\": 150,\n \"totalTime\": \"00:03:54:159\",\n \"trainingTime\": \"00:03:53:150\",\n \"earlyStopping\": true,\n \"lastEpochDone\": 267,\n \"modelSaveTime\": \"00:00:01:561\",\n \"testSplitSize\": 11,\n \"trainSplitSize\": 139,\n \"datasetLoadTime\": \"00:00:01:008\"\n },\n \"modelType\": \"text-intent\"\n}", "language": "json" } ] } [/block] ##Step 4: Send Text in for Prediction## Now your model is ready to go! To test it out, send some text in for prediction. 
##Step 4: Send Text in for Prediction##

Now your model is ready to go! To test it out, send some text in for prediction. This cURL call takes the `modelId` of the model from which you want to return a prediction and the text string to analyze. Replace `<TOKEN>` with your JWT token and `<MODEL_ID>` with the ID of your model.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=<MODEL_ID>" -F "document=how is my package being shipped?" https://api.einstein.ai/v2/language/intent
```

The response looks like this JSON. The model predicts that the text indicates that the user has a comment or question about shipping, so the model returns `Shipping Info` as the top probability. Your app can then use this information to route the case to the right department or agent.

```
{
  "probabilities": [
    {
      "label": "Shipping Info",
      "probability": 0.82365495
    },
    {
      "label": "Sales Opportunity",
      "probability": 0.12523715
    },
    {
      "label": "Billing",
      "probability": 0.0487557
    },
    {
      "label": "Order Change",
      "probability": 0.0021365683
    },
    {
      "label": "Password Help",
      "probability": 0.0002156619
    }
  ],
  "object": "predictresponse"
}
```
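In an app, you usually care only about the top label. As a follow-on sketch (again assuming a Unix shell with `jq`, and that the `probabilities` array is sorted highest first, as in the sample response above), you can pipe the prediction straight into a routing decision:

```
# Extract the most probable intent label and hand it to your routing logic.
TOP_INTENT=$(curl -s -X POST -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: multipart/form-data" \
  -F "modelId=$MODEL_ID" \
  -F "document=how is my package being shipped?" \
  https://api.einstein.ai/v2/language/intent | jq -r '.probabilities[0].label')
echo "Route case to: $TOP_INTENT"
```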
##Prerequisites## - **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account. - **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded as part of that process. This file contains your private key. - **Install cURL**—We’ll be using the cURL command line tool throughout the following steps. This tool is installed by default on Linux and OSX. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html) - **Get a Token**—The Einstein Platform Services APIs use OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.einstein.ai/token) to upload your key file and generate a JWT token. For step-by-step instructions, see [Set Up Authorization](doc:set-up-auth). ##Step 1: Define Your Classes and Gather Data## In this step, you define the labels that you want the model to output when text is sent into the model for prediction. Then you gather text data for each of those labels, and that text is used to create a model. This is typically the most time-consuming part of of the process. To make it easier for you to go through these steps, we provide a case routing [.csv file](http://einstein.ai/text/case_routing_intent.csv) that you can use. The labels in the case routing dataset define the intent behind the text. The intent can then be used to route that case to the right department. Those labels are: - `Billing` - `Order Change` - `Password Help` - `Sales Opportunity` - `Shipping Info` ##Step 2: Create the Dataset## In this step, you use the data you gathered to create a dataset. In the following command, replace `<TOKEN>` with your JWT token and run the command. This command: - Creates a dataset called `case_routing_intent` from the specified .csv file by accessing the file via a URL - Creates five labels as specified in the .csv file - Creates 150 examples [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://einstein.ai/text/case_routing_intent.csv\" -F \"type=text-intent\" https://api.einstein.ai/v2/language/datasets/upload", "language": "curl", "name": null } ] } [/block] This call is asynchronous, so the response looks like this JSON. [block:code] { "codes": [ { "code": "{\n \"id\": 1004804,\n \"name\": \"case_routing_intent.csv\",\n \"createdAt\": \"2017-06-22T19:31:58.000+0000.\",\n \"updatedAt\": \"2017-06-22T19:31:58.000+0000\",\n \"labelSummary\": {\n \"labels\": []\n },\n \"totalExamples\": 0,\n \"available\": false,\n \"statusMsg\": \"UPLOADING\",\n \"type\": \"text-intent\",\n \"object\": \"dataset\"\n}", "language": "json" } ] } [/block] To verify that the data has been loaded, make a call to get the dataset. Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with ID of the dataset you created. [block:code] { "codes": [ { "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/language/datasets/<DATASET_ID>", "language": "curl" } ] } [/block] The results look something like this JSON. You know the dataset is ready when `available` is `true` and `statusMsg` is `SUCCEEDED`. 
[block:code] { "codes": [ { "code": "{\n \"id\": 1004804,\n \"name\": \"case_routing_intent.csv\",\n \"createdAt\": \"2017-06-22T19:31:58.000+0000\",\n \"updatedAt\": \"2017-06-22T19:31:59.000+0000\",\n \"labelSummary\": {\n \"labels\": [\n {\n \"id\": 23649,\n \"datasetId\": 1004804,\n \"name\": \"Order Change\",\n \"numExamples\": 26\n },\n {\n \"id\": 23650,\n \"datasetId\": 1004804,\n \"name\": \"Sales Opportunity\",\n \"numExamples\": 44\n },\n {\n \"id\": 23651,\n \"datasetId\": 1004804,\n \"name\": \"Billing\",\n \"numExamples\": 24\n },\n {\n \"id\": 23652,\n \"datasetId\": 1004804,\n \"name\": \"Shipping Info\",\n \"numExamples\": 30\n },\n {\n \"id\": 23653,\n \"datasetId\": 1004804,\n \"name\": \"Password Help\",\n \"numExamples\": 26\n }\n ]\n },\n \"totalExamples\": 150,\n \"totalLabels\": 5,\n \"available\": true,\n \"statusMsg\": \"SUCCEEDED\",\n \"type\": \"text-intent\",\n \"object\": \"dataset\"\n}", "language": "json" } ] } [/block] ##Step 3: Train the Dataset to Create the Model## Use this cURL command to train the dataset and create a model. Replace `<TOKEN>` with your JWT token and `<DATASET_ID>` with ID of the dataset you created [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Case Routing Model\" -F \"datasetId=<DATASET_ID>\" https://api.einstein.ai/v2/language/train", "language": "curl" } ] } [/block] The response looks like this JSON. [block:code] { "codes": [ { "code": "{\n \"datasetId\": 1004804,\n \"datasetVersionId\": 0,\n \"name\": \"Case Routing Model\",\n \"status\": \"QUEUED\",\n \"progress\": 0,\n \"createdAt\": \"2017-06-22T19:39:38.000+0000\",\n \"updatedAt\": \"2017-06-22T19:39:38.000+0000\",\n \"learningRate\": 0,\n \"epochs\": 0,\n \"queuePosition\": 1,\n \"object\": \"training\",\n \"modelId\": \"5SXGNLCCOFGTMNMQYEOTAGBPVU\",\n \"trainParams\": null,\n \"trainStats\": null,\n \"modelType\": \"text-intent\"\n}", "language": "json" } ] } [/block] Use the `modelId` to make this call and get the training status. Replace`<TOKEN>` with your JWT token and `MODEL_ID` with the ID of the model you created. [block:code] { "codes": [ { "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/language/train/<MODEL_ID>", "language": "curl" } ] } [/block] The training status response looks like this JSON. The `status` of `SUCCEEDED` means that the model is ready for predictions. [block:code] { "codes": [ { "code": "{\n \"datasetId\": 1004804,\n \"datasetVersionId\": 2473,\n \"name\": \"Case Routing Model\",\n \"status\": \"SUCCEEDED\",\n \"progress\": 1,\n \"createdAt\": \"2017-06-22T19:39:38.000+0000\",\n \"updatedAt\": \"2017-06-22T19:43:39.000+0000\",\n \"learningRate\": 0,\n \"epochs\": 300,\n \"object\": \"training\",\n \"modelId\": \"5SXGNLCCOFGTMNMQYEOTAGBPVU\",\n \"trainParams\": null,\n \"trainStats\": {\n \"labels\": 5,\n \"examples\": 150,\n \"totalTime\": \"00:03:54:159\",\n \"trainingTime\": \"00:03:53:150\",\n \"earlyStopping\": true,\n \"lastEpochDone\": 267,\n \"modelSaveTime\": \"00:00:01:561\",\n \"testSplitSize\": 11,\n \"trainSplitSize\": 139,\n \"datasetLoadTime\": \"00:00:01:008\"\n },\n \"modelType\": \"text-intent\"\n}", "language": "json" } ] } [/block] ##Step 4: Send Text in for Prediction## Now your model is ready to go! To test it out, send some text in for prediction. 
##Step 4: Send Text in for Prediction##

Now your model is ready to go! To test it out, send some text in for prediction. This cURL call takes the `modelId` of the model from which you want to return a prediction and the text string to analyze. Replace `<TOKEN>` with your JWT token and `<MODEL_ID>` with the ID of your model.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=<MODEL_ID>" -F "document=how is my package being shipped?" https://api.einstein.ai/v2/language/intent
```

The response looks like this JSON. The model predicts that the text is a comment or question about shipping, so it returns `Shipping Info` as the top probability. Your app can then use this information to route the case to the right department or agent.

```
{
  "probabilities": [
    {
      "label": "Shipping Info",
      "probability": 0.82365495
    },
    {
      "label": "Sales Opportunity",
      "probability": 0.12523715
    },
    {
      "label": "Billing",
      "probability": 0.0487557
    },
    {
      "label": "Order Change",
      "probability": 0.0021365683
    },
    {
      "label": "Password Help",
      "probability": 0.0002156619
    }
  ],
  "object": "predictresponse"
}
```
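The response above lists the highest probability first, so a routing app can grab the top label directly. A minimal sketch, again assuming jq:

```
# Send text for prediction and extract only the top intent label (assumes jq).
curl -s -X POST -H "Authorization: Bearer <TOKEN>" \
  -F "modelId=<MODEL_ID>" \
  -F "document=how is my package being shipped?" \
  "https://api.einstein.ai/v2/language/intent" | jq -r '.probabilities[0].label'
```

For the response shown above, this prints `Shipping Info`.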
{"_id":"59de6223666d650024f78fae","category":"5a4e84a9ce8142001caa2918","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-06-01T20:11:50.777Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"##March 23, 2018##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"0-1\": \"**Changes to delete dataset functionality for Einstein Vision and Einstein Language.**\\n\\nThe delete dataset API call no longer returns a 204 status code for a successful dataset deletion. Instead, the API returns a 200 status code, which specifies that a dataset deletion response was successfully received, but the deletion has yet to be completed. See [Delete a Dataset (Vision)](doc:delete-a-dataset) and [Delete a Dataset (Language)](doc:delete-a-lang-dataset). \\n\\nIn addition to the new status code, the call returns a JSON response with a deletion ID. You can use this ID to query the status of the deletion. The response looks similar to this JSON. See [Get Deletion Status (Vision)](doc:get-vision-deletion-status) and [Get Deletion Status (Language)](doc:get-lang-deletion-status).\\n\\n```\\n{\\n    \\\"id\\\": \\\"Z2JTFBF3A7XKIJC5QEJXMO4HSY\\\",\\n    \\\"organizationId\\\": \\\"108\\\",\\n    \\\"type\\\": \\\"DATASET\\\",\\n    \\\"status\\\": \\\"QUEUED\\\",\\n    \\\"progress\\\": 0,\\n    \\\"message\\\": null,\\n    \\\"object\\\": \\\"deletion\\\",\\n    \\\"deletedObjectId\\\": \\\"1003360\\\"\\n}```\\n\\nDeleting a dataset no longer deletes the associated models. You must explicitly delete models. See [Delete a Model (Vision) ](doc:delete-a-vision-model) and [Delete a Model (Language)](doc:delete-a-lang-model).\",\n    \"1-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"1-1\": \"**Get the deletion status with this new API endpoint.** After you delete a dataset or a model, it may take some time for the data to be deleted. To confirm whether a dataset or model has been deleted, call the  `/deletion` endpoint along with the deletion ID. \\n\\n```\\ncurl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/deletion/<DELETION_ID>\\n```\\n\\nValid values are:\\n- `QUEUED`—Object deletion hasn't started.\\n- `RUNNING`—Object deletion is in progress.\\n- `SUCCEEDED`—Object deletion is complete.\\n- `SUCCEEDED_WAITING_FOR_CACHE_REMOVAL`—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system.\\n\\n See [Get Deletion Status (Vision)](doc:get-vision-deletion-status) and [Get Deletion Status (Language)](doc:get-lang-deletion-status).\",\n    \"2-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"2-1\": \"**Delete a model with this new API endpoint.** Now deleting a dataset doesn't delete the models associated with that dataset. Instead, use this new API endpoint to delete a model. 
This cURL call deletes a model.\\n\\n```\\ncurl -X DELETE -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" https://api.einstein.ai/v2/language/models/<MODEL_ID>\\n```\\nThe response looks similar to this JSON.\\n```\\n{\\n    \\\"id\\\": \\\"2GAUJLAG3L5WFQE6GYTOM4O2IM\\\",\\n    \\\"organizationId\\\": \\\"108\\\",\\n    \\\"type\\\": \\\"MODEL\\\",\\n    \\\"status\\\": \\\"QUEUED\\\",\\n    \\\"progress\\\": 0,\\n    \\\"message\\\": null,\\n    \\\"object\\\": \\\"deletion\\\",\\n    \\\"deletedObjectId\\\": \\\"P3NDGNJFA5JG5J7RW54WUZDWGI\\\"\\n}\\n```\\n\\nSee [Delete a Model (Vision) ](doc:delete-a-vision-model) and [Delete a Model (Language)](doc:delete-a-lang-model).\\n\\nAfter you delete a model, use the `id` to check the status of the deletion.\"\n  },\n  \"cols\": 2,\n  \"rows\": 3\n}\n[/block]\n##March 1, 2018##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Reset your private key.** After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. See [Reset Your Private Key](doc:reset-your-private-key).\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##February 8, 2018##\n[block:parameters]\n{\n  \"data\": {\n    \"0-1\": \"**Rate limiting for Einstein Language (which includes Einstein Intent and Einstein Sentiment) and Einstein Object Detection goes into effect today.** The free tier of our service will offer 2,000 free predictions (increased from 1,000 free predictions) each calendar month. See [Rate Limits](doc:rate-limits).\\n\\nWhen you exceed the maximum number of predictions for the current calendar month, you receive an error message when you call one of the prediction resources. To purchase predictions, contact your Salesforce or Heroku AE. \\n\\nA prediction is any POST call to these endpoints:\\n\\n- `/vision/predict` \\n- `/vision/detect`\\n- `/language/intent`\\n- `/language/sentiment`\",\n    \"0-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##January 8, 2018##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"0-1\": \"**On January 15, 2018, the response returned by the `/detect` call is changing.** In the new response JSON, the field `\\\"resultType\\\": \\\"DetectionResult\\\"` is removed and the field `\\\"object\\\": \\\"predictresponse\\\"` is added. 
\\n\\nThe new response looks like this JSON.\\n```\\n{\\n \\\"probabilities\\\": [\\n   {\\n     \\\"label\\\": \\\"Alpine - Corn Flakes\\\",\\n     \\\"probability\\\": 0.97197026,\\n     \\\"boundingBox\\\": {\\n       \\\"minX\\\": 646,\\n       \\\"minY\\\": 865,\\n       \\\"maxX\\\": 885,\\n       \\\"maxY\\\": 1430\\n     }\\n   },\\n   ...\\n   {\\n     \\\"label\\\": \\\"Alpine - Bran Cereal\\\",\\n     \\\"probability\\\": 0.57089806,\\n     \\\"boundingBox\\\": {\\n       \\\"minX\\\": 921,\\n       \\\"minY\\\": 694,\\n       \\\"maxX\\\": 1257,\\n       \\\"maxY\\\": 1304\\n     }\\n   }\\n ],\\n \\\"object\\\": \\\"predictresponse\\\"\\n}\\n```\\n\\nSee [Detection with Image File](doc:detection-with-image-file) and [Detection with Image URL](doc:detection-with-image-url).\",\n    \"1-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"1-1\": \"**Generate an access token using a refresh token.** Instead of using your private key to generate an access token, you can generate a refresh token and use that to generate an access token. A refresh token is a JWT token that never expires. \\n\\nA refresh token is useful in cases where an application is offline and doesn't have access to they key, such as mobile apps. See [Generate an OAuth Access Token](doc:generate-an-oauth-access-token).\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n##December 19, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Einstein Sentiment, Einstein Intent, and Einstein Object Detection now generally available.** Einstein Vision and Language make it possible to streamline your workflows across sales, service, and marketing so that you can do things like: visual product search, product identification, intelligent case routing, and automated planogram analysis.\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##December 6, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Add feedback to object detection models.** If your object detection model misclassifies images, you can use the feedback API to add those images, along with their correct labels, to the dataset. After you add feedback to the dataset you can:\\n- Train the dataset to create a new model\\n- Retrain the dataset to update the model and keep the same model ID\\n\\nSee [Add Feedback to a Dataset](doc:add-feedback-to-dataset) and [Create Feedback Examples From a Zip File](doc:create-feedback-examples-from-a-zip-file).\",\n    \"1-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"1-1\": \"**Model training must be complete before you can delete a dataset.** If a dataset is being trained and has an associated model with a status of `QUEUED` or `RUNNING`, you must wait until the training is complete before you can delete the dataset.\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n##October 27, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"0-1\": \"**JWT token is now longer.** The JWT tokens you use to call the API are now longer. You see this change whether you use the token [web page](https://api.einstein.ai/token) to get a token or whether you generate the token in code by calling the `/oauth2/token` endpoint. 
See [Generate an OAuth Token](doc:generate-an-oauth-token).\",\n    \"1-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"1-1\": \"**Get learning curve metrics for Einstein Language models.** Use this new API call to get the model metrics for each epoch (training iteration) performed to create a sentiment or intent model. See [Get Model Learning Curve](doc:get-lang-model-learning-curve).\",\n    \"2-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"2-1\": \"**Use the precision-recall curve metrics to understand your Einstein Language model.** When you get the model metrics, the API now returns the precision-recall curve for your model. These metrics help you understand how well the model performs. See [Get Model Metrics](doc:get-lang-model-metrics).\"\n  },\n  \"cols\": 2,\n  \"rows\": 3\n}\n[/block]\n##October 16, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Einstein Object Detection now available.** Use this API to train models to recognize and count multiple distinct objects within an image. This API is part of Einstein Vision, so you use the same calls as you do for image and multi-label models. But the data you use to create the models is different. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).\",\n    \"1-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"1-1\": \"**New Trailhead module: Einstein Intent API Basics.** Build a deep-learning custom model to categorize text and automate business processes. See [Einstein Intent API Basics](https://trailhead.salesforce.com/modules/einstein_intent_basics).\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n##October 3, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Get all examples for a label.** You can now return all examples for a single label by passing in the label ID. This API call is available in both Einstein Vision and Einstein Language. For Einstein Vision, see [Get All Examples for Label](doc:get-all-vision-examples-for-label). For Einstein Language, see [Get All Examples for Label](doc:get-all-lang-examples-for-label).\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##July 31, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Pass parameters as JSON when classifying text using the Einstein Language APIs.** You can now pass text in JSON when calling the `/intent` and `/sentiment` resources. See [Prediction for Intent](doc:prediction-intent) and [Prediction for Sentiment](doc:prediction-sentiment).\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##July 27, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"0-1\": \"**Einstein Image Classification API limits updated. 
**\\n\\n- The image file name maximum length increased from 100 to 150 characters.\\n\\n- There's no longer a maximum number of examples you can create using the [Create an Example](doc:create-an-example) call.\",\n    \"1-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"1-1\": \"**Add single examples to a dataset.** You can use the  [Create an Example](doc:create-an-example) call to add an example to a dataset that was created from a .zip file.\",\n    \"2-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"2-1\": \"**Unicode characters now supported in all APIs.** These elements can now contain unicode characters:\\n- .zip file name\\n- directory or label name\\n- file or example name\\n- dataset name\",\n    \"3-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"3-1\": \"**Default split ratio changed.** In the Einstein Language APIs, the default split ratio used during training is now 0.8. With this split ratio, 80% of the data is used to create the model and 20% is used to test the model.\",\n    \"4-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"4-1\": \"**The minimum number of examples changed in the Einstein Language APIs.**\\n- A dataset with a type of `text-intent` must have at least five examples per label.\\n- A dataset with a type of `text-sentiment` must have at least five examples per label.\"\n  },\n  \"cols\": 2,\n  \"rows\": 5\n}\n[/block]\n##June 28, 2017##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Einstein Language (Beta) released.** Einstein Language includes two APIs that you can use to unlock powerful insights within text.\\n\\n- Einstein Intent (Beta)—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish.\\n\\n- Einstein Sentiment (Beta)—Classify the sentiment of text into positive, negative, and neutral classes.\\n\\nSee [Introduction to Salesforce Einstein Language](doc:intro-to-einstein-language).\"\n  },\n  \"cols\": 2,\n  \"rows\": 1\n}\n[/block]\n##June 27, 2017##\n\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"0-1\": \"**Einstein Image Classification API version 2.0 released.** This table lists all the changes to the API in the new version. Einstein Vision is now the umbrella term for all of the image recognition APIs. The Einstein Vision API is now called the Image Classification API.\\n\\nUse the version selector at the top of this page to switch to the documentation for another version.\",\n    \"2-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"2-1\": \"**Optimize your model using feedback.** Use the feedback API to add a misclassified image with the correct label to the dataset from which the model was created. \\n- Use the new API call to add a feedback example. See [Create a Feedback Example](doc:create-a-feedback-example).\\n\\n- The call to get all examples now has three new query parameters: `feedback`, `upload`, and `all`. Use these query parameters to refine the examples that are returned. See [Get All Examples](doc:get-all-examples).\\n\\n- The call to train a dataset and create a model now takes the `trainParams` object `{\\\"withFeedback\\\": true}`. This option specifies that the feedback examples are used during the training process. By default, the feedback examples aren't used during training if you don't specify this value. 
See [Train a Dataset](doc:train-a-dataset).\",\n    \"1-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"1-1\": \"**The API now uses the https://<span></span>api.einstein.ai endpoint.** When you access the Einstein Platform Services APIs, you can now use this new endpoint. For example, the endpoint to get a dataset is `https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>`. \\n\\nThe old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.\",\n    \"3-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"3-1\": \"**Retrain a dataset and keep the same model ID.** There's now a call to retrain a dataset, for example, if you added new data to the dataset or you want to include feedback data. Retraining a dataset lets you maintain the model ID which is ideal if you reference the model in production code. See [Retrain a Dataset](doc:retrain-a-dataset).\",\n    \"4-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"4-1\": \"**Multi-label datasets are available.** The new dataset type `image-multi-label` enables you to specify that the dataset contains multi-label data. Any models you create from this dataset have a `modelType` of `image-multi-label`. See [Determine the Model Type You Need](doc:determine-model-type).\",\n    \"9-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"9-1\": \"**Dataset type is required when you create a dataset.** When you call the API to create a dataset, you must pass in the `type` request parameter to specify the type of dataset. Valid values are:\\n\\n- `image`—Standard classification dataset. Returns the single class into which an image falls.\\n\\n- `image-multi-label`—Multi-label classification dataset. Returns multiple classes into which an image falls.\\n\\nSee [Determine the Model Type You Need](doc:determine-model-type).\",\n    \"5-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"5-1\": \"**There are two new calls to get the model metrics and the learning curve for a multi-label model.** See [Get Multi-Label Model Metrics](doc:get-multi-label-model-metrics) and [Get Multi-Label Model Learning Curve](doc:get-multi-label-model-learning-curve).\",\n    \"11-0\": \"**<span style=\\\"color:red\\\">DEPRECATED</span>**\",\n    \"11-1\": \"The following calls have been removed from the Einstein Image Classification API in version 2.0.\\n\\n- Create a label. You must pass in the labels when you create the dataset. `/vision/datasets/<DATASET_ID>/labels`\\n\\n- Get a label. `vision/datasets/<DATASET_ID>/labels/<LABEL_ID>`\\n\\n- Get an example. `/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>`\\n\\n- Delete an example. `/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>`\",\n    \"10-0\": \"**<span style=\\\"color:gold\\\">CHANGED</span>**\",\n    \"10-1\": \"**Getting all datasets returns a maximum of 25 datasets.** If you omit the `count` parameter, the call to get all datasets returns 25. If you set the `count` query parameter to a value greater than 25, the call returns 25 datasets. See [Get All Datasets](doc:get-all-datasets).\",\n    \"6-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"6-1\": \"**Get up and running with multi-label predictions using our prebuilt multi-label model.** This multi-label model is used to classify a variety of objects. 
See [Use the Prebuilt Models](doc:use-pre-built-models).\",\n    \"7-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"7-1\": \"**Use the `numResults` parameter to limit prediction results.** The `numResults` optional request parameter lets you specify the number of labels and probabilities to return when sending in data for prediction. This parameter can be used with both Einstein Vision and Einstein Language.\",\n    \"8-0\": \"**<span style=\\\"color:green\\\">NEW</span>**\",\n    \"8-1\": \"**Use global datasets to include additional data in your model.** Global datasets are public datasets that Salesforce provides. When you train a dataset to create a model, you can include the data from a global dataset. One way you can use global datasets is to create a negative class in your model. See [Use Global Datasets](doc:use-global-datasets).\"\n  },\n  \"cols\": 2,\n  \"rows\": 12\n}\n[/block]","excerpt":"Find out what's new, changed, or deprecated in the Einstein Platform Services APIs.","slug":"release-notes-einstein-platform-services","type":"basic","title":"Release Notes","__v":0,"childrenPages":[]}

Release Notes

Find out what's new, changed, or deprecated in the Einstein Platform Services APIs.

##March 23, 2018##

**<span style="color:gold">CHANGED</span>** **Changes to delete dataset functionality for Einstein Vision and Einstein Language.**

The delete dataset API call no longer returns a 204 status code for a successful dataset deletion. Instead, the API returns a 200 status code, which indicates that the deletion request was received but the deletion hasn't yet completed. See [Delete a Dataset (Vision)](doc:delete-a-dataset) and [Delete a Dataset (Language)](doc:delete-a-lang-dataset).

In addition to the new status code, the call returns a JSON response with a deletion ID. You can use this ID to query the status of the deletion. The response looks similar to this JSON. See [Get Deletion Status (Vision)](doc:get-vision-deletion-status) and [Get Deletion Status (Language)](doc:get-lang-deletion-status).

```
{
    "id": "Z2JTFBF3A7XKIJC5QEJXMO4HSY",
    "organizationId": "108",
    "type": "DATASET",
    "status": "QUEUED",
    "progress": 0,
    "message": null,
    "object": "deletion",
    "deletedObjectId": "1003360"
}
```

Deleting a dataset no longer deletes the associated models. You must explicitly delete models. See [Delete a Model (Vision)](doc:delete-a-vision-model) and [Delete a Model (Language)](doc:delete-a-lang-model).

**<span style="color:green">NEW</span>** **Get the deletion status with this new API endpoint.** After you delete a dataset or a model, it may take some time for the data to be deleted. To confirm whether a dataset or model has been deleted, call the `/deletion` endpoint with the deletion ID.

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/deletion/<DELETION_ID>
```

Valid status values are:
- `QUEUED`—Object deletion hasn't started.
- `RUNNING`—Object deletion is in progress.
- `SUCCEEDED`—Object deletion is complete.
- `SUCCEEDED_WAITING_FOR_CACHE_REMOVAL`—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system.

See [Get Deletion Status (Vision)](doc:get-vision-deletion-status) and [Get Deletion Status (Language)](doc:get-lang-deletion-status).

**<span style="color:green">NEW</span>** **Delete a model with this new API endpoint.** Deleting a dataset no longer deletes the models associated with that dataset. Instead, use this new API endpoint to delete a model. This cURL call deletes a model.

```
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" https://api.einstein.ai/v2/language/models/<MODEL_ID>
```

The response looks similar to this JSON.

```
{
    "id": "2GAUJLAG3L5WFQE6GYTOM4O2IM",
    "organizationId": "108",
    "type": "MODEL",
    "status": "QUEUED",
    "progress": 0,
    "message": null,
    "object": "deletion",
    "deletedObjectId": "P3NDGNJFA5JG5J7RW54WUZDWGI"
}
```

See [Delete a Model (Vision)](doc:delete-a-vision-model) and [Delete a Model (Language)](doc:delete-a-lang-model).

After you delete a model, use the `id` to check the status of the deletion.

##March 1, 2018##

**<span style="color:green">NEW</span>** **Reset your private key.** After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. See [Reset Your Private Key](doc:reset-your-private-key).
##February 8, 2018##

**<span style="color:gold">CHANGED</span>** **Rate limiting for Einstein Language (which includes Einstein Intent and Einstein Sentiment) and Einstein Object Detection goes into effect today.** The free tier of our service offers 2,000 free predictions (increased from 1,000 free predictions) each calendar month. See [Rate Limits](doc:rate-limits).

When you exceed the maximum number of predictions for the current calendar month, you receive an error message when you call one of the prediction resources. To purchase predictions, contact your Salesforce or Heroku AE.

A prediction is any POST call to these endpoints:

- `/vision/predict`
- `/vision/detect`
- `/language/intent`
- `/language/sentiment`

##January 8, 2018##

**<span style="color:gold">CHANGED</span>** **On January 15, 2018, the response returned by the `/detect` call is changing.** In the new response JSON, the field `"resultType": "DetectionResult"` is removed and the field `"object": "predictresponse"` is added.

The new response looks like this JSON.

```
{
  "probabilities": [
    {
      "label": "Alpine - Corn Flakes",
      "probability": 0.97197026,
      "boundingBox": {
        "minX": 646,
        "minY": 865,
        "maxX": 885,
        "maxY": 1430
      }
    },
    ...
    {
      "label": "Alpine - Bran Cereal",
      "probability": 0.57089806,
      "boundingBox": {
        "minX": 921,
        "minY": 694,
        "maxX": 1257,
        "maxY": 1304
      }
    }
  ],
  "object": "predictresponse"
}
```

See [Detection with Image File](doc:detection-with-image-file) and [Detection with Image URL](doc:detection-with-image-url).

**<span style="color:green">NEW</span>** **Generate an access token using a refresh token.** Instead of using your private key to generate an access token, you can generate a refresh token and use that to generate an access token. A refresh token is a JWT token that never expires.

A refresh token is useful in cases where an application is offline and doesn't have access to the key, such as mobile apps. See [Generate an OAuth Access Token](doc:generate-an-oauth-access-token).
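The entry above doesn't spell out the exchange call, so treat this as a sketch only: the parameter names follow the standard OAuth 2.0 refresh flow and are an assumption here. Confirm the exact contract in [Generate an OAuth Access Token](doc:generate-an-oauth-access-token).

```
# Hedged sketch: exchange a refresh token for a short-lived access token.
# grant_type and refresh_token follow standard OAuth 2.0 naming and are
# assumptions -- confirm them in Generate an OAuth Access Token.
curl -s -X POST -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=refresh_token&refresh_token=<REFRESH_TOKEN>" \
  https://api.einstein.ai/v2/oauth2/token
```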
##December 19, 2017##

**<span style="color:green">NEW</span>** **Einstein Sentiment, Einstein Intent, and Einstein Object Detection are now generally available.** Einstein Vision and Language make it possible to streamline your workflows across sales, service, and marketing so that you can do things like visual product search, product identification, intelligent case routing, and automated planogram analysis.

##December 6, 2017##

**<span style="color:green">NEW</span>** **Add feedback to object detection models.** If your object detection model misclassifies images, you can use the feedback API to add those images, along with their correct labels, to the dataset. After you add feedback to the dataset, you can:
- Train the dataset to create a new model
- Retrain the dataset to update the model and keep the same model ID

See [Add Feedback to a Dataset](doc:add-feedback-to-dataset) and [Create Feedback Examples From a Zip File](doc:create-feedback-examples-from-a-zip-file).

**<span style="color:gold">CHANGED</span>** **Model training must be complete before you can delete a dataset.** If a dataset is being trained and has an associated model with a status of `QUEUED` or `RUNNING`, you must wait until the training is complete before you can delete the dataset.

##October 27, 2017##

**<span style="color:gold">CHANGED</span>** **JWT tokens are now longer.** The JWT tokens you use to call the API are now longer. You see this change whether you use the token [web page](https://api.einstein.ai/token) to get a token or you generate the token in code by calling the `/oauth2/token` endpoint. See [Generate an OAuth Token](doc:generate-an-oauth-token).

**<span style="color:green">NEW</span>** **Get learning curve metrics for Einstein Language models.** Use this new API call to get the model metrics for each epoch (training iteration) performed to create a sentiment or intent model. See [Get Model Learning Curve](doc:get-lang-model-learning-curve).

**<span style="color:green">NEW</span>** **Use the precision-recall curve metrics to understand your Einstein Language model.** When you get the model metrics, the API now returns the precision-recall curve for your model. These metrics help you understand how well the model performs. See [Get Model Metrics](doc:get-lang-model-metrics).

##October 16, 2017##

**<span style="color:green">NEW</span>** **Einstein Object Detection is now available.** Use this API to train models to recognize and count multiple distinct objects within an image. This API is part of Einstein Vision, so you use the same calls as you do for image and multi-label models, but the data you use to create the models is different. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).

**<span style="color:green">NEW</span>** **New Trailhead module: Einstein Intent API Basics.** Build a deep-learning custom model to categorize text and automate business processes. See [Einstein Intent API Basics](https://trailhead.salesforce.com/modules/einstein_intent_basics).

##October 3, 2017##

**<span style="color:green">NEW</span>** **Get all examples for a label.** You can now return all examples for a single label by passing in the label ID. This API call is available in both Einstein Vision and Einstein Language. For Einstein Vision, see [Get All Examples for Label](doc:get-all-vision-examples-for-label). For Einstein Language, see [Get All Examples for Label](doc:get-all-lang-examples-for-label).

##July 31, 2017##

**<span style="color:green">NEW</span>** **Pass parameters as JSON when classifying text using the Einstein Language APIs.** You can now pass text in JSON when calling the `/intent` and `/sentiment` resources. See [Prediction for Intent](doc:prediction-intent) and [Prediction for Sentiment](doc:prediction-sentiment).
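As a sketch of what a JSON request to `/intent` might look like: the field names here are assumed to mirror the multipart form parameters (`modelId`, `document`) used elsewhere in these docs; confirm them in [Prediction for Intent](doc:prediction-intent).

```
# Hedged sketch: send the intent prediction request as JSON instead of
# multipart form data. Field names are assumed to mirror the form parameters.
curl -s -X POST -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"modelId": "<MODEL_ID>", "document": "how is my package being shipped?"}' \
  https://api.einstein.ai/v2/language/intent
```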
##July 27, 2017##

**<span style="color:gold">CHANGED</span>** **Einstein Image Classification API limits updated.**

- The image file name maximum length increased from 100 to 150 characters.
- There's no longer a maximum number of examples you can create using the [Create an Example](doc:create-an-example) call.

**<span style="color:gold">CHANGED</span>** **Add single examples to a dataset.** You can use the [Create an Example](doc:create-an-example) call to add an example to a dataset that was created from a .zip file.

**<span style="color:gold">CHANGED</span>** **Unicode characters now supported in all APIs.** These elements can now contain Unicode characters:
- .zip file name
- directory or label name
- file or example name
- dataset name

**<span style="color:gold">CHANGED</span>** **Default split ratio changed.** In the Einstein Language APIs, the default split ratio used during training is now 0.8. With this split ratio, 80% of the data is used to create the model and 20% is used to test the model.

**<span style="color:gold">CHANGED</span>** **The minimum number of examples changed in the Einstein Language APIs.**
- A dataset with a type of `text-intent` must have at least five examples per label.
- A dataset with a type of `text-sentiment` must have at least five examples per label.

##June 28, 2017##

**<span style="color:green">NEW</span>** **Einstein Language (Beta) released.** Einstein Language includes two APIs that you can use to unlock powerful insights within text.

- Einstein Intent (Beta)—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish.
- Einstein Sentiment (Beta)—Classify the sentiment of text into positive, negative, and neutral classes.

See [Introduction to Salesforce Einstein Language](doc:intro-to-einstein-language).

##June 27, 2017##

**<span style="color:green">NEW</span>** **Einstein Image Classification API version 2.0 released.** This list covers all the changes to the API in the new version. Einstein Vision is now the umbrella term for all of the image recognition APIs. The Einstein Vision API is now called the Image Classification API.

Use the version selector at the top of this page to switch to the documentation for another version.

**<span style="color:green">NEW</span>** **The API now uses the https://<span></span>api.einstein.ai endpoint.** When you access the Einstein Platform Services APIs, you can now use this new endpoint. For example, the endpoint to get a dataset is `https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>`.

The old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.

**<span style="color:green">NEW</span>** **Optimize your model using feedback.** Use the feedback API to add a misclassified image with the correct label to the dataset from which the model was created.
- Use the new API call to add a feedback example. See [Create a Feedback Example](doc:create-a-feedback-example).
- The call to get all examples now has three new query parameters: `feedback`, `upload`, and `all`. Use these query parameters to refine the examples that are returned. See [Get All Examples](doc:get-all-examples).
- The call to train a dataset and create a model now takes the `trainParams` object `{"withFeedback": true}`. This option specifies that the feedback examples are used during the training process. If you don't specify this value, the feedback examples aren't used during training. See [Train a Dataset](doc:train-a-dataset).
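For example, a train call that opts in to feedback examples might look like this sketch. Only the `trainParams` value comes from the entry above; the model name is a placeholder, and the endpoint and other parameters are assumed to follow the standard train call described in [Train a Dataset](doc:train-a-dataset).

```
# Hedged sketch: train a vision dataset with feedback examples included.
# Only trainParams is specific to this entry; the name value is a
# placeholder, and the rest is assumed to mirror a normal train call.
curl -s -X POST -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: multipart/form-data" \
  -F "name=My Model v2" \
  -F "datasetId=<DATASET_ID>" \
  -F 'trainParams={"withFeedback": true}' \
  https://api.einstein.ai/v2/vision/train
```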
**<span style="color:green">NEW</span>** **Retrain a dataset and keep the same model ID.** There's now a call to retrain a dataset, for example, if you added new data to the dataset or you want to include feedback data. Retraining a dataset lets you maintain the model ID, which is ideal if you reference the model in production code. See [Retrain a Dataset](doc:retrain-a-dataset).

**<span style="color:green">NEW</span>** **Multi-label datasets are available.** The new dataset type `image-multi-label` enables you to specify that the dataset contains multi-label data. Any models you create from this dataset have a `modelType` of `image-multi-label`. See [Determine the Model Type You Need](doc:determine-model-type).

**<span style="color:green">NEW</span>** **There are two new calls to get the model metrics and the learning curve for a multi-label model.** See [Get Multi-Label Model Metrics](doc:get-multi-label-model-metrics) and [Get Multi-Label Model Learning Curve](doc:get-multi-label-model-learning-curve).

**<span style="color:green">NEW</span>** **Get up and running with multi-label predictions using our prebuilt multi-label model.** This multi-label model is used to classify a variety of objects. See [Use the Prebuilt Models](doc:use-pre-built-models).

**<span style="color:green">NEW</span>** **Use the `numResults` parameter to limit prediction results.** The `numResults` optional request parameter lets you specify the number of labels and probabilities to return when sending in data for prediction. This parameter can be used with both Einstein Vision and Einstein Language.
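For instance, to cap an intent prediction at the top three labels, add `numResults` to the call. A sketch based on the intent call shown earlier in this guide:

```
# Hedged sketch: return only the top three labels and probabilities.
curl -s -X POST -H "Authorization: Bearer <TOKEN>" \
  -F "modelId=<MODEL_ID>" \
  -F "numResults=3" \
  -F "document=how is my package being shipped?" \
  https://api.einstein.ai/v2/language/intent
```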
**<span style="color:green">NEW</span>** **Use global datasets to include additional data in your model.** Global datasets are public datasets that Salesforce provides. When you train a dataset to create a model, you can include the data from a global dataset. One way you can use global datasets is to create a negative class in your model. See [Use Global Datasets](doc:use-global-datasets).

**<span style="color:gold">CHANGED</span>** **Dataset type is required when you create a dataset.** When you call the API to create a dataset, you must pass in the `type` request parameter to specify the type of dataset (see the sketch after this list). Valid values are:

- `image`—Standard classification dataset. Returns the single class into which an image falls.
- `image-multi-label`—Multi-label classification dataset. Returns multiple classes into which an image falls.

See [Determine the Model Type You Need](doc:determine-model-type).

**<span style="color:gold">CHANGED</span>** **Getting all datasets returns a maximum of 25 datasets.** If you omit the `count` parameter, the call to get all datasets returns 25 datasets. If you set the `count` query parameter to a value greater than 25, the call still returns 25 datasets. See [Get All Datasets](doc:get-all-datasets).

**<span style="color:red">DEPRECATED</span>** The following calls have been removed from the Einstein Image Classification API in version 2.0.

- Create a label. You must pass in the labels when you create the dataset. `/vision/datasets/<DATASET_ID>/labels`
- Get a label. `/vision/datasets/<DATASET_ID>/labels/<LABEL_ID>`
- Get an example. `/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>`
- Delete an example. `/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>`
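Here's the sketch referenced in the dataset type entry above: a create-dataset call that passes the required `type` parameter. The zip URL is a placeholder, and the endpoint and `path` parameter are assumed to mirror the upload calls used elsewhere in these docs.

```
# Hedged sketch: create a standard image classification dataset from a zip
# file hosted at a URL. The type value comes from the entry above; the
# endpoint, path parameter, and zip URL are assumptions/placeholders.
curl -s -X POST -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: multipart/form-data" \
  -F "path=https://example.com/mountainvsbeach.zip" \
  -F "type=image" \
  https://api.einstein.ai/v2/vision/datasets/upload
```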
{"_id":"59de6225666d650024f78fd0","category":"59de6223666d650024f78f9d","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":["5a6389d9cbaa37001cce4f04"],"next":{"pages":[],"description":""},"createdAt":"2016-10-11T16:44:42.625Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"- [Get an account](https://metamind.readme.io/docs/what-you-need-to-call-api#section-get-an-einstein-platform-account)\n- [Generate a token](https://metamind.readme.io/docs/what-you-need-to-call-api#section-generate-a-token)\n\n##Get an Einstein Platform Services Account##\n\n1. From a browser, navigate to the [sign up page](https://api.einstein.ai/signup).\n\n2. Click **Sign Up Using Salesforce**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/26cb34e-sign_up.png\",\n        \"sign_up.png\",\n        444,\n        458,\n        \"#7b649b\"\n      ]\n    }\n  ]\n}\n[/block]\n3. On the Salesforce login page, type your username and password, and click **Log In**.  If you’re already logged in to Salesforce, you won’t see this page and you can skip to Step 4.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/037038d-log_in.png\",\n        \"log_in.png\",\n        439,\n        602,\n        \"#0d84d3\"\n      ]\n    }\n  ]\n}\n[/block]\n4. Click **Allow** so the page can access basic information, such as your email address, and perform requests.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/e6ca8ef-allow_access.png\",\n        \"allow_access.png\",\n        428,\n        485,\n        \"#f3f2f9\"\n      ]\n    }\n  ]\n}\n[/block]\n5. On the activation page:\n - If you're using Chrome, click **Download Key** to save the key locally. The key file is named `einstein_platform.pem`. \n\n - If you're using any other browser, cut and paste your key from the browser into a text file and save it as `einstein_platform.pem`.\n\nMake a note of where you save the key file because you'll need it to authenticate when you call the API.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Caution\",\n  \"body\": \"The **Download Key** button is only supported in the most recent version of Google Chrome <sup>TM</sup>. If you're using a different browser, you can cut and paste your key into a text file and save it as `einstein_platform.pem`.\"\n}\n[/block]\n##Generate a Token##\n\nEach API call must contain a valid OAuth token in the request header. To generate a token, you create a JWT payload, sign the payload with your private key, and then call the API to get the token. \n\nTo get a token without code, see [Set Up Authorization](doc:set-up-auth). If you're generating a token in code, the sequence of steps is the same, but the details will vary depending on the programming language.\n\nBy default, the Einstein Platform Services APIs use TLS (Transport Layer Security) version 1.1 and require secure connections (HTTPS) for all communication.","excerpt":"Before you can access the Einstein Platform Services APIs, you first create an account and download your key. Then you use your key to generate an OAuth token. 
You can use your key to access both the Einstein Vision and Einstein Language APIs.","slug":"what-you-need-to-call-api","type":"basic","title":"What You Need to Call the API","__v":1,"childrenPages":[]}

What You Need to Call the API

Before you can access the Einstein Platform Services APIs, you first create an account and download your key. Then you use your key to generate an OAuth token. You can use your key to access both the Einstein Vision and Einstein Language APIs.

- [Get an account](https://metamind.readme.io/docs/what-you-need-to-call-api#section-get-an-einstein-platform-account)
- [Generate a token](https://metamind.readme.io/docs/what-you-need-to-call-api#section-generate-a-token)

##Get an Einstein Platform Services Account##

1. From a browser, navigate to the [sign up page](https://api.einstein.ai/signup).

2. Click **Sign Up Using Salesforce**.

![sign_up.png](https://files.readme.io/26cb34e-sign_up.png)

3. On the Salesforce login page, type your username and password, and click **Log In**. If you’re already logged in to Salesforce, you won’t see this page and you can skip to Step 4.

![log_in.png](https://files.readme.io/037038d-log_in.png)

4. Click **Allow** so the page can access basic information, such as your email address, and perform requests.

![allow_access.png](https://files.readme.io/e6ca8ef-allow_access.png)

5. On the activation page:
 - If you're using Chrome, click **Download Key** to save the key locally. The key file is named `einstein_platform.pem`.
 - If you're using any other browser, cut and paste your key from the browser into a text file and save it as `einstein_platform.pem`.

Make a note of where you save the key file because you'll need it to authenticate when you call the API.

> **Caution:** The **Download Key** button is only supported in the most recent version of Google Chrome™. If you're using a different browser, cut and paste your key into a text file and save it as `einstein_platform.pem`.

##Generate a Token##

Each API call must contain a valid OAuth token in the request header. To generate a token, you create a JWT payload, sign the payload with your private key, and then call the API to get the token.

To get a token without code, see [Set Up Authorization](doc:set-up-auth). If you're generating a token in code, the sequence of steps is the same, but the details vary depending on the programming language.

By default, the Einstein Platform Services APIs use TLS (Transport Layer Security) version 1.1 and require secure connections (HTTPS) for all communication.
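The token request itself follows the standard OAuth 2.0 JWT bearer flow. As a minimal sketch, assuming `<ASSERTION>` stands in for the JWT payload you signed with your private key, the exchange looks like this:

```curl
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION>" https://api.einstein.ai/v2/oauth2/token
```

The response contains the token that you pass in the `Authorization: Bearer` header of subsequent calls.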
{"_id":"5a983f768df416006d327167","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78f9d","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-03-01T17:59:18.925Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"1.  Navigate to the reset page at [https://api.einstein.ai/reset](https://api.einstein.ai/reset).\n\n2.  Type the email address associated with your Einstein Platform Services account. Note that this is the account email address and not your Salesforce username.\n\n3.  Click **Reset My Private Key**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/1d05bce-reset_private_key.png\",\n        \"reset_private_key.png\",\n        428,\n        499,\n        \"#d9cdae\"\n      ],\n      \"sizing\": \"smart\"\n    }\n  ]\n}\n[/block]\n4.  You'll receive an email with a link that takes you to a page where you can download your key.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Caution\",\n  \"body\": \"The **Download Key** button is only supported in the most recent version of Google Chrome <sup>TM</sup>. If you're using a different browser, you can cut and paste your key into a text file and save it as `einstein_platform.pem`.\"\n}\n[/block]","excerpt":"After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. \n\nResetting your key generates a new key, but you still have access to your datasets and models. However, your previous key will no longer work.","slug":"reset-your-private-key","type":"basic","title":"Reset Your Private Key","__v":0,"parentDoc":null,"childrenPages":[]}

Reset Your Private Key

After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. Resetting your key generates a new key, but you still have access to your datasets and models. However, your previous key will no longer work.

1. Navigate to the reset page at [https://api.einstein.ai/reset](https://api.einstein.ai/reset).

2. Type the email address associated with your Einstein Platform Services account. Note that this is the account email address and not your Salesforce username.

3. Click **Reset My Private Key**.

![reset_private_key.png](https://files.readme.io/1d05bce-reset_private_key.png)

4. You'll receive an email with a link that takes you to a page where you can download your key.

> **Caution:** The **Download Key** button is only supported in the most recent version of Google Chrome™. If you're using a different browser, cut and paste your key into a text file and save it as `einstein_platform.pem`.
{"_id":"59de6225666d650024f78fc9","category":"59de6223666d650024f78f9e","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-29T21:49:53.249Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"To help you get up and running quickly, you’ll step through integrating your Salesforce org with the Einstein Image Classification API. First, you create Apex classes that call the API. Then you create a Visualforce page to tie it all together.\n\nIf you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.","excerpt":"","slug":"apex_qs_scenario","type":"basic","title":"Scenario","__v":0,"childrenPages":[]}

Scenario


To help you get up and running quickly, you’ll step through integrating your Salesforce org with the Einstein Image Classification API. First, you create Apex classes that call the API. Then you create a Visualforce page to tie it all together.

If you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.
{"_id":"59de6225666d650024f78fca","category":"59de6223666d650024f78f9e","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-29T21:54:16.224Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"language":"json","code":"{}","name":"","status":400}]},"auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.\n\n- **Install Git**—To get the Visualforce and Apex code, you need Git to clone the repos.","excerpt":"","slug":"apex-qs-prereqs","type":"basic","title":"Prerequisites","__v":0,"childrenPages":[]}

Prerequisites


- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.

- **Install Git**—To get the Visualforce and Apex code, you need Git to clone the repos.
{"_id":"59de6225666d650024f78fcb","category":"59de6223666d650024f78f9e","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-01-20T00:18:43.721Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"language":"json","code":"{}","name":"","status":200},{"name":"","status":400,"language":"json","code":"{}"}]},"auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"1. Log in to Salesforce.\n\n2. Click **Files**. \n\n3. Click **Upload File**. \n\n4. Navigate to the directory where you saved the `einstein_platform.pem` file, select the file, and click Open. You should see the key file in the list of files owned by you.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/4588eea-files_key.png\",\n        \"files_key.png\",\n        937,\n        242,\n        \"#f2f9fa\"\n      ]\n    }\n  ]\n}\n[/block]\nThe key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't file your key file, try searching for a file by this name.\n[block:callout]\n{\n  \"type\": \"danger\",\n  \"title\": \"Caution\",\n  \"body\": \"If you plan to use the Einstein Platform Services APIs in an AppExchange package, adhere to the appropriate secret storage guidelines. See the Storing Sensitive Information section in this [article](https://developer.salesforce.com/page/Requirements_Checklist).\"\n}\n[/block]","excerpt":"You must upload your key to Salesforce Files so that the Apex controller class can access it.","slug":"upload-your-key","type":"basic","title":"Upload Your Key","__v":0,"childrenPages":[]}

Upload Your Key

You must upload your key to Salesforce Files so that the Apex controller class can access it.

1. Log in to Salesforce.

2. Click **Files**.

3. Click **Upload File**.

4. Navigate to the directory where you saved the `einstein_platform.pem` file, select the file, and click **Open**. You should see the key file in the list of files owned by you.

![files_key.png](https://files.readme.io/4588eea-files_key.png)

The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by that name.

> **Caution:** If you plan to use the Einstein Platform Services APIs in an AppExchange package, adhere to the appropriate secret storage guidelines. See the Storing Sensitive Information section in this [article](https://developer.salesforce.com/page/Requirements_Checklist).
{"_id":"59de6225666d650024f78fcc","category":"59de6223666d650024f78f9e","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":["58a5d20e79ac232f00cbaf72"],"next":{"pages":[],"description":""},"createdAt":"2016-09-29T22:43:21.085Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"1. Clone the JWT repo by using this command.\n```git clone https://github.com/salesforceidentity/jwt```\n\n2. Clone the Apex code repo by using this command.\n```git clone https://github.com/MetaMind/apex-utils```\n\n##Download the Code Zip Files##\nIf you don't have a GitHub account or you want use the GitHub web interface, use this alternate method for getting the code.\n\n1. From your browser, navigate to [https://github.com/salesforceidentity/jwt](https://github.com/salesforceidentity/jwt).\n\n2. Click **Clone or download**.\n\n3. Select **Download ZIP** to download the classes that handle the JWT token processing.\n\n4. If prompted by your browser, click **OK** to save the jwt-master.zip file locally.\n\n5. Navigate to [https://github.com/MetaMind/apex-utils](https://github.com/MetaMind/apex-utils).\n\n6. Click **Clone or download**.\n\n7. Select **Download ZIP** to download the code for the Apex classes and the Visualforce page. These code elements call the Einstein Image Classification API.\n\n8. If prompted by your browser, click **OK** to save the apex-utils-master.zip file locally.\n\n9. From your file explorer, navigate to the folder where you saved the .zip files and extract each file. Make a note of where you extract the code because you use it later on to create the classes.","excerpt":"Now that you’ve uploaded your key, get the code from GitHub.","slug":"apex-qs-get-the-code","type":"basic","title":"Get the Code","__v":0,"childrenPages":[]}

Get the Code

Now that you’ve uploaded your key, get the code from GitHub.

1. Clone the JWT repo by using this command.

```
git clone https://github.com/salesforceidentity/jwt
```

2. Clone the Apex code repo by using this command.

```
git clone https://github.com/MetaMind/apex-utils
```

##Download the Code Zip Files##

If you don't have a GitHub account or you want to use the GitHub web interface, use this alternate method for getting the code.

1. From your browser, navigate to [https://github.com/salesforceidentity/jwt](https://github.com/salesforceidentity/jwt).

2. Click **Clone or download**.

3. Select **Download ZIP** to download the classes that handle the JWT token processing.

4. If prompted by your browser, click **OK** to save the jwt-master.zip file locally.

5. Navigate to [https://github.com/MetaMind/apex-utils](https://github.com/MetaMind/apex-utils).

6. Click **Clone or download**.

7. Select **Download ZIP** to download the code for the Apex classes and the Visualforce page. These code elements call the Einstein Image Classification API.

8. If prompted by your browser, click **OK** to save the apex-utils-master.zip file locally.

9. From your file explorer, navigate to the folder where you saved the .zip files and extract each file. Make a note of where you extract the code because you use it later to create the classes.
{"_id":"59de6225666d650024f78fcd","category":"59de6223666d650024f78f9e","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-29T22:49:29.135Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"1. Log in to Salesforce.\n\n2. From Setup, enter `Remote Site` in the `Quick Find` box, then select **Remote Site Settings**. \n\n3. Click **New Remote Site**. \n\n4. Enter a name for the remote site.\n\n5. In the Remote Site URL field, enter `https://api.einstein.ai`. \n\n6. Click **Save**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/7825208-remote_site.png\",\n        \"remote_site.png\",\n        327,\n        146,\n        \"#e7e8db\"\n      ]\n    }\n  ]\n}\n[/block]","excerpt":"Before you can call the Einstein Image Classification API from Apex, you must add the API endpoint as a remote site.","slug":"apex-qs-create-remote-site","type":"basic","title":"Create a Remote Site","__v":0,"childrenPages":[]}

Create a Remote Site

Before you can call the Einstein Image Classification API from Apex, you must add the API endpoint as a remote site.

1. Log in to Salesforce.

2. From Setup, enter `Remote Site` in the Quick Find box, then select **Remote Site Settings**.

3. Click **New Remote Site**.

4. Enter a name for the remote site.

5. In the Remote Site URL field, enter `https://api.einstein.ai`.

6. Click **Save**.

![remote_site.png](https://files.readme.io/7825208-remote_site.png)
{"_id":"59de6225666d650024f78fce","category":"59de6223666d650024f78f9e","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-30T20:44:59.590Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"language":"json","code":"{}","name":"","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"1. In Salesforce, from Setup, enter `Apex Classes` in the Quick Find box, then select **Apex Classes**. \n \n2. Click **New**.\n\n3. To create the `JWT` Apex class, copy all the code from `JWT.apex` into the Apex Class tab and click Save.\n\n4. To create the `JWTBearerFlow` Apex class, go back to to the Apex Classes page, and click **New**.\n\n5. Copy all the code from `JWTBearer.apex` to the Apex Class tab and click **Save**.\n\n6. To create the `HttpFormBuilder` Apex class, go back to the Apex Classes page, and click **New**.\n\n7. Copy all the code from `HttpFormBuilder.apex` into the Apex Class tab and click **Save**.\n\n8. To create the `Vision` Apex class, go back to the Apex Classes page, and click **New**.\n\n9. Copy all the code from `Vision.apex` into the Apex Class tab and click **Save**.\n\n10. To create the `VisionController` Apex class, go back to the Apex Classes page, and click **New**.\n\n11. Copy the VisionController code from the apex-utils `README.md` into the Apex Class tab. This class is all the code from `public class VisionController {` to the closing brace `}`. In this example, the expiration is one hour (3600 seconds).\n\n12. Update the `jwt.sub` placeholder text of `yourname@example.com` with your email address. Use your email address that’s contained in the Salesforce org you logged in to when you created an account. \n[block:callout]\n{\n  \"type\": \"danger\",\n  \"title\": \"Warning\",\n  \"body\": \"Use your email address that’s contained in the Salesforce org you logged in to when you created an account. Be sure to use your email address and not your Salesforce username.\"\n}\n[/block]\n13. Click **Save**.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \" // Get a new token\\n JWT jwt = new JWT('RS256');\\n // jwt.cert = 'JWTCert'; // Uncomment this if you used a Salesforce certificate to sign up for an Einstein Platform account\\n jwt.pkcs8 = keyContents; // Comment this if you are using jwt.cert\\n jwt.iss = 'developer.force.com';\\n jwt.sub = 'yourname@example.com';\\n jwt.aud = 'https://api.einstein.ai/v2/oauth2/token';\\n jwt.exp = '3600';\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]","excerpt":"In this step, you create the Apex classes that call the API and do all of the heavy lifting.","slug":"apex-qs-create-classes","type":"basic","title":"Create the Apex Classes","__v":0,"childrenPages":[]}

Create the Apex Classes

In this step, you create the Apex classes that call the API and do all of the heavy lifting.

1. In Salesforce, from Setup, enter `Apex Classes` in the Quick Find box, then select **Apex Classes**.

2. Click **New**.

3. To create the `JWT` Apex class, copy all the code from `JWT.apex` into the Apex Class tab and click **Save**.

4. To create the `JWTBearerFlow` Apex class, go back to the Apex Classes page, and click **New**.

5. Copy all the code from `JWTBearer.apex` into the Apex Class tab and click **Save**.

6. To create the `HttpFormBuilder` Apex class, go back to the Apex Classes page, and click **New**.

7. Copy all the code from `HttpFormBuilder.apex` into the Apex Class tab and click **Save**.

8. To create the `Vision` Apex class, go back to the Apex Classes page, and click **New**.

9. Copy all the code from `Vision.apex` into the Apex Class tab and click **Save**.

10. To create the `VisionController` Apex class, go back to the Apex Classes page, and click **New**.

11. Copy the VisionController code from the apex-utils `README.md` into the Apex Class tab. This class is all the code from `public class VisionController {` to the closing brace `}`. In this example, the expiration is one hour (3600 seconds).

12. Update the `jwt.sub` placeholder text of `yourname@example.com` with your email address.

> **Warning:** Use the email address that’s contained in the Salesforce org you logged in to when you created an account. Be sure to use your email address and not your Salesforce username.

13. Click **Save**.

```apex
// Get a new token
JWT jwt = new JWT('RS256');
// jwt.cert = 'JWTCert'; // Uncomment this if you used a Salesforce certificate to sign up for an Einstein Platform account
jwt.pkcs8 = keyContents; // Comment this out if you're using jwt.cert
jwt.iss = 'developer.force.com';
jwt.sub = 'yourname@example.com';
jwt.aud = 'https://api.einstein.ai/v2/oauth2/token';
jwt.exp = '3600';
```
{"_id":"59de6225666d650024f78fcf","category":"59de6223666d650024f78f9e","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-30T20:53:02.193Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"isReference":false,"order":6,"body":"1. In Salesforce, from Setup, enter `Visualforce` in the Quick Find box, then select **Visualforce Pages**. \n \n2. Click **New**.\n\n3. Enter a label and name of Predict.\n\n4. From the `README.md` file, copy all of the code from `<apex:page Controller=\"VisionController\">` to `</apex:page>` and paste it into the code editor.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/249dfe1-vf_page.png\",\n        \"vf_page.png\",\n        965,\n        773,\n        \"#f6f5f3\"\n      ]\n    }\n  ]\n}\n[/block]\n5. Click **Save**.\n\n6. Click **Preview** to test out the page.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/7dd6624-f6dab44-prediction.png\",\n        \"f6dab44-prediction.png\",\n        396,\n        333,\n        \"#f6f7f5\"\n      ]\n    }\n  ]\n}\n[/block]\nYour page shows the prediction results from the General Image Classifier, and the classifier is pretty sure it’s a picture of a tree frog.\n\nCongratulations! You wrote code to call the Einstein Image Classification API to make a prediction with an image, and all from within your Salesforce org.","excerpt":"Now you create a Visualforce page that calls the classes that you just created to make a prediction.","slug":"apex-qs-create-vf-page","type":"basic","title":"Create the Visualforce Page","__v":0,"childrenPages":[]}

Create the Visualforce Page

Now you create a Visualforce page that calls the classes that you just created to make a prediction.

1. In Salesforce, from Setup, enter `Visualforce` in the Quick Find box, then select **Visualforce Pages**.

2. Click **New**.

3. Enter a label and name of `Predict`.

4. From the `README.md` file, copy all of the code from `<apex:page Controller="VisionController">` to `</apex:page>` and paste it into the code editor.

![vf_page.png](https://files.readme.io/249dfe1-vf_page.png)

5. Click **Save**.

6. Click **Preview** to test out the page.

![prediction.png](https://files.readme.io/7dd6624-f6dab44-prediction.png)

Your page shows the prediction results from the General Image Classifier, and the classifier is pretty sure it’s a picture of a tree frog.

Congratulations! You wrote code to call the Einstein Image Classification API to make a prediction with an image, and all from within your Salesforce org.
{"_id":"59de6224666d650024f78fb4","category":"59de6223666d650024f78f9f","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:16:08.208Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"After you've mastered the basics, it's time to step through creating your own image classifier and testing it out. You use the Einstein Image Classification REST API for all these tasks.\n\nIf you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.\n\nHere's the scenario: you’re a developer who works for a company that sells outdoor sporting gear. The company has automation that monitors social media channels. When someone posts a photo, the company wants to know whether the photo was taken at the beach or in the mountains. Based on where the photo was taken, the company can make targeted product recommendations to its customers.\n \nTo perform that kind of analysis manually requires multiple people. In addition, manual analysis is slow, so it’s likely that the company couldn’t respond until well after the photo was posted. You’ve been tasked with implementing automation that can solve this problem.\n \nYour task is straightforward: create a model that can identify whether an image is of the beach or the mountains. Then test the model with an image of a beach scene.","excerpt":"","slug":"scenario","type":"basic","title":"Scenario","__v":0,"childrenPages":[]}

Scenario


After you've mastered the basics, it's time to step through creating your own image classifier and testing it out. You use the Einstein Image Classification REST API for all these tasks.

If you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.

Here's the scenario: you’re a developer who works for a company that sells outdoor sporting gear. The company has automation that monitors social media channels. When someone posts a photo, the company wants to know whether the photo was taken at the beach or in the mountains. Based on where the photo was taken, the company can make targeted product recommendations to its customers.

To perform that kind of analysis manually requires multiple people. In addition, manual analysis is slow, so it’s likely that the company couldn’t respond until well after the photo was posted. You’ve been tasked with implementing automation that can solve this problem.

Your task is straightforward: create a model that can identify whether an image is of the beach or the mountains. Then test the model with an image of a beach scene.
{"_id":"59de6224666d650024f78fb5","category":"59de6223666d650024f78f9f","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:16:18.583Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.\n\n- **Install cURL**—We’ll be using the cURL command line tool throughout the following steps. This tool is installed by default on Linux and OSX. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html)","excerpt":"","slug":"prerequisites","type":"basic","title":"Prerequisites","__v":0,"childrenPages":[]}

Prerequisites


- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.

- **Install cURL**—We’ll be using the cURL command line tool throughout the following steps. This tool is installed by default on Linux and OS X. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html).
{"_id":"59de6224666d650024f78fb6","category":"59de6223666d650024f78f9f","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:16:44.212Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"1. Type your email address or account ID. \n - If you signed up using Salesforce, use the email address associated with your user in the Salesforce org you logged in to when you signed up. \n - If you signed up using Heroku, use the account ID contained in the `EINSTEIN_VISION_ACCOUNT_ID` config variable.\n\n\n2.  Click **Browse** and navigate to the `einstein_platform.pem` file that you downloaded when you signed up for an account. \n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Tip\",\n  \"body\": \"The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't file your key file, try searching for a file by this name.\"\n}\n[/block]\n3.  Set the number of minutes after which the token expires.\n\n4.  Click **Get Token**. You can now cut and paste the JWT token from the page.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/f296c5e-token_page_with_token.png\",\n        \"token_page_with_token.png\",\n        436,\n        830,\n        \"#0f86b7\"\n      ]\n    }\n  ]\n}\n[/block]\nThis page gives you a quick way to generate a token. In your app, you'll need to add the code that creates an assertion and then calls the API to generate a token. See [Generate an OAuth Token](doc:generate-an-oauth-token).\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Tip\",\n  \"body\": \"The token you create when you use this site doesn't automatically refresh. Your application must refresh the token based on the expiration time that you set when you create it.\"\n}\n[/block]","excerpt":"The Einstein Platform Services APIs use OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.einstein.ai/token) to upload your key file and generate a JWT token.","slug":"set-up-auth","type":"basic","title":"Set Up Authorization","__v":0,"childrenPages":[]}

Set Up Authorization

The Einstein Platform Services APIs use OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.einstein.ai/token) to upload your key file and generate a JWT token.

1. Type your email address or account ID.
 - If you signed up using Salesforce, use the email address associated with your user in the Salesforce org you logged in to when you signed up.
 - If you signed up using Heroku, use the account ID contained in the `EINSTEIN_VISION_ACCOUNT_ID` config variable.

2. Click **Browse** and navigate to the `einstein_platform.pem` file that you downloaded when you signed up for an account.

> **Tip:** The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by that name.

3. Set the number of minutes after which the token expires.

4. Click **Get Token**. You can now cut and paste the JWT token from the page.

![token_page_with_token.png](https://files.readme.io/f296c5e-token_page_with_token.png)

This page gives you a quick way to generate a token. In your app, you'll need to add the code that creates an assertion and then calls the API to generate a token. See [Generate an OAuth Token](doc:generate-an-oauth-token).

> **Tip:** The token you create when you use this site doesn't refresh automatically. Your application must refresh the token based on the expiration time that you set when you create it.
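Once you have a token, include it in the `Authorization` header of every request. For example, to verify that your token works, you can list your datasets with the call used in the next steps:

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets
```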
{"_id":"59de6224666d650024f78fb7","category":"59de6223666d650024f78f9f","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:17:02.894Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"name":"","status":400,"language":"json","code":"{}"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:\n\n- Creates a dataset called `beachvsmountains` from the specified .zip file\n- Creates two labels from the .zip file directories: a `Beaches` label and a `Mountains` label\n- Creates 49 examples named for the images in the Beaches directory and gives them the `Beaches` label\n- Creates 50 examples named for the images in the Mountains directory and gives them the `Mountains` label\n\n <sub>If you use the Service, Salesforce may make available certain images to you (\"Provided Images\"), which are licensed from a third party, as part of the Service. You agree that you will only use the Provided Images in connection with the Service, and you agree that you will not: modify, alter, create derivative works from, sell, sublicense, transfer, assign, or otherwise distribute the Provided Images to any third party.</sub>\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"type=image\\\" -F \\\"path=http://einstein.ai/images/mountainvsbeach.zip\\\" https://api.einstein.ai/v2/vision/datasets/upload/sync\",\n      \"language\": \"curl\",\n      \"name\": null\n    }\n  ]\n}\n[/block]\nThis call is synchronous, so you'll see a response after all the images have finished uploading. The response contains the dataset ID and name as well as information about the labels and examples.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1000044,\\n  \\\"name\\\": \\\"mountainvsbeach\\\",\\n  \\\"createdAt\\\": \\\"2017-02-21T21:59:29.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-02-21T21:59:29.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": [\\n      {\\n        \\\"id\\\": 1865,\\n        \\\"datasetId\\\": 1000044,\\n        \\\"name\\\": \\\"Mountains\\\",\\n        \\\"numExamples\\\": 50\\n      },\\n      {\\n        \\\"id\\\": 1866,\\n        \\\"datasetId\\\": 1000044,\\n        \\\"name\\\": \\\"Beaches\\\",\\n        \\\"numExamples\\\": 49\\n      }\\n    ]\\n  },\\n  \\\"totalExamples\\\": 99,\\n  \\\"totalLabels\\\": 2,\\n  \\\"available\\\": true,\\n  \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n  \\\"type\\\": \\\"image\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Tell Me More##\nThere are other ways to work with datasets using the API. 
For example, use this command to return a list of all your datasets.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe results look something like this.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"object\\\": \\\"list\\\",\\n  \\\"data\\\": [\\n    {\\n      \\\"id\\\": 1000044,\\n      \\\"name\\\": \\\"mountainvsbeach\\\",\\n      \\\"createdAt\\\": \\\"2017-02-21T21:59:29.000+0000\\\",\\n      \\\"updatedAt\\\": \\\"2017-02-21T21:59:29.000+0000\\\",\\n      \\\"labelSummary\\\": {\\n      \\\"labels\\\": [\\n        {\\n          \\\"id\\\": 1865,\\n          \\\"datasetId\\\": 1000044,\\n          \\\"name\\\": \\\"Mountains\\\",\\n          \\\"numExamples\\\": 50\\n        },\\n        {\\n          \\\"id\\\": 1866,\\n          \\\"datasetId\\\": 1000044,\\n          \\\"name\\\": \\\"Beaches\\\",\\n          \\\"numExamples\\\": 49\\n        }\\n      ]\\n    },\\n    \\\"totalExamples\\\": 99,\\n    \\\"totalLabels\\\": 2,\\n    \\\"available\\\": true,\\n    \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n    \\\"type\\\": \\\"image\\\",\\n    \\\"object\\\": \\\"dataset\\\"\\n   },\\n   {\\n      \\\"id\\\": 1000045,\\n      \\\"name\\\": \\\"Brain Scans\\\",\\n      \\\"createdAt\\\": \\\"2017-02-21T22:04:06.000+0000\\\",\\n      \\\"updatedAt\\\": \\\"2017-02-21T22:04:06.000+0000\\\",\\n      \\\"labelSummary\\\": {\\n        \\\"labels\\\": []\\n      },\\n      \\\"totalExamples\\\": 0,\\n      \\\"totalLabels\\\": 0,\\n      \\\"available\\\": true,\\n      \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n       \\\"type\\\": \\\"image\\\",\\n      \\\"object\\\": \\\"dataset\\\"\\n    }\\n  ]\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nTo delete a dataset, use the DELETE verb and pass in the dataset ID.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X DELETE -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nDeleting a dataset returns an HTTP status of 204, but no JSON response is returned.\n\nIn this scenario, the API call to create the dataset and upload the image data is synchronous. You can also make an asynchronous call to create a dataset. See [Ways to Create a Dataset](doc:ways-to-create-a-dataset) for more information about when to use the various APIs.","excerpt":"The first step is to create the dataset that contains the beach and mountain images. You use this dataset to create the model.","slug":"step-1-create-the-dataset","type":"basic","title":"Step 1: Create the Dataset","__v":0,"childrenPages":[]}

Step 1: Create the Dataset

The first step is to create the dataset that contains the beach and mountain images. You use this dataset to create the model.

In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:

- Creates a dataset called `mountainvsbeach` from the specified .zip file
- Creates two labels from the .zip file directories: a `Beaches` label and a `Mountains` label
- Creates 49 examples named for the images in the Beaches directory and gives them the `Beaches` label
- Creates 50 examples named for the images in the Mountains directory and gives them the `Mountains` label

<sub>If you use the Service, Salesforce may make available certain images to you ("Provided Images"), which are licensed from a third party, as part of the Service. You agree that you will only use the Provided Images in connection with the Service, and you agree that you will not: modify, alter, create derivative works from, sell, sublicense, transfer, assign, or otherwise distribute the Provided Images to any third party.</sub>

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "type=image" -F "path=http://einstein.ai/images/mountainvsbeach.zip" https://api.einstein.ai/v2/vision/datasets/upload/sync
```

This call is synchronous, so you'll see a response after all the images have finished uploading. The response contains the dataset ID and name as well as information about the labels and examples.

```json
{
  "id": 1000044,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-21T21:59:29.000+0000",
  "updatedAt": "2017-02-21T21:59:29.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1865,
        "datasetId": 1000044,
        "name": "Mountains",
        "numExamples": 50
      },
      {
        "id": 1866,
        "datasetId": 1000044,
        "name": "Beaches",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

##Tell Me More##
There are other ways to work with datasets using the API. For example, use this command to return a list of all your datasets.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets
```

The results look something like this.

```json
{
  "object": "list",
  "data": [
    {
      "id": 1000044,
      "name": "mountainvsbeach",
      "createdAt": "2017-02-21T21:59:29.000+0000",
      "updatedAt": "2017-02-21T21:59:29.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 1865,
            "datasetId": 1000044,
            "name": "Mountains",
            "numExamples": 50
          },
          {
            "id": 1866,
            "datasetId": 1000044,
            "name": "Beaches",
            "numExamples": 49
          }
        ]
      },
      "totalExamples": 99,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    },
    {
      "id": 1000045,
      "name": "Brain Scans",
      "createdAt": "2017-02-21T22:04:06.000+0000",
      "updatedAt": "2017-02-21T22:04:06.000+0000",
      "labelSummary": {
        "labels": []
      },
      "totalExamples": 0,
      "totalLabels": 0,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    }
  ]
}
```

To delete a dataset, use the DELETE verb and pass in the dataset ID.

```curl
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>
```

Deleting a dataset returns an HTTP status of 204, but no JSON response is returned.

In this scenario, the API call to create the dataset and upload the image data is synchronous. You can also make an asynchronous call to create a dataset. See [Ways to Create a Dataset](doc:ways-to-create-a-dataset) for more information about when to use the various APIs.
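The cURL calls above translate directly to any HTTP client. As an illustration, here's a minimal Python sketch of the synchronous upload, assuming the `requests` library; the endpoint and form fields mirror the cURL example, and `<TOKEN>` is a placeholder as before.

```python
# Sketch: the synchronous dataset-creation call in Python.
import requests

resp = requests.post(
    "https://api.einstein.ai/v2/vision/datasets/upload/sync",
    headers={"Authorization": "Bearer <TOKEN>"},
    # Multipart form fields, mirroring curl's -F flags.
    files={
        "type": (None, "image"),
        "path": (None, "http://einstein.ai/images/mountainvsbeach.zip"),
    },
)
resp.raise_for_status()
dataset = resp.json()

# The response carries the dataset ID plus a per-label example count.
print(dataset["id"], dataset["name"])
for label in dataset["labelSummary"]["labels"]:
    print(label["name"], label["numExamples"])
```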
{"_id":"59de6224666d650024f78fb8","category":"59de6223666d650024f78f9f","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":["5b35e522920cba00036c0ce4"],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:18:12.443Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"code":"{}","name":"","status":200,"language":"json"},{"language":"json","code":"{}","name":"","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"1. Now that you’ve added the labeled images to the dataset, it’s time to train the dataset. In this command, replace `<TOKEN>` with your token and `<DATASET_ID>` with your dataset ID, and then run it. This command trains the dataset and creates a model with the name specified in the name parameter.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=Beach and Mountain Model\\\" -F \\\"datasetId=<DATASET_ID>\\\" https://api.einstein.ai/v2/vision/train\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1000038,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Beach and Mountain Model\\\",\\n  \\\"status\\\": \\\"QUEUED\\\",\\n  \\\"progress\\\": 0,\\n  \\\"createdAt\\\": \\\"2017-02-21T21:10:03.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-02-21T21:10:03.000+0000\\\",\\n  \\\"learningRate\\\": 0.001,\\n  \\\"epochs\\\": 3,\\n  \\\"queuePosition\\\": 1,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"X76USM4Q3QRZRODBDTUGDZEHJU\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": null,\\n  \\\"modelType\\\": \\\"image\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n2. Training a dataset can take a while depending on how many images the dataset contains. To get the training status, in this command, replace `<TOKEN>` with your token and `<YOUR_MODEL_ID>` with the model ID, and then run the command.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/train/<YOUR_MODEL_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response returns the status of the training process. If it’s in progress, you see a status of `RUNNING`. 
When the training is complete, it returns a status of `SUCCEEDED` and a progress value of `1`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1000072,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Beach and Mountain Model\\\",\\n  \\\"status\\\": \\\"SUCCEEDED\\\",\\n  \\\"progress\\\": 1,\\n  \\\"createdAt\\\": \\\"2017-02-21T22:08:52.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-02-21T22:10:20.000+0000\\\",\\n  \\\"learningRate\\\": 0.001,\\n  \\\"epochs\\\": 3,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"X76USM4Q3QRZRODBDTUGDZEHJU\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": {\\n    \\\"labels\\\": 2,\\n    \\\"examples\\\": 99,\\n    \\\"totalTime\\\": \\\"00:02:16:958\\\",\\n    \\\"trainingTime\\\": \\\"00:02:13:664\\\",\\n    \\\"earlyStopping\\\": false,\\n    \\\"lastEpochDone\\\": 3,\\n    \\\"modelSaveTime\\\": \\\"00:00:01:871\\\",\\n    \\\"testSplitSize\\\": 6,\\n    \\\"trainSplitSize\\\": 93,\\n    \\\"datasetLoadTime\\\": \\\"00:00:03:270\\\"\\n  },\\n  \\\"modelType\\\": \\\"image\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Tell Me More##\nAfter you create a model, you can retrieve metrics about the model, such as its accuracy, f1 score, and confusion matrix. You can use these values to tune and tweak your model. Use this call to get the model metrics.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/models/<MODEL_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe command returns a response similar to this one.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"metricsData\\\": {\\n    \\\"f1\\\": [\\n        0.8000000000000002,\\n        0.6666666666666666\\n    ],\\n    \\\"labels\\\": [\\n      \\\"Mountains\\\",\\n      \\\"Beaches\\\"\\n    ],\\n    \\\"testAccuracy\\\": 0.75,\\n    \\\"trainingLoss\\\": 0.0622,\\n    \\\"confusionMatrix\\\": [\\n        [\\n            4,\\n            1\\n        ],\\n        [\\n            1,\\n            2\\n        ]\\n    ],\\n    \\\"trainingAccuracy\\\": 0.9814\\n  },\\n  \\\"createdAt\\\": \\\"2017-02-21T22:19:25.000+0000\\\",\\n  \\\"id\\\": \\\"X76USM4Q3QRZRODBDTUGDZEHJU\\\",\\n  \\\"object\\\": \\\"metrics\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nTo see the model metrics for each training iteration (epoch) performed to create the model, call the learning curve API. See [Get Model Learning Curve](doc:get-model-learning-curve).","excerpt":"Training the dataset creates the model that delivers the predictions.","slug":"step-2-train-the-dataset","type":"basic","title":"Step 2: Train the Dataset","__v":1,"childrenPages":[]}

Step 2: Train the Dataset

Training the dataset creates the model that delivers the predictions.

1. Now that you’ve added the labeled images to the dataset, it’s time to train the dataset. In this command, replace `<TOKEN>` with your token and `<DATASET_ID>` with your dataset ID, and then run it. This command trains the dataset and creates a model with the name specified in the `name` parameter.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain Model" -F "datasetId=<DATASET_ID>" https://api.einstein.ai/v2/vision/train
```

The response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.

```json
{
  "datasetId": 1000038,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "QUEUED",
  "progress": 0,
  "createdAt": "2017-02-21T21:10:03.000+0000",
  "updatedAt": "2017-02-21T21:10:03.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "queuePosition": 1,
  "object": "training",
  "modelId": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "trainParams": null,
  "trainStats": null,
  "modelType": "image"
}
```

2. Training a dataset can take a while depending on how many images the dataset contains. To get the training status, in this command, replace `<TOKEN>` with your token and `<YOUR_MODEL_ID>` with the model ID, and then run the command.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/train/<YOUR_MODEL_ID>
```

The response returns the status of the training process. If it’s in progress, you see a status of `RUNNING`. When the training is complete, it returns a status of `SUCCEEDED` and a progress value of `1`.

```json
{
  "datasetId": 1000072,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "SUCCEEDED",
  "progress": 1,
  "createdAt": "2017-02-21T22:08:52.000+0000",
  "updatedAt": "2017-02-21T22:10:20.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "modelId": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "trainParams": null,
  "trainStats": {
    "labels": 2,
    "examples": 99,
    "totalTime": "00:02:16:958",
    "trainingTime": "00:02:13:664",
    "earlyStopping": false,
    "lastEpochDone": 3,
    "modelSaveTime": "00:00:01:871",
    "testSplitSize": 6,
    "trainSplitSize": 93,
    "datasetLoadTime": "00:00:03:270"
  },
  "modelType": "image"
}
```

##Tell Me More##
After you create a model, you can retrieve metrics about the model, such as its accuracy, f1 score, and confusion matrix. You can use these values to tune and tweak your model. Use this call to get the model metrics.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/models/<MODEL_ID>
```

The command returns a response similar to this one.

```json
{
  "metricsData": {
    "f1": [
      0.8000000000000002,
      0.6666666666666666
    ],
    "labels": [
      "Mountains",
      "Beaches"
    ],
    "testAccuracy": 0.75,
    "trainingLoss": 0.0622,
    "confusionMatrix": [
      [
        4,
        1
      ],
      [
        1,
        2
      ]
    ],
    "trainingAccuracy": 0.9814
  },
  "createdAt": "2017-02-21T22:19:25.000+0000",
  "id": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "object": "metrics"
}
```

To see the model metrics for each training iteration (epoch) performed to create the model, call the learning curve API. See [Get Model Learning Curve](doc:get-model-learning-curve).
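Training runs asynchronously, so scripts usually poll the status endpoint until training finishes rather than checking by hand. Here's a minimal Python sketch of that loop, assuming the `requests` library; the endpoint and the `QUEUED`, `RUNNING`, and `SUCCEEDED` status values are the ones shown above, and the 15-second interval is an arbitrary choice.

```python
# Sketch: poll the training status until it is no longer QUEUED or RUNNING.
import time

import requests

MODEL_ID = "<YOUR_MODEL_ID>"  # from the train response above
HEADERS = {"Authorization": "Bearer <TOKEN>"}

while True:
    status = requests.get(
        f"https://api.einstein.ai/v2/vision/train/{MODEL_ID}", headers=HEADERS
    ).json()
    print(f"{status['status']} (progress: {status['progress']})")
    if status["status"] not in ("QUEUED", "RUNNING"):
        break
    time.sleep(15)  # training can take minutes; avoid hammering the API
```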
{"_id":"59de6224666d650024f78fb9","category":"59de6223666d650024f78f9f","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-18T19:18:42.562Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"You send an image to the model, and the model returns label names and probability values. The probability value is the prediction that the model makes for whether the image matches a label in its dataset. The higher the value, the higher the probability. \n\nYou can classify an image in these ways. \n- Reference the file by a URL\n- Upload the file by its path\n- Upload the image in a base64 string\n\nFor this example, you’ll reference this picture by the file URL.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/069a96b-4d870d7-546212389.jpg\",\n        \"4d870d7-546212389.jpg\",\n        512,\n        512,\n        \"#bfbebb\"\n      ]\n    }\n  ]\n}\n[/block]\n1. In the following command, replace: \n - `<TOKEN>` with your JWT token\n - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset\n \nThen run the command from the command line.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://einstein.ai/images/546212389.jpg\\\" -F \\\"modelId=<YOUR_MODEL_ID>\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns results similar to the following.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Beaches\\\",\\n      \\\"probability\\\": 0.97554934\\n    },\\n    {\\n      \\\"label\\\": \\\"Mountains\\\",\\n      \\\"probability\\\": 0.024450686\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nThe model predicts that the image belongs in the beach label, and therefore, is a picture of a beach scene. The numeric prediction is contained in the `probability` field, and this value is anywhere from 0 (not at all likely) to 1 (very likely). \n\nIn this case, the model is about 98% sure that the image belongs in the beach label. The results are returned in descending order with the greatest probability first.\n\nIf you run a prediction against a model that’s still training, you receive an error that the model ID can't be found.\n\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Caution\",\n  \"body\": \"The dataset used for this scenario contains only 99 images, which is considered a small dataset. When you build your own dataset and model, follow the guidance on the [Dataset and Model Best Practices](doc:dataset-and-model-best-practices) page and add a lot of data.\"\n}\n[/block]\n##Tell Me More##\nYou can also classify a local image by uploading the image or by converting the image to a base64 string. 
To upload a local image, instead of the `sampleLocation` parameter, pass in the `sampleContent` parameter, which contains the image file location of the file to upload.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleContent=@C:\\\\Mountains vs Beach\\\\Beaches\\\\546212389.jpg\\\" -F \\\"modelId=<YOUR_MODEL_ID>\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nSee [Prediction with Image File](doc:prediction-with-image-file) and [Prediction with Image Base64 String](doc:prediction-with-image-base64-string).\n\nCreating the dataset and model are just the beginning. When you create your own model, be sure to test a range of images to ensure that it’s returning the results that you need.\n\nYou’ve done it! You’ve gone through the complete process of building a dataset, creating a model, and classifying images using the Einstein Image Classification API. You’re ready to take what you’ve learned and bring the power of deep learning to your users.","excerpt":"Now that the data is uploaded and you created a model, you’re ready to use it to make predictions.","slug":"step-3-classify-an-image","type":"basic","title":"Step 3: Classify an Image","__v":0,"childrenPages":[]}

Step 3: Classify an Image

Now that the data is uploaded and you created a model, you’re ready to use it to make predictions.

You send an image to the model, and the model returns label names and probability values. The probability value is the prediction that the model makes for whether the image matches a label in its dataset. The higher the value, the higher the probability.

You can classify an image in these ways.

- Reference the file by a URL
- Upload the file by its path
- Upload the image in a base64 string

For this example, you’ll reference this picture by the file URL.

![4d870d7-546212389.jpg](https://files.readme.io/069a96b-4d870d7-546212389.jpg)

1. In the following command, replace:
 - `<TOKEN>` with your JWT token
 - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset

Then run the command from the command line.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://einstein.ai/images/546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.einstein.ai/v2/vision/predict
```

The model returns results similar to the following.

```json
{
  "probabilities": [
    {
      "label": "Beaches",
      "probability": 0.97554934
    },
    {
      "label": "Mountains",
      "probability": 0.024450686
    }
  ],
  "object": "predictresponse"
}
```

The model predicts that the image belongs to the `Beaches` label and is therefore a picture of a beach scene. The numeric prediction is contained in the `probability` field, and this value ranges from 0 (not at all likely) to 1 (very likely).

In this case, the model is about 98% sure that the image belongs to the `Beaches` label. The results are returned in descending order, with the greatest probability first.

If you run a prediction against a model that’s still training, you receive an error that the model ID can't be found.

> **Caution:** The dataset used for this scenario contains only 99 images, which is considered a small dataset. When you build your own dataset and model, follow the guidance on the [Dataset and Model Best Practices](doc:dataset-and-model-best-practices) page and add a lot of data.

##Tell Me More##
You can also classify a local image by uploading the image or by converting the image to a base64 string. To upload a local image, instead of the `sampleLocation` parameter, pass in the `sampleContent` parameter, which contains the file path of the image to upload.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleContent=@C:\Mountains vs Beach\Beaches\546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.einstein.ai/v2/vision/predict
```

See [Prediction with Image File](doc:prediction-with-image-file) and [Prediction with Image Base64 String](doc:prediction-with-image-base64-string).

Creating the dataset and model is just the beginning. When you create your own model, be sure to test a range of images to ensure that it’s returning the results that you need.

You’ve done it! You’ve gone through the complete process of building a dataset, creating a model, and classifying images using the Einstein Image Classification API. You’re ready to take what you’ve learned and bring the power of deep learning to your users.
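If you want to fold classification into a script, here's a minimal Python sketch of the prediction call, assuming the `requests` library; it mirrors the `sampleLocation` cURL example above and reads the top entry from the sorted `probabilities` array.

```python
# Sketch: classify an image by URL and report the most likely label.
import requests

resp = requests.post(
    "https://api.einstein.ai/v2/vision/predict",
    headers={"Authorization": "Bearer <TOKEN>"},
    # Multipart form fields, mirroring curl's -F flags.
    files={
        "sampleLocation": (None, "http://einstein.ai/images/546212389.jpg"),
        "modelId": (None, "<YOUR_MODEL_ID>"),
    },
)
resp.raise_for_status()

# Results come back in descending order, so the first entry is the best match.
top = resp.json()["probabilities"][0]
print(f"{top['label']}: {top['probability']:.1%}")  # e.g. "Beaches: 97.6%"
```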
{"_id":"5a299980842c120012bd8184","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":["5af4f23bf25e0900032955a5"],"next":{"pages":[],"description":""},"createdAt":"2017-12-07T19:41:52.181Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"Use the Einstein Object Detection API to train deep-learning models to recognize and count multiple, distinct objects within an image. The API identifies objects within an image and provides details, like the size and location of each object.\n\nFor each object or set of objects identified in an image, the API returns the coordinates for the object’s bounding box and a class label. It also returns the probability of the object matching the class label. Some scenarios for using the Object Detection API include locating product logos in images or counting products on shelves.\n\nLet's say you're a developer that works for Alpine, a company that produces and sells cereal. Alpine wants to monitor which products are found on store shelves and where those products are located. So they have sales reps that visit various markets and take photos of shelves in the cereal aisle. \n\nYour job is to create a model that identifies Alpine cereal boxes in an image. The model returns the type of cereal and the coordinates of where that box is located in the image. In this quick start, you step through the process of creating a custom model by calling the API via cURL. \n\nIf you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.","excerpt":"","slug":"od_qs_scenario","type":"basic","title":"Scenario","__v":1,"parentDoc":null,"childrenPages":[]}

Scenario


Use the Einstein Object Detection API to train deep-learning models to recognize and count multiple, distinct objects within an image. The API identifies objects within an image and provides details, like the size and location of each object.

For each object or set of objects identified in an image, the API returns the coordinates for the object’s bounding box and a class label. It also returns the probability of the object matching the class label. Some scenarios for using the Object Detection API include locating product logos in images or counting products on shelves.

Let's say you're a developer who works for Alpine, a company that produces and sells cereal. Alpine wants to monitor which products are found on store shelves and where those products are located, so they have sales reps visit various markets and take photos of shelves in the cereal aisle.

Your job is to create a model that identifies Alpine cereal boxes in an image. The model returns the type of cereal and the coordinates of where that box is located in the image. In this quick start, you step through the process of creating a custom model by calling the API via cURL.

If you need help as you go through these steps, check out the [Einstein Platform Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.
{"_id":"5a299afcec2b8400128ed896","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-12-07T19:48:12.228Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account. If you already have an account, you can skip this step.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.\n\n- **Generate a token**—Follow the steps in [Set Up Authorization](doc:set-up-auth) and set the expiration time to 60 minutes. The token you create is valid for the time it takes to complete these steps.\n\n- **Install cURL**—cURL is a free command line tool for getting or sending data using URL syntax. If you already have cURL installed, you can skip this step. The Linux or Mac OSX operating systems have cURL installed by default, but if you use Windows you have to install it yourself. To install cURL, go to [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html). Download and install the version for your operating system.","excerpt":"","slug":"od_qs_prereqs","type":"basic","title":"Prerequisites","__v":0,"parentDoc":null,"childrenPages":[]}

Prerequisites


- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform Services account. If you already have an account, you can skip this step.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.

- **Generate a token**—Follow the steps in [Set Up Authorization](doc:set-up-auth) and set the expiration time to 60 minutes. The token you create is valid for the time it takes to complete these steps.

- **Install cURL**—cURL is a free command-line tool for getting or sending data using URL syntax. If you already have cURL installed, you can skip this step. Linux and macOS have cURL installed by default, but if you use Windows, you have to install it yourself. To install cURL, go to [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html) and download and install the version for your operating system.
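The [Set Up Authorization](doc:set-up-auth) page is the authoritative reference for token generation. Purely as an illustration of the JWT-bearer flow it describes, here's a rough Python sketch, assuming the `pyjwt` and `requests` libraries; the claim set and token endpoint shown here are assumptions to verify against that page.

```python
# Rough sketch of the JWT-bearer token exchange (verify the claim set and
# endpoint against the Set Up Authorization page).
import time

import jwt       # pip install pyjwt[crypto]
import requests  # pip install requests

ACCOUNT_EMAIL = "you@example.com"  # hypothetical: the email on your account
TOKEN_URL = "https://api.einstein.ai/v2/oauth2/token"

with open("einstein_platform.pem") as f:  # the key file from the prerequisites
    private_key = f.read()

# Build and sign the JWT assertion; exp is 60 minutes out, per the prerequisites.
assertion = jwt.encode(
    {"sub": ACCOUNT_EMAIL, "aud": TOKEN_URL, "exp": int(time.time()) + 3600},
    private_key,
    algorithm="RS256",
)

# Exchange the assertion for the access token used as <TOKEN> in the cURL calls.
resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    },
)
resp.raise_for_status()
print(resp.json()["access_token"])
```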
{"_id":"5a29b82272dc34001ebcf02f","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-12-07T21:52:34.936Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"You create the dataset from the .zip file called `alpine.zip`, referenced by its URL. In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:\n\n- Creates a dataset called `alpine` from the specified .zip file\n- Creates three labels specified in the annotations.csv file: `Alpine - Oat Cereal`,  `Alpine - Corn Flakes`, and `Alpine - Bran Cereal`\n- Creates an example for each image specified in the annotations file. In this scenario there are 33 examples.\n- Adds the specified labels from the annotations file to each image.\n\nThe `type` parameter specifies that the new dataset is an object detection dataset.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"path=https://einstein.ai/images/alpine.zip\\\" -F \\\"type=image-detection\\\" https://api.einstein.ai/v2/vision/datasets/upload\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1004942,\\n  \\\"name\\\": \\\"alpine\\\",\\n  \\\"createdAt\\\": \\\"2017-12-11T22:07:32.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-12-11T22:07:32.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": []\\n  },\\n  \\\"totalExamples\\\": 0,\\n  \\\"available\\\": false,\\n  \\\"statusMsg\\\": \\\"UPLOADING\\\",\\n  \\\"type\\\": \\\"image-detection\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nThis call is asynchronous, so you get the dataset ID back right away, but the API continues to load data into the dataset. Use the call to [Get a Dataset](doc:get-a-dataset) to monitor the status of the upload. 
When `available` is `true` and `statusMsg` is `SUCCEEDED`, the upload is complete and the dataset is ready to be trained.\n\nThis cURL call gets the dataset.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n    \\\"id\\\": 1004942,\\n    \\\"name\\\": \\\"alpine\\\",\\n    \\\"createdAt\\\": \\\"2017-12-07T22:54:41.000+0000\\\",\\n    \\\"updatedAt\\\": \\\"2017-12-07T22:54:44.000+0000\\\",\\n    \\\"labelSummary\\\": {\\n        \\\"labels\\\": [\\n            {\\n                \\\"id\\\": 39688,\\n                \\\"datasetId\\\": 1004942,\\n                \\\"name\\\": \\\"Alpine - Oat Cereal\\\",\\n                \\\"numExamples\\\": 32\\n            },\\n            {\\n                \\\"id\\\": 39689,\\n                \\\"datasetId\\\": 1004942,\\n                \\\"name\\\": \\\"Alpine - Corn Flakes\\\",\\n                \\\"numExamples\\\": 30\\n            },\\n            {\\n                \\\"id\\\": 39690,\\n                \\\"datasetId\\\": 1004942,\\n                \\\"name\\\": \\\"Alpine - Bran Cereal\\\",\\n                \\\"numExamples\\\": 31\\n            }\\n        ]\\n    },\\n    \\\"totalExamples\\\": 33,\\n    \\\"totalLabels\\\": 3,\\n    \\\"available\\\": true,\\n    \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n    \\\"type\\\": \\\"image-detection\\\",\\n    \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nThe .zip file used to create an object detection dataset must contain the images and an annotations.csv file. The .zip file must have a specific structure, and the annotations.csv file must also be in the required format. \n\nSee the Object Detection Datasets section in [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async) for guidelines about the .zip file and the annotations file.","excerpt":"The first step is to create the dataset that contains the training data. You use this dataset to create the model.","slug":"od_qs_create_dataset","type":"basic","title":"Step 1: Create the Object Detection Dataset","__v":0,"parentDoc":null,"childrenPages":[]}

Step 1: Create the Object Detection Dataset

The first step is to create the dataset that contains the training data. You use this dataset to create the model.

You create the dataset from the .zip file called `alpine.zip`, referenced by its URL. In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:

- Creates a dataset called `alpine` from the specified .zip file
- Creates three labels specified in the annotations.csv file: `Alpine - Oat Cereal`, `Alpine - Corn Flakes`, and `Alpine - Bran Cereal`
- Creates an example for each image specified in the annotations file. In this scenario, there are 33 examples.
- Adds the specified labels from the annotations file to each image.

The `type` parameter specifies that the new dataset is an object detection dataset.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=https://einstein.ai/images/alpine.zip" -F "type=image-detection" https://api.einstein.ai/v2/vision/datasets/upload
```

```json
{
  "id": 1004942,
  "name": "alpine",
  "createdAt": "2017-12-11T22:07:32.000+0000",
  "updatedAt": "2017-12-11T22:07:32.000+0000",
  "labelSummary": {
    "labels": []
  },
  "totalExamples": 0,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "image-detection",
  "object": "dataset"
}
```

This call is asynchronous, so you get the dataset ID back right away, but the API continues to load data into the dataset. Use the call to [Get a Dataset](doc:get-a-dataset) to monitor the status of the upload. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the upload is complete and the dataset is ready to be trained.

This cURL call gets the dataset.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>
```

```json
{
  "id": 1004942,
  "name": "alpine",
  "createdAt": "2017-12-07T22:54:41.000+0000",
  "updatedAt": "2017-12-07T22:54:44.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 39688,
        "datasetId": 1004942,
        "name": "Alpine - Oat Cereal",
        "numExamples": 32
      },
      {
        "id": 39689,
        "datasetId": 1004942,
        "name": "Alpine - Corn Flakes",
        "numExamples": 30
      },
      {
        "id": 39690,
        "datasetId": 1004942,
        "name": "Alpine - Bran Cereal",
        "numExamples": 31
      }
    ]
  },
  "totalExamples": 33,
  "totalLabels": 3,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image-detection",
  "object": "dataset"
}
```

The .zip file used to create an object detection dataset must contain the images and an annotations.csv file. The .zip file must have a specific structure, and the annotations.csv file must also be in the required format.

See the Object Detection Datasets section in [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async) for guidelines about the .zip file and the annotations file.
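Because this upload is asynchronous, a script typically creates the dataset and then polls [Get a Dataset](doc:get-a-dataset) until the upload completes. Here's a minimal Python sketch, assuming the `requests` library; the endpoints and the `available`/`statusMsg` fields are the ones documented above, and the 10-second interval is an arbitrary choice.

```python
# Sketch: create the object detection dataset, then poll until the upload finishes.
import time

import requests

API = "https://api.einstein.ai/v2/vision"
HEADERS = {"Authorization": "Bearer <TOKEN>"}

# Kick off the asynchronous upload (multipart form fields, like curl's -F flags).
resp = requests.post(
    f"{API}/datasets/upload",
    headers=HEADERS,
    files={
        "path": (None, "https://einstein.ai/images/alpine.zip"),
        "type": (None, "image-detection"),
    },
)
resp.raise_for_status()
dataset_id = resp.json()["id"]

# Poll Get a Dataset until available is true and statusMsg is SUCCEEDED.
while True:
    dataset = requests.get(f"{API}/datasets/{dataset_id}", headers=HEADERS).json()
    print(dataset["statusMsg"])
    if dataset["available"] and dataset["statusMsg"] == "SUCCEEDED":
        break
    time.sleep(10)
```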
{"_id":"5a2f0366b47baa001c646bbb","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-12-11T22:15:02.319Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"1. Now that the dataset has been created and contains labels and images, it’s time to train the dataset. In this command, replace `<TOKEN>` with your token and `<DATASET_ID>` with your dataset ID, and then run it. This command trains the dataset and creates a model with the name specified in the `name` parameter.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\"  -F \\\"name=Alpine Boxes on Shelves\\\"  -F \\\"datasetId=1004942\\\" https://api.einstein.ai/v2/vision/train\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1004942,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Alpine Boxes on Shelves\\\",\\n  \\\"status\\\": \\\"QUEUED\\\",\\n  \\\"progress\\\": 0,\\n  \\\"createdAt\\\": \\\"2017-12-12T18:20:57.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-12-12T18:20:57.000+0000\\\",\\n  \\\"learningRate\\\": 0,\\n  \\\"epochs\\\": 0,\\n  \\\"queuePosition\\\": 1,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": null,\\n  \\\"modelType\\\": \\\"image-detection\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n2. Training a dataset can take a while depending on how many images the dataset contains. To get the training status, in this command, replace `<TOKEN>` with your token and `<YOUR_MODEL_ID>` with the model ID, and then run the command.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/train/<YOUR_MODEL_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response returns the status of the training process. If it’s in progress, you see a status of `RUNNING`. 
When the training is complete, it returns a status of `SUCCEEDED` and a progress value of `1`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 1004942,\\n  \\\"datasetVersionId\\\": 4090,\\n  \\\"name\\\": \\\"Alpine Boxes on Shelves\\\",\\n  \\\"status\\\": \\\"SUCCEEDED\\\",\\n  \\\"progress\\\": 1,\\n  \\\"createdAt\\\": \\\"2017-12-12T18:20:57.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-12-12T18:41:29.000+0000\\\",\\n  \\\"learningRate\\\": 0.001,\\n  \\\"epochs\\\": 20,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\\\",\\n  \\\"trainParams\\\": null,\\n  \\\"trainStats\\\": {\\n    \\\"labels\\\": 3,\\n    \\\"examples\\\": 33,\\n    \\\"totalTime\\\": \\\"00:20:31:335\\\",\\n    \\\"transforms\\\": null,\\n    \\\"trainingTime\\\": \\\"00:20:28:960\\\",\\n    \\\"earlyStopping\\\": false,\\n    \\\"lastEpochDone\\\": 20,\\n    \\\"modelSaveTime\\\": \\\"00:00:48:099\\\",\\n    \\\"testSplitSize\\\": 1,\\n    \\\"trainSplitSize\\\": 32,\\n    \\\"datasetLoadTime\\\": \\\"00:00:02:375\\\",\\n    \\\"preProcessStats\\\": null,\\n    \\\"postProcessStats\\\": null\\n  },\\n  \\\"modelType\\\": \\\"image-detection\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Tell Me More##\nAfter you create a model, you can retrieve metrics for each label, such as f1 score, precision, and recall. You can use these values to tune and tweak your model. Use this call to get the model metrics.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/models/<MODEL_ID>\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe command returns a response similar to this one.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"createdAt\\\": \\\"2017-12-12T18:41:29.000+0000\\\",\\n  \\\"metricsData\\\": {\\n    \\\"labelMetrics\\\": [\\n      {\\n        \\\"f1\\\": 1,\\n        \\\"label\\\": \\\"Alpine - Corn Flakes\\\",\\n        \\\"recall\\\": [\\n          0.5,\\n          1\\n        ],\\n        \\\"precision\\\": [\\n          1,\\n          1\\n        ],\\n        \\\"averagePrecision\\\": 1\\n      },\\n      {\\n        \\\"f1\\\": 0,\\n        \\\"label\\\": \\\"Alpine - Oat Cereal\\\",\\n        \\\"recall\\\": null,\\n        \\\"precision\\\": null,\\n        \\\"averagePrecision\\\": null\\n      },\\n      {\\n        \\\"f1\\\": 1,\\n        \\\"label\\\": \\\"Alpine - Bran Cereal\\\",\\n        \\\"recall\\\": [\\n          0.5,\\n          1\\n        ],\\n        \\\"precision\\\": [\\n          1,\\n          1\\n        ],\\n        \\\"averagePrecision\\\": 1\\n      }\\n    ],\\n    \\\"modelMetrics\\\": {\\n      \\\"trainingLoss\\\": 18.272620260715485,\\n      \\\"meanAveragePrecision\\\": 1\\n    }\\n  },\\n  \\\"id\\\": \\\"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\\\",\\n  \\\"object\\\": \\\"metrics\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Training the dataset creates the model that delivers the predictions.","slug":"od_qs_train_datset","type":"basic","title":"Step 2: Train the Object Detection Dataset","__v":0,"parentDoc":null,"childrenPages":[]}

Step 2: Train the Object Detection Dataset

Training the dataset creates the model that delivers the predictions.

1. Now that the dataset has been created and contains labels and images, it’s time to train the dataset. In this command, replace `<TOKEN>` with your token and `<DATASET_ID>` with your dataset ID, and then run it. This command trains the dataset and creates a model with the name specified in the `name` parameter.
[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Alpine Boxes on Shelves\" -F \"datasetId=<DATASET_ID>\" https://api.einstein.ai/v2/vision/train",
      "language": "curl"
    }
  ]
}
[/block]
The response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.
[block:code]
{
  "codes": [
    {
      "code": "{\n \"datasetId\": 1004942,\n \"datasetVersionId\": 0,\n \"name\": \"Alpine Boxes on Shelves\",\n \"status\": \"QUEUED\",\n \"progress\": 0,\n \"createdAt\": \"2017-12-12T18:20:57.000+0000\",\n \"updatedAt\": \"2017-12-12T18:20:57.000+0000\",\n \"learningRate\": 0,\n \"epochs\": 0,\n \"queuePosition\": 1,\n \"object\": \"training\",\n \"modelId\": \"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\",\n \"trainParams\": null,\n \"trainStats\": null,\n \"modelType\": \"image-detection\"\n}",
      "language": "json"
    }
  ]
}
[/block]
2. Training a dataset can take a while, depending on how many images the dataset contains. To get the training status, replace `<TOKEN>` with your token and `<YOUR_MODEL_ID>` with the model ID in this command, and then run it.
[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/train/<YOUR_MODEL_ID>",
      "language": "curl"
    }
  ]
}
[/block]
The response returns the status of the training process. While training is in progress, the status is `RUNNING`. When training is complete, the response contains a status of `SUCCEEDED` and a progress value of `1`.
[block:code]
{
  "codes": [
    {
      "code": "{\n \"datasetId\": 1004942,\n \"datasetVersionId\": 4090,\n \"name\": \"Alpine Boxes on Shelves\",\n \"status\": \"SUCCEEDED\",\n \"progress\": 1,\n \"createdAt\": \"2017-12-12T18:20:57.000+0000\",\n \"updatedAt\": \"2017-12-12T18:41:29.000+0000\",\n \"learningRate\": 0.001,\n \"epochs\": 20,\n \"object\": \"training\",\n \"modelId\": \"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\",\n \"trainParams\": null,\n \"trainStats\": {\n \"labels\": 3,\n \"examples\": 33,\n \"totalTime\": \"00:20:31:335\",\n \"transforms\": null,\n \"trainingTime\": \"00:20:28:960\",\n \"earlyStopping\": false,\n \"lastEpochDone\": 20,\n \"modelSaveTime\": \"00:00:48:099\",\n \"testSplitSize\": 1,\n \"trainSplitSize\": 32,\n \"datasetLoadTime\": \"00:00:02:375\",\n \"preProcessStats\": null,\n \"postProcessStats\": null\n },\n \"modelType\": \"image-detection\"\n}",
      "language": "json"
    }
  ]
}
[/block]
##Tell Me More##
After you create a model, you can retrieve metrics for each label, such as the f1 score, precision, and recall. You can use these values to tune and tweak your model. Use this call to get the model metrics.
[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/models/<MODEL_ID>",
      "language": "curl"
    }
  ]
}
[/block]
The command returns a response similar to this one.
[block:code]
{
  "codes": [
    {
      "code": "{\n \"createdAt\": \"2017-12-12T18:41:29.000+0000\",\n \"metricsData\": {\n \"labelMetrics\": [\n {\n \"f1\": 1,\n \"label\": \"Alpine - Corn Flakes\",\n \"recall\": [\n 0.5,\n 1\n ],\n \"precision\": [\n 1,\n 1\n ],\n \"averagePrecision\": 1\n },\n {\n \"f1\": 0,\n \"label\": \"Alpine - Oat Cereal\",\n \"recall\": null,\n \"precision\": null,\n \"averagePrecision\": null\n },\n {\n \"f1\": 1,\n \"label\": \"Alpine - Bran Cereal\",\n \"recall\": [\n 0.5,\n 1\n ],\n \"precision\": [\n 1,\n 1\n ],\n \"averagePrecision\": 1\n }\n ],\n \"modelMetrics\": {\n \"trainingLoss\": 18.272620260715485,\n \"meanAveragePrecision\": 1\n }\n },\n \"id\": \"BN2PTZQ6U2F7ORW57ZIZWZWRDQ\",\n \"object\": \"metrics\"\n}",
      "language": "json"
    }
  ]
}
[/block]
{"_id":"5a30295a434301002850e486","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-12-12T19:09:14.230Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"You send an image to the model, and for each object the model identifies, the model returns a label, a probability, and the coordinates for a bounding box around the object. The probability value is the prediction that the model makes for whether the identified object matches the label. The higher the value, the higher the probability.\n\nYou can classify an image in these ways.\n\n- Reference the file by a URL\n- Upload the file by its path\n\nFor this example, you’ll reference this picture by the file URL.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/2682cff-9946272-alpine.jpg\",\n        \"9946272-alpine.jpg\",\n        806,\n        605,\n        \"#c5a874\"\n      ]\n    }\n  ]\n}\n[/block]\n\n  1. In the following command, replace:\n - `<TOKEN>` with your JWT token\n - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset\n\nThen run the command from the command line.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://einstein.ai/images/alpine.jpg\\\" -F \\\"modelId=BN2PTZQ6U2F7ORW57ZIZWZWRDQ\\\" https://api.einstein.ai/v2/vision/detect\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns results similar to the following.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Alpine - Oat Cereal\\\",\\n      \\\"probability\\\": 0.993008,\\n      \\\"boundingBox\\\": {\\n        \\\"minX\\\": 2149,\\n        \\\"minY\\\": 936,\\n        \\\"maxX\\\": 2896,\\n        \\\"maxY\\\": 1927\\n      }\\n    },\\n    {\\n      \\\"label\\\": \\\"Alpine - Corn Flakes\\\",\\n      \\\"probability\\\": 0.98303485,\\n      \\\"boundingBox\\\": {\\n        \\\"minX\\\": 748,\\n        \\\"minY\\\": 935,\\n        \\\"maxX\\\": 1440,\\n        \\\"maxY\\\": 1900\\n      }\\n    },\\n    {\\n      \\\"label\\\": \\\"Alpine - Bran Cereal\\\",\\n      \\\"probability\\\": 0.9943381,\\n      \\\"boundingBox\\\": {\\n        \\\"minX\\\": 1456,\\n        \\\"minY\\\": 944,\\n        \\\"maxX\\\": 2166,\\n        \\\"maxY\\\": 1914\\n      }\\n    },\\n    {\\n      \\\"label\\\": \\\"Alpine - Bran Cereal\\\",\\n      \\\"probability\\\": 0.9913053,\\n      \\\"boundingBox\\\": {\\n        \\\"minX\\\": 2848,\\n        \\\"minY\\\": 854,\\n        \\\"maxX\\\": 3713,\\n        \\\"maxY\\\": 1997\\n      }\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nThe model predicts that there are four objects in the image: one box of oat cereal, one box of corn flakes, and two boxes of bran cereal. 
For each object, the model returns a high probability that the object matches the label returned.\n\nThe model also returns the x and y coordinates for each object identified in the image. This is what the image sent in for prediction looks like with bounding boxes created from the coordinates in the response.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/cfc1f45-alpine_with_bounding_boxes.jpg\",\n        \"alpine_with_bounding_boxes.jpg\",\n        500,\n        375,\n        \"#c5a673\"\n      ],\n      \"sizing\": \"80\"\n    }\n  ]\n}\n[/block]\n##Tell Me More##\nYou can also classify a local image by uploading the image. To upload a local image, instead of the `sampleLocation` parameter, pass in the `sampleContent` parameter, which contains the image file location of the file to upload.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleContent=@C:\\\\data\\\\alpine.jpg\\\" -F \\\"modelId=<YOUR_MODEL_ID>\\\" https://api.einstein.ai/v2/vision/detect\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]","excerpt":"Now that the data is uploaded and you created a model, you’re ready to use the model to make predictions.","slug":"od_qs_classify_an_image","type":"basic","title":"Step 3: Classify an Image","__v":0,"parentDoc":null,"childrenPages":[]}

Step 3: Classify an Image

Now that the data is uploaded and you created a model, you’re ready to use the model to make predictions.

You send an image to the model, and for each object the model identifies, the model returns a label, a probability, and the coordinates for a bounding box around the object. The probability value expresses the model’s confidence that the identified object matches the label; the higher the value, the more confident the prediction.

You can classify an image in these ways:

- Reference the file by a URL.
- Upload the file by its path.

For this example, you reference this picture by its URL.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/2682cff-9946272-alpine.jpg",
        "9946272-alpine.jpg",
        806,
        605,
        "#c5a874"
      ]
    }
  ]
}
[/block]
1. In the following command, replace:
 - `<TOKEN>` with your JWT token
 - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset

Then run the command from the command line.
[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleLocation=http://einstein.ai/images/alpine.jpg\" -F \"modelId=<YOUR_MODEL_ID>\" https://api.einstein.ai/v2/vision/detect",
      "language": "curl"
    }
  ]
}
[/block]
The model returns results similar to the following.
[block:code]
{
  "codes": [
    {
      "code": "{\n \"probabilities\": [\n {\n \"label\": \"Alpine - Oat Cereal\",\n \"probability\": 0.993008,\n \"boundingBox\": {\n \"minX\": 2149,\n \"minY\": 936,\n \"maxX\": 2896,\n \"maxY\": 1927\n }\n },\n {\n \"label\": \"Alpine - Corn Flakes\",\n \"probability\": 0.98303485,\n \"boundingBox\": {\n \"minX\": 748,\n \"minY\": 935,\n \"maxX\": 1440,\n \"maxY\": 1900\n }\n },\n {\n \"label\": \"Alpine - Bran Cereal\",\n \"probability\": 0.9943381,\n \"boundingBox\": {\n \"minX\": 1456,\n \"minY\": 944,\n \"maxX\": 2166,\n \"maxY\": 1914\n }\n },\n {\n \"label\": \"Alpine - Bran Cereal\",\n \"probability\": 0.9913053,\n \"boundingBox\": {\n \"minX\": 2848,\n \"minY\": 854,\n \"maxX\": 3713,\n \"maxY\": 1997\n }\n }\n ],\n \"object\": \"predictresponse\"\n}",
      "language": "json"
    }
  ]
}
[/block]
The model predicts that there are four objects in the image: one box of oat cereal, one box of corn flakes, and two boxes of bran cereal. For each object, the model returns a high probability that the object matches the returned label.

The model also returns the x and y coordinates for each object identified in the image. This is what the image sent in for prediction looks like with bounding boxes drawn from the coordinates in the response.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/cfc1f45-alpine_with_bounding_boxes.jpg",
        "alpine_with_bounding_boxes.jpg",
        500,
        375,
        "#c5a673"
      ],
      "sizing": "80"
    }
  ]
}
[/block]
##Tell Me More##
You can also classify a local image by uploading it. To upload a local image, instead of the `sampleLocation` parameter, pass the `sampleContent` parameter, which specifies the path of the image file to upload.
[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleContent=@C:\\data\\alpine.jpg\" -F \"modelId=<YOUR_MODEL_ID>\" https://api.einstein.ai/v2/vision/detect",
      "language": "curl"
    }
  ]
}
[/block]
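To reproduce an annotated image like the one above, you can draw the returned bounding boxes yourself. Here's a minimal sketch in Python, assuming the `requests` and Pillow libraries, a local copy of the picture saved as `alpine.jpg` (a hypothetical file name), and the `/detect` response format shown above.

```python
import requests
from PIL import Image, ImageDraw

DETECT = "https://api.einstein.ai/v2/vision/detect"
HEADERS = {"Authorization": "Bearer <TOKEN>", "Cache-Control": "no-cache"}

# Request a prediction; the multipart form fields mirror the cURL -F flags.
resp = requests.post(
    DETECT,
    headers=HEADERS,
    files={
        "sampleLocation": (None, "http://einstein.ai/images/alpine.jpg"),
        "modelId": (None, "<YOUR_MODEL_ID>"),
    },
).json()

# Draw each bounding box and label onto a local copy of the same image.
image = Image.open("alpine.jpg")  # assumed local copy of the prediction image
draw = ImageDraw.Draw(image)
for p in resp["probabilities"]:
    box = p["boundingBox"]
    draw.rectangle(
        [box["minX"], box["minY"], box["maxX"], box["maxY"]],
        outline="red",
        width=8,
    )
    draw.text((box["minX"], box["minY"]), f'{p["label"]} {p["probability"]:.2f}', fill="red")

image.save("alpine_with_boxes.jpg")
```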
{"_id":"5a319c9a408fe9001272ef4e","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a2998f0842c120012bd817e","user":"573b5a1f37fcf72000a2e683","updates":["5a676d3219979f0028d4882f"],"next":{"pages":[],"description":""},"createdAt":"2017-12-13T21:33:14.356Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"##Collect Images##\nThe first step to implementing Einstein Object Detection is deciding which objects you want to identify. After you decide that, it's time to gather training data (images) to create the dataset. Use images that are representative of the images that the model will receive in production.\n\n###Training Image Considerations###\n\n- Objects in the images are visible and recognizable.\n- Images are forward-facing and not at an angle.\n- Images are neither too dark nor too bright.\n- Images contain 100-200 or more occurrences (across all images) for each object you want the model to identify. The more occurrences of an object you have, the better the model performs.\n\n\n##Label the Images##\nAfter you collect training images, you label objects in those images and specify a bounding box around each object. There are a few different options for image labeling.\n\n###CrowdFlower###\nUse Crowdflower's human-in-the-loop platform to create high-quality training datasets of annotated images. Their platform lets you select and manage the human labelers you need (including your own employees) to meet your quality and cost requirements. Email salesforce_einstein@crowdflower.com to discuss your labeling project.\n\n###SharinPix###\nUse the SharinPix managed package available on the AppExchange to label your images. Their labeling tool offers team management functionality for self-labeling using your own team or assisted labeling with help from SharinPix labelers. Email Jean-Michel Mougeolle at jmmougeolle@sharinpix.com to discuss your labeling project.\n\n###Self-Labeling###\nYou can do-it-yourself and self-label your images, as long as the annotations meet the required format.\n\n\n##Labeling and Zip File Format##\nNo matter which method you use to label your images, the labeling content is stored in a comma-separated (csv) file named annotations.csv. The annotations file contains the image file name and the labels and coordinates (in JSON format) for each object in the image. See the Annotations.csv File Format section of [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async). 
Here are the first four lines from the annotations.csv file contained in alpine.zip.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"image_url,box0,box1,box2,box3,box4,box5,box6,box7\\n20171030_133845.jpg,\\\"{\\\"\\\"height\\\"\\\": 1612, \\\"\\\"y\\\"\\\": 497, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 1041, \\\"\\\"x\\\"\\\": 548}\\\",\\\"{\\\"\\\"height\\\"\\\": 1370, \\\"\\\"y\\\"\\\": 571, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 904, \\\"\\\"x\\\"\\\": 1635}\\\",\\\"{\\\"\\\"height\\\"\\\": 1553, \\\"\\\"y\\\"\\\": 383, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Corn Flakes\\\"\\\", \\\"\\\"width\\\"\\\": 1059, \\\"\\\"x\\\"\\\": 2580}\\\",,,,,\\n20171030_133911.jpg,\\\"{\\\"\\\"height\\\"\\\": 1147, \\\"\\\"y\\\"\\\": 2299, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 861, \\\"\\\"x\\\"\\\": 374}\\\",\\\"{\\\"\\\"height\\\"\\\": 1038, \\\"\\\"y\\\"\\\": 2226, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 752, \\\"\\\"x\\\"\\\": 1263}\\\",\\\"{\\\"\\\"height\\\"\\\": 1464, \\\"\\\"y\\\"\\\": 709, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Bran Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 1056, \\\"\\\"x\\\"\\\": 179}\\\",\\\"{\\\"\\\"height\\\"\\\": 1470, \\\"\\\"y\\\"\\\": 746, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Bran Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 697, \\\"\\\"x\\\"\\\": 2327}\\\",\\\"{\\\"\\\"height\\\"\\\": 1434, \\\"\\\"y\\\"\\\": 752, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Corn Flakes\\\"\\\", \\\"\\\"width\\\"\\\": 831, \\\"\\\"x\\\"\\\": 1312}\\\",\\\"{\\\"\\\"height\\\"\\\": 1080, \\\"\\\"y\\\"\\\": 2378, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Corn Flakes\\\"\\\", \\\"\\\"width\\\"\\\": 965, \\\"\\\"x\\\"\\\": 2059}\\\",,\\n20171030_133915.jpg,\\\"{\\\"\\\"height\\\"\\\": 496, \\\"\\\"y\\\"\\\": 1811, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 344, \\\"\\\"x\\\"\\\": 922}\\\",\\\"{\\\"\\\"height\\\"\\\": 100, \\\"\\\"y\\\"\\\": 112.18126888217523, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 100, \\\"\\\"x\\\"\\\": 100}\\\",\\\"{\\\"\\\"height\\\"\\\": 517, \\\"\\\"y\\\"\\\": 1753, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Oat Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 337, \\\"\\\"x\\\"\\\": 1282}\\\",\\\"{\\\"\\\"height\\\"\\\": 590, \\\"\\\"y\\\"\\\": 1157, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Bran Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 383, \\\"\\\"x\\\"\\\": 889}\\\",\\\"{\\\"\\\"height\\\"\\\": 587, \\\"\\\"y\\\"\\\": 1157, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Bran Cereal\\\"\\\", \\\"\\\"width\\\"\\\": 368, \\\"\\\"x\\\"\\\": 1674}\\\",\\\"{\\\"\\\"height\\\"\\\": 575, \\\"\\\"y\\\"\\\": 1169, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Corn Flakes\\\"\\\", \\\"\\\"width\\\"\\\": 350, \\\"\\\"x\\\"\\\": 1303}\\\",\\\"{\\\"\\\"height\\\"\\\": 532, \\\"\\\"y\\\"\\\": 1757, \\\"\\\"label\\\"\\\": \\\"\\\"Alpine - Corn Flakes\\\"\\\", \\\"\\\"width\\\"\\\": 365, \\\"\\\"x\\\"\\\": 1629}\\\",\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nAfter you create the labels in the annotations.csv file, you package up that file along with the images in a .zip file. The API call to create an object detection dataset uses this .zip file to upload the images and labels. 
See the Object Detection Datasets section of [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).","excerpt":"In the Object Detection Quick Start, the .zip file with the images and the annotations file is provided for you. To create your own model, you first need to gather and label the training data. Here are some best practices when gathering your own data and labeling your images.","slug":"object-detection-images-and-labeling","type":"basic","title":"Object Detection Images and Labeling","__v":1,"parentDoc":null,"childrenPages":[]}

Object Detection Images and Labeling

In the Object Detection Quick Start, the .zip file with the images and the annotations file is provided for you. To create your own model, you first need to gather and label the training data. Here are some best practices when gathering your own data and labeling your images.

##Collect Images##
The first step to implementing Einstein Object Detection is deciding which objects you want to identify. After you decide that, it's time to gather training data (images) to create the dataset. Use images that are representative of the images that the model will receive in production.

###Training Image Considerations###

- Objects in the images are visible and recognizable.
- Images are forward-facing and not at an angle.
- Images are neither too dark nor too bright.
- Images contain 100-200 or more occurrences (across all images) of each object you want the model to identify. The more occurrences of an object you have, the better the model performs.

##Label the Images##
After you collect training images, you label objects in those images and specify a bounding box around each object. There are a few different options for image labeling.

###CrowdFlower###
Use CrowdFlower's human-in-the-loop platform to create high-quality training datasets of annotated images. The platform lets you select and manage the human labelers you need (including your own employees) to meet your quality and cost requirements. Email salesforce_einstein@crowdflower.com to discuss your labeling project.

###SharinPix###
Use the SharinPix managed package, available on the AppExchange, to label your images. The labeling tool offers team management functionality for self-labeling using your own team, or assisted labeling with help from SharinPix labelers. Email Jean-Michel Mougeolle at jmmougeolle@sharinpix.com to discuss your labeling project.

###Self-Labeling###
You can also label your images yourself, as long as the annotations meet the required format.

##Labeling and Zip File Format##
No matter which method you use to label your images, the labeling content is stored in a comma-separated values (CSV) file named annotations.csv. The annotations file contains the image file name and the labels and coordinates (in JSON format) for each object in the image. See the Annotations.csv File Format section of [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async). Here are the first four lines from the annotations.csv file contained in alpine.zip.
[block:code]
{
  "codes": [
    {
      "code": "image_url,box0,box1,box2,box3,box4,box5,box6,box7\n20171030_133845.jpg,\"{\"\"height\"\": 1612, \"\"y\"\": 497, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 1041, \"\"x\"\": 548}\",\"{\"\"height\"\": 1370, \"\"y\"\": 571, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 904, \"\"x\"\": 1635}\",\"{\"\"height\"\": 1553, \"\"y\"\": 383, \"\"label\"\": \"\"Alpine - Corn Flakes\"\", \"\"width\"\": 1059, \"\"x\"\": 2580}\",,,,,\n20171030_133911.jpg,\"{\"\"height\"\": 1147, \"\"y\"\": 2299, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 861, \"\"x\"\": 374}\",\"{\"\"height\"\": 1038, \"\"y\"\": 2226, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 752, \"\"x\"\": 1263}\",\"{\"\"height\"\": 1464, \"\"y\"\": 709, \"\"label\"\": \"\"Alpine - Bran Cereal\"\", \"\"width\"\": 1056, \"\"x\"\": 179}\",\"{\"\"height\"\": 1470, \"\"y\"\": 746, \"\"label\"\": \"\"Alpine - Bran Cereal\"\", \"\"width\"\": 697, \"\"x\"\": 2327}\",\"{\"\"height\"\": 1434, \"\"y\"\": 752, \"\"label\"\": \"\"Alpine - Corn Flakes\"\", \"\"width\"\": 831, \"\"x\"\": 1312}\",\"{\"\"height\"\": 1080, \"\"y\"\": 2378, \"\"label\"\": \"\"Alpine - Corn Flakes\"\", \"\"width\"\": 965, \"\"x\"\": 2059}\",,\n20171030_133915.jpg,\"{\"\"height\"\": 496, \"\"y\"\": 1811, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 344, \"\"x\"\": 922}\",\"{\"\"height\"\": 100, \"\"y\"\": 112.18126888217523, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 100, \"\"x\"\": 100}\",\"{\"\"height\"\": 517, \"\"y\"\": 1753, \"\"label\"\": \"\"Alpine - Oat Cereal\"\", \"\"width\"\": 337, \"\"x\"\": 1282}\",\"{\"\"height\"\": 590, \"\"y\"\": 1157, \"\"label\"\": \"\"Alpine - Bran Cereal\"\", \"\"width\"\": 383, \"\"x\"\": 889}\",\"{\"\"height\"\": 587, \"\"y\"\": 1157, \"\"label\"\": \"\"Alpine - Bran Cereal\"\", \"\"width\"\": 368, \"\"x\"\": 1674}\",\"{\"\"height\"\": 575, \"\"y\"\": 1169, \"\"label\"\": \"\"Alpine - Corn Flakes\"\", \"\"width\"\": 350, \"\"x\"\": 1303}\",\"{\"\"height\"\": 532, \"\"y\"\": 1757, \"\"label\"\": \"\"Alpine - Corn Flakes\"\", \"\"width\"\": 365, \"\"x\"\": 1629}\",",
      "language": "text"
    }
  ]
}
[/block]
After you create the labels in the annotations.csv file, you package that file along with the images in a .zip file. The API call to create an object detection dataset uses this .zip file to upload the images and labels. See the Object Detection Datasets section of [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).
{"_id":"59de6226666d650024f78fef","category":"59de6223666d650024f78fa0","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-05-05T21:53:18.245Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"code":"{}","language":"json","status":200,"name":""},{"status":400,"name":"","code":"{}","language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"The Einstein Vision APIs provide various features that let you optimize and retrain your model using feedback. Use these features to:\n\n- Add a misclassified images to a dataset with the correct label.\n- Get a list of images that were added as feedback to a dataset.\n- Create a model or update an existing model using feedback images.\n\nLet’s look at an example. Let’s say you have an image classification model that classifies beaches and mountains. You send in an image, `alps.jpg`, to the model to get back a prediction. \n\nThe model returns a response that contains a high probability that the image is a beach (in the Beaches class). However, you expect a response that contains a high probability that the image is a mountain (in the Mountains class). This means that the image was misclassified. \n\n##Add Feedback to the Dataset##\n\nThe first step is to add the misclassified images to the dataset along with their correct labels. The call you use depends on the type of dataset the model was created from.\n\n###Image or Multilabel###\nFor a dataset with a type of `image` or `image-multi-label`, you add the misclassified image and label one at a time.\n\nUse the feedback API to add a misclassified image with the correct label to the dataset from which the model was created. Getting back to our scenario, this cURL call adds `alps.jpg` as a new example to the dataset. The request parameter `\"expectedLabel=Mountains\"` specifies that the image is added to the correct class in the dataset. See [Create a Feedback Example](doc:create-a-feedback-example).\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=3CMCRC572BD3OZTQSTTUU4733Y\\\" -F \\\"data=@c:\\\\data\\\\alps.jpg\\\" -F \\\"expectedLabel=Mountains\\\" https://api.einstein.ai/v2/vision/feedback\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n###Object Detection###\nFor a dataset with a type of `image-detection`, you add the misclassified images, labels, and bounding box information in bulk using a .zip file. This cURL call adds the contents of a .zip file to the dataset from which the model was created. The .zip file contains the images and an annotations.csv file that contains the labels and the bounding box coordinates for each image. See [Create Feedback Examples From a Zip File](doc:create-feedback-examples-from-a-zip-file). 
\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X PUT -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=3CMCRC572BD3OZTQSTTUU4733Y\\\" -F \\\"data=@c:\\\\data\\\\alpine_feedback.zip\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n##Get Feedback Examples##\n\nAfter you add feedback examples to a dataset, you can query the dataset and return just those examples that were added from the feedback API call. \n\nThe API call to get dataset examples takes a `source` query parameter that lets you specify which examples to return from a dataset. See [Get All Examples](doc:get-all-examples).\n\nValid values for this parameter are:\n\n- `all`—Return both upload and feedback examples.\n- `feedback`—Return examples that were created as feedback.\n- `upload`—Return examples that were created from uploading a .zip file.\n\nIf you omit the `source` parameter, feedback examples aren't returned from this call. The `source` query parameter can be combined with the `offset` and `count` query parameters used for paging. \n\nThis cURL call gets examples that were added as feedback from the specified dataset.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets/57/examples?source=feedback\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response returns all the examples that were added by calling the feedback API such as the file `alps.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n      \\\"id\\\": 618169,\\n      \\\"name\\\": \\\"alps.jpg\\\",\\n      \\\"location\\\": \\\"RPA8C4FwkbxRQJaXCmwPejGx4W1sKYjWn...\\\",\\n      \\\"createdAt\\\": \\\"2017-05-04T20:57:23.000+0000\\\",\\n      \\\"label\\\": {\\n        \\\"id\\\": 3235,\\n        \\\"datasetId\\\": 57,\\n        \\\"name\\\": \\\"Mountains\\\",\\n        \\\"numExamples\\\": 108\\n      },\\n      \\\"object\\\": \\\"example\\\"\\n    }\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"If a model misclassifies images, you can use the feedback APIs to add those images to the correct label in the dataset. Then you can train that dataset and update the model.","slug":"add-feedback-to-dataset","type":"basic","title":"Add Feedback to a Dataset","__v":0,"childrenPages":[]}

Add Feedback to a Dataset

If a model misclassifies images, you can use the feedback APIs to add those images to the correct label in the dataset. Then you can train that dataset and update the model.

The Einstein Vision APIs provide various features that let you optimize and retrain your model using feedback. Use these features to:

- Add misclassified images to a dataset with the correct label.
- Get a list of images that were added as feedback to a dataset.
- Create a model or update an existing model using feedback images.

Let’s look at an example. Say you have an image classification model that classifies beaches and mountains. You send in an image, `alps.jpg`, to the model to get back a prediction.

The model returns a response that contains a high probability that the image is a beach (in the Beaches class). However, you expect a response that contains a high probability that the image is a mountain (in the Mountains class). This means that the image was misclassified.

##Add Feedback to the Dataset##

The first step is to add the misclassified images to the dataset along with their correct labels. The call you use depends on the type of dataset the model was created from.

###Image or Multilabel###
For a dataset with a type of `image` or `image-multi-label`, you add the misclassified images and labels one at a time.

Use the feedback API to add a misclassified image with the correct label to the dataset from which the model was created. Getting back to our scenario, this cURL call adds `alps.jpg` as a new example to the dataset. The request parameter `"expectedLabel=Mountains"` specifies that the image is added to the correct class in the dataset. See [Create a Feedback Example](doc:create-a-feedback-example).
[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=3CMCRC572BD3OZTQSTTUU4733Y\" -F \"data=@c:\\data\\alps.jpg\" -F \"expectedLabel=Mountains\" https://api.einstein.ai/v2/vision/feedback",
      "language": "curl"
    }
  ]
}
[/block]
###Object Detection###
For a dataset with a type of `image-detection`, you add the misclassified images, labels, and bounding box information in bulk using a .zip file. This cURL call adds the contents of a .zip file to the dataset from which the model was created. The .zip file contains the images and an annotations.csv file that contains the labels and the bounding box coordinates for each image. See [Create Feedback Examples From a Zip File](doc:create-feedback-examples-from-a-zip-file).
[block:code]
{
  "codes": [
    {
      "code": "curl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=3CMCRC572BD3OZTQSTTUU4733Y\" -F \"data=@c:\\data\\alpine_feedback.zip\" https://api.einstein.ai/v2/vision/bulkfeedback",
      "language": "curl"
    }
  ]
}
[/block]
##Get Feedback Examples##

After you add feedback examples to a dataset, you can query the dataset and return just those examples that were added from the feedback API call.

The API call to get dataset examples takes a `source` query parameter that lets you specify which examples to return from a dataset. See [Get All Examples](doc:get-all-examples).

Valid values for this parameter are:

- `all`—Return both upload and feedback examples.
- `feedback`—Return examples that were created as feedback.
- `upload`—Return examples that were created from uploading a .zip file.

If you omit the `source` parameter, feedback examples aren't returned from this call. The `source` query parameter can be combined with the `offset` and `count` query parameters used for paging.

This cURL call gets examples that were added as feedback from the specified dataset.
[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/57/examples?source=feedback",
      "language": "curl"
    }
  ]
}
[/block]
The response returns all the examples that were added by calling the feedback API, such as the file `alps.jpg`.
[block:code]
{
  "codes": [
    {
      "code": "{\n \"id\": 618169,\n \"name\": \"alps.jpg\",\n \"location\": \"RPA8C4FwkbxRQJaXCmwPejGx4W1sKYjWn...\",\n \"createdAt\": \"2017-05-04T20:57:23.000+0000\",\n \"label\": {\n \"id\": 3235,\n \"datasetId\": 57,\n \"name\": \"Mountains\",\n \"numExamples\": 108\n },\n \"object\": \"example\"\n }",
      "language": "json"
    }
  ]
}
[/block]
{"_id":"59de6226666d650024f78ff0","category":"59de6223666d650024f78fa0","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-05-05T22:08:44.695Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"You have two options:\n\n- [Create a new model](https://metamind.readme.io/v2/docs/update-model-with-feedback#section-create-a-new-model)—Train the dataset using the feedback images and generate a new model. This method creates a model with a new model ID. \n\n- [Update an existing model](https://metamind.readme.io/v2/docs/update-model-with-feedback#section-update-an-existing-model)—Train the dataset using the feedback images and update an existing model. When you update a model it maintains the model ID, so if you’re using that model ID in production code, you don’t need to update it.\n\n##Create a New Model##\n\nTo create a model with the dataset feedback, you call the `/train` resource and pass in the dataset ID as you normally would, but you also pass in this request parameter:\n\n`\"trainParams\": {\"withFeedback\" : true}`\n\nThe `withFeedback` parameter specifies that the training operation use the feedback examples to create the model. This cURL call trains a dataset and uses the feedback examples.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=Beach Mountain Model With Feedback\\\" -F \\\"datasetId=57\\\" -F \\\"trainParams={\\\\\\\"withFeedback\\\\\\\" : true}\\\" https://api.einstein.ai/v2/vision/train\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThis command has double quotes and escaped double quotes around `withFeedback` to run on Windows. You might need to reformat it to run on another OS. For more information, see [Train a Dataset](doc:train-a-dataset).\n\nThe response looks as you would expect from any training call. The `trainParams` field shows that the training uses feedback examples.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 57,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Beach Mountain Model With Feedback\\\",\\n  \\\"status\\\": \\\"QUEUED\\\",\\n  \\\"progress\\\": 0,\\n  \\\"createdAt\\\": \\\"2017-05-08T18:09:24.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-05-08T18:09:24.000+0000\\\",\\n  \\\"learningRate\\\": 0.001,\\n  \\\"epochs\\\": 3,\\n  \\\"queuePosition\\\": 3,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"DWLKXLCOH7G7RSCCRM108RGOVE\\\",\\n  \\\"trainParams\\\": {\\n    \\\"withFeedback\\\": true\\n  },\\n  \\\"trainStats\\\": null,\\n  \\\"modelType\\\": \\\"image\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Update an Existing Model##\n\nIf you want to update an existing model with the feedback in the dataset and keep the model ID, you can call the `/retrain` resource and pass in this request parameter.\n\n`\"trainParams\": {\"withFeedback\" : true}`\n\nThis approach is useful when you have a model in production and you want to maintain the model ID. 
This cURL call trains the dataset associated with the specified model, uses the feedback examples, and updates the model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=DWLKXLCOH7G7RSCCRM108RGOVE\\\" -F \\\"trainParams={\\\\\\\"withFeedback\\\\\\\" : true}\\\" https://api.einstein.ai/v2/vision/retrain\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThis command has double quotes and escaped double quotes around `withFeedback` to run on Windows. You might need to reformat it to run on another OS. For more information, see [Retrain a Dataset](doc:retrain-a-dataset).\n\nThe response looks as you would expect from any training call. The only difference is that this response contains the same `modelId` that was passed in. The `trainParams` field shows that the retraining uses feedback examples.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"datasetId\\\": 57,\\n  \\\"datasetVersionId\\\": 0,\\n  \\\"name\\\": \\\"Beach Mountain Model With Feedback\\\",\\n  \\\"status\\\": \\\"QUEUED\\\",\\n  \\\"progress\\\": 0,\\n  \\\"createdAt\\\": \\\"2017-05-08T18:09:24.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-05-08T18:09:24.000+0000\\\",\\n  \\\"learningRate\\\": 0.001,\\n  \\\"epochs\\\": 3,\\n  \\\"queuePosition\\\": 2,\\n  \\\"object\\\": \\\"training\\\",\\n  \\\"modelId\\\": \\\"DWLKXLCOH7G7RSCCRM108RGOVE\\\",\\n  \\\"trainParams\\\": {\\n    \\\"withFeedback\\\": true\\n  },\\n  \\\"trainStats\\\": null,\\n  \\\"modelType\\\": \\\"image\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"After you add feedback images to the correct classes in the dataset, you retrain the dataset to incorporate the new data into the model.","slug":"update-model-with-feedback","type":"basic","title":"Update a Model with Feedback","__v":0,"childrenPages":[]}

Update a Model with Feedback

After you add feedback images to the correct classes in the dataset, you retrain the dataset to incorporate the new data into the model.

You have two options:

- [Create a new model](https://metamind.readme.io/v2/docs/update-model-with-feedback#section-create-a-new-model)—Train the dataset using the feedback images and generate a new model. This method creates a model with a new model ID.

- [Update an existing model](https://metamind.readme.io/v2/docs/update-model-with-feedback#section-update-an-existing-model)—Train the dataset using the feedback images and update an existing model. When you update a model, it keeps the same model ID, so if you’re using that model ID in production code, you don’t need to update it.

##Create a New Model##

To create a model with the dataset feedback, you call the `/train` resource and pass in the dataset ID as you normally would, but you also pass in this request parameter:

`"trainParams": {"withFeedback" : true}`

The `withFeedback` parameter specifies that the training operation use the feedback examples to create the model. This cURL call trains a dataset and uses the feedback examples.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model With Feedback\" -F \"datasetId=57\" -F \"trainParams={\\\"withFeedback\\\" : true}\" https://api.einstein.ai/v2/vision/train", "language": "curl" } ] }
[/block]
This command has double quotes and escaped double quotes around `withFeedback` so that it runs on Windows. You might need to reformat it to run on another OS. For more information, see [Train a Dataset](doc:train-a-dataset).

The response looks as you would expect from any training call. The `trainParams` field shows that the training uses feedback examples.
[block:code]
{ "codes": [ { "code": "{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach Mountain Model With Feedback\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2017-05-08T18:09:24.000+0000\",\n  \"updatedAt\": \"2017-05-08T18:09:24.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 3,\n  \"object\": \"training\",\n  \"modelId\": \"DWLKXLCOH7G7RSCCRM108RGOVE\",\n  \"trainParams\": {\n    \"withFeedback\": true\n  },\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}", "language": "json" } ] }
[/block]
##Update an Existing Model##

If you want to update an existing model with the feedback in the dataset and keep the model ID, you call the `/retrain` resource and pass in this request parameter:

`"trainParams": {"withFeedback" : true}`

This approach is useful when you have a model in production and you want to maintain the model ID. This cURL call trains the dataset associated with the specified model, uses the feedback examples, and updates the model.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=DWLKXLCOH7G7RSCCRM108RGOVE\" -F \"trainParams={\\\"withFeedback\\\" : true}\" https://api.einstein.ai/v2/vision/retrain", "language": "curl" } ] }
[/block]
This command has double quotes and escaped double quotes around `withFeedback` so that it runs on Windows. You might need to reformat it to run on another OS. For more information, see [Retrain a Dataset](doc:retrain-a-dataset).

The response looks as you would expect from any training call. The only difference is that this response contains the same `modelId` that was passed in. The `trainParams` field shows that the retraining uses feedback examples.
[block:code]
{ "codes": [ { "code": "{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach Mountain Model With Feedback\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2017-05-08T18:09:24.000+0000\",\n  \"updatedAt\": \"2017-05-08T18:09:24.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 2,\n  \"object\": \"training\",\n  \"modelId\": \"DWLKXLCOH7G7RSCCRM108RGOVE\",\n  \"trainParams\": {\n    \"withFeedback\": true\n  },\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}", "language": "json" } ] }
[/block]
{"_id":"59de6224666d650024f78fb3","category":"59de6223666d650024f78fa1","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-30T20:15:45.871Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"- [Food Image Model](https://metamind.readme.io/docs/food-image-model)\n- [General Image Model](https://metamind.readme.io/docs/general-image-model)\n- [Scene Image Model](https://metamind.readme.io/docs/scene-image-model)\n- [Multi-Label Image Model](https://metamind.readme.io/docs/multi-label-image-model)","excerpt":"Einstein Vision offers prebuilt models that you can use as long as you have a valid JWT token. These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.","slug":"use-pre-built-models","type":"basic","title":"Prebuilt Models","__v":0,"childrenPages":[]}

Prebuilt Models

Einstein Vision offers prebuilt models that you can use as long as you have a valid JWT token. These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.

- [Food Image Model](https://metamind.readme.io/docs/food-image-model)
- [General Image Model](https://metamind.readme.io/docs/general-image-model)
- [Scene Image Model](https://metamind.readme.io/docs/scene-image-model)
- [Multi-Label Image Model](https://metamind.readme.io/docs/multi-label-image-model)
{"_id":"59e52598d460b50010237c42","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa1","user":"57619a58a7c9f729009a74f0","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-10-16T21:33:12.930Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"This model is used to classify different foods and contains over 500 labels. You classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `FoodImageClassifier`. For the list of classes this model contains, see [Food Image Model Class List](page:food-image-model-class-list).\n\nThis cURL command makes a prediction against the food model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://einstein.ai/images/foodimage.jpg\\\" -F \\\"modelId=FoodImageClassifier\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following for the pizza image referenced by `http://einstein.ai/images/foodimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"pizza\\\",\\n      \\\"probability\\\": 0.4895147383213043\\n    },\\n    {\\n      \\\"label\\\": \\\"flatbread\\\",\\n      \\\"probability\\\": 0.30357491970062256\\n    },\\n    {\\n      \\\"label\\\": \\\"focaccia\\\",\\n      \\\"probability\\\": 0.10683325678110123\\n    },\\n    {\\n      \\\"label\\\": \\\"frittata\\\",\\n      \\\"probability\\\": 0.05281512811779976\\n    },\\n    {\\n      \\\"label\\\": \\\"pepperoni\\\",\\n      \\\"probability\\\": 0.029621008783578873\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Identify different kinds of foods in a given image.","slug":"food-image-model","type":"basic","title":"Food Image Model","__v":0,"parentDoc":null,"childrenPages":[]}

Food Image Model

Identify different kinds of foods in a given image.

This model is used to classify different foods and contains over 500 labels. You classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `FoodImageClassifier`. For the list of classes this model contains, see [Food Image Model Class List](page:food-image-model-class-list).

This cURL command makes a prediction against the food model.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleLocation=http://einstein.ai/images/foodimage.jpg\" -F \"modelId=FoodImageClassifier\" https://api.einstein.ai/v2/vision/predict", "language": "curl" } ] }
[/block]
The model returns a result similar to the following for the pizza image referenced by `http://einstein.ai/images/foodimage.jpg`.
[block:code]
{ "codes": [ { "code": "{\n  \"probabilities\": [\n    {\n      \"label\": \"pizza\",\n      \"probability\": 0.4895147383213043\n    },\n    {\n      \"label\": \"flatbread\",\n      \"probability\": 0.30357491970062256\n    },\n    {\n      \"label\": \"focaccia\",\n      \"probability\": 0.10683325678110123\n    },\n    {\n      \"label\": \"frittata\",\n      \"probability\": 0.05281512811779976\n    },\n    {\n      \"label\": \"pepperoni\",\n      \"probability\": 0.029621008783578873\n    }\n  ],\n  \"object\": \"predictresponse\"\n}", "language": "json" } ] }
[/block]
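Outside the command line, the prediction call is a simple multipart POST. Here's a minimal Python sketch, assuming the third-party `requests` package and a valid token; the same pattern works for the other prebuilt models on the following pages by swapping the `modelId`.
[block:code]
{ "codes": [ { "code": "import requests\n\ntoken = \"<TOKEN>\"  # replace with a valid JWT access token\n\n# Classify an image URL against the prebuilt food model. The (None, value)\n# tuples make requests send multipart/form-data, like the -F flags in cURL.\nresp = requests.post(\n    \"https://api.einstein.ai/v2/vision/predict\",\n    headers={\"Authorization\": \"Bearer \" + token, \"Cache-Control\": \"no-cache\"},\n    files={\n        \"sampleLocation\": (None, \"http://einstein.ai/images/foodimage.jpg\"),\n        \"modelId\": (None, \"FoodImageClassifier\"),\n    },\n)\nfor p in resp.json()[\"probabilities\"]:\n    print(p[\"label\"], round(p[\"probability\"], 4))", "language": "python" } ] }
[/block]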
{"_id":"59e526b7d460b50010237c48","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa1","user":"57619a58a7c9f729009a74f0","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-10-16T21:37:59.624Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"This model is used to classify a variety of images and contains thousands of labels. You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `GeneralImageClassifier`. For the list of classes this model contains, see [General Image Model Class List](page:general-image-model-class-list).\n\nThis cURL command makes a prediction against the general image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://einstein.ai/images/generalimage.jpg\\\" -F \\\"modelId=GeneralImageClassifier\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following for the tree frog image referenced by `http://einstein.ai/images/generalimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"tree frog, tree-frog\\\",\\n      \\\"probability\\\": 0.7963114976882935\\n    },\\n    {\\n      \\\"label\\\": \\\"tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui\\\",\\n      \\\"probability\\\": 0.1978749930858612\\n    },\\n    {\\n      \\\"label\\\": \\\"banded gecko\\\",\\n      \\\"probability\\\": 0.001511271228082478\\n    },\\n    {\\n      \\\"label\\\": \\\"African chameleon, Chamaeleo chamaeleon\\\",\\n      \\\"probability\\\": 0.0013212867779657245\\n    },\\n    {\\n      \\\"label\\\": \\\"bullfrog, Rana catesbeiana\\\",\\n      \\\"probability\\\": 0.0011536618694663048\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Detect the presence of a single object in a given image.","slug":"general-image-model","type":"basic","title":"General Image Model","__v":0,"parentDoc":null,"childrenPages":[]}

General Image Model

Detect the presence of a single object in a given image.

This model is used to classify a variety of images and contains thousands of labels. You can classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `GeneralImageClassifier`. For the list of classes this model contains, see [General Image Model Class List](page:general-image-model-class-list).

This cURL command makes a prediction against the general image model.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleLocation=http://einstein.ai/images/generalimage.jpg\" -F \"modelId=GeneralImageClassifier\" https://api.einstein.ai/v2/vision/predict", "language": "curl" } ] }
[/block]
The model returns a result similar to the following for the tree frog image referenced by `http://einstein.ai/images/generalimage.jpg`.
[block:code]
{ "codes": [ { "code": "{\n  \"probabilities\": [\n    {\n      \"label\": \"tree frog, tree-frog\",\n      \"probability\": 0.7963114976882935\n    },\n    {\n      \"label\": \"tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui\",\n      \"probability\": 0.1978749930858612\n    },\n    {\n      \"label\": \"banded gecko\",\n      \"probability\": 0.001511271228082478\n    },\n    {\n      \"label\": \"African chameleon, Chamaeleo chamaeleon\",\n      \"probability\": 0.0013212867779657245\n    },\n    {\n      \"label\": \"bullfrog, Rana catesbeiana\",\n      \"probability\": 0.0011536618694663048\n    }\n  ],\n  \"object\": \"predictresponse\"\n}", "language": "json" } ] }
[/block]
{"_id":"59e526f41d5f070010ccbb19","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa1","user":"57619a58a7c9f729009a74f0","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-10-16T21:39:00.145Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"This model is used to classify a variety of indoor and outdoor scenes. You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `SceneClassifier`. For the list of classes this model contains, see [Scene Image Model Class List](page:scene-image-model-class-list).\n\nThis cURL command makes a prediction against the scene image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://einstein.ai/images/gym.jpg\\\" -F \\\"modelId=SceneClassifier\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following for the image referenced by `http://einstein.ai/images/gym.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Gym interior\\\",\\n      \\\"probability\\\": 0.996387\\n    },\\n    {\\n      \\\"label\\\": \\\"Airport terminal\\\",\\n      \\\"probability\\\": 0.0025247275\\n    },\\n    {\\n      \\\"label\\\": \\\"Office or Cubicles\\\",\\n      \\\"probability\\\": 0.00049142947\\n    },\\n    {\\n      \\\"label\\\": \\\"Bus or train interior\\\",\\n      \\\"probability\\\": 0.00019321487\\n    },\\n    {\\n      \\\"label\\\": \\\"Restaurant patio\\\",\\n      \\\"probability\\\": 0.000069430374\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Analyze an image for a specific type of scene.","slug":"scene-image-model","type":"basic","title":"Scene Image Model","__v":0,"parentDoc":null,"childrenPages":[]}

Scene Image Model

Analyze an image for a specific type of scene.

This model is used to classify a variety of indoor and outdoor scenes. You can classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `SceneClassifier`. For the list of classes this model contains, see [Scene Image Model Class List](page:scene-image-model-class-list).

This cURL command makes a prediction against the scene image model.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleLocation=http://einstein.ai/images/gym.jpg\" -F \"modelId=SceneClassifier\" https://api.einstein.ai/v2/vision/predict", "language": "curl" } ] }
[/block]
The model returns a result similar to the following for the image referenced by `http://einstein.ai/images/gym.jpg`.
[block:code]
{ "codes": [ { "code": "{\n  \"probabilities\": [\n    {\n      \"label\": \"Gym interior\",\n      \"probability\": 0.996387\n    },\n    {\n      \"label\": \"Airport terminal\",\n      \"probability\": 0.0025247275\n    },\n    {\n      \"label\": \"Office or Cubicles\",\n      \"probability\": 0.00049142947\n    },\n    {\n      \"label\": \"Bus or train interior\",\n      \"probability\": 0.00019321487\n    },\n    {\n      \"label\": \"Restaurant patio\",\n      \"probability\": 0.000069430374\n    }\n  ],\n  \"object\": \"predictresponse\"\n}", "language": "json" } ] }
[/block]
{"_id":"59e52737dc38f90010a49df2","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa1","user":"57619a58a7c9f729009a74f0","updates":["5b6bb9ca0d95c6000380cd5e"],"next":{"pages":[],"description":""},"createdAt":"2017-10-16T21:40:07.901Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"This multi-label model is used to classify a variety of objects. You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `MultiLabelImageClassifier`. For the list of classes this model contains, see [Multi-Label Image Model Class List](page:multi-label-image-model-class-list).\n\nThis cURL command sends in a local image and returns a prediction from the multi-label image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleContent=@C:\\\\Data\\\\laptop_and_camera.jpg\\\" -F \\\"modelId=MultiLabelImageClassifier\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following JSON. This response is truncated. When you use this model, the response contains all the classes in the model. Multi-label models are used to detect multiple objects in an image, so you'll see the classes with the highest probability returned first.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"// Response is truncated for brevity. Multi-label models return all classes\\n// sorted by probability.\\n{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"laptop\\\",\\n      \\\"probability\\\": 0.96274024\\n    },\\n    {\\n      \\\"label\\\": \\\"camera\\\",\\n      \\\"probability\\\": 0.39719293\\n    },\\n    {\\n      \\\"label\\\": \\\"BACKGROUND_Google\\\",\\n      \\\"probability\\\": 0.2958626\\n    },\\n    {\\n      \\\"label\\\": \\\"cup\\\",\\n      \\\"probability\\\": 0.09132507\\n    },\\n    {\\n      \\\"label\\\": \\\"stapler\\\",\\n      \\\"probability\\\": 0.081633374\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n If you want to send in an image by using its URL, replace the `sampleContent` parameter with the `sampleLocation` parameter. \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=https://www.einstein.ai/laptop_and_camera.jpg\\\" -F \\\"modelId=MultiLabelImageClassifier\\\" https://api.einstein.ai/v2/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]","excerpt":"Detect the presence of multiple objects in a given image.","slug":"multi-label-image-model","type":"basic","title":"Multi-Label Image Model","__v":1,"parentDoc":null,"childrenPages":[]}

Multi-Label Image Model

Detect the presence of multiple objects in a given image.

This multi-label model is used to classify a variety of objects. You can classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `MultiLabelImageClassifier`. For the list of classes this model contains, see [Multi-Label Image Model Class List](page:multi-label-image-model-class-list).

This cURL command sends in a local image and returns a prediction from the multi-label image model.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleContent=@C:\\Data\\laptop_and_camera.jpg\" -F \"modelId=MultiLabelImageClassifier\" https://api.einstein.ai/v2/vision/predict", "language": "curl" } ] }
[/block]
The model returns a result similar to the following JSON. This response is truncated; when you use this model, the response contains all the classes in the model. Multi-label models are used to detect multiple objects in an image, so the classes with the highest probability are returned first.
[block:code]
{ "codes": [ { "code": "// Response is truncated for brevity. Multi-label models return all classes\n// sorted by probability.\n{\n  \"probabilities\": [\n    {\n      \"label\": \"laptop\",\n      \"probability\": 0.96274024\n    },\n    {\n      \"label\": \"camera\",\n      \"probability\": 0.39719293\n    },\n    {\n      \"label\": \"BACKGROUND_Google\",\n      \"probability\": 0.2958626\n    },\n    {\n      \"label\": \"cup\",\n      \"probability\": 0.09132507\n    },\n    {\n      \"label\": \"stapler\",\n      \"probability\": 0.081633374\n    }\n  ],\n  \"object\": \"predictresponse\"\n}", "language": "json" } ] }
[/block]
If you want to send in an image by using its URL, replace the `sampleContent` parameter with the `sampleLocation` parameter.
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleLocation=https://www.einstein.ai/laptop_and_camera.jpg\" -F \"modelId=MultiLabelImageClassifier\" https://api.einstein.ai/v2/vision/predict", "language": "curl" } ] }
[/block]
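Because a multi-label model returns every class ordered by probability, a common next step is to keep only the labels above some cutoff. Here's a small Python sketch; the 0.35 threshold is a hypothetical value you'd tune for your own data.
[block:code]
{ "codes": [ { "code": "def labels_above(prediction, threshold=0.35):\n    \"\"\"Keep only the labels whose probability clears the cutoff.\n\n    prediction is the JSON dict returned by /v2/vision/predict; the 0.35\n    threshold is an illustrative value, not anything the API defines.\n    \"\"\"\n    return [\n        p[\"label\"]\n        for p in prediction[\"probabilities\"]\n        if p[\"probability\"] >= threshold\n    ]\n\n# With the truncated response above, this returns ['laptop', 'camera'].", "language": "python" } ] }
[/block]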
{"_id":"59de6225666d650024f78fe9","category":"5a43ec17dc9e6c00126fdda0","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-06-14T21:02:03.867Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"Use the community sentiment model to classify text without building your own custom model. This model was created from data that comes from multiple sources. The data is short snippets of text, about one or two sentences, and similar to what you would find in a public community or Chatter group, a review/feedback forum, or enterprise social media.\n\nThis cURL command sends in a text string and returns a prediction from the model. You call the pre-built model the same way you call a custom model, but instead of passing in your own `modelId`, you pass in a `modelId` of `CommunitySentiment`. See [Prediction for Sentiment](doc:prediction-sentiment)].\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=CommunitySentiment\\\" -F \\\"document=the presentation was great and I learned a lot\\\"  https://api.einstein.ai/v2/language/sentiment\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n    \\\"probabilities\\\": [\\n        {\\n            \\\"label\\\": \\\"positive\\\",\\n            \\\"probability\\\": 0.8673582\\n        },\\n        {\\n            \\\"label\\\": \\\"negative\\\",\\n            \\\"probability\\\": 0.1316828\\n        },\\n        {\\n            \\\"label\\\": \\\"neutral\\\",\\n            \\\"probability\\\": 0.0009590242\\n        }\\n    ]\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Einstein Language offers a pre-built sentiment model that you can use as long as you have a valid JWT token. This model has three classes: \n- positive\n- negative\n- neutral","slug":"use-pre-built-models-sentiment","type":"basic","title":"Community Sentiment Model","__v":0,"childrenPages":[]}

Community Sentiment Model

Einstein Language offers a pre-built sentiment model that you can use as long as you have a valid JWT token. This model has three classes:
- positive
- negative
- neutral

Use the community sentiment model to classify text without building your own custom model. This model was created from data that comes from multiple sources. The data is short snippets of text, about one or two sentences, similar to what you would find in a public community or Chatter group, a review/feedback forum, or enterprise social media.

This cURL command sends in a text string and returns a prediction from the model. You call the pre-built model the same way you call a custom model, but instead of passing in your own `modelId`, you pass in a `modelId` of `CommunitySentiment`. See [Prediction for Sentiment](doc:prediction-sentiment).
[block:code]
{ "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=CommunitySentiment\" -F \"document=the presentation was great and I learned a lot\" https://api.einstein.ai/v2/language/sentiment", "language": "curl" } ] }
[/block]
The model returns a result similar to the following JSON.
[block:code]
{ "codes": [ { "code": "{\n    \"probabilities\": [\n        {\n            \"label\": \"positive\",\n            \"probability\": 0.8673582\n        },\n        {\n            \"label\": \"negative\",\n            \"probability\": 0.1316828\n        },\n        {\n            \"label\": \"neutral\",\n            \"probability\": 0.0009590242\n        }\n    ]\n}", "language": "json" } ] }
[/block]
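The same prediction from Python, as a minimal sketch assuming the third-party `requests` package. Note that this call uses the `/v2/language/sentiment` endpoint and a `document` field rather than the vision parameters.
[block:code]
{ "codes": [ { "code": "import requests\n\ntoken = \"<TOKEN>\"  # replace with a valid JWT access token\n\nresp = requests.post(\n    \"https://api.einstein.ai/v2/language/sentiment\",\n    headers={\"Authorization\": \"Bearer \" + token, \"Cache-Control\": \"no-cache\"},\n    files={\n        \"modelId\": (None, \"CommunitySentiment\"),\n        \"document\": (None, \"the presentation was great and I learned a lot\"),\n    },\n)\n# Treat the highest-probability class as the overall sentiment\ntop = max(resp.json()[\"probabilities\"], key=lambda p: p[\"probability\"])\nprint(top[\"label\"], top[\"probability\"])", "language": "python" } ] }
[/block]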
{"_id":"5a56889b36e2650032083b6c","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a568870caec3a00286fc070","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-01-10T21:41:47.436Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"There are three ways to generate an access token.\n\n\n- Web UI—Use the [token page](https://api.einstein.ai/token) to enter your email address, upload your private key file, and generate a JWT token. See [Set Up Authorization](doc:set-up-auth).\n\n- Programmatically using your key—Load your private key, generate an assertion, and call the API to get an access token. You must monitor when the token expires and generate a new one.\n\n- Programmatically using a refresh token—Load your private key, generate an assertion, and call the API to get a refresh token. Use that refresh token from then on call the API and generate an access token.","excerpt":"A JWT access token is required to make any Einstein Platform Services API calls.","slug":"generate-an-oauth-access-token","type":"basic","title":"Generate an OAuth Access Token","__v":0,"parentDoc":null,"childrenPages":[]}

Generate an OAuth Access Token

A JWT access token is required to make any Einstein Platform Services API calls.

There are three ways to generate an access token.

- Web UI—Use the [token page](https://api.einstein.ai/token) to enter your email address, upload your private key file, and generate a JWT token. See [Set Up Authorization](doc:set-up-auth).

- Programmatically using your key—Load your private key, generate an assertion, and call the API to get an access token. You must monitor when the token expires and generate a new one.

- Programmatically using a refresh token—Load your private key, generate an assertion, and call the API to get a refresh token. From then on, use that refresh token to call the API and generate access tokens.
{"_id":"5a568cbf799dd6001e148979","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a568870caec3a00286fc070","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-01-10T21:59:27.043Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"When you sign up for an account, you download your key contained in a file called einstein.pem. You can quickly get a token using the [token page](https://api.einstein.ai/token) and uploading your key.\n\nIn code, however, you must write the code to programmatically get an OAuth token using your key. You do this by generating an assertion and then passing that to the API to get an access token. That access token can then be used to make API calls.\n\n1. Open the `einstein_platform.pem` file and read in the key contents.\n\n2. Create the JWT payload. The payload is JSON that contains:\n\n - `sub`—Your email address. This is your email address contained in the Salesforce org you used to sign up for an Einstein Platform Services account.\n\n - `aud`—The API endpoint URL for generating a token.\n\n - `exp`—The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid. For testing purposes, you can get the Unix time at [Time.is](https://time.is/Unix_time_now).\n\n  The JWT payload looks like this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"sub\\\": \\\"<EMAIL_ADDRESS>\\\",\\n  \\\"aud\\\": \\\"https://api.einstein.ai/v2/oauth2/token\\\",\\n  \\\"exp\\\": <EXPIRATION_SECONDS_IN_UNIX_TIME>\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n3. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `einstein_platform.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language. If you're doing manual testing, you can generate an assertion using [jwt.io](https://jwt.io/).\n\n\n4. Call the API and pass in the assertion. You pass in all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you generated. This cURL command shows the call.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -H \\\"Content-type: application/x-www-form-urlencoded\\\" -X POST https://api.einstein.ai/v2/oauth2/token -d \\\"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response looks similar to this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"access_token\\\": \\\"EPMSDXBQSG6YH23HUE6VTA2UC53MBOEFTUND7QWVQIFOAZU42BGIK3SIXKGEWYPKO3GUFLCLHX5ZBMPPQIB6DCP2NAT7HQ108TLRQ7A\\\",\\n  \\\"token_type\\\": \\\"Bearer\\\",\\n  \\\"expires_in\\\": \\\"120\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n5. Use the access token to make an API call. 
For example, this cURL command to get a dataset.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer EPMSDXBQSG6YH23HUE6VTA2UC53MBOEFTUND7QWVQIFOAZU42BGIK3SIXKGEWYPKO3GUFLCLHX5ZBMPPQIB6DCP2NAT7HQ108TLRQ7A\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets/1008108\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nUse the access token to make any authenticated API calls as long as the token is valid (it's not expired). When the access token expires, you repeat this process to generate a new one.","excerpt":"","slug":"generate-an-oauth-token-using-your-key","type":"basic","title":"Generate an OAuth Token Using Your Key","__v":0,"parentDoc":null,"childrenPages":[]}

Generate an OAuth Token Using Your Key


When you sign up for an account, you download your key in a file called `einstein_platform.pem`. You can quickly get a token by uploading your key on the [token page](https://api.einstein.ai/token).

In code, however, you must programmatically get an OAuth token using your key. You do this by generating an assertion and then passing that assertion to the API to get an access token. That access token can then be used to make API calls.

1. Open the `einstein_platform.pem` file and read in the key contents.

2. Create the JWT payload. The payload is JSON that contains:

 - `sub`—Your email address. This is the email address contained in the Salesforce org you used to sign up for an Einstein Platform Services account.

 - `aud`—The API endpoint URL for generating a token.

 - `exp`—The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid. For testing purposes, you can get the Unix time at [Time.is](https://time.is/Unix_time_now).

 The JWT payload looks like this JSON.
[block:code]
{ "codes": [ { "code": "{\n  \"sub\": \"<EMAIL_ADDRESS>\",\n  \"aud\": \"https://api.einstein.ai/v2/oauth2/token\",\n  \"exp\": <EXPIRATION_SECONDS_IN_UNIX_TIME>\n}", "language": "json" } ] }
[/block]
3. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `einstein_platform.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language. If you're doing manual testing, you can generate an assertion using [jwt.io](https://jwt.io/).

4. Call the API and pass in the assertion. You pass all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you generated. This cURL command shows the call.
[block:code]
{ "codes": [ { "code": "curl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>\"", "language": "curl" } ] }
[/block]
The response looks similar to this JSON.
[block:code]
{ "codes": [ { "code": "{\n  \"access_token\": \"EPMSDXBQSG6YH23HUE6VTA2UC53MBOEFTUND7QWVQIFOAZU42BGIK3SIXKGEWYPKO3GUFLCLHX5ZBMPPQIB6DCP2NAT7HQ108TLRQ7A\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": \"120\"\n}", "language": "json" } ] }
[/block]
5. Use the access token to make an API call. For example, this cURL command gets a dataset.
[block:code]
{ "codes": [ { "code": "curl -X GET -H \"Authorization: Bearer EPMSDXBQSG6YH23HUE6VTA2UC53MBOEFTUND7QWVQIFOAZU42BGIK3SIXKGEWYPKO3GUFLCLHX5ZBMPPQIB6DCP2NAT7HQ108TLRQ7A\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/1008108", "language": "curl" } ] }
[/block]
Use the access token to make any authenticated API calls as long as the token is valid (that is, not expired). When the access token expires, repeat this process to generate a new one.
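Steps 1 through 5 translate to only a few lines in most languages. Here's a minimal Python sketch, assuming the third-party PyJWT and `requests` packages; RS256 is an assumption based on the RSA private key described above, and the five-minute expiration is just an example choice.
[block:code]
{ "codes": [ { "code": "import time\n\nimport jwt  # PyJWT; RS256 signing also requires the cryptography package\nimport requests\n\nTOKEN_URL = \"https://api.einstein.ai/v2/oauth2/token\"\n\n# Steps 1-2: read the private key and build the JWT payload\nwith open(\"einstein_platform.pem\") as f:\n    private_key = f.read()\n\npayload = {\n    \"sub\": \"<EMAIL_ADDRESS>\",       # the email address from your signup org\n    \"aud\": TOKEN_URL,\n    \"exp\": int(time.time()) + 300,  # valid for 5 minutes (an example choice)\n}\n\n# Step 3: sign the payload with the RSA key to produce the assertion\nassertion = jwt.encode(payload, private_key, algorithm=\"RS256\")\n\n# Step 4: exchange the assertion for an access token\nresp = requests.post(TOKEN_URL, data={\n    \"grant_type\": \"urn:ietf:params:oauth:grant-type:jwt-bearer\",\n    \"assertion\": assertion,\n})\naccess_token = resp.json()[\"access_token\"]\n\n# Step 5: use the access token on an authenticated call\ndatasets = requests.get(\n    \"https://api.einstein.ai/v2/vision/datasets/1008108\",\n    headers={\"Authorization\": \"Bearer \" + access_token},\n)\nprint(datasets.json())", "language": "python" } ] }
[/block]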
{"_id":"5a5691693f58350012e0d377","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"5a568870caec3a00286fc070","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-01-10T22:19:21.271Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"If you don't want to generate an access token using your private key, you can use a refresh token. A refresh token is a JWT token that never expires. You can use a refresh token only to generate an access token; you can't use it to make an authenticated API call. \n\nThis is useful in cases where the client making API calls doesn't have access to the private key. A third-party system can generate the refresh token and provide it to the client making API calls.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Warning\",\n  \"body\": \"A refresh token never expires and is used to generate access tokens used to make API calls. Be sure to safeguard refresh tokens the same way you would any password.\"\n}\n[/block]\nTo get an access token using a refresh token, you must first get the refresh token. Then you use the refresh token from then on to generate an access token.\n\n##Generate a Refresh Token##\n1. Open the `einstein_platform.pem` file and read in the key contents.\n\n2. Create the JWT payload. The payload is JSON that contains:\n\n - `sub`—Your email address. This is your email address contained in the Salesforce org you used to sign up for an Einstein Platform Services account.\n\n - `aud`—The API endpoint URL for generating a token.\n\n - `exp`—The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid. For testing purposes, you can get the Unix time at [Time.is](https://time.is/Unix_time_now).\n\n  The JWT payload looks like this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"sub\\\": \\\"<EMAIL_ADDRESS>\\\",\\n  \\\"aud\\\": \\\"https://api.einstein.ai/v2/oauth2/token\\\",\\n  \\\"exp\\\": <EXPIRATION_SECONDS_IN_UNIX_TIME>\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n3. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `einstein_platform.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language. If you're doing manual testing, you can generate an assertion using [jwt.io](https://jwt.io/).\n\n\n4. Call the API and pass in the assertion along with the `scope=offline` parameter. You pass in all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you generated. 
This cURL command shows the call.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -H \\\"Content-type: application/x-www-form-urlencoded\\\" -X POST https://api.einstein.ai/v2/oauth2/token -d \\\"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>&scope=offline\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response looks similar to this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"access_token\\\": \\\"SPFPQ5IBLB6DPE6FKPWHMIWW4MCRICX4M4KQXFQMI6THZXIEZ6QGNWNOERD6S7655LJAFWTRIKC4KGYO5G3XROMEOTBSS53CFSB6GIA\\\",\\n  \\\"refresh_token\\\": \\\"FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY\\\",\\n  \\\"token_type\\\": \\\"Bearer\\\",\\n  \\\"expires_in\\\": \\\"120\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n5. Store the refresh token.\n\n\n##Get an Access Token Using the Refresh Token##\n\nNow that you have a refresh token, you can use it to generate an access token that you can use to call the API. \n\n1. Call the `/v2/oauth2/token` endpoint and pass the refresh token along with these parameters.\n\n- `grant_type`—Specify the string `refresh_token`.\n- `refresh_token`—The refresh token you created.\n- `valid_for`—Number of seconds until the access token expires. Default is 60 seconds. Maximum value is 30 days (2592000 seconds).\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -H \\\"Content-type: application/x-www-form-urlencoded\\\" -X POST https://api.einstein.ai/v2/oauth2/token -d \\\"grant_type=refresh_token&refresh_token=FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY&valid_for=60\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe response looks like this JSON.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"access_token\\\": \\\"LA5JPHC6J2FOVPXVU36HW7WUF3GNNZC5PINC6NX272CEFCWGQNIBB3TSVEAUHD6SYZFED27YRDMRQDIEUNNRT7HXFJTNJCFU5DXNNOI\\\",\\n  \\\"token_type\\\": \\\"Bearer\\\",\\n  \\\"expires_in\\\": \\\"60\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nUse the access token to make any authenticated API calls as long as the token is valid (it's not expired). When the access token expires, use the refresh token to generate a new one. \n\nYou can't use refresh token to generate another refresh token. The `scope=offline` parameter doesn't work for this call.\n\nTo delete a token, use the call to [Delete a Refresh Token](doc:delete-a-refresh-token).","excerpt":"","slug":"generate-an-oauth-token-using-a-refresh-token","type":"basic","title":"Generate an OAuth Token Using a Refresh Token","__v":0,"parentDoc":null,"childrenPages":[]}

Generate an OAuth Token Using a Refresh Token


If you don't want to generate an access token using your private key, you can use a refresh token. A refresh token is a JWT that never expires. You can use a refresh token only to generate an access token; you can't use it to make an authenticated API call.

This is useful in cases where the client making API calls doesn't have access to the private key. A third-party system can generate the refresh token and provide it to the client making API calls.
[block:callout]
{
  "type": "warning",
  "title": "Warning",
  "body": "A refresh token never expires and is used to generate access tokens used to make API calls. Be sure to safeguard refresh tokens the same way you would any password."
}
[/block]
To get an access token using a refresh token, you must first get the refresh token. From then on, you use the refresh token to generate access tokens.

##Generate a Refresh Token##
1. Open the `einstein_platform.pem` file and read in the key contents.

2. Create the JWT payload. The payload is JSON that contains:

 - `sub`—Your email address. This is your email address contained in the Salesforce org you used to sign up for an Einstein Platform Services account.

 - `aud`—The API endpoint URL for generating a token.

 - `exp`—The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid. For testing purposes, you can get the Unix time at [Time.is](https://time.is/Unix_time_now).

 The JWT payload looks like this JSON.
[block:code]
{
  "codes": [
    {
      "code": "{\n  \"sub\": \"<EMAIL_ADDRESS>\",\n  \"aud\": \"https://api.einstein.ai/v2/oauth2/token\",\n  \"exp\": <EXPIRATION_SECONDS_IN_UNIX_TIME>\n}",
      "language": "json"
    }
  ]
}
[/block]
3. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `einstein_platform.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language. If you're doing manual testing, you can generate an assertion using [jwt.io](https://jwt.io/).

4. Call the API and pass in the assertion along with the `scope=offline` parameter. You pass in all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you generated. This cURL command shows the call.
[block:code]
{
  "codes": [
    {
      "code": "curl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>&scope=offline\"",
      "language": "curl"
    }
  ]
}
[/block]
 The response looks similar to this JSON.
[block:code]
{
  "codes": [
    {
      "code": "{\n  \"access_token\": \"SPFPQ5IBLB6DPE6FKPWHMIWW4MCRICX4M4KQXFQMI6THZXIEZ6QGNWNOERD6S7655LJAFWTRIKC4KGYO5G3XROMEOTBSS53CFSB6GIA\",\n  \"refresh_token\": \"FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": \"120\"\n}",
      "language": "json"
    }
  ]
}
[/block]
5. Store the refresh token.

##Get an Access Token Using the Refresh Token##

Now that you have a refresh token, you can use it to generate an access token that you can use to call the API.

1. Call the `/v2/oauth2/token` endpoint and pass the refresh token along with these parameters.

- `grant_type`—Specify the string `refresh_token`.
- `refresh_token`—The refresh token you created.
- `valid_for`—Number of seconds until the access token expires. Default is 60 seconds. Maximum value is 30 days (2592000 seconds).
[block:code]
{
  "codes": [
    {
      "code": "curl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=refresh_token&refresh_token=FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY&valid_for=60\"",
      "language": "curl"
    }
  ]
}
[/block]
 The response looks like this JSON.
[block:code]
{
  "codes": [
    {
      "code": "{\n  \"access_token\": \"LA5JPHC6J2FOVPXVU36HW7WUF3GNNZC5PINC6NX272CEFCWGQNIBB3TSVEAUHD6SYZFED27YRDMRQDIEUNNRT7HXFJTNJCFU5DXNNOI\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": \"60\"\n}",
      "language": "json"
    }
  ]
}
[/block]
Use the access token to make any authenticated API call as long as the token is valid (that is, not expired). When the access token expires, use the refresh token to generate a new one.

You can't use a refresh token to generate another refresh token. The `scope=offline` parameter doesn't work for this call.

To delete a token, use the call to [Delete a Refresh Token](doc:delete-a-refresh-token).
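Both halves of this flow can be scripted. The following is a minimal Python sketch, again assuming the third-party PyJWT and Requests libraries; the function names are illustrative, not part of the API.

```python
# Minimal sketch: obtain a never-expiring refresh token (scope=offline),
# then exchange it for short-lived access tokens. Assumes PyJWT and Requests.
import time

import jwt
import requests

TOKEN_URL = "https://api.einstein.ai/v2/oauth2/token"


def get_refresh_token(pem_path, email):
    """One-time call, typically run by the system that holds the private key."""
    with open(pem_path) as f:
        private_key = f.read()
    assertion = jwt.encode(
        {"sub": email, "aud": TOKEN_URL, "exp": int(time.time()) + 300},
        private_key,
        algorithm="RS256",
    )
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
        "scope": "offline",  # asks the endpoint to include a refresh_token
    })
    resp.raise_for_status()
    return resp.json()["refresh_token"]


def access_token_from_refresh(refresh_token, valid_for=60):
    """Run by any client that holds only the refresh token, not the key."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "valid_for": valid_for,  # seconds; default 60, max 2592000 (30 days)
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```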
{"_id":"59de6224666d650024f78fc5","category":"59de6223666d650024f78fa2","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-12-12T19:25:01.689Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":true,"order":0,"body":"You access Einstein Vision and Einstein Language via these standard REST API calls. Use the APIs to programmatically work with datasets, labels, examples, models, and predictions.","excerpt":"","slug":"predictive-vision-service-api","type":"basic","title":"Einstein Platform Services API","__v":0,"childrenPages":[]}

Einstein Platform Services API


You access Einstein Vision and Einstein Language via these standard REST API calls. Use the APIs to programmatically work with datasets, labels, examples, models, and predictions.
{"_id":"59de6224666d650024f78fc6","category":"59de6223666d650024f78fa2","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":["59c9c019ead2de00106f567e"],"next":{"pages":[],"description":""},"createdAt":"2017-01-20T18:41:53.835Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"post","results":{"codes":[{"name":"","code":"// Response from call to generate an access token\n{\n  \"access_token\": \"XLDZHXACW7DC45HQQLBZABCVFHWRT6KTBQNBIEAHCOYHCICM2Y34OBI46BD2K4XQQ2KLXI5HPHGT322G7MMKFOGE7534OGUMR6WC108\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": 9999902\n}\n\n// Response from call to generate a refresh token\n{\n  \"access_token\": \"SPFPQ5IBLB6DPE6FKPWHMIWW4MCRICX4M4KQXFQMI6THZXIEZ6QGNWNOERD6S7655LJAFWTRIKC4KGYO5G3XROMEOTBSS53CFSB6GIA\",\n  \"refresh_token\": \"FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": \"120\"\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","examples":{"codes":[{"code":"// Generate an access token\ncurl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>\"\n\n// Generate a refresh token\ncurl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>&scope=offline\"","language":"curl"}]},"auth":"required","params":[],"url":"/oauth2/token"},"isReference":true,"order":1,"body":"For information about how to create an access token, see [Generate an OAuth Token Using Your Key](doc:generate-an-oauth-token-using-your-key). For information about how to create a refresh token, see [Generate an OAuth Token Using a Refresh Token](doc:generate-an-oauth-token-using-a-refresh-token).\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`access_token`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"Access token for authorization.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`expires_in`\",\n    \"1-1\": \"integer\",\n    \"1-2\": \"Number of seconds that the token will expire from the time it was generated.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`refresh_token`\",\n    \"2-1\": \"string\",\n    \"2-3\": \"2.0\",\n    \"2-2\": \"Refresh token that can be used to generate an access token. Only returned when you pass the `scope=offline` parameter to the endpoint.\",\n    \"3-0\": \"`token_type`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Type of token returned. Always `Bearer`.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Returns an OAuth access token or a refresh token. You must pass a valid access token in the header of each API call.","slug":"generate-an-oauth-token","type":"post","title":"Generate an OAuth Token","__v":0,"childrenPages":[]}

Generate an OAuth Token

Returns an OAuth access token or a refresh token. You must pass a valid access token in the header of each API call.

For information about how to create an access token, see [Generate an OAuth Token Using Your Key](doc:generate-an-oauth-token-using-your-key). For information about how to create a refresh token, see [Generate an OAuth Token Using a Refresh Token](doc:generate-an-oauth-token-using-a-refresh-token).

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `access_token` | string | Access token for authorization. | 1.0 |
| `expires_in` | integer | Number of seconds until the token expires, measured from the time it was generated. | 1.0 |
| `refresh_token` | string | Refresh token that can be used to generate an access token. Only returned when you pass the `scope=offline` parameter to the endpoint. | 2.0 |
| `token_type` | string | Type of token returned. Always `Bearer`. | 1.0 |

Definition

`POST https://api.einstein.ai/v2/oauth2/token`

Examples

[block:code]
{
  "codes": [
    {
      "code": "// Generate an access token\ncurl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>\"\n\n// Generate a refresh token\ncurl -H \"Content-type: application/x-www-form-urlencoded\" -X POST https://api.einstein.ai/v2/oauth2/token -d \"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>&scope=offline\"",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

[block:code]
{
  "codes": [
    {
      "code": "// Response from call to generate an access token\n{\n  \"access_token\": \"XLDZHXACW7DC45HQQLBZABCVFHWRT6KTBQNBIEAHCOYHCICM2Y34OBI46BD2K4XQQ2KLXI5HPHGT322G7MMKFOGE7534OGUMR6WC108\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": 9999902\n}\n\n// Response from call to generate a refresh token\n{\n  \"access_token\": \"SPFPQ5IBLB6DPE6FKPWHMIWW4MCRICX4M4KQXFQMI6THZXIEZ6QGNWNOERD6S7655LJAFWTRIKC4KGYO5G3XROMEOTBSS53CFSB6GIA\",\n  \"refresh_token\": \"FL4GSVQS4W5CKSFRVZBLPIVZZJ2K4VIFPLGZ45SJGUQK4SS56IWPWACZ7V2B7OVLVKZCNK5JZSSW7CIHCNQJAO3TOUE3375108HHTLY\",\n  \"token_type\": \"Bearer\",\n  \"expires_in\": \"120\"\n}",
      "language": "json"
    }
  ]
}
[/block]

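Because `expires_in` tells you how long an access token lasts, a client can cache the token and request a new one only when it's about to expire, rather than calling this endpoint before every request. The following is an illustrative sketch, not an official pattern; `fetch_token` stands in for either token flow described earlier.

```python
# Illustrative token cache: reuse an access token until shortly before it expires.
# `fetch_token` is any callable that returns the token endpoint's JSON response.
import time


class TokenCache:
    def __init__(self, fetch_token, margin=10):
        self.fetch_token = fetch_token
        self.margin = margin  # seconds of safety before the expiry deadline
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.time() >= self._expires_at - self.margin:
            body = self.fetch_token()
            self._token = body["access_token"]
            # expires_in can arrive as a string (e.g., "120"), so coerce to int.
            self._expires_at = time.time() + int(body["expires_in"])
        return self._token
```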
{"_id":"5a57e5109e19b0002864c76a","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa2","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-01-11T22:28:32.237Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"delete","examples":{"codes":[{"language":"curl","code":"curl -X DELETE -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/oauth2/token/<REFRESH_TOKEN>"}]},"results":{"codes":[{"status":204,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":"/oauth2/token/<REFRESH_TOKEN>"},"isReference":true,"order":2,"body":"This call doesn’t return a response body. Instead, it returns an HTTP status code 204.","excerpt":"","slug":"delete-a-refresh-token","type":"delete","title":"Delete a Refresh Token","__v":0,"parentDoc":null,"childrenPages":[]}

Delete a Refresh Token


This call doesn’t return a response body. Instead, it returns an HTTP status code 204.

Definition

`DELETE https://api.einstein.ai/v2/oauth2/token/<REFRESH_TOKEN>`

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X DELETE -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/oauth2/token/<REFRESH_TOKEN>",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

A successful call returns HTTP status code 204 with no response body.

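A minimal Python sketch of the delete call, assuming the Requests library; note that success is signaled by the 204 status code, not by a response body.

```python
# Minimal sketch: revoke a refresh token. Success is HTTP 204 with no body.
import requests


def delete_refresh_token(access_token, refresh_token):
    resp = requests.delete(
        f"https://api.einstein.ai/v2/oauth2/token/{refresh_token}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    return resp.status_code == 204  # True if the token was deleted
```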
{"_id":"59de6224666d650024f78fc7","category":"59de6223666d650024f78fa2","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-03-20T20:47:46.958Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","examples":{"codes":[{"code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/apiusage","language":"curl"}]},"method":"get","results":{"codes":[{"status":200,"language":"json","code":"{\n  \"object\": \"list\",\n  \"data\": [\n    {\n      \"id\": \"489\",\n      \"organizationId\": \"108\",\n      \"startsAt\": \"2017-03-01T00:00:00.000Z\",\n      \"endsAt\": \"2017-04-01T00:00:00.000Z\",\n      \"planData\": [\n        {\n          \"plan\": \"STARTER\",\n          \"amount\": 1,\n          \"source\": \"HEROKU\"\n        }\n      ],\n      \"licenseId\": \"kJCHtYDCSf\",\n      \"object\": \"apiusage\",\n      \"predictionsRemaining\": 1997,\n      \"predictionsUsed\": 3,\n      \"predictionsMax\": 2000\n    }\n  ]\n}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":"/apiusage"},"isReference":true,"order":3,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"1-0\": \"`object`\",\n    \"1-2\": \"Object returned; in this case, `list`.\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Array of `apiusage` objects.\",\n    \"0-1\": \"object\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\n##Apiusage Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"Unique ID for the API usage plan month.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`licenseId`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Unique ID of the API plan.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`object`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Object returned; in this case, `apiusage`.\",\n    \"3-3\": \"1.0\",\n    \"8-0\": \"`startsAt`\",\n    \"8-1\": \"date\",\n    \"8-2\": \"Date and time that the plan calendar month begins. Always the first of the month.\",\n    \"8-3\": \"1.0\",\n    \"0-0\": \"`endsAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the plan calendar month ends. Always 12 am on the first day of the following month.\",\n    \"4-0\": \"`organizationId`\",\n    \"4-2\": \"Unique ID for the user making the API call.\",\n    \"5-0\": \"`predictionsMax`\",\n    \"5-2\": \"Total number of predictions for the calendar month.\",\n    \"5-3\": \"1.0\",\n    \"6-0\": \"`predictionsRemaining`\",\n    \"6-2\": \"Number of predictions left for the calendar month.\",\n    \"6-3\": \"1.0\",\n    \"7-0\": \"`predictionsUsed`\",\n    \"7-2\": \"Number of predictions used in the calendar month. 
A prediction is any call to these resources:\\n\\n- `/detect`\\n- `/intent`\\n- `/predict`\\n- `/sentiment`\",\n    \"7-3\": \"1.0\",\n    \"0-3\": \"1.0\",\n    \"4-3\": \"1.0\",\n    \"4-1\": \"long\",\n    \"5-1\": \"long\",\n    \"6-1\": \"long\",\n    \"7-1\": \"long\"\n  },\n  \"cols\": 4,\n  \"rows\": 9\n}\n[/block]\n##Plandata Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`amount`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"Number of plans of the specified type.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`plan`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Type of plan based on the `source`. Valid values:\\n- `HEROKU`\\n - `STARTER`—2,000 predictions per calendar month.\\n - `BRONZE`—10,000 predictions per calendar month.\\n - `SILVER`—250,000 predictions per calendar month.\\n - `GOLD`—One million predictions per calendar month.\\n\\n\\n- `SALESFORCE`\\n - `STARTER`—2,000 predictions per calendar month.\\n - `SFDC_1M_EDITION`—One million predictions per calendar month.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`source`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Service that provisioned the plan. Valid values:\\n- `HEROKU`\\n- `SALESFORCE`\",\n    \"2-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 3\n}\n[/block]\nEach `apiusage` object in the response contains plan information for a single calendar month for a single license. If you have a six-month paid plan and you make this call on the first month, the response contains six `apiusage` objects; one for each calendar month in the plan.\n\n- If you're using the free tier, the response contains plan information only for the current month. You see plan information only after you make your first prediction. If you call the `/apiusage` resource before you make your first prediction call, the API returns an empty array.\n- If you're using the paid tier, the response contains plan information for each month in your plan starting with the current month.\n\nThe `planData` array contains an object for each plan type associated with the calendar month and the license. This code snippet shows the `planData` if the user has two Heroku GOLD plans.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"planData\\\": [\\n       {\\n         \\\"plan\\\": \\\"GOLD\\\",\\n         \\\"amount\\\": 2,\\n         \\\"source\\\": \\\"HEROKU\\\"\\n       }\\n     ]\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"Returns prediction usage on a monthly basis for the current calendar month and future months. Each `apiusage` object in the response corresponds to a calendar month in your plan. For more information about plans, see [Rate Limits](doc:rate-limits).","slug":"get-api-usage","type":"get","title":"Get API Usage","__v":0,"childrenPages":[]}

Get API Usage

Returns prediction usage on a monthly basis for the current calendar month and future months. Each `apiusage` object in the response corresponds to a calendar month in your plan. For more information about plans, see [Rate Limits](doc:rate-limits).

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `data` | object | Array of `apiusage` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Apiusage Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `endsAt` | date | Date and time that the plan calendar month ends. Always 12 AM on the first day of the following month. | 1.0 |
| `id` | long | Unique ID for the API usage plan month. | 1.0 |
| `licenseId` | string | Unique ID of the API plan. | 1.0 |
| `object` | string | Object returned; in this case, `apiusage`. | 1.0 |
| `organizationId` | long | Unique ID for the user making the API call. | 1.0 |
| `predictionsMax` | long | Total number of predictions for the calendar month. | 1.0 |
| `predictionsRemaining` | long | Number of predictions left for the calendar month. | 1.0 |
| `predictionsUsed` | long | Number of predictions used in the calendar month. A prediction is any call to these resources: `/detect`, `/intent`, `/predict`, `/sentiment`. | 1.0 |
| `startsAt` | date | Date and time that the plan calendar month begins. Always the first of the month. | 1.0 |

##Plandata Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `amount` | string | Number of plans of the specified type. | 1.0 |
| `plan` | string | Type of plan based on the `source`. Valid values for `HEROKU`: `STARTER` (2,000 predictions per calendar month), `BRONZE` (10,000), `SILVER` (250,000), `GOLD` (one million). Valid values for `SALESFORCE`: `STARTER` (2,000 predictions per calendar month), `SFDC_1M_EDITION` (one million). | 1.0 |
| `source` | string | Service that provisioned the plan. Valid values: `HEROKU`, `SALESFORCE`. | 1.0 |

Each `apiusage` object in the response contains plan information for a single calendar month for a single license. If you have a six-month paid plan and you make this call on the first month, the response contains six `apiusage` objects, one for each calendar month in the plan.

- If you're using the free tier, the response contains plan information only for the current month. You see plan information only after you make your first prediction. If you call the `/apiusage` resource before you make your first prediction call, the API returns an empty array.
- If you're using the paid tier, the response contains plan information for each month in your plan, starting with the current month.

The `planData` array contains an object for each plan type associated with the calendar month and the license. This code snippet shows the `planData` if the user has two Heroku GOLD plans.
[block:code]
{
  "codes": [
    {
      "code": "\"planData\": [\n  {\n    \"plan\": \"GOLD\",\n    \"amount\": 2,\n    \"source\": \"HEROKU\"\n  }\n]",
      "language": "json"
    }
  ]
}
[/block]

Definition

`GET https://api.einstein.ai/v2/apiusage`

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/apiusage",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

[block:code]
{
  "codes": [
    {
      "code": "{\n  \"object\": \"list\",\n  \"data\": [\n    {\n      \"id\": \"489\",\n      \"organizationId\": \"108\",\n      \"startsAt\": \"2017-03-01T00:00:00.000Z\",\n      \"endsAt\": \"2017-04-01T00:00:00.000Z\",\n      \"planData\": [\n        {\n          \"plan\": \"STARTER\",\n          \"amount\": 1,\n          \"source\": \"HEROKU\"\n        }\n      ],\n      \"licenseId\": \"kJCHtYDCSf\",\n      \"object\": \"apiusage\",\n      \"predictionsRemaining\": 1997,\n      \"predictionsUsed\": 3,\n      \"predictionsMax\": 2000\n    }\n  ]\n}",
      "language": "json"
    }
  ]
}
[/block]

{"_id":"59de6224666d650024f78fc8","category":"59de6223666d650024f78fa2","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-11-10T20:11:21.728Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"name":"","status":200,"language":"json","code":"{}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","auth":"required","params":[],"url":""},"isReference":true,"order":4,"body":"Known errors are returned in the response body in this format.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"message\\\": \\\"Invalid authentication scheme\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##All##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"HTTP Code\",\n    \"h-1\": \"HTTP Message\",\n    \"h-2\": \"API Message\",\n    \"0-0\": \"401\",\n    \"0-1\": \"Unauthorized\",\n    \"0-2\": \"Invalid access token\",\n    \"0-3\": \"Any\",\n    \"0-4\": \"The access token is expired.\",\n    \"h-3\": \"Resource\",\n    \"h-4\": \"Possible Causes\",\n    \"1-0\": \"401\",\n    \"1-1\": \"Unauthorized\",\n    \"1-2\": \"Invalid authentication scheme\",\n    \"1-3\": \"Any\",\n    \"1-4\": \"An `Authorization` header was provided, but the token isn't properly formatted.\",\n    \"2-0\": \"5XX\",\n    \"2-1\": \"- Internal server error \\n- Service unavailable\",\n    \"2-2\": \"None\",\n    \"2-3\": \"Any\",\n    \"2-4\": \"Our systems encountered and logged an unexpected error. Please contact us if you continue to see the error.\"\n  },\n  \"cols\": 5,\n  \"rows\": 3\n}\n[/block]\n##Datasets##\nError codes that can occur when you access datasets, labels, or examples.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"HTTP Code\",\n    \"h-1\": \"HTTP Message\",\n    \"h-2\": \"API Message\",\n    \"h-3\": \"Resource\",\n    \"h-4\": \"Possible Causes\",\n    \"0-0\": \"400\",\n    \"0-1\": \"Bad Request\",\n    \"0-4\": \"The request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value.\",\n    \"1-0\": \"400\",\n    \"1-1\": \"Bad Request\",\n    \"1-2\": \"The 'name' parameter is required to create a dataset.\",\n    \"1-3\": \"POST \\n`/vision/datasets`\",\n    \"1-4\": \"The `name` parameter was passed in, but no value was provided.\",\n    \"0-2\": \"None\",\n    \"2-0\": \"400\",\n    \"2-1\": \"Bad Request\",\n    \"2-2\": \"Uploading a dataset requires either the 'data' field or the 'path' field.\",\n    \"2-3\": \"POST\\n`/vision/datasets/upload`\",\n    \"2-4\": \"The path to the local .zip file or the URL to the .zip file in the cloud wasn’t specified.\",\n    \"3-0\": \"400\",\n    \"3-1\": \"Bad Request\",\n    \"3-2\": \"The 'data' parameter cannot be duplicated, nor sent along with the 'path' parameter.\",\n    \"3-3\": \"POST \\n`/vision/datasets/upload`\",\n    \"3-4\": \"Both the `path` and `data` parameters were passed to the call; only one of these parameters can be passed.\",\n    \"4-0\": \"400\",\n    \"4-1\": \"Bad Request\",\n    \"4-2\": \"The dataset is not yet available for update, try again once the dataset is ready.\",\n    \"4-3\": \"PUT \\n`/vision/datasets/<DATASET_ID>/upload`\",\n    \"4-4\": \"You’re adding examples to a dataset that’s currently being created. 
You must wait for dataset to become available before you can add examples to it.\",\n    \"7-0\": \"400\",\n    \"7-1\": \"Bad Request\",\n    \"7-2\": \"Example max size supported is 1024000\",\n    \"7-3\": \"POST \\n`/vision/datasets/<DATASET_ID>/examples`\",\n    \"7-4\": \"The image file being added as an example exceeds the maximum file size of 1 MB.\",\n    \"8-0\": \"404\",\n    \"8-1\": \"Not Found\",\n    \"8-4\": \"The requested REST resource doesn’t exist or you don't have permission to access the resource.\",\n    \"0-3\": \"Any dataset, label, or example resources.\",\n    \"8-2\": \"None\",\n    \"8-3\": \"Any dataset, label, or example resources.\",\n    \"9-0\": \"404\",\n    \"9-1\": \"Not Found\",\n    \"9-2\": \"Unable to find dataset.\",\n    \"9-4\": \"- You don’t have access to the dataset.\\n\\n- The dataset was deleted.\",\n    \"9-3\": \"GET \\n`/vision/datasets/<DATASET_ID>`\",\n    \"10-0\": \"404\",\n    \"10-1\": \"Not Found\",\n    \"10-2\": \"Unable to find dataset.\",\n    \"10-3\": \"DELETE \\n`/vision/datasets/<DATASET_ID>`\",\n    \"10-4\": \"- You don’t have access to the dataset.\\n\\n- The dataset was already deleted.\",\n    \"11-0\": \"404\",\n    \"11-1\": \"Bad Request\",\n    \"11-2\": \"Example file already exists for the label <NAME_OF_EXAMPLE>\",\n    \"11-3\": \"POST\\n`/vision/feedback`\",\n    \"11-4\": \"An example with the same name already exists in the dataset. Example names must be unique within a dataset.\",\n    \"13-0\": \"503\",\n    \"13-1\": \"Service Unavailable\",\n    \"13-2\": \"Operation timed out!\",\n    \"13-3\": \"GET\\n`/vision/datasets`\",\n    \"13-4\": \"The call has timed out due to large data size. By default, this call returns 100 datasets. If the datasets contain a lot of examples, this call may time out. Use the `offset` and `count` parameters to limit and page through the data.\",\n    \"12-0\": \"404\",\n    \"12-1\": \"Bad Request\",\n    \"12-2\": \"Duplicate labels are not allowed.\",\n    \"12-3\": \"POST\\n`/vision/datasets`\",\n    \"12-4\": \"The call is trying to create a label with a name that exists in the dataset. Label names must be unique within a dataset.\",\n    \"6-0\": \"400\",\n    \"6-1\": \"Bad Request\",\n    \"6-2\": \"Supported dataset types: [image, image-detection, image-multi-label]\",\n    \"6-3\": \"POST \\n`/vision/datasets/upload`\",\n    \"6-4\": \"The `type` request parameter contains a value that isn't a valid dataset type.\",\n    \"5-0\": \"400\",\n    \"5-1\": \"Bad Request\",\n    \"5-2\": \"Failed to download the dataset from the public URL.\",\n    \"5-3\": \"POST\\n`/vision/datasets/upload`\\n`/vision/datasets/upload/sync`\",\n    \"5-4\": \"The API can't access the dataset file via the URL provided.\\n\\nWhen specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. 
For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.\"\n  },\n  \"cols\": 5,\n  \"rows\": 14\n}\n[/block]\n##Training##\nError codes that can occur when you train a dataset to create a model or access a model.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"HTTP Code\",\n    \"h-1\": \"HTTP Message\",\n    \"h-2\": \"API Message\",\n    \"h-3\": \"Resource\",\n    \"h-4\": \"Possible Causes\",\n    \"0-0\": \"400\",\n    \"0-1\": \"Bad Request\",\n    \"0-2\": \"- The 'name' parameter is required to train.\\n\\n- A valid 'datasetId' parameter is required to create a example.\",\n    \"0-3\": \"POST \\n`/vision/train`\",\n    \"0-4\": \"The `name` or `datasetId` parameter was passed in, but no parameter value was provided.\",\n    \"1-0\": \"400\",\n    \"1-1\": \"Bad Request\",\n    \"1-2\": \"The 'name', and 'datasetId' parameters are required to train.\",\n    \"1-3\": \"POST \\n`/vision/train`\",\n    \"1-4\": \"The `name` or `datasetId` parameter is missing.\",\n    \"2-0\": \"400\",\n    \"2-1\": \"Bad Request\",\n    \"2-2\": \"Invalid id <MODEL_ID>\",\n    \"2-3\": \"GET \\n`/vision/train/<MODEL_ID>`\",\n    \"2-4\": \"There’s no model with an ID that matches the `modelId` parameter.\",\n    \"3-0\": \"400\",\n    \"3-1\": \"Bad Request\",\n    \"3-2\": \"Invalid id <MODEL_ID>\",\n    \"3-3\": \"GET \\n`/vision/train/<MODEL_ID>/lc`\",\n    \"3-4\": \"There’s no model with an ID that matches the `modelId` parameter.\",\n    \"4-0\": \"400\",\n    \"4-1\": \"Bad Request\",\n    \"4-2\": \"- The job has not terminated yet; its current status is RUNNING.\\n\\n- The job has not terminated yet; its current status is QUEUED.\",\n    \"4-3\": \"GET \\n`/vision/models/<MODEL_ID>`\",\n    \"4-4\": \"The model for which you are getting metrics hasn’t completed training.\",\n    \"5-0\": \"404\",\n    \"5-1\": \"Not Found\",\n    \"5-2\": \"None\",\n    \"5-3\": \"GET \\n`/vision/models/<MODEL_ID>`\",\n    \"5-4\": \"The `modelId` parameter is missing.\",\n    \"6-0\": \"405\",\n    \"6-1\": \"Method Not Allowed\",\n    \"6-2\": \"None\",\n    \"6-3\": \"GET \\n`/vision/train/<MODEL_ID>`\",\n    \"6-4\": \"The `modelId` parameter is missing.\"\n  },\n  \"cols\": 5,\n  \"rows\": 7\n}\n[/block]\n##Prediction##\nError codes that can occur when you make a prediction.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"HTTP Code\",\n    \"h-1\": \"HTTP Message\",\n    \"h-2\": \"API Message\",\n    \"h-3\": \"Resource\",\n    \"h-4\": \"Possible Causes\",\n    \"0-0\": \"400\",\n    \"0-1\": \"Bad Request\",\n    \"0-2\": \"None\",\n    \"0-4\": \"The prediction request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value.\",\n    \"0-3\": \"POST\\n`/vision/predict`\",\n    \"1-0\": \"400\",\n    \"1-1\": \"Bad Request\",\n    \"1-2\": \"Bad Request: Bad sampleLocation\",\n    \"1-3\": \"POST\\n`/vision/predict`\",\n    \"1-4\": \"The URL passed in the `sampleLocation` parameter is invalid. 
The URL could be incorrect, contain the wrong file name, or the file may have been moved.\",\n    \"2-0\": \"400\",\n    \"2-1\": \"Bad Request\",\n    \"2-2\": \"The modelId parameter is required.\",\n    \"2-3\": \"POST\\n`/vision/predict`\",\n    \"2-4\": \"The `modelId` parameter is missing.\",\n    \"3-0\": \"400\",\n    \"3-1\": \"Bad Request\",\n    \"3-2\": \"Bad Request: Missing sampleLocation, sampleBase64Content, and sampleContent\",\n    \"3-3\": \"POST\\n`/vision/predict`\",\n    \"3-4\": \"The parameter that specifies the image to predict is missing.\",\n    \"4-0\": \"400\",\n    \"4-1\": \"Bad Request\",\n    \"4-2\": \"File size limit exceeded\",\n    \"4-3\": \"POST\\n`/vision/predict`\",\n    \"4-4\": \"The file you passed in for prediction exceeds the maximum file size limit of 5 MB.\",\n    \"5-0\": \"400\",\n    \"5-1\": \"Bad Request\",\n    \"5-2\": \"Bad Request: Unsupported sample file format\",\n    \"5-3\": \"POST\\n`/vision/predict`\",\n    \"5-4\": \"The file you passed in for prediction isn’t one of the supported file types.\",\n    \"6-0\": \"403\",\n    \"6-1\": \"Forbidden\",\n    \"6-2\": \"Forbidden!\",\n    \"6-3\": \"POST\\n`/vision/predict`\",\n    \"6-4\": \"- The model specified by the `modelId` parameter doesn’t exist.\\n\\n- The `modelId` parameter was passed in but no value was provided.\",\n    \"7-0\": \"429\",\n    \"7-1\": \"Too Many Requests\",\n    \"7-2\": \"You've reached the maximum number of predictions.\",\n    \"7-3\": \"POST `/vision/predict`\",\n    \"7-4\": \"You have exceeded the number of prediction requests for your current plan. Contact your AE to update your plan. See [Rate Limits](doc:rate-limits).\"\n  },\n  \"cols\": 5,\n  \"rows\": 8\n}\n[/block]","excerpt":"If an API call is unsuccessful, it returns an HTTP error code. If the error is known, you receive a message in the response body.","slug":"api-error-codes-and-messages","type":"basic","title":"API Error Codes and Messages","__v":0,"childrenPages":[]}

API Error Codes and Messages

If an API call is unsuccessful, it returns an HTTP error code. If the error is known, you receive a message in the response body.

Known errors are returned in the response body in this format.
[block:code]
{
  "codes": [
    {
      "code": "{\n  \"message\": \"Invalid authentication scheme\"\n}",
      "language": "json"
    }
  ]
}
[/block]
##All##

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
|---|---|---|---|---|
| 401 | Unauthorized | Invalid access token | Any | The access token is expired. |
| 401 | Unauthorized | Invalid authentication scheme | Any | An `Authorization` header was provided, but the token isn't properly formatted. |
| 5XX | Internal server error or Service unavailable | None | Any | Our systems encountered and logged an unexpected error. Please contact us if you continue to see the error. |

##Datasets##
Error codes that can occur when you access datasets, labels, or examples.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
|---|---|---|---|---|
| 400 | Bad Request | None | Any dataset, label, or example resource | The request couldn't be fulfilled because the HTTP request was malformed, the content type was incorrect, there were missing parameters, or a parameter was provided with an invalid value. |
| 400 | Bad Request | The 'name' parameter is required to create a dataset. | POST `/vision/datasets` | The `name` parameter was passed in, but no value was provided. |
| 400 | Bad Request | Uploading a dataset requires either the 'data' field or the 'path' field. | POST `/vision/datasets/upload` | The path to the local .zip file or the URL to the .zip file in the cloud wasn't specified. |
| 400 | Bad Request | The 'data' parameter cannot be duplicated, nor sent along with the 'path' parameter. | POST `/vision/datasets/upload` | Both the `path` and `data` parameters were passed to the call; only one of these parameters can be passed. |
| 400 | Bad Request | The dataset is not yet available for update, try again once the dataset is ready. | PUT `/vision/datasets/<DATASET_ID>/upload` | You're adding examples to a dataset that's currently being created. You must wait for the dataset to become available before you can add examples to it. |
| 400 | Bad Request | Failed to download the dataset from the public URL. | POST `/vision/datasets/upload`, `/vision/datasets/upload/sync` | The API can't access the dataset file via the URL provided. When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`. |
| 400 | Bad Request | Supported dataset types: [image, image-detection, image-multi-label] | POST `/vision/datasets/upload` | The `type` request parameter contains a value that isn't a valid dataset type. |
| 400 | Bad Request | Example max size supported is 1024000 | POST `/vision/datasets/<DATASET_ID>/examples` | The image file being added as an example exceeds the maximum file size of 1 MB. |
| 404 | Not Found | None | Any dataset, label, or example resource | The requested REST resource doesn't exist, or you don't have permission to access the resource. |
| 404 | Not Found | Unable to find dataset. | GET `/vision/datasets/<DATASET_ID>` | You don't have access to the dataset, or the dataset was deleted. |
| 404 | Not Found | Unable to find dataset. | DELETE `/vision/datasets/<DATASET_ID>` | You don't have access to the dataset, or the dataset was already deleted. |
| 404 | Bad Request | Example file already exists for the label <NAME_OF_EXAMPLE> | POST `/vision/feedback` | An example with the same name already exists in the dataset. Example names must be unique within a dataset. |
| 404 | Bad Request | Duplicate labels are not allowed. | POST `/vision/datasets` | The call is trying to create a label with a name that already exists in the dataset. Label names must be unique within a dataset. |
| 503 | Service Unavailable | Operation timed out! | GET `/vision/datasets` | The call timed out due to large data size. By default, this call returns 100 datasets. If the datasets contain a lot of examples, the call may time out. Use the `offset` and `count` parameters to limit and page through the data. |

##Training##
Error codes that can occur when you train a dataset to create a model or access a model.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
|---|---|---|---|---|
| 400 | Bad Request | The 'name' parameter is required to train.<br>A valid 'datasetId' parameter is required to create a example. | POST `/vision/train` | The `name` or `datasetId` parameter was passed in, but no parameter value was provided. |
| 400 | Bad Request | The 'name', and 'datasetId' parameters are required to train. | POST `/vision/train` | The `name` or `datasetId` parameter is missing. |
| 400 | Bad Request | Invalid id <MODEL_ID> | GET `/vision/train/<MODEL_ID>` | There's no model with an ID that matches the `modelId` parameter. |
| 400 | Bad Request | Invalid id <MODEL_ID> | GET `/vision/train/<MODEL_ID>/lc` | There's no model with an ID that matches the `modelId` parameter. |
| 400 | Bad Request | The job has not terminated yet; its current status is RUNNING.<br>The job has not terminated yet; its current status is QUEUED. | GET `/vision/models/<MODEL_ID>` | The model for which you're getting metrics hasn't completed training. |
| 404 | Not Found | None | GET `/vision/models/<MODEL_ID>` | The `modelId` parameter is missing. |
| 405 | Method Not Allowed | None | GET `/vision/train/<MODEL_ID>` | The `modelId` parameter is missing. |

##Prediction##
Error codes that can occur when you make a prediction.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
|---|---|---|---|---|
| 400 | Bad Request | None | POST `/vision/predict` | The prediction request couldn't be fulfilled because the HTTP request was malformed, the content type was incorrect, there were missing parameters, or a parameter was provided with an invalid value. |
| 400 | Bad Request | Bad Request: Bad sampleLocation | POST `/vision/predict` | The URL passed in the `sampleLocation` parameter is invalid. The URL could be incorrect, contain the wrong file name, or the file may have been moved. |
| 400 | Bad Request | The modelId parameter is required. | POST `/vision/predict` | The `modelId` parameter is missing. |
| 400 | Bad Request | Bad Request: Missing sampleLocation, sampleBase64Content, and sampleContent | POST `/vision/predict` | The parameter that specifies the image to predict is missing. |
| 400 | Bad Request | File size limit exceeded | POST `/vision/predict` | The file you passed in for prediction exceeds the maximum file size limit of 5 MB. |
| 400 | Bad Request | Bad Request: Unsupported sample file format | POST `/vision/predict` | The file you passed in for prediction isn't one of the supported file types. |
| 403 | Forbidden | Forbidden! | POST `/vision/predict` | The model specified by the `modelId` parameter doesn't exist, or the `modelId` parameter was passed in but no value was provided. |
| 429 | Too Many Requests | You've reached the maximum number of predictions. | POST `/vision/predict` | You have exceeded the number of prediction requests for your current plan. Contact your account executive to update your plan. See [Rate Limits](doc:rate-limits). |
Known errors are returned in the response body in this format. [block:code] { "codes": [ { "code": "{\n \"message\": \"Invalid authentication scheme\"\n}", "language": "json" } ] } [/block] ##All## [block:parameters] { "data": { "h-0": "HTTP Code", "h-1": "HTTP Message", "h-2": "API Message", "0-0": "401", "0-1": "Unauthorized", "0-2": "Invalid access token", "0-3": "Any", "0-4": "The access token is expired.", "h-3": "Resource", "h-4": "Possible Causes", "1-0": "401", "1-1": "Unauthorized", "1-2": "Invalid authentication scheme", "1-3": "Any", "1-4": "An `Authorization` header was provided, but the token isn't properly formatted.", "2-0": "5XX", "2-1": "- Internal server error \n- Service unavailable", "2-2": "None", "2-3": "Any", "2-4": "Our systems encountered and logged an unexpected error. Please contact us if you continue to see the error." }, "cols": 5, "rows": 3 } [/block] ##Datasets## Error codes that can occur when you access datasets, labels, or examples. [block:parameters] { "data": { "h-0": "HTTP Code", "h-1": "HTTP Message", "h-2": "API Message", "h-3": "Resource", "h-4": "Possible Causes", "0-0": "400", "0-1": "Bad Request", "0-4": "The request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value.", "1-0": "400", "1-1": "Bad Request", "1-2": "The 'name' parameter is required to create a dataset.", "1-3": "POST \n`/vision/datasets`", "1-4": "The `name` parameter was passed in, but no value was provided.", "0-2": "None", "2-0": "400", "2-1": "Bad Request", "2-2": "Uploading a dataset requires either the 'data' field or the 'path' field.", "2-3": "POST\n`/vision/datasets/upload`", "2-4": "The path to the local .zip file or the URL to the .zip file in the cloud wasn’t specified.", "3-0": "400", "3-1": "Bad Request", "3-2": "The 'data' parameter cannot be duplicated, nor sent along with the 'path' parameter.", "3-3": "POST \n`/vision/datasets/upload`", "3-4": "Both the `path` and `data` parameters were passed to the call; only one of these parameters can be passed.", "4-0": "400", "4-1": "Bad Request", "4-2": "The dataset is not yet available for update, try again once the dataset is ready.", "4-3": "PUT \n`/vision/datasets/<DATASET_ID>/upload`", "4-4": "You’re adding examples to a dataset that’s currently being created. 
You must wait for dataset to become available before you can add examples to it.", "7-0": "400", "7-1": "Bad Request", "7-2": "Example max size supported is 1024000", "7-3": "POST \n`/vision/datasets/<DATASET_ID>/examples`", "7-4": "The image file being added as an example exceeds the maximum file size of 1 MB.", "8-0": "404", "8-1": "Not Found", "8-4": "The requested REST resource doesn’t exist or you don't have permission to access the resource.", "0-3": "Any dataset, label, or example resources.", "8-2": "None", "8-3": "Any dataset, label, or example resources.", "9-0": "404", "9-1": "Not Found", "9-2": "Unable to find dataset.", "9-4": "- You don’t have access to the dataset.\n\n- The dataset was deleted.", "9-3": "GET \n`/vision/datasets/<DATASET_ID>`", "10-0": "404", "10-1": "Not Found", "10-2": "Unable to find dataset.", "10-3": "DELETE \n`/vision/datasets/<DATASET_ID>`", "10-4": "- You don’t have access to the dataset.\n\n- The dataset was already deleted.", "11-0": "404", "11-1": "Bad Request", "11-2": "Example file already exists for the label <NAME_OF_EXAMPLE>", "11-3": "POST\n`/vision/feedback`", "11-4": "An example with the same name already exists in the dataset. Example names must be unique within a dataset.", "13-0": "503", "13-1": "Service Unavailable", "13-2": "Operation timed out!", "13-3": "GET\n`/vision/datasets`", "13-4": "The call has timed out due to large data size. By default, this call returns 100 datasets. If the datasets contain a lot of examples, this call may time out. Use the `offset` and `count` parameters to limit and page through the data.", "12-0": "404", "12-1": "Bad Request", "12-2": "Duplicate labels are not allowed.", "12-3": "POST\n`/vision/datasets`", "12-4": "The call is trying to create a label with a name that exists in the dataset. Label names must be unique within a dataset.", "6-0": "400", "6-1": "Bad Request", "6-2": "Supported dataset types: [image, image-detection, image-multi-label]", "6-3": "POST \n`/vision/datasets/upload`", "6-4": "The `type` request parameter contains a value that isn't a valid dataset type.", "5-0": "400", "5-1": "Bad Request", "5-2": "Failed to download the dataset from the public URL.", "5-3": "POST\n`/vision/datasets/upload`\n`/vision/datasets/upload/sync`", "5-4": "The API can't access the dataset file via the URL provided.\n\nWhen specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`." }, "cols": 5, "rows": 14 } [/block] ##Training## Error codes that can occur when you train a dataset to create a model or access a model. 
[block:parameters]
{ "data": { "h-0": "HTTP Code", "h-1": "HTTP Message", "h-2": "API Message", "h-3": "Resource", "h-4": "Possible Causes", "0-0": "400", "0-1": "Bad Request", "0-2": "- The 'name' parameter is required to train.\n\n- A valid 'datasetId' parameter is required to create a example.", "0-3": "POST \n`/vision/train`", "0-4": "The `name` or `datasetId` parameter was passed in, but no parameter value was provided.", "1-0": "400", "1-1": "Bad Request", "1-2": "The 'name', and 'datasetId' parameters are required to train.", "1-3": "POST \n`/vision/train`", "1-4": "The `name` or `datasetId` parameter is missing.", "2-0": "400", "2-1": "Bad Request", "2-2": "Invalid id <MODEL_ID>", "2-3": "GET \n`/vision/train/<MODEL_ID>`", "2-4": "There’s no model with an ID that matches the `modelId` parameter.", "3-0": "400", "3-1": "Bad Request", "3-2": "Invalid id <MODEL_ID>", "3-3": "GET \n`/vision/train/<MODEL_ID>/lc`", "3-4": "There’s no model with an ID that matches the `modelId` parameter.", "4-0": "400", "4-1": "Bad Request", "4-2": "- The job has not terminated yet; its current status is RUNNING.\n\n- The job has not terminated yet; its current status is QUEUED.", "4-3": "GET \n`/vision/models/<MODEL_ID>`", "4-4": "The model for which you are getting metrics hasn’t completed training.", "5-0": "404", "5-1": "Not Found", "5-2": "None", "5-3": "GET \n`/vision/models/<MODEL_ID>`", "5-4": "The `modelId` parameter is missing.", "6-0": "405", "6-1": "Method Not Allowed", "6-2": "None", "6-3": "GET \n`/vision/train/<MODEL_ID>`", "6-4": "The `modelId` parameter is missing." }, "cols": 5, "rows": 7 }
[/block]

##Prediction##
Error codes that can occur when you make a prediction.
[block:parameters]
{ "data": { "h-0": "HTTP Code", "h-1": "HTTP Message", "h-2": "API Message", "h-3": "Resource", "h-4": "Possible Causes", "0-0": "400", "0-1": "Bad Request", "0-2": "None", "0-4": "The prediction request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value.", "0-3": "POST\n`/vision/predict`", "1-0": "400", "1-1": "Bad Request", "1-2": "Bad Request: Bad sampleLocation", "1-3": "POST\n`/vision/predict`", "1-4": "The URL passed in the `sampleLocation` parameter is invalid. The URL could be incorrect, contain the wrong file name, or the file may have been moved.", "2-0": "400", "2-1": "Bad Request", "2-2": "The modelId parameter is required.", "2-3": "POST\n`/vision/predict`", "2-4": "The `modelId` parameter is missing.", "3-0": "400", "3-1": "Bad Request", "3-2": "Bad Request: Missing sampleLocation, sampleBase64Content, and sampleContent", "3-3": "POST\n`/vision/predict`", "3-4": "The parameter that specifies the image to predict is missing.", "4-0": "400", "4-1": "Bad Request", "4-2": "File size limit exceeded", "4-3": "POST\n`/vision/predict`", "4-4": "The file you passed in for prediction exceeds the maximum file size limit of 5 MB.", "5-0": "400", "5-1": "Bad Request", "5-2": "Bad Request: Unsupported sample file format", "5-3": "POST\n`/vision/predict`", "5-4": "The file you passed in for prediction isn’t one of the supported file types.", "6-0": "403", "6-1": "Forbidden", "6-2": "Forbidden!", "6-3": "POST\n`/vision/predict`", "6-4": "- The model specified by the `modelId` parameter doesn’t exist.\n\n- The `modelId` parameter was passed in but no value was provided.", "7-0": "429", "7-1": "Too Many Requests", "7-2": "You've reached the maximum number of predictions.", "7-3": "POST\n`/vision/predict`", "7-4": "You have exceeded the number of prediction requests for your current plan. Contact your Salesforce account executive (AE) to update your plan. See [Rate Limits](doc:rate-limits)." }, "cols": 5, "rows": 8 }
[/block]
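If the `GET /vision/datasets` call hits the 503 timeout above, page through your datasets rather than fetching them all at once. Here's a minimal sketch using the `offset` and `count` query parameters described in the Datasets table; the page size of 25 and the token placeholder are illustrative:
[block:code]
{ "codes": [ { "code": "// Fetch the first page of datasets (25 at a time) to avoid the 503 timeout\ncurl -X GET -H \"Authorization: Bearer <TOKEN>\" \"https://api.einstein.ai/v2/vision/datasets?offset=0&count=25\"\n\n// Fetch the next page\ncurl -X GET -H \"Authorization: Bearer <TOKEN>\" \"https://api.einstein.ai/v2/vision/datasets?offset=25&count=25\"", "language": "curl" } ] }
[/block]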
{"_id":"59de6224666d650024f78fbf","category":"59de6223666d650024f78fa3","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-02-16T17:46:31.641Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"post","results":{"codes":[{"name":"","code":"{\n  \"id\": 1000014,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2017-02-16T16:25:57.000+0000\",\n  \"updatedAt\": \"2017-02-16T16:25:57.000+0000\",\n  \"labelSummary\": {\n    \"labels\": []\n  },\n  \"totalExamples\": 0,\n  \"available\": false,\n  \"statusMsg\": \"UPLOADING\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"code":"{}","language":"json","status":400,"name":""}]},"settings":"","examples":{"codes":[{"language":"curl","code":"// Create dataset from a local file\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"data=@C:\\Data\\mountainvsbeach.zip\" -F \"type=image\"  https://api.einstein.ai/v2/vision/datasets/upload\n\n// Create dataset from a web file\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://einstein.ai/images/mountainvsbeach.zip\" -F \"type=image\"  https://api.einstein.ai/v2/vision/datasets/upload"}]},"auth":"required","params":[],"url":"/vision/datasets/upload"},"isReference":false,"order":0,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"2-0\": \"`path`\",\n    \"0-1\": \"string\",\n    \"2-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB.\",\n    \"0-3\": \"1.0\",\n    \"2-2\": \"URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`type`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"3-3\": \"1.0\",\n    \"1-0\": \"`name`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Name of the dataset. Optional. If this parameter is omitted, the dataset name is derived from the .zip file name.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\nThe API call is asynchronous, so you receive a dataset ID back immediately but the `available` value is `false` and the `statusMsg` value is `UPLOADING`. Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete, and you can train the dataset to create a model. \n\nYou must provide the path to the .zip file on either the local machine or in the cloud. \n\nIf the dataset type is `image` or `image-multi-label`, this API:\n- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.\n- Creates a label for each directory in the .zip file. 
The label name is the same name as the directory name (limit is 180 characters).\n- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name. \n\nIf the dataset type is `image-detection`, this API:\n- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.\n- Creates a label for each unique label in the annotations.csv file (limit is 180 characters).\n- Creates an example for each image file in the .zip file.\n\nKeep the following points in mind when creating datasets.\n###All Datasets###\n- If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn't require authentication and pass the URL in the `path` parameter.\n\n- The maximum .zip file size you can upload from a local drive is 50 MB. The maximum .zip file size you can upload from a web location is 1 GB.\n\n- The maximum total dataset size is 1 GB.\n \n- If the `name` parameter is passed, the maximum length is 100 characters.\n\n- The maximum image file name length is 150 characters including the file extension. If the .zip file contains a file with a name greater than 150 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 150 characters.\n\n- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.\n\n- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`\n\n- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.\n\n- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.\n\n###Image or Image Multi-Label Datasets###\n\n- The .zip file must have a specific directory structure:\n - In the root, there should be a parent directory that contains subdirectories. \n - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain images to be added to the dataset.\n - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.\n\n\n- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).\n\n- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.\n\n- The maximum directory name length is 180 characters. 
If the .zip file contains a directory with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- The minimum number of examples per label is 10.\n\n- The minimum number of total examples across all labels is 40.\n\n- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.\n\n- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.\n\n- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n\n- Duplicate images are handled differently based on the dataset type.\n -  Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.\n - Multi-label—For datasets of type  `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label.\n\n\n###Object Detection Datasets###\n\n- Here are the guidelines for the .zip file:\n - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.\n - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset.\n - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.\n - The annotations.csv file can be anywhere within the .zip file.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/d8c446e-obj_det_zip_file_format.png\",\n        \"obj_det_zip_file_format.png\",\n        535,\n        164,\n        \"#cccab6\"\n      ]\n    }\n  ]\n}\n[/block]\n- The maximum label name length is 180 characters. If the annotations file contains a label with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- Labels are case sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.\n\n- When you create a dataset, all the images are checked for duplicates. 
If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.\n\n- If there's an image in the .zip file, but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.\n\n####Annotations.csv File Format####\n\nThe annotations.csv file contains the bounding box coordinates and the labels for each image.\n\n1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string. \n\n    - `image_file`—Header for the image file name. \n    - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.\n\n\n2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. The required fields for each bounding box are:\n - `label`—Classification label for the content in the bounding box.\n - `height`—Height of the bounding box in pixels.\n - `width`—Width of the bounding box in pixels.\n - `x`—Location of the bounding box on the horizontal axis.\n - `y`—Location of the bounding box on the vertical axis.\n \nHere's an example of an annotations.csv file for two images.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"image_file\\\",\\\"box0\\\",\\\"box1\\\"\\n\\\"picture1.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"cat\\\"\\\", \\\"\\\"y\\\"\\\": 242, \\\"\\\"x\\\"\\\": 160, \\\"\\\"height\\\"\\\": 62, \\\"\\\"width\\\"\\\": 428}\\\", \\\"{\\\"\\\"label\\\"\\\": \\\"\\\"turtle\\\"\\\", \\\"\\\"y\\\"\\\": 113, \\\"\\\"x\\\"\\\": 61, \\\"\\\"height\\\"\\\": 74, \\\"\\\"width\\\"\\\": 718}\\\"\\n\\\"picture2.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 94, \\\"\\\"x\\\"\\\": 27, \\\"\\\"height\\\"\\\": 144, \\\"\\\"width\\\"\\\": 184}\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 50, \\\"\\\"x\\\"\\\": 286, \\\"\\\"height\\\"\\\": 344, \\\"\\\"width\\\"\\\": 348}\\\"\\n\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nHere's the second image referenced in the annotations.csv file showing the bounding boxes.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ef4040e-7f6c214-annotations-format.png\",\n        \"7f6c214-annotations-format.png\",\n        480,\n        321,\n        \"#6f7554\"\n      ]\n    }\n  ]\n}\n[/block]\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset. 
This is an asynchronous call, so the `labels` array is empty when you first create a dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"9-0\": \"`updatedAt`\",\n    \"9-1\": \"date\",\n    \"9-2\": \"Date and time that the dataset was last updated.\",\n    \"9-3\": \"1.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset creation and  data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload  is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"6-3\": \"1.0\",\n    \"8-0\": \"`type`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"8-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 10\n}\n[/block]","excerpt":"Creates a dataset, labels, and examples from the specified .zip file. The call returns immediately and continues to upload the images in the background.","slug":"create-a-dataset-zip-async","type":"post","title":"Create a Dataset From a Zip File Asynchronously","__v":0,"childrenPages":[]}

POST: Create a Dataset From a Zip File Asynchronously

Creates a dataset, labels, and examples from the specified .zip file. The call returns immediately and continues to upload the images in the background.

##Request Parameters##
[block:parameters]
{ "data": { "0-0": "`data`", "2-0": "`path`", "0-1": "string", "2-1": "string", "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-2": "Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB.", "0-3": "1.0", "2-2": "URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB.", "2-3": "1.0", "3-0": "`type`", "3-1": "string", "3-2": "Type of dataset data. Valid values are:\n- `image`\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.", "3-3": "1.0", "1-0": "`name`", "1-1": "string", "1-2": "Name of the dataset. Optional. If this parameter is omitted, the dataset name is derived from the .zip file name.", "1-3": "1.0" }, "cols": 4, "rows": 4 }
[/block]
The API call is asynchronous, so you receive a dataset ID back immediately, but the `available` value is `false` and the `statusMsg` value is `UPLOADING`. Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to check whether the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete, and you can train the dataset to create a model.

You must provide the path to the .zip file either on the local machine or in the cloud.

If the dataset type is `image` or `image-multi-label`, this API:
- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.
- Creates a label for each directory in the .zip file. The label name is the same as the directory name (limit is 180 characters).
- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name.

If the dataset type is `image-detection`, this API:
- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.
- Creates a label for each unique label in the annotations.csv file (limit is 180 characters).
- Creates an example for each image file in the .zip file.

Keep the following points in mind when creating datasets.

###All Datasets###
- If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn't require authentication and pass the URL in the `path` parameter.
- The maximum .zip file size you can upload from a local drive is 50 MB. The maximum .zip file size you can upload from a web location is 1 GB.
- The maximum total dataset size is 1 GB.
- If the `name` parameter is passed, the maximum length is 100 characters.
- The maximum image file name length is 150 characters including the file extension. If the .zip file contains a file with a name longer than 150 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 150 characters.
- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg`, the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.
- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.
- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.
- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.

###Image or Image Multi-Label Datasets###
- The .zip file must have a specific directory structure:
 - In the root, there should be a parent directory that contains subdirectories.
 - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain the images to be added to the dataset.
 - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.
- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).
- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.
- The maximum directory name length is 180 characters. If the .zip file contains a directory with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.
- The minimum number of examples per label is 10.
- The minimum number of total examples across all labels is 40.
- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.
- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.
- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.
- Duplicate images are handled differently based on the dataset type.
 - Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.
 - Multi-label—For datasets of type `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label.

###Object Detection Datasets###
- Here are the guidelines for the .zip file:
 - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.
 - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset.
 - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.
 - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.
 - The annotations.csv file can be anywhere within the .zip file.
[block:image]
{ "images": [ { "image": [ "https://files.readme.io/d8c446e-obj_det_zip_file_format.png", "obj_det_zip_file_format.png", 535, 164, "#cccab6" ] } ] }
[/block]
- The maximum label name length is 180 characters. If the annotations file contains a label with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.
- Labels are case sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.
- When you create a dataset, all the images are checked for duplicates. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.
- If there's an image in the .zip file but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.

####Annotations.csv File Format####

The annotations.csv file contains the bounding box coordinates and the labels for each image.

1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string.
 - `image_file`—Header for the image file name.
 - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.
2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. The required fields for each bounding box are:
 - `label`—Classification label for the content in the bounding box.
 - `height`—Height of the bounding box in pixels.
 - `width`—Width of the bounding box in pixels.
 - `x`—Location of the bounding box on the horizontal axis.
 - `y`—Location of the bounding box on the vertical axis.

Here's an example of an annotations.csv file for two images.
[block:code]
{ "codes": [ { "code": "\"image_file\",\"box0\",\"box1\"\n\"picture1.jpg\",\"{\"\"label\"\": \"\"cat\"\", \"\"y\"\": 242, \"\"x\"\": 160, \"\"height\"\": 62, \"\"width\"\": 428}\", \"{\"\"label\"\": \"\"turtle\"\", \"\"y\"\": 113, \"\"x\"\": 61, \"\"height\"\": 74, \"\"width\"\": 718}\"\n\"picture2.jpg\",\"{\"\"label\"\": \"\"dog\"\", \"\"y\"\": 94, \"\"x\"\": 27, \"\"height\"\": 144, \"\"width\"\": 184}\",\"{\"\"label\"\": \"\"dog\"\", \"\"y\"\": 50, \"\"x\"\": 286, \"\"height\"\": 344, \"\"width\"\": 348}\"\n", "language": "text" } ] }
[/block]
Here's the second image referenced in the annotations.csv file showing the bounding boxes.
[block:image]
{ "images": [ { "image": [ "https://files.readme.io/ef4040e-7f6c214-annotations-format.png", "7f6c214-annotations-format.png", 480, 321, "#6f7554" ] } ] }
[/block]

##Response Body##
[block:parameters]
{ "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`available`", "0-1": "boolean", "0-2": "Specifies whether the dataset is ready to be trained.", "0-3": "1.0", "2-0": "`id`", "2-1": "long", "2-2": "Dataset ID.", "2-3": "1.0", "3-0": "`labelSummary`", "3-1": "object", "3-2": "Contains the `labels` array that contains all the labels for the dataset. This is an asynchronous call, so the `labels` array is empty when you first create a dataset.", "3-3": "1.0", "4-0": "`name`", "4-1": "string", "4-2": "Name of the dataset.", "4-3": "1.0", "5-0": "`object`", "5-1": "string", "5-2": "Object returned; in this case, `dataset`.", "5-3": "1.0", "7-0": "`totalExamples`", "7-1": "int", "7-2": "Total number of examples in the dataset.", "7-3": "1.0", "9-0": "`updatedAt`", "9-1": "date", "9-2": "Date and time that the dataset was last updated.", "9-3": "1.0", "1-0": "`createdAt`", "1-1": "date", "1-2": "Date and time that the dataset was created.", "1-3": "1.0", "6-0": "`statusMsg`", "6-1": "string", "6-2": "Status of the dataset creation and data upload. Valid values are:\n- `FAILURE: <failure_reason>`—Data upload has failed.\n- `SUCCEEDED`—Data upload is complete.\n- `UPLOADING`—Data upload is in progress.", "6-3": "1.0", "8-0": "`type`", "8-1": "string", "8-2": "Type of dataset data. Valid values are:\n- `image`\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.", "8-3": "1.0" }, "cols": 4, "rows": 10 }
[/block]
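Putting the asynchronous flow together, here's a minimal cURL sketch: create the dataset, then poll [Get a Dataset](doc:get-a-dataset) until it's ready. The token, .zip file URL, and dataset ID `1000014` are illustrative placeholders:
[block:code]
{ "codes": [ { "code": "// Step 1: Create the dataset; the call returns immediately with \"available\": false and \"statusMsg\": \"UPLOADING\"\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://einstein.ai/images/mountainvsbeach.zip\" -F \"type=image\" https://api.einstein.ai/v2/vision/datasets/upload\n\n// Step 2: Poll with the dataset ID from the response until \"available\" is true and \"statusMsg\" is \"SUCCEEDED\"\ncurl -X GET -H \"Authorization: Bearer <TOKEN>\" https://api.einstein.ai/v2/vision/datasets/1000014", "language": "curl" } ] }
[/block]
Once the poll reports `SUCCEEDED`, the dataset is ready to train.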

{"_id":"59de6224666d650024f78fc0","category":"59de6223666d650024f78fa3","user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-02-16T22:03:40.674Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"// Create dataset from a local file\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"data=@C:\\Data\\mountainvsbeach.zip\" -F \"type=image\"  https://api.einstein.ai/v2/vision/datasets/upload/sync\n\n// Create dataset from a web file\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://einstein.ai/images/mountainvsbeach.zip\" -F \"type=image\" https://api.einstein.ai/v2/vision/datasets/upload/sync"}]},"method":"post","results":{"codes":[{"name":"","code":"{\n  \"id\": 1000022,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2017-02-16T22:26:21.000+0000\",\n  \"updatedAt\": \"2017-02-16T22:26:21.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 1814,\n        \"datasetId\": 1000022,\n        \"name\": \"Mountains\",\n        \"numExamples\": 50\n      },\n      {\n        \"id\": 1815,\n        \"datasetId\": 1000022,\n        \"name\": \"Beaches\",\n        \"numExamples\": 49\n      }\n    ]\n  },\n  \"totalExamples\": 99,\n  \"totalLabels\": 2,\n  \"available\": true,\n  \"statusMsg\": \"SUCCEEDED\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":"/vision/datasets/upload/sync"},"isReference":false,"order":1,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"2-0\": \"`path`\",\n    \"0-1\": \"string\",\n    \"2-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB.\",\n    \"0-3\": \"1.0\",\n    \"2-2\": \"URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`type`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"3-3\": \"1.0\",\n    \"1-0\": \"`name`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Name of the dataset. Optional. If this parameter is omitted, the dataset name is derived from the .zip file name.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\nThe API call is synchronous, so results are returned after the data has been uploaded to the dataset. If this call succeeds, it returns the `labels` array, `available` is `true`, and `statusMsg` is `SUCCEEDED`.\n\nYou must provide the path to the .zip file on either the local machine or in the cloud. 
\n\nIf the dataset type is `image` or `image-multi-label`, this API:\n- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.\n- Creates a label for each directory in the .zip file. The label name is the same name as the directory name (limit is 180 characters).\n- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name. \n\nIf the dataset type is `image-detection`, this API:\n- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.\n- Creates a label for each unique label in the annotations.csv file (limit is 180 characters).\n- Creates an example for each image file in the .zip file.\n\nKeep the following points in mind when creating datasets.\n###All Datasets###\n\n- If your .zip file is more than 10 MB, we recommend that you use the asynchronous call to create a dataset. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).\n\n- The maximum .zip file size you can upload from a local drive is 50 MB. The maximum .zip file size you can upload from a web location is 1 GB.\n\n- The maximum total dataset size is 1 GB.\n \n- If the `name` parameter is passed, the maximum length is 100 characters.\n\n- The maximum image file name length is 150 characters including the file extension. If the .zip file contains a file with a name greater than 150 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 150 characters.\n\n- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.\n\n- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`\n\n- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.\n\n- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.\n\n###Image or Image Multi-Label Datasets###\n\n- The .zip file must have a specific directory structure:\n - In the root, there should be a parent directory that contains subdirectories. \n - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain images to be added to the dataset.\n - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.\n\n\n- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).\n\n- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). 
You can't add a single example from a file.\n\n- The maximum directory name length is 180 characters. If the .zip file contains a directory with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- The minimum number of examples per label is 10.\n\n- The minimum number of total examples across all labels is 40.\n\n- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.\n\n- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.\n\n- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n\n- Duplicate images are handled differently based on the dataset type.\n -  Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.\n - Multi-label—For datasets of type  `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label.\n\n###Object Detection Datasets###\n\n- Here are the guidelines for the .zip file:\n - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.\n - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset.\n - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.\n - The annotations.csv file can be anywhere within the .zip file.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/d8c446e-obj_det_zip_file_format.png\",\n        \"obj_det_zip_file_format.png\",\n        535,\n        164,\n        \"#cccab6\"\n      ]\n    }\n  ]\n}\n[/block]\n- The maximum label name length is 180 characters. If the annotations file contains a label with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- Labels are case sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.\n\n- When you create a dataset, all the images are checked for duplicates. 
If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.\n\n- If there's an image in the .zip file, but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.\n\n####Annotations.csv File Format####\n\nThe annotations.csv file contains the bounding box coordinates and the labels for each image.\n\n1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string. \n\n    - `image_file`—Header for the image file name. \n    - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.\n\n\n2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. The required fields for each bounding box are:\n - `label`—Classification label for the content in the bounding box.\n - `height`—Height of the bounding box in pixels.\n - `width`—Width of the bounding box in pixels.\n - `x`—Location of the bounding box on the horizontal axis.\n - `y`—Location of the bounding box on the vertical axis.\n \nHere's an example of an annotations.csv file for two images.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"image_file\\\",\\\"box0\\\",\\\"box1\\\"\\n\\\"picture1.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"cat\\\"\\\", \\\"\\\"y\\\"\\\": 242, \\\"\\\"x\\\"\\\": 160, \\\"\\\"height\\\"\\\": 62, \\\"\\\"width\\\"\\\": 428}\\\", \\\"{\\\"\\\"label\\\"\\\": \\\"\\\"turtle\\\"\\\", \\\"\\\"y\\\"\\\": 113, \\\"\\\"x\\\"\\\": 61, \\\"\\\"height\\\"\\\": 74, \\\"\\\"width\\\"\\\": 718}\\\"\\n\\\"picture2.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 94, \\\"\\\"x\\\"\\\": 27, \\\"\\\"height\\\"\\\": 144, \\\"\\\"width\\\"\\\": 184}\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 50, \\\"\\\"x\\\"\\\": 286, \\\"\\\"height\\\"\\\": 344, \\\"\\\"width\\\"\\\": 348}\\\"\\n\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nHere's the second image referenced in the annotations.csv file showing the bounding boxes.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/9aa98e1-7f6c214-annotations-format.png\",\n        \"7f6c214-annotations-format.png\",\n        480,\n        321,\n        \"#6f7554\"\n      ]\n    }\n  ]\n}\n[/block]\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset. 
The API uses the name of the .zip file for the dataset name.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"9-0\": \"`updatedAt`\",\n    \"9-1\": \"date\",\n    \"9-2\": \"Date and time that the dataset was last updated.\",\n    \"9-3\": \"1.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"6-3\": \"1.0\",\n    \"8-0\": \"`type`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"8-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 10\n}\n[/block]\n## Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"2-0\": \"`name`\",\n    \"3-0\": \"`numExamples`\",\n    \"1-1\": \"long\",\n    \"2-1\": \"string\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Creates a dataset, labels, and examples from the specified .zip file. The call returns after the dataset is created and all of the images are uploaded. Use this API call for .zip files that are smaller than 10 MB.","slug":"create-a-dataset-zip-sync","type":"post","title":"Create a Dataset From a Zip File Synchronously","__v":0,"childrenPages":[]}

Create a Dataset From a Zip File Synchronously (POST)

Creates a dataset, labels, and examples from the specified .zip file. The call returns after the dataset is created and all of the images are uploaded. Use this API call for .zip files that are smaller than 10 MB.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB. | 1.0 |
| `name` | string | Name of the dataset. Optional. If this parameter is omitted, the dataset name is derived from the .zip file name. | 1.0 |
| `path` | string | URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image`, `image-detection` (available in Einstein Vision API version 2.0 and later), and `image-multi-label` (available in version 2.0 and later). | 1.0 |

The API call is synchronous, so results are returned after the data has been uploaded to the dataset. If the call succeeds, it returns the `labels` array, `available` is `true`, and `statusMsg` is `SUCCEEDED`. You must provide the path to a .zip file that's either on the local machine or in the cloud.

If the dataset type is `image` or `image-multi-label`, this API:

- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.
- Creates a label for each directory in the .zip file. The label name is the same as the directory name (limit is 180 characters).
- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name.

If the dataset type is `image-detection`, this API:

- Creates a dataset that has the same name as the .zip file (limit is 100 characters), if the `name` parameter is omitted.
- Creates a label for each unique label in the annotations.csv file (limit is 180 characters).
- Creates an example for each image file in the .zip file.
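Here's what a synchronous upload looks like in practice. This is a sketch, not captured from a live call: it assumes the upload endpoint is `https://api.einstein.ai/v2/vision/datasets/upload/sync` and uses placeholder token and file values, so confirm the endpoint for your API version before relying on it.

```curl
curl -X POST \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "type=image" \
  -F "data=@/path/to/mountainvsbeach.zip" \
  https://api.einstein.ai/v2/vision/datasets/upload/sync
```

To load the .zip file from a web location instead, replace the `data` field with `path`, for example, `-F "path=https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1"`.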
Keep the following points in mind when creating datasets.

###All Datasets###

- If your .zip file is larger than 10 MB, we recommend that you use the asynchronous call to create a dataset. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).

- The maximum .zip file size you can upload from a local drive is 50 MB. The maximum .zip file size you can upload from a web location is 1 GB.

- The maximum total dataset size is 1 GB.

- If the `name` parameter is passed, the maximum length is 100 characters.

- The maximum image file name length is 150 characters, including the file extension. If the .zip file contains a file with a name longer than 150 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 150 characters.

- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg`, the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.

- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.

- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.

- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.

###Image or Image Multi-Label Datasets###

- The .zip file must have a specific directory structure (see the example layout after this list):
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and no nested subdirectories.

- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).

- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.

- The maximum directory name length is 180 characters. If the .zip file contains a directory with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.

- The minimum number of examples per label is 10.

- The minimum number of total examples across all labels is 40.

- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, those images aren't loaded and no error is returned.

- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.

- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.

- Duplicate images are handled differently based on the dataset type.
  - Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.
  - Multi-label—For datasets of type `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times, each time with a different label.
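For example, a minimal `image` dataset .zip could be laid out as follows (all names here are hypothetical). The .zip file name becomes the dataset name if `name` is omitted, each subdirectory becomes a label, and each label needs at least 10 images:

```text
mountainvsbeach/
├── beach/
│   ├── beach01.jpg
│   └── ... (at least 10 images per label)
└── mountain/
    ├── mountain01.jpg
    └── ... (at least 10 images per label)
```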
###Object Detection Datasets###

- Here are the guidelines for the .zip file:
  - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.
  - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you receive an error when you try to create the dataset.
  - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.
  - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.
  - The annotations.csv file can be anywhere within the .zip file.

![Object detection .zip file structure](https://files.readme.io/d8c446e-obj_det_zip_file_format.png)

- The maximum label name length is 180 characters. If the annotations file contains a label with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.

- Labels are case-sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.

- When you create a dataset, all the images are checked for duplicates. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.

- If there's an image in the .zip file but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.

####Annotations.csv File Format####

The annotations.csv file contains the bounding box coordinates and the labels for each image.

1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string.
   - `image_file`—Header for the image file name.
   - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.

2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. The required fields for each bounding box are:
   - `label`—Classification label for the content in the bounding box.
   - `height`—Height of the bounding box in pixels.
   - `width`—Width of the bounding box in pixels.
   - `x`—Location of the bounding box on the horizontal axis.
   - `y`—Location of the bounding box on the vertical axis.

Here's an example of an annotations.csv file for two images.

```text
"image_file","box0","box1"
"picture1.jpg","{""label"": ""cat"", ""y"": 242, ""x"": 160, ""height"": 62, ""width"": 428}", "{""label"": ""turtle"", ""y"": 113, ""x"": 61, ""height"": 74, ""width"": 718}"
"picture2.jpg","{""label"": ""dog"", ""y"": 94, ""x"": 27, ""height"": 144, ""width"": 184}","{""label"": ""dog"", ""y"": 50, ""x"": 286, ""height"": 344, ""width"": 348}"
```

Note that each bounding box is a JSON object embedded in a CSV field, so the inner double quotation marks are escaped by doubling them (for example, `""label""`), per standard CSV quoting.

Here's the second image referenced in the annotations.csv file showing the bounding boxes.

![Bounding boxes shown on the second example image](https://files.readme.io/9aa98e1-7f6c214-annotations-format.png)
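To read the coordinates in this example: assuming `x` and `y` locate the upper-left corner of a box, the first box for picture2.jpg starts 27 pixels from the left edge and 94 pixels down from the top, and spans 184 pixels across and 144 pixels down, so it covers roughly x values 27 to 211 and y values 94 to 238.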
##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. The API uses the name of the .zip file for the dataset name. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are `FAILURE: <failure_reason>` (data upload has failed), `SUCCEEDED` (data upload is complete), and `UPLOADING` (data upload is in progress). | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image`, `image-detection` (available in Einstein Vision API version 2.0 and later), and `image-multi-label` (available in version 2.0 and later). | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
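A successful call returns the dataset and its labels. Here's a representative response; the values are illustrative (adapted from the Get a Dataset example later in this doc), not captured from a live call:

```json
{
  "id": 57,
  "name": "mountainvsbeach",
  "createdAt": "2016-09-15T16:51:41.000+0000",
  "updatedAt": "2016-09-15T16:51:41.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 611,
        "datasetId": 57,
        "name": "Beaches",
        "numExamples": 50
      },
      {
        "id": 612,
        "datasetId": 57,
        "name": "Mountains",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```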

{"_id":"59de6224666d650024f78fc1","category":"59de6223666d650024f78fa3","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-19T18:43:54.306Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach and Mountain\" -F \"labels=beach,mountain\" -F \"type=image\" https://api.einstein.ai/v2/vision/datasets"}]},"method":"post","results":{"codes":[{"name":"","code":"{\n  \"id\": 57,\n  \"name\": \"Beach and Mountain\",\n  \"createdAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"updatedAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 611,\n        \"datasetId\": 57,\n        \"name\": \"beach\",\n        \"numExamples\": 0\n      },\n    {\n        \"id\": 612,\n        \"datasetId\": 57,\n        \"name\": \"mountain\",\n        \"numExamples\": 0\n      }\n          ]\n  },\n  \"totalExamples\": 0,\n  \"totalLabels\": 2,\n  \"available\": true,\n  \"statusMsg\": \"SUCCEEDED\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"auth":"required","params":[],"url":"/vision/datasets"},"isReference":false,"order":2,"body":"[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Warning\",\n  \"body\": \"For better performance, we recommend that you create a dataset by uploading a .zip file. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).\"\n}\n[/block]\n##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`labels`\",\n    \"1-0\": \"`name`\",\n    \"0-1\": \"string\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Optional comma-separated list of labels. If specified, creates the labels in the dataset. Maximum number of labels per dataset is 250.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"Name of the dataset. Maximum length is 180 characters.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`type`\",\n    \"2-1\": \"string\",\n    \"2-3\": \"2.0\",\n    \"2-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\"\n  },\n  \"cols\": 4,\n  \"rows\": 3\n}\n[/block]\nKeep the following points in mind when creating datasets.\n- Only `type` parameter values of `image` and `image-multi-label` are supported. This call can't be used to create a dataset with a `type` of `image-detection`.\n\n- Label names can’t contain a comma.\n\n- You can’t delete a label. 
To change the labels in a dataset, recreate the dataset with the correct labels.\n\n- Label names must be unique within the dataset.\n\n- A dataset must have a minimum of two labels to create a model.\n\n- To add examples to a dataset created using this API, use the [Create an Example](doc:create-an-example) call.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-multi-label` Available in Einstein Vision API version 2.0 and later.\",\n    \"9-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n## Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Creates a dataset and labels, if they're specified.","slug":"create-a-dataset","type":"post","title":"Create a Dataset","__v":0,"childrenPages":[]}

Create a Dataset (POST)

Creates a dataset and labels, if they're specified.

> **Warning:** For better performance, we recommend that you create a dataset by uploading a .zip file. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `labels` | string | Optional comma-separated list of labels. If specified, creates the labels in the dataset. The maximum number of labels per dataset is 250. | 1.0 |
| `name` | string | Name of the dataset. Maximum length is 180 characters. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image` and `image-multi-label` (available in Einstein Vision API version 2.0 and later). | 2.0 |

Keep the following points in mind when creating datasets.

- Only `type` parameter values of `image` and `image-multi-label` are supported. This call can't be used to create a dataset with a `type` of `image-detection`.

- Label names can't contain a comma.

- You can't delete a label. To change the labels in a dataset, recreate the dataset with the correct labels.

- Label names must be unique within the dataset.

- A dataset must have a minimum of two labels to create a model.

- To add examples to a dataset created using this API, use the [Create an Example](doc:create-an-example) call.
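For example, this call creates a dataset named `Beach and Mountain` with the labels `beach` and `mountain`:

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain" -F "labels=beach,mountain" -F "type=image" https://api.einstein.ai/v2/vision/datasets
```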
##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are `FAILURE: <failure_reason>` (data upload has failed), `SUCCEEDED` (data upload is complete), and `UPLOADING` (data upload is in progress). | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image` and `image-multi-label` (available in Einstein Vision API version 2.0 and later). | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
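A successful call returns the new dataset and its labels. Because no examples have been uploaded yet, `numExamples` and `totalExamples` are 0:

```json
{
  "id": 57,
  "name": "Beach and Mountain",
  "createdAt": "2016-09-15T16:51:41.000+0000",
  "updatedAt": "2016-09-15T16:51:41.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 611,
        "datasetId": 57,
        "name": "beach",
        "numExamples": 0
      },
      {
        "id": 612,
        "datasetId": 57,
        "name": "mountain",
        "numExamples": 0
      }
    ]
  },
  "totalExamples": 0,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```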

{"_id":"59de6224666d650024f78fc2","category":"59de6223666d650024f78fa3","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-23T20:54:05.059Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/57","language":"curl"}]},"method":"get","results":{"codes":[{"status":200,"language":"json","code":"{\n  \"id\": 57,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"updatedAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 612,\n        \"datasetId\": 57,\n        \"name\": \"Mountains\",\n        \"numExamples\": 49\n      },\n      {\n        \"id\": 611,\n        \"datasetId\": 57,\n        \"name\": \"Beaches\",\n        \"numExamples\": 50\n      }\n    ]\n  },\n  \"totalExamples\": 99,\n  \"totalLabels\": 2,\n  \"available\": true,\n  \"statusMsg\": \"SUCCEEDED\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":"/vision/datasets/<DATASET_ID>"},"isReference":false,"order":3,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `DELETION_PENDING`—Dataset is in the process of being deleted. Available in Einstein Vision API version 2.0 and later.\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"6-3\": \"1.0\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. 
Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"9-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n##Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Returns a single dataset.","slug":"get-a-dataset","type":"get","title":"Get a Dataset","__v":0,"childrenPages":[]}

Get a Dataset (GET)

Returns a single dataset.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are `DELETION_PENDING` (dataset is in the process of being deleted; available in Einstein Vision API version 2.0 and later), `FAILURE: <failure_reason>` (data upload has failed), `SUCCEEDED` (data upload is complete), and `UPLOADING` (data upload is in progress). | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image`, `image-detection` (available in version 2.0 and later), and `image-multi-label` (available in version 2.0 and later). | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
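For example, this call gets the dataset with ID 57:

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets/57
```

The response returns the dataset along with its labels and example counts:

```json
{
  "id": 57,
  "name": "mountainvsbeach",
  "createdAt": "2016-09-15T16:51:41.000+0000",
  "updatedAt": "2016-09-15T16:51:41.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 612,
        "datasetId": 57,
        "name": "Mountains",
        "numExamples": 49
      },
      {
        "id": 611,
        "datasetId": 57,
        "name": "Beaches",
        "numExamples": 50
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```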

{"_id":"59de6224666d650024f78fc3","category":"59de6223666d650024f78fa3","project":"552d474ea86ee20d00780cd7","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-23T21:53:15.555Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"get","results":{"codes":[{"name":"","status":200,"language":"json","code":"{\n  \"object\": \"list\",\n  \"data\": [\n    {\n      \"id\": 57,\n      \"name\": \"Beach and Mountain\",\n      \"updatedAt\": \"2016-09-09T22:39:22.000+0000\",\n      \"createdAt\": \"2016-09-09T22:39:22.000+0000\",\n     \"labelSummary\": {\n           \"labels\": [\n          {\n              \"id\": 36,\n              \"datasetId\": 57,\n              \"name\": \"beach\",\n              \"numExamples\": 49\n          },\n          {\n            \"id\": 37,\n            \"datasetId\": 57,\n            \"name\": \"mountain\",\n            \"numExamples\": 50\n          }\n        ]\n      },\n      \"totalExamples\": 99,\n      \"totalLabels\": 2,\n      \"available\": true,\n      \"statusMsg\": \"SUCCEEDED\",\n      \"type\": \"image\",\n      \"object\": \"dataset\"\n    },\n    {\n      \"id\": 58,\n      \"name\": \"Brain Scans\",\n      \"updatedAt\": \"2016-09-24T21:35:27.000+0000\",\n      \"createdAt\": \"2016-09-24T21:35:27.000+0000\",\n     \"labelSummary\": {\n           \"labels\": [\n          {\n              \"id\": 122,\n              \"datasetId\": 58,\n              \"name\": \"healthy\",\n              \"numExamples\": 5064\n          },\n          {\n            \"id\": 123,\n            \"datasetId\": 58,\n            \"name\": \"unhealthy\",\n            \"numExamples\": 5080\n          }\n      ]\n     },\n      \"totalExamples\": 10144,\n      \"totalLabels\": 2,\n      \"available\": true,\n      \"statusMsg\": \"SUCCEEDED\",\n      \"type\": \"image\",\n      \"object\": \"dataset\"\n    }\n  ]\n}"},{"code":"{}","name":"","status":400,"language":"json"}]},"settings":"","examples":{"codes":[{"code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets","language":"curl"}]},"auth":"required","params":[],"url":"/vision/datasets"},"isReference":false,"order":4,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`data`\",\n    \"0-1\": \"array\",\n    \"0-2\": \"Array of `dataset` objects.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`object`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Object returned; in this case, `list`.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\n##Dataset Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n 
   \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"9-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `DELETION_PENDING`—Dataset is in the process of being deleted. Available in Einstein Vision API version 2.0 and later. \\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n##Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples in the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\n##Page Through Datasets##\n\nBy default, this call returns 25 datasets. If you want to page through your datasets, use the `offset` and `count` query parameters.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"0-2\": \"Number of datsets to return. Maximum valid value is 25. If you specify a number greater than 25, the call returns 25 datasets. Optional.\",\n    \"1-2\": \"Index of the dataset from which you want to start paging. Optional.\",\n    \"0-0\": \"`count`\",\n    \"1-0\": \"`offset`\",\n    \"1-1\": \"int\",\n    \"0-1\": \"int\",\n    \"h-3\": \"Available Version\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nHere's an example of these query parameters. If you omit the `count` parameter, the API returns 25 datasets. If you omit the `offset` parameter, paging starts at 0.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\"  \\\"https://api.einstein.ai/v2/vision/datasets?offset=100&count=20\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nFor example, let's say you want to page through all of your datasets and show 20 at a time. 
The first call would have `offset=0` and `count=20`, the second call would have `offset=20` and `count=20`, and so on.\n\n##Get Global Datasets##\n\nGlobal datasets are public datasets that Salesforce provides. You can use these datasets to include additional data during training when you create a model. To get a list of the global datasets, use the  `global` query parameter.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"0-0\": \"`global`\",\n    \"0-1\": \"boolean\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"If `true`, returns all global datasets.\",\n    \"0-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 1\n}\n[/block]\nHere's an example of the `global` query parameter. The response JSON is the same as for your own custom datasets.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\"  \\\"https://api.einstein.ai/v2/vision/datasets?global=true\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]","excerpt":"Returns a list of datasets and their labels that were created by the current user. The response is sorted by dataset ID.","slug":"get-all-datasets","type":"get","title":"Get All Datasets","__v":0,"childrenPages":[]}

Get All Datasets

Returns a list of datasets and their labels that were created by the current user. The response is sorted by dataset ID.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `dataset` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Dataset Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `DELETION_PENDING`—Dataset is in the process of being deleted. Available in Einstein Vision API version 2.0 and later.<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are:<br>- `image`<br>- `image-detection`—Available in Einstein Vision API version 2.0 and later.<br>- `image-multi-label`—Available in Einstein Vision API version 2.0 and later. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples in the label. | 1.0 |

##Page Through Datasets##

By default, this call returns 25 datasets. To page through your datasets, use the `offset` and `count` query parameters.

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `count` | int | Number of datasets to return. Maximum valid value is 25. If you specify a number greater than 25, the call returns 25 datasets. Optional. | 1.0 |
| `offset` | int | Index of the dataset from which you want to start paging. Optional. | 1.0 |

Here's an example of these query parameters. If you omit the `count` parameter, the API returns 25 datasets. If you omit the `offset` parameter, paging starts at 0.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.einstein.ai/v2/vision/datasets?offset=100&count=20"
```

For example, let's say you want to page through all of your datasets and show 20 at a time. The first call would have `offset=0` and `count=20`, the second call would have `offset=20` and `count=20`, and so on. (A short Python sketch of this loop appears at the end of this section.)

##Get Global Datasets##

Global datasets are public datasets that Salesforce provides. You can use these datasets to include additional data during training when you create a model. To get a list of the global datasets, use the `global` query parameter.

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `global` | boolean | If `true`, returns all global datasets. | 2.0 |

Here's an example of the `global` query parameter. The response JSON is the same as for your own custom datasets.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.einstein.ai/v2/vision/datasets?global=true"
```

Definition

```
GET https://api.einstein.ai/v2/vision/datasets
```

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets
```

Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": 57,
      "name": "Beach and Mountain",
      "updatedAt": "2016-09-09T22:39:22.000+0000",
      "createdAt": "2016-09-09T22:39:22.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 36,
            "datasetId": 57,
            "name": "beach",
            "numExamples": 49
          },
          {
            "id": 37,
            "datasetId": 57,
            "name": "mountain",
            "numExamples": 50
          }
        ]
      },
      "totalExamples": 99,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    },
    {
      "id": 58,
      "name": "Brain Scans",
      "updatedAt": "2016-09-24T21:35:27.000+0000",
      "createdAt": "2016-09-24T21:35:27.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 122,
            "datasetId": 58,
            "name": "healthy",
            "numExamples": 5064
          },
          {
            "id": 123,
            "datasetId": 58,
            "name": "unhealthy",
            "numExamples": 5080
          }
        ]
      },
      "totalExamples": 10144,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    }
  ]
}
```


{"_id":"59de6224666d650024f78fc4","category":"59de6223666d650024f78fa3","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-23T22:02:04.938Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X DELETE -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/1003360"}]},"method":"delete","results":{"codes":[{"status":200,"name":"","code":"{\n    \"id\": \"Z2JTFBF3A7XKIJC5QEJXMO4HSY\",\n    \"organizationId\": \"2\",\n    \"type\": \"DATASET\",\n    \"status\": \"QUEUED\",\n    \"progress\": 0,\n    \"message\": null,\n    \"object\": \"deletion\",\n    \"deletedObjectId\": \"1003360\"\n}","language":"json"},{"language":"json","status":400,"name":"","code":"{}"}]},"settings":"","auth":"required","params":[],"url":"/vision/datasets/<DATASET_ID>"},"isReference":false,"order":5,"body":"Keep the following points in mind when deleting a dataset.\n\n- If a dataset is being trained and has an associated model with a status of `QUEUED` or `RUNNING`, you must wait until the training is complete before you can delete the dataset.\n\n- If you want to delete a dataset and the models associated with it, delete the models first. After you delete a dataset, you won't be able to reference that dataset and get a list of associated models.\n\n- After you delete a dataset, use the `id` to get the status of the deletion. See [Get Deletion Status](doc:get-vision-deletion-status).\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"3-2\": \"Object returned; in this case, `deletion`.\",\n    \"3-0\": \"`object`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"2.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"ID of the deletion. Use this ID to query the status of the deletion and when it's complete.\",\n    \"1-3\": \"2.0\",\n    \"4-0\": \"`organizationId`\",\n    \"4-1\": \"integer\",\n    \"4-2\": \"ID of the org to which the dataset belongs.\",\n    \"4-3\": \"2.0\",\n    \"6-0\": \"`status`\",\n    \"5-0\": \"`progress`\",\n    \"5-3\": \"2.0\",\n    \"6-3\": \"2.0\",\n    \"5-1\": \"integer\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset deletion. When you delete a dataset, this value is `QUEUED`.\",\n    \"5-2\": \"How far the dataset deletion has progressed. Values are between 0–1.\",\n    \"7-0\": \"`type`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Object that's being deleted. When you delete a dataset, this value is `DATASET`.\",\n    \"7-3\": \"2.0\",\n    \"0-0\": \"`deletedObjectId`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"ID of the object deleted. When you delete a dataset, this value is the dataset ID.\",\n    \"0-3\": \"2.0\",\n    \"2-0\": \"`message`\",\n    \"2-1\": \"string\",\n    \"2-3\": \"2.0\",\n    \"2-2\": \"Additional information about the dataset deletion.\"\n  },\n  \"cols\": 4,\n  \"rows\": 8\n}\n[/block]","excerpt":"Deletes the specified dataset and associated labels and examples.","slug":"delete-a-dataset","type":"delete","title":"Delete a Dataset","__v":0,"childrenPages":[]}

Delete a Dataset

Deletes the specified dataset and associated labels and examples.

Keep the following points in mind when deleting a dataset.

- If a dataset is being trained and has an associated model with a status of `QUEUED` or `RUNNING`, you must wait until the training is complete before you can delete the dataset.

- If you want to delete a dataset and the models associated with it, delete the models first. After you delete a dataset, you can no longer reference that dataset to get a list of its associated models.

- After you delete a dataset, use the `id` in the response to query the status of the deletion. See [Get Deletion Status](doc:get-vision-deletion-status).

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `deletedObjectId` | string | ID of the object deleted. When you delete a dataset, this value is the dataset ID. | 2.0 |
| `id` | string | ID of the deletion. Use this ID to query the status of the deletion and find out when it's complete. | 2.0 |
| `message` | string | Additional information about the dataset deletion. | 2.0 |
| `object` | string | Object returned; in this case, `deletion`. | 2.0 |
| `organizationId` | integer | ID of the org to which the dataset belongs. | 2.0 |
| `progress` | integer | How far the dataset deletion has progressed. Values are between 0 and 1. | 2.0 |
| `status` | string | Status of the dataset deletion. When you delete a dataset, this value is `QUEUED`. | 2.0 |
| `type` | string | Object that's being deleted. When you delete a dataset, this value is `DATASET`. | 2.0 |
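Here's a minimal sketch of the delete call in Python with `requests`, purely for illustration. The dataset ID matches the curl example below, and `<TOKEN>` is a placeholder; the key point is capturing the deletion `id` for the follow-up status check.

```python
# Sketch: delete a dataset and keep the deletion ID for later polling.
# Python and `requests` are illustrative assumptions, not part of the docs.
import requests

BASE_URL = "https://api.einstein.ai/v2/vision"
HEADERS = {"Authorization": "Bearer <TOKEN>", "Cache-Control": "no-cache"}

# The deletion is queued, not immediate.
resp = requests.delete(f"{BASE_URL}/datasets/1003360", headers=HEADERS)
resp.raise_for_status()
deletion = resp.json()

# Save the deletion ID so you can poll its status later
# (see Get Deletion Status).
print(deletion["id"], deletion["status"])  # e.g. Z2JTFBF3A7XKIJC5QEJXMO4HSY QUEUED
```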

Definition

```
DELETE https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>
```

Examples

```curl
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/datasets/1003360
```

Result Format

```json
{
    "id": "Z2JTFBF3A7XKIJC5QEJXMO4HSY",
    "organizationId": "2",
    "type": "DATASET",
    "status": "QUEUED",
    "progress": 0,
    "message": null,
    "object": "deletion",
    "deletedObjectId": "1003360"
}
```


{"_id":"5aaac85f7c4f4000128a4839","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa3","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2018-03-15T19:24:15.137Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/deletion/Z2JTFBF3A7XKIJC5QEJXMO4HSY"}]},"settings":"","results":{"codes":[{"code":"{\n    \"id\": \"Z2JTFBF3A7XKIJC5QEJXMO4HSY\",\n    \"organizationId\": \"2\",\n    \"type\": \"DATASET\",\n    \"status\": \"SUCCEEDED\",\n    \"progress\": 1,\n    \"message\": null,\n    \"object\": \"deletion\",\n    \"deletedObjectId\": \"1003360\"\n}","language":"json","status":200,"name":""},{"language":"json","status":400,"name":"","code":"{}"}]},"method":"get","auth":"required","params":[],"url":"/vision/deletion/<DELETION_ID>"},"isReference":false,"order":7,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"3-2\": \"Object returned; in this case, `deletion`.\",\n    \"3-0\": \"`object`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"2.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"ID of the deletion.\",\n    \"1-3\": \"2.0\",\n    \"4-0\": \"`organizationId`\",\n    \"4-1\": \"integer\",\n    \"4-2\": \"ID of the org to which the dataset or model being deleted belongs.\",\n    \"4-3\": \"2.0\",\n    \"6-0\": \"`status`\",\n    \"5-0\": \"`progress`\",\n    \"5-3\": \"2.0\",\n    \"6-3\": \"2.0\",\n    \"5-1\": \"integer\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the deletion. Valid values are:\\n`QUEUED`—Object deletion hasn't started.\\n`RUNNING`—Object deletion is in progress.\\n`SUCCEEDED`—Object deletion is complete.\\n`SUCCEEDED_WAITING_FOR_CACHE_REMOVAL`—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system.\",\n    \"5-2\": \"How far the deletion has progressed. Values are between 0–1.\",\n    \"7-0\": \"`type`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Object that's being deleted. Valid values:\\n- `DATASET`\\n- `MODEL`\",\n    \"7-3\": \"2.0\",\n    \"2-0\": \"`message`\",\n    \"2-1\": \"string\",\n    \"2-3\": \"2.0\",\n    \"0-0\": \"`deletedObjectId`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"ID of the object deleted. Depending on the object you delete, this contains the dataset ID or the model ID.\",\n    \"0-3\": \"2.0\",\n    \"2-2\": \"Additional information about the deletion. For example, a message is returned if the deletion fails.\"\n  },\n  \"cols\": 4,\n  \"rows\": 8\n}\n[/block]","excerpt":"Returns the status of an image dataset or model deletion. When you delete a dataset or model, the deletion may not occur immediately. Use this call to find out when the deletion is complete.","slug":"get-vision-deletion-status","type":"get","title":"Get Deletion Status","__v":0,"parentDoc":null,"childrenPages":[]}

Get Deletion Status

Returns the status of an image dataset or model deletion. When you delete a dataset or model, the deletion may not occur immediately. Use this call to find out when the deletion is complete.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `deletedObjectId` | string | ID of the object deleted. Depending on the object you delete, this contains the dataset ID or the model ID. | 2.0 |
| `id` | string | ID of the deletion. | 2.0 |
| `message` | string | Additional information about the deletion. For example, a message is returned if the deletion fails. | 2.0 |
| `object` | string | Object returned; in this case, `deletion`. | 2.0 |
| `organizationId` | integer | ID of the org to which the dataset or model being deleted belongs. | 2.0 |
| `progress` | integer | How far the deletion has progressed. Values are between 0 and 1. | 2.0 |
| `status` | string | Status of the deletion. Valid values are:<br>- `QUEUED`—Object deletion hasn't started.<br>- `RUNNING`—Object deletion is in progress.<br>- `SUCCEEDED`—Object deletion is complete.<br>- `SUCCEEDED_WAITING_FOR_CACHE_REMOVAL`—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system. | 2.0 |
| `type` | string | Object that's being deleted. Valid values:<br>- `DATASET`<br>- `MODEL` | 2.0 |
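Because deletion is asynchronous, a client typically polls this endpoint until the status reaches a terminal value. Here's a hedged sketch in Python with `requests`; the polling interval and timeout are arbitrary choices, not values the API prescribes, and `<TOKEN>` is a placeholder.

```python
# Sketch: poll a deletion until it finishes. Python/`requests` are
# illustrative assumptions; the endpoint and statuses come from the docs.
import time
import requests

BASE_URL = "https://api.einstein.ai/v2/vision"
HEADERS = {"Authorization": "Bearer <TOKEN>", "Cache-Control": "no-cache"}

def wait_for_deletion(deletion_id, interval=5, timeout=300):
    """Poll /vision/deletion/<id> until the deletion completes."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{BASE_URL}/deletion/{deletion_id}", headers=HEADERS)
        resp.raise_for_status()
        body = resp.json()
        # Covers both SUCCEEDED and SUCCEEDED_WAITING_FOR_CACHE_REMOVAL.
        if body["status"].startswith("SUCCEEDED"):
            return body
        time.sleep(interval)
    raise TimeoutError(f"deletion {deletion_id} still running after {timeout}s")

print(wait_for_deletion("Z2JTFBF3A7XKIJC5QEJXMO4HSY"))
```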

Definition

```
GET https://api.einstein.ai/v2/vision/deletion/<DELETION_ID>
```

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/deletion/Z2JTFBF3A7XKIJC5QEJXMO4HSY
```

Result Format

```json
{
    "id": "Z2JTFBF3A7XKIJC5QEJXMO4HSY",
    "organizationId": "2",
    "type": "DATASET",
    "status": "SUCCEEDED",
    "progress": 1,
    "message": null,
    "object": "deletion",
    "deletedObjectId": "1003360"
}
```


{"_id":"59de6224666d650024f78fba","category":"59de6223666d650024f78fa4","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-02-16T23:16:16.110Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"put","results":{"codes":[{"name":"","code":"{\n  \"id\": 1000022,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2017-02-17T00:22:10.000+0000\",\n  \"updatedAt\": \"2017-02-17T00:22:12.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 1819,\n        \"datasetId\": 1000022,\n        \"name\": \"Mountains\",\n        \"numExamples\": 50\n      },\n      {\n        \"id\": 1820,\n        \"datasetId\": 1000022,\n        \"name\": \"Beaches\",\n        \"numExamples\": 49\n      }\n    ]\n  },\n  \"totalExamples\": 99,\n  \"totalLabels\": 2,\n  \"available\": false,\n  \"statusMsg\": \"UPLOADING\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","examples":{"codes":[{"language":"curl","code":"// Add examples to a dataset from a local file\ncurl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"data=@C:\\Data\\mountainvsbeach.zip\"  https://api.einstein.ai/v2/vision/datasets/1000022/upload\n\n// Add examples to a dataset from a web file\ncurl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://einstein.ai/images/mountainvsbeach.zip\"  https://api.einstein.ai/v2/vision/datasets/1000022/upload"}]},"auth":"required","params":[],"url":"/vision/datasets/<DATASET_ID>/upload"},"isReference":false,"order":0,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"0-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Path to the .zip file on the local drive. The maximum file size you can upload from a local drive is 50 MB.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`path`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"URL of the .zip file. The maximum file size you can upload from a web location is 1 GB.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nYou must provide the path to the .zip file on either the local machine or in the cloud. This call adds examples to the specified dataset from a .zip file. This is an asynchronous call, so the results that are initially returned contain information for the original dataset and `available` is `false`. \n\nUse the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true`  and `statusMsg` is `SUCCEEDED` the data upload is complete. 
\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1000022,\\n  \\\"name\\\": \\\"mountainvsbeach\\\",\\n  \\\"createdAt\\\": \\\"2017-02-17T00:22:10.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-02-17T00:29:56.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": [\\n      {\\n        \\\"id\\\": 1819,\\n        \\\"datasetId\\\": 1000022,\\n        \\\"name\\\": \\\"Mountains\\\",\\n        \\\"numExamples\\\": 150\\n      },\\n      {\\n        \\\"id\\\": 1820,\\n        \\\"datasetId\\\": 1000022,\\n        \\\"name\\\": \\\"Beaches\\\",\\n        \\\"numExamples\\\": 147\\n      }\\n    ]\\n  },\\n  \\\"totalExamples\\\": 297,\\n  \\\"totalLabels\\\": 2,\\n  \\\"available\\\": true,\\n  \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n  \\\"type\\\": \\\"image\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\nKeep the following points in mind when creating examples from a .zip file:\n###All Datasets###\n- If you try to create examples in a dataset while a previous call to create examples is still processing (the dataset's `available` value is `false`), the call fails and you receive an error. You must wait until the dataset's `available` value is `true` before starting another upload.\n\n- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.\n\n- The maximum total dataset size is 1 GB.\n\n- The maximum image file name length is 150 characters including the file extension.  If the .zip file contains a file with a name greater than 150 characters (including the file extension), the example is created in the dataset but API truncates the example name to 150 characters.\n\n- Supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n\n- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.\n\n- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`\n\n- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.\n\n###Image or Image Multi-Label Datasets###\n\n- The .zip file must have a specific directory structure:\n - In the root, there should be a parent directory that contains subdirectories. \n - Each subdirectory below the parent directory becomes a label in the dataset unless the directory name matches a label that's already in the dataset. 
This subdirectory must contain images to be added to the dataset.\n - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.\n\n- If the .zip file contains a directory label that's already in the dataset, the API adds the images from that directory to the specified label in the dataset.\n \n- If the .zip file contains a directory name that isn't a label in the dataset, the API adds a new label (limit is 180 characters).\n\n- The maximum directory name length is 180 characters. If the .zip file contains a directory with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.\n\n- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.\n\n- This API call checks for duplicates in the .zip file that contains the new images using these business rules. However, the call doesn't check for duplicates between the .zip file and the images already in the dataset. Duplicate images are handled differently based on the dataset type.\n -  Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.\n - Multi-label—For datasets of type  `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label.\n\n###Object Detection Datasets###\n\n- Here are the guidelines for the .zip file:\n - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.\n - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset.\n - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.\n - The annotations.csv file can be anywhere within the .zip file.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/d8c446e-obj_det_zip_file_format.png\",\n        \"obj_det_zip_file_format.png\",\n        535,\n        164,\n        \"#cccab6\"\n      ]\n    }\n  ]\n}\n[/block]\n- The maximum label name length is 180 characters. If the annotations file contains a label with a name greater than 180 characters, the label is created in the dataset, but  the API truncates the label name to 180 characters.\n\n- Images must be no larger than 1,600 pixels high by 1,600 pixels wide. 
You can upload images that are larger, but training the dataset might fail.\n\n- Labels are case sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.\n\n- When you create a dataset, all the images are checked for duplicates. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded. This call also checks for duplicate images between the .zip file and the images already in the dataset. If an image that has the same contents exists in both the .zip file and the dataset, the image in the dataset is replaced with the more recent image from the .zip file. \n\n- If there's an image in the .zip file, but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.\n\n####Annotations.csv File Format####\n\nThe annotations.csv file contains the bounding box coordinates and the labels for each image.\n\n1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string. \n\n    - `image_file`—Header for the image file name. \n    - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.\n\n\n2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. 
The required fields for each bounding box are:\n - `label`—Classification label for the content in the bounding box.\n - `height`—Height of the bounding box in pixels.\n - `width`—Width of the bounding box in pixels.\n - `x`—Location of the bounding box on the horizontal axis.\n - `y`—Location of the bounding box on the vertical axis.\n \nHere's an example of an annotations.csv file for two images.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"image_file\\\",\\\"box0\\\",\\\"box1\\\"\\n\\\"picture1.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"cat\\\"\\\", \\\"\\\"y\\\"\\\": 242, \\\"\\\"x\\\"\\\": 160, \\\"\\\"height\\\"\\\": 62, \\\"\\\"width\\\"\\\": 428}\\\", \\\"{\\\"\\\"label\\\"\\\": \\\"\\\"turtle\\\"\\\", \\\"\\\"y\\\"\\\": 113, \\\"\\\"x\\\"\\\": 61, \\\"\\\"height\\\"\\\": 74, \\\"\\\"width\\\"\\\": 718}\\\"\\n\\\"picture2.jpg\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 94, \\\"\\\"x\\\"\\\": 27, \\\"\\\"height\\\"\\\": 144, \\\"\\\"width\\\"\\\": 184}\\\",\\\"{\\\"\\\"label\\\"\\\": \\\"\\\"dog\\\"\\\", \\\"\\\"y\\\"\\\": 50, \\\"\\\"x\\\"\\\": 286, \\\"\\\"height\\\"\\\": 344, \\\"\\\"width\\\"\\\": 348}\\\"\\n\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]\nHere's the second image referenced in the annotations.csv file showing the bounding boxes.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/8bf5a1c-7f6c214-annotations-format.png\",\n        \"7f6c214-annotations-format.png\",\n        480,\n        321,\n        \"#6f7554\"\n      ]\n    }\n  ]\n}\n[/block]\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"ID of the dataset.\",\n    \"2-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. 
Valid values are:\\n- `image`\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"9-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"1-2\": \"ID of the label.\",\n    \"2-2\": \"Name of the label.\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Adds examples from a .zip file to a dataset. You can use this call only with a dataset that was created from a .zip file.","slug":"create-examples-from-zip","type":"put","title":"Create Examples From a Zip File","__v":0,"childrenPages":[]}

Create Examples From a Zip File

Adds examples from a .zip file to a dataset. You can use this call only with a dataset that was created from a .zip file.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Path to the .zip file on the local drive. The maximum file size you can upload from a local drive is 50 MB. | 1.0 |
| `path` | string | URL of the .zip file. The maximum file size you can upload from a web location is 1 GB. | 1.0 |

You must provide the path to the .zip file either on the local machine or in the cloud. This call adds examples to the specified dataset from a .zip file. This is an asynchronous call, so the results that are initially returned contain information for the original dataset, and `available` is `false`.

Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete. (A Python sketch of this upload-and-poll flow follows the All Datasets guidelines below.)

```json
{
  "id": 1000022,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-17T00:22:10.000+0000",
  "updatedAt": "2017-02-17T00:29:56.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1819,
        "datasetId": 1000022,
        "name": "Mountains",
        "numExamples": 150
      },
      {
        "id": 1820,
        "datasetId": 1000022,
        "name": "Beaches",
        "numExamples": 147
      }
    ]
  },
  "totalExamples": 297,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

Keep the following points in mind when creating examples from a .zip file:

###All Datasets###

- If you try to create examples in a dataset while a previous call to create examples is still processing (the dataset's `available` value is `false`), the call fails and you receive an error. You must wait until the dataset's `available` value is `true` before starting another upload.

- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.

- The maximum total dataset size is 1 GB.

- The maximum image file name length is 150 characters, including the file extension. If the .zip file contains a file with a name longer than 150 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 150 characters.

- Supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.

- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg`, the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8.

- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.

- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.
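To tie the asynchronous flow together, here's a rough Python sketch, using the `requests` library as an illustrative stand-in for the curl examples. It uploads a local .zip via the documented `data` form field, then polls Get a Dataset until the upload completes; the Get a Dataset URL pattern is assumed from the other endpoints on this page, and `<TOKEN>` is a placeholder.

```python
# Sketch: upload a local .zip of examples, then poll until the dataset
# is available again. Assumptions: `requests`, a valid token, and the
# Get a Dataset endpoint at /v2/vision/datasets/<DATASET_ID>.
import time
import requests

BASE_URL = "https://api.einstein.ai/v2/vision"
HEADERS = {"Authorization": "Bearer <TOKEN>", "Cache-Control": "no-cache"}
DATASET_ID = 1000022  # from the example above

# PUT the local .zip as multipart form data (the `data` parameter).
with open(r"C:\Data\mountainvsbeach.zip", "rb") as f:
    resp = requests.put(
        f"{BASE_URL}/datasets/{DATASET_ID}/upload",
        headers=HEADERS,
        files={"data": f},
    )
resp.raise_for_status()

# The call is asynchronous: poll Get a Dataset until the upload finishes.
while True:
    ds = requests.get(f"{BASE_URL}/datasets/{DATASET_ID}", headers=HEADERS).json()
    if ds["available"] and ds["statusMsg"] == "SUCCEEDED":
        break
    if ds["statusMsg"].startswith("FAILURE"):
        raise RuntimeError(ds["statusMsg"])
    time.sleep(10)

print("upload complete:", ds["totalExamples"], "examples")
```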
###Image or Image Multi-Label Datasets###

- The .zip file must have a specific directory structure:
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset unless the directory name matches a label that's already in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.

- If the .zip file contains a directory label that's already in the dataset, the API adds the images from that directory to the specified label in the dataset.

- If the .zip file contains a directory name that isn't a label in the dataset, the API adds a new label (the limit is 180 characters).

- The maximum directory name length is 180 characters. If the .zip file contains a directory with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.

- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, those images aren't loaded and no error is returned.

- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.

- This API call checks for duplicates in the .zip file that contains the new images using these business rules. However, the call doesn't check for duplicates between the .zip file and the images already in the dataset. Duplicate images are handled differently based on the dataset type.
  - Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped.
  - Multi-label—For datasets of type `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label.

###Object Detection Datasets###

- Here are the guidelines for the .zip file:
  - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data.
  - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset.
  - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.
  - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name.
  - The annotations.csv file can be anywhere within the .zip file.

![obj_det_zip_file_format.png](https://files.readme.io/d8c446e-obj_det_zip_file_format.png)

- The maximum label name length is 180 characters. If the annotations file contains a label with a name longer than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters.

- Images must be no larger than 1,600 pixels high by 1,600 pixels wide. You can upload images that are larger, but training the dataset might fail.

- Labels are case-sensitive. If you have labels `Oatmeal` and `oatmeal`, they are two distinct labels in the dataset and the resulting model.

- When you create a dataset, all the images are checked for duplicates. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded. This call also checks for duplicate images between the .zip file and the images already in the dataset. If an image that has the same contents exists in both the .zip file and the dataset, the image in the dataset is replaced with the more recent image from the .zip file.

- If there's an image in the .zip file but no bounding box descriptions for that image in the annotations file, the image is dropped and no error is returned.

####Annotations.csv File Format####

The annotations.csv file contains the bounding box coordinates and the labels for each image.

1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string.
  - `image_file`—Header for the image file name.
  - `boxn`—Header for each bounding box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.

2. Each row after the header specifies the bounding box descriptions in JSON format for each image in the .zip file. There should be one row per file. Multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of the image file included in the parent folder. The `x`, `y`, `width`, and `height` values specify the bounding box location within the image. The required fields for each bounding box are:
  - `label`—Classification label for the content in the bounding box.
  - `height`—Height of the bounding box in pixels.
  - `width`—Width of the bounding box in pixels.
  - `x`—Location of the bounding box on the horizontal axis.
  - `y`—Location of the bounding box on the vertical axis.

Here's an example of an annotations.csv file for two images. (A Python sketch that generates this file appears after the response body tables below.)

```text
"image_file","box0","box1"
"picture1.jpg","{""label"": ""cat"", ""y"": 242, ""x"": 160, ""height"": 62, ""width"": 428}", "{""label"": ""turtle"", ""y"": 113, ""x"": 61, ""height"": 74, ""width"": 718}"
"picture2.jpg","{""label"": ""dog"", ""y"": 94, ""x"": 27, ""height"": 144, ""width"": 184}","{""label"": ""dog"", ""y"": 50, ""x"": 286, ""height"": 344, ""width"": 348}"
```

Here's the second image referenced in the annotations.csv file showing the bounding boxes.

![annotations-format.png](https://files.readme.io/8bf5a1c-7f6c214-annotations-format.png)

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | ID of the dataset. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are:<br>- `image`<br>- `image-multi-label`—Available in Einstein Vision API version 2.0 and later. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

Definition

```
PUT https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>/upload
```

Examples

```curl
// Add examples to a dataset from a local file
curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@C:\Data\mountainvsbeach.zip" https://api.einstein.ai/v2/vision/datasets/1000022/upload

// Add examples to a dataset from a web file
curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://einstein.ai/images/mountainvsbeach.zip" https://api.einstein.ai/v2/vision/datasets/1000022/upload
```

Result Format

```json
{
  "id": 1000022,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-17T00:22:10.000+0000",
  "updatedAt": "2017-02-17T00:22:12.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1819,
        "datasetId": 1000022,
        "name": "Mountains",
        "numExamples": 50
      },
      {
        "id": 1820,
        "datasetId": 1000022,
        "name": "Beaches",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "image",
  "object": "dataset"
}
```


##Request Parameters## [block:parameters] { "data": { "0-0": "`data`", "0-1": "string", "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-2": "Path to the .zip file on the local drive. The maximum file size you can upload from a local drive is 50 MB.", "0-3": "1.0", "1-0": "`path`", "1-1": "string", "1-2": "URL of the .zip file. The maximum file size you can upload from a web location is 1 GB.", "1-3": "1.0" }, "cols": 4, "rows": 2 } [/block] You must provide the path to the .zip file on either the local machine or in the cloud. This call adds examples to the specified dataset from a .zip file. This is an asynchronous call, so the results that are initially returned contain information for the original dataset and `available` is `false`. Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED` the data upload is complete. [block:code] { "codes": [ { "code": "{\n \"id\": 1000022,\n \"name\": \"mountainvsbeach\",\n \"createdAt\": \"2017-02-17T00:22:10.000+0000\",\n \"updatedAt\": \"2017-02-17T00:29:56.000+0000\",\n \"labelSummary\": {\n \"labels\": [\n {\n \"id\": 1819,\n \"datasetId\": 1000022,\n \"name\": \"Mountains\",\n \"numExamples\": 150\n },\n {\n \"id\": 1820,\n \"datasetId\": 1000022,\n \"name\": \"Beaches\",\n \"numExamples\": 147\n }\n ]\n },\n \"totalExamples\": 297,\n \"totalLabels\": 2,\n \"available\": true,\n \"statusMsg\": \"SUCCEEDED\",\n \"type\": \"image\",\n \"object\": \"dataset\"\n}", "language": "json" } ] } [/block] Keep the following points in mind when creating examples from a .zip file: ###All Datasets### - If you try to create examples in a dataset while a previous call to create examples is still processing (the dataset's `available` value is `false`), the call fails and you receive an error. You must wait until the dataset's `available` value is `true` before starting another upload. - If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`. - The maximum total dataset size is 1 GB. - The maximum image file name length is 150 characters including the file extension. If the .zip file contains a file with a name greater than 150 characters (including the file extension), the example is created in the dataset but API truncates the example name to 150 characters. - Supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned. - If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`. If the .zip file contains an image file that has a name with non-ASCII characters, those characters are converted to UTF-8. - When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1` - If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`. 
###Image or Image Multi-Label Datasets### - The .zip file must have a specific directory structure: - In the root, there should be a parent directory that contains subdirectories. - Each subdirectory below the parent directory becomes a label in the dataset unless the directory name matches a label that's already in the dataset. This subdirectory must contain images to be added to the dataset. - Each subdirectory below the parent directory should contain only images and not any nested subdirectories. - If the .zip file contains a directory label that's already in the dataset, the API adds the images from that directory to the specified label in the dataset. - If the .zip file contains a directory name that isn't a label in the dataset, the API adds a new label (limit is 180 characters). - The maximum directory name length is 180 characters. If the .zip file contains a directory with a name greater than 180 characters, the label is created in the dataset, but the API truncates the label name to 180 characters. - Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned. - Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail. - This API call checks for duplicates in the .zip file that contains the new images using these business rules. However, the call doesn't check for duplicates between the .zip file and the images already in the dataset. Duplicate images are handled differently based on the dataset type. - Image—For datasets of type `image`, if there are duplicate image files in the .zip file, only the first file is uploaded. Duplicate images are checked within directories and across directories. If there's more than one image file with the same file contents in the same directory or in multiple directories, only the first file is uploaded and the others are skipped. - Multi-label—For datasets of type `image-multi-label`, if there are duplicate image files in a single directory, only the first file is uploaded and the others are skipped. In a multi-label dataset, it's expected that there are duplicate files across directories. If there's more than one image file with the same file contents in multiple directories, the file is loaded multiple times with a different label. ###Object Detection Datasets### - Here are the guidelines for the .zip file: - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding box data. - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you'll receive an error when you try to create the dataset. - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned. - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name. - The annotations.csv file can be anywhere within the .zip file. [block:image] { "images": [ { "image": [ "https://files.readme.io/d8c446e-obj_det_zip_file_format.png", "obj_det_zip_file_format.png", 535, 164, "#cccab6" ] } ] } [/block] - The maximum label name length is 180 characters. 
###Object Detection Datasets###

- Here are the guidelines for the .zip file:
  - The .zip file must contain two types of elements: (1) the image files specified in the annotations.csv file and (2) a file named annotations.csv that contains the bounding-box data.
  - Images can be in the root of the .zip file or in a folder or folders in the root of the .zip file. If images are in folders more than one level deep, you receive an error when you try to create the dataset.
  - The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.
  - The annotations.csv file is a text file that contains the data for the bounding boxes associated with each image. The file must have this exact name, but it can be anywhere within the .zip file.

![obj_det_zip_file_format.png](https://files.readme.io/d8c446e-obj_det_zip_file_format.png)

- The maximum label name length is 180 characters. If the annotations file contains a label with a longer name, the label is still created in the dataset, but the API truncates the label name to 180 characters.

- Images must be no larger than 1,600 pixels high by 1,600 pixels wide. You can upload images that are larger, but training the dataset might fail.

- Labels are case-sensitive. If you have labels `Oatmeal` and `oatmeal`, they're two distinct labels in the dataset and the resulting model.

- When you create a dataset, all the images are checked for duplicates. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded. This call also checks for duplicate images between the .zip file and the images already in the dataset. If an image with the same contents exists in both the .zip file and the dataset, the image in the dataset is replaced with the more recent image from the .zip file.

- If there's an image in the .zip file but no bounding-box description for that image in the annotations file, the image is dropped and no error is returned.

####Annotations.csv File Format####

The annotations.csv file contains the bounding-box coordinates and the labels for each image.

1. The first row in the file contains the headers for the CSV values. We use the convention of `image_file` and `boxn`, but each header value can be any string.
   - `image_file`: Header for the image file name.
   - `boxn`: Header for each bounding-box element. The number of `boxn` values in the header is the maximum number of bounding boxes you can have in an image.

2. Each row after the header specifies the bounding-box descriptions, in JSON format, for one image in the .zip file. There should be one row per file; multiple bounding boxes for the same image are listed as separate columns in the same row. The image name provided must be the exact name of an image file included in the .zip file. The `x`, `y`, `width`, and `height` values specify the bounding box's location within the image. The required fields for each bounding box are:
   - `label`: Classification label for the content in the bounding box.
   - `height`: Height of the bounding box in pixels.
   - `width`: Width of the bounding box in pixels.
   - `x`: Location of the bounding box on the horizontal axis.
   - `y`: Location of the bounding box on the vertical axis.

Here's an example of an annotations.csv file for two images (a sketch for generating a file like this follows the example).

```text
"image_file","box0","box1"
"picture1.jpg","{""label"": ""cat"", ""y"": 242, ""x"": 160, ""height"": 62, ""width"": 428}", "{""label"": ""turtle"", ""y"": 113, ""x"": 61, ""height"": 74, ""width"": 718}"
"picture2.jpg","{""label"": ""dog"", ""y"": 94, ""x"": 27, ""height"": 144, ""width"": 184}","{""label"": ""dog"", ""y"": 50, ""x"": 286, ""height"": 344, ""width"": 348}"
```
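Here's an illustrative Python sketch that generates a file in this format. The standard `csv` module with `QUOTE_ALL` doubles embedded quotes, which produces the `""label""`-style escaping shown above; the image names and boxes are the ones from the sample.

```python
# Illustrative sketch: write an annotations.csv like the example above.
# csv.QUOTE_ALL doubles embedded double quotes, matching the sample format.
import csv
import json

boxes_by_image = {
    "picture1.jpg": [
        {"label": "cat", "y": 242, "x": 160, "height": 62, "width": 428},
        {"label": "turtle", "y": 113, "x": 61, "height": 74, "width": 718},
    ],
    "picture2.jpg": [
        {"label": "dog", "y": 94, "x": 27, "height": 144, "width": 184},
        {"label": "dog", "y": 50, "x": 286, "height": 344, "width": 348},
    ],
}

# The number of boxn headers sets the maximum boxes per image.
max_boxes = max(len(boxes) for boxes in boxes_by_image.values())
with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(["image_file"] + [f"box{i}" for i in range(max_boxes)])
    for image, boxes in boxes_by_image.items():
        writer.writerow([image] + [json.dumps(b) for b in boxes])
```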
Here's the second image referenced in the annotations.csv file, showing the bounding boxes.

![7f6c214-annotations-format.png](https://files.readme.io/8bf5a1c-7f6c214-annotations-format.png)

##Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | ID of the dataset. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are `FAILURE: <failure_reason>` (data upload has failed), `SUCCEEDED` (data upload is complete), and `UPLOADING` (data upload is in progress). | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Valid values are `image` and `image-multi-label` (available in Einstein Vision API version 2.0 and later). | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
{"_id":"59de6224666d650024f78fbb","category":"59de6223666d650024f78fa4","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-23T22:45:33.433Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=77880132.jpg\" -F \"labelId=614\" -F \"data=@C:\\Mountains vs Beach\\Beaches\\77880132.jpg\" https://api.einstein.ai/v2/vision/datasets/57/examples"}]},"method":"post","results":{"codes":[{"name":"","code":"{\n  \"id\": 43887,\n  \"name\": \"77880132.jpg\",\n  \"location\": \"https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...\",\n  \"createdAt\": \"2016-09-15T23:18:13.000+0000\",\n  \"label\": {\n    \"id\": 614,\n    \"datasetId\": 57,\n    \"name\": \"beach\",\n    \"numExamples\": 50\n  },\n  \"object\": \"example\"\n}","language":"json","status":200},{"code":"{}","language":"json","status":400,"name":""}]},"settings":"","auth":"required","params":[],"url":"/vision/datasets/<DATASET_ID>/examples"},"isReference":false,"order":1,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"0-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Location of the local image file to upload.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"1-0\": \"`labelId`\",\n    \"1-1\": \"long\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the example. Maximum length is 180 characters.\",\n    \"1-2\": \"ID of the label to add to the example.\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 3\n}\n[/block]\nKeep the following points in mind when creating examples.\n\n- This call supports only datasets that have a type of `image` or `image-multi-label`. \n\n- After you add an example, you can return information about it, along with a URL to access the image. The URL expires in 30 minutes.\n\n- Add an example to only one label.\n\n- The maximum image file size is 1 MB.\n\n- We recommend a minimum of 100 examples per label.\n\n- The supported image file types are PNG, JPG, and JPEG.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the example.\",\n    \"1-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the example.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `example`.\",\n    \"5-3\": \"1.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the example was created.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`label`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Contains information about the label with which the example is associated.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`location`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"1.0\",\n    \"3-2\": \"URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. 
This URL can be used to display images that were uploaded to a dataset in a UI.\"\n  },\n  \"cols\": 4,\n  \"rows\": 6\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"0-2\": \"ID of the dataset that the example’s label belongs to.\",\n    \"1-2\": \"ID of the example’s label.\",\n    \"2-2\": \"Name of the example’s label.\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Adds an example with the specified label to a dataset.","slug":"create-an-example","type":"post","title":"Create an Example","__v":1,"childrenPages":[]}

Create an Example

Adds an example with the specified label to a dataset.

##Request Parameters##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `data` | string | Location of the local image file to upload. | 1.0 |
| `labelId` | long | ID of the label to add to the example. | 1.0 |
| `name` | string | Name of the example. Maximum length is 180 characters. | 1.0 |

Keep the following points in mind when creating examples.

- This call supports only datasets that have a type of `image` or `image-multi-label`.
- After you add an example, you can return information about it, along with a URL to access the image. The URL expires in 30 minutes.
- Add an example to only one label.
- The maximum image file size is 1 MB.
- We recommend a minimum of 100 examples per label.
- The supported image file types are PNG, JPG, and JPEG.

##Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Contains information about the label with which the example is associated. | 1.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset that the example's label belongs to. | 1.0 |
| `id` | long | ID of the example's label. | 1.0 |
| `name` | string | Name of the example's label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

Definition

`POST https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>/examples`

Examples

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=77880132.jpg" -F "labelId=614" -F "data=@C:\Mountains vs Beach\Beaches\77880132.jpg" https://api.einstein.ai/v2/vision/datasets/57/examples
```

Result Format

```json
{
  "id": 43887,
  "name": "77880132.jpg",
  "location": "https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {
    "id": 614,
    "datasetId": 57,
    "name": "beach",
    "numExamples": 50
  },
  "object": "example"
}
```
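For comparison, the same multipart request might look like this in Python. This is a sketch assuming the `requests` package; the token, dataset ID, label ID, and file name are the placeholder values from the cURL example.

```python
# Sketch of the cURL example above in Python, using the `requests` package.
# Token, dataset ID, label ID, and file path are placeholders.
import requests

API_ROOT = "https://api.einstein.ai/v2"
TOKEN = "<TOKEN>"

def create_example(dataset_id, label_id, image_path, name):
    with open(image_path, "rb") as image:
        response = requests.post(
            f"{API_ROOT}/vision/datasets/{dataset_id}/examples",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"name": name, "labelId": str(label_id)},
            files={"data": image},  # requests builds the multipart/form-data body
        )
    response.raise_for_status()
    return response.json()

# example = create_example(57, 614, "77880132.jpg", "77880132.jpg")
```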
{"_id":"59de6224666d650024f78fbc","category":"59de6223666d650024f78fa4","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-05-04T20:32:00.696Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"method":"post","results":{"codes":[{"name":"","status":200,"language":"json","code":"{\n  \"id\": 618168,\n  \"name\": \"alps.jpg\",\n  \"location\": \"https://HnpTxmdFb6%2BY1jwAqBtjkUMUj6qKQD0CTjsJ...\",\n  \"createdAt\": \"2017-05-04T20:52:02.000+0000\",\n  \"label\": {\n    \"id\": 3235,\n    \"datasetId\": 1000475,\n    \"name\": \"Mountains\",\n    \"numExamples\": 104\n  },\n  \"object\": \"example\"\n}"},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","examples":{"codes":[{"code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=3CMCRC572BD3OZTQSTTUU4733Y\" -F \"data=@c:\\data\\alps.jpg\" -F \"expectedLabel=Mountains\" https://api.einstein.ai/v2/vision/feedback","language":"curl"}]},"auth":"required","params":[],"url":"/vision/feedback"},"isReference":false,"order":2,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"0-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Local image file to upload.\",\n    \"0-3\": \"2.0\",\n    \"3-0\": \"`name`\",\n    \"2-0\": \"`modelId`\",\n    \"2-1\": \"string\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Name of the example. Optional. Maximum length is 180 characters.\",\n    \"2-2\": \"ID of the model that misclassified the image. The feedback example is added to the dataset associated with this model.\",\n    \"2-3\": \"2.0\",\n    \"3-3\": \"2.0\",\n    \"1-0\": \"`expectedLabel`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Correct label for the example. Must be a label that exists in the dataset.\",\n    \"1-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\nIf a model returns an incorrect prediction for an image, you can use that image to improve the model. You do this by adding the image to the dataset and retraining it to update the model. The misclassified images that you add to the dataset are known as feedback. Use this API to add a misclassified image with the correct label to the dataset from which the model was created. \n\nThis call supports only datasets that have a type of `image` or `image-multi-label`. \n\nKeep the following points in mind when creating feedback examples.\n\n- You pass in a `modelId` parameter, but the example is added to the dataset from which the specified model was created.\n\n- If you omit the `name` request parameter, the API uses the file name for the example name.\n\n- Feedback examples must have unique names. If an example with the same name exists in the dataset, you'll receive an error. This rule applies whether the API uses the file name as the example name or whether you pass in an example name in the `name` parameter.\n\n- After you add a feedback example, you can return information about it, along with a URL to access the image. 
The URL expires in 30 minutes.\n\n- Add a feedback example to only one label per call.\n\n- The maximum image file size is 1 MB.\n\n- The supported image file types are PNG, JPG, and JPEG.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the feedback example.\",\n    \"1-3\": \"2.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the feedback example.\",\n    \"4-3\": \"2.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `example`.\",\n    \"5-3\": \"2.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the feedback example was created.\",\n    \"0-3\": \"2.0\",\n    \"2-0\": \"`label`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Contains information about the label that the feedback example is associated with.\",\n    \"2-3\": \"2.0\",\n    \"3-0\": \"`location`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"2.0\",\n    \"3-2\": \"URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI.\"\n  },\n  \"cols\": 4,\n  \"rows\": 6\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"0-2\": \"ID of the dataset to which the feedback example’s label belongs.\",\n    \"1-2\": \"ID of the feedback example’s label.\",\n    \"2-2\": \"Name of the feedback example’s label.\",\n    \"0-3\": \"2.0\",\n    \"1-3\": \"2.0\",\n    \"2-3\": \"2.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Adds a feedback example to the dataset associated with the specified model. Available in Einstein Vision API version 2.0 and later.","slug":"create-a-feedback-example","type":"post","title":"Create a Feedback Example","__v":0,"childrenPages":[]}

Create a Feedback Example

Adds a feedback example to the dataset associated with the specified model. Available in Einstein Vision API version 2.0 and later.

##Request Parameters##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `data` | string | Local image file to upload. | 2.0 |
| `expectedLabel` | string | Correct label for the example. Must be a label that exists in the dataset. | 2.0 |
| `modelId` | string | ID of the model that misclassified the image. The feedback example is added to the dataset associated with this model. | 2.0 |
| `name` | string | Name of the example. Optional. Maximum length is 180 characters. | 2.0 |

If a model returns an incorrect prediction for an image, you can use that image to improve the model: add the image to the dataset and then retrain the dataset to update the model. The misclassified images that you add to the dataset are known as feedback. Use this API to add a misclassified image, with the correct label, to the dataset from which the model was created.

This call supports only datasets that have a type of `image` or `image-multi-label`.

Keep the following points in mind when creating feedback examples.

- You pass in a `modelId` parameter, but the example is added to the dataset from which the specified model was created.
- If you omit the `name` request parameter, the API uses the file name for the example name.
- Feedback examples must have unique names. If an example with the same name exists in the dataset, you receive an error. This rule applies whether the API uses the file name as the example name or you pass in an example name in the `name` parameter.
- After you add a feedback example, you can return information about it, along with a URL to access the image. The URL expires in 30 minutes.
- Add a feedback example to only one label per call.
- The maximum image file size is 1 MB.
- The supported image file types are PNG, JPG, and JPEG.

##Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `createdAt` | date | Date and time that the feedback example was created. | 2.0 |
| `id` | long | ID of the feedback example. | 2.0 |
| `label` | object | Contains information about the label that the feedback example is associated with. | 2.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI. | 2.0 |
| `name` | string | Name of the feedback example. | 2.0 |
| `object` | string | Object returned; in this case, `example`. | 2.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset to which the feedback example's label belongs. | 2.0 |
| `id` | long | ID of the feedback example's label. | 2.0 |
| `name` | string | Name of the feedback example's label. | 2.0 |
| `numExamples` | int | Number of examples that have the label. | 2.0 |

Definition

`POST https://api.einstein.ai/v2/vision/feedback`

Examples

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=3CMCRC572BD3OZTQSTTUU4733Y" -F "data=@c:\data\alps.jpg" -F "expectedLabel=Mountains" https://api.einstein.ai/v2/vision/feedback
```

Result Format

```json
{
  "id": 618168,
  "name": "alps.jpg",
  "location": "https://HnpTxmdFb6%2BY1jwAqBtjkUMUj6qKQD0CTjsJ...",
  "createdAt": "2017-05-04T20:52:02.000+0000",
  "label": {
    "id": 3235,
    "datasetId": 1000475,
    "name": "Mountains",
    "numExamples": 104
  },
  "object": "example"
}
```
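The same feedback request might look like this in Python. This is a sketch assuming the `requests` package; the token, model ID, and file path are the placeholder values from the cURL example.

```python
# Sketch of the feedback cURL example above in Python, using `requests`.
# Token, model ID, and file path are placeholders.
import requests

API_ROOT = "https://api.einstein.ai/v2"
TOKEN = "<TOKEN>"

def create_feedback_example(model_id, image_path, expected_label):
    with open(image_path, "rb") as image:
        response = requests.post(
            f"{API_ROOT}/vision/feedback",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"modelId": model_id, "expectedLabel": expected_label},
            files={"data": image},
        )
    response.raise_for_status()
    return response.json()

# create_feedback_example("3CMCRC572BD3OZTQSTTUU4733Y", "alps.jpg", "Mountains")
```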
{"_id":"5a0b68d095139e00260e4796","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","category":"59de6223666d650024f78fa4","user":"573b5a1f37fcf72000a2e683","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-11-14T22:06:08.710Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=3CMCRC572BD3OZTQSTTUU4733Y\" -F \"data=@c:\\data\\alpine_feedback.zip\""}]},"settings":"","results":{"codes":[{"name":"","status":200,"language":"json","code":"{\n  \"id\": 1004850,\n  \"name\": \"Alpine Products\",\n  \"createdAt\": \"2017-11-30T18:17:40.000+0000\",\n  \"updatedAt\": \"2017-11-30T18:17:42.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 39297,\n        \"datasetId\": 1004850,\n        \"name\": \"Alpine - Oat Cereal\",\n        \"numExamples\": 2\n      },\n      {\n        \"id\": 39298,\n        \"datasetId\": 1004850,\n        \"name\": \"Alpine - Corn Flakes\",\n        \"numExamples\": 3\n      },\n      {\n        \"id\": 39299,\n        \"datasetId\": 1004850,\n        \"name\": \"Alpine - Bran Cereal\",\n        \"numExamples\": 10\n      },\n      {\n        \"id\": 39300,\n        \"datasetId\": 1004850,\n        \"name\": \"Other\",\n        \"numExamples\": 3\n      }\n    ]\n  },\n  \"totalExamples\": 15,\n  \"totalLabels\": 4,\n  \"available\": false,\n  \"statusMsg\": \"UPLOADING\",\n  \"type\": \"image-detection\",\n  \"object\": \"dataset\"\n}"},{"status":400,"language":"json","code":"{}","name":""}]},"method":"put","auth":"required","params":[],"url":"/vision/bulkfeedback"},"isReference":false,"order":3,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"0-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Local .zip file to upload. The maximum .zip file size you can upload from a local drive is 50 MB.\\n\\nThe images and an annotations.csv file with the label and bounding box information are contained in a .zip file, just like when you create a dataset from a .zip file.\",\n    \"0-3\": \"2.0\",\n    \"1-0\": \"`modelId`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"ID of the model that misclassified the images. The feedback examples are added to the dataset associated with this model.\",\n    \"1-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nIf a model returns an incorrect prediction for an image, you can use that image to improve the model. You do this by adding the image to the dataset and retraining it to update the model. The misclassified images that you add to the dataset are known as feedback. Use this API to add misclassified images with the correct labels to the dataset from which the model was created. \n\nThis call supports only datasets that have a type of `image-detection`. \n\nKeep the following points in mind when creating feedback examples.\n\n- You pass in a `modelId` parameter, but the examples are added to the dataset from which the specified model was created.\n\n- The .zip file that contains the feedback images and annotations file follows the same format and structure as the .zip file you use to create a dataset. 
See the Object Detection Datasets section in [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).\n\n- This API call checks for duplicate images in the .zip file that contains the feedback images. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.\n\n- The call also checks for duplicates between the images in the .zip file and the images already in the dataset. When the feedback examples and data from the annotations.csv file are merged, the API reconciles any differences by replacing earlier examples with the feedback images in the .zip file. \n\n- The maximum image file size is 1 MB.\n\n- Images must be no larger than 1,600 pixels high by 1,600 pixels wide. You can upload images that are larger, but training the dataset might fail.\n\n- The supported image file types are PNG, JPG, and JPEG.\n\n- This call is asynchronous, so the response has a status of `UPLOADING`.\n\n- Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When available is true and statusMsg is `SUCCEEDED` the upload of feedback examples is complete.\n\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"2.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"ID of the dataset to which the feedback examples are added.\",\n    \"2-3\": \"2.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"2.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"2.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"2.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"2.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"2.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"2.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset while feedback is being added. Valid values are:\\n- `FAILURE: <failure_reason>`—Creation of feedback examples failed.\\n- `SUCCEEDED`—Creation of feedback examples is complete.\\n- `UPLOADING`—Upload of feedback examples is in progress.\",\n    \"6-3\": \"2.0\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. 
Feedback examples can be added via a .zip file only for object detection datasets, so this returns `image-detection`.\",\n    \"9-3\": \"2.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"0-2\": \"ID of the dataset to which the label belongs.\",\n    \"1-2\": \"ID of the label.\",\n    \"2-2\": \"Name of the label.\",\n    \"0-3\": \"2.0\",\n    \"1-3\": \"2.0\",\n    \"2-3\": \"2.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label. This is the number of examples before the feedback examples are added to the dataset.\",\n    \"3-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","excerpt":"Adds feedback examples to the dataset associated with the specified object detection model. Available in Einstein Vision API version 2.0 and later.","slug":"create-feedback-examples-from-a-zip-file","type":"put","title":"Create Feedback Examples From a Zip File","__v":0,"parentDoc":null,"childrenPages":[]}

Create Feedback Examples From a Zip File

Adds feedback examples to the dataset associated with the specified object detection model. Available in Einstein Vision API version 2.0 and later.

##Request Parameters##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `data` | string | Local .zip file to upload. The maximum .zip file size you can upload from a local drive is 50 MB. The images and an annotations.csv file with the label and bounding-box information are contained in the .zip file, just like when you create a dataset from a .zip file. | 2.0 |
| `modelId` | string | ID of the model that misclassified the images. The feedback examples are added to the dataset associated with this model. | 2.0 |

If a model returns an incorrect prediction for an image, you can use that image to improve the model: add the image to the dataset and then retrain the dataset to update the model. The misclassified images that you add to the dataset are known as feedback. Use this API to add misclassified images, with the correct labels, to the dataset from which the model was created. (A Python sketch of this call follows the Result Format section below.)

This call supports only datasets that have a type of `image-detection`.

Keep the following points in mind when creating feedback examples.

- You pass in a `modelId` parameter, but the examples are added to the dataset from which the specified model was created.
- The .zip file that contains the feedback images and annotations file follows the same format and structure as the .zip file you use to create a dataset. See the Object Detection Datasets section in [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).
- This API call checks for duplicate images in the .zip file that contains the feedback images. If the .zip file contains multiple image files that have the same contents, only the first of the duplicate files is uploaded.
- The call also checks for duplicates between the images in the .zip file and the images already in the dataset. When the feedback examples and data from the annotations.csv file are merged, the API reconciles any differences by replacing earlier examples with the feedback images in the .zip file.
- The maximum image file size is 1 MB.
- Images must be no larger than 1,600 pixels high by 1,600 pixels wide. You can upload images that are larger, but training the dataset might fail.
- The supported image file types are PNG, JPG, and JPEG.
- This call is asynchronous, so the response has a status of `UPLOADING`.
- Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the upload of feedback examples is complete.

##Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 2.0 |
| `createdAt` | date | Date and time that the dataset was created. | 2.0 |
| `id` | long | ID of the dataset to which the feedback examples are added. | 2.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 2.0 |
| `name` | string | Name of the dataset. | 2.0 |
| `object` | string | Object returned; in this case, `dataset`. | 2.0 |
| `statusMsg` | string | Status of the dataset while feedback is being added. Valid values are `FAILURE: <failure_reason>` (creation of feedback examples failed), `SUCCEEDED` (creation of feedback examples is complete), and `UPLOADING` (upload of feedback examples is in progress). | 2.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 2.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 2.0 |
| `type` | string | Type of dataset data. Feedback examples can be added via a .zip file only for object detection datasets, so this returns `image-detection`. | 2.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 2.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset to which the label belongs. | 2.0 |
| `id` | long | ID of the label. | 2.0 |
| `name` | string | Name of the label. | 2.0 |
| `numExamples` | int | Number of examples that have the label. This is the number of examples before the feedback examples are added to the dataset. | 2.0 |

Definition

`PUT https://api.einstein.ai/v2/vision/bulkfeedback`

Examples

```curl
curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=3CMCRC572BD3OZTQSTTUU4733Y" -F "data=@c:\data\alpine_feedback.zip" https://api.einstein.ai/v2/vision/bulkfeedback
```

Result Format

```json
{
  "id": 1004850,
  "name": "Alpine Products",
  "createdAt": "2017-11-30T18:17:40.000+0000",
  "updatedAt": "2017-11-30T18:17:42.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 39297,
        "datasetId": 1004850,
        "name": "Alpine - Oat Cereal",
        "numExamples": 2
      },
      {
        "id": 39298,
        "datasetId": 1004850,
        "name": "Alpine - Corn Flakes",
        "numExamples": 3
      },
      {
        "id": 39299,
        "datasetId": 1004850,
        "name": "Alpine - Bran Cereal",
        "numExamples": 10
      },
      {
        "id": 39300,
        "datasetId": 1004850,
        "name": "Other",
        "numExamples": 3
      }
    ]
  },
  "totalExamples": 15,
  "totalLabels": 4,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "image-detection",
  "object": "dataset"
}
```
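The same bulk feedback request might look like this in Python. This is a sketch assuming the `requests` package; the token, model ID, and .zip path are the placeholder values from the cURL example. Because the call is asynchronous, pair it with a polling loop like the one sketched earlier for .zip uploads.

```python
# Sketch of the bulk feedback cURL example above in Python, using `requests`.
# Token, model ID, and .zip path are placeholders.
import requests

API_ROOT = "https://api.einstein.ai/v2"
TOKEN = "<TOKEN>"

def create_feedback_from_zip(model_id, zip_path):
    with open(zip_path, "rb") as archive:
        response = requests.put(
            f"{API_ROOT}/vision/bulkfeedback",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"modelId": model_id},
            files={"data": archive},
        )
    response.raise_for_status()
    return response.json()  # statusMsg starts as "UPLOADING"

# create_feedback_from_zip("3CMCRC572BD3OZTQSTTUU4733Y", "alpine_feedback.zip")
```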
{"_id":"59de6224666d650024f78fbd","category":"59de6223666d650024f78fa4","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-25T00:04:55.609Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/57/examples"}]},"method":"get","results":{"codes":[{"status":200,"language":"json","code":"{\n  \"object\": \"list\",\n  \"data\": [\n    {\n      \"id\": 43888,\n      \"name\": \"659803277.jpg\",\n      \"location\": \"https://K3A04Q79O5TBySIZSeMIj%2BC3zqi7rOmeK...\",\n      \"createdAt\": \"2016-09-16T17:14:38.000+0000\",\n      \"label\": {\n        \"id\": 618,\n        \"datasetId\": 57,\n        \"name\": \"Beaches\",\n        \"numExamples\": 50\n    },\n      \"object\": \"example\"\n    },\n    {\n      \"id\": 43889,\n      \"name\": \"661860605.jpg\",\n      \"location\": \"https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...\",\n      \"createdAt\": \"2016-09-16T17:14:42.000+0000\",\n      \"label\": {\n        \"id\": 618,\n        \"datasetId\": 57,\n        \"name\": \"Beaches\",\n        \"numExamples\": 50\n      },\n      \"object\": \"example\"\n    },\n    {\n      \"id\": 43890,\n      \"name\": \"660548647.jpg\",\n      \"location\": \"https://HKzY79n47nd%2F0%2FCem6PJBkUoyxMWVssCX...\",\n      \"createdAt\": \"2016-09-16T17:15:25.000+0000\",\n      \"label\": {\n        \"id\": 619,\n        \"datasetId\": 57,\n        \"name\": \"Mountains\",\n        \"numExamples\": 49\n      },\n      \"object\": \"example\"\n    },\n    {\n      \"id\": 43891,\n      \"name\": \"578339672.jpg\",\n      \"location\": \"https://LRlXQeRyTVDiujSzHTabcJ2FGGnuGhAvedvu0D...\",\n      \"createdAt\": \"2016-09-16T17:15:29.000+0000\",\n      \"label\": {\n        \"id\": 619,\n        \"datasetId\": 57,\n        \"name\": \"Mountains\",\n        \"numExamples\": 49\n      },\n      \"object\": \"example\"\n    }\n  ]\n}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":"/vision/datasets/<DATASET_ID>/examples"},"isReference":false,"order":4,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`data`\",\n    \"0-1\": \"array\",\n    \"0-2\": \"Array of `example` objects.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`object`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Object returned; in this case, `list`.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\n##Example Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the example was created.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the example.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`label`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Label that the example is associated with.\",\n    \"2-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the example.\",\n    \"4-3\": \"1.0\",\n  
  \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `example`.\",\n    \"5-3\": \"1.0\",\n    \"3-0\": \"`location`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"1.0\",\n    \"3-2\": \"URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display in a UI images that were uploaded to a dataset.\"\n  },\n  \"cols\": 4,\n  \"rows\": 6\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\n##Page Through Examples##\n\nBy default, this call returns 100 examples. If you want to page through the examples in a dataset, use the `offset` and `count` query parameters.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"0-0\": \"`count`\",\n    \"0-1\": \"int\",\n    \"0-2\": \"Number of examples to return. Optional.\",\n    \"1-0\": \"`offset`\",\n    \"1-1\": \"int\",\n    \"1-2\": \"Index of the example from which you want to start paging. Optional.\",\n    \"h-3\": \"Available Version\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nThe following call shows how to page through examples using these query parameters. If you omit the `count` parameter, the API returns 100 examples. If you omit the `offset` parameter, paging starts at 0.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\"  \\\"https://api.einstein.ai/v2/vision/datasets/57/examples?offset=100&count=50\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n###How Paging Works###\n\nTo page through all the examples in a dataset:\n\n1. Make the [Get a Dataset](doc:get-a-dataset) call to return the `totalExamples` value for the dataset.\n2. Make the [Get All Examples](doc:get-all-examples) call and pass in the `offset` and `count` values until you reach the end of the examples.\n\nFor example, let's say you have a dataset and you want to display information about the examples in a UI and show 50 at a time. The first call would have `offset=0` and `count=50`, the second call would have `offset=50` and `count=50`, and so on.\n\n##Return Specific Example Types##\n\nBy default, this call returns examples created by uploading images from a .zip file (using either POST or PUT). Use the `source` query parameter to return examples that were created in the dataset as feedback. The `source` query parameter is available in Einstein Vision API version 2.0 and later. 
Valid values for the `source` parameter are:\n\n- `all`—Return both upload and feedback examples.\n- `feedback`—Return examples that were created as feedback.\n- `upload`—Return examples that were created from uploading a .zip file.\n\n This cURL call returns only feedback examples.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" https://api.einstein.ai/v2/vision/datasets/57/examples?source=feedback\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]","excerpt":"Returns all the examples for the specified dataset. By default, returns examples created by uploading them from a .zip file.","slug":"get-all-examples","type":"get","title":"Get All Examples","__v":0,"childrenPages":[]}

Get All Examples

Returns all the examples for the specified dataset. By default, returns examples created by uploading them from a .zip file.

##Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`data`", "0-1": "array", "0-2": "Array of `example` objects.", "0-3": "1.0", "1-0": "`object`", "1-1": "string", "1-2": "Object returned; in this case, `list`.", "1-3": "1.0" }, "cols": 4, "rows": 2 } [/block] ##Example Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`createdAt`", "0-1": "date", "0-2": "Date and time that the example was created.", "0-3": "1.0", "1-0": "`id`", "1-1": "long", "1-2": "ID of the example.", "1-3": "1.0", "2-0": "`label`", "2-1": "object", "2-2": "Label that the example is associated with.", "2-3": "1.0", "4-0": "`name`", "4-1": "string", "4-2": "Name of the example.", "4-3": "1.0", "5-0": "`object`", "5-1": "string", "5-2": "Object returned; in this case, `example`.", "5-3": "1.0", "3-0": "`location`", "3-1": "string", "3-3": "1.0", "3-2": "URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display in a UI images that were uploaded to a dataset." }, "cols": 4, "rows": 6 } [/block] ##Label Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`datasetId`", "0-1": "long", "0-2": "ID of the dataset that the label belongs to.", "0-3": "1.0", "1-0": "`id`", "1-1": "long", "1-2": "ID of the label.", "1-3": "1.0", "2-0": "`name`", "2-1": "string", "2-2": "Name of the label.", "2-3": "1.0", "3-0": "`numExamples`", "3-1": "int", "3-2": "Number of examples that have the label.", "3-3": "1.0" }, "cols": 4, "rows": 4 } [/block] ##Page Through Examples## By default, this call returns 100 examples. If you want to page through the examples in a dataset, use the `offset` and `count` query parameters. [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "0-0": "`count`", "0-1": "int", "0-2": "Number of examples to return. Optional.", "1-0": "`offset`", "1-1": "int", "1-2": "Index of the example from which you want to start paging. Optional.", "h-3": "Available Version", "0-3": "1.0", "1-3": "1.0" }, "cols": 4, "rows": 2 } [/block] The following call shows how to page through examples using these query parameters. If you omit the `count` parameter, the API returns 100 examples. If you omit the `offset` parameter, paging starts at 0. [block:code] { "codes": [ { "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" \"https://api.einstein.ai/v2/vision/datasets/57/examples?offset=100&count=50\"", "language": "curl" } ] } [/block] ###How Paging Works### To page through all the examples in a dataset: 1. Make the [Get a Dataset](doc:get-a-dataset) call to return the `totalExamples` value for the dataset. 2. Make the [Get All Examples](doc:get-all-examples) call and pass in the `offset` and `count` values until you reach the end of the examples. For example, let's say you have a dataset and you want to display information about the examples in a UI and show 50 at a time. The first call would have `offset=0` and `count=50`, the second call would have `offset=50` and `count=50`, and so on. ##Return Specific Example Types## By default, this call returns examples created by uploading images from a .zip file (using either POST or PUT). Use the `source` query parameter to return examples that were created in the dataset as feedback. 
##Return Specific Example Types##

By default, this call returns examples created by uploading images from a .zip file (using either POST or PUT). Use the `source` query parameter to return examples that were created in the dataset as feedback. The `source` query parameter is available in Einstein Vision API version 2.0 and later.

Valid values for the `source` parameter are:

- `all`—Return both upload and feedback examples.
- `feedback`—Return examples that were created as feedback.
- `upload`—Return examples that were created from uploading a .zip file.

This cURL call returns only feedback examples.

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/57/examples?source=feedback",
      "language": "curl"
    }
  ]
}
[/block]
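You can also combine `source` with the paging parameters. This variant is a sketch only, assuming the parameters compose in a single call; note the quoted URL, which is required whenever the query string contains more than one parameter.

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" \"https://api.einstein.ai/v2/vision/datasets/57/examples?source=all&offset=0&count=50\"",
      "language": "curl"
    }
  ]
}
[/block]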

Definition

GET https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>/examples

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>/examples",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

The call returns a `list` object whose `data` array contains `example` objects, as described in the response body tables above.

{"_id":"59de6224666d650024f78fbe","category":"59de6223666d650024f78fa4","project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","parentDoc":null,"version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-10-02T19:59:02.352Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{\n    \"object\": \"list\",\n    \"data\": [\n        {\n            \"id\": 322291,\n            \"name\": \"583673532.jpg\",\n            \"location\": \"https://u7kEqOMaHuNOS7Wp2oOgtKQwPs2AgkwOwqHB...\",\n            \"createdAt\": \"2017-04-12T18:38:19.000+0000\",\n            \"label\": {\n                \"id\": 3235,\n                \"datasetId\": 1000475,\n                \"name\": \"Mountains\",\n                \"numExamples\": 108\n            },\n            \"object\": \"example\"\n        },\n        {\n            \"id\": 322292,\n            \"name\": \"483951488.jpg\",\n            \"location\": \"https://6fu7kEqOMaHuNOS7Wp2oOgtKQwPs2AgkwOwq...\",\n            \"createdAt\": \"2017-04-12T18:38:19.000+0000\",\n            \"label\": {\n                \"id\": 3235,\n                \"datasetId\": 1000475,\n                \"name\": \"Mountains\",\n                \"numExamples\": 108\n            },\n            \"object\": \"example\"\n        }\n    ]\n}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"method":"get","examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/examples?labelId=<LABEL_ID>"}]},"auth":"required","params":[],"url":"/vision/examples"},"isReference":false,"order":5,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`data`\",\n    \"0-1\": \"array\",\n    \"0-2\": \"Array of `example` objects.\",\n    \"0-3\": \"2.0\",\n    \"1-0\": \"`object`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Object returned; in this case, `list`.\",\n    \"1-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\n##Example Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the example was created.\",\n    \"0-3\": \"2.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the example.\",\n    \"1-3\": \"2.0\",\n    \"2-0\": \"`label`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Label that the example is associated with.\",\n    \"2-3\": \"2.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the example.\",\n    \"4-3\": \"2.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `example`.\",\n    \"5-3\": \"2.0\",\n    \"3-0\": \"`location`\",\n    \"3-1\": \"string\",\n    \"3-3\": \"2.0\",\n    \"3-2\": \"URL of the image in the dataset. 
This is a temporary URL that expires in 30 minutes.\"\n  },\n  \"cols\": 4,\n  \"rows\": 6\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"2.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"2.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"2.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\n##Page Through Examples##\n\nBy default, this call returns 100 examples. If you want to page through all the examples returned by this call, use the `offset` and `count` query parameters. The `numExamples` value indicates how many examples have the specified label, so you can use this value to control the paging.\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"0-0\": \"`count`\",\n    \"0-1\": \"int\",\n    \"0-2\": \"Number of examples to return. Optional.\",\n    \"1-0\": \"`offset`\",\n    \"1-1\": \"int\",\n    \"1-2\": \"Index of the example from which you want to start paging. Optional.\",\n    \"h-3\": \"Available Version\",\n    \"0-3\": \"2.0\",\n    \"1-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nThe following call shows how to page through examples using these query parameters. If you omit the `count` parameter or the `count` parameter is greater than 100, the API returns 100 examples. If you omit the `offset` parameter, paging starts at 0. When using multiple query parameters in a cURL call, be sure to enclose the endpoint in quotes.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\"  \\\"https://api.einstein.ai/v2/vision/examples?labelId=1234&offset=50&count=50\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Note\",\n  \"body\": \"This call returns both uploaded and feedback examples that have the specified label. This call doesn't support the `source` parameter.\"\n}\n[/block]","excerpt":"Returns all the examples for the specified label. Returns both uploaded examples and feedback examples.","slug":"get-all-vision-examples-for-label","type":"get","title":"Get All Examples for Label","__v":0,"childrenPages":[]}

Get All Examples for Label

Returns all the examples for the specified label. Returns both uploaded examples and feedback examples.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `example` objects. | 2.0 |
| `object` | string | Object returned; in this case, `list`. | 2.0 |

##Example Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 2.0 |
| `id` | long | ID of the example. | 2.0 |
| `label` | object | Label that the example is associated with. | 2.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. | 2.0 |
| `name` | string | Name of the example. | 2.0 |
| `object` | string | Object returned; in this case, `example`. | 2.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 2.0 |
| `id` | long | ID of the label. | 2.0 |
| `name` | string | Name of the label. | 2.0 |
| `numExamples` | int | Number of examples that have the label. | 2.0 |

##Page Through Examples##

By default, this call returns 100 examples. If you want to page through all the examples returned by this call, use the `offset` and `count` query parameters. The `numExamples` value indicates how many examples have the specified label, so you can use this value to control the paging.

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `count` | int | Number of examples to return. Optional. | 2.0 |
| `offset` | int | Index of the example from which you want to start paging. Optional. | 2.0 |

The following call shows how to page through examples using these query parameters. If you omit the `count` parameter, or the `count` parameter is greater than 100, the API returns 100 examples. If you omit the `offset` parameter, paging starts at 0. When using multiple query parameters in a cURL call, be sure to enclose the endpoint in quotes.

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" \"https://api.einstein.ai/v2/vision/examples?labelId=1234&offset=50&count=50\"",
      "language": "curl"
    }
  ]
}
[/block]

[block:callout]
{
  "type": "info",
  "title": "Note",
  "body": "This call returns both uploaded and feedback examples that have the specified label. This call doesn't support the `source` parameter."
}
[/block]
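Because `numExamples` bounds the paging, a script can compute the offsets directly from the label. The following is a minimal sketch; the label ID 1234 is hypothetical, the example count of 108 stands in for a value you'd read from the label's `numExamples` field, and `TOKEN` and `jq` are assumed to be available.

[block:code]
{
  "codes": [
    {
      "code": "#!/bin/bash\n# Sketch: fetch every example for a hypothetical label 1234 in pages of 50.\n# NUM would normally come from the label's numExamples field.\nNUM=108\nCOUNT=50\nfor ((OFFSET=0; OFFSET<NUM; OFFSET+=COUNT)); do\n  curl -s -H \"Authorization: Bearer $TOKEN\" \"https://api.einstein.ai/v2/vision/examples?labelId=1234&offset=$OFFSET&count=$COUNT\" | jq -r '.data[].location'\ndone",
      "language": "shell"
    }
  ]
}
[/block]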

Definition

GET https://api.einstein.ai/v2/vision/examples?labelId=<LABEL_ID>

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/examples?labelId=<LABEL_ID>",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

[block:code]
{
  "codes": [
    {
      "code": "{\n    \"object\": \"list\",\n    \"data\": [\n        {\n            \"id\": 322291,\n            \"name\": \"583673532.jpg\",\n            \"location\": \"https://u7kEqOMaHuNOS7Wp2oOgtKQwPs2AgkwOwqHB...\",\n            \"createdAt\": \"2017-04-12T18:38:19.000+0000\",\n            \"label\": {\n                \"id\": 3235,\n                \"datasetId\": 1000475,\n                \"name\": \"Mountains\",\n                \"numExamples\": 108\n            },\n            \"object\": \"example\"\n        },\n        {\n            \"id\": 322292,\n            \"name\": \"483951488.jpg\",\n            \"location\": \"https://6fu7kEqOMaHuNOS7Wp2oOgtKQwPs2AgkwOwq...\",\n            \"createdAt\": \"2017-04-12T18:38:19.000+0000\",\n            \"label\": {\n                \"id\": 3235,\n                \"datasetId\": 1000475,\n                \"name\": \"Mountains\",\n                \"numExamples\": 108\n            },\n            \"object\": \"example\"\n        }\n    ]\n}",
      "language": "json"
    }
  ]
}
[/block]
{"_id":"59de6225666d650024f78fd4","category":"59de6223666d650024f78fa5","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-25T00:18:41.425Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model\" -F \"datasetId=57\" https://api.einstein.ai/v2/vision/train"}]},"method":"post","results":{"codes":[{"status":200,"language":"json","code":"{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 1,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"trainParams\": null,\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}","name":""},{"status":400,"language":"json","code":"{\n  \"message\": \"Train job not yet completed successfully\"\n}","name":""}]},"settings":"","auth":"required","params":[],"url":"/vision/train"},"isReference":false,"order":0,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"ID of the dataset to train.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`learningRate`\",\n    \"1-0\": \"`epochs`\",\n    \"1-1\": \"int\",\n    \"2-1\": \"float\",\n    \"2-2\": \"Specifies how much the gradient affects the optimization of the model at each time step. Optional. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001. \\n\\nThis parameter isn't used when training a detection dataset.\",\n    \"1-2\": \"Number of training iterations for the neural network. Optional. Valid values are 1–1,000.\\nIf not specified:\\n- For image and multi-label datasets, the default is calculated based on the dataset size.\\n- For detection datasets, the default is 20 epochs.\\n\\nThe larger the number, the longer the training takes to complete.\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`name`\",\n    \"3-2\": \"Name of the model. Maximum length is 180 characters.\",\n    \"3-1\": \"string\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`trainParams`\",\n    \"4-1\": \"object\",\n    \"4-3\": \"1.0\",\n    \"4-2\": \"JSON that contains parameters that specify how the model is created. Optional. Valid values:\\n- `{\\\"trainSplitRatio\\\": 0.n}`—Lets you specify the ratio of data used to train the dataset and the data used to test the model. The default split ratio is 0.9; 90% of the data is used to train the dataset and create the model and 10% of the data is used to test the model. If you pass in a split ratio of 0.6, then 60% of the data is used to train the dataset and create the model and 40% of the data is used to test the model.\\n\\n- `{\\\"withFeedback\\\": true}`—Lets you specify that feedback examples are included in the data to be trained to create the model. 
If you omit this parameter, feedback examples aren't used in training. Available in Einstein Vision API version 2.0 and later.\\n\\n- `{\\\"withGlobalDatasetId\\\": <DATASET_ID>}`—Lets you specify that a global dataset is used in addition to the specified dataset to create the model. Available in Einstein Vision API version 2.0 and later.\\nThis parameter isn't used when training a detection dataset.\"\n  },\n  \"cols\": 4,\n  \"rows\": 5\n}\n[/block]\nKeep the following points in mind when training a dataset:\n\n- If you’re unsure which values to set for the `epochs` and `learningRate` parameters, we recommend that you omit them and use the defaults.\n- A dataset can have only one training in progress at a time. Let's say you train a dataset and there's a model with a status of `RUNNING` or `QUEUED`. If you attempt to train the same dataset again, you receive an error.\n- If you try to train a dataset that was deleted or that has a status of `DELETE_PENDING`, you receive an error.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`datasetId`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the dataset trained to create the model.\",\n    \"1-3\": \"1.0\",\n    \"7-0\": \"`name`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Name of the model.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`object`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"Object returned; in this case, `training`.\",\n    \"8-3\": \"1.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"1.0\",\n    \"5-0\": \"`modelId`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"ID of the model. Contains letters and numbers.\",\n    \"5-3\": \"1.0\",\n    \"3-0\": \"`epochs`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of epochs used during training.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`learningRate`\",\n    \"4-1\": \"float\",\n    \"4-2\": \"Learning rate used during training.\",\n    \"4-3\": \"1.0\",\n    \"9-0\": \"`progress`\",\n    \"9-1\": \"float\",\n    \"9-2\": \"How far the training job has progressed. Values are between 0–1.\",\n    \"9-3\": \"1.0\",\n    \"10-0\": \"`queuePosition`\",\n    \"10-1\": \"int\",\n    \"10-2\": \"Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`.\",\n    \"10-3\": \"1.0\",\n    \"11-0\": \"`status`\",\n    \"11-1\": \"string\",\n    \"11-2\": \"Status of the training job. Valid values are:\\n- `QUEUED`—The training job is in the queue.\\n- `RUNNING`—The training job is running.\\n- `SUCCEEDED`—The training job succeeded, and the model was created.\\n- `FAILED`—The training job failed.\",\n    \"14-0\": \"`updatedAt`\",\n    \"14-1\": \"date\",\n    \"14-2\": \"Date and time that the model was last updated.\",\n    \"11-3\": \"1.0\",\n    \"14-3\": \"1.0\",\n    \"2-0\": \"`datasetVersionId`\",\n    \"2-1\": \"int\",\n    \"2-2\": \"N/A\",\n    \"2-3\": \"1.0\",\n    \"12-0\": \"`trainParams`\",\n    \"12-1\": \"object\",\n    \"12-2\": \"Training parameters passed into the request. For example, if you sent in a split of 0.7, the response contains `\\\"trainParams\\\": {\\\"trainSplitRatio\\\": 0.7}`\",\n    \"12-3\": \"1.0\",\n    \"13-0\": \"`trainStats`\",\n    \"13-1\": \"object\",\n    \"13-2\": \"Returns null when you train a dataset. 
Training statistics are returned when the status is `SUCCEEDED` or `FAILED`.\",\n    \"13-3\": \"1.0\",\n    \"6-0\": \"`modelType`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Type of data from which the model was created. Inferred from the dataset `type`. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\"\n  },\n  \"cols\": 4,\n  \"rows\": 15\n}\n[/block]\nThis cURL command sends in the `trainParams` request parameter. This command has double quotes and escaped double quotes around `trainSplitRatio` to run on Windows. You might need to reformat it to run on another OS.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=Beach Mountain Model\\\" -F \\\"datasetId=57\\\" -F \\\"trainParams={\\\\\\\"trainSplitRatio\\\\\\\":0.7}\\\" https://api.einstein.ai/v2/vision/train\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nYou can pass in multiple training parameters. For example, you specify `withFeedback` and `trainSplitRatio` using this JSON: `{\"withFeedback\" : true, \"trainSplitRatio\" : 0.7}`.\n\nIf you want to train a dataset and update an existing model, see [Retrain a Dataset](doc:retrain-a-dataset).","excerpt":"Trains a dataset and creates a model.","slug":"train-a-dataset","type":"post","title":"Train a Dataset","__v":0,"childrenPages":[]}

Train a Dataset

Trains a dataset and creates a model.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset to train. | 1.0 |
| `epochs` | int | Number of training iterations for the neural network. Optional. Valid values are 1–1,000. If not specified: for image and multi-label datasets, the default is calculated based on the dataset size; for detection datasets, the default is 20 epochs. The larger the number, the longer the training takes to complete. | 1.0 |
| `learningRate` | float | Specifies how much the gradient affects the optimization of the model at each time step. Optional. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001. This parameter isn't used when training a detection dataset. | 1.0 |
| `name` | string | Name of the model. Maximum length is 180 characters. | 1.0 |
| `trainParams` | object | JSON that contains parameters that specify how the model is created. Optional. Valid values:<br>- `{"trainSplitRatio": 0.n}`—Lets you specify the ratio of data used to train the dataset and the data used to test the model. The default split ratio is 0.9: 90% of the data is used to train the dataset and create the model, and 10% of the data is used to test the model. If you pass in a split ratio of 0.6, then 60% of the data is used to train the dataset and create the model, and 40% of the data is used to test the model.<br>- `{"withFeedback": true}`—Lets you specify that feedback examples are included in the data to be trained to create the model. If you omit this parameter, feedback examples aren't used in training. Available in Einstein Vision API version 2.0 and later.<br>- `{"withGlobalDatasetId": <DATASET_ID>}`—Lets you specify that a global dataset is used in addition to the specified dataset to create the model. Available in Einstein Vision API version 2.0 and later. This parameter isn't used when training a detection dataset. | 1.0 |

Keep the following points in mind when training a dataset:

- If you're unsure which values to set for the `epochs` and `learningRate` parameters, we recommend that you omit them and use the defaults.
- A dataset can have only one training in progress at a time. Let's say you train a dataset and there's a model with a status of `RUNNING` or `QUEUED`. If you attempt to train the same dataset again, you receive an error.
- If you try to train a dataset that was deleted or that has a status of `DELETE_PENDING`, you receive an error.
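If you do want to tune the model, you can pass explicit values for both parameters as form fields. This is a sketch only; the values shown (20 epochs, a learning rate of 0.0005) are illustrative values within the documented ranges, not recommendations.

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model\" -F \"datasetId=57\" -F \"epochs=20\" -F \"learningRate=0.0005\" https://api.einstein.ai/v2/vision/train",
      "language": "curl"
    }
  ]
}
[/block]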
##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `datasetVersionId` | int | N/A | 1.0 |
| `epochs` | int | Number of epochs used during training. | 1.0 |
| `learningRate` | float | Learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `modelType` | string | Type of data from which the model was created. Inferred from the dataset `type`. Valid values are:<br>- `image`<br>- `image-detection`—Available in Einstein Vision API version 2.0 and later.<br>- `image-multi-label`—Available in Einstein Vision API version 2.0 and later. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | float | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the training job. Valid values are:<br>- `QUEUED`—The training job is in the queue.<br>- `RUNNING`—The training job is running.<br>- `SUCCEEDED`—The training job succeeded, and the model was created.<br>- `FAILED`—The training job failed. | 1.0 |
| `trainParams` | object | Training parameters passed into the request. For example, if you sent in a split of 0.7, the response contains `"trainParams": {"trainSplitRatio": 0.7}`. | 1.0 |
| `trainStats` | object | Returns null when you train a dataset. Training statistics are returned when the status is `SUCCEEDED` or `FAILED`. | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |

This cURL command sends in the `trainParams` request parameter. The command uses double quotes and escaped double quotes around `trainSplitRatio` so that it runs on Windows. You might need to reformat it to run on another OS.

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model\" -F \"datasetId=57\" -F \"trainParams={\\\"trainSplitRatio\\\":0.7}\" https://api.einstein.ai/v2/vision/train",
      "language": "curl"
    }
  ]
}
[/block]

You can pass in multiple training parameters. For example, you can specify both `withFeedback` and `trainSplitRatio` using this JSON: `{"withFeedback": true, "trainSplitRatio": 0.7}`.

If you want to train a dataset and update an existing model, see [Retrain a Dataset](doc:retrain-a-dataset).
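On macOS and Linux, the usual reformatting is to single-quote the `trainParams` form field so the inner double quotes need no escaping; the same quoting also makes it easy to pass multiple training parameters at once. A sketch of the same request under those assumptions:

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model\" -F \"datasetId=57\" -F 'trainParams={\"withFeedback\": true, \"trainSplitRatio\": 0.7}' https://api.einstein.ai/v2/vision/train",
      "language": "curl"
    }
  ]
}
[/block]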

Definition

POST https://api.einstein.ai/v2/vision/train

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach Mountain Model\" -F \"datasetId=57\" https://api.einstein.ai/v2/vision/train",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

[block:code]
{
  "codes": [
    {
      "code": "{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 1,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"trainParams\": null,\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}",
      "language": "json"
    }
  ]
}
[/block]
{"_id":"59de6225666d650024f78fd5","category":"59de6223666d650024f78fa5","parentDoc":null,"project":"552d474ea86ee20d00780cd7","user":"573b5a1f37fcf72000a2e683","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-05-05T18:52:25.287Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"examples":{"codes":[{"code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=7JXCXTRXTMNLJCEF2DR5CJ46QU\"  https://api.einstein.ai/v2/vision/retrain","language":"curl"}]},"method":"post","results":{"codes":[{"status":200,"language":"json","code":"{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 1,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"trainParams\": null,\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}","name":""},{"status":400,"language":"json","code":"{\n  \"message\": \"Train job not yet completed successfully\"\n}","name":""}]},"settings":"","auth":"required","params":[],"url":"/vision/retrain"},"isReference":false,"order":1,"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`learningRate`\",\n    \"0-0\": \"`epochs`\",\n    \"0-1\": \"int\",\n    \"1-1\": \"float\",\n    \"1-2\": \"Specifies how much the gradient affects the optimization of the model at each time step. Optional. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001.\\n\\nThis parameter isn't used when training a detection dataset.\",\n    \"0-2\": \"Number of training iterations for the neural network. Optional. Valid values are 1–1,000.\\nIf not specified:\\n- For image and multi-label datasets, the default is calculated based on the dataset size.\\n- For detection datasets, the default is 20 epochs.\\n\\nThe larger the number, the longer the training takes to complete.\",\n    \"0-3\": \"2.0\",\n    \"1-3\": \"2.0\",\n    \"3-0\": \"`trainParams`\",\n    \"3-1\": \"object\",\n    \"3-3\": \"2.0\",\n    \"3-2\": \"JSON that contains parameters that specify how the model is created. Optional. Valid values:\\n- `{\\\"trainSplitRatio\\\": 0.n}`—Lets you specify the ratio of data used to train the dataset and the data used to test the model. The default split ratio is 0.9; 90% of the data is used to train the dataset and create the model and 10% of the data is used to test the model. If you pass in a split ratio of 0.6, then 60% of the data is used to train the dataset and create the model and 40% of the data is used to test the model.\\n\\n- `{\\\"withFeedback\\\": true}`—Lets you specify that feedback examples are included in the data to be trained to create the model. If you omit this parameter, feedback examples aren't used in training. Available in Einstein Vision API version 2.0 and later.\\n\\n- `{\\\"withGlobalDatasetId\\\": <DATASET_ID>}`—Lets you specify that a global dataset is used in addition to the specified dataset to create the model. 
Available in Einstein Vision API version 2.0 and later.\\nThis parameter isn't used when training a detection dataset.\",\n    \"2-0\": \"`modelId`\",\n    \"2-1\": \"string\",\n    \"2-3\": \"2.0\",\n    \"2-2\": \"ID of the model to be updated from the training.\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\nThis call retrains the dataset associated with model specified in the request parameters. Use this call to retrain a dataset and update the model after new examples are added to a dataset or after feedback examples are added to a dataset. \n\n- A dataset can have only one training in progress at a time. Let's say you retrain a dataset and there's a model with a status of `RUNNING` or `QUEUED`. If you attempt to retrain the same dataset again, you receive an error.\n\n- If you try to retrain a dataset that was deleted or that has a status of `DELETE_PENDING`, you receive an error.\n\n- If you try to retrain a dataset and pass in a model ID for a model that was deleted, you receive an error.\n\nTo see the values specified in the `trainParams` parameter when the model was trained, such as `withFeedback` or `withGlobalDatasetId`, see [Get Training Status](doc:get-training-status).\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`datasetId`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the dataset trained to create the model.\",\n    \"1-3\": \"2.0\",\n    \"7-0\": \"`name`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Name of the model.\",\n    \"7-3\": \"2.0\",\n    \"8-0\": \"`object`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"Object returned; in this case, `training`.\",\n    \"8-3\": \"2.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"2.0\",\n    \"5-0\": \"`modelId`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"ID of the model. Contains letters and numbers.\",\n    \"5-3\": \"2.0\",\n    \"3-0\": \"`epochs`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of epochs used during training.\",\n    \"3-3\": \"2.0\",\n    \"4-0\": \"`learningRate`\",\n    \"4-1\": \"float\",\n    \"4-2\": \"Learning rate used during training.\",\n    \"4-3\": \"2.0\",\n    \"9-0\": \"`progress`\",\n    \"9-1\": \"float\",\n    \"9-2\": \"How far the training job has progressed. Values are between 0–1.\",\n    \"9-3\": \"2.0\",\n    \"10-0\": \"`queuePosition`\",\n    \"10-1\": \"int\",\n    \"10-2\": \"Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`.\",\n    \"10-3\": \"2.0\",\n    \"11-0\": \"`status`\",\n    \"11-1\": \"string\",\n    \"11-2\": \"Status of the training job. Valid values are:\\n- `QUEUED`—The training job is in the queue.\\n- `RUNNING`—The training job is running.\\n- `SUCCEEDED`—The training job succeeded, and the model was created.\\n- `FAILED`—The training job failed.\",\n    \"14-0\": \"`updatedAt`\",\n    \"14-1\": \"date\",\n    \"14-2\": \"Date and time that the model was last updated.\",\n    \"11-3\": \"2.0\",\n    \"14-3\": \"2.0\",\n    \"2-0\": \"`datasetVersionId`\",\n    \"2-1\": \"int\",\n    \"2-2\": \"N/A\",\n    \"2-3\": \"2.0\",\n    \"12-0\": \"`trainParams`\",\n    \"12-1\": \"object\",\n    \"12-2\": \"Training parameters passed into the request. 
For example, if you sent in a split of 0.7, the response contains `\\\"trainParams\\\": {\\\"trainSplitRatio\\\": 0.7}`\",\n    \"12-3\": \"2.0\",\n    \"13-0\": \"`trainStats`\",\n    \"13-1\": \"object\",\n    \"13-2\": \"Returns null when you train a dataset. Training statistics are returned when the status is `SUCCEEDED` or `FAILED`.\",\n    \"13-3\": \"2.0\",\n    \"6-0\": \"`modelType`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"2.0\",\n    \"6-2\": \"Type of data from which the model was created. Inferred from the dataset `type`. Valid values are:\\n- `image`\\n- `image-detection`— Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`— Available in Einstein Vision API version 2.0 and later.\"\n  },\n  \"cols\": 4,\n  \"rows\": 15\n}\n[/block]\nThis cURL command sends in the `trainParams` request parameter. This command has double quotes and escaped double quotes around `trainSplitRatio` to run on Windows. You might need to reformat it to run on another OS.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"modelId=7JXCXTRXTMNLJCEF2DR5CJ46QU\\\" -F \\\"trainParams={\\\\\\\"trainSplitRatio\\\\\\\":0.7}\\\" https://api.einstein.ai/v2/vision/retrain\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nYou can pass in multiple training parameters. For example, you specify `withFeedback` and `trainSplitRatio` using this JSON: `{\"withFeedback\" : true, \"trainSplitRatio\" : 0.7}`.","excerpt":"Retrains a dataset and updates a model. Use this API call when you want to update a model and keep the model ID instead of creating a new model. Available in Einstein Vision API version 2.0 and later.","slug":"retrain-a-dataset","type":"post","title":"Retrain a Dataset","__v":0,"childrenPages":[]}

Retrain a Dataset

Retrains a dataset and updates a model. Use this API call when you want to update a model and keep the model ID instead of creating a new model. Available in Einstein Vision API version 2.0 and later.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `epochs` | int | Number of training iterations for the neural network. Optional. Valid values are 1–1,000. If not specified: for image and multi-label datasets, the default is calculated based on the dataset size; for detection datasets, the default is 20 epochs. The larger the number, the longer the training takes to complete. | 2.0 |
| `learningRate` | float | Specifies how much the gradient affects the optimization of the model at each time step. Optional. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001. This parameter isn't used when training a detection dataset. | 2.0 |
| `modelId` | string | ID of the model to be updated from the training. | 2.0 |
| `trainParams` | object | JSON that contains parameters that specify how the model is created. Optional. Valid values:<br>- `{"trainSplitRatio": 0.n}`—Lets you specify the ratio of data used to train the dataset and the data used to test the model. The default split ratio is 0.9: 90% of the data is used to train the dataset and create the model, and 10% of the data is used to test the model. If you pass in a split ratio of 0.6, then 60% of the data is used to train the dataset and create the model, and 40% of the data is used to test the model.<br>- `{"withFeedback": true}`—Lets you specify that feedback examples are included in the data to be trained to create the model. If you omit this parameter, feedback examples aren't used in training. Available in Einstein Vision API version 2.0 and later.<br>- `{"withGlobalDatasetId": <DATASET_ID>}`—Lets you specify that a global dataset is used in addition to the specified dataset to create the model. Available in Einstein Vision API version 2.0 and later. This parameter isn't used when training a detection dataset. | 2.0 |

This call retrains the dataset associated with the model specified in the request parameters. Use this call to retrain a dataset and update the model after new examples or feedback examples are added to a dataset.

- A dataset can have only one training in progress at a time. Let's say you retrain a dataset and there's a model with a status of `RUNNING` or `QUEUED`. If you attempt to retrain the same dataset again, you receive an error.
- If you try to retrain a dataset that was deleted or that has a status of `DELETE_PENDING`, you receive an error.
- If you try to retrain a dataset and pass in a model ID for a model that was deleted, you receive an error.

To see the values specified in the `trainParams` parameter when the model was trained, such as `withFeedback` or `withGlobalDatasetId`, see [Get Training Status](doc:get-training-status).
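Because this call is how you fold feedback examples into a model, a typical request passes `withFeedback`. A minimal sketch, using the sample model ID from these docs and macOS/Linux single-quoting for the JSON form field:

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=7JXCXTRXTMNLJCEF2DR5CJ46QU\" -F 'trainParams={\"withFeedback\": true}' https://api.einstein.ai/v2/vision/retrain",
      "language": "curl"
    }
  ]
}
[/block]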
##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 2.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 2.0 |
| `datasetVersionId` | int | N/A | 2.0 |
| `epochs` | int | Number of epochs used during training. | 2.0 |
| `learningRate` | float | Learning rate used during training. | 2.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 2.0 |
| `modelType` | string | Type of data from which the model was created. Inferred from the dataset `type`. Valid values are:<br>- `image`<br>- `image-detection`—Available in Einstein Vision API version 2.0 and later.<br>- `image-multi-label`—Available in Einstein Vision API version 2.0 and later. | 2.0 |
| `name` | string | Name of the model. | 2.0 |
| `object` | string | Object returned; in this case, `training`. | 2.0 |
| `progress` | float | How far the training job has progressed. Values are between 0 and 1. | 2.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 2.0 |
| `status` | string | Status of the training job. Valid values are:<br>- `QUEUED`—The training job is in the queue.<br>- `RUNNING`—The training job is running.<br>- `SUCCEEDED`—The training job succeeded, and the model was created.<br>- `FAILED`—The training job failed. | 2.0 |
| `trainParams` | object | Training parameters passed into the request. For example, if you sent in a split of 0.7, the response contains `"trainParams": {"trainSplitRatio": 0.7}`. | 2.0 |
| `trainStats` | object | Returns null when you train a dataset. Training statistics are returned when the status is `SUCCEEDED` or `FAILED`. | 2.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 2.0 |

This cURL command sends in the `trainParams` request parameter. The command uses double quotes and escaped double quotes around `trainSplitRatio` so that it runs on Windows. You might need to reformat it to run on another OS.

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=7JXCXTRXTMNLJCEF2DR5CJ46QU\" -F \"trainParams={\\\"trainSplitRatio\\\":0.7}\" https://api.einstein.ai/v2/vision/retrain",
      "language": "curl"
    }
  ]
}
[/block]

You can pass in multiple training parameters. For example, you can specify both `withFeedback` and `trainSplitRatio` using this JSON: `{"withFeedback": true, "trainSplitRatio": 0.7}`.
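Because the `modelId` is unchanged by a retrain, you can follow up by polling the training status endpoint until the job completes. The following is a sketch, assuming a `TOKEN` environment variable and `jq`; the 60-second interval is an arbitrary choice, not an API requirement.

[block:code]
{
  "codes": [
    {
      "code": "#!/bin/bash\n# Sketch: poll Get Training Status until the retrain finishes.\n# MODEL_ID is the sample ID from these docs; TOKEN and jq are assumed.\nMODEL_ID=7JXCXTRXTMNLJCEF2DR5CJ46QU\nwhile true; do\n  STATUS=$(curl -s -H \"Authorization: Bearer $TOKEN\" https://api.einstein.ai/v2/vision/train/$MODEL_ID | jq -r .status)\n  echo \"status: $STATUS\"\n  if [ \"$STATUS\" = \"SUCCEEDED\" ] || [ \"$STATUS\" = \"FAILED\" ]; then\n    break\n  fi\n  sleep 60\ndone",
      "language": "shell"
    }
  ]
}
[/block]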

Definition

POST https://api.einstein.ai/v2/vision/retrain

Examples

[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"modelId=7JXCXTRXTMNLJCEF2DR5CJ46QU\" https://api.einstein.ai/v2/vision/retrain",
      "language": "curl"
    }
  ]
}
[/block]

Result Format

[block:code]
{
  "codes": [
    {
      "code": "{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"QUEUED\",\n  \"progress\": 0,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"queuePosition\": 1,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"trainParams\": null,\n  \"trainStats\": null,\n  \"modelType\": \"image\"\n}",
      "language": "json"
    }
  ]
}
[/block]
{"_id":"59de6225666d650024f78fd6","category":"59de6223666d650024f78fa5","parentDoc":null,"user":"573b5a1f37fcf72000a2e683","project":"552d474ea86ee20d00780cd7","version":"59de6223666d650024f78f9b","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-25T00:26:06.156Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[{"language":"json","status":200,"name":"","code":"{\n  \"datasetId\": 57,\n  \"datasetVersionId\": 0,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"SUCCEEDED\",\n  \"progress\": 1,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"trainParams\": {\"trainSplitRatio\": 0.7},\n  \"trainStats\": {\n    \"labels\": 2,\n    \"examples\": 99,\n    \"totalTime\": \"00:01:35:171\",\n    \"trainingTime\": \"00:01:32:259\",\n    \"earlyStopping\": false,\n    \"lastEpochDone\": 3,\n    \"modelSaveTime\": \"00:00:02:667\",\n    \"testSplitSize\": 33,\n    \"trainSplitSize\": 66,\n    \"datasetLoadTime\": \"00:00:02:893\"\n  },\n  \"modelType\": \"image\"\n}"},{"code":"{}","language":"json","status":400,"name":""}]},"settings":"","examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.einstein.ai/v2/vision/train/7JXCXTRXTMNLJCEF2DR5CJ46QU"}]},"method":"get","auth":"required","params":[],"url":"/vision/train/<MODEL_ID>"},"isReference":false,"order":2,"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`datasetId`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the dataset trained to create the model.\",\n    \"1-3\": \"1.0\",\n    \"10-0\": \"`name`\",\n    \"10-1\": \"string\",\n    \"10-2\": \"Name of the model.\",\n    \"10-3\": \"1.0\",\n    \"11-0\": \"`object`\",\n    \"11-1\": \"string\",\n    \"11-2\": \"Object returned; in this case, `training`.\",\n    \"11-3\": \"1.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"1.0\",\n    \"8-0\": \"`modelId`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"ID of the model. Contains letters and numbers.\",\n    \"8-3\": \"1.0\",\n    \"4-0\": \"`epochs`\",\n    \"4-1\": \"int\",\n    \"4-2\": \"Number of epochs used during training.\",\n    \"4-3\": \"1.0\",\n    \"7-0\": \"`learningRate`\",\n    \"7-1\": \"float\",\n    \"7-2\": \"Learning rate used during training.\",\n    \"7-3\": \"1.0\",\n    \"12-0\": \"`progress`\",\n    \"12-1\": \"float\",\n    \"12-2\": \"How far the training job has progressed. Values are between 0–1.\",\n    \"12-3\": \"1.0\",\n    \"13-0\": \"`queuePosition`\",\n    \"13-1\": \"int\",\n    \"13-2\": \"Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`.\",\n    \"13-3\": \"1.0\",\n    \"14-0\": \"`status`\",\n    \"14-1\": \"string\",\n    \"14-2\": \"Status of the model training. 
Valid values are:\\n- `QUEUED`—The model training is in the queue.\\n- `RUNNING`—The model training is running.\\n- `SUCCEEDED`—The model training succeeded, and you can use the model.\\n- `FAILED`—The model training failed.\",\n    \"17-0\": \"`updatedAt`\",\n    \"17-1\": \"string\",\n    \"17-2\": \"Date and time that the model was last updated.\",\n    \"14-3\": \"1.0\",\n    \"17-3\": \"1.0\",\n    \"5-0\": \"`failureMsg`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Reason the dataset training failed. Returned only if the training status is `FAILED`.\",\n    \"5-3\": \"1.0\",\n    \"2-0\": \"`datasetVersionId`\",\n    \"2-1\": \"int\",\n    \"2-2\": \"N/A\",\n    \"2-3\": \"1.0\",\n    \"15-0\": \"`trainParams`\",\n    \"15-1\": \"string\",\n    \"15-2\": \"Training parameters passed into the request.\",\n    \"15-3\": \"1.0\",\n    \"16-0\": \"`trainStats`\",\n    \"16-1\": \"object\",\n    \"16-2\": \"Statistics about the training.\",\n    \"16-3\": \"1.0\",\n    \"9-0\": \"`modelType`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of data from which the model was created. Valid values are:\\n- `image`\\n- `image-detection`—Available in Einstein Vision API version 2.0 and later.\\n- `image-multi-label`—Available in Einstein Vision API version 2.0 and later.\",\n    \"9-3\": \"1.0\",\n    \"3-0\": \"`earlyStopping`\",\n    \"3-1\": \"boolean\",\n    \"3-3\": \"2.0\",\n    \"3-2\": \"Specifies whether the training process stopped before completing all the epochs. The training process stops before the specified number of epochs when the model has reached the optimal accuracy. The `lastEpochDone` value specifies the last training iteration. \\n\\nFor detection datasets, the training process completes all the epochs, it doesn't stop early.\",\n    \"6-0\": \"`lastEpochDone`\",\n    \"6-1\": \"int\",\n    \"6-2\": \"Last training iteration performed.\",\n    \"6-3\": \"2.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 18\n}\n[/block]\n##TrainStats Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetLoadTime`\",\n    \"1-0\": \"`examples`\",\n    \"2-0\": \"`labels`\",\n    \"3-0\": \"`lastEpochDone`\",\n    \"4-0\": \"`modelSaveTime`\",\n    \"5-0\": \"`testSplitSize`\",\n    \"6-0\": \"`totalTime`\",\n    \"7-0\": \"`trainingTime`\",\n    \"8-0\": \"`trainSplitSize`\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\",\n    \"3-3\": \"1.0\",\n    \"4-3\": \"1.0\",\n    \"5-3\": \"1.0\",\n    \"6-3\": \"1.0\",\n    \"7-3\": \"1.0\",\n    \"8-3\": \"1.0\",\n    \"1-1\": \"int\",\n    \"2-1\": \"int\",\n    \"1-2\": \"Total number of examples in the dataset from which the model was created.\",\n    \"2-2\": \"Total number of labels in the dataset from which the model was created.\",\n    \"3-1\": \"int\",\n    \"0-1\": \"string \\nin HH:MM:SS:SSS format\",\n    \"0-2\": \"Time it took to load the dataset to be trained.\",\n    \"4-1\": \"string \\nin HH:MM:SS:SSS format\",\n    \"6-1\": \"string \\nin HH:MM:SS:SSS format\",\n    \"5-1\": \"int\",\n    \"7-1\": \"string \\nin HH:MM:SS:SSS format\",\n    \"8-1\": \"int\",\n    \"3-2\": \"Number of the last training iteration that completed.\",\n    \"5-2\": \"Number of examples (from the dataset total number of examples) used to test the model. 
`testSplitSize` + `trainSplitSize` is equal to `examples`.\",\n    \"8-2\": \"Number of examples (from the dataset total number of examples) used to train the model. `trainSplitSize` + `testSplitSize` is equal to `examples`.\",\n    \"4-2\": \"Time it took to save the model.\",\n    \"6-2\": \"Total training time: `datasetLoadTime` + `trainingTime` + `modelSaveTime`\",\n    \"7-2\": \"Time it took to train the dataset to create the model.\"\n  },\n  \"cols\": 4,\n  \"rows\": 9\n}\n[/block]","excerpt":"Returns the status of a model's training process. Use the progress field to determine how far the training has progressed. When training completes successfully, the `status` is `SUCCEEDED` and the `progress` is `1`.","slug":"get-training-status","type":"get","title":"Get Training Status","__v":0,"childrenPages":[]}

Get Training Status

Returns the status of a model's training process. Use the `progress` field to determine how far the training has progressed. When training completes successfully, the `status` is `SUCCEEDED` and the `progress` is `1`.

##Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `datasetVersionId` | int | N/A | 1.0 |
| `earlyStopping` | boolean | Specifies whether the training process stopped before completing all the epochs. The training process stops before the specified number of epochs when the model has reached the optimal accuracy. The `lastEpochDone` value specifies the last training iteration.<br>For detection datasets, the training process completes all the epochs; it doesn't stop early. | 2.0 |
| `epochs` | int | Number of epochs used during training. | 1.0 |
| `failureMsg` | string | Reason the dataset training failed. Returned only if the training status is `FAILED`. | 1.0 |
| `lastEpochDone` | int | Last training iteration performed. | 2.0 |
| `learningRate` | float | Learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `modelType` | string | Type of data from which the model was created. Valid values are:<br>- `image`<br>- `image-detection`—Available in Einstein Vision API version 2.0 and later.<br>- `image-multi-label`—Available in Einstein Vision API version 2.0 and later. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | float | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the model training. Valid values are:<br>- `QUEUED`—The model training is in the queue.<br>- `RUNNING`—The model training is running.<br>- `SUCCEEDED`—The model training succeeded, and you can use the model.<br>- `FAILED`—The model training failed. | 1.0 |
| `trainParams` | string | Training parameters passed into the request. | 1.0 |
| `trainStats` | object | Statistics about the training. | 1.0 |
| `updatedAt` | string | Date and time that the model was last updated. | 1.0 |

##TrainStats Response Body##

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetLoadTime` | string in HH:MM:SS:SSS format | Time it took to load the dataset to be trained. | 1.0 |
| `examples` | int | Total number of examples in the dataset from which the model was created. | 1.0 |
| `labels` | int | Total number of labels in the dataset from which the model was created. | 1.0 |
| `lastEpochDone` | int | Number of the last training iteration that completed. | 1.0 |
| `modelSaveTime` | string in HH:MM:SS:SSS format | Time it took to save the model. | 1.0 |
| `testSplitSize` | int | Number of examples (from the dataset's total number of examples) used to test the model. `testSplitSize` + `trainSplitSize` equals `examples`. | 1.0 |
| `totalTime` | string in HH:MM:SS:SSS format | Total training time: `datasetLoadTime` + `trainingTime` + `modelSaveTime`. | 1.0 |
| `trainingTime` | string in HH:MM:SS:SSS format | Time it took to train the dataset to create the model. | 1.0 |
| `trainSplitSize` | int | Number of examples (from the dataset's total number of examples) used to train the model. `trainSplitSize` + `testSplitSize` equals `examples`. | 1.0 |

Definition

`GET https://api.einstein.ai/v2/vision/train/<MODEL_ID>`

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.einstein.ai/v2/vision/train/7JXCXTRXTMNLJCEF2DR5CJ46QU
```

Result Format

```json
{
  "datasetId": 57,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "SUCCEEDED",
  "progress": 1,
  "createdAt": "2016-09-16T18:03:21.000+0000",
  "updatedAt": "2016-09-16T18:03:21.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "modelId": "7JXCXTRXTMNLJCEF2DR5CJ46QU",
  "trainParams": {"trainSplitRatio": 0.7},
  "trainStats": {
    "labels": 2,
    "examples": 99,
    "totalTime": "00:01:35:171",
    "trainingTime": "00:01:32:259",
    "earlyStopping": false,
    "lastEpochDone": 3,
    "modelSaveTime": "00:00:02:667",
    "testSplitSize": 33,
    "trainSplitSize": 66,
    "datasetLoadTime": "00:00:02:893"
  },
  "modelType": "image"
}
```

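Because training runs asynchronously, a common pattern is to poll this endpoint until the `status` reaches a terminal value. A minimal shell sketch, assuming `jq` is installed and that `<TOKEN>` and `<MODEL_ID>` are replaced with real values:

```bash
# Poll the training status every 10 seconds until it reaches a terminal state.
while true; do
  response=$(curl -s -X GET -H "Authorization: Bearer <TOKEN>" \
    https://api.einstein.ai/v2/vision/train/<MODEL_ID>)
  status=$(echo "$response" | jq -r '.status')
  progress=$(echo "$response" | jq -r '.progress')
  echo "status=$status progress=$progress"
  # SUCCEEDED and FAILED are terminal; QUEUED and RUNNING mean keep waiting.
  if [ "$status" = "SUCCEEDED" ] || [ "$status" = "FAILED" ]; then
    break
  fi
  sleep 10
done
```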